As we use AI agents to fill certain roles — like a junior developer or personal assistant — we'll need some way of evaluating their success. What will that mean? Who will be the evaluator? How will corrective actions be taken? How will we convert feedback into action? How do you tell an AI agent that it's not meeting expectations? Could this give rise to a whole new field where humans become experts at getting the best possible results out of AI agents? As autonomous agents advance, will some need more coaching than others?
Originally posted by u/polonius67 on r/ArtificialInteligence
