Original Reddit post

There’s a massive trend right now where tech companies, businesses, and researchers are trying to replace real human feedback with Large Language Models (LLMs), so-called synthetic participants/users. The idea sounds great: why spend money and time recruiting real people to take surveys, test apps, or give opinions when you can just prompt ChatGPT to pretend to be a thousand different customers? A new systematic literature review analyzing 182 research papers just dropped, examining whether these “synthetic participants” can actually simulate humans. The short answer? They are bad at representing human cognition and behavior.

Originally posted by u/Complete_Answer on r/ArtificialInteligence