Hi everyone, I have a question for those who work in AI research or closely follow the field. I keep hearing strong claims that LLMs will replace many jobs end to end. I have a hard time buying that based on my experience as an end user. My impression is that these models are powerful assistants, but they still struggle with long-horizon tasks and consistent execution. Some things I keep noticing:
- They can be impressive on short tasks, but degrade over longer multi-step work
- They make basic mistakes that a careful human would not make
- They can sound confident while being wrong
- They need constant checking, which makes full autonomy feel unrealistic

Because of that, I see LLMs evolving into something like a very advanced coding and knowledge tool, not a full replacement for people: more like increasing productivity and raising competition in the workforce than fully removing the need for humans.

For people who are actually working in AI research or building these systems, what is your take?

- Do you think there is a fundamental ceiling here, or do you expect reliability to improve a lot?
- What do you think is the biggest bottleneck right now: data, compute, algorithms, or something else?
- In your view, what is the realistic timeline for meaningful job replacement in tech, if any?

Would love to hear opinions from people who have hands-on experience with these models.
Originally posted by u/more_muscle_aim on r/ArtificialInteligence
