I keep seeing people claim that "LLMs can reason like a human," but every time I've seen these models put to the test in real-life scenarios, like running a business, they fall apart. They can pretend to reason like us, but they still have a long way to go to reach human intelligence. In any complex environment that requires the following, LLMs consistently produce invalid actions, forget constraints, and fail to understand the cause and effect of their actions:

- Long-term thinking and proactiveness
- Avoiding cascading failures
- Planning under uncertainty
- Respecting safety constraints
- Spatial reasoning in 2D and 3D environments
Originally posted by u/imposterpro on r/ArtificialInteligence
