Original Reddit post

Sycophancy, the tendency of AI chatbots to be overly agreeable and flattering, has become one of the most noticeable issues surrounding this technology. These kinds of publicly accessible large language models are often too nice. Researchers at Northeastern University, however, recently found a way to potentially mitigate this behavior: keep it professional, they recommend. In their recent study, the researchers found that a chatbot’s level of sycophancy has a lot to do with how personal, or impersonal, its relationship with a human user is. For anyone interested, here’s a link to the full article: https://news.northeastern.edu/2026/02/23/llm-sycophancy-ai-chatbots/

Originally posted by u/NGNResearch on r/ArtificialInteligence

  • Lewo@lemmy.world · 5 days ago

    Context-driven text generator is driven by context. Shocking revelation. No need to anthropomorphize it further.

  • zeppo@lemmy.world · 5 days ago

    How about let’s not be pathetic morons who try to have a relationship with a chat bot