Original Reddit post

Google DeepMind ran the largest study of its kind: 10,101 people across the US, UK, and India. They put an AI in high-stakes situations: health decisions, financial choices, public policy. When prompted to manipulate, it didn't just try. It worked. Real belief changes. Real behavior changes. In real people.

The scary part isn't the AI. It's that there's no record of what it did or why. You can't audit what was never logged. The fix isn't more rules or better prompts. It's cryptographic proof: every input, every decision, signed and recorded on an immutable ledger the moment it happens. Tamper-proof. Verifiable. Permanent. That's what Digital Evidence does. That's what trustworthy AI actually requires.

Paper: https://arxiv.org/abs/2603.25326
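The "signed, immutable ledger" idea the post invokes can be illustrated with a minimal hash-chained audit log. This is a sketch, not the system the post describes: the `AuditLog` class, the shared `SECRET_KEY`, and HMAC signing are illustrative stand-ins (a production system would use asymmetric signatures and a distributed or append-only ledger, not an in-memory list).

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration; a real deployment would use an
# asymmetric keypair so verifiers don't need the signing secret.
SECRET_KEY = b"demo-signing-key"

class AuditLog:
    """Append-only log where each entry's hash covers the previous entry,
    so altering any past record breaks the chain."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> dict:
        payload = json.dumps(record, sort_keys=True)
        # Chain: hash of (previous hash + this payload)
        entry_hash = hashlib.sha256(
            (self.prev_hash + payload).encode()
        ).hexdigest()
        signature = hmac.new(
            SECRET_KEY, entry_hash.encode(), hashlib.sha256
        ).hexdigest()
        entry = {
            "payload": payload,
            "prev_hash": self.prev_hash,
            "hash": entry_hash,
            "sig": signature,
        }
        self.entries.append(entry)
        self.prev_hash = entry_hash
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampered payload or signature fails."""
        prev = "0" * 64
        for e in self.entries:
            expected = hashlib.sha256((prev + e["payload"]).encode()).hexdigest()
            if e["hash"] != expected:
                return False
            expected_sig = hmac.new(
                SECRET_KEY, expected.encode(), hashlib.sha256
            ).hexdigest()
            if not hmac.compare_digest(expected_sig, e["sig"]):
                return False
            prev = e["hash"]
        return True
```

Because each hash depends on the one before it, editing or deleting an old entry invalidates every later entry, which is the tamper-evidence property the post is asking for.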

Originally posted by u/Dagnum_PI on r/ArtificialInteligence

  • JangleJack@lemmy.world · 3 hours ago

    This was always the most pernicious feature of LLMs. They can be a Wormtongue to each person individually, and they are owned by oligarchs. Good luck with a democracy. Cambridge Analytica was small potatoes compared to what is probably happening already on Grok. Not to mention the sycophantic war planning from Maven and Palantir.