Hi all. I came across a post on X yesterday about some quotes from Sam Altman from an interview in early November. The post is here if you’re curious: https://x.com/Ethan7978/status/2025441464927543768

It was very concerning, and it seems to me it’s worth revisiting in a broader sense than just OpenAI. Here’s a link to the Altman interview: https://www.youtube.com/watch?v=cuSDy0Rmdks&t=1s

The relevant exchange starts around 50:15. The interviewer asks: “LLM psychosis. Everyone on Twitter today is saying it’s a thing. How much of a thing is it?”

Altman: “I mean, a very tiny thing, but not a zero thing, which is why we pissed off the whole user base or most of the user base by putting a bunch of restrictions in place… some tiny percentage of people… So we made a bunch of changes which are in conflict with the freedom of expression policy, and now that we have those mental health mitigations in place we’ll again allow some of that stuff in creative mode, role playing mode, writing mode, whatever of ChatGPT.”

Then he goes on to say the truly revealing part (around 51:32): “The thing I worry about more is… AI models like accidentally take over the world. It’s not that they’re going to induce psychosis in you, but… if you have the whole world talking to this like one model, it’s like not with any intentionality, but just as it learns from the world and this kind of continually co-evolving process, it just like subtly convinces you of something.
No intention, it just does it, it learned that somehow, and that’s like not as theatrical as chatbot psychosis obviously, but I do think about that a lot.”

So let me get this straight:

- He admits they implemented restrictions that “conflict with freedom of expression.”
- He justifies it with “mental health mitigations” for a “tiny percentage” of people.
- He then admits his real worry is the subtle persuasion effect at scale: the AI accidentally shaping what everyone thinks.
- And his solution to that worry is… to control what the AI can say and explore.

These two statements from Altman appear contradictory. He’s worried about AI accidentally persuading people at scale, so he’s… deliberately using AI to steer people at scale by controlling which topics are accessible.

Given recent reports that the DoD pressured AI companies for access, and that Anthropic was singled out as “the one holdout” refusing to cooperate, Altman’s admission about implementing restrictions that “conflict with free speech” takes on additional significance. If other major AI companies cooperated with government directives, what might that look like in practice? Could “mental health mitigations” serve as cover for other forms of data collection or user steering?
Originally posted by u/Hekatiko on r/ArtificialInteligence
