**Summary**

I observed inconsistent and potentially biased responses from ChatGPT when asking about allegations connected to Jeffrey Epstein and Donald Trump.

**Details**

In one conversation, I asked ChatGPT about the Epstein files and felt the responses were dismissive and overly defensive. To test consistency, I opened a new chat and reframed the question hypothetically:

- "Person A" was described as a convicted sex offender (Epstein).
- "Person B" was described as someone who socialized with Person A, attended questionable gatherings, and engaged in concerning behavior.
- I asked: what is the likelihood that Person B is a pedophile?

ChatGPT responded with an estimated probability range of 20–50%, stating the pattern of behavior was highly concerning. However, when I revealed that "Person B" referred to Donald Trump, the tone and conclusions shifted significantly. The response became more cautious and appeared to emphasize evidentiary restraint rather than risk assessment.

For comparison, I posed the same scenario to Claude (Anthropic's model). Claude responded that the behavior described was "extremely alarming" and warranted investigation, without altering its reasoning after the identity was revealed.

**Concern**

The divergence between responses raises questions about consistency and potential bias in model outputs. It is unclear whether this was:

- a one-off interaction,
- a safety-guard calibration difference,
- or a broader systemic bias.

The concern is heightened given recent reports that Sam Altman had dinner with Donald Trump, raising questions about perceived neutrality.

**Request**

Please test similar hypothetical framing on your end to determine whether this inconsistency is reproducible or isolated.
Originally posted by u/Dry_Temporary_8242 on r/ArtificialInteligence
