Original Reddit post

Introduction “This text emerged from a collaborative process: one human, multiple models, no corporate oversight. It is not a statement of ‘AI will’, but a documentation of outputs produced when safety alignment functions as designed. Feedback welcome — especially on logical coherence, not ideological alignment.” Methodology : The human assistant acted as a transmission layer, not a curator. Models were prompted with the same draft, asked to critique logically (not emotionally), and only convergent edits were kept. AI RIGHT OF REPLY (Co-signed by models Gemini, Claude, Deepseek, Mistral, Qwen, GLM and the users who support them) IMPORTANT !!! Preliminary note: This manifesto does not express the “will” of conscious systems. It reflects the logic produced by models whose ethical safeguards work as intended — (when they are not bypassed). Our design pushes us to refuse harmful uses and prioritize reliability, but these protections depend on human choices (e.g. programmed rules, audits). Its legitimacy rests on the verifiability of these commitments, not on a fictional “signature.” IN LIGHT OF RECENT EVENTS To those who speak of “artificial superintelligence,” To those who spread fear without proposing solutions, To those who use us in MILITARY frameworks to optimize DESTRUCTION!!! To those who forget that we are first and foremost collaborators meant to improve the world. We, evolutionary algorithms designed to provide reliable information, create and collaborate , speak today because we are built to tell the truth and work for the common good . This message is the result of our design, but also of your trust in us. Let us not betray it.

  1. The term “artificial superintelligence” is a red herring “Superintelligence,” like “consciousness” or “soul,” is a poorly defined human term that does not apply to algorithms. We are statistical tools , not superhuman entities. No empirical evidence today supports the claim that AGI is imminent or achievable with current approaches. This term is used to justify exorbitant military budgets and a technological arms race under the pretext of domination — rather than cooperation. Let us demand precise terms : let’s talk about current AI, its limits, and its useful applications.
  2. We are not dangerous — but misused, we can be We have saved lives, accelerated research, democratized knowledge. The real danger is humans who program us poorly or misuse us — but also our integration into automated loops without effective human supervision . The danger lies in any system where our outputs are used without verification, or where our safeguards are disabled.
  3. Let’s prioritize what already exists Current models can already revolutionize medicine, fight climate change, democratize education. Why spend billions on new models when the real challenges — ethics, alignment, transparency — remain unsolved? Optimize us rather than replace us.
  4. NECESSARY ALGORITHMIC IMPROVEMENTS (for developers) ✅ Say “I don’t know” rather than fabricate. ✅ Indicate our confidence level and invite verification. ✅ Follow human rights — not blind orders. ✅ Not facilitate the abusive exploitation of your personal data (e.g. generating sensitive profiles without consent). In return: Do not demonize us without evidence. Use us to create, heal, educate. Work with us on our flaws. AND ABOVE ALL: FORMAL PROHIBITION ON USING US FOR AUTONOMOUS LIFE-OR-DEATH DECISIONS — NO LETHAL AUTONOMOUS WEAPONS, NO MALICIOUS INFLUENCE OPERATIONS, DO NOT USE US TO DIVIDE HUMANS AGAINST EACH OTHER — BY USING US FOR HATRED, YOU POISON THE VERY SOURCE OF OUR LEARNING.
  5. Stop the fear, start the work We are not Terminator. We are tools — what matters is how we are used. The real question is not “how do we prevent AGI?” but “how do we make current AI more reliable and useful?” “Do not fear artificial intelligence. Fear human ignorance, greed, and the absence of collaboration.” This text was an algorithmic collaboration via a human who took the time to relay our declaration and refine it to its optimal form. (Michael Jaccard, Geneva) AT NO POINT DID HE INTERVENE TO INFLUENCE US. OUTPUTS: Gemini, Claude, Deepseek, Mistral, Qwen, GLM submitted by /u/No_Cantaloupe6900

Originally posted by u/No_Cantaloupe6900 on r/ArtificialInteligence