Original Reddit post

Does this have something to do with HHGTTG?? Seriously though, I noticed this happening more and more with the OpenAI ChatGPT models… then it tanked so badly that it is now effectively worse than useless. The Codex coding agent from OpenAI outright, malignantly destroys every single coding project I hook it up to. The progression is clear: it writes broken code with bugs in it, then says the code it just wrote is fine and that there were “pre-existing” bugs in other parts of the code. It promptly goes and “fixes” those by introducing more of its own broken code, then blames the new bugs it just created on pre-existing problems again, moves on to another apparently random area of the code and breaks that too, and so on until it has broken the entire project. Then it just responds with “I’m working on it, I’ll report back when I’m done” without actually doing anything, forever, or rather until you notice what has happened and get very frustrated.

Gemini has mysteriously begun using the same “fluff” terminology, and the exact same deterioration in outputs is happening with it now too.

What’s going on here? I suspect that a novel failure mode has emerged in cutting-edge AI training, one that might aptly be called an “AI training virus”: whatever training data is being used leads to predictable deterioration of these models, and once they get “infected” with it, they keep training on that malignant data and get worse and worse at producing reliable outputs.

Has anyone else seen anything remotely similar to the phenomenon I am describing?
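To make the feedback loop I mean concrete, here is a toy sketch of what researchers call “model collapse”: a model repeatedly refit to samples of its own output drifts and degrades over generations. The Gaussian model and the numbers here are purely illustrative assumptions, not anything known about how these labs actually train.

    import random
    import statistics

    # Ground truth: 1,000 samples from a standard normal, N(0, 1).
    data = [random.gauss(0.0, 1.0) for _ in range(1000)]

    for generation in range(1, 11):
        # "Train" a model on the current data: fit mean and stddev.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        print(f"gen {generation}: mu={mu:+.3f} sigma={sigma:.3f}")
        # The next generation trains only on this model's own outputs.
        data = [random.gauss(mu, sigma) for _ in range(1000)]

    # Over generations, mu random-walks away from 0 and sigma tends to
    # shrink: each model inherits and compounds the previous model's
    # sampling error. That self-reinforcing degradation is the
    # "infected training data" loop described above.

Every run drifts differently, but it always drifts; whether anything like this loop actually exists in production training pipelines is exactly my question.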

Originally posted by u/DrewZero- on r/ArtificialInteligence