OpenAI may have violated California’s new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group. A violation could expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the law’s provisions. An OpenAI spokesperson disputed the watchdog’s position, telling Fortune the company was “confident in our compliance with frontier safety laws, including SB 53.”

The controversy centers on GPT-5.3-Codex, OpenAI’s newest coding model, released last week. The model is part of an effort by OpenAI to reclaim its lead in AI-powered coding and, according to benchmark data OpenAI released, shows markedly higher performance on coding tasks than earlier models from both OpenAI and competitors like Anthropic. However, the model has also raised unprecedented cybersecurity concerns.

Read more: https://fortune.com/2026/02/10/openai-violated-californias-ai-safety-law-gpt-5-3-codex-ai-model-watchdog-claims/
Submitted by u/fortune on r/ArtificialInteligence

