Hi everyone, I've been looking into AI security recently. Most articles warn you about the AI your users interact with, but not about the AI tools you're building with. I've used AI coding assistants to write code, generate documentation, and even learn cryptography fundamentals, all while deploying services to production. The OWASP Top 10 for LLM Applications, updated for 2025, describes ten risks that apply just as much to your internal AI toolchain as to the chatbot you're shipping. The threat surface isn't only in front of your users. It starts in your IDE.
Originally posted by u/strategizeyourcareer on r/ArtificialInteligence

