Original Reddit post

I’d really appreciate some honest input from people already working in security. I’m currently a senior AI engineer building end-to-end agentic AI systems LLM integrations, tool-using agents, backend infrastructure, deployment, etc. I’m self-taught (no formal degree), but I’ve built my career from the ground up because I genuinely love this field. I work at a company in New Zealand (Remotely from Islamabad, Pakistan), and I’m heavily relied upon for both engineering and system-level decisions. I mention this only to clarify that I’m not experimenting casually this would be a serious long-term career move. Here’s what’s been on my mind: With the rise of AI-assisted development and “vibe coding,” I’m seeing a surge in insecure AI systems prompt injection risks, exposed API keys, unsafe tool execution, unvalidated outputs, data leakage, weak threat modeling, etc. The AI attack surface feels like it’s expanding faster than the security expertise around it. I’m considering shifting my primary focus toward: • AI application security • LLM security & red teaming • Securing agentic workflows • AI system threat modeling • AI-focused penetration testing Instead of just building systems, I’d specialize in breaking and securing them. Questions for those in security:

  1. Is AI Security / AI AppSec likely to become a distinct long-term specialization, or will it just merge into traditional AppSec? 2. From a career standpoint, would it be smarter to double down on AI engineering while layering security knowledge — or pivot more fully? 3. Are companies actively hiring AI security specialists yet, or is this still early-stage? 4. If you were in my position, how would you transition strategically without losing momentum? I’m thinking 5–10 years ahead, not chasing hype. I want to build depth in a field that compounds in value as AI adoption increases. Appreciate any honest perspectives. submitted by /u/Nietzsche-og

Originally posted by u/Nietzsche-og on r/ArtificialInteligence