Original Reddit post

Over the past couple of years I’ve become more aware of how advances in technology and the spread of social media have made it easier to run psyops and behavioral manipulation. Add in how algorithms are used to shape thinking by exploiting emotions, creating echo chambers, and engineering attention around dopamine hits, and our thoughts are more vulnerable than a lot of us want to believe. (Don’t get me started on TikTok.)

Now take AI: it can mimic voices and create deepfake videos with uncanny resemblance to real people, and I fear that in a couple of years we’ll be in deep trouble discerning what is true and what isn’t. There are already examples of cyberattacks where deepfakes of executives or stakeholders were used in attempts to scam businesses out of millions. Don’t get me wrong, AI-generated content is still easy to detect right now if someone knows what to look or listen for. But even so, today’s deepfakes are a vast improvement over what they were two years ago, so they’re only going to get better.

What are ways to counteract this? Are there jobs within the AI, Computer Science, or Cyber Security fields that actively work to detect, mitigate, and regulate this? I’m aware of AI Governance / Ethics, but I’m unsure whether this falls under their wheelhouse or whether Cyber Threat Intelligence runs into this sort of thing. Hopefully this isn’t coming off as conspiratorial, haha.

Originally posted by u/SwitchJumpy on r/ArtificialInteligence