In January 2024, an employee at Arup - the engineering firm behind the Sydney Opera House - joined a video call with his CFO and several colleagues. He recognised their faces. He heard their voices. He made fifteen wire transfers totalling $25.6 million. A week later the fraud surfaced: every person on that call had been generated by AI. There was nobody on the other side of the screen.

That same year, a cloned Biden voice told New Hampshire voters to stay home from the primary. Taylor Swift’s face appeared in pornographic images viewed 47 million times before takedown. Italy’s Prime Minister discovered deepfake porn of herself circulating on American servers, made by a 40-year-old man and his 73-year-old father.

Most of the conversation stops here, at the “isn’t this terrifying” stage. I wanted to go further and ask a harder question: what actually happens when someone uses a deepfake not to commit fraud or spread political disinformation, but to run a commercial ad with your face in it, recommending a competitor’s product? Words you never said, gestures you never made, an endorsement you never gave - distributed deliberately, at scale, to your clients.

It turns out this scenario sits at the intersection of five separate areas of law - copyright, personal rights, unfair competition, data protection, and criminal law - and none of them alone covers the full damage. The article maps how they interact and why the sequencing matters.

It also surveys the 200+ deepfake-specific laws passed globally since 2023. Five very different models are emerging: the EU’s AI Act approach, China’s state-controlled watermarking regime, South Korea’s criminal sanctions (up to 7 years for creating sexual deepfakes, up to 3 years for merely watching them), India’s 3-hour mandatory takedown window, and the American patchwork of 169 state laws with almost no federal backbone.

The part I find most interesting, and underdiscussed, is the enforcement gap. You can have the best law in the world, but when the deepfake is hosted on a server in a jurisdiction with no regulation, distributed anonymously, and the platform profits from displaying it - who exactly do you sue, and where?

Would genuinely like to hear what people here think about that last point.
Originally posted by u/Robert-Nogacki on r/ArtificialInteligence

