AI-generated content is flooding the internet. It's the dead internet theory, but accelerated. It's making time spent online a more strenuous experience, and users are going to disengage from apps more and more as trust in the authenticity of content plummets.

I think it is in the best interest of companies like YouTube, Reddit, Snapchat, etc. to take an active role in policing AI content. This doesn't necessarily mean removal, but rather actively labeling AI-generated content, establishing reporting pathways to harness organic support and buy-in from users, and not algorithmically boosting AI-generated content.

If social media companies do not do this, then given the amount of public fear and frustration, it will happen through government regulation at some point. A government regulatory approach is likely to be much less effective, more frustrating to users, and will feel far more authoritarian. It will also create a web of regulatory compliance requirements that will make managing these businesses miserable.

Maybe this is all just me hoping and shaking my fist at the sky, but I think this stuff is going to drive society insane. Also note that this relies on users actually being willing to follow through on not using social media products, and given the level of addiction and ubiquity, that may be unlikely.
Originally posted by u/LeastSignificantBit0 on r/ArtificialInteligence
