Original Reddit post

The most direct way is to require ID or biometric verification for every account created on every social platform, but I think almost no one would accept this, so it would be impossible to enforce.

Another way is to mandate SynthID-like watermarking for every company developing LLMs, but people can use humanizers or fine-tune open-source models to evade detection once their capabilities catch up.

A third way is to attack the problem from the hardware side: require every chip manufacturer to embed a unique marker that gets attached to any online activity. Since hardware is much harder to duplicate than digital accounts, this might deter bots somewhat more effectively (a rough sketch of what I mean follows the post). However, older chips that have already been sold wouldn't be covered by the requirement, and anonymity would also be diminished to some extent.

A fourth way I can think of is to use another AI model to detect abnormal activity, but this could produce many false positives and false negatives. Moreover, it often ends up forcing users through annoying CAPTCHA-like challenges over and over. The more you have to prove you are human, the more work you have to put in and the more annoying it gets, not to mention that human online behavior is also relatively easy to train a bot to imitate.

What do you guys think? What did I miss? Do you think some compromise on the users' side is necessary to save the internet?
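To make the third (hardware) approach a bit more concrete, here is a minimal sketch, assuming Python with the `cryptography` package. The chip ID `chip-0001`, the `registry` dict, and the payload are made-up illustrations, not anything a real manufacturer ships: the idea is simply that a per-chip private key signs each online action and a platform checks the signature against the manufacturer's published public key.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical setup: each chip leaves the factory with a private key fused
# into it, and the manufacturer publishes the matching public key.
device_key = Ed25519PrivateKey.generate()           # stands in for the fused key
registry = {"chip-0001": device_key.public_key()}   # manufacturer's public registry

def sign_activity(payload: bytes) -> bytes:
    """Device side: attach a hardware-backed signature to an online action."""
    return device_key.sign(payload)

def verify_activity(chip_id: str, payload: bytes, signature: bytes) -> bool:
    """Platform side: accept the action only if a registered chip signed it."""
    public_key = registry.get(chip_id)
    if public_key is None:
        return False  # unknown (e.g. pre-regulation) hardware
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload = b"POST /comment body='hello world'"
signature = sign_activity(payload)
print(verify_activity("chip-0001", payload, signature))     # True
print(verify_activity("chip-0001", b"tampered", signature))  # False
```

The same sketch also shows the weaknesses the post mentions: chips that aren't in the registry (older hardware) can only be handled by policy, and the scheme ties every action to a specific chip, which is exactly where the loss of anonymity comes from.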

Originally posted by u/LeadershipBoring2464 on r/ArtificialInteligence