Original Reddit post

Current AI companies are valued as if they are on the road to achieving AGI, and indeed many state that AGI, or even ASI, is their ultimate goal. Without putting much thought into it, it's natural to assume that a company that achieves human-level, independent artificial intelligence will be able to sell it as a cheaper replacement for human labor and reap huge profits. I see two problems with this: one moral and one societal.

The moral problem is this: if we assume that sentience is required for human-level AGI (which is debatable), how could we justify conscious agents being enslaved to the will of a corporation? Would these artificial humans even perform the presumably boring, repetitive work assigned to them? Would they want to be compensated? With ASI the problem is even worse: imagine a fly-level intelligence trying to manage a human-level intelligence.

The societal problem is this: if the vast majority of non-manual labor were suddenly threatened, and huge portions of the population became unemployed as a result, wouldn't people push their governments to remedy the situation? I can't see governments allowing AI companies to keep the profits from AGI. In fact, I assume that once one company achieves AGI, its implementation will rapidly become common knowledge, whether through competition or government intervention. Society as a whole will become much more productive, but the profits will need to be redistributed to the newly unemployed.

So TL;DR: all of this is to say that AI companies cannot benefit much from AGI, even if they develop it. To me that's good news.

Also, a plug for the series Pantheon. The show deals with some of these themes and it's so good! The last two episodes of the second season are incredibly topical.

Originally posted by u/Swimming_Beginning24 on r/ArtificialInteligence