Original Reddit post

I am not asking these questions out of fear of a ‘rogue AI’ scenario or anything of that nature. I am asking them under the hypothetical that AI remains in our control, as a tool and as property. I have seen some people write that AI would be given control over resources and would optimise everything we have so that everyone receives enough to live comfortably without working, yada yada etc. But that rests on the innate presumption that the advanced AI would be collectively owned and serve the collective good. This is only a presumption; AI can just as easily be presumed to be under the ownership of individuals and corporations. Nothing says that if we create an advanced AI it will suddenly become a collective miracle. That would require an extremely dramatic shift in economic and political systems, and in the law. Private ownership of resources, for example, would have to be abolished, and rights to AI would have to override the ownership rights over the AI and the systems that run and maintain it. A change of this magnitude would only be possible through either a slow and peaceful shift or a fast and dramatic reactionary one. In countries with large wealth gaps and strongly protected corporate and private ownership rights, it would more likely be the latter, since the wealthy and the ‘owners’ would obviously seek to protect their positions of privilege, and not voluntarily surrender them all of a sudden just to be lumped together with the masses. I am not sure our economic systems would function if truly advanced AI could replace the majority of labour, because that would call into question the rights and roles of the majority of people. So my greater fear, in imagination, is not of dramatically advanced AI itself, but of humans and our nature.
However, I also know that in history great economic shifts were often fraught with fearful imaginings and dramatic predictions, yet many of them did boil over into resolution through great social and political conflicts. I fear those conflicts. Realistically, there is the creation of sufficiently advanced AI, and then there is its implementation. Age-changing technological and subsequent economic shifts in history happened over decades or centuries, as with our most recent age of information. Advanced AI does not exist yet. I imagine the advent of advanced AI and the implementation of truly advanced automation would altogether unfold over decades, but human beings adapting to it may only happen through conflict if it is not handled well. What we do know from history is that large, dramatic changes to systems of governance, economics and politics often involve violence and conflict, not necessarily peace and deliberation. AI is hard to predict because we don’t yet know how advanced it can really become compared to how advanced we imagine it could be. What is certain, though, is that if it turns out as advanced as we imagine, the shift would be monumental. So, I’m obviously no expert, just putting thought to the far future. Please do argue with or against me in the comments; I’m happy to hear where I was wrong and why. I just want to foster discussion, so feel free to tell me if what I said was dumb. After all, I’m just a youth posting a thought train on reddit who wants to learn more. I started thinking about all this after watching some of Geoffrey Hinton, who some argue is a pessimist and others a realist on AI; otherwise, I study history and economics, so my generalised fears come from that realm.

submitted by /u/chickenricenicenice

Originally posted by u/chickenricenicenice on r/ArtificialInteligence