Original Reddit post

For the last two years, my biggest worry about AI hasn't been AGI or some science-fiction dystopia, but simply that massive layoffs of white-collar workers don't just mean a loss of workers; more importantly, they mean a loss of consumers. The entire global economy, and America's in particular, is a consumer economy. White-collar workers also account for a disproportionate share of spending, so if that population is unemployed (or worried it will be anytime soon), every single sector of the economy will feel it. Demand will collapse, revenues for every company will crater, and even the hyperscalers capturing the value of the current AI boom will eventually run out of enterprise customers, because those businesses will themselves have run out of human customers.

This is not like other technological disruptions. AI agents don't consume anything in the economy. For better or worse, what we need for prosperity is for companies to pay humans a living wage so that those humans can be customers of other businesses. What AI companies are going to do to all of us is a sort of Tragedy of the Commons: in a race to the bottom, each individual company is incentivized to lay off its workers to cut costs, but in doing so it impoverishes its own (and everyone else's) customers. Again, this doesn't just affect software companies or tech; it will affect everything. Restaurants will have fewer patrons, people will travel less, buy less real estate, less food, less of everything, because they just can't afford it.

Personally, this presents a massive cognitive dissonance that I'm struggling with. I have held NVDA, GOOGL, MSFT, and the other companies at the center of this revolution for many years. It's been good for my portfolio. I haven't sold a single share.
And now I think that the short-term success of these companies will result in the long-term collapse of all my savings, and I still can't get myself to sell anything, because I hope, more than anything, that I'm wrong.

I'm a capitalist, but I think we need some sort of legislation, something that protects the humans on this planet above short-term corporate profits. There should be a law that forces companies to keep a percentage of their workforce human, so that only a percentage of their output can be produced by agents. It may not optimize for what makes the most sense for any one company on a spreadsheet, but without guardrails, greed and the short-term profit motive are going to bring a level of societal pain we can't even imagine.

Finally, before anyone mentions it: yes, I've read the Citrini article. The fact that it has so many people now taking my long-held doomsday scenario seriously, and the fact that I haven't been persuaded by the 'boom' alternatives that have come out, is why I'm more scared than ever. But again, I'm posting here partly because I hope to find an intelligent take that persuades me. I want to be wrong.

Originally posted by u/TwelfieSpecial on r/ArtificialInteligence