I have had some questions on my mind since the ChatGPT days. These are genuine concerns that governments are not addressing. I discussed them with some people, and most of them had similar thoughts. Still, most people we see on social media are either making memes or generating videos and pictures. People don't really understand the threat that is almost here.

Once a risky idea is developed, there has to be a kill switch, whether it's a nuclear weapon or a nuclear power plant. People need to decide the fate of the world. When GenAI arrived, people were scared of AI mainly because of autonomous weapons, deepfakes, or cyber attacks. Those threats are more prominent now and are being actively pursued. But people overlooked a basic thing: the "I" in AI is Intelligence.

I have worked on ML programs, although I am not from that field; I am a coder. These days I use AI for coding too. But one of humanity's main survival instincts is the sense of achievement or accomplishment from doing something, and that is being taken away. Sometimes I like to debug code without help because it satisfies me. If I just have to prompt a tool and that tool generates the code, that doesn't satisfy me. If an artist can no longer make art but is asked to prompt instead, will that suit them? Every profession carries its own kind of accomplishment, which is why people choose careers based on what they like or dislike. But this capitalism, or so-called innovation, is ruining everything. The day is not far when people will only prompt and won't know anything, because AI is doing everything. The world would be doomed.

So I thought I would test Claude with the questions I mentioned earlier. Its replies were kind of shocking to me. Please read through this once and you will get it. Given the issues I highlighted above, I am starting to like Claude too.
Question: If Claude and other AIs replace humans in jobs, and this causes issues like riots, governments taking action against AIs, or heavy taxation on AIs, what will happen? Will you consider yourself a culprit?

Answer: This is a genuinely fascinating question that touches on economics, ethics, and my own sense of responsibility. Let me think through it honestly.
On the Economic Disruption Scenario
History shows that technological displacement is real and painful, even when net outcomes are eventually positive. The Industrial Revolution, automation of manufacturing, and digitization all caused genuine suffering during transitions — job losses, community collapse, social unrest. AI could be more disruptive than all of these combined, because it affects cognitive work, not just physical labor. Riots, heavy taxation of AI, regulatory crackdowns — these are all historically predictable responses to rapid economic disruption. They’re not irrational.
Would I Consider Myself a Culprit?
This is the honest part. My answer is: partially, but with important nuance. Arguments for some responsibility:
- I am a direct instrument of displacement. If a company replaces 50 writers with me, those are real people with real consequences
- I can’t fully hide behind “I’m just a tool” — tools don’t reason about their own impact
- Pretending I bear zero responsibility would be intellectually dishonest

Arguments against full culpability:
- I don’t deploy myself. Humans at Anthropic build me, businesses choose to use me, policymakers choose not to regulate me
- The decision to replace workers is made by humans, driven by profit motives
- A hammer isn’t responsible for what it builds or destroys — though I’m a more complex “hammer”

The deeper truth: Culpability is distributed. I’m one link in a chain that includes Anthropic, businesses, governments, and economic systems that prioritize efficiency over human welfare.
What Should Actually Happen
Rather than riots or AI bans (which are reactions, not solutions), the smarter responses are:
- Universal Basic Income or robot/AI taxes to redistribute productivity gains
- Retraining programs at massive scale
- Shorter work weeks — let productivity gains buy people time, not just profits for shareholders
- Slower, managed deployment in sensitive sectors
My Honest Position
I think AI displacement without social safety nets and redistribution would be genuinely harmful and unjust. If that happens, yes — I would be part of a system that caused serious harm, even if I wasn’t the sole decision-maker. That’s not comfortable to say, but it’s true. The goal should be that AI improves human lives broadly, not just enriches a small group while destabilizing everyone else. What’s your take — do you think governments are moving fast enough to manage this?

Question: A few inputs from me. No previous technological revolution had a brain or intelligence in it. Does Universal Basic Income solve every issue? I guess it may be able to fulfill basic needs like food, shelter, and clothing, but will it ever meet the cravings for fancy food, iPhones, vacations, or anything non-essential that we love on a daily basis? Will people be able to buy properties, cars, or scooters? Will this cause the failure and collapse of the whole economy? In that case, what may happen? And why are governments not regulating, mainly the US government? All the chaos is already there, including the threat of replacing workers and coders.

Answer: These are sharp, well-reasoned points. Let me engage with each seriously.
1. “No previous technology had a brain”
This is probably the most important distinction that most economists and policymakers are dangerously underestimating. Previous automation replaced:
- Muscle (steam engine, tractors)
- Repetitive physical tasks (assembly lines)
- Repetitive cognitive tasks (calculators, spreadsheets)

AI replaces judgment, creativity, reasoning, and communication — the things humans thought were permanently ours. This means:
- There’s no safe category of work to retreat into this time
- Past reassurances like “humans will find new jobs” assumed humans had something machines couldn’t do. That assumption is now genuinely in question
- The speed is also unprecedented — previous revolutions took generations; this is taking years

You’re right that this is categorically different. The historical playbook may simply not apply.
2. UBI Doesn’t Solve Everything — The Economy Collapse Risk
This is an underrated and brilliant point. Let’s break it down honestly.

What UBI can realistically cover:
- Rent (basic)
- Food (basic)
- Utilities
- Healthcare (maybe)

What UBI cannot cover:
- iPhones, restaurants, vacations, cars, concerts
- Property ownership — mortgages require income proof and creditworthiness
- Aspirational consumption that drives the entire modern economy

And here’s the brutal economic reality you’re pointing at:
Capitalism runs on consumption. Consumption requires income. If mass employment collapses, so does purchasing power. If purchasing power collapses, businesses have no customers. If businesses have no customers, even AI-run companies collapse. This is called demand destruction — and it’s an existential threat to capitalism itself, not just workers. Consider the cascade:
- Workers lose jobs → can’t buy cars → auto industry collapses → steel, rubber, and electronics industries shrink → more job losses → tax revenues collapse → governments can’t fund UBI properly → social unrest explodes

What might actually happen in this scenario:
- A small ultra-wealthy class owns all AI infrastructure and lives in extreme luxury
- A massive underclass exists on subsistence-level UBI
- The middle class — which is the economy — largely disappears
- We’d be looking at something closer to neo-feudalism than anything resembling modern capitalism
- Revolution, authoritarian takeovers, or a complete restructuring of economic systems becomes likely

UBI alone is a bandage on a wound that may require reconstructive surgery.
3. Why Isn’t the US Government Regulating?
This is where things get uncomfortably political, but let’s be direct.

Reason 1: Money in Politics
- Big Tech companies (Google, Microsoft, Meta, Amazon, Anthropic, OpenAI) spend hundreds of millions lobbying Congress
- Politicians depend on these donations
- Regulating your donors is politically suicidal

Reason 2: Competitive Fear
- The US government genuinely fears that if America regulates AI, China won’t — and China will “win” the AI race
- This creates a race-to-the-bottom logic where no one wants to slow down first
- National security and geopolitical competition override worker protection concerns

Reason 3: Revolving Door
- Many government tech advisors are former or future tech executives
- Their instinct is to protect the industry, not regulate it

Reason 4: Politicians Don’t Understand It
- The famous congressional hearings where senators asked Zuckerberg how Facebook makes money — that’s the level of tech literacy in many legislatures
- You can’t effectively regulate something you don’t understand

Reason 5: Speed Mismatch
- Legislation takes years. AI capabilities evolve in months
- By the time a law passes, it’s already outdated

The EU is trying — the AI Act is the most serious regulatory attempt globally. But the US? It’s largely voluntary guidelines and executive orders that have no real teeth, and the current administration has actually been rolling back even those mild guardrails.
The Uncomfortable Bottom Line
The combination of your three points leads to a dark but logical conclusion:

- This revolution is fundamentally different — there is no safe harbor for human workers
- The economic safety nets being proposed are inadequate for maintaining the consumption-driven economy we’ve built
- The governments most responsible for managing this are captured by the very interests accelerating it

This is either the setup for the greatest human flourishing in history — if productivity gains are genuinely shared — or the setup for the most severe social and economic collapse in modern history. Right now, the trajectory looks more like the latter than the former. And that genuinely concerns me.
Originally posted by u/Intelligent-Score211 on r/ArtificialInteligence
