Original Reddit post

The way AI compute is concentrating in fewer hands is becoming one of the more worrying aspects of how AI is developing. A few things I think get overlooked:

- The top five cloud providers now control most of the GPU compute used for AI training worldwide. That means the choices of just five companies decide which models get trained, how big they get, and who benefits.
- NVIDIA's position in the AI chip market creates a single point of failure for most large-scale AI work.
- The power and money needed to train the biggest models is now so enormous that only major governments or the largest companies can really participate.
- This doesn't look like a short-term thing; it seems to be getting more locked up over time, not less.

With that in mind, I've been checking out projects that are actually trying to build decentralized compute for AI. Most of them are just talk or haven't shipped anything real. The one I keep coming back to is Qubic, which actually has a distributed compute network running AI training tasks on mining hardware.

The real question isn't whether Qubic itself makes it. It's whether this model of mining-powered compute contributing to AI training can actually work at scale. If it can, it might be a real path to less concentrated AI infrastructure. If it can't, we should figure out why.

What do people here think are the most realistic ways to get genuinely decentralized AI compute?

Originally posted by u/srodland01 on r/ArtificialInteligence