I’ve been following Qubic for a bit. It’s a network of bare-metal servers that train neural networks together. This morning they activated something I haven’t seen before: they’re running a completely separate computational workload on different hardware in the same network, and using the revenue from that to fund the AI compute infrastructure. You can watch it happening live: https://doge.qubic.tools/

Here’s why I think this is interesting. The same network got audited by CertiK, which verified 15.52 million operations per second running on mainnet. For scale, that’s far beyond Visa’s commonly cited ~65,000 transactions-per-second peak capacity (quick arithmetic at the end of this post). No virtualization, just software running directly on the hardware.

The model is pretty straightforward. One set of specialized machines handles a specific computation task, and the money from that work pays for the whole network. Meanwhile, CPUs and GPUs in the same infrastructure are training neural networks. Same power bill, same data centers, two different jobs running in parallel (there’s a toy cost model sketched below too). Everything is public and verifiable if you want to check the numbers yourself.

I’m honestly curious what people here think. Can you actually build scalable AI compute infrastructure this way, using one workload to subsidize another? This is a real-world test of that concept happening right now.
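To put the throughput claim in perspective, here’s the back-of-the-envelope comparison. The 15.52M figure is the one cited above; the Visa numbers are the commonly cited ~65,000 TPS peak capacity and a ballpark real-world average, both my assumptions rather than audited data:

```python
# Rough throughput comparison. Only the Qubic figure comes from the post;
# the Visa numbers are commonly cited estimates (assumptions, not audited data).

qubic_ops_per_sec = 15_520_000   # CertiK-verified figure cited above
visa_peak_tps = 65_000           # commonly cited Visa peak capacity (assumption)
visa_avg_tps = 1_700             # ballpark Visa real-world average (assumption)

print(f"vs Visa peak capacity: {qubic_ops_per_sec / visa_peak_tps:,.0f}x")
print(f"vs Visa typical load:  {qubic_ops_per_sec / visa_avg_tps:,.0f}x")
```

So even against Visa’s peak capacity, the audited number comes out a couple hundred times higher.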
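And here’s a minimal sketch of the cross-subsidy model as I read it: one workload’s revenue has to cover the shared fixed costs (power, facilities) for the AI training to effectively ride along for free. Every number and name below is a hypothetical placeholder, not real Qubic economics:

```python
# Toy break-even model for the cross-subsidy idea: workload A (the specialized
# computation) pays the shared bills; workload B (AI training) runs on the
# same infrastructure. All values are made-up placeholders.

def ai_compute_is_covered(revenue_per_day: float,
                          power_cost_per_day: float,
                          facility_cost_per_day: float) -> bool:
    """True when workload A's daily revenue covers the site's shared daily costs."""
    return revenue_per_day >= power_cost_per_day + facility_cost_per_day

# Hypothetical numbers: $12k/day revenue vs $10k/day in shared costs.
print(ai_compute_is_covered(12_000, 8_000, 2_000))  # True -> training is subsidized
```

The whole thesis rests on that inequality holding over time, which is exactly what the question at the end is asking.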
Originally posted by u/srodland01 on r/ArtificialInteligence
