I have a new take on how AI is going to play out, and I'm curious to hear your feedback.

Recently Anthropic ran an experiment where they let Opus 4.6 write an entire compiler in C "from scratch." In reality it wasn't really from scratch, because the model had been pretrained on lots of open-source compiler code in C before the experiment. Why am I telling you this story? Because an AI can only produce output based on what it was pretrained on. If that's the case, how is AI going to think like a human without humans first pretraining the model on the data and scientific knowledge we already have?

Here's the point I'd like feedback on: we can collect training data for every domain of human endeavour, whether it's craftsmanship, factory work, literally anything. But how is an AI supposed to create something it hasn't been trained on?

My take is that we'll see models getting better and better, but in the future robotics is going to be a big industry, and for robots to work blue-collar jobs we need training data. So here's what I think will happen: we collect huge amounts of training data to feed the AIs, which then lets robots do real-world tasks. These jobs are replicable; the tasks are done over and over again with minimal variance. AIs can adapt to different conditions fairly well if they were pretrained correctly.

Thanks for reading, and let me hear your thoughts!
Originally posted by u/Last_Pay_7248 on r/ArtificialInteligence
