I study neuroscience and I just had a thought. LLMs are trained on enormous quantities of human language and go almost straight from numerical noise to consistently forming coherent patterns, with the desired patterns selected for during training. At this point in AI development it's essentially pure pattern recognition that can be directed toward any number of uses, whether LLMs or AlphaFold, for example. But I'm wondering whether any research has been done on modelling what a newborn's brain would experience and using that as the training ground for an AI: for example, saturating the model with video and audio initially, then gradually adding language via words embedded in or attached to image and video files, to mimic the experiential learning that human brains go through.

Would it be unethical? Would the AI behave differently at the end of this training compared to traditional LLMs? Would it, at that point, be more willing to admit to not knowing something, especially if trained on multiple languages? I ask this last question because the model would first establish that certain words correlate to a specific concept, then be told that new words (in another language) also correlate to that same concept; it would correctly predict some of those new words and be completely unable to guess others. Does that translate (haha) into a model that is more willing to admit mistakes? Pls share your thoughts ❤️
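To make the staged-curriculum idea concrete, here is a minimal sketch of what the schedule could look like. The stage boundaries, modality names, and sampling mixes below are invented purely for illustration and are not taken from any published training recipe; the actual model and data loaders are left as placeholders.

```python
import random

# Toy curriculum mimicking the "newborn first" idea: raw audio/video only,
# then images with attached words, then a language-heavy mix.
# All numbers here are made up for illustration.
CURRICULUM = [
    # (fraction of total training steps, {modality: sampling probability})
    (0.3, {"video": 0.6, "audio": 0.4}),                          # "infancy"
    (0.3, {"video": 0.4, "audio": 0.3, "captioned_image": 0.3}),  # words attached to things
    (0.4, {"captioned_image": 0.3, "text": 0.5, "audio": 0.2}),   # language-heavy stage
]

def stage_for_step(step, total_steps):
    """Return the modality mix active at a given training step."""
    progress = step / total_steps
    cumulative = 0.0
    for fraction, mix in CURRICULUM:
        cumulative += fraction
        if progress < cumulative:
            return mix
    return CURRICULUM[-1][1]

def sample_modality(mix):
    """Pick which kind of batch to draw next, proportional to the mix."""
    modalities, weights = zip(*mix.items())
    return random.choices(modalities, weights=weights, k=1)[0]

def train(total_steps=10):
    for step in range(total_steps):
        mix = stage_for_step(step, total_steps)
        modality = sample_modality(mix)
        # Placeholder for: batch = loaders[modality].next(); loss = model(batch); ...
        print(f"step {step:2d}: training on a {modality} batch (mix={mix})")

if __name__ == "__main__":
    train()
```

Running this only prints which modality each step would draw from; in a real experiment each stage would feed actual multimodal batches into the same model, and the open question above is whether the resulting model behaves differently from a text-first LLM.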
Originally posted by u/Sakagura2004 on r/ArtificialInteligence
