Original Reddit post

Meta just dropped TRIBE v2, a “trimodal brain encoder” that can predict how the human brain responds to almost any sight or sound. It’s trained on 500+ hours of fMRI recordings from 700+ people and builds on their Algonauts 2025 award‑winning architecture. The model can make zero‑shot predictions for new subjects, languages, and tasks – basically a digital twin of neural activity. They’ve released a demo and research paper (link in the tweet).

This feels like a pretty big step for brain‑AI interfaces. What’s striking is how general it is – it works across languages and tasks without retraining.

Curious what people think:

- What are the most exciting use cases? (medical? AR/VR? education?)
- What ethical boundaries should we be thinking about now, before this tech matures?

Originally posted by u/Remarkable-Dark2840 on r/ArtificialInteligence