Have you seen Meta's new research paper, Principia? They took a 120B-parameter model and got it to 95% accuracy on a benchmark they haven't publicly released yet. So we reconstructed their benchmark and got a ZERO-parameter model to 96%, using nothing but compression and geometric latent-space rules. 120 billion parameters vs. 0 parameters: which is better? Sorry, Meta, better luck next time! https://github.com/RichardAragon/GeoVerify-v0.1-/tree/main
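The post doesn't explain what a "zero-parameter model utilizing compression" looks like, and the linked repo isn't quoted here, so this is only a guess at the general idea: compression-distance classifiers famously have no trained weights at all. A minimal sketch using gzip and Normalized Compression Distance (all function names and the toy examples are mine, not from GeoVerify):

```python
import gzip

def ncd(x: str, y: str) -> float:
    # Normalized Compression Distance: how poorly does the
    # concatenation compress relative to its parts? Similar
    # strings share structure, so their concatenation compresses
    # well and the distance is small.
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: str, labeled: list[tuple[str, str]]) -> str:
    # Nearest neighbor under NCD; no trained parameters anywhere,
    # just a compressor and labeled reference texts.
    return min(labeled, key=lambda ex: ncd(query, ex[0]))[1]

# Toy reference set (invented for illustration).
examples = [
    ("the cat sat on the mat", "animals"),
    ("stocks fell sharply on tuesday", "finance"),
]
print(classify("the dog slept on the rug", examples))
```

Whether the repo actually works this way is anyone's guess; the point is only that "0 parameters" is a coherent claim for compression-based methods, since the compressor itself does the modeling.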
Originally posted by u/Own-Poet-5900 on r/ArtificialInteligence
