Quick experiment I ran. Took two identical AI coding agents (Claude Code) and gave them the same task: optimize a small language model. One agent worked from its built-in knowledge. The other had access to a search engine over 2M+ computer science research papers.

Agent without papers: did what you'd expect. Tried well-known optimization techniques. Improved the model by 3.67%.

Agent with papers: searched the research literature before each attempt. Found 520 relevant papers and tried 25 techniques from them, including one from a paper published in February 2025, months after the AI's training cutoff. It literally couldn't have known about this technique without paper access. Improved the model by 4.05%, which is 3.2% better.

The interesting moment: both agents tried the same idea (halving the batch size). The one without papers got it wrong - it missed a crucial adjustment and the whole thing failed. The one with papers found a rule from a 2022 paper explaining exactly how to do it, and got it right on the first try.

Not every idea from papers worked. But the ones that did were impossible to reach without access to the research.

AI models have a knowledge cutoff - they can't see anything published after their training. And even for older work, they don't always recall the right technique at the right time. Giving them access to searchable literature seems to meaningfully close that gap.

I built the paper search tool (Paper Lantern) as a free MCP server for AI coding agents: https://code.paperlantern.ai/

Full experiment writeup: https://www.paperlantern.ai/blog/auto-research-case-study
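The post doesn't name the 2022 paper or the exact rule the second agent applied. A minimal sketch of the *kind* of adjustment involved, assuming the commonly cited heuristic of scaling the learning rate in proportion to batch size (an illustrative assumption, not necessarily the rule from the post):

```python
# Hedged sketch: when you change the batch size, a widely used heuristic is to
# scale the learning rate by the same factor ("linear scaling"). Skipping this
# adjustment - as the no-papers agent apparently did - leaves the effective
# step size mismatched with the new gradient noise level, and training can fail.

def scale_lr(lr: float, old_batch_size: int, new_batch_size: int) -> float:
    """Scale the learning rate linearly with the batch-size change."""
    return lr * (new_batch_size / old_batch_size)

# Example: halving the batch size from 64 to 32.
base_lr, base_bs = 6e-4, 64
new_bs = base_bs // 2
new_lr = scale_lr(base_lr, base_bs, new_bs)
print(new_lr)  # half the original learning rate
```

The values (6e-4, batch size 64) are placeholders; the point is only that halving the batch size calls for a paired hyperparameter change rather than being a standalone tweak.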
Originally posted by u/kalpitdixit on r/ArtificialInteligence
