Original Reddit post

https://preview.redd.it/zgty2uy3ausg1.png?width=1118&format=png&auto=webp&s=3c15844e12e22d2028a0d98e1dcb16da513db66b

We ran experiments comparing Optuna and AutoResearch. AutoResearch converges faster, is more cost-efficient, and even generalizes better.

Experiments were done on NanoChat. We let Claude define Optuna's search space so the priors of the two methods were aligned, and both optimization methods were run three times. On average, AutoResearch is far more sample-efficient.

In the 5-minute training setting, LLM tokens cost as much as GPUs, but despite a 2× higher per-step cost, AutoResearch still comes out ahead across all cost budgets.

What's more, the solution found by AutoResearch generalizes better than Optuna's. When we gave the best solutions more training time, the absolute score gap widened and the statistical significance became stronger:

https://preview.redd.it/633lu40xausg1.png?width=1026&format=png&auto=webp&s=cebb1daecad92e118e3513e6bb3f765d2c8ad618

An important contributor to AutoResearch's capability is that it searches directly in code space. In the early stages, it tunes knobs within Optuna's 16-parameter search space; with more iterations, it starts to explore code changes as well:

https://preview.redd.it/my7gfng0busg1.png?width=1018&format=png&auto=webp&s=7b9428989e39385f357213d66e26038332a64baa

Originally posted by u/Educational_Strain_3 on r/ArtificialInteligence