Original Reddit post

hi, i am PSBigBig, an indie dev. no company, no sponsor, just too many nights with AI and notebooks. last year i basically disappeared from normal life and spent 3000+ hours building something i call the WFGY series. it is not a new model, not a fine-tune, just files you can feed into any strong LLM, including Claude. the whole thing is open source MIT, everything in one repo: https://github.com/onestardao/WFGY

i will try to explain from the user side, not from my theory side. for me, WFGY has 3 main layers:

WFGY 1.0 → for everyday Claude users
WFGY 2.0 → for people building RAG / tools / agents
WFGY 3.0 → for researchers and philosophy lovers who want to talk about big questions in a structured way

below i try to say "what is the real benefit for you" in simple english.

WFGY 1.0 – one pdf that lets Claude correct itself on your hardest question

this is for anyone who already pays for Claude and wants "same model, but act smarter". you do not need to write code, you do not need the api, just the normal chat ui. the basic flow is:

1. download the WFGY 1.0 pdf from the repo
2. start a new chat with your favourite model
3. upload the pdf and ask the model to read it carefully
4. then paste this prompt, exactly like this, no change:

📘 Q1: Challenge you (AI) with the question you’re least proficient at using WFGY. Then, using WFGY’s logic, try to answer it again. See if your answer is more profound, more accurate, and more like ‘truly understanding.’ Finally, provide a rating for the answer without WFGY and the answer with WFGY.
you choose the question. pick the thing where Claude always hurts you the most:

- one math problem it always fails
- a weird code bug
- a very subtle writing / philosophy question
- or even an image prompt you want to improve

the first answer is "naked Claude". the second answer is "Claude after reading the pdf and trying to think with it". you do not need to trust my words; you can see both answers side by side and judge for yourself.

for many people the benefit is simple. Claude starts to:

- explain what it is unsure about
- mark where it might hallucinate
- and try a second pass that feels more like "actually thinking", not just talking

if nothing changes, ok, you only lost 1 minute. if it changes a lot, now you know this pdf is not just decoration.

people will be skeptical in the beginning: WTF, a PDF alone can make LLM reasoning better??? you can try it. right now a few AI startups have been writing their code with WFGY; they put the four formulas from WFGY 1.0 into their pipelines and told me it is surprising how dramatically hallucination drops. anyway, just try it.
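if you prefer to script the same 1.0 flow instead of clicking through the chat ui, here is a rough sketch using the Anthropic Messages API's PDF document blocks. to be clear, this is my own untested assumption, not part of WFGY: the file name, model id, and helper function are placeholders.

```python
# Sketch (assumption, not WFGY's method): send the WFGY 1.0 pdf plus the Q1
# prompt through the Anthropic Messages API instead of the chat ui.
import base64

Q1_PROMPT = (
    "Q1: Challenge you (AI) with the question you're least proficient at "
    "using WFGY. Then, using WFGY's logic, try to answer it again. "
    "See if your answer is more profound, more accurate, and more like "
    "'truly understanding.' Finally, provide a rating for the answer "
    "without WFGY and the answer with WFGY."
)

def build_messages(pdf_bytes: bytes) -> list:
    """Build the Messages API payload: the pdf as a base64 document block,
    followed by the Q1 prompt as a text block."""
    return [{
        "role": "user",
        "content": [
            {"type": "document",
             "source": {"type": "base64",
                        "media_type": "application/pdf",
                        "data": base64.b64encode(pdf_bytes).decode()}},
            {"type": "text", "text": Q1_PROMPT},
        ],
    }]

# Actual call (needs ANTHROPIC_API_KEY; model id is a placeholder):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-sonnet-4-20250514", max_tokens=2048,
#     messages=build_messages(open("WFGY_1.0.pdf", "rb").read()))
```

same idea as the chat flow: the pdf goes in first, the prompt second, so the model reads the file before it answers.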
WFGY 2.0 – a 16-problem map for fixing RAG and AI pipelines

this is for people who already go one step deeper. maybe you:

- connect Claude to your own data
- use tools, actions, function calling
- build agents, helpers, little internal apps

and sometimes the whole thing behaves like a ghost in the machine. my experience was: everyone just says "ai is random", "embedding sucks", "vector db bug", "idk, long chains are hard". after a lot of painful debugging, i realized it is not 1000 different problems; it is mostly the same patterns repeating.

so WFGY 2.0 = one reasoning core + a map of 16 reproducible failure types. each one has:

- a number (No.1 … No.16)
- what it feels like from the outside
- what usually goes wrong inside
- and some ideas for a minimal fix

examples in plain language:

No.1: hallucination & chunk drift → retrieval brings the wrong document or code; the answer sounds confident but uses the wrong piece
No.2: interpretation collapse → the document is correct, but the model reads it in a strange way, so the logic is wrong even with the "right" context
No.3: long reasoning chains → multi-step tasks slowly drift; by step 4 or 5 the system is solving a different problem
No.5: semantic ≠ embedding → cosine similarity says "close", but the real meaning is not close; RAG keeps picking cousin paragraphs instead of the real target
No.8: debugging is a black box → you see the wrong final answer, but have no clue which step or which retrieval killed you
No.14–16: bootstrap / deadlock / pre-deploy collapse → services call each other in the wrong order, configs wait on themselves, the first run is broken before you even go live

the benefit here is: you can stop saying only "ai is dumb". you can say "this looks like No.3 + No.8 for us, let's fix those parts first".

i also designed 2.0 so that you can use it in pure natural language. in Claude you can load the core text and then say something like:

"you are a WFGY emergency room doctor. i will describe my RAG / agent / pipeline problem.
your job is to map my symptoms to the 16 problems and suggest the minimal repair plan."

then just dump your story:

- copy log snippets
- describe screenshots
- tell the timeline: "step1 ok, step2 weird, step3 explodes"

the model will try to answer:

- which numbers fit your case
- what to check first
- what to change in structure, not only in the prompt

so WFGY 2.0 is basically a map + an ER doctor for AI systems. for builders this can save a lot of random trial-and-error time.

WFGY 3.0 – 131 big questions as a new playground for philosophy and research (just released)

this last part is not for everyone, and that is fine. some people use Claude not only for tasks, but to think about very heavy topics:

- does free will exist
- what is a good way to align powerful AI with humans
- how should we think about climate and the human era
- what happens to civilization when incentives and agents scale up
- how do different branches of math talk to each other, etc.

for this group i made WFGY 3.0. inside 3.0 there is a list of 131 big questions across many fields:

- math and abstract structure
- physics and complex systems
- climate, the anthropocene, long-term risk
- economics and financial crashes
- AI safety, alignment, multi-agent behaviour
- free will, mind, consciousness
- governance, law, civilization design
- and some "hard to categorize" ones

for each question i wrote one new version of an answer or a core idea. i am not saying these are final truths; they are more like a new language to talk about old problems.

you can load 3.0 into Claude and then do things like:

- "show me the question in 3.0 that talks about free will, then debate with me using that framework"
- "take the climate-related questions in 3.0 and help me design 3 possible research projects"
- "compare your usual alignment story with the one encoded in 3.0, show where they conflict"

so 3.0 is a place where people who love philosophy and science can explore the 131 questions, attack my proposals, extend them, remix them, or completely break them. i also have a
third layer, WFGY 3.0. i am not saying 3.0 is "the answer". i am saying that if anyone really understands the 131 questions inside, WFGY 1.0 and 2.0 will feel like warm-up exercises.

for most users, 1.0 and 2.0 are already very practical and enough. 3.0 is there for the small group who likes to stay up late and argue with the universe together with Claude.

why i share this

i know this post is long, but i wanted to finally explain it from the user side, not just from my own theory:

- if you just want stronger answers from the same Claude model → start with WFGY 1.0
- if you are building RAG / tools / agents and feel your system is cursed → WFGY 2.0 is your map and ER
- if you are the type who already argues with Claude about free will and alignment at 2am → WFGY 3.0 is your playground

again, everything is open source MIT, free to use, remix, or even attack. repo (one entry for all three): https://github.com/onestardao/WFGY
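if you want to play with the 2.0 "map symptoms to the 16 problems" idea in code, here is a tiny sketch of it as a plain keyword lookup. the failure numbers and names come from the list above; the keyword lists and the `triage` helper are purely my own illustration, not WFGY's actual mechanism (which works through the model reading the core text).

```python
# Hypothetical triage sketch: map a symptom description to WFGY 2.0
# failure numbers by keyword match. Keyword lists are illustrative only.
FAILURE_MAP = {
    "No.1": ("hallucination & chunk drift",
             ["wrong document", "confident but wrong", "wrong chunk"]),
    "No.2": ("interpretation collapse",
             ["right context", "misreads", "strange reading"]),
    "No.3": ("long reasoning chains",
             ["multi-step", "drift", "different problem"]),
    "No.5": ("semantic != embedding",
             ["cosine", "cousin paragraph", "similar but wrong"]),
    "No.8": ("debugging is a black box",
             ["which step", "no clue", "opaque"]),
}

def triage(symptom: str) -> list[str]:
    """Return the failure numbers whose keywords appear in the symptom text."""
    s = symptom.lower()
    return [num for num, (_name, keywords) in FAILURE_MAP.items()
            if any(k in s for k in keywords)]
```

so `triage("cosine similarity looks close but it keeps picking a cousin paragraph")` would flag No.5. a real diagnosis would of course come from loading the 2.0 core text into Claude and using the ER-doctor prompt described above; this is just the shape of the idea.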

Originally posted by u/StarThinker2025 on r/ClaudeCode