Original Reddit post

A common problem with LLMs is context bloat and context overload (though this is becoming less of an issue with very high context limits). Could this somehow be prevented by modifying the weights of the model on the fly? Instead of adding context to the prompt, the context would live in the weights. Is this possible?
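The idea the post describes resembles what low-rank adapter methods (e.g. LoRA) do: instead of prepending context tokens to the prompt, information is folded into the weights as a small additive update. Below is a minimal toy sketch of that mechanism with a single linear layer, assuming hypothetical names throughout; it illustrates the math of a low-rank weight update, not how any production LLM actually stores context.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size of the toy layer

W = rng.normal(size=(d, d))      # frozen base weights

# Low-rank "context" update: factors A and B encode extra information
# with d*r + r*d parameters instead of a full d x d matrix.
r = 1
A = rng.normal(size=(d, r)) * 0.1
B = rng.normal(size=(r, d)) * 0.1
delta = A @ B                    # the on-the-fly weight modification

x = rng.normal(size=(d,))        # an input that carries no extra context

y_base = W @ x                   # output of the unmodified layer
y_adapted = (W + delta) @ x      # output with the update folded in

# The update shifts the output by delta @ x without growing the input.
print(np.allclose(y_adapted, y_base + delta @ x))  # True
```

Because the update is additive and low-rank, it can be attached or removed cheaply at inference time, which is why adapter methods are a common answer to this question in practice.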

Originally posted by u/davidinterest on r/ArtificialInteligence