Robert Youssef

Google Research just showed you can boost LLM accuracy by up to 76 percentage points with zero extra output tokens, zero added latency, and zero fine-tuning 🤯 The technique: paste your prompt twice. That's it. That's the paper. But WHY it works reveals something important about how every LLM you use actually reads your input:
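A minimal sketch of the trick, assuming a generic text-in/text-out client: `call_model` is a hypothetical stand-in for whatever LLM API you use, and the causal-attention comment reflects the commonly cited explanation for why repetition helps, not the paper's exact wording.

```python
# Minimal sketch of the "paste your prompt twice" trick.
# call_model is a hypothetical stand-in for any LLM client
# (OpenAI, Gemini, a local model); only the prompt construction matters.

def duplicate_prompt(prompt: str, separator: str = "\n\n") -> str:
    # Decoder-only LLMs read left to right: a token in the first copy
    # can never attend to tokens that come after it. In the second copy,
    # every token of the question can attend to the ENTIRE question,
    # which is the usual explanation for why repetition helps.
    return prompt + separator + prompt

def ask(call_model, question: str) -> str:
    # Only the input grows; the model's answer stays the same length,
    # so there are zero extra output tokens and no extra decode time.
    return call_model(duplicate_prompt(question))

# Usage with a dummy client that just reports what it received:
if __name__ == "__main__":
    echo = lambda p: f"[model saw {len(p)} chars]"
    print(ask(echo, "What is 17 * 24?"))
```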

