
Robert Youssef @rryssf_

Together AI published a paper showing open-source models outperforming GPT-4o on a major benchmark. The method is called Mixture-of-Agents. It doesn't fine-tune anything. It doesn't train anything. It just asks multiple LLMs the same question, then feeds their answers to another LLM that synthesizes the best response. 65.1% vs GPT-4o's 57.5% on AlpacaEval 2.0, all with open-source models. Here's how it works and where it breaks down.
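The two-stage pattern described above (several proposer models answer, one aggregator synthesizes) can be sketched as plain Python. This is a minimal illustration, not Together AI's implementation: the "models" are stand-in functions, and in practice each would be an API call to a different open-source LLM.

```python
def propose(models, question):
    """Stage 1: ask every proposer model the same question."""
    return [model(question) for model in models]

def aggregate(aggregator, question, answers):
    """Stage 2: feed all candidate answers to an aggregator model,
    which is prompted to synthesize the best single response."""
    joined = "\n".join(f"[{i + 1}] {a}" for i, a in enumerate(answers))
    prompt = (
        f"Question: {question}\n"
        f"Candidate answers:\n{joined}\n"
        "Synthesize the best single answer from the candidates."
    )
    return aggregator(prompt)

# Stand-in "models" so the sketch runs without API keys or network access.
proposers = [
    lambda q: f"Model A answer to: {q}",
    lambda q: f"Model B answer to: {q}",
]
aggregator_model = lambda p: "Synthesized answer based on:\n" + p

question = "What is Mixture-of-Agents?"
answers = propose(proposers, question)
final = aggregate(aggregator_model, question, answers)
```

Note that nothing here is trained or fine-tuned; the entire method lives in how the prompts are routed between off-the-shelf models.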

