Satori is a 7B parameter LLM developed by researchers from MIT, Harvard, and other institutions to enhance reasoning through self-improvement without relying on human supervision or high-quality teacher models. It employs autoregressive search and a novel Chain-of-Action-Thought (COAT) reasoning paradigm. Satori demonstrates

From marktechpost.com · 4 min read
Table of contents
- Introducing Satori: A Model for Self-Reflective and Self-Exploratory Reasoning
- Technical Details and Benefits of Satori
- Insights
- Conclusion: A Step Toward Autonomous Learning in LLMs
