This repository showcases an early prototype for improving an LLM's reasoning through o1-like reasoning chains, using either local Ollama models or the Perplexity API. It improves the model's ability to solve logical problems by employing multiple reasoning steps, re-examination, and exploration of alternative answers. This experimental setup aims to inspire the open-source community to develop new strategies for dynamic reasoning. Initial testing shows it can achieve 60-80% accuracy on tasks that typically stump leading models.
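To make the idea concrete, here is a minimal sketch of how such a step-by-step reasoning loop could look against a local Ollama endpoint. The JSON step schema (`title`/`content`/`next_action`), the `llama3.1` model name, and the helper names are illustrative assumptions, not the repository's actual implementation.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

# Assumed prompt: ask the model for one reasoning step at a time as JSON.
SYSTEM_PROMPT = (
    'You are an expert reasoner. Respond with exactly one reasoning step as '
    'JSON: {"title": ..., "content": ..., "next_action": "continue" or '
    '"final_answer"}. Re-examine your work and consider alternative answers '
    'before committing to "final_answer".'
)

def parse_step(raw: str) -> dict:
    """Parse one model reply into a step dict, falling back to plain text."""
    try:
        step = json.loads(raw)
    except json.JSONDecodeError:
        step = {"title": "Step", "content": raw}
    step.setdefault("next_action", "continue")
    return step

def chat(messages, model="llama3.1"):
    """One non-streaming call to the local Ollama chat endpoint."""
    body = json.dumps({"model": model, "messages": messages,
                       "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

def reason(question, max_steps=10):
    """Yield reasoning steps until the model declares a final answer."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        step = parse_step(chat(messages))
        messages.append({"role": "assistant", "content": json.dumps(step)})
        yield step
        if step["next_action"] == "final_answer":
            break
```

Swapping in the Perplexity API would mean replacing `chat` with a call to its OpenAI-compatible chat-completions endpoint; the surrounding loop stays the same.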

Table of contents

- How it works
- Features of this fork
- Original benchmarks with g1
- Examples of the original g1 in action
- Quickstart
- Prompting Strategy
- Credits
