Researchers propose Recursive Language Models (RLMs), an inference strategy that lets language models handle unbounded context lengths by recursively calling themselves from within a Python REPL environment. RLMs treat the input context as a variable that the model can programmatically inspect, decompose, and query recursively.
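To make that concrete, here is a minimal sketch of the chunk-and-recurse pattern the summary describes. All names here (`call_llm`, `rlm_query`, `chunk_size`) are illustrative placeholders, and in the actual RLM setup the model itself decides, from inside the REPL, how to slice and query the context; this fixed decomposition loop is only a simplified stand-in.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (hypothetical, not the authors' API)."""
    return f"<answer for: {prompt[:40]}...>"

def rlm_query(context: str, question: str, chunk_size: int = 4000) -> str:
    # Base case: the context fits comfortably in the model's window,
    # so answer with a single direct call.
    if len(context) <= chunk_size:
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Recursive case: the context is just a Python variable, so split it
    # and answer each piece with a recursive call.
    chunks = [context[i:i + chunk_size]
              for i in range(0, len(context), chunk_size)]
    partial_answers = [rlm_query(chunk, question, chunk_size)
                       for chunk in chunks]

    # Combine the partial answers with one final synthesis call.
    combined = "\n".join(partial_answers)
    return call_llm(
        f"Partial answers:\n{combined}\n\nSynthesize an answer to: {question}"
    )
```

The key point the sketch illustrates is that no single call ever sees the full context: each invocation operates on a slice small enough to fit in the model's window, which is what makes the effective context length unbounded.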

From alexzhang13.github.io · 20 min read
Table of contents
tl;dr
Prelude: Why is “long-context” research so unsatisfactory?
Recursive Language Models (RLMs).
Some early (and very exciting) results!
What We’re Thinking Now & for the Future.
Acknowledgements
Citation
