OpenJarvis is an open-source, local-first personal AI framework developed at Stanford's Hazy Research and Scaling Intelligence Lab. It provides shared primitives for building on-device AI agents, with evaluations that treat energy, FLOPs, latency, and cost as first-class constraints alongside accuracy. Research shows local language models already handle 88.7% of single-turn chat and reasoning queries, with intelligence efficiency improving 5.3× from 2023 to 2025. The framework supports multiple local inference backends (Ollama, vLLM, SGLang, llama.cpp), includes a Rust extension for security and agent tooling, and aims to be both a research platform and a production foundation for local AI, calling the cloud only when truly necessary.
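The "local-first, cloud only when necessary" design can be sketched as a small routing layer. The following is an illustrative sketch, not OpenJarvis's actual API: `LocalFirstRouter`, `local_model`, and `cloud_model` are hypothetical names, and the local backend here signals "cannot serve" by returning `None`.

```python
# Illustrative local-first routing sketch (hypothetical; not the
# OpenJarvis API). A local backend is tried first, and a cloud
# callable is used only when the local backend cannot serve the query.
from typing import Callable, Optional


class LocalFirstRouter:
    def __init__(self,
                 local: Callable[[str], Optional[str]],
                 cloud: Callable[[str], str]) -> None:
        self.local = local        # returns None when it cannot serve
        self.cloud = cloud        # fallback, assumed always available
        self.cloud_calls = 0      # track escalations for cost/energy accounting

    def ask(self, prompt: str) -> str:
        answer = self.local(prompt)
        if answer is not None:
            return answer         # served entirely on-device
        self.cloud_calls += 1     # escalate only on local failure
        return self.cloud(prompt)


# Stub backends for demonstration; a real deployment would wrap an
# inference server such as Ollama or vLLM behind the same interface.
def local_model(prompt: str) -> Optional[str]:
    # Pretend the local model declines long prompts.
    return "local: " + prompt if len(prompt) < 40 else None


def cloud_model(prompt: str) -> str:
    return "cloud: " + prompt


router = LocalFirstRouter(local_model, cloud_model)
```

Keeping the escalation counter on the router makes cloud usage a measurable quantity, in the spirit of treating cost and energy as first-class evaluation constraints.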