Capacitor LocalLLM is a new native plugin that brings on-device AI inference to iOS and Android apps through a unified TypeScript API. It builds on Apple Intelligence and Gemini Nano to run LLMs locally, so no data leaves the device. The post demonstrates the plugin with Oakline Bank, a fictional fintech demo app featuring OakBot, an AI assistant that answers questions about account balances and spending by injecting serialized transaction data into the model's context window before inference. The post also explains why plain-text context injection is preferred over JSON for small on-device models: it is more token-efficient and yields better reasoning performance.
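
To make the context-injection and plain-text-serialization points concrete, here is a minimal TypeScript sketch. The package name, the `LocalLLM.generate()` call, and the `Transaction` shape are all assumptions for illustration, not the plugin's documented API; check the plugin's documentation for the real surface.

```typescript
// A minimal sketch of the context-injection pattern described above.
// The package name 'capacitor-local-llm', the LocalLLM.generate() call,
// and the Transaction shape are assumptions for illustration.
import { LocalLLM } from 'capacitor-local-llm';

interface Transaction {
  date: string;     // ISO date, e.g. "2024-05-03"
  merchant: string;
  amount: number;   // negative for debits, positive for credits
}

// Serialize transactions as compact pipe-delimited plain text.
// Unlike JSON, this avoids repeating key names, quotes, and braces
// on every record, which saves tokens in a small context window.
function serializeTransactions(txns: Transaction[]): string {
  return txns
    .map((t) => `${t.date} | ${t.merchant} | ${t.amount.toFixed(2)}`)
    .join('\n');
}

// Build the prompt by injecting the serialized data ahead of the
// user's question, then run on-device inference.
async function askOakBot(question: string, txns: Transaction[]): Promise<string> {
  const prompt =
    'You are OakBot, a banking assistant for Oakline Bank.\n' +
    'Recent transactions (date | merchant | amount):\n' +
    `${serializeTransactions(txns)}\n\n` +
    `Question: ${question}`;
  const { text } = await LocalLLM.generate({ prompt }); // assumed method
  return text;
}
```

For comparison, the same record as JSON, `{"date":"2024-05-03","merchant":"Coffee Shop","amount":-4.50}`, spends most of its tokens on keys and punctuation that repeat on every row, and small on-device models generally handle simple tabular text more reliably than nested structures.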

From ionic.io · 5 min read