A step-by-step guide to running local AI models with LM Studio and integrating them into a Java Spring Boot application using Spring AI. Covers downloading and running the DeepSeek-V2-Lite-Chat model in MLX format on macOS, configuring Spring AI with LM Studio's OpenAI-compatible API server, and a key gotcha: LM Studio doesn't support HTTP/2, requiring a manual override of Spring's RestClient to force HTTP/1.1. Also demonstrates using LM Studio's Anthropic-compatible API endpoint, including enabling the 'thinking' mechanism via AnthropicChatOptions.
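The HTTP/1.1 gotcha mentioned above can be sketched in plain Java: LM Studio's local OpenAI-compatible server speaks only HTTP/1.1, while the JDK `HttpClient` may negotiate HTTP/2 by default, so the client is pinned to HTTP/1.1. This is a minimal sketch assuming LM Studio's default port 1234; the wiring of this client into Spring's `RestClient` (via `JdkClientHttpRequestFactory`) is described in the article but not reproduced here.

```java
import java.net.http.HttpClient;

public class Http11Client {

    // LM Studio's local API server doesn't support HTTP/2, so pin the
    // JDK HttpClient to HTTP/1.1 instead of letting it negotiate.
    static HttpClient build() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = build();
        System.out.println(client.version()); // prints HTTP_1_1

        // In the Spring Boot app from the article, a client like this would
        // back Spring AI's RestClient, e.g. via
        // new JdkClientHttpRequestFactory(client) on a RestClient.Builder,
        // with spring.ai.openai.base-url pointed at http://localhost:1234
        // (the exact builder wiring here is an assumption, not the
        // article's verbatim code).
    }
}
```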

7m read time · From piotrminkowski.com
Table of contents

- Source Code
- Run AI Models with LM Studio
- Integrate Spring AI with the Model on LM Studio
- Conclusion
