The post details an approach to using remote LLM APIs with JetBrains AI Assistant by proxying them as local models. The proxy is built with Ktor and kotlinx.serialization to avoid reflection issues, and it is distributed both as a runnable jar and as a GraalVM native image. This works around JetBrains AI Assistant's restriction to locally hosted models by presenting remote APIs as if they were local.
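The core idea is a small HTTP proxy: the IDE talks to a local endpoint, and the proxy forwards each request to the remote LLM API. The actual project uses Ktor; the sketch below illustrates only the forwarding idea using JDK-built-in classes (`com.sun.net.httpserver`, `java.net.http`) so it runs without dependencies. The ports, paths, and the fake upstream are assumptions for illustration, not the project's real configuration.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical upstream standing in for a remote LLM API endpoint.
fun startFakeRemote(port: Int): HttpServer {
    val server = HttpServer.create(InetSocketAddress(port), 0)
    server.createContext("/v1/chat/completions") { exchange ->
        val body = """{"model":"remote-model","choices":[]}""".toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
    return server
}

// Local proxy: accepts requests on a "local model" endpoint and
// forwards the body unchanged to the remote API, relaying the response.
fun startProxy(port: Int, upstream: String): HttpServer {
    val client = HttpClient.newHttpClient()
    val server = HttpServer.create(InetSocketAddress(port), 0)
    server.createContext("/v1/chat/completions") { exchange ->
        val requestBody = exchange.requestBody.readBytes()
        val upstreamRequest = HttpRequest.newBuilder()
            .uri(URI.create("$upstream/v1/chat/completions"))
            .POST(HttpRequest.BodyPublishers.ofByteArray(requestBody))
            .build()
        val upstreamResponse =
            client.send(upstreamRequest, HttpResponse.BodyHandlers.ofByteArray())
        exchange.sendResponseHeaders(
            upstreamResponse.statusCode(),
            upstreamResponse.body().size.toLong()
        )
        exchange.responseBody.use { it.write(upstreamResponse.body()) }
    }
    server.start()
    return server
}

fun main() {
    val remote = startFakeRemote(18081)
    val proxy = startProxy(18080, "http://localhost:18081")
    val client = HttpClient.newHttpClient()
    // The IDE would hit this local endpoint; the proxy forwards upstream.
    val resp = client.send(
        HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:18080/v1/chat/completions"))
            .POST(HttpRequest.BodyPublishers.ofString("""{"model":"local-proxy"}"""))
            .build(),
        HttpResponse.BodyHandlers.ofString()
    )
    println(resp.body())
    proxy.stop(0)
    remote.stop(0)
}
```

In the real project the local endpoint speaks whatever protocol AI Assistant expects from a local model, while the forwarding leg adapts it to the remote provider's API.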

4 min read · From github.com
Table of contents
- Story of this project
- Currently supported
- How to use
- Config file
- Example config file
