The post details an approach to using remote LLM APIs with JetBrains AI Assistant by proxying them as local models the assistant can connect to. The proxy is built with Ktor and kotlinx.serialization to avoid reflection issues, and it ships both as a runnable JAR and as a GraalVM native image. This works around the AI Assistant's restriction to locally hosted models.