A conference talk recap from GeeCon 2025 covering how to build LLM-powered applications in Java using LangChain4j. Topics include prompt engineering (system vs. user prompts), AI services with streaming, memory management via MessageWindowChatMemory, function calling/tools, RAG with PGVector, chunking and tokenization, and more.

2m read time · From shaaf.dev
