Best of LLM: January 2024

  1. Article · freeCodeCamp

    Learn LangChain and Gen AI by Building 6 Projects

    Learn how to build six end-to-end projects using LangChain and various LLMs in this course on the freeCodeCamp.org YouTube channel. The course covers the integration of LangChain with GPT-4, Google Gemini Pro, and Llama 2, enabling the creation of practical, real-world applications.

  2. Article · Hacker News

    vanna-ai/vanna: 🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.

    Vanna is an open-source Python framework for SQL generation using RAG. It allows users to train a model and ask questions to generate SQL queries for their database. The framework provides high accuracy, security, self-learning capabilities, and supports any SQL database. Users can also extend Vanna to use their own LLM or vector database.
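The text-to-SQL-via-RAG idea can be sketched in a few lines: store schema snippets, retrieve the ones most relevant to a question, and assemble a prompt for an LLM to complete. This is a minimal illustration of the pattern, not Vanna's actual API; the names `MiniStore` and `build_sql_prompt` are hypothetical, and word overlap stands in for real embedding similarity.

```python
# Illustrative text-to-SQL RAG sketch (hypothetical names, not Vanna's API).
class MiniStore:
    """Tiny 'vector store': ranks stored DDL snippets by word overlap
    with the question, a stand-in for embedding similarity."""

    def __init__(self):
        self.docs: list[str] = []

    def add(self, text: str) -> None:
        self.docs.append(text)

    def retrieve(self, question: str, k: int = 2) -> list[str]:
        q = set(question.lower().split())
        ranked = sorted(
            self.docs,
            key=lambda d: len(q & set(d.lower().split())),
            reverse=True,
        )
        return ranked[:k]


def build_sql_prompt(question: str, store: MiniStore) -> str:
    # The retrieved schema becomes context for the LLM's SQL generation.
    context = "\n".join(store.retrieve(question))
    return (
        "Given the following schema:\n"
        f"{context}\n"
        f"Write a SQL query answering: {question}"
    )


store = MiniStore()
store.add("CREATE TABLE orders (id INT, customer_id INT, total DECIMAL)")
store.add("CREATE TABLE customers (id INT, name TEXT)")
prompt = build_sql_prompt("total orders per customer", store)
```

In a real system the retrieval step would query a vector database and the prompt would be sent to an LLM; Vanna packages both steps behind its own training and asking interfaces.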

  3. Article · Pointer

    mlabonne/llm-course: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.

This post provides a comprehensive course on large language models (LLMs). It covers LLM fundamentals, Python for machine learning, neural networks, natural language processing (NLP), supervised fine-tuning, reinforcement learning from human feedback, evaluation, quantization, new trends, running LLMs, building a vector store, retrieval-augmented generation (RAG), advanced RAG, inference optimization, deploying LLMs, and securing LLMs.

  4. Article · Medium

    LLaVA: An open-source alternative to GPT-4V(ision)

LLaVA is an open-source alternative to GPT-4V(ision) that lets users discuss and describe images. It improves on other open-source solutions and is faster and cheaper to train. LLaVA can be used online or installed locally on a computer or laptop. The post gives an overview of LLaVA, explains its technical characteristics, and includes instructions for programming with LLaVA via a chatbot application built with Hugging Face libraries on Google Colab.

  5. Article · Hacker News

    srikanth235/privy: Your private coding assistant

    Privy is a coding assistant with AI chat, code explanation, unit test generation, bug finding, and error diagnosis features.

  6. Article · Venture Beat

Browser alternatives Brave and Arc add new AI integrations

Two privacy-focused web browsers, Arc and Brave, have announced new generative-AI-powered features. Arc has added the Perplexity generative AI search engine as a default search bar option, while Brave's AI assistant, Leo, is being upgraded with Mixtral 8x7B, an open-source LLM developed by Mistral AI.

  7. Article · Towards Dev

Spring Boot Meets OpenAI: A Java Leap into GenAI

    The article discusses the Spring AI initiative, which aims to simplify the creation of AI-powered applications. It explores the building blocks of Spring AI, such as the ChatClient interface and the compatibility with OpenAI and Azure OpenAI. The article also mentions the use of prompt engineering techniques with Spring AI.

  8. Article · Baeldung

    Java Weekly, Issue 523

    This post discusses using AI to generate descriptions for JFR events, integrating LLMs in Quarkus applications, and includes various articles and podcasts related to Java and Spring.

  9. Article · Medium

    Let’s Build a Standalone Chatbot with Phi-2 and Rust

Learn how LLM-powered chatbots work, the key components of a RAG implementation, and how to build your own chatbot using RAG and Rust. The result is a fully working, locally running chatbot backed by an LLM with long-term memory.
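The key RAG-chatbot components the article describes — retrieval over a knowledge base, conversation memory, and prompt assembly for the model — can be sketched generically. This sketch is in Python rather than the article's Rust, the class name `RagChat` is hypothetical, and word overlap stands in for embedding-based retrieval; the "memory" here is simply an appended transcript.

```python
# Minimal RAG chat loop: retrieved context + conversation history
# are combined into the prompt sent to the LLM (hypothetical sketch).
class RagChat:
    def __init__(self, knowledge: list[str]):
        self.knowledge = knowledge
        self.history: list[tuple[str, str]] = []  # (role, text) pairs

    def _retrieve(self, query: str, k: int = 1) -> list[str]:
        # Word-overlap ranking as a stand-in for vector similarity search.
        q = set(query.lower().split())
        return sorted(
            self.knowledge,
            key=lambda t: len(q & set(t.lower().split())),
            reverse=True,
        )[:k]

    def build_prompt(self, user_msg: str) -> str:
        context = "\n".join(self._retrieve(user_msg))
        transcript = "\n".join(f"{r}: {t}" for r, t in self.history)
        self.history.append(("user", user_msg))
        return (
            f"Context:\n{context}\n\n"
            f"{transcript}\n"
            f"user: {user_msg}\n"
            f"assistant:"
        )


chat = RagChat(["Phi-2 is a 2.7B-parameter model from Microsoft."])
prompt = chat.build_prompt("How many parameters does Phi-2 have?")
```

In the article's setup, the prompt built here would be fed to a locally running Phi-2 instance, and the model's reply would be appended to `history` before the next turn.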

  10. Article · Community Picks

    Getting Started with Large Language Models: Key Things to Know

    An introductory guide to Large Language Models (LLMs) and their applications, including prompts, different types of LLMs, hallucination, and running LLMs on local machines. It also covers fine-tuning LLMs and optimization techniques.
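One of the guide's topics, prompting, is easy to make concrete: the difference between zero-shot and few-shot prompting is just whether worked examples are included before the input. This is a generic illustration, not code from the guide; the helper names are hypothetical, and the resulting string would be passed to any chat-completion API.

```python
# Hypothetical helpers showing zero-shot vs. few-shot prompt assembly.
def zero_shot(task: str, text: str) -> str:
    """Task instruction followed directly by the input."""
    return f"{task}\n\nInput: {text}\nOutput:"


def few_shot(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """Task instruction, worked input/output examples, then the input."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{shots}\nInput: {text}\nOutput:"


p = few_shot(
    "Classify sentiment as positive or negative.",
    [("Great food!", "positive"), ("Terrible service.", "negative")],
    "Loved the ambiance.",
)
```

Few-shot prompts like `p` often reduce hallucination on narrow tasks because the examples pin down the expected output format, one of the trade-offs the guide discusses.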