The post outlines how to run local AI models, including large language models (LLMs), on a MacBook Pro (M1 Max) for generating images, transcribing audio, and summarizing text. The author uses stable-diffusion.cpp for image generation, whisper.cpp for audio transcription, and llama.cpp for text summarization. Detailed scripts and timing measurements are provided for each use case.
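As a rough sketch of the three workflows, each project ships a command-line binary. The model file names and prompts below are placeholders, not the author's actual scripts; flags follow each project's README:

```shell
# stable-diffusion.cpp - text2image (binary: sd)
# -m: model weights, -p: prompt, -o: output image (placeholder paths)
./sd -m sd-v1-5.ckpt -p "a lighthouse at sunset" -o output.png

# whisper.cpp - wav2text (binary: whisper-cli, formerly main)
# -m: ggml model, -f: 16 kHz WAV input
./whisper-cli -m models/ggml-base.en.bin -f recording.wav

# llama.cpp - text2text (binary: llama-cli)
# -m: GGUF model, -p: prompt, -n: max tokens to generate
./llama-cli -m model.gguf -p "Summarize the following text: ..." -n 256
```

All three run fully offline on Apple Silicon; performance depends heavily on the chosen model size.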

5 min read · From qt.io
Table of contents

- stable-diffusion.cpp - text2image
- whisper.cpp - wav2text
- llama.cpp - text2text
- Conclusion
