A step-by-step coding challenge building a voice chatbot inside a p5.js sketch using three components: speech-to-text (OpenAI Whisper via Transformers.js), a chatbot brain (ranging from simple Eliza-style pattern matching to RiveScript to a small LLM), and text-to-speech (Kokoro TTS). The tutorial covers the Web Audio API for microphone capture, MediaRecorder for push-to-talk, audio buffer processing, WebGPU acceleration, and integrating Hugging Face models client-side in the browser. Multiple brain implementations are demonstrated, culminating in SmolLM2 with a system prompt and conversation history.
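The Eliza-style brain mentioned above can be illustrated with a minimal sketch (not code from the video): match the user's input against a list of regex rules and substitute the captured text into a canned reply. The rule set and `elizaReply` function here are hypothetical examples of the technique.

```javascript
// Minimal Eliza-style pattern matching: first matching rule wins,
// and "$1" in the reply is filled with the captured group.
const rules = [
  { pattern: /I need (.*)/i, reply: "Why do you need $1?" },
  { pattern: /I am (.*)/i, reply: "How long have you been $1?" },
  { pattern: /(.*)/i, reply: "Tell me more." }, // catch-all fallback
];

function elizaReply(input) {
  for (const { pattern, reply } of rules) {
    const match = input.match(pattern);
    if (match) {
      // Substitute the captured group, if the reply references it.
      return reply.replace("$1", match[1] ?? "");
    }
  }
}

console.log(elizaReply("I need a vacation")); // "Why do you need a vacation?"
```

In the tutorial's fuller versions, each rule would carry several candidate replies chosen at random; RiveScript formalizes the same idea in a dedicated scripting format.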
• 39m watch time