Google introduces a workflow for building motion-controlled games and interactive apps by combining Gemini AI with MediaPipe's real-time, on-device ML capabilities. Using Google AI Studio, developers can describe app ideas in natural language and generate functional web apps in minutes. The post showcases several example prompts, including a motion-controlled Chrome Dino clone, a hair recoloring app, a Squid Game-inspired Dalgona candy carver, and a multiplayer Red Light, Green Light game — all leveraging MediaPipe's pose, face, hand, and gesture detection. MediaPipe's on-device processing keeps latency near zero, which is critical for interactive experiences. Google also announces an upgrade to MediaPipe face detection that supports long-range distance tracking, and a new MediaPipe showcase gallery in AI Studio.
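As a minimal sketch of how a generated app might turn MediaPipe output into game input: MediaPipe's hand landmarker reports 21 landmarks per hand in normalized image coordinates (x and y in 0..1, with y = 0 at the top of the frame), and landmark index 0 is the wrist. The control function and its threshold below are assumptions for illustration, not code from the post.

```javascript
// Sketch: mapping a MediaPipe hand landmark to a "jump" control for a
// Dino-style game. The 21-point hand model and normalized coordinates
// (y = 0 at the top edge) follow MediaPipe's Tasks API; the function
// name and threshold are illustrative assumptions.
const WRIST = 0; // wrist is landmark 0 in MediaPipe's 21-point hand model

// Returns true when the wrist rises into the top third of the frame,
// i.e. the player raised their hand to make the dino jump.
function shouldJump(landmarks, threshold = 1 / 3) {
  const wrist = landmarks[WRIST];
  return wrist.y < threshold;
}

// A hand near the top of the frame triggers a jump; a lowered hand does not.
console.log(shouldJump([{ x: 0.5, y: 0.2 }])); // true
console.log(shouldJump([{ x: 0.5, y: 0.8 }])); // false
```

In a real app this function would be called each frame on the landmarks returned by the hand landmarker running in `VIDEO` mode on the webcam stream, which is what makes the on-device, low-latency processing matter.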
Table of contents

- Prompt naturally in AI Studio
  - Example 1: Chrome dino game
  - Example 2: Hair recoloring app
  - Iterative refinements
- Leverage the full power of MediaPipe
  - Hand landmarks: Six-seven your hands
  - Face landmarks: Bubble Gum Blow Challenge
  - Face landmarks: Dalgona candy
  - Gesture recognition: Gesture bubble match
  - Face detection: The Red Light, Green Light game
- Acknowledgements