Logan Kilpatrick, product lead for Google AI Studio, discusses the platform's evolution at Google Cloud Next. Key topics include the Build tab for vibe coding (going from prompt to deployed app in minutes) and new features such as design previews, 'tap tap tap' autocomplete, and 'yap to app' voice input. Logan shares how Google internally practices agentic engineering through a partnership model pairing vibe coders with senior engineers. He teases upcoming Gemini coding models, mobile AI Studio support, and on-device models via Gemma 4. The conversation also covers Gemini Live's real-time multimodal capabilities, the Deep Research API, long-running agents (moving from hours of autonomy to days or weeks), robotics as the next frontier, and the democratization of software creation for non-developers.