Grab's engineering team built a Mobile UI Testing AI Workflow for iOS that converts recorded user interactions into executable UI test code within 10–20 minutes. The system uses a local proxy to capture API calls, feature flag states, and user actions, then feeds this data to an AI assistant that generates three linked files: API mocks, feature flag configs, and an XCTest-based UI test class with analytics verification. The workflow integrates with existing test infrastructure (local mock server, instrumentation server, build system) without requiring new tooling. Key lessons include: AI output is a starting point requiring human review, clean single-flow recordings produce better results, and developers must add meaningful assertions and replace fixed delays with proper waits before committing to CI.
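To make the clean-up step concrete, here is a minimal sketch of the kind of XCTest UI test the workflow might emit after a developer has reviewed it: a fixed delay replaced with an explicit wait, and a meaningful assertion on visible state. All identifiers here (the test class, accessibility IDs, and launch arguments) are hypothetical illustrations, not the actual generated code.

```swift
import XCTest

// Hypothetical example of a reviewed, AI-generated UI test.
final class CheckoutFlowTests: XCTestCase {
    func testCheckoutShowsConfirmation() {
        let app = XCUIApplication()
        // Assumed launch arguments pointing the app at the local mock
        // server and the recorded feature flag configuration.
        app.launchArguments += ["-useMockServer", "-flagConfig", "recorded_flags.json"]
        app.launch()

        app.buttons["checkout_button"].tap()

        // Replace a fixed delay (e.g. sleep(5)) with an explicit wait
        // for the element the next step depends on.
        let confirmation = app.staticTexts["order_confirmed_label"]
        XCTAssertTrue(confirmation.waitForExistence(timeout: 10))

        // A meaningful assertion on state, not just "it didn't crash".
        XCTAssertEqual(confirmation.label, "Order confirmed")
    }
}
```

`waitForExistence(timeout:)` polls for the element and returns as soon as it appears, so tests run as fast as the app allows instead of always paying the worst-case fixed delay.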
Table of contents
Introduction
The problem: UI tests are expensive to write
The solution: From recording to an automatically AI-generated test
How we built it: Architecture overview
What gets generated
Enabling event verification
Lessons learned and best practices
Key takeaways
Join us