Doppel, an AI-native cybersecurity company, shares how migrating ML workflows to Modal eliminated infrastructure bottlenecks in both training and inference. For training, Modal's parallel execution (via simple map() constructs) replaced sequential experiment runs, dramatically shortening feedback loops and enabling K-fold cross-validation to run concurrently. Coding agents also leverage Modal's CLI to automate mechanical steps in the experimentation loop. For inference, Modal replaced a GCP Cloud Run + Docker + Flask stack, cutting build times by up to 10x with image layer caching, enabling automatic elastic scaling for unpredictable attack traffic spikes, and removing the HTTP service layer boilerplate around each model deployment.
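The training-side change described above is essentially a parallel map over cross-validation folds. Below is a minimal local sketch of that pattern using Python's standard-library thread pool in place of Modal's cloud-backed `map()`; the `train_fold` function and its scores are illustrative placeholders, not Doppel's actual training code.

```python
from concurrent.futures import ThreadPoolExecutor

K = 5  # number of cross-validation folds

def train_fold(fold: int) -> float:
    """Stand-in for one training run: holds out `fold` as the
    validation split and returns a validation score. Real training
    logic is elided; the score here is a deterministic placeholder."""
    return 0.9 - 0.01 * fold

def run_cv() -> list[float]:
    # Sequential version: one fold at a time, K times the wall-clock cost:
    #   scores = [train_fold(fold) for fold in range(K)]
    # Parallel version: all folds run concurrently. Modal's map() applies
    # the same fan-out shape, but across separate cloud containers.
    with ThreadPoolExecutor(max_workers=K) as pool:
        return list(pool.map(train_fold, range(K)))

if __name__ == "__main__":
    scores = run_cv()
    print(f"mean CV score: {sum(scores) / K:.3f}")
```

Because each fold is independent, the parallel version returns the same scores as the sequential loop while taking roughly the wall-clock time of the slowest single fold.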

From modal.com