Docker Model Runner now supports local image generation with Stable Diffusion, packaged in the DDUF (Diffusers Unified Format). The workflow is: pull a model with `docker model pull stable-diffusion`, launch Open WebUI with `docker model launch openwebui`, and configure the images endpoint in the UI. The system exposes an OpenAI-compatible API at `/engines/diffusers/v1/images/generations`, supporting parameters such as `negative_prompt`, `num_inference_steps`, `guidance_scale`, and `seed`. There is no cloud subscription and no data leaves your machine: everything runs locally with GPU acceleration (NVIDIA CUDA, Apple Silicon MPS) or CPU fallback. On first use, a FastAPI server is automatically installed into a self-contained Python environment, so no manual Python setup is required.
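As a rough sketch, a request to the generations endpoint might look like the Python below. The endpoint path and parameter names come from the article; the host and port, the model identifier, and the exact request-body layout are assumptions, so check your own Model Runner configuration before relying on them:

```python
# Hypothetical client for the Model Runner images endpoint.
# ASSUMPTIONS: the base URL and model name below are placeholders;
# only the endpoint path and parameter names come from the article.
import json
import urllib.request

BASE_URL = "http://localhost:12434"  # assumed Model Runner address


def build_request(prompt: str) -> dict:
    """Build a request body using the parameters the endpoint supports."""
    return {
        "model": "stable-diffusion",  # assumed model identifier
        "prompt": prompt,
        "negative_prompt": "blurry, low quality",
        "num_inference_steps": 25,
        "guidance_scale": 7.5,
        "seed": 42,  # fixed seed for reproducible output
    }


def generate(prompt: str) -> bytes:
    """POST the request to the OpenAI-compatible generations endpoint."""
    req = urllib.request.Request(
        BASE_URL + "/engines/diffusers/v1/images/generations",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Inspect the request body without hitting the server.
payload = build_request("a lighthouse at sunset, oil painting")
print(sorted(payload))
```

Because the API is OpenAI-compatible, the same request shape should also work through any OpenAI client pointed at the local base URL.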
Table of contents

- What You'll Need
- How Docker Model Runner Works with Open WebUI
- Step 1: Pull an Image Generation Model
- Step 2: Launch Open WebUI
- Step 3: Configure Open WebUI for Image Generation
- Step 4: Pull a Chat Model
- Step 5: Generate Your First Image
- Step 6: Generate Images Directly via the API
- Under the Hood: How the Diffusers Backend Works
- Troubleshooting
- Conclusion and What's Next