SingleStore engineers describe how they built a shared L4 routing tier to overcome cloud provider limits on Private Links across AWS, Azure, and GCP. Instead of provisioning a dedicated NLB per customer, they route all private link traffic through a single shared NLB and an Envoy proxy. A custom C++ WebAssembly plugin parses PROXY Protocol v2 headers to extract cloud-specific endpoint IDs (handling different byte formats per provider) and dynamically sets Envoy's TCPProxy filter state to route connections to the correct Kubernetes service. The team contributed a new `set_envoy_filter_state` foreign function upstream to Envoy to enable dynamic L4 routing from Wasm. Key challenges included memory leaks in the initial Go Wasm implementation (fixed by rewriting in C++), a security patch that broke TLV access in Wasm (worked around via a GitHub issue), and orchestrating graceful connection drains during Envoy pod restarts to avoid dropping long-lived database connections.
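As an illustration of the per-provider byte formats mentioned above, here is a minimal sketch of PPv2 TLV parsing. The TLV type codes (AWS `0xEA`, Azure `0xEE`, GCP `0xE0`) and subtype bytes follow the PROXY protocol v2 extensions each provider documents, but the byte-order assumptions and the normalized string prefixes are illustrative, not SingleStore's actual code:

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Provider-registered TLV type codes in the PPv2 header's TLV region.
constexpr uint8_t PP2_TYPE_AWS   = 0xEA;  // subtype 0x01: VPC endpoint ID (ASCII)
constexpr uint8_t PP2_TYPE_AZURE = 0xEE;  // subtype 0x01: private endpoint LINKID (uint32)
constexpr uint8_t PP2_TYPE_GCP   = 0xE0;  // 8-byte PSC connection ID

// Walk the TLV region (the bytes after the fixed PPv2 address block) and
// return a normalized endpoint identifier, or nullopt if none is found.
std::optional<std::string> ExtractEndpointId(const uint8_t* tlvs, size_t len) {
  size_t off = 0;
  while (off + 3 <= len) {
    uint8_t type = tlvs[off];
    // TLV length field is big-endian per the PROXY protocol v2 spec.
    uint16_t vlen = (uint16_t(tlvs[off + 1]) << 8) | tlvs[off + 2];
    off += 3;
    if (off + vlen > len) return std::nullopt;  // truncated TLV
    const uint8_t* v = tlvs + off;
    switch (type) {
      case PP2_TYPE_AWS:
        if (vlen >= 2 && v[0] == 0x01)  // PP2_SUBTYPE_AWS_VPCE_ID
          return std::string(reinterpret_cast<const char*>(v + 1), vlen - 1);
        break;
      case PP2_TYPE_AZURE:
        if (vlen == 5 && v[0] == 0x01) {  // LINKID; little-endian assumed here
          uint32_t id = uint32_t(v[1]) | (uint32_t(v[2]) << 8) |
                        (uint32_t(v[3]) << 16) | (uint32_t(v[4]) << 24);
          return "azure-linkid-" + std::to_string(id);
        }
        break;
      case PP2_TYPE_GCP:
        if (vlen == 8) {  // PSC connection ID; big-endian assumed here
          uint64_t id = 0;
          for (int i = 0; i < 8; ++i) id = (id << 8) | v[i];
          return "gcp-psc-" + std::to_string(id);
        }
        break;
    }
    off += vlen;  // skip unrecognized TLVs (CRC, NOOP padding, etc.)
  }
  return std::nullopt;
}
```

In the real plugin this extraction runs on each new downstream connection, and the resulting ID is handed to Envoy's TCPProxy via the `set_envoy_filter_state` foreign function; that host call is omitted here since it only exists inside a proxy-wasm runtime.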
Table of contents
- The Routing Mechanism: PROXY Protocol v2 (PPv2)
- Why Envoy and Wasm? (And Why We Ditched Golang)
- Upstreaming a Patch to Envoy
- The Envoy Pipeline and C++ Wasm Pseudo-code
- Control Plane Simplicity
- Operational Trade-offs: The Graceful Shutdown Problem
- Conclusion