Critical remote code execution vulnerabilities were discovered across major AI inference frameworks, including Meta's Llama Stack, Nvidia TensorRT-LLM, vLLM, and SGLang. The flaws originated in Meta's code from unsafe use of ZeroMQ combined with Python's pickle deserialization, and then spread to other projects through copy-paste development.
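The core hazard is that Python's `pickle` executes attacker-chosen callables during deserialization, so unpickling bytes received from an untrusted network peer (for example via pyzmq's `recv_pyobj()`, which calls `pickle.loads` on the received frame) amounts to remote code execution. A minimal sketch of the mechanism; the `Exploit` class name is illustrative, and `eval` stands in for the harmful call (such as `os.system`) an attacker would actually use:

```python
import pickle

class Exploit:
    # __reduce__ tells pickle how to reconstruct the object:
    # return a (callable, args) pair that pickle will invoke
    # during loads() -- the attacker picks both.
    def __reduce__(self):
        # Harmless stand-in; a real payload would call os.system etc.
        return (eval, ("6 * 7",))

# Attacker serializes the malicious object...
payload = pickle.dumps(Exploit())

# ...and the victim deserializes untrusted bytes. The callable
# runs here, before the application inspects the result at all.
result = pickle.loads(payload)
print(result)  # 42 -- proof the attacker's expression executed
```

The fix adopted in such cases is typically to authenticate the transport and replace pickle with a data-only format (JSON, msgpack) for anything that crosses a trust boundary.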
From infoworld.com