MCP and function calling are often framed as competing approaches, but they operate at different layers. Function calling is how an LLM expresses intent by returning structured tool requests, while MCP (Model Context Protocol) standardizes how those requests are executed across tools and providers through a client-server architecture built on JSON-RPC 2.0. Key differences include:

- Function calling embeds tool logic inside the application; MCP separates tools into independent servers with dynamic discovery.
- MCP eliminates the vendor lock-in caused by incompatible tool-call formats across OpenAI, Anthropic, Gemini, and Llama.
- MCP improves credential isolation by scoping secrets to individual servers, reducing the blast radius of a compromise.

However, MCP adds latency and operational complexity, so plain function calling remains preferable for small systems with few tools. Most production systems adopt a hybrid model. Security risks in MCP deployments are real: with 43% of sampled servers containing injection vulnerabilities, centralized governance is essential at scale.
Table of contents
- What function calling and MCP actually do
- How the architecture differs under the hood
- The vendor lock-in problem across OpenAI, Anthropic, Gemini, and Llama
- When MCP is worth the overhead
- Security, credentials, and governing MCP at scale
- Choosing function calling, MCP, or both
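To make the layering concrete, here is a minimal sketch contrasting the two payloads. The function-call shape follows the public OpenAI chat-completions format and the MCP request follows the spec's `tools/call` method, but the tool name (`get_weather`) and its arguments are hypothetical, as is the `to_mcp` translation helper:

```python
import json

# 1. Function calling: the model expresses intent as a structured tool
#    request embedded in its response; the application must dispatch it.
#    Note the provider-specific quirk: arguments arrive as a JSON string.
openai_tool_call = {
    "id": "call_1",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": json.dumps({"city": "Berlin"}),
    },
}

# 2. MCP: the same intent becomes a JSON-RPC 2.0 request sent to an
#    independent tool server, in one format regardless of which model
#    (OpenAI, Anthropic, Gemini, Llama) produced the intent.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},  # plain JSON object, not a string
    },
}

def to_mcp(tool_call: dict, request_id: int) -> dict:
    """Translate a function-call intent into an MCP tools/call request.

    This is the adapter step a hybrid system performs: the model-specific
    intent format is normalized before being routed to an MCP server.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_call["function"]["name"],
            "arguments": json.loads(tool_call["function"]["arguments"]),
        },
    }

assert to_mcp(openai_tool_call, 1) == mcp_request
```

The point of the sketch is the separation of layers: the first payload is how the model *asks* for a tool, the second is how the request is *executed*; only the adapter needs to change when you swap providers.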