LLMs are text-prediction engines with no native ability to interact with external systems. Tool use and function calling solve this by letting models generate structured requests that an application layer executes. OpenAI formalized function calling in mid-2023, but each provider implemented it differently, creating an N×M integration problem. Anthropic introduced the Model Context Protocol (MCP) as an open standard to reduce this to N+M integrations via a client-server architecture. MCP was rapidly adopted by OpenAI, Google, and others, and was donated to the Linux Foundation-backed Agentic AI Foundation in late 2025. Key tradeoffs include security risks (supply chain attacks, evolving auth specs), token overhead from tool definitions, and the continued need for validation and human approval in production systems.
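The core mechanic described above — the model emits a structured request, and the application layer (not the model) executes it — can be sketched as follows. This is a minimal illustration, not any provider's actual API: the `get_weather` tool, its stubbed return value, and the simulated model output are all hypothetical.

```python
import json

# Hypothetical tool the application layer exposes. The LLM never runs
# this code; it only names the tool and supplies arguments as text.
def get_weather(city: str) -> dict:
    # Stub standing in for a real weather API call.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

# A function call from the model is just structured text (typically JSON).
# Here we simulate the model's output instead of calling a real API.
model_output = json.dumps({"name": "get_weather", "arguments": {"city": "Paris"}})

call = json.loads(model_output)
fn = TOOLS[call["name"]]            # application looks up the requested tool
result = fn(**call["arguments"])    # application executes it and returns the result
print(result)                       # result would be fed back to the model
```

In production, the dispatch step is where validation and human approval belong: the arguments are model-generated text and should be checked before execution.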
Table of contents
Why LLMs Cannot Act on Their Own
How Tool Use and Function Calling Work
The Model Context Protocol (MCP)
The Costs and Tradeoffs of Tool Use
Conclusion