JetBrains ran controlled experiments giving coding agents IDE-native search tools (file search, text search, regex, symbol lookup) via a unified MCP tool, comparing results against shell-based grep/find. The prebundled search skill reduced both latency and cost without degrading quality. Codex adopted the new tool heavily (91% of searches), while Claude models used it less due to existing built-in search capabilities. Results held across Java and Kotlin codebases and multiple models. The winning configuration is being integrated into the IntelliJ IDEA MCP Server and will be enabled by default in upcoming AI Assistant plugin updates.
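To make the comparison concrete, here is a minimal sketch of what a unified search tool looks like from the agent's side: one entry point that dispatches to file-name, text, regex, and symbol lookups over the IDE's indexes. All names (`SearchMode`, `unifiedSearch`, the toy in-memory project) are illustrative assumptions, not JetBrains' actual MCP tool API.

```kotlin
// Hypothetical sketch of a unified IDE search tool: one entry point
// dispatching to four search modes, as the article describes.
// The in-memory map stands in for the IDE's project indexes.

enum class SearchMode { FILE, TEXT, REGEX, SYMBOL }

data class SearchHit(val path: String, val line: Int, val preview: String)

val project = mapOf(
    "src/Main.kt" to listOf("fun main() {", "    greet()", "}"),
    "src/Greet.kt" to listOf("fun greet() = println(\"hi\")")
)

fun unifiedSearch(mode: SearchMode, query: String): List<SearchHit> =
    when (mode) {
        // File search: match against file paths, no line context.
        SearchMode.FILE -> project.keys
            .filter { it.contains(query) }
            .map { SearchHit(it, 0, "") }
        // Text search: literal substring match, line by line.
        SearchMode.TEXT -> project.flatMap { (path, lines) ->
            lines.withIndex()
                .filter { (_, l) -> l.contains(query) }
                .map { (i, l) -> SearchHit(path, i + 1, l.trim()) }
        }
        // Regex search: same shape, pattern match instead of substring.
        SearchMode.REGEX -> {
            val re = Regex(query)
            project.flatMap { (path, lines) ->
                lines.withIndex()
                    .filter { (_, l) -> re.containsMatchIn(l) }
                    .map { (i, l) -> SearchHit(path, i + 1, l.trim()) }
            }
        }
        // Symbol lookup: crude stand-in that only finds function
        // declarations; a real IDE consults its symbol index.
        SearchMode.SYMBOL -> {
            val decl = Regex("""fun\s+(\w+)""")
            project.flatMap { (path, lines) ->
                lines.withIndex().mapNotNull { (i, l) ->
                    decl.find(l)?.takeIf { it.groupValues[1] == query }
                        ?.let { SearchHit(path, i + 1, l.trim()) }
                }
            }
        }
    }
```

The point of the single tool is that the agent picks a mode instead of composing `grep`/`find` pipelines in a shell, which is what the experiment measures.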
Table of contents
- IDE-native search reduced latency, cost, and budget overruns.
- Why We Built This
- Methodology: Same tasks, same grading, one controlled difference.
- Results: Absolute metrics moved in the right direction. The difference is visible in the agent's path through the project.
- Configuration Explorer: The selected option had the best latency while preserving cost reduction.
- Cross-Model Validation: The effect held beyond the original run.
- How Models Adopt Tooling: Models do not use new tools at the same rate.
- What's Next