AI coding tools are increasing PR volume while introducing a distinct error profile: unused constructs, hardcoded values, and security vulnerabilities appear more often than in human-written code. Research shows reviewers apply less scrutiny to AI-generated code, and DORA data links rising AI adoption to a 7.2% drop in delivery stability for every 25% increase in adoption. Around 20–25% of AI code hallucinations can be caught by automated static analysis before a PR is even raised. The post argues that engineering leaders should ensure at least one IDE in every developer's workflow runs deep, whole-project structural analysis (it cites JetBrains IDEs and Qodana as tools that do this) to protect reviewer capacity for the errors only humans can catch.
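The mechanical errors named above, unused constructs and hardcoded values, are exactly what automated static analysis catches cheaply before review. As a minimal sketch of the idea (not the post's actual tooling), here is a toy checker built on Python's `ast` module; the `find_issues` function and the `SECRET_HINTS` heuristic are illustrative assumptions:

```python
import ast

# Heuristic name fragments that suggest a hardcoded credential (assumed list).
SECRET_HINTS = ("password", "secret", "token", "api_key")

def find_issues(source: str) -> list[str]:
    """Toy static check: flag unused imports and hardcoded secret-like values."""
    tree = ast.parse(source)
    imported: dict[str, int] = {}
    used: set[str] = set()
    issues: list[str] = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[alias.asname or alias.name.split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
            used.add(node.id)
        elif (isinstance(node, ast.Assign)
              and isinstance(node.value, ast.Constant)
              and isinstance(node.value.value, str)):
            # A string literal assigned to a secret-looking name.
            for target in node.targets:
                if isinstance(target, ast.Name) and any(
                        hint in target.id.lower() for hint in SECRET_HINTS):
                    issues.append(f"line {node.lineno}: hardcoded value in '{target.id}'")
    # Imports never read anywhere in the module.
    for name, lineno in imported.items():
        if name not in used:
            issues.append(f"line {lineno}: unused import '{name}'")
    return issues

sample = "import os\nimport json\nAPI_KEY = 'sk-123'\nprint(json.dumps({}))\n"
for issue in find_issues(sample):
    print(issue)
# → line 3: hardcoded value in 'API_KEY'
# → line 1: unused import 'os'
```

Real analyzers do far more (type-aware data flow, cross-file inspection), but even this toy shows why such errors never need to reach a human reviewer.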

6 min read · From blog.jetbrains.com
Table of contents
- Code review is a decision process – AI just added more decisions
- AI is sending a different kind of code to review
- Have machines catch what machines can
- Put “no-excuses” structural analysis before the pipeline
