Best of Anthropic, February 2026

  1. Article
    Konrad Psiuk · 12w

    100% of code written by AI

    Anthropic claims 100% of its code is now AI-written, a figure that is technically achievable when AI assists with even trivial tasks like variable renaming. However, the claim raises questions about token efficiency and whether AI-generated code introduces unnecessary complexity and cognitive overhead for developers.

  2. Article
    TechCentral · 12w

    AI won’t replace software, says Nvidia CEO amid market rout

    Nvidia CEO Jensen Huang rejected concerns that AI will replace traditional software and tools, calling such fears "illogical" during a market selloff affecting global software stocks. He argued that AI systems will continue using existing software tools rather than rebuilding them from scratch, pointing to recent AI breakthroughs in tool use as evidence. The comments came as software stocks in India, Japan, and China experienced significant declines, partly triggered by Anthropic's recent chatbot release that heightened disruption fears in data and professional services sectors.

  3. Article
    Claude · 10w

    Claude Enterprise, now available self-serve

    Claude Enterprise is now available for self-serve purchase with a seat-plus-usage pricing model. The offering provides organization-wide access to Claude, Claude Code, and Cowork with enterprise security features including SSO, SCIM provisioning, audit logs, custom data retention policies, and usage analytics. It integrates with Microsoft 365, Slack, Excel, and PowerPoint through connectors and built-in chat sidebars. Organizations use it across sales, engineering, marketing, product, and finance teams to accelerate workflows and handle complex tasks with large codebases and document sets.

  4. Article
    Security Boulevard · 9w

    Anthropic Didn’t Kill Cybersecurity. It Just Reminded Us There Are Two Doors.

    Anthropic's Claude Code Security announcement triggered a sharp selloff in cybersecurity stocks, with companies like Okta, SailPoint, and CrowdStrike dropping significantly. The panic was misplaced: AI-powered code scanning addresses only one of two primary attack vectors — software vulnerabilities. The second and equally significant vector — identity theft, credential abuse, phishing, and social engineering — remains entirely untouched by code scanning tools. Identity-focused companies like Okta and SailPoint don't compete with Claude Code Security at all; they solve a structurally different problem. The identity attack surface is durable because it stems from architectural patterns and human behavior, not patchable bugs. Analysts from Barclays and Jefferies called the selloff illogical, and the security industry's own data (Verizon DBIR, MITRE ATT&CK) consistently shows credentials and human manipulation as dominant breach vectors.

  5. Article
    sean goedecke · 10w

    LLM-generated skills work, if you generate them afterwards

    LLM-generated "skills" (explanatory prompts for specific tasks) work better when created after solving a problem rather than before. A recent paper found that pre-generated skills provide no benefit because they bake in incorrect assumptions from training data. The effective approach is to have the LLM solve the problem through iteration first, then distill that learned experience into a reusable skill document. This captures knowledge gained from millions of tokens of problem-solving rather than just regurgitating existing training data.
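    The solve-first, distill-second workflow can be sketched in a few lines. This is a minimal illustration, not the paper's method: `call_llm` is a hypothetical stand-in for any LLM client, and the success check is a placeholder.

    ```python
    # Sketch of post-hoc skill distillation. `call_llm` is a hypothetical
    # stand-in for a real LLM API; here it returns canned strings so the
    # control flow can be demonstrated without network access.

    def call_llm(prompt: str) -> str:
        """Stand-in for an LLM call; returns canned responses for illustration."""
        if prompt.startswith("Distill"):
            return "SKILL: pin the library version before running the migration."
        return "Attempt: tried the default flags, failed; pinned the version, succeeded."

    def solve_then_distill(task: str, max_iters: int = 3) -> str:
        """Let the model solve the task iteratively, THEN distill a skill doc.

        The skill is generated only after the problem is solved, so it captures
        experience from the actual attempts rather than baked-in assumptions
        from training data (the failure mode of pre-generated skills).
        """
        transcript: list[str] = []
        for _ in range(max_iters):
            attempt = call_llm(f"Task: {task}\nPrior attempts:\n" + "\n".join(transcript))
            transcript.append(attempt)
            if "succeeded" in attempt:  # placeholder success check
                break
        # Only now do we ask for the reusable skill, grounded in the transcript.
        return call_llm("Distill these attempts into a reusable skill:\n" + "\n".join(transcript))

    skill = solve_then_distill("migrate the build to the new toolchain")
    print(skill)  # prints the distilled skill line
    ```

    The key design point is the ordering: the distillation prompt sees the full problem-solving transcript, so the resulting skill document encodes what actually worked rather than what the model assumed beforehand.
    
    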

  6. Article
    Security Boulevard · 8w

    MY TAKE: The Pentagon punished Anthropic for red lines it accepted from OpenAI hours later

    President Trump ordered federal agencies to stop using Anthropic's AI, and Defense Secretary Hegseth labeled the company a national security supply-chain risk. Anthropic's offense was refusing to remove contract clauses prohibiting Claude's use for mass domestic surveillance or fully autonomous weapons. Within hours, OpenAI announced a deal to replace Claude on Pentagon classified networks, with Sam Altman claiming OpenAI holds the same red lines — yet the Pentagon accepted those terms from OpenAI while blacklisting Anthropic for identical ones. The author frames this as the latest in a decade-long pattern: from NSA metadata collection to the Apple-FBI iPhone dispute to now AI model behavioral boundaries, the government's surveillance choke point keeps migrating closer to the layer where judgment and language live. Each cycle compresses faster, leaving less time for public deliberation or governance.