Best of AI · November 2025

  1. The Register · 25w

    Linus Torvalds: Vibe coding is fine, but not for production

    Linus Torvalds shares his perspective on AI-assisted coding, stating that while vibe coding can help newcomers get started with programming, it's unsuitable for production code due to maintenance concerns. He discusses Rust's gradual integration into the Linux kernel, noting it has taken longer than expected but is becoming a real part of the codebase. Torvalds addresses AI's impact on kernel development, mentioning issues with crawlers disrupting infrastructure and AI-generated bug reports, though these problems are less severe than in other projects. He compares AI to previous productivity tools like compilers, suggesting it won't eliminate programming jobs but will change how developers work.

  2. iO tech_hub · 25w

    ChatGPT as My Coding Mentor: How I Learned React and Next.js as a Junior Developer

    A junior developer shares their experience using ChatGPT to learn React and Next.js from scratch. The key breakthrough came from learning to prompt effectively by asking for explanations 'like I'm 5' and providing context about experience level. The developer progressed from not understanding basic hooks like useState to confidently building full-stack Next.js applications in two months by having focused conversations, requesting simple analogies first, and building knowledge progressively rather than asking generic questions.

  3. TechCrunch · 25w

    Hugging Face CEO says we’re in an ‘LLM bubble,’ not an ‘AI bubble’

    Hugging Face CEO Clem Delangue argues the tech industry is experiencing an LLM bubble rather than a broader AI bubble, predicting it may burst soon. He believes the current focus on large, general-purpose language models is misplaced, and that smaller, specialized models will dominate the future for specific use cases like banking chatbots. While competitors spend billions on LLM infrastructure, Hugging Face maintains a capital-efficient approach with half of its $400 million funding still in reserve, positioning itself for long-term sustainability across the diversified AI landscape.

  4. Product Hunt · 26w

    Reindeer: Cursor for databases

    Reindeer is an AI-powered IDE for database work that understands database schemas and generates production-ready SQL queries. It features autocomplete for complex SQL, automatic query fixing, and schema-aware assistance to streamline debugging and query optimization workflows without switching between tools.

  5. Atomic Spin · 25w

    Go on an AI Detox

    A developer shares their experience of temporarily disabling AI coding assistants like Cursor and Copilot to rediscover fundamental programming skills. After an initial productivity dip, they found themselves more engaged with the code, with a better understanding of the codebase and improved problem-solving abilities. The experiment revealed how AI convenience can make developers rusty in core skills like keyboard navigation, Vim commands, and deep code comprehension. The author now uses AI more deliberately, emphasizing the value of periodic detoxes to maintain sharp development instincts and genuine understanding rather than passive reliance on AI suggestions.

  6. DEV · 26w

    I use AI when I code. And sometimes it makes me feel like I’m cheating.

    Using AI coding assistants can trigger feelings of guilt and imposter syndrome, as if the work doesn't count without manual struggle. The author reflects on how AI removes friction between ideas and implementation, arguing that the real value lies in creativity, decision-making, and what gets built—not the keystrokes. The piece validates developers who feel conflicted about AI assistance, reframing it as a tool that amplifies existing capabilities rather than diminishing them.

  7. Justin Searls · 25w

    TDD is more important than ever

    Test-driven development skills are becoming critical for working effectively with AI coding agents. Developers experienced in TDD are both the most skeptical of AI code generation and the most successful at using it, because they understand how to build verification into workflows. AI agents, like human developers, need independent ways to verify their work—without verification, they resort to guessing, which compounds errors rapidly. The ability to establish automated testing and verification mechanisms, once a hallmark of agile practices, is now essential for enabling AI agents to produce reliable code through reinforcement learning.
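The verification loop the article advocates can be sketched with a toy example (the `slugify` function and its tests are invented for illustration, not taken from the article): the tests exist first and act as the independent verifier that keeps generated code, human or AI, from being merely eyeballed.

```python
# A minimal red-green TDD sketch: the test is written first and acts as an
# independent check -- the same verification an AI agent could run on its output.

def slugify(title: str) -> str:
    """Turn an article title into a URL slug (written to satisfy the tests below)."""
    cleaned = "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ")
    return "-".join(cleaned.split())

def test_slugify():
    # These assertions existed before the implementation; any candidate code
    # must pass them rather than be judged by inspection.
    assert slugify("TDD is more important than ever") == "tdd-is-more-important-than-ever"
    assert slugify("  Hello,   World!  ") == "hello-world"

test_slugify()
```

With a check like this in the loop, an agent that produces a wrong implementation gets a failing assertion instead of an opportunity to guess again unchecked.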

  8. The Art of Simplicity · 24w

    VS Code Planning mode

    VS Code now includes Planning mode (also called 'Hannibal mode'), which extends GitHub Copilot's Agent Mode to handle multi-step coding tasks with a structured approach. Unlike Visual Studio's implementation, VS Code offers Planning mode as a separate chat agent, giving developers explicit control over when it's activated. The feature creates detailed execution plans before generating code, allows plan editing, and includes follow-up questions to refine the approach before implementation begins.

  9. David Heinemeier Hansson · 24w

    Local LLMs are how nerds now justify a big computer they don't need

    Local LLMs, while technically impressive, still lag significantly behind cloud-based frontier models for practical development work. Despite the hype around running AI models locally, most developers don't actually need expensive high-RAM machines. Budget mini PCs costing around $500 can handle typical development tasks just as well as premium $2,000+ workstations, especially when running Linux. This is fortunate timing given the current spike in RAM prices driven by AI's resource demands.

  10. Storybook · 26w

    Storybook MCP sneak peek

    Storybook MCP is a new integration that provides AI coding agents with machine-readable component metadata from your Storybook setup. It helps agents generate higher quality code by giving them access to your existing component patterns, usage examples, and types. The system includes a self-healing loop that runs component tests and allows agents to fix their own bugs autonomously. Benchmarks show 3× faster code generation with 50% fewer tokens while maintaining quality standards. Early access begins December 2 for teams with mature React design systems.

  11. Marmelab · 26w

    Spec-Driven Development: The Waterfall Strikes Back

    Spec-Driven Development frameworks like Kiro and Spec-kit generate extensive Markdown documentation before coding, echoing Waterfall methodology. While promising structure for AI coding agents, this approach creates context blindness, excessive documentation review, and diminishing returns on large codebases. The author argues for Natural Language Development instead: an iterative, Agile-inspired approach where developers give coding agents simple, incremental instructions without formal specifications, enabling faster convergence toward working products.

  12. Simon Willison · 25w

    Olmo 3 is a fully open LLM

    Ai2 released Olmo 3, a fully open LLM series that includes complete training data, process, and checkpoints. The flagship 32B Think model emphasizes interpretability with visible reasoning traces through OlmoTrace. Trained on 5.9 trillion tokens from the Dolma 3 Mix dataset (6x fewer tokens than competitors), it offers four 7B variants and two 32B models. The release enables auditing training data to detect potential backdoors, addressing security concerns in open-weight models. Performance testing shows improved SVG generation compared to Olmo 2, though OlmoTrace's training data attribution needs refinement.

  13. Product Hunt · 27w

    Termdock: Terminal-centric AI development environment

    Termdock unifies terminal management, Git visualization, and AI tools in a single interface. It supports multi-workspace layouts with up to 4 windows plus picture-in-picture mode for monitoring Docker, Redis, logs, and tests simultaneously. Features include AST-based symbol search using Tree-sitter for instant navigation, drag-and-paste image support with automatic compression, built-in file tree, and prompt libraries for streamlined development workflows.

  14. Google Developers · 25w

    Announcing the Genkit Extension for Gemini CLI

    Google launched the Genkit Extension for Gemini CLI, enabling developers to build and debug AI applications directly from the terminal. The extension provides context-aware assistance through tools like flow execution, documentation lookup, and OpenTelemetry trace analysis. It integrates Genkit's MCP server with specialized context files to give Gemini CLI comprehensive understanding of Genkit's architecture and best practices. Developers can install it via a single command and immediately access language-specific guidance, run flows for debugging, and receive tailored code suggestions that follow Genkit patterns.

  15. The Developing Dev · 25w

    What Would You Automate if It Was Free?

    LLM-powered code generation tools have made automation practically free, changing the cost-benefit calculation for repetitive tasks. The author shares practical examples of using AI to generate scripts for podcast transcript processing, video file stitching with ffmpeg, and converting notes to Markdown. These tools enable automation of even one-off tasks that previously weren't worth the manual effort, fundamentally changing how developers approach small, repetitive work.
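A one-off script of the kind the author describes, stitching video clips with ffmpeg's concat demuxer, might look like this (file names are invented; the ffmpeg invocation uses the standard `-f concat -safe 0 -i list -c copy` form):

```python
# Illustrative throwaway automation: concatenate video files with ffmpeg.
# The clip and output names are placeholders for the example.
import tempfile

def build_concat_command(clips: list[str], output: str) -> tuple[str, list[str]]:
    """Write a concat list file and return (list_path, ffmpeg argv)."""
    listing = "\n".join(f"file '{c}'" for c in clips)
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(listing)
        list_path = f.name
    cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
           "-i", list_path, "-c", "copy", output]
    return list_path, cmd

list_path, cmd = build_concat_command(["part1.mp4", "part2.mp4"], "stitched.mp4")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on PATH):
# import subprocess; subprocess.run(cmd, check=True)
```

The point of the article is that asking an LLM to produce a script like this now costs seconds, so even a task done once is worth automating.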

  16. The Register · 25w

    Vibe coding: What is it good for? Absolutely nothing

    Vibe coding—using AI to generate code from natural language prompts—promises fast results without specialist knowledge, but falls short in practice. Unlike low-code platforms, AI code generation is non-deterministic, producing inconsistent results for identical prompts and creating unmaintainable codebases. While it offers accessibility similar to early BASIC interpreters, it lacks the learning pathway that builds genuine understanding. The author argues that vibe coding fails to provide the motivation, comprehension, and dopamine-driven discovery that fuel effective learning, and that traditional mentorship and tutorials remain superior for developing real coding skills.

  17. Lobsters · 27w

    AI's 70% Problem — Zed's Blog

    Addy Osmani from Google's Chrome team discusses the "70% problem" in AI coding: while AI tools can rapidly generate 70% of a solution, the remaining 30% involving edge cases, security, and production integration remains as challenging as ever. Despite over 30% of Google's code being AI-generated, trust in AI-generated code has declined from 70% to 60% in two years. The talk covers common pitfalls like the "two steps back" pattern where AI fixes create new problems, the reality that productivity gains are modest (1-2x) compared to hype, and code review becoming a new bottleneck. Osmani emphasizes that developers must understand and take responsibility for AI-generated code, especially junior developers who should use AI as a learning aid while maintaining curiosity.

  18. All Things Distributed · 24w

    Tech predictions for 2026 and beyond

    Five major technology predictions for 2026 and beyond: companion robots will combat the global loneliness epidemic through emotional AI, particularly for elderly and pediatric care; developers will evolve into renaissance polymaths who combine technical skills with domain expertise as AI handles code generation; quantum computing advances are compressing security timelines, requiring immediate post-quantum cryptography deployment across infrastructure; defense technology innovation cycles are accelerating from decades to years, with dual-use systems reaching civilian applications faster; and AI-powered personalized education will democratize one-on-one tutoring at scale, adapting to individual learning styles while freeing teachers from administrative tasks.

  19. UX Planet · 27w

    Scenario-based AI Chatbots for Language Learning

    Scenario-based AI chatbots are transforming language learning by providing contextual, real-world practice environments that reduce anxiety and improve fluency. Companies like Duolingo, Mondly, and Babbel leverage GPT-4 and LLMs to create adaptive conversations simulating authentic situations like ordering coffee or navigating airports. Research shows chatbot-assisted learning produces significant positive effects (g = 0.484) compared to traditional methods, with key success factors including adaptive complexity, authentic contexts, immediate feedback, and multimodal engagement. The approach addresses the gap between classroom knowledge and real-world conversation by building neural pathways connecting words to situations, while creating judgment-free practice spaces that reduce speaking anxiety and build learner confidence.

  20. PostgreSQL · 25w

    pg_ai_query — AI-powered SQL generation & query analysis for PostgreSQL

    pg_ai_query is a new PostgreSQL extension that enables AI-powered SQL query generation and analysis directly within Postgres. Developers can generate SQL queries from natural language descriptions and get AI-assisted explanations of query execution plans. The extension supports PostgreSQL 14+ and aims to streamline query development by eliminating the need to switch between tools.

  21. Vibe Coding · 25w

    I'm so productive but multi tasking is exhausting

    A developer shares their experience with AI-assisted coding, noting increased productivity but also mental exhaustion from constant context switching and cognitive load. They describe a cycle where AI tools create idle time that gets filled with more AI usage, leading to high brain utilization despite getting more work done.

  22. Towards Data Science · 27w

    We Didn’t Invent Attention — We Just Rediscovered It

    Attention mechanisms in AI transformers aren't novel inventions but rediscoveries of fundamental optimization principles. The same mathematical pattern—selective amplification combined with normalization—emerges independently across evolution (500+ million years of neural systems), chemistry (autocatalytic reactions), and AI (gradient descent). This convergence suggests attention represents a universal solution to information processing under energy constraints. Reframing attention as amplification rather than selection offers practical insights for improving AI architectures: decoupling amplification from normalization, exploring non-content-based amplification, implementing local normalization pools, and designing systems that operate at critical dynamics for optimal information processing.
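The "amplification plus normalization" pattern, in its AI incarnation, is just scaled dot-product attention; a plain NumPy sketch (not from the article) makes the two halves visible:

```python
# Scaled dot-product attention, decomposed into the two steps the article
# identifies: selective amplification (similarity scores, exponentiated)
# and normalization (weights compete for a fixed budget that sums to 1).
import numpy as np

def softmax_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # amplification: similarity
    weights = np.exp(scores)                        # exponential amplification
    weights /= weights.sum(axis=-1, keepdims=True)  # normalization: fixed budget
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
out, w = softmax_attention(Q, K, V)
assert np.allclose(w.sum(axis=-1), 1.0)  # each query's weights sum to 1
```

The article's proposals, such as decoupling amplification from normalization or using local normalization pools, amount to varying these two lines independently.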

  23. ByteByteGo · 27w

    How Uber Built a Conversational AI Agent For Financial Analysis

    Uber built Finch, a conversational AI agent that enables finance teams to query financial data using natural language directly in Slack. The system translates questions into SQL queries, retrieves data from curated single-table data marts, and returns results in seconds. Finch uses a modular architecture with specialized agents orchestrated by LangGraph, OpenSearch for semantic mapping, and role-based access controls for security. The system includes continuous evaluation against golden queries, performance optimizations through parallel processing and pre-fetching, and plans to expand with deeper FinTech integration and human-in-the-loop validation for executive decisions.
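Finch's core move, turning a natural-language finance question into SQL over a curated single-table mart, can be caricatured in a few lines. This is a toy stand-in, not Uber's implementation: the table, columns, and keyword lookup are invented, where the real system uses an LLM plus OpenSearch for semantic mapping.

```python
# Toy sketch of natural-language-to-SQL over a single curated data mart.
# All names (finance_mart, gross_bookings_usd, ...) are hypothetical.
def question_to_sql(question: str) -> str:
    q = question.lower()
    # In Finch, an LLM plus OpenSearch semantic lookup does this mapping;
    # a keyword table stands in for both here.
    metrics = {
        "gross bookings": "gross_bookings_usd",
        "revenue": "net_revenue_usd",
    }
    for phrase, column in metrics.items():
        if phrase in q:
            return (f"SELECT region, SUM({column}) AS total "
                    f"FROM finance_mart GROUP BY region")
    raise ValueError("unrecognized question; a real agent would ask a follow-up")

print(question_to_sql("What were gross bookings by region last quarter?"))
```

Restricting generation to pre-vetted single-table marts is what keeps the generated SQL simple enough to validate against golden queries.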

  24. LogRocket · 27w

    A Jarvis for everyone: AI agents as new interfaces

    AI agents powered by the Model Context Protocol (MCP) are transforming user interfaces from traditional screen-based interactions to conversational, context-aware systems. This shift requires developers to rethink frontend architecture, moving from designing static components to crafting intelligent workflows that agents can interpret. The article explores how multi-channel, multi-capability frameworks enable Jarvis-like assistants to seamlessly handle tasks across platforms, the design patterns needed for agent-first interfaces, and the challenges around reliability, privacy, and user trust that teams must address when building these systems.

  25. Barion · 24w

    I have never seen good AI code (challenge)

    A developer challenges the community to prove that LLM-generated code can meet professional standards for production use. They're specifically looking for examples that demonstrate good balance between readability, maintainability, reusability, simplicity, and performance, with preference for immutable and declarative design patterns. The author uses AI daily but claims to have never encountered AI-generated code worthy of inclusion in long-term projects.