Best of Testing · December 2025

  1. Article
    TypeScript.TV · 18w

    Stop Using TypeScript's Exclamation Mark

    The non-null assertion operator (!) in TypeScript bypasses type safety by forcing the compiler to treat potentially nullable values as non-null, leading to runtime crashes. Instead of using this operator, developers should employ safer alternatives: optional chaining for nested property access, nullish coalescing for default values, conditional operators for explicit branching, type guards for reusable validation, and assertion functions for enforcing invariants. These approaches maintain type safety while handling null and undefined values appropriately, following fail-fast principles and preventing silent failures.
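    The alternatives listed above can be sketched in a few lines of TypeScript. The `User` shape and function names below are hypothetical illustrations, not taken from the article:

    ```typescript
    // Hypothetical User shape to illustrate the alternatives; the article
    // itself does not define these names.
    interface User {
      name: string;
      address?: { city?: string };
    }

    // Optional chaining walks the nested properties safely, and nullish
    // coalescing supplies a default instead of asserting with `!`.
    function cityOf(user: User | null | undefined): string {
      return user?.address?.city ?? "unknown";
    }

    // A type guard makes the null check reusable and visible to the compiler.
    function isUser(value: User | null | undefined): value is User {
      return value != null;
    }

    // An assertion function enforces the invariant and fails fast with a
    // clear error instead of crashing later on an undefined property.
    function assertUser(value: User | null | undefined): asserts value is User {
      if (value == null) {
        throw new Error("Expected a User, got " + String(value));
      }
    }

    const maybe: User | null = { name: "Ada", address: { city: "Paris" } };
    console.log(cityOf(maybe)); // "Paris"
    console.log(cityOf(null));  // "unknown", a default rather than a crash
    if (isUser(maybe)) {
      console.log(maybe.name);  // narrowed to User inside this branch
    }
    ```

    Each of these keeps the compiler informed about nullability, whereas `user!.address!.city` silently discards that information.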

  2. Article
    Elevate · 17w

    My LLM coding workflow going into 2026

    A comprehensive guide to using LLM coding assistants effectively in 2026. Key practices include starting with detailed specifications before coding, breaking work into small iterative chunks, providing extensive context to the AI, choosing appropriate models for different tasks, maintaining human oversight through testing and code review, committing frequently for version control safety, customizing AI behavior with rules and examples, leveraging automation as quality gates, and treating AI as a force multiplier rather than replacement. The workflow emphasizes treating LLMs as junior pair programmers requiring guidance while maintaining developer accountability for all code produced.

  3. Article
    thoughtbot · 17w

    Testing is software engineering

    Testing should be treated as a core engineering practice, not an afterthought. Writing tests early improves code design by providing immediate feedback on interfaces, enables confident refactoring through reliable CI checks, and facilitates collaboration by uncovering edge cases and serving as living documentation. Tests that are hard to write often signal poorly designed code. Integrating testing throughout development leads to better software design, reduced stress when making changes, and earlier discovery of hidden requirements.

  4. Article
    Simon Willison · 17w

    Your job is to deliver code you have proven to work

    Software engineers must deliver proven, working code rather than untested contributions. This requires both manual testing (seeing the code work yourself, documenting steps, testing edge cases) and automated testing (bundling tests with changes). With AI coding agents like Claude Code, developers should train these tools to prove their changes work through testing before submission. The human developer remains accountable for ensuring code quality and providing evidence that changes function correctly.

  5. Video
    Google for Developers · 18w

    3 skills every early-career engineer needs

    Early-career software engineers should focus on three fundamental skills: writing clean, maintainable code with meaningful names and simple logic; developing a quality mindset through comprehensive testing that prevents regressions and enables confident refactoring; and mastering essential tools like version control, debugging techniques, documentation practices, and communication skills. These foundational practices create long-term career success more effectively than chasing the latest frameworks.

  6. Article
    Read the Tea Leaves · 16w

    How I use AI agents to write code

    A developer shares practical strategies for using AI coding agents effectively after transitioning from skepticism to adoption. Key recommendations include creating comprehensive CLAUDE.md files for project context, using automated tests as feedback loops, running separate AI sessions for code review to catch bugs, and leveraging agents for overnight work on side projects. The author acknowledges AI's limitations with UI work and novel projects, describes the shift toward an architect-like role focused on specs and review, but maintains reservations about using AI for open-source contributions due to ownership concerns.

  7. Article
    Advanced Web Machinery · 15w

    Why I prefer multi-tenant systems

    Multi-tenant architecture offers significant advantages even for single-customer systems. Key benefits include parallelizable integration testing through tenant isolation, production monitoring that mimics real user behavior without affecting actual clients, safe demo environments for sales and developers, and flexibility for future business evolution. The complexity overhead can be minimized using database-level features like PostgreSQL's row-level security, which centralizes tenant filtering rather than requiring it in every query.
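    The centralization idea can be sketched in application code. This is a minimal TypeScript sketch with hypothetical names: one scoped handle applies the tenant filter in a single place, the same guarantee the article gets at the database level from a row-level security policy:

    ```typescript
    // A single shared "table" for all tenants, as in a multi-tenant database.
    type Row = { tenantId: string; value: string };
    const table: Row[] = [];

    // One scoped handle applies the tenant filter in one place, so no
    // individual query can forget it, much like an RLS policy would.
    function forTenant(tenantId: string) {
      return {
        insert(value: string): void {
          table.push({ tenantId, value });
        },
        list(): string[] {
          return table
            .filter((row) => row.tenantId === tenantId)
            .map((row) => row.value);
        },
      };
    }

    const acme = forTenant("acme");
    const demo = forTenant("demo"); // a safe demo tenant next to real data
    acme.insert("invoice-1");
    demo.insert("sample-invoice");
    console.log(acme.list()); // only acme's rows, never demo's
    ```

    Doing this in the database with PostgreSQL row-level security, as the article suggests, gives the same property without trusting every code path to go through the scoped handle.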

  8. Article
    Neon · 15w

    Stop Mocking Auth (It’s Breaking Your Tests)

    Mocking authentication in tests creates false confidence by skipping critical failure points like password verification, database constraints, and session management. Real auth testing is traditionally difficult due to shared state and slow database provisioning. Database branching offers a solution by creating isolated, copy-on-write database instances with separate auth endpoints for each test run, enabling fast, isolated testing against real authentication flows without test collisions or production data pollution.

  9. Article
    Bun · 18w

    Bun v1.3.4

    Bun v1.3.4 fixes 194 issues and introduces several new features. The release adds URLPattern API for declarative URL matching, fake timers for the test runner to control time in tests, and custom proxy headers in fetch requests. It fixes a critical http.Agent connection pooling bug that prevented connection reuse. Standalone executables now skip loading config files at runtime by default for better performance. The update includes SQLite 3.51.1, console.log %j format specifier support, and numerous bugfixes across testing, bundler, Windows compatibility, Node.js APIs, and FFI.

  10. Article
    Product Hunt · 18w

    BrowserBook: The Browser Automation IDE

    BrowserBook is an AI-powered IDE that combines a Jupyter-style notebook interface with an inline browser and context-aware coding assistant for building Playwright-based browser automations. It addresses common issues with browser agents (cost, speed, reliability, debugging) by shifting AI assistance to the coding phase rather than execution. Key features include interactive browser testing, notebook-style cell execution, DOM-aware code suggestions, built-in authentication management, screenshot tools, data extraction helpers, and API deployment capabilities for production use.

  11. Article
    On Test Automation · 17w

    Less, but better

    The pursuit of 'faster' and 'more' in test automation, especially with AI tools, often overshadows the more important goal of 'better'. Writing more tests faster doesn't automatically improve product quality or feedback value. Low-value tests written solely for coverage metrics can become dead weight. The focus should shift from speed and quantity to genuinely improving the quality of testing work and the products delivered. AI's potential lies not in generating more artifacts faster, but in enhancing the quality of software development and testing practices.

  12. Article
    Anton Zhiyanov · 17w

    Detecting goroutine leaks in modern Go

    Go 1.24-1.26 introduces new tools for detecting goroutine leaks: the synctest package for testing and the goleakprofile profile for production monitoring. The article explains common leak patterns including unclosed channels, double sends, early returns, and orphaned workers, demonstrating how to detect each using synctest and pprof. The goleakprofile uses garbage collector marking to identify permanently blocked goroutines by checking if they're waiting on unreachable synchronization objects. Both tools provide detailed stack traces showing exactly where leaks occur, making it significantly easier to catch concurrency bugs during development and in production systems.

  13. Article
    Software Testing Magazine · 19w

    Open Source Test Management Tools

    A curated list of open source test management tools for organizing, tracking, and executing software tests. The collection includes platforms like AgileTC, Cherry, Kiwi TCMS, TestLink, and others, covering features such as test case management, integration with CI/CD tools, bug tracking systems, and collaboration capabilities. Each tool offers different approaches to test planning, execution tracking, and result analysis, with varying levels of integration with popular development tools like Jenkins, JIRA, and Selenium.