Best of Performance, December 2025

  1. Article · TypeScript · 19w

    Progress on TypeScript 7

    The TypeScript team provides a major update on TypeScript 7.0 (Project Corsa), their native-code rewrite of the compiler and language service. The native preview is now stable enough for daily use, featuring 10x faster builds through parallelism, complete editor support including auto-imports and refactoring, and high type-checking compatibility with existing versions. TypeScript 6.0 will be the final JavaScript-based release, serving as a bridge to 7.0 with deprecations such as dropping ES5 support and enabling strict mode by default. The native preview is available today via a VS Code extension and an npm package, though some features, such as the full emit pipeline and watch mode, still need refinement.

  2. Article · Next.js · 17w

    Next.js 16.1

    Next.js 16.1 brings Turbopack file system caching to development mode by default, delivering up to 14× faster compile times when restarting the dev server. The release includes an experimental bundle analyzer for optimizing production bundles, simplified debugging with `next dev --inspect`, and improved handling of transitive external dependencies. Additional improvements include 20MB smaller installs, a new `next upgrade` command, and better async import bundling in Turbopack.

  3. Article · LogRocket · 19w

    Stop using JavaScript to solve CSS problems

    Modern CSS features like content-visibility, container queries, and scroll-driven animations now handle tasks developers traditionally solved with JavaScript. Content-visibility provides native virtualization without libraries like react-window, container queries enable responsive design based on parent containers rather than viewport width, and scroll-driven animations run on the compositor thread for better performance. While JavaScript remains necessary for truly infinite lists, precise measurements, and dynamic layouts, most common use cases benefit from CSS-first solutions with simpler code and better performance.

  4. Article · Web Performance Calendar · 16w

    Revisiting HTML streaming for modern web performance

    HTML streaming allows servers to send HTML progressively rather than in one chunk, enabling browsers to render content as it arrives. HTMS is an experimental project that extends basic streaming with progressive placeholders that can be updated asynchronously within a single HTTP response. This approach delivers early First Contentful Paint, maintains SEO-friendly complete HTML documents, and achieves strong Lighthouse scores without client-side hydration. The technique works best combined with SSR, SSG, or tools like HTMX, though it introduces constraints around error handling once streaming begins and requires careful layout planning.
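
    The core pattern can be sketched in a few lines of TypeScript: the server flushes a shell containing a placeholder first, then a later chunk in the same response carries the resolved content plus a small inline script that swaps it into place. This illustrates the general progressive-placeholder idea only, not HTMS's actual wire format; all element IDs and names here are invented.

```typescript
// Sketch: emit an HTML document as ordered chunks. The first chunk renders
// immediately with a placeholder; the second arrives later in the same
// response and fills the placeholder in, with no client-side hydration step.
function streamChunks(slowContent: string): string[] {
  return [
    // Chunk 1: head plus visible shell, paintable as soon as it arrives.
    `<!doctype html><html><head><title>Demo</title></head><body>` +
      `<h1>Products</h1><div id="ph">Loading…</div>`,
    // Chunk 2: flushed once the slow data is ready; an inline script
    // swaps it into the placeholder and the document closes normally.
    `<template id="ph-content">${slowContent}</template>` +
      `<script>document.getElementById("ph").replaceChildren(` +
      `document.getElementById("ph-content").content)</script>` +
      `</body></html>`,
  ];
}
```

    In a real server these chunks would be written with `res.write()` as each becomes available, which is what lets First Contentful Paint happen before the slow data resolves.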

  5. Article · Viget · 16w

    Fixing TypeScript Performance Problems: A Case Study

    A TypeScript monorepo was experiencing severe performance issues, with 6+ minute build times and sluggish IntelliSense. Using compiler diagnostics, trace analysis, and the @typescript/analyze-trace tool, the team identified that kysely helper functions with complex type inference were causing 80-second type-checking bottlenecks. By inlining these queries, removing circular dependencies, eliminating barrel files, and cleaning up unused types, they reduced build time by 79% (from 6.2 to 1.3 minutes), cut memory usage in half, and restored responsive editor performance.

  6. Article · Hacker News · 17w

    Avoid UUID Version 4 Primary Keys

    UUID Version 4 primary keys cause significant performance problems in PostgreSQL due to their random nature. Random values trigger excessive index page splits during inserts, create fragmented indexes with poor density (~79% vs ~98% for integers), and require accessing 31,000% more buffer pages for queries. The randomness prevents efficient B-Tree index operations and degrades cache hit ratios. Time-ordered alternatives like UUID Version 7 perform better by including timestamps in the first 48 bits. For most applications, integer or bigint primary keys backed by sequences remain the optimal choice, offering better performance, smaller storage footprint (4-8 bytes vs 16 bytes), and natural ordering. When obfuscation is needed, pseudo-random codes can be generated from integers using XOR operations and base62 encoding.
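
    The obfuscation approach mentioned at the end can be sketched as follows. This is an illustrative TypeScript version of the XOR-plus-base62 idea, not code from the article; the secret constant is a made-up example, and a real deployment would choose its own key and keep it server-side.

```typescript
// Keep sequential integer keys internally; expose a pseudo-random-looking
// public code derived by XOR-ing with a fixed secret and base62-encoding.
const ALPHABET =
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
const SECRET = 0x5deece61; // illustrative fixed key, not from the article

function encodeId(id: number): string {
  let n = (id ^ SECRET) >>> 0; // XOR, kept as an unsigned 32-bit value
  let out = "";
  do {
    out = ALPHABET[n % 62] + out; // base62 digits, most significant first
    n = Math.floor(n / 62);
  } while (n > 0);
  return out;
}

function decodeCode(code: string): number {
  let n = 0;
  for (const ch of code) n = n * 62 + ALPHABET.indexOf(ch);
  return (n ^ SECRET) >>> 0; // undo the XOR to recover the integer key
}
```

    Adjacent IDs map to unrelated-looking codes, while the database keeps the insert-friendly, densely packed B-Tree that sequences provide.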

  7. Article · Smashing Magazine · 19w

    Masonry: Things You Won’t Need A Library For Anymore

    CSS Masonry layout is coming to browsers as a native feature, eliminating the need for JavaScript libraries like Masonry.js. The article explores how modern web platform features—including popovers, dialogs, container queries, and anchor positioning—can replace third-party dependencies, resulting in better performance, smaller bundle sizes, and simpler code. Built-in Masonry offers significant advantages over JavaScript solutions: no render-blocking scripts, faster responsiveness, and familiar CSS syntax similar to Grid and Flexbox. The piece also provides resources for tracking new web features and ways developers can influence browser vendors' priorities through surveys and the Interop project.

  8. Article · React Native · 18w

    React 19.2, New DevTools features, no breaking changes

    React Native 0.83 is released with React 19.2, introducing the Activity component and useEffectEvent hook. Major DevTools improvements include new Network and Performance panels, plus a standalone desktop app that no longer requires Chrome or Edge. The release adds stable Web Performance APIs and experimental Intersection Observer support. This is the first React Native release with no breaking changes, making upgrades from 0.82 seamless. Additional features include Hermes V1 performance improvements, iOS legacy architecture removal option, and precompiled binary debugging capabilities.

  9. Article · Zalando · 17w

    The Day Our Own Queries DoS’ed Us: Inside Zalando Search

    Zalando's Search & Browse team experienced a self-inflicted DoS attack when an internal application sent resource-intensive faceting queries on high-cardinality fields to their Elasticsearch cluster. The incident caused search slowdowns and empty results for customers. The team mitigated by splitting markets across clusters, implementing load shedding, and eventually traced the issue to a maintenance workload bug generating 50x normal query volume. Key lessons included improving per-client monitoring with X-Opaque-Id headers, implementing query-level rate limiting, adding aggregation size controls, and recognizing that performance issues can stem from unexpected sources rather than common causes.

  10. Article · Sutter’s Mill · 15w

    Software taketh away faster than hardware giveth: Why C++ programmers keep growing fast despite competition, safety, and AI

    C++ and Rust are the fastest-growing major programming languages from 2022 to 2025, driven by computing demand consistently outpacing hardware supply. Power and chips are the two biggest constraints on computing growth, making performance-per-watt efficiency critical. C++ continues evolving with C++26 adding significant security improvements including bounds-checking in the standard library, elimination of undefined behavior from uninitialized variables, and functional safety via contracts. Despite concerns about safety and AI replacing programmers, the global developer population grew 50% to 47 million, with C++ adding roughly as many developers in one year as Rust has total worldwide. AI serves as an accelerator tool rather than replacement, with major tech companies continuing to aggressively hire human programmers.

  11. Article · Phoronix · 15w

    Unexpected Surprise: Windows 11 Outperforming Linux On An Intel Arrow Lake H Laptop

    Benchmark testing on a Lenovo ThinkPad P1 Gen 8 with Intel Arrow Lake H processor reveals Windows 11 outperforming Ubuntu Linux in multiple workloads, contradicting years of consistent results showing Linux performance advantages. The unexpected findings persisted across different kernel versions and power management configurations, with Lenovo and Intel teams confirming the hardware is working as expected. This marks a potential shift in the traditional Windows vs. Linux performance landscape, though it's unclear if this is isolated to this specific laptop model or represents a broader trend with newer hardware.

  12. Article · LogRocket · 17w

    Angular vs. React vs. Vue.js: A performance guide for 2026

    Angular 20, React 19.2, and Vue 3.5 have converged around signals-based reactivity, compiler-driven optimizations, and improved hydration strategies. Angular offers zoneless architecture with 20-30% runtime gains and enterprise-grade structure. React provides the largest ecosystem with automatic batching and compiler-assisted memoization. Vue delivers the smallest bundle size at 20KB with fine-grained reactivity and Vapor Mode previews. Performance differences are narrowing as all three frameworks adopt similar architectural patterns around reactivity, edge rendering, and build tooling. Framework choice now depends more on team size, ecosystem needs, and architectural preferences than raw performance metrics.
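
    What "signals-based reactivity" means mechanically can be shown with a toy implementation: reads register the current subscriber, and writes notify only the computations that actually read the value. This is a conceptual sketch in TypeScript, not the real API of Angular's `signal()`, Vue's `ref()`, or React's compiler output.

```typescript
// A minimal signal/computed pair illustrating fine-grained dependency tracking.
type Subscriber = () => void;
let activeSub: Subscriber | null = null;

function signal<T>(value: T) {
  const subs = new Set<Subscriber>();
  return {
    get(): T {
      if (activeSub) subs.add(activeSub); // track whoever is reading us
      return value;
    },
    set(next: T) {
      value = next;
      subs.forEach((s) => s()); // re-run only actual readers, not the whole tree
    },
  };
}

function computed<T>(fn: () => T) {
  let cached: T;
  const recompute: Subscriber = () => {
    cached = fn();
  };
  activeSub = recompute; // dependencies are registered during the first run
  recompute();
  activeSub = null;
  return { get: () => cached };
}
```

    The payoff is that updates touch exactly the dependents of a changed value, which is the property all three frameworks now exploit instead of broad re-render passes.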

  13. Article · CYBERTEC PostgreSQL · 17w

    Comparing stats! PostgreSQL 18 against 17

    PostgreSQL 18 introduces several new statistics columns for performance monitoring. The pg_stat_all_tables view adds four time-tracking columns for operations. VACUUM/ANALYZE now reports WAL, CPU, and read statistics. The pg_stat_io view gains three new byte-level I/O columns (read_bytes, write_bytes, extend_bytes) while removing the generic op_bytes column. Additionally, pg_stat_statements now tracks parallel worker activity with two new columns for launched and planned parallel workers.

  14. Article · DuckDB · 18w

    Announcing DuckDB 1.4.3 LTS

    DuckDB 1.4.3 LTS is now available with important bugfixes addressing correctness issues in HAVING clauses, JOIN operations, and indexed table updates. The release introduces beta support for Windows ARM64, including native extension distribution and Python wheels via PyPI. Benchmarks on TPC-H SF100 show 24% performance improvement for native ARM64 compared to emulated AMD64 on Snapdragon-based systems. Additional fixes include race condition crashes, memory management improvements during WAL replay, and various edge cases in Unicode handling and Parquet exports.

  15. Article · The Miners · 18w

    Surviving the RAM Squeeze: Efficiency Tips for JavaScript Developers

    Memory is becoming more expensive as AI data centers drive up demand. JavaScript developers can optimize memory usage by using efficient search methods like find() instead of filter(), avoiding chained array methods that create intermediate arrays, mutating intermediate values safely in reduce callbacks, and using iterators for lazy evaluation. These patterns reduce memory allocations while improving performance and battery life.
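
    Two of the article's patterns can be sketched in TypeScript: short-circuiting with `find()` instead of materializing a filtered array, and streaming through a generator instead of chaining `map`/`filter` (each chained step allocates a full intermediate array). The sample data and thresholds are invented for illustration.

```typescript
const orders = Array.from({ length: 10_000 }, (_, i) => ({ id: i, total: i * 2 }));

// 1) orders.filter(f)[0] scans everything and allocates the whole match set;
//    find() stops at the first match and allocates nothing extra.
const firstBig = orders.find((o) => o.total > 100);

// 2) orders.map(...).filter(...) would build two 10,000-element arrays.
//    A generator computes each value lazily and yields one at a time.
function* bigTotals(src: Iterable<{ total: number }>) {
  for (const o of src) {
    const t = o.total * 1.2; // the "map" step, computed on demand
    if (t > 100) yield t;    // the "filter" step, no intermediate array
  }
}

const firstThree: number[] = [];
for (const t of bigTotals(orders)) {
  firstThree.push(t);
  if (firstThree.length === 3) break; // only three values ever materialized
}
```

    The generator version also composes: stacking more transforms adds no per-step allocations, only a longer lazy pipeline.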

  16. Article · Neon · 19w

    Improving DNS performance with NodeLocalDNS

    Neon deployed NodeLocalDNS across their Kubernetes clusters to optimize DNS performance for hundreds of thousands of ephemeral Postgres databases. By caching DNS requests locally on each node instead of routing them to central CoreDNS pods, they achieved an 84% reduction in 99th percentile latency and 87% improvement in 99.9th percentile latency. The deployment reduced network DNS traffic by 97% (from 2k to 60 requests/s), made traffic scale with nodes rather than pods, and unexpectedly helped identify DNS misconfigurations. The implementation required careful sequencing to avoid race conditions between kube-proxy and the DaemonSet, particularly on nodes with slow iptables rule installation.

  17. Article · Platformatic · 17w

    Node.js CPU and Heap Profiling with Shareable Flame Graphs

    Watt Admin 1.0.0 introduces Recording Mode for Node.js applications running on Platformatic Watt. The feature enables capturing complete performance sessions with CPU and heap profiling, generating interactive flame graphs, and packaging everything into a single offline HTML file. It collects comprehensive metrics including memory usage, CPU utilization, event loop health, HTTP performance, and connection pool statistics. Developers can record sessions during specific scenarios, analyze bottlenecks through flame graphs, and share the self-contained HTML bundles with team members without requiring any setup. The tool uses V8's built-in profilers and stores data in pprof format for industry-standard analysis.

  18. Article · Foojay.io · 19w

    Java 25: What’s New?

    Java 25 is the new Long-Term Support release featuring 18 JEPs. Key additions include PEM format support for cryptographic objects, Stable Values API for lazy initialization, JFR enhancements with CPU-time profiling and method timing/tracing, and improved AOT capabilities with method profiling. Several preview features graduate to standard including Scoped Values, Module Import Declarations, and Compact Source Files. The release removes 32-bit x86 support and includes performance optimizations like improved String hashcode handling and new security algorithms.

  19. Video · YouTube · 16w

    Why I Stopped Using Next.js (And What I Switched To Instead)

    A developer shares their decision to migrate an interactive coding platform away from Next.js to TanStack Start. The main issues cited include extremely slow development-mode performance (especially during video rendering), the complexity of the server components mental model, and frequent bugs with interactive features. The platform's highly interactive nature (code editors, live previews, chat) makes client-side rendering more suitable. TanStack Start was chosen due to familiarity with TanStack Query and Router, despite being in release-candidate status.

  20. Article · Cloudflare · 18w

    Python Workers redux: fast cold starts, packages, and a uv-first workflow

    Cloudflare Python Workers now support any Pyodide-compatible package with significantly faster cold starts than competitors. Using memory snapshots and WebAssembly architecture, Workers achieve 2.4x faster cold starts than AWS Lambda and 3x faster than Google Cloud Run when loading common packages. The platform integrates with uv for package management through pywrangler tooling, enabling easy deployment of Python applications globally. Technical innovations include memory snapshot restoration, careful entropy handling for randomness, and function pointer table management to eliminate Python initialization overhead during cold starts.

  21. Article · LogRocket · 19w

    TanStack DB 0.5 Query-Driven Sync: Loading data will never be the same

    TanStack DB 0.5 introduces Query-Driven Sync, a feature that eliminates API sprawl by transforming client-side queries into precise network requests. Instead of creating multiple backend endpoints, developers define queries directly in components, and TanStack DB automatically generates the appropriate API calls. The feature offers three sync modes: Eager (loads entire dataset upfront), On-demand (fetches only requested data using predicate mapping), and Progressive (loads initial batch immediately while syncing remaining data in background). Query-Driven Sync optimizes performance through request deduplication, delta fetching, and intelligent joins, making it particularly effective when paired with sync engines like Electric or PowerSync for real-time data synchronization.

  22. Article · Salesforce Engineering · 19w

    How Agentforce Achieved 3–5x Faster Response Times

    Salesforce's Forward Deployed Engineering team optimized Agentforce for a multi-brand retailer by separating deterministic logic from LLM reasoning, moving hierarchical processing from prompts to Apex code. They consolidated multi-stage LLM calls into single passes and optimized Data 360 retrieval, reducing end-to-end latency by 75% (approximately 20 seconds). The team chose a multi-agent architecture over a unified model, enabling brand-specific conversational experiences while maintaining a shared foundation that accelerated subsequent brand deployments by 5x.

  23. Article · Metadata · 15w

    Rethinking the Cost of Distributed Caches for Datacenter Services

    Distributed caching in datacenters provides 3-4x better cost efficiency primarily by reducing CPU usage rather than just improving latency. Application-level caches that store fully materialized objects deliver far better cost savings than storage-layer caches by eliminating query amplification and coordination overhead. The approach works best for rich-object workloads but struggles with strong consistency requirements, as freshness checks traverse most of the database stack and erase cost benefits. Cache placement matters more than cache size for cost optimization.
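
    The "fully materialized objects" point can be made concrete with a small TypeScript sketch: assembling one rich object fans out into several backing lookups (query amplification), so caching the finished object removes all of that work on a hit, not just the latency. The data shapes and names below are invented for illustration.

```typescript
// Count backing lookups to make the amplification visible.
let backingLookups = 0;
const lookup = (table: string, id: number): string => {
  backingLookups++; // stands in for one storage-layer query
  return `${table}:${id}`;
};

function materializeProfile(id: number) {
  // One logical read fans out into three storage reads.
  return {
    user: lookup("users", id),
    prefs: lookup("prefs", id),
    follows: lookup("follows", id),
  };
}

const cache = new Map<number, ReturnType<typeof materializeProfile>>();
function getProfile(id: number) {
  let p = cache.get(id);
  if (!p) {
    p = materializeProfile(id);
    cache.set(id, p); // store the finished object, not the raw rows
  }
  return p;
}
```

    A storage-layer cache would still pay the three-way fan-out and the assembly CPU on every request; caching at the application layer is what converts a hit into near-zero work, which is the cost argument the article makes.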

  24. Article · Reinier · 19w

    The ONE Tool That Makes Claude Code Lightning Fast

    React Grab is a developer tool that accelerates AI-assisted coding with Claude by 55% by providing direct locations of React elements, eliminating file searching. Created by Aiden Bai, it reduces token usage and costs but relies on React internals, making it risky for production use. The tool should only be used in development environments due to potential security vulnerabilities and breaking changes in React's internal architecture.

  25. Article · DEVCLASS · 18w

    AWS shows Rust love at re:Invent: 10 times faster than Kotlin, one tenth the latency of Go

    AWS now uses Rust by default for data plane projects after finding it significantly faster than Kotlin and Go. Aurora DSQL saw 10x performance improvement when rewritten from Kotlin to Rust. Datadog reduced Lambda cold start times from 700-800ms to 80ms by migrating from Go to Rust, with their observability agent running nearly 3x faster overall. The performance gains stem from Rust avoiding garbage collection overhead, which consumed 30% of execution time in Go code handling many small memory allocations. AWS Lambda now offers general availability for Rust functions using an OS-only runtime.