Best of Python
October 2025

  1. Article · Learn Python · 31w

    Hi, I’m new to Python — please help me!

    A beginner seeks recommendations for free online resources to learn Python as a foundation for studying AI. They're looking for courses and practice platforms that don't require payment.

  2. Article · Hacker News · 31w

    microsoft/amplifier

    Microsoft released Amplifier, an experimental development environment that enhances AI coding assistants with 20+ specialized agents, a knowledge extraction system, parallel worktree workflows, and automatic conversation transcript preservation. The tool provides pre-loaded patterns, context management, and automation to transform AI assistants into more capable development partners. It requires Python 3.11+, UV, Node.js, and works primarily in WSL2, though it's explicitly marked as early-stage research software with no stability guarantees or official support.

  3. Article · Planet Python · 31w

    Spotlight on pdfly, the Swiss Army knife for PDF files

    pdfly is a Python-based CLI tool for PDF manipulation, offering features like metadata display, page extraction and merging, document compression, image extraction, text extraction, and PDF signing. The newly released version 0.5.0 adds digital signature verification, annotated page extraction, and page rotation capabilities. Built on fpdf2 and pypdf libraries, it provides a comprehensive solution for common PDF operations through simple command-line interfaces.

  4. Video · Nick Chapsas · 32w

    Why Startups Don't Use .NET and C#

    Explores why startups favor JavaScript, TypeScript, and Python over .NET/C# despite modern improvements. Key factors include persistent stigma around .NET, faster MVP development with JS/Python ecosystems, easier hiring due to larger talent pools, and better AI tooling support. While .NET offers strong performance and complexity management for mature products, startups prioritize speed-to-market and product validation over code quality. The author, running a .NET-based startup, recommends choosing .NET only if teams already know it, as learning curves and hiring challenges outweigh benefits for early-stage companies. Microsoft's recent organizational shift placing .NET under AI platforms signals deeper AI integration coming to the ecosystem.

  5. Article · Learn Python · 29w

    How can I ACTUALLY start on coding?

    A beginner programmer seeks guidance on how to effectively start learning to code, specifically with Python and Java. They're struggling with confusing lessons, unclear next steps after completing tutorials, and practical implementation challenges like adding UI features to a basic calculator project.

  6. Article · Hacker News · 29w

    character-ai/Ovi

    Ovi is an open-source audio-video generation model that simultaneously creates synchronized 5-second videos and audio from text or text+image inputs. The 11B parameter model supports flexible resolutions (720×720 to 960×960), multiple aspect ratios, and includes a custom-trained 5B audio branch. It offers inference options for single or multi-GPU setups, includes memory optimization features like fp8 quantization and CPU offloading for 24GB GPUs, and provides integration with Gradio UI and ComfyUI. The model is based on research from Character AI and builds upon Wan2.2 for video and MMAudio for audio processing.

  7. Article · Hacker News · 29w

    apple/pico-banana-400k

    Apple released Pico-Banana-400K, a dataset containing approximately 400,000 text-image-edit triplets for training text-guided image editing models. The dataset includes 257K single-turn examples, 56K preference learning samples, and 72K multi-turn conversations, covering 35 edit operations across 8 semantic categories. Built using Gemini-2.5-Flash for instruction generation and the Nano-Banana model for editing, each edit undergoes automated quality evaluation. Source images come from Open Images, with edits spanning object manipulation, scene composition, stylistic changes, and photometric adjustments. The dataset is available under CC BY-NC-ND 4.0 license for non-commercial research use.

  8. Article · ByteByteGo · 32w

    How OpenAI Uses Kubernetes And Apache Kafka for GenAI

    OpenAI built a stream processing platform using Apache Flink (PyFlink) on Kubernetes to handle real-time data for AI model training and experimentation. The architecture addresses three key challenges: providing Python-first APIs for ML practitioners, handling cloud capacity constraints, and managing multi-primary Kafka clusters. The system features a control plane for multi-cluster failover, per-namespace isolation in Kubernetes, watchdog services for Kafka topology monitoring, and decoupled state management using RocksDB with highly available blob storage. Custom Kafka connectors enable reading from multiple primary clusters simultaneously while maintaining resilience during outages.

  9. Article · Platformatic · 32w

    Integrate Python ASGI with Node.js Apps

    Platformatic releases @platformatic/python, enabling Python ASGI applications to run directly inside Node.js processes. The integration eliminates network overhead by using interprocess communication instead of HTTP calls between services. Built on a Rust bridge layer with the http-handler crate, it provides seamless request/response translation between Node.js and Python. Developers can embed FastAPI, Django, or custom ASGI apps alongside Node.js workloads, particularly useful for AI/ML inference, real-time data processing, and gradual migration scenarios. Benchmarks show competitive performance with traditional ASGI servers while offering microsecond-level latency through in-process architecture.
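The ASGI contract that such a bridge hooks into is small: an async callable taking a scope plus receive/send channels. A minimal hand-written ASGI app, with no framework, looks like this:

```python
# A minimal ASGI application: any framework like FastAPI or Django
# ultimately reduces to an async callable with this shape.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello from Python"})
```

Anything exposing this callable can be served by a standard ASGI server, which is what makes embedding the Python side behind a host process practical.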

  10. Article · Daily Dose of Data Science (Avi Chawla, Substack) · 28w

    Every LangGraph User We know is Making the Same Mistake!

    The supervisor pattern in LangGraph has a fundamental limitation: it routes queries to only one specialized agent at a time, failing when users ask multi-topic questions. An alternative approach using dynamic guideline matching (implemented in the open-source Parlant framework) loads multiple relevant guidelines simultaneously into context, enabling coherent responses across topics. While LangGraph excels at workflow automation, Parlant is designed for free-form conversations, and both can work together complementarily.
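The contrast can be illustrated with a toy sketch (plain Python, not LangGraph or Parlant code): a supervisor commits to one agent per query, while guideline matching collects every guideline relevant to the query.

```python
# Toy illustration of the two routing styles described above.
GUIDELINES = {
    "billing": "Explain charges and refunds.",
    "shipping": "Give delivery estimates.",
}

def supervisor_route(query: str) -> str:
    # Supervisor pattern: pick the single best-matching agent,
    # silently dropping any other topics in the query.
    for topic in GUIDELINES:
        if topic in query:
            return topic
    return "fallback"

def match_guidelines(query: str) -> list[str]:
    # Dynamic guideline matching: load ALL relevant guidelines
    # into context at once.
    return [g for topic, g in GUIDELINES.items() if topic in query]
```

For a query touching both billing and shipping, the supervisor returns only one topic, while the matcher surfaces both guidelines, which is the failure mode the article is pointing at.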

  11. Video · YouTube · 29w

    Coding in 2026: What No One Tells You

    Modern coding education has fundamentally changed with AI integration. Learning now involves combining traditional fundamentals like Python syntax, data structures, algorithms, and system design with AI-powered coding assistants and LLMs. Recommended approach includes structured courses that incorporate AI workflows, mastering prompt engineering using the rule of three (subject, end result, context), gradually progressing from basic LLMs to advanced tools like Cursor and Augment, and hands-on practice with cloud platforms like AWS and Azure for system design.

  12. Article · Hacker News · 32w

    Who needs git when you have 1M context windows?

    A developer accidentally lost code that improved their machine learning model by 5% after refactoring without committing changes. Unable to reproduce the results, they discovered that Gemini 2.5 Pro's 1M token context window had retained the original code from their development session, allowing them to recover the lost improvements through a simple prompt.

  13. Article · Real Python · 32w

Python 3.14 Preview: Better Syntax Error Messages

    Python 3.14 introduces ten improvements to error messages that make debugging more intuitive. The enhancements provide specific guidance for common mistakes including keyword typos, misplaced elif blocks, conditional expression errors, string closure issues, incompatible string prefixes, unpacking mismatches, invalid import targets, unhashable type usage, math domain violations, and async context manager confusion. Each improved message follows a pattern of identifying the mistake, explaining the issue clearly, and suggesting likely fixes.
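Take the unterminated-string case as an example: the snippet below fails to compile on any recent CPython, and 3.14's message is more specific about where the string was left open (exact wording varies by version).

```python
# A string literal that is opened but never closed.
snippet = 'greeting = "hello\nprint(greeting)\n'

try:
    compile(snippet, "<example>", "exec")
except SyntaxError as exc:
    # Recent CPython reports something like:
    # "unterminated string literal (detected at line 1)"
    print(f"line {exc.lineno}: {exc.msg}")
```

Using `compile()` on a string like this is a quick way to compare error messages across interpreter versions without creating throwaway files.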

  14. Article · LangChain · 28w

    Introducing DeepAgents CLI

    DeepAgents CLI is a new command-line tool for building AI agents with persistent memory that can code, research, and execute tasks. The tool supports file operations, shell command execution with approval, web search, API requests, and cross-session memory retention. Agents store knowledge in local memory files and follow a memory-first protocol to recall information across sessions. Users can create multiple specialized agents for different projects, with the default using Anthropic's Claude Sonnet 4 model.

  15. Article · Zed · 32w

Making Python in Zed Fun

    Zed editor has significantly improved its Python support by introducing automatic virtual environment detection with a toolchain selector, monorepo support with per-project toolchains tracked via pyproject.toml, and separate language servers per toolchain. The editor now defaults to Basedpyright and supports Ty and Ruff out of the box, with plans to integrate Astral's Ty language server into the core. These changes address previous pain points around venv management and configuration complexity, making Python development more seamless.

  16. Article · InfoWorld · 31w

    Java or Python for building agents?

    Choosing between Java and Python for AI agents should depend on your team's existing expertise and technology stack, not trends. While Python dominates AI development due to its accessibility and rich ecosystem, Java developers can build equally effective agents using frameworks like Embabel. Organizations will achieve faster AI adoption by leveraging their current tools and skills rather than switching to unfamiliar technologies. By 2028, 80% of generative AI applications will be built on existing data management platforms, reinforcing the value of working with what you already have.

  17. Article · LangChain · 29w

    LangChain and LangGraph Agent Frameworks Reach v1.0 Milestones

    LangChain and LangGraph have reached their 1.0 stable releases, marking a commitment to no breaking changes until 2.0. LangChain 1.0 introduces the create_agent abstraction with middleware support for customization, standardized content blocks across providers, and a streamlined package focused on core agent functionality. LangGraph 1.0 provides production-ready features including durable state, built-in persistence, and human-in-the-loop patterns for complex workflows. Both frameworks are backward compatible, with LangChain built on top of LangGraph's runtime, allowing developers to start with high-level abstractions and drop down to lower-level control when needed.

  18. Article · Hugging Face · 29w

    huggingface_hub v1.0: Five Years of Building the Foundation of Open Machine Learning

    The huggingface_hub Python library has reached v1.0 after five years of development, now powering 200,000 dependent libraries and providing access to over 2 million models, 500,000 datasets, and 1 million Spaces. Major changes include migration from requests to httpx for modern HTTP infrastructure, a redesigned CLI replacing huggingface-cli with expanded features, and full adoption of hf_xet for file transfers with chunk-level deduplication. The release removes legacy patterns like the Git-based Repository class while maintaining backward compatibility for most ML libraries, though transformers v5 will be required for full v1.x support.

  19. Article · Product Hunt · 31w

    nanochat: The best ChatGPT that $100 can buy

    nanochat is a minimal, full-stack LLM implementation by Andrej Karpathy in a single hackable codebase of roughly 8,000 lines. It runs the complete pipeline—tokenization, pretraining, finetuning, evaluation, inference, and web UI—on a single 8XH100 node, with the baseline "speedrun" costing about $100 in compute. The project achieves competitive performance for that budget while keeping the code clean and readable, designed to make end-to-end LLM development accessible for learning purposes.

  20. Article · Hacker News · 32w

newton-physics/newton

    Newton is a GPU-accelerated physics simulation engine built on NVIDIA Warp, designed for robotics and simulation research. The project extends Warp's deprecated sim module and integrates MuJoCo Warp as its primary backend. Key features include GPU-based computation, OpenUSD support, differentiability, and extensibility. Currently in active beta under the Linux Foundation with Apache 2.0 licensing, Newton was initiated by Disney Research, Google DeepMind, and NVIDIA. The engine includes extensive examples covering basic physics, robot simulations, cloth dynamics, inverse kinematics, material point method (MPM), and differentiable simulation scenarios.

  21. Article · Planet Python · 31w

    Releasing Python 3.14.0

    A detailed chronicle of the Python 3.14.0 release process, documenting the week leading up to release day. The post walks through pre-release checks, handling last-minute bugs (including a Linux kernel issue), running automated release scripts, coordinating multi-platform builds (Windows, macOS, Android), and final publishing steps. It provides insight into the technical workflow of releasing a major Python version, including buildbot monitoring, CI automation, profile-guided optimization builds, and the various announcement channels used to communicate the release.

  22. Article · Vercel · 32w

    Python package manager uv is now available for builds with zero configuration

    Vercel has adopted uv, a Rust-based Python package manager, as the default for Python builds. This integration delivers 30-65% faster build times and expands dependency format support beyond requirements.txt and Pipfile to include uv.lock and pyproject.toml files. The change requires zero configuration and applies automatically to all Python projects on the platform.

  23. Article · Product Hunt · 29w

    Metorial: The open source integration gateway for AI agents.

    Metorial is an open-source integration platform that enables AI agents to connect with 600+ services through MCP (Model Context Protocol). It provides Python and TypeScript SDKs, one-line OAuth implementation, serverless deployment for custom MCP servers, and built-in observability. The team also released Starbase, a browser-based testing tool for MCP servers that allows developers to test integrations in real conversations with Claude or ChatGPT without any setup.

  24. Article · MIT News · 30w

    The student becomes the teacher

    MIT graduate student Titus Roesler overcame a challenging start from a rural background without AP classes to become an award-winning teaching assistant and mentor. His teaching work helped him master signal processing, where he now focuses on compressed sensing applications in high-frequency radio communications. Through roles as a TA for multiple classes and designing seminars, he developed expertise in source separation problems, including a project separating harmonies in Bach chorales using Python.

  25. Article · ByteByteGo · 29w

    The Evolution of LinkedIn’s Generative AI Tech Stack

    LinkedIn evolved its GenAI infrastructure from fragmented experiments to a unified platform supporting multi-agent systems. The company shifted from Java to Python for both offline and online development, adopted LangChain as its primary framework, and built centralized systems for prompt management, skill registries, and memory. The platform leverages existing messaging infrastructure for agent orchestration, implements strict privacy controls, and uses OpenTelemetry for production observability. Key architectural decisions include keeping abstractions thin for flexibility, using human-in-the-loop controls for critical actions, and building reusable components that enable teams to ship AI features faster while maintaining consistency and trust.