Best of Architecture · December 2024

  1. Article
    System Design Codex · 1y

    8 Must-Know Strategies to Build Scalable Systems

    Explore eight essential strategies for building scalable systems: stateless services, horizontal scaling, load balancing, auto-scaling, caching, database replication, database sharding, and asynchronous processing. These techniques ensure systems can handle increased loads efficiently without compromising performance or user experience.
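Several of these strategies lend themselves to short sketches. Here is a minimal illustration of database sharding in Python, where a hash of the key deterministically routes each user to one of a fixed set of shards (the key format and shard count are illustrative):

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count

def shard_for(user_id: str) -> int:
    """Deterministically map a key to a shard index.

    Hashing keeps routing stable: the same user always lands on the
    same shard, so that user's reads and writes stay together.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS
```

A real deployment would likely use consistent hashing instead of a plain modulo, so that changing `NUM_SHARDS` does not remap most existing keys.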

  2. Article
    Medium · 1y

    My DOs and DON’Ts of Software Architecture

    A software architect shares personal dos and don'ts based on their experience. Key advice includes treating everyone as equals, ensuring clarity, documenting decisions, defining ownership, using architectural contracts, and avoiding over-architecture. The don'ts caution against accepting directives unchallenged, raising problems without proposing solutions, and falling into 'WaterGile' instead of practicing true Agile. This guidance aims to help navigate workplace dynamics and drive successful projects.

  3. Video
    YouTube · 1y

    Master Microservices with Real-Life UBER Project | Advanced Backend

    Learn how to effectively use microservices by working on a real-life project inspired by Uber. This guide explains the challenges with monolithic architectures and demonstrates how microservices can solve scalability issues. You will see specific coding examples of setting up a backend, breaking applications into smaller services, and handling high-traffic scenarios by scaling individual components.

  4. Article
    Hacker News · 1y

    Thinking in Actors - Part 1

    This post explores the benefits of the Actor Model for managing state in software systems. It highlights the drawbacks of traditional approaches, such as anemic data models and misaligned business logic, and advocates for a richer domain-driven approach. Additionally, it discusses how virtual actors, as implemented in frameworks like Microsoft Orleans, can address challenges of concurrency, scalability, and fault tolerance in distributed systems.
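The core idea is easy to sketch outside any framework: an actor owns its state and processes messages from a mailbox one at a time, so the state needs no locks. A minimal Python sketch of the general model (not Orleans' virtual actors; the message protocol is made up for illustration):

```python
from queue import Queue
from threading import Thread

class CounterActor:
    """State (`count`) is touched only by the actor's own loop,
    so concurrent senders never race on it."""

    def __init__(self) -> None:
        self.count = 0
        self.mailbox: Queue = Queue()
        self._thread = Thread(target=self._run)
        self._thread.start()

    def _run(self) -> None:
        while True:
            msg = self.mailbox.get()  # messages processed one at a time
            if msg == "stop":
                break
            self.count += msg

    def send(self, msg) -> None:
        self.mailbox.put(msg)  # senders just enqueue and move on

    def stop(self) -> None:
        self.send("stop")
        self._thread.join()
```

Because only the actor's loop mutates `count`, many threads can `send` concurrently without a single lock in user code.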

  5. Article
    Towards Data Science · 1y

    How X (Twitter) Designed Its Home Timeline API: Lessons to Learn

    The post delves into the design aspects of X's (formerly Twitter) home timeline API, covering data fetching, response structure, and pagination. It explores the mixed use of REST, RPC, and GraphQL approaches, the handling of hierarchical data, and special entities like tweets, feedback actions, and cursors. The post also discusses sorting, tweet actions, and how tweet details are retrieved.
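Cursor-based pagination, one of the mechanisms the post covers, can be sketched in a few lines: the server hands back an opaque token encoding where the next page starts (here just a base64-encoded offset; X's real cursors are far more elaborate):

```python
import base64

def encode_cursor(position: int) -> str:
    # opaque to clients: they pass it back without interpreting it
    return base64.urlsafe_b64encode(str(position).encode()).decode()

def decode_cursor(cursor: str) -> int:
    return int(base64.urlsafe_b64decode(cursor.encode()).decode())

def fetch_page(items, cursor=None, page_size=3):
    start = decode_cursor(cursor) if cursor else 0
    page = items[start:start + page_size]
    nxt = start + page_size
    next_cursor = encode_cursor(nxt) if nxt < len(items) else None
    return page, next_cursor
```

A `None` cursor signals the end of the timeline; clients simply loop until they receive it.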

  6. Article
    Community Picks · 1y

    I may start using Event Sourcing in all my Laravel applications

    Event sourcing is an architectural pattern that logs events in chronological order, providing a complete history of state changes. Unlike traditional systems that store only the current state, event sourcing offers a full audit trail and simplifies debugging and recovery by allowing the replay of events. It's particularly useful for applications that need to track changes over time, though it adds complexity and has a steep learning curve. The author shares experiences using event sourcing in Laravel applications and discusses when it might be overkill for simpler projects.
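The pattern is language-agnostic even though the article works in Laravel/PHP. A minimal Python sketch shows the key move, deriving current state by replaying the log rather than storing it (the account events are invented for illustration):

```python
def replay(events) -> int:
    """Rebuild the current balance from the full event history.

    The log is the source of truth; 'current state' is just a fold
    over it, which is what makes audit trails and replay-based
    debugging possible.
    """
    balance = 0
    for event in events:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance
```

Replaying a prefix of the log yields the state at any past point in time, which is the recovery property the article highlights.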

  7. Article
    Milan Jovanović · 1y

    What Rewriting a 40-Year-Old Project Taught Me About Software Development

    Rewriting a legacy system built over four decades with APL into a modern stack using .NET, PostgreSQL, and React posed significant technical and organizational challenges. The process involved understanding the deeply complex and integrated legacy system, managing product vs. engineering priorities, and ensuring zero downtime during the migration. Key strategies included modular monolith architecture, cloud-ready design, and robust CI/CD pipelines. Success depended not only on technical solutions but also on effective stakeholder management and knowledge transfer.

  8. Article
    The PayEng Playbook · 1y

    NoDB: Processing Payments Without a Database

    This post explores processing payments without a database by shifting focus to event sourcing, an approach that treats changes in state as first-class citizens. Data is held temporarily in RAM, with hot backups guarding against data loss. The article highlights how payment systems can operate efficiently without persistent storage by leveraging event streams to manage and reconstruct transactions.

  9. Article
    Hacker News · 1y

    Building AI Products—Part I: Back-end Architecture

    In 2023, an AI-powered Chief of Staff tool for engineering leaders reached 10,000 users within a year. Insights gathered during its development led to the creation of Outropy, a developer platform to build AI products, focusing on sustainable and reliable AI systems. The journey involved navigating challenges with generative AI, understanding the role of agents versus microservices, and optimizing performance and scalability. The transition to using Temporal for stateful workflows and the evolution of AI product development is a highlight, offering valuable lessons in structuring AI applications.

  10. Article
    Tiger's Place · 1y

    API keys best practices notes

    Always put API keys in request headers to avoid exposure in browser history or logs. For frontend code, never put sensitive keys in headers or URLs; use a proxy backend for production-ready projects. Use different keys for different environments and store them securely using environment variables. Rotate keys periodically to maintain security.
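The first two points can be sketched in a few lines of Python: the key comes from an environment variable and travels in a header, never in the URL (the variable name, endpoint, and header scheme are all illustrative):

```python
import os
import urllib.request

API_URL = "https://api.example.com/v1/data"  # hypothetical endpoint

def build_request() -> urllib.request.Request:
    # key read from the environment, not hard-coded in source
    api_key = os.environ["EXAMPLE_API_KEY"]  # hypothetical variable name
    # key sent in a header, so it never appears in URLs, browser
    # history, or most access logs
    return urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {api_key}"}
    )
```

Swapping the environment variable per environment (dev, staging, production) gives each deployment its own key, in line with the post's advice.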

  11. Article
    Code Maze · 1y

    Chain of Responsibility Design Pattern in C#

    The Chain of Responsibility design pattern breaks down logic into smaller components with distinct responsibilities, chaining them together to accomplish a task, promoting the Single Responsibility Principle and loose coupling. This pattern's implementation in C# is demonstrated through a rental request processing API. The post also highlights its advantages, such as workflow synthesis, and challenges, such as dependency order and possible request handling gaps.
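The article's implementation is in C#, but the shape of the pattern translates directly to a short Python sketch, with a rental-style approval request flowing down the chain (handler names and thresholds are invented):

```python
class Handler:
    """Base link: either a subclass handles the request or passes it on."""

    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, amount):
        if self.successor:
            return self.successor.handle(amount)
        return None  # the gap the article warns about: nobody handled it

class LowAmountHandler(Handler):
    def handle(self, amount):
        if amount <= 100:
            return "approved by low-amount handler"
        return super().handle(amount)

class HighAmountHandler(Handler):
    def handle(self, amount):
        if amount <= 10_000:
            return "approved by high-amount handler"
        return super().handle(amount)

# dependency order matters, another caveat the article raises
chain = LowAmountHandler(HighAmountHandler())
```

Each handler carries one responsibility, and adding a new approval tier means adding a link rather than editing a monolithic conditional.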

  12. Article
    ByteByteGo · 1y

    How Statsig Streams 1 Trillion Events A Day

    Statsig processes over a trillion events daily for high-profile clients such as OpenAI and Atlassian, with a robust data pipeline designed for scalability and cost-efficiency. Key components include a reliable data ingestion layer, scalable message queues, and effective routing and integration techniques. Their strategy involves using Google Cloud Storage, Pub/Sub, spot nodes, and advanced compression methods to optimize performance and minimize costs, ensuring high reliability and low latency.

  13. Article
    Lobsters · 1y

    Software Design is Knowledge Building

    A company relies on an integration service but decides to build an in-house system to cut costs. Despite successful initial development, the system becomes hard to maintain when transferred to a new team. This is attributed to the lack of a shared mental model among the new developers, making it difficult to understand and modify the software. The post highlights the importance of knowledge building and proper documentation in software design to ensure long-term maintainability.

  14. Article
    Metadata · 1y

    Stream Processing

    Batch processes can delay business operations, so stream processing is used to handle events immediately as they occur. Stream processing involves systems notifying consumers of new events, often through message brokers like RabbitMQ or log-based brokers like Kafka. Dual writes can lead to errors and inconsistencies, so Change Data Capture (CDC) allows for consistent data replication across systems. Event sourcing records all changes immutably, aiding in auditability, recovery, and analytics. Stream processing can be used in various applications, including fraud detection, trading systems, and manufacturing, and relies on techniques like microbatching and checkpointing for fault tolerance.
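Microbatching and checkpointing, the fault-tolerance techniques mentioned at the end, fit in a toy sketch: events are consumed in small batches, and the offset of the last completed batch is the checkpoint a restart resumes from (the in-memory list stands in for a real log such as Kafka):

```python
def process_stream(events, batch_size, checkpoint=0, on_batch=None):
    """Consume `events` in microbatches, returning the final checkpoint.

    The checkpoint advances only after a batch completes, so a crash
    mid-batch replays that batch on restart instead of losing it.
    """
    offset = checkpoint
    while offset < len(events):
        batch = events[offset:offset + batch_size]
        if on_batch:
            on_batch(batch)
        offset += len(batch)
    return offset
```

Replaying a batch on restart gives at-least-once delivery; making the batch processing idempotent is what upgrades that to effectively-exactly-once.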

  15. Article
    Community Picks · 1y

    Nobody told me I would miss my JOINs when I started in microservices.

    Managing data in a microservices architecture can be challenging, especially replicating the efficiency of JOINs in a relational database. Moving to microservices often involves turning simple JOIN operations into latency-heavy HTTP calls. Solutions include data replication, materialized views, event-driven replication, and batch data sync, each with its own pros and cons. Selecting the right approach depends on understanding specific performance, consistency, and scalability needs.
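Event-driven replication, one of the workarounds listed, can be sketched as a local read model: the orders service keeps its own copy of the customer fields it needs, so the JOIN becomes a dictionary lookup instead of an HTTP call (service and field names are illustrative):

```python
customer_view = {}  # local replica kept inside the orders service

def on_customer_event(event):
    """Apply a customer-changed event from the bus to the local view."""
    customer_view[event["id"]] = event["name"]

def order_with_customer(order):
    # the relational JOIN becomes a local lookup; the trade-off is
    # eventual consistency while events are still in flight
    return {**order, "customer_name": customer_view.get(order["customer_id"])}
```

This trades storage and eventual consistency for latency, which is exactly the pros-and-cons calculus the post walks through.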

  16. Article
    WunderGraph · 1y

    I was wrong about GraphQL

    The author revisits and reassesses previous opinions on GraphQL based on current knowledge and experiences. Key insights include the importance of building on top of existing tools and patterns rather than introducing new ones, the organizational benefits of GraphQL Federation, and the evolution of best practices such as using persisted queries and APQ. The author also reflects on the practical challenges and benefits of GraphQL for large-scale enterprise applications, particularly in terms of caching, API design, and Subscriptions implementation.
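Persisted queries, one practice the author now endorses, have a simple core: the client registers a query once and afterwards sends only its hash, which the server resolves from a store. A minimal in-memory sketch (real setups use a build-time registry or APQ negotiation):

```python
import hashlib

persisted = {}  # in production, a shared registry or CDN-backed store

def persist(query: str) -> str:
    """Register a query and return the hash the client sends instead."""
    digest = hashlib.sha256(query.encode()).hexdigest()
    persisted[digest] = query
    return digest

def resolve(digest: str) -> str:
    # unknown hashes are rejected, which also blocks arbitrary ad-hoc queries
    return persisted[digest]
```

Sending a short, stable hash instead of the full query text is also what makes GET-based caching of GraphQL requests practical.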

  17. Article
    InfoQ · 1y

    Software Architecture and the Art of Experimentation

    Architecting software involves unavoidable instances of being wrong, making experimentation crucial. Minimum Viable Architectures (MVAs) test the viability of architectural decisions, providing data to refine them. Effective experiments should be atomic, timely, and unambiguous, focusing on verifying assumptions. The goal is to minimize the cost of mistakes by running small, manageable experiments and making informed, sustainable decisions that support long-term value.

  18. Article
    Daily Dose of Data Science | Avi Chawla | Substack · 1y

    A crash course on RAG systems—Part 7

    Part 7 of the RAG crash course focuses on building graph RAG systems using a graph database to store entities and relationships. It highlights the advantages of structured data for LLMs and includes implementation details suitable for beginners. The series covers foundational aspects, evaluation, optimization, and multimodal techniques for RAG systems. Understanding RAG systems can help reduce costs, drive revenue, and scale ML models effectively.
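The structured retrieval step can be sketched without a graph database: store entity relationships as triples and, at question time, pull the facts touching an entity to hand to the LLM as context (the entities and relations below are invented examples):

```python
# toy knowledge graph: (subject, relation, object) triples
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Pierre Curie", "married_to", "Marie Curie"),
]

def facts_about(entity):
    """Edges touching `entity`: the structured context for the prompt."""
    return [t for t in triples if entity in (t[0], t[2])]
```

A real graph RAG system replaces the list with a graph database and the membership test with a graph query, but the retrieval contract is the same: structured facts in, grounded answer out.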

  19. Article
    swizec.com · 1y

    Smart core, thin interfaces

    The post introduces the concept of 'smart core, thin interfaces' for structuring software to avoid a big ball of mud. It emphasizes creating core business logic modules surrounded by lightweight interfaces that cater to different actors, ensuring maintainability and adaptability. The approach aligns with various architecture philosophies like hexagonal, service-oriented, and microservices architectures. Keeping the business logic tightly packed and using interfaces to handle interactions helps prevent errors and improves system stability.
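The idea reduces to a small sketch: the business rule lives in exactly one core function, and each interface is a thin adapter that only translates input and output (the discount rule and handler names are invented for illustration):

```python
def apply_discount(price: float, loyalty_years: int) -> float:
    """Core business logic: the only place the discount rule exists."""
    rate = min(0.05 * loyalty_years, 0.25)  # capped at 25%
    return round(price * (1 - rate), 2)

def http_handler(params: dict) -> dict:
    # thin interface: parse, delegate, format; no business rules here
    return {"total": apply_discount(float(params["price"]), int(params["years"]))}

def cli_handler(args: list) -> str:
    # a second actor gets its own thin interface over the same core
    return str(apply_discount(float(args[0]), int(args[1])))
```

When the discount rule changes, only the core changes; the interfaces for each actor stay untouched, which is the maintainability payoff the post describes.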

  20. Article
    LangChain · 1y

    Command: a new tool for building multi-agent architectures in LangGraph

    Command is a new tool in LangGraph designed to simplify communication within multi-agent systems. It allows for the creation of edgeless graphs, where nodes dynamically determine which node runs next and how state is updated. This increases flexibility and control over multi-agent architectures, particularly in scenarios involving agent handoffs.