Best of AWS: November 2024

  1. Article
    ByteByteGo · 2y

    EP136: The Ultimate DevOps Developer Roadmap

    The Ultimate DevOps Developer Roadmap highlights key areas for mastering DevOps skills, including programming languages like Python and JavaScript, operating systems, source control management tools, networking basics, CI/CD tools, scripting, various hosting platforms, infrastructure as code tools, and monitoring/logging tools. Each aspect is crucial for a well-rounded DevOps professional. Additional content covers Redis fundamentals, various software architectural patterns, and eventual consistency patterns for distributed databases.

  2. Article
    DEV · 1y

    Diagram-as-Code: Creating Dynamic and Interactive Documentation for Visual Content

    Learn how to leverage Diagram-as-Code to create dynamic and interactive visual documentation using Python. This approach allows you to programmatically generate and maintain diagrams, making them always up-to-date. The post introduces the Diagrams Python library, details its benefits, and provides step-by-step tutorials on installation, node types, and creating diagrams for different cloud providers such as AWS.
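The post builds on the Diagrams library, which renders through Graphviz. As a dependency-free sketch of the same diagram-as-code idea, the snippet below describes an architecture as plain Python data and emits Graphviz DOT text; the node names are illustrative, not taken from the post.

```python
# Minimal diagram-as-code sketch: describe the architecture as data,
# emit Graphviz DOT text that a renderer (e.g. `dot -Tpng`) can draw.
# Node and edge names below are hypothetical examples.

def to_dot(title, edges):
    """Build a DOT digraph from (source, target) pairs."""
    lines = [f'digraph "{title}" {{']
    for src, dst in edges:
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

edges = [
    ("Route 53", "ELB"),
    ("ELB", "EC2 web-1"),
    ("ELB", "EC2 web-2"),
    ("EC2 web-1", "RDS"),
    ("EC2 web-2", "RDS"),
]
print(to_dot("Web Service", edges))
```

Because the diagram is generated from data, regenerating it after an architecture change keeps the documentation current, which is the core benefit the article describes.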

  3. Article
    Go Developers · 1y

    Building a chat app in Go with WebSockets and Nitric

    Learn to build a real-time chat application in Go using WebSockets and Nitric. The guide includes steps for creating a WebSocket API, managing connections, handling WebSocket events, local testing, and deploying to AWS.

  4. Article
    theburningmonk.com · 1y

    When to use Light Events vs. Rich Events in Event-Driven Architectures

    Choosing between light and rich events in event-driven architectures involves trade-offs. Light events, containing minimal information, are cost-efficient but may increase complexity for subscribers needing additional data. Rich events include more information, reducing the need for extra calls by subscribers but increasing data transfer and storage costs. Domain events, often benefitting from light events, are shared within the same domain, while integration events, suited for rich events, are shared across domains. Understanding these differences helps optimize event design and system efficiency.
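The trade-off above can be made concrete with two payloads for the same hypothetical order event; the field names are illustrative, not from the article.

```python
# Contrast of the two event shapes discussed above; all field names
# are hypothetical examples, not taken from the article.

# Light event: identifiers only. Subscribers needing details must call
# back to the owning service (extra calls, but minimal data transfer).
light_event = {
    "type": "order_placed",
    "order_id": "ord-123",
}

# Rich event: carries the state subscribers are likely to need,
# trading a larger payload for fewer cross-service callbacks.
rich_event = {
    "type": "order_placed",
    "order_id": "ord-123",
    "customer": {"id": "cus-9", "email": "jane@example.com"},
    "items": [{"sku": "sku-1", "qty": 2, "price_cents": 499}],
    "total_cents": 998,
}

assert set(light_event) < set(rich_event)  # rich is a superset of light
```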

  5. Article
    ByteByteGo · 2y

How McDonald's Sells Millions of Burgers Per Day With Event-Driven Architecture

McDonald's has developed a unified, event-driven platform to handle its global operations efficiently. The platform is designed for scalability, high availability, performance, security, reliability, consistency, and simplicity. Core components include Amazon Managed Streaming for Apache Kafka (MSK), a schema registry, a standby event store, custom SDKs, and an event gateway. The system ensures data integrity and efficient processing through schema validation and robust error handling. Key techniques include data governance, cluster autoscaling, and domain-based sharding. Planned enhancements include a formal event specification, a transition to serverless MSK, and improved developer tooling.

  6. Article
    gitconnected · 1y

    Data Centers in System Design

    A data center is a building full of servers, storage systems, and network equipment. Multi-data center architectures improve reliability and speed by connecting users to the nearest center using GeoDNS for traffic direction. Failover scenarios are managed by automatic detection and routing updates. Scaling strategies include component decoupling into microservices and using messaging architectures with message queues. Companies like Netflix and Amazon effectively implement these concepts for redundancy and scalability.
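The "route users to the nearest data center" idea behind GeoDNS can be sketched as a distance lookup; the regions, coordinates, and failover handling below are illustrative assumptions, not how any particular GeoDNS product works internally.

```python
import math

# Sketch of GeoDNS-style routing: pick the closest healthy data center
# for a user. Region names and coordinates are made up for illustration.

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

DATA_CENTERS = {
    "us-east": (38.9, -77.0),
    "eu-west": (53.3, -6.2),
    "ap-southeast": (1.35, 103.8),
}

def nearest_dc(user_location, healthy=DATA_CENTERS):
    """Pick the closest data center; on failover, pass only the
    healthy subset and traffic shifts to the next-nearest center."""
    return min(healthy, key=lambda dc: haversine_km(user_location, healthy[dc]))

print(nearest_dc((48.85, 2.35)))  # a user near Paris -> "eu-west"
```

Failover then becomes removing the failed entry from the healthy set and re-running the same selection, which mirrors the automatic detection and routing updates described above.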

  7. Video
    Be A Better Dev · 1y

    I Wish I Knew This One Thing Before Learning AWS

    Understanding the core services in AWS, such as EC2, S3, SQS, SNS, and DynamoDB, is essential before diving into other services. These foundational services form the basis upon which other, more advanced services are built. Having a strong understanding of these basics will facilitate learning and using more complex AWS offerings.

  8. Article
    freeCodeCamp · 1y

    What is Cloud Computing? A Guide for Beginners

    Cloud computing involves using the internet to access storage, databases, and computing power on powerful remote servers, eliminating the need for businesses to have large physical servers. Major cloud providers such as AWS, Microsoft Azure, and Google Cloud offer scalable, flexible, and reliable services. Cloud computing includes services like IaaS, PaaS, and SaaS, each offering different levels of control. Its benefits include cost reduction, enhanced collaboration, and reliable backups, making it crucial for modern businesses and individuals.

  9. Article
    Community Picks · 1y

    cshum/imagor: Fast, secure image processing server and Go library, using libvips

imagor is a robust image processing server and Go library built on the efficient libvips library, delivering 4-8x faster performance than ImageMagick. It supports a wide range of image processing use cases through an HTTP server, integrates seamlessly with Docker, and offers video thumbnail capabilities via ffmpeg bindings. imagor can be configured for different storage backends, including HTTP, the file system, AWS S3, and Google Cloud Storage, and supports complex image operations and filters with secure URL signing to prevent DDoS attacks.
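URL signing of this kind is generally an HMAC over the request path, so only URLs produced with the secret are processed. The sketch below shows the general technique; the digest, encoding, and key are assumptions for illustration and may differ from what imagor actually expects.

```python
import base64
import hashlib
import hmac

# Sketch of HMAC-based URL signing, the general technique used to stop
# arbitrary (DDoS-prone) processing requests. Digest choice, encoding,
# and the key below are illustrative, not imagor's exact scheme.

SECRET = b"my-imagor-secret"  # hypothetical key

def sign_path(path: str, secret: bytes = SECRET) -> str:
    """Return a signed URL path of the form <signature>/<path>."""
    digest = hmac.new(secret, path.encode(), hashlib.sha256).digest()
    sig = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return f"{sig}/{path}"

def verify(signed: str, secret: bytes = SECRET) -> bool:
    """Recompute the signature for the path and compare in constant time."""
    sig, _, path = signed.partition("/")
    expected, _, _ = sign_path(path, secret).partition("/")
    return hmac.compare_digest(sig, expected)

signed = sign_path("300x200/smart/example.com/img.jpg")
assert verify(signed)
```

A request whose signature segment has been altered, or that was built without the secret, fails verification and is rejected before any image processing happens.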

  10. Article
    theburningmonk.com · 2y

    EventBridge best practice: why you should wrap events in event envelopes

    Learn why wrapping your event payloads in custom envelopes when using AWS EventBridge enhances structure, interoperability, filtering capabilities, versioning, idempotency, observability, and auditing. This approach provides a clear separation between metadata and business data, making it easier to manage, trace, and process events within your serverless architectures.
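The metadata/data separation the post argues for can be sketched as a small wrapper applied before publishing; the field names below are an illustrative envelope shape, not a prescribed schema.

```python
import json
import uuid
from datetime import datetime, timezone

# Sketch of the event-envelope idea: wrap the business payload in a
# metadata envelope before publishing to EventBridge. Field names are
# illustrative assumptions, not a standard.

def wrap(event_type: str, data: dict, version: str = "1.0") -> dict:
    return {
        "metadata": {
            "id": str(uuid.uuid4()),   # supports idempotency / de-duplication
            "type": event_type,        # coarse filtering without parsing data
            "version": version,        # evolve payloads without breaking consumers
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "data": data,                  # business payload, kept separate
    }

envelope = wrap("order.placed", {"order_id": "ord-123", "total_cents": 998})
print(json.dumps(envelope, indent=2))
```

Subscribers can then filter and trace on `metadata` fields alone, while the `data` payload remains free to evolve behind the `version` field.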

  11. Article
    Hacker News · 2y

    How WebSockets cost us $1M on our AWS bill

Recall.ai cut their AWS bill by $1M/year by optimizing CPU usage in their real-time video processing pipeline. Profiling revealed that much of their CPU time went to memory copies in the WebSocket transport used to move raw video between local processes. By switching from WebSockets to shared memory for data transport, they significantly reduced that overhead, cutting CPU usage by 50%.
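The core of the fix is that a shared-memory region lets a producer and consumer alias the same pages instead of copying every frame through a socket. A single-process sketch of that handoff, with made-up frame contents, using Python's standard library:

```python
from multiprocessing import shared_memory

# Sketch of the shared-memory handoff idea: producer and consumer map
# the same OS memory region, so large frames change hands without the
# per-message copies a WebSocket transport incurs. Single-process
# illustration; the frame contents and size are made up.

frame = b"\x00\x01" * 512  # stand-in for a raw video frame

# Producer: create a region and write the frame into it once.
shm = shared_memory.SharedMemory(create=True, size=len(frame))
shm.buf[: len(frame)] = frame

# Consumer: attach to the same region by name. Nothing is copied on
# the way in -- both handles alias the same physical pages.
view = shared_memory.SharedMemory(name=shm.name)
received = bytes(view.buf[: len(frame)])  # copy out only when needed

assert received == frame

view.close()
shm.close()
shm.unlink()
```

In a real pipeline the two handles would live in separate processes coordinating via the region name, which is what removes the serialize-send-receive-copy cycle of a socket transport.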

  12. Article
    Pulumi · 1y

    Fargate vs EC2

When setting up an EKS cluster, you can choose to run your containers on either AWS EC2 instances or through AWS Fargate. EC2 requires managing instance types and handling resource allocation, whereas Fargate abstracts away that complexity by providing a dedicated environment for each pod. While Fargate can improve isolation and flexible scaling, EC2 offers cost efficiencies and resource sharing. Fargate is suitable for bursty, resource-heavy workloads, while EC2 works well for lightweight microservices. Consider a combination of both for optimal results.

  13. Video
    Community Picks · 1y

    Redis vs Memcached Performance Benchmark

The post benchmarks Redis and Memcached on latency, throughput, and resource usage. Tests were conducted on AWS infrastructure to measure how each cache handles set and get operations, and how each scales. Redis showed higher latency under load but offers more features, while Memcached demonstrated more stable performance and easier management. The results suggest choosing Memcached for simple SQL query-result caching and Redis for feature-rich requirements, despite Redis's more complex maintenance needs.
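The benchmark's methodology, timing individual set/get operations and reporting latency percentiles, can be sketched with an in-process stand-in for the cache client; swap in a real Redis or Memcached client to reproduce the comparison. Everything here is an illustrative harness, not the video's actual code.

```python
import statistics
import time

# Methodology sketch for a set/get latency benchmark like the one in
# the video. A plain dict stands in for a Redis or Memcached client.

class DictCache:
    """In-memory stand-in for a cache client."""
    def __init__(self):
        self._d = {}
    def set(self, k, v):
        self._d[k] = v
    def get(self, k):
        return self._d.get(k)

def bench(cache, ops=10_000):
    """Time each set+get pair and report p50/p99 latency in microseconds."""
    samples = []
    for i in range(ops):
        t0 = time.perf_counter()
        cache.set(f"key:{i}", "value")
        cache.get(f"key:{i}")
        samples.append((time.perf_counter() - t0) * 1e6)
    q = statistics.quantiles(samples, n=100)
    return {"p50": q[49], "p99": q[98]}

print(bench(DictCache()))
```

Reporting percentiles rather than averages is what surfaces the "higher latency under load" behavior the video observed, since tail latency hides in the mean.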

  14. Article
    Towards Dev · 1y

    “Data-Driven Football Insights: From Web Scraping to Visualization Using Airflow, Dbt Cloud, and AWS Tech Stack”

This project automates the process of collecting, storing, and analyzing football data using technologies like Apache Airflow, dbt Cloud, and AWS. The workflow includes web scraping data using Python, storing it in Amazon S3, processing it in Amazon Redshift, transforming data with dbt Cloud, and visualizing it through Amazon QuickSight. This integrated approach offers a scalable solution to manage and analyze detailed football statistics efficiently.

  15. Article
    Planet Haskell · 1y

    The cost of hosting is too damn high

    The author describes migrating a side project from DigitalOcean to dedicated servers due to high costs and performance issues. They experienced frequent HTTP 520 errors on DigitalOcean's platform and found better value with OVH's dedicated servers. Despite the initial setup complexities, the author achieved significantly lower response times and better scalability. The post also touches on the inefficiencies and high costs of modern software and cloud services, advocating for more cost-effective and efficient engineering practices.

  16. Article
    AWS Fundamentals · 1y

    OpenTelemetry on AWS: Observability at Scale with Open-Source

    Learn how to implement an observability stack on AWS using OpenTelemetry, CloudWatch, and AWS X-Ray for serverless applications. This guide walks through configuring AWS Lambda for trace and log collection, and how the AWS Distro for OpenTelemetry provides a secure, production-ready solution for instrumenting applications with minimal code changes.

  17. Article
    Quastor Daily · 1y

    How Reddit built a Metadata Store that Handles 100k Reads per Second

Reddit built a scalable metadata store that handles over 100k reads per second, choosing Postgres over Cassandra after running into manageability and flexibility challenges with the latter. They migrated using dual writes, data backfills, dual reads, and related strategies. To achieve high performance, they used table partitioning and denormalization. Reddit's current setup on AWS Aurora Postgres delivers low latency without the need for a read-through cache.
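The dual-write/dual-read migration pattern mentioned above can be sketched with two in-memory dicts standing in for the old and new stores; store names and keys are illustrative, not Reddit's actual implementation.

```python
# Sketch of the dual-write / dual-read migration pattern, with
# in-memory dicts standing in for the old and new stores. All names
# here are illustrative assumptions.

old_store, new_store = {}, {}
mismatches = []

def dual_write(key, value):
    """During migration, every write lands in both stores."""
    old_store[key] = value
    new_store[key] = value

def backfill(existing):
    """Copy pre-migration rows into the new store."""
    for key, value in existing.items():
        new_store.setdefault(key, value)

def dual_read(key):
    """Serve from the old store, comparing against the new one to
    build confidence before cutting over."""
    a, b = old_store.get(key), new_store.get(key)
    if a != b:
        mismatches.append(key)
    return a

old_store["post:1"] = {"title": "hello"}   # pre-existing row
backfill(old_store)
dual_write("post:2", {"title": "world"})
assert dual_read("post:1") == {"title": "hello"}
assert dual_read("post:2") == {"title": "world"}
assert mismatches == []  # stores agree -> safe to cut over
```

Once the mismatch rate stays at zero under production traffic, reads can be flipped to the new store and the old one retired.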

  18. Article
    Cerbos · 1y

    Cerbos is now available on AWS Marketplace

    Cerbos, known for its flexibility and performance in managing fine-grained access control, is now available on AWS Marketplace. This enables seamless management of authorizations within the AWS environment, whether on cloud, on-prem, or in hybrid setups. Cerbos PDP allows decoupling of authorization logic from application code, while Cerbos Hub offers centralized management and real-time policy orchestration. This move simplifies procurement and helps teams implement secure, adaptable authorization quickly.

  19. Article
    All Things Distributed · 1y

    Return of The Frugal Architect(s)

    The Frugal Architect initiative, introduced during a re:Invent keynote, focuses on building cost-aware, sustainable, and modern architectures. The initiative relaunches with expanded content, including blog posts and a new podcast series featuring AWS customer stories about optimizing their architectures for cost and sustainability. Notable examples include WeTransfer's use of enhanced observability to reduce digital waste and PBS's shift to containerized and serverless architectures for cost-effectiveness.

  20. Video
    Community Picks · 1y

    Nginx vs Traefik: What Is the BEST Reverse Proxy?

    The post provides a detailed comparison between Nginx and Traefik when used as reverse proxies. It covers various metrics such as latency, throughput, error rate, CPU and memory usage, and network traffic. The analysis also delves into configuration differences, ease of use, and performance implications of both proxies. Additionally, practical recommendations for optimizing Nginx performance and leveraging Traefik's built-in functionalities are provided.

  21. Article
    Hacker News · 1y

    How to setup self hosted wiki for your startup

    Setting up a self-hosted wiki can be a cost-effective alternative for startups compared to using paid platforms like Confluence or Notion. Wiki.js, combined with PostgreSQL, can be easily set up using Docker Compose, and it allows you to maintain fixed costs regardless of the number of users. Additionally, adding Elasticsearch can enhance search functionality. Running a self-hosted wiki on AWS EC2 can be significantly cheaper, and making the setup production-ready involves implementing custom domains, DNS, load balancers, SSO, and regular backups.
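A hedged sketch of the Wiki.js plus PostgreSQL setup described above, as a Docker Compose file; image tags, credentials, and the port mapping are illustrative placeholders, so check the Wiki.js documentation for current recommended values.

```yaml
# Sketch of a Wiki.js + PostgreSQL stack via Docker Compose.
# Credentials and tags below are placeholders -- change before use.
version: "3"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: wiki
      POSTGRES_USER: wikijs
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data
  wiki:
    image: requarks/wiki:2
    depends_on:
      - db
    environment:
      DB_TYPE: postgres
      DB_HOST: db
      DB_PORT: "5432"
      DB_USER: wikijs
      DB_PASS: change-me
      DB_NAME: wiki
    ports:
      - "80:3000"
volumes:
  db-data:
```

The named volume keeps wiki content across container restarts; the production hardening the post mentions (custom domain, load balancer, SSO, backups) layers on top of this base.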

  22. Article
    Lobsters · 1y

    Zero Disk Architecture

Zero Disk Architecture involves offloading data storage to Amazon S3, allowing for scalable and elastic systems without managing stateful storage servers. This approach leverages the durability, availability, and cost-effectiveness of S3, making it suitable for various database systems, especially those prioritizing minimal latency and cost efficiency. Multiple systems, such as Snowflake and ClickHouse, already use S3 or similar services as their primary storage.
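The essence of the pattern is that compute nodes hold no local state and persist pages straight to an object store, so any fresh node can serve the same data. A toy sketch, with a dict standing in for S3 (the real calls would be `put_object`/`get_object`); all class and key names are illustrative.

```python
# Sketch of the zero-disk idea: a database node keeps no local state
# and persists pages straight to an object store. A dict stands in
# for S3; all names here are illustrative assumptions.

class ObjectStore:
    """In-memory stand-in for S3."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data: bytes):
        self._objects[key] = data
    def get(self, key) -> bytes:
        return self._objects[key]

class ZeroDiskTable:
    """A table whose pages exist only in the object store, so any
    stateless compute node pointed at the same bucket sees them."""
    def __init__(self, store, prefix):
        self.store, self.prefix = store, prefix
    def write_page(self, page_no, rows: bytes):
        self.store.put(f"{self.prefix}/page-{page_no}", rows)
    def read_page(self, page_no) -> bytes:
        return self.store.get(f"{self.prefix}/page-{page_no}")

s3 = ObjectStore()
node_a = ZeroDiskTable(s3, "events")
node_a.write_page(0, b"row1,row2")

# A freshly started node shares state instantly -- no disk to rebuild.
node_b = ZeroDiskTable(s3, "events")
assert node_b.read_page(0) == b"row1,row2"
```

Because all durable state lives in the object store, scaling out or replacing a node is just starting another process against the same prefix.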

  23. Article
    theburningmonk.com · 1y

    Here is one of the most misunderstood aspects of AWS Lambda

Understanding AWS Lambda's throttling behavior is crucial. Synchronous invocations are throttled immediately when concurrency limits are hit, while asynchronous invocations first land in an internal queue and so don't fail right away. However, throttled async invocations are only retried for up to six hours. The upshot is that you may not need an SNS topic in front of Lambda solely to absorb throttling errors, which removes complexity and cost.

  24. Article
    ITNEXT · 1y

    Deploy Virtual Kubernetes Clusters on EKS, AKS, and GKE

    Virtual Kubernetes clusters offer a way to manage isolated environments and CI/CD pipelines without the overhead of multiple physical clusters. This guide provides a step-by-step process to deploy virtual clusters on Amazon EKS, Google GKE, and Microsoft AKS. Requirements include tools like kubectl, Helm, and the vCluster CLI. Each section covers the specific setup instructions for each cloud provider, including storage configurations and cleanup procedures. Virtual clusters enable efficient, cloud-agnostic operations and provide robust isolation for various use cases, enhancing scalability and operational efficiency.

  25. Article
    Towards AI · 1y

    Learn Web Scraping With AWS Bedrock Agents

    Learn how to set up and deploy AWS Bedrock Agents for web scraping using AWS Lambda, Streamlit, and Anthropic Claude. Get an introduction to Bedrock Agents, including their components and capabilities, followed by a hands-on project to implement a simple web scraping agent. The project involves setting up a Lambda function, deploying a Streamlit app on EC2 for user interaction, and testing the agent's web scraping functionality.