AWS is launching S3 Files, a new feature that integrates Amazon EFS into S3, allowing any S3 bucket or prefix to be mounted as a network-attached filesystem on EC2 instances, containers, or Lambda functions. The post traces the design journey from genomics research data friction at UBC to the core architectural challenge of unifying file and object semantics. Rather than forcing a lowest-common-denominator merge, the team adopted a 'stage and commit' model that treats the file/object boundary as an explicit, first-class design element. Changes made via the filesystem are aggregated and committed back to S3 roughly every 60 seconds as atomic PUTs, while S3 remains the source of truth. The post details key design tradeoffs around consistency, authorization, namespace semantics, and performance, including a 'read bypass' feature achieving 3 GB/s per client for sequential reads. S3 Files joins S3 Tables and S3 Vectors as part of AWS's broader effort to make S3 a multi-paradigm data platform beyond pure object storage.
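The 'stage and commit' model described above can be sketched in a few lines. This is a toy illustration of the pattern, not the actual S3 Files implementation: the class name, the dict standing in for S3, and the read-your-writes fallback are all assumptions for the sake of the example.

```python
import threading

class StageAndCommitStore:
    """Toy model of 'stage and commit': filesystem writes are staged
    locally, then flushed to the object store as atomic whole-object
    PUTs on a fixed interval (the post cites roughly 60 seconds)."""

    def __init__(self, object_store, commit_interval=60.0):
        self.object_store = object_store   # dict standing in for S3; the source of truth
        self.staged = {}                   # uncommitted filesystem-side changes
        self.lock = threading.Lock()
        self.commit_interval = commit_interval

    def write(self, key, data):
        # Filesystem-side write: staged only, not yet visible in S3.
        with self.lock:
            self.staged[key] = data

    def read(self, key):
        # Reads on the mount prefer staged data (read-your-writes),
        # falling back to the last committed object.
        with self.lock:
            if key in self.staged:
                return self.staged[key]
        return self.object_store.get(key)

    def commit(self):
        # Flush staged changes as whole-object PUTs; each PUT is atomic,
        # so object readers see either the old version or the new one.
        with self.lock:
            batch, self.staged = self.staged, {}
        for key, data in batch.items():
            self.object_store[key] = data  # stand-in for s3.put_object(...)
```

In this sketch, a periodic timer (elided) would call `commit()` every `commit_interval` seconds; between commits, the mount sees its own writes while S3 continues to serve the last committed version of each object.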