Part 4 of a Snowflake & Databricks interoperability series, covering the SF-Managed Iceberg pattern, in which Snowflake owns the full Iceberg table lifecycle while writing open-format Parquet data to your own cloud storage. It covers architecture setup (External Volumes, catalog integrations), full DML support, and how external engines such as Databricks, Trino, and PyIceberg can read from, or write back to, these tables via Snowflake's Horizon Catalog REST API using vended credentials. Key advantages include Snowflake Cortex AI enrichment at write time (downstream consumers get pre-enriched data without running their own AI infrastructure), Horizon governance policies, and zero data movement between platforms. It also addresses performance trade-offs between FDN and Iceberg tables, governance nuances when Databricks reads the raw Parquet files directly, and real-world use cases such as ML feature stores, data sharing without Snowflake accounts, and migration optionality.
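As a taste of the external-engine access described above, here is a minimal, hypothetical PyIceberg sketch of connecting to a REST catalog with credential vending. The endpoint URI, catalog name, warehouse, and table identifier are illustrative placeholders, not documented Snowflake values; consult the Horizon Catalog docs for the real endpoint and OAuth details.

```python
# Hypothetical sketch: an external engine reading a Snowflake-managed Iceberg
# table through an Iceberg REST catalog with vended credentials.
# All concrete values below are placeholders, not real Snowflake endpoints.

catalog_props = {
    "type": "rest",
    # Assumed shape of a Horizon Catalog REST endpoint (placeholder).
    "uri": "https://<account>.snowflakecomputing.com/polaris/api/catalog",
    "warehouse": "my_catalog",
    # Ask the catalog to vend short-lived storage credentials, so this
    # client never holds its own cloud-storage keys.
    "header.X-Iceberg-Access-Delegation": "vended-credentials",
    "credential": "<client_id>:<client_secret>",
}

def read_table(table_name: str):
    """Load the REST catalog and scan one table into a PyArrow table."""
    # Import here so the config above can be inspected without pyiceberg installed.
    from pyiceberg.catalog import load_catalog

    catalog = load_catalog("horizon", **catalog_props)
    return catalog.load_table(table_name).scan().to_arrow()

# read_table("db.schema.orders")  # requires a live endpoint and credentials
```

The point of the vended-credentials header is the "zero data movement" claim above: the engine reads the Parquet files in place from your storage, using temporary credentials scoped by the catalog rather than copies of the data.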
Table of contents
The Setup