Atlas now supports Snowflake Iceberg tables as a first-class resource, with declarative management of Snowflake-managed and externally-cataloged tables, clustering and partition keys, primary/unique/foreign key constraints, and the full set of table parameters.
Snowflake Iceberg tables store data in the open Apache Iceberg format on object storage, while still being queried, written, and governed through Snowflake. The new iceberg_table block lets Atlas inspect, diff, and migrate them alongside regular tables, hybrid tables, and the rest of your Snowflake schema.
Snowflake-Managed Tables
The two required attributes are external_volume and catalog. For a Snowflake-managed table, set catalog to "SNOWFLAKE" and point external_volume at a configured external volume. Columns, primary keys, unique constraints, and either partition_by or cluster_by (the two are mutually exclusive) behave as you would expect from a standard table:
```hcl
iceberg_table "events" {
  schema           = schema.PUBLIC
  external_volume  = "SNOWFLAKE_MANAGED"
  catalog          = "SNOWFLAKE"
  comment          = "Snowflake-managed iceberg table for events"
  target_file_size = "16MB"
  partition_by     = ["\"id\"", "MONTH(\"created_at\")"]
  column "id" {
    type = NUMBER(10)
  }
  column "event_type" {
    type    = VARCHAR(134217728)
    default = "click"
  }
  column "created_at" {
    type = TIMESTAMP_NTZ(6)
  }
  primary_key {
    columns = [column.id]
  }
  unique "uq_event_type" {
    columns = [column.event_type]
  }
}
```
Atlas emits a single CREATE ICEBERG TABLE statement with constraints, partition clause, and parameters inlined:
```sql
CREATE ICEBERG TABLE "events" (
  "id" NUMBER(10, 0) NOT NULL,
  "event_type" VARCHAR(134217728) NOT NULL DEFAULT 'click',
  "created_at" TIMESTAMP_NTZ(6) NOT NULL,
  CONSTRAINT "PK_events_id" PRIMARY KEY ("id"),
  CONSTRAINT "uq_event_type" UNIQUE ("event_type")
) PARTITION BY ("id", MONTH("created_at"))
EXTERNAL_VOLUME = 'SNOWFLAKE_MANAGED'
CATALOG = 'SNOWFLAKE'
TARGET_FILE_SIZE = '16MB'
COMMENT = 'Snowflake-managed iceberg table for events';
```
Externally-Cataloged Tables
For tables sitting on your own bucket, point external_volume at an S3, GCS, or Azure volume and set base_location to the prefix where Iceberg metadata lives. Foreign keys can reference other Iceberg tables, and Atlas orders CREATE / DROP statements correctly across the dependency chain:
```hcl
iceberg_table "logs" {
  schema          = schema.PUBLIC
  external_volume = "EXT_S3_VOLUME"
  catalog         = "SNOWFLAKE"
  base_location   = "logs/"
  cluster_by      = ["\"level\""]
  column "log_id" {
    type = NUMBER(10)
  }
  column "event_id" {
    type = NUMBER(10)
  }
  column "level" {
    type    = VARCHAR(134217728)
    default = "INFO"
  }
  primary_key {
    columns = [column.log_id]
  }
  foreign_key "fk_event" {
    columns     = [column.event_id]
    ref_columns = [iceberg_table.events.column.id]
  }
}
```
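As with the managed table, Atlas renders this as a single CREATE ICEBERG TABLE statement. A rough sketch of the expected DDL follows; the generated constraint name "PK_logs_log_id" is an assumption based on the naming pattern shown above, and the exact clause ordering may differ:

```sql
-- Sketch of the DDL Atlas would emit for "logs" (not verbatim output);
-- the primary-key constraint name is assumed from the earlier pattern.
CREATE ICEBERG TABLE "logs" (
  "log_id" NUMBER(10, 0) NOT NULL,
  "event_id" NUMBER(10, 0) NOT NULL,
  "level" VARCHAR(134217728) NOT NULL DEFAULT 'INFO',
  CONSTRAINT "PK_logs_log_id" PRIMARY KEY ("log_id"),
  CONSTRAINT "fk_event" FOREIGN KEY ("event_id") REFERENCES "events" ("id")
) CLUSTER BY ("level")
EXTERNAL_VOLUME = 'EXT_S3_VOLUME'
CATALOG = 'SNOWFLAKE'
BASE_LOCATION = 'logs/';
```

Because "logs" references "events", Atlas plans the CREATE for "events" first (and the DROP for "logs" first when tearing down).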
Table Parameters
Every parameter from CREATE ICEBERG TABLE is diffed individually, so a change to a single attribute produces a single ALTER ICEBERG TABLE ... SET statement rather than a full table rebuild:
```hcl
iceberg_table "events" {
  schema                       = schema.PUBLIC
  external_volume              = "SNOWFLAKE_MANAGED"
  catalog                      = "SNOWFLAKE"
  target_file_size             = "16MB"
  enable_data_compaction       = false
  enable_iceberg_merge_on_read = true
  replace_invalid_characters   = true
  log_event_level              = WARN
  retention_time               = 7
  column "id" {
    type = NUMBER(10)
  }
  primary_key {
    columns = [column.id]
  }
}
```
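For example, changing only target_file_size in the desired state (say from "16MB" to "32MB", a hypothetical edit) plans a single parameter-level statement along these lines:

```sql
-- Hypothetical diff: only TARGET_FILE_SIZE changed in the desired state.
ALTER ICEBERG TABLE "events" SET TARGET_FILE_SIZE = '32MB';
```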
Supported parameters include target_file_size, enable_data_compaction, enable_iceberg_merge_on_read, replace_invalid_characters, catalog_sync, log_event_level, error_logging, and retention_time.
For the complete attribute reference, see the iceberg_table HCL documentation.