Can I store migration files in S3? (CLI and Terraform examples)
With Atlas, we advocate for treating migration directories as deployment artifacts produced by a structured build process. The preferred approach is to push migration directories to the Atlas Schema Registry. Beyond storing migration directories, the Schema Registry provides tight integration with the Atlas CLI and the Atlas Cloud UI, allowing you to deploy migrations, visualize schemas over time, review deployment logs and errors, and more.
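For reference, pushing a migration directory to the Schema Registry is typically a single CLI step. A minimal sketch (the directory name `app` and the dev database URL are placeholders):

```shell
# Push the local migration directory to the Atlas Schema Registry.
atlas migrate push app \
  --dev-url "docker://postgres/15/dev?search_path=public"
```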
However, some users prefer to store their migration directories in S3, typically due to internal policies or requirements.
How to deploy migrations from S3 (Atlas Pro)

Note: the `blob_dir` data source is available to Atlas Pro users only.

To deploy migrations from S3, use the `blob_dir` data source in your `atlas.hcl` file:
data "blob_dir" "migrations" {
url = "s3://my-s3-bucket/path/to/migrations?profile=aws-profile"
}
env "dev" {
url = var.url
migration {
dir = data.blob_dir.migrations.url
}
}
You can use the `profile` query parameter to select a specific AWS profile from your local AWS credentials file, or set the credentials using the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
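For example, with static credentials exported in the environment, applying from the `dev` env above might look like the following. This is a sketch rather than a verbatim recipe: the credential values and the database URL are placeholders, and it assumes a `variable "url"` block is declared in `atlas.hcl`:

```shell
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"

# Apply pending migrations using the "dev" env defined in atlas.hcl.
# The url value below is a placeholder for your target database.
atlas migrate apply --env dev \
  --var url="postgres://postgres:pass@localhost:5432/app?sslmode=disable"
```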
How to deploy from blob_dir in Terraform?

To deploy migrations from S3 using the Atlas Terraform provider, use the `atlas_migration` resource in combination with a custom configuration block:
```hcl
terraform {
  required_providers {
    atlas = {
      version = "0.9.8"
      source  = "ariga/atlas"
    }
  }
}

locals {
  config = <<HCL
variable "url" {
  type    = string
  default = getenv("DB_URL")
}

variable "s3_bucket" {
  type = string
}

data "blob_dir" "migrations" {
  url = var.s3_bucket
}

env {
  name = atlas.env
  url  = var.url
  dev  = "docker://postgres/15/dev?search_path=public"
  migration {
    dir = data.blob_dir.migrations.url
  }
}
HCL

  vars = jsonencode({
    s3_bucket = "s3://my-s3-bucket/migrations?profile=tf-dev"
  })
}

data "atlas_migration" "hello" {
  config    = local.config
  variables = local.vars
  env_name  = "foo"
}

resource "atlas_migration" "testdb" {
  version   = data.atlas_migration.hello.latest
  config    = local.config
  variables = local.vars
  env_name  = "foo"
}
```
Assuming we have a local Postgres database running:
```shell
docker run --name some-postgres -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres
```
Set the `DB_URL` environment variable to connect to the database:

```shell
export DB_URL='postgresql://postgres:mysecretpassword@localhost:5432?search_path=public&sslmode=disable'
```
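If you haven't initialized the working directory yet, run `terraform init` first so Terraform downloads the `ariga/atlas` provider:

```shell
terraform init
```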
Then, run the migration:

```shell
terraform apply
```
This will apply the latest migration from the S3 bucket specified in the `s3_bucket` variable:
```text
data.atlas_migration.hello: Reading...
data.atlas_migration.hello: Read complete after 6s [id=8c1d198f-23dd-61ca-0f97-2a3ea1df1fb6]
atlas_migration.testdb: Refreshing state... [id=7cac2593-a043-69f3-7383-ab1f27b14654]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # atlas_migration.testdb will be created
  + resource "atlas_migration" "testdb" {
      + config    = <<-EOT
            variable "url" {
              type    = string
              default = getenv("DB_URL")
            }

            variable "s3_bucket" {
              type = string
            }

            data "blob_dir" "migrations" {
              url = var.s3_bucket
            }

            env {
              name = atlas.env
              url  = var.url
              dev  = "docker://postgres/15/dev?search_path=public"
              migration {
                dir = data.blob_dir.migrations.url
              }
            }
        EOT
      + env_name  = "foo"
      + id        = (known after apply)
      + status    = (known after apply)
      + variables = jsonencode({
            s3_bucket = "s3://my-s3-bucket/migrations?profile=tf-dev"
        })
      + version   = "20250618181441"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

atlas_migration.testdb: Creating...
atlas_migration.testdb: Still creating... [10s elapsed]
atlas_migration.testdb: Creation complete after 18s [id=d9eb9c09-0c95-f1dd-46e0-0de31df39581]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
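To double-check what was applied, standard Terraform state inspection works here as well. A minimal sketch using the resource address from the example above:

```shell
# Show the attributes Terraform recorded for the migration resource,
# including the applied version and the status reported by the provider.
terraform state show atlas_migration.testdb
```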