Registry Backups for Versioned Migrations
Overview
As part of a broader business continuity strategy, Atlas supports features that ensure your deployment pipeline keeps running autonomously, with no single point of failure.
With license grant caching, CI/CD jobs can keep using Atlas Pro capabilities even without connectivity to Atlas Cloud. GitHub Actions, GitLab CI, and CircleCI work out of the box with built-in caching. Bitbucket Pipelines requires a small cache configuration. The Kubernetes operator supports persistent volumes for grant caching across pod restarts.
Migration directories can be backed up to your own storage with registry backups.
This guide focuses on configuring these backups for versioned migrations using migration.repo.backup.
With this setting:
- atlas migrate push writes to Atlas Cloud and to every configured backup URL.
- The push succeeds if at least one target succeeds (Cloud or one backup).
- Reads from atlas://... (for example in atlas migrate apply) try Atlas Cloud first and then fall back to backups in order.
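For illustration, with an atlas.hcl like the one shown later in this guide (a ci environment that pushes the local directory and a prod environment that reads atlas://app), the two directions look like this (a sketch; exact flags may vary by CLI version):

# Writes the migration directory to Atlas Cloud and to every configured backup URL.
atlas migrate push app --env ci

# Reads atlas://app from Atlas Cloud, falling back to the backups if Cloud is unreachable.
atlas migrate apply --env prod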
Prerequisites
- Atlas CLI
- An Atlas Cloud account with a project
- An Atlas Cloud bot token (see Creating a Bot User)
- S3 bucket for backups (see AWS guide)
- AWS credentials with read/write access to that bucket (see AWS guide)
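For the last prerequisite, the following is a minimal sketch of an IAM policy granting read/write access to the backup bucket used throughout this guide (my-atlas-backups is a placeholder name, and the exact set of actions Atlas needs may be broader; see the AWS guide):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBackupBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-atlas-backups"
    },
    {
      "Sid": "ReadWriteBackupObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-atlas-backups/*"
    }
  ]
}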
Full working example (GitHub Actions + S3)
1. Set up repository secrets
Store the following values as GitHub Actions repository secrets:
- ATLAS_CLOUD_TOKEN: Atlas Cloud bot token used by ariga/setup-atlas to authenticate the Atlas CLI in CI.
- DATABASE_URL: Connection string for the target database used by the deploy (apply) workflow (see URLs).
- AWS_ACCESS_KEY_ID: Access key ID for the IAM principal that can read/write the backup S3 bucket.
- AWS_SECRET_ACCESS_KEY: Secret access key paired with AWS_ACCESS_KEY_ID.
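If you prefer the GitHub CLI over the repository settings UI, a quick sketch for creating these secrets (the values are read from your local shell here; any method that creates the four secrets works):

gh secret set ATLAS_CLOUD_TOKEN --body "$ATLAS_CLOUD_TOKEN"
gh secret set DATABASE_URL --body "$DATABASE_URL"
gh secret set AWS_ACCESS_KEY_ID --body "$AWS_ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --body "$AWS_SECRET_ACCESS_KEY"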
2. atlas.hcl
Configure your atlas.hcl file with a ci environment used by the push workflow and a prod environment used by the deploy workflow, both pointing the app repository at the backup S3 bucket:
locals {
  backup_urls = [
    "s3://my-atlas-backups/example/migrations?region=us-east-1",
  ]
}

env "ci" {
  dev = "postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable"

  migration {
    dir = "file://migrations"
    repo {
      name   = "app"
      backup = local.backup_urls
    }
  }
}

env "prod" {
  url = getenv("DATABASE_URL")

  migration {
    # Pull migration directory from Atlas Registry.
    dir = "atlas://app"

    # Keep backup URLs here as well, so atlas:// reads can fall back.
    repo {
      name   = "app"
      backup = local.backup_urls
    }
  }
}
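Because backup is a list, you can replicate to more than one target, for example a second bucket in another region for disaster recovery. A sketch (the second bucket name and region are placeholders):

locals {
  backup_urls = [
    "s3://my-atlas-backups/example/migrations?region=us-east-1",
    "s3://my-atlas-backups-dr/example/migrations?region=us-west-2",
  ]
}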
3. GitHub Actions workflows
In this guide, both operations are shown together for clarity, but in production, you usually keep them separate:
- Push workflow: Runs on merge to main and updates Atlas Registry + backups once.
- Deploy workflow: Runs per environment/tenant/deployment target (for example prod, multi-tenant rollout, or Atlas Operator-based delivery).
Workflow A: Push on Merge (.github/workflows/versioned-push.yml)
This workflow runs on every merge to main that changes migrations/ or atlas.hcl. It authenticates with Atlas Cloud
and pushes the migration directory from the ci environment, which writes to Atlas Registry and replicates its state to
the configured S3 backup URL(s).
name: Versioned Push (Registry + Backups)
on:
  push:
    branches: [main]
    paths:
      - "migrations/**"
      - "atlas.hcl"
jobs:
  push:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: dev
          POSTGRES_PASSWORD: pass
        ports:
          - "5432:5432"
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - name: Push migration directory (Atlas Registry + S3 backup)
        uses: ariga/atlas-action/migrate/push@v1
        with:
          config: file://atlas.hcl
          env: ci
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
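After a successful run, you can confirm that the backup was written by listing the bucket prefix with the AWS CLI (a sketch; the exact object layout Atlas writes under the prefix is not covered here):

aws s3 ls s3://my-atlas-backups/example/migrations/ --recursive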
Workflow B: Deploy on Demand (.github/workflows/versioned-deploy.yml)
This workflow is manually triggered and applies migrations from atlas://app to your production target at DATABASE_URL.
Atlas first reads from Cloud and falls back to S3 backups if needed.
name: Versioned Deploy (Apply)
on:
  workflow_dispatch:
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - name: Apply from atlas://app (Cloud first, S3 fallback)
        uses: ariga/atlas-action/migrate/apply@v1
        with:
          config: file://atlas.hcl
          env: prod
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
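Before triggering the deploy workflow, you can preview what would be applied from the registry on your own machine. A sketch, assuming you are logged in with atlas login and have DATABASE_URL and the AWS credentials exported in your shell:

# Preview pending migrations from atlas://app without executing them.
atlas migrate apply --env prod --dry-run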
Inspect backup replication results
The atlas migrate push --format '{{ json . }}' output includes a Backups array listing the backup URLs. Empty fields are omitted; for example, if a backup succeeded, its Error field is not included:
{
  "Slug": "app",
  "URL": "atlas://app",
  "Link": "https://example.atlasgo.cloud/dirs/123",
  "Backups": [
    {
      "URL": "s3://my-atlas-backups/example/migrations?region=us-east-1"
    }
  ]
}
If a Cloud push fails but at least one backup succeeds, the command still succeeds. If all targets fail, it fails.
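If you push from a script rather than the GitHub action, you can parse this JSON to surface partial failures. A sketch using jq (the Error field name follows the output shape described above):

# Fail if any backup entry reports an error.
atlas migrate push app --env ci --format '{{ json . }}' \
  | jq -e '[.Backups[]? | select(.Error != null)] | length == 0' > /dev/null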