BCM/BCP for Declarative Schemas

Overview

As part of a broader business continuity strategy, Atlas supports features that ensure your deployment pipeline keeps running autonomously, with no single point of failure.

With license grant caching, CI/CD jobs can keep using Atlas Pro capabilities even without connectivity to Atlas Cloud. GitHub Actions, GitLab CI, and CircleCI work out of the box with built-in caching. Bitbucket Pipelines requires a small cache configuration. The Kubernetes operator supports persistent volumes for grant caching across pod restarts.
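
For Bitbucket Pipelines, a user-defined cache is enough. A minimal sketch, assuming the Atlas CLI keeps its grant cache in its default ~/.atlas directory (adjust the path if your setup differs):

bitbucket-pipelines.yml
definitions:
  caches:
    atlas: ~/.atlas  # assumed grant cache location; persisted between runs

pipelines:
  default:
    - step:
        caches:
          - atlas
        script:
          - atlas version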

Schema state can be backed up to your own storage with registry backups. This guide focuses on configuring these backups for declarative workflows using schema.repo.backup.

With this setting:

  1. atlas schema push replicates schema state to Atlas Cloud and backup URLs.
  2. atlas schema plan ... --push (or atlas schema plan push) also replicates plans to the backup URLs.
  3. Reads of atlas://... schema state try Atlas Cloud first and fall back to the backup URLs when eligible (see the CLI sketch after this list).
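
Condensed to plain CLI calls, the three behaviors look roughly like this (the ci and prod environments are the ones defined in the example later in this guide):

# 1. Replicate schema state to Atlas Cloud and every configured backup URL.
atlas schema push --env ci --config file://atlas.hcl

# 2. Pre-plan a change; the plan is replicated to the backup URLs as well.
atlas schema plan --env ci --config file://atlas.hcl --push

# 3. Apply the desired state; atlas://app is read from Atlas Cloud first,
#    with fallback to the backups when Cloud is unreachable.
atlas schema apply --env prod --config file://atlas.hcl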

Prerequisites

  • An Atlas Cloud account and a bot token with permission to push to and read from the schema registry.
  • A schema repo in the Atlas Registry (named app in the examples below).
  • An S3 bucket for backups and AWS credentials with read/write access to it.
  • A GitHub repository with Actions enabled.

Full working example (GitHub Actions + S3)

1. Set up repository secrets

Store the following values as GitHub Actions repository secrets:

  • ATLAS_CLOUD_TOKEN: Atlas Cloud bot token used by ariga/setup-atlas to authenticate Atlas CLI in CI.
  • DATABASE_URL: Connection string for the target database used by the deploy (apply) workflow (see URLs).
  • AWS_ACCESS_KEY_ID: Access key ID for the IAM principal that can read/write the backup S3 bucket.
  • AWS_SECRET_ACCESS_KEY: Secret access key paired with AWS_ACCESS_KEY_ID.
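
If you prefer the terminal to the repository settings page, the same secrets can be created with the GitHub CLI (the values below are placeholders):

gh secret set ATLAS_CLOUD_TOKEN --body "<atlas-cloud-bot-token>"
gh secret set DATABASE_URL --body "postgres://app:<password>@prod-db:5432/app"
gh secret set AWS_ACCESS_KEY_ID --body "<access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"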

2. atlas.hcl

Configure your atlas.hcl file with the schema source, the Atlas Registry repo name, and the backup S3 URLs:

atlas.hcl
locals {
  backup_urls = [
    "s3://my-atlas-backups/example/schemas?region=us-east-1",
  ]
}

env "ci" {
  dev = getenv("DEV_DATABASE_URL")

  schema {
    src = "file://schema.hcl"
    repo {
      name   = "app"
      backup = local.backup_urls
    }
  }
}

env "prod" {
  url = getenv("DATABASE_URL")
  dev = getenv("DEV_DATABASE_URL")

  schema {
    # Pull desired state from Atlas Registry.
    src = "atlas://app"
    # Keep backup URLs here as well, so atlas:// reads can fall back.
    repo {
      name   = "app"
      backup = local.backup_urls
    }
  }
}
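
The ci environment reads its desired state from file://schema.hcl, which this guide does not otherwise show. A minimal placeholder for illustration, assuming a PostgreSQL target (any valid Atlas schema works):

schema.hcl
schema "public" {}

table "users" {
  schema = schema.public
  column "id" {
    type = int
  }
  primary_key {
    columns = [column.id]
  }
}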

3. GitHub Actions workflows

This guide shows both workflows side by side for clarity, but in production, you usually keep them separate:

  • Push workflow: Runs on merge to main and updates Atlas Registry + backups once.
  • Deploy workflow: Runs per environment/tenant/deployment target (for example prod, multi-tenant rollout, or Atlas Operator-based delivery).

Workflow A: Push on Merge (.github/workflows/declarative-push.yml)

This workflow runs on every merge to main that changes schema.hcl or atlas.hcl. It authenticates with Atlas Cloud and pushes schema state from the ci environment, replicating the registry state to the configured S3 backup URL(s).

.github/workflows/declarative-push.yml
name: Declarative Push (Registry + Backups)

on:
  push:
    branches: [main]
    paths:
      - "schema.hcl"
      - "atlas.hcl"

jobs:
  push:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: dev
          POSTGRES_PASSWORD: pass
        ports:
          - "5432:5432"
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4

      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}

      - name: Push schema state (Atlas Registry + S3 backup)
        uses: ariga/atlas-action/schema/push@v1
        with:
          config: file://atlas.hcl
          env: ci
        env:
          DEV_DATABASE_URL: postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
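
The AWS credentials used by the push step only need read/write access to the backup location. A minimal IAM policy sketch, assuming the bucket and prefix from this example (ListBucket is included defensively; your exact needs may differ):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-atlas-backups/example/schemas/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-atlas-backups"
    }
  ]
}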

Workflow B: Deploy on Demand (.github/workflows/declarative-deploy.yml)

This workflow is manually triggered and applies the desired state from atlas://app to your production target at DATABASE_URL. Atlas first reads from Cloud and can fall back to S3 backups if needed.

.github/workflows/declarative-deploy.yml
name: Declarative Deploy (Apply)

on:
  workflow_dispatch:

jobs:
  apply:
    runs-on: ubuntu-latest
    # Dev database for Atlas to compute the diff; matches DEV_DATABASE_URL below.
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: dev
          POSTGRES_PASSWORD: pass
        ports:
          - "5432:5432"
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4

      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}

      - name: Apply from atlas://app (Cloud first, S3 fallback)
        uses: ariga/atlas-action/schema/apply@v1
        with:
          config: file://atlas.hcl
          env: prod
          auto-approve: true
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          DEV_DATABASE_URL: postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
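
Because the deploy workflow is gated behind workflow_dispatch, it can be started from the Actions tab or from the terminal with the GitHub CLI:

gh workflow run declarative-deploy.yml --ref main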

Optional: Push Pre-Planned Migrations with Backups

With DEV_DATABASE_URL set, the output of atlas schema plan --push includes the configured backup URLs in its Backups field:

atlas schema plan \
  --env ci \
  --config file://atlas.hcl \
  --from file://schema_prev.hcl \
  --to file://schema.hcl \
  --repo atlas://app \
  --push \
  --auto-approve \
  --format '{{ json . }}'

Example (truncated):

{
  "File": {
    "URL": "atlas://app/plans/plan1",
    "Status": "APPROVED"
  },
  "Backups": [
    {
      "URL": "s3://my-atlas-backups/example/schemas?region=us-east-1"
    }
  ]
}
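
Once pushed and approved, the plan can later be applied by its registry URL. A sketch, assuming the plan URL from the output above and the prod environment defined earlier:

atlas schema apply \
  --env prod \
  --config file://atlas.hcl \
  --plan "atlas://app/plans/plan1"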