
CI/CD for Databases on GitHub Actions

GitHub Actions is a popular CI/CD platform integrated with GitHub repositories. It allows users to automate workflows for building, testing, and deploying applications.

In this guide, we will demonstrate how to use GitHub Actions and Atlas to set up CI pipelines for your database schema changes.

Prerequisites

Installing Atlas

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

After installing Atlas locally, log in to your organization by running the following command:

atlas login

Creating a bot token

To report CI run results to Atlas Cloud, create an Atlas Cloud bot token by following these instructions and copy it.

Next, in your GitHub repository, go to Settings -> Secrets and variables -> Actions and create a new secret named ATLAS_CLOUD_TOKEN. Paste your token in the value field.

Creating a secret for your database URL

To avoid having plain-text database URLs that may contain sensitive information in your configuration files, create another secret named DB_URL and populate it with the URL (connection string) of your database.

To learn more about formatting URLs for different databases, see the URL documentation.
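As an illustration, a PostgreSQL connection string typically has the shape below. The user, password, host, and database name here are placeholders; substitute your own values before storing the result in the DB_URL secret:

```shell
# Hypothetical PostgreSQL URL; every component below is a placeholder.
export DB_URL="postgres://app_user:app_pass@db.example.com:5432/app?search_path=public&sslmode=require"
echo "$DB_URL"
```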

Creating a GitHub personal access token (optional)

Atlas needs permission to post lint reports as comments on pull requests. To enable this, create a personal access token with the repo scope. Then, add it as a secret named GITHUB_TOKEN.

Choose a workflow

Atlas supports two types of schema management workflows:

  • Versioned Migrations - Changes to the schema are defined as migrations (SQL scripts) and applied in sequence to reach the desired state.
  • Declarative Migrations - The desired state of the database is defined as code, and Atlas calculates the migration plan to apply it.

To learn more about the differences and tradeoffs between these approaches, see the Declarative vs Versioned article.

Versioned Migrations Workflow

In the versioned workflow, changes to the schema are represented by a migration directory in your codebase. Each file in this directory represents a transition to a new version of the schema.

Based on our blueprint for Modern CI/CD for Databases, our pipeline will:

  1. Lint new migration files whenever a pull request is opened.
  2. Push the migration directory to the Schema Registry when changes are merged to the main branch.
  3. Apply new migrations to our database.

Pushing a migration directory to Atlas Cloud

Run the following command from the parent directory of your migration directory to create a "migration directory" repo in your Atlas Cloud organization (replace "app" with the name you want to give to your new repository):

$ atlas migrate push app \
    --dev-url "docker://postgres/15/dev?search_path=public"

info: If the migration directory contains multiple schemas, adjust the dev-url accordingly.
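For example, when the directory manages more than one schema, the dev URL can be scoped to the whole database rather than a single schema by dropping the search_path query parameter. This is a sketch based on the docker:// URL used above:

```
docker://postgres/15/dev
```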

Atlas will print a URL leading to your migrations on Atlas Cloud. You can visit this URL to view your migrations.

Setting up GitHub Actions

Create a .github/workflows/ci-atlas.yaml file with the following content, based on the type of your database. Remember to replace "app" with the real name of your repository.

.github/workflows/ci-atlas.yaml
name: Atlas
on:
  push:
    branches:
      - master
    paths:
      - .github/workflows/ci-atlas.yaml
      - 'migrations/*'
  pull_request:
    paths:
      - 'migrations/*'
# Permissions to write comments on the pull request.
permissions:
  contents: read
  pull-requests: write
jobs:
  atlas:
    services:
      # Spin up a postgres:15 container to be used as the dev-database for analysis.
      postgres:
        image: postgres:15
        env:
          POSTGRES_DB: dev
          POSTGRES_PASSWORD: pass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 5
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: '${{ secrets.ATLAS_CLOUD_TOKEN }}'
      - uses: ariga/atlas-action/migrate/lint@v1
        with:
          dir: 'file://migrations'
          dev-url: 'postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable'
          dir-name: 'app'
        env:
          GITHUB_TOKEN: '${{ github.token }}'
      - uses: ariga/atlas-action/migrate/push@v1
        if: github.ref == 'refs/heads/master'
        with:
          dir: 'file://migrations'
          dev-url: 'postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable'
          dir-name: 'app'
      - uses: ariga/atlas-action/migrate/apply@v1
        if: github.ref == 'refs/heads/master'
        with:
          dir: 'file://migrations'
          url: '${{ secrets.DB_URL }}'

Let's break down what this file is doing:

  1. The migrate-lint step will run on every pull request. If new migrations are detected, Atlas will lint them and post the report as a comment on the pull request. This helps you catch issues early in the development process.

  2. The migrate-push step will run only when the pull request is merged into the main branch. It will push the new migration directory to the Schema Registry on Atlas Cloud, allowing you to manage your migrations in a centralized location.

  3. The migrate-apply step will then deploy the new migrations to your database.

Testing our action

Let's take our new workflow for a spin. We will create a new migration, push it to the repository, and see how the GitHub Action runs.

  1. Locally, create a new branch and add a new migration with atlas migrate new --edit. Paste the following in the editor:

     CREATE TABLE "test" ("c1" int);

  2. Commit and push the changes.
  3. In GitHub, open a new pull request for the branch you just pushed. This will trigger the migrate-lint step in the workflow.
  4. View the lint report generated by Atlas. Follow the links to see the changes visually on Atlas Cloud.
  5. Merge the pull request into the main branch. This will trigger the migrate-push and migrate-apply steps in the workflow.
  6. When the pipeline is finished running, check your database to see if the changes were applied.

Declarative Migrations Workflow

In the declarative workflow, developers provide the desired state of the database as code. Atlas can read database schemas from various formats such as plain SQL, Atlas HCL, ORM models, and even another live database. Atlas then connects to the target database and calculates the diff between the current state and the desired state. It then generates a migration plan to bring the database to the desired state.

In this guide, we will use the SQL schema format.
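Outside CI, the same declarative flow can be exercised locally with the Atlas CLI. This is a minimal sketch, assuming the DB_URL variable and the schema.sql file described below:

```
# Sketch: diff the live database against schema.sql and apply the
# calculated plan (Atlas prompts for approval before running it).
atlas schema apply \
  --url "$DB_URL" \
  --to "file://schema.sql" \
  --dev-url "docker://postgres/15/dev?search_path=public"
```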

Our goal

When a pull request contains changes to the schema, we want Atlas to:

  • Compare the current state (your database) with the new desired state
  • Create a migration plan to show the user for approval
  • Mark the plan as approved when the pull request is approved and merged
  • Use the approved plan to apply the changes to the database during deployment

Creating a simple SQL schema

Create a file named schema.sql and paste the following:

-- create table "users"
CREATE TABLE users(
  id int NOT NULL,
  name varchar(100) NULL,
  PRIMARY KEY(id)
);

-- create table "blog_posts"
CREATE TABLE blog_posts(
  id int NOT NULL,
  title varchar(100) NULL,
  body text NULL,
  author_id int NULL,
  PRIMARY KEY(id),
  CONSTRAINT author_fk FOREIGN KEY(author_id) REFERENCES users(id)
);

Then, create a configuration file for Atlas named atlas.hcl as follows:

atlas.hcl
env "github" {
  url = getenv("DB_URL")
  schema {
    src = "file://schema.sql"
    repo {
      name = "app"
    }
  }
}

Pushing the schema to Atlas Cloud

To push our initial schema to the Schema Registry on Atlas Cloud, run the following command:

$ atlas schema push app \
    --dev-url "docker://postgres/15/dev?search_path=public" \
    --env github

Setting up GitHub Actions

Create a .github/workflows/ci-atlas.yaml file with the following content, based on the type of your database.

.github/workflows/ci-atlas.yaml
name: Atlas
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  workflow_dispatch:
# Permissions to write comments on the pull request.
permissions:
  contents: read
  pull-requests: write
jobs:
  plan:
    if: ${{ github.event_name == 'pull_request' }}
    services:
      # Spin up a postgres:15 container to be used as the dev-database for analysis.
      postgres:
        image: postgres:15
        env:
          POSTGRES_DB: dev
          POSTGRES_PASSWORD: pass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 5
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/schema/plan@v1
        with:
          env: github
          dev-url: 'postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable'
  push:
    if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
    services:
      # Spin up a postgres:15 container to be used as the dev-database for analysis.
      postgres:
        image: postgres:15
        env:
          POSTGRES_DB: dev
          POSTGRES_PASSWORD: pass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 5
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/schema/plan/approve@v1
        id: plan-approve
        with:
          env: github
          dev-url: 'postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable'
      - uses: ariga/atlas-action/schema/push@v1
        with:
          env: github
          dev-url: 'postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable'
      - uses: ariga/atlas-action/schema/apply@v1
        with:
          env: github
          dev-url: 'postgres://postgres:pass@localhost:5432/dev?search_path=public&sslmode=disable'
info: Make sure the DB_URL environment variable is set to the URL of your database in your GitHub Actions environment.
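One way to do this, sketched below, is to expose the DB_URL secret created in the Prerequisites as a job-level environment variable in each job that reads atlas.hcl:

```yaml
# Sketch: make getenv("DB_URL") in atlas.hcl resolve at run time.
env:
  GITHUB_TOKEN: ${{ github.token }}
  DB_URL: ${{ secrets.DB_URL }}
```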

Let's break down what this file is doing:

  1. When a new pull request is opened, the plan job will check if the desired state of the schema was changed. If it was, Atlas will generate a migration plan, lint it and post the report as a pull request comment.

  2. When the pull request is merged, two things happen: First, the updated schema is pushed to the schema registry by the schema/push step. Second, the plan created in the pull request will be approved.

  3. The schema-apply step will then use the approved plan to apply the new schema state to the database.

Testing our workflow

Let's see our CI/CD workflow in action.

Add an "address" column to the users table:

schema.sql
  name varchar(100) NULL,
+ address varchar(100) NULL,
  PRIMARY KEY(id)

Commit the change to a new branch, push it to GitHub, and open a pull request. The plan job will use Atlas to create a migration plan from the current state of the database to the new desired state, and post it as a comment on the pull request.

There are two things to note:

  • The comment includes instructions for editing the plan. This is useful when the plan has lint issues (for example, dropping a column will raise a "destructive changes" error).

  • The plan is created in a "pending" state, which means Atlas can't use it yet against the real database.
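For the column addition above, the plan Atlas generates would typically be a single additive statement along these lines (illustrative; the exact SQL may differ):

```sql
-- Illustrative plan for the "address" column addition.
ALTER TABLE "users" ADD COLUMN "address" varchar(100) NULL;
```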

After merging the changes to the main branch, the workflow will run the push job:

  1. The schema-plan-approve step will approve the plan that was generated earlier.

  2. The schema-push step will update the schema registry with the new desired state.

  3. The schema-apply step will deploy the changes to our database.

The last thing to do is to inspect our database to make sure the changes were applied correctly:

$ atlas schema diff \
    --from "$DB_URL" \
    --to "file://schema.sql" \
    --dev-url "docker://postgres/15/dev?search_path=public"
Schemas are synced, no changes to be made.