Automatic CockroachDB Schema Migrations with Atlas
CockroachDB is a distributed SQL database designed for cloud-native applications. It offers PostgreSQL compatibility, horizontal scalability, and strong consistency with high availability. CockroachDB is available as a fully managed cloud service (CockroachDB Cloud) or as a self-hosted deployment.
Support for CockroachDB is available exclusively to Pro users. To use this feature, run:
atlas login
Enter: Atlas
Atlas helps developers manage their database schema as code - abstracting away the intricacies of database schema management. With Atlas, users provide the desired state of the database schema and Atlas automatically plans the required migrations.
In this guide, we will dive into setting up Atlas for CockroachDB schema migrations, and introduce the different workflows available.
Prerequisites
- A CockroachDB cluster (CockroachDB Cloud or self-hosted).
- Atlas installed on your machine:
- macOS + Linux
- Homebrew
- Docker
- Windows
- CI
- Manual Installation
To download and install the latest release of the Atlas CLI, simply run the following in your terminal:
curl -sSf https://atlasgo.sh | sh
Get the latest release with Homebrew:
brew install ariga/tap/atlas
To pull the Atlas image and run it as a Docker container:
docker pull arigaio/atlas
docker run --rm arigaio/atlas --help
If the container needs access to the host network or a local directory, use the --net=host flag and mount the desired directory:
docker run --rm --net=host \
  -v $(pwd)/migrations:/migrations \
  arigaio/atlas migrate apply \
  --url "crdb://root@localhost:26257/defaultdb?sslmode=disable"
Download the latest release and move the atlas binary to a location on your system PATH.
GitHub Actions
Use the setup-atlas action to install Atlas in your GitHub Actions workflow:
- uses: ariga/setup-atlas@v0
with:
cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
Other CI Platforms
For other CI/CD platforms, use the installation script. See the CI/CD integrations for more details.
Logging in to Atlas
To use CockroachDB with Atlas, you'll need to log in to Atlas. If it's your first time, you will be prompted to create both an account and a workspace (organization):
- Via Web
- Via Token
- Via Environment Variable
atlas login
atlas login --token "ATLAS_TOKEN"
ATLAS_TOKEN="ATLAS_TOKEN" atlas login
Connecting to CockroachDB
Atlas uses the crdb:// scheme for connecting to CockroachDB clusters:
- CockroachDB Cloud
- Local/Self-hosted
- Connect to Schema
Connect to CockroachDB Cloud:
crdb://user:pass@<cluster-name>.<region>.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full
Example:
crdb://myuser:mypassword@new-crane-11797.jxf.gcp-europe-west3.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full
Connect to a local or self-hosted CockroachDB instance:
crdb://root@localhost:26257/defaultdb?sslmode=disable
Connect to a specific schema using the search_path query parameter:
crdb://user:pass@<cluster-name>.<region>.cockroachlabs.cloud:26257/database?search_path=public&sslmode=verify-full
- CockroachDB Cloud requires SSL connections (sslmode=verify-full)
- The default database is defaultdb
- The default port is 26257
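To make the URL anatomy concrete, here is a small Python sketch that splits the (placeholder) CockroachDB Cloud URL from the example above into the parts just described:

```python
from urllib.parse import urlsplit, parse_qs

# Placeholder credentials and cluster name, taken from the example URL above.
url = ("crdb://myuser:mypassword@new-crane-11797.jxf.gcp-europe-west3"
       ".cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full")
parts = urlsplit(url)

print(parts.scheme)                       # crdb
print(parts.port)                         # 26257
print(parts.path.lstrip("/"))             # defaultdb
print(parse_qs(parts.query)["sslmode"])   # ['verify-full']
```

The same decomposition applies to local URLs such as crdb://root@localhost:26257/defaultdb?sslmode=disable, where the userinfo has no password and SSL is disabled.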
Inspecting the Schema
The atlas schema inspect command supports reading the database description provided by a URL and outputting it in
different formats, including Atlas DDL (default), SQL, and JSON. In this guide, we will
demonstrate the flow using both the Atlas DDL and SQL formats, as the JSON format is often used for processing the
output using jq.
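As a sketch of such post-processing, the JSON output can also be consumed from a script. The sample document below is a hypothetical, heavily trimmed version of what atlas schema inspect --format '{{ json . }}' emits; the real output contains far more detail per table:

```python
import json

# Hypothetical, trimmed sample of `atlas schema inspect --format '{{ json . }}'` output.
inspect_output = (
    '{"schemas": [{"name": "public", '
    '"tables": [{"name": "users"}, {"name": "repos"}]}]}'
)

doc = json.loads(inspect_output)
# Collect all table names across all schemas.
tables = [t["name"] for s in doc["schemas"] for t in s.get("tables", [])]
print(tables)  # ['users', 'repos']
```

In practice, the equivalent one-liner with jq would pipe the inspect output through a filter such as '.schemas[].tables[].name'.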
- Atlas DDL (HCL)
- SQL
To inspect your CockroachDB cluster, use the -u flag and write the output to a file named schema.hcl:
atlas schema inspect \
-u "crdb://user:pass@cluster.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full" > schema.hcl
Open the schema.hcl file to view the Atlas schema that describes your database.
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
column "name" {
null = false
type = character_varying(255)
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}
To inspect your CockroachDB cluster, use the -u flag and write the output to a file named schema.sql:
atlas schema inspect \
-u "crdb://user:pass@cluster.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full" \
--format '{{ sql . }}' > schema.sql
Open the schema.sql file to view the inspected SQL schema that describes your database.
-- Create "users" table
CREATE TABLE "users" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
PRIMARY KEY ("id")
);
For in-depth details on the atlas schema inspect command, covering aspects like inspecting specific schemas,
handling multiple schemas concurrently, excluding tables, and more, refer to our documentation
here.
Declarative Migrations
The declarative approach, sometimes called "state-based migrations", lets users manage schemas by defining the desired state of the database as code. Atlas then inspects the target database and calculates an execution plan to reconcile the difference between the desired and actual states. Let's see this in action.
We will start off by making a change to our schema file, such as adding a repos table:
- Atlas DDL (HCL)
- SQL
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
column "name" {
null = false
type = character_varying(255)
}
primary_key {
columns = [column.id]
}
}
table "repos" {
schema = schema.public
column "id" {
type = bigint
null = false
}
column "name" {
type = character_varying(255)
null = false
}
column "owner_id" {
type = bigint
null = false
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}
-- Create "users" table
CREATE TABLE "users" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
PRIMARY KEY ("id")
);
-- Create "repos" table
CREATE TABLE "repos" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
"owner_id" bigint NOT NULL,
PRIMARY KEY ("id")
);
Now that our desired state has changed, we can ask Atlas to plan and apply the required migration by running the atlas schema apply command:
- Atlas DDL (HCL)
- SQL
atlas schema apply \
-u "crdb://user:pass@cluster.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full" \
--to file://schema.hcl \
--dev-url "docker://crdb/v25.1.1/dev"
atlas schema apply \
-u "crdb://user:pass@cluster.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full" \
--to file://schema.sql \
--dev-url "docker://crdb/v25.1.1/dev"
Approve the proposed changes, and that's it! You have successfully run a declarative migration.
For a more detailed description of the atlas schema apply command refer to our documentation
here.
Pushing schemas to Atlas
Similar to how Docker images are pushed to Docker Hub, you can push your schema to Atlas Cloud for versioning, collaboration, and deployment:
atlas schema push app \
--url "file://schema.hcl" \
--dev-url "docker://crdb/v25.1.1/dev"
Once pushed, Atlas prints a URL to the schema. You can then apply it to any database using the schema URL:
atlas schema apply \
-u "crdb://user:pass@cluster.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full" \
--to "atlas://app" \
--dev-url "docker://crdb/v25.1.1/dev"
This workflow allows you to manage your schema centrally and deploy it to multiple environments without having the schema files locally.
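For example, a minimal atlas.hcl environment that applies the pushed schema might look like the following sketch (the env name "prod" and the COCKROACH_URL variable are placeholders):

```hcl
env "prod" {
  // The schema pushed to Atlas Cloud above.
  src = "atlas://app"
  // Target database, taken from the environment.
  url = getenv("COCKROACH_URL")
  // Dev database used to plan the migration.
  dev = "docker://crdb/v25.1.1/dev"
}
```

With this file in place, running atlas schema apply --env prod applies the centrally stored schema to whichever database COCKROACH_URL points at.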
For more advanced workflows, you can use atlas schema plan to pre-plan and review migrations
before applying them. This enables teams to plan, lint, and review changes during the PR stage, edit generated
SQL if needed, and ensure no human intervention is required during deployment.
Versioned Migrations
Alternatively, the versioned migration workflow, sometimes called "change-based migrations", allows each change to the database schema to be checked in to source control and reviewed during code review. Users still benefit from Atlas intelligently planning migrations for them; however, the migrations are not applied automatically.
Creating the first migration
In the versioned migration workflow, our database state is managed by a migration directory. The migration directory holds all of the migration files created by Atlas, and the sum of all files in lexicographical order represents the current state of the database.
To create our first migration file, we will run the atlas migrate diff command, and we will provide the necessary parameters:
- --dir: the URL of the migration directory; defaults to file://migrations.
- --to: the URL of the desired state. A state can be specified using a database URL, an HCL or SQL schema, or another migration directory.
- --dev-url: a URL to a Dev Database that will be used to compute the diff.
- Atlas DDL (HCL)
- SQL
atlas migrate diff initial \
--to file://schema.hcl \
--dev-url "docker://crdb/v25.1.1/dev"
atlas migrate diff initial \
--to file://schema.sql \
--dev-url "docker://crdb/v25.1.1/dev"
Run ls migrations, and you'll notice that Atlas has automatically created a migration directory for us, as well as
two files:
- 20240221153232_initial.sql
- atlas.sum
-- Create "users" table
CREATE TABLE "public"."users" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
PRIMARY KEY ("id")
);
-- Create "repos" table
CREATE TABLE "public"."repos" (
"id" bigint NOT NULL,
"name" character varying(255) NOT NULL,
"owner_id" bigint NOT NULL,
PRIMARY KEY ("id")
);
h1:example-hash=
20240221153232_initial.sql h1:another-hash=
The migration file represents the current state of our database, and the sum file is used by Atlas to maintain the integrity of the migration directory. To learn more about the sum file, read the documentation.
Pushing migration directories to Atlas
Now that we have our first migration, we can apply it to a database. There are multiple ways to accomplish this, with most methods covered in the guides section. In this example, we'll demonstrate how to push migrations to Atlas Cloud, much like how Docker images are pushed to Docker Hub.
Let's name our new migration project app and run atlas migrate push:
atlas migrate push app \
--dev-url "docker://crdb/v25.1.1/dev"
Once the migration directory is pushed, Atlas prints a URL to the created directory.
Applying migrations
Once our app migration directory has been pushed, we can apply it to a database from any CD platform without needing the migration files locally.
We'll create a simple Atlas configuration file (atlas.hcl) to store the settings for our environment:
variable "cockroach_url" {
type = string
default = getenv("COCKROACH_URL")
}
env "cockroach" {
url = var.cockroach_url
migration {
dir = "atlas://app"
}
}
The final step is to apply the migrations to the database. Let's run atlas migrate apply with the --env flag
to instruct Atlas to select the environment configuration from the atlas.hcl file:
atlas migrate apply --env cockroach
After applying the migration, you should receive a link to the deployment and the database where the migration was applied.
Next Step: Setup CI/CD
Once you have your migration directory set up, the next step is to integrate Atlas into your CI/CD pipeline. Atlas provides native integrations for popular platforms:
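As a sketch of what such a pipeline can look like on GitHub Actions, the workflow below combines the setup-atlas action shown earlier with the atlas migrate apply --env cockroach command from this guide (the workflow name, branch, and secret names are placeholders):

```yaml
name: Deploy migrations
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install Atlas and authenticate with Atlas Cloud.
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      # Apply the migrations using the "cockroach" env from atlas.hcl.
      - run: atlas migrate apply --env cockroach
        env:
          COCKROACH_URL: ${{ secrets.COCKROACH_URL }}
```

This mirrors the local flow: the atlas.hcl file defines the environment, and the database URL is supplied through the COCKROACH_URL environment variable.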
Dev Database
Atlas uses a dev database to normalize schemas, validate them, and simulate migrations. This temporary database allows Atlas to detect errors early and generate accurate migration plans.
Recommended approach: Use the Docker-based dev database for CockroachDB:
# When working on a single database schema.
--dev-url "docker://crdb/v25.1.1/dev"
# When working on multiple database schemas.
--dev-url "docker://crdb/v25.1.1"
Example configuration:
env "cockroach" {
src = "file://schema.sql"
url = var.cockroach_url
// Docker-based dev database
dev = "docker://crdb/v25.1.1/dev"
}
Wrapping Up
In this guide, we demonstrated how to set up Atlas to manage your CockroachDB database schema. We covered both declarative and versioned migration workflows, and showed how to generate migrations, push them to an Atlas workspace, and apply them to your databases. Atlas has many more features to explore. To learn more, check out the Atlas documentation.
As always, we would love to hear your feedback and suggestions on our Discord server.
Additional Resources
To learn more about CockroachDB, check out the official documentation: