Dev Database
Introduction
Some commands require a URL pointing to a "Dev Database": typically a temporary, locally running database that Atlas uses to process and validate users' schemas, migrations, and more. This requirement exists because Atlas cannot replicate every database type 'X' in every version 'Y' on its own.
To simplify the process of creating temporary databases for one-time use, Atlas can spin up an ephemeral local Docker container using the special docker driver, and clean it up at the end of the process. Here are a few examples of how to use the docker driver:
- PostgreSQL
- MySQL
- MariaDB
- SQLite
- SQL Server
- ClickHouse
- Redshift
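For instance, any command that accepts --dev-url can point it at an ephemeral container. The following PostgreSQL example is a sketch in which the target URL and schema file are placeholders:
atlas schema apply \
--url "postgres://app:pass@localhost:5432/app?sslmode=disable" \
--to "file://schema.pg.hcl" \
--dev-url "docker://postgres/15/dev?search_path=public"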
PostgreSQL
# When working on a single database schema, use the auto-created
# "public" schema as the search path.
--dev-url "docker://postgres/15/dev?search_path=public"
# When working on multiple database schemas.
--dev-url "docker://postgres/15/dev"
The docker driver also supports the postgis and pgvector images in its URL format:
--dev-url "docker://postgis/latest/dev"
--dev-url "docker://pgvector/pg17/dev"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+postgres://org/image/dev?search_path=public
docker+postgres://ghcr.io/namespace/image:tag/dev?search_path=public
# For local/official images, leave host empty or use "_".
docker+postgres://_/local/dev?search_path=public
docker+postgres://_/official:latest/dev?search_path=public
# When working on multiple database schemas.
docker+postgres://org/image/dev
# Default database is "postgres".
docker+postgres://org/image:tag
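These docker+postgres URLs can also be used directly as the dev URL of an environment in the project file. Below is a minimal sketch, assuming a custom image published as ghcr.io/acme/postgres-custom (a placeholder name):
env "local" {
  src = "file://schema.pg.hcl"
  dev = "docker+postgres://ghcr.io/acme/postgres-custom:15/dev?search_path=public"
}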
MySQL
# When working on a single database schema.
--dev-url "docker://mysql/8/dev"
# When working on multiple database schemas.
--dev-url "docker://mysql/8"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+mysql://org/image/dev
docker+mysql://user/image:tag/dev
# For local/official images, leave host empty or use "_".
docker+mysql:///local/dev
docker+mysql://_/mariadb:latest/dev
# When working on multiple database schemas.
docker+mysql://local
docker+mysql://org/image
docker+mysql://user/image:tag
docker+mysql://_/mariadb:latest
MariaDB
# When working on a single database schema.
--dev-url "docker://maria/latest/schema"
# When working on multiple database schemas.
--dev-url "docker://maria/latest"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+maria://org/image/dev
docker+maria://user/image:tag/dev
# For local/official images, leave host empty or use "_".
docker+maria:///local/dev
docker+maria://_/mysql:latest/dev
# When working on multiple database schemas.
docker+maria://local
docker+maria://org/image
docker+maria://user/image:tag
docker+maria://_/mariadb:latest
SQLite
# Atlas supports working with an in-memory database in SQLite.
--dev-url "sqlite://dev?mode=memory"
SQL Server
# When working on a single database schema.
--dev-url "docker://sqlserver/2022-latest/dev?mode=schema"
# When working on multiple database schemas.
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"
To work with an Azure SQL Edge Docker image or a specific SQL Server version, use one of the following formats:
# Run SQLServer 2017-latest in schema mode.
docker://sqlserver/2017-latest?mode=schema
docker://sqlserver/2019-latest?mode=schema
docker://sqlserver/2022-latest?mode=schema
# Run Azure SQL Edge 1.0.7 in schema mode.
docker+sqlserver://mcr.microsoft.com/azure-sql-edge:1.0.7?mode=schema
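To illustrate, a schema apply run against a SQL Server target could use one of these dev URLs. The target connection string and schema file below are placeholders:
atlas schema apply \
--url "sqlserver://sa:Passw0rd@localhost:1433?database=app" \
--to "file://schema.sql" \
--dev-url "docker://sqlserver/2022-latest/dev?mode=schema"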
ClickHouse
# When working on a single database schema.
--dev-url "docker://clickhouse/23.11/dev"
# When working on multiple database schemas.
--dev-url "docker://clickhouse/23.11"
To work with a custom Docker image, use one of the following formats:
# When working on a single database schema.
docker+clickhouse://org/image/dev
docker+clickhouse://user/image:tag/dev
# For local/official images, leave host empty or use "_".
docker+clickhouse:///local/dev
docker+clickhouse://_/clickhouse:latest/dev
# When working on multiple database schemas.
docker+clickhouse://local
docker+clickhouse://org/image
docker+clickhouse://user/image:tag
docker+clickhouse://_/clickhousedb:latest
If you are using ClickHouse Cloud, note that some of its advanced features are not supported in the Docker image. In that case, we recommend using another ClickHouse Cloud instance as the dev-database.
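In that setup, the dev URL points at the spare cloud instance instead of a Docker container. A rough sketch follows, where the host, credentials, and connection options (such as secure=true) are placeholders that depend on your instance:
--dev-url "clickhouse://user:pass@cloud-host:9440/dev?secure=true"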
Redshift
Redshift is a managed service and does not support running in a Docker container. To use a dev-database with Redshift, we recommend using a real Redshift cluster with a separate database.
# When working on a single database schema, use the auto-created
# "public" schema as the search path.
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public"
# When working on multiple database schemas.
--dev-url "redshift://user:pass@redshift-cluster:5439/dev"
Baseline Schema
The docker and the dev blocks are available for Atlas Pro users. To use these features, run:
atlas login
In some cases, you may need to configure a dev-database with a predefined schema state to ensure that any Atlas computation using the dev-database starts from this state. For example, this might be necessary if your schemas or migrations depend on objects not managed by you, such as PostgreSQL extensions or external functions used in your schema.
To configure a dev-database with a "baseline schema," you can use either the docker or dev blocks. Below are examples of both options.
Docker with baseline schema
The docker block allows you to run an ephemeral Docker container with the relevant image and set up a baseline schema state to apply after the container is created.
docker "postgres" "dev" {
image = "postgres:15"
schema = "public"
baseline = <<SQL
CREATE SCHEMA "auth";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" SCHEMA "auth";
CREATE TABLE "auth"."users" ("id" uuid NOT NULL DEFAULT auth.uuid_generate_v4(), PRIMARY KEY ("id"));
SQL
}
env "local" {
src = "file://schema.pg.hcl"
dev = docker.postgres.dev.url
}
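With this configuration in place, commands resolve the dev URL through the environment. For example:
# The ephemeral container is created, used, and cleaned up automatically.
atlas schema apply --env local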
When managing multiple schemas, or when there is a need to generate migrations with table qualifiers, the schema argument should not be set:
docker "postgres" "dev" {
image = "postgres:15"
// Use the file() function to read
// the contents of the baseline script.
baseline = file("baseline.sql")
}
When needed, the build block can be used to build the Docker image with custom configurations:
docker "postgres" "dev" {
image = "postgres:16-aws"
// The built image will be tagged as "postgres:16-aws".
build {
context = "path/to/context"
dockerfile = "Dockerfile"
target = "build-target"
args = {
key = "value"
}
}
}
MySQL and MariaDB users can use the docker "mysql" / docker "mariadb" blocks to achieve the same configuration.
The code for the above examples is available in the ariga/atlas-examples repository, within the migrate-diff-hcl and schema-apply-sql directories.
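As an illustration, a docker "mysql" block with a baseline might look like the following minimal sketch, mirroring the Postgres examples above; the image tag and baseline file name are placeholders:
docker "mysql" "dev" {
  image    = "mysql:8"
  baseline = file("baseline.sql")
}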
Dev-database with baseline schema
Sometimes, you might want to use a running database as a dev-database, like AWS RDS or another managed service, but still need to set up a baseline state that your schema or migrations depend on. In such cases, you can use the dev block to configure Atlas to connect to an empty database provided by the url argument, apply the baseline schema state to it, and use it for further computations. On exit, Atlas restores the database to its original state.
dev "postgres" "rds" {
url = var.dev_url
baseline = file("baseline.sql")
}
env "local" {
src = "file://schema.pg.hcl"
dev = dev.postgres.rds.url
}
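Since this example reads the dev URL from var.dev_url, the variable must also be declared in the project file. A sketch follows, with the variable name and URL as placeholders:
variable "dev_url" {
  type = string
}
Its value can then be supplied on the command line:
atlas schema apply --env local --var dev_url="postgres://user:pass@rds-host:5432/dev_db"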
Validation
The section below explains in more detail how Atlas uses the dev-database to process and validate users' schemas.
Suppose we want to add the following CHECK constraint to the table below:
table "t" {
schema = schema.test
column "c" {
type = int
}
check "ck" {
expr = "c <> d"
}
}
After running schema apply, we get the following error because the CHECK constraint is invalid, as column d does not exist.
atlas schema apply \
--url "mysql://root:pass@:3308/test" \
--to "schema.hcl"
-- Planned Changes:
-- Modify "t" table
ALTER TABLE `test`.`t` ADD CONSTRAINT `ck` CHECK (c <> d), DROP COLUMN `c1`, ADD COLUMN `c` int NOT NULL
✔ Apply
Error: modify "t" table: Error 1054: Unknown column 'd' in 'check constraint ck expression'
exit status 1
Atlas cannot predict such errors without applying the schema file to the database, because some cases require parsing and compiling SQL expressions, traversing their AST, and validating them. This logic is already implemented by the database engine.
Migration failures can leave the database in a broken state. Some databases, like MySQL, do not support transactional migrations due to implicit COMMIT. However, this can be avoided using the --dev-url option. Passing this option to schema apply will first create and validate the desired state (the HCL schema file) on temporary named databases (schemas), and only then continue to apply the changes if validation passes successfully.
atlas schema apply \
--url "mysql://root:pass@:3308/test" \
--to "schema.hcl" \
--dev-url "docker://mysql/8/test"
Error: create "t" table: Error 3820: Check constraint 'ck' refers to non-existing column 'd'.
exit status 1
Diffing
Atlas adopts the declarative approach for maintaining the schema's desired state, but provides two ways to manage and apply changes on the database: schema apply and migrate diff. In both commands, Atlas compares the "current" and the "desired" states and suggests a migration plan to migrate the "current" state to the "desired" state. For example, the "current" state can be an inspected database or a migration directory, and the "desired" state can be an inspected database, or an HCL file.
Schemas that are written in HCL files are defined in natural form by humans. However, databases store schemas in normal form (also known as canonical form). Therefore, when Atlas compares two different forms it may suggest incorrect or unnecessary schema changes, and using the --dev-url option can solve this (see the section above for a more in-depth example).
Let's see it in action by adding an index-expression to our schema.
table "t" {
schema = schema.test
column "c" {
type = varchar(32)
}
index "i" {
on {
expr = "upper(concat('c', c))"
}
}
}
atlas schema apply \
--url "mysql://root:pass@:3308/test" \
--to "schema.hcl"
-- Planned Changes:
-- Modify "t" table
ALTER TABLE `test`.`t` ADD INDEX `i` ((upper(concat('c', c))))
✔ Apply
We added a new index-expression to our schema, but using schema inspect will show our index in its normal form.
atlas schema inspect --url "mysql://root:pass@:3308/test"
table "t" {
schema = schema.test
column "c" {
null = false
type = varchar(32)
}
index "i" {
on {
expr = "upper(concat(_utf8mb4'c',`c`))"
}
}
}
Therefore, running schema apply again will suggest unnecessary schema changes.
atlas schema apply \
--url "mysql://root:pass@:3308/test" \
--to "schema.hcl"
-- Planned Changes:
-- Modify "t" table
ALTER TABLE `test`.`t` DROP INDEX `i`
-- Modify "t" table
ALTER TABLE `test`.`t` ADD INDEX `i` ((upper(concat('c', c))))
✔ Abort
Similarly to the previous example, we will use the --dev-url option to solve this.
atlas schema apply \
--url "mysql://root:pass@:3308/test" \
--to "schema.hcl" \
--dev-url "docker://mysql/8/test"
Schema is synced, no changes to be made
Hooray! Our desired schema is synced and no changes have to be made.
Atlas cleans up after itself! You can use the same instance of a "Dev Database" for multiple environments, as long as they are not accessed concurrently.
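For instance, two environments can share one long-running local instance as their dev-database, provided their Atlas runs do not overlap. A sketch with placeholder URLs:
env "staging" {
  src = "file://schema.hcl"
  url = "mysql://root:pass@staging-host:3306/app"
  dev = "mysql://root:pass@localhost:3306/dev"
}

env "prod" {
  src = "file://schema.hcl"
  url = "mysql://root:pass@prod-host:3306/app"
  dev = "mysql://root:pass@localhost:3306/dev"
}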