Tame Complex PostgreSQL Schemas with Atlas, a Terraform for Databases
As applications grow, their underlying database schemas inevitably grow more complex. What often starts as an afterthought handled by a single developer quickly turns into a critical, high-risk responsibility that demands precision and discipline.
Tools like Flyway and Liquibase automate the application of schema changes, but they stop short of addressing the real pain points: planning and validating those changes before they hit production. These steps remain largely manual, error-prone, and disliked by most developers.
Atlas is designed to fill this gap by automating the entire lifecycle of schema changes. Inspired by Terraform, Atlas provides a declarative approach to database schema management, enabling teams to define their schemas as code and automate the planning, validation, and application of changes.
Why Terraform for Databases?
Infrastructure teams have standardized on tools like Terraform to manage cloud resources declaratively. Databases, despite being critical infrastructure, often remain outside this workflow. Schema changes are still handled manually or with ad-hoc migration scripts, leading to drift, unpredictability, and production risks.
Atlas applies these same declarative principles to databases. By treating your schema as code, you get version control, automated planning, validation, and safe application of changes - all integrated into your CI/CD pipelines. This reduces risk, improves velocity, and gives teams confidence when evolving critical data infrastructure.
Automation is always a welcome improvement for any team, but it is especially crucial for teams managing complex databases, where the system's reliability and performance depend on the ability to make changes quickly and safely.
Setting the stage
Let's show how to use Atlas to manage a fairly complex PostgreSQL schema. While we regularly see customers managing schemas with thousands of objects, we'll use a schema that's a bit more manageable, but still demonstrates the power of Atlas.
To get started, let's first set up a local PostgreSQL database:
docker run --name atlas-pg -e POSTGRES_PASSWORD=pass -p 5432:5432 -d postgres:16
Next, let's download and provision our example schema:
curl -s https://raw.githubusercontent.com/ariga/atlas-showcase/refs/heads/master/schemas/pgdemo.sql | docker exec -i atlas-pg psql -U postgres
Let's verify we have the schema set up correctly:
docker exec -it atlas-pg psql -U postgres -t -c "
SELECT COUNT(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname IN ('public', 'manufacturing');
"
This returns:
261
Great. We have a mid-size PostgreSQL schema with 261 relations, including tables, indexes, and views (the schema also defines functions, which live outside pg_class).
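If you're curious how that count breaks down, the same catalog query can be grouped by object kind. This is just a sanity check against PostgreSQL's catalogs, not an Atlas feature:

docker exec -it atlas-pg psql -U postgres -c "
SELECT c.relkind, COUNT(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname IN ('public', 'manufacturing')
GROUP BY c.relkind
ORDER BY c.relkind;
"

Here relkind r denotes ordinary tables, i indexes, v views, S sequences, and m materialized views.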
Using Atlas to manage the schema
Atlas can be installed in several ways:

macOS + Linux

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Homebrew

Get the latest release with Homebrew:

brew install ariga/tap/atlas

Docker

To pull the Atlas image and run it as a Docker container:

docker pull arigaio/atlas
docker run --rm arigaio/atlas --help

If the container needs access to the host network or a local directory, use the --net=host flag and mount the desired directory:

docker run --rm --net=host \
  -v $(pwd)/migrations:/migrations \
  arigaio/atlas migrate apply \
  --url "mysql://root:pass@:3306/test"

Windows + Manual Installation

Download the latest release and move the atlas binary to a file location on your system PATH.
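Whichever method you choose, you can verify the installation (assuming the binary is on your PATH):

atlas version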
As our schema contains advanced objects like views and functions, we need the Pro version of Atlas. To enable it, run:
atlas login
If you do not have an Atlas account, you can start a free 30-day trial - just follow the instructions in the terminal.
Next, create the Atlas configuration file atlas.hcl in the root of your project:
env "local" {
src = "file://sql"
url = "postgres://postgres:pass@localhost:5432/postgres?sslmode=disable"
dev = "docker://postgres/16/dev"
}
This configuration file defines an environment named local that uses our local PostgreSQL as the target database, points src at the sql directory (where we will store our schema files), and defines how to spin up local dev-databases using Docker.
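Hardcoding credentials is fine for a local demo, but in shared setups you will likely want to read them from the environment. As a minimal sketch, assuming a DB_URL environment variable of your own choosing, the same environment could resolve the connection string with Atlas's getenv function:

env "local" {
  src = "file://sql"
  # DB_URL is an assumed variable name, e.g.
  # postgres://postgres:pass@localhost:5432/postgres?sslmode=disable
  url = getenv("DB_URL")
  dev = "docker://postgres/16/dev"
}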
Next, let's bootstrap our schema by running the following command:
atlas schema inspect --env local --format '{{ sql . | split | write "sql" }}'
This command inspects the current state of the database and writes it to the sql directory. By default, Atlas writes each database object into its own file, marking the dependencies between them using the atlas:import directive.
For example, examine the tables in the public schema:
tree sql/schemas/public/tables
Contains a file for each table:
sql/schemas/public/tables
├── gantt_resource_assignments.sql
├── gantt_schedules.sql
├── gantt_task_dependencies.sql
├── gantt_tasks.sql
# ... redacted for brevity
├── threat_intelligence.sql
├── user_audit.sql
├── user_roles.sql
└── users.sql
1 directory, 35 files
Each file contains the SQL definition of the table, including its columns, constraints, and indexes. For example:
-- atlas:import ../public.sql
-- atlas:import ../types/enum_user_status_type.sql
-- create "users" table
CREATE TABLE "public"."users" (
"id" serial NOT NULL,
"email" character varying(255) NOT NULL,
"first_name" character varying(100) NOT NULL,
"last_name" character varying(100) NOT NULL,
"phone" character varying(20) NULL,
"hire_date" date NOT NULL DEFAULT CURRENT_DATE,
"status" "public"."user_status_type" NOT NULL DEFAULT 'active',
"created_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ("id"),
CONSTRAINT "users_email_valid" CHECK ((email)::text ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'::text),
CONSTRAINT "users_hire_date_reasonable" CHECK ((hire_date >= '1990-01-01'::date) AND (hire_date <= (CURRENT_DATE + '1 year'::interval))),
CONSTRAINT "users_name_not_empty" CHECK ((length(TRIM(BOTH FROM first_name)) > 0) AND (length(TRIM(BOTH FROM last_name)) > 0))
);
-- create index "users_email_unique" to table: "users"
CREATE UNIQUE INDEX "users_email_unique" ON "public"."users" ("email");
-- create index "users_hire_date_idx" to table: "users"
CREATE INDEX "users_hire_date_idx" ON "public"."users" ("hire_date");
-- create index "users_status_idx" to table: "users"
CREATE INDEX "users_status_idx" ON "public"."users" ("status");
Notice how Atlas automatically adds the atlas:import directive to the top of the file, which allows it to assemble the complete schema from the individual files.
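To make the dependency mechanism concrete, here is a hypothetical view file (active_users is not part of the demo schema) that imports the users table it reads from:

-- atlas:import ../public.sql
-- atlas:import ../tables/users.sql
-- create "active_users" view (hypothetical example)
CREATE VIEW "public"."active_users" AS
SELECT "id", "email", "first_name", "last_name"
FROM "public"."users"
WHERE "status" = 'active';

When Atlas loads the sql directory, these directives tell it to apply public.sql and users.sql before the view that depends on them.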
Making changes to the schema
Now that we have our schema set up, let's make some changes to it. For example, let's add a new column to the users table to store the user's address. Modify sql/schemas/public/tables/users.sql to include the new column:
CREATE TABLE "public"."users" (
"id" serial NOT NULL,
"email" character varying(255) NOT NULL,
"first_name" character varying(100) NOT NULL,
"last_name" character varying(100) NOT NULL,
+ "address" character varying(255) NULL, -- New column for user's address
"phone" character varying(20) NULL,
"hire_date" date NOT NULL DEFAULT CURRENT_DATE,
"status" "public"."user_status_type" NOT NULL DEFAULT 'active',
"created_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ("id"),
CONSTRAINT "users_email_valid" CHECK ((email)::text ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'::text),
CONSTRAINT "users_hire_date_reasonable" CHECK ((hire_date >= '1990-01-01'::date) AND (hire_date <= (CURRENT_DATE + '1 year'::interval))),
CONSTRAINT "users_name_not_empty" CHECK ((length(TRIM(BOTH FROM first_name)) > 0) AND (length(TRIM(BOTH FROM last_name)) > 0))
);
Next, let's use declarative migrations to update the schema. Run:
atlas schema apply --env local
Atlas connects to the database, compares the desired and current state, and plans a safe migration:
Planning migration statements (1 in total):
-- modify "users" table:
-> ALTER TABLE "public"."users" ADD COLUMN "address" character varying(255) NULL;
-------------------------------------------
Analyzing planned statements (1 in total):
-- no diagnostics found
-------------------------
-- 102.302166ms
-- 1 schema change
-------------------------------------------
? Approve or abort the plan:
▸ Approve and apply
Abort
Atlas presents us with a plan to add the new column to the users table. Additionally, Atlas analyzes the plan against a set of safety checks and, in this case, found no issues.
After approving the plan, Atlas applies the migration to the database:
Applying approved migration (1 statement in total):
-- modify "users" table
-> ALTER TABLE "public"."users" ADD COLUMN "address" character varying(255) NULL;
-- ok (9.289291ms)
-------------------------
-- 9.345166ms
-- 1 migration
-- 1 sql statement
If we re-run the atlas schema apply --env local command, Atlas will detect that the schema is in sync and report:
Schema is synced, no changes to be made.
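In CI, you may not want the interactive prompt. Assuming review happens elsewhere in your pipeline, two flags of schema apply are useful here: --dry-run prints the plan without applying it, and --auto-approve applies without asking:

# Preview the planned statements without applying them:
atlas schema apply --env local --dry-run

# Apply without the interactive prompt (use with care):
atlas schema apply --env local --auto-approve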
What if I want migration files?
In this example, we demonstrated Atlas's declarative workflow, which, in a Terraform-like fashion, allows you to calculate safe migration plans at runtime by comparing your code representation of the schema to a live database. However, many teams prefer the common approach of maintaining migration files. These are versioned SQL scripts that contain the statements required to upgrade the database to the next version.
Atlas supports this approach using the versioned migrations workflow. To learn more about the trade-offs between the two approaches, read "Declarative vs Versioned".
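As a rough sketch of the versioned flow (the migrations directory name here is just a convention, not something we created above), you would add a migration directory to the environment:

env "local" {
  src = "file://sql"
  url = "postgres://postgres:pass@localhost:5432/postgres?sslmode=disable"
  dev = "docker://postgres/16/dev"
  migration {
    dir = "file://migrations"
  }
}

and then generate and apply versioned migration files:

# Generate a migration file from the diff between the migration
# directory and the desired state in sql/:
atlas migrate diff add_users_address --env local

# Apply pending migration files to the target database:
atlas migrate apply --env local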
What's next
If you find Atlas interesting, here are some additional resources that may be useful:
- The Atlas Docs Portal
- Guides - the guides section contains multiple tutorials for achieving common tasks.
- Guide to Modern Database CI/CD - learn our approach to database CI/CD.
- YouTube Channel - if you prefer learning via video, our YouTube channel may be a valuable resource.
Wrapping Up
We hope you enjoyed this brief introduction to Atlas. As always, we would love to hear your feedback and suggestions on our Discord server.