Tame Complex PostgreSQL Schemas with Atlas, a Terraform for Databases

· 7 min read
Rotem Tamir
Building Atlas

As applications grow, their underlying database schemas inevitably grow more complex. What often starts as an afterthought handled by a single developer quickly turns into a critical, high-risk responsibility that demands precision and discipline.

Tools like Flyway and Liquibase automate the application of schema changes, but they stop short of addressing the real pain points: planning and validating those changes before they hit production. These steps remain largely manual, error-prone, and disliked by most developers.

Atlas is designed to fill this gap by automating the entire lifecycle of schema changes. Inspired by Terraform, Atlas provides a declarative approach to database schema management, enabling teams to define their schemas as code and automate the planning, validation, and application of changes.

Why Terraform for Databases?

Infrastructure teams have standardized on tools like Terraform to manage cloud resources declaratively. Databases, despite being critical infrastructure, often remain outside this workflow. Schema changes are still handled manually or with ad-hoc migration scripts, leading to drift, unpredictability, and production risks.

Atlas applies these same declarative principles to databases. By treating your schema as code, you get version control, automated planning, validation, and safe application of changes - all integrated into your CI/CD pipelines. This reduces risk, improves velocity, and gives teams confidence when evolving critical data infrastructure.

Automation is always a welcome improvement for any team, but it is especially crucial for teams managing complex databases, where the system's reliability and performance depend on the ability to make changes quickly and safely.

Setting the stage

Let's walk through using Atlas to manage a fairly complex PostgreSQL schema. While we regularly see customers managing schemas with thousands of objects, we'll use one that's a bit more manageable but still demonstrates the power of Atlas.

To get started, let's first set up a local PostgreSQL database:

docker run --name atlas-pg -e POSTGRES_PASSWORD=pass -p 5432:5432 -d postgres:16

Next, let's download and provision our example schema:

curl -s https://raw.githubusercontent.com/ariga/atlas-showcase/refs/heads/master/schemas/pgdemo.sql | docker exec -i atlas-pg psql -U postgres

Let's verify we have the schema set up correctly:

docker exec -it atlas-pg psql -U postgres -t -c "
SELECT COUNT(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname IN ('public', 'manufacturing');
"

This returns:

261

Great. We have a mid-size PostgreSQL schema with 261 relations, including tables, views, and indexes (functions live in pg_proc, so they are not part of this count).
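
If you are curious about the mix, you can break the count down by relation kind (r = ordinary table, v = view, i = index, S = sequence, m = materialized view). This is just a variation of the query above:

docker exec -it atlas-pg psql -U postgres -c "
SELECT c.relkind, COUNT(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname IN ('public', 'manufacturing')
GROUP BY c.relkind
ORDER BY c.relkind;
"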

Using Atlas to manage the schema

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

As our schema contains advanced objects like views and functions, we need the Pro version of Atlas. To enable it, run:

atlas login

If you do not have an Atlas account, you can start a free 30-day trial - just follow the instructions in the terminal.

Next, create the Atlas configuration file atlas.hcl in the root of your project:

env "local" {
src = "file://sql"
url = "postgres://postgres:pass@localhost:5432/postgres?sslmode=disable"
dev = "docker://postgres/16/dev"
}

This configuration file defines an environment named local that uses our local PostgreSQL as the target database, reads the desired schema from the sql directory (where we will store our schema files), and defines how to spin up local dev databases using Docker.

Next, let's bootstrap our schema by running the following command:

atlas schema inspect --env local --format '{{ sql . | split | write "sql" }}'

This command inspects the current state of the database and writes it to the sql directory. By default, Atlas will write each database object into its own file, marking the dependencies between them using the atlas:import directive.

For example, examine the tables in the public schema:

tree sql/schemas/public/tables

The directory contains a file for each table:

sql/schemas/public/tables
├── gantt_resource_assignments.sql
├── gantt_schedules.sql
├── gantt_task_dependencies.sql
├── gantt_tasks.sql
# ... redacted for brevity
├── threat_intelligence.sql
├── user_audit.sql
├── user_roles.sql
└── users.sql

1 directory, 35 files

Each file contains the SQL definition of the table, including its columns, constraints, and indexes. For example:

-- atlas:import ../public.sql
-- atlas:import ../types/enum_user_status_type.sql

-- create "users" table
CREATE TABLE "public"."users" (
"id" serial NOT NULL,
"email" character varying(255) NOT NULL,
"first_name" character varying(100) NOT NULL,
"last_name" character varying(100) NOT NULL,
"phone" character varying(20) NULL,
"hire_date" date NOT NULL DEFAULT CURRENT_DATE,
"status" "public"."user_status_type" NOT NULL DEFAULT 'active',
"created_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ("id"),
CONSTRAINT "users_email_valid" CHECK ((email)::text ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'::text),
CONSTRAINT "users_hire_date_reasonable" CHECK ((hire_date >= '1990-01-01'::date) AND (hire_date <= (CURRENT_DATE + '1 year'::interval))),
CONSTRAINT "users_name_not_empty" CHECK ((length(TRIM(BOTH FROM first_name)) > 0) AND (length(TRIM(BOTH FROM last_name)) > 0))
);
-- create index "users_email_unique" to table: "users"
CREATE UNIQUE INDEX "users_email_unique" ON "public"."users" ("email");
-- create index "users_hire_date_idx" to table: "users"
CREATE INDEX "users_hire_date_idx" ON "public"."users" ("hire_date");
-- create index "users_status_idx" to table: "users"
CREATE INDEX "users_status_idx" ON "public"."users" ("status");

Notice how Atlas automatically adds the atlas:import directive to the top of the file, which allows it to build the actual schema from the individual files.

Making changes to the schema

Now that we have our schema set up, let's make some changes to it. For example, let's add a new column to the users table to store the user's address. Modify sql/schemas/public/tables/users.sql to include the new column:

CREATE TABLE "public"."users" (
"id" serial NOT NULL,
"email" character varying(255) NOT NULL,
"first_name" character varying(100) NOT NULL,
"last_name" character varying(100) NOT NULL,
+ "address" character varying(255) NULL, -- New column for user's address
"phone" character varying(20) NULL,
"hire_date" date NOT NULL DEFAULT CURRENT_DATE,
"status" "public"."user_status_type" NOT NULL DEFAULT 'active',
"created_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ("id"),
CONSTRAINT "users_email_valid" CHECK ((email)::text ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'::text),
CONSTRAINT "users_hire_date_reasonable" CHECK ((hire_date >= '1990-01-01'::date) AND (hire_date <= (CURRENT_DATE + '1 year'::interval))),
CONSTRAINT "users_name_not_empty" CHECK ((length(TRIM(BOTH FROM first_name)) > 0) AND (length(TRIM(BOTH FROM last_name)) > 0))
);

Next, let's use declarative migrations to update the schema. Run:

atlas schema apply --env local

Atlas connects to the database, compares the desired and current state, and plans a safe migration:

Planning migration statements (1 in total):

-- modify "users" table:
-> ALTER TABLE "public"."users" ADD COLUMN "address" character varying(255) NULL;

-------------------------------------------

Analyzing planned statements (1 in total):

-- no diagnostics found

-------------------------
-- 102.302166ms
-- 1 schema change

-------------------------------------------

? Approve or abort the plan:
▸ Approve and apply
  Abort

Atlas presents us with a plan to add the new column to the users table. Additionally, Atlas analyzes the plan against a set of safety checks; in this case, it found no issues.

After approving the plan, Atlas applies the migration to the database:

Applying approved migration (1 statement in total):

-- modify "users" table
-> ALTER TABLE "public"."users" ADD COLUMN "address" character varying(255) NULL;
-- ok (9.289291ms)

-------------------------
-- 9.345166ms
-- 1 migration
-- 1 sql statement

If we re-run the atlas schema apply --env local command, Atlas will detect that the schema is in sync and report:

Schema is synced, no changes to be made.
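
As a side note, teams with stricter requirements can shape how Atlas plans changes. For example, a diff policy in atlas.hcl can tell Atlas to never plan destructive statements. A minimal sketch, extending the env block from above:

env "local" {
  src = "file://sql"
  url = "postgres://postgres:pass@localhost:5432/postgres?sslmode=disable"
  dev = "docker://postgres/16/dev"

  diff {
    skip {
      # Never plan DROP SCHEMA or DROP TABLE statements.
      drop_schema = true
      drop_table  = true
    }
  }
}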

What if I want migration files?

In this example, we demonstrated Atlas's declarative workflow, which, in a Terraform-like fashion, allows you to calculate safe migration plans at runtime by comparing your code representation of the schema to a live database. However, many teams prefer the common approach of maintaining migration files. These are versioned SQL scripts that contain the statements required to upgrade the database to the next version.

Atlas supports this approach using the versioned migrations workflow. To learn more about the trade-offs between the different approaches, read "Declarative vs Versioned".
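
As a rough sketch (the directory name here is just an example), switching to versioned migrations mostly means pointing the environment at a migration directory:

env "local" {
  src = "file://sql"
  url = "postgres://postgres:pass@localhost:5432/postgres?sslmode=disable"
  dev = "docker://postgres/16/dev"

  migration {
    dir = "file://migrations"
  }
}

With that in place, atlas migrate diff plans a new migration file from the difference between the migration directory and the desired schema in sql, and atlas migrate apply executes any pending files:

atlas migrate diff add_users_address --env local
atlas migrate apply --env local

Atlas still does the planning; the difference is that the plan is captured as versioned SQL files that can be reviewed before they are applied.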

What's next

If you find Atlas interesting, the Atlas documentation and guides are a good place to continue exploring.

Wrapping Up

We hope you enjoyed this brief introduction to Atlas. As always, we would love to hear your feedback and suggestions on our Discord server.

Atlas v0.35: Oracle, Bootstrap Projects, and more

· 9 min read
Rotem Tamir
Building Atlas

Hey everyone!

It's been just over a week since our last release, and we are back with another batch of exciting features and improvements. Here's what's in store for you in Atlas v0.35:

  • Bootstrap Projects - You can now bootstrap SQL projects with a single command, making it easier to get started with Atlas. Using the new split and write template functions, you can create a code representation of your database schema in SQL or HCL format, turning your database into code in no time.
  • Atlas for Oracle in Beta - We are excited to announce that Atlas is now in beta for Oracle databases.

Case Study: How Darkhorse Emergency Tamed Complex PostgreSQL Schemas with Atlas

· 7 min read
Noa Rogoszinski
DevRel Engineer

"When I came across Atlas and saw it described as Terraform for your database, it immediately resonated. That’s exactly what we needed. Just like Terraform solved our AWS problems, we needed something to bring that same level of control to our data."

– Maciej Bukczynski, Director of Technology, Darkhorse Emergency

Company Background

Darkhorse Emergency is a SaaS decision analytics platform for public safety services, primarily fire departments, that uses data and predictive analytics to optimize operations and resource allocation. Their platform allows for decisions to be simulated and assessed before being made, creating more transparency amongst public service teams and those that depend on them.

The Bottleneck: Evolving a Logic-Heavy Postgres Schema

"For us PostgreSQL isn't just storage. It's the core of our business logic. "

Darkhorse Emergency's platform is built on a complex PostgreSQL database that serves as the backbone for their application. It is an elaborate system that processes many types of data, including 911 calls, census reports, and other public data sources.

By maintaining a carefully designed chain of views, functions, custom types, and triggers, the team is able to offload complex calculations and logic to the database. This ensures that their application can efficiently handle the demands of public safety services. "For us PostgreSQL isn't just storage. It's the core of our business logic," said Maciej Bukczynski, Director of Technology at Darkhorse Emergency.

However, this complexity presents a significant challenge when it comes to evolving the database schema. With so much happening within the database itself, the team very quickly ran into the limitations that come with common migration tools. "For example, we might have a view that feeds into 50 other views; if we want to make a change to that, we need to carefully recreate dependencies and ensure that everything remains consistent", Bukczynski explained.

The team initially tried to use classic migration tools like Flyway and Liquibase, but found that manually planning and applying migrations in such an intricate system was not only time-consuming but error-prone.

Atlas v0.34: Ad-hoc Approval Policies and Terraform Docs

· 3 min read
Rotem Tamir
Building Atlas

Hey everyone!

It's been just over two weeks since our last release, and we are back with another batch of exciting features and improvements. Here's what's in store for you in Atlas v0.34.

Building scalable multi-tenant applications in Go

· 20 min read
Rotem Tamir
Building Atlas

Prepared for and presented at GopherCon Israel 2025.

Introduction

In this post, we will explore different strategies for building scalable multi-tenant applications in Go based on our experience building the backend for Atlas Cloud, which is part of our commercial offering.

But first, let's clarify what we mean by multi-tenant applications.

Multi-tenancy is a property of a system where a single instance serves multiple customers (tenants).

As a commercial enterprise, your goal is, of course, to have lots of customers! But while you want to serve many customers, they expect a smooth and seamless experience, as if they were the only ones using your service.

Two important promises you implicitly make to your customers are:

  1. Data Isolation: Each tenant's data is isolated and secure, ensuring that one tenant cannot access another's data.
  2. Performance: The application should perform well regardless of the number of tenants, ensuring that one tenant's usage does not degrade the experience for others.

Let's explore some ways in which we might fulfill these promises.
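
Before diving in, here is a tiny illustrative sketch (not the post's actual code) of one such strategy: schema-per-tenant in PostgreSQL, where each tenant's data lives in its own schema and a session is scoped to exactly one of them:

-- Assumption: one PostgreSQL schema per tenant; names are illustrative.
CREATE SCHEMA tenant_acme;
CREATE TABLE tenant_acme.users (
  id    serial PRIMARY KEY,
  email text NOT NULL
);

-- Scope the current session to a single tenant; unqualified names
-- now resolve against that tenant's schema only.
SET search_path TO tenant_acme;
SELECT COUNT(*) FROM users;

This buys isolation at the schema level; the post explores this and other strategies, along with their operational trade-offs.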

Atlas v0.33: Introducing Atlas Copilot and more

· 10 min read
Rotem Tamir
Building Atlas

Hey everyone!

It's been a couple of months since our last release, but for good reason. Today, I am super excited to tell you about everything we have been up to. Here's what's in store for you in this release, v0.33:

  • Atlas Copilot: A new coding assistant that helps you better manage your Atlas projects by leveraging an agentic, LLM-based approach.
  • Support for --include: Atlas Pro users may now use the --include flag to specify which database objects to query during inspection.
  • migrate/diff in GitHub Actions, GitLab CI, and CircleCI - Atlas now supports the migrate diff command in GitHub Actions, GitLab CI, and CircleCI. This allows teams to build CI/CD pipelines that automatically generate migration files based on the current state of the database and the desired state of the schema.
  • Check-level Lint Policies: Atlas comes pre-packaged with many built-in analyzers that can be used to verify the safety of changes to your database. Using Check-level Lint Policies, you can now configure your CI/CD pipelines' sensitivity to these analyzers (a small sketch follows this list).
  • Support for sensitive annotations in migration files: Migration files can sometimes include sensitive or PII values, either passed in as input variables (using template-directories) or embedded directly in SQL statements. To prevent these values from being logged, Atlas provides a directive for marking files or specific statements as sensitive. This directive can be set at either the file or statement level.
  • Atlas Dashboard UI Revamp: We recently revamped the Atlas dashboard UI. The new design is cleaner and more modern, making it easier to navigate and find the information you need. Congrats to the team for their hard work on this!
  • Beta / Feedback Programs: We are launching beta/feedback programs (signup link below) for:
    • Oracle
    • Google Spanner
    • Snowflake
    • Performance Optimization
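
To give a feel for what lint policies look like, here is a minimal analyzer-level sketch in atlas.hcl; check-level policies follow the same idea, scoped to individual checks (treat this as a sketch rather than the exact v0.33 syntax):

lint {
  destructive {
    # Report destructive changes as warnings instead of failing the run.
    error = false
  }
}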

From Manual to Automated Database Schema Migrations

· 7 min read
Noa Rogoszinski
DevRel Engineer

Software teams commonly embrace DevOps for delivery, creating automated CI/CD pipelines that allow for rapid and reliable software releases. Surprisingly, some of these same teams still manage their database schema manually, a striking contrast.

Picture this: a team spent countless hours ensuring that every change to their application code is:

  • Version controlled
  • Automatically tested, built, and stored in an Artifact Repository
  • Automatically deployed
  • Easily rolled back

Yet when it comes to making changes to their database schema, the process looks very different: a developer writes a SQL migration script, connects to the production database with privileged access, runs the script manually, and (if successful) continues with deployment. The entire process is in the hands of the developer.

Projects frequently begin with manual database schema management because it's the easiest option, particularly when databases are small, changes are infrequent, and there are no users. However, as applications evolve and schema migrations grow more complex, this practice becomes a looming risk.

Let's explore the pitfalls of manual migrations, the benefits of automated migrations, and how to get started with Atlas to automate your database schema management.

Handling Migration Errors: How Atlas Improves on golang-migrate

· 11 min read
Noa Rogoszinski
DevRel Engineer

Database migrations are fundamental to modern software development, allowing teams to evolve their database schema in a controlled and versioned manner. As applications grow and requirements change, the ability to reliably alter your database is crucial for maintaining data integrity and application stability.

Atlas was originally created to support Ent, a popular Go ORM. From the start, Ent shipped with a simple "auto-migration" feature that could set up the database schema based on the Ent schema. However, as the project grew popular, it became clear that a more robust versioned migration system was needed.

Ent's authors had hoped to build on the existing "auto-migration" engine to generate migration files, and use an off-the-shelf migration tool to apply them. The most promising candidate was golang-migrate, a widely adopted migration tool in the Go community renowned for its simplicity and wide database support. But as with many tools that start simple and grow popular, golang-migrate has its limitations, and they led us to expand on its abilities.

In this article, we’ll explore some common challenges teams face with traditional migration tools like golang-migrate, and how Atlas takes a different approach to improve the developer experience.

The Missing Chapter in the Platform Engineering Playbook

· 12 min read
Rotem Tamir
Building Atlas

Prepared for SREDay London 2025

Introduction

Platform engineering is rapidly emerging as a discipline aimed at reducing cognitive load for developers, enabling self-service infrastructure, and establishing best practices for building and operating software at scale. While much of the conversation focuses on CI/CD, Kubernetes, and internal developer platforms, one crucial aspect often remains overlooked: database schema management.

Despite being at the heart of nearly every application, schema changes are still a major source of friction, outages, and bottlenecks. In this post, we'll explore why database schema management deserves a dedicated chapter in the platform engineering playbook and how organizations can integrate it into their platform strategies.

The prompt that nuked the database

Let me tell you a not-so-fictional story about a developer named Alice. Alice is a backend engineer at a fast-growing startup. One day, her manager asked her to make a small change to the database. The data engineering team was complaining that they were seeing duplicate emails in the user table, and they suspected that the email column did not have a unique constraint.

Atlas v0.32: Ask AI, SQL Imports, and More

· 11 min read
Rotem Tamir
Building Atlas

Hey everyone!

It's been a few weeks since our last release, and we're excited to share everything that's new in Atlas v0.32. This release is packed with new features, improvements, and bug fixes that will make your experience with Atlas even better.

Here are the highlights of this release:

  • Ask AI - Since its modest beginning, Atlas has come a long way. What started as a simple CLI tool for declarative schema management is now a full-blown platform. We know that the learning curve for new users can be steep, which is why we are introducing new AI-powered features to help you get started with Atlas.
  • SQL Importing - As projects grow, teams often want to split their schema definition across multiple files. Because SQL definitions are imperative and rely on the order of statements, splitting them can be challenging. With the new importing feature, it's easy to break large SQL schema definitions into smaller parts while keeping them correct and ordered (a small sketch follows this list).
  • Improved Ent Loader - Users of the popular Ent ORM can use the ent:// URL scheme to load their schema into Atlas. We have added support for multi-schema migrations, composite schemas, and Ent's globalid feature.
  • SQL Server Improvements - We have made several improvements to the SQL Server support in Atlas, including support for Full Text Search Index and Temporal Tables.
  • PostgreSQL Improvements - We have added support for defining Foreign Servers and Unlogged Tables in PostgreSQL.
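
To illustrate the SQL importing feature, here is a minimal sketch (file names are illustrative): a table file declares its dependency on a type defined in another file using the atlas:import directive, and Atlas orders the statements accordingly:

-- types/status.sql
CREATE TYPE status AS ENUM ('active', 'inactive');

-- users.sql
-- atlas:import types/status.sql
CREATE TABLE users (
  id     serial PRIMARY KEY,
  status status NOT NULL DEFAULT 'active'
);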