
· 5 min read
Rotem Tamir

Hi everyone,

Thanks for joining us today for our v0.25 release announcement! In this version we are introducing a new feature that has been requested by many of you: support for Row-level Security Policies in PostgreSQL.

Additionally, we have made some minor changes to our pricing plans, more on that below.

What are Row-level Security Policies?

Row-level security (RLS) in PostgreSQL allows tables to have policies that restrict which rows can be accessed or modified based on the user's role, enhancing the SQL-standard privilege system available through GRANT.

When enabled, all normal access to the table must comply with these policies, defaulting to a deny-all approach if no policies are set, ensuring that no rows are visible or modifiable. Policies can be specific to commands, roles, or both, providing fine-grained control over data access and modification.
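As an illustration of the deny-all default (using a hypothetical accounts table, not part of the example below), merely enabling RLS without defining any policy locks out everyone except the table owner and superusers:

```sql
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

-- With no policies defined, queries by non-owner roles now
-- see (and can modify) zero rows: the default is deny-all.
SELECT * FROM accounts; -- returns no rows for regular users
```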

How does RLS work?

When you create and enable a row-level security (RLS) policy in PostgreSQL, the database enforces the specified access control rules on a per-row basis.

For example, you can create a policy that allows only employees to see their own records in an employees table. The policy could look like this:

CREATE POLICY employee_policy ON employees
    FOR SELECT
    USING (current_user = employee_role);

This SQL command creates an RLS policy named employee_policy on the employees table. The FOR SELECT clause specifies that this policy applies to SELECT queries. The USING clause contains the condition current_user = employee_role, which means that a user can only select rows where the employee_role column matches their PostgreSQL username.

Next, database administrators typically run:

ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

This command enables RLS on the employees table. With RLS enabled, PostgreSQL will check the policies defined for this table whenever a user attempts to access or modify existing rows, or insert new ones.

When a user executes a SELECT query on the employees table, PostgreSQL evaluates the employee_policy. If the user's PostgreSQL role (username) matches the employee_role column value in a row, the row is included in the query result. Otherwise, the row is excluded.

For instance, if the employees table contains the following data:

| id | name    | employee_role |
|----|---------|---------------|
| 1  | Alice   | alice         |
| 2  | Bob     | bob           |
| 3  | Charlie | charlie       |

When the user alice runs SELECT * FROM employees, PostgreSQL applies the policy:

SELECT * FROM employees WHERE current_user = employee_role;

This results in:

| id | name  | employee_role |
|----|-------|---------------|
| 1  | Alice | alice         |

By enforcing these policies, RLS ensures that users only have access to the data they are permitted to see, enhancing the security and privacy of the database.
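Policies are not limited to SELECT. As a sketch extending the example above (the policy name is hypothetical), a companion policy could restrict INSERTs so employees can only create rows for themselves; for INSERT, the condition goes in a WITH CHECK clause, which validates new rows rather than filtering existing ones:

```sql
CREATE POLICY employee_insert_policy ON employees
    FOR INSERT
    WITH CHECK (current_user = employee_role);
```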

Manage your Row-level Security Policies as Code

With Atlas, you can now manage your RLS policies as code, just like you manage other database resources such as tables, indexes, and triggers. This allows you to version control your policies, track changes, and apply them consistently across your environments.

To get started with RLS in Atlas, first upgrade to the most recent version.

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

RLS is available to Atlas Pro users only. Get your free Atlas Pro account today by running:

atlas login

Next, you can define your RLS policies in your Atlas schema file (schema.hcl) using the new policy block:

policy "employee_policy" {
  on    = table.employees
  for   = SELECT
  to    = [PUBLIC]
  using = "(current_user = employee_role)"
}

This HCL snippet defines an RLS policy named employee_policy on the employees table, allowing only users whose employee_role matches their PostgreSQL username to SELECT rows from the table.

Next, you need to enable RLS on the table:

table "employees" {
  schema = schema.public
  column "employee_role" {
    type = text
  }
  row_security {
    enabled = true // ENABLE ROW LEVEL SECURITY
  }
}

Finally, run atlas schema apply to apply the changes to your database!

To learn more about RLS using Atlas, check out our documentation.

Introducing Atlas Pro

Since launching Atlas Cloud a little over a year ago, we have been working hard with our users and customers to make Atlas as easy and simple to use as possible.

One point of confusion we have encountered, especially around our pricing plans, was how users who currently don't want to (or can't) use Atlas Cloud for their CI/CD pipelines can get access to the advanced CLI features that Atlas offers. Previously, teams needed to buy Cloud quota to get access to the CLI, which didn't make a lot of sense.

To address some of these issues we are making some small changes to our pricing plans:

Atlas now comes in three tiers:

  • Open - Our CLI, which doesn't require creating an account and comes with a solid set of features (this is more than enough for many of our users).
  • Pro (previously "Business") - An enhanced version of our CLI, which includes support for advanced database features and drivers. It will cost $9/month/user, but users get their first 3 seats per company for free when they sign up. Pro users also have access to Atlas Cloud (pricing remains the same).
  • Enterprise - our enterprise tier, targeted mostly at larger organizations or teams in regulated industries with stricter compliance requirements.

To learn more about our new plans, head over to our updated pricing page.

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 13 min read
Rotem Tamir

Hi everyone,

We are back again with a new release of Atlas, v0.24. In this release we double down on the core principle that has been guiding us from the start: enabling developers to manage their database schema as code. The features we announce today may appear to be yet another cool addition to Atlas, but I am fairly confident that in a few years' time, they will be recognized as something foundational.

In this release we introduce:

  • schema test - a new command (and framework) for testing your database schema using familiar software testing paradigms.
  • migrate test - a new command for writing tests for your schema migrations.
  • Enhanced editor support - we have added support for some long-awaited features in our VSCode and JetBrains plugins: multi-file schemas, jump to definition, and support for much larger schemas.

Doubling Down on Database Schema-as-Code

The core idea behind Atlas is to enable developers to manage their Database Schema-as-Code. Before we jump into the recent additions to Atlas, I would like to take a moment to reflect on why our industry seems to think that "X-as-Code" is a great idea.

In a nutshell, the "X-as-Code" movement is about being able to describe the desired state of a system (whether it's infrastructure, configuration, or schema) in a declarative way and then have that state enforced by a tool.

So why is having things described as code so great? Here are a few reasons:

  • Code can be versioned. This means that you can track changes to your system over time, easily compare states, and rollback as needed.
  • Code is understood by machines. As formal languages, code can be parsed, analyzed, and executed by machines.
  • Code can be tested and validated. By using software testing paradigms, you can ensure that your system behaves as expected in an automated way.
  • Code can be shared and reused. Code allows us to transfer successful ideas and implementations between projects and teams.
  • Code has a vast ecosystem of productivity tools. By using code, you can leverage the vast ecosystem of tools and practices that have been developed by software engineers over the years.

Our core goal with Atlas is to bring these benefits to the world of database schema management. We believe that by enabling developers to manage their database schema as code, we can help them build better, more reliable systems.

Today we bring one of the most important tenets of modern software development to the world of database schema management: testing.

Why test your database schema and migrations?

Testing is a fundamental part of modern software development. By writing tests, you can ensure that your code behaves as expected, catch bugs early, and prevent regressions.

When it comes to database schemas, testing is just as important. Databases are much more than just a storage layer, they can be programmed, enforce logic and constraints, and have complex relationships between tables. For example, table triggers allow you to run custom code when certain events occur, and you should be able to test that this code behaves as expected and that later changes to the schema do not break it. In a similar vein, developers can provide complex expressions in check constraints that should be tested to ensure they are working as expected.

When it comes to migrations, testing is equally important. Atlas already provides the migrate lint command to help you catch invalid migrations and common mistakes. However, migrate test takes validating your migrations a step further.

Many teams use migrations as a mechanism to apply data migrations in tandem with schema changes. As they involve data, these changes are super risky, yet it is notoriously hard to test them. By providing a way to test your migrations, we hope to make this process easier and more reliable.

Introducing schema test

The schema test command allows you to write tests for your database schema using familiar software testing paradigms.

To get started, first install the latest version of the Atlas CLI:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Next, login to your Atlas account to activate the new schema testing features:

atlas login

Let's see a brief example. We will begin our project by defining a basic Atlas project file named atlas.hcl:

atlas.hcl
env "local" {
  src = "file://schema.hcl"
  dev = "docker://postgres/16/dev?search_path=public"
}

Next, let's define a PostgreSQL Domain to model a data type for a us_postal_code:

schema.sql
CREATE DOMAIN "us_postal_code" AS text
  CONSTRAINT "us_postal_code_check"
  CHECK (
    (VALUE ~ '^\d{5}$'::text) OR
    (VALUE ~ '^\d{5}-\d{4}$'::text)
  );

Next, let's create a file named "schema.test.hcl" with the following content:

schema.test.hcl
test "schema" "postal" {
  exec {
    sql = "select 'hello'::us_postal_code"
  }
}

Per testing best practices, we start with a test that is going to fail, since the string "hello" is not a valid US postal code.

Now, we can run the test using the schema test command:

atlas schema test --env local

The output will be:

-- FAIL: postal (319µs)
schema.test.hcl:2:
Error: pq: value for domain us_postal_code violates check constraint "us_postal_code_check"
FAIL

As expected, the test failed, and we can now fix the test by catching that error and verifying its message:

schema.test.hcl
test "schema" "postal" {
  catch {
    sql   = "select 'hello'::us_postal_code"
    error = "value for domain us_postal_code violates check constraint"
  }
}

Re-running the test:

atlas schema test --env local

The output will be:

-- PASS: postal (565µs)
PASS

Now we can expand the test to cover more cases, such as valid postal codes and more invalid cases:

schema.test.hcl
test "schema" "postal" {
  exec {
    sql    = "select '12345'::us_postal_code"
    output = "12345" // Assert the returned value is "12345"
  }
  exec {
    sql    = "select '12345-1234'::us_postal_code"
    output = "12345-1234" // Assert the returned value is "12345-1234"
  }
  catch {
    sql   = "select 'hello'::us_postal_code"
    error = "value for domain us_postal_code violates check constraint"
  }
  catch {
    sql   = "select '1234'::us_postal_code"
    error = "value for domain us_postal_code violates check constraint"
  }
  assert {
    sql = "select '12345'::us_postal_code::text='12345'" // Assert the query returns true.
  }
  log {
    message = "Hooray, testing!"
  }
}

Re-running the test:

atlas schema test --env local

The output will be:

-- PASS: postal (1ms)
schema.test.hcl:21: Hooray, testing!
PASS

Let's review what happens when we run atlas schema test:

  • Atlas will apply the schema for the local environment on the dev database.
  • Atlas will search the current directory for files matching the pattern *.test.hcl.
  • For each test file found, Atlas will execute a test for each test "schema" "<name>" block.
  • Here are the possible test blocks:
    • exec - Executes a SQL statement and verifies the output.
    • catch - Executes a SQL statement and verifies that an error is thrown.
    • assert - Executes a SQL statement and verifies that the output is true.
    • log - Logs a message to the test output.

Using this modest framework, you can now write tests for your database schema, ensuring that it behaves as expected. This command can be integrated into your local development workflow or even as part of your CI pipeline further ensuring the quality of your database schema changes.

Introducing migrate test

The migrate test command allows you to write tests for your schema migrations. This is a powerful feature that enables you to test logic in your migrations in a minimal and straightforward way. The command is similar to schema test but is focused on testing migrations.

Suppose we are refactoring an existing table users which has a name column that we want to split into first_name and last_name columns. The recommended way to do this kind of refactoring is in a backward-compatible manner: initially, we add the new columns as nullable, alongside the existing name column. In Atlas DDL, the schema change would look roughly like this:

schema.hcl
table "users" {
  // .. redacted
+ column "first_name" {
+   type = text
+   null = true
+ }
+ column "last_name" {
+   type = text
+   null = true
+ }
}

Next, we will use Atlas to generate a migration for this change:

atlas migrate diff --env local

A new file will be created in our migrations directory:

20240613061102.sql
-- Modify "users" table
ALTER TABLE "users" ADD COLUMN "first_name" text NULL, ADD COLUMN "last_name" text NULL;

Next, let's add the backfill logic to populate the new columns with the data from the name column:

20240613061102.sql
-- Modify "users" table
ALTER TABLE "users" ADD COLUMN "first_name" text NULL, ADD COLUMN "last_name" text NULL;

-- Backfill data
UPDATE "users" SET "first_name" = split_part("name", ' ', 1), "last_name" = split_part("name", ' ', 2);
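To see what the backfill computes, split_part can be checked directly against a sample value in any PostgreSQL session:

```sql
SELECT split_part('Ada Lovelace', ' ', 1) AS first_name, -- "Ada"
       split_part('Ada Lovelace', ' ', 2) AS last_name;  -- "Lovelace"
```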

After changing the contents of our migration file, we must update our atlas.sum file to reflect the changes:

atlas migrate hash --env local

Next, we will create a test case to verify that our migration works correctly in different cases. Let's add the following block to a new file named migrations.test.hcl:

migrations.test.hcl
test "migrate" "name_split" {
  migrate {
    // Replace with the migration version before the one we just added.
    to = "20240613061046"
  }
  exec {
    sql = "insert into users (name) values ('Ada Lovelace')"
  }
  migrate {
    to = "20240613061102"
  }
  exec {
    sql    = "select first_name,last_name from users"
    output = "Ada, Lovelace"
  }
}

Let's explain what this test does:

  • We start by defining a new test case named name_split.
  • The migrate block runs migrations up to a specific version. In this case, we are running all migrations up to the version before the one we just added.
  • The exec block runs a SQL statement. In this case, we are inserting a new user with the name "Ada Lovelace".
  • Next, we run our new migration, 20240613061102.
  • Finally, we run a SQL statement to verify that the first_name and last_name columns were populated correctly.

Let's run the test:

atlas migrate test --env local

The output will be:

-- PASS: name_split (33ms)
PASS

Great, our test passed! We can now be confident that our migration works as expected.

Testing Edge Cases

With our test infra all set up, it's now easy to add more test cases to cover edge cases. For example, we can add a test to verify that our splitting logic works correctly for names that include a middle name, for example, John Fitzgerald Kennedy:

migrations.test.hcl
test "migrate" "name_split_middle_name" {
  migrate {
    to = "20240613061046"
  }
  exec {
    sql = "insert into users (name) values ('John Fitzgerald Kennedy')"
  }
  migrate {
    to = "20240613061102"
  }
  exec {
    sql    = "select first_name,last_name from users"
    output = "John Fitzgerald, Kennedy"
  }
}

We expect to see only the family name in the last_name column, and the rest of the name in the first_name column.

Will it work? Let's run the test:

atlas migrate test --env local --run name_split_middle_name

Our test fails:

-- FAIL: name_split_middle_name (32ms)
migrations.test.hcl:27:
Error: no match for `John Fitzgerald, Kennedy` found in "John, Fitzgerald"
FAIL

Let's improve our splitting logic to be more robust:

20240613061102.sql
-- Modify "users" table
ALTER TABLE "users" ADD COLUMN "first_name" text NULL, ADD COLUMN "last_name" text NULL;

-- Backfill data
UPDATE "users"
SET "first_name" = regexp_replace("name", ' ([^ ]+)$', ''),
    "last_name"  = regexp_replace("name", '^.* ', '');

We changed our splitting logic to be more robust by using regular expressions:

  • The first_name column will now contain everything before the last space in the name column.
  • The last_name column will contain everything after the last space in the name column.
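The two expressions can be sanity-checked directly in any PostgreSQL session before re-running the migration test:

```sql
SELECT regexp_replace('John Fitzgerald Kennedy', ' ([^ ]+)$', '') AS first_name, -- "John Fitzgerald"
       regexp_replace('John Fitzgerald Kennedy', '^.* ', '')      AS last_name;  -- "Kennedy"
```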

Before testing our new logic, we need to update our migration hash:

atlas migrate hash --env local

Now, let's run the test again:

atlas migrate test --env local --run name_split_middle_name

The output will be:

-- PASS: name_split_middle_name (31ms)
PASS

Great! Our test passed, and we can now be confident that our migration works as expected for names with middle names.

As a final check, let's also verify that our migration works correctly for names with only one word, such as Prince:

migrations.test.hcl
test "migrate" "name_split_one_word" {
  migrate {
    to = "20240613061046"
  }
  exec {
    sql = "insert into users (name) values ('Prince')"
  }
  migrate {
    to = "20240613061102"
  }
  exec {
    sql    = "select first_name,last_name from users"
    output = "Prince, "
  }
}

Let's run the test:

atlas migrate test --env local --run name_split_one_word

The output will be:

-- PASS: name_split_one_word (34ms)
PASS

Amazing! Our test passed, and we can move forward with confidence.

Enhanced Editor Support

In this release, we have added support for some long-awaited features in our VSCode and JetBrains plugins:

  • Multi-file schemas - Our editor plugins will now automatically detect and load all schema files in your project, allowing you to reference tables and columns across files.
  • Jump to definition - Source code can be modeled as a graph of entities where one entity can reference another. For example, a Java class method invokes a method in another class, or a table's foreign key references another table's primary key. Jump to definition allows you to navigate this graph by jumping to the definition of the entity you are interested in.
  • Support for much larger schemas - We have improved the performance of our editor plugins to support much larger schemas.

To try the latest versions, head over to the VSCode Marketplace or the JetBrains Marketplace.

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 10 min read
TL;DR

GORM, a popular ORM for Go, can easily query SQL views, but managing them has traditionally been an issue. With the latest release of the Atlas GORM Provider, you can now easily manage views for your GORM application.

See an example

Introduction

Making liberal use of views is a key aspect of good SQL database design.

Postgres documentation

Views are a powerful database feature: they are virtual tables representing the result of a query. Many teams use them to simplify complex queries, encapsulate logic, and present a consistent interface to users, abstracting the underlying data structures.

Using Views with ORMs

Despite their numerous benefits, views are often underutilized in many applications. Specifically, many ORMs provide partial support for views.

This is also the case with GORM, one of the most popular ORMs in Go. Let's see how GORM users can integrate views into their applications today:

First, we need to define the query that will back our view, and then use the GORM Migrator interface to create the view:

query := db.Model(&User{}).Select("id, name, age").Where("age BETWEEN 18 AND 60")

db.Migrator().CreateView("working_aged_users", gorm.ViewOption{Query: query})
// CREATE VIEW working_aged_users AS SELECT id, name, age FROM users WHERE age BETWEEN 18 AND 60

In order to be able to use GORM to query our view, we need to define an additional struct:

type WorkingAgedUser struct {
	ID   uint
	Name string
	Age  int
}

Finally, we can use GORM to query records from our view:

var johnFamilies []WorkingAgedUser
db.Where("name LIKE ?", "John%").Find(&johnFamilies)
// SELECT * FROM `working_aged_users` WHERE name LIKE "John%"

Notice that this works by convention: GORM uses reflection and transforms the struct type name WorkingAgedUser into working_aged_users.
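The transformation is roughly snake_case plus pluralization. The helper below is a simplified sketch for illustration only; GORM's real NamingStrategy also handles edge cases such as initialisms:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// toSnake is a simplified sketch of the CamelCase-to-snake_case step.
// GORM's actual NamingStrategy additionally pluralizes the result
// (e.g. struct "User" maps to table "users").
func toSnake(s string) string {
	var b strings.Builder
	for i, r := range s {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('_')
			}
			b.WriteRune(unicode.ToLower(r))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	// "WorkingAgedUser" -> "working_aged_user", pluralized to "working_aged_users".
	fmt.Println(toSnake("WorkingAgedUser") + "s")
}
```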

I have always felt that working with views in GORM isn't the smoothest experience. Here's why:

The "GORM way" of doing things is defining struct types and using them for everything. They serve as the foundation for modeling, querying data, and migrations. However, in my eyes, the current way of using views in GORM doesn't align with this principle. Views are defined in multiple places: the backing query, the migration step, and finally the runtime query struct.

As a GORM user, I have always wished that everything would just work from the same struct definition.

To address this challenge, our team working on the Atlas GORM provider (an Atlas plugin that enhances GORM's migration capabilities) came up with a neat solution. Here's what it looks like:

models/models.go
// WorkingAgedUsers is mapped to the VIEW definition below.
type WorkingAgedUsers struct {
	Name string
	Age  int
}

func (WorkingAgedUsers) ViewDef(dialect string) []gormschema.ViewOption {
	return []gormschema.ViewOption{
		gormschema.BuildStmt(func(db *gorm.DB) *gorm.DB {
			return db.Model(&User{}).Where("age BETWEEN 18 AND 60").Select("id, name, age")
		}),
	}
}

The migration step is now as simple as:

main.go
gormschema.New("mysql").Load(
	&models.User{},             // Table-based model.
	&models.WorkingAgedUsers{}, // View-based model.
)

It is also worth mentioning that querying the view is still the same:

var johnFamilies []WorkingAgedUsers
db.Where("name LIKE ?", "John%").Find(&johnFamilies)
// SELECT * FROM `working_aged_users` WHERE name LIKE "John%"

The key benefits of this approach are:

  • Alignment with GORM Philosophy: It follows the GORM (and generally ORM) principle that structs model database objects, both for schema definition and querying.
  • Unified Source of Truth: It consolidates the schema source of truth for migrations and the DB Query API in a single location - the view definition structs.

This seamless integration of views with GORM's core principles results in a more organic and holistic workflow when working with database views. In the end, it's easy to think of views as read-only tables backed by a query, and this is precisely what this API is designed for.

Demo Time!

Let's walk through a step-by-step example of using GORM Atlas Provider to automatically plan schema migrations for tables and views in a GORM project.

Installation

If you haven't already, install Atlas from macOS or Linux by running:

curl -sSf https://atlasgo.sh | sh

See atlasgo.io for more installation options.

In addition, the view feature is only available for logged-in users, run the following command to login:

atlas login

Install the provider by running:

go get -u ariga.io/atlas-provider-gorm

Step 1: Create a GORM Application

Models are defined using normal structs. For views, we define a struct and implement the ViewDefiner interface. The ViewDef(dialect string) method receives the dialect argument to determine the SQL dialect to generate the view. It is helpful for generating the view definition for different SQL dialects if needed.

Let's create a file that will contain our database models. We will call it models/models.go

models/models.go
package models

import (
	"ariga.io/atlas-provider-gorm/gormschema"
	"gorm.io/gorm"
)

// User is a regular gorm.Model stored in the "users" table.
type User struct {
	gorm.Model
	Name   string
	Age    int
	Gender string
}

// WorkingAgedUser is mapped to the VIEW definition below.
type WorkingAgedUser struct {
	Name string
	Age  int
}

For views, our provider provides two options for defining the view:

  • BuildStmt: allows you to define a query using the GORM API. This is useful when you need to use GORM's query building capabilities.
  • CreateStmt: allows you to define a query using raw SQL. This is useful when you need to define a complex query that GORM cannot handle.

BuildStmt

This option allows you to define the view using the GORM API. The dialect is handled automatically by GORM.

models/models.go
func (WorkingAgedUser) ViewDef(dialect string) []gormschema.ViewOption {
	return []gormschema.ViewOption{
		// The view name will adhere to GORM's convention for table names,
		// which is "working_aged_users" in this case.
		gormschema.BuildStmt(func(db *gorm.DB) *gorm.DB {
			return db.Table("users").Select("name, age").Where("age BETWEEN 18 AND 60")
		}),
	}
}

CreateStmt

This option gives you more flexibility to define the view using raw SQL. However, it also involves a trade-off, as you need to handle the SQL dialects yourself if you want it to work across multiple databases (e.g. switching databases, writing integration tests, etc.).

models/models.go
func (WorkingAgedUser) ViewDef(dialect string) []gormschema.ViewOption {
	return []gormschema.ViewOption{
		gormschema.CreateStmt(`
			CREATE VIEW working_aged_users AS
			SELECT
				name,
				age
			FROM
				users
			WHERE
				age BETWEEN 18 AND 60
		`),
	}
}

For demonstration purposes, we will use the CreateStmt option with the default dialect.

Step 2: Setup Atlas GORM Provider

Standalone vs Go Program mode

This feature works in both Standalone and Go Program modes:

  • Standalone: If your views and models are in the same package, you can use the provider directly to load your GORM schema into Atlas.
  • Go Program: If you have them defined in different packages, you can use the provider as a library in your Go program to load your GORM schema into Atlas.

Since all of our models are in the same package, it's pretty handy to use the Standalone mode. But if you're curious, you can also try the Go Program mode with more detail in the GORM Guide.

In your project directory, create a new file named atlas.hcl with the following contents:

atlas.hcl
data "external_schema" "gorm" {
  program = [
    "go",
    "run",
    "-mod=mod",
    "ariga.io/atlas-provider-gorm",
    "load",
    "--path", "./models",  // path to your models
    "--dialect", "mysql",  // | postgres | sqlite | sqlserver
  ]
}

env "gorm" {
  src = data.external_schema.gorm.url
  dev = "docker://mysql/8/dev" // the dev-database needs to be mapped to the same dialect above
  migration {
    dir = "file://migrations"
  }
  format {
    migrate {
      diff = "{{ sql . \" \" }}"
    }
  }
}

Using docker://

If you use the docker:// driver for spinning up your Dev Database be sure that Docker is running locally on your machine first.

Next, to prevent the Go Modules system from dropping this dependency from our go.mod file, let's follow the Go Module's official recommendation for tracking dependencies of tools and add a file named tools.go with the following contents:

tools.go
//go:build tools

package main

import _ "ariga.io/atlas-provider-gorm/gormschema"

Alternatively, you can simply add a blank import to the models.go file we created above.

Finally, to tidy things up, run:

go mod tidy

Step 3: Generate Migrations

We can now generate a migration file by running this command:

atlas migrate diff --env gorm 

Observe that files similar to this were created in the migrations directory:

migrations
├── 20240525153051.sql
└── atlas.sum

1 directory, 2 files

Examining the contents of 20240525153051.sql:

migrations/20240525153051.sql
-- Create "users" table
CREATE TABLE `users` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `created_at` datetime(3) NULL,
  `updated_at` datetime(3) NULL,
  `deleted_at` datetime(3) NULL,
  `name` longtext NULL,
  `age` bigint NULL,
  `gender` longtext NULL,
  PRIMARY KEY (`id`),
  INDEX `idx_users_deleted_at` (`deleted_at`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "working_aged_users" view
CREATE VIEW `working_aged_users` (
  `name`,
  `age`
) AS select `users`.`name` AS `name`,`users`.`age` AS `age` from `users` where (`users`.`age` between 18 and 60);

Amazing! Atlas automatically generated a migration file that will create the users table and working_aged_users view in our database!

Step 4: Update the View

Next, as business requirements change, the age range is now different for each gender. Let's update the WorkingAgedUser struct and its view definition.

models/models.go
type WorkingAgedUser struct {
  Name   string
  Age    int
+ Gender string
}

func (WorkingAgedUser) ViewDef(dialect string) []gormschema.ViewOption {
  return []gormschema.ViewOption{
    gormschema.CreateStmt(`
      CREATE VIEW working_aged_users AS
      SELECT
        name,
        age,
+       gender
      FROM
        users
      WHERE
-       age BETWEEN 18 AND 60
+       (gender = 'male' AND age BETWEEN 18 AND 65) OR
+       (gender = 'female' AND age BETWEEN 18 AND 60)
    `),
  }
}

Re-run this command:

atlas migrate diff --env gorm 

Observe a new migration file is generated 🎉:

migrations
├── 20240525153051.sql
├── 20240525153152.sql
└── atlas.sum

1 directory, 3 files
migrations/20240525153152.sql
-- Modify "working_aged_users" view
CREATE OR REPLACE VIEW `working_aged_users` (
  `name`,
  `age`,
  `gender`
) AS select `users`.`name` AS `name`,`users`.`age` AS `age`,`users`.`gender` AS `gender` from `users` where (((`users`.`gender` = 'male') and (`users`.`age` between 18 and 65)) or ((`users`.`gender` = 'female') and (`users`.`age` between 18 and 60)));

Wrapping Up

In this post, we have shown how to use Atlas to manage database schema migrations for tables and views in a GORM project. This is just one of the many features that Atlas provides for working with your database schema. Checkout the Atlas documentation for more information.

Have questions? Feedback? Find our team on our Discord server.

· 9 min read
Rotem Tamir

Hi everyone,

It's been a few weeks since the release of v0.22, and we're excited to be back with the next version of Atlas, packed with some long awaited features and improvements.

  • Redshift Support - Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Starting today, you can use Atlas to manage your Redshift Schema-as-Code.
  • CircleCI Integration - Following some recent requests from our Enterprise customers, we have added a CircleCI orb to make it easier to integrate Atlas into your CircleCI pipelines.
  • Kubernetes Operator Down Migrations - The Kubernetes Operator now detects when you are moving to a previous version and will attempt to apply a down migration if configured to do so.
  • GORM View Support - We have added support for defining SQL Views in your GORM models.
  • SQLAlchemy Provider Improvements - We have added support for defining models using SQLAlchemy Core Tables in the SQLAlchemy provider.
  • ERD v2 - We have added a new navigation sidebar to the ERD to make it easier to navigate within large schemas.
  • PostgreSQL Improvements - We have added support for PostgreSQL Event Triggers, Aggregate Functions, and Function Security.

Let's dive in!

Redshift Beta Support

Atlas's "Database Schema-as-Code" is useful even for managing small schemas with a few tables, but it really shines when you have a large schema with many tables, views, and other objects. This is the rule rather than the exception when you are dealing with data warehouses like Redshift that aggregate data from multiple sources.

Data warehouses typically store complex and diverse datasets consisting of hundreds of tables with thousands of columns and relationships. Managing these schemas manually can be a nightmare, and that's where Atlas comes in.

Today we are happy to announce the beta support for Amazon Redshift in Atlas. You can now use Atlas to manage your Redshift schema, generate ERDs, plan and apply changes, and more.

To get started, download and install the latest release of the Atlas CLI by running the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Next, login to your Atlas account to activate the Redshift beta feature:

atlas login

To verify Atlas is able to connect to your Redshift database, run the following command:

atlas schema inspect --url "redshift://<username>:<password>@<host>:<port>/<database>?search_path=<schema>"

If everything is working correctly, you should see the Atlas DDL representation of your Redshift schema.
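For illustration only, the inspected schema might look roughly like the following Atlas HCL (the table and column names here are hypothetical, and the exact output depends on your schema):

```hcl
table "events" {
  schema = schema.public
  column "id" {
    null = false
    type = bigint
  }
  column "name" {
    null = true
    type = varchar(255)
  }
}
schema "public" {
}
```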

To learn more about the Redshift support in Atlas, check out the documentation.

CircleCI Integration

CircleCI is a popular CI/CD platform that allows you to automate your software development process. With this version we have added a CircleCI orb to make it easier to integrate Atlas into your CircleCI pipeline. CircleCI orbs are reusable packages of YAML configuration that condense repeated pieces of config into a single line of code.

As an example, suppose you wanted to create a CircleCI pipeline that pushes your migration directory to your Atlas Cloud Schema Registry. You can use the atlas-orb to simplify the configuration:

version: '2.1'
orbs:
  atlas-orb: ariga/atlas-orb@0.0.3
workflows:
  postgres-example:
    jobs:
      - push-dir:
          context: the-context-has-ATLAS_TOKEN
          docker:
            - image: cimg/base:current
            - environment:
                POSTGRES_DB: postgres
                POSTGRES_PASSWORD: pass
                POSTGRES_USER: postgres
              image: cimg/postgres:16.2
          steps:
            - checkout
            - atlas-orb/setup:
                cloud_token_env: ATLAS_TOKEN
                version: latest
            - atlas-orb/migrate_push:
                dev_url: >-
                  postgres://postgres:pass@localhost:5432/postgres?sslmode=disable
                dir_name: my-cool-project

Let's break down the configuration:

  • The push-dir job uses the cimg/postgres:16.2 Docker image to run a PostgreSQL database. This database will be used as the Dev Database for different operations performed by Atlas.
  • The atlas-orb/setup step initializes the Atlas CLI with the provided ATLAS_TOKEN environment variable.
  • The atlas-orb/migrate_push step pushes the migration directory my-cool-project to the Atlas Cloud Schema Registry.

To learn more about the CircleCI integration, check out the documentation.

Kubernetes Operator Down Migrations

The Atlas Operator is a Kubernetes operator that enables you to manage your database schemas using Kubernetes Custom Resources. In one of our recent releases, we added support for the migrate down command to the CLI. Using this command, you can roll back applied migrations in a safe and controlled way, without using pre-planned down migration scripts or manual intervention.

Starting with v0.5.0, the Atlas Operator supports down migrations as well. When you change the desired version of your database for a given AtlasMigration resource, the operator will detect whether you are moving to a previous version and will attempt to apply a down migration if you configured it to do so.

Down migrations are controlled via the new protectedFlows field in the AtlasMigration resource. This field allows you to specify the policy for down migrations. The following policy, for example, allows down migrations and auto-approves them:

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlasmig-mysql
spec:
  protectedFlows:
    migrateDown:
      allow: true
      autoApprove: true
  # ... redacted for brevity

Alternatively, Atlas Cloud users may set the autoApprove field to false to require manual approval for down migrations. In this case, the operator will pause the migration and wait for the user to approve the down migration before proceeding:
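For example, a policy that allows down migrations but requires manual approval might look like this (a sketch based on the resource above):

```yaml
spec:
  protectedFlows:
    migrateDown:
      allow: true
      autoApprove: false # pause and wait for approval before migrating down
```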

ERD v2

When you push your migration directory to the Atlas Cloud Schema Registry, Atlas generates an ERD for your schema. The ERD is a visual representation of your schema that shows the different database objects in your schema and the relationships between them.

To make it easier to navigate within large schemas, we have recently added a fresh new navigation sidebar to the ERD.

GORM View Support

GORM is a popular ORM library for Go that provides a simple way to interact with databases. The Atlas GORM provider provides a seamless integration between Atlas and GORM, allowing you to generate migrations from your GORM models and apply them to your database.

SQL Views are a powerful feature in relational databases that allow you to create virtual tables based on the result of a query. Managing views with GORM (and ORMs in general) is a notoriously clunky process, as they are normally not first-class citizens in the ORM world.

With v0.4.0, we have added a new API to the GORM provider that allows you to define views in your GORM models.

Here's a glimpse of how you can define a view in GORM:

// User is a regular gorm.Model stored in the "users" table.
type User struct {
	gorm.Model
	Name string
	Age  int
}

// WorkingAgedUsers is mapped to the VIEW definition below.
type WorkingAgedUsers struct {
	Name string
	Age  int
}

func (WorkingAgedUsers) ViewDef(dialect string) []gormschema.ViewOption {
	return []gormschema.ViewOption{
		gormschema.BuildStmt(func(db *gorm.DB) *gorm.DB {
			return db.Model(&User{}).Where("age BETWEEN 18 AND 65").Select("name, age")
		}),
	}
}

By implementing the ViewDefiner interface, GORM users can now include views in their GORM models and have Atlas automatically generate the necessary SQL to create the view in the database.

To learn more about the GORM view support, check out the documentation.

Thanks to luantranminh for contributing this feature!

SQLAlchemy Provider Improvements

The Atlas SQLAlchemy provider allows you to generate migrations from your SQLAlchemy models and apply them to your database.

With v0.2.2, we have added support for defining models using SQLAlchemy Core Tables in addition to the existing support for ORM Models.

In addition, we have decoupled the provider from using a specific SQLAlchemy release, allowing users to use any version of SQLAlchemy they prefer. This should provide more flexibility and make it easier to integrate the provider into your existing projects.

Huge thanks to vshender for contributing these improvements!

Other Improvements

On our quest to support the long tail of lesser known database features we have recently added support for the following:

PostgreSQL Event Triggers

PostgreSQL Event Triggers are a special kind of trigger. Unlike regular triggers, which are attached to a single table and capture only DML events, event triggers are global to a particular database and are capable of capturing DDL events.

Here are some examples of how you can use event triggers in Atlas:

# Block table rewrites.
event_trigger "block_table_rewrite" {
  on      = table_rewrite
  execute = function.no_rewrite_allowed
}

# Filter specific events.
event_trigger "record_table_creation" {
  on      = ddl_command_start
  tags    = ["CREATE TABLE"]
  execute = function.record_table_creation
}
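For reference, the first block corresponds roughly to the following PostgreSQL DDL (a sketch, assuming a no_rewrite_allowed function already exists in the database):

```sql
CREATE EVENT TRIGGER block_table_rewrite
  ON table_rewrite
  EXECUTE FUNCTION no_rewrite_allowed();
```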

Aggregate Functions

Aggregate functions are functions that operate on a set of values and return a single value. They are commonly used in SQL queries to perform calculations on groups of rows. PostgreSQL allows users to define custom aggregate functions using the CREATE AGGREGATE statement.

Atlas now supports defining custom aggregate functions in your schema. Here's an example of how you can define an aggregate function in Atlas:

aggregate "sum_of_squares" {
schema = schema.public
arg {
type = double_precision
}
state_type = double_precision
state_func = function.sum_squares_sfunc
}

function "sum_squares_sfunc" {
schema = schema.public
lang = PLpgSQL
arg "state" {
type = double_precision
}
arg "value" {
type = double_precision
}
return = double_precision
as = <<-SQL
BEGIN
RETURN state + value * value;
END;
SQL
}
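For reference, the HCL above roughly maps to the following SQL (a sketch, not necessarily the exact DDL Atlas generates; the measurements table in the usage example is hypothetical):

```sql
CREATE AGGREGATE sum_of_squares (double precision) (
  SFUNC = sum_squares_sfunc,
  STYPE = double precision
);

-- Hypothetical usage, assuming a measurements(value) table exists:
SELECT sum_of_squares(value) FROM measurements;
```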

Function Security

PostgreSQL allows you to define the security level of a function using the SECURITY clause. The SECURITY clause can be set to DEFINER or INVOKER. When set to DEFINER, the function is executed with the privileges of the user that defined the function. When set to INVOKER, the function is executed with the privileges of the user that invoked the function. This is useful when you want to create functions that execute with elevated privileges.

Atlas now supports defining the security level of functions in your schema. Here's an example of how you can define a function with SECURITY DEFINER in Atlas:

function "positive" {
schema = schema.public
lang = SQL
arg "v" {
type = integer
}
return = boolean
as = "SELECT v > 0"
security = DEFINER
}
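For reference, this roughly maps to the following SQL (a sketch, not necessarily the exact DDL Atlas generates):

```sql
CREATE FUNCTION positive(v integer) RETURNS boolean
  LANGUAGE SQL
  SECURITY DEFINER
  AS 'SELECT v > 0';
```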

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 7 min read

Hi everyone,

It's been a few weeks since our last release, and we're happy to be back with a version packed with brand new and exciting features. Here's what's inside:

  • RENAME Detection - This version includes a RENAME detector that identifies ambiguous situations of potential resource renames and interactively asks the user for feedback before generating the changes.
  • PostgreSQL Features
    • UNIQUE and EXCLUDE - Added support for unique constraints and exclusion constraints.
    • Composite Types - Added support for composite types, which are user-defined types that represent the structure of a row.
    • Table lock checks - Eight new checks that review migration plans and alert in cases of potential table locks in different modes.
  • SQL Server Sequence Support - Atlas now supports managing sequences in SQL Server.

Let's dive in!

RENAME Detection

One of the first things we were asked when we introduced Atlas's declarative approach to schema management was, “How are you going to deal with renames?” While column and table renames are a fairly rare event in the lifecycle of a project, the question arises from the fact that it's impossible to completely disambiguate between RENAME and DROP and ADD operations. While the end schema will be the same in both cases, the actual impact of an undesired DROP operation can be disastrous.

To avoid this, Atlas now detects potential RENAME scenarios during the migration planning phase and prompts the user about their intent.

Let's see this in action.

Assume we have a users table with the column first_name, which we changed to name.

After running the atlas migrate diff command to generate a migration, we will see the following:

? Did you rename "users" column from "first_name" to "name":
▸ Yes
  No

If this was our intention, we will select "Yes" and the generated SQL will be a RENAME statement.

If we select "No", the generated SQL will drop the first_name column and create the name column instead.

PostgreSQL Features

Unique and Exclude Constraints

Atlas now supports declaring unique and exclude constraints in your schema.

For example, if we were to add a unique constraint on a name column, it would look similar to:

schema.hcl
# Columns only.
unique "name" {
  columns = [column.name]
}

Read more about unique constraints in Atlas here.

Exclusion constraints ensure that if any two rows are compared using the specified operators, at least one of these operator comparisons will return false or null. In other words, the constraint guarantees that no two rows satisfy all of the specified comparisons at the same time.

schema.hcl
exclude "excl_speaker_during" {
  type = GIST
  on {
    column = column.speaker
    op     = "="
  }
  on {
    column = column.during
    op     = "&&"
  }
}
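For reference, the HCL above roughly maps to the following PostgreSQL constraint (assuming a hypothetical talks table with speaker and during columns, with the btree_gist extension enabled so the = operator can be used in a GIST index):

```sql
ALTER TABLE talks
  ADD CONSTRAINT excl_speaker_during
  EXCLUDE USING GIST (speaker WITH =, during WITH &&);
```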

Composite Types

Composite Types are user-defined data types that represent a structure of a row or record. Once defined, composite types can be used to declare columns in tables or used in functions and stored procedures.

For example, let's say we have a users table where each user has an address. We can create a composite type address and add it as a column to the users table:

schema.hcl
composite "address" {
  schema = schema.public
  field "street" {
    type = text
  }
  field "city" {
    type = text
  }
}

table "users" {
  schema = schema.public
  column "address" {
    type = composite.address
  }
}

schema "public" {
  comment = "standard public schema"
}
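Once the column exists, individual fields of the composite value can be accessed with a parenthesized expression, for example:

```sql
-- The parentheses tell PostgreSQL that "address" is a column, not a table:
SELECT (address).street, (address).city FROM users;
```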

Learn more about composite types here.

Table Locking Checks

One of the common ways in which schema migrations cause production outages is when a schema change requires the database to acquire a lock on a table, immediately causing read or write operations to fail. If you are dealing with small tables, these locks might be held only briefly, which will not be noticeable. However, if you are managing a large and busy database, these situations can lead to a full-blown system outage.

Many developers are not aware of these pitfalls and only discover them in the middle of a crisis, which is made even worse by the fact that once they happen, there's nothing you can do except quietly wait for the migration to complete.

Teams looking to improve the reliability and stability of their systems turn to automation to prevent human errors like these. Atlas's automatic analysis capabilities can be utilized to detect such risky changes during the CI phase of the software development lifecycle.

In this version, we have added eight new analyzers to our PostgreSQL integration that check for cases where planned migrations can lead to locking a table. Here's a short rundown of these analyzers and what they detect:

  • PG104 - Adding a PRIMARY KEY constraint (with its index) acquires an ACCESS EXCLUSIVE lock on the table, blocking all access during the operation.
  • PG105 - Adding a UNIQUE constraint (with its index) acquires an ACCESS EXCLUSIVE lock on the table, blocking all access during the operation.
  • PG301 - A change to the column type that requires rewriting the table (and potentially its indexes) on disk.
  • PG302 - Adding a column with a volatile DEFAULT value requires a rewrite of the table.
  • PG303 - Modifying a column from NULL to NOT NULL requires a full table scan.
    • If the table has a CHECK constraint that ensures NULL cannot exist, such as CHECK (c > 10 AND c IS NOT NULL), the table scan is skipped, and therefore this check is not reported.
  • PG304 - Adding a PRIMARY KEY on a nullable column implicitly sets them to NOT NULL, resulting in a full table scan unless there is a CHECK constraint that ensures NULL cannot exist.
  • PG305 - Adding a CHECK constraint without the NOT VALID clause requires scanning the table to verify that all rows satisfy the constraint.
  • PG306 - Adding a FOREIGN KEY constraint without the NOT VALID clause requires a full table scan to verify that all rows satisfy the constraint.

View a full list of all the checks here.
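As an illustration of how to avoid the full scans flagged by PG305 and PG306, such constraints can be added in two steps (table and constraint names here are hypothetical):

```sql
-- Step 1: add the constraint without validating existing rows.
ALTER TABLE users ADD CONSTRAINT age_positive CHECK (age > 0) NOT VALID;

-- Step 2: validate separately; this still scans the table, but under a
-- weaker lock that does not block reads and writes.
ALTER TABLE users VALIDATE CONSTRAINT age_positive;
```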

SQL Server Sequence Support

In SQL Server, sequences are objects that generate a sequence of numeric values according to specific properties. Sequences are often used to generate unique identifiers for rows in a table.

Atlas supports the different types of sequences. For example, a simple sequence with default values can be declared like so:

sequence "s1" {
schema = schema.dbo
}

We can also create a sequence with a custom configuration.

sequence "s2" {
schema = schema.dbo
type = int
start = 1001
increment = 1
min_value = 1001
max_value = 9999
cycle = true
}

In the example above, we have a sequence that starts at 1001, and is incremented by 1 until it reaches the maximum value of 9999. Once it reaches its maximum value, it will start over because cycle is set to true.
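To make the cycle behavior concrete, here is a toy Python sketch of the numeric semantics described above (an illustration only, not how SQL Server implements sequences):

```python
import itertools

def sequence_values(start, increment, min_value, max_value, cycle):
    """Yield values the way the "s2" sequence above is configured."""
    value = start
    while True:
        yield value
        value += increment
        if value > max_value:
            if not cycle:
                return  # a non-cycling sequence is simply exhausted
            value = min_value  # CYCLE: start over from the minimum value

# Near the maximum, the sequence wraps back to min_value:
values = list(itertools.islice(sequence_values(9998, 1, 1001, 9999, True), 4))
# values == [9998, 9999, 1001, 1002]
```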

Another option is to create a sequence with an alias type. For example, if we were to create a sequence for Social Security Numbers, we would do the following:

sequence "s3" {
schema = schema.dbo
type = type_alias.ssn
start = 111111111
increment = 1
min_value = 111111111
}
type_alias "ssn" {
schema = schema.dbo
type = dec(9, 0)
null = false
}

Read the docs for more information about sequences.

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 11 min read
Rotem Tamir

Building a loveable migration tool

"I just love dealing with migrations!"

-No developer, ever.

Over the past three years, Ariel, my co-founder and I (along with the rest of our team at Ariga), have been working on Atlas, a database schema-as-code tool. After many years of building software professionally, we've come to realize that one of the most stressful, tedious and error-prone parts of building software is dealing with database migrations.

In case you are unfamiliar with the term, database migrations are the process of changing the structure of a database. When applications evolve, the database schema needs to evolve with them. This is commonly done by writing scripts that describe the changes to the database schema. These scripts are then executed in order to apply the changes to the database. This process has earned the name "migrations" and an infamous reputation among developers.

The secret to building a successful tool for developers is to be relentlessly focused on the user experience. When we started working on Atlas, we spent a long time researching the common issues developers face when dealing with migrations. We wanted to understand the root causes of these issues and design a tool that would solve them.

In this post, I'll share the top 5 usability issues we identified with migration tools and how we addressed them in Atlas.

Issue #1: The weirdest source of truth

It should be possible to provision our environments and build, test, and deploy our software in a fully automated fashion purely from information stored in version control

Forsgren PhD, Nicole; Humble, Jez; Kim, Gene. Accelerate (p. 72). IT Revolution Press. Kindle Edition.

One of the most important principles that came from the DevOps movement is the idea that to achieve effective automation, you need to be able to build everything, deterministically, from a single source of truth.

This is especially important when it comes to databases! The database schema is a super critical part of our application and we better have a way to ensure it is compatible with the code we are deploying.

Classic migration tools (like Flyway and Liquibase) were, and to this day remain, an amazing step forward in automating schema changes, to the point that it is possible to run them as part of your CI/CD pipeline, satisfying the principle above.

But while technically correct, in terms of usability, they provide a very poor developer experience.

Consider this directory structure from a typical Flyway project:

.
├── V1__create_table1.sql
├── V2__create_table_second.sql
└── V3_1__add_comments.sql

To have a migration directory describe the current schema of a database is like describing your home by listing the needed steps to build it. It's correct, but it's not very useful.

Issue #1: Migrations as a source of truth

Problem: As a human, I can't understand the current state of the database by looking at the migration files. I need to run them in chronological order to understand the current state of the database.

Solution: Use a declarative schema-as-code approach to describe the current state of the database. Then, in every version of our project, we can read the current schema in plain text and understand it.
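For example, with a schema-as-code approach, the desired state lives in a plain file that anyone can read at any commit. A minimal sketch in Atlas HCL (names are illustrative):

```hcl
schema "public" {}

table "users" {
  schema = schema.public
  column "id" {
    type = int
  }
  column "name" {
    type = varchar(255)
  }
}
```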

Issue #2: Manual planning

Classic migration tools typically require you to write the migration scripts manually. This process can be tedious and error-prone, especially if you have developers who don't have a lot of experience operating databases.

Classic migration tools were created at a time when tech-stacks were much simpler and less diverse. In addition, many organizations employed a DBA who could serve as a technical authority for database changes. Nowadays, developers are expected to "own" their databases and DBAs are seriously outnumbered. This means that developers are expected to write migration scripts, even if they don't have a lot of experience with databases.

Doing something when it's not in your area of expertise, especially when it's critical and risky like database migrations, can be a daunting and stressful task. Even if it is something you are good at, it still requires your attention and focus.

It's important to note that some ORMs (such as Django and Prisma) do stand out in their ability to generate migration scripts automatically (with some important limitations). But for the most part, developers are expected to write migration scripts manually.

Issue #2: Manual planning

Problem: Writing migration scripts can be stressful and error-prone, especially for developers who don't have a lot of experience with databases. They need to first know what the current state of the database is, then they need to know what the desired state is, and then they need to write a script that will take the database from the current state to the desired state. Some changes are trivial, but others require consideration and research.

Solution: Use a tool that can automatically generate migration scripts for you. This way, you can focus on the desired state of the database and let the tool figure out how to get there.

Issue #3: Working in parallel

When a project succeeds to the point that it has many developers working on it, it's common to see developers working on different features that require different changes to the database schema. This can lead to conflicts when developers try to merge their changes together.

Classic migration tools don't provide a good way to detect and handle this situation. Because each migration script is created in its own file, common conflict detection tools like git can't help you. Your source control system can't tell you if two developers are working on the same table, or if they are adding columns with the same name.

For this reason, it's common to see teams surprised in production when conflicting migrations are applied. Even worse, in some cases migrations may be applied out of order or skipped entirely, leading to an inconsistent and unknown state of the database.

Issue #3: Working in parallel

Problem: Source control systems can't help you detect conflicts between migration scripts. This can lead to undetected issues in production when they are deployed. To overcome this, teams develop their own processes to coordinate changes to the database schema which slow down development and are error-prone.

Solution: Use a tool that maintains a directory integrity file that will detect conflicting changes and force you to resolve them before you can proceed.

Issue #4: Tracking partial failures

Virtually all classic migration tools maintain a metadata table on the target database that tracks which migrations have been applied. Using this metadata, the tool can determine which migrations need to be applied to bring the database up to date.

When migrations succeed, everything is great. But in my experience, when migrations fail, especially when they fail partially, it can be a nightmare to recover from.

Suppose you are planning a migration from version N of the database to version N+1. The migration script for version N+1 contains 10 changes. The migration fails after applying the 5th change. If you use a database that supports fully transactional DDL (like Postgres), and all of your changes are transactional, then you are in luck - your migration tool can safely roll back the changes that were applied, and your database (and its revisions table) will remain at version N.

But if you are using a database that doesn't support transactional DDL, or if your changes are not transactional, then you are in trouble. The migration tool can't roll back the changes that were applied, and your database will be in a state somewhere between version N and version N+1. None of the migration tools that I know of capture this interim state, so you are left with a database in an unknown state and a revision table that is out of sync with the actual state of the database.

The most viewed question about golang-migrate on StackOverflow is about this error:

Dirty database version 2. Fix and force version.

The answer explains:

Before a migration runs, each database sets a dirty flag. Execution stops if a migration fails and the dirty state persists, which prevents attempts to run more migrations on top of a failed migration.

And the solution?

After you clean up you database you can also open schema_migrations table and change Dirty flag and rollback version number to last migration that was successfully applied.

Issue #4: Tracking partial failures

Problem: Classic migration tools don't handle partial failures well. When a migration fails, especially when it fails partially, it can leave your database in an unknown state with the revision table out of sync with the actual state of the database. Resolving this issue requires manual handling of the database which is dangerous and error-prone.

Solution: Use a tool that natively supports transactions where possible, and uses statement-level granularity to keep track of applied changes.

Issue #5: Pre-planned rollbacks

One of the most curious things about classic migration tools is the prevalence of the down migration. The idea is that whenever you write a migration script to take the database from version N to version N+1, then you should also write the down migration that will take the database from version N+1 back to version N.

Why is this curious? Because after interviewing many developers across many organizations in virtually every industry, I have learned that in practice nobody uses down migrations, especially in production.

We have recently written extensively on this topic, but the gist is that down migrations are never used because:

  • They are naive. Pre-planned rollbacks assume that all statements in the up migration have been applied successfully. In practice, rolling back a version happens when things did not work as planned, making the pre-planned rollback obsolete.

  • They are destructive. If you successfully rolled out a migration that added a column to a table, and then decided to revert it, its inverse operation (DROP COLUMN) does not merely remove the column. It deletes all the data in that column. Re-applying the migration would not bring back the data, as it was lost when the column was dropped.

  • They are incompatible with broader rollback mechanisms. In theory, rolling back a deployment should be as simple as deploying the previous version of the application. When it comes to versions of our application code, this works perfectly. We pull the container image that corresponds to the previous version, and we deploy it.

    But what about the database? When we pull artifacts from a previous version, they do not contain the down files that are needed to revert the database changes back to the necessary schema - they were only created in a future commit!

Issue #5: Pre-planned rollbacks

Problem: Pre-planned rollbacks ("down migrations") are never used in practice. They are naive, destructive, and incompatible with broader rollback mechanisms. This leads to a false sense of security which is unraveled when a rollback is actually needed and the user realizes that they will need to handle it manually.

Solution: Use a tool that supports dynamically-planned, safe rollbacks that can revert the database to a previous state without data loss even in cases of partial failures.

Try Atlas today

I believe these usability issues should not be taken lightly. Your team's ability to move fast, refactor and respond to changing requirements is limited by the agility in which you can evolve your database schema. One of the most interesting things that we see with teams that adopt Atlas is seeing them move from occasional, dreaded, avoided-at-all-costs migrations, to planning and deploying hundreds of schema changes a year. This means that they can move faster, respond to customer feedback more quickly, and innovate more effectively.

To try Atlas, head over to the Getting Started guide today!

Wrapping up

In this article, we have discussed the top 5 usability issues with migration tools today and hinted at how modern migration tools (like Atlas) can address them.

As always, we would love to hear your feedback and suggestions on the Atlas Discord server.

· 16 min read
Ariel Mashraki

TL;DR

Ever since my first job as a junior engineer, the seniors on my team told me that whenever I make a schema change I must write the corresponding "down migration", so it can be reverted at a later time if needed. But what if that advice, while well-intentioned, deserves a second look?

Today, I want to argue that contrary to popular belief, down migration files are actually a bad idea and should be actively avoided.

In the final section, I'll introduce an alternative that may sound completely contradictory: the new migrate down command. I will explain the thought process behind its creation and show examples of how to use it.

Background

Since the beginning of my career, I have worked in teams where, whenever it came to database migrations, we were writing "down files" (ending with the .down.sql file extension). This was considered good practice and an example of how a "well-organized project should be."

Over the years, as my career shifted to focus mainly on infrastructure and database tooling in large software projects (at companies like Meta), I had the opportunity to question this practice and the reasoning behind it.

Down migrations were an odd thing. In my entire career, working on projects with thousands of down files, I never applied them on a real environment. As simple as that: not even once.

Furthermore, since we have started Atlas and to this very day, we have interviewed countless software engineers from virtually every industry. In all of these interviews, we have only met with a single team that routinely applied down files in production (and even they were not happy with how it worked).

Why is that? Why is it that down files are so popular, yet so rarely used? Let's dive in.

Down migrations are the naively optimistic plan for a grim and unexpected world

Down migrations are supposed to be the "undo" counterpart of the "up" migration. Why do "undo" buttons exist? Because mistakes happen, things fail, and then we want a way to quickly and safely revert them. Database migrations are considered something we should do with caution, they are super risky! So, it makes sense to have a plan for reverting them, right?

But consider this: when we write a down file, we are essentially writing a script that will be executed in the future to revert the changes we are about to make. This script is written before the changes are applied, and it is based on the assumption that the changes will be applied correctly. But what if they are not?

When do we need to revert a migration? When it fails. But if it fails, it means that the database might be in an unknown state. It is quite likely that the database is not in the state that the down file expects it to be. For example, if the "up" migration was supposed to add two columns, the down file would be written to remove these two columns. But what if the migration was partially applied and only one column was added? Running the down file would fail, and we would be stuck in an unknown state.
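To make this concrete, consider a sketch of that partial failure (table and column names are illustrative):

```sql
-- up migration: meant to add two columns, but fails midway.
ALTER TABLE users ADD COLUMN email text;  -- applied
ALTER TABLE users ADD COLUMN phone text;  -- failed, never applied

-- pre-planned down migration: written assuming both columns exist.
ALTER TABLE users DROP COLUMN email;
ALTER TABLE users DROP COLUMN phone;      -- errors: column "phone" does not exist
```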

Rolling back additive changes is a destructive operation

When you are working on a local database, without real traffic, having the up/down mechanism for migrations might feel like hitting Undo and Redo in your favorite text editor. But in a real environment, it is not the case.

If you successfully rolled out a migration that added a column to a table, and then decided to revert it, its inverse operation (DROP COLUMN) does not merely remove the column. It deletes all the data in that column. Re-applying the migration would not bring back the data, as it was lost when the column was dropped.
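A minimal sketch of the problem (table and column names are illustrative):

```sql
ALTER TABLE users ADD COLUMN nickname varchar(255); -- up: adds an empty column
UPDATE users SET nickname = 'a8m' WHERE id = 1;     -- production traffic writes data
ALTER TABLE users DROP COLUMN nickname;             -- "down": the column and its data are gone
ALTER TABLE users ADD COLUMN nickname varchar(255); -- re-applying the up does not restore the data
```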

For this reason, teams that want to temporarily deploy a previous version of the application usually do not revert the database changes, because doing so would result in data loss for their users. Instead, they need to assess the situation on the ground and figure out some other way to handle it.

Down migrations are incompatible with modern deployment practices

Many modern deployment practices like Continuous Delivery (CD) and GitOps advocate for the software delivery process to be automated and repeatable. This means that the deployment process should be deterministic and should not require manual intervention. A common way of doing this is to have a pipeline that receives a commit, and then automatically deploys the build artifacts from that commit to the target environment.

As it is very rare to encounter a project with a 0% change failure rate, rolling back a deployment is a common scenario.

In theory, rolling back a deployment should be as simple as deploying the previous version of the application. When it comes to versions of our application code, this works perfectly. We pull the container image that corresponds to the previous version, and we deploy it.

But what about the database? When we pull artifacts from a previous version, they do not contain the down files that are needed to revert the database changes back to the necessary schema - they were only created in a future commit!

For this reason, rollbacks to versions that require reverting database changes are usually done manually, going against the efforts to automate the deployment process by modern deployment practices.

How do teams work around this?

In previous companies I worked for, we faced the same challenges. The tools we used to manage our database migrations advocated for down migrations, but we never used them. Instead, we had to develop some practices to support a safe and automated way of deploying database changes. Here are some of the practices we used:

Migration Rollbacks

When we worked with PostgreSQL, we always tried to make migrations transactional and made sure to isolate the DDLs that prevent this, like CREATE INDEX CONCURRENTLY, into separate migrations. If a deployment failed, for instance, due to a data-dependent change, the entire migration was rolled back, and the application was not promoted to the next version. By doing this, we avoided the need to run down migrations, as the database was left in the same state as it was before the deployment.

Non-transactional DDLs

When we worked with MySQL, which I really like as a database but hate when it comes to migrations, it was challenging. Since MySQL does not support transactional DDLs, failures were more complex to handle. If a migration contained more than one DDL and unexpectedly failed in the middle, because of a constraint violation or another error, we were stuck in an intermediate state that couldn't be automatically reverted by applying a "revert file".

Most of the time, it required special handling and expertise in the data and product. We mainly preferred fixing the data and moving forward rather than dropping or altering the changes that were applied - which was also impossible if the migration introduced destructive changes (e.g., DROP commands).

Making changes Backwards Compatible

A common practice in schema migrations is to make them backwards compatible (BC). We stuck to this approach, and also made it the default behavior in Ent. When schema changes are BC, applying them before starting a deployment should not affect older instances of the app, and they should continue to work without any issues (in rolling deployments, there is a period where two versions of the app are running at the same time).

When there is a need to revert a deployment, the previous version of the app remains fully functional without any issues - if you are an Ent user, this is one of the reasons we avoid SELECT * in Ent. Using SELECT * can break backwards compatibility even for additive changes, like adding a new column, as the application expects to retrieve N columns but unexpectedly receives N+1.
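As a hypothetical illustration (the schema here is invented), consider an old application instance that reads result sets positionally:

```sql
-- The running (old) application issues:
SELECT * FROM users;            -- it expects two columns: (id, name)

-- A backwards-compatible, additive migration runs during a rolling deployment:
ALTER TABLE users ADD COLUMN nickname varchar(255);

-- The same query now returns three columns: (id, name, nickname),
-- which can break old instances that scan results by position or count.
-- Selecting columns explicitly keeps the old code working:
SELECT id, name FROM users;
```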

Deciding Atlas would not support down migrations

When we started Atlas, we had the opportunity to design a new tool from scratch. Seeing as "down files" never helped us solve failures in production, from the very beginning of Atlas, Rotem and I agreed that down files should not be generated - except for cases where users use Atlas to generate migrations for other tools that expect these files, such as Flyway or golang-migrate.

Listening to community feedback

Immediately after Atlas' initial release some two years ago, we started receiving feedback from the community that put this decision in question. The main questions were: "Why doesn't Atlas support down migrations?" and "How do I revert local changes?".

Whenever the opportunity came to engage in such discussions, we eagerly participated and even pursued verbal discussions to better understand the use cases. The feedback and the motivation behind these questions were mainly:

  1. It is challenging to experiment with local changes without some way to revert them.
  2. There is a need to reset dev, staging or test-like environments to a specific schema version.

Declarative Roll-forward

Considering this feedback and the use cases, we went back to the drawing board. We came up with an approach that was primarily about improving developer ergonomics and was in line with the declarative approach that we were advocating for with Atlas. We named this approach "declarative roll-forward".

Although it was not a "down migration" in the traditional sense, it helped revert applied migrations in an automated way. The concept is based on a three-step process:

  1. Use atlas schema apply to plan a declarative migration, using a target revision as the desired state:

    atlas schema apply \
    --url "mysql://root:pass@localhost:3306/example" \
    --to "file://migrations?version=20220925094437" \
    --dev-url "docker://mysql/8/example" \
    --exclude "atlas_schema_revisions"

    This step requires excluding the atlas_schema_revisions table, which tracks the applied migrations, to avoid deleting it when reverting the schema.

  2. Review the generated plan and apply it to the database.

  3. Use the atlas migrate set command to update the revisions table to the desired version:

    atlas migrate set 20220925094437 \
    --url "mysql://root:pass@localhost:3306/example" \
    --dir "file://migrations"

This worked for the defined use cases. However, we felt that our workaround was a bit clunky as it required a three-step process to achieve the result. We agreed to revisit this decision in the future.

Revisiting the down migrations

In recent months, the question of down migrations was raised again by a few of our customers, and we dove into it again with them. I always try to approach these discussions with an open mind, and listen to the different points of view and use cases that I personally haven't encountered before.

Our discussions highlighted the need for a more elegant and automated way to perform deployment rollbacks in remote environments. The solution should address situations where applied migrations need to be reverted, regardless of their success, failure, or partial application, which could leave the database in an unknown state.

The solution needs to be automated, correct, and reviewable, as it could involve data deletion. The solution can't be the "down files", because although their generation can be automated by Atlas and reviewed in the PR stage, they cannot guarantee correctness when applied to the database at runtime.

After weeks of design and experimentation, we introduced a new command to Atlas named migrate down.

Introducing: migrate down

The atlas migrate down command allows reverting applied migrations. Unlike the traditional approach, where down files are "pre-planned", Atlas computes a migration plan based on the current state of the database. Atlas reverts previously applied migrations and executes them until the desired version is reached, regardless of the state of the latest applied migration — whether it succeeded, failed, or was partially applied and left the database in an unknown version.

By default, Atlas generates and executes a set of pre-migration checks to ensure the computed plan does not introduce data deletion. Users can review the plan and execute the checks before the plan is applied to the database by using the --dry-run flag or the Cloud as described below. Let's see it in action on local databases:

Reverting locally applied migrations

Suppose a migration file named 20240305171146.sql was the last one applied to the database and needs to be reverted. Before deleting it, run atlas migrate down to revert the last applied migration:

atlas migrate down \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/example" \
--dev-url "docker://mysql/8/dev"
Migrating down from version 20240305171146 to 20240305160718 (1 migration in total):

-- checks before reverting version 20240305171146
-> SELECT NOT EXISTS (SELECT 1 FROM `logs`) AS `is_empty`
-- ok (50.472µs)

-- reverting version 20240305171146
-> DROP TABLE `logs`
-- ok (53.245µs)

-------------------------
-- 57.097µs
-- 1 migration
-- 1 sql statement

Notice two important things in the output:

  1. Atlas automatically generated a migration plan to revert the applied migration 20240305171146.sql.
  2. Before executing the plan, Atlas ran a pre-migration check to ensure the plan does not introduce data deletion.

After downgrading your database to the desired version, you can safely delete the migration file 20240305171146.sql from the migration directory by running atlas migrate rm 20240305171146.

Then, you can generate a new migration version using the atlas migrate diff command with the optional --edit flag to open the generated file in your default editor.

For local development, the command met our expectations. It is automated, correct (in the sense that it undoes only the reverted files), and reviewable using the --dry-run flag or the Cloud. But what about real environments?

Reverting real environments

For real environments, we're introducing another feature in Atlas Cloud today: the ability to review and approve changes for specific projects and commands. In practice, this means if we trigger a workflow that reverts schema changes in real environments, we can configure Atlas to wait for approval from one or more reviewers.

Here's what it looks like:

Review Required

With this new feature, down migrations are reviewable. But what about their safety and automation? As mentioned above, non-transactional DDLs can really leave us in trouble in case they fail, potentially keeping the database in an unknown state that is hard to recover from - it takes time and requires caution. However, this is true not only for applied (up) migrations but also for their inverse: down migrations. If the database we operate on does not support transactional DDLs, and we fail in the middle of the execution, we are in trouble.

For this reason, when Atlas generates a down migration plan, it considers the database (and its version) and the necessary changes. If a transactional plan that is executable as a single unit can be created, Atlas will opt for this approach. If not, Atlas reverts the applied statements one at a time, ensuring that the database is not left in an unknown state in case a failure occurs midway. If we fail for any reason during the migration, we can rerun Atlas to continue from where it failed. Let's explain this with an example:

Suppose we want to revert these two versions:

migrations/20240329000000.sql
ALTER TABLE users DROP COLUMN account_name;
migrations/20240328000000.sql
ALTER TABLE users ADD COLUMN account_id int;
ALTER TABLE accounts ADD COLUMN plan_id int;

Let's see how Atlas handles this for databases that support transactional DDLs, like PostgreSQL, and those that don't, like MySQL:

  • For PostgreSQL, Atlas starts a transaction and checks that account_id and plan_id do not contain data before they are dropped. Then, Atlas applies one ALTER statement on the users table that adds back the account_name column and drops the account_id column. Then, Atlas executes the other ALTER statement to drop the plan_id column from the accounts table. If any of these statements fail, the transaction is rolled back, and the database is left in the same state as before. If all of them succeed, the revisions table is updated, and the transaction is committed.

  • For MySQL, we can't execute the entire plan as a single unit. This means the same plan cannot be executed, because if we fail in the middle, this intermediate state does not represent any version of the database or the migration directory. Thus, when migrating down, Atlas first applies the ALTER statement to undo 20240329000000 and updates the revisions table. Then, it will undo 20240328000000 statement by statement, and update the revisions table after each successful step. If we fail in the middle, we can re-run Atlas to continue from where it failed.
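To sketch the PostgreSQL case, the down plan might look roughly like this (an illustrative plan, not Atlas' literal output; varchar(255) stands in for account_name's original type, which the migrations above do not show):

```sql
BEGIN;
-- pre-migration checks: the columns about to be dropped must hold no data
SELECT NOT EXISTS (SELECT 1 FROM users WHERE account_id IS NOT NULL);
SELECT NOT EXISTS (SELECT 1 FROM accounts WHERE plan_id IS NOT NULL);
-- undo 20240329000000 and part of 20240328000000 in a single statement
ALTER TABLE users ADD COLUMN account_name varchar(255), DROP COLUMN account_id;
-- undo the rest of 20240328000000
ALTER TABLE accounts DROP COLUMN plan_id;
-- the revisions table is updated here as well; any failure rolls back everything
COMMIT;
```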

What we achieved with the new migrate down command is a safer and more automated way to revert applied migrations.

Down options

By default, atlas migrate down reverts the last applied file. However, you can pass the number of migrations to revert as an argument, or a target version or tag as a flag. For instance, atlas migrate down 2 will revert up to 2 applied migrations, while atlas migrate down --to-tag 297cc2c will undo all migrations down to the state of the migration directory at this tag.

GitHub Actions Integration

In addition to the CLI, we also added an integration with GitHub Actions. If you have already connected your project to the Schema Registry and use GitHub Actions, you can set up a workflow that gets a commit and triggers Atlas to migrate down the database to the version defined by the commit. The workflow will wait for approval and then apply the plan to the database once approved. For more info, see the Down Action documentation.

Atlas GitHub Action

Wrapping up

In retrospect, I'm glad we did not implement the traditional approach in the first place. When meeting with users, we listened to their problems and focused on their expected outcomes rather than the features they requested. This helped us better understand the problem space instead of focusing the discussion on the implementation. The result we came up with is elegant, probably not perfect (yet), but it successfully avoids the issues that bother me most about pre-planned files.

What's next? We're opening it for GA today and invite you to share your feedback and suggestions for improvements on our Discord server.

· 7 min read

Hi everyone,

It's been a few weeks since our last version announcement and today I'm happy to share with you v0.20, which includes some big changes and exciting features:

  • New Pricing Model - As we announced earlier this month, beginning March 15th the new pricing model took effect. The new pricing is usage-based, offering you more flexibility and cost efficiency. Read about what prompted this change and view the new pricing plans here.
  • Django ORM Integration - Atlas now supports Django! Django is a popular ORM for Python. Developers using Django can now use Atlas to automatically plan schema migrations based on the desired state of their schema, instead of crafting them by hand.
  • Support for PostgreSQL Extensions - Atlas now supports installing and managing PostgreSQL extensions.
  • Dashboards in the Cloud - The dashboard (previously the 'Projects' page) got a whole new look in Atlas Cloud. Now you can view the state of your projects and environments at a glance.
  • SQL Server is out of Beta - SQL Server is officially out of Beta! Along with this official support, we have included some new features:
    • User-Defined Types support for SQL Server - Atlas now supports two User-Defined Types: alias types and table types.
    • Azure Active Directory (AAD) Authentication for SQL Server - Connect to your SQL Server database using AAD Authentication.

Let’s dive in!

New Pricing Model

As of March 15th, there is a new pricing model for Atlas users. This change is a result of feedback we received from many teams that the previous $295/month minimum was prohibitive, and a gradual, usage-based pricing model would help them adopt Atlas in their organizations.

You can read the full reasoning for the change and a breakdown of the new pricing in this blog post.

Django ORM Integration

Django is the most popular web framework in the Python community. It includes a built-in ORM which allows users to describe their data model using Python classes. Migrations are then created using the makemigrations command and applied to the database using the migrate command.

Among the many ORMs available in our industry, Django's automatic migration tool is one of the most powerful and robust. It can handle a wide range of schema changes; however, having been created in 2014, a very different era in software engineering, it naturally has some limitations.

Some of the limitations of Django's migration system include:

  1. Database Features - Because it was created to provide interoperability across database engines, Django's migration system is centered around the "lowest common denominator" of database features.

  2. Ensuring Migration Safety - Migrations are a risky business. If you're not careful, you can easily cause data loss or a production outage. Django's migration system does not provide a native way to ensure that a migration is safe to apply.

  3. Modern Deployments - Django does not provide native integration with modern deployment practices such as GitOps or Infrastructure-as-Code.

Atlas, on the other hand, lets you manage your Django applications using the Database Schema-as-Code paradigm. This means that you can use Atlas to automatically plan schema migrations for your Django project, and then apply them to your database.

Read the full guide to set up Atlas for your Django project.

Support for PostgreSQL Extensions

Postgres extensions are add-on modules that enhance the functionality of the database by introducing new objects, such as functions, data types, operators, and more.

The support for extensions has been highly requested, so we are excited to announce that they are finally available!

To load an extension, add the extension block to your schema file. For example, adding PostGIS would look similar to:

schema.hcl
extension "postgis" {
schema = schema.public
version = "3.4.1"
comment = "PostGIS geometry and geography spatial types and functions"
}

Read more about configuring extensions in your schema here.

Dashboards in the Cloud

Atlas Cloud has a new and improved dashboard view!

When working with multiple databases, environments, or even projects - it becomes increasingly difficult to track and manage the state of each of these components. With Atlas Cloud, we aim to provide a single source of truth, allowing you to get a clear overview of each schema, database, environment, deployment and their respective statuses.

project-dashboard

Once you push your migration directory to the schema registry, you will be able to see a detailed dashboard like the one shown above.

Let’s break down what we see:

  • The usage calendar shows when changes are made to your migration directory via the migrate push command in CI.

  • The databases show the state of your target databases. This list will be populated once you have set up deployments for your migration directory. The state of the database can be one of the following:

    • Synced - the database is at the same version as the latest version of your migration directory schema.
    • Failed - the last deployment has failed on this database.
    • Pending - the database is not up to date with the latest version of your migration directory schema.

An alternate view to this page is viewing it per environment. This way, you can see a comprehensive list of the status of each database in each environment.

project-envs

SQL Server out of Beta

We are proud to announce that SQL Server is officially supported by Atlas! Since our release of SQL Server in Beta last August, our team has been working hard to refine and stabilize its performance.

In addition, we have added two new capabilities to the SQL Server driver.

User-Defined Types Support

In SQL Server, user-defined types (UDTs) are a way to create custom data types that group together existing data types. Atlas now supports alias types and table types.

Alias Types

Alias types allow you to create a custom data type, which can then make your code more readable and maintainable.

For example, you might create an alias type email_address for the VARCHAR(100) data type. Instead of repeating VARCHAR(100) throughout your schema, you can simply use email_address for clarity and consistency.

In the schema.hcl file, you would define this like so:

schema.hcl
type_alias "email_address" {
schema = schema.dbo
type = varchar(100)
null = false
}
table "users" {
schema = schema.dbo
column "email_address" {
type = type_alias.email_address
}
}

Table Types

Table types allow you to define a structured data type that represents a table structure. These are particularly useful for passing sets of data between stored procedures and functions. They can also be used as parameters in stored procedures or functions, allowing you to pass multiple rows of data with a single parameter.

For example, we have a type_table to describe the structure of an address. We can declare this table and later use it in a function:

type_table "address" {
schema = schema.dbo
column "street" {
type = varchar(255)
}
column "city" {
type = varchar(255)
}
column "state" {
type = varchar(2)
}
column "zip" {
type = type_alias.zip
}
index {
unique = true
columns = [column.zip]
}
check "zip_check" {
expr = "len(zip) = 5"
}
}
function "insert_address" {
schema = schema.dbo
lang = SQL
arg "@address_table" {
type = type_table.address
readonly = true // Table type arguments must be readonly.
}
arg "@zip" {
type = type_alias.zip
}
return = int
as = <<-SQL
BEGIN
DECLARE @RowCount INT;
INSERT INTO address_table (street, city, state, zip)
SELECT street, city, state, zip
FROM @address_table;

SELECT @RowCount = @@ROWCOUNT;

RETURN @RowCount;
END
SQL
}
type_alias "zip" {
schema = schema.dbo
type = varchar(5)
null = false
}

Read the documentation to learn how to use these types in Atlas.

Azure Active Directory (AAD) Authentication

When using SQL Server with Atlas, you can now connect to your Azure instance with Azure Active Directory Authentication instead of providing your regular database URL.

Use the fedauth parameter to specify the AAD authentication method. For more information, see the document on the underlying driver.

To connect to your Azure instance using AAD, the URL will look similar to:

azuresql://<instance>.database.windows.net?fedauth=ActiveDirectoryDefault&database=master

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 5 min read
Rotem Tamir

Hi everyone,

We are updating you on a pricing change we will be rolling out to Atlas Cloud on March 15th, 2024.

As you know, Atlas is an open-core project, which means that while its core is an Apache 2-licensed open-source project, we are building it as a commercial, cloud-connected solution built and supported by our company, Ariga. As with any startup, our understanding of the product and the market are constantly evolving, and this pricing change is a reflection of that evolution.

Atlas Plans

Even with this change, we will keep providing the Atlas community with three options for how to consume Atlas.

  • Free Plan (formerly "Community Plan") - for individuals and small teams that want to unlock the full potential of Atlas. This plan will remain free forever and provides full access to all the capabilities of Atlas as a CLI as well as access to enough Atlas Cloud quota to successfully manage a single project. Support is provided via public community support channels.

  • Business Plan (formerly "Team Plan") - for teams professionally using Atlas beyond a single project. This plan has the same features and capabilities as the Free Plan but allows teams to purchase additional quotas if required. In addition, teams subscribing to this plan will get access to priority email support and in-app support via Intercom.

  • Enterprise Plan - for larger organizations looking to solve schema management at scale. This plan includes a dedicated support channel, solution engineering, and other features required for adoption by enterprises.

Why Change

The main reason for this change is the feedback we received from many small teams that the previous $295/month minimum price tag for the Team Plan was prohibitive and that a more gradual, usage-based pricing model would help them adopt Atlas in their organizations.

New Pricing

We’ve tried to keep the new pricing model as simple as possible. We have learned from our investors, advisors, and customers that a seat-based pricing model is less optimal as it disincentivizes the adoption of Atlas by people in roles with a lower-touch engagement. As such, we have made the new pricing model usage-based. Let’s break down how this is going to work.

Projects. The first dimension by which the new model works is the number of projects that you store in the Atlas Cloud schema registry. Currently, this is equal to the number of migration directories that you migrate push to Atlas Cloud, but soon we will also add support for schema push for declarative workflows.

Details:

  • Each project will cost $59 per month.
  • The Free Plan will include a single project free of charge.

Target Databases. Each project (migration directory) can be deployed to multiple target databases. This may be different environments (dev, staging, prod) or different tenants (for projects that manage separate databases per customer).

Details:

  • Each target database will cost $39 per month.
  • The Free Plan will include 2 target databases free of charge.
  • Whenever you purchase quota for an additional project, you will also get a bundled additional target database free of charge.

If pricing by target DB doesn't work for your particular use case, please reach out to us to discuss alternative solutions.

Additional Changes

  • Seats. The free plan will include 3 seats free of charge. Teams upgrading to business will receive 30 seats (regardless of how many projects and databases they use). This limit is supposed to allow as many people as needed to use Atlas Cloud features, but still impose some limit beyond which we expect teams to consider the Enterprise Plan.

  • Data Retention. Atlas users generate plenty of data which we store in our databases. To prevent it from becoming unsustainable for us to support free users over the long run, we are imposing a 30-day data retention limit on CI runs and deployment logs for free users. Business users get 90-day retention by default. If this becomes an issue for you, feel free to reach out to us and we will work something out.

  • Runs. Free Plan users can now report up to 100 CI Runs or Deployments (previously 500) per month in their cloud account. Business and Enterprise users can store an unlimited amount of runs.

Thanking Existing Users

As a way of saying thanks to existing early users who have trusted us to be part of their engineering infrastructure, we have worked out a few options for you to continue using Atlas without interruption. We will be reaching out to admins of these accounts personally to share the details.

This doesn’t work for me

If these changes cause an issue for you or you would like to discuss your specific pricing needs, please let me know personally via Email, Discord, Intercom, or Homing Pigeon 🙂.

-- Rotem and Ariel

· 10 min read

Hi everyone,

We are excited to share our latest release with you! Here's what's new:

  • Pre-migration Checks: Before migrating your schema, you can now add SQL checks that will be verified to help avoid risky migrations.
  • Schema Docs: Atlas lets you manage your database schema as code. One of the things we love most about code, is that because of its formal structure, it's possible to automatically generate documentation from it. With this release, we're introducing a new feature that lets you generate code-grade documentation for your database schema.
  • SQL Server Trigger Support: Atlas now supports managing triggers in SQL Server.
  • ClickHouse Materialized View Support: Atlas now supports managing materialized views in ClickHouse.

Let's dive in.

Pre-migration Checks

Atlas now supports the concept of pre-migration checks, where each migration version can include a list of assertions (predicates) that must evaluate to true before the migration is applied.

For example, before dropping a table, we may want to ensure it is empty so that no data is deleted, or we may check for the absence of duplicate values before adding a unique constraint to a table.

This is especially useful if we want to add our own specific logic to migration versions, and it helps to ensure that our database changes are safe.
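For instance, the duplicate-values case mentioned above might be asserted like this (the users table and email column are hypothetical):

```sql
-- checks.sql: must evaluate to true, i.e. "email" holds no duplicates,
-- before a migration adds a UNIQUE constraint on users(email)
SELECT NOT EXISTS (
  SELECT email FROM users GROUP BY email HAVING COUNT(*) > 1
);
```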

Cloud Directory

Pre-migration checks work for Cloud connected directories. Check out the introduction guide to get started with Atlas Cloud.

To add these checks, Atlas supports a text-based file archive to describe "migration plans". Unlike regular migration files, which mainly contain a list of DDL statements (with optional directives), Atlas txtar files (currently) support two file types: migration files and pre-execution check files.

The code below presents a simple example of a pre-migration check. The default checks file is named checks.sql, and the migration.sql file contains the actual DDLs to be executed on the database if the assertions pass.

20240201131900_drop_users.sql
-- atlas:txtar

-- checks.sql --
-- The assertion below must be evaluated to true. Hence, the "users" table must not contain any rows.
SELECT NOT EXISTS(SELECT * FROM users);

-- migration.sql --
-- The statement below will be executed only if the assertion above evaluates to true.
DROP TABLE users;

If the pre-execution checks pass, the migration will be applied, and Atlas will report the results.

atlas migrate apply --dir atlas://app --env prod

Check passed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM users);
-- ok (624.004µs)
-- migrating version 20240201131900
-> DROP TABLE users;
-- ok (5.412737ms)
-------------------------
-- 22.138088ms
-- 1 migration
-- 1 check
-- 1 sql statement

If the pre-execution checks fail, the migration will not be applied, and Atlas will exit with an error.

atlas migrate apply --dir atlas://app --env prod

Check failed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM internal_users);
-> SELECT NOT EXISTS(SELECT * FROM external_users);
-- ok (1.322842ms)
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM roles);
-> SELECT NOT EXISTS(SELECT * FROM user_roles);
2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false
-------------------------
-- 19.396779ms
-- 1 migration with errors
-- 2 checks ok, 2 failures
Error: 2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false
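The check-then-migrate flow above can be sketched in a few lines of Python against an in-memory SQLite database. This is only an illustration of the logic (evaluate every assertion, and run the migration only if all of them return true), not how Atlas executes checks internally:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")

# The same assertion and DDL as the txtar example above.
checks = ["SELECT NOT EXISTS(SELECT * FROM users)"]
migration = ["DROP TABLE users"]

def apply_with_checks(conn, checks, migration):
    # Every check must evaluate to true (1) before any DDL runs.
    for check in checks:
        (ok,) = conn.execute(check).fetchone()
        if not ok:
            raise RuntimeError(f'check assertion "{check}" returned false')
    for stmt in migration:
        conn.execute(stmt)

# The users table is empty, so the check passes and the table is dropped.
apply_with_checks(conn, checks, migration)
```

If any row existed in users, the assertion would return false and the migration would be aborted before the DROP, mirroring the failure output shown above.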

To learn more about how to use pre-migration checks, read the documentation here.

Schema Docs

One of the most surprising things we learned from working with teams on their Atlas journey is that many teams do not have a single source of truth for their database schema. As a result, it's impossible to maintain up-to-date documentation for the database schema, which is crucial for disseminating knowledge about the database across the team.

Atlas changes this by creating a workflow that begins with a single source of truth for the database schema - the desired state of the database, as defined in code. This is what enables Atlas to automatically plan migrations, detect drift (as we'll see below), and now, generate documentation.

How it works

Documentation is currently generated for the most recent version of your schema for migration directories that are pushed to Atlas Cloud. To generate docs for your schema, follow these steps:

  1. Make sure you have the most recent version of Atlas:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Login to Atlas Cloud using the CLI:

    atlas login

    If you do not already have a (free) Atlas Cloud account, follow the instructions to create one.

  3. Push your migrations to Atlas Cloud:

    atlas migrate push <dir name>

    Be sure to replace <dir name> with the name of the directory containing your migrations (e.g., app).

  4. Atlas will print a link to the overview page for your migration directory, e.g:

    https://gh.atlasgo.cloud/dirs/4294967296
  5. Click on "Doc" in the top tabs to view the documentation for your schema.

SQL Server Trigger Support

In version v0.17, we released trigger support for PostgreSQL, MySQL and SQLite. In this release, we have added support for SQL Server as well.

Triggers are a powerful feature of relational databases that allow you to run custom code when certain events occur on a table or a view. For example, you can use triggers to automatically update the amount of stock in your inventory when a new order is placed or to create an audit log of changes to a table. Using this event-based approach, you can implement complex business logic in your database, without having to write any additional code in your application.
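To make the audit-log use case mentioned above concrete, here is a minimal sketch using SQLite from Python. The products and products_audit tables are hypothetical, and SQLite's trigger syntax differs from the SQL Server syntax used later in this section; the point is only that the log is maintained by the database, with no application code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL NOT NULL);
CREATE TABLE products_audit (
    product_id INTEGER,
    old_price  REAL,
    new_price  REAL,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- Record every price change automatically.
CREATE TRIGGER products_price_audit
AFTER UPDATE OF price ON products
BEGIN
    INSERT INTO products_audit (product_id, old_price, new_price)
    VALUES (OLD.id, OLD.price, NEW.price);
END;
""")

conn.execute("INSERT INTO products (id, price) VALUES (1, 9.99)")
conn.execute("UPDATE products SET price = 12.49 WHERE id = 1")

# The trigger has already written the audit row: (1, 9.99, 12.49)
row = conn.execute(
    "SELECT product_id, old_price, new_price FROM products_audit"
).fetchone()
```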

Managing triggers as part of the software development lifecycle can be quite a challenge. Luckily, Atlas's database schema-as-code approach makes it easy to do!

BETA FEATURE

Triggers are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

Let's use Atlas to build a small chunk of a simple e-commerce application:

  1. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Make sure you are logged in to Atlas:

    atlas login
  3. Let's spin up a new SQL Server database using docker:

    docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 --name atlas-demo -d mcr.microsoft.com/mssql/server:latest
  4. Next, let's define and apply the base table for our application:

    schema.hcl
    schema "dbo" {
    }

    table "grades" {
      schema = schema.dbo
      column "student_id" {
        null = false
        type = bigint
      }
      column "course_id" {
        null = false
        type = bigint
      }
      column "grade" {
        null = false
        type = int
      }
      column "grade_status" {
        null = true
        type = varchar(10)
      }
      primary_key {
        columns = [column.student_id, column.course_id]
      }
    }

    The grades table represents a student's grade for a specific course. The grade_status column will remain null at first, and we will use a trigger to set it to either pass or fail based on the grade.

    Apply this schema on our local SQL Server instance using the Atlas CLI:

    atlas schema apply \
    --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
    --to "file://schema.hcl" \
    --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
    --auto-approve

    This command will apply the schema defined in schema.hcl to the local SQL Server instance. Notice the --auto-approve flag, which instructs Atlas to automatically apply the schema without prompting for confirmation.

  5. Now, let's define the logic to assign a grade_status using a TRIGGER. Append this definition to schema.hcl:

    schema.hcl
    trigger "after_grade_insert" {
      on = table.grades
      after {
        insert = true
      }
      as = <<-SQL
        BEGIN
          SET NOCOUNT ON;

          UPDATE grades
          SET grade_status = CASE
                WHEN inserted.grade >= 70 THEN 'Pass'
                ELSE 'Fail'
              END
          FROM grades
          INNER JOIN inserted
            ON grades.student_id = inserted.student_id
           AND grades.course_id = inserted.course_id;
        END
      SQL
    }

    We defined a TRIGGER called after_grade_insert. This trigger is executed after new rows are inserted into the grades table. It runs the SQL statement in its body, which updates the grade_status column to either 'Pass' or 'Fail' based on the grade.

    Apply the updated schema using the Atlas CLI:

    atlas schema apply \
    --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
    --to "file://schema.hcl" \
    --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
    --auto-approve

    Notice that Atlas automatically detects that we have added a new TRIGGER, and applies it to the database.

  6. Finally, let's test our application to see that it actually works. We can do this by populating our database with some students' grades. To do so, connect to the SQL Server container and open a sqlcmd session.

    docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995'

    Now that a sqlcmd session is open, we can populate the items:

    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 1, 87, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 2, 99, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (2, 2, 68, null);

    To exit the session, type QUIT.

    Now, let's check the grades table to see that the grade_status column was updated correctly:

     docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995' -Q "SELECT * FROM grades;"

    You should see the following output:

     student_id   course_id   grade   grade_status
     ----------   ---------   -----   ------------
              1           1      87   Pass
              1           2      99   Pass
              2           2      68   Fail

    (3 rows affected)

    Amazing! Our trigger automatically updated the grade_status for each of the rows.
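If you want to experiment with the same pass/fail rule without spinning up a SQL Server container, the trigger can be approximated in SQLite from Python. Note that SQLite's trigger syntax is row-level (NEW refers to the row just inserted) and differs from the T-SQL above; this sketch only reproduces the logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grades (
    student_id   INTEGER NOT NULL,
    course_id    INTEGER NOT NULL,
    grade        INTEGER NOT NULL,
    grade_status TEXT,
    PRIMARY KEY (student_id, course_id)
);
-- Fill in grade_status right after each insert.
CREATE TRIGGER after_grade_insert
AFTER INSERT ON grades
BEGIN
    UPDATE grades
    SET grade_status = CASE WHEN NEW.grade >= 70 THEN 'Pass' ELSE 'Fail' END
    WHERE student_id = NEW.student_id AND course_id = NEW.course_id;
END;
""")

# The same three rows as the sqlcmd session above.
conn.executemany(
    "INSERT INTO grades (student_id, course_id, grade) VALUES (?, ?, ?)",
    [(1, 1, 87), (1, 2, 99), (2, 2, 68)],
)
statuses = conn.execute(
    "SELECT student_id, course_id, grade_status FROM grades"
    " ORDER BY student_id, course_id"
).fetchall()
```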

ClickHouse Materialized View Support

A materialized view is a table-like structure that holds the results of a query. Unlike a regular view, the results of a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data.
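To illustrate the idea, here is a small Python sketch that emulates a materialized view in SQLite by storing a query's result in a real table and rebuilding it on demand. ClickHouse maintains its materialized views incrementally as data is inserted; the full rebuild here, and the src/dest names, are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO src VALUES (?, ?)", [(1, 10.0), (1, 5.0), (2, 7.5)])

def refresh_mat_view(conn):
    # Rebuild the stored result set from the underlying data.
    conn.executescript("""
    DROP TABLE IF EXISTS dest;
    CREATE TABLE dest AS SELECT id, SUM(amount) AS total FROM src GROUP BY id;
    """)

refresh_mat_view(conn)  # initial population of the stored results
conn.execute("INSERT INTO src VALUES (2, 2.5)")
refresh_mat_view(conn)  # refresh to reflect the new underlying data

totals = dict(conn.execute("SELECT id, total FROM dest").fetchall())
```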

LOGIN REQUIRED

Materialized views are currently available to logged-in users only. To use this feature, run:

atlas login

Let's see an example of how to write a materialized view in HCL for a ClickHouse database:

materialized "mat_view" {
  schema     = schema.public
  to         = table.dest
  as         = "SELECT * FROM table.src"
  depends_on = [table.src]
}

In the example above, the materialized view is created with a TO [db.]table clause, so the view takes on the same structure as the table or view specified in that clause.

The engine and primary_key attributes are required if the TO clause is not specified. In this syntax, populate can be set so that the view is filled with the existing data from the source table upon creation:

materialized "mat_view" {
  schema = schema.public
  engine = MergeTree
  column "id" {
    type = UInt32
  }
  column "name" {
    type = String
  }
  primary_key {
    columns = [column.id]
  }
  as         = "SELECT * FROM table.src"
  populate   = true
  depends_on = [table.src]
}
INFO

Note that modifying the materialized view structure after the initial creation is not supported by Atlas currently.

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.