
17 posts tagged with "migration"


Announcing v0.11.0: Manage database schemas with Kubernetes and Atlas

· 8 min read
Rotem Tamir
Building Atlas
TL;DR

You can now use the Atlas Kubernetes Operator to safely manage your database schemas with Atlas from within your Kubernetes cluster.

See an example

Introduction

Today, we are excited to announce the release of Atlas v0.11.0, which introduces the Atlas Kubernetes Operator. This release is a major milestone in our mission to make Atlas the most robust and modern way to manage your database schemas. With the Atlas Kubernetes Operator, you can now manage your database schemas with Atlas from within your Kubernetes cluster.

In this release, we also introduce a new concept to Atlas - "Diff Policies" - which allow you to customize the way Atlas plans database migrations for you. This concept is directly related to the Kubernetes Operator, and we will explain how below.

What are Kubernetes Operators?

Kubernetes has taken the cloud infrastructure world by storm mostly thanks to its declarative API. When working with Kubernetes, developers provide their cluster's desired configuration to the Kubernetes API, and Kubernetes is responsible for reconciling the actual state of the cluster with the desired state. This allows developers to focus on the desired state of their cluster, and let Kubernetes handle the complexities of how to get there.

This works out incredibly well for stateless components, such as containers, network configuration and access policies. The benefit of stateless components is that they can be replaced at any time, and Kubernetes can simply create a new instance of the component with the desired configuration. For stateful resources, such as databases, this is not the case. Throwing away a running database and creating a new one with the desired configuration is not an option.

For this reason, reconciling the desired state of a database with its actual state can be a complex task that requires a lot of domain knowledge. Kubernetes Operators were introduced to the Kubernetes ecosystem to help users manage complex stateful resources by codifying this type of domain knowledge into a Kubernetes controller.

What is the Atlas Kubernetes Operator?

The Atlas Kubernetes Operator is a Kubernetes controller that uses Atlas to manage your database schema. The Atlas Kubernetes Operator allows you to define the desired schema and apply it to your database using the Kubernetes API.

Declarative schema migrations

The Atlas Kubernetes Operator supports declarative migrations. In declarative migrations, the desired state of the database is defined by the user and the operator is responsible for reconciling the desired state with the actual state of the database (planning and executing CREATE, ALTER and DROP statements).

Diffing policies

One of the common objections to applying declarative workflows to databases is that there are often multiple ways to achieve the same desired state. For example, if you are running a Postgres database, you may want to add an index to a table. Depending on your circumstances, you may want to add this index with or without the CONCURRENTLY option. When using a declarative workflow, you supply where you want to go, but not how to get there.

To address this concern, we have introduced the concept of "diff policies" to Atlas. Diff policies allow you to customize the way Atlas plans database schema changes for you. For example, you can define a diff policy that will always add the CONCURRENTLY option to CREATE INDEX statements. You can also define a diff policy that will skip certain kinds of changes (for example DROP COLUMN) altogether.

Diff policies can be defined in the atlas.hcl file you use to configure Atlas. For example:

env "local" {
diff {
// By default, indexes are not created or dropped concurrently.
concurrent_index {
create = true
drop = true
}
}
}

Diff policies are especially valuable when using the Atlas Kubernetes Operator, as they allow you to customize and constrain the way the operator manages your database to account for your specific needs. We will see an example of this below.

Demo time!

Let's see the Atlas Kubernetes Operator in action. In this demo, we will use the Atlas Kubernetes Operator to manage a MySQL database running in a Kubernetes cluster.

The Atlas Kubernetes Operator is available as a Helm Chart. To install the chart with the release name atlas-operator:

helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator

After installing the operator, follow these steps to get started:

  1. Create a MySQL database and a secret with an Atlas URL to the database:
kubectl apply -f https://raw.githubusercontent.com/ariga/atlas-operator/65dce84761354d1766041c7f286b35cc24ffdddb/config/integration/databases/mysql.yaml

Result:

deployment.apps/mysql created
service/mysql created
secret/mysql-credentials created
  2. Create a file named schema.yaml containing an AtlasSchema resource to define the desired schema:
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-mysql
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
  schema:
    sql: |
      create table users (
        id int not null auto_increment,
        name varchar(255) not null,
        email varchar(255) unique not null,
        short_bio varchar(255) not null,
        primary key (id)
      );
  3. Apply the schema:
kubectl apply -f schema.yaml

Result:

atlasschema.db.atlasgo.io/atlasschema-mysql created
  4. Check that our table was created:
kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.users"

Result:

+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int          | NO   | PRI | NULL    | auto_increment |
| name      | varchar(255) | NO   |     | NULL    |                |
| email     | varchar(255) | NO   | UNI | NULL    |                |
| short_bio | varchar(255) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+

Hooray! We applied our desired schema to our target database.

Diff policies in action

Now let's see how we can use diffing policies to customize the way the operator manages our database. In this example, we will demonstrate how we can prevent the operator from dropping columns in our database. Modify the schema.yaml file:

```diff
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-mysql
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
+ policy:
+   diff:
+     skip:
+       drop_column: true
  schema:
    sql: |
      create table users (
        id int not null auto_increment,
        name varchar(255) not null,
        email varchar(255) unique not null,
-       short_bio varchar(255) not null,
        primary key (id)
      );
```

In the example above we added a policy section to our AtlasSchema resource. In this section, we defined a diff policy that will skip DROP COLUMN statements. In addition, we dropped the short_bio column from our schema. Let's apply the updated schema:

kubectl apply -f schema.yaml

Next, wait for the operator to reconcile the desired state with the actual state of the database:

kubectl wait --for=condition=Ready atlasschema/atlasschema-mysql

Finally, let's check that the short_bio column was not dropped. Run:

kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.users"

Result:

+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int          | NO   | PRI | NULL    | auto_increment |
| name      | varchar(255) | NO   |     | NULL    |                |
| email     | varchar(255) | NO   | UNI | NULL    |                |
| short_bio | varchar(255) | NO   |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+

As you can see, the short_bio column was not dropped. This is because we defined a diffing policy that skips DROP COLUMN statements.

Linting policies

An alternative way to prevent the operator from dropping columns is to use a linting policy. Linting policies allow you to define rules that will be used to validate the changes to the schema before they are applied to the database. Let's see how we can define a policy that prevents the operator from applying destructive changes to the schema. Edit the schema.yaml file:

```diff
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-mysql
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
  policy:
+   lint:
+     destructive:
+       error: true
-   diff:
-     skip:
-       drop_column: true
  schema:
    sql: |
      create table users (
        id int not null auto_increment,
        name varchar(255) not null,
        email varchar(255) unique not null,
        primary key (id)
      );
```

In the example above, we replaced the diff policy with a lint policy. In this policy, we defined a destructive rule that will cause the operator to fail if it detects a destructive change to the schema. Notice that the short_bio column is not present in the schema (we removed it in our previous change).

Let's apply the updated schema:

kubectl apply -f schema.yaml

Next, let's wait for the operator to reconcile the desired state with the actual state of the database:

kubectl wait --for=condition=Ready atlasschema/atlasschema-mysql --timeout 10s

Notice that this time, the operator failed to reconcile the desired state with the actual state of the database:

error: timed out waiting for the condition on atlasschemas/atlasschema-mysql

Let's check the reason for this failure:

kubectl get atlasschema atlasschema-mysql -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

Result:

destructive changes detected:
- Dropping non-virtual column "short_bio"

Hooray! We have successfully prevented the operator from applying destructive changes to our database.

Conclusion

In this post, we have presented the Atlas Operator and demonstrated how you can use it to manage your database schema. We also covered diffing and linting policies and showed how you can use them to customize the way the operator manages your database.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Manage database schemas with Terraform in plain SQL

· 9 min read
Rotem Tamir
Building Atlas
TL;DR

You can now use plain SQL to define the desired state of your database schema.

See an example

Earlier today, we released v0.5.0 of the Atlas Terraform Provider. This release includes two changes that, in my opinion, are a great improvement to the experience of working with the Atlas Provider.

In this post, I will discuss these two changes and how they can help you to manage your database schemas with Terraform:

  • Support for the docker:// driver for dev-databases.
  • Support for defining the desired state of your database schema in plain SQL (and any other schema loader supported by Atlas).

Improving the dev-database experience with the docker:// driver

Atlas relies on a connection to an empty database which it can use to perform various calculations and operations. This database is called the "dev-database", and it allows Atlas to do things like validate the correctness of user-provided code as well as normalize user-input to the way the database actually sees it.

In previous versions of the Atlas Provider, the dev-database needed to be provided by the user. This was a bit cumbersome, as the user needed to spin up a database (usually by running a local Docker container), and then provide the connection string to it in the dev_url field of the atlas_schema resource.

To improve this experience, we added support for the docker:// driver, which allows the user to provide only the database engine and version; Atlas will then spin up an ephemeral container running that engine and version. In addition, starting with this version, users may define the dev_url at the provider scope. For example:

provider "atlas" {
dev_url = "docker://mysql/8/myapp"
}

Defining the desired state of the database schema in plain SQL

In earlier versions of the Atlas Provider, the atlas_schema resource required the user to provide an Atlas HCL file which describes the desired state of the database schema. Many users found this syntax, which resembles Terraform's own, to be clean and concise. However, others disliked it and asked for a way to define the desired state in plain SQL.

To support this use-case, and many others, we have announced support for "schema loaders" - components that can be used to load the desired schema from many kinds of sources (such as plain SQL, an existing database, or the data-model of an ORM). To use this capability, users may use the atlas_schema data source, which accepts a url field that points to the desired schema. The scheme of this URL determines which schema loader will be used, for instance:

  • file://schema.sql - loads the schema from a local SQL file.
  • mysql://root:pass@localhost:3306/myapp - loads the schema from an existing MySQL database.
  • ent://service/ent/schema - loads the schema from the schema of an Ent project.

Managing database schemas in plain SQL using Terraform

info

You can find the final code for this example here.

In the following example, we will show how you can use Terraform and the Atlas provider to manage a MySQL database schema in plain SQL.

Let's start by creating a Terraform file named main.tf that installs the Atlas Terraform provider:

terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "0.5.0"
    }
  }
}

In addition to installing the Atlas provider, we will also spin up a local MySQL database using Docker which will represent our target database that we will manage with Terraform. In a real-world scenario, you would probably use a managed database service such as AWS RDS or Google Cloud SQL, but for the purpose of brevity, a local database will suffice. Run:

docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=myapp mysql:8

Now that we have a database to manage, we can define the desired state of the database schema. Add a file named "schema.sql" with the following content:

create table users (
  id int not null auto_increment primary key,
  name varchar(255) not null
);

Next, we will define an atlas_schema data source that will load the schema from the schema.sql file:

data "atlas_schema" "sql" {
src = "file://${path.module}/schema.sql"
}

Finally, we will define an atlas_schema resource that will manage the schema in the target database. In addition, we will configure the Atlas provider to use the docker:// driver to spin up a temporary database container:

provider "atlas" {
dev_url = "docker://mysql/8/myapp"
}

resource "atlas_schema" "mysql" {
url = "mysql://root:pass@localhost:3306/myapp"
hcl = data.atlas_schema.sql.hcl
}

Now that we have defined our Terraform configuration, we can run terraform init to install the required providers:

terraform init

This should output something like:

Initializing provider plugins...
- Finding ariga/atlas versions matching "0.5.0"...
- Installing ariga/atlas v0.5.0...
- Installed ariga/atlas v0.5.0 (signed by a HashiCorp partner, key ID 45441FCEAAC3770C)

# ...

Terraform has been successfully initialized!

Finally, we can run terraform apply to create the database schema:

terraform apply

Terraform will print the following plan:

data.atlas_schema.sql: Reading...
data.atlas_schema.sql: Read complete after 4s [id=qnUvTyupgQzivof5LYWDOQ]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create

Terraform will perform the following actions:

  # atlas_schema.myapp will be created
  + resource "atlas_schema" "myapp" {
      + hcl = <<-EOT
            table "hello" {
              schema = schema.myapp
              column "world" {
                null = true
                type = text
              }
              column "thoughts" {
                null = true
                type = varchar(100)
              }
            }
            schema "myapp" {
              charset = "utf8mb4"
              collate = "utf8mb4_0900_ai_ci"
            }
        EOT
      + id  = (known after apply)
      + url = (sensitive value)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

│ Warning: Atlas Plan

│ with atlas_schema.myapp,
│ on main.tf line 18, in resource "atlas_schema" "myapp":
│ 18: resource "atlas_schema" "myapp" {

│ The following SQL statements will be executed:


│ CREATE TABLE `myapp`.`hello` (`world` text NULL, `thoughts` varchar(100) NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci



Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value:

Notice that the plan shows the SQL statements that will be executed to create the database schema as well as our loaded schema in its HCL representation - this was done by the schema loader that was used to load the schema from the schema.sql file.

If you are happy with the plan, type yes and press enter to apply the changes:

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

atlas_schema.myapp: Modifying... [id=mysql:///myapp]
atlas_schema.myapp: Modifications complete after 4s [id=mysql:///myapp]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Hooray! We have successfully created a database schema using Terraform and the Atlas provider.

Terraform's true power comes from its declarative nature - we feed it a desired state and it will make sure that the actual state matches the desired state. Atlas is a perfect match for this paradigm. Let's see what happens if we change the schema in the schema.sql file and run terraform apply again:

Update the contents of schema.sql to the following:

create table `groups` (
  id int not null auto_increment primary key,
  name varchar(255) not null
);

create table `users` (
  id int not null auto_increment primary key,
  name varchar(255) not null,
  group_id int not null,
  foreign key (group_id) references `groups` (id)
);

Re-apply the changes:

terraform apply

Observe that our plan includes the addition of the groups table as well as the foreign key constraint on the users table:

data.atlas_schema.sql: Reading...
data.atlas_schema.sql: Read complete after 4s [id=Qhci62i6CFYRQ2CmUOjMeA]
atlas_schema.myapp: Refreshing state... [id=mysql:///myapp]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
~ update in-place

Terraform will perform the following actions:

  # atlas_schema.myapp will be updated in-place
  ~ resource "atlas_schema" "myapp" {
      ~ hcl = <<-EOT
          + table "groups" {
          +   schema = schema.myapp
          +   column "id" {
          +     null = false
          +     type = int
          +     auto_increment = true
          +   }
          +   column "name" {
          +     null = false
          +     type = varchar(255)
          +   }
          +   primary_key {
          +     columns = [column.id]
          +   }
          + }
            table "users" {
              schema = schema.myapp
              column "id" {
                null = false
                type = int
                auto_increment = true
              }
              column "name" {
                null = false
                type = varchar(255)
              }
          +   column "group_id" {
          +     null = false
          +     type = int
          +   }
              primary_key {
                columns = [column.id]
              }
          +   foreign_key "users_ibfk_1" {
          +     columns = [column.group_id]
          +     ref_columns = [table.groups.column.id]
          +     on_update = NO_ACTION
          +     on_delete = NO_ACTION
          +   }
          +   index "group_id" {
          +     columns = [column.group_id]
          +   }
            }
            schema "myapp" {
              charset = "utf8mb4"
              collate = "utf8mb4_0900_ai_ci"
            }
        EOT
        id = "mysql:///myapp"
        # (1 unchanged attribute hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

│ Warning: Atlas Plan

│ with atlas_schema.myapp,
│ on main.tf line 18, in resource "atlas_schema" "myapp":
│ 18: resource "atlas_schema" "myapp" {

│ The following SQL statements will be executed:


│ CREATE TABLE `myapp`.`groups` (`id` int NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`)) CHARSET
│ utf8mb4 COLLATE utf8mb4_0900_ai_ci
│ ALTER TABLE `myapp`.`users` ADD COLUMN `group_id` int NOT NULL, ADD INDEX `group_id` (`group_id`), ADD CONSTRAINT
│ `users_ibfk_1` FOREIGN KEY (`group_id`) REFERENCES `myapp`.`groups` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION



Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value:

After typing yes and pressing enter, Terraform will apply the changes, bringing the actual state of the database schema in line with the desired state:

atlas_schema.myapp: Modifying... [id=mysql:///myapp]
atlas_schema.myapp: Modifications complete after 4s [id=mysql:///myapp]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Conclusion

In this tutorial, we have seen how to use Terraform to manage the schema of a MySQL database using the Atlas provider with plain SQL. Using this workflow, teams can bridge the gap between their database schema management flows and their Terraform workflows, allowing for simpler and more reliable software delivery.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Troubleshooting schema migration issues with Atlas

· 9 min read
Rotem Tamir
Building Atlas

Database schema migrations are an essential part of software development, allowing teams to evolve and refine their application's data model over time. However, with schema changes, it's not always smooth sailing, and migration failures can be disruptive and challenging to resolve.

As much as we'd like to believe that our schema migrations will be executed flawlessly, the reality is that things can and do go wrong. Whether it's due to human error, unforeseen complications, or technical constraints, migration failures can be a significant source of frustration for development teams. Anticipating and preparing for these issues is essential to minimize their impact on your project.

In this blog post, we'll explore the common causes of migration failures and demonstrate how Atlas can help you quickly recover from such failures and easily get back on track.

Atlas: Optimized for MTTR

MTTR (mean-time-to-recovery) is a widely accepted metric for measuring the performance of teams delivering software. MTTR measures the mean time it takes to restore service when a production issue occurs. In the context of schema migrations, this would mean measuring how long it takes a team to detect, triage and resolve failures of schema migrations.

Contrary to existing tools, Atlas was designed with failure in mind and comes with some useful features to help your team get out of the mud if (and when) a schema migration fails. By utilizing these features, your team can greatly reduce MTTR for schema change related failures.

Why do migrations fail?

Let's begin our discussion of troubleshooting schema migration failures by mentioning the common causes for migration failures.

  1. Syntax errors - A surprisingly common cause for migration failures is syntax errors in the migration script: the migration tool tries to execute a statement and the database rejects it, causing the migration to fail. For example, adding an unnecessary comma at the end of a list:
mysql> create table users (   id int,   name varchar(255), );

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 1
  2. Schema dependent changes - Incorrect assumptions about the current state of the target database can lead to failed migrations when those assumptions are not met. For example, trying to create a table that was already created:
mysql> create table users (   id int,   name varchar(255) );
ERROR 1050 (42S01): Table 'users' already exists
  3. Data-dependent changes - If migrations manipulate data or modify constraints, the operation may fail depending on existing data in the target database. For example, adding a NOT NULL constraint to a column may fail if that column contains null values:
mysql> alter table users modify bio varchar(100) not null;
ERROR 1138 (22004): Invalid use of NULL value
  4. Lost connection - In some cases, and depending on the state of the target database and network connectivity, the client executing the migration commands against the database may lose the connection to the database, causing the migration to fail:
mysql> create table t1 (c int);
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on '0.0.0.0:3306' (61)
ERROR:
Can't connect to the server

Troubleshooting failures with Atlas

In the next section, we review the capabilities that Atlas provides operators to troubleshoot and resolve migration failures:

  • Status observability - how to understand the current state of the system after a failure.
  • Statement level granularity - how to recover from partial migration failures.
  • Declarative roll-forward - how to use Atlas to automatically create a recovery plan from a failure.

Status observability

The first step to solving any failure is being able to triage the issue at hand. To assist operators in diagnosing the current status of a target database, Atlas provides the migrate status command which can be used to understand the current situation. For instance, suppose we tried to run the following migration which contains a drop table statement for a non-existing table:

create table users (
  id int,
  name varchar(255)
);

drop table non_existing;

The migration will fail with the following error:

Error 1051 (42S02): Unknown table 'test.non_existing'

In many cases, the migration will not be applied from our workstation, so we may not have access to the execution logs. To check the migration status, we can run the migrate status command:

atlas migrate status -u mysql://root:pass@/test

Atlas will print:

Migration Status: PENDING
-- Current Version: 20230409114917 (1 statements applied)
-- Next Version: 20230409114917 (1 statements left)
-- Executed Files: 2 (last one partially)
-- Pending Files: 1

Last migration attempt had errors:
-- SQL: drop table non_existing;
-- ERROR: Error 1051 (42S02): Unknown table 'test.non_existing'

Observe that Atlas prints out some useful information:

  • Migration Status: PENDING - There are pending migrations.
  • -- Executed Files: 2 (last one partially) - the last file was partially applied.
  • The last migration failed with an error: ERROR: Error 1051 (42S02): Unknown table 'test.non_existing'

Statement-level granularity

As we saw in the example above, when a migration partially fails (only some statements succeed), our database schema is left in a limbo state of sorts: it is neither at the previous nor at the next version. To keep implementations simple, many migration tools have historically opted to treat migration files as opaque blobs, meaning they cannot provide any assistance in cases of partial failures.

Atlas, on the other hand, parses the migration files prior to executing them and can therefore provide information about failures on the statement (rather than the file) level. This is great for observability, but it is even more meaningful when trying to resolve issues.

Consider a situation similar to the one we presented above, where a migration fails halfway because of a constraint violation:

CREATE TABLE biographies (
  id INT AUTO_INCREMENT PRIMARY KEY,
  user_id INT NOT NULL,
  bio TEXT,
  FOREIGN KEY (user_id) REFERENCES users(id)
);

ALTER TABLE users modify bio varchar(100) not null;

In cases where the users.bio column already contains null values, this migration will partially fail:

  -- migrating version 20230409123337
    -> CREATE TABLE biographies (
         id INT AUTO_INCREMENT PRIMARY KEY,
         user_id INT NOT NULL,
         bio TEXT,
         FOREIGN KEY (user_id) REFERENCES users(id)
       );
    -> alter table users modify bio varchar(100) not null;

Error: Error 1138 (22004): Invalid use of NULL value

This can be solved by backfilling the table with non-null values in the relevant column. To do this, we can update our migration script to contain this UPDATE statement:

CREATE TABLE biographies (
  id INT AUTO_INCREMENT PRIMARY KEY,
  user_id INT NOT NULL,
  bio TEXT,
  FOREIGN KEY (user_id) REFERENCES users(id)
);

update users set bio='' where bio is null;

alter table users modify bio varchar(100) not null;

Here's the good part: because Atlas operates at the statement level and remembers that we've already successfully applied the first CREATE TABLE statement, it will resume from where it stopped. If we run:

atlas migrate apply -u mysql://root:pass@/test

Atlas runs to completion:

Migrating to version 20230409123337 from 20230409123337 (1 migrations in total):

-- migrating version 20230409123337
-> update users set bio='' where bio is null;
-> alter table users modify bio varchar(100) not null;
-- ok (48.440861ms)

-------------------------
-- 56.051791ms
-- 1 migrations
-- 2 sql statements

Declarative roll-forward

One of the things people experienced with existing tools immediately notice when they start working with Atlas is the absence of down migrations. Many migration tools expect users to plan a down migration parallel to every migration, which contains the statements needed to roll back the schema changes for a version. In theory, this is done to allow users to seamlessly return to a previous version in case things go wrong with the new one.

Our decision to omit down migrations from Atlas deserves its own lengthy discussion, but even limited to the examples we just showed, it is easy to demonstrate that down migrations may themselves fail in cases of partial failure, since they rely on the database being in the state where all statements executed successfully.

Instead of down migrations, Atlas provides an alternative strategy for reverting to a previous version. As you may know, one of Atlas's core features is its support for declarative migrations - the ability to automatically plan schema changes from the current state of a database to some desired state (similar to Terraform, but for databases).

Suppose we want to revert the database to a known version. This can happen in cases where the database was somehow manually modified in a way that's preventing us from making progress, or if we simply want to revert to a previous version. Using Atlas's schema apply, we can automatically plan this change:

atlas schema apply \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://migrations?version=targetVersion" \
  --dev-url "docker://mysql/8/example" \
  --exclude "atlas_schema_revisions"

Atlas plans the change for us:

-- Planned Changes:
-- Drop "biographies" table
DROP TABLE `biographies`;
-- Modify "users" table
ALTER TABLE `users` MODIFY COLUMN `bio` varchar(100) NULL;
Apply

Let's unpack this command:

  • url - is the URL of the target database that we want to modify.
  • to - describes the desired state, in this case the migration directory at file://migrations at version targetVersion - omitting this query parameter will set the desired state at the most recent revision.
  • dev-url - Atlas requires a connection to an empty dev-database, which it uses to normalize the desired schema. Using the docker:// URL scheme tells Atlas to spin up and use a fresh Docker container for this purpose.
  • exclude - tells Atlas to ignore atlas_schema_revisions, which is a metadata table maintained by Atlas and not described in the migration directory. Adding this argument prevents Atlas from accidentally producing a plan that drops this table.

Wrapping up

This blog post discussed the common causes of database schema migration failures and demonstrated how Atlas is equipped to handle them. Atlas offers features such as status observability, statement-level granularity, and declarative roll-forward capabilities, which enable development teams to efficiently recover from migration failures, reduce MTTR, and minimize disruption to their services.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

New Analyzers and Cloud Linting Configuration

· 5 min read
Jannik Clausen
Building Atlas

It's been only two weeks since the release of v0.10.0, but we are already back with more exciting features we want to share with you. Besides some more improvements and bug fixes, we added two new SQL analyzers to Atlas and the capability to have Atlas Cloud pick up a linting configuration file from your repository.

Concurrent Index Policy (Postgres)

One of the analyzers we added in this release is the Concurrent Index Policy Analyzer for Postgres. When creating or dropping indexes, Postgres locks the table against writes. Depending on the amount of data, this lock might be in place for much longer than just a few moments, up to several hours. For this reason, Postgres provides the CONCURRENTLY option, which causes the index to be built without holding a lock for the entire duration of the build. While it consumes more resources, this option is often preferred, and we are happy to share that the Atlas linting engine is now capable of detecting statements that create or drop an index without using the CONCURRENTLY option.

Naming Conventions Policy

Keeping names of database schema resources consistent is a common practice. Atlas now has an analyzer that can detect names that don't comply with a given naming policy and will warn you in such cases. You can configure either a global or a resource-specific policy. Read on to learn how to configure this analyzer, or have a look at the documentation.

Cloud Linting Configuration

In our last post, @a8m introduced the Community Preview for Atlas Cloud and showed how to connect a migration directory in your GitHub repository to Atlas Cloud with just a few clicks. Until now, the Atlas Cloud linting reports added to your PRs used the default linting configuration. In this post, I will show you how to configure linting using both of the new analyzers mentioned above.

When connecting a new migration directory, Atlas Cloud will scan the repository for an existing atlas.hcl file and suggest using that file for migration linting. If you don't have such a file, you can configure it manually as described below.

Enable the Analyzers

The Concurrent Index Analyzer will not report on creating or dropping indexes on tables that are created in the same file. Therefore, let's ensure we have a table ready that we can add an index to. Our first migration file can look something like this:

1.sql
CREATE TABLE users
(
  id         serial PRIMARY KEY,
  email      VARCHAR(50) UNIQUE NOT NULL,
  first_name VARCHAR(50) NOT NULL
);

To configure the Atlas Cloud linter to warn about creating or dropping indexes without the CONCURRENTLY option and ensure that all our schema resources are named with lowercase letters only, use the following configuration:

note

The below configuration will also work with the latest release of the Atlas CLI.

atlas.hcl
lint {
  concurrent_index {
    error = true # block PR on violations instead of warning
  }

  naming {
    error   = true
    match   = "^[a-z]+$"                  # regex to match lowercase letters
    message = "must be lowercase letters" # message to return if a violation is found
  }
}

See It In Action

What is left to demonstrate is a migration file violating the above policies. Take the example below: the index name contains an underscore (_), which is not permitted by the naming analyzer, and the index is created non-concurrently.

2.sql
CREATE INDEX email_idx ON users (email);

After adding the above atlas.hcl configuration file and the new migration, create a Pull Request on GitHub and observe Atlas Cloud doing its magic wizardry:

Wonderful! As you can see, Atlas Cloud found the two issues with this simple statement. Since the Concurrent Index Analyzer is configured to error on violations, merging the PR is blocked (if you have this policy set on GitHub).

In addition to the comment on the Pull Request, you can find more detailed reporting in the Checks tab or have a look at the file annotations Atlas adds to your changes:

What next?

I sure hope the new analyzers will be useful to you. In the upcoming weeks, we will be rolling out several new major features that we have been working on lately, including schema drift detection, managed migration deployments, and much more. If any of these features sound interesting to you, please do not hesitate to contact us.

We would love to hear from you on our Discord server ❤️.

Announcing Atlas v0.10: Cloud Community Preview

· 4 min read
Ariel Mashraki
Building Atlas

It's been two months since the release of v0.9.0, so we figured it's about time to release a new version and share with you what we've accomplished so far, as well as what's to come in the upcoming weeks. Besides the many improvements and bug fixes in v0.10.0, we added two major features to Atlas that I want to share with you: schema loaders and the Community Preview of Atlas Cloud.

Schema Loaders

In our previous post, we discussed our motivation for developing an infrastructure to load desired states from external sources (not just SQL and HCL), and we highlighted the importance of schema loaders. Today, I'm happy to share that we've made significant progress on this front. We started by creating a schema loader for the Ent framework, and with the release of v0.10.0, Ent users can now use their ent/schema package as the desired state in all the different Atlas commands.
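For readers unfamiliar with Ent, an ent/schema package is plain Go code. A minimal sketch of such a schema (the entity and fields below are hypothetical, shown only to illustrate what Atlas loads as the desired state) might look like this:

```go
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for a hypothetical User entity.
type User struct {
	ent.Schema
}

// Fields declares the columns that Atlas reads as the desired state.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
		field.String("email").Unique(),
	}
}
```

Pointing an ent:// URL at the directory containing this package lets Atlas treat it as the desired schema, as in the command below.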

Using the new integration, users can compare an ent/schema package with any other state, apply it to a database, generate migrations from it, and much more. For example:

atlas migrate diff create_users \
  --dir "file://migrations" \
  --to "ent://path/to/schema" \
  --dev-url "sqlite://dev?mode=memory&_fk=1"

I'm really eager to see this initiative come to fruition because it has proven to work well for the Ent community. We are now ready to expand support for additional frameworks and languages. In the upcoming weeks, you can expect to see additional integrations, such as GORM, Sequelize, and more. With these new superpowers, users will be able to manage all of their database schemas using a single tool - Atlas!

Atlas Cloud Community Preview

We are also super thrilled to announce the Community Preview of Atlas Cloud! Atlas Cloud is a cloud-based service that provides teams with an end-to-end solution for managing database schema changes. As part of the Community Preview, we are offering a free "Community" plan for all users which you can use to manage up to 5 migration directories for your team or personal projects.

One important feature that was recently added to Atlas is the ability to connect remote migration directories stored in GitHub to Atlas Cloud. This new functionality empowers users to easily audit and view their migration history and get migration linting checks on their PRs, such as destructive or backwards incompatible changes detection.

Let's walk through a simple guide on how to set it up to a project with just a few clicks:

1. Login to atlasgo.cloud and create a new workspace (organization) for your projects:

2. Once created, go to /dirs/configure and connect your migration directory stored in GitHub to Atlas Cloud:

3. After connecting your directory, you'll see an extensive overview of your migration history and the schema it presents:

4. From this point on, every change made to the migration directory will be reflected in Atlas Cloud. But what about the changes themselves? Here's where the magic happens. Once a directory is connected, any pull request that modifies it will be automatically checked and reviewed by Atlas!

Let's create a sample migration change, open a pull request, and see it in action:

Wonderful! However, that's not all. There is another detailed and visualized report available in Atlas Cloud that has been specifically created for this CI run. Go to the migration directory page, click on the CI Runs button to check it out.

A big thanks to @giautm, @masseelch and @yonidavidson for building this feature for Atlas!

What next?

Well, this is just the beginning of Atlas Cloud! In the upcoming weeks, we will be rolling out several new major features that we have been working on lately, including schema drift detection, managed migration deployments, and much more. If any of these features sound interesting to you, please do not hesitate to contact us.

We would love to hear from you on our Discord server ❤️.

Announcing Atlas v0.9.0: SQL as a First-Class Citizen

· 6 min read
Ariel Mashraki
Building Atlas

For a long time, one of the most common feature requests we've been getting from our users is the ability to manage their desired "schema state" using SQL. This is understandable, using Atlas DDL (HCL) can feel unfamiliar to some users, especially those who have never worked with Terraform before. For this reason, we're excited to announce the release of Atlas v0.9.0, which now fully supports SQL.

Schema as Code (SaC)

Atlas applies the common IaC concept of declarative resource management to database schemas. With Atlas, users do not need to plan schema changes themselves. Instead of figuring out the correct SQL statements to update their database schemas, users provide Atlas with a schema definition that describes their desired state, and Atlas generates a migration plan to move from the current state to the desired state defined by that schema.

Starting from v0.9.0, users can use SQL schema files (or a directory) containing CREATE and ALTER statements to describe their desired state. To demonstrate this, let's use this schema example with a single users table:

schema.sql
-- create table "users"
CREATE TABLE users(
  id int NOT NULL,
  name varchar(100) NULL,
  PRIMARY KEY(id)
);

Given this schema file, Atlas offers two workflows to update databases:

  • Declarative: Similar to Terraform, Atlas compares the current state of the database schema with the desired state defined by the SQL schema, and generates a migration plan to reach that state.
  • Versioned: Atlas compares the current state defined by the migrations directory to the desired state defined by the SQL schema, and writes a new migration script to the directory to update the database schema to the desired state.

In this blog post, we'll focus on explaining how SQL schemas can be used with the declarative workflow. For the sake of simplicity, let's assume we have an empty database that we want to apply the schema above to:

atlas schema apply \
  --url "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.sql" \
  --dev-url "docker://mysql/8/example"

FLAGS
  • --url - the database URL to apply the schema to.
  • --to - URLs describe the desired state: SQL or HCL schema definition, or a database URL.
  • --dev-url - a URL to a Dev Database that will be used to compute the diff.

Running the command above with the --auto-approve flag will apply the following changes:

-- Planned Changes:
-- Create "users" table
CREATE TABLE `users` (`id` int NOT NULL, `name` varchar NULL, PRIMARY KEY (`id`));

Hooray! We have successfully created the users table defined in our schema file. Let's inspect our database and ensure its schema was actually updated by the command above:

atlas schema inspect \
  --url "mysql://root:pass@localhost:3306/example" \
  --format "{{ sql . }}"

Excellent! As you can see, our database schema has been updated:

-- create "users" table
CREATE TABLE `users` (`id` int NOT NULL, `name` varchar NULL, PRIMARY KEY (`id`));

Now let's make our schema more interesting by adding a column to the users table and creating a blog_posts table with a foreign key that references users:

schema.sql
-- create table "users"
CREATE TABLE users(
  id int NOT NULL,
  name varchar(100) NULL,
  email varchar(50) NULL,
  PRIMARY KEY(id)
);

-- create table "blog_posts"
CREATE TABLE blog_posts(
  id int NOT NULL,
  title varchar(100) NULL,
  body text NULL,
  author_id int NULL,
  PRIMARY KEY(id),
  CONSTRAINT author_fk FOREIGN KEY(author_id) REFERENCES users(id)
);

Next, executing atlas schema apply again will update the database schema with the following changes:

atlas schema apply
-- Planned Changes:
-- Add column "email" to table: "users"
ALTER TABLE `users` ADD COLUMN `email` varchar NULL;
-- Create "blog_posts" table
CREATE TABLE `blog_posts` (`title` varchar NULL, `body` text NULL, `author_id` int NULL, `id` int NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION);

Boom! Atlas automatically calculates the difference between the current state of our database and the desired state defined by our schema file, and generates the necessary changes to migrate the database to the new state. We don't need to specify each individual migration – we simply tell Atlas what state we want the database to be in, and it handles the rest.

To see a full description of this generated migration plan, check out this diagram example in Atlas public playground:

Diff ERD

Diff SQL

Atlas Playground

As part of this version, we have released the Atlas playground where users can visualize their database schemas in an interactive way. Simply provide an SQL or HCL schema, or import one from an existing database, and in return get an ERD visualizing their entire data model.

Users can also compare two schemas with the Schema Diff button, and get the SQL statements necessary to migrate from one schema to the other - give it a try!

Blog ERD

A big thanks to @solomonme, @ronenlu and @masseelch for contributing this feature to Atlas!

Schema Loaders

What's next? In the near future, we plan to add an infrastructure for loading schemas from external sources. This will enable ORM maintainers to integrate with Atlas and provide their schema definitions as Atlas schemas. As a result, they can utilize the Atlas engine to diff schemas, plan and lint migrations, execute them on databases, and more.

The first ORM to integrate with Atlas will be Ent. Using this integration, Ent users will be able to generate Atlas schemas or migrations for their Ent projects with a single command:

atlas migrate diff create_users \
  --dir "file://migrations" \
  --to "ent://path/to/schema" \
  --dev-url "docker://<driver-name>"

Would you like to see other ORMs integrated with Atlas? Please, join our Discord server and let me know.

What next?

Have questions or feedback? Feel free to reach out on our Discord server.

Picking a database migration tool for Go projects in 2023

· 7 min read

Most software projects are backed by a database, that's widely accepted. The schema for this database almost always evolves over time: requirements change, features are added, and so the application's model of the world must evolve. When this model evolves, the database's schema must change as well. No one wants to (or should) connect to their production database and apply changes manually, which is why we need tools to manage schema changes. Most ORMs have basic support, but eventually projects tend to outgrow them. This is when projects reach to choose a schema migration tool.

Many such tools exist, and it's hard to know which to choose. My goal in this article is to present 3 popular choices for migration tools for Go projects to help you make this decision.

By way of introduction (and full disclosure): my name is Pedro Henrique, I'm a software engineer from Brazil, and I've been a contributing member of the Ent/Atlas community for quite a while. I really love open-source and think there's room for a diverse range of tools in our ecosystem, so I will do my best to provide you with an accurate, respectful, and fair comparison of the tools.

golang-migrate - Created: 2014 GitHub Stars: 10.3k
Golang migrate is one of the most famous tools for handling database migrations. It supports many database drivers and migration sources, and it takes a simple and direct approach to handling migrations.

Goose - Created: 2012 GitHub Stars: 3.2k
Goose is a solid option when choosing a migration tool. It supports the main database drivers, and its main features include support for migrations written in Go and more control over how migrations are applied.

Atlas - Created: 2021 GitHub Stars: 2.1k
Atlas is an open-source schema migration tool that supports a declarative workflow for schema migrations, making it a kind of "Terraform for databases". With Atlas, users can declare their desired schema and let Atlas automatically plan the migrations for them. In addition, Atlas supports classic versioned migration workflows and migration linting, and has a GitHub Actions integration.

Golang migrate

Golang migrate was initially created by Matt Kadenbach. In 2018 the project was handed over to Dale Hui, and today the project resides on the golang-migrate organization and is actively maintained, having 202 contributors.

One of Golang migrate's main strengths is the support for various database drivers. If your project uses a database driver that is not very popular, chances are that Golang migrate has a driver for it. For cases where your database is not supported, Golang migrate has a simple API for defining new database drivers. Databases supported by Golang migrate include: PostgreSQL, Redshift, Ql, Cassandra, SQLite, MySQL/MariaDB, Neo4j, MongoDB, Google Cloud Spanner, and more.

Another feature of Golang migrate is its support for different migration sources, for cases where your migration scripts reside in custom locations or even on remote servers.
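Beyond the CLI, golang-migrate can also be embedded in an application as a Go library. The following is a minimal sketch under assumed paths and credentials (the file://migrations directory and the Postgres URL are hypothetical):

```go
package main

import (
	"errors"
	"log"

	"github.com/golang-migrate/migrate/v4"
	// Side-effect imports register the database driver and migration source used below.
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// Hypothetical source and database URLs, for illustration only.
	m, err := migrate.New(
		"file://migrations",
		"postgres://user:pass@localhost:5432/app?sslmode=disable",
	)
	if err != nil {
		log.Fatalf("creating migrate instance: %v", err)
	}
	// Apply all pending up migrations; ErrNoChange means the schema is already up to date.
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		log.Fatalf("applying migrations: %v", err)
	}
}
```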

Goose

Goose takes a similar approach to Golang migrate. The project was initially created by Liam Staskawicz in 2012, and in 2016 Pressly created a fork that improved usability by adding support for migrations written in Go, handling of out-of-order migrations, and custom schemas for migration versioning. Today Goose has 80 contributors.

Goose supports only 7 database drivers, so if your project uses one of the main databases in the market, Goose should be a good fit. For migration sources, Goose allows only the filesystem; it's worth pointing out that with Go's embed package it is possible to embed the migration files in a custom binary. Goose's main difference from Golang migrate is its support for migrations written in Go, for cases where it is necessary to query the database during the migration. Goose also allows for different migration versioning schemes, improving on one key issue with Golang migrate.
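To illustrate the Go-based migrations mentioned above, here is a minimal sketch of a Goose migration written in Go; the file name (e.g. 00002_seed_users.go), table, and statements are hypothetical:

```go
package migrations

import (
	"database/sql"

	"github.com/pressly/goose/v3"
)

func init() {
	// Register the Go migration with Goose; it is executed within a transaction.
	goose.AddMigration(upSeedUsers, downSeedUsers)
}

// upSeedUsers can query and modify data, something plain SQL migration files
// cannot easily express.
func upSeedUsers(tx *sql.Tx) error {
	_, err := tx.Exec(`INSERT INTO users (name) VALUES ('admin')`)
	return err
}

func downSeedUsers(tx *sql.Tx) error {
	_, err := tx.Exec(`DELETE FROM users WHERE name = 'admin'`)
	return err
}
```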

Atlas

Atlas takes a completely different approach from Golang migrate and Goose. While both of those tools focus only on providing the means to run and maintain the migration directory, Atlas goes one step further and actually constructs a graph representing the different database entities from the migration directory contents, allowing for more complex scenarios and providing safety for migration operations.

Migrations in Atlas can be defined in two ways:

  • Versioned migrations are the classical style, where the migration contents are written by the developer using the database language.
  • Declarative migrations are more similar to Infrastructure-as-Code, where the schema is defined in a Terraform-like language and the migration commands are calculated based on the current and desired state of the database. It's also possible to use Atlas in a hybrid way that combines both styles, called Versioned Migration Authoring, where the schema is defined in the Atlas language but the Atlas engine is used to generate versioned migrations.

On top of Atlas's ability to load the migration directory as a graph of database entities, an entire infrastructure of static code analysis was built to provide warnings about dangerous or inefficient operations. This technique is called migration linting and can be integrated with the Atlas GitHub Action during CI.

In addition, if you would like to run your migrations using Terraform, Atlas has a Terraform provider as well.

Another key problem that Atlas solves is migration integrity, which becomes a huge issue when working with multiple branches that all make schema changes. Atlas solves this problem by using an integrity file. While we are on the topic of integrity, another key feature of Atlas is support for running migrations inside a transaction and, unlike Goose, acquiring a lock during the migration process, ensuring that only one migration happens at a time and that the migration order/integrity is respected. For cases where problems are found, Atlas makes the troubleshooting process easier by allowing schema inspections and dry runs and providing helpful links to common problems and solutions.

Feature comparison

| Feature | Golang migrate | Goose | Atlas |
|---------|----------------|-------|-------|
| Drivers supported | Main SQL and NoSQL databases | Main SQL databases | Main SQL databases |
| Migration sources | Local and remote SQL files | SQL and Go files | HCL and SQL files |
| Migrations type | Versioned | Versioned | Versioned and Declarative |
| Support for migrations in Go | No | Yes | Yes |
| Integrity checks | No | No | Yes |
| Migration out of order | No | Possible with hybrid versioning | Possible calculating the directory hash |
| Lock support | Yes | No | Yes |
| Use as CLI | Yes | Yes | Yes |
| Use as package | Yes | Yes | Partial support ¹ |
| Versioned Migration Authoring | No | No | Yes |
| Migration linting | No | No | Yes |
| GitHub Action | No | No | Yes |
| Terraform provider | No | No | Yes |
  • 1: Atlas provides a few packages related to database operations, but the use is limited to complex cases and there is no package that provides migration usage out of the box.

Wrapping up

In this post we saw the different strengths of each migration tool. We saw how Golang migrate supports a great variety of database drivers and migration sources, how Goose allows writing migrations in Go for complex migration scenarios, and how Atlas treats migrations as a completely different business, improving the safety of migration operations and bringing in concepts from other fields.