
· 11 min read
Rotem Tamir

Building a loveable migration tool

"I just love dealing with migrations!"

-No developer, ever.

Over the past three years, my co-founder Ariel and I (along with the rest of our team at Ariga) have been working on Atlas, a database schema-as-code tool. After many years of building software professionally, we've come to realize that one of the most stressful, tedious, and error-prone parts of building software is dealing with database migrations.

In case you are unfamiliar with the term, database migrations are the process of changing the structure of a database. When applications evolve, the database schema needs to evolve with them. This is commonly done by writing scripts that describe the changes to the database schema. These scripts are then executed in order to apply the changes to the database. This process has earned the name "migrations" and an infamous reputation among developers.
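For example, a migration script is typically just a versioned SQL file. A minimal, hypothetical one might look like this:

V2__add_email_to_users.sql
ALTER TABLE users ADD COLUMN email varchar(255);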

The secret to building a successful tool for developers is to be relentlessly focused on the user experience. When we started working on Atlas, we spent a long time researching the common issues developers face when dealing with migrations. We wanted to understand the root causes of these issues and design a tool that would solve them.

In this post, I'll share the top 5 usability issues we identified with migration tools and how we addressed them in Atlas.

Issue #1: The weirdest source of truth

It should be possible to provision our environments and build, test, and deploy our software in a fully automated fashion purely from information stored in version control.

Forsgren PhD, Nicole; Humble, Jez; Kim, Gene. Accelerate (p. 72). IT Revolution Press. Kindle Edition.

One of the most important principles that came from the DevOps movement is the idea that to achieve effective automation, you need to be able to build everything, deterministically, from a single source of truth.

This is especially important when it comes to databases! The database schema is a super critical part of our application and we better have a way to ensure it is compatible with the code we are deploying.

Classic migration tools (like Flyway and Liquibase) were, and remain to this day, an amazing step forward in automating schema changes, to the point that they can run as part of your CI/CD pipeline, satisfying the principle above.

But while technically correct, in terms of usability, they provide a very poor developer experience.

Consider this directory structure from a typical Flyway project:

.
├── V1__create_table1.sql
├── V2__create_table_second.sql
└── V3_1__add_comments.sql

Having a migration directory describe the current schema of a database is like describing your home by listing the steps needed to build it. It's correct, but it's not very useful.

Issue #1: Migrations as a source of truth

Problem: As a human, I can't understand the current state of the database by looking at the migration files. I need to replay them, in order, to figure out what the database looks like now.

Solution: Use a declarative schema-as-code approach to describe the current state of the database. Then, in every version of our project, we can read the current schema in plain text and understand it.
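To make this concrete, here is a minimal sketch of such a schema file in Atlas's HCL syntax (a hypothetical users table; plain SQL is supported as well):

schema.hcl
table "users" {
  schema = schema.example
  column "id" {
    null = false
    type = int
  }
  column "email" {
    null = true
    type = varchar(255)
  }
  primary_key {
    columns = [column.id]
  }
}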

Issue #2: Manual planning

Classic migration tools typically require you to write the migration scripts manually. This process can be tedious and error-prone, especially if you have developers who don't have a lot of experience operating databases.

Classic migration tools were created at a time when tech stacks were much simpler and less diverse. In addition, many organizations employed a DBA who could serve as a technical authority for database changes. Nowadays, developers are expected to "own" their databases, and DBAs are seriously outnumbered. This means that developers are expected to write migration scripts, even if they don't have a lot of experience with databases.

Doing something that is not in your area of expertise, especially when it is critical and risky like database migrations, can be a daunting and stressful task. Even if it is something you are good at, it still requires your attention and focus.

It's important to note that some ORMs (such as Django and Prisma) do stand out in their ability to generate migration scripts automatically (with some important limitations). But for the most part, developers are expected to write migration scripts manually.

Issue #2: Manual planning

Problem: Writing migration scripts can be stressful and error-prone, especially for developers who don't have a lot of experience with databases. They need to first know what the current state of the database is, then they need to know what the desired state is, and then they need to write a script that will take the database from the current state to the desired state. Some changes are trivial, but others require consideration and research.

Solution: Use a tool that can automatically generate migration scripts for you. This way, you can focus on the desired state of the database and let the tool figure out how to get there.
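In Atlas, for example, planning a migration boils down to diffing the desired state against the migration directory. A sketch, with placeholder names:

atlas migrate diff add_email \
  --dir "file://migrations" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/dev"

The tool computes the SQL needed to get from the current state to the desired one and writes it into a new migration file.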

Issue #3: Working in parallel

When a project succeeds to the point that it has many developers working on it, it's common for those developers to work on different features that require different changes to the database schema. This can lead to conflicts when they try to merge their changes together.

Classic migration tools don't provide a good way to detect and handle this situation. Because each migration script is created in its own file, common conflict detection tools like git can't help you. Your source control system can't tell you if two developers are working on the same table, or if they are adding columns with the same name.

For this reason, it's common to see teams surprised in production when conflicting migrations are applied. Even worse, in some cases migrations may be applied out of order or skipped entirely, leading to an inconsistent and unknown state of the database.

Issue #3: Working in parallel

Problem: Source control systems can't help you detect conflicts between migration scripts, which can lead to undetected issues when conflicting migrations are deployed to production. To overcome this, teams develop their own processes to coordinate changes to the database schema, which slow down development and are error-prone.

Solution: Use a tool that maintains a directory integrity file that will detect conflicting changes and force you to resolve them before you can proceed.
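In Atlas, this integrity file is named atlas.sum. It holds a hash of the directory and of each file in it, roughly like this (hash values elided):

atlas.sum
h1:<hash of the entire directory>
20240305160718.sql h1:<hash of file>
20240305171146.sql h1:<hash of file>

If two branches each add a migration and are then merged, the directory no longer matches the sum file, and the tool refuses to proceed until the conflict is resolved.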

Issue #4: Tracking partial failures

Virtually all classic migration tools maintain a metadata table on the target database that tracks which migrations have been applied. Using this metadata, the tool can determine which migrations need to be applied to bring the database up to date.
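For instance, golang-migrate (the tool behind the error quoted below) keeps a minimal metadata table; a simplified sketch, noting that the exact shape varies from tool to tool:

CREATE TABLE schema_migrations (
  version bigint  NOT NULL PRIMARY KEY, -- the version the database is at
  dirty   boolean NOT NULL              -- did the last migration finish cleanly?
);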

When migrations succeed, everything is great. But in my experience, when migrations fail, especially when they fail partially, it can be a nightmare to recover from.

Suppose you are planning a migration from version N of the database to version N+1. The migration script for version N+1 contains 10 changes, and the migration fails after applying the 5th change. If you use a database that supports fully transactional DDL (like Postgres), and all of your changes are transactional, then you are in luck - your migration tool can safely roll back the changes that were applied, and your database (and revisions table) will remain at version N.

But if you are using a database that doesn't support transactional DDL, or if your changes are not transactional, then you are in trouble. The migration tool can't roll back the changes that were applied, and your database will be in a state somewhere between version N and version N+1. None of the migration tools that I know of capture this interim state, so you are left with a database in an unknown state and a revision table that is out of sync with the actual state of the database.

The most viewed question about golang-migrate on StackOverflow is about this error:

Dirty database version 2. Fix and force version.

The answer explains:

Before a migration runs, each database sets a dirty flag. Execution stops if a migration fails and the dirty state persists, which prevents attempts to run more migrations on top of a failed migration.

And the solution?

After you clean up your database you can also open the schema_migrations table and change the Dirty flag and roll back the version number to the last migration that was successfully applied.
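In other words, recovery amounts to manual SQL surgery on the metadata table, along these lines (a hypothetical fix for golang-migrate's default schema_migrations table):

-- After manually cleaning up the partially applied changes:
UPDATE schema_migrations SET version = 1, dirty = false;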

Issue #4: Tracking partial failures

Problem: Classic migration tools don't handle partial failures well. When a migration fails, especially when it fails partially, it can leave your database in an unknown state with the revision table out of sync with the actual state of the database. Resolving this issue requires manual handling of the database which is dangerous and error-prone.

Solution: Use a tool that natively supports transactions where possible, and uses statement-level granularity to keep track of applied changes.

Issue #5: Pre-planned rollbacks

One of the most curious things about classic migration tools is the prevalence of the down migration. The idea is that whenever you write a migration script to take the database from version N to version N+1, you should also write a down migration that will take the database from version N+1 back to version N.

Why is this curious? Because after interviewing many developers across many organizations in virtually every industry, I have learned that in practice nobody uses down migrations, especially in production.

We have recently written extensively on this topic, but the gist is that down migrations are never used because:

  • They are naive. Pre-planned rollbacks assume that all statements in the up migration have been applied successfully. In practice, rolling back a version happens when things did not work as planned, making the pre-planned rollback obsolete.

  • They are destructive. If you successfully rolled out a migration that added a column to a table, and then decided to revert it, its inverse operation (DROP COLUMN) does not merely remove the column. It deletes all the data in that column. Re-applying the migration would not bring back the data, as it was lost when the column was dropped.

  • They are incompatible with broader rollback mechanisms. In theory, rolling back a deployment should be as simple as deploying the previous version of the application. When it comes to versions of our application code, this works perfectly. We pull the container image that corresponds to the previous version, and we deploy it.

    But what about the database? When we pull artifacts from a previous version, they do not contain the down files that are needed to revert the database changes back to the necessary schema - they were only created in a future commit!

Issue #5: Pre-planned rollbacks

Problem: Pre-planned rollbacks ("down migrations") are never used in practice. They are naive, destructive, and incompatible with broader rollback mechanisms. This leads to a false sense of security which is unraveled when a rollback is actually needed and the user realizes that they will need to handle it manually.

Solution: Use a tool that supports dynamically-planned, safe rollbacks that can revert the database to a previous state without data loss even in cases of partial failures.

Try Atlas today

I believe these usability issues should not be taken lightly. Your team's ability to move fast, refactor, and respond to changing requirements is limited by the agility with which you can evolve your database schema. One of the most interesting things we see with teams that adopt Atlas is watching them move from occasional, dreaded, avoided-at-all-costs migrations to planning and deploying hundreds of schema changes a year. This means that they can move faster, respond to customer feedback more quickly, and innovate more effectively.

To try Atlas, head over to the Getting Started guide today!

Wrapping up

In this article, we have discussed the top 5 usability issues with migration tools today and hinted at how modern migration tools (like Atlas) can address them.

As always, we would love to hear your feedback and suggestions on the Atlas Discord server.

· 16 min read
Ariel Mashraki

TL;DR

Ever since my first job as a junior engineer, the seniors on my team told me that whenever I make a schema change I must write the corresponding "down migration", so it can be reverted at a later time if needed. But what if that advice, while well-intentioned, deserves a second look?

Today, I want to argue that contrary to popular belief, down migration files are actually a bad idea and should be actively avoided.

In the final section, I'll introduce an alternative that may sound completely contradictory: the new migrate down command. I will explain the thought process behind its creation and show examples of how to use it.

Background

Since the beginning of my career, I have worked in teams where, whenever it came to database migrations, we were writing "down files" (ending with the .down.sql file extension). This was considered good practice and an example of how a "well-organized project should be."

Over the years, as my career shifted to focus mainly on infrastructure and database tooling in large software projects (at companies like Meta), I had the opportunity to question this practice and the reasoning behind it.

Down migrations were an odd thing. In my entire career, working on projects with thousands of down files, I never applied one in a real environment. As simple as that: not even once.

Furthermore, since we have started Atlas and to this very day, we have interviewed countless software engineers from virtually every industry. In all of these interviews, we have only met with a single team that routinely applied down files in production (and even they were not happy with how it worked).

Why is that? Why is it that down files are so popular, yet so rarely used? Let's dive in.

Down migrations are the naively optimistic plan for a grim and unexpected world

Down migrations are supposed to be the "undo" counterpart of the "up" migration. Why do "undo" buttons exist? Because mistakes happen, things fail, and then we want a way to quickly and safely revert them. Database migrations are considered something we should do with caution, they are super risky! So, it makes sense to have a plan for reverting them, right?

But consider this: when we write a down file, we are essentially writing a script that will be executed in the future to revert the changes we are about to make. This script is written before the changes are applied, and it is based on the assumption that the changes will be applied correctly. But what if they are not?

When do we need to revert a migration? When it fails. But if it fails, it means that the database might be in an unknown state. It is quite likely that the database is not in the state that the down file expects it to be. For example, if the "up" migration was supposed to add two columns, the down file would be written to remove these two columns. But what if the migration was partially applied and only one column was added? Running the down file would fail, and we would be stuck in an unknown state.
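A minimal sketch of this failure mode, with hypothetical file names and columns:

20240101_add_contact_info.up.sql
ALTER TABLE users ADD COLUMN phone varchar(20);
ALTER TABLE users ADD COLUMN address varchar(255); -- fails midway

20240101_add_contact_info.down.sql
ALTER TABLE users DROP COLUMN address; -- fails: this column was never added
ALTER TABLE users DROP COLUMN phone;

On a database without transactional DDL, the up migration leaves us with phone added and address missing; the down file then fails on its first statement, and we are stuck between versions.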

Rolling back additive changes is a destructive operation

When you are working on a local database, without real traffic, having the up/down mechanism for migrations might feel like hitting Undo and Redo in your favorite text editor. But in a real environment, it is not the case.

If you successfully rolled out a migration that added a column to a table, and then decided to revert it, its inverse operation (DROP COLUMN) does not merely remove the column. It deletes all the data in that column. Re-applying the migration would not bring back the data, as it was lost when the column was dropped.

For this reason, teams that want to temporarily deploy a previous version of the application usually do not revert the database changes, because doing so would result in data loss for their users. Instead, they need to assess the situation on the ground and figure out some other way to handle it.

Down migrations are incompatible with modern deployment practices

Many modern deployment practices like Continuous Delivery (CD) and GitOps advocate for the software delivery process to be automated and repeatable. This means that the deployment process should be deterministic and should not require manual intervention. A common way of doing this is to have a pipeline that receives a commit, and then automatically deploys the build artifacts from that commit to the target environment.

As it is very rare to encounter a project with a 0% change failure rate, rolling back a deployment is a common scenario.

In theory, rolling back a deployment should be as simple as deploying the previous version of the application. When it comes to versions of our application code, this works perfectly. We pull the container image that corresponds to the previous version, and we deploy it.

But what about the database? When we pull artifacts from a previous version, they do not contain the down files that are needed to revert the database changes back to the necessary schema - they were only created in a future commit!

For this reason, rollbacks to versions that require reverting database changes are usually done manually, going against the efforts to automate the deployment process by modern deployment practices.

How do teams work around this?

In previous companies I worked for, we faced the same challenges. The tools we used to manage our database migrations advocated for down migrations, but we never used them. Instead, we had to develop some practices to support a safe and automated way of deploying database changes. Here are some of the practices we used:

Migration Rollbacks

When we worked with PostgreSQL, we always tried to make migrations transactional, and made sure to isolate the DDLs that prevent this, like CREATE INDEX CONCURRENTLY, into separate migrations. If a deployment failed, for instance due to a data-dependent change, the entire migration was rolled back and the application was not promoted to the next version. By doing this, we avoided the need to run down migrations, as the database was left in the same state as it was before the deployment.

Non-transactional DDLs

When we worked with MySQL, which I really like as a database but hate when it comes to migrations, things were more challenging. Since MySQL does not support transactional DDL, failures were more complex to handle. If a migration contained more than one DDL and unexpectedly failed in the middle, because of a constraint violation or another error, we were stuck in an intermediate state that couldn't be automatically reverted by applying a "revert file".

Most of the time, it required special handling and expertise in the data and product. We mainly preferred fixing the data and moving forward rather than dropping or altering the changes that were applied - which was also impossible if the migration introduced destructive changes (e.g., DROP commands).

Making changes Backwards Compatible

A common practice in schema migrations is to make them backwards compatible (BC). We stuck to this approach, and also made it the default behavior in Ent. When schema changes are BC, applying them before starting a deployment should not affect older instances of the app, and they should continue to work without any issues (in rolling deployments, there is a period where two versions of the app are running at the same time).

When there is a need to revert a deployment, the previous version of the app remains fully functional without any issues - if you are an Ent user, this is one of the reasons we avoid SELECT * in Ent. Using SELECT * can also break BC for additive changes, like adding a new column, as the application expects to retrieve N columns but unexpectedly receives N+1.
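To make the SELECT * hazard concrete, consider this sketch with a hypothetical users table:

-- Version N of the app scans rows positionally, expecting (id, name):
SELECT * FROM users;
-- After an additive migration, the same query returns (id, name, email),
-- and the still-running version N instances break.
-- Selecting columns explicitly keeps older versions working:
SELECT id, name FROM users;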

Deciding Atlas would not support down migrations

When we started Atlas, we had the opportunity to design a new tool from scratch. Seeing as "down files" never helped us solve failures in production, from the very beginning of Atlas, Rotem and I agreed that down files should not be generated - except for cases where users use Atlas to generate migrations for other tools that expect these files, such as Flyway or golang-migrate.

Listening to community feedback

Immediately after Atlas' initial release some two years ago, we started receiving feedback from the community that put this decision in question. The main questions were: "Why doesn't Atlas support down migrations?" and "How do I revert local changes?".

Whenever the opportunity came to engage in such discussions, we eagerly participated and even set up calls to better understand the use cases. The feedback and the motivation behind these questions were mainly:

  1. It is challenging to experiment with local changes without some way to revert them.
  2. There is a need to reset dev, staging or test-like environments to a specific schema version.

Declarative Roll-forward

Considering this feedback and the use cases, we went back to the drawing board. We came up with an approach that was primarily about improving developer ergonomics and was in line with the declarative approach that we were advocating for with Atlas. We named this approach "declarative roll-forward".

Although it was not a "down migration" in the traditional sense, it helped to revert applied migrations in an automated way. The concept is based on a three-step process:

  1. Use atlas schema apply to plan a declarative migration, using a target revision as the desired state:

    atlas schema apply \
      --url "mysql://root:pass@localhost:3306/example" \
      --to "file://migrations?version=20220925094437" \
      --dev-url "docker://mysql/8/example" \
      --exclude "atlas_schema_revisions"

    This step requires excluding the atlas_schema_revisions table, which tracks the applied migrations, to avoid deleting it when reverting the schema.

  2. Review the generated plan and apply it to the database.

  3. Use the atlas migrate set command to update the revisions table to the desired version:

    atlas migrate set 20220925094437 \
      --url "mysql://root:pass@localhost:3306/example" \
      --dir "file://migrations"

This worked for the defined use cases. However, we felt that our workaround was a bit clunky as it required a three-step process to achieve the result. We agreed to revisit this decision in the future.

Revisiting the down migrations

In recent months, the question of down migrations was raised again by a few of our customers, and we dove into it again with them. I always try to approach these discussions with an open mind, and listen to the different points of view and use cases that I personally haven't encountered before.

Our discussions highlighted the need for a more elegant and automated way to perform deployment rollbacks in remote environments. The solution should address situations where applied migrations need to be reverted, regardless of their success, failure, or partial application, which could leave the database in an unknown state.

The solution needs to be automated, correct, and reviewable, as it could involve data deletion. It can't be "down files": although their generation can be automated by Atlas and reviewed at the PR stage, they cannot guarantee correctness when applied to the database at runtime.

After weeks of design and experimentation, we introduced a new command to Atlas named migrate down.

Introducing: migrate down

The atlas migrate down command allows reverting applied migrations. Unlike the traditional approach, where down files are "pre-planned", Atlas computes a migration plan based on the current state of the database. Atlas reverts previously applied migrations and executes them until the desired version is reached, regardless of the state of the latest applied migration — whether it succeeded, failed, or was partially applied and left the database in an unknown version.

By default, Atlas generates and executes a set of pre-migration checks to ensure the computed plan does not introduce data deletion. Users can review the plan and execute the checks before the plan is applied to the database by using the --dry-run flag or the Cloud as described below. Let's see it in action on local databases:

Reverting locally applied migrations

Assume a migration file named 20240305171146.sql was the last one applied to the database and needs to be reverted. Before deleting it, run atlas migrate down to revert the last applied migration:

atlas migrate down \
  --dir "file://migrations" \
  --url "mysql://root:pass@localhost:3306/example" \
  --dev-url "docker://mysql/8/dev"
Migrating down from version 20240305171146 to 20240305160718 (1 migration in total):

-- checks before reverting version 20240305171146
-> SELECT NOT EXISTS (SELECT 1 FROM `logs`) AS `is_empty`
-- ok (50.472µs)

-- reverting version 20240305171146
-> DROP TABLE `logs`
-- ok (53.245µs)

-------------------------
-- 57.097µs
-- 1 migration
-- 1 sql statement

Notice two important things in the output:

  1. Atlas automatically generated a migration plan to revert the applied migration 20240305171146.sql.
  2. Before executing the plan, Atlas ran a pre-migration check to ensure the plan does not introduce data deletion.

After downgrading your database to the desired version, you can safely delete the migration file 20240305171146.sql from the migration directory by running atlas migrate rm 20240305171146.

Then, you can generate a new migration version using the atlas migrate diff command with the optional --edit flag to open the generated file in your default editor.
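For instance (directory and schema file names are placeholders):

atlas migrate diff add_logs \
  --dir "file://migrations" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/dev" \
  --edit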

For local development, the command met our expectations. It is automated, correct (in the sense that it undoes only the reverted files), and reviewable using the --dry-run flag or the Cloud. But what about real environments?

Reverting real environments

For real environments, we're introducing another feature in Atlas Cloud today: the ability to review and approve changes for specific projects and commands. In practice, this means if we trigger a workflow that reverts schema changes in real environments, we can configure Atlas to wait for approval from one or more reviewers.

Here's what it looks like:

Review Required

With this new feature, down migrations are reviewable. But what about their safety and automation? As mentioned above, non-transactional DDLs can leave us in real trouble when they fail, potentially keeping the database in an unknown state that is hard to recover from - it takes time and requires caution. However, this is true not only for applied (up) migrations but also for their inverse: down migrations. If the database we operate on does not support transactional DDLs, and we fail in the middle of the execution, we are in trouble.

For this reason, when Atlas generates a down migration plan, it considers the database (and its version) and the necessary changes. If a transactional plan that is executable as a single unit can be created, Atlas will opt for this approach. If not, Atlas reverts the applied statements one at a time, ensuring that the database is not left in an unknown state in case a failure occurs midway. If we fail for any reason during the migration, we can rerun Atlas to continue from where it failed. Let's explain this with an example:

Suppose we want to revert these two versions:

migrations/20240329000000.sql
ALTER TABLE users DROP COLUMN account_name;

migrations/20240328000000.sql
ALTER TABLE users ADD COLUMN account_id int;
ALTER TABLE accounts ADD COLUMN plan_id int;

Let's see how Atlas handles this for databases that support transactional DDLs, like PostgreSQL, and those that don't, like MySQL:

  • For PostgreSQL, Atlas starts a transaction and checks that account_id and plan_id do not contain data before they are dropped. Then, Atlas applies a single ALTER statement on the users table to add back the account_name column and drop the account_id column, followed by another ALTER statement to drop the plan_id column from the accounts table. If any of these statements fail, the transaction is rolled back and the database is left in the same state as before. If we succeed, the revisions table is updated and the transaction is committed.

  • For MySQL, we can't execute the entire plan as a single unit, and the same plan cannot be reused: if we failed in the middle, the intermediate state would not represent any version of the database or the migration directory. Thus, when migrating down, Atlas first applies the ALTER statement to undo 20240329000000 and updates the revisions table. Then, it undoes 20240328000000 statement by statement, updating the revisions table after each successful step. If we fail in the middle, we can re-run Atlas to continue from where it failed. A rough sketch of this plan appears below.
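As a sketch, the statement-by-statement plan for MySQL might look like this (assuming account_name was previously a varchar(255); the real plan is computed from the migration directory):

-- revert 20240329000000, then update the revisions table
ALTER TABLE users ADD COLUMN account_name varchar(255);
-- revert 20240328000000, updating the revisions table after each statement
ALTER TABLE accounts DROP COLUMN plan_id;
ALTER TABLE users DROP COLUMN account_id;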

What we achieved with the new migrate down command, is a safer and more automated way to revert applied migrations.

Down options

By default, atlas migrate down reverts the last applied file. However, you can pass the number of migrations to revert as an argument, or a target version or tag as a flag. For instance, atlas migrate down 2 will revert up to 2 applied migrations, while atlas migrate down --to-tag 297cc2c will undo all migrations applied after the state of the migration directory at this tag.

GitHub Actions Integration

In addition to the CLI, we also added an integration with GitHub Actions. If you have already connected your project to the Schema Registry and use GitHub Actions, you can set up a workflow that gets a commit and triggers Atlas to migrate down the database to the version defined by the commit. The workflow will wait for approval and then apply the plan to the database once approved. For more info, see the Down Action documentation.

Atlas GitHub Action

Wrapping up

In retrospect, I'm glad we did not implement the traditional approach in the first place. When meeting with users, we listened to their problems and focused on the outcomes they expected rather than the features they requested. This helped us better understand the problem space instead of focusing the discussion on the implementation. The result we came up with is elegant, probably not perfect (yet), but it successfully avoided the issues that bother me most about pre-planned files.

What's next? We're opening it for GA today and invite you to share your feedback and suggestions for improvements on our Discord server.

· 7 min read

Hi everyone,

It's been a few weeks since our last version announcement and today I'm happy to share with you v0.20, which includes some big changes and exciting features:

  • New Pricing Model - As we announced earlier this month, beginning March 15th the new pricing model took effect. The new pricing is usage-based, offering you more flexibility and cost efficiency. Read about what prompted this change and view the new pricing plans here.
  • Django ORM Integration - Atlas now supports Django! Django, the popular Python web framework, ships with its own ORM. Developers using Django can now use Atlas to automatically plan schema migrations based on the desired state of their schema, instead of crafting them by hand.
  • Support for PostgreSQL Extensions - Atlas now supports installing and managing PostgreSQL extensions.
  • Dashboards in the Cloud - The dashboard (previously the 'Projects' page) got a whole new look in Atlas Cloud. Now you can view the state of your projects and environments at a glance.
  • SQL Server is out of Beta - SQL Server is officially out of Beta! Along with this official support, we have included some new features:
    • User-Defined Types support for SQL Server - Atlas now supports two User-Defined Types: alias types and table types.
    • Azure Active Directory (AAD) Authentication for SQL Server - Connect to your SQL Server database using AAD Authentication.

Let’s dive in!

New Pricing Model

As of March 15th, there is a new pricing model for Atlas users. This change is a result of feedback we received from many teams that the previous $295/month minimum was prohibitive, and a gradual, usage-based pricing model would help them adopt Atlas in their organizations.

You can read the full reasoning for the change and a breakdown of the new pricing in this blog post.

Django ORM Integration

Django is the most popular web framework in the Python community. It includes a built-in ORM which allows users to describe their data model using Python classes. Migrations are then created using the makemigrations command and applied to the database using the migrate command.
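For reference, that flow in a Django project looks like this:

python manage.py makemigrations
python manage.py migrate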

Among the many ORMs available in our industry, Django's automatic migration tool is one of the most powerful and robust. It can handle a wide range of schema changes; however, having been created in 2014, a very different era in software engineering, it naturally has some limitations.

Some of the limitations of Django's migration system include:

  1. Database Features - Because it was created to provide interoperability across database engines, Django's migration system is centered around the "lowest common denominator" of database features.

  2. Ensuring Migration Safety - Migrations are a risky business. If you're not careful, you can easily cause data loss or a production outage. Django's migration system does not provide a native way to ensure that a migration is safe to apply.

  3. Modern Deployments - Django does not provide native integration with modern deployment practices such as GitOps or Infrastructure-as-Code.

Atlas, on the other hand, lets you manage your Django applications using the Database Schema-as-Code paradigm. This means that you can use Atlas to automatically plan schema migrations for your Django project, and then apply them to your database.

Read the full guide to set up Atlas for your Django project.

Support for PostgreSQL Extensions

Postgres extensions are add-on modules that enhance the functionality of the database by introducing new objects, such as functions, data types, operators, and more.

The support for extensions has been highly requested, so we are excited to announce that they are finally available!

To load an extension, add the extension block to your schema file. For example, adding PostGIS would look similar to:

schema.hcl
extension "postgis" {
schema = schema.public
version = "3.4.1"
comment = "PostGIS geometry and geography spatial types and functions"
}

Read more about configuring extensions in your schema here.

Dashboards in the Cloud

Atlas Cloud has a new and improved dashboard view!

When working with multiple databases, environments, or even projects - it becomes increasingly difficult to track and manage the state of each of these components. With Atlas Cloud, we aim to provide a single source of truth, allowing you to get a clear overview of each schema, database, environment, deployment and their respective statuses.

project-dashboard

Once you push your migration directory to the schema registry, you will be able to see a detailed dashboard like the one shown above.

Let’s break down what we see:

  • The usage calendar shows when changes are made to your migration directory via the migrate push command in CI.

  • The databases list shows the state of your target databases. This list will be populated once you have set up deployments for your migration directory. The state of each database can be one of the following:

    • Synced - the database is at the same version as the latest version of your migration directory schema.
    • Failed - the last deployment has failed on this database.
    • Pending - the database is not up to date with the latest version of your migration directory schema.

This page can also be viewed per environment, giving you a comprehensive list of the status of each database in each environment.

project-envs

SQL Server out of Beta

We are proud to announce that SQL Server is officially supported by Atlas! Since our release of SQL Server in Beta last August, our team has been working hard to refine and stabilize its performance.

In addition, we have added two new capabilities to the SQL Server driver.

User-Defined Types Support

In SQL Server, user-defined types (UDTs) are a way to create custom data types that group together existing data types. Atlas now supports alias types and table types.

Alias Types

Alias types allow you to create a custom data type, which can then make your code more readable and maintainable.

For example, you might want to create an alias type email_address for the VARCHAR(100) data type. Instead of repeating VARCHAR(100) throughout your schema, you can simply use email_address, keeping the code consistent and readable.

In the schema.hcl file, you would define this like so:

schema.hcl
type_alias "email_address" {
schema = schema.dbo
type = varchar(100)
null = false
}
table "users" {
schema = schema.dbo
column "email_address" {
type = type_alias.email_address
}
}

Table Types

Table types allow you to define a structured data type that represents a table structure. These are particularly useful for passing sets of data between stored procedures and functions. They can also be used as parameters in stored procedures or functions, allowing you to pass multiple rows of data with a single parameter.

For example, we have a type_table to describe the structure of an address. We can declare this table and later use it in a function:

type_table "address" {
schema = schema.dbo
column "street" {
type = varchar(255)
}
column "city" {
type = varchar(255)
}
column "state" {
type = varchar(2)
}
column "zip" {
type = type_alias.zip
}
index {
unique = true
columns = [column.ssn]
}
check "zip_check" {
expr = "len(zip) = 5"
}
}
function "insert_address" {
schema = schema.dbo
lang = SQL
arg "@address_table" {
type = type_table.address
readonly = true // The table type is readonly argument.
}
arg "@zip" {
type = type_alias.zip
}
return = int
as = <<-SQL
BEGIN
DECLARE @RowCount INT;
INSERT INTO address_table (street, city, state, zip)
SELECT street, city, state, zip
FROM @address_table;

SELECT @RowCount = @ROWCOUNT;

RETURN @RowCount;
END
SQL
}
type_alias "zip" {
schema = schema.dbo
type = varchar(5)
null = false
}

Read the documentation to learn how to use these types in Atlas.

Azure Active Directory (AAD) Authentication

Now when using SQL Server with Atlas, instead of providing your regular database URL, you can connect to your Azure instance with Azure Active Directory Authentication.

Use the fedauth parameter to specify the AAD authentication method. For more information, see the document on the underlying driver.

To connect to your Azure instance using AAD, the URL will look similar to:

azuresql://<instance>.database.windows.net?fedauth=ActiveDirectoryDefault&database=master

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 5 min read
Rotem Tamir

Hi everyone,

We are updating you on a pricing change we will be rolling out to Atlas Cloud on March 15th, 2024.

As you know, Atlas is an open-core project, which means that while its core is an Apache 2-licensed open-source project, we are building it as a commercial, cloud-connected solution built and supported by our company, Ariga. As with any startup, our understanding of the product and the market are constantly evolving, and this pricing change is a reflection of that evolution.

Atlas Plans

Even through this change, we will keep providing the Atlas community with three options for how to consume Atlas.

  • Free Plan (formerly "Community Plan") - for individuals and small teams that want to unlock the full potential of Atlas. This plan will remain free forever and provides full access to all the capabilities of Atlas as a CLI as well as access to enough Atlas Cloud quota to successfully manage a single project. Support is provided via public community support channels.

  • Business Plan (formerly "Team Plan") - for teams professionally using Atlas beyond a single project. This plan has the same features and capabilities as the Free Plan but allows teams to purchase additional quotas if required. In addition, teams subscribing to this plan will get access to priority email support and in-app support via Intercom.

  • Enterprise Plan - for larger organizations looking to solve schema management at scale. This plan includes a dedicated support channel, solution engineering, and other features required for adoption by enterprises.

Why Change

The main reason for this change is the feedback we received from many small teams that the previous $295/month minimum price tag for the Team Plan was prohibitive and that a more gradual, usage-based pricing model would help them adopt Atlas in their organizations.

New Pricing

We’ve tried to keep the new pricing model as simple as possible. We have learned from our investors, advisors, and customers that a seat-based pricing model is less optimal as it disincentivizes the adoption of Atlas by people in roles with a lower-touch engagement. As such, we have made the new pricing model usage-based. Let’s break down how this is going to work.

Projects. The first dimension by which the new model works is the number of projects that you store in the Atlas Cloud schema registry. Currently, this is equal to the number of migration directories that you migrate push to Atlas Cloud, but soon we will also add support for schema push for declarative workflows.

Details:

  • Each project will cost $59 per month.
  • The Free Plan will include a single project free of charge.

Target Databases. Each project (migration directory) can be deployed to multiple target databases. This may be different environments (dev, staging, prod) or different tenants (for projects that manage separate databases per customer).

Details:

  • Each target database will cost $39 per month.
  • The Free Plan will include 2 target databases free of charge.
  • Whenever you purchase quota for an additional project, you will also get a bundled additional target database free of charge.

If pricing by target DB doesn't work for your particular use case, please reach out to us to discuss alternative solutions.

Additional Changes

  • Seats. The Free Plan will include 3 seats free of charge. Teams upgrading to the Business Plan will receive 30 seats (regardless of how many projects and databases they use). This limit is meant to allow as many people as needed to use Atlas Cloud features, while still imposing some bound beyond which we expect teams to consider the Enterprise Plan.

  • Data Retention. Atlas users generate plenty of data which we store in our databases. To prevent it from becoming unsustainable for us to support free users over the long run, we are imposing a 30-day data retention limit on CI runs and deployment logs for free users. Business users get 90-day retention by default. If this becomes an issue for you, feel free to reach out to us and we will work something out.

  • Runs. Free Plan users can now report up to 100 CI Runs or Deployments (previously 500) per month in their cloud account. Business and Enterprise users can store an unlimited amount of runs.

Thanking Existing Users

As a way of saying thanks to existing early users who have trusted us to be part of their engineering infrastructure, we have worked out a few options for you to continue using Atlas without interruption. We will be reaching out to admins of these accounts personally to share the details.

This doesn’t work for me

If these changes cause an issue for you or you would like to discuss your specific pricing needs, please let me know personally via Email, Discord, Intercom, or Homing Pigeon 🙂.

-- Rotem and Ariel

· 10 min read

Hi everyone,

We are excited to share our latest release with you! Here's what's new:

  • Pre-migration Checks: Before migrating your schema, you can now add SQL checks that will be verified to help avoid risky migrations.
  • Schema Docs: Atlas lets you manage your database schema as code. One of the things we love most about code, is that because of its formal structure, it's possible to automatically generate documentation from it. With this release, we're introducing a new feature that lets you generate code-grade documentation for your database schema.
  • SQL Server Trigger Support: Atlas now supports managing triggers in SQL Server.
  • ClickHouse Materialized View Support: Atlas now supports managing materialized views in ClickHouse.

Let's dive in.

Pre-migration Checks

Atlas now supports the concept of pre-migration checks, where each migration version can include a list of assertions (predicates) that must evaluate to true before the migration is applied.

For example, before dropping a table we may want to verify that it is empty, ensuring no data is deleted; or we may check for the absence of duplicate values before adding a unique constraint to a table.

This is especially useful if we want to add our own specific logic to migration versions, and it helps to ensure that our database changes are safe.

Cloud Directory

Pre-migration checks work for Cloud connected directories. Check out the introduction guide to get started with Atlas Cloud.

To add these checks, Atlas supports a text-based file archive to describe "migration plans". Unlike regular migration files, which mainly contain a list of DDL statements (with optional directives), Atlas txtar files (currently) support two file types: migration files and pre-execution check files.

The code below presents a simple example of a pre-migration check. The default checks file is named checks.sql, and the migration.sql file contains the actual DDLs to be executed on the database in case the assertions are passed.

20240201131900_drop_users.sql
-- atlas:txtar

-- checks.sql --
-- The assertion below must evaluate to true, i.e., the "users" table must not contain any rows.
SELECT NOT EXISTS(SELECT * FROM users);

-- migration.sql --
-- The statement below will be executed only if the assertion above evaluates to true.
DROP TABLE users;
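The unique-constraint scenario mentioned earlier could be guarded the same way; a hypothetical sketch, assuming a users.email column:

20240201132000_unique_email.sql
-- atlas:txtar

-- checks.sql --
-- Assert there are no duplicate emails before enforcing uniqueness.
SELECT NOT EXISTS (SELECT email FROM users GROUP BY email HAVING COUNT(*) > 1);

-- migration.sql --
CREATE UNIQUE INDEX users_email_unique ON users (email);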

If the pre-execution checks pass, the migration will be applied, and Atlas will report the results.

atlas migrate apply --dir atlas://app --env prod

Check passed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM users);
-- ok (624.004µs)
-- migrating version 20240201131900
-> DROP TABLE users;
-- ok (5.412737ms)
-------------------------
-- 22.138088ms
-- 1 migration
-- 1 check
-- 1 sql statement

If the pre-execution checks fail, the migration will not be applied, and Atlas will exit with an error.

atlas migrate apply --dir atlas://app --env prod

Check failed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM internal_users);
-> SELECT NOT EXISTS(SELECT * FROM external_users);
-- ok (1.322842ms)
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM roles);
-> SELECT NOT EXISTS(SELECT * FROM user_roles);
2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false
-------------------------
-- 19.396779ms
-- 1 migration with errors
-- 2 checks ok, 2 failures
Error: 2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false

To learn more about how to use pre-migration checks, read the documentation here.

Schema Docs

One of the most surprising things we learned from working with teams on their Atlas journey, is that many teams do not have a single source of truth for their database schema. As a result, it's impossible to maintain up-to-date documentation for the database schema, which is crucial for disseminating knowledge about the database across the team.

Atlas changes this by creating a workflow that begins with a single source of truth for the database schema - the desired state of the database, as defined in code. This is what enables Atlas to automatically plan migrations, detect drift (as we'll see below), and now, generate documentation.

How it works

Documentation is currently generated for the most recent version of your schema for migration directories that are pushed to Atlas Cloud. To generate docs for your schema, follow these steps:

  1. Make sure you have the most recent version of Atlas:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Login to Atlas Cloud using the CLI:

    atlas login

    If you do not already have a (free) Atlas Cloud account, follow the instructions to create one.

  3. Push your migrations to Atlas Cloud:

    atlas migrate push <dir name>

    Be sure to replace <dir name> with the name of the directory containing your migrations (e.g., app).

  4. Atlas will print a link to the overview page for your migration directory, e.g:

    https://gh.atlasgo.cloud/dirs/4294967296
  5. Click on "Doc" in the top tabs to view the documentation for your schema.

SQL Server Trigger Support

In version v0.17, we released trigger support for PostgreSQL, MySQL and SQLite. In this release, we have added support for SQL Server as well.

Triggers are a powerful feature of relational databases that allow you to run custom code when certain events occur on a table or a view. For example, you can use triggers to automatically update the amount of stock in your inventory when a new order is placed or to create an audit log of changes to a table. Using this event-based approach, you can implement complex business logic in your database, without having to write any additional code in your application.

Managing triggers as part of the software development lifecycle can be quite a challenge. Luckily, Atlas's database schema-as-code approach makes it easy to do!

BETA FEATURE

Triggers are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

Let's use Atlas to build a small example application for managing students' grades:

  1. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Make sure you are logged in to Atlas:

    atlas login
  3. Let's spin up a new SQL Server database using docker:

    docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 --name atlas-demo -d mcr.microsoft.com/mssql/server:latest
  4. Next, let's define and apply the base table for our application:

    schema.hcl
    schema "dbo" {
    }
    table "grades" {
    schema = schema.dbo
    column "student_id" {
    null = false
    type = bigint
    }
    column "course_id" {
    null = false
    type = bigint
    }
    column "grade" {
    null = false
    type = int
    }
    column "grade_status" {
    null = true
    type = varchar(10)
    }
    primary_key {
    columns = [column.student_id, column.course_id]
    }
    }

    The grades table represents a student's grade for a specific course. The grade_status column will remain null at first, and we will use a trigger to set it to 'Pass' or 'Fail' based on the grade.

    Apply this schema on our local SQL Server instance using the Atlas CLI:

    atlas schema apply \
      --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
      --to "file://schema.hcl" \
      --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
      --auto-approve

    This command will apply the schema defined in schema.hcl to the local SQL Server instance. Notice the --auto-approve flag, which instructs Atlas to automatically apply the schema without prompting for confirmation.

  5. Now, let's define the logic to assign a grade_status using a TRIGGER. Append this definition to schema.hcl:

    schema.hcl
      trigger "after_grade_insert" {
    on = table.grades
    after {
    insert = true
    }
    as = <<-SQL
    BEGIN
    SET NOCOUNT ON;

    UPDATE grades
    SET grade_status = CASE
    WHEN inserted.grade >= 70 THEN 'Pass'
    ELSE 'Fail'
    END
    FROM grades
    INNER JOIN inserted ON grades.student_id = inserted.student_id and grades.course_id = inserted.course_id;
    END
    SQL
    }

    We defined a TRIGGER called after_grade_insert. This trigger executes after new rows are inserted into the grades table, running a SQL statement that updates the grade_status column to either 'Pass' or 'Fail' based on the grade.

    Apply the updated schema using the Atlas CLI:

    atlas schema apply \
      --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
      --to "file://schema.hcl" \
      --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
      --auto-approve

    Notice that Atlas automatically detects that we have added a new TRIGGER, and applies it to the database.

  6. Finally, let's test our application to see that it actually works. We can do this by populating our database with some students' grades. To do so, connect to the SQL Server container and open a sqlcmd session.

    docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995'

    Now that a sqlcmd session is open, we can insert some grades:

    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 1, 87, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 2, 99, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (2, 2, 68, null);

    To exit the session, type Quit.

    Now, let's check the grades table to see that the grade_status column was updated correctly:

     docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995' -Q "SELECT * FROM grades;"

    You should see the following output:

     student_id   course_id   grade   grade_status
     ----------   ---------   -----   ------------
              1           1      87   Pass
              1           2      99   Pass
              2           2      68   Fail

     (3 rows affected)

    Amazing! Our trigger automatically updated the grade_status for each of the rows.

ClickHouse Materialized View Support

A materialized view is a table-like structure that holds the results of a query. Unlike a regular view, the results of a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data.

LOGIN REQUIRED

Materialized views are currently available to logged-in users only. To use this feature, run:

atlas login

Let's see an example of how to write a materialized view in HCL for a ClickHouse database:

materialized "mat_view" {
schema = schema.public
to = table.dest
as = "SELECT * FROM table.src"
depends_on = [table.src]
}

In the example above, when creating materialized views with TO [db.]table, the view will be created with the same structure as the table or view specified in the TO clause.

The engine and primary_key attributes are required if the TO clause is not specified. In this syntax, the populate attribute can be used to fill the materialized view with existing data from the source table upon creation:

materialized "mat_view" {
schema = schema.public
engine = MergeTree
column "id" {
type = UInt32
}
column "name" {
type = String
}
primary_key {
columns = [column.id]
}
as = "SELECT * FROM table.src"
populate = true
depends_on = [table.src]
}
info

Note that modifying the materialized view structure after the initial creation is not supported by Atlas currently.

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 11 min read
Rotem Tamir

The Evolution of Database Schema Management

In today's world, where software is developed and deployed at a rapid pace, selecting the right tools for your team is crucial. The right tools can help you move faster, be more productive, and reduce the risk of errors and bugs. Most backend applications are backed by a database, and maintaining the database schema is a critical part of the development process.

Our industry has seen many trends in the way database schemas evolve over the years. In the early days, DBAs would manually run SQL DDL commands directly against the database through a terminal. Then came GUI based, point-and-click database management tools that included schema editing capabilities.

In the early 2000s, we saw the rise of "database migration tools" that allowed developers to write versioned database upgrade scripts (usually in SQL), called "migrations", that would be executed by the tool itself. This trend was further intensified with the rise of DevOps and CI/CD, as migrations became the de-facto standard way of automating database schema management.

The Rise of Database Schema-as-Code

In recent years, we have witnessed a seismic shift in the way cloud infrastructure is managed. The rise of tools such as Terraform and Kubernetes has led to the concept of "Infrastructure-as-Code". As the complexity of infrastructure grew, it became clear that the only way to manage it effectively was to use a declarative approach and treat it "as code".

Infrastructure-as-code, in this context, means that the desired state of the infrastructure is defined in a formal language ("as code") and stored in a version control system (such as Git). The infrastructure is then managed by a tool that compares the desired state to the actual state and makes the necessary changes to bring the actual state to the desired state.

This approach has many advantages over the traditional imperative approach of manually running commands against the infrastructure. It allows for better collaboration between team members; it is more robust and less susceptible to errors; it is self-documenting; and it allows for better visibility, governance, and control over the state of the infrastructure.

Following this trend, some projects such as Atlas and Skeema have started to apply the same principles to database schema management. Instead of using an imperative approach, where developers do the planning and tools are only used to execute the plan and keep track of what has been done, they bring to the table a simpler approach. This approach, which is gaining the name "Database Schema-as-Code", applies a declarative approach to database schema management: developers provide the desired state of the database and the tool automatically plans the required migrations.

A practical example

Let's use Atlas to show a quick example of how database schema-as-code tools work.

Atlas allows users to represent their database schema in plain SQL or in HCL. The schema is saved in a schema file, which holds the desired state of our database.

We will start with a simple schema that represents a users table, in which each user has an ID and a name:

schema.hcl
table "users" {
  schema = schema.example
  column "id" {
    null = false
    type = int
  }
  column "name" {
    null = true
    type = varchar(100)
  }
  primary_key {
    columns = [column.id]
  }
}
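Since Atlas accepts plain SQL as well as HCL for the desired state, the same schema could equivalently be kept in a schema.sql file (a sketch; we will stick with the HCL version for the rest of this example):

schema.sql
CREATE TABLE `users` (
  `id` int NOT NULL,
  `name` varchar(100) NULL,
  PRIMARY KEY (`id`)
);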

Now, let's apply this schema to an empty MySQL database:

atlas schema apply \
  -u "mysql://root:pass@localhost:3306/example" \
  --to file://schema.hcl \
  --dev-url "docker://mysql/8/example"

Atlas will compare the desired state from our schema file to the database URL provided, and plan the migration:

-- Planned Changes:
-- Create "users" table
CREATE TABLE `users` (
  `id` int NOT NULL,
  `name` varchar(100) NULL,
  PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
? Are you sure?:
  ▸ Apply
    Lint and edit
    Abort

After applying this to our database, we will run the command once more and see that Atlas recognizes there is no drift between the two states:

Schema is synced, no changes to be made

Let's make a simple change to our schema by adding an email column:

schema.hcl
table "users" {
  schema = schema.example
  column "id" {
    null = false
    type = int
  }
  column "name" {
    null = true
    type = varchar(100)
  }
  column "email" {
    null = true
    type = varchar(255)
  }
  primary_key {
    columns = [column.id]
  }
}

Now let's run the schema apply command again, and view the migration plan that Atlas will create for us:

-- Planned Changes:
-- Modify "users" table
ALTER TABLE `users` ADD COLUMN `email` varchar(255) NULL;
? Are you sure?:
  ▸ Apply
    Lint and edit
    Abort

To conclude, in this example, we got a glimpse of how a tool like Atlas works with a database schema using the declarative approach.

Do you need a Database Schema-as-Code tool?

If you are using an older, more traditional database migration tool or not using one at all, you might be wondering if you should consider switching to a Database Schema-as-Code tool. Here are five signs that you might need to consider it:

1. Your team is making changes to the database schema directly in production

The worst kind of tool for managing database schemas is no tool at all. Running migrations manually against production databases should be avoided at all costs. First, humans are notoriously prone to errors, and the risk of making a mistake that will cause downtime or data loss is very high. Second, any changes made directly in production are not tracked anywhere, so it is very hard to know what has been done and by whom. Third, it means that your deployments cannot be fully automated and you are missing out on the benefits of CI/CD.

Using a Database Schema-as-Code tool as the exclusive way of making changes to the database schema is a great way to avoid these problems. It allows you to automate your deployments, track all changes to the database schema in source control, and collaborate more effectively with your team.

2. You don't have a single source of truth for your database schema

Having a single source of truth for how your application is deployed is a key principle of DevOps. It allows you to have a clear picture of the state of your application at any given time, and it allows you to automate your deployments and avoid human errors.

By requiring all migrations to be checked in to source control, traditional migration tools can go a long way to ensure that the database schema is in a consistent state, but they suffer from two issues in this regard:

  1. They only keep track of the changes to the database ("migrations"). To figure out the actual schema of the database at any given version, you need to apply all the migrations up to that version to an empty database and inspect the result with an additional tool. Database Schema-as-Code tools, on the other hand, naturally document the schema of the database in any given version.

  2. They have no way to ensure that the actual database's schema is consistent with the migrations. If someone makes a change directly in the database, there is no way to know about it. When you apply your schema to a target database using a Database Schema-as-Code tool, it will compare the desired state to the actual state and make the necessary changes to bring the actual state to the desired state.

    This ensures that over time your actual databases will not drift from the desired state.

3. You don't have any database experts on your team

In the past, managing database schemas was the responsibility of DBAs. They were the ones who made sure that the database schema was consistent and that all changes were done safely.

In recent years, as cloud-native, microservices-based architectures have become more popular, the number of databases that each team manages has grown significantly. This has led to a situation where many teams don't have a DBA, and the responsibility for managing the database schema falls on the shoulders of the developers.

If you maintain a small schema with a few tables, you might not feel the need for a tool to help you manage it. But as schemas grow in size and complexity, to contain thousands of database objects (tables, views, functions, stored procedures, etc.), it becomes very hard to manage them manually. A Database Schema-as-Code tool can help you manage your schema more effectively, even if you don't have any DBAs on your team.

4. You manually verify the safety of your database migrations

If your application is small and the stakes for making a mistake are low, you might be able to get away with messing up a schema change now and then. But if your workload is mission-critical, if application downtime or data loss can have a significant impact on your business, you need to make sure that your migrations are safe.

Many teams have been through the unpleasant experience of a failed migration that caused a significant outage. This is usually followed by a change in review processes where migrations are reviewed by multiple team members, and a manual verification process is added to make sure that the migration is safe to run.

This process is time-consuming and error-prone. Even worse, it turns team members with more database experience into bottlenecks, slowing down the development process and team velocity.

Database Schema-as-Code tools can help you automate this process and make it more robust. Atlas, for example, comes with built-in support for Migration Linting, Diff Policies, and other safety mechanisms that help you catch errors and issues before they lead to a full-blown production outage.

5. Your app and infra delivery are way better than your database deployments

When interviewing platform engineers about how their deployment infrastructure works, we keep hearing the same story: "Our deployments are 100% pure Infrastructure-as-Code GitOps goodness, we can do them 50 times a day!" And when we ask about databases… "Oh yeah, there’s that part... It’s not as great."

A lot has changed in the way applications are deployed since the early days of database migrations. The versioned migrations paradigm was a great improvement over the previous manual approach, but it is still very much an imperative approach. Modern application delivery principles such as GitOps and Infrastructure-as-Code work in a declarative way, and so they need a database schema management tool that works in a similar way.

If your application and infra delivery feel like a well-oiled machine, but your database deployments are still semi-manual, or feel clunky and error-prone, you should consider switching to a Database Schema-as-Code tool that lets you manage your database schema with a Kubernetes Operator or a Terraform Provider.

Enter: Atlas

Atlas is a Database Schema-as-Code tool that allows you to manage your database schema in a declarative way. With Atlas, developers provide the desired state of the database schema and Atlas automatically plans the required migrations. To get started with Atlas, you can follow our quickstart guide.

Wrapping up

In this article, we have discussed the evolution of database schema management tools and the rise of Database Schema-as-Code tools. We have also discussed five signs that you might need to consider switching to a Database Schema-as-Code tool. I hope you found this article useful.

As always, we would love to hear your feedback and suggestions on the Atlas Discord server.

· 6 min read
Rotem Tamir

Hi everyone,

Thanks for joining us today for another release announcement! We have a bunch of really exciting features to share with you today, so let's get started! Here's what we'll cover:

  • Drift Detection - A common source of database trouble is that the schema in your database doesn't match the schema in your code. This can happen for a variety of reasons, including manual changes to the database, or changes made by other tools. Today, we are happy to announce the availability of a new feature that automatically detects these changes and alerts you when they happen.
  • SQLAlchemy Support - SQLAlchemy is a popular Python ORM. Developers using SQLAlchemy can use Atlas to automatically plan schema migrations for them, based on the desired state of their schema instead of crafting them by hand.
  • VSCode ERDs - We've added a new feature to our VSCode extension that lets you visualize your database schema as an ERD diagram.
  • Composite Schemas - The newly added composite_schema data source lets you combine multiple schemas into one, which is useful for managing schemas that are loaded from multiple sources or for describing applications that span multiple database schemas.

Drift Detection

We believe that, in an ideal world, schema migrations on production databases should be done in an automated way, preferably in your CI/CD pipelines, with developers not having root access. However, we know that this is oftentimes not the case. For this reason, it is also common to find databases whose schemas differ from the ones they are supposed to have. This phenomenon, called schema drift, can cause a lot of trouble for a team.
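For example, a well-meaning engineer might apply a hotfix directly in production during an incident (a hypothetical illustration; the table and column are invented):

-- Applied manually in production, never recorded in version control:
ALTER TABLE users ADD COLUMN phone varchar(20) NULL;

From that moment on, the schema in your code no longer describes the database you are actually running.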

Atlas can now periodically check whether your deployed databases' schemas match their desired state. To function correctly, this feature relies on Atlas Cloud being able to communicate with your database. As it is uncommon for databases to be directly accessible from the internet, we have added the option to run Atlas Agents in your database's network to facilitate this communication. Agents register themselves via credentials against your Atlas Cloud account and continuously poll it for work.

PAID FEATURE

Drift Detection is currently only available in a paid subscription.

To learn more about how to use this feature, check out our Drift Detection Guide.

In addition, Atlas Agents enable you to use a lot more cool features, like:

  • Cloud mediated deployments (coming soon)
  • Schema monitoring and auditing (coming soon)

SQLAlchemy Support

Goodbye, Alembic. Hello, Atlas.

SQLAlchemy is a popular ORM toolkit widely used in the Python community. SQLAlchemy allows users to describe their data model using its declarative-mapping feature. To actually create the underlying tables, users can use the Base.metadata.create_all method which may be sufficient during development where tables can be routinely dropped and recreated.

However, at some point, teams need more control and decide to employ the versioned migrations methodology, which is a more robust way to manage a database schema.

The native way to manage migrations with SQLAlchemy is to use the Alembic migration tool. Alembic can automatically generate migration scripts from the difference between the current state of the database and the desired state of the application.

A downside of this approach is that in order for it to work, a pre-existing database with the current version of the schema must be connected to. In many production environments, databases should generally not be reachable from developer workstations, which means this comparison is normally done against a local copy of the database which may have undergone some changes that aren't reflected in the existing migrations.

In addition, Alembic auto-generation fails to detect many kinds of changes and cannot be relied upon to generate production-ready migration scripts without routine manual intervention.

Atlas, on the other hand, can automatically plan database schema migrations for SQLAlchemy without requiring a connection to such a database and can detect almost any kind of schema change. Atlas plans migrations by calculating the diff between the current state of the database and its desired state.

To learn how to use Atlas with SQLAlchemy, check out our SQLAlchemy Guide.

Special thanks to No'am (Miko) Tamir (who also doubles as my younger brother) for his fantastic work building the prototype for this feature and to Ronen Lubin for making it production-ready.

VSCode ERDs

Starting with v0.4.2, our VSCode Extension can now visualize your database schema as an ERD diagram. To use this feature, simply open the command palette (Ctrl+Shift+P on Windows/Linux, Cmd+Shift+P on Mac) and select Atlas: View in ERD.

Composite Schemas

The composite_schema data source allows the composition of multiple Atlas schemas into a unified schema graph. This functionality is useful when a project's schemas are split across various sources such as HCL, SQL, or application ORMs. For example, each service might have its own schema.

Referring to the url returned by this data source allows reading the entire project's schemas as a single unit by any of the Atlas commands, such as migrate diff, schema apply, or schema inspect.

Usage example

By running atlas migrate diff with the given configuration, Atlas loads the inventory schema from the SQLAlchemy schema, the graph schema from ent/schema, and the auth and internal schemas from HCL and SQL schemas defined in Atlas format. Then, the composite schema, which represents these four schemas combined, will be compared against the current state of the migration directory. In case of a difference between the two states, a new migration file will be created with the necessary SQL statements.

atlas.hcl
data "composite_schema" "project" {
  schema "inventory" {
    url = data.external_schema.sqlalchemy.url
  }
  schema "graph" {
    url = "ent://ent/schema"
  }
  schema "auth" {
    url = "file://path/to/schema.hcl"
  }
  schema "internal" {
    url = "file://path/to/schema.sql"
  }
}

env "dev" {
  src = data.composite_schema.project.url
  dev = "docker://postgres/15/dev"
  migration {
    dir = "file://migrations"
  }
}

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 7 min read
Rotem Tamir

Hi everyone,

I hope you are enjoying the holiday season, because we are here today with the first Atlas release of 2024: v0.17. It's been only a bit over a week since our last release, but we have some exciting new features we couldn't wait to share with you:

  • Trigger Support - Atlas now supports managing triggers on MySQL, PostgreSQL, MariaDB and SQLite databases.
  • Improved ERDs - You can now visualize your schema's SQL views, as well as create filters to select the specific database objects you wish to see.

Without further ado, let's dive in!

Trigger Support

BETA FEATURE

Triggers are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

Triggers are a powerful feature of relational databases that allow you to run custom code when certain events occur on a table or a view. For example, you can use triggers to automatically update the amount of stock in your inventory when a new order is placed or to create an audit log of changes to a table. Using this event-based approach, you can implement complex business logic in your database, without having to write any additional code in your application.
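To make the audit-log use case concrete, it might look roughly like this in PostgreSQL (a hypothetical sketch; the users table and the log_user_change trigger function are invented for illustration):

-- Record every update to "users" by invoking a logging function you define:
CREATE TRIGGER audit_users
AFTER UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION log_user_change();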

Managing triggers as part of the software development lifecycle can be quite a challenge. Luckily, Atlas's database schema-as-code approach makes it easy to do!

Let's use Atlas to build a small chunk of a simple e-commerce application:

  1. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Make sure you are logged in to Atlas:

    atlas login
  3. Let's spin up a new PostgreSQL database using docker:

    docker run --name db -e POSTGRES_PASSWORD=pass -d -p 5432:5432 postgres:16
  4. Next, let's define and apply the base tables for our application:

    schema.hcl
    table "inventory" {
      schema = schema.public
      column "item_id" {
        null = false
        type = serial
      }
      column "item_name" {
        null = false
        type = character_varying(255)
      }
      column "quantity" {
        null = false
        type = integer
      }
      primary_key {
        columns = [column.item_id]
      }
    }
    table "orders" {
      schema = schema.public
      column "order_id" {
        null = false
        type = serial
      }
      column "item_id" {
        null = false
        type = integer
      }
      column "order_quantity" {
        null = false
        type = integer
      }
      primary_key {
        columns = [column.order_id]
      }
      foreign_key "orders_item_id_fkey" {
        columns     = [column.item_id]
        ref_columns = [table.inventory.column.item_id]
        on_update   = NO_ACTION
        on_delete   = NO_ACTION
      }
    }

    This defines two tables: inventory and orders. The inventory table holds information about the items in our store, and the orders table holds information about orders placed by our customers. The orders table has a foreign key constraint to the inventory table, to ensure that we can't place an order for an item that doesn't exist in our inventory.

    Apply this schema on our local Postgres instance using the Atlas CLI:

    atlas schema apply \
      --dev-url 'docker://postgres/16?search_path=public' \
      --to file://schema.hcl \
      -u 'postgres://postgres:pass@:5432/postgres?search_path=public&sslmode=disable' \
      --auto-approve

    This command will apply the schema defined in schema.hcl to the local Postgres instance. Notice the --auto-approve flag, which instructs Atlas to automatically apply the schema without prompting for confirmation.

  5. Let's now populate our database with some inventory items. We can do this using the psql command that is installed inside the default PostgreSQL Docker image:

    docker exec -it db psql -U postgres -c "INSERT INTO inventory (item_name, quantity) VALUES ('Apple', 10);"
    docker exec -it db psql -U postgres -c "INSERT INTO inventory (item_name, quantity) VALUES ('Banana', 20);"
    docker exec -it db psql -U postgres -c "INSERT INTO inventory (item_name, quantity) VALUES ('Orange', 30);"
  6. Now, let's define the business logic for our store using a FUNCTION and a TRIGGER. Append these definitions to schema.hcl:

    schema.hcl
    function "update_inventory" {
      schema = schema.public
      lang   = PLpgSQL
      return = trigger
      as     = <<-SQL
        BEGIN
          UPDATE inventory
          SET quantity = quantity - NEW.order_quantity
          WHERE item_id = NEW.item_id;
          RETURN NEW;
        END;
      SQL
    }
    trigger "after_order_insert" {
      on = table.orders
      after {
        insert = true
      }
      foreach = ROW
      execute {
        function = function.update_inventory
      }
    }

    We start by defining a FUNCTION called update_inventory. This function is written in PL/pgSQL, the procedural language for PostgreSQL, and is declared to return a trigger. The function updates the inventory table to reflect the new order, and then returns NEW, which is the row that was just inserted into the orders table.

    Next, we define a TRIGGER called after_order_insert. This trigger is executed after a new row is inserted into the orders table. The trigger executes the update_inventory function for each row that was inserted.
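    For reference, the DDL that Atlas plans for these two resources is roughly equivalent to the following PostgreSQL statements (a sketch):

    CREATE FUNCTION update_inventory() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      UPDATE inventory
      SET quantity = quantity - NEW.order_quantity
      WHERE item_id = NEW.item_id;
      RETURN NEW;
    END;
    $$;

    CREATE TRIGGER after_order_insert
    AFTER INSERT ON orders
    FOR EACH ROW
    EXECUTE FUNCTION update_inventory();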

    Apply the updated schema using the Atlas CLI:

    atlas schema apply \
      --dev-url 'docker://postgres/16?search_path=public' \
      --to file://schema.hcl \
      -u 'postgres://postgres:pass@:5432/postgres?search_path=public&sslmode=disable' \
      --auto-approve

    Notice that Atlas automatically detects that we have added a new FUNCTION and a new TRIGGER, and applies them to the database.

  7. Finally, let's test our application to see that it actually works. We can do this by inserting a new row into the orders table:

    docker exec -it db psql -U postgres -c "INSERT INTO orders (item_id, order_quantity) VALUES (1, 5);"

    This statement creates a new order for 5 Apples.

    Now, let's check the inventory table to see that the order was processed correctly:

    docker exec -it db psql -U postgres -c "SELECT quantity FROM inventory WHERE item_name='Apple';"

    You should see the following output:

     quantity
    ----------
            5
    (1 row)

    Amazing! Our trigger automatically detected the creation of a new order of apples, and updated the inventory accordingly from 10 to 5.

Improved ERDs

One of the most frequently used capabilities in Atlas is schema visualization. Having a visual representation of your data model can be helpful as it allows for easier comprehension of complex data structures, and enables developers to better understand and collaborate on the data model of the application they are building.

Visualizing Database Views

erd-views

Until recently, the ERD showed the schema's tables and the relations between them. With the most recent release, the ERD now visualizes database views as well!

Within each view you can find its:

  • Columns - the view's columns, including their data types and nullability.
  • Create Statement - the SQL CREATE statement, based on your specific database type.
  • Dependencies - a list of the tables (or other views) it is connected to. Clicking on this will map edges to each connected object in the schema.

We have also recently added support for functions, stored procedures, and triggers, all of which are coming soon to the ERD!

To play with a schema that contains this feature, head over to the live demo.

ERD Filters

In cases where you have many database objects and prefer to focus on a specific set of tables and views, you can narrow down your selection by creating a filter. Filters can be saved for future use. This is great when working on a feature that affects only a specific part of the schema; this way, you can easily refer to it as needed.

erd-filters

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 10 min read
Rotem Tamir

Hi everyone,

It's been a while since our last version announcement and today I'm happy to share with you v0.16, which includes some very exciting improvements for Atlas:

  • ClickHouse Beta Support - ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Today, we are happy to announce that ClickHouse is officially in Beta!
  • Hibernate Provider - Atlas now supports loading the desired state of your database directly from your Hibernate code. Hibernate developers can now join developers from the GORM, Sequelize, TypeORM and more communities who can now use Atlas to manage their database schema.
  • Baseline Schemas - In some cases, your migrations rely on certain database objects existing prior to your application schema, for example extensions or legacy tables. Atlas now supports defining a baseline schema which will be loaded before automatically planning and applying your migrations.
  • Proactive conflict detection - Teams that have connected their project to Atlas Cloud will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior.
  • Mermaid Support - Atlas now supports generating a Mermaid diagram of your database schema. This is a great way to visualize your database schema and share it with your team.
  • Review Policies - Users working with declarative migrations can now define "review policies" which can define thresholds for which kinds of changes require human review and which can be auto-applied.
  • Postgres Sequences - Another long-awaited feature, Atlas now supports managing sequences in PostgreSQL.

I know that's quite a list, so let's dive right in!

ClickHouse Support

ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Our team has been working hard to bring this feature to you and today we are happy to announce that ClickHouse is now available to use in Beta!

Here's what you need to do to get started:

  1. Log in to your Atlas Cloud account. If you don't have an account yet, you can sign up for free.

  2. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  3. Log in to your Atlas Cloud account from the CLI:

    atlas login
  4. Spin up a local ClickHouse instance:

    docker run -d --name clickhouse-sandbox -p 9000:9000 clickhouse/clickhouse-server:latest
  5. Verify that you are able to connect to this instance:

    atlas schema inspect -u 'clickhouse://localhost:9000'

    If everything is working correctly, you should see the following output:

     schema "default" {
    engine = Atomic
    }
  6. Create a new file named schema.hcl with the following content:

     schema "default" {
    engine = Atomic
    }

    table "users" {
    schema = schema.default
    engine = MergeTree
    column "id" {
    type = UInt32
    }
    column "name" {
    type = String
    }
    column "created" {
    type = DateTime
    }
    primary_key {
    columns = [column.id]
    }
    }
  7. Run the following command to apply the schema to your local ClickHouse instance:

     atlas schema apply -u 'clickhouse://localhost:9000' -f schema.hcl

    Atlas will prompt you to confirm the changes:

    -- Planned Changes:
    -- Create "users" table
    CREATE TABLE `default`.`users` (
      `id` UInt32,
      `name` String,
      `created` DateTime
    ) ENGINE = MergeTree
    PRIMARY KEY (`id`) SETTINGS index_granularity = 8192;

    Hit "Enter" to apply the changes.

  8. Amazing! Our schema has been applied to the database!

Hibernate Provider

Atlas now supports loading the desired state of your database directly from your Hibernate code. Packaged as both a Maven and a Gradle plugin, the Hibernate provider allows you to seamlessly integrate Atlas into your existing Hibernate project.

Hibernate ships with an automatic schema management tool called hbm2ddl. Similarly to Atlas, this tool can inspect a target database and automatically migrate the schema to the desired one. However, the Hibernate team has been advising for years not to use this tool in production:

Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it’s much more flexible to manage the schema using incremental migration scripts.

This is where Atlas comes in. Atlas can read Hibernate schema and plan database schema migrations.

To get started, refer to the blog post we published earlier this week.

Baseline Schemas

LOGIN REQUIRED

The docker block is available for logged-in users only. To use this feature, run:

atlas login

In some cases, there is a need to configure a baseline schema for the dev database so that every computation using the dev database starts from this baseline. For example, your schema or migrations may rely on objects, extensions, or other schema resources that are not managed by the project.

To configure such a baseline, use the docker block with the relevant image and pass to it the script for creating the base schema for the project:

docker "postgres" "dev" {
image = "postgres:15"
schema = "public"
baseline = <<SQL
CREATE SCHEMA "auth";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" SCHEMA "auth";
CREATE TABLE "auth"."users" ("id" uuid NOT NULL DEFAULT auth.uuid_generate_v4(), PRIMARY KEY ("id"));
SQL
}

env "local" {
src = "file://schema.pg.hcl"
dev = docker.postgres.dev.url
}

For more details refer to the documentation.

Proactive conflict detection

Teams that have connected their project to Atlas Cloud (see setup) will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior. For example:

atlas migrate diff --env dev

? Your directory is outdated (2 migrations behind). Continue or Abort:
  ▸ Continue (Rebase later)
    Abort (Pull changes and re-run the command)

Additionally, the atlas migrate lint command helps enforce this requirement during the CI stage. Learn more about how to integrate Atlas into your GitHub Actions or GitLab CI pipelines.

Mermaid Support

Atlas now supports generating a Mermaid diagram of your database schema. Let's demonstrate this feature using an example schema for a local SQLite database. First, we'll create a new file named sqlite.hcl with the following content:

sqlite.hcl
schema "default" {
}

table "users" {
schema = schema.default
column "id" {
type = int
}
column "name" {
type = text
}
column "email" {
type = text
}
primary_key {
columns = [column.id]
}
}

table "blog_posts" {
schema = schema.default
column "id" {
type = int
}
column "title" {
type = text
}
column "body" {
type = text
}
column "author_id" {
type = int
}
foreign_key "blog_author" {
columns = [column.author_id]
ref_columns = [table.users.column.id]
}
}

Run the following command to inspect the schema and generate the Mermaid code:

atlas schema inspect -u file://sqlite.hcl --dev-url 'sqlite://?mode=memory' --format "{{ mermaid . }}"

The output will look like this:

erDiagram
    users {
      int id PK
      text name
      text email
    }
    blog_posts {
      int id
      text title
      text body
      int author_id
    }
    blog_posts }o--o| users : blog_author

Next, copy this output and paste it into the Mermaid Live Editor.

The result should look like a rendered ER diagram of the two tables and the relation between them.

Review Policies

Users working with declarative migrations can now define "review policies" which can define thresholds for which kinds of changes require human review and which can be auto-applied.

By default, when running atlas schema apply on a target database, if any changes to the target database are required, Atlas will prompt the user to confirm the changes. This is a safety measure to prevent accidental changes to the target database.

However, Atlas ships with an analysis engine that can detect the impact of different changes to the target database. For example, Atlas can detect irreversible destructive changes that will result in data loss, or data-dependent changes that may fail due to data integrity constraints.

With review policies, you can tell Atlas to first analyze the proposed changes and only prompt the user if the changes are above a certain risk threshold. For example, you can configure Atlas to only ask for review if any warnings are found and to automatically apply all changes that do not trigger any diagnostics:

atlas.hcl
lint {
review = WARNING
}

You can see a live demonstration of this feature towards the end of our recent HashiCorp conference talk.

Postgres Sequences

BETA FEATURE

Sequences are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

The sequence block allows defining a sequence number generator. Supported by PostgreSQL.

Note: a sequence block is printed by Atlas on inspection, or may be manually defined in the schema, only if it represents a PostgreSQL sequence that is not implicitly created by the database for identity or serial columns.

# Simple sequence with default values.
sequence "s1" {
  schema = schema.public
}

# Sequence with custom configuration.
sequence "s2" {
  schema    = schema.public
  type      = smallint
  start     = 100
  increment = 2
  min_value = 100
  max_value = 1000
}

# Sequence that is owned by a column.
sequence "s3" {
  schema  = schema.public
  owner   = table.t2.column.id
  comment = "Sequence with column owner"
}

# The sequences created by this table are not printed on inspection.
table "users" {
  schema = schema.public
  column "id" {
    type = int
    identity {
      generated = ALWAYS
      start     = 10000
    }
  }
  column "serial" {
    type = serial
  }
  primary_key {
    columns = [column.id]
  }
}

table "t2" {
  schema = schema.public
  column "id" {
    type = int
  }
}

schema "public" {
  comment = "standard public schema"
}
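For reference, a definition like s2 above corresponds roughly to the following PostgreSQL statement (a sketch):

CREATE SEQUENCE "public"."s2"
  AS smallint
  START WITH 100
  INCREMENT BY 2
  MINVALUE 100
  MAXVALUE 1000;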

Wait, there's more!

A few other notable features shipped in this release are:

  • Analyzers for detecting blocking enum changes on MySQL. Certain kinds of changes to enum columns on MySQL tables change the column type and require a table copy. During this process, the table is locked for write operations, which can cause application downtime (see the sketch after this list).

    Atlas now ships with analyzers that can detect such changes and warn the user before applying them. For more information see the documentation for analyzers MY111, MY112 and MY113.

  • The external data source - Allows the execution of an external program and the use of its output in the project.

    For example:

    atlas.hcl
    data "external" "dot_env" {
    program = [
    "npm",
    "run",
    "load-env.js"
    ]
    }

    locals {
    dot_env = jsondecode(data.external.dot_env)
    }

    env "local" {
    src = local.dot_env.URL
    dev = "docker://mysql/8/dev"
    }
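As for the enum analyzers mentioned above, here is a sketch of the kind of change they flag (the orders table and its values are invented for illustration). MySQL can append a new value to the end of an enum in place, but inserting one in the middle changes the column type and forces a full table copy, blocking writes for its duration:

-- Safe: appends 'refunded' at the end of the enum (performed in place).
ALTER TABLE orders MODIFY COLUMN status enum('pending','shipped','refunded') NOT NULL;

-- Risky: inserts 'processing' in the middle, forcing a table copy.
ALTER TABLE orders MODIFY COLUMN status enum('pending','processing','shipped') NOT NULL;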

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 9 min read
Dor Avraham
TL;DR

You can now import the desired database schema from your Hibernate project into Atlas, and use it to automatically plan migrations for you.

See an example

Introduction

Hibernate-ORM is one of the most popular ORMs for Java, so much so that parts of it have evolved into the JPA standard and the Jakarta APIs.

Today, we are excited to announce that Atlas now supports loading and managing Hibernate schemas.

Atlas is a modern tool for managing your database schema. It allows you to inspect, plan, lint and apply schema changes to your database. It is designed to be used by developers, DBAs and DevOps engineers alike.

By using Atlas, Hibernate users can now enjoy these benefits:

  • A declarative migration flow - Atlas can operate like a "Terraform for databases", where by running atlas schema apply the application schema is applied on a target database.
  • Automatic schema migration planning - Alternatively, Atlas can operate using a more traditional versioned migration flow. However, contrary to most tools, Atlas will automatically analyze the diff between the migration directory and the current application data model, and will produce correct and safe SQL migration files.
  • CI for schema changes - Atlas can be used during CI to make sure you never merge a pull request that will break your database schema.
  • Modern CD integrations - Atlas integrates seamlessly with modern deployment tools such as Kubernetes, Terraform, Helm, Flux, and ArgoCD. This allows you to deploy changes to your database schema as part of your existing deployment pipelines.
  • Visualization - Atlas users can create beautiful, shareable ERDs of their application data model with a single command.
  • …and much more (read more about Atlas features).
note

Hibernate support is currently in Beta and we would love to hear your feedback ❤️. Please reach out to us on Discord or by opening an issue.

Integrating Atlas into your Hibernate project

Hibernate ships with an automatic schema management tool called hbm2ddl. Similarly to Atlas, this tool can inspect a target database and automatically migrate the schema to the desired one. However, the Hibernate team has been advising for years not to use this tool in production:

Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it’s much more flexible to manage the schema using incremental migration scripts.

This is where Atlas comes in. Atlas can read Hibernate schema and plan database schema migrations.

How does it work?

Atlas compares two database schema states and plans a migration to get from one state to the other. The database schema can be read directly from Hibernate, a migration directory, a database connection, or another ORM.

To read the Hibernate schema, Atlas utilizes the concept of an external_schema datasource.

Demo Time

For this demo, we are going to use Gradle, PostgreSQL and this example project.

Installation

If you haven't already, install the latest version of Atlas:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Add the hibernate-provider to your project via Gradle or Maven:

plugins {
    id("io.atlasgo.hibernate-provider-gradle-plugin") version "0.1"
}

To check the installation, run: ./gradlew help --task schema

Configuration

The plugin adds a configurable Gradle task (or a Maven goal) that prints the Hibernate schema without requiring a database connection. However, the task needs to be configured with the database dialect. We can do this by creating a schema-export.properties file in the resource directory. For example, for MySQL:

jakarta.persistence.database-product-name=MySQL
jakarta.persistence.database-major-version=8

Lastly, we need to configure Atlas to use this configuration by creating an atlas.hcl file and adding the definition of the Hibernate schema:

atlas.hcl
data "external_schema" "hibernate" {
program = [
"./gradlew",
"-q",
"schema",
"--properties", "schema-export.properties"
]
}

And the Atlas configuration:

atlas.hcl
env "hibernate" {
  src = data.external_schema.hibernate.url
  dev = "docker://mysql/8/dev"
  migration {
    dir = "file://migrations"
  }
  format {
    migrate {
      diff = "{{ sql . \" \" }}"
    }
  }
}

Running Atlas

We should now be able to view our schema using Atlas:

atlas schema inspect -w --env hibernate --url env://src

The -w flag allows us to inspect the schema in Atlas Cloud.

Atlas has many more features we can explore. Let's create a migration directory from our schema:

atlas migrate diff --env hibernate

By running atlas migrate diff, Atlas compares the state of our Hibernate schema with the state of the schema in the migration directory. Atlas sees that the migration directory does not exist and initializes it with the current Hibernate schema. Observe the migration directory; it should contain files similar to these:

-- Create "movies" table
-- Create "movies" table
CREATE TABLE "movies" (
"id" bigserial NOT NULL,
"numberinseries" integer NULL,
"title" character varying(255) NULL,
PRIMARY KEY ("id")
);
-- Create "actors" table
CREATE TABLE "actors" (
"name" character varying(255) NOT NULL,
PRIMARY KEY ("name")
);
-- Create "movieparticipation" table
CREATE TABLE "movieparticipation" (
"actorname" character varying(255) NOT NULL,
"movieid" bigint NOT NULL,
PRIMARY KEY ("actorname", "movieid"),
CONSTRAINT "fkaq2kkwvh9870847sm35vtjtiy" FOREIGN KEY ("movieid") REFERENCES "movies" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT "fktm8fbwa577lnbvwdjegwxvget" FOREIGN KEY ("actorname") REFERENCES "actors" ("name") ON UPDATE NO ACTION ON DELETE NO ACTION
);

Atlas uses the atlas.sum file to protect against conflicting schema changes; you can read about it here.

note

Currently, Atlas does not support using generated fields that require data initialization, such as GenerationType.SEQUENCE, GenerationType.TABLE, and GenerationType.AUTO.

If needed, you can still export the schema using the flag --enable-table-generators (or -Denable-table-generators using Maven). When applying the schema to your database, you will need to make sure to apply the ignored statements (using atlas migrate diff --env hibernate --edit). See more information on manual migrations here.

For example, if you are adding GenerationType.SEQUENCE to the Event entity, you will need to add insert statements to your generated migration file:

diff --git a/migrations/20231210140844.sql b/examples/with_local_plugin_repository/migrations/20231210140844.sql
index ad80a64..5955834 100644
--- a/migrations/20231210140844.sql
+++ b/migrations/20231210140844.sql
@@ -4,3 +4,6 @@ CREATE TABLE `Event` (`id` bigint NOT NULL AUTO_INCREMENT, `title` varchar(255)
-- Create "Event_SEQ" table
CREATE TABLE `Event_SEQ` (`next_val` bigint NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
+ -- Initialize "Event_SEQ" table
+ insert into Event_SEQ values ( 1 );

Testing these changes can be done by running the application with a local database and creating the entity. To apply the migration directory to the local database, use atlas migrate apply.

Testing the migrations

Now that our migration directory is ready, let's see how to apply it to a target database. Let's start a local PostgreSQL instance:

docker run -it --rm --name mypostgres -p 5432:5432 -e 'POSTGRES_PASSWORD=password' postgres

Next, let's apply our migrations to the database:

atlas migrate apply --env hibernate --url 'postgres://postgres:password@0.0.0.0:5432/?search_path=public&sslmode=disable'

Atlas provides details on the applied migrations:

Migrating to version 20231211121102 (1 migrations in total):

  -- migrating version 20231211121102
    -> CREATE TABLE "movies" (
         "id" bigserial NOT NULL,
         "numberinseries" integer NULL,
         "title" character varying(255) NULL,
         PRIMARY KEY ("id")
       );
    -> CREATE TABLE "actors" (
         "name" character varying(255) NOT NULL,
         PRIMARY KEY ("name")
       );
    -> CREATE TABLE "movieparticipation" (
         "actorname" character varying(255) NOT NULL,
         "movieid" bigint NOT NULL,
         PRIMARY KEY ("actorname", "movieid"),
         CONSTRAINT "fkaq2kkwvh9870847sm35vtjtiy" FOREIGN KEY ("movieid") REFERENCES "movies" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION,
         CONSTRAINT "fktm8fbwa577lnbvwdjegwxvget" FOREIGN KEY ("actorname") REFERENCES "actors" ("name") ON UPDATE NO ACTION ON DELETE NO ACTION
       );
  -- ok (9.282079ms)

  -------------------------
  -- 54.100203ms
  -- 1 migrations
  -- 3 sql statements

To confirm the migrations were applied, we can use Atlas to inspect the database. Run the following command:

atlas schema inspect -w --env hibernate --url 'postgres://postgres:password@0.0.0.0:5432/?search_path=public&sslmode=disable'

Making changes with confidence

Atlas ships with a static code analysis engine that can detect risky schema changes during development or Continuous Integration. This functionality is exposed to users via the migrate lint command. Let's demonstrate this capability with an example.

Suppose we make the following change:

--- a/src/main/java/org/example/Movie.java
+++ b/src/main/java/org/example/Movie.java
@@ -10,13 +10,10 @@ public class Movie {
 
     Movie(String title, Integer numberInSeries) {
         this.title = title;
-        this.numberInSeries = numberInSeries;
     }
     @Id
     @GeneratedValue(strategy = GenerationType.IDENTITY)
     public Long id;
 
     public String title;
-
-    public Integer numberInSeries;
 }

By removing an attribute from the Movie model, we are removing a column from the database schema. Let's see how Atlas handles this change. Run atlas migrate diff --env hibernate, and observe the new file in the migration directory:

cat migrations/20231211124321.sql
-- Modify "movies" table
ALTER TABLE "movies" DROP COLUMN "numberinseries";

While this change may be desired, it is an irreversible operation that should be done with caution. Atlas can help us avoid dangerous schema changes by linting the migration directory and not allowing such a change to get merged.

Running the following command, we can see that Atlas will warn us about a destructive change to the database:

atlas migrate lint --env hibernate --latest 1

20231211124321.sql: destructive changes detected:
  L2: Dropping non-virtual column "numberinseries"

Running migrate lint locally during development can be very useful, but linting becomes much more powerful when you integrate it into your Continuous Integration pipeline. Atlas offers a set of GitHub Actions designed to make setting this up a breeze.

Conclusion

In this post, we have presented how Hibernate projects can use Atlas to automatically plan, lint and apply schema migrations based only on their data model.

If you want to explore more configuration options or dive deeper into how this works, please take a look at this repository.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.