
12 posts tagged with "atlas"


· 5 min read
Rotem Tamir

Hi everyone,

We are updating you on a pricing change we will be rolling out to Atlas Cloud on March 15th, 2024.

As you know, Atlas is an open-core project, which means that while its core is an Apache 2-licensed open-source project, we are building it into a commercial, cloud-connected solution supported by our company, Ariga. As with any startup, our understanding of the product and the market is constantly evolving, and this pricing change is a reflection of that evolution.

Atlas Plans

Even with this change, we will keep providing the Atlas community with three options for consuming Atlas.

  • Free Plan (formerly "Community Plan") - for individuals and small teams that want to unlock the full potential of Atlas. This plan will remain free forever and provides full access to all the capabilities of Atlas as a CLI as well as access to enough Atlas Cloud quota to successfully manage a single project. Support is provided via public community support channels.

  • Business Plan (formerly "Team Plan") - for teams professionally using Atlas beyond a single project. This plan has the same features and capabilities as the Free Plan but allows teams to purchase additional quotas if required. In addition, teams subscribing to this plan will get access to priority email support and in-app support via Intercom.

  • Enterprise Plan - for larger organizations looking to solve schema management at scale. This plan includes a dedicated support channel, solution engineering, and other features required for adoption by enterprises.

Why Change

The main reason for this change is the feedback we received from many small teams that the previous $295/month minimum price tag for the Team Plan was prohibitive and that a more gradual, usage-based pricing model would help them adopt Atlas in their organizations.

New Pricing

We’ve tried to keep the new pricing model as simple as possible. We have learned from our investors, advisors, and customers that a seat-based pricing model is less optimal, as it disincentivizes the adoption of Atlas by people in roles with lower-touch engagement. As such, we have made the new pricing model usage-based. Let’s break down how this is going to work.

Projects. The first dimension by which the new model works is the number of projects that you store in the Atlas Cloud schema registry. Currently, this is equal to the number of migration directories that you push to Atlas Cloud using migrate push, but soon we will also add support for schema push for declarative workflows.

Details:

  • Each project will cost $59 per month.
  • The Free Plan will include a single project free of charge.

Target Databases. Each project (migration directory) can be deployed to multiple target databases. This may be different environments (dev, staging, prod) or different tenants (for projects that manage separate databases per customer).

Details:

  • Each target database will cost $39 per month.
  • The Free Plan will include 2 target databases free of charge.
  • Whenever you purchase quota for an additional project, you will also get a bundled additional target database free of charge.

If pricing by target DB doesn't work for your particular use case, please reach out to us to discuss alternative solutions.

Additional Changes

  • Seats. The free plan will include 3 seats free of charge. Teams upgrading to the Business Plan will receive 30 seats (regardless of how many projects and databases they use). This limit is intended to allow as many people as needed to use Atlas Cloud features, while still imposing a ceiling beyond which we expect teams to consider the Enterprise Plan.

  • Data Retention. Atlas users generate plenty of data which we store in our databases. To prevent it from becoming unsustainable for us to support free users over the long run, we are imposing a 30-day data retention limit on CI runs and deployment logs for free users. Business users get 90-day retention by default. If this becomes an issue for you, feel free to reach out to us and we will work something out.

  • Runs. Free Plan users can now report up to 100 CI Runs or Deployments (previously 500) per month in their cloud account. Business and Enterprise users can store an unlimited amount of runs.

Thanking Existing Users

As a way of saying thanks to existing early users who have trusted us to be part of their engineering infrastructure, we have worked out a few options for you to continue using Atlas without interruption. We will be reaching out to admins of these accounts personally to share the details.

This doesn’t work for me

If these changes cause an issue for you or you would like to discuss your specific pricing needs, please let me know personally via Email, Discord, Intercom, or Homing Pigeon 🙂.

-- Rotem and Ariel

· 11 min read
Rotem Tamir

Hi everyone!

It's been a few weeks since our last version announcement and today I'm happy to share with you
v0.15, which includes some very exciting improvements for Atlas:

  • Interactive Declarative Migrations - Atlas supports a Terraform-like workflow for managing your database schema using the schema apply command. In this release we have added a new "Lint and Edit" mode to this command, which will analyze your schema changes for issues and will allow you to edit them interactively before applying them to your database.
  • Functions and Stored Procedures - Atlas now supports creating and managing functions and stored procedures in your database schema.
  • Postgres Domains - In addition, Atlas now supports Postgres Domains. A domain is essentially a data type with optional constraints (restrictions on the allowed set of values).
  • TypeORM Integration - TypeORM is a popular ORM for Node.js. In this release, we are happy to announce the TypeORM integration, which allows you to automatically generate your database schema from your TypeORM entities, create visualizations and more.

Let's dive right in!

Interactive Declarative Migrations

Atlas supports a Terraform-like workflow for managing your database schema using the schema apply command. This workflow, which we call "Declarative Migrations", is a modern alternative to the traditional "versioned migrations" workflow. In declarative migrations, you define your desired schema in one of the formats supported by Atlas and supply a connection string to your database. Atlas compares the current and desired schema of your database and generates a plan to migrate your database to the desired state.

Similar to Terraform, until today, Atlas would prompt you to confirm the migration plan before applying it to your database. This is a great way to ensure that you don't accidentally apply a migration that you didn't intend to. However, this flow suffers from a few drawbacks:

  1. Ensuring Safety - you can count on Atlas to generate a correct migration plan to your desired state, but it's still possible that this migration will have unintended side effects. For example, adding a UNIQUE constraint to a column might fail if there are duplicate values in the column.
  2. Editing - users often want to make changes to their migration plan before applying it. In the current flow, this requires running schema apply with the --dry-run flag, saving the output to a file, editing it, and then manually applying the edited migration plan to the database (sketched below).
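For illustration, here is a hedged sketch of that manual editing flow against the SQLite example used below (the file names and URLs are placeholders):

# 1. Generate the plan without applying it and save it to a file.
atlas schema apply -u sqlite://sqlite.db --dev-url "sqlite://?mode=memory" -f schema.hcl --dry-run > plan.sql
# 2. Hand-edit plan.sql as needed.
# 3. Apply the edited plan manually, e.g. with the sqlite3 shell.
sqlite3 sqlite.db < plan.sql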

Enter: Interactive Declarative Migrations

In this release, we are introducing a new "Lint and Edit" mode to the schema apply command. This mode is available to logged-in users only, as it uses Atlas Cloud to provide a neat UI and rich analysis capabilities. Let's see it in action.

Start by downloading the latest version of Atlas:

curl -sSf https://atlasgo.sh | sh

For installation instructions on other platforms, see the installation guide.

After installing Atlas, make sure to log in using the atlas login command:

atlas login

Next, create a new file named schema.hcl that will contain your desired schema:

schema.hcl
schema "main" {
}

table "hello" {
schema = schema.main
column "name" {
type = varchar(100)
default = "Anonymous"
}
}

Now, let's apply this schema to a local SQLite database named "sqlite.db":

atlas schema apply -u sqlite://sqlite.db --dev-url sqlite://?mode=memory -f schema.hcl

Atlas will calculate the diff between the current (empty) state of the database and our desired state and prompt us to confirm the migration plan:

-- Planned Changes:
-- Create "hello" table
CREATE TABLE `hello` (`name` varchar NOT NULL DEFAULT 'Anonymous');
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
Apply
▸ Lint and edit # <-- Brand new!
Abort

Notice the new "Lint and edit" option. Select it and press Enter. Atlas will now analyze the migration plan and open the new, interactive migration plan screen in your browser. The screen contains three important sections:

  • Migration Plan - the migration plan generated by Atlas. You can click the "Edit" button to make changes to it.
  • Checks - a summary of the checks that Atlas ran against the generated plan. In this case, our plan is completely safe, so all checks passed.
  • ERD - A visual representation of the change we are planning.

Once we are content with the migration plan, let's go ahead and click the "Approve and Apply" button. Atlas will apply the migration plan to the database and scroll down to the execution logs section.

Let's edit our desired state a bit to delete the hello table and add a new users table:

schema.hcl
schema "main" {
}
-table "hello" {
- schema = schema.main
- column "name" {
- type = varchar(100)
- default = "Anonymous"
- }
-}
+table "users" {
+ schema = schema.main
+ column "id" {
+ type = int
+ }
+ column "email" {
+ type = text
+ }
+ primary_key {
+ columns = [column.id]
+ }
+ index "unique_email" {
+ columns = [
+ column.email
+ ]
+ unique = true
+ }
+}

Once again, let's run atlas schema apply to apply the changes to the database and select the "Lint and Edit" option.

This time, Atlas will warn us that the migration plan is not safe.

In this case, we decide to abort the migration in order to not lose the precious data on the hello table. Good thing we have automatic migration linting on our side!

Functions and Stored Procedures

info

Functions and stored procedures are currently in beta and available to logged-in users only. To use this feature run:

atlas login

Over the past few months, we have received numerous requests to support management of functions and stored procedures in popular databases such as PostgreSQL and MySQL. Functions and stored procedures are a way to encapsulate reusable logic in your database and are often used to improve performance by reducing the number of round-trips to the database.

Atlas now supports creating and managing functions and stored procedures in your database schema. Let's see how we can use this feature to create a simple function. In our example, we will implement the leet_speak function for PostgreSQL, which transforms a regular string into its Leet equivalent!

We can define the desired state of our database in either HCL or SQL:

schema.hcl
function "leet_speak" {
schema = schema.public
lang = PLpgSQL
arg "input_text" {
type = character_varying
}
return = character_varying
as = <<-SQL
DECLARE
output_text VARCHAR := '';
i INT := 1;
BEGIN
WHILE i <= LENGTH(input_text) LOOP
output_text := output_text ||
CASE SUBSTRING(input_text, i, 1)
WHEN 'a' THEN '4'
WHEN 'e' THEN '3'
WHEN 'i' THEN '1'
WHEN 'o' THEN '0'
WHEN 's' THEN '5'
WHEN 't' THEN '7'
ELSE SUBSTRING(input_text, i, 1)
END;
i := i + 1;
END LOOP;

RETURN output_text;
END;
SQL
}
schema "public" {
comment = "standard public schema"
}

For the purpose of this demo, we will run a local PostgreSQL Docker container:

docker run --name db -e POSTGRES_PASSWORD=pass -d -p 5432:5432 postgres:16

Now, let's apply our schema to the database:

atlas schema apply -u 'postgres://postgres:pass@localhost:5432/postgres?sslmode=disable&search_path=public' --to file://schema.hcl

Atlas will calculate the diff between the current (empty) state of the database and our desired state and prompt us to confirm the migration plan:

-- Planned Changes:
-- Create "leet_speak" function
CREATE FUNCTION "leet_speak" ("input_text" character varying) RETURNS character varying LANGUAGE PLpgSQL AS $$
DECLARE
    output_text VARCHAR := '';
    i INT := 1;
BEGIN
    WHILE i <= LENGTH(input_text) LOOP
        output_text := output_text ||
            CASE SUBSTRING(input_text, i, 1)
                WHEN 'a' THEN '4'
                WHEN 'e' THEN '3'
                WHEN 'i' THEN '1'
                WHEN 'o' THEN '0'
                WHEN 's' THEN '5'
                WHEN 't' THEN '7'
                ELSE SUBSTRING(input_text, i, 1)
            END;
        i := i + 1;
    END LOOP;

    RETURN output_text;
END;
$$;
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

Let's go ahead and select the "Apply" option. Atlas will apply the migration plan to the database and print the following output:

✔ Apply

We can now verify that the function was created successfully by running:

docker exec -it db psql -U postgres -c "SELECT leet_speak('hello leet world')"

And the result indeed is:

    leet_speak
------------------
h3ll0 l337 w0rld
(1 row)

To learn more about functions and stored procedures in Atlas, check out the documentation.

Postgres Domains

info

Support for domains is currently in beta and available to logged-in users only. To use this feature run:

atlas login

Another highly requested feature was support for Postgres Domains. A domain is essentially a data type with optional constraints (restrictions on the allowed set of values). For example, you might want to define an email_address domain which would be a varchar column with a CHECK constraint to ensure that the value is a valid email address.
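As an illustration, such an email_address domain could be created in plain SQL roughly as follows (the regular expression is a simplified placeholder, not a complete email validator):

# Create a hypothetical email_address domain on a local Postgres instance.
psql -U postgres -c "CREATE DOMAIN email_address AS varchar
  CHECK (VALUE ~ '^[^@]+@[^@]+\.[^@]+$');"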

Starting with v0.15, Atlas can now manage domains in your database schema, as well as use them as types for table columns. Let's see an example schema that uses domains:

domain "us_postal_code" {
schema = schema.public
type = text
null = true
check "us_postal_code_check" {
expr = "((VALUE ~ '^\\d{5}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text))"
}
}

domain "username" {
schema = schema.public
type = text
null = false
default = "anonymous"
check "username_length" {
expr = "(length(VALUE) > 3)"
}
}

table "users" {
schema = schema.public
column "name" {
type = domain.username
}
column "zip" {
type = domain.us_postal_code
}
}

schema "public" {
comment = "standard public schema"
}

The above schema defines two domains: us_postal_code and username. The us_postal_code domain is a text column with a CHECK constraint to ensure that the value is a valid US postal code. The username domain is a text column with a CHECK constraint to ensure that the value is at least 4 characters long. We then define a users table that uses these domains for its columns.

Let's see what happens when we apply this schema to a local Postgres database:

atlas schema apply -u 'postgres://postgres:pass@localhost:5432/postgres?sslmode=disable' -f schema.hcl

Atlas calculates the diff between the current (empty) state of the database and our desired state and prompts us to confirm the migration plan:

-- Planned Changes:
-- Create domain type "us_postal_code"
CREATE DOMAIN "public"."us_postal_code" AS text CONSTRAINT "us_postal_code_check" CHECK ((VALUE ~ '^\d{5}$'::text) OR (VALUE ~ '^\d{5}-\d{4}$'::text));
-- Create domain type "username"
CREATE DOMAIN "public"."username" AS text DEFAULT 'anonymous' NOT NULL CONSTRAINT "username_length" CHECK (length(VALUE) > 3);
-- Create "users" table
CREATE TABLE "public"."users" ("name" "public"."username" NOT NULL, "zip" "public"."us_postal_code" NOT NULL);
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After applying, let's re-run the schema apply command to make sure that the schema is up-to-date:

atlas schema apply -u 'postgres://postgres:pass@localhost:5432/postgres?sslmode=disable' --to file://schema.hcl

Indeed, Atlas reports that the schema is up-to-date:

Schema is synced, no changes to be made

Support for TypeORM

TypeORM is a popular ORM for Node.js. In this release, we are happy to announce the TypeORM integration, which allows you to automatically generate your database schema from your TypeORM entities, create visualizations, and more.

The TypeORM Atlas Provider is a Node.js module that can extract the desired schema of your database directly from your TypeORM entities. To use it, first install:

npm i @ariga/atlas-provider-typeorm

Next, add the TypeORM schema as a data source in your atlas.hcl file:

data "external_schema" "typeorm" {
program = [
"npx",
"@ariga/atlas-provider-typeorm",
"load",
"--path", "./path/to/entities",
"--dialect", "mysql", // mariadb | postgres | sqlite | mssql
]
}

env "typeorm" {
src = data.external_schema.typeorm.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}

Finally, run atlas schema apply to apply the schema to your database:

atlas schema apply -u mysql://<db credentials> --env typeorm

To learn more about the TypeORM integration, check out the documentation.

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 6 min read
Rotem Tamir
TL;DR

In this guide we will demonstrate how to use Atlas to automatically generate migrations for Microsoft SQL Server databases.

To skip the intro and go straight to the demo, click here.

Introduction

As your application's data model evolves, you will need to make changes to your database schema. In today's world, where teams are expected to own their infrastructure and ship code faster than ever, it is important to have a reliable and repeatable process for managing database schema changes.

Atlas lets you manage your database schema as code. It is a modern schema management tool that applies concepts from modern DevOps tools to database schema management. Using Atlas, teams can automatically plan, verify, deploy and monitor database schema changes.

Microsoft SQL Server, one of the longest-standing database engines in our business, was first released by Microsoft in 1989. MS-SQL is the go-to database for Windows environments in many industries.

In this guide, we will demonstrate how to use Atlas to automatically generate migrations for your Microsoft SQL Server databases.

Setting up

  1. Start by installing the Atlas CLI, if you haven't already. On macOS and Linux simply run:

    curl -sSf https://atlasgo.sh | sh

    For other platforms, see the installation instructions.

  2. The SQL Server driver is currently available to users of the Atlas Cloud Beta Program. To join the program (for free), first sign up for an Atlas Cloud account.

  3. Once you're inside your Atlas account, go to the account settings by clicking your avatar. Then, select the "Microsoft SQL Server" option and click the "Save" button.

  4. After you have opted-in to the Beta Program, log in to your Atlas account using the CLI:

    $ atlas login
    You are now connected to "a8m" on Atlas Cloud.

Demo time!

In this guide, we will demonstrate some of the basic capabilities of Atlas by working against a local Microsoft SQL Server database.

To spin up a local SQL Server instance using docker run:

docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 -d mcr.microsoft.com/mssql/server:latest

Notice that by passing the ACCEPT_EULA environment variable, we are accepting the terms of Microsoft's EULA.

Managing your database schema as code

Atlas supports a workflow called declarative schema migrations. In this workflow, you first define the desired state of your database schema (in one of many supported formats and languages) and then let Atlas calculate the diff between the desired state and the actual state of your database. Atlas then generates the needed SQL commands that will bring your database to the desired state.

Let's see this in action. First, create a new file named schema.sql. This file will contain the desired state of our database in plain SQL.

schema.sql
-- Create the users table
CREATE TABLE users (
  id INT PRIMARY KEY,
  email NVARCHAR(255) UNIQUE,
  display_name NVARCHAR(255)
);

-- Create the posts table with a custom name for the FK constraint
CREATE TABLE posts (
  id INT PRIMARY KEY,
  title NVARCHAR(255),
  body TEXT,
  author_id INT,
  CONSTRAINT author_fk FOREIGN KEY (author_id) REFERENCES users(id)
);

Applying our schema

Next, let's apply this schema to our database. To do so, we will use the atlas schema apply command.

atlas schema apply -u "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
--to file://schema.sql \
--dev-url "docker://sqlserver"

Atlas will connect to our target database to inspect its current state. Next, it will use the dev-database to normalize our schema and, finally, it will generate the SQL commands that will bring our database to the desired state:

-- Planned Changes:
-- Create "users" table
CREATE TABLE [users] ([id] int NOT NULL, [email] nvarchar(255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [display_name] nvarchar(255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, CONSTRAINT [PK_users] PRIMARY KEY CLUSTERED ([id] ASC));
-- Create index "UQ__users__AB6E61643C9DEB30" to table: "users"
CREATE UNIQUE NONCLUSTERED INDEX [UQ__users__AB6E61643C9DEB30] ON [users] ([email] ASC);
-- Create "posts" table
CREATE TABLE [posts] ([id] int NOT NULL, [title] nvarchar(255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [body] text COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [author_id] int NULL, CONSTRAINT [PK_posts] PRIMARY KEY CLUSTERED ([id] ASC), CONSTRAINT [author_fk] FOREIGN KEY ([author_id]) REFERENCES [users] ([id]) ON UPDATE NO ACTION ON DELETE NO ACTION);

Atlas prompts us to approve the changes before applying them to the database:

Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After applying the schema, Atlas confirms that the changes were applied:

✔ Apply

Next, let's re-run the atlas schema apply command. This time, Atlas will detect that the database is already in the desired state and will not generate any changes:

Schema is synced, no changes to be made.

Altering our schema

Now, let's make some changes to our schema. Open the schema.sql file and add a new column to the users table:

schema.sql
CREATE TABLE users (
  id INT PRIMARY KEY,
  email NVARCHAR(255) UNIQUE,
  display_name NVARCHAR(255),
+ bio text
);

Next, let's re-run the atlas schema apply command. This time, Atlas will detect that the schema has changed and will generate the needed SQL commands to bring the database to the desired state:

-- Planned Changes:
-- Modify "users" table
ALTER TABLE [users] ADD [bio] text COLLATE SQL_Latin1_General_CP1_CI_AS NULL;

After applying the changes, Atlas confirms once again that the changes were applied:

✔ Apply

Visualizing our schema

One of the most useful features of Atlas is the ability to visualize your database schema. To do so, run the atlas schema inspect command with the -w (web) flag:

atlas schema inspect -u "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" -w

Atlas will ask whether you would like to create your visualization publicly (in a publicly accessible URL) or privately (in your Atlas Cloud account):

Use the arrow keys to navigate: ↓ ↑ → ←
? Where would you like to share your schema visualization?:
▸ Publicly (gh.atlasgo.cloud)
Privately (rotemtam85.atlasgo.cloud)

For this demo, let's choose the public option. Atlas will create the visualization and open it in your default browser:

See it for yourself at: https://gh.atlasgo.cloud/explore/5e15289a

Wrapping up

In this guide we have demonstrated how to set up Atlas to manage your Microsoft SQL Server database schema. We have also demonstrated some of the basic capabilities of Atlas, such as declarative schema migrations, and schema visualization. These two features are just the tip of the iceberg. Atlas has many more features that can help you better manage your database! To learn more, check out the Atlas documentation.

As always, we would love to hear your feedback and suggestions on our Discord server.

· 5 min read
Rotem Tamir

Hi everyone!

I'm very happy to share with you some of the recent improvements to Atlas, specifically around GitHub Actions. In August of last year, we released our first version of the GitHub Actions experience for Atlas. It was a modest start, which included the ability to verify the safety and correctness of schema migrations during the CI process.

Over the past year, we have slowly added more features to the GitHub Actions experience, including the ability to sync migration directories to Atlas Cloud, deploy migrations, and even install Atlas. As often happens with quickly evolving systems, we felt that the API became complex, carrying over use cases and experiences that have become obsolete or superseded by better ones since the initial release.

At Ariga, the team developing Atlas, we have written a document named the "R&D Manifesto", which lists some of the principles that we commit to as individuals and as an organization. One of them is "Obsess over APIs and DevEx" - we believe that the key to building a successful product is to provide the best possible experience to our users, and that starts with clear, consistent and composable APIs that empower our users to achieve amazing feats of engineering.

With that in mind, our team has been working hard in the past few weeks to revamp the GitHub Actions experience for Atlas. Here's a quick summary of the changes:

  1. We've moved all actions into a single repo - ariga/atlas-action. (With the exception of ariga/setup-atlas.)
  2. The API has been reviewed and updated to make sure it is consistent among the different actions and with the rest of the Atlas ecosystem.
  3. We've rewritten the code in Go, which is the language we use for all of our internal tools. This allows us to share code between the CLI and the GitHub Actions, and to provide a more consistent experience between the two. In addition, looking forward we have greatly simplified the process of adding new GitHub Actions as needed.

Deprecation Notice

As part of this change we are deprecating the previous generation of GitHub Actions, and we encourage you to migrate to the new ones as soon as possible. The old actions will continue to work for the time being, but they will no longer receive updates. These actions are:

Introducing the New Actions

Without further ado, I'm happy to present the new generation of GitHub Actions for Atlas. The new actions follow the design principle of building actions as small, composable units that can be combined to achieve different outcomes. All of the actions rely on Atlas being installed on the GitHub Actions runner, which is done using the ariga/setup-atlas action. The rest of the actions essentially map to CLI commands, and can be used to build more complex workflows.

The actions are:

  • ariga/setup-atlas - Install Atlas from a GitHub Actions workflow
  • ariga/atlas-action/migrate/lint - CI for schema changes
  • ariga/atlas-action/migrate/push - Push your migration directory to Atlas Cloud (atlasgo.cloud)
  • ariga/atlas-action/migrate/apply - Deploy versioned migrations from GitHub Actions
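For reference, these actions roughly wrap the following Atlas CLI commands (a hedged sketch; the project name, URLs, and directory paths below are illustrative):

# CI for schema changes: lint the latest migration against a dev database.
atlas migrate lint --dir "file://migrations" --dev-url "docker://mysql/8/dev" --latest 1
# Push the migration directory to Atlas Cloud.
atlas migrate push my-project --dir "file://migrations" --dev-url "docker://mysql/8/dev"
# Deploy versioned migrations to a target database.
atlas migrate apply --dir "atlas://my-project" --url "mysql://user:pass@host:3306/db"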

Example Workflows

Consider the following GitHub Actions workflow, which can be used to implement a CI/CD pipeline for your database schema changes.

name: Atlas CI/CD
on:
  push:
    branches:
      - master # Use your main branch here.
  pull_request:
    paths:
      - 'migrations/*' # Use the path to your migration directory here.
# Permissions to write comments on the pull request.
permissions:
  contents: read
  pull-requests: write
jobs:
  atlas:
    services:
      # Spin up a mysql:8 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8
        env:
          MYSQL_DATABASE: dev
          MYSQL_ROOT_PASSWORD: pass
        ports:
          - 3306:3306
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/migrate/lint@v1
        with:
          dir: 'file://migrations'
          dir-name: 'my-project' # The name of the project in Atlas Cloud
          dev-url: "mysql://root:pass@localhost:3306/dev"
      - uses: ariga/atlas-action/migrate/push@v1
        if: github.ref == 'refs/heads/master'
        with:
          dir: 'file://migrations'
          dir-name: 'my-project'
          dev-url: 'mysql://root:pass@localhost:3306/dev' # Use the service name "mysql" as the hostname

This workflow uses 3 different actions to achieve the following:

  • ariga/setup-atlas - Installs Atlas on the GitHub Actions runner and logs in using the provided token.
  • ariga/atlas-action/migrate/lint - Lints the migration directory and verifies that it is safe to apply. This is run on every pull request that modifies the migration directory. If issues are found, the action will fail and a comment will be posted on the pull request with the details.
  • ariga/atlas-action/migrate/push - Pushes the migration directory to Atlas Cloud. This is run on every push to the main branch, so it can be used to deploy the migrations to production.

Tagging v1

With the release of the new actions, we are also tagging the v1 release of the actions to mark the maturity and stability of the API. We hope you will find the new actions useful, and we look forward to seeing what you build with them!

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

· 6 min read
Rotem Tamir

Hi everyone!

It's been a few weeks since our last version announcement and today I'm happy to share with you
v0.14, which includes some very exciting improvements for Atlas:

  • Checkpoints - as your migration directory grows, replaying it from scratch can become annoyingly slow. Checkpoints allow you to save the state of your database at a specific point in time and replay migrations from that point forward.
  • Push to the Cloud - you can now push your migration directory to Atlas Cloud directly from the CLI. Think of it like docker push for your database migrations.
  • JetBrains Editor Support - After launching our VSCode Extension a few months ago, our team has been hard at work to bring the same experience to JetBrains IDEs. Starting today, you can use Atlas directly from your favorite JetBrains IDEs (IntelliJ, PyCharm, GoLand, etc.) using the new Atlas plugin.

Let's dive right in!

Checkpoints

Suppose your project has been going on for a while, and you have a migration directory with 100 migrations. Whenever you need to install your application from scratch (such as during development or testing), you need to replay all migrations from start to finish to set up your database. Depending on your setup, this may take a few seconds or more. If you have a checkpoint, you can replay only the migrations that were added since the latest checkpoint, which can be much faster.

Here's a short example. Let's say we have a migration directory with 2 migration files, managing a SQLite database. The first one creates a table named t1:

migrations/20230830122359_start.sql
create table t1 ( c1 int );

And the second adds a table named t2 and adds a column named c2 to t1:

migrations/20230830122414_t2.sql
create table t2 ( c1 int, c2 int );

alter table t1 add column c2 int;

To create a checkpoint, we can run the following command:

atlas migrate checkpoint --dev-url "sqlite://file?mode=memory&_fk=1"

This will create a SQL file, which is our checkpoint:

20230830123813_checkpoint.sql
-- atlas:checkpoint

-- Create "t1" table
CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-- Create "t2" table
CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);

Notice two things:

  1. The atlas:checkpoint directive which indicates that this file is a checkpoint.
  2. The SQL statement to create the t1 table included both the c1 and c2 columns and does not contain the alter table statement. This is because the checkpoint includes the state of the database at the time it was created, which can be thought of as the sum of all migrations that were applied up to that point.

Next, let's apply these migrations on a local SQLite database:

atlas migrate apply --url sqlite://local.db

Atlas prints:

Migrating to version 20230830123813 (1 migrations in total):

-- migrating version 20230830123813
-> CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-> CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);
-- ok (960.465µs)

-------------------------
-- 6.895124ms
-- 1 migrations
-- 2 sql statements

As expected, Atlas skipped all of the migrations up to the checkpoint and only applied the last one!

Push to Cloud

As we demonstrated above, once we have a migration directory, we can apply it to a database. If your database is running locally this is easy enough, but building deployment pipelines to production databases is more involved. There are multiple ways to accomplish this, such as building custom Docker images, as shown in most of the methods covered in the guides section.

In this release, we simplified the process of pushing migration directories to Atlas Cloud by adding a new atlas migrate push command. You can think of it as docker push for your database migrations.

atlas migrate push

Migration Directory created with atlas migrate push

Continuing with our example from above, let's push our migration directory to Atlas Cloud.

To start, you'll need to log in to Atlas. If it's your first time, you'll be prompted to create both an account and a workspace.

atlas login

After logging in, let's name our new migration project pushdemo and run:

atlas migrate push pushdemo --dev-url "sqlite://file?mode=memory&_fk=1"

After our migration directory is pushed, Atlas prints a URL to the created directory, similar to the one shown in the image above.

Once your migration directory is pushed, you can use it to apply migrations to your database directly from the cloud, just as you would execute docker run to run a container image that is stored in a Docker container registry.

To apply a migration directory directly from the cloud, run:

atlas migrate apply --dir atlas://pushdemo --url sqlite://local.db

Notice two flags that we used here:

  • --dir - specifies the URL of the migration directory. We used atlas://pushdemo to indicate that we want to use the migration directory named pushdemo that we pushed earlier. This directory is accessible to us because we used atlas login in a previous step.
  • --url - specifies the URL of the database we want to apply the migrations to. In this case, we used the same SQLite database that we used earlier.

JetBrains Editor Support

JetBrains makes some of the most popular IDEs for software developers, including IntelliJ, PyCharm, GoLand, and more. We are happy to announce that following our recent release of the VSCode Extension, we now have a plugin for JetBrains IDEs as well!

The plugin is built to make editing Atlas HCL files much easier by providing developers with syntax highlighting, code completion, and warnings. It supports both atlas.hcl project configuration files as well as schema definition files (.my.hcl, .pg.hcl, and .lt.hcl).

The plugin is available for download from the JetBrains Marketplace.

  1. To install the plugin, open your IDE and go to Preferences > Plugins > Marketplace and search for Atlas:

  2. Click on the Install button to install the plugin.

  3. Create a new file named schema.my.hcl (the .my.hcl suffix signifies to the plugin that this file is a MySQL schema; you can use .pg.hcl for Postgres or .lt.hcl for SQLite).

  4. Edit away!

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 6 min read
Rotem Tamir

TL;DR

Atlas now supports AWS IAM authentication, which enables you to perform passwordless schema migrations on your RDS databases. To use it with Atlas, add the aws_rds_token data source to your atlas.hcl configuration file:

data "aws_rds_token" "mydb" {
endpoint = "mydb.123456789012.us-east-1.rds.amazonaws.com:3306"
username = "atlas"
}

To skip the intro and jump straight to the tutorial, click here.

Introduction

Passwords have long been the default mechanism for authentication, but they come with a set of known vulnerabilities. In recent years, our industry has shifted towards alternative authentication methods due to these weaknesses. For databases, which store critical data, this security concern is even more important.

Schema migrations require elevated permissions, making it even more essential to approach them with utmost care in order to prevent security breaches. In this post, we'll show how to use Atlas to perform passwordless schema migrations on databases hosted in AWS's RDS service.

The Problem with Passwords

Passwords are considered a weak authentication mechanism for humans logging in to systems since they can be leaked or guessed. For this reason, many services offer more robust authentication methods, such as multi-factor authentication or single sign-on.

In this post, we'll focus on the security concerns of passwords (or API Tokens) for automated systems (such as CI/CD pipelines), which are used to perform schema migrations. Such tokens pose a challenge to securing systems in a few ways:

  • Leaks. When stored in configuration files, passwords are typically in plain text, increasing the risk of leaks.
  • Granularity. When passwords are shared among multiple users, it becomes challenging to grant and revoke access for individual users based on role changes or emerging security concerns.
  • Visibility. Because passwords are usually visible to operators and are shared by multiple users, it's hard to track who performed which operation once authenticated.
  • Rotation. Because passwords tend to be long-lived, their rotation becomes a cumbersome task.

IAM Authentication

IAM, short for Identity and Access Management, is a framework that has been adopted by virtually all cloud providers for managing digital identities and their permissions. Unlike traditional password-based systems where credentials are stored and checked, IAM verifies who (or what) is making a request and then checks the permissions associated with that identity.

IAM services supply mechanisms for generating short-lived tokens based on the identity of the caller. In addition, these services provide a centralized way to manage permissions (by creating granular access policies and grouping them into roles) and auditing capabilities to track how subjects (users or services) use the system.

Configured correctly, under IAM, every subject can access exactly what it needs and nothing more, without ever having to use a password or some other token that might be leaked or stolen. When a person leaves your organization (or no longer needs access to a particular resource), you can revoke their access by updating their IAM role.

IAM authentication for Databases

Most databases in use today predate IAM and have developed their own internal mechanisms for authentication and authorization. In recent years, cloud vendors have worked to create a bridge between IAM and databases, allowing users to authenticate their identity to databases using IAM credentials. In this post, we'll focus on AWS's implementation of IAM authentication for RDS databases.

How does it work?

First, enable IAM authentication on your RDS instance. This installs a plugin on the database that allows it to authenticate users with IAM credentials instead of passwords. Read how to do this in the AWS documentation.

Next, create a database user and grant it permission to authenticate using IAM.

In MySQL, execute a statement like this:

CREATE USER 'atlas' IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';

In PostgreSQL, execute a statement like this:

CREATE USER atlas; 
GRANT rds_iam TO atlas;

Finally, create an IAM policy that allows subjects to create RDS connection tokens. This policy can then be attached to roles for developers or services that need to connect to the database. Read how to do this in the AWS documentation.
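For illustration, such a policy could be created with the AWS CLI roughly as follows (the policy name, account ID, region, resource ID, and database user are all placeholders):

# Allow generating RDS connection tokens for the "atlas" database user.
aws iam create-policy \
  --policy-name atlas-rds-connect \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL/atlas"
    }]
  }'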

IAM Authentication with Atlas

Tools that perform schema migrations such as Atlas require elevated permissions to perform their tasks. For example, they need to be able to inspect the database's information schema tables as well as create and drop resources. For this reason, any mechanism that can further protect the security of their credentials is essential, making IAM authentication a great fit. To support this use case, we have recently added support for AWS IAM authentication to Atlas.

Demo Time!

Let's see how to use Atlas to perform passwordless schema migrations on an RDS database.

For the purpose of this demo, we assume that we have a PostgreSQL database running in RDS with IAM authentication enabled. We also assume that we have a user named atlas that has been granted the rds_iam permission and that we have created an IAM policy that allows us to generate RDS tokens.

Start by creating a new file named atlas.hcl to store our project configuration and add the following content:

// Define local variables for the database endpoint and username.
locals {
  endpoint = "atlas-demo.xyzxyz.us-east-1.rds.amazonaws.com:5432"
  username = "atlas"
}

// Use the "aws_rds_token" data source to generate a token for the database.
data "aws_rds_token" "db" {
  endpoint = local.endpoint
  username = local.username
  region   = "us-east-1"
}

// Define an environment named "rds" that uses the generated token.
env "rds" {
  url = "postgres://${local.username}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres"
}

Let's break this example down:

  • The locals block defines two variables – endpoint and username – that we use to store the database endpoint and the username of the user created in the database.
  • Next, we define an aws_rds_token data source to generate a token for the database. To read more about this data source, see the documentation.
  • Finally, we define an environment named rds that uses the generated token. The url property defines the connection URL that Atlas will use to connect to the database. Notice that we use the urlescape function to escape the token before embedding it in the URL.

Now that we have our project configuration, let's use Atlas to inspect the database schema. Run the following command:

atlas schema inspect -c "file://atlas.hcl" --env rds

You should see output similar to the following:

schema "public" {
}

Amazing! This output indicates that Atlas was able to both connect to the database and inspect the schema without us having to provide it with any credentials!

Wrapping up

In this post, we discussed the security concerns around passwords and how IAM authentication can help mitigate them. We also demonstrated how to use Atlas to perform passwordless schema migrations on an RDS database using IAM authentication. If you use Atlas to perform schema migrations on RDS databases, we encourage you to give IAM authentication a try!

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

· 11 min read
Rotem Tamir

Introduction

When we started building Atlas a couple of years ago, we noticed that there was a substantial gap between what was then considered state-of-the-art in managing database schemas and the recent strides from Infrastructure-as-Code (IaC) to managing cloud infrastructure.

In this post, we review that gap and show how Atlas – along with its Terraform provider – can bridge the two domains.

As an aside, I usually try to keep blog posts practical and to the point, but occasionally think it’s worth it to zoom out and explain the grander ideas behind what we do.

If you’re looking for a quick and practical explanation of working with Atlas and Terraform, I recommend this YouTube video.

Why Infrastructure-as-Code

Infrastructure as Code (IaC) refers to the practice of managing and provisioning infrastructure through machine-readable configuration files, instead of utilizing traditional interactive configuration tools. This approach makes for automated, consistent, and repeatable deployment of environments that are faster and less error-prone than previous, more manual approaches.

Terraform, a popular open-source tool created by HashiCorp, is the most prominent implementation of the IaC concept. With Terraform, organizations can describe the desired state of their infrastructure in a simple configuration language (HCL) and let Terraform plan and apply these changes in an automated way.

Terraform (and IaC in general) has taken the software engineering world by storm in recent years. As someone who had the dubious pleasure of managing complex cloud infrastructure manually, using what is today jokingly called "ClickOps", I can mention a few properties of IaC that I believe contributed to this success:

  • Declarative – Terraform is built on a declarative workflow, which means that users only define the final (desired) state of their system. Terraform is responsible for inspecting the target environment, calculating the difference between the current and desired states, and building a plan for reconciling between those two states.

    Cloud infrastructures are becoming increasingly complex, comprising thousands of different, interconnected components. Declarative workflows greatly reduce the mental overhead of planning changes to such environments.

  • Automated – Many engineers can attest that manually provisioning a new environment used to take days, even weeks! Once Terraform generates a plan for changing environments, the process runs automatically and finishes in a matter of minutes.

  • Holistic – With Terraform, it is possible to capture all of the resources and configurations required to provision an application as one interconnected and formally defined dependency graph. Deployments become truly reproducible and automated, with no dangling or manually provisioned dependencies.

  • Self-healing – Finally, these three properties converge to support a self-healing tool that can detect and fix drift on its own. Whenever drift occurs, it is only a matter of re-running Terraform to shift from the current state back to the desired one.

Comparing IaC with Schema Management Tools

Next, let’s discuss the current state of database schema management tools (often called schema migration tools) by contrasting them with the properties of IaC.

  • Imperative – If Terraform embodies the declarative approach, then schema management tools often exemplify the opposite, imperative (or revision-based) approach. In this case, we don’t provide the tools with the what (the desired state of the database), but the how (what SQL commands need to run to migrate the database from the previous version to the next).

  • Semi-automated – Migration tools were revolutionary when they came out a decade ago. One idea stood as one of the harbingers of the GitOps philosophy: that database changes should not be applied manually but first checked into source control and then applied automatically by a tool.

    Today’s migration tools automate two aspects of schema management: 1) execution and 2) tracking which migrations were already executed on a target database.

    Compared to modern IaC tools, however, they are fairly manual. In other words, they leave the responsibility of planning and verifying the safety of changes to the user.

  • Fragmented – As we described above, one of the most pleasant aspects of adopting the IaC mindset is having a unified, holistic description of your infrastructure, to the point where you can entirely provision it from a single terraform apply command.

    For database schema management, common practices are anything but holistic. In some cases, provisioning the schema might happen 1) when application servers boot, before starting the application, or 2) while it runs as an init container on Kubernetes.

    In fact, some places (yes, even established companies) still have developers manually connect (with root credentials) to the production database to execute schema changes!

  • A pain to fix – When a migration deployment fails, many schema management tools will actually get in your way. Instead of worrying about fixing the issue at hand, you now need to worry about both your database and the way your migration tool sees it (which have now diverged).

Bridging the Gap

After describing the gap between IaC and database schema management in more detail, let’s delve into what it would take to bridge it. Our goal is to have schema management become an integral part of your day-to-day IaC pipeline so that you can enjoy all the positive properties we described above.

To integrate schema change management and IaC, we would need to solve two things:

  1. A diffing engine capable of supporting declarative migration workflows, such that an engine should be capable of:
    • Loading the desired schema of the database in some form
    • Inspecting the current schema of the database
    • Calculating a safe migration plan automatically
  2. A Terraform Provider that wraps the engine as a Terraform resource, which can then seamlessly integrate into your overall application infrastructure configuration.

How Atlas drives Declarative Migrations

Atlas is a language-agnostic tool for managing and migrating database schemas using modern DevOps principles. It is different from Terraform in many ways, but similar enough to have received the informal nickname "Terraform for Databases".

At its core lie three capabilities that make it ideal to apply a declarative workflow to schema management:

  1. Schema loaders
  2. Schema inspection
  3. Diffing and planning

Let’s discuss each of these capabilities in more detail.

Schema loaders

Every declarative workflow begins with the desired state - what we want the system to look like. Using a mechanism called "schema loaders" Atlas users can provide the desired schema in many ways. For example:

Plain SQL

Atlas users can describe the desired schema of the database using plain SQL DDL statements such as:

CREATE TABLE users (
  Id int primary key,
  Name varchar(255)
)

Atlas HCL

Alternatively, users can use Atlas HCL, a configuration language that shares Terraform’s configuration language foundations:

table "users" {
schema = schema.public
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
column "manager_id" {
type = int
}
primary_key {
columns = [
column.id
]
}
}

A live database

In addition, users can provide Atlas with a connection to an existing database which in turn Atlas can inspect and use as the desired state of the database.
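For example, a live database can serve as the desired state directly in a schema apply call, along the lines of this hedged sketch (both connection URLs are placeholders):

# Use the schema of a staging database as the desired state for a target database.
atlas schema apply \
  -u "mysql://root:pass@target-db:3306/example" \
  --to "mysql://root:pass@staging-db:3306/example" \
  --dev-url "docker://mysql/8/example"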

External Schemas (ORM)

Finally, Atlas has an easily extensible design which makes writing plugins to load schemas from external sources a breeze. For example, Atlas can read the desired schema of the database directly from your ORM, using a simple integration.

Schema inspection

Once Atlas understands the desired state of the database, it needs to inspect the existing database to understand its current schema. This is done by connecting to the target database and querying the database’s information schema to construct a schema graph (an in-memory representation of all the components in the database and their connections).
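In CLI terms, this inspection step is what the schema inspect command exposes directly, for example (the connection URL is illustrative):

# Inspect a running database and print its schema in Atlas HCL.
atlas schema inspect -u "mysql://root:pass@localhost:3306/example"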

Diffing and planning

The next phase involves calculating the difference ("diffing") between the desired and current states and calculating an execution plan to reconcile this difference. Because resources are often interconnected, Atlas must create a sensible order of execution using algorithms such as Topological Sort to ensure, for example, that dependencies on a resource are removed before it is dropped.
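The diffing step can also be invoked on its own with the schema diff command; here is a hedged sketch (the URLs and file name are illustrative):

# Compute the migration plan from the current database state to the desired schema file.
atlas schema diff \
  --from "mysql://root:pass@localhost:3306/example" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/example"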

In addition, each database engine has its own peculiarities and limitations to take into account when creating an execution plan. For example, adding a default value to a column in an SQLite database must be performed in a multiple-step plan that looks similar to this:

-- Planned Changes:
-- Create "new_users" table
CREATE TABLE `new_users` (`id` int NOT NULL, `greeting` text NOT NULL DEFAULT 'shalom')
-- Copy rows from old table "users" to new temporary table "new_users"
INSERT INTO `new_users` (`id`, `greeting`) SELECT `id`, IFNULL(`greeting`, 'shalom') AS `greeting` FROM `users`
-- Drop "users" table after copying rows
DROP TABLE `users`
-- Rename temporary table "new_users" to "users"
ALTER TABLE `new_users` RENAME TO `users`

Atlas in action

What does this workflow look like in practice? As you can see in Atlas's "Getting Started" guide, suppose we made a change to our desired schema that adds a new table named blog_posts (this change may be described in a plain SQL file, an HCL file or even in your ORM's data model).

To apply the desired schema on a target database you would use the schema apply command:

atlas schema apply \
-u "mysql://root:pass@localhost:3306/example" \
--to file://schema.sql \
--dev-url "docker://mysql/8/example"

After which Atlas will generate a plan:

-- Planned Changes:
-- Create "blog_posts" table
CREATE TABLE `example`.`blog_posts` (`id` int NOT NULL, `title` varchar(100) NULL, `body` text NULL, `author_id` int NULL, PRIMARY KEY (`id`), INDEX `author_id` (`author_id`), CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`))
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

Observing this example, you may begin to understand how Atlas earned its nickname the "Terraform for Databases."

Integrating with Terraform

The second piece of bridging the gap is to create a Terraform Provider that wraps Atlas and allows users to define resources that represent the schema definition as part of your infrastructure.

Ariga (the company behind Atlas) is an official HashiCorp Tech Partner that publishes the Atlas Terraform Provider, which was created to solve this problem precisely.

Using the Atlas Terraform Provider, users can finally provision their database instance and its schema in one holistic definition. For example, suppose we provision a MySQL database using AWS RDS:

// Our RDS-based MySQL 8 instance.
resource "aws_db_instance" "atlas-demo" {
  identifier     = "atlas-demo"
  instance_class = "db.t3.micro"
  engine         = "mysql"
  engine_version = "8.0.28"
  // Some fields skipped for brevity
}

Next, we load the desired schema from an HCL file, using the Atlas Provider:

data "atlas_schema" "app" {
src = "file://${path.module}/schema.hcl"
}

Finally, we use the atlas_schema resource to apply our schema to the database:

// Apply the normalized schema to the RDS-managed database.
resource "atlas_schema" "hello" {
  hcl = data.atlas_schema.app.hcl
  url = "mysql://${aws_db_instance.atlas-demo.username}:${urlencode(random_password.password.result)}@${aws_db_instance.atlas-demo.endpoint}/"
}

You can find a full example here.

When we run terraform apply, this is what will happen:

  • Terraform will provision the RDS database using the AWS Provider
  • Terraform will use Atlas to inspect the existing schema of the database and load the desired state from a local HCL file.
  • Atlas will calculate for Terraform a SQL plan to reconcile between the two.
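For reference, kicking this off is the standard Terraform flow (assuming the configuration above is in the current working directory):

# Initialize the providers, then apply the combined infrastructure and schema configuration.
terraform init
terraform apply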

And this is how it may look in the Terraform plan:

Terraform will perform the following actions:

  # atlas_schema.hello will be created
  + resource "atlas_schema" "hello" {
      + hcl = <<-EOT
            table "posts" {
              schema = schema.app
              column "id" {
                null = false
                type = int
              }
              column "user_id" {
                null = false
                type = int
              }
              column "title" {
                null = false
                type = varchar(255)
              }
              column "body" {
                null = false
                type = text
              }
              primary_key {
                columns = [column.id]
              }
              foreign_key "posts_ibfk_1" {
                columns     = [column.user_id]
                ref_columns = [table.users.column.id]
                on_update   = NO_ACTION
                on_delete   = CASCADE
              }
              index "user_id" {
                columns = [column.user_id]
              }
            }
            table "users" {
              schema = schema.app
              column "id" {
                null = false
                type = int
              }
              column "user_name" {
                null = false
                type = varchar(255)
              }
              column "email" {
                null = false
                type = varchar(255)
              }
              primary_key {
                columns = [column.id]
              }
            }
            schema "app" {
              charset = "utf8mb4"
              collate = "utf8mb4_0900_ai_ci"
            }
        EOT
      + id  = (known after apply)
      + url = (sensitive value)
    }

  # aws_db_instance.atlas-demo will be created
  + resource "aws_db_instance" "atlas-demo" {
      // .. redacted for brevity
    }

And that's how you bridge the gap between IaC and schema management!

Conclusion

In this blog post, we reviewed some exceptional properties of Infrastructure-as-Code tools, such as Terraform, that have led to their widespread adoption and success in the industry. We then reviewed the current state of a similar problem, database schema management, in contrast to these properties. Finally, we showcased Atlas’s ability to adapt some IaC principles into the domain of schema management and how we can unify the two domains using the Atlas Terraform Provider.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

· 7 min read

Most software projects are backed by a database; that's widely accepted. The schema for this database almost always evolves over time: requirements change, features are added, and so the application's model of the world must evolve. When this model evolves, the database's schema must change as well. No one wants to (or should) connect to their production database and apply changes manually, which is why we need tools to manage schema changes. Most ORMs have basic support, but eventually projects tend to outgrow them. This is when projects reach for a dedicated schema migration tool.

Many such tools exist, and it's hard to know which to choose. My goal in this article is to present three popular migration tools for Go projects to help you make this decision.

By way of introduction (and full disclosure): my name is Pedro Henrique, I'm a software engineer from Brazil, and I've been a contributing member of the Ent/Atlas community for quite a while. I really love open-source and think there's room for a diverse range of tools in our ecosystem, so I will do my best to provide you with an accurate, respectful, and fair comparison of the tools.

golang-migrate - Created: 2014 GitHub Stars: 10.3k
Golang migrate is one of the best-known tools for handling database migrations. It supports many database drivers and migration sources, and it takes a simple, direct approach to running migrations.

Goose - Created: 2012 GitHub Stars: 3.2k
Goose is a solid option when choosing a migration tool. It supports the main database drivers, and its standout features are support for migrations written in Go and finer control over how migrations are applied.

Atlas - Created: 2021 GitHub Stars: 2.1k
Atlas is an open-source schema migration tool that supports a declarative workflow for schema migrations, making it a kind of "Terraform for databases". With Atlas, users can declare their desired schema and let Atlas automatically plan the migrations for them. In addition, Atlas supports classic versioned migration workflows, migration linting, and has a GitHub Actions integration.

Golang migrate

Golang migrate was initially created by Matt Kadenbach. In 2018 the project was handed over to Dale Hui, and today it resides in the golang-migrate organization and is actively maintained, with 202 contributors.

One of Golang migrate's main strengths is the support for various database drivers. If your project uses a database driver that is not very popular, chances are that Golang migrate has a driver for it. For cases where your database is not supported, Golang migrate has a simple API for defining new database drivers. Databases supported by Golang migrate include: PostgreSQL, Redshift, Ql, Cassandra, SQLite, MySQL/MariaDB, Neo4j, MongoDB, Google Cloud Spanner, and more.

Another feature of Golang migrate is its support for different migration sources, for cases where your migration scripts reside in custom locations or even on remote servers.
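
For a concrete feel of the workflow, here is a minimal sketch of driving golang-migrate from the command line; the MySQL connection string, database name, and directory layout are placeholder assumptions rather than examples taken from a real project:

# Create a new pair of up/down migration files in ./migrations.
migrate create -ext sql -dir migrations -seq create_users_table

# Apply all pending migrations against a local MySQL database.
migrate -source file://migrations \
  -database "mysql://root:pass@tcp(localhost:3306)/app" up

# Roll back the most recently applied migration.
migrate -source file://migrations \
  -database "mysql://root:pass@tcp(localhost:3306)/app" down 1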

Goose

Goose takes a similar approach to Golang migrate. The project was initially created by Liam Staskawicz in 2012, and in 2016 Pressly created a fork that improved usability by adding support for migrations written in Go, handling out-of-order migrations, and custom schemas for migration versioning. Today Goose has 80 contributors.

Goose only provides support for 7 database drivers, so if your project uses one of the main databases on the market, Goose should be a good fit. For migration sources, Goose only allows the filesystem, though it's worth pointing out that with Go embed it is possible to embed the migration files in a custom binary. Goose's main difference from Golang migrate is its support for migrations written in Go, for cases where it is necessary to query the database during the migration. Goose also allows for different migration versioning schemes, improving on one key issue with Golang migrate.
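
For comparison, a minimal sketch of the Goose CLI; the driver, DSN, and directory below are placeholder assumptions:

# Apply all pending migrations from ./migrations to a local MySQL database.
goose -dir ./migrations mysql "root:pass@/app?parseTime=true" up

# Print the status of every migration file Goose knows about.
goose -dir ./migrations mysql "root:pass@/app?parseTime=true" status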

Atlas

Atlas takes a completely different approach from Golang migrate and Goose. While both of those tools focus on providing the means to run and maintain the migration directory, Atlas goes one step further and actually constructs a graph of the different database entities from the migration directory contents, allowing for more complex scenarios and providing safety for migration operations.

Migrations in Atlas can be defined in two ways:

  • Versioned migrations are the classical style, where the migration contents are written by the developer using the database language.
  • Declarative migrations are more similar to Infrastructure-as-Code, where the schema is defined in a Terraform-like language and the migration commands are calculated based on the current and desired state of the database. It is also possible to use Atlas in a hybrid way that combines both styles, called Versioned Migration Authoring, where the schema is defined in the Atlas language but the Atlas engine is used to generate versioned migrations. Both workflows are sketched in the example following this list.
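
As a minimal sketch of what the two workflows look like from the CLI (the connection URL, schema file, dev database, and migration name are placeholders, not taken from a real project):

# Declarative: diff the live database against the desired schema
# and apply the changes directly.
atlas schema apply \
  --url "mysql://root:pass@localhost:3306/app" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/dev"

# Versioned Migration Authoring: generate a new versioned migration
# file from the same desired schema instead of applying it directly.
atlas migrate diff add_users \
  --dir "file://migrations" \
  --to "file://schema.hcl" \
  --dev-url "docker://mysql/8/dev"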

On top of Atlas's ability to load the migration directory as a graph of database entities, an entire infrastructure of static code analysis was built to provide warnings about dangerous or inefficient operations. This technique is called migration linting and can be integrated with the Atlas GitHub Action during CI.
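
For instance, a hedged sketch of running the linter locally over the most recent migration file (the dev database URL is an assumption):

# Analyze the latest migration file against a temporary dev database.
atlas migrate lint \
  --dir "file://migrations" \
  --dev-url "docker://mysql/8/dev" \
  --latest 1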

In addition, if you would like to run your migrations using Terraform, Atlas has a Terraform provider as well.

Another key problem that Atlas solves is migration integrity, which becomes a real issue when multiple branches all make schema changes. Atlas solves it with a directory integrity file (atlas.sum). While we are on the topic of integrity, another key feature of Atlas is support for running migrations inside a transaction, and, unlike Goose, Atlas acquires a lock during the migration process, ensuring that only one migration happens at a time and that migration order and integrity are respected. For cases where problems are found, Atlas makes troubleshooting easier by allowing schema inspection and dry runs and by providing helpful links to common problems and solutions.
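
A quick way to see the integrity mechanism in action is to validate the migration directory against its atlas.sum file; this sketch assumes a local migrations directory:

# Fails if the migration files do not match the checksums in atlas.sum,
# for example after a bad merge between branches.
atlas migrate validate --dir "file://migrations"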

Feature comparison

| Feature | Golang migrate | Goose | Atlas |
|---|---|---|---|
| Drivers supported | Main SQL and NoSQL databases | Main SQL databases | Main SQL databases |
| Migration sources | Local and remote SQL files | SQL and Go files | HCL and SQL files |
| Migrations type | Versioned | Versioned | Versioned and Declarative |
| Support for migrations in Go | No | Yes | Yes |
| Integrity checks | No | No | Yes |
| Migration out of order | No | Possible with hybrid versioning | Possible by calculating the directory hash |
| Lock support | Yes | No | Yes |
| Use as CLI | Yes | Yes | Yes |
| Use as package | Yes | Yes | Partial support ¹ |
| Versioned Migration Authoring | No | No | Yes |
| Migration linting | No | No | Yes |
| GitHub Action | No | No | Yes |
| Terraform provider | No | No | Yes |
  • ¹ Atlas provides a few packages related to database operations, but their use is limited to complex cases, and there is no package that provides migration execution out of the box.

Wrapping up

In this post we saw the different strengths of each migration tool: how Golang migrate offers a great variety of database drivers and migration sources, how Goose lets you write migrations in Go for complex migration scenarios, and how Atlas approaches migrations as an altogether different problem, improving the safety of migration operations and bringing in concepts from other fields.

· 8 min read
Ariel Mashraki

Wikipedia defines Multi-tenancy as:

a software architecture in which a single instance of software runs on a server and serves multiple tenants.

In recent years, multitenancy has become a common topic in our industry as many organizations provide service to multiple customers using the same infrastructure. Multitenancy usually becomes an issue in software architecture because tenants often expect a decent level of isolation from one another.

In this post, I will go over different known approaches for achieving multi-tenancy and discuss the approach we took to build Ariga's cloud platform. In addition, I will demonstrate how we added built-in support for multi-tenant environments in Atlas to overcome some of the challenges we faced.

Introduction

Throughout the last few years, I have had the opportunity to implement multi-tenancy in various ways. Some of them might be familiar to you:

  1. A separate environment (deployment) per tenant, where isolation is achieved at both compute and data layers.
  2. A schema (named database) per tenant, where there is one environment for compute (e.g., a K8S cluster), but tenants are stored in different databases or schemas. Isolation is achieved at the data layer while compute resources are shared.
  3. One environment for all tenants, including the data layer. Typically, in this case, each table holds a tenant_id column that is used to filter statements by the tenant. Both data and compute layers are shared, with isolation achieved at the logical, database query level.

Each approach has pros and cons, but I want to briefly list the main reasons we chose to build our cloud platform based on the second option: schema per tenant.

  1. Management: Easily delete or back up tenants and allow them to export their data, without affecting others.
  2. Isolation: Limit credentials, connection pooling, and quotas per tenant. This way, one tenant cannot cause the database to choke and interrupt other tenants in case they share the same physical database.
  3. Security and data privacy: In case it is required, some tenants can be physically separated from others. For example, data can be stored in the tenant's AWS account, and the application can connect to it using a secure connection, like VPC peering in AWS.
  4. Code-maintenance: Most of the application code is written in a way that is unaware of multi-tenancy. In our case, there is one layer "at the top" that attaches the tenant connection to the context, and the API layer (e.g., GraphQL resolver) extracts the connection from the context to read/write data. As a result, we are not concerned that API changes will cross tenant boundaries.
  5. Migration: Schema changes can be executed first on "test tenants" and fail-fast in case of error.

The primary drawback of this approach was that there was no elegant way to execute migrations on multiple databases (N times) in Atlas. In the rest of the post, I'll cover how we solved this problem at Ariga and added built-in support for multi-tenancy in Atlas.

Atlas config file

Atlas provides a convenient way to describe and interact with multiple environments using project files. A project file is a file named atlas.hcl and contains one or more env blocks. For example:

atlas.hcl
env "local" {
url = "mysql://root:pass@:3306/"
migrations {
dir = "file://migrations"
}
}

env "prod" {
// ... a different env
}

Once defined, a project's environment can be worked against using the --env flag. For example:

atlas schema apply --env local

The command above runs the schema apply against the database that is defined in the local environment.

Multi-Tenant environments

The Atlas configuration language provides a few capabilities adopted from Terraform to facilitate the definition of multi-tenant environments. The first is the for_each meta-argument that allows defining a single env block that is expanded to N instances, one for each tenant. For example:

atlas.hcl
variable "url" {
type = string
default = "mysql://root:pass@:3306/"
}

variable "tenants" {
type = list(string)
}

env "local" {
for_each = toset(var.tenants)
url = urlsetpath(var.url, each.value)
migration {
dir = "file://migrations"
}
}

The above configuration expects a list of tenants to be provided as a variable. This can be useful when the list of tenants is dynamic and can be injected into the Atlas command. The urlsetpath function is a helper function that sets the path of the database URL to the tenant name. For example, if url is set to mysql://root:pass@:3306/?param=value and the tenant name is tenant1, the resulting URL will be mysql://root:pass@:3306/tenant1?param=value.
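
For example, the tenant list might be injected at invocation time. The sketch below assumes that a list variable can be provided by repeating the --var flag (check the project file documentation for the exact syntax on your version); the tenant names are placeholders:

# Run the migrations for two explicitly named tenants.
# NOTE: repeating --var to build a list is an assumption in this sketch.
atlas migrate apply --env local \
  --var tenants=tenant_a8m \
  --var tenants=tenant_rotemtam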

The second capability is Data Sources. This option enables users to retrieve information stored in an external service or database. For the sake of this example, let's extend the configuration above to use the SQL data source to retrieve the list of tenants from the INFORMATION_SCHEMA in MySQL:

atlas.hcl
// The URL of the database we operate on.
variable "url" {
type = string
default = "mysql://root:pass@:3306/"
}

// Schemas that match this pattern will be considered tenants.
variable "pattern" {
type = string
default = "tenant_%"
}

data "sql" "tenants" {
url = var.url
query = <<EOS
SELECT `schema_name`
FROM `information_schema`.`schemata`
WHERE `schema_name` LIKE ?
EOS
args = [var.pattern]
}

env "local" {
for_each = toset(data.sql.tenants.values)
url = urlsetpath(var.url, each.value)
}

Example

Let's demonstrate how managing migrations in a multi-tenant architecture is made simple with Atlas.

1. Install Atlas

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

2. Create a migration directory with the following example content:

-- create "users" table
CREATE TABLE `users` (`id` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

3. Create two example tenants on a local database:

create database tenant_a8m;
create database tenant_rotemtam;

4. Run Atlas to execute the migration scripts on the tenants' databases:

atlas migrate apply --env local
tenant_a8m
Migrating to version 20220811074314 (2 migrations in total):

-- migrating version 20220811074144
-> CREATE TABLE `users` (`id` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- ok (36.803179ms)

-- migrating version 20220811074314
-> ALTER TABLE `users` ADD COLUMN `name` varchar(255) NOT NULL;
-- ok (26.184177ms)

-------------------------
-- 72.899146ms
-- 2 migrations
-- 2 sql statements
tenant_rotemtam
Migrating to version 20220811074314 (2 migrations in total):

-- migrating version 20220811074144
-> CREATE TABLE `users` (`id` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- ok (61.987153ms)

-- migrating version 20220811074314
-> ALTER TABLE `users` ADD COLUMN `name` varchar(255) NOT NULL;
-- ok (24.656515ms)

-------------------------
-- 95.233384ms
-- 2 migrations
-- 2 sql statements

Running the command again will not execute any migrations:

No migration files to execute
No migration files to execute

Migration logging

At Ariga, our services print structured logs (JSON) to feed our observability tools. That is why we felt obligated to add support for custom log formatting in Atlas. To continue the example from above, we present how we configure Atlas to emit JSON lines with the tenant name attached to them.

1. Add the log configuration to the local environment block:

atlas.hcl
env "local" {
for_each = toset(data.sql.tenants.values)
url = urlsetpath(var.url, each.value)
// Emit JSON logs to stdout and add the
// tenant name to each log line.
format {
migrate {
apply = format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
}

2. Create a new script file in the migration directory:

-- create "users" table
CREATE TABLE `users` (`id` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

3. Run migrate apply in our "local" environment:

atlas migrate apply --env local
{"Applied":[{"Applied":["CREATE TABLE `pets` (`id` bigint, PRIMARY KEY (`id`));"],"Description":"create_pets","End":"2022-10-27T16:03:03.685899+03:00","Name":"20221027125605_create_pets.sql","Start":"2022-10-27T16:03:03.655879+03:00","Version":"20221027125605"}],"Current":"20220811074314","Dir":"migrations","Driver":"mysql","End":"2022-10-27T16:03:03.685899+03:00","Pending":[{"Description":"create_pets","Name":"20221027125605_create_pets.sql","Version":"20221027125605"}],"Start":"2022-10-27T16:03:03.647091+03:00","Target":"20221027125605","Tenant":"tenant_a8m","URL":{"ForceQuery":false,"Fragment":"","Host":":3308","OmitHost":false,"Opaque":"","Path":"/tenant_a8m","RawFragment":"","RawPath":"","RawQuery":"parseTime=true","Schema":"tenant_a8m","Scheme":"mysql","User":{}}}
{"Applied":[{"Applied":["CREATE TABLE `pets` (`id` bigint, PRIMARY KEY (`id`));"],"Description":"create_pets","End":"2022-10-27T16:03:03.787476+03:00","Name":"20221027125605_create_pets.sql","Start":"2022-10-27T16:03:03.757463+03:00","Version":"20221027125605"}],"Current":"20220811074314","Dir":"migrations","Driver":"mysql","End":"2022-10-27T16:03:03.787476+03:00","Pending":[{"Description":"create_pets","Name":"20221027125605_create_pets.sql","Version":"20221027125605"}],"Start":"2022-10-27T16:03:03.748399+03:00","Target":"20221027125605","Tenant":"tenant_rotemtam","URL":{"ForceQuery":false,"Fragment":"","Host":":3308","OmitHost":false,"Opaque":"","Path":"/tenant_rotemtam","RawFragment":"","RawPath":"","RawQuery":"parseTime=true","Schema":"tenant_rotemtam","Scheme":"mysql","User":{}}}

Next steps

Currently, Atlas uses a fail-fast policy, which means the process exits on the first tenant that returns an error. We built it this way because we find it helpful to execute migrations first on "test tenants" and stop in case the operation fails on any of them. However, this means the execution is serial and may be slow when there is a large number of tenants. Therefore, we aim to add more advanced approaches that will allow executing the first M tenants serially and the remaining N-M tenants in parallel.

Have questions? Feedback? Feel free to reach out on our Discord server.

· 8 min read
Jannik Clausen

With the release of v0.6.0, we introduced a workflow for managing changes to database schemas that we have called: Versioned Migration Authoring.

Today, we released the first version of the Atlas migration execution engine, which can apply migration files to your database. In this post, we will give a brief overview of its features and what to expect in the future.

Migration File Format

The Atlas migration filename format follows a very simple structure: version_[name].sql, with the name being optional. version can be an arbitrary string. Migration files are lexicographically sorted by filename.

↪ tree .
.
├── 1_initial.sql
├── 2_second.sql
├── 3_third.sql
└── atlas.sum

0 directories, 4 files

If you want to follow along, you can simply copy and paste the above files in a folder on your system. Make sure you have a database ready to work on. You can start an ephemeral docker container with the following command:

# Run a local mysql container listening on port 3306.
docker run --rm --name atlas-apply --detach --env MYSQL_ROOT_PASSWORD=pass -p 3306:3306 mysql:8

Apply Migrations

In order to apply migrations you need to have the Atlas CLI in version v0.7.0 or above. Follow the installation instructions if you don't have Atlas installed yet.
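
For reference, one way to install the CLI on Linux or macOS and verify the installed version:

curl -sSf https://atlasgo.sh | sh
atlas version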

Now, to apply the first migration of our migration directory, we call atlas migrate apply and pass in some configuration parameters.

atlas migrate apply 1 \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 1 (1 migrations in total):

-- migrating version 1
-> CREATE DATABASE `my_schema`;
-> CREATE TABLE `my_schema`.`tbl` (`col` int NOT NULL);
-- ok (17.247319ms)

-------------------------
-- 18.784204ms
-- 1 migrations
-- 2 sql statements

Migration Status

Atlas saves information about the database schema revisions (applied migration versions) in a special table called atlas_schema_revisions. In the example above we connected to the database without specifying which schema to operate against. For this reason, Atlas created the revision table in a new schema called atlas_schema_revisions. For a schema-bound connection Atlas will put the table into the connected schema. We will see that in a bit.
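
If you want to see this for yourself, you can peek at the revision table directly. This sketch assumes the ephemeral MySQL container started earlier in this post:

# List the recorded revisions. With a realm-bound connection, both the
# schema and the table are named atlas_schema_revisions.
docker exec atlas-apply mysql -ppass -e 'SELECT * FROM `atlas_schema_revisions`.`atlas_schema_revisions`;'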

Go ahead and call atlas migrate status to gather information about the database migration state:

atlas migrate status \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migration Status: PENDING
-- Current Version: 1
-- Next Version: 2
-- Executed Files: 1
-- Pending Files: 2

This output tells us that the last applied version is 1, the next one is called 2 and that we still have two migrations pending. Let's apply the pending migrations:

Note that we do not pass an argument to apply; in this case, Atlas will attempt to apply all pending migrations.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 3 from 1 (2 migrations in total):

-- migrating version 2
-> ALTER TABLE `my_schema`.`tbl` ADD `col_2` TEXT;
-- ok (13.98847ms)

-- migrating version 3
-> CREATE TABLE `tbl_2` (`col` int NOT NULL);
Error 1046: No database selected

-------------------------
-- 15.604338ms
-- 1 migrations ok (1 with errors)
-- 1 sql statements ok (1 with errors)

Error: Execution had errors: Error 1046: No database selected

Error: sql/migrate: executing statement "CREATE TABLE `tbl_2` (`col` int NOT NULL);" from version "3": Error 1046: No database selected

What happened here? After further investigation, you will find that our connection URL is bound to the entire database, not to a schema. The third migration file however does not contain a schema qualifier for the CREATE TABLE statement.

By default, Atlas wraps the execution of each migration file into one transaction. This transaction gets rolled back if any error occurs within execution. Be aware, though, that some databases, such as MySQL and MariaDB, don't support transactional DDL. If you want to learn how to configure the way Atlas uses transactions, have a look at the docs.
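
As a hedged sketch, the transaction behavior can be tuned from the command line; the --tx-mode value below is just one of the supported modes, so consult the docs for the full list:

# Apply migrations without wrapping each file in a transaction.
atlas migrate apply \
  --dir "file://migrations" \
  --url "mysql://root:pass@localhost:3306/" \
  --tx-mode none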

Migration Retry

To resolve this, edit the migration file and add a qualifier to the statement:

CREATE TABLE `my_schema`.`tbl_2` (`col` int NOT NULL);

Since we changed the contents of a migration file, we have to re-calculate the directory integrity hash-sum by calling:

atlas migrate hash --force \
--dir "file://migrations"

Then we can proceed and simply attempt to execute the migration file again.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 3 from 2 (1 migrations in total):

-- migrating version 3
-> CREATE TABLE `my_schema`.`tbl_2` (`col` int NOT NULL);
-- ok (15.168892ms)

-------------------------
-- 16.741173ms
-- 1 migrations
-- 1 sql statements

Attempting to migrate again or calling atlas migrate status will tell us that all migrations have been applied onto the database and there is nothing to do at the moment.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
No migration files to execute

Moving an existing project to Atlas with Baseline Migrations

Another common scenario is when you need to move an existing project to Atlas. To do so, create an initial migration file reflecting the current state of a database schema by using atlas migrate diff. A very simple way to do so would be by heading over to the database from before, deleting the atlas_schema_revisions schema, emptying your migration directory and running the atlas migrate diff command.

rm -rf migrations
docker exec atlas-apply mysql -ppass -e 'CREATE SCHEMA `my_schema_dev`;' # create a dev-db
docker exec atlas-apply mysql -ppass -e 'DROP SCHEMA `atlas_schema_revisions`;'
atlas migrate diff \
--dir "file://migrations" \
--to "mysql://root:pass@localhost:3306/my_schema" \
--dev-url "mysql://root:pass@localhost:3306/my_schema_dev"

To demonstrate that Atlas can also work on a schema level instead of a realm connection, we are running on a connection bound to the my_schema schema this time.

You should end up with the following migration directory:

-- create "tbl" table
CREATE TABLE `tbl` (`col` int NOT NULL, `col_2` text NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- create "tbl_2" table
CREATE TABLE `tbl_2` (`col` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Now, let's create a new migration file to create a table tbl_3 and update the directory integrity file.

atlas migrate new add_table --dir "file://migrations"
echo "CREATE TABLE `tbl_3` (`col` text NULL);" >> migrations/$(ls -t migrations | head -n1)
atlas migrate hash --force --dir "file://migrations"

Since we now have both a migration file representing our current database state and the new migration file to apply, we can make use of the --baseline flag:

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/my_schema" \
--baseline "20220908110527" # replace the version with the one generated by you
Migrating to version 20220908110847 from 20220908110527 (1 migrations in total):

-- migrating version 20220908110847
-> CREATE TABLE `tbl_3` (`col` text NULL);
-- ok (14.325493ms)

-------------------------
-- 15.786455ms
-- 1 migrations
-- 1 sql statements

Outlook

The Atlas migration engine powers Ent, and the execution engine has already been used within Ariga for several months. We will continue working on improving it and releasing new features, such as assisted troubleshooting for failed migrations, more intelligent, dialect-aware execution planning for things like MySQL's implicit commits, and more.

Wrapping up

In this post we learned about the new migration execution engine of Atlas and some information about its internals.

Further reading

To learn more about Versioned Migration Authoring:

Have questions? Feedback? Find our team on our Discord server.