
9 posts tagged with "announcement"


· 11 min read
Rotem Tamir

Hi everyone!

It's been a few weeks since our last version announcement and today I'm happy to share with you
v0.15, which includes some very exciting improvements for Atlas:

  • Interactive Declarative Migrations - Atlas supports a Terraform-like workflow for managing your database schema using the schema apply command. In this release we have added a new "Lint and Edit" mode to this command, which will analyze your schema changes for issues and will allow you to edit them interactively before applying them to your database.
  • Functions and Stored Procedures - Atlas now supports creating and managing functions and stored procedures in your database schema.
  • Postgres Domains - In addition, Atlas now supports Postgres Domains. A domain is essentially a data type with optional constraints (restrictions on the allowed set of values).
  • TypeORM Integration - TypeORM is a popular ORM for Node.js. In this release, we are happy to announce the TypeORM integration, which allows you to automatically generate your database schema from your TypeORM entities, create visualizations and more.

Let's dive right in!

Interactive Declarative Migrations

Atlas supports a Terraform-like workflow for managing your database schema using the schema apply command. This workflow, which we call "Declarative Migrations", is a modern alternative to the traditional "versioned migrations" workflow. In declarative migrations, you define your desired schema in one of the formats supported by Atlas and supply a connection string to your database. Atlas compares the current and desired schema of your database and generates a plan to migrate your database to the desired state.

Similar to Terraform, until today, Atlas would prompt you to confirm the migration plan before applying it to your database. This is a great way to ensure that you don't accidentally apply a migration that you didn't intend to. However, this flow suffers from a few drawbacks:

  1. Ensuring Safety - you can count on Atlas to generate a correct migration plan to your desired state, but it's still possible that this migration will have unintended side effects. For example, adding a UNIQUE constraint to a column might fail if there are duplicate values in the column.
  2. Editing - users often want to make changes to their migration plan before applying it. In the current flow, this requires running schema apply with the --dry-run flag, saving the output to a file, editing it, and then manually applying the edited migration plan to the database, as sketched right after this list.
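For illustration, here is a minimal sketch of that manual flow against a SQLite database (the file names are illustrative; only the --dry-run flag and the apply command come from the flow described above):

# 1. Generate the migration plan without applying it.
atlas schema apply -u sqlite://sqlite.db --dev-url "sqlite://?mode=memory" -f schema.hcl --dry-run > plan.sql

# 2. Review and hand-edit plan.sql in your editor of choice.

# 3. Apply the edited statements yourself, e.g. by piping them into the database.
sqlite3 sqlite.db < plan.sql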

Enter: Interactive Declarative Migrations

In this release, we are introducing a new "Lint and Edit" mode to the schema apply command. This mode is available to logged-in users only, as it uses Atlas Cloud to provide a neat UI and rich analysis capabilities. Let's see it in action.

Start by downloading the latest version of Atlas:

curl -sSf https://atlasgo.sh | sh

For installation instructions on other platforms, see the installation guide.

After installing Atlas, make sure to log in using the atlas login command:

atlas login

Next, create a new file named schema.hcl that will contain your desired schema:

schema.hcl
schema "main" {
}

table "hello" {
schema = schema.main
column "name" {
type = varchar(100)
default = "Anonymous"
}
}

Now, let's apply this schema to a local SQLite database named "sqlite.db":

atlas schema apply -u sqlite://sqlite.db --dev-url sqlite://?mode=memory -f schema.hcl

Atlas will calculate the diff between the current (empty) state of the database and our desired state and prompt us to confirm the migration plan:

-- Planned Changes:
-- Create "hello" table
CREATE TABLE `hello` (`name` varchar NOT NULL DEFAULT 'Anonymous');
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
Apply
▸ Lint and edit # <-- Brand new!
Abort

Notice the new "Lint and edit" option. Select it and press Enter. Atlas will now analyze the migration plan and open your browser to the new, interactive migration plan screen. The screen contains three important sections:

  • Migration Plan - the migration plan generated by Atlas. You can click the "Edit" button to make changes to it.
  • Checks - a summary of the checks that Atlas ran against the generated plan. In this case, our plan is completely safe, so all checks passed.
  • ERD - A visual representation of the change we are planning.

Once we are content with the migration plan, let's go ahead and click the "Approve and Apply" button. Atlas will apply the migration plan to the database and scroll down to the execution logs section:

Let's edit our desired state a bit to delete the hello table and add a new users table:

schema.hcl
schema "main" {
}
-table "hello" {
- schema = schema.main
- column "name" {
- type = varchar(100)
- default = "Anonymous"
- }
-}
+table "users" {
+ schema = schema.main
+ column "id" {
+ type = int
+ }
+ column "email" {
+ type = text
+ }
+ primary_key {
+ columns = [column.id]
+ }
+ index "unique_email" {
+ columns = [
+ column.email
+ ]
+ unique = true
+ }
+}

Once again, let's run atlas schema apply to apply the changes to the database and select the "Lint and Edit" option.

This time, Atlas will warn us that the migration plan is not safe:

In this case, we decide to abort the migration so that we don't lose the precious data in the hello table. Good thing we have automatic migration linting on our side!

Functions and Stored Procedures

info

Functions and stored procedures are currently in beta and available to logged-in users only. To use this feature run:

atlas login

Over the past few months, we have received numerous requests to support management of functions and stored procedures in popular databases such as PostgreSQL and MySQL. Functions and stored procedures are a way to encapsulate reusable logic in your database and are often used to improve performance by reducing the number of round-trips to the database.

Atlas now supports creating and managing functions and stored procedures in your database schema. Let's see how we can use this feature to create a simple function. In our example, we will implement the leet_speak function for PostgreSQL, which transforms a regular string into its Leet equivalent!

We can define the desired state of our database in either HCL or SQL:

schema.hcl
function "leet_speak" {
schema = schema.public
lang = PLpgSQL
arg "input_text" {
type = character_varying
}
return = character_varying
as = <<-SQL
DECLARE
output_text VARCHAR := '';
i INT := 1;
BEGIN
WHILE i <= LENGTH(input_text) LOOP
output_text := output_text ||
CASE SUBSTRING(input_text, i, 1)
WHEN 'a' THEN '4'
WHEN 'e' THEN '3'
WHEN 'i' THEN '1'
WHEN 'o' THEN '0'
WHEN 's' THEN '5'
WHEN 't' THEN '7'
ELSE SUBSTRING(input_text, i, 1)
END;
i := i + 1;
END LOOP;

RETURN output_text;
END;
SQL
}
schema "public" {
comment = "standard public schema"
}
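If you prefer plain SQL over HCL, roughly the same desired state can be kept in a schema.sql file and passed to Atlas with --to file://schema.sql. Here is a sketch of such a file, written with a shell heredoc; the function body mirrors the HCL above:

cat > schema.sql <<'EOF'
-- Desired state: the leet_speak function in the public schema.
CREATE FUNCTION leet_speak(input_text character varying) RETURNS character varying
LANGUAGE plpgsql AS $$
DECLARE
    output_text VARCHAR := '';
    i INT := 1;
BEGIN
    WHILE i <= LENGTH(input_text) LOOP
        output_text := output_text ||
            CASE SUBSTRING(input_text, i, 1)
                WHEN 'a' THEN '4' WHEN 'e' THEN '3' WHEN 'i' THEN '1'
                WHEN 'o' THEN '0' WHEN 's' THEN '5' WHEN 't' THEN '7'
                ELSE SUBSTRING(input_text, i, 1)
            END;
        i := i + 1;
    END LOOP;
    RETURN output_text;
END;
$$;
EOF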

For the purpose of this demo, we will run a local PostgreSQL Docker container:

docker run --name db -e POSTGRES_PASSWORD=pass -d -p 5432:5432 postgres:16

Now, let's apply our schema to the database:

atlas schema apply -u 'postgres://postgres:pass@localhost:5432/postgres?sslmode=disable&search_path=public' --to file://schema.hcl

Atlas will calculate the diff between the current (empty) state of the database and our desired state and prompt us to confirm the migration plan:

-- Planned Changes:
-- Create "leet_speak" function
CREATE FUNCTION "leet_speak" ("input_text" character varying) RETURNS character varying LANGUAGE PLpgSQL AS $$
DECLARE
    output_text VARCHAR := '';
    i INT := 1;
BEGIN
    WHILE i <= LENGTH(input_text) LOOP
        output_text := output_text ||
            CASE SUBSTRING(input_text, i, 1)
                WHEN 'a' THEN '4'
                WHEN 'e' THEN '3'
                WHEN 'i' THEN '1'
                WHEN 'o' THEN '0'
                WHEN 's' THEN '5'
                WHEN 't' THEN '7'
                ELSE SUBSTRING(input_text, i, 1)
            END;
        i := i + 1;
    END LOOP;

    RETURN output_text;
END;
$$;
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

Let's go ahead and select the "Apply" option. Atlas will apply the migration plan to the database and print the following output:

✔ Apply

We can now verify that the function was created successfully by running:

docker exec -it db psql -U postgres -c "SELECT leet_speak('hello leet world')"

And the result indeed is:

    leet_speak
------------------
h3ll0 l337 w0rld
(1 row)

To learn more about functions and stored procedures in Atlas, check out the documentation.

Postgres Domains

info

Support for domains is currently in beta and available to logged-in users only. To use this feature run:

atlas login

Another highly requested feature was support for Postgres Domains. A domain is essentially a data type with optional constraints (restrictions on the allowed set of values). For example, you might want to define an email_address domain which would be a varchar column with a CHECK constraint to ensure that the value is a valid email address.

Starting with v0.15, Atlas can now manage domains in your database schema, as well as use them as types for table columns. Let's see an example schema that uses domains:

domain "us_postal_code" {
schema = schema.public
type = text
null = true
check "us_postal_code_check" {
expr = "((VALUE ~ '^\\d{5}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text))"
}
}

domain "username" {
schema = schema.public
type = text
null = false
default = "anonymous"
check "username_length" {
expr = "(length(VALUE) > 3)"
}
}

table "users" {
schema = schema.public
column "name" {
type = domain.username
}
column "zip" {
type = domain.us_postal_code
}
}

schema "public" {
comment = "standard public schema"
}

The above schema defines two domains: us_postal_code and username. The us_postal_code domain is a text column with a CHECK constraint to ensure that the value is a valid US postal code. The username domain is a text column with a CHECK constraint to ensure that the value is at least 4 characters long. We then define a users table that uses these domains for its columns.

Let's see what happens when we apply this schema to a local Postgres database:

atlas schema apply -u 'postgres://postgres:pass@localhost:5432/postgres?sslmode=disable' -f schema.hcl

Atlas calculates the diff between the current (empty) state of the database and our desired state and prompts us to confirm the migration plan:

-- Planned Changes:
-- Create domain type "us_postal_code"
CREATE DOMAIN "public"."us_postal_code" AS text CONSTRAINT "us_postal_code_check" CHECK ((VALUE ~ '^\d{5}$'::text) OR (VALUE ~ '^\d{5}-\d{4}$'::text));
-- Create domain type "username"
CREATE DOMAIN "public"."username" AS text DEFAULT 'anonymous' NOT NULL CONSTRAINT "username_length" CHECK (length(VALUE) > 3);
-- Create "users" table
CREATE TABLE "public"."users" ("name" "public"."username" NOT NULL, "zip" "public"."us_postal_code" NOT NULL);
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After applying, let's re-run the schema apply command to make sure that the schema is up-to-date:

atlas schema apply -u 'postgres://postgres:pass@localhost:5432/postgres?sslmode=disable' --to file://schema.hcl

Indeed, Atlas reports that the schema is up-to-date:

Schema is synced, no changes to be made
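Now that the domains are in place, you can see their constraints in action by trying to insert a row that violates the username_length check; Postgres will reject it. A sketch, reusing the db container from the previous section:

# A 2-character name fails the "length(VALUE) > 3" check on the username domain.
docker exec -it db psql -U postgres -c \
  "INSERT INTO users (name, zip) VALUES ('ab', '12345');"
# ERROR:  value for domain username violates check constraint "username_length"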

Support for TypeORM

TypeORM is a popular ORM for Node.js. In this release, we are happy to announce the TypeORM integration, which allows you to automatically generate your database schema from your TypeORM entities, create visualizations, and more.

The TypeORM Atlas Provider is a Node.js module that can extract the desired schema of your database directly from your TypeORM entities. To use it, first install:

npm i @ariga/atlas-provider-typeorm

Next, add the TypeORM schema as a data source in your atlas.hcl file:

data "external_schema" "typeorm" {
program = [
"npx",
"@ariga/atlas-provider-typeorm",
"load",
"--path", "./path/to/entities",
"--dialect", "mysql", // mariadb | postgres | sqlite | mssql
]
}

env "typeorm" {
src = data.external_schema.typeorm.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}

Finally, run atlas schema apply to apply the schema to your database:

atlas schema apply -u mysql://<db credentials> --env typeorm
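Because the typeorm env above also configures a migration directory, the same setup works for versioned migrations. For example, a sketch of generating a new migration file from your entities (the migration name is illustrative):

atlas migrate diff add_users_table --env typeorm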

To learn more about the TypeORM integration, check out the documentation.

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 5 min read
Rotem Tamir

Hi everyone!

I'm very happy to share with you some of the recent improvements to Atlas, specifically around GitHub Actions. In August of last year, we released our first version of the GitHub Actions experience for Atlas. It was a modest start, which included the ability to verify the safety and correctness of schema migrations during the CI process.

Over the past year, we have slowly added more features to the GitHub Actions experience, including the ability to sync migration directories to Atlas Cloud, deploy migrations, and even install Atlas. As often happens with quickly evolving systems, we felt that the API became complex, carrying over use cases and experiences that have become obsolete or superseded by better ones since the initial release.

At Ariga, the team developing Atlas, we have written a document named the "R&D Manifesto", which lists some of the principles that we commit to as individuals and as an organization. One of them is "Obsess over APIs and DevEx" - we believe that the key to building a successful product is to provide the best possible experience to our users, and that starts with clear, consistent and composable APIs that empower our users to achieve amazing feats of engineering.

With that in mind, our team has been working hard in the past few weeks to revamp the GitHub Actions experience for Atlas. Here's a quick summary of the changes:

  1. We've moved all actions into a single repo - ariga/atlas-action. (With the exception of ariga/setup-atlas.)
  2. The API has been reviewed and updated to make sure it is consistent among the different actions and with the rest of the Atlas ecosystem.
  3. We've rewritten the code in Go, which is the language we use for all of our internal tools. This allows us to share code between the CLI and the GitHub Actions and to provide a more consistent experience between the two. It also greatly simplifies adding new GitHub Actions going forward.

Deprecation Notice

As part of this change, we are deprecating the previous generation of GitHub Actions and encourage you to migrate to the new ones as soon as possible. The old actions will continue to work for the time being, but they will no longer receive updates.

Introducing the New Actions

Without further ado, I'm happy to present the new generation of GitHub Actions for Atlas. The new actions follow the design principle of building actions as small, composable units that can be combined to achieve different outcomes. All of the actions rely on Atlas being installed on the GitHub Actions runner, which is done using the ariga/setup-atlas action. The rest of the actions essentially map to CLI commands, and can be used to build more complex workflows.

The actions are:

Action                             Use Case
ariga/setup-atlas                  Install Atlas from a GitHub Actions workflow
ariga/atlas-action/migrate/lint    CI for schema changes
ariga/atlas-action/migrate/push    Push your migration directory to Atlas Cloud (atlasgo.cloud)
ariga/atlas-action/migrate/apply   Deploy versioned migrations from GitHub Actions

Example Workflows

Consider the following GitHub Actions workflow, which can be used to implement a CI/CD pipeline for your database schema changes.

name: Atlas CI/CD
on:
  push:
    branches:
      - master # Use your main branch here.
  pull_request:
    paths:
      - 'migrations/*' # Use the path to your migration directory here.
# Permissions to write comments on the pull request.
permissions:
  contents: read
  pull-requests: write
jobs:
  atlas:
    services:
      # Spin up a mysql:8 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8
        env:
          MYSQL_DATABASE: dev
          MYSQL_ROOT_PASSWORD: pass
        ports:
          - 3306:3306
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/migrate/lint@v1
        with:
          dir: 'file://migrations'
          dir-name: 'my-project' # The name of the project in Atlas Cloud
          dev-url: "mysql://root:pass@localhost:3306/dev"
      - uses: ariga/atlas-action/migrate/push@v1
        if: github.ref == 'refs/heads/master'
        with:
          dir: 'file://migrations'
          dir-name: 'my-project'
          dev-url: 'mysql://root:pass@localhost:3306/dev' # Use the service name "mysql" as the hostname

This workflow uses 3 different actions to achieve the following:

  • ariga/setup-atlas - Installs Atlas on the GitHub Actions runner and logs in using the provided token.
  • ariga/atlas-action/migrate/lint - Lints the migration directory and verifies that it is safe to apply. This is run on every pull request that modifies the migration directory. If issues are found, the action will fail and a comment will be posted on the pull request with the details.
  • ariga/atlas-action/migrate/push - Pushes the migration directory to Atlas Cloud. This is run on every push to the main branch, so it can be used to deploy the migrations to production.

Tagging v1

With the release of the new actions, we are also tagging the v1 release of the actions to mark the maturity and stability of the API. We hope you will find the new actions useful, and we look forward to seeing what you build with them!

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

· 6 min read
Rotem Tamir

Hi everyone!

It's been a few weeks since our last version announcement and today I'm happy to share with you
v0.14, which includes some very exciting improvements for Atlas:

  • Checkpoints - as your migration directory grows, replaying it from scratch can become annoyingly slow. Checkpoints allow you to save the state of your database at a specific point in time and replay migrations from that point forward.
  • Push to the Cloud - you can now push your migration directory to Atlas Cloud directly from the CLI. Think of it like docker push for your database migrations.
  • JetBrains Editor Support - After launching our VSCode Extension a few months ago, our team has been hard at work to bring the same experience to JetBrains IDEs. Starting today, you can use Atlas directly from your favorite JetBrains IDEs (IntelliJ, PyCharm, GoLand, etc.) using the new Atlas plugin.

Let's dive right in!

Checkpoints

Suppose your project has been going on for a while, and you have a migration directory with 100 migrations. Whenever you need to install your application from scratch (such as during development or testing), you need to replay all migrations from start to finish to set up your database. Depending on your setup, this may take a few seconds or more. If you have a checkpoint, you can replay only the migrations that were added since the latest checkpoint, which can be much faster.

Here's a short example. Let's say we have a migration directory with 2 migration files, managing a SQLite database. The first one creates a table named t1:

migrations/20230830122359_start.sql
create table t1 ( c1 int );

And the second adds a table named t2 and adds a column named c2 to t1:

migrations/20230830122414_t2.sql
create table t2 ( c1 int, c2 int );

alter table t1 add column c2 int;

To create a checkpoint, we can run the following command:

atlas migrate checkpoint --dev-url "sqlite://file?mode=memory&_fk=1"

This will create a SQL file, which is our checkpoint:

20230830123813_checkpoint.sql
-- atlas:checkpoint

-- Create "t1" table
CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-- Create "t2" table
CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);

Notice two things:

  1. The atlas:checkpoint directive which indicates that this file is a checkpoint.
  2. The SQL statement that creates the t1 table includes both the c1 and c2 columns and does not contain the alter table statement. This is because the checkpoint captures the state of the database at the time it was created, which can be thought of as the sum of all migrations that were applied up to that point.

Next, let's apply these migrations on a local SQLite database:

atlas migrate apply --url sqlite://local.db

Atlas prints:

Migrating to version 20230830123813 (1 migrations in total):

-- migrating version 20230830123813
-> CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-> CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);
-- ok (960.465µs)

-------------------------
-- 6.895124ms
-- 1 migrations
-- 2 sql statements

As expected, Atlas skipped all of the migrations up to the checkpoint and only applied the last one!
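To confirm where the database now stands, you can check the migration status. A sketch against the same local database:

atlas migrate status --url sqlite://local.db --dir file://migrations

The output should report the checkpoint version (20230830123813 in this example) as the current version, with no pending files.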

Push to Cloud

As we demonstrated above, once we have a migration directory, we can apply it to a database. If your database is running locally this is easy enough, but building deployment pipelines to production databases is more involved. There are multiple ways to accomplish this, such as building custom Docker images that contain your migration directory, as shown in most of the methods covered in the guides section.

In this release, we simplified the process of pushing migration directories to Atlas Cloud by adding a new atlas migrate push command. You can think of it as docker push for your database migrations.

atlas migrate push

Migration Directory created with atlas migrate push

Continuing with our example from above, let's push our migration directory to Atlas Cloud.

To start, you'll need to log in to Atlas. If it's your first time, you'll be prompted to create both an account and a workspace.

atlas login

After logging in, let's name our new migration project pushdemo and run:

atlas migrate push pushdemo --dev-url "sqlite://file?mode=memory&_fk=1"

After our migration directory is pushed, Atlas prints a URL to the created directory, similar to the one shown in the image above.

Once your migration directory is pushed, you can use it to apply migrations to your database directly from the cloud, just as you would execute docker run to run a container image that is stored in a Docker container registry.

To apply a migration directory directly from the cloud, run:

atlas migrate apply --dir atlas://pushdemo --url sqlite://local.db

Notice two flags that we used here:

  • --dir - specifies the URL of the migration directory. We used atlas://pushdemo to indicate that we want to use the migration directory named pushdemo that we pushed earlier. This directory is accessible to us because we used atlas login in a previous step.
  • --url - specifies the URL of the database we want to apply the migrations to. In this case, we used the same SQLite database that we used earlier.
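The same cloud directory URL works with other migrate commands as well. For instance, a sketch of checking the migration status directly against the pushed directory:

atlas migrate status --dir atlas://pushdemo --url sqlite://local.db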

JetBrains Editor Support

JetBrains makes some of the most popular IDEs for software developers, including IntelliJ, PyCharm, GoLand, and more. We are happy to announce that following our recent release of the VSCode Extension, we now have a plugin for JetBrains IDEs as well!

The plugin is built to make editing Atlas HCL files much easier by providing developers with syntax highlighting, code completion, and warnings. It supports both atlas.hcl project configuration files as well as schema definition files (.my.hcl, .pg.hcl, and .lt.hcl).

The plugin is available for download from the JetBrains Marketplace.

  1. To install the plugin, open your IDE and go to Preferences > Plugins > Marketplace and search for Atlas:

  2. Click on the Install button to install the plugin.

  3. Create a new file named schema.my.hcl (the .my.hcl suffix signifies to the plugin that this file is a MySQL schema; you can use .pg.hcl for Postgres or .lt.hcl for SQLite). A minimal example file is sketched after this list.

  4. Edit away!
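For reference, here is a minimal file you might start with (a sketch; any valid MySQL schema definition will get syntax highlighting and completion):

cat > schema.my.hcl <<'EOF'
schema "app" {}

table "users" {
  schema = schema.app
  column "id" {
    type = int
  }
  primary_key {
    columns = [column.id]
  }
}
EOF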

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 12 min read
Rotem Tamir

It's been just over two months since our last version announcement and today I'm super excited to share with you our latest release, v0.13, which includes some long-anticipated additions to Atlas:

  • Atlas and Atlas Community Editions - we are making a change in the way Atlas is distributed. Starting today, we are making many of the features previously available only in the commercial version of Atlas available to everyone for free. More on this below.

  • Support for SQL Views - SQL Views are virtual tables in a database created by a query and stored in the database. Starting today, you can manage views in your database using the same Atlas workflows you use for tables, indexes, foreign keys, etc.

  • Microsoft SQL Server Driver Beta - Over the last few months, we have been working on a driver for MS-SQL with some of our enterprise design partners and are happy to make it available to you as a beta.

  • Built-in schema visualization - Over the past year we released some tools to help developers visualize their database schemas using Entity Relationship Diagrams (ERDs). Starting today, you can use the atlas schema inspect --visualize command to create database ERDs directly from the command line.

  • GitHub Action for Deployments - We recently received a request from the community to add a GitHub Action to streamline applying migrations directly from GitHub workflows. We thought this was a great idea and are happy to introduce ariga/atlas-deploy-action which you can use to deploy your migrations today.

  • Type checking for schema files - As part of our continuous effort to make the experience of working with Atlas as smooth as possible, we are excited to ship a new engine for running type checks on your schema files.

This release is extra-packed with new features and improvements, so let's dive in!

Atlas and Atlas Community Editions

In recent months, based on strategic discussions with our advisors, we have come to realize that in order to keep fueling Atlas's rapid growth and adoption, we should give free access to some features previously available only in the commercial version. For this reason, we are making a change in the way Atlas is distributed. Here is a summary of how this is going to work:

The Atlas codebase will continue to be actively developed in our open-source repository (ariga/atlas) under the Apache 2 License. This version of Atlas will continue to be actively maintained and supported by us in community channels (GitHub, Discord, etc). We will continue to build and ship versions of this purely open-source version under the name "Atlas Community". For more information about obtaining Atlas Community binaries, see the docs.

In addition, we are starting to distribute for free (under the Atlas EULA) the binaries for our commercial product which will include features previously unavailable to the public. We plan to release many of these features to the open-source repositories over time to keep supporting more open-source innovation in the database space. To keep things short and sweet, this distribution of the project will simply be called "Atlas".

Starting today, our existing distribution channels (the atlasgo.sh installation script, Docker image, brew tap, etc.) will point to this edition, and we are adding parallel installation options for users that prefer to use the community edition.

Trying out the new features

Before we dive into the new features, let's make sure you have the latest version of Atlas installed. To do so, run the following command:

curl -sSf https://atlasgo.sh | sh

For more installation options, see the docs.

In addition, two of the following features (the MS-SQL driver and schema inspect --visualize) require you to log in to Atlas Cloud from the CLI. To do so, run the following command:

atlas login

The CLI will ask your permission to open a browser window to complete the login process.

After completing the login process, you should see a message similar to the following:

You are now connected to "acme-corp" on Atlas Cloud.

Support for Views

"Views" are virtual tables created by a query and stored in the database. Many teams use them to simplify complex queries, encapsulate logic, and present a consistent interface to users, abstracting the underlying data structures. Starting today, you can start managing views in your database using the same Atlas workflows you use for tables, indexes, foreign keys, etc. Views are currently supported for MySQL, MariaDB, PostgreSQL, MS-SQL, and SQLite.

To demonstrate how this works, let's start by creating a new database. For this example, let's run a local MySQL database using Docker:

docker run --rm -d --name atlas-demo -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=default mysql:8

Next, create a new file named schema.hcl with the following content:

schema "default" {}

table "users" {
schema = schema.default
column "id" {
type = int
}
column "nickname" {
type = varchar(32)
}
column "phone_number" {
type = varchar(32)
}
}

view "users_clean" {
schema = schema.default
column "id" {
type = int
}
column "nickname" {
type = varchar(32)
}
as = <<-SQL
SELECT id, nickname
FROM ${table.users.name}
SQL
depends_on = [table.users]
}

In this example, we first defined a table named users that contains some PII (the user's phone number). Next, we defined a view named users_clean that exposes only the user's id and nickname. This view can be used by other applications to access the user's data without exposing any personally identifiable information.

Next, run the following command to apply the schema to the database:

atlas schema apply -u mysql://root:pass@localhost:3306/default -f schema.hcl

Atlas will connect to the database, inspect its current schema (which is empty) and produce the following output:

-- Planned Changes:
-- Create "users" table
CREATE TABLE `default`.`users` (`id` int NOT NULL, `nickname` varchar(32) NOT NULL, `phone_number` varchar(32) NOT NULL);
-- Create "users_clean" view
CREATE VIEW `default`.`users_clean` (`id`, `nickname`) AS SELECT id, nickname
FROM users;
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After reviewing the changes, press enter to apply them to the database. Atlas will connect to the database and apply the changes.

Next, run the following command to inspect the database schema:

atlas schema inspect -u mysql://root:pass@/default --format "{{ sql . \" \" }}"

Notice that we used the --format flag to instruct Atlas to output the schema as SQL statements.

Atlas will connect to the database and produce the following output:

-- Create "users" table
CREATE TABLE `users` (
  `id` int NOT NULL,
  `nickname` varchar(32) NOT NULL,
  `phone_number` varchar(32) NOT NULL
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "users_clean" view
CREATE VIEW `users_clean` (
  `id`,
  `nickname`
) AS select `users`.`id` AS `id`,`users`.`nickname` AS `nickname` from `users`;

As you can see, Atlas created the users table and the users_clean view in the database!
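To see the view in action, insert a row through the base table and read it back through users_clean; the PII column stays hidden. A sketch, reusing the atlas-demo container:

docker exec -it atlas-demo mysql -uroot -ppass default -e \
  "INSERT INTO users VALUES (1, 'ariel', '555-0100'); SELECT * FROM users_clean;"
# Expected (roughly):
# +----+----------+
# | id | nickname |
# +----+----------+
# |  1 | ariel    |
# +----+----------+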

Microsoft SQL Server Driver

Microsoft SQL Server, one of the longest-standing database engines in our business, was first released by Microsoft in 1989. MS-SQL is the go-to database for Windows environments in many industries. Over the last few months, we have been working on a driver for MS-SQL with some of our enterprise design partners and are happy to make it available to you as a beta feature.

To get started with MS-SQL, you must first opt-in to its beta testing program. To do so:

  1. Login to your Atlas Cloud account using the instructions above.
  2. Head over to your Atlas Cloud account.
  3. Click on your profile picture in the bottom left corner to open the user menu and select the "Personal Settings" option.
  4. Select the "Microsoft SQL Server" checkbox from the driver list and click on the "Save" button to save your changes.

Next, run the following command to start a local MS-SQL server using Docker:

docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 -d mcr.microsoft.com/mssql/server:latest

Notice that by providing the ACCEPT_EULA environment variable you confirm your acceptance of Microsoft's End-User Licensing Agreement.

Create a new file named schema.hcl with the following content:

schema "dbo" {
}
table "users" {
schema = schema.dbo
column "id" {
type = int
}
column "nickname" {
type = varchar(32)
}
}

With this file ready, run the following command to apply the schema to the database:

atlas schema apply \
--url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
--to "file://schema.hcl"

Atlas will connect to the database, inspect its current schema (which is empty) and produce the following output:

-- Planned Changes:
-- Create "users" table
CREATE TABLE [dbo].[users] ([id] int NOT NULL, [nickname] varchar(32) NOT NULL);
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
▸ Apply
Abort

After reviewing the changes, press enter to apply them to the database. Atlas will connect to the database and apply the changes.

Next, run the following command to inspect the database:

atlas schema inspect \
--url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master"

Atlas will connect to the database and produce the following output:

table "users" {
schema = schema.dbo
column "id" {
null = false
type = int
}
column "nickname" {
null = false
type = varchar(32)
}
}
schema "dbo" {
}

Great! Atlas created the users table in the database!
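You can also verify directly with sqlcmd from inside the container. A sketch, assuming the container was started with --name mssql; note that the tools path differs between image versions (newer images ship /opt/mssql-tools18/bin/sqlcmd and may require the -C flag to trust the self-signed certificate):

docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -U sa -P 'P@ssw0rd0995' \
  -Q "SELECT name FROM sys.tables WHERE schema_id = SCHEMA_ID('dbo');"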

Do you use MS-SQL? We would love to hear feedback about your experience using this driver. Feel free to give it a try and reach out on our Discord channel to let us know what you think.

Built-in Schema Visualization

Over the past year we have released some tools to help developers visualize their database schemas using Entity Relationship Diagrams (ERDs). The success of these experiments (especially for Ent and Django) convinced us that we should provide this functionality natively in Atlas. So starting today, you can use the atlas schema inspect --visualize and atlas schema diff --visualize commands to create database ERDs directly from the command line.

To get started, create a new schema file named schema.hcl:

schema "default" {}

table "t1" {
schema = schema.default
column "id" {
type = int
}
primary_key {
columns = [column.id]
}
}

table "t2" {
schema = schema.default
column "id" {
type = int
}
column "t1_id" {
type = int
}
foreign_key "t1_id" {
columns = [column.t1_id]
ref_columns = [table.t1.column.id]
}
}

Next, run the following command to generate the ERD:

atlas schema inspect --dev-url docker://mysql/8/d --url file://schema.hcl --visualize

Atlas will prompt you to check where you would like to share the ERD:

Use the arrow keys to navigate: ↓ ↑ → ←
? Where would you like to share your schema visualization?:
▸ Publicly (gh.atlasgo.cloud)
Privately (acme-corp.atlasgo.cloud)

After selecting the desired option, Atlas will generate the ERD and print out the URL where you can access it:

https://gh.atlasgo.cloud/explore/cdd7bee5

ariga/atlas-deploy GitHub Action

GitHub Actions is a powerful tool that allows developers to automate workflows around GitHub repositories. We have previously published GitHub Actions for running CI for schema changes and for syncing your migration directory with Atlas Cloud.

Today, we are happy to announce the release of the ariga/atlas-deploy-action to the GitHub Actions Marketplace. This action allows you to run migrations directly from your GitHub Actions workflow. To get started, add the following step to your workflow:

name: Deploy Database Migrations
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy Atlas Migrations
        uses: ariga/atlas-deploy-action@v0
        with:
          url: ${{ secrets.DATABASE_URL }}
          dir: path/to/migrations

Type checking for schema files

I firmly believe that great developer experiences aren't created by any one single design decision but are rather a result of a relentless commitment to improving things little by little over long periods of time. As we work with Atlas as users every day, we constantly look for points of friction and frustration and try to find ways to make them better.

Sometimes, these changes are small and simple, like improving the help text of a command or updating a documentation page. Other times, they are more complex, and require building more fundamental infrastructure.

I give all of this as an intro to a feature that I am very excited about: type checking for the HCL schema. Starting with this release, we are shipping a type checker engine that will give you more precise and actionable feedback when you make mistakes in your schema files.

We have come as an industry to appreciate that a fast feedback loop is one of the most prominent factors in developer productivity, and it all starts in the local development environment.

Type checking validates the contents of your schema files and provides you with actionable feedback in case you make a mistake. For example, suppose you define a table like this (try spotting the errors yourself before reading on):

schema.hcl
table "users" {
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
column "manager_id" {
type = int
}
primray_key {
columns = [
column.id
]
}
index "idx_name" {
coulmns = [
column.name
]
unique = true
}
foreign_key {
columns = [column.manager_id]
ref_columns = [column.id]
}
}

Trying to apply this schema will produce the following error:

Error: failed parsing mysql schema files: schema.hcl
schema.hcl:1,1-6: missing required attribute schema in block table
schema.hcl:11,3-14: unknown block type primray_key in block of type table (Did you mean primary_key?)
schema.hcl:17,5-19,6: unknown attribute coulmns in block of type index (Did you mean columns?)
schema.hcl:22,3-14: foreign_key block must have exactly 1 label
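For reference, a corrected version of the same table addresses every reported error: the missing schema attribute, the primary_key and columns spellings, and a label for the foreign key. A sketch (the schema and foreign-key names are illustrative):

cat > schema.hcl <<'EOF'
schema "default" {}

table "users" {
  schema = schema.default
  column "id" {
    type = int
  }
  column "name" {
    type = varchar(255)
  }
  column "manager_id" {
    type = int
  }
  primary_key {
    columns = [column.id]
  }
  index "idx_name" {
    columns = [column.name]
    unique  = true
  }
  foreign_key "manager_fk" {
    columns     = [column.manager_id]
    ref_columns = [column.id]
  }
}
EOF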

This type checking engine is available on the Atlas CLI and will be integrated into our editor plugins for VSCode and (the upcoming) JetBrains IDEs.

Wrapping up

This is it! I know this was a long post, but we are very excited about all of these new features and we hope you will find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 8 min read
Jannik Clausen

With the release of v0.6.0, we introduced a workflow for managing changes to database schemas that we have called: Versioned Migration Authoring.

Today, we released the first version of the Atlas migration execution engine, which can apply migration files to your database. In this post, we will give a brief overview of the features and what to expect in the future.

Migration File Format

The Atlas migration filename format follows a very simple structure: version_[name].sql, with the name being optional. version can be an arbitrary string. Migration files are lexicographically sorted by filename.

↪ tree .
.
├── 1_initial.sql
├── 2_second.sql
├── 3_third.sql
└── atlas.sum

0 directories, 4 files

If you want to follow along, you can simply copy and paste the above files in a folder on your system. Make sure you have a database ready to work on. You can start an ephemeral docker container with the following command:

# Run a local mysql container listening on port 3306.
docker run --rm --name atlas-apply --detach --env MYSQL_ROOT_PASSWORD=pass -p 3306:3306 mysql:8

Apply Migrations

In order to apply migrations you need to have the Atlas CLI in version v0.7.0 or above. Follow the installation instructions if you don't have Atlas installed yet.

Now, to apply the first migration of our migration directory, we call atlas migrate apply and pass in some configuration parameters.

atlas migrate apply 1 \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 1 (1 migrations in total):

-- migrating version 1
-> CREATE DATABASE `my_schema`;
-> CREATE TABLE `my_schema`.`tbl` (`col` int NOT NULL);
-- ok (17.247319ms)

-------------------------
-- 18.784204ms
-- 1 migrations
-- 2 sql statements

Migration Status

Atlas saves information about the database schema revisions (applied migration versions) in a special table called atlas_schema_revisions. In the example above we connected to the database without specifying which schema to operate against. For this reason, Atlas created the revision table in a new schema called atlas_schema_revisions. For a schema-bound connection Atlas will put the table into the connected schema. We will see that in a bit.
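If you are curious, you can peek at the revision table directly (a sketch; the exact columns may vary between Atlas versions):

docker exec atlas-apply mysql -ppass -e \
  "SELECT * FROM atlas_schema_revisions.atlas_schema_revisions\G"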

Go ahead and call atlas migrate status to gather information about the database migration state:

atlas migrate status \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migration Status: PENDING
-- Current Version: 1
-- Next Version: 2
-- Executed Files: 1
-- Pending Files: 2

This output tells us that the last applied version is 1, the next one is called 2 and that we still have two migrations pending. Let's apply the pending migrations:

Note that we do not pass an argument to apply; in this case, Atlas will attempt to apply all pending migrations.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 3 from 1 (2 migrations in total):

-- migrating version 2
-> ALTER TABLE `my_schema`.`tbl` ADD `col_2` TEXT;
-- ok (13.98847ms)

-- migrating version 3
-> CREATE TABLE `tbl_2` (`col` int NOT NULL);
Error 1046: No database selected

-------------------------
-- 15.604338ms
-- 1 migrations ok (1 with errors)
-- 1 sql statements ok (1 with errors)

Error: Execution had errors: Error 1046: No database selected

Error: sql/migrate: executing statement "CREATE TABLE `tbl_2` (`col` int NOT NULL);" from version "3": Error 1046: No database selected

What happened here? After further investigation, you will find that our connection URL is bound to the entire database, not to a schema. The third migration file, however, does not contain a schema qualifier for the CREATE TABLE statement.

By default, Atlas wraps the execution of each migration file into one transaction. This transaction gets rolled back if any error occurs during execution. Be aware though, that some databases, such as MySQL and MariaDB, don't support transactional DDL. If you want to learn how to configure the way Atlas uses transactions, have a look at the docs.

Migration Retry

To resolve this, edit the migration file and add a qualifier to the statement:

CREATE TABLE `my_schema`.`tbl_2` (`col` int NOT NULL);

Since we changed the contents of a migration file, we have to re-calculate the directory integrity hash-sum by calling:

atlas migrate hash --force \
--dir "file://migrations"

Then we can proceed and simply attempt to execute the migration file again.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 3 from 2 (1 migrations in total):

-- migrating version 3
-> CREATE TABLE `my_schema`.`tbl_2` (`col` int NOT NULL);
-- ok (15.168892ms)

-------------------------
-- 16.741173ms
-- 1 migrations
-- 1 sql statements

Attempting to migrate again or calling atlas migrate status will tell us that all migrations have been applied onto the database and there is nothing to do at the moment.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
No migration files to execute

Moving an existing project to Atlas with Baseline Migrations

Another common scenario is when you need to move an existing project to Atlas. To do so, create an initial migration file reflecting the current state of a database schema by using atlas migrate diff. A very simple way to do so would be by heading over to the database from before, deleting the atlas_schema_revisions schema, emptying your migration directory and running the atlas migrate diff command.

rm -rf migrations
docker exec atlas-apply mysql -ppass -e "CREATE SCHEMA \`my_schema_dev\`;" # create a dev-db
docker exec atlas-apply mysql -ppass -e "DROP SCHEMA \`atlas_schema_revisions\`;"
atlas migrate diff \
--dir "file://migrations" \
--to "mysql://root:pass@localhost:3306/my_schema" \
--dev-url "mysql://root:pass@localhost:3306/my_schema_dev"

To demonstrate that Atlas can also work on a schema level instead of a realm connection, we are running on a connection bound to the my_schema schema this time.

You should end up with the following migration directory:

-- create "tbl" table
CREATE TABLE `tbl` (`col` int NOT NULL, `col_2` text NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- create "tbl_2" table
CREATE TABLE `tbl_2` (`col` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Now, let's create a new migration file to create a table tbl_3 and update the directory integrity file.

atlas migrate new add_table --dir "file://migrations"
echo "CREATE TABLE `tbl_3` (`col` text NULL);" >> migrations/$(ls -t migrations | head -n1)
atlas migrate hash --force --dir "file://migrations"

Since we now have both a migration file representing our current database state and the new migration file to apply, we can make use of the --baseline flag:

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/my_schema" \
--baseline "20220908110527" # replace the version with the one generated by you
Migrating to version 20220908110847 from 20220908110527 (1 migrations in total):

-- migrating version 20220908110847
-> CREATE TABLE `tbl_3` (`col` text NULL);
-- ok (14.325493ms)

-------------------------
-- 15.786455ms
-- 1 migrations
-- 1 sql statements

Outlook

The Atlas migration engine is powering Ent, and the execution engine has already been in use within Ariga for several months. We will continue working on improving it and releasing cool features, such as assisted troubleshooting for failed migrations, more intelligent, dialect-aware execution planning for things like MySQL's implicit commits, and more.

Wrapping up

In this post we learned about the new migration execution engine of Atlas and some information about its internals.

Further reading

To learn more about Versioned Migration Authoring:

Have questions? Feedback? Find our team on our Discord server.

· 5 min read
Rotem Tamir

Losing data is painful for almost all organizations. This is one of the reasons teams are very cautious when it comes to making changes to their databases. In fact, many teams set explicit policies on what kinds of changes to the database are allowed, often completely prohibiting any change that is destructive.

Destructive changes are changes to a database schema that result in loss of data. For instance, consider a statement such as:

ALTER TABLE `users` DROP COLUMN `email_address`;

This statement is considered destructive because whatever data is stored in the email_address column will be deleted from disk, with no way to recover it.

Suppose you were in charge of a team that decided to prohibit destructive changes, how would you go about enforcing such a policy? From our experience, most teams enforce policies relating to schema migrations in code-review: a human engineer, preferably with some expertise in operating databases, manually reviews any proposed database migration scripts and rejects them if they contain destructive changes.

Relying on a human reviewer to enforce such a policy is both expensive (it takes time and mental energy) and error-prone. Just like manual QA is slowly being replaced with automated testing, and manual code style reviews are being replaced with linters, isn't it time that we automate the process of ensuring that changes to database schemas are safe?

Announcing the Atlas GitHub Action

Today, we're happy to announce the release of the official Atlas GitHub Action which can be used to apply migration directory linting for a bunch of popular database migration tools. golang-migrate, goose, dbmate and Atlas itself are already supported, and Flyway and Liquibase are coming soon.

If you're using GitHub to manage your source code, you're in luck. By adding a short configuration file to your repository, you can start linting your schema migration scripts today! Let's see a short example.

Setting up

Suppose we are running a website for an e-commerce business. To store the data for our website we use a MySQL database. Because the data in this database is everything to us, we use a careful versioned migrations approach where each change to the database schema is described in an SQL script and stored in our Git repository. To execute these scripts we use a popular tool called golang-migrate.

The source code for this example can be found in rotemtam/atlas-action-demo.

Initially, our schema contains two tables: users and orders, documented in the first few migration files:

Create the users table:

migrations/20220819060736.up.sql
-- create "users" table
CREATE TABLE `users` (
  `id` int NOT NULL,
  `name` varchar(100) NULL,
  PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Add a unique email column:

migrations/20220819061056.up.sql
ALTER TABLE `users` ADD COLUMN `email` varchar(255) NOT NULL, ADD UNIQUE INDEX `email_unique` (`email`);

Create the orders table, with a foreign-key referencing the users table:

migrations/20220819075145.up.sql
-- create "orders" table
CREATE TABLE `orders` (
  `id` int NOT NULL,
  `user_id` int NOT NULL,
  `total` decimal(10) NOT NULL,
  PRIMARY KEY (`id`),
  INDEX `user_orders` (`user_id`),
  CONSTRAINT `user_orders` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Installing the Atlas Action

To make sure we never accidentally delete data during schema changes, we enact a policy that prohibits destructive changes to the database. To enforce this policy, we invoke the atlas-action GitHub Action from within our continuous integration flow by adding a workflow file named .github/workflows/atlas-ci.yaml:

.github/workflows/atlas-ci.yaml
name: Atlas CI
on:
  # Run whenever code is changed in the master branch,
  # change this to your root branch.
  push:
    branches:
      - master
  # Run on PRs where something changed under the `path/to/migration/dir/` directory.
  pull_request:
    paths:
      - 'migrations/*'
jobs:
  lint:
    services:
      # Spin up a mysql:8.0.29 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8.0.29
        env:
          MYSQL_ROOT_PASSWORD: pass
          MYSQL_DATABASE: test
        ports:
          - "3306:3306"
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3.0.1
        with:
          fetch-depth: 0 # Mandatory unless "latest" is set below.
      - uses: ariga/atlas-action@v0
        with:
          dir: migrations/
          dir-format: golang-migrate # Or: atlas, goose, dbmate
          dev-url: mysql://root:pass@localhost:3306/test

Detecting a destructive change

Next, let's see what happens when a developer accidentally proposes a destructive change, to drop a column in the orders table:

migrations/20220819081116.up.sql
-- modify "orders" table
ALTER TABLE `orders` DROP COLUMN `total`;

This change is proposed in PR #1 in our example repo. Because we have previously set up the Atlas GitHub Action to lint our migration directory, whenever a file changes under the migrations/ directory, a workflow is triggered.

After letting our workflow complete, observe that GitHub informs us that the Atlas CI / lint check has failed:

Clicking on the "details" link we find a detailed explanation on the causes for the failure:

Examining the Action run summary we find the following annotation:

As you can see, Atlas has detected the destructive change we proposed to apply to our database and failed our build!

Wrapping up

In this post we discussed why many teams set policies to prevent destructive changes to database schemas. We further showed how such policies can be enforced in an automated way using the official Atlas GitHub Action.

Further reading

To learn more about CI for database schema changes:

Have questions? Feedback? Find our team on our Discord server.

· 4 min read
Amit Shani

Today, I'm happy to announce the release of v0.4.2 of the Atlas CLI. This version includes many improvements and fixes, but I wanted to share with you exciting news about something I personally worked on. As of v0.4.2, Atlas includes preview support for CockroachDB 🎉

Atlas is an open-source project that helps developers to better manage their database schemas. It has a CLI tool and a Terraform integration. By using Atlas's Data Definition Language (with a syntax similar to Terraform), users can plan, verify and apply changes to their databases in a simple, declarative workflow. Earlier this year, Atlas became the migration engine for Ent, a widely popular, Linux Foundation backed entity framework for Go.

CockroachDB is an open-source NewSQL database. From their README:

CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. It scales horizontally; survives disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports strongly-consistent ACID transactions; and provides a familiar SQL API for structuring, manipulating, and querying data.

CockroachDB has been gaining popularity and many of you have been asking for Atlas to support it.

While CockroachDB aims to be PostgreSQL compatible, it still has some incompatibilities (e.g. 1, 2, 3) which prevented Atlas users from working with it using the existing Postgres dialect.

With the latest release of Atlas, the Postgres driver automatically detects whether it is connected to a CockroachDB database and, if so, uses a custom driver that provides compatibility with CockroachDB.

Getting started with Atlas and CockroachDB

Let's see how we can use Atlas CLI to manage the schema of a CockroachDB database. Start by downloading the latest version of Atlas, on macOS:

brew install ariga/tap/atlas

For installation instructions on other platforms, see the docs.

For the purpose of this example, let's spin up a local, single-node CockroachDB cluster in a container by running:

docker run --rm -d -p 26257:26257 --name crdb cockroachdb/cockroach start-single-node --insecure

Next, let's seed the database with a simple table:

docker exec crdb cockroach sql --insecure -e 'CREATE TABLE users (id int primary key);'

After creating the users table, use Atlas's schema inspect command to read the schema of our local database and save the result to a file:

atlas schema inspect -u 'postgres://root:pass@localhost:26257/?sslmode=disable' --schema public > schema.hcl

Observe the current HCL representation of the public schema, which contains our newly created table, users:

table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}

Next, edit schema.hcl to add a column to the users table:

schema.hcl
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
column "name" {
type = varchar(100)
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}

Now apply the schema using the schema apply command:

atlas schema apply -u 'postgres://root:pass@localhost:26257/?sslmode=disable' --schema public -f schema.hcl

Atlas prints out the planned changes and asks for your confirmation:

-- Planned Changes:
-- Create "test" table
ALTER TABLE "public"."users" ADD COLUMN "name" character varying(100) NOT NULL
? Are you sure?:
▸ Apply
Abort

After hitting "Apply", Atlas applies the desired schema to the database:

✔ Apply

We have successfully applied our schema to our database.
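We can double-check the new column through the CockroachDB SQL shell as well (a sketch, reusing the crdb container):

docker exec crdb cockroach sql --insecure -e 'SHOW COLUMNS FROM users;'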

To stop the container running CockroachDB run:

docker stop crdb

Learn more about Atlas

In this short example, we demonstrated two of Atlas's basic features: database inspection and declarative schema migration (applying a desired schema on a database). Here are some topics you may want to explore when getting started with Atlas:

  • Learn the DDL - learn how to define any SQL resource in Atlas's data definition language.
  • Try the Terraform Provider - see how you can use the Atlas Terraform Provider to integrate schema management in your general Infrastructure-as-Code workflows.
  • Use the migrate command to author migrations - In addition to the Terraform-like declarative workflow, Atlas can manage a migration script directory for you based on your desired schema.

Preview support

The integration of Atlas with CockroachDB is well tested with version v21.2.11 (at the time of writing, latest) and will be extended in the future. If you're using other versions of CockroachDB or looking for help, don't hesitate to file an issue or join our Discord channel.

Have questions? Feedback? Find our team on our Discord server.

· 4 min read
Rotem Tamir

A few days ago we released v0.4.1 of Atlas. Along with a multitude of improvements and fixes, I'm happy to announce the release of a feature that we've been planning for a while: Project Files.

Project files provide a way to describe and interact with multiple environments while working with Atlas. A project file is a file named atlas.hcl that contains one or more env blocks, each describing an environment. Each environment has a reference to where the schema definition file resides, a database URL and an array of the schemas in the database that are managed by Atlas:

// Define an environment named "local".
env "local" {
  // Declare where the schema definition file resides.
  src = "./schema/project.hcl"

  // Define the URL of the database which is managed in
  // this environment.
  url = "mysql://localhost:3306"

  // Define the URL of the Dev Database for this environment.
  // See: https://atlasgo.io/dev-database
  dev = "mysql://localhost:3307"

  // The schemas in the database that are managed by Atlas.
  schemas = ["users", "admin"]
}

env "dev" {
  // ... a different env
}

Project files arose from the need to provide a better experience for developers using the CLI. For example, suppose you are using Atlas to plan migrations for your database schema. In that case, you would run a command similar to this:

atlas migrate diff --dev-url mysql://root:password@localhost:3306 --to file://schema.hcl --dir file://migrations --format atlas

With project files, you can define an environment named local:

env "local" {
url = "mysql://root:password@localhost:3306"
dev = "mysql://root:password@localhost:3307"
src = "./schema.hcl"
migration {
dir = "file://migrations"
}
}

Then run the migrate diff command against this environment using the --env flag:

atlas migrate diff --env local

Alternatively, suppose you want to use Atlas to apply the schema on your staging environment. Without project files, you would use:

atlas schema apply -u mysql://root:password@db.ariga.dev:3306 --dev-url mysql://root:password@localhost:3307 -f schema.hcl

To do the same using a project file, define another env named staging:

env "staging" {
url = "mysql://root:password@db.ariga.dev:3306"
dev = "mysql://root:password@localhost:3307"
src = "./schema.hcl"
}

Then run:

atlas schema apply --env staging

Passing credentials as input values

Similar to schema definition files, project files also support Input Variables. This means that we can define variable blocks on the project file to declare which values should be provided when the file is evaluated. This mechanism can (and should) be used to avoid committing to source control database credentials. To do this, first define a variable named db_password:

variable "db_password" {
type = string
}

Next, replace the database password in all connection strings with a reference to this variable, for example:

env "staging" {
url = "mysql://root:${var.db_password}@db.ariga.dev:3306"
dev = "mysql://root:${var.db_password}@localhost:3307"
src = "./schema.hcl"
}

If we run schema apply without providing the password input variable, we will receive an error message:

Error: missing value for required variable "db_password"

To provide the input variable run:

atlas schema apply --env staging --var db_password=pass

Input variables can be used for many other use cases by passing them as input values to schema files.
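
For example, here is a minimal sketch of a schema file that receives its schema name as an input variable; the tenant variable and the multi-tenant naming below are purely illustrative:

schema.hcl
variable "tenant" {
  type = string
}

schema "tenant" {
  // The managed schema is named after the provided tenant.
  name = var.tenant
}

The value is supplied the same way as before, e.g. by passing --var tenant=acme on the command line.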

What's next

In this post, I presented Project Files, a new feature recently added to Atlas to help developers create more fluent workflows for managing changes to their database schemas. In the coming weeks we will be adding a few more improvements to the dev flow, such as support for marking a specific environment as the default one (alleviating the need to specify --env in many cases) and multi-file schema definitions.

Have questions? Feedback? Find our team on our Discord server.

· 5 min read
Amit Shani

Today we are glad to announce the release of the official Atlas Terraform Provider.

What is Terraform

Terraform is a popular open-source tool created by HashiCorp, used to greatly simplify the task of provisioning and managing resources in the cloud. With Terraform, organizations can describe the desired state of their infrastructure in a simple configuration language and let Terraform plan and apply the changes in an automated way. In this manner, Terraform allows teams to truly deliver infrastructure-as-code (IaC), which has completely changed how teams and organizations manage their cloud infrastructure.

Infrastructure-as-Code and database management

Most cloud-native applications are backed by a database. The database is often the most critical part of many software systems, so making changes to its schema (the structure and layout of the data inside) is a very risky business. However, schemas must evolve: as functionality changes over time, backing tables are added, columns are dropped, indexes are created for performance reasons, and more.

Therefore, it is surprising that there is no established way of integrating the management of schema changes (commonly called schema "migrations") into popular Infrastructure-as-Code workflows. As a result, many organizations run migrations from within their application code or use solutions outside the Terraform ecosystem, meaning that management of the production environment is fragmented and hard to synchronize. Atlas aims to change that.

The Atlas Terraform provider allows you to synchronize your database with your desired schema, in a safe and stateful manner. By using Atlas’s core migration engine and embedding it in a Terraform provider, we are enabling teams to manage their database schemas as part of their full IaC workflow. This way, teams can use existing providers (such as AWS or GCP) to provision the database instance and use the Atlas provider to keep the schema in sync. Integrating Atlas with Terraform is especially useful because it couples the state of the infrastructure with the state of the database. It is also extremely neat when using a dev database, which is a feature that combines infrastructure and DB management to provide safety and correctness.
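
To make this coupling concrete, here is a rough sketch of how a Terraform-provisioned database can feed directly into the Atlas provider. The aws_db_instance.main resource, the db_password variable, and the connection details below are illustrative assumptions, not part of this release:

data "atlas_schema" "app" {
  # The desired schema, as defined in schema.hcl.
  dev_db_url = "mysql://root:pass@localhost:3307/app"
  src        = file("${path.module}/schema.hcl")
}

resource "atlas_schema" "app" {
  hcl        = data.atlas_schema.app.hcl
  # The target URL is built from the instance Terraform provisioned.
  url        = "mysql://admin:${var.db_password}@${aws_db_instance.main.address}:${aws_db_instance.main.port}/app"
  dev_db_url = "mysql://root:pass@localhost:3307/app"
}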

Demo

Prerequisites

Make sure you have installed:

  • Docker
  • Terraform

Let’s see an example of the provider in action. First, spin up a database using Docker:

docker run -p 3306:3306 --name iloveatlas -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=market -d mysql:8

Great! Now we have a MySQL database instance running.

As an extra measure of safety, we will run another identical database that will serve as a Dev Database. In short, the dev-db helps catch errors that can only be detected when applying the schema. It is also useful for formatting the schema in a correct and predictable way. Read more about it here. Run a second instance of MySQL on another port, to serve as a dev-db:

docker run -p 3307:3306 --name devdb-greatness -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=market -d mysql:8

Next, we need an HCL file describing the desired state of our database. You can use the Atlas CLI to inspect the state of an existing database, or you can use the following basic schema:

schema.hcl
table "orders" {
schema = schema.market
column "id" {
null = false
type = int
auto_increment = true
}
column "name" {
null = false
type = varchar(20)
}
primary_key {
columns = [column.id]
}
}

schema "market" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
}

Save the schema file locally in a file named schema.hcl.

Now that we have our database schema, we can use Terraform to apply it to our database. Create a file named main.tf and copy the following snippet:

main.tf
terraform {
  required_providers {
    atlas = {
      version = "~> 0.4.0"
      source  = "ariga/atlas"
    }
  }
}

provider "atlas" {}

data "atlas_schema" "market" {
  dev_db_url = "mysql://root:pass@localhost:3307/market"
  src        = file("${path.module}/schema.hcl")
}

resource "atlas_schema" "market" {
  hcl        = data.atlas_schema.market.hcl
  url        = "mysql://root:pass@localhost:3306/market"
  dev_db_url = "mysql://root:pass@localhost:3307/market"
}

Next, initialize Terraform:

terraform init

And apply the schema to the database by executing:

terraform apply --auto-approve

Awesome! Now your database should have a table named orders. To verify this, we can connect to the database:

$ docker exec -it iloveatlas mysql -ppass --database=market

mysql> show tables;
+------------------+
| Tables_in_market |
+------------------+
| orders |
+------------------+
1 row in set (0.00 sec)

mysql> show create table orders;
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| orders | CREATE TABLE `orders` (
`id` int NOT NULL AUTO_INCREMENT,
`name` varchar(20) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

For more examples and documentation, visit the official GitHub repository or the provider page on the Terraform Registry.

What's next

In this post, we presented the Atlas Terraform Provider. The provider currently supports the basic, declarative migration workflow that is available in the Atlas engine. In upcoming versions, we will add support for an additional kind of workflow supported by the engine, called versioned migration authoring. In addition, more advanced safety checks (such as simulation on database snapshots) and migration strategies are also in the works.

While the Terraform provider has just been released, the core engine it drives is well tested and widely used (especially as the migration engine backing the popular Ent framework). If you, like me, have always wanted to manage your database schema as part of your team's infrastructure-as-code workflow, give the Atlas Terraform provider a try!

Have questions? Feedback? Find our team on our Discord server.