
Revamping the GitHub Actions Experience for Atlas

· 5 min read
Rotem Tamir
Building Atlas

Hi everyone!

I'm very happy to share with you some of the recent improvements to Atlas, specifically around GitHub Actions. In August of last year, we released our first version of the GitHub Actions experience for Atlas. It was a modest start, which included the ability to verify the safety and correctness of schema migrations during the CI process.

Over the past year, we have slowly added more features to the GitHub Actions experience, including the ability to sync migration directories to Atlas Cloud, deploy migrations, and even install Atlas. As often happens with quickly evolving systems, we felt that the API became complex, carrying over use cases and experiences that have become obsolete or superseded by better ones since the initial release.

At Ariga, the team developing Atlas, we have written a document named the "R&D Manifesto", which lists some of the principles that we commit to as individuals and as an organization. One of them is "Obsess over APIs and DevEx" - we believe that the key to building a successful product is to provide the best possible experience to our users, and that starts with clear, consistent and composable APIs that empower our users to achieve amazing feats of engineering.

With that in mind, our team has been working hard in the past few weeks to revamp the GitHub Actions experience for Atlas. Here's a quick summary of the changes:

  1. We've moved all actions into a single repo - ariga/atlas-action. (With the exception of ariga/setup-atlas.)
  2. The API has been reviewed and updated to make sure it is consistent among the different actions and with the rest of the Atlas ecosystem.
  3. We've rewritten the code in Go, which is the language we use for all of our internal tools. This allows us to share code between the CLI and the GitHub Actions, and to provide a more consistent experience between the two. In addition, looking forward we have greatly simplified the process of adding new GitHub Actions as needed.

Deprecation Notice

As part of this change, we are deprecating the previous generation of GitHub Actions, and we encourage you to migrate to the new ones as soon as possible. The old actions will continue to work for the time being, but they will no longer receive updates. These actions are:

Introducing the New Actions

Without further ado, I'm happy to present the new generation of GitHub Actions for Atlas. The new actions follow the design principle of building actions as small, composable units that can be combined to achieve different outcomes. All of the actions rely on Atlas being installed on the GitHub Actions runner, which is done using the ariga/setup-atlas action. The rest of the actions essentially map to CLI commands, and can be used to build more complex workflows.

The actions are:

  • ariga/setup-atlas - Install Atlas from a GitHub Actions workflow.
  • ariga/atlas-action/migrate/lint - CI for schema changes.
  • ariga/atlas-action/migrate/push - Push your migration directory to Atlas Cloud (atlasgo.cloud).
  • ariga/atlas-action/migrate/apply - Deploy versioned migrations from GitHub Actions.

Example Workflows

Consider the following GitHub Actions workflow, which can be used to implement a CI/CD pipeline for your database schema changes.

name: Atlas CI/CD
on:
  push:
    branches:
      - master # Use your main branch here.
  pull_request:
    paths:
      - 'migrations/*' # Use the path to your migration directory here.
# Permissions to write comments on the pull request.
permissions:
  contents: read
  pull-requests: write
jobs:
  atlas:
    services:
      # Spin up a mysql:8 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8
        env:
          MYSQL_DATABASE: dev
          MYSQL_ROOT_PASSWORD: pass
        ports:
          - 3306:3306
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/migrate/lint@v1
        with:
          dir: 'file://migrations'
          dir-name: 'my-project' # The name of the project in Atlas Cloud
          dev-url: "mysql://root:pass@localhost:3306/dev"
      - uses: ariga/atlas-action/migrate/push@v1
        if: github.ref == 'refs/heads/master'
        with:
          dir: 'file://migrations'
          dir-name: 'my-project'
          dev-url: 'mysql://root:pass@localhost:3306/dev' # Use the service name "mysql" as the hostname

This workflow uses 3 different actions to achieve the following:

  • ariga/setup-atlas - Installs Atlas on the GitHub Actions runner and logs in using the provided token.
  • ariga/atlas-action/migrate/lint - Lints the migration directory and verifies that it is safe to apply. This is run on every pull request that modifies the migration directory. If issues are found, the action will fail and a comment will be posted on the pull request with the details.
  • ariga/atlas-action/migrate/push - Pushes the migration directory to Atlas Cloud. This is run on every push to the main branch, so it can be used to deploy the migrations to production.
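
The deployment action listed earlier, ariga/atlas-action/migrate/apply, is not part of this workflow, but since each action essentially maps to a CLI command, a deployment step boils down to something like the following sketch (the directory name and database URL here are placeholders for your own project):

atlas migrate apply \
  --dir "atlas://my-project" \
  --url "mysql://user:pass@prod-host:3306/app"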

Tagging v1

With the release of the new actions, we are also tagging the v1 release of the actions to mark the maturity and stability of the API. We hope you will find the new actions useful, and we look forward to seeing what you build with them!

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing Automatic Migrations for Sequelize Users

· 8 min read
Ronen Lubin
Software Engineer
TL;DR

You can now import the desired database schema from your Sequelize project into Atlas, and use it to automatically plan migrations for you.

See an example

Introduction

Sequelize is one of the most popular ORMs for Node.js. It supports a variety of databases, including MySQL, PostgreSQL, SQLite, and Microsoft SQL Server.

Atlas is a modern tool for managing your database schema. It allows you to inspect, plan, lint and execute schema changes to your database. It is designed to be used by developers, DBAs and DevOps engineers alike.

Atlas supports many ways to describe database schemas: Using Schema Loaders, plain SQL, a connection to another database or using Atlas HCL.

Today, I'm happy to announce that Atlas supports loading the desired schema from Sequelize projects. This means that Sequelize users can now use Atlas instead of the existing Sequelize CLI to manage their database schema.

By using Atlas, Sequelize users can now enjoy these benefits:

  • A declarative migration flow - Atlas can operate like a "Terraform for databases", where by running atlas schema apply the application schema is applied on a target database.
  • Automatic schema migration planning - Alternatively, Atlas can operate using a more traditional versioned migration flow. However, contrary to most tools, Atlas will automatically analyze the diff between the migration directory and the current application data model, and will produce correct and safe SQL migration files.
  • CI for schema changes - Atlas can be used during CI to make sure you never merge a pull request that will break your database schema.
  • Modern CD integrations - Atlas integrates seamlessly with modern deployment tools such as Kubernetes, Terraform, Helm, Flux, and ArgoCD. This allows you to deploy changes to your database schema as part of your existing deployment pipelines.
  • Visualization - Atlas users can create beautiful, shareable ERDs of their application data model with a single command.
  • .. and much more (read more about Atlas features)
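
For instance, the declarative flow mentioned in the first bullet above boils down to a single command. Here is a minimal sketch, assuming the atlas.hcl env named sequelize that we define later in this post, with a placeholder target database URL:

atlas schema apply \
  --env sequelize \
  --url "mysql://root:pass@localhost:3306/mydb"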

Evolving beyond Sequelize's native migration support

Sequelize allows users to manage their database schemas using its sync feature, which is usually sufficient during development and in many simple cases:

await sequelize.sync({ force: true });
console.log("All models were synchronized successfully.");

However, at some point, teams need more control and decide to employ the migrations methodology, which is a more robust way to manage your database schema. The problem with creating migrations in Sequelize is that they are usually written by hand in a very specific DSL, which is error-prone and time-consuming:

module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.createTable('Person', {
      name: Sequelize.DataTypes.STRING,
      isBetaMember: {
        type: Sequelize.DataTypes.BOOLEAN,
        defaultValue: false,
        allowNull: false
      }
    });
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.dropTable('Person');
  }
};

Atlas can automatically plan database schema migrations for developers using Sequelize by calculating the diff between the current state of the migration directory and its desired state defined by the Sequelize schema.

Demo time

Let's demonstrate how to set up Atlas to manage your Sequelize schema.

Installation

If you haven't already, install Atlas from macOS or Linux by running:

curl -sSf https://atlasgo.sh | sh

See the documentation for more installation options.

Install the Atlas Sequelize Provider by running:

npm install @ariga/atlas-provider-sequelize

Make sure all your Node dependencies are installed by running:

npm install

Standalone vs Script mode

The provider can be used in two modes:

  • Standalone - If all of your Sequelize models exist in a single Node.js module, you can use the provider directly to load your Sequelize schema into Atlas.
  • Script - In other cases, you can use the provider as an npm package to write a script that loads your Sequelize schema into Atlas.

Standalone mode

In your project directory, create a new file named atlas.hcl with the following contents:

data "external_schema" "sequelize" {
  program = [
    "npx",
    "@ariga/atlas-provider-sequelize",
    "load",
    "--path", "./path/to/models",
    "--dialect", "mysql", // mariadb | postgres | sqlite | mssql
  ]
}

env "sequelize" {
  src = data.external_schema.sequelize.url
  dev = "docker://mysql/8/dev"
  migration {
    dir = "file://migrations"
  }
  format {
    migrate {
      diff = "{{ sql . \" \" }}"
    }
  }
}

For the sake of brevity, we will not review the Script mode in this post, but you can find more information about it in the Sequelize Guide.

Load Sequelize Schema in action

Atlas supports a versioned migrations workflow, where each change to the database is versioned and recorded in a migration file. You can use the atlas migrate diff command to automatically generate a migration file that will migrate the database from its latest revision to the current Sequelize schema.

Suppose we have the following Sequelize models directory, with two models task and user:

'use strict';
module.exports = (sequelize, DataTypes) => {
  const Task = sequelize.define('Task', {
    complete: {
      type: DataTypes.BOOLEAN,
      defaultValue: false,
    }
  });

  Task.associate = (models) => {
    Task.belongsTo(models.User, {
      foreignKey: {
        name: 'userID',
        allowNull: false
      },
      as: 'tasks'
    });
  };

  return Task;
};

We can now generate a migration file by running this command:

atlas migrate diff --env sequelize

Running this command will generate files similar to this in the migrations directory:

migrations
|-- 20230918143104.sql
`-- atlas.sum

0 directories, 2 files

Examining the contents of 20230918143104.sql:

-- Create "Users" table
CREATE TABLE `Users` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `email` varchar(255) NOT NULL,
  `createdAt` datetime NOT NULL,
  `updatedAt` datetime NOT NULL,
  PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "Tasks" table
CREATE TABLE `Tasks` (
  `id` int NOT NULL AUTO_INCREMENT,
  `complete` bool NULL DEFAULT 0,
  `createdAt` datetime NOT NULL,
  `updatedAt` datetime NOT NULL,
  `userID` int NOT NULL,
  PRIMARY KEY (`id`),
  INDEX `userID` (`userID`),
  CONSTRAINT `Tasks_ibfk_1` FOREIGN KEY (`userID`) REFERENCES `Users` (`id`) ON UPDATE CASCADE ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Amazing! Atlas automatically generated a migration file that will create the Users and Tasks tables in our database!

Next, alter the User model to add a new age field:

    name: {
      type: DataTypes.STRING,
      allowNull: false
    },
+   age: {
+     type: DataTypes.INTEGER,
+     allowNull: false
+   },

Re-run this command:

atlas migrate diff --env sequelize

Observe that a new migration file is generated:

-- Modify "Users" table
ALTER TABLE `Users` ADD COLUMN `age` int NOT NULL;
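
To run the generated migrations against a database, use atlas migrate apply. A minimal sketch, with a placeholder connection URL for the target database:

atlas migrate apply \
  --env sequelize \
  --url "mysql://root:pass@localhost:3306/mydb"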

Conclusion

In this post, we have presented how Sequelize projects can use Atlas to automatically plan schema migrations based only on their data model.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing v0.14.0: Checkpoints, Push to Cloud and JetBrains Editor Support

· 6 min read
Rotem Tamir
Building Atlas

Hi everyone!

It's been a few weeks since our last version announcement and today I'm happy to share with you
v0.14, which includes some very exciting improvements for Atlas:

  • Checkpoints - as your migration directory grows, replaying it from scratch can become annoyingly slow. Checkpoints allow you to save the state of your database at a specific point in time and replay migrations from that point forward.
  • Push to the Cloud - you can now push your migration directory to Atlas Cloud directly from the CLI. Think of it like docker push for your database migrations.
  • JetBrains Editor Support - After launching our VSCode Extension a few months ago, our team has been hard at work to bring the same experience to JetBrains IDEs. Starting today, you can use Atlas directly from your favorite JetBrains IDEs (IntelliJ, PyCharm, GoLand, etc.) using the new Atlas plugin.

Let's dive right in!

Checkpoints

Suppose your project has been going on for a while, and you have a migration directory with 100 migrations. Whenever you need to install your application from scratch (such as during development or testing), you need to replay all migrations from start to finish to set up your database. Depending on your setup, this may take a few seconds or more. If you have a checkpoint, you can replay only the migrations that were added since the latest checkpoint, which can be much faster.

Here's a short example. Let's say we have a migration directory with 2 migration files, managing a SQLite database. The first one creates a table named t1:

migrations/20230830122359_start.sql
create table t1 ( c1 int );

And the second adds a table named t2 and adds a column named c2 to t1:

migrations/20230830122414_t2.sql
create table t2 ( c1 int, c2 int );

alter table t1 add column c2 int;

To create a checkpoint, we can run the following command:

atlas migrate checkpoint --dev-url "sqlite://file?mode=memory&_fk=1"

This will create a SQL file, which is our checkpoint:

20230830123813_checkpoint.sql
-- atlas:checkpoint

-- Create "t1" table
CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-- Create "t2" table
CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);

Notice two things:

  1. The atlas:checkpoint directive which indicates that this file is a checkpoint.
  2. The SQL statement to create the t1 table includes both the c1 and c2 columns and does not contain the alter table statement. This is because the checkpoint includes the state of the database at the time it was created, which can be thought of as the sum of all migrations that were applied up to that point.

Next, let's apply these migrations on a local SQLite database:

atlas migrate apply --url sqlite://local.db

Atlas prints:

Migrating to version 20230830123813 (1 migrations in total):

-- migrating version 20230830123813
-> CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-> CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);
-- ok (960.465µs)

-------------------------
-- 6.895124ms
-- 1 migrations
-- 2 sql statements

As expected, Atlas skipped all of the migrations up to the checkpoint and only applied the last one!

Push to Cloud

As we demonstrated above, once we have a migration directory, we can apply it to a database. If your database is running locally this is easy enough, but building deployment pipelines to production databases is more involved. There are multiple ways to accomplish this, such as building custom Docker images that contain your migration directory, as most of the methods covered in the guides section do.

In this release, we simplified the process of pushing migration directories to Atlas Cloud by adding a new atlas migrate push command. You can think of it as docker push for your database migrations.

atlas migrate push

Migration Directory created with atlas migrate push

Continuing with our example from above, let's push our migration directory to Atlas Cloud.

To start, you'll need to log in to Atlas. If it's your first time, you'll be prompted to create both an account and a workspace.

atlas login

After logging in, let's name our new migration project pushdemo and run:

atlas migrate push pushdemo --dev-url "sqlite://file?mode=memory&_fk=1"

After our migration directory is pushed, Atlas prints a URL to the created directory, similar to the one shown in the image above.

Once your migration directory is pushed, you can use it to apply migrations to your database directly from the cloud, just as you would execute docker run to run a container image that is stored in a Docker container registry.

To apply a migration directory directly from the cloud, run:

atlas migrate apply --dir atlas://pushdemo --url sqlite://local.db

Notice two flags that we used here:

  • --dir - specifies the URL of the migration directory. We used atlas://pushdemo to indicate that we want to use the migration directory named pushdemo that we pushed earlier. This directory is accessible to us because we used atlas login in a previous step.
  • --url - specifies the URL of the database we want to apply the migrations to. In this case, we used the same SQLite database that we used earlier.

JetBrains Editor Support

JetBrains makes some of the most popular IDEs for software developers, including IntelliJ, PyCharm, GoLand, and more. We are happy to announce that following our recent release of the VSCode Extension, we now have a plugin for JetBrains IDEs as well!

The plugin is built to make editing Atlas HCL files much easier by providing developers with syntax highlighting, code completion, and warnings. It supports both atlas.hcl project configuration files as well as schema definition files (.my.hcl, .pg.hcl, and .lt.hcl).

The plugin is available for download from the JetBrains Marketplace.

  1. To install the plugin, open your IDE and go to Preferences > Plugins > Marketplace and search for Atlas:

  2. Click on the Install button to install the plugin.

  3. Create a new file named schema.my.hcl (the .my.hcl suffix signifies to the plugin that this file is a MySQL schema; you can use .pg.hcl for Postgres or .lt.hcl for SQLite).

  4. Edit away!

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

Passwordless Schema Migrations on RDS with Atlas

· 6 min read
Rotem Tamir
Building Atlas

TL;DR

Atlas now supports AWS IAM authentication, which enables you to perform passwordless schema migrations on your RDS databases. To use it with Atlas, add the aws_rds_token data source to your atlas.hcl configuration file:

data "aws_rds_token" "mydb" {
  endpoint = "mydb.123456789012.us-east-1.rds.amazonaws.com:3306"
  username = "atlas"
}

To skip the intro and jump straight to the tutorial, click here.

Introduction

Passwords have long been the default mechanism for authentication, but they come with a set of known vulnerabilities. In recent years, our industry has shifted towards alternative authentication methods due to these weaknesses. For databases, which store critical data, this security concern is even more important.

Schema migrations require elevated permissions, making it even more essential to approach them with utmost care in order to prevent security breaches. In this post, we'll show how to use Atlas to perform passwordless schema migrations on databases hosted in AWS's RDS service.

The Problem with Passwords

Passwords are considered a weak authentication mechanism for humans logging in to systems since they can be leaked or guessed. For this reason, many services offer more robust authentication methods, such as multi-factor authentication or single sign-on.

In this post, we'll focus on the security concerns of passwords (or API Tokens) for automated systems (such as CI/CD pipelines), which are used to perform schema migrations. Such tokens pose a challenge to securing systems in a few ways:

  • Leaks. When stored in configuration files, passwords are typically in plain text, increasing the risk of leaks.
  • Granularity. When passwords are shared among multiple users, it becomes challenging to grant and revoke access for individual users based on role changes or emerging security concerns.
  • Visibility. Because passwords are usually visible to operators and are shared by multiple users, it's hard to track who performed which operation once authenticated.
  • Rotation. Because passwords tend to be long-lived, their rotation becomes a cumbersome task.

IAM Authentication

IAM, short for Identity and Access Management, is a framework that has been adopted by virtually all cloud providers for managing digital identities and their permissions. Unlike traditional password-based systems where credentials are stored and checked, IAM verifies who (or what) is making a request and then checks the permissions associated with that identity.

IAM services supply mechanisms for generating short-lived tokens based on the identity of the caller. In addition, these services provide a centralized way to manage permissions (by creating granular access policies and grouping them into roles) and auditing capabilities to track how subjects (users or services) use the system.

Configured correctly, under IAM, every subject can access exactly what it needs and nothing more, without ever having to use a password or some other token that might be leaked or stolen. When a person leaves your organization (or no longer needs access to a particular resource), you can revoke their access by updating their IAM role.

IAM authentication for Databases

Most databases in use today predate IAM and have developed their own internal mechanisms for authentication and authorization. In recent years, cloud vendors have worked to create a bridge between IAM and databases, allowing users to authenticate their identity to databases using IAM credentials. In this post, we'll focus on AWS's implementation of IAM authentication for RDS databases.

How does it work?

First, enable IAM authentication on your RDS instance. This installs a plugin on the database that allows it to authenticate users with IAM credentials instead of passwords. Read how to do this in the AWS documentation.
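
If you manage your instance with the AWS CLI, this is a one-line change. A hedged sketch (the instance identifier is a placeholder for your own instance):

aws rds modify-db-instance \
  --db-instance-identifier atlas-demo \
  --enable-iam-database-authentication \
  --apply-immediately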

Next, create a database user and grant it permission to authenticate using IAM.

In MySQL, execute a statement like this:

CREATE USER 'atlas' IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';

In PostgreSQL, execute a statement like this:

CREATE USER atlas; 
GRANT rds_iam TO atlas;

Finally, create an IAM policy that allows subjects to create RDS connection tokens. This policy can then be attached to roles for developers or services that need to connect to the database. Read how to do this in the AWS documentation.
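
As a rough illustration, such a policy can be created with the AWS CLI. The policy name, account ID, region, and DB resource ID below are placeholders; consult the AWS documentation for the exact resource ARN format of your instance:

cat > rds-connect-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-EXAMPLEID/atlas"
    }
  ]
}
EOF
aws iam create-policy \
  --policy-name atlas-rds-connect \
  --policy-document file://rds-connect-policy.json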

IAM Authentication with Atlas

Tools that perform schema migrations such as Atlas require elevated permissions to perform their tasks. For example, they need to be able to inspect the database's information schema tables as well as create and drop resources. For this reason, any mechanism that can further protect the security of their credentials is essential, making IAM authentication a great fit. To support this use case, we have recently added support for AWS IAM authentication to Atlas.

Demo Time!

Let's see how to use Atlas to perform passwordless schema migrations on an RDS database.

For the purpose of this demo, we assume that we have a PostgreSQL database running in RDS with IAM authentication enabled. We also assume that we have a user named atlas that has been granted the rds_iam permission and that we have created an IAM policy that allows us to generate RDS tokens.

Start by creating a new file named atlas.hcl to store our project configuration and add the following content:

// Define local variables for the database endpoint and username.
locals {
  endpoint = "atlas-demo.xyzxyz.us-east-1.rds.amazonaws.com:5432"
  username = "atlas"
}

// Use the "aws_rds_token" data source to generate a token for the database.
data "aws_rds_token" "db" {
  endpoint = local.endpoint
  username = local.username
  region   = "us-east-1"
}

// Define an environment named "rds" that uses the generated token.
env "rds" {
  url = "postgres://${local.username}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres"
}

Let's break this example down:

  • The locals block defines two variables – endpoint and username – that we use to store the database endpoint and the username of the user created in the database.
  • Next, we define an aws_rds_token data source to generate a token for the database. To read more about this data source, see the documentation.
  • Finally, we define an environment named rds that uses the generated token. The url property defines the connection URL that Atlas will use to connect to the database. Notice that we use the urlescape function to escape the token before embedding it in the URL.

Now that we have our project configuration, let's use Atlas to inspect the database schema. Run the following command:

atlas schema inspect -c "file://atlas.hcl" --env rds

You should see output similar to the following:

schema "public" {
}

Amazing! This output indicates that Atlas was able to both connect to the database and inspect the schema without us having to provide it with any credentials!
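
The same configuration can also be used to apply schema changes, not just to inspect them. Here is a minimal sketch, assuming a desired schema kept in a local schema.sql file and a disposable dev database for planning (the dev-url shown is an assumption; adjust it to your setup):

atlas schema apply \
  -c "file://atlas.hcl" \
  --env rds \
  --to "file://schema.sql" \
  --dev-url "docker://postgres/15/dev"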

Wrapping up

In this post, we discussed the security concerns around passwords and how IAM authentication can help mitigate them. We also demonstrated how to use Atlas to perform passwordless schema migrations on an RDS database using IAM authentication. If you use Atlas to perform schema migrations on RDS databases, we encourage you to give IAM authentication a try!

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

How to Visualize GORM Models with Atlas

· 5 min read
Rotem Tamir
Building Atlas

We recently shipped a new feature in Atlas that enables developers to quickly create ERD (Entity Relationship Diagram) visualizations of their database schemas. Today, I want to show how this feature can be used in tandem with one of Atlas's core capabilities - "Schema Loaders" - to produce visualizations of GORM models.

But first, let's introduce the different characters in this story:

  • Atlas - an open-source tool for managing database schemas.
  • GORM - one of the most popular ORMs for Go.
  • ERD - a diagram that shows the relationships between entities in a database. (we'll see an example in a minute)
  • Schema Loaders - a feature in Atlas that allows users to load their database schemas from different sources.

In this blog post, we will create a toy GORM application, and show how you can use Atlas to visualize the database schema that GORM generates for you.

Step 1: Bootstrap the GORM application

Start by creating a new directory for our project. Then, initialize a new Go module:

mkdir gormviz
cd gormviz
go mod init example.io/gormviz

Next, we will install the GORM package and the SQLite driver:

go get -u gorm.io/gorm
go get -u gorm.io/driver/sqlite

Now, let's create a file that will contain our database models. We will call it models/models.go:

models/models.go
package models

import "gorm.io/gorm"

type User struct {
  gorm.Model
  Name string
  Pets []Pet
}

type Pet struct {
  gorm.Model
  Name   string
  User   User
  UserID uint
}

Step 2: Setup Atlas

Now that we have a GORM application, let's set up Atlas. First, we will install the Atlas CLI:

curl -sSf https://atlasgo.sh | sh

For more installation options, see these instructions.

Next, let's install the GORM Atlas provider which will allow us to load our GORM models into Atlas:

go get ariga.io/atlas-provider-gorm@v0.1.0

Next, to prevent the Go Modules system from dropping this dependency from our go.mod file, let's follow the Go Modules' official recommendation for tracking tool dependencies and add a file named tools.go with the following contents:

tools.go
//go:build tools

package main

import _ "ariga.io/atlas-provider-gorm/gormschema"

Alternatively, you can simply add a blank import to the models.go file we created above.

Finally, to tidy things up, run:

go mod tidy

After the provider is installed, let's create an Atlas project file to glue everything together.

Create a new file named atlas.hcl with the following contents:

atlas.hcl
data "external_schema" "gorm" {
  program = [
    "go",
    "run",
    "-mod=mod",
    "ariga.io/atlas-provider-gorm",
    "load",
    "--path", "./models",
    "--dialect", "sqlite",
  ]
}

env "gorm" {
  src = data.external_schema.gorm.url
  dev = "sqlite://file?mode=memory&_fk=1"
}

This file defines two things:

  • An External Schema Loader named gorm that will load our GORM models into Atlas by executing the ariga.io/atlas-provider-gorm program on our models package.
  • An Environment named gorm that will use the gorm schema loader to load the models into a SQLite database in memory.

Step 3: Visualize the GORM models

Now that we have everything set up, let's run Atlas and see what we get:

atlas schema inspect -w --env gorm --url 'env://src'

Let's break down this command:

  • The schema inspect command is used to inspect a database schema and produce a representation of it.
  • The -w flag tells Atlas to open a web browser and display the visualization.
  • The --env gorm flag tells Atlas to use the gorm environment we defined in the atlas.hcl file.
  • The --url 'env://src' flag tells Atlas to use the schema defined in the src attribute of the gorm env.

When we run this command, Atlas will prompt us to ask if we want this visualization to be shared on the Public Atlas Playground or if we want to keep it private:

? Where would you like to share your schema visualization?:
  ▸ Publicly (gh.atlasgo.cloud)
    Your personal workspace (requires 'atlas login')

As this is just a toy example, let's publish it to the public playground by pressing Enter.

Once we do this, Atlas will extract the database schema from our GORM models, and publish it to the playground, printing the URL of the visualization:

https://gh.atlasgo.cloud/explore/60d94de7

Privately sharing visualizations

By publishing visualizations to the public playground, you can easily share them with others by simply sending them the URL. However, if you want to keep your visualizations private, such that they are only accessible to you and your team, you can do so by first logging in to Atlas Cloud:

atlas login

Your browser should open and prompt you to either login or create a free account if you don't already have one. Upon completion, you should see a message in your terminal that looks like this:

You are now connected to rotemtam85 on Atlas Cloud.

Once you are logged in, you can re-run the atlas schema inspect --env gorm --web command and Atlas will prompt you to ask if you want to share the visualization in your personal workspace:

? Where would you like to share your schema visualization?:
    Publicly (gh.atlasgo.cloud)
  ▸ Privately (rotemtam85.atlasgo.cloud)

Select the Privately option and press Enter. Atlas will then publish the visualization to your personal workspace and print the URL of the visualization:

https://rotemtam85.atlasgo.cloud/explore/2da80ffx

Wrapping up

In this post, we have shown how to use Atlas to visualize the database schema that GORM generates for your application. ERD visualizations are just one of the many features that Atlas provides for working with your database schema. To learn more about what you can do with the Atlas GORM provider, check out the relevant guide.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing v0.13.0: New binary, support for SQL Views, MS-SQL Driver beta program and more

· 12 min read
Rotem Tamir
Building Atlas

It's been just over two months since our last version announcement and today I'm super excited to share with you our latest release, v0.13, which includes some long-anticipated additions to Atlas:

  • Atlas and Atlas Community Editions - we are making a change in the way Atlas is distributed. Starting today, we are making many of the features previously available only in the commercial version of Atlas available to everyone for free. More on this below.

  • Support for SQL Views - SQL Views are virtual tables in a database created by a query and stored in the database. Starting today, you can manage views in your database using the same Atlas workflows you use for tables, indexes, foreign keys, etc.

  • Microsoft SQL Server Driver Beta - Over the last few months, we have been working on a driver for MS-SQL with some of our enterprise design partners and are happy to make it available to you as a beta.

  • Built-in schema visualization - Over the past year we released some tools to help developers visualize their database schemas using Entity Relationship Diagrams (ERDs). Starting today, you can use the atlas schema inspect --visualize command to create database ERDs directly from the command line.

  • GitHub Action for Deployments - We recently received a request from the community to add a GitHub Action to streamline applying migrations directly from GitHub workflows. We thought this was a great idea and are happy to introduce ariga/atlas-deploy-action which you can use to deploy your migrations today.

  • Type checking for schema files - As part of our continuous effort to make the experience of working with Atlas as smooth as possible, we are excited to ship a new engine for running type checks on your schema files.

This release is extra-packed with new features and improvements, so let's dive in!

Atlas and Atlas Community Editions

In recent months, based on strategic discussions with our advisors, we have come to realize that in order to keep fueling Atlas's rapid growth and adoption, we should give free access to some features previously available only in the commercial version. For this reason, we are making a change in the way Atlas is distributed. Here is a summary of how this is going to work:

The Atlas codebase will continue to be actively developed in our open-source repository (ariga/atlas) under the Apache 2 License. This version of Atlas will continue to be actively maintained and supported by us in community channels (GitHub, Discord, etc). We will continue to build and ship versions of this purely open-source version under the name "Atlas Community". For more information about obtaining Atlas Community binaries, see the docs.

In addition, we are starting to distribute for free (under the Atlas EULA) the binaries for our commercial product which will include features previously unavailable to the public. We plan to release many of these features to the open-source repositories over time to keep supporting more open-source innovation in the database space. To keep things short and sweet, this distribution of the project will simply be called "Atlas".

Starting today, our existing distribution channels (the atlasgo.sh installation script, Docker image, brew tap, etc.) will point to this edition, and we are adding parallel installation options for users that prefer to use the community edition.

Trying out the new features

Before we dive into the new features, let's make sure you have the latest version of Atlas installed. To do so, run the following command:

curl -sSf https://atlasgo.sh | sh

For more installation options, see the docs.

In addition, two of the following features (the MS-SQL driver and schema inspect --visualize) require you to log in to Atlas Cloud from the CLI. To do so, run the following command:

atlas login

The CLI will ask your permission to open a browser window to complete the login process.

After completing the login process, you should see a message similar to the following:

You are now connected to "acme-corp" on Atlas Cloud.

Support for Views

"Views" are virtual tables created by a query and stored in the database. Many teams use them to simplify complex queries, encapsulate logic, and present a consistent interface to users, abstracting the underlying data structures. Starting today, you can start managing views in your database using the same Atlas workflows you use for tables, indexes, foreign keys, etc. Views are currently supported for MySQL, MariaDB, PostgreSQL, MS-SQL, and SQLite.

To demonstrate how this works, let's start by creating a new database. For this example, let's run a local MySQL database using Docker:

docker run --rm -d --name atlas-demo -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=default mysql:8

Next, create a new file named schema.hcl with the following content:

schema "default" {}

table "users" {
  schema = schema.default
  column "id" {
    type = int
  }
  column "nickname" {
    type = varchar(32)
  }
  column "phone_number" {
    type = varchar(32)
  }
}

view "users_clean" {
  schema = schema.default
  column "id" {
    type = int
  }
  column "nickname" {
    type = varchar(32)
  }
  as = <<-SQL
    SELECT id, nickname
    FROM ${table.users.name}
  SQL
  depends_on = [table.users]
}

In this example, we first defined a table named users that contains some PII (the user's phone number). Next, we defined a view named users_clean that exposes only the user's id and nickname. This view can be used by other applications to access the user's data without exposing any personally identifiable information.

Next, run the following command to apply the schema to the database:

atlas schema apply -u mysql://root:pass@localhost:3306/default -f schema.hcl

Atlas will connect to the database, inspect its current schema (which is empty) and produce the following output:

-- Planned Changes:
-- Create "users" table
CREATE TABLE `default`.`users` (`id` int NOT NULL, `nickname` varchar(32) NOT NULL, `phone_number` varchar(32) NOT NULL);
-- Create "users_clean" view
CREATE VIEW `default`.`users_clean` (`id`, `nickname`) AS SELECT id, nickname
FROM users;
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
  ▸ Apply
    Abort

After reviewing the changes, press enter to apply them to the database. Atlas will connect to the database and apply the changes.

Next, run the following command to inspect the database schema:

atlas schema inspect -u mysql://root:pass@/default --format "{{ sql . \" \" }}"

Notice that we used the --format flag to instruct Atlas to output the schema as SQL statements.

Atlas will connect to the database and produce the following output:

-- Create "users" table
CREATE TABLE `users` (
  `id` int NOT NULL,
  `nickname` varchar(32) NOT NULL,
  `phone_number` varchar(32) NOT NULL
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "users_clean" view
CREATE VIEW `users_clean` (
  `id`,
  `nickname`
) AS select `users`.`id` AS `id`,`users`.`nickname` AS `nickname` from `users`;

As you can see, Atlas created the users table and the users_clean view in the database!
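
At this point, the view can be queried like any other table. For example, using the MySQL client inside the container we started earlier (the result will be empty until rows are inserted into users):

docker exec atlas-demo mysql -uroot -ppass default -e "SELECT id, nickname FROM users_clean"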

Microsoft SQL Server Driver

Microsoft SQL Server, one of the longest-standing database engines in our business, was first released by Microsoft in 1989. MS-SQL is the go-to database for Windows environments in many industries. Over the last few months, we have been working on a driver for MS-SQL with some of our enterprise design partners and are happy to make it available to you as a beta feature.

To get started with MS-SQL, you must first opt-in to its beta testing program. To do so:

  1. Login to your Atlas Cloud account using the instructions above.
  2. Head over to your Atlas Cloud account.
  3. Click on your profile picture in the bottom left corner to open the user menu and select the "Personal Settings" option.
  4. Select the "Microsoft SQL Server" checkbox from the driver list and click on the "Save" button to save your changes.

Next, run the following command to start a local MS-SQL server using Docker:

docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 -d mcr.microsoft.com/mssql/server:latest

Notice that by providing the ACCEPT_EULA environment variable you confirm your acceptance of Microsoft's End-User License Agreement.

Create a new file named schema.hcl with the following content:

schema "dbo" {
}
table "users" {
  schema = schema.dbo
  column "id" {
    type = int
  }
  column "nickname" {
    type = varchar(32)
  }
}

With this file ready, run the following command to apply the schema to the database:

atlas schema apply \
  --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
  --to "file://schema.hcl"

Atlas will connect to the database, inspect its current schema (which is empty) and produce the following output:

-- Planned Changes:
-- Create "users" table
CREATE TABLE [dbo].[users] ([id] int NOT NULL, [nickname] varchar(32) NOT NULL);
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
  ▸ Apply
    Abort

After reviewing the changes, press enter to apply them to the database. Atlas will connect to the database and apply the changes.

Next, run the following command to inspect the database:

atlas schema inspect \
  --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master"

Atlas will connect to the database and produce the following output:

table "users" {
  schema = schema.dbo
  column "id" {
    null = false
    type = int
  }
  column "nickname" {
    null = false
    type = varchar(32)
  }
}
schema "dbo" {
}

Great! Atlas created the users table in the database!

Do you use MS-SQL? We would love to hear feedback about your experience using this driver. Feel free to give it a try and reach out on our Discord channel to let us know what you think.

Built-in Schema Visualization

Over the past year we have released some tools to help developers visualize their database schemas using Entity Relationship Diagrams (ERDs). The success of these experiments (especially for Ent and Django) convinced us that we should provide this functionality natively in Atlas. So starting today, you can use the atlas schema inspect --visualize and atlas schema diff --visualize commands to create database ERDs directly from the command line.

To get started, create a new schema file named schema.hcl:

schema "default" {}

table "t1" {
  schema = schema.default
  column "id" {
    type = int
  }
  primary_key {
    columns = [column.id]
  }
}

table "t2" {
  schema = schema.default
  column "id" {
    type = int
  }
  column "t1_id" {
    type = int
  }
  foreign_key "t1_id" {
    columns     = [column.t1_id]
    ref_columns = [table.t1.column.id]
  }
}

Next, run the following command to generate the ERD:

atlas schema inspect --dev-url docker://mysql/8/d --url file://schema.hcl --visualize

Atlas will prompt you to check where you would like to share the ERD:

Use the arrow keys to navigate: ↓ ↑ → ←
? Where would you like to share your schema visualization?:
  ▸ Publicly (gh.atlasgo.cloud)
    Privately (acme-corp.atlasgo.cloud)

After selecting the desired option, Atlas will generate the ERD and print out the URL where you can access it:

https://gh.atlasgo.cloud/explore/cdd7bee5
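
The atlas schema diff --visualize variant mentioned above works the same way. Here is a hedged sketch that compares two hypothetical local schema files using the same dev database (file names are placeholders):

atlas schema diff \
  --from "file://schema_v1.hcl" \
  --to "file://schema_v2.hcl" \
  --dev-url "docker://mysql/8/d" \
  --visualize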

ariga/atlas-deploy GitHub Action

GitHub Actions is a powerful tool that allows developers to automate workflows around GitHub repositories. We have previously published GitHub Actions for running CI for schema changes and for syncing your migration directory with Atlas Cloud.

Today, we are happy to announce the release of the ariga/atlas-deploy-action to the GitHub Actions Marketplace. This action allows you to run migrations directly from your GitHub Actions workflow. To get started, add the following step to your workflow:

name: Deploy Database Migrations
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy Atlas Migrations
        uses: ariga/atlas-deploy-action@v0
        with:
          url: ${{ secrets.DATABASE_URL }}
          dir: path/to/migrations

Type checking for schema files

I firmly believe that great developer experiences aren't created by any one single design decision but are rather a result of a relentless commitment to improving things little by little over long periods of time. As we work with Atlas as users every day, we constantly look for points of friction and frustration and try to find ways to make them better.

Sometimes, these changes are small and simple, like improving the help text of a command or updating a documentation page. Other times, they are more complex, and require building more fundamental infrastructure.

I give all of this as an intro to a feature that I am very excited about: type checking for the HCL schema. Starting with this release, we are shipping a type checker engine that will give you more precise and actionable feedback when you make mistakes in your schema files.

We have come as an industry to appreciate that a fast feedback loop is one of the most prominent factors in developer productivity, and it all starts in the local development environment.

Type checking validates the contents of your schema files and provides you with actionable feedback in case you make a mistake. For example, suppose you define a table like this (try spotting the errors yourself before reading on):

schema.hcl
table "users" {
  column "id" {
    type = int
  }
  column "name" {
    type = varchar(255)
  }
  column "manager_id" {
    type = int
  }
  primray_key {
    columns = [
      column.id
    ]
  }
  index "idx_name" {
    coulmns = [
      column.name
    ]
    unique = true
  }
  foreign_key {
    columns = [column.manager_id]
    ref_columns = [column.id]
  }
}

Trying to apply this schema will produce the following error:

Error: failed parsing mysql schema files: schema.hcl
schema.hcl:1,1-6: missing required attribute schema in block table
schema.hcl:11,3-14: unknown block type primray_key in block of type table (Did you mean primary_key?)
schema.hcl:17,5-19,6: unknown attribute coulmns in block of type index (Did you mean columns?)
schema.hcl:22,3-14: foreign_key block must have exactly 1 label

This type checking engine is available on the Atlas CLI and will be integrated into our editor plugins for VSCode and (the upcoming) JetBrains IDEs.

Wrapping up

This is it! I know this was a long post, but we are very excited about all of these new features and we hope you will find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

Bridging the gap between IaC and Schema Management

· 11 min read
Rotem Tamir
Building Atlas

Introduction

When we started building Atlas a couple of years ago, we noticed that there was a substantial gap between what was then considered state-of-the-art in managing database schemas and the recent strides from Infrastructure-as-Code (IaC) to managing cloud infrastructure.

In this post, we review that gap and show how Atlas – along with its Terraform provider – can bridge the two domains.

As an aside, I usually try to keep blog posts practical and to the point, but occasionally think it’s worth it to zoom out and explain the grander ideas behind what we do.

If you’re looking for a quick and practical explanation of working with Atlas and Terraform, I recommend this YouTube video.

Why Infrastructure-as-Code

Infrastructure as Code (IaC) refers to the practice of managing and provisioning infrastructure through machine-readable configuration files, instead of utilizing traditional interactive configuration tools. This approach makes for automated, consistent, and repeatable deployment of environments that are faster and less error-prone than previous, more manual approaches.

Terraform, a popular open-source tool created by HashiCorp, is the most prominent implementation of the IaC concept. With Terraform, organizations can describe the desired state of their infrastructure in a simple configuration language (HCL) and let Terraform plan and apply these changes in an automated way.

Terraform (and IaC in general) has taken the software engineering world by storm in recent years. As someone who had the dubious pleasure of managing complex cloud infrastructure manually, using what is today jokingly called "ClickOps", I can mention a few properties of IaC that I believe contributed to this success:

  • Declarative – Terraform is built on a declarative workflow, which means that users only define the final (desired) state of their system. Terraform is responsible for inspecting the target environment, calculating the difference between the current and desired states, and building a plan for reconciling between those two states.

    Cloud infrastructures are becoming increasingly complex, comprising thousands of different, interconnected components. Declarative workflows greatly reduce the mental overhead of planning changes to such environments.

  • Automated – Many engineers can attest that manually provisioning a new environment used to take days, even weeks! Once Terraform generates a plan for changing environments, the process runs automatically and finishes in a matter of minutes.

  • Holistic – With Terraform, it is possible to capture all of the resources and configurations required to provision an application as one interconnected and formally defined dependency graph. Deployments become truly reproducible and automated, with no dangling or manually provisioned dependencies.

  • Self-healing – Finally, these three properties converge to support a self-healing tool that can detect and fix drift on its own. Whenever drift occurs, it is only a matter of re-running Terraform to shift from the current state back to the desired one.

Comparing IaC with Schema Management Tools

Next, let’s discuss the current state of database schema management tools (often called schema migration tools) by contrasting them with the properties of IaC.

  • Imperative – If Terraform embodies the declarative approach, then schema management tools often exemplify the opposite, imperative (or revision-based) approach. In this case, we don’t provide the tools with the what (the desired state of the database), but the how (what SQL commands need to run to migrate the database from the previous version to the next).

  • Semi-automated – Migration tools were revolutionary when they came out a decade ago. One idea stood as one of the harbingers of the GitOps philosophy: that database changes should not be applied manually but first checked into source control and then applied automatically by a tool.

    Today’s migration tools automate two aspects of schema management: 1) execution and 2) tracking which migrations were already executed on a target database.

    Compared to modern IaC tools, however, they are fairly manual. In other words, they leave the responsibility of planning and verifying the safety of changes to the user.

  • Fragmented – As we described above, one of the most pleasant aspects of adopting the IaC mindset is having a unified, holistic description of your infrastructure, to the point where you can entirely provision it from a single terraform apply command.

    For database schema management, common practices are anything but holistic. In some cases, provisioning the schema might happen 1) when application servers boot, before starting the application, or 2) while it runs as an init container on Kubernetes.

    In fact, some places (yes, even established companies) still have developers manually connect (with root credentials) to the production database to execute schema changes!

  • A pain to fix – When a migration deployment fails, many schema management tools will actually get in your way. Instead of worrying about fixing the issue at hand, you now need to worry about both your database and the way your migration tool sees it (which have now diverged).

Bridging the Gap

After describing the gap between IaC and database schema management in more detail, let’s delve into what it would take to bridge it. Our goal is to have schema management become an integral part of your day-to-day IaC pipeline so that you can enjoy all the positive properties we described above.

To integrate schema change management and IaC, we would need to solve two things:

  1. A diffing engine capable of supporting declarative migration workflows. Such an engine should be capable of:
    • Loading the desired schema of the database in some form
    • Inspecting the current schema of the database
    • Calculating a safe migration plan automatically
  2. A Terraform Provider that wraps the engine as a Terraform resource, which can then seamlessly integrate into your overall application infrastructure configuration.

How Atlas drives Declarative Migrations

Atlas is a language-agnostic tool for managing and migrating database schemas using modern DevOps principles. It is different from Terraform in many ways, but similar enough to have received the informal nickname "Terraform for Databases".

At its core lie three capabilities that make it ideal to apply a declarative workflow to schema management:

  1. Schema loaders
  2. Schema inspection
  3. Diffing and planning

Let’s discuss each of these capabilities in more detail.

Schema loaders

Every declarative workflow begins with the desired state - what we want the system to look like. Using a mechanism called "schema loaders" Atlas users can provide the desired schema in many ways. For example:

Plain SQL

Atlas users can describe the desired schema of the database using plain SQL DDL statements such as:

CREATE TABLE users (
  Id   int primary key,
  Name varchar(255)
)

Atlas HCL

Alternatively, users can use Atlas HCL, a configuration language that shares Terraform’s configuration language foundations:

table "users" {
  schema = schema.public
  column "id" {
    type = int
  }
  column "name" {
    type = varchar(255)
  }
  column "manager_id" {
    type = int
  }
  primary_key {
    columns = [
      column.id
    ]
  }
}

A live database

In addition, users can provide Atlas with a connection to an existing database which in turn Atlas can inspect and use as the desired state of the database.
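
For example, a running staging database can serve directly as the desired state of a declarative apply. A rough sketch with placeholder connection URLs:

atlas schema apply \
  --url "mysql://root:pass@prod-db:3306/app" \
  --to "mysql://root:pass@staging-db:3306/app" \
  --dev-url "docker://mysql/8/app"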

External Schemas (ORM)

Finally, Atlas has an easily extensible design which makes writing plugins to load schemas from external sources a breeze. For example, Atlas can read the desired schema of the database directly from your ORM, using a simple integration.

Schema inspection

Once Atlas understands the desired state of the database, it needs to inspect the existing database to understand its current schema. This is done by connecting to the target database and querying the database’s information schema to construct a schema graph (an in-memory representation of all the components in the database and their connections).
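
This is the same inspection that the CLI exposes directly. For example, to inspect a local MySQL database, as shown elsewhere on this blog:

atlas schema inspect -u "mysql://root:pass@localhost:3306/example"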

Diffing and planning

The next phase involves calculating the difference ("diffing") between the desired and current states and calculating an execution plan to reconcile this difference. Because resources are often interconnected, Atlas must create a sensible order of execution using algorithms such as Topological Sort to ensure, for example, that dependencies on a resource are removed before it is dropped.

In addition, each database engine has its own peculiarities and limitations to take into account when creating an execution plan. For example, adding a default value to a column in an SQLite database must be performed in a multiple-step plan that looks similar to this:

-- Planned Changes:
-- Create "new_users" table
CREATE TABLE `new_users` (`id` int NOT NULL, `greeting` text NOT NULL DEFAULT 'shalom')
-- Copy rows from old table "users" to new temporary table "new_users"
INSERT INTO `new_users` (`id`, `greeting`) SELECT `id`, IFNULL(`greeting`, 'shalom') AS `greeting` FROM `users`
-- Drop "users" table after copying rows
DROP TABLE `users`
-- Rename temporary table "new_users" to "users"
ALTER TABLE `new_users` RENAME TO `users`

Atlas in action

What does this workflow look like in practice? As you can see in Atlas's "Getting Started" guide, suppose we made a change to our desired schema that adds a new table named blog_posts (this change may be described in a plain SQL file, an HCL file or even in your ORM's data model).

To apply the desired schema on a target database you would use the schema apply command:

atlas schema apply \
  -u "mysql://root:pass@localhost:3306/example" \
  --to file://schema.sql \
  --dev-url "docker://mysql/8/example"

After which Atlas will generate a plan:

-- Planned Changes:
-- Create "blog_posts" table
CREATE TABLE `example`.`blog_posts` (`id` int NOT NULL, `title` varchar(100) NULL, `body` text NULL, `author_id` int NULL, PRIMARY KEY (`id`), INDEX `author_id` (`author_id`), CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`))
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure?:
  ▸ Apply
    Abort

Observing this example, you may begin to understand how Atlas earned its nickname the "Terraform for Databases."

Integrating with Terraform

The second piece of bridging the gap is to create a Terraform Provider that wraps Atlas and allows users to define resources that represent the schema definition as part of their infrastructure.

Ariga (the company behind Atlas) is an official HashiCorp Tech Partner that publishes the Atlas Terraform Provider, which was created to solve this problem precisely.

Using the Atlas Terraform Provider, users can finally provision their database instance and its schema in one holistic definition. For example, suppose we provision a MySQL database using AWS RDS:

// Our RDS-based MySQL 8 instance.
resource "aws_db_instance" "atlas-demo" {
  identifier     = "atlas-demo"
  instance_class = "db.t3.micro"
  engine         = "mysql"
  engine_version = "8.0.28"
  // Some fields skipped for brevity
}

Next, we load the desired schema from an HCL file, using the Atlas Provider:

data "atlas_schema" "app" {
src = "file://${path.module}/schema.hcl"
}

Finally, we use the atlas_schema resource to apply our schema to the database:

// Apply the normalized schema to the RDS-managed database.
resource "atlas_schema" "hello" {
  hcl = data.atlas_schema.app.hcl
  url = "mysql://${aws_db_instance.atlas-demo.username}:${urlencode(random_password.password.result)}@${aws_db_instance.atlas-demo.endpoint}/"
}

You can find a full example here.
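Note that for Terraform to download the Atlas provider, it must also be declared in the required_providers block. A minimal sketch (the version constraint is omitted here for brevity):

terraform {
  required_providers {
    atlas = {
      source = "ariga/atlas"
    }
  }
}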

When we run terraform apply, this is what will happen:

  • Terraform will provision the RDS database using the AWS Provider
  • Terraform will use Atlas to inspect the existing schema of the database and load the desired state from a local HCL file.
  • Atlas will calculate a SQL plan for Terraform to reconcile the two.

And this is how it may look in the Terraform plan:

Terraform will perform the following actions:

  # atlas_schema.hello will be created
  + resource "atlas_schema" "hello" {
      + hcl = <<-EOT
            table "posts" {
              schema = schema.app
              column "id" {
                null = false
                type = int
              }
              column "user_id" {
                null = false
                type = int
              }
              column "title" {
                null = false
                type = varchar(255)
              }
              column "body" {
                null = false
                type = text
              }
              primary_key {
                columns = [column.id]
              }
              foreign_key "posts_ibfk_1" {
                columns     = [column.user_id]
                ref_columns = [table.users.column.id]
                on_update   = NO_ACTION
                on_delete   = CASCADE
              }
              index "user_id" {
                columns = [column.user_id]
              }
            }
            table "users" {
              schema = schema.app
              column "id" {
                null = false
                type = int
              }
              column "user_name" {
                null = false
                type = varchar(255)
              }
              column "email" {
                null = false
                type = varchar(255)
              }
              primary_key {
                columns = [column.id]
              }
            }
            schema "app" {
              charset = "utf8mb4"
              collate = "utf8mb4_0900_ai_ci"
            }
        EOT
      + id  = (known after apply)
      + url = (sensitive value)
    }

  # aws_db_instance.atlas-demo will be created
  + resource "aws_db_instance" "atlas-demo" {
      // .. redacted for brevity
    }

And that's how you bridge the gap between IaC and schema management!

Conclusion

In this blog post, we reviewed some exceptional properties of Infrastructure-as-Code tools, such as Terraform, that have led to their widespread adoption and success in the industry. We then reviewed the current state of a similar problem, database schema management, in contrast to these properties. Finally, we showcased Atlas’s ability to adapt some IaC principles into the domain of schema management and how we can unify the two domains using the Atlas Terraform Provider.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Versioned migrations on Kubernetes with the Atlas Operator

· 7 min read
Rotem Tamir
Building Atlas
TL;DR

You can now use the Atlas Kubernetes Operator to run versioned migrations natively from your Kubernetes cluster.

See an example

Introduction

The Atlas Kubernetes Operator is a Kubernetes operator that enables you to manage your database schemas natively from your Kubernetes cluster. By exposing custom resource definitions (CRDs), the operator extends the Kubernetes API to support database schema management.
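Once the operator is installed (as we do in the demo below), you can list these CRDs to see what was added to your cluster, for example:

kubectl get crds | grep atlasgo.io

You should see CRDs under the db.atlasgo.io group, including one for the AtlasMigration resource used in this post.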

In a previous blog post we demonstrated how to use the Atlas Operator for the declarative (state-based) workflow in which you define the desired state of your database schema in a Kubernetes manifest and the operator takes care of the rest.

State-based vs. versioned migrations is a common and unresolved debate in the database schema management world, and we built Atlas to support both from the get-go.

Today, we are happy to announce v0.2.0 of the Atlas Kubernetes Operator, which adds support for the versioned migration workflow.

In this blog post we will demonstrate how to use the Atlas Operator with this new workflow.

How it works

The Atlas Kubernetes Operator supports versioned migrations. In versioned migrations, the database schema is defined by a series of SQL scripts ("migrations") that are applied in order. The user can specify the version and migration directory to run, which can be located in Atlas Cloud or stored as a ConfigMap in your Kubernetes cluster.

In this workflow, after installing the Atlas Kubernetes Operator, the user defines the desired state of the database as an AtlasMigration resource which connects between a target database and a migration directory. The migration directory may be configured as a remote directory in Atlas Cloud or as a ConfigMap in your Kubernetes cluster.

The operator then reconciles the desired state with the actual state of the database by applying any pending migrations on the target database.

Demo time

In this demo we will use the Atlas Kubernetes Operator to run versioned migrations on a MySQL database.

Prerequisites

  1. A Kubernetes cluster - you can use Minikube to quickly spin up a local cluster.
  2. kubectl configured to connect to your cluster.
  3. Helm, the Kubernetes package manager, locally installed.
  4. The Atlas CLI tool, locally installed.
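If you don't have a cluster handy, a local one can be spun up with Minikube, for example:

minikube start
kubectl cluster-info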

1. Install the Atlas Kubernetes Operator

The Atlas Kubernetes Operator is available as a Helm Chart. To install the chart with the release name atlas-operator:

helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator
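Before moving on, you may want to confirm the operator pod is running (pod names and labels may vary between chart versions):

kubectl get pods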

2. Install a database

Create a MySQL database and a secret with an Atlas URL to the database:

kubectl apply -f https://raw.githubusercontent.com/ariga/atlas-operator/65dce84761354d1766041c7f286b35cc24ffdddb/config/integration/databases/mysql.yaml

Result:

deployment.apps/mysql created
service/mysql created
secret/mysql-credentials created

In this example, we are using a plain MySQL pod as a database. In a real-world scenario, you would probably use a managed database service such as Amazon RDS or Google Cloud SQL.

3. Set up a migration directory

With the operator and the database running, let's set up the migration directory which we will use to manage our database schema.

You can use the directory from an existing project, but for the sake of this demo we will use the Atlas template repo which contains a simple migration directory.

git clone git@github.com:ariga/atlas-template.git versioned-demo

Observe that this directory contains a migrations directory with a couple of migration scripts:

cd versioned-demo
tree migrations
migrations
├── 20230316085611.sql
├── 20230316090502.sql
└── atlas.sum

4. Create a ConfigMap with the migration directory

The operator supports two ways to manage your migration directory:

  • Atlas Cloud - a cloud-based directory that is managed by Atlas.
  • ConfigMap - a Kubernetes resource that contains the migration directory files as key-value pairs.

In this demo we will use a ConfigMap to manage our migration directory. To create a ConfigMap with the migration directory files:

kubectl create configmap migrations --from-file=migrations -o yaml --dry-run=client --save-config > migrations.yaml

The above command creates a YAML file named migrations.yaml with the migration directory files. It should look something like this:

apiVersion: v1
data:
  20230316085611.sql: |
    -- Create "users" table
    CREATE TABLE `users` (
      `id` int NOT NULL,
      `user_name` varchar(255) NOT NULL,
      `email` varchar(255) NOT NULL,
      PRIMARY KEY (`id`)
    ) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
  20230316090502.sql: |
    -- Create "posts" table
    CREATE TABLE `posts` (
      `id` int NOT NULL,
      `user_id` int NOT NULL,
      `title` varchar(255) NOT NULL,
      `body` text NOT NULL,
      PRIMARY KEY (`id`),
      INDEX `user_id` (`user_id`),
      CONSTRAINT `posts_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE CASCADE
    ) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
  atlas.sum: |
    h1:XBXbh+rzLis8gknjlIqnxXLBkOZ+sN2v2p7KjyVFYYM=
    20230316085611.sql h1:br6W6LPEnnsejlz/7hRm9zthwStCzjN2vZkqVPxlmvo=
    20230316090502.sql h1:GfeRjkSeoCt3JVRtLQNa/r50lRfpAPXS7AqTU2ZNFgY=
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"migrations","creationTimestamp":null},"data":{"20230316085611.sql":"-- Create \"users\" table\nCREATE TABLE `users` (\n `id` int NOT NULL,\n `user_name` varchar(255) NOT NULL,\n `email` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;\n","20230316090502.sql":"-- Create \"posts\" table\nCREATE TABLE `posts` (\n `id` int NOT NULL,\n `user_id` int NOT NULL,\n `title` varchar(255) NOT NULL,\n `body` text NOT NULL,\n PRIMARY KEY (`id`),\n INDEX `user_id` (`user_id`),\n CONSTRAINT `posts_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE CASCADE\n) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;\n","atlas.sum":"h1:XBXbh+rzLis8gknjlIqnxXLBkOZ+sN2v2p7KjyVFYYM=\n20230316085611.sql h1:br6W6LPEnnsejlz/7hRm9zthwStCzjN2vZkqVPxlmvo=\n20230316090502.sql h1:GfeRjkSeoCt3JVRtLQNa/r50lRfpAPXS7AqTU2ZNFgY=\n"}}
  name: migrations

Apply the ConfigMap to your cluster:

kubectl apply -f migrations.yaml

Kubernetes will create a ConfigMap named migrations with the migration directory files:

configmap/migrations created

5. Create an AtlasMigration resource

Now that we have a database and a migration directory, we can create an AtlasMigration resource to manage our database schema. The AtlasMigration resource is a custom resource that you use to define the desired state of your database schema. The operator will then reconcile the actual state of your database schema with the desired state.

To create an AtlasMigration resource, create a YAML file named atlas-migration.yaml with the following content:

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlas-migration
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: mysql-credentials
  dir:
    configMapRef:
      name: "migrations"

After you create the atlas-migration.yaml file, apply it to your cluster:

kubectl apply -f atlas-migration.yaml

Next, let's wait for the resource to enter a "Ready" state:

kubectl wait --for=condition=Ready atlasmigration/atlas-migration

When the operator finishes reconciling the AtlasMigration resource, the AtlasMigration resource will be ready:

atlasmigration.db.atlasgo.io/atlas-migration condition met
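If the resource does not become ready, you can inspect its reported status and recent events for details:

kubectl describe atlasmigration atlas-migration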

6. Verify the migrations were applied

Finally, to verify the migrations were applied, connect to the database and check to see if the users table was created:

kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.users"

You should see the following output:

+-----------+--------------+------+-----+---------+-------+
| Field     | Type         | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| id        | int          | NO   | PRI | NULL    |       |
| user_name | varchar(255) | NO   |     | NULL    |       |
| email     | varchar(255) | NO   |     | NULL    |       |
+-----------+--------------+------+-----+---------+-------+

Check that the posts table was created as well:

kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe myapp.posts"

You should see the following output:

+---------+--------------+------+-----+---------+-------+
| Field   | Type         | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| id      | int          | NO   | PRI | NULL    |       |
| user_id | int          | NO   | MUL | NULL    |       |
| title   | varchar(255) | NO   |     | NULL    |       |
| body    | text         | NO   |     | NULL    |       |
+---------+--------------+------+-----+---------+-------+

And that's it! You've successfully deployed the Atlas Operator and applied migrations to your database.

Conclusion

In this blog post, we showed you how to use the Atlas Operator to manage your database schema in Kubernetes using a versioned migrations workflow. To learn more about the Atlas Operator, check out the Atlas Operator GitHub repository as well as the documentation on the Atlas website.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing External Schemas and magical GORM support

· 7 min read
Rotem Tamir
Building Atlas
TL;DR

You can now import the desired database schema from any ORM or other tool into Atlas, and use it to automatically plan migrations for you.

See an example

Introduction

Today, I'm happy to share with you one of the most exciting features we've added to Atlas since its inception: "External Schemas".

Atlas is a modern tool for managing your database schema. It allows you to inspect, plan, lint and execute schema changes to your database. It is designed to be used by developers, DBAs and DevOps engineers alike.

Schema-as-Code

Atlas is built around the concept of database "Schema-as-Code", which means that you define the desired schema of your database in a declarative way, and Atlas takes care of planning and executing the necessary migrations to get your database to the desired state. The goal of this approach is to let organizations build a single source of truth for complex data topologies, and to make it easy to collaborate on schema changes.

Schema Loaders

To achieve this goal, Atlas provides support for "Schema Loaders" which are different mechanisms for loading the desired state of your database schema into Atlas. Until today, Atlas supported a few ways to load your schema:

  • Using Atlas DDL - an HCL based configuration language for defining database schemas.
  • Using Plain SQL - a simple way to define your schema using plain SQL files (CREATE TABLE statements, etc.)
  • From an existing database - Atlas can connect to your database and load the schema from it.
  • The Ent ORM - Atlas can load the schema of your Ent project.

Today, we are adding support for "External Schemas", which means that you can now import the desired database schema from any ORM or other tool into Atlas, and use it to automatically plan migrations and execute them for you.

How do External Schemas work?

External Schemas are implemented using a new type of Datasource called external_schema. The external_schema data source enables the import of an SQL schema from an external program into Atlas' desired state. With this data source, users have the flexibility to represent the desired state of the database schema in any language.

To use an external_schema, create a file named atlas.hcl with the following content:

data "external_schema" "example" {
program = [
"echo",
"create table users (name text)",
]
}

env "local" {
src = data.external_schema.example.url
dev = "sqlite://file?mode=memory&_fk=1"
}

In this dummy example, we use the echo command to generate a simple SQL schema. In a real-world scenario, you would use a program that understands your ORM or tool of choice to generate the desired schema. Some ORMs support this out-of-the-box, such as Laravel Eloquent's schema:dump command, while others require some simple integration work to extract the schema.
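With the atlas.hcl file above in place, the loaded schema can be applied to a target database through the local env, for instance (the target URL is illustrative):

atlas schema apply --env local -u "sqlite://demo.db"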

In the next section we will present the GORM Atlas Provider and how it can be used to seamlessly integrate a GORM-based project with Atlas.

Demo Time

GORM is a popular ORM widely used in the Go community. GORM allows users to manage their database schemas using its AutoMigrate feature, which is usually sufficient during development and in many simple cases.

However, at some point, teams need more control and decide to employ the versioned migrations methodology. Once this happens, the responsibility for planning migration scripts and making sure they are in line with what GORM expects at runtime is moved to developers.

Atlas can automatically plan database schema migrations for developers using GORM. Atlas plans migrations by calculating the diff between the current state of the database, and its desired state.

In the context of versioned migrations, the current state can be thought of as the database schema that would have been created by applying all previous migration scripts.

Installation

If you haven't already, install Atlas on macOS or Linux by running:

curl -sSf https://atlasgo.sh | sh

See atlasgo.io for more installation options.

Install the provider by running:

go get -u ariga.io/atlas-provider-gorm

Standalone vs Go Program mode

The Atlas GORM Provider can be used in two modes:

  • Standalone - If all of your GORM models exist in a single package, and either embed gorm.Model or contain gorm struct tags, you can use the provider directly to load your GORM schema into Atlas.
  • Go Program - If your GORM models are spread across multiple packages, or do not embed gorm.Model or contain gorm struct tags, you can use the provider as a library in your Go program to load your GORM schema into Atlas.
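Go Program mode is covered in the GORM Guide, but as a rough sketch it amounts to a small loader program along these lines (the import path of the models package is hypothetical, and the gormschema API shown is an assumption of this sketch; consult the guide for the authoritative version):

package main

import (
	"fmt"
	"io"
	"os"

	"ariga.io/atlas-provider-gorm/gormschema"

	"example.com/app/models" // hypothetical import path of your models package
)

func main() {
	// Render the desired SQL schema for the listed GORM models.
	// The gormschema API used here is an assumption of this sketch.
	stmts, err := gormschema.New("mysql").Load(&models.User{}, &models.Pet{})
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to load gorm schema: %v\n", err)
		os.Exit(1)
	}
	io.WriteString(os.Stdout, stmts)
}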

Standalone mode

If all of your GORM models exist in a single package, and either embed gorm.Model or contain gorm struct tags, you can use the provider directly to load your GORM schema into Atlas.

In your project directory, create a new file named atlas.hcl with the following contents:

data "external_schema" "gorm" {
program = [
"go",
"run",
"-mod=mod",
"ariga.io/atlas-provider-gorm",
"load",
"--path", "./path/to/models",
"--dialect", "mysql", // | postgres | sqlite
]
}

env "gorm" {
src = data.external_schema.gorm.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
format {
migrate {
diff = "{{ sql . \" \" }}"
}
}
}

In this example, we use the go run command to run the atlas-provider-gorm program and load the schema from the ./path/to/models directory. The atlas-provider-gorm program will scan the directory for GORM models and generate the desired schema for them. The --dialect flag is used to specify the database dialect that the schema should be generated for. The atlas-provider-gorm program supports the following dialects: mysql, postgres, and sqlite.

Beyond the rough sketch above, we will not review the Go program mode in detail in this post, but you can find more information about it in the GORM Guide.

External schemas in action

Atlas supports a versioned migrations workflow, where each change to the database is versioned and recorded in a migration file. You can use the atlas migrate diff command to automatically generate a migration file that will migrate the database from its latest revision to the current GORM schema.

Suppose we have the following GORM models in our models package:

package models

import "gorm.io/gorm"

type User struct {
	gorm.Model
	Name string
	Pets []Pet
}

type Pet struct {
	gorm.Model
	Name   string
	User   User
	UserID uint
}

We can now generate a migration file by running this command:

atlas migrate diff --env gorm 

Observe that files similar to this were created in the migrations directory:

migrations
|-- 20230627123246.sql
`-- atlas.sum

0 directories, 2 files

Examining the contents of 20230627123246.sql:

-- Create "users" table
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`created_at` datetime(3) NULL,
`updated_at` datetime(3) NULL,
`deleted_at` datetime(3) NULL,
`name` longtext NULL,
PRIMARY KEY (`id`),
INDEX `idx_users_deleted_at` (`deleted_at`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- Create "pets" table
CREATE TABLE `pets` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`created_at` datetime(3) NULL,
`updated_at` datetime(3) NULL,
`deleted_at` datetime(3) NULL,
`name` longtext NULL,
`user_id` bigint unsigned NULL,
PRIMARY KEY (`id`),
INDEX `fk_users_pets` (`user_id`),
INDEX `idx_pets_deleted_at` (`deleted_at`),
CONSTRAINT `fk_users_pets` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Amazing! Atlas automatically generated a migration file that will create the pets and users tables in our database!

Next, alter the models.Pet struct to add a Nickname field:

type Pet struct {
	gorm.Model
	Name     string
+	Nickname string
	User     User
	UserID   uint
}

Re-run this command:

atlas migrate diff --env gorm 

Observe a new migration file is generated:

-- Modify "pets" table
ALTER TABLE `pets` ADD COLUMN `nickname` longtext NULL;
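To actually run these migrations against a target database, you would typically use the atlas migrate apply command, for example (the connection URL is illustrative):

atlas migrate apply \
  --dir "file://migrations" \
  --url "mysql://root:pass@localhost:3306/example"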

Conclusion

In this post, we have presented External Schemas and shown how they can be used to automatically load the desired database schema directly from your ORM models. We have also demonstrated how to use the GORM Atlas Provider to automatically plan migrations for your GORM models.

We believe that this is a huge step forward in making Atlas more accessible to developers who are already using ORMs in their projects. We hope that you will find this feature useful and we look forward to hearing your feedback.

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

Announcing SOC2 Compliance for Atlas Cloud

· 3 min read
Rotem Tamir
Building Atlas

Today we are happy to announce that Atlas Cloud, our cloud offering, has achieved SOC2 compliance. This is a big milestone for us, one that reflects our commitment to providing solid infrastructure for our users and customers.

SOC2 is a security and compliance standard that helps organizations demonstrate their ability to protect customer data and ensure the availability of their services. It’s like an independent third-party audit that evaluates how well a company follows industry-standard security practices, covering areas such as availability, processing integrity, confidentiality, and privacy.

Achieving SOC2 compliance requires a significant investment in time, effort, and resources, so you may be wondering why we decided to pursue this goal so early in the life of our product. The knee-jerk response of any seasoned engineer to large and long infrastructure projects should be:

YAGNI. You ain’t gonna need it.

When building software systems, we often spend a huge amount of time developing abstractions and tooling, only to find out that product requirements changed, rendering our work useless.

The first commandment of the lean movement: Waste Not. Do the minimum you can to learn what will work. Do less to move fast. But there’s another side to this coin: teams with solid infrastructure move way faster. Try driving a sportscar on a shabby dirt road.

So why invest resources in compliance early on?

First of all, are we gonna need it? What are the odds that we won’t need the SOC2 certification, and that it won’t bring us business value?

Ariga is an open-core company building tools for software engineering teams. We are building Atlas Cloud to be the safest, fastest, and richest way for organizations to manage database schema changes.

We believe that in order to earn the trust of other organizations, and for them to grant us the privilege of being infrastructure to their business, we must hold ourselves accountable to rigid standards.

To be perfectly honest, the vast majority of the things we were required to demonstrate in the compliance process, such as mandatory code reviews, disaster recovery, and data privacy controls, are things we simply consider to be solid engineering practices that we hold ourselves to regardless of any external auditing process.

We are proud of this accomplishment and look forward to continuing to provide our users with the best possible experience using Atlas Cloud. We will continue to invest in our security and compliance programs to ensure we stay ahead of the curve and remain a trusted partner to our customers.