
Automatic Migrations for SQL Server with Atlas

Microsoft SQL Server is a powerful relational database management system that has been one of the prominent enterprise-grade data solutions for decades. Commonly used by enterprises, SQL Server can efficiently handle growing amounts of data and users, making it easy to scale.

However, managing a large database schema in SQL Server can be challenging due to the complexity of related data structures and the need for coordinated schema changes across multiple teams and applications.

Enter: Atlas

Atlas helps developers manage their database schema as code - abstracting away the intricacies of database schema management. With Atlas, users provide the desired state of the database schema and Atlas automatically plans the required migrations.

In this guide, we will dive into setting up Atlas for SQL Server, and introduce the different workflows available.

Prerequisites

  1. Docker
  2. Atlas installed on your machine:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Logging in to Atlas

To use SQL Server with Atlas, you'll need to log in to Atlas. If it's your first time, you will be prompted to create both an account and a workspace (organization):

atlas login

Inspecting our Database

Let's start off by spinning up a database using Docker:

docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 --name atlas-demo -d mcr.microsoft.com/mssql/server:latest

For this example, we will begin with a minimal database containing a users table with an ID column as the primary key.

CREATE TABLE users (
  ID bigint PRIMARY KEY,
  name varchar(255) NOT NULL
);

To create this on our SQL Server database, run the following command:

docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995' -Q "CREATE TABLE users (ID bigint PRIMARY KEY, name varchar(255) NOT NULL);"

The atlas schema inspect command supports reading the database description provided by a URL and outputting it in different formats, including Atlas DDL (default), SQL, and JSON. In this guide, we will demonstrate the flow using both the Atlas DDL and SQL formats, as the JSON format is often used for processing the output using jq.
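Once you have the schema as JSON (for example via `atlas schema inspect -u "..." --format '{{ json . }}'`), it can be post-processed with jq or any scripting language. The sketch below uses Python instead of jq; the JSON shape shown is a trimmed, illustrative assumption, not the exact output format:

```python
import json

# A trimmed, hypothetical example of JSON-formatted inspect output.
# The real output contains more detail; this only illustrates the idea.
inspect_output = '''
{
  "schemas": [
    {
      "name": "dbo",
      "tables": [
        {"name": "users", "columns": [{"name": "ID"}, {"name": "name"}]}
      ]
    }
  ]
}
'''

doc = json.loads(inspect_output)
# Collect every table name across all schemas -- the kind of query
# one would otherwise express as a jq filter.
tables = [t["name"] for s in doc["schemas"] for t in s["tables"]]
print(tables)  # ['users']
```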

To inspect our locally-running SQL Server instance, use the -u flag and write the output to a file named schema.hcl:

atlas schema inspect -u "sqlserver://SA:P@ssw0rd0995@localhost:1433?database=master&mode=database" > schema.hcl
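The connection URL above packs the credentials, host, port, and driver options into one string. As a quick illustration of its parts (using Python's urllib.parse, purely for demonstration):

```python
from urllib.parse import urlsplit, parse_qs

# The connection string from the guide. Note that the password itself
# contains an '@'; the host is taken from the *last* '@' in the URL,
# so the components still parse correctly.
url = "sqlserver://SA:P@ssw0rd0995@localhost:1433?database=master&mode=database"
parts = urlsplit(url)
params = parse_qs(parts.query)

print(parts.username)  # SA
print(parts.password)  # P@ssw0rd0995
print(parts.hostname)  # localhost
print(parts.port)      # 1433
print(params["database"][0], params["mode"][0])  # master database
```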

Open the schema.hcl file to view the Atlas schema that describes our database.

schema.hcl
table "users" {
  schema = schema.dbo
  column "ID" {
    type = bigint
    null = false
  }
  column "name" {
    type = varchar(255)
    null = false
  }
  primary_key {
    columns = [column.ID]
  }
}
schema "dbo" {
}

The first block represents a table resource with ID and name columns. The schema field references the dbo schema that is defined in the block below. In addition, the primary_key sub-block defines the ID column as the primary key for the table. Atlas strives to mimic the syntax of the database that the user is working against: in this case, the type for the ID column is bigint, and varchar(255) for the name column.

info

For in-depth details on the atlas schema inspect command, covering aspects like inspecting specific schemas, handling multiple schemas concurrently, excluding tables, and more, refer to our documentation here.

To generate an Entity Relationship Diagram (ERD), or a visual representation of our schema, we can add the -w flag to the inspect command:

atlas schema inspect -u "sqlserver://SA:P@ssw0rd0995@localhost:1433?database=master&mode=database" -w

mssql-inspect

Declarative Migrations

The declarative approach lets users manage schemas by defining the desired state of the database as code. Atlas then inspects the target database and calculates an execution plan to reconcile the difference between the desired and actual states. Let's see this in action.
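The reconciliation idea can be sketched in a few lines. The toy example below (not Atlas's actual planner) diffs a desired set of tables against the actual set and derives the actions needed to converge them:

```python
# A toy sketch of declarative reconciliation: compare desired state
# with actual state and compute the actions that close the gap.
desired = {"users", "repos"}  # what the schema file declares
actual = {"users"}            # what the database currently has

to_create = sorted(desired - actual)  # tables missing from the database
to_drop = sorted(actual - desired)    # tables no longer declared

print(to_create)  # ['repos']
print(to_drop)    # []
```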

We will start off by making a change to our schema file, such as adding a repos table:

table "users" {
  schema = schema.dbo
  column "ID" {
    type = bigint
    null = false
  }
  column "name" {
    type = varchar(255)
    null = false
  }
  primary_key {
    columns = [column.ID]
  }
}

table "repos" {
  schema = schema.dbo
  column "ID" {
    type = bigint
    null = false
  }
  column "name" {
    type = varchar(255)
    null = false
  }
  column "owner_id" {
    type = bigint
    null = false
  }
  primary_key {
    columns = [column.ID]
  }
  foreign_key "fk_repo_owner" {
    columns     = [column.owner_id]
    ref_columns = [table.users.column.ID]
  }
}
schema "dbo" {
}

Now that our desired state has changed, we can apply these changes to our database by running the atlas schema apply command. Atlas will plan the migration for us:

atlas schema apply \
-u "sqlserver://SA:P@ssw0rd0995@localhost:1433?database=master&mode=database" \
--to file://schema.hcl \
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"

Apply the changes, and that's it! You have successfully run a declarative migration.

info

For a more detailed description of the atlas schema apply command refer to our documentation here.

To ensure that the changes have been made to the schema, let's run the inspect command with the -w flag once more and view the ERD:

atlas-schema

Versioned Migrations

Alternatively, the versioned migration workflow, sometimes called "change-based migrations", allows each change to the database schema to be checked in to source control and reviewed during code review. Users still benefit from Atlas intelligently planning migrations for them; however, the migrations are not applied automatically.

Creating the first migration

In the versioned migration workflow, our database state is managed by a migration directory. The migration directory holds all of the migration files created by Atlas, and the sum of all files in lexicographical order represents the current state of the database.
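Because Atlas prefixes each migration file with a timestamp, sorting the directory lexicographically yields the order in which migrations apply. A quick sketch (the first filename below is illustrative, invented for the example):

```python
# Timestamp-prefixed filenames sort lexicographically into application order.
# "20240208114500_initial.sql" is a made-up name for illustration;
# "20240208115924_add_commits.sql" appears later in this guide.
migrations = [
    "20240208115924_add_commits.sql",
    "20240208114500_initial.sql",
]

ordered = sorted(migrations)
for name in ordered:
    print(name)
```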

To create our first migration file, we will run the atlas migrate diff command, and we will provide the necessary parameters:

  • --dir - the URL to the migration directory; by default it is file://migrations.
  • --to - the URL of the desired state. A state can be specified using a database URL, an HCL or SQL schema, or another migration directory.
  • --dev-url - a URL to a Dev Database that will be used to compute the diff.

atlas migrate diff initial \
  --to file://schema.hcl \
  --dev-url "docker://sqlserver/2022-latest/dev?mode=database"

Run ls migrations, and you'll notice that Atlas has automatically created a migration directory for us containing two files: the migration file itself and an atlas.sum file:

-- Create "users" table
CREATE TABLE [dbo].[users] (
  [ID] bigint NOT NULL,
  [name] varchar(255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
  CONSTRAINT [PK_users] PRIMARY KEY CLUSTERED ([ID] ASC)
);

-- Create "repos" table
CREATE TABLE [dbo].[repos] (
  [ID] bigint NOT NULL,
  [name] varchar(255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
  [owner_id] bigint NOT NULL,
  CONSTRAINT [PK_repos] PRIMARY KEY CLUSTERED ([ID] ASC),
  CONSTRAINT [repo_owner] FOREIGN KEY ([owner_id]) REFERENCES [dbo].[users] ([ID]) ON UPDATE NO ACTION ON DELETE NO ACTION
);

The migration file represents the current state of our database, and the sum file is used by Atlas to maintain the integrity of the migration directory. To learn more about the sum file, read the documentation.
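The core idea behind the sum file can be sketched as hashing every file in the directory into a single digest, so that any manual edit to any migration is detectable. This is a simplified illustration only; Atlas's real atlas.sum format differs:

```python
import hashlib

# Simplified integrity sketch (NOT Atlas's actual atlas.sum format):
# fold each filename and its contents, in lexicographic order, into one digest.
def directory_sum(files: dict) -> str:
    h = hashlib.sha256()
    for name in sorted(files):
        h.update(name.encode())
        h.update(files[name])
    return h.hexdigest()

# Hypothetical migration contents, for illustration only.
v1 = directory_sum({"0001_initial.sql": b"CREATE TABLE users (ID bigint);"})
v2 = directory_sum({"0001_initial.sql": b"CREATE TABLE users (ID int);"})
print(v1 != v2)  # True: editing a file changes the sum
```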

Pushing migration directories to Atlas

Now that we have our first migration, we can apply it to a database. There are multiple ways to accomplish this, with most methods covered in the guides section. In this example, we'll demonstrate how to push migrations to Atlas Cloud, much like how Docker images are pushed to Docker Hub.

mssql migrate push

Migration Directory created with atlas migrate push

Let's name our new migration project app and run atlas migrate push:

atlas migrate push app \
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"

Once the migration directory is pushed, Atlas prints a URL to the created directory, similar to the one shown in the image above.

Applying migrations

Once our app migration directory has been pushed, we can apply it to a database from any CD platform without necessarily having the migration directory available locally.

Let's create another database using Docker to resemble a local environment, this time on port 1434:

docker run --platform linux/amd64 -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1434:1433 --name atlas-local -d mcr.microsoft.com/mssql/server:latest

Next, we'll create a simple Atlas configuration file (atlas.hcl) to store the settings for our local environment:

atlas.hcl
# The "local" environment represents our local testing setup.
env "local" {
  url = "sqlserver://SA:P@ssw0rd0995@localhost:1434?database=master&mode=database"
  migration {
    dir = "atlas://app"
  }
}
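The same file can hold more than one environment. For example, a hypothetical staging environment could reuse the same migration directory from Atlas Cloud while pointing at a different database (the staging URL below is a placeholder, not from this guide):

```hcl
# Hypothetical second environment; the host "staging-db" is a placeholder.
env "staging" {
  url = "sqlserver://SA:P@ssw0rd0995@staging-db:1433?database=master&mode=database"
  migration {
    dir = "atlas://app"
  }
}
```

With such a block in place, `atlas migrate apply --env staging` would target that database instead.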

The final step is to apply the migrations to the database. Let's run atlas migrate apply with the --env flag to instruct Atlas to select the environment configuration from the atlas.hcl file:

atlas migrate apply --env local

Boom! After applying the migration, you should receive a link to the deployment and the database where the migration was applied. Here's an example of what it should look like:

first deployment

Migration deployment reported created with atlas migrate apply

Generating another migration

After applying the first migration, it's time to update our schema defined in the schema file and tell Atlas to generate another migration. This will bring the migration directory (and the database) in line with the new state defined by the desired schema (schema file).

Let's make two changes to our schema:

  • Add a new description column to our repos table
  • Add a new commits table
schema.hcl
table "users" {
  schema = schema.dbo
  column "ID" {
    type = bigint
    null = false
  }
  column "name" {
    type = varchar(255)
    null = false
  }
  primary_key {
    columns = [column.ID]
  }
}

table "repos" {
  schema = schema.dbo
  column "ID" {
    type = bigint
    null = false
  }
  column "name" {
    type = varchar(255)
    null = false
  }
  column "owner_id" {
    type = bigint
    null = false
  }
  column "description" {
    type = varchar(max)
    null = true
  }
  primary_key {
    columns = [column.ID]
  }
  foreign_key "fk_repo_owner" {
    columns     = [column.owner_id]
    ref_columns = [table.users.column.ID]
  }
}

table "commits" {
  schema = schema.dbo
  column "ID" {
    type = bigint
    null = false
  }
  column "message" {
    type = varchar(255)
    null = false
  }
  column "repo_id" {
    type = bigint
    null = false
  }
  column "author_id" {
    type = bigint
    null = false
  }
  primary_key {
    columns = [column.ID]
  }
  foreign_key "fk_repo_commit" {
    columns     = [column.repo_id]
    ref_columns = [table.repos.column.ID]
  }
  foreign_key "fk_commit_author" {
    columns     = [column.author_id]
    ref_columns = [table.users.column.ID]
  }
}
schema "dbo" {
}

Next, let's run the atlas migrate diff command once more:

atlas migrate diff add_commits \
--to file://schema.hcl \
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"

Run ls migrations, and you'll notice that a new migration file has been generated.

20240208115924_add_commits.sql
-- Modify "repos" table
ALTER TABLE [dbo].[repos] ADD [description] varchar(MAX) COLLATE SQL_Latin1_General_CP1_CI_AS NULL;
ALTER TABLE [dbo].[repos] ADD CONSTRAINT [fk_repo_owner] FOREIGN KEY ([owner_id]) REFERENCES [dbo].[users] ([ID]) ON UPDATE NO ACTION ON DELETE NO ACTION;
-- Create "commits" table
CREATE TABLE [dbo].[commits] (
  [ID] bigint NOT NULL,
  [message] varchar(255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
  [repo_id] bigint NOT NULL,
  [author_id] bigint NOT NULL,
  CONSTRAINT [PK_commits] PRIMARY KEY CLUSTERED ([ID] ASC),
  CONSTRAINT [FK_commits_repos] FOREIGN KEY ([repo_id]) REFERENCES [dbo].[repos] ([ID]) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT [FK_commits_users] FOREIGN KEY ([author_id]) REFERENCES [dbo].[users] ([ID]) ON UPDATE NO ACTION ON DELETE NO ACTION
);

Let's run atlas migrate push again and observe the new file on the migration directory page.

atlas migrate push app \
--dev-url "docker://sqlserver/2022-latest/dev?mode=database"
mssql migrate push

Migration Directory created with atlas migrate push

Next Steps

In this guide we learned about the declarative and versioned workflows, and how to use Atlas to generate migrations, push them to an Atlas workspace and apply them to databases.

For more in-depth guides, check out the other pages in this section or visit our Docs section.

Have questions? Feedback? Find our team on our Discord server.