
4 posts tagged with "ci"


· 5 min read
Rotem Tamir

Hi everyone!

I'm very happy to share with you some of the recent improvements to Atlas, specifically around GitHub Actions. In August of last year, we released our first version of the GitHub Actions experience for Atlas. It was a modest start, which included the ability to verify the safety and correctness of schema migrations during the CI process.

Over the past year, we have slowly added more features to the GitHub Actions experience, including the ability to sync migration directories to Atlas Cloud, deploy migrations, and even install Atlas. As often happens with quickly evolving systems, we felt that the API became complex, carrying over use cases and experiences that have become obsolete or superseded by better ones since the initial release.

At Ariga, the team developing Atlas, we have written a document named the "R&D Manifesto", which lists some of the principles that we commit to as individuals and as an organization. One of them is "Obsess over APIs and DevEx" - we believe that the key to building a successful product is to provide the best possible experience to our users, and that starts with clear, consistent and composable APIs that empower our users to achieve amazing feats of engineering.

With that in mind, our team has been working hard in the past few weeks to revamp the GitHub Actions experience for Atlas. Here's a quick summary of the changes:

  1. We've moved all actions into a single repo - ariga/atlas-action. (With the exception of ariga/setup-atlas.)
  2. The API has been reviewed and updated to make sure it is consistent among the different actions and with the rest of the Atlas ecosystem.
  3. We've rewritten the code in Go, which is the language we use for all of our internal tools. This allows us to share code between the CLI and the GitHub Actions, and to provide a more consistent experience between the two. It also greatly simplifies the process of adding new GitHub Actions going forward.

Deprecation Notice

As part of this change we are deprecating the previous generation of GitHub Actions, and we encourage you to migrate to the new ones as soon as possible. The old actions will continue to work for the time being, but they will no longer receive updates.

Introducing the New Actions

Without further ado, I'm happy to present the new generation of GitHub Actions for Atlas. The new actions follow the design principle of building actions as small, composable units that can be combined to achieve different outcomes. All of the actions rely on Atlas being installed on the GitHub Actions runner, which is done using the ariga/setup-atlas action. The rest of the actions essentially map to CLI commands, and can be used to build more complex workflows.

The actions are:

Action                             Use Case
ariga/setup-atlas                  Install Atlas from a GitHub Actions workflow
ariga/atlas-action/migrate/lint    CI for schema changes
ariga/atlas-action/migrate/push    Push your migration directory to Atlas Cloud (atlasgo.cloud)
ariga/atlas-action/migrate/apply   Deploy versioned migrations from GitHub Actions
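
Because each action maps to a CLI command, you can reproduce a workflow step locally when debugging. As a rough sketch (flags abridged; the values here match the example workflow below), the lint and push actions correspond to commands like:

atlas migrate lint --dir "file://migrations" --dev-url "mysql://root:pass@localhost:3306/dev" --latest 1
atlas migrate push my-project --dir "file://migrations" --dev-url "mysql://root:pass@localhost:3306/dev"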

Example Workflows

Consider the following GitHub Actions workflow, which can be used to implement a CI/CD pipeline for your database schema changes.

name: Atlas CI/CD
on:
  push:
    branches:
      - master # Use your main branch here.
  pull_request:
    paths:
      - 'migrations/*' # Use the path to your migration directory here.
# Permissions to write comments on the pull request.
permissions:
  contents: read
  pull-requests: write
jobs:
  atlas:
    services:
      # Spin up a mysql:8 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8
        env:
          MYSQL_DATABASE: dev
          MYSQL_ROOT_PASSWORD: pass
        ports:
          - 3306:3306
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/migrate/lint@v1
        with:
          dir: 'file://migrations'
          dir-name: 'my-project' # The name of the project in Atlas Cloud.
          dev-url: 'mysql://root:pass@localhost:3306/dev' # The dev-database service mapped above.
      - uses: ariga/atlas-action/migrate/push@v1
        if: github.ref == 'refs/heads/master'
        with:
          dir: 'file://migrations'
          dir-name: 'my-project'
          dev-url: 'mysql://root:pass@localhost:3306/dev'

This workflow uses 3 different actions to achieve the following:

  • ariga/setup-atlas - Installs Atlas on the GitHub Actions runner and logs in using the provided token.
  • ariga/atlas-action/migrate/lint - Lints the migration directory and verifies that it is safe to apply. This is run on every pull request that modifies the migration directory. If issues are found, the action will fail and a comment will be posted on the pull request with the details.
  • ariga/atlas-action/migrate/push - Pushes the migration directory to Atlas Cloud. This is run on every push to the main branch, so it can be used to deploy the migrations to production.
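
The fourth action from the table, ariga/atlas-action/migrate/apply, is not part of this workflow. As a minimal sketch of a separate deployment workflow (the atlas://my-project remote directory and the DATABASE_URL secret are assumptions for illustration, not values from the example above):

name: Deploy Migrations
on:
  workflow_dispatch: # Trigger deployments manually.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - uses: ariga/atlas-action/migrate/apply@v1
        with:
          dir: 'atlas://my-project' # Assumed: read the directory pushed to Atlas Cloud.
          url: ${{ secrets.DATABASE_URL }} # Assumed secret holding the target database URL.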

Tagging v1

With the release of the new actions, we are also tagging the v1 release of the actions to mark the maturity and stability of the API. We hope you will find the new actions useful, and we look forward to seeing what you build with them!

How can we make Atlas better?

We would love to hear from you on our Discord server ❤️.

· 5 min read
Rotem Tamir

As applications evolve, database schema changes become inevitable. However, some types of changes can break the contract between the backend server and the database, leading to errors during deployment and application downtime. Ensuring that these breaking changes are detected and prevented is crucial for maintaining system stability and delivering a seamless user experience. In this post, we will explore how Atlas can help you prevent breaking schema changes from impacting your application.

What are breaking changes?

Backward-incompatible changes, also known as breaking changes, are schema changes that have the potential to break the contract with applications that rely on the old schema. Applications today are expected to be highly-available, which means that we cannot shut them down for maintenance, and must continuously serve traffic, 24/7.

The common strategy for dealing with this requirement is called a "rolling deployment" where we deploy new versions of the application in stages, gradually increasing the percentage of traffic served by the new version. This means that during the deployment phase of a new version we will always have old and new code running side by side.

This means, for example, that renaming a column from email_address to email will cause older versions of the application to fail when they try to access the column using the old name. This is a breaking change, and it can cause downtime if not detected and prevented.
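
For instance, such a rename might be written as the following statement (illustrative; MySQL 8 syntax):

ALTER TABLE `users` RENAME COLUMN `email_address` TO `email`;

Once this statement is applied, any old replica still issuing SELECT email_address FROM users fails with an unknown-column error, even though the new version of the code works fine.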

Manually enforcing backward-compatibility

Preventing breaking changes in production is crucial to maintaining a highly-available system. To enforce backward-compatibility most teams rely on two things:

  1. GitOps: Making sure every change to the database is checked in to a source control system, and using automated schema migration tools to apply the change.
  2. Manual code-review: Carefully reviewing every schema change to make sure it will not break the contract between the backend and the database.

Checking database artifacts into source control is a well-established technique - in fact, it's one of the first principles mentioned in the seminal Evolutionary Database Design (Fowler and Sadalage, 2016).

However, let's consider the impact of relying on human review for changes using the widely accepted DORA Metrics for assessing DevOps performance:

  1. Lead time increases - Lead time measures the time it takes to get a commit to production. If a change needs the careful attention and approval of an expert reviewer, that reviewer becomes a bottleneck.

  2. Change failure rate increases - Change failure rate measures the percentage of deployments causing a failure in production. Relying on humans to routinely evaluate and verify every change against dozens of rules and policies is naturally error-prone, especially if they are busy and under constant pressure to deliver (which expert reviewers almost always are).

  3. Deployment frequency decreases - Deployment frequency measures how often an organization successfully releases to production. When engineers learn that a certain type of change gets queued up in slow review cycles, there is a tendency to batch changes together, decreasing the frequency of deployments and increasing their riskiness.

This pattern is not unique to database changes; it looks exactly the same for any high-risk change. Since the emergence of the DevOps movement, manual verification of changes has gradually been replaced by automation in many fields. Can the same be done for database changes?

Automated detection of schema changes with Atlas

Atlas provides support for a process called migration linting, which many teams use to automatically verify the safety of schema changes against a predefined set of policies. Unlike most available SQL linters, Atlas's linter is focused on the semantic level, rather than the syntactic level, which involves formatting and coding style standards. The linter analyzes the meaning of the changes and their potential impact, rather than how the changes are written.

Atlas's migrate lint command exposes this mechanism, making the automatic detection of breaking schema changes simple and straightforward.

Suppose we wanted to run the following migration:

ALTER TABLE `users` RENAME TO `Users`;

Renaming a table is a backward-incompatible change that can cause errors during deployment (migration) if applications running the previous version of the schema refer to the old name in their statements. To check if our latest migration file contains any dangerous changes we run:

atlas migrate lint --dev-url docker://mysql/8/dev --latest 1

Atlas prints:

20230330203525.sql: backward incompatible changes detected:

L1: Renaming table "users" to "Users"

Amazing! Atlas detected our breaking change automatically. In addition to breaking changes Atlas supports many other safety checks that you can apply to your schema changes.
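
Many of these checks can also be configured per project. As a sketch only (the analyzer block names follow the Atlas lint documentation and depend on your Atlas version; treat this as an assumption to verify), an atlas.hcl file might promote destructive-change findings to hard errors:

lint {
  destructive {
    error = true // Assumed setting: fail lint runs on destructive changes.
  }
}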

Linting can be used during development from the developer's workstation to detect issues locally, but it really shines when you connect it to your project's continuous integration pipeline, allowing you to prevent such changes from ever reaching production. Setting up CI for schema changes takes less than a minute using Atlas Cloud or a little longer using our GitHub Action.

Wrapping up

Detecting and preventing breaking changes in your database schema is essential for maintaining a high-quality user experience and ensuring system stability. Atlas provides you with a practical and efficient solution to analyze and detect potential breaking changes before they impact your application.

What's next?

Have questions or feedback? Feel free to reach out on our Discord server.

· 5 min read
Rotem Tamir

Losing data is painful for almost all organizations. This is one of the reasons teams are very cautious when it comes to making changes to their databases. In fact, many teams set explicit policies on what kinds of changes to the database are allowed, often completely prohibiting any change that is destructive.

Destructive changes are changes to a database schema that result in loss of data. For instance, consider a statement such as:

ALTER TABLE `users` DROP COLUMN `email_address`;

This statement is considered destructive because whatever data is stored in the email_address column will be deleted from disk, with no way to recover it.

Suppose you were in charge of a team that decided to prohibit destructive changes; how would you go about enforcing such a policy? From our experience, most teams enforce policies relating to schema migrations in code review: a human engineer, preferably with some expertise in operating databases, manually reviews any proposed database migration scripts and rejects them if they contain destructive changes.

Relying on a human reviewer to enforce such a policy is both expensive (it takes time and mental energy) and error-prone. Just like manual QA is slowly being replaced with automated testing, and manual code style reviews are being replaced with linters, isn't it time that we automate the process of ensuring that changes to database schemas are safe?

Announcing the Atlas GitHub Action

Today, we're happy to announce the release of the official Atlas GitHub Action which can be used to apply migration directory linting for a bunch of popular database migration tools. golang-migrate, goose, dbmate and Atlas itself are already supported, and Flyway and Liquibase are coming soon.

If you're using GitHub to manage your source code, you're in luck. By adding a short configuration file to your repository, you can start linting your schema migration scripts today! Let's see a short example.

Setting up

Suppose we are running a website for an e-commerce business. To store the data for our website we use a MySQL database. Because the data in this database is everything to us, we use a careful versioned migrations approach where each change to the database schema is described in an SQL script and stored in our Git repository. To execute these scripts we use a popular tool called golang-migrate.
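
For reference, a minimal sketch of creating and applying a timestamped migration with the golang-migrate CLI (the migration name and credentials here are illustrative; consult the golang-migrate docs for your setup):

migrate create -ext sql -dir migrations -format 20060102150405 add_users_table
migrate -path migrations -database "mysql://root:pass@tcp(localhost:3306)/test" up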

The source code for this example can be found in rotemtam/atlas-action-demo.

Initially, our schema contains two tables: users and orders, documented in the first few migration files:

Create the users table:

migrations/20220819060736.up.sql
-- create "users" table
CREATE TABLE `users` (
  `id` int NOT NULL,
  `name` varchar(100) NULL,
  PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Add a unique email column:

migrations/20220819061056.up.sql
ALTER TABLE `users` ADD COLUMN `email` varchar(255) NOT NULL, ADD UNIQUE INDEX `email_unique` (`email`);

Create the orders table, with a foreign-key referencing the users table:

migrations/20220819075145.up.sql
-- create "orders" table
CREATE TABLE `orders` (
  `id` int NOT NULL,
  `user_id` int NOT NULL,
  `total` decimal(10) NOT NULL,
  PRIMARY KEY (`id`),
  INDEX `user_orders` (`user_id`),
  CONSTRAINT `user_orders` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
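
golang-migrate pairs each .up.sql file with a .down.sql file that reverts it. As a sketch (not part of the repository listing above), the revert for the orders migration could be:

migrations/20220819075145.down.sql
-- reverse: create "orders" table
DROP TABLE `orders`;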

Installing the Atlas Action

To make sure we never accidentally delete data during schema changes, we enact a policy that prohibits destructive changes to the database. To enforce this policy, we invoke the atlas-action GitHub Action from within our continuous integration flow by adding a workflow file named .github/workflows/atlas-ci.yaml:

.github/workflows/atlas-ci.yaml
name: Atlas CI
on:
  # Run whenever code is changed in the master branch,
  # change this to your root branch.
  push:
    branches:
      - master
  # Run on PRs where something changed under the `path/to/migration/dir/` directory.
  pull_request:
    paths:
      - 'migrations/*'
jobs:
  lint:
    services:
      # Spin up a mysql:8.0.29 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8.0.29
        env:
          MYSQL_ROOT_PASSWORD: pass
          MYSQL_DATABASE: test
        ports:
          - "3306:3306"
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3.0.1
        with:
          fetch-depth: 0 # Mandatory unless "latest" is set below.
      - uses: ariga/atlas-action@v0
        with:
          dir: migrations/
          dir-format: golang-migrate # Or: atlas, goose, dbmate
          dev-url: mysql://root:pass@localhost:3306/test

Detecting a destructive change

Next, let's see what happens when a developer accidentally proposes a destructive change, to drop a column in the orders table:

migrations/20220819081116.up.sql
-- modify "orders" table
ALTER TABLE `orders` DROP COLUMN `total`;

This change is proposed in PR #1 in our example repo. Because we have previously set up the Atlas GitHub Action to lint our migration directory, whenever a file changes under the migrations/ directory, a workflow is triggered.

After letting our workflow complete, GitHub informs us that the Atlas CI / lint check has failed. Clicking on the "details" link, we find a detailed explanation of the cause of the failure, and examining the Action run summary we find an annotation flagging the offending statement.

As you can see, Atlas has detected the destructive change we proposed to apply to our database and failed our build!

Wrapping up

In this post we discussed why many teams set policies to prevent destructive changes to database schemas. We further showed how such policies can be enforced in an automated way using the official Atlas GitHub Action.

Further reading

To learn more about CI for database schema changes, check out the Atlas documentation.

Have questions? Feedback? Find our team on our Discord server.

· 4 min read
Rotem Tamir

With the release of v0.5.0, we are happy to announce a very significant milestone for the project. While this version includes some cool features (such as multi-file schemas) and a swath of incremental improvements and bugfixes, there is one feature that we're particularly excited about and want to share with you in this post.

As most outages happen directly as a result of a change to a system, Atlas provides users with the means to verify the safety of planned changes before they happen. The sqlcheck package provides interfaces for analyzing the contents of SQL files to generate insights on the safety of many kinds of changes to database schemas. With this package, developers may define an Analyzer that can be used to diagnose the impact of SQL statements on the target database.

This functionality is exposed to CLI users via the migrate lint subcommand. By utilizing the sqlcheck package, Atlas can now check your migration directory for common problems and issues.

atlas migrate lint in action

Recall that Atlas uses a dev database to plan and simulate schema changes. Let's start by spinning up a container that will serve as our dev database:

docker run --name atlas-db-dev -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=pass mysql

Next let's create schema.hcl, the HCL file which will contain the desired state of our database:

schema.hcl
schema "example" {
}
table "users" {
schema = schema.example
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
primary_key {
columns = [
column.id
]
}
}

To simplify the commands we need to type in this demo, let's create an Atlas project file to define a local environment.

atlas.hcl
env "local" {
src = "./schema.hcl"
url = "mysql://root:pass@localhost:3306"
dev = "mysql://root:pass@localhost:3307"
}

Next, let's plan the initial migration that creates the users table:

atlas migrate diff --env local

Observe that the migrations/ directory was created with an .sql file and a file named atlas.sum:

├── atlas.hcl
├── migrations
│   ├── 20220714090139.sql
│   └── atlas.sum
└── schema.hcl

This is the contents of our new migration script:

-- add new schema named "example"
CREATE DATABASE `example`;
-- create "users" table
CREATE TABLE `example`.`users` (`id` int NOT NULL, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`)) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Next, let's make a destructive change to the schema. Destructive changes are changes to a database schema that result in loss of data, such as dropping a column or table. Let's remove the name column from our desired schema:

schema.hcl
schema "example" {
}
table "users" {
schema = schema.example
column "id" {
type = int
}
// Notice the "name" column is missing.
primary_key {
columns = [
column.id
]
}
}

Now, let's plan a migration to this new schema:

atlas migrate diff --env local

Observe the new migration which Atlas planned for us:

-- modify "users" table
ALTER TABLE `example`.`users` DROP COLUMN `name`;

Finally, let's use atlas migrate lint to analyze this change and verify its safety:

atlas migrate lint --env local --latest 1

Destructive changes detected in file 20220714090811.sql:

L2: Dropping non-virtual column "name"

When we run the lint command, we need to instruct Atlas on how to decide which set of migration files to analyze. Currently, two modes are supported:

  • --git-base <branchName> - selects the diff between the provided branch and the current one as the changeset.
  • --latest <n> - selects the latest n migration files as the changeset.
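
For example, using the local environment defined above (replace master with your root branch):

atlas migrate lint --env local --git-base master
atlas migrate lint --env local --latest 1

The first command lints every migration file added since branching off master; the second lints only the most recent one.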

As expected, Atlas analyzed this change and detected a destructive change to our database schema. In addition, Atlas users can analyze the migration directory to automatically detect:

  • Data-dependent changes
  • Migration directory integrity
  • Backward-incompatible changes (coming soon)
  • Drift between the desired schema and the migration directory (coming soon)
  • ... and more

Wrapping up

We started Atlas more than a year ago because we felt that the industry deserves a better way to manage databases. A huge amount of progress has been made as part of the DevOps movement on the fronts of managing compute, networking and configuration. So much, in fact, that it always baffled us to see that the database, the single most critical component of any software system, did not receive this level of treatment.

Until today, the task of verifying the safety of migration scripts was reserved for humans (preferably SQL-savvy and highly experienced). We believe that with this milestone we are beginning to pave a road to a reality where teams can move as quickly and safely with their databases as they can with their code.

Have questions? Feedback? Find our team on our Discord server.