5 posts tagged with "announcement"

· 8 min read
Jannik Clausen

With the release of v0.6.0, we introduced a workflow for managing changes to database schemas that we have called Versioned Migration Authoring.

Today, we released the first version of the Atlas migration execution engine, which can apply migration files to your database. In this post, we will give a brief overview of its features and what to expect in the future.

Migration File Format​

The Atlas migration filename format follows a very simple structure: version_[name].sql, with the name being optional. version can be an arbitrary string. Migration files are lexicographically sorted by filename.

↪ tree .
.
├── 1_initial.sql
├── 2_second.sql
├── 3_third.sql
└── atlas.sum

0 directories, 4 files
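
For reference, here is roughly what these migration files could contain, reconstructed from the statements Atlas reports applying later in this post (atlas.sum is generated by the Atlas CLI, as shown further below):

1_initial.sql
CREATE DATABASE `my_schema`;
CREATE TABLE `my_schema`.`tbl` (`col` int NOT NULL);

2_second.sql
ALTER TABLE `my_schema`.`tbl` ADD `col_2` TEXT;

3_third.sql
CREATE TABLE `tbl_2` (`col` int NOT NULL);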

If you want to follow along, you can simply copy the above files into a folder on your system. Make sure you have a database ready to work on. You can start an ephemeral Docker container with the following command:

# Run a local mysql container listening on port 3306.
docker run --rm --name atlas-apply --detach --env MYSQL_ROOT_PASSWORD=pass -p 3306:3306 mysql:8

Apply Migrations​

To apply migrations, you need the Atlas CLI at version v0.7.0 or above. Follow the installation instructions if you don't have Atlas installed yet.
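
On macOS, for example, the CLI can be installed with Homebrew; for other platforms, see the installation docs:

brew install ariga/tap/atlas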

Now, to apply the first migration of our migration directory, we call atlas migrate apply and pass in some configuration parameters.

atlas migrate apply 1 \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 1 (1 migrations in total):

-- migrating version 1
-> CREATE DATABASE `my_schema`;
-> CREATE TABLE `my_schema`.`tbl` (`col` int NOT NULL);
-- ok (17.247319ms)

-------------------------
-- 18.784204ms
-- 1 migrations
-- 2 sql statements

Migration Status​

Atlas saves information about the database schema revisions (applied migration versions) in a special table called atlas_schema_revisions. In the example above we connected to the database without specifying which schema to operate against. For this reason, Atlas created the revision table in a new schema called atlas_schema_revisions. For a schema-bound connection Atlas will put the table into the connected schema. We will see that in a bit.
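
If you are curious, you can inspect that table directly with the MySQL client inside the container we started earlier; a quick sketch (the exact columns stored in the table may differ between Atlas versions):

docker exec atlas-apply mysql -ppass -e 'SELECT * FROM `atlas_schema_revisions`.`atlas_schema_revisions`\G'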

Go ahead and call atlas migrate status to gather information about the database migration state:

atlas migrate status \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migration Status: PENDING
-- Current Version: 1
-- Next Version: 2
-- Executed Files: 1
-- Pending Files: 2

This output tells us that the last applied version is 1, the next one is called 2 and that we still have two migrations pending. Let's apply the pending migrations:

Note that we do not pass an argument to apply this time; in this case, Atlas will attempt to apply all pending migrations.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 3 from 1 (2 migrations in total):

-- migrating version 2
-> ALTER TABLE `my_schema`.`tbl` ADD `col_2` TEXT;
-- ok (13.98847ms)

-- migrating version 3
-> CREATE TABLE `tbl_2` (`col` int NOT NULL);
Error 1046: No database selected

-------------------------
-- 15.604338ms
-- 1 migrations ok (1 with errors)
-- 1 sql statements ok (1 with errors)

Error: Execution had errors:
Error 1046: No database selected

Error: sql/migrate: execute: executing statement "CREATE TABLE `tbl_2` (`col` int NOT NULL);" from version "3": Error 1046: No database selected
exit status 1

What happened here? After further investigation, you will find that our connection URL is bound to the entire database, not to a specific schema. The third migration file, however, does not contain a schema qualifier for the CREATE TABLE statement.

By default, Atlas wraps the execution of each migration file in one transaction. This transaction gets rolled back if any error occurs during execution. Be aware, though, that some databases, such as MySQL and MariaDB, don't support transactional DDL. If you want to learn how to configure the way Atlas uses transactions, have a look at the docs.
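
As a rough sketch of what that configuration can look like, assuming the --tx-mode flag and its all value are available in your version of Atlas (please verify against the docs), running all pending files in a single transaction would be:

# A sketch; the --tx-mode flag and its values are assumptions to check against the docs.
atlas migrate apply \
  --dir "file://migrations" \
  --url "mysql://root:pass@localhost:3306/" \
  --tx-mode all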

Migration Retry​

To resolve this, edit the migration file and add a schema qualifier to the statement:

CREATE TABLE `my_schema`.`tbl_2` (`col` int NOT NULL);

Since the contents of a migration file have changed, we have to re-calculate the directory integrity hash by calling:

atlas migrate hash --force \
--dir "file://migrations"

Then we can proceed and simply attempt to execute the migration file again.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
Migrating to version 3 from 2 (1 migrations in total):

-- migrating version 3
-> CREATE TABLE `my_schema`.`tbl_2` (`col` int NOT NULL);
-- ok (15.168892ms)

-------------------------
-- 16.741173ms
-- 1 migrations
-- 1 sql statements

Attempting to migrate again or calling atlas migrate status will tell us that all migrations have been applied to the database and there is nothing to do at the moment.

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/"
No migration files to execute

Moving an existing project to Atlas with Baseline Migrations​

Another common scenario is when you need to move an existing project to Atlas. To do so, create an initial migration file reflecting the current state of a database schema by using atlas migrate diff. A very simple way to do so would be by heading over to the database from before, deleting the atlas_schema_revisions schema, emptying your migration directory and running the atlas migrate diff command.

rm -rf migrations
docker exec atlas-apply mysql -ppass -e 'CREATE SCHEMA `my_schema_dev`;' # create a dev-db
docker exec atlas-apply mysql -ppass -e 'DROP SCHEMA `atlas_schema_revisions`;'
atlas migrate diff \
--dir "file://migrations" \
--to "mysql://root:pass@localhost:3306/my_schema" \
--dev-url "mysql://root:pass@localhost:3306/my_schema_dev"

To demonstrate that Atlas can also work on the schema level instead of a realm connection, we are using a connection bound to the my_schema schema this time.

You should end up with a migration directory containing a single baseline file similar to the following:

-- create "tbl" table
CREATE TABLE `tbl` (`col` int NOT NULL, `col_2` text NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
-- create "tbl_2" table
CREATE TABLE `tbl_2` (`col` int NOT NULL) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Now, let's create a new migration file to create a table tbl_3 and update the directory integrity file.

atlas migrate new add_table --dir "file://migrations"
echo "CREATE TABLE `tbl_3` (`col` text NULL);" >> migrations/$(ls -t migrations | head -n1)
atlas migrate hash --force --dir "file://migrations"

Since we now have both a migration file representing our current database state and the new migration file to apply, we can make use of the --baseline flag:

atlas migrate apply \
--dir "file://migrations" \
--url "mysql://root:pass@localhost:3306/my_schema" \
--baseline "20220908110527" # replace the version with the one generated by you
Migrating to version 20220908110847 from 20220908110527 (1 migrations in total):

-- migrating version 20220908110847
-> CREATE TABLE `tbl_3` (`col` text NULL);
-- ok (14.325493ms)

-------------------------
-- 15.786455ms
-- 1 migrations
-- 1 sql statements

Outlook​

The Atlas migration engine powers Ent, and the execution engine has already been used within Ariga for several months. We will continue improving it and releasing new features, such as assisted troubleshooting for failed migrations and more intelligent, dialect-aware execution planning for things like MySQL's implicit commits.

Wrapping up​

In this post, we learned about Atlas's new migration execution engine and got a glimpse of its internals.

Further reading​

To learn more about Versioned Migration Authoring:

Have questions? Feedback? Find our team on our Discord server.

· 5 min read
Rotem Tamir

Losing data is painful for almost all organizations. This is one of the reasons teams are very cautious when it comes to making changes to their databases. In fact, many teams set explicit policies on what kinds of changes to the database are allowed, often completely prohibiting any change that is destructive.

Destructive changes are changes to a database schema that result in loss of data. For instance, consider a statement such as:

ALTER TABLE `users` DROP COLUMN `email_address`;

This statement is considered destructive because whatever data is stored in the email_address column will be deleted from disk, with no way to recover it.

Suppose you were in charge of a team that decided to prohibit destructive changes. How would you go about enforcing such a policy? From our experience, most teams enforce policies relating to schema migrations in code review: a human engineer, preferably with some expertise in operating databases, manually reviews any proposed database migration scripts and rejects them if they contain destructive changes.

Relying on a human reviewer to enforce such a policy is both expensive (it takes time and mental energy) and error-prone. Just like manual QA is slowly being replaced with automated testing, and manual code style reviews are being replaced with linters, isn't it time that we automate the process of ensuring that changes to database schemas are safe?

Announcing the Atlas GitHub Action​

Today, we're happy to announce the release of the official Atlas GitHub Action, which can be used to lint migration directories created by many popular database migration tools. golang-migrate, goose, dbmate and Atlas itself are already supported, and Flyway and Liquibase are coming soon.

If you're using GitHub to manage your source code, you're in luck. By adding a short configuration file to your repository, you can start linting your schema migration scripts today! Let's see a short example.

Setting up​

Suppose we are running a website for an e-commerce business. To store the data for our website we use a MySQL database. Because the data in this database is everything to us, we use a careful versioned migrations approach where each change to the database schema is described in an SQL script and stored in our Git repository. To execute these scripts we use a popular tool called golang-migrate.

The source code for this example can be found in rotemtam/atlas-action-demo.

Initially, our schema contains two tables: users and orders, documented in the first few migration files:

Create the users table:

migrations/20220819060736.up.sql
-- create "users" table
CREATE TABLE `users` (
  `id` int NOT NULL,
  `name` varchar(100) NULL,
  PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Add a unique email column:

migrations/20220819061056.up.sql
ALTER TABLE `users` ADD COLUMN `email` varchar(255) NOT NULL, ADD UNIQUE INDEX `email_unique` (`email`);

Create the orders table, with a foreign-key referencing the users table:

migrations/20220819075145.up.sql
-- create "orders" table
CREATE TABLE `orders` (
  `id` int NOT NULL,
  `user_id` int NOT NULL,
  `total` decimal(10) NOT NULL,
  PRIMARY KEY (`id`),
  INDEX `user_orders` (`user_id`),
  CONSTRAINT `user_orders` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

Installing the Atlas Action​

To make sure we never accidentally delete data during schema changes, we enact a policy that prohibits destructive changes to the database. To enforce this policy, we invoke the atlas-action GitHub Action from within our continuous integration flow by adding a workflow file named .github/workflows/atlas-ci.yaml:

.github/workflows/atlas-ci.yaml
name: Atlas CI
on:
  # Run whenever code is changed in the master branch,
  # change this to your root branch.
  push:
    branches:
      - master
  # Run on PRs where something changed under the `path/to/migration/dir/` directory.
  pull_request:
    paths:
      - 'migrations/*'
jobs:
  lint:
    services:
      # Spin up a mysql:8.0.29 container to be used as the dev-database for analysis.
      mysql:
        image: mysql:8.0.29
        env:
          MYSQL_ROOT_PASSWORD: pass
          MYSQL_DATABASE: test
        ports:
          - "3306:3306"
        options: >-
          --health-cmd "mysqladmin ping -ppass"
          --health-interval 10s
          --health-start-period 10s
          --health-timeout 5s
          --health-retries 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3.0.1
        with:
          fetch-depth: 0 # Mandatory unless "latest" is set below.
      - uses: ariga/atlas-action@v0
        with:
          dir: migrations/
          dir-format: golang-migrate # Or: atlas, goose, dbmate
          dev-url: mysql://root:pass@localhost:3306/test

Detecting a destructive change​

Next, let's see what happens when a developer accidentally proposes a destructive change to drop a column in the orders table:

migrations/20220819081116.up.sql
-- modify "orders" table
ALTER TABLE `orders` DROP COLUMN `total`;

This change is proposed in PR #1 in our example repo. Because we have previously set up the Atlas GitHub Action to lint our migration directory, whenever a file changes under the migrations/ directory, a workflow is triggered.

After letting our workflow complete, GitHub informs us that the Atlas CI / lint check has failed. Clicking on the "details" link reveals a detailed explanation of the causes of the failure, and examining the Action run summary surfaces an annotation pointing at the offending statement.

As you can see, Atlas has detected the destructive change we proposed to apply to our database and failed our build!

Wrapping up​

In this post we discussed why many teams set policies to prevent destructive changes to database schemas. We further showed how such policies can be enforced in an automated way using the official Atlas GitHub Action.

Further reading​

To learn more about CI for database schema changes:

Have questions? Feedback? Find our team on our Discord server.

· 4 min read
Amit Shani

Today, I'm happy to announce the release of v0.4.2 of the Atlas CLI. This version includes many improvements and fixes, but I wanted to share with you exciting news about something I personally worked on. As of v0.4.2, Atlas includes preview support for CockroachDB 🎉

Atlas is an open-source project that helps developers to better manage their database schemas. It has a CLI tool and a Terraform integration. By using Atlas's Data Definition Language (with a syntax similar to Terraform), users can plan, verify and apply changes to their databases in a simple, declarative workflow. Earlier this year, Atlas became the migration engine for Ent, a widely popular, Linux Foundation backed entity framework for Go.

CockroachDB is an open-source NewSQL database. From their README:

CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. It scales horizontally; survives disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports strongly-consistent ACID transactions; and provides a familiar SQL API for structuring, manipulating, and querying data.

CockroachDB has been gaining popularity and many of you have been asking for Atlas to support it.

While CockroachDB aims to be PostgreSQL compatible, it still has some incompatibilities (e.g. 1, 2, 3) which prevented Atlas users from working with it using the existing Postgres dialect.

With the latest release of Atlas, the Postgres driver automatically detects when it is connected to a CockroachDB database and uses a custom driver that provides compatibility with CockroachDB.

Getting started with Atlas and CockroachDB​

Let's see how we can use the Atlas CLI to manage the schema of a CockroachDB database. Start by downloading the latest version of Atlas. On macOS:

brew install ariga/tap/atlas

For installation instructions on other platforms, see the docs.

For the purpose of this example, let's spin up a local, single-node CockroachDB cluster in a container by running:

docker run --rm -d -p 26257:26257 --name crdb cockroachdb/cockroach start-single-node --insecure

Next, let's seed the database with a simple table:

docker exec crdb cockroach sql --insecure -e 'CREATE TABLE users (id int primary key);'

After creating the users table, use Atlas's schema inspect command to read the schema of our local database and save the result to a file:

atlas schema inspect -u 'postgres://root:pass@localhost:26257/?sslmode=disable' --schema public > schema.hcl

Observe the current HCL representation of the public schema, which contains our newly created table, users:

table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}

Next, edit schema.hcl to add a column to the users table:

schema.hcl
table "users" {
schema = schema.public
column "id" {
null = false
type = bigint
}
column "name" {
type = varchar(100)
}
primary_key {
columns = [column.id]
}
}
schema "public" {
}

Now apply the schema using the schema apply command:

atlas schema apply -u 'postgres://root:pass@localhost:26257/?sslmode=disable' --schema public -f schema.hcl

Atlas prints out the planned changes and asks for your confirmation:

-- Planned Changes:
-- Create "test" table
ALTER TABLE "public"."users" ADD COLUMN "name" character varying(100) NOT NULL
? Are you sure?:
  ▸ Apply
    Abort

After hitting "Apply", Atlas applies the desired schema to the database:

✔ Apply

We have successfully applied our schema to our database.

To stop the container running CockroachDB, run:

docker stop crdb

Learn more about Atlas​

In this short example, we demonstrated two of Atlas's basic features: database inspection and declarative schema migration (applying a desired schema on a database). Here are some topics you may want to explore when getting started with Atlas:

  • Learn the DDL - learn how to define any SQL resource in Atlas's data definition language.
  • Try the Terraform Provider - see how you can use the Atlas Terraform Provider to integrate schema management in your general Infrastructure-as-Code workflows.
  • Use the migrate command to author migrations - In addition to the Terraform-like declarative workflow, Atlas can manage a migration script directory for you based on your desired schema.

Preview support​

The integration of Atlas with CockroachDB is well tested with version v21.2.11 (the latest at the time of writing) and will be extended in the future. If you're using other versions of CockroachDB or looking for help, don't hesitate to file an issue or join our Discord channel.

Have questions? Feedback? Find our team on our Discord server.

· 4 min read
Rotem Tamir

A few days ago we released v0.4.1 of Atlas. Along with a multitude of improvements and fixes, I'm happy to announce the release of a feature that we've been planning for a while: Project Files.

Project files provide a way to describe and interact with multiple environments while working with Atlas. A project file is a file named atlas.hcl that contains one or more env blocks, each describing an environment. Each environment has a reference to where the schema definition file resides, a database URL and an array of the schemas in the database that are managed by Atlas:

// Define an environment named "local".
env "local" {
  // Declare where the schema definition file resides.
  src = "./schema/project.hcl"

  // Define the URL of the database which is managed in
  // this environment.
  url = "mysql://localhost:3306"

  // Define the URL of the Dev Database for this environment.
  // See: https://atlasgo.io/dev-database
  dev = "mysql://localhost:3307"

  // The schemas in the database that are managed by Atlas.
  schemas = ["users", "admin"]
}

env "dev" {
  // ... a different env
}

Project files arose from the need to provide a better experience for developers using the CLI. For example, suppose you are using Atlas to plan migrations for your database schema. In this case, you will be running a command similar to this to plan a migration:

atlas migrate diff --dev-url mysql://root:password@localhost:3306 --to file://schema.hcl --dir file://migrations --format atlas

With project files, you can define an environment named local:

env "local" {
url = "mysql://root:password@localhost:3306"
dev = "mysql://root:password@localhost:3307"
src = "./schema.hcl"
migration {
dir = "file://migrations"
format = atlas
}
}

Then run the migrate diff command against this environment using the --env flag:

atlas migrate diff --env local

Alternatively, suppose you want to use Atlas to apply the schema on your staging environment. Without project files, you would use:

atlas schema apply -u mysql://root:password@db.ariga.dev:3306 --dev-url mysql://root:password@localhost:3307 -f schema.hcl

To do the same using a project file, define another env named staging:

env "staging" {
url = "mysql://root:password@db.ariga.dev:3306"
dev = "mysql://root:password@localhost:3307"
src = "./schema.hcl"
}

Then run:

atlas schema apply --env staging

Passing credentials as input values​

Similar to schema definition files, project files also support Input Variables. This means that we can define variable blocks in the project file to declare which values should be provided when the file is evaluated. This mechanism can (and should) be used to avoid committing database credentials to source control. To do this, first define a variable named db_password:

variable "db_password" {
type = string
}

Next, replace the database password in all connection strings with a reference to this variable, for example:

env "staging" {
url = "mysql://root:${var.db_password}@db.ariga.dev:3306"
dev = "mysql://root:${var.db_password}@localhost:3307"
src = "./schema.hcl"
}

If we run schema apply without providing the password input variable, we will receive an error message:

Error: missing value for required variable "db_password"

To provide the input variable, run:

atlas schema apply --env staging --var db_password=pass

Input variables can be used for many other use cases by passing them as input values to schema files.
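
For example, here is a minimal sketch of a schema file that declares its own input variable to parameterize the schema name (the variable name tenant is purely illustrative):

variable "tenant" {
  type = string
}

schema "tenant" {
  name = var.tenant
}

Applying it with a concrete value could then look like:

atlas schema apply -u "mysql://root:pass@localhost:3306" -f schema.hcl --var tenant=acme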

What's next​

In this post, I presented Project Files, a new feature recently added to Atlas to help developers create more fluent workflows for managing changes to their database schemas. In the coming weeks we will be adding a few more improvements to the dev flow, such as support for marking a specific environment as the default one (alleviating the need to specify --env in many cases) and multi-file schema definitions.

Have questions? Feedback? Find our team on our Discord server.

· 5 min read
Amit Shani

Today we are glad to announce the release of the official Atlas Terraform Provider.

What is Terraform​

Terraform is a popular open-source tool created by HashiCorp, used to greatly simplify the task of provisioning and managing resources in the cloud. With Terraform, organizations can describe the desired state of their infrastructure in a simple configuration language and let Terraform plan and apply these changes in an automated way. In this way, Terraform allows teams to truly deliver infrastructure-as-code (IaC), which has completely changed how teams and organizations manage their cloud infrastructure.

Infrastructure-as-Code and database management​

Most cloud-native applications are backed by a database. The database is often the most critical part of many software systems, so making changes to its schema (the structure and layout of the data inside) is a very risky business. However, schemas must evolve: as functionality changes over time, backing tables are added, columns are dropped, indexes are created for performance reasons, and more.

Therefore it is surprising that there is no established way of integrating the management of schema changes (commonly called schema "migrations") into popular Infrastructure-as-Code workflows. For this reason, many organizations are running migrations from within the application code or using solutions outside the ecosystem of Terraform, meaning that management of the production environment is fragmented and hard to synchronize. Atlas aims to change that.

The Atlas Terraform provider allows you to synchronize your database with your desired schema, in a safe and stateful manner. By using Atlas’s core migration engine and embedding it in a Terraform provider, we are enabling teams to manage their database schemas as part of their full IaC workflow. This way, teams can use existing providers (such as AWS or GCP) to provision the database instance and use the Atlas provider to keep the schema in sync. Integrating Atlas with Terraform is especially useful because it couples the state of the infrastructure with the state of the database. It is also extremely neat when using a dev database, which is a feature that combines infrastructure and DB management to provide safety and correctness.

Demo​

Prerequisites​

Make sure you have Docker and Terraform installed.

Let's see an example of the provider in action. First, spin up a database using Docker:

docker run -p 3306:3306 --name iloveatlas -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=market -d mysql:8

Great! Now we have an instance of MySQL running.

As an extra measure of safety, we will run another identical database which will serve as a Dev Database. In short, the dev-db helps catch errors that can only be detected when applying the schema. It is also used to format the schema in a correct and predictable way. Read more about it here. Run a second instance of MySQL on another port to serve as a dev-db:

docker run -p 3307:3306 --name devdb-greatness -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=market -d mysql:8

Next, we need an HCL file describing the desired state of our database. You can use the Atlas CLI to inspect the state of another database, or you can use the following basic schema:

schema.hcl
table "orders" {
schema = schema.market
column "id" {
null = false
type = int
auto_increment = true
}
column "name" {
null = false
type = varchar(20)
}
primary_key {
columns = [column.id]
}
}

schema "market" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
}

Save the schema file locally in a file named schema.hcl.
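
If you prefer to start from a database you already have instead of the snippet above, here is a sketch of generating this file with the CLI (assuming the MySQL instance started above):

atlas schema inspect -u 'mysql://root:pass@localhost:3306/market' > schema.hcl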

Now that we have our database schema, we can use Terraform to apply it to our database. Create a file named main.tf and copy the following snippet:

main.tf
terraform {
  required_providers {
    atlas = {
      version = "~> 0.1.0"
      source  = "ariga/atlas"
    }
  }
}

provider "atlas" {}

data "atlas_schema" "market" {
  dev_db_url = "mysql://root:pass@localhost:3307/market"
  src        = file("${path.module}/schema.hcl")
}

resource "atlas_schema" "market" {
  hcl        = data.atlas_schema.market.hcl
  url        = "mysql://root:pass@localhost:3306/market"
  dev_db_url = "mysql://root:pass@localhost:3307/market"
}

Finally, initialize Terraform:

terraform init

And apply the schema to the database by executing:

terraform apply --auto-approve

Awesome! Now your database should have a table named orders. To verify, we can connect to the database:

$ docker exec -it iloveatlas mysql -ppass --database=market

mysql> show tables;
+------------------+
| Tables_in_market |
+------------------+
| orders |
+------------------+
1 row in set (0.00 sec)

mysql> show create table orders;
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| orders | CREATE TABLE `orders` (
`id` int NOT NULL AUTO_INCREMENT,
`name` varchar(20) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

For more examples and documentation visit the official GitHub repository or the provider page on Terraform registry.

What's next​

In this post, we presented the Atlas Terraform Provider. The provider currently supports the basic, declarative migration workflow that is available in the Atlas engine. In upcoming versions, we will add support for an additional kind of workflow that is supported by the engine and is called versioned migration authoring. In addition, more advanced safety checks (such as simulation on database snapshots) and migration strategies are also being worked on.

While the Terraform provider has just been released, the core engine that it drives is well tested and widely used (especially as the migration engine backing the popular Ent framework). If you, like me, have always wanted to manage your database schema as part of your team's infrastructure-as-code workflow, give the Atlas Terraform provider a try!

Have questions? Feedback? Find our team on our Discord server.