
· 5 min read
Rotem Tamir

Hi everyone,

Thanks for joining us today for our v0.25 release announcement! In this version we are introducing a new feature that has been requested by many of you: support for Row-level Security Policies in PostgreSQL.

Additionally, we have made some minor changes to our pricing plans; more on that below.

What are Row-level Security Policies?

Row-level security (RLS) in PostgreSQL allows tables to have policies that restrict which rows can be accessed or modified based on the user's role, enhancing the SQL-standard privilege system available through GRANT.

When enabled, all normal access to the table must comply with these policies, defaulting to a deny-all approach if no policies are set, ensuring that no rows are visible or modifiable. Policies can be specific to commands, roles, or both, providing fine-grained control over data access and modification.
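To make the deny-all default concrete, here is a minimal sketch (the table and role names are hypothetical): once RLS is enabled and no policy has been defined, any role other than the table owner (or a superuser) gets an empty result set.

CREATE TABLE accounts (id int, owner text);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

-- As any role other than the table owner (or a superuser):
SELECT * FROM accounts; -- returns no rows until a policy grants access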

How does RLS work?

When you create and enable a row-level security (RLS) policy in PostgreSQL, the database enforces the specified access control rules on a per-row basis.

For example, you can create a policy that allows only employees to see their own records in an employees table. The policy could look like this:

CREATE POLICY employee_policy ON employees
FOR SELECT
USING (current_user = employee_role);

This SQL command creates an RLS policy named employee_policy on the employees table. The FOR SELECT clause specifies that this policy applies to SELECT queries. The USING clause contains the condition current_user = employee_role, which means that a user can only select rows where the employee_role column matches their PostgreSQL username.

Next, database administrators typically run:

ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

This command enables RLS on the employees table. With RLS enabled, PostgreSQL will check the policies defined for this table whenever a user attempts to access or modify existing rows, or insert new ones.

When a user executes a SELECT query on the employees table, PostgreSQL evaluates the employee_policy. If the user's PostgreSQL role (username) matches the employee_role column value in a row, the row is included in the query result. Otherwise, the row is excluded.

For instance, if the employees table contains the following data:

id   name      employee_role
1    Alice     alice
2    Bob       bob
3    Charlie   charlie

When the user alice runs SELECT * FROM employees, PostgreSQL applies the policy:

SELECT * FROM employees WHERE current_user = employee_role;

This results in:

id   name      employee_role
1    Alice     alice

By enforcing these policies, RLS ensures that users only have access to the data they are permitted to see, enhancing the security and privacy of the database.

Manage your Row-level Security Policies as Code

With Atlas, you can now manage your RLS policies as code, just like you manage other database resources such as tables, indexes, and triggers. This allows you to version control your policies, track changes, and apply them consistently across your environments.

To get started with RLS in Atlas, first upgrade to the most recent version.

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

RLS is available to Atlas Pro users only. Get your free Atlas Pro account today by running:

atlas login

Next, you can define your RLS policies in your Atlas schema file (schema.hcl) using the new policy block:

policy "employee_policy" {
on = table.employees
for = SELECT
to = [PUBLIC]
using = "(current_user = employee_role)"
}

This HCL snippet defines an RLS policy named employee_policy on the employees table, allowing only users whose employee_role matches their PostgreSQL username to SELECT rows from the table.

Next, you need to enable RLS on the table:

table "employees" {
schema = schema.public
column "employee_role" {
type = text
}
row_security {
enabled = true // ENABLE ROW LEVEL SECURITY
}
}

Finally, run atlas schema apply to apply the changes to your database!

To learn more about RLS using Atlas, check out our documentation.

Introducing Atlas Pro

Since launching Atlas Cloud a little over a year ago, we have been working hard with our users and customers to make Atlas as easy and simple to use as possible.

One point of confusion we have encountered, especially around our pricing plans, was how users who currently don't want to (or can't) use Atlas Cloud for their CI/CD pipelines can get access to the advanced CLI features that Atlas offers. Previously, teams needed to buy Cloud quota to get access to the CLI, which didn't make a lot of sense.

To address some of these issues we are making some small changes to our pricing plans:

Atlas now comes in three tiers:

  • Open - Our CLI. It doesn't require creating an account and comes with a solid set of features (this is more than enough for many of our users).
  • Pro (previously "Business") - An enhanced version of our CLI, which includes support for advanced database features and drivers. It will cost $9/month/user, but users get their first 3 seats per company for free when they sign up. Pro users also have access to Atlas Cloud (pricing remains the same).
  • Enterprise - our enterprise tier, targeted mostly at larger organizations or teams in regulated industries with stricter compliance requirements.

To learn more about our new plans, head over to our updated pricing page.

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 13 min read
Rotem Tamir

Hi everyone,

We are back again with a new release of Atlas, v0.24. In this release we double down on the core principle that has been guiding us from the start: enabling developers to manage their database schema as code. The features we announce today may appear to be yet another cool addition to Atlas, but I am fairly confident that, in a few years' time, they will be recognized as something foundational.

In this release we introduce:

  • schema test - a new command (and framework) for testing your database schema using familiar software testing paradigms.
  • migrate test - a new command for writing tests for your schema migrations.
  • Enhanced editor support - we have added support for some long-awaited features in our VSCode and JetBrains plugins: multi-file schemas, jump to definition, and support for much larger schemas.

Doubling Down on Database Schema-as-Code

The core idea behind Atlas is to enable developers to manage their Database Schema-as-Code. Before we jump into the recent additions to Atlas, I would like to take a moment to reflect on why our industry seems to think that "X-as-Code" is a great idea.

In a nutshell, the "X-as-Code" movement is about being able to describe the desired state of a system (whether it's infrastructure, configuration, or schema) in a declarative way and then have that state enforced by a tool.

So why is having things described as code so great? Here are a few reasons:

  • Code can be versioned. This means that you can track changes to your system over time, easily compare states, and rollback as needed.
  • Code is understood by machines. As formal languages, code can be parsed, analyzed, and executed by machines.
  • Code can be tested and validated. By using software testing paradigms, you can ensure that your system behaves as expected in an automated way.
  • Code can be shared and reused. Code allows us to transfer successful ideas and implementations between projects and teams.
  • Code has a vast ecosystem of productivity tools. By using code, you can leverage the vast ecosystem of tools and practices that have been developed by software engineers over the years.

Our core goal with Atlas is to bring these benefits to the world of database schema management. We believe that by enabling developers to manage their database schema as code, we can help them build better, more reliable systems.

Today we bring one of the most important tenets of modern software development to the world of database schema management: testing.

Why test your database schema and migrations?

Testing is a fundamental part of modern software development. By writing tests, you can ensure that your code behaves as expected, catch bugs early, and prevent regressions.

When it comes to database schemas, testing is just as important. Databases are much more than just a storage layer: they can be programmed, enforce logic and constraints, and have complex relationships between tables. For example, table triggers allow you to run custom code when certain events occur, and you should be able to test that this code behaves as expected and that later changes to the schema do not break it. In a similar vein, developers can provide complex expressions in check constraints that should be tested to ensure they are working as expected.
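For example, a check constraint with non-trivial logic is exactly the kind of thing worth covering with a test (the table and columns below are hypothetical):

-- A constraint like this encodes a business rule that deserves its own tests:
ALTER TABLE orders
  ADD CONSTRAINT discount_valid
  CHECK (discount >= 0 AND discount <= price);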

When it comes to migrations, testing is equally important. Atlas already provides the migrate lint command to help you catch invalid migrations and common mistakes. However, migrate test takes validating your migrations a step further.

Many teams use migrations as a mechanism to apply data migrations in tandem with schema changes. As they involve data, these changes are super risky, yet it is notoriously hard to test them. By providing a way to test your migrations, we hope to make this process easier and more reliable.

Introducing schema test

The schema test command allows you to write tests for your database schema using familiar software testing paradigms.

To get started, first install the latest version of the Atlas CLI:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Next, login to your Atlas account to activate the new schema testing features:

atlas login

Let's see a brief example. We will begin our project by defining a basic Atlas project file named atlas.hcl:

atlas.hcl
env "local" {
src = "file://schema.hcl"
dev = "docker://postgres/16/dev?search_path=public"
}

Next, let's define a PostgreSQL Domain to model a data type for a us_postal_code:

schema.sql
CREATE DOMAIN "us_postal_code" AS text
  CONSTRAINT "us_postal_code_check"
  CHECK (
    (VALUE ~ '^\d{5}$'::text) OR
    (VALUE ~ '^\d{5}-\d{4}$'::text)
  );

Next, let's create a file named "schema.test.hcl" with the following content:

schema.test.hcl
test "schema" "postal" {
exec {
sql = "select 'hello'::us_postal_code"
}
}

Per testing best practices, we start with a test that is going to fail, since the string "hello" is not a valid US postal code.

Now, we can run the test using the schema test command:

atlas schema test --env local

The output will be:

-- FAIL: postal (319µs)
schema.test.hcl:2:
Error: pq: value for domain us_postal_code violates check constraint "us_postal_code_check"
FAIL

As expected, the test failed, and we can now fix the test by catching that error and verifying its message:

schema.test.hcl
test "schema" "postal" {
catch {
sql = "select 'hello'::us_postal_code"
error = "value for domain us_postal_code violates check constraint"
}
}

Re-running the test:

atlas schema test --env local

The output will be:

-- PASS: postal (565µs)
PASS

Now we can expand the test to cover more cases, such as valid postal codes and more invalid cases:

schema.test.hcl
test "schema" "postal" {
exec {
sql = "select '12345'::us_postal_code"
output = "12345" // Assert the returned value is "12345"
}
exec {
sql = "select '12345-1234'::us_postal_code"
output = "12345-1234" // Assert the returned value is "12345-1234"
}
catch {
sql = "select 'hello'::us_postal_code"
error = "value for domain us_postal_code violates check constraint"
}
catch {
sql = "select '1234'::us_postal_code"
error = "value for domain us_postal_code violates check constraint"
}
assert {
sql = "select '12345'::us_postal_code::text='12345'" // Assert the query returns true.
}
log {
message = "Hooray, testing!"
}
}

Re-running the test:

atlas schema test --env local

The output will be:

-- PASS: postal (1ms)
schema.test.hcl:21: Hooray, testing!
PASS

Let's review what happens when we run atlas schema test:

  • Atlas will apply the schema for the local environment on the dev database.
  • Atlas will search the current directory for files matching the pattern *.test.hcl.
  • For each test file found, Atlas will execute a test for each test "schema" "<name>" block.
  • Here are the possible test blocks:
    • exec - Executes a SQL statement and verifies the output.
    • catch - Executes a SQL statement and verifies that an error is thrown.
    • assert - Executes a SQL statement and verifies that the output is true.
    • log - Logs a message to the test output.

Using this modest framework, you can now write tests for your database schema, ensuring that it behaves as expected. This command can be integrated into your local development workflow or even as part of your CI pipeline further ensuring the quality of your database schema changes.

Introducing migrate test

The migrate test command allows you to write tests for your schema migrations. This is a powerful feature that enables you to test logic in your migrations in a minimal and straightforward way. The command is similar to schema test but is focused on testing migrations.

Suppose we are refactoring an existing table users which has a name column that we want to split into first_name and last_name columns. The recommended way to do this kind of refactoring is in a backward-compatible manner: initially, we add the new columns as nullable, alongside the existing name column. In Atlas DDL, the schema change would look roughly like this:

schema.hcl
table "users " {
// .. redacted
+ column "first_name" {
+ type = text
+ null = true
+ }
+ column "last_name" {
+ type = text
+ null = true
+ }
}

Next, we will use Atlas to generate a migration for this change:

atlas migrate diff --env local

A new file will be created in our migrations directory:

20240613061102.sql
-- Modify "users" table
ALTER TABLE "users" ADD COLUMN "first_name" text NULL, ADD COLUMN "last_name" text NULL;

Next, let's add the backfill logic to populate the new columns with the data from the name column:

20240613061102.sql
-- Modify "users" table
ALTER TABLE "users" ADD COLUMN "first_name" text NOT NULL, ADD COLUMN "last_name" text NOT NULL;

-- Backfill data
UPDATE "users" SET "first_name" = split_part("name", ' ', 1), "last_name" = split_part("name", ' ', 2);

After changing the contents of our migration file, we must update our atlas.sum file to reflect the changes:

atlas migrate hash --env local

Next, we will create a test case to verify that our migration works correctly in different cases. Let's add the following block to a new file named migrations.test.hcl:

migrations.test.hcl
test "migrate" "name_split" {
migrate {
// Replace with the migration version before the one we just added.
to = "20240613061046"
}
exec {
sql = "insert into users (name) values ('Ada Lovelace')"
}
migrate {
to = "20240613061102"
}
exec {
sql = "select first_name,last_name from users"
output = "Ada, Lovelace"
}
}

Let's explain what this test does:

  • We start by defining a new test case named name_split.
  • The migrate block runs migrations up to a specific version. In this case, we are running all migrations up to the version before the one we just added.
  • The exec block runs a SQL statement. In this case, we are inserting a new user with the name "Ada Lovelace".
  • Next, we run our new migration, 20240613061102.
  • Finally, we run a SQL statement to verify that the first_name and last_name columns were populated correctly.

Let's run the test:

atlas migrate test --env local

The output will be:

-- PASS: name_split (33ms)
PASS

Great, our test passed! We can now be confident that our migration works as expected.

Testing Edge Cases

With our test infra all set up, it's now easy to add more test cases to cover edge cases. For example, we can add a test to verify that our splitting logic works correctly for names that include a middle name, such as John Fitzgerald Kennedy:

migrations.test.hcl
test "migrate" "name_split_middle_name" {
migrate {
to = "20240613061046"
}
exec {
sql = "insert into users (name) values ('John Fitzgerald Kennedy')"
}
migrate {
to = "20240613061102"
}
exec {
sql = "select first_name,last_name from users"
output = "John Fitzgerald, Kennedy"
}
}

We expect to see only the family name in the last_name column, and the rest of the name in the first_name column.

Will it work? Let's run the test:

atlas migrate test --env local --run name_split_middle_name

Our test fails:

-- FAIL: name_split_middle_name (32ms)
migrations.test.hcl:27:
Error: no match for `John Fitzgerald, Kennedy` found in "John, Fitzgerald"
FAIL

Let's improve our splitting logic to be more robust:

20240613061102.sql
-- Modify "users" table
ALTER TABLE "users" ADD COLUMN "first_name" text NULL, ADD COLUMN "last_name" text NULL;

-- Backfill data
UPDATE "users"
SET "first_name" = regexp_replace("name", ' ([^ ]+)$', ''),
"last_name" = regexp_replace("name", '^.* ', '');

We changed our splitting logic to be more robust by using regular expressions:

  • The first_name column will now contain everything before the last space in the name column.
  • The last_name column will contain everything after the last space in the name column.
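A quick way to sanity-check these expressions is to run them directly against the dev database; for the middle-name case they behave as intended:

SELECT regexp_replace('John Fitzgerald Kennedy', ' ([^ ]+)$', '') AS first_name, -- 'John Fitzgerald'
       regexp_replace('John Fitzgerald Kennedy', '^.* ', '')      AS last_name;  -- 'Kennedy'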

Before testing our new logic, we need to update our migration hash:

atlas migrate hash --env local

Now, let's run the test again:

atlas migrate test --env local --run name_split_middle_name

The output will be:

-- PASS: name_split_middle_name (31ms)
PASS

Great! Our test passed, and we can now be confident that our migration works as expected for names with middle names.

As a final check, let's also verify that our migration works correctly for names with only one word, such as Prince:

migrations.test.hcl
test "migrate" "name_split_one_word" {
migrate {
to = "20240613061046"
}
exec {
sql = "insert into users (name) values ('Prince')"
}
migrate {
to = "20240613061102"
}
exec {
sql = "select first_name,last_name from users"
output = "Prince, "
}
}

Let's run the test:

atlas migrate test --env local --run name_split_one_word

The output will be:

-- PASS: name_split_one_word (34ms)
PASS

Amazing! Our test passed, and we can move forward with confidence.

Enhanced Editor Support

In this release, we have added support for some long-awaited features in our VSCode and JetBrains plugins:

  • Multi-file schemas - Our editor plugins will now automatically detect and load all schema files in your project, allowing you to reference tables and columns across files.
  • Jump to definition - Source code can be modeled as a graph of entities where one entity can reference another. For example, a Java class method invokes a method in another class, or a table's foreign key references another table's primary key. Jump to definition allows you to navigate this graph by jumping to the definition of the entity you are interested in.
  • Support for much larger schemas - We have improved the performance of our editor plugins to support much larger schemas.

To try the latest versions, head over to the VSCode Marketplace or the JetBrains Marketplace.

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 9 min read
Rotem Tamir

Hi everyone,

It's been a few weeks since the release of v0.22, and we're excited to be back with the next version of Atlas, packed with long-awaited features and improvements.

  • Redshift Support - Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Starting today, you can use Atlas to manage your Redshift Schema-as-Code.
  • CircleCI Integration - Following some recent requests from our Enterprise customers, we have added a CircleCI orb to make it easier to integrate Atlas into your CircleCI pipelines.
  • Kubernetes Operator Down Migrations - The Kubernetes Operator now detects when you are moving to a previous version and will attempt to apply a down migration if configured to do so.
  • GORM View Support - We have added support for defining SQL Views in your GORM models.
  • SQLAlchemy Provider Improvements - We have added support for defining models using SQLAlchemy Core Tables in the SQLAlchemy provider.
  • ERD v2 - We have added a new navigation sidebar to the ERD to make it easier to navigate within large schemas.
  • PostgreSQL Improvements - We have added support for PostgreSQL Event Triggers, Aggregate Functions, and Function Security.

Let's dive in!

Redshift Beta Support

Atlas's "Database Schema-as-Code" is useful even for managing small schemas with a few tables, but it really shines when you have a large schema with many tables, views, and other objects. This is the common case instead of the exception when you are dealing with Data Warehouses like Redshift that aggregate data from multiple sources.

Data warehouses typically store complex and diverse datasets consisting of hundreds of tables with thousands of columns and relationships. Managing these schemas manually can be a nightmare, and that's where Atlas comes in.

Today we are happy to announce the beta support for Amazon Redshift in Atlas. You can now use Atlas to manage your Redshift schema, generate ERDs, plan and apply changes, and more.

To get started, first install the latest version of the Atlas CLI:

To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

curl -sSf https://atlasgo.sh | sh

Next, login to your Atlas account to activate the Redshift beta feature:

atlas login

To verify Atlas is able to connect to your Redshift database, run the following command:

atlas schema inspect --url "redshift://<username>:<password>@<host>:<port>/<database>?search_path=<schema>"

If everything is working correctly, you should see the Atlas DDL representation of your Redshift schema.

To learn more about the Redshift support in Atlas, check out the documentation.

CircleCI Integration

CircleCI is a popular CI/CD platform that allows you to automate your software development process. With this version we have added a CircleCI orb to make it easier to integrate Atlas into your CircleCI pipeline. CircleCI orbs are reusable packages of YAML configuration that condense repeated pieces of config into a single line of code.

As an example, suppose you wanted to create a CircleCI pipeline that pushes your migration directory to your Atlas Cloud Schema Registry. You can use the atlas-orb to simplify the configuration:

version: '2.1'
orbs:
  atlas-orb: ariga/atlas-orb@0.0.3
workflows:
  postgres-example:
    jobs:
      - push-dir:
          context: the-context-has-ATLAS_TOKEN
          docker:
            - image: cimg/base:current
            - environment:
                POSTGRES_DB: postgres
                POSTGRES_PASSWORD: pass
                POSTGRES_USER: postgres
              image: cimg/postgres:16.2
          steps:
            - checkout
            - atlas-orb/setup:
                cloud_token_env: ATLAS_TOKEN
                version: latest
            - atlas-orb/migrate_push:
                dev_url: >-
                  postgres://postgres:pass@localhost:5432/postgres?sslmode=disable
                dir_name: my-cool-project

Let's break down the configuration:

  • The push-dir job uses the cimg/postgres:16.2 Docker image to run a PostgreSQL database. This database will be used as the Dev Database for different operations performed by Atlas.
  • The atlas-orb/setup step initializes the Atlas CLI with the provided ATLAS_TOKEN environment variable.
  • The atlas-orb/migrate_push step pushes the migration directory my-cool-project to the Atlas Cloud Schema Registry.

To learn more about the CircleCI integration, check out the documentation.

Kubernetes Operator Down Migrations

The Atlas Operator is a Kubernetes operator that enables you to manage your database schemas using Kubernetes Custom Resources. In one of our recent releases, we added support for the migrate down command to the CLI. Using this command, you can roll back applied migrations in a safe and controlled way, without using pre-planned down migration scripts or manual intervention.

Starting with v0.5.0, the Atlas Operator supports down migrations as well. When you change the desired version of your database for a given AtlasMigration resource, the operator will detect whether you are moving to a previous version and will attempt to apply a down migration if you configured it to do so.

Down migrations are controlled via the new protectedFlows field in the AtlasMigration resource. This field allows you to specify the policy for down migrations. The following policy, for example, allows down migrations and auto-approves them:

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
  name: atlasmig-mysql
spec:
  protectedFlows:
    migrateDown:
      allow: true
      autoApprove: true
  # ... redacted for brevity

Alternatively, Atlas Cloud users may set the autoApprove field to false to require manual approval for down migrations. In this case, the operator will pause the migration and wait for the user to approve the down migration before proceeding.

ERD v2

When you push your migration directory to the Atlas Cloud Schema Registry, Atlas generates an ERD for your schema. The ERD is a visual representation of your schema that shows the different database objects in your schema and the relationships between them.

To make it easier to navigate within large schemas, we have recently added a fresh new navigation sidebar to the ERD.

GORM View Support

GORM is a popular ORM library for Go that provides a simple way to interact with databases. The Atlas GORM provider provides a seamless integration between Atlas and GORM, allowing you to generate migrations from your GORM models and apply them to your database.

SQL Views are a powerful feature in relational databases that allow you to create virtual tables based on the result of a query. Managing views with GORM (and ORMs in general) is a notoriously clunky process, as they are normally not first-class citizens in the ORM world.

With v0.4.0, we have added a new API to the GORM provider that allows you to define views in your GORM models.

Here's a glimpse of how you can define a view in GORM:

// User is a regular gorm.Model stored in the "users" table.
type User struct {
    gorm.Model
    Name string
    Age  int
}

// WorkingAgedUsers is mapped to the VIEW definition below.
type WorkingAgedUsers struct {
    Name string
    Age  int
}

func (WorkingAgedUsers) ViewDef(dialect string) []gormschema.ViewOption {
    return []gormschema.ViewOption{
        gormschema.BuildStmt(func(db *gorm.DB) *gorm.DB {
            return db.Model(&User{}).Where("age BETWEEN 18 AND 65").Select("name, age")
        }),
    }
}

By implementing the ViewDefiner interface, GORM users can now include views in their GORM models and have Atlas automatically generate the necessary SQL to create the view in the database.
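Under the hood, the view body comes from the query built in BuildStmt, so the SQL Atlas plans for this model should look roughly like the following (exact view naming and quoting may differ):

CREATE VIEW working_aged_users AS
SELECT name, age FROM users WHERE age BETWEEN 18 AND 65;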

To learn more about the GORM view support, check out the documentation.

Thanks to luantranminh for contributing this feature!

SQLAlchemy Provider Improvements

The Atlas SQLAlchemy provider allows you to generate migrations from your SQLAlchemy models and apply them to your database.

With v0.2.2, we have added support for defining models using SQLAlchemy Core Tables in addition to the existing support for ORM Models.

In addition, we have decoupled the provider from using a specific SQLAlchemy release, allowing users to use any version of SQLAlchemy they prefer. This should provide more flexibility and make it easier to integrate the provider into your existing projects.

Huge thanks to vshender for contributing these improvements!

Other Improvements

On our quest to support the long tail of lesser known database features we have recently added support for the following:

PostgreSQL Event Triggers

PostgreSQL Event Triggers are a special kind of trigger. Unlike regular triggers, which are attached to a single table and capture only DML events, event triggers are global to a particular database and are capable of capturing DDL events.

Here are some examples of how you can use event triggers in Atlas:

# Block table rewrites.
event_trigger "block_table_rewrite" {
  on      = table_rewrite
  execute = function.no_rewrite_allowed
}

# Filter specific events.
event_trigger "record_table_creation" {
  on      = ddl_command_start
  tags    = ["CREATE TABLE"]
  execute = function.record_table_creation
}
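For reference, these blocks roughly correspond to the following PostgreSQL DDL, using the functions referenced above (on PostgreSQL versions prior to 11, EXECUTE FUNCTION is spelled EXECUTE PROCEDURE):

CREATE EVENT TRIGGER block_table_rewrite
  ON table_rewrite
  EXECUTE FUNCTION no_rewrite_allowed();

CREATE EVENT TRIGGER record_table_creation
  ON ddl_command_start
  WHEN TAG IN ('CREATE TABLE')
  EXECUTE FUNCTION record_table_creation();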

Aggregate Functions

Aggregate functions are functions that operate on a set of values and return a single value. They are commonly used in SQL queries to perform calculations on groups of rows. PostgreSQL allows users to define custom aggregate functions using the CREATE AGGREGATE statement.

Atlas now supports defining custom aggregate functions in your schema. Here's an example of how you can define an aggregate function in Atlas:

aggregate "sum_of_squares" {
schema = schema.public
arg {
type = double_precision
}
state_type = double_precision
state_func = function.sum_squares_sfunc
}

function "sum_squares_sfunc" {
schema = schema.public
lang = PLpgSQL
arg "state" {
type = double_precision
}
arg "value" {
type = double_precision
}
return = double_precision
as = <<-SQL
BEGIN
RETURN state + value * value;
END;
SQL
}
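For comparison, the aggregate block above roughly corresponds to this PostgreSQL statement:

CREATE AGGREGATE sum_of_squares (double precision) (
  SFUNC = sum_squares_sfunc,
  STYPE = double precision
);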

Function Security

PostgreSQL allows you to define the security level of a function using the SECURITY clause. The SECURITY clause can be set to DEFINER or INVOKER. When set to DEFINER, the function is executed with the privileges of the user that defined the function. When set to INVOKER, the function is executed with the privileges of the user that invoked the function. This is useful when you want to create functions that execute with elevated privileges.

Atlas now supports defining the security level of functions in your schema. Here's an example of how you can define a function with SECURITY DEFINER in Atlas:

function "positive" {
schema = schema.public
lang = SQL
arg "v" {
type = integer
}
return = boolean
as = "SELECT v > 0"
security = DEFINER
}
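This roughly corresponds to the following SQL:

CREATE FUNCTION positive(v integer) RETURNS boolean
  LANGUAGE SQL
  SECURITY DEFINER
  AS 'SELECT v > 0';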

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 7 min read

Hi everyone,

It's been a few weeks since our last release, and we're happy to be back with a version packed with brand new and exciting features. Here's what's inside:

  • RENAME Detection - This version includes a RENAME detector that identifies ambiguous situations of potential resource renames and interactively asks the user for feedback before generating the changes.
  • PostgreSQL Features
    • UNIQUE and EXCLUDE - Unique constraints and exclusion constraints were added.
    • Composite Types - Added support for composite types, which are user-defined types that represent the structure of a row.
    • Table lock checks - Eight new checks that review migration plans and alert in cases of potential table locks in different modes.
  • SQL Server Sequence Support - Atlas now supports managing sequences in SQL Server.

Let's dive in!

RENAME Detection

One of the first things we were asked when we introduced Atlas's declarative approach to schema management was, “How are you going to deal with renames?” While column and table renames are a fairly rare event in the lifecycle of a project, the question arises from the fact that it's impossible to completely disambiguate between RENAME and DROP and ADD operations. While the end schema will be the same in both cases, the actual impact of an undesired DROP operation can be disastrous.

To avoid this, Atlas now detects potential RENAME scenarios during the migration planning phase and prompts the user about their intent.

Let's see this in action.

Assume we have a users table with the column first_name, which we changed to name.

After running the atlas migrate diff command to generate a migration, we will see the following:

? Did you rename "users" column from "first_name" to "name":
  ▸ Yes
    No

If this was our intention, we select "Yes", and the generated SQL statement will be a RENAME statement.

If we select "No", the generated SQL will drop the first_name column and create the name column instead.

PostgreSQL Features

Unique and Exclude Constraints

Now, Atlas supports declaring unique and exclude constraints in your schema.

For example, if we were to add a unique constraint on a name column, it would look similar to:

schema.hcl
# Columns only.
unique "name" {
  columns = [column.name]
}

Read more about unique constraints in Atlas here.

Exclude constraints ensure that if any two rows are compared using a specified operator, at least one of the specified conditions must hold true. This means that the constraint ensures that no two rows satisfy the specified operator at the same time.

schema.hcl
exclude "excl_speaker_during" {
type = GIST
on {
column = column.speaker
op = "="
}
on {
column = column.during
op = "&&"
}
}
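For reference, this block roughly corresponds to the classic "no overlapping bookings per speaker" constraint in SQL (the table name here is hypothetical):

ALTER TABLE reservations
  ADD CONSTRAINT excl_speaker_during
  EXCLUDE USING GIST (speaker WITH =, during WITH &&);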

Composite Types

Composite Types are user-defined data types that represent a structure of a row or record. Once defined, composite types can be used to declare columns in tables or used in functions and stored procedures.

For example, let's say we have a users table where each user has an address. We can create a composite type address and add it as a column to the users table:

schema.hcl
composite "address" {
schema = schema.public
field "street" {
type = text
}
field "city" {
type = text
}
}

table "users" {
schema = schema.public
column "address" {
type = composite.address
}
}

schema "public" {
comment = "standard public schema"
}
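The composite block above roughly corresponds to the following SQL:

CREATE TYPE address AS (
  street text,
  city   text
);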

Learn more about composite types here.

Table Locking Checks

One of the common ways in which schema migrations cause production outages is when a schema change requires the database to acquire a lock on a table, immediately causing read or write operations to fail. If you are dealing with small tables, these locks might be acquired for a short time, which will not be noticeable. However, if you are managing a large and busy database, these situations can lead to a full-blown system outage.

Many developers are not aware of these pitfalls, only to discover them in the middle of a crisis, which is made even worse by the fact that once they happen, there is nothing you can do except quietly wait for the migration to complete.

Teams looking to improve the reliability and stability of their systems turn to automation to prevent human errors like these. Atlas's automatic analysis capabilities can be utilized to detect such risky changes during the CI phase of the software development lifecycle.

In this version, we have added eight new analyzers to our PostgreSQL integration that check for cases where planned migrations can lead to locking a table. Here's a short rundown of these analyzers and what they detect:

  • PG104 - Adding a PRIMARY KEY constraint (with its index) acquires an ACCESS EXCLUSIVE lock on the table, blocking all access during the operation.
  • PG105 - Adding a UNIQUE constraint (with its index) acquires an ACCESS EXCLUSIVE lock on the table, blocking all access during the operation.
  • PG301 - A change to the column type that requires rewriting the table (and potentially its indexes) on disk.
  • PG302 - Adding a column with a volatile DEFAULT value requires a rewrite of the table.
  • PG303 - Modifying a column from NULL to NOT NULL requires a full table scan.
    • If the table has a CHECK constraint that ensures NULL cannot exist, such as CHECK (c > 10 AND c IS NOT NULL), the table scan is skipped, and therefore this check is not reported.
  • PG304 - Adding a PRIMARY KEY on a nullable column implicitly sets them to NOT NULL, resulting in a full table scan unless there is a CHECK constraint that ensures NULL cannot exist.
  • PG305 - Adding a CHECK constraint without the NOT VALID clause requires scanning the table to verify that all rows satisfy the constraint.
  • PG306 - Adding a FOREIGN KEY constraint without the NOT VALID clause requires a full table scan to verify that all rows satisfy the constraint.

View a full list of all the checks here.
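To make this concrete, here is a hypothetical example of the kind of change PG305 flags, together with the safer two-step alternative (table and column names are made up):

-- Flagged: validating the constraint scans the whole table under a lock.
ALTER TABLE users ADD CONSTRAINT age_positive CHECK (age > 0);

-- Safer: add the constraint without validation, then validate it separately
-- with a weaker lock.
ALTER TABLE users ADD CONSTRAINT age_positive CHECK (age > 0) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT age_positive;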

SQL Server Sequence Support

In SQL Server, sequences are objects that generate a sequence of numeric values according to specific properties. Sequences are often used to generate unique identifiers for rows in a table.

Atlas supports the different types of sequences. For example, a simple sequence with default values can be declared like so:

sequence "s1" {
schema = schema.dbo
}

We can also create a sequence with a custom configuration.

sequence "s2" {
schema = schema.dbo
type = int
start = 1001
increment = 1
min_value = 1001
max_value = 9999
cycle = true
}

In the example above, we have a sequence that starts at 1001, and is incremented by 1 until it reaches the maximum value of 9999. Once it reaches its maximum value, it will start over because cycle is set to true.
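For reference, the s2 block above roughly corresponds to the following T-SQL:

CREATE SEQUENCE dbo.s2
  AS int
  START WITH 1001
  INCREMENT BY 1
  MINVALUE 1001
  MAXVALUE 9999
  CYCLE;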

Another option is to create a sequence with an alias type. For example, if we were to create a sequence for Social Security Numbers, we would do the following:

sequence "s3" {
schema = schema.dbo
type = type_alias.ssn
start = 111111111
increment = 1
min_value = 111111111
}
type_alias "ssn" {
schema = schema.dbo
type = dec(9, 0)
null = false
}

Read the docs for more information about sequences.

Wrapping Up

That's all for this release! We hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 7 min read

Hi everyone,

It's been a few weeks since our last version announcement and today I'm happy to share with you v0.20, which includes some big changes and exciting features:

  • New Pricing Model - As we announced earlier this month, beginning March 15th the new pricing model took effect. The new pricing is usage-based, offering you more flexibility and cost efficiency. Read about what prompted this change and view the new pricing plans here.
  • Django ORM Integration - Atlas now supports Django! Django is a popular ORM for Python. Developers using Django can now use Atlas to automatically plan schema migrations based on the desired state of their schema, instead of crafting them by hand.
  • Support for PostgreSQL Extensions - Atlas now supports installing and managing PostgreSQL extensions.
  • Dashboards in the Cloud - The dashboard (previously the 'Projects' page) got a whole new look in Atlas Cloud. Now you can view the state of your projects and environments at a glance.
  • SQL Server is out of Beta - SQL Server is officially out of Beta! Along with this official support, we have included some new features:
    • User-Defined Types support for SQL Server - Atlas now supports two User-Defined Types: alias types and table types.
    • Azure Active Directory (AAD) Authentication for SQL Server - Connect to your SQL Server database using AAD Authentication.

Let’s dive in!

New Pricing Model

As of March 15th, there is a new pricing model for Atlas users. This change is a result of feedback we received from many teams that the previous $295/month minimum was prohibitive, and a gradual, usage-based pricing model would help them adopt Atlas in their organizations.

You can read the full reasoning for the change and a breakdown of the new pricing in this blog post.

Django ORM Integration

Django is the most popular web framework in the Python community. It includes a built-in ORM which allows users to describe their data model using Python classes. Migrations are then created using the makemigrations command and applied to the database using the migrate command.

Among the many ORMs available in our industry, Django's automatic migration tool is one of the most powerful and robust. It can handle a wide range of schema changes; however, having been created in 2014, a very different era in software engineering, it naturally has some limitations.

Some of the limitations of Django's migration system include:

  1. Database Features - Because it was created to provide interoperability across database engines, Django's migration system is centered around the "lowest common denominator" of database features.

  2. Ensuring Migration Safety - Migrations are a risky business. If you're not careful, you can easily cause data loss or a production outage. Django's migration system does not provide a native way to ensure that a migration is safe to apply.

  3. Modern Deployments - Django does not provide native integration with modern deployment practices such as GitOps or Infrastructure-as-Code.

Atlas, on the other hand, lets you manage your Django applications using the Database Schema-as-Code paradigm. This means that you can use Atlas to automatically plan schema migrations for your Django project, and then apply them to your database.

Read the full guide to set up Atlas for your Django project.

Support for PostgreSQL Extensions

Postgres extensions are add-on modules that enhance the functionality of the database by introducing new objects, such as functions, data types, operators, and more.

The support for extensions has been highly requested, so we are excited to announce that they are finally available!

To load an extension, add the extension block to your schema file. For example, adding PostGIS would look similar to:

schema.hcl
extension "postgis" {
schema = schema.public
version = "3.4.1"
comment = "PostGIS geometry and geography spatial types and functions"
}
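This block roughly corresponds to the following SQL:

CREATE EXTENSION IF NOT EXISTS postgis
  SCHEMA public
  VERSION '3.4.1';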

Read more about configuring extensions in your schema here.

Dashboards in the Cloud

Atlas Cloud has a new and improved dashboard view!

When working with multiple databases, environments, or even projects - it becomes increasingly difficult to track and manage the state of each of these components. With Atlas Cloud, we aim to provide a single source of truth, allowing you to get a clear overview of each schema, database, environment, deployment and their respective statuses.

project-dashboard

Once you push your migration directory to the schema registry, you will be able to see a detailed dashboard like the one shown above.

Let’s break down what we see:

  • The usage calendar shows when changes are made to your migration directory via the migrate push command in CI.

  • The databases show the state of your target databases. This list will be populated once you have set up deployments for your migration directory. The state of the database can be one of the following:

    • Synced - the database is at the same version as the latest version of your migration directory schema.
    • Failed - the last deployment has failed on this database.
    • Pending - the database is not up to date with the latest version of your migration directory schema.

An alternative way to view this page is per environment. This way, you can see a comprehensive list of the status of each database in each environment.

project-envs

SQL Server out of Beta

We are proud to announce that SQL Server is officially supported by Atlas! Since our release of SQL Server in Beta last August, our team has been working hard to refine and stabilize its performance.

In addition, we have added two new capabilities to the SQL Server driver.

User-Defined Types Support

In SQL Server, user-defined types (UDTs) are a way to create custom data types that group together existing data types. Atlas now supports alias types and table types.

Alias Types

Alias types allow you to create a custom data type, which can then make your code more readable and maintainable.

For example, you might want to create an alias type email_address for the VARCHAR(100) data type. Instead of rewriting this throughout the code, and in order to maintain consistency, you can simply use email_address for clarity.

In the schema.hcl file, you would define this like so:

schema.hcl
type_alias "email_address" {
schema = schema.dbo
type = varchar(100)
null = false
}
table "users" {
schema = schema.dbo
column "email_address" {
type = type_alias.email_address
}
}
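The type_alias block above roughly corresponds to this T-SQL statement:

CREATE TYPE dbo.email_address FROM varchar(100) NOT NULL;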

Table Types

Table types allow you to define a structured data type that represents a table structure. These are particularly useful for passing sets of data between stored procedures and functions. They can also be used as parameters in stored procedures or functions, allowing you to pass multiple rows of data with a single parameter.

For example, we have a type_table to describe the structure of an address. We can declare this table and later use it in a function:

type_table "address" {
schema = schema.dbo
column "street" {
type = varchar(255)
}
column "city" {
type = varchar(255)
}
column "state" {
type = varchar(2)
}
column "zip" {
type = type_alias.zip
}
index {
unique = true
columns = [column.ssn]
}
check "zip_check" {
expr = "len(zip) = 5"
}
}
function "insert_address" {
schema = schema.dbo
lang = SQL
arg "@address_table" {
type = type_table.address
readonly = true // The table type is readonly argument.
}
arg "@zip" {
type = type_alias.zip
}
return = int
as = <<-SQL
BEGIN
DECLARE @RowCount INT;
INSERT INTO address_table (street, city, state, zip)
SELECT street, city, state, zip
FROM @address_table;

SELECT @RowCount = @ROWCOUNT;

RETURN @RowCount;
END
SQL
}
type_alias "zip" {
schema = schema.dbo
type = varchar(5)
null = false
}

Read the documentation to learn how to use these types in Atlas.

Azure Active Directory (AAD) Authentication

Now when using SQL Server with Atlas, instead of providing your regular database URL, you can connect to your Azure instance with Azure Active Directory Authentication.

Use the fedauth parameter to specify the AAD authentication method. For more information, see the document on the underlying driver.

To connect to your Azure instance using AAD, the URL will look similar to:

azuresql://<instance>.database.windows.net?fedauth=ActiveDirectoryDefault&database=master

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 10 min read

Hi everyone,

We are excited to share our latest release with you! Here's what's new:

  • Pre-migration Checks: Before migrating your schema, you can now add SQL checks that will be verified to help avoid risky migrations.
  • Schema Docs: Atlas lets you manage your database schema as code. One of the things we love most about code is that, because of its formal structure, it's possible to automatically generate documentation from it. With this release, we're introducing a new feature that lets you generate code-grade documentation for your database schema.
  • SQL Server Trigger Support: Atlas now supports managing triggers in SQL Server.
  • ClickHouse Materialized View Support: Atlas now supports managing materialized views in ClickHouse.

Let's dive in.

Pre-migration Checks

Atlas now supports the concept of pre-migration checks, where each migration version can include a list of assertions (predicates) that must evaluate to true before the migration is applied.

For example, before dropping a table, we may want to verify that it is empty so that no data is lost, or we may check for the absence of duplicate values before adding a unique constraint to a table.

This is especially useful if we want to add our own specific logic to migration versions, and it helps to ensure that our database changes are safe.
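For instance, a check guarding the unique-constraint scenario might look like this (the emails table and its address column are hypothetical):

-- Must return true: assert there are no duplicate addresses before adding
-- a unique constraint on emails.address.
SELECT NOT EXISTS (
  SELECT address FROM emails GROUP BY address HAVING COUNT(*) > 1
);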

Cloud Directory

Pre-migration checks work for Cloud connected directories. Check out the introduction guide to get started with Atlas Cloud.

To add these checks, Atlas supports a text-based file archive to describe "migration plans". Unlike regular migration files, which mainly contain a list of DDL statements (with optional directives), Atlas txtar files (currently) support two file types: migration files and pre-execution check files.

The code below presents a simple example of a pre-migration check. The default checks file is named checks.sql, and the migration.sql file contains the actual DDLs to be executed on the database in case the assertions are passed.

20240201131900_drop_users.sql
-- atlas:txtar

-- checks.sql --
-- The assertion below must be evaluated to true. Hence, the "users" table must not contain any rows.
SELECT NOT EXISTS(SELECT * FROM users);

-- migration.sql --
-- The statement below will be executed only if the assertion above evaluates to true.
DROP TABLE users;

If the pre-execution checks pass, the migration will be applied, and Atlas will report the results.

atlas migrate apply --dir atlas://app --env prod

Check passed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM users);
-- ok (624.004µs)
-- migrating version 20240201131900
-> DROP TABLE users;
-- ok (5.412737ms)
-------------------------
-- 22.138088ms
-- 1 migration
-- 1 check
-- 1 sql statement

If the pre-execution checks fail, the migration will not be applied, and Atlas will exit with an error.

atlas migrate apply --dir atlas://app --env prod

Check failed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM internal_users);
-> SELECT NOT EXISTS(SELECT * FROM external_users);
-- ok (1.322842ms)
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM roles);
-> SELECT NOT EXISTS(SELECT * FROM user_roles);
2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false
-------------------------
-- 19.396779ms
-- 1 migration with errors
-- 2 checks ok, 2 failures
Error: 2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false

To learn more about how to use pre-migration checks, read the documentation here.

Schema Docs

One of the most surprising things we learned from working with teams on their Atlas journey is that many teams do not have a single source of truth for their database schema. As a result, it's impossible to maintain up-to-date documentation for the database schema, which is crucial for disseminating knowledge about the database across the team.

Atlas changes this by creating a workflow that begins with a single source of truth for the database schema - the desired state of the database, as defined in code. This is what enables Atlas to automatically plan migrations, detect drift (as we'll see below), and now, generate documentation.

How it works

Documentation is currently generated for the most recent version of your schema for migration directories that are pushed to Atlas Cloud. To generate docs for your schema, follow these steps:

  1. Make sure you have the most recent version of Atlas:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Login to Atlas Cloud using the CLI:

    atlas login

    If you do not already have a (free) Atlas Cloud account, follow the instructions to create one.

  3. Push your migrations to Atlas Cloud:

    atlas migrate push <dir name>

    Be sure to replace <dir name> with the name of the directory containing your migrations (e.g., app).

  4. Atlas will print a link to the overview page for your migration directory, e.g:

    https://gh.atlasgo.cloud/dirs/4294967296
  5. Click on "Doc" in the top tabs to view the documentation for your schema.

SQL Server Trigger Support

In version v0.17, we released trigger support for PostgreSQL, MySQL and SQLite. In this release, we have added support for SQL Server as well.

Triggers are a powerful feature of relational databases that allow you to run custom code when certain events occur on a table or a view. For example, you can use triggers to automatically update the amount of stock in your inventory when a new order is placed or to create an audit log of changes to a table. Using this event-based approach, you can implement complex business logic in your database, without having to write any additional code in your application.

Managing triggers as part of the software development lifecycle can be quite a challenge. Luckily, Atlas's database schema-as-code approach makes it easy to do!

BETA FEATURE

Triggers are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

Let's use Atlas to build a small chunk of a simple e-commerce application:

  1. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Make sure you are logged in to Atlas:

    atlas login
  3. Let's spin up a new SQL Server database using docker:

    docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 --name atlas-demo -d mcr.microsoft.com/mssql/server:latest
  4. Next, let's define and apply the base table for our application:

    schema.hcl
    schema "dbo" {
    }
    table "grades" {
    schema = schema.dbo
    column "student_id" {
    null = false
    type = bigint
    }
    column "course_id" {
    null = false
    type = bigint
    }
    column "grade" {
    null = false
    type = int
    }
    column "grade_status" {
    null = true
    type = varchar(10)
    }
    primary_key {
    columns = [column.student_id, column.course_id]
    }
    }

    The grades table represents a student's grade for a specific course. The column grade_status will remain null at first, and we will use a trigger to update it to indicate whether the grade is a pass or a fail.

    Apply this schema on our local SQL Server instance using the Atlas CLI:

    atlas schema apply \
    --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
    --to "file://schema.hcl" \
    --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
    --auto-approve

    This command will apply the schema defined in schema.hcl to the local SQL Server instance. Notice the --auto-approve flag, which instructs Atlas to automatically apply the schema without prompting for confirmation.

  5. Now, let's define the logic to assign a grade_status using a TRIGGER. Append this definition to schema.hcl:

    schema.hcl
      trigger "after_grade_insert" {
    on = table.grades
    after {
    insert = true
    }
    as = <<-SQL
    BEGIN
    SET NOCOUNT ON;

    UPDATE grades
    SET grade_status = CASE
    WHEN inserted.grade >= 70 THEN 'Pass'
    ELSE 'Fail'
    END
    FROM grades
    INNER JOIN inserted ON grades.student_id = inserted.student_id and grades.course_id = inserted.course_id;
    END
    SQL
    }

    We defined a TRIGGER called after_grade_insert. This trigger is executed after new rows are inserted into the grades table. The trigger executes the SQL statement, which updates the grade_status column to either 'Pass' or 'Fail' based on the grade.

    Apply the updated schema using the Atlas CLI:

    atlas schema apply \
    --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
    --to "file://schema.hcl" \
    --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
    --auto-approve

    Notice that Atlas automatically detects that we have added a new TRIGGER, and applies it to the database.

  6. Finally, let's test our application to see that it actually works. We can do this by populating our database with some students' grades. To do so, connect to the SQL Server container and open a sqlcmd session.

    docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995'

    Now that a sqlcmd session is open, we can populate the items:

    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 1, 87, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 2, 99, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (2, 2, 68, null);

    To exit the session, type Quit.

    Now, let's check the grades table to see that the grade_status column was updated correctly:

     docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995' -Q "SELECT * FROM grades;"

    You should see the following output:

     student_id   course_id   grade   grade_status
     ----------   ---------   -----   ------------
     1            1           87      Pass
     1            2           99      Pass
     2            2           68      Fail

     (3 rows affected)

    Amazing! Our trigger automatically updated the grade_status for each of the rows.

ClickHouse Materialized View Support

A materialized view is a table-like structure that holds the results of a query. Unlike a regular view, the results of a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data.

LOGIN REQUIRED

Materialized views are currently available to logged-in users only. To use this feature, run:

atlas login

Let's see an example of how to write a materialized view in HCL for a ClickHouse database:

materialized "mat_view" {
schema = schema.public
to = table.dest
as = "SELECT * FROM table.src"
depends_on = [table.src]
}

In the example above, the materialized view is created with TO [db.]table, meaning it takes the same structure as the table or view specified in the TO clause.

The engine and primary_key attributes are required if the TO clause is not specified. In this syntax, populate can be used to fill the materialized view with existing data upon creation:

materialized "mat_view" {
schema = schema.public
engine = MergeTree
column "id" {
type = UInt32
}
column "name" {
type = String
}
primary_key {
columns = [column.id]
}
as = "SELECT * FROM table.src"
populate = true
depends_on = [table.src]
}
INFO

Note that modifying the materialized view structure after the initial creation is not supported by Atlas currently.

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 6 min read
Rotem Tamir

Hi everyone,

Thanks for joining us today for another release announcement! We have a bunch of really exciting features to share with you today, so let's get started! Here's what we'll cover:

  • Drift Detection - A common source of database trouble is that the schema in your database doesn't match the schema in your code. This can happen for a variety of reasons, including manual changes to the database, or changes made by other tools. Today, we are happy to announce the availability of a new feature that lets you automatically detect these changes, and alerts you when they happen.
  • SQLAlchemy Support - SQLAlchemy is a popular Python ORM. Developers using SQLAlchemy can use Atlas to automatically plan schema migrations for them, based on the desired state of their schema instead of crafting them by hand.
  • VSCode ERDs - We've added a new feature to our VSCode extension that lets you visualize your database schema as an ERD diagram.
  • Composite Schemas - The newly added composite_schema data source lets you combine multiple schemas into one, which is useful for managing schemas that are loaded from multiple sources or to describe applications that span multiple database schemas.

Drift Detection

We believe that in an ideal world, schema migrations on production databases should be done in an automated way, preferably in your CI/CD pipelines, with developers not having root access. However, we know that this is often not the case. For this reason, it is also common to find databases whose schemas differ from the ones they are supposed to have. This phenomenon, called schema drift, can cause a lot of trouble for a team.

Atlas can now periodically check whether your deployed databases' schemas match their desired state. To function correctly, this feature relies on Atlas Cloud being able to communicate with your database. As it is uncommon for databases to be directly accessible from the internet, we have added the option to run Atlas Agents in your database's network to facilitate this communication. Agents register themselves against your Atlas Cloud account using credentials and continuously poll it for work.

PAID FEATURE

Drift Detection is currently only available in a paid subscription.

To learn more about how to use this feature, check out our Drift Detection Guide.

In addition, Atlas Agents enable you to use a lot more cool features, like

  • Cloud mediated deployments (coming soon)
  • Schema monitoring and auditing (coming soon)

SQLAlchemy Support

Goodbye, Alembic. Hello, Atlas.

SQLAlchemy is a popular ORM toolkit widely used in the Python community. SQLAlchemy allows users to describe their data model using its declarative-mapping feature. To actually create the underlying tables, users can use the Base.metadata.create_all method which may be sufficient during development where tables can be routinely dropped and recreated.

However, at some point, teams need more control and decide to employ the versioned migrations methodology, which is a more robust way to manage a database schema.

The native way to manage migrations with SQLAlchemy is to use the Alembic migration tool. Alembic can automatically generate migration scripts from the difference between the current state of the database and the desired state of the application.

A downside of this approach is that in order for it to work, Alembic must be able to connect to a pre-existing database holding the current version of the schema. In many production environments, databases should generally not be reachable from developer workstations, which means this comparison is normally done against a local copy of the database, which may have undergone changes that aren't reflected in the existing migrations.

In addition, Alembic auto-generation fails to detect many kinds of changes and cannot be relied upon to generate production-ready migration scripts without routine manual intervention.

Atlas, on the other hand, can automatically plan database schema migrations for SQLAlchemy without requiring a connection to such a database and can detect almost any kind of schema change. Atlas plans migrations by calculating the diff between the current state of the database, and its desired state.
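
To give a rough idea of the wiring, here is a minimal sketch of an atlas.hcl that loads the desired state from SQLAlchemy models via the atlas-provider-sqlalchemy program; the model path, dialect, dev database, and migration directory below are illustrative and will differ per project:

atlas.hcl
data "external_schema" "sqlalchemy" {
  # Assumption: atlas-provider-sqlalchemy is installed and the models live under ./models.
  program = [
    "atlas-provider-sqlalchemy",
    "--path", "./models",
    "--dialect", "postgresql"
  ]
}

env "sqlalchemy" {
  src = data.external_schema.sqlalchemy.url
  dev = "docker://postgres/15/dev"
  migration {
    dir = "file://migrations"
  }
}

With such a setup, running atlas migrate diff --env sqlalchemy plans a new migration from the models' desired state.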

To learn how to use Atlas with SQLAlchemy, check out our SQLAlchemy Guide.

Special thanks to No'am (Miko) Tamir (who also doubles as my younger brother) for his fantastic work building the prototype for this feature and to Ronen Lubin for making it production-ready.

VSCode ERDs

Starting with v0.4.2, our VSCode Extension can now visualize your database schema as an ERD diagram. To use this feature, simply open the command palette (Ctrl+Shift+P on Windows/Linux, Cmd+Shift+P on Mac) and select Atlas: View in ERD.

Composite Schemas

The composite_schema data source allows the composition of multiple Atlas schemas into a unified schema graph. This functionality is useful when a project's schemas are split across various sources such as HCL, SQL, or application ORMs. For example, each service might have its own schema.

Referring to the url returned by this data source allows any Atlas command, such as migrate diff, schema apply, or schema inspect, to read the entire project's schema as a single unit.

Usage example

By running atlas migrate diff with the given configuration, Atlas loads the inventory schema from the SQLAlchemy models, the graph schema from ent/schema, and the auth and internal schemas from HCL and SQL schemas defined in Atlas format. Then, the composite schema, which represents these four schemas combined, will be compared against the current state of the migration directory. If the two states differ, a new migration file will be created with the necessary SQL statements.

atlas.hcl
data "composite_schema" "project" {
schema "inventory" {
url = data.external_schema.sqlalchemy.url
}
schema "graph" {
url = "ent://ent/schema"
}
schema "auth" {
url = "file://path/to/schema.hcl"
}
schema "internal" {
url = "file://path/to/schema.sql"
}
}

env "dev" {
src = data.composite_schema.project.url
dev = "docker://postgres/15/dev"
migration {
dir = "file://migrations"
}
}

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 7 min read
Rotem Tamir

Hi everyone,

I hope you are enjoying the holiday season, because we are here today with the first Atlas release of 2024: v0.17. It's been only a bit over a week since our last release, but we have some exciting new features we couldn't wait to share with you:

  • Trigger Support - Atlas now supports managing triggers on MySQL, PostgreSQL, MariaDB and SQLite databases.
  • Improved ERDs - You can now visualize your schema's SQL views, as well as create filters to select the specific database objects you wish to see.

Without further ado, let's dive in!

Trigger Support

BETA FEATURE

Triggers are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

Triggers are a powerful feature of relational databases that allow you to run custom code when certain events occur on a table or a view. For example, you can use triggers to automatically update the amount of stock in your inventory when a new order is placed or to create an audit log of changes to a table. Using this event-based approach, you can implement complex business logic in your database, without having to write any additional code in your application.

Managing triggers as part of the software development lifecycle can be quite a challenge. Luckily, Atlas's database schema-as-code approach makes it easy to do!

Let's use Atlas to build a small chunk of a simple e-commerce application:

  1. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Make sure you are logged in to Atlas:

    atlas login
  3. Let's spin up a new PostgreSQL database using docker:

    docker run --name db -e POSTGRES_PASSWORD=pass -d -p 5432:5432 postgres:16
  4. Next, let's define and apply the base tables for our application:

    schema.hcl
     table "inventory" {
    schema = schema.public
    column "item_id" {
    null = false
    type = serial
    }
    column "item_name" {
    null = false
    type = character_varying(255)
    }
    column "quantity" {
    null = false
    type = integer
    }
    primary_key {
    columns = [column.item_id]
    }
    }
    table "orders" {
    schema = schema.public
    column "order_id" {
    null = false
    type = serial
    }
    column "item_id" {
    null = false
    type = integer
    }
    column "order_quantity" {
    null = false
    type = integer
    }
    primary_key {
    columns = [column.order_id]
    }
    foreign_key "orders_item_id_fkey" {
    columns = [column.item_id]
    ref_columns = [table.inventory.column.item_id]
    on_update = NO_ACTION
    on_delete = NO_ACTION
    }
    }

    This defines two tables: inventory and orders. The inventory table holds information about the items in our store, and the orders table holds information about orders placed by our customers. The orders table has a foreign key constraint to the inventory table, to ensure that we can't place an order for an item that doesn't exist in our inventory.

    Apply this schema on our local Postgres instance using the Atlas CLI:

    atlas schema apply \
    --dev-url 'docker://postgres/16?search_path=public' \
    --to file://schema.hcl \
    -u 'postgres://postgres:pass@:5432/postgres?search_path=public&sslmode=disable' \
    --auto-approve

    This command will apply the schema defined in schema.hcl to the local Postgres instance. Notice the --auto-approve flag, which instructs Atlas to automatically apply the schema without prompting for confirmation.

  5. Let's now populate our database with some inventory items. We can do this using the psql command that is installed inside the default PostgreSQL Docker image:

    docker exec -it db psql -U postgres -c "INSERT INTO inventory (item_name, quantity) VALUES ('Apple', 10);"
    docker exec -it db psql -U postgres -c "INSERT INTO inventory (item_name, quantity) VALUES ('Banana', 20);"
    docker exec -it db psql -U postgres -c "INSERT INTO inventory (item_name, quantity) VALUES ('Orange', 30);"
  6. Now, let's define the business logic for our store using a FUNCTION and a TRIGGER. Append these definitions to schema.hcl:

    schema.hcl
     function "update_inventory" {
    schema = schema.public
    lang = PLpgSQL
    return = trigger
    as = <<-SQL
    BEGIN
    UPDATE inventory
    SET quantity = quantity - NEW.order_quantity
    WHERE item_id = NEW.item_id;
    RETURN NEW;
    END;
    SQL
    }
    trigger "after_order_insert" {
    on = table.orders
    after {
    insert = true
    }
    foreach = ROW
    execute {
    function = function.update_inventory
    }
    }

    We start by defining a FUNCTION called update_inventory. This function is written in PL/pgSQL, the procedural language for PostgreSQL, and is declared to return a TRIGGER. The function updates the inventory table to reflect the new order, and then returns the NEW row, which is the row that was just inserted into the orders table.

    Next, we define a TRIGGER called after_order_insert. This trigger is executed after a new row is inserted into the orders table. The trigger executes the update_inventory function for each row that was inserted.

    Apply the updated schema using the Atlas CLI:

    atlas schema apply \
    --dev-url 'docker://postgres/16?search_path=public' \
    --to file://schema.hcl \
    -u 'postgres://postgres:pass@:5432/postgres?search_path=public&sslmode=disable' \
    --auto-approve

    Notice that Atlas automatically detects that we have added a new FUNCTION and a new TRIGGER, and applies them to the database.

  7. Finally, let's test our application to see that it actually works. We can do this by inserting a new row into the orders table:

    docker exec -it db psql -U postgres -c "INSERT INTO orders (item_id, order_quantity) VALUES (1, 5);"

    This statement creates a new order for 5 Apples.

    Now, let's check the inventory table to see that the order was processed correctly:

    docker exec -it db psql -U postgres -c "SELECT quantity FROM inventory WHERE item_name='Apple';"

    You should see the following output:

     quantity
    ---------
    5
    (1 row)

    Amazing! Our trigger automatically detected the creation of a new order of apples, and updated the inventory accordingly from 10 to 5.

Improved ERDs

One of the most frequently used capabilities in Atlas is schema visualization. Having a visual representation of your data model can be helpful as it allows for easier comprehension of complex data structures, and enables developers to better understand and collaborate on the data model of the application they are building.

Visualizing Database Views

erd-views

Until recently, the ERD showed the schema's tables and the relations between them. With the most recent release, the ERD now visualizes database views as well!

Within each view you can find its:

  • Columns - the view's columns, including their data types and nullability.
  • Create Statement - the SQL CREATE statement, based on your specific database type.
  • Dependencies - a list of the tables (or other views) it is connected to. Clicking on this will map edges to each connected object in the schema.

Recently (including in this release), we have added support for functions, stored procedures, and triggers, all of which are coming soon to the ERD!

To play with a schema that contains this feature, head over to the live demo.

ERD Filters

In cases where you have many database objects and prefer to focus on a specific set of tables and views, you can narrow down your selection by creating a filter. Filters can be saved for future use. This is great when working on a feature that affects only a specific part of the schema, since you can easily refer back to it as needed.

erd-filters

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 10 min read
Rotem Tamir

Hi everyone,

It's been a while since our last version announcement and today I'm happy to share with you v0.16, which includes some very exciting improvements for Atlas:

  • ClickHouse Beta Support - ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Today, we are happy to announce that ClickHouse is officially in Beta!
  • Hibernate Provider - Atlas now supports loading the desired state of your database directly from your Hibernate code. Hibernate developers can now join developers from the GORM, Sequelize, TypeORM and more communities who can now use Atlas to manage their database schema.
  • Baseline Schemas - In some cases, your migrations rely on certain database objects existing prior to your application schema, for example extensions or legacy tables. Atlas now supports defining a baseline schema which will be loaded before automatically planning and applying your migrations.
  • Proactive conflict detection - Teams that have connected their project to Atlas Cloud will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior.
  • Mermaid Support - Atlas now supports generating a Mermaid diagram of your database schema. This is a great way to visualize your database schema and share it with your team.
  • Review Policies - Users working with declarative migrations can now define "review policies" which can define thresholds for which kinds of changes require human review and which can be auto-applied.
  • Postgres Sequences - Another long-awaited feature: Atlas now supports managing sequences in PostgreSQL.

I know that's quite a list, so let's dive right in!

ClickHouse Support

ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Our team has been working hard to bring this feature to you and today we are happy to announce that ClickHouse is now available to use in Beta!

Here's what you need to do to get started:

  1. Log in to your Atlas Cloud account. If you don't have an account yet, you can sign up for free.

  2. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  3. Log in to your Atlas Cloud account from the CLI:

    atlas login
  4. Spin up a local ClickHouse instance:

    docker run -d --name clickhouse-sandbox -p 9000:9000 clickhouse/clickhouse-server:latest
  5. Verify that you are able to connect to this instance:

    atlas schema inspect -u 'clickhouse://localhost:9000'

    If everything is working correctly, you should see the following output:

     schema "default" {
    engine = Atomic
    }
  6. Create a new file named schema.hcl with the following content:

     schema "default" {
    engine = Atomic
    }

    table "users" {
    schema = schema.default
    engine = MergeTree
    column "id" {
    type = UInt32
    }
    column "name" {
    type = String
    }
    column "created" {
    type = DateTime
    }
    primary_key {
    columns = [column.id]
    }
    }
  7. Run the following command to apply the schema to your local ClickHouse instance:

     atlas schema apply -u 'clickhouse://localhost:9000' -f schema.hcl

    Atlas will prompt you to confirm the changes:

     -- Planned Changes:
    -- Create "users" table
    CREATE TABLE `default`.`users` (
    `id` UInt32,
    `name` String,
    `created` DateTime
    ) ENGINE = MergeTree
    PRIMARY KEY (`id`) SETTINGS index_granularity = 8192;

    Hit "Enter" to apply the changes.

  8. Amazing! Our schema has been applied to the database!

Hibernate Provider

Atlas now supports loading the desired state of your database directly from your Hibernate code. Packaged as both a Maven and Gradle plugin, the Hibernate provider allows you to seamlessly integrate Atlas into your existing Hibernate project.

Hibernate ships with an automatic schema management tool called hbm2ddl. Similarly to Atlas, this tool can inspect a target database and automatically migrate the schema to the desired one. However, the Hibernate team has been advising for years not to use this tool in production:

Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it’s much more flexible to manage the schema using incremental migration scripts.

This is where Atlas comes in. Atlas can read your Hibernate schema and plan database schema migrations for it.

To get started, refer to the blog post we published earlier this week.
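
As a rough sketch of what the wiring can look like in an Atlas project, the provider is typically consumed through an external_schema data source; the Gradle task name, properties file, dev database, and migration directory below are assumptions for illustration only, so follow the blog post and provider documentation for the exact setup:

atlas.hcl
data "external_schema" "hibernate" {
  # Assumption: the provider's Gradle plugin exposes a task that prints the schema DDL;
  # adjust the task and properties file to your project.
  program = [
    "./gradlew",
    "-q",
    "schema",
    "--properties", "schema-export.properties"
  ]
}

env "hibernate" {
  src = data.external_schema.hibernate.url
  dev = "docker://mysql/8/dev"
  migration {
    dir = "file://migrations"
  }
}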

Baseline Schemas

LOGIN REQUIRED

The docker block is available for logged-in users only. To use this feature, run:

atlas login

In some cases, there is a need to configure a baseline schema for the dev database so that every computation using it starts from this baseline. For example, users' schemas or migrations may rely on objects, extensions, or other schema resources that are not managed by the project.

To configure such a baseline, use the docker block with the relevant image and pass to it the script for creating the base schema for the project:

docker "postgres" "dev" {
image = "postgres:15"
schema = "public"
baseline = <<SQL
CREATE SCHEMA "auth";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" SCHEMA "auth";
CREATE TABLE "auth"."users" ("id" uuid NOT NULL DEFAULT auth.uuid_generate_v4(), PRIMARY KEY ("id"));
SQL
}

env "local" {
src = "file://schema.pg.hcl"
dev = docker.postgres.dev.url
}

For more details refer to the documentation.

Proactive conflict detection

Teams that have connected their project to Atlas Cloud (see setup) will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior. For example:

atlas migrate diff --env dev

? Your directory is outdated (2 migrations behind). Continue or Abort:
  ▸ Continue (Rebase later)
    Abort (Pull changes and re-run the command)

Additionally, the atlas migrate lint command helps enforce this requirement during the CI stage. Learn more on how to integrate Atlas into your GitHub Actions or GitLab CI pipelines.
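
As a minimal sketch of how such a check can be wired into a project configuration, the lint block lets you set a baseline so that CI only analyzes newly added files; the env name, dev database, and directory below are illustrative:

atlas.hcl
env "ci" {
  dev = "docker://mysql/8/dev"
  migration {
    dir = "file://migrations"
  }
  lint {
    # Analyze only the most recently added migration file against the rest of the directory.
    latest = 1
  }
}

With this in place, running atlas migrate lint --env ci in the pipeline reports diagnostics for newly added migration files.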

Mermaid Support

Atlas now supports generating a Mermaid diagram of your database schema. Let's demonstrate this feature using an example schema for a local SQLite database. First, we'll create a new file named sqlite.hcl with the following content:

sqlite.hcl
schema "default" {
}

table "users" {
schema = schema.default
column "id" {
type = int
}
column "name" {
type = text
}
column "email" {
type = text
}
primary_key {
columns = [column.id]
}
}

table "blog_posts" {
schema = schema.default
column "id" {
type = int
}
column "title" {
type = text
}
column "body" {
type = text
}
column "author_id" {
type = int
}
foreign_key "blog_author" {
columns = [column.author_id]
ref_columns = [table.users.column.id]
}
}

Run the following command to inspect the schema and generate the Mermaid code:

atlas schema inspect -u file://sqlite.hcl --dev-url 'sqlite://?mode=memory' --format "{{ mermaid . }}"

The output will look like this:

erDiagram
    users {
        int id PK
        text name
        text email
    }
    blog_posts {
        int id
        text title
        text body
        int author_id
    }
    blog_posts }o--o| users : blog_author

Next, copy this output and paste it into the Mermaid Live Editor.

The result should look like this:

Review Policies

Users working with declarative migrations can now define "review policies" which can define thresholds for which kinds of changes require human review and which can be auto-applied.

By default, when running atlas schema apply on a target database, if any changes to the target database are required, Atlas will prompt the user to confirm the changes. This is a safety measure to prevent accidental changes to the target database.

However, Atlas ships with an analysis engine that can detect the impact of different changes to the target database. For example, Atlas can detect irreversible destructive changes that will result in data loss, or data-dependent changes that may fail due to data integrity constraints.

With review policies, you can tell Atlas to first analyze the proposed changes and only prompt the user if the changes are above a certain risk threshold. For example, you can configure Atlas to only ask for review if any warnings are found and to automatically apply all changes that do not trigger any diagnostics:

atlas.hcl
lint {
  review = WARNING
}

You can see a live demonstration of this feature towards the end of our recent HashiCorp conference talk.

Postgres Sequences

BETA FEATURE

Sequences are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

The sequence block allows defining a sequence number generator. Supported by PostgreSQL.

Note that a sequence block is printed by Atlas on inspection, or may be manually defined in the schema, only if it represents a PostgreSQL sequence that is not implicitly created by the database for identity or serial columns.

# Simple sequence with default values.
sequence "s1" {
  schema = schema.public
}

# Sequence with custom configuration.
sequence "s2" {
  schema    = schema.public
  type      = smallint
  start     = 100
  increment = 2
  min_value = 100
  max_value = 1000
}

# Sequence that is owned by a column.
sequence "s3" {
  schema  = schema.public
  owner   = table.t2.column.id
  comment = "Sequence with column owner"
}

# The sequences created by this table are not printed on inspection.
table "users" {
  schema = schema.public
  column "id" {
    type = int
    identity {
      generated = ALWAYS
      start     = 10000
    }
  }
  column "serial" {
    type = serial
  }
  primary_key {
    columns = [column.id]
  }
}

table "t2" {
  schema = schema.public
  column "id" {
    type = int
  }
}

schema "public" {
  comment = "standard public schema"
}

Wait, there's more!

A few other notable features shipped in this release are:

  • Analyzers for detecting blocking enum changes on MySQL. Certain kinds of changes to enum columns on MySQL tables change the column type and require a table copy. During this process, the table is locked for write operations which can cause application downtime.

    Atlas now ships with analyzers that can detect such changes and warn the user before applying them. For more information see the documentation for analyzers MY111, MY112 and MY113.

  • The external data source - This data source allows executing an external program and using its output in the project.

    For example:

    atlas.hcl
    data "external" "dot_env" {
    program = [
    "npm",
    "run",
    "load-env.js"
    ]
    }

    locals {
    dot_env = jsondecode(data.external.dot_env)
    }

    env "local" {
    src = local.dot_env.URL
    dev = "docker://mysql/8/dev"
    }

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

· 5 min read
Tran Minh Giau

Introduction

Today we are very excited to announce the release of Atlas Terraform Provider v0.4.0. This release brings some exciting new features and improvements to the provider which we will describe in this post.

In addition, this release is the first to be published under our new partnership with HashiCorp as a Technology Partner. Atlas is sometimes described as a "Terraform for Databases", so we have high hopes that this partnership will open up many opportunities to better integrate database schema management into IaC workflows.

What's new

When people first hear about integrating schema management into declarative workflows, many raise the concern that because making changes to the database is a high-risk operation, they would not trust a tool to do it automatically.

This is a valid concern, and this release contains three new features that we believe will help to address it:

  • SQL plan printing
  • Versioned migrations support
  • Migration safety verification

SQL Plan Printing

In previous versions of the provider, we displayed the plan as a textual diff showing which resources are added, removed or modified. With this version, the provider will also print the SQL statements that will be executed as part of the plan.

For example, suppose we have the following schema:

schema "market" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
comment = "A schema comment"
}

table "users" {
schema = schema.market
column "id" {
type = int
}
column "name" {
type = varchar(255)
}
primary_key {
columns = [
column.id
]
}
}

And our Terraform module looks like this:

terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "0.4.0"
    }
  }
}

provider "atlas" {}

data "atlas_schema" "market" {
  src        = file("${path.module}/schema.hcl")
  dev_db_url = "mysql://root:pass@localhost:3307"
}

resource "atlas_schema" "market" {
  hcl        = data.atlas_schema.market.hcl
  url        = "mysql://root:pass@localhost:3306"
  dev_db_url = "mysql://root:pass@localhost:3307"
}

When we run terraform plan we will see the following output:

Plan: 1 to add, 0 to change, 0 to destroy.

│ Warning: Atlas Plan

│ with atlas_schema.market,
│ on main.tf line 17, in resource "atlas_schema" "market":
│ 17: resource "atlas_schema" "market" {

│ The following SQL statements will be executed:


│ -- add new schema named "market"
│ CREATE DATABASE `market`
│ -- create "users" table
│ CREATE TABLE `market`.`users` (`id` int NOT NULL, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`)) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci

Versioned migrations

Atlas supports two types of workflows: Declarative and Versioned. With declarative workflows, the plan to migrate the database is generated automatically at runtime. Versioned migrations provide teams with a more controlled workflow where changes are planned, checked in to source control, and reviewed ahead of time. Until today, the Terraform provider only supported the declarative workflow. This release adds support for versioned migrations as well.

Suppose we have the following migration directory of two files:

20221101163823_create_users.sql
CREATE TABLE `users` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `age` bigint(20) NOT NULL,
  `name` varchar(255) COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `age` (`age`)
);
atlas.sum
h1:OlaV3+7xXEWc1uG/Ed2zICttHaS6ydHZmzI7Hpf2Fss=
20221101163823_create_users.sql h1:mZirkpXBoLLm+M73EbHo07muxclifb70fhWQFfqxjD4=

We can use the Terraform Atlas provider to apply this migration directory to a database:

terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "0.4.0"
    }
  }
}

provider "atlas" {}

// The `atlas_migration` data source loads the current state of the given database
// with regard to the migration directory.
data "atlas_migration" "hello" {
  dir = "migrations?format=atlas"
  url = "mysql://root:pass@localhost:3306/hello"
}

// The `atlas_migration` resource applies the migration directory to the database.
resource "atlas_migration" "hello" {
  dir     = "migrations?format=atlas"
  version = data.atlas_migration.hello.latest # Use latest to run all migrations
  url     = data.atlas_migration.hello.url
  dev_url = "mysql://root:pass@localhost:3307/test"
}

Running terraform plan will show the following output:

data.atlas_migration.hello: Reading...
data.atlas_migration.hello: Read complete after 0s [id=migrations?format=atlas]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

  # atlas_migration.hello will be created
  + resource "atlas_migration" "hello" {
      + dev_url = (sensitive value)
      + dir     = "migrations?format=atlas"
      + id      = (known after apply)
      + status  = (known after apply)
      + url     = (sensitive value)
      + version = "20221101163823"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Linting

Atlas provides extensive support for linting database schemas. This release adds support for linting schemas as part of the Terraform plan, which means that running terraform plan will now surface any linting errors in your schema. This is especially useful when you are using the versioned migrations workflow, as you can catch problems before applying the changes.

Suppose we add the following migration:

20221101165036_change_unique.sql
ALTER TABLE users
DROP KEY age,
ADD CONSTRAINT NAME UNIQUE (`name`);

If we run terraform plan on the above schema, Terraform prints the following warning:


│ Warning: data dependent changes detected

│ with atlas_migration.hello,
│ on main.tf line 20, in resource "atlas_migration" "hello":
│ 20: resource "atlas_migration" "hello" {

│ File: 20221101165036_change_unique.sql

│ - MF101: Adding a unique index "NAME" on table "users" might fail in case column
│ "name" contains duplicate entries

Atlas detected that the migration may fail if the name column contains duplicate entries! This is a very useful warning that can help you avoid unexpected failed deployments. Atlas supports many more safety checks, which you can read about here.

Wrapping up

In this blog post, we discussed three new features added to the Terraform Atlas provider, designed to make managing your database schemas with Terraform safer and more predictable. We hope you enjoy this release!

Have questions? Feedback? Feel free to reach out on our Discord server.