2 posts tagged with "clickhouse"

Hi everyone,

We are excited to share our latest release with you! Here's what's new:

  • Pre-migration Checks: Before migrating your schema, you can now add SQL checks that will be verified to help avoid risky migrations.
  • Schema Docs: Atlas lets you manage your database schema as code. One of the things we love most about code is that, because of its formal structure, it's possible to automatically generate documentation from it. With this release, we're introducing a new feature that lets you generate code-grade documentation for your database schema.
  • SQL Server Trigger Support: Atlas now supports managing triggers in SQL Server.
  • ClickHouse Materialized View Support: Atlas now supports managing materialized views in ClickHouse.

Let's dive in.

Pre-migration Checks

Atlas now supports the concept of pre-migration checks, where each migration version can include a list of assertions (predicates) that must evaluate to true before the migration is applied.

For example, before dropping a table, we may want to verify that it is empty so that no data is lost, or before adding a unique constraint, we may want to check that the column contains no duplicate values.

This is especially useful if we want to add our own specific logic to migration versions, and it helps to ensure that our database changes are safe.

Cloud Directory

Pre-migration checks work for Cloud-connected directories. Check out the introduction guide to get started with Atlas Cloud.

To add these checks, Atlas supports a text-based file archive to describe "migration plans". Unlike regular migration files, which mainly contain a list of DDL statements (with optional directives), Atlas txtar files (currently) support two file types: migration files and pre-execution check files.

The code below presents a simple example of a pre-migration check. The default checks file is named checks.sql, and the migration.sql file contains the actual DDLs to be executed on the database if the assertions pass.

20240201131900_drop_users.sql
-- atlas:txtar

-- checks.sql --
-- The assertion below must evaluate to true. Hence, the "users" table must not contain any rows.
SELECT NOT EXISTS(SELECT * FROM users);

-- migration.sql --
-- The statement below will be executed only if the assertion above evaluates to true.
DROP TABLE users;
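
The checks file may also contain assertions for other scenarios. For instance, the duplicate-values case mentioned above could be guarded with a check along these lines (a minimal sketch; the users table and its email column are hypothetical, assuming the accompanying migration.sql adds a unique constraint on email):

-- checks.sql --
-- Assert that no duplicate emails exist before the unique constraint is added.
SELECT NOT EXISTS(
  SELECT email FROM users GROUP BY email HAVING COUNT(*) > 1
);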

If the pre-execution checks pass, the migration will be applied, and Atlas will report the results.

atlas migrate apply --dir atlas://app --env prod

Check passed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM users);
-- ok (624.004µs)
-- migrating version 20240201131900
-> DROP TABLE users;
-- ok (5.412737ms)
-------------------------
-- 22.138088ms
-- 1 migration
-- 1 check
-- 1 sql statement

If the pre-execution checks fail, the migration will not be applied, and Atlas will exit with an error.

atlas migrate apply --dir atlas://app --env prod

Check failed

Output
Migrating to version 20240201131900 from 20240201131800 (1 migrations in total):
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM internal_users);
-> SELECT NOT EXISTS(SELECT * FROM external_users);
-- ok (1.322842ms)
-- checks before migrating version 20240201131900
-> SELECT NOT EXISTS(SELECT * FROM roles);
-> SELECT NOT EXISTS(SELECT * FROM user_roles);
2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false
-------------------------
-- 19.396779ms
-- 1 migration with errors
-- 2 checks ok, 2 failures
Error: 2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false

To learn more about how to use pre-migration checks, read the documentation here.

Schema Docs

One of the most surprising things we learned from working with teams on their Atlas journey is that many teams do not have a single source of truth for their database schema. As a result, it's impossible to maintain up-to-date documentation for the database schema, which is crucial for disseminating knowledge about the database across the team.

Atlas changes this by creating a workflow that begins with a single source of truth for the database schema - the desired state of the database, as defined in code. This is what enables Atlas to automatically plan migrations, detect drift (as we'll see below), and now, generate documentation.

How it works

Documentation is currently generated for the most recent version of your schema for migration directories that are pushed to Atlas Cloud. To generate docs for your schema, follow these steps:

  1. Make sure you have the most recent version of Atlas:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Login to Atlas Cloud using the CLI:

    atlas login

    If you do not already have a (free) Atlas Cloud account, follow the instructions to create one.

  3. Push your migrations to Atlas Cloud:

    atlas migrate push <dir name>

    Be sure to replace <dir name> with the name of the directory containing your migrations (e.g., app).

  4. Atlas will print a link to the overview page for your migration directory, e.g.:

    https://gh.atlasgo.cloud/dirs/4294967296
  5. Click on "Doc" in the top tabs to view the documentation for your schema.

SQL Server Trigger Support

In version v0.17, we released trigger support for PostgreSQL, MySQL and SQLite. In this release, we have added support for SQL Server as well.

Triggers are a powerful feature of relational databases that allow you to run custom code when certain events occur on a table or a view. For example, you can use triggers to automatically update the amount of stock in your inventory when a new order is placed or to create an audit log of changes to a table. Using this event-based approach, you can implement complex business logic in your database, without having to write any additional code in your application.

Managing triggers as part of the software development lifecycle can be quite a challenge. Luckily, Atlas's database schema-as-code approach makes it easy to do!

BETA FEATURE

Triggers are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

Let's use Atlas to build a small piece of a simple application that tracks students' grades:

  1. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  2. Make sure you are logged in to Atlas:

    atlas login
  3. Let's spin up a new SQL Server database using docker:

    docker run --rm -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@ssw0rd0995' -p 1433:1433 --name atlas-demo -d mcr.microsoft.com/mssql/server:latest
  4. Next, let's define and apply the base table for our application:

    schema.hcl
    schema "dbo" {
    }
    table "grades" {
    schema = schema.dbo
    column "student_id" {
    null = false
    type = bigint
    }
    column "course_id" {
    null = false
    type = bigint
    }
    column "grade" {
    null = false
    type = int
    }
    column "grade_status" {
    null = true
    type = varchar(10)
    }
    primary_key {
    columns = [column.student_id, column.course_id]
    }
    }

    The grades table represents a student's grade for a specific course. The column grade_status will remain null at first, and we will use a trigger to set it based on whether the grade is a pass or a fail.

    Apply this schema on our local SQL Server instance using the Atlas CLI:

    atlas schema apply \
    --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
    --to "file://schema.hcl" \
    --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
    --auto-approve

    This command will apply the schema defined in schema.hcl to the local SQL Server instance. Notice the --auto-approve flag, which instructs Atlas to automatically apply the schema without prompting for confirmation.

  5. Now, let's define the logic to assign a grade_status using a TRIGGER. Append this definition to schema.hcl:

    schema.hcl
      trigger "after_grade_insert" {
    on = table.grades
    after {
    insert = true
    }
    as = <<-SQL
    BEGIN
    SET NOCOUNT ON;

    UPDATE grades
    SET grade_status = CASE
    WHEN inserted.grade >= 70 THEN 'Pass'
    ELSE 'Fail'
    END
    FROM grades
    INNER JOIN inserted ON grades.student_id = inserted.student_id and grades.course_id = inserted.course_id;
    END
    SQL
    }

    We defined a TRIGGER called after_grade_insert. This trigger is executed after new rows are inserted into the grades table. The trigger executes the SQL statement, which updates the grade_status column to either 'Pass' or 'Fail' based on the grade.

    Apply the updated schema using the Atlas CLI:

    atlas schema apply \
    --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \
    --to "file://schema.hcl" \
    --dev-url "docker://sqlserver/2022-latest/dev?mode=schema" \
    --auto-approve

    Notice that Atlas automatically detects that we have added a new TRIGGER, and applies it to the database.

  6. Finally, let's test our application to see that it actually works. We can do this by populating our database with some students' grades. To do so, connect to the SQL Server container and open a sqlcmd session.

    docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995'

    Now that a sqlcmd session is open, we can insert some grades:

    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 1, 87, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (1, 2, 99, null);
    INSERT INTO grades (student_id, course_id, grade, grade_status) VALUES (2, 2, 68, null);

    To exit the session, type Quit.

    Now, let's check the grades table to see that the grade_status column was updated correctly:

     docker exec -it atlas-demo /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd0995' -Q "SELECT * FROM grades;"

    You should see the following output:

    student_id   course_id   grade   grade_status
    ----------   ---------   -----   ------------
    1            1           87      Pass
    1            2           99      Pass
    2            2           68      Fail
    (3 rows affected)

    Amazing! Our trigger automatically updated the grade_status for each of the rows.

ClickHouse Materialized View Support

A materialized view is a table-like structure that holds the results of a query. Unlike a regular view, the results of a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data.

LOGIN REQUIRED

Materialized views are currently available to logged-in users only. To use this feature, run:

atlas login

Let's see an example of how to write a materialized view in HCL for a ClickHouse database:

materialized "mat_view" {
schema = schema.public
to = table.dest
as = "SELECT * FROM table.src"
depends_on = [table.src]
}

In the example above, when creating materialized views with TO [db.]table, the view will be created with the same structure as the table or view specified in the TO clause.
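
For reference, a definition like the one above corresponds roughly to the following ClickHouse DDL (a sketch based on the schema, src, and dest names used above; the exact statement Atlas generates may differ):

CREATE MATERIALIZED VIEW `public`.`mat_view` TO `public`.`dest`
AS SELECT * FROM `public`.`src`;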

The engine and primary_key attributes are required if the TO clause is not specified. In this syntax, the populate attribute can be used to fill the materialized view with existing data from the source table when it is first created:

materialized "mat_view" {
schema = schema.public
engine = MergeTree
column "id" {
type = UInt32
}
column "name" {
type = String
}
primary_key {
columns = [column.id]
}
as = "SELECT * FROM table.src"
populate = true
depends_on = [table.src]
}
INFO

Note that modifying the materialized view structure after the initial creation is not supported by Atlas currently.

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

Rotem Tamir

Hi everyone,

It's been a while since our last version announcement and today I'm happy to share with you v0.16, which includes some very exciting improvements for Atlas:

  • ClickHouse Beta Support - ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Today, we are happy to announce that ClickHouse is officially in Beta!
  • Hibernate Provider - Atlas now supports loading the desired state of your database directly from your Hibernate code. Hibernate developers can now join developers from the GORM, Sequelize, TypeORM, and other communities who use Atlas to manage their database schema.
  • Baseline Schemas - In some cases, your migrations rely on certain database objects existing prior to your application schema, for example extensions or legacy tables. Atlas now supports defining a baseline schema which will be loaded before automatically planning and applying your migrations.
  • Proactive conflict detection - Teams that have connected their project to Atlas Cloud will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior.
  • Mermaid Support - Atlas now supports generating a Mermaid diagram of your database schema. This is a great way to visualize your database schema and share it with your team.
  • Review Policies - Users working with declarative migrations can now define "review policies" which can define thresholds for which kinds of changes require human review and which can be auto-applied.
  • Postgres Sequences - Another long-awaited feature: Atlas now supports managing sequences in PostgreSQL.

I know that's quite a list, so let's dive right in!

ClickHouse Support

ClickHouse is a high-performance, columnar database optimized for analytics and real-time query processing. Support for ClickHouse in Atlas has been one of the top requested features by our community in the past year. Our team has been working hard to bring this feature to you and today we are happy to announce that ClickHouse is now available to use in Beta!

Here's what you need to do to get started:

  1. Log in to your Atlas Cloud account. If you don't have an account yet, you can sign up for free.

  2. Download the latest version of the Atlas CLI:

    To download and install the latest release of the Atlas CLI, simply run the following in your terminal:

    curl -sSf https://atlasgo.sh | sh
  3. Log in to your Atlas Cloud account from the CLI:

    atlas login
  4. Spin up a local ClickHouse instance:

    docker run --name clickhouse-sandbox -p 9000:9000 -d clickhouse/clickhouse-server:latest
  5. Verify that you are able to connect to this instance:

    atlas schema inspect -u 'clickhouse://localhost:9000'

    If everything is working correctly, you should see the following output:

     schema "default" {
    engine = Atomic
    }
  6. Create a new file named schema.hcl with the following content:

     schema "default" {
    engine = Atomic
    }

    table "users" {
    schema = schema.default
    engine = MergeTree
    column "id" {
    type = UInt32
    }
    column "name" {
    type = String
    }
    column "created" {
    type = DateTime
    }
    primary_key {
    columns = [column.id]
    }
    }
  7. Run the following command to apply the schema to your local ClickHouse instance:

     atlas schema apply -u 'clickhouse://localhost:9000' -f schema.hcl

    Atlas will prompt you to confirm the changes:

    -- Planned Changes:
    -- Create "users" table
    CREATE TABLE `default`.`users` (
      `id` UInt32,
      `name` String,
      `created` DateTime
    ) ENGINE = MergeTree
    PRIMARY KEY (`id`) SETTINGS index_granularity = 8192;

    Hit "Enter" to apply the changes.

  8. Amazing! Our schema has been applied to the database!

Hibernate Provider

Atlas now supports loading the desired state of your database directly from your Hibernate code. Packaged as both a Maven and a Gradle plugin, the Hibernate provider allows you to seamlessly integrate Atlas into your existing Hibernate project.

Hibernate ships with an automatic schema management tool called hbm2ddl. Similarly to Atlas, this tool can inspect a target database and automatically migrate the schema to the desired one. However, the Hibernate team has been advising for years not to use this tool in production:

Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it’s much more flexible to manage the schema using incremental migration scripts.

This is where Atlas comes in. Atlas can read Hibernate schema and plan database schema migrations.

To get started, refer to the blog post we published earlier this week.

Baseline Schemas

LOGIN REQUIRED

The docker block is available for logged-in users only. To use this feature, run:

atlas login

In some cases, there is a need to configure a baseline schema for the dev database so that every computation using the dev database starts from this baseline. For example, your schema or migrations may rely on objects, extensions, or other schema resources that are not managed by the project.

To configure such a baseline, use the docker block with the relevant image and pass to it the script for creating the base schema for the project:

docker "postgres" "dev" {
image = "postgres:15"
schema = "public"
baseline = <<SQL
CREATE SCHEMA "auth";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" SCHEMA "auth";
CREATE TABLE "auth"."users" ("id" uuid NOT NULL DEFAULT auth.uuid_generate_v4(), PRIMARY KEY ("id"));
SQL
}

env "local" {
src = "file://schema.pg.hcl"
dev = docker.postgres.dev.url
}

For more details, refer to the documentation.

Proactive conflict detection

Teams that have connected their project to Atlas Cloud (see setup) will get a prompt in the CLI if their migration directory is out of sync with the latest version in Atlas Cloud. This ensures that new migration files are added in a sequential order, preventing unexpected behavior. For example:

atlas migrate diff --env dev

? Your directory is outdated (2 migrations behind). Continue or Abort:
▸ Continue (Rebase later)
  Abort (Pull changes and re-run the command)

Additionally, the atlas migrate lint command helps enforce this requirement during the CI stage. Learn more on how to integrate Atlas into your GitHub Actions or GitLab CI pipelines.

Mermaid Support

Atlas now supports generating a Mermaid diagram of your database schema. Let's demonstrate this feature using an example schema for a local SQLite database. First, we'll create a new file named sqlite.hcl with the following content:

sqlite.hcl
schema "default" {
}

table "users" {
schema = schema.default
column "id" {
type = int
}
column "name" {
type = text
}
column "email" {
type = text
}
primary_key {
columns = [column.id]
}
}

table "blog_posts" {
schema = schema.default
column "id" {
type = int
}
column "title" {
type = text
}
column "body" {
type = text
}
column "author_id" {
type = int
}
foreign_key "blog_author" {
columns = [column.author_id]
ref_columns = [table.users.column.id]
}
}

Run the following command to inspect the schema and generate the Mermaid code:

atlas schema inspect -u file://sqlite.hcl --dev-url 'sqlite://?mode=memory' --format "{{ mermaid . }}"

The output will look like this:

erDiagram
    users {
        int id PK
        text name
        text email
    }
    blog_posts {
        int id
        text title
        text body
        int author_id
    }
    blog_posts }o--o| users : blog_author

Next, copy this output and paste it into the Mermaid Live Editor.

The result is an entity-relationship diagram showing the users and blog_posts tables and the foreign key between them.

Review Policies

Users working with declarative migrations can now define "review policies" which can define thresholds for which kinds of changes require human review and which can be auto-applied.

By default, when running atlas schema apply on a target database, if any changes to the target database are required, Atlas will prompt the user to confirm the changes. This is a safety measure to prevent accidental changes to the target database.

However, Atlas ships with an analysis engine that can detect the impact of different changes to the target database. For example, Atlas can detect irreversible destructive changes that will result in data loss, or data-dependent changes that may fail due to data integrity constraints.

With review policies, you can tell Atlas to first analyze the proposed changes and only prompt the user if the changes are above a certain risk threshold. For example, you can configure Atlas to only ask for review if any warnings are found and to automatically apply all changes that do not trigger any diagnostics:

atlas.hcl
lint {
  review = WARNING
}

You can see a live demonstration of this feature towards the end of our recent HashiCorp conference talk.

Postgres Sequences

BETA FEATURE

Sequences are currently in beta and available to logged-in users only. To use this feature, run:

atlas login

The sequence block allows defining a sequence number generator and is currently supported for PostgreSQL.

Note that a sequence block is printed by Atlas on inspection, or may be manually defined in the schema, only if it represents a PostgreSQL sequence that is not implicitly created by the database for identity or serial columns.

# Simple sequence with default values.
sequence "s1" {
  schema = schema.public
}

# Sequence with custom configuration.
sequence "s2" {
  schema    = schema.public
  type      = smallint
  start     = 100
  increment = 2
  min_value = 100
  max_value = 1000
}

# Sequence that is owned by a column.
sequence "s3" {
  schema  = schema.public
  owner   = table.t2.column.id
  comment = "Sequence with column owner"
}

# The sequences created by this table are not printed on inspection.
table "users" {
  schema = schema.public
  column "id" {
    type = int
    identity {
      generated = ALWAYS
      start     = 10000
    }
  }
  column "serial" {
    type = serial
  }
  primary_key {
    columns = [column.id]
  }
}

table "t2" {
  schema = schema.public
  column "id" {
    type = int
  }
}

schema "public" {
  comment = "standard public schema"
}
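
For reference, the s2 sequence above corresponds roughly to the following PostgreSQL statement (a sketch; the DDL Atlas actually plans may differ in detail):

CREATE SEQUENCE "public"."s2" AS smallint
  START WITH 100 INCREMENT BY 2 MINVALUE 100 MAXVALUE 1000;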

Wait, there's more!

A few other notable features shipped in this release are:

  • Analyzers for detecting blocking enum changes on MySQL. Certain kinds of changes to enum columns on MySQL tables change the column type and require a table copy. During this process, the table is locked for write operations, which can cause application downtime.

    Atlas now ships with analyzers that can detect such changes and warn the user before applying them; a short SQL sketch of such a change appears after this list. For more information, see the documentation for analyzers MY111, MY112 and MY113.

  • The external data source - The external data source allows the execution of an external program and the use of its output in the project.

    For example:

    atlas.hcl
    data "external" "dot_env" {
    program = [
    "npm",
    "run",
    "load-env.js"
    ]
    }

    locals {
    dot_env = jsondecode(data.external.dot_env)
    }

    env "local" {
    src = local.dot_env.URL
    dev = "docker://mysql/8/dev"
    }
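
Returning to the enum analyzers mentioned above, the sketch below illustrates the kind of change they guard against (the orders table and its status column are hypothetical):

-- Assuming an existing column defined as: status ENUM('pending', 'shipped').
-- Appending a new value at the end of the list can be done in place:
ALTER TABLE orders MODIFY COLUMN status ENUM('pending', 'shipped', 'refunded');
-- By contrast, inserting a value in the middle (or removing/reordering values)
-- changes the column type and forces a table copy, blocking writes while it runs:
ALTER TABLE orders MODIFY COLUMN status ENUM('pending', 'on_hold', 'shipped');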

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.