
Rotem Tamir · 5 min read

Hi everyone,

We are updating you on a pricing change we will be rolling out to Atlas Cloud on March 15th, 2024.

As you know, Atlas is an open-core project: its core is an Apache 2-licensed open-source project, and we are building it into a commercial, cloud-connected solution supported by our company, Ariga. As with any startup, our understanding of the product and the market is constantly evolving, and this pricing change is a reflection of that evolution.

Atlas Plans

Even with this change, we will keep providing the Atlas community with three options for consuming Atlas.

  • Free Plan (formerly "Community Plan") - for individuals and small teams that want to unlock the full potential of Atlas. This plan will remain free forever and provides full access to all the capabilities of the Atlas CLI, as well as enough Atlas Cloud quota to successfully manage a single project. Support is provided via public community support channels.

  • Business Plan (formerly "Team Plan") - for teams professionally using Atlas beyond a single project. This plan has the same features and capabilities as the Free Plan but allows teams to purchase additional quotas if required. In addition, teams subscribing to this plan will get access to priority email support and in-app support via Intercom.

  • Enterprise Plan - for larger organizations looking to solve schema management at scale. This plan includes a dedicated support channel, solution engineering, and other features required for adoption by enterprises.

Why Change

The main reason for this change is the feedback we received from many small teams that the previous $295/month minimum price tag for the Team Plan was prohibitive and that a more gradual, usage-based pricing model would help them adopt Atlas in their organizations.

New Pricing

We’ve tried to keep the new pricing model as simple as possible. We have learned from our investors, advisors, and customers that seat-based pricing is suboptimal, as it disincentivizes adoption of Atlas by people in lower-touch roles. As such, we have made the new pricing model usage-based. Let’s break down how this is going to work.

Projects. The first dimension of the new model is the number of projects that you store in the Atlas Cloud schema registry. Currently, this is equal to the number of migration directories that you push to Atlas Cloud with atlas migrate push, but soon we will also add support for schema push for declarative workflows.
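
For example, each migration directory you push to the registry counts as one project (myapp below is an illustrative name; the dev-url is the same one used elsewhere in our docs):

atlas migrate push myapp --dev-url "sqlite://file?mode=memory&_fk=1"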

Details:

  • Each project will cost $59 per month.
  • The Free Plan will include a single project free of charge.

Target Databases. Each project (migration directory) can be deployed to multiple target databases. These may be different environments (dev, staging, prod) or different tenants (for projects that manage separate databases per customer).

Details:

  • Each target database will cost $39 per month.
  • The Free Plan will include 2 target databases free of charge.
  • Whenever you purchase quota for an additional project, you will also get a bundled additional target database free of charge.
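
To illustrate how the two dimensions combine, here is a hypothetical example based on the quotas above: a Business Plan team with 2 projects deployed to 5 target databases. The free quota covers 1 project and 2 databases, and the additional project purchased bundles 1 more database, leaving 2 databases to pay for:

  • 1 additional project: $59
  • 1 bundled target database: $0
  • 2 additional target databases: 2 × $39 = $78
  • Total: $59 + $78 = $137 per month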

If pricing by target DB doesn't work for your particular use case, please reach out to us to discuss alternative solutions.

Additional Changes

  • Seats. The Free Plan will include 3 seats free of charge. Teams upgrading to the Business Plan will receive 30 seats (regardless of how many projects and databases they use). This limit is meant to let as many people as needed use Atlas Cloud features, while still imposing a ceiling beyond which we expect teams to consider the Enterprise Plan.

  • Data Retention. Atlas users generate plenty of data, which we store in our databases. To keep supporting free users sustainable over the long run, we are imposing a 30-day data retention limit on CI runs and deployment logs for Free Plan users. Business users get 90-day retention by default. If this becomes an issue for you, feel free to reach out to us and we will work something out.

  • Runs. Free Plan users can now report up to 100 CI Runs or Deployments (previously 500) per month in their cloud account. Business and Enterprise users can store an unlimited number of runs.

Thanking Existing Users

As a way of saying thanks to existing early users who have trusted us to be part of their engineering infrastructure, we have worked out a few options for you to continue using Atlas without interruption. We will be reaching out to admins of these accounts personally to share the details.

This doesn’t work for me

If these changes cause an issue for you or you would like to discuss your specific pricing needs, please let me know personally via Email, Discord, Intercom, or Homing Pigeon 🙂.

-- Rotem and Ariel

Rotem Tamir · 6 min read

Hi everyone!

It's been a few weeks since our last version announcement and today I'm happy to share with you v0.14, which includes some very exciting improvements for Atlas:

  • Checkpoints - as your migration directory grows, replaying it from scratch can become annoyingly slow. Checkpoints allow you to save the state of your database at a specific point in time and replay migrations from that point forward.
  • Push to the Cloud - you can now push your migration directory to Atlas Cloud directly from the CLI. Think of it like docker push for your database migrations.
  • JetBrains Editor Support - After launching our VSCode Extension a few months ago, our team has been hard at work to bring the same experience to JetBrains IDEs. Starting today, you can use Atlas directly from your favorite JetBrains IDEs (IntelliJ, PyCharm, GoLand, etc.) using the new Atlas plugin.

Let's dive right in!

Checkpoints

Suppose your project has been going on for a while, and you have a migration directory with 100 migrations. Whenever you need to install your application from scratch (such as during development or testing), you need to replay all migrations from start to finish to set up your database. Depending on your setup, this may take a few seconds or more. If you have a checkpoint, you can replay only the migrations that were added since the latest checkpoint, which can be much faster.

Here's a short example. Let's say we have a migration directory with 2 migration files, managing a SQLite database. The first one creates a table named t1:

migrations/20230830122359_start.sql
create table t1 ( c1 int );

And the second creates a table named t2 and adds a column named c2 to t1:

migrations/20230830122414_t2.sql
create table t2 ( c1 int, c2 int );

alter table t1 add column c2 int;

To create a checkpoint, we can run the following command:

atlas migrate checkpoint --dev-url "sqlite://file?mode=memory&_fk=1"

This will create a SQL file, which is our checkpoint:

migrations/20230830123813_checkpoint.sql
-- atlas:checkpoint

-- Create "t1" table
CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-- Create "t2" table
CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);

Notice two things:

  1. The atlas:checkpoint directive which indicates that this file is a checkpoint.
  2. The SQL statement to create the t1 table includes both the c1 and c2 columns and does not contain the alter table statement. This is because the checkpoint captures the state of the database at the time it was created, which can be thought of as the sum of all migrations applied up to that point.

Next, let's apply these migrations on a local SQLite database:

atlas migrate apply --url sqlite://local.db

Atlas prints:

Migrating to version 20230830123813 (1 migrations in total):

-- migrating version 20230830123813
-> CREATE TABLE `t1` (`c1` int NULL, `c2` int NULL);
-> CREATE TABLE `t2` (`c1` int NULL, `c2` int NULL);
-- ok (960.465µs)

-------------------------
-- 6.895124ms
-- 1 migrations
-- 2 sql statements

As expected, Atlas skipped all of the migrations up to the checkpoint and only applied the last one!
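
From here, the directory keeps growing as usual. As a quick sketch of the incremental behavior (the timestamp and table name below are hypothetical), suppose we add a third migration file:

migrations/20230830124500_add_t3.sql
create table t3 ( c1 int );

After recalculating the directory's checksum with atlas migrate hash, re-running atlas migrate apply --url sqlite://local.db will apply only this new version on top of the checkpointed state.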

Push to Cloud

As we demonstrated above, once we have a migration directory, we can apply it to a database. If your database is running locally this is easy enough, but building deployment pipelines to production databases is more involved. There are multiple ways to accomplish this, such as building custom Docker images, as most of the methods covered in our guides section show.

In this release, we simplified the process of pushing migration directories to Atlas Cloud by adding a new atlas migrate push command. You can think of it as docker push for your database migrations.

Migration Directory created with atlas migrate push

Continuing with our example from above, let's push our migration directory to Atlas Cloud.

To start, you'll need to log in to Atlas. If it's your first time, you'll be prompted to create both an account and a workspace.

atlas login

After logging in, let's name our new migration project pushdemo and run:

atlas migrate push pushdemo --dev-url "sqlite://file?mode=memory&_fk=1"

After our migration directory is pushed, Atlas prints a URL to the created directory, similar to the one shown in the image above.

Once your migration directory is pushed, you can use it to apply migrations to your database directly from the cloud, just as you would use docker run to run an image stored in a Docker registry.

To apply a migration directory directly from the cloud, run:

atlas migrate apply --dir atlas://pushdemo --url sqlite://local.db

Notice two flags that we used here:

  • --dir - specifies the URL of the migration directory. We used atlas://pushdemo to indicate that we want to use the migration directory named pushdemo that we pushed earlier. This directory is accessible to us because we used atlas login in a previous step.
  • --url - specifies the URL of the database we want to apply the migrations to. In this case, we used the same SQLite database that we used earlier.

JetBrains Editor Support

JetBrains makes some of the most popular IDEs for software developers, including IntelliJ, PyCharm, GoLand, and more. We are happy to announce that following our recent release of the VSCode Extension, we now have a plugin for JetBrains IDEs as well!

The plugin is built to make editing Atlas HCL files much easier by providing developers with syntax highlighting, code completion, and warnings. It supports both atlas.hcl project configuration files as well as schema definition files (.my.hcl, .pg.hcl, and .lt.hcl).

The plugin is available for download from the JetBrains Marketplace.

  1. To install the plugin, open your IDE and go to Preferences > Plugins > Marketplace and search for Atlas:

  2. Click on the Install button to install the plugin.

  3. Create a new file named schema.my.hcl (the .my.hcl suffix signifies to the plugin that this file is a MySQL schema; use .pg.hcl for Postgres or .lt.hcl for SQLite). A minimal example follows this list.

  4. Edit away!
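
If you haven't written Atlas HCL before, here is a minimal sketch of what such a schema file might look like (the schema and table names are illustrative):

schema.my.hcl
schema "app" {
  charset = "utf8mb4"
}

table "users" {
  schema = schema.app
  column "id" {
    type = int
  }
  primary_key {
    columns = [column.id]
  }
}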

Wrapping up

That's it! I hope you try out (and enjoy) all of these new features and find them useful. As always, we would love to hear your feedback and suggestions on our Discord server.

Ariel Mashraki · 5 min read

It has been two months since we announced the Community Preview Plan for Atlas Cloud, and today I am thrilled to announce the next batch of features that we are releasing to open-source and to Atlas Cloud.

In summary, version v0.12 includes a few major features that are explained in detail below:

  1. We have added support for importing and running migration linting on GitHub PRs for external migration formats, such as Flyway and golang-migrate.
  2. Atlas now supports reading your migration directory directly from your Atlas Cloud account. This eliminates the need for users to build their Docker images with the directory content and makes running schema migrations in production much easier.
  3. When the Atlas CLI is connected to Atlas Cloud, migration runs are recorded in the cloud account, making it easier to monitor and troubleshoot executed migrations.
  4. A new Slack integration is now available for Community Plan accounts. Organizations that connect their migration directories to the cloud can receive notifications to Slack channels when the schemas are updated or deployed, among other events.
  5. The CI report page has a new look and will be enhanced with additional features in the next version.

Remote Directory State

One of the most common complaints we received from our users is that setting up migration deployments for real-world environments is time-consuming. For each service, users are required to build a Docker image that includes the content of the migration directory, which ensures it is available when Atlas is executed. After this, the built image must be pushed to a registry, and finally, the deployment process needs to be configured to use this newly created image.

Not only does this process add complexity to the setup, but it is also repetitive for each migration directory and involves setting up a CI/CD pipeline for each service, adding another layer of complexity.

Atlas supports the concept of Data Sources, which enables users to retrieve information stored in an external service or database. In this release, we are introducing a new data source called remote_dir. This feature allows users to configure Atlas to read the content of the migration directory directly from their cloud account, thereby eliminating the need to build Docker images with the directory content.

Here is an example of how to configure the remote_dir data source:

atlas.hcl
variable "cloud_token" {
  type = string
}

atlas {
  cloud {
    token = var.cloud_token
  }
}

data "remote_dir" "migrations" {
  // The name of the migration directory in Atlas Cloud.
  // In this example, the directory is named "graph".
  name = "graph"
}

env {
  // Set environment name dynamically based on --env value.
  name = atlas.env
  migration {
    dir = data.remote_dir.migrations.url
  }
}

To run migrations using this configuration:

atlas migrate apply \
  --url "<DATABASE_URL>" \
  --config file://path/to/atlas.hcl \
  --env prod \
  --var cloud_token="<ATLAS_TOKEN>"

Visualizing Migration Runs

Schema migrations are an integral part of application deployments, yet the setup can vary between applications and teams. Some teams may prefer using init-containers, while others run migrations from their CD pipelines. There are also those who opt for Helm upgrade hooks or use our Kubernetes operator. The differences also apply to databases: some applications work with one database, while others manage multiple databases, as is often seen in multi-tenant applications.

However, across all these scenarios, there's a shared need for a single place to view and track the progress of executed schema migrations. This includes triggering alerts and providing the means to troubleshoot and manage recovery if problems arise.

Starting from version v0.12, if the cloud configuration is set with a valid token, Atlas will log migration runs in your cloud account. Here's a demonstration of how it looks in action:

We have several new features lined up for the Community Plan in the next release. If you're interested in them earlier, don't hesitate to reach out to me in our Discord community.

Slack Webhooks

In this release, we're making our Slack Webhooks integration available to all users, promoting better team collaboration and providing instant alerts when issues occur. This new feature allows different groups within the organization, such as the data engineering team, to receive notifications when the schema changes, or to ping the on-call engineer when a deployment fails.

If you're interested in enabling this feature for your project, please check out the documentation.


What's next?

There's a lot more coming in the following months. Our next releases will be focused on making other database objects such as views, triggers, and policies accessible to all Atlas users. We'll also continue to make more features from our commercial product available to both open-source and community preview users.

As always, we value community feedback and strive to be responsive to it. Please feel free to reach out and share your feedback with us on our Discord if you think something is missing or could be improved. Cheers!