Declarative Migrations with Atlas Terraform Provider
Declarative migrations provide a powerful approach to evolving your database schema. When using the Atlas Terraform Provider with declarative migrations, you define your desired schema state, and Atlas automatically plans and applies the necessary changes.
Getting Started
Before you begin, ensure you have:
- Installed Terraform
- Installed the Atlas CLI
- Configured Terraform with the Atlas Terraform Provider
- Docker installed and running
- Access to your target database
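You can quickly confirm that the CLI prerequisites are installed and reachable before continuing (each command simply prints version or daemon information):

```shell
# Verify each prerequisite is on your PATH and, for Docker, running.
terraform version   # Terraform CLI
atlas version       # Atlas CLI
docker info         # fails if the Docker daemon is not running
```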
Configuring Terraform for Declarative Migrations
To use declarative migrations with Terraform, you'll need to configure both a data source to define your schema and a resource to apply it:
terraform {
  required_providers {
    atlas = {
      source  = "ariga/atlas"
      version = "~> 0.9.7"
    }
  }
}

provider "atlas" {
  dev_url = "docker://mysql/8/dev"
}

// Load (and normalize) the desired schema from an SQL file.
data "atlas_schema" "shop" {
  src = file("${path.module}/schema.sql")
}

// Sync the state of the target database with the desired schema.
resource "atlas_schema" "shop" {
  url = "mysql://root:pass@localhost:3306/shop"
  hcl = data.atlas_schema.shop.hcl
}
Configuration Parameters Explained
src
: The source of your schema definition. Can be a file path (HCL or SQL) or an inline HCL schema definition

url
: The connection URL to your target database

hcl
: The normalized HCL representation of your schema (typically from a data source)

dev_url
: The URL to a dev database used for validation and planning
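As noted above, `src` need not point at a file; a small schema can also be written inline. A minimal sketch using a heredoc (the data source name `inline_example` and the `tags` table are illustrative only):

```hcl
data "atlas_schema" "inline_example" {
  src = <<-EOT
    schema "shop" {}
    table "tags" {
      schema = schema.shop
      column "name" {
        null = false
        type = varchar(64)
      }
    }
  EOT
}
```

For anything beyond a few tables, keeping the schema in a separate `.sql` or `.hcl` file is usually easier to review and diff.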
Defining Your Schema
Atlas supports multiple ways to define your schema. Choose the approach that best fits your workflow:
- SQL
- HCL
CREATE TABLE users (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) UNIQUE
);

CREATE TABLE orders (
  id INT PRIMARY KEY AUTO_INCREMENT,
  user_id INT NOT NULL,
  amount DECIMAL(10,2) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id)
);
schema "shop" {}

table "users" {
  schema = schema.shop
  column "id" {
    null           = false
    type           = int
    auto_increment = true
  }
  column "name" {
    null = false
    type = varchar(255)
  }
  column "email" {
    null = true
    type = varchar(255)
  }
  primary_key {
    columns = [column.id]
  }
  index "email" {
    unique  = true
    columns = [column.email]
  }
}

table "orders" {
  schema = schema.shop
  column "id" {
    null           = false
    type           = int
    auto_increment = true
  }
  column "user_id" {
    null = false
    type = int
  }
  column "amount" {
    null     = false
    type     = decimal(10,2)
    unsigned = false
  }
  column "created_at" {
    null    = true
    type    = timestamp
    default = sql("CURRENT_TIMESTAMP")
  }
  primary_key {
    columns = [column.id]
  }
  foreign_key "orders_ibfk_1" {
    columns     = [column.user_id]
    ref_columns = [table.users.column.id]
    on_update   = NO_ACTION
    on_delete   = NO_ACTION
  }
  index "user_id" {
    columns = [column.user_id]
  }
}
Applying Schema Changes
Once your schema is defined and your Terraform configuration is set up, you can apply your changes using:
terraform apply
The command will show a plan of the changes that will be made to the database:
data.atlas_schema.shop: Reading...
data.atlas_schema.shop: Read complete after 2s [id=lOvh5NwwEVXjjK+stqHckQ]
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# atlas_schema.shop will be created
+ resource "atlas_schema" "shop" {
+ hcl = <<-EOT
table "orders" {
schema = schema.shop
column "id" {
null = false
type = int
auto_increment = true
}
# ... more schema definition ...
EOT
+ id = (known after apply)
+ url = (sensitive value)
}
Plan: 1 to add, 0 to change, 0 to destroy.
╷
│ Warning: Atlas Plan
│
│ with atlas_schema.shop,
│ on main.tf line 20, in resource "atlas_schema" "shop":
│ 20: resource "atlas_schema" "shop" {
│
│ The following SQL statements will be executed:
│
│
│ CREATE TABLE `users` (
│ `id` int NOT NULL AUTO_INCREMENT,
│ `name` varchar(255) NOT NULL,
│ `email` varchar(255) NULL,
│ PRIMARY KEY (`id`),
│ UNIQUE INDEX `email` (`email`)
│ ) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
│ CREATE TABLE `orders` (
│ `id` int NOT NULL AUTO_INCREMENT,
│ `user_id` int NOT NULL,
│ `amount` decimal(10,2) NOT NULL,
│ `created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
│ PRIMARY KEY (`id`),
│ INDEX `user_id` (`user_id`),
│ CONSTRAINT `orders_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION
│ ) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
│
╵
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Type yes and hit enter to apply the changes:
atlas_schema.shop: Creating...
atlas_schema.shop: Creation complete after 1s [id=fb936b91-d37b-2157-9e0b-e4bb218ade1f]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Evolving Your Schema
As your application evolves, you'll need to update your schema to reflect new requirements. With the declarative approach, all you need to do is update your schema definition file:
- SQL
- HCL
CREATE TABLE users (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) UNIQUE,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP -- New column added
);

CREATE TABLE orders (
  id INT PRIMARY KEY AUTO_INCREMENT,
  user_id INT NOT NULL,
  amount DECIMAL(10,2) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id)
);
schema "shop" {}

table "users" {
  schema = schema.shop
  column "id" {
    null           = false
    type           = int
    auto_increment = true
  }
  column "name" {
    null = false
    type = varchar(255)
  }
  column "email" {
    null = true
    type = varchar(255)
  }
  column "created_at" { # New column added
    type    = timestamp
    default = sql("CURRENT_TIMESTAMP")
  }
  primary_key {
    columns = [column.id]
  }
  index "email" {
    unique  = true
    columns = [column.email]
  }
}

table "orders" {
  # ... unchanged ...
}
After updating your schema definition, run terraform apply again. Atlas will automatically determine what changes need to be made:
data.atlas_schema.shop: Reading...
data.atlas_schema.shop: Read complete after 2s [id=nFhve04m2QEQq1neCK6bYQ]
atlas_schema.shop: Refreshing state... [id=fb936b91-d37b-2157-9e0b-e4bb218ade1f]
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# atlas_schema.shop will be updated in-place
~ resource "atlas_schema" "shop" {
~ hcl = <<-EOT
table "users" {
schema = schema.shop
column "id" {
null = false
type = int
auto_increment = true
}
column "name" {
null = false
type = varchar(255)
}
column "email" {
null = true
type = varchar(255)
}
+ column "created_at" {
+ null = true
+ type = timestamp
+ default = sql("CURRENT_TIMESTAMP")
+ }
# ... more schema definition ...
Plan: 0 to add, 1 to change, 0 to destroy.
╷
│ Warning: Atlas Plan
│
│ with atlas_schema.shop,
│ on main.tf line 20, in resource "atlas_schema" "shop":
│ 20: resource "atlas_schema" "shop" {
│
│ The following SQL statements will be executed:
│
│
│ ALTER TABLE `users` ADD COLUMN `created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP;
│
╵
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Type yes and hit enter to apply the changes. Your database should now have the new column:
mysql -h localhost -P 3306 -u root -ppass shop -e "DESCRIBE users;"
+------------+--------------+------+-----+-------------------+-------------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+-------------------+-------------------+
| id | int | NO | PRI | NULL | auto_increment |
| name | varchar(255) | NO | | NULL | |
| email | varchar(255) | YES | UNI | NULL | |
| created_at | timestamp | YES | | CURRENT_TIMESTAMP | DEFAULT_GENERATED |
+------------+--------------+------+-----+-------------------+-------------------+
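If you prefer not to use the mysql client, the Atlas CLI can inspect the live database directly and print its schema as HCL (the connection URL below matches the example configuration):

```shell
# Inspect the current schema of the target database; output is HCL.
atlas schema inspect -u "mysql://root:pass@localhost:3306/shop"
```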
Advanced Configuration
Custom Diff Policies
You can customize how Atlas handles specific schema changes with diff policies. This gives you fine-grained control over which types of changes Atlas will or won't make to your database:
resource "atlas_schema" "shop" {
  url = "mysql://root:pass@localhost:3306/shop"
  hcl = data.atlas_schema.shop.hcl

  diff {
    # Skip destructive changes like dropping schemas or tables
    skip {
      drop_schema = true
      drop_table  = true
    }

    # Control how indexes are created and dropped (PostgreSQL)
    concurrent_index {
      create = true # Create indexes concurrently
      drop   = true # Drop indexes concurrently
    }
  }
}
You can also make these policies configurable using variables:
variable "allow_destructive" {
  type        = bool
  default     = false
  description = "Whether to allow destructive schema changes"
}

resource "atlas_schema" "shop" {
  url = "mysql://root:pass@localhost:3306/shop"
  hcl = data.atlas_schema.shop.hcl

  diff {
    skip {
      drop_schema = !var.allow_destructive
      drop_table  = !var.allow_destructive
    }
  }
}
Then, you can run terraform apply with the variable set to true to allow destructive changes:
terraform apply -var="allow_destructive=true"
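The same variable works in non-interactive contexts. A sketch of a hypothetical CI step (standard Terraform flags only; -auto-approve skips the confirmation prompt, so use it with care):

```shell
# Hypothetical CI step: destructive changes stay disabled by default.
export TF_VAR_allow_destructive=false
terraform apply -auto-approve
```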
Conclusion
The Atlas Terraform Provider's declarative workflow offers a powerful approach to managing database schemas as part of your infrastructure as code. By defining the desired state of your schema and letting Atlas handle the migration planning, you gain:
- Simplified schema management
- Integration with your existing Terraform workflows
- Automated, safe database migrations
- Comprehensive control over how changes are applied
For more information, visit the provider registry page.
Need More Help?
Join the Ariga Discord Server for support, early access to features, and to provide feedback.
Sign up for our newsletter to stay up to date about Atlas and the cloud platform, Atlas Cloud.