Project Configuration
Project Files
Project files provide a convenient way to describe and interact with multiple
environments when working with Atlas. A project file is a file named atlas.hcl
that contains one or more env blocks. For example:
MySQL:

// Define an environment named "local"
env "local" {
  // Declare where the schema definition resides.
  // Also supported: ["file://multi.hcl", "file://schema.hcl"].
  src = "file://project/schema.hcl"

  // Define the URL of the database which is managed
  // in this environment.
  url = "mysql://user:pass@localhost:3306/schema"

  // Define the URL of the Dev Database for this environment
  // See: https://atlasgo.io/concepts/dev-database
  dev = "docker://mysql/8/dev"
}

env "dev" {
  // ... a different env
}
MariaDB:

// Define an environment named "local"
env "local" {
  // Declare where the schema definition resides.
  // Also supported: ["file://multi.hcl", "file://schema.hcl"].
  src = "file://project/schema.hcl"

  // Define the URL of the database which is managed
  // in this environment.
  url = "maria://user:pass@localhost:3306/schema"

  // Define the URL of the Dev Database for this environment
  // See: https://atlasgo.io/concepts/dev-database
  dev = "docker://maria/latest/dev"
}

env "dev" {
  // ... a different env
}
PostgreSQL:

// Define an environment named "local"
env "local" {
  // Declare where the schema definition resides.
  // Also supported: ["file://multi.hcl", "file://schema.hcl"].
  src = "file://project/schema.hcl"

  // Define the URL of the database which is managed
  // in this environment.
  url = "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"

  // Define the URL of the Dev Database for this environment
  // See: https://atlasgo.io/concepts/dev-database
  dev = "docker://postgres/15/dev?search_path=public"
}

env "dev" {
  // ... a different env
}
SQLite:

// Define an environment named "local"
env "local" {
  // Declare where the schema definition resides.
  // Also supported: ["file://multi.hcl", "file://schema.hcl"].
  src = "file://project/schema.hcl"

  // Define the URL of the database which is managed
  // in this environment.
  url = "sqlite://file.db?_fk=1"

  // Define the URL of the Dev Database for this environment
  // See: https://atlasgo.io/concepts/dev-database
  dev = "sqlite://file?mode=memory&_fk=1"
}

env "dev" {
  // ... a different env
}
Flags
Once the project configuration has been defined, you can interact with it using one of the following options:
- Env
- Custom File
- Global Config (without --env)
Env: To run the schema apply command using the prod configuration defined in an
atlas.hcl file located in your working directory:

atlas schema apply --env prod
Custom File: To run the schema apply command using the prod configuration defined in an
atlas.hcl file located at an arbitrary path:

atlas schema apply \
  -c file://path/to/atlas.hcl \
  --env prod
Global Config: Some commands accept global configuration blocks such as lint and diff
policies. If no env is defined, you can instruct Atlas to explicitly use the config
file with the -c (or --config) flag:

atlas migrate lint \
  -c file://path/to/atlas.hcl \
  --dir "file://path/to/migrations" \
  --dev-url "sqlite://file?mode=memory"
This runs the migrate lint command using the global lint configuration defined in
the config file, without selecting a specific environment.
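For reference, such a config file may contain nothing but global blocks; a minimal sketch:

lint {
  destructive {
    // Do not fail linting on destructive changes.
    error = false
  }
}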
Unnamed env blocks

It is possible to define an env block whose name is dynamically set during command
execution using the --env flag. This is useful when multiple environments share the
same configuration and the arguments are dynamically set during execution:
env {
  name = atlas.env
  url  = var.url
  format {
    migrate {
      apply = format(
        "{{ json . | json_merge %q }}",
        jsonencode({
          EnvName : atlas.env
        })
      )
    }
  }
}
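For example, assuming a variable named "url" is declared in the project file and the
default file://migrations directory is in use, any environment name supplied at runtime
computes the block for it (the URL value here is illustrative):

atlas migrate apply --env staging --var url="mysql://user:pass@staging:3306/app"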
Projects with Versioned Migrations
Environments may declare a migration
block to configure how versioned migrations
work in the specific environment:
env "local" {
  // ..
  migration {
    // URL where the migration directory resides.
    dir = "file://migrations"
    // An optional format of the migration directory:
    // atlas (default) | flyway | liquibase | goose | golang-migrate | dbmate
    format = atlas
  }
}
Once defined, migrate
commands can use this configuration, for example:
atlas migrate validate --env local
Will run the migrate validate command against the Dev Database defined in the
local environment.
Passing Input Values
Project files may pass input values to variables defined in Atlas HCL schemas. To do this,
define an hcl_schema
data source, pass it the input values, and then designate it as the
desired schema within the env
block:
atlas.hcl:

data "hcl_schema" "app" {
  path = "schema.hcl"
  vars = {
    // Variables are passed as input values to "schema.hcl".
    tenant = "ariga"
  }
}

env "local" {
  src = data.hcl_schema.app.url
  url = "sqlite://test?mode=memory&_fk=1"
}

schema.hcl:

// This variable is passed as an input value from "atlas.hcl".
variable "tenant" {
  type = string
}

schema "main" {
  name = var.tenant
}
Project Input Variables
Project files may also declare input variables that can be supplied to the CLI at runtime. For example:
variable "tenant" {
  type = string
}

data "hcl_schema" "app" {
  path = "schema.hcl"
  vars = {
    // Variables are passed as input values to "schema.hcl".
    tenant = var.tenant
  }
}

env "local" {
  src = data.hcl_schema.app.url
  url = "sqlite://test?mode=memory&_fk=1"
}
To set the value for this variable at runtime, use the --var
flag:
atlas schema apply --env local --var tenant=rotemtam
Note that when running Atlas commands within a project using the --env flag, all
input values supplied at the command line are passed only to the project file and
are not automatically propagated to child schema files. This creates an explicit
contract between the environment and the schema file.
Schema Arguments and Attributes
Project configuration files support different types of blocks.
Input Variables
Project files support defining input variables that can be injected through the CLI, read more here.
- type - The type constraint of a variable.
- default - Defines whether the variable is optional by setting its default value.
variable "tenants" {
  type = list(string)
}

variable "url" {
  type    = string
  default = "mysql://root:pass@localhost:3306/"
}

variable "cloud_token" {
  type    = string
  default = getenv("ATLAS_TOKEN")
}

env "local" {
  // Reference an input variable.
  url = var.url
}
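Values declared this way can still be overridden at runtime with the --var flag; for
example (the URL shown is illustrative):

atlas schema apply --env local --var url="mysql://root:pass@db.internal:3306/"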
Local Values
The locals
block allows defining a list of local variables that can be reused multiple times in the project.
locals {
  tenants  = ["tenant_1", "tenant_2"]
  base_url = "mysql://${var.user}:${var.pass}@${var.addr}"

  // Reference local values.
  db1_url = "${local.base_url}/db1"
  db2_url = "${local.base_url}/db2"
}
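Local values are referenced with the local. prefix. A brief sketch, assuming the user,
pass, and addr variables above are declared as input variables:

env "db1" {
  src = "file://schema.hcl"
  url = local.db1_url
}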
Data Sources
Data sources enable users to retrieve information stored in an external service or
database. The currently supported data sources are sql, runtimevar, hcl_schema,
external_schema, remote_dir, template_dir, aws_rds_token, and gcp_cloudsql_token,
each documented below.
Data sources are evaluated only if they are referenced by top-level blocks like
locals or variables, or by the selected environment, for instance,
atlas schema apply --env dev.
Data source: sql
The sql
data source allows executing SQL queries on a database and using the results in the project.
Arguments
- url - The URL of the target database.
- query - The query to execute.
- args - Optional arguments for any placeholder parameters in the query.

Attributes
- count - The number of returned rows.
- values - The returned values, e.g. list(string).
- value - The first value in the list, or nil.
data "sql" "tenants" {
  url   = var.url
  query = <<EOS
SELECT `schema_name`
  FROM `information_schema`.`schemata`
  WHERE `schema_name` LIKE ?
EOS
  args = [var.pattern]
}

env "prod" {
  // Reference a data source.
  for_each = toset(data.sql.tenants.values)
  url      = urlsetpath(var.url, each.value)
}
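Running a command against this environment computes one env instance per returned
schema name; for example, assuming url and pattern are declared as input variables
(the values are illustrative):

atlas schema apply --env prod \
  --var url="mysql://root:pass@localhost:3306/" \
  --var pattern="tenant_%"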
Data source: runtimevar

Arguments
- url - The URL that identifies the external variable to load.

Attributes
- The loaded variable is a string type with no attributes.

The data source supports the following providers: GCP Secret Manager, AWS Secrets
Manager, HTTP, and File.

GCP Secret Manager:
The data source uses Application Default Credentials by default; if you have
authenticated via gcloud auth application-default login, it will use those credentials.
data "runtimevar" "pass" {
  url = "gcpsecretmanager://projects/<project>/secrets/<secret>"
}

env "dev" {
  src = "schema.hcl"
  url = "mysql://root:${data.runtimevar.pass}@host:3306/database"
}
Usage example

# Authenticate with Application Default Credentials, then run Atlas:
gcloud auth application-default login
atlas schema apply --env dev

# Or point to a service account key file explicitly:
GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json" atlas schema apply --env dev
AWS Secrets Manager:
The data source provides two ways to work with AWS Secrets Manager:
- If the awssdk query parameter is not set or is set to v1, a default AWS Session
  will be created with the SharedConfigEnable option enabled; if you have
  authenticated with the AWS CLI, it will use those credentials.
- If the awssdk query parameter is set to v2, the data source will create an AWS
  Config based on the AWS SDK V2.
data "runtimevar" "pass" {
  url = "awssecretsmanager://<secret>?region=<region>"
}

env "dev" {
  src = "schema.hcl"
  url = "mysql://root:${data.runtimevar.pass}@host:3306/database"
}
Usage example

# Default credentials reside in ~/.aws/credentials.
atlas schema apply --env dev

# Or pass credentials explicitly via environment variables:
AWS_ACCESS_KEY_ID="ACCESS_ID" AWS_SECRET_ACCESS_KEY="SECRET_KEY" atlas schema apply --env dev
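To opt in to the AWS SDK v2 behavior described above, append the awssdk query
parameter to the URL, for example:

data "runtimevar" "pass" {
  url = "awssecretsmanager://<secret>?region=<region>&awssdk=v2"
}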
HTTP:

data "runtimevar" "pass" {
  url = "http://service.com/foo.txt"
}

env "dev" {
  src = "schema.hcl"
  url = "mysql://root:${data.runtimevar.pass}@host:3306/database"
}
File:

data "runtimevar" "pass" {
  url = "file:///path/to/config.txt"
}

env "dev" {
  src = "schema.hcl"
  url = "mysql://root:${data.runtimevar.pass}@host:3306/database"
}
Data source: hcl_schema
The hcl_schema
data source allows the loading of an Atlas HCL schema from a file or directory, with optional variables.
Arguments
- path - The path to the HCL file or directory.
- vars - A map of variables to pass to the HCL schema.

Attributes
- url - The URL of the loaded schema.
atlas.hcl:

variable "tenant" {
  type = string
}

data "hcl_schema" "app" {
  path = "schema.hcl"
  vars = {
    tenant = var.tenant
  }
}

env "local" {
  src = data.hcl_schema.app.url
  url = "sqlite://test?mode=memory&_fk=1"
}

schema.hcl:

// This variable is passed as an input value from "atlas.hcl".
variable "tenant" {
  type = string
}

schema "main" {
  name = var.tenant
}
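A possible invocation, supplying the tenant variable at runtime (the value is
illustrative):

atlas schema apply --env local --var tenant=acme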
Data source: external_schema
The external_schema
data source enables the import of an SQL schema from an external program into Atlas' desired state.
With this data source, users have the flexibility to represent the desired state of the database schema in any language.
Arguments
- program - The first element of the list is the program to run; the remaining
  elements are optional command-line arguments.

Attributes
- url - The URL of the loaded schema.
Usage example
By running atlas migrate diff
with the given configuration, the external program will be executed and its loaded state
will be compared against the current state of the migration directory. In case of a difference between the two states,
a new migration file will be created with the correct SQL statements.
data "external_schema" "graph" {
  program = [
    "npm",
    "run",
    "generate-schema"
  ]
}

env "local" {
  src = data.external_schema.graph.url
  dev = "docker://mysql/8/dev"
  migration {
    dir = "file://migrations"
  }
}
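A possible invocation, assuming the generate-schema script above is defined in the
project's package.json:

atlas migrate diff --env local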
Data source: remote_dir
The remote_dir
data source reads the state of a migration directory from Atlas Cloud. For
instructions on how to connect a migration directory to Atlas Cloud, please refer to the cloud documentation.
Arguments
- name - The name of the migration directory, as defined in Atlas Cloud.
- tag (optional) - The tag of the migration directory, such as a Git commit. If not
  specified, the latest tag (e.g., the master branch) will be used.

Attributes
- url - A URL to the loaded migration directory.
variable "database_url" {
  type    = string
  default = getenv("DATABASE_URL")
}

variable "cloud_token" {
  type    = string
  default = getenv("ATLAS_TOKEN")
}

atlas {
  cloud {
    token = var.cloud_token
  }
}

data "remote_dir" "migrations" {
  // The name of the migration directory in Atlas Cloud.
  // In this example, the directory is named "graph".
  name = "graph"
}

env {
  // Set environment name dynamically based on --env value.
  name = atlas.env
  url  = var.database_url
  migration {
    dir = data.remote_dir.migrations.url
  }
}
Usage example

# Pass the database URL with --url and the cloud token as an input variable:
atlas migrate apply \
  --url "<DATABASE_URL>" \
  -c file://path/to/atlas.hcl \
  --env prod \
  --var cloud_token="<ATLAS_TOKEN>"

# Or supply both through the environment variables read by getenv():
DATABASE_URL="<DATABASE_URL>" ATLAS_TOKEN="<ATLAS_TOKEN>" \
atlas migrate apply \
  -c file://path/to/atlas.hcl \
  --env prod
If the cloud block is activated with a valid token, Atlas logs migration runs in
your cloud account to facilitate the monitoring and troubleshooting of executed
migrations.
Data source: template_dir
The template_dir
data source renders a migration directory from a template directory. It does this by parsing the
entire directory as Go templates, executing top-level (template) files that
have the .sql
file extension, and generating an in-memory migration directory from them.
Arguments
- path - A path to the template directory.
- vars - A map of variables to pass to the template.

Attributes
- url - A URL to the generated migration directory.
Read-only templates:

variable "path" {
  type        = string
  description = "A path to the template directory"
}

data "template_dir" "migrations" {
  path = var.path
  vars = {
    Key1 = "value1"
    Key2 = "value2"
    // Pass the --env value as a template variable.
    Env = atlas.env
  }
}

env "dev" {
  url = var.url
  migration {
    dir = data.template_dir.migrations.url
  }
}
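For reference, a hypothetical template file inside the directory could reference the
variables above (the file name and table definition are illustrative):

-- 1_init.sql
CREATE TABLE `users` (
  `id` int NOT NULL,
  -- Rendered from the `Env` variable passed by atlas.hcl.
  `env` varchar(64) NOT NULL DEFAULT '{{ .Env }}'
);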
Variables shared between HCL and directory:

variable "schema_name" {
  type    = string
  default = "Database schema name injected to both migrations directory and HCL schema"
}

data "hcl_schema" "app" {
  path = "path/to/schema.hcl"
  vars = {
    schema_name = var.schema_name
  }
}

data "template_dir" "migrations" {
  path = "path/to/directory"
  vars = {
    schema_name = var.schema_name
  }
}

env "local" {
  src = data.hcl_schema.app.url
  dev = "sqlite://file?mode=memory&_fk=1"
  migration {
    dir = data.template_dir.migrations.url
  }
}
Data source: aws_rds_token
The aws_rds_token
data source generates a short-lived token for an AWS RDS database
using IAM Authentication.
To use this data source:
- Enable IAM Authentication for your database. For instructions on how to do this, see the AWS documentation.
- Create a database user and grant it permission to authenticate using IAM, see the AWS documentation for instructions.
- Create an IAM role with the
rds-db:connect
permission for the specific database and user. For instructions on how to do this, see the AWS documentation.
Arguments
region
- The AWS region of the database (Optional).endpoint
- The endpoint of the database (hostname:port).username
- The database user to authenticate as.
Attributes
- The loaded variable is a string type with no attributes. Note that the token
  contains special characters that need to be escaped when used in a URL. To escape
  the token, use the urlescape function.
Example

locals {
  user     = "iamuser"
  endpoint = "hostname-of-db.example9y7k.us-east-1.rds.amazonaws.com:5432"
}

data "aws_rds_token" "db" {
  region   = "us-east-1"
  endpoint = local.endpoint
  username = local.user
}

env "rds" {
  url = "postgres://${local.user}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres"
}
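Usage example, assuming the active AWS identity has an IAM role with the
rds-db:connect permission:

# Credentials are resolved from the default AWS credential chain.
atlas schema apply --env rds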
Data source: gcp_cloudsql_token
The gcp_cloudsql_token data source generates a short-lived token for a GCP CloudSQL
database using IAM Authentication.
To use this data source:
- Enable IAM Authentication for your database. For instructions on how to do this, see the GCP documentation.
- Create a database user and grant it permission to authenticate using IAM, see the GCP documentation for instructions.
Attributes
- The loaded variable is a string type with no attributes. Note that the token
  contains special characters that need to be escaped when used in a URL. To escape
  the token, use the urlescape function.
Example

locals {
  user     = "iamuser"
  endpoint = "34.143.100.1:3306"
}

data "gcp_cloudsql_token" "db" {}

env "rds" {
  url = "mysql://${local.user}:${urlescape(data.gcp_cloudsql_token.db)}@${local.endpoint}/?allowCleartextPasswords=1&tls=skip-verify&parseTime=true"
}
The allowCleartextPasswords
and tls
parameters are required for the MySQL driver to connect to CloudSQL. For PostgreSQL, use sslmode=require
to connect to the database.
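Usage example, mirroring the GCP authentication flow shown for the runtimevar data
source:

gcloud auth application-default login
atlas schema apply --env rds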
Environments
The env
block defines an environment block that can be selected by using the --env
flag.
Arguments
- for_each - A meta-argument that accepts a map or a set of strings and is used to
  compute an env instance for each set or map item. See the example below.
- url - The URL of the target database.
- dev - The URL of the Dev Database.
- schemas - A list of strings that defines the schemas Atlas manages.
- exclude - A list of glob patterns used to filter resources on inspection.
- migration - A block that defines the migration configuration of the env.
  - dir - The URL to the migration directory.
  - baseline - An optional version to start the migration history from. Read more here.
  - lock_timeout - An optional timeout to wait for a database lock to be released. Defaults to 10s.
  - revisions_schema - An optional name to control the schema that the revisions table resides in.
- format - A block that defines the formatting configuration of the env per command (previously named log).
  - migrate
    - apply - Set custom formatting for migrate apply.
    - lint - Set custom formatting for migrate lint.
    - status - Set custom formatting for migrate status.
  - schema
    - apply - Set custom formatting for schema apply.
    - diff - Set custom formatting for schema diff.
- lint - A block that defines the migration linting configuration of the env.
  - format - Override the --format flag by setting a custom logging format for migrate lint (previously named log).
  - latest - A number that configures the --latest option.
  - git.base - Run the analysis against the base Git branch.
  - git.dir - A path to the repository working directory.
- diff - A block that defines the schema diffing policy.
Multi Environment Example
Atlas adopts the for_each
meta-argument that Terraform uses
for env
blocks. Setting the for_each
argument will compute an env
block for each item in the provided value. Note
that for_each
accepts a map or a set of strings.
Versioned Migration:

env "prod" {
  for_each = toset(data.sql.tenants.values)
  url      = urlsetpath(var.url, each.value)
  migration {
    dir = "file://migrations"
  }
  format {
    migrate {
      apply = format(
        "{{ json . | json_merge %q }}",
        jsonencode({
          Tenant : each.value
        })
      )
    }
  }
}
Declarative Migration:

env "prod" {
  for_each = toset(data.sql.tenants.values)
  url      = urlsetpath(var.url, each.value)
  src      = "schema.hcl"
  format {
    schema {
      apply = format(
        "{{ json . | json_merge %q }}",
        jsonencode({
          Tenant : each.value
        })
      )
    }
  }
  // Inject custom variables into the schema.hcl defined below.
  tenant = each.value
}

variable "tenant" {
  type        = string
  description = "The schema we operate on"
}

schema "tenant" {
  name = var.tenant
}

table "users" {
  schema = schema.tenant
  // ...
}
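With either configuration, a single run fans out across all tenants computed by the
sql data source. For example, the versioned setup can be applied with the following,
assuming the url and pattern input variables from the sql example above are declared
(values are illustrative):

atlas migrate apply --env prod \
  --var url="mysql://root:pass@localhost:3306/" \
  --var pattern="tenant_%"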
Configure Migration Linting
Project files may declare lint
blocks to configure how migration linting runs in a specific environment or globally.
lint {
  destructive {
    // By default, destructive changes cause migration linting to error
    // on exit (code 1). Setting `error` to false disables this behavior.
    error = false
  }
  // Custom logging can be enabled using the `format` attribute (previously named `log`).
  format = <<EOS
{{- range $f := .Files }}
  {{- json $f }}
{{- end }}
EOS
}
env "local" {
  // Define a specific migration linting config for this environment.
  // This block inherits and overrides all attributes of the global config.
  lint {
    latest = 1
  }
}

env "ci" {
  lint {
    git {
      base = "master"
      // An optional attribute for setting the working
      // directory of the git command (-C flag).
      dir = "<path>"
    }
  }
}
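For example, running lint in CI with the configuration above (the directory and dev
URL are illustrative):

atlas migrate lint --env ci \
  --dir "file://migrations" \
  --dev-url "sqlite://file?mode=memory"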
Configure Diff Policy
Project files may define diff
blocks to configure how schema diffing runs in a specific environment or globally.
diff {
  skip {
    // By default, none of the changes are skipped.
    drop_schema = true
    drop_table  = true
  }
  concurrent_index {
    create = true
    drop   = true
  }
}

env "local" {
  // Define a specific schema diffing policy for this environment.
  diff {
    skip {
      drop_schema = true
    }
  }
}
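With the local policy above, for instance, a plan computed by the following command
will not drop schemas that exist in the database but are absent from the desired
state:

atlas schema apply --env local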