
Project Configuration

Config Files

Config files provide a convenient way to describe and interact with multiple environments when working with Atlas. A config file is a file named atlas.hcl that contains one or more env blocks. For example:

// Define an environment named "local"
env "local" {
  // Declare where the schema definition resides.
  // Also supported: ["file://multi.hcl", "file://schema.hcl"].
  src = "file://project/schema.hcl"

  // Define the URL of the database which is managed
  // in this environment.
  url = "mysql://user:pass@localhost:3306/schema"

  // Define the URL of the Dev Database for this environment
  // See: https://atlasgo.io/concepts/dev-database
  dev = "docker://mysql/8/dev"
}

env "dev" {
// ... a different env
}

Flags

Once the project configuration has been defined, you can interact with it from the CLI using the --env flag.

To run the schema apply command using the prod configuration defined in the atlas.hcl file located in your working directory:

atlas schema apply --env prod

This runs the schema apply command against the database defined for the prod environment.

Unlabeled env blocks

It is possible to define an env block whose name is set dynamically at command execution time using the --env flag. This is useful when multiple environments share the same configuration and their arguments are provided at execution time:

env {
  name = atlas.env
  url  = var.url
  format {
    migrate {
      apply = format(
        "{{ json . | json_merge %q }}",
        jsonencode({
          EnvName : atlas.env
        })
      )
    }
  }
}
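For example, assuming a url input variable is declared in the config (not shown above), the command below selects this unlabeled block and sets its name to staging; the connection URL value is illustrative:

atlas migrate apply --env staging --var url="mysql://user:pass@localhost:3306/staging"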

Projects with Versioned Migrations

Environments may declare a migration block to configure how versioned migrations work in the specific environment:

env "local" {
// ..
migration {
// URL where the migration directory resides.
dir = "file://migrations"
}
}

Once defined, migrate commands can use this configuration, for example:

atlas migrate validate --env local

This runs the migrate validate command against the Dev Database defined in the local environment.

Passing Input Values

Config files may pass input values to variables defined in Atlas HCL schemas. To do this, define an hcl_schema data source, pass it the input values, and then designate it as the desired schema within the env block:

data "hcl_schema" "app" {
path = "schema.hcl"
vars = {
// Variables are passed as input values to "schema.hcl".
tenant = "ariga"
}
}

env "local" {
src = data.hcl_schema.app.url
url = "sqlite://test?mode=memory&_fk=1"
}
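For reference, this is a minimal sketch of a schema.hcl file that accepts the tenant input value; the table and column are illustrative:

variable "tenant" {
  type        = string
  description = "The schema we operate on"
}

schema "tenant" {
  name = var.tenant
}

table "users" {
  schema = schema.tenant
  column "id" {
    type = int
  }
}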

Builtin Functions

file

The file function reads the content of a file and returns it as a string. The path may be relative to the project directory or absolute.

variable "cloud_token" {
type = string
default = file("/var/run/secrets/atlas_token")
}

fileset

The fileset function returns the list of files that match the given pattern. The pattern is relative to the project directory.

data "hcl_schema" "app" {
paths = fileset("schema/*.pg.hcl")
}

getenv

The getenv function returns the value of the environment variable named by the key. It returns an empty string if the variable is not set.

env "local" {
url = getenv("DATABASE_URL")
}
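The environment variable can also be provided inline when invoking Atlas; the URL below is illustrative:

DATABASE_URL="mysql://root:pass@localhost:3306/dev" atlas schema apply --env local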

Project Input Variables

The atlas.hcl file may also declare input variables that can be supplied to the CLI at runtime. For example:

atlas.hcl
variable "tenant" {
type = string
}

data "hcl_schema" "app" {
path = "schema.hcl"
vars = {
// Variables are passed as input values to "schema.hcl".
tenant = var.tenant
}
}

env "local" {
src = data.hcl_schema.app.url
url = "sqlite://test?mode=memory&_fk=1"
}

To set the value for this variable at runtime, use the --var flag:

atlas schema apply --env local --var tenant=rotemtam

Note that when running Atlas commands within a project using the --env flag, input values supplied on the command line are passed only to the config file and are not automatically propagated to child schema files. This creates an explicit contract between the environment and the schema file.

Supported Blocks

Atlas configuration files support various blocks and attributes. Below are the common examples; see the Atlas Config Schema for the full list.

Input Variables

Config files support defining input variables that can be injected through the CLI; read more here.

  • type - The type constraint of a variable.
  • default - Make the variable optional by setting its default value.
variable "tenants" {
type = list(string)
}

variable "url" {
type = string
default = "mysql://root:pass@localhost:3306/"
}

variable "cloud_token" {
type = string
default = getenv("ATLAS_TOKEN")
}

env "local" {
// Reference an input variable.
url = var.url
}

Local Values

The locals block allows defining a list of local variables that can be reused multiple times in the project.

locals {
  tenants  = ["tenant_1", "tenant_2"]
  base_url = "mysql://${var.user}:${var.pass}@${var.addr}"

  // Reference local values.
  db1_url = "${local.base_url}/db1"
  db2_url = "${local.base_url}/db2"
}
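As a minimal sketch, an env block can then reference these local values (the environment name is illustrative):

env "db1" {
  url = local.db1_url
  dev = "docker://mysql/8/dev"
}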

Atlas Block

The atlas block allows configuring your Atlas account. The supported attributes are:

  • org - Specifies the organization to log in to. If Atlas executes using atlas.hcl without logging in to the specified organization, the command will be aborted.
  • token - CI/CD pipelines can use the token attribute for Atlas authentication.
atlas.hcl
atlas {
  cloud {
    org = "acme"
  }
}
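For CI/CD pipelines, a token can be provided as well. A minimal sketch that reads the token from an environment variable:

atlas {
  cloud {
    token = getenv("ATLAS_TOKEN")
  }
}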
tip

Atlas Pro users are advised to set the org in atlas.hcl to ensure that any engineer interacting with Atlas in the project context is running in logged-in mode. This ensures Pro features are enabled and the correct migration is generated.

Data Sources

Data sources enable users to retrieve information stored in an external service or database. The currently supported data sources are described in the sections below.

note

Data sources are evaluated only if they are referenced by top-level blocks like locals or variables, or by the selected environment, for instance, atlas schema apply --env dev.

Data source: sql

The sql data source allows executing SQL queries on a database and using the results in the project.

Arguments
  • url - The URL of the target database.
  • query - Query to execute.
  • args - Optional arguments for any placeholder parameters in the query.
Attributes
  • count - The number of returned rows.
  • values - The returned values. e.g. list(string).
  • value - The first value in the list, or nil.
data "sql" "tenants" {
url = var.url
query = <<EOS
SELECT `schema_name`
FROM `information_schema`.`schemata`
WHERE `schema_name` LIKE ?
EOS
args = [var.pattern]
}

env "prod" {
// Reference a data source.
for_each = toset(data.sql.tenants.values)
url = urlsetpath(var.url, each.value)
}
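Assuming the config also declares the url and pattern input variables referenced above, the computed environments can then be applied with a command along these lines:

atlas schema apply --env prod \
  --var url="mysql://root:pass@localhost:3306/" \
  --var pattern="tenant_%"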

Data source: external

The external data source allows the execution of an external program and uses its output in the project.

Arguments
  • program - The first element of the list is the program to run. The remaining elements are optional command line arguments.
  • working_dir - The working directory to run the program from. Defaults to the current working directory.
Attributes
  • The command output is a string type with no attributes.

Usage example

atlas.hcl
data "external" "dot_env" {
program = [
"npm",
"run",
"load-env.js"
]
}

locals {
dot_env = jsondecode(data.external.dot_env)
}

env "local" {
src = local.dot_env.URL
dev = "docker://mysql/8/dev"
}

Data source: runtimevar

The runtimevar data source reads the value of a runtime variable from an external system (for example, a cloud provider's configuration or secrets service), based on the Go CDK runtimevar package.

Arguments
  • url - The URL that identifies the variable. See the CDK documentation for more information. Use timeout=X to control the operation's timeout; if not specified, the timeout defaults to 10s.
Attributes
  • The loaded variable is a string type with no attributes.

For GCP-backed variables, the data source uses Application Default Credentials by default; if you have authenticated via gcloud auth application-default login, it will use those credentials.

atlas.hcl
data "runtimevar" "db" {
url = "gcpruntimeconfig://projects/<project>/configs/<config-id>/variables/<variable>?decoder=string"
}

env "dev" {
src = "schema.hcl"
url = "mysql://root:pass@host:3306/${data.runtimevar.db}"
}

Usage example

# Option 1: authenticate using gcloud Application Default Credentials.
gcloud auth application-default login
atlas schema apply --env dev

# Option 2: authenticate using a service account key file.
GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json" atlas schema apply --env dev

Data source: hcl_schema

The hcl_schema data source allows the loading of an Atlas HCL schema from a file or directory, with optional variables.

Arguments
  • path - The path to the HCL file or directory (cannot be used with paths).
  • paths - List of paths to HCL files or directories (cannot be used with path).
  • vars - A map of variables to pass to the HCL schema.
Attributes
  • url - The URL of the loaded schema.
variable "tenant" {
type = string
}

data "hcl_schema" "app" {
path = "schema.hcl"
vars = {
tenant = var.tenant
}
}


env "local" {
src = data.hcl_schema.app.url
url = "sqlite://test?mode=memory&_fk=1"
}
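Because this configuration declares a tenant input variable, supply it at runtime; the value acme is illustrative:

atlas schema apply --env local --var tenant=acme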

Data source: external_schema

The external_schema data source enables the import of an SQL schema from an external program into Atlas' desired state. With this data source, users have the flexibility to represent the desired state of the database schema in any language.

Arguments
  • program - The first element of the list is the program to run. The remaining elements are optional command line arguments.
  • working_dir - The working directory to run the program from. Defaults to the current working directory.
Attributes
  • url - The URL of the loaded schema.

Usage example

By running atlas migrate diff with the given configuration, the external program will be executed and its loaded state will be compared against the current state of the migration directory. In case of a difference between the two states, a new migration file will be created with the necessary SQL statements.

atlas.hcl
data "external_schema" "graph" {
program = [
"npm",
"run",
"generate-schema"
]
}

env "local" {
src = data.external_schema.graph.url
dev = "docker://mysql/8/dev"
migration {
dir = "file://migrations"
}
}
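With this configuration in place, the comparison described above is triggered by:

atlas migrate diff --env local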

Data source: composite_schema

The composite_schema data source allows composing multiple Atlas schemas into a unified schema graph. This functionality is useful when a project's schemas are split across various sources such as HCL, SQL, or application ORMs. For example, each service may have its own database schema, or an ORM schema may extend or rely on other database schemas.

Referring to the url returned by this data source allows any Atlas command, such as migrate diff, schema apply, or schema inspect, to read all of the project's schemas as a single unit.

Arguments

schema - one or more blocks containing the URL to read the schema from.

Usage Details
Mapping to Database Schemas

The name of the schema block represents the database schema to be created in the composed graph. For example, the following schemas refer to the public and private schemas within a PostgreSQL database:

data "composite_schema" "project" {
schema "public" {
url = ...
}
schema "private" {
url = ...
}
}
Schema Dependencies

The order of the schema blocks defines the order in which Atlas will load the schemas to compose the entire database graph. This is useful in the case of dependencies between the schemas. For example, the following schemas refer to the inventory and auth schemas, where the auth schema depends on the inventory schema and therefore should be loaded after it:

data "composite_schema" "project" {
schema "inventory" {
url = ...
}
schema "auth" {
url = ...
}
}
Schema Composition

Defining multiple schema blocks with the same name enables extending the same database schema from multiple sources. For example, the following configuration shows how an ORM schema, which relies on database types that cannot be defined within the ORM itself, can load them separately from another schema source that supports it:

data "composite_schema" "project" {
schema "public" {
url = "file://types.pg.hcl"
}
schema "public" {
url = "ent://ent/schema"
}
}
Labeled vs. Unlabeled Schema Blocks

Note, if the schema block is labeled (e.g., schema "public"), the schema will be created if it does not exist, and the computation for loading the state from the URL will be done within the scope of this schema.

If the schema block is unlabeled (e.g., schema { ... }), no schema will be created, and the computation for loading the state from the URL will be done within the scope of the database. Read more about this in Schema vs. Database Scope doc.

Attributes
  • url - The URL of the composite schema.

Usage example

By running atlas migrate diff with the given configuration, Atlas loads the inventory schema from the SQLAlchemy schema, the graph schema from ent/schema, and the auth and internal schemas from HCL and SQL schemas defined in Atlas format. Then, the composite schema, which represents these four schemas combined, will be compared against the current state of the migration directory. In case of a difference between the two states, a new migration file will be created with the necessary SQL statements.

atlas.hcl
data "composite_schema" "project" {
schema "inventory" {
url = data.external_schema.sqlalchemy.url
}
schema "graph" {
url = "ent://ent/schema"
}
schema "auth" {
url = "file://path/to/schema.hcl"
}
schema "internal" {
url = "file://path/to/schema.sql"
}
}

env "dev" {
src = data.composite_schema.project.url
dev = "docker://postgres/15/dev"
migration {
dir = "file://migrations"
}
}
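The comparison described above is then triggered by:

atlas migrate diff --env dev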

Data source: remote_dir

The remote_dir data source reads the state of a migration directory from Atlas Cloud. For instructions on how to connect a migration directory to Atlas Cloud, please refer to the cloud documentation.

Arguments
  • name - The slug of the migration directory, as defined in Atlas Cloud.
  • tag (optional) - The tag of the migration directory, such as Git commit. If not specified, the latest tag (e.g., master branch) will be used.
Attributes
  • url - A URL to the loaded migration directory.
note

The remote_dir data source predates the atlas:// URL scheme. The example below is equivalent to executing Atlas with --dir "atlas://myapp".

atlas.hcl
variable "database_url" {
type = string
default = getenv("DATABASE_URL")
}

data "remote_dir" "migrations" {
// The slug of the migration directory in Atlas Cloud.
// In this example, the directory is named "myapp".
name = "myapp"
}

env {
// Set environment name dynamically based on --env value.
name = atlas.env
url = var.database_url
migration {
dir = data.remote_dir.migrations.url
}
}

Usage example

# Option 1: pass the database URL as a flag.
ATLAS_TOKEN="<ATLAS_TOKEN>" \
  atlas migrate apply \
  --url "<DATABASE_URL>" \
  -c file://path/to/atlas.hcl \
  --env prod

# Option 2: pass the database URL as an environment variable.
DATABASE_URL="<DATABASE_URL>" ATLAS_TOKEN="<ATLAS_TOKEN>" \
  atlas migrate apply \
  -c file://path/to/atlas.hcl \
  --env prod
Reporting Cloud Deployments

If the cloud block is configured with a valid token, Atlas logs migration runs in your cloud account to facilitate monitoring and troubleshooting of executed migrations. The following is a demonstration of how it appears in action:

Screenshot example

Data source: template_dir

The template_dir data source renders a migration directory from a template directory. It does this by parsing the entire directory as Go templates, executing top-level (template) files that have the .sql file extension, and generating an in-memory migration directory from them.

Arguments
  • path - A path to the template directory.
  • vars - A map of variables to pass to the template.
Attributes
  • url - A URL to the generated migration directory.
atlas.hcl
variable "path" {
type = string
description = "A path to the template directory"
}

data "template_dir" "migrations" {
path = var.path
vars = {
Key1 = "value1"
Key2 = "value2"
// Pass the --env value as a template variable.
Env = atlas.env
}
}

env "dev" {
url = var.url
migration {
dir = data.template_dir.migrations.url
}
}
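To illustrate, a template file inside the directory (the file name and contents are hypothetical) might reference the passed variables using Go template syntax:

-- 20240101120000_seed.sql (hypothetical template file)
INSERT INTO `settings` (`key`, `value`)
VALUES ('env', '{{ .Env }}'), ('key1', '{{ .Key1 }}');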

Data source: aws_rds_token

The aws_rds_token data source generates a short-lived token for an AWS RDS database using IAM Authentication.

To use this data source:

  1. Enable IAM Authentication for your database. For instructions on how to do this, see the AWS documentation.
  2. Create a database user and grant it permission to authenticate using IAM, see the AWS documentation for instructions.
  3. Create an IAM role with the rds-db:connect permission for the specific database and user. For instructions on how to do this, see the AWS documentation.
Arguments
  • region - The AWS region of the database (Optional).
  • endpoint - The endpoint of the database (hostname:port).
  • username - The database user to authenticate as.
  • profile - The AWS profile to use for authentication (Optional).
Attributes
  • The loaded variable is a string type with no attributes. Notice that the token contains special characters that need to be escaped when used in a URL. To escape the token, use the urlescape function.
Example
atlas.hcl
locals {
  user     = "iamuser"
  endpoint = "hostname-of-db.example9y7k.us-east-1.rds.amazonaws.com:5432"
}

data "aws_rds_token" "db" {
  region   = "us-east-1"
  endpoint = local.endpoint
  username = local.user
}

env "rds" {
  url = "postgres://${local.user}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres"
}
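Assuming AWS credentials are available through the default credential chain (for example, via the standard AWS_PROFILE and AWS_REGION environment variables or an instance role), the environment is applied as usual:

AWS_PROFILE="my-profile" atlas schema apply --env rds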

Data source: gcp_cloudsql_token

The gcp_cloudsql_token data source generates a short-lived token for a GCP CloudSQL database using IAM Authentication.

To use this data source:

  1. Enable IAM Authentication for your database. For instructions on how to do this, see the GCP documentation.
  2. Create a database user and grant it permission to authenticate using IAM, see the GCP documentation for instructions.
Attributes
  • The loaded variable is a string type with no attributes. Notice that the token contains special characters that need to be escaped when used in a URL. To escape the token, use the urlescape function.
Example
atlas.hcl
locals {
  user     = "iamuser"
  endpoint = "34.143.100.1:3306"
}

data "gcp_cloudsql_token" "db" {}

env "rds" {
  url = "mysql://${local.user}:${urlescape(data.gcp_cloudsql_token.db)}@${local.endpoint}/?allowCleartextPasswords=1&tls=skip-verify&parseTime=true"
}
note

The allowCleartextPasswords and tls parameters are required for the MySQL driver to connect to CloudSQL. For PostgreSQL, use sslmode=require to connect to the database.

Environments

The env block defines an environment that can be selected using the --env flag.

Arguments
  • for_each - A meta-argument that accepts a map or a set of strings and is used to compute an env instance for each set or map item. See the example below.

  • src - The URL of, or reference to, the desired schema of this environment. For example:

    • file://schema.hcl
    • file://schema.sql
    • file://relative/path/to/file.hcl
    • Directories are also accepted: file://schema/
    • Lists are accepted as well:
      env "local" {
      src = [
      "file://a.hcl",
      "file://b.hcl"
      ]
      }
    • As mentioned, references to data sources such as external_schema or composite_schema are also valid values for the src attribute.
  • url - The URL of the target database.

  • dev - The URL of the Dev Database.

  • schemas - A list of strings that defines the schemas Atlas manages.

  • exclude - A list of glob patterns used to filter resources during inspection.

  • migration - A block that defines the migration configuration of the env.

    • dir - The URL to the migration directory.
    • baseline - An optional version to start the migration history from. Read more here.
    • exec_order - Set the file execution order [LINEAR (default), LINEAR_SKIP, NON_LINEAR]. Read more here.
    • lock_timeout - An optional timeout to wait for a database lock to be released. Defaults to 10s.
    • revisions_schema - An optional name to control the schema that the revisions table resides in.
    • repo - The repository configuration for the migrations directory in the registry.
      • name - The repository name.
  • schema - The configuration for the desired schema.

    • src - The URL to the desired schema state.
    • repo - The repository configuration for the desired schema in the registry.
      • name - The repository name.
  • format - A block that defines the formatting configuration of the env per command (previously named log).

    • migrate
      • apply - Set custom formatting for migrate apply.
      • diff - Set custom formatting for migrate diff.
      • lint - Set custom formatting for migrate lint.
      • status - Set custom formatting for migrate status.
    • schema
      • inspect - Set custom formatting for schema inspect.
      • apply - Set custom formatting for schema apply.
      • diff - Set custom formatting for schema diff.
  • lint - A block that defines the migration linting configuration of the env.

    • format - Override the --format flag by setting a custom log format for migrate lint (previously named log).
    • latest - A number that configures the --latest option.
    • git.base - Run the analysis against the base Git branch.
    • git.dir - A path to the repository working directory.
    • review - The policy to use when deciding whether the user should be prompted to review and approve the changes. Currently works with declarative migrations and requires the user to log in. Supported options:
      • ALWAYS - Always prompt the user to review and approve the changes.
      • WARNING - Prompt if any diagnostics are found.
      • ERROR - Prompt if any severe diagnostics (errors) are found. By default, this happens only on destructive changes.
  • diff - A block that defines the schema diffing policy.

Multi Environment Example

Atlas adopts the for_each meta-argument that Terraform uses for env blocks. Setting the for_each argument will compute an env block for each item in the provided value. Note that for_each accepts a map or a set of strings.

atlas.hcl
env "prod" {
for_each = toset(data.sql.tenants.values)
url = urlsetpath(var.url, each.value)
migration {
dir = "file://migrations"
}
format {
migrate {
apply = format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
}

Configure Migration Linting

Config files may declare lint blocks to configure how migration linting runs in a specific environment or globally.

lint {
  destructive {
    // By default, destructive changes cause migration linting to error
    // on exit (code 1). Setting `error` to false disables this behavior.
    error = false
  }
  // Custom logging can be enabled using the `format` attribute (previously named `log`).
  format = <<EOS
{{- range $f := .Files }}
{{- json $f }}
{{- end }}
EOS
}

env "local" {
  // Define a specific migration linting config for this environment.
  // This block inherits and overrides all attributes of the global config.
  lint {
    latest = 1
  }
}

env "ci" {
  lint {
    git {
      base = "master"
      // An optional attribute for setting the working
      // directory of the git command (-C flag).
      dir = "<path>"
    }
  }
}
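Assuming the ci environment also defines its migration directory and Dev Database (not shown above), linting then runs with:

atlas migrate lint --env ci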

Configure Diff Policy

Config files may define diff blocks to configure how schema diffing runs in a specific environment or globally.

diff {
  skip {
    // By default, none of the changes are skipped.
    drop_schema = true
    drop_table  = true
  }
  concurrent_index {
    create = true
    drop   = true
  }
}

env "local" {
  // Define a specific schema diffing policy for this environment.
  diff {
    skip {
      drop_schema = true
    }
  }
}