Pre & Post Execution Hooks
Pre-deployment scripts let you prepare the environment, such as taking a snapshot, before running atlas migrate apply, while post-deployment scripts help you validate, seed or update data, and clean up after the migration completes.
Atlas supports both through pre and post execution hooks, allowing you to run SQL statements or external scripts at each stage to coordinate complex rollouts. Common use cases include enforcing guardrails before a migration runs, and seeding lookup tables or refreshing reporting views after it completes.
Pre and post execution hooks are available to Atlas Pro users. To learn more about logged-in features, see the Features page. You can create a trial account by running the atlas login command:
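atlas login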
Overview
Pre and post execution hooks are configured inside the relevant environment in your atlas.hcl config file. Each hook targets the migrate_apply command and runs outside the migration transaction:
env "dev" {
pre "migrate_apply" {
exec {
sql = <<-SQL
CALL ensure_guardrails();
CALL validate_data_integrity();
SQL
}
}
post "migrate_apply" {
exec {
sql = <<-SQL
REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.latest_activity;
CALL sync_reporting_cache();
SQL
}
}
}
Atlas executes all pre "migrate_apply" hooks sequentially before applying any migration file. If all pre hooks succeed, the pending migrations are executed. Once the migration is complete, the post "migrate_apply" hooks run in order.
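For instance, when an environment declares more than one hook for the same stage, they run in declaration order. A minimal sketch, assuming multiple pre blocks are declared in the same environment (the procedure names are hypothetical):

env "dev" {
  pre "migrate_apply" {
    exec {
      # Runs first; with the default error mode, a failure here aborts the run.
      sql = "CALL ensure_guardrails();"
    }
  }
  pre "migrate_apply" {
    exec {
      # Runs second, only if the previous hook succeeded.
      sql = "CALL take_snapshot();"
    }
  }
}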
To configure custom logic for the transaction or connection lifecycle, such as setting statement_timeout or lock_timeout, refer to transaction hooks.
Skipping Hooks
You can skip a hook programmatically by adding a skip block. When condition evaluates to true, the hook is skipped and the optional message is printed to the CLI output:
env {
  name = atlas.env
  pre "migrate_apply" {
    skip {
      condition = atlas.env == "staging"
      message   = "Skipping staging warm-up"
    }
    exec {
      src = "file://scripts/pre-flight.sql"
    }
  }
}
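The condition can also come from an input variable, letting operators toggle a hook from the command line. A sketch, assuming a hypothetical skip_warmup variable:

variable "skip_warmup" {
  type    = bool
  default = false
}

env "dev" {
  pre "migrate_apply" {
    skip {
      # Skips the warm-up when the variable is set to true at runtime.
      condition = var.skip_warmup
      message   = "Warm-up skipped via skip_warmup variable"
    }
    exec {
      src = "file://scripts/pre-flight.sql"
    }
  }
}

Passing --var "skip_warmup=true" to atlas migrate apply would then skip the warm-up hook.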
Executing SQL or Scripts
Hooks execute statements via an exec block. There are two ways to supply the payload:

- sql: Inline SQL statement(s) to run directly.
- src: Path or URL to a file containing statements. Atlas supports any source registered with the file:// URL scheme.
variable "script_file" {
type = string
}
env "dev" {
pre "migrate_apply" {
exec {
src = var.script_file
}
}
}
Provide the variable at runtime, for example:
atlas migrate apply \
--dir "file://migrations" \
--url "sqlite://file?mode=memory" \
--config "file://atlas.hcl" \
--env "dev" \
--var "script_file=file://scripts/pre.sql"
Error Handling
Use the on_error attribute to control what happens when a hook statement fails. The supported values are:

- FAIL (default): stops executing the current hook and aborts the migration run.
- BREAK: stops executing the current hook, skips the remaining hooks, and continues the migration.
- CONTINUE: skips the failing hook and proceeds to the next one.
env "dev" {
pre "migrate_apply" {
exec {
src = "file://scripts/pre-flight.sql"
on_error = CONTINUE
}
}
}
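To illustrate BREAK, here is a sketch with two pre hooks (the script paths are hypothetical): if the first script fails, the second hook is skipped, but the pending migrations still run:

env "dev" {
  pre "migrate_apply" {
    exec {
      # A failure here skips the remaining pre hooks but not the migration.
      src      = "file://scripts/optional-stats.sql"
      on_error = BREAK
    }
  }
  pre "migrate_apply" {
    exec {
      # Skipped if the previous hook fails with BREAK.
      src = "file://scripts/warm-cache.sql"
    }
  }
}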
Seeding Reference Data
Post-deployment scripts are a great way to keep lookup tables (for example, configuration tables with static values) aligned across environments. In PostgreSQL, keep these statements in version-controlled SQL files and execute them through hooks:
env "prod" {
post "migrate_apply" {
exec {
src = "file://scripts/config_table.sql"
on_error = BREAK
}
}
}
The referenced script can use MERGE to upsert the canonical seed values while keeping the table tidy:
MERGE INTO config_table AS target
USING (VALUES
  ('max_login_attempts', '5', 'Maximum allowed failed login attempts'),
  ('session_timeout_minutes', '30', 'User session timeout duration'),
  ('password_min_length', '12', 'Minimum password length requirement')
) AS source(key, value, description)
ON target.key = source.key
WHEN MATCHED THEN
  UPDATE SET value = source.value, description = source.description
WHEN NOT MATCHED THEN
  INSERT (key, value, description) VALUES (source.key, source.value, source.description);
Each deployment keeps schema and reference data synchronized without relying on manual follow-up steps.