Export Database Schema to Code
The `atlas schema inspect` command can read an existing schema from a live database or any supported schema format, such as HCL, SQL, or an ORM schema, and generate an equivalent representation in HCL or SQL. This lets you capture the current state of a schema as code so it can be stored, modified, or used in migration workflows in a version-controlled environment.
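For example, a minimal way to try this is to inspect a running database and redirect the output to a file (the connection URL below is a placeholder; the default output format is HCL):

# Inspect a live database and save its schema as HCL.
atlas schema inspect -u 'postgres://user:pass@localhost:5432/mydb?sslmode=disable' > schema.hcl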
Split Your Schema into Multiple Files
By default, the entire schema is written to standard output as a single file. For larger projects, or when you want a structured layout, Atlas provides built-in functions like `split` and `write` to automatically organize the output into multiple files and directories. This makes it easy to inspect, edit, and maintain your schema as code in version control systems.
First, specify the format of the output (HCL or SQL) using the `--format` flag. Then, pipe it to the `split` and `write` functions to separate the schema objects into multiple files and write them to the current directory. The output files are named according to the object name and type.
# HCL.
atlas schema inspect -u '<url>' --format '{{ hcl . | split | write }}'
# SQL.
atlas schema inspect -u '<url>' --format '{{ sql . | split | write }}'
split
The `split` function splits schema dumps into multiple files and produces a txtar-formatted output. The result is then piped to the `write` function to write the output to files and directories.

The API of the `split` function depends on the input format used, either `hcl` or `sql`:
- SQL Format
- HCL Format
When used with the `sql` function, the `split` function splits the SQL schema dump into multiple files and subdirectories, with different layouts depending on the scope you inspect: either a database or a specific schema.
Split Database Scope
If you inspect a database scope with more than one schema, Atlas will generate a directory for each schema, and subdirectories for each object type defined in that schema. Database-level objects, such as PostgreSQL extensions, will be generated in their own directory alongside the `schemas` directory.

Each object will be defined in its own file within its type's directory, along with `atlas:import` directives pointing to its dependencies. A `main.sql` file will also be generated as an "entry point", containing import lines for all files generated by Atlas. This allows you to point to the entire schema just by referencing the `main.sql` file (e.g., `file://path/to/main.sql`).
A typical output might look like:
├── extensions
│   ├── hstore.sql
│   └── citext.sql
├── schemas
│   └── public
│       ├── public.sql
│       ├── tables
│       │   ├── profiles.sql
│       │   └── users.sql
│       ├── functions
│       └── types
└── main.sql
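As a rough illustration of how these pieces connect, the generated `main.sql` for the layout above might contain import lines similar to the following (a sketch based on the file names shown; actual paths and ordering depend on your schema):

-- atlas:import extensions/hstore.sql
-- atlas:import extensions/citext.sql
-- atlas:import schemas/public/public.sql
-- atlas:import schemas/public/tables/users.sql
-- atlas:import schemas/public/tables/profiles.sql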
Split Schema Scope
When inspecting a specific schema, Atlas will only generate subdirectories for each object type defined in that schema. Each object will be defined in its own file within its type's directory, along with `atlas:import` directives pointing to its dependencies.

In addition, a `main.sql` file will be generated as an "entry point", containing import lines for all files generated by Atlas. This allows you to easily point to the entire schema by referencing the `main.sql` file (e.g., `file://path/to/main.sql`).
Note that database-level objects such as schemas and extensions will not be generated. Additionally, `CREATE` statements will not be qualified with the schema name, so the generated files can be used with a different schema, as set by the connection URL.
A typical output might look like:
├── tables
│   ├── profiles.sql
│   └── users.sql
├── functions
├── types
└── main.sql
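To consume this layout from a project configuration, you can point an environment's desired state at the generated entry point. A minimal sketch, assuming the files were written to a `schema/` directory and a PostgreSQL dev database:

env "app" {
  # Desired schema state, loaded via the generated entry point and its imports.
  src = "file://schema/main.sql"
  # Dev database used by Atlas to normalize and validate the schema.
  dev = "docker://postgres/16/dev"
}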
The `split` function takes two optional arguments: `strategy` and `suffix`.
The `strategy` argument states how the output is split. The following strategies are supported:

- `object` (default) - Each schema gets its own directory, a subdirectory for each object type, and a file for each object.
- `schema` - Each schema gets its own file.
- `type` - Each object type gets its own file.
The `suffix` argument defines the suffix of the output files, `.hcl` by default. It is recommended to use a database-specific suffix for better editor plugin support, for example:
| Database   | File Suffix |
|------------|-------------|
| MySQL      | `.my.hcl`   |
| MariaDB    | `.ma.hcl`   |
| PostgreSQL | `.pg.hcl`   |
| SQLite     | `.lt.hcl`   |
| ClickHouse | `.ch.hcl`   |
| SQL Server | `.ms.hcl`   |
| Redshift   | `.rs.hcl`   |
| Oracle     | `.oc.hcl`   |
| Spanner    | `.sp.hcl`   |
| Snowflake  | `.sf.hcl`   |
To work with this directory structure, use the `hcl_schema` data source in your `atlas.hcl` project configuration:
data "hcl_schema" "app" {
paths = fileset("schema/**/*.hcl")
}
env "app" {
src = data.hcl_schema.app.url
dev = "docker://mysql/8/example"
}
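With this configuration in place, other Atlas commands can load the schema through the named environment, for example (the target URL is a placeholder):

# Plan and apply changes using the "app" environment defined in atlas.hcl.
atlas schema apply --env app -u '<url>'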
write
The `write` function takes one argument: `path`.

The `path` argument states the directory where the output files will be written. The path can be relative or absolute. If no path is specified, the output files will be written to the current directory.
The `write` function creates the directory if it does not exist.
Examples
- SQL Format
- HCL Format
Default split and write to the current directory:
atlas schema inspect -u '<url>' --format '{{ sql . | split | write }}'
Write to the `project/` directory:
atlas schema inspect -u '<url>' --format '{{ sql . | split | write "project/" }}'
Customize the indentation to `\t` and write to the `project/` directory:
atlas schema inspect -u '<url>' --format '{{ sql . "\t" | split | write "project/" }}'
Default split and write to the current directory:
atlas schema inspect -u '<url>' --format '{{ hcl . | split | write }}'
Split by object type and write to the `schema/` directory for PostgreSQL:
atlas schema inspect -u '<url>' --format '{{ hcl . | split "type" ".pg.hcl" | write "schema/" }}'
Split by schema and write to the `schema/` directory for MySQL:
atlas schema inspect -u '<url>' --format '{{ hcl . | split "schema" ".my.hcl" | write "schema/" }}'
Video Tutorial
To see this process in action, check out our video tutorial that covers the entire process using a PostgreSQL schema and SQL-formatted output.