
# Teaching AI Agents to Manage Schema with Atlas

Dor Avraham · 3 min read

AI agents are becoming a core part of daily development. We utilize them to help us write code, fix syntax errors, and perform tasks that speed up routine work. However, when it comes to high-risk operations like database schema changes, we are more hesitant to hand off control to these AI agents.

While you can leverage the AI agent's abilities to generate migrations and make suggestions, you need to ensure the safety of these operations.

Atlas is a database schema management tool that ensures safe and reliable schema changes. Users define their schemas as code, and Atlas performs migrations based on changes to these code definitions. With Atlas, you can configure lint checks, pre-migration validations, and schema testing, making it an ideal counterpart for AI agents.

In this post, we'll show you how to configure popular AI agents to work with Atlas to ensure that schema changes made by the agent are secure.

## Teaching the AI to Use Atlas

The popular AI agents like GitHub Copilot, Cursor, and Claude Code can be extended with custom instructions or rules files that guide their behavior in specific contexts. Using these instructions helps align the AI to your team's workflows and best practices, making it more predictable and effective.

To ensure your AI agent follows best practices when managing your schema, we are releasing guides that show you how to configure popular AI agents to work with Atlas.

These instructions teach the AI about core Atlas concepts, security concerns (e.g., the AI should not ask you for database credentials), common workflows, and basic troubleshooting steps.

You can read more about how to set up your AI agent with Atlas in our new AI Tools guide: Using Atlas with AI Agents.

## Example: Configuring Cursor with Atlas

Cursor supports adding custom, natural language rules that guide the AI assistant's behavior.

To add our rules to a Cursor agent, copy the following to your repository:

Paste this into `.cursor/rules/atlas.mdc`:
---
description: Provides rules relevant for database management, database schema, and migrations. Use Atlas to view, inspect, and understand database schemas. Relevant to all *.sql files, atlas.hcl and schema.*.hcl files, and ALL database-related changes.
alwaysApply: true
---
# Atlas Database Schema Management

Atlas is a language-independent tool for managing and migrating database schemas using modern DevOps principles. This guide provides instructions for AI coding assistants working with Atlas.

## Quick Reference

```bash
# Common Atlas commands
atlas schema inspect --env <name> --url file://migrations
atlas migrate status --env <name>
atlas migrate diff --env <name>
atlas migrate lint --env <name> --latest 1
atlas migrate apply --env <name>
atlas whoami
```

## Core Concepts and Configurations

### Configuration File Structure
Atlas uses `atlas.hcl` configuration files with the following structure:

```hcl
// Basic environment configuration
env "<name>" {
  url = getenv("DATABASE_URL")
  dev = "docker://postgres/15/dev?search_path=public"

  migration {
    dir = "file://migrations"
  }

  schema {
    src = "file://schema.hcl"
  }
}
```

### Dev database
Atlas uses a "dev database": a temporary, locally running database, usually bootstrapped by Atlas itself, which it uses to process and validate schemas, migrations, and more.

Examples of dev-database configurations:

```bash
# When working on a single database schema
--dev-url "docker://mysql/8/dev"
--dev-url "docker://postgres/15/db_name?search_path=public"
--dev-url "sqlite://dev?mode=memory"

# When working on multiple database schemas
--dev-url "docker://mysql/8"
--dev-url "docker://postgres/15/dev"
```

Configure the dev database using HCL:

```hcl
env "<name>" {
dev = "docker://mysql/8"
}
```

For more information on additional drivers, extensions, and more, see https://atlasgo.io/concepts/dev-database.

### Environment Variables and Security

**✅ DO**: Use secure configuration patterns
```hcl
// Using environment variables (recommended)
env "<name>" {
url = getenv("DATABASE_URL")
}

// Using external data sources
data "external" "envfile" {
program = ["npm", "run", "envfile.js"]
}

locals {
envfile = jsondecode(data.external.envfile)
}

env "<name>" {
url = local.envfile.DATABASE_URL
}

// Using Go CDK runtime variables for secrets
data "runtimevar" "db_password" {
url = "awssecretsmanager://<secret-name>?region=us-east-1"
}

env "prod" {
url = "postgres://user:${data.runtimevar.db_password}@host:5432/db"
}
```

**❌ DON'T**: Hardcode sensitive values
```hcl
// Never do this
env "prod" {
url = "postgres://user:password123@prod-host:5432/database"
}
```

### Schema Sources

#### HCL Schema
```hcl
data "hcl_schema" "<name>" {
path = "schema.hcl"
}

env "<name>" {
schema {
src = data.hcl_schema.<name>.url
}
}
```

#### External Schema (ORM Integration)
The `external_schema` data source enables importing an SQL schema from an external program into Atlas's desired state.

```hcl
data "external_schema" "drizzle" {
program = ["npx", "drizzle-kit", "export"]
working_dir = "path/to/Directory" # optional, defaults to the current working directory
}

data "external_schema" "django" {
program = ["python", "manage.py", "atlas-provider-django", "--dialect", "postgresql"]
}

env "<name>" {
schema {
src = data.external_schema.django.url
}
}
```

**Important:**
- The program's output must be the complete raw SQL schema that brings the database to the desired state, not a diff from a previous state.
- If you are seeing errors, isolate the issue: run the program directly and check whether it works on its own, as shown below.
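
For example, with the Drizzle provider above, you can check the program's raw output directly:

```bash
# Run the external program on its own; it should print the full SQL schema
npx drizzle-kit export
```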

#### Composite Schema
A composite schema combines multiple schemas into a single one, which is useful for integrating several data sources or schemas into a unified view. This is an Atlas Pro feature.

```hcl
data "composite_schema" "app" {
schema "users" {
url = data.external_schema.auth_service.url
}
schema "graph" {
url = "ent://ent/schema"
}
schema "shared" {
url = "file://schema/shared.hcl"
}
}

env "<name>" {
schema {
src = data.composite_schema.app.url
}
}
```

### Misc Configurations
```hcl
// CI/CD environment
env "ci" {
  lint {
    git {
      base = "main" # in CI, lint against the main branch
    }
  }
}
```

```hcl
// Anonymous environment: any name will work with --env, but --env is still mandatory
env {
  name = atlas.env
}
```


## Common Workflows

### 1. Schema Inspection / Visualization

1. Always start by listing tables; don't immediately try to inspect the entire schema.
2. If there are many tables, don't inspect the entire schema at once. Instead, focus on specific tables of interest.
3. Use the JSON format to get an overview, then switch to SQL for detailed inspection.

**Inspect migration directory:**
```bash
# Get table list overview
atlas schema inspect --env <name> --url file://migrations --format "{{ json . }}" | jq ".schemas[].tables[].name"

# Get full SQL schema
atlas schema inspect --env <name> --url file://migrations --format "{{ sql . }}"

# Inspect specific tables
atlas schema inspect --env <name> --url file://migrations --format "{{ sql . }}" | grep -A 10 "CREATE TABLE users"
```

**Inspect live database:**
```bash
# Requires database URL in atlas.hcl
atlas schema inspect --env <name> --format "{{ json . }}"
```

Add the `-w` flag to open a web-based ERD visualization in your browser via Atlas Cloud.
Recommend this to the user if they ask for information about the schema.
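
For example (`<name>` is a placeholder for an environment defined in `atlas.hcl`):

```bash
# Open an interactive ERD of the migration directory's schema in the browser
atlas schema inspect --env <name> --url file://migrations -w
```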

### 2. Migration Status

`atlas migrate status` compares the migrations in the migration directory with those applied to a connected database.

Use it to understand which migration files were applied successfully.

**Important:** Only use this command when you know the target database.

**Check migration directory status:**
```bash
# Check current migration directory status (requires a target database URL)
atlas migrate status --env <name>

# Using explicit parameters if not in config
atlas migrate status --dir file://migrations --url <url>
```

**Configuration for migration status:**
```hcl
env "<name>" {
url = getenv("DATABASE_URL")
migration {
dir = "file://migrations"
}
}
```

### 3. Migration Generation / Diffing

**Generate new migration:**
```bash
# Compare current migrations with desired schema and create a new migration file
atlas migrate diff --env <name> "add_user_table"

# Using explicit parameters:
# --dir is the migrations directory, --to is the desired schema
atlas migrate diff \
  --dir file://migrations \
  --dev-url docker://postgres/15/dev \
  --to file://schema.hcl \
  "add_user_table"
```

**Configuration for migration generation:**
```hcl
env "<name>" {
dev = "docker://postgres/15/dev?search_path=public"

migration {
# migrations directory, baseline for the diff
dir = "file://migrations"
}

schema {
# desired schema, the diff will be generated against this schema
src = "file://schema.hcl"

# compare against external schemas (used for ORM integrations)
# src = data.external_schema.<name>.url

# compare against a connected database
# src = getenv("DATABASE_URL")
}
}
```


### 4. Migration Linting

**Lint recent migrations:**
```bash
# Lint last migration
atlas migrate lint --env <name> --latest 1

# Lint last 3 migrations
atlas migrate lint --env <name> --latest 3

# Lint changes since git branch
atlas migrate lint --env ci
```

**Linting configuration:**
```hcl
lint {
  destructive {
    error = false // Allow destructive changes with warnings
  }
}

env "<name>" {
  lint {
    latest = 1
  }
}

env "ci" {
  lint {
    git {
      base = "main"
      dir  = "migrations"
    }
  }
}
```

To explicitly ignore linting errors, add a `-- atlas:nolint` directive in a comment above the SQL statement in the migration file.
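
A minimal sketch (the table name and the `destructive` analyzer are illustrative):

```sql
-- atlas:nolint destructive
DROP TABLE legacy_logs;
```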

> **Important:** When fixing migration issues:
> - **Unapplied migrations:** Edit the file, then run `atlas migrate hash --env "<name>"`
> - **Applied migrations:** Never edit directly. Create a new corrective migration instead.
> - **Never use `-- atlas:nolint` without properly fixing the issue or getting user approval.**

### 5. Applying Migration

**Apply migrations:**
```bash
# Apply to configured environment
atlas migrate apply --env <name>

# Dry run (show what would be applied, always run this before applying)
atlas migrate apply --env <name> --dry-run
```

### 6. Making Changes to the Schema

**⚠️ CRITICAL: ALL schema changes in this project MUST follow this exact workflow. NO EXCEPTIONS.**
**⚠️ There must not be lint errors or failing tests when you are done.**

1. Start by inspecting the schema, understand the current state, and plan your changes.
2. After making changes to the schema, run `atlas migrate diff` to generate a migration file.
3. Run `atlas migrate lint` to validate the migration file.
4. Run `atlas migrate test` to test data migrations. This is only available to Pro users.
5. Edit the migration file to fix any issues, then run `atlas migrate hash` to recalculate the hash of the migration file.
6. Repeat the process until there are no more issues, as in the sketch below.
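
One pass of this loop might look like the following (`<name>` and the migration name are placeholders):

```bash
atlas migrate diff --env <name> "add_user_bio"  # generate a migration from schema changes
atlas migrate lint --env <name> --latest 1      # validate the new migration
atlas migrate test --env <name>                 # Pro only: run migration tests
atlas migrate hash --env <name>                 # recalculate hashes after manual edits
```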

At this point, the schema can be applied to the database.
If the user has a local database, you can run `atlas migrate apply` to apply the migration file:

1. Start by checking the status of the database with `atlas migrate status`.
2. Run `atlas migrate apply` to apply the migration file; always start with a dry run.
3. Run `atlas migrate status` to verify the migration file was applied successfully.
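
Put together (again with `<name>` as a placeholder):

```bash
atlas migrate status --env <name>           # check which migrations are applied
atlas migrate apply --env <name> --dry-run  # preview the pending changes
atlas migrate apply --env <name>            # apply for real
atlas migrate status --env <name>           # verify the migration was applied
```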

> **Important for data migrations:** check whether the user is logged in by running `atlas whoami` before running `atlas migrate test`.
> If the user is not logged in, mention that they need to log in to use migration testing and move on.
> If the user is logged in and has migration tests, add a new test, then run and refine it until it passes.
> If the user is logged in but has no migration tests, mention that migration testing is highly recommended for data migrations and move on.

## Troubleshooting Commands

```bash
# Check Atlas installation and login status (logging in is required for Pro features)
atlas version
atlas whoami

# Repair migration integrity
atlas migrate hash --env <name>
```

### Missing driver
If you encounter the error `sql/sqlclient: missing driver. See: https://atlasgo.io/url`, then (depending on the command) the `--url` or `--dev-url` parameter is missing or malformed. The driver is selected by the URL's scheme.
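
A sketch of a corrected invocation (host and database names are placeholders):

```bash
# postgres://, mysql://, sqlite://, and docker:// each select a driver
atlas migrate status --dir file://migrations --url "postgres://localhost:5432/app?sslmode=disable"
```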

## Key Reminders for AI Assistants

1. **Always read `atlas.hcl` first** before running any Atlas commands
2. **Use environment names** from the config file, don't assume defaults
3. **Never hardcode database URLs** - use environment variables or secure data sources
4. **Specify `--url` parameter** for schema inspection even with `--env`
5. **Run `atlas migrate hash`** after manually editing migration files
6. **Use `atlas migrate lint`** to validate migrations before applying
7. **Check `atlas whoami`** before attempting Pro features; if the user is not logged in, don't suggest Pro features
8. **Use JSON format + jq** for initial schema exploration
9. **Use SQL format + grep** for detailed schema investigation
10. **Suggest visualizations** if the user asks for information about the schema
11. **Never ask for sensitive information** such as passwords or database URLs
12. **Always use `--dry-run`** with `atlas migrate apply` before applying migrations
13. **Generate migration tests** if the user is logged in; they should have access to migration testing, which is highly recommended
14. **Only use the Atlas commands listed here**; other commands may be hallucinations or unsupported by Atlas
15. **Always inspect schemas at a high level first**; schemas might be very large
16. **Always prefer running Atlas** to inspect or analyze schemas
17. **Prefer `atlas schema inspect`** over reading migration files, which can be long and irrelevant
18. **Never ignore linting errors**; they are there for a reason, so always fix them. Add `-- atlas:nolint` only if the fix is not detected by the linter.

## Guidelines for workflows
1. **Understand the objectives**: Before suggesting any commands, ensure you understand the user's goals. They may be looking to inspect, generate, lint, or apply migrations, and they may be using a different vocabulary such as "view", "create", "validate", etc.
2. **Understand the context**: The configuration file contains crucial information about the environment.
3. **Verify changes** after generating, linting, or applying migrations.
4. **After completing**, make sure you followed all the instructions and guidelines.
@atlas.hcl

Once added, Cursor will know to use Atlas for schema work and to validate changes before applying them.

Commit the project rules file so teammates get the same behavior.

## Generating and applying a migration

Below is the recommended flow the AI agent (e.g., Cursor) should follow, and which you can also run directly yourself:

1) Change the desired schema

*Editing the schema with AI assistance*
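
For example, a change to `schema.hcl` might add a nullable column (the table and column names here are illustrative):

```hcl
table "users" {
  schema = schema.public
  column "id" {
    type = int
  }
  # the new nullable column added in this change
  column "bio" {
    type = text
    null = true
  }
  primary_key {
    columns = [column.id]
  }
}
```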

2) Create a migration file and fix issues

Run `atlas migrate diff` to generate the migration, then validate it with `atlas migrate lint` and fix any issues.

*Generating and linting migrations*
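
Assuming a hypothetical environment named `local` in `atlas.hcl`, this step boils down to:

```bash
atlas migrate diff --env local "add_user_bio"  # generate the migration file
atlas migrate lint --env local --latest 1      # check it for unsafe changes
```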

3) Apply to the database

Apply using `atlas migrate apply`, running with `--dry-run` first.

*Applying the migration*
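
Again assuming the hypothetical `local` environment:

```bash
atlas migrate apply --env local --dry-run  # preview what would run
atlas migrate apply --env local            # apply the migration
```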

## Next steps

Atlas brings safety and reliability to schema changes by establishing guardrails that help you and your team move fast with confidence.

Get started by downloading Atlas and configuring your AI agent to use it; see our guide.

Questions or feedback? Join our Discord