PostgreSQL Database Security as Code (Versioned)

Managing database security through ad hoc GRANT and REVOKE commands leads to drift, and permissions scattered across migration files make it hard to answer "who can access what?" Atlas lets you define roles, users, permissions, and row-level security policies as code. With the versioned workflow, each change is captured in a migration file, reviewed by your team, and applied through CI/CD.

This guide covers the versioned workflow. For the declarative approach, see the declarative security guide.

Roles, users, and permissions are available only to Atlas Pro users. To use this feature, run:

atlas login

Prerequisites

  1. Docker
  2. Atlas installed on your machine (installation guide)
  3. An Atlas Pro account (run atlas login to authenticate)

Project Setup

Start by spinning up a local PostgreSQL database using atlas tool docker:

export DATABASE_URL=$(atlas tool docker --url "docker://postgres/16/demo" --name my-db)

Cleaning up

When you're done, stop the container with atlas tool docker kill --name my-db.

Create an atlas.hcl configuration file with roles and permissions enabled:

atlas.hcl
env "local" {
  url = getenv("DATABASE_URL")
  dev = "docker://postgres/16/dev"
  schema {
    src = "file://schema.pg.hcl"
    mode {
      roles       = true // Inspect and manage roles and users
      permissions = true // Inspect and manage GRANT / REVOKE
    }
  }
}

Note that sensitive = ALLOW is not set here. In the versioned workflow, passwords are not included in migration files by default. We will see how to inject them safely using template directories later in this guide.

Dev Database for Cloud Environments

When targeting cloud-managed databases (Amazon RDS, Cloud SQL, etc.), the dev database must know about provider-specific roles that exist on the target but not in a vanilla PostgreSQL image. Use a docker block with a baseline script to pre-create them:

atlas.hcl
docker "postgres" "rds" {
  image    = "postgres:16"
  baseline = <<-SQL
    CREATE ROLE rds_superuser;
    CREATE ROLE rds_replication;
    CREATE ROLE rds_iam;
    CREATE ROLE rdsadmin;
    GRANT rds_superuser TO postgres;
  SQL
}

env "rds" {
  url = getenv("DATABASE_URL")
  dev = docker.postgres.rds.url
  schema {
    src = "file://schema.pg.hcl"
    mode {
      roles       = true
      permissions = true
    }
  }
  migration {
    dir = "file://migrations"
  }
}

If you prefer to use your own dev database instead of Docker, use a dev block:

atlas.hcl
dev "postgres" "rds" {
  url      = getenv("DEV_URL")
  baseline = <<-SQL
    CREATE ROLE rds_superuser;
    CREATE ROLE rds_replication;
    CREATE ROLE rds_iam;
    CREATE ROLE rdsadmin;
    GRANT rds_superuser TO postgres;
  SQL
}

Defining the Desired State

The schema file is the target state Atlas diffs against the migration directory. Let's model a SaaS order management application with tiered access:

Role           Purpose
app_readonly   Read-only access for reporting dashboards
app_writer     Read-write access for the application backend, inherits from app_readonly
app_admin      Full administrative access, inherits from app_writer

Two login users:

User             Role membership
dashboard_user   app_readonly
api_user         app_writer

schema.pg.hcl
variable "api_password" {
  type = string
}

variable "dashboard_password" {
  type = string
}

role "app_readonly" {
  comment = "Read-only access for reporting"
}

role "app_writer" {
  comment   = "Read-write access for the application"
  member_of = [role.app_readonly]
}

role "app_admin" {
  create_role = true
  comment     = "Administrative access"
  member_of   = [role.app_writer]
}

user "dashboard_user" {
  password   = var.dashboard_password
  conn_limit = 5
  comment    = "Reporting dashboard service account"
  member_of  = [role.app_readonly]
}

user "api_user" {
  password   = var.api_password
  conn_limit = 20
  comment    = "Application API service account"
  member_of  = [role.app_writer]
}

table "users" {
  schema = schema.public
  column "id" {
    type = serial
    null = false
  }
  column "name" {
    type = character_varying
    null = false
  }
  column "email" {
    type = character_varying
    null = false
  }
  primary_key {
    columns = [column.id]
  }
}

table "products" {
  schema = schema.public
  column "id" {
    type = serial
    null = false
  }
  column "name" {
    type = character_varying
    null = false
  }
  column "price" {
    type = numeric
    null = false
  }
  primary_key {
    columns = [column.id]
  }
}

table "orders" {
  schema = schema.public
  column "id" {
    type = serial
    null = false
  }
  column "user_id" {
    type = integer
    null = false
  }
  column "product_id" {
    type = integer
    null = false
  }
  column "tenant_id" {
    type = integer
    null = false
  }
  column "total" {
    type = numeric
    null = false
  }
  primary_key {
    columns = [column.id]
  }
  foreign_key "fk_order_user" {
    columns     = [column.user_id]
    ref_columns = [table.users.column.id]
  }
  foreign_key "fk_order_product" {
    columns     = [column.product_id]
    ref_columns = [table.products.column.id]
  }
  row_security {
    enabled  = true
    enforced = true
  }
}

// Admin gets ALL on the schema with grant option
permission {
  to         = role.app_admin
  for        = schema.public
  privileges = [ALL]
  grantable  = true
}

// Read-only: SELECT on all tables
permission {
  for_each   = [table.orders, table.products, table.users]
  for        = each.value
  to         = role.app_readonly
  privileges = [SELECT]
}

// Writer: SELECT, INSERT, UPDATE on orders and products
permission {
  for_each   = [table.orders, table.products]
  for        = each.value
  to         = role.app_writer
  privileges = [SELECT, INSERT, UPDATE]
}

// Column-level: api_user can only UPDATE the email column
permission {
  to         = user.api_user
  for        = table.users.column.email
  privileges = [UPDATE]
}

policy "tenant_isolation" {
  on    = table.orders
  for   = ALL
  to    = ["app_writer"]
  using = "(tenant_id = current_setting('app.current_tenant')::integer)"
  check = "(tenant_id = current_setting('app.current_tenant')::integer)"
}

schema "public" {
  comment = "standard public schema"
}

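
To make the tenant_isolation policy concrete, here is a sketch of how a session connected as api_user (a member of app_writer) would behave once this schema is applied. It assumes a users row with id 1 and a products row with id 1 already exist; the exact error message for the rejected insert depends on your PostgreSQL version:

```sql
-- Scope the session to tenant 42; the policy evaluates this setting per statement.
SET app.current_tenant = '42';

-- Allowed: tenant_id matches the WITH CHECK expression.
INSERT INTO orders (user_id, product_id, tenant_id, total)
VALUES (1, 1, 42, 99.90);

-- The USING expression filters reads: only tenant 42's rows are visible.
SELECT id, total FROM orders;

-- Rejected: tenant_id 7 fails the policy's WITH CHECK expression.
INSERT INTO orders (user_id, product_id, tenant_id, total)
VALUES (1, 1, 7, 10.00);
```

Because the table is marked FORCE ROW LEVEL SECURITY, even the table owner is subject to the policy.
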
External and predefined roles

Cloud-managed databases (Amazon RDS, Google Cloud SQL, Azure) include provider-specific roles like rds_superuser or cloudsqlsuperuser. PostgreSQL also ships with predefined roles like pg_read_all_data and pg_monitor. Mark these roles as external = true so Atlas knows they already exist and will only generate the GRANT, never a CREATE ROLE:

// PostgreSQL predefined role
role "pg_read_all_data" {
  external = true
}

// Cloud provider role (e.g., Amazon RDS)
role "rds_superuser" {
  external = true
}

role "app_readonly" {
  comment   = "Read-only access for reporting"
  member_of = [role.pg_read_all_data]
}

Generating the Initial Migration

Run atlas migrate diff to generate the first migration file:

atlas migrate diff add_security \
--env local \
--var "api_password=s3cret" \
--var "dashboard_password=r3port"

Atlas creates a migration directory with the generated SQL and a checksum file:

migrations/
├── 20260303120000_add_security.sql
└── atlas.sum

The generated migration contains every role, grant, table, policy, and RLS setting:

migrations/20260303120000_add_security.sql
-- Create role "app_readonly"
CREATE ROLE "app_readonly";
-- Create role "app_writer"
CREATE ROLE "app_writer";
-- Grant role "app_readonly" to "app_writer"
GRANT "app_readonly" TO "app_writer";
-- Create role "app_admin"
CREATE ROLE "app_admin" WITH CREATEROLE;
-- Grant role "app_writer" to "app_admin"
GRANT "app_writer" TO "app_admin";
-- Grant on schema "public" to "app_admin"
GRANT CREATE, USAGE ON SCHEMA "public" TO "app_admin" WITH GRANT OPTION;
-- Create user "api_user"
CREATE ROLE "api_user" WITH LOGIN CONNECTION LIMIT 20;
-- Grant role "app_writer" to "api_user"
GRANT "app_writer" TO "api_user";
-- Create user "dashboard_user"
CREATE ROLE "dashboard_user" WITH LOGIN CONNECTION LIMIT 5;
-- Grant role "app_readonly" to "dashboard_user"
GRANT "app_readonly" TO "dashboard_user";
-- Create "products" table
CREATE TABLE "public"."products" (
  "id" serial NOT NULL,
  "name" character varying NOT NULL,
  "price" numeric NOT NULL,
  PRIMARY KEY ("id")
);
-- Grant on table "products" to "app_readonly"
GRANT SELECT ON TABLE "public"."products" TO "app_readonly";
-- Grant on table "products" to "app_writer"
GRANT INSERT, SELECT, UPDATE ON TABLE "public"."products" TO "app_writer";
-- Create "users" table
CREATE TABLE "public"."users" (
  "id" serial NOT NULL,
  "name" character varying NOT NULL,
  "email" character varying NOT NULL,
  PRIMARY KEY ("id")
);
-- Grant on table "users" to "app_readonly"
GRANT SELECT ON TABLE "public"."users" TO "app_readonly";
-- Grant on column "email" of table "users" to "api_user"
GRANT UPDATE ("email") ON TABLE "public"."users" TO "api_user";
-- Create "orders" table
CREATE TABLE "public"."orders" (
  "id" serial NOT NULL,
  "user_id" integer NOT NULL,
  "product_id" integer NOT NULL,
  "tenant_id" integer NOT NULL,
  "total" numeric NOT NULL,
  PRIMARY KEY ("id"),
  CONSTRAINT "fk_order_product" FOREIGN KEY ("product_id") REFERENCES "public"."products" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT "fk_order_user" FOREIGN KEY ("user_id") REFERENCES "public"."users" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION
);
-- Enable and enforce row-level security for "orders" table
ALTER TABLE "public"."orders" ENABLE ROW LEVEL SECURITY, FORCE ROW LEVEL SECURITY;
-- Grant on table "orders" to "app_readonly"
GRANT SELECT ON TABLE "public"."orders" TO "app_readonly";
-- Grant on table "orders" to "app_writer"
GRANT INSERT, SELECT, UPDATE ON TABLE "public"."orders" TO "app_writer";
-- Create policy "tenant_isolation"
CREATE POLICY "tenant_isolation" ON "public"."orders" AS PERMISSIVE FOR ALL TO "app_writer"
USING (tenant_id = (current_setting('app.current_tenant'::text))::integer)
WITH CHECK (tenant_id = (current_setting('app.current_tenant'::text))::integer);

Notice that passwords are not included. Atlas intentionally excludes them from versioned migration files: secrets should never be committed to source control.

Adding Passwords with Template Directories

To set passwords at deploy time, add ALTER ROLE ... PASSWORD statements to the migration and mark them with the -- atlas:sensitive directive so the values are never logged:

-- atlas:sensitive
ALTER ROLE "api_user" WITH PASSWORD '{{ .api_password }}';
-- atlas:sensitive
ALTER ROLE "dashboard_user" WITH PASSWORD '{{ .dashboard_password }}';
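
For illustration, with the api_password=s3cret value used in the earlier migrate diff command, the first statement would render at apply time roughly as follows. The -- atlas:sensitive directive keeps the rendered value out of logs, so it is shown here only to make the substitution concrete:

```sql
-- atlas:sensitive
ALTER ROLE "api_user" WITH PASSWORD 's3cret';
```
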

Then configure a template directory in atlas.hcl to inject the actual values at runtime:

atlas.hcl
variable "api_password" {
  type    = string
  default = "changeme"
}

variable "dashboard_password" {
  type    = string
  default = "changeme"
}

data "template_dir" "migrations" {
  path = "migrations"
  vars = {
    api_password       = var.api_password
    dashboard_password = var.dashboard_password
  }
}

env "local" {
  url = getenv("DATABASE_URL")
  dev = "docker://postgres/16/dev"
  schema {
    src = "file://schema.pg.hcl"
    mode {
      roles       = true
      permissions = true
    }
  }
  migration {
    dir = data.template_dir.migrations.url
  }
}

At deploy time, {{ .api_password }} and {{ .dashboard_password }} are replaced with real values from the template_dir vars. Source these from environment variables, or use the runtimevar data source to read passwords directly from AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault, or any other supported secret store:

atlas.hcl
data "runtimevar" "api_pass" {
  url = "awssecretsmanager://api-db-password?region=us-east-1"
}

data "template_dir" "migrations" {
  path = "migrations"
  vars = {
    api_password = data.runtimevar.api_pass
  }
}

Applying Migrations

Apply to the target database:

atlas migrate apply --env local \
--var "api_password=$API_PASSWORD" \
--var "dashboard_password=$DASHBOARD_PASSWORD"
Migrating to version 20260303120000 (1 migration in total):

-- migrating version 20260303120000
-> CREATE ROLE "app_readonly";
-> CREATE ROLE "app_writer";
-> GRANT "app_readonly" TO "app_writer";
-> CREATE ROLE "app_admin" WITH CREATEROLE;
-> GRANT "app_writer" TO "app_admin";
-> GRANT CREATE, USAGE ON SCHEMA "public" TO "app_admin" WITH GRANT OPTION;
-> CREATE ROLE "api_user" WITH LOGIN CONNECTION LIMIT 20;
-> GRANT "app_writer" TO "api_user";
-> CREATE ROLE "dashboard_user" WITH LOGIN CONNECTION LIMIT 5;
-> GRANT "app_readonly" TO "dashboard_user";
-> CREATE TABLE "public"."products" ( ... );
-> GRANT SELECT ON TABLE "public"."products" TO "app_readonly";
-> GRANT INSERT, SELECT, UPDATE ON TABLE "public"."products" TO "app_writer";
-> CREATE TABLE "public"."users" ( ... );
-> GRANT SELECT ON TABLE "public"."users" TO "app_readonly";
-> GRANT UPDATE ("email") ON TABLE "public"."users" TO "api_user";
-> CREATE TABLE "public"."orders" ( ... );
-> ALTER TABLE "public"."orders" ENABLE ROW LEVEL SECURITY, FORCE ROW LEVEL SECURITY;
-> GRANT SELECT ON TABLE "public"."orders" TO "app_readonly";
-> GRANT INSERT, SELECT, UPDATE ON TABLE "public"."orders" TO "app_writer";
-> CREATE POLICY "tenant_isolation" ON "public"."orders" ...
-- ok

-------------------------
-- 1 migration
-- 21 sql statements

With Atlas Cloud, push your migration directory and deploy from any CI/CD platform:

atlas migrate push app --env local

Making Incremental Changes

When requirements change, update the schema file and generate a new migration. Atlas computes only the diff. For example, adding an app_support role for the support team:

Add to schema.pg.hcl:

schema.pg.hcl
role "app_support" {
  comment   = "Support team access"
  member_of = [role.app_readonly]
}

// Support can view and update orders
permission {
  to         = role.app_support
  for        = table.orders
  privileges = [SELECT, UPDATE]
}

Generate the incremental migration:

atlas migrate diff add_support_role \
--env local \
--var "api_password=s3cret" \
--var "dashboard_password=r3port"

Atlas generates only what changed, in this case the new role and its grants:

migrations/20260303130000_add_support_role.sql
-- Create role "app_support"
CREATE ROLE "app_support";
-- Grant role "app_readonly" to "app_support"
GRANT "app_readonly" TO "app_support";
-- Grant on table "orders" to "app_support"
GRANT SELECT ON TABLE "public"."orders" TO "app_support";
-- Grant on table "orders" to "app_support"
GRANT UPDATE ON TABLE "public"."orders" TO "app_support";

The migration directory now holds both files. The sum of all migrations represents the current security state:

migrations/
├── 20260303120000_add_security.sql
├── 20260303130000_add_support_role.sql
└── atlas.sum
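
After applying, you can verify the grants directly in the database. A quick check using the standard information_schema catalog view (this queries live state, so run it against the target database after atlas migrate apply):

```sql
-- List table-level privileges held by the new role;
-- should show SELECT and UPDATE on "orders".
SELECT table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'app_support';
```
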

Next Steps

Have questions? Feedback? Find our team on our Discord server or schedule a demo.