Unlogged Tables in PostgreSQL
Overview of Unlogged Tables
PostgreSQL lets you create unlogged tables, a specialized table type that bypasses the Write-Ahead Log (WAL) to achieve higher write throughput. By skipping WAL writes, unlogged tables offer significant performance gains for workloads where durability is not critical.
Why do we need them?
In PostgreSQL, writes to regular tables always go through the WAL. That provides durability and enables point-in-time recovery, but it also adds overhead. Under heavy write load, WAL traffic can easily become the bottleneck.
Unlogged tables skip WAL entirely. If the data can be regenerated, and you don't need it to survive a crash, the performance gain can be substantial. The trade-offs are straightforward: after a crash or unclean shutdown, unlogged tables are truncated. After a clean shutdown, they persist normally. Their contents never appear on physical replicas, and logical replication can't publish them.
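A minimal SQL sketch of this trade-off (table and column names are illustrative; actual speedups depend on hardware and workload):

```sql
-- Unlogged: bulk inserts skip the WAL entirely
CREATE UNLOGGED TABLE events_staging (id int, payload text);

INSERT INTO events_staging
SELECT g, 'payload-' || g
FROM generate_series(1, 1000000) AS g;

-- After a crash or unclean shutdown, the table definition survives
-- but its contents are truncated:
-- SELECT count(*) FROM events_staging;  -- 0 rows after crash recovery
```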
You can learn more about unlogged tables in the PostgreSQL documentation.
Basic PostgreSQL syntax for using Unlogged Tables
In PostgreSQL, you can create unlogged tables using the UNLOGGED keyword:
CREATE UNLOGGED TABLE table_name (
column1 datatype,
column2 datatype,
...
);
You can also transition existing tables between logged and unlogged modes:
ALTER TABLE table_name SET UNLOGGED;
ALTER TABLE table_name SET LOGGED;
Switching a table between LOGGED and UNLOGGED requires PostgreSQL to perform a full table rewrite. When converting to LOGGED, the system rewrites the table to produce a WAL-protected copy; when converting to UNLOGGED, it rewrites the table into a new unlogged heap. Both operations can be expensive for large tables.
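For example, converting persistence mode rewrites the entire heap and holds an ACCESS EXCLUSIVE lock for the duration, so it should be scheduled like any other heavyweight DDL (table name is illustrative):

```sql
-- Rewrites the whole table and blocks all concurrent access until commit
BEGIN;
ALTER TABLE events_staging SET LOGGED;  -- rewrites heap, WAL-logs the copy
COMMIT;
```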
Use cases
Use unlogged tables when you care more about raw write speed than durability:
- ETL staging tables: Bulk loads run faster because you avoid flooding the WAL. If something goes wrong, you can reload the data.
- Short-lived analytics tables: Good for intermediate results that come from upstream data and can be rebuilt at any time.
- Caching layers: Store values that can be recomputed or fetched again without cost.
- High-throughput pipelines: Fits cases where you're processing large batches, and the intermediate steps don't need to survive a restart.
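The ETL staging pattern above can be sketched as follows (table names and the validation predicate are hypothetical):

```sql
-- Load raw data into an unlogged staging table, then move validated
-- rows into the durable target table in a single transaction
CREATE UNLOGGED TABLE orders_staging (LIKE orders INCLUDING ALL);

COPY orders_staging FROM '/tmp/orders.csv' WITH (FORMAT csv);

INSERT INTO orders
SELECT * FROM orders_staging
WHERE total >= 0;          -- validation step; adjust to your rules

TRUNCATE orders_staging;   -- staging contents are disposable
```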
Managing Unlogged Tables is easy with Atlas
Managing unlogged tables and larger schema changes in PostgreSQL can get messy. Atlas (the "Terraform for Databases") lets you manage your database schema as code, using a clear declarative format in SQL, HCL, or any ORM schema.
If your ORM doesn't support unlogged tables, Atlas does. You describe the table once in your ORM schema, and Atlas generates the necessary SQL to create and manage it. To learn more, check out ORM and framework guides.
If you are just getting started, install the latest version of Atlas using the guide to setting up Atlas.
Defining unlogged tables with Atlas
Let's see how to define unlogged tables:
- SQL
- Atlas DDL (HCL)
CREATE UNLOGGED TABLE "t1" (
"a" integer NOT NULL,
PRIMARY KEY ("a")
);
CREATE TABLE "t2" (
"a" integer NOT NULL,
PRIMARY KEY ("a")
);
Atlas represents unlogged tables using the unlogged attribute in HCL schema definitions. When unlogged = true is specified, Atlas models the table as unlogged; when the attribute is omitted or set to false, the table is logged by default. This declarative approach makes persistence modes explicit and version-controlled.
table "t1" {
schema = schema.script_unlogged
unlogged = true
column "a" {
null = false
type = integer
}
primary_key {
columns = [column.a]
}
}
table "t2" {
schema = schema.script_unlogged
column "a" {
null = false
type = integer
}
primary_key {
columns = [column.a]
}
}
schema "script_unlogged" {}
Migration behavior
Atlas automatically detects changes in persistence modes and generates the appropriate migration statements:
- When the desired state changes from logged to unlogged, Atlas generates ALTER TABLE ... SET UNLOGGED.
- When changing from unlogged to logged, Atlas generates ALTER TABLE ... SET LOGGED.
Keep in mind that PostgreSQL rewrites the table when converting to LOGGED, which can be a costly operation for large tables.
Example: Managing unlogged tables with Atlas
Let's start by creating an initial schema by inspecting it from our database.
First, we'll use the atlas schema inspect command to get a representation of our database:
- SQL
- Atlas DDL (HCL)
atlas schema inspect -u "postgres://postgres:pass@localhost:5432/demo?sslmode=disable" \
--format '{{ sql . }}' > schema.sql
Now, let's add to our schema both logged and unlogged tables:
-- Create "staging_data" table
CREATE UNLOGGED TABLE "public"."staging_data" (
"id" serial NOT NULL,
"data" text NULL,
PRIMARY KEY ("id")
);
-- Create "user_sessions" table
CREATE TABLE "public"."user_sessions" (
"id" serial NOT NULL,
"session_data" text NULL,
PRIMARY KEY ("id")
);
atlas schema inspect -u "postgres://postgres:pass@localhost:5432/demo?sslmode=disable" > schema.hcl
Now, let's add to our schema both logged and unlogged tables:
table "staging_data" {
schema = schema.public
unlogged = true
column "id" {
null = false
type = serial
}
column "data" {
null = true
type = text
}
primary_key {
columns = [column.id]
}
}
table "user_sessions" {
schema = schema.public
column "id" {
null = false
type = serial
}
column "session_data" {
null = true
type = text
}
primary_key {
columns = [column.id]
}
}
schema "public" {}
Save and apply the schema changes on the database by using the following command:
- SQL
- Atlas DDL (HCL)
atlas schema apply -u "postgres://postgres:pass@localhost:5432/demo?sslmode=disable" \
--dev-url "docker://postgres/16/dev" \
--to file://schema.sql
atlas schema apply -u "postgres://postgres:pass@localhost:5432/demo?sslmode=disable" \
--dev-url "docker://postgres/16/dev" \
--to file://schema.hcl
Atlas generates the necessary SQL statements to create the tables.
Planning migration statements (2 in total):
-- create "staging_data" table:
-> CREATE UNLOGGED TABLE "public"."staging_data" (
"id" serial NOT NULL,
"data" text NULL,
PRIMARY KEY ("id")
);
-- create "user_sessions" table:
-> CREATE TABLE "public"."user_sessions" (
"id" serial NOT NULL,
"session_data" text NULL,
PRIMARY KEY ("id")
);
-------------------------------------------
? Approve or abort the plan:
▸ Approve and apply
Abort
Choosing Approve and apply executes the migration and creates the tables in the database.
To verify that our tables were created correctly, we can inspect the database again using Atlas, this time inspecting only the two newly created tables.
atlas schema inspect -u "postgres://postgres:pass@localhost:5432/demo?sslmode=disable" \
--include '*.user_sessions,*.staging_data' \
--format '{{ sql . }}'
-- Create "staging_data" table
CREATE UNLOGGED TABLE "public"."staging_data" (
"id" serial NOT NULL,
"data" text NULL,
PRIMARY KEY ("id")
);
-- Create "user_sessions" table
CREATE TABLE "public"."user_sessions" (
"id" serial NOT NULL,
"session_data" text NULL,
PRIMARY KEY ("id")
);
Amazing! Our unlogged table is now created!
The --include flag allows us to filter the inspection to only the specified objects, making it easier to verify our specific tables without inspecting the entire database schema.
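You can also confirm the persistence mode directly in PostgreSQL via the system catalog:

```sql
-- relpersistence is 'u' for unlogged, 'p' for permanent (logged)
SELECT relname, relpersistence
FROM pg_class
WHERE relname IN ('staging_data', 'user_sessions');
```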
Conclusion
In this section, we learned about PostgreSQL unlogged tables and how we can easily create and manage them in our database by using Atlas. Unlogged tables provide a powerful way to optimize write performance for workloads where durability can be sacrificed, and Atlas's declarative modeling makes these patterns safe and predictable.