Resolving pq: out of shared memory error during schema loading
Question
How can I resolve the pq: out of shared memory error that appears while Atlas loads my schema?
Answer
This message typically comes from the PostgreSQL dev-database that Atlas spins up to simulate your schema changes, not from the target database itself.
Why does this error occur?
Postgres acquires locks before dropping objects to prevent concurrent modifications. These locks are held within a single
transaction until the DROP completes or rolls back. When a large schema is dropped with a CASCADE clause, Postgres must
lock every dependent object in that one transaction. If the number of required locks exceeds the available space in the
lock table, Postgres raises the following error:
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
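As a hypothetical illustration, cleaning up a schema that contains hundreds of tables, indexes, and constraints with a single statement like the one below requires a lock on every dependent object within the same transaction:
-- Hypothetical cleanup of a large schema; every table, index, view, and
-- constraint that depends on "public" must be locked in this one transaction.
DROP SCHEMA public CASCADE;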
The lock table and max_locks_per_transaction
The lock table is created in shared memory when the server starts. The max_locks_per_transaction parameter controls the average number of object locks allocated for each transaction; the default is 64. Because the total size of the lock table is derived from this value together with max_connections and max_prepared_transactions, a single transaction may hold more locks than the average, but only as long as the shared lock table has room.
If the lock table runs out of space, PostgreSQL reports the above error and suggests increasing the parameter value.
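To confirm which value the dev-database is currently running with, you can connect to it and run the standard PostgreSQL command below:
-- Show the lock-table setting currently in effect
SHOW max_locks_per_transaction;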
Increasing max_locks_per_transaction
Raising this parameter allows PostgreSQL to allocate more object locks per transaction. Because the lock table is initialized at startup, changing this value requires a server restart.
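If you manage the Postgres server yourself rather than letting Atlas start it, one way to raise the limit is a minimal sketch like the following, assuming superuser access. ALTER SYSTEM persists the value to postgresql.auto.conf, and it only takes effect after the server restarts:
-- Raise the limit on a self-managed server (requires superuser privileges);
-- restart the server afterwards for the new value to apply.
ALTER SYSTEM SET max_locks_per_transaction = 512;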
If you're using an Atlas docker-based dev-database, you can set a higher value in the command argument as shown below:
docker "postgres" "dev" {
image = "postgres:16"
// ... other configuration as needed
command = [
"-c", "max_locks_per_transaction=512",
]
}
env "local" {
dev = docker.postgres.dev.url
// ... other configuration as needed
}
This example starts the dev-database Postgres server with max_locks_per_transaction=512.
Adjust the value to your needs (for example, 128, 256, 512, or 1024).
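With this configuration in place, Atlas commands that use the dev-database, such as atlas schema apply --env local, run against a Postgres container started with the higher lock limit.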