Atlas Operator Quickstart
Intro
In this guide we will quickly go through setting up the Atlas Operator on a local Kubernetes cluster and demonstrate some of its basic features.
Local Cluster Setup
To get started, you will need a Kubernetes cluster running on your local machine. For the purpose of this guide, we will use minikube.
To install minikube on macOS, you can use brew:
brew install minikube
For other operating systems, follow the instructions on the official website.
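With minikube installed, start a local cluster and make sure kubectl can reach it. This is a minimal sketch; the driver and resource defaults depend on your local setup:
# start a single-node local cluster
minikube start
# confirm the node is Ready
kubectl get nodes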
Provision a local database
Next, we will install a PostgreSQL database for the Atlas Operator to manage:
kubectl apply -f https://gist.githubusercontent.com/rotemtam/a7489d7b019f30aff7795566debbedcc/raw/53bac2b9d18577fed9e858642092a7f4bcc44a60/db.yaml
This command will install a few resources in your cluster:
- A Deployment for the PostgreSQL database running the postgres:latest image.
- A Service to expose the database to the cluster.
- A Secret containing the database credentials in the Atlas URL format.
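Before continuing, it is worth waiting for the database pod to become ready. The command below assumes the app=postgres label that we also use later in this guide to locate the pod:
# block until the PostgreSQL pod reports Ready (or time out after two minutes)
kubectl wait --for=condition=ready pod -l app=postgres --timeout=120s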
Install the Atlas Operator
Now we can install the Atlas Operator using Helm:
helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator
This command will install the Atlas Operator in your cluster. The Operator includes three important components:
- The AtlasSchema Custom Resource Definition (CRD) that supports the declarative migration flow.
- The AtlasMigration CRD that supports the versioned migrations flow.
- A controller that watches for AtlasSchema and AtlasMigration resources and applies the desired schema to the database.
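To confirm the installation, you can check that the CRDs were registered and that the controller pod is running. Pod and release names may differ slightly depending on your Helm setup:
# list the Atlas CRDs (AtlasSchema and AtlasMigration)
kubectl get crds | grep atlasgo.io
# the operator controller pod should be running in the current namespace
kubectl get pods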
Apply a schema
To apply a schema to the database, create a file named atlas-schema.yaml with the following content:
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-pg
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: postgres-credentials
  schema:
    sql: |
      create table t1 (
        id int
      );
This manifest includes two important parts:
- The urlFrom field that references the postgres-credentials secret containing the database URL. This tells the Operator where to apply the schema.
- The schema field that contains the desired state of the database. In this case, we are creating a table named t1 with a single column, id.
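If you are curious about the URL the Operator will use, you can decode it from the secret we created earlier. Note that this prints database credentials to your terminal:
# read the Atlas URL stored under the "url" key of the secret
kubectl get secret postgres-credentials -o jsonpath='{.data.url}' | base64 --decode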
To apply the schema, run:
kubectl apply -f atlas-schema.yaml
The Operator will detect the new AtlasSchema resource and apply the schema to the database.
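You can watch the Operator's progress by inspecting the resource itself. This assumes the CRD registers atlasschema as the lowercase resource name for the AtlasSchema kind, which is the usual Kubernetes convention:
# show the resource status and recent events reported by the Operator
kubectl describe atlasschema atlasschema-pg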
To verify that the schema was applied correctly, let's use the kubectl exec command to connect to the PostgreSQL database and describe the t1 table:
kubectl exec -it $(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}') -- \
psql -U root -d postgres -c "\d t1"
This command will connect to the PostgreSQL database and show the schema of the t1 table:
            Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
Great! You have successfully applied a schema to a PostgreSQL database using the Atlas Operator.
Evolve the schema
Let's evolve the schema by adding a new column, name, to the t1 table. Update the atlas-schema.yaml file with the following content:
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: atlasschema-pg
spec:
  urlFrom:
    secretKeyRef:
      key: url
      name: postgres-credentials
  schema:
    sql: |
      create table t1 (
        id int,
        name text -- new column we're adding
      );
Apply the updated schema:
kubectl apply -f atlas-schema.yaml
To verify the schema change, connect to the PostgreSQL database and describe the t1 table again:
kubectl exec -it $(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}') -- \
psql -U root -d postgres -c "\d t1"
You should see the updated schema:
            Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
 name   | text    |           |          |
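Because the flow is declarative, the Operator diffed the desired schema against the live database and applied only the change needed to add the name column, rather than recreating the table. A quick way to see this reflected on the resource is to describe it again after the update (the exact output varies between Operator versions):
# review the resource status and events after the schema change
kubectl describe atlasschema atlasschema-pg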