Choices! A Kubernetes cluster needs to be able to spin up Postgres clusters for testing. The first two implementations which Google spits at you are:

Giving Zalando a whirl

Pretty easy to get things moving. The following manifest spun up a database and got us going.

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
  namespace: xp-mattermost-pgo
spec:
  teamId: "acid"
  volume:
    size: 1Gi
  numberOfInstances: 2
  users:
    tws:  # database owner
      - superuser
      - createdb
    mattermost: []  # role for the application
  databases:
    mattermost: tws  # dbname: owner
  postgresql:
    version: "14"

Pods come up and services exist. I was not able to port-forward to the service whose name matches the cluster; kubectl complains about missing selectors, which I do not fully understand. Connecting to the service named ${cluster_name}-repl does provide read and write access.
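
For reference, the port-forward that did work looks roughly like this, using the cluster name from the manifest above and the default Postgres port:

kubectl port-forward service/acid-minimal-cluster-repl 5432:5432 -n xp-mattermost-pgo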

Exploring Services without selectors

Taking a closer look, the services without selectors appear to be managed by the operator itself. An implicit link exists between a Service and the Endpoints object with the matching name in the same namespace; effectively the Endpoints object lists the backend targets for a given Service.
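
Comparing the Service with the Endpoints object of the same name makes the link visible; the names below are from the cluster above:

kubectl get service acid-minimal-cluster -n xp-mattermost-pgo -o yaml
kubectl get endpoints acid-minimal-cluster -n xp-mattermost-pgo -o yaml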

Endpoint values can be manually specified. Kubernetes documents several reasons for doing this, although I am not sure any of them apply in this case. Interesting read regardless.
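
As a generic illustration rather than the operator's actual objects, a selector-less Service backed by a hand-maintained Endpoints object of the same name looks something like this; the name and IP are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: external-postgres
  namespace: xp-mattermost-pgo
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres  # must match the Service name
  namespace: xp-mattermost-pgo
subsets:
  - addresses:
      - ip: 10.0.0.15  # placeholder backend address
    ports:
      - port: 5432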

Connecting

Turns out Zalando publishes how to connect; I missed it the first time around. Really I would like to connect with tooling like DataGrip to verify where I am.
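
For a client like DataGrip the pieces are a local port-forward plus the generated credentials. As I understand it the operator stores passwords in secrets named {username}.{clustername}.credentials.postgresql.acid.zalan.do; worth confirming with kubectl get secrets if the pattern differs:

kubectl port-forward service/acid-minimal-cluster-repl 5432:5432 -n xp-mattermost-pgo
kubectl get secret tws.acid-minimal-cluster.credentials.postgresql.acid.zalan.do \
  -n xp-mattermost-pgo -o 'jsonpath={.data.password}' | base64 -d

Point DataGrip at localhost:5432 with the tws user and the decoded password.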

Deploying something real

Zalando uses the idea of teams, which means names such as mattermost will not work for a deployment on their own. Teams are controlled via the custom resource PostgresTeam; however, one does not need to be created. The teamId under spec must be the prefix of the metadata.name element though.
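
As a sketch of the naming rule, a cluster for this application could look like the following; mattermost-db is only an example name, the point being that it starts with the teamId:

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: mattermost-db  # metadata.name must be prefixed with the teamId
  namespace: xp-mattermost-pgo
spec:
  teamId: "mattermost"
  volume:
    size: 1Gi
  numberOfInstances: 2
  postgresql:
    version: "14"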

Setting priority class

The priority class for the operator itself can be controlled via the priorityClassName value. Each cluster manifest should set podPriorityClassName to the desired value. The documentation references being able to set a default value for every cluster; however, the Helm chart appears to create the priority class itself, which is not appropriate for my target.
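
In the cluster manifest that ends up as a single field under spec; the class name below is a placeholder for one that already exists in the target cluster:

spec:
  # ...rest of the cluster spec as above...
  podPriorityClassName: existing-priority-class  # placeholder; the PriorityClass must already exist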

Additional Tuning

Many options exist and are documented on GitHub. The most important for production deployments are size and storageClass.
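
Both live under volume in the cluster manifest; the values below are placeholders:

spec:
  volume:
    size: 50Gi              # sized for the workload
    storageClass: fast-ssd  # placeholder; must match a StorageClass in the cluster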