In continuing my attempts to get a stable and reasonable deployment of the Vault operator, I need to get Tiller properly installed and configured. Following the default instructions resulted in permission errors, which is totally my bad.

First up is removing the existing Tiller installation. Based on the output of kubectl get deployments --namespace=kube-system, it looks like Helm currently installs Tiller as a deployment. kubectl delete deployment tiller-deploy --namespace=kube-system removed it!
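
For reference, the cleanup is just a matter of spotting the deployment and deleting it (helm init also seems to leave behind a tiller-deploy service in some setups, so it is worth a quick check for that too):

> kubectl get deployments --namespace=kube-system                   # tiller-deploy should be listed
> kubectl delete deployment tiller-deploy --namespace=kube-system
> kubectl get services --namespace=kube-system | grep tiller        # delete any leftover tiller-deploy service as well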

Installing Tiller correctly

So if helm init does not work the way I intended, what is the correct path? According to Jonathan Campos in “Installing Helm in Google Kubernetes Engine (GKE)”, published on 2018-08-13, we need to do three things.

  1. Create a new service account for Tiller: kubectl --namespace=kube-system create serviceaccount tiller
  2. Bind Tiller’s service account to the cluster-admin role: kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
  3. Install Tiller with the --service-account argument: helm init --service-account tiller

I am wondering if the --serviceaccount specification is in a standardized format like namespace:account.
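
As best I can tell it is exactly that: the flag takes a namespace:name pair, so the binding command has roughly this general shape (the placeholders are mine):

> kubectl create clusterrolebinding <binding-name> \
>     --clusterrole=<role-name> \
>     --serviceaccount=<namespace>:<serviceaccount-name>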

Anyway, after a few seconds Tiller is up and healthy. Time to try again!

> helm repo add banzaicloud-stable http://kubernetes-charts.banzaicloud.com/branch/master
> helm install banzaicloud-stable/vault-operator

Works as expected! The output from Tiller looks healthy, based on a complete guess of what healthy is :-).
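
For a slightly less hand-wavy check, helm version and helm ls seem like reasonable proxies for “healthy”:

> helm version        # should print both a Client and a Server version once Tiller is reachable
> helm ls             # the new vault-operator release should show up as DEPLOYED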

Deploying & Verifying Vault

I took the first example from Banzai Cloud’s page, which is unfortunately missing anchors in the text. I updated line 7 to use the most recent version at the time of writing, 1.0.2. As promised on the label, it creates a single instance with the specified version. To verify the application was working correctly, `kubectl logs vault-0 vault` produced the expected Vault initialization output. There are two objective methods of verification I will be using: being able to connect to the instance for administrative use, and being able to retrieve secrets from a production point of view.
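
For the record, the deploy-and-check loop was roughly the following; vault-cr.yaml is just what I am calling the copied example locally:

> kubectl apply -f vault-cr.yaml       # the example CR from the Banzai Cloud page, version bumped to 1.0.2
> kubectl get pods                     # wait for vault-0 to reach Running
> kubectl logs vault-0 vault           # should print Vault’s initialization output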

Connecting to the instance

Exposing the port is fairly easy: kubectl port-forward vault-0 8200:8200. As advertised on the box, Vault will be exposed on port 8200. The service is using TLS, which will require openssl s_client -host localhost -port 8200 to get access to the underlying socket. To stop the Vault client from rejecting the connection, you should set VAULT_SKIP_VERIFY=true unless you have pulled the certificate chain from Kubernetes. I’m not that advanced yet.
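
Putting that together, the connection dance is roughly the following (backgrounding the port-forward is just my preference):

> kubectl port-forward vault-0 8200:8200 &
> openssl s_client -host localhost -port 8200 </dev/null     # inspect the handshake and certificate chain
> export VAULT_SKIP_VERIFY=true                              # I don’t have the CA chain yet, so skip client-side verification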

With the network links established, now on to figuring out how to unseal. Disappointingly, VAULT_SKIP_VERIFY=true vault status shows the instance as initialized but sealed. After some searching I was not able to find where the unseal keys were placed, which is probably me not understanding.

Turns out I was not the only one who was unable to locate the secrets. The bank-vaults container in the vault-0 pod shows:

time="2019-02-05T05:11:06Z" level=info msg="checking if vault is sealed..."
time="2019-02-05T05:11:06Z" level=info msg="vault sealed: true"
time="2019-02-05T05:11:06Z" level=error msg="error unsealing vault: unable to get key 'vault-unseal-0': error getting secret for key 'vault-unseal-0': secrets \"vault-unseal-keys\" is forbidden: User \"system:serviceaccount:default:default\" cannot get secrets in the namespace \"default\""

Well, looks like I have another element I missed and need to implement. This will have to drag on yet another day, sadly.
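
If I had to guess at tomorrow’s fix, the error is pointing at an RBAC grant: the default service account in the default namespace needs to be allowed to read the vault-unseal-keys secret. Untested, and the role and binding names below are just placeholders I picked, but the shape is probably something like this (it would not surprise me if the init path needs more verbs than get):

> kubectl create role vault-unseal-reader --namespace=default \
>     --verb=get --resource=secrets --resource-name=vault-unseal-keys
> kubectl create rolebinding vault-unseal-reader --namespace=default \
>     --role=vault-unseal-reader --serviceaccount=default:default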