Experimenting with GKE on GCP
• Mark Eschbach
Time to start learning me some GKE for great good. In truth I am kind of a novice with Kubernetes, so this will be an interesting exploration. My current task is to explore Vault under GKE and taste the operators.
Building via the WUI
Unlike AWS, I am a bit of a newbie at this whole GCP thing. If working with Terraform taught me one thing, it is that I should start with the WUI to reduce the barrier to automating later. So onto the WUI!
Naming things is hard! The first real choice I am confronted with is whether to build a zonal or regional cluster. It sounds like Google has a notion of fault isolation with regions and zones similar to AWS's, although AWS has hinted in some of their documentation that an availability zone is a separate data center, whereas this mentions a zone is just separate hardware. I will go with zonal and see how that impacts choices later; probably not the best DR scenario to play out at the scale this system will exist at.
us-west1 is in Oregon. It also seems like the currently cheapest region at $0.0475 an hour, versus Northern Virginia at $0.0535 an hour.
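For what it is worth, the same zonal-versus-regional choice shows up on the CLI side; a sketch with an illustrative cluster name (these are the documented gcloud flags, not commands I ran):

```shell
# Zonal cluster: control plane and nodes live in a single zone
# (cluster name here is hypothetical)
gcloud container clusters create demo-cluster --zone us-west1-a

# Regional cluster: the control plane (and, by default, the node pool)
# is replicated across the region's zones
gcloud container clusters create demo-cluster --region us-west1
```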
Being a newbie, I am going to leave the version and the recommended default size alone. Snooping under the hood, though, there is a maintenance window. Hopefully that isn't a take-the-whole-cluster-offline affair like RDS.
Totally pressing the big red button now.
Exploring the beast
After a bit the beast is done. I thought I had only asked for 3 nodes; however, I found myself with 9 nodes for some reason. Perhaps for a regional cluster the requested node count applies per zone, so 3 nodes in each of 3 zones would yield 9. Next up is to figure out how to connect to the cluster. Possibly the Connect button in the panel?
Well, that gave me a command:

gcloud beta container clusters get-credentials vault-test-cluster --region us-west1 --project gke-vault-test

The beta in the command makes me wonder a bit. Next step is getting gcloud, I suppose.
Definitely a different approach than just using the local pip instance like AWS. It is nice that I don't have to install the package though; you can just run the binaries straight out of the extracted tar. The command did insist I run it from elsewhere.
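A sketch of that tarball workflow, with an illustrative archive name rather than the exact file I downloaded:

```shell
# Unpack the Cloud SDK tarball and run gcloud straight from it
# (archive name is illustrative)
tar -xzf google-cloud-sdk-linux-x86_64.tar.gz
./google-cloud-sdk/bin/gcloud version

# Optional: the bundled script adds gcloud to PATH and sets up completion
./google-cloud-sdk/install.sh
```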
The application decided to install some components and respectfully errored out on missing credentials. Nice to see the tool set uses a Google OAuth 2.0 style setup, although I am confused how the browser wrote the credentials back to disk (presumably the CLI listens on a localhost port for the OAuth redirect). Alrighty, ran the command and in theory it updated my local kube files to talk to its cluster.
kubectl config get-contexts shows the correct context.
kubectl get nodes shows all 9 nodes happy and ready to go!
Time to start wrenching on the actual problem at hand.
I need to shave the yak of installing Helm & Tiller before I can proceed. In theory it’s simple enough:
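A minimal sketch of what that looked like, assuming the Helm 2 era client/Tiller split (I have omitted any service-account setup):

```shell
# Assumed Helm 2 workflow: `helm init` configures the local client and
# installs Tiller into kube-system using the current kubectl context.
helm init
```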
and we are off. I did not find a decent method to verify the sanity of the configuration though.
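One sanity check that might have worked, again assuming Helm 2:

```shell
# `helm version` reports both the local client and the in-cluster Tiller;
# if Tiller is unreachable the command errors out.
helm version

# Tiller runs as a deployment in kube-system, so its pod should be visible:
kubectl -n kube-system get pods -l app=helm
```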
First project up on the docket is testing Banzai Cloud's project. In theory the following script should work:
helm init -c
helm repo add banzaicloud-stable http://kubernetes-charts.banzaicloud.com/branch/master
helm install vault-operator
However this results in the following output:
Error: failed to download "vault-operator" (hint: running `helm repo update` may help)
So sad times. Unfortunately this exhausted my time to work on this problem today. Stay tuned for the next installment of our epic saga: Insane developer or secure data store?!