Playing with Helm Charts
Mark Eschbach
There has been a proposal to use Helm at work. I am not entirely sold on the tool since it adds a lot of complexity I am not sure pays off. However, I may just be inexperienced. Many of the Helm charts I have seen are hyper-focused on a single unit; for example, just a single Postgres instance. Based on my conversations with others I am betting this is the correct approach and I am fundamentally missing something in the way Kubernetes and Helm charts interact.
Looking at the example chart for Drupal, it seems like each service should be packaged separately. To be honest, it looks like we would be better off keeping a repository of k8s descriptors for production and development. That way we don’t have to deal with the complexity of the templates. Maybe I am being pessimistic about it.
Diving into a chart
The goal: deploy a hello world service using Helm. According to the documentation, the way to create a new Helm chart is via helm create <fs-slug>.
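With hello-world as the slug (the chart name assumed for the rest of this post), the invocation is:
:> helm create hello-world
This results in a large tree of files being created: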
:> find hello-world
hello-world
hello-world/Chart.yaml
hello-world/charts
hello-world/.helmignore
hello-world/templates
hello-world/templates/deployment.yaml
hello-world/templates/NOTES.txt
hello-world/templates/ingress.yaml
hello-world/templates/tests
hello-world/templates/tests/test-connection.yaml
hello-world/templates/service.yaml
hello-world/templates/_helpers.tpl
hello-world/values.yaml
Many of these files are probably useful in large deploys, but most of them feel like boilerplate. Perhaps in the future they can simplify the boilerplate? Maybe not, for flexibility reasons.
Just getting it deployed
In the hello-world/values.yaml file there are two properties of interest: image.repository and image.tag. I updated these to meschbach/docker-hello-world and latest respectively. Once updated we deploy:
:> helm package hello-world
:> helm install hello-world-0.1.0.tgz
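For reference, the edited slice of hello-world/values.yaml ends up looking roughly like this (the pullPolicy line is the scaffold’s generated default, not something I changed):
image:
  repository: meschbach/docker-hello-world
  tag: latest
  pullPolicy: IfNotPresent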
These commands go out and create the target resources in the cluster. You can check on the target cluster by issuing helm ls, which will list all deployed charts for a given Tiller instance. A Helm chart will register as DEPLOYED even if the backing pods do not come on-line. For example, in this configuration kubectl get pods shows CrashLoopBackOff for the pod itself. Using kubectl describe pod <mangled-hello-world-name> provides the reason: the liveness check is configured for port 80 while the container’s actual port is 3000. In hello-world/templates/deployment.yaml the property spec.template.spec.containers[0].ports[0].containerPort needs to be set to 3000.
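A sketch of that portion of hello-world/templates/deployment.yaml after the fix, assuming the scaffold of this era where the probes reference the named port rather than a raw number:
ports:
  - name: http
    containerPort: 3000  # was 80; must match the port the application actually listens on
    protocol: TCP
livenessProbe:
  httpGet:
    path: /
    port: http
readinessProbe:
  httpGet:
    path: /
    port: http
Since both probes target the http name, correcting containerPort fixes the liveness check in one place.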
Running helm package hello-world && helm install hello-world-0.1.0.tgz again resulted in a second deploy; I was expecting Helm to replace the current chart. To clean up, use helm list and pass each of the deployments to helm delete. I tried using --name <id> on the first run with --name <id> --replace on subsequent invocations, which failed because the name was in use; apparently a known bug. The workaround, helm upgrade <id> <package>, did it. Kubernetes now successfully registers the pod as alive and ready for service.
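So the deploy loop that actually behaves looks like this, using dev-hw as the release name (the install line runs only once):
:> helm package hello-world && helm install --name dev-hw hello-world-0.1.0.tgz
:> helm package hello-world && helm upgrade dev-hw hello-world-0.1.0.tgz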
From my current understanding it is desirable to isolate applications into separate namespaces. In theory this should be a trivial --namespace <id> flag to Helm. To reset, I have deleted the original deployment, at least until I build up enough faith that Helm will react reasonably. Interestingly, this produced the following result:
:> helm package hello-world && helm install --name dev-hw --namespace hellow-world hello-world-0.1.0.tgz
Successfully packaged chart and saved it to: /Users/mark/wc/xp/virta-system/hello-world-0.1.0.tgz
Error: a release named dev-hw already exists.
Run: helm ls --all dev-hw; to check the status of the release
Or run: helm del --purge dev-hw; to delete it
Turns out Helm will track these for a while; helm del --purge dev-hw will hopefully do it. And it did: unicorns and roses everywhere. Verification phase it is!
Verifying
Since I am a noob at this, I’ll follow the instructions given:
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace hellow-world -l "app.kubernetes.io/name=hello-world,app.kubernetes.io/instance=dev-hw" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
Hmm, that failed: Error from server (NotFound): pods "dev-hw-hello-world-7c4dc6d454-4mgvv" not found :-/. The pod exists according to kubectl get pods --namespace hellow-world, and kubectl describe pods dev-hw-hello-world-7c4dc6d454-4mgvv --namespace hellow-world results in a success message. I found the culprit: the generated script assumes the target namespace is the default for the kubectl context. Adding --namespace <id> works as expected. kubectl port-forward --namespace hellow-world svc/dev-hw-hello-world 3000:3000 will also work; well, changing the service port to 80 did.
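For completeness, the generated NOTES commands work once the namespace is made explicit:
:> export POD_NAME=$(kubectl get pods --namespace hellow-world -l "app.kubernetes.io/name=hello-world,app.kubernetes.io/instance=dev-hw" -o jsonpath="{.items[0].metadata.name}")
:> kubectl port-forward --namespace hellow-world $POD_NAME 8080:80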
Now to check the ingress. Hmm, kubectl get ingress --namespace hellow-world produced no results. Nor did kubectl get ingress --all-namespaces. A bit disconcerting. Double-checking the output, there was not an ingress actually created, although it looks like the service might have an external address associated. kubectl describe service dev-hw-hello-world --namespace=hellow-world shows the application was exposed on an internal address; I am guessing this might have something to do with the cluster being configured to attach only to the private network. Either way, it is configured for ClusterIP; I would rather have it attach to load balancers at the edge of the GCP network.
Fixing GKE Ingress
Ah! Turns out that in hello-world/values.yaml, ingress.enabled was defaulted to false. My bad. Hmm, that alone was not enough:
:> helm package hello-world && helm upgrade --namespace hellow-world dev-hw hello-world-0.1.0.tgz
Successfully packaged chart and saved it to: /Users/mark/wc/xp/virta-system/hello-world-0.1.0.tgz
UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: Ingress.extensions "dev-hw-hello-world" is invalid: spec: Invalid value: []extensions.IngressRule(nil): either `backend` or `rules` must be specified
Error: UPGRADE FAILED: failed to create resource: Ingress.extensions "dev-hw-hello-world" is invalid: spec: Invalid value: []extensions.IngressRule(nil): either `backend` or `rules` must be specified
Looks like I have to set more values to configure the routing: ingress.hosts[0].paths[0] needs to be set to /* in order for the deploy to go through. kubectl describe ingress --namespace=hellow-world dev-hw-hello-world then yielded a complaint that the service is exposed via ClusterIP but should be either NodePort or LoadBalancer. Sandeep Dinesh recommends LoadBalancer; however, I do not fully understand his claim that one has to pay for each IP address exposed, but I think that is related to my lack of understanding. Setting service.type to LoadBalancer results in an error.
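At this point the relevant slice of hello-world/values.yaml looks roughly like this (a sketch; chart-example.local is the scaffold’s placeholder host, which I am assuming was left in place):
service:
  type: LoadBalancer  # was ClusterIP
  port: 80
ingress:
  enabled: true
  hosts:
    - host: chart-example.local  # assumption: the generated placeholder
      paths:
        - /*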
kubectl describe ingress --namespace=hellow-world dev-hw-hello-world shows it lacked sufficient privileges to run the command gcloud compute firewall-rules create k8s-b9e14962e2ec624b-node-http-hc --network <network-project>-vpc --description "{\"kubernetes.io/cluster-id\":\"<cluster-id>\"}" --allow tcp:10256 --source-ranges 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 --target-tags <client-project-id>-node --project <network-project>-network. Running this command myself allowed me to verify via nc -vvv <ip-from-describe> <port> after letting the ingress update.
Perhaps it would be better to script ingresses from Terraform in the future. The source-ranges are unfortunately outside of the network, which is a bit disappointing. Another time I will have to find a way to bind a load balancer within the network’s private ranges.