k8s: LVM Persistent Volumes
Mark Eschbach
Through a series of unfortunate CI pipeline failures I eventually discovered that CouchDB exposed a race condition in GlusterFS. Although I am not entirely sure whether the consistency error from the race condition is an artifact of assumptions made by Couch or by Gluster, in the end it does not matter. I really need to deploy Couch in a stable manner, and to that end I need a reliable method to deploy persistent volumes.
At the disk management level I had set up LVM for volumes. Over the years this has been a reliable mechanism, despite upgrades, changes in use cases, and other time-based drift. I would love to just provision from that pool. From a process perspective, one would do the following to provision a new volume (sketched in shell after the list):
- Provision a new logical volume from a volume group
- Format the logical volume with a given file system
- Mount the volume
- Create a new local PersistentVolume in Kubernetes for the mount
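To make that concrete, here is a minimal shell sketch of the manual flow. The names are assumptions for illustration only: a volume group vg0, a logical volume couchdb-data, ext4, a mount under /mnt/disks, and a node named node-1. Note that a local PersistentVolume must pin itself to its node via nodeAffinity.

```shell
# Assumed names: volume group "vg0", logical volume "couchdb-data",
# ext4, mounted under /mnt/disks, node hostname "node-1".
lvcreate --name couchdb-data --size 10G vg0
mkfs.ext4 /dev/vg0/couchdb-data
mkdir -p /mnt/disks/couchdb-data
mount /dev/vg0/couchdb-data /mnt/disks/couchdb-data

# Expose the mount to Kubernetes as a statically provisioned local PV.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchdb-data
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/couchdb-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]
EOF
```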
Automation
Someone has already created a provisioner to automate this. The provisioner uses nsenter to escape the container and manage the local file system. Overall the work looks pretty decent. My only real objection is modifying the underlying systemd install on the host to remount the volumes on restart. Instead I would have advocated for inspecting the Kubernetes state and mounting the expected volumes when the node comes back online, possibly running as a DaemonSet.
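For the curious, the nsenter trick looks roughly like this. This is a sketch of the general technique, not the provisioner's actual code: it assumes a privileged pod running with hostPID: true, so the container can target the host's PID 1 and execute LVM tooling inside the host's mount namespace.

```shell
# From a privileged pod with hostPID: true, enter the host's mount
# namespace (via PID 1) and run an LVM command against the host's
# volume group. vg0/couchdb-data are assumed names from the example above.
nsenter --target 1 --mount -- lvcreate --name couchdb-data --size 10G vg0
```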
The provisioner runs as a container watching for storage claims that match its criteria. To actually create a volume, a new privileged pod is run to manage the logical volumes; the same technique is used for deletion. Very awesome actually.
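From the consuming side, a claim targeting such a provisioner would look something like the sketch below. The provisioner string and class name here are hypothetical placeholders, not the actual project's identifiers.

```shell
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lvm-volumes
provisioner: example.com/lvm   # hypothetical; substitute the project's real name
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchdb-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: lvm-volumes
  resources:
    requests:
      storage: 10Gi
EOF
```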