I wrote a really basic application in NodeJS to help my family track where we store our belongings. Admittedly, we have too much stuff. When we moved recently we realized there were things in storage that we could no longer locate. To solve this problem I wrote a basic NodeJS service backed by Postgres. This is the story of taking the application from a toy on my laptop to running on my production cluster at home.

The future vision of my home cluster involves Kubernetes on multiple AMD64 and ARM (Raspberry Pi) computers. For now I will have to settle for my current Docker setup. This means I have to schedule the application on a machine manually, which is easy since I only have one server right now. At this point I tend to prefer deploying applications via Docker.

Building the Docker container

At the time of writing the latest NodeJS version is 12.12.0. Easy enough to write out the basic Dockerfile for a Node application:

FROM node:12.12

WORKDIR /app

COPY package.json /app/package.json
RUN npm install --production

COPY . /app

And a .dockerignore to ensure some large things aren’t sucked into the image while developing:
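Something minimal along these lines does the job (the entries here are just the usual suspects for a Node project, not necessarily my exact list):

```
node_modules
npm-debug.log
.git
```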


Building with Jenkins Pipelines

I am still using Jenkins; I should probably take a survey of what else is around. For this project I am going to use Jenkins Pipelines. Building Docker images this way is new to me, and I am definitely still missing something in my understanding of pipelines.

pipeline {
    agent any
    stages {
        stage('Build Docker container') {
            steps {
                sh "docker build . --tag inventory:${env.BUILD_NUMBER}"
            }
        }
        stage('Test') {
            agent {
                docker {
                    image "inventory:${env.BUILD_NUMBER}"
                }
            }
            steps {
                sh "npm install"
                sh "npm test"
            }
        }
    }
    post {
        success {
            slackSend (color: '#00FF00', message: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
        }
        failure {
            slackSend (color: '#FF0000', message: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
        }
    }
}
This fails with complaints about files not being modifiable by the current user, which is a bit unfortunate. After quite a bit of debugging and searching I finally traced the behavior to the Jenkins Docker Pipeline Plugin. It turns out the CloudBees folks intentionally set up Jenkins to look up its own user ID and group ID and force all containers to run under that user. They have decided they will not fix this issue and have let updated PRs rot.

Well, I can at least use a shell. In the end Jenkins invokes a shell which drives Docker to run a command, /app/test.sh, within the container. This installs dependencies and executes the tests. If the command fails I remove the image; otherwise the image sticks around.
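The shell step looks roughly like this (a sketch: the image tag and /app/test.sh are from above, but the exact flags and echo messages are my reconstruction, wired into a Jenkins sh step that supplies BUILD_NUMBER):

```
IMAGE="inventory:${BUILD_NUMBER}"

# Run the test script inside the freshly built image.
# On failure, prune the image so broken builds never linger.
if docker run --rm "$IMAGE" /app/test.sh; then
    echo "tests passed; keeping $IMAGE"
else
    echo "tests failed; removing $IMAGE"
    docker rmi "$IMAGE"
    exit 1
fi
```

Since the container runs as whatever user the image declares, this sidesteps the plugin's forced user ID entirely.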

Next up on the list is the final deployment of the service. This was as simple as the following script:

docker stop prod__inventory || true
docker rm prod__inventory || true
docker run -d -p 666:666 -e 'CONFIG_SERVER=server' -e 'CONFIG_PATH=/prod/inventory' --name prod__inventory inventory:${env.BUILD_NUMBER}

And done.