I proxy my personal Jenkins instance through a system nginx proxy. This normally works like a charm and it’s great! However, when I go to perform updates there is one noticeable problem: nginx errors out because there are no healthy upstream instances the instant Jenkins redirects my browser for the update. nginx appears to have the queue directive for the upstream block, however the docs claim it’s only available in the commercial version (NGINX Plus). Let’s try it and find out!
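For reference, this is roughly the shape of what I was hoping would work, a minimal sketch assuming Jenkins listens locally on port 8080 (the upstream name and values here are illustrative, not anything special); the queue directive is the commercial-only part:

```nginx
upstream jenkins {
    server 127.0.0.1:8080;

    # NGINX Plus only: if no server can take the request right now,
    # hold up to 100 requests for 70s instead of failing immediately.
    queue 100 timeout=70;
}

server {
    listen 80;
    location / {
        proxy_pass http://jenkins;
    }
}
```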

Well, bad news. It failed. There are a few other interesting directives I’ve apparently already tried: proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout. First up to investigate is proxy_connect_timeout. I had set it to a unitless value, so I assumed it meant seconds, and it turns out seconds are indeed nginx’s default unit. I’ll spell the unit out so next time I won’t have to wonder if I messed up there. Both proxy_send_timeout and proxy_read_timeout only apply to established connections, so they don’t help while Jenkins is down entirely. Well, it’s looking like nginx isn’t really going to work for this use case. If I were doing this for a commercial project I probably wouldn’t mind paying, however this is a personal instance of nginx for my toy projects.
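Here’s the proxy block with the units spelled out, a sketch of what I mean rather than my exact config (the values are illustrative):

```nginx
location / {
    proxy_pass http://jenkins;

    # Explicit units so there's no guessing later; a bare number means
    # seconds anyway. proxy_connect_timeout only covers the handshake
    # (and the docs note it can't exceed 75s).
    proxy_connect_timeout 60s;

    # These two only kick in on an already-established connection,
    # so they do nothing while Jenkins is down for an update.
    proxy_send_timeout    90s;
    proxy_read_timeout    90s;
}
```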

It’s been a while since I’ve set up or managed an HAProxy environment, however I thought it was worth a gander. Turns out HAProxy is still relatively awesome, however they haven’t really kept up with HTTP/2, and WebSockets look interesting to implement. I’ll have to return to this problem in the future. For now I’ll just accept the strange behavior from nginx.
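If I do circle back, something like the following HAProxy backend is where I’d start: a rough, untested sketch (names, ports, and timeout values are all guesses on my part). The idea is to let connect retries ride out the restart window instead of erroring the moment the backend vanishes.

```haproxy
defaults
    mode http
    timeout client  60s
    timeout server  60s
    timeout connect 5s

frontend jenkins_fe
    bind *:80
    default_backend jenkins_be

backend jenkins_be
    # Retry the connect repeatedly while Jenkins restarts
    # (HAProxy waits ~1s between attempts), rather than
    # returning an error on the first refused connection.
    retries 10
    option redispatch
    server jenkins 127.0.0.1:8080 maxconn 32
```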