First up is alleviating the memory pressure. The application seems to be hitting 100% memory allocation fairly quickly. In the GUI there’s a way to specify hard and soft limits; for now I’m assuming I’m setting the hard limit, and for our application I’ve upped it to 256M, which might be enough if I give it more time to start up. It also turns out that by default the only acceptable health check response code is 200 OK. I’ve overridden this to accept the normal range of response codes, though I’ll probably revert that back to the actual liveness check we have.
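For anyone who’d rather script it than click through the GUI, the same success-code change should be doable from the CLI. This is only a sketch, assuming the health check lives on an ALB target group (the ARN is a placeholder):

```sh
# Assumption: the health check is attached to an ALB target group.
# Accept the normal 2xx/3xx range instead of only 200:
aws elbv2 modify-target-group \
  --target-group-arn "$TARGET_GROUP_ARN" \
  --matcher HttpCode=200-399
```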

Updating the memory to 512M and listing the additional response codes seems to have done the trick. Now I’m just getting 502 Bad Gateway errors, which doesn’t bode well for me. Checking the logs is rather interesting: there is nothing. I feel like the system is missing the standard error logs. Booting up a similar image locally, I’ve found the current container configuration consumes approximately 900MB, which is a problem since I’m running t2.micros with only 1GB of RAM each. I’m also on a deadline, so slimming the containers down is more work than it would be to just increase the instance size.
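For reference, measuring the footprint locally is a one-liner with docker stats; the container name here is a placeholder:

```sh
# One-shot snapshot of memory and CPU usage for a running container;
# "app" stands in for whatever the local container is actually named.
docker stats --no-stream app
```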

Turns out half my woes were from specifying the wrong port and attempting to connect to a webpack service. Now I need to excise the webpack service from the build pipeline and I should be good to go! Time to figure out how to do that. It turns out I need to run RUN python manage.py collectstatic --noinput and ./node_modules/.bin/webpack --config capn/webpack.config.js, in that order, to resolve the issue (sketched below). Now that local tests are passing, I need to get Travis to finish the job.
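Roughly, the relevant Dockerfile lines end up looking like this; splitting them into two RUN instructions is my choice, and a single chained RUN would work just as well:

```dockerfile
# Collect Django static files, then build the webpack bundles,
# in the order described above.
RUN python manage.py collectstatic --noinput
RUN ./node_modules/.bin/webpack --config capn/webpack.config.js
```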

I’ve been having some issues with getting the AWS CLI tools installed and available to the scripts. Installing through the system packages resulted in a really old version of the tools, something like 1.2. Using the curl command piped to python resulted in the tool not being found. According to S3 Deployment with Travis, pip install awscli should work. Bingo! Worked like a charm.
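The fix itself is a one-liner in the Travis build (before_install is one reasonable place for it), with a version check to confirm the CLI actually landed on the PATH:

```sh
# Install the AWS CLI via pip, as the S3 Deployment with Travis guide suggests,
# then sanity-check that the tool is resolvable:
pip install awscli
aws --version
```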