Hopefully! The flow logs finally came through for me; now to see if they provide some actionable information.

Nope. Nope, they didn’t. I can’t even find the IP address of the node that can’t connect to the internet. Sigh.

Perhaps I’m taking entirely the wrong approach. Perhaps when I create the host AMI I should just preinstall all of the software. I’m not entirely sure how I feel about this. In reality this software shouldn’t need to contact the internet anyway. The downside is that updates will require building new AMIs and retiring old ones. At this point I fear this might be the most time-efficient approach.
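
If I do go that route, the baking step itself isn’t the hard part. A minimal sketch of the lifecycle, with placeholder IDs and names:

```
# Launch a base instance, run the setup script on it, then snapshot it
# into a reusable AMI. Instance ID and names are placeholders.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "host-with-service-preinstalled" \
    --description "Host AMI with the service baked in"

# Updates mean baking a new AMI and retiring the old one.
aws ec2 deregister-image --image-id ami-0123456789abcdef0
```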

Hmm, my search for creating an AMI using a setup script turned up turtles :-(. On the plus side, I figured out part of the problem: traffic now transits to the gateway but fails to go any farther. Now I have to figure out what I did.
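
For anyone retracing this, a sanity check along these lines is roughly where I’m poking; the subnet ID is a placeholder:

```
# Does the private subnet's route table actually have a default route to
# the NAT/internet gateway? Subnet ID is a placeholder. If nothing comes
# back, the subnet is falling through to the VPC's main route table.
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
    --query "RouteTables[].Routes[]"
```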

Well, that was painful getting the Network Access Control Lists set up. Part of the problem is that the information density is low and it’s coupled with a high level of indirection. I’m half tempted to write a quick meta language to express the concepts in a far terser and more understandable way. Perhaps another day.
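
For reference, the kind of rules involved look something like the sketch below (ACL ID, rule numbers, and CIDRs are placeholders). One detail worth calling out is that NACLs are stateless, so return traffic on the ephemeral ports needs its own inbound rule:

```
# Allow outbound HTTPS. 6 = TCP. IDs and rule numbers are placeholders.
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --rule-number 100 --protocol 6 --port-range From=443,To=443 \
    --cidr-block 0.0.0.0/0 --egress --rule-action allow

# NACLs are stateless: replies come back on ephemeral ports and need an
# explicit inbound allow of their own.
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --rule-number 100 --protocol 6 --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 --no-egress --rule-action allow
```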

On to the next problem: now that I can get into the machine I need to set up the service. There is a bootstrapping issue staring me in the face that I’m going to ignore until I can get multiple nodes up. Since the application is written in Go it’s not so bad to get it up and running; mostly just copying a file and creating a configuration. Or maybe I spoke too early. EC2 Linux, based off CentOS, still uses the old RC-style init system, with all the pain that goes along with it. All of the documentation is for Ubuntu, plus I’m far more familiar with SystemD.
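
For the record, the old RC-style route means writing something like the init script sketched below. The binary path, flag, and service name are made up for illustration:

```
#!/bin/sh
# /etc/init.d/myservice -- minimal SysV-style init script for a Go binary.
# Paths, flags, and names are placeholders.
# chkconfig: 2345 90 10
# description: my Go service

DAEMON=/usr/local/bin/myservice
PIDFILE=/var/run/myservice.pid

case "$1" in
  start)
    echo "Starting myservice"
    nohup "$DAEMON" -config /etc/myservice/config.json >> /var/log/myservice.log 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    echo "Stopping myservice"
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```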

Time to see if I can get it up and running in a reasonable amount of time. Most of the configuration should remain the same. I wonder if I’ll need to pull in Canonical’s cloud monitoring system. They’ve made it easy to find Ubuntu VMs. I’m not sure what the difference between hvm:ebs-ssd and hvm:ebs is, since I believe the general purpose (gp2) EBS volumes are SSDs. The AMI ID is the same for both in my target region, so it shouldn’t make a difference.
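
The store types can at least be checked directly; something like the query below lists Canonical’s HVM images along with the root volume type each one uses (099720109477 is, as far as I know, Canonical’s publishing account; the name filter is illustrative):

```
# gp2 = general purpose SSD, standard = magnetic. Name filter is illustrative.
aws ec2 describe-images \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/hvm*" \
    --query "Images[].{Name:Name,RootVolume:BlockDeviceMappings[0].Ebs.VolumeType}"
```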

The actual conversion to an encrypted instance wasn’t too bad. The bigger problem is figuring out how to get awslogs installed now. The default instructions from AWS don’t look very promising. Looks like Mijndert Stuij has already crossed this bridge and wasn’t too far down the Google list. Not that the page ranking mechanism has ever been a reliable metric. I’m going to see if I can cheat and rely on AWS to have created the SystemD file (doubt it). So I was happily on my way to encrypting the Ubuntu AMI and spinning up some services…until Canonical let the air out of my tires. Bummer. The underlying storage isn’t copyable, which means I can’t meet the encrypted root drive requirement. Pretty silly in my opinion to run software handling secrets on an unencrypted drive.
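
The copy step itself is the one Canonical’s images refuse; a sketch with placeholder region, IDs, and names:

```
# Copy the AMI, encrypting the new snapshots along the way. This is the
# step that fails when the source image's snapshots can't be copied.
# Region, IDs, and names are placeholders.
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region us-east-1 \
    --name "ubuntu-encrypted-root" \
    --encrypted
```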

Back to taming the EC2 Linux lion. Perhaps there is a transitional package or something? Not sure if it would do weird things to the system.

Looks like Mathias Lafeldt has an interesting resiliency project using a setup similar to mine. I hadn’t considered CoreOS as a possibility. Hmm, it would give me SystemD out of the box. It’s also a bit overkill for running two services that won’t be run under Docker.

Sure! Let’s give CoreOS a go-around and see what happens. Well, oddly enough, the image boots up with messages about the file system being corrupt. I’m hoping this isn’t because I’m using encrypted volumes. I’ll have to play with this more tomorrow.
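
Assuming the corruption turns out to be a red herring, the rough plan is to hand CoreOS a cloud-config with plain SystemD units and no Docker; a sketch with made-up names and paths:

```
#cloud-config
# Rough sketch of CoreOS user-data: a plain systemd unit running the Go
# binary directly, no Docker. Names and paths are placeholders.
coreos:
  units:
    - name: myservice.service
      command: start
      content: |
        [Unit]
        Description=My Go service
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStart=/opt/bin/myservice -config /etc/myservice/config.json
        Restart=on-failure
```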