I’ve unsealed vaults a few times already, but I’m still on my quest to get ECS containers to authenticate. It’s annoying that the certificate will need SANs for each host name the Vault clients try to contact, which is fine I guess. I’ll probably just set up a load balancer with the certificate or something. In the meantime I’ve disabled the check by setting the environment variable VAULT_SKIP_VERIFY to true. A horrible thing to do in a production system, but that is the luxury of the research stage I’m in now.
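For reference, the check can be switched off per shell session with that environment variable (a research-only stopgap):

```shell
# Research-only stopgap: tell the Vault CLI to skip TLS certificate
# verification for this shell. Never do this in production.
export VAULT_SKIP_VERIFY=true
```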

Alrighty, following their aws-ec2 auth docs I tried to add a role using something like vault write auth/aws-ec2/role/engine policies=engine, which failed because a binding parameter is required. Their documentation binds against an AMI, but I expect this AMI to be updated regularly. Looks like I can bind against the IAM role instead. For now I’ll work with the AMI though, just to simplify the current tests. Role written successfully.
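A sketch of what the role write ended up looking like; the AMI ID is a placeholder, and bound_ami_id is the binding parameter the error was asking for:

```shell
# Bind the role to a specific AMI (placeholder ID) and attach the
# "engine" policy. Needs an authenticated Vault CLI against a live server.
vault write auth/aws-ec2/role/engine \
    bound_ami_id=ami-00000000 \
    policies=engine
```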

Alrighty, now onto the next part. The template for logins is vault write auth/aws-ec2/login role=$(role) pkcs7=$(signature). Unfortunately they don’t embed the command for fetching the signature, but they do link to the AWS documentation for it. Looks like is the target URL. curl produces a lot of line breaks that mess everything up. Piping through tr --delete '\n' unfortunately resulted in BER tag length is more than available data. I verified quoting isn’t really the problem; I’m guessing the line breaks are required.

Well this isn’t promising. The top hit looks to be an underlying Go library with the issue still open. A Vault GitHub issue seems to have a clue about how to get the PKCS#7 signature in a reasonable way, with the following:

    curl -s \
    | paste -s -d ''
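To sanity-check what that pipeline does: paste -s serializes all input lines into one, and -d '' joins them with no delimiter. A quick demo with fake stand-in data, not a real signature:

```shell
# Join multiple lines into one with no delimiter between them.
# "MIIB..." stands in for real base64 PKCS#7 content.
printf 'MIIB\nfake\nlines\n' | paste -s -d '' -
# → MIIBfakelines
```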

Alrighty, we can plug that in using ./vault write auth/aws-ec2/login role=engine-role pkcs7=$pkcs7 and get the token. Once there I realized I hadn’t given the role the default permissions, and as I would expect from security software, the default is to deny. On to learning about policy documents!
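Putting the pieces together, a hedged sketch of the whole login step; the metadata URL is my assumption (the EC2 instance-identity PKCS#7 endpoint), and the role name matches the one from earlier:

```shell
# Assumed endpoint: the EC2 instance-identity PKCS#7 document.
pkcs7=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7 \
    | paste -s -d '' -)

# Exchange the signature for a Vault token.
vault write auth/aws-ec2/login role=engine-role pkcs7="$pkcs7"
```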

Vault policies seem fairly straightforward! Time to try it out. I’ve got a role name the tokens are bound to. Alrighty, never mind. I realized the documentation doesn’t cover how to associate a policy with a role like the one I’m using for EC2. Or, you know, I just completely overlooked the write-policies and policies families of subcommands. ./vault read auth/aws-ec2/role/{role-name} will give you the array of policy names the role attempts to attach. So let’s try that again!
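What that round trip looks like, assuming the engine-role name from before; the read confirms which policies the role attaches:

```shell
# Update the policies attached to the role, then confirm the result.
vault write auth/aws-ec2/role/engine-role policies=engine
vault read auth/aws-ec2/role/engine-role
```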

Going for a test drive. Oh no! The nonce troll! For now I would really like to disable the use of nonces; I’ll return to them when I can handle a greater number of moving parts. Well, either I don’t understand disallow_reauthentication or I’m not using it properly, because I keep getting client nonce mismatch. Meh. I did discover you can recover the nonce value as the superuser at auth/aws-ec2/identity-whitelist/{instance-id} for the instance. Failing that, you may also delete that path and that will remove the token.
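The superuser escape hatches, sketched with a placeholder instance ID:

```shell
# Recover the stored nonce for an instance (superuser only).
vault read auth/aws-ec2/identity-whitelist/i-00000000

# Or remove the whitelist entry entirely for that instance.
vault delete auth/aws-ec2/identity-whitelist/i-00000000
```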

Next up is seeing if I can log in through the IAM instance profile alone instead of pinning it via the AMI. Updating the role using something like ./vault write auth/aws-ec2/role/engine-role bound_iam_role_arn={arn} policies=engine resulted in an EC2 security error requiring iam:GetInstanceProfile on the client side. I’m not sure I want the containers to have that access. Using bound_iam_instance_profile_arn resulted in obtaining a token without a client error; I’ll run with this one for now.
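The variant that worked, again with a placeholder ARN:

```shell
# Bind against the instance profile rather than the IAM role, avoiding
# the client-side iam:GetInstanceProfile requirement.
vault write auth/aws-ec2/role/engine-role \
    bound_iam_instance_profile_arn=arn:aws:iam::000000000000:instance-profile/engine \
    policies=engine
```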

Now time to find out how others have built a client within ECS which authenticates. The result which repeatedly comes up? A build-it-yourself approach. Not bad, but I would really like a more integrated solution. Searching around, there doesn’t seem to be much information about it :-/. I found an interesting opinion piece about the security of the AWS Metadata Service, or rather the lack of access controls on the metadata endpoint.

Trying to follow the EC2 style is probably a mistake. First stop on my adventure to building out the service: the ECS Instances service. There wasn’t much there. Moving on, I’m really waiting for ECS signed metadata. Sounds like the Vault people would be really happy with a feature meeting those needs. I know I would! Game plan for the first iteration: build a container running a single web service which establishes its identity with Vault and reads a well-known secret.

There are a lot of resources available under The value I want to go after specifically is; but I’m also going to try exposing and to see if I can get actionable information into the container. Today I learned you pass a timeout argument to the HTTParty gem’s class methods to make it give up in under 30 seconds.

I’s gots a basic service set up to query the metadata; now time to dockerize. Modified the file from ORC. Frustratingly, I hit the port-export issue on OS X again. That is disappointing. Turns out the behavior has changed at some point and publishing on the default localhost is no longer acceptable. The container must bind to in order to be accessible. I wonder if this is a change in Docker or if it’s just specific to Docker for Mac.
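The same fail-fast idea works from the shell too; curl’s --max-time plays the role of HTTParty’s timeout (the path here is just the metadata service root, an assumption rather than the specific values above):

```shell
# Give up after 2 seconds instead of hanging on a missing metadata service.
curl -s --max-time 2 http://169.254.169.254/latest/meta-data/ \
    || echo "metadata unavailable"
```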