This month has been exhausting, both professionally and personally. I probably won’t write about the professional stuff directly; the personal side I’ll reflect on at a different time.

Anyway, onto the programming. First up is figuring out how to log ECS host data into CloudWatch. From my reading so far this should be as simple as shipping a handful of files from disk directly into CloudWatch. I’ve got to admit I’m not particularly a fan of what I’ve seen in CloudWatch so far; the logs are not digestible by a human without additional tools. The search feature smooths over some of the issues, but not many.

Official Amazon documentation makes this sound fairly easy. The suggested policy seems really lax, so I’ll need to look into tightening it up. As it currently stands the open ruleset is great because it allows all contained Docker systems to be logged; I should probably change it to be scoped to the specific instances. Installation of the additional file will be thrown into user_data. roylines seems to have a fairly solid example with a lot of bells and whistles. We don’t need a lot of the fanciness which exists in there yet. I’ll have to check out ruxit and sysdig to see what they’re about.
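Roughly what I have in mind, as a sketch rather than the final thing: an inline user_data that installs and starts the agent on an Amazon Linux ECS host. The resource name, AMI variable, and instance type below are placeholders.

```hcl
# Sketch: ECS host launch configuration that installs the CloudWatch Logs
# agent at boot. Names and the AMI variable are made up for illustration.
resource "aws_launch_configuration" "ecs_host" {
  name_prefix   = "ecs-host-"
  image_id      = "${var.ecs_ami_id}"   # hypothetical variable
  instance_type = "t2.medium"

  user_data = <<EOF
#!/bin/bash
# Install and start the awslogs agent (Amazon Linux).
yum install -y awslogs
service awslogs start
chkconfig awslogs on
EOF
}
```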

The modifications require a separate file to be written to disk, specifically /etc/awslogs/awslogs.conf. This greatly increases the complexity from the 2-line user_data field I was using. Time to learn how to use Terraform’s template_file. In the future I should really read through the cloud-init-config docs. Looks like the secret sauce is ${file("${path.module}/user-data.sh")} to use files in the current module. The way strings are constructed in Terraform’s EL-style interpolation expressions still trips me out.
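For my own future reference, the bit that matters looks something like this; it’s the same placeholder launch configuration as above, just with the boot script pulled out into a file that ships with the module (the file name is whatever the real script ends up being called).

```hcl
# The user_data script now lives next to the module instead of inline.
resource "aws_launch_configuration" "ecs_host" {
  name_prefix   = "ecs-host-"
  image_id      = "${var.ecs_ami_id}"   # hypothetical variable
  instance_type = "t2.medium"

  # path.module resolves relative to this module, not the working directory
  user_data = "${file("${path.module}/user-data.sh")}"
}
```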

Hmm, apparently the template_file resource is deprecated. Looks like I can just swap out resource "template_file" for data "template_file". On one hand it’s great that they force a boundary at the template file; on the other it’s super annoying. It makes the template generic, but I don’t have a generic need here; it’s specific to this one case. Ah well. That was relatively easy; I was expecting bigger problems. Done.
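Jotting down the shape of the swap so I don’t have to look it up again; the template file name and vars below are made up.

```hcl
# Same template, now declared as a data source rather than a resource.
data "template_file" "user_data" {
  template = "${file("${path.module}/user-data.sh.tpl")}"

  vars {
    log_group = "${var.ecs_log_group}"   # hypothetical variable
  }
}

# ...and the reference changes from template_file.user_data.rendered to:
#   user_data = "${data.template_file.user_data.rendered}"
```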

Next up is exporting the bastion host logs. Unfortunately I’m reading AWS’s guide on bastions well after a coworker has already set the thing up. The EC2 Linux AMI uses rsyslog, so its configuration is stored at /etc/rsyslog.conf. By default SSH activity is logged to /var/log/secure. Security policies are really important.
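Shipping that file should just be one more stanza in /etc/awslogs/awslogs.conf on the bastion; roughly the following, with the log group name invented for the example.

```ini
# additional stanza for the awslogs agent; log group name is a placeholder
[/var/log/secure]
file            = /var/log/secure
log_group_name  = bastion-secure
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```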

Time to start looking into locking down the CloudWatch Logs permissions to their specific intent. Target resources are hidden in the documentation. Looks like arn:aws:logs:region:account-id:log-group:log_group_name is the format, with account-id and log_group_name being variables. Turns out that is just for the logs:CreateLogGroup, logs:CreateLogStream, and logs:DescribeLogStreams actions. For logs:PutLogEvents one needs arn:aws:logs:${aws-region}:${account-id}:log-group:${log-group-name}:log-stream:*, where aws-region may be either the region name or *, account-id is the account, and log-group-name is the name of the log group. You can pare down the target even further, restricting which log streams may be written to, however that would remove the per-host logging. In my tests this didn’t interfere with the ECS logs at all when applied to the hosts, which is what I hoped for.
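Written out as Terraform, the tightened-up policy would look something like the sketch below; the role reference, account id, and log group name are all placeholders, not the real values.

```hcl
# Sketch of the scoped-down CloudWatch Logs policy for the hosts.
resource "aws_iam_role_policy" "cloudwatch_logs" {
  name = "cloudwatch-logs"
  role = "${aws_iam_role.ecs_host.id}"   # hypothetical role

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:ecs-hosts"
    },
    {
      "Effect": "Allow",
      "Action": ["logs:PutLogEvents"],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:ecs-hosts:log-stream:*"
    }
  ]
}
EOF
}
```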

New SSH option for root login I’ve never encountered before: forced-commands-only, which allows root access over public key authentication only when a forced command is specified for the key. Remember to restart sshd after changing the configuration to pick it up.
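A minimal sketch of how the two halves pair up, with the forced command and key obviously made up:

```
# /etc/ssh/sshd_config
PermitRootLogin forced-commands-only

# ~root/.ssh/authorized_keys -- this key can only ever run the forced command
command="/usr/local/bin/run-backup" ssh-rsa AAAA... backup@bastion
```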