If there is one match made in pain on this earth then it would definitely be building iOS applications on Travis CI. Travis has its own share of problems; Fastlane, however, is very opinionated and often fails at my primary use for it: interacting with Apple’s services on my behalf. I’m very grateful they often fix problems they encounter before I even know there are problems, however not being able to pin dependencies kind of bothers me.

So I’ve tried to automate the Fastlane upgrade a few times. Should be a piece of cake, no? bundle update fastlane. NOT! Turns out Travis installs all bundles under the deployment profile, which pins the install to the committed Gemfile.lock. Thinking about maybe just passing the development flag? Nope! Let’s see if it accepts a non-existent --without group. I settled on --without not-travis-ci, which passed the initial install; presumably supplying my own bundler arguments replaces the default deployment flag, and excluding a group that doesn’t exist is harmless. Now I’m just waiting on the results of the build. Unfortunately this will take about an hour. More of the hurry up and wait stuff.
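
Spelled out as shell, the workaround amounts to overriding the bundler arguments so deployment mode never kicks in. A minimal sketch, assuming my reading of Travis’s Ruby defaults is right:

    # Travis effectively runs `bundle install --deployment` when a
    # Gemfile.lock is committed, and deployment mode refuses lockfile changes.
    # Supplying our own arguments with a no-op group exclusion avoids that:
    bundle install --without not-travis-ci
    # ...after which updating a single gem can actually change the lockfile:
    bundle update fastlane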

A majority of the build time is spent rebuilding dependencies. I would really like to bring in something like Carthage so we can build the dependencies once and cache their artifacts. This would hopefully get the project artifact build time below my desired maximum of 10 minutes.
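
If I understand their docs right, the shape of it might be as simple as the following; --cache-builds exists in newer Carthage releases, and the Carthage directory would need to be cached between CI runs:

    # Build dependencies once and reuse the prebuilt frameworks on later
    # runs when nothing in the dependency graph has changed.
    carthage bootstrap --platform iOS --cache-builds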

After reading through their page, it looks like Carthage isn’t as close to Maven as I’d hoped in terms of producing reusable artifacts which may be archived in a repository manager like Sonatype Nexus. The project definitely looks like it could support that kind of setup eventually though. I’ll investigate more when I’ve got a higher throughput connection than my phone.
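
If it does pan out, I’d guess the flow looks something like building and archiving a framework, then hosting the zip somewhere like Nexus. A hypothetical sketch; MyFramework is a placeholder name:

    # carthage archive zips up a built framework for distribution;
    # where the zip ends up (e.g. a Nexus raw repository) is on you.
    carthage build --platform iOS MyFramework
    carthage archive MyFramework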

Subnet musings

While trying to get our app cluster connected to Vault and Consul, I ran into a problem several times with the number of network access control list (NACL) rules. At the core of the problem is the need to specify one rule per subnet; our current setup of one subnet per availability zone means that in the worst case we would need to specify four rules per subsystem you need to talk with. At the limit of twenty rules you require at least one for your default rule and one to blacklist any subnet you haven’t explicitly allowed, leaving you with 18. Any service would have a maximum of four other services it could network with, especially since ephemeral ports also have to be taken into account.
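
The arithmetic, spelled out:

    # 20 rules per NACL, minus the default rule and the catch-all deny:
    echo $(( 20 - 2 ))   # => 18 usable rules
    # Each peer subsystem costs one rule per AZ subnet, four in the worst case:
    echo $(( 18 / 4 ))   # => 4 peer services, before ephemeral-port rules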

Walking through the construction of a different design using isolated subnets, I have the following proposal for the network under IPv4 using the 10/8 network. The second octet would be used for the AWS region. Really you only need 9 today, so it would be sufficient to designate only the top 4 bits of the octet to the region, giving a maximum of 16. In theory you could reserve the next four bits per availability zone, giving you a maximum of 16 per region, however I really like the idea of grouping these by service, partially because you’ll hit the connectivity wall again.
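
To make the bit twiddling concrete, a sketch with a hypothetical region index:

    region=1                          # region index, 0-15
    octet2_base=$(( region << 4 ))    # region lives in the top 4 bits of octet 2
    echo "10.${octet2_base}.0.0/12"   # => 10.16.0.0/12, everything in region 1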

We could use the rest of the second octet for the service identifier, however most systems will probably evolve to provide more than 16 services. As of now I can foresee a network having six easily. This feels pretty reasonable actually, depending on your requirements. If you isolate each microservice within your system you will blow through this pretty quickly though. In the current AWS region setup you only have up to four availability zones per region, meaning you could get away with only two bits for the zones. Given us-east-1 goes through e at this point though, I would recommend you use at least 3 bits for the netmask. I’m wondering if 4 would be even better, just for symmetry and future proofing.
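
Combining region and service into the second octet then looks like this (indices again hypothetical):

    region=1; service=2
    octet2=$(( (region << 4) | service ))
    echo "10.${octet2}.0.0/16"   # => 10.18.0.0/16, service 2 in region 1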

So far we stand with the following:

+--------------------------------+--------------------------------+--------------------------------+--------------------------------+
| Octet 1                        | Octet 2                        | Octet 3                        | Octet 4                        |
+--------------------------------+----------------+---------------+----------------+---------------+--------------------------------+
| Private Network (10/8)         | Region         | Service       | AZ subnet      | Host Group?   | Host                           |
+--------------------------------+----------------+---------------+----------------+---------------+--------------------------------+
| 8 bits                         | 4 bits         | 4 bits        | 4 bits         | 4 bits        | 8 bits                         |
+--------------------------------+----------------+---------------+----------------+---------------+--------------------------------+

This would produce 10.x.0.0/16 netmasks for individual services on ingress and egress, shrinking the rules needed per peer from one per AZ subnet down to a single rule. Effectively we would have an upper bound of 18 services that each service may communicate with as a result of the NACL limits. A drawback to this would be sharing services across regions. In truth I don’t think this will be a problem as the system grows; we should only have a subset of systems crossing the regional boundaries.
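
Drilling one level down shows what that single /16 rule replaces, keeping the same hypothetical indices:

    region=1; service=2; az=0
    octet2=$(( (region << 4) | service ))
    octet3=$(( az << 4 ))                # AZ in the top 4 bits of octet 3
    echo "10.${octet2}.${octet3}.0/20"   # => 10.18.0.0/20, one AZ of the service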

The Host Group? field could have some interesting uses. It could be used to version particular implementations and migrations, allowing one to know more about the launch configuration. Otherwise you could just fold it into a 12-bit host section.
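
The size trade-off in numbers:

    # With a 4-bit host group, each group holds an 8-bit host field:
    echo $(( 2 ** 8 ))    # => 256 addresses per host group
    # Folding the group into the host field yields a flat 12-bit space:
    echo $(( 2 ** 12 ))   # => 4096 addresses per AZ subnet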

In making the above chart, Vim came in handy: R enters replace mode, which allows one to overwrite characters in place. Much less tedious than deleting a certain number of characters, entering the info, then double checking the formatting.