Workflow is difficult, especially when you have responsibilities to provide internal services. While working on a particularly long task which was considered higher priority than most other things, I accumulated a backlog of work to be reviewed. The build engineering work has never really fallen within any kind of sprint; it's one of those critical services that must always be available when someone would like to use it. Normally I can pivot for the brief period required to resolve an issue and pop back to my prior work, but this time was definitely different. There is work which is considered 'lower' priority, but let's face it: there is never really time for those tasks.

Travis CI isn’t building SwiftyPaperTrail

First up on my list is to restore the builds for SwiftyPaperTrail! I tried to get all fancy with matrix builds and the like, which is my current hypothesis as to why the builds are failing. Travis has the jobs running for hours before they are killed, so I'm really shooting in the dark here. Hopefully pinning the job to the xcode8.2 image will resolve the problem. They've got a backlog of open source OSX jobs, so I'm assuming it will probably take 30 minutes to get the results. I'm a little jealous that their entire company has time off until the 6th.

Documenting SwiftyPaperTrail

Next up on the queue is updating the documentation for SwiftyPaperTrail. The original goals still stand; however, this project has morphed significantly into a syslog producer and consumer. I'm kind of hoping to fork out the RFC packet handling and leave the framing logic in. I need to remember to tag important versions. I feel like SwiftyLogger provides a good outline on how to deal with documentation for this project, plus it should fit the culture of the logging system. Fun fact: CocoaPods' file examples use the ruby language for syntax highlighting. Hmm, I realized some of the capitalization is messed up. I'll need to fix that at some point in the future.

Returning to Travis

Getting back to Travis CI, the build failures are again because Travis CI doesn't contain the most recent CocoaPods repository. I've inserted a command to force the pods to update, so hopefully that resolves everything. It really helps if I read the actual output so I can figure out which phase the errors originate from. Derp! It helps if I ask for the bundle exec pod repo update command to be run in the before_install phase instead of the before_script phase!

Hmm, got the results back again; this time the error is that a locked gem is not installed. Oops, forgot to mark the file .travis/before_install as executable. Easy enough to fix. xcodebuild -workspace 'example.xcworkspace' -list is awesome for figuring out which schemes are available for use.

Understanding Swift’s exception model

Realm is the first heavy user of exceptions that I've come across in the Swift world, or at least in the binding I'm seeing here. There are various operators whose behavior I can probably infer, such as try!, try?, and catch. There are some others I should probably verify, such as do { block }, which may have different semantics than I expect. Exceptions seem to behave differently in each of the many languages I've worked with. Straight from the horse's mouth: Apple has put together a fairly long article on the subject.

Interestingly, they use the marker interface technique, with an empty protocol named Error to indicate a type is throwable. They really do like pushing enumerations for everything; rather interesting. Kudos to Apple for being straightforward about constructing custom exception types. I've come across many languages which try to hide how to create them. There's an interesting note under Handling Errors which states they don't do stack unwinding but have the same performance characteristics as a return. I wonder what they do then. From my understanding, in many cases the callee must clean up their locals (usually an adjustment of the stack pointer) and the caller or callee is responsible for destroying the stack frame. I wonder how unwinding the stack would be different from this basic operation?
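
As a quick sketch of that marker-protocol approach (the error type and its cases here are my own invention, not from the article):

```swift
// Conforming to the empty Error protocol is all it takes to make
// a type throwable; enumerations fit naturally, and associated
// values can carry extra context about the failure.
enum UploadError: Error {
    case connectionLost
    case serverRejected(statusCode: Int)
}

let failure: Error = UploadError.serverRejected(statusCode: 503)
```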

Interesting: by default exceptions don't propagate through the stack. You must mark the function with the throws attribute to allow propagation through the frame, in addition to using the keyword try in front of the function which may raise the error. If one desires to intercept an exception there is the do { block } catch-patterns { result } setup. This is closer to the traditional try { } catch predicate {} finally {} construct. In the do-block you still use the prefix operator try to indicate where the exception will be raised from, with multiple allowed. The catch patterns allow for your standard Swift pattern matching. There doesn't appear to be an associated finally block, so expect redundant clean-up code in the standard usage of this construct. They did provide us with some syntactic sugar for functions which return a value or throw. The try? operator will coerce the result to nil if the operand raises an exception. If you believe an error condition cannot occur at runtime, you can use the try! prefix operator to cause a runtime error if the target function does raise an error.
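
A minimal sketch pulling those pieces together (parseCount and ParseError are hypothetical names of mine, not from the article):

```swift
enum ParseError: Error {
    case malformed
}

// The throws attribute lets the error propagate to the caller.
func parseCount(_ text: String) throws -> Int {
    guard let value = Int(text) else { throw ParseError.malformed }
    return value
}

// do/catch with Swift pattern matching on the thrown value.
do {
    let n = try parseCount("42")
    print(n)                          // 42
} catch ParseError.malformed {
    print("not a number")
}

// try? coerces a thrown error into nil.
let maybe = try? parseCount("oops")   // nil

// try! traps at runtime if the call actually throws.
let sure = try! parseCount("7")       // 7
```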

The approach the docs recommend for the standard finally case is to use defer blocks, which is something I don't recall using before. They look pretty cool, effectively tapping into the lifecycle of the frame and running upon exit from it. I'll have to keep an eye out for opportunities to use them. Makes sense, but they may cause resources to be retained longer than the risky block you are trying to insulate. There may be an argument for breaking down the method per the Single Responsibility Principle at that point though.
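
A small sketch of defer standing in for finally (the file-reading helper is purely illustrative):

```swift
import Foundation

func readConfig(at path: String) -> String? {
    guard let handle = FileHandle(forReadingAtPath: path) else { return nil }
    // defer runs on every exit from the frame -- normal return
    // or thrown error -- which makes it the de facto finally block.
    defer { handle.closeFile() }
    let data = handle.readDataToEndOfFile()
    return String(data: data, encoding: .utf8)
}
```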

Swift and Unsafe pointers

I’ve got an interesting problem which came up: the returned NSData is causing an assertion failure. The value is created via the NSMutableData constructor, which then has its buffer modified. Never mind, I was chasing rabbits there. Turns out the cause was SecItemAdd failing. I’m guessing the test device’s state, with low memory and low disk space, is the real cause. Easy enough to push something to log the actual failure code.
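
Something along these lines surfaces the status code; the query contents here are made up for illustration:

```swift
import Foundation
import Security

// SecItemAdd returns an OSStatus; logging it instead of
// discarding it is what reveals the real failure.
let query: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrAccount as String: "example-account",
    kSecValueData as String: Data("secret".utf8)
]

let status = SecItemAdd(query as CFDictionary, nil)
if status != errSecSuccess {
    // On macOS, SecCopyErrorMessageString turns the code into readable text.
    let message = SecCopyErrorMessageString(status, nil) as String? ?? "unknown"
    print("SecItemAdd failed: \(status) (\(message))")
}
```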

Crossing off version enforcement from the backlog

We’ve structured our git repository to reflect the release states we have to support. As part of this we have three logically active branches which exist in origin, plus a number of short-lived unstable branches. The three long-lived logical branches are: development, release candidate, and released. These map onto the following branch names:

  • development -> master (future release)
  • release candidate -> release-v${next_version_number}
  • hotfix version -> release-v${last_version_number}

Although we have other release pipelines, our primary target is the Apple App Store. Since we can realistically only have a single version of the application in the App Store, that is considered the last_version_number. The code we are stabilizing for release is next_version_number, using the minor version. For example, if we have 1.815.0 in the App Store then we have 1.816.0 on the release candidate branch. Master is really the next release candidate, intended for unstable changes such as new features, restructuring, or dependency upgrades. Once we are ready to release a candidate, it becomes the version number with the tag v${version_number}. Any hotfixes which need to be deployed prior to the next release will get a micro version bump.
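
Those bump rules can be sketched as a toy model (not our actual tooling):

```swift
struct Version: CustomStringConvertible {
    var major: Int, minor: Int, micro: Int
    var description: String { "\(major).\(minor).\(micro)" }

    // Cutting a new release candidate bumps the minor version.
    func nextReleaseCandidate() -> Version {
        Version(major: major, minor: minor + 1, micro: 0)
    }

    // A hotfix to the shipped version bumps the micro version.
    func hotfix() -> Version {
        Version(major: major, minor: minor, micro: micro + 1)
    }
}

let appStore = Version(major: 1, minor: 815, micro: 0)
print(appStore.nextReleaseCandidate())  // 1.816.0
print(appStore.hotfix())                // 1.815.1
```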

By default all stable code should be included in future releases. In my entire career I’ve rarely encountered issues where stable code from a hotfix shouldn’t be rolled into the next release. When I was originally setting up the release pipeline a few months ago we were merging manually. This meant the worst-case merge scenario was three merges: the working branch into the hotfix version; the hotfix version into the release candidate; the release candidate into master. This was time consuming, but we didn’t have too many hotfixes in general. The pipeline was set up to automatically build release candidates which were aware of their place in the lifecycle. We modified these to remove the barrier, easing the most common case of merging work from the release candidate into master.

It works pretty well. There are bumps from time to time. Every once in a while we somehow get a merge from master into the release candidate, or some other backwards merge like that. These are generally accidents and are either backed out or we just move forward with them. The only pain point I’ve had supporting this is dealing with unexpected version numbers. Unlike most product departments, ours is reasonable about letting us control the version number. We try not to skip numbers, and the micro version is monotonically increasing with our sprints, generally. It does create confusion for everyone since master is really just a placeholder for the second-next version to ship. Really, as a roughly once-a-month goof-up, it’s not that bad. It was annoying enough, though, to automate a version check ensuring the given commit is actually on the branch it’s attempting to build a release candidate for.


Martin Fowler wrote on Serverless Architectures, so I need to find some time to read that. I also need to look into the details of security set-key-partition-list -S apple-tool:,apple: -s -k password to understand it in the future.