CocoaPod Declarations for Testing

We’ve hit the performing stage and now hopefully we’ll stay there for the duration of this task. We’ve got the primary product compiling and now we’re working on getting the tests to pass. Next time I think I would recommend starting with the tests and working backwards towards the primary product.

A problem I’m running into with the tests is the message error: missing required module 'Firebase'. It looks like the transitive dependencies aren’t being resolved for an @testable target. According to a CocoaPods issue on the subject, the claim is that it’s a defect in the pod itself. I had it working yesterday; let’s see if it compiles when I put Firebase back in as a direct dependency of the test target. While waiting for the system to compile I read the entry from November 17th by acchou, which points out more options for the inherit! directive. Time to explore some documentation!

So the docs make it pretty clear we’ve been using the wrong flags the whole time. Interesting that it worked so well up until Swift 3 and CocoaPods 1.1.1. We’ve been using inherit! :search_paths while we were intending to link against the target’s libraries. The correct invocation should have been inherit! :complete. Compilation cycles take forever sometimes. Manually adding the dependencies worked as I had expected, so I’ve replaced the inherit! argument with the :complete symbol. Now just waiting for it to build.

Getting myself into trouble: CocoaPods’ install! directive docs need some love. Why would one want a deterministic UUID for Pods? A quick Google search turned up collisions but nothing in particular about what it’s used for. Perhaps it’s project IDs.
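For reference, here’s a minimal Podfile sketch of the change. The target and pod names are placeholders for our real ones; only the inherit! line reflects the actual fix:

```ruby
# Hypothetical Podfile layout; 'App' and 'AppTests' stand in
# for the real targets.
install! 'cocoapods', deterministic_uuids: true

target 'App' do
  use_frameworks!
  pod 'Firebase'

  target 'AppTests' do
    # Was `inherit! :search_paths`, which only inherits search paths.
    # `:complete` also links the parent target's pods, so @testable
    # imports can resolve transitive dependencies like Firebase.
    inherit! :complete
  end
end
```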

After several rounds of compiling, I’ve had some fun with the compiler incrementally informing me of the updated names of methods and variables. Path calculations have changed: most notably, they have been removed from String and placed on URL, which totally makes sense. Okay, so two big problems left. Semaphore waits have become strangely wonky in how they express timeouts. So many freaking changes.
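A sketch of the two migrations in question, as I understand them (illustrative only, not our actual code):

```swift
import Foundation
import Dispatch

// Path math moved off String and onto URL in Swift 3.
let base = URL(fileURLWithPath: "/tmp/builds")
let log = base.appendingPathComponent("latest.log")  // was stringByAppendingPathComponent

// DispatchSemaphore timeouts are now expressed as DispatchTime deadlines.
let semaphore = DispatchSemaphore(value: 0)
let result = semaphore.wait(timeout: .now() + .seconds(2))
if result == .timedOut {
    print("timed out waiting for the signal")
}
```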

RVM Blues

Back to driving Travis CI crazy with my requests! Next up on the queue I’ve got the following:

$ rvm get stable --auto-dotfiles
Verifying /Users/travis/.rvm/archives/rvm-installer.asc
Warning: using insecure memory!
gpg: directory `/Users/travis/.gnupg' created
gpg: new configuration file `/Users/travis/.gnupg/gpg.conf' created
gpg: WARNING: options in `/Users/travis/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/Users/travis/.gnupg/pubring.gpg' created
gpg: Signature made Wed Nov  2 19:59:26 2016 GMT using RSA key ID BF04FF17
gpg: Can't check signature: No public key
Warning, RVM 1.26.0 introduces signed releases and automated check of signatures when GPG software found.
Assuming you trust Michal Papis import the mpapis public key (downloading the signatures).
GPG signature verification failed for '/Users/travis/.rvm/archives/rvm-installer' - ''!
try downloading the signatures:
    gpg2 --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
or if it fails:
    command curl -sSL | gpg2 --import -
the key can be compared with:
/Users/travis/.rvm/scripts/functions/cli: line 243: return: _ret: numeric argument required
The command "rvm get stable --auto-dotfiles" failed and exited with 255 during .

I resolved the issue with gpg2 --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3. Not my first choice, since I would rather not add a key manually, but it solves the problem for now. For the sake of my own sanity, I’m slowly extracting all the scripts out of .travis.yml into executable scripts in the repository. I’ve settled on naming them .travis.{target}; in this case I’ve used .travis.install.before. Fun, new, and exciting errors are popping up now! The requested device could not be found because multiple devices matched the request. Simple enough to resolve. Strange, the target device already contains OS=latest; I would really hate to pin that. Damn, that means simply specifying the OS level will not fix the issue.
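The resulting layout looks roughly like this; the script name is the one mentioned above, and the rest of the config is elided:

```yaml
# .travis.yml -- delegate each phase to a checked-in executable script
os: osx
osx_image: xcode8.2
before_install: ./.travis.install.before
```

This keeps the YAML tiny and lets the scripts be run (and debugged) locally instead of only inside the Travis environment.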

Tracking down a testing device

Time to learn more about device selection! Well, I found an awesome article on testing multiple iOS devices on Travis by Andreas Böhrnsen. Looks like an awesome technique for me to add to our configurations. From my experience trying to get py-rover to compile with multiple C++14 compilers, it looks like it will work! In the article’s comments someone mentioned a StackOverflow post which gives an example of using the UDIDs directly in the configurations. I believe the xcrun instruments tool is older, but it still runs under Xcode 8.2! I really hate doing this, but I’m going to hardcode the ID and add a task to the backlog. It turns out the name and id properties are mutually exclusive. I still need to restore SSH to use the OS X keychain :-/. Typing in my password every 10 seconds or so is getting old.
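Roughly, the approach looks like this; the workspace and scheme names are placeholders, and the UDID comes from the listing in the first command:

```shell
# List available devices/simulators and their UDIDs (still works under Xcode 8.2).
xcrun instruments -s devices

# Pin the test run to one simulator by UDID.
# Note: with id= you cannot also pass name= -- the two are exclusive.
xcodebuild test \
  -workspace MyApp.xcworkspace \
  -scheme MyScheme \
  -destination 'id=REPLACE-WITH-UDID'
```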

Doni found an article by JFrog about using their artifact manager for CocoaPods. Titles like Executive Summary always make me laugh; it should read Marketing’s non-technical lies.

Hmm, strange response from Travis: iOSSimulator: task_name_for_pid(mach_task_self(), 8744, &task) returned 5. The first search hit for that was the Mach call itself. Definitely an interesting method. Well, at least I have a vector to learn more details about the failure; now time to dig into the details. Not a good sign when the next article is Who needs task_for_pid() anyway…, describing a major security leak from BlackHat 2014 that used this as an attack vector to modify the kernel_task structure. There is a good PDF on the internals of how Mach system calls work called Abusing Mach on Mac OS X.
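To make sense of that "returned 5": in Mach’s kern_return.h error codes, 5 is KERN_FAILURE. Here’s a minimal macOS-only sketch of the failing call made against our own pid (my reading of the log, not something the Travis output confirms):

```c
// macOS only: depends on <mach/mach.h> and will not build elsewhere.
#include <mach/mach.h>
#include <mach/mach_error.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    mach_port_name_t task = MACH_PORT_NULL;
    // The simulator's failing call, here against our own pid.
    kern_return_t kr = task_name_for_pid(mach_task_self(), getpid(), &task);
    if (kr != KERN_SUCCESS) {
        // KERN_FAILURE == 5, matching the "returned 5" in the log.
        fprintf(stderr, "task_name_for_pid failed: %d (%s)\n",
                kr, mach_error_string(kr));
        return 1;
    }
    printf("got task name port for pid %d\n", getpid());
    return 0;
}
```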

I’ve apparently stumbled on a really rare and strange error, which we can only reproduce on Travis. In the entire indexed world of Google it looks like no one else has encountered this issue. I’m attempting to fall back an iOS version and use a different simulator. Perhaps it’s just a bad combo of tools and simulators? I can understand punting on an issue to resolve your current task, but I really wish people wouldn’t just sweep it under the rug. Well, updating the simulator resulted in some sane behavior. Now I’ve got a fault in the Travis harness for caching. Alrighty, noting the other errors in the log, it really looks like the Travis Xcode 8.2 image is broken. Time to revert to Xcode 8.1. Hopefully they fix the image shortly.