Mark Eschbach

Software Developer && System Analyst

Vows of Working Software

I’ve been on a quest for a great testing tool for NodeJS. For a while I used Jasmine because I was familiar with the framework from my time at StreamSend. For traditional JavaScript systems Jasmine was great; however, it started to show some cracks when I began writing asynchronous and promise-based tests. I’ve heard of Vows; it appears to have been around for a while and caters to exactly the type of asynchronous code I’m attempting to test!

Getting Started

I need a testing harness for establishing, transmitting a token, and tearing down a TCP/IP connection. Since I’m already familiar with NodeJS’s TCP/IP libraries, along with the protocols and underlying operating system features, Vows is the only wildcard I know of in this project. In the spirit of Growing Object Oriented Software, my first objective is to get a pending test wired through my CI system (Jenkins).

So, after initializing a new Git repo, wiring it into Gitolite, and threading the empty project through my Jenkins instance, I’m ready to go. I added a dependency on the most recent version of Vows (0.7.0). My experience in build engineering has taught me to never install globally what I can install locally, so I chose to declare the dependency in the package.json file under the devDependencies section. Once installed via npm install, the binary may be found under ./node_modules/.bin/vows.
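
For reference, the relevant slice of the package.json looks something like this — the name and version fields are placeholders of mine, the devDependencies entry is the important part:

{
	"name": "tcp-tunnel-harness",
	"version": "0.0.0",
	"devDependencies": {
		"vows": "0.7.0"
	}
}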

Vows requested a ‘test’ directory, so I created one. In it I created a new file called ‘harness_use_case_test.js’ with the following content:

var vows = require("vows");

vows.describe("Tunnel testing use case").addBatch({
	'test': "pending"
});

Which failed with: “Could not find any tests to run”. Upon closer inspection I found this issue, which rather amusingly mismatched the documentation. Appending the call exportTo(module) fixed the problem. If you are unfamiliar with NodeJS, module refers to the module object representing the current source file. With this fix the pending test ran correctly, reporting in ‘dot matrix’ style.
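
For clarity, here is the corrected file in full — identical to the version above, with the export appended:

var vows = require("vows");

vows.describe("Tunnel testing use case").addBatch({
	'test': "pending"
}).exportTo(module);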

Integration with Jenkins

Time to integrate the test reports with Jenkins! Unfortunately, after some time researching integration options, the best automated reporting option I could find was an xUnit reporter, and Vows doesn’t support emitting multiple output formats at once. So I fell back to using the spec reporter; I’m hoping that when something does go wrong I will at least get some readable output! The following is the scripting I use:

export NODE_HOME=/opt/node-v0.10.10
export PATH=$NODE_HOME/bin:$PATH

npm install
./node_modules/.bin/vows --spec

From pending to behavior

In an attempt to follow a BDD school of construction, I defined the following basic set of business requirements:

var vows = require("vows");
var assert = require("assert");

vows.describe("TCP/IP Tunnel Harness").addBatch({
	“given a setup harness”: {
	"when establishing a client connection" : {
		"a connection is made to the intake port": "todo",
		"a connection is not directly made to the service" : "todo"
	},
	"when connecting to the proxied service": {
		"a connection event is generated" : "todo"
	}
}
}).exportTo(module);
		

At the core of the library is a harness object which acts as a mediator, coordinating the dance between the network components. Vows promises to execute the tests in parallel (although I think they mean concurrently, as NodeJS doesn’t support threads as of 2014-01-28), which will be awesome as the test suite grows. So on to implementing the given!
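
Since the harness doesn’t exist yet, here is a rough sketch of the shape those requirements imply — every name here (Harness, intakePort, servicePort, the connection event) is a placeholder of my own, not an existing API:

var net = require("net");
var events = require("events");
var util = require("util");

// Hypothetical sketch: the harness mediates between the intake side,
// which clients connect to, and the proxied service behind it.
function Harness(intakePort, servicePort) {
	events.EventEmitter.call(this);
	var self = this;
	// Stand-in for the proxied service; announces each connection so
	// the tests can observe it.
	this.service = net.createServer(function (socket) {
		self.emit("connection", socket);
	});
	// The intake side tunnels each client through to the service.
	this.intake = net.createServer(function (client) {
		var upstream = net.connect(servicePort);
		client.pipe(upstream).pipe(client);
	});
	this.service.listen(servicePort);
	this.intake.listen(intakePort);
}
util.inherits(Harness, events.EventEmitter);

The tests would then construct one of these in the given and observe its events.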

Vows uses the key ‘topic’ for test setup. The topic is executed prior to the tests, providing the subject under test. There are three methods of providing the subject under test: returning a non-emitter value, invoking the this.callback property as a NodeJS-style callback, or returning an EventEmitter.
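
To illustrate the three styles, here is a small self-contained batch of my own (not from the tunnel project) exercising each one:

var vows = require("vows");
var assert = require("assert");
var events = require("events");

vows.describe("Topic styles").addBatch({
	"a plain value topic": {
		topic: function () {
			return 6 * 7; // handed directly to the vows below
		},
		"hands the value to each vow": function (value) {
			assert.equal(value, 42);
		}
	},
	"a this.callback topic": {
		topic: function () {
			var callback = this.callback;
			// NodeJS convention: callback(error, result)
			process.nextTick(function () {
				callback(null, "done");
			});
		},
		"hands the result after the error slot": function (err, result) {
			assert.equal(result, "done");
		}
	},
	"an EventEmitter topic": {
		topic: function () {
			var emitter = new events.EventEmitter();
			// Vows waits for a 'success' (or 'error') event.
			process.nextTick(function () {
				emitter.emit("success", "ready");
			});
			return emitter;
		},
		"hands the emitted value to each vow": function (value) {
			assert.equal(value, "ready");
		}
	}
}).exportTo(module);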

Sad sad problems

As I was developing the next sample and figuring out what the code would look like, I ran into a series of issues. The first, and the most counterproductive to my workflow: errors occurring in a topic are propagated to every client of that topic. This clutters the tests, as each one must now check whether an error occurred in the topic. Lastly, code reuse and general structure still seem a little raw.
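
To illustrate the clutter, here is a contrived example of my own — the failing setup is simulated, but the guard in each vow is the pattern I kept repeating:

var vows = require("vows");
var assert = require("assert");

vows.describe("Topic error propagation").addBatch({
	"given a harness that fails to set up": {
		topic: function () {
			var callback = this.callback;
			// Simulate a setup failure, e.g. a port that would not bind.
			process.nextTick(function () {
				callback(new Error("bind failed"));
			});
		},
		// The topic's error arrives as the first argument of every vow,
		// so each one repeats the same guard before its real assertion.
		"the first vow checks for the error": function (err, harness) {
			if (err) { return assert.ok(err); }
			// ...the assertion we actually wanted to write
		},
		"and so does the second": function (err, harness) {
			if (err) { return assert.ok(err); }
			// ...more duplicated guarding
		}
	}
}).exportTo(module);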

I will fully admit most of these could probably be overcome with testing experience; however, at this time I would like to survey the field for a tool with a lower barrier of entry.