
January 03 2014


Testing in Node.js

A test driven development cycle simplifies the thought process of writing code and makes it easier and quicker in the long run. But just writing tests is not enough by itself: knowing which kinds of tests to write, and how to structure code to conform to this pattern, is what it's all about. In this article we will take a look at building a small app in Node.js following a TDD pattern.

Besides the simple unit tests we are all familiar with, Node.js's async code adds an extra dimension: we don't always know the order in which functions will run, and we may be trying to test something inside a callback or checking how an async function behaves.

In this article we will be building a Node app which can search for files that match a given query. I know tools for this already exist (ack, for example), but for the sake of demonstrating TDD I think it makes a well rounded project.

The first step is obviously to write some tests, but even before that, we need to choose a testing framework. You can use vanilla Node, as there is an assert library built in, but it doesn't offer much in terms of a test runner; it's pretty much the bare essentials.

Another option, and probably my favorite for general use, is Jasmine. It's pretty self-contained, you don't have any other dependencies to add to your scripts, and the syntax is very clean and easy to read. The only reason I am not going to use it today is because I think Jack Franklin did an excellent job covering it in his recent Tuts+ series, and it's good to know your options so you can pick the best tool for your situation.

What We’ll Be Building

In this article we will be using the flexible 'Mocha' test runner along with the Chai assertion library.

Unlike Jasmine which is more like an entire test suite in one package, Mocha only takes care of the overall structure but has nothing to do with the actual assertions. This allows you to keep a consistent look and feel when running your tests, but also allows you to run whichever assertion library best fits your situation.

So for example, if you were going to use the vanilla 'assert' library, you could pair it with Mocha to add some structure to your tests.

Chai is a fairly popular option, and it's also all about options and modularity. Even without any plugins, just using the default API, you have three different syntaxes you can use depending on whether you would like a more classic TDD style or a more verbose BDD syntax.

So now that we know what we are going to use, let's get into the installation.

The Setup

To get started, let's install Mocha globally by running:

npm install -g mocha

When that completes create a new folder for our project and run the following inside it:

npm install chai

This will install a local copy of Chai for our project. Next, create a folder named test inside our project's directory, as this is the default location Mocha will look for tests.

That's pretty much it for setup; the next step is to talk about how to structure your apps when following a test driven development process.

Structuring Your App

It's important to know, when following a TDD approach, what needs to have tests and what does not. A rule of thumb is to not write tests for other people's already-tested code. What I mean by this is the following: let's say your code opens a file. You don't need to test the individual fs function; it's part of the platform and is presumably already well tested. The same goes when using third-party libraries. For the same reason, you shouldn't structure functions which primarily just call these kinds of functions, because you don't really write tests for them, and that leaves gaps in the TDD cycle.

Now of course, with every programming style there are a lot of different opinions, and people will have different views on how to do TDD. But the approach I use is to create individual components to use in your app, each of which solves a unique functional problem. These components are built using TDD, ensuring that they work as expected and that you won't break their API. Then you write your main script, which is essentially all glue code and does not need to be tested (and in certain situations can't be tested).

This also means that most of your components can be reused in the future as they do not really have much to do, directly, with the main script.

Following what I just said, it's common practice to create a folder named 'lib' where you put all the individual components. So up to this point you should have Mocha and Chai installed, and then a project directory with two folders: 'lib' and 'test'.

Getting Started With TDD

Just in case you are new to TDD I thought it would be a good idea to quickly cover the process. The basic rule is that you can't write any code unless the test runner tells you to.

Essentially, you’re writing what your code is supposed to do before actually doing it. You have a really focused goal while coding and you never compromise your idea by getting side-tracked or thinking too far ahead. Besides that, since all of your code will have a test affiliated with it you can be certain you will never break your app in the future.

A test, in reality, is just a declaration of what a function is expected to do when run. You then run your test runner, which will obviously fail (since you haven't written the code yet), and then you write the minimum amount of code needed to pass the failing test. It's important never to skip this step, because sometimes a test will pass even before you add any code, due to other code you have in the same class or function. When this happens, either you wrote more code than you were supposed to for a different test, or this is just a bad test (usually not specific enough).

Again according to our rule above, if the test passes right away you can't write any code, because it didn't tell you to. By continuously writing tests and then implementing the features you construct solid modules that you can rely on.

Once you’re finished implementing and testing your component, you can then go back and refactor the code to optimize and clean it up, making sure the refactoring doesn't fail any of the tests you have in place and, more importantly, doesn't add any features that are untested.

Every testing library will have its own syntax, but they usually follow the same pattern of making assertions and then checking if they pass. Since we are using Mocha and Chai let's take a look at both their syntaxes starting with Chai.

Mocha & Chai

I will be using the 'Expect' BDD syntax, because as I mentioned Chai comes with a few options out of the box. The way this syntax works is you start by calling the expect function, passing it the object you want to make an assertion on, and then you chain it with a specific test. An example of what I mean could be as follows:

expect(4+5).equal(9);

That's the basic syntax: we are saying expect the addition of 4 and 5 to equal 9. Now this isn't a great test, because 4 and 5 will be added by Node.js before the function is even called, so we are essentially testing my math skills, but I hope you get the general idea. The other thing you should note is that this syntax is not very readable in terms of the flow of a normal English sentence. Knowing this, Chai added the following chain getters, which don't do anything but which you can add to make assertions more verbose and readable. The chain getters are as follows:

  • to
  • be
  • been
  • is
  • that
  • and
  • have
  • with
  • at
  • of
  • same
  • a
  • an

Using the above, we can rewrite our previous test to something like this:

expect(4+5).to.equal(9);

I really like the feel of the entire library, which you can check out in their API. Simple things like negating the operation is as easy as writing .not before the test:

expect(4+5).to.not.equal(10);

So even if you have never used the library before, it won't be hard to figure out what a test is trying to do.

The last thing I would like to look over before we get into our first test is how we structure our code in Mocha.


Mocha is the test runner, so it doesn't really care too much about the actual tests; what it cares about is the tests' structure, because that is how it knows what is failing and how to lay out the results. The way you build it up is you create multiple describe blocks which outline the different components of your library, and then you add it blocks to specify individual tests.

For a quick example, let's say we had a JSON class and that class had a function to parse JSON and we wanted to make sure the parse function can detect a badly formatted JSON string, we could structure this like so:

describe("JSON", function() {
   describe(".parse()", function() {
       it("should detect malformed JSON strings", function(){
           //Test Goes Here
       });
   });
});

It's not complicated, and it's about 80% personal preference, but if you keep this kind of format, the test results should come out in a very readable format.

We are now ready to write our first library, let's begin with a simple synchronous module, to get ourselves better acquainted with the system. Our app will need to be able to accept command line options for setting things like how many levels of folders our app should search through and the query itself.

To take care of all this, we will create a module which accepts the command's string and parses all the included options along with their values.

The Tag Module

This is a great example of a module you can reuse in all your command line apps, as this issue comes up a lot. This will be a simplified version of an actual package I have on npm called ClTags. So to get started, create a file named tags.js inside of the lib folder, and then another file named tagsSpec.js inside of the test folder.

We need to pull in the Chai expect function, as that will be the assertion syntax we will be using and we need to pull in the actual tags file so we can test it. Altogether with some initial setup it should look something like this:

var expect = require("chai").expect;
var tags = require("../lib/tags.js");

describe("Tags", function(){

});

If you run the 'mocha' command now from the root of our project, everything should pass as expected. Now let's think about what our module will do: we want to pass it the command arguments array that was used to run the app, and we want it to build an object with all the tags. It would also be nice if we could pass it a default object of settings, so that if nothing gets overridden we will have some settings already stored.

When dealing with tags, a lot of apps also provide shortcut options which are just one character, so let's say we wanted to set the depth of our search we could allow the user to either specify something like --depth=2 or something like -d=2 which should have the same effect.

So let's just begin with the long formed tags (for example, '--depth=2'). First, let's write the test:

describe("Tags", function(){
   describe("#parse()", function(){
       it("should parse long formed tags", function(){
           var args = ["--depth=4", "--hello=world"];
           var results = tags.parse(args);

           expect(results).to.have.property("depth", 4);
           expect(results).to.have.property("hello", "world");
       });
   });
});

We added one describe block to our test suite for the parse method, and we added a test for long formed tags. Inside this test I created an example command and added two assertions for the two properties it should pick up.

Running Mocha now, you should get one error, namely that tags doesn't have a parse function. So to fix this error let's add a parse function to the tags module. A fairly typical way to create a node module is like so:

exports = module.exports = {};

exports.parse = function() {

};

The error said we needed a parse method, so we created it. We didn't add any other code inside, because it didn't yet tell us to. By sticking with the bare minimum you are assured that you won't write more than you are supposed to and end up with untested code.

Now let's run Mocha again, this time we should be getting an error telling us that it can't read a property named depth from an undefined variable. That is because currently our parse function isn't returning anything, so let's add some code so that it will return an object:

exports.parse = function() {
    var options = {};

    return options;
};

We are slowly moving along; if you run Mocha again, there shouldn't be any exceptions thrown, just a clean error message saying that our empty object has no property called depth.

No 'depth' property

Now we can get into some real code. For our function to parse the tag and add it to our object we need to cycle through the arguments array and remove the double dashes at the start of the key.

exports.parse = function(args) {
    var options = {};
    for (var i in args) { //Cycle through args
        var arg = args[i];
        //Check if long formed tag
        if (arg.substr(0, 2) === "--") {
            arg = arg.substr(2);
            //Check for equals sign
            if (arg.indexOf("=") !== -1) {
                arg = arg.split("=");
                var key = arg.shift();
                options[key] = arg.join("=");
            }
        }
    }
    return options;
};

This code cycles through the list of arguments, makes sure we are dealing with a long formed tag, and then splits it by the first equals character to create the key and value pair for the options object.

Now this almost solves our issue, but if we run Mocha again, you will see that we now have a key for depth, but it's set to a string instead of a number. Numbers are a bit easier to work with later on in our app, so the next piece of code we need to add is to convert values to numbers whenever possible. This can be achieved with some RegEx and the parseInt function as follows:

            if (arg.indexOf("=") !== -1) {
                arg = arg.split("=");
                var key = arg.shift();
                var value = arg.join("=");

                //Convert purely numeric values to numbers
                if (/^[0-9]+$/.test(value)) {
                    value = parseInt(value, 10);
                }
                options[key] = value;
            }

Running Mocha now, you should get a pass with one test. The number conversion should arguably be in its own test, or at least be mentioned in the test's declaration so that you don't remove the conversion assertion by mistake. So either append "and convert numbers" to the it declaration for this test, or separate it into a new it block. It really depends on whether you consider this obvious default behavior or a separate feature.

First Pass

Now, like I have been trying to stress throughout this whole article, when you see a passing spec it's time to write more tests. The next thing I wanted to add was the defaults object, so inside the tagsSpec file let's add the following it block right after the previous one:

    it("should parse long formed tags and convert numbers", function(){
        var args = ["--depth=4", "--hello=world"];
        var results = tags.parse(args);

        expect(results).to.have.property("depth", 4);
        expect(results).to.have.property("hello", "world");
    });

    it("should fallback to defaults", function(){
        var args = ["--depth=4", "--hello=world"];
        var defaults = { depth: 2, foo: "bar" };
        var results = tags.parse(args, defaults);

        var expected = {
            depth: 4,
            foo: "bar",
            hello: "world"
        };

        expect(results).to.deep.equal(expected);
    });

Here we are using a new kind of assertion, the deep equal, which is good for checking two objects for equal values. Alternatively, you can use the eql test, which is a shortcut, but I think this is clearer. This test passes two arguments as the command string and passes two defaults with one overlap, just so we can get a good spread of test cases.

Running Mocha now, you should get a sort of diff, containing the differences between what is expected and what it actually got.

Defaults Diff

Let's now continue back to the tags.js module, and let's add this functionality in. It's a fairly simple fix to add, we just need to accept the second parameter, and when it's set to an object we can replace the standard empty object at the start with this object:

exports.parse = function(args, defaults) {
    var options = {};
    if (typeof defaults === "object" && !(defaults instanceof Array)) {
        options = defaults;
    }
    //...rest of the function as before

This will bring us back to a green state. The next thing I want to add is the ability to specify a tag without a value and have it work like a boolean. For example, if we just set --searchContents or something like that, it will add that to our options object with a value of true.

The test for this would look something like the following:

    it("should accept tags without values as a bool", function(){
        var args = ["--searchContents"];
        var results = tags.parse(args);

        expect(results).to.have.property("searchContents", true);
    });

Running this will give us the following error just like before:

Boolean Tags

Inside of the for loop, when we got a match for a long formed tag, we checked if it contained an equals sign; we can quickly write the code for this test by adding an else clause to that if statement and just setting the value to true:

            if (arg.indexOf("=") !== -1) {
                arg = arg.split("=");
                var key = arg.shift();
                var value = arg.join("=");

                if (/^[0-9]+$/.test(value)) {
                    value = parseInt(value, 10);
                }
                options[key] = value;
            } else {
                options[arg] = true;
            }

The next thing I want to add is the substitutions for the short-hand tags. This will be the third parameter to the parse function and will basically be an object with letters and their corresponding replacements. Here is the spec for this addition:

    it("should accept short formed tags", function(){
        var args = ["-sd=4", "-h"];
        var replacements = {
            s: "searchContents",
            d: "depth",
            h: "hello"
        };

        var results = tags.parse(args, {}, replacements);

        var expected = {
            searchContents: true,
            depth: 4,
            hello: true
        };

        expect(results).to.deep.equal(expected);
    });

The trouble with shorthand tags is that they can be combined in a row. What I mean by this is, unlike the long formed tags where each one is separate, short hand tags (since they are each just a letter long) let you call three different ones by typing -vgh. This makes the parsing a bit more difficult, because we still need to allow the equals operator for attaching a value to the last tag mentioned, while at the same time registering the other tags. But not to worry, it's nothing that can't be solved with enough popping and shifting.

Here is the entire fix, from the beginning of the parse function:

exports.parse = function(args, defaults, replacements) {
    var options = {};
    if (typeof defaults === "object" && !(defaults instanceof Array)) {
        options = defaults;
    }

    if (typeof replacements === "object" && !(replacements instanceof Array)) {
        for (var i in args) {
            var arg = args[i];
            //Check if short formed tag
            if (arg.charAt(0) === "-" && arg.charAt(1) != "-") {
                arg = arg.substr(1);
                //Check for equals sign
                if (arg.indexOf("=") !== -1) {
                    arg = arg.split("=");
                    var keys = arg.shift();
                    var value = arg.join("=");

                    arg = keys.split("");
                    //The last letter gets the value
                    var key = arg.pop();
                    if (replacements.hasOwnProperty(key)) {
                        key = replacements[key];
                    }
                    args.push("--" + key + "=" + value);
                } else {
                    arg = arg.split("");
                }

                //Any remaining letters become boolean tags
                arg.forEach(function(key) {
                    if (replacements.hasOwnProperty(key)) {
                        key = replacements[key];
                    }
                    args.push("--" + key);
                });
            }
        }
    }
    //...long formed tag parsing continues as before

It's a lot of code (in comparison) but all we are really doing is splitting the argument by an equals sign, then splitting that key into the individual letters. So for example if we passed -gj=asd we would split the asd into a variable called value, and then we would split the gj section into individual characters. The last character (j in our example) will become the key for the value (asd) whereas any other letters before it, will just be added as regular boolean tags. I didn't want to just process these tags now, just in case we changed the implementation later. So what we are doing is just converting these short hand tags into the long formed version and then letting our script handle it later.

Running Mocha again will take us back to our illustrious green results of four tests passing for this module.

Now there are a few more things we can add to this tags module to make it closer to the npm package, like the ability to also store plain text arguments for things like commands or the ability to collect all the text at the end, for a query property. But this article is already getting long and I would like to move on to implementing the search functionality.

The Search Module

We just went through creating a module step by step following a TDD approach, and I hope you got a feel for writing like this. But for the sake of keeping this article moving, for the rest of it I will speed up the testing process by grouping things together and just showing you the final versions of tests. It's more of a guide to different situations which may come up and how to write tests for them.

So just create a file named search.js inside the lib folder and a searchSpec.js file inside of the test folder.

Next, open the spec file and let's set up our first test, which can be for the function that gets a list of files based on a depth parameter. This is also a great example of a test which requires a bit of external setup to work. When dealing with outside object-like data, or in our case files, you will want a predefined setup which you know will work with your tests, but you also don't want to add fake info to your system.

There are basically two options for solving this problem: you can either mock the data or create real fixture files. As I mentioned above, if you are dealing with the language's own commands for loading data, you don't necessarily need to test them; in cases like that, you can simply provide the 'retrieved' data and continue on with your testing, kind of like what we did with the command string in the tags library. But in this case, we are testing the recursive functionality we are adding on top of the language's file reading capabilities, depending on the specified depth. In cases like these, you do need to write a test, so we need to create some demo files to test the file reading against. An alternative would be to stub the fs functions so they run but don't actually do anything, and then count how many times our fake function ran (check out spies), but for our example I am just going to create some files.

Mocha provides functions which can run both before and after your tests, so you can perform these kinds of external setup and cleanup around your tests.

For our example, we will create a couple of test files and folders at two different depths so we can test out that functionality:

var expect = require("chai").expect;
var search = require("../lib/search.js");
var fs = require("fs");

describe("Search", function(){
    describe("#scan()", function(){
        before(function() {
            if (!fs.existsSync(".test_files")) {
                fs.mkdirSync(".test_files");
                fs.writeFileSync(".test_files/a", "");
                fs.writeFileSync(".test_files/b", "");
                fs.mkdirSync(".test_files/dir");
                fs.writeFileSync(".test_files/dir/c", "");
                fs.mkdirSync(".test_files/dir2");
                fs.writeFileSync(".test_files/dir2/d", "");
            }
        });

        after(function() {
            fs.unlinkSync(".test_files/a");
            fs.unlinkSync(".test_files/b");
            fs.unlinkSync(".test_files/dir/c");
            fs.unlinkSync(".test_files/dir2/d");
            fs.rmdirSync(".test_files/dir");
            fs.rmdirSync(".test_files/dir2");
            fs.rmdirSync(".test_files");
        });

These will be called based on the describe block they are in, and you can even run code before and after each it block using beforeEach or afterEach instead. The functions themselves just use standard node commands to create and remove the files respectively. Next we need to write the actual test. This should go right next to the after function, still inside the describe block:

        it("should retrieve the files from a directory", function(done) {
            search.scan(".test_files", 0, function(err, flist){
                expect(flist).to.deep.equal([
                    ".test_files/a",
                    ".test_files/b",
                    ".test_files/dir/c",
                    ".test_files/dir2/d"
                ]);
                done();
            });
        });

This is our first example of testing an async function, but as you can see it's just as simple as before; all we need to do is use the done function Mocha provides in the it declarations to tell it when we are finished with this test.

Mocha will automatically detect if you specified the done variable in the callback, and it will wait for it to be called, allowing you to test asynchronous code really easily. Also, it's worth mentioning that this pattern is available throughout Mocha; you can, for example, use it in the before or after functions if you need to set something up asynchronously.

Next I would like to write a test that makes sure the depth parameter works if set:

    it("should stop at a specified depth", function(done) {
        search.scan(".test_files", 1, function(err, flist) {
            expect(flist).to.deep.equal([
                ".test_files/a",
                ".test_files/b"
            ]);
            done();
        });
    });

Nothing different here, just another plain test. Running this in Mocha you will get an error that the search doesn’t have any methods, basically because we haven't written anything in it. So let's go add an outline with the function:

var fs = require("fs");

exports = module.exports = {};

exports.scan = function(dir, depth, done) {

};

If you now run Mocha again, it will pause waiting for this async function to return, but since we haven't called the callback at all, the test will just time out. By default it should time out after about two seconds, but you can adjust this using this.timeout(milliseconds) inside of a describe or it block, to adjust their timeouts respectively.

This scan function is supposed to take a path and depth, and return a list of all the files it finds. This is actually kind of tricky when you start thinking about how we are essentially recursing two different functions together in a single function. We need to recurse through the different folders and then those folders need to scan themselves and decide on going further.

Doing this synchronously is fine because you can step through it one by one, slowly completing one level or path at a time. When dealing with an async version it gets a bit more complicated, because you can't just do a forEach loop or something; it won't pause between folders, so they would all essentially run at the same time, each returning different values, and they would sort of overwrite each other.

So to make it work, you need to create a sort of stack where you can asynchronously process one at a time (or all at once if you use a queue instead) and then keep some order in that manner. It's a very specific algorithm so I just keep a snippet by Christopher Jeffrey which you can find on Stack Overflow. It doesn't apply just to loading files, but I have used this in a number of applications, basically anything where you need to process an array of objects one at a time using async functions.

We need to alter it a bit, because we would like to have a depth option. The way the depth option works is you set how many levels of folders you want to check, or zero to recurse indefinitely.

Here is the completed function using the snippet:

exports.scan = function(dir, depth, done) {
    var results = [];
    fs.readdir(dir, function(err, list) {
        if (err) return done(err);
        var i = 0;
        (function next() {
            var file = list[i++];
            if (!file) return done(null, results);
            file = dir + '/' + file;
            fs.stat(file, function(err, stat) {
                if (stat && stat.isDirectory()) {
                    //Recurse unless we have hit the depth limit
                    if (depth !== 1) {
                        var ndepth = (depth > 1) ? depth - 1 : 0;
                        exports.scan(file, ndepth, function(err, res) {
                            results = results.concat(res);
                            next();
                        });
                    } else {
                        next();
                    }
                } else {
                    results.push(file);
                    next();
                }
            });
        })();
    });
};

Mocha should now be passing both tests. The last function we need to implement is the one which will accept an array of paths and a search keyword and return all matches. Here is the test for it:

    describe("#match()", function(){
        it("should find and return matches based on a query", function(){
            var files = ["hello.txt", "world.js", "another.js"];
            var results = search.match(".js", files);
            expect(results).to.deep.equal(["world.js", "another.js"]);

            results = search.match("hello", files);
            expect(results).to.deep.equal(["hello.txt"]);
        });
    });

And last but not least, let's add the function to search.js:

exports.match = function(query, files){
    var matches = [];
    files.forEach(function(name) {
        if (name.indexOf(query) !== -1) {
            matches.push(name);
        }
    });
    return matches;
};

Just to make sure, run Mocha again; you should have a total of seven tests, all passing.

All Green!

Putting It All Together

The last step is to really write the glue code which pulls all our modules together; so in the root of our project add a file named app.js or something like that and add the following inside:

#!/usr/bin/env node

var tags = require("./lib/tags.js");
var search = require("./lib/search.js");
var defaults = {
    path: ".",
    query: "",
    depth: 2
};
var replacements = {
    p: "path",
    q: "query",
    d: "depth",
    h: "help"
};

tags = tags.parse(process.argv, defaults, replacements);

if ( {
    console.log("Usage: ./app.js -q=query [-d=depth] [-p=path]");
} else {
    search.scan(tags.path, tags.depth, function(err, files) {
        search.match(tags.query, files).forEach(function(file){
            console.log(file);
        });
    });
}

No actual logic is going on here really; we are just basically connecting the different modules together to get the desired results. I usually don't test this code, as it's just glue code which has all been tested already.

You can now make your script executable (chmod +x app.js on a Unix system) and then run it like so:

./app.js -q=".js"

You can optionally customize any of the other options we set up.

Action Still


In this article we have built an entire file searching app, albeit a simple one, but I think it demonstrates the process as a whole fairly well.

Some personal advice moving forward: if you are going to do a lot of TDD, set up your environment. A lot of the overhead time people associate with TDD is due to having to keep switching windows around, opening and closing different files, then running tests and repeating this 80 dozen times a day. In such a case it interrupts your workflow, decreasing productivity. But if you have your editor set up, either with the tests and code side-by-side or with an IDE that supports jumping back and forth, this saves a ton of time. You can also get your tests to run automatically by calling Mocha with the -w flag to watch the files for changes and re-run all the tests. These kinds of things make the process more seamless and more of an aid than a bother.

I hope you enjoyed this article, if you have any questions you can leave them below, contact me on Twitter @gabrielmanricks or on the Nettuts+ IRC channel (#nettuts on freenode).

November 19 2013


Fail Fast, Fail Often: An Interview with Victor Lombardi

Retrospectives are common. You’ve likely conducted one before. But how many companies are actually good at them? How many companies actually have the courage to be open and honest about their own shortcomings? My experience tells me that very few are. And that’s why Victor Lombardi’s recently released book, Why We Fail, is so necessary: unlike the retrospectives designers are used to seeing, Lombardi’s stories are full of objective, thoughtful, and insightful commentary.

An award-winning product designer, Victor Lombardi’s had a hand in over 40 different software and internet projects throughout the course of his career. And during that time he’s clearly paid attention to one thing: namely, all of the different ways in which a project can unfold. His new book, Why We Fail, tells over a dozen stories of projects gone awry.

So why do design projects fail? Many reasons. Lombardi attempts to answer the question from a number of angles: product ideation, design, development, and marketing. After reading his book, we brought additional questions to the discussion: How does bias factor in? Or branding? And, on a different level, what can we learn from

Our full interview appears below. Additionally (as is always the case when we interview an author published by Rosenfeld Media), the publisher has graciously offered to give away a few books to readers. More information on that follows the interview!

Hey, Victor! Thanks for taking the time to chat. Throughout the book, you note a wide variety of places in which cognitive biases might affect an organization (“survivorship bias,” for example, is a perspective that exclusively favors success). Were you aware of bias and its effects from the outset or did you simply start to see bias the further you delved into your research?
I wasn’t expecting to hear about bias when I interviewed people for the book. Maybe that’s because I didn’t think people would open up this way. But they did.

I think it’s good therapy for us to talk through not only what we did during a project but also what we thought and felt. From there I brushed up on my psychology—Max Bazerman’s “Blind Spots” was particularly helpful—to explain the cognitive science behind the issues that led to failures.

Many companies find it (understandably) difficult to financially justify a culture that “embraces” failure. What advice do you have for them?
If senior management rules by ego, believing that the people at the top have the best ideas, then I’ve got nothing to say. They won’t hear my message.

For others, I think the overt message of “fail fast” is actually better framed as “experiment fast.” The most effective innovators succeed through experimentation. They’ve updated the traditional R&D department by stepping out of the lab and interacting directly with customers, running thoughtful experiments, and executing them quickly to learn quickly what works and what doesn’t.

Anyone doing user-centered design is already 80% of the way there. It makes a huge difference just to shift your process towards the scientific method, phrasing research questions as hypotheses and iteratively testing towards them. A key difference is in the results: instead of a lot of usability data to analyze and interpret, you get a true or false result. This makes it much easier to decide what to do next.

I recommend reading up on methods like customer development, lean startup, or by starting with the final chapter of my book.

In chapter four you recount the story of Wesabe and Mint, two startups who approached the financial space from slightly different perspectives. Wesabe suggested that users manually upload their financial data (in the name of privacy and security) whereas Mint automated this task (at the risk of perceived security). Both were minimum viable products, but one failed while the other succeeded. Can you speak a little as to what startups can learn, generally, from Wesabe and Mint’s subtle differentiation?
Wesabe was a useful service with a smart Web 2.0 strategy. Given more time and investment it would still be around. But certain classes of startups are dependent on attracting large numbers of customers in order to attract more investment. Mint chose features and designed them in a way that excelled at attracting customers. They won the competition even though Wesabe was superior in many ways.

But this isn’t true in every case. In the book I cover a broad spectrum of products: startups and mature products; mobile, web, and desktop software; hardware; and services. Different situations resulted in different lessons. I summarize the lessons at the end of each case study.

One of my favorite case studies in the book is Google Wave, in which you suggest that the first sign of trouble was that everyone had a different definition of what a “wave” actually was. Personally, I think this speaks to the strong connection between user experience, semantics and branding. How do we fail in this regard and how might we do better?
The UX field generally is not good at the conceptual design stage of creating new products compared to, say, industrial design or architecture. We fall in love with our first idea, and we can quickly and cheaply move from idea to working prototype—it isn’t natural to stay in the idea stage for a while to explore alternate solutions.

It’s unfortunate that Google Wave failed because the problem still exists. The solution was close. …maybe “Concept Design” should be my next book ;-)

Chapter 7, titled “Do the right thing,” tells the story of Plaxo and, two companies who each decided to employ dark patterns to “better” their business. What other kinds of stories/examples did you consider including in this chapter that exhibited bad behavior?
In cases like I had no doubt the behavior was unethical. Others were less clear cut. Some of the things Plaxo did [ed: such as mass emailing its members’ contacts] that annoyed us back then are now accepted practice. So it’s relative. I decided against including others because there was no smoking gun, so I’ll refrain from mentioning them here as well. If you really want to know, you’ll have to buy me a drink sometime.
Last question! I know it’s a bit premature, but what, if anything, do you think designers might learn from the (highly publicized) failure of Healthcare.gov?
Let’s say we solved for the myriad of political and vendor integration problems that plagued the project. What’s left are some intriguing customer experience issues. One seems to be that a long registration process is required before the customer can view prices of health plans, because the plans and prices are determined by your registration information. I don’t know how they ended up with that design, but the decision to design it this way sounds like a policy decision made around a conference table rather than through a design process that included running experiments.

What you can do if you find yourself in this situation is to acknowledge, out loud, that the goal of not showing prices prematurely is a good one, but the solution of making the customer do a lot of work up front is risky because more people will abandon the process before receiving any value from the site (see Wesabe vs. Mint). To mitigate this risk, you can generate alternate designs, mock them up, and test them out with customers.

Offhand, I can think of a few options:

  • Let visitors browse plans upon arrival and show the range of prices next to each plan to give people a general idea of cost. Then show them the actual prices after registration.
  • Show some realistic content so visitors know what factors will influence the price, like “Sally, a single mother of two in New York will pay $100/month for Plan X which includes benefits A, B, and C.”
  • If just a bit of data is needed to determine price, like state and income, just ask for that data, and require registration later when people are ready to buy a plan.

Thanks, again, for taking the time, Victor! Your book was a pleasure to read.

If you’re as jazzed about learning from failure as we are, I’d strongly suggest entering for a chance to win a copy of your own, courtesy of our friends over at Rosenfeld Media. To enter, simply follow UX Booth on Twitter and leave a comment on this post answering the question: What’s your favorite story of design failure (one you’ve witnessed firsthand or otherwise), and what lessons do you think it provides? Rather than pick the winners at random, as we usually do, we’ll work with Victor to pick the three best stories of failure. Their authors will receive copies of the book. Entries must be made by midnight PST on November 21st. Good luck!

The post Fail Fast, Fail Often: An Interview with Victor Lombardi appeared first on UX Booth.


October 29 2013


Five Tips for Conducting Scientific Research in the UX World

Despite the fact that research plays such a pivotal role in the practice of user-centered design, much has been written about how to approach it in a “quick and dirty” manner. Why the rush? I believe that the application of a more rigorous, scientific methodology could lend some much-needed credibility to our approach.

My love story with research began almost a decade ago. One day, while working as a novice prototyper, I was instructed to get feedback from customers. So — awkwardly — I introduced my ideas to potential users. Some told me what they liked; others gently glossed over what they would improve. I came away feeling accomplished.

Little did I know. My subsequent training as a scientific researcher helped me see the error of my ways. I realized that, in that moment, I used biased responses to inform my design. I heard what I wanted and not necessarily what I needed.

A rigorous approach to research provides a much clearer path to unbiased findings, findings that go a long way towards informing our design. This article covers five perspectives to that end. Starting with research plans, we’ll cover details of testing methodologies and even the role of the researcher herself. Finally, we’ll discuss the ways these tips apply to our research in practice.

Go back to where it all began

All scientific research projects begin with a research plan, a document that outlines:

  • The problem (or the research questions) to be explored,
  • A summary of existing literature,
  • The hypothesis(es) or an extrapolation of any patterns evident in the existing literature,
  • The research participants who will take part (more on this, below),
  • The data collection methodology(ies) to be employed,
  • The planned analysis methods, and
  • Any expected results.

The goal in writing a research plan is to ensure that the entire team understands not only the purpose of the study but also that each aspect of the study has been given due consideration.

Developing a sound research plan requires that we begin with an extensive review of existing theories, models, and other research studies. This ensures that we aren’t reinventing the wheel. For instance, if the study is based around the System Usability Scale, the best thing to do is to read the original paper to truly understand the scale. Finding original research is more valuable than pretty diagrams or the popularity of the source. Valuable academic citation sites include Google Scholar and Microsoft Academic Search. While there’s always the risk of playing a game of “telephone”, these documents often go through extensive committee review, which minimizes the chance that they will contain incorrect information.

Determine the number of participants beforehand

Sample size has been a hot topic for a while now. Some researchers assert that five participants will suffice [2]; others calculate their sample size based on the power that they want to achieve [3]; still others believe that a higher number has a lower percentage of risk associated with it [4]. My take is that the sample size depends on the methodology of the study.

For example, a qualitative, exploratory study on mobile phone usage behavior needs descriptive, insightful data, so the number of participants depends on the richness of the information received. But a quantitative study, such as one measuring the effects of mobile phone usage on behavior, depends on confidence limits and intervals, as well as on the analysis methods. The more analytical you want to be, the bigger your sample size needs to be.

Either way, the key is to determine the number of participants before conducting our research, and to continue researching until we’ve hit that number. This ensures that we aren’t swayed by early trends that might ultimately cause us to miss subtle issues. The Three Mile Island tragedy is a painful reminder of the severity of subtle issues.

Don’t let your interests conflict

Scientific research focuses on objectivity. For that reason, it always begins with getting approval from an Institutional Review Board (IRB), an academic organization that approves and monitors any research involving humans. The IRB requires that all researchers state they do not have a conflict of interest in the research project at hand.

So, what does this imply for UX designers? Simple: designers shouldn’t research their own designs.

Designers inevitably design things that make sense to themselves. This is beneficial in some ways, but it also paves the way for hundreds of biases to affect decision making. In order to gather unbiased research to inform designs, a trained, unbiased researcher needs to have the final say on the questions, and decipher the answers. This helps avoid experimenter biases like interpretive bias and observer bias.

Test the test

Pilot tests are tests of a research plan. For scientific researchers, pilot tests are necessary in order to ensure the validity of the research plan and to help identify possible problems with it [5]. Ideally, pilot tests are conducted with a group of users that is representative of the target audience.

The pilot test works exactly like the proposed one, but instead of looking for data, it allows us to catch errors in the test itself. For example, if we are pilot-testing a survey and users don’t understand the word “cumbersome”, we might remove it from our final survey. With a survey, we’ll also time how long users take to complete it, make sure that every question is understood correctly, and ask the participants for candid feedback.

If we’re doing a usability test, we’ll provide the instructions and watch participants complete the tasks we plan to assign, to ensure that our instructions are clear; we’ll remind users to think aloud and to be frank with their opinions, as they would in an actual test; and, most important, we’ll take notes every time they ask for a question to be repeated or clarified.

Make sure to stick to the planned script and behave as though this were a regular research study. Ask for honest feedback on how users would improve the overall study, and apply your expertise as a researcher when weighing their answers.

Typically, results of a pilot test are only used to modify the actual test. Results like answers to surveys, time taken to complete tasks, etc. should not be incorporated into the final results of the research to ensure consistency.

De-bias, de-stress, de-tect

Scientific research often requires extensive vetting of researchers — the people conducting the research — prior to their participation in a project. The latest trend in the UX world is to get everyone involved with the research. As a researcher, nothing excites me more. That being said, it is extremely important to acknowledge that a researcher’s inexperience, and the number of open (versus hidden) observers, can be inversely proportional to a study’s overall “success.”

For instance, usability testing (arguably the most common research method in the UX world) can be extremely stressful for participants [6]. Aside from being asked to ‘perform’, users are sometimes put in unnatural conditions, which can be very nerve-wracking. This, in turn, could hinder performance and risk invalidating our findings.

Another thing that affects performance is the fact that participants change their behaviour when they know they’re being observed, something otherwise known as the Hawthorne effect. Worse still, this effect is only exacerbated as the number of observers increases. So while it’s definitely good to get more people involved and invested in research, there are a few precautions we should take in order to minimize the potential negative effects.

  1. First, whenever we’ve got a group of people involved in the research process, we should always ensure the facilitator of a research session has some experience and training, so that they’re not unknowingly influencing participants. Keep an eye out for leading questions and analyze the results accordingly.
  2. Second, either keep observers hidden or keep their number to a minimum. A researcher’s main job is to keep the data as pure as possible (objectivity, remember?), and a stressed participant does not provide reliable data.
  3. Third, remind users that we had nothing to do with the design, so that they aren’t hesitant to give negative feedback.
  4. Fourth, always remind the user that we’re testing the product and not them. This is (hopefully) old news, but users need to be reminded of it constantly.
  5. Fifth, and finally, always keep an eye out (or an ear, if the session is remote) for any sign of stress. If the participant starts to appear stressed, immediately change the topic or consider ending the session. The key here is to note the difference between a stressful or frustrating design interaction and a stressful research session. The former provides valuable insight, while the latter can produce unreliable data.

Repurposing the scientific method

In summary, I suggest taking a leaf out of the scientific researchers’ book:

  1. Plan your study out and read the sources to get accurate information.
  2. Choose a number of participants before testing, and stick to it regardless of the first trends that appear.
  3. Be alert! Watch out for bias when conducting research.
  4. Test your tests.
  5. Avoid biases, stress, and leading questions.

Most importantly, don’t shy away from rigor in the research process; it’s the only thing that can lead to truly dependable results!


  1. Fin, P. (2006). Bias and Blinding: Self-Fulfilling Prophecies and Intentional Ignorance
  2. Nielsen, J. (1993). Usability Engineering. Boston: AP Professional.
  3. Kraemer, C. & Thiemann, S. (1987). How Many Subjects? Statistical Power Analysis in Research. Oaks, CA, US: Sage Publications, Inc. 119 pp.
  4. Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379-383.
  5. van Teijlingen, E. & Hundley, V. (2001). The importance of pilot studies. Social Research Update. Issue 35.
  6. Schrier, J. (1992). Reducing Stress Associated with Participating in a Usability Test. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 1992 36: 1210. DOI: 10.1177/154193129203601606.

The post Five Tips for Conducting Scientific Research in the UX World appeared first on UX Booth.

July 02 2013


The Art of Guerilla Usability Testing

Guerrilla usability testing is a powerful technique. Designer Martin Belam describes it as “the art of pouncing on lone people in cafes and public spaces, [then] quickly filming them whilst they use a website for a couple of minutes.” Let’s skip the pouncing part and instead focus on its subtleties, including how to obtain and share feedback with our team.

I recently worked on a quickstart project in which my team was asked to build a responsive website in a short amount of time. We were given very little time to code (let alone conduct research) for the endeavor, yet by employing guerilla usability testing along the way we collected feedback on the brand position. Eventually, we aligned our designs to both customer expectations and business goals.

Once a week throughout the project, we tested different kinds of prototypes to bring the business’s ideas to life. For example, while mid-development, we sketched a mobile version of the site on index cards and did a quick assessment. This revealed navigational problems (which guided us to rethink a key point in the customer journey) and even ended up shaping a bit of the brand’s media material. What’s more, guerilla usability testing opened our stakeholders’ eyes so that they challenged their own, innate assumptions about “the user.”

We iterated through our design ideas using lo-fi techniques like paper prototyping. Sketch by Chris Cheshire.

The bottom line? Guerilla usability testing presented itself as an easy-to-perform technique for refining the user experience. It helped us validate (and invalidate) critical assumptions cheaply and quickly.

Breaking it down

It’s hard to see the magic that guerrilla usability testing affords and not want in on the action, right? Here are some basic questions to consider before getting started:

  1. What shall we test?
  2. Where will we test?
  3. With whom will we test? and, of course,
  4. How will we test?

What shall we test?

One of the best parts about this kind of testing is that it can be done with almost anything, from concepts drawn on the back of napkins to fully functioning prototypes. Steve Krug recommends testing things earlier than we think we should and I agree – get out of the building as soon as possible.

Test what the product could be so as to shape what the product should be. Even loosely defined UI sketches can be a great way to evaluate a future product. In fact, recent research shows that lower-fidelity prototypes can be more valuable concerning both high and low-level user interactions.

Where do we test?

Where we conduct tests affects how we perform and document our work. For instance, if we’re testing a new mobile app for a retail chain, we might go to the store itself and walk the aisles; if we’re working on “general” office software, we might test it with coworkers in a different part of the office; etc. The point is: let context drive the work.

With whom do we test?

When designing for the mass market, it’s easy enough to ask friendly looking strangers if they have a couple minutes to spare. Public spaces and shopping centers present some of the best places to do this on account of the sheer amount of foot traffic they receive (as well the relaxed nature of the environment). With more specific user sets, however, it’s useful to target subjects based on their context (see above): a mixture of location and behavior.

Coffeeshops are great because you’ll often find test subjects from varying cultural backgrounds and different age ranges.

How do we test?

Testing is fairly straightforward: have participants talk aloud as they perform tasks. Use the think-aloud protocol to test overall product comprehension rather than basic task completion. The key is to watch customers fiddle with a product and silently evaluate its usability. As Sarah Harrison explains, “Observing users is like flossing–people know they’re supposed to do it every day, but they don’t. So just do it. It’s not a big deal.”

Always start with open-ended, non-leading questions like:

  1. What do you make of this?
  2. What would you do here?
  3. How would you do [that]?

By answering these kinds of questions, participants tell a loose story in which they explain how they perceive a product. Along the way, we can generate ideas for how to improve things in the next iteration.

Employing the technique

Guerrilla usability testing is very much about adapting to the situation. That said, here are some helpful hints that I find consistently work in different international contexts:

  1. Beware the implicit bias. While coffeeshops are a great place to find test participants, focusing on people who frequent them introduces bias to our work. Simply acknowledging this implicit bias can help designers neutralise subjective experiences and account for individual differences. Remember to target different genders and be fair in who you approach.
  2. Explain what’s going on. Designers should be honest about who we are, why we’re testing, and what sort of feedback we’re looking to receive. Oftentimes, it’s best to do this with a release form, so that people are fully aware of the implications of their participation – like whether the recording will just be used internally or shared globally at conferences. This sort of release form, while tedious to carry around, helps establish trust.
  3. Be ethical. Of course, being honest doesn’t mean we need to be fully transparent. Sometimes it’s useful to skip certain information, like if we worked on the product they’re testing. Alternatively, we might tell white lies about the purpose of a study. Just make sure to always tell the truth at the end of each session: trust is essential to successful collaboration.
  4. Make it casual. Lighten up tests by offering cups of coffee and/or meals in exchange for people’s time. Standing in line or ordering with a test subject is a great opportunity to ask questions about their lifestyle and get a better feel for how a test might go.
  5. Be participatory. Break down barriers by getting people involved: ask them to draw – on a napkin or piece of notebook paper, for example – what they might expect to see on the third or fourth screen of a UI flow. This doesn’t have to be a full-blown user interface necessarily, just a rough concept of what’s in their head. You never know what you’ll learn by fostering imagination.
  6. Don’t lead participants. When you sense confusion, ask people what’s going through their head. Open them up by prodding: “I don’t know. What do you think?” People in testing situations can often feel as though they are being tested (as opposed to the product itself), and therefore can start to apologise or shut down.
  7. Keep your eyes peeled. It’s important to capture passing thoughts for later analysis. Ethnographic observation is one good way to record what you were thinking during tests. Don’t get too hung up on formalised notes, though; most of the time your scribbles will work just fine. It’s about triggering memories, not showing notes off at an academic conference.
  8. Capture the feedback. A key part of any testing process is capturing what we’ve learned. While the way in which we do this is definitely a personal choice, there are a few preferred tools available: apps like Silverback or UX Recorder collect screen activity along with a test subject’s facial reaction. Other researchers build their own mobile rigs. The important part to remember here is to use tools that fit your future sharing needs.
  9. Be a timecop. Remember, this isn’t a usability lab with paid users. Be mindful of how much time you spend with test subjects and always remind them that they can leave at any point during the test. The last thing you’d want is a grumpy user skewing your feedback.

Sharing the feedback

Conducting the tests is only half the battle, of course. To deliver compelling and relevant results from guerilla usability tests, designers need to strategically decide how we’ll share our findings with our colleagues.

When analysing and preparing captured feedback, always consider your audience. The best feedback is the kind that understands stakeholders and kickstarts important conversations between them. For example, developers who need to evaluate bugs will have different needs than executives who want to prioritise new features.

Next, when delivering feedback, align it with your audience’s expectations. Try editing clips in iMovie or making slides in PowerPoint. Your co-workers are probably as busy as you, so an edited down “trailer” that highlights relevant results or a bullet-point summary along with powerful quotes is always a good method to keep people listening.

Go guerilla

At the end of the day, guerilla usability testing comes in many forms. There’s no perfection to the art. It is unashamedly and unapologetically impromptu. Consider making up your own approach as you go: learn by doing.

Note: Thanks to Andrew for providing lots of feedback on early drafts of this article.


The post The Art of Guerilla Usability Testing appeared first on UX Booth.

August 06 2012


Let’s TDD a Simple App in PHP

In this tutorial, I will present an end-to-end example of a simple application – made strictly with TDD in PHP. I will walk you through each step, one at a time, while explaining the decisions I made in order to get the task done. The example closely follows the rules of TDD: write tests, write code, refactor.

Step 1 - Introduction to TDD & PHPUnit

Test Driven Development (TDD)

TDD is a “test-first” technique to develop and design software. It is almost always used in agile teams, being one of the core tools of agile software development. TDD was first defined and introduced to the professional community by Kent Beck in 2002. Since then, it has become an accepted – and recommended – technique in everyday programming.

TDD has three core rules:

  1. You are not allowed to write any production code, if there is not a failing test to warrant it.
  2. You are not allowed to write more of a unit test than is strictly necessary to make it fail. Not compiling / running is failing.
  3. You are not allowed to write more production code than is strictly necessary to make the failing test pass.


PHPUnit

PHPUnit is the tool that allows PHP programmers to perform unit testing and to practice test-driven development. It is a complete unit testing framework with mocking support. Even though there are a few alternative choices, PHPUnit is the most used and most complete solution for PHP today.

To install PHPUnit, you can either follow along with the previous tutorial in our “TDD in PHP” series, or you can use PEAR, as explained in the official documentation:

  • become root or use sudo
  • make sure you have the latest PEAR: pear upgrade PEAR
  • enable auto discovery: pear config-set auto_discover 1
  • install PHPUnit: pear install phpunit/PHPUnit

More information and instructions for installing extra PHPUnit modules can be found in the official documentation.

Some Linux distributions offer phpunit as a precompiled package, though I always recommend installing via PEAR, because that ensures the most recent version is installed and used.

NetBeans & PHPUnit

If you’re a fan of NetBeans, you can configure it to work with PHPUnit by following these steps:

  • Go to NetBeans’ configuration (Tools / Options)
  • Select PHP / Unit Testing
  • Check that the “PHPUnit Script” entry points to a valid PHPUnit executable. If it does not, NetBeans will tell you this, so if you don’t see any red notices on the page, you are good to go. If not, look for the PHPUnit executable on your system and enter its path in the input field. For Linux systems, this path is typically /usr/bin/phpunit.

If you do not use an IDE with unit testing support, you can always run your tests directly from the console:

cd /my/applications/test/folder
phpunit NameOfYourTestFile.php

Step 2 - The Problem to Solve

Our team is tasked with the implementation of a “word wrap” feature.

Let’s assume that we are part of a large corporation, which has a sophisticated application to develop and maintain. Our team is tasked with the implementation of a “word wrap” feature. Our clients don’t wish to see horizontal scroll bars, and it’s our job to comply.

We need to create a class that is capable of formatting an arbitrary bit of text provided as input. The result should be word-wrapped at a specified number of characters. The rules of word wrapping should follow the behavior of everyday applications, like text editors, web page text areas, and so on. Our client does not understand all the rules of word wrapping, but they know they want it, and they know it should work the same way they’ve experienced in other apps.

Step 3 - Planning

TDD helps you achieve a better design, but it does not eliminate the need for up-front design and thinking.

One of the things that many programmers forget, after they start TDD, is to think and plan beforehand. TDD helps you achieve a better design most of the time, with less code and verified functionality, but it does not eliminate the need for up-front design and human thinking.

Every time you need to solve a problem, you should set aside time to think about it, to imagine a little design – nothing fancy – but enough to get you started. This part of the job also helps you to imagine and guess possible scenarios for the logic of the application.

Let’s think about the basic rules for a word wrap feature. I suppose some un-wrapped text will be given to us. We will know the number of characters per line, and we will want the text wrapped. So, the first thing that comes to my mind is: if the text has more characters than fit on one line, we should replace the last space character that still fits on the line with a newline.

Okay, that would sum up the behavior of the system, but it is much too complicated for any test. For example, what about when a single word is longer than the number of characters allowed on a line? Hmmm… this looks like an edge case; we can’t replace a space with a new line since we have no spaces on that line. We should force wrap the word, effectively splitting it into two.

These ideas should be clear enough to the point that we can start programming. We’ll need a project and a class. Let’s call it Wrapper.

Step 4 - Starting the Project and Creating the First Test

Let’s create our project. There should be a main folder for source classes, and a Tests/ folder, naturally, for the tests.

The first file we will create is a test within the Tests folder. All our future tests will be contained within this folder, so I will not specify it explicitly again in this tutorial. Name the test class something descriptive, but simple. WrapperTest will do for now; our first test looks something like this:

require_once dirname(__FILE__) . '/../Wrapper.php';

class WrapperTest extends PHPUnit_Framework_TestCase {

	function testCanCreateAWrapper() {
		$wrapper = new Wrapper();
	}

}

Remember! We are not allowed to write any production code before a failing test – not even a class declaration! That’s why I wrote the first simple test above, testCanCreateAWrapper. Some consider this step useless, but I consider it a nice opportunity to think about the class we are going to create. Do we need a class? What should we call it? Should it be static?

When you run the test above, you will receive a Fatal Error message, like the following:

PHP Fatal error:  require_once(): Failed opening required '/path/to/WordWrapPHP/Tests/../Wrapper.php' (include_path='.:/usr/share/php5:/usr/share/php') in /path/to/WordWrapPHP/Tests/WrapperTest.php on line 3

Yikes! We should do something about it. Create an empty Wrapper class in the project’s main folder.

class Wrapper {}

That’s it. If you run the test again, it passes. Congratulations on your first test!

Step 5 - The First Real Test

So we have our project set up and running; now we need to think about our first real test.

What would be the simplest…the dumbest…the most basic test that would make our current production code fail? Well, the first thing that comes to mind is “Give it a short enough word, and expect the result to be unchanged.” This sounds doable; let’s write the test.

require_once dirname(__FILE__) . '/../Wrapper.php';

class WrapperTest extends PHPUnit_Framework_TestCase {

	function testDoesNotWrapAShorterThanMaxCharsWord() {
		$wrapper = new Wrapper();
		$this->assertEquals('word', $wrapper->wrap('word', 5));
	}

}

That looks fairly complicated. What does “MaxChars” in the function name mean? What does 5 in the wrap method refer to?

I think something is not quite right here. Isn’t there a simpler test that we can run? Yes, there certainly is! What if we wrap … nothing – an empty string? That sounds good. Delete the complicated test above, and, instead, add our new, simpler one, shown below:

require_once dirname(__FILE__) . '/../Wrapper.php';

class WrapperTest extends PHPUnit_Framework_TestCase {

	function testItShouldWrapAnEmptyString() {
		$wrapper = new Wrapper();
		$this->assertEquals('', $wrapper->wrap(''));
	}

}

This is much better. The name of the test is easy to understand, we have no magic strings or numbers, and most of all, IT FAILS!

Fatal error: Call to undefined method Wrapper::wrap() in ...

As you can observe, I deleted our very first test. It is useless to explicitly check if an object can be initialized, when other tests also need it. This is normal. With time, you will find that deleting tests is a common thing. Tests, especially unit tests, have to run fast – really fast… and frequently – very frequently. Considering this, eliminating redundancy in tests is important. Imagine that you run thousands of tests every time you save the project. It should take no more than a couple of minutes, maximum, for them to run. So, don’t be terrified to delete a test, if necessary.

Getting back to our production code, let’s make that test pass:

class Wrapper {

	function wrap($text) {}

}

Above, we’ve added absolutely no more code than is necessary to make the test pass.

Step 6 - Pressing On

Now, for the next failing test:

	function testItDoesNotWrapAShortEnoughWord() {
		$wrapper = new Wrapper();
		$this->assertEquals('word', $wrapper->wrap('word', 5));
	}

Failure message:

Failed asserting that null matches expected 'word'.

And the code that makes it pass:

	function wrap($text) {
		return $text;
	}
Wow! That was easy, wasn’t it?

While we are in the green, observe that our test code can begin to rot. We need to refactor a few things. Remember: always refactor when your tests pass; this is the only way that you can be certain that you’ve refactored correctly.

First, let’s remove the duplication of the initialization of the wrapper object. We can do this only once in the setUp() method, and use it for both tests.

class WrapperTest extends PHPUnit_Framework_TestCase {

	private $wrapper;

	function setUp() {
		$this->wrapper = new Wrapper();
	}

	function testItShouldWrapAnEmptyString() {
		$this->assertEquals('', $this->wrapper->wrap(''));
	}

	function testItDoesNotWrapAShortEnoughWord() {
		$this->assertEquals('word', $this->wrapper->wrap('word', 5));
	}

}

The setUp() method will run before each test.

Next, there are some ambiguous bits in the second test. What is ‘word’? What is ’5′? Let’s make it clear so that the next programmer who reads these tests doesn’t have to guess.

Never forget that your tests are also the most up-to-date documentation for your code.

Another programmer should be able to read the tests as easily as they would read the documentation.

	function testItDoesNotWrapAShortEnoughWord() {
		$textToBeParsed = 'word';
		$maxLineLength = 5;
		$this->assertEquals($textToBeParsed, $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

Now, read this assertion again. Doesn’t that read better? Of course it does. Don’t be afraid of lengthy variable names for your tests; auto-completion is your friend! It’s better to be as descriptive as possible.

Now, for the next failing test:

	function testItWrapsAWordLongerThanLineLength() {
		$textToBeParsed = 'alongword';
		$maxLineLength = 5;
		$this->assertEquals("along\nword", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

And the code that makes it pass:

	function wrap($text, $lineLength) {
		if (strlen($text) > $lineLength)
			return substr($text, 0, $lineLength) . "\n" . substr($text, $lineLength);
		return $text;
	}

That’s the obvious code to make our last test pass. But be careful – it is also the code that makes our first test fail!

We have two options to fix this problem:

  • modify the code – make the second parameter optional
  • modify the first test – and make it call the code with a parameter

If you choose the first option, making the parameter optional, that presents a little problem with the current code. An optional parameter is also initialized with a default value. What could such a value be? Zero might sound logical, but it would imply writing code just to treat that special case. Setting a very large number, so that the first if statement never evaluates to true, could be another solution. But what is that number? Is it 10? Is it 10000? Is it 10000000? We can’t really say.

Considering all these, I will just modify the first test:

	function testItShouldWrapAnEmptyString() {
		$this->assertEquals('', $this->wrapper->wrap('', 0));
	}

Again, all green. We can now move on to the next test. Let’s make sure that, if we have a very long word, it will wrap on several lines.

	function testItWrapsAWordSeveralTimesIfItsTooLong() {
		$textToBeParsed = 'averyverylongword';
		$maxLineLength = 5;
		$this->assertEquals("avery\nveryl\nongwo\nrd", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

This obviously fails, because our actual production code wraps only once.

Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@

Can you smell the while loop coming? Well, think again. Is a while loop the simplest code that would make the test pass?

According to ‘Transformation Priorities’ (by Robert C. Martin), it is not. Recursion is always simpler than a loop and it is much more testable.

	function wrap($text, $lineLength) {
		if (strlen($text) > $lineLength)
			return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
		return $text;
	}

Can you even spot the change? It was a simple one: instead of concatenating with the rest of the string, we concatenate with the return value of calling ourselves with the rest of the string. Perfect!
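If you'd like to experiment with this recursion outside of PHPUnit, here is a quick Python transliteration of the method above (an illustrative sketch only; the article's real code is the PHP version):

```python
def wrap(text, line_length):
    # Recursive force-wrap: emit one full line, then recurse on the remainder.
    if len(text) > line_length:
        return text[:line_length] + "\n" + wrap(text[line_length:], line_length)
    return text

# 'averyverylongword' at 5 chars per line: avery, veryl, ongwo, rd
print(wrap("averyverylongword", 5))
```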

Step 7 - Just Two Words

The next simplest test? What about wrapping two words, when there is a space at the end of the line?

	function testItWrapsTwoWordsWhenSpaceAtTheEndOfLine() {
		$textToBeParsed = 'word word';
		$maxLineLength = 5;
		$this->assertEquals("word\nword", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

That fits nicely. However, the solution may be getting a bit trickier this time.

At first, you might reach for str_replace() to get rid of the space and insert a new line. Don’t; that road leads to a dead end.

The second most obvious choice would be an if statement. Something like this:

	function wrap($text, $lineLength) {
		if (strpos($text, ' ') == $lineLength)
			return substr($text, 0, strpos($text, ' ')) . "\n" . $this->wrap(substr($text, strpos($text, ' ') + 1), $lineLength);
		if (strlen($text) > $lineLength)
			return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
		return $text;
	}

However, that enters an endless loop, which will cause the tests to error out.

PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted

This time, we need to think! The problem is that our first test uses a text with a length of zero. Also, strpos() returns false when it can’t find the string. Comparing false with zero … is? True. This is bad for us, because the recursion becomes infinite. The solution? Let’s change the first condition. Instead of searching for a space and comparing its position with the line’s length, let’s directly take the character at the position indicated by the line’s length. We will do a substr() only one character long, starting at just the right spot in the text.

	function wrap($text, $lineLength) {
		if (substr($text, $lineLength - 1, 1) == ' ')
			return substr($text, 0, strpos($text, ' ')) . "\n" . $this->wrap(substr($text, strpos($text, ' ') + 1), $lineLength);
		if (strlen($text) > $lineLength)
			return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
		return $text;
	}

But, what if the space is not right at the end of line?

	function testItWrapsTwoWordsWhenLineEndIsAfterFirstWord() {
		$textToBeParsed = 'word word';
		$maxLineLength = 7;
		$this->assertEquals("word\nword", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

Hmm… we have to revise our conditions again. I am thinking that we will, after all, need that search for the position of the space character.

	function wrap($text, $lineLength) {
		if (strlen($text) > $lineLength) {
			if (strpos(substr($text, 0, $lineLength), ' ') != 0)
				return substr($text, 0, strpos($text, ' ')) . "\n" . $this->wrap(substr($text, strpos($text, ' ') + 1), $lineLength);
			return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
		}
		return $text;
	}

Wow! That actually works. We moved the first condition inside the second one so that we avoid the endless loop, and we added the search for space. Still, it looks rather ugly. Nested conditions? Yuck. It’s time for some refactoring.

	function wrap($text, $lineLength) {
		if (strlen($text) <= $lineLength)
			return $text;
		if (strpos(substr($text, 0, $lineLength), ' ') != 0)
			return substr($text, 0, strpos($text, ' ')) . "\n" . $this->wrap(substr($text, strpos($text, ' ') + 1), $lineLength);
		return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
	}

That’s better.

Step 8 - What About Multiple Words?

Nothing bad can happen as a result of writing a test.

The next simplest test would be to have three words wrapping on three lines. But that test passes. Should you write a test when you know it will pass? Most of the time, no. But, if you have doubts, or you can imagine obvious changes to the code that would make the new test fail and the others pass, then write it! Nothing bad can happen as a result of writing a test. Also, consider that your tests are your documentation. If your test represents an essential part of your logic, then write it!

Further, the fact that the tests we came up with are passing is an indication that we are getting close to a solution. Obviously, once you have a working algorithm, any test you write will pass.

Now – three words on two lines with the line ending inside the last word; now, that fails.

	function testItWraps3WordsOn2Lines() {
		$textToBeParsed = 'word word word';
		$maxLineLength = 12;
		$this->assertEquals("word word\nword", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

I nearly expected this one to work. When we investigate the error, we get:

Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'word word
-word'
+'word
+word word'

Yep. We should wrap at the rightmost space in a line.

	function wrap($text, $lineLength) {
		if (strlen($text) <= $lineLength)
			return $text;
		if (strpos(substr($text, 0, $lineLength), ' ') != 0)
			return substr($text, 0, strrpos($text, ' ')) . "\n" . $this->wrap(substr($text, strrpos($text, ' ') + 1), $lineLength);
		return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
	}

Simply replace the strpos() with strrpos() inside the second if statement.

Step 9 - Other Failing Tests? Edge Cases?

Things are getting trickier. It’s fairly hard to find a failing test … or any test, for that matter, that was not yet written.

This is an indication that we are quite close to a final solution. But, hey, I just thought of a test that will fail!

	function testItWraps2WordsOn3Lines() {
		$textToBeParsed = 'word word';
		$maxLineLength = 3;
		$this->assertEquals("wor\nd\nwor\nd", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

But, I was wrong. It passes. Hmm… Are we done? Wait! What about this one?

	function testItWraps2WordsAtBoundary() {
		$textToBeParsed = 'word word';
		$maxLineLength = 4;
		$this->assertEquals("word\nword", $this->wrapper->wrap($textToBeParsed, $maxLineLength));
	}

It fails! Excellent. When the line has the same length as the word, we want the second line to not begin with a space.

Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
 'word
-word'
+ wor
+d'

There are several solutions. We could introduce another if statement to check for the starting space. That would fit in with the rest of the conditionals that we’ve created. But, isn’t there a simpler solution? What if we just trim() the text?

	function wrap($text, $lineLength) {
		$text = trim($text);
		if (strlen($text) <= $lineLength)
			return $text;
		if (strpos(substr($text, 0, $lineLength), ' ') != 0)
			return substr($text, 0, strrpos($text, ' ')) . "\n" . $this->wrap(substr($text, strrpos($text, ' ') + 1), $lineLength);
		return substr($text, 0, $lineLength) . "\n" . $this->wrap(substr($text, $lineLength), $lineLength);
	}

There we go.
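As a closing illustration (not part of the original article), the six-line algorithm ports almost line-for-line to other languages. Here is a Python sketch of the same logic, which passes every case we tested above:

```python
def wrap(text, line_length):
    # Python port of the final PHP wrap(): trim, return short text as-is,
    # break at the rightmost space when the line contains one, else force-wrap.
    text = text.strip()
    if len(text) <= line_length:
        return text
    if text[:line_length].find(' ') > 0:  # mirrors strpos(...) != 0 in PHP
        cut = text.rfind(' ')             # mirrors strrpos($text, ' ')
        return text[:cut] + "\n" + wrap(text[cut + 1:], line_length)
    return text[:line_length] + "\n" + wrap(text[line_length:], line_length)

# Prints two lines: 'word word', then 'word'
print(wrap("word word word", 12))
```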

Step 10 - We Are Done

At this point, I can’t invent any failing test to write. We must be done! We’ve now used TDD to build a simple, but useful, six-line algorithm.

A few words on stopping and “being done.” If you use TDD, you force yourself to think about all sorts of situations. You then write tests for those situations, and, in the process, begin to understand the problem much better. Usually, this process results in an intimate knowledge of the algorithm. If you can’t think of any other failing tests to write, does this mean that your algorithm is perfect? Not necessarily, unless there is a predefined set of rules. TDD does not guarantee bug-free code; it merely helps you write better code that can be better understood and modified.

Even better, if you do discover a bug, it’s that much easier to write a test that reproduces the bug. This way, you can ensure that the bug never occurs again – because you’ve tested for it!

Final Notes

You may argue that this process is not technically “TDD.” And you’re right! This example is closer to how many everyday programmers work. If you want a true “TDD as you mean it” example, please leave a comment below, and I’ll plan to write one in the future.

Thanks for reading!

Tags: PHP TDD Testing

February 05 2012


JavaScript Testing From Scratch: New on Premium

This likely isn’t the first tutorial on testing that you’ve ever seen. But perhaps you’ve had your doubts about testing, and never took the time to read them. After all, it can seem like extra work for no reason.

This tutorial (with screencasts) intends to change your views. We’re going to start at the very beginning: what is testing and why should you do it? Then, we’ll talk briefly about writing testable code, before actually, you know, doing some testing! Let’s get to it.

Become a Premium member to read this tutorial/screencast, as well as hundreds of other advanced tutorials and screencasts from the Tuts+ network.

Tuts+ Premium

The recently re-launched Tuts+ Premium is a service that provides top-tier training in a variety of creative fields. Whether you prefer books, visual training, or in depth tutorials, we have you covered. While we unfortunately can’t afford to provide the service for free, it’s only $19 a month – less than you’d spend on dinner.

I hope you’ll consider checking it out! In addition to learning a huge variety of new skills, it’s also a fantastic way to say thank you to Nettuts+.

January 21 2012


Testing your PHP Codebase with EnhancePHP

You know it; I know it. We should be testing our code more than we do. Part of the reason we don’t, I think, is that we don’t know exactly how. Well, I’m getting rid of that excuse today: I’m teaching you to test your PHP with the EnhancePHP framework.

Meet EnhancePHP

I’m not going to try to convince you to test your code; and we’re not going to discuss Test Driven Development, either. That’s been done before on Nettuts+. In that article, Nikko Bautista explains exactly why testing is a good thing and outlines a TDD workflow. Read that sometime, if you aren’t familiar with TDD. He also uses the SimpleTest library for his examples, so if you don’t like the look of EnhancePHP, you might try SimpleTest as an alternative.

As I said, we’ll be using EnhancePHP. It’s a great little PHP library – a single file – that offers a lot of testing functionality.

Start by heading over to their download page and grabbing the latest version of the framework.

We’re going to be building a really simple Validation class to test. It won’t do too much: just return true if the item passes validation, or false if it doesn’t. So, set up a really simple little project:

We’ll do this in a semi-TDD fashion, so let’s start by writing a few tests.

Writing Tests

Our little class is going to validate three things: email addresses, usernames, and phone numbers.

But before we get to writing actual tests, we’ll need to set up our class:


class Validation_test extends \Enhance\TestFixture {

	public function setUp () {
		$this->val = new Validation();
	}

}

This is our start; notice that we’re extending the class \Enhance\TestFixture. By doing so, we let EnhancePHP know that any public methods of this class are tests, with the exception of the setUp and tearDown methods. As you might guess, these methods run before and after all your tests (not before and after each one). In this case, our setUp method will create a new Validation instance and assign it to a property on our instance.

By the way, if you’re relatively new to PHP, you might not be familiar with that \Enhance\TestFixture syntax: what’s with the backslashes? That’s PHP namespacing for you; check out the docs if you aren’t familiar with it.

So, the tests!

Email Addresses

Let’s start by validating email addresses. As you’ll see, just doing a basic test is pretty simple:

public function validates_a_good_email_address () {
	$result = $this->val->validate_email("john@example.com");
	\Enhance\Assert::isTrue($result);
}

We simply call the method we want to test, passing it a valid email address, and storing the $result. Then, we hand $result to the isTrue method. That method belongs to the \Enhance\Assert class.

We want to make sure our class will reject non-email addresses. So, let’s test for that:

public function reject_bad_email_addresses () {
	$val_wrapper = \Enhance\Core::getCodeCoverageWrapper('Validation');
	$val_email = \Enhance\Core::getScenario($val_wrapper, 'validate_email');
	$addresses = array("john", "jo!", "john@doe.", "jo*");

	foreach ($addresses as $addr) {
		$val_email->with($addr)->expect(false);
	}

	$val_email->verifyExpectations();
}
This introduces a pretty cool feature of EnhancePHP: scenarios. We want to test a bunch of non-email addresses to make sure our method will return false. By creating a scenario, we essentially wrap an instance of our class in some EnhancePHP goodness, and write much less code to test all our non-addresses. That’s what $val_wrapper is: a modified instance of our Validation class. Then, $val_email is the scenario object, somewhat like a shortcut to the validate_email method.

Then, we’ve got an array of strings that should not validate as email addresses. We’ll loop over that array with a foreach loop. Notice how we run the test: we call the with method on our scenario object, passing it the parameters for the method we’re testing. Then, we call the expect method on that, and pass it whatever we expect to get back.

Finally, we call the scenario’s verifyExpectations method.

So, the first tests are written; how do we run them?

Running Tests

Before we actually run the tests, we’ll need to create our Validation class. Inside lib/validation.php, start with this:


class Validation {

	public function validate_email ($address) {}

}

Now, in test.php, we’ll pull it all together:


require "vendor/EnhanceTestFramework.php";
require "lib/validation.php";
require "test/validation_test.php";


First, we’ll require all the necessary files. Then, we call the runTests method, which finds our tests.

Next comes the neat part. Fire up a server, and you’ll get some nice HTML output:

Very nice, right? Now, if you’ve got PHP in your terminal, run php test.php from the command line:

EnhancePHP notices that you’re in a different environment, and adjusts its output appropriately. A side benefit of this is that if you’re using an IDE, like PhpStorm, that can run unit tests, you can view this terminal output right inside the IDE.

You can also get XML and TAP output, if that’s what you prefer: just pass \Enhance\TemplateType::Xml or \Enhance\TemplateType::Tap to the runTests method to get the appropriate output. Note that running it in the terminal will also produce command-line results, no matter what you pass to runTests.

Getting the Tests to Pass

Let’s write the method that causes our tests to pass. As you know, that’s the validate_email. At the top of the Validation class, let’s define a public property:

public $email_regex = '/^[\w+-_.]+@[\w.]+\.\w+$/';

I’m putting this in a public property so that if the user wants to replace it with their own regex, they could. I’m using this simple version of an email regex, but you can replace it with your favourite regex if you want.

Then, there’s the method:

public function validate_email ($address) {
	return preg_match($this->email_regex, $address) == 1;
}

Now, we run the tests again, and:

Writing More Tests

Time for more tests:


Let’s create some tests for usernames now. Our requirements are simply that it must be a 4 to 20 character string consisting only of word characters or periods. So:

public function validates_a_good_username () {
	$result = $this->val->validate_username("some_user_name.12");
	\Enhance\Assert::isTrue($result);
}

Now, how about a few usernames that shouldn’t validate:

public function rejects_bad_usernames () {
	$val_username = $this->get_scenario('validate_username');
	$usernames = array(
		"name with space"
	);

	foreach ($usernames as $name) {
		$val_username->with($name)->expect(false);
	}

	$val_username->verifyExpectations();
}
This is very similar to our reject_bad_email_addresses function. Notice, however, that we’re calling this get_scenario method: where’s that come from? I’m abstracting the scenario creation functionality into a private method, at the bottom of our class:

private function get_scenario ($method) {
	$val_wrapper = \Enhance\Core::getCodeCoverageWrapper('Validation');
	return \Enhance\Core::getScenario($val_wrapper, $method);
}
We can use this in our reject_bad_usernames and replace the scenario creation in reject_bad_email_addresses as well. Because this is a private method, EnhancePHP won’t try to run it as a normal test, the way it will with public methods.

We’ll make these tests pass similarly to how we made the first set pass:

# At the top . . .
public $username_regex = '/^[\w.]{4,20}$/';

# and the method . . .
public function validate_username ($username) {
	return preg_match($this->username_regex, $username) == 1;
}

This is pretty basic, of course, but that’s all that’s needed to meet our goal. If we wanted to return an explanation in the case of failure, you might do something like this:

public function validate_username ($username) {
	$len = strlen($username);
	if ($len < 4 || $len > 20) {
		return "Username must be between 4 and 20 characters";
	} elseif (preg_match($this->username_regex, $username) == 1) {
		return true;
	} else {
		return "Username must only include letters, numbers, underscores, or periods.";
	}
}
Of course, you might also want to check if the username already exists.

Now, run the tests and you should see them all passing.

Phone Numbers

I think you’re getting the hang of this by now, so let’s finish off our validation example by checking phone numbers:

public function validates_good_phonenumbers () {
	$val_phonenumber = $this->get_scenario("validate_phonenumber");
	$numbers = array("1234567890", "(890) 123-4567",
		"123-456-7890", "123 456 7890", "(123) 456 7890");

	foreach ($numbers as $num) {
		$val_phonenumber->with($num)->expect(true);
	}

	$val_phonenumber->verifyExpectations();
}

public function rejects_bad_phonenumbers () {
	$result = $this->val->validate_phonenumber("123456789012");
	\Enhance\Assert::isFalse($result);
}

You can probably figure out the Validation method:

public $phonenumber_regex = '/^\d{10}$|^(\(?\d{3}\)?[ |-]\d{3}[ |-]\d{4})$/';

public function validate_phonenumber ($number) {
	return preg_match($this->phonenumber_regex, $number) == 1;
}
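As a quick cross-check of that pattern (again an illustrative Python sketch; the sample numbers are the ones from our tests):

```python
import re

# Ten bare digits, or a 3-3-4 grouping with optional parentheses
# and space/pipe/hyphen separators.
PHONE_REGEX = re.compile(r'^\d{10}$|^(\(?\d{3}\)?[ |-]\d{3}[ |-]\d{4})$')

good = ("1234567890", "(890) 123-4567", "123-456-7890",
        "123 456 7890", "(123) 456 7890")
for num in good:
    assert PHONE_REGEX.match(num)
assert not PHONE_REGEX.match("123456789012")  # too many digits
print("phone pattern behaves as expected")
```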

Now, we can run all the tests together. Here’s what that looks like from the command line (my preferred testing environment):

Other Test Functionality

Of course, EnhancePHP can do a lot more than what we’ve looked at in this little example. Let’s look at some of that now.

We very briefly met the \Enhance\Assert class in our first test. We didn’t really use it otherwise, because it’s not useful when using scenarios. However, it’s where all the assertion methods live. The beauty of them is that their names make their functionality incredibly obvious. The following example assertions would pass:

  • \Enhance\Assert::areIdentical("Nettuts+", "Nettuts+")
  • \Enhance\Assert::areNotIdentical("Nettuts+", "Psdtuts+")
  • \Enhance\Assert::isTrue(true)
  • \Enhance\Assert::isFalse(false)
  • \Enhance\Assert::contains("Net", "Nettuts+")
  • \Enhance\Assert::isNull(null)
  • \Enhance\Assert::isNotNull('Nettuts+')
  • \Enhance\Assert::isInstanceOfType('Exception', new Exception(""))
  • \Enhance\Assert::isNotInstanceOfType('String', new Exception(""))

There are a few other assertion methods, too; you can check the docs for a complete list and examples.


EnhancePHP can also do mocks and stubs. Haven’t heard of mocks and stubs? Well, they aren’t too complicated. A mock is a wrapper for an object that can keep track of which methods are called, with what parameters they are called, and what values are returned. A mock will have some expectations to verify, as we’ll see.
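The distinction can be sketched in a few lines of Python before we look at the PHP example (the class names here are invented for illustration; EnhancePHP generates this machinery for you):

```python
class CounterMock:
    """Mock: records every call so the test can verify interactions later."""
    def __init__(self):
        self.calls = []

    def increment(self, num=1):
        self.calls.append(("increment", num))
        return 0  # a mock cares about how it was called, not what it returns

class CounterStub:
    """Stub: stands in for the real object and returns a canned value."""
    def __init__(self, canned):
        self.canned = canned

    def increment(self, num=1):
        return self.canned

mock = CounterMock()
mock.increment()
assert mock.calls == [("increment", 1)]  # verify the expected interaction

stub = CounterStub(10)
assert stub.increment() == 10            # the canned value comes straight back
```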

Here’s a small example of a mock. Let’s start with a very simple class that counts:


require "vendor/EnhanceTestFramework.php";

class Counter {
  public $num = 0;
  public function increment ($num = 1) {
    $this->num = $this->num + $num;
    return $this->num;

We have one function: increment, that accepts a parameter (but defaults to 1), and increments the $num property by that number.

We might use this class if we were building a scoreboard:

class Scoreboard {
  public $home = 0;
  public $away = 0;

  public function __construct ($home, $away) {
    $this->home_counter = $home;
    $this->away_counter = $away;
  }

  public function score_home () {
    $this->home = $this->home_counter->increment();
    return $this->home;
  }

  public function score_away () {
    $this->away = $this->away_counter->increment();
    return $this->away;
  }
}

Now, we want to test to make sure that the Counter instance method increment is working properly when the Scoreboard instance methods call it. So we create this test:

class ScoreboardTest extends \Enhance\TestFixture {
  public function score_home_calls_increment () {
    $home_counter_mock = \Enhance\MockFactory::createMock("Counter");
    $away_counter = new Counter();

    $home_counter_mock->addExpectation( \Enhance\Expect::method('increment') );

    $scoreboard = new Scoreboard($home_counter_mock, $away_counter);
    $scoreboard->score_home();

    $home_counter_mock->verifyExpectations();
  }
}


Notice that we start by creating $home_counter_mock: we use the EnhancePHP mock factory, passing it the name of the class we’re mocking. This returns a “wrapped” instance of Counter. Then, we add an expectation with this line:

$home_counter_mock->addExpectation( \Enhance\Expect::method('increment') );

Our expectation just says that we expect the increment method to be called.

After that, we go on to create the Scoreboard instance, and call score_home. Then, we verifyExpectations. If you run this, you’ll see that our test passes.

We could also state what parameters we want a method on the mock object to be called with, what value is returned, or how many times the method should be called, with something like this:

$home_counter_mock->addExpectation( \Enhance\Expect::method('increment')->with(10) );
$home_counter_mock->addExpectation( \Enhance\Expect::method('increment')->times(2) );
$home_counter_mock->addExpectation( \Enhance\Expect::method('increment')->returns(1) );
$home_counter_mock->addExpectation( \Enhance\Expect::method('increment')->with(3)->times(1) );
$home_counter_mock->addExpectation( \Enhance\Expect::method('increment')->with(2)->returns(2) );

I should mention that, while with and times will show failed tests if the expectations aren’t met, returns doesn’t. You’ll have to store the return value and use an assertion to verify that. I’m not sure why that’s the case, but every library has its quirks :). (You can see an example of this in the library examples on GitHub.)


Then, there are stubs. A stub fills in for a real object and method, returning exactly what you tell it to. So, let’s say we want to make sure that our Scoreboard instance correctly uses the value it receives from increment; we can stub a Counter instance, so we can control what increment will return:

class ScoreboardTest extends \Enhance\TestFixture {
  public function score_home_calls_increment () {
    $home_counter_stub = \Enhance\StubFactory::createStub("Counter");
    $away_counter = new Counter();

    $home_counter_stub->addExpectation( \Enhance\Expect::method('increment')->returns(10) );

    $scoreboard = new Scoreboard($home_counter_stub, $away_counter);
    $result = $scoreboard->score_home();

    \Enhance\Assert::areIdentical($result, 10);
  }
}


Here, we’re using \Enhance\StubFactory::createStub to create our stub counter. Then, we add an expectation that the increment method will return 10. We can see that the result is what we’d expect, given our code.

For more examples of mocks and stubs with the EnhancePHP library, check out the GitHub repo.


Well, that’s a look at testing in PHP, using the EnhancePHP framework. It’s an incredibly simple framework, but it provides everything you need to do some simple unit testing on your PHP code. Even if you choose a different method/framework for testing your PHP (or perhaps roll your own!), I hope this tutorial has sparked an interest in testing your code, and how simple it can be.

But maybe you already test your PHP. Let us all know what you use in the comments; after all, we’re all here to learn from each other! Thank you so much for stopping by!

January 15 2012


Shim uses node.js to test sites on multiple browsers

Shim was developed within the Boston Globe’s media lab as a way to study how Web sites look on various devices and browsers. A laptop intercepts all wifi traffic, which is redirected to a custom node.js server; the server inserts a piece of JavaScript, or “shim,” at the head of each web page that is visited.

The shim, once loaded in a device’s browser, opens and maintains a socket connection to the server, according to Shim’s developers. Shim was written in 2011 by Chris Marstall, Creative Technologist at the Boston Globe. The software has been open sourced. The Shim originators write on GitHub:

Whenever a new page is requested, the page’s URL is broadcast to all connected browsers, which then redirect themselves to that URL, keeping all devices in sync. Shim info is available on GitHub.

June 11 2011


Code injection, error throwing

In a blog post, Opera Software Developer Relations team member Tiffany B. Brown looks at code injection, error throwing and handling, and mobile debugging. She notes that Opera Dragonfly and its remote debugging features provide a way to debug mobile sites from the desktop. Brown mentions WebKit’s recently added remote debugging capabilities, folded into the Google Chrome developer tools. Pointed to as well are Bugaboo, an iOS app for Safari-based debugging; JS Console, which is available on the Web or as an iOS app; and Weinre for WebKit-based browsers. In this entry, Brown looks more closely at Dragonfly remote debugging and JS Console.



April 08 2011


Best Practices for testing revisited

With Google and their apps like Search, Docs or Gmail, only a very small amount of time is actually spent in the initial page load, writes Andreas Grabner in a recent blog post. Far more time is spent in JavaScript, XHR calls and DOM manipulations triggered by user actions. Grabner writes:

It is very important to speed up Page Load Time – don’t get me wrong. It is the initial perceived performance by a user who interacts with your site. But it is not all we need to focus on. Most of the time in modern web applications is spent in JavaScript, DOM Manipulations, XHR Calls and Rendering that happen after the initial page load. Automatic verification against Best Practices won’t work here anymore because we have to analyze individual user actions that do totally different things. The way this will work is to analyze the individual user actions, track performance metrics and automate regression detection based on these measured values.

April 04 2011


Multivariate Testing 101: A Scientific Method Of Optimizing Design


In a previous article on Smashing Magazine, I described A/B testing and various resources related to it. I have also covered the basics of multivariate testing in the past; in this post, I’ll go deeper into the technical details of multivariate testing, which is similar to A/B testing but has crucial differences.

In a multivariate test, a Web page is treated as a combination of elements (including headlines, images, buttons and text) that affect the conversion rate. Essentially, you decompose a Web page into distinct units and create variations of those units. For example, if your page is composed of a headline, an image and accompanying text, then you would create variations for each of them. To illustrate the example, let’s assume you make the following variations:

  • Headline: headline 1 and headline 2
  • Text: text 1 and text 2
  • Image: image 1 and image 2

The scenario above has three variables (headline, text and image), each with two versions. In a multivariate test, your objective is to see which combination of these versions achieves the highest conversion rate. By combinations, I mean one of the eight (2 × 2 × 2) versions of the Web page that we’ll come up with when we combine variations of the sections:

  • Headline 1 + Text 1 + Image 1
  • Headline 1 + Text 1 + Image 2
  • Headline 1 + Text 2 + Image 1
  • Headline 1 + Text 2 + Image 2
  • Headline 2 + Text 1 + Image 1
  • Headline 2 + Text 1 + Image 2
  • Headline 2 + Text 2 + Image 1
  • Headline 2 + Text 2 + Image 2

In multivariate testing, you split traffic between these eight different versions of the page and see which combination produces the highest conversion rate — just like in A/B testing, where you split traffic between two versions of a page.
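The eight versions above are simply the Cartesian product of the section variations, which is easy to enumerate in code. A small illustrative sketch (the combinations() helper and the section names are my own, taken from the example above):

```php
<?php
// Enumerate every page version as the Cartesian product of the
// section variations from the example above.
function combinations(array $sections)
{
    $result = array(array());
    foreach ($sections as $section => $variations) {
        $next = array();
        foreach ($result as $combo) {
            foreach ($variations as $variation) {
                // Extend each partial combination with every variation
                // of the current section.
                $next[] = array_merge($combo, array($section => $variation));
            }
        }
        $result = $next;
    }
    return $result;
}

$sections = array(
    'headline' => array('headline 1', 'headline 2'),
    'text'     => array('text 1', 'text 2'),
    'image'    => array('image 1', 'image 2'),
);

$combos = combinations($sections);
echo count($combos); // 8, i.e. 2 x 2 x 2
```

Each entry of $combos corresponds to one full page version, e.g. the first is headline 1 + text 1 + image 1, matching the first bullet in the list above.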

Getting Started With Multivariate Testing

To create your first multivariate test, first choose a tool or framework that supports multivariate testing. You can use one of the tools listed in the “Tools” section at the end of this article. Please note that not all A/B testing tools support multivariate testing, so make sure your tool of choice allows it.

Once you’ve decided which tool to use, choose which sections to include in the test. As you know, a Web page can contain tens or hundreds of different sections (footer, headline, sidebar, log-in form, navigation buttons, etc.). You cannot include all of these sections in the test; creating variations for all of them would be an enormous task (and, as you’ll read below, the traffic requirements for the test will grow exponentially with each new section). Narrow it down to the few sections of the page that you think are most important to the conversion goal.

The following parts of a page (listed in order of importance) are typically included in a multivariate test:

  • Headline and heading,
  • Call-to-action buttons (color, text, size, placement),
  • Text copy (content, length, size),
  • Image (type, placement, size),
  • Form length.

The Difference Between A/B Testing And Multivariate Testing

Conceptually, the two techniques are similar, but there are crucial differences. First and foremost, the traffic requirements are different. As I said, the number of combinations that need to be tested grows exponentially in a multivariate test. You can test three or four versions in an A/B test and tens or hundreds of versions in a multivariate test. Clearly, then, a lot of traffic — and time — is required to arrive at meaningful results.

For example, if you have three sections with three variations each, the number of combinations is 27. Add another section with three variations, and the total number of combinations jumps to 81. If you want meaningful results, you can’t keep adding sections to the test. Be selective. A good rule is to limit the total number of combinations to 25 or fewer.

Use A/B testing for large scale changes, not to refine or optimize existing designs. Image by Meet the Chumbeques

Another difference is in how these techniques are used. A/B testing is usually reserved for large radical changes (such as completely changing a landing page or displaying two different offers). Multivariate testing is used to refine and optimize an existing design. For the mathematically inclined, A/B testing is used to optimize for a global optimum, while multivariate testing is used to optimize for a local optimum.

One advantage of multivariate testing over A/B split testing is that it can tell you which part of the page is most influential on conversion goals. Say you’re testing the headline, text and image on your landing page. How do you know which part has the most impact? Most multivariate testing tools will give you a metric, called the “impact factor,” in their reports that tells you which sections influence the conversion rate and which don’t. You don’t get this information from A/B testing because all sections are lumped into one variation.

Types Of Multivariate Tests

Based on how you distribute traffic to your combinations, there are several types of multivariate tests (MVT):

Full factorial testing
This is the kind people generally refer to when they talk about multivariate testing. By this method, one distributes website traffic equally among all combinations. If there are 16 combinations, each one will receive one-sixteenth of all the website traffic. Because each combination gets the same amount of traffic, this method provides all of the data needed to determine which particular combination and section performed best. You might discover that a certain image had no effect on the conversion rate, while the headline was most influential. Because the full factorial method makes no assumptions with regard to statistics or the mathematics of testing, I recommend it for multivariate testing.

Record and compare the resulting traffic for each tested version. Image by ItoWorld

Partial or fractional factorial testing
As the name suggests, in this method only a fraction of all combinations are exposed to website traffic. The conversion rate for unexposed combinations is inferred from the ones that were included in the test. For example, if there are 16 combinations, then traffic is split among only eight of those. For the remaining eight, we get no conversion data, and hence we need to resort to fancy mathematics (with a few assumptions) for insight. For obvious reasons, I don’t recommend this method: even though there are fewer traffic requirements for partial factorial testing, the method forces too many assumptions. No matter how advanced the mathematics are, hard data is always better than inference.

Taguchi testing
This is the most esoteric method of all. A quick Google search reveals a lot of tools claiming to cut your testing time and traffic requirements drastically with Taguchi testing. Some might disagree, but I believe the Taguchi method is a bit of a sham; it’s a set of heuristics, not a theoretically sound method. It was originally used in the manufacturing industry, where specific assumptions were made in order to decrease the number of combinations needing to be tested for QA and other experiments. These assumptions are not applicable to online testing, so you shouldn’t do any Taguchi testing. Stick to the other methods.

Do’s And Don’ts

I have observed hundreds of multivariate tests, and I have seen many people make the same mistakes. Here is some practical advice, direct from my experience.


  • Don’t include a lot of sections in the test.
    Every section you add effectively doubles the number of combinations to test. For example, if you’re testing a headline and image, then there are a total of four combinations (2 × 2). If you add a button to the test, there are suddenly eight combinations to test (2 × 2 × 2). The more combinations, the more traffic you’ll need to get significant results.


  • Do preview all combinations.
    In multivariate testing, variations of a section (image, headline, button, etc.) are combined to create page variations. One of the combinations might be odd-looking or, worse, illogical or incompatible. For example, one combination might put together a headline that says “$15 off” and a button that says “Free subscription.” Those two messages are incompatible. Detect and remove incompatibilities at the preview stage.
  • Do decide which sections are most worthy of inclusion in the test.
    In a multivariate test, not all sections will have an equal impact on the conversion rate. For example, if you include a headline, a call-to-action button and a footer, you might come to realize that footer variations have little impact, and that headline and call-to-action variations produce winning combinations. You get a powerful section-specific report. Below is a sample report from Visual Website Optimizer. Notice how the button has more impact (91%) than the headline (65%):

  • Do estimate the traffic needed for significant results.
    Before testing, get a clear idea of how much traffic you’ll need in order to get statistically significant results. I’ve seen people add tens of sections to a page that gets just 100 visitors per day. Significant results from such a test would take months to accumulate. I suggest using a calculator, such as this A/B split and multivariate testing duration calculator, to estimate how much traffic your test will require. If it’s more than what’s acceptable, reduce some sections.
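The arithmetic behind such a duration calculator can be sketched roughly. In this illustration, the visitors-per-combination figure is an assumed placeholder; a real calculator derives the required sample size from your baseline conversion rate, the minimum effect you want to detect, and the desired statistical confidence:

```php
<?php
// Rough back-of-the-envelope duration estimate for a multivariate test.
// The sample size per combination below is an assumed placeholder, not
// a statistically derived figure.
$combinations           = 8;    // e.g. 2 x 2 x 2 sections
$visitorsPerCombination = 2000; // assumed visitors needed per combination
$dailyVisitors          = 500;  // the page's daily traffic

$days = (int) ceil(($combinations * $visitorsPerCombination) / $dailyVisitors);
echo $days; // 32 days
```

Even with these generous assumptions, a low-traffic page needs about a month for eight combinations, which is why adding sections to a test is so costly.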
    Case Studies

    A lot of A/B testing case studies are on the Web, but unfortunately, finding multivariate test case studies is still difficult. So, I scoured the Internet and compiled relevant ones.

    Software Download Case Study: downloads increased by 60%
    This is one multivariate test I did to compare different versions of headlines and links. In the end, one of the variations resulted in a more than 60% increase in downloads.


    Microsoft Multivariate Testing Case Study
    This presentation details the variations that were tested for this website and the ultimate winner.

    SiteSpect Case Studies
    This page presents a dozen case studies of large companies using multivariate testing and behavioral targeting to optimize their sites.

    Maxymiser Case Studies
    Another set of multivariate testing case studies.

    Look Inside a 1,024-Recipe Multivariate Experiment
    YouTube did a gigantic multivariate test in 2009. It can afford to do tests with a thousand-plus combinations because it has sufficient traffic.

    Multivariate testing of an email newsletter
    An agency tested color and text on the call-to-action button of its email newsletter. The best button had the highest CTR: 60%.

    Multivariate Testing Tools And Resources


    Google Website Optimizer
    A free basic multivariate testing tool by Google. It’s great if you want to test the waters before investing money in multivariate testing. The downside? You’ll need to tag different sections of the Web page with JavaScript, which can be cumbersome. It’s also prone to error and forces you to rely on others (like the technology department) for implementation.

    Visual Website Optimizer
    The main advantage of this paid tool is that you can create a multivariate test visually in a WYSIWYG editor by choosing different sections of the page. You can then run the test without having to tag sections individually (although a snippet of code is required in the header). The tool includes heat map and click map reports.

    A website that publishes user reviews of all of the multivariate testing tools available on the market. If you are planning to adopt a multivariate testing tool for your organization, do your research on this website.

    Enterprise testing tools
    Omniture’s Test&Target, Autonomy’s Optimost, Vertster, Webtrends’ Optimize, and SiteSpect.


    Expert Guide to Multivariate Testing Success, by Jonathan Mendez
    A series of blog posts detailing different aspects of multivariate testing.

    Fail Faster With Multivariate Testing (PDF)
    An excellent free mini-guide to multivariate testing.

    Online Testing Vendor Landscape
    A commercial report by Forrester that compares the various testing vendors out there.

    Lessons Learned from 21 Case Studies in Conversion Rate Optimization
    This article discusses ideas for conversion rate optimization detailed through different case studies.



    © Paras Chopra for Smashing Magazine, 2011.
    Post tags: multivariate, testing

    August 16 2010


    The Newbie’s Guide to Test-Driven Development

    Testing your code is annoying, but the impact of not doing so can be orders of magnitude more annoying! In this article, we’ll use test-driven development to write and test our code more effectively.

    What is Test-Driven Development?

    Since the dawn of the computer era, programmers and bugs have battled for supremacy. It’s an inevitable occurrence. Even the greatest programmers fall prey to these anomalies. No code is safe. That’s why we do testing. Programmers, at least sane ones, test their code by running it on development machines to make sure it does what it’s supposed to.

    Sane programmer who tests his programs.

    Insane programmer who doesn’t test his programs.

    Test-driven development is a programming technique that requires you to write actual code and automated test code simultaneously. This ensures that you test your code—and enables you to retest your code quickly and easily, since it’s automated.

    How does it work?

    Test-driven development, or TDD as we’ll call it from now on, revolves around a short iterative development cycle that goes something like this:

    1. Before writing any code, you must first write an automated test for your code. While writing the automated tests, you must take into account all possible inputs, errors, and outputs. This way, your mind is not clouded by any code that’s already been written.
    2. The first time you run your automated test, the test should fail—indicating that the code is not yet ready.
    3. Afterward, you can begin programming. Since there’s already an automated test, as long as the code fails it, it means that it’s still not ready. The code can be fixed until it passes all assertions.
    4. Once the code passes the test, you can then begin cleaning it up, via refactoring. As long as the code still passes the test, it means that it still works. You no longer have to worry about changes that introduce new bugs.
    5. Start the whole thing over again with some other method or program.

    Great, but how is this better than regular testing?

    Have you ever purposefully skipped testing a program because:

    • You felt it was a waste of time to test, since it was only a slight code change?
    • You felt lazy testing everything again?
    • You didn’t have enough time to test because the project manager wanted it moved up to production ASAP?
    • You told yourself you’d do it “tomorrow”?
    • You had to choose between manual testing, or watching the latest episode of your favorite TV show (Big Bang Theory)?

    Most of the time, nothing happens, and you successfully move your code to production without any problems. But sometimes, after you’ve moved to production, everything goes wrong. You’re stuck fixing a hundred holes in a sinking ship, with more appearing every minute. You do not want to find yourself in this situation.

    Screw it, just move it to production!

    TDD was meant to eliminate our excuses. When a program has been developed using TDD, it allows us to make changes and test quickly and efficiently. All we need to do is run the automated tests, and voila! If it passes all automated tests, then we’re good to go—if not, then it just means we broke something with the changes. By knowing which exact parts of the test failed, it also allows us to easily pinpoint at which part of the changes it broke, so it makes fixing the bugs easier.

    Okay, I’m sold. How do we do this?

    There’s a multitude of PHP automated testing frameworks out there we can use. One of the most widely-used testing frameworks is PHPUnit.

    PHPUnit is a great testing framework, which can easily be integrated into your own projects, or other projects built on top of popular PHP frameworks.

    For our purposes though, we won’t need the multitude of functions that PHPUnit offers. Instead, we’ll opt to create our tests using a much easier testing framework, called SimpleTest.

    In the next steps, let’s assume that we’re developing a guestbook application where any user can add and view guestbook entries. Let’s assume that the markup has been completed, and that we’re simply making a class which contains the application logic of the guestbook, which is where the application inserts and reads to the database. The reading portion of this class is what we’re going to develop and test.

    Step 1. Set up SimpleTest

    This is arguably the easiest step of all. Even this guy could do it:

    I can do this…I can use, my, um…brain!

    Download SimpleTest here, and extract to a folder of your choice — preferably the folder where you’re going to develop your code, or your PHP include_path for easy access.

    For this tutorial, I’ve set up the folder like so:

    index.php will run guestbook.php, invoke the view method and display the entries. Inside the classes folder is where we’ll put the guestbook.php class, and the test folder is where we place the SimpleTest library.

    Step 2. Plan Your Attack

    The second step, which is actually the most important one, is to start creating your tests. For this, you really need to plan and think about what your function will do, what possible inputs it will get, and the corresponding outputs it will send. This step resembles playing a game of chess—you need to know everything about your opponent (the program), including all his weaknesses (possible errors) and strengths (what happens if it successfully runs).

    So for our guestbook application, let’s lay down the schematics:


    • This function will not have any inputs since it will just retrieve all of the entries from the database and send back the data to be printed out.
    • It will return an array of guestbook records, stating the name of the poster and his message. If there are no records, then it should still return an empty array.
    • If there are records, the array will have 1 or more values in it.
    • At the same time, the array will have a specific structure, something like:
    Array (
        [0] => Array (
            ['name'] => "Bob"
            ['message'] => "Hi, I'm Bob."
        )
        [1] => Array (
            ['name'] => "Tom"
            ['message'] => "Hi, I'm Tom."
        )
    )

    Step 3. Write a Test!

    Now, we can write our first test. Let’s start by creating a file called guestbook_test.php inside the test folder.

    require_once(dirname(__FILE__) . '/simpletest/autorun.php');

    class TestGuestbook extends UnitTestCase {
    }

    Then, let’s convert what we’ve determined from step two into tests:

    require_once(dirname(__FILE__) . '/simpletest/autorun.php');

    class TestGuestbook extends UnitTestCase {
        function testViewGuestbookWithEntries()
        {
            $guestbook = new Guestbook();
            // Add new records first
            $guestbook->add("Bob", "Hi, I'm Bob.");
            $guestbook->add("Tom", "Hi, I'm Tom.");
            $entries = $guestbook->viewAll();

            $count_is_greater_than_zero = (count($entries) > 0);
            $this->assertTrue($count_is_greater_than_zero);
            $this->assertIsA($entries, 'array');
            foreach($entries as $entry) {
                $this->assertIsA($entry, 'array');
                $this->assertTrue(isset($entry['name']));
                $this->assertTrue(isset($entry['message']));
            }
        }

        function testViewGuestbookWithNoEntries()
        {
            $guestbook = new Guestbook();
            $guestbook->deleteAll(); // Delete all the entries first so we know it's an empty table
            $entries = $guestbook->viewAll();
            $this->assertEqual($entries, array());
        }
    }

    Assertions make sure that a certain thing is what it’s supposed to be—basically, it ensures that what’s returned is what you’re expecting it to return. For example, if a function is supposed to return true if it’s successful, then in our test, we should assert that the return value is equal to true.

    As you can see here, we test the viewing of the guestbook with entries and without. We check if these two scenarios pass our criteria from step two. You probably also noticed that each of our test functions starts with the word ‘test.’ We did this because, when SimpleTest runs this class, it will look for all the functions that start with the word ‘test’ and run them.

    In our test class, we’ve also used some assertion methods, such as assertTrue, assertIsA, and assertEqual. The assertTrue function checks whether or not a value is true. AssertIsA checks if a variable is of a certain type or class. And lastly, assertEqual checks if a variable is equal to a certain value.

    There are other assertion methods provided by SimpleTest, which are:

    assertTrue($x)                   Fail if $x is false
    assertFalse($x)                  Fail if $x is true
    assertNull($x)                   Fail if $x is set
    assertNotNull($x)                Fail if $x not set
    assertIsA($x, $t)                Fail if $x is not the class or type $t
    assertNotA($x, $t)               Fail if $x is of the class or type $t
    assertEqual($x, $y)              Fail if $x == $y is false
    assertNotEqual($x, $y)           Fail if $x == $y is true
    assertWithinMargin($x, $y, $m)   Fail if abs($x - $y) < $m is false
    assertOutsideMargin($x, $y, $m)  Fail if abs($x - $y) < $m is true
    assertIdentical($x, $y)          Fail if $x == $y is false or a type mismatch
    assertNotIdentical($x, $y)       Fail if $x == $y is true and types match
    assertReference($x, $y)          Fail unless $x and $y are the same variable
    assertClone($x, $y)              Fail unless $x and $y are identical copies
    assertPattern($p, $x)            Fail unless the regex $p matches $x
    assertNoPattern($p, $x)          Fail if the regex $p matches $x
    expectError($x)                  Swallows any upcoming matching error
    assert($e)                       Fail on failed expectation object $e


    Step 4. Fail to Win

    Once you’re finished writing your tests, you should run them. The first time you run a test, it SHOULD FAIL. If it doesn’t, then it means that your test doesn’t really test anything.

    To run your test, simply run guestbook_test.php in your browser. You should see this first:

    This happened because we haven’t created our guestbook class yet. To do so, create guestbook.php inside your classes folder. The class should contain the methods we’re planning to use, but shouldn’t contain anything yet at first. Remember, we’re writing the tests first before writing any code.

    class Guestbook
    {
        public function viewAll() {
        }

        public function add( $name, $message ) {
        }

        public function deleteAll() {
        }
    }

    When you run the test again, it should look something like this:

    As we can see here, our test is now winning by failing. This means that our test is now ready to get “answered.”

    Step 5. Answer Your Test by Writing Code

    At some point, we’ve all felt like this when we’re programming.

    Now that we have a working automated test, we can start writing code. Open up your guestbook.php class and start creating the answer to your test.

    class Guestbook
    {
        // To save time, instead of creating and connecting to a database, we're going to
        // simulate a "database" by creating a static entries array here.
        // It will be like we have two entries in the table.
        private static $_entries = array(
            array (
                'name' => 'Kirk',
                'message' => 'Hi, I\'m Kirk.'
            ),
            array (
                'name' => 'Ted',
                'message' => 'Hi, I\'m Ted.'
            )
        );

        public function viewAll() {
            // Here, we should retrieve all the records from the database.
            // This is simulated by returning the $_entries array
            return self::$_entries;
        }

        public function add( $name, $message ) {
            // Here, we simulate insertion into the database by adding a new record into the $_entries array
            // This is the correct way to do it: self::$_entries[] = array('name' => $name, 'message' => $message );
            self::$_entries[] = array('notname' => $name, 'notmessage' => $message ); //oops, there's a bug here somewhere
            return true;
        }

        public function deleteAll() {
            // We just reset the $_entries array to an empty array to simulate a table with no records
            self::$_entries = array();
            return true;
        }
    }

    This guestbook.php class has some bugs in it on purpose, so we can see what it looks like if our test fails.

    Once we run our test, we should see something like this:

    The test output shows us in which test and in which assertion our code failed. From this, we can easily pinpoint that lines 16 and 17 contained the assertions that threw the error.

    $this->assertTrue(isset($entry['name']));
    $this->assertTrue(isset($entry['message']));

    This clearly tells us that the returned entry array did not have the correct array key. Based on this, we’ll easily know which part of our code went wrong.

    class Guestbook
    {
        public function add( $name, $message ) {
            // Here, we simulate insertion into the database by adding a new record into the $_entries array
            self::$_entries[] = array('name' => $name, 'message' => $message ); //fixed!
            return true;
        }
    }

    Now, when we run our test again, it should show us:

    Step 6. Refactor and Refine Your Code

    Since the code we’re testing here is pretty simple, our testing and bug fixing didn’t last very long. But if this was a more complex application, you’d have to make multiple changes to your code, make it cleaner so it’s easier to maintain, and a lot of other things. The problem with this, though, is that change usually introduces additional bugs. This is where our automated test comes in—once we make changes, we can simply run the test again. If it still passes, then it means we didn’t break anything. If it fails, we know that we made a mistake. It also informs us where the problem is, and, hopefully, how we’ll be able to fix it.

    Step 7. Rinse and Repeat

    Eventually, when your program requires new functionality, you’ll need to write new tests. That’s easy! Rinse and repeat the procedures from step two (since your SimpleTest files should already be set up), and start the cycle all over again.


    There are a lot more in-depth test-driven development articles out there, and even more functionality in SimpleTest than what was shown in this article, such as mock objects and stubs, which make it easier to create tests. If you’d like to read more, Wikipedia’s test-driven development page should set you on the right path. If you’re keen on using SimpleTest as your testing framework, browse the online documentation and be sure to review its other features.

    Testing is an integral part of the development cycle; however, it’s too often the first thing to be cut when deadlines are imminent. Hopefully, after reading this article, you’ll appreciate how helpful it is to invest in test-driven development.

    What are your thoughts on Test-Driven Development? Is it something you’re interested in implementing, or do you think it’s a waste of time? Let me know in the comments!

    July 14 2010


    Synthetic Event Library Syn Aims to Make Testing Easier

    The team at Jupiter IT have released Syn, a library which allows you to create synthetic events for use in testing. This standalone library is meant to assist in testing complex UI behavior by simulating user actions such as typing, clicking, and dragging the mouse.

    Testing rich, dynamic web applications sucks. At Jupiter, we've tried almost every testing solution available (qUnit, Quick Test Pro, Selenium, JsUnit, Env.js, TestCase) and all of them suffer from some fatal flaw.


    • Manual - A tester has to run the tests manually on every supported browser.  People are lazy. 
    • Unit Tests Only - We need to test the app as a whole and complex UI behavior like drag-drop.
    • Low fidelity - We need to make sure the tests are reporting accurate results.
    • Difficult to write - We sling JS like a ninja monkey throws poo.  We want to write tests in a nice JS API.
    • Expensive - A QTP license is 5k a person!  I'd rather buy a vacation.
    • Support - We want to test Mac and Linux browsers.

    We've solved all of these problems in our upcoming FuncUnit testing framework. It's a mashup of qUnit, Selenium, Rhino, and Env.js. But its core library, Syn, which does the work of simulating user actions with very high fidelity, is what we are releasing today.

    So by using code like this:

    Syn.click( {}, 'hello' )
       .type( 'Hello World' )
       .drag( $('#trash') );

    you can simulate clicking an element with id='hello', typing "Hello World", and then dragging your mouse from that element to an element with id='trash'.

    Pretty cool stuff. Check out their demo to see how playback occurs.

    July 09 2010


    12 Tools To Reevaluate And Analyze Your Web Design

    Nowadays there are plenty of tools to analyze your site’s code and traffic, but not that many to evaluate design and accessibility. Personally, I think the best ‘tool’ for evaluating web design is other people’s feedback.

    Surprisingly, tools of exactly that type are available. Check them out, together with some other useful tools, and see whether your design is accessible and practical. In this list, less is more: you don’t need many tools to analyze your web design, just a few you really trust.

    1. Google Website Optimizer

    Google’s free website testing and optimization tool allows you to increase the value of your existing websites and traffic without spending a cent. By using Website Optimizer to test and optimize site content and design, you can quickly and easily increase revenue and ROI, whether you’re new to marketing or an expert.


    2. Firefox Accessibility Extension

    The Firefox Accessibility Extension makes it easier for people with disabilities to view and navigate web content. Developers can use the extension to check their use of structural and styling markup that support functional web accessibility.


    3. YSlow

    YSlow analyzes web pages and suggests ways to improve their performance based on a set of rules for high-performance web pages. YSlow is a Firefox add-on integrated with the Firebug web development tool. It grades each web page based on one of three predefined rulesets or a user-defined ruleset, offers suggestions for improving the page’s performance, summarizes the page’s components, displays statistics about the page, and provides tools for performance analysis, including Smush.it and JSLint.


    4. Click Density

    Find out where your visitors are clicking more than usual. The clickdensity heat map shows at a glance the relative popularity of components on your pages.


    5. Pagealizer

    Pagealizer helps increase landing page conversion by showing relevant usage data and suggesting page design and landing page optimization. It shows in great detail how long people visit a page, bounce rate, how far visitors scroll down a page, and where visitors clicked.


    6. Clixpy

    Clixpy helps you find out which features are difficult to use and which are easy, where your users spend more time and why they leave your website. Clixpy can help you optimize landing pages and convert more visitors to clients.


    7. Access Color

    AccessColor tests the color contrast and color brightness between the foreground and background of all elements in the DOM to make sure that the contrast is high enough for people with visual impairments.


    8. Check My Colours

    Checkmycolours is a tool for checking foreground and background color combinations of all DOM elements and determining if they provide sufficient contrast when viewed by someone having color deficits.
    All the tests are based on the algorithms suggested by the World Wide Web Consortium (W3C).


    9.  Browsershots

    Browsershots makes screenshots of your web design in different operating systems and browsers. It is a free open-source online web application providing developers a convenient way to test their website’s browser compatibility in one place.


    10. Five Second Test

    Five second tests are simple usability tests that help you measure the effectiveness of your designs. Conducting a five second test allows you to find out which parts of your designs are the most prominent. You can do this in two ways:

    • Memory Test
      You give users five seconds to look at your design and then ask them to remember specific elements
    • Click Test
      You give users five seconds to locate and click on specific elements of your design

    All you need to do is upload a design you want to test and choose the type of test you want to run. You will be given a unique link that you can share with friends and colleagues and have them do your test. Of course, you can always just let our random users do your test for really fast feedback!


    11. Usabilla

    Usabilla offers a fast and simple way to collect feedback in any stage of the design process.
    Ask your users simple questions to collect valuable feedback and discover usability issues. Use our One-Click-Tasks to measure task performance. Users simply share their feedback by clicking anywhere on your website, mockup, sketch, or image. Measure time, collect points, and get valuable feedback with notes.


    12. User Testing

    UserTesting provides quick and cheap website usability testing. Website owners can access a network of pre-screened testers who are articulate and observant and who meet specified demographics. Give UserTesting your website’s demographic, and the service will select the right users to browse your site. All of this costs only $39 per test.


    July 01 2010


    Keep your vows; Keeping wed to Node

    Vows can be a beautiful thing. Alexis Sellier of LESS fame is becoming an open source star. This time around he brings us Vows, an asynchronous-friendly behavior-driven development framework for Node.js.

    Write your BDD specs like this:

    // division-by-zero-test.js

    var vows = require('vows'),
        assert = require('assert');

    // Create a Test Suite
    vows.describe('Division by Zero').addBatch({
        'when dividing a number by zero': {
            topic: function () { return 42 / 0 },

            'we get Infinity': function (topic) {
                assert.equal (topic, Infinity);
            }
        },
        'but when dividing zero by zero': {
            topic: function () { return 0 / 0 },

            'we get a value which': {
                'is not a number': function (topic) {
                    assert.isNaN (topic);
                },
                'is not equal to itself': function (topic) {
                    assert.notEqual (topic, topic);
                }
            }
        }
    }).run(); // Run it

    and you get a very nice report card out of the other end:

    With macros you end up with a very nice DSL syntax, such as:

    {   'GET /': {
            topic: api.get('/'),
            'should respond with a 200 OK': assertStatus(200)
        },
        'POST /': {
            topic:'/'),
            'should respond with a 405 Method not allowed': assertStatus(405)
        },
        'GET /resources (no api-key)': {
            topic: api.get('/resources'),
            'should respond with a 403 Forbidden': assertStatus(403)
        },
        'GET /resources?apikey=af816e859c249fe': {
            topic: api.get('/resources?apikey=af816e859c249fe'),
            'should return a 200 OK': assertStatus(200),
            'should return a list of resources': function (res) {
                assert.isArray (res.body);
            }
        }
    }

    // or even
    {   'GET  /':                   respondsWith(200),
        'POST /':                   respondsWith(405),
        'GET  /resources (no key)': respondsWith(403)
    }
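    There is nothing magical about macros like assertStatus: a macro is just a plain function that returns a vow. A minimal sketch of how such a macro could be written (an illustrative implementation, not taken from the Vows docs):

    ```javascript
    var assert = require('assert');

    // A "macro" is just a function that builds a reusable vow:
    // assertStatus(200) returns a test function asserting on res.status.
    function assertStatus(code) {
        return function (res) {
            assert.equal(res.status, code);
        };
    }

    // Usage: the returned vow can be dropped straight into a batch.
    var checkOk = assertStatus(200);
    checkOk({ status: 200 }); // passes silently; throws AssertionError on mismatch
    ```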

    The website itself goes into exquisite detail on the install process, sample usage, and more. It also happens to be beautiful, and full of HTML5 markup to boot.

    June 08 2010


    A/B testing your Cohorts with Google Analytics result views

    var header_test = new Cohorts.Test({
        name: 'big_vs_small_header',
        sample: 1, // we want to include all visitors in the test
        cohorts: {
            big: {
                onChosen: function() {
                    $('#big').show(); // show the big-header variant
                }
            },
            small: {
                onChosen: function() {
                    $('#small').show(); // show the small-header variant
                }
            }
        }
    });

    $('#big').click(function() {
        header_test.event('Clicked on Header');
    });

    $('#small').click(function() {
        header_test.event('Clicked on Header');
    });

    The code above shows exactly how you could run a test that shows either a large or a small clickable header, and gives you A/B results on how many of each were clicked.

    This is all via a new library called Cohorts by James Yu:

    Cohorts is a simple, pure JavaScript, multivariate testing framework.

    It allows you to easily run split tests for visitors on your site, showing them different designs, layouts, or whatever you want. Cohorts also allows you to track interesting events that occur for each of the cohorts. By default, it uses Google Analytics event tracking to store data, but you can customize it to use your own or another.
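    Under the hood, a split-testing library like this mainly needs to assign each visitor to a cohort once and then remember that choice. A toy sketch of the assignment logic (hypothetical names, not Cohorts' actual internals):

    ```javascript
    // Assign a visitor to a cohort, persisting the choice in `store`
    // (in a browser this would typically be a cookie) so repeat visits
    // see the same variant. `sample` is the fraction of visitors included.
    function chooseCohort(testName, cohortNames, sample, store) {
        if (store[testName]) {
            return store[testName];            // returning visitor: reuse choice
        }
        if (Math.random() >= sample) {
            return null;                       // visitor not included in the test
        }
        var pick = cohortNames[Math.floor(Math.random() * cohortNames.length)];
        store[testName] = pick;                // remember the assignment
        return pick;
    }
    ```

    With `sample: 1` as in the example above, every visitor is assigned to one of the cohorts on their first visit.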

    Very nice.

    June 04 2010


    Cross-Browser Testing: A Detailed Review Of Tools And Services


    As you probably know, cross-browser testing is an important part of any developer’s routine. As the number of browsers increases, and it certainly has in recent years, the need for automatic tools that can assist us in the process becomes ever greater. In this article, we present an overview of different cross-browser testing applications and services. Surely, you are already familiar with some of them, and you may have even stumbled across another overview article, but this one takes a different approach.

    This is not just a list of available tools, but rather a comprehensive analysis based on my experience with each of them. For the impatient among you, a table at the end summarizes key metrics and unique features for each service. But if you’re interested in my personal experience with these tools, then read on.

    Probably the most important metric of these services is the capture delay, which I measured for the URL stackoverflow, with the following browsers enabled: Firefox, IE, Chrome and Safari.



    BrowserShots

    BrowserShots is the oldest and best-known free online multi-browser screenshot service. It supports the largest number of browsers: a total of 61 different browser versions and operating systems, which is great, but I can hardly imagine anyone wanting to test their website under Kazehakase 0.5 running on BSD Unix. Feature-wise, it allows you to enable and disable JavaScript, Java and Flash and change the screen size. I find the latter very useful, especially nowadays when one has to take into account smartphone browsers with non-standard resolutions.


    The interface is not very user-friendly. Selecting the browsers and options you want takes time, and because it is a Web service you have to do it over every time you want to take a screenshot. When (and if) you finally get your screenshots, there is no easy way to compare different captures in order to find rendering inconsistencies. HTTP redirect is not fully automated: BrowserShots displays the URL you are being redirected to, but you have to start the screenshot again manually.

    The biggest disadvantage of BrowserShots—which, in my opinion, makes it practically unusable for a professional developer — is the response time. In our test scenario, it was more than 45 minutes. Note that a screenshot expires in 30 minutes, unless you manually extend it. As you can see from the shot below, BrowserShots has serious bugs with scrolling (see MSIE 8.0 screenshot) and at least one browser screenshot failed, even though it said the operation was successful.

    [Screenshot: BrowserShots capture results]


    Unless you need only a single test on a particular browser, this service is not for you. Even then, by the way, it would probably take less time to install that browser, test the page and then uninstall it.

    Unique features: None.

    Disadvantages: Painfully slow.


    BrowserCam

    BrowserCam is another well-known screenshot service. Unlike BrowserShots, it is a commercial service. The cheapest plan costs $159.80 a year and provides access for five users. The interface is nice: it allows you to create a project and specify the URL and browsers you want to capture, so that you do not have to set everything up again to re-test the page. But because it is a non-AJAX Web-based interface, its response time is not comparable to that of a native application, which is a bit annoying.


    Browser support is slightly more limited than that of BrowserShots, but it is good enough for practical purposes; it supports multiple versions of IE, Firefox, Safari, Opera and Chrome, as well as some older browsers on OS X, Linux and multiple versions of Windows. Capture speed is decent: it took about two minutes to take a screenshot of our test scenario.

    BrowserCam supports multiple resolutions and has window and full-page capture, which means scroll bar support. Another nice feature is mobile device capture: it supports Blackberry, iPhone, Android and Windows Mobile devices. Note that mobile capture support is not part of the browser capture plan and costs $999.95 extra annually. It also has an email capture service, which in my opinion is of limited use, and remote access, which can be useful for troubleshooting rendering inconsistencies that are detected from a screen capture. Both services cost extra. The screenshot below is of a BrowserCam results window.

    Remote access packages allow you to connect using VNC to your choice of Linux, Windows and Mac machines with different browser versions. This can be a good option for debugging on hardware that you do not have, such as a Mac. But the price of $499.95 a year is not far from the price of a Mac mini, and because the VNC protocol is not terribly efficient, extensive remote debugging over VNC can be daunting.

    [Screenshot: BrowserCam results window]


    A very good professional service with advanced features and a thoughtful interface.

    Unique features: Mobile device support, remote access.

    Disadvantages: Expensive.

    Adobe BrowserLab

    BrowserLab is a new offering from Adobe, previously known as Meer-Meer. It is written in Flash, and as such has the advantage of being cross-platform and of having the look, feel and (most importantly) response time of a desktop application. It is currently offered free of charge in preview mode while Adobe “is monitoring the performance.” Since this monitoring will run for more than a year, one wonders whether Adobe has other reasons for it. According to Adobe, the service will cost $10 to $20 per month starting in 2011.


    The interface is attractive, polished and easy to use, as you can see from the screenshot below. You can inspect captures one by one or view two captures side by side, which is more useful. The much lauded “onion skin” option is not very practical: most of the time, browsers will not render a page identically pixel by pixel, but the page might still look the same.

    Browser support is modest compared to the competition. At the time of writing, BrowserLab supports only Chrome, Firefox, IE and Safari: a total of 12 browsers and OS version combinations. But it looks like the quality of the product is still at beta level; in two captures, it actually cut the image horizontally. Scroll bar support is buggy, too.

    Screenshot speed is very good. Our test scenario did it in less than one minute.

    [Screenshot: Adobe BrowserLab interface]


    A very nice interface, and free till the end of 2010.

    Unique features: None.

    Disadvantages: Modest browser support, minor bugs.

    Microsoft Expression Web SuperPreview

    SuperPreview is a new addition to Microsoft’s Expression Web WYSIWYG development environment. This is the standalone version, limited to Internet Explorer and available for download free of charge. Browser support is limited. The standalone version supports only IE 6, 7 and 8, while the full version has support for Firefox and Safari. The user experience, on the other hand, is very impressive.


    Because it is an application that runs on your PC, the response time and screenshot delay are among the best in class. In our test scenario, it loaded the website in a matter of seconds. Please note, though, that because SuperPreview works with only two browsers at a time and does not support Chrome, this test was not identical to that of other services.

    SuperPreview cannot be purchased without the Expression Web, whose retail price is $149.

    [Screenshot: SuperPreview side-by-side view]


    The interface is extremely easy to use, and the speed is incredible. But browser support is very limited, and because it can only be purchased as part of the Expression Web package, it is relatively expensive.

    Unique features: None.

    Disadvantages: Limited browser support, expensive.


    BrowserSeal

    BrowserSeal is a new tool. Like SuperPreview, it is an application rather than a Web service, and as such its capture speed is very good: BrowserSeal finished our test case in less than one minute. Browser support is quite broad: with multiple versions of IE, Firefox, Safari, Opera and Chrome, it pretty much covers every browser anyone would want to test. It has two good features that set it apart from the competition: standalone browser support and a command-line interface for automation scripting.


    All browsers supported by BrowserSeal can be launched manually, which means that once you’ve found a rendering inconsistency in a browser, you can actually run the problem browser and troubleshoot the issue. This is something most other services do not offer.

    The price is very competitive, too: the standard version sells for $49. There is also an automation version with a command-line interface that lets you capture multiple URLs from a script or batch file. The interface is attractive and easy to use. The optional tabbed interface makes it easy to spot even the slightest rendering difference when switching from one capture to another.

    [Screenshot: BrowserSeal interface]


    A very nice tool, with comprehensive browser support. The interface is easy to use, the capture speed is great, and the price is competitive.

    Unique features: Comes with standalone versions of all major browsers; has a command-line mode for automation scripts.

    Disadvantages: Runs on Windows only.


    Litmus

    Litmus is another Web-based screenshot service. Its browser support is impressive, with 23 browser and operating-system combinations, including IE, Firefox, Chrome, Safari, Opera, Flock, Camino, SeaMonkey and Netscape. Capture speed is okay but not comparable to that of native applications: our test took five minutes.


    The interface is clear and simple but lacks some features. For instance, there is no easy way to compare capture results. All you can do is view them one by one or download them to your PC. The app, though, does support projects, so you don’t have to enter URLs and change browser settings every time you want to take a screenshot, but this is pretty much all it does.

    Litmus does not support scrolling; that is, it captures only the top of long pages, which is a major drawback. The price is a bit high for a service that has such basic features: a single-user license costs $588 annually.

    [Screenshot: Litmus results]


    Good browser support, and average capture speed, which is probably good enough for most users. But very few features.

    Unique features: None.

    Disadvantages: Does not support scrolling, and lacks other standard features found in competing products.

    Multi-Browser Viewer

    Multi-Browser Viewer is an application but relies on a server farm for browser rendering; in other words, the application is just a graphical interface, so it is as easy to use as an application but suffers the delays of a typical Web-based service.


    One interesting feature is that it comes with standalone browsers that can be used for debugging. Note, though, that these are not the same browsers used for screen capture: the standalone browsers run locally, while screen captures come from a rendering farm with many more browsers.

    Browser support is impressive, with 54 browser and OS version combinations (17 of which are available as standalone versions), including IE, Firefox, Chrome, Opera, Safari, Camino and Konqueror. The price is reasonable: a single-user license costs $129.95 annually.

    Feature-wise, it does lag significantly behind the competition: there is no support for authentication or capture delay. Scroll bar support is buggy; in our test case, it worked for IE, Firefox and Safari, but not for Opera.

    [Screenshot: Multi-Browser Viewer]


    A good interface and impressive browser support.

    Unique features: Standalone versions of some (but not all) browsers.

    Disadvantages: Lacks many features of competing products, buggy scroll bar support, runs on Windows only.


    Browsera

    Browsera is a Web-based screenshot service. Browser support is limited compared to that of most competitors: only IE, Firefox and Safari are supported. The standard plan costs $588 annually. The interface is attractive, fast and clean, and you can conveniently organize your screenshot sessions into projects.


    Browsera supports authentication, scroll bars and page crawling (i.e. you can ask Browsera to crawl your website recursively and take a screenshot of every page). The screenshot response time is very fast for a Web-based service; it completed our test in three minutes.

    [Screenshot: Browsera results]


    A professional service with a good interface and interesting features, but limited browser support.

    Unique features: Recursive website crawling.

    Disadvantages: Limited browser support, expensive.

    Browser Packs

    If all you need is to test your website in specific browsers, and you are willing to perform the tests manually, there are a few free services and applications that can help:

    At first glance, Spoon looks convenient because it is a Web service, which relieves you from having to install many browsers locally. But I had some stability problems with this service.

    Meanwhile, both the IE Collection and BrowserSeal.BrowserPack (offered free of charge, separate from the BrowserSeal commercial screenshot service) work very reliably. I did not have any issues with browsers installed by these packs. The IE Collection has every IE version you could think of. BrowserSeal.BrowserPack, which relies on the IE Collection for IE support, also supports two Firefox, three Opera and two Safari versions.


    Summary

    The following table summarizes the services tested and analyzed in this article. You can view the full table on a separate page for a better overview. I have included some metrics for each service to make it easier for you to choose the best one based on price, feature and performance trade-offs.

    Service | Supported browsers | Capture speed | Price (1 year) | Interface | Authentication | Capture delay | Scroll bars | Special features
    BrowserShots | IE, Firefox, Chrome, Opera, Safari, Dillo, SeaMonkey, Minefield, Epiphany, Flock, Galeon, Konqueror, K-Meleon, Avant, Netscape, Shiretoko, Kazehakase, Iceweasel | 45 mins | Free | Bad | No | No | No | None
    BrowserCam | IE, Firefox, Chrome, Opera, Safari, Konqueror, Camino, Netscape, AOL | 2 mins | $999.95 | Good | Yes | Yes | Yes | Mobile browser support, remote access service
    BrowserLab | IE, Firefox, Chrome, Safari | 1 min | Free (till end of 2010) | Good | No | Yes | Buggy | None
    SuperPreview | IE, Firefox, Safari | 1 min | $149 | Good | No | No | Yes | None
    BrowserSeal | IE, Firefox, Chrome, Opera, Safari | 1 min | $49 | Good | Yes | Yes | Yes | Standalone browser versions, automation script support
    Litmus | IE, Firefox, Chrome, Opera, Safari, Flock, Camino, SeaMonkey, Netscape | 5 mins | $588 | Basic | Yes | No | No | None
    Multi-Browser Viewer | IE, Firefox, Chrome, Opera, Safari, Flock, SeaMonkey, Netscape, K-Meleon, Camino, Konqueror, Epiphany, Kazehakase | 2 mins | $129.95 | Good | No | No | Buggy | Standalone browser versions
    Browsera | IE, Firefox, Safari | 3 mins | $588 | Good | Yes | No | Yes | Recursive crawling

    Obviously, we have no clear winner. Each service has its advantages and disadvantages, and you are left to decide what is the best trade-off for your case. Professional developers would likely not use BrowserShots because of the unreasonably long response time. SuperPreview and Browsera are probably also impractical because of their very limited browser support.

    BrowserLab will probably remain popular as long as it is free. Once Adobe starts charging about $20 per month for it, there will be little reason to use it, unless you work in Dreamweaver, which has a BrowserLab extension, because there are much better alternatives.

    When choosing a tool, one of the most important factors in your decision will be whether to use a Web service or application. Some people prefer Web-based tools because they do not require installation. Personally, I prefer applications, at least for the development tools that I use frequently. They generally have a better interface and faster response time; they never have outages, and they can be used to debug locally (i.e. on my hard drive or company intranet — although some Web-based services offer a workaround for this issue).

    BrowserCam, BrowserSeal, Litmus and Multi-Browser Viewer are all very good choices. But they do vary significantly in price. If you need to test mobile browsers, BrowserCam is probably your only option. For everyone else, I would recommend either BrowserSeal or Multi-Browser Viewer; both come with standalone browser versions that are extremely important for testing. Unfortunately, both of them are Windows only, so Mac users will probably have to go with BrowserLab or BrowserCam. If automatic testing is important to you, then the BrowserSeal automation edition is your best bet.


    © Demiurg for Smashing Magazine, 2010.
    Post tags: testing, tools

    May 20 2010


    Mock Data Randomizer

    Mike Wilcox has written about a nice little tool for folks who test their JavaScript code. The Mock Data Randomizer is fairly self-explanatory:

    rand.sentences(6,7,7,9); // # of words in sentences
    rand.real = true;

    would generate something like:

    Whut koop mida miins knew eftor kind. Deys evin thaaght has plecu uthur melos. Eftor shaeld ha lift hend wath ets unto big. Huwovir till lergi wa kind olsa con bofare. Yaor sentonca sean cemu man in whoch man. Mi sentonca nught get fund very onswar tui plecu. Shew en geed liva knew poiplu childrun ilung gat.

    Far night very land room for then we. First kind turned out is own usually back much. Read sometimes since does answer himself into much. Live not for large hand important they. People another sea across across even world. Water could things young by white own only very.
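    The idea behind output like this is straightforward: draw words from a pool and assemble them into sentences of the requested lengths. A toy sketch of the sentence part (illustrative only, not Mike Wilcox’s actual implementation):

    ```javascript
    // Build one pseudo-random sentence per requested length by sampling
    // words from a pool, capitalizing the first word and adding a period.
    function randomSentences(pool, lengths) {
        return (count) {
            var words = [];
            for (var i = 0; i < count; i++) {
                words.push(pool[Math.floor(Math.random() * pool.length)]);
            }
            var s = words.join(' ');
            return s.charAt(0).toUpperCase() + s.slice(1) + '.';
        }).join(' ');
    }
    ```

    Switching a `real` flag would then simply swap the pseudo-word pool for a pool of real English words.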

;{seconds:true});{delimiter:"/", yearRange:-5});{delimiter:"/"});{min:new Date('11/20/1964'), max:"4/19/1969", delimiter:"-"});

    would generate something like:

    Mon Feb 14 2011 14:13:17 GMT-0800 (PST)
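    Generating a date like the one above is just a matter of picking a random timestamp between two bounds. A toy sketch (illustrative, not the library’s actual code):

    ```javascript
    // Pick a uniformly random Date between min and max.
    function randomDate(min, max) {
        var lo = min.getTime();
        var hi = max.getTime();
        return new Date(lo + Math.random() * (hi - lo));
    }
    ```

    Options such as `delimiter` or `yearRange` would then be applied when the chosen date is formatted as a string.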

    Check out the live demo to see it in action and then view the source.

    April 19 2010


    Put your own tests up on BrowserScope

    Lindsey Simon has shipped a powerful update to Browserscope, the community-driven tool for testing and profiling browsers. The new feature is exciting because it truly delivers on the "community-driven" piece at scale: you can now add your own tests to the corpus, TestSwarm style. The two are very different, of course: TestSwarm is about having many browsers test *your* code, while Browserscope is more about testing the browsers in general.

    Here is what Lindsey says:

    It seems like nearly every week we read about an interesting new hosted test for people to visit and run their browsers through. Developers really love to poke at the black boxes they code for, and the matrix of browsers, OS, and networks is enormous. One thing I, and I presume other developers, would love to see are the aggregated results for these tests by user agent. Considering this is exactly what we built Browserscope to accomplish for our test categories, and that a user-hosted test feature has been on our roadmap, the Browserscope team is happy to announce that we're opening up an alpha of our User Tests feature.

    Conveniently, this past week a User Tests use case came up for me at work and so it's been a driver for building this feature. We began working on a UI component that we wanted to test for speed of task completion. After building up a test page with a timer and some deltas it dawned on me just how cool it would be to open up this test to the world, and aggregate the results. The test is kind of strange in that the UI component is out of its context, and you can argue about the mechanics of the test itself, but I still feel like the results may be informative. Interestingly too, this test is exactly the kind of thing we would *not* want to feature on the homepage of Browserscope (it's more of a performance test than a spec/binary test). And yet, the backend system with its median aggregation, scalability, and user-agent parsing library is a perfect fit. So check it out, and see how other people are doing on the test (courtesy of Browserscope).

    This is definitely a release early/often feature, and we want to be explicit that things may change or break in the API while we're in alpha mode. We may have to take the service offline briefly to fix things. But if you write tests for browsers and want to aggregate your results, sign in to Browserscope, register your test page and then read the HOWTO to start saving your results in the system. Please send any feedback to me or to our mailing list. We really hope to make this an easy system to use for the tests you're already hosting.

    I hope that we see a bunch of tests in there. One advantage of the Browserscope approach is that it self-updates. A lot of the other sites are static, and when new browser versions come out, the new feature data isn't reflected. It would be cool, for example, if the readiness visualization pulled its data from a JSON feed from Browserscope :)

    Thanks for doing this Lindsey and team!

    February 17 2010


    Harmony: Bringing together great libraries to enable awesome JS testing in Ruby

    Martin Aumont has released Harmony, which "provides a simple DSL to execute JavaScript and DOM code within Ruby."

    This enables you to do very cool things, such as unit testing JavaScript alongside your Ruby tests:

    require 'test/unit'
    require 'harmony'

    class JavascriptTest < Test::Unit::TestCase
      def setup
        @page =
        @page.load('public/javascripts/foo.js')
      end

      def test_foo
        assert_equal "world", @page.execute_js(<<-JS)
          foo = new Foo;
          foo.hello_world();
        JS
      end
    end
    and you can even use JavaScript libraries.... as script tags are autofetched:

    require 'harmony'

    page = Harmony::Page.new(<<-HTML)
      <script src="javascripts/jquery.js" type="text/javascript"></script>
      <div id="widget">ohaie</div>
    HTML

    page.execute_js("$('#widget').innerHTML") #=> "ohaie"

    This library builds on the shoulders of giants, one of which is Johnson, by John Barnette, who I had the pleasure of working with many moons ago. He is the person I think of when I remember that the best engineers I have worked with haven't been computer scientists, but musicians and biologists. He is also a great, fun guy.

    Anyway, sorry for the aside.

    If you are a Rails chap, you may also be interested in the Rails plugin holygrail.
