
July 26 2013

21:31

Handlebars.js – a Behind the Scenes Look

Handlebars has been gaining popularity with its adoption in frameworks like Meteor and Ember.js, but what is really going on behind the scenes of this exciting templating engine?

In this article we will take a deep look through the underlying process Handlebars goes through to compile your templates.

This article expects you to have read my previous introduction to Handlebars and as such assumes you know the basics of creating Handlebar templates.

When using a Handlebars template you probably know that you start by compiling the template's source into a function using Handlebars.compile() and then you use that function to generate the final HTML, passing in values for properties and placeholders.
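
For example, a minimal compile-and-render round trip looks something like this (the template string and context here are made up purely for illustration):

var source = "Hello {{name}}!";              // the template's source
var template = Handlebars.compile(source);   // compile the source into a function
var html = template({ name: "World" });      // run the function with a context; html is "Hello World!"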

But that seemingly simple compile function is actually doing quite a few steps behind the scenes, and that is what this article will really be about; let's take a look at a quick breakdown of the process:

  • Tokenize the source into components.
  • Process each token into a set of operations.
  • Convert the process stack into a function.
  • Run the function with the context and helpers to output some HTML.

The Setup

In this article we will be building a tool to analyze Handlebars templates at each of these steps. To display the results a bit better on screen, I will be using the Prism.js syntax highlighter created by the one and only Lea Verou. Download the minified source, remembering to check JavaScript in the languages section.

The next step is to create a blank HTML file and fill it with the following:

<!DOCTYPE HTML>
<html>
    <head>
        <title>Handlebars.js</title>
        <link rel="stylesheet" href="prism.css">

        <script src="prism.js" data-manual></script>
        <script src="handlebars.js"></script>
    </head>
    <body>
        <div id="analysis">
            <div id="tokens"><h1>Tokens:</h1></div>
            <div id="operations"><h1>Operations:</h1></div>
            <div id="output"><h1>Output:</h1></div>
            <div id="function">
                <h1>Function:</h1>
                <pre><code class="language-javascript" id="source"></code></pre>
            </div>
        </div>
        <script id="dt" type="template/handlebars">
        </script>

        <script>
            //Code will go here
        </script>
    </body>
</html>

It's just some boilerplate code which includes Handlebars and Prism and then sets up some divs for the different steps. At the bottom, you can see two script blocks: the first is for the template and the second is for our JS code.

I also wrote a little CSS to arrange everything a bit better, which you are free to add:

     
    body{
        margin: 0;
        padding: 0;
        font-family: "opensans", Arial, sans-serif;
        background: #F5F2F0;
        font-size: 13px;
    }
    #analysis {
        top: 0;
        left: 0;
        position: absolute;
        width: 100%;
        height: 100%;
        margin: 0;
        padding: 0;
    }
    #analysis div {
        width: 33.33%;
        height: 50%;
        float: left;
        padding: 10px 20px;
        box-sizing: border-box;
        overflow: auto;
    }
    #function {
        width: 100% !important;
    }

Next we need a template, so let's begin with the simplest template possible, just some static text:

<script id="dt" type="template/handlebars">
    Hello World!
</script>

<script>
    var src = document.getElementById("dt").innerHTML.trim();

    //Display Output
    var t = Handlebars.compile(src);
    document.getElementById("output").innerHTML += t();
</script>

Opening this page in your browser should result in the template being displayed in the output box as expected. Nothing is different yet; we now have to write the code to analyze the process at each of the other three stages.

Basic Output

Tokens

The first step Handlebars performs on your template is to tokenize the source. This means breaking the source apart into its individual components so that each piece can be handled appropriately. For example, if there was some text with a placeholder in the middle, Handlebars would separate the text before the placeholder into one token, place the placeholder itself into a second token, and place all the text after the placeholder into a third token. This is because the pieces need to retain the order of the template, but they also need to be processed differently.

This process is done using the Handlebars.parse() function, and what you get back is an object that contains all the segments or 'statements'.

To better illustrate what I am talking about, let's create a list of paragraphs for each of the tokens taken out:

    
    //Display Tokens
    var tokenizer = Handlebars.parse(src);
    var tokenStr = "";
    for (var i in tokenizer.statements) {
        var token = tokenizer.statements[i];
        tokenStr += "<p>" + (parseInt(i)+1) + ") ";
        switch (token.type) {
            case "content":
                tokenStr += "[string] - \"" + token.string + "\"";
                break;
            case "mustache":
                tokenStr += "[placeholder] - " + token.id.string;
                break;
            case "block":
                tokenStr += "[block] - " + token.mustache.id.string;
        }
    }
    document.getElementById("tokens").innerHTML += tokenStr;

So we begin by running the template's source through Handlebars.parse to get the list of tokens. We then cycle through all the individual components and build up a set of human-readable strings based on each segment's type. Plain text will have a type of "content", for which we just output the string wrapped in quotes to show what it equals. Placeholders will have a type of "mustache", which we display along with their "id" (the placeholder name). And last but not least, block helpers will have a type of "block", for which we also just display the block's internal "id" (the block name).

Refreshing this now in the browser, you should see just a single 'string' token, with our template's text.

Tokens!

Operations

Once Handlebars has the collection of tokens, it cycles through each one and "generates" a list of predefined operations that need to be performed for the template to be compiled. This process is done using the Handlebars.Compiler() object, passing in the token object from step 1:

    
    //Display Operations
    var opSequence = new Handlebars.Compiler().compile(tokenizer, {});
    var opStr = "";
    for (var i in opSequence.opcodes) {
        var op = opSequence.opcodes[i];
        opStr += "<p>" + (parseInt(i)+1) + ") - " + op.opcode;
    }
    document.getElementById("operations").innerHTML += opStr;

Here we are compiling the tokens into the operation sequence I talked about, and then we are cycling through each one and creating a similar list as in the first step, except that here we just need to print the opcode. The opcode is the name of the operation (or function) that needs to be run for each element in the sequence.

Back in the browser, you should now see just a single operation called 'appendContent', which appends the value to the current 'buffer' or 'string of text'. There are a lot of different opcodes and I don't think I am qualified to explain some of them, but doing a quick search in the source code for a given opcode will show you the function that will be run for it.

Op Codes

The Function

The last stage is to take the list of opcodes and convert them into a function. It does this by reading the list of operations and smartly concatenating code for each one. Here is the code required to get at the function for this step:

    
    //Display Function
    var outputFunction = new Handlebars.JavaScriptCompiler().compile(opSequence, {}, undefined, true);
    document.getElementById("source").innerHTML = outputFunction.toString();
    Prism.highlightAll();

The first line creates the compiler, passing in the op sequence, and returns the final function used for generating the template. We then convert the function to a string and tell Prism to syntax highlight it.

With this final code, your page should look something like so:

The Function

This function is incredibly simple; since there was only one operation, it just returns the given string. Let's now take a look at editing the template to see how these individually straightforward steps group together to form a very powerful abstraction.


Examining Templates

Let's start with something simple, and let's simply replace the word 'World' with a placeholder; your new template should look like the following:

    <script id="dt" type="template/handlebars">
        Hello {{name}}!
    </script>

And don't forget to pass the variable in so that the output looks OK:

    //Display Output
    var t = Handlebars.compile(src);
    document.getElementById("output").innerHTML += t({name: "Gabriel"});

Running this, you will find that by adding just one simple placeholder, it complicates the process quite a bit.

Single Placeholder


If you were still unsure about what tokens are, you should have a better idea now; as you can see in the picture, it split out the placeholder from the strings and created three individual components.

Next, in the operations section, there are quite a few additions. If you remember from before, to simply output some text, Handlebars uses the 'appendContent' operation, which is what you can now see on the top and bottom of the list (for both "Hello " and the "!"). The rest in the middle are all the operations needed to process the placeholder and append the escaped content.

Finally, in the bottom window, instead of just returning a string, this time the function creates a buffer variable and handles one token at a time. The complicated if/else section is there because Handlebars doesn't know whether the placeholder is in fact a placeholder or a helper method. So it first checks whether a helper method with the given name exists, in which case it calls the helper and sets 'stack1' to the returned value. If it is a placeholder, it assigns the value from the context passed in (here named 'depth0'), and if that value happens to be a function it places the result of calling it into 'stack1'. Once that is all done, it escapes the value like we saw in the operations and appends it to the buffer.
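
To make that description a little more concrete, here is a heavily simplified sketch of the kind of function this stage produces for our template. It is not the literal Handlebars output (which is longer and varies between versions), just the general shape:

// Simplified sketch only -- not the exact code Handlebars generates
function compiledTemplate(depth0, helpers) {
    var buffer = "", stack1;
    buffer += "Hello ";
    // Is "name" a helper? If so, call it; otherwise read it off the context,
    // and if the context value is itself a function, call that instead.
    stack1 = helpers.name ? helpers.name.call(depth0) : depth0.name;
    if (typeof stack1 === "function") {
        stack1 = stack1.call(depth0);
    }
    buffer += Handlebars.Utils.escapeExpression(stack1);
    buffer += "!";
    return buffer;
}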

For our next change, let's try the same template, except this time without escaping the results (to do this, add another curly brace: "{{{name}}}").

Refreshing the page, you will now see that it removed the operation to escape the variable and instead just appends it. This bubbles down into the function, which now simply checks to make sure the value isn't a falsy value (besides 0) and then appends it without escaping it.

Single Placeholder Non Escaped

So I think placeholders are pretty straightforward; let's now take a look at using helper functions.


Helper Functions

There is no point in making this more complicated than it has to be, so let's just create a simple function that returns the double of the number passed in. Replace the template and add a new script block for the helper (before the other code):

<script id="dt" type="template/handlebars">
    3 * 2 = {{{doubled 3}}}
</script>

<script>
    Handlebars.registerHelper("doubled", function(number){
        return number * 2;
    });
</script>

I have decided not to escape it, as it makes the final function slightly simpler to read, but you can try both if you like. Anyway, running this should produce the following:

Helper Function

Here you can see it knows it is a helper, so instead of 'invokeAmbiguous' the operation is now 'invokeHelper', and therefore the function no longer contains an if/else block. It does still, however, make sure the helper exists, and it tries to fall back to the context for a function with the same name in the event it doesn't.

Another thing worth mentioning is that the parameters for helpers get passed in directly and are actually hard-coded in, if possible, when the function gets generated (the number 3 in the doubled function).

The last example I want to cover is about block helpers.


Block Helpers

Block helpers allow you to wrap other tokens inside a function which is able to set its own context and options. Let's take a look at an example using the default 'if' block helper:

<script id="dt" type="template/handlebars">
    Hello
    {{#if name}}
        {{{name}}}
    {{else}}
        World!
    {{/if}}
</script>

Here we are checking whether "name" is set in the current context, in which case we display it; otherwise we output "World!". Running this in our analyzer, you will see only two tokens even though there are more. This is because each block is run as its own 'template', so all the tokens inside it (like {{{name}}}) will not be part of the outer call, and you would need to extract them from the block's node itself.

Besides that, if you take a look at the function:

Block Helper

You can see that it actually compiles the block helper's functions into the template's function. There are two because one is the main function and the other is the inverse function (for when the parameter doesn't exist or is false). The main function, "program1", is exactly what we had before when we just had some text and a single placeholder, because, like I mentioned, each of the block helper functions is built up and treated exactly like a regular template. They are then run through the "if" helper to receive the proper function, which is then appended to the outer buffer.

Like before, it is worth mentioning that the first parameter to a block helper is the key itself, whereas the 'this' parameter is set to the entire passed in context, which can come in handy when building your own block helpers.
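
As a quick, hypothetical example of that, here is what registering your own block helper might look like; 'ifFilled' is a made-up name, and options.fn / options.inverse are the compiled main and {{else}} programs discussed above:

// Render the block only when the value is a non-empty string
Handlebars.registerHelper("ifFilled", function(value, options) {
    if (typeof value === "string" && value.trim().length > 0) {
        return options.fn(this);      // the main program (like "program1")
    }
    return options.inverse(this);     // the inverse program, i.e. the {{else}} section
});

You would then use it in a template exactly like the built-in if block above.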


Conclusion

In this article we may not have taken a practical look at how to accomplish something in Handlebars, but I hope you got a better understanding of what exactly is going on behind the scenes, which should allow you to build better templates and helpers with this newfound knowledge.

I hope you enjoyed reading, like always if you have any questions feel free to contact me on Twitter (@GabrielManricks) or on the Nettuts+ IRC (#nettuts on freenode).

July 24 2013

22:36

Resources to Get You Up to Speed in Ember.js

You've probably noticed a lot of chatter lately about the Ember.js framework and rightfully so. It aims to make it substantially easier to build single-page web apps by abstracting a lot of the complexities for writing scalable and maintainable MVC-based code. And developers are jumping on-board in droves.

Over the last year, the project and the framework have evolved quite a bit and while it's just now reaching a very stable point in its life, the frequent changes to the API over the last year have left quite a bit of outdated documentation. The net effect is that if you pick the wrong learning resource, you could end up with a bad experience because the tutorial doesn't match the current state of the framework.

This happened to me as I was ramping up, and I want to ease your ramp-up time by listing out some great resources for learning Ember. The resources I'll list are, at least at the time of this writing, current and useful for getting a good understanding of how Ember works and how it is structured. So follow along as I jot out some great learning resources for you.


The Resources


Nettuts' Ember Series

I'm going to be a little biased here because I'm the author of this series, but the feedback I've received tells me that I did a decent job of outlining the basics of Ember. The four-part series takes you through the core concepts of Ember, setting up the framework, using templates, defining your model, routing and a whole lot more.

It was a learning experience for me and I tried my best to distill some of the headaches and complexities I encountered. Ember makes a lot of things incredibly easy, while some parts are “pound your head against a wall” difficult. Having the help of the Ember core team, especially Yehuda Katz and Tom Dale, was incredibly beneficial and helped me to articulate some of the nuances of the framework that the docs simply don't provide.



Emberjs.com

The Ember project's main site is a great starting point for getting started with the framework. It does a great job of organizing a lot of the pieces that comprise Ember and the documentation is robust. The team took some knocks early on because the docs weren't as comprehensive as developers wanted, but that was to be expected for a growing effort. The cool thing is that they've been responsive and you can see the documentation evolving rapidly (daily at times). For example, Tom Dale recently recorded a great video on how to build an app in Ember which is now part of the intro section of the docs.

And the documentation itself is structured in a way to walk you through a logical path for understanding what Ember is about. While there's always room for improvement, all in all, the docs are solid and what it doesn't provide can be supplemented with the countless tutorials available or by asking questions on the discussion group.

In addition, the community section of the site helps you learn about how to contribute to the project, meet new developers or find help. And don't forget that with Ember being open-source, the source is easily available to you on Github.



Ember Discussion Forum

One of the great apps that leverages the Ember framework is Discourse, the recently launched discussion platform project led by Jeff Atwood and Robin Ward. It's allowed the Ember team to dogfood their own work by using it as a vehicle for developer support. And it's awesome to see such an incredibly talented community. These are devs that are living and breathing Ember and you can feel confident that most any question will be answered. On top of that, courtesy and professionalism are enforced throughout to ensure that discussions stay on track, helping community members instead of raging out of control in some negative fashion.

Just note that depending on the topic or question, you may be asked to post on Stack Overflow for better results. That's not necessarily a bad thing, since the Ember section on Stack Overflow is VERY active.



EmberWatch

This has become an essential source for everything Ember-related. The site does an amazing job of scouring the Internet for great Ember content. The key thing is to look at the date when something was posted and the date of the article itself. Almost everything prior to 2013 will be outdated in terms of Ember's API, so if you stick with the fresher links, you should be fine.

With that said, though, EmberWatch has categorized the content to make it easier to find the type of stuff you want to learn from. Whether it's a screencast, book, podcast or post, EmberWatch has you covered.

I'd also recommend following them on Twitter for the latest updates to the site.



Ember 101 Video Series

I've not met Ryan Florence in person, but have had enough online exchanges with him to know he's incredibly smart. He knows JavaScript REALLY well so when I saw him jump into Ember, I was incredibly excited.

He didn't disappoint, taking on a project called Ember 101 with the intent to help new developers get up to speed in Ember. The best part about it is that his videos are technically sound and FREE.

The series walks you through all of the core aspects of jumping in Ember and Ryan made sure to include an explanation on each page as well as sample code to work with.

I would definitely urge you to check this great resource out as you're starting your Ember journey.



PeepCode's Fire Up Ember.js Video

This was the best money I've ever spent on a screencast. Sometimes you just need to hear someone walk you through the concepts and Geoffrey Grosenbach did a stellar job with his Ember screencast. Everything from the way he discusses each concept to the demo app he built along the way is presented in a fashion that is easily digestible and understandable.

The saying, “You get what you pay for” definitely applies here because it's super high-quality work.



HandlebarsJS

Ember apps rely HEAVILY on templates. In fact, in my opinion, if you're not going to use templates, not only are you in for a really rough time but you might as well just build everything without Ember.

One thing I appreciate about the Ember team is their desire to not reinvent the wheel. It's evident in their choice of jQuery for DOM manipulation and parsing and the use of the Handlebars templating library for managing templates. So it makes sense to point out the Handlebars site as an important resource because you have the full breadth of its API available in your Ember apps.

The Ember docs will highlight certain key parts of creating templates, especially when it comes to data binding, but for the full picture of what you can do, you should check out the Handlebars API.



The Discourse Discussion Platform

Some people learn by reading. Some need to sit in a classroom. Others are quite content with videos. Then there are those that learn best by seeing actual code. This is where Discourse comes in. I mentioned earlier that the Ember project's discussion forum is based on the Ember-powered Discourse. Well, there's a really great upside: the Discourse team released their product as open source, allowing you to see the code of a real-world Ember system.

This is a really big deal because it's one thing to attempt to learn by the school of hard knocks and another to be able to check out a system built by highly-regarded developers like Jeff Atwood and Robin Ward. And because this is such a high-profile Ember project, it's bound to get a lot of scrutiny and code review. I can't stress enough how valuable a learning resource this is.



Robin Ward AKA Evil Trout

Speaking of valuable, you have to check out Robin Ward's blog. He's been on a tear putting up great posts about leveraging specific features of Ember. The fact that he's building out such a large-scale app allows him to offer invaluable advice on how to structure your code and properly use the features Ember has to offer.

Be sure to also catch him on Twitter and don't be afraid of his scary avatar. He's actually a pretty nice guy.



Billy's Billing Developer Blog

I only recently found this blog for Billy's Billing, a small business accounting software shop. I was pleasantly surprised when I found a number of high-quality Ember-related blog posts. You might be wondering why I would link to it. Well, it's because historically, I've found many of the best posts about a technology come from companies that are very vested in it. And you can see that here.

Additionally, I like the fact that they're not trying to teach you Ember basics. They're posting up things that they've obviously struggled with and want to share the solution. A great example is their post on representing data structures as trees in Ember.


Ramping Up

The main thing I hope is that this post helps make the Ember learning curve substantially smaller. Ember is a non-trivial framework to learn and every bit of up-to-date information you can grab will help you grok it faster. That last part is important. Remember that there is a lot of old content out there and you need to make sure you're focusing on the best stuff.

Also, please realize that this is by no means an exhaustive resource. There are countless blog posts that do a fantastic job of digging into Ember. This article doesn't aim to list them all. It's meant to give you a jumpstart so that you don't have to fumble around trying to find useful resources. And hopefully, it will help you build amazing apps.


July 23 2013

00:37

Real-Time Messaging for Meteor With Meteor Streams

This is 2013. If you are going to build a webapp, you must add real-time capabilities to the app. It is the standard. Meteor does a pretty good job of helping you quickly build apps and make them real-time. But Meteor is tightly coupled with MongoDB, and that is its only way to add real-time capabilities. Sometimes, this is overkill.

MongoDB is a perfect match for Meteor. But we don’t need to use MongoDB for all our real-time activities. For some problems, messaging based solutions work really well. It’s the same problem that pubnub and real-time.co are also addressing.

It would be great if we could have a hybrid approach to real-time in Meteor, combining the MongoDB collection-based approach with a messaging-based approach. Thus Meteor Streams was born, adding messaging-based, real-time communication to Meteor.


Introducing Meteor Streams

A Stream is the basic building block of Meteor Streams. It is a real-time EventEmitter. With a Stream, you can pass messages back and forth between connected clients. It is highly manageable and has a very good security model.

Let's Give It a Try

Let’s create a very simple, browser console based chat application with Meteor Streams. We’ll first create a new Meteor application:

meteor create hello-stream

Next we install Meteor Streams from Atmosphere:

mrt add streams

Then we need to create a file named chat.js and place the following code in it:

chatStream = new Meteor.Stream('chat');

if(Meteor.isClient) {
  sendChat = function(message) {
    chatStream.emit('message', message);
    console.log('me: ' + message);
  };

  chatStream.on('message', function(message) {
    console.log('user: ' + message);
  });
}

Start your app with:

meteor

Your app will now be running on – http://localhost:3000.

Now you have a fully functioning chat app. To start chatting, open the browser console and use the sendChat method as shown below.

Browser Console based Chat with Meteor Streams
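
For example, with two browser tabs open, typing something like the following into one tab's console should make the message appear in the other tab's console (the text is, of course, just an example):

sendChat("Hello from the first tab!");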

Let’s Dive In Further

It’s kind of hard to understand Meteor Streams with just a simple console based example, like the one we just built above. So, let’s build a full featured chat application to become more familiar with Meteor Streams.

The App

The app we are creating is a web-based chat application. Anyone can chat anonymously. Also, users can register and chat with their identity (username). It also has a filtering system, which filters out bad words (profanity).

At the end, it will look something like this. You can grab the source code from GitHub to see the final result.

Meteor Streams Chat App in Action

Let’s Create the App

Let’s create a standard Meteor app and install Meteor Streams from atmosphere. We’ll also be adding support for bootstrap and Meteor Accounts.

meteor create awesome-chat-app
cd awesome-chat-app
meteor remove insecure autopublish
meteor add bootstrap accounts-password accounts-ui
mrt add streams
rm awesome-chat-app.*   # remove the files that were added automatically

Let’s Build the UI

The user interface for our app will be pretty simple. We have a div showing the chat messages and an input box to enter in new chat messages. See below for the complete HTML of our UI. Check out the inline comments if you need help understanding the code.

Add the following content into client/home.html:

<head>
  <title>Awesome Chat App</title>
  <style type="text/css">
    #chat-message {
      width: 500px;
      height: 50px;
    }

    #messages {
      width: 700px;
      height: 300px;
      border: 1px solid rgb(230, 230, 230);
      margin: 0px 0px 10px 0px;
    }
  </style>
</head>

<body>
  {{> mainBox}}
</body>

<!-- Main Chat Window -->
<template name='mainBox'>
  <div class='container'>
    <h2>Awesome Chat App</h2>
    <!-- shows login buttons -->
    {{loginButtons}}
    {{> chatBox}}
  </div>
</template>

<!-- Chat Box with chat messages and the input box -->
<template name='chatBox'>
  <div id='messages'>
    {{#each messages}}
      {{>chatMessage}}
    {{/each}}
  </div>
  <textarea id='chat-message'></textarea><br>
  <button class='btn btn-primary' id='send'>Send Chat</button>
</template>

<!-- Template for the individual chat message -->
<template name='chatMessage'>
  <div>
    <b>{{user}}:</b> {{message}}
  </div>
</template>

Wiring Up Our Chat

Meteor's reactivity is an awesome concept and very useful. Now, Meteor Streams is not a reactive data source, but it can work well with local-only collections to provide reactivity.

As the name implies, local-only collections do not sync their data with the server. Their data is only available inside the client (browser tab).

Add the following content into lib/namespace.js to create our local-only collection:

if(Meteor.isClient) {
  chatCollection = new Meteor.Collection(null);
}

Now it's time to wire up our templates with the collection. Let's do the following:

  • Assign the collection to the messages helper in the chatBox template.
  • Generate a value for the user helper in the chatMessage template.
  • When the Send Chat button is clicked, add the typed chat message into the collection.

Add the following content to client/ui.js:

// assign collection to the `messages` helper in `chatBox` template
Template.chatBox.helpers({
  "messages": function() {
    return chatCollection.find();
  }
});

// generate a value for the `user` helper in `chatMessage` template
Template.chatMessage.helpers({
  "user": function() {
    return this.userId;
  }
});

// when `Send Chat` clicked, add the typed chat message into the collection
Template.chatBox.events({
  "click #send": function() {
    var message = $('#chat-message').val();
    chatCollection.insert({
      userId: 'me',
      message: message
    });
    $('#chat-message').val('');
  }
});

With the above changes, you'll be able to chat, but the messages are only displayed on your own client. So let's hand over the rest of the job to Meteor Streams.

Let’s Create the Stream

We’ll be creating the stream on both the client and the server (with the same name) and adding the necessary permissions.

Append the following code into lib/namespace.js to create the stream:

chatStream = new Meteor.Stream('chat-stream');

Just creating the stream alone is not enough; we need to give the necessary permissions, which allow clients to communicate through it. There are two types of permissions (read and write). We need to consider the event, userId, and the subscriptionId when we are creating the permission.

  • userId is the userId of the client connected to the stream.
  • subscriptionId is the unique identifier created for each client connected to the stream.

For our chat app, we need to give anyone using the app full read and write access to the chat event. This way, clients can use it for sending and receiving chat messages.

Add the following code to server/permissions.js:

chatStream.permissions.read(function(eventName) {
  return eventName == 'chat';
});

chatStream.permissions.write(function(eventName) {
  return eventName == 'chat';
});
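
If you later wanted to tighten this up, the same callbacks can take the connecting user into account. The sketch below assumes the permission callback exposes this.userId the same way the stream's event handlers and filters shown later do; it is only an illustration, not part of our app:

// Hypothetical variation: only logged-in users may send chat messages
chatStream.permissions.write(function(eventName) {
  return eventName == 'chat' && !!this.userId;
});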

Connecting the Stream With the UI

Now that we have a fully functioning stream, let’s connect it to the UI so others can see the messages that you are sending.

The first thing we need to do is add our chat messages to the stream when we click on the Send Chat button. For that, we need to modify the code related to the Send Chat button's click event (click #send), as follows (in client/ui.js):

Template.chatBox.events({
  "click #send": function() {
    var message = $('#chat-message').val();
    chatCollection.insert({
      userId: 'me',
      message: message
    });
    $('#chat-message').val('');

    // == HERE COMES THE CHANGE ==
    //add the message to the stream
    chatStream.emit('chat', message);
  }
});

Then we need to listen to the stream for the chat event and add the message to the chatCollection which is being rendered in the UI, reactively. Append the following code to the client/ui.js file:

chatStream.on('chat', function(message) {
  chatCollection.insert({
    userId: this.userId, //this is the userId of the sender
    subscriptionId: this.subscriptionId, //this is the subscriptionId of the sender
    message: message
  });
});

Now we need to modify the logic which generates the value for the user helper in the chatMessage template as follows:

  • Logged in user – user-<userId>
  • Anonymous user – anonymous-<subscriptionId>

Modify the code for the user helper in the chatMessage template to reflect the above changes (in client/ui.js):

Template.chatMessage.helpers({
  "user": function() {
    var nickname = (this.userId)? 'user-' + this.userId : 'anonymous-' + this.subscriptionId;
    return nickname;
  }
});

Displaying the Username Instead of the userId

Showing just the userId is not very useful. So let’s change it to display the actual username. Here, we’ll be using Meteor Pub/Sub to get the username for a given userId.

First of all, let's configure Meteor Accounts to accept a username when creating the user. Add the following code to client/users.js:

Accounts.ui.config({
  passwordSignupFields: "USERNAME_ONLY"
});

Then let’s create the publication for getting the user. Add the following code to server/users.js. It simply returns the username for a given userId.

Meteor.publish("user-info", function(id) {
  return Meteor.users.find({_id: id}, {fields: {username: 1}});
});

Now we need to create a subscription on the client for each user we are interested in. We’ll do this inside a method. Additionally, after we get the username, it needs to be assigned to a session variable. Then we can use the session variable inside the user helper to get the username reactively.

Append the following code into client/users.js:

getUsername = function(id) {
  Meteor.subscribe('user-info', id);
  Deps.autorun(function() {
    var user = Meteor.users.findOne(id);
    if(user) {
      Session.set('user-' + id, user.username);
    }
  });
}

Finally, let’s modify the user helper in the chatMessage template to get the username from the session (in client/ui.js):

Template.chatMessage.helpers({
  "user": function() {
    if(this.userId == 'me') {
      return this.userId;
    } else if(this.userId) {
      getUsername(this.userId);
      return Session.get('user-' + this.userId);
    } else {
      return 'anonymous-' + this.subscriptionId;
    }
  }
});

Filtering Out Bad Words

Our chat app will make sure to hide any profanity. If someone tries to send a message with some bad words, we need to filter those out. Meteor Streams has a feature called filters, which is designed for exactly this. Let's see how we can filter out the word fool from any chat message.

Add the following code into server/filters.js:

chatStream.addFilter(function(eventName, args) {
  if(eventName == 'chat') {
    var message = args[0];
    if(message) {
      message = message.replace(/fool/ig, '****');
    }
    return [message];
  } else {
    return args;
  }
});

Feel free to add in your own filters.

Our chat app is now complete. You can see a live version of the app at http://streams-chat.meteor.com. Additionally, the source code for the app is available on GitHub.


Conclusion

In this tutorial we built a chat application using local-only collections for adding in reactivity, and we used Meteor Pub/Sub for getting the username of a user. Hopefully you can see how nicely Meteor Streams can work with existing Meteor functionality. Still, this is just an introduction to Meteor Streams; for additional resources, check out the following links:

July 19 2013

23:14

Introduction to Express

A few years ago I, like many people, began to hear more and more about Node.js. I had been building server-side applications in ColdFusion for over a decade but was always curious to see how other platforms worked as well. I enjoyed JavaScript on the client and using it on the server seemed fascinating. (Although not necessarily new. I’m old enough to remember when Netscape released SSJS back in the 90s.) I read a few tutorials, sat in a few sessions, and in general came away… unimpressed.

Every time I read something about Node it ended up coming back to the same story: building a web server. Frankly, that wasn’t exciting. Even back when I first did web applications in Perl CGI scripts I didn’t have to worry about that. I could see the potential in small, lightweight API providers, but would I build a simple site in it? No way!

But then one day I got lucky. I had decided to sit in one more Node presentation (mainly because I knew the presenter well) and during his presentation the speaker demonstrated Express. A light bulb went off. Here was the Node framework I was waiting for! Yes, in theory, you’re still building your own web server, but that aspect is minimized and instead you can focus on your site logic and content. Even better, I saw how integrating templating engines allowed for an almost ColdFusion (or PHP) type experience. In this article I’ll show you how to install Express and how to get started building web applications using the framework.


Installation

Most likely you’re already a Node developer. If you’re new to the platform, then you may still be in awe of the power of npm. I know I am. As I said, most of my life I spent doing ColdFusion. There is a rich, if not large, ecosystem of open source for that platform. But discovering and installing is a manual affair. The first few times I used npm I was in awe. Frankly, I have a hard time now imagining using a platform without a tool like npm. Let’s start with a package.json file where we’ll add the Express dependency.

{
	"name": "demo1",
	"description": "First Express app",
	"version": "0.0.1",
	"dependencies": {
		"express": "3.x"
	}
}

Again, this should be fairly standard to Node developers. Drop down to the command line and run:

npm install

Which will install Express and its dependencies.

Conversely, you can also install Express as a command line tool to quickly generate skeleton apps. This can be done by running:

npm install -g express

Once you’ve done that, you can run express at the command line to generate an application.


First Steps

Let’s go back to the application we created with the previous package.json file and add an app.js file. At minimum, your application will create a new instance of the Express object and begin listening on a particular port. Let’s start with this:

var express = require('express');
var app = express();

app.listen(3000);

Personally, I tend to take things pretty slowly when learning a new framework, so it might make sense to quickly run node app just to ensure nothing was screwed up.

Defining Our Routes

Now let’s try adding a few simple routes to the application. Express apps can respond to various HTTP verbs as API methods. So as an example:

//Regular HTTP get
app.get(some url, do something);

//Some other page
app.get(some other url, do something else);

//I can respond to a form post
app.post(some url, do more stuff);

Let’s create a real example of this and add a home page to the application:

app.get('/', function(request, response) {
	response.send("This would be some HTML");
});

Note that Express adds a simple send() method to the response object. This abstracts away some of the boilerplate code to handle responses. If you’ve done everything right, now you can restart your application and open your browser to port 3000.

The response.send() API also intelligently handles different types of data. Imagine you want to add a simple JSON-based API to your site. By simply returning an object instead of a string, Express will handle converting the result to JSON as well as setting the appropriate response headers.

app.get('/api', function(request, response) {
	response.send({name:"Raymond",age:40});
});

As you can imagine, at this point, building an application would consist of adding more and more routes to handle whatever your needs may be. Let’s build a very simple static site that uses what we’ve learned so far.


Generic Blog App 9000

For our first site, we’ll build a blog. Yeah, this isn’t terribly new or exciting, but it’s also something that everyone conceptually understands. You can find the complete source code for this application in the attached download in the blog1 folder. We’ll skip the package.json file as it is exactly the same, except for the name. Let’s instead look at app.js.

var express = require('express');
var app = express();

app.get('/', function(req, res) {
	res.sendfile('./views/index.html');
});

app.get('/about', function(req, res) {
	res.sendfile('./views/about.html');
});

app.get('/article', function(req, res) {
	res.sendfile('./views/article.html');
});

app.listen(3000);

One of the first things you'll notice is that we've switched from the send API to sendfile. While we could embed large HTML strings into our app.js file, that would get messy pretty darn quickly. We've got three routes for this application. One for the home page, one for an "About" page, and one for an article. Typically the article page would represent one blog entry, but for now, we're keeping things simple.

Adding in the HTML

The HTML for our pages is also rather simple. Here’s the home page:

<html>
<head>
	<title>Home Page</title>
</head>

<body>
<h1>Blog!</h1>

<footer>
<p>
	<a href="/">Home</a> ~ <a href="/about">About Me</a> ~ <a href="/article">Some Article</a>
</p>
</footer>

</body>
</html>

Notice there isn’t anything special here yet. It’s plain static HTML that will be returned by the Express application as is. Both the About and Article pages are the same outside of changes to the title and h1 values.

Once again, fire this up at the command line and open your browser. (By the way, one of the common mistakes I made when first learning Node was to forget to kill my previous demos. If you are still running the last application, it will be holding port 3000. Either kill it or use another port for this application.) You should be able to browse around this simple application in a few simple clicks.

Now let’s move from static to dynamic.

From Static to Dynamic

Express supports a variety of templating engines. Template engines are like many things in the technology world – one step lower than religion and politics. The express command line can add support for Jade, EJS, JSHTML, and Hogan. According to the Express documentation, any templating engine that conforms to a particular signature will work with it. They also recommend checking the consolidate.js library for a list of supported template engines.

Personally, I’m a huge fan of Handlebars (handlebarsjs.com). I’ve used it in many client-side applications and it was a natural fit for me to use on the server side. In order to use Handlebars you need to install a wrapper library called hbs. Let’s add this to our application.

{
	"name": "blog2",
	"description": "Blog app",
	"version": "0.0.1",
	"dependencies": {
		"express": "3.x",
		"hbs":"*"
	}
}

Now let’s update our app.js to use this engine:

var express = require('express');
var app = express();

var hbs = require('hbs');

app.set('view engine', 'html');
app.engine('html', hbs.__express);

app.get('/', function(req, res) {
	res.render('index');
});

app.get('/about', function(req, res) {
	res.render('about');
});

app.get('/article', function(req, res) {
	res.render('article');
});

app.listen(3000);

We've done a few important things here. To use Handlebars, we load in (via require) the HBS wrapper library. We then need to tell Express to use it. By default, Handlebars will work with files that contain an extension matching the particular engine. In our case, something.hbs. But we can tell Express to treat HTML files as dynamic by using the "view engine" directive you see above. This is not required, but I prefer working with HTML files. My editor can then provide nicer code hinting and syntax highlighting. Actually loading the engine is then done via app.engine.

Finally, the routes all switch to using the new render method. Express defaults to using the views folder, so we can leave that off. Since Express also knows the extension we prefer, we can forget about that as well. Essentially, res.render('something') equates to telling Express to look for views/something.html, parse it based on the rules of our templating engine, and return it to the browser.

You can find this example in the blog2 folder, in the attached source code. As I said before, I like to take baby steps, so even though we’re not actually doing anything dynamic, I recommend firing this up at the command line and ensuring you can still browse the site.

Displaying Blog Entries on the Home Page

Given that we now support dynamic templates, let's actually make them dynamic. Since we're building a blog, we should add support for listing a set of blog entries on the home page and allowing you to link to a particular blog entry. While we could fire up a connection to MySQL or Mongo, let's create a static set of data as well as a simple wrapper library for it. Here is a blog.js file that provides a way to get the full set of entries as well as a single entry.

var entries = [
{"id":1, "title":"Hello World!", "body":"This is the body of my blog entry. Sooo exciting.", "published":"6/2/2013"},
{"id":2, "title":"Eggs for Breakfast", "body":"Today I had eggs for breakfast. Sooo exciting.", "published":"6/3/2013"},
{"id":3, "title":"Beer is Good", "body":"News Flash! Beer is awesome!", "published":"6/4/2013"},
{"id":4, "title":"Mean People Suck", "body":"People who are mean aren't nice or fun to hang around.", "published":"6/5/2013"},
{"id":5, "title":"I'm Leaving Technology X and You Care", "body":"Let me write some link bait about why I'm not using a particular technology anymore.", "published":"6/10/2013"},
{"id":6, "title":"Help My Kickstarter", "body":"I want a new XBox One. Please fund my Kickstarter.", "published":"6/12/2013"}];
exports.getBlogEntries = function() {
	return entries;
}

exports.getBlogEntry = function(id) {
	for(var i=0; i < entries.length; i++) {
		if(entries[i].id == id) return entries[i];
	}
}

Typically, we would also have methods for adding, editing, and deleting, but for now this is sufficient. Let’s now look at an updated app.js file that uses this engine.

var express = require('express');
var app = express();

var hbs = require('hbs');

var blogEngine = require('./blog');

app.set('view engine', 'html');
app.engine('html', hbs.__express);
app.use(express.bodyParser());

app.get('/', function(req, res) {
	res.render('index',{title:"My Blog", entries:blogEngine.getBlogEntries()});
});

app.get('/about', function(req, res) {
	res.render('about', {title:"About Me"});
});

app.get('/article/:id', function(req, res) {
	var entry = blogEngine.getBlogEntry(req.params.id);
	res.render('article',{title:entry.title, blog:entry});
});

app.listen(3000);

Let’s tackle the updates one by one. (This version can be found in the blog3 folder.) First, we load in our engine using a quick require call. This gives us the ability to call it and get entries. You may notice a new line calling bodyParser, but ignore that for now.

In our home page router, we've passed a second argument to the render API. The argument is an object with two keys, title and entries. The value for title is just a string, but entries calls our blogEngine API. Here's where things get interesting. Any data we pass here will be available to our templates. Depending on your template language, the particulars of how you use it may change, but let's look at the home page.

<h1>Blog!</h1>

{{#each entries}}
	<p>
		<a href="/article/{{id}}">{{title}}</a><br/>
		Published: {{published}}
	</p>
{{/each}}

If you’ve never used Handlebars before, you can still probably take a good guess as to what is going on here. The #each directive will iterate over an array. Inside of the block I’ve used a combination of Handlebar tokens that point back to my blog data as well as HTML to generate a simple list of blog entries. Coming from a ColdFusion background, this is very familiar.

Creating a Layout

I bet you’re also wondering where the rest of the HTML went. When using templating engines in Express you get automatic layout support. That means I can create a generic layout with my site design and Express will inject a particular page’s output within it. By convention, this is called layout.something where “something” is the particular extension you are using. Since we used HTML, this will just be layout.html:

<html>

<head>
	<title>{{title}}</title>
</head>

<body>

	{{{body}}}

	<footer>
		<p>
			<a href="/">Home</a> ~ <a href="/about">About Me</a>
		</p>
	</footer>
	
</body>
</html>

Pretty slick, right? The About page isn’t interesting so we will skip it, but check out the article route. It now includes a token, :id, in the URL. Express allows us to create dynamic URLs that can then map to request arguments. You’ll notice that back in the home page, we defined links that looked like this: /article/{{id}}.

We would, in theory, add one route for every blog entry we have, but it is much better to create an abstract route that will match any request of that particular form. To get access to this value, we also add in another piece, the bodyParser line we defined earlier. (This particular feature comes from the Connect framework and actually provides quite a bit of help to you in supporting both query string and form bodies. Almost every Express app will want to include this.)

Displaying Individual Articles

Because we get access to the dynamic value at the end of the URL, we can simply then pass this to the blogEngine object and use the result as an argument to the view.

Here’s the article.html file:

<h1>{{blog.title}}</h1>
Published: {{blog.published}}

<p/>

{{blog.body}}

Now we’ve got a truly dynamic, but ugly, application. Here’s our new home page:

express-blog3-home

And here’s one of a particular blog entry:

express-blog3-article

Put Some Lipstick on That Pig!

Let's add some basic styling to our application to make it a bit prettier. Express provides a simple way to add support for static resources like images, JavaScript libraries, and style sheets. By simply defining a static folder, any request for a file will be checked against that particular folder before being compared to routes. Here is an example from the final version of our blog engine (which may be found in the blog4 folder):

app.use(express.static('public'));

At this point, if you request /foo.css, and the file foo.css exists in the public folder, it will be returned. Since my design skills are as good as any developer's, I took the easy way out and grabbed a copy of Bootstrap (http://twitter.github.io/bootstrap/). I dropped it, and a copy of jQuery, into my public folder.

Then within my layout.html, I can refer to these resources. Here is one example of linking in bootstrap.css:

<link href="/bootstrap/css/bootstrap.css" rel="stylesheet">

Express will now automatically check for this file within the public folder. The result is – stunning. (Ok, compared to the first version, it is a huge improvement!) You can also have multiple static folders like this, and you can even set up custom URL prefixes for them; a quick sketch of both options follows.
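
The folder names here are just examples; the pattern is simply additional app.use calls:

// Two static folders -- "public" is checked first, then "assets"
app.use(express.static('public'));
app.use(express.static('assets'));

// Mount a folder under a URL prefix: a request for /static/foo.css
// is served from downloads/foo.css
app.use('/static', express.static('downloads'));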

The Home page:

express-blog4-home

And an article:

express-blog4-article

What Next?

Want to learn more? Here are some links that may be helpful.

July 15 2013

22:04

Building Apps With the Yeoman Workflow

What Is Yeoman?

Trick question. It’s not a thing. It’s this guy:

yo

Basically, he wears a top hat, lives in your computer, and waits for you to tell him what kind of application you wish to create. As an example, scaffolding a web application would look something like this:

yo-webapp

The first thing that comes to mind is OMG so dreamy. Second, thanks, bro.

All we did was tell him what we wanted and he replied with specific questions to give us even more control.

Let’s go back a step, though. Not every new computer comes with a Yeoman pre-installed. He lives in the NPM package repository. You only have to ask for him once, then he packs up and moves into your hard drive. Make sure you clean up, he likes new and shiny things.

He’s a person with feelings and opinions, but he’s very easy to work with. If you think he’s too opinionated, he can be easily convinced.

Let's take a second to break apart what that yo webapp command, from the previous screenshot, really did.

yo

This is an OS X, Linux, and Windows friendly system-wide command that scours your hard drive for any installed “generators,” then gives them control based on the next argument:

webapp

This is actually a separate plug-in, or “generator,” called generator-webapp. Yeoman recognizes other generator-____ Node modules, which opens the door for using Backbone, AngularJS, and countless other you-name-it generators.

Something important to take away from this is that it's the generator-webapp module that prompts us with questions. The same goes for any other generators we install. They are maintained by the community, not necessarily the Yeoman team members themselves.

By using Yeoman, you’re not saying “I want to do things your way, master. bow bow,” without having any control. It’s actually quite the opposite. What you’re really saying is, “I want to make an application that follows best practices that have been discovered by frequent users and contributors of the web development community.”

Seriously, you have to say it just like that, or it won’t work.

Should you prefer to do something differently than what he gives you, you simply change the code that was generated for you, or even go to the source of the “generator” itself, and send in your contribution.


Friendship

Our buddy, yo has some buddies of his own, and thinks you’ll all get along over endless tea and smiles. If you haven’t heard of Grunt or Bower, here’s a quick summary of what these give us:

Grunt

Grunt is a JavaScript-based task runner that does the dirty stuff. Like yo, it also provides a base set of functionality, then allows the community to share their own plug-ins, or "tasks", that help accomplish common things. When you scaffold your application with yo webapp, Grunt and some hand-picked tasks will come along, which accomplish things like running your website in a local development environment, concatenating and minifying your code, optimizing your images, and much more. Tasks are run through the command line, by typing grunt server, grunt build, grunt test, and many more.

Tasks are defined and customized in a Gruntfile.js file, which lives in the root directory of your project. Check it out to see what Yeoman set up for you.
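
If you have never seen one before, a bare-bones Gruntfile has roughly the following shape. This is only a minimal sketch to show the moving parts, not the far more complete file Yeoman generates for you:

// Gruntfile.js -- minimal sketch, not the Yeoman-generated version
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      all: ['app/scripts/**/*.js']    // files for the jshint task to check
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint'); // pull in a community task plug-in
  grunt.registerTask('default', ['jshint']);  // running `grunt` alone now runs jshint
};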

Bower

Nobody likes going to GitHub or random developers' sites to download a .zip of a JavaScript tool. Like when fetching a Node package with npm install ___, Bower lets you say bower install ___. The component is then saved in a directory of your choosing, generally app/bower_components/ for Yeoman-generated apps. Assuming you wanted jQuery, you would run the bower install jquery command, then include the relevant file inside of your HTML file. In our case, <script src="bower_components/jquery/jquery.js"></script>
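
Bower records the components you choose to keep (typically installed with the --save flag) in the bower.json file you'll see at the project root. As a rough, illustrative sketch (the exact names and versions depend on your generator and what you install), it looks something like this:

{
  "name": "webapp",
  "version": "0.0.0",
  "dependencies": {
    "jquery": "~2.0.0",
    "modernizr": "~2.6.2"
  }
}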


A Typical Application

Let’s get wild. It’s time to create an app.

Real quick though, find your nearest terminal and make sure you have yo installed globally:

$ npm install -g yo

Create a folder we can play around in, then run:

$ yo webapp

Here’s what should have happened:

  • A whole buncha stuff.

Did it? Good!

To prevent you from scrolling up through all of the text that was just spit out at you, here’s an overview:

yo-webapp-long

The new web application was scaffolded and your Bower components and NPM packages were automatically installed.

Open all this new stuff in your favorite editor, and we’ll look over what we have.

├─ app/
│  ├─ images/
│  │  ├─ glyphicons-halflings.png
│  │  └─ glyphicons-halflings-white.png
│  ├─ scripts/
│  │  ├─ vendor/
│  │  │  └─ bootstrap.js
│  │  ├─ app.js
│  │  ├─ hello.coffee
│  │  └─ main.js
│  ├─ styles/
│  │  └─ main.css
│  ├─ .htaccess
│  ├─ 404.html
│  ├─ favicon.ico
│  ├─ index.html
│  └─ robots.txt
│
├─ node_modules/
│  ├─ so/
│  ├─ many/
│  └─ packages/
│
├─ test/
│  ├─ spec/
│  │  └─ test.js
│  ├─ .bowerrc
│  ├─ bower.json
│  └─ index.html
│
├─ .bowerrc
├─ .editorconfig
├─ .gitattributes
├─ .gitignore
├─ .jshintrc
├─ bower.json
├─ Gruntfile.js
└─ package.json

If you take anything away from this article, let it be the beautiful file/folder text representation above. That just took a whole Mountain Dew of my time.

Back on track. What you’re looking at is the most common application structure a Yeoman generator will produce.

  • app/ is where your pure, non-compiled, non-minified source code lives.
  • app/scripts/ is where your JavaScript goes. You’re free to create sub-directories and even use CoffeeScript if that’s your cup of tea. That didn’t make sense. Again. You’re free to use TeaScript if that’s your cup of coffee. Nope.
  • app/styles/ is where your CSS goes. Again, sub-directories, LESS, Sass, whatevs.
  • app/index.html is the non-minified version of index.html that will eventually be squashed and delivered to the client. More on that later.
  • Gruntfile.js has all of the build, server, and test tasks defined.

At this point, yo has done his job. He’s given you everything you need to launch a production-ready web application. Let’s now shift our focus to what Grunt tasks he’s pre-configured for us.

grunt build

Running grunt build takes your app/ source code files and turns them into a distributable application, which ends up in dist/.

That dist/ folder is what you feed to your server. dist/ will have its own index.html, with references to minified and concatenated dist/scripts and dist/styles, and optimized dist/images. Your users will appreciate this. Your phone-card, dial-up users will really appreciate this.

Behind the scenes, grunt build is a task that runs several sub-tasks. One of those is grunt-usemin, which looks for blocks inside of your app/index.html, like this:

app/index.html

<!-- build:js scripts/main.js -->
<script src="bower_components/jquery/jquery.js"></script>
<script src="scripts/main.js"></script>
<!-- endbuild -->

After your grunt build task completes, you will end up with this:

dist/index.html

<script src="scripts/c155266f.main.js"></script>

It sucked those scripts up, concatenated, minified, and even prefixed them with unique hashes to prevent browsers from caching outdated versions. Quite powerful.

That’s one of the shining features about using Yeoman. Instead of manually defining what you want your build process to do each time you create an application, you can just place some trust in Yo and your chosen generator. Together, they’ll wire you up with everything you need to launch a production-ready application.

grunt server

Now that you’ve seen what type of work grunt build will do when your application is complete, you should probably start working on your application! We’ll create one together in just a sec, but first let’s see what kind of workflow we’ll have. Like grunt build, grunt server uses several other Grunt tasks to make development as easy as it can be.

Try it out:

yo-webapp-grunt-server

The aforementioned “several other Grunt tasks” are:

  • clean: Yeoman stores some stuff in a .tmp folder. That will be wiped out.
  • coffee: Compiles your CoffeeScript files from app/scripts.
  • compass: Compiles your Sass files from app/styles.
  • connect: Creates a local server, watches for changes to your source files, then triggers a reload in your browser.
  • open: Opens the server instance, typically localhost:9000 in your browser.

Make an edit or two in the source files to see the change reflected in the browser. Like I said above, this is about as easy as it can be. It just works.


Let’s App It Up!

I of course meant appetizers. Grab some cheese sticks, then meet me in a little bit.

Wash your hands!


Let’s Create an Application

To get a feel for some other Yeoman generators, let’s try out Backbone. We’ll create a simple To Do app, use Bower for our dependencies, and introduce you to a real-life workflow with Yeoman.

$ Sound good? (Y/n)

I’ll assume you entered “Y”. We ride! But first:

# install the Backbone generator:
$ npm install -g generator-backbone

# make another play directory, then do these things:
$ yo backbone

     _-----_
    |       |
    |--(o)--|   .--------------------------.
   `---------´  |    Welcome to Yeoman,    |
    ( _´U`_ )   |   ladies and gentlemen!  |
    /___A___\   '__________________________'
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `

Out of the box I include HTML5 Boilerplate, jQuery, Backbone.js and Modernizr.

Would you like to include Twitter Bootstrap for Sass? (y/N) Yes
Would you like to include RequireJS (for AMD support)? (y/N) No

Open the new app in your editor. Things should feel quite familiar after our experience with the web app generator. You still have an app directory, with scripts/, styles/ and an index.html.

Before we start editing files, run:

$ grunt server

As we talked about earlier, this starts the server, sets up watches on our files, blah blah yipsie-doodle. Your browser should open, and you should be greeted with:

‘Allo, ‘Allo!

Well, shoot, we have to keep that. It’s just so nice. However, let’s clear out the other stuff.

index.html

<div class="container">
    <div class="hero-unit">
        <h1>'Allo, 'Allo!</h1>
        <section id="todo-app">
            <!-- Where our To Do app will go -->
        </section>
    </div>
</div>

When you save, your browser will refresh, and there we have it! Just a simple, warm “‘Allo, ‘Allo”.

Let’s get ourselves a game plan. We know we’re going to create a To Do app, but what might that look like? Will we need any other libraries to help us?

Hmm.

It’s been at least 4 seconds, and I haven’t heard any answers.

Alright, I’m gonna grab another Dew after that file tree drank my last one. I’ll let you know if I think of anything.


To Do: Set Up Our File Structure

B3. A terrible slot in a vending machine for a carbonated drink. Fizz, foam, disaster.

While I was in the bathroom washing my hands, I had a vision.

[ Add a New To Do ] ← input

checkbox
- clicking will draw a line through the title of the to do item
 ↓
[x] To Do Item #1
[ ] To Do Item #2
    ↑ title
      - double clicking will trigger an "edit" mode

Or…

vision-big

Let’s set ourselves up with a structure that will bring this vision to life.

generator-backbone came with some secret weapons: sub-generators. yo backbone scaffolded our application, but flip back to your terminal and check out what these guys can do:

todo-generate-models

Check out your index.html:

<!-- build:js scripts/main.js -->
<script src="scripts/main.js"></script>
<script src="scripts/templates.js"></script>
<script src="scripts/collections/todos-collection.js"></script>
<script src="scripts/models/todo-model.js"></script>
<script src="scripts/views/todos-view.js"></script>
<script src="scripts/views/todo-view.js"></script>
<!-- endbuild -->

How ’bout that! It not only created and placed files in relevant directories, it even included them in your HTML for you.

I’ve created a repository for our To Do application; go check it out. We’ll take a glance at the files together, but please refer to the repository to get the full code.

scripts/main.js

/*global backboneApp, $*/

window.backboneApp = {
    Models: {},
    Collections: {},
    Views: {},
    Routers: {},
    init: function () {
        new this.Views.TodosView({
            collection: new this.Collections.TodosCollection()
        });
    }
};

$(document).ready(function () {
    backboneApp.init();
});

Thoughts

The Backbone generator is establishing some good practices you can use right out of the box. It took the name of your directory, in my case “backboneApp”, and exposed an object literal to hold the Models, Collections, and other Backbone objects we may create.

The generator also incorporates JSHint into your app’s build process, making sure your code is of the highest, most consistent quality. You are encouraged to customize your preferences inside the .jshintrc file in the root of your project’s directory.

Finally, $(document).ready will call backboneApp.init, which creates a TodosCollection, then passes it into a TodosView. I’ll go over these in more detail soon.

scripts/collections/todos-collection.js

/*global backboneApp, Backbone*/

backboneApp.Collections.TodosCollection = Backbone.Collection.extend({

    localStorage: new Backbone.LocalStorage('backbone-generator-todos'),

    initialize: function () {
        this.model = backboneApp.Models.TodoModel;
    }

});

Thoughts

If we want our To Do app to be somewhat usable, we have to store our To Do items somewhere. There’s a handy Backbone adapter you may be familiar with called Backbone.LocalStorage. It will intercept Backbone’s calls to the default remote backend and use your browser’s window.localStorage instead.

We know we’ll need the Backbone.LocalStorage adapter, but where should we go to get it? Idea! Idea!

We haven’t made much use of Bower directly. When our application was scaffolded, Bower was used behind the scenes to grab Modernizr, Twitter Bootstrap, jQuery, Underscore, and Backbone. But, what if we want to add in another JavaScript dependency?

Go back to your favorite terminal and try this:

$ bower search backbone
bower-search-backbone

Ok, wow. That’s… a lot. Maybe we should narrow that down.

$ bower search backbone.localstorage
Search results:

    backbone.localStorage git://github.com/jeromegn/Backbone.localStorage.git

There we go. Now we just have to install it.

$ bower install backbone.localStorage --save
bower cloning git://github.com/jeromegn/Backbone.localStorage.git
bower cached git://github.com/jeromegn/Backbone.localStorage.git
bower fetching backbone.localStorage
bower checking out backbone.localStorage#v1.1.4
bower installing backbone.localStorage#v1.1.4

When working with multiple developers, it can be troublesome to ensure everyone has the correct dependencies and matching versions. By using --save above, we are telling Bower to remember this new dependency and record it in our bower.json file. When another developer clones your project, they just have to run bower install to download every dependency, keeping everyone in sync. That’s why app/bower_components is listed in your .gitignore file. Gone are the days of bloated repositories!
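
For reference, the dependencies block of bower.json ends up looking roughly like this after the install (the name and exact version numbers here are illustrative and will vary):

{
    "name": "backbone-app",
    "dependencies": {
        "jquery": "~1.9.1",
        "underscore": "~1.4.4",
        "backbone": "~1.0.0",
        "backbone.localStorage": "~1.1.4"
    }
}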

Now that Bower has awesomed all over our application, go into app/index.html and update the scripts/vendor.js comment block:

<!-- build:js scripts/vendor.js -->
<script src="bower_components/jquery/jquery.js"></script>
<script src="bower_components/underscore/underscore.js"></script>
<script src="bower_components/backbone/backbone.js"></script>
<script src="bower_components/backbone.localStorage/backbone.localStorage.js"></script>
<!-- endbuild -->

When you save the file, your browser will refresh and you’ll have the new library ready to use. More specifically, TodosCollection will have it ready to use.

scripts/models/todo-model.js

/*global backboneApp, Backbone*/

backboneApp.Models.TodoModel = Backbone.Model.extend({

    defaults: {
        title: '',
        completed: false
    },

    toggle: function () {
        this.save({
            completed: !this.get('completed')
        });
    }

});

Thoughts

This is a pretty basic Backbone Model. We set some default properties for our To Do items and define a toggle function, simply used to switch between a “Complete” or “Incomplete” state.

scripts/views/todos-view.js

/*global backboneApp, Backbone, JST*/

backboneApp.Views.TodosView = Backbone.View.extend({

    el: '#todo-app',

    template: JST['app/scripts/templates/todos.ejs'],

    events: { /* ... */ },

    initialize: function () { /* ... */ },

    render: function () { /* ... */ },

    createTodo: function () { /* ... */ },

    addTodoItem: function () { /* ... */ },

    addAllTodoItems: function () { /* ... */ }

});

Thoughts

This is our most robust Backbone View, so to see the definitions to these various properties and methods, please refer to the repository.

However, here are a couple key things:

el: '#todo-app'

This selector matches that <section id="todo-app"></section> element we created in our index.html file. This will be our primary View.

template: JST['app/scripts/templates/todos.ejs']

This little JST thing snuck in when we said yo backbone:view ____. When our View’s JavaScript file was created, the Backbone sub-generator created a matching template file for us: app/scripts/templates/todos.ejs.

These .ejs template files will define our Views’ HTML. When we run our app with grunt server or grunt build, our template files will be crushed together into a JavaScript object, JST. When our view file says template: JST['path/to/view/template.ejs'], this is referring to that object.
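
Conceptually, the generated file boils down to something like this (a simplified sketch of what the jst task produces, not its exact output):

// Simplified sketch of the compiled template object
this.JST = this.JST || {};

this.JST['app/scripts/templates/todos.ejs'] = _.template('<form class="input-append">...</form>');
this.JST['app/scripts/templates/todo.ejs'] = _.template('<input type="checkbox" ...>');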

scripts/templates/todos.ejs

<form class="input-append">
    <input type="text" id="new-todo" placeholder="What do you need to do today?">
    <input type="submit" class="btn" value="Submit">
</form>
<ul>
    <!-- Where our To Do items will go -->
</ul>

Thoughts

Because we answered “Yes” to including Twitter Bootstrap for Sass when we scaffolded our application, I’ve added a couple of class names to pretty up our app. Feel free to style to your heart’s content in the styles/main.scss file.

styles/main.scss

@import 'sass-bootstrap/lib/bootstrap';

.hero-unit {
    margin: 50px auto 0 auto;
    width: 300px;
}

form {
    margin-top: 10px;
}

ul,
li form {
    margin: 0;
    padding: 0;
}

ul {
    list-style: none;
}

li form {
    display: none;
}

.editing {
    span {
        display: none;
    }

    form {
        display: inline-block;
    }
}

input:checked ~ span {
    text-decoration: line-through;
}

Thoughts

Sass is pretty cool.

Also, it’s pretty cool that the browser still reloads when you make a change to your Sass files. If you’ve used Sass before, you know it can be a hassle to get a productive development environment set up quickly. Out of the Yeoman box, you’re editing, watching, and reloading with none of the aforementioned hassle. Smiley face.

scripts/views/todo-view.js

/*global backboneApp, Backbone, JST*/

backboneApp.Views.TodoView = Backbone.View.extend({

    tagName: 'li',

    template: JST['app/scripts/templates/todo.ejs'],

    events: {
        'click input[type="checkbox"]': 'toggle',
        'dblclick span': 'toggleEdit',
        'submit form': 'toggleEdit'
    },

    initialize: function () { /* ... */ },

    render: function () { /* ... */ },

    toggle: function () { /* ... */ },

    toggleEdit: function () { /* ... */ }

});

Thoughts

This TodoView will represent an individual item. It will be an <li> with some custom functionality handling click, double click, and submit events, enabling a user to edit and save a To Do item.
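
The repository holds the actual definitions, but as a rough sketch (my own approximation, not the repository’s exact code), the handlers boil down to something like this:

// Rough approximation -- see the repository for the real implementation
render: function () {
    this.$el.html(this.template(this.model.toJSON()));
    return this;
},

toggle: function () {
    // flips and saves the model's `completed` attribute (see todo-model.js)
    this.model.toggle();
},

toggleEdit: function () {
    // switches the <li> in and out of the CSS `editing` state
    this.$el.toggleClass('editing');
}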

scripts/templates/todo.ejs

<input type="checkbox" <% if (completed) { %>checked<% } %>>
<form>
    <input type="text" value="<%= title %>">
</form>
<span>
    <%= title %>
</span>

Thoughts

Simple enough. We’re using some basic Underscore templating to spit out values and toggle a checked state on our checkbox.


To Do: Do It Again

Our To Do application is actually done! It’s quite basic in functionality, but you should have a sense of how natural it is to develop an application using Yeoman and his Generator buddies. And even though the functionality is basic, none of the techniques we used to get here were “basic.” We’re using smart, efficient libraries (Sass, Backbone, Underscore) with a finely-tuned development process (Grunt, LiveReload, Compass), and it took us only a few terminal commands.

If you’re like me, you probably want to stop with the To Do stuff and start making your own applications. If you want to go play around, go for it! When you’re done generating like a crazy person, come back and let’s ship our To Do app.


To Do: Ship It

Let’s put this thing in the water and see if she floats! Do NOT put your computer in the water. Wait, would a MacBook Air float? No, probably not. Hmm…

That was a dangerous paragraph. Let’s just get our app ready for production, safe and dry.

grunt server has been amazing, but it’s time to meet his brother, grunt build. We talked about him a bit earlier, but let’s go over a few more details.

Here is what the grunt build task is defined as in your Gruntfile.js:

grunt.registerTask('build', [
    'clean:dist',    // Clears out your .tmp/ and dist/ folders
    'coffee',        // Compiles your CoffeeScript files (if any)
    'createDefaultTemplate', // Creates a JS file that sets up your JST object
    'jst',           // Compiles your `scripts/templates/` files
    'compass:dist',  // Compiles your Sassiness
    'useminPrepare', // Looks for those <!-- special blocks --> in your HTML
    'imagemin',      // Optimizes your images!
    'htmlmin',       // Minifies your HTML files
    'concat',        // Task used to concatenate your JS and CSS
    'cssmin',        // Minifies your CSS files
    'uglify',        // Task used to minify your JS
    'copy',          // Copies files from .tmp/ and app/ into dist/
    'rev',           // Creates unique hashes and re-names your new JS/CSS files
    'usemin'         // Updates the references in your HTML with the new files
]);

So, that thing is pretty legit. All of these tasks are defined inside of Gruntfile.js, so feel free to poke and tweak around to customize your application’s build. It’s highly likely you won’t need to do any customization at all, but it’s there if you need to.

Oh, one other thing. grunt build is actually wrapped inside of another task.

grunt

Simply running grunt will execute the default task:

grunt.registerTask('default', [
    'jshint',
    'test',
    'build'
]);

Those first two tasks, jshint and test, are easy to overlook when rushing an app out the door, but are very important.

JSHint

The jshint task will consult with your .jshintrc file to learn your preferences, then scan through all of your JS files to make sure your rules are abided by. To get the full run down of your options with JSHint, check the JSHint documentation.
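
As a taste, a .jshintrc is just a JSON file of options; here’s an illustrative subset (not necessarily what Yeoman generates verbatim):

{
    "curly": true,    // require curly braces around blocks
    "eqeqeq": true,   // require === and !== instead of == and !=
    "undef": true,    // warn about the use of undeclared variables
    "unused": true,   // warn about variables that are declared but never used
    "browser": true   // assume browser globals such as window and document
}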

Test

The test task looks like this:

grunt.registerTask('test', [
    'clean:server',
    'coffee',
    'createDefaultTemplate',
    'jst',
    'compass',
    'connect:test',
    'mocha'
]);

It basically does enough to create and serve your application for your test framework, Mocha, to execute your tests.

Oh crap, tests.

Next door to your app/ and dist/ directories, this little test/ buckaroo has been waiting for our attention. Aww.

If you open that up, you’ll see test/ has its own bower.json and index.html, as well as a spec/ directory. Your tests will have some dependencies of their own, the Chai Assertion Library and Mocha testing framework.

Expand that spec/ directory and you’ll see a test.js file that looks something like this:

/*global describe, it */
'use strict';

(function () {
    describe('Give it some context', function () {
        describe('maybe a bit more context here', function () {
            it('should run here few assertions', function () {

            });
        });
    });
})();

Ok, looks like we could use a pull request to correct some grammar. Anybody?

If you haven’t written your own tests before, you’ll see terms like describe, it, before, beforeEach, after, and afterEach pop up. describe is a wrapper for a group of related tests, ____Each are optional functions that will execute before or after your test(s), and each it is a specific test.
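
As a concrete (hypothetical) example, a filled-in spec for our To Do app might look something like this, assuming Chai’s expect style and our global backboneApp:

/*global describe, it, expect, backboneApp */
'use strict';

describe('TodoModel', function () {
    it('defaults to an empty, uncompleted item', function () {
        var todo = new backboneApp.Models.TodoModel();
        expect(todo.get('title')).to.equal('');
        expect(todo.get('completed')).to.equal(false);
    });
});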

Try running a grunt test to see all the magic unfold.

todo-grunt-test

You should play around and see if you can write some tests for our To Do application. A few ideas for test cases might be:

  • Does creating a new To Do item get saved in localStorage?
  • Does a new To Do item’s title get trimmed (removing extra whitespace)?
  • When editing a To Do item, does deleting the title, then saving remove the To Do item from localStorage?

There’s only one more thing to do.

Press Enter

$ grunt

You should see our favorite words: Done, without errors.


Finding Yeoman

Yeoman is still quite young; he just turned one! Things are pretty great now and they’re only going to get better. However, like all one year olds, Yeoman is still learning to walk without falling, and talk without drooling. You just might run into a bug or two. In times like these, think of him like your cute little nephew. He needs positive role models in his life, so help him learn!

That got real children’s book-y, real fast. I’ll grow it up a little: there are bugs and we need your help to squash the doody out of them (I said “a little”). Even if it’s not a bug, but you’re like, “I know a MUCH faster Grunt plug-in this generator could use,” report it to the appropriate generator’s issue tracker.

If you want to learn some more about Yeoman or just get to know the team, you’ll find us peppered all over the following sites.

If you’re just plain stuck, try one of the following resources for a helping hand.

Yeoman is just one piece of the entire stack: NPM, Node, Grunt, and Bower. It can be intimidating if you’re unfamiliar with these, but it is crucial not to fear the curve! Learning will need to happen, and like always, it will probably need to happen the hard way before it really sticks.

Psst, if you’re using sudo before every command, run, don’t walk, to Node and NPM in 30 Seconds. There, you’ll find several scripts you can run to give control back to your user account. It will also help you install Node and NPM if you’re starting from scratch.


Yo’ Next Application – Will You Yo?

Like all tools, I believe Yeoman is something every developer should try. If you give it a shot and find it’s not suitable for your task, I and the rest of the team would love to hear why. If you need a buddy to help you with your project, come find me. I’m always available around the links above, or just ping me on Twitter. I’m @stephenplusplus or Stephen Sawchuk.

me

Nice to meet you.

July 11 2013

19:52

Eight Cool Features Coming in ES6

ES6 is coming and with it brings many great things that will help us develop better and faster web applications. Let’s have a look at the most significant ones.


Syntax Sugar

In ES6 there is a lot of syntax sugar that will shorten development time and speed up our applications. Let’s have a taste, shall we?

Destructuring

This one is really interesting. For example, you can destructure an object into separate variables:

var point = { x: 1, y: 2 };
var { x, y } = point;

console.log(x); // 1
console.log(y); // 2

Another use for it is returning multiple values from a function – no need to return an object anymore. We can also omit some of the returned values if we need to. So this code:

function returnsFewValues() {
	return {
		x: 1,
		y: 2
	}
}

Can be replaced with this:

function returnsFewValues() {
	return [ 1, 2 ];
}
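
On the calling side, destructuring picks those values straight back apart:

var [ x, y ] = returnsFewValues();

console.log(x); // 1
console.log(y); // 2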

Let and Const

These two are really useful. Both of them allow us to scope a variable to the block it’s declared in, so it will be inaccessible outside of that block: let is for normal variables and const is for constants, values that never change (const is an old idea which finally came to JavaScript). We can now forget about using ugly closures just to hide a variable:

function x() {
	(function () {
		var a = 2;
		a = a + 3;
	}());
}

console.log(a); // ReferenceError: a is not defined

These can be replaced with more elegant let statements (of course, for more applicable use than this example):

function x() {
	let a = 2;
	a = a + 3;
}

console.log(a); // ReferenceError: a is not defined

For-Of and Array Comprehensions

for-of is the younger brother of the old for-in statement. Using it, we iterate over values, not over keys:

var someArray = [ "a", "b", "c" ];

for (var v of someArray) {
	console.log(v);
}

It is also used in array comprehensions, which are inspired by mathematical set notation. For example, say you have an array of names and you want to grab only the ones that are eight or more letters long. Before ES6, you would do it like this:

var eightLetterLongNames = [];
someNamesArray.forEach(function (name) {
	if (name.length >= 8) eightLetterLongNames.push(name);
});

But in ES6, it is much easier and cleaner:

var eightLetterLongNames = [ name for (name of someNamesArray) if (name.length >= 8) ];

Arrow Functions

Ever needed to only return a value or log something from a function? Usually, we did it like this:

element.addEventListener('click', function (e) { console.log(e); });

Another bit of syntactic sugar from ES6 is the arrow function, which gives us a much more concise way to write one-line functions:

element.addEventListener('click', (e) => console.log(e));

Spread Operator, Rest Parameter, and Default Arguments

Let’s start with the last one. We can finally define default arguments and forget about the old argument = argument || "default value" incantation. Just define the function like this:

function example(arg1 = "default value", arg2 = false, arg3) {
	// function body
}
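
Calling it then works as you would expect; omitted arguments simply fall back to their defaults (example is just the illustrative function name used above):

example();                   // inside, arg1 === "default value", arg2 === false, arg3 === undefined
example("custom", true, 42); // every default overridden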

The spread operator allows us to pass an array as separate arguments to a function. Instead of this:

var someArray = [ "a", "b", "c" ];

console.log.apply(window, someArray);

You can now do it like so:

var someArray = [ "a", "b", "c" ];

console.log(...someArray);

The rest parameter is a perfect solution if you want your function to accept a few named arguments followed by an unlimited number of additional ones. The syntax is simple and saves us from the old, ugly way of doing it:

function logWithTag(tag) {
	var logs = [].slice.call(arguments, 1);
	logs.forEach(function (log) {
		console.log(tag + ': ' + log);
	});
}

The above is what we’re used to writing, but the new syntax is much more readable:

function logWithTag(tag, ...logs) {
	logs.forEach(function (log) {
		console.log(tag + ': ' + log);
	});
}

Or even shorter, now with arrow functions:

var logWithTag = (tag, ...logs) => logs.forEach((log) => console.log(tag + ': ' + log));

Classes

Finally, ES is turning into a true object-oriented language. With the introduction of classes, we can write our code in a clean and structured way, removing the need for the hand-rolled class implementations present in nearly every popular library:

// ExtJS
var SimpleClass = new Ext.Class({
	name: 'John Doe',
    initialize: function(name) {
		this.name = name;
	},
    say: function (message) {
        return this.name + ': ' + message;
    }
});

// PrototypeJS
var Person = Class.create();
Person.prototype = {
	name: 'John Doe',
	initialize: function(name) {
		this.name = name;
	},
	say: function (message) {
		return this.name + ': ' + message;
	}
};

These can be replaced with pure JavaScript:

class Person {
	constructor(name) {
		this.name = name;
	}

	say(message) {
		return this.name + ': ' + message;
	}
}

It also supports inheritance and extending classes, so you can use it to produce fully object-oriented applications like you do in Java. This way, your code gets much cleaner and easier to read.
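
For instance, extending the Person class above takes just a few lines (a minimal sketch):

class Developer extends Person {
	say(message) {
		// reuse the parent implementation, then add to it
		return super.say(message) + ' (sent from my ES6 class)';
	}
}

var dev = new Developer('Jane Doe');
console.log(dev.say('Hello')); // Jane Doe: Hello (sent from my ES6 class)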


Modules

Another exciting feature of ES6 is modules. For now, there are a few alternative solutions for modularizing your app, mainly CommonJS and AMD. Here are some examples:

/* CommonJS */

// point.js
function Point(x, y) {
	this.x = x;
	this.y = y;
}
module.exports = Point;

// myapp.js
var Point = require("point");
var origin = new Point(0, 0);
console.log(origin);

/* AMD */

// point.js
define("point", function() {
	function Point(x, y) {
		this.x = x;
		this.y = y;
	}
	return Point;
});

// myapp.js
define("myapp", ["point"], function(Point) {
	var origin = new Point(0, 0);
	console.log(origin);
});

ES6 removes the need for these libraries, as it has everything you need to split your code into smaller pieces. Together with classes, you get clean and maintainable code:

// point.js
export class Point {
	constructor(x, y) {
		this.x = x;
		this.y = y;
	}
}

// myapp.js
import { Point } from "./point.js";

var origin = new Point(0, 0);
console.log(origin);

Maps and Sets

Maps and sets were long absent from JavaScript. They were imitated using plain objects, but iterating over those was messy – you had to check that each property wasn’t inherited from the prototype (and perhaps that it wasn’t a method):

var map = {
	a: 1,
	b: 2,
	c: 3
};

for (var k in map) {
    if (map.hasOwnProperty(k)) {
        console.log(map[k]);
    }
}

for (var k in map) {
    if (map.hasOwnProperty(k)) {
        console.log(k);
    }
}

The ES6 approach is much cleaner:

var map = new Map();
map.set("a", 1);
map.set("b", 2);
map.set("c", 3);

for (var v of map.values()) {
	console.log(v);
}

for (var k of map.keys()) {
	console.log(k);
}

As for sets, they are similar to arrays, but the values are kept in insertion order and can’t be duplicated. Here’s a small example of how to create a set:

var set = new Set();

set.add(1);
set.add(2);
set.add(3);

for (var v of set.values()) {
	console.log(v);
}

Iterators

Iterators are objects that let you step through a sequence of values one at a time. The easiest way to create one is with a generator: a special function that, when called, returns an iterator, which also makes it handy for generating content on demand (like Fibonacci numbers or random values). The values() and keys() functions that you saw above return default iterators for Maps. You define a generator just like a function, but with an asterisk between the function keyword and its name:

function *fibonacci() {
	let [ current, next ] = [ 0, 1 ];
	for (;;) { // infinite loop
		[ current, next ] = [ next, current + next ];
		yield current;
	}
}

With this approach, execution is suspended at each yield, and you pull out successive values by calling next() on the returned generator object or by iterating over it:

var fib = fibonacci();

console.log(fib.next().value); // 1
console.log(fib.next().value); // 1

for (var v of fib) {
	if (v > 100) break;
	console.log(v); // 2, 3, 5, 8 ...
}

Weak Maps

Memory leaks have always been a problem, especially when it comes to web applications. One common source was collections whose keys were not strings or numbers (for example, collections keyed by objects). Such a collection was usually implemented with two arrays – one for keys and one for values. This is bad, because the array of keys keeps the key objects from being garbage-collected, and that is where the memory leaks came from.

Weak Maps hold their references to key objects weakly, meaning they won’t block the garbage collector once all other references to a key object are gone. Check out the example below:

var key1 = window, key2 = document;

var map = new WeakMap(), map2 = new WeakMap();

map.set(key1, "some value");
map.set(key2, key1);
map.set(map2, map); // you can even do a circular reference
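
To make the benefit concrete, here’s a minimal sketch (the element selector is just an illustration): once the last normal reference to a key object is gone, its entry becomes eligible for garbage collection.

var el = document.querySelector('#some-widget'); // illustrative element
var metadata = new WeakMap();

metadata.set(el, { clicks: 0 });

// Later, after the element is removed from the DOM and this reference is dropped...
el = null;
// ...the { clicks: 0 } entry can be garbage-collected along with the element.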

Proxies

Data binding is a useful feature of many DOM-manipulation libraries. Now, with the use of Proxies, its syntax and performance can be improved. Also, Proxies can be used to intercept changes on objects and log them, if you need to. The biggest change here is that you don’t need to use any get() and set() functions on the spied object, so you can remove the logging without any changes in the later code. Here’s an example of a logging proxy:

var logger = function(obj, info) {
	return new Proxy(obj, {
		get: function get(target, prop) {
			info('Getting ' + prop);
			return target[prop];
		},
		set: function set(target, prop, value) {
			info('Setting ' + prop + ' to ' + value + '; was ' + target[prop]);
			target[prop] = value;
			return true;
		}
	});
};

var object = { a: 'some value', b: 22 };
// this can be removed later when you don't need to log anything anymore:
object = logger(object, function info(msg) {
	console.log(msg);  
});

document.write(object.a);
object.a = 'new value';
document.write(object.a);

Template Strings

Many of us have really missed template strings in JavaScript. They not only allow us to easily insert variables into a string, but (together with tags, described below) also help secure the output from various attacks. Template strings are defined using backticks `. Inside, you can use a dollar sign with curly braces to insert executable expressions or variables:

var name = "John", surname = "Doe";
var template1 = `Hello! My name is ${name} ${surname}!`;

console.log(template1); // Hello! My name is John Doe!
var a = 1, b = 2, c = 3;
var template2 = `${a} + ${b} - ${c} / 2 = ${ a + b - c / 2 }`;

console.log(template2); // 1 + 2 - 3 / 2 = 1.5

Tag

You can also define a tag before the template string. A tag is a function that receives the separate parts of the template string, so you can do something with user-generated content before the final string is assembled. A set of default tags has also been proposed for ES6; the most useful one, I think, will be the safehtml tag. In brief, it would detect unsafe content, like XSS attacks, and change it to innocuous values. Example tag usage:

var template = safehtml`<a href="${someUserdefinedVariable}">Click me!</a>`;
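
safehtml itself is only a proposed tag, but writing your own is straightforward: the tag function receives the literal string parts and the interpolated values separately. Here is a small illustrative sketch (the shout tag is made up for this example):

// Illustrative tag that upper-cases every interpolated value
function shout(strings, ...values) {
	return strings.reduce(function (result, part, i) {
		var value = i > 0 ? String(values[i - 1]).toUpperCase() : '';
		return result + value + part;
	}, '');
}

var name = "John";
console.log(shout`Hello, ${name}!`); // Hello, JOHN!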

Conclusion

If you didn’t know about these features coming with the new ES6 standard, now you probably want them to arrive as quickly as possible! At least, that was my reaction when I discovered them one by one. The examples provided here are only a handful of the possible improvements. If you want to know more about ES6, here are some additional resources:

The new standard is most widely supported in new Firefox versions, so if you want to play around with some of these features, give Firefox a try.

July 10 2013

22:35

Responsive Screenshots With Casper

Capturing screenshots is annoying, but most of the time it has to be done, and usually it’s you – The Developer – who has to do it. Taking a few screenshots isn’t too bad, but let’s say you are now working on a website using Responsive Web Design and you have to take five times as many screenshots at various viewports. That quick, two-second annoyance has now started eating into your lunch time.


Intro

Today I am going to run through writing a quick and simple script to take some screenshots of any site at various viewports and save the images to disk. This is something I first saw at mediaqueri.es and have since worked into my build process. It’s not ideal for real testing, as it acts more like a sanity check, but it provides a good overview for anything I might be working on involving responsive web design.

When a build is run, a small script using PhantomJS and CasperJS can go and grab screenshots at the various viewports I’ve defined, giving me a quick overview of any desired page. It’s particularly good when you are working within a larger team and you might not have the time before each build to go and check every single page and module for how it looks at various resolutions. It will also give you something to show the client at regular intervals, to display how their site flexes at various viewports.

Note: PhantomJS is a headless WebKit browser, so anything rendered uses WebKit’s rendering engine – it’s not an actual representation of how the site would render on various devices that might run different browsers. And whilst you are able to change the User Agent string sent to the site on load, this doesn’t change the rendering engine.

PhantomJS has a great JavaScript API built with testing in mind. For a great introduction to PhantomJS and using it for testing, there is a tutorial right here on Nettuts+, and be sure to check the official site and documentation.

CasperJS is a toolkit that sits on-top of PhantomJS and eases the process of writing Phantom scripts by providing functions, methods, and syntactic sugar.


Installation

You might already have Phantom installed, especially if you are already testing your client-side code. If not, installation is pretty straightforward and you can find more detailed instructions on the official site.

For Windows users, there’s an executable to download and run.

For Mac users, there’s both the binary or you can install using Homebrew:

brew update && brew install phantomjs

For Linux users, there’s a 64-bit binary or you have an option to compile PhantomJS from source.

Once installed, you can open a terminal and check that everything is OK by running:

phantomjs --version

which should return:

1.8.0

Once Phantom is installed you can go ahead and install CasperJS too. For Mac users, you can again use Homebrew:

brew install casperjs

For Windows users, you need to append ";C:\casperjs\batchbin" to your PATH environment variable (modify this path depending on where you want to store CasperJS). Within the batchbin directory there is a batch file called casperjs.bat; this is the script that runs your Casper scripts without needing Ruby or Python installed. Whenever you need to run a Casper script, just use casperjs.bat scriptname.js rather than casperjs scriptname.js.

Then check that casperjs --version returns 1.0.0.

Both of these version numbers are up-to-date as of the time of writing this article.


Hello PhantomJS

Now that we have both of these running, let’s do a couple of quick Hello Worlds to make sure that both Phantom and Casper are running as expected.

Make a new directory and inside of it, make two JavaScript files, hellophantom.js and hellocasper.js. Open these up in the editor of your choice and lets start with making sure Phantom is actually running correctly.

We are going to start in the hellophantom.js file and write a quick test to grab the title of a webpage. I’m not going to cover the PhantomJS API in-detail, this will just give you a quick introduction and test our installation. If you have PhantomJS running already, you can skip this part.

First we need to set up a couple of variables: one that instantiates the webpage module and another to hold the URL.

var page = require('webpage').create(),
    url = "http://net.tutsplus.com";

Next we can create the call that navigates to the webpage: we pass in the URL as an argument, along with a callback function. We receive a status (success or fail) in our callback from the open method.

page.open(url, function(status) {
});

Now we can call the evaluate function to get the title of the page. We can capture the result in a variable by assigning the evaluate call to it:

page.open(url, function(status) {
    var title = page.evaluate(function () {
        return document.title;
    });
});

Lastly, we are just going to log this out so we can see the result in the terminal and then exit out of the Phantom process.

console.log('Hello, World! The Page title on '+ url +' is ' + title);
phantom.exit();

Our finished script will look something like this.

var page = require('webpage').create(),
    url = "http://net.tutsplus.com";

page.open(url, function (status) {
    var title = page.evaluate(function () {
        return document.title;
    });
    console.log('Hello, World! The Page title on '+ url +' is ' + title);
    phantom.exit();
});

cd into the directory where this script is located and you can run it using the following command:

phantomjs hellophantom.js

After a few seconds, you will get the following result in your terminal:

Hello, World! The Page title on http://net.tutsplus.com is Web development tutorials, from beginner to advanced | Nettuts+

That’s great, but before we move on, we can make this code a little more flexible with a quick refactor. There are a few modules available for us to use, and one of them is the system module. Its properties give you access to things such as the process ID that PhantomJS is running under or the OS being used, but the one we are interested in is the args property.

The args property returns an array of the command line arguments. The first item in the array is always the script name, but we can pass any number of additional arguments from the command line and use them in our script. So we can pass the URL that we want Phantom to open on the command line, letting us re-use the script anytime without having to edit it.

We just need to first require the system module and then change the url variable to be the argument we pass through:

var page = require('webpage').create(),
    system = require('system'),
    url = system.args[1];

and now we can run the script with the following command:

phantomjs hellophantom.js http://net.tutsplus.com

Hello Casper

Now that we know that Phantom is working, we can move on to testing out Casper. We will replicate the same test script, only this time we will use the Casper API.

First we need to instantiate a casper instance:

var casper = require("casper").create();

and then grab the URL from one of the arguments passed from the terminal. Casper has its own command-line parser that sits on top of the one that comes with Phantom, and we can access any arguments passed through from the command line just as we did previously. The only difference is that our first argument will be the first one that we pass through, not the script name (as it was with Phantom).

var url = casper.cli.args[0];

The Casper CLI API can also take named options as well as positional arguments. We can use this if we want to set up some options or be more explicit with our script, for example:

casperjs hellocasper.js argumentOne argumentTwo --option1=this --option2=that

and we can get these named options using cli.get('optionName'), so we could do something like the following, to pass in both arguments and options (if we had some config options that needed to be set):

var argumentOne = casper.cli.args[0];
var argumentTwo = casper.cli.args[1];
var thisOption = casper.cli.get('option');
var thatOption = casper.cli.get('option2');

For now, I am just going to use the positional argument to get the URL. Next we are going to run the start() method in order to do any sort of navigation. The start method takes a string URL and a callback function.

casper.start(url, function() {
    this.echo('Hello, World! The Page title on '+ url +' is ');
});

If you don’t want to have all of your functionality inside the start() callback, you can use the then() method. Each then() method call gets added as a step in the stack and is executed in a linear fashion, so rather than the above, you could have:

casper.start(url);
casper.then(function(){
    this.echo('Hello, World! The Page title on '+ url +' is ');
});

I prefer to use then(), as I find it easier to read, but either is acceptable and really it’s all just a matter of taste.

In order to get the title of the page there is already a getTitle() method available to us, so we can just use that in our echo.

casper.start(url);
casper.then(function(){
    this.echo('Hello, World! The Page title on '+ url +' is ' + this.getTitle());
});

Finally, we run our steps with the run() method, which is mandatory for any Casper script to execute. It can also take an optional onComplete callback to run once all steps are complete. If you use the callback, you need to make sure you exit the Casper process with the exit() method. Here’s an example of both:

//this doesn't need to use the exit method.
casper.run();

//OR

//this needs the exit method
casper.run(function(){
    this.echo('Everything in the stack has ended');
    this.exit();
})

Alternatively, you could just chain the exit method after the echo:

casper.run(function(){
    this.echo('Everything in the stack has ended').exit();  
})

Again, just a matter of taste.

Now our complete HelloCasper.js script should look like this:

var casper = require("casper").create();
var url = casper.cli.args[0];

casper.start(url, function(){
    this.echo('Hello, World! The Page title on '+ url +' is ' + this.getTitle());
});

casper.run(function() {
    this.echo('Everything in the stack has ended.').exit();
});

We can now run the Casper script with the following command:

casperjs hellocasper.js http://net.tutsplus.com

It doesn’t do anything different from what we were already doing with Phantom; Casper just gives us a nice API (with some added extras) that sits on top of Phantom and makes the code we write a little more expressive and readable. This is particularly helpful when you get into writing scripts that have to navigate a site.

Let’s now dive into saving some snapshots of our screen.


Snapshot Basics

I’m going to start off with a file called casperscreens.js and instantiate Casper. Then I’ll set up an array containing the viewport sizes we want to capture screenshots at. Each item in the array is itself an array holding the width and height we want to set.

var viewportSizes = [
    [320,480],
    [320,568],
    [600,1024],
    [1024,768],
    [1280,800],
    [1440,900]
];

I’m also going to set a var for the URL taken from the command line, and then run a regex on it to create a directory name to save the screenshots in – stripping the protocol and replacing any non-alphanumeric characters with hyphens. Then we are going to run casper.start().

var url = casper.cli.args[0],
    saveDir = url.replace(/[^a-zA-Z0-9]/gi, '-').replace(/^https?-+/, '');

casper.start();

Now we are going to use a loop and for each viewport size, grab a screenshot of the specified URL. We are going to set the viewport to the sizes defined in the array item that we are on – open the URL – wait 5000 milliseconds to ensure the page has loaded – and then capture two types of screenshots.

The first one is for the actual height and width defined; for this we use the capture() method, which takes two arguments – a string for the output file and an object for setting what part of the page to clip. The second is a complete page screenshot at just the defined width, and for this we use the captureSelector() method, which captures the area within the defined selector – in our case we’re just using body. This method takes two arguments: the first being the filename and the second being the selector.

Whilst the actual defined screenshot is useful, I’ve found that it’s helpful to also have a full page chromeless screenshot, so that you can see how the whole page flows.

casper.each(viewportSizes, function(self, viewportSize, i) {

    // set two vars for the viewport height and width as we loop through each item in the viewport array
    var width = viewportSize[0],
        height = viewportSize[1];

    //give some time for the page to load
    casper.wait(5000, function() {

        //set the viewport to the desired height and width
        this.viewport(width, height);

        casper.thenOpen(url, function() {
            this.echo('Opening at ' + width);

            //Set up two vars, one for the fullpage save, one for the actual viewport save
            var FPfilename = saveDir + '/fullpage-' + width + ".png";
            var ACfilename = saveDir + '/' + width + '-' + height + ".png";

            //Capture selector captures the whole body
            this.captureSelector(FPfilename, 'body');

            //capture snaps a defined selection of the page
            this.capture(ACfilename,{top: 0,left: 0,width: width, height: height});
            this.echo('snapshot taken');
        });
    });
});

Finally we call the run() method and in the callback function I am just going to echo out that the capturing has finished.

casper.run(function() {
    this.echo('Finished captures for ' + url).exit();
});

The full script should now look like this:

var casper = require("casper").create(),
    viewportSizes = [
    [320,480],
    [320,568],
    [600,1024],
    [1024,768],
    [1280,800],
    [1440,900]
],
    url = casper.cli.args[0],
    saveDir = url.replace(/[^a-zA-Z0-9]/gi, '-').replace(/^https?-+/, '');

casper.start();

casper.each(viewportSizes, function(self, viewportSize, i) {

    // set two vars for the viewport height and width as we loop through each item in the viewport array
    var width = viewportSize[0],
        height = viewportSize[1];

    //give some time for the page to load
    casper.wait(5000, function() {

        //set the viewport to the desired height and width
        this.viewport(width, height);

        casper.thenOpen(url, function() {
            this.echo('Opening at ' + width);

            //Set up two vars, one for the fullpage save, one for the actual viewport save
            var FPfilename = saveDir + '/fullpage-' + width + ".png";
            var ACfilename = saveDir + '/' + width + '-' + height + ".png";

            //Capture selector captures the whole body
            this.captureSelector(FPfilename, 'body');

            //capture snaps a defined selection of the page
            this.capture(ACfilename,{top: 0,left: 0,width: width, height: height});
            this.echo('snapshot taken');
        });
    });
});

casper.run(function() {
    this.echo('Finished captures for ' + url).exit();
});

And now we can run this script using the following command:

casperjs casperscreens.js http://todomvc.com

I’ve chosen to capture some screens from todomvc.com simply because it’s a responsive site that can display the sort of results we are looking for.

Now, if you navigate to the directory where the script was run from, you will see a new directory has been created and inside are all of your PNGs.

final

Wrap Up

So we’ve managed to write a fairly small bit of JavaScript that will save a lot of hassle the next time the boss or the client wants a bunch of screenshots, while also giving us an additional script to add to our testing toolbox. Sure, this only shows us WebKit’s rendering, but for many, that’s good enough.

Now try integrating this into your build process, run it alongside your other tests, and use the screen capture functionality to not only test the responsiveness of your site, but also see how a user’s journey might look on different-sized screens. Also, check out the Grunt plugin grunt-casper if Grunt is part of your build process.

If you are a fan of CoffeeScript, you can even try re-writing this script in the CoffeeScript syntax; just ensure that your file ends with the .coffee extension:

casperjs casperscreens.coffee http://example.com

And you don’t even have to worry about pre-compiling your CoffeeScript Casper scripts; Casper runs them directly.

There’s so much more to both CasperJS and PhantomJS, so check out their respective sites and see how they can help with your testing.

July 09 2013

16:42

Required JavaScript Reading

I’m asked regularly about books and online resources to get up to speed on JavaScript. Seems like everyone wants to get in on the JS action and with good reason; it’s a great language to know if you’re interested in exciting career opportunities. It’s also pretty fun to play with.

With JavaScript’s increased popularity, the number of books, blogs, and tutorials about the language has grown exponentially. While I can’t possibly know all of them, I’ve found some gems which I really like and would like to share with you.


Books

Note that I’m only listing books that I’ve read and can recommend. There are a ton of books out there that you may find equally great and that’s cool.


Professional JavaScript for Web Developers by Nicholas Zakas

This continues to be my most recommended book. It is the one that I tell any JavaScript developer (aspiring or expert) to get and keep on their shelves. Nicholas Zakas has written a fantastic book that does an excellent job of covering all major facets of the language. Want to know the basics? It’s in there. Want to get into OOP in JS? It’s in there. Want to know about the Document Object Model? Well you get the gist.

It’s a complete reference written in a very accessible way that guides you from the basics to advanced JavaScript features. Hence why I recommend it to developers of all levels.


JavaScript Patterns by Stoyan Stefanov

Understanding the most common and useful JavaScript patterns and how they work is critical to properly managing and organizing your code. The pattern you choose in many cases becomes the basis for the foundation of your application and plays an important role in deciding how you’ll architect your site.

Stoyan is one of the best JavaScript developers in the world, having worked on large-scale sites at Yahoo! and Facebook and was part of the team that managed the YUI framework. He knows his stuff and he’s distilled his understanding of the most common patterns into an easy-to-read book.


Object-Oriented JavaScript: Create scalable, reusable high-quality JavaScript applications and libraries by Stoyan Stefanov

Oh JavaScript OOP, how you taunt me with your prototypes. Since JS isn’t a traditional class-based object-oriented language, it can be a bit wonky to get your head around things like prototypal inheritance or mixins. Stoyan steps up again and does a great job of breaking the topic down. It starts off slow with some language basics to get your feet wet but quickly dives into how OOP works and how you should be taking advantage of the language’s features to build reusable objects.


High Performance JavaScript by Nicholas Zakas

We all want our code to run fast so we tweak and refactor constantly. Learning by the school of hard knocks is cool but I prefer to have a resource based on someone’s previous experiences to build from. This book aims to teach you about those tweaks that can help optimize the performance of your code. Whether it’s improving loop performance or working with timers to improve UI responsiveness, this book covers it.


Effective JavaScript by David Herman

When I first read about this book, I wasn’t 100% sure I wanted to get it. I figured it was a rehash of other books I’ve read but when I saw that Alex Russell of Dojo and Chrome Frame fame recommended it, I scooped it up. I’m really glad I did.

Dave is Mozilla’s TC39 rep and an expert in the language both at an academic and pragmatic level. The book is great because it’s not so much about the JS language as it is about using the language in the best way possible. It works to clarify how specific parts of the language work (e.g.: coercion, currying or automatic semicolon insertion) so that it doesn’t bite you later on. This is definitely a book for those wanting to take their level of JS understanding to the next level.


Secrets of the JavaScript Ninja by John Resig and Bear Bibeault

No matter what you think of jQuery, John Resig deserves mad respect and kudos for building the most popular JavaScript library ever and highlighting the power and flexibility of JavaScript. This book took 4 years to finish but it was well worth the wait.


Freebies

I realize that many of us are on a tight budget and free is always a great choice. Luckily I found a site that can point you to free resources. The site, JSbooks, was created by Revolunet and offers links to several quality online references.

And a whole lot more. And no, these aren’t pilfered ebooks you find on scam sites. The icons simply point to established and publicly available resources acting as a directory instead of a host.


Blogs

There are so many people sharing great content that it’s incredibly hard to filter out which ones to keep up with. As my experience has grown, I’ve narrowed down the blogs I read, and I think these have really worked out well:


DailyJS

The name says it best. It’s a combination of news and tutorials focusing both on front-end and server-side JavaScript code. It has a regular roundup of jQuery plugins as well.


Nicholas Zakas

I think by now it’s obvious that I’m a fan of Nicholas. His blog is as great a resource as his books and tackles advanced JavaScript concepts that should be on every developer's mind.


David Walsh

I remember when he first hit the scene; a brash, in your face type of dude learning the ropes and building a name. Since then David has worked hard to become a great developer contributing to the MooTools and Dojo frameworks and now working at Mozilla. He produces great articles that are immediately usable.


Badass JavaScript

Another great news and tutorial site. Reminds me a lot of how Ajaxian used to be, in that it allows me to discover very cool libraries and new techniques to use.


JavaScript Weekly

This is the ONLY email newsletter that I keep in my inbox because every weekly issue is chock full of awesomeness. Seriously, I don’t know how Peter Cooper does it but he manages to find so many great articles to read. Just be careful because it’s a total productivity killer.


Addy Osmani

Addy is one of the most talented developers out there and in his blog he demonstrates his breadth of knowledge by covering everything from jQuery promises to AMD. His post on writing modular JavaScript is epic.


Peter van der Zee

He calls me old and says I use a cane but I still respect Peter’s incredible knowledge of JavaScript. He’s part of the younger generation that will be the future of web development. Get to know this guy.


Juriy Zaytsev AKA kangax

When you’re ready to learn from the big boys, start by subscribing to Juriy’s blog. He eats, drinks, and sleeps ECMAScript.


2ality

I’ve never met or spoken with Dr. Axel Rauschmayer but when I found his site I had a “holy crap this dude is sharp” moment. When you read his posts, you’ll see why. They’re DEEP, like into the bowels of ECMAScript deep. Really awesome stuff.


Nettuts

Yeah, yeah I’m biased but it seriously does have great stuff. You wouldn’t be reading this if you didn’t agree. :)


Taking a Class

When a book or blog won’t do and you want to get your hands dirty, you now have a ton of courses, both offline and online, to get you up-to-speed:

  • Nettuts Premium JavaScript Course (Online)

    Not only will you find courses on JavaScript fundamentals but on frameworks like Backbone and topics such as OOP.

  • The JavaScript Master Class (Online)

    I took this online course by Thomas Fuchs and Amy Hoy. So worth it, especially because the instructor is accessible for Q&A.

  • appendTo’s Free JavaScript Training (Online)

    A free set of eight video lessons to help you ramp up on JavaScript.

  • Bocoup Training (Online and Offline)

    Bocoup has a great training program. They offer some online videos but the real value comes from attending their numerous training classes in person.


Getting Help

One of the hardest parts of learning any new programming language is getting support. You have questions and hope to be able to get answers.

  • JSMentors

    I started JSMentors with Asen Bozhilov with the intention of providing a safe and helpful place for developers to come and ask questions about JavaScript. It came out of my experiences in other less desirable, not to be mentioned parts of the Internet which weren’t exactly beginner friendly. We enlisted the help of a number of notable experts to help answer questions and the group is doing great.

  • StackOverflow

    I personally think this is such a great resource. The site helps developers resolve their issues daily and it has sections for JavaScript and related libraries like jQuery.


Conclusion

Hopefully I didn’t overwhelm you with all of these listings. It’s really great that we have so many options to learn JavaScript and I’ve only scratched the surface. Remember, pick and choose what’s best for you and take your time. JavaScript is a great language to use, and it does take a little time to get all of the nuances. Thankfully, you have many methods for learning it and options to help you get your questions answered.

July 05 2013

20:28

Developing Google Chrome Extensions

It’s no secret that my favorite browser is Google Chrome. I like it because it’s fast, reliable, it doesn’t crash (very often), and it looks good. There’s also something else which I find even more valuable: the fact that you can build an extension for it using only HTML, CSS, and JavaScript. I always support products that are open to the community, and Chrome happens to be one of them. If you need something and it is not yet implemented, you are free to develop it yourself.

So at the end of this article you will find a working Chrome extension which uses most of the techniques explained below. You can download the final example using the download source code button at the top of this page.


Why You Should Write Your Own Extension

I always encourage people to use better tools to speed up their workflow. The software which we use should help us; we should not have to fight with it. Developing extensions/plugins for your favorite editor or browser helps not only you, but also other programmers, who sooner or later will be in the same situation. If something is missing, you can build it yourself, and with Chrome this is really easy. Modeling your environment around your needs is key to being highly productive.


Developing & Testing Out Your Extensions

Thankfully there is a way to test your extension without having to upload it to Chrome’s web store. In your browser’s address bar, just type in:

chrome://extensions

Make sure that you check Developer mode and click the Load unpacked extension… button. Then simply select the folder from your hard disk which contains the extension’s files.

extensionspanel

Architecture

Here’s a diagram of the architecture for a Chrome extension:

architecture

And now let’s take a closer look at each element within the architecture.

Manifest

The entry point of your extension is the manifest.json file. It should contain a valid JSON object. For example:

{
    "name": "BrowserActionExtension",
    "version": "0.0.1",
    "manifest_version": 2,
    "browser_action": {
        "default_title": "That's the tool tip",
        "default_popup": "popup.html"
    }
}

The required properties are name, version, and manifest_version. The version can be anywhere from one to four dot-separated integers. It’s used by Google’s autoupdate system; that’s how it knows when to update your extension. The value of manifest_version should be the integer 2.

The manifest could contain other properties depending on what kind of extension you need, but I’ll describe only those which I find to be more interesting.

Background Pages

Every extension has an invisible background page which is run by the browser. There are two types – persistent background pages and event pages. The first is active all of the time; the second is active only when it is needed. Google encourages developers to use event pages, because this saves memory and improves the overall performance of the browser. However, it’s good to know that this is also where you should put your main logic and initialization. Normally the background page/script plays the role of a bridge between the other parts of the extension.

Here is how you should describe it in the manifest:

"background": {
    "scripts": ["background.js"],
    "persistent": false/true
}

As you may have guessed, if the persistent property is false then you are using event pages. Otherwise, you are working with a persistent background page.
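
To make the event page idea a bit more concrete, here is a minimal sketch of what a background.js set up as an event page might look like. The specific events are only examples; with event pages you should register your listeners at the top level of the script so that Chrome can wake the page up when one of them fires:

// background.js: a sketch for an event page ("persistent": false)
// Listeners are registered at the top level so Chrome can unload the
// page when idle and still wake it up when one of these events fires.
chrome.runtime.onInstalled.addListener(function() {
    console.log("Extension installed or updated");
});

chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    // react to messages from other parts of the extension here
});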

Content Script

If you need access to the current page’s DOM, then you have to use a content script. The code is run within the context of the current web page, which means that it will be executed with every refresh. To add such a script, use the following syntax.

"content_scripts": [
    {
        "matches": ["http://*/*", "https://*/*"],
        "js": ["content.js"]
    }
]

Keep in mind that the value of matches determines which pages your content script will be injected into. Read more about match patterns here.

User Interface

There are several ways to build the UI of your extension. Here are the four most popular.

Browser Action

Most developers use the browser_action property to build their plugins. Once you set it, an icon representing your extension will be placed on the right side of the address bar. Users can then click the icon and open a pop-up which is actually HTML content controlled by you.

browseraction

The manifest file should contain the following data:

"browser_action": {
    "default_icon": {
        "19": "icons/19x19.png",
        "38": "icons/38x38.png"
    },
    "default_title": "That's the tool tip",
    "default_popup": "popup.html"
}

The default_title is a little tool tip which is shown when the user mouses over your icon. default_popup is actually the HTML file which is loaded inside the pop-up. There is also a badge which you can place over your icon. You can do that inside of your background script. For example:

chrome.browserAction.setBadgeText({text: "yeah"});

This was the code which I used to produce the image above.
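
If you also want to change the badge’s background color, chrome.browserAction provides a companion method for that. A small sketch (the color value is just an example):

// inside the background script
chrome.browserAction.setBadgeBackgroundColor({color: "#FF0000"});
chrome.browserAction.setBadgeText({text: "yeah"});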

Page Action

The page_action property is similar to the browser action, but the icon is shown inside the address bar:

pageaction

The interesting thing here is that your icon is hidden initially, so you should decide when to show it. For example, in the image above, the RSS icon will be shown only if the current page contains a link to the RSS feed. If you need to see your icon all the time, it is good to use browser_action directly.

To add the page action, type the following code inside your manifest:

"page_action": {
    "default_icon": {
        "19": "images/icon19.png",
        "38": "images/icon38.png"
    },
    "default_title": "Google Mail",
    "default_popup": "popup.html"
}

Unlike the browser action’s icon, the page action’s icon doesn’t have badges.
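
Because the page action’s icon starts out hidden, your background script has to show it explicitly for the tabs where it makes sense. Here is a minimal sketch which simply shows the icon on every tab once it finishes loading; in a real extension you would add your own condition:

chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) {
    if (changeInfo.status === "complete") {
        // show the page action icon for this particular tab
        chrome.pageAction.show(tabId);
    }
});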

DeveloperTools

I use DeveloperTools a lot and it’s nice that Chrome offers a method for adding new tabs to these tools. The first thing you should do is add an HTML page which will be loaded when the panel is opened:

"devtools_page": "devtools.html"

There’s no need to put any HTML inside the page, except for linking in a JavaScript file, which will create the tab:

<script src="devtools.js"></script>

And then include the following code inside the devtools.js file:

chrome.devtools.panels.create(
    "TheNameOfYourExtension", 
    "img/icon16.png", 
    "index.html",
    function() {

    }
);

Now the above code will add a new tab with a name of TheNameOfYourExtension and once you click on it the browser will load index.html inside the DeveloperTools.

Omnibox

The omnibox is Chrome’s address bar, and your extension can register a keyword there so that it can react to what the user types. For example, if you add the following property into your manifest:

"omnibox": { "keyword" : "yeah" }

And then add the code below, inside your background script:

chrome.omnibox.onInputChanged.addListener(function(text, suggest) {
    suggest([
      {content: text + " one", description: "the first one"},
      {content: text + " number two", description: "the second entry"}
    ]);
});
chrome.omnibox.onInputEntered.addListener(function(text) {
    alert('You just typed "' + text + '"');
});

You should be able to type yeah inside the address bar. Then you should see something like this:

omnibox

Pressing tab will produce the following screen:

omnibox2

Of course using the chrome.omnibox API, you could catch the user’s input and react to that input.

APIs

There are a bunch of different things which you can do in your extension. For example, you can access the user’s bookmarks or history. You can move or create tabs, or even resize the main window. I strongly recommend checking out the documentation to get a better idea of how to accomplish these tasks.

What you should know is that not all APIs are available in every part of your extension. For example, your content script can’t access chrome.devtools.panels or the script in your DeveloperTools tab can’t read the page’s DOM. So, if you’re wondering why something is not working, this could be why.

Messaging

As I mentioned above, you don’t always have access to the API that you want to use. If that’s the case, then you should use message passing. There are two types of messaging – one-time requests and long-lived connections.

One-Time Requests

This type of communication happens only once: you send a message and wait for an answer. For example, you could place the following code in your background script:

chrome.extension.onMessage.addListener(function(request, sender, sendResponse) {
    switch(request.type) {
        case "dom-loaded":
            alert(request.data.myProperty);
        break;
    }
    return true;
});

Then use the code from below in your content script:

window.addEventListener("load", function() {
    chrome.extension.sendMessage({
        type: "dom-loaded", 
        data: {
            myProperty: "value"
        }
    });
}, true);

And this is how you can get information about the current page’s DOM and use it inside your background script, which normally doesn’t have access to this data.
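
The third argument, sendResponse, can also be used to answer the sender directly. A small sketch of that round trip, with made-up message fields, might look like this:

// background script: reply to the sender via sendResponse
chrome.extension.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.type === "dom-loaded") {
        sendResponse({status: "thanks, got it"});
    }
    return true; // keep the channel open for an asynchronous response
});

// content script: read the reply in the callback of sendMessage
chrome.extension.sendMessage({type: "dom-loaded", data: {}}, function(response) {
    console.log(response.status);
});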

Long-Lived Connections

Use this type of messaging if you need a persistent communication channel. Inside your content script place the following code:

var port = chrome.runtime.connect({name: "my-channel"});
port.postMessage({myProperty: "value"});
port.onMessage.addListener(function(msg) {
    // do some stuff here
});

And then in the background script, use this:

chrome.runtime.onConnect.addListener(function(port) {
    if(port.name == "my-channel"){
        port.onMessage.addListener(function(msg) {
            // do some stuff here
        });
    }
});

Override Pages

Overriding pages is a nice way to customize your browser. You’re also able to substitute some of the default pages in Chrome. For example you can create your own history page. To do that, add in the following code snippet:

"chrome_url_overrides" : {
    "<page to override>;": "custom.html"
}

The possible values of <page to override> are bookmarks, history, and newtab. It’s kinda cool to have a fresh new tab page.


An Example Extension

To wrap up this article I decided to include a simple example, so you can get a better understanding of the whole picture. This example extension uses most of the things that I described above to simply set a #F00 background color for all of the divs in the current page. Feel free to download the source code using the button at the top of this article.

The Manifest File

Of course I started with the manifest file:

{
    "name": "BrowserExtension",
    "version": "0.0.1",
    "manifest_version": 2,
    "description" : "Description ...",
    "icons": { "16": "icons/16x16.png", "48": "icons/48x48.png", "128": "icons/128x128.png" },
    "omnibox": { "keyword" : "yeah" },
    "browser_action": {
        "default_icon": { "19": "icons/19x19.png", "38": "icons/38x38.png" },
        "default_title": "That's the tool tip",
        "default_popup": "browseraction/popup.html"
    },
    "background": {
        "scripts": ["background.js"],
        "persistent": false
    },
    "chrome_url_overrides" : {
        "newtab": "newtab/newtab.html"
    },
    "content_scripts": [{
        "matches": ["http://*/*", "https://*/*"],
        "js": ["content.js"]
    }],
    "devtools_page": "devtools/devtools.html"
}

Keep in mind that you can organize your files into folders. Also, pay attention to the version property. You should update this property every time you want to upload your extension into the web store.

Background Script

// omnibox
chrome.omnibox.onInputChanged.addListener(function(text, suggest) {
    suggest([
      {content: "color-divs", description: "Make everything red"}
    ]);
});
chrome.omnibox.onInputEntered.addListener(function(text) {
    if(text == "color-divs") colorDivs();
});

// listening for an event / one-time requests
// coming from the popup
chrome.extension.onMessage.addListener(function(request, sender, sendResponse) {
    switch(request.type) {
        case "color-divs":
            colorDivs();
        break;
    }
    return true;
});

// listening for an event / long-lived connections
// coming from devtools
chrome.extension.onConnect.addListener(function (port) {
    port.onMessage.addListener(function (message) {
        switch(port.name) {
            case "color-divs-port":
                colorDivs();
            break;
        }
    });
});

// send a message to the content script
var colorDivs = function() {
    chrome.tabs.getSelected(null, function(tab){
        chrome.tabs.sendMessage(tab.id, {type: "colors-div", color: "#F00"});
        // setting a badge
        chrome.browserAction.setBadgeText({text: "red!"});
    });
}

The first few lines get the user’s action from the omnibox. After that, I set a one-time request listener, which will accept the message from the browser action icon.

The next snippet is a long-lived connection with the devtools tab (it’s not absolutely necessary to use a long-lived connection for this, I did it just for educational purposes). Using these listeners, I’m able to get the input from the user and send it to the content script, which has access to the DOM elements. The key point here was to first select the tab which I wanted to manipulate and then send a message to it. Lastly, I put a badge on the extension’s icon.
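
One side note on the code above: chrome.tabs.getSelected has since been deprecated in favor of chrome.tabs.query, so a more future-proof colorDivs could look roughly like this (a sketch, not the code shipped with the example):

var colorDivs = function() {
    chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
        if(tabs.length > 0) {
            chrome.tabs.sendMessage(tabs[0].id, {type: "colors-div", color: "#F00"});
            chrome.browserAction.setBadgeText({text: "red!"});
        }
    });
};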

Browser Action

We start with our popup.html file:

<!-- popup.html -->
<script type="text/javascript" src="popup.js"></script>
<div style="width:200px">
    <button id="button">Color all the divs</button>
</div>

Then we create the popup.js file:

// popup.js
window.onload = function() {
    document.getElementById("button").onclick = function() {
        chrome.extension.sendMessage({
            type: "color-divs"
        });
    }
}

The pop-up contains a single button and once the user clicks it, it sends a message to the background script.

DeveloperTools

window.onload = function() {
    var port = chrome.extension.connect({ name: "color-divs-port" });
    document.getElementById("button").onclick = function() {
        port.postMessage({ type: "color-divs"});
    }
}

For the DeveloperTools, we’re doing almost the same thing here as we did in the pop-up; the only difference is that I used a long-lived connection.

Content Script

chrome.extension.onMessage.addListener(function(message, sender, sendResponse) {
    switch(message.type) {
        case "colors-div":
            var divs = document.querySelectorAll("div");
            if(divs.length === 0) {
                alert("There are no any divs in the page.");
            } else {
                for(var i = 0; i < divs.length; i++) {
                    divs[i].style.backgroundColor = message.color;
                }
            }
        break;
    }
});

The content script listens for a message, selects all the divs on the current page, and changes their background color. Pay attention to the object which I attached the listener to. In the content script that’s chrome.extension.onMessage.

Customizing the New Tab Page

The last thing that this extension does is customize the new tab page. We can do that easily just by pointing the newtab property to the newtab/newtab.html file:

"chrome_url_overrides" : {
    "newtab": "newtab/newtab.html"
}

Keep in mind that you can’t create a replica of the default new tab page. The idea of this feature is to add completely different functionality. Here is what Google says:

Don’t try to emulate the default New Tab page. The APIs necessary to create a slightly modified version of the default New Tab page — with top pages, recently closed pages, tips, a theme background image, and so on — don’t exist yet. Until they do, you’re better off trying to make something completely different.


Debugging

Writing an extension for Google Chrome is not always an easy task and you’ll likely run into some problems. The good thing is that you can still use the console to output your variables to help with debugging. Feel free to add console.log into your background or content scripts. However, this will not work in scripts which are run in the context of the developer tools; in that case you might consider using the alert method, as it works everywhere.
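
For example, a simple log statement in the background script will show up in the background page’s own console, which you can open from the chrome://extensions page via the Inspect views link (a trivial sketch):

// background.js
console.log("background script loaded");

// content.js
console.log("content script injected into " + location.href);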


Conclusion

In my opinion, Chrome is one of the best browsers available. The developers at Google make creating extensions relatively easy by giving us the power to create them in HTML, CSS, and JavaScript.

Yes, there are some tricky parts, but generally we’re able to produce valuable plugins. Keep in mind that this article doesn’t cover everything related to developing Chrome extensions. There are some other useful things like context menus, options pages, and notifications. For the topics I did not cover, please refer to the documentation for more detailed information.

July 04 2013

00:10

Organic Development


Introduction

I was working as a graphic designer a few years ago and a common problem that I would run into was picking color schemes for new projects. One of my colleagues said, “Just pick a nice photo and grab colors from there”. This technique works well because photos offer you a natural combination of colors. So I was thinking, “Why not transfer this same concept to my work as a coder?”. And this is where Organic comes into play. When I was first introduced to Organic I was amazed at how simple it was and, at the same time, how flexible its approach is. Finally, I had something which encourages modular programming, is just as useful as the MVC pattern, and is a great tool for architecting.


The Concept

As you may have guessed, the Organic concept is biology based. Your main application acts as a Cell, which has a Membrane and a Nucleus. But the real job of a Cell is done by the Organelles, which communicate between each other with Chemicals. Of course, the elements and the processes in Organic are not 100% identical to real life Cells, but they are pretty close. Now, I know it sounds crazy, but once you start working with it you’ll see how simple and natural this approach can be when applying it to your apps.

organic-concept

Download Organic

Organic is distributed as a Node module. So you should have NodeJS already installed. If you don’t, please go to nodejs.org and grab the latest version for your OS. Your package.json file should look like this:

{
    "name": "OrganicDevelopment",
    "version": "0.0.0",
    "description": "Organic development",
    "dependencies": {
        "organic": "0.0.11"
    },
    "author": "Your Name Here"
}

Run npm install in the same directory and the manager will download the necessary files. The core of Organic is actually pretty small. It contains only the definition of the main elements – Cell, Nucleus, Membrane, Plasma, Organelle, Chemical, and DNA. Of course it comes with a few tests, but it’s a small package overall, which makes it easy to learn and start developing with almost immediately.


The Example

For this article I decided to create a simple web site using only the core of Organic. The source code can be downloaded at the top of this article, if you’d like to follow along. I think that this sample application is the best way to present this new pattern. The site contains two pages – Home and About. Here’s a screenshot of the site:

site

The app contains two buttons linking to the two different pages. The About page has just a little bit more text than the Home page does. Simple enough, but let’s see what’s behind the curtains. Here’s a diagram displaying the basic request flow of our application:

structure

The user sends a request to our NodeJS application. The Server accepts the request and sends it to the Router. The Router then tells the Render which page should be used, and the Render returns an answer to the Server. At the end, the response is sent back to the user.

There is one additional element, Data Providers, which prepare the needed CSS or JavaScript for the Render (keep in mind that in our example app I didn’t use JavaScript; there is only a CSS module).

Here’s what our app would look like as a Cell, in Organic:

structureorganic

In the Cell, we have a membrane which keeps the internal elements away from the outside world. This membrane is where we’ll put our first organel, our Server, because this is where data can either enter or leave our application. The other organelles (Router, Render, and CSS) are placed in the plasma. All of these modules communicate with each other via chemicals (request, page and css, marked in red). The Server emits a request chemical. The Router emits a page and the CSS organel sends the css. I should also mention that the plasma acts as an event bus for the chemicals. Organelles listen for a particular chemical and, if one arrives, they react to it.

Here’s another request flow diagram, but this time with the chemicals that are emitted (marked in red):

structureorganic2

Now if this concept is still unclear to you, don’t worry; as we proceed through the next few sections and get into the actual code, it should make more sense!


DNA

dna

Everything starts with the DNA (Deoxyribonucleic acid), which you can think of as the Cell’s configuration. The DNA is where you will define your organelles and their settings.

Let’s create a new index.js file and put in the following code:

var DNA = require("organic").DNA;
var Cell = require("organic").Cell;

var dna = new DNA({
    membrane: {
        Server: {
            source: "membrane.Server"
        }
    },
    plasma: {
        Router: {
            source: "plasma.Router"
        },
        CSS: {
            source: "plasma.CSS",
            file: "./css/styles.css"
        },
        Render: {
            source: "plasma.Render",
            templates: "./tpl/"
        }       
    }
});

var cell = new Cell(dna);

The above code is just a definition for the DNA and Cell initialization. You can see we’ve placed our Server in the membrane and the Router, CSS, and Render in the plasma, as we discussed in the last section. The source property is actually mandatory and contains the path to your individual organelles.

Keep in mind that the file property in the CSS organel and the templates property in the Render organel are actually custom properties, which I set. You can add whatever customization you need in here as well.

And just for your reference, the directory structure for your app should look like this:

/css
    /styles.css
/membrane
    /Server.js
/node_modules
/plasma
    /CSS.js
    /Render.js
    /Router.js
/tpl

A Basic Organel

var Chemical = require("organic").Chemical;
var Organel = require("organic").Organel;
var util = require("util");

module.exports = function YourOrganelName(plasma, config) {
    Organel.call(this, plasma);
    // your custom logic here
}

util.inherits(module.exports, Organel);

The above code shows the basic format for creating an organel. If you want to use this.emit or this.on you’ll need to make sure to inherit Organel as we did above. And actually, the plasma parameter variable has those exact same methods (emit and on), so you could use plasma directly and skip the inheritance if you wanted.

Also, notice the config parameter; this is the object that you defined in your DNA, which is a good place for any of your custom configuration.
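
For instance, an organel that only listens for chemicals could skip the inheritance entirely and talk to the plasma directly. A minimal sketch (the Logger organel below is made up purely for illustration):

module.exports = function Logger(plasma, config) {
    // no Organel.call / util.inherits here; we use the plasma's own on method
    plasma.on("page", function(chemical) {
        console.log("A page chemical arrived: " + chemical.page);
        // return false so the chemical keeps flowing to the other organelles
        // (this behavior is explained in the Router and CSS sections below)
        return false;
    });
};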


The Server

The Server is your main organel, which accepts requests and sends responses to the browser. Here’s what your Server organel should look like:

var http = require("http");
var util = require("util");
var Chemical = require("organic").Chemical;
var Organel = require("organic").Organel;

var port = 3000;
module.exports = function Server(plasma, config) {
    Organel.call(this, plasma);

    var self = this;
    http.createServer(function(req, res) {
        console.log("request " + req.url);
        self.emit(new Chemical({
            type: "request",
            req: req
        }), function(html) {
            res.writeHead(200);
            res.end(html);
        });
    }).listen(port, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:' + port + '/');

}

util.inherits(module.exports, Organel);

Two things are happening here. The first one is the definition of the NodeJS server, which of course has a handler accepting request (req) and response (res) objects. Once the request is received, the Server organel sends a chemical, with the type request, notifying the rest of the organelles. It also attaches the req object, so whoever needs more information about the incoming request can access data from the chemical directly.

The emit method also takes a second argument, which is a callback function. You can use this to return the flow back to the organel that sent the chemical, i.e. once the Render finishes its job, it calls the Server’s callback. That callback takes the produced HTML and, by using the res object, sends the page to the user.


The Router

For our next organel, the Router just listens for a request chemical, which is sent by the Server. It gets the URL from the req object and decides which page should be shown. Here’s the code for the Router:

var util = require("util");
var Chemical = require("organic").Chemical;
var Organel = require("organic").Organel;

module.exports = function Router(plasma, config) {
    Organel.call(this, plasma);

    var self = this;
    this.on("request", function(chemical, sender, callback) {
        var page = chemical.req.url.substr(1, chemical.req.url.length);
        page = page == "" || page == "/" ? "home" : page;
        self.emit(new Chemical({
            type: "page",
            page: page,
            ready: callback
        }));
    });

}

util.inherits(module.exports, Organel);

Now, the router itself just emits a new chemical with a type of page. Keep in mind that there are two other organels listening for this chemical as well, but by default it’s not transferred to all of the other elements in the plasma. Of course, there may be times when you will need such functionality. To do so, you just need to return false; in the chemical’s listener. We’ll see this in action in the next section.


CSS Styles Provider

var fs = require("fs");
var util = require("util");
var Chemical = require("organic").Chemical;
var Organel = require("organic").Organel;

module.exports = function CSS(plasma, config) {
    Organel.call(this, plasma);

    var cssStyles = fs.readFileSync(config.file).toString();
    var self = this;
    this.on("page", function(chemical) {
        self.emit(new Chemical({
            type: "css",
            value: cssStyles
        }));
        return false;
    });

}

util.inherits(module.exports, Organel);

This module is just a simple one-task organel which gets the path to the .css file, reads it, and later emits a chemical containing the actual CSS styles. Also, pay attention to the return false; statement at the bottom. As I said in the last section, it’s important to do this, otherwise the Render will not receive the page chemical sent by the Router. This happens because the CSS organel is defined before the Render in the DNA.


The Render

And lastly, here’s the code for our Render organel:

var fs = require("fs");
var util = require("util");
var Organel = require("organic").Organel;

module.exports = function Render(plasma, config) {
    Organel.call(this, plasma);

    var getTemplate = function(file, callback) {
        return fs.readFileSync(config.templates + file);
    }
    var formatTemplate = function(html, templateVars) {
        for(var name in templateVars) {
            html = html.replace("{" + name + "}", templateVars[name]);
        }
        return html;
    }
    var templates = {
        layout: getTemplate("layout.html").toString(),
        home: getTemplate("home.html").toString(),
        about: getTemplate("about.html").toString(),
        notFound: getTemplate("notFound.html").toString()
    }
    var vars = {};
    var self = this;

    this.on("css", function(chemical) {
        vars.css = chemical.value;
    });
    this.on("page", function(chemical) {
        console.log("Opening " + chemical.page + " page.");
        var html = templates[chemical.page] ? templates[chemical.page] : templates.notFound;
        html = formatTemplate(templates.layout, {content: html});
        html = formatTemplate(html, vars);
        chemical.ready(html);
    });

}

util.inherits(module.exports, Organel);

There are two helper methods here: getTemplate and formatTemplate which implement a simple template engine for loading an external HTML file and replacing mustache-style variables. All of the templates are stored in an object for quick access. Afterwards we have just a few lines for HTML formatting and then everything is ready to go. The Render organel also listens for the css chemical and lastly the application provides a notFound 404 page, if needed.
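
To see what formatTemplate actually does, here is a quick hypothetical example (the template string and values are invented):

// given a template with {name}-style placeholders...
var snippet = formatTemplate("<h1>{title}</h1><p>{content}</p>", {
    title: "Home",
    content: "Welcome!"
});
// snippet is now "<h1>Home</h1><p>Welcome!</p>"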

So here’s what the final app’s directory structure looks like:

/css
    /styles.css
/membrane
    /Server.js
/node_modules
/plasma
    /CSS.js
    /Render.js
    /Router.js
/tpl
    /about.html
    /home.html
    /layout.html
    /notFound.html

Running the Application

Simply run node index.js in the console and you should see something similar to this:

console1

With your server running, you should now be able to visit http://127.0.0.1:3000 in your favorite browser. Try clicking on the links to switch between the two pages a few times and then go back to your console to view the output.

console2

You should see a nice report about the application’s recent activity. Now you may also notice something else in the console:

request /favicon.ico
Opening favicon.ico page.

You can see that there is one more request coming from the browser. It wants to load favicon.ico. However our little site doesn’t have such an icon, so it just opens the 404 page. You can try this for yourself by visiting: http://127.0.0.1:3000/favicon.ico.

If you’d like to check out the full source code for this tutorial, you can download it using the download link at the top of this page.


Conclusion

In my opinion, Organic is a great concept. It’s very flexible and encourages producing better applications. Keep in mind that the example in this article is based on my personal experience with other design patterns. So my use of terms like Router, Data Provider or Render is completely optional and you can change the names as you see fit. Feel free to experiment by creating new modules based on Organic and let me know what you think in the comments!

The core of Organic is developed by Boris Filipov and Valeri Bogdanov and I strongly recommend that you check them out on Github. If you are interested in using Organic, you will find things like Angel and WebCell really helpful as well.

June 27 2013

21:01

Building a Web App From Scratch in AngularJS

In a previous AngularJS tutorial I covered all the basics of how to get up and running with Angular in around 30 minutes. This tutorial will expand on what was covered there by creating a simple real world web application.

This simple web application will allow its users to view, search and filter TV Show Premieres for the next 30 days. As a keen series viewer, I am always looking for something new to watch when my favorite shows are off air, so I thought I would create an app to help me find what I am looking for.

Before we get started, you may want to take a look at the demo from above, to see what we will be creating in this tutorial.


Getting Started

To begin, we need a skeleton AngularJS application which already has all the required JavaScript and CSS to create the TV Show Premieres app. Go ahead and download this skeleton from the “download source files” button above.

Once you have downloaded the files you should have a directory structure as shown below:

figure1-skeleton-directory-structure

Looking at the directory structure and the included files you will see that we will be using Twitter Bootstrap to make our web app a little prettier, but this tutorial will not look at Twitter Bootstrap in any detail (learn more about Twitter Bootstrap). Additionally, this tutorial will not be covering how to setup a new AngularJS application as the aforementioned AngularJS tutorial already covers this in detail.

Upon opening index.html, with your browser of choice, you should see a very simple web page with just a title and some basic formatting as seen below:

figure2-basic-web-page

Loading In Our Data

The first thing we are going to need to create our TV Show app is information about TV shows. We are going to use an API provided by Trakt.tv. Before we can get started you are going to need an API key; you can register for one on their website.

Why use this API? Do I really have to register? We are using this API so our app will use real data and will actually provide some use once completed. Also, by using this API we do not need to go over any server side implementations within this tutorial and can focus completely on AngularJS. An extra couple of minutes to register for the API will be well worth it.

Now that you have your own API key, we can utilize the Trakt API to get some information on TV shows. We are going to use one of the available API calls for this tutorial; more information on this is available in the api docs. This API call will provide us with all the TV Show Premieres within a specified time frame.

Open mainController.js and modify it to match the below code:

    app.controller("mainController", function($scope, $http){

        $scope.apiKey = "[YOUR API KEY HERE]";
        $scope.init = function() {
            //API requires a start date
            var today = new Date();
            //Create the date string and ensure leading zeros if required
            var apiDate = today.getFullYear() + ("0" + (today.getMonth() + 1)).slice(-2) + "" + ("0" + today.getDate()).slice(-2);
            $http.jsonp('http://api.trakt.tv/calendar/premieres.json/' + $scope.apiKey + '/' + apiDate + '/' + 30 + '/?callback=JSON_CALLBACK').success(function(data) {
                console.log(data);
            }).error(function(error) {

            });
        };

    });

If you look within the index.html file, for the following line:

    <div class="container main-frame" ng-app="TVPremieresApp" ng-controller="mainController" ng-init="init()">

You will see that ng-init is calling the init function; this means that the init() function within our mainController will be called after the page has been loaded.

If you read the API documentation for the calendar/premieres method you will have seen that it takes three parameters: your API key, the start date (e.g. 20130616) and the number of days.

To provide all three parameters, we first need to get today’s date using JavaScript’s Date() constructor and format it to the API’s specified date format to create the apiDate string. Now that we have everything we need, we can create an $http.jsonp call to the API method. This will allow our web app to call a URL that is not within our own domain and receive some JSON data. Ensure that ?callback=JSON_CALLBACK is appended to the request URI so that our attached .success callback function is called on response.
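
To make the date formatting concrete, here is a small worked example (the specific date is only an illustration):

    // Suppose today is 16 June 2013:
    var today = new Date(2013, 5, 16); // months are zero-based, so 5 means June
    var apiDate = today.getFullYear() +
        ("0" + (today.getMonth() + 1)).slice(-2) + "" +
        ("0" + today.getDate()).slice(-2);
    // apiDate === "20130616", so the request URL ends with
    // /[YOUR API KEY]/20130616/30/?callback=JSON_CALLBACK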

Within our .success function we then simply output the received data to the console. Open index.html within your browser and open the JavaScript console, you should see something like the following:

figure3-javascript-console

This demonstrates that we are successfully performing a call to the Trakt API, authenticating with our API key and receiving some JSON data. Now that we have our TV show data, we can move on to the next step.


Displaying Our Data

Processing the JSON Objects

Before we can display our data, we need to process and store it. As the API returns the premiere episodes grouped by date, we want to remove this grouping and just create a single array with all the premiere episodes and their associated data. Modify mainController.js to be as follows:

    app.controller("mainController", function($scope, $http){
        $scope.apiKey = "[YOUR API KEY]";
        $scope.results = [];
        $scope.init = function() {
            //API requires a start date
            var today = new Date();
            //Create the date string and ensure leading zeros if required
            var apiDate = today.getFullYear() + ("0" + (today.getMonth() + 1)).slice(-2) + "" + ("0" + today.getDate()).slice(-2);
            $http.jsonp('http://api.trakt.tv/calendar/premieres.json/' + $scope.apiKey + '/' + apiDate + '/' + 30 + '/?callback=JSON_CALLBACK').success(function(data) {
                //As we are getting our data from an external source, we need to format the data so we can use it to our desired effect
                //For each day, get all the episodes
                angular.forEach(data, function(value, index){
                    //The API stores the full date separately from each episode. Save it so we can use it later
                    var date = value.date;
                    //For each episode, add it to the results array
                    angular.forEach(value.episodes, function(tvshow, index){
                        //Create a date string from the timestamp so we can filter on it based on user text input
                        tvshow.date = date; //Attach the full date to each episode
                        $scope.results.push(tvshow);
                    });
                });
            }).error(function(error) {

            });
        };
    });

The above code is well commented and should be easy to follow, so let’s take a look at these changes. First, we declare a scope variable $scope.results as an array which will hold our processed results. We then use angular.forEach (which is similar to jQuery’s $.each method, for those who know it) to loop through each date group and store the date in a local date variable.

We then create another loop which loops through each of the TV shows within that date group, adds the locally stored date to the tvshow object and then finally adds each tvshow object to the $scope.results array. With all of this done, our $scope.results array will look like the following:

figure4-formatted-tvshow-json-objects

Creating the List HTML

We now have some data we wish to display within a list, on our page. We can create some HTML with ng-repeat to dynamically create the list elements based on the data within $scope.results. Add the following HTML code within the unordered list that has the episode-list class in index.html:

    <li ng-repeat="tvshow in results">
        <div class="row-fluid">
            <div class="span3">
                <img src="{{tvshow.episode.images.screen}}" />
                <div class="ratings"><strong>Ratings:</strong> <span class="label"><i class="icon-thumbs-up"></i> {{tvshow.episode.ratings.loved}}</span> <span class="label"><i class="icon-thumbs-down"></i> {{tvshow.episode.ratings.hated}}</span> <span class="label label-important" ng-class="{'label-success': tvshow.episode.ratings.percentage >= 50}"><strong>%</strong> {{tvshow.episode.ratings.percentage}}</div>
            </div>
            <div class="span6">
                <h3>{{tvshow.show.title}}: {{tvshow.episode.title}}</h3>
                <p>{{tvshow.episode.overview}}</p>
            </div>
            <div class="span3">
                <div class="fulldate pull-right label label-info">{{tvshow.date}}</div>
                <ul class="show-info">
                    <li><strong>On Air:</strong> {{tvshow.show.air_day}} {{tvshow.show.air_time}}</li>
                    <li><strong>Network:</strong> {{tvshow.show.network}}</li>
                    <li><strong>Season #:</strong> {{tvshow.episode.season}}</li>
                    <li><strong>Genres:</strong> <span class="label label-inverse genre" ng-repeat="genre in tvshow.show.genres">{{genre}}</span></li>
                </ul>
            </div>
        </div>
    </li>

This HTML is simply creating a single list element with ng-repeat. ng-repeat="tvshow in results" is telling angular to repeat this list element for each object within the $scope.results array. Remember that we do not need to include the $scope, as we are within an element with a specified controller (refer to the previous tutorial for more on this).

Inside the li element we can then reference tvshow as a variable which will hold all of the objects data for each of the TV shows within $scope.results. Below is an example of one of the objects within $scope.results so you can easily see how to reference each slice of data:

    {
    "show":{
        "title":"Agatha Christie's Marple",
        "year":2004,
        "url":"http://trakt.tv/show/agatha-christies-marple",
        "first_aired":1102838400,
        "country":"United Kingdom",
        "overview":"Miss Marple is an elderly spinster who lives in the village of St. Mary Mead and acts as an amateur detective. Due to her long and eventful life crimes often remind her of other incidents. Although Miss Marple looks sweet, frail, and old, she fears nothing; either dead or living.",
        "runtime":120,
        "network":"ITV",
        "air_day":"Monday",
        "air_time":"9:00pm",
        "certification":"TV-14",
        "imdb_id":"tt1734537",
        "tvdb_id":"78895",
        "tvrage_id":"2515",
        "images":{
            "poster":"http://slurm.trakt.us/images/posters/606.jpg",
            "fanart":"http://slurm.trakt.us/images/fanart/606.jpg",
            "banner":"http://slurm.trakt.us/images/banners/606.jpg"
        },
        "ratings":{
            "percentage":91,
            "votes":18,
            "loved":18,
            "hated":0
        },
        "genres":[
            "Drama",
            "Crime",
            "Adventure"
        ]
    },
    "episode":{
        "season":6,
        "number":1,
        "title":"A Caribbean Mystery",
        "overview":""Would you like to see a picture of a murderer?", Jane Marple is asked by Major Palgrave whilst on a luxurious holiday in the West Indies. When she replies that she would like to hear the story, he explains. There once was a man who had a wife who tried to hang herself, but failed. Then she tried again later, and succeeded in killing herself. The man remarried to a woman who then tried to gas herself to death. She failed, but then tried again later and succeeded. Just as Major Palgrave is about to show the picture to her, he looks over her shoulder, appears startled, and changes the subject. The next morning, a servant, Victoria Johnson, finds him dead in his room. Doctor Graham concludes that the man died of heart failure; he showed all the symptoms, and had a bottle of serenite (a drug for high blood pressure) on his table.",
        "url":"http://trakt.tv/show/agatha-christies-marple/season/6/episode/1",
        "first_aired":1371366000,
        "images":{
            "screen":"http://slurm.trakt.us/images/fanart/606-940.jpg"
        },
        "ratings":{
            "percentage":0,
            "votes":0,
            "loved":0,
            "hated":0
        }
    },
    "date":"2013-06-16"
    }

As an example, within the li element, we can get the show title by referencing tvshow.show.title and wrapping it in double curly brackets: {{ }}. With this understanding, it should be easy to see what information will be displayed for each list element. Thanks to the CSS bundled with the skeleton structure, if you save these changes and open index.html within your browser, you should see a nicely formatted list of TV shows with the associated information and images. This is shown in the figure below:

figure5-formatted-show-list

Conditional Classes

You may or may not have noticed:

ng-class="{'label-success': tvshow.episode.ratings.percentage >= 50}"

…which is attached to one of the span elements, within the ratings section, in the above HTML. ng-class allows us to conditionally apply classes to HTML elements. This is particularly useful here, as we can then apply a different style to the percentage span element depending on whether the TV show rating percentage is high or not.

In the above HTML example, we want to apply the class label-success, a Twitter Bootstrap class which styles the span with a green background and white text. We only want to apply this class to the element if the rating percentage is greater than or equal to 50. We can express this as simply as tvshow.episode.ratings.percentage >= 50. Take a look down the list of formatted TV shows in your browser; if any of the percentages meet this condition, they should be displayed in green.


Creating a Search Filter

We now have a list of upcoming TV show premieres, which is great, but it doesn’t offer much in the way of functionality. We are now going to add a simple text search which will filter all of the objects within the results array.

Binding HTML Elements to Scope Variables

Firstly we need to declare a $scope.filterText variable within mainController.js as follows:

    app.controller("mainController", function($scope, $http){
        $scope.apiKey = "[YOUR API KEY]";
        $scope.results = [];
        $scope.filterText = null;
        $scope.init = function() {
            //API requires a start date
            var today = new Date();
            //Create the date string and ensure leading zeros if required
            var apiDate = today.getFullYear() + ("0" + (today.getMonth() + 1)).slice(-2) + "" + ("0" + today.getDate()).slice(-2);
            $http.jsonp('http://api.trakt.tv/calendar/premieres.json/' + $scope.apiKey + '/' + apiDate + '/' + 30 + '/?callback=JSON_CALLBACK').success(function(data) {
                //As we are getting our data from an external source, we need to format the data so we can use it to our desired effect
                //For each day get all the episodes
                angular.forEach(data, function(value, index){
                    //The API stores the full date separately from each episode. Save it so we can use it later
                    var date = value.date;
                    //For each episode, add it to the results array
                    angular.forEach(value.episodes, function(tvshow, index){
                        //Create a date string from the timestamp so we can filter on it based on user text input
                        tvshow.date = date; //Attach the full date to each episode
                        $scope.results.push(tvshow);
                    });
                });
            }).error(function(error) {

            });
        };
    });

Now we need to add a text input so that the user can actually input a search term. We then need to bind this input to the newly declared variable. Add the following HTML within the div which has the search-box class in index.html.

    <label>Filter: </label>
    <input type="text" ng-model="filterText"/>

Here we have used ng-model to bind this input to the $scope.filterText variable we declared within our scope. Now this variable will always equal what is inputted into this search input.

Enforcing Filtering On ng-repeat Output

Now that we have the text to filter on, we need to add the filtering functionality to ng-repeat. Thanks to the built-in filter functionality of AngularJS, we do not need to write any JavaScript to do this, just modify your ng-repeat as follows:

    <li ng-repeat="tvshow in results | filter: filterText">

It’s as simple as that! We are telling AngularJS – before we output the data using ng-repeat, we need to apply the filter based on the filterText variable. Open index.html in a browser and perform a search. Assuming you searched for something that exists, you should see a selection of the results.


Creating a Genre Custom Filter

So, our users can now search for whatever they want to watch, which is better than just a static list of TV shows. But we can take our filter functionality a little further and create a custom filter that will allow the user to select a specific genre. Once a specific genre has been selected, the ng-repeat should only display TV shows with the chosen genre attached.

First of all, add the following HTML under the filterText input in index.html that we added previously.

    <label>Genre: </label>
    <select ng-model="genreFilter" ng-options="label for label in availableGenres">
        <option value="">All</option>
    </select>

You can see from the above HTML that we have created a select input bound to a model variable called genreFilter. Using ng-options we are able to dynamically populate this select input using an array called availableGenres.

First of all, we need to declare these scope variables. Update your mainController.js file to be as follows:

    app.controller("mainController", function($scope, $http){
        $scope.apiKey = "[YOUR API KEY HERE]";
        $scope.results = [];
        $scope.filterText = null;
        $scope.availableGenres = [];
        $scope.genreFilter = null;
        $scope.init = function() {
            //API requires a start date
            var today = new Date();
            //Create the date string and ensure leading zeros if required
            var apiDate = today.getFullYear() + ("0" + (today.getMonth() + 1)).slice(-2) + "" + ("0" + today.getDate()).slice(-2);
            $http.jsonp('http://api.trakt.tv/calendar/premieres.json/' + $scope.apiKey + '/' + apiDate + '/' + 30 + '/?callback=JSON_CALLBACK').success(function(data) {
                //As we are getting our data from an external source, we need to format the data so we can use it to our desired effect
                //For each day get all the episodes
                angular.forEach(data, function(value, index){
                    //The API stores the full date separately from each episode. Save it so we can use it later
                    var date = value.date;
                    //For each episode, add it to the results array
                    angular.forEach(value.episodes, function(tvshow, index){
                        //Create a date string from the timestamp so we can filter on it based on user text input
                        tvshow.date = date; //Attach the full date to each episode
                        $scope.results.push(tvshow);
                        //Loop through each genre for this episode
                        angular.forEach(tvshow.show.genres, function(genre, index){
                            //Only add to the availableGenres array if it doesn't already exist
                            var exists = false;
                            angular.forEach($scope.availableGenres, function(avGenre, index){
                                if (avGenre == genre) {
                                    exists = true;
                                }
                            });
                            if (exists === false) {
                                $scope.availableGenres.push(genre);
                            }
                        });
                    });
                });
            }).error(function(error) {

            });
        };
    });

It is obvious that we have now declared both genreFilter and availableGenres, which we saw referenced within our HTML. We have also added some JavaScript which will populate our availableGenres array. Within the init() function, while we are processing the JSON data returned from the API, we now do some additional processing and add any genre that is not already in the availableGenres array to it. This will then populate the select input with any available genres.

If you open index.html within your browser, you should see the genre select drop down populate as illustrated below:

figure6-genre-select-drop-down

When the user chooses a genre, the $scope.genreFilter variable will be updated to equal the selected value.

Creating the Custom Filter

As we are wanting to filter on a specific part of the TV show objects, we are going to create a custom filter function and apply it alongside the AngularJS filter within the ng-repeat.

At the very bottom of mainController.js, after all of the other code, add the following JavaScript:

    app.filter('isGenre', function() {
        return function(input, genre) {
            if (typeof genre == 'undefined' || genre == null) {
                return input;
            } else {
                var out = [];
                for (var a = 0; a < input.length; a++){
                    for (var b = 0; b < input[a].show.genres.length; b++){
                        if(input[a].show.genres[b] == genre) {
                            out.push(input[a]);
                        }
                    }
                }
                return out;
            }
        };
    });

The above JavaScript adds a custom filter called isGenre to our app. The function within the filter takes two parameters, input and genre. input is provided by default (which we will see in a moment) and is all the data that the ng-repeat is processing. genre is a value we need to pass in. All this filter does is take the specified genre and check whether each of the TV show objects within input has the specified genre attached to it. If an object has the specified genre, it adds it to the out array, which will then be returned to the ng-repeat. If this doesn’t quite make sense, don’t worry! It should shortly.
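
Because isGenre is registered like any other AngularJS filter, it can also be invoked from JavaScript by injecting the $filter service, which can be handy for quick experiments or unit tests. A hypothetical sketch (the controller name and data are made up):

    app.controller("filterDemoController", function($scope, $filter) {
        var shows = [
            {show: {genres: ["Drama", "Crime"]}},
            {show: {genres: ["Comedy"]}}
        ];
        // returns only the objects whose genres array contains "Drama"
        $scope.dramaOnly = $filter('isGenre')(shows, 'Drama');
    });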

Applying the Custom Filter

Now that we have our custom filter available, we can add this additional filter to our ng-repeat. Modify your ng-repeat in index.html as follows:

    <li ng-repeat="tvshow in results | filter: filterText | isGenre:genreFilter">

This simply chains another filter onto the ng-repeat output. Now the output will be processed by both filters before it is displayed on the screen. As you can see, we have specified our custom filter as isGenre: and then we are passing the scope variable genreFilter as a parameter, which is how we provide our custom filter with the genre variable we talked about earlier. Remember that AngularJS is also providing our filter with the data that the ng-repeat is processing as the input variable.

OK, our custom genre filter is complete. Open index.html in a browser and test out the new functionality. With this filter in place, a user can easily filter out genres they are not interested in.


Calling Scope Functions

You may have noticed that each TV show listing also shows the genre itself. For some additional functionality, we are going to allow the user to click these genres, which will then automatically apply the genre filter for the genre they have clicked on. First of all, we need to create a scope function that the ng-click can call. Add the following code within the mainController in mainController.js:

    $scope.setGenreFilter = function(genre) {
        $scope.genreFilter = genre;
    }

In the above code, this function takes a genre value and then sets the $scope.genreFilter to the specified value. When this happens, the genre filter select box’s value will update and the filter will be applied to the ng-repeat output. To trigger this function when the genre span elements are clicked, add an ng-click to the genre span elements within index.html as follows:

    <span class="label label-inverse genre" ng-repeat="genre in tvshow.show.genres" ng-click="setGenreFilter(genre)">{{genre}}</span>

The ng-click calls our previously created setGenreFilter function and specifies a genre. Open index.html and try it out!


Custom Ordering With AngularJS

Our TV show premiere app is looking pretty good, users can easily refine the results displayed using a series of intuitive filters. To enhance this experience we are going to add some custom ordering functionality so our users will be able to choose a range of ordering options.

Add the following HTML under the genre select drop down:

    <label>Order by: </label>
    <select ng-model="orderField" ng-options="label for label in orderFields" class="input-medium"></select>
    <select ng-model="orderReverse" class="input-medium">
        <option value="true">Descending</option>
        <option value="false">Ascending</option>
    </select>

With this code added, we have two more drop downs. One to select how to order the data and another to choose the direction in which to order the data. We now need to create a function within our controller to make the order comparison. Add the following JavaScript under our setGenreFilter function:

    $scope.customOrder = function(tvshow) {
        switch ($scope.orderField) {
            case "Air Date":
                return tvshow.episode.first_aired;
            case "Rating":
                return tvshow.episode.ratings.percentage;
        }
    };

We also need to declare some additional scope variables:

    $scope.orderFields = ["Air Date", "Rating"];
    $scope.orderDirections = ["Descending", "Ascending"];
    $scope.orderField = "Air Date"; //Default order field
    $scope.orderReverse = false;

If you now open index.html within your browser, you should see the added drop downs populated with Air Date already selected as the default order field. This is shown in the figure below:

figure7-order-drop-downs

Finally, as we have done with our other filters, we need to append this to our ng-repeat. Update it as follows:

    <li ng-repeat="tvshow in results | filter: filterText | isGenre:genreFilter | orderBy:customOrder:orderReverse">

We are now applying an orderBy filter to our data in addition to the other filters. We are telling orderBy to use our customOrder function, and we are passing our orderReverse scope variable through as well. Open index.html in a browser and see the ordering in action.


Conclusion

AngularJS has allowed us to quickly create a detailed and functional web application with minimum effort. Utilizing AngularJS’s built-in filter functions, alongside some of our own custom code, our web application allows our users to easily filter and search through the TV show premieres.

After reading this tutorial you should now be able to understand and use the following principles:

  • Using ng-repeat to display information on screen.
  • Binding to inputs, allowing users to search and filter ng-repeat output.
  • Chaining filters on ng-repeat to perform multiple filtering functions.
  • Custom ordering of data.
  • Using events such as ng-click to respond to user interaction.
  • Using ng-class to conditionally apply styling to page elements.

The topics covered in this tutorial should give you a strong foundation and an understanding of what you can achieve when creating rich web applications with AngularJS.

June 25 2013

20:02

Advanced Unit Testing Techniques in JavaScript

By now, everyone knows about Test-Driven Development and unit testing. But are you using the testing frameworks to their fullest?


Introduction

In this tutorial, I’ll introduce you to some of the more advanced techniques available to you.

As this tutorial will cover some advanced topics, I assume you’ve already created unit tests before and are familiar with the basics and the terminology. If not, here’s an excellent article to get you started: TDD Terminology Simplified.

We’ll be using SinonJS. This is a standalone framework which provides an API for mocks, stubs, spies and more. You can use it with any testing framework of your choosing, but for this tutorial, we’ll be using BusterJS, as it ships with SinonJS built-in.


Up & Running With BusterJS

To install BusterJS just run the following in a terminal: npm install -g buster

Note that you need Node v0.6.3 or newer.

BusterJS requires a buster.js configuration file which tells Buster where your source and test files are.

Create the buster.js file and paste in the following:

var config = module.exports;

config["Nettuts Tests"] = {
    rootPath: "./",
    environment: "browser",
    sources: [
        "src/*.js"
    ],
    tests: [
        "spec/*-test.js"
    ]
}

Now we’ve told Buster that our tests can be found in the spec folder, and our implementation code in the src folder. You can reference files by their filenames, or use wildcards as we have done here. Note that these are relative to the rootPath that we specified.

You can have as many configurations as you want. This allows you to set up different test groups.
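For example, a second group could be added to the same buster.js file to run server-side tests under Node (the file paths here are hypothetical):

config["Nettuts Server Tests"] = {
    rootPath: "./",
    environment: "node",
    tests: [
        "spec/server/*-test.js"
    ]
};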

To run a test with Buster you’ll first need to start its server and hook it up to a browser. You can do so by running buster server in your terminal. You should see the following:

buster-server

Now open your favorite browser and point it to http://localhost:1111. You should see the following screen:

capture-browser

Click the big Capture Browser button to start capturing this browser. Your tests will now run in this browser as long as you leave it open. You can hook up as many browsers as you want, giving you the ability to test in multiple browsers simultaneously (yes, even good ol’ IE)!

You can also hook up BusterJS with PhantomJS to execute your tests without a browser, but that’s outside the scope of this tutorial.

Make sure to leave the server and the browser tab running for the remainder of this tutorial.

To run your tests, simply type buster test in a new terminal tab/window. Once you’ve added some tests you will see an output similar to the following:

test-output

Test Doubles

Before we dive into mocks and stubs, let’s talk a bit about test doubles. A test double is an object that looks and behaves more or less like the real thing. Test doubles are used when using the real object would be difficult or undesirable, and they make testing much easier.

It’s commonly compared to using a stunt double instead of the real actor in a movie scene.

In this tutorial we’ll cover the following types of test doubles:

  • stubs
  • spies
  • mocks

Spies

A spy is a function that records all calls made to it. It will keep track of arguments, return values, the value of this, exceptions thrown (if any), etc. It can be an anonymous function or it can wrap an existing function. If used as a wrapper it will not modify the underlying function in any way; the original function will still be executed as usual.

Here’s how you create a spy:

var spy = sinon.spy();

This creates an anonymous function that records arguments, the value of this, exceptions, and return values for all calls it receives.

var spy = sinon.spy(my_function);

This spies on the provided function.

var spy = sinon.spy(object, "method");

This creates a spy for object.method and replaces the original method with the spy. The spy still executes the original method, but will record all calls.
You can access this spy via the newly created spy variable or by calling object.method directly. object.method can be restored by calling spy.restore() or object.method.restore().

The returned spy object has the following methods and properties:

spy.withArgs(arg1[, arg2, ...]);

Creates a spy that only records calls when the received arguments match those passed to withArgs.

spy.callCount

Returns the number of recorded calls.

spy.called

Returns true if the spy was called at least once.

spy.calledOnce

Returns true if spy was called exactly one time.

spy.calledWith(arg1, arg2, ...);

Returns true if the spy was called at least once with the provided arguments. This can be used for partial matching. SinonJS will only check the provided arguments against actual arguments. So a call that receives the provided arguments (and possibly others) will return true.

spy.threw([exception]);

Returns true if the spy threw an exception at least once. If you want, you can pass in a string or an object to test for a specific exception type or object.

var spyCall = spy.getCall(n);

Returns the nth call made to the spy. Spy calls have their own API, which you can find here: Spy Call API

spy.args

An array of arguments received per call. spy.args[0] is an array of arguments received in the first call, spy.args[1] is an array of arguments received in the second call, etc.

spy.reset()

Resets the state of a spy.

This was just a small excerpt of the methods available to you. For the full list of all available API methods check the documentation here: Spy API
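Before we move on to a fuller example, here is a quick sketch of a few of the properties above in action, using an anonymous spy:

var spy = sinon.spy();

spy("foo", 42);
spy("bar");

console.log(spy.callCount);             // 2
console.log(spy.calledWith("foo", 42)); // true
console.log(spy.calledWith("foo"));     // true (partial match)
console.log(spy.args[1]);               // ["bar"]
console.log(spy.getCall(0).args);       // ["foo", 42]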

Spies Example

Now let’s look at an example of how to use a spy. In the following test, we’re checking whether jQuery.get() uses jQuery.ajax() under the hood. We do this by spying on jQuery.ajax().

buster.testCase("Spies", {

    tearDown: function() {
        jQuery.ajax.restore();
    },

    "should call jQuery.ajax when using jQuery.get": function() {

        sinon.spy(jQuery,"ajax");

        jQuery.get("/user");

        assert(jQuery.ajax.calledOnce);

    }

});

The original method is unaffected, and is still being executed. We just wrapped it in a spy so we can record the calls to it.

After the test has run, we remove the spy from jQuery.ajax by calling .restore().


Stubs

A stub is a test double with preprogrammed behavior. Stubs are used to make a piece of code work without actually using the real implementation of it.

It provides preprogrammed responses to calls, and it doesn’t care about how many times it’s called, in which order, or with which arguments.

Stubs have their own API, but also implement the full Spy API. Just like spies they can be either anonymous or wrap an existing function. Unlike spies, they will not execute the wrapped function. Instead, you can specify what the stub should do when it is called.

Because you can control exactly how a stub behaves, it allows you to easily test different flows in your code, or to prevent undesirable behavior from being executed.

Here’s an excerpt from Sinon’s Stub API:

var stub = sinon.stub();

This creates an anonymous stub function.

var stub = sinon.stub(object, "method");

This replaces object.method with a stub function. When stubbing an existing method like this, the original method will not be executed whenever object.method() is called.

The original function can be restored by calling object.method.restore() or stub.restore().

var stub = sinon.stub(obj);

Stubs all of the object’s methods. It’s usually considered better practice to stub individual methods, as that is less prone to unexpected behavior.

stub.withArgs(arg1[, arg2, ...]);

Stubs the method only for the provided arguments.

stub.returns(value);

Makes the stub return the provided value.

stub.returnsArg(index);

Causes the stub to return the argument at the provided index; stub.returnsArg(0) causes the stub to return the first argument.

stub.throws();

Causes the stub to throw an exception. Optionally, you can pass in the type of error to throw, e.g. stub.throws("TypeError").

You can find the full API reference here: Stubs API

Stubs Examples

The simplest way to use a stub is by creating an anonymous stub function:

buster.testCase("Stubs Example", {

    "should demonstrate anonymous stub usage": function() {

        var callback = sinon.stub();

        callback.returns("result");

        assert.equals(callback(), "result");

    }

});

Here’s a different example. It demonstrates how to stub methods to force the code flow down a certain path:

buster.testCase("User", {

    setUp : function() {

        this.user = new User({
            name  : 'John',
            age   : 24,
            loves : 'coffee'
        });

    },

    tearDown : function() {

        Database.saveRecord.restore();

    },

    "should return `User saved successfully` when save in database is successful": function() {

        sinon.stub(Database, 'saveRecord').returns(true);

        var result = this.user.save();

        assert.equals(result, 'User saved successfully');

    },

    "should return `Error saving user` when save in database fails": function() {

        sinon.stub(Database, 'saveRecord').returns(false);

        var result = this.user.save();

        assert.equals(result, 'Error saving user');

    }

});

In the above tests we have a User class which uses the Database class to save data. Our goal is to test if the User class responds with the correct message when the Database is done saving the user data. We want to test both the good and bad scenarios.
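For reference, a hypothetical implementation of the User class that these tests would exercise might look something like this (the real class is assumed and not shown in this tutorial):

function User(data) {
    // Keep the supplied user data so save() can pass it on
    this.data = data;
}

User.prototype.save = function() {
    // Delegate the actual persistence to the Database dependency
    var saved = Database.saveRecord(this.data);
    return saved ? 'User saved successfully' : 'Error saving user';
};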

In a production environment the Database class might do various things to save the data (connect to a real database, do some AJAX calls, etc.) which are not of interest for this test. It could even have a negative effect on our test results. If something in the Database class is broken, we want the Database class’s own unit tests to break and indicate the problem for us. Other classes which use the Database class as a dependency should still work as expected. Mocking or stubbing dependencies allows us to do this, which is a strong argument for using these in the first place.

In the above test we use a stub to preprogram the Database.saveRecord() method behavior. This allows us to test both code paths we need for our test.

After each test we call .restore() on the method we stubbed to restore the original method.

In the above example we stub all calls to Database.saveRecord(). We can also limit our stub to calls which have a certain collection of arguments.

Here’s a quick example of how to force different actions based on the passed arguments:

buster.testCase("Stubs", {

    "should stub different behaviour based on arguments": function() {

        var callback = sinon.stub();

        // Stub the same method in 3 different ways, based on the arguments
        callback.withArgs('success').returns(true);
        callback.withArgs('getOrder').returns(['pizza', 'icecream']);
        callback.withArgs(false).throws("My Error");

        // Verify each stub
        assert( callback('success') );
        assert.equals( callback('getOrder'), ['pizza', 'icecream']);

        try {
            callback(false);
        } catch(e) {}

        assert( callback.threw("My Error"), "Exception 'My Error' was not thrown" );

    }

});

Mocks

Mocks are stubs with preprogrammed expectations. They allow you to verify the behavior of a piece of software, as opposed to verifying the state of something, as you’d do with normal assertions.

Here’s a list of Sinon’s Mock API:

var mock = sinon.mock(obj);

This creates a mock for the provided object. It does not modify the object, but returns a mock object to set expectations on the object’s methods.

var expectation = mock.expects("method");

This overrides obj.method with a mock function and returns it. Expectations come with their own API, which we’ll cover later.

mock.restore();

Restores all mocked methods to their original functions.

mock.verify();

Verifies all expectations on the mock. If any expectation is not met, an exception is thrown. This will also restore the mocked methods to their original functions.

Mocks also implement the full Stub API.
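Because of that, an expectation can also be given stub-like behavior. Here is a small sketch (reusing the Database object purely for illustration):

var mock = sinon.mock(Database);

// Expect exactly one call, and make the mocked method return true when it happens
mock.expects('saveRecord')
    .once()
    .returns(true);

// ... run the code under test here ...

// Throws if the expectation was not met, and restores Database.saveRecord
mock.verify();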

Mocks Examples

Now let’s see how we can apply this to the User example we used earlier when talking about stubs.

Remember how it used the Database.saveRecord method? We never wrote a test to make sure the User class actually calls this method correctly; we just assumed it would.

We don’t have any tests to verify the communication between the two objects, but we can fix this easily by writing the following test:

buster.testCase("User", {

    setUp : function() {

        var userdata = this.userdata = {
            name  : 'John',
            age   : 24,
            loves : 'coffee'
        };

        this.user = new User(userdata);

    },

    "should use Database class to save userdata": function() {

        var mock = sinon.mock(Database);

        mock
            .expects('saveRecord')
            .withExactArgs(this.userdata)
            .once();

        this.user.save();

        mock.verify();

    }

});

As you can see, we mocked the Database object and explicitly stated how we expect the saveRecord method to be called. In this test we expect the method to be called only once, with the userdata object as the only parameter.

Because our expectations are already in our mock, we do not need to write any assertions; instead, we just tell the mock to verify its expectations by using mock.verify().

If the mock was called more than once, or with parameters other than those we specified, it would throw an error which would make the test fail:

mock-failed-verification

Let’s look at another example where mocks can come in handy.

If you’ve worked with unit tests before in a PubSub system, you’ll probably have seen something similar to the following:

    "should execute subscribers with correct data": function() {

        var pubsub = new PubSub(),
            called = false,
            eventdata = { foo : 'bar' },
            callback = function(data) { 
                called = (data === eventdata);
            };

        pubsub.subscribe("message", callback);
        pubsub.publish("message", eventdata);

        assert(called);

    }

This test verifies that the subscriber is called when an event is published.

The callback function is acting more or less like a mock, as it verifies whether it was called with the correct arguments. Let’s improve the test by turning callback into a real mock:

    "should execute subscribers with correct data (using mocks)": function() {

        var pubsub = new PubSub(),
            eventdata = { foo : 'bar' },
            callback = sinon.mock().withExactArgs(eventdata).once();

        pubsub.subscribe("message", callback);
        pubsub.publish("message", eventdata);

        callback.verify();

    }

Easy as pie. And it also improved the readability of the test!

Expectations

The .once() and .withExactArgs() methods used above are expectations. Sinon offers a ton of different expectations that you can use for your mocks. Here are a few of my favorites:

expectation.atLeast(n)

Expect the method to be called a minimum of n times.

expectation.atMost(n)

Expect the method to be called a maximum of n times.

expectation.never()

Expect the method to never be called.

expectation.once()

Expect the method to be called exactly once.

expectation.exactly(n)

Expect the method to be called exactly n times.

expectation.withArgs(arg1, arg2, ...)

Expect the method to be called with the provided arguments, and possibly others.

expectation.withExactArgs(arg1, arg2, ...)

Expect the method to be called with the provided arguments, and no others.

expectation.verify()

Verifies the expectation and throws an exception if it’s not met.

The full list of expectations can be found here: Expectations API

Expectations Examples

You can chain these expectations to your heart’s content. So this is totally valid:

sinon.mock(obj)
     .expects('method')
     .withExactArgs(data)
     .atLeast(1)
     .atMost(3);

Also, you can set expectations for multiple methods on the same mock simultaneously:

var mock = sinon.mock(obj);

mock.expects('method1')
    .atLeast(1)
    .atMost(3);

mock.expects('method2')
    .withArgs(data)
    .once();

Or even set multiple expectations on the same method:

var mock = sinon.mock(obj);

mock.expects('myMethod')
    .withArgs('foo')
    .atLeast(1)
    .atMost(3);
    
mock.expects('myMethod')
    .withArgs('bar')
    .exactly(4);

Both expectations will have to be met for the test to pass.

Now that we’ve covered test doubles, let’s talk about something completely different, but equally awesome: time travel!


Time-Travel in Unit Tests

I don’t always bend time and space in unit tests, but when I do, I use Buster.JS + Sinon.JS ~ Brian Cavalier, Cujo.JS

Do you often use setTimeout, clearTimeout, setInterval, or clearInterval to delay execution of a piece of code? If so, then you’ve probably encountered tests like this:

buster.testCase("EggTimer", {

    "should execute callback method after 5000ms": function(done) {

        // Overwrite BusterJS default test timeout of 250ms
        this.timeout = 6000;

        var mock = sinon.mock().once();

        EggTimer.start(5000, mock);

        setTimeout(function() {
            mock.verify();

            // Because of the asynchronous nature of setTimeout,
            // we need to tell BusterJS when our test is done:
            done();
        }, 5001);

    }

});

This test verifies that the EggTimer.start method executes the callback after a certain period of time. But by doing so, it forces you to wait for more than five seconds every time you run the test!

Imagine having ten tests which rely on setTimeout in this way; your test suite would quickly become so slow that you’d hate running it.

Fortunately, SinonJS provides fake timers which allow us to override the browser’s clock and travel forward in time — Great Scott!

We can do this by using the sinon.useFakeTimers() method. By doing so, SinonJS will create a clock object and override the browser’s default timer functions with its own.

The returned clock object has only two methods:

clock.tick(time)

Tick the clock ahead for time milliseconds. This causes all timers scheduled within the specified time period to be executed.

clock.restore()

This call is usually done in the tearDown step of a test(suite). It resets the timer functions back to the browser’s native ones.
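To see the mechanics in isolation, here is a bare-bones sketch: a scheduled callback only fires once the fake clock is ticked past its delay.

var clock = sinon.useFakeTimers();
var callback = sinon.spy();

setTimeout(callback, 100);

console.log(callback.called); // false - no (fake) time has passed yet
clock.tick(101);              // advance the fake clock by 101ms
console.log(callback.called); // true

clock.restore();              // hand the native timer functions back to the browser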

Fake Timers Example

Now that we know about fake timers, let’s see how we can use them to rewrite the above test:

buster.testCase("EggTimer (with fake timers)", {

    setUp: function () {
        this.clock = sinon.useFakeTimers();
    },

    tearDown: function () {
        this.clock.restore();
    },

    "should execute callback method after 5000ms": function() {

        var mock = sinon.mock().once();

        EggTimer.start(5000, mock);
        this.clock.tick(5001);

        mock.verify();

    }

});

First we’ve added setUp and tearDown methods to override and restore the browser’s clock before and after each test.

Then we used the clock.tick() method to travel forward in time. Because SinonJS’s fake timers are synchronous implementations, we no longer need the done() call. As an added benefit, our test is now much easier to read.

Here’s a speed comparison:

fake-timers-speed-comparison

Our rewritten test takes the total test execution time down from 5012ms to 12ms! We saved exactly 5000ms, which was the value we used in the first test’s setTimeout() call!

By using fake timers, having ten of these tests is no big deal. It will only increase the total test execution time by a few milliseconds, as opposed to 5000ms per added test!

More information on Sinon’s clock and timer functions can be found here: Clock API


Conclusion

We’ve looked at various advanced techniques which you can use in your JavaScript unit tests. We discussed spies, stubs, mocks, and how to fake the browser’s timer functions.

We used SinonJS for this, but most other testing frameworks (like Jasmine) have support for these functionalities (although with their own API).

If you’re interested in more in-depth knowledge on unit testing in JavaScript, I highly recommend the Test-Driven JavaScript Development book by Christian Johansen (the creator of SinonJS.)

I hope this article was helpful, and that you’ve learned some new techniques that you can use the next time you write a unit test. Thank you for reading.

June 15 2013

02:01

Building Ribbit in Meteor

This is a continuation of the Twitter clone series with building Ribbit from scratch, this time using Meteor.

For this tutorial, please do not expect an in-depth explanation of the Meteor framework. You should already have some Meteor experience, so that you’ll understand some of the important concepts that will be presented. To get a basic understanding of Meteor, I recommend Andrew Burgess’s course on Tutsplus Premium.

So, let’s get started.


Step 0: Creating and Configuring the Meteor Application

We begin by generating a new Meteor application. Open a command line prompt and navigate to the folder where you want the application to be stored. Then run:

meteor create ribbit

Meteor will create a folder named ribbit containing some files and will ask you to change directory into ribbit and run meteor commands in that folder.

If you step into the ribbit folder and issue an ls -la command to see what was generated by Meteor, you will find a hidden folder named .meteor and three files named ribbit.html, ribbit.css and ribbit.js. The .meteor folder contains only one file, named packages. The contents of the .meteor/packages file tell Meteor which smart packages are used by the application.

Since we will start developing Ribbit from scratch we will keep only the .meteor folder and delete all the rest.

Let’s see what packages Meteor uses by default: at the command prompt enter

cat .meteor/packages

The output of this command should list the autopublish, insecure, and preserve-inputs packages.

We will discuss the concept of publishing data from the server later; for the moment, let’s note that the autopublish package is used on the server side to automatically make data available to the client. Usually this package is not used in a production environment, so we can remove it by issuing the following command:

meteor remove autopublish

Next, the insecure package allows database writes by default, which can be dangerous, since this allows any user to do any database operations, including deleting data. We will remove this package before deploying the app and add all the necessary code to protect writing to the database.

And lastly, the preserve-inputs package preserves the values of form fields that have a unique id when the page is updated.

After all of that, we should now have a folder containing just a .meteor sub-folder and this should be the base of our app.


Step 1: Creating the Necessary Folder Structure

There are several rules Meteor uses to serve files and data, which we will not be covering in detail here (see Andrew’s course for details). What is important to note is that Meteor treats all files in a server folder as server code, all files in the client and public folders as client code to be served to the client, and all files outside of these folders as code for both the client and the server. So let’s create the following folders:

  • client – to store all the client side code.
  • server – to store all the server side code.
  • public – containing all the assets such as graphic files, robots.txt
    files, favicons etc.

Also, it’s important to know that Meteor treats JavaScript and CSS files as part of the client/server code, not as assets to be served from the public folder.

So, we end up with the following folder structure:

Our Folder Structure

Step 2: The Static Version of the Login Page

Now it’s time to download the static version of the Ribbit app.

After unpacking the files, copy the gfx folder into the public folder of our Ribbit app.

Also, copy the home.html and style.less files into the client folder of our app. Now if you try to start the server by issuing meteor from the command line, in the root of our application, you will see the server crash and complain about the DOCTYPE in our file.

Meteor expects our HTML files to be composed from only three base elements: the head, the body and the template tags.

So in order to fix the error, we should edit the home.html file and remove the DOCTYPE and html tags, which will be added by Meteor when serving the files.

Now if you refresh the page, you should see its content, but it won’t have any styling since the package we downloaded uses LESS for styling the page.

This is easy to fix by installing Meteor’s LESS smart package. So issue:

meteor add less

then restart the server and reload the page and things should look a little bit better now.

We still have to make some adjustments in the LESS file, since not everything looks as it should. Search for all occurrences of gfx in the style.less file and add a leading slash before each (so gfx/ becomes /gfx/). This allows the files to be loaded from the root of the project.

The next step is to adjust the home.html and style.less files to add in a login button and have everything on one line in the header. The files should look like the following:

// client/style.less
input {
  width: 236px;
  height: 26px;
  border: 1px solid @border-color;
  padding: 0 10px;
  outline: none;
  font-size: 17px;
  &:focus {
    background: #FFFDF2;
  }
}

input[type="submit"] {
  height: 26px;
  width: auto;
  border: 1px solid #7BC574;
  border-radius: 2px;
  color: white;
  font-size: 12px;
  font-weight: bold;
  padding: 0 20px;
  cursor: pointer;
  .gradient4f(0%, #8CD585, 23%, #82CD7A, 86%, #55AD4C, 100%, #4FA945);
}

header {
  background: url(/gfx/bg-header.png);
  height: 85px;
  width: 100%;
  div.wrapper {
    padding: 11px 0;
    img {
      position: relative;
      top: 10px;
      margin: 0 15px 0 0;
    }
    span {
      font-size: 18px;
      margin: 0 42px 0 0;
    }
    p {
      display: inline;
      input {
        margin: 0 0 0 14px;
        width: 180px;
        height: 28px;
      }
    }
    #btnLogOut {
      float: right;
      width: auto;
      height: 28px;
      margin: 19px 0 0 0;
    }
  }
}

And here’s what the home.html file should look like:

<!-- client/home.html -->
<head>
</head>
<body>
  <header>
    <div class="wrapper">
      <img src="gfx/logo.png">
      <span>Twitter Clone</span>
      <p>
        <input name="username" placeholder="username" type="text">
        <input name="password" placeholder="password"  type="password">
        <input type="submit" id="btnLogOut" value="Log In">
      </p>
    </div>
  </header>
  <div id="content">
    <div class="wrapper">
      <img src="gfx/frog.jpg">
      <div class="panel right">
        <h1>New to Ribbit?</h1>
        <p>
          <input name="email" placeholder="email" type="text">
          <input name="username" placeholder="username" type="text">
          <input name="fullname" placeholder="fullname" type="text">
          <input name="password" placeholder="password" type="text">
          <input name="password2" placeholder="retype password" type="password">
          <input type="submit" value="Create Account">
        </p>
      </div>
    </div>
  </div>
  <footer>
    <div class="wrapper">
      Ribbit - A Twitter Clone Tutorial<img src="gfx/logo-nettuts.png">
    </div>
  </footer>
</body>

Now take a look at the page in the browser. It should look like the image below:

Our Home page.

Well that was pretty easy, wasn’t it? But what we’ve done up to now is not quite what we want, and it’s definitely far from the Meteor way of doing things. Let’s fix that!


Step 3: Meteor-ize the Home Page

Meteor constructs the HTML page that is sent to the browser from three elements: the head element, the body element, and the template element. Since the head and the body are already well known, special attention should be given to the template element.

A template is declared with <template name="foo">...</template> and its content is rendered in the HTML content that is sent to the browser. On the other hand, in the JavaScript file, the template can be accessed using Template.foo, which returns a string of HTML when called.

Now Meteor uses Handlebars as the template framework, but others can be used if needed, such as Jade.
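As a quick, throwaway illustration (the greeting template below is hypothetical and not part of Ribbit), calling a template from client-side JavaScript simply produces its HTML:

// Given a template declared in HTML as:
// <template name="greeting">Hello, {{who}}!</template>

// ...it can be invoked with a data context and returns a string of HTML:
var html = Template.greeting({ who: "Ribbit" });
console.log(html); // "Hello, Ribbit!"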

So, let’s see what should be done in order to transform the home.html file into a Meteor compatible page. Let’s take a look at the modified code:

<!-- client/home.html -->
<head>

</head>

<body>
  <header>
    {{> header}}
  </header>
  {{> content}}
  <footer>
    {{> footer}}
  </footer>
</body>

<template name="header">
  <div class="wrapper">
    <img src="gfx/logo.png">
    <span>Twitter Clone</span>
    <p>
      <input id="username" name="username" placeholder="username" type="text">
      <input id="password" name="password" placeholder="password"  type="password">
      <input id="btnLogOut" type="submit" value="Log In">
    </p>
  </div>
</template>

<template name="content">
  <div id="content">
    <div class="wrapper">
      <img src="gfx/frog.jpg">
      <div class="panel right">
        <h1>New to Ribbit?</h1>
        <p>
          <input id="email" name="email" placeholder="email" type="text">
          <input id="newusername" name="username" placeholder="username" type="text">
          <input id="fullname" name="fullname" placeholder="fullname" type="text">
          <input id="newpassword" name="password" placeholder="password" type="text">
          <input id="password2" name="password2" placeholder="retype password" type="password">
          <input id= "btnCreateAccount" type="submit" value="Create Account">
        </p>
      </div>
    </div>
  </div>
</template>

<template name="footer">
  <div class="wrapper">
    Ribbit - A Twitter Clone Tutorial<img src="gfx/logo-nettuts.png">
  </div>
</template>

As you can see, we start with an empty head element. Since Meteor loads all the necessary files for us, we can use the head element to establish the page’s title, but for the moment, it’s blank.

Inside the body element, we render three templates: the header template, which we place in the header tag, the content template, which is placed right after the header, and finally the footer template, inside a footer tag.

What I’ve done for these templates, was to simply copy the old code and place it into the appropriate template. So what was in the header tag, is now in the header template and so on.

I’ve also added some ids to the edit boxes and the buttons, so that later, I’m able to access these from the JavaScript code.

And that’s it. Now our page is finished and works well with the Meteor framework.

The next step will be to create the buddies page which should be displayed when a user logs into the system. We also need to allow the user to sign up and log in to our application.


Step 4: Log Into the Buddies Page

If we follow the same steps from the previous section to transform the buddies.html page into a Meteor one, we end up with the following code:

<!-- client/buddies.html -->
<head>

</head>
<body>
  <header>
    {{> header}}
  </header>
  {{> content}}
  <footer>
    {{> footer}}
  </footer>
</body>

<template name="header">
  <div class="wrapper">
    <img src="gfx/logo.png">
    <span>Twitter Clone</span>
    <p>
      <input type="submit" id="btnLogOut" value="Log Out">
    </p>
  </div>
</template>

<template name="content">
  <div id="content">
    <div class="wrapper">
      <div id="createRibbit" class="panel right">
        <h1>Create a Ribbit</h1>
        <p>
          <textarea name="text" class="ribbitText"></textarea>
          <input type="submit" value="Ribbit!">
        </p>
      </div>
      <div id="ribbits" class="panel left">
        <h1>Your Ribbit Profile</h1>
        <div class="ribbitWrapper">
          <img class="avatar" src="gfx/user1.png">
          <span class="name">Frogger</span> @username
          <p>
            567 Ribbits<span class="spacing">45 Followers</span><span class="spacing">32 Following</span><br>
            Cras justo odio, dapibus ac facilisis in, egestas Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. <a href="#">http://net.tutsplus.com/tutorials/php/ ...</a>
          </p>
        </div>
      </div>
      <div class="panel left">
        <h1>Your Ribbit Buddies</h1>
        <div class="ribbitWrapper">
          <img class="avatar" src="gfx/user2.png">
          <span class="name">Kermit</span> @username <span class="time">15m</span>
          <p>
            Cras justo odio, dapibus ac facilisis in, egestas Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. <a href="#">http://net.tutsplus.com/tutorials/php/ ...</a>
          </p>
        </div>
      </div>
    </div>
  </div>
</template>

<template name="footer">
  <div class="wrapper">
    Ribbit - A Twitter Clone Tutorial<img src="gfx/logo-nettuts.png">
  </div>
</template>

As you can see, I took out the long list of ribbits, but other than that, there’s nothing else too impressive about the above code.

However, there are two problems with it:

  • If you try viewing it in the browser, you will find it’s being displayed twice. There are two reasons for this: Meteor sends files to the browser in alphabetical order, so buddies.html is sent first; and we also have a lot of duplication in the code, even the names of the templates are duplicated. So when Meteor wants to display the content template, it displays the first one defined, which is the one in the buddies.html file.
  • The second problem is also code duplication, which needs to be fixed.

The issues above can be resolved by doing the following: we will create an index.html file containing only the code that will trigger the needed templates: one for the header, one for the footer, one for the home page, and one for the buddies page.

Let’s take a look of each of them:

<!-- client/index.html -->
<head>

</head>

<body>
  <header>
    {{> header}}
  </header>
  {{#if currentUser}}
    {{> buddiescontent}}
  {{else}}
    {{> homecontent}}
  {{/if}}
  <footer>
    {{> footer}}
  </footer>
</body>

This is the main file in which all of the templates are loaded. The interesting part is how the content is chosen: we check with a Handlebars if clause whether a user is logged in or not. Meteor’s currentUser variable contains the logged-in user’s data when a user is logged in (so we display the buddies template) and is null when no user is logged in (so we display the home template).

<!-- client/header.html -->
<template name="header">
  <div class="wrapper">
    <img src="gfx/logo.png">
    <span>Twitter Clone</span>
    {{#if currentUser}}
      <nav>
        <a id="buddies" href="#">Your Buddies</a>
        <a href="#">Public Ribbits</a>
        <a id='profiles' href="#">Profiles</a>
      </nav>
      <input type="submit" id="btnLogOut" value="Log Out" />
    {{else}}
      <p>
        <input id="username" name="username" placeholder="username" type="text">
        <input id="password" name="password" placeholder="password"  type="password">
        <input id="btnLogOut" type="submit" value="Log In">
      </p>
    {{/if}}
  </div>
</template>

Here we render the header for our pages and, as above, if a user is logged in, we display a navigation bar; otherwise, we display a log in form.

<!-- client/footer.html -->
<template name="footer">
  <div class="wrapper">
    Ribbit - A Twitter Clone Tutorial<img src="gfx/logo-nettuts.png">
  </div>
</template>

Then we create the footer template, which only renders the content in the footer.

<!-- client/home.html -->
<template name="homecontent">
  <div id="content">
    <div class="wrapper">
      <img src="gfx/frog.jpg">
      <div class="panel right">
        <h1>New to Ribbit?</h1>
        <p>
          <input id="email" name="email" placeholder="email" type="text">
          <input id="newusername" name="username" placeholder="username" type="text">
          <input id="fullname" name="fullname" placeholder="fullname" type="text">
          <input id="newpassword" name="password" placeholder="password" type="password">
          <input id="password2" name="password2" placeholder="retype password" type="password">
          <input id= "btnCreateAccount" type="submit" value="Create Account">
        </p>
      </div>
    </div>
  </div>
</template>

We’ve already seen this template before; it just contains a form that allows users to register for the application.

<!-- client/buddies.html -->
<template name="buddiescontent">
  <div id="content">
    <div class="wrapper">
      <div id="createRibbit" class="panel right">
        <h1>Create a Ribbit</h1>
        <p>
          <textarea name="text" class="ribbitText"></textarea>
          <input type="submit" value="Ribbit!">
        </p>
      </div>
      <div id="ribbits" class="panel left">
        <h1>Your Ribbit Profile</h1>
        <div class="ribbitWrapper">
          <img class="avatar" src="gfx/user1.png">
          <span class="name">Frogger</span> @username
          <p>
            567 Ribbits<span class="spacing">45 Followers</span><span class="spacing">32 Following</span><br>
            Cras justo odio, dapibus ac facilisis in, egestas Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. <a href="#">http://net.tutsplus.com/tutorials/php/ ...</a>
          </p>
        </div>
      </div>
      <div class="panel left">
        <h1>Your Ribbit Buddies</h1>
        <div class="ribbitWrapper">
          <img class="avatar" src="gfx/user2.png">
          <span class="name">Kermit</span> @username <span class="time">15m</span>
          <p>
            Cras justo odio, dapibus ac facilisis in, egestas Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. <a href="#">http://net.tutsplus.com/tutorials/php/ ...</a>
          </p>
        </div>
      </div>
    </div>
  </div>
</template>

And then the last template, containing information about the current user, that user’s ribbits, and the people the current user is following.

Up until now, we have two pages of our application which can be displayed, but both are static and we can display only one at a time. So next, let’s add some JavaScript to our application to make it more dynamic.

Registering and Logging In

Meteor gives us several smart packages to handle the process of registering and logging in to an application. For our Ribbit app, we will use the following packages: accounts-base for basic account support and accounts-password for password-based login support. To install these packages, run the following commands:

meteor add accounts-base
meteor add accounts-password

Now, let’s create a client.js file in the client folder to hold our client side code. For the home page, we need to handle events for clicking on the Log In button and Create Account button.

Events in Meteor are associated with a specific template, so in order to handle clicking on the Log In button we will add an event handler for the header template. To handle the click event for the Create Account button, we have to add an event handler associated with the homecontent template.

Let’s take a look at the code:

Clicking on the Log In Button
// client/client.js
// handling click event on the Log In button
Template.header.events({
  'click #btnLogOut': function (event, template) {
    if (Meteor.userId()) {
      Meteor.logout();
    } else {
      var userName     = template.find('#username').value,
        userPassword = template.find('#password').value;
      Meteor.loginWithPassword(userName, userPassword, function (error) {
        if (error) {
          console.log(error);
        }
      });
    }
  }
});

On the first line, we attach an events object to the header template. Then we handle the click event on the button with an id of btnLogOut (which we’ve created in our HTML template). If a user is logged in, we simply log them out. If Meteor.userId returns null, no user is logged in, so we perform a log in with the supplied username and password. Also, notice how the values typed into the edit boxes are read: the template object has a find function which takes the ID of the edit box whose value we want. If there is an error while logging in, we just display it in the console. For now this is a temporary approach, since we will implement displaying information to the user a little bit later.

Clicking on the Create Account Button
// client/client.js
// handling the click event on the Create Account button
Template.homecontent.events({
  'click #btnCreateAccount': function (event, template) {
    var userEmail = template.find('#email').value,
      userName  = template.find('#newusername').value,
      password  = template.find('#newpassword').value,
      password2 = template.find('#password2').value,
      name      = template.find('#fullname').value;

    Accounts.createUser({
      username: userName,
      email:    userEmail,
      password: password,
      profile: {
        name: name
      }
    }, function (error) {
      if (error) {
        console.log("Cannot create user");
      }
    });
  }
});

Just like before, we attach an events object to the homecontent template, then we get the values submitted by the user from the form fields. To create an account we simply use the Accounts.createUser function, which takes as a parameter an object with the username, password, email and profile components. In case of an error while adding the user, the error is displayed in the console.


Step 5: Making the Buddies Page Dynamic

The next step in developing our application is to make the buddies page dynamic. At the moment, everything displayed on our page is statically taken from the template files. We want to take the information from a database and also to allow the user to create ribbits and store them in a database on the server side.

Changing the Your Ribbit Profile Section

This section displays the full name of the user, their username, the number of ribbits the currently logged-in user has, the number of followers, the number of people they follow, and the last ribbit posted by the user.

We will store the code for this part in a main.js file. Note that Meteor loads files with the word main in their name last, after all other files have been loaded.

Let’s see how to display them in the template:

<!-- client/buddies.html -->
...
  <div id="ribbits" class="panel left">
    <h1>Your Ribbit Profile</h1>
    <div class="ribbitWrapper">
      <img class="avatar" src="gfx/user1.png">
      <span class="name">{{fullName}}</span> @{{userName}}
      <p>
        {{noOfRibbits}}<span class="spacing">45 Followers</span><span class="spacing">32 Following</span><br>
        {{lastRibbit}}
      </p>
    </div>
  </div>
...

What we want to do in this template is use Handlebars variables for the items we’d like to pull from the database. The fullName and userName are taken from the users collection, and noOfRibbits and lastRibbit are taken from the ribbits collection.

The above variables are returned to the template by the following JavaScript code in the main.js file:

// client/main.js
Ribbits = new Meteor.Collection('ribbits');

Template.buddiescontent.helpers({
  fullName: function () {
    return Meteor.user().profile.name;
  },

  userName: function () {
    return Meteor.user().username;
  },

  noOfRibbits: function () {
    var ribbits = Ribbits.find({user_id: Meteor.userId()}),
      retVal;
    if (ribbits.count() === 1) {
      retVal = "1 Ribbit";
    } else {
      retVal = ribbits.count() + " Ribbits";
    }
    return retVal;
  },

  lastRibbit: function () {
    var lastRibbit = Ribbits.findOne({user_id: Meteor.userId()}, {sort: {created_at: -1}}),
      retVal;

    if (lastRibbit) {
      retVal = lastRibbit.ribbit;
    } else {
      retVal = 'This user has no Ribbits';
    }

    return retVal;
  }
});

Two of Meteor’s principles state that access to the database is done using the same API on both the server and the client, and that on the client side there is a simulation of the model which should feel like a zero-latency connection to the database (latency compensation). These principles can be seen on line one of the above code: we create a Ribbits instance of a Meteor collection, which is the client-side database. Since on the server side we don’t yet have a ribbits collection, the one defined in our code will be empty.

Next, a helpers function is defined, which takes as a parameter an object with a function for each Handlebars variable in the template. As you can see, the appropriate data is returned for each one.

The fullName and userName are both taken from the users database profile object.

The noOfRibbits and the lastRibbit should be taken from the Ribbits collection. Since this is empty at the moment, we expect the functions above to return zero as the number of ribbits and the message, ‘This user has no Ribbits’, instead of the last ribbit.

Changing the Your Ribbit Buddies Section

Let’s take a look at the code. First the template:

<!-- client/buddies.html -->
<div class="panel left">
<h1>Your Ribbit Buddies</h1>
{{#each ribbits}}
<div class="ribbitWrapper">
  <img class="avatar" src="gfx/user2.png">
  <span class="name">{{buddyFullName}}</span> @{{buddyUserName}} <span class="time">15m</span>
  <p>
  {{ribbit}}
  </p>
</div>
{{/each}}

The new thing here is that we use Handlebars’ each to loop through all the ribbits returned from the JavaScript function. The rest is pretty similar to the code inside of the profile section.

Now for the JavaScript code:

// client/main.js
...
buddyFullName: function () {
  Ribbits.find().forEach(function (ribbit) {
    var theUser = Meteor.users.findOne({_id: ribbit.user_id});

    return theUser.profile.name;
  });
},

buddyUserName: function () {
  Ribbits.find().forEach(function (ribbit) {
    var theUser = Meteor.users.findOne({_id: ribbit.user_id});

    return theUser.username;
  });
},

ribbits: function () {
  return Ribbits.find();
}

In this area we want to display all of the ribbits belonging to the currently logged-in user and to all of the users that user follows. We will implement a mechanism on the server that returns to the client only the ribbits which fulfill this condition. On the client, the Ribbits collection is scanned and, for each ribbit, a user with the same id as the one stored on the ribbit is looked up. The full name and the username are then returned to the template. As for the ribbits themselves, since we already have the ones we are interested in, they can simply be returned to the template as they are.
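As a rough sketch of where this is heading (the following list on the user’s profile is hypothetical and not part of the code so far), such a restricted publication on the server could look something like this:

// server/server.js (sketch only)
Meteor.publish('ribbits', function () {
  var user = Meteor.users.findOne({_id: this.userId});
  var following = (user && user.profile && user.profile.following) || [];

  // Publish the current user's ribbits plus those of the users they follow
  return Ribbits.find({user_id: {$in: following.concat(this.userId)}});
});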

Creating a Ribbit

When adding a ribbit, there are two main tasks that should be completed:

  • First we should insert a record into the ribbits collection.
  • Next, update the buddies page with the following information: number of
    ribbits, the last ribbit in the “Your Ribbit Profile” section, and the last
    ribbit, which could have been posted by any user, along with its author and the time elapsed since it was posted.

It’s now time to write a little bit of server code. Let’s start with the following:

// server/server.js
Ribbits = new Meteor.Collection('ribbits');
Meteor.publish('ribbits', function () {
  return Ribbits.find({});
});

In the above code, we create a Meteor collection instance for the ribbits collection and publish all of the ribbits data to the client.

Now for the client side code:

// client/client.js
Ribbits = new Meteor.Collection('ribbits');
Meteor.subscribe('ribbits');

...

// handling the click event on the Ribbit button
Template.buddiescontent.events({
  'click #createTheRibbit': function (event, template) {
    var ribbitContent = template.find('.ribbitText').value;

    Ribbits.insert({
      user_id: Meteor.user()._id,
      ribbit: ribbitContent,
      created_at: new Date()
    });
    template.find('.ribbitText').value = "";
  }
});

Just like the server side, the client also holds a Ribbits instance of the ribbits collection, in order to support the concept of “Data everywhere”. Additionally, it must also subscribe to the data published by the server in order to make sure that modifications done on one instance of the application are replicated everywhere.

After that, the click event for the Ribbit! button is handled like so: the data inserted into the text field is read and inserted into the ribbits collection. Afterwards, the content of the text field is set to an empty string.

// client/main.js
ribbits: function () {
  return Ribbits.find({}, {sort: {created_at: -1}});
},

buddyFullName: function (ribbitUserId) {
  var theUser = Meteor.users.findOne({_id: ribbitUserId});
  return theUser.profile.name;
},

buddyUserName: function (ribbitUserId) {
  var theUser = Meteor.users.findOne({_id: ribbitUserId});
  return theUser.username;
},

elapsedTime: function (text) {
  var currentDate = new Date(),
    ribbitDate,
    minutes_elapsed,
    hours_elapsed,
    days_elapsed,
    retVal,
    record = Ribbits.findOne({ribbit: text});

  ribbitDate = new Date(record.created_at);
  minutes_elapsed = (currentDate - ribbitDate) / 60000;
  if (minutes_elapsed > 60) {
    hours_elapsed = minutes_elapsed / 60;
    if (hours_elapsed > 24) {
      days_elapsed = hours_elapsed / 24;
      retVal = parseInt(days_elapsed, 10) + "d";
    } else {
      retVal = parseInt(hours_elapsed, 10) + "h";
    }
  } else {
    retVal = parseInt(minutes_elapsed, 10) + "m";
  }
  return retVal;
}

Now the above code injects dynamic data into the Handlebars’ template variables. Allow me to explain:

  • The ribbits value is replaced by a collection instance with all of the values stored in the database, in the reverse order of creation.
  • The buddyFullName is returned by searching the users collection for a user with the same id as the current ribbit’s user. Notice that the ribbitUserId is a parameter received by the function from the template.
  • The buddyUserName is obtained in a similar manner as the buddyFullName.
  • The elapsedTime is calculated based on the current time and the time of when a ribbit was originally created.

The buddies template now looks like the following:

<!-- client/buddies.html -->
<div class="panel left">
  <h1>Your Ribbit Buddies</h1>
  {{#each ribbits}}
  <div class="ribbitWrapper">
    <img class="avatar" src="gfx/user2.png">
    <span class="name">{{buddyFullName user_id}}</span> @{{buddyUserName user_id}} <span class="time">{{elapsedTime ribbit}}</span>
    <p>
    {{ribbit}}
    </p>
  </div>
  {{/each}}
</div>

What is interesting to notice here, is that the buddyFullName is followed by the user_id variable taken from the ribbits collection. This is the parameter from the function we described above.


Step 6: Creating the Public Ribbits Page

This task shouldn’t be a show stopper, since we already did something similar with the buddies page. What needs to be done here is to take the public.html file from the downloaded assets and turn it into a template. We then hook the data that we want displayed on the page to some JavaScript functions, which take that data from the database and return it to the template.

First, let’s see the template file:

<!-- client/public.html -->
<template name="public">
  <div id="content">
    <div class="wrapper">
      <div class="panel right">
        <h1>Create a Ribbit</h1>
        <p>
          <textarea name="text" class="ribbitText"></textarea>
          <input type="submit" value="Ribbit!">
        </p>
      </div><!-- panel right -->
      <div id="ribbits" class="panel left">
        <h1>Public Ribbits</h1>
        {{#each ribbits}}
          <div class="ribbitWrapper">
            <img class="avatar" src="gfx/user2.png">
            <span class="name">{{publicUserFull user_id}}</span> @{{publicUserName user_id}} <span class="time">{{elapsedTime ribbit}}</span>
            <p>
              {{ribbit}}
            </p>
          </div><!-- ribbitWrapper -->
        {{/each}}
      </div><!-- panel left -->
    </div><!-- wrapper -->
  </div><!-- content -->
</template>

Nothing spectacular here: in the Public Ribbits section we display publicUserFull and publicUserName, and the ribbit’s user_id is passed to the helper functions behind them.

The JavaScript code looks like the following:

// client/client.js
...
Template.public.helpers({
  ribbits: function () {
    return Ribbits.find({}, {sort: {created_at: -1}});
  },

  publicUserFull: function (currentRibbitId) {
    var theUser = Meteor.users.findOne({_id: currentRibbitId});

    return theUser.profile.name;
  },

  publicUserName: function (currentRibbitId) {
    var theUser = Meteor.users.findOne({_id: currentRibbitId});

    return theUser.username;
  },

  elapsedTime: function (text) {
    var currentDate = new Date(),
      ribbitDate,
      minutes_elapsed,
      hours_elapsed,
      days_elapsed,
      retVal,
      record = Ribbits.findOne({ribbit: text});

    ribbitDate = new Date(record.created_at);
    minutes_elapsed = (currentDate - ribbitDate) / 60000;
    if (minutes_elapsed > 60) {
      hours_elapsed = minutes_elapsed / 60;
      if (hours_elapsed > 24) {
        days_elapsed = hours_elapsed / 24;
        retVal = parseInt(days_elapsed, 10) + "d";
      } else {
        retVal = parseInt(hours_elapsed, 10) + "h";
      }
    } else {
      retVal = parseInt(minutes_elapsed, 10) + "m";
    }
    return retVal;
  }
});
...

Notice that all of the functions in this code fragment are similar to the ones defined in the buddies part of the file. We have massive code repetition going on here, which is not ideal, but for the moment let’s leave it as is in order to confirm that everything works; one possible way to share these helpers is sketched below, and we’ll refactor this properly in a later step.
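As a rough illustration of that refactor (a minimal sketch, not the tutorial’s final code, and assuming the buddies helpers are registered on a template named buddiescontent, as the {{> buddiescontent}} partial later suggests), the duplicated user lookups could be pulled out into plain functions and reused by both helper maps:

// client/client.js -- hypothetical refactor sketch
var userFullName = function (userId) {
  var theUser = Meteor.users.findOne({_id: userId});
  return theUser && theUser.profile.name;
};

var userName = function (userId) {
  var theUser = Meteor.users.findOne({_id: userId});
  return theUser && theUser.username;
};

Template.buddiescontent.helpers({
  buddyFullName: userFullName,
  buddyUserName: userName
  // ribbits, elapsedTime, ... stay as before (elapsedTime could be shared the same way)
});

Template.public.helpers({
  publicUserFull: userFullName,
  publicUserName: userName
  // ribbits, elapsedTime, ...
});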


Step 7: Display the Public Ribbits Page

So we have a functional public Ribbits page, but there’s no way to display it. This is what we’ll fix in this step.

In order to perform this task, we will use Meteor’s reactive data concept. We already saw this in action on the buddies page; when a new ribbit is added, it is automatically displayed on the page.

What we want to achieve is that whenever a user clicks the Public Ribbits link in the header, the currently displayed page switches to the Public Ribbits page. So we need a currentPage variable that we can change on demand, so that it always points to the desired page.

We will use Meteor’s Session global object, which holds an arbitrary set of key-value pairs. Under a currentPage key we’ll store a string naming the page that we want to display. Then, when a link is clicked, we change the currentPage value as appropriate to display the new page.
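The useful part is that Session is reactive: any template helper that reads a key with Session.get() is rerun automatically whenever that key is changed with Session.set(), so the page switches without any manual re-rendering. As a tiny illustration (not part of the app’s code):

// Session is a reactive key-value store.
Session.set("currentPage", "public");
Session.get("currentPage"); // => "public"
// A helper that calls Session.get("currentPage") is recomputed,
// and its template re-rendered, every time the value changes.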

Let’s get started on the code. First, the main template file, index.html:

<!-- client/index.html -->
<head>

</head>

<body>
  <header>
    {{> header}}
  </header>
  {{> content}}
  <footer>
    {{> footer}}
  </footer>
</body>

<template name="content">
  {{#if currentUser}}
    {{#if currentPage "buddies"}}
      {{> buddiescontent}}
    {{/if}}

    {{#if currentPage "public"}}
      {{> public}}
    {{/if}}

    {{#if currentPage "profile"}}
      {{> profile}}
    {{/if}}
  {{else}}
    {{> homecontent}}
  {{/if}}
</template>

The changes in this file are small. A new template named content was added and in this template the value of the currentPage variable is checked and the appropriate sub-template is rendered.

Let’s see how this reflects in the JavaScript code:

// client/main.js
Template.content.helpers({
  currentPage: function (type) {
    var thePage = Session.get("currentPage");
    return thePage === type;
  }
});

A helper for the content template was added, containing a currentPage function, which returns a boolean by checking the parameter sent to it against the value of currentPage stored in Meteor’s Session. The function returns true if they are the same and false if they are not. This way, the template can check the result to decide which sub-template is rendered.

// client/client.js
Session.set("currentPage", "buddies");

In the client.js file, the value of currentPage is initialized at application startup to "buddies", so by default, if a user is logged in, the buddies page is displayed.

// client/client.js
...
'click #public': function (event, template) {
  Session.set("currentPage", "public");
},

'click #buddies': function (event, template) {
  Session.set("currentPage", "buddies");
},
...

The header events object above was updated by adding two click handlers, one for the Public Ribbits link and one for the Your Buddies link, so that the user can navigate to these pages.

As you can see, the only thing these handlers do is set the Session value of currentPage to the desired page.

And that’s it. Now it’s possible to switch between buddies and the public page using the links in the page’s header.


Step 8: Creating the Profiles Page

As you might have noticed, there is a pattern we’ve followed throughout this tutorial while creating each new page: we start with the implementation of the page’s template, then we add the necessary JavaScript code to make the page dynamic, and lastly we add some event handlers for the actions the user may take on the page. Let’s follow this same pattern now to create the profiles page.

The Profile Template

The profile template code should look like this:

<template name="profile">
  <div id="content">
    <div class="wrapper">
      <div class="panel right">
        <h1>Search for profiles</h1>
        <p>
          <input name="query" type="text">
          <input type="submit" value="Search!">
        </p>
      </div>
      <div id="ribbits" class="panel left">
        <h1>Public Profiles</h1>
        {{#each users}}
          <div class="ribbitWrapper">
            <img class="avatar" src="{{gravatarLink _id}}">
            <span class="name">{{profile.name}}</span> @{{username}} <span class="time">{{noOfFollowers _id}}<a href="#" id='{{_id}}' class='follow'>{{followText _id}}</a></span>
            <p id='last-ribbit'>
              {{lastRibbit _id}}
            </p>
          </div>
        {{/each}}
      </div>
    </div>
  </div>
</template>

For this template, our page is split into two panels: the right panel allows searching for a specific user profile, and on the left panel we have a list of registered users. Also on the left panel, the logged in user can follow or unfollow another user by clicking the follow/unfollow link.

Since the left panel’s code looks more interesting, let’s start analyzing it first: after the header part, we iterate over the Meteor users collection using a Handlebars #each construction.

For each user we display the following information:

  • The user’s gravatar (retrieved just as we did on the buddies page).
  • The user’s name, taken from the MongoDB database.
  • The user’s username.
  • The user’s number of followers.
  • The follow/unfollow link.
  • The user’s last ribbit.

As for the right panel, this code contains a form that allows the user to search for a specific profile.

Now, let’s move on to the second step of our pattern by adding in the JavaScript.

Getting the Profile Data

The first thing the template needs is the users collection. Let’s take a look at the code that returns this collection:

users: function () {
  if (Session.get('searchedName') !== undefined) {
    return Meteor.users.find({
      $and: [
        {_id: {$ne: Meteor.userId()}},
        {username: Session.get('searchedName')}
      ]
    });
  } else {
    return Meteor.users.find({_id: {$ne: Meteor.userId()}});
  }
},

The above code takes care of the profile search functionality for our app. In short, the collection returned to the template is filtered to contain only the searched user; if no search was performed, all of the users are returned, except the currently logged in user.

Let me explain this in more detail: when a search for a specific user profile is performed (as we will see later), we set a session variable named searchedName, which holds the name that the user is searching for. We then filter the users collection using Mongo’s $and operator so that it only returns users that are not the currently logged in user and whose username matches the searchedName. I have to admit that I am simplifying things a bit here, since I could implement a “like”-style clause instead, to return all users with a username similar to searchedName (a possible version of this is sketched below). But since this is not a Mongo tutorial, I’m going to leave that as an additional exercise for you to try out on your own.

By default, when no search was performed, all of the users except the currently logged in user are displayed, which is handled in the else clause.
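For the curious, such a fuzzy search could be approximated with a case-insensitive regular expression in place of the exact username match. This is only an illustration of the idea, not part of the tutorial’s code:

// Hypothetical variant of the users helper: match usernames containing the search term.
return Meteor.users.find({
  $and: [
    {_id: {$ne: Meteor.userId()}},
    {username: {$regex: Session.get('searchedName'), $options: 'i'}}
  ]
});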

As for obtaining the gravatar image, the code is similar to what we used in the buddies template, so I won’t describe it again here.

The same also applies for the name, username, and noOfFollowers.

Now for the followText part, we are going to do things a little differently. The behavior we want here is to display the text ‘follow’ if the currently logged in user is not already following the displayed user. Otherwise, we want to let the logged in user stop following that user, by displaying the text ‘unfollow’.

The code looks like the following:

followText: function (userId) {
  var followee = Follows.findOne({
    $and: [
      {followee_id: Meteor.userId()},
      {user_id: userId}
    ]
  });
  if (followee) {
    return 'unfollow';
  } else {
    return 'follow';
  }
},

This function receives the userId parameter from the template and searches the Follows collection for a record whose followee_id matches the currently logged in user’s id and whose user_id matches the id of the displayed user. If such a record exists, it means that user is already being followed by the logged in user, so we return the ‘unfollow’ string; otherwise we return the ‘follow’ string.

Actions on the Profile Page

There are really only two actions a user can take on the Profile page: searching for other user profiles by clicking the Search! button, and following or unfollowing another user by clicking the appropriate link.

Let’s tackle them one by one:

Searching a User

'click input[type="submit"]': function(event, template) {
  var searchedUser = template.find('input[type="text"]').value;
  if (searchedUser !== "") {
    Session.set('searchedName', searchedUser);
  } else {
    Session.set('searchedName', undefined);
  }
  Template.profile();
}

When the user clicks on our Search! button (the input field with a type of submit), we read the name entered into the text field and set a session variable named searchedName to hold this value. We already saw in the previous step how this variable is used to perform a search in the database and return a subset of the users collection, containing only users with that username.

Following a User

'click .follow': function(event, template) {
  var isFollowed = false, // assume not followed until we find a matching record
    theClickedUserId = event.currentTarget.id,
    theFollowees = Follows.find({user_id: theClickedUserId});

  theFollowees.forEach(function (theFollowee) {
    if (theFollowee.followee_id === Meteor.userId()) {
      isFollowed = true;
    }
  });

  if (!isFollowed) {
    Follows.insert({
      user_id: theClickedUserId,
      followee_id: Meteor.userId()
    });
  } else {
    Follows.remove({
      $and: [
        {user_id: theClickedUserId},
        {followee_id: Meteor.userId()}
      ]
    });
  }
},

We start off with the theClickedUserId variable, which stores the id of the user who was clicked on. Then we search the Follows collection for all of the follow records for that user and store the result in the theFollowees variable.

The next step is to loop through theFollowees and check whether any record has a followee_id equal to the currently logged in user’s id. If one does, the logged in user is already following the clicked user.

Finally, if the user is not yet followed, we insert a record into the Follows collection, which results in following that user; otherwise, we remove the record from the Follows collection to unfollow them.


Conclusion

This concludes the Profile page implementation and also this tutorial. I hope you found it interesting, and that you’ve seen how working with Meteor changes the way JavaScript applications are developed. If you have any questions, feel free to ask away in the comments below!

June 14 2013

01:33

Combining Laravel 4 and Backbone

For this tutorial, we’re going to be building a single page app using Laravel 4 and Backbone.js. Both frameworks make it very easy to use a templating engine other than their respective default, so we’re going to use Mustache, which is an engine common to both. By using the same templating language on both sides of our application, we’ll be able to share our views between them, saving us from having to repeat our work multiple times.

Our Backbone app will be powered by a Laravel 4 JSON API which we’ll develop together. Laravel 4 comes with some new features that make the development of this API very easy. I’ll show you a few tricks along the way to allow you to stay a bit more organized.

All of our dependencies will be managed by package managers; there will be no manual downloading or updating of libraries for this application! In addition, I’ll be showing you how to leverage a little extra power from some of our dependencies.

For this project we’ll be using Laravel 4, Backbone.js, Mustache (on both the server and the client), and Twitter Bootstrap.

To complete this tutorial, you’ll need the following items installed:

  • Composer: You can download this from the homepage, I recommend the global install instructions located here.
  • Node + NPM: the installer on the homepage will install both items.
  • LESS Compiler: If you’re on a Mac, I recommend CodeKit. However, regardless of your operating system, or if you do not feel like paying for CodeKit, you can just install the LESS Compiler for Node.js by typing npm install -g less at the command prompt.

Part 1: The Base Architecture

First things first, we need to get our application setup before we can begin adding our business logic to it. We’ll do a basic setup of Laravel 4 and get all of our dependencies installed using our Package Managers.

Git

Let's start by creating a git repository to work in. For your reference, this entire repo will be made publicly available at https://github.com/conarwelsh/nettuts-laravel4-and-backbone.

mkdir project && cd project
git init

Laravel 4 Installation

Laravel 4 uses Composer to install all of its dependencies, but first we’ll need an application structure to install into. The “develop” branch on Laravel’s GitHub repository is the home for this application structure. At the time this article was written, Laravel 4 was still in beta, so I needed to be prepared for this structure to change at any time; by adding Laravel as a remote repository, we can pull in those changes whenever we need to. In fact, while something is in beta, it’s good practice to run these commands after each composer update. (Laravel 4 has since been released as stable.)

git remote add laravel https://github.com/laravel/laravel
git fetch laravel
git merge laravel/develop
git add . && git commit -am "commit the laravel application structure"

So we have the application structure, but all of the library files that Laravel needs are not yet installed. You’ll notice at the root of our application there’s a file called composer.json. This is the file that will keep track of all the dependencies that our Laravel application requires. Before we tell Composer to download and install them, let's first add a few more dependencies that we’re going to need. We’ll be adding:

  • Jeffrey Way's Generators: Some very useful commands to greatly improve our workflow by automatically generating file stubs for us.
  • Laravel 4 Mustache: This will allow us to seamlessly use Mustache.php in our Laravel project, just as we would Blade.
  • Twitter Bootstrap: We’ll use the LESS files from this project to speed up our front-end development.
  • PHPUnit: We’ll be doing some TDD for our JSON API, PHPUnit will be our testing engine.
  • Mockery: Mockery will help us "mock" objects during our testing.

PHPUnit and Mockery are only required in our development environment, so we’ll specify that in our composer.json file.


composer.json

{
  "require": {
    "laravel/framework": "4.0.*",
    "way/generators": "dev-master",
    "twitter/bootstrap": "dev-master",
    "conarwelsh/mustache-l4": "dev-master"
  },
  "require-dev": {
    "phpunit/phpunit": "3.7.*",
    "mockery/mockery": "0.7.*"
  },
  "autoload": {
    "classmap": [
      "app/commands",
      "app/controllers",
      "app/models",
      "app/database/migrations",
      "app/database/seeds",
      "app/tests/TestCase.php"
    ]
  },
  "scripts": {
    "post-update-cmd": "php artisan optimize"
  },
  "minimum-stability": "dev"
}

Now we just need to tell Composer to do all of our leg work! Below, notice the --dev switch: we’re telling Composer that we’re in our development environment and that it should also install all of the dependencies listed in "require-dev".

composer install --dev

After that finishes installing, we’ll need to inform Laravel of a few of our dependencies. Laravel uses "service providers" for this purpose. These service providers basically tell Laravel how the package’s code will interact with the application, and run any necessary setup procedures. Open up app/config/app.php and add the following two items to the "providers" array. Not all packages require this, only those that enhance or change the functionality of Laravel.


app/config/app.php

...

'Way\Generators\GeneratorsServiceProvider',
'Conarwelsh\MustacheL4\MustacheL4ServiceProvider',

...

Lastly, we just need to do some generic application tweaks to complete our Laravel installation. Let's open up bootstrap/start.php and tell Laravel our machine name so that it can determine what environment it’s in.


bootstrap/start.php

/*
|--------------------------------------------------------------------------
| Detect The Application Environment
|--------------------------------------------------------------------------
|
| Laravel takes a dead simple approach to your application environments
| so you can just specify a machine name or HTTP host that matches a
| given environment, then we will automatically detect it for you.
|
*/

$env = $app->detectEnvironment(array(

  'local' => array('your-machine-name'),

));

Replace "your-machine-name" with whatever the hostname for your machine is. If you are unsure of what your exact machine name is, you can just type hostname at the command prompt (on Mac or Linux), whatever it prints out is the value that belongs in this setting.

We want our views to be able to be served to our client from a web request. Currently, our views are stored outside of our public folder, which would mean that they are not publicly accessible. Luckily, Laravel makes it very easy to move or add other view folders. Open up app/config/view.php and change the paths setting to point to our public folder. This setting works like the PHP native include path, it will check in each folder until it finds a matching view file, so feel free to add several here:


app/config/view.php

'paths' => array(__DIR__.'/../../public/views'),

Next you will need to configure your database. Open up app/config/database.php and add in your database settings.

Note: It is recommended to use 127.0.0.1 instead of localhost. You get a bit of a performance boost on most systems, and with some system configurations, localhost will not even connect properly.

Finally, you just need to make sure that your storage folder is writable.

chmod -R 755 app/storage

Laravel is now installed, with all of its dependencies, as well as our own dependencies. Now let's setup our Backbone installation!

Just like our composer.json installed all of our server-side dependencies, we’ll create a package.json in our public folder to install all of our client-side dependencies.

For our client-side dependencies we’ll use:

  • Underscore.js: This is a dependency of Backbone.js, and a handy toolbelt of functions.
  • Backbone.js: This is our client-side MVC that we’ll use to build out our application.
  • Mustache.js: The JavaScript version of our templating library; by using the same templating language on both the client and the server, we can share views instead of duplicating logic (see the short illustration below).
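To make the benefit concrete, here is roughly how a Mustache template string renders on the client with Mustache.js. The template string and data below are made up for this illustration; the point is that the very same template text can also be rendered on the server through Mustache.php:

// Render the same template text the server uses, but in the browser.
var template = '<h1>{{ title }}</h1><p>{{ content }}</p>';
var html = Mustache.render(template, {
  title: 'Test Post',
  content: 'Hello from the client'
});
// html can now be injected into the DOM, for example by a Backbone view.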

public/package.json

{
  "name": "nettuts-laravel4-and-backbone",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "underscore": "*",
    "backbone": "*",
    "mustache": "*"
  }
}

Now just switch into your public folder and run npm install. After that completes, let’s switch back to our application root so we’re prepared for the rest of our commands.

cd public
npm install
cd ..

Package managers save us from a ton of work: should you want to update any of these libraries, all you have to do is run npm update or composer update. Also, should you want to lock any of these libraries to a specific version, all you have to do is specify the version number, and the package manager will handle the rest.
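For example, pinning Backbone in public/package.json could look like the following (the version number here is purely illustrative, not a recommendation):

{
  "dependencies": {
    "underscore": "*",
    "backbone": "1.0.0",
    "mustache": "*"
  }
}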

To wrap up our setup process we’ll just add in all of the basic project files and folders that we’ll need, and then test it out to ensure it all works as expected.

We’ll need to add the following folders:

  • public/views
  • public/views/layouts
  • public/js
  • public/css

And the following files:

  • public/css/styles.less
  • public/js/app.js
  • public/views/app.mustache

To accomplish this, we can use a one-liner:

mkdir public/views public/views/layouts public/js public/css && touch public/css/styles.less public/js/app.js public/views/app.mustache

Twitter Bootstrap also has two JavaScript dependencies that we’ll need, so let's just copy them from the vendor folder into our public folder. They are:

  • html5shiv.js: allows us to use HTML5 elements without fear of older browsers not supporting them
  • bootstrap.min.js: the supporting JavaScript libraries for Twitter Bootstrap

cp vendor/twitter/bootstrap/docs/assets/js/html5shiv.js public/js/html5shiv.js
cp vendor/twitter/bootstrap/docs/assets/js/bootstrap.min.js public/js/bootstrap.min.js

For our layout file, the Twitter Bootstrap also provides us with some nice starter templates to work with, so let's copy one into our layouts folder for a head start:

cp vendor/twitter/bootstrap/docs/examples/starter-template.html public/views/layouts/application.blade.php

Notice that I am using a Blade extension here; this could just as easily be a Mustache template, but I wanted to show you how easy it is to mix the templating engines. Since our layout will be rendered on page load and will not need to be re-rendered by the client, we are safe to use PHP here exclusively. If for some reason you find yourself needing to render this file on the client-side, you would want to switch this file to use the Mustache templating engine instead.

Now that we have all of our basic files in place, let's add some starter content that we can use to test that everything is working as we would expect. I’m providing you with some basic stubs to get you started.


public/css/styles.less

We’ll just import the Twitter Bootstrap files from the vendor directory as opposed to copying them. This allows us to update Twitter Bootstrap with nothing but a composer update.

We declare our variables at the end of the file; the LESS compiler will figure out the value of all of its variables before parsing the LESS into CSS. This means that by re-defining a Twitter Bootstrap variable at the end of the file, the value will actually change for all of the included files, allowing us to do simple overrides without modifying the Twitter Bootstrap core files.

/**
 * Import Twitter Bootstrap Base File
 ******************************************************************************************
 */
@import "../../vendor/twitter/bootstrap/less/bootstrap";
/**
 * Define App Styles
 * Do this before the responsive include, so that it can override properly as needed.
 ******************************************************************************************
 */
body {
  padding-top: 60px; /* 60px to make the container go all the way to the bottom of the topbar */
}

/* this will set the position of our alerts */
#notifications {
  width: 300px;
  position: fixed;
  top: 50px;
  left: 50%;
  margin-left: -150px;
  text-align: center;
}

/**
 * Import Bootstrap's Responsive Overrides
 * now we allow bootstrap to set the overrides for a responsive layout
 ******************************************************************************************
 */
@import "../../vendor/twitter/bootstrap/less/responsive";
/**
 * Define our variables last, any variable declared here will be used in the includes above
 * which means that we can override any of the variables used in the bootstrap files easily
 * without modifying any of the core bootstrap files
 ******************************************************************************************
 */

// Scaffolding
// -------------------------
@bodyBackground:    #f2f2f2;
@textColor:       #575757;

// Links
// -------------------------
@linkColor:       #41a096;

// Typography
// -------------------------
@sansFontFamily:    Arial, Helvetica, sans-serif;

public/js/app.js

Now we’ll wrap all of our code in an immediately-invoked anonymous function that passes in a few global objects. We’ll then alias these global objects to something more useful to us. Also, we’ll cache a few jQuery objects inside the document ready function.

//alias the global object
//alias jQuery so we can potentially use other libraries that utilize $
//alias Backbone to save us on some typing
(function(exports, $, bb){

  //document ready
  $(function(){

    /**
     ***************************************
     * Cached Globals
     ***************************************
     */
    var $window, $body, $document;

    $window  = $(window);
    $document = $(document);
    $body   = $('body');
  });//end document ready

}(this, jQuery, Backbone));

public/views/layouts/application.blade.php

Next is just a simple HTML layout file. We are, however, using Laravel’s asset helper to aid us in creating paths to our assets. It is good practice to use this type of helper, because if you ever happen to move your project into a sub-folder, all of your links will still work.

We made sure that we included all of our dependencies in this file, and also added the jQuery dependency. I chose to request jQuery from the Google CDN, because chances are the visiting user of this site will already have a copy from that CDN cached in their browser, saving us from having to complete the HTTP request for it.

One important thing to note here is the way in which we are nesting our view. Mustache does not have Block Sections like Blade does, so instead, the contents of the nested view will be made available under a variable with the name of the section. I will point this out when we render this view from our route.

<!DOCTYPE html>
<html lang="en">
<head>
 <meta charset="utf-8">
 <title>Laravel4 & Backbone | Nettuts</title>
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <meta name="description" content="A single page blog built using Backbone.js, Laravel, and Twitter Bootstrap">
 <meta name="author" content="Conar Welsh">

 <link href="{{ asset('css/styles.css') }}" rel="stylesheet">

 <!-- HTML5 shim, for IE6-8 support of HTML5 elements -->
 <!--[if lt IE 9]>
 <script src="{{ asset('js/html5shiv.js') }}"></script>
 <![endif]-->
</head>
<body>

 <div id="notifications">
 </div>

 <div class="navbar navbar-inverse navbar-fixed-top">
  <div class="navbar-inner">
   <div class="container">
    <button type="button" class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
     <span class="icon-bar"></span>
     <span class="icon-bar"></span>
     <span class="icon-bar"></span>
    </button>
    <a class="brand" href="#">Nettuts Tutorial</a>
    <div class="nav-collapse collapse">
     <ul class="nav">
      <li class="active"><a href="#">Blog</a></li>
     </ul>
    </div><!--/.nav-collapse -->
   </div>
  </div>
 </div>

 <div class="container" data-role="main">
  {{--since we are using mustache as the view, it does not have a concept of sections like blade has, so instead of using @yield here, our nested view will just be a variable that we can echo--}}

  {{ $content }}

 </div> <!-- /container -->

 <!-- Placed at the end of the document so the pages load faster -->
 <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script> <!-- use Google CDN for jQuery to hopefully get a cached copy -->
 <script src="{{ asset('node_modules/underscore/underscore-min.js') }}"></script>
 <script src="{{ asset('node_modules/backbone/backbone-min.js') }}"></script>
 <script src="{{ asset('node_modules/mustache/mustache.js') }}"></script>
 <script src="{{ asset('js/bootstrap.min.js') }}"></script>
 <script src="{{ asset('js/app.js') }}"></script>
 @yield('scripts')
</body>
</html>

public/views/app.mustache

Next is just a simple view that we’ll nest into our layout.

<dl>
  <dt>Q. What did Biggie say when he watched inception?</dt>
  <dd>A. "It was all a dream!"</dd>
</dl>

app/routes.php

Laravel should have already provided you with a default route; all we’re doing here is changing the name of the view that this route is going to render.

Remember from above, when I told you that the nested view would be made available under a variable named after its section? Well, when you nest a view, the first parameter to the nest function is that section name:

View::make('view.path')->nest($sectionName, $nestedViewPath, $viewVariables);

In our nest command we called the section "content"; that means if we echo $content from our layout, we’ll get the rendered contents of that view. If we were to do return View::make('layouts.application')->nest('foobar', 'app'); then our nested view would be available under a variable named $foobar.

<?php

//backbone app route
Route::get('/', function()
{
  //change our view name to the view we created in a previous step
  //notice that we do not need to provide the .mustache extension
  return View::make('layouts.application')->nest('content', 'app');
});

With all of our basic files in place, we can test to ensure everything went OK. Laravel 4 utilizes PHP’s built-in web server to provide us with a great little development environment. So long to the days of having a million virtual hosts set up on your development machine for every project that you work on!

Note: make sure that you’ve compiled your LESS file first!
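If you went the Node route for LESS, a one-liner along these lines should do it (paths assume the files we created earlier; CodeKit users can simply let it watch the project instead):

lessc public/css/styles.less public/css/styles.css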

php artisan serve

If you followed along correctly, you should be laughing hysterically at my horrible sense of humor, and all of our assets should be properly included into the page.


Part 2: Laravel 4 JSON API

Now we’ll build the API that will power our Backbone application. Laravel 4 makes this process a breeze.

API Guidelines

First let's go over a few general guidelines to keep in mind while we build our API:

  • Status Codes: Responses should reply with proper status codes; fight the temptation to just place an { error: "this is an error message" } in the body of your response. Use the HTTP protocol to its fullest!

    • 200: success
    • 201: resource created
    • 204: success, but no content to return
    • 400: request not fulfilled //validation error
    • 401: not authenticated
    • 403: refusal to respond //wrong credentials, do not have permission (un-owned resource)
    • 404: not found
    • 500: other error
  • Resource Methods: Even though controllers will be serving different resources, they should still have very similar behavior. The more predictable your API is, the easier it is to implement and adopt.

    • index: Return a collection of resources.
    • show: Return a single resource.
    • create: Return a form. This form should detail out the required fields, validation, and labels as best as possible, as well as anything else needed to properly create a resource. Even though this is a JSON API, it is very useful to return a form here: both a computer and a person can parse through this form and very easily decipher which items are needed to fill it out successfully. This is a very easy way to “document” the needs of your API.
    • store: Store a new resource and return with the proper status code: 201.
    • edit: Return a form filled with the current state of a resource. This form should detail out the required fields, validation, and labels as best as possible, as well as anything else needed to properly edit a resource.
    • update: Update an existing resource and return with the proper status code.
    • delete: Delete an existing resource and return with the proper status code: 204.

Routing & Versioning

APIs are designed to be around for a while. This is not like your website, where you can change its functionality at the drop of a dime. If you have programs that use your API, they are not going to be happy with you if you change things around and their programs break. For this reason, it’s important that you use versioning.

We can always create a "version two" with additional, or altered functionality, and allow our subscribing programs to opt-in to these changes, rather than be forced.

Laravel provides us with route groups that are perfect for this. Place the following code ABOVE our first route:

<?php

//create a group of routes that will belong to APIv1
Route::group(array('prefix' => 'v1'), function()
{
  //... insert API routes here...
});

Generating Resources

We’re going to use Jeffrey Way's generators to generate our resources. When we generate a resource, it will create the following items for us:

  • Controller
  • Model
  • Views (index.blade.php, show.blade.php, create.blade.php, edit.blade.php)
  • Migration
  • Seeds

We’re only going to need two resources for this app: a Post resource and a Comment resource.

Note: in a recent update to the generators, I have been receiving a permissions error due to the way my web servers are set up. To remedy this problem, you must allow write permissions on the folder that the generators write their temp file to.

sudo chmod -R 755 vendor/way/generators/src/Way/

Run the generate:resource command

php artisan generate:resource post --fields="title:string, content:text, author_name:string"

php artisan generate:resource comment --fields="content:text, author_name:string, post_id:integer"

You should now pause for a second to investigate all of the files that the generator created for us.

Adjust the Generated Resources

The generate:resource command saved us a lot of work, but due to our unique configuration, we’re still going to need to make some modifications.

First of all, the generator placed the views it created in the app/views folder, so we need to move them to the public/views folder.

mv app/views/posts public/views/posts
mv app/views/comments public/views/comments

app/routes.php

We decided that we wanted our API to be versioned, so we’ll need to move the routes that the generator created for us into the version group. We’ll also want to namespace our controllers with the corresponding version, so that we can have a different set of controllers for each version we build. Also the comments resource needs to be nested under the posts resource.

<?php

//create a group of routes that will belong to APIv1
Route::group(array('prefix' => 'v1'), function()
{
  //... insert API routes here...
  Route::resource('posts', 'V1\PostsController'); //notice the namespace
  Route::resource('posts.comments', 'V1\PostsCommentsController'); //notice the namespace, and the nesting
});

//backbone app route
Route::get('/', function()
{
  //change our view name to the view we created in a previous step
  //notice that we do not need to provide the .mustache extension
  return View::make('layouts.application')->nest('content', 'app');
});

Since we namespaced our controllers, we should move them into their own folder for organization. Let’s create a folder named V1 and move our generated controllers into it. Also, since we nested our comments controller under the posts controller, let’s change the name of that controller to reflect the relationship.

mkdir app/controllers/V1
mv app/controllers/PostsController.php app/controllers/V1/
mv app/controllers/CommentsController.php app/controllers/V1/PostsCommentsController.php

We’ll need to update the controller files to reflect our changes as well. First of all, we need to namespace them, and since they are namespaced, any classes outside of that namespace will need to be manually imported with the use statement.

app/controllers/V1/PostsController.php

<?php
//use our new namespace
namespace V1;

//import classes that are not in this new namespace
use BaseController;

class PostsController extends BaseController {

app/controllers/V1/PostsCommentsController.php

We also need to update our CommentsController with our new name: PostsCommentsController

<?php
//use our new namespace
namespace V1;

//import classes that are not in this new namespace
use BaseController;

//rename our controller class
class PostsCommentsController extends BaseController {

Adding in Repositories

By default, repositories are not part of Laravel. Laravel is extremely flexible though, and makes it very easy to add them in. We’re going to use repositories to help us separate our logic for code re-usability, as well as for testing. For now we’ll just get set up to use repositories; we’ll add in the proper logic later.

Let’s make a folder to store our repositories in:

mkdir app/repositories

To let our auto-loader know about this new folder, we need to add it to our composer.json file. Take a look at the updated "autoload" section of our file, and you’ll see that we added in the repositories folder.

composer.json

{
  "require": {
    "laravel/framework": "4.0.*",
    "way/generators": "dev-master",
    "twitter/bootstrap": "dev-master",
    "conarwelsh/mustache-l4": "dev-master"
  },
  "require-dev": {
    "phpunit/phpunit": "3.7.*",
    "mockery/mockery": "0.7.*"
  },
  "autoload": {
    "classmap": [
      "app/commands",
      "app/controllers",
      "app/models",
      "app/database/migrations",
      "app/database/seeds",
      "app/tests/TestCase.php",
      "app/repositories"
    ]
  },
  "scripts": {
    "post-update-cmd": "php artisan optimize"
  },
  "minimum-stability": "dev"
}

Seeding Our Database

Database seeds are a useful tool; they provide us with an easy way to fill our database with some content. The generators provided us with base files for seeding; we merely need to add in some actual seeds.

app/database/seeds/PostsTableSeeder.php

<?php

class PostsTableSeeder extends Seeder {

  public function run()
  {
    $posts = array(
      array(
        'title'    => 'Test Post',
        'content'   => 'Lorem ipsum Reprehenderit velit est irure in enim in magna aute occaecat qui velit ad.',
        'author_name' => 'Conar Welsh',
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
      array(
        'title'    => 'Another Test Post',
        'content'   => 'Lorem ipsum Reprehenderit velit est irure in enim in magna aute occaecat qui velit ad.',
        'author_name' => 'Conar Welsh',
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
    );

    // Insert the seed data into the posts table
    DB::table('posts')->insert($posts);
  }

}

app/database/seeds/CommentsTableSeeder.php

<?php

class CommentsTableSeeder extends Seeder {

  public function run()
  {
    $comments = array(
      array(
        'content'   => 'Lorem ipsum Nisi dolore ut incididunt mollit tempor proident eu velit cillum dolore sed',
        'author_name' => 'Testy McTesterson',
        'post_id'   => 1,
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
      array(
        'content'   => 'Lorem ipsum Nisi dolore ut incididunt mollit tempor proident eu velit cillum dolore sed',
        'author_name' => 'Testy McTesterson',
        'post_id'   => 1,
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
      array(
        'content'   => 'Lorem ipsum Nisi dolore ut incididunt mollit tempor proident eu velit cillum dolore sed',
        'author_name' => 'Testy McTesterson',
        'post_id'   => 2,
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
    );

    // Insert the seed data into the comments table
    DB::table('comments')->insert($comments);
  }

}

Don’t forget to run composer dump-autoload to let the Composer autoloader know about the new files!

composer dump-autoload

Now we can run our migrations and seed the database. Laravel provides us with a single command to do both:

php artisan migrate --seed

Tests

Testing is one of those topics in development that no one can argue the importance of; however, most people tend to ignore it due to the learning curve. Testing is really not that difficult, and it can dramatically improve your application. For this tutorial, we’ll set up some basic tests to help us ensure that our API is functioning properly. We’ll build this API TDD style. The rules of TDD state that we are not allowed to write any production code until we have failing tests that warrant it. However, if I were to walk you through each test individually, this would prove to be a very long tutorial, so in the interest of brevity, I will just provide you with some tests to work from, and then the correct code to make those tests pass afterwards.

Before we write any tests though, we should first check the current test status of our application. Since we installed PHPUnit via composer, we have the binaries available to us to use. All you need to do is run:

vendor/phpunit/phpunit/phpunit.php

Whoops! We already have a failure! The failing test is actually an example test that comes pre-installed in our Laravel application structure; it tests against the default route that was also installed with the Laravel application structure. Since we modified this route, we cannot be surprised that the test failed. We can, however, just delete this test altogether, as it does not apply to our application.

rm app/tests/ExampleTest.php

If you run the PHPUnit command again, you will see that no tests were executed, and we have a clean slate for testing.

Note: if you have an older version of Jeffrey Way's generators, you may actually have a few tests in there that were created by those generators, and those tests are probably failing. Just delete or overwrite those tests with the ones found below to proceed.

For this tutorial we’ll be testing our controllers and our repositories. Let's create a few folders to store these tests in:

mkdir app/tests/controllers app/tests/repositories

Now for the test files. We’re going to use Mockery to mock our repositories for our controller tests. Mockery objects do as their name implies: they "mock" objects and report back to us on how those objects were interacted with.

In the case of the controller tests, we do not actually want the repositories to be called; after all, these are the controller tests, not the repository tests. So Mockery will set up objects for us to use instead of our repositories, and let us know whether or not those objects were called as we expected them to be.

In order to pull this off, we’ll have to tell the controllers to use our "mocked" objects as opposed to the real things. We’ll just tell our Application to use a mocked instance next time a certain class is requested. The command looks like this:

App::instance($classToReplace, $instanceOfClassToReplaceWith);

The overall mocking process will go something like this:

  • Create a new Mockery object, providing it the name of the class which it is to mock.
  • Tell the Mockery object which methods it should expect to receive, how many times it should receive that method, and what that method should return.
  • Use the command shown above to tell our Application to use this new Mockery object instead of the default.
  • Run the controller method like usual.
  • Assert the response.

app/tests/controllers/CommentsControllerTest.php

<?php

class CommentsControllerTest extends TestCase {

  /**
   ************************************************************************
   * Basic Route Tests
   * notice that we can use our route() helper here!
   ************************************************************************
   */

  //test that GET /v1/posts/1/comments returns HTTP 200
  public function testIndex()
  {
    $response = $this->call('GET', route('v1.posts.comments.index', array(1)) );
    $this->assertTrue($response->isOk());
  }

  //test that GET /v1/posts/1/comments/1 returns HTTP 200
  public function testShow()
  {
    $response = $this->call('GET', route('v1.posts.comments.show', array(1,1)) );
    $this->assertTrue($response->isOk());
  }

  //test that GET /v1/posts/1/comments/create returns HTTP 200
  public function testCreate()
  {
    $response = $this->call('GET', route('v1.posts.comments.create', array(1)) );
    $this->assertTrue($response->isOk());
  }

  //test that GET /v1/posts/1/comments/1/edit returns HTTP 200
  public function testEdit()
  {
    $response = $this->call('GET', route('v1.posts.comments.edit', array(1,1)) );
    $this->assertTrue($response->isOk());
  }

  /**
   *************************************************************************
   * Tests to ensure that the controller calls the repo as we expect
   * notice we are "Mocking" our repository
   *
   * also notice that we do not really care about the data or interactions
   * we merely care that the controller is doing what we are going to want
   * it to do, which is reach out to our repository for more information
   *************************************************************************
   */

  //ensure that the index function calls our repository's "findAll" method
  public function testIndexShouldCallFindAllMethod()
  {
    //create our new Mockery object with a name of CommentRepositoryInterface
    $mock = Mockery::mock('CommentRepositoryInterface');

    //inform the Mockery object that the "findAll" method should be called on it once
    //and return a string value of "foo"
    $mock->shouldReceive('findAll')->once()->andReturn('foo');

    //inform our application that we have an instance that it should use
    //whenever the CommentRepositoryInterface is requested
    App::instance('CommentRepositoryInterface', $mock);

    //call our controller route
    $response = $this->call('GET', route('v1.posts.comments.index', array(1)));

    //assert that the response is a boolean value of true
    $this->assertTrue(!! $response->original);
  }

  //ensure that the show method calls our repository's "findById" method
  public function testShowShouldCallFindById()
  {
    $mock = Mockery::mock('CommentRepositoryInterface');
    $mock->shouldReceive('findById')->once()->andReturn('foo');
    App::instance('CommentRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.comments.show', array(1,1)));
    $this->assertTrue(!! $response->original);
  }

  //ensure that our create method calls the "instance" method on the repository
  public function testCreateShouldCallInstanceMethod()
  {
    $mock = Mockery::mock('CommentRepositoryInterface');
    $mock->shouldReceive('instance')->once()->andReturn(array());
    App::instance('CommentRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.comments.create', array(1)));
    $this->assertViewHas('comment');
  }

  //ensure that the edit method calls our repository's "findById" method
  public function testEditShouldCallFindByIdMethod()
  {
    $mock = Mockery::mock('CommentRepositoryInterface');
    $mock->shouldReceive('findById')->once()->andReturn(array());
    App::instance('CommentRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.comments.edit', array(1,1)));
    $this->assertViewHas('comment');
  }

  //ensure that the store method should call the repository's "store" method
  public function testStoreShouldCallStoreMethod()
  {
    $mock = Mockery::mock('CommentRepositoryInterface');
    $mock->shouldReceive('store')->once()->andReturn('foo');
    App::instance('CommentRepositoryInterface', $mock);

    $response = $this->call('POST', route('v1.posts.comments.store', array(1)));
    $this->assertTrue(!! $response->original);
  }

  //ensure that the update method should call the repository's "update" method
  public function testUpdateShouldCallUpdateMethod()
  {
    $mock = Mockery::mock('CommentRepositoryInterface');
    $mock->shouldReceive('update')->once()->andReturn('foo');
    App::instance('CommentRepositoryInterface', $mock);

    $response = $this->call('PUT', route('v1.posts.comments.update', array(1,1)));
    $this->assertTrue(!! $response->original);
  }

  //ensure that the destroy method should call the repositories "destroy" method
  public function testDestroyShouldCallDestroyMethod()
  {
    $mock = Mockery::mock('CommentRepositoryInterface');
    $mock->shouldReceive('destroy')->once()->andReturn(true);
    App::instance('CommentRepositoryInterface', $mock);

    $response = $this->call('DELETE', route('v1.posts.comments.destroy', array(1,1)));
    $this->assertTrue( empty($response->original) );
  }
}

app/tests/controllers/PostsControllerTest.php

Next, we’ll follow the exact same procedure for the PostsController tests

<?php

class PostsControllerTest extends TestCase {

  /**
   * Test Basic Route Responses
   */
  public function testIndex()
  {
    $response = $this->call('GET', route('v1.posts.index'));
    $this->assertTrue($response->isOk());
  }

  public function testShow()
  {
    $response = $this->call('GET', route('v1.posts.show', array(1)));
    $this->assertTrue($response->isOk());
  }

  public function testCreate()
  {
    $response = $this->call('GET', route('v1.posts.create'));
    $this->assertTrue($response->isOk());
  }

  public function testEdit()
  {
    $response = $this->call('GET', route('v1.posts.edit', array(1)));
    $this->assertTrue($response->isOk());
  }

  /**
   * Test that controller calls repo as we expect
   */
  public function testIndexShouldCallFindAllMethod()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('findAll')->once()->andReturn('foo');
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.index'));
    $this->assertTrue(!! $response->original);
  }

  public function testShowShouldCallFindById()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('findById')->once()->andReturn('foo');
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.show', array(1)));
    $this->assertTrue(!! $response->original);
  }

  public function testCreateShouldCallInstanceMethod()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('instance')->once()->andReturn(array());
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.create'));
    $this->assertViewHas('post');
  }

  public function testEditShouldCallFindByIdMethod()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('findById')->once()->andReturn(array());
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('GET', route('v1.posts.edit', array(1)));
    $this->assertViewHas('post');
  }

  public function testStoreShouldCallStoreMethod()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('store')->once()->andReturn('foo');
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('POST', route('v1.posts.store'));
    $this->assertTrue(!! $response->original);
  }

  public function testUpdateShouldCallUpdateMethod()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('update')->once()->andReturn('foo');
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('PUT', route('v1.posts.update', array(1)));
    $this->assertTrue(!! $response->original);
  }

  public function testDestroyShouldCallDestroyMethod()
  {
    $mock = Mockery::mock('PostRepositoryInterface');
    $mock->shouldReceive('destroy')->once()->andReturn(true);
    App::instance('PostRepositoryInterface', $mock);

    $response = $this->call('DELETE', route('v1.posts.destroy', array(1)));
    $this->assertTrue( empty($response->original) );
  }

}

app/tests/repositories/EloquentCommentRepositoryTest.php

Now for the repository tests. In writing our controller tests, we pretty much already decided what most of the interface should look like for the repositories. Our controllers needed the following methods:

  • findById($id)
  • findAll()
  • instance($data)
  • store($data)
  • update($id, $data)
  • destroy($id)

The only other method that we’ll want to add here is a validate method. This will mainly be a private method for the repository to ensure that the data is safe to store or update.

For these tests, we’re also going to add a setUp method, which allows us to run some code prior to the execution of each test. Our setUp method will be a very simple one: we’ll just make sure that any setUp methods defined in parent classes are also called, using parent::setUp(), and then add a class variable that stores an instance of our repository.

We’ll use the power of Laravel's IoC container again to get an instance of our repository. The App::make() command will return an instance of the requested class. Now, it may seem strange that we do not just do $this->repo = new EloquentCommentRepository(), but hold that thought; we’ll come back to it momentarily. You probably also noticed that we’re asking for a class called EloquentCommentRepository, yet in our controller tests above we were calling our repository CommentRepositoryInterface… put this thought on the back-burner as well… explanations for both are coming, I promise!

<?php

class EloquentCommentRepositoryTest extends TestCase {

  public function setUp()
  {
    parent::setUp();
    $this->repo = App::make('EloquentCommentRepository');
  }

  public function testFindByIdReturnsModel()
  {
    $comment = $this->repo->findById(1,1);
    $this->assertTrue($comment instanceof Illuminate\Database\Eloquent\Model);
  }

  public function testFindAllReturnsCollection()
  {
    $comments = $this->repo->findAll(1);
    $this->assertTrue($comments instanceof Illuminate\Database\Eloquent\Collection);
  }

  public function testValidatePasses()
  {
    $reply = $this->repo->validate(array(
      'post_id'   => 1,
      'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.',
      'author_name' => 'Testy McTesterson'
    ));

    $this->assertTrue($reply);
  }

  public function testValidateFailsWithoutContent()
  {
    try {
      $reply = $this->repo->validate(array(
        'post_id'   => 1,
        'author_name' => 'Testy McTesterson'
      ));
    }
    catch(ValidationException $expected)
    {
      return;
    }

    $this->fail('ValidationException was not raised');
  }

  public function testValidateFailsWithoutAuthorName()
  {
    try {
      $reply = $this->repo->validate(array(
        'post_id'   => 1,
        'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.'
      ));
    }
    catch(ValidationException $expected)
    {
      return;
    }

    $this->fail('ValidationException was not raised');
  }

  public function testValidateFailsWithoutPostId()
  {
    try {
      $reply = $this->repo->validate(array(
        'author_name' => 'Testy McTesterson',
        'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.'
      ));
    }
    catch(ValidationException $expected)
    {
      return;
    }

    $this->fail('ValidationException was not raised');
  }

  public function testStoreReturnsModel()
  {
    $comment_data = array(
      'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.',
      'author_name' => 'Testy McTesterson'
    );

    $comment = $this->repo->store(1, $comment_data);

    $this->assertTrue($comment instanceof Illuminate\Database\Eloquent\Model);
    $this->assertTrue($comment->content === $comment_data['content']);
    $this->assertTrue($comment->author_name === $comment_data['author_name']);
  }

  public function testUpdateSaves()
  {
    $comment_data = array(
      'content' => 'The Content Has Been Updated'
    );

    $comment = $this->repo->update(1, 1, $comment_data);

    $this->assertTrue($comment instanceof Illuminate\Database\Eloquent\Model);
    $this->assertTrue($comment->content === $comment_data['content']);
  }

  public function testDestroySaves()
  {
    $reply = $this->repo->destroy(1,1);
    $this->assertTrue($reply);

    try {
      $this->repo->findById(1,1);
    }
    catch(NotFoundException $expected)
    {
      return;
    }

    $this->fail('NotFoundException was not raised');
  }

  public function testInstanceReturnsModel()
  {
    $comment = $this->repo->instance();
    $this->assertTrue($comment instanceof Illuminate\Database\Eloquent\Model);
  }

  public function testInstanceReturnsModelWithData()
  {
    $comment_data = array(
      'content' => 'Un-validated content'
    );

    $comment = $this->repo->instance($comment_data);
    $this->assertTrue($comment instanceof Illuminate\Database\Eloquent\Model);
    $this->assertTrue($comment->content === $comment_data['content']);
  }

}

app/tests/repositories/EloquentPostRepositoryTest.php

<?php

class EloquentPostRepositoryTest extends TestCase {

  public function setUp()
  {
    parent::setUp();
    $this->repo = App::make('EloquentPostRepository');
  }

  public function testFindByIdReturnsModel()
  {
    $post = $this->repo->findById(1);
    $this->assertTrue($post instanceof Illuminate\Database\Eloquent\Model);
  }

  public function testFindAllReturnsCollection()
  {
    $posts = $this->repo->findAll();
    $this->assertTrue($posts instanceof Illuminate\Database\Eloquent\Collection);
  }

  public function testValidatePasses()
  {
    $reply = $this->repo->validate(array(
      'title'    => 'This Should Pass',
      'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.',
      'author_name' => 'Testy McTesterson'
    ));

    $this->assertTrue($reply);
  }

  public function testValidateFailsWithoutTitle()
  {
    try {
      $reply = $this->repo->validate(array(
        'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.',
        'author_name' => 'Testy McTesterson'
      ));
    }
    catch(ValidationException $expected)
    {
      return;
    }

    $this->fail('ValidationException was not raised');
  }

  public function testValidateFailsWithoutAuthorName()
  {
    try {
      $reply = $this->repo->validate(array(
        'title'    => 'This Should Pass',
        'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.'
      ));
    }
    catch(ValidationException $expected)
    {
      return;
    }

    $this->fail('ValidationException was not raised');
  }

  public function testStoreReturnsModel()
  {
    $post_data = array(
      'title'    => 'This Should Pass',
      'content'   => 'Lorem ipsum Fugiat consectetur laborum Ut consequat aliqua.',
      'author_name' => 'Testy McTesterson'
    );

    $post = $this->repo->store($post_data);

    $this->assertTrue($post instanceof Illuminate\Database\Eloquent\Model);
    $this->assertTrue($post->title === $post_data['title']);
    $this->assertTrue($post->content === $post_data['content']);
    $this->assertTrue($post->author_name === $post_data['author_name']);
  }

  public function testUpdateSaves()
  {
    $post_data = array(
      'title' => 'The Title Has Been Updated'
    );

    $post = $this->repo->update(1, $post_data);

    $this->assertTrue($post instanceof Illuminate\Database\Eloquent\Model);
    $this->assertTrue($post->title === $post_data['title']);
  }

  public function testDestroySaves()
  {
    $reply = $this->repo->destroy(1);
    $this->assertTrue($reply);

    try {
      $this->repo->findById(1);
    }
    catch(NotFoundException $expected)
    {
      return;
    }

    $this->fail('NotFoundException was not raised');
  }

  public function testInstanceReturnsModel()
  {
    $post = $this->repo->instance();
    $this->assertTrue($post instanceof Illuminate\Database\Eloquent\Model);
  }

  public function testInstanceReturnsModelWithData()
  {
    $post_data = array(
      'title' => 'Un-validated title'
    );

    $post = $this->repo->instance($post_data);
    $this->assertTrue($post instanceof Illuminate\Database\Eloquent\Model);
    $this->assertTrue($post->title === $post_data['title']);
  }

}

Now that we have all of our tests in place, let's run PHPUnit again to watch them fail!

vendor/phpunit/phpunit/phpunit.php

You should have a whole ton of failures; in fact, the test suite probably did not even finish before it crashed. This is OK. It means we have followed the rules of TDD and written failing tests before production code. Typically, though, these tests would be written one at a time, and you would not move on to the next test until you had code that allowed the previous test to pass. Your terminal should probably look something like mine at the moment:

Screenshot

What’s actually failing is the assertViewHas method in our controller tests. It’s kind of intimidating to deal with this kind of error when we have lumped all of our tests together without any production code at all. This is why you should always write tests one at a time: you’ll encounter these errors in stride, as opposed to one huge mess of errors at once. For now, just follow my lead into the implementation of our code.


Sidebar Discussion

Before we proceed with the implementations, let's break for a quick sidebar discussion on the responsibilities of the MVC pattern.

From The Gang of Four:

The Model is the application object, the View is its screen presentation, and the Controller defines the way the user interface reacts to user input.

The point of using a structure like this is to remain encapsulated and flexible, allowing us to exchange and reuse components. Let's go through each part of the MVC pattern and talk about its reusability and flexibility:

View

I think most people would agree that a View is supposed to be a simple visual representation of data and should not contain much logic. In our case, as developers for the web, our View tends to be HTML or XML.

  • reusable: always, almost anything can create a view
  • flexible: not having any real logic in these layers makes this very flexible

Controller

If a Controller "defines the way the user interface reacts to user input", then its responsibility should be to listen to user input (GET, POST, Headers, etc), and build out the current state of the application. In my opinion, a Controller should be very light and should not contain more code than is required to accomplish the above.

  • reusable: We have to remember that our Controllers return an opinionated View, so we can never call a Controller method in a practical way to reuse any of the logic inside it. Therefore, any logic placed in a Controller method must be specific to that method; if the logic is reusable, it should be placed elsewhere.
  • flexible: In most PHP MVCs, the Controller is tied directly to the route, which does not leave us very much flexibility. Laravel fixes this issue by allowing us to declare routes that use a controller, so we can now swap out our controllers with different implementations if need be:
Route::get('/', array(
  'uses' => 'SomeController@action'
));

Model

The Model is the "application object" in our definition from the Gang of Four. This is a very generic definition. In addition, we just decided to offload any logic that needs to be reusable from our Controller, and since the Model is the only component left in our defined structure, it’s logical to assume that this is the new home for that logic. However, I think the Model should not contain any logic like this. In my opinion, we should think of our "application object" as an object that represents its place in the data-layer; whether that is a table, a row, or a collection depends entirely on state. The Model should contain not much more than getters and setters for data (including relationships).

  • reusable: If we follow the above practice and make our Model an object that represents its place in the database, this object remains very reusable. Any part of our system can use this Model, and by doing so gain complete and unopinionated access to the database.
  • flexible: Following the above practice, our Model is basically an implementation of an ORM, which allows us to be flexible, because we now have the power to change ORMs whenever we’d like, just by adding a new Model. We should probably have a pre-defined interface that our Models must abide by, such as: all, find, create, update, delete (see the sketch just below this list). Implementing a new ORM would then be as simple as ensuring that interface was accommodated.
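
As a rough illustration of that idea, such a contract might look something like the sketch below. The interface name and exact signatures here are hypothetical; we will not actually build this in the tutorial, it simply shows the kind of agreement a swappable ORM layer would rely on.

// Hypothetical contract that any ORM-backed Model could abide by.
// Swapping ORMs would then just mean adding a new Model class that
// implements this same interface.
interface ModelContractInterface {
  public static function all();
  public static function find($id);
  public static function create($data);
  public function update($data);
  public function delete();
}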

Repository

Just by carefully defining our MVC components, we orphaned all kinds of logic into no-man's land. This is where Repositories come in to fill the void. Repositories become the intermediary between the Controllers and Models. A typical request would go something like this:

  • The Controller receives all user input and passes it to the repository.
  • The Repository does any "pre-gathering" actions such as validation of data, authorization, authentication, etc. If these "pre-gathering" actions are successful, then the request is passed to the Model for processing.
  • The Model will process all of the data into the data-layer, and return the current state.
  • The Repository will handle any "post-gathering" routines and return the current state to the controller.
  • The Controller will then create the appropriate view using the information provided by the repository.

Our Repository ends up as flexible and organized as we have made our Controllers and Models, allowing us to reuse this in most parts of our system, as well as being able to swap it out for another implementation if needed.

We have already seen an example of swapping out a repository for another implementation in the Controller tests above. Instead of using our default Repository, we asked the IoC container to provide the controller with an instance of a Mockery object. We have this same power for all of our components.
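
As a refresher, that swap boiled down to something like the following sketch inside a controller test (the exact expectations you set on the mock will of course depend on the test at hand):

// Bind a Mockery double to the interface, so when the IoC container
// builds the controller it receives the mock instead of the real
// EloquentPostRepository.
$mock = Mockery::mock('PostRepositoryInterface');
$mock->shouldReceive('findAll')->once()->andReturn('foo');

App::instance('PostRepositoryInterface', $mock);

$this->call('GET', '/v1/posts');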

What we have accomplished here by adding another layer to our MVC is a very organized, scalable, and testable system. Let's start putting the pieces in place and getting our tests to pass.


Controller Implementation

If you take a read through the controller tests, you’ll see that all we really care about is how the controller is interacting with the repository. So let's see how light and simple that makes our controllers.

Note: in TDD, the objective is to do no more work than is required to make your tests pass. So we want to do the absolute bare minimum here.

app/controllers/V1/PostsController.php

<?php
namespace V1;

use BaseController; 
use PostRepositoryInterface; 
use Input;
use View;

class PostsController extends BaseController {

  /**
   * We will use Laravel's dependency injection to auto-magically
   * "inject" our repository instance into our controller
   */
  public function __construct(PostRepositoryInterface $posts)
  {
    $this->posts = $posts;
  }

  /**
   * Display a listing of the resource.
   *
   * @return Response
   */
  public function index()
  {
    return $this->posts->findAll();
  }

  /**
   * Show the form for creating a new resource.
   *
   * @return Response
   */
  public function create()
  {
    $post = $this->posts->instance();
    return View::make('posts._form', compact('post'));
  }

  /**
   * Store a newly created resource in storage.
   *
   * @return Response
   */
  public function store()
  {
    return $this->posts->store( Input::all() );
  }

  /**
   * Display the specified resource.
   *
   * @param int $id
   * @return Response
   */
  public function show($id)
  {
    return $this->posts->findById($id);
  }

  /**
   * Show the form for editing the specified resource.
   *
   * @param int $id
   * @return Response
   */
  public function edit($id)
  {
    $post = $this->posts->findById($id);
    return View::make('posts._form', compact('post'));
  }

  /**
   * Update the specified resource in storage.
   *
   * @param int $id
   * @return Response
   */
  public function update($id)
  {
    return $this->posts->update($id, Input::all());
  }

  /**
   * Remove the specified resource from storage.
   *
   * @param int $id
   * @return Response
   */
  public function destroy($id)
  {
    $this->posts->destroy($id);
    return '';
  }

}

app/controllers/V1/PostsCommentsController.php

<?php
namespace V1;

use BaseController; 
use CommentRepositoryInterface; 
use Input;
use View;

class PostsCommentsController extends BaseController {

  /**
   * We will use Laravel's dependency injection to auto-magically
   * "inject" our repository instance into our controller
   */
  public function __construct(CommentRepositoryInterface $comments)
  {
    $this->comments = $comments;
  }

  /**
   * Display a listing of the resource.
   *
   * @return Response
   */
  public function index($post_id)
  {
    return $this->comments->findAll($post_id);
  }

  /**
   * Show the form for creating a new resource.
   *
   * @return Response
   */
  public function create($post_id)
  {
    $comment = $this->comments->instance(array(
      'post_id' => $post_id
    ));

    return View::make('comments._form', compact('comment'));
  }

  /**
   * Store a newly created resource in storage.
   *
   * @return Response
   */
  public function store($post_id)
  {
    return $this->comments->store( $post_id, Input::all() );
  }

  /**
   * Display the specified resource.
   *
   * @param int $id
   * @return Response
   */
  public function show($post_id, $id)
  {
    return $this->comments->findById($post_id, $id);
  }

  /**
   * Show the form for editing the specified resource.
   *
   * @param int $id
   * @return Response
   */
  public function edit($post_id, $id)
  {
    $comment = $this->comments->findById($post_id, $id);

    return View::make('comments._form', compact('comment'));
  }

  /**
   * Update the specified resource in storage.
   *
   * @param int $id
   * @return Response
   */
  public function update($post_id, $id)
  {
    return $this->comments->update($post_id, $id, Input::all());
  }

  /**
   * Remove the specified resource from storage.
   *
   * @param int $id
   * @return Response
   */
  public function destroy($post_id, $id)
  {
    $this->comments->destroy($post_id, $id);
    return '';
  }

}

It doesn’t get much simpler than that. All the Controllers are doing is handing the input data to the repository, taking the response from that, and handing it to the View; the View in our case is merely JSON for most of our methods. When we return an Eloquent Collection or Eloquent Model from a Controller in Laravel 4, the object is parsed into JSON auto-magically, which makes our job very easy.

Note: notice that we added a few more "use" statements to the top of the file to support the other classes that we’re using. Do not forget this when you’re working within a namespace.

The only thing that is a bit tricky in this controller is the constructor. Notice we’re type-hinting a dependency for this Controller, yet at no point do we have access to the instantiation of this controller to actually pass that class in… welcome to dependency injection! What we’re actually doing here is hinting to our controller that we have a dependency needed to run this class, and what its class name (or IoC binding name) is. Laravel uses App::make() to create its Controllers before calling them. App::make() will try to resolve an item by looking for any bindings that we may have declared, and/or by using the auto-loader to provide an instance. In addition, it will also resolve any dependencies needed to instantiate that class for us, by more-or-less recursively calling App::make() on each of the dependencies.
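
Conceptually, what the container ends up doing for this controller is roughly the following. This is only a simplified sketch of the idea, not Laravel's actual internals:

// Rough illustration only -- Laravel performs this for us via App::make().
// Resolving the controller also resolves its type-hinted constructor dependency.
$posts      = App::make('PostRepositoryInterface'); // resolved via the binding we add below
$controller = new V1\PostsController($posts);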

The observant will notice that what we’re trying to pass in as a dependency is an interface, and as you know, an interface cannot be instantiated. This is where it gets cool, and we actually already did the same thing in our tests. In our tests, however, we used App::instance() to provide an already-created instance instead of the interface. For our Controllers, we’re going to tell Laravel that whenever an instance of PostRepositoryInterface is requested, it should actually return an instance of EloquentPostRepository.

Open up your app/routes.php file and add the following to the top of the file:

App::bind('PostRepositoryInterface', 'EloquentPostRepository');
App::bind('CommentRepositoryInterface', 'EloquentCommentRepository');

After adding those lines, any time App::make() asks for an instance of PostRepositoryInterface, it will create an instance of EloquentPostRepository, which is assumed to implement PostRepositoryInterface. If you were to ever change your repository to use a different ORM than Eloquent, or maybe a file-based driver, all you have to do is change these two lines and you’re good to go; your Controllers will still work as normal. The Controller's actual dependency is any object that implements that interface, and we can determine at run-time what that implementation actually is.
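
For example, if you later wrote a (purely hypothetical) file-based implementation of these interfaces, the swap would be nothing more than:

// FileBasedPostRepository and FileBasedCommentRepository are hypothetical
// classes that would also implement the same two interfaces -- nothing else
// in the application would need to change.
App::bind('PostRepositoryInterface', 'FileBasedPostRepository');
App::bind('CommentRepositoryInterface', 'FileBasedCommentRepository');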

The PostRepositoryInterface and CommentRepositoryInterface must actually exist and the bindings must actually implement them. So let's create them now:

app/repositories/PostRepositoryInterface.php

<?php

interface PostRepositoryInterface {
  public function findById($id);
  public function findAll();
  public function paginate($limit = null);
  public function store($data);
  public function update($id, $data);
  public function destroy($id);
  public function validate($data);
  public function instance();
}

app/repositories/CommentRepositoryInterface.php

<?php

interface CommentRepositoryInterface {
  public function findById($post_id, $id);
  public function findAll($post_id);
  public function store($post_id, $data);
  public function update($post_id, $id, $data);
  public function destroy($post_id, $id);
  public function validate($data);
  public function instance();
}

Now that we have our two interfaces built, we must provide implementations of these interfaces. Let's build them now.

app/repositories/EloquentPostRepository.php

As the name of this implementation implies, we’re relying on Eloquent, which we can call directly. If you had other dependencies, remember that App::make() is being used to resolve this repository, so you can feel free to use the same constructor method we used with our Controllers to inject your dependencies.
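
For instance, if a repository needed a cache service of its own, a constructor like the sketch below would let the container inject it automatically. Both the class name and CacheServiceInterface are invented for illustration; the actual implementation that follows has no constructor at all.

// Hypothetical example only -- the real EloquentPostRepository below
// does not need a constructor, and CacheServiceInterface is an invented
// dependency used purely for illustration.
class CachingPostRepository implements PostRepositoryInterface {

  protected $cache;

  public function __construct(CacheServiceInterface $cache)
  {
    // injected by the IoC container, just like in our Controllers
    $this->cache = $cache;
  }

  // ...the rest of the PostRepositoryInterface methods would go here
}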

<?php

class EloquentPostRepository implements PostRepositoryInterface {

  public function findById($id)
  {
    $post = Post::with(array(
        'comments' => function($q)
        {
          $q->orderBy('created_at', 'desc');
        }
      ))
      ->where('id', $id)
      ->first();

    if(!$post) throw new NotFoundException('Post Not Found');
    return $post;
  }

  public function findAll()
  {
    return Post::with(array(
        'comments' => function($q)
        {
          $q->orderBy('created_at', 'desc');
        }
      ))
      ->orderBy('created_at', 'desc')
      ->get();
  }

  public function paginate($limit = null)
  {
    return Post::paginate($limit);
  }

  public function store($data)
  {
    $this->validate($data);
    return Post::create($data);
  }

  public function update($id, $data)
  {
    $post = $this->findById($id);
    $post->fill($data);
    $this->validate($post->toArray());
    $post->save();
    return $post;
  }

  public function destroy($id)
  {
    $post = $this->findById($id);
    $post->delete();
    return true;
  }

  public function validate($data)
  {
    $validator = Validator::make($data, Post::$rules);
    if($validator->fails()) throw new ValidationException($validator);
    return true;
  }

  public function instance($data = array())
  {
    return new Post($data);
  }

}

app/repositories/EloquentCommentRepository.php

<?php

class EloquentCommentRepository implements CommentRepositoryInterface {

  public function findById($post_id, $id)
  {
    $comment = Comment::find($id);
    if(!$comment || $comment->post_id != $post_id) throw new NotFoundException('Comment Not Found');
    return $comment;
  }

  public function findAll($post_id)
  {
    return Comment::where('post_id', $post_id)
      ->orderBy('created_at', 'desc')
      ->get();
  }

  public function store($post_id, $data)
  {
    $data['post_id'] = $post_id;
    $this->validate($data);
    return Comment::create($data);
  }

  public function update($post_id, $id, $data)
  {
    $comment = $this->findById($post_id, $id);
    $comment->fill($data);
    $this->validate($comment->toArray());
    $comment->save();
    return $comment;
  }

  public function destroy($post_id, $id)
  {
    $comment = $this->findById($post_id, $id);
    $comment->delete();
    return true;
  }

  public function validate($data)
  {
    $validator = Validator::make($data, Comment::$rules);
    if($validator->fails()) throw new ValidationException($validator);
    return true;
  }

  public function instance($data = array())
  {
    return new Comment($data);
  }

}

If you take a look in our repositories, there are a few Exceptions that we are throwing which are neither native nor part of Laravel. Those are custom Exceptions that we’re using to simplify our code. By using custom Exceptions, we’re able to easily halt the progress of the application when certain conditions are met. For instance, if a post is not found, we can just toss a NotFoundException and the application will handle it accordingly; but instead of showing a 500 error as usual, we’re going to set up custom error handlers. You could alternatively use App::abort(404) or something along those lines, but I find that this method saves me many conditional statements and repeated code, as well as allowing me to adjust the implementation of error reporting in a single place very easily.

First let's define the custom Exceptions. Create a file in your app folder called errors.php

touch app/errors.php

app/errors.php

<?php

class PermissionException extends Exception {

  public function __construct($message = null, $code = 403)
  {
    parent::__construct($message ?: 'Action not allowed', $code);
  }

}

class ValidationException extends Exception {

  protected $messages;

  /**
   * We are adjusting this constructor to receive an instance
   * of the validator as opposed to a string to save us some typing
   * @param Validator $validator failed validator object
   */
  public function __construct($validator)
  {
    $this->messages = $validator->messages();
    parent::__construct($this->messages, 400);
  }

  public function getMessages()
  {
    return $this->messages;
  }

}

class NotFoundException extends Exception {

  public function __construct($message = null, $code = 404)
  {
    parent::__construct($message ?: 'Resource Not Found', $code);
  }

}

These are very simple Exceptions. Notice that for the ValidationException, we can just pass it the failed validator instance and it will handle the error messages accordingly!

Now we need to define the error handlers that will be called when one of these Exceptions is thrown. These are basically Event listeners; whenever one of these Exceptions is thrown, it’s treated as an Event and the appropriate function is called. It’s very simple to add logging or any other error-handling procedures here.

app/filters.php

...

/**
 * General HttpException handler
 */
App::error( function(Symfony\Component\HttpKernel\Exception\HttpException $e, $code)
{
  $headers = $e->getHeaders();

  switch($code)
  {
    case 401:
      $default_message = 'Invalid API key';
      $headers['WWW-Authenticate'] = 'Basic realm="CRM REST API"';
    break;

    case 403:
      $default_message = 'Insufficient privileges to perform this action';
    break;

    case 404:
      $default_message = 'The requested resource was not found';
    break;

    default:
      $default_message = 'An error was encountered';
  }

  return Response::json(array(
    'error' => $e->getMessage() ?: $default_message
  ), $code, $headers);
});

/**
 * Permission Exception Handler
 */
App::error(function(PermissionException $e, $code)
{
  return Response::json($e->getMessage(), $e->getCode());
});

/**
 * Validation Exception Handler
 */
App::error(function(ValidationException $e, $code)
{
  return Response::json($e->getMessages(), $code);
});

/**
 * Not Found Exception Handler
 */
App::error(function(NotFoundException $e)
{
  return Response::json($e->getMessage(), $e->getCode());
});

We now need to let our auto-loader know about these new files. So we must tell Composer where to check for them:

composer.json

Notice that we added the "app/errors.php" line.

{
  "require": {
    "laravel/framework": "4.0.*",
    "way/generators": "dev-master",
    "twitter/bootstrap": "dev-master",
    "conarwelsh/mustache-l4": "dev-master"
  },
  "require-dev": {
    "phpunit/phpunit": "3.7.*",
    "mockery/mockery": "0.7.*"
  },
  "autoload": {
    "classmap": [
      "app/commands",
      "app/controllers",
      "app/models",
      "app/database/migrations",
      "app/database/seeds",
      "app/tests/TestCase.php",
      "app/repositories",
      "app/errors.php"
    ]
  },
  "scripts": {
    "post-update-cmd": "php artisan optimize"
  },
  "minimum-stability": "dev"
}

We must now tell Composer to actually check for these files and include them in the auto-load registry.

composer dump-autoload

Great, so we have completed our controllers and our repositories. The last two items in our MVRC that we have to take care of are the models and views, both of which are pretty straightforward.

app/models/Post.php

<?php
/**
 * Represent a Post Item, or Collection
 */
class Post extends Eloquent {

  /**
   * Items that are "fillable"
   * meaning we can mass-assign them from the constructor
   * or $post->fill()
   * @var array
   */
  protected $fillable = array(
    'title', 'content', 'author_name'
  );

  /**
   * Validation Rules
   * this is just a place for us to store these, you could
   * alternatively place them in your repository
   * @var array
   */
  public static $rules = array(
    'title'    => 'required',
    'author_name' => 'required'
  );

  /**
   * Define the relationship with the comments table
   * @return Collection collection of Comment Models
   */
  public function comments()
  {
    return $this->hasMany('Comment');
  }

}

app/models/Comment.php

<?php
/**
 * Represent a Comment Item, or Collection
 */
class Comment extends Eloquent {

  /**
   * Items that are "fillable"
   * meaning we can mass-assign them from the constructor
   * or $comment->fill()
   * @var array
   */
  protected $fillable = array(
    'post_id', 'content', 'author_name'
  );

  /**
   * Validation Rules
   * this is just a place for us to store these, you could
   * alternatively place them in your repository
   * @var array
   */
  public static $rules = array(
    'post_id'   => 'required|numeric',
    'content'   => 'required',
    'author_name' => 'required'
  );

  /**
   * Define the relationship with the posts table
   * @return Model parent Post model
   */
  public function post()
  {
    return $this->belongsTo('Post');
  }

}

As far as views are concerned, I’m just going to mark up some simple Bootstrap-friendly pages. Remember to change each file's extension to .mustache though, since our generator assumed that we would be using .blade.php. We’re also going to create a few "partial" views using the Rails convention of prefixing them with an underscore to signify a partial.

Note: I skipped a few views, as we will not be using them in this tutorial.

public/views/posts/index.mustache

For the index page view we’ll just loop over all of our posts, showing the post partial for each.

{{#posts}}
  {{> posts._post}}
{{/posts}}

public/views/posts/show.mustache

For the show view we’ll show an entire post and its comments:

<article>
  <h3>
    {{ post.title }} {{ post.id }}
    <small>{{ post.author_name }}</small>
  </h3>
  <div>
    {{ post.content }}
  </div>
</article>

<div>
  <h2>Add A Comment</h2>
  {{> comments._form }}

  <section data-role="comments">
    {{#post.comments}}
      <div>
        {{> comments._comment }}
      </div>
    {{/post.comments}}
  </section>
</div>

public/views/posts/_post.mustache

Here’s the partial that we’ll use to show a post in a list. This is used on our index view.

<article data-toggle="view" data-target="posts/{{ id }}">
  <h3>{{ title }} {{ id }}</h3>
  <cite>{{ author_name }} on {{ created_at }}</cite>
</article>

public/views/posts/_form.mustache

Here’s the form partial needed to create a post. We’ll use this from our API, but it could also be a useful view in an admin panel and other places, which is why we chose to make it a partial.

{{#exists}}
  <form action="/v1/posts/{{ post.id }}" method="post">
    <input type="hidden" name="_method" value="PUT" />
{{/exists}}
{{^exists}}
  <form action="/v1/posts" method="post">
{{/exists}}

  <fieldset>

    <div class="control-group">
      <label class="control-label"></label>
      <div class="controls">
        <input type="text" name="title" value="{{ post.title }}" />
      </div>
    </div>

    <div class="control-group">
      <label class="control-label"></label>
      <div class="controls">
        <input type="text" name="author_name" value="{{ post.author_name }}" />
      </div>
    </div>

    <div class="control-group">
      <label class="control-label"></label>
      <div class="controls">
        <textarea name="content">{{ post.content }}</textarea>
      </div>
    </div>

    <div class="form-actions">
      <input type="submit" class="btn btn-primary" value="Save" />
    </div>

  </fieldset>
</form>

public/views/comments/_comment.mustache

Here’s the comment partial which is used to represent a single comment in a list of comments:

<h5>
  {{ author_name }}
  <small>{{ created_at }}</small>
</h5>
<div>
  {{ content }}
</div>

public/views/comments/_form.mustache

The form needed to create a comment, used both from the API and the Show Post view:

{{#exists}}
  <form class="form-horizontal" action="/v1/posts/{{ comment.post_id }}/comments/{{ comment.id }}" method="post">
    <input type="hidden" name="_method" value="PUT" />
{{/exists}}
{{^exists}}
  <form class="form-horizontal" action="/v1/posts/{{ comment.post_id }}/comments" method="post">
{{/exists}}

  <fieldset>

    <div class="control-group">
      <label class="control-label">Author Name</label>
      <div class="controls">
        <input type="text" name="author_name" value="{{ comment.author_name }}" />
      </div>
    </div>

    <div class="control-group">
      <label class="control-label">Comment</label>
      <div class="controls">
        <textarea name="content">{{ comment.content }}</textarea>
      </div>
    </div>

    <div class="form-actions">
      <input type="submit" class="btn btn-primary" value="Save" />
    </div>

  </fieldset>
</form>

public/views/layouts/_notification.mustache

And here’s the helper view partial to allow us to show a notification:

<div class="alert alert-{{type}}">
  {{message}}
</div>

Great, we have all of our API components in place. Let's run our unit tests to see where we’re at!

vendor/phpunit/phpunit/phpunit.php

Your first run of this test should pass with flying (green) colors. However, if you were to run the tests again, you’ll notice that they now fail with a handful of errors. That is because our repository tests actually hit the database, and in doing so deleted some of the records our previous tests used to assert values. This is an easy fix: all we have to do is tell our tests to re-seed the database before each test. In addition, although we did not receive a noticeable error for this, we also did not close Mockery after each test, which is a requirement noted in Mockery's docs. So let's add both missing methods.

Open up app/tests/TestCase.php and add the following two methods:

/**
 * setUp is called prior to each test
 */
public function setUp()
{
  parent::setUp();
  $this->seed();
}

/**
 * tearDown is called after each test
 * @return [type] [description]
 */
public function tearDown()
{
  Mockery::close();
}

This is great: we have now told our tests to re-seed the database in "setUp", which is run before each test. However, we still have one problem; every time you re-seed, it’s only going to append new rows to the tables. Our tests are looking for items with a row ID of one, so we still have a few changes to make. We just need to tell the database to truncate our tables when seeding:

app/database/seeds/CommentsTableSeeder.php

Before we insert the new rows, we’ll truncate the table, deleting all rows and resetting the auto-increment counter.

<?php

class CommentsTableSeeder extends Seeder {

  public function run()
  {
    $comments = array(
      array(
        'content'   => 'Lorem ipsum Nisi dolore ut incididunt mollit tempor proident eu velit cillum dolore sed',
        'author_name' => 'Testy McTesterson',
        'post_id'   => 1,
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
      array(
        'content'   => 'Lorem ipsum Nisi dolore ut incididunt mollit tempor proident eu velit cillum dolore sed',
        'author_name' => 'Testy McTesterson',
        'post_id'   => 1,
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
      array(
        'content'   => 'Lorem ipsum Nisi dolore ut incididunt mollit tempor proident eu velit cillum dolore sed',
        'author_name' => 'Testy McTesterson',
        'post_id'   => 2,
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
    );

    //truncate the comments table when we seed
    DB::table('comments')->truncate();
    DB::table('comments')->insert($comments);
  }

}

app/database/seeds/PostsTableSeeder.php

<?php

class PostsTableSeeder extends Seeder {

  public function run()
  {
    $posts = array(
      array(
        'title'    => 'Test Post',
        'content'   => 'Lorem ipsum Reprehenderit velit est irure in enim in magna aute occaecat qui velit ad.',
        'author_name' => 'Conar Welsh',
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      ),
      array(
        'title'    => 'Another Test Post',
        'content'   => 'Lorem ipsum Reprehenderit velit est irure in enim in magna aute occaecat qui velit ad.',
        'author_name' => 'Conar Welsh',
        'created_at' => date('Y-m-d H:i:s'),
        'updated_at' => date('Y-m-d H:i:s'),
      )
    );

    //truncate the posts table each time we seed
    DB::table('posts')->truncate();
    DB::table('posts')->insert($posts);
  }

}

Now you should be able to run the tests any number of times and get passing tests each time! That means we have fulfilled our TDD cycle and we’re not allowed to write any more production code for our API! Let's commit our changes to our repo and move on to the Backbone application!

git add . && git commit -am "built out the API and corresponding tests"

Backbone App

Now that we have completed all of the back-end work, we can move forward to creating a nice user interface to access all of that data. We’ll keep this part of the project a little bit on the simpler side, and I warn you that my approach can be considered an opinionated one. I have seen many people use many different methods for structuring a Backbone application. My trials and errors have led me to my current method; if you do not agree with it, my hope is that it may inspire you to find your own!

We’re going to use the Mustache templating engine instead of Underscore templates, which will allow us to share our views between the client and server! The trick is in how you load the views: we’re going to use AJAX in this tutorial, but it’s just as easy to load them all into the main template, or to precompile them.

Router

First we’ll get our router going. There are two parts to this: the Laravel router and the Backbone router.

Laravel Router

There are two main approaches we can take here:

Approach #1: The catch-all

Remember I told you when you were adding the resource routes that it was important that you placed them ABOVE the app route?? The catch-all method is the reason for that statement. The overall goal of this method is to have any routes that have not found a match in Laravel, be caught and sent to Backbone. Implementing this method is easy:

app/routes.php

// change your existing app route to this:
// we are basically just giving it an optional parameter of "anything"
Route::get('/{path?}', function($path = null)
{
  return View::make('app');
})
->where('path', '.*'); //regex to match anything (dots, slashes, letters, numbers, etc)

Now, every route other than our API routes will render our app view.

In addition, if you have a multi-page app (several single page apps), you can define several of these catch-alls:

Route::get('someApp1{path?}', function($path = null)
{
  return View::make('app');
})
->where('path', '.*');

Route::get('anotherApp/{path?}', function($path = null)
{
  return View::make('app');
})
->where('path', '.*');

Route::get('athirdapp{path?}', function($path = null)
{
  return View::make('app');
})
->where('path', '.*');

Note: Keep in mind the '/' before {path?}. If that slash is there, it’ll be required in the URL (with the exception of the index route); sometimes this is desired and sometimes not.

Approach #2:

Since our front and back end share views… wouldn't it be extremely easy to just define routes in both places? You can even do this in addition to the catch-all approach if you want.

The routes that we’re going to end up defining for the app are simply:

GET /
GET /posts/:id

app/routes.php

<?php

App::bind('PostRepositoryInterface', 'EloquentPostRepository'); 
App::bind('CommentRepositoryInterface', 'EloquentCommentRepository');
//create a group of routes that will belong to APIv1
Route::group(array('prefix' => 'v1'), function()
{
  Route::resource('posts', 'V1\PostsController');
  Route::resource('posts.comments', 'V1\PostsCommentsController');
});
/**
 * Method #1: use catch-all
 * optionally commented out while we use Method 2
 */
// change your existing app route to this:
// we are basically just giving it an optional parameter of "anything"
// Route::get('/{path?}', function($path = null)
// {
//   return View::make('layouts.application')->nest('content', 'app');
// })
// ->where('path', '.*'); //regex to match anything (dots, slashes, letters, numbers, etc)
/**
 * Method #2: define each route
 */
Route::get('/', function()
{
  $posts = App::make('PostRepositoryInterface')->paginate();
  return View::make('layouts.application')->nest('content', 'posts.index', array(
    'posts' => $posts
  ));
});

Route::get('posts/{id}', function($id)
{
  $post = App::make('PostRepositoryInterface')->findById($id);
  return View::make('layouts.application')->nest('content', 'posts.show', array(
    'post' => $post
  ));
});

Pretty cool huh?! Regardless of which method we use, or the combination of both, your Backbone router will end up mostly the same.

Notice that we’re using our Repository again; this is yet another reason why Repositories are a useful addition to our framework. We can now run almost all of the logic that the controller does, while hardly repeating any code!

Keep a few things in mind while choosing which method to use. If you use the catch-all, it will do just what the name implies… catch ALL. This means there is no such thing as a 404 on your site anymore; no matter the request, it lands on the app page (unless you manually toss an exception somewhere, such as your repository). The inverse is that by defining each route, you now have two sets of routes to manage. Both methods have their ups and downs, but both are equally easy to deal with.

Base View

One view to rule them all! This BaseView is the view that all of our other Views will inherit from. For our purposes, this view has but one job… templating! In a larger app this view is a good place to put other shared logic.

We’ll simply extend Backbone.View and add a template function that will return our view from the cache if it exists, or get it via AJAX and place it in the cache. We have to use synchronous AJAX due to the way that Mustache.js fetches partials, but since we’re only retrieving these views if they are not cached, we shouldn’t receive much of a performance hit here.

/**
 ***************************************
 * Array Storage Driver
 * used to store our views
 ***************************************
 */
var ArrayStorage = function(){
  this.storage = {};
};
ArrayStorage.prototype.get = function(key)
{
  return this.storage[key];
};
ArrayStorage.prototype.set = function(key, val)
{
  return this.storage[key] = val;
};
/**
 ***************************************
 * Base View
 ***************************************
 */
var BaseView = bb.View.extend({

  /**
   * Set our storage driver
   */
  templateDriver: new ArrayStorage,

  /**
   * Set the base path for where our views are located
   */
  viewPath: '/views/',

  /**
   * Get the template, and apply the variables
   */
  template: function()
  {
    var view, data, template, self;

    switch(arguments.length)
    {
      case 1:
        view = this.view;
        data = arguments[0];
        break;
      case 2:
        view = arguments[0];
        data = arguments[1];
        break;
    }

    template = this.getTemplate(view, false);
    self = this;

    return template(data, function(partial)
    {
      return self.getTemplate(partial, true);
    });
  },

  /**
   * Facade that will help us abstract our storage engine,
   * should we ever want to swap to something like LocalStorage
   */
  getTemplate: function(view, isPartial)
  {
    return this.templateDriver.get(view) || this.fetch(view, isPartial);
  },

  /**
   * Facade that will help us abstract our storage engine,
   * should we ever want to swap to something like LocalStorage
   */
  setTemplate: function(name, template)
  {
    return this.templateDriver.set(name, template);
  },

  /**
   * Function to retrieve the template via ajax
   */
  fetch: function(view, isPartial)
  {
    var markup = $.ajax({
      async: false,

      //the URL of our template, we can optionally use dot notation
      url: this.viewPath + view.split('.').join('/') + '.mustache'
    }).responseText;

    return isPartial
      ? markup
      : this.setTemplate(view, Mustache.compile(markup));
  }
});

PostView

The PostView renders a single blog post:

// this view will show an entire post
// comment form, and comments
var PostView = BaseView.extend({

  //the location of the template this view will use, we can use dot notation
  view: 'posts.show',

  //events this view should subscribe to
  events: {
    'submit form': function(e)
    {
      e.preventDefault();
      e.stopPropagation();

      return this.addComment( $(e.target).serialize() );
    }
  },

  //render our view into the defined `el`
  render: function()
  {
    var self = this;

    self.$el.html( this.template({
      post: this.model.attributes
    }) );
  },

  //add a comment for this post
  addComment: function(formData)
  {
    var
      self = this,

      //build our url
      action = this.model.url() + '/comments'
    ;

    //submit a post to our api
    $.post(action, formData, function(comment, status, xhr)
    {
      //create a new comment partial
      var view = new CommentViewPartial({
        //we are using a blank backbone model, since we don't have any specific logic needed
        model: new bb.Model(comment)
      });

      //prepend the comment partial to the comments list
      view.render().$el.prependTo(self.$('[data-role="comments"]'));

      //reset the form
      self.$('input[type="text"], textarea').val('');

      //prepend our new comment to the collection
      self.model.attributes.comments.unshift(comment);

      //send a notification that we successfully added the comment
      notifications.add({
        type: 'success',
        message: 'Comment Added!'
      });
    });

  }
});

Partial Views

We’ll need a few views to render partials. We mainly just need to tell the view which template to use and that it should extend our view that provides the method to fetch our template.

// this will be used for rendering a single comment in a list
var CommentViewPartial = BaseView.extend({
  //define our template location
  view: 'comments._comment',
  render: function()
  {
    this.$el.html( this.template(this.model.attributes) );
    return this;
  }
});

//this view will be used for rendering a single post in a list
var PostViewPartial = BaseView.extend({
  //define our template location
  view: 'posts._post',
  render: function()
  {
    this.$el.html( this.template(this.model.attributes) );
    return this;
  }
});

Blog View

This is our overall application view. It contains our configuration logic, as well as handling the fetching of our PostCollection. We also set up a cool little infinite scroll feature. Notice how we’re using jQuery promises to ensure that the fetching of our collection has completed prior to rendering the view.

var Blog = BaseView.extend({
  //define our template location
  view: 'posts.index',

  //setup our app configuration
  initialize: function()
  {
    this.perPage = this.options.perPage || 15;
    this.page   = this.options.page || 0;
    this.fetching = this.collection.fetch();

    if(this.options.infiniteScroll) this.enableInfiniteScroll();
  },

  //wait til the collection has been fetched, and render the view
  render: function()
  {
    var self = this;
    this.fetching.done(function()
    {
      self.$el.html('');
      self.addPosts();

      // var posts = this.paginate()

      // for(var i=0; i<posts.length; i++)
      // {
      //   posts[i] = posts[i].toJSON();
      // }

      // self.$el.html( self.template({
      //   posts: posts
      // }) );

      if(self.options.infiniteScroll) self.enableInfiniteScroll();
    });
  },

  //helper function to limit the amount of posts we show at a time
  paginate: function()
  {
    var posts;
    posts = this.collection.rest(this.perPage * this.page);
    posts = _.first(posts, this.perPage);
    this.page++;

    return posts;
  },

  //add the next set of posts to the view
  addPosts: function()
  {
    var posts = this.paginate();

    for(var i=0; i<posts.length; i++)
    {
      this.addOnePost( posts[i] );
    }
  },

  //helper function to add a single post to the view
  addOnePost: function(model)
  {
    var view = new PostViewPartial({
      model: model
    });
    this.$el.append( view.render().el );
  },

  //this function will show an entire post, we could alternatively make this its own View
  //however I personally like having it available in the overall application view, as it
  //makes it easier to manage the state
  showPost: function(id)
  {
    var self = this;

    this.disableInfiniteScroll();

    this.fetching.done(function()
    {
      var model = self.collection.get(id);

      if(!self.postView)
      {
        self.postView = new self.options.postView({
          el: self.el
        });
      }
      self.postView.model = model;
      self.postView.render();
    });
  },

  //function to run during the onScroll event
  infiniteScroll: function()
  {
    if($window.scrollTop() >= $document.height() - $window.height() - 50)
    {
      this.addPosts();
    }
  },

  //listen for the onScroll event
  enableInfiniteScroll: function()
  {
    var self = this;

    $window.on('scroll', function()
    {
      self.infiniteScroll();
    });
  },

  //stop listening to the onScroll event
  disableInfiniteScroll: function()
  {
    $window.off('scroll');
  }
});

PostCollection

Set up our PostCollection – we just need to tell the Collection the URL it should use to fetch its contents.

// the posts collection is configured to fetch
// from our API, as well as use our PostModel
var PostCollection = bb.Collection.extend({
  url: '/v1/posts'
});

Blog Router

Notice that we’re not instantiating new instances of our views; we’re merely telling them to render. Our initialize functions are designed to run only once, on page load.

var BlogRouter = bb.Router.extend({
  routes: {
    "": "index",
    "posts/:id": "show"
  },
  initialize: function(options)
  {
    // i do this to avoid having to hardcode an instance of a view
    // when we instantiate the router we will pass in the view instance
    this.blog = options.blog;
  },
  index: function()
  {
    //reset the paginator
    this.blog.page = 0;

    //render the post list
    this.blog.render();
  },
  show: function(id)
  {
    //render the full-post view
    this.blog.showPost(id);
  }
});

Notifications Collection

We’re just going to set up a simple Collection to store user notifications:

var notifications = new bb.Collection();

NotificationsView

This view will handle the displaying and hiding of user notifications:

var NotificationView = BaseView.extend({
  el: $('#notifications'),
  view: 'layouts._notification',
  initialize: function()
  {
    this.listenTo(notifications, 'add', this.render);
  },
  render: function(notification)
  {
    var $message = $( this.template(notification.toJSON()) );
    this.$el.append($message);
    this.delayedHide($message);
  },
  delayedHide: function($message)
  {
    var timeout = setTimeout(function()
    {
      $message.fadeOut(function()
      {
        $message.remove();
      });
    }, 5*1000);

    var self = this;
    $message.hover(
      function()
      {
        timeout = clearTimeout(timeout);
      },
      function()
      {
        self.delayedHide($message);
      }
    );
  }
});
var notificationView = new NotificationView();

Error Handling

Since we used custom exception handlers for our API, it’s very easy to handle any error our API may throw. Very similar to the way we defined our event listeners for the API in the app/filters.php file, we’ll define listeners for our app here. Each status code that could be returned can just show a notification very easily!

$.ajaxSetup({
  statusCode: {
    401: function()
    {
      notifications.add({
        type: null, //error, success, info, null
        message: 'You do not have permission to do that'
      });
    },
    403: function()
    {
      notifications.add({
        type: null, //error, success, info, null
        message: 'You do not have permission to do that'
      });
    },
    404: function()
    {
      notifications.add({
        type: 'error', //error, success, info, null
        message: '404: Page Not Found'
      });
    },
    500: function()
    {
      notifications.add({
        type: 'error', //error, success, info, null
        message: 'The server encountered an error'
      });
    }
  }
});

Event Listeners

We’ll need a few global event listeners to help us navigate through our app without refreshing the page. We mainly just hijack the default behavior and call Backbone.history.navigate(). Notice how on our first listener, we’re specifying the selector to only match those that don’t have a data attribute of bypass. This will allow us to create links such as <a href="/some/non-ajax/page" data-bypass="true">link</a> that will force the page to refresh. We could also go a step further here and check whether the link is a local one, as opposed to a link to another site.

$document.on("click", "a[href]:not([data-bypass])", function(e){
  e.preventDefault();
  e.stopPropagation();

  var href = $(this).attr("href");
  bb.history.navigate(href, true);
});

$document.on("click", "[data-toggle='view']", function(e)
{
  e.preventDefault();
  e.stopPropagation();

  var
    self = $(this),
    href = self.attr('data-target') || self.attr('href')
  ;

  bb.history.navigate(href, true);
});

Start The App

Now we just need to boot the app, passing in any config values that we need. Notice the line that checks for the silentRouter global variable; this is kind of a hacky way to be able to use both back-end routing methods at the same time. It allows us to define a variable called silentRouter in the view and set it to true, meaning that the router should not actually engage the Backbone route. Our back-end then handles the initial rendering of the page, and the router just waits for any needed updates or AJAX.

var BlogApp = new Blog({
  el       : $('[data-role="main"]'),
  collection   : new PostCollection(),
  postView    : PostView,
  perPage    : 15,
  page      : 0,
  infiniteScroll : true
});

var router = new BlogRouter({
  blog: BlogApp
});

if (typeof window.silentRouter === 'undefined') window.silentRouter = true;

bb.history.start({ pushState: true, root: '/', silent: window.silentRouter });

Conclusion

Did you notice that for the Backbone portion of our app, all we had to do was write some JavaScript that knew how to interact with the pre-existing portions of our application? That’s what I love about this method! It may seem like we had a lot of steps to take to get to that portion of things, but really, most of that work was just foundation build-up. Once we got that initial foundation in place, the actual application logic falls together very simply.

Try adding another feature to this blog, such as User listings and info. The basic steps you would take would be something like this:

  • Use the generator tool to create a new "User" resource.
  • Make the necessary modifications to ensure that the UserController is in the V1 API group.
  • Create your Repository and setup the proper IoC bindings in app/routes.php.
  • Write your Controller tests one at a time using Mockery for the repository, following each test up with the proper implementation to make sure that test passes.
  • Write your Repository tests one at a time, again, following each test up with the implementation.
  • Add in the new functionality to your Backbone App. I suggest trying two different approaches to the location of the User views. Decide for yourself which is the better implementation.
    • First place them in their own routes and Main view.
    • Then try incorporating them into the overall BlogView.

I hope this gave you some insight into creating a scalable single page app and API using Laravel 4 and Backbone.js. If you have any questions, please ask them in the comment section below!

June 12 2013

01:49

Working With Data in Sails.js

Sails.js is an up and coming Node.js framework, with a focus on freedom and smart defaults. In this article we’ll take a look at some of the data features Sails provides out-of-the-box, for easily making complex apps.


Why Sails Is Different Than Other Frameworks

The reason to choose Sails is best put by the Sails creator, Mike McNeil: "Sails was created out of necessity". Many frameworks you see around are built almost for the academic side of things; these frameworks usually foster best practices and create a platform for developers to create things faster or better.

Sails, on the other hand, was created for production. It’s not trying to feed you a new syntax or platform; it’s a solid base, meant for creating 'client work' with speed. The contrast may be subtle, but there are a few distinct differences.

To illustrate what I’m referring to, let's take a look at Meteor. Meteor is probably the leading JS platform around today, but it’s a prime example of a framework for the sake of a framework. Now, this is not a bad thing; I am a big supporter of Meteor. What I mean is that they set out to build a framework, and they did a good job at it. Mike, on the other hand, set out to make client work faster; Sails is just a means to reach that end.

In Meteor, pretty much everything is abstracted away, and you use JavaScript plus the Meteor API to code everything. Sails, by contrast, is not meant to be a new platform, so nothing is hidden.

It rests on top of Socket.io and the popular Express framework, and you have access to them in their entirety, natively. Are you beginning to see the difference?

Furthermore, since Sails is geared for production first, it’s built with multiple options for scaling and security.

There is a lot to talk about, but in this article I would like to focus on how Sails handles data, and how you can leverage some of Sails' more advanced features to perform some really cool actions.


Installation

Just in case you do not have Sails installed yet, you can do so via NPM by running:

sudo npm install -g sails

Socket.io & Express

Now let's talk a little bit about Socket.io and Express before we get into Sails. There’s a good premium series on Express by Andrew Burgess if you’re interested, but I'll run through the relevant basics of both of these libraries here:

Socket.io

Socket.io is a pub/sub library, which is run on both the server and on the client, and it allows them to speak via web sockets.

A brief example could look something like this:

//Code For Server
var io = require("socket.io").listen(80);

io.sockets.on("connection", function (sock) {
    sock.emit("welcomeMessage", { hello: "world" });
});

This code starts out by requiring the socket.io library and listening on port 80. Then, when a client socket connects, the server sends it a message addressed to the welcomeMessage event, passing along some JSON.

Next, on the client you would write something like:

//Code For Client
var sock = io.connect('http://localhost');
sock.on('welcomeMessage', function (json) {
    //Handle Event Received
});

Here we’re connecting to the server and listening for that welcomeMessage event we just created. As you can see it's a fairly simple publish / subscribe server, which is bidirectional (the client could emit messages for the server as well).

Now let's take a look at Express:

Express

The simplest form of an Express route could be something like:

app.get('/users', function(req, res) {
    res.send("Hello from '/users' !");
});

This defines a simple route, so that when a user goes to your site's address and tries to access the /users page, they will be presented with the message "Hello from '/users' !".

So Express is a framework for handling HTTP requests and Socket.io is a websocket communications library. What the Sails team have done though, is map all Express routes to Socket.io internally. What this means is, you can call any of the HTTP routes through web sockets.
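
To make that concrete, here’s a rough sketch of what calling a route over a socket can look like, using the same socket.request helper that we’ll rely on later in this tutorial (this assumes Sails’ bundled socket.io client script is loaded on the page):

//Hypothetical example: hitting the '/users' route over a web socket instead of HTTP
var socket = io.connect();
socket.on("connect", function () {
    socket.request("/users", {}, function (users) {
        console.log(users); //same data a GET request to /users would return
    });
});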

Now that’s pretty cool! But, there is still one piece of the puzzle missing and that is the Sails Blueprints.

Sails allows you to generate models just like in other frameworks; the difference is, Sails can also generate a production-ready RESTful API to go with them. This means if you generate a model named 'users', you can immediately run RESTful queries on the '/users' resource without any coding necessary.

If you are new to RESTful APIs, it’s just a convention for accessing data, where CRUD operations are mapped to the various HTTP methods.

So a GET request to '/users' will get all of the users, a POST request will create a new user, etc.
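
For example, with jQuery the same resource could be exercised like this (a hypothetical sketch; it assumes a 'users' model with a matching empty controller, as described above):

//List all users (GET /users)
$.get('/users', function (users) {
    console.log(users);
});

//Create a new user (POST /users)
$.post('/users', { username: 'bob' }, function (user) {
    console.log(user);
});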

So what does all of this mean?

It means we have a full RESTful API, mapped to Socket.io via Sails, without writing a single line of code!

But why are sockets better at retrieving data than an Ajax request? Well, besides being a leaner protocol, sockets stay open for bidirectional communication, and Sails has taken advantage of this. Not only will Sails pass you the data, but it will automatically subscribe you to updates on that database, and whenever something gets added, removed, or updated, your client will receive a notification via the web socket, letting you know about it.

This is why Sails is so awesome!


Sails + Backbone

The next topic I’d like to cover is Backbone integration, because if you aren't using a JavaScript framework, you are doing it wrong.

With this in mind, Sails and Backbone are the perfect pair. Backbone, like Sails, is extremely unobtrusive, all of its features are available, capable of being overridden, and optional.

If you have used Backbone before then you may know it connects natively with REST APIs, so out of the box, you can sync the data on the front-end with your Sails application.

But enough talk for now, let's take a look at all of this in action by creating a basic chat application. To get started, open up a terminal window and type:

sails new ChatApp
cd ChatApp
sails generate model users
sails generate model messages
sails generate controller messages
sails generate controller main

This will create a new app and generate some files for us. You can see from above, there are two different resources that you can generate: models and controllers. If you are familiar with the MVC design pattern, then you should know what these are, but in short, models are your data and controllers hold your logic code. So we are going to need two collections: one to hold the users, and one for the messages.

Next, for the controllers, we need one to handle the page routes, I called it 'main', then we have a second controller named 'messages'. Now you might wonder why I created a controller with the same name as our messages model? Well, if you remember, I said that Sails can create a REST API for you. What happens is, by creating a blank controller with the same name as a model, Sails will know to fall back and build a REST API for the corresponding resource.

So, we’ve created a controller for our messages model, but there’s no need to create one for the users model, so I just left it out. And that’s all there is to creating models and controllers.

Next, let’s setup some routes.

Routes

Routes are always a safe place to begin, because you usually have a good idea of which pages are going to be made.

So open up the routes.js file, which is in the config folder. It may look a little overwhelming at first, but if you remove all of the comments and add in the following routes, you will be left with something like this:

module.exports.routes = {
    '/' : {
         controller: 'main',
         action: 'index'
    },
    '/signup' : {
         controller: 'main',
         action: 'signup'
    },
    '/login' : {
         controller: 'main',
         action: 'login'
    },
    '/chat' : {
         controller: 'main',
         action: 'chat'
    }
};

We have a home page, a chat page, and then two pages for handling both the login and signup pages. I put them all in the same controller, but in Sails, you can create as many controllers as you’d like.

Models

Next, let's take a look at the generated messages model which can be located at "api > models > Messages.js". We need to add the necessary columns to our model. Now this is not absolutely necessary, but it will create some helper functions for us that we can use:

//Messages Model  
module.exports = {
    attributes  : {
        userId: 'INT',
        username: 'STRING',
        message: 'STRING'      
    }
};

For the messages model, we start with the id of the user that this message belongs to, a username so we won't have to query this separately, and then the actual message.

Now let's fill in the user's model:

//Users Model 
module.exports = {
    attributes  : {
         username: 'STRING',
         password: 'STRING'      
    }
};

And that’s it, we have just the username and password attributes. The next step is to create our route functions inside of the MainController.

Controllers

So open up the MainController, which can be found at "api > controllers > MainController.js". Let's begin by creating a function for each of the routes we defined above:

var MainController = {
    index: function (req, res) {
        
    },
    signup: function (req, res) {
        
    },
    login: function (req, res) {
        
    },
    chat: function (req, res) {
        
    }
};
module.exports = MainController;   

If you’re familiar with Express, then you’ll be happy to see that these functions are standard Express route functions. They receive two variables, req for the HTTP request and res to create the response.

Following the MVC pattern, Sails offers a function for rendering views. The home page doesn't need anything special, so let's just render the view.

index: function (req, res) {
    res.view();
},

Sails leans more toward convention over configuration, so when you call res.view(); Sails will look for a view file (with a .ejs extension by default) using the following pattern: 'views > controllerName > methodName.ejs'. So for this call, it will search for 'views > main > index.ejs'. It's also worth noting, these views only contain the view specific parts of the page. If you take a look at 'views > layout.ejs', you will see a call in the middle for <%- body %>, this is where your view file will be inserted. By default it uses this 'layout.ejs' file, but you can use other layout files just by passing the layout name into the res.view() function, under the property named 'layout'. For example: 'res.view( { layout: "other.ejs" } );'.
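
For example, an action that opts out of the default layout might look something like this (just a sketch; the 'about' action and 'other.ejs' layout file are hypothetical):

about: function (req, res) {
    //Renders 'views > main > about.ejs' inside 'views > other.ejs' instead of 'layout.ejs'
    res.view({ layout: "other.ejs" });
}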

I’m going to use the default layout file with a small adjustment: I’m going to add jQuery, Backbone, and Underscore. So in the 'layout.ejs' file, right before the closing </head> tag, add the following lines:

<script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/backbone.js/1.0.0/backbone-min.js"></script>

With that in place we’re now ready to create the home page.

The Home Page

Let’s create a new folder inside of the views folder named main, and inside of our new main folder, we’ll create a new file named 'index.ejs'.

Inside the file let's just create a login and signup form:

<h1>Code Chat</h1>
<div>
    <h3>Login</h3>
    <input type="text" id="loginName" placeholder="name" />
    <input type="password" id="loginPassword" placeholder="password" />
    <button id="loginButton">Login</button>
</div>
<div>
    <h3>Signup</h3>
    <input type="text" id="signupName" placeholder="name" />
    <input type="password" id="signupPassword" placeholder="password" />
    <input type="password" id="signupConfirmPassword" placeholder="confirm password" />
    <button id="signupButton">Signup</button>
</div>

Pretty simple, just the essentials.

The Login and Signup Areas

Next we need to add a little JS to get this communicating with the server. Now this won’t be Sails specific, we are just going to send an AJAX request via jQuery to the Sails server.

This code can either be included on the page itself or loaded in via a separate JS file. For the sake of convenience, I’m just going to put it at the bottom of the same page:

<script>
    $("#loginButton").click(function(){
        var username = $("#loginName").val();
        var password = $("#loginPassword").val();
        if (username && password) {
            $.post(
                '/login',
                {username: username, password:password},
                function () {
                    window.location = "/chat";
                }
            ).fail(function(res){
                alert("Error: " + res.getResponseHeader("error"));
            });
        } else {
            alert("A username and password is required");
        }
    });
</script>

This is all just standard JS and jQuery, we’re listening for the click event on the login button, making sure the username and password fields are filled in, and posting the data to the '/login' route. If the login is successful, we redirect the user to the chat page, otherwise we will display the error returned by the server.

Next, let's create the same thing for the signup area:

$("#signupButton").click(function(){
    var username = $("#signupName").val();
    var password = $("#signupPassword").val();
    var confirmPassword = $("#signupConfirmPassword").val();
    if (username && password) {
        if (password === confirmPassword) {
            $.post(
                '/signup',
                {username: username, password:password},
                function () {
                    window.location = "/chat";
                }
            ).fail(function(res){
                alert("Error: " + res.getResponseHeader("error"));
            });
        } else {
            alert("Passwords don't match");
        }   
    } else {
        alert("A username and password is required");
    }
}); 

This code is almost identical, so much so, that you can probably just abstract the whole Ajax part out into its own function, but for this tutorial it's fine.
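
If you did want to pull that shared Ajax logic out, a small helper along these lines would do the trick (just a sketch; it isn’t used anywhere else in this tutorial):

function postCredentials(url, username, password) {
    $.post(
        url,
        {username: username, password: password},
        function () {
            window.location = "/chat";
        }
    ).fail(function(res){
        alert("Error: " + res.getResponseHeader("error"));
    });
}

//Usage: postCredentials('/login', username, password);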

Now we need to go back to our 'MainController' and handle these two routes, but before we do that, I want to install a Node module. We’re going to need to hash the password, as plain text passwords are not a good thing, not even for demonstration! I found a nice module named 'password-hash' by David Wood which will work out nicely.

To install it, just go to the root of your Sails app, from within your terminal and type: npm install password-hash.

Once that installs, let's open the MainController and implement the two needed routes. Let’s start with signup:

signup: function (req, res) {
    var username = req.param("username");
    var password = req.param("password");

    Users.findByUsername(username).done(function(err, usr){
        if (err) {
            res.send(500, { error: "DB Error" });
        } else if (usr) {
            res.send(400, {error: "Username already Taken"});
        } else {
            var hasher = require("password-hash");
            password = hasher.generate(password);

            Users.create({username: username, password: password}).done(function(error, user) {
                if (error) {
                    res.send(500, {error: "DB Error"});
                } else {
                    req.session.user = user;
                    res.send(user);
                }
            });
        }
    });
}

It's a bit verbose, but all we’re doing here is reading the username and password from the POST request and making sure the username is not already taken. You can see I’m also using the password-hasher we just installed; it's super simple to use: just pass the password into the generate method and it will hash it using a random salt.

It's also worth mentioning, that at every possible location where we may run into an error or issue, we’re sending back an HTTP error code and returning a message via a custom header named 'error' which, if you remember, we’re displaying in an alert message on the index page.

Another noteworthy point is that we’re using a 'magic' function named 'findByUsername'; this is made possible because we have a username column inside of our Users model.

Finally, at the bottom you can see if everything went well, we’re storing the user in a session variable and returning it with a default status code of 200, which will tell jQuery that the AJAX request was successful.

Next, let's write the login function:

login: function (req, res) {
    var username = req.param("username");
    var password = req.param("password");
    
    Users.findByUsername(username).done(function(err, usr) {
        if (err) {
            res.send(500, { error: "DB Error" });
        } else {
            if (usr) {
                var hasher = require("password-hash");
                if (hasher.verify(password, usr.password)) {
                    req.session.user = usr;
                    res.send(usr);
                } else {
                    res.send(400, { error: "Wrong Password" });
                }
            } else {
                res.send(404, { error: "User not Found" });
            }
        }
    });
}

Again, this is very similar to the previous signup function: we search for a user with the same username that was posted from the form, and if one is found, we check whether the password matches using the hasher’s verify method. The reason we can't just hash the password again and pass it into the model's find function is that the hasher uses a random salt, so hashing the password again would produce a different value.

The rest of the code is the same; if everything checks out, we store the user in a session and return it, otherwise we send back an error message.

The login system is now complete and we’re finally able to move on to building the chat feature.

Building the Chat Feature

Since we’ll be using Backbone for getting the messages, the actual route function will be very simple. Here’s the complete chat function:

chat: function (req, res) {
    if (req.session.user) {
        res.view({username: req.session.user.username});
    } else {
        res.redirect('/');
    }
}

We start off by checking if the user is logged in or not, if this checks out, then it will load the view, passing it the username that was in the session, otherwise we just redirect to the home page.

Now let’s create a new view named 'chat.ejs' inside of the main folder. Open it up and let's create a simple form to post new messages and a div container for displaying all of them.

<h2>Welcome <%= username %></h2>
<div id="newMessageForm">
    <textarea id="message" placeholder="Enter your message here:"></textarea>
    <button id="postMessageButton">Add Message</button>
</div>
<div id="messagesContainer">
</div>

So for this view, we just used some pretty standard HTML. The only thing that may require some explanation is the <%= username %> code, this style of coding is not specific to Sails, it’s actually the syntax for EJS. This syntax is very similar to PHP’s short tags. <% is the equivalent of <? in PHP and <%= is the same as <?=. The first snippet of EJS allows you to integrate standard JS code on the page, whereas the second prints out the code within. Here we’re just printing out the username that we passed in from the controller.

The rest of our chat feature will be all JavaScript. To get started, let's take a look at how you would write the chat functionality using standard Backbone, and then we’ll see how to take advantage of web sockets.

At the bottom of the page, add the following JS:

<script>
    var MessageModel = Backbone.Model.extend({
        urlRoot: '/messages',
    });

    var MessageCollection = Backbone.Collection.extend({
        url: '/messages',
        model: MessageModel,
    });

    var messages = new MessageCollection();
    messages.fetch();

    $("#postMessageButton").click(function(){
        var messageText = $("#message").val();
        messages.create({message: messageText}, {wait: true});
        $("#message").val("");
    });
</script>

Since Sails automatically creates an API that Backbone understands natively, no extra server code has to be written, it doesn’t get much easier than that. This is what I was talking about when I said that Sails was not made to be a 'framework'. It doesn't try to make you use its own syntax, it was made to get things done and as you can see, it delivers.

To test it out, open up a terminal window and navigate to your Sails app folder, then type 'sails lift' to start it up. By default it will launch to http://localhost:1337. Now just signup and post a few messages.
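
If you want to confirm that the blueprint API is actually storing your messages, you can also hit it directly from the browser's console with a quick jQuery call (a sanity check only, using the generated '/messages' route):

$.get('/messages', function (msgs) {
    console.log(msgs); //every message saved so far
});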

To see your posted messages you can console.log the messages variable, or look at it inside of the browser’s console. Now the next thing we should implement is a view so we can see the posted messages in the browser.

_.templateSettings = {
    interpolate : /{{(.+?)}}/g
};

var MessagesView = Backbone.View.extend({
    el: '#messagesContainer',
    initialize: function () {
        this.collection.on('add', this.render, this);
        this.render();
    },
    template: _.template("<div><p>{{ message }}</p></div>"),
    render: function () {
        this.$el.html("");
        this.collection.each(function(msg){
            this.$el.append(this.template(msg.toJSON()));
        }, this);
    }
});

var mView = new MessagesView({collection: messages});

We start off by defining a view, attaching it to the div that we created earlier, then we add an event handler on the collection to re-render the div every time a new model gets added to the collection.

You can see at the top, I had to change the default Underscore template settings from the EJS-style syntax to Mustache’s syntax. This is because the page itself is already an EJS document, so the EJS-style tags would be processed on the server and never reach Underscore.

Note: I didn't come up with the Regex for this, that credit goes to the Underscore docs itself.

Lastly, at the bottom you can see we created a new instance of this view, passing it the collection variable.

If all went well, you should now see your messages in the browser, and it should update whenever you create a new post.


Sails Policies

Now you may have noticed that we’re not setting the userId or the username when we submit the posts, and this is for security purposes.

You don't want to put this kind of control on the client side. If all someone has to do is modify a JavaScript variable to control another user’s account, you’ll have a major problem.

So, how should you handle this? Well, with policies of course.

Policies are basically middleware, which run before the actual web request, where you can stop, modify, or even redirect the request, as needed.

For this app, let's create a policy for our messages. Policies are applied to controllers, so they can even be run on normal pages, but for this tutorial let's just stick with one for our messages controller.

Create a file named 'MessagesPolicy.js' inside the 'api > policies' folder, and enter the following:

module.exports = function (req, res, next) {
    if (req.session.user) {
        var action = req.param('action');
        if (action == "create") {
            req.body.userId = req.session.user.id;
            req.body.username = req.session.user.username;
        }
        next();
    } else {
        res.send("You Must Be Logged In", 403);
    }
};

So, what’s going on here? You can see this function resembles a normal route function, the difference though is the third parameter, which will call the next middleware in the stack. If you’re new to the idea of middleware, you can think of it like a Russian nesting doll. Each layer gets the request, along with the response variables and they can modify them as they see fit. If they pass all of the requirements, the layer can pass it further in, until it reaches the center, which is the route function.

So here we are, checking if the user is logged in, if the user isn't, we display a 403 error and the request ends here. Otherwise, (i.e. the user is logged in) we call next(); to pass it on. In the middle of the above code, is where we inject some post variables. We’re applying this to all calls on the 'messages' controller (basically the API), so we get the action and check if this request is trying to create a new message, in which case we add the post fields for the user's id and username.

Next, open up the policies.js file which is in the config folder, and add in the policy that we just created. So your file should look like this:

module.exports.policies = { 
    '*': true,
    'messages': 'MessagesPolicy'
};

With this put into place, we’ll need to delete all of the old records, as they do not have these new pieces of information. So, close the Sails server (ctrl-c) and in the same terminal window type: rm -r .tmp to remove the temporary database, giving us a clean slate.

Next, let's add the username to the actual posts, so in the 'chat.ejs' change the template to:

template: _.template("<div><p><b>{{ username }}: </b>{{ message }}</p></div>"),

Restart the Sails server (again using sails lift) and signup another new user to test it out. If everything is working correctly, you should be able to add messages and see your name in the post.

At this point we have a pretty good setup: we fetch the posts automatically using Backbone and the API, plus we have some basic security in place. The problem is, it won't update when other people post messages. Now you could solve this by creating a JavaScript interval and polling for updates, but we can do better.
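
For comparison, the polling approach would look something like this (a naive sketch that simply re-fetches the messages collection from above on a timer):

//Not recommended: poll the server every five seconds
setInterval(function () {
    messages.fetch();
}, 5000);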

Leveraging Websockets

I mentioned earlier that Sails leverages websockets’ bidirectional abilities to post updates on the subscribed data. Using these updates, we can listen for new additions to the messages table and update the collection accordingly.

So in the chat.ejs file, let's create a new kind of collection; a SailsCollection:

var SailsCollection = Backbone.Collection.extend({
    sailsCollection: "",
    socket: null,
    sync: function(method, model, options){
        var where = {};
        if (options.where) {
            where = {
                where: options.where
            }
        }       
        if(typeof this.sailsCollection === "string" && this.sailsCollection !== "") {
            this.socket = io.connect();
            this.socket.on("connect", _.bind(function(){
                this.socket.request("/" + this.sailsCollection, where, _.bind(function(users){
                    this.set(users);
                }, this));
    
                this.socket.on("message", _.bind(function(msg){
                    var m = msg.uri.split("/").pop();
                    if (m === "create") {
                        this.add(msg.data);
                    } else if (m === "update") {
                        this.get(msg.data.id).set(msg.data);
                    } else if (m === "destroy") {
                        this.remove(this.get(msg.data.id));
                    }
                }, this));
            }, this));
        } else {
            console.log("Error: Cannot retrieve models because property 'sailsCollection' not set on the collection");
        }
    }
});

Now it may be long, but it's actually very simple, let's walk through it. We start off by adding two new properties to the Collection object, one to hold the name of the Sails 'model' and one to hold the web socket. Next, we modify the sync function, if you’re familiar with Backbone, then you’ll know that this is the function which interfaces with the server when you call things such as fetch. Usually, it fires off Ajax requests, but we’re going to customize it for socket communication.

Now, we’re not using most of the functionality that the sync function offers, mainly because we haven't added the ability for users to update or delete messages, but just to be complete, I will include them within the function definition.

Let’s take a look at the first part of the sync function:

var where = {};
if (options.where) {
    where = {
        where: options.where
    }
}   

This code first checks if any 'where' clauses were sent through, this would let you do things like: messages.fetch({ where : { id: 4 } }); to only fetch rows where the id equals four.

After that, we then have some code that makes sure the 'sailsCollection' property has been set, otherwise we log an error message. Afterwards, we create a new socket and connect to the server, listening for the connection with the on('connect') event.

Once connected, we request the index of the 'sailsCollection' specified to pull in the current list of models. When it receives the data, we use the collection's set function to initially set the models.

Alright, now so far, we have the equivalent of the standard fetch command. The next block of code is where the push notifications happen:

this.socket.on("message", _.bind(function(msg){
    var m = msg.uri.split("/").pop();
    if (m === "create") {
        this.add(msg.data);
    } else if (m === "update") {
        this.get(msg.data.id).set(msg.data);
    } else if (m === "destroy") {
        this.remove(this.get(msg.data.id));
    }
}, this));

Now the action that’s being performed (whether we’re creating, updating, or destroying a message) can be found inside of the actual msg, which is then inside of the uri. To get the action, we split the URI on forward slashes ('/') and grab just the last segment using the pop function. We then try to match it up with the three possible actions of create, update, or destroy.

The rest is standard Backbone, we either add, edit, or remove the specified model. With our new class almost complete, all that’s left to do is change the current MessageCollection. Instead of extending the Backbone collection, it needs to extend our new collection, like so:

var MessageCollection = SailsCollection.extend({
    sailsCollection: 'messages',
    model: MessageModel
});

In addition to extending our new collection, we’ll make another change so that instead of setting the URL property, we now set the sailsCollection property. And that's all there is to it. Open up the application in two different browsers (e.g. Chrome and Safari) and sign up two separate users. You should see that posted messages from either browser get shown immediately on the other; no polling, no trouble.

Chat Screenshot

Conclusion

Sails is a breath of fresh air, within a clutter of frameworks. It checks its ego at the door, and does what it can to help the developer instead of the brand. I have been chatting with the Sails devs and I can tell you that there’s even more awesomeness in the works, and it will be interesting to see where this framework goes.

So in conclusion, you’ve learned how to setup, use, and secure your data from within Sails, as well as how to interface it with the popular Backbone library.

Like always, if you have any comments, feel free to leave them below, or join us on the Nettuts+ IRC channel ("#nettuts" on freenode). Thank you for reading.

June 05 2013

00:12

Creating a Multi-Page Site with Meteor

As with any web application, creating multi-page sites requires a specialized set of tools. In this article, we’ll take a look at developing a library that not only can differentiate between the different URIs, but one that takes advantage of Meteor’s core features.


Intended Library Features

Whenever I have to develop a specific and focused library like this, I like to start from the outcome: how do I want this to work?

So, we can begin by writing down some of the features that we’d like it to have:

  • The ability to load different pages according to the URI
  • Reading parameters from the URI (placeholders)
  • Keeping the pages dynamic as per the Meteor standard

That looks pretty good. Now after looking over these features, you may think they are pretty similar to a ‘router’ type of library and I agree. So, let’s take a look at how our ‘router’ library would work in action:

Router.addRoute('/home', 'homeTemplate');
Router.addRoute('/user/:username', 'profileTemplate');
Router.addRoute('/contact', 'contactTemplate');

Router.run();

In design, you have concepts like ‘form-follows-function’, which uses the idea of laying out everything first, and designing it later.

In code, I often find the opposite is more helpful. We as developers can work in many directions, and having a working example of what the design should look like keeps us focused and efficient.

Now that I know what I want to do, it’s just a matter of implementing it. So, let’s take another look at the features we wrote above: we want to be able to add routes and have Meteor render the given template partial. Of course, Meteor’s views work off of Handlebars templates by default, so I’ve made the decision to make this the foundation of our router.

The Breakdown

Next, let’s break down the project into different sections so that we know what features we need to work on.

  • We’ll start by getting the current page’s URI, as we need to match the routes against something. This can be done, easily enough, using the window.location.pathname variable, provided by the browser.
  • Next, we need the ability to add routes. This also, is pretty simple now that we have done some example code; we will have a function named addRoute which will accept a route pattern and a template name. This function will then have to store all of these route objects inside some kind of array.
  • With the current URI and an array of routes stored, we will need some kind of method to see if they match.
  • And last, but not least, we will need to take a matched route and display its attached template.

Hopefully, you can see that by laying out the requirements, it really helps to streamline the planning stage. We are now ready to jump in.


Setting Up

To get started, let’s create a new Meteor project, I am going to name mine ‘routerdemo‘. Now inside, we’ll create a folder named ‘lib‘ inside another folder named ‘client‘:

meteor create routerdemo
cd routerdemo
mkdir -p client/lib

Next, create a file named ‘router.js‘ inside the newly created lib folder. The reason we are sticking it in ‘client‘, is because the server doesn’t have access to the window.location.pathname variable and as such, won’t work with our router. Putting stuff inside a folder named ‘client‘ assures they will only be run on the client-side.

Now inside the router.js file you just made, let’s put in some scaffolding:

//////////////////////////
// Router
//////////////////////////

Router = {
	uri: /*Current URL*/,
	routes: [],
	
	addRoute: /* function to add a route */,
	getMatchingRoute: /* function to get matching route */,
	run: /* function to display the matched route's template */
};

I think that’s a pretty good scaffold, I even filled in the code for the routes array, and added some comments (progress!). Now to further develop our library, we need to discuss how we are going to match these routes together.


Matching Routes

This is not as simple as currentRoute === route, as we’re dealing with dynamic placeholders. We want a route for ‘/user/:id‘ to match a URI of ‘/user/42‘ and so on.

To do this, we are going to have to split the URI up and do a little more in-depth analysis. Now some people might be thinking of using a regex, but that’s a little over the top if you ask me. A much simpler approach is to split the segments up, make sure the two routes have the same number of segments, and ensure that the parts of the route which aren’t placeholders match up.

This can easily be achieved by splitting the URI where ever there’s a forward slash (‘/’), using the .split method. So our first check would ensure that the two routes have the same number of segments.

If the route is ‘/user/:id‘ and we get a URI of ‘/profile/42/foo/bar‘, we don’t even need to do any further checking, one has two segments and the other has four, so that seems like a good primary check. The next thing we can do is filter through the lists and make sure each piece that isn’t a placeholder matches up. If these two checks are true, we know the route matches.
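
To see what that splitting step produces, here’s a quick illustration with hypothetical values (Underscore is available since Meteor bundles it):

'/user/42'.split('/');              // => ["", "user", "42"]
_.compact('/user/42'.split('/'));   // => ["user", "42"]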

Setting the URI Variable

So let’s get started with setting the uri variable:

uri: _.compact(window.location.pathname.split("/")),

In the above code, we’re splitting the array on forward slashes, and turning the string into an array of segments. Then we’re using Underscore’s compact function to remove any empty items from the list, these could be caused by a forward slash at the beginning or by someone using two forward slashes by mistake. By doing this, it makes our system much more forgiving.

Adding Routes

Next, we need to create the function to add a route, this is a fairly similar process, but because we are going to be matching the placeholders later, we are going to store not just the segments and the template name, but the indexes for the placeholders as well.

Here’s the completed function:

addRoute: function(route, template) {
	var segments =  _.compact(route.split("/"));
	
	var placeholders = _.reduce(segments, function(currentArr, piece, index) {
		if (piece.substr(0, 1) === ":") {
			currentArr.push(index);
			segments[index] = piece.substr(1);
		}
		return currentArr;
	}, []);
	
	this.routes.push({
		segments: segments,
		template: template,
		placeholderIndexes: placeholders
	});
},

We start by splitting up the route into segments, just like we did for the URI, but this time we also need to store the indexes of the placeholders for future reference, using Underscore’s reduce method.

For the unaware, the reduce function is similar to the each method in that it also cycles through all the elements of a list; the difference is that it passes whatever each iteration returns on to the next iteration, ultimately returning the final result. We are starting with a blank array (the 3rd parameter), adding each index as we find it, and passing that array along until finally it’s returned back to the placeholders variable.
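
If reduce is new to you, here is a tiny, unrelated illustration of how it accumulates a result (hypothetical values):

_.reduce([1, 2, 3], function (memo, num) {
    return memo + num;
}, 0); // => 6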

The next thing you’ll see going on in here, is that we are renaming the segments that are placeholders and removing the colon. We do this purely for aesthetic reasons and later on, it will make it easier to reference in the templates.

Finally, we push the new data to our routes array, which we created earlier.

Matching a Route to a URI

The next step is to filter through the list and look for a route that matches the current URI.

Here is the complete function:

getMatchingRoute: function(){
     for (var i in this.routes) {
         var route = this.routes[i];
         var data = {};

         if (route.segments.length === this.uri.length) {
             var match = _.every(route.segments, function(seg, i){
                  if (_.contains(route.placeholderIndexes, i)) {
                       data[seg] = this.uri[i];
                       return true;
                  } else {
                       return seg === this.uri[i];
                  }
             }, this);

             if (match) {
                  return {
                      data: data,
                      template: route.template
                  }
             }
        }
    }
    //no matches (add 404 or default template maybe?)
    return false;
},

We are doing quite a few things here, so let’s walk through it. We begin by cycling through the array of routes, and we assign the current route to a variable, along with an empty data object to store the placeholders.

Next, we do the initial check of making sure the two routes have the same number of segments; otherwise, we just cycle on to the next route. If they do have the same number of segments, we have to check whether they actually match, which can be done using Underscore's ‘_.every‘ function. This function is again like the ‘_.each‘ method, except that it returns a boolean. It runs the given function for each item in the array; if every call returns true, the function returns true, otherwise it returns false, so it's perfect for doing things like this where we need to verify each segment.
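
As a quick, standalone illustration of _.every (hypothetical values):

_.every([2, 4, 6], function (n) { return n % 2 === 0; }); // => true
_.every([2, 3, 6], function (n) { return n % 2 === 0; }); // => false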

Now the check that we are performing is pretty easy, if it’s a placeholder, then it automatically fits, as a placeholder can be equal to any value. If it isn’t a placeholder, we just make sure the two segments match, pretty simple.

In order to check whether or not this is a placeholder, we pass the current segment's index (stored in ‘i‘) to Underscore's _.contains function, which checks whether that index appears in the route's list of placeholder indexes.

Now you may be wondering what the first line inside this ‘if‘ statement is doing, well, it’s storing the segment in the data array under the given placeholder name. So, say for example you had a route of ‘/user/:name‘ and the current URI is ‘/user/bob‘, then this line will add a property to the data object called ‘name‘ and pass it a value of bob.

The rest is fairly obvious: we return true or false depending on the circumstances, and the result gets stored in ‘match‘. If match is true, we return the data along with the template's name, and if there was no match, we return false. And that's it for our getMatchingRoute method.

So far, we can get the current URI, we can add routes, and we can find a matching route, the only thing left is to display the correct route, and for this we need to write the ‘run‘ method.


Displaying the Template

Meteor uses handlebars for templates and stores all the templates in a variable, appropriately named, ‘Template‘. Now, if you are familiar with handlebars then you know these templates are just functions, and by calling them (optionally passing in some data) we get back the template’s HTML.

Now, calling these functions to get the template’s HTML would work fine, but it isn’t very Meteor-like, as what we’d end up with is just a normal static website. Luckily, adding in the dynamic behavior is easier than you might think, all we need to do is wrap the function call in a ‘Meteor.render‘ call. Putting it inside this function will make it react to changes in the data and keep it ‘live’.

The Run Method

Because of this, running the router is very simple, let’s create the run method:

run: function(){
    var route = this.getMatchingRoute();
    if (route) {
        var fragment = Meteor.render(function() {
            if (Template[route.template] !== undefined) {
                 return Template[route.template](route.data);
            }
        });

        document.body.appendChild(fragment);
    } else {
        //404
    }
}

We start by getting the matched route, using the getMatchingRoute function which we just wrote, we then make sure there is a match, and finally we use an else statement to handle displaying a 404.

Inside the if statement, we call Meteor.render and inside, we check and call the returned template, passing with it the data from the placeholders. This function will return an HTML fragment, which we can then just append to the document’s body.

So with about 60 lines of code, we’ve completed our router.

Testing It Out

The next step is to test it. I’m going to use the same code that we wrote earlier when we planned out this project, as it will be a good measure of whether we accomplished, what we wanted to accomplish. Let’s add a file named main.js inside the client folder and add in the following:

Meteor.startup(function(){
    Router.addRoute('/home', 'homeTemplate');
    Router.addRoute('/user/:username', 'profileTemplate');
    Router.addRoute('/contact', 'contactTemplate');

    Router.run();
});

In the above code, we first need to make sure our templates and body will be available before we try working with our Router. We do this by wrapping all of our code inside of the Meteor.startup method call. This will ensure everything is ready, and inside of the startup method, we can then add our routes and run the router.

Creating Our Templates

Now let’s create a couple of templates, this can be done anywhere, you can create a subfolder inside the client folder named templates and create a separate HTML file for each, but since these will be short templates and just for example purposes, I am going to put them together inside a file named ‘templates.html‘ inside the ‘client‘ folder:

<template name="homeTemplate">
    <h1>This is the Home Page</h1>
</template>

<template name="profileTemplate">
    <h1>Profile Page</h1>
    <p>Welcome back {{username}}</p>
</template>

<template name="contactTemplate">
    <h1>Contact Page</h1>
    <p>Contact me on twitter at {{twitterName}}</p>
</template>

The first template is pretty basic, it just contains a little HTML code for the home page’s heading. The second template is very similar to the first template, but this time we use the username route parameter. Now the last template also uses a placeholder, but its route doesn’t have the twitterName segment. This is because standard Meteor placeholders will still work, and work reactively.

Back inside the client folder, let’s now create a file named ‘templates.js‘, to declare the contact placeholder.

Template.contactTemplate.twitterName = function (){
    Session.setDefault('twitter_name', '@gabrielmanricks');
    return Session.get('twitter_name');
}

You could have just returned a string, but I wanted to demonstrate that everything is still reactive. The last step is to delete the default html and js files from the root directory (in my case they are named routerdemo.html and routerdemo.js). With that done, start the Meteor server and navigate to the given routes.

Try going to ‘/home‘, ‘/user/gmanricks‘ or ‘/contact‘ and they should all work for you as expected. Also, since we stored the twitter name in the Session, we can just open up the browser’s console on the contact page and enter:

Session.set('twitter_name', '@nettuts');

And you will see that the page will update in real time!


Summary

In this article, we built a basic router library, while still giving it a Meteor twist. We covered a lot of the core concepts and as it turns out, a lot of Underscore concepts as well.

In the end, I hope I got the message across that there is no real “magic” going on here. It’s really all about implementing what you need, as opposed to what you can.

Thank you for reading, I hope you’ve enjoyed it. Like always, if you have any questions you can leave them below or ask me on the NetTuts IRC or on my Twitter.

Meteor Starter Book

Note: If you are interested in learning more about Meteor, I have just released my new book, which details the process of building an app from its conception & planning to securing & deploying. You can pick up the book in both ebook and softcover format from Amazon.

May 28 2013

16:58

Drawing With Two.js

Advanced graphics are a big part of the web these days, but there are a couple of different renderers in the mix. You could use canvas, of course; but SVG and WebGL are also options. In this tutorial, we’ll review a relatively new drawing library, two.js, which provides one API that does the same thing with all three of those renderers. If you're ready, let's check it out!


Step 1 – Setting Up

The first step is to create a Two instance and put it on the page. The Two constructor takes an object with a number of parameters:

    var two = new Two({
       fullscreen: true 
    });

In this case, we're using the fullscreen option, which makes the drawing area take up the entire browser window. If we wanted our drawing area to have a specific size, we could use the width and height properties instead; these both take a number for a pixel value. There's also the autostart parameter; if this is set to true, any animations will run right away when the page is loaded.

There's also the type parameter: this decides which renderer will be used. You can choose between canvas, SVG, and WebGL. You don't just type the name, though; you use a library constant of sorts: either Two.Types.canvas, Two.Types.svg, or Two.Types.webgl. Just to be clear, two.js will just default to using SVG; it doesn't do any kind of feature detection to see what the browser will support. You'll have to do that on your own (and I think that's a good idea: small tools, one thing well, and all that).
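
So, if you did want to request a specific renderer and a fixed size, the options object might look something like this (a sketch; the dimensions are arbitrary):

var two = new Two({
    type: Two.Types.canvas,
    width: 600,
    height: 400
});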

So, once we have a Two instance, what do we do with it? First, you'll want to append it to the page. It has an appendTo method that takes an HTML element as a parameter, so let's set this up:

<div id="main"></div>
<script src="./two.min.js"></script>
<script src="./main.js"></script>

Then, in main.js, we start with this:

var el = document.getElementById("main"),
    two = new Two({ 
        fullscreen: true
    });

two.appendTo(el);

With all this set up, we're ready to actually draw a few shapes.


Step 2 – Drawing Basic Shapes

We'll begin with basic shapes; while we can create our own complex shapes with new Two.Polygon, most simple shapes can be made with a few handy methods.

Let's start with circles. The makeCircle function takes three parameters:

var circle = two.makeCircle(110, 110, 100);
circle.fill = "#881111";

two.update();

We’ll review from the bottom, up: the call to two.update updates our drawing area and actually renders the content. Backing up to the circle, the first two parameters are the x and y coordinates for the center of the circle. Then, the third parameter is the radius for the circle. All the two.make... functions return a Two.Polygon object. As we go through this tutorial, you'll see several properties and methods that you can use on these shapes. Here's the first: fill. As you might guess, it sets the fill color: any valid CSS color value will do.

The result should look like this:

Now, what about rectangles? The two.makeRectangle method takes four parameters. Just like the circle, the first two parameters mark the x and y coordinates for the center of the rectangle. Then, param three is width and param four is the height of the rectangle.

var rect = two.makeRectangle(115, 90, 150, 100);
rect.fill = "orange";
rect.opacity = 0.25;
rect.noStroke();

two.update();

Again, we're using the fill property. We're also using the opacity property, which accepts a decimal value between 0 and 1; we've got quarter-opacity here. Finally, we're using the noStroke method, which removes the stroke (border) from the rectangle. Here's what we have:

Ellipses are pretty simple, too: as you might guess, the first two parameters set the center of the ellipse. Then, we've got width and height:

var ellipse = two.makeEllipse(100, 40, 90, 30);
ellipse.stroke = "#112233";
ellipse.linewidth = 5;
ellipse.noFill();

two.update();

For new properties: we've got stroke, which sets the color of the border; to set the width of that border, we use the linewidth property. Then, remember noStroke? The noFill method is the same, except it removes the fill color from our shape (without that, our shapes default to a white fill).

Of course, the simplest shapes are lines.

var line = two.makeLine(10, 10, 110, 210);
line.linewidth = 10;
line.stroke = "rgba(255, 0, 0, 0.5)";

The first two parameters are the x and y for one end of the line; the second set are for the other end.

Probably the most awkward shape to create is the curve. The two.makeCurve method takes as many sets of x, y parameters as you'd like – each pair being a point where the line will curve. Then, the last parameter is a boolean: make it true if the shape is open, meaning the ends don't connect. If you'd like two.js to draw a line connecting the two ends of the curves, that should be false.

var curve = two.makeCurve(110, 100, 120, 50, 140, 150, 160, 50, 180, 150, 190, 100, true);
curve.linewidth = 2;
curve.scale = 1.75;
curve.rotation = Math.PI / 2; // Quarter-turn
curve.noFill();

You know linewidth, but what about scale? We can use this to shrink or expand our shape; here, we're scaling the shape to 175% of its original size. Then, we can use rotation to rotate our shape by a number of radians; we're doing 90 degrees, which is half-PI radians.

Finally, you might think that since we've made the shape open, we won't get a fill; but that's not true. An unclosed curve will still have a fill, so we're using noFill to remove the fill and end up with just the curve.

The last shape type is the catch-all: it's the general polygon. Actually, it's pretty much just like the curve, except the lines go straight from point to point.

var poly = two.makePolygon(110, 100, 120, 50, 140, 150, 160, 50, 180, 150, 190, 100);
poly.linewidth = 4;
poly.translation = new Two.Vector(60, 60);
poly.stroke = "#cccccc";
poly.fill = "#ececec";

As with the curve, we have as many pairs of coordinates as we'd like, and then the open boolean; here we're setting it to false, so the shape will be closed.

We're also setting a translation here; this allows us to move the shape left or right and up or down. We're setting the translation property to a Two.Vector instance. The Two.Vector constructor takes two parameters: an x and a y. These end up being the coordinates for the center of the shape. You don't actually have to create a new vector for this; you can just assign the x and y values directly:

poly.translation.x = 60;
poly.translation.y = 60;

Here's what we get:


Step 3 – Making Groups

So far, we've been working with individual shape objects; however, it's possible to group shapes together and interact with them as one piece.

You can make a group with the two.makeGroup method. Then, we can use its add method to add a shape to the group.

var group = two.makeGroup(),
    rect = two.makeRectangle(0, 0, 100, 100),
    circ = two.makeCircle(50, 50, 50);

rect.fill = "red";
circ.fill = "blue";

group.add(rect);
group.add(circ);

two.update();

If you run this, it's pretty basic; just like you'd get without the group bits.

But then, we can work with the group, using any of the transformations we can do on an individual shape. For example, how about a translation?

group.translation.x = 100;
group.translation.y = 100;

two.update();

As with regular shapes, groups are ordered from back-to-front as they are created. However, if you add a shape to one group, and then another group, it will be removed from the first group. This is great if you need to change the front-to-back order of the shapes when animating (which we'll get to). So, if we start with this:

var topGroup = two.makeGroup(),
    bottomGroup = two.makeGroup(),
    rect = two.makeRectangle(100, 100, 100, 100),
    circ = two.makeCircle(150, 150, 50);
rect.fill = "red";
circ.fill = "blue";

topGroup.add(rect);
topGroup.add(circ);  

two.update();

We have the same as above:

But, if we add the rect to the bottomGroup instead . . .

bottomGroup.add(rect);

Now, our square is on the top.

Step 4 – Animating Shapes

Finally, let's talk about animation. You already know that two.js renders the shapes you've created when you call two.update(). If you call two.play() instead, it's like calling update() repeatedly, using Request Animation Frame. Each time this happens, two.js fires an “update” event. This is how we can produce animation: listen for the “update” event; and when it happens, run a function to set up the next frame.
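
Before we build something more interesting, here's the smallest possible version of that idea (a sketch that just nudges a circle to the right on every frame):

var ball = two.makeCircle(50, 50, 20);
ball.fill = "green";

two.bind("update", function (frameCount) {
    ball.translation.x += 1; // move one pixel per frame
});

two.play();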

Our examples so far have been rather simple, so let's step things up a notch: We’ll create an orbiting planet with its own orbiting moon. Remember, we start by creating a Two instance:

var el = document.getElementById("main"),
    two = new Two({ 
        fullscreen: true
    }).appendTo(el);

Next, we need to set up a few variables.

var earthAngle = 0,
    moonAngle  = 0,
    distance   = 30,
    radius     = 50,
    padding    = 100,
    orbit      = 200,
    offset     = orbit + padding,
    orbits     = two.makeGroup();

We'll increment earthAngle and moonAngle to get our planet and moon around their orbits. The distance variable is how far our moon will be from our earth. The radius is the radius of our planet earth, and the padding is how much space our planet will have outside its orbit. Said orbit comes from the orbit variable. The offset variable is how far our planet will be offset from the edge of the canvas. Finally, the orbits group will hold the two orbit circles, which will allow us to show or hide them at will. Don't worry if you're a bit confused; you'll see how they all work together in a second.

We'll start with the earth's orbit line. Of course, that's just a simple circle:

var earthOrbit = two.makeCircle(offset, offset, orbit);
earthOrbit.noFill();
earthOrbit.linewidth = 4;
earthOrbit.stroke = "#ccc";
orbits.add(earthOrbit);

two.update();

There's nothing new at all here. Here's what you should see:

Then, we need to create a planet and place it on its orbit. First, we need a means to figure out where on the orbit the planet should be placed; and of course, this needs to change for every animation frame. So, let's create a function that will return the center x and y coordinates for the orbit based on the current angle for positioning around the circle and the radius of the orbit:

function getPositions(angle, orbit) {
    return {
        x: Math.cos(angle * Math.PI / 180) * orbit,
        y: Math.sin(angle * Math.PI / 180) * orbit
    };
}

Yes, it's a little bit of trigonometry, but don't worry too much: basically, we're converting the angle (which is in degrees) to radians, feeding it to the JavaScript sine and cosine methods, and then multiplying the result by the orbit radius. Now, we can use this function to add the earth to the picture:

var pos = getPositions(earthAngle++, orbit),
    earth = two.makeCircle(pos.x + offset, pos.y + offset, radius);

earth.stroke = "#123456";
earth.linewidth = 4;
earth.fill = "#194878";

We start by getting the position for the first earthAngle (value of 0, remember?); then, we make our earth based on those positions (plus the offset) and color it up. Here's what we end up with:

Now, let's animate this planet. The event binding code actually comes straight from Backbone, so it might look familiar:

two.bind("update", function (frameCount) {
    var pos = getPositions(earthAngle++, orbit);
    earth.translation.x = pos.x + offset;
    earth.translation.y = pos.y + offset;
});

two.play();

What's happening here is that every time the update event occurs, we're using the getPositions function to calculate the position for the next angle on earth. Then, we just have to set the center of the earth to those new positions, plus the offset. Finally, we call two.play() to start the update events. If you reload the page now, you should see the earth rotating around the orbit.

Nice job so far, eh? Now, how about the moon and its orbit path; this will go above the bind statement.

var moonOrbit = two.makeCircle(earth.translation.x, earth.translation.y, radius + distance);
moonOrbit.noFill();
moonOrbit.linewidth = 4;
moonOrbit.stroke = "#ccc";
orbits.add(moonOrbit);

var pos = getPositions(moonAngle, radius + distance), 
    moon = two.makeCircle(earth.translation.x + pos.x, earth.translation.y + pos.y, radius / 4);

moonAngle += 5;
moon.fill = "#474747";

This looks a lot like the code for the planet: we center the moon's orbit circle at the center of the earth using its translation properties; its radius is the radius of the earth plus the distance the moon should be away from the earth. Again, we add the moonOrbit to the orbits group.

Next, we create the moon, by first getting its desired position and creating a circle at that location. For a radius, we'll use a quarter of the radius we used for the earth. We'll be incrementing the angle of the moon by 5 each time, so it will move faster than the earth.

Turning off the animation (by commenting out the two.bind statement), we get this:

Last step: get the moon animating. Inside that same two.bind statement, add these lines:

var moonPos = getPositions(moonAngle, radius + distance);
moon.translation.x = earth.translation.x + moonPos.x;
moon.translation.y = earth.translation.y + moonPos.y;
moonAngle += 5;

moonOrbit.translation.x = earth.translation.x;
moonOrbit.translation.y = earth.translation.y;

Just like before, we get the new position for the moon, and position it relative to the earth. Then, we also move the moon's orbit ring so that it stays centered on the earth.

With all this in place, our little example is complete. Here's a still-shot of the action:

Like I said, we can also hide the orbits. Since they're both in the orbits group, we can use the visible property of the group:

orbits.visible = false;

And now:


Conclusion

Well, that's a wrap on this tutorial. Do you think you'll be using two.js in any of your own projects? Or maybe you have a better alternative? Let's hear about it in the comments!

May 24 2013

15:04

Cargo-Culting in JavaScript

Cargo-cult programming is what a programmer does when he or she doesn't know a particular language or paradigm well enough, and so ends up writing redundant and possibly harmful code. It rears its head quite often in the land of JavaScript. In this article, I explore the concept of cargo-cult programming and places to watch out for it in JavaScript.

Cargo-culting is sometimes defined as "the extreme adherence to the form instead of content." The form, in programming, being the syntax, paradigms, styles and patterns that we employ. The content being the abstract thing that you are seeking to represent through your code — the very substance of your program. A person lacking understanding in an area is likely to copy the form of others without truly understanding it, and thus their content — their program — can suffer.

Cargo-culting is curiously common in JavaScript, probably because of the general low barrier to entry in the front-end development world. You can whip up an HTML page with a bit of JavaScript in seconds. As a result, there are many people who become sufficiently proficient in these technologies to feel comfortable creating and imposing rules on themselves and others. Eventually, other newcomers copy these rules. Dogmatic rules surface and spread, until they are considered the norm:

  • Always use strict equality operators
  • Never use eval
  • Always use a single var declaration per scope
  • Always use an IIFE – it “protects” you

A rule continues to spread until a programmer is only using a given technique because of its popularity, instead of considering each specific use-case independently.


JavaScript Abuzz with Semicolons

If you've had the opportunity to witness the witty banter and rhetoric of the software developer over the years, you will have spotted a tendency to discuss seemingly tiny things at great lengths. Things like the semicolon, the comma, white-space or the curly brace.

Syntax like semicolons or white-space may seem to purely be elements of form, not of content. But many of these subtle syntax rules can have significant effects in JavaScript. If you don't understand the 'form' then you cannot begin to understand the 'content'.

So in this article, we will identify what areas of form in JavaScript are frequently cargo-culted off of — that is, copied without understanding.

How JavaScript can seem… an image from Angus Croll’s "The Politics Of JavaScript" presentation

Undefined

Angus Croll, in a recent presentation, titled "The Politics Of JavaScript", highlighted one of the most common pieces of JS dogma that people cargo-cult off of:

if (typeof myObject.foo === 'undefined') {...}

Most of the time, doing such a long-winded check for undefined is pointless. The technique became common because people were copying other people, not because of its actual value.

Of course, there are times when:

typeof x === 'undefined'

… is preferable to:

x === undefined

But, equally, there are times when the latter is preferred. A quick overview of the options:

// Determine if `x` is undefined:
x === undefined
typeof x == 'undefined'
typeof x === 'undefined'
x === void 0

// Determine if `x` is undefined OR null:
x == null
x == undefined

People started using the typeof approach because they were protecting themselves against:

  • A potentially undeclared variable (non-typeof approaches would throw a ReferenceError, as the snippet below shows)
  • Someone overwrote undefined globally or in a parent scope. Some environments allow you to overwrite undefined to something like true. You have to ask yourself: “Is it likely that someone overwrote undefined, and should my script have to pander to such silliness?”

But most of the time they're protecting themselves from having to worry. It's a catch-all avoidance of having to know the details. Knowing the details can help you, though. Every character of your code should exist with a purpose in mind.
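
The undeclared-variable case is easy to demonstrate (the variable name here is just a placeholder):

// typeof never throws, even for identifiers that were never declared:
typeof notDeclaredAnywhere === 'undefined'; // true

// A direct comparison against an undeclared identifier throws:
// notDeclaredAnywhere === undefined;       // ReferenceError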

The only time that you should need to use a typeof check for undefined is when you are checking for a variable that may not have been declared, e.g. checking for jQuery in the global scope:

if (typeof jQuery != 'undefined') {
    // ... Use jQuery
}

The thing is, if jQuery does exist, then we can be sure that it's an object – a "truthy" thing. So this would be sufficient:

if (window.jQuery) {
    // ... Use jQuery
}

The Great Strict/non-strict Debate

Let's take something very common and generally considered good advice, solely using strict-equality:

a === b

Strict-equality is said to be good because it avoids ambiguity. It checks both the value and the type, meaning that we don't have to worry about implicit coercion. With non-strict equality, we do have to worry about it though:

1 == 1    // true — okay, that's good
1 == "1"  // true — hmm
1 == [1]  // true — wat!?

So it would seem sensible advice to entirely avoid non-strict equality, right? Actually, no. There are many situations where strict-equality creates large amounts of redundancy, and non-strict equality is preferable.

When you know, with 100% certainty, that the types of both operands are the same, you can avoid the need for strict-equality. For example, I always know that the typeof operator returns a string, and my right-hand operand is also a string (e.g. "number"):

// With strict-equals
typeof x === 'number'

// With non-strict-equals:
typeof x == 'number'

They're both effectively identical. I am not necessarily suggesting that we abandon strict-equals in this case — I am suggesting that we remain aware of what we're doing so that we can make the best choices given each situation.

Another quite useful example is when you want to know if a value is either null or undefined. With strict equality, you might do this:

if (value === undefined || value === null) {
    // ...
}

With non-strict equality, it's far simpler:

if (value == null) {
    // ...
}

There is no catch here — it is doing exactly what we want, only, arguably, less visibly. But, if we know the language, then what's the problem? It's right there in the spec:

The comparison x == y, where x and y are values, produces true or false. Such a comparison is performed as follows:

  • If x is null and y is undefined, return true.
  • If x is undefined and y is null, return true.

If you're writing JavaScript with the intention of it being read, if at all, by people that know JavaScript, then I would argue that you shouldn't feel bad taking advantage of implicit language rules, like this.


hasOwnProperty

The hasOwnProperty method is used to determine whether a property is directly owned by an object. It is commonly found in for..in loops to ensure that you only mess with direct properties and not inherited properties.

for (var i in object) {
    if (object.hasOwnProperty(i)) {
        // We can do stuff with `object[i]`
    }
}

It’s important to note that the for-in statement will only loop through enumerable properties. Native inherited methods, for example, are not enumerable and so you don't need to worry about them anyway.

The hasOwnProperty check specifically protects you from enumerable properties that you or some third-party script has added to your object's prototype chain, i.e. it only matters when your object's prototype has enumerable properties.

If you know that your object's prototype (or its prototype’s prototype etc.) doesn't have any enumerable properties, then you don't have to worry about using hasOwnProperty in your for-in loops. And, if your object is initialized, via ES5's Object.create(null), then you won't even be able to call hasOwnProperty directly on the object (no prototype means no inherited native methods). This means that using hasOwnProperty by default in all of your for-in loops may actually break sometimes.
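
A quick illustration of that last point:

var bare = Object.create(null); // an object with no prototype at all
bare.foo = 1;

for (var key in bare) {
    // bare.hasOwnProperty(key);
    // ^ throws a TypeError, because there is no Object.prototype
    //   (and therefore no hasOwnProperty) in the chain
}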

One potential solution for objects with null prototypes is to use a saved reference to hasOwnProperty, like so:

var hasOwnProperty = Object.prototype.hasOwnProperty;

// Later in your code:
for (var i in someObject) {
    if (hasOwnProperty.call(someObject, i)) {
        // ...
    }
}

That will work even if the object has no prototype (in the case of Object.create(null)). But, of course, we should only do this in the first place if we know we need it. If you're writing a third-party script for a "hostile" environment, then yes, definitely check for enumerable inherited properties. Otherwise, it may not be necessary all the time.

Note: older IE (below version 9) and Safari 2.0 complicate the matter further when you're trying to enumerate properties that shadow ones already defined as non-enumerable. It's worth checking out a truly cross-browser forOwn loop implementation.

To conclude: your use of hasOwnProperty should depend on the object being looped over. It depends on what assumptions you can safely make. Blindly protecting yourself using the hasOwnProperty will not suffice in all cases. Be wary of cross-browser differences too.


Over-Parenthesising

Another common redundancy that creeps into JS code is the parenthesis. Within expressions, it is used to force specific grouping of sub-expressions. Without them, you are at the mercy of operator precedences and associativities. For example:

A && B || C
A && (B || C)
(A && B) || C

One of those is not like the other. The parentheses force a specific grouping, and many people prefer the extra clarity. In this case, the logical AND operator has a higher precedence than the logical OR operator, meaning that it is the first and last lines that are equivalent. The second line is an entirely different logical operation.

Higher precedence means that it will occur before other operations in a series of operations.
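
Plugging in concrete values makes the difference easy to see:

var A = false, B = true, C = true;

A && B || C;    // true  (parsed as (A && B) || C)
(A && B) || C;  // true  (the same operation, made explicit)
A && (B || C);  // false (an entirely different operation)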

To avoid this complexity, developers frequently opt for a "parentheses policy" — where you keep adding parentheses until it is abundantly clear which operations are occurring, both for you and potential readers of the code. It can be argued that this verbosity ends up making things less clear though.

It’s tricky for a reader sometimes. One must consider that any given parentheses may have been added because:

  • It was needed to override default precedence/associativity
  • For no functional reason at all, just for "protection" or "clarity"

Take this example:

A && B ? doFoo() : doBaz()

Without knowledge of operator precedence rules, we can see two possible operations here:

(A && B) ? doFoo() : doBaz()
A && (B ? doFoo() : doBaz())

In this case, it's the logical AND that has the higher precedence, meaning that the equivalent parenthesised expression is:

(A && B) ? doFoo() : doBaz()

We should feel no obligation to add these parentheses in our code, though. It happens implicitly. Once we recognize that it happens implicitly, we are free to ignore it and focus on the program itself.

There are, of course, valid arguments to retain the parentheses where implicit grouping is unclear. This really comes down to you and what you're comfortable with. I would, however, implore you to learn the precedences and then you can be fully empowered to take the best route, dependent on the specific code you're dealing with.


Object Keys

It's not rare to see redundant quotes in object literals:

var data = {
  'date': '2011-01-01',
  'id': 3243,
  'action': 'UPDATE',
  'related': { '1253': 2, '3411': 3 }
};

In addition to strings, JavaScript allows you to use valid identifier names and numbers as object literal keys, so the above could be re-written to:

var data = {
  date: '2011-01-01',
  id: 3243,
  action: 'UPDATE',
  related: { 1253: 2, 3411: 3 }
};

Sometimes, you may prefer the added consistency of being able to use quotes, especially if a field-name happens to be a reserved word in JavaScript (like 'class' or 'instanceof'). And that's fine.

Using quotes is not a bad thing. But it is redundant. Knowing that you don't have to use them is half the battle won. It is now your choice to do what you want.
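
For example, in older (ES3-era) engines you only need quotes where the key is not a valid identifier, or happens to be a reserved word (the field names here are made up):

var data = {
  id: 3243,            // a plain identifier needs no quotes
  'class': 'primary',  // a reserved word, quoted for old ES3 engines
  'data-id': 'x-3243'  // contains a hyphen, so it must always be quoted
};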


Comma Placement

There is a huge amount of subjective preference when it comes to punctuation placement in programming. Most recently, the JavaScript world has been abuzz with rhetoric and discontent over the comma.

Initialising an object in traditionally idiomatic JavaScript looks like this:

var obj = {
    a: 1,
    b: 2,
    c: 3
};

There is an alternative approach, which has been gaining momentum though:

var obj = {
      a: 1
    , b: 2
    , c: 3 
};

The supposed benefit of placing the commas before each key-value pair (apart from the first) is that it means you only have to touch one line in order to remove a property. Using the traditional approach, you would need to remove "c: 3" and then the trailing comma on the line above. But with the comma-first approach you're able to just remove ", c: 3". Proponents claim this makes trailing commas less likely and also cleans up source-control diffs.

Opponents, however, say that this approach only achieves getting rid of the trailing-comma "problem" by introducing a new leading-comma problem. Try removing the first line and you're left with a leading comma on the next line. This is actually considered a good thing by comma-first proponents, because a leading comma would immediately throw a SyntaxError. A trailing comma, however, throws nothing, except in IE6 and 7. So if the developer fails to test their JS in those versions of IE, then the trailing commas can often creep into production code, which is never good. A leading comma throws in all environments, so is less likely to be missed.
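
To see why the leading comma fails loudly while the trailing comma fails silently (outside of old IE), compare:

// Trailing comma: ignored by modern engines, but an error in IE6/7
var a = {
    x: 1,
    y: 2,
};

// Leading comma (what's left after deleting the first property of a
// comma-first literal): a SyntaxError everywhere
// var b = {
//     , y: 2
// };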

Of course, you might argue that this entire thing is moot. We should probably be using linters like JSLint or the kinder JSHint. Then we're free to use the punctuation and whitespace placement that makes the most sense to us and our coworkers.

Let's not even get started on the comma-first style in variable declarations:

var a = 1
  , b = 2
  , c = 3
  ;

Thou Shalt Code for Psychopaths?

We should endeavour to learn the languages we use to a good enough level that we're able to avoid cargo-culting and over-protective catch-all coding techniques. And we should trust our coworkers and other developers to do the same.

We’ve also discussed the abandonment of cruft in favor of taking advantage of a language’s idiosyncrasies and implicit rules. To some, this creates maintainability issues, especially if someone more junior in their acquisition of a given language approaches the code. For example, what if they don’t know about JavaScript’s weak vs. strict equality?

On the topic of maintainability, we’re reminded by this famous quote:

Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.

I don’t know if that is truly good advice. Even taken metaphorically, it suggests a distrust of the fictional maintainer’s competency — and the need to worry about their understanding above everything else. I would rather write code in the knowledge that it will be taken care of by people that know their stuff. So as a possible contradiction or even an addendum to that quote, I offer:

Always code as if the person who ends up maintaining your code is knowledgeable about the language and its constructs and is seeking to gain understanding of the problem domain through reading your code.

While this may not always be true, we should seek for it to be so. We should endeavour to ensure that people working on a specific technology have sufficient understanding to do so. The learned cargo-culter says:

If I forever pander to a lower level of understanding in my code — treading softly — strictly abiding to conventions and style guides and things I see the “experts” do, then I am never able to advance my own understanding, nor take advantage of a language in all its weirdness and beauty. I am happily and blissfully settled in this world of rules and absolutes, but to move forward, I must exit this world and embrace higher understanding.

April 30 2013

19:15

Getting Into Ember: Part 4

In my previous tutorial, I touched on how to use Ember.Object to define your models and work with datasets. In this section, we’ll look more closely at how Ember uses the Handlebars templating framework to define your app's user interface.


Client-side Templates

Most server-side developers are used to using templates to define markup that will be dynamically filled on the fly. If you've ever used ASP.NET, ColdFusion, PHP or Rails then it's pretty much assured you know what I'm talking about.

Client-side JavaScript templating has really taken off of late, especially because of the focus on building more desktop-like experiences. This means that more of the processing is done on the client-side, with data being mainly pulled via server-side API requests.

I remember writing about client-side templates some time ago when the jQuery Template plugin was first released. Nearly three years later, it's still the most read post on my blog, showing how interest in client-side templating has risen. Since then, a number of other frameworks have been released, offering rich features and supportive communities. Handlebars is one of the more popular options and the framework chosen by the Ember project to power its templating needs. This makes sense, as Handlebars was created by Ember.js co-founder and core team member, Yehuda Katz. Note, though, that I'm not planning on doing comparisons between templating frameworks and I will strictly focus on Handlebars, since this is what Ember.js uses by default.

In the previous articles, I showed some very basic templates in the code:

<script type="text/x-handlebars">
    <h2><strong>{{firstName}} {{lastName}}</strong></h2>
</script>

Two things that stand out are the type declaration for the script tag and the curly braces which act as delimiters for the expressions that Handlebars will act upon. This is very typical syntax that I'll discuss in more detail soon and you'll use consistently as you build Ember templates.


The Syntax

Despite the fact that Handlebars uses a special syntax, at the end of the day, you're really working primarily with standard HTML markup. Handlebars serves to inject content into this markup to render data to the user. It does this by parsing the delimited expressions and replacing them with the data you've asked Handlebars to work with. In the case of Ember, Handlebars provides the hooks and Ember uses them. That data typically comes from your controller (remember that controllers serve as the interface to your models).

The first thing any template needs is a script tag definition. Most of you have probably defined script tags to load your JavaScript library. In fact, you've already done this to load Handlebars into your Ember project:

<script src="js/libs/jquery-1.9.1.js"></script>
<script src="js/libs/handlebars-1.0.0-rc.3.js"></script>
<script src="js/libs/ember-1.0.0-rc.1.js"></script>
<script src="js/app.js"></script>

There's a slight difference when using it to define a template. First, we're specifying a type attribute of "text/x-handlebars". This type is ignored by the browser but leaves the text available for inspection, and it allows Ember to identify templates within the app. In addition, Ember looks for a data attribute called "data-template-name", which it uses to associate specific parts of your application with a template. For example, the following declaration defines a template with the name "employee":

<script type="text/x-handlebars" data-template-name="employee">
...
</script>

When your application starts, Ember scans the DOM for script tags with type="text/x-handlebars", compiles the templates it finds, and stores them in a property of the Ember object called Ember.TEMPLATES, which it uses to figure out what to render for a given route. This is why following Ember's naming conventions is so important. In the example above, this template will be automatically associated with the employee route and controller you created in your application. Again, I can't stress enough how these naming conventions will make your development much easier.

Ember is reliant on URLs to determine the resources that need to be used and the templates that need to be rendered. Let's imagine that you had a profile page with the URL "/profile". You would have a resource, called profile that would load specific resources for that URL (like a route object) and you would also have a template by the same name. We reviewed defining resources and route objects in part 2 of my Ember series so if you're not sure about what I'm discussing, be sure to hop back there to refresh yourself on this.

When you visit that URL, Ember knows it needs to load these resources and parse the template you've defined. It does this via its naming conventions: because you went to "/profile", it loads the resources defined for profile and renders the template named data-template-name="profile".

  • Route: ProfileRoute
  • Controller: ProfileController
  • Template: profile (note that it's lowercase)

Going over the naming conventions again, you'll see that the route, controller and template are all tied together using the same URL name with the exception that the template is spelled in lowercase. This is what allows Ember to manage everything behind the scenes without you having to do a lot of wiring up.
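
As a rough sketch, the pieces for a "/profile" URL would line up like this (App is assumed to be your Ember.Application, and the model data here is made up):

App.Router.map(function () {
  this.resource('profile');
});

App.ProfileRoute = Ember.Route.extend({
  model: function () {
    // Hypothetical data; a real app would pull this from a store or an API
    return { firstName: 'Jane', lastName: 'Doe' };
  }
});

// Ember will generate App.ProfileController for you if you don't define it,
// and it will render the matching template:
// <script type="text/x-handlebars" data-template-name="profile">
//   <h2>{{firstName}} {{lastName}}</h2>
// </script>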

Also important to note is that, if you declare a template without a data-template-name attribute, Ember will assume that it is the Application-scoped template – the one typically used as a site-wide template for creating user interface elements, such as headers, footers and navigation. And if you don't explicitly define a template for an application or even a resource (e.g: for a URL), Ember does that automatically for you to ensure stability and consistency in your app.


Expressions

The next step is to include your markup and the delimited expressions you'll be using to represent your data. Expressions are delimited via double curly braces, which allows them to be easily identified and parsed, with data being passed in from your controller. Here's an example:

<script type="text/x-handlebars">
    <h2><strong>{{firstName}} {{lastName}}</strong></h2>
</script>

In this case, the {{firstName}} and {{lastName}} expressions will be parsed by Ember and replaced by actual data. In addition, Ember sets up observers so that as your data changes, your template is automatically updated and the updates reflected to the user of your app.

So far, I've shown you a very simple example, but the takeaway is that:

  • Ember uses a special type attribute to define templates.
  • Templates use standard markup along with delimited expressions, which are parsed on the client-side.
  • These templates have the full feature set capabilities of Handlebars.
  • Ember sets up observers to dynamically update your user interface data, as it changes.

This offers a lot of flexibility in how you structure your user interface. Let's continue looking at the features that are available.


Advanced Expressions

Remember that Ember leverages Handlebars, so you have access to its full breadth of expressions here. Conditional expressions are a must, in order to render almost anything useful; Handlebars offers quite a number of options.

Let’s say that I had a JSON dataset that looked like this:

"items": [{
    "title": "Tearable Cloth Simulation in JavaScript",
    "url": "http://codepen.io/stuffit/pen/KrAwx",
    "id": 5592679,
    "commentCount": 20,
    "points": 127,
    "postedAgo": "1 hour ago",
    "postedBy": "NathanKP"
}, {
    "title": "Netflix now bigger than HBO",
    "url": "http://qz.com/77067/netflix-now-bigger-than-hbo/",
    "id": 5592403,
    "commentCount": 68,
    "points": 96,
    "postedAgo": "2 hours ago",
    "postedBy": "edouard1234567"
}

If I wanted to ensure that the title data is available, I could add a conditional "if" statement by using the #if expression:

{{#if item.title}}
    <li>{{item.title}} - {{item.postedAgo}} by {{item.postedBy}}</li>
{{/if}}

This checks to see if item.title is not undefined, and continues processing the subsequent expressions for the title, postedAgo and postedBy data expressions.

Since this dataset contains more than one "record", it's safe to assume that we'd probably want to loop over each element of item. That's where the {{#each}} expression comes into play. It allows you to enumerate over a list of objects. So, again, keeping in mind that templates are a combination of markup and Handlebars expressions, we can use the #each expression to loop through every item available within our Ember model object. Remember that the Ember model is derived from the controller, which is associated to the template, via Ember's naming conventions.

<ul>
    {{#each item in model}}
    {{#if item.title}}
        <li>{{item.title}} - {{item.postedAgo}} by {{item.postedBy}}</li>
    {{/if}}
    {{/each}}
</ul>

This would render out something similar to:

<ul>
<li>Tearable Cloth Simulation in JavaScript - 1 hour ago by NathanKP</li>
<li>Netflix now bigger than HBO - 2 hours ago by edouard1234567</li>
<li>Fast Database Emerges from MIT Class, GPUs and Student's Invention - 33 minutes ago by signa11</li>
<li>Connecting an iPad retina LCD to a PC - 6 hours ago by noonespecial</li>
</ul>

The distinct advantage is Ember's implicit setup of observers, which will update your rendered output whenever the underlying data changes.

If your conditional expression needs to be more complex, you'll want to create a computed property. This allows you to create a property based off of a method that can apply complex code conditions to your data. Let's say I wanted to display only the data that has the title "Tearable Cloth Simulation in JavaScript". There are a couple of things I need to set up:

  • I need a computed property to scan each item and tell me if the title matches
  • I need to create a controller that can be used by each item being enumerated over in the template
  • I need to update the template so that it uses this controller for each item

The first thing I need to do is create the new controller that will wrap each item being looped over and create the computed property within it:

App.TitleController = Ember.ObjectController.extend({
    titleMatch: function() {
        return this.get('title') === "Tearable Cloth Simulation in JavaScript";
    }.property()
});

Looking at the code, we're subclassing Ember.ObjectController to create the controller. This is the controller that will wrap each item being looped over in our template. Next, we're creating a method, called titleMatch which uses the get() method to pull back the current title, compare it to the text I've defined, and return a boolean. Lastly, the Ember property() method is called to define the titleMatch method as a computed property.

Once we have this in place, we update the template's {{#each}} expression to represent each item with the new controller we created. This is done by using the itemController directive. A key thing to understand is that itemController is a key phrase in Ember meant to associate a controller to items of a template. Don't confuse it for an actual controller name (as I did initially). The controller name is assigned to itemController, like this:

<ul>
   {{#each item in model itemController="title"}}
      {{#if titleMatch}}
        <li>{{item.title}} - {{item.postedAgo}} by {{item.postedBy}}</li>
      {{/if}}
    {{/each}}
</ul>

Again, naming conventions dictate that, when assigning names in templates, we use lowercase. In this case, we're assigning TitleController to itemController.

Now, as each item is looped over, the computed property, titleMatch, is used to evaluate the title and display data if it matches.


Binding Data to Elements

Creating dynamic templates isn't just about spitting out text. There are times when the look and feel of the UI needs to be affected by data being processed. Displaying an image or building a link are great examples of this.

Binding data to an element requires using special Ember helpers that assist in defining an attribute's context, as well as ensuring that the attributes are updated properly when data changes. For element attributes, the {{bindAttr}} helper is used to fill in the values for an attribute. If we needed to dynamically specify the URL of an image, we'd use the following syntax:

<img {{bindAttr src="logoUrl"}} alt="Logo">

The same can be done for attributes that don't receive a value, such as disabled:

<input type="checkbox" {{bindAttr disabled="isAdministrator"}}>

In this case, isAdministrator could be a computed property based off a method in the controller, or just a normal object property, giving you a lot of flexibility in defining the conditions for disabling the checkbox. This flexibility carries over to defining class names as well. If I wanted to use a conditional statement to define whether a class should be applied to my element, I could use the following code:

<div {{bindAttr class="isUrgent"}}>
  Warning!
</div>

Depending on the boolean state, my markup would either be:

<div class="is-urgent">
  Warning!
</div>

for a true condition, or:

<div>
  Warning!
</div>

for a false condition. Note that, when I specified isUrgent for the class, Ember dasherized the name and rendered the class as is-urgent. If you'd prefer to specify your own class based on the results, you can use a conditional expression similar to a ternary statement:

<div {{bindAttr class="isUrgent:urgent:normal"}}>

This will return urgent or normal for the class, based on the conditional value of isUrgent.


Get to Know Templates

Templates will be the foundation of your user interface, so it's going to be important that you spend time reading the docs at both the Ember and Handlebars site to get a good feel for their overall power. Even if you don't use Ember, Handlebars is a great framework for you to use day-to-day, and worth the investment in learning how to use it.

Gabriel Manricks wrote a great tutorial on Handlebars here on Nettuts+ that you can use to get up to speed on the framework.

April 25 2013

21:46

Promise-Based Validation

The concept of “Promises” has changed the way we write asynchronous JavaScript. Over the past year, many frameworks have incorporated some form of the Promise pattern to make asynchronous code easier to write, read and maintain. For example, jQuery added $.Deferred(), and NodeJS has the Q and jspromise modules that work on both client and server. Client-side MVC frameworks, such as EmberJS and AngularJS, also implement their own versions of Promises.

But it doesn’t have to stop there: we can rethink older solutions and apply Promises to them. In this article, we’ll do just that: validate a form using the Promise pattern to expose a super simple API.


What is a Promise?

Simply put, Promises notify the result of an operation. The result can be a success or a failure, and the operation, itself, can be anything that abides by a simple contract. I chose to use the word contract because you can design this contract in several different ways. Thankfully, the development community reached a consensus and created a specification called Promises/A+.

Only the operation truly knows when it has completed; as such, it is responsible for notifying its result using the Promises/A+ contract. In other words, it promises to tell you the final result on completion.

The operation returns a promise object, and you can attach your callbacks to it by using the done() or fail() methods. The operation can notify its outcome by calling promise.resolve() or promise.reject(), respectively. This is depicted in the following figure:

Figure for Promises
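
In code, that contract looks something like this minimal sketch using jQuery's $.Deferred() (the operation and its timing here are made up):

function saveProfile() {
  var deferred = $.Deferred();

  setTimeout(function () {
    // Pretend the server call succeeded
    deferred.resolve({ id: 42 });
  }, 500);

  return deferred.promise();
}

saveProfile()
  .done(function (result) { console.log('saved', result.id); })
  .fail(function (error)  { console.log('failed', error); });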

Using Promises for Form Validation

Let me paint a plausible scenario.

Client-side form validation always begins with the simplest of intentions. You may have a sign-up form with Name and Email fields, and you need to ensure that the user provides valid input for both fields. That seems fairly straightforward, and you start implementing your solution.

You are then told that email addresses must be unique, and you decide to validate the email address on the server. So, the user clicks the submit button, the server checks the email’s uniqueness and the page refreshes to display any errors. That seems like the right approach, right? Nope. Your client wants a slick user experience; visitors should see any error messages without refreshing the page.

Your form has the Name field that doesn’t require any server-side support, but then you have the Email field that requires you to make a request to the server. Server requests mean $.ajax() calls, so you will have to perform email validation in your callback function. If your form has multiple fields that require server-side support, your code will be a nested mess of $.ajax() calls in callbacks. Callbacks inside callbacks: “Welcome to callback hell! We hope you have a miserable stay!”.

So, how do we handle callback hell?

The Solution I Promised

Take a step back and think about this problem. We have a set of operations that can either succeed or fail. Either of these results can be captured as a Promise, and the operations can be anything from simple client-side checks to complex server-side validations. Promises also give you the added benefit of consistency, as well as letting you avoid conditionally checking on the type of validation. Let's see how we can do this.

As I noted earlier, there are several promise implementations in the wild, but I will focus on jQuery’s $.Deferred() Promise implementation.

We will build a simple validation framework where every check immediately returns either a result or a Promise. As a user of this framework, you only have to remember one thing: “it always returns a Promise”. Let's get started.

Validator Framework using Promises

I think it’s easier to appreciate the simplicity of Promises from the consumer’s point of view. Let's say I have a form with three fields: Name, Email and Address:

<form>
  <div class="row">
    <div class="large-4 columns">
      <label>Name</label>
      <input type="text" class="name"/>
    </div>
  </div>

  <div class="row">
    <div class="large-4 columns">
      <label>Email</label>
      <input type="text" class="email"/>
    </div>
  </div>

  <div class="row">
    <div class="large-4 columns">
      <label>Address</label>
      <input type="text" class="address"/>
    </div>
  </div>

</form>

I will first configure the validation criteria with the following object. This also serves as our framework’s API:

var validationConfig = {
  '.name': {
    checks: 'required',
    field: 'Name'
  },
  '.email': {
    checks: ['required'],
    field: 'Email'
  },
  '.address': {
    checks: ['random', 'required'],
    field: 'Address'
  }
};

The keys of this config object are jQuery selectors; their values are objects with the following two properties:

  • checks: a string or array of validations.
  • field: the human-readable field name, which will be used for reporting errors for that field

We can call our validator, exposed as the global variable V, like this:

V.validate(validationConfig)
  .done(function () {
      // Success
  })
  .fail(function (errors) {
      // Validations failed. errors has the details
  });

Note the use of the done() and fail() callbacks; these are the default callbacks for handling a Promise’s result. If we happen to add more form fields, we can simply augment the validationConfig object without disturbing the rest of the setup (the Open-Closed Principle in action). In fact, we can add other validations, like the uniqueness constraint for email addresses, by extending the validator framework (which we will see later).

So that’s the consumer-facing API for the validator framework. Now, let’s dive in and see how it works under the hood.

Validator, Under the Hood

The validator is exposed as an object with two properties:

  • type: contains the different kinds of validations, and it also serves as the extension point for adding more.
  • validate: the core method that performs the validations based upon the provided config object.

The overall structure can be summarized as:

var V = (function ($) {

var validator = {

  /*
  * Extension point - just add to this hash
  * 
  * V.type['my-validator'] = {
  *   ok: function(value){ return true; },
  *   message: 'Failure message for my-validator'
  *   }
  */
  type: {
    'required': {
      ok: function (value) {
          // is valid ?
      },
      message: 'This field is required'
    },

    ...
  },

  /**
   *
   * @param config
   * {
   *   '<jquery-selector>': string | object | [ string ]
   * }
   */
  validate: function (config) {

    // 1. Normalize the configuration object  

    // 2. Convert each validation to a promise  

    // 3. Wrap into a master promise

    // 4. Return the master promise
  }
};

})(jQuery);

The validate method provides the underpinnings of this framework. As seen in the comments above, there are four steps that happen here:

1. Normalize the configuration object.

This is where we go through our config object and convert it into an internal representation. This is mostly to capture all the information we need to carry out the validation and report errors if necessary:

function normalizeConfig(config) {
  config = config || {};

  var validations = [];

  $.each(config, function (selector, obj) {

    // make an array for simplified checking
    var checks = $.isArray(obj.checks) ? obj.checks : [obj.checks];

    $.each(checks, function (idx, check) {
      validations.push({
        control: $(selector),
        check: getValidator(check),
        checkName: check,
        field: obj.field
      });
    });

  });
  return validations;
}

function getValidator(type) {
  if ($.type(type) === 'string' && validator.type[type]) return validator.type[type];

  return validator.noCheck;
}

This code loops over the keys in the config object and creates an internal representation of the validation. We will use this representation in the validate method.

The getValidator() helper fetches the validator object from the type hash. If we don’t find one, we return the noCheck validator which always returns true.
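
The noCheck validator itself isn't shown above, but based on that description it only needs the same two-property shape as any other validator (this is a sketch of what it could look like, not the exact implementation):

// Fallback used when no matching validator type is found
validator.noCheck = {
  ok: function (value) { return true; }, // always passes
  message: ''
};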

2. Convert each validation to a Promise.

Here, we ensure every validation is a Promise by checking the return value of validation.ok(). If it contains the then() method, we know it’s a Promise (this is as per the Promises/A+ spec). If not, we create an ad-hoc Promise that resolves or rejects depending on the return value.

    
validate: function (config) {
  // 1. Normalize the configuration object
  config = normalizeConfig(config);
  var promises = [],
    checks = [];

  // 2. Convert each validation to a promise
  $.each(config, function (idx, v) {
    var value = v.control.val();
    var retVal = v.check.ok(value);

    // Make a promise, check is based on Promises/A+ spec
    if (retVal.then) {
      promises.push(retVal);
    }
    else {
      var p = $.Deferred();

      if (retVal) p.resolve();
      else p.reject();

      promises.push(p.promise());
    }
    checks.push(v);
  });
  // 3. Wrap into a master promise

  // 4. Return the master promise
}

3. Wrap into a master Promise.

We created an array of Promises in the previous step. Once they have all settled, we want to either resolve once or reject with detailed error information. We can do this by wrapping all of the Promises into a single Promise and propagating the result. If everything goes well, we just resolve the master promise.

For errors, we can read from our internal validation representation and use it for reporting. Since there can be multiple validation failures, we loop over the promises array and read the state() result. We collect all of the rejected promises into the failed array and call reject() on the master promise:

// 3. Wrap into a master promise
var masterPromise = $.Deferred();
$.when.apply(null, promises)
  .done(function () {
    masterPromise.resolve();
  })
  .fail(function () {
    var failed = [];
    $.each(promises, function (idx, x) {
      if (x.state() === 'rejected') {
        var failedCheck = checks[idx];
        var error = {
          check: failedCheck.checkName,
          error: failedCheck.check.message,
          field: failedCheck.field,
          control: failedCheck.control
        };
        failed.push(error);
      }
    });
    masterPromise.reject(failed);
  });

// 4. Return the master promise
return masterPromise.promise();

4. Return the master promise.

Finally we return the master promise from the validate() method. This is the Promise on which the client code sets up the done() and fail() callbacks.

Steps two and three are the crux of this framework. By normalizing the validations into a Promise, we can handle them consistently. We have more control with a master Promise object, and we can attach additional contextual information that may be useful to the end user.


Using the Validator

See the demo file for a full use of the validator framework. We use the done() callback to report success and fail() to show a list of errors against each of the fields. The screenshots below show the success and failure states:

Demo showing Success Demo showing failures

The demo uses the same HTML and validation configuration mentioned earlier in this article. The only addition is the code that displays the alerts. Note the use of the done() and fail() callbacks to handle the validation results.

function showAlerts(errors) {
  var alertContainer = $('.alert');
  $('.error').remove();

  if (!errors) {
    alertContainer.html('<small class="label success">All Passed</small>');
  } else {
    $.each(errors, function (idx, err) {
      var msg = $('<small></small>')
          .addClass('error')
          .text(err.error);

      err.control.parent().append(msg);
    });
  }
}

$('.validate').click(function () {

  $('.indicator').show();
  $('.alert').empty();

  V.validate(validationConfig)
      .done(function () {
        $('.indicator').hide();
        showAlerts();
      })
      .fail(function (errors) {
        $('.indicator').hide();
        showAlerts(errors);
      });

});

Extending the Validator

I mentioned earlier that we can add more validation operations to the framework by extending the validator’s type hash. Consider the random validator as an example. This validator randomly succeeds or fails. I know it's not a useful validator, but it’s worth noting some of its concepts:

  • Use setTimeout() to make the validation async. You can also think of this as simulating network latency.
  • Return a Promise from the ok() method.
  
// Extend with a random validator
V.type['random'] = {
  ok: function (value) {
    var deferred = $.Deferred();

    setTimeout(function () {
      var result = Math.random() < 0.5;
      if (result) deferred.resolve();
      else deferred.reject();

    }, 1000);

    return deferred.promise();
  },
  message: 'Failed randomly. No hard feelings.'
};

In the demo, I used this validation on the Address field like so:

var validationConfig = {
  /* clipped for brevity */

  '.address': {
    checks: ['random', 'required'],
    field: 'Address'
  }
};

Summary

I hope that this article has given you a good idea of how you can apply Promises to old problems and build your own framework around them. The Promise-based approach is a fantastic solution to abstract operations that may or may not run synchronously. You can also chain callbacks and even compose higher-order Promises from a set of other Promises.
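
For instance, because validate() simply returns a Promise, it composes with any other Promise via $.when() (the saveDraft() operation here is hypothetical):

$.when(V.validate(validationConfig), saveDraft())
  .done(function () {
    // Both the validations and the hypothetical saveDraft() succeeded
  })
  .fail(function () {
    // At least one of them was rejected
  });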

The Promise pattern is applicable in a variety of scenarios, and you’ll hopefully encounter some of them and see an immediate match!


