
February 04 2014

14:00

JavaScript Tools of the Trade: JSBin

We’ve all been there. There are times when you simply want to throw some JavaScript code up and see how it works. Sure, you could go through the hassle of:

  • Setting up a dedicated directory
  • Creating a file with markup
  • Finding the latest version of your favorite libraries, downloading them and including them in your code
  • Creating your stylesheet
  • Configuring your webserver

That seems like an awful lot of work just to do some simple code testing. Thankfully, there are tools that make this type of work trivial.

In this tutorial, I’d like to go over one of my favorite tools for interactive JavaScript testing, JSBin.


The Case for JSBin

As I mentioned previously, in many cases you simply need to test a small subset of JavaScript code. Setting up a whole development environment for such a use case doesn’t really make a lot of sense unless there’s a clear dependency on hardware (for example, WebRTC) or on a third-party API or product where you need backend services to successfully access information.

What JSBin offers is a browser-based user interface where you can enter:

  • HTML markup
  • CSS
  • JavaScript

… and get immediate feedback based on your code. In addition, you can optionally include any number of popular frameworks in your code, allowing you to leverage each framework’s capabilities as well. The main benefit is the real-time feedback you get from the updates you make.

Let’s look at these tools a little more closely.


Getting to Know JSBin

JSBin was created and is actively maintained by well-respected developer Remy Sharp. The idea to develop it came from the need to collaborate interactively with other developers to debug JavaScript code. It has since matured into a robust tool which:

  • Allows groups of developers to work together to solve code problems
  • Serves as a sort of bin that developers can go back to reference
  • Makes sharing code and solutions incredibly easy

JSBin is also open source, licensed under the liberal MIT license, allowing community members to freely contribute to it or fork it to create their own customized solutions.

JSBin offers a straightforward UI that breaks each type of code into individual vertical panels.

[Screenshot: the JSBin UI and its vertical panels]

Each panel provides a mini-IDE that allows you to enter code and receive immediate feedback via the Output panel. For example, if I add the following code to the HTML panel:

<div>Rey Bango</div>

I’ll immediately see the new element and the text render in the Output panel.

[Screenshot: the new div element rendered in the Output panel]

Of course, you can add any number of elements to the markup, allowing you to create a page quickly and interactively. Being able to style your markup is equally important since, in some cases, the JavaScript you’re testing is explicitly designed to manipulate styles and CSS rules applied to your elements. That’s where the CSS panel comes in. It offers full CSS style capability, so you can lay out your elements to suit your needs, even taking advantage of CSS3 rules. So adding the following code:

div { 
  color: red; 
  font:20px Tahoma,sans-serif; 
  border: 1px solid black; 
  width: 100px;
  margin: 30px;
  padding: 10px; 
  transform:rotate(15deg);
  -webkit-transform:rotate(15deg); 
}

… provides the following results:

[Screenshot: the styled, rotated div in the Output panel]

So far, the code has been simple but I need to stress that the important thing here is not the complexity of the code but the fact that you’re able to receive immediate feedback. I could easily grab more involved code, like that of the CSS Transitions demo on the Mozilla Developer Network and add that into JSBin to produce a similar effect for my test code:

div { 
  color: red; 
  font:20px Tahoma,sans-serif; 
  border: 1px solid black; 
  width: 100px;
  margin: 30px;
  padding: 10px; 
  -moz-transition:width 2s, height 2s, background-color 2s, -moz-transform 2s;
  -webkit-transition:width 2s, height 2s, background-color 2s, -webkit-transform 2s;
  -o-transition:width 2s, height 2s, background-color 2s, -o-transform 2s;
  transition:width 2s, height 2s, background-color 2s, transform 2s;
}

div:hover {
    background-color: #FFCCCC;
    width:200px;
    height:200px;
    -moz-transform:rotate(180deg);
    -webkit-transform:rotate(180deg);
    -o-transform:rotate(180deg);
    transform:rotate(180deg);
}
[Screenshot: the CSS transition test in JSBin]

So while I’m more specifically focused on the JavaScript aspect of JSBin, it’s clear that web developers in general can benefit from the interactive nature of the tool.


Using JavaScript

For me, the main benefit of JSBin is the ability to test JavaScript quickly. I’m able to whip up quick and dirty code that I can test and adjust on-the-fly without the need to spin up a whole work environment. Sure, most browsers provide developer tools that offer a console where you can enter quick snippets, but they’re not yet at a point where you can interactively test large amounts of code, let alone define complementary custom markup and styling for the output.

JSBin’s JavaScript panel is where you’re able to define your custom JavaScript code. As expected, you have full access to the language as well as the DOM API supported by the browser. This means that when I write:

var myDiv = document.querySelector( "div" );

myDiv.innerHTML = "Foo";

it:

  • Allows me to create a local variable
  • Provides access to the div element I created in the HTML panel
  • Changes the element’s content

The results are immediate, allowing me to debug as I’m writing the code.

Having access to plain ole JavaScript is great but it’s very common to use a JavaScript utility library like jQuery or a full-blown framework like Ember to abstract the complexities of cross-browser development or create app-like experiences in the browser. JSBin addresses this by allowing you to include most of the popular libraries into your test code.

Clicking on the Add library menu option provides a very long list of supported libraries that can be injected into your JSBin project. This creates a script tag in your code that pulls the JavaScript file from a CDN. Selecting “jQuery 2.0.2” from the list injects the following:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/2.0.2/jquery.min.js"></script>
<meta charset=utf-8 />

… while selecting Backbone adds the following:

<script src="http://documentcloud.github.io/underscore/underscore-min.js"></script>
<script src="http://documentcloud.github.io/backbone/backbone-min.js"></script>

Notice how JSBin uses different CDNs based on where the files are available. Most of the big name projects are listed, including:

  • jQuery
  • Dojo
  • Modernizr
  • Bootstrap

… and many more.

Adding in jQuery gives me full access to all of the library’s excellent helper methods and capabilities. I can switch to using its nice, terse API to access DOM elements and set values in one nicely-chained line of code:

$( "div" ).text( "Foo" );

Or, I can take it a bit further and test out an Ajax request to Flickr’s API to pull back JSON data and render images based on it:

(function() {
  var flickerAPI = "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?";
  $.getJSON( flickerAPI, {
    tags: "mount rainier",
    tagmode: "any",
    format: "json"
  })
    .done(function( data ) {
      $.each( data.items, function( i, item ) {
        $( "<img>" ).attr( "src", item.media.m ).appendTo( "div" );
        if ( i === 3 ) {
          return false;
        }
      });
    });
})();

The code above would render as follows:

[Screenshot: Flickr photos rendered in the Output panel]
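As an aside, $.getJSON serializes the options object ({ tags, tagmode, format }) into the request’s query string. As a rough stand-in for that serialization (this is my own sketch of the idea, not jQuery’s actual $.param implementation), it works roughly like this:

```javascript
// Rough stand-in for jQuery's parameter serialization, showing how the
// options object becomes the query string of the Flickr request.
// (Not jQuery's actual $.param; jQuery also encodes spaces as "+".)
function toQueryString(params) {
  return Object.keys(params).map(function (key) {
    return encodeURIComponent(key) + "=" + encodeURIComponent(params[key]);
  }).join("&");
}

var qs = toQueryString({ tags: "mount rainier", tagmode: "any", format: "json" });
// qs is "tags=mount%20rainier&tagmode=any&format=json"
```

The serialized string is appended to the feed URL, and the jsoncallback=? portion tells jQuery to use JSONP for the cross-domain request.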

Having the full power of these libraries and frameworks really opens up the testing scenarios that you can setup with JSBin.

Again, this list covers the most popular libraries and frameworks available, and clearly some niche ones just won’t be on it. If you need to add your own custom library, the documentation shows how you can add it in yourself.


Additional Features and Resources

I find JSBin invaluable for my desktop development, and as I shift to focusing on mobile devices, I’m glad to see that I’ll be able to continue to use it to test on those devices as well. As of version three, JSBin has incorporated a feature called “live rendering” which acts as a simulcast across multiple connected devices. These devices aren’t explicitly connected; instead, they leverage a specific URL which allows them to essentially render the results at the same time. You can see this feature in action in the following video.

Another important feature is the ability to create your own JSBin account where you can save your bins for future reference and sharing. Registration is simple and you can even leverage your GitHub credentials via GitHub’s OAuth functionality.

[Screenshot: the JSBin registration page]

The key benefit to registering is the ability to keep a history of the bins you create so that you can revisit them later.

To really get a feel for the full breadth of functionality offered by JSBin, I urge you to go to Remy’s YouTube channel for JSBin, where he’s done a bang-up job of creating tutorial videos exploring all of the excellent features of the service. The FAQ also does a great job of answering common questions you might have.

JSBin is one of the most valuable tools I’ve found for JavaScript development. The fact that it’s free and open source makes it a no-brainer to add to any toolkit. And Remy’s continued commitment to the service is commendable. All of this combined makes it easy for me to spread the word about such a great tool.

February 03 2014

14:00

Creating Brackets Extensions

A little while ago I wrote about the recent updates to the Brackets editor. Brackets is an open source project focused on web standards and built with web technologies. It has a narrow focus and therefore may not have a particular feature you’ve come to depend upon. Luckily, Brackets ships with a powerful extension API that lets you add any number of new features. In this article, I’m going to discuss this API and demonstrate how you can build your own extensions.

It is crucial that you remember that Brackets is in active development. This article is being written in December of 2013. It is certainly possible that the code demonstrated below is now out of date. Keep that in mind and be sure to check the wiki for the latest updates to the extension API.


Getting Started

I’m going to assume you read my last article and are already familiar with the extension manager. This provides a simple, one-click method of installing extensions. One of the best ways you can learn to write extensions is by looking at the work done by others (that’s how I learned). I’d recommend grabbing a few extensions (there are almost 200 available now) and tearing apart their code. Don’t be afraid to break a few while you’re at it.

Brackets puts all installed extensions within one main folder. To find that folder, go to the Help menu and select “Show Extensions Folder”. For my OS X install, this was located at /Users/ray/Library/Application Support/Brackets/extensions/user. If you go up from that folder, you’ll notice a disabled folder as well. Brackets will make a valiant effort to load no matter what, but if you ever find yourself in a situation where Brackets has completely crapped the bed and simply will not work, consider moving potentially bad extensions into the disabled folder. Later on in the article, I’ll discuss how you can monitor and debug extensions to help prevent such problems in the first place.

Begin by going to your user folder and creating a new folder, helloworld1. Yes, even though it is completely lame, we’re going to build a HelloWorld extension. Don’t hate me, I like simple. Inside that folder create a new file called main.js. Listing one shows what the contents of this file should be. Note that in this article I’ll go through a couple of different iterations of the helloworld extension. Each one will be named with a progressively higher number, so our first example is from helloworld1, the next helloworld2, and so on. It would make sense for you to simply copy the code into one folder, helloworld, instead of creating a separate folder for each version. If you keep them all installed, you’ll have multiple related extensions running at once, and that can definitely confuse things.

Listing 1: helloworld1/main.js
define(function(require, exports, module) {

    function log(s) {
            console.log("[helloworld] "+s);
    }

    log("Hello from HelloWorld.");
});

The first line defines our extension as a module that will be picked up by Brackets automatically on application load. The rest of the extension is a custom log message (you will see why in a second) and a call to that logger. Once you have this file saved, go back to Brackets, select the Debug menu, and hit Reload. (You can also use Command/Control+R to reload.)

Brackets will reload and … nothing else will happen. The extension we built didn’t actually do anything that we could see, but it did log to the console. But where is that console? Brackets provides an easy way to view the console. Simply go back to the Debug menu and select Show Developer Tools. This will open a new tab in Chrome with a familiar Dev Tools UI. In the screen shot below I’ve highlighted our log. Other extensions, and Brackets itself, will also log messages to this screen. By prefixing my log messages with [helloworld], I can make my own stuff a bit easier to find.

Brackets console messages

Note that the full Chrome console API works here. You can do stuff like this to format your console messages:

function log(s) {
    console.log("%c[helloworld] "+s,"color:blue;font-size:large");
}

Go crazy, but try to remove these messages before you share your code with the rest of the world. In case you’re curious, you can’t use dev tools in another browser, like Firefox, at this time.


Integration Points

Now that you know the (very) basics, let’s talk about what Brackets extensions can do to the editor:

  • They can create keyboard shortcuts, allowing them to respond to custom keystrokes.
  • They can add to the top level menu.
  • They can add context menus (and to a specific area, like the file listing or the editor window).
  • They can create UI items. This can be a modal dialog or even a panel. (Currently the panel is locked to the bottom of the screen).
  • They can create a linting provider (essentially they can register themselves as a code checker for a file type).
  • They can create their own inline editors (a major feature of Brackets).
  • They can register as a documentation provider (for example, adding MDN support for docs).
  • They can integrate with Quick Find and Quick Open.
  • They can add custom code hints and syntax colors.
  • They can read the current file open in the editor as well as modify it. (They can also see the current selected text, if any.)

That describes how extensions can modify Brackets, but what can extensions actually do in terms of code? Keeping in mind that you’re writing extensions in pure web standards (HTML, JavaScript, and CSS), you actually have quite a bit of power. The only real limits relate to binary data. There is a File system API that gives you control over files but is limited to text data only. Luckily, you have a way out.

Any Brackets extension can integrate with Node.js. If you’ve got an existing Node.js package your extension can make calls to it and do, well, whatever Node.js can do, which is essentially anything.

Let’s update our extension to integrate with the editor a bit better. I’ll start by simply adding a menu item for the extension.

Listing 2: helloworld2/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {

    var CommandManager = brackets.getModule("command/CommandManager"),
            Menus = brackets.getModule("command/Menus"),
            AppInit = brackets.getModule("utils/AppInit");
    function log(s) {
            console.log("[helloworld2] "+s);
    }
    function handleHelloWorld() {
            alert("You ran me, thanks!");
    }
    AppInit.appReady(function () {

            log("Hello from HelloWorld2.");

            var HELLOWORLD_EXECUTE = "helloworld.execute";

            CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

            var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
            menu.addMenuItem(HELLOWORLD_EXECUTE);

    });

});

We’ve got a few changes here so let’s tackle them one by one. You’ll notice that the extension begins with three calls to brackets.getModule. All extensions have access to a brackets object that provides an API where we can load in core functionality from the editor. In this case the extension has loaded two libraries we’ll need for the menu (CommandManager and Menus) and one which will be used to help initialize the extension (AppInit).

Let’s talk about AppInit. You can see that most of the extension is now loaded within an appReady callback. This callback is fired when Brackets has completed loading and is generally considered “best practice” for extensions to make use of.

Registering a menu item takes a few steps. I begin by defining a “command ID”, a unique identifier for the item I’ll be adding to the UI. The typical way to do this is with the format extensionname.someaction. In my case, I used helloworld.execute. I can then register this command along with the function (handleHelloWorld) that should be called when the command is fired.

The final step is to add this command to the menu. You can probably guess that my menu item will be added under the View menu based on this value: Menus.AppMenuBar.VIEW_MENU. How did I know that value? Simple, I saw other extensions do it. Seriously though, there is no specific list of items like this yet. Don’t forget that Brackets is open source. I can easily pop over to the GitHub repo and check it out. In this case, the file is Menus.js, located on GitHub. In there I can see where the various different core menus are defined:

/**
  * Brackets Application Menu Constants
  * @enum {string}
*/
var AppMenuBar = {
     FILE_MENU       : "file-menu",
     EDIT_MENU       : "edit-menu",
     VIEW_MENU       : "view-menu",
     NAVIGATE_MENU   : "navigate-menu",
     HELP_MENU       : "help-menu"
};      

As a general rule of thumb, it makes sense to have at least a cursory understanding of what’s available in Brackets itself. Your extensions will, from time to time, make use of multiple different features so it’s definitely in your best interest to at least know the lay of the land.

After reloading Brackets, you’ll now see the menu item in the View menu. Exactly where it is may be a bit random as you may have other extensions installed.

View menu updated

You can actually be a bit more specific about your position. Again, this is where the source code will help you. The same file I linked to above also contains the addMenuItem definition.


Put Some Lipstick on That Pig

Now that you’ve seen a simple example of how an extension can integrate into Brackets, let’s look at how we update the UI. In the previous version of our code, an alert was used to send a message. While this works, it isn’t very pretty. Your code can access the Brackets editor just like any other DOM modification code. While you can do anything you want, there are a few standard ways extensions update the UI in Brackets. (As a warning, in general you do not want to touch the DOM of the main editor UI. You can, but with future updates, your code may break. Also, users may not be happy if your extension changes something core to Brackets.)

The first method we’ll look at uses modal dialogs. Brackets already uses this and has an API available for extensions to call. As a simple example, let’s just update the HelloWorld extension to use a modal instead.

Listing 3: helloworld3/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {

    var CommandManager = brackets.getModule("command/CommandManager"),
        Menus = brackets.getModule("command/Menus"),
        Dialogs = brackets.getModule("widgets/Dialogs"),
        DefaultDialogs = brackets.getModule("widgets/DefaultDialogs"),
        AppInit = brackets.getModule("utils/AppInit");

    function log(s) {
            console.log("[helloworld3] "+s);
    }

    function handleHelloWorld() {
        Dialogs.showModalDialog(DefaultDialogs.DIALOG_ID_INFO, "Hello World", "Same Important Message");
    }

    AppInit.appReady(function () {

        log("Hello from HelloWorld3.");

        var HELLOWORLD_EXECUTE = "helloworld.execute";

        CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

        var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
        menu.addMenuItem(HELLOWORLD_EXECUTE);

    });

});

Note the addition of two Brackets modules: Dialogs and DefaultDialogs. The next change is in handleHelloWorld. One of the methods in the Dialog library is the ability to show a dialog (no surprise there, I suppose). The method wants a class, a title, and a body, and that’s it. There’s more you can do with dialogs, but for now, this demonstrates the feature. Now when we run the command, we get a much prettier UI. (Along with default buttons and behaviours to handle closing the dialog.)

Dialog example

That’s one example, now let’s look at another: creating a bottom panel. As with dialogs, we’ve got support from Brackets to make it easier. Let’s look at an example and then I’ll explain the changes.

Listing 4: helloworld4/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {

    var CommandManager = brackets.getModule("command/CommandManager"),
    Menus = brackets.getModule("command/Menus"),
    PanelManager = brackets.getModule("view/PanelManager"),
    AppInit = brackets.getModule("utils/AppInit");

    var HELLOWORLD_EXECUTE = "helloworld.execute";
    var panel;

    function log(s) {
            console.log("[helloworld4] "+s);
    }

    function handleHelloWorld() {
        if(panel.isVisible()) {
            panel.hide();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(false);
        } else {
            panel.show();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(true);
        }
    }

    AppInit.appReady(function () {

            log("Hello from HelloWorld4.");

            CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

            var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
            menu.addMenuItem(HELLOWORLD_EXECUTE);

            panel = PanelManager.createBottomPanel(HELLOWORLD_EXECUTE, $("<div class='bottom-panel'>HTML for my panel</div>"),200);

    });

});

Let’s focus on the changes. First, I dropped the Dialog modules as I’m no longer using them. Instead, we load up the PanelManager. Down in the appReady block I’ve defined a new panel using the PanelManager API method createBottomPanel. Like the menu command, this takes in a unique ID, so I just reuse HELLOWORLD_EXECUTE. The second argument is a jQuery-wrapped block of HTML (and in case you’re wondering, yes, we can do this more cleanly), and finally, a minimum size. This sets up the panel but doesn’t actually execute it.

In the event handler we tied to the menu, we can ask the panel if it is visible and then either hide or show it. That part should be pretty trivial. For fun, I’ve added in a bit more complexity. Notice that CommandManager lets us get a menu item and set a checked property. This may be unnecessary as the user can see the panel easily enough themselves, but adding the check just makes things a little bit more obvious. In the screen shot below you can see the panel in its visible state.

Panel example

Right away you may be wondering about the panel HTML. Is there a better way to provide the HTML? Any way to style it? Yep, let’s look at a more advanced version.

Listing 5: helloworld5/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {
    var CommandManager = brackets.getModule("command/CommandManager"),
    Menus = brackets.getModule("command/Menus"),
    PanelManager = brackets.getModule("view/PanelManager"),
    ExtensionUtils          = brackets.getModule("utils/ExtensionUtils"),        
    AppInit = brackets.getModule("utils/AppInit");

    var HELLOWORLD_EXECUTE = "helloworld.execute";
    var panel;
    var panelHtml     = require("text!panel.html");

    function log(s) {
            console.log("[helloworld5] "+s);
    }

    function handleHelloWorld() {
        if(panel.isVisible()) {
            panel.hide();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(false);
        } else {
            panel.show();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(true);
        }
    }

    AppInit.appReady(function () {

        log("Hello from HelloWorld5.");
        ExtensionUtils.loadStyleSheet(module, "helloworld.css");
        CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

        var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
        menu.addMenuItem(HELLOWORLD_EXECUTE);

        panel = PanelManager.createBottomPanel(HELLOWORLD_EXECUTE, $(panelHtml),200);

    });

});

As before, I’m going to focus on the changes. First note that I’ve included a variable called panelHtml that is loaded via require. This lets me define my HTML outside of my JavaScript code. (You can also use templating engines. Brackets ships with Mustache.) The HTML behind the panel is rather simple.

Listing 6: helloworld5/panel.html
<div class="bottom-panel helloworld-panel" id="helloworldPanel">
<h1>My Panel</h1>

<p>
My panel brings all the boys to the yard,<br/>
And they're like<br/>
It's better than yours,<br/>
Damn right it's better than yours,<br/>
I can teach you,<br/>
But I have to charge
</p>
</div>
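The panel markup above is static; since Brackets ships with Mustache, you could also fill dynamic values into the markup before wrapping it with jQuery. As a plain-JavaScript illustration of what that substitution does (this helper is my own stand-in, not Mustache’s API):

```javascript
// A stand-in for the {{name}} substitution a templating engine such as
// Mustache performs. Illustration only; not Mustache's actual implementation.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return (key in data) ? data[key] : match;
  });
}

var panelTemplate = "<div class='helloworld-panel'><h1>{{title}}</h1></div>";
var html = renderTemplate(panelTemplate, { title: "My Panel" });
// html is "<div class='helloworld-panel'><h1>My Panel</h1></div>"
```

The rendered string could then be handed to createBottomPanel just like the require-loaded HTML is.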

Returning to main.js, I’ve demonstrated another feature, loadStyleSheet. This lets you load an extension specific style sheet. I created a file, helloworld.css, with some simple (but tasteful) CSS styles.

Listing 7: helloworld5/helloworld.css
.helloworld-panel h1 {
        color: red;
}

.helloworld-panel p {
        color: blue;
        font-weight: bold;
}

Note that I prefixed my styles with a unique name. This helps ensure my classes don’t conflict with anything built into Brackets. With these simple changes my panel now looks much better, and you can see why I’m known worldwide for my superior design skills.

Epic CSS

Packaging and Sharing Your Kick Butt Extension

Of course, just creating the coolest Brackets extension isn’t quite enough. You probably (hopefully!) want to share it with others. One option is to just zip up the directory and put it on your website. Folks can download the zip, extract it, and copy it to their Brackets extensions folder.

But that’s not cool. You want to be cool, right? In order to share your extension and make it available via the Brackets Extension manager, you simply need to add a package.json file to your extension. If you’ve ever used Node.js, then this will seem familiar. Here is a sample one for our extension.

Listing 8: helloworld6/package.json
{
    "name": "camden.helloworld",
    "title": "HelloWorld",
    "description": "Adds HelloWorld support to Brackets.",
    "homepage": "https://github.com/cfjedimaster/something real here",
    "version": "1.0.0",
    "author": "Raymond Camden <raymondcamden@gmail.com> (http://www.raymondcamden.com)",
    "license": "MIT",
    "engines": {
        "brackets": ">=0.34.0"
    }
}

Most of this is self-explanatory, but the real crucial portion is the engines block. Brackets updates itself pretty rapidly. If Brackets added a particular feature at some point that your extension relies on, you can add a simple conditional here to ensure folks don’t try to install your extension on an incompatible version. (You can find a full listing of the possible settings on the Wiki.)
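To make the engines check concrete, here is a rough sketch of the kind of minimum-version comparison a ">=0.34.0" style constraint implies. This is a simplified stand-in of my own, not the code the Brackets registry actually runs (it uses a full semver implementation):

```javascript
// Sketch of a minimum-version check like the one implied by an engines
// constraint such as ">=0.34.0". Simplified stand-in only; the real
// registry handles full semver ranges.
function satisfiesMinimum(version, minimum) {
  var a = version.split(".").map(Number);
  var b = minimum.split(".").map(Number);
  for (var i = 0; i < Math.max(a.length, b.length); i++) {
    var x = a[i] || 0;
    var y = b[i] || 0;
    if (x !== y) {
      return x > y; // first differing segment decides
    }
  }
  return true; // versions are equal, so the minimum is satisfied
}
```

For example, satisfiesMinimum("0.35.0", "0.34.0") is true, while satisfiesMinimum("0.33.0", "0.34.0") is false.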

Once you’ve done this, the next part is to upload it to the Brackets Registry. You will need to log in via your GitHub account, but once you’ve done that, you can then simply upload your zip. Your extension will then be available to anyone using Brackets. Even better, if you update your extension, the Extension Manager will actually be able to flag this to the user so they know an update is available.


What Else?

Hopefully, you’ve seen how easy it is to extend Brackets. There’s more we didn’t cover, like the Linting API and Node.js integration, but this article should be more than enough to get you started. As a reminder, do not forget there is a large collection of extensions available for you to start playing with right now. Good luck!


January 30 2014

14:00

JavaScript Animation That Works (Part 2 of 4)

In the last post, we introduced the idea of spriting, an easy way to animate in JavaScript that works in all browsers. We also walked through how to set up the sprite as a background image for a div and then use a line of JavaScript to change the background position to make it appear as if the image has moved.

In this post, we will use this technique to animate both running and jumping motions. In order to create the animation, we will need to quickly change the background position at a regular interval. Take a look again at the sprite we are using.

[Image: the sprite sheet of J, with all ten frames]

Meet J, the mascot for my company, Joust Multimedia.

In our example, we have ten total images: one of J standing facing right, three of J running to the right and one of J jumping while facing right (with the same number of each frame facing left). Let’s start with making him run to the right. In order to make our image look like it is running, we will need to do two things: change the sprite to a different image and move the div towards the right.


Running to the Right Animation

We certainly won’t want to be stuck clicking different buttons to cycle through the sprites, so we will need to create some functions that do this automatically.

For our running function, we want to:

  1. Move the div towards the right slightly
  2. Move to the next frame of animation
  3. Pause for a fraction of a second (to preserve the “persistence of vision” illusion)
  4. Loop the function again

Fortunately, there is an easy way to loop with functions. A native JavaScript function called setTimeout will allow us to create a timed delay, after which we will call the function again (from inside the function).

function run_right(){
  // Move slightly to the right ...
  // Change to the next frame of animation ...

  // this will call 'run_right' again after 200 milliseconds
  setTimeout(function(){run_right();}, 200); 
}

So now we have a function that will call itself again five times a second (which will be fast enough to create animation for our purposes). Remember here that browsers are not terribly accurate with their timers. You can specify timing to the millisecond, but that doesn’t mean your script will run at that timing exactly!
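The arithmetic behind that 200 millisecond choice is worth spelling out (the 15 pixel step used below comes from the movement code later in this post):

```javascript
// With a 200 ms delay between calls, the loop runs at most
// 1000 / 200 = 5 frames per second; at 15 px per frame (the step used
// later in this post) that moves the div about 75 px per second.
var delayMs = 200;
var framesPerSecond = 1000 / delayMs;                   // 5
var pixelsPerFrame = 15;
var pixelsPerSecond = framesPerSecond * pixelsPerFrame; // 75
```

In practice the browser may fire the timer late, so these are upper bounds, not guarantees.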

The next problem to tackle: how will our function know which sprite to change to? In our example, we will need to cycle back and forth through our three images (to have four total frames of animation). To do this, we are going to pass our function a bit of information to tell it which slide to switch to. Once in the function, we will do a test that checks which slide we should be on, then switch the background position to the correct sprite. When we call the function again, we will pass the next slide as the argument.

function run_right(slide){
  // Move slightly to the right ...
  switch (slide){ // this switch statement checks for different possibilities for 'slide'
    case 1: // if 'slide' equals '1' ...
      document.getElementById('j').style.backgroundPosition = "-40px 0px";
      setTimeout(function(){run_right(2);}, 200);
      break;
    case 2: // if 'slide' equals '2' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(3);}, 200);
      break;
    case 3: // if 'slide' equals '3' ...
      document.getElementById('j').style.backgroundPosition = "-120px 0px";
      setTimeout(function(){run_right(4);}, 200);
      break;
    case 4: // if 'slide' equals '4' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(1);}, 200);
      break;
  }
}
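As an aside, the repetition in the switch statement above can also be expressed as a lookup table. This is a sketch of the idea, not code from the article (the frames table and the cycleRight name are mine), and it handles only the frame cycling, not the movement:

```javascript
// A lookup table equivalent to the switch statement above: each entry
// maps the current slide to its background position and the slide that
// follows it. (My own sketch, not the article's code.)
var frames = {
  1: { position: "-40px 0px",  next: 2 },
  2: { position: "-80px 0px",  next: 3 },
  3: { position: "-120px 0px", next: 4 },
  4: { position: "-80px 0px",  next: 1 }
};

function cycleRight(slide) {
  document.getElementById('j').style.backgroundPosition = frames[slide].position;
  setTimeout(function() { cycleRight(frames[slide].next); }, 200);
}
```

Either form behaves the same; the switch version keeps each case explicit, which is friendlier for beginners, while the table keeps the frame data in one place.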

And now when we call the function for the first time, we will need to make sure we pass the starting slide.

<input type="button" value="Run Right" onclick="run_right(1);" />

Similarly, to move our div to the right slightly, we can pass the initial left attribute of the div, then move the div slightly each time the function is called.

function run_right(slide, left){
  
  left = left + 15; // Increase his left attribute by 15px
  document.getElementById('j').style.left = left+"px";
  
  switch (slide){ // this switch statement checks for different possibilities for 'slide'
    case 1: // if 'slide' equals '1' ...
      document.getElementById('j').style.backgroundPosition = "-40px 0px";
      setTimeout(function(){run_right(2, left);}, 200);
      break;
    case 2: // if 'slide' equals '2' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(3, left);}, 200);
      break;
    case 3: // if 'slide' equals '3' ...
      document.getElementById('j').style.backgroundPosition = "-120px 0px";
      setTimeout(function(){run_right(4, left);}, 200);
      break;
    case 4: // if 'slide' equals '4' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(1, left);}, 200);
      break;
  }
}

And when we initially call the function, we need to make sure we pass the current left position of our div.

<input type="button" value="Run Right" onclick="run_right(1, document.getElementById('j').offsetLeft);" />

Stopping the Animation

So, now we have a function that, when called, will animate J to run to the right. Unfortunately, we have no way to stop it. First of all, we will need to make the function stop calling itself when J runs to the edge of our stage. To do that, every time the function runs, we will use an if statement to check whether J has room to keep running. If so, we will run the function like normal. If not, we will stop calling the function and return him to the standing sprite.

function run_right(slide, left){
  // If we can add 15 pixels to the left and have J's right edge not be at the stage's right edge ...
  if ((left + 15) < (document.getElementById('stage').offsetWidth - document.getElementById('j').offsetWidth)){
    // We have room! Continue like normal here
  } else { // if we are on the right edge, we need to stop calling the function and return to standing
    document.getElementById('j').style.backgroundPosition = "0px 0px";
  }
}

Finally, we will want a way to stop the function when needed. We can assign the result of setTimeout() to a variable, then cancel it with the clearTimeout() command. In order to do this, we will need to declare that variable outside of the function, so that we will be able to refer to it later. For now, we will declare it as a global variable. This is terrible coding practice, but we will correct this in the next post. This is what our function looks like.

var timer;

function run_right(slide, left){
  if ((left + 15) < (document.getElementById('stage').offsetWidth - document.getElementById('j').offsetWidth)){
    left = left + 15; // Increase his left attribute by 15px
    document.getElementById('j').style.left = left+"px";
  
    switch (slide){ // this switch statement checks for different possibilities for 'slide'
      case 1: // if 'slide' equals '1' ...
        document.getElementById('j').style.backgroundPosition = "-40px 0px";
        timer = setTimeout(function(){run_right(2, left);}, 200);
        break;
      case 2: // if 'slide' equals '2' ...
        document.getElementById('j').style.backgroundPosition = "-80px 0px";
        timer = setTimeout(function(){run_right(3, left);}, 200);
        break;
      case 3: // if 'slide' equals '3' ...
        document.getElementById('j').style.backgroundPosition = "-120px 0px";
        timer = setTimeout(function(){run_right(4, left);}, 200);
        break;
      case 4: // if 'slide' equals '4' ...
        document.getElementById('j').style.backgroundPosition = "-80px 0px";
        timer = setTimeout(function(){run_right(1, left);}, 200);
        break;
    }
  } else {
    document.getElementById('j').style.backgroundPosition = "0px 0px";
  }
}

And we can create another function to stop the running timer and return the sprite to the standing image.

function stop_running(){
  document.getElementById('j').style.backgroundPosition = "0px 0px";
  clearTimeout(timer);
}

Running to the Left Animation

Now, by borrowing the code from our run_right function, we can create a run_left function with just a few modifications.

function run_left(slide, left){
  if ((left - 15) > 0){
    left = left - 15;
    document.getElementById('j').style.left = left+"px";
    switch (slide){
      case 1:
        document.getElementById('j').style.backgroundPosition = "-40px -50px";
        timer = setTimeout(function(){run_left(2, left);}, 200);
        break;
      case 2:
        document.getElementById('j').style.backgroundPosition = "-80px -50px";
        timer = setTimeout(function(){run_left(3, left);}, 200);
        break;
      case 3:
        document.getElementById('j').style.backgroundPosition = "-120px -50px";
        timer = setTimeout(function(){run_left(4, left);}, 200);
        break;
      case 4:
        document.getElementById('j').style.backgroundPosition = "-80px -50px";
        timer = setTimeout(function(){run_left(1, left);}, 200);
        break;
    }
  } else {
    document.getElementById('j').style.backgroundPosition = "0px -50px";
  }
}

Jumping Animation

Finally, we need to create a jump function. We will pass two arguments to this function, one that will track whether the div is currently moving up or down and another that will track the current top attribute of the div. Between the two, we will determine which direction the div needs to move next, and how far (we will move the div less distance near the arc of the jump to simulate acceleration with gravity).

function jump(up, top){
  /*
   * We change J to his jumping sprite ...
   */
  document.getElementById('j').style.backgroundPosition = "-160px 0px";
  /*
   * Here, we need to decide whether he should be traveling up or down...
   */
  if (up && (document.getElementById('j').offsetTop > 20)){
    // if he is currently moving up, and he is more than 20 pixels from the top of the stage ...
    top = top - (top * .1); // This gives us a slight arc in the jump, rather than a constant movement like running
    document.getElementById('j').style.top = top+"px"; // Change his position
    timer = setTimeout(function(){jump(up, top);}, 60); // Then call the function again
  } else if (up) {
    // if he is currently moving up, but he is almost at the top of the stage and needs to come back down...
    up = false; // we switch the 'up' variable so he will be falling in the next loop
    timer = setTimeout(function(){jump(up, top);}, 60);
  } else if (!up && (document.getElementById('j').offsetTop < 115)){
    // if he is moving down, but is more than 5px from the ground, he will continue to fall...
    top = top + (top * .1); // His fall will slightly accelerate
    document.getElementById('j').style.top = top+"px";
    timer = setTimeout(function(){jump(up, top);}, 60);
  } else {
    // If he is moving down, and he is within 5px of the ground...
    document.getElementById('j').style.top = "120px"; // Place him on the ground
    document.getElementById('j').style.backgroundPosition = "0px 0px"; // return to standing sprite
    // We do not call the loop anymore since he is standing still at this point
  }
}
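To see why removing 10% of the current top value each step produces an arc, here is a standalone sketch (mine, not the article's) that computes the sequence of top positions for the upward half of the jump, using the article's 120px floor and 20px ceiling:

```javascript
// Each step removes 10% of the current top value, so the steps shrink
// as J rises, easing him into the peak of the jump. ('topPos' is used
// instead of 'top' to avoid the read-only window.top in browsers.)
var topPos = 120;
var positions = [];

while (topPos > 20) {
  topPos = topPos - (topPos * 0.1);
  positions.push(Math.round(topPos));
}

// positions begins with large drops (120 -> 108 -> 97 ...) and ends
// with small ones, which reads as deceleration near the top of the arc.
```

The article's jump function does the same arithmetic, but spreads it over timer ticks and reverses the direction on the way down.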

Now we can put all four of our functions into buttons and have a working prototype of a running and jumping animation! Please check out the source code for this page with comments and download the sprite sheet that I used, if you’d like.


Conclusion

Now, although we have a working prototype here, you may notice it is a little buggy. When you click on more than one button at a time, the script will try to run both at once. Or, if you click the jump button again on the way down, J will continue to fall forever. Also, as I mentioned earlier, we have global variables in our script, which means it might be difficult to add this code into an existing page without crashing other JavaScript (which is also why I didn’t try to run this code within this blog page). In our next post, we will clean up all of these bugs and talk about the concept of encapsulation and why it is important to write good code in the real world.

January 16 2014

19:42

Ember.js Testing

When I started playing around with Ember.js almost a year ago, the testability story left something to be desired. You could unit test an object without any trouble, but a unit test is only one way to get feedback when you’re building a software product. In addition to unit tests, I wanted a way to verify the integration of multiple components. So like most people testing rich JavaScript applications, I reached for the mother of all testing tools, Selenium.

Now, before I bash it without a proper introduction, it’s worth mentioning that Selenium is a great way to verify your entire web application works with a full production-like database, all your production dependencies, etc. And from a QA perspective, this tool can be a great resource for teams who need end-to-end UI acceptance tests.

But over time, a seemingly small test suite built on Selenium can begin to drag the velocity of your team to a snail’s pace. One easy way to reduce this pain is to avoid building a large application in the first place. If you build a handful of smaller web applications instead, it might keep you afloat a little longer, because no individual build will crush the team as you grow.

But even on a small project, the real problem with Selenium is that it’s not part of the test driven development process. When I’m doing red/green/refactor, I don’t have time for slow feedback in any form. I needed a way to write both unit and integration tests that would provide quick feedback to help me shape the software I was writing in a more iterative way. If you are using a version of Ember.js >= RC3, you’re in luck, because writing a unit or integration test is a walk in the park.


Installing the Test Runner

Now that we can write JavaScript tests for our application, how do we execute them? Most developers start out using the browser directly, but because I wanted something I could execute headless from the command line in a CI environment with a rich ecosystem full of plugins, I looked to Karma.

What I liked about Karma is that it only wants to be your test runner. It doesn’t care what JavaScript test framework you use or what client-side MVC framework you use. It’s simple to get started with, and writing tests that execute against your production Ember.js application takes just a few lines of configuration.

But before we can configure Karma, we need to install it using npm. I recommend installing it locally so you can keep your npm modules isolated per project. To do this, add a file named package.json to the root of your project that looks something like the below.

{
  "dependencies": {
    "karma-qunit": "*",
    "karma": "0.10.2"
  }
}

This example requires both Karma and a plugin for QUnit. After you save the package.json file above, drop back to the command line and type npm install to pull down the required Node modules.

After the npm install completes, you will now see a new folder with the name node_modules in the root of your project. This folder contains all the JavaScript code we just pulled down with npm, including Karma and the QUnit plugin. If you drill down even further to node_modules/karma/bin/ you will see the Karma executable. We will be using this to configure the test runner, execute tests from the command line, etc.
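As an optional convenience (my own suggestion, not part of the article's setup), you can add an npm script so you don't have to type the full path to the Karma binary each time. npm puts node_modules/.bin on the PATH when running scripts, so npm test will find the locally installed Karma:

```json
{
  "dependencies": {
    "karma-qunit": "*",
    "karma": "0.10.2"
  },
  "scripts": {
    "test": "karma start"
  }
}
```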


Configure the Test Runner

Next we need to configure Karma so it knows how to execute the QUnit tests. Type karma init from the root of the project and you will be prompted with a list of questions. The first asks what testing framework you want to use; hit Tab until you see qunit, then hit Enter. Next, answer no to the Require.js question, as we won’t be using it for this sample application. For the third question, Tab until you see PhantomJS, and hit Enter twice, as this question allows multiple selections. As for the rest, just leave them at their default option.

When you are done, you should see that Karma has generated a configuration file named karma.conf.js in the root of your project. If you want to read more about the various options Karma supports, you might find its comments helpful. For the sake of this example, I have a simplified version of the configuration file to keep things beginner friendly.

If you want to follow along, delete the generated configuration file and replace it with this one.

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
     
    files: [
      "vendor/jquery/jquery.min.js",
      "vendor/handlebars/handlebars.js",
      "vendor/ember/ember.js",
      "vendor/jquery-mockjax/jquery.mockjax.js",
      "app.js",
      "tests/*.js"
    ],
     
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
      
    frameworks: ["qunit"]
  });
};

This should be fairly similar to what Karma generated earlier; I’ve just removed all the comments and cut out a few options we don’t care about right now. In order to write the first unit test, I had to tell Karma a little more about the project structure.

At the top of the configuration file, you will see that I’ve set the basePath to js because all of the JavaScript assets live under this folder in the project. Next, I told Karma where it can find the JavaScript files required to test our simple application. This includes jQuery, Handlebars, Ember.js and the app.js file itself.


Writing the First Unit Test

Now we can add the first unit test file to the project. First make a new folder named tests and nest it under the js folder. Add a file in this new directory named unit_tests.js that looks something like this.

test('hello world', function() {
  equal(1, 1, "");
});

This test isn’t doing anything valuable yet, but it will help us verify that everything is wired up for Karma to execute it correctly. Notice that in the Karma files section we already added the js/tests directory, so going forward Karma will pull in every JavaScript file we use to test our application.

Now that we have Karma configured correctly, execute the qunit tests from the command line using ./node_modules/karma/bin/karma start.

If you have everything set up correctly, you should see Karma execute one test successfully. To verify it executed the test we just wrote, make it fail by altering the equal statement. For example, you could do the following:

test('hello world', function() {
  equal(1, 2, "boom");
});

If you can fail this and make it pass again, it’s time to write a test with a little more purpose.


The Sample Application

But before we get started, let’s discuss the sample application used throughout this post. In the screenshot below, you can see we have a very simple grid of users. In the HTML table, each user is shown by first name, along with a button to delete that user. At the top of the application you will see an input for the first name, an input for the last name, and a button that will add another user to the table when clicked.

https://dl.dropboxusercontent.com/u/716525/content/images/2013/pre-tuts.png

The example application has three problems. First, we want to show the user’s first and last name, not just the first name. Next, when you click a delete button it won’t actually remove the user. And finally, when you add a first name, last name and click add, it won’t put another user into the table.

On the surface, the full name change appears to be the simplest. It also turned out to be a great example that shows when you should write a unit test, an integration test or both. In this example, the quickest way to get feedback is to write a simple unit test that asserts the model has a computed property fullName.


Unit Testing the Computed Property

Unit testing an Ember object is easy: you simply create a new instance of the object and ask for the fullName value.

test('fullName property returns both first and last', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
});

Next if you go back to the command line and run ./node_modules/karma/bin/karma start, it should show one failing test with a helpful message describing fullName as undefined currently. To fix this, we need to open the app.js file and add a computed property to the model that returns a string of the combined first and last name values.

App.Person = Ember.Object.extend({
  firstName: '',
  lastName: '',
  fullName: function() {
    var firstName = this.get('firstName');
    var lastName = this.get('lastName');
    return firstName + ' ' + lastName;
  }.property()
});

If you drop back to the command line and run ./node_modules/karma/bin/karma start you should now see a passing unit test. You can extend this example by writing a few other unit tests to show that the computed property should change when either the first or last name is updated on the model.

test('fullName property returns both first and last', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
});

test('fullName property updates when firstName is changed', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
  person.set('firstName', 'wat');
  result = person.get('fullName');
  equal(result, 'wat billups', "fullName was " + result);
});

test('fullName property updates when lastName is changed', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
  person.set('lastName', 'tbozz');
  result = person.get('fullName');
  equal(result, 'toran tbozz', "fullName was " + result);
});

If you add these two additional tests and run all three from the command line, you should have two failing. To get all three tests passing, modify the computed property to listen for changes on both the first name and last name. Now if you run ./node_modules/karma/bin/karma start from the command line, you should have three passing tests.

App.Person = Ember.Object.extend({
  firstName: '',
  lastName: '',
  fullName: function() {
    var firstName = this.get('firstName');
    var lastName = this.get('lastName');
    return firstName + ' ' + lastName;
  }.property('firstName', 'lastName')
});

Add the Karma-Ember-Preprocessor and Configure It

Now that we have a computed property on the model, we need to look at the template itself because currently we don’t use the new fullName property. In the past, you would need to wire up everything yourself, or use Selenium to verify the template gets rendered correctly. But with ember-testing you can now integration test this by adding a few lines of JavaScript and a plugin for Karma.

First open the package.json file and add the karma-ember-preprocessor dependency. After you update the package.json file, do npm install from the command line to pull this down.

{
  "dependencies": {
    "karma-ember-preprocessor": "*",
    "karma-qunit": "*",
    "karma": "0.10.2"
  }
}

Now that you have the pre-processor installed, we need to make Karma aware of the template files. In the files section of your karma.conf.js file add the following to tell Karma about the Handlebars templates.

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
    
    files: [
      "vendor/jquery/jquery.min.js",
      "vendor/handlebars/handlebars.js",
      "vendor/ember/ember.js",
      "vendor/jquery-mockjax/jquery.mockjax.js",
      "app.js",
      "tests/*.js",
      "templates/*.handlebars"
    ],
    
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
     
    frameworks: ["qunit"]
  });
};

Next we need to tell Karma what to do with these handlebars files, because technically we want each template precompiled before it’s handed over to PhantomJS. Add the preprocessors configuration and point anything with a file extension of *.handlebars at the ember preprocessor. You also need to add the plugins configuration to register the ember preprocessor (along with a few others that normally get included with Karma’s default configuration).

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
     
    files: [
      "vendor/jquery/jquery.min.js",
      "vendor/handlebars/handlebars.js",
      "vendor/ember/ember.js",
      "vendor/jquery-mockjax/jquery.mockjax.js",
      "app.js",
      "tests/*.js",
      "templates/*.handlebars"
    ],
     
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
     
    frameworks: ["qunit"],
     
    plugins: [
      'karma-qunit',
      'karma-chrome-launcher',
      'karma-ember-preprocessor',
      'karma-phantomjs-launcher'
    ],
     
    preprocessors: {
      "**/*.handlebars": 'ember'
    }
  });
};

Integration Testing the Data-Bound Template

Now that we have the Karma configuration set up for integration testing, add a new file named integration_tests.js under the tests folder. Inside this file, we need a simple test to prove we can stand up the entire Ember.js application without error. Add a simple QUnit test to see if we can hit the '/' route and get the basic HTML returned. For the initial test, we are only asserting that the table tag exists in the HTML that was generated.

test('hello world', function() {
  App.reset();
  visit("/").then(function() {
    ok(exists("table"));
  });
});

Notice we are using a few helpers that are built into ember-testing, like visit and find. The visit helper is an Ember-friendly way of telling the application what state to be in during execution. This test starts at the '/' route because that is where the People models get bound to the template and our HTML table is generated. The find helper is a quick way to look up elements in the DOM using CSS selectors, like you would with jQuery, to verify something about the markup.

Before we can run this test, we need to add a test helper file that will inject the test helpers and set a generic root element. Add the code below to a file named integration_test_helper.js in the same tests directory. This will ensure our application has the test helpers at execution time.

document.write('<div id="ember-testing-container"><div id="ember-testing"></div></div>');
 
App.rootElement = '#ember-testing';
App.setupForTesting();
App.injectTestHelpers();

function exists(selector) {
  return !!find(selector).length;
}

Now from the command line you should be able to execute the integration test above. If you got a passing test, remove the table from the handlebars template to make it fail (just to help prove Ember was generating the HTML using that template).

Now that we have the integration tests setup, it’s time to write the one that asserts we show each user’s fullName instead of their firstName. We want to first assert that we get two rows, one for each person.

test('hello world', function() {
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length;
    equal(rows, 2, rows);
  });
});

Note: The application is currently returning hard coded data to keep everything simple at the moment. If you are curious why we get two people, here is the find method on the model:

App.Person.reopenClass({
  people: [],
  find: function() {
    var first = App.Person.create({firstName: 'x', lastName: 'y'});
    var last = App.Person.create({firstName: 'x', lastName: 'y'});
    this.people.pushObject(first);
    this.people.pushObject(last);
    return this.people;
  }
});

If we run the tests now, we should still have everything passing because two people are returned as we would expect. Next, we need to get the table cell that shows the person’s name and assert it’s using the fullName property instead of just firstName.

test('hello world', function() {
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length;
    equal(rows, 2, rows);
    var fullName = find("table tr:eq(0) td:eq(0)").text();
    equal(fullName, "x y", "the first table row had fullName: " + fullName);
  });
});

If you run the above test you should see a failing test because we haven’t yet updated the template to use fullName. Now that we have a failing test, update the template to use fullName and run the tests using ./node_modules/karma/bin/karma start. You should now have a passing suite of both unit and integration tests.


Should I Write Unit or Integration Tests?

If you are asking yourself, “when should I write a unit test vs. an integration test?”, the answer is simply: what will be less painful? If writing a unit test is faster and it explains the problem better than a much larger integration test, then I say write the unit test. If the unit tests seem less valuable because you are doing basic CRUD and the real behavior is in the interaction between components, I say write the integration test. Because the integration tests written with ember-testing are blazingly fast, they are part of the developer feedback cycle and should be used similarly to a unit test when it makes sense.

To show a CRUD like integration test in action, write the following test to prove the add button puts the person into the collection and that a new row gets rendered in the handlebars template.

test('add will append another person to the html table', function() {
  App.Person.people = [];
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length
    equal(rows, 2, "the table had " + rows + " rows");
    fillIn(".firstName", "foo");
    fillIn(".lastName", "bar");
    return click(".submit");
  }).then(function() {
    equal(find("table tr").length, 3, "the table of people was not complete");
    equal(find("table tr:eq(2) td:eq(0)").text(), "foo bar", "the fullName for the person was incorrect");
  });
});

Start by telling the test what state you want to work with, then, using the fillIn helper, add a first name and last name. Now if you click the submit button, it should add that person to the HTML table, so in the following then() we can assert that three people exist in the HTML table. Run this test and it should fail because the Ember controller isn’t complete.

To get the test passing, add the addPerson action to the PeopleController:

App.PeopleController = Ember.ArrayController.extend({
  actions: {
    addPerson: function() {
      var person = {
        firstName: this.get('firstName'),
        lastName: this.get('lastName')
      };
      App.Person.add(person);
    }
  }
});

Now if you run the tests using ./node_modules/karma/bin/karma start it should show three people in the rendered HTML.

The last test is the delete. Notice we find the delete button for a specific row and click it. In the following then(), we simply verify that one less person is shown in the HTML table.

test('delete will remove the person for a given row', function() {
  App.Person.people = [];
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length;
    equal(rows, 2, "the table had " + rows + " rows");
    return click("table .delete:first");
  }).then(function() {
    equal(find("table tr").length, 1, "the table of people was not complete");
  });
});

To get this passing, add the deletePerson action to the PeopleController:

App.PeopleController = Ember.ArrayController.extend({
  actions: {
    addPerson: function() {
      var person = {
        firstName: this.get('firstName'),
        lastName: this.get('lastName')
      };
      App.Person.add(person);
    },
    deletePerson: function(person) {
      App.Person.remove(person);
    }
  }
});

Run the tests from the command line and you should once again have a passing suite of tests.


Conclusion

So that wraps up our sample application. Feel free to ask any questions down in the comments.

Bonus: But I’m Already Using Grunt…

If you prefer to use Grunt instead of the karma-ember-preprocessor, simply remove the plugins and preprocessors configuration. Also remove templates/*.handlebars from the files section as Karma won’t need to precompile the templates. Here is a simplified karma.conf.js that works when using grunt to precompile the handlebars templates.

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
 
    files: [
      "lib/deps.min.js", //built by your grunt task
      "tests/*.js"
    ],
    
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
     
    frameworks: ["qunit"]
  });
};

And that’s it!

January 14 2014

18:31

JavaScript Animation That Works (Part 1 of 4)

HTML is the language the web is built in, and it is kind of a strange beast. Although it was originally intended as a way to easily share academic information across the Internet, it has been slowly transformed to accommodate the media-rich environment we know and love, sort of like a cherished lawn mower that has been fitted with jet engines.

Because of the haphazard nature of HTML (and JavaScript, the programming language that manipulates elements in HTML and makes them interactive), sometimes we have to think outside the box a bit. In this tutorial series, I will show you how to do cross-browser animation using a method called spriting, using only JavaScript and HTML (and a bit of CSS too). And, because this is a learning opportunity (and I am stubborn and like to do things the hard way), we will be doing it all without any external libraries (like jQuery).

This will be a four-part series – I will explain the spriting itself in part one (this article) with some basic JavaScript, but then in later installments we will move into some intermediate techniques like encapsulation, event handling, and touch screen support.

So let’s get started!


What Is Animation?

Animation is based on a phenomenon called persistence of vision, which basically says that if your brain sees lots of similar still images fast enough, then it will appear as though it is one moving image. Every kind of film or video uses this basic technique – many, slightly different frames are shown in quick succession to make something appear moving. Film typically runs at 24 frames per second, while broadcast television in North America is shown at 29.97 frames per second. So, in other words, what we want to do is create something that shows similar frames really quickly (several times a second).
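Turning a frame rate into the millisecond delay you would hand to setTimeout is a one-line calculation (the msPerFrame helper name here is my own, for illustration):

```javascript
// Convert a frame rate into the millisecond delay between frames.
function msPerFrame(fps) {
  return 1000 / fps;
}

msPerFrame(24); // film: about 41.7ms between frames
msPerFrame(5);  // a 5-frames-per-second animation needs a 200ms delay
```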


The Difficulties on the Web

There are two main reasons animation is difficult to use on the web:

  1. The first is that different web browsers have different ways they interpret HTML and JavaScript, so what works on one device often doesn’t work on another. Flash works great on most browsers, but support is starting to drop for it and iOS devices won’t allow it at all. Canvas has a lot of potential, but Internet Explorer 8 doesn’t support it. Same goes with Adobe Edge Animate. GIFs work on everything, but you can’t control the animation or make it interactive.
  2. And for the second, every time an image is served on a web page, a separate request is made between the browser and server. These requests can add up, even over a lightning fast Internet connection, making having several frames each second unmanageable.

The Solution: Spriting

One way around these problems is to make a sprite sheet. For an element like a div, we can set a background image that is larger than the element itself, and we can set the background position to determine exactly which part of the larger image shows. A sprite sheet is one large image made up of several smaller images that we can shift around, so it can take the place of many individual images. Take a look at the example below, using J, the mascot of my company Joust Multimedia:

javascript-spriting-j

Although there are ten different images of J, they are placed together on one larger PNG file (we are using PNGs because they can show transparency). If we have a div that is only the size of one of the images, and we set this PNG as the background, it will look like a single image.

See the Pen hazdm by Steven Riche (@StevenRiche) on CodePen.

Although this seems like a lot of trouble to go through to show an image, this method nicely fixes the two issues we had before. With very little JavaScript (one line!), you can change the background position of a div, and it works on everything. Also, since all of these frames are on the same image, it will only take one request to load that image in the page. So, once the page loads, it can switch between sprites with no problem at all.

So how do we set this up to animate easily then? The first step is to create the sprite sheet. You will want to know what the final dimensions of your image should be, and space the sprites accordingly in a grid. For instance, my J image is going to be 40px wide by 50px tall, so I lined up my sprites exactly 40px apart horizontally and exactly 50px apart vertically. It will probably be easiest if you set your starting sprite in the upper left corner.
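To make the grid arithmetic concrete, here is a small sketch of how a frame index could map to a background position, assuming the 40px-by-50px grid layout described above (the helper name is hypothetical, not part of the tutorial's code):

```javascript
// Map a frame index to a CSS background-position string for a sprite
// sheet laid out in a grid. The background moves in the opposite
// direction of the sprite we want to reveal, hence the negative values.
function spritePosition(frame, columns, width, height) {
  var col = frame % columns;
  var row = Math.floor(frame / columns);
  return (-col * width) + 'px ' + (-row * height) + 'px';
}

console.log(spritePosition(0, 5, 40, 50)); // '0px 0px'
console.log(spritePosition(1, 5, 40, 50)); // '-40px 0px'
console.log(spritePosition(6, 5, 40, 50)); // '-40px -50px'
```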

Then we will set up a div with a bit of CSS to make sure that everything appears properly.

<div id="stage">
	<div id="j" class="character"></div>
</div>

And here is our CSS to make sure the sprite is showing correctly:

.character {
	/*
	 * Very important that we set the height and width of 
	 * our characters to the height and width of the sprites
	 */
	height:50px;
	width:40px;
	/*
	 * We need to position them absolutely so we can have full  
	 * control over their position within the stage
	 */
	position:absolute;
	left:100px;
	top:120px;
}

#j {
	/*
	 * And now we set the background image for the character div 
	 * to be the first sprite (in the upper left corner)
	 */
	background-image: url('j.png');
	background-repeat:no-repeat;
	background-position:0 0;
}

Notice the following things:

  • We specify the width and height of the div to the size of our sprite
  • We specify the background-repeat to 'no-repeat'
  • We specify the background-position to '0 0' – this will show the sprite in the top left corner

Now, it will just take a single line of JavaScript to change the background position to show the next sprite.

document.getElementById('j').style.backgroundPosition = '-40px 0px';

Here, we are selecting the element (with id='j'), and setting the style attribute 'backgroundPosition'. Notice that it is spelled 'backgroundPosition' in JavaScript, and not like 'background-position' in CSS. Also notice that in JavaScript, the 'px' is required for both the x and y amount – we cannot just pass it numbers. And because we are moving the background image, we have to move it in the opposite direction from what you might expect – to move to the sprite on the right, we have to have the image move 40px to the left.

Now, if we just have something simple to execute this code (like a button), we can see the frames changing in action.

See the Pen DIsgk by Steven Riche. Check out the source code for this page. It has all of the examples here with thorough comments. And here is a link to the sprite sheet I am using.

Next Up

This is still not really animation though – in part two of this series we will actually animate some running and jumping, by creating loops with the different sprites.

By part four, we will be creating mouseovers for a bit of robot action.

See the Pen ByGtv by Steven Riche.


In Conclusion & Drawbacks

Although this can be a great method for animating on the web, there are a few drawbacks. First, it can require you to create each individual frame of animation, which can be time-consuming. Second, browsers don’t have the most accurate timers for animation, so if it is critical for your animation to be timed perfectly, then this may not work. Finally, mobile Safari (used on iPhones and iPads) has a “feature” whereby a background image that is either larger than 2MB or larger than 1024 x 1024 x 3 pixels (3,145,728 total pixels) is automatically rescaled, ruining the spriting effect. This means that really large sprites, or animations with a very large number of sprites, are out of the question. But for simple, small animations that you want to be very interactive, this is an easy and great way to get something that works everywhere.

Interesting Side Notes

1: Before sound was introduced with film, there wasn’t really a standard frame rate. The cameras were operated by a hand crank, so if you had a rookie cameraman the frame rate might unintentionally slow down and speed up dramatically. Similarly, less reputable theaters were notorious for telling their projectionists to crank the projector quicker to speed up the show so they could fit in more screenings. This is also why we stereotypically think of pre-sound movies as moving around comically fast – most were filmed around 16-18 fps, so when we play them today at 24 frames per second, they are moving faster than originally intended.

2: Television was originally broadcast at 30 fps in North America, but color television caused a glitch when shown at that speed. Engineers figured out that they could fix it by slowing down the frame rate by 0.1%, which is why it is now set at 29.97 fps. Also, besides all of the wacky technical issues involved with converting a film in 24 fps to display on television at 29.97 fps, showing television at a faster fps has had an interesting effect on the public. Many people watching the test screenings of “The Hobbit” at 48 fps reported that the increased frame rate made the film look “cheaper”, even though it was much higher quality than a typical film, just because they had grown to associate faster frame rates with watching something on television.

January 07 2014

20:39

Ember Components: A Deep Dive

Ember.js is a JavaScript MVC framework that allows developers to create ambitious web applications. Although pure MVC allows a developer to separate concerns, it does not provide all the tools, and your application will need other constructs. Today, I’m going to talk about one of those constructs. Ember components are essentially sandboxed, re-usable chunks of UI. If you are not familiar with Ember, please check out Getting Started With Ember.js or the Let’s Learn Ember Course. In this tutorial we will cover the Web Components specification, write a component in Ember, talk about composition, explain the difference between an Ember view and an Ember component, and integrate plugins with Ember components.


A Word About Web Components

Ember components are based on the W3C Web Components specification. The specification comprises four smaller specifications: templates, decorators, shadow DOM, and custom elements. Of these four concepts, only three have hardened specifications, decorators being the exception. By having the specifications in place, framework developers have been able to polyfill these new APIs prior to them being implemented by browser vendors.

There are several important concepts to grasp when talking about components:

  • Components know nothing about the outside world unless explicitly passed in
  • Components should have a well defined interface to the outside world
  • Components cannot manipulate any JavaScript outside of the component
  • Components can broadcast events
  • Custom elements must be namespaced with a hyphen
  • Outside JavaScript cannot manipulate components

Web Components provide true encapsulation for UI widgets. Below is a diagram of how a component works at the most basic level.

Web Component Diagram

While Ember has successfully polyfilled much of the specification, frameworks like AngularJS, Dart, Polymer, and Xtags have similar solutions. The only caveat here is that Ember and Angular currently don’t scope styles to the component. Over time these polyfill solutions will fade away, and frameworks will adopt the browser vendors’ implementations. This is a fundamentally different approach to development, as we can take advantage of future specifications without tying ourselves to experimental features in browsers.


The Most Basic Ember Component

Now, with our knowledge of Web Components, let’s implement the very basic my-name component from above, but in Ember. Let’s begin by downloading the Ember Starter Kit from the Ember website. At the time of this tutorial the version of Ember is 1.3.0. Once you have it downloaded, open up the files in your favorite editor, delete all of the templates in index.html (denoted with data-template-name) and everything in app.js.

The first thing we are going to want to do is create our component template. For the sake of this tutorial we are going to use inline templates. You do this by writing the following in your index.html file. We also need to create a new Ember application in our JavaScript.


<script type="text/x-handlebars">
   {{my-name}}
</script>

<script type="text/x-handlebars" data-template-name="components/my-name">
{{! My component template will go here }}
</script>


var App = Ember.Application.create();

You’ll notice that the data-template-name has a path name instead of just a plain string. The reason why we prefix our component name with "components/" is to tell Ember we are dealing with a component template and not a regular application template. You’ll also notice that the component name has the hyphen in it. This is the namespacing that I had mentioned in the Web Components specification. Namespacing is done so that we do not have name collisions with existing tags.

If we open the browser we shouldn’t see anything different. The reason for this is that we have yet to place anything in our my-name template. Let’s take care of that.


...
<script type="text/x-handlebars" data-template-name="components/my-name">
   Hi, my name is {{name}}.
</script>
Ember Name Component

Now in the browser you should see something like the image above. We still aren’t finished; as you can see, we aren’t actually printing out a name. As I mentioned in the first section, components should expose a well defined interface to the outside world. In this case we are concerned with the name. So let’s pass in the name by placing a name attribute on the my-name component.

...
<script type="text/x-handlebars">
   {{my-name name="Chad"}}
</script>

When you refresh the page you should see “Hi, my name is Chad”. All of this with writing one line of JavaScript. Now that we have a feel for writing a basic component, let’s talk about the difference between Ember components and Ember views.


Ember Components vs. Ember Views

Ember is an MVC framework, so some may be thinking, “Why not just use a view for this?” This is a legitimate question. Components actually are a subclass of Ember.View; the biggest difference is that views are generally found in the context of a controller. Take the example below.


App.IndexController = Ember.Controller.extend({
  myState: 'on'
});

App.IndexView = Ember.View.extend({

  click: function () {
    var controller = this.get( 'controller' ),
    myState = controller.get( 'myState' );

    console.log( controller ) // The controller instance
    console.log( myState ) // The string "on"
  }

});

<script type="text/x-handlebars" data-template-name="index">
  {{myState}}
</script>

Views normally sit behind a template and turn raw input ( click, mouseEnter, mouseMove, etc ) into a semantic action ( openMenu, editName, hideModal, etc ) in a controller or route. Another thing to point out is that templates need a context as well. So what ends up happening is that Ember infers the context through naming conventions and the URL. See the diagram below.

Ember Hierarchy

As you can see, there is a level of hierarchy based on the URL and each level of that hierarchy has its own context which is derived through naming conventions.

Ember components do not have a context; they only know about the interface that they define. This allows a component to be rendered into any context, making it decoupled and reusable. If the component exposes an interface, it’s the job of the context to fulfill that interface. In other words, if you want the component to render properly, you must supply it with the data it expects. It’s important to note that these passed-in values can be either strings or bound properties.

Ember Hierarchy With Components

When bound properties are manipulated inside of a component those changes are still propagated wherever they are referenced in your application. This makes components extremely powerful. Now that we have a good understanding of how components are different from views, let’s look at a more complex example that illustrates how a developer can compose multiple components.


Composition of Components

One really nice thing about Ember is that it’s built on concepts of UI hierarchy, and this is very apparent with the composition of components. Below is an example of what we are going to make. It’s a simple group chat UI. Obviously, I’m not going to write a whole chat service to power the UI, but we can look at how to break the UI down into re-usable and composable components.

Ember Group Chat Component

Let’s first look at how we are going to break up the UI into smaller and more digestible parts. Basically, anything that we can draw a box around is a component, with the exception of the text and button inputs at the bottom of the UI. Our goal is to be able to just configure the component at the outer layer and everything should just work.

Group Chat Breakdown

Let’s start by creating a new HTML file called chat.html and setting up all of the dependencies for Ember. Next, create all of the templates.

<script type="text/x-handlebars" data-template-name="application">
  {{outlet}}
</script>

<script type="text/x-handlebars" data-template-name="index">
  {{ group-chat messages=model action="sendMessage" }}
</script>

<script type="text/x-handlebars" data-template-name="components/group-chat">
  <div class="chat-component">
    <ul class="conversation">
      {{#each message in messages}}
        <li class="txt">{{chat-message username=message.twitterUserName message=message.text time=message.timeStamp }}</li>
      {{/each}}
    </ul>

    <form class="new-message" {{action submit on="submit"}}>
      {{input type="text" placeholder="Send new message" value=message class="txt-field"}}
      {{input type="submit" class="send-btn" value="Send"}}
    </form>
  </div>
</script>

<script type="text/x-handlebars" data-template-name="components/chat-message">
  <div class="message media">
    <div class="img">
      {{user-avatar username=username service="twitter"}}
    </div>
    <div class="bd">
      {{user-message message=message}}
      {{time-stamp time=time}}
    </div>
  </div>
</script>

<script type="text/x-handlebars" data-template-name="components/user-avatar">
  <img {{bind-attr src=avatarUrl alt=username}} class="avatar">
</script>

<script type="text/x-handlebars" data-template-name="components/user-message">
  <div class="user-message">{{message}}</div>
</script>

<script type="text/x-handlebars" data-template-name="components/time-stamp">
  <div class="time-stamp">
    <span class="clock" role="presentation"></span>
    <span class="time">{{format-date time}}</span>
  </div>
</script>

You will see that components can be nested inside of other components. This makes components just like legos that we can assemble any way we want. We just need to write to the component’s interface.

If we now go look in the browser we shouldn’t see much because we don’t have any data flowing into the component. You will also notice that even though there is no data, the components do not throw an error. The only thing that actually gets rendered here is the input area and the send button. This is because they aren’t dependent on what is passed in.

Group Chat Without Data

Taking a little bit closer look at the templates you’ll notice that we assigned a couple things on the group-chat component.

<script type="text/x-handlebars" data-template-name="index">
  {{ group-chat messages=model action="sendMessage" }}
</script>

In this case, we are passing the model from the context of the IndexRoute as “messages” and we have set the string of “sendMessage” as the action on the component. The action will be used to broadcast out when the user wants to send a new message. We will cover this later in the tutorial. The other thing that you will notice is that we are setting up strict interfaces to the nested components all of which are using the data passed in from the group-chat interface.

...
<ul class="conversation">
  {{#each message in messages}}
    <li class="txt">{{chat-message username=message.twitterUserName message=message.text time=message.timeStamp }}</li>
  {{/each}}
</ul>
...

As mentioned before, you can pass strings or bound properties into components. The rule of thumb: use quotes when passing a string, and no quotes when passing a bound property. Now that we have our templates in place, let’s throw some mock data at it.

App = Ember.Application.create();

App.IndexRoute = Ember.Route.extend({
  model: function() {
    return [
      {
        id: 1,
        firstName: 'Tom',
        lastName: 'Dale',
        twitterUserName: 'tomdale',
        text: 'I think we should back old Tomster. He was awesome.',
        timeStamp: Date.now() - 400000,
      },
      {
        id: 2,
        firstName: 'Yehuda',
        lastName: 'Katz',
        twitterUserName: 'wycats',
        text: 'That\'s a good idea.',
        timeStamp: Date.now() - 300000,
      }
    ];
  }
});

If we go look at this in the browser now, we should see a bit of progress. But there is still some work to be done: mainly getting the images to show up, formatting the date, and being able to send a new message. Let’s take care of that.

Group Chat Partially Filled With Data

So with our user-avatar component, we want to use a service called Avatars.io to fetch a user’s Twitter avatar based on their Twitter username. Let’s look at how the user-avatar component is used in the template.

<script type="text/x-handlebars" data-template-name="components/chat-message">
...
{{ user-avatar username=username service="twitter" }}
...
</script>

<script type="text/x-handlebars" data-template-name="components/user-avatar">
  <img {{bind-attr src=avatarUrl alt=username}} class="avatar">
</script>

It’s a pretty simple component, but you will notice that we have a bound property called avatarUrl. We will need to create this property within our JavaScript for this component. Another thing you will note is that we are specifying the service we want to fetch the avatar from. Avatars.io allows you to fetch social avatars from Twitter, Facebook, and Instagram, so we can make this component extremely flexible. Let’s write the component.

App.UserAvatarComponent = Ember.Component.extend({
  avatarUrl: function () {
    var username = this.get( 'username' ),
          service = this.get( 'service' ),
          availableServices = [ 'twitter', 'facebook', 'instagram' ];

    if (  availableServices.indexOf( service ) > -1 ) {
       return 'http://avatars.io/' + service + '/' + username;
    }
    return 'images/cat.png';

  }.property( 'username' , 'service' )

});

As you can see, to create a new component we just follow the naming convention of NAMEOFCOMPONENTComponent and extend Ember.Component. Now if we go back to the browser, we should see our avatars.

Group Chat Without Formatted Date

To take care of the date formatting let’s use moment.js and write a Handlebars helper to format the date for us.

Ember.Handlebars.helper('format-date', function( date ) {
  return moment( date ).fromNow();
});
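For illustration, here is a rough idea of what fromNow() gives us. This stand-in is only a sketch, not moment’s actual implementation, and the function name is hypothetical:

```javascript
// Very rough relative-time formatter in the spirit of moment's
// fromNow(). Takes two timestamps in milliseconds.
function roughFromNow(timestamp, now) {
  var minutes = Math.floor((now - timestamp) / 60000);
  if (minutes < 1) { return 'just now'; }
  if (minutes < 60) { return minutes + ' minutes ago'; }
  return Math.floor(minutes / 60) + ' hours ago';
}

console.log(roughFromNow(Date.now() - 300000, Date.now())); // '5 minutes ago'
```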

Now all we need to do is apply the helper to our time stamp component.

<script type="text/x-handlebars" data-template-name="components/time-stamp">
  <div class="time-stamp">
    <span class="clock" role="presentation"></span>
    <span class="time">{{format-date time}}</span>
  </div>
</script>

We should now have a component that formats dates instead of the Unix epoch timestamps.

Group Chat With Dates

We can do one better though. These timestamps should automatically update over the course of time. So let’s make our time-stamp component do just that.

App.TimeStampComponent = Ember.Component.extend({

  startTimer: function () {

    var self = this, currentTime;
    this._timer = setInterval( function () {
      currentTime = self.get( 'time' );
      self.set( 'time', ( currentTime - 60000  ) );
    }, 60000 );

  }.on( 'didInsertElement' ),

  killTimer: function () {
    clearInterval( this._timer );
  }.on( 'willDestroyElement' )

});

A couple of points to note here. The on() declarative event handler syntax was introduced in Ember prior to the 1.0 release. It does exactly what you think it does: when the time-stamp component is inserted into the DOM, startTimer is called; when the element is about to be destroyed and cleaned up, the killTimer method is called. The rest of the component just tells the time to update every minute.

The next thing we need to do is set up the action so that when the user hits submit, a new message will be created. Our component shouldn’t care how the data is created; it should just broadcast out that the user has tried to send a message. Our IndexRoute will be responsible for taking this action and turning it into something meaningful.

App.GroupChatComponent = Ember.Component.extend({
  message: '',
  actions: {
    submit: function () {
      var message = this.get( 'message' ).trim(),
          conversation = this.$( 'ul' )[ 0 ];

      // Fetches the value of 'action'
      // and sends the action with the message
      this.sendAction( 'action', message );

      // When the Ember run loop is done
      // scroll to the bottom
      Ember.run.next( function () {
        conversation.scrollTop = conversation.scrollHeight;
      });

      // Reset the text message field
      this.set( 'message', '' );
    }
  }
});
<form class="new-message" {{action submit on="submit"}}>
  {{input type="text" placeholder="Send new message" value=message class="txt-field"}}
  {{input type="submit" class="send-btn" value="Send"}}
</form>

Since the group-chat component owns the input and send button, we need to react to the user clicking send at this level of abstraction. When the user clicks the submit button, it executes the submit action in our component implementation. Within the submit action handler we get the value of message, which is set by the text input. We then send the action along with the message. Finally, we reset the message to a blank string.

The other odd thing you see here is the Ember.run.next method being called. In Ember there is a queue, normally referred to as the run loop, that gets flushed when data is changed. This is done to coalesce changes and apply them in one pass. So in our case we are saying: once sending the message has finished making its manipulations, call our callback. We need to scroll our ul to the bottom so the user can see the new message after any manipulations. For more on the run loop I suggest reading Alex Matchneer’s article “Everything You Never Wanted to Know About the Ember Run Loop”.

If we go over to the browser and click the send button, we get a really nice error from Ember saying “Uncaught Error: Nothing handled the event ‘sendMessage’.” This is what we expect, because we haven’t told our application how to react to these types of events. Let’s fix that.

App.IndexRoute = Ember.Route.extend({
  /* … */
  actions: {
    sendMessage: function ( message ) {
      if ( message !== '' ) {
        console.log( message );
      }
    }
  }
});

Now if we go back to the browser type something into the message input and hit send, we should see the message in the console. So at this point our component is loosely coupled and talking to the rest our application. Let’s do something more interesting with this. First let’s create a new Ember.Object to work as a model for a new message.

App.Message = Ember.Object.extend({
  id: 3,
  firstName: 'Chad',
  lastName: 'Hietala',
  twitterUserName: 'chadhietala',
  text: null,
  timeStamp: null
});

So when the sendMessage action occurs we are going to want to populate the text and timeStamp field of our Message model, create a new instance of it, and then push that instance into the existing collection of messages.

App.IndexRoute = Ember.Route.extend({
  /* … */
  actions: {
    sendMessage: function ( message ) {
      var messages, newMessage;

      if ( message !== '' ) {

        messages = this.modelFor( 'index' );
        newMessage = App.Message.create({
          text: message,
          timeStamp: Date.now()
        });

        messages.pushObject( newMessage );
      }
    }
  }
});

When we go back to the browser, we should now be able to create new messages.

Group Chat Creating Messages

We now have several different re-usable chunks of UI that we can place anywhere. For instance, if you need to use an avatar somewhere else in your Ember application, you can just reuse the user-avatar component.

<script type="text/x-handlebars" data-template-name="index">
...
{{user-avatar username="horse_js" service="twitter" }}
{{user-avatar username="detroitlionsnfl" service="instagram" }}
{{user-avatar username="KarlTheFog" service="twitter" }}
</script>
User Avatars From Twitter and Instagram

Wrapping jQuery Plugins

So at this point you’re probably wondering, “What if I want to use a jQuery plugin in my component?” No problem. For brevity, let’s modify our user-avatar component to show a tooltip when we hover over the avatar. I’ve chosen to use the jQuery plugin tooltipster to handle the tooltip. Let’s modify the existing code to utilize tooltipster.

First, let’s add the correct files to our chat.html and modify the existing user-avatar component.

...
<link href="css/tooltipster.css" rel="stylesheet" />

...
<script type="text/javascript" src="js/libs/jquery.tooltipster.min.js"></script>
<script type="text/javascript" src="js/app.js"></script>
...

And then our JavaScript:

App.UserAvatarComponent = Ember.Component.extend({
  /*…*/
  setupTooltip: function () {
    this.$( '.avatar' ).tooltipster({
      animation: 'fade'
    });
  }.on( 'didInsertElement' ),

  destroyTooltip: function () {
    this.$( '.avatar' ).tooltipster( 'destroy' );
  }.on( 'willDestroyElement' )

});

So once again we see the declarative event listener syntax, but for the first time we see this.$. If you are familiar with jQuery, you would expect this to query all the elements with the class ‘avatar’. This isn’t the case in Ember, because context is applied: in our case, we are only looking for elements with the class ‘avatar’ inside the user-avatar component. It’s comparable to jQuery’s find method, e.g. $( ‘.user-avatar’ ).find( ‘.avatar’ ). On destruction of the element we should unbind the hover event on the avatar and clean up any functionality; this is done by passing ‘destroy’ to tooltipster. If we go to the browser, refresh, and hover over an image, we should see the user’s username.

Avatar Tooltips

Conclusion

In this tutorial we took a deep dive into Ember components and showed how you can take re-usable chunks of UI to generate larger composites and integrate jQuery plugins. We looked at how components are different from views in Ember. We also covered the idea of interface-based programming when it comes to components. Hopefully I was able to shine some light on not only Ember Components but Web Components and where the Web is headed.

January 03 2014

19:46

Testing in Node.js

A test-driven development cycle simplifies the thought process of writing code, and makes it easier and quicker in the long run. But just writing tests is not enough by itself; knowing the kinds of tests to write, and how to structure code to conform to this pattern, is what it's all about. In this article we will take a look at building a small app in Node.js following a TDD pattern.

Besides the simple unit tests which we are all familiar with, Node.js's async code adds an extra dimension: we don't always know the order in which functions will run, and we may be trying to test something in a callback or checking how an async function is working.

In this article we will be building a Node app which can search for files that match a given query. I know there are already things for this (ack) but for the sake of demonstrating TDD I think it could be a well rounded project.

The first step is obviously to write some tests, but even before that, we need to choose a testing framework. You can use vanilla Node, as there is an assert library built-in, but it's not much in terms of a test runner, and is pretty much the bare essentials.

Another option and probably my favorite for general use is Jasmine. It's pretty self-contained, you don't have any other dependencies to add to your scripts and the syntax is very clean and easy to read. The only reason I am not going to use this today, is because I think Jack Franklin did an excellent job covering this in his recent Tuts+ series here, and it's good to know your options so you can pick the best tool for your situation.


What We’ll Be Building

In this article we will be using the flexible 'Mocha' test runner along with the Chai assertion library.

Unlike Jasmine which is more like an entire test suite in one package, Mocha only takes care of the overall structure but has nothing to do with the actual assertions. This allows you to keep a consistent look and feel when running your tests, but also allows you to run whichever assertion library best fits your situation.

So for example, if you were going to use the vanilla 'assert' library, you could pair it with Mocha to add some structure to your tests.

Chai is a fairly popular option, and is also all about options and modularity. Even without any plugins, just using the default API you have three different syntaxes you can use depending on if you would like to use a more classic TDD style or a more verbose BDD syntax.

So now that we know what we are going to use, let's get into the installation.


The Setup

To get started, let's install Mocha globally by running:

npm install -g mocha

When that completes create a new folder for our project and run the following inside it:

npm install chai

This will install a local copy of Chai for our project. Next, create a folder named test inside our project's directory, as this is the default location Mocha will look for tests.

That's pretty much it for setup, the next step is to talk about how to structure your apps when following a test driven development process.


Structuring Your App

It's important to know, when following a TDD approach, what needs to have tests and what doesn't. A rule of thumb is to not write tests for other people's already-tested code. What I mean is the following: let's say your code opens a file. You don't need to test the individual fs function; it's part of the platform and is presumably already well tested. The same goes for third-party libraries. You shouldn't structure your app around functions that primarily call these kinds of functions, because you don't really write tests for them, and that leaves gaps in the TDD cycle.

Now of course, with every programming style there are a lot of different opinions, and people will have different views on how to do TDD. But the approach I use is this: you create individual components to use in your app, each of which solves a unique functional problem. These components are built using TDD, ensuring that they work as expected and that you won't break their API. Then you write your main script, which is essentially all glue code and, in certain situations, doesn't need to be tested (or can't be).

This also means that most of your components can be reused in the future as they do not really have much to do, directly, with the main script.

Following what I just said, it's common practice to create a folder named 'lib' where you put all the individual components. So up to this point you should have Mocha and Chai installed, and then a project directory with two folders: 'lib' and 'test'.


Getting Started With TDD

Just in case you are new to TDD I thought it would be a good idea to quickly cover the process. The basic rule is that you can't write any code unless the test runner tells you to.

Essentially, you’re writing what your code is supposed to do before actually doing it. You have a really focused goal while coding and you never compromise your idea by getting side-tracked or thinking too far ahead. Besides that, since all of your code will have a test affiliated with it you can be certain you will never break your app in the future.

A test, in reality, is just a declaration of what a function is expected to do when run. You then run your test runner, which will obviously fail (since you haven't written the code yet), and then you write the minimum amount of code needed to pass the failing test. It's important never to skip this step, because sometimes a test will pass even before you add any code, due to other code you have in the same class or function. When this happens, either you wrote more code than you were supposed to for a different test, or it's just a bad test (usually not specific enough).

Again according to our rule above, if the test passes right away you can't write any code, because it didn't tell you to. By continuously writing tests and then implementing the features you construct solid modules that you can rely on.

Once you’re finished implementing and testing your component, you can then go back and refactor the code to optimize it and clean it up, making sure the refactoring doesn't fail any of the tests you have in place and, more importantly, doesn't add any untested features.

Every testing library will have its own syntax, but they usually follow the same pattern of making assertions and then checking if they pass. Since we are using Mocha and Chai let's take a look at both their syntaxes starting with Chai.


Mocha & Chai

I will be using the 'Expect' BDD syntax, because as I mentioned Chai comes with a few options out of the box. The way this syntax works is you start by calling the expect function, passing it the object you want to make an assertion on, and then you chain it with a specific test. An example of what I mean could be as follows:

expect(4+5).equal(9);

That's the basic syntax: we are saying expect the addition of 4 and 5 to equal 9. Now, this isn't a great test, because 4 and 5 will be added by Node.js before the function is even called, so we are essentially testing my math skills, but I hope you get the general idea. The other thing you should note is that this syntax isn't very readable in terms of the flow of a normal English sentence. To address this, Chai provides the following chain getters, which don't do anything on their own but can be added to make an assertion more verbose and readable. The chain getters are as follows:

  • to
  • be
  • been
  • is
  • that
  • and
  • have
  • with
  • at
  • of
  • same
  • a
  • an

Using the above, we can rewrite our previous test to something like this:

expect(4+5).to.equal(9);

I really like the feel of the entire library, which you can check out in their API. Simple things like negating the operation is as easy as writing .not before the test:

expect(4+5).to.not.equal(10);

So even if you have never used the library before, it won't be hard to figure out what a test is trying to do.
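Under the hood, these no-op chain words are easy to picture: each one is just a getter that returns the assertion object itself, so the chain reads nicely without changing anything. Here is a minimal sketch of the idea (my own illustration, not Chai's actual internals):

```javascript
// Minimal sketch of chainable no-op getters, in the spirit of Chai's
// expect syntax. This is an illustration, not Chai's implementation.
function Assertion(value) {
    this.value = value;
}

// Each chain word is a getter that simply returns the assertion object.
["to", "be", "been", "is", "have", "a", "an"].forEach(function(word) {
    Object.defineProperty(Assertion.prototype, word, {
        get: function() { return this; }
    });
});

Assertion.prototype.equal = function(expected) {
    if (this.value !== expected) {
        throw new Error("expected " + this.value + " to equal " + expected);
    }
    return this;
};

function expect(value) {
    return new Assertion(value);
}

expect(4 + 5).to.be.equal(9); // passes silently
```

This is also why the chain words cost nothing at assertion time; they only exist to make the sentence flow.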

The last thing I would like to look over before we get into our first test is how we structure our code in Mocha.

Mocha

Mocha is the test runner, so it doesn't care too much about the actual tests; what it cares about is their structure, because that is how it knows what is failing and how to lay out the results. The way you build it up is by creating multiple describe blocks that outline the different components of your library, and then adding it blocks to specify individual tests.

For a quick example, let's say we had a JSON class and that class had a function to parse JSON and we wanted to make sure the parse function can detect a badly formatted JSON string, we could structure this like so:

describe("JSON", function() {
   describe(".parse()", function() {
       it("should detect malformed JSON strings", function(){
           //Test Goes Here
       });
   });
});

It's not complicated, and it's about 80% personal preference, but if you keep this kind of format, the test results should come out in a very readable format.

We are now ready to write our first library, let's begin with a simple synchronous module, to get ourselves better acquainted with the system. Our app will need to be able to accept command line options for setting things like how many levels of folders our app should search through and the query itself.

To take care of all this, we will create a module which accepts the command's string and parses all the included options along with their values.

The Tag Module

This is a great example of a module you can reuse in all your command line apps, as this issue comes up a lot. This will be a simplified version of an actual package I have on npm called ClTags. So to get started, create a file named tags.js inside of the lib folder, and then another file named tagsSpec.js inside of the test folder.

We need to pull in the Chai expect function, as that will be the assertion syntax we will be using and we need to pull in the actual tags file so we can test it. Altogether with some initial setup it should look something like this:

var expect = require("chai").expect;
var tags = require("../lib/tags.js");

describe("Tags", function(){

});

If you run the 'mocha' command now from the root of our project, everything should pass as expected. Now let's think about what our module will do: we want to pass it the command arguments array that was used to run the app, and we want it to build an object with all the tags. It would also be nice if we could pass it a defaults object, so that if nothing gets overridden, we will have some settings already stored.

When dealing with tags, a lot of apps also provide shortcut options which are just one character. So, say we wanted to set the depth of our search, we could allow the user to specify either something like --depth=2 or something like -d=2, which should have the same effect.

So let's begin with the long-formed tags (for example, '--depth=2'). Here is the first test:

describe("Tags", function(){
   describe("#parse()", function(){
       it("should parse long formed tags", function(){
           var args = ["--depth=4", "--hello=world"];
           var results = tags.parse(args);

           expect(results).to.have.a.property("depth", 4);
           expect(results).to.have.a.property("hello", "world");
       });
   });
});

We added one method to our test suite called parse and we added a test for long-formed tags. Inside this test I created an example command and added two assertions for the two properties it should pick up.

Running Mocha now, you should get one error, namely that tags doesn't have a parse function. So to fix this error let's add a parse function to the tags module. A fairly typical way to create a node module is like so:

exports = module.exports = {};

exports.parse = function() {

}

The error said we needed a parse method, so we created it. We didn't add any other code inside, because it didn't yet tell us to. By sticking with the bare minimum, you are assured that you won't write more than you are supposed to and end up with untested code.

Now let's run Mocha again, this time we should be getting an error telling us that it can't read a property named depth from an undefined variable. That is because currently our parse function isn't returning anything, so let's add some code so that it will return an object:

exports.parse = function() {
   var options = {};

   return options;
}

We are slowly moving along. If you run Mocha again, there shouldn't be any exceptions thrown, just a clean error message saying that our empty object has no property called depth.

No 'depth' property

Now we can get into some real code. For our function to parse the tag and add it to our object we need to cycle through the arguments array and remove the double dashes at the start of the key.

exports.parse = function(args) {
   var options = {}
   for (var i in args) { //Cycle through args
       var arg = args[i];
       //Check if Long formed tag
       if (arg.substr(0, 2) === "--") {
           arg = arg.substr(2);
           //Check for equals sign
           if (arg.indexOf("=") !== -1) {
               arg = arg.split("=");
               var key = arg.shift();
               options[key] = arg.join("=");
           }
       }
   }
   return options;
}

This code cycles through the list of arguments, makes sure we are dealing with a long formed tag, and then splits it by the first equals character to create the key and value pair for the options object.
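One subtlety worth spelling out: the value is re-joined with join("=") so that values which themselves contain an equals sign survive intact. Here is a standalone trace of the same steps, using a hypothetical --connection tag just for illustration:

```javascript
// Standalone trace of the key/value split used above, with a hypothetical
// --connection tag whose value itself contains an equals sign.
var arg = "--connection=host=localhost";
arg = arg.substr(2);          // "connection=host=localhost"
var parts = arg.split("=");   // ["connection", "host", "localhost"]
var key = parts.shift();      // "connection"
var value = parts.join("=");  // "host=localhost" - the extra "=" survives

console.log(key + " -> " + value);
```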

Now this almost solves our issue, but if we run Mocha again, you will see that we now have a key for depth, but it's set to a string instead of a number. Numbers are a bit easier to work with later on in our app, so the next piece of code we need to add is to convert values to numbers whenever possible. This can be achieved with some RegEx and the parseInt function as follows:

        if (arg.indexOf("=") !== -1) {
            arg = arg.split("=");
            var key = arg.shift();
            var value = arg.join("=");

            if (/^[0-9]+$/.test(value)) {
                value = parseInt(value, 10);
            }
            options[key] = value;
        }

Running Mocha now, you should get a pass with one test. The number conversion should arguably be in its own test, or at least mentioned in the test's declaration, so that you don't remove the number conversion assertion by mistake; so either append "and convert numbers" to the it declaration for this test, or separate it into a new it block. It really depends on whether you consider this "obvious default behavior" or a separate feature.
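The conversion logic itself is easy to verify in isolation. The regex is anchored at both ends, so only strings made up entirely of digits are converted (this is the same check used above, extracted into a helper purely for illustration):

```javascript
// The same digits-only check used above, extracted into a helper
// purely for illustration. The regex is anchored at both ends, so
// only strings made up entirely of digits are converted.
function convert(value) {
    if (/^[0-9]+$/.test(value)) {
        value = parseInt(value, 10);
    }
    return value;
}

console.log(convert("42"));   // 42 (a number)
console.log(convert("4.2"));  // "4.2" (left alone - "." is not a digit)
console.log(convert("v2"));   // "v2" (left alone)
```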

First Pass

Now, as I have been trying to stress throughout this whole article, when you see a passing spec, it's time to write more tests. The next thing I wanted to add was the defaults object, so inside the tagsSpec file let's add the following it block right after the previous one:

    it("should parse long formed tags and convert numbers", function(){
        var args = ["--depth=4", "--hello=world"];
        var results = tags.parse(args);

        expect(results).to.have.a.property("depth", 4);
        expect(results).to.have.a.property("hello", "world");
    });
    it("should fallback to defaults", function(){
        var args = ["--depth=4", "--hello=world"];
        var defaults = { depth: 2, foo: "bar" };
        var results = tags.parse(args, defaults);

        var expected = {
            depth: 4,
            foo: "bar",
            hello: "world"
        };

        expect(results).to.deep.equal(expected);
    });

Here we are using a new assertion, deep equal, which is good for checking two objects for equal values. Alternatively, you can use the eql test, which is a shortcut, but I think this is clearer. This test passes two arguments as the command string and two defaults with one overlap, just so we can get a good spread of test cases.

Running Mocha now, you should get a sort of diff, containing the differences between what is expected and what it actually got.

Defaults Diff

Let's now continue back to the tags.js module and add this functionality in. It's a fairly simple fix: we just need to accept the second parameter, and when it's set to an object, we can replace the standard empty object at the start with it:

exports.parse = function(args, defaults) {
   var options = {};
   if (typeof defaults === "object" && !(defaults instanceof Array)) {
       options = defaults
   }

This will bring us back to a green state. The next thing I want to add is the ability to specify a tag without a value and have it act as a boolean. For example, if we just set --searchContents, it will be added to our options object with a value of true.

The test for this would look something like the following:

   it("should accept tags without values as a bool", function(){
       var args = ["--searchContents"];
       var results = tags.parse(args);

       expect(results).to.have.a.property("searchContents", true);
   });

Running this will give us the following error just like before:

Boolean Tags

Inside of the for loop, when we got a match for a long formed tag, we checked if it contained an equals sign; we can quickly write the code for this test by adding an else clause to that if statement and just setting the value to true:

        if (arg.indexOf("=") !== -1) {
             arg = arg.split("=");
             var key = arg.shift();
             var value = arg.join("=");

             if (/^[0-9]+$/.test(value)) {
                 value = parseInt(value, 10);
             }
             options[key] = value;
        } else {
             options[arg] = true;
        }

The next thing I want to add is the substitutions for the short-hand tags. This will be the third parameter to the parse function and will basically be an object with letters and their corresponding replacements. Here is the spec for this addition:

    it("should accept short formed tags", function(){
        var args = ["-sd=4", "-h"];
        var replacements = {
            s: "searchContents",
            d: "depth",
            h: "hello"
        };

        var results = tags.parse(args, {}, replacements);

        var expected = {
            searchContents: true,
            depth: 4,
            hello: true
        };

        expect(results).to.deep.equal(expected);
    });

The trouble with shorthand tags is that they can be combined in a row. What I mean is that, unlike long-formed tags where each one is separate, shorthand tags (since they are each just a letter long) can be combined, so you can call three different ones by typing -vgh. This makes the parsing a bit more difficult, because we still need to allow the equals operator to attach a value to the last tag mentioned, while at the same time registering the other tags. But not to worry, it's nothing that can't be solved with enough popping and shifting.

Here is the entire fix, from the beginning of the parse function:

exports.parse = function(args, defaults, replacements) {
   var options = {};
   if (typeof defaults === "object" && !(defaults instanceof Array)) {
       options = defaults
   }

   if (typeof replacements === "object" && !(replacements instanceof Array)) {
        for (var i in args) {
             var arg = args[i];
             if (arg.charAt(0) === "-" && arg.charAt(1) != "-") {
                  arg = arg.substr(1);
                  if (arg.indexOf("=") !== -1) {
                      arg = arg.split("=");
                      var keys = arg.shift();
                      var value = arg.join("=");

                      arg = keys.split("");
                      var key = arg.pop();
                      if (replacements.hasOwnProperty(key)) {
                           key = replacements[key];
                      }

                      args.push("--" + key + "=" + value);
                  } else {
                      arg = arg.split("");
                  }

                  arg.forEach(function(key){
                      if (replacements.hasOwnProperty(key)) {
                          key = replacements[key];
                      }
                      args.push("--" + key);
                  });
             }
        }
   }

It's a lot of code (in comparison), but all we are really doing is splitting the argument by an equals sign and then splitting that key into individual letters. So, for example, if we passed -gj=asd, we would split the asd into a variable called value, and then split the gj section into individual characters. The last character (j in our example) will become the key for the value (asd), whereas any other letters before it will just be added as regular boolean tags. I didn't want to process these tags immediately, in case we change the implementation later. So what we are doing is converting these shorthand tags into their long-formed versions and letting our script handle them later.
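Here is a simplified trace of that expansion for -gj=asd (replacement lookups omitted; just the splitting and re-pushing):

```javascript
// Simplified trace of expanding "-gj=asd" (replacement lookups omitted).
var args = ["-gj=asd"];

var arg = args[0].substr(1);   // "gj=asd"
var parts = arg.split("=");    // ["gj", "asd"]
var keys = parts.shift();      // "gj"
var value = parts.join("=");   // "asd"

var letters = keys.split("");  // ["g", "j"]
var last = letters.pop();      // the last letter gets the value
args.push("--" + last + "=" + value);

letters.forEach(function(letter) {
    args.push("--" + letter);  // every other letter becomes a boolean tag
});

console.log(args); // ["-gj=asd", "--j=asd", "--g"]
```

The long-formed entries pushed onto args are then picked up by the long-tag branch later in the same loop.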

Running Mocha again will take us back to our illustrious green results of four tests passing for this module.

Now there are a few more things we can add to this tags module to make it closer to the npm package, like the ability to also store plain text arguments for things like commands or the ability to collect all the text at the end, for a query property. But this article is already getting long and I would like to move on to implementing the search functionality.


The Search Module

We just went through creating a module step by step following a TDD approach, and I hope you got a feel for how to write like this. But for the sake of keeping this article moving, for the rest of the article I will speed up the testing process by grouping things together and showing you only the final versions of tests. It's more of a guide to different situations which may come up and how to write tests for them.

So just create a file named search.js inside the lib folder and a searchSpec.js file inside of the test folder.

Next, open the spec file and let's set up our first test, which can be for the function that gets a list of files based on a depth parameter. This is also a great example of a test which requires a bit of external setup to work. When dealing with outside object-like data, or in our case files, you will want to have a predefined setup which you know will work with your tests, but you also don't want to add fake info to your system.

There are basically two options to solve this problem: you can either mock the data, or create real fixtures. As mentioned above, if you are dealing with the language's own commands for loading data, you don't necessarily need to test them; in cases like that, you can simply provide the 'retrieved' data and continue on with your testing, kind of like what we did with the command string in the tags library. But in this case, we are testing the recursive functionality we are adding on top of the language's file-reading capabilities, depending on the specified depth. In cases like these, you do need to write a test, and so we need to create some demo files to test the file reading. The alternative is to stub the fs functions to just run but not do anything, and then count how many times our fake function ran or something like that (check out spies), but for our example, I am just going to create some files.
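For reference, a spy in its simplest form is just a wrapper that records its calls; here is a minimal hand-rolled sketch (real projects would typically reach for a library like Sinon):

```javascript
// Minimal hand-rolled spy: a wrapper that records every call made to it.
// Libraries like Sinon provide a much richer version of this idea.
function spy(fn) {
    function wrapped() {
        wrapped.calls.push(Array.prototype.slice.call(arguments));
        if (fn) return fn.apply(this, arguments);
    }
    wrapped.calls = [];
    return wrapped;
}

// A fake, spied-on stand-in for fs.readdir that "finds" no files.
var fakeReaddir = spy(function(dir, callback) {
    callback(null, []);
});

fakeReaddir("/tmp", function(err, list) {});
console.log(fakeReaddir.calls.length); // 1
console.log(fakeReaddir.calls[0][0]);  // "/tmp"
```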

Mocha provides functions which can run both before and after your tests, so you can perform these kinds of external setup and cleanup around your tests.

For our example, we will create a couple of test files and folders at two different depths so we can test out that functionality:

var expect = require("chai").expect;
var search = require("../lib/search.js");
var fs = require("fs");

describe("Search", function(){
   describe("#scan()", function(){
       before(function() {
           if (!fs.existsSync(".test_files")) {
               fs.mkdirSync(".test_files");
               fs.writeFileSync(".test_files/a", "");
               fs.writeFileSync(".test_files/b", "");
               fs.mkdirSync(".test_files/dir");
               fs.writeFileSync(".test_files/dir/c", "");
               fs.mkdirSync(".test_files/dir2");
               fs.writeFileSync(".test_files/dir2/d", "");
           }
       });

       after(function() {
           fs.unlinkSync(".test_files/dir/c");
           fs.rmdirSync(".test_files/dir");
           fs.unlinkSync(".test_files/dir2/d");
           fs.rmdirSync(".test_files/dir2");
           fs.unlinkSync(".test_files/a");
           fs.unlinkSync(".test_files/b");
           fs.rmdirSync(".test_files");
       });
   });
});

These will be called based on the describe block they are in, and you can even run code before and after each it block using beforeEach or afterEach instead. The functions themselves just use standard node commands to create and remove the files respectively. Next we need to write the actual test. This should go right next to the after function, still inside the describe block:

       it("should retrieve the files from a directory", function(done) {
           search.scan(".test_files", 0, function(err, flist){
               expect(flist).to.deep.equal([
                   ".test_files/a",
                   ".test_files/b",
                   ".test_files/dir/c",
                   ".test_files/dir2/d"
               ]);
               done();
           });
       });

This is our first example of testing an async function, but as you can see it's just as simple as before; all we need to do is use the done function Mocha provides in the it declarations to tell it when we are finished with this test.

Mocha will automatically detect if you specified the done variable in the callback and it will wait for it to be called allowing you to test asynchronous code really easily. Also, it's worth mentioning that this pattern is available throughout Mocha, you can for example, use this in the before or after functions if you needed to setup something asynchronously.
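Mocha performs this detection by inspecting the arity of the function you pass in: a function that declares a done parameter is treated as asynchronous. Here is a tiny sketch of the general idea (not Mocha's actual source, just the mechanism):

```javascript
// Sketch of how a runner can detect an async test by its arity.
// This illustrates the general idea; it is not Mocha's actual source.
function runTest(fn, onComplete) {
    if (fn.length > 0) {
        // the test declared a `done` parameter - wait for it to be called
        fn(onComplete);
    } else {
        // zero-argument test: treat it as synchronous
        fn();
        onComplete();
    }
}

var log = [];
runTest(function() { log.push("sync ran"); },
        function() { log.push("sync complete"); });
runTest(function(done) { log.push("async ran"); done(); },
        function() { log.push("async complete"); });

console.log(log);
// ["sync ran", "sync complete", "async ran", "async complete"]
```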

Next I would like to write a test that makes sure the depth parameter works if set:

    it("should stop at a specified depth", function(done) {
        search.scan(".test_files", 1, function(err, flist) {
            expect(flist).to.deep.equal([
                ".test_files/a",
                ".test_files/b",
            ]);
            done();
        });
    });

Nothing different here, just another plain test. Running this in Mocha you will get an error that the search doesn’t have any methods, basically because we haven't written anything in it. So let's go add an outline with the function:

var fs = require("fs");

exports = module.exports = {};

exports.scan = function(dir, depth, done) {

}

If you now run Mocha again, it will pause waiting for this async function to return, but since we never call the callback, the test will just time out. By default it times out after 2000 milliseconds, but you can adjust this by calling this.timeout(milliseconds) inside a describe or it block to adjust their timeouts respectively.

This scan function is supposed to take a path and a depth, and return a list of all the files it finds. This is actually kind of tricky when you start thinking about it, because we are essentially recursing in two directions inside a single function: we need to recurse through the different folders, and then those folders need to scan themselves and decide whether to go further.

Doing this synchronously is fine, because you can step through it one by one, slowly completing one level or path at a time. The async version gets a bit more complicated, because you can't just use a forEach loop: it won't pause between folders, so they will all essentially run at the same time, each returning different values, and they would sort of overwrite each other.

So to make it work, you need to create a sort of stack where you can asynchronously process one item at a time (or all at once if you use a queue instead) and keep some order that way. It's a very specific algorithm, so I just keep a snippet by Christopher Jeffrey, which you can find on Stack Overflow. It doesn't apply just to loading files; I have used it in a number of applications, basically anywhere you need to process an array of objects one at a time using async functions.
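Stripped of the file-system details, the pattern looks roughly like this (my own generic sketch; processItem stands in for whatever async work each element needs):

```javascript
// Generic sketch of the serial async iteration pattern described above.
// eachSeries processes one item at a time, only moving on when the
// previous item's callback fires; processItem is a stand-in for any
// async per-element task (fs.stat, a network call, etc.).
function eachSeries(items, processItem, done) {
    var i = 0;
    (function next(err) {
        if (err) return done(err);                 // bail on the first error
        var item = items[i++];
        if (item === undefined) return done(null); // every item processed
        processItem(item, next);                   // continue when it finishes
    })();
}

// Usage: double each number, one at a time. A real processItem would be
// asynchronous; calling back synchronously keeps this sketch simple.
var results = [];
eachSeries([1, 2, 3], function(n, callback) {
    results.push(n * 2);
    callback(null);
}, function(err) {
    console.log(results); // [2, 4, 6]
});
```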

We need to alter it a bit, because we would like to have a depth option. The depth option sets how many levels of folders you want to check, with zero meaning recurse indefinitely.

Here is the completed function using the snippet:

exports.scan = function(dir, depth, done) {
   depth--;
   var results = [];
   fs.readdir(dir, function(err, list) {
       if (err) return done(err);
       var i = 0;
       (function next() {
           var file = list[i++];
           if (!file) return done(null, results);
           file = dir + '/' + file;
           fs.stat(file, function(err, stat) {
               if (stat && stat.isDirectory()) {
                   if (depth !== 0) {
                        // depth was already decremented once for this level,
                        // so pass it down unchanged; negative values come from
                        // an initial depth of 0 and mean "no limit"
                        var ndepth = (depth < 0) ? 0 : depth;
                       exports.scan(file, ndepth, function(err, res) {
                           results = results.concat(res);
                           next();
                       });
                   } else {
                       next();
                   }
               } else {
                   results.push(file);
                   next();
               }
           });
       })();
   });
};

Mocha should now be passing both tests. The last function we need to implement is the one which will accept an array of paths and a search keyword and return all matches. Here is the test for it:

    describe("#match()", function(){
        it("should find and return matches based on a query", function(){
            var files = ["hello.txt", "world.js", "another.js"];
            var results = search.match(".js", files);
            expect(results).to.deep.equal(["world.js", "another.js"]);

            results = search.match("hello", files);
            expect(results).to.deep.equal(["hello.txt"]);
        });
    });

And last but not least, let's add the function to search.js:

exports.match = function(query, files){
  var matches = [];
  files.forEach(function(name) {
      if (name.indexOf(query) !== -1) {
          matches.push(name);
      }
  });
  return matches;
}
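One thing worth noting about this implementation: indexOf performs a plain substring match, not an extension or pattern match, so a query like ".js" will also match a file named data.json. A quick standalone check (the match logic reproduced inline, just for illustration):

```javascript
// The same substring-matching logic, reproduced inline for illustration.
function match(query, files) {
    var matches = [];
    files.forEach(function(name) {
        if (name.indexOf(query) !== -1) {
            matches.push(name);
        }
    });
    return matches;
}

console.log(match(".js", ["world.js", "data.json"]));
// ["world.js", "data.json"] - "data.json" matches, since ".js" is a substring
```

If you wanted extension-exact matching you could switch to a regular expression anchored at the end of the name, but substring matching is what the tests above specify.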

Just to make sure, run Mocha again; you should have a total of seven tests, all passing.

All Green!

Putting It All Together

The last step is to really write the glue code which pulls all our modules together; so in the root of our project add a file named app.js or something like that and add the following inside:

#!/usr/bin/env node

var tags = require("./lib/tags.js");
var search = require("./lib/search.js");
var defaults = {
   path: ".",
   query: "",
   depth: 2
}
var replacements = {
   p: "path",
   q: "query",
   d: "depth",
   h: "help"
}

tags = tags.parse(process.argv, defaults, replacements);

if (tags.help) {
   console.log("Usage: ./app.js -q=query [-d=depth] [-p=path]");
} else {
   search.scan(tags.path, tags.depth, function(err, files) {
       search.match(tags.query, files).forEach(function(file){
           console.log(file);
       });
   });
}

No actual logic is going on here, really; we are basically just connecting the different modules together to get the desired results. I usually don't test this code, as it's just glue code and everything it calls has been tested already.

You can now make your script executable (chmod +x app.js on a Unix system) and then run it like so:

./app.js -q=".js"

Optionally, you can customize some of the other options we set up.

Action Still

Conclusion

In this article we have built an entire file searching app, albeit a simple one, but I think it demonstrates the process as a whole fairly well.

Some personal advice moving forward: if you are going to do a lot of TDD, set up your environment. A lot of the overhead time people associate with TDD comes from having to keep switching windows around, opening and closing different files, running tests, and repeating this 80 dozen times a day, which interrupts your workflow and decreases productivity. But if you have your editor set up so that the tests and code sit side by side, or your IDE supports jumping back and forth, this saves a ton of time. You can also have your tests run automatically by calling Mocha with the -w flag to watch the files for changes and rerun all tests. These kinds of things make the process more seamless and more of an aid than a bother.

I hope you enjoyed this article, if you have any questions you can leave them below, contact me on Twitter @gabrielmanricks or on the Nettuts+ IRC channel (#nettuts on freenode).

December 31 2013

20:23

Introduction to HTML5 Desktop Apps With Node-Webkit

Using Node.js, we can create web applications easily. Now, thanks to node-webkit, we can also create desktop apps, using a unique combination of HTML5 and Node.


Introduction

The library combines WebKit engine and Node.js in a unique way. Both WebKit and Node share the same context, allowing you to write your code like it’s meant to be executed in a browser, but with the addition of all Node’s features.

The list of uses is endless. You can create business apps, text and image editors, games, presentations, admin panels, etc. Just name the desktop app that you would like to create, and I can assure you that it will be possible with node-webkit.

In this tutorial, I will show you how to get started by showing you how to create a simple text editor.


Preparation

First, you have to get the library. Download the version appropriate for your operating system (the application itself will run on all of them) from GitHub and unzip it wherever you'd like. Now, let's create a basic folder structure. We will have folders for HTML files (.html) and for JavaScript files (.js). Also, create the package.json file in the same directory that the nw executable is in, and a node_modules folder to keep the modules we will create.


Package.json

The first thing to do is to fill the required fields in the package.json file. In the case of node-webkit these are name and main (on Github you can find the full list of available package.json options). The first one is exactly the same as in the case of a plain Node.js app. The second one should be a (relative or absolute) path to the main HTML file that will be displayed upon the application starting. In the case of our example, the package.json file should look like this:

{
	"name": "simple-text-editor",
	"main": "./html/index.html"
}

Now if you run your app using the nw executable, you should see a blank screen like this:

blank_app

The Main File

Creating the user interface using node-webkit is no different than creating a webpage (with the slight difference that you know the rendering engine, so you don’t have to provide any fallbacks for older browsers, or use libraries like jQuery – and actually you shouldn’t, because they include fallback code that will slow down your app). Let’s create an index.html file that we will use:

<!DOCTYPE html>
<html>
<head>
	<title>Simple Text Editor</title>
</head>
<body>

</body>
</html>

Let’s also include (and create) the main JavaScript file that we will use, in the body, so it will be executed after the DOM is loaded:

<script src="../js/main.js"></script>

Now the only thing that will change when you open the app will be the title.


Let’s Use Some Node

To show you how easily you can use Node.js in node-webkit, let’s read the contents of package.json and print it on the screen. Create the js/main.js file and put the following code in it:

var fs = require('fs');

fs.readFile('./package.json', 'utf-8', function (error, contents) {
	document.write(contents);
});

As you can see, the code looks exactly the same as if you were writing it for Node. But then we use document.write to put the file contents on the page. No need to set up any local server.

Now open the app and you should see something like this:

file_contents_printed

Modules

There is another nice thing about node-webkit: you don’t have to include a series of <script> tags in your HTML if you want to split your code into modules. You can do it just like you do in Node.js – using require. Let’s create a simple module to read file contents into a textarea or write them to a file. Name the file file.js and put it in the node_modules directory. Now put the overall module structure in it:

var fs = require('fs');

function File() {
	function open(path, document) {
		
	}
	
	function save(path, document) {
		
	}
	
	this.open = open;
	this.save = save;
}

module.exports = new File();

As you can see, this module exports a single shared instance with two public methods – one for opening files and one for saving them.

The open method will look like this:

function open(path, document) {
	fs.readFile(path, 'utf-8', function (error, contents) {
		document.getElementById('editor').value = contents;
	});
}

Pretty simple right? The function accepts the path to the file as the first parameter and puts the file’s contents into the element with an id of “editor”. We also need to pass the document object to the function because the script is called using Node’s require and it doesn’t have access to the WebKit stuff directly.

The save method is as simple as the previous one:

function save(path, document) {
	var text = document.getElementById('editor').value;
	fs.writeFile(path, text, function (error) {
		if (error) console.log(error);
	});
}

Now let’s test if everything works. Change the contents of js/main.js to this:

var file = require('file.js');

console.log(file.open, file.save);

Now if you go to the developer tools console and click the developer refresh button in the right corner of the window you should see the two functions from this module logged. This is another useful thing in node-webkit – your console.log calls are being displayed in the developer tools console, so it’s easier to debug your apps.

dev_tools_log

Node-Webkit’s File Inputs

Let’s add two file inputs that we will need later:

<input id="open" type="file" style="display:none" accept="text/*"/>
<input id="save" type="file" nwsaveas style="display:none" accept="text/*"/>

Notice the nwsaveas attribute in the second input. It is a special node-webkit attribute that allows the user to select a non-existing file. Both of the inputs are hidden, because we will only access them from JavaScript. In node-webkit, file inputs are modified in a way that allows you to fire a click event on them, so you can open the file dialog without the user clicking on the input (with no need for workarounds like invisible inputs above the button). Now we can move to the JavaScript.

First, remove the console.log call from the js/main.js file. Now put this code in there:

function clickInput(id) {
	var event = document.createEvent('MouseEvents');
	event.initMouseEvent('click');
	document.getElementById(id).dispatchEvent(event);
}

document.addEventListener('keyup', function (e) {
	if (e.keyCode == 'O'.charCodeAt(0) && e.ctrlKey) {
		clickInput('open');
	} else if (e.keyCode == 'S'.charCodeAt(0) && e.ctrlKey) {
		clickInput('save');
	}
});

This code is responsible for showing the Open and Save dialogs. The clickInput function does the main job here – it simulates a click event on the input element. In a normal browser, this would be impossible for security reasons, but here it’s not a security threat at all. Later, there is a normal keyup event handler that checks if the proper key combination was pressed (Ctrl+O or Ctrl+S) and “clicks” on the inputs. Note that this would also be impossible to achieve in a browser – combinations like Ctrl+O and Ctrl+S are reserved for the browser’s internal use and no event fires when they are pressed (only Firefox fires events for them).

Now press the developer refresh button and you should see the appropriate dialog when you press Ctrl+S or Ctrl+O. They of course do nothing for now.


Creating the Editor

Now, since we will be creating a text editor, we need something to write on. Add the textarea to the HTML:

<textarea id="editor" style="position:fixed;top:0;bottom:0;left:0;right:0"></textarea>

Next we have to finish the opening/saving code. Let’s create the onchange event listeners for the open and save inputs:

document.getElementById('open').addEventListener('change', function (e) {
	file.open(this.value, document);
});

document.getElementById('save').addEventListener('change', function (e) {
	file.save(this.value, document);
});

Thanks to the module created earlier, the code is really simple. This is also possible thanks to node-webkit, because in browsers (again, for security reasons) the value of a file input element is set to a fake path – here it is the actual path that was selected. Now open the app (or press the developer refresh button, if you didn’t close it) and you should have a perfectly working text editor.


Further Enhancements

We can also do a few other things to make the editor a bit more advanced and useful. For example, let’s open a new window when a user presses Ctrl+N. First, add this require to the top of the script:

var gui = require('nw.gui');

The nw.gui module is node-webkit’s user interface library (you can read more about it on node-webkit’s GitHub). Next add this else if to the document’s keyup listener:

} else if (e.keyCode == 'N'.charCodeAt(0) && e.ctrlKey) {
	gui.Window.open('index.html');
}

And voilà! If you refresh the app, you can now press Ctrl+N to open a new window. This function does differ from the normal window.open though. You can pass various window options as the second parameter. The list of what you can set there is available in the documentation.

Another thing that may be useful in a text editor is an application menu (the one that sits under the title bar in Windows/Linux and on the top of the screen on Mac). In node-webkit it’s pretty simple. First, let’s create the menu:

var menu = new gui.Menu({ type: 'menubar' });

The type menubar is reserved for the application menu. Now we can add items to it. Let it be a File menu:

menu.append(new gui.MenuItem({
	label: 'File',
	submenu: new gui.Menu()
}));

Now let’s add some items to this menu:

menu.items[0].submenu.append(new gui.MenuItem({
	label: 'New',
	click: function () {
		gui.Window.open('index.html');
	}
}));
menu.items[0].submenu.append(new gui.MenuItem({
	type: 'separator'
}));
menu.items[0].submenu.append(new gui.MenuItem({
	label: 'Close',
	click: function () {
		gui.Window.get().close();
	}
}));

menu.items[0] is the first item of our application menu (you can also assign it to a variable when creating it if you want to). We are appending new items to its submenu, and each item gets a click callback to handle the click on it. The gui.Window.get method gets the current window, so we can close it when the user selects the Close option in the File menu.

Finally, we can assign the menu to the window:

gui.Window.get().menu = menu;

Again, we are using gui.Window.get to get the current window, then we are assigning our menu to its menu property. Please note that even though we can assign a different menu to every window, on OS X (Mac) one app can have only one menu (which is global for all windows), so if you want your app to be used on Macs, you should avoid using different menus on different windows.

Now if you open or refresh your app, you should see a system menu under the titlebar:

system_menu

Packaging the App

Now if you would like to share your app with other users, you can package it into one file, so the users can download the node-webkit executable appropriate to their platform and launch your app with it. First, let’s remove the toolbar that makes the window look like a browser – it’s nice to have during development, but your users probably don’t want to see it. We can achieve this by setting window.toolbar to false in package.json, so it looks like this:

{
	"name": "example-app",
	"main": "./html/index.html",
	"window": {
		"toolbar": false
	}
}

Now if you open the app (nothing will change if you just refresh it, package.json is loaded only on startup), you should see the final result:

final_result

Packaging the app is pretty simple. Just create a .zip archive with all your assets (all of the files you created, without the files supplied with node-webkit) and change its extension to .nw. That’s all. If a user downloads node-webkit and your package, they only need to put it in node-webkit’s directory and run the nw executable. Detailed descriptions, with a few more tips, are available on node-webkit’s GitHub.

Now your editor is ready to be shipped to your users.


Conclusion

As you can see, node-webkit is a very promising and powerful library. With frequent updates and fixes, and being supported by Intel, there’s almost no chance that it will be discontinued, which can sometimes happen with open source projects.

Share your thoughts about this project in the comments below. I personally think it’s the best thing out there if you want to create a desktop application using Node.js and HTML.

December 26 2013

21:07

Working With IndexedDB – Part 3

Welcome to the final part of my IndexedDB series. When I began this series my intent was to explain a technology that is not always the most… friendly one to work with. In fact, when I first tried working with IndexedDB, last year, my initial reaction was somewhat negative (“Somewhat negative” much like the Universe is “somewhat old.”). It’s been a long journey, but I finally feel somewhat comfortable working with IndexedDB and I respect what it allows. It is still a technology that can’t be used everywhere (it sadly missed being added to iOS7), but I truly believe it is a technology folks can learn and make use of today.

In this final article, we’re going to demonstrate some additional concepts that build upon the “full” demo we built in the last article. To be clear, you must be caught up on the series or this entry will be difficult to follow, so you may also want to check out part one.


Counting Data

Let’s start with something simple. Imagine you want to add paging to your data. How would you get a count of your data so you can properly handle that feature? I’ve already shown you how you can get all your data and certainly you could use that as a way to count data, but that requires fetching everything. If your local database is huge, that could be slow. Luckily the IndexedDB spec provides a much simpler way of doing it.

The count() method, run on an objectStore, will return a count of data. Like everything else we’ve done this will be asynchronous, but you can simplify the code down to one call. For our note database, I’ve written a function called doCount() that does just this:

function doCount() {

    db.transaction(["note"],"readonly").objectStore("note").count().onsuccess = function(event) {
        $("#sizeSpan").text("("+event.target.result+" Notes Total)");
    };

}

Remember – if the code above is a bit hard to follow, you can break it up into multiple blocks. See the earlier articles where I demonstrated this. The result handler is passed a result value representing the total number of objects available in the store. I modified the UI of our demo to include an empty span in the header.

<span class="navbar-brand" >Note Database <span id="sizeSpan"></span></span>
Count Example

The final thing I need to do is simply add a call to doCount when the application starts up and after any add or delete operation. Here is one example from the success handler for opening the database.

openRequest.onsuccess = function(e) {
    db = e.target.result;

    db.onerror = function(event) {
      // Generic error handler for all errors targeted at this database's
      // requests!
      alert("Database error: " + event.target.errorCode);
    };

    displayNotes();
    doCount();
};

You can find the full example in the zip you downloaded as fulldemo2. (As an FYI, fulldemo1 is the application as it was at the end of the previous article.)


Filter As You Type

For our next feature, we’re going to add a basic filter to the note list. In the earlier articles in this series I covered how IndexedDB does not allow for free form search. You can’t (well, not easily) search content that contains a keyword. But with the power of ranges, it is easy to at least support matching at the beginning of a string.

If you remember, a range allows us to grab data from a store that either begins with a certain value, ends with a value, or lies in between. We can use this to implement a basic filter against the title of our note fields. First, we need to add an index for this property. Remember, this can only be done in the onupgradeneeded event.

    if(!thisDb.objectStoreNames.contains("note")) {
        console.log("I need to make the note objectstore");
        objectStore = thisDb.createObjectStore("note", { keyPath: "id", autoIncrement:true });
        objectStore.createIndex("title", "title", { unique: false });
    }

Next, I added a simple form field to the UI:

Filter UI

Then I added a “keyup” handler to the field so I’d see immediate updates while I type.

$("#filterField").on("keyup", function(e) {
    var filter = $(this).val();
    displayNotes(filter);
});

Notice how I’m calling displayNotes. This is the same function I used before to display everything. I’m going to update it to support both a “get everything” action as well as a “get filtered” type action. Let’s take a look at it.

function displayNotes(filter) {

    var transaction = db.transaction(["note"], "readonly");  
    var content="<table class='table table-bordered table-striped'><thead><tr><th>Title</th><th>Updated</th><th>&nbsp;</th></tr></thead><tbody>";

    transaction.oncomplete = function(event) {
        $("#noteList").html(content);
    };

    var handleResult = function(event) {  
      var cursor = event.target.result;  
      if (cursor) {  
        content += "<tr data-key=\""+cursor.key+"\"><td class=\"notetitle\">"+cursor.value.title+"</td>";
        content += "<td>"+dtFormat(cursor.value.updated)+"</td>";

        content += "<td><a class=\"btn btn-primary edit\">Edit</a> <a class=\"btn btn-danger delete\">Delete</a></td>";
        content +="</tr>";
        cursor.continue();  
      }  
      else {  
        content += "</tbody></table>";
      }  
    };

    var objectStore = transaction.objectStore("note");

    if(filter) {
        //Credit: http://stackoverflow.com/a/8961462/52160
        var range = IDBKeyRange.bound(filter, filter + "\uffff");
        var index = objectStore.index("title");
        index.openCursor(range).onsuccess = handleResult;
    } else {
        objectStore.openCursor().onsuccess = handleResult;
    }

}

To be clear, the only change here is at the bottom. Opening a cursor with or without a range gives us the same type of event handler result. That’s handy then as it makes this update so trivial. The only complex aspect is in actually building the range. Notice what I’ve done here. The input, filter, is what the user typed. So imagine this is “The”. We want to find notes with a title that begins with “The” and ends in any character. This can be done by simply setting the far end of the range to a high Unicode character. I can’t take credit for this idea. See the StackOverflow link in the code for attribution.
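The trick works because IndexedDB compares string keys lexicographically, so the bound from filter to filter + "\uffff" covers exactly the strings that start with filter. You can verify the idea with plain string comparison, independent of IndexedDB itself (a quick sketch, the titles here are just made up):

```javascript
// IndexedDB orders string keys lexicographically, so any title that
// starts with filter satisfies: filter <= title <= filter + "\uffff".
function inPrefixRange(title, filter) {
    return title >= filter && title <= filter + "\uffff";
}

console.log(inPrefixRange("The Hobbit", "The"));       // true
console.log(inPrefixRange("Theology Notes", "The"));   // true
console.log(inPrefixRange("A Game of Notes", "The"));  // false
```

This is the same comparison IDBKeyRange.bound performs internally when it decides which records fall inside the range.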

You can find this demo in the fulldemo3 folder. Note that this is using a new database so if you’ve run the previous examples, this one will be empty when you first run it.

While this works, it has one small problem. Imagine a note titled, “Saints Rule.” (Because they do. Just saying.) Most likely you will try to search for this by typing “saints”. If you do this, the filter won’t work because it is case sensitive. How do we get around it?

One way is to simply store a copy of our title in lowercase. This is relatively easy to do. First, I modified the index to use a new property called titlelc.

        objectStore.createIndex("titlelc", "titlelc", { unique: false });

Then I modified the code that stores notes to create a copy of the field:

$("#saveNoteButton").on("click",function() {

    var title = $("#title").val();
    var body = $("#body").val();
    var key = $("#key").val();
    var titlelc = title.toLowerCase();

    var t = db.transaction(["note"], "readwrite");

    if(key === "") {
        t.objectStore("note")
                        .add({title:title,body:body,updated:new Date(),titlelc:titlelc});
    } else {
        t.objectStore("note")
                        .put({title:title,body:body,updated:new Date(),id:Number(key),titlelc:titlelc});
    }

Finally, I modified the search to simply lowercase user input. That way if you enter “Saints” it will work just as well as entering “saints.”

        filter = filter.toLowerCase();
        var range = IDBKeyRange.bound(filter, filter + "\uffff");
        var index = objectStore.index("titlelc");

That’s it. You can find this version as fulldemo4.


Working With Array Properties

For our final improvement, I’m going to add a new feature to our Note application – tagging. This will let you add any number of tags (think keywords that describe the note) so that you can later find other notes with the same tag. Tags will be stored as an array. That by itself isn’t such a big deal. I mentioned in the beginning of this series that you could easily store arrays as properties. What is a bit more complex is handling the search. Let’s begin by making it so you can add tags to a note.

First, I modified my note form to have a new input field. This will allow the user to enter tags separated by a comma:

Tag UI

I can save this by simply updating my code that handles Note creation/updating.

    var tags = [];
    var tagString = $("#tags").val();
    if(tagString.length) tags = tagString.split(",");

Notice that I’m defaulting the value to an empty array. I only populate it if you typed something in. Saving this is as simple as appending it to the object we pass to IndexedDB:

    if(key === "") {
        t.objectStore("note")
                        .add({title:title,body:body,updated:new Date(),titlelc:titlelc,tags:tags});
    } else {
        t.objectStore("note")
                        .put({title:title,body:body,updated:new Date(),id:Number(key),titlelc:titlelc,tags:tags});
    }

That’s it. If you write a few notes and open up Chrome’s Resources tab, you can actually see the data being stored.

Chrome DevTools and the Resource View

Now let’s add tags to the view when you display a note. For my application, I decided on a simple use case for this. When a note is displayed, if there are tags I’ll list them out. Each tag will be a link. If you click that link, I’ll show you a list of related notes using the same tag. Let’s look at that logic first.

function displayNote(id) {
    var transaction = db.transaction(["note"]);  
    var objectStore = transaction.objectStore("note");  
    var request = objectStore.get(id);

    request.onsuccess = function(event) {  
        var note = request.result;
        var content = "<h2>" + note.title + "</h2>"; 
        if(note.tags && note.tags.length > 0) {
            content += "<strong>Tags:</strong> ";
            note.tags.forEach(function(elm,idx,arr) {
                content += "<a class='tagLookup' title='Click for Related Notes' data-noteid='"+note.id+"'> " + elm + "</a> ";  
            });
            content += "<br/><div id='relatedNotesDisplay'></div>";
        }
        content += "<p>" + note.body + "</p>";
        $noteDetail.html(content).show();
        $noteForm.hide();           
    };  
}

This function (a new addition to our application) handles the note display code formerly bound to the table cell click event. I needed a more abstract version of the code so this fulfills that purpose. For the most part it’s the same, but note the logic to check the length of the tags property. If the array is not empty, the content is updated to include a simple list of tags. Each one is wrapped in a link with a particular class I’ll use for lookup later. I’ve also added a div specifically to handle that search.

A note with tags

At this point, I’ve got the ability to add tags to a note as well as display them later. I’ve also planned to allow the user to click those tags so they can find other notes using the same tag. Now here comes the complex part.

You’ve seen how you can fetch content based on an index. But how does that work with array properties? Turns out – the spec has a specific flag for dealing with this: multiEntry. When creating an array-based index, you must set this value to true. Here is how my application handles it:

objectStore.createIndex("tags","tags", {unique:false,multiEntry:true});

That handles the storage aspect well. Now let’s talk about search. Here is the click handler for the tag link class:

$(document).on("click", ".tagLookup", function(e) {
    var tag = e.target.text;
    var parentNote = $(this).data("noteid");
    var doneOne = false;
    var content = "<strong>Related Notes:</strong><br/>";

    var transaction = db.transaction(["note"], "readonly");
    var objectStore = transaction.objectStore("note");
    var tagIndex = objectStore.index("tags");
    var range = IDBKeyRange.only(tag);

    transaction.oncomplete = function(event) {
        if(!doneOne) {
            content += "No other notes used this tag."; 
        }
        content += "<p/>";
        $("#relatedNotesDisplay").html(content);
    };

    var handleResult = function(event) {
        var cursor = event.target.result;
        if(cursor) {
            if(cursor.value.id != parentNote) {
                doneOne = true;
                content += "<a class='loadNote' data-noteid='"+cursor.value.id+"'>" + cursor.value.title + "</a><br/> ";
            }
            cursor.continue();
        }           
    };

    tagIndex.openCursor(range).onsuccess = handleResult;

});

There’s quite a bit here – but honestly – it is very similar to what we’ve discussed before. When you click a tag, my code begins by grabbing the text of the link for the tag value. I create my transaction, objectStore, and index objects as you’ve seen before. The range is new this time. Instead of creating a range from something and to something, we can use the only() API to specify that we want a range of only one value. And yes – that seemed weird to me as well. But it works great. You can see then we open the cursor and we can iterate over the results as before. There is a bit of additional code to handle cases where there may be no matches. I also take note of the original note, i.e. the one you are viewing now, so that I don’t display it as well. And that’s really it. I’ve got one last bit of code that handles click events on those related notes so you can view them easily:

$(document).on("click", ".loadNote", function(e) {
    var noteId = $(this).data("noteid");
    displayNote(noteId);
});

You can find this demo in the folder fulldemo5.


Conclusion

I sincerely hope that this series has been helpful to you. As I said in the beginning, IndexedDB was not a technology I enjoyed using. The more I worked with it, and the more I began to wrap my head around how it did things, the more I began to appreciate how much this technology could help us as web developers. It definitely has room to grow, and I can definitely see people preferring to use wrapper libraries to simplify things, but I think the future for this feature is great!

December 24 2013

21:18

Managing the Asynchronous Nature of Node.js

Node.js allows you to create apps fast and easily. But due to its asynchronous nature, it may be hard to write readable and manageable code. In this article I’ll show you a few tips on how to achieve that.


Callback Hell or the Pyramid of Doom

Node.js is built in a way that forces you to use asynchronous functions. That means callbacks, callbacks and even more callbacks. You’ve probably seen or even written yourself pieces of code like this:

app.get('/login', function (req, res) {
	sql.query('SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ], function (error, rows) {
		if (error) {
			res.writeHead(500);
			return res.end();
		}
		if (rows.length < 1) {
			res.end('Wrong username!');
		} else {
			sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ], function (error, rows) {
				if (error) {
					res.writeHead(500);
					return res.end();
				}
				if (rows.length < 1) {
					res.end('Wrong password!');
				} else {
					sql.query('SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ], function (error, rows) {
						if (error) {
							res.writeHead(500);
							return res.end();
						}
						req.session.username = req.param('username');
						req.session.data = rows[0];
						res.redirect('/userarea');
					});
				}
			});
		}
	});
});

This is actually a snippet straight from one of my first Node.js apps. If you’ve done something more advanced in Node.js you probably understand everything, but the problem here is that the code is moving to the right every time you use some asynchronous function. It becomes harder to read and harder to debug. Luckily, there are a few solutions for this mess, so you can pick the right one for your project.


Solution 1: Callback Naming and Modularization

The simplest approach would be to name every callback (which will help you debug the code) and split all of your code into modules. The login example above can be turned into a module in a few simple steps.

The Structure

Let’s start with a simple module structure. To avoid the above situation, when you just split the mess into smaller messes, let’s have it be a class:

var util = require('util');
var EventEmitter = require('events').EventEmitter;

function Login(username, password) {
	function _checkForErrors(error, rows, reason) {
		
	}
	
	function _checkUsername(error, rows) {
		
	}
	
	function _checkPassword(error, rows) {
		
	}
	
	function _getData(error, rows) {
		
	}
	
	function perform() {
		
	}
	
	this.perform = perform;
}

util.inherits(Login, EventEmitter);

The class is constructed with two parameters: username and password. Looking at the sample code, we need three functions: one to check if the username is correct (_checkUsername), another to check the password (_checkPassword) and one more to return the user-related data (_getData) and notify the app that the login was successful. There is also a _checkForErrors helper, which will handle all errors. Finally, there is a perform function, which will start the login procedure (and is the only public function in the class). We inherit from EventEmitter to simplify the usage of this class.

The Helper

The _checkForErrors function will check if any error occurred or if the SQL query returns no rows, and emit the appropriate error (with the reason that was supplied):

function _checkForErrors(error, rows, reason) {
	if (error) {
		this.emit('error', error);
		return true;
	}
	
	if (rows.length < 1) {
		this.emit('failure', reason);
		return true;
	}
	
	return false;
}

It also returns true or false, depending on whether an error occurred or not.

Performing the Login

The perform function will have to do only one operation: perform the first SQL query (to check if the username exists) and assign the appropriate callback:

function perform() {
	sql.query('SELECT 1 FROM users WHERE name = ?;', [ username ], _checkUsername);
}

I assume you have your SQL connection accessible globally in the sql variable (just to keep things simple; discussing whether this is good practice is beyond the scope of this article). And that’s it for this function.

Checking the Username

The next step is to check if the username is correct, and if so fire the second query – to check the password:

function _checkUsername(error, rows) {
	if (_checkForErrors(error, rows, 'username')) {
		return false;
	} else {
		sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ username, password ], _checkPassword);
	}
}

Pretty much the same code as in the messy sample, with the exception of error handling.

Checking the Password

This function is almost exactly the same as the previous one, the only difference being the query called:

function _checkPassword(error, rows) {
	if (_checkForErrors(error, rows, 'password')) {
		return false;
	} else {
		sql.query('SELECT * FROM userdata WHERE name = ?;', [ username ], _getData);
	}
}

Getting the User-Related Data

The last function in this class will get the data related to the user (the optional step) and fire a success event with it:

function _getData(error, rows) {
	if (_checkForErrors(error, rows)) {
		return false;
	} else {
		this.emit('success', rows[0]);
	}
}

Final Touches and Usage

The last thing to do is to export the class. Add this line after all of the code:

module.exports = Login;

This will make the Login class the only thing that the module will export. It can be later used like this (assuming that you’ve named the module file login.js and it’s in the same directory as the main script):

var Login = require('./login.js');

...

app.get('/login', function (req, res) {
	var login = new Login(req.param('username'), req.param('password'));
	login.on('error', function (error) {
		res.writeHead(500);
		res.end();
	});
	login.on('failure', function (reason) {
		if (reason == 'username') {
			res.end('Wrong username!');
		} else if (reason == 'password') {
			res.end('Wrong password!');
		}
	});
	login.on('success', function (data) {
		req.session.username = req.param('username');
		req.session.data = data;
		res.redirect('/userarea');
	});
	login.perform();
});

It’s a few more lines of code, but the readability of the code has increased quite noticeably. Also, this solution does not use any external libraries, which makes it perfect if someone new comes to your project.

That was the first approach, let’s proceed to the second one.


Solution 2: Promises

Using promises is another way of solving this problem. A promise (as you can read in the link provided) “represents the eventual value returned from the single completion of an operation”. In practice, it means that you can chain the calls to flatten the pyramid and make the code easier to read.

We will use the Q module, available in the NPM repository.

Q in a Nutshell

Before we start, let me introduce you to Q. For static classes (modules), we will primarily use the Q.nfcall function. It helps us convert any function following Node.js’s callback pattern (where the parameters of the callback are the error and the result) into a promise. It’s used like this:

Q.nfcall(http.get, options);

It’s pretty much like Function.prototype.call. You can also use Q.nfapply, which resembles Function.prototype.apply:

Q.nfapply(fs.readFile, [ 'filename.txt', 'utf-8' ]);

Also, when we create the promise, we add each step with the then(stepCallback) method, catch the errors with catch(errorCallback) and finish with done().

In this case, since the sql object is an instance, not a static class, we have to use Q.ninvoke or Q.npost, which are similar to the above. The difference is that we pass the instance we want to work with as the first argument and the method’s name as a string in the second, to avoid the method being unbound from the instance.
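To see why the instance matters, here is a plain-JavaScript sketch (no Q required; Db is a made-up stand-in for the sql client) of the unbinding problem that Q.ninvoke works around:

```javascript
// A hypothetical class standing in for the MySQL client.
function Db(name) { this.name = name; }
Db.prototype.query = function (sql, cb) {
	// `this` must be the Db instance for the query to work.
	cb(null, this.name + ' ran: ' + sql);
};

var db = new Db('mydb');

// Detaching the method loses `this`, which is why Q.ninvoke
// takes the instance and the method name together:
var detached = db.query;
// detached('SELECT 1', cb); // `this` would be undefined here

// Q.ninvoke(db, 'query', 'SELECT 1') is roughly equivalent to:
function ninvokeLike(obj, method, arg) {
	return new Promise(function (resolve, reject) {
		obj[method](arg, function (err, result) {
			if (err) { reject(err); } else { resolve(result); }
		});
	});
}
```

Calling ninvokeLike(db, 'query', 'SELECT 1') then resolves with the query result, with this still bound to the db instance.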

Preparing the Promise

The first thing to do is to execute the first step. Since sql is an instance, we use Q.ninvoke here rather than Q.nfcall or Q.nfapply:

var Q = require('q');

...
app.get('/login', function (req, res) {
	Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ])
});

Notice the lack of a semicolon at the end of the line – the function-calls will be chained so it cannot be there. We are just calling the sql.query as in the messy example, but we omit the callback parameter – it’s handled by the promise.

Checking the Username

Now we can create the callback for the SQL query; it will be almost identical to the one in the “pyramid of doom” example. Add this after the Q.ninvoke call:

.then(function (rows) {
	if (rows.length < 1) {
		res.end('Wrong username!');
	} else {
		return Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ]);
	}
})

As you can see, we are attaching the callback (the next step) using the then method. Also, in the callback we omit the error parameter, because we will catch all of the errors later. We manually check whether the query returned anything, and if so we return the next promise to be executed (again, no semicolon, because of the chaining).

Checking the Password

As with the modularization example, checking the password is almost identical to checking the username. This should go right after the last then call:

.then(function (rows) {
	if (rows.length < 1) {
		res.end('Wrong password!');
	} else {
		return Q.ninvoke(sql, 'query', 'SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ]);
	}
})

Getting the User-Related Data

The last step will be the one where we put the user’s data in the session. Once more, the callback is not much different from the messy example:

.then(function (rows) {
	req.session.username = req.param('username');
	req.session.data = rows[0];
	res.redirect('/userarea');
})

Checking for Errors

When using promises and the Q library, all of the errors are handled by the callback set with the catch method. Here, we just send an HTTP 500 no matter what the error is, as in the examples above:

.catch(function (error) {
	res.writeHead(500);
	res.end();
})
.done();

After that, we must call the done method to “make sure that, if an error doesn’t get handled before the end, it will get rethrown and reported” (from the library’s README). Now our beautifully flattened code should look like this (and behave just like the messy one):

var Q = require('q');

...
app.get('/login', function (req, res) {
	Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ])
	.then(function (rows) {
		if (rows.length < 1) {
			res.end('Wrong username!');
		} else {
			return Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ]);
		}
	})
	.then(function (rows) {
		if (rows.length < 1) {
			res.end('Wrong password!');
		} else {
			return Q.ninvoke(sql, 'query', 'SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ]);
		}
	})
	.then(function (rows) {
		req.session.username = req.param('username');
		req.session.data = rows[0];
		res.redirect('/userarea');
	})
	})
	.catch(function (error) {
		res.writeHead(500);
		res.end();
	})
	.done();
});

The code is much cleaner, and it involved less rewriting than the modularization approach.


Solution 3: Step Library

This solution is similar to the previous one, but it’s simpler. Q is a bit heavy, because it implements the whole promises idea. The Step library is there only for the purpose of flattening the callback hell. It’s also a bit simpler to use, because you just call the only function that is exported from the module, pass all your callbacks as the parameters and use this in place of every callback. So the messy example can be converted into this, using the Step module:

var step = require('step');

...

app.get('/login', function (req, res) {
	step(
		function start() {
			sql.query('SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ], this);
		},
		function checkUsername(error, rows) {
			if (error) {
				res.writeHead(500);
				return res.end();
			}
			if (rows.length < 1) {
				res.end('Wrong username!');
			} else {
				sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ], this);
			}
		},
		function checkPassword(error, rows) {
			if (error) {
				res.writeHead(500);
				return res.end();
			}
			if (rows.length < 1) {
				res.end('Wrong password!');
			} else {
				sql.query('SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ], this);
			}
		},
		function (error, rows) {
			if (error) {
				res.writeHead(500);
				return res.end();
			}
			req.session.username = req.param('username');
			req.session.data = rows[0];
			res.redirect('/userarea');
		}
	);
});

The drawback here is that there is no common error handler. Although any exceptions thrown in one callback are passed to the next one as the first parameter (so the script won’t go down because of an uncaught exception), having one handler for all errors is convenient most of the time.
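If the repetition bothers you, one possible way to soften it (a sketch of my own, not part of the Step library) is to factor the repeated error check into a small helper that every step calls first:

```javascript
// Hypothetical helper: sends an HTTP 500 and reports whether
// the step failed, so each step needs only one guard line.
function fail(error, res) {
	if (error) {
		res.writeHead(500);
		res.end();
		return true;
	}
	return false;
}

// Each step can then open like this:
//   function checkUsername(error, rows) {
//       if (fail(error, res)) return;
//       // ... proceed with rows
//   }
```

It is not a true common handler, but it cuts each step’s error boilerplate down to a single line.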


Which One to Choose?

That’s pretty much a personal choice, but to help you pick the right one, here is a list of pros and cons of each approach:

Modularization:

Pros:

  • No external libraries
  • Helps to make the code more reusable

Cons:

  • More code
  • A lot of rewriting if you’re converting an existing project

Promises (Q):

Pros:

  • Less code
  • Only a little rewriting if applied to an existing project

Cons:

  • You have to use an external library
  • Requires a bit of learning

Step Library:

Pros:

  • Easy to use, no learning required
  • Pretty much copy-and-paste if converting an existing project

Cons:

  • No common error handler
  • The step function is a bit harder to indent properly

Conclusion

As you can see, the asynchronous nature of Node.js can be managed and the callback hell can be avoided. I personally use the modularization approach, because I like to have my code well structured. I hope these tips will help you write more readable code and debug your scripts more easily.

December 12 2013

21:20

WebGL With Three.js: Shaders

3D graphics in the browser have been a topic of interest since their introduction. But if you were to create your apps using plain old WebGL, it would take ages. Recently, some really useful libraries have become available. Three.js is one of the most popular, and in this series I will show you how to make the best use of it, to create stunning 3D experiences for your users.

I do expect you to have a basic understanding of 3D space before you start reading this tutorial, as I won’t be explaining things like coordinates, vectors, etc.


A Word About Shaders

If you already know what shaders are, you may skip this step. Shaders are basically programs written in GLSL (OpenGL Shading Language) that are executed on the GPU. This makes them extremely useful, as we can take some work from the CPU and put it on the GPU to increase performance. There are two kinds: vertex and fragment shaders. Vertex shaders are used to modify the structure of the object (move the vertices), and fragment shaders make changes to the pixels that are being drawn.


Step 1: Vertex Shader

We will start with the simpler one. This shader will modify the placement of the vertices in the mesh, resulting in moving faces. Insert this code into the <head> of your app:

<script id="cubeVertexShader" type="x-shader/x-vertex">
	uniform float time;
	varying vec2 vUv;

	void main() {
		vUv = uv;
		vec3 newPosition = position + normal * vec3(sin(time * 0.2) * 3.0);
		gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
	}
</script>

The type attribute of this script won’t be understood by the browser, so it will not get executed (we will pass its contents to the Three.js material later). In the first two lines we are defining two variables. The first one is uniform float time. Uniforms are passed to both vertex and fragment shaders. Next, there is varying vec2 vUv. Varyings are the interface between the vertex and the fragment shader. time will hold the time in milliseconds since the app was started, which we will use to calculate new positions of the vertices. In vUv we will store the UV (texture vector) of each vertex, so we can use it in the fragment shader.

Next, there is the void main() declaration. All shaders must have this function. In here we are passing the UV of the vertex to our vUv and calculating the vertex’s new position. Finally, we set the gl_Position, which in fact sets the position of the vertex. But also, we have to multiply the position calculated earlier by the projectionMatrix and modelViewMatrix, two matrices that Three.js supplies to us. This is necessary because if we don’t do this, the GPU will not consider the point from which we are looking at the vertex. Now let’s move to the fragment shader.


Step 2: Fragment Shader

Now this is the place where all the magic happens. Fragment shaders are responsible for all of those good looking games. The one we will use is pretty simple, so don’t expect to see a scene from Crysis 3 after using it. Insert the following code under your vertex shader:

<script id="cubeFragmentShader" type="x-shader/x-fragment">
		uniform float time;
		varying vec2 vUv;

		void main() {
			vec2 position = -1.0 + 2.0 * vUv;

			float red = abs(sin(position.x * position.y + time / 5.0));
			float green = abs(sin(position.x * position.y + time / 4.0));
			float blue = abs(sin(position.x * position.y + time / 3.0 ));
			gl_FragColor = vec4(red, green, blue, 1.0);
		}
	</script>

As you can see at the top of the shader, there are our two variables again. You have to keep in mind that all variables you use (except the ones from Three.js) must be defined in each shader they are used in.

In the void main() function, we are calculating the colors based on the time and UV of the fragment (fragment shaders operate on fragments, which are composed from vertices, so the values of varying variables are interpolated in the fragment shader). Feel free to mess with those numbers and functions (just remember that the color values must be positive).

Finally, we are setting the gl_FragColor variable which sets the fragment’s color.

If you open your browser now, nothing will change, because we have to change the material of the object so it uses shaders.


Step 3: THREE.ShaderMaterial

This special material is used whenever we need to use shaders. Let’s change the material of the object that we attached to our model in the previous part of this series. First, define the uniforms object, which will be used to pass variables to the shaders:

var uniforms = {
	time: { type: "f", value: 0 },
	resolution: { type: "v2", value: new THREE.Vector2() },
	texture: { type: "t", value: THREE.ImageUtils.loadTexture('./box.png') }
};

Next, in the loader.load callback, define the item’s material and use it:

var itemMaterial = new THREE.ShaderMaterial({
	uniforms: uniforms,
	vertexShader: document.getElementById('cubeVertexShader').innerHTML,
	fragmentShader: document.getElementById('cubeFragmentShader').innerHTML
});
item = new THREE.Mesh(new THREE.CubeGeometry(100, 10, 10), itemMaterial);

Now, if you open the browser, you should see that the red beam changed its colors:

shaders_result_not_moving

But the colors are not changing, and the mesh is not animated either. To change that, we have to update the time variable in the shaders each time a frame is drawn. Go to the render function and add this line after the clock.getDelta() call:

uniforms.time.value += delta * 10;

Now if you open the browser, you should see a nicely animated and colorful object:

shaders_animated_and_colorful

A Word About Performance

If we were to create such a texture effect using, for example, HTML5 Canvas, the process would take too much of the CPU’s cycles, resulting in lags. But all shaders are executed on the GPU, which is optimized for all operations on graphics and is focused only on them. Separating graphic and non-graphic calculations is the key to a good performing app.

If you want to create something real using WebGL, allow me to assure you that you will have to move as much work as possible to the GPU to make your application smooth and responsive.


Conclusion

As you can see, using Three.js allows us to create 3D graphics in the browser very easily, and the results are actually pretty good. But they can be even better; take a look at these examples from the Three.js site:

With enough time, a creative mind and Three.js, you can create amazing apps like those too. I will be more than happy to see your Three.js creations. Thanks for reading.

December 06 2013

22:04

WebGL With Three.js: Models and Animation

3D graphics in the browser have been a hot topic since they were introduced. But if you were to create your apps using plain old WebGL, it would take ages. That’s why some really useful libraries have come about. Three.js is one of the most popular of them, and in this series I will show you how to make the best use of it to create stunning 3D experiences for your users.

I expect you to have a basic understanding of 3D space before you start reading this tutorial, as I won’t be explaining topics like coordinates, vectors, etc.


Preparation

As usual, we will start from the code that you created earlier. Download and unpack the assets I provided and you’ll be ready to go.


Step 1: A Word About Exporting Models In Blender

Before we start the programming part, I will explain something that many people have problems with. When you have a model created in Blender, and you want to export it to Three.js format, you should keep the following in mind:

  • First, remove the parenting – the Three.js exporter won’t export any animations if you leave it in (this also applies to the Armature Modifier)
  • Second, group vertices – if you want the bone to move any vertices you have to group them, and name the group with the name of the bone
  • Third, you can have only one animation – this may sound like a big problem, but I will explain the workaround later

Also, when exporting you have to make sure that these options are selected in the exporter: Skinning, Bones and Skeletal Animation.


Step 2: Importing the Model

As with pretty much everything in Three.js, importing models is very simple. There is a special class, THREE.JSONLoader that will do everything for us. Of course it only loads JSON models, but it’s recommended to use them so I will only cover this loader (others work pretty much the same way). Let’s initialize it first:

var loader = new THREE.JSONLoader();
var animation;

No arguments needed. We also need to define a variable for animation, so we can access it later. Now we can load the model:

loader.load('./model.js', function (geometry, materials) {
	var skinnedMesh = new THREE.SkinnedMesh(geometry, new THREE.MeshFaceMaterial(materials));
	skinnedMesh.position.y = 50;
	skinnedMesh.scale.set(15, 15, 15);
	scene.add(skinnedMesh);
	
	animate(skinnedMesh);
});

The load method accepts two parameters: a path to the model and a callback function. This function will be called when the model is loaded (so in the meantime you can display a loading bar to the user), and it receives two arguments: the geometry of the model and its materials (these are exported with it). In the callback, we are creating the mesh – but this time it’s THREE.SkinnedMesh, which supports animations.

Next, we move the model 50 units up to put it on the top of our cube, scale it 15 times (because I tend to create small models in Blender) and add it to the scene. Next we call the animate function that will set up and play the animation.


Step 3: Animation

Now we set up the animation. This is the source for the animate function:

function animate(skinnedMesh) {
	var materials = skinnedMesh.material.materials;

	for (var k in materials) {
		materials[k].skinning = true;
	}

	THREE.AnimationHandler.add(skinnedMesh.geometry.animation);
	animation = new THREE.Animation(skinnedMesh, "ArmatureAction", THREE.AnimationHandler.CATMULLROM);
	animation.play();
}

First we have to enable skinning (animations) in all materials of the model. Next, we have to add the animation from model to THREE.AnimationHandler and create the THREE.Animation object. The parameters are in the following order: the mesh to animate, the animation name in the model and interpolation type (useful when you have a complicated model like a human body, where you want the mesh to bend smoothly). Finally, we play the animation.

But if you open the browser now, you will see that the model is not moving:

model_still

To fix this, we have to add one line to our render function, just below the particleSystem rotation:

if (animation) animation.update(delta);

This will update the time on the animation, so THREE.AnimationHandler knows which frame to render. Now open the browser and you should see the top cube bend to the left and to the right:

model_animated

Step 4: Multiple Animations

Yes, there is a workaround for having only one animation sequence in a model, but it requires you to edit the model. The idea is that you add each animation to one sequence; when one ends, the next one begins. Then, after you’ve exported your model, you need to change the animation code. Let’s say we have a standing animation from the beginning to the fourth second, and a walking animation from the fourth second to the eighth. In our render function we have to check which second the animation is at, and if it reaches the end time of the current sequence, stop it and play it from the beginning:

var currentSequence = 'standing';

function render() {
...
	if (animation) animation.update(delta);
	if (currentSequence == 'standing') {
		if (animation.currentTime > 4) {
			animation.stop();
			animation.play(false, 0); // play the animation not looped, from 0s
		}
	} else if (currentSequence == 'walking') {
		if (animation.currentTime <= 4 || animation.currentTime > 8) {
			animation.stop();
			animation.play(false, 4); // play the animation not looped, from 4s
		}
	}
...
}

You have to remember to start the animations not looped and from the correct time. This will of course be buggy if the user’s frame-rate is really low, because the delta will be higher and animation.currentTime may be much higher than the limit for any particular sequence, resulting in playing some part of the next sequence. But it will be noticeable only if deltas are about 300-500ms.
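One defensive tweak you could add (an assumption on my part, not from the original code) is to clamp the delta before updating the animation, so a single very slow frame cannot jump far past a sequence boundary:

```javascript
// Clamp a frame delta (in seconds) so one slow frame can't skip
// far past a sequence boundary; 1/30 s is an assumed floor frame-rate.
function clampDelta(delta, max) {
	return delta > max ? max : delta;
}

// In the render loop (sketch):
//   var delta = clampDelta(clock.getDelta(), 1 / 30);
//   if (animation) animation.update(delta);
```

With the clamp in place, a 500 ms stall advances the animation by at most one thirtieth of a second, at the cost of the animation briefly running slower than real time.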

Now let’s change our animate function to play the animation not looped and from the beginning; just add these arguments to the animation.play call:

animation.play(false, 0);

Also, let’s allow the user to switch between animations using the a key. Add this code at the end of the file, just before the render() call:

document.addEventListener('keyup', function (e) {
	if (e.keyCode == 'A'.charCodeAt(0)) {
		currentSequence = (currentSequence == 'standing' ? 'walking': 'standing');
	}
});

Step 5: Attach to Bone

This technique is particularly useful in RPGs, but it can apply to other genres as well. It involves attaching another object to the bone of the animated object: clothes, weaponry, etc.

Let’s start by modifying our loader.load callback. Add this code under the scene.add(skinnedMesh) call:

item = new THREE.Mesh(new THREE.CubeGeometry(100, 10, 10), new THREE.MeshBasicMaterial({ color: 0xff0000 }));
item.position.x = 50;
pivot = new THREE.Object3D();
pivot.scale.set(0.15, 0.15, 0.15);
pivot.add(item);
pivot.useQuaternion = true;
skinnedMesh.add(pivot);

The item mesh simulates something you may want to attach to an animated object. To make it rotate around a specific point, and not around the center, we will add it to a pivot object and move it 50 units (half of the width) to the right. We have to scale it to 0.15, because it will be added to the skinnedMesh that has a scale of 15. Finally, before it’s added to our animated object we tell it to use quaternions.

Basically, quaternions are a number system, but since Three.js handles everything for us, you don’t have to delve into the topic if you don’t want to (if you do, take a look at the Wikipedia article on quaternions). They are used to rotate objects without the risk of gimbal lock.
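If you are curious what Three.js builds under the hood, here is a minimal, illustration-only sketch (not Three.js code) of the axis-angle construction: a rotation by angle θ about a unit axis becomes the quaternion (cos θ/2, axis · sin θ/2):

```javascript
// Minimal axis-angle quaternion (w, x, y, z) — illustration only;
// Three.js builds these for you via THREE.Quaternion.
function fromAxisAngle(axis, angle) {
	var s = Math.sin(angle / 2);
	return {
		w: Math.cos(angle / 2),
		x: axis.x * s,
		y: axis.y * s,
		z: axis.z * s
	};
}
```

For example, a half-turn (π) around the Y axis gives w ≈ 0 and y ≈ 1. Because the rotation is stored as a single axis and angle rather than three chained Euler rotations, no axis can collapse onto another, which is how quaternions avoid gimbal lock.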

Now, in the render function we have to update the object’s position and rotation:

pivot.position = new THREE.Vector3().getPositionFromMatrix(skinnedMesh.bones[2].skinMatrix);
pivot.quaternion.setFromRotationMatrix(skinnedMesh.bones[2].skinMatrix);

Let me explain what is happening here. First, we set the position to be the same as on the last bone in the model. We are using the skinMatrix property to calculate it. Then we use the same property to calculate the quaternion for the pivot‘s rotation. After that, you can open the browser and you should see the red beam attached to our model:

attach_to_bone

Conclusion

So I hope you’ve learned a few new interesting techniques from this tutorial. Like always, feel free to experiment with the app that we’ve created. In the next (and the last) tutorial in this series, I’ll show you the true power of OpenGL/WebGL – Shaders.

December 03 2013

19:10

Using Node’s Event Module

When I first heard about Node.js, I thought it was just a JavaScript implementation for the server. But it’s actually much more: it comes with a host of built-in functions that you don’t get in the browser. One of those bits of functionality is the Event Module, which has the EventEmitter class. We’ll be looking at that in this tutorial.


EventEmitter: What and Why

So, what exactly does the EventEmitter class do? Put simply, it allows you to listen for “events” and assign actions to run when those events occur. If you’re familiar with front-end JavaScript, you’ll know about mouse and keyboard events that occur on certain user interactions. These are very similar, except that we can emit events on our own, whenever we want, and not necessarily based on user interaction. The principle EventEmitter is based on is known as the publish/subscribe model, because we can subscribe to events and then publish them. There are many front-end libraries built with pub/sub support, but Node has it built in.

The other important question is this: why would you use the event model? In Node, it’s an alternative to deeply nested callbacks. A lot of Node methods run asynchronously, which means that to run code after a method has finished, you need to pass it a callback function. Eventually, your code will look like a giant funnel. To prevent this, many Node classes emit events that you can listen for. This allows you to organize your code the way you’d like to, rather than using callbacks.

One final benefit of events: they are a very loose way of coupling parts of your code together. An event can be emitted, but if no code is listening for it, that’s okay: it will simply pass unnoticed. This means removing listeners (or event emissions) never results in JavaScript errors.


Using EventEmitter

We’ll begin with the EventEmitter class on its own. It’s pretty simple to get at: we just require the events module:

    var events = require("events");

This events object has a single property, which is the EventEmitter class itself. So, let’s make a simple example for starters:

    var EventEmitter = require("events").EventEmitter;

    var ee = new EventEmitter();
    ee.on("someEvent", function () {
        console.log("event has occurred");
    });

    ee.emit("someEvent");

We begin by creating a new EventEmitter object. This object has two main methods that we use for events: on and emit.

We begin with on. This method takes two parameters: we start with the name of the event we’re listening for: in this case, that’s "someEvent". But of course, it could be anything, and you’ll usually choose something better. The second parameter is the function that will be called when the event occurs. That’s all that is required for setting up an event.

Now, to fire the event, you pass the event name to the EventEmitter instance’s emit method. That’s the last line of the code above. If you run that code, you’ll see that we get the text printed out to the console.

That’s the most basic use of an EventEmitter. You can also include data when firing events:

    ee.emit("new-user", userObj);

That’s only one data parameter, but you can include as many as you want. To use them in your event handler function, just take them as parameters:

    ee.on("new-user", function (data) {
        // use data here
    });

Before continuing, let me clarify part of the EventEmitter functionality: we can have more than one listener for each event. Multiple listeners can be assigned (all with on), and all of them will be called when the event is fired. By default, Node allows up to ten listeners on one event at once; if more are created, Node will issue a warning. However, we can change this limit using setMaxListeners. For example, if you run this, you should see a warning printed out above the output:

    ee.on("someEvent", function () { console.log("event 1"); });
    ee.on("someEvent", function () { console.log("event 2"); });
    ee.on("someEvent", function () { console.log("event 3"); });
    ee.on("someEvent", function () { console.log("event 4"); });
    ee.on("someEvent", function () { console.log("event 5"); });
    ee.on("someEvent", function () { console.log("event 6"); });
    ee.on("someEvent", function () { console.log("event 7"); });
    ee.on("someEvent", function () { console.log("event 8"); });
    ee.on("someEvent", function () { console.log("event 9"); });
    ee.on("someEvent", function () { console.log("event 10"); });
    ee.on("someEvent", function () { console.log("event 11"); });

    ee.emit("someEvent");

To raise the maximum number of listeners, add this line above the listeners:

    ee.setMaxListeners(20);

Now when you run it, you won’t get a warning.


Other EventEmitter Methods

There are a few other EventEmitter methods you’ll find useful.

Here’s a neat one: once. It’s just like the on method, except that it only works once. After being called for the first time, the listener is removed.

    ee.once("firstConnection", function () { console.log("You'll never see this again"); });
    ee.emit("firstConnection");
    ee.emit("firstConnection");

If you run this, you’ll only see the message once. The second emission of the event isn’t picked up by any listeners (and that’s okay, by the way), because the once listener was removed after being used once.

Speaking of removing listeners, we can do this ourselves, manually, in a few ways. First, we can remove a single listener with the removeListener method. It takes two parameters: the event name and the listener function. So far, we’ve been using anonymous functions as our listeners. If we want to be able to remove a listener later, it will need to be a function with a name we can reference. We can use this removeListener method to duplicate the effects of the once method:

    function onlyOnce () {
        console.log("You'll never see this again");
        ee.removeListener("firstConnection", onlyOnce);
    }

    ee.on("firstConnection", onlyOnce) 
    ee.emit("firstConnection");
    ee.emit("firstConnection");

If you run this, you’ll see that it has the very same effect as once.

If you want to remove all the listeners bound to a given event, you can use removeAllListeners; just pass it the name of the event:

    ee.removeAllListeners("firstConnection");

To remove all listeners for all events, call the function without any parameters.

    ee.removeAllListeners();

There’s one last method: listeners. This method takes an event name as a parameter and returns an array of all the functions that are listening for that event. Here’s an example of that, based on our onlyOnce example:

    function onlyOnce () {
        console.log(ee.listeners("firstConnection"));
        ee.removeListener("firstConnection", onlyOnce);
        console.log(ee.listeners("firstConnection"));
    }

    ee.on("firstConnection", onlyOnce) 
    ee.emit("firstConnection");
    ee.emit("firstConnection");

We’ll end this section with one bit of meta-ness. Our EventEmitter instance itself actually fires two events of its own, which we can listen for: one when we create new listeners, and one when we remove them. See here:

    ee.on("newListener", function (evtName, fn) {
        console.log("New Listener: " + evtName);
    });

    ee.on("removeListener", function (evtName) {
        console.log("Removed Listener: " + evtName);
    });

    function foo () {}

    ee.on("save-user", foo);
    ee.removeListener("save-user", foo);

Running this, you’ll see our listeners for both new listeners and removed listeners have been run, and we get the messages we expected.

So, now that we’ve seen all the methods that an EventEmitter instance has, let’s see how it works in conjunction with other modules.

EventEmitter Inside Modules

Since the EventEmitter class is just regular JavaScript, it makes perfect sense that it can be used within other modules. Inside your own JavaScript modules, you can create EventEmitter instances and use them to handle internal events. That’s simple, though. More interesting is creating a module that inherits from EventEmitter, so that its functionality becomes part of our module’s public API.

Actually, there are built-in Node modules that do exactly this. For example, you may be familiar with the http module; this is the module that you’ll use to create a web server. This basic example shows how the on method of the EventEmitter class has become part of the http.Server class:

    var http = require("http");
    var server = http.createServer();

    server.on("request", function (req, res) {
        res.end("this is the response");
    });

    server.listen(3000);

If you run this snippet, the process will wait for a request; you can go to http://localhost:3000 and you’ll get the response. When the server instance gets the request from your browser, it emits a "request" event, an event that our listener will receive and can act upon.

So, how can we go about creating a class that will inherit from EventEmitter? It’s actually not that difficult. We’ll create a simple UserList class, which handles user objects. So, in a userlist.js file, we’ll start with this:

    var util         = require("util");
    var EventEmitter = require("events").EventEmitter;

We need the util module to help with the inheriting. Next, we need a database: instead of using an actual database, though, we’ll just use an object:

    var id = 1;
    var database = {
        users: [
            { id: id++, name: "Joe Smith",  occupation: "developer"    },
            { id: id++, name: "Jane Doe",   occupation: "data analyst" },
            { id: id++, name: "John Henry", occupation: "designer"     }
        ]
    };

Now, we can actually create our module. If you aren’t familiar with Node modules, here’s how they work: any JavaScript we write inside this file is only readable from inside the file, by default. If we want to make it part of the module’s public API, we make it a property of module.exports, or assign a whole new object or function to module.exports. Let’s do this:

    function UserList () {
        EventEmitter.call(this);
    }

This is the constructor function, but it isn’t your usual JavaScript constructor function. What we’re doing here is using the call method on the EventEmitter constructor to run that method on the new UserList object (which is this). If we need to do any other initialization to our object, we could do it inside this function, but that’s all we’ll do for now.
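The .call trick isn’t specific to EventEmitter; it’s plain constructor borrowing. Here’s a tiny standalone sketch of the same idea (the Base and Derived names are invented for this illustration):

```javascript
function Base() {
    this.createdAt = Date.now(); // state set up by the "parent" constructor
}

function Derived() {
    // Run Base's constructor against the new Derived instance,
    // exactly as EventEmitter.call(this) does above.
    Base.call(this);
    this.extra = true;
}

var d = new Derived();
console.log(typeof d.createdAt); // 'number' -- Base initialized it
console.log(d.extra);            // true
```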

Inheriting the constructor isn’t enough though; we also need to inherit the prototype. This is where the util module comes in.

    util.inherits(UserList, EventEmitter);

This will add everything that’s on EventEmitter.prototype to UserList.prototype; now, our UserList instances will have all the methods of an EventEmitter instance. But we want to add some more, of course. We’ll add a save method, to allow us to add new users.

    UserList.prototype.save = function (obj) {
        obj.id = id++;
        database.users.push(obj);
        this.emit("saved-user", obj);  
    };

This method takes an object to save to our "database": it adds an id and pushes it into the users array. Then, it emits the "saved-user" event, and passes the object as data. If this were a real database, saving it would probably be an asynchronous task, meaning that to work with the saved record we would need to accept a callback. The alternative is to emit an event, as we’re doing. Now, if we want to do something with the saved record, we can just listen for the event. We’ll do this in a second. Let’s just close up the UserList module:

    UserList.prototype.all = function () {
        return database.users;
    };

    module.exports = UserList;

I’ve added one more method: a simple one that returns all the users. Then, we assign UserList to module.exports.

Now, let’s see this in use; in another file, say test.js. Add the following:

    var UserList = require("./userlist");
    var users = new UserList();

    users.on("saved-user", function (user) {
        console.log("saved: " + user.name + " (" + user.id + ")");
    });

    users.save({ name: "Jane Doe", occupation: "manager" });
    users.save({ name: "John Jacob", occupation: "developer" });

After requiring our new module and creating an instance of it, we listen for the "saved-user" event. Then, we can go ahead and save a few users. When we run this, you’ll see that we get two messages, printing out the names and ids of the records we saved.

    saved: Jane Doe (4)
    saved: John Jacob (5)

Of course, this could work the other way around: we could be using the on method from inside our class and the emit method outside, or both inside or out. But this is a good example of how it could be done.


Conclusion

So that’s how Node’s EventEmitter class works. Below you’ll find links to the Node documentation for some of the things we’ve been talking about.

November 28 2013

17:43

WebGL With Three.js: Textures & Particles

Since its introduction, 3D graphics in the browser has been a popular topic. But if you were to create your apps using plain old WebGL, it would take a very long time. Now, though, we have some pretty useful libraries that we can take advantage of, like Three.js. So in this series I will show you how to create stunning 3D experiences for the browser.

I do expect you to have a basic understanding of 3D space before you start reading this tutorial, as I won’t be explaining things like coordinates, vectors etc.


Preparation

We will start with the code from the previous part of this series. Also, grab the assets I provided and put them in the same folder as your app. Since we will be using images here, you will have to put your app on a static server (it may be local), because unless you start the browser with file access enabled (for example, using the --allow-file-access-from-files flag in Chrome), CORS will not let you load the textures from the file system. That’s all you need to do before proceeding.


Step 1: Loading the Texture

If you’ve ever been so bored that you tried creating something using pure OpenGL, you probably remember how much of a pain it is to load a texture. Luckily, Three.js comes with a nice function that will load and set up the texture for us. Add this line before the definition of our cube’s material:

var cubeTexture = THREE.ImageUtils.loadTexture('./box.png');

It’s really all you have to do in order to have your texture loaded.

In a real-world app you would have to preload the texture like any normal image and show the users some fancy loading bar to let them know the app is loading (Three.js will then use the cached image).


Step 2: Painting the Cube

Now we will apply the texture to our cube. This is also easy, you just need to replace the color definition in the cube’s material to look like this:

var cubeMaterial = new THREE.MeshLambertMaterial({ map: cubeTexture });

The map attribute sets the texture. Now you can open the browser and you should see a rotating, textured cube:

textured_cube

You can also colorize the texture, just add the color definition in the material’s options, like that:

var cubeMaterial = new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0x28c0ec });

And now the cube turns blue:

textured_colorized_cube

This way you can have multiple different objects that share the same texture but differ only in color.


Step 3: Multiple Materials

You can set different materials for every face of the cube. To achieve that, you have to change the whole material’s definition. First, define the materials array. Each element in the array will correspond to the material of one face. They go in this order: right, left, top, bottom, front and back:

var materials = [];
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xff0000 })); // right face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xffff00 })); // left face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xffffff })); // top face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0x00ffff })); // bottom face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0x0000ff })); // front face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xff00ff })); // back face

As you can see, each face has its own material, so you can set different textures, colors and other attributes for each one. Next, change the type of the cube’s material to THREE.MeshFaceMaterial:

var cubeMaterial = new THREE.MeshFaceMaterial(materials);

You only need to pass the materials array as the parameter. In the browser, you should see that each side of the cube has a different color:

each_side_different

Step 4: Particles!

Let’s say you want to create an effect of spinning snowflakes in your app. If you were to render each snowflake as a mesh, you would get very low fps. That’s where particles come into play. They are far less complicated, and drawing them as a whole particle system makes them really efficient.

Start with creating a geometry for our particles:

var particles = new THREE.Geometry;

THREE.Geometry is a base geometry object, without any shape. Now we have to define the position of each particle in the system. Let it be completely random:

for (var p = 0; p < 2000; p++) {
	var particle = new THREE.Vector3(Math.random() * 500 - 250, Math.random() * 500 - 250, Math.random() * 500 - 250);
	particles.vertices.push(particle);
}

This loop will create 2000 randomly placed particles and put them all in the geometry. Next, you have to define the particles’ material:

var particleMaterial = new THREE.ParticleBasicMaterial({ color: 0xeeeeee, size: 2 });

Notice that we are using THREE.ParticleBasicMaterial, which is only for particles. In options we only define the color and the size of each particle. Finally, you can create the particle system and add it to the scene:

var particleSystem = new THREE.ParticleSystem(particles, particleMaterial);

scene.add(particleSystem);

Now, to make the scene look better let’s rotate the particles in the direction opposite to the one that the cube is rotating in (change the render function to look like this):

function render() {
	requestAnimationFrame(render);
	
	var delta = clock.getDelta();
	cube.rotation.y -= delta;
	particleSystem.rotation.y += delta;
	
	renderer.render(scene, camera);
}

We moved the clock.getDelta call into a variable because, if you used it directly, like this:

cube.rotation.y -= clock.getDelta();
particleSystem.rotation.y += clock.getDelta();

the particle system would not rotate, because the second call would return a number close to zero (remember that getDelta measures the time since its last call).
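The behaviour is easy to see with a stripped-down clock of our own (this is just a sketch of the idea, not THREE.Clock’s actual source):

```javascript
function SimpleClock() {
    this.last = Date.now();
}

// Returns the time (in seconds) elapsed since the previous getDelta call.
SimpleClock.prototype.getDelta = function () {
    var now = Date.now();
    var delta = (now - this.last) / 1000;
    this.last = now;
    return delta;
};

var clock = new SimpleClock();
// Two back-to-back calls: the second measures almost no elapsed time,
// which is why reusing getDelta() for the particles would yield ~0.
var first = clock.getDelta();
var second = clock.getDelta();
console.log(second < 0.05); // true -- nearly zero time between the calls
```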

Now open up the browser and you should see a cube and particles rotating:

cube_with_particles

Let’s combine both things you’ve learned in this tutorial and turn those ugly white squares into real snowflakes. First, load the snowflake texture:

var particleTexture = THREE.ImageUtils.loadTexture('./snowflake.png');

Now, change the particles’ material to use the texture. Also, enable transparency and make the particles bigger so we can see the shape:

var particleMaterial = new THREE.ParticleBasicMaterial({ map: particleTexture, transparent: true, size: 5 });

If you open the browser you should see some nice snowflakes flowing around the cube:

particles_snowflakes

Step 5: Smoke

The smoke effect is pretty easy to achieve, and it looks nice. Start by creating the geometry, just like with the snowflakes:

var smokeParticles = new THREE.Geometry;
for (var i = 0; i < 300; i++) {
	var particle = new THREE.Vector3(Math.random() * 32 - 16, Math.random() * 230, Math.random() * 32 - 16);
	smokeParticles.vertices.push(particle);
}

The only difference here is that we are choosing the position from a rectangular prism with dimensions 32x32x230. Now, let’s load the texture and define the material:

var smokeTexture = THREE.ImageUtils.loadTexture('./smoke.png');
var smokeMaterial = new THREE.ParticleBasicMaterial({ map: smokeTexture, transparent: true, blending: THREE.AdditiveBlending, size: 50, color: 0x111111 });

In the material definition, there is a blending option. It tells the renderer how it should render one object on top of another. With THREE.AdditiveBlending, overlapping color values will be added together, which results in brighter smoke in areas with a higher particle density. We also set the color to almost black, so the smoke looks more natural.

Finally, create the particle system, move it a bit to the left and add it to the scene:

var smoke = new THREE.ParticleSystem(smokeParticles, smokeMaterial);
smoke.sortParticles = true;
smoke.position.x = -150;

scene.add(smoke);

You also have to set smoke.sortParticles to true. When it’s false, the background of the sprite may be drawn as black. If you open the browser, you should see a still pillar of smoke next to the cube:

smoke_still

To animate the smoke we have to loop through all of the particles and move them up a bit. Add this code to the render function:

var particleCount = smokeParticles.vertices.length;
while (particleCount--) {
	var particle = smokeParticles.vertices[particleCount];
	particle.y += delta * 50;
	
	if (particle.y >= 230) {
		particle.y = Math.random() * 16;
		particle.x = Math.random() * 32 - 16;
		particle.z = Math.random() * 32 - 16;
	}
}
smokeParticles.__dirtyVertices = true;

In the loop we add delta * 50 to the y position of each particle. Next, we check if the particle is higher than 230; if so, we randomly choose its new position somewhere at the bottom of the smoke pillar. Finally, and most importantly, we set the geometry’s __dirtyVertices flag to true.

To improve performance, Three.js caches objects to avoid rebuilding all of the WebGL calls every frame, so if we change something in the geometry of an object, we have to let the renderer know that it has changed. Basically, the __dirtyVertices flag tells Three.js to refresh its cached vertices.
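The general pattern at work here is a dirty flag guarding a cache. This little sketch (unrelated to Three.js internals; the CachedSum name is invented) shows why a change goes unnoticed until the flag is set:

```javascript
function CachedSum(values) {
    this.values = values;
    this.dirty = true; // force the first computation
    this.cached = 0;
}

CachedSum.prototype.get = function () {
    // Recompute only when told that the data changed.
    if (this.dirty) {
        this.cached = this.values.reduce(function (a, b) { return a + b; }, 0);
        this.dirty = false;
    }
    return this.cached;
};

var sum = new CachedSum([1, 2, 3]);
console.log(sum.get());  // 6
sum.values.push(10);
console.log(sum.get());  // still 6 -- the cache is stale
sum.dirty = true;        // the equivalent of __dirtyVertices = true
console.log(sum.get());  // 16
```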

If you open the browser now you should see a smoothly animated smoke next to the cube.


Conclusion

In this tutorial you’ve learned how to use textures and particles. As before, don’t be afraid to experiment a bit with your app. If you have problems take a look at the documentation. In the next article I will teach you how to load models and animate them.

November 26 2013

15:33

Getting Into Ember.js: Part 5

In part 3 of my Ember series, I showed you how you can interact with data using Ember's Ember.Object main base class to create objects that define the methods and properties that act as a wrapper for your data. Here's an example:

App.Item = Ember.Object.extend();

App.Item.reopenClass({
  all: function() {
    return $.getJSON('http://api.ihackernews.com/page?format=jsonp&callback=?').then(function(response) {
      var items = [];

      response.items.forEach(function(item) {
        items.push(App.Item.create(item));
      });

      return items;
    });
  }
});

In this code, we subclass Ember.Object using extend() and create a user-defined method called all() that makes a request to Hacker News for JSON-formatted results of its news feed.

While this method definitely works and is even promoted by Ember-based Discourse as their way of doing it, it does require that you flesh out and expose the API that you'd like to reference the data with. Most MVC frameworks tend to include ORM-like capabilities so if you're used to Rails, for example, you'd be very familiar with the benefits of ActiveRecord which helps to manage and do the heavy lifting of interacting with data.

The Ember team has wanted to do the same thing, but their main focus has been to get a stable v1 release of the core framework out first, to ensure that complementary components could be built on a stable foundation. I applaud this, and I previously made mention of the fact that you should hold off on using Ember Data for that reason.

Now that Ember RC8 is out and v1 seems to be coming around the corner, I felt it was a good time to start exploring Ember Data and see what it offers.

Ember Data

The first thing I want to stress is that Ember Data is a work in progress and in much the same way as Ember started, will probably see a number of breaking API changes over the next several months. While that's not ideal, it's important to begin looking at how you would structure your apps using the library. To give you a good description of what Ember Data provides, I've copied in the well-written description from the GitHub page:

Ember Data is a library for loading data from a persistence layer (such as a JSON API), mapping this data to a set of models within your client application, updating those models, then saving the changes back to a persistence layer. It provides many of the facilities you'd find in server-side ORMs like ActiveRecord, but is designed specifically for the unique environment of JavaScript in the browser.

So as I mentioned, it's meant to abstract out a lot of the complexities of working with data.

Using Ember Data

If you've read my previous tutorials, you should be very familiar with how to set up a page to leverage Ember. If you haven't done so, you should go to the Ember.js home page and grab the Starter Kit. You can find it right in the middle of the page as it's displayed via a big button. This will give you the most up-to-date version of Ember which you'll need in order to work with Ember Data. The easiest way to get a downloadable version of Ember Data is to go to the API docs for models, scroll to the bottom and download the library. Additionally, you can go to the builds page to pull down the latest builds of any Ember-related library.

Adding Ember Data is as simple as adding another JavaScript file to the mix like this:

<script src="js/libs/jquery-1.9.1.js"></script>
<script src="js/libs/handlebars-1.0.0.js"></script>
<script src="js/libs/ember-1.0.0-rc.8.js"></script>
<script src="js/libs/ember-data.js"></script>
<script src="js/app.js"></script>

This now gives you access to Ember Data's objects, method and properties.

Without any configuration, Ember Data can load and save records and relationships served via a RESTful JSON API, provided it follows certain conventions.

Defining a Store

Ember Data uses a special object called a store to load models and retrieve data; it is based on the DS.Store class. This is how you’d define a new store:

App.Store = DS.Store.extend({
...
});

If you remember from my previous articles, "App" is just a namespace created for the application level objects, methods and properties for the application. While it's not a reserved word in Ember, I would urge you to use the same name as almost every tutorial and demo I've seen uses it for consistency.

The store you create will hold the models you create and will serve as the interface with the server you define in your adapter. By default, Ember Data creates and associates to your store a REST adapter based on the DS.RESTAdapter class. If you simply defined the code above, you would have an adapter associated with it by default. Ember magic at its finest. You can also use a Fixture adapter if you’re working with in-memory data (for example, JSON you’re loading from code), but since this is about making API calls, the REST adapter is more appropriate.

You can also define your own adapter for those situations where you need more custom control over interfacing with a server by using the adapter property within your store declaration:

App.Store = DS.Store.extend({
  adapter: 'App.MyCustomAdapter'
});

Defining Models

The code I listed at the top of this tutorial was an example of how to use Ember.Object to create the models for your application. Things change a bit when you define models via Ember Data. Ember Data provides another object called DS.Model which you subclass for every model you want to create. For example, taking the code from above:

App.Item = Ember.Object.extend();

It would now look like this:

App.Item = DS.Model.extend();

Not much of a difference in terms of appearance but a big difference in terms of functionality since you now have access to the capabilities of the REST adapter as well as Ember Data's built-in relationships like one-to-one, one-to-many and more. The main benefit, though, is that Ember Data provides the hooks for interacting with your data via your models as opposed to you having to roll your own. Referencing the code from above again:

App.Item.reopenClass({
  all: function() {
    return $.getJSON('http://api.ihackernews.com/page?format=jsonp&callback=?').then(function(response) {
      var items = [];

      response.items.forEach(function(item) {
        items.push(App.Item.create(item));
      });

      return items;
    });
  }
});

While I had to create my own method to return all of the results from my JSON call, Ember Data provides a find() method which does exactly this and also serves to filter down the results. So in essence, all I have to do is make the following call to return all of my records:

App.Item.find();

The find() method will send an Ajax request to the URL.

This is exactly what attracts so many developers to Ember; the forethought given to making things easier.

One thing to keep in mind is that it's important to define within the model the attributes you plan on using later on (e.g. in your templates). This is easy to do:

App.Post = DS.Model.extend({
     title: DS.attr('string')
});

In my demo app, I want to use the title property returned via JSON so using the attr() method, specify which attributes a model has at my disposal.

One thing I want to mention is that Ember Data is incredibly picky about the structure of the JSON returned. Because Ember leverages naming conventions for identifying specific parts of your application (remember the conventions we discussed in my first Ember article?), it makes certain assumptions about the way that the JSON data is structured. It requires that there be a named root which will be used to identify the data to be returned. Here’s what I mean:

{
  'posts': [{
    'id': 1,
    'title': 'A friend of mine just posted this.',
    'url': 'http://i.imgur.com/9pw20NY.jpg'
  }]
}

If you had defined it like this:

[{
  'id': '1',
  'title': 'A friend of mine just posted this.',
  'url': 'http://i.imgur.com/9pw20NY.jpg'
},
{
  'id': '2',
  'title': 'A friend of mine just posted this.',
  'url': 'http://i.imgur.com/9pw20NY.jpg'
}]

Ember Data would've totally balked and thrown the following error:

Your server returned a hash with the key id but you have no mapping for it.

The reason is that since the model is called "App.Post", Ember Data expects to find a URL called "posts" from which it will pull the data. So if I defined my store as such:

App.Store = DS.Store.extend({
  url: 'http://emberdata.local' 
});

and my model like this:

App.Post = DS.Model.extend({
     title: DS.attr('string')
});

Ember Data would assume that the Ajax request made by the find() method would look like this:

http://emberdata.local/posts

And if you were making a request for a specific ID (like find(12)), it would look like this:

http://emberdata.local/posts/12

This issue drove me batty, but a quick search turned up plenty of discussions about it. If you can’t structure your JSON results in this way, then you’ll have to create a custom adapter that massages the results to serialize them properly before you can use them. I’m not covering that here, but I plan to explore it more soon.
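The URL convention described above can be sketched in plain JavaScript like this (a naive illustration with a toy "add an s" pluralizer; this is not Ember Data’s actual resolver code):

```javascript
function buildUrl(baseUrl, modelName, id) {
    // "Post" -> "posts": lowercase the model name and pluralize it (naively).
    var path = modelName.toLowerCase() + "s";
    var url = baseUrl + "/" + path;
    if (id !== undefined) {
        url += "/" + id;
    }
    return url;
}

console.log(buildUrl("http://emberdata.local", "Post"));     // http://emberdata.local/posts
console.log(buildUrl("http://emberdata.local", "Post", 12)); // http://emberdata.local/posts/12
```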

The Demo App

I purposely wanted to keep this tutorial simple because I know Ember Data is changing and I wanted to give a brief overview of what it provided. So I whipped up a quick demo app that uses Ember Data to pull JSON data from my own local server. Let's look at the code.

First I create my application namespace (which you would do for any Ember app):

// Create our Application
App = Ember.Application.create({});

Next, I define my data store and declare the url from which the model will pull the data:

App.Store = DS.Store.extend({
  url: 'http://emberdata.local'
});

In the model, I specify the attribute: title, which I'll use in my template later on:

// Our model
App.Post = DS.Model.extend({
     title: DS.attr('string')
});

Lastly, I associate the model to the route via the model hook. Notice that I'm using the predefined Ember Data method find() to immediately pull back my JSON data as soon as the app is started:

// Our default route. 
App.IndexRoute = Ember.Route.extend({
  model: function() {
    return App.Post.find();
  }
});

In the template for the root page (index), I use the #each Handlebars directive to loop through the results of my JSON data and render the title of each of my posts:

<script type="text/x-handlebars" data-template-name="index">
    <h2>My Posts</h2>
    <ul>
    {{#each post in model}}
        <li>{{post.title}}</li>
    {{/each}}
    </ul>
  </script>

That's it! No Ajax call to make or special methods to work with my data. Ember Data took care of making the XHR call and storing the data.

Fin

Now, this is incredibly simplistic and I don't want to lead you to believe it's all unicorns and puppy dogs. As I went through the process of working with Ember Data, I found myself wanting to go back to using Ember.Object where I had more control. But I also realize that a lot of work is going on to improve Ember Data, especially in the way it manages diverse data results. So it's important to at least kickstart the process of understanding how this thing works and even offering constructive feedback to the team.

So I urge you to jump in and begin tinkering with it, especially those that have a very strong ORM background and could help shape the direction of Ember Data. Now is the best time to do that.

November 19 2013

18:47

WebGL With Three.js: Basics

3D graphics in the browser have been a hot topic ever since they were first introduced. But if you were to create your apps using plain WebGL, it would take ages. This is exactly why some really useful libraries have recently come about. Three.js is one of the most popular, and in this series I will show you how best to use it in order to create stunning 3D experiences for your users.

Before we begin, I do expect you to have a basic understanding of 3D space, as I won’t be explaining things like coordinates, vectors, etc.


Step 1: Preparation

First, create three files: index.html, main.js and style.css. Next, download Three.js (either the whole zip file with examples and source, or the JavaScript file alone; your choice). Then open index.html and insert this code:

<!DOCTYPE html>
<html>
<head>
	<link rel="stylesheet" href="./style.css">
	<script src="./three.js"></script>
</head>
<body>
	<script src="./main.js"></script>
</body>
</html>

That’s all you need in this file. Just a declaration of scripts and stylesheet. All the magic will happen in main.js, but before we get to that we need one more trick to make the app look good. Open style.css and insert this code:

canvas {
	position: fixed;
	top: 0;
	left: 0;
}

This will position the canvas in the top-left corner, because by default the body has 8px of margin. Now we can proceed with the JavaScript code.


Step 2: The Scene and the Renderer

Three.js uses the concept of a display list. This means that all objects are stored in the list and then drawn to the screen. Here, this is a THREE.Scene object. You need to add any object you want drawn on the screen to the scene. You can have as many scenes as you want, but one renderer can draw only one scene at a time (of course, you can switch which scene is displayed).

The renderer simply draws everything from the scene to the WebGL canvas. Three.js also supports drawing on SVG or 2D Canvas, but we will focus on WebGL.

To get started, let’s store the window’s width and height in variables; we will use them later:

var width = window.innerWidth;
var height = window.innerHeight;

Now define the renderer and the scene:

var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);

var scene = new THREE.Scene;

The first line defines the WebGL renderer. You can pass the renderer’s options in the first argument as a map. Here, we set the antialias to true, because we want the edges of objects to be smooth, not jagged.

The second line sets the renderer size to the size of the window, and in the third one we add the renderer’s canvas element to the document (you can also do this using a library, like jQuery: $('body').append(renderer.domElement)).

The last one defines the scene, no arguments needed.


Step 3: The Cube

Now let’s add something to be drawn. Let it be a cube, since it’s the simplest 3D object. In Three.js, the objects that are drawn on the screen are called meshes. Each mesh has to have its own geometry and material. The geometry is a set of points that need to be connected in order to create the object’s surface. The material is simply the paint (or painting, but that is not the topic of this tutorial) that will cover the object. So, let’s create our cube. Luckily for us, there are some helper functions in Three.js for creating primitives (simple shapes):

var cubeGeometry = new THREE.CubeGeometry(100, 100, 100);
var cubeMaterial = new THREE.MeshLambertMaterial({ color: 0x1ec876 });
var cube = new THREE.Mesh(cubeGeometry, cubeMaterial);

cube.rotation.y = Math.PI * 45 / 180;

scene.add(cube);

As you can see, first we create the geometry. The arguments define the size of the cube: its width, height and depth.

Next, we define the cube’s material. There are a few material types in Three.js, but this time we will use the THREE.MeshLambertMaterial, since we want to have some lighting later (this material uses Lambert’s algorithm for light calculations). You can pass the options in the first argument as a map, the same as with the renderer – this is pretty much a rule for more complex objects in Three.js. Here, we only use color, which is passed as a hexadecimal number.

On the third line, we create a mesh using the geometry and material created earlier. Next, we rotate the cube by 45 degrees on the Y axis to make it look better. We have to convert degrees to radians, which is handled by the equation you probably remember from your high school math class: Math.PI * 45 / 180. Finally, the cube is added to the scene.
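If you’d rather not repeat that equation everywhere, you can wrap it in a tiny helper (a trivial sketch; some versions of Three.js also ship a similar THREE.Math.degToRad utility):

```javascript
// Degrees-to-radians conversion: the same equation as above.
function degToRad(degrees) {
    return Math.PI * degrees / 180;
}

console.log(Math.abs(degToRad(45) - Math.PI / 4) < 1e-12); // true
```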

Now you could open index.html in your browser to see the results, but you will see nothing because the scene is not rendered yet.


Step 4: Camera!

To render something, we first need to add a camera to the scene, so the renderer knows from which point of view it should render. There are a few types of cameras in Three.js, but you’ll probably only use THREE.PerspectiveCamera. This type of camera presents the scene the way we see the world. Let’s create one:

var camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 10000);

Creating the camera is a bit more complicated than the rest of the things we’ve done so far. The first argument defines the FOV (field of view): the angle that can be seen from where the camera is. A FOV of 45 degrees looks natural. Next, we define the camera’s aspect ratio. This is always the width of the renderer divided by its height, unless you want to achieve special effects. The last two numbers define how close and how far an object can be from the camera and still be drawn.

Now we have to move the camera back and up a little, as all of the objects created in Three.js have their position set in the middle of the scene (x: 0, y: 0, z: 0) by default:

camera.position.y = 160;
camera.position.z = 400;

The z coordinate is positive in the direction of the viewer, so objects with a higher z position will appear closer to you (in this case, since we moved the camera, all of the objects will appear further away from you).

Now, lets add the camera to the scene and render it:

scene.add(camera);

renderer.render(scene, camera);

You add the camera just like you added the cube. The next line renders the scene using this camera. Now you can open the browser and you should see the following:

first_rendering

You should only be able to see the top of the cube. This is because we moved the camera up and it’s still looking directly in front of it. This can be fixed by telling the camera which position to look at. Add this line after the lines setting the position of the camera:

camera.lookAt(cube.position);

The only argument passed in is the position that the camera will look at. Now the scene looks better, but the cube is still black, no matter what color you set when creating it:

fixed_camera_lookat

Step 5: Lights!

The cube is black because there are no lights in the scene, so it’s like a completely black room. You see a white background because there is nothing to draw apart from the cube. To avoid that, we will use a technique called a skybox. Basically, we will add a big cube that displays the background of the scene (usually some far terrain, if it’s open space). So, let’s create the box. This code should go before the renderer.render call:

var skyboxGeometry = new THREE.CubeGeometry(10000, 10000, 10000);
var skyboxMaterial = new THREE.MeshBasicMaterial({ color: 0x000000, side: THREE.BackSide });
var skybox = new THREE.Mesh(skyboxGeometry, skyboxMaterial);

scene.add(skybox);

This code is similar to the one that creates the cube. But this time the geometry is much bigger. We’ve also used THREE.MeshBasicMaterial since we don’t need to light the skybox. Also, notice the additional argument passed to the material: side: THREE.BackSide. Since the cube will be displayed from the inside, we have to change the side that gets drawn (normally, Three.js draws only outside walls).

Now the rendered scene is completely black. To fix that we have to add light to the scene. We will use THREE.PointLight, which emits the light like a bulb. Add these lines after the skybox:

var pointLight = new THREE.PointLight(0xffffff);
pointLight.position.set(0, 300, 200);

scene.add(pointLight);

As you can see, we’ve created the point light with a white color, then we set its position up and back a little, to light the front and the top of the cube. Finally, the light is added to the scene like any other object. Open up the browser and you should see a colored, shaded cube:

colored_shaded_cube

But the cube is still pretty boring. Let’s add some movement to it.


Step 6: Action!

Now we will add some movement to the scene. Let’s make the cube rotate around the Y axis. But first, we have to change the way we render the scene. One renderer.render call renders the current state of the scene once. So even if we animate the cube somehow, we will not see it move. To change that, we have to add a render loop to our app. This can be achieved using the requestAnimationFrame function, which was created specifically for that purpose. It’s supported in most of the major browsers, and for those that don’t support it, Three.js comes with its own polyfill. So, let’s change this:

renderer.render(scene, camera);

to this:

function render() {
	renderer.render(scene, camera);
	
	requestAnimationFrame(render);
}

render();

Actually, there is no loop in there, because it would freeze the browser. The requestAnimationFrame function behaves a bit like setTimeout, but it calls the function passed to it as soon as the browser is ready to draw the next frame. So nothing has really changed in the displayed scene and the cube is still not moving. Let’s fix that. Three.js comes with THREE.Clock, which can be used to achieve smooth animation of objects. First, initialize it before the render function definition:

var clock = new THREE.Clock();

Now, each time you call clock.getDelta it will return the time since the last call, in seconds. This can be used to rotate the cube like this:

cube.rotation.y -= clock.getDelta();

Add this line between the renderer.render and the requestAnimationFrame calls in the render function. It simply subtracts the elapsed time from the cube’s rotation on the Y axis (remember that it’s in radians) to rotate the cube clockwise. Now open the browser and you should see your cube rotating clockwise smoothly.
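The delta-time idea itself is simple. Here is a minimal plain-JavaScript sketch of what getDelta does (an illustration only, not Three.js’s actual implementation):

```javascript
// A toy stand-in for THREE.Clock: each getDelta() call returns the
// time elapsed since the previous call, in seconds.
function Clock() {
  this.last = Date.now();
}

Clock.prototype.getDelta = function () {
  var now = Date.now();
  var delta = (now - this.last) / 1000; // milliseconds -> seconds
  this.last = now;
  return delta;
};

var clock = new Clock();
var delta = clock.getDelta(); // a tiny fraction of a second
```

Because each frame advances the rotation by the time actually elapsed, the animation speed stays the same regardless of the frame rate.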


Conclusion

In this part of the series you learned how to prepare the scene, add objects and lights, and how to animate things. You can experiment with the app and add more or different objects and lights. It’s up to you. Next time I will show you how to use textures and how to create some nice effects with particles. Don’t forget to take a look at the documentation if you are having any problems.

November 15 2013

19:22

Intro to the React Framework

In today’s world of JavaScript application frameworks, design philosophy is the key differentiating factor. If you compare the popular JS frameworks, such as EmberJS, AngularJS, Backbone, Knockout, etc., you are sure to find differences in their abstractions, thinking models, and of course the terminology. This is a direct consequence of the underlying design philosophy. But, in principle, they all do one thing, which is to abstract out the DOM in such a way that you don’t deal directly with HTML elements.

I personally think that a framework becomes interesting when it provides a set of abstractions that enable a different mode of thinking. In this aspect, React, the new JS framework from the folks at Facebook, will force you to rethink (to some extent) how you decompose the UI and interactions of your application. Having reached version 0.4.1 (as of this writing), React provides a surprisingly simple, yet effective model for building JS apps that mixes a delightful cocktail of a different kind.

In this article, we’ll explore the building blocks of React and embrace a style of thinking that may seem counter-intuitive on the first go. But, as the React docs say: “Give it Five Minutes” and then you will see how this approach will become more natural.


Motivations

The story of React started within the confines of Facebook, where it brewed for a while. Having reached a stable-enough state, the developers decided to open-source it a few months back. Interestingly, the Instagram website is also powered by the React framework.

React approaches the DOM-abstraction problem with a slightly different take. To understand how this is different, let’s quickly review the techniques adopted by the frameworks I mentioned earlier.

A High Level Overview of JS Application Frameworks

The MVC (Model-View-Controller) design pattern is fundamental to UI development, not just in web apps, but in front-end applications on any platform. In the case of web apps, the DOM is the physical representation of a View. The DOM itself is generated from a textual HTML template that is pulled from a different file, script-block or a precompiled template function. The View is an entity that brings the textual template to life as a DOM fragment. It also sets up event-handlers and takes care of manipulating the DOM tree as part of its lifecycle.

For the View to be useful, it needs to show some data, and possibly allow user interaction. The data is the Model, which comes from some data-source (a database, web-service, local-storage, etc.). Frameworks provide a way of “binding” the data to the view, such that changes in data are automatically reflected with changes on the view. This automatic process is called data-binding and there are APIs/techniques to make this as seamless as possible.
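As a toy illustration of this data-binding idea (framework-agnostic, with invented names — not any real framework’s API):

```javascript
// One-way data-binding in miniature: when the model changes,
// every bound view callback is re-run automatically.
function Model(data) {
  this.data = data;
  this.listeners = [];
}

Model.prototype.bind = function (fn) {
  this.listeners.push(fn);
  fn(this.data); // render once with the initial data
};

Model.prototype.set = function (key, value) {
  this.data[key] = value;
  var data = this.data;
  this.listeners.forEach(function (fn) { fn(data); }); // notify the views
};

var view = '';
var model = new Model({ name: 'World' });
model.bind(function (data) { view = 'Hello, ' + data.name; });
model.set('name', 'MVC'); // view is now 'Hello, MVC'
```

Real frameworks add change detection, templating and two-way flows on top, but the core contract — data changes trigger view updates — is the same.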

The MVC triad is completed by the Controller, which engages the View and the Model and orchestrates the flow of data (Model) into the View and user-events out from the View, possibly leading to changes in the Model.

mvc-flow

Frameworks that automatically handle the flow of data back and forth between the View and Model maintain an internal event-loop. This event-loop is needed to listen to certain user events, data-change events, external triggers, etc., and then determine if there is any change from the previous run of the loop. If there are changes at either end (View or Model), the framework ensures that both are brought back in sync.

What Makes React Different?

With React, the View-part of the MVC triad takes prominence and is rolled into an entity called the Component. The Component maintains an immutable property bag called props, and a state that represents the user-driven state of the UI. The view-generation part of the Component is rather interesting and possibly the reason that makes React stand out compared to other frameworks. Instead of constructing a physical DOM directly from a template file/script/function, the Component generates an intermediate DOM that is a stand-in for the real HTML DOM. An additional step is then taken to translate this intermediate DOM into the real HTML DOM.

As part of the intermediate DOM generation, the Component also attaches event-handlers and binds the data contained in props and state.

If the idea of an intermediate DOM sounds a little alien, don’t be too alarmed. You have already seen this strategy adopted by language runtimes (aka virtual machines) for interpreted languages. Our very own JavaScript runtime first generates an intermediate representation before spitting out the native code. This is also true for other VM-based languages such as Java, C#, Ruby, Python, etc.

React cleverly adopts this strategy to create an intermediate DOM before generating the final HTML DOM. The intermediate-DOM is just a JavaScript object graph and is not rendered directly. There is a translation step that creates the real DOM. This is the underlying technique that makes React do fast DOM manipulations.
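To make the translation step concrete, here is a toy sketch of the diffing idea — a deliberately simplified illustration, not React’s actual algorithm:

```javascript
// Compare two plain-object "DOM" nodes and collect only the changed
// properties, so only those need to be written to the real DOM.
function diffProps(prev, next) {
  var patches = {};
  Object.keys(next).forEach(function (key) {
    if (prev[key] !== next[key]) {
      patches[key] = next[key];
    }
  });
  return patches;
}

var prevNode = { tag: 'span', className: 'count', text: '0' };
var nextNode = { tag: 'span', className: 'count', text: '1' };
var patches = diffProps(prevNode, nextNode); // only the text changed
```

Since touching the real DOM is expensive, computing a minimal patch set over cheap JavaScript objects and applying only that is what makes the approach fast.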


React In Depth

To get a better picture of how React makes it all work, let’s dive a little deeper; starting with the Component. The Component is the primary building block in React. You can compose the UI of your application by assembling a tree of Components. Each Component provides an implementation for the render() method, where it creates the intermediate-DOM. Calling React.renderComponent() on the root Component results in recursively going down the Component-tree and building up the intermediate-DOM. The intermediate-DOM is then converted into the real HTML DOM.

component-dom-tree

Since the intermediate-DOM creation is an integral part of the Component, React provides a convenient XML-based extension to JavaScript, called JSX, to build the component tree as a set of XML nodes. This makes it easier to visualize and reason about the DOM. JSX also simplifies the association of event-handlers and properties as XML attributes. Since JSX is an extension language, there is a tool (command-line and in-browser) to generate the final JavaScript. A JSX XML node maps directly to a Component. It is worth pointing out that React works independently of JSX; the JSX language only makes it easy to create the intermediate DOM.

Tooling

The core React framework can be downloaded from their website. Additionally, for the JSX → JS transformation, you can either use the in-browser JSXTransformer or use the command line tool, called react-tools (installed via NPM). You will need an installation of Node.js to download it. The command-line tool allows you to precompile the JSX files and avoid the translation within the browser. This is definitely recommended if your JSX files are large or many in number.

A Simple Component

Alright, we have seen a lot of theory so far, and I am sure you are itching to see some real code. Let’s dive into our first example:

/** @jsx React.DOM */

var Simple = React.createClass({

  getInitialState: function(){
    return { count: 0 };
  },

  handleMouseDown: function(){
    alert('I was told: ' + this.props.message);
    this.setState({ count: this.state.count + 1});
  },

  render: function(){

    return <div>
      <div class="clicker" onMouseDown={this.handleMouseDown}>
        Give me the message!
      </div>
      <div class="message">Message conveyed
        <span class="count">{this.state.count}</span> time(s)</div>
    </div>
    ;
  }
});

React.renderComponent(<Simple message="Keep it Simple"/>,
                  document.body);

Although simple, the code above does cover a good amount of the React surface area:

  • We create the Simple component by using React.createClass and passing in an object that implements some core functions. The most important one is the render(), which generates the intermediate-DOM.
  • Here we are using JSX to define the DOM and also attach the mousedown event-handler. The {} syntax is useful for incorporating JavaScript expressions for attributes (onMouseDown={this.handleMouseDown}) and child-nodes (<span class="count">{this.state.count}</span>). Event handlers associated using the {} syntax are automatically bound to the instance of the component. Thus this inside the event-handler function refers to the component instance. The comment on the first line /** @jsx React.DOM */ is a cue for the JSX transformer to do the translation to JS. Without this comment line, no translation will take place.

We can run the command-line tool (jsx) in watch mode and auto-compile changes from JSX → JS. The source files are in /src folder and the output is generated in /build.

jsx --watch src/ build/

Here is the generated JS file:

/** @jsx React.DOM */

var Simple = React.createClass({displayName: 'Simple',

  getInitialState: function(){
    return { count: 0 };
  },

  handleMouseDown: function(){
    alert('I was told: ' + this.props.message);
    this.setState({ count: this.state.count + 1});
  },

  render: function(){

    return React.DOM.div(null, 
      React.DOM.div( {className:"clicker", onMouseDown:this.handleMouseDown}, 
" Give me the message! "      ),
      React.DOM.div( {className:"message"}, "Message conveyed ",        React.DOM.span( {className:"count"}, this.state.count), " time(s)")
    )
    ;
  }
});

React.renderComponent(Simple( {message:"Keep it Simple"}),
                  document.body);

Notice how the <div/> and <span/> tags map to instances of React.DOM.div and React.DOM.span.

  • Now let’s get back to our code example. Inside handleMouseDown, we make use of this.props to read the message property that was passed in. We set the message on the last line of the snippet, in the call to React.renderComponent() where we create the <Simple/> component. The purpose of this.props is to store the data that was passed in to the component. It is considered immutable and only a higher-level component is allowed to make changes and pass it down the component tree.
  • Inside handleMouseDown we also set some user state with this.setState() to track the number of times the message was displayed. You will notice that we use this.state in the render() method. Anytime you call setState(), React also triggers the render() method to keep the DOM in sync. Besides React.renderComponent(), setState() is another way to force a visual refresh.

Synthetic Events

The events exposed on the intermediate-DOM, such as the onMouseDown, also act as a layer of indirection before they get set on the real DOM. These events are thus referred to as Synthetic Events. React adopts event-delegation, which is a well-known technique, and attaches events only at the root-level of the real DOM. Thus there is only one true event-handler on the real DOM. Additionally, these synthetic events also provide a level of consistency by hiding browser and element differences.
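Event delegation itself can be sketched in a few lines of plain JavaScript (a toy dispatcher with invented names, not React’s implementation):

```javascript
// A single root handler dispatches events to per-element handlers
// by looking at the event's target id.
var handlers = {};

function on(id, fn) {
  handlers[id] = fn;
}

function rootHandler(event) {
  var fn = handlers[event.targetId];
  if (fn) {
    fn(event); // delegate to the registered handler, if any
  }
}

var log = [];
on('clicker', function (e) { log.push('clicked ' + e.targetId); });

rootHandler({ targetId: 'clicker' }); // dispatched to the 'clicker' handler
rootHandler({ targetId: 'other' });   // no handler registered: ignored
```

One real listener at the root, however many logical handlers exist — which is exactly why delegation scales well.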

The combination of the intermediate-DOM and synthetic events gives you a standard and consistent way of defining UIs across different browsers and even devices.

Component Lifecycle

Components in the React framework have a specific lifecycle and embody a state-machine that has three distinct states.

component-lifecycle

The Component comes to life after being Mounted. Mounting results in going through a render-pass that generates the component-tree (intermediate-DOM). This tree is converted and placed into a container-node of the real DOM. This is a direct outcome of the call to React.renderComponent().

Once mounted, the component stays in the Update state. A component gets updated when you change state using setState() or change props using setProps(). This in turn results in calling render(), which brings the DOM in sync with the data (props + state). Between subsequent updates, React will calculate the delta between the previous component-tree and the newly generated tree. This is a highly optimized step (and a flagship feature) that minimizes the manipulation on the real DOM.

The final state is Unmounted. This happens when you explicitly call React.unmountAndReleaseReactRootNode() or automatically if a component was a child that was no longer generated in a render() call. Most often you don’t have to deal with this and just let React do the proper thing.

Now, it would be a big omission if React didn’t tell you when it moved between the Mounted-Update-Unmounted states. Thankfully that is not the case: there are hooks you can override to get notified of lifecycle changes. The names speak for themselves:

  • getInitialState(): prepare initial state of the Component
  • componentWillMount()
  • componentDidMount()
  • componentWillReceiveProps()
  • shouldComponentUpdate(): useful if you want to control when a render should be skipped.
  • componentWillUpdate()
  • render()
  • componentDidUpdate()
  • componentWillUnmount()

The componentWill* methods are called before the state change and the componentDid* methods are called after.

Some of the method names seem to have taken a cue from the Cocoa frameworks on Mac and iOS.

Miscellaneous Features

Within a component-tree, data should always flow down. A parent-component should set the props of a child-component to pass any data from the parent to the child. This is termed the Owner-Owned pair. On the other hand, user-events (mouse, keyboard, touches) will always bubble up from the child all the way to the root component, unless handled in between.

data-event-flow

When you create the intermediate-DOM in render(), you can also assign a ref property to a child component. You can then refer to it from the parent using the refs property. This is depicted in the snippet below.

  render: function(){
    // Set a ref 
    return <div>
        <span ref="counter" class="count">{this.state.count}</span>
        </div>;
  }

  handleMouseDown: function(){
    // Use the ref
    console.log(this.refs.counter.innerHTML);
  },

As part of the component metadata, you can set the initial-state (getInitialState()), which we saw earlier within the lifecycle methods. You can also set the default values of the props with getDefaultProps() and also establish some validation rules on these props using propTypes. The docs give a nice overview of the different kinds of validations (type checks, required, etc.) you can perform.

React also supports the concept of a Mixin to extract reusable pieces of behavior that can be injected into disparate Components. You can pass the mixins using the mixins property of a Component.

Now, let’s get real and build a more comprehensive Component that uses these features.


A Shape Editor Built Using React

In this example, we will build an editor that accepts a simple DSL (Domain Specific Language) for creating shapes. As you type, you will see the corresponding output on the side, giving you live feedback.

The DSL allows you to create three kinds of shapes: Ellipse, Rectangle and Text. Each shape is specified on a separate line along with a bunch of styling properties. The syntax is straightforward and borrows a bit from CSS. To parse a line, we use a Regex that looks like:

  var shapeRegex = /(rect|ellipse|text)(\s[a-z]+:\s[a-z0-9]+;)*/i;

As an example, the following set of lines describe two rectangles and a text label…

// React label
text value:React; color: #00D8FF; font-size: 48px; text-shadow: 1px 1px 3px #555; padding: 10px; left: 100px; top: 100px;

// left logo
rect background:url(react.png) no-repeat; border: none; width: 38; height: 38; left: 60px; top: 120px;

// right logo
rect background:url(react.png) no-repeat; border: none; width: 38; height: 38; left: 250px; top: 120px;

…generating the output shown below:

react-shapes
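To see the regex in action, here is a small sketch of matching one line of the DSL (the real parser’s property extraction is more involved):

```javascript
var shapeRegex = /(rect|ellipse|text)(\s[a-z]+:\s[a-z0-9]+;)*/i;

var line = 'rect width: 38; height: 38;';
var match = shapeRegex.exec(line);

var type = match[1]; // 'rect' -- the captured shape type
// the "key: value;" pairs can then be split out of the full match, match[0]
```

Lines that don’t start with one of the three shape keywords produce no match and are ignored by the parser.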

Setting Up

Alright, let’s go ahead and build this editor. We will start out with the HTML file (index.html), where we put in the top-level markup and include the libraries and application scripts. I am only showing the relevant parts here:

<body>
<select class="shapes-picker">
  <option value="--">-- Select a sample --</option>
  <option value="react">React</option>
  <option value="robot">Robot</option>
</select>
<div class="container"></div>

<!-- Libraries -->
<script src="../../lib/jquery-2.0.3.min.js"></script>
<script src="../../lib/react.js"></script>
<!-- Application Scripts -->
<script src="../../build/shape-editor/ShapePropertyMixin.js"></script>
<script src="../../build/shape-editor/shapes/Ellipse.js"></script>
<script src="../../build/shape-editor/shapes/Rectangle.js"></script>
<script src="../../build/shape-editor/shapes/Text.js"></script>

<script src="../../build/shape-editor/ShapeParser.js"></script>
<script src="../../build/shape-editor/ShapeCanvas.js"></script>
<script src="../../build/shape-editor/ShapeEditor.js"></script>

<script src="../../build/shape-editor/shapes.js"></script>
<script src="../../build/shape-editor/app.js"></script>
</body>

In the above snippet, the container div holds our React generated DOM. Our application scripts are included from the /build directory. We are using JSX within our components and the command line watcher (jsx), puts the converted JS files into /build. Note that this watcher command is part of the react-tools NPM module.

jsx --watch src/ build/

The editor is broken down into a set of components, which are listed below:

  • ShapeEditor: the root Component in the component tree
  • ShapeCanvas: responsible for generating the shape-Components (Ellipse, Rectangle, Text). It is contained within the ShapeEditor.
  • ShapeParser: responsible for parsing text and extracting the list of shape definitions. It parses line by line with the Regex we saw earlier. Invalid lines are ignored. This is not really a component, but a helper JS object, used by the ShapeEditor.
  • Ellipse, Rectangle, Text: the shape Components. These become children of the ShapeCanvas.
  • ShapePropertyMixin: provides helper functions for extracting styles found in the shape definitions. This is mixed-into the three shape-Components using the mixins property.
  • app: the entry-point for the editor. It generates the root component (ShapeEditor) and allows you to pick a shape sample from the drop-down.

The relationship of these entities is shown in the annotated component-tree:

component-tree

The ShapeEditor Component

Let’s look at the implementation of some of these components, starting with the ShapeEditor.

/** @jsx React.DOM */
var ShapeEditor = React.createClass({

  componentWillMount: function () {
    this._parser = new ShapeParser();
  },

  getInitialState: function () {
    return { text: '' };
  },

  render: function () {
    var shapes = this._parser.parse(this.state.text);

    var tree = (
      <div>
        <textarea class="editor" onChange={this.handleTextChange} />
        <ShapeCanvas shapes={shapes} />
      </div>);

    return tree;
  },

  handleTextChange: function (event) {
    this.setState({ text: event.target.value })
  }

});

As the name suggests, the ShapeEditor provides the editing experience by generating the <textarea/> and the live feedback on the <ShapeCanvas/>. It listens to the onChange event (events in React are always named with camel case) on the <textarea/> and, on every change, sets the text property of the component’s state. As mentioned earlier, whenever you set the state using setState(), render() is called automatically. In this case, the render() of the ShapeEditor gets called, where we parse the text from the state and rebuild the shapes. Note that we are starting with an initial state of empty text, which is set in the getInitialState() hook.

For parsing the text into a set of shapes, we use an instance of the ShapeParser. I’ve left out the details of the parser to keep the discussion focused on React. The parser instance is created in the componentWillMount() hook. This is called just before the component mounts and is a good place to do any initializations before the first render happens.

It is generally recommended that you funnel all your complex processing through the render() method. Event handlers just set the state while render() is the hub for all your core logic.

The ShapeEditor uses this idea to do the parsing inside of its render() and forwards the detected shapes by setting the shapes property of the ShapeCanvas. This is how data flows down into the component tree, from the owner (ShapeEditor) to the owned (ShapeCanvas).

One last thing to note in here is that we have the first line comment to indicate JSX → JS translation.

ShapeCanvas to Generate the Shapes

Next, we will move on to the ShapeCanvas and the Ellipse, Rectangle and Text components.

The ShapeCanvas is rather straightforward, with its core responsibility of generating the respective <Ellipse/>, <Rectangle/> and <Text/> components from the passed-in shape definitions (this.props.shapes). For each shape, we pass in the parsed properties with the attribute expression: properties={shape.properties}.

/** @jsx React.DOM */
var ShapeCanvas = React.createClass({

  getDefaultProps: function(){
    return {
      shapes: []
    };
  },

  render: function () {
    var self = this;
    var shapeTree = <div class="shape-canvas">
    {
      this.props.shapes.map(function(s) {
        return self._createShape(s);
      })
    }
      </div>;

    var noTree = <div class="shape-canvas no-shapes">No Shapes Found</div>;

    return this.props.shapes.length > 0 ? shapeTree : noTree;
  },

  _createShape: function(shape) {
    return this._shapeMap[shape.type](shape);
  },

  _shapeMap: {
    ellipse: function (shape) {
      return <Ellipse properties={shape.properties} />;
    },

    rect: function (shape) {
      return <Rectangle properties={shape.properties} />;
    },

    text: function (shape) {
      return <Text properties={shape.properties} />;
    }
  }

});

One thing different here is that our component tree is not static, as it is in the ShapeEditor. Instead, it’s dynamically generated by looping over the passed-in shapes. We also show the "No Shapes Found" message if there is nothing to show.

The Shapes: Ellipse, Rectangle, Text

All of the shapes have a similar structure and differ only in the styling. They also make use of the ShapePropertyMixin to handle the style generation.

Here’s Ellipse:

/** @jsx React.DOM */

var Ellipse = React.createClass({
  mixins: [ShapePropertyMixin],

  render:function(){
    var style = this.extractStyle(true);
    style['border-radius'] = '50% 50%';

    return <div style={style} class="shape" />;
  }
});

The implementation for extractStyle() is provided by the ShapePropertyMixin.

The Rectangle component follows suit, of course without the border-radius style. The Text component has an extra property called value which sets the inner text for the <div/>.

Here’s Text, to make this clear:

/** @jsx React.DOM */

var Text = React.createClass({

  mixins: [ShapePropertyMixin],

  render:function(){
    var style = this.extractStyle(false);
    return <div style={style} class="shape">{this.props.properties.value}</div>;
  }

});

Tying It All Together With App.js

app.js is where we bring it all together. Here we render the root component, the ShapeEditor and also provide support to switch between a few sample shapes. When you pick a different sample from the drop down, we load some predefined text into the ShapeEditor and cause the ShapeCanvas to update. This happens in the readShapes() method.

/** @jsx React.DOM */

var shapeEditor = <ShapeEditor />;
React.renderComponent(
  shapeEditor,
  document.getElementsByClassName('container')[0]
);
function readShapes() {
  var file = $('.shapes-picker').val(),
    text = SHAPES[file] || '';

  $('.editor').val(text);
  shapeEditor.setState({ text: text }); // force a render
}

$('.shapes-picker').change(readShapes);
readShapes(); // load time

To exercise the creative side, here is a robot built using the Shape Editor:

robot

And That’s React for you!

Phew! This has been a rather long article, and having reached this point, you should have a sense of achievement!

We have explored a lot of concepts here: the integral role of Components in the framework, the use of JSX to easily describe a component tree (aka intermediate-DOM), the various hooks to plug into the component lifecycle, the use of state and props to drive the render process, the use of Mixins to factor out reusable behavior, and finally pulling all of this together with the Shape Editor example.

Hope this article gives you enough of a boost to build a few React apps for yourself. To continue your exploration, the official React documentation is a good place to start.

October 30 2013

16:27

Introduction to Sails.js

Sails is a JavaScript framework designed to resemble the MVC architecture of frameworks like Ruby on Rails. It makes the process of building Node.js apps easier, especially APIs, single-page apps and real-time features, like chat.


Installation

Installing Sails is quite simple. The prerequisites are to have Node.js installed and also npm, which comes with Node. Then you must issue the following command in the terminal:

sudo npm install sails -g

Create a New Project

In order to create a new Sails project, the following command is used:

sails new myNewProject

Sails will generate a new folder named myNewProject and add all the necessary files to have a basic application built. To see what was generated, just get into the myNewProject folder and run the Sails server by issuing the following command in the terminal:

sails lift

Sails’s default port is 1337, so if you visit http://localhost:1337 you should get the Sails default index.html page.

index.html

Now, let's have a look at what Sails generated for us. In our myNewProject folder the following files and sub-folders were created:

Folder structure

The assets Folder

The assets folder contains subdirectories for the JavaScript and CSS files that should be loaded during runtime. This is the best place to store auxiliary libraries used by your application.

The public Folder

Contains the files that are publicly available, such as pictures your site uses, the favicon, etc.

The config Folder

This is one of the more important folders. Sails is designed to be flexible. It assumes some standard conventions, but it also allows the developer to change the way Sails configures the created app to fit the project’s needs. The following is a list of configuration files present in the config folder:

  • adapters.js – used to configure the database adapters
  • application.js – general settings for the application
  • assets.js – asset settings for CSS and JS
  • bootstrap.js – code that will be run before the app launches
  • locales – folder containing translations
  • policies.js – user rights management configuration
  • routes.js – the routes for the system
  • views.js – view related settings

The Sails.js documentation contains detailed information on each of these files.

The views Folder

The application's views are stored in this folder. Looking at its contents, we notice that the views are generated by default as EJS (embedded JavaScript). The views folder also contains views for error handling (404 and 500), the layout file (layout.ejs) and the views for the home controller, which were generated by Sails.

The api Folder

This folder is composed of a bunch of sub-folders:

  • the adapters folder contains the adapters used by the application to
    handle database connections
  • the controllers folder contains the application controllers
  • the application's models are stored in the models folder
  • in the policies folder are stored rules for application user access
  • the api services implemented by the app are stored in the services
    folder

Configure the Application

So far we have created our application and taken a look at what was generated by default; now it's time to configure the application to make it fit our needs.

General Settings

General settings are stored in the config/application.js file. The configurable options for the application are:

  • application name (appName)
  • the port on which the app will listen (port)
  • the application environment; can be either development or production (environment)
  • the level for the logger, usable to control the size of the log file (log)

Note that setting the app environment to production makes Sails bundle and minify the CSS and JS, which can make it harder to debug.
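Put together, a minimal config/application.js using the options above might look like this (a sketch; the values and exact structure are examples, so check your generated file for the real defaults):

```javascript
// config/application.js -- example values only
module.exports = {
  appName: 'myNewProject',
  port: 1337,
  environment: 'development',
  log: {
    level: 'info'
  }
};
```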

Routes

Application routes are defined in the config/routes.js file. As you’d expect, this file will be the one that you will most often work with as you add new controllers to the application.

The routes are exported as follows, in the configuration file:

module.exports.routes = {
  // route to index page of the home controller
  '/': {
    controller: 'home'
  },

  // route to the auth controller, login action
  '/login': {
    controller: 'auth',
    action: 'login'
  },

  // route to blog controller, add action to add a post to a blog
  // note that we use also the HTTP method/verb before the path
  'post /blog/add': {
    controller: 'blog',
    action: 'add_post'
  },

  // route to get the first blog post. The find action will return
  // the database row containing the desired information
  '/blog/:item': {
    controller: 'blog',
    action: 'find'
  }
}

Views

Regarding views, the configurable options are the template engine to be used and whether or not a layout should be used.


Models

Models are a representation of the application data stored in a database. Models are defined by using attributes and associations. For instance, the definition of a Person model might look like this:

// Person.js
var Person = {
  name: 'STRING',
  age: 'INTEGER',
  birthDate: 'DATE',
  phoneNumber: 'STRING',
  emailAddress: 'STRING'
};
module.exports = Person;

The communication with the underlying database is done through adapters. Adapters are defined in api/adapters and are configured in the adapters.js file. At the time of writing, Sails comes with three adapters: memory, disk and mysql, but you can write your own adapter (see the documentation for details).

Once you have a model defined you can operate on it by creating records, finding records, updating and destroying records.
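As a rough sketch, here is what those operations might look like for the Person model above, wrapped in plain helper functions. The wrapper names are invented for illustration, and the `.done(err, result)` callback style matches the era's Waterline API, so treat the exact signatures as assumptions:

```javascript
// A sketch only -- assumes the Person model above is loaded by Sails.
// The wrapper function names are invented for illustration.
function createPerson(data, cb) {
    // persist a new record matching the model's attributes
    Person.create({ name: data.name, age: data.age }).done(cb);
}

function renamePerson(id, newName, cb) {
    // update the record(s) matching the criteria
    Person.update({ id: id }, { name: newName }).done(cb);
}

function removePerson(id, cb) {
    // destroy the record(s) matching the criteria
    Person.destroy({ id: id }).done(cb);
}
```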


Controllers

Controllers are placed in api/controllers. A controller is created using the following command:

sails generate controller comment

This command will generate a CommentController object. Actions are defined inside this object. Actions can also be generated when you issue the generate controller command:

sails generate controller comment create destroy tag like

This will create a Comment controller with actions for create, destroy, tag and like.

Actions receive as parameters the request and the response objects, which can be used for getting parameters of the URI (the request object) or output in the view (using the response object).

To communicate with the model, a callback is passed to the appropriate model method. For instance, in the case of querying a database with find, the following pattern is used to manipulate the model:

Blog.find(id).done(function(err, blog) {
  // blog is the database record with the specified id
  console.log(blog.content);
});

Views

Views are used to handle the UI of the application. By default, views are handled using EJS, but any other templating library can be used. How to configure views was discussed previously in the Configuration chapter.

Views are defined in the /views directory and the templates are defined in the /assets/templates folder.

There are mainly four types of views:

  • server-side views
  • view partials
  • layout views
  • client-side views

Server-Side Views

Their job is to display data when a view is requested by the client. Usually, the res.view method responds to the client with the appropriate view. But even if no controller or action exists for a request, Sails will still serve a view if it finds one at the path /views/:controller/:action.ejs.

The Layout View

The Layout can be found in /views/layout.ejs. It is used to load the application assets such as stylesheets or JavaScript libraries.

Have a look at the specified file:

<!DOCTYPE html>
<html>
  <head>
    <title><%- title %></title>

    <!-- Viewport mobile tag for sensible mobile support -->
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">

    <!-- JavaScript and stylesheets from your public folder are included here -->
    <%- assets.css() %>
    <%- assets.js() %>
  </head>

  <body>
    <%- body %>

    <!-- Templates from your view path are included here -->
    <%- assets.templateLibrary() %>
  </body>
</html>

The lines assets.css() and assets.js() load the CSS and JS assets of our application, and assets.templateLibrary() loads the client templates.

Client-Side Templates

These are defined in the /assets/templates and are loaded as we saw above.


Routes

We discussed how to configure routes in the Configuration chapter.

There are several conventions that Sails follows when routes are handled:

  • if the URL is not specified in config/routes.js, the default route for a URL is /:controller/:action/:id, with the obvious meanings for controller and action, and id being the request parameter derived from the URL.
  • if :action is not specified, Sails will redirect to the appropriate action. Out of the box, the same RESTful route conventions are used as in Backbone.
  • if the requested controller/action does not exist, Sails behaves as follows:
    • if a view exists, Sails will render that view
    • if a view does not exist, but a model exists, Sails will return the JSON form of that model
    • if none of the above exist, Sails will respond with a 404

Conclusion

I’ve barely scratched the surface of what Sails can do, but stay tuned, as I will follow this up with an in-depth presentation showing you how to build an application using Sails.

Also keep in mind that Sails is currently under development and constantly changing. So make sure to check out the documentation to see what's new.

October 21 2013

14:00

Working With IndexedDB – Part 2

Welcome to the second part of my IndexedDB article. I strongly recommend reading the first article in this series, as I’ll be assuming you are familiar with all the concepts covered so far. In this article, we’re going to wrap up the CRUD aspects we didn’t finish before (specifically updating and deleting content), and then demonstrate a real world application that we will use to demonstrate other concepts in the final article.


Updating Records

Let’s start off by discussing how to update a record with IndexedDB. If you remember, adding data was pretty simple:

//Define a person
var person = {
    name:name,
    email:email,
    created:new Date()
}

//Perform the add
var request = store.add(person);

Updating a record is just as simple. Assuming that you have defined a property called id as your key for your object store, you can simply use the put method instead of add.

var person = {
    name:name,
    email:email,
    created:new Date(),
    id:someId
}

//Perform the update
var request = store.put(person);

Like the add method, you can assign methods to handle the asynchronous results of the operation.
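For instance, here is a sketch of how those handlers might be attached to a put request; the object store name and the wrapper function are taken from, and invented around, the earlier snippets:

```javascript
// A sketch: wrapping an update in a function so result handlers
// can be attached to the request. The 'people' store name is assumed.
function updatePerson(db, person) {
    var t = db.transaction(["people"], "readwrite");
    var request = t.objectStore("people").put(person);

    request.onsuccess = function(e) {
        // e.target.result holds the key of the updated record
        console.log("Updated record with key " + e.target.result);
    };
    request.onerror = function(e) {
        console.log("Update failed");
        console.dir(e);
    };
}
```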


Deleting Records

Deleting records is done via the delete method. (Big surprise there.) You simply pass in the unique identifier of the record you want to remove. Here is a simple example:

var t = db.transaction(["people"], "readwrite");
var request = t.objectStore("people").delete(thisId);

And like every other aspect of IndexedDB, you can add your handlers for the asynchronous results.

So, as I said, not terribly exciting, which is probably good. You want your APIs simple, boring, and unsurprising. Now let’s take what we’ve learned and bring it together to create a real, if simple, application.


The Note App

Ok, finally we have all (well, most) of the parts we need to build a real application. Since it hasn’t been done before (ahem), we are going to build a simple note taking application. Let’s look at a few screen shots and then I’ll show you the code behind it. On launch, the application initializes an IndexedDB for the application and renders an empty table. Initially, all you can do with the application is add a new note. (We could make this a bit more user friendly perhaps.)

Note App

Clicking the Add Note button opens a form:

Note App - Add Form

After entering some data in the form, you can then save the note:

Note App - Saved Note

As you can see, you have the option to edit and delete notes. Finally, if you click the row itself, you can read the note:

Note App - Note View

So not exactly rocket science, but a full working example of the IndexedDB specification. The notes written here will persist. You can close your browser, restart your machine, take a few years off to contemplate life and poetry, and when you open the browser again your data will still be there. Let’s take a look at the code now.

First – a disclaimer. This application would have been a perfect candidate for one of the many JavaScript frameworks. I’m sure those of you who use Backbone or Angular can already imagine how you would set this up. However – I made the bold decision here to not use a framework. I was worried both about the people who may use a different framework and those who use none. I wanted our focus here to be on the IndexedDB aspects alone. I fully expect some people to disagree with that decision, but let’s hash it out in the comments.

Our first template is the HTML file. We’ve only got one and most of it is boilerplate Bootstrap:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">

    <title>Note Database</title>

    <link href="bootstrap/css/bootstrap.css" rel="stylesheet">
    <link href="css/app.css" rel="stylesheet">

  </head>

  <body>

    <div class="navbar navbar-inverse navbar-fixed-top">
      <div class="container">
        <div class="navbar-header">
          <a class="navbar-brand" href="#">Note Database</a>
        </div>
      </div>
    </div>

    <div class="container">

        <div id="noteList"></div>
        <div class="pull-right"><button id="addNoteButton" class="btn btn-success">Add Note</button></div>
        <div id="noteDetail"></div>

        <div id="noteForm">
            <h2>Edit Note</h2>
            <form role="form" class="form-horizontal">
            <input type="hidden" id="key">
            <div class="form-group">
                <label for="title" class="col-lg-2 control-label">Title</label>
                <div class="col-lg-10">
                <input type="text" id="title" required class="form-control">
                </div>
            </div>
            <div class="form-group">
                <label for="body" class="col-lg-2 control-label">Body</label>   
                <div class="col-lg-10">
                <textarea id="body" required class="form-control"></textarea>
                </div>
            </div>
            <div class="form-group">
                <div class="col-lg-offset-2 col-lg-10">
                    <button id="saveNoteButton" class="btn btn-default">Save Note</button>
                </div>
            </div>
            </form>
        </div>

    </div>

    <script src="js/jquery-2.0.0.min.js"></script>
    <script src="bootstrap/js/bootstrap.min.js"></script>
    <script src="js/app.js"></script>
  </body>
</html>

As mentioned above, a good-sized portion of this file is template code for Bootstrap. The parts we care about are the noteList div, the noteDetail div, and the noteForm. You can probably guess that these are the divs we’ll be updating as the user clicks around in the application.

Coding Our Core App File

Now let’s take a look at app.js, the core file that handles the logic for our application.

/* global console,$,document,window,alert */
var db;

function dtFormat(input) {
    if(!input) return "";
    var res = (input.getMonth()+1) + "/" + input.getDate() + "/" + input.getFullYear() + " ";
    var hour = input.getHours();
    var ampm = "AM";
    if(hour === 12) ampm = "PM";
    if(hour > 12){
        hour-=12;
        ampm = "PM";
    }
    if(hour === 0) hour = 12;
    var minute = input.getMinutes();
    if(minute < 10) minute = "0" + minute;
    res += hour + ":" + minute + " " + ampm;
    return res;
}

You can ignore the first function as it is simply a format utility for dates. Let’s skip ahead to the jQuery document ready block.

Checking for Browser Support

$(document).ready(function() {

    if(!("indexedDB" in window)) {
        alert("IndexedDB support required for this demo!");
        return;
    }

    var $noteDetail = $("#noteDetail");
    var $noteForm = $("#noteForm");

    var openRequest = window.indexedDB.open("nettuts_notes_1",1);

    openRequest.onerror = function(e) {
        console.log("Error opening db");
        console.dir(e);
    };

    openRequest.onupgradeneeded = function(e) {

        var thisDb = e.target.result;
        var objectStore;

        //Create Note OS
        if(!thisDb.objectStoreNames.contains("note")) {
            console.log("I need to make the note objectstore");
            objectStore = thisDb.createObjectStore("note", { keyPath: "id", autoIncrement:true });  
        }

    };

    openRequest.onsuccess = function(e) {
        db = e.target.result;

        db.onerror = function(event) {
          // Generic error handler for all errors targeted at this database's
          // requests!
          alert("Database error: " + event.target.errorCode);
          console.dir(event.target);
        };

        displayNotes();

    };

Our very first action is to check for IndexedDB support. If the user’s browser isn’t compatible, we use an alert and abort the function. It would probably be better to relocate them to a page that fully explains why they can’t use the application. (And to be clear, we could also build an application that made use of WebSQL as a backup. But again – my focus here is on simplicity.)

After caching a few jQuery selectors that we’ll use throughout the app, we then open up our IndexedDB database. The database is fairly simple. In the onupgradeneeded handler, you can see one object store called note being created. Once everything is done, the onsuccess handler will fire off a call to displayNotes.

The displayNotes Function

    function displayNotes() {

        var transaction = db.transaction(["note"], "readonly");  
        var content="<table class='table table-bordered table-striped'><thead><tr><th>Title</th><th>Updated</th><th>&nbsp;</th></thead><tbody>";

        transaction.oncomplete = function(event) {
            $("#noteList").html(content);
        };

        var handleResult = function(event) {  
          var cursor = event.target.result;  
          if (cursor) {  
            content += "<tr data-key=\""+cursor.key+"\"><td class=\"notetitle\">"+cursor.value.title+"</td>";
            content += "<td>"+dtFormat(cursor.value.updated)+"</td>";

            content += "<td><a class=\"btn btn-primary edit\">Edit</a> <a class=\"btn btn-danger delete\">Delete</a></td>";
            content +="</tr>";
            cursor.continue();  
          }  
          else {  
            content += "</tbody></table>";
          }  
        };

        var objectStore = transaction.objectStore("note");

        objectStore.openCursor().onsuccess = handleResult;

    }

The displayNotes function does what you expect – get all the data and display it. We discussed how to get all rows of data in the previous entry, but I want to point out something slightly different about this example. Note that we have a new event handler, oncomplete, that we’ve tied to the transaction itself. Previously, we’ve used events just within the actions, inside the transaction, but IndexedDB lets us do it at the top level as well. This becomes especially useful in a case like this. We have a giant string, our HTML table, that we build up over each iteration of our data. We can use the transaction’s oncomplete handler to wrap up the display portion and write it out using a simple jQuery call.

The Delete, Edit, and Add Functions

    $("#noteList").on("click", "a.delete", function(e) {
        var thisId = $(this).parent().parent().data("key");

        var t = db.transaction(["note"], "readwrite");
        var request = t.objectStore("note").delete(thisId);
        t.oncomplete = function(event) {
            displayNotes();
            $noteDetail.hide();
            $noteForm.hide();
        };
        return false;
    });

    $("#noteList").on("click", "a.edit", function(e) {
        var thisId = $(this).parent().parent().data("key");

        var request = db.transaction(["note"], "readwrite")  
                        .objectStore("note")  
                        .get(thisId);  
        request.onsuccess = function(event) {  
            var note = request.result;
            $("#key").val(note.id);
            $("#title").val(note.title);
            $("#body").val(note.body);
            $noteDetail.hide();
            $noteForm.show();
        };

        return false;
    });

    $("#noteList").on("click", "td", function() {
        var thisId = $(this).parent().data("key");
        var transaction = db.transaction(["note"]);  
        var objectStore = transaction.objectStore("note");  
        var request = objectStore.get(thisId);

        request.onsuccess = function(event) {  
            var note = request.result;
            $noteDetail.html("<h2>"+note.title+"</h2><p>"+note.body+"</p>").show();
            $noteForm.hide();
        };  
    });

    $("#addNoteButton").on("click", function(e) {
        $("#title").val("");
        $("#body").val("");
        $("#key").val("");
        $noteDetail.hide();
        $noteForm.show();       
    });

Our next two handlers (delete and edit) are another example of this same principle. Since none of the IndexedDB calls here are new, we won’t bother going over them. Most of the “meat” here ends up being simple DOM manipulation to handle the particular actions. The handler for clicking the add button is exactly that, so we’ll skip over it as well.

The Save Function

    $("#saveNoteButton").on("click",function() {

        var title = $("#title").val();
        var body = $("#body").val();
        var key = $("#key").val();

        var t = db.transaction(["note"], "readwrite");

        if(key === "") {
            t.objectStore("note")
                            .add({title:title,body:body,updated:new Date()});
        } else {
            t.objectStore("note")
                            .put({title:title,body:body,updated:new Date(),id:Number(key)});
        }

        t.oncomplete = function(event) {
            $("#key").val("");
            $("#title").val("");
            $("#body").val("");
            displayNotes();
            $noteForm.hide();           
        };

        return false;
    });

});

The next interesting tidbit is the save method. It has to use a bit of logic to determine if we are adding or updating, but even that is rather simple. And that’s it! A complete, if simple, IndexedDB application. You can play around with this demo yourself by downloading the attached source code.


In Conclusion

That’s it for part two! The third article will take this application and begin adding additional features including search and array based properties.

October 11 2013

17:17

Essential Meteor Performance Tips

Meteor is one of the easiest and most robust ways to build real-time web apps. But Meteor is also known for bad performance. This is half true and half false, but that doesn’t mean Meteor cannot be used in production or cannot be scaled.

Recently, I’ve worked on some improvements which make Meteor production ready. I hacked into Meteor and changed some of the internals to make it perform better. Also, I’ve tried and tested several proven web scaling techniques with Meteor and the results were promising. All of my hacks and findings are blogged at MeteorHacks.

In this article, I’ll show you a series of tips which can be used to make your Meteor app perform better and make it production ready. Let’s begin.


Use Smart Collections

The current MongoDB implementation in Meteor does not perform well. Meteor has plans to improve it, but we are not there just yet.

Smart Collections is a totally redesigned collection implementation for Meteor. It is well tested and used in production. You can get a fivefold performance improvement with Smart Collections, and if you have many subscriptions, the gains are even bigger.

Smart Collections is almost fully compatible with the current Meteor Collection implementation, so it is a drop-in replacement; you don’t need to change any application-specific code. Here is how you can use Smart Collections in your app.

Install It From Atmosphere

mrt add smart-collections

Replace Meteor Collections With Smart Collections

//old code
Posts = new Meteor.Collection('posts');

//with smart collections
Posts = new Meteor.SmartCollection('posts');

Additionally, Smart Collections has support for scaling Meteor apps horizontally, and it is currently the best solution for that.


Think About MongoDB Indexes

When developing your applications, I know a lot of you out there simply don’t care about MongoDB; you just focus on the app. Now, this is not a bad thing, and it’s also why Meteor is so popular. But before you put your app into production, you must look into the queries you’ve used and add proper indexes for them.

You must add the correct indexes, otherwise things can get even worse (this totally depends on your app and how people are using it).

Read this official MongoDB index guide and choose correct indexes for your app. Additionally, you can profile MongoDB for slow queries and decide where you need to focus.
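As a sketch, suppose your publications frequently query a Posts collection by type and sort by date; you could ensure a matching index on the server at startup. Note that the collection, field names, and the _ensureIndex helper (undocumented and server-only at the time of writing) are all assumptions here, and you can equally create the same index from the mongo shell:

```javascript
// A sketch -- assumes a server-side Posts collection queried by 'type'
// and sorted by 'createdAt'. _ensureIndex is an undocumented helper;
// the same index can be created directly from the mongo shell instead.
function ensurePostIndexes() {
    // compound index covering the query filter and the sort order
    Posts._ensureIndex({ type: 1, createdAt: -1 });
}
// intended to be called from Meteor.startup on the server
```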


Arrays and Nested Objects

Meteor’s real-time engine identifies changes based on the fields of the MongoDB document. But it does not support nested fields and arrays. For example, let’s look at the following MongoDB document.

{
  "_id": "ormBGtZwcSZggWhCd",
  "name": "Arunoda Susiripala",
  "address": {
    "no": "434/T",
    "city": "Colombo",
    "country": "Sri Lanka"
  },
  "projects": [
    "node-usage",
    "Meteor-smart-collections",
    "laika",
    "Meteor-streams"
  ]
}

If you were to update country to a new value, Meteor detects this as a change to address, not to address.country. So Meteor will send the full object assigned to address to the client.

Similarly, if you’ve added an item to projects, the whole projects array will be sent back to the client.

For a small document like above, this is not a big problem. But if your document contains lots of nested content and large arrays, think twice.
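As a sketch, assuming the document above lives in a People collection, the following update changes only one nested field, yet the whole address object travels back to subscribed clients (the collection name and the wrapper function are assumptions):

```javascript
// A sketch -- the People collection and the wrapper are assumptions.
function moveToUk(personId) {
    // Only address.country changes here, but Meteor still sends the
    // entire 'address' object down to subscribed clients.
    People.update(personId, { $set: { "address.country": "United Kingdom" } });
}
```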


Filter Only the Fields You Need

When creating a publication, only include the fields that need to be sent to the client. This saves bandwidth, and the client will only get the fields it really needs. This might also fix the problem addressed in the previous tip.

See the following example. It only sends type, content and _id to the client.

Meteor.publish('projects', function(type) {
  return MyCollection.find({type: type}, {fields: {
    type: 1,
    content: 1
  }});
});

Think About Subscriptions

Meteor keeps a copy of each document related to each subscription on the server. This caching allows Meteor to send exact changes of the document to the client, and it saves bandwidth. As a result of this, the server’s RAM usage might increase with the number of subscriptions you have.

So, it is wise to stop subscriptions that you don’t need in order to get optimal RAM usage.
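A minimal sketch of keeping a subscription handle around so it can be stopped later; the 'projects' publication name comes from the earlier example, while the wrapper functions are invented for illustration:

```javascript
// A sketch -- 'projects' is the publication from the earlier example.
var projectsHandle = null;

function watchProjects(type) {
    // Meteor.subscribe returns a handle we can stop later
    projectsHandle = Meteor.subscribe('projects', type);
}

function stopWatchingProjects() {
    if (projectsHandle) {
        projectsHandle.stop();   // frees the server-side cached documents
        projectsHandle = null;
    }
}
```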

If your RAM usage is growing out of control, this might be the problem. Keep an eye on it.


Use Meteor Streams When Persistence Is Not Needed

Meteor’s real-time model builds on top of MongoDB. That means you are persisting all of your communication to the DB. Now, this isn’t bad, but not every real-time communication needs to be persisted to the DB. For those things, you can use Meteor Streams. Meteor Streams adds real-time messaging capabilities to Meteor.

Read this Nettuts+ article to get started with Meteor Streams.


Disable WebSockets If Not In Use

When a client tries to establish a connection to the Meteor server, it first tries WebSockets. If that fails, it falls back to another technique such as xhr-polling.

Your Meteor app, or most of its users, may sit behind a firewall or a proxy that does not speak WebSockets. In those situations, the above process takes time and slows down how quickly your app loads for the client.

The solution is to disable WebSockets in your app. This way clients will never try for WebSockets and instead, they will use some other proven technique which works everywhere. To do this, use the DISABLE_WEBSOCKETS environmental variable as shown below.

DISABLE_WEBSOCKETS=true meteor

Don’t Serve Static Content From Meteor

In the end, Meteor runs on top of NodeJS. As we all know, NodeJS is not that great at serving static content. So if your app depends largely on static content, it would be wise to serve it from a CDN or a proxy such as Nginx. How to do this is out of the scope of this article, but these guides will get you there.


Conclusion

So that completes my series of Meteor performance tips. Go ahead and apply some of these to your app and see what kind of improvements you get. If you know of any other tips and tricks, which make Meteor perform better, please do share them with us.
