
January 20 2014

20:09

SOLID: Part 2 – The Open/Closed Principle

Single Responsibility (SRP), Open/Closed (OCP), Liskov’s Substitution, Interface Segregation, and Dependency Inversion. Five agile principles that should guide you every time you need to write code.


Definition

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.

The Open/Closed Principle, OCP in short, is credited to Bertrand Meyer, a French programmer who first published it in his book Object-Oriented Software Construction in 1988.

The principle rose in popularity in the early 2000s when it became one of the SOLID principles defined by Robert C. Martin in his book Agile Software Development, Principles, Patterns, and Practices and later republished in the C# version of the book Agile Principles, Patterns, and Practices in C#.

What we are basically talking about here is designing our modules, classes and functions in such a way that when new functionality is needed, we do not modify the existing code but rather write new code that the existing code will use. This may sound a little strange, especially in languages like Java, C, C++ or C#, where the principle applies not only to the source code itself but to the binaries as well. We want to add new features in ways that will not require us to redeploy existing binaries, executables or DLLs.


OCP in the SOLID Context

As we progress with these tutorials, we can put each new principle in the context of the ones already discussed. We have already covered the Single Responsibility Principle (SRP), which states that a module should have only one reason to change. If we think about OCP and SRP together, we can observe that they are complementary. Code specifically designed with SRP in mind will be close to respecting OCP, or at least easy to bring into line with it. When code has a single reason to change but a new feature forces us to modify it, that modification introduces a second reason to change, so both SRP and OCP are violated. Conversely, code that changes only when its main responsibility changes, and remains untouched when a new feature is added, respects OCP and will mostly respect SRP as well.

This does not mean that SRP always leads to OCP or vice versa, but in most cases if one of them is respected, achieving the second one is quite simple.


The Obvious Example of OCP Violation

From a purely technical point of view, the Open/Closed Principle is very simple. A relationship between two classes, like the one below, violates the OCP.

[Diagram: a User class depending directly on a Logic class]

The User class uses the Logic class directly. If we need to implement a second Logic class in a way that allows us to use both the current one and the new one, the existing Logic class will have to be changed. User is directly tied to the implementation of Logic; there is no way for us to provide a new Logic without affecting the current one. In statically typed languages, it is very likely that the User class will also require changes. In compiled languages, both the User executable and the Logic executable or dynamic library will almost certainly require recompilation and redeployment to our clients, a process we want to avoid whenever possible.


Show Me the Code

Based only on the schema above, one could deduce that any class directly using another class violates the Open/Closed Principle, and strictly speaking, that is right. I find it quite interesting to explore the limits: the moment when you draw the line and decide that respecting OCP is more difficult than modifying existing code, or that the architectural cost does not justify the cost of changing the existing code.

Let’s say we want to write a class that can provide progress as a percent for a file that is downloaded through our application. We will have two main classes, a Progress and a File, and I imagine we will want to use them like in the test below.

function testItCanGetTheProgressOfAFileAsAPercent() {
	$file = new File();
	$file->length = 200;
	$file->sent = 100;

	$progress = new Progress($file);

	$this->assertEquals(50, $progress->getAsPercent());
}

In this test we are a user of Progress. We want to obtain a value as a percent, regardless of the actual file size. We use File as the source of information for our Progress. A file has a length in bytes and a field called sent representing the amount of data sent to the one doing the download. We do not care about how these values are updated in the application. We can assume there is some magical logic doing it for us, so in a test we can set them explicitly.

class File {
	public $length;
	public $sent;
}

The File class is just a simple data object containing the two fields. Of course in real life, it would probably contain other information and behavior also, like file name, path, relative path, current directory, type, permissions and so on.

class Progress {

	private $file;

	function __construct(File $file) {
		$this->file = $file;
	}

	function getAsPercent() {
		return $this->file->sent * 100 / $this->file->length;
	}

}

Progress is simply a class taking a File in its constructor. For clarity, we specified the type of the variable in the constructor’s parameters. There is a single useful method on Progress, getAsPercent(), which will take the values sent and length from File and transform them into a percent. Simple, and it works.

Testing started at 5:39 PM ...
PHPUnit 3.7.28 by Sebastian Bergmann.
.
Time: 15 ms, Memory: 2.50Mb
OK (1 test, 1 assertion)

This code seems to be right, yet it violates the Open/Closed Principle. But why? And how?


Changing Requirements

Every application that is expected to evolve over time will need new features. One new feature for our application could be to allow streaming of music instead of just downloading files. A File's length is represented in bytes, a music track's duration in seconds. We want to offer a nice progress bar to our listeners, but can we reuse the one we already have?

No, we cannot. Our progress is bound to File. It understands only files, even though it could be applied to music content as well. But in order to do that, we would have to modify it: we would have to make Progress know about both Music and File. If our design respected OCP, we would not need to touch File or Progress at all. We could simply reuse the existing Progress and apply it to Music.


Solution 1: Take Advantage of the Dynamic Nature of PHP

Dynamically typed languages have the advantage of resolving the types of objects at runtime. This allows us to remove the typehint from Progress' constructor and the code will still work.

class Progress {

	private $file;

	function __construct($file) {
		$this->file = $file;
	}

	function getAsPercent() {
		return $this->file->sent * 100 / $this->file->length;
	}

} 

Now we can throw anything at Progress. And by anything, I mean literally anything:

class Music {

	public $length;
	public $sent;

	public $artist;
	public $album;
	public $releaseDate;

	function getAlbumCoverFile() {
		return 'Images/Covers/' . $this->artist . '/' . $this->album . '.png';
	}
} 

And a Music class like the one above will work just fine. We can test it easily with a very similar test to File.

function testItCanGetTheProgressOfAMusicStreamAsAPercent() {
	$music = new Music();
	$music->length = 200;
	$music->sent = 100;

	$progress = new Progress($music);

	$this->assertEquals(50, $progress->getAsPercent());
}

So basically, any measurable content can be used with the Progress class. Maybe we should express this in code by changing the variable’s name also:

class Progress {

	private $measurableContent;

	function __construct($measurableContent) {
		$this->measurableContent = $measurableContent;
	}

	function getAsPercent() {
		return $this->measurableContent->sent * 100 / $this->measurableContent->length;
	}

} 

Good, but we have a huge problem with this approach. When we had File specified as a typehint, we could be sure about what our class could handle. It was explicit, and if something else came in, a nice error told us so:

Argument 1 passed to Progress::__construct()
must be an instance of File,
instance of Music given.

But without the typehint, we must rely on the fact that whatever comes in will have two public variables named exactly "length" and "sent". Otherwise we will have a refused bequest.

Refused bequest: a class that overrides a method of a base class in such a way that the contract of the base class is not honored by the derived class. ~ Source: Wikipedia

This is one of the code smells presented in much more detail in the Detecting Code Smells premium course (https://tutsplus.com/course/detecting-code-smells/). In short, we do not want to end up trying to call methods or access fields on objects that do not conform to our contract. When we had a typehint, the contract was specified by it: the fields and methods of the File class. Now that we have nothing, we can send in anything, even a string, and it would result in an ugly error.


A test like this, where we send in a simple string, will produce a refused bequest:
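The exact snippet is not preserved here; a minimal sketch of such a test (the test name is illustrative) could be:

function testItBreaksWhenGivenAString() {
	// Passing a plain string instead of an object with sent/length fields
	$progress = new Progress('some random string');

	$this->assertEquals(50, $progress->getAsPercent());
}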

Trying to get property of non-object.

While the end result is the same in both cases, meaning the code breaks, the first one produced a nice message. This one, however, is very obscure. There is no way of knowing what the variable is – a string in our case – and what properties were looked for and not found. It is difficult to debug and to solve the problem. A programmer needs to open the Progress class and read it and understand it. The contract, in this case, when we do not explicitly specify the typehint, is defined by the behavior of Progress. It is an implicit contract, known only to Progress. In our example, it is defined by the access to the two fields, sent and length, in the getAsPercent() method. In real life the implicit contract can be very complex and hard to discover by just looking for a few seconds at the class.

This solution is recommended only if none of the other suggestions below can easily be implemented or if they would inflict serious architectural changes that do not justify the effort.


Solution 2: Use the Strategy Design Pattern

This is the most common and probably the most appropriate solution to respect OCP. It is simple and effective.

[Diagram: the Strategy pattern – a client class depending on an interface implemented by several server classes]

The Strategy Pattern simply introduces the use of an interface. An interface is a special type of entity in Object Oriented Programming (OOP) which defines a contract between a client and a server class. Both classes will adhere to the contract to ensure the expected behavior. There may be several, unrelated, server classes that respect the same contract thus being capable of serving the same client class.

interface Measurable {
	function getLength();
	function getSent();
}

In an interface we can define only behavior. That is why instead of directly using public variables we will have to think about using getters and setters. Adapting the other classes will not be difficult at this point. Our IDE can do most of the job.

function testItCanGetTheProgressOfAFileAsAPercent() {
	$file = new File();
	$file->setLength(200);
	$file->setSent(100);

	$progress = new Progress($file);

	$this->assertEquals(50, $progress->getAsPercent());
}

As usual, we start with our tests. We will need to use setters to set the values. If considered mandatory, these setters may also be defined in the Measurable interface. However, be careful what you put there. The interface defines the contract between the client class Progress and the various server classes like File and Music. Does Progress need to set the values? Probably not, so the setters are unlikely to belong in the interface. Also, if you defined the setters there, you would force every server class to implement them. For some of them setters may be logical, but others may behave totally differently. What if we want to use our Progress class to show the temperature of our oven? An OvenTemperature class may be initialized with its values in the constructor, or obtain the information from a third class. Who knows? Having setters on that class would be odd.
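To illustrate the point, here is a sketch of such an OvenTemperature class; it does not appear in the article's attached source, and the field names are assumptions. It satisfies Measurable without exposing any setters:

class OvenTemperature implements Measurable {

	private $maximum;
	private $current;

	// Values arrive through the constructor, not through setters
	function __construct($maximum, $current) {
		$this->maximum = $maximum;
		$this->current = $current;
	}

	function getLength() {
		return $this->maximum;
	}

	function getSent() {
		return $this->current;
	}

}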

class File implements Measurable {

	private $length;
	private $sent;

	public $filename;
	public $owner;

	function setLength($length) {
		$this->length = $length;
	}

	function getLength() {
		return $this->length;
	}

	function setSent($sent) {
		$this->sent = $sent;
	}

	function getSent() {
		return $this->sent;
	}

	function getRelativePath() {
		return dirname($this->filename);
	}

	function getFullPath() {
		return realpath($this->getRelativePath());
	}

}

The File class is modified slightly to accommodate the requirements above. It now implements the Measurable interface and has setters and getters for the fields we are interested in. Music is very similar; you can check its full content in the attached source code, and a sketch follows below. We are almost done.
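A minimal sketch of such a Music class (the attached source is not reproduced here, so the details below may differ):

class Music implements Measurable {

	private $length;
	private $sent;

	public $artist;
	public $album;
	public $releaseDate;

	function setLength($length) {
		$this->length = $length;
	}

	function getLength() {
		return $this->length;
	}

	function setSent($sent) {
		$this->sent = $sent;
	}

	function getSent() {
		return $this->sent;
	}

	function getAlbumCoverFile() {
		return 'Images/Covers/' . $this->artist . '/' . $this->album . '.png';
	}

}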

class Progress {

	private $measurableContent;

	function __construct(Measurable $measurableContent) {
		$this->measurableContent = $measurableContent;
	}

	function getAsPercent() {
		return $this->measurableContent->getSent() * 100 / $this->measurableContent->getLength();
	}

} 

Progress also needed a small update. We can now specify a type, using typehinting, in the constructor. The expected type is Measurable. Now we have an explicit contract. Progress can be sure the accessed methods will always be present because they are defined in the Measurable interface. File and Music can also be sure they provide everything Progress needs by simply implementing all the methods of the interface, a requirement when a class implements an interface.

This design pattern is explained in greater detail in the Agile Design Patterns course.

A Note on Interface Naming

People tend to name interfaces with a capital I in front, or with the word “Interface” attached at the end, like IFile or FileInterface. This is an old-style notation imposed by outdated standards. We are well past Hungarian notation and the need to encode the type of a variable or object in its name in order to identify it more easily; IDEs identify anything for us in a split second. This frees us to concentrate on what we actually want to abstract.

Interfaces belong to their clients. Yes. When you name an interface, you must think of the client and forget about the implementation. When we named our interface Measurable, we did so thinking about Progress. If I were a progress bar, what would I need in order to provide the percent? The answer is simple: something we can measure. Thus the name Measurable.

Another reason is that the implementation can be from various domains. In our case, there are files and music. But we may very well reuse our Progress in a racing simulator. In that case, the measured classes would be Speed, Fuel, etc. Nice, isn’t it?


Solution 3: Use the Template Method Design Pattern

The Template Method design pattern is very similar to Strategy, but instead of an interface it uses an abstract class. It is recommended when we have a client that is very specific to our application, with reduced reusability, and when the server classes share common behavior.

[Diagram: the Template Method pattern – an abstract base class in place of the interface]

This design pattern is explained in greater detail in the Agile Design Patterns course.
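The article defers the code for this solution to the course, but a minimal sketch of the idea is shown below: an abstract base class replaces the interface and can also hold behavior common to all server classes. The class and method names here are chosen for illustration and are not taken from the attached source code.

abstract class MeasurableContent {

	abstract function getLength();
	abstract function getSent();

	// Behavior common to all server classes can live in the base class,
	// built on top of the abstract "steps" the subclasses fill in
	function getRemaining() {
		return $this->getLength() - $this->getSent();
	}

}

class File extends MeasurableContent {

	private $length;
	private $sent;

	function setLength($length) {
		$this->length = $length;
	}

	function getLength() {
		return $this->length;
	}

	function setSent($sent) {
		$this->sent = $sent;
	}

	function getSent() {
		return $this->sent;
	}

}

class Progress {

	private $measurableContent;

	// Progress now depends on the abstract class instead of the interface
	function __construct(MeasurableContent $measurableContent) {
		$this->measurableContent = $measurableContent;
	}

	function getAsPercent() {
		return $this->measurableContent->getSent() * 100 / $this->measurableContent->getLength();
	}

}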


A Higher Level View

So, how is all of this affecting our high level architecture?

[Diagram: the current high-level architecture of the application]

If the image above represents the current architecture of our application, adding a new module with five new classes (the blue ones) should affect our design in a moderate way (red class).

[Diagram: the architecture after adding the new module – new classes in blue, the single affected class in red]

In most systems you can’t expect absolutely no effect on the existing code when new classes are introduced. However, respecting the Open/Closed Principle will considerably reduce the classes and modules that require constant change.

As with any other principle, try not to design for everything up front. If you do, you will end up with an interface for each of your classes, and such a design will be hard to maintain and understand. Usually the safest way to go is to think about the possibilities: can you determine whether there will be other types of server classes? Many times you can easily imagine a new feature, or find one on the project's backlog, that would produce another server class. In those cases, add the interface from the beginning. If you cannot determine it, or if you are unsure – which is most of the time – simply omit it. Let the next programmer, or maybe even your future self, add the interface when a second implementation is needed.


Final Thoughts

If you stay disciplined and add interfaces as soon as a second server class is needed, modifications will be few and easy. Remember, if code required change once, there is a high probability it will require change again. When that possibility turns into reality, OCP will save you a lot of time and effort.

Thank you for reading.

January 17 2014

18:28

Better Responsive Images With the <picture> Element

This article will introduce you to <picture>, a new proposed element created to try and solve the problem of serving the version of a content image that best suits the device currently visiting a web page.


Why Do We Need a New Element?

Responsive Web Design (RWD) has taken web developers and designers by storm, changing the way they approach the development of a website. When talking about RWD, I love to make an analogy. We, as people involved in software development, are always excited by new tools in the same way a child is thrilled by a new toy (you’ve heard of the recent diatribe on front-end automation tools, haven’t you?). We are so intrigued by new toys that sometimes we use, and even develop, tools that don’t resolve a real issue. Well, RWD isn’t like a new toy that everybody loves to play with, without a good reason. It’s a methodology that helps us solve real problems: adapting a website for a great variety of screens. However, as many (all?) things in life, it isn’t perfect and has some limitations.

One of the most important open issues is serving content images to an almost infinite range of devices, trying to serve the version that best suits the specific device currently visiting our website. For those of you who don’t know what content images are, they are the images that are part of the content; therefore, they should be shown using the <img> tag and not a CSS rule such as background-image. So far, three main proposals have been made, discussed, rejected, and even resurrected (as the picture element). The proposals are: the srcset attribute, the picture element, and the src-n attribute. All of them tried to cover the widest possible range of use cases but, unfortunately for us, none of them has been completely welcomed by all the actors involved. At the time of this writing, it seems that <picture> will be the way to go, thanks to the positive comments on the revamped proposal from representatives of the main browsers. What I mean by revamped is that the original proposal has been updated to incorporate some features of the src-n attribute proposal.

Considering that, based on the latest statistics from HTTPArchive.org, images constitute about 62% of the total weight of a web page, you can easily understand that solving the problem of content images is a primary issue, and the sooner we arrive at a solution, the better. In fact, an improvement in how and what images are served will lead to faster-loading websites, which in turn will improve the experience of those websites’ users.

Important Note

Before going ahead, I want to stress that this is still a proposal, hence the specifications are in a state of flux, and nobody can assure you that <picture> will reach the final stage. In addition, being a proposal at an early stage, no browser offers support for it yet. For this reason, I strongly suggest you follow the discussion, read the specifications from time to time, and wait until this proposal reaches a more stable state.


What’s the <picture> Element?

Citing the specifications, “the picture element is intended to give authors a way to control which image resource a user agent presents to a user, based on media query and/or support for a particular image format”. It, “should be used when an image source exists in multiple densities, or when a responsive design dictates a somewhat different image on some types of screens (“art direction”)”.

The proposed element is made of several pieces that we’re going to analyze in detail. The first thing you need to know is that it’s nothing but a container for other elements where we’ll actually specify the different versions of the image we want to show. Inside a picture element you can find two tags: <source> and <img>. The latter is used to provide backward compatibility for older browsers or, in general, for browsers that don’t support the new element. The source element has three attributes available:

  • srcset: Here is where we specify the URL of the image we want to show. The attribute allows multiple URLs separated by commas. In addition, taking a cue from the srcset attribute proposal, we can pair each URL with a screen resolution or a width (treated as a min-width rule) specification, separated from the URL by a space.
  • media: Here we write a media query that, if evaluated to true, will suggest to the UA to show the image specified in the srcset attribute.
  • sizes: The attribute where we specify the set of intrinsic sizes for the sources described in the srcset attribute. It accepts multiple sizes separated by a comma.

Until now, we’ve spent too much time talking and seen nothing concrete. Let’s fix this by seeing some code!

The <picture> in Action

As our first example, let’s say that we developed a website with a mobile-first approach. Now we want to show a content image, rendering “mobile.png” by default, “tablet.png” if the user’s screen is at least 480px wide, and “desktop.png” if it is at least 1024px wide. Because we’re smart developers, we also want to render “tablet.png” for those browsers that don’t understand the picture element. To achieve this goal, we have to write the following code:

<picture>
   <source srcset="mobile.png">
   <source media="(min-width: 480px)" srcset="tablet.png">
   <source media="(min-width: 1024px)" srcset="desktop.png">
   <img src="./Better_responsive_image_with_the_picture_element_files/tablet.png" alt="A photo of London by night">
</picture>

In the code above, we’ve adopted pixels as the unit for the media queries, but you can use em as well if you prefer.

The first example shown was quite straightforward. Let’s see how we can make it more sophisticated using the expressive power of the srcset attribute. For the sake of the example, imagine that in addition to the previous requirements we want to differentiate the images based on the resolution of the screen. The resulting code is listed below:

<picture>
   <source srcset="mobile.png, mobile-hd.png 2x">
   <source media="(min-width: 480px)" srcset="tablet.png, tablet-hd.png 2x">
   <source media="(min-width: 1024px)" srcset="desktop.png, desktop-hd.png 2x">
   <img src="./Better_responsive_image_with_the_picture_element_files/tablet.png" alt="A photo of London by night">
</picture>

As you can see, for each <source> in the code, we’ve specified more than one URL in the srcset attribute. The second URL is paired with the string 2x, separated by a space, targeting users with a high-resolution display (pixel density 2x, like a Retina display). In this case, the browser will first search for the source element that best accommodates the user’s screen based on the media query specified (if any). Then, it’ll look at the screen resolution and choose the best fit among the images specified in the srcset attribute.

As our last example, we’ll see how we can employ the sizes attribute. Let’s say that we want our image to cover all the width of the device (100% of the width), regardless of its actual size and pixel density. To achieve this goal, we can specify the size we want to cover and the size of each image in the srcset attribute as shown below:

<picture>
   <source sizes="100%" srcset="mobile.png 480w, small-tablet.png 768w, tablet.png 968w, desktop.png 1024w">
   <img src="./Better_responsive_image_with_the_picture_element_files/tablet.png" alt="A photo of London by night">
</picture>

In this case, the User Agent will automatically calculate the effective pixel density of the image and choose which one to download accordingly.


Browser Compatibility

None. “Hey Aurelio, what do you mean by none? I was already starting to update my website!” If you recognize yourself in this sentence, I’m sorry to disappoint you. As I said multiple times in this article, the picture element is still a proposal and in a very early stage. So, you have to wait for a while before having the chance to employ it. Not all is lost, though. Scott Jehl created a polyfill for this proposal called picturefill. The latter was developed several months ago but has been updated several times to accommodate the specifications. Currently, it isn’t updated to the latest specifications, so its syntax is different than the one described in this article. However, you can still take a look at it and maybe make a Pull Request to help the project.


Conclusions

In this article we discussed the issue of serving content images suitable for the device currently visiting a website. Then, we delved into the specifications of the picture element, the proposal that seems to have succeeded in reconciling browser vendors, developers, and all the other actors involved in the process. Unfortunately for us, it’s at a very early stage, so we can’t employ it in our next projects. Nonetheless, I’m confident that browser vendors will start to implement it soon. As a final note, I strongly encourage you to read and follow the specifications.


January 16 2014

19:42

Ember.js Testing

When I started playing around with Ember.js almost a year ago, the testability story left something to be desired. You could unit test an object without any trouble, but a unit test is only one way to get feedback when you’re building a software product. In addition to unit tests, I wanted a way to verify the integration of multiple components. So like most people testing rich JavaScript applications, I reached for the mother of all testing tools, Selenium.

Now, before I bash it without a proper introduction, it’s worth mentioning that Selenium is a great way to verify that your entire web application works with a full production-like database and all your production dependencies, etc. And from a QA perspective, this tool can be a great resource for teams who need end-to-end UI acceptance tests.

But over time, a seemingly small test suite built on Selenium can begin to drag the velocity of your team down to a snail’s pace. One easy way to reduce this pain is to avoid building a large application in the first place. If you build a handful of smaller web applications instead, it might help keep you afloat a little longer, because no individual build will crush the team as you grow.

But even on a small project, the real problem with Selenium is that it’s not part of the test-driven development process. When I’m doing red/green/refactor I don’t have time for slow feedback in any form. I needed a way to write both unit and integration tests that would provide quick feedback and help me shape the software I was writing in a more iterative way. If you are using a version of Ember.js >= RC3, you’re in luck because writing a unit or integration test is a walk in the park.


Installing the Test Runner

Now that we can write JavaScript tests for our application, how do we execute them? Most developers start out using the browser directly, but because I wanted something I could execute headless from the command line in a CI environment with a rich ecosystem full of plugins, I looked to Karma.

What I liked about Karma is that it only wants to be your test runner. It doesn’t care what JavaScript test framework you use or what client side MVC framework you use. It’s simple to get started with and writing tests that execute against your production Ember.js application is just a few lines of configuration.

But before we can configure Karma, we need to install it using npm. I recommend installing it locally so you can keep your npm modules isolated per project. To do this, add a file named package.json to the root of your project that looks something like the one below.

{
  "dependencies": {
    "karma-qunit": "*",
    "karma": "0.10.2"
  }
}

This example will require both Karma, and a plugin for QUnit. After you save the package.json file above, drop back to the command line and type npm install to pull down the required Node modules.

After the npm install completes, you will now see a new folder with the name node_modules in the root of your project. This folder contains all the JavaScript code we just pulled down with npm, including Karma and the QUnit plugin. If you drill down even further to node_modules/karma/bin/ you will see the Karma executable. We will be using this to configure the test runner, execute tests from the command line, etc.


Configure the Test Runner

Next we need to configure karma so it knows how to execute the QUnit tests. Type karma init from the root of the project. You will be prompted with a list of questions. The first will ask what testing framework you want to use, hit Tab until you see qunit, then hit Enter. Next answer no to the Require.js question, as we won’t be using it for this sample application. Tab until you see PhantomJS for the third question and you will need to hit Enter twice as it allows multiple options here. As for the rest, just leave them at their default option.

When you are done, you should see that Karma has generated a configuration file named karma.conf.js in the root of your project. If you want to read more about the various options Karma supports, you might find the comments in it helpful. For the sake of this example, I have a simplified version of the configuration file to keep things beginner friendly.

If you want to follow along, delete the generated configuration file and replace it with this one.

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
     
    files: [
      "vendor/jquery/jquery.min.js",
      "vendor/handlebars/handlebars.js",
      "vendor/ember/ember.js",
      "vendor/jquery-mockjax/jquery.mockjax.js",
      "app.js",
      "tests/*.js"
    ],
     
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
      
    frameworks: ["qunit"]
  });
};

This should be fairly similar to what Karma generated earlier, I’ve just removed all the comments and cut out a few options we don’t care about right now. In order to write the first unit test, I had to tell Karma a little more about the project structure.

At the top of the configuration file, you will see that I’ve set the basePath to js because all of the JavaScript assets live under this folder in the project. Next, I told Karma where it can find the JavaScript files required to test our simple application. This includes jQuery, Handlebars, Ember.js and the app.js file itself.


Writing the First Unit Test

Now we can add the first unit test file to the project. First make a new folder named tests and nest it under the js folder. Add a file in this new directory named unit_tests.js that looks something like this.

test('hello world', function() {
  equal(1, 1, "");
});

This test isn’t doing anything valuable yet, but it will help us verify we have everything wired up with Karma to execute it correctly. Notice in the Karma files section, we already added the js/tests directory. This way Karma will pull in every JavaScript file we use to test our application with, going forward.

Now that we have Karma configured correctly, execute the qunit tests from the command line using ./node_modules/karma/bin/karma start.

If you have everything set up correctly, you should see Karma execute one test and report it as passing. To verify it executed the test we just wrote, go make it fail by altering the equal statement. For example, you could do the following:

test('hello world', function() {
  equal(1, 2, "boom");
});

If you can fail this and make it pass again, it’s time to write a test with a little more purpose.


The Sample Application

But before we get started, let’s discuss the sample application used throughout this post. In the screenshot below, you see we have a very simple grid of users. In the HTML table, each user is shown by first name along with a button to delete that user. At the top of the application you will see an input for the first name, another for the last name, and finally a button that will add another user to the table when clicked.

https://dl.dropboxusercontent.com/u/716525/content/images/2013/pre-tuts.png

The example application has three problems. First, we want to show the user’s first and last name, not just the first name. Next, when you click a delete button it won’t actually remove the user. And finally, when you add a first name, last name and click add, it won’t put another user into the table.

On the surface, the full name change appears to be the simplest. It also turned out to be a great example that shows when you should write a unit test, an integration test or both. In this example, the quickest way to get feedback is to write a simple unit test that asserts the model has a computed property fullName.


Unit Testing the Computed Property

Unit testing an Ember object is easy: you simply create a new instance of the object and ask for the fullName value.

test('fullName property returns both first and last', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
});

Next if you go back to the command line and run ./node_modules/karma/bin/karma start, it should show one failing test with a helpful message describing fullName as undefined currently. To fix this, we need to open the app.js file and add a computed property to the model that returns a string of the combined first and last name values.

App.Person = Ember.Object.extend({
  firstName: '',
  lastName: '',
  fullName: function() {
    var firstName = this.get('firstName');
    var lastName = this.get('lastName');
    return firstName + ' ' + lastName;
  }.property()
});

If you drop back to the command line and run ./node_modules/karma/bin/karma start you should now see a passing unit test. You can extend this example by writing a few other unit tests to show that the computed property should change when either the first or last name is updated on the model.

test('fullName property returns both first and last', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
});

test('fullName property updates when firstName is changed', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
  person.set('firstName', 'wat');
  result = person.get('fullName');
  equal(result, 'wat billups', "fullName was " + result);
});

test('fullName property updates when lastName is changed', function() {
  var person = App.Person.create({firstName: 'toran', lastName: 'billups'});
  var result = person.get('fullName');
  equal(result, 'toran billups', "fullName was " + result);
  person.set('lastName', 'tbozz');
  result = person.get('fullName');
  equal(result, 'toran tbozz', "fullName was " + result);
});

If you add these two additional tests and run all three from the command line, you should have two failing. To get all three tests passing, modify the computed property to listen for changes on both the first name and last name. Now if you run ./node_modules/karma/bin/karma start from the command line, you should have three passing tests.

App.Person = Ember.Object.extend({
  firstName: '',
  lastName: '',
  fullName: function() {
    var firstName = this.get('firstName');
    var lastName = this.get('lastName');
    return firstName + ' ' + lastName;
  }.property('firstName', 'lastName')
});

Add the Karma-Ember-Preprocessor and Configure It

Now that we have a computed property on the model, we need to look at the template itself because currently we don’t use the new fullName property. In the past, you would need to wire up everything yourself, or use Selenium to verify the template gets rendered correctly. But with ember-testing you can now integration test this by adding a few lines of JavaScript and a plugin for Karma.

First open the package.json file and add the karma-ember-preprocessor dependency. After you update the package.json file, do npm install from the command line to pull this down.

{
  "dependencies": {
    "karma-ember-preprocessor": "*",
    "karma-qunit": "*",
    "karma": "0.10.2"
  }
}

Now that you have the pre-processor installed, we need to make Karma aware of the template files. In the files section of your karma.conf.js file add the following to tell Karma about the Handlebars templates.

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
    
    files: [
      "vendor/jquery/jquery.min.js",
      "vendor/handlebars/handlebars.js",
      "vendor/ember/ember.js",
      "vendor/jquery-mockjax/jquery.mockjax.js",
      "app.js",
      "tests/*.js",
      "templates/*.handlebars"
    ],
    
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
     
    frameworks: ["qunit"]
  });
};

Next we need to tell Karma what to do with these handlebars files, because technically we want to have each template precompiled before it’s handed over to PhantomJS. Add the preprocessor configuration and point anything with a file extension of *.handlebars at the ember preprocessor. Also you need to add the plugins configuration to register the ember pre-processor (along with a few others that normally get included with Karma’s default configuration).

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
     
    files: [
      "vendor/jquery/jquery.min.js",
      "vendor/handlebars/handlebars.js",
      "vendor/ember/ember.js",
      "vendor/jquery-mockjax/jquery.mockjax.js",
      "app.js",
      "tests/*.js",
      "templates/*.handlebars"
    ],
     
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
     
    frameworks: ["qunit"],
     
    plugins: [
      'karma-qunit',
      'karma-chrome-launcher',
      'karma-ember-preprocessor',
      'karma-phantomjs-launcher'
    ],
     
    preprocessors: {
      "**/*.handlebars": 'ember'
    }
  });
};

Integration Testing the Data-Bound Template

Now that we have the Karma configuration set up for integration testing, add a new file named integration_tests.js under the tests folder. Inside this file we need to add a simple test to prove we can stand up the entire Ember.js application without error. Add a simple QUnit test to see if we can hit the '/' route and get the basic HTML returned. For this initial test, we are only asserting that the table tag exists in the HTML that was generated.

test('hello world', function() {
  App.reset();
  visit("/").then(function() {
    ok(exists("table"));
  });
});

Notice we are using a few helpers that are built into ember-testing like visit and find. The visit helper is an ember friendly way of telling the application what state to be at during the execution. This test starts at the '/' route because that is where the People models get bound to the template and our HTML table is generated. The find helper is a quick way to lookup elements in the DOM using CSS selectors like you would with jQuery to verify something about the markup.

Before we can run this test we need to add a test helper file that will inject the test helpers and set a generic root element. Add the code below, to a file named integration_test_helper.js in the same tests directory. This will ensure our application has the test helpers at execution time.

document.write('<div id="ember-testing-container"><div id="ember-testing"></div></div>');
 
App.rootElement = '#ember-testing';
App.setupForTesting();
App.injectTestHelpers();

function exists(selector) {
  return !!find(selector).length;
}

Now from the command line you should be able to execute the integration test above. If you got a passing test, remove the table from the handlebars template to make it fail (just to help prove Ember was generating the HTML using that template).

Now that we have the integration tests setup, it’s time to write the one that asserts we show each user’s fullName instead of their firstName. We want to first assert that we get two rows, one for each person.

test('hello world', function() {
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length;
    equal(rows, 2, rows);
  });
});

Note: The application is currently returning hard coded data to keep everything simple at the moment. If you are curious why we get two people, here is the find method on the model:

App.Person.reopenClass({
  people: [],
  find: function() {
    var first = App.Person.create({firstName: 'x', lastName: 'y'});
    var last = App.Person.create({firstName: 'x', lastName: 'y'});
    this.people.pushObject(first);
    this.people.pushObject(last);
    return this.people;
  }
});

If we run the tests now, we should still have everything passing because two people are returned as we would expect. Next, we need to get the table cell that shows the person’s name and assert it’s using the fullName property instead of just firstName.

test('hello world', function() {
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length;
    equal(rows, 2, rows);
    var fullName = find("table tr:eq(0) td:eq(0)").text();
    equal(fullName, "x y", "the first table row had fullName: " + fullName);
  });
});

If you run the above test you should see a failing test because we haven’t yet updated the template to use fullName. Now that we have a failing test, update the template to use fullName and run the tests using ./node_modules/karma/bin/karma start. You should now have a passing suite of both unit and integration tests.
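The handlebars template itself is not listed in this article; a rough sketch of what the updated people table might look like (the markup and class names below are assumptions based on the tests, not taken from the sample project):

<table>
  {{#each person in controller}}
  <tr>
    <td>{{person.fullName}}</td>
    <td><button class="delete" {{action "deletePerson" person}}>delete</button></td>
  </tr>
  {{/each}}
</table>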


Should I Write Unit or Integration Tests?

If you are asking yourself, “when should I write a unit test vs. an integration test?”, the answer is simply: what will be less painful? If writing a unit test is faster and it explains the problem better than a much larger integration test, then I say write the unit test. If the unit tests seem less valuable because you are doing basic CRUD and the real behavior is in the interaction between components, I say write the integration test. Because the integration tests written with ember-testing are blazingly fast, they are part of the developer feedback cycle and should be used similarly to a unit test when it makes sense.

To show a CRUD like integration test in action, write the following test to prove the add button puts the person into the collection and that a new row gets rendered in the handlebars template.

test('add will append another person to the html table', function() {
  App.Person.people = [];
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length
    equal(rows, 2, "the table had " + rows + " rows");
    fillIn(".firstName", "foo");
    fillIn(".lastName", "bar");
    return click(".submit");
  }).then(function() {
    equal(find("table tr").length, 3, "the table of people was not complete");
    equal(find("table tr:eq(2) td:eq(0)").text(), "foo bar", "the fullName for the person was incorrect");
  });
});

Start by telling the test what state you want to work with, then, using the fillIn helper, add a first name and a last name. Now if you click the submit button it should add that person to the HTML table, so in the then that follows we can assert that three people exist in the HTML table. Run this test and it should fail because the Ember controller isn’t complete.

To get the test passing, add the following action to the PeopleController:

App.PeopleController = Ember.ArrayController.extend({
  actions: {
    addPerson: function() {
      var person = {
        firstName: this.get('firstName'),
        lastName: this.get('lastName')
      };
      App.Person.add(person);
    }
  }
});

Now if you run the tests using ./node_modules/karma/bin/karma start it should show three people in the rendered HTML.

The last test is the delete: notice we find the delete button for a specific row and click it. In the then that follows, we simply verify that one less person is shown in the HTML table.

test('delete will remove the person for a given row', function() {
  App.Person.people = [];
  App.reset();
  visit("/").then(function() {
    var rows = find("table tr").length;
    equal(rows, 2, "the table had " + rows + " rows");
    return click("table .delete:first");
  }).then(function() {
    equal(find("table tr").length, 1, "the table of people was not complete");
  });
});

To get this passing, simply add the following action to the PeopleController:

App.PeopleController = Ember.ArrayController.extend({
  actions: {
    addPerson: function() {
      var person = {
        firstName: this.get('firstName'),
        lastName: this.get('lastName')
      };
      App.Person.add(person);
    },
    deletePerson: function(person) {
      App.Person.remove(person);
    }
  }
});

Run the tests from the command line and you should once again have a passing suite of tests.
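The App.Person.add and App.Person.remove class methods called by these actions are not shown in the article; a plausible sketch, consistent with the reopened class earlier, might be:

App.Person.reopenClass({
  add: function(hash) {
    // Create a new model from the hash supplied by the controller
    var person = App.Person.create(hash);
    this.people.pushObject(person);
  },
  remove: function(person) {
    // Drop the given model from the shared people array
    this.people.removeObject(person);
  }
});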


Conclusion

So that wraps up our sample application. Feel free to ask any questions down in the comments.

Bonus: But I’m Already Using Grunt…

If you prefer to use Grunt instead of the karma-ember-preprocessor, simply remove the plugins and preprocessors configuration. Also remove templates/*.handlebars from the files section as Karma won’t need to precompile the templates. Here is a simplified karma.conf.js that works when using grunt to precompile the handlebars templates.

module.exports = function(karma) {
  karma.set({
    basePath: 'js',
 
    files: [
      "lib/deps.min.js", //built by your grunt task
      "tests/*.js"
    ],
    
    logLevel: karma.LOG_ERROR,
    browsers: ['PhantomJS'],
    singleRun: true,
    autoWatch: false,
     
    frameworks: ["qunit"]
  });
};

And that’s it!

January 14 2014

18:31

JavaScript Animation That Works (Part 1 of 4)

HTML is the language the web is built in, and it is kind of a strange beast. Although it was originally intended as a way to easily share academic information across the Internet, it has been slowly transformed to accommodate the media-rich environment we know and love, sort of like a cherished lawn mower that has been fitted with jet engines.

Because of the haphazard nature of HTML (and JavaScript, the programming language that manipulates elements in HTML and makes them interactive), sometimes we have to think outside the box a bit. In this tutorial series, I will show you how to do cross-browser animation using a method called spriting, using only JavaScript and HTML (and a bit of CSS too). And, because this is a learning opportunity (and I am stubborn and like to do things the hard way), we will be doing it all without any external libraries (like jQuery).

This will be a four-part series – I will explain the spriting itself in part one (this article) with some basic JavaScript, but then in later installments we will move into some intermediate techniques like encapsulation, event handling, and touch screen support.

So let’s get started!


What Is Animation?

Animation is based on a phenomenon called persistence of vision, which basically says that if your brain sees lots of similar still images fast enough, then it will appear as though it is one moving image. Every kind of film or video uses this basic technique – many, slightly different frames are shown in quick succession to make something appear moving. Film typically runs at 24 frames per second (see note 1 below), while broadcast television in North America is shown at 29.97 frames per second (see note 2 below). So, in other words, what we want to do is create something that shows similar frames really quickly (several times a second).


The Difficulties on the Web

There are two main reasons animation is difficult to use on the web:

  1. The first is that different web browsers have different ways they interpret HTML and JavaScript, so what works on one device often doesn’t work on another. Flash works great on most browsers, but support is starting to drop for it and iOS devices won’t allow it at all. Canvas has a lot of potential, but Internet Explorer 8 doesn’t support it. Same goes with Adobe Edge Animate. GIFs work on everything, but you can’t control the animation or make it interactive.
  2. And for the second, every time an image is served on a web page, a separate request is made between the browser and the server. These requests can add up, even over a lightning-fast Internet connection, making it unmanageable to serve several frames each second.

The Solution: Spriting

One way around these problems is to make a sprite sheet. For elements like divs, we can set a background image that may be larger than the element itself. We can also set the background position, so we determine exactly which part of the larger image will show. A sprite sheet is a larger image made up of several smaller images that we can move around, so it can take the place of many small images. Take a look at the example below, using J, the mascot of my company, Joust Multimedia:

[Image: the J sprite sheet – ten sprites of the mascot on a single PNG]

Although there are ten different images of J, they are placed together on one larger PNG file (we are using PNGs because they can show transparency). If we have a div that is only the size of one of the images, and we set this PNG as the background, it will look like a single image.

See the Pen hazdm by Steven Riche (@StevenRiche) on CodePen.

Although this seems like a lot of trouble to go through to show an image, this method nicely fixes the two issues we had before. With very little JavaScript (one line!), you can change the background position of a div, and it works on everything. Also, since all of these frames are on the same image, it will only take one request to load that image in the page. So, once the page loads, it can switch between sprites with no problem at all.

So how do we set this up to animate easily then? The first step is to create the sprite sheet. You will want to know what the final dimensions of your image should be, and space the sprites accordingly in a grid. For instance, my J image is going to be 40px wide by 50px tall, so I lined up my sprites exactly 40px apart horizontally and exactly 50px apart vertically. It will probably be easiest if you set your starting sprite in the upper left corner.

Then we will set up a div with a bit of CSS to make sure that everything appears properly.

<div id="stage">
	<div id="j" class="character"></div>
</div>

And here is our CSS to make sure the sprite is showing correctly:

.character {
	/*
	 * Very important that we set the height and width of 
	 * our characters to the height and width of the sprites
	 */
	height:50px;
	width:40px;
	/*
	 * We need to position them absolutely so we can have full  
	 * control over their position within the stage
	 */
	position:absolute;
	left:100px;
	top:120px;
}

#j {
	/*
	 * And now we set the background image for the character div 
	 * to be the first sprite (in the upper left corner)
	 */
	background-image: url('j.png');
	background-repeat:no-repeat;
	background-position:0 0;
}

Notice the following things:

  • We specify the width and height of the div to the size of our sprite
  • We specify the background-repeat to 'no-repeat'
  • We specify the background-position to '0 0' – this will show the sprite in the top left corner

Now, it will just take a single line of JavaScript to change the background position to show the next sprite.

document.getElementById('j').style.backgroundPosition = '-40px 0px';

Here, we are selecting the element (with id='j'), and setting the style attribute 'backgroundPosition'. Notice that it is spelled 'backgroundPosition' in JavaScript, and not like 'background-position' in CSS. Also notice that in JavaScript, the 'px' is required for both the x and y amount – we cannot just pass it numbers. And because we are moving the background image, we have to move it in the opposite direction from what you might expect – to move to the sprite on the right, we have to have the image move 40px to the left.

Now, if we just have something simple to execute this code (like a button), we can see the frames changing in action.
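For instance, a button wired up like the sketch below would step through the sprites on the top row of the sheet. The button id and the number of frames per row are assumptions for illustration, not values taken from the demo source.

<button id="next">Next frame</button>

<script>
	var frame = 0;
	var framesInRow = 5; // assumed number of sprites on the top row of the sheet

	document.getElementById('next').onclick = function () {
		// Each sprite is 40px wide, so shift the background 40px left per frame
		frame = (frame + 1) % framesInRow;
		document.getElementById('j').style.backgroundPosition = (-40 * frame) + 'px 0px';
	};
</script>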

See the Pen DIsgk by Steven Riche. Check out the source code for this page. It has all of the examples here with thorough comments. And here is a link to the sprite sheet I am using.

Next Up

This is still not really animation though – in part two of this series we will actually animate some running and jumping, by creating loops with the different sprites.

By part four, we will be creating mouseovers for a bit of robot action.

See the Pen ByGtv by Steven Riche.


In Conclusion & Drawbacks

Although this can be a great method for animating on the web, there are a few drawbacks. First, it can require you to create each individual frame of animation, which can be time-consuming. Second, browsers don’t have the most accurate of timers for animation, so if it is critical for your animation to be timed perfectly, then this may not work. Finally, mobile Safari (used on iPhones and iPads) has a “feature” where if you have a background image that is either larger than 2MB or larger than 1024 X 1024 X 3 pixels (or 3,145,728 total pixels), it will automatically rescale the image, ruining the spriting effect. This means that really large sprites, or animations with a very large number of sprites, are out of the question. But for simple, small animations that you want to be very interactive, this is an easy and great way to get something that works everywhere.

Interesting Side Notes

1: Before sound was introduced with film, there wasn’t really a standard frame rate. The cameras were operated by a hand crank, so if you had a rookie cameraman the frame rate might slow down and speed up dramatically unintentionally. Similarly, less reputable theaters were notorious for telling their projectionists to crank the projector quicker to speed up the show so they could fit in more screenings. This is also why we stereotypically think of pre-sound movies as moving around comically fast – most were filmed around 16-18 fps, so when we play them today at 24 frames per second, they are moving faster than they were originally intended.

2: Television was originally broadcast at 30 fps in North America, but color television caused a glitch when shown at that speed. Engineers figured out that they could fix it by slowing down the frame rate by 0.1%, which is why it is now set at 29.97 fps. Also, besides all of the wacky technical issues involved with converting a film in 24 fps to display on television at 29.97 fps, showing television at a faster fps has had an interesting effect on the public. Many people watching the test screenings of “The Hobbit” at 48 fps reported that the increased frame rate made the film look “cheaper”, even though it was much higher quality than a typical film, just because they had grown to associate faster frame rates with watching something on television.

January 13 2014

19:13

Browser Testing in the Cloud Redux

I’ve written quite a bit about browser testing solutions trying to help identify techniques and tools that make cross-browser development easier. My last article on the subject covered how to use BrowserStack to test any number of browsers all from one central tool; your own browser.

I was on a Windows PC back then so testing multiple browsers was a bit easier and testing tools were mainly complementary to my work. Now that I’m on OS X, the need for tools to round out my testing strategies is even more important, specifically because of the lack of Internet Explorer on the OS.

I’m a bit of a stickler for what I install on my computers and I prefer online tools when available. I’m also always on the hunt for new tools that make cross-browser testing easier and decided to give CrossBrowserTesting.com a run. I’ll go over some of the key features of the service and how to leverage it to improve your testing capabilities.


ZOMG That’s a Lot of Browsers

First, let’s mention that like every reliable service in this space, CrossBrowserTesting.com charges a monthly fee. I’m not surprised at all by this because the bottom line is that they have an infrastructure to support and well, that costs money. Their fee structure is based on the number of minutes you’d like available to you on a monthly basis but with a unique twist in that they allow you to roll over a certain number of minutes, month to month. So if you don’t use all of your minutes, you can roll some over for the next month.

Onto the service itself. There are a couple of things that are important to me in these types of services. These are:

  • Breadth of browser support across major OS versions
  • Mobile support (as I’m starting to shift to mobile web)
  • Debugging tool support
  • Responsiveness of the UI
  • Form factor support
  • Local system testing support (for example: proxy-based debugging)

All of these matter because they provide you the broadest testing surface across multiple devices. But to be honest, without debugging tool support (like Chrome DevTools, IE F12 Tools, etc.), a service like this would be far less compelling – only marginally better than a screenshot service. And being able to test locally is an obvious must-have, allowing you to test interactively before deploying to staging or production. So these criteria are important to consider.

The first thing I noticed about the service is its amazing breadth of browser and device form factor support. Every major OS is covered (including Ubuntu) and every OS version has a fairly comprehensive list of supported browser versions for testing.

win8

In addition, there’s extensive support for mobile devices and browsers covering earlier and more modern versions of Android, iOS, Blackberry Bold and Windows Phone 8. The interesting (and really beneficial) thing is that for specific Android versions, they’re allowing you to test against competing browsers like Firefox Mobile, Maxthon and Opera.


Testing With the Service

If you’ve used BrowserStack or a similar service, you’ll feel right at home in CrossBrowserTesting.com. The user experience matches very closely to what I’ve seen before which made jumping into it fairly trivial. You’re initially presented with a dashboard that gives you access to the main features. These include:

  • Live browser testing
  • Automated screenshot service
  • Establishing a local connection

The live browser testing is what I’m most interested in. For me, I need to ensure that the rendering is consistent so the first thing I did was to do a baseline test to see if a site will render the same in my virtual browser as it does in my local browser. To mimic my local settings I chose to start the session in Mavericks, running under the most recent stable version of Chrome:

gnc-baseline

One thing to note is that in the OS/browser selection form, you’re solely presented with the browser options available for that specific OS version like this:

gnc-browser-options

I went with GNC’s website because, well, I’m a bit of a fitness buff and they have a lot of interactive points as well, such as JavaScript-based fly-over menus and cycling feature panels. I figured it was a good test to see if the service could handle all of the interaction.

Looking at the two screenshots, you can see that the rendering for Chrome on Mavericks on both systems is exactly the same. This is a good thing, although it’s a bit trippy to see Chrome on Mavericks within Chrome on Mavericks. Inception anyone?

gnc-real

Local machine

gnc-cbt

Remote virtual browser

Once your session is running, you can change your target OS and browser version at any time by clicking on the Change Configuration button which displays the panel with dropdown choices. Note that changing the OS or browser will reload your session but it sure beats having to spark up multiple virtual machines, especially for cursory reviews of pages.

Getting the baseline UI was great, but a more important test is to see how the site responds to interaction. Let me preface this by saying that I’ve not found a service like this that offers instantaneous response. There will always be a lag because these browsers are virtualized. The key thing you want is to ensure that normal interaction, like hovering over a menu or working a UI control (like a scrolling panel), performs as expected (albeit a little slower). For example, GNC’s site has a dropdown menu system that expands when you hover over a menu option. Notice that hovering over it will expand the menu and, equally important, give me the option to drill down into it.

gnc-menu

This interactivity is what makes these services so valuable. The days of having to rely on screenshot services and a ton of VMs to see how your site renders across a ton of browsers are gone.


What About Debugging?

Good question. Browser-based developer tools have really progressed nicely and we depend on them daily. Thankfully, CrossBrowserTesting.com has included the default debugging tools with each browser giving us access to Chrome DevTools, the IE F12 Developer Tools, and Firefox’s Web Developer Tools as well as Firebug for older versions of the browser. Notice here that I’ve fired up the IE F12 tools in IE11 on Windows 7.

gnc-debug

The tools are completely functional allowing me to inspect the markup and DOM structure of the page as well as set styles and change text, just like you would on your local PC. You can see here how I’m able to update the inline JavaScript on the site:

gnc-debug-2

What this translates to is the ability to leverage the debuggers to do advanced debugging work like script debugging across any browser and browser version.

One thing I was concerned about is whether the tools would accurately show page load times via the network traffic monitoring panels and in my tests, they seem to be consistent with what I saw locally. This means I can feel confident, to some degree, that the load times will be more or less on par (of course taking into account network issues).

The one thing that I think would be very hard to measure, though, is page performance via the new suite of performance profilers included in Chrome and Internet Explorer. A lot of that data is directly affected by aspects of your computer, especially when rendering is GPU-enhanced. Testing this on virtualized browsers or virtual machines just isn’t real-world so I wouldn’t recommend it. If you’re an interactive developer (games), then it’s best to test on your own device to get a better understanding of performance.


Testing Different Form Factors

As I begin focusing on mobile more and more, the need to test across multiple mobile OSs and different form factors becomes a high priority. Unfortunately, short of getting a very big inheritance, winning the lotto, or finding a loving sponsor, building a full-featured mobile device lab just isn’t in the cards. And at the pace things are going, things are only getting tougher as manufacturers continue to push the limits of mobile browsers and device sizes.

CrossBrowserTesting.com offers the ability to test across the major mobile OSs, simulating most of the popular mobile devices like iPads, iPhones, Nexus 7s and such. This is certainly not an all-encompassing list of mobile devices, and I assume it’s meant to cover the most modern OSs and devices available.

The process for testing is exactly the same as what we did for desktop browsers, except the rendering will be within the size of the specific mobile device you’ve selected:

gnc-iphone

Again, the service uses simulators to allow you to test out how your site will render on a mobile device. Keep in mind, though, that while simulators are good, it’s always best to test against a real device if possible.

New devices come out all the time and I wouldn’t expect every form factor to be on here. I think a nice addition would be to allow a user of the service to define the viewport size, as opposed to being presented solely with default screen resolutions. This would also offer more flexibility in testing sites that are responsive.


Screenshots

Before interactive services like CrossBrowserTesting.com became available, screenshot services became known as one of the quickest ways of seeing how your site rendered across multiple browsers. While they’re kind of passe now, they’re still useful and interestingly enough, I’m seeing most of these browser testing services spin up screenshot capture as part of their offerings. So it seems this practice is having a bit of a renaissance, most likely driven by the increasing number of browser versions, devices and form factors we need to account for.

Using the service is straightforward and as easy as entering a URL, selecting the browsers you’d like screenshots from, and clicking the Take Screenshots button:

screenshot

The nice thing about this is that it allows you to choose as many device/OS/browser combinations as you’d like as well as define the resolution on a per-target basis. This generates a series of snapshots that you can review:

screenshot-gnc

Clicking individual screenshots displays a larger image allowing you to get a detailed view of the rendering.

A couple of things to keep in mind: It takes a little while for the screenshots to be captured and rendered. So the more browsers you select, the longer you’ll wait. Unlike other services where you wait your turn in a queue, this wait seems to be simply associated with processing time. You’re paying for the service so I can’t imagine there being a queue like BrowserShots.org. Also bear in mind that some of these screenshots are invariably derived from simulators and as I mentioned before, simulators don’t always render the same as a real browser. Lastly, the screenshot is for a specific page, not the entire site.

Nonetheless, the fact that I can fairly quickly get an idea of how my site is rendering across so many devices helps me to drill-down into specific browser combinations that need special attention.

And that’s where a really neat feature comes in. The service offers the ability to compare layouts side-by-side so you can see rendering differences between different browsers:

screenshot-compare

As you can see in the screenshot, it goes a step further by also detailing the differences and creating a transparent yellow overlay on each panel to highlight the actual differences. I’m sure you can relate to the frustration many a developer has felt over discovering slight layout differences after the fact. This helps to bring that forward during the testing process. And you can scroll through and compare multiple scenarios by clicking the Prev and Next buttons.


Testing Local Files Remotely

The true value of a service like this is to facilitate your local debugging efforts. Simply allowing you to test publicly-available sites offers such limited value in terms of your overall testing strategy. CrossBrowserTesting.com provides the ability to test your local files against their remote servers using a Java-based proxy applet or the command line, again leveraging Java to create a proxy. This is similar to other services and is necessary to establish the connection between your local PC and the remote servers as well as allowing you to be able to tunnel past any firewalls you might have in your company. Once the connection is set, you’re able to test out both local files via direct access or via URL from your local web server.

The team at CrossBrowserTesting.com have created a video which gives you a good explanation and demonstration of how this part of the service works.


Closing Thoughts

It’d be truly great if we didn’t need these services. That would mean every browser rendered totally as expected across every device that supported them. Unfortunately, we still have a bit of browser fragmentation and every browser version tends to have their own quirks to contend with. So services like CrossBrowserTesting.com provide real value in streamlining cross-browser testing.

Overall, I think the service is very good albeit not without some quirks of its own. There were some intermittent lockups that I experienced in the live testing which may be attributed to Flash and in some sessions, seeing a number of browser icons in the OS dock left me scratching my head as to why they were there when I chose a specific target browser. These issues didn’t necessarily prevent me from doing what I wanted to do (testing) but it felt like things needed to be tidied up a bit.

The layout comparison feature, though, was pretty hot and something I could see myself using regularly.

What I am seeing is that price could be a big success factor for the breadth of services they’re offering. CrossBrowserTesting.com appears to have set themselves at a very competitive price point incorporating live testing, screenshots and local testing into one fixed monthly cost as opposed to separate pricing for specific services. This is very appealing, especially for price-conscious developers.

The big factor, though, will be how much time you need for testing. From experience, two and a half hours (the amount of time allotted for the Basic plan) seems a little limited especially when accounting for latency of rendering. Again, your mileage may vary but it’s certainly something to consider.

January 10 2014

21:22

Building a Customer Management App Using AngularJS and Laravel

When creating a single-page app we should use some kind of framework to do some of the job for us, so we can focus on the actual functionality. AngularJS fits here perfectly, because features like dynamic dependency injection and bi-directional data binding are just great. Sometimes we also require some kind of server. If you’ve chosen PHP then Laravel may be your best option, as it’s easy to work with and pretty powerful.


Introduction

In this tutorial you will create a simple customer/transaction management system with the ability to add and remove both transactions and customers. This is probably not the kind of thing you make very often, but it shows how to use features of both frameworks.

Before we start, you should set up a MySQL database that we will use (Laravel supports many others, but MySQL is still the most popular one). You don’t need a web server since we will be using PHP’s built-in one (but please keep in mind that this solution is only for development and should never be used in production – it lacks many features that are required for your app to work properly in public). For that, we will need at least PHP version 5.4.0.


Preparation

The first thing we have to do is to install Laravel. The full process is described on Laravel’s website. After that, you should have your project directory created with all of Laravel’s files in there. Navigate to that directory in your command line and run this command there:

php artisan serve

If all goes OK, you should see that the local development server was started on localhost:8000. Open your browser and navigate there; you should see Laravel’s welcome page:

first_run

Now we can proceed to the actual application.


Migrations and Models

Models in Laravel are just like in any other MVC framework. Laravel uses the Eloquent ORM to ease the work for you – you will probably never need to write an SQL query again (unless you want something that Eloquent does not support). Using migrations, you can modify the database structure with the ability to roll back changes if something goes wrong. You can read more about migrations in the documentation.
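For example, if a migration turns out to be wrong, the most recent batch of migrations can be undone with a single artisan command (shown here just as a quick reference; see the documentation for the other migration commands):

php artisan migrate:rollback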

In our app we will use two models:

  • Customer – will hold the customer data
  • Transaction – will hold the information about a transaction

Let’s start by creating migrations for them. If you have not done so already, shut down the server we started earlier (Ctrl + C).

Customers

First, invoke this command:

php artisan migrate:make create_customers_table

This will create a migration file with a basic structure for you. Now navigate to app/database/migrations. There should be a file with its name starting with a timestamp and ending with “create_customers_table“. Laravel automatically created this basic structure for you. The up() method is called when the migration is applied, and down() when it’s rolled back.

First call the Schema::create() method. It takes two arguments – the schema’s name and a callback function:

Schema::create('customers', function ($table) {

The callback is executed when the table is created. The table object is passed as the $table variable and we manipulate the table’s structure using it. Let’s add an auto-incrementing id field:

	$table->increments('id');

Next there will be three string fields for the customer’s first name, surname and email:

	$table->string('first_name');
	$table->string('last_name');
	$table->string('email')->unique();

We make the email field unique by calling the unique() method on it.

The last method is for the timestamps:

	$table->timestamps();
});

This will create two date fields in the schema: created_at and updated_at. These will be used by Eloquent to store the time when the item was created and updated.

Finally, the code should look like this:

public function up() {
	Schema::create('customers', function ($table) {
		$table->increments('id');
		$table->string('first_name');
		$table->string('last_name');
		$table->string('email')->unique();
		$table->timestamps();
	});
}

The down() method is much simpler – it just deletes the schema:

public function down() {
	Schema::drop('customers');
}

Transactions

The code here will be similar to the customers’ one. First invoke this command:

php artisan migrate:make create_transactions_table

Now locate the appropriate file in the app/database/migrations and open it. Like earlier, start by creating the schema:

Schema::create('transactions', function ($table) {

Now add the fields for the id, transaction’s name, its cost and the id of the customer that it belongs to:

	$table->increments('id');
	$table->string('name');
	$table->float('amount');
	$table->integer('customer_id');

And of course the timestamps:

	$table->timestamps();
});

The final code should look like this:

public function up() {
	Schema::create('transactions', function ($table) {
		$table->increments('id');
		$table->string('name');
		$table->float('amount');
		$table->integer('customer_id');
		$table->timestamps();
	});
}

And now the down() method:

public function down() {
	Schema::drop('transactions');
}

Database Configuration

Now, before you apply the migrations, you’ll have to configure the connection to your database. Open the app/config/database.php file and go to line 55. Here is the configuration data for MySQL (there are a few others in there; for example, you could use SQLite or Postgres):

'mysql' => array(
	'driver'    => 'mysql',                 // database driver, don't touch
	'host'      => 'localhost',             // host of the database, usually localhost unless you have your db on some server
	'database'  => 'database',              // name of the database you will be using, it must be created earlier
	'username'  => 'root',                  // username that the script will use to connect, I strongly advise against using the root user for this
	'password'  => '',                      // password for the user above, it's better not to use a blank one
	'charset'   => 'utf8',                  // encoding of the db
	'collation' => 'utf8_unicode_ci',       // db's collation setting
	'prefix'    => '',                      // prefix of the database tables, useful if you have multiple scripts using the same database
),

After you have filled that in, you are good to go. Make sure you saved the file and invoke this command from your app’s main directory (the one with the artisan file in it):

php artisan migrate

And that’s it. If there were no errors, that means that the tables were created successfully. You can connect to your db using, for example, phpMyAdmin to check manually if you want.

Models

In Laravel, creating a model after you’ve configured your database using migrations is really quick. Navigate to app/models and delete the example User.php file that is there. Now create two files named Customer.php and Transaction.php.

Let’s start with Customer.php. Every model in Laravel has to extend the Eloquent class:

class Customer extends Eloquent {

Now we will define a relationship between the customer and their transactions. This is done by defining a public method in the model with the name of the property we would like to have in it (in this case transactions):

	public function transactions() {

Now in the body of the function there will be only one line:

		return $this->hasMany('Transaction');
	}
}

This tells Eloquent that it should provide all transactions with customer_id of the customer under a property named transactions.

Now we will do pretty much the same for the transactions, but we will reverse the relationship to make the transaction’s owner accessible via the customer property:

class Transaction extends Eloquent {
	public function customer() {
		return $this->belongsTo('Customer');
	}
}

This is done using the $this->belongsTo() method of the model.
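As a quick illustration of what these two relationships give us (a hypothetical snippet, assuming records with id 1 exist – it is not part of the app itself), both sides can now be traversed without writing any SQL:

$customer = Customer::find(1);
$transactions = $customer->transactions; // Collection of this customer's Transaction models

$transaction = Transaction::find(1);
$owner = $transaction->customer;         // the Customer this transaction belongs to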


Controllers

Now to actually use the models we have to create controllers for them. Head to the app/controllers directory, delete the HomeController.php only – BaseController.php is important as our controllers will extend it. Now create two files: CustomerController.php and TransactionController.php.

CustomerController

This controller will handle everything related to the customers – adding, removing and showing a list of them. Start by defining the class:

class CustomerController extends BaseController {

We will be using Laravel’s feature named RESTful controllers. It makes creating routes easier because we only have to define the base URI and Laravel will handle everything for us. This requires you to start your function names with the appropriate HTTP verb and then continue with the subroute name (using camelCase). So for example, if we would have a method named getNames and the base URI would be /customers, then the method will be accessible at /customers/names.

The getIndex(), postIndex(), deleteIndex() etc. methods will be mapped to the default route (in this case /customers).
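To make the convention concrete, here is a hypothetical extra method (getNames is not part of our app, it is only for illustration) together with the URL Laravel would map it to once the controller is registered under /customers:

class CustomerController extends BaseController {
	// GET /customers/names
	public function getNames() {
		return Customer::lists('last_name'); // responds with a JSON array of last names
	}
}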

Now let’s define our first route – getting the customer by their id:

	public function getIndex() {

Let’s get the id from the query parameters (Laravel provides a nice Input class to deal with that, so you don’t have to use $_GET, $_POST and $_FILES):

		$id = Input::get('id');

And search for the customer in the database using that id:

		return Customer::find($id);
	}

Every method of the controller has to return a value that is a string or has a __toString() method. In this case the Customer model that is returned will be converted to JSON before sending.
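For example, requesting /customers?id=1 after creating the customer used later in this tutorial would produce JSON roughly like this (the timestamp values are illustrative):

{"id":1,"first_name":"Jane","last_name":"Doe","email":"jdoe@gmail.com","created_at":"2014-01-10 21:22:00","updated_at":"2014-01-10 21:22:00"}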

Now let’s return a list of all customers (this will be accessible under /customers/all):

	public function getAll() {
		return Customer::all();
	}

As you can see, we can get all customers using the model’s all() method.

Now the longest part, adding a new customer:

	public function postIndex() {

First let’s check if all information needed was provided. We can do this using the Input::has() method:

		if (Input::has('first_name', 'last_name', 'email')) {

Let’s put all of the input fields in the $input variable to avoid calling Input::get() over and over. This can be done using Input::all():

			$input = Input::all();

Next we will check if any of the inputs are empty. If so, we will return a HTTP 400 Bad Request error with a more verbose message:

			if ($input['first_name'] == '' || $input['last_name'] == '' || $input['email'] == '') {
				return Response::make('You need to fill all of the input fields', 400);
			}

Since we wanted to return a status code other than 200 instead of just returning the message as a string, we used Response::make(), which takes the data to send as the first parameter and the status code as the second. Take a look at the docs if you want to know more about responses.

Now we finally create a new Customer model and feed it with the data provided:

			$customer = new Customer;
			$customer->first_name = $input['first_name'];
			$customer->last_name = $input['last_name'];
			$customer->email = $input['email'];

After that we can save the newly created model and respond to the request with it:

			$customer->save();
			
			return $customer;

Here we handle the case if not all of the inputs were provided:

		} else {
			return Response::make('You need to fill all of the input fields', 400);
		}
	}

Finally, we also need the ability to remove the customers. This one is really short:

	public function deleteIndex() {

We start by getting the id of the customer to delete:

		$id = Input::get('id');

Next, we search for and delete the customer:

		$customer = Customer::find($id);
		$customer->delete();

After that, we respond to the request with the id provided:

		
		return $id;
	}
}

Now, before the routes can be accessed, we have to hook them up. Open the app/routes.php file, delete everything but the comment and add this line at the end of the file:

Route::controller('/customers', 'CustomerController');

This will tell Laravel to route all requests at /customers to our CustomerController. Now you can use CURL to play with it. First start the server with php artisan serve and then you can, for example, create a customer:

curl -X POST -d "first_name=Jane&last_name=Doe&email=jdoe@gmail.com" http://localhost:8000/customers

Then you can get the list of all customers:

curl http://localhost:8000/customers/all
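And since deleteIndex() also reads the id from the request input, removing a customer can be tested the same way (assuming a customer with id 1 exists):

curl -X DELETE "http://localhost:8000/customers?id=1"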

TransactionController

This, like the model, is very similar to the CustomerController. First create the class:

class TransactionController extends BaseController {

Then let’s define the method to get all transactions for a user:

	public function getIndex() {
		$id = Input::get('id');
		return Customer::find($id)->transactions;
	}

As you can see we are using the relationship defined earlier to get the transactions (now recall the query you had to write to achieve the same thing using plain PHP and SQL).

The next thing will be the creation of transactions:

	public function postIndex() {

Like earlier, we are checking if all of the required information is provided:

		if (Input::has('name', 'amount')) {

If so, assign it to an $input variable:

			$input = Input::all();

Check if any of the values provided are empty and if so return an error:

			if ($input['name'] == '' || $input['amount'] == '') {
				return Response::make('You need to fill all of the input fields', 400);
			}

Now create the transaction and supply it with all of the info provided:

			$transaction = new Transaction;
			$transaction->name = $input['name'];
			$transaction->amount = $input['amount'];

Now we need to add it to the appropriate customer. Let’s find them by the id provided and add the $transaction to their transactions list:

			$id = $input['customer_id'];
			Customer::find($id)->transactions()->save($transaction);

This is done using the transactions()->save() method provided by Laravel. Now we can respond with the transaction created:

			return $transaction;

And handle the case where none or not all of the data was provided:

		} else {
			return Response::make('You need to fill all of the input fields', 400);
		}
	}

After that there is also a method to delete the transaction in the same way that we deleted the customer:

	public function deleteIndex() {
		$id = Input::get('id');
		$transaction = Transaction::find($id);
		$transaction->delete();
		
		return $id;
	}
}

Now just add the route and you can test the controller using CURL:

Route::controller('/transactions', 'TransactionController');
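For example, assuming the customer with id 1 from earlier exists, you can create a transaction for them like this:

curl -X POST -d "name=Subscription&amount=9.99&customer_id=1" http://localhost:8000/transactions

And fetch that customer's transactions with:

curl "http://localhost:8000/transactions?id=1"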

Conclusion

Alright, this is the end of the first part – in the second part of this tutorial, we will create the front-end using AngularJS. Feel free to add more features to your app (like editing customers or sorting). If you did not find the information you were looking for, take a look at Laravel’s documentation.

January 09 2014

18:40

Creating a Photo Tag Wall With Twilio Picture Messaging & PHP

Twilio’s recently announced Picture Messaging has vastly opened up what we can do with text messaging: now we can attach photos to our text messages and have them used in different ways.

In our case, we are going to build a Photo Tag Wall, which will contain photos linked to tags that will be displayed on a website.

This can be handy for events, or parties, or just about anything where you want to associate photos and tags.

To process our photos, we’ll be doing a few different things: we’re going to resize them, and we’ll also use Pusher to add a little real-time updating to the wall, with the idea of having it displayed on a monitor somewhere and letting it update as needed for everyone to see what is displayed.

We’re going to use the Jolt Microframework for PHP, and Idiorm and Paris for our MySQL handling.


Getting Started

Ok, first let’s set up our database:


CREATE TABLE `tag` (
	`id` bigint(20) NOT NULL AUTO_INCREMENT,
	`name` varchar(255) NOT NULL DEFAULT '',
	`slug` varchar(255) NOT NULL DEFAULT '',
	PRIMARY KEY (`id`),
	KEY `name` (`name`),
	KEY `slug` (`slug`)
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

CREATE TABLE `photo` (
	`id` bigint(20) NOT NULL AUTO_INCREMENT,
	`tag_id` bigint(20) NOT NULL DEFAULT '0',
	`file` varchar(255) NOT NULL DEFAULT '',
	`from` varchar(255) NOT NULL DEFAULT '',
	`country` varchar(255) NOT NULL DEFAULT '',
	`datetime` timestamp DEFAULT CURRENT_TIMESTAMP,
	PRIMARY KEY (`id`),
	KEY `tag_id` (`tag_id`),
	KEY `file` (`file`)
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

We’re setting up two tables: one to store the tags, and one for the photos along with the id of the tag they are associated with.

The photo table will store the tag’s id, the image file, and some meta data about the phone number that sent the photo.

We’re also going to need to download the Jolt framework, the Twilio PHP Library, and Idiorm and Paris.

The first thing you’ll want to do is grab these packages from their respective websites.

Now that you have all the packages downloaded to your computer, it’s time to set up your directory structure. We’ll be putting the files into our site’s root folder.

We’re putting the web-service-related files inside the Services folder, since it helps us keep track of where things are.

Ok, let’s set up our config.ini file. Open up config.ini in your editor and modify the following settings:


; site settings
site.name = my site
site.url = http://mysiteurl.com

; rendering vars
views.root = views
views.layout = layout

; session vars
cookies.secret = IeNj0yt0sQu33zeflUFfym0nk1e
cookies.flash = _F

; db stuff
db.enabled = true
db.host = MY DATABASE HOST
db.name = MY DATABASE NAME
db.user = MY DATABASE USER
db.pass = MY DATABASE PASSWORD

; twilio stuff
twilio.accountsid = MY TWILIO ACCOUNT SID
twilio.authtoken = MY TWILIO AUTH TOKEN
twilio.from = MY TWILIO FROM NUMBER

You can see what you’ll have to fill in here, your site name, and URL, your database info and your Twilio info.


Now for the Coding!

To get started, let’s set up our models. We’ll create a file inside the system folder called models.php:


<?php

class Tag extends Model {
	public function photos() {
		return $this->has_many('Photo');
	}
}

class Photo extends Model {
	public function tag() {
		return $this->belongs_to('Tag');
	}
}

?>

This is a pretty basic model layout, but one nice thing about it is that we’re using Paris to establish a relationship with the tag table. In fact, because we previously built our database with a tag_id field in the photo table, this model knows to associate each photo with its tag via tag_id, a column name made up of the related table’s name (tag) and its primary key (id).

The same is true for the Photo class, where we’ve set it to belong to a tag as specified in the tag() function.

This is handy for building a quick model system without a lot of overhead.
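As a quick sanity check of how these relationships will be used later on (a hypothetical snippet assuming a tag with id 1 exists – it is not part of the app itself):

$tag = Model::factory('Tag')->find_one(1);
$photos = $tag->photos()->find_many();   // all Photo records whose tag_id is 1

$photo = Model::factory('Photo')->find_one(1);
$owner = $photo->tag()->find_one();      // the Tag this photo belongs to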

We also want to create our functions.php file, which we will also keep inside the system folder:


<?php

function slugify( $string ){
	$string = strtolower( trim($string) );
	$slug = preg_replace('/[^A-Za-z0-9-]+/', '-', $string);
	return $slug;
}

function cropResize($img, $out = '', $dSize = 170){
	$x = @getimagesize($img);
	$sw = $x[0];
	$sh = $x[1];
	$yOff = 0;
	$xOff = 0;

	if($sw < $sh) {
		$scale = $dSize / $sw;
		$yOff = $sh/2 - $dSize/$scale/2;
	} else {
		$scale = $dSize / $sh;
		$xOff = $sw/2 - $dSize/$scale/2;
	}

	$im = @ImageCreateFromJPEG ($img) or // Read JPEG Image
	$im = @ImageCreateFromPNG ($img) or  // or PNG Image
	$im = @ImageCreateFromGIF ($img) or  // or GIF Image
	$im = false;                         // If image is not JPEG, PNG, or GIF

	if (!$im) {
		readfile ($img);
	} else {
		$thumb = @ImageCreateTrueColor ($dSize, $dSize);
		imagecopyresampled($thumb, $im,
			0, 0,
			$xOff, $yOff,
			$dSize, $dSize,
			$dSize / $scale, $dSize / $scale);
	}

	if( $out == '' ){
		header('content-type:image/jpeg');
		imagejpeg($thumb);
	}else{
		imagejpeg($thumb, $out);
	}
}

functions.php will contain two core functions: slugify(), which will convert tag names into slugs, and cropResize(), which will take the image we pass to it and save it with new dimensions.

We’ll be using these functions quite a lot coming up.
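Just to illustrate what these helpers do (hypothetical values, not part of the app itself):

echo slugify('Summer Party 2014'); // prints "summer-party-2014"

// crop and resize a downloaded image into a 250x250 thumbnail
cropResize('images/original/photo.jpg', 'images/processed/photo.jpg', 250);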

Most of our code will be stored inside index.php, so let’s set up the bare bones for it:


<?php

include("system/jolt.php");
require 'system/idiorm.php';
require 'system/paris.php';
require 'system/models.php';
require 'Services/Twilio.php';
require 'system/functions.php';

Ok, we’ve included our files, and nothing happened. Now, let’s get Jolt up and running:


$app = new Jolt();
$app->option('source', 'config.ini');

The above code just sets up Jolt and tells it to read the config.ini file and set our configuration settings. Now let’s connect to our database:


if( $app->option('db.enabled') != false ){
	ORM::configure('mysql:host='.$app->option('db.host').';dbname='.$app->option('db.name'));
	ORM::configure('username', $app->option('db.user') );
	ORM::configure('password', $app->option('db.pass') );
}

For our final piece of bootstrapping, we want to set up our Twilio client:


$client = new Services_Twilio($app->option('twilio.accountsid'), $app->option('twilio.authtoken') );
$fromNumber = $app->option('twilio.from');

$app->store('client', $client);

This is our bootstrap section. So far, all we’ve done is include our files, set up our Jolt app, connect to our database and initialize our Twilio client.

Right now, if you run your app, you’ll get a few errors. This is fine; we’ll be taking care of those errors next.


Routing

Now we have to set up our routes and tell our app what to do based on certain rules. These rules will be either get or post.

Our initial rules will be the home page, the tag page, and the listener:


$app->get('/', function(){
	$app = Jolt::getInstance();
});

$app->get('/tag/:tag', function($tag){
	$app = Jolt::getInstance();
});

$app->post('/listener', function(){
	$app = Jolt::getInstance();
});

$app->listen();

We’ve just set up the initial bare bones actions for our homepage, which is represented by the '/', our tag page, and our listener.

You’ll notice the listener is a post rather than a get; that is because it is the handler Twilio will call when new messages are received.

Lastly, you’ll see the $app->listen(); method call. This is the most important method call we have, as it tells the app to start running.


There’s No Place Like Home

Let’s set up the home page, and build the view that we’ll be displaying for everybody.

Replace the original homepage route with this one:


$app->get('/', function(){
	$app = Jolt::getInstance();

	$tags = Model::factory('Tag')->count();

	if( $tags > 0 ){
		$images  = Model::factory('Photo')->count();
		$tagList = Model::factory('Tag')->find_many();
	}else{
		$tags    = 0;
		$images  = 0;
		$tagList = array();
	}

	$app->render( 'home', array(
		'tags'       => $tags,
		'tagList'    => $tagList,
		'fromNumber' => $app->option('twilio.from'),
		'images'     => $images
	));
});

You’ll also notice that we tell it to render something called ‘home‘. In the views folder, there is a home.php file; open it up and edit it as follows:


<p>Text <span><?php echo $fromNumber ?></span> a picture with the name of a tag. Your image will be displayed on that tag.</p>

<div>
	<div>
		<p>Number of Tags: <?php echo $tags; ?></p>
		<p>Number of Images: <?php echo $images; ?></p>
	</div>
</div>

<hr />

<h3>Tags</h3>

<ul>
<?php foreach($tagList as $tag){ ?>
	<li>
		<a href="<?php echo $uri?>/tag/<?php echo $tag->slug?>"><?php echo $tag->name?></a>
	</li>
<?php } ?>
</ul>

This file will take the variables we pass from the $app->render() function and make use of them here.

We’re going to display a count of total tags, along with total images, and a list of tags that a visitor can click on.

The actual page layout is controlled by a file called layout.php. Let’s go ahead and update that file now:


<html>
	<head>
		<title><?=$pageTitle?></title>
		<meta name="viewport" content="width=device-width, initial-scale=1.0">
		<link href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" rel="stylesheet">
		<link href="<?=$uri?>/style.css" rel="stylesheet">
		<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
		<script src="//netdna.bootstrapcdn.com/bootstrap/3.0.0/js/bootstrap.min.js"></script>
	</head>
	<body>
		<div>
			<div>
				<ul>
					<li><a href="<?=$uri?>">Home</a></li>
				</ul>
				<h3>Photo Wall</h3>
			</div>
			<hr />
			<section>
				<?=$pageContent?>
			</section>
		</div>
	</body>
</html>

This is pretty bare bones HTML, but it covers what we need. All output gets sent to the $pageContent variable in layout.php.


Picture Messaging!

Ok, now let’s handle the actual uploading of pictures from Twilio.

Log into your Twilio account and point a phone number to http://MYSITEURL/listener for SMS messages, where MYSITEURL is the address where you’ve uploaded your app.

We’re going to replace our listener route with this one:


$app->post('/listener', function(){
	$app = Jolt::getInstance();

	if ( isset($_POST['NumMedia']) && $_POST['NumMedia'] > 0 ){
		// let's find out what tag this is for.. or create a new one..
		$thetag = slugify( $_POST['Body'] );
		$tag = Model::factory('Tag')->where_equal( 'slug', $thetag )->find_one();

		if( isset($tag->id) && !empty($tag->id) ){
			$tag_id = $tag->id;
		}else{
			// no tag already exists...
			$tag       = Model::factory('Tag')->create();
			$tag->name = $_POST['Body'];
			$tag->slug = slugify( $_POST['Body'] );
			$tag->save();

			$tag_id = $tag->id();
		}

		for ($i = 0; $i < $_POST['NumMedia']; $i++){
			if (strripos($_POST['MediaContentType'.$i], 'image') === False){
				continue;
			}

			$file = sha1($_POST['MediaUrl'.$i]).'.jpg';
			file_put_contents('images/original/'.$file, file_get_contents($_POST['MediaUrl'.$i]));
			chmod ('images/original/'.$file, 01777);

			// Edit image
			$in  = 'images/original/'.$file;
			$out = 'images/processed/'.$file;
			cropResize($in, $out, 250);
			chmod ('images/processed/'.$file, 01777);

			// Remove Original Image
			unlink('images/original/'.$file);

			$photo          = Model::factory('Photo')->create();
			$photo->tag_id  = $tag_id;
			$photo->file    = $file;
			$photo->from    = $_POST['From'];
			$photo->country = $_POST['FromCountry'];
			$photo->save();
		}

		$message = $app->store('client')->account->messages->sendMessage(
			$app->option('twilio.from'), // From a valid Twilio number
			$_POST['From'], // Text this number
			"Image(s) Added to <".strtolower(trim($_POST['Body']))."> Photo Wall Link: ".$app->option('site.url')."/tag/".strtolower(trim($_POST['Body']))
		);

		return true;
	}else{
		if ( isset($_POST['From']) ){
			$message = $app->store('client')->account->messages->sendMessage(
				$app->option('twilio.from'), // From a valid Twilio number
				$_POST['From'], // Text this number
				"MMS error. Please try sending your image again."
			);
		}

		header('HTTP/1.1 400 Bad Request', true, 400);
		return false;
	}
});

There is no view associated with this action. Now, let’s go over what it does.

This is only called during a post, hence the $app->post() statement.

When it is activated by someone sending in a message, we check to see if there are any images attached, and if there are, then we cycle through them and save them in the database.

First, we check to see if there are any tags already stored in our database that match the tag we attached to our image; if there is one, we grab its id from the database, otherwise we save a new record containing that tag’s information.

Next, we cycle through the uploaded files and make sure they are images. Each image is downloaded locally and stored inside the images/original folder. We then resize and crop the images to be a more manageable size, and store the new files inside the images/processed folder.

Finally, we store the images inside the database, along with some meta data on the call itself, and send a text message back to the sender to tell him or her to check out the tag page.

If no images were attached, then we send them a message that there was an error.


The Photo Wall

Now, we’ve set up the home page, and we’ve set up the listener. What’s left is to set up the photo wall itself.

This will go inside the $app->get(‘/tag/:tag') call.

Replace the original placeholder with the following code:


// preload photos whenever a matching route has :tag_slug in it
$app->filter('tag_slug', function ($tag_slug){
	$app = Jolt::getInstance();

	$tag = Model::factory('Tag')->where_equal('slug', $tag_slug)->find_one();
	$photos = $tag->photos()->find_many();

	$app->store('tag', $tag);
	$app->store('photos', $photos);
});

$app->get('/tag/:tag_slug', function($tag_slug){
	$app = Jolt::getInstance();

	$tag = $app->store('tag');
	$photos = $app->store('photos');

	$app->render( 'gallery', array(
		"pageTitle" => "viewing Photos for {$tag->name}",
		"tag"       => $tag,
		"photos"    => $photos
	));
});

Notice the $app->filter() call. This is a handy method we can set up that will grab the tag and its photos each time the $tag_slug parameter is matched, which saves us from repeating those queries in every route.

Now, we need to set up a gallery.php page inside views:


<div>
	<h1>#<?php echo $tag->name; ?></h1>
</div>

<hr />

<div>
	<div>
		<div id="container">
			<?php foreach($photos as $photo){ ?>
				<?php if (file_exists('images/processed/'.$photo->file)){ ?>
				<div class="image">
					<div>
						<a href="<?php echo $uri?>/images/processed/<?php echo $photo->file ?>" title="<?php echo $photo->datetime ?>">
							<img src="<?php echo $uri?>/images/processed/<?php echo $photo->file ?>" /></a>
						<p><?php echo $photo->datetime?></p>
					</div>
				</div>
				<?php } ?>
			<?php } ?>
		</div>
	</div>
</div>

<script src="//cdnjs.cloudflare.com/ajax/libs/masonry/3.1.1/masonry.pkgd.min.js"></script>

<script type="text/javascript">
var container = document.querySelector('#container');
var msnry = new Masonry( container, {
	itemSelector: '.image'
});
</script>

This will display the gallery, and use Masonry to float all the images nicely (note that each photo wrapper carries the image class that the itemSelector targets).


In Conclusion

So that completes our app. You’ve now built a handy little photo wall that can be used to show photos from events. Be sure to check out the links provided above to learn more about the libraries and frameworks used throughout this article. Thanks for reading.

January 07 2014

20:39

Ember Components: A Deep Dive

Ember.js is a JavaScript MVC framework that allows developers to create ambitious web applications. Although pure MVC allows a developer to separate concerns, it does not provide you with all the tools, and your application will need other constructs. Today, I’m going to talk about one of those constructs. Ember components are essentially sandboxed, re-usable chunks of UI. If you are not familiar with Ember, please check out Getting Started With Ember.js or the Let’s Learn Ember Course. In this tutorial we will cover the Web Components specification, how to write a component in Ember, composition of components, the difference between an Ember view and an Ember component, and integrating plugins with Ember components.


A Word About Web Components

Ember components are based on the W3C Web Components specification. The specification is made up of four smaller specifications: templates, decorators, shadow DOM, and custom elements. Of these four concepts, only three have hardened specifications, decorators being the exception. By having the specifications in place, framework developers have been able to polyfill these new APIs prior to them being implemented by browser vendors.

There are several important concepts to grasp when talking about components:

  • Components know nothing about the outside world unless explicitly passed in
  • Components should have a well defined interface to the outside world
  • Components cannot manipulate any JavaScript outside of the component
  • Components can broadcast events
  • Custom elements must be namespaced with a hyphen
  • Outside JavaScript cannot manipulate components

Web Components provide true encapsulation for UI widgets. Below is a diagram of how a component works at the most basic level.

Web Component Diagram

While Ember has successfully polyfilled a lot of the specification, frameworks like AngularJS, Dart, Polymer, and Xtags have similar solutions. The only caveat here is that Ember and Angular currently don’t scope styles to the component. Over time these polyfill solutions will fade away, and frameworks will adopt the browser vendors’ implementations. This is a fundamentally different approach to development, as we can take advantage of future specifications without tying ourselves to experimental features in browsers.


The Most Basic Ember Component

Now, with our knowledge of Web Components, let’s implement the very basic my-name component from above, but in Ember. Let’s begin by downloading the Ember Starter Kit from the Ember website. At the time of this tutorial the version of Ember is 1.3.0. Once you have it downloaded, open up the files in your favorite editor, delete all of the templates in index.html (denoted with data-template-name) and everything in app.js.

The first thing we are going to want to do is create our component template. For the sake of this tutorial we are going to use inline templates. You do this by writing the following in your index.html file. We also need to create a new Ember application in our JavaScript.


<script type="text/x-handlebars">
   {{my-name}}
</script>

<script type="text/x-handlebars" data-template-name="components/my-name">
// My component template will go here
</script>


var App = Ember.Application.create();

You’ll notice that the data-template-name has a path name instead of just a plain string. The reason why we prefix our component name with "components/" is to tell Ember we are dealing with a component template and not a regular application template. You’ll also notice that the component name has the hyphen in it. This is the namespacing that I had mentioned in the Web Components specification. Namespacing is done so that we do not have name collisions with existing tags.

If we open the browser we shouldn’t see anything different. The reason for this is that we have yet to place anything in our my-name template. Let’s take care of that.


...
<script type="text/x-handlebars" data-template-name="components/my-name">
   Hi, my name is {{name}}.
</script>
Ember Name Component

Now in the browser you should see something like the image above. We still aren’t finished – as you can see, we actually aren’t printing out a name. As I mentioned in the first section, components should expose a well defined interface to the outside world. In this case we are concerned with the name. So let’s pass in the name by placing a name attribute on the my-name component.

...
<script type="text/x-handlebars">
   {{my-name name="Chad"}}
</script>

When you refresh the page you should see “Hi, my name is Chad”. All of this with only one line of JavaScript. Now that we have a feel for writing a basic component, let’s talk about the difference between Ember components and Ember views.


Ember Components vs. Ember Views

Ember is an MVC framework, so some may be thinking, “Why not just use a view for this?” This is a legitimate question. Components actually are a subclass of Ember.View; the biggest difference here is that views are generally found in the context of a controller. Take the example below.


App.IndexController = Ember.Controller.extend({
  myState: 'on'
});

App.IndexView = Ember.View.extend({

  click: function () {
    var controller = this.get( 'controller' ),
    myState = controller.get( 'myState' );

    console.log( controller ) // The controller instance
    console.log( myState ) // The string "on"
  }

});

<script type="text/x-handlebars" data-template-name="index">
  {{myState}}
</script>

Views normally sit behind a template and turn raw input ( click, mouseEnter, mouseMove, etc ) into a semantic action ( openMenu, editName, hideModal, etc ) in a controller or route. Another thing to point out is that templates need a context as well. So what ends up happening is that Ember infers the context through naming conventions and the URL. See the diagram below.

Ember Hierarchy

As you can see, there is a level of hierarchy based on the URL and each level of that hierarchy has its own context which is derived through naming conventions.

Ember components do not have a context; they only know about the interface that they define. This allows a component to be rendered into any context, making it decoupled and reusable. If the component exposes an interface, it’s the job of the context to fulfill that interface. In other words, if you want the component to render properly you must supply it with data that it’s expecting. It’s important to note that these passed in values can be both strings or bound properties.

Ember Hierarchy With Components

When bound properties are manipulated inside of a component those changes are still propagated wherever they are referenced in your application. This makes components extremely powerful. Now that we have a good understanding of how components are different from views, let’s look at a more complex example that illustrates how a developer can compose multiple components.


Composition of Components

One really nice thing about Ember is that it’s built on concepts of UI hierarchy, and this is very apparent with composition of components. Below is an example of what we are going to make. It’s a simple group chat UI. Obviously I’m not going to write a whole chat service to power the UI, but we can look at how to break the UI down into re-usable and composable components.

Ember Group Chat Component

Let’s first look at how we are going to break up the UI into smaller and more digestible parts. Basically anything that we can draw a box around is a component, with the exception of the text and button inputs at the bottom of the UI. Our goal is to be able to just configure the component at the outer layer and everything should just work.

Group Chat Breakdown

Let’s start by creating a new html file called chat.html and setting up all of the dependencies for Ember. Next, create all of the templates.

<script type="text/x-handlebars" data-template-name="application">
  {{outlet}}
</script>

<script type="text/x-handlebars" data-template-name="index">
  {{ group-chat messages=model action="sendMessage" }}
</script>

<script type="text/x-handlebars" data-template-name="components/group-chat">
  <div class="chat-component">
    <ul class="conversation">
      {{#each message in messages}}
        <li class="txt">{{chat-message username=message.twitterUserName message=message.text time=message.timeStamp }}</li>
      {{/each}}
    </ul>

    <form class="new-message" {{action submit on="submit"}}>
      {{input type="text" placeholder="Send new message" value=message class="txt-field"}}
      {{input type="submit" class="send-btn" value="Send"}}
    </form>
  </div>
</script>

<script type="text/x-handlebars" data-template-name="components/chat-message">
  <div class="message media">
    <div class="img">
      {{user-avatar username=username service="twitter"}}
    </div>
    <div class="bd">
      {{user-message message=message}}
      {{time-stamp time=time}}
    </div>
  </div>
</script>

<script type="text/x-handlebars" data-template-name="components/user-avatar">
  <img {{bind-attr src=avatarUrl alt=username}} class="avatar">
</script>

<script type="text/x-handlebars" data-template-name="components/user-message">
  <div class="user-message">{{message}}</div>
</script>

<script type="text/x-handlebars" data-template-name="components/time-stamp">
  <div class="time-stamp">
    <span class="clock" role="presentation"></span>
    <span class="time">{{format-date time}}</span>
  </div>
</script>

You will see that components can be nested inside of other components. This makes components just like legos that we can assemble any way we want. We just need to write to the component’s interface.

If we now go look in the browser we shouldn’t see much because we don’t have any data flowing into the component. You will also notice that even though there is no data, the components do not throw an error. The only thing that actually gets rendered here is the input area and the send button. This is because they aren’t dependent on what is passed in.

Group Chat Without Data

Taking a little bit closer look at the templates you’ll notice that we assigned a couple things on the group-chat component.

<script type="text/x-handlebars" data-template-name="index">
  {{ group-chat messages=model action="sendMessage" }}
</script>

In this case, we are passing the model from the context of the IndexRoute as “messages” and we have set the string of “sendMessage” as the action on the component. The action will be used to broadcast out when the user wants to send a new message. We will cover this later in the tutorial. The other thing that you will notice is that we are setting up strict interfaces to the nested components, all of which are using the data passed in from the group-chat interface.

...
<ul class="conversation">
  {{#each message in messages}}
    <li class="txt">{{chat-message username=message.twitterUserName message=message.text time=message.timeStamp }}</li>
  {{/each}}
</ul>
...

As mentioned before, you can pass strings or bound properties into components. The rule of thumb: use quotes when passing a string, and no quotes when passing a bound property. Now that we have our templates in place, let’s throw some mock data at it.

App = Ember.Application.create();

App.IndexRoute = Ember.Route.extend({
  model: function() {
    return [
      {
        id: 1,
        firstName: 'Tom',
        lastName: 'Dale',
        twitterUserName: 'tomdale',
        text: 'I think we should back old Tomster. He was awesome.',
        timeStamp: Date.now() - 400000,
      },
      {
        id: 2,
        firstName: 'Yehuda',
        lastName: 'Katz',
        twitterUserName: 'wycats',
        text: 'That\'s a good idea.',
        timeStamp: Date.now() - 300000,
      }
    ];
  }
});

If we go look at this in the browser now, we should see a bit of progress. But there is still some work to be done, mainly getting the images to show up, formatting the date, and being able to send a new message. Let’s take care of that.

Group Chat Partially Filled With Data

So, for our user-avatar component, we want to use a service called Avatars.io to fetch a user’s Twitter avatar based on their Twitter user name. Let’s look at how the user-avatar component is used in the template.

<script type="text/x-handlebars" data-template-name="components/chat-message">
...
{{ user-avatar username=username service="twitter" }}
...
</script>

<script type="text/x-handlebars" data-template-name="components/user-avatar">
  <img {{bind-attr src=avatarUrl alt=username}} class="avatar">
</script>

It’s a pretty simple component, but you will notice that we have a bound property called avatarUrl. We are going to need to create this property within our JavaScript for this component. Another thing you will note is that we are specifying the service we want to fetch the avatar from. Avatars.io allows you to fetch social avatars from Twitter, Facebook, and Instagram, so we can make this component extremely flexible. Let’s write the component.

App.UserAvatarComponent = Ember.Component.extend({
  avatarUrl: function () {
    var username = this.get( 'username' ),
          service = this.get( 'service' ),
          availableServices = [ 'twitter', 'facebook', 'instagram' ];

    if (  availableServices.indexOf( service ) > -1 ) {
       return 'http://avatars.io/' + service + '/' + username;
    }
    return 'images/cat.png';

  }.property( 'username' , 'service' )

});

As you can see, to create a new component we just follow the naming convention of NAMEOFCOMPONENTComponent and extend Ember.Component. If we go back to the browser, we should now see our avatars.
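For reference, under this convention the templates we have defined map to classes like so (chat-message has no custom behavior in this tutorial, so it simply gets the default Ember.Component):

components/group-chat    ->  App.GroupChatComponent
components/user-avatar   ->  App.UserAvatarComponent
components/time-stamp    ->  App.TimeStampComponent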

Group Chat Without Formatted Date

To take care of the date formatting let’s use moment.js and write a Handlebars helper to format the date for us.

Ember.Handlebars.helper('format-date', function( date ) {
  return moment( date ).fromNow();
});

Now all we need to do is apply the helper to our time stamp component.

<script type="text/x-handlebars" data-template-name="components/time-stamp">
  <div class="time-stamp">
    <span class="clock" role="presentation"></span>
    <span class="time">{{format-date time}}</span>
  </div>
</script>

We should now have a component that shows nicely formatted dates instead of raw Unix epoch timestamps.

Group Chat With Dates

We can do one better though. These timestamps should automatically update over the course of time, so let’s make our time-stamp component do just that.

App.TimeStampComponent = Ember.Component.extend({

  startTimer: function () {

    var self = this, currentTime;
    this._timer = setInterval( function () {
      currentTime = self.get( 'time' );
      self.set( 'time', ( currentTime - 60000  ) );
    }, 60000 );

  }.on( 'didInsertElement' ),

  killTimer: function () {
    clearInterval( this._timer );
  }.on( 'willDestroyElement' )

});

A couple of points to note here. The first is the on() declarative event handler syntax, which was introduced in Ember prior to the 1.0 release. It does exactly what you think it does: when the time-stamp component is inserted into the DOM, startTimer is called, and when the element is about to be destroyed and cleaned up, the killTimer method is called. The rest of the component just tells the time to update every minute.

The next thing we need to do is set up the action so that when the user hits submit, a new message will be created. Our component shouldn’t care how the data is created; it should just broadcast out that the user has tried to send a message. Our IndexRoute will be responsible for taking this action and turning it into something meaningful.

App.GroupChatComponent = Ember.Component.extend({
  message: '',
  actions: {
    submit: function () {
      var message = this.get( 'message' ).trim(),
          conversation = this.$( 'ul' )[ 0 ];

      // Fetches the value of 'action'
      // and sends the action with the message
      this.sendAction( 'action', message );

      // When the Ember run loop is done
      // scroll to the bottom
      Ember.run.next( function () {
        conversation.scrollTop = conversation.scrollHeight;
      });

      // Reset the text message field
      this.set( 'message', '' );
    }
  }
});

Here is the corresponding form markup from the group-chat component’s template:

<form class="new-message" {{action submit on="submit"}}>
  {{input type="text" placeholder="Send new message" value=message class="txt-field"}}
  {{input type="submit" class="send-btn" value="Send"}}
</form>

Since the group-chat component owns the input and send button, we need to react to the user clicking send at this level of abstraction. When the user clicks the submit button it is going to execute the submit action in our component implementation. Within the submit action handler we are going to get the value of message, which is set by the text input. We will then send the action along with the message. Finally, we will reset the message to a blank string.

The other odd thing you see here is the Ember.run.next method being called. In Ember there is a queue, normally referred to as the run loop, that gets flushed when data is changed. This is done to coalesce changes and apply them in one pass. So in our case we are saying: once sending the message has finished making its manipulations, call our callback. We need to scroll our ul to the bottom after those manipulations, so the user can see the new message. For more on the run loop I suggest reading Alex Matchneer’s article “Everything You Never Wanted to Know About the Ember Run Loop”.

If we go over to the browser and we click the send button, we get a really nice error from Ember saying “Uncaught Error: Nothing handled the event ‘sendMessage’”. This is what we expect because we haven’t told our application how to react to these types of events. Let’s fix that.

App.IndexRoute = Ember.Route.extend({
  /* … */
  actions: {
    sendMessage: function ( message ) {
      if ( message !== '' ) {
        console.log( message );
      }
    }
  }
});

Now if we go back to the browser, type something into the message input, and hit send, we should see the message in the console. So at this point our component is loosely coupled and talking to the rest of our application. Let’s do something more interesting with this. First let’s create a new Ember.Object to work as a model for a new message.

App.Message = Ember.Object.extend({
  id: 3,
  firstName: 'Chad',
  lastName: 'Hietala',
  twitterUserName: 'chadhietala',
  text: null,
  timeStamp: null
});

So when the sendMessage action occurs we are going to want to populate the text and timeStamp field of our Message model, create a new instance of it, and then push that instance into the existing collection of messages.

App.IndexRoute = Ember.Route.extend({
  /* … */
  actions: {
    sendMessage: function ( message ) {
      var messages, newMessage;

      if ( message !== '' ) {

        messages = this.modelFor( 'index' );
        newMessage = App.Message.create({
          text: message,
          timeStamp: Date.now()
        });

        messages.pushObject( newMessage );
      }
    }
  }
});

When we go back to the browser, we should now be able to create new messages.

Group Chat Creating Messages

We now have several different re-usable chunks of UI that we can place anywhere. For instance, if you need to use an avatar somewhere else in your Ember application, you can just reuse the user-avatar component.

<script type="text/x-handlebars" data-template-name="index">
...
{{user-avatar username="horse_js" service="twitter" }}
{{user-avatar username="detroitlionsnfl" service="instagram" }}
{{user-avatar username="KarlTheFog" service="twitter" }}
</script>
User Avatars From Twitter and Instagram

Wrapping jQuery Plugins

So at this point you’re probably wondering “What if I want to use some jQuery plugin in my component?” No problem. For brevity, let’s modify our user-avatar component to show a tooltip when we hover over the avatar. I’ve chosen to use the jQuery plugin tooltipster to handle the tooltip. Let’s modify the existing code to utilize tooltipster.

First, let’s add the correct files to our chat.html and modify the existing user-avatar component.

...
<link href="css/tooltipster.css" rel="stylesheet" />

...
<script type="text/JavaScript" src="js/libs/jquery.tooltipster.min.js"></script>
<script type="text/JavaScript" src="js/app.js"></script>
...

And then our JavaScript:

App.UserAvatarComponent = Ember.Component.extend({
  /*…*/
  setupTooltip: function () {
    this.$( '.avatar' ).tooltipster({
      animation: 'fade'
    });
  }.on( 'didInsertElement' ),

  destroyTooltip: function () {
    this.$( '.avatar' ).tooltipster( 'destroy' );
  }.on( 'willDestroyElement' )

});

So once again we see the declarative event listener syntax, but for the first time we see this.$. If you are familiar with jQuery, you would expect this to query all the elements with the class of ‘avatar’. That isn’t the case in Ember, because context is applied. In our case we are only looking for elements with the class of ‘avatar’ inside the user-avatar component. It’s comparable to jQuery’s find method, e.g. $( '.user-avatar' ).find( '.avatar' ). On destruction of the element we should unbind the hover event on the avatar and clean up any functionality; this is done by passing 'destroy' to tooltipster. If we go to the browser, refresh, and hover over an image, we should see the user’s username.
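As a small, hedged sketch of that scoping difference (not from the original article; logAvatars is just an illustrative name, and Ember.$ is simply Ember’s alias for the global jQuery function):

App.UserAvatarComponent = Ember.Component.extend({
  logAvatars: function () {
    // Scoped lookup: only matches .avatar elements rendered by this
    // component, roughly $( this.element ).find( '.avatar' )
    console.log( this.$( '.avatar' ).length );

    // Unscoped lookup: matches every .avatar on the page,
    // which is usually not what we want inside a component
    console.log( Ember.$( '.avatar' ).length );
  }.on( 'didInsertElement' )
});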

Avatar Tooltips

Conclusion

In this tutorial we took a deep dive into Ember components and showed how you can take re-usable chunks of UI to generate larger composites and integrate jQuery plugins. We looked at how components are different from views in Ember. We also covered the idea of interface-based programming when it comes to components. Hopefully I was able to shine some light on not only Ember Components but Web Components and where the Web is headed.

January 06 2014

18:43

Writing Robust Web Applications – The Lost Art of Exception Handling

As developers, we want the applications we build to be resilient when it comes to failure, but how do you achieve this goal? If you believe the hype, micro-services and a clever communication protocol are the answer to all your problems, or maybe automatic DNS failover. While that kind of stuff has its place and makes for an interesting conference presentation, the somewhat less glamorous truth is that making a robust application begins with your code. But, even well designed and well tested applications are often lacking a vital component of resilient code – exception handling.

I never fail to be amazed by just how under-used exception handling tends to be even within mature codebases. Let’s look at an example.


What Can Possibly Go Wrong?

Say we have a Rails app, and one of the things we can do using this app is fetch a list of the latest tweets for a user, given their handle. Our TweetsController might look like this:

class TweetsController < ApplicationController
  def show
    person = Person.find_or_create_by(handle: params[:handle])
    if person.persisted?
      @tweets = person.fetch_tweets
    else
      flash[:error] = "Unable to create person with handle: #{person.handle}"
    end
  end
end

And the Person model that we used might be similar to the following:

class Person < ActiveRecord::Base
  def fetch_tweets
    client = Twitter::REST::Client.new do |config|
      config.consumer_key        = configatron.twitter.consumer_key
      config.consumer_secret     = configatron.twitter.consumer_secret
      config.access_token        = configatron.twitter.access_token
      config.access_token_secret = configatron.twitter.access_token_secret
    end
    client.user_timeline(handle).map{|tweet| tweet.text}
  end
end

This code seems perfectly reasonable; there are dozens of apps with code just like this sitting in production. But let’s look a little more closely.

  • find_or_create_by is a Rails method; it’s not a ‘bang’ method, so it shouldn’t throw exceptions, but if we look at the documentation we can see that due to the way this method works, it can raise an ActiveRecord::RecordNotUnique error. This won’t happen often, but if our application has a decent amount of traffic it will occur more often than you might expect (I’ve seen it happen many times).
  • While we’re on the subject, any library you use can throw unexpected errors due to bugs within the library itself and Rails is no exception. Depending on our level of paranoia we might expect our find_or_create_by to throw any kind of unexpected error at any time (a healthy level of paranoia is a good thing when it comes to building robust software). If we have no global way of handling unexpected errors (we’ll discuss this below), we might want to handle these individually.
  • Then there is person.fetch_tweets which instantiates a Twitter client and tries to fetch some tweets. This will be a network call and is prone to all sorts of failure. We may want to read the documentation to figure out what the possible errors we might expect are, but we know that errors are not only possible here, but quite likely (for example, the Twitter API might be down, a person with that handle might not exist etc.). Not putting some exception handling logic around network calls is asking for trouble.

Our tiny amount of code has some serious issues, let’s try and make it better.


The Right Amount of Exception Handling

We’ll wrap our find_or_create_by and push it down into the Person model:

class Person < ActiveRecord::Base
  class << self
    def find_or_create_by_handle(handle)
      begin
        Person.find_or_create_by(handle: handle)
      rescue ActiveRecord::RecordNotUnique
        Rails.logger.warn { "Encountered a non-fatal RecordNotUnique error for: #{handle}" }
        retry
      rescue => e
        Rails.logger.error { "Encountered an error when trying to find or create Person for: #{handle}, #{e.message} #{e.backtrace.join("\n")}" }
        nil
      end
    end
  end
end

We’ve handled the ActiveRecord::RecordNotUnique according to the documentation and now we know for a fact that we’ll either get a Person object or nil if something goes wrong. This code is now solid, but what about fetching our tweets:

class Person < ActiveRecord::Base
  def fetch_tweets
    client.user_timeline(handle).map{|tweet| tweet.text}
  rescue => e
    Rails.logger.error { "Error while fetching tweets for: #{handle}, #{e.message} #{e.backtrace.join("\n")}" }
    nil
  end

  private

  def client
    @client ||= Twitter::REST::Client.new do |config|
      config.consumer_key        = configatron.twitter.consumer_key
      config.consumer_secret     = configatron.twitter.consumer_secret
      config.access_token        = configatron.twitter.access_token
      config.access_token_secret = configatron.twitter.access_token_secret
    end
  end
end

We push instantiating the Twitter client down into its own private method and since we didn’t know what could go wrong when we fetch the tweets, we rescue everything.

You may have heard somewhere that you should always catch specific errors. This is a laudable goal, but people often misinterpret it as, “if I can’t catch something specific, I won’t catch anything”. In reality, if you can’t catch something specific you should catch everything! This way at least you have an opportunity to do something even if it’s only to log and re-raise the error.

An Aside on OO Design

In order to make our code more robust, we were forced to refactor and now our code is arguably better than it was before. You can use your desire for more resilient code to inform your design decisions.

An Aside on Testing

Every time you add some exception handling logic to a method, it’s also an extra path through that method and it needs to be tested. It’s vital you test the exceptional path, perhaps more so than testing the happy path. If something goes wrong on the happy path you now have the extra insurance of the rescue block to prevent your app from falling over. However, any logic inside the rescue block itself has no such insurance. Test your exceptional path well, so that silly things like mistyping a variable name inside the rescue block don’t cause your application to blow up (this has happened to me so many times – seriously, just test your rescue blocks).


What to Do With the Errors We Catch

I’ve seen this kind of code countless times through the years:

begin
  widgetron.create
rescue
  # don't need to do anything
end

We rescue an exception and don’t do anything with it. This is almost always a bad idea. When you’re debugging a production issue six months from now, trying to figure our why your ‘widgetron’ isn’t showing up in the database, you won’t remember that innocent comment and hours of frustration will follow.

Don’t swallow exceptions! At the very least you should log any exception that you catch, for example:

begin
  foo.bar
rescue => e
  Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
end

This way we can trawl the logs and we’ll have the cause and stack trace of the error to look at.

Better yet, you may use an error monitoring service such as Rollbar, which is pretty nice. There are many advantages to this:

  • Your error messages aren’t interspersed with other log messages
  • You will get stats on how often the same error has happened (so you can figure out if it’s a serious issue or not)
  • You can send extra information along with the error to help you diagnose the problem
  • You can get notifications (via email, pagerduty etc.) when errors occur in your app
  • You can track deploys to see when particular errors were introduced or fixed
  • etc.
begin
  foo.bar
rescue => e
  Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
  Rollbar.report_exception(e)
end

You can, of course, both log and use a monitoring service as above.

If your rescue block is the last thing in a method, I recommend having an explicit return:

def my_method
  begin
    foo.bar
  rescue => e
    Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
    Rollbar.report_exception(e)
    nil
  end
end

You may not always want to return nil, sometimes you might be better off with a null object or whatever else makes sense in the context of your application. Consistently using explicit return values will save everyone a lot of confusion.

You can also re-raise the same error or raise a different one inside your rescue block. One pattern that I often find useful is to wrap the existing exception in a new one and raise that one so as not to lose the original stack trace (I even wrote a gem for this since Ruby doesn’t provide this functionality out of the box). Later on in the article when we talk about external services, I will show you why this can be useful.


Handling Errors Globally

Rails lets you specify how to handle requests for resources of a certain format (HTML, XML, JSON) by using respond_to and respond_with. I rarely see apps that correctly use this functionality; after all, if you don’t use a respond_to block everything works fine and Rails renders your template correctly. We hit our tweets controller via /tweets/yukihiro_matz and get an HTML page full of Matz’s latest tweets. What people often forget is that it’s very easy to try and request a different format of the same resource, e.g. /tweets/yukihiro_matz.json. At this point Rails will valiantly try to return a JSON representation of Matz’s tweets, but it won’t go well since the view for it doesn’t exist. An ActionView::MissingTemplate error will get raised and our app blows up in a spectacular fashion. And JSON is at least a legitimate format; in a high-traffic application you’re just as likely to get a request for /tweets/yukihiro_matz.foobar. Tuts+ gets these kinds of requests all the time (likely from bots trying to be clever).

The lesson is this: if you’re not planning to return a legitimate response for a particular format, restrict your controllers from trying to fulfill requests for those formats. In the case of our TweetsController:

class TweetsController < ApplicationController
  respond_to :html

  def show
    ...
    respond_to do |format|
      format.html
    end
  end
end

Now when we get requests for spurious formats we’ll get a more relevant ActionController::UnknownFormat error. Our controllers feel somewhat tighter, which is a great thing when it comes to making them more robust.

Handling Errors the Rails Way

The problem we have now is that despite our semantically pleasing error, our application is still blowing up in our users’ faces. This is where global exception handling comes in. Sometimes our application will produce errors that we want to respond to consistently, no matter where they come from (like our ActionController::UnknownFormat). There are also errors that can get raised by the framework before any of our code comes into play. A perfect example of this is ActionController::RoutingError. When someone requests a URL that doesn’t exist, like /tweets2/yukihiro_matz, there is nowhere for us to hook in to rescue this error using traditional exception handling. This is where Rails’ exceptions_app comes in.

You can configure a Rack app in application.rb to be called when an error that we haven’t handled is produced (like our ActionController::RoutingError or ActionController::UnknownFormat). The way you will normally see this used is to configure your routes app as the exceptions_app, then define the various routes for the errors you want to handle and route them to a special errors controller that you create. So our application.rb would look like this:

...
config.exceptions_app = self.routes
...

Our routes.rb will then contain the following:

...
match '/404' => 'errors#not_found', via: :all
match '/406' => 'errors#not_acceptable', via: :all
match '/500' => 'errors#internal_server_error', via: :all
...

In this case our ActionController::RoutingError would be picked up by the 404 route and the ActionController::UnknownFormat will be picked up by the 406 route. There are many possible errors that can crop up. But as long as you handle the common ones (404, 500, 422 etc.) to start with, you can add others if and when they happen.

Within our errors controller we can now render the relevant templates for each kind of error along with our layout (if it’s not a 500) to maintain the branding. We can also log the errors and send them to our monitoring service, although most monitoring services will hook in to this process automatically so you don’t have to send the errors yourself. Now when our application blows up it does so gently, with the right status code depending on the error and a page where we can give the user some idea regarding what happened and what they can do (contact support) – an infinitely better experience. More importantly, our app will seem (and will actually be) much more solid.

Multiple Errors of the Same Type in a Controller

In any Rails controller we can define specific errors to be handled globally within that controller (no matter which action they get produced in) – we do this via rescue_from. The question is when to use rescue_from? I usually find that a good pattern is to use it for errors that can occur in multiple actions. If an error will only be produced by one action, handle it via the traditional begin...rescue...end mechanism, but if we’re likely to get the same error in multiple places and we want to handle it the same way, it’s a good candidate for a rescue_from. Let’s say our TweetsController also has a create action:

class TweetsController < ApplicationController
  respond_to :html

  def show
    ...
    respond_to do |format|
      format.html
    end
  end

  def create
    ...
  end
end

Let’s also say that both of these actions can encounter a TwitterError and if they do we want to tell the user that something is wrong with Twitter. This is where rescue_from can be really handy:

class TweetsController < ApplicationController
  respond_to :html

  rescue_from TwitterError, with: :twitter_error

  private

  def twitter_error
    render :twitter_error
  end
end

Now we don’t need to worry about handling this in our actions, and they will look much cleaner. We can, and should, log the error and/or notify our error monitoring service within the twitter_error method. If you use rescue_from correctly it can not only help you make your application more robust, but can also make your controller code cleaner. This will make it easier to maintain and test your code, making your application that little bit more resilient yet again.


Using External Services in Your Application

It’s difficult to write a significant application these days without using a number of external services/APIs. In the case of our TweetsController, Twitter came into play via a Ruby gem that wraps the Twitter API. Ideally we would make all our external API calls asynchronously, but we’re not covering asynchronous processing in this article and there are plenty of applications out there that make at least some API/network calls in-process.

Making network calls is an extremely error-prone task and good exception handling is a must. You can get authentication errors, configuration problems, and connectivity errors. The library you use can produce any number of code errors, and then there is the matter of slow connections. I am glossing over this point, but it’s oh so crucial since you can’t deal with slow connections via exception handling. You need to appropriately configure timeouts in your network library, or if you’re using an API wrapper, make sure it provides hooks to configure timeouts. There is no worse experience for a user than having to sit there waiting without your application giving any indication of what’s happening. Just about everyone forgets to configure timeouts appropriately (I know I have), so take heed.

If you’re using an external service in multiple places within your application (multiple models for example), you expose large parts of your application to the full landscape of errors that can be produced. This is not a good situation. What we want to do is limit our exposure, and one way we can do this is by putting all access to our external services behind a facade, rescuing all errors there and re-raising one semantically appropriate error (raise that TwitterError we talked about if any errors occur when we try to hit the Twitter API). We can then easily use techniques like rescue_from to deal with these errors, and we don’t expose large parts of our application to an unknown number of errors from external sources.

An even better idea might be to make your facade an error free API. Return all successful responses as is and return nils or null objects when you rescue any sort of error (we do still need to log/notify ourselves of the errors via some of the methods we discussed above). This way we don’t need to mix different types of control flow (exception control flow vs if…else) which may gain us significantly cleaner code. For example, let’s wrap our Twitter API access in a TwitterClient object:

class TwitterClient
  attr_reader :client

  def initialize
    @client = Twitter::REST::Client.new do |config|
      config.consumer_key        = configatron.twitter.consumer_key
      config.consumer_secret     = configatron.twitter.consumer_secret
      config.access_token        = configatron.twitter.access_token
      config.access_token_secret = configatron.twitter.access_token_secret
    end
  end

  def latest_tweets(handle)
    client.user_timeline(handle).map{|tweet| tweet.text}
  rescue => e
    Rails.logger.error { "#{e.message} #{e.backtrace.join("\n")}" }
    nil
  end
end

We can now do this: TwitterClient.new.latest_tweets('yukihiro_matz'), anywhere in our code and we know that it will never produce an error, or rather it will never propagate the error beyond TwitterClient. We’ve isolated an external system to make sure that glitches in that system won’t bring down our main application.


But What if I Have Excellent Test Coverage?

If you do have well-tested code, I commend you on your diligence; it will take you a long way towards having a more robust application. But a good test suite can often provide a false sense of security. Good tests can help you refactor with confidence and protect you against regression. But you can only write tests for things you expect to happen. Bugs are, by their very nature, unexpected. To use our tweets example: until we choose to write a test for our fetch_tweets method where client.user_timeline(handle) raises an error, thereby forcing us to wrap a rescue block around the code, all our tests will stay green and our code will remain failure-prone.

Writing tests doesn’t absolve us of the responsibility of casting a critical eye over our code to figure out how it can potentially break. On the other hand, doing this kind of evaluation can definitely help us write better, more complete test suites.


Conclusion

Resilient systems don’t spring forth fully formed from a weekend hack session. Making an application robust is an ongoing process. You discover bugs, fix them, and write tests to make sure they don’t come back. When your application goes down due to an external system failure, you isolate that system to make sure the failure can’t snowball again. Exception handling is your best friend when it comes to doing this. Even the most failure-prone application can be turned into a robust one if you apply good exception handling practices consistently, over time.

Of course, exception handling is not the only tool in your arsenal when it comes to making applications more resilient. In subsequent articles we will talk about asynchronous processing, how and when to apply it and what it can do in terms of making your application fault tolerant. We will also look at some deployment and infrastructure tips that can have a significant impact without breaking the bank in terms of both money and time – stay tuned.

14:00

70 Best Web Design Tutorials of 2013

2013 has ended with a great blast and I’m pretty sure that everyone has done their countdowns and throwback blog posts, tweets and even status updates. It’s a pretty cool thing to do because you get to remember all the awesome things 2013 has given you. You look back at how bad you were design-wise last year and how much you’ve improved. With this, you get to assess how much more you could exert and, in turn, how much more money you could earn. Retrospection also tells you what more you could learn. Seeing what you’re weak at will also show you where you need more training. That is why self-correction and openness to new things are great habits for a designer.

Speaking of learning, were you able to catch some of the best web design tutorials of 2013? The year gave us a lot of trends and knowledge, making tutorials as prevalent as stars in the night sky. For one thing, these tutorials will give you new knowledge and improve on what you already know. You never know; maybe one day, the web design noob becomes the ninja.

Now, let’s look back at the top 70 tutorials that rocked the web design world in 2013. Some of these you might have seen before. Others you might have missed. But one thing is for sure, these tutorials don’t kid when they say they will teach you new cool things. So, for those who lived under a rock last year, let’s roll!

Graphic Design

The following tutorials highlight graphic design. They cover typography and basic web mock-up designs. These tutorials are aimed at making your website look better using graphic design elements such as fonts, shapes and a lot more!

Working with Types: Typography Design Tutorial for Beginners
This one discusses the basics of typography and how to create your very first design! For beginners, this one’s recommended.

70-tutorials-2013-typography

Your Logo as a Web Font Ligature
Create beautiful logos using type without worrying about CSS3 cross-browser considerations (which are very tricky, messy and take a lot of effort). This approach renders with better quality with minimal effort and time.

70-tutorials-2013-ligature

Flat Web Design Tutorial – Portfolio Landing Page [FREE Download]

This tutorial will teach you to create a very beautiful flat design portfolio landing page. Very easy to follow and tinker with!

70-tutorials-2013-portfolio-landing-page

PSD to HTML/CSS, SASS, CSS3, HTML5

Three of the most used platforms in web design are covered in this section. This is a list of PSD to HTML/CSS tutorials and HTML and CSS tricks. Make your site look better using these easy-to-follow tutorials.

Basic Web Design Video Course – Wireframing, Photoshop Tools & Panels, and Designing [Part 1]

Talking about details, this one goes off the charts. Broken into parts, this comprehensive tutorial discusses the major facets of web design that you should know. Great for both beginners and experts.

70-tutorials-2013-wireframing

How to Create Your Own Custom Grid System in Photoshop

Grid systems are of great help in putting different elements in a page without becoming lost in pixels. This tutorial teaches you how to use a very simple tool (a Photoshop plugin called Guide Guide) to design using grids.

70-tutorials-2013-grid-using-photoshop

 Hide / Show Notification Bar With CSS3

Do you want to create a sliding notification bar? This tutorial is for you. Written in the simplest and most understandable way possible, you will be amazed at how easy it is to add a notification bar in your website design.

70-tutorials-2013-hide-or-show-notifications

Web Design: How to Convert CSS to SASS & SCSS

SASS and SCSS are pretty new to some designers. With CSS still known as a very plausible and useful language, learning how to convert CSS to SASS and SCSS is worthwhile knowledge that you might use in the future. When you read this tutorial, you will see how these languages are both alike and different in many ways.

70-tutorials-2013-css-to-sass

How To Style Google Maps

Putting maps on your website is great, especially for corporate clients. But styling them? That’s better. Now learn how to do this through this tutorial, and you’ll be able to look at maps in a whole new way.

710tutorials-2013-Style-google-maps

Getting Creative with the Google Maps API 

We saw earlier how we could style maps. This one is another tutorial about that. The only difference is that it pushes further, incorporating a lot of new elements that we thought we could never place in maps before. Amaze your clients at how you can design a map just for them.

70-tutorials-2013-googlemaps-creative

Web Design: Equal Column Height With CSS
What bugs most designers is unequal column heights. An OC designer will have a hard time tinkering with pixels and doing trial and error computations. This becomes a major headache for beginners as they still don’t know the tricks of the trade yet. So, this tutorial becomes a must if they need to add more to their skills. Look at this tutorial to see which of your methods is easier  to execute and which produces more accurate results!

70-tutorials-2013-unequal-height

How To Create a Web-Based Video Player

It’s always fun to customize all the elements you post in your website. It gives a personal touch to it. More so, you achieve uniformity in design. This tutorial is a proof of this perk. Learn how to create a web-based video player that you can really call ‘yours’.

70-tutorials-2013-web-based-video-payer

How To Use CSS3 Multiple Backgrounds
When I first read this tutorial, I never knew that having multiple backgrounds in one div or class is possible. Well, we all have our moments of learning, don’t we? This one is a great yet simple example of this. 

70-tutorials-2013-multiple-backgrounds

Creating Content Tabs with Pure CSS

In the early popularity of CSS3,  this tutorial has done wonders for me. Being able to create tabbed contents easily with CSS? It’s a pretty amazing thing to know.

70-tutorials-2013-content-tabs

Fun CSS Loading Animations

I’ve talked about loading animations before and how cool they are. They make your website more fun and give your audience something to do (or see) while they’re waiting. Interested? Learn from this tutorial.

70-tutorials-2013-loading-animation
Flat UI Pro Tutorial: How To Use LESS to Create a Sign-In Form

David East gives a very simple and detailed explanation on how he created a custom sign-up form using LESS and HTML. It’s very simple and easy to follow. You just need basic coding knowledge. That’s all.

70-tutorials-2013-custom-sign-up-using-css-and-less

Create an e-Commerce Web Element with CSS3
Have you ever had any e-commerce customers? Or you wanted to put up one? Well, this tutorial is a good kick-start. Learn how to put e-commerce elements in any site using only CSS3!

70-tutorials-2013-e-commerce

Create a Drop Down Menu with Search Box in CSS3 and HTML
Do you want to create a very simple and elegant drop-down menu and search box? This one’s for you. With its very trendy design and easy-to-follow steps, a drop-down menu has never been easier to create.

70-tutorials-2013-drop-down-menu

Learn SASS on Treehouse
“SASS is the most mature, stable, and powerful professional grade CSS extension language in the world.” So says the SASS website. So where can you learn Syntactically Awesome Style Sheets? Nowhere else but in this tutorial!

70-tutorials-2013-sass

Creating Non-Rectangular Layouts with CSS Shapes

Sometimes it’s good to deviate from designing norms. A good example of this is non-rectangular layouts. It gives a refreshing look to the over-all design of the page. Give your clients and yourself a new view by trying this tutorial out.

70-tutorials-2013-non-rectangular

How To Create Flat Style Breadcrumb Links with CSS

Breadcrumbs are great tools for navigation. They let you know where exactly you are on the website and prevent you from getting lost in the deep, dark woods. Learn how to save your viewers with breadcrumbs and learn how to do it creatively using CSS!

710tutorials-2013-bread-crumbs

Bring Your Forms Up-to-Date With CSS3 and HTML5 Validation

Great for contact forms, this tutorial will help you create a simple but very useful contact form. I was amazed at how it matched with any design because of its awesome simplicity.

70-tutorials-2013-contact-form

Quick Tip: Don’t Forget the Viewport Meta Tag

Is responsive design complicated? Or are you just complicating things? It turns out that, most of the time, you’re the one complicating things. Good thing you have this tutorial to clarify.

70-tutorials-2013-viewport

How to Customize the Foundation 4 Top Bar

Another navigation menu tutorial proves to be brilliant as it crosses over browsers and devices with simplicity, style and sense. Great tutorial.

70-tutorials-2013-ctopbar

Build a Freshly Squeezed Responsive Grid System

Using one of the many grid systems, this tutorial is aimed towards teaching designers to use grids for easy styling. This method is a very great tool, especially when doing responsive designs.

70-tutorials-2013-lemonade

Build an HTML Email Template From Scratch

Email templates could attract more traffic to your website. Once you send your subscriber cool emails, they will be awed at how good you are and might even hire you because of this. So learn how.

70-tutorials-2013-email-template

Quick Tip: Implement a Sticky “Back to Top” Button

One of the greatest advancements in web design is that “back-to-the-top” button which allows you to scroll to the top of the page in a single click. This tutorial lets you create that. Put it on your website now!

710tutorials-2013-back-to the-top

Quick Tip: Using Images as Fullscreen Faux-Gradient Backgrounds

I personally love full screen backgrounds. They give professional and neat-looking feel to your website. Try this on your designs and your client will surely love them!

70-tutorials-2013-faux-background

Circle Navigation Effect with Css3

This easy-to-navigate and use navigation effect gives a very clever method of presenting data in your website. Another quality tutorial for your convenience.

710tutorials-2013-circle-nav

Blur Menu with CSS3 Transitions

Another amazing navigation effect, this tutorial lets you create a very cool navigation that blurs unselected links. I recommend this for designers who want to design portfolios.

70-tutorials-2013-blur-menu

Animated Border Menus

Who would have thought that you could do something as clever as this? I love this tutorial because it lets me understand the greatness and coolness of using borders with animations.

70-tutorials-2013-animated-border=menus

Building a Circular Navigation with CSS Transforms

The world surely is both flat and round as this tutorial proves that something amazing can be achieved by fusing two opposing ideas in the literal aspect. With different effects to tinker on and with this conveniently written article, you’ll soon adapt this in your creative outputs.

70-tutorials-2013-circular-navigation

jQuery

Here’s for you guys who like to have a little bit of a challenge. Some advanced stuff! jQuery effects that will surely make your clients’ eyes pop! The list is easy to follow and comes with demos that you could download and tinker!

Beginner’s Guide to Working with jQuery Animation

This tutorial delves into jQuery animation basics and lets you understand these fundamentals while preparing you for more advanced stuff. For beginners, this is a yes-yes tutorial.

70-tutorials-2013-jquery

jQuery How-to: Creating and Inserting New Elements

Learn the basics of jQuery from this very comprehensive set of tutorials!

70-tutorials-2013-tutorial-part-1

Creating Touch-Enabled Slider With SwipeJS And jQuery

With the advent of mobile touch devices, doing jQuery sliders became a little bit more complicated. As we all know, it is very easy for web designers to use jQuery slider navigations. But it becomes difficult when transforming these into mobile as the navigation icons become cluttered in the tight screen resolution. The answer to this? Make the slider touch-enabled.

70-tutorials-2013-touch-enabled-slider

Customizing Browser Scroll Bar With CSS / JQuery

Does the default scroll bar bore you? Well, good news is, you could re-design them. Learn how from this tutorial.

70-tutorials-2013-scroll-bar

Building a Mobile Panel With jQuery Mobile

I first saw this mobile panel on the mobile version of Facebook. Well, for one thing, I knew that someday someone would re-create this panel. So, here it is!

70-tutorials-2013-mobile-panel

How to Implement Infinite Page Scroll Effect in Static Webpages

Are you into infinite scrolling? Or do you want your design to have one? You came to the right place because I will tell you where it is. It’s right in this particular tutorial. Learn how to seamlessly feed your future viewers with content.

710tutorials-2013-infinite-scroll

Smart (Auto-Hiding) Website Header & Footer Using jQuery

This tutorial discusses and demonstrates how you can build a sticky header bar that will auto-hide after a few scrolls. This is very advantageous because it allows you to achieve the ease of access capability of a sticky nav bar, and the convenience as well as lack of distraction in a regular nav bar.

710tutorials-2013-sticky-nav-bar

Sticky Position (Bar) with CSS or jQuery

Make your sidebar and other elements on your website always visible. Avoid scrolling them together with the content through this tutorial.

710tutorials-2013-sticky-nav-bar=02

Create an Awesome 3D Interactive Object Using Only Images and jQuery

Do you have a client who wants to put up an online shopping website? Do you need a 3d presentation of a product you want to place in your website? This one is perfect for you.

710tutorials-2013-3d-elements

Creating an Immersive Slider Experience with jQuery Immersive Slider

Sliders are still a thing in the world of web design. Be creative and try this tutorial out. Your eyes will pop at the fusion of simplicity and beauty at its finest.

710tutorials-2013-slider-background-blur

Creating a Parallax Scrolling Webpage Using Jarallax.js

Doing a videogame website design? Or a promotional website for a movie? Try this tutorial and let it help you using Parallax scrolling.

 70-tutorials-2013-parallax

Create a Smooth Jump To Sub-Navigation Menu in One JS Call

Navigation is a key element to any website. If you have the easiest to navigate design, more likely, you’ll get the money. Allow your users to still navigate even within posts using this tutorial.

70-tutorials-2013-jump-to

FancyScroll.js: Add an iOS/Android Overflow Scroll Effect

Are you a fan of Android or iOS scroll effects? Adapt them to your website and give it a feel of simplicity and professionalism.

70-tutorials-2013-overfow-scorll-effect

OnePageScroll.js: Creating an Apple’s iPhone 5S Website

One page scrolling has been a thing this year. With major companies like Apple following the trend, it has become a bandwagon design as it epitomizes the functionality of all content within one page.

70-tutorials-2013-one-page-scroll

Notify Better.js: Creating a Dynamic Favicon and Title

Dynamic favicons are cool. Period. They give a flair to your designs and let you own every bit of them. Create dynamic favicons by following this tutorial and I bet you’ll have a very good time tinkering with this.

70-tutorials-2013-dynamic-notificaion

LoadingBar.js: Adding a YouTube-like Loading Bar to Your Website

When YouTube first adapted that loading bar, everybody’s eyebrows went high and wondered if they could re-do it and adapt the design into their website. Well, we can. Here’s the tutorial.

70-tutorials-2013-youtube-loading

Super Simple Text Rotator with Almost No Effort Required

Great for advertisements and short descriptions, this text-rotator will make your website more creative and adds flavor of movement and dynamism into it.

70-tutorials-2013-text-rotator

Create a Path-like Menu Button with WheelMenu.js

With the revamp of Path a few years back came a button like no other. It is very convenient and eye-pleasing. Learn how to create it through this tutorial.

70-tutorials-2013-path-like-menu

Creating the New Google Play’s Multi-Level Navigation from Scratch

Reeking with googliness is the next tutorial that borrowed the design of Google play. Learn how to create this eye-candy tutorial now.

70-tutorials-2013-multi-level-google-play-nav

Add Depth to Flat UI with Flat Shadow.js

Use long shadows using code? Yes, you can. I didn’t believe it either, not until I finished reading this tutorial. Read and see for yourself.

70-tutorials-2013-long-shadow

Creating a Swipeable Side Menu for the Web

Learn to recreate the behavior of side menus in smartphone applications in your website design. The design is suited for both mobile and desktop versions. Learn how this tutorial works and make your site fun for both mobile and desktop users!

70-tutorials-2013-swipable-menu

Creating a 3D Interactive Gallery with CSS and jQuery

Imaginations are pushed further as this gallery is put into the limelight. Be amazed at how simple and yet awesome the output is. This is great for portfolio designs!

70-tutorials-2013-gallerry-using-jQuery

Making Draggable Webpage Elements Using jQuery

Draggable elements are very interactive. It lets your viewer see how your mind works by stretching your users’ imagination. Using this tutorial, you will learn how to put a little twist in your regular design.

70-tutorials-2013-draggable-elements

Make Pretty Charts For Your App with jQuery and xChart

Want to put something analytic in your website? How about trying charts using jQuery and xCharts? Be prepared for eyegasms as you will be pampered with good-looking data.

70-tutorials-2013-pretty-charts

Create a Beautiful Password Strength Meter

If you are planning to put sign-up forms, better spice it up using this tutorial. Add a beautiful password strength meter to ensure the safety of your users while adding a different look in your forms.

70-tutorials-2013-password-strenght-indicator

How to Build an iOS-Style Content Slider using jQuery

Create a cool slideshow using jQuery inspired by iOS. The good thing about this tutorial is that it allows you to drag the images to navigate them.

70-tutorials-2013-ios-style-slider

How to Create a CSS3 Login Form

Refurbish your log-in page through this very simple tutorial. Give your users a new look and make your website better.

70-tutorials-2013-css3-login-form

Create a Sticky Navigation Header Using jQuery Waypoints

This is another cool tutorial that involves the navbar. This time, it moves a few pixels down and becomes less opaque.

70-tutorials-2013-sticky-sidebar3

Quick Tip: Embedding New Google Maps

Maps are back on track as another tutorial is included in this list. Re-tweak your maps using this tutorial and revolutionize how people navigate in real life.

70-tutorials-2013-google-maps

Build a Dynamic Dashboard With ChartJS

This chart tutorial builds on the previous one, providing more samples and templates to choose from. Present your data using pie, line and hexagonal charts.

70-tutorials-2013-charts2

Draggable Image Boxes Grid

Do you have a client who wants a gallery type design? Well, this one might help you. Using grids and tiles as navigation tools, your client will surely love your work.

70-tutorials-2013-draggble-image-boxe

Medium-Style Page Transition

Showcase your posts in a totally new way using these transitions. With both simplicity and elegance, this tutorial will surely help you impress your clients, now and in the future.

70-tutorials-2013-medium-transitions

3D Shading with Box-Shadows

3D-looking boxes are not an abomination in the flat design trend. In fact, adding a 3D effect in your designs will accentuate the message you are trying to convey. Learn how it works on this tutorial.

70-tutorials-2013-box-shadows

Google Nexus Website Menu

Want to tinker more on menus? This Nexus-looking menu is great for a start. It is totally refreshing to see and amazingly easy to navigate. Try it.

70-tutorials-2013-google-nexus

Caption Hover Effects

Hovers have never been this awesome. I admit that I was pretty impressed with this tutorial. Clever as it may seem to be, reading this will make you realize that it’s not that difficult to do after all.

70-tutorials-2013-hover-effects

Responsive Retina-Ready Menu

Inspired by the colors of the Maliwan Borderlands game, this menu is both retina-ready and colorful. Try to resize the page and see if the images become pixelated.

70-tutorials-2013-retina-ready

Thumbnail Grid With Expanding Preview

Another gallery tutorial that lets you enlarge the image for a better preview by clicking on the thumbnail.

70-tutorials-2013-Thumbnail-grid

How WordPress Plugins Work

Perhaps one of the best WordPress plugin tutorials there is. This one introduces you to the magical world of plugins and teaches you step-by-step to become a plugin developer.

70-tutorials-2013-wordpress-plugin

Killer Responsive Layouts With CSS Regions

Still in responsive layouts? Smashing Magazine gives an option using CSS Regions. Read this tutorial and become free from the limits posed by CSS elements.

Conclusion

And there you have it, guys. Seventy tutorials that highlight 2013. As 2014 comes right to our doorsteps, read these tutorials to refresh and learn new things, for the next year will surely be a bang! More tutorials to come, more design trends to be inspired by, more clients to ask for your help and, hopefully, more money to fall right into your pockets.

January 03 2014

19:46

Testing in Node.js

A test driven development cycle simplifies the thought process of writing code, makes it easier, and quicker in the long run. But just writing tests is not enough by itself, knowing the kinds of tests to write and how to structure code to conform to this pattern is what it's all about. In this article we will take a look at building a small app in Node.js following a TDD pattern.

Besides the simple unit tests we are all familiar with, Node.js's async code adds an extra dimension: we don't always know the order in which functions will run, and we may be trying to test something in a callback or checking that an async function is behaving as expected.

In this article we will be building a Node app which can search for files that match a given query. I know there are already tools for this (ack), but for the sake of demonstrating TDD I think it could be a well-rounded project.

The first step is obviously to write some tests, but even before that, we need to choose a testing framework. You can use vanilla Node, as there is an assert library built-in, but it's not much in terms of a test runner, and is pretty much the bare essentials.

Another option and probably my favorite for general use is Jasmine. It's pretty self-contained, you don't have any other dependencies to add to your scripts and the syntax is very clean and easy to read. The only reason I am not going to use this today, is because I think Jack Franklin did an excellent job covering this in his recent Tuts+ series here, and it's good to know your options so you can pick the best tool for your situation.


What We’ll Be Building

In this article we will be using the flexible 'Mocha' test runner along with the Chai assertion library.

Unlike Jasmine which is more like an entire test suite in one package, Mocha only takes care of the overall structure but has nothing to do with the actual assertions. This allows you to keep a consistent look and feel when running your tests, but also allows you to run whichever assertion library best fits your situation.

So for example, if you were going to use the vanilla 'assert' library, you could pair it with Mocha to add some structure to your tests.

Chai is a fairly popular option, and is also all about options and modularity. Even without any plugins, just using the default API you have three different syntaxes you can use depending on if you would like to use a more classic TDD style or a more verbose BDD syntax.

So now that we know what we are going to use, let's get into the installation.


The Setup

To get started, let's install Mocha globally by running:

npm install -g mocha

When that completes create a new folder for our project and run the following inside it:

npm install chai

This will install a local copy of Chai for our project. Next, create a folder named test inside our project's directory, as this is the default location Mocha will look for tests.

That's pretty much it for setup, the next step is to talk about how to structure your apps when following a test driven development process.


Structuring Your App

It's important to know, when following a TDD approach, what needs to have tests and what does not. A rule of thumb is to not write tests for other people's already-tested code. What I mean by this is the following: let's say your code opens a file. You don't need to test the individual fs function; it's part of Node's core and is presumably already well tested. The same goes for third-party libraries. You shouldn't structure your functions so that they primarily just call these kinds of functions, because you don't really write tests for them, and that leaves gaps in the TDD cycle.

Now of course, with every programming style there are a lot of different opinions and people will have different views on how to TDD. But the approach I use is that you create individual components to use in your app, each of which solves a unique functional problem. These components are built using TDD ensuring that they work as expected and you won't break their API. Then you write your main script, which is essentially all glue code, and does not need to be tested / can't be tested, in certain situations.

This also means that most of your components can be reused in the future as they do not really have much to do, directly, with the main script.

Following what I just said, it's common practice to create a folder named 'lib' where you put all the individual components. So up to this point you should have Mocha and Chai installed, and then a project directory with two folders: 'lib' and 'test'.
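So the layout, sketched roughly (app.js is just an illustrative name for the main glue script, which we have not written or named yet):

project/
  lib/     <- individual, reusable components, each built test-first
  test/    <- the Mocha specs for those components
  app.js   <- the main "glue" script, not driven by tests (illustrative name)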


Getting Started With TDD

Just in case you are new to TDD I thought it would be a good idea to quickly cover the process. The basic rule is that you can't write any code unless the test runner tells you to.

Essentially, you’re writing what your code is supposed to do before actually doing it. You have a really focused goal while coding and you never compromise your idea by getting side-tracked or thinking too far ahead. Besides that, since all of your code will have a test affiliated with it you can be certain you will never break your app in the future.

A test, in reality, is just a declaration of what a function is expected to do when run. You then run your test runner, which will obviously fail (since you haven't written the code yet), and then you write the minimum amount of code needed to pass the failing test. It's important never to skip this step, because sometimes a test will pass even before you add any code, due to other code you have in the same class or function. When this happens, you either wrote more code than you were supposed to for a different test or this is just a bad test (usually not specific enough).

Again according to our rule above, if the test passes right away you can't write any code, because it didn't tell you to. By continuously writing tests and then implementing the features you construct solid modules that you can rely on.

Once you're finished implementing and testing your component, you can then go back and refactor the code to optimize it and clean it up, making sure the refactoring doesn't break any of the tests you have in place and, more importantly, doesn't add any features that are untested.

Every testing library will have its own syntax, but they usually follow the same pattern of making assertions and then checking if they pass. Since we are using Mocha and Chai let's take a look at both their syntaxes starting with Chai.


Mocha & Chai

I will be using the 'Expect' BDD syntax, because as I mentioned Chai comes with a few options out of the box. The way this syntax works is you start by calling the expect function, passing it the object you want to make an assertion on, and then you chain it with a specific test. An example of what I mean could be as follows:

expect(4+5).equal(9);

That's the basic syntax: we are saying expect the addition of 4 and 5 to equal 9. Now this isn't a great test, because the 4 and 5 will be added by Node.js before the function is even called, so we are essentially testing my math skills, but I hope you get the general idea. The other thing you should note is that this syntax isn't very readable, in terms of the flow of a normal English sentence. Knowing this, Chai added the following chain getters, which don't do anything functionally, but which you can add to make assertions more verbose and readable. The chain getters are as follows:

  • to
  • be
  • been
  • is
  • that
  • and
  • have
  • with
  • at
  • of
  • same
  • a
  • an

Using the above, we can rewrite our previous test to something like this:

expect(4+5).to.equal(9);

I really like the feel of the entire library, which you can check out in its API documentation. Simple things like negating an assertion are as easy as writing .not before the test:

expect(4+5).to.not.equal(10);

So even if you have never used the library before, it won't be hard to figure out what a test is trying to do.
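For instance, here are a couple of other assertions from the default expect API, just to give you a taste (we will use the property assertion later on in this article):

expect({ depth: 4 }).to.have.a.property("depth", 4);
expect([1, 2, 3]).to.have.length(3);
expect("hello").to.be.a("string");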

The last thing I would like to look over before we get into our first test is how we structure our code in Mocha.

Mocha

Mocha is the test runner, so it doesn't really care too much about the actual tests; what it cares about is the tests' structure, because that is how it knows what is failing and how to lay out the results. The way you build it up is by creating multiple describe blocks that outline the different components of your library, and then adding it blocks to specify each individual test.

For a quick example, let's say we had a JSON class with a function to parse JSON, and we wanted to make sure the parse function can detect a badly formatted JSON string. We could structure this like so:

describe("JSON", function() {
   describe(".parse()", function() {
       it("should detect malformed JSON strings", function(){
           //Test Goes Here
       });
   });
});

It's not complicated, and it's about 80% personal preference, but if you keep this kind of format, the test results will come out in a very readable form.

We are now ready to write our first library. Let's begin with a simple synchronous module, to get ourselves better acquainted with the system. Our app will need to accept command line options for setting things like how many levels of folders our app should search through, as well as the query itself.

To take care of all this, we will create a module which accepts the command's string and parses all the included options along with their values.

The Tag Module

This is a great example of a module you can reuse in all your command line apps, as this issue comes up a lot. This will be a simplified version of an actual package I have on npm called ClTags. So to get started, create a file named tags.js inside of the lib folder, and then another file named tagsSpec.js inside of the test folder.

We need to pull in the Chai expect function, as that will be the assertion syntax we will be using and we need to pull in the actual tags file so we can test it. Altogether with some initial setup it should look something like this:

var expect = require("chai").expect;
var tags = require("../lib/tags.js");

describe("Tags", function(){

});

If you run the 'mocha' command now from the root of our project, everything should pass as expected. Now let's think about what our module will do: we want to pass it the command arguments array that was used to run the app, and we want it to build an object with all the tags. It would also be nice if we could pass it a default object of settings, so that if nothing gets overridden, we will have some settings already stored.

When dealing with tags, a lot of apps also provide shortcut options that are just one character long. So, say we wanted to set the depth of our search: we could allow the user to specify either something like --depth=2 or something like -d=2, both of which should have the same effect.

Let's begin with the long formed tags (for example, '--depth=2') and write the first test:

describe("Tags", function(){
   describe("#parse()", function(){
       it("should parse long formed tags", function(){
           var args = ["--depth=4", "--hello=world"];
           var results = tags.parse(args);

           expect(results).to.have.a.property("depth", 4);
           expect(results).to.have.a.property("hello", "world");
       });
   });
});

We added one method to our test suite, called parse, and a test for long formed tags. Inside this test I created an example command and added two assertions for the two properties it should pick up.

Running Mocha now, you should get one error, namely that tags doesn't have a parse function. So to fix this error let's add a parse function to the tags module. A fairly typical way to create a node module is like so:

exports = module.exports = {};

exports.parse = function() {

}

The error said we needed a parse method, so we created it; we didn't add any other code inside because it didn't yet tell us to. By sticking with the bare minimum, you are assured that you won't write more than you are supposed to and end up with untested code.

Now let's run Mocha again; this time we should get an error telling us that it can't read a property named depth from an undefined variable. That is because currently our parse function isn't returning anything, so let's add some code so that it will return an object:

exports.parse = function() {
    var options = {};

    return options;
};

We are slowly moving along. If you run Mocha again, there shouldn't be any exceptions thrown, just a clean error message saying that our empty object has no property called depth.

[Screenshot: No 'depth' property]

Now we can get into some real code. For our function to parse the tag and add it to our object we need to cycle through the arguments array and remove the double dashes at the start of the key.

exports.parse = function(args) {
   var options = {}
   for (var i in args) { //Cycle through args
       var arg = args[i];
       //Check if Long formed tag
       if (arg.substr(0, 2) === "--") {
           arg = arg.substr(2);
           //Check for equals sign
           if (arg.indexOf("=") !== -1) {
               arg = arg.split("=");
               var key = arg.shift();
               options[key] = arg.join("=");
           }
       }
   }
   return options;
}

This code cycles through the list of arguments, makes sure we are dealing with a long formed tag, and then splits it by the first equals character to create the key and value pair for the options object.

Now this almost solves our issue, but if we run Mocha again, you will see that we now have a key for depth, but it's set to a string instead of a number. Numbers are a bit easier to work with later on in our app, so the next piece of code we need to add is to convert values to numbers whenever possible. This can be achieved with some RegEx and the parseInt function as follows:

        if (arg.indexOf("=") !== -1) {
            arg = arg.split("=");
            var key = arg.shift();
            var value = arg.join("=");

            if (/^[0-9]+$/.test(value)) {
                value = parseInt(value, 10);
            }
            options[key] = value;
        }

Running Mocha now, you should get a pass with one test. The number conversion should arguably be in its own test, or at least mentioned in the test's declaration, so that you don't remove the number conversion assertion by mistake; so just append “and convert numbers” to the it declaration for this test, or separate it into a new it block. It really depends on whether you consider this “obvious default behavior” or a separate feature.

[Screenshot: First Pass]
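If you'd rather treat the conversion as its own feature, a separate it block could look roughly like this (just a sketch against the module as we have it so far, sitting alongside the long formed tags test):

    it("should convert numeric tag values to numbers", function(){
        var args = ["--depth=4"];
        var results = tags.parse(args);

        //The value should come back as the number 4, not the string "4"
        expect(results).to.have.a.property("depth", 4);
        expect(results.depth).to.be.a("number");
    });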

Now, as I have been trying to stress throughout this whole article, when you see a passing spec it's time to write more tests. The next thing I wanted to add was the defaults object, so inside the tagsSpec file let's add the following it block right after the previous one:

    it("should parse long formed tags and convert numbers", function(){
        var args = ["--depth=4", "--hello=world"];
        var results = tags.parse(args);

        expect(results).to.have.a.property("depth", 4);
        expect(results).to.have.a.property("hello", "world");
    });
    it("should fallback to defaults", function(){
        var args = ["--depth=4", "--hello=world"];
        var defaults = { depth: 2, foo: "bar" };
        var results = tags.parse(args, defaults);

        var expected = {
            depth: 4,
            foo: "bar",
            hello: "world"
        };

        expect(results).to.deep.equal(expected);
    });

Here we are using a new assertion, deep equal, which is good for checking that two objects have equal values. Alternatively, you can use the eql test, which is a shortcut, but I think this is clearer. This test passes two arguments as the command string and two defaults with one overlap, just so we can get a good spread on the test cases.
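Both forms assert the same deep, structural equality; a quick sketch:

expect({ depth: 4 }).to.deep.equal({ depth: 4 });
expect({ depth: 4 }).to.eql({ depth: 4 }); //same check, shorthand form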

Running Mocha now, you should get a sort of diff, containing the differences between what is expected and what it actually got.

[Screenshot: Defaults Diff]

Let's now head back to the tags.js module and add this functionality in. It's a fairly simple fix: we just need to accept the second parameter, and when it's set to an object, we replace the standard empty object at the start with it:

exports.parse = function(args, defaults) {
   var options = {};
   if (typeof defaults === "object" && !(defaults instanceof Array)) {
       options = defaults
   }

This will bring us back to a green state. The next thing I want to add is the ability to specify a tag without a value and let it work like a boolean. For example, if we just set --searchContents or something like that, it will just add that to our options object with a value of true.

The test for this would look something like the following:

   it("should accept tags without values as a bool", function(){
       var args = ["--searchContents"];
       var results = tags.parse(args);

       expect(results).to.have.a.property("searchContents", true);
   });

Running this will give us the following error just like before:

[Screenshot: Boolean Tags]

Inside of the for loop, when we got a match for a long formed tag, we checked if it contained an equals sign; we can quickly write the code for this test by adding an else clause to that if statement and just setting the value to true:

        if (arg.indexOf("=") !== -1) {
             arg = arg.split("=");
             var key = arg.shift();
             var value = arg.join("=");

             if (/^[0-9]+$/.test(value)) {
                 value = parseInt(value, 10);
             }
             options[key] = value;
        } else {
             options[arg] = true;
        }

The next thing I want to add is the substitutions for the short-hand tags. This will be the third parameter to the parse function and will basically be an object with letters and their corresponding replacements. Here is the spec for this addition:

    it("should accept short formed tags", function(){
        var args = ["-sd=4", "-h"];
        var replacements = {
            s: "searchContents",
            d: "depth",
            h: "hello"
        };

        var results = tags.parse(args, {}, replacements);

        var expected = {
            searchContents: true,
            depth: 4,
            hello: true
        };

        expect(results).to.deep.equal(expected);
    });

The trouble with shorthand tags is that they can be combined in a row. What I mean by this is, unlike the long formed tags where each one is separate, with shorthand tags – since they are each just one letter long – you can call three different ones by typing -vgh. This makes the parsing a bit more difficult, because we still need to allow for the equals operator to add a value to the last tag mentioned, while at the same time registering the other tags. But not to worry, it's nothing that can't be solved with enough popping and shifting.

Here is the entire fix, from the beginning of the parse function:

exports.parse = function(args, defaults, replacements) {
   var options = {};
   if (typeof defaults === "object" && !(defaults instanceof Array)) {
       options = defaults
   }

   if (typeof replacements === "object" && !(replacements instanceof Array)) {
        for (var i in args) {
             var arg = args[i];
             if (arg.charAt(0) === "-" && arg.charAt(1) != "-") {
                  arg = arg.substr(1);
                  if (arg.indexOf("=") !== -1) {
                      arg = arg.split("=");
                      var keys = arg.shift();
                      var value = arg.join("=");

                      arg = keys.split("");
                      var key = arg.pop();
                      if (replacements.hasOwnProperty(key)) {
                           key = replacements[key];
                      }

                      args.push("--" + key + "=" + value);
                  } else {
                      arg = arg.split("");
                  }

                  arg.forEach(function(key){
                      if (replacements.hasOwnProperty(key)) {
                          key = replacements[key];
                      }
                      args.push("--" + key);
                  });
             }
        }
   }

It's a lot of code (in comparison), but all we are really doing is splitting the argument by an equals sign and then splitting the key part into individual letters. So, for example, if we passed -gj=asd, we would split the asd into a variable called value, and then we would split the gj section into individual characters. The last character (j in our example) becomes the key for the value (asd), whereas any other letters before it are just added as regular boolean tags. I didn't want to process these tags right here, in case we change the implementation later. So what we are doing is simply converting these shorthand tags into their long formed versions and letting our existing code handle them later.
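To make that concrete, here is roughly how the finished parse function behaves (a sketch assuming the complete module, i.e. the long formed handling from earlier plus this shorthand fix; the replacement map is just an example):

var tags = require("./lib/tags.js");

var args = ["-sd=4", "--hello=world"];
var replacements = { s: "searchContents", d: "depth" };

//"-sd=4" gets rewritten into "--searchContents" and "--depth=4",
//which the long formed branch then parses like any other tag
var options = tags.parse(args, {}, replacements);
//options ends up with hello: "world", depth: 4 and searchContents: true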

Running Mocha again will take us back to our illustrious green results of four tests passing for this module.

Now there are a few more things we can add to this tags module to make it closer to the npm package, like the ability to also store plain text arguments for things like commands or the ability to collect all the text at the end, for a query property. But this article is already getting long and I would like to move on to implementing the search functionality.


The Search Module

We just went through creating a module step by step following a TDD approach, and I hope you got a feel for writing code this way. For the sake of keeping this article moving, for the rest of it I will speed up the testing process by grouping things together and just showing you the final versions of the tests. It's more of a guide to the different situations that may come up and how to write tests for them.

So just create a file named search.js inside the lib folder and a searchSpec.js file inside of the test folder.

Next, open the spec file and let's set up our first test, which will be for the function that gets a list of files based on a depth parameter. This is also a great example of a test that requires a bit of external setup in order to work. When dealing with outside data (or, in our case, files), you will want to have a predefined setup that you know will work with your tests, but you also don't want to add fake info to your system.

There are basically two options for solving this problem. You can mock the data: as I mentioned above, if you are dealing with the language's own commands for loading data, you don't necessarily need to test them. In cases like that, you can simply provide the 'retrieved' data and continue on with your testing, kind of like what we did with the command string in the tags library. But in this case, we are testing the recursive functionality we are adding on top of the language's file reading capabilities, depending on the specified depth. In cases like these, you do need to write a test, and so we need to create some demo files to test the file reading against. The alternative would be to stub the fs functions to run but not actually do anything, and then count how many times our fake function ran, or something along those lines (check out spies); but for our example, I am just going to create some files.
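Just for reference, a hand-rolled version of that stubbing idea could look something like this (a sketch only; the fake directory listing and the call counter are made up for illustration):

var fs = require("fs");

var originalReaddir = fs.readdir;
var callCount = 0;

//Temporarily replace fs.readdir so no real disk access happens
fs.readdir = function(dir, callback) {
    callCount++;
    callback(null, ["a", "b"]); //pretend every directory holds two files
};

//...run the code under test here, then assert on callCount...

fs.readdir = originalReaddir; //always restore the real function afterwards

For this tutorial, though, we will stick with creating real files.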

Mocha provides functions which can run both before and after your tests, so you can perform these kinds of external setup and cleanup around your tests.

For our example, we will create a couple of test files and folders at two different depths so we can test out that functionality:

var expect = require("chai").expect;
var search = require("../lib/search.js");
var fs = require("fs");

describe("Search", function(){
   describe("#scan()", function(){
       before(function() {
           if (!fs.existsSync(".test_files")) {
               fs.mkdirSync(".test_files");
               fs.writeFileSync(".test_files/a", "");
               fs.writeFileSync(".test_files/b", "");
               fs.mkdirSync(".test_files/dir");
               fs.writeFileSync(".test_files/dir/c", "");
               fs.mkdirSync(".test_files/dir2");
               fs.writeFileSync(".test_files/dir2/d", "");
           }
       });

       after(function() {
           fs.unlinkSync(".test_files/dir/c");
           fs.rmdirSync(".test_files/dir");
           fs.unlinkSync(".test_files/dir2/d");
           fs.rmdirSync(".test_files/dir2");
           fs.unlinkSync(".test_files/a");
           fs.unlinkSync(".test_files/b");
           fs.rmdirSync(".test_files");
       });
   });
});

These will be called based on the describe block they are in, and you can even run code before and after each it block by using beforeEach or afterEach instead. The functions themselves just use standard Node filesystem calls to create and remove the files, respectively. Next, we need to write the actual test. This should go right next to the after function, still inside the describe block:

       it("should retrieve the files from a directory", function(done) {
           search.scan(".test_files", 0, function(err, flist){
               expect(flist).to.deep.equal([
                   ".test_files/a",
                   ".test_files/b",
                   ".test_files/dir/c",
                   ".test_files/dir2/d"
               ]);
               done();
           });
       });

This is our first example of testing an async function, but as you can see it's just as simple as before; all we need to do is use the done function Mocha provides in the it declarations to tell it when we are finished with this test.

Mocha will automatically detect whether you specified the done parameter in the callback and will wait for it to be called, allowing you to test asynchronous code really easily. It's also worth mentioning that this pattern is available throughout Mocha; you can, for example, use it in the before or after functions if you need to set something up asynchronously.
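For example, an asynchronous version of the setup above might look something like this (just a sketch; it reuses the fs require from the top of the spec file):

before(function(done) {
    fs.mkdir(".test_files", function(err) {
        if (err) return done(err); //passing an error here fails the whole suite
        done();
    });
});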

Next I would like to write a test that makes sure the depth parameter works if set:

    it("should stop at a specified depth", function(done) {
        search.scan(".test_files", 1, function(err, flist) {
            expect(flist).to.deep.equal([
                ".test_files/a",
                ".test_files/b",
            ]);
            done();
        });
    });

Nothing different here, just another plain test. Running this in Mocha, you will get an error that the search module doesn't have any methods, basically because we haven't written anything in it yet. So let's go add an outline with the function:

var fs = require("fs");

exports = module.exports = {};

exports.scan = function(dir, depth, done) {

}

If you now run Mocha again, it will pause waiting for this async function to return, but since we haven't called the callback at all, the test will just time out. By default, it should time out after two seconds, but you can adjust this by calling this.timeout(milliseconds) inside of a describe or it block to adjust their timeouts, respectively.
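For instance, here is a sketch of raising the timeout for a whole describe block (the five-second value is arbitrary):

describe("#scan()", function() {
    this.timeout(5000); //give every test in this block up to five seconds

    //...it blocks go here as usual...
});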

This scan function is supposed to take a path and a depth, and return a list of all the files it finds. This is actually kind of tricky when you start thinking about it, because we are essentially recursing in two ways inside a single function: we need to recurse through the different folders, and then those folders need to scan themselves and decide whether to go further.

Doing this synchronously is fine, because you can step through it one by one, slowly completing one level or path at a time. When dealing with an async version, it gets a bit more complicated: you can't just use a forEach loop or something similar, because it won't pause in between folders; they will all essentially run at the same time, each returning different values, and they would sort of overwrite each other.

So to make it work, you need to create a sort of stack where you can asynchronously process entries one at a time (or all at once if you use a queue instead) and keep some order that way. It's a very specific algorithm, so I just keep a snippet by Christopher Jeffrey around, which you can find on Stack Overflow. It doesn't apply just to loading files; I have used it in a number of applications, basically anywhere you need to process an array of objects one at a time using async functions.

We need to alter it a bit, because we would like to have a depth option. The way the depth option works is that you set how many levels of folders you want to check, or zero to recurse indefinitely.

Here is the completed function using the snippet:

exports.scan = function(dir, depth, done) {
   depth--;
   var results = [];
   fs.readdir(dir, function(err, list) {
       if (err) return done(err);
       var i = 0;
       (function next() {
           var file = list[i++];
           if (!file) return done(null, results);
           file = dir + '/' + file;
           fs.stat(file, function(err, stat) {
               if (stat && stat.isDirectory()) {
                   if (depth !== 0) {
                       var ndepth = (depth > 1) ? depth-1 : 1;
                       exports.scan(file, ndepth, function(err, res) {
                           results = results.concat(res);
                           next();
                       });
                   } else {
                       next();
                   }
               } else {
                   results.push(file);
                   next();
               }
           });
       })();
   });
};

Mocha should now be passing both tests. The last function we need to implement is the one which will accept an array of paths and a search keyword and return all matches. Here is the test for it:

    describe("#match()", function(){
        it("should find and return matches based on a query", function(){
            var files = ["hello.txt", "world.js", "another.js"];
            var results = search.match(".js", files);
            expect(results).to.deep.equal(["world.js", "another.js"]);

            results = search.match("hello", files);
            expect(results).to.deep.equal(["hello.txt"]);
        });
    });

And last but not least, let's add the function to search.js:

exports.match = function(query, files){
  var matches = [];
  files.forEach(function(name) {
      if (name.indexOf(query) !== -1) {
          matches.push(name);
      }
  });
  return matches;
}

Just to make sure, run Mocha again; you should have a total of seven tests, all passing.

[Screenshot: All Green!]

Putting It All Together

The last step is to write the glue code that pulls all our modules together. So, in the root of our project, add a file named app.js (or something like that) and put the following inside:

#!/usr/bin/env node

var tags = require("./lib/tags.js");
var search = require("./lib/search.js");
var defaults = {
   path: ".",
   query: "",
   depth: 2
}
var replacements = {
   p: "path",
   q: "query",
   d: "depth",
   h: "help"
}

tags = tags.parse(process.argv, defaults, replacements);

if (tags.help) {
   console.log("Usage: ./app.js -q=query [-d=depth] [-p=path]");
} else {
   search.scan(tags.path, tags.depth, function(err, files) {
       search.match(tags.query, files).forEach(function(file){
           console.log(file);
       });
   });
}

There's no actual logic going on here really; we are just connecting the different modules together to get the desired results. I usually don't test this code, as it's just glue code whose parts have all been tested already.

You can now make your script executable (chmod +x app.js on a Unix system) and then run it like so:

./app.js -q=".js"

Optionally, you can customize some of the other options we set up.
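For example, a hypothetical run that searches only one folder level deep inside the lib directory would look something like this:

./app.js -q="search" -d=1 -p="./lib"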

[Screenshot: Action Still]

Conclusion

In this article we have built an entire file searching app, albeit a simple one, but I think it demonstrates the process as a whole fairly well.

Some personal advice moving forward: if you are going to do a lot of TDD, set up your environment. A lot of the overhead people associate with TDD comes from having to keep switching windows around, opening and closing different files, then running tests and repeating this 80 dozen times a day, which interrupts your workflow and decreases productivity. But if you have your editor set up so that, for example, the tests and code sit side-by-side, or your IDE supports jumping back and forth, this saves a ton of time. You can also have your tests run automatically by starting Mocha with the -w flag, which watches the files for changes and re-runs all the tests. These kinds of things make the process more seamless and more of an aid than a bother.

I hope you enjoyed this article, if you have any questions you can leave them below, contact me on Twitter @gabrielmanricks or on the Nettuts+ IRC channel (#nettuts on freenode).

January 02 2014

22:38

Acceptance Testing With Codeception

Typically, new features for web applications are tested by visiting the appropriate page in a browser, maybe filling out some form data, submitting the form, and then hoping to see the desired result. This is the natural way most web developers test their apps. We can continue with this natural testing process, and improve upon it, to ensure our apps are as stable as possible by using Codeception.


What Is Codeception?

Codeception is a multi-featured testing framework for PHP. It can handle unit, functional, and acceptance testing of web applications and it’s powered by the already very popular PHPUnit testing framework.

Codeception allows us to test different kinds of user perspectives and site scenarios while they visit our app, to ensure a pleasant user experience. By testing multiple scenarios, we can simulate a user's natural flow throughout our application and make sure the app is working as expected.


Installation & Configuration

Let’s start by creating a folder in our Sites directory (or wherever you prefer to store your web applications) to hold the sample application that we’ll test with Codeception:

cd Sites
mkdir codeception

Now I’ve already created a small sample HTML and PHP file that we can use to test with. You can just copy and paste it from below. We’ll start with the toupper.html file:

# codeception/toupper.html

<!DOCTYPE html>
<html>
<head>
	<title>Convert Me!</title>
</head>
<body>

	<h1>Convert Me!</h1>

	<form action="toupper.php" method="post">

		<label for="string">Convert to Uppercase:</label>
		<input type="text" name="string" id="string">

		<input type="submit" value="Convert">

	</form>

</body>
</html>

This page simply displays a form allowing a user to enter in a string of text and we’ll convert it to uppercase using PHP. Next will be our PHP file which will process the form submission:

# codeception/toupper.php
<?php
$message = "No string entered";

if (!empty($_POST['string'])) {
	$message = "String converted: " . strtoupper($_POST['string']);
}

?>
<!DOCTYPE html>
<html>
<head>
	<title>To Upper!</title>
</head>
<body>

	<h1>To Upper!</h1>

	<p><?php echo $message; ?></p>

	<p><a href="toupper.html">Back to form</a>.</p>

</body>
</html>

This page creates a $message variable to hold a default message. We then check to see if the form was submitted. If so, we replace the default message with the uppercase converted string, then we echo out the message and at the bottom of the page we have a link, linking back to the form.

A super simple PHP application, but it does allow us to try out Codeception’s acceptance testing features.

Now let’s download and install Codeception. Luckily, it is very easy to install and configure. There are a few ways to install it: using Composer, cloning from Git, or downloading the Phar archive. I prefer to use Composer, so let’s create a composer.json file in the root of our codeception sample web app:

cd codeception
touch composer.json

Next, open up composer.json into your favorite text editor and add the following lines to download Codeception:

{
    "require": {
        "codeception/codeception": "*"
    }
}

Then run composer in your terminal:

composer update

Then, to set up Codeception’s initial configuration and test suites, run the bootstrap command:

./vendor/bin/codecept bootstrap

You’ll notice after running the previous commands, we now have tests and vendor folders in our sample web app.

Next we just need to add our local application’s URL into our tests/acceptance.suite.yml file:

class_name: WebGuy
modules:
    enabled:
        - PhpBrowser
        - WebHelper
    config:
        PhpBrowser:
            url: 'http://localhost/codeception/'

Ok, our sample application and Codeception should now be installed and ready to go.


Acceptance Testing

Acceptance testing allows us to test our applications using the normal website viewing process of visiting a webpage, filling in a form, and submitting the form to see the desired result. The difference is that, with Codeception, we don’t have to waste time going to the browser each time we want to test a new feature; instead, we can just run our acceptance tests to see whether they pass or not.

Generating an Acceptance Test

Now we need a file that we can write our tests in. Codeception makes it super simple to create different types of tests by using the codecept generator scripts. Let’s generate a Toupper acceptance test:

./vendor/bin/codecept generate:cept acceptance Toupper

Here we run the codecept command again and tell it to generate an acceptance test with the name of Toupper (after file generation, the file will actually be named ToupperCept.php).

Writing an Acceptance Test

When using Codeception, you will likely come across the different “Guys” that run the various tasks for the framework. There’s a CodeGuy, a TestGuy, and a WebGuy.

For acceptance testing, we’ll be using the WebGuy. Let’s open up our ToupperCept.php file in our favorite text editor and begin by creating a new WebGuy that we can use to run our tests:

$I = new WebGuy($scenario);

Here we create a new WebGuy object, storing it in an $I variable. We can now use this object to test out different parts of our page to ensure everything is correct.

Let’s make sure that our Toupper web page is loading up properly first, then we’ll test out the form submission:

$I = new WebGuy($scenario);
$I->wantTo('ensure Toupper form works');
$I->amOnPage('toupper.html');
$I->see('Convert Me!');

In the above code, we use our $I object and call several of its methods to test out our page. We start by stating what we want to do. In this case, we just want to make sure our Toupper form works, using the wantTo method. Then we use the amOnPage method to ensure we’re on the correct page, toupper.html. Finally, we call the see method to ensure we see the text Convert Me! on the webpage.

Executing Our Test

Now that we have a basic test to see if we’re on the right page and that we’re seeing the correct content, let’s run the test using the following command:

./vendor/bin/codecept run

This will run all test suites. You should see the following output in your console, showing a successful test:

[Screenshot: successful-test-1]

If you’d like to run only your acceptance tests, you can use the following command:

./vendor/bin/codecept run acceptance

Additionally, you can see the full list of actions performed by using the --steps flag:

./vendor/bin/codecept run acceptance --steps

Here’s what your console would look like:

[Screenshot: successful-test-steps]

Testing Forms

Next, let’s see how we can use Codeception to test out the functionality of our form. Let’s add the following into our ToupperCept.php file:

$I = new WebGuy($scenario);
$I->wantTo('ensure Toupper form works');
$I->amOnPage('toupper.html');
$I->see('Convert Me!');
$I->fillField('string', "Convert me to upper");
$I->click('Convert');
$I->amOnPage('toupper.php');
$I->see('To Upper!');

Here we just continue where we left off. After ensuring we’re on the right page, we fill in the form using the fillField method, passing it the field name and the value we’d like to use, and then we click the Convert button. Afterwards, we verify that we are now on the toupper.php page and that we’re seeing our heading of To Upper!.

Let’s run our tests again:

./vendor/bin/codecept run

Your console should have something similar to below, confirming the tests have passed:

[Screenshot: successful-form-test]

Testing Links

Now lastly, let’s test out the link that’s on our toupper.php page to ensure it takes us back home:

$I = new WebGuy($scenario);
$I->wantTo('ensure Toupper form works');
$I->amOnPage('toupper.html');
$I->see('Convert Me!');
$I->fillField('string', "Convert me to upper");
$I->click('Convert');
$I->amOnPage('toupper.php');
$I->see('To Upper!');
$I->click('Back to form');
$I->see('Convert Me!');

Again, we call the click method to click on our link, and to make sure we’re back on the right page, we verify that we see the text Convert Me!.

[Screenshot: successful-link-test]

In Conclusion

Now that you know the basics of working with Codeception, you should give it a try on your own. Go ahead and attempt to add a few more tests to ensure that the form displays the No string entered error message if you submit the form empty.

Here’s my final code after doing the above and the resulting passed tests:

$I = new WebGuy($scenario);
$I->wantTo('ensure Toupper form works');
$I->amOnPage('toupper.html');
$I->see('Convert Me!');
$I->fillField('string', "Convert me to upper");
$I->click('Convert');
$I->amOnPage('toupper.php');
$I->see('To Upper!');
$I->click('Back to form');
$I->see('Convert Me!');
$I->fillField('string', '');
$I->click('Convert');
$I->amOnPage('toupper.php');
$I->see('No string entered');

[Screenshot: successful-final-test]

So to wrap up, Codeception is an awesome testing framework that allows you to write very readable tests easily. Be sure to check out the full documentation for more detailed examples and for information regarding the other testing paradigms.

21:12

Showcase Of Fresh Photoshop Text Effect Tutorials

In this assortment, we are showcasing some fresh and outstanding Photoshop text effect tutorials. Photoshop plays a vital role in designers' work: it is easy to use, saves time, and makes their work easier. Because Adobe Photoshop has so many advantages, every designer keeps an eye out for innovative and fresh Photoshop tutorials.

With this collection, you can make your typography designs more colorful and eye-catching. Through typography, web designers can easily show their creative skills and imagination to the whole creative world.

Basically, typography is the arrangement of letters. So what are you waiting for? Scroll through our wonderful collection and be inspired to create your own amazing and attractive text effects as well. You can follow these Photoshop text effect tutorials without paying any cost. And do not forget to share what you think about this compilation; your suggestions and comments are very valuable to us, so please leave your feedback in the comment section below.

Create Inspirational 3D Gold Text with Photoshop

Learn how to make any inspirational message look more meaningful by turning it into a 3D typography poster. In this Photoshop tutorial, you will learn how to create striking 3D gold text, add reflections with image-based lighting, create a drywall background from scratch, and position your lights using an easy method that feels like you’re playing a first-person shooter game. To follow this tutorial, you’ll need Photoshop CS6 Extended.

Create a Magma Hot Text Effect

In this tutorial, we will explain how to create a magma hot text effect using layer styles and textures in Photoshop. Let’s get started!

The Text on the Wet Sweaty Window

This tutorial explains how to use a couple of images, a blur filter, and a simple brush to create a realistic text-on-wet-window effect.

How to Create Colorful Wooden 3D Text

Photoshop is a powerful tool for creating all sorts of imagery but it does have its limitations. While Photoshop does have some fantastic tools for creating 3D text, there are other applications that are often better suited for the job. That is why many illustrators use a combination of Photoshop and some other 3D application to help create 3D text. In this tutorial, we will show you how to use Cinema 4D to create wooden 3D text and then how to use Photoshop to add the final touches. Let’s get started!

Use Photoshop and Illustrator to Create Guitar String Typography

Photoshop and Illustrator are great tools to help you create things that only exist in your imagination. In this tutorial, we will create words out of intertwining guitar strings. Let’s get started!

Create a Unique Glowing Text with Space Background

In this tutorial, I will show you the steps I took to Create this Unique Glowing Text with Space Background in Adobe Photoshop CS5. We will use techniques such as layer blending, filter effects, custom brushes, and transform tools to create this text effect. This is a beginner to intermediate level Photoshop tutorial; let’s get started! Layered PSD file included.

Design Unique Grass Wood Text Effect

In this Photoshop tutorial, I will show you the steps I took to Design this Unique Grass Wood text effect in Photoshop. This is an intermediate level tutorial, so some steps can be a little tricky. We will come across techniques such as layer masking, image adjustments, blending, and a number of filter effects. Have a try!

Vintage Wood Typography Tutorial

Learn how to create vintage wood typography using a cool 3D Photoshop action. You can easily combine different font types, shapes, and layer styles to create an unlimited number of effects. You can easily customize the 3D effect, as it uses smart objects.

Flower Blossom Typography Tutorial

In this Adobe Photoshop tutorial I will show you how I created this beautiful flower blossom typography using some simple techniques that can be practiced even by beginners.

Design Abstract Text Effect with Pen Tool and Grunge Brush

In this tutorial, I will show you the steps to design this abstract text effect with pen tool and grunge brush decoration in Photoshop. Along the way, we will practice using Pen Tool to draw some nice hair-like lines, as well as apply grunge brushsets for some cool abstract effect for our texts.

Design Fluid Typography on Watercolour Background

In this tutorial, I will show you the steps to Design this Fluid Typography on Watercolour Background in Photoshop. We will mainly use custom brushes and the liquify tool to achieve this simple, but unique effect. This is a beginner level tutorial, have a try!

How to Create a Metallic Text Effect

In this tutorial, we will explain how to create a metallic text effect using layer styles in Photoshop. Let’s get started!

Vintage 3D Text Effect

The 3D Environment in Photoshop CS6 is a great way to create amazing 3D text effects without the need for any other 3D software. This tutorial will explain how to use the many tools and settings to create a nice vintage 3D text effect.

Simple Bokeh Text Effect

Bokeh, in Photography, is a word that describes “the way the lens renders out-of-focus points of light”. It adds an absolutely stunning touch to photographs, and creates a really nice effect. This tutorial is inspired by this technique, and will show you a super easy way to create a dazzling beautiful bokeh text effect, using a simple brush and a layer effect.

Vintage Tiles Text Effect

This tutorial will show you how to create some letters, convert them to shapes, exclude them from rounded rectangles, then apply a simple layer style to create lovely vintage looking tiles. Then, a nice action will be applied to enhance the coloring of the final result.

3D Bold Text Effect

This tutorial will explain how to create a bold bright-textured slick text effect in Photoshop CS6. A simple pattern will be used to create the bump map, while some other material settings will be changed to create the final appearance. Lighting, Environment, and Image Based Light settings will also be modified to complete the scene and create the final result.

Colorful Textured Plastic Text Effect

This tutorial is a retro stationery inspired one. It will explain an easy way to create textured plastic semi-transparent letters, then move them around over the graph paper background.

Design a Hot Golden Text with Disintegration Effect

In this Photoshop tutorial, I will show you the steps I took to Design this Hot, Golden Text with Disintegration Effect in Photoshop. We will mainly use the selection tool and layer mask to perform some texturing for the letters, and use image adjustment tools to fine-tune the effect.

Create an “Out of the Box” Stone and Concrete 3D Text Effect

In this tutorial, we will mix Photoshop CS6s 3D tools with Filter Forge to create an “out of the box” stone and concrete 3D text effect. Let’s get started!

Striped Road-Inspired Text Effect

This tutorial will show you some tips on how to use the Pen Tool, along with some Brushes and Blend Modes, to create a striped road (street) inspired text effect.

Dot-Cutout Paper Text Effect

This tutorial will explain how to create a colorful dot-cutout paper text effect, by rasterizing the type layers, then creating and stroking work paths with a modified round brush. It will also explain the process of adding a couple of adjustment layers to modify the coloring of the final result.

Decorated Wood Text Effect

In this tutorial we will learn how to create a wood style text effect.

Create a Retro Sign from Scratch Using Advanced Techniques

In this tutorial, Stephen Petrany will push Photoshop to its limits to produce a realistic neon sign. He will use several techniques, ranging from simple to advanced, to help us create and texture a neon sign within Photoshop. We will explore how to use Smart Objects, create custom textures, manipulate Layer Styles and create custom brushes.

Grunge Road Paint Photoshop Text Effect

Learn how to create a realistic road paint effect in Photoshop in just a few simple steps. This grunge paint road works great with text, shapes and silhouettes. You can use it with any paint color not just with white and yellow.

Create a 3D Text Effect

Photoshop is an incredibly versatile application that is often used alongside 3D applications. In this tutorial, Wojciech Pijecki will show you how to sketch out an idea for a text effect, build up the idea in Photoshop, render it in 3D using Maya, and then how to add the finishing touches again in Photoshop.

Brick Text Typography Surreal Tutorial

Learn how to create a 3D brick typography in Photoshop using a free 3D action maker. We will combine the brick text effect with different images to create a surreal photo manipulation.

Create an Advanced Stone Text Effect

Today’s advanced tutorial will teach you how to create a realistic stone text effect from scratch in Photoshop. You’ll learn how to create a custom bump map, and how to use this technique to distort virtually any surface. Then you will combine Layer Styles to create an authentic ancient stone texture. Finally, we will explore few simple techniques for creating fast and realistic jungle vines with leaves.

Love Roses Text in Photoshop

Learn how to create a beautiful love red rose text in Photoshop. To create this text effect, I will use a 3D action for creating the 3D text effect and some interesting layer styles with creative red rose patterns.

Happy New Year Text Effect

In this tutorial I wanted to create a 3D text effect for the 2013 New Year. I have used several techniques in this tutorial that I hope you will find useful for creating your own text effects. I have also used free Photoshop action for creating a 3D effect.

Create a Sparkling Diamond and Gold Text Effect

In this tutorial, we will explain how to use Filter Forge and Photoshop to create a glamorous, sparkling, diamond and gold text effect. Let’s get started!

Create a Unique Rock Text with Space Background

In this Photoshop tutorial, I will show you the steps I took to Create this Unique Rock Text with Space Background in Photoshop. I will show you an interesting way to place the rocks and join them together with simple line-drawing techniques. This is a beginner to intermediate level tutorial, have a try!

Design an Awesome Electrified Metal Scrap Text Effect

In this tutorial, I will show you the steps I took to Design an Electrified Text Effect in Photoshop. The focus of this tutorial is to show you how you can combine several elements seamlessly using selections and layer blending modes, in order to create an eye-catching text effect. This is an intermediate level tutorial, so some steps can be tricky, but why not have a try!

Gingerbread Cookies Text Effect

This tutorial will explain how to bake a delicious gingerbread cookies inspired text effect, using layer styles, some filters, and a couple of other simple elements. Ready?

Simple New Year Typographic Wallpaper

This mini-tutorial will explain how to use simple text and a filter to create a nice new year’s typographic wallpaper.

Colorful Futuristic Text Effect

This tutorial explains how to create a colorful futuristic looking text effect, using a couple of Layer Styles for multiple layers, and a simple brush.

Create a Cross Stitch Effect

Learn how to create a cross stitch effect in Photoshop that you can use to make creative text effects with fabrics and other cloth textures. This effect is only for text and numbers, so if you want to turn images, shapes, or clipart into cross stitch, you will have to do it with the cross stitch brush, and the process is very slow.

Ghostly Text Effect

This tutorial will show you how to combine different layer effects with the gaussian blur filter to create a simple hazy ghostly text effect.

Create Electrified Metal Text Effect

In this tutorial, I will show you the steps to Create Electrified Metal Text Effect in Photoshop. We will cover a number of selection techniques, as well as how we can use the right texture to form unique text effect.

December 31 2013

20:23

Introduction to HTML5 Desktop Apps With Node-Webkit

Using Node.js, we can create web applications easily. Now, thanks to node-webkit, we can also create desktop apps with it, using a unique combination of HTML5 and Node.


Introduction

The library combines the WebKit engine and Node.js in a unique way. Both WebKit and Node share the same context, allowing you to write your code like it’s meant to be executed in a browser, but with the addition of all of Node’s features.

The list of uses is endless. You can create business apps, text and image editors, games, presentations, admin panels, etc. Just name the desktop app that you would like to create, and I can assure you that it will be possible with node-webkit.

In this tutorial, I will show you how to get started by showing you how to create a simple text editor.


Preparation

First, you have to get the library. Download the version appropriate for your operating system (the application itself will run on all of them) from GitHub and unzip it wherever you’d like. Now, let’s create a basic folder structure. We will have folders for HTML files (.html) and for JavaScript files (.js). Also, create the package.json file in the same directory that the nw executable is in, and a node_modules folder to keep the modules we will create.


Package.json

The first thing to do is to fill in the required fields in the package.json file. In the case of node-webkit, these are name and main (on GitHub you can find the full list of available package.json options). The first one is exactly the same as for a plain Node.js app. The second one should be a (relative or absolute) path to the main HTML file that will be displayed when the application starts. In the case of our example, the package.json file should look like this:

{
	"name": "simple-text-editor",
	"main": "./html/index.html"
}

Now if you run your app using the nw executable, you should see a blank screen like this:

[Screenshot: blank_app]

The Main File

Creating the user interface using node-webkit is no different than creating a webpage (with the slight difference that you know the rendering engine, so you don’t have to provide any fallbacks for older browsers, or use libraries like jQuery – and actually you shouldn’t, because they include fallback code that will slow down your app). Let’s create an index.html file that we will use:

<!DOCTYPE html>
<html>
<head>
	<title>Simple Text Editor</title>
</head>
<body>

</body>
</html>

Let’s also include (and create) the main JavaScript file that we will use, in the body, so it will be executed after the DOM is loaded:

<script src="../js/main.js"></script>

Now the only thing that will change when you open the app will be the title.


Let’s Use Some Node

To show you how easily you can use Node.js in node-webkit, let’s read the contents of package.json and print it on the screen. Create the js/main.js file and put the following code in it:

var fs = require('fs');

fs.readFile('./package.json', 'utf-8', function (error, contents) {
	document.write(contents);
});

As you can see, the code looks exactly the same as if you were writing it for Node. But then we use document.write to put the file contents on the page. No need to set up any local server.

Now open the app and you should see something like this:

[Screenshot: file_contents_printed]

Modules

There is another nice thing about node-webkit: you don’t have to include a series of <script> tags in your HTML if you want to split your code into modules. You can do it just like you do in Node.js – using require. Let’s create a simple module that reads file contents into a textarea or writes them to a file. Name the file file.js and put it in the node_modules directory. Now put the overall module structure in it:

var fs = require('fs');

function File() {
	function open(path, document) {
		
	}
	
	function save(path, document) {
		
	}
	
	this.open = open;
	this.save = save;
}

module.exports = new File;

As you can see this will be a static class with two public methods – one for opening the files and one for saving them.

The open method will look like this:

function open(path, document) {
	fs.readFile(path, 'utf-8', function (error, contents) {
		document.getElementById('editor').value = contents;
	});
}

Pretty simple, right? The function accepts the path to the file as the first parameter and puts the file’s contents into the element with an id of “editor”. We also need to pass the document object to the function, because the script is called using Node’s require and doesn’t have access to the WebKit stuff directly.

The save method is as simple as the previous one:

function save(path, document) {
	var text = document.getElementById('editor').value;
	fs.writeFile(path, text);
}

Now let’s test if everything works. Change the contents of js/main.js to this:

var file = require('file.js');

console.log(file.open, file.save);

Now if you go to the developer tools console and click the developer refresh button in the right corner of the window you should see the two functions from this module logged. This is another useful thing in node-webkit – your console.log calls are being displayed in the developer tools console, so it’s easier to debug your apps.

[Screenshot: dev_tools_log]

Node-Webkit’s File Inputs

Let’s add two file inputs that we will need later:

<input id="open" type="file" style="display:none" accept="text/*"/>
<input id="save" type="file" nwsaveas style="display:none" accept="text/*"/>

Notice the nwsaveas attribute in the second input. It is a special type of input from node-webkit that allows the user to select a non-existing file. Both of the inputs are hidden, because we will only access them from JavaScript. In node-webkit, file inputs are modified in a way that allows you to fire a click event on them, so you can open the file dialog without the user clicking on the input (with no need for workarounds like invisible inputs above the button). Now we can move on to the JavaScript.

First, remove the console.log call from the js/main.js file. Now put this code in there:

function clickInput(id) {
	var event = document.createEvent('MouseEvents');
	event.initMouseEvent('click');
	document.getElementById(id).dispatchEvent(event);
}

document.addEventListener('keyup', function (e) {
	if (e.keyCode == 'O'.charCodeAt(0) && e.ctrlKey) {
		clickInput('open');
	} else if (e.keyCode == 'S'.charCodeAt(0) && e.ctrlKey) {
		clickInput('save');
	}
});

This code is responsible for showing the Open and Save dialogs. The clickInput function does the main job here – it simulates the click event on the input element – in a normal browser, this would be impossible for security reasons, but here it’s not a security threat at all. Later, there is a normal keyup event handler that checks if the proper key combination was pressed (Ctrl+O or Ctrl+S) and “clicks” on the inputs. Note that this would also be impossible to achieve in a browser – combinations like Ctrl+O and Ctrl+S are reserved for the browser’s internal use and no event fires when they are pressed (only Firefox fires events for them).

Now press the developer refresh button and you should see the appropriate dialog when you press Ctrl+S or Ctrl+O. They of course do nothing for now.


Creating the Editor

Now, since we will be creating a text editor, we need something to write on. Add the textarea to the HTML:

<textarea id="editor" style="position:fixed;top:0;bottom:0;left:0;right:0"></textarea>

Next we have to finish the opening/saving code. Let’s create the onchange event listeners for the open and save inputs:

document.getElementById('open').addEventListener('change', function (e) {
	file.open(this.value, document);
});

document.getElementById('save').addEventListener('change', function (e) {
	file.save(this.value, document);
});

Thanks to the module created earlier, the code is really simple. This is also possible thanks to node-webkit, because in browsers (again for security reasons) the value of the file input element is set to a fake path – here it’s the actual path which was selected. Now open the app (or press the developer refresh button, if you didn’t close it) and you should have a perfectly working text editor.


Further Enhancements

We can also do a few other things to make the editor a bit more advanced and useful. For example, let’s open a new window when a user presses Ctrl+N. First, add this require to the top of the script:

var gui = require('nw.gui');

The nw.gui module is node-webkit’s library related to the user interface (you can read more about this on node-webkit’s GitHub). Next, add this else if to the document’s keyup listener:

} else if (e.keyCode == 'N'.charCodeAt(0) && e.ctrlKey) {
	gui.Window.open('index.html');
}

And voilà! If you refresh the app, you can now press Ctrl+N to open a new window. This function does differ from the normal window.open though: you can pass various window options as the second parameter. The list of what you can set there is available in the documentation.
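For instance, here is a minimal sketch of the same else if with options passed in (width, height and toolbar are option names I’m assuming from node-webkit’s documentation – check it for the authoritative list):

} else if (e.keyCode == 'N'.charCodeAt(0) && e.ctrlKey) {
	// open the new window with explicit dimensions and without the browser-like toolbar
	gui.Window.open('index.html', { width: 800, height: 600, toolbar: false });
}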

Another thing that may be useful in a text editor is an application menu (the one that sits under the title bar in Windows/Linux and on the top of the screen on Mac). In node-webkit it’s pretty simple. First, let’s create the menu:

var menu = new gui.Menu({ type: 'menubar' });

The type menubar is reserved for the application menu. Now we can add items to it. Let it be a File menu:

menu.append(new gui.MenuItem({
	label: 'File',
	submenu: new gui.Menu()
}));

Now let’s add some items to this menu:

menu.items[0].submenu.append(new gui.MenuItem({
	label: 'New',
	click: function () {
		gui.Window.open('index.html');
	}
}));
menu.items[0].submenu.append(new gui.MenuItem({
	type: 'separator'
}));
menu.items[0].submenu.append(new gui.MenuItem({
	label: 'Close',
	click: function () {
		gui.Window.get().close();
	}
}));

menu.items[0] is the first item of our application menu (you can also assign it to a variable when creating it if you want to). We are appending new items to its submenu, and each item gets a click callback to handle the click on it. The gui.Window.get method gets the current window, so we can close it when the user selects the Close option in the File menu.

Finally, we can assign the menu to the window:

gui.Window.get().menu = menu;

Again, we are using gui.Window.get to get the current window, then we assign our menu to its menu property. Please note that even though we can assign a different menu to every window, on OS X (Mac) one app can only have one menu (which is global for all windows), so if you want your app to be used on Macs, you should avoid using different menus on different windows.

Now if you open or refresh your app, you should see a system menu under the titlebar:

system_menu

Packaging the App

Now if you would like to share your app with other users, you can package it into one file, so the users can download the node-webkit’s executable appropriate to their platform and launch your app with it. First, let’s remove the toolbar that makes the window look like a browser – it’s nice to have during development, but your users probably don’t want to see it. We can achieve this by setting window.toolbar to false in package.json, so it looks like this:

{
	"name": "example-app",
	"main": "./html/index.html",
	"window": {
		"toolbar": false
	}
}

Now if you open the app (nothing will change if you just refresh it, package.json is loaded only on startup), you should see the final result:

final_result

Packaging the app is pretty simple. Just create a .zip archive with all your assets (all of the files you created, without the files supplied with node-webkit) and change its extension to .nw. That’s all. If a user downloads node-webkit and your package, they only need to put the package in node-webkit’s directory and run the nw executable. Detailed descriptions, with a few more tips, are available on node-webkit’s GitHub.
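For example, on Linux or OS X a minimal sketch of that step, run from your project’s root directory, could be the following (app.nw is just an example name, and the exclude pattern is only there to keep repository metadata out of the package):

zip -r ../app.nw . -x "*.git*"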

Now your editor is ready to be shipped to your users.


Conclusion

As you can see, node-webkit is a very promising and powerful library. With frequent updates and fixes, and being supported by Intel, there’s almost no chance that it will be discontinued, which can sometimes happen with open source projects.

Share your thoughts about this project in the comments below. I personally think it’s the best thing out there if you want to create a desktop application using Node.js and HTML.

December 30 2013

16:44

Using New Relic Custom Metrics to Monitor All the Things

When you first get introduced to New Relic, it’s easy to be overwhelmed by all the features. But like with most tools, as you slowly climb the learning curve and get familiar with the features provided out of the box, you begin to wonder how it all hangs together under the hood and if there is more you can do with what you have at hand.

Today I will look at how New Relic actually monitors transactions and how you can hook into the process. We will take a quick glance at the support provided by New Relic for monitoring background jobs and we’ll also dig into New Relic custom metrics, how you would use them and the benefits you get from doing so. By the end you will have a deeper understanding of how New Relic works and be able to utilize it more fully. In other words, we will have a deeper knowledge of our tools—something every developer should strive for.

Let’s begin by taking a quick look at how New Relic actually hooks-in to track the performance of your application.


How New Relic Tracks Transactions and How You Can Do the Same

It may seem a little like magic, you include a gem in your Gemfile:

gem 'newrelic_rpm'

And somehow New Relic monitors all your code. Of course it is just code, so let’s see how New Relic actually instruments your app so it can begin monitoring it when the agent gem is required. We will do this in the context of a Rails 4 app.

The first place to look is newrelic_rpm.rb, which has the following relevant code in it:

...
if Rails::VERSION::MAJOR.to_i >= 3
  module NewRelic
    class Railtie < Rails::Railtie

      initializer "newrelic_rpm.start_plugin" do |app|
        NewRelic::Control.instance.init_plugin(:config => app.config)
      end
    end
  end
else
...

So a Railtie gets created when the Rails version is three or above; this gets a NewRelic::Control singleton instance (when it is initialized) and calls init_plugin. When the NewRelic::Control instance is created it figures out what framework is running (Rails 4 in our case) and loads some relevant code; we can see this in new_relic/control/class_methods#load_framework_class. The init_plugin method that gets executed lives in new_relic/control/instance_methods. The interesting code here is:

...
if Agent.config[:agent_enabled] && !NewRelic::Agent.instance.started?
  start_agent
  install_instrumentation
  load_samplers unless Agent.config[:disable_samplers]
...

The install_instrumentation call is the important one. The implementation lives in new_relic/control/instrumentation. Skipping over the boilerplate bits, this works out which instrumentation files it needs to load and then requires them one by one. In our case, it will load files under new_relic/agent/instrumentation/rails4. One of the files here is action_controller.rb; when this gets required, it eventually executes the following code via some metaprogramming magic:

executes do
  class ActionController::Base
    include NewRelic::Agent::Instrumentation::ControllerInstrumentation
    include NewRelic::Agent::Instrumentation::Rails4::ActionController
  end
  NewRelic::Agent::Instrumentation::ActionControllerSubscriber \
    .subscribe(/^process_action.action_controller$/)
end

And here we get to the crux of it: ActionController::Base – from which all your controllers inherit – gets a couple of modules included in it, the most important one being NewRelic::Agent::Instrumentation::ControllerInstrumentation. This is how New Relic begins to monitor all your controller actions as ‘transactions’. Of course, it’s a somewhat simplified view and we’re glossing over a lot of detail, but it gives you an idea of how New Relic monitors your code. The question is, how can you use this information?

Monitoring Custom Frameworks & Background Jobs

You’re highly unlikely to find yourself in a situation where you’re using a web framework that New Relic does not already have instrumentation for (in the Ruby world), but let’s say you did. Knowing what we know now, we can easily, manually instrument the controller actions of this custom framework. If we have a controller like this:

class CustomController
  def custom_action
    ...
  end
end

We can instrument it like so:

class CustomController
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def custom_action
    ...
  end
  add_transaction_tracer :custom_action
end

Now your controller method will be tracked like a transaction in the same way Rails actions get tracked. Of course, if you’ve rolled your own web framework which includes database handling code, you will need to do extra work to instrument much of that code, to allow New Relic to monitor more than just controller actions. But the general idea remains sound.

The above pattern becomes more useful when you want New Relic to track background jobs in your application. You’re a lot more likely to have rolled some custom background job handling code than you are to have written your own web framework. In fact, we did just this with Tuts+ initially, although we’re now migrating to Sidekiq. If you’re using one of the well-known background jobs systems like Sidekiq, Resque or Delayed Job, New Relic already has instrumentation built in, but if you did roll your own, the above pattern is all you need to monitor your tasks.

For example, our custom Tuts+ background jobs were regular ruby classes that responded to the execute method, so all we have to do is this:

class SomeBackgroundJob
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def execute
    ...
  end
  add_transaction_tracer :execute, category: :task
end

The last bit, category: :task, is to ensure that New Relic does not track it as a web transaction, but instead treats it as a background task and makes it appear under the background tasks tab in the New Relic UI. If we create a base class for all our jobs, we can put the instrumentation there and child classes will inherit it, so we don’t have to worry about doing the above in every job class – a sketch of that follows below.
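A minimal sketch of such a base class might look like this (BackgroundJob, perform and the subclass are made-up names for your own job code, not New Relic API):

class BackgroundJob
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  # subclasses implement perform; execute is the traced entry point
  def execute
    perform
  end
  add_transaction_tracer :execute, category: :task
end

class SomeBackgroundJob < BackgroundJob
  def perform
    # ...the actual work...
  end
end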

Customizing a Transaction Even More

Interestingly enough, even the web transactions that New Relic automatically monitors are not sacrosanct. You can, for example, add some custom parameters to be sent to New Relic for the transaction that is currently executing (if you’ve switched on capturing parameters).

01_capture_parameters

You can do this at any point during the transaction. All you have to do is call ::NewRelic::Agent.add_custom_parameters(:key => 'value') at any time and the parameters you pass in will get added to the parameter data you see inside New Relic. For example, if we had a controller that looked like this:

class HelloController < ApplicationController
  def index
    ::NewRelic::Agent.add_custom_parameters(:hello => 'world')
  end
end

Slow transactions would give us the following:

02_custom_parameter_display

This isn’t all we can do. We can segment a transaction while it’s running by renaming it. Let’s say we want to treat a transaction as special when it is performed by a particular user. You may do something like this:

class HelloController < ApplicationController
  def index
    new_relic_name = NewRelic::Agent.get_transaction_name
    if current_user.name == 'Joe Customer'
      NewRelic::Agent.set_transaction_name("#{new_relic_name} - Joe Customer")
    end
  end
end

Now this transaction will be treated as a separate transaction in the New Relic UI:

03_segmented_transaction

Even default New Relic instrumentation has some room for customization, but sometimes, just like Captain Kirk, you just need more power. This is where custom metrics come in.


New Relic Custom Metrics & How They Are Useful

Back in the day, you would have used custom metrics to monitor things like external service communication and use of various common tools like Redis. These days, New Relic has better ways of monitoring that stuff, so what do we need custom metrics for? I’ve found custom metrics to be useful in four situations:

  • monitoring code that New Relic can’t see
  • monitoring code you don’t control
  • monitoring scripts
  • monitoring totally custom events

Let’s take a quick look at each of them.

Monitoring Code New Relic Can’t See

New Relic is pretty good at breaking down the performance of your various application methods in a transaction trace, but sometimes you will see something like this in a trace:

04_uninstrumented_code

It seems there is application code that New Relic couldn’t instrument for some reason. What we can do is help New Relic (and ourselves) out with some custom metrics. We need to figure out which method New Relic had trouble monitoring and hook in some custom metrics to track how long this method took to execute. This will then appear in all subsequent traces. Let’s say we have a class with a method we want to monitor via custom metrics:

class Order
  def amount
    ...
  end
end

We can begin to track the amount method like so:

require 'new_relic/agent/method_tracer'

class Order
  include ::NewRelic::Agent::MethodTracer

  def amount
    ...
  end
  add_method_tracer :amount, 'Custom/amount'
end

The second parameter to add_method_tracer is the name that this custom metric will get in the New Relic UI. Metric names are slash-separated strings and all custom metrics should start with ‘Custom/’. You could, for example, name your custom metric ‘Custom/<class_name>/<method_name>’. At this point you will begin seeing the amount method in your transaction traces in the New Relic UI. But what if our amount method is highly complex and we want to monitor the parts of it which we suspect are slow? My advice is that you should refactor your method – it’s too big – but if you can’t do that, you can instrument arbitrary chunks of code like so:

class Order
  extend ::NewRelic::Agent::MethodTracer

  def amount
    ...
    self.class.trace_execution_scoped(['Custom/amount/complex_code']) do
      ...complex code...
    end
    ...
  end
end

Now the instrumented part of the method will be reported separately in your transaction traces. If you’ve previously instrumented the method itself, your new ‘inner’ metric will be grouped under the previous one.

This is by far the most common way you’ll end up using custom metrics in your code, but let’s look at the others anyway.

Monitoring Code You Don’t Control

Often you’re using a library which, you suspect, is slowing down your application. New Relic won’t instrument any random gem for you by default, so what can you do? You can fork the gem and add some instrumentation using the method we saw above, but an even simpler solution exists – use initializers. Let’s say you’re using the foobar library which has a class Foo with a method bar that you suspect has some slow code in it. All you need to do is create an initializer foobar_instrumentation.rb, and put the following in it:

require 'new_relic/agent/method_tracer'

Foo.class_eval do
  include ::NewRelic::Agent::MethodTracer

  add_method_tracer :bar
end

As you can see, the code is very similar to what we had above. New Relic will work out a sensible name for your new custom metric based on the class and method name, and you will begin to see it in your transaction traces. Use this to work out if that suspicious library is actually making your code perform badly, but don’t keep this instrumentation around permanently. It spams unnecessary initializers in your Rails app and pollutes your New Relic UI with custom metrics that you don’t really need to track permanently.

Monitoring Scripts

One part of web applications that is often neglected is scripts. To paraphrase a presentation I did recently – it’s still production code and should be treated as such. You don’t want production code performing badly, especially if it is executed on an ongoing basis via cron jobs (or a similar mechanism that is not a background job in your system), so we can use New Relic to figure out if your scripts are slow.

You may instrument your script code using custom metrics as described above. It won’t show up in transaction traces, as it won’t be part of a transaction. What you will be able to do, though, is create a custom dashboard from the metrics you collect, which should give you an idea of whether your script is performing badly.

The other thing you can do is treat your script as a type of background job and instrument it accordingly (include NewRelic::Agent::Instrumentation::ControllerInstrumentation etc.). It will get lumped together with other background jobs in the UI, but you don’t have to worry about custom dashboards.

The only caveat with scripts is this: New Relic only sends data over the wire periodically, so with a one-off script that executes quickly you need to ensure the collected data actually gets sent, which may mean shutting down the New Relic agent manually. A good rule of thumb is to manually start the agent at the beginning of each script and to shut it down at the end:

require 'newrelic_rpm'

::NewRelic::Agent.manual_start

... codez ...

::NewRelic::Agent.shutdown

This way you won’t ever need to wonder why your data is not appearing in the UI.
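Putting the pieces together, a minimal sketch of a cron-style script instrumented as a background task could look like this (NightlyReport and run are made-up names standing in for your own script code):

require 'newrelic_rpm'

::NewRelic::Agent.manual_start

class NightlyReport
  include NewRelic::Agent::Instrumentation::ControllerInstrumentation

  def run
    # ...the actual work of the script...
  end
  add_transaction_tracer :run, category: :task
end

NightlyReport.new.run

# flush the collected data before the process exits
::NewRelic::Agent.shutdown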

Monitoring Totally Custom Events

One of the interesting things about New Relic is that it lets you take advantage of its UI and data aggregation facilities for metrics that have nothing to do with performance (in theory). For example, you may want to have some visibility on how often users sign up to your application, the frequency of sales, the total amount users pay when they make purchases etc. These are more business metrics than performance ones, but if it’s too much trouble to track these separately, you may use New Relic to do so.

New Relic lets you record custom metrics directly via two API calls:

  • record_metric
  • increment_metric

You can use record_metric to track any metric that has an amount, and increment_metric is pretty self-explanatory. So we may, for example, do this:

...
def purchase(amount)
...
  ::NewRelic::Agent.record_metric('Custom/purchase_amount', amount)
  ::NewRelic::Agent.increment_metric('Custom/purchase_count')
...
end
...

The only way for you to see these metrics in the UI would be to create some custom dashboards. I have to mention that this would be a somewhat “creative” use of the New Relic API as it’s designed with performance data in mind, but it is certainly a handy thing to know when you need to throw a quick dashboard together and don’t want to set up a bunch of extra infrastructure.


The Dangers of Monitoring Too Much

Of course, all this power has a cost. If you collect too many custom metrics, it can begin to slow down your application. It can also slow down the New Relic UI and make it difficult to interpret the data as New Relic will collapse similar metrics into a summary one. New Relic recommends keeping the number of custom metrics you collect below 2000. I have found that custom metrics are best used periodically. Instrument the code you need, use the instrumentation to solve the problem you’re having and then remove the instrumentation. This way you get to solve your performance issues and the number of custom metrics you use is unlikely to grow too high.


Conclusion

We’ve dug into the internals of the newrelic_rpm gem and saw how to tell New Relic about code that you consider a web transaction. We saw how to alter transactions on the fly, how to monitor background jobs and looked at the various situations where it makes sense to use custom metrics. There is much you can do with New Relic beyond the functionality it provides out of the box and we’re now much more able to utilize it to its full potential. But, there is always more to learn, like how to build custom dashboards out of the metrics that we capture or how to monitor infrastructure with plugins. We’ll be covering those topics and more in subsequent articles, so make sure to check back often. And as always if you have a question, want to share your own New Relic story or just want to say hi, don’t forget to leave a comment.

December 26 2013

21:07

Working With IndexedDB – Part 3

Welcome to the final part of my IndexedDB series. When I began this series my intent was to explain a technology that is not always the most… friendly one to work with. In fact, when I first tried working with IndexedDB, last year, my initial reaction was somewhat negative (“Somewhat negative” much like the Universe is “somewhat old.”). It’s been a long journey, but I finally feel somewhat comfortable working with IndexedDB and I respect what it allows. It is still a technology that can’t be used everywhere (it sadly missed being added to iOS7), but I truly believe it is a technology folks can learn and make use of today.

In this final article, we’re going to demonstrate some additional concepts that build upon the “full” demo we built in the last article. To be clear, you must be caught up on the series or this entry will be difficult to follow, so you may also want to check out part one.


Counting Data

Let’s start with something simple. Imagine you want to add paging to your data. How would you get a count of your data so you can properly handle that feature? I’ve already shown you how you can get all your data and certainly you could use that as a way to count data, but that requires fetching everything. If your local database is huge, that could be slow. Luckily the IndexedDB spec provides a much simpler way of doing it.

The count() method, run on an objectStore, will return a count of data. Like everything else we’ve done this will be asynchronous, but you can simplify the code down to one call. For our note database, I’ve written a function called doCount() that does just this:

function doCount() {

    db.transaction(["note"],"readonly").objectStore("note").count().onsuccess = function(event) {
        $("#sizeSpan").text("("+event.target.result+" Notes Total)");
    };

}

Remember – if the code above is a bit hard to follow, you can break it up into multiple blocks. See the earlier articles where I demonstrated this. The result handler is passed a result value representing the total number of objects available in the store. I modified the UI of our demo to include an empty span in the header.

<span class="navbar-brand" >Note Database <span id="sizeSpan"></span></span>
Count Example

The final thing I need to do is simply add a call to doCount when the application starts up and after any add or delete operation. Here is one example from the success handler for opening the database.

openRequest.onsuccess = function(e) {
    db = e.target.result;

    db.onerror = function(event) {
      // Generic error handler for all errors targeted at this database's
      // requests!
      alert("Database error: " + event.target.errorCode);
    };

    displayNotes();
    doCount();
};

You can find the full example in the zip you downloaded as fulldemo2. (As an FYI, fulldemo1 is the application as it was at the end of the previous article.)


Filter As You Type

For our next feature, we’re going to add a basic filter to the note list. In the earlier articles in this series I covered how IndexedDB does not allow for free form search. You can’t (well, not easily) search content that contains a keyword. But with the power of ranges, it is easy to at least support matching at the beginning of a string.

If you remember, a range allows us to grab data from a store that either begins with a certain value, ends with a value, or lies in between. We can use this to implement a basic filter against the title of our note fields. First, we need to add an index for this property. Remember, this can only be done in the onupgradeneeded event.

    if(!thisDb.objectStoreNames.contains("note")) {
        console.log("I need to make the note objectstore");
        objectStore = thisDb.createObjectStore("note", { keyPath: "id", autoIncrement:true });
        objectStore.createIndex("title", "title", { unique: false });
    }

Next, I added a simple form field to the UI:

Filter UI

Then I added a “keyup” handler to the field so I’d see immediate updates while I type.

$("#filterField").on("keyup", function(e) {
    var filter = $(this).val();
    displayNotes(filter);
});

Notice how I’m calling displayNotes. This is the same function I used before to display everything. I’m going to update it to support both a “get everything” action as well as a “get filtered” type action. Let’s take a look at it.

function displayNotes(filter) {

    var transaction = db.transaction(["note"], "readonly");  
    var content="<table class='table table-bordered table-striped'><thead><tr><th>Title</th><th>Updated</th><th>& </td></thead><tbody>";

    transaction.oncomplete = function(event) {
        $("#noteList").html(content);
    };

    var handleResult = function(event) {  
      var cursor = event.target.result;  
      if (cursor) {  
        content += "<tr data-key=\""+cursor.key+"\"><td class=\"notetitle\">"+cursor.value.title+"</td>";
        content += "<td>"+dtFormat(cursor.value.updated)+"</td>";

        content += "<td><a class=\"btn btn-primary edit\">Edit</a> <a class=\"btn btn-danger delete\">Delete</a></td>";
        content +="</tr>";
        cursor.continue();  
      }  
      else {  
        content += "</tbody></table>";
      }  
    };

    var objectStore = transaction.objectStore("note");

    if(filter) {
        //Credit: http://stackoverflow.com/a/8961462/52160
        var range = IDBKeyRange.bound(filter, filter + "\uffff");
        var index = objectStore.index("title");
        index.openCursor(range).onsuccess = handleResult;
    } else {
        objectStore.openCursor().onsuccess = handleResult;
    }

}

To be clear, the only change here is at the bottom. Opening a cursor with or without a range gives us the same type of event handler result, which is handy, as it makes this update trivial. The only complex aspect is actually building the range. Notice what I’ve done here. The input, filter, is what the user typed. So imagine this is “The”. We want to find notes with a title that begins with “The” and ends in any character. This can be done by simply setting the far end of the range to a high Unicode character. I can’t take credit for this idea. See the StackOverflow link in the code for attribution.
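For reference, the standard IDBKeyRange helpers cover all of the range shapes mentioned earlier in the series; a quick sketch:

// the standard ways to build a range
var startsAt   = IDBKeyRange.lowerBound("The");              // keys >= "The"
var endsAt     = IDBKeyRange.upperBound("The");              // keys <= "The"
var inBetween  = IDBKeyRange.bound("A", "M", false, true);   // "A" <= key < "M"
var exactMatch = IDBKeyRange.only("The");                    // key === "The"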

You can find this demo in the fulldemo3 folder. Note that this is using a new database so if you’ve run the previous examples, this one will be empty when you first run it.

While this works, it has one small problem. Imagine a note titled, “Saints Rule.” (Because they do. Just saying.) Most likely you will try to search for this by typing “saints”. If you do this, the filter won’t work because it is case sensitive. How do we get around it?

One way is to simply store a copy of our title in lowercase. This is relatively easy to do. First, I modified the index to use a new property called titlelc.

        objectStore.createIndex("titlelc", "titlelc", { unique: false });

Then I modified the code that stores notes to create a copy of the field:

$("#saveNoteButton").on("click",function() {

    var title = $("#title").val();
    var body = $("#body").val();
    var key = $("#key").val();
    var titlelc = title.toLowerCase();

    var t = db.transaction(["note"], "readwrite");

    if(key === "") {
        t.objectStore("note")
                        .add({title:title,body:body,updated:new Date(),titlelc:titlelc});
    } else {
        t.objectStore("note")
                        .put({title:title,body:body,updated:new Date(),id:Number(key),titlelc:titlelc});
    }

Finally, I modified the search to simply lowercase user input. That way if you enter “Saints” it will work just as well as entering “saints.”

        filter = filter.toLowerCase();
        var range = IDBKeyRange.bound(filter, filter + "\uffff");
        var index = objectStore.index("titlelc");

That’s it. You can find this version as fulldemo4.


Working With Array Properties

For our final improvement, I’m going to add a new feature to our Note application – tagging. This will let you add any number of tags (think keywords that describe the note) so that you can later find other notes with the same tag. Tags will be stored as an array. That by itself isn’t such a big deal. I mentioned in the beginning of this series that you could easily store arrays as properties. What is a bit more complex is handling the search. Let’s begin by making it so you can add tags to a note.

First, I modified my note form to have a new input field. This will allow the user to enter tags separated by a comma:

Tag UI

I can save this by simply updating my code that handles Note creation/updating.

    var tags = [];
    var tagString = $("#tags").val();
    if(tagString.length) tags = tagString.split(",");

Notice that I’m defaulting the value to an empty array. I only populate it if you typed something in. Saving this is as simple as appending it to the object we pass to IndexedDB:

    if(key === "") {
        t.objectStore("note")
                        .add({title:title,body:body,updated:new Date(),titlelc:titlelc,tags:tags});
    } else {
        t.objectStore("note")
                        .put({title:title,body:body,updated:new Date(),id:Number(key),titlelc:titlelc,tags:tags});
    }

That’s it. If you write a few notes and open up Chrome’s Resources tab, you can actually see the data being stored.

Chrome DevTools and the Resource View

Now let’s add tags to the view when you display a note. For my application, I decided on a simple use case for this. When a note is displayed, if there are tags I’ll list them out. Each tag will be a link. If you click that link, I’ll show you a list of related notes using the same tag. Let’s look at that logic first.

function displayNote(id) {
    var transaction = db.transaction(["note"]);  
    var objectStore = transaction.objectStore("note");  
    var request = objectStore.get(id);

    request.onsuccess = function(event) {  
        var note = request.result;
        var content = "<h2>" + note.title + "</h2>"; 
        if(note.tags.length > 0) {
            content += "<strong>Tags:</strong> ";
            note.tags.forEach(function(elm,idx,arr) {
                content += "<a class='tagLookup' title='Click for Related Notes' data-noteid='"+note.id+"'> " + elm + "</a> ";  
            });
            content += "<br/><div id='relatedNotesDisplay'></div>";
        }
        content += "<p>" + note.body + "</p>";
        $noteDetail.html(content).show();
        $noteForm.hide();           
    };  
}

This function (a new addition to our application) handles the note display code formerly bound to the table cell click event. I needed a more abstract version of that code, so this fulfills the purpose. For the most part it’s the same, but note the logic to check the length of the tags property. If the array is not empty, the content is updated to include a simple list of tags. Each one is wrapped in a link with a particular class I’ll use for lookup later. I’ve also added a div specifically to handle that search.

A note with tags

At this point, I’ve got the ability to add tags to a note as well as display them later. I’ve also planned to allow the user to click those tags so they can find other notes using the same tag. Now here comes the complex part.

You’ve seen how you can fetch content based on an index. But how does that work with array properties? Turns out – the spec has a specific flag for dealing with this: multiEntry. When creating an array-based index, you must set this value to true. Here is how my application handles it:

objectStore.createIndex("tags","tags", {unique:false,multiEntry:true});

That handles the storage aspect well. Now let’s talk about search. Here is the click handler for the tag link class:

$(document).on("click", ".tagLookup", function(e) {
    var tag = e.target.text;
    var parentNote = $(this).data("noteid");
    var doneOne = false;
    var content = "<strong>Related Notes:</strong><br/>";

    var transaction = db.transaction(["note"], "readonly");
    var objectStore = transaction.objectStore("note");
    var tagIndex = objectStore.index("tags");
    var range = IDBKeyRange.only(tag);

    transaction.oncomplete = function(event) {
        if(!doneOne) {
            content += "No other notes used this tag."; 
        }
        content += "<p/>";
        $("#relatedNotesDisplay").html(content);
    };

    var handleResult = function(event) {
        var cursor = event.target.result;
        if(cursor) {
            if(cursor.value.id != parentNote) {
                doneOne = true;
                content += "<a class='loadNote' data-noteid='"+cursor.value.id+"'>" + cursor.value.title + "</a><br/> ";
            }
            cursor.continue();
        }           
    };

    tagIndex.openCursor(range).onsuccess = handleResult;

});

There’s quite a bit here – but honestly – it is very similar to what we’ve discussed before. When you click a tag, my code begins by grabbing the text of the link for the tag value. I create my transaction, object store, and index objects as you’ve seen before. The range is new this time. Instead of creating a range from something and to something, we can use the only() API to specify that we want a range of only one value. And yes – that seemed weird to me as well. But it works great. You can see then that we open the cursor and iterate over the results as before. There is a bit of additional code to handle cases where there may be no matches. I also take note of the original note, i.e. the one you are viewing now, so that I don’t display it in the related list. And that’s really it. I’ve got one last bit of code that handles click events on those related notes so you can view them easily:

$(document).on("click", ".loadNote", function(e) {
    var noteId = $(this).data("noteid");
    displayNote(noteId);
});

You can find this demo in the folder fulldemo5.


Conclusion

I sincerely hope that this series has been helpful to you. As I said in the beginning, IndexedDB was not a technology I enjoyed using. The more I worked with it, and the more I began to wrap my head around how it did things, the more I began to appreciate how much this technology could help us as web developers. It definitely has room to grow, and I can definitely see people preferring to use wrapper libraries to simplify things, but I think the future for this feature is great!

December 24 2013

21:18

Managing the Asynchronous Nature of Node.js

Node.js allows you to create apps fast and easily. But due to its asynchronous nature, it may be hard to write readable and manageable code. In this article I’ll show you a few tips on how to achieve that.


Callback Hell or the Pyramid of Doom

Node.js is built in a way that forces you to use asynchronous functions. That means callbacks, callbacks and even more callbacks. You’ve probably seen or even written yourself pieces of code like this:

app.get('/login', function (req, res) {
	sql.query('SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ], function (error, rows) {
		if (error) {
			res.writeHead(500);
			return res.end();
		}
		if (rows.length < 1) {
			res.end('Wrong username!');
		} else {
			sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ], function (error, rows) {
				if (error) {
					res.writeHead(500);
					return res.end();
				}
				if (rows.length < 1) {
					res.end('Wrong password!');
				} else {
					sql.query('SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ], function (error, rows) {
						if (error) {
							res.writeHead(500);
							return res.end();
						}
						req.session.username = req.param('username');
						req.session.data = rows[0];
						res.redirect('/userarea');
					});
				}
			});
		}
	});
});

This is actually a snippet straight from one of my first Node.js apps. If you’ve done something more advanced in Node.js you probably understand everything, but the problem here is that the code is moving to the right every time you use some asynchronous function. It becomes harder to read and harder to debug. Luckily, there are a few solutions for this mess, so you can pick the right one for your project.


Solution 1: Callback Naming and Modularization

The simplest approach would be to name every callback (which will help you debug the code) and split all of your code into modules. The login example above can be turned into a module in a few simple steps.

The Structure

Let’s start with a simple module structure. To avoid the situation above, where you just split the mess into smaller messes, let’s have it be a class:

var util = require('util');
var EventEmitter = require('events').EventEmitter;

function Login(username, password) {
	function _checkForErrors(error, rows, reason) {
		
	}
	
	function _checkUsername(error, rows) {
		
	}
	
	function _checkPassword(error, rows) {
		
	}
	
	function _getData(error, rows) {
		
	}
	
	function perform() {
		
	}
	
	this.perform = perform;
}

util.inherits(Login, EventEmitter);

The class is constructed with two parameters: username and password. Looking at the sample code, we need three functions: one to check if the username is correct (_checkUsername), another to check the password (_checkPassword) and one more to return the user-related data (_getData) and notify the app that the login was successful. There is also a _checkForErrors helper, which will handle all errors, and a perform function, which will start the login procedure (and is the only public function in the class). Finally, we inherit from EventEmitter (required at the top of the module alongside util) to simplify the usage of this class.

The Helper

The _checkForErrors function will check if any error occurred or if the SQL query returns no rows, and emit the appropriate error (with the reason that was supplied):

function _checkForErrors(error, rows, reason) {
	if (error) {
		this.emit('error', error);
		return true;
	}
	
	if (rows.length < 1) {
		this.emit('failure', reason);
		return true;
	}
	
	return false;
}

It also returns true or false, depending on whether an error occurred or not.

Performing the Login

The perform function will have to do only one operation: perform the first SQL query (to check if the username exists) and assign the appropriate callback:

function perform() {
	sql.query('SELECT 1 FROM users WHERE name = ?;', [ username ], _checkUsername);
}

I assume you have your SQL connection accessible globally in the sql variable (just to simplify, discussing if this is a good practice is beyond the scope of this article). And that’s it for this function.

Checking the Username

The next step is to check if the username is correct, and if so fire the second query – to check the password:

function _checkUsername(error, rows) {
	if (_checkForErrors(error, rows, 'username')) {
		return false;
	} else {
		sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ username, password ], _checkPassword);
	}
}

Pretty much the same code as in the messy sample, with the exception of error handling.

Checking the Password

This function is almost exactly the same as the previous one, the only difference being the query called:

function _checkPassword(error, rows) {
	if (_checkForErrors(error, rows, 'password')) {
		return false;
	} else {
		sql.query('SELECT * FROM userdata WHERE name = ?;', [ username ], _getData);
	}
}

Getting the User-Related Data

The last function in this class will get the data related to the user (the optional step) and fire a success event with it:

function _getData(error, rows) {
	if (_checkForErrors(error, rows)) {
		return false;
	} else {
		this.emit('success', rows[0]);
	}
}

Final Touches and Usage

The last thing to do is to export the class. Add this line after all of the code:

module.exports = Login;

This will make the Login class the only thing that the module will export. It can be later used like this (assuming that you’ve named the module file login.js and it’s in the same directory as the main script):

var Login = require('./login.js');

...

app.get('/login', function (req, res) {
	var login = new Login(req.param('username'), req.param('password'));
	login.on('error', function (error) {
		res.writeHead(500);
		res.end();
	});
	login.on('failure', function (reason) {
		if (reason == 'username') {
			res.end('Wrong username!');
		} else if (reason == 'password') {
			res.end('Wrong password!');
		}
	});
	login.on('success', function (data) {
		req.session.username = req.param('username');
		req.session.data = data;
		res.redirect('/userarea');
	});
	login.perform();
});

There are a few more lines of code here, but the readability has increased quite noticeably. Also, this solution does not use any external libraries, which makes it perfect if someone new comes to your project.

That was the first approach, let’s proceed to the second one.


Solution 2: Promises

Using promises is another way of solving this problem. A promise (as you can read in the link provided) “represents the eventual value returned from the single completion of an operation”. In practice, it means that you can chain the calls to flatten the pyramid and make the code easier to read.

We will use the Q module, available in the NPM repository.

Q in a Nutshell

Before we start, let me introduce you to Q. For static classes (modules), we will primarily use the Q.nfcall function. It helps us convert any function following Node.js’s callback pattern (where the parameters of the callback are the error and the result) into a promise. It’s used like this:

Q.nfcall(http.get, options);

It’s pretty much like Function.prototype.call. You can also use Q.nfapply, which resembles Function.prototype.apply:

Q.nfapply(fs.readFile, [ 'filename.txt', 'utf-8' ]);

Also, when we create the promise, we add each step with the then(stepCallback) method, catch the errors with catch(errorCallback) and finish with done().

In this case, since the sql object is an instance, not a static class, we have to use Q.ninvoke or Q.npost, which are similar to the above. The difference is that we pass the instance that we want to work with as the first argument and the method’s name as a string in the second one, to avoid the method being unbound from the instance.
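To make the overall flow concrete before we convert the login code, here is a minimal, self-contained sketch of a Q chain (filename.txt is just a placeholder):

var Q = require('q');
var fs = require('fs');

Q.nfcall(fs.readFile, 'filename.txt', 'utf-8')
	.then(function (contents) {
		// first step: receives the result of readFile
		return contents.length;
	})
	.then(function (length) {
		// second step: receives whatever the previous step returned
		console.log('File length:', length);
	})
	.catch(function (error) {
		// any error thrown or passed along the chain ends up here
		console.error(error);
	})
	.done();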

Preparing the Promise

The first thing to do is to execute the first step, using Q.nfcall or Q.nfapply (use the one that you like more, there is no difference underneath):

var Q = require('q');

...
app.get('/login', function (req, res) {
	Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ])
});

Notice the lack of a semicolon at the end of the line – the function calls will be chained, so it cannot be there. We are just calling sql.query as in the messy example, but we omit the callback parameter – it’s handled by the promise.

Checking the Username

Now we can create the callback for the SQL query, it will be almost identical to the one in the “pyramid of doom” example. Add this after the Q.ninvoke call:

.then(function (rows) {
	if (rows.length < 1) {
		res.end('Wrong username!');
	} else {
		return Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ]);
	}
})

As you can see, we are attaching the callback (the next step) using the then method. Also, in the callback we omit the error parameter, because we will catch all of the errors later. We manually check if the query returned something, and if so we return the next promise to be executed (again, no semicolon because of the chaining).

Checking the Password

As with the modularization example, checking the password is almost identical to checking the username. This should go right after the last then call:

.then(function (rows) {
	if (rows.length < 1) {
		res.end('Wrong password!');
	} else {
		return Q.ninvoke(sql, 'query', 'SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ]);
	}
})

Getting the User-Related Data

The last step will be the one where we put the user’s data in the session. Once more, the callback is not much different from the messy example:

.then(function (rows) {
	req.session.username = req.param('username');
	req.session.data = rows[0];
	res.redirect('/userarea');
})

Checking for Errors

When using promises and the Q library, all of the errors are handled by the callback set using the catch method. Here, we are only sending the HTTP 500 no matter what the error is, like in the examples above:

.catch(function (error) {
	res.writeHead(500);
	res.end();
})
.done();

After that, we must call the done method to “make sure that, if an error doesn’t get handled before the end, it will get rethrown and reported” (from the library’s README). Now our beautifully flattened code should look like this (and behave just like the messy one):

var Q = require('q');

...
app.get('/login', function (req, res) {
	Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ])
	.then(function (rows) {
		if (rows.length < 1) {
			res.end('Wrong username!');
		} else {
			return Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ]);
		}
	})
	.then(function (rows) {
		if (rows.length < 1) {
			res.end('Wrong password!');
		} else {
			return Q.ninvoke(sql, 'query', 'SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ]);
		}
	})
	.then(function (rows) {
		req.session.username = req.param('username');
		req.session.data = rows[0];
		res.redirect('/userarea');
	})
	.catch(function (error) {
		res.writeHead(500);
		res.end();
	})
	.done();
});

The code is much cleaner, and it involved less rewriting than the modularization approach.


Solution 3: Step Library

This solution is similar to the previous one, but it’s simpler. Q is a bit heavy, because it implements the whole promises idea. The Step library is there only for the purpose of flattening the callback hell. It’s also a bit simpler to use, because you just call the only function that is exported from the module, pass all your callbacks as the parameters and use this in place of every callback. So the messy example can be converted into this, using the Step module:

var step = require('step');

...

app.get('/login', function (req, res) {
	step(
		function start() {
			sql.query('SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ], this);
		},
		function checkUsername(error, rows) {
			if (error) {
				res.writeHead(500);
				return res.end();
			}
			if (rows.length < 1) {
				res.end('Wrong username!');
			} else {
				sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ], this);
			}
		},
		function checkPassword(error, rows) {
			if (error) {
				res.writeHead(500);
				return res.end();
			}
			if (rows.length < 1) {
				res.end('Wrong password!');
			} else {
				sql.query('SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ], this);
			}
		},
		function (error, rows) {
			if (error) {
				res.writeHead(500);
				return res.end();
			}
			req.session.username = req.param('username');
			req.session.data = rows[0];
			res.redirect('/userarea');
		}
	);
});

The drawback here is that there is no common error handler. Although any exceptions thrown in one callback are passed to the next one as the first parameter (so the script won’t go down because of an uncaught exception), having one handler for all errors is convenient most of the time – a small workaround is sketched below.
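If the repeated error checks bother you, one way to approximate a common handler is a small helper that every step calls first; a minimal sketch (handleError is a made-up helper, not part of the Step library, and res comes from the surrounding app.get closure), shown here for the checkUsername step:

function handleError(error, res) {
	if (!error) return false;
	res.writeHead(500);
	res.end();
	return true;
}

// then, inside the step callbacks from the example above:
function checkUsername(error, rows) {
	if (handleError(error, res)) return;
	// ...the rest of the step stays the same...
}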


Which One to Choose?

That’s pretty much a personal choice, but to help you pick the right one, here is a list of pros and cons of each approach:

Modularization:

Pros:

  • No external libraries
  • Helps to make the code more reusable

Cons:

  • More code
  • A lot of rewriting if you’re converting an existing project

Promises (Q):

Pros:

  • Less code
  • Only a little rewriting if applied to an existing project

Cons:

  • You have to use an external library
  • Requires a bit of learning

Step Library:

Pros:

  • Easy to use, no learning required
  • Pretty much copy-and-paste if converting an existing project

Cons:

  • No common error handler
  • A bit harder to indent the step function properly

Conclusion

As you can see, the asynchronous nature of Node.js can be managed and the callback hell can be avoided. I’m personally using the modularization approach, because I like to have my code well structured. I hope these tips will help you write more readable code and debug your scripts more easily.

December 23 2013

16:24

BDD With Behat

The BDD PHP framework Behat allows you to test your PHP applications using human-readable sentences to write features and scenarios about how your applications should behave. We can then run these tests to see if our application is behaving as expected. Let’s spend about 15 minutes quickly going over Behat’s installation process and learn how we can test our PHP application’s behavior using the basics.


Installation

To begin testing with Behat, we just need to install it on our local development machine. I’ll be using Composer, but you can take a look at the documentation for additional installation methods.

Within your PHP application (I have a blank PHP app named phpbehat created in advance) create a composer.json file with the following code to load in Behat:

{
	"require": {
		"behat/behat": "2.4.*@stable"
	},
	"minimum-stability": "dev",
	"config": {
		"bin-dir": "bin/"
	}
}

In your terminal we can install it using the following command:

composer update

We should now be able to run the bin/behat command to work with Behat.
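If your project doesn’t have a features directory yet, Behat can scaffold one (including a skeleton FeatureContext class) with its --init switch:

bin/behat --init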


Creating Features

We always begin by creating a new feature. A feature is something that we can use to describe a feature of our application and then implement it to get the test to pass.

A feature, at its simplest, consists of:

  • Human-readable, plain text.
  • Uses a .feature extension.
  • Contains the feature’s benefit, role, and the feature itself.
  • May hold a list of scenarios.

Now I can’t show you how to test all possible features for an application, as that would take far too long. But what I can do is show you how to go about writing a basic feature and you can adapt this code to make it work for your specific application.

Testing the Phpadder Class

To avoid complicating the learning process, let’s create a very simple Phpadder.php file which contains a couple methods that we can use to add two numbers together and display their sum.

class Phpadder {

	private $a;
	private $b;
	public $sum;

	public function __construct($a, $b) {
		$this->a = $a;
		$this->b = $b;
	}

	public function add() {
		$this->sum = $this->a + $this->b;
	}

	public function display() {
		return $this->sum;
	}
}

This creates a new class named Phpadder. We have a few properties to hold the first ($a) and second ($b) numbers to be added, as well as a $sum property to hold their sum.

Next, we have our constructor which initializes our properties. Then we have an add method to add the two numbers together and finally a display method to display the sum of the two numbers.

Adder Feature

Let’s now test this Phpadder class. Create a new .feature file along with the following code:

# features/phpadder.feature
Feature: adder
	In order to display the sum of two numbers
	As anybody
	I need to provide two numbers

We start out by giving our feature a name of adder. Then we specify, on three lines, the feature’s benefit, role, and finally the feature itself. It should be fairly self-explanatory.

Now that we have our feature, let’s start writing our scenarios.


Creating Scenarios

Anytime we create a feature we can define scenarios that describe how that feature behaves given certain conditions. Scenarios follow the same basic layout, consisting of the scenario’s description, a context, an event, and an outcome.

Let’s create a scenario to test out our Phpadder application:

Scenario: Display the sum of two provided numbers
	Given I have the number 50 and the number 25
	When I add them together
	Then I should get 75

We begin with our scenario description; we then list out that we start with the two numbers 50 and 25, and that when they are added together the displayed sum should be 75.

You can also have multiple contexts, events, and outcomes by using the keywords And and But. Here’s a modified example of the previous scenario where we’ll use And:

Scenario: Display the sum of two provided numbers
	Given I have the number 50 and the number 25
	And I have a third number of 25
	When I add them together
	Then I should get 100

Creating Step Definitions

A step is nothing more than a PHP function, which is made up of a keyword, a regular expression, and a callback function. Each statement within your scenario will be matched to a step. These steps define what should happen when one of your statements is called within a scenario. You store all of your steps within the features/bootstrap/FeatureContext.php file.

The “Given” Step

In our FeatureContext.php file (you may need to create this), we need to write a step for each of our statements, using its name as the function name. We’ll start with the "Given I have the number" statement:

# features/bootstrap/FeaturesContext.php

require("Phpadder.php");

use Behat\Behat\Context\BehatContext, 
	Behat\Behat\Exception\PendingException;

use Behat\Gherkin\Node\PyStringNode,
	Behat\Gherkin\Node\TableNode;
	
class FeatureContext extends BehatContext {

	private $Adder;
	
	/**
	 * @Given /^I have the number (\d+) and the number (\d+)$/
	 */
	public function iHaveTheNumberAndTheNumber($a, $b) {
		$this->Adder = new Phpadder($a, $b);
	}

First, we create a new PHP file, require our Phpadder.php file (the file that we’re testing), and then pull in a few Behat classes with use statements right after.

Next, we create our FeatureContext class and have it extend the BehatContext class.

Our class has only one property, $Adder, which will hold a Phpadder object.

Finally, we create a method for our first step using the same name as the statement. Our step has a regular expression, found within the method’s comment. This regular expression is used to match the step to a scenario statement. Within this method we simply assign a new Phpadder object to our $this->Adder property so that the rest of our methods have access to the same Phpadder object.

Now, you’ve probably noticed the @Given keyword within your function’s comment; this is actually an annotation that allows the different types of steps to be identified. You can use @Given, @When, and @Then.

Now we just need to write the last two steps.

The “When” Step

/**
 * @When /^I add them together$/
 */
public function iAddThemTogether() {
	$this->Adder->add();
}

This step simply uses our $this->Adder object’s add method to add the two numbers together.

The “Then” Step

/** 
 * @Then /^I should get (\d+)$/ 
 */
public function iShouldGet($sum) {
	if ($this->Adder->sum != $sum) {
		throw new Exception("Actual sum: ".$this->Adder->sum);
	}
	$this->Adder->display();
}

This step starts off by checking if the actual sum (retrieved using our $this->Adder object and its sum property) is not equal to the expected sum. If this evaluates to true that means we need to have Behat show a failure. To do so, we’ll just throw a new exception displaying the actual sum so we can compare. Otherwise, we call our display method.

Time to run the tests.


Running the Tests

Now that we have our features, scenarios, and steps laid out, let’s run our tests using the following command:

bin/behat

You should see the following success messages inside of your terminal:

testing-success

You can ensure that your tests are running correctly, by simply breaking something in your Phpadder.php file so that it doesn’t run quite the same way your test expects it to. For example, if we change the add method to instead use subtraction, like so:

public function add() {
	$this->sum = $this->a - $this->b;
}

And then we rerun our tests: bin/behat. You can see in our terminal, we now have a failing test, because it’s no longer adding our numbers but subtracting them:

testing-fail

Conclusion

In conclusion, you can follow a small four step process for testing your PHP apps using Behat:

  1. Define a Feature
  2. Define a Scenario
  3. Define Step Definitions
  4. Run the tests using bin/behat

So, with just that small amount of code, we should now have a basic understanding of how to work with Behat to test our PHP applications. For more information about working with Behat, please check out their documentation.

December 20 2013

18:50

Intro to Shadow DOM

Take any modern web page and you will notice that it invariably contains content stitched together from a variety of different sources; it may include the social sharing widgets from Twitter or Facebook or a YouTube video playing widget, it may serve a personalized advertisement from some ad-server or it may include some utility scripts or styles from a third party library hosted over CDN and so on. And if everything is HTML based (as is preferred these days) there is a high probability of collisions between the markup, scripts or styles served from various sources. Generally, namespaces are employed to prevent these collisions, which solves the problem to some extent, but they don’t offer Encapsulation.

Encapsulation is one of the pillars on which the Object Oriented Programming paradigm was founded and is normally used to restrict the internal representation of an object from the outside world.

Coming back to our problem, we can surely encapsulate the JavaScript code using closures or the module pattern, but can we do the same for our HTML markup? Imagine that we have to build a UI widget; can we hide the implementation details of our widget from the JavaScript and CSS code included on the page that consumes it? Alternatively, can we prevent the consuming code from messing up our widget's functionality or look and feel?


Shadow DOM to the Rescue

The only existing solution that creates a boundary between the code you write and the code that consumes it is an ugly one, operating through a bulky and restrictive iframe, which brings with it another set of problems. So are we forced to adopt this approach every time?

Not anymore! Shadow DOM provides us with an elegant way to overlay the normal DOM subtree with a special document fragment that contains another subtree of nodes, which are impregnable to outside scripts and styles. The interesting part is that it's not something new! Various browsers have already been using this methodology to implement native widgets like date pickers, sliders, and audio and video players.

Enabling Shadow DOM

At the time of this writing, the current version of Chrome (v29) supports inspecting Shadow DOM using Chrome DevTools. Open DevTools and click on the cog button at the bottom right of the screen to open the Settings panel, scroll down a bit, and you will see a checkbox for showing Shadow DOM.

Turn on Shadow DOM

Now that we have enabled our browser, let's check out the internals of the default audio player. Just type:

<audio width="300" height="32" src="http://developer.mozilla.org/@api/deki/files/2926/=AudioTest_(1).ogg" autoplay="autoplay" controls="controls">
 Your browser does not support the HTML5 Audio.
 </audio>

into your HTML markup. It shows the following native audio player in supported browsers:

audio_player

Now go ahead and inspect the audio player widget that you just created.

Shadow DOM of Native Date Widget

Wow! It shows the internal representation of the audio player, which was otherwise hidden. As we can see, the audio element uses a document fragment to hold the internal contents of the widget and appends that to the container element (which is known as the Shadow Host).

Shadow Host & Shadow Root

  • Shadow Host: is the DOM element which is hosting the Shadow DOM subtree or it is the DOM node which contains the Shadow Root.
  • Shadow Root: is the root of the DOM subtree containing the shadow DOM nodes. It is a special node, which creates the boundary between the normal DOM nodes and the Shadow DOM nodes. It is this boundary, which encapsulates the Shadow DOM nodes from any JavaScript or CSS code on the consuming page.
  • Shadow DOM: allows for multiple DOM subtrees to be composed into one larger tree. The following images from the W3C working draft best explain the concept of overlaying the nodes. This is how it looks before the Shadow Root's contents are attached to the Shadow Host element:
    Normal Document Tree & Shadow DOM Subtrees

    When rendered, the Shadow tree takes the place of the Shadow Host's content.

    Composition Complete

    This process of overlaying the nodes is often referred to as Composition.

  • Shadow Boundary: is denoted by the dotted line in the image above. This denotes the separation between the normal DOM world and the Shadow DOM world. The scripts from either side cannot cross this boundary and create havoc on the other side.

Hello Shadow DOM World

Enough chit-chat, I say. Let's get our hands dirty by writing some code. Suppose we have the following markup, which shows a simple welcome message.

<div id="welcomeMessage">Welcome to My World</div>

Add the following JavaScript code or use this Fiddle:


var shadowHost = document.querySelector("#welcomeMessage");
var shadowRoot = shadowHost.webkitCreateShadowRoot();
shadowRoot.textContent = "Hello Shadow DOM World";

Here we create a Shadow Root using the webkitCreateShadowRoot() function, attach it to a Shadow Host and then simply change the content.

Notice the vendor-specific prefix webkit before the function name. This indicates that this functionality is currently supported on some webkit-based browsers only.

If you go ahead and run this example in a supported browser, you will see "Hello Shadow DOM World" instead of "Welcome to My World", as the Shadow DOM nodes have overshadowed the normal ones.
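
Since this API was still vendor-prefixed and not available everywhere, you may want to feature-detect before using it. Below is a minimal sketch; the unprefixed createShadowRoot name is an assumption for browsers that later dropped the prefix.

var host = document.querySelector("#welcomeMessage");
// Prefer the unprefixed method if it exists, otherwise fall back
// to the webkit-prefixed one used throughout this article.
var createRoot = host.createShadowRoot || host.webkitCreateShadowRoot;

if (createRoot) {
    var root = createRoot.call(host);
    root.textContent = "Hello Shadow DOM World";
} else {
    console.log("Shadow DOM is not supported in this browser.");
}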

Disclaimer: As some of you may notice, we’re mixing the markup with scripts, which is generally not recommended and Shadow DOM is no exception. We have deliberately avoided the use of templates so early in the game in order to avoid any confusion. Otherwise Shadow DOM does provide an elegant solution to this problem and we will get there pretty soon.


Respecting Shadow Boundary

If you try and access the content of the rendered tree using JavaScript, like so:

var shadowHost = document.querySelector("#welcomeMessage");
var shadowRoot = shadowHost.webkitCreateShadowRoot();
shadowRoot.textContent = "Hello Shadow DOM World";

console.log(shadowHost.textContent);
 // Prints "Welcome to My World" as the shadow DOM nodes are encapsulated and cannot be accessed by JavaScript

You will get the original content “Welcome to My World” and not the content which is actually rendered on the page, as the Shadow DOM tree is encapsulated from any scripts. This also means that the widget that you create using Shadow DOM is safe from any unwanted/conflicting scripts already present in the page.
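
Note that this encapsulation applies to code that only has a reference to the Shadow Host; if you keep hold of the Shadow Root object you created, you can still read and modify the shadow tree through it. A small sketch continuing the example above:

var shadowHost = document.querySelector("#welcomeMessage");
var shadowRoot = shadowHost.webkitCreateShadowRoot();
shadowRoot.textContent = "Hello Shadow DOM World";

// The page only sees the Shadow Host and its original (light DOM) content...
console.log(shadowHost.textContent);  // "Welcome to My World"

// ...but the holder of the Shadow Root reference can still reach inside.
console.log(shadowRoot.textContent);  // "Hello Shadow DOM World"
shadowRoot.textContent = "Updated through the Shadow Root reference";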

Styles Encapsulation

Similarly, CSS selectors cannot cross the shadow boundary. Check the following code, where we apply a red color to the list items; that style is only applied to the nodes that are part of the parent page, and the list items that are part of the Shadow Root are not affected by it.

<div class="outer">
  <div id="welcomeMessage">Welcome to My World</div>
  <div class="normalTree">Sample List
  <ul>
      <li>Item 1</li>
      <li>Item 2</li>
  </ul>
  </div>
</div>
<style>
   div.outer li {  
      color: red;  
   } 
   div.outer{  
      border: solid 1px;  padding: 1em; 
   }
</style>
<script type="text/javascript">
    var shadowHost = document.querySelector("#welcomeMessage");
    var shadowRoot = shadowHost.webkitCreateShadowRoot();
    shadowRoot.innerHTML = ["<div class='shadowChild'>",
                            "Shadow DOM offers us Encapsulation from",
                            "<ul>",
                            "<li>Scripts</li>",
                            "<li>Styles</li>",
                            "</ul>",
                            "</div>"
                            ].join('');
</script>

You can see the code in action on Fiddle. This encapsulation applies even if we reverse the direction of traversal. Any styles defined inside the Shadow DOM do not affect the parent document and remain scoped to the Shadow Root only. Check this Fiddle for an example, where we apply a blue color to the list items in the Shadow DOM, but the parent document's list items are unaffected.
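
For instance, a style element added inside the Shadow Root only affects the shadow nodes. Here is a rough sketch along the lines of that Fiddle; the markup inside the Shadow Root is illustrative.

var shadowHost = document.querySelector("#welcomeMessage");
var shadowRoot = shadowHost.webkitCreateShadowRoot();
shadowRoot.innerHTML = ["<style>",
                        "li { color: blue; }", // scoped: only the shadow list items turn blue
                        "</style>",
                        "<ul>",
                        "<li>Shadow Item 1</li>",
                        "<li>Shadow Item 2</li>",
                        "</ul>"
                        ].join('');
// List items elsewhere in the parent document keep their own styling.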

There is, however, one notable exception here; Shadow DOM gives us the flexibility to style the Shadow Host, the DOM node which is holding the Shadow DOM. Technically it lies outside the Shadow boundary and is not a part of the Shadow Root, but using the @host rule, one can specify styles that apply to the Shadow Host, as we have done for the welcome message in the example below.

<div id="welcomeMessage">Welcome to My World</div>
<script type="text/javascript">
  var shadowHost = document.querySelector("#welcomeMessage");
  var shadowRoot = shadowHost.webkitCreateShadowRoot();
  shadowRoot.innerHTML = ["<style>",
                          "@host{ ",
                             "#welcomeMessage{ ",
                                "font-size: 28px;",
                                "font-family:cursive;",
                                "font-weight:bold;",
                             "}",
                          "}",
                          "</style>",
                          "<content select=''></content>"
                          ].join('');
</script>

Check this Fiddle as we style the Shadow Host’s welcome message using the styles defined in Shadow DOM.

Creating Style Hooks

As a widget developer, I might want the user of my widget to be able to style certain elements. This is achievable by plugging a hole into the shadow boundary using custom pseudo-elements. This is similar to how some browsers create style hooks for developers to style some internal elements of a native widget. For example, to style the thumb and the track of the native slider you can use the ::-webkit-slider-thumb and ::-webkit-slider-runnable-track pseudo-elements as follows:


input[type=range]{
    -webkit-appearance:none;
 }
 input[type=range]::-webkit-slider-thumb {
    -webkit-appearance:none;
    height:12px;
    width:12px;
    border-radius:6px;
    background:yellow;
    position:relative;
    top:-5px;
 }
 input[type=range]::-webkit-slider-runnable-track {
    background:red;
    height:2px;
 }

Fork this Fiddle and apply your own styles to it!
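
For your own widgets, the experimental mechanism at the time was to expose a custom pseudo-element by giving a node inside the Shadow DOM a pseudo attribute (custom names had to start with x-), which the consuming page could then style through the host. Treat the following as a rough sketch of that draft API rather than a stable recipe.

<div id="fancyWidget">My Widget</div>
<style>
   /* The consuming page styles the exposed part through the host element. */
   #fancyWidget::x-label {
      color: green;
      font-weight: bold;
   }
</style>
<script type="text/javascript">
    var shadowRoot = document.querySelector("#fancyWidget").webkitCreateShadowRoot();
    // The pseudo attribute punches a styling hole through the shadow boundary.
    shadowRoot.innerHTML = "<span pseudo='x-label'>Styled by the page</span>";
</script>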

Event Re-Targeting

If an event that originates from one of the nodes in Shadow DOM crosses the Shadow Boundary then it is re-targeted to refer to the Shadow Host in order to maintain encapsulation. Consider the following code:


<input id="normalText" type="text" value="Normal DOM Text Node" />
<div id="shadowHost"></div>
<template>
    <input id="shadowText" type="text" value="Shadow DOM Node" />
</template>
<script type="text/javascript">
    var shadowHost = document.querySelector('#shadowHost');
    var shadowRoot = shadowHost.webkitCreateShadowRoot();
    var template = document.querySelector('template');
    shadowRoot.appendChild(template.content.cloneNode(true));
    template.remove();
    document.addEventListener('click', function(e) { 
                                 console.log(e.target.id + ' clicked!'); 
                              });
</script>

It renders two text input elements, one via the normal DOM and another via the Shadow DOM, and then listens for a click event on the document. Now, when the second text input is clicked, the event originates from inside the Shadow DOM, and when it crosses the Shadow Boundary it is modified so that its target becomes the Shadow Host's <div> element instead of the <input> text input. We have also introduced a new <template> element here; this is conceptually similar to client-side templating solutions like Handlebars and Underscore, but is not as evolved and lacks broad browser support. Having said that, using templates is the ideal way to author Shadow DOM content, rather than building markup strings in script as we have done so far throughout this article.


Separation of Concerns

We already know that it's always a good idea to separate actual content from presentation; the Shadow DOM should not embed any content that is ultimately shown to the user. Rather, that content should always be present on the original page and not hidden inside the Shadow DOM template. When composition occurs, this content is then projected into the appropriate insertion points defined in the Shadow DOM's template. Let's rewrite the Hello World example, keeping in mind the above separation – a live example can be found on Fiddle.

<div id="welcomeMessage">Welcome to Shadow DOM World</div>
<template>
    <content></content>
</template>
<script type="text/javascript">
    var shadowRoot = document.querySelector("#welcomeMessage").webkitCreateShadowRoot();
    var template = document.querySelector("template");
    shadowRoot.appendChild(template.content);
    template.remove();
</script>

When the page is rendered, the content of the Shadow Host is projected into the place where the <content> element appears. This is a very simplistic example where <content> picks up everything inside the Shadow Host during composition. But it can very well be selective in picking content from the Shadow Host using the select attribute, as shown below.

<div id="outer">How about some cool demo, eh ?
    <div class="cursiveButton">My Awesome Button</div>
</div>
<template>
  <button>
    <content select=".cursiveButton">Fallback Content</content>
  </button>
</template>
<style>
button{ 
   font-family: cursive;  
   font-size: 24px;
   color: red; 
}
</style>
<script type="text/javascript">
    var shadowRoot = document.querySelector("#outer").webkitCreateShadowRoot(); 
    var template = document.querySelector("template"); 
    shadowRoot.appendChild(template.content.cloneNode(true));
    template.remove();
</script>

Check out the live demo and play with it to better understand the concept of insertion points and projections.


Web Components

As you may already know, Shadow DOM is a part of the Web Components Spec, which offers other neat features, like:

  1. Templates – are used to hold inert markup which is to be used at a later point in time. By inert, we mean that images in the markup are not downloaded and included scripts are not executed until the content of the template actually becomes a part of the page.
  2. Decorators – are used to apply the templates based on CSS Selectors and hence can be seen as decorating the existing elements by enhancing their presentation.
  3. HTML Imports – provides us with the capability to reuse other HTML documents in our document without having to explicitly make XHR calls and write event handlers for it.
  4. Custom Elements – allow us to define new HTML element types which can then be used declaratively in the markup. For example, if you want to create your own navigation widget, you define your navigation element, inheriting from HTMLElement and providing certain life-cycle callbacks which implement events like the construction, change and destruction of the widget, and then simply use that widget in your markup as <myAwesomeNavigation attr1="value1"..></myAwesomeNavigation>. So custom elements essentially give us a way to bundle up all the Shadow DOM magic, hide the internal details, and package everything together (see the sketch after this list).
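
To make the Custom Elements idea concrete, here is a minimal sketch using the v0 registration API, document.registerElement; the exact method name changed between spec drafts (modern browsers use customElements.define instead), so treat it as illustrative.

var proto = Object.create(HTMLElement.prototype);

// Life-cycle callback invoked when an instance of the element is created.
proto.createdCallback = function() {
    var shadow = this.webkitCreateShadowRoot();
    shadow.innerHTML = "<nav>My awesome navigation goes here</nav>";
};

// Register the new element type; custom element names must contain a hyphen.
var MyAwesomeNavigation = document.registerElement("my-awesome-navigation", {
    prototype: proto
});

// Use it declaratively: <my-awesome-navigation></my-awesome-navigation>
// or create it imperatively:
document.body.appendChild(new MyAwesomeNavigation());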

I won't babble much about the other aspects of the Web Components Spec in this article, but it would do us good to remember that together they enable us to create reusable UI widgets which are portable across browsers in look and feel and fully encapsulated from the scripts and styles of the consuming page.


Conclusion

The Web Components Spec is a work in progress, and the sample code included here, which works today, may not work in a later release. As an example, earlier texts on this subject use the webkitShadowRoot() method, which no longer works; instead, use webkitCreateShadowRoot() to create a Shadow Root. So if you want to use this to create some cool demos with Shadow DOM, it's always best to refer to the spec for details.

Currently, only Chrome and Opera support it, so I would be wary about including any Shadow DOM on a production instance, but with Google coming out with Polymer, which is built on top of Web Components, and polyfills emerging to support Shadow DOM in browsers that lack it natively, this is surely something that every web developer should get their hands dirty with.

You can also stay updated with the latest happenings on Shadow DOM by following this Google+ Channel. Also checkout the Shadow DOM Visualizer tool, which helps you to visualize how Shadow DOM renders in the browser.

December 19 2013

19:29

Coding in the Cloud

The Internet has allowed us to work anywhere we want, giving us tremendous flexibility in choosing where we'd like to plant ourselves to build the next great app. Not being tied to an office desk has been a blessing to many who cherish the ability to work in varying environments and draw inspiration from their surroundings. But for the most part, we've needed to have a full-blown development machine to make this happen. What if we could leverage the cloud as a development environment itself, freeing us up to choose from a variety of devices to accomplish the same thing? That's what we'll explore in this article.


Coding in the Cloud

As I mentioned, until now, whether you were in an office or sitting at a coffee shop, you generally needed a development machine (a MacBook, Windows, or Linux laptop in most cases) which may have the following things:

  • An editor or IDE like Sublime, Vim or Visual Studio
  • A local web server like MAMP
  • An app server and framework (RoR, Python/Django, Node.js)
  • Git
  • LiveReload or CodeKit

And countless other tools that you rely on to get your work done. This typically requires that you have a beefy machine to work with so you can power all of these tools and be productive.

But with broadband becoming more prevalent, the requirement for having such a powerful device is becoming less stringent, thanks to the number of cloud offerings available to developers. Services like Cloud9 IDE and Nitrous.io are bringing real-world development environments to the cloud, allowing developers to work remotely and offering more flexibility in terms of the devices they choose to work with.

These services not only bring IDEs to the web, they also provide infrastructure that gives developers:

  • Complete Git integration with services like Github and Bitbucket
  • Terminal access for command line functionality
  • Virtualized backends allowing you to spin up instances of RoR or Node.js
  • Deployment to production services like Heroku or Windows Azure
  • Team collaboration

Basically, these services are adding in the kitchen sink, making the move to the cloud much easier and enticing. Let's explore these a little more.


Cloud9 IDE

My first exposure to a real cloud-based IDE was Cloud9. They had a very basic online editor with Github integration which was very cool and obviously very alpha at the time. Nonetheless, it was incredibly promising and along with the work Mozilla was doing on Bespin, it showed tremendous potential. Interestingly enough, the Mozilla Bespin project was later merged into Ace, Cloud9's editor, which seems to have greatly contributed to the solid editing experience in the cloud-based IDE.

C9 takes a very similar pricing approach to GitHub, offering a nice baseline set of features via a freemium model, with expanded functionality for $12 per month. The differences boil down to the type of workspaces you have, the number you can have, and the features available within those workspaces. A workspace is where your project lives, including your project files and the tools and services you might use (such as Ruby or Python). So depending on how sophisticated your needs are, you may need to seriously consider the premium option, which gives you:

  • Five more private workspaces
  • Unlimited FTP workspaces
  • More virtual disk space for your workspaces
  • Full terminal and command line access within your workspaces

The terminal options are especially important since C9 allows you to SSH and FTP into your own server, basically letting you use their service solely as a cloud-based IDE.

The fact that they offer a freemium option is great for kicking the tires to see if it's a service you can get into.

You have a couple of options for sign-in, including a traditional email/password scenario or you can use OAuth via GitHub or BitBucket. Doing so with the latter two gives you access to the repositories you already have stored on those services, as evidenced here:

c9-workspace

The workspace allows me to either clone an existing repo from the imported list, clone from a URL of my choice, or create a new workspace based off a remote server (via SSH or FTP).

new-workspace

I decided to clone my "authy" Git repo, which was the sample source code I wrote for an article here on Nettuts+ on two-factor authentication. The app server I used for that was ColdFusion, and I was pretty excited to see that C9 recognized the ColdFusion files correctly. I honestly wasn't expecting it, since CFML isn't as popular as it used to be:

ide-authy

Notice that my entire project structure is brought over intact and I have full editing capabilities on my files. In addition, I also have access to full terminal commands:

ide-terminal

To drive this home a bit more, notice in the following screenshot I made a change to gettoken.cfm by adding a comment. Typing in git status in the terminal panel displays the changed status of the file just like you would expect:

ide-terminal-git

Then following up with git commit -a -m "Added comment" and git push updates my repo accordingly:

commit-github

A key feature that C9 likes to hype is its JavaScript autocomplete capability, and that makes sense, since it's such an invaluable resource in any editor.

c9-autocomplete

I know there's been a lot of debate recently about whether or not autocomplete hinders your ability to remember language features but with the growing level of complexity in libraries, frameworks and tools, I personally find tremendous value in having a little help remembering things. I do wish they offered more language support though.

One of the biggest selling points is the maturity of the documentation. It covers everything about the service from customizing the IDE to integrating with database systems and deploying your code. There are a number of video tutorials that you can leverage to get familiar with the service, which complement the solid documentation.

Lastly, if you're adventurous, you could decide to roll your own version of Cloud9 IDE, since it is an open-source project licensed under the GPL. The GitHub page offers good instructions on how to install it in both *nix and Windows environments, so if you'd like to forego the cost, have at it.


Nitrous.IO

Nitrous.IO (which I'll just refer to as Nitrous from now on) is a new option that's gotten a lot of praise from developers like Yehuda Katz of the Ember.js project and Tobias Lütke, a Rails Core alumnus. It's still in beta, but works impressively well at this point. They've taken the approach of offering up a full virtualized stack that not only encompasses an IDE but also spins up what they call "boxes", which basically house your entire development stack. This includes your choice of Ruby on Rails, Node.js, Python/Django or Go.

new-box

Similar to C9, they offer basic services that allow you to kick the tires around. This is done by giving you enough "nitrous" points (155) to create a basic box which they say should be enough for most day-to-day tasks. Notice in the image above that the amount of memory and storage selected affects the amount of nitrous points you'll have left. The standard basic box will leave you with five points and like many VPS hosting providers, you can dynamically choose more resources depending on what you need. I'm sure this will come at a cost once it's out of beta but they make it incredibly easy to earn more points without opening up your wallet. Via a couple of different social outreach connections and tasks, you can earn more points towards your box features:

n20-morepoints

Notice that by choosing a couple of options, I was able to boost my N20 points from 155 to 180 and the more friends you invite the more points you earn. Just don't be an annoying spammer though!

Setting up a box, though, is more than just selecting resources. An important part of this, in terms of performance, is choosing the geographic region closest to you to decrease latency. They actually key in on this during their intro video.

Once you've chosen your settings, provisioning the box is incredibly straightforward and even has a cool animated progress dial:

n20-provision

The IDE then shows up and you're ready to begin your work:

n20-ide

Unlike C9 though, I didn't find a way to visually see the Github repos available. I assumed that connecting to Github would also allow me to easily clone one of my repos into my box. I ended up following these instructions to add the SSH keys generated by Nitrous to my Github account and then git cloning one of my repos into the IDE:

n2o-git-clone

I also could've used the upload functionality to upload my project files:

n20-upload

It just seems to me that getting direct visual access to your GitHub repo is a priority feature the Nitrous team should be looking at and something I think is a plus for C9.

With the files in place, it was time to see how the IDE worked, and for all intents and purposes it worked very well, easily recognizing different file types and offering syntax highlighting according to the file type:

n20-ide-files

Unlike C9, though, there was no autocomplete so you'll need to determine how valuable a feature that is to you. But like C9, keyboard shortcuts for common tasks such as saving a file or closing tabs are all there.

For those used to working in the terminal, you'll find Nitrous easy to adapt to. In fact, you'll really need to be comfortable with it to make full use of the service. Installing and starting packages like MongoDB or Memcached is done via the Nitrous package manager called Parts which, you guessed it, is command-line based. For example, installing MongoDB would go like this:

parts install mongodb

If you're used to apt-get on Linux or brew install on OS X, this syntax should be very familiar.

The main thing to remember is that the editor is only one part of the equation. You're basically running a complete machine here within a web browser so being comfortable in a *nix environment will definitely give you a leg up.

Even deploying your site will require you to use the command-line. There's no "just push a button" option here. Nitrous integrates easily with the following services:

  • Heroku
  • Google App Engine
  • Microsoft Azure
  • Nodejitsu

That gives pretty good coverage to several large cloud-based services. I was curious about their deployment support for Amazon but couldn't find a lot of information on that, at least not enough to be able to understand how to set it up.

With that said, their documentation is very well organized and will easily guide you through most of the tasks you need to get up and running with their service.


To Cloud or Not to Cloud

Both services seem to offer compelling features that could make it easy to move over to full-time cloud-based development. With so many low-cost devices coming out that are clearly targeted at consumers who just want to be connected all the time, it makes sense that these services will start to evolve and perhaps gain traction.

It's hard to imagine giving up my trusty laptop with its i7 processor and speedy SSD for an all-cloud dev environment, but I can't outright dismiss it either. Every major software company is touting cloud services, and I'm sure every one of you reading this uses multiple cloud services daily. It seems like a logical step to begin considering coding in the cloud.
