
January 28 2014


The New Smashing Mystery Riddle: Have You Figured It Out Yet?


Remember the Smashing Book Mystery riddle from a couple of months ago? Guess what: the Mystery continues! To celebrate the launch of the SmashingConf NYC and our lovely book, we’ve prepared a new riddle, and this time it’s not going to be any easier, but the reward is definitely worth the effort.

How does it work? Below you’ll find an animated GIF that contains a hidden message. Once you’ve solved the mystery, you’ll see a Twitter hashtag displayed in plain text.

Are you ready? Action! (Hint: watch out for the file name).

Smashing Book Mystery

What Can You Win?

Among the first readers who tweet us the hidden hashtag, we’ll raffle off a quite extraordinary, smashing prize (and a couple of other Smashing extras):

  • a roundtrip flight to New York, USA,
  • full accommodation in a fancy hotel,
  • a ticket to the Smashing Conference New York 2014,
  • any Smashing workshop ticket of your choice,
  • full access to the Smashing eBook Library,
  • a signed edition of the Smashing Book #4,
  • a truly Smashing laptop bag,
  • your very own Smashing caricature, designed just for you.

Please note that, to avoid spoilers, comments are closed for this post. And sorry, we aren’t going to make it too easy for you.

Alright. Let’s get to work. Or have you already figured it out? ;-)

© Vitaly Friedman for Smashing Magazine, 2014.

January 06 2014


50 Problems In 50 Days: The Power Of Not Knowing


I’ve travelled 2517 miles to try to solve 50 problems in 50 days using design, a journey that challenged me to fundamentally rethink my understanding of the user-experience design process.

I set myself a challenge. I wanted to test the limits of design’s ability to solve problems — big and small. To do this, I left the comfort of my computer chair and set out into the unknown. Every day, I had 24 hours to observe a problem, attempt to solve it and then communicate the solution.


For more of an introduction to the adventure, “50 Problems in 50 Days, Part 1: Real Empathy for Innovation” gives an overview of the project, as well as argues for the importance of real empathy in developing truly problem-solving solutions. For more, take a look at the 50 Problems in 50 Days website.

In this second of three articles, I’ll share what travelling 2517 miles taught me about not knowing and about the value of incompetence in fostering innovation — in other words, how I learned that being an idiot is OK for me, and is OK for you, too.

Day 42: Homelessness

It was 7:00 am, and the sun was already shining in beautiful Turin. Leaving my hostel early, I set out to find my problem to tackle that day. I found myself walking with commuters, immersing myself in my surroundings and observing.

As I walked one of the city’s many high streets, I passed shops that were not yet open. Looking around me, I gradually became aware of shapes by the side of the road. Looking closer, I realized there were people asleep in the doorways of the stores.


I was familiar with homelessness, having researched the problem in a past project and having witnessed it in several cities during my adventure. However, I was more conscious of it here in Turin.

I stopped walking and thought.

Assessing what I was seeing — and perhaps in a moment of foolishness — I decided to make homelessness in Turin my problem for the day. My question became, How can design address the problem of homelessness in Turin?

I knew that trying to tackle homelessness in 24 hours was a ridiculous undertaking. However, with my knowledge and research, I had some experience to direct my thinking.

I got to work.

I began by interviewing passersby about what they knew of the problem in the city and about the social and political factors that might have led to people living on the street.

I observed people interact with the homeless.

I analyzed. I created mind maps documenting what I knew of the interdependent factors relating to homelessness, analyzing perceptions and comparing them to the realities.

I worked. And I worked. And I worked.

And I came up with absolutely nothing.

My effort resulted in pages of notes, yet I struggled to come up with any tangible ideas. The ideas that came to me seemed inadequate to the complexities that I knew of the problem.

I was getting nowhere.


Exhausted, I sat down at the edge of a busy public plaza.

As I did, my eyes adjusted to the surroundings. In the corner of the plaza sat a middle-aged woman. She was dressed in black and was looking down at her feet. I soon realized that she was begging.


Observing her interact with passersby, I realized something important: that I knew nothing about her.

Until that point, my work had been directed largely by my past experience with this problem, rather than by what I was seeing right in front of me. In a moment of foolishness, I decided to scrap all of my day’s work so far and start again.

Standing up, I left the plaza and went back to observation, research and analysis. This time, I approached the situation afresh. As I spoke with people, I tried to re-understand the problem as if I had never dealt with it. In doing so, I based my questions not on what I thought I knew, but on a search for new insight.

I turned a corner, and suddenly there was a flash in front of me.

An elderly lady was coming right towards me! Before I knew it, she was clutching at my shirt sleeves, speaking to me in rapid pidgin English, asking for money!

My mind raced.

A simple interaction took me completely by surprise. (Image: Bogan Suditu)

Questions started popping into my head, great questions that built on my previous research, questions that recognized the complexity of the problem! But stopping myself from focusing on what I thought I knew, I decided to ask her one simple, naive question:

What do you need my money for?

As she answered, something unanticipated happened. She told me why she needed any change I could spare, and as she did so, I forgot all of my questions and listened to what she was saying. In that moment, I suddenly understood the problem.

As she spoke, I found myself faced with a very human dilemma.

On the one hand, I wanted to give and to be compassionate. On the other hand, I didn’t know how my money would be spent and was concerned that it might be used irresponsibly. In the end, I didn’t give.

This one simple insight sparked a multitude of ideas in my mind and led to my eventual solution.

Where my foreknowledge and experience had taken me in circles, being prepared to forget what I knew and approach the problem naively yielded insight as clear as day. I was able to tackle this problem not because of my competence in it, but because I was prepared to be incompetent.

Not Knowing

Trying to solve 50 problems in 50 days demonstrated to me the value of incompetence in solving problems and the danger that competence sometimes presents.

From checkout processes to apps, we build up proficiencies in how to solve certain types of problems during our careers as designers. This, coupled with repeated exposure, enables us to develop a measure of expertise in our field.

Every designer should strive for expertise. However, my adventure taught me that sometimes this expertise can make us neglect a vital tool in truly solving problems: It can make us forget “not knowing.”

Our accumulated expertise enables us to quickly establish how to tackle a particular challenge, the factors related to the problem and the limitations of our reach. While this experience is vital, familiarity with a problem can also restrict us to routine ways of thinking and can prevent us from seeing beyond what we know and discovering what we don’t.


As designers, the quality of our solutions will always be fundamentally linked to our understanding of the problem. Learning to approach both new and familiar problems naively enables us to do something important. It enables us to look beyond what we think we know and to fully immerse ourselves in the problem we are trying to solve.

Innovation In Incompetence

From asking “Why is a used toilet roll on the floor?” in Berlin (day 32) to questioning why people keep getting hit by bikes in Amsterdam (day 28), I unearthed unanticipated insights from otherwise ordinary situations by approaching problems naively. This did not negate my knowledge; rather, it enabled me to spot assumptions and limitations in my thinking. In doing so, I was able to discern more accurate insights and to foster more effective solutions.

Documenting the people-weaving techniques of cyclists in Amsterdam on day 28

Being prepared to “not know” is difficult. Everything we think to be true in life is based on what experience has taught us. Therefore, being prepared to approach a familiar design challenge naively is difficult. In short, it can make us feel a little stupid and challenge our desire for expertise.

Ultimately, though, trying to solve 50 problems in 50 days taught me that real design expertise isn’t always about knowing the right answers, but about knowing when and how to ask the right questions.

From responsive design conventions to e-commerce store layouts, when we are prepared to re-evaluate what we know and to approach familiar problems afresh, we give ourselves vital opportunities to better understand and better solve those problems.

Methods For Anyone And Everyone

Below are just some of the methods I developed in my journey. Use them, build on them, and develop your own. Our aim should be to find methods to more deeply understand the problems we are trying to solve and, in doing so, to better serve the people we are designing for.

Choose to Not Know

Creating an interactive communication tool for hostel staff on day 29

When we’re briefed on a design problem, more often than not our brains start to do something amazing: They start to solve. We find points of reference to frame the challenge by recalling experiences that might help.

Suppose we’re given a brief to “design a website to help festival-goers find music venues.” Upon hearing this brief, we have a choice: let our experience take over, or momentarily put it to one side. Choosing to “not know” what the solution might look like accomplishes two things:

  1. It compels us to dig deeper.
    Starting with no answers compels us to dig deeper to find the best solution, rather than be drawn to the obvious half solutions that might meet the need. Being prepared to approach a new problem naively forces us to immerse ourselves in the world of the people we are designing for. Rather than let our experience direct our thinking, by saying “I don’t know,” we make ourselves really pay attention to the people who do.
  2. It fosters creativity.
    Not retreating to past experience opens our mind to consider solutions beyond what we know. These solutions could be better, more inventive and more intuitive — we just have to be prepared to look for them.

Ask Stupid Brilliant Questions

Designing a simple tool to locate and navigate to free Wi-Fi on day 35

In my journey, I learned that one of the best methods for unearthing valuable insight from familiar design challenges is to ask (seemingly) stupid questions.

Suppose we’re asked to create a website to help someone sell their home. Our experience could give us useful shortcuts to skip the basics in order to deal with more unknown aspects of the project. However, rather than asking “What content should go on this website and how shall we structure it?”, asking a naïve question like “Why do people sell their home?” or even “Why do you need a website to achieve what you want?” could unlock insights that shape the entire direction of the project.

Likewise, asking naïve questions of the people we’re designing for gives us an opportunity to listen and to challenge our assumptions about the problems they face. Rather than asking end users “What specific resources would you like to help you sell your home?”, making a naïve statement like “I’ve never sold a house before. What’s it like?” helps to dig below the surface to root out the underlying problem.

Question Convention

Creating an interactive and non-invasive tool for crowdsourcing book notes on day 47

Designed conventions are vital. By including conventions in our products, we are giving users a shorthand to more easily understand and use the solutions we have created for them. However, while the conventions we use are often an excellent shorthand, they are not necessarily the best way to solve the problems we’re tasked with.

Experience teaches us that certain problems can be solved with conventional solutions. It is easy to assume that a wayfinding problem is best solved with a map. While conventions enable us to solve quickly, innovation is born of a natural distrust of convention and a desire to create smarter, more intuitive experiences.

Suppose we’re asked to create a music discovery service. In digital applications, music is conventionally represented as images of album artwork or as rows in tables. Following suit and having users browse music in this way would be easy. But our aim should be to create a solution that better meets the needs of the people we are designing for. In almost every other situation, discovering music is rarely a visual experience — it’s auditory. How would this affect the decisions we make when designing this service?

For a great (and really simple) example of this in practice, look at Christian Holst’s article on “Redesigning the Country Selector.” Regardless of whether you agree with his solution, challenging convention enables us to create smarter, more effective outcomes.

These are just some of the methods by which we can embrace naivety and challenge what we think we know in order to better solve for the people we are designing for.

It would be ridiculous to overlook the role of convention and the role of experience in creating intuitive, useable products. But by being prepared to challenge what we know, we set ourselves up to discover what we don’t.

The Solution

So, what was the result of overlooking my experience, re-evaluating what I thought I knew and re-approaching the problem with the mindset of a beginner?

Sitting down outside a metro station in the centre of Turin, I reconsidered my dilemma of being asked to spare some money.

On the one hand, I wanted to give and be compassionate. On the other hand, I didn’t know whether my money would be used responsibly. In the end, I didn’t give.

As I sat outside the station, I looked up to see commuters ascending the stairs from the underground platform. Reaching the top of the stairs, they would throw away their travel ticket and continue their journey.

Watching this simple action was my eureka moment.


The solution I created would repurpose Turin’s metro system. When buying a ticket, the buyer would be invited to donate a tiny amount (as little as €0.20).

Then, when leaving the station, instead of throwing their ticket away, they would be able to leave it in the hat of someone living on the street, a donation that could be converted to a night’s stay at one of Turin’s shelters.


With this solution, people could be compassionate while knowing exactly how their donation would be used.

Since publishing 50 Problems in 50 Days, this solution has been one of the most shared and most discussed. Being contacted by organizations and individuals around the world to discuss how to implement comparable solutions in their cities has been amazing.


Being prepared to re-approach design challenges naively is vital to innovation. From shopping carts to apps, our accumulated experience helps us to confidently solve based on what we think to be true. However, it can also negate one of our greatest assets as designers: humble incompetence.

Even in the most familiar of situations, when we are prepared to “not know” our client’s business (say, the format of a website or the mindset of the user), we are able to expose our assumptions and seek new insight. In doing so, we empower ourselves to better understand the problem and to foster a more effective solution.

Ultimately, we must dig deeper into the problems we are tasked with solving and aim to better solve for the people whose lives we are affecting every day.

In the words of Irene Pereyra, Global Director of UX and Strategy at Fantasy Interactive:

In order to be innovative, you have to be fearless, you have to be comfortable with being different, and you have to be willing to stand apart from the crowd… In order to be forward-thinking you have to let go of the status quo, and you have to rephrase the problem.

Stay tuned for the final part of this series on my 50 problems. In it, I’ll share a selection of small lessons on what travelling 2517 miles has taught me about process, discipline, innovation and the power of design to solve problems big and small.


© Peter Smart for Smashing Magazine, 2014.


December 27 2013


So You’ve Decided To Open-Source A Project At Work. What Now?


A while back, I wrote a post about starting an open-source project. The focus of that article was on starting an open-source project as an individual. I received a lot of positive feedback and also some questions about how the process changes when you’re open-sourcing a project at work.

A guide on getting started with an open-source project at work.
Open-source projects are always a team effort, and they always require a dedicated commitment to the project. Image credit: open source way

Many companies are starting to investigate and participate in the open-source community, and yet few guides for doing so exist. This article focuses primarily on the process of open-sourcing a project at work, which brings with it other concerns and decisions.

Why Open Source?

Before delving into the how, it’s useful to step back and talk about the why. Why is it that so many companies have or are starting to establish an open-source presence? There are actually a number of reasons:

  • Technical brand
    Companies want to be known as a place where smart people work. One excellent way to show this is by releasing code that has been written at the company. Doing so creates mindshare in the community, familiarity with the company and its contributions, and fodder for future technical brand initiatives (such as giving talks at meetups and conferences).
  • Recruiting
    Time and again, you’ll see contributors joining companies that sponsor open-source projects. I saw this happen frequently while at Yahoo, where YUI contributors would sometimes end up as Yahoo employees after having contributed to the project on an ongoing basis. Similar hires have occurred in the Node.js community. The reason is pretty clear: If you work on an open-source project in your spare time, imagine how great it would be to turn that hobby into a job. Additionally, allowing job candidates to see some of the company’s code gives some good insight into what working at the company would be like.
  • Giving back
    A lot of companies benefit from open-source software, and contributing open-source software back into the world is a way of giving back. These days, it’s part of what it means to be involved in the technical community. When you receive, find a way to give back. A lot of companies are embracing this philosophy.

There are many more reasons why companies are choosing to open-source, but these are the primary drivers. Therefore, the process of open-sourcing a project must be aligned with these goals while protecting the company’s interests.

Getting Started

Suppose someone at your company wants to open-source something. This has never happened before and you’re not sure what to do. Do you just put it up on GitHub? Announce it in a press release or blog post? How do you know that the code is OK to open-source? There is a lot of planning to do, and it all starts (unfortunately) with the legal department.

Giving away company assets is as much a legal issue as anything else. The very first conversation should be with an appropriate member of your company’s legal team to discuss the ins and outs of open-sourcing. In larger companies, one or more intellectual property (IP) attorneys are likely on staff or on retainer; in smaller companies, this conversation might start with the general counsel. In either case, it’s important to lay out exactly what you want to do and to clarify that you’d like to formalize a repeatable process for open-sourcing projects.

The primary legal concerns tend to be around licensing, code ownership and trade secrets. These are all important to discuss openly. Because many companies have done this already, you should have a fair amount of evidence of how other companies have established their processes. The most important thing is to engage the legal department early in the process and to have a champion on the legal team who you can work with should any issues arise.

Choose A Default License

One of the first topics of discussion should be which open-source license the company will use as its standard. Leaving the team for each project to decide for itself which license to use is a bad idea, because a lack of awareness could quite possibly lead to two projects from the same company having incompatible licenses. Decide up front exactly which license to use, and use it for all open-source projects in your company.

I touched on the different types of licenses in my previous article (also, see “Understanding Copyright and Licenses”). In general, companies tend to standardize on either the three-clause BSD license or the Apache license. Very rarely will a company standardize on the MIT license, because the standard MIT license doesn’t contain a clause that prevents use of the company’s name in advertisements for software that makes use of the project. The Apache license has additional clauses related to patent protection, which some companies prefer.

The ultimate choice of a license typically comes down to legal considerations on the nature of the code being released. The philosophical implications of which license you choose are not important; using the same license for all projects in your company is important.

Outgoing Review

The next topic of discussion should be defining an outgoing review process. Just putting code out in public without some sort of review is neither safe nor sane. In general, a request to open-source a project should be reviewed by the following:

  • Legal
    As mentioned, the legal department needs to be kept in the loop during this process. They will likely not review the code, but rather will want to understand what the code does and whether it could be considered a company secret.
  • Security
    Someone with a security mindset should actually look at the code to make sure it doesn’t reveal any security issues or contain code that should not be made public. This process could be manual or automated, depending on the complexity of the code.
  • Executive
    Someone on the executive team needs to approve the request, basically saying that they believe this code is safe to share and that they are aware that the code is being published.

Exactly how an outgoing review gets started tends to be company-specific. It could be done by submitting a request to a mailing list, filling out a form or just setting up a meeting. How it’s implemented isn’t as important as the fact that the review occurs and is done quickly. So, setting a deadline for a response is a good idea. Depending on the size of the company, this could range from a few days to a few weeks, but setting up the expectation ahead of time helps to alleviate any scheduling issues.

Accepting Contributions

One part of the process that is often forgotten is figuring out rules for accepting external contributions. Open-sourcing a project is basically a way of saying, “Hey, we’d love to have you fix our bugs!” Part of the point is to get people interested enough to want to contribute to the project. Establish a process so that you know how and from whom external contributions may be made.

Open-source projects
When building something, accepting external contributions can significantly benefit the project, but you need to establish a process for contributions first. Image credit: open source way.

The company should require a contributor license agreement (CLA) to be signed before accepting contributions from external developers. A CLA is a legal document that typically states a few things:

  • The contributor asserts that they are the original author of the code they are submitting.
  • The contributor grants the project the right to use and distribute the code they are submitting.
  • The contributor has the right to grant the previous two points.
  • Any code submitted by a contributor is not guaranteed to be accepted or used.

Take the Node.js CLA. It’s a pretty straightforward form that defines the expectations for contributors and asks for the contributor’s name and other information. These days, asking for someone’s GitHub user name as well is quite common (to help automate the process of checking CLAs for commits).
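Checking CLAs for commits, as mentioned above, is straightforward to automate. Here is a minimal sketch; the signers file, its structure and the message wording are hypothetical examples, not part of any real CLA service:

```python
# Minimal sketch of an automated CLA check for incoming pull requests.
# Assumes a JSON file mapping GitHub usernames to signing dates --
# the file name and structure here are hypothetical.
import json

def load_signers(path="cla-signers.json"):
    """Load the set of GitHub usernames that have signed the CLA."""
    with open(path) as f:
        return set(json.load(f).keys())

def check_pull_request(author, signers):
    """Return a status message suitable for posting back to the PR."""
    if author in signers:
        return f"CLA check passed: @{author} has signed."
    return (f"CLA check failed: @{author} has not signed the CLA yet. "
            "Please sign before this pull request can be merged.")

signers = {"alice", "bob"}  # normally loaded via load_signers()
print(check_pull_request("alice", signers))
print(check_pull_request("mallory", signers))
```

In practice, a script like this would run on every pull request (for example, from a CI hook) and post the resulting message as a comment.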

The CLA will be important should any legal action be taken against your company as a result of the code contained in the project. Because the company is the maintainer of the project, any legal action would likely be directed at the company itself, rather than any individual contributor.

CLAs are sometimes controversial, and some developers refuse to sign them. That’s an acceptable loss to protect yourself and your company from the legal risks of open-source projects. Those who are familiar with the open-source community and the reason behind CLAs will be more than happy to sign on and contribute to your project.

Maintaining The Project

An overlooked part of the open-source process is maintaining the project once it’s been published. Many developers (and some companies) view the step of open-sourcing a project as the end of the process — they’ve already spent considerable time creating software that they now want to share with the world. In truth, open-sourcing a project is the beginning of a journey. The act of sharing now makes it, effectively, communal software, and you can now expect the project to be discussed and to evolve as a whole.

Once a new project is open-sourced, you’ll eventually start receiving suggestions, requests and pull requests. Maintenance is a task that is often overlooked in open-source projects. Image credit.

Many companies that are new to the open-source community make the mistake of publishing their code and leaving it behind. Issues are left unresolved and unanswered, road maps are not shared, pull requests are ignored, and there is no way to get in contact with the people behind the project. As much as open-source projects can enhance your technical brand, leaving projects in this state can hurt it.

Once you’ve open-sourced a project, you must commit to maintain it. Even stable software will have bugs, and some of those bugs could be found by people outside of your company. If you have no intention of interacting with anyone outside of the company on this project, then you might want to consider simply distributing the compiled binary or library freely, and not actually open-sourcing the code.

While there are no established rules for maintaining a project, here are some guidelines I follow:

  1. The public repo is the source of truth.
    Once you’ve published your source code to a public repository (or repo), all development should happen in that repository. The public repo shouldn’t simply be a clone of an internal one. If the project is being actively developed, then that development should happen exclusively in the public repo in order to be as transparent as possible. Developing in private and then updating periodically prevents the use of pull requests and makes forking your project extremely difficult and frustrating.
  2. Plan in public.
    Use a public bug tracker to track all issues, so that others can see what’s being worked on and what’s already been reported. Post a road map somewhere public that shows plans for development. Be as transparent about the development process as possible. In short, open-source the way your project will grow and evolve.
  3. Dedicate company time.
    If you are the primary author of the project and you’ve open-sourced the code, then you (or a delegate) should set aside time to interact with external contributors. That means making timely responses to pull requests and issues, which in turn means setting aside time daily or weekly. This has now become part of your full-time job, not just a hobby.
  4. Open channels of communication.
    Give external contributors a way to interact directly with the maintainers. This could be through a forum, mailing list, IRC chat or another channel. Make sure that these systems are public; for example, an IRC chat should not be on your company’s chat server. Plenty of free communication services exist to make use of. The simplest and least disruptive method is to create a free mailing list using Google Groups or Yahoo Groups.
  5. Commit to document.
    Lack of documentation is a huge problem with open-source projects. From the start, commit to writing good documentation that explains not only how to use the project but also how to set up a development environment, run tests and work effectively with the code as a contributor. There is no better way to discourage people from using your software than to give them no way to get up and running on their own.
  6. Maintain regular outgoing communication.
    There should be a steady stream of outgoing communication about the project. At a minimum, post announcements about new releases and security issues that require immediate upgrading. Maintain changelogs that describe differences between versions, and follow a predictable and regular scheme in your versioning (such as by following semantic versioning). This will help both users and contributors understand the impact of filing issues and submitting pull requests.

An open-source project very quickly takes on a life of its own once released. If a community forms around the project, then it could take up more and more of your time. That’s why a commitment to maintain the project needs to be a part of the open-sourcing process. Letting a project languish without any attention sends a horrible message to those outside of your company and will discourage people from contributing.

Warning Signs

Most open-source projects, whether by individuals or companies, are started with the best of intentions. Typically, the goal is to share learning and code with the world. However, the Internet is littered with abandoned projects. If your project ends up like this, it could hurt your and your company’s reputation. Projects slowly decay over time and can usually be identified by one or more of the following characteristics:

  • “Not enough time”
    The more frequently this phrase appears in responses to pull requests and issues, the more likely the project is headed for the graveyard. This is one of the top reasons why projects die: The maintainer runs out of time to maintain it. As should be obvious from this article, maintaining a project requires a significant amount of work, which is frequently not sustainable in the long term.
  • Too few contributors
    If most contributions come from one person, then the project is likely either early in its life (on the upswing) or close to the end. You can easily tell which is the case by looking at the date of the first commit. Thriving projects tend to have a large number of commits from the maintainer and a small number of frequent commits from a few others. Another good way to measure this activity is in the number of merged pull requests in the last year.
  • Too many open issues and pull requests
    A surefire sign that a project is on its way out is issues and pull requests that are left open with no comment. More than a few issues that have been open for a year means that the project isn’t being cared for.
  • No way to contact the maintainer
    If you can’t find a reliable way to contact the maintainer, whether through email, a mailing list, Twitter, issues or pull requests, then there’s not much hope for the project. Maintainers aren’t maintaining if they aren’t communicating.
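
The pull-request measure mentioned above can be sketched as a quick script. The data shape (`{ mergedAt: Date|null }`) is an assumption made purely for illustration, not an API of any particular code host:

```javascript
// Sketch: count pull requests merged within the last year, a rough
// proxy for whether a project is still actively maintained.
// Input shape ({ mergedAt: Date|null }) is assumed for illustration.
function mergedInLastYear(pullRequests, now) {
  var oneYearAgo = new Date(now.getTime());
  oneYearAgo.setFullYear(oneYearAgo.getFullYear() - 1);
  return pullRequests.filter(function (pr) {
    return pr.mergedAt !== null && pr.mergedAt > oneYearAgo;
  }).length;
}

var prs = [
  { mergedAt: new Date('2013-10-01') },
  { mergedAt: new Date('2011-02-15') },
  { mergedAt: null } // still open
];
console.log(mergedInLastYear(prs, new Date('2013-12-01'))); // → 1
```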

Keep an eye on these patterns in your own project. If you recognize the warning signs, then you’ll have to decide what to do with the project. Either someone else should become the maintainer or it’s time to end-of-life the project.

End-Of-Lifing Projects

At some point, you or your company might find that there is no longer interest in maintaining the project. This is a natural evolution of software — sometimes a project outlives its usefulness. When this happens, you need to appropriately end-of-life the project.

End-of-lifing typically starts with a public announcement that the project is no longer being actively maintained (along with a post in the project’s README file). The announcement should address the following:

  • Why is the project being end-of-lifed? Has it been superseded by something else? Is it no longer in use? Do you recommend different software written by someone else?
  • What will happen to outstanding issues? Will bugs still be fixed? Will all outstanding issues be closed without being fixed?
  • Is there an opportunity to transfer ownership? If someone really likes the project, is your company willing to transfer ownership to them or their organization?

Ultimately, you might decide to delete the repository completely. While being able to see all of a company’s projects, even those that have been end-of-lifed, is sometimes nice, that comes at a cost, and so removing repositories from time to time might be prudent. In doing so, be certain that you have effectively communicated that the project is going away, and give at least 30 days’ notice to allow others to fork the project if they are so inclined.


Open-sourcing projects at work is a great initiative for many reasons. When done correctly, an open-source presence will do a lot to promote your company and its employees. Active open-source involvement signals your company’s willingness to interact with the technical community and to contribute back, both signs of a strong technical brand.

On the other hand, a poor open-source presence is worse than no presence at all. It signals general laziness or apathy towards a community of developers, a community that might very well want to help your project succeed. Few things are as demoralizing as trying to work on an open-source project that has been abandoned. Don’t be that company.


© Nicholas C. Zakas for Smashing Magazine, 2013.

December 17 2013


How Optimized Are Your Images? Meet ImageOptim-CLI, a Batch Compression Tool


Exporting images for the Web from one’s favorite graphics software is something many of us have done hundreds of times. Our eyes fixate on an image’s preview, carefully adjusting the quality and optimization settings until we’ve found that sweet spot, where the file size and quality are both the best they can possibly be.

After exporting the image — usually using a feature called “Save for the Web” — and having gone to all that care and effort, we would be forgiven for thinking that our image is in the best shape possible. That’s not always the case, of course.

In fact, much more data is usually left in such files, data that browsers have to download despite not requiring or even using it, data that keeps our users waiting just a bit longer than necessary.

Thankfully, a number of popular tools can help us optimize images even further, but which should we use? We assumed, for a time at least, that our graphics editing software properly optimized our files, but what do we really know about our image optimization tools?

Image Optimization Tools

If you’re not currently using any image optimization tool, I would urge you to choose one. Any is better than none. Regardless of which you choose, you will likely speed up your website and keep users happy.

To inform our work, I ran the most popular image optimization tools over a varied sample of images (kindly donated by Daan Jobsis via his “Retina Revolution” article), and I’ve published the results on GitHub.

The report shows us how much data each tool saves and how much quality was lost statistically. However, how great a loss in quality is noticeable and how much is acceptable will vary from person to person, project to project and image to image.

Aim For The Biggest Gains

I’ve been using ImageOptim for many years, with ImageAlpha and JPEGmini joining it more recently.

With this trio, we have a specialist in JPEGs, another in PNGs, and a great all-round application, ImageOptim, which also supports GIF and other formats. Each uses different techniques to deliver impressive savings, but they complement each other when combined to offer better savings still.


ImageOptim beats any single lossless optimizer by bundling all of them. It works by finding the best combination of compression parameters and removes unnecessary comments and color profiles.


ImageAlpha is unique in its lossy conversion of PNG24 to PNG8, delivering savings many times bigger than popular PNG optimizers such as TinyPNG. The conversion even maintains alpha-transparency in all browsers, including on iOS and even in IE 6.


JPEGmini is a “patent-pending photo recompression technology, which significantly reduces the size of photographs without affecting their perceptual quality.” The creators claim it reduces a file’s size by up to 80%, while maintaining quality that is visually identical to the original.

The savings are quite remarkable, but you will need to purchase the software to use it without restriction.

Prioritize Convenience

In terms of performance, the comparative data is reassuring, and to date I’ve been happy with my decisions. But there’s a real problem: all of these tools are GUI applications for OS X.

This has some benefits because everything is local. You don’t need to upload and download files to a Web server, so there’s no risk of the service being temporarily unavailable. This also means that your images don’t need to leave your machine either.

But at some point ahead of every launch, I had to remember to open each application, manually process new images, then wait for the tool to finish, before doing the same in the next application.

This soon gets tedious: We need to automate! This is why (with James Stout and Kornel Lesiński) I’ve created ImageOptim-CLI, automated image optimization from the command line interface (CLI).


Though other image optimization tools are available from the command line, ImageOptim-CLI exists because current benchmarks suggest that ImageOptim, ImageAlpha and JPEGmini outperform those alternatives in both lossless and lossy optimization.

I wanted to take advantage of this.

Given a folder or other set of images, ImageOptim-CLI automates the process of optimizing them with ImageAlpha, JPEGmini and ImageOptim. In one command, we can run our chosen images through all three optimizers — giving us automated, multi-stage image optimization right from the command line.

This gives us the levels of optimization of all three applications, with the convenience of the command line, opening up all kinds of possibilities for integration with other utilities:

  • Integrate it with Alfred workflows.
  • Extend OS X with folder actions and more using Automator.
  • Optimize images whenever they change with the Guard RubyGem.
  • Ensure that images are optimized when you Git commit.

Do you know of other ways to integrate image optimization in your workflow? If so, please share your ideas in the comments.

Installation and Usage

The CLI can be downloaded as a ZIP archive or cloned using Git, but the easiest way is by running this:

npm install -g imageoptim-cli

Running all three applications before closing them afterwards can be achieved with this:

imageoptim --image-alpha --jpeg-mini --quit --directory ~/Sites/MyProject

Or you can do it with the equivalent shorthand format:

imageoptim -a -j -q -d ~/Sites/MyProject

You will find more installation and usage examples on the project page on GitHub.

Case Study: Myspace

Earlier this week, I visited Myspace and found that 4.1 MB of data was transferred to my machine. With the home page’s beautiful magazine-style layout, it’s no surprise that roughly 76% (or 3.1 MB) of that was images.

I was curious whether any data could be saved by running the images through ImageOptim-CLI. So, I recorded the video below to show the tool being installed and then run over Myspace’s home page.

As you can see, the total size of images before running the command was 3,186 KB, and ImageOptim-CLI was able to remove 986 KB of data, while preserving 99.93% of image quality.
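
As a quick sanity check of those figures, the savings work out to roughly 31% of the image data:

```javascript
// The figures quoted above: 986 KB removed from 3,186 KB of images.
var totalKB = 3186;
var savedKB = 986;
var percentSaved = Math.round((savedKB / totalKB) * 1000) / 10;
console.log(percentSaved + '%'); // → "30.9%"
```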


There is a companion Grunt plugin for ImageOptim-CLI, called grunt-imageoptim, which offers full support for the optimization of folders and collections of images. It can also be paired with grunt-contrib-watch to run whenever any images are modified in your project.
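
A minimal Gruntfile wiring the two plugins together might look like the sketch below. The task and option names (`imageoptim`, `imageAlpha`, `jpegMini`, `quitAfter`) are assumptions based on my recollection of the plugins’ documentation, so verify them against the project pages before relying on this:

```javascript
// Sketch: run ImageOptim-CLI via grunt-imageoptim, and re-run it
// whenever images change, via grunt-contrib-watch. Paths and option
// names are illustrative assumptions.
module.exports = function (grunt) {

  grunt.initConfig({
    imageoptim: {
      src: ['img'],
      options: {
        imageAlpha: true,
        jpegMini: true,
        quitAfter: true
      }
    },
    watch: {
      images: {
        files: ['img/**/*.{png,jpg,gif}'],
        tasks: ['imageoptim']
      }
    }
  });

  grunt.loadNpmTasks('grunt-imageoptim');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['imageoptim']);
};
```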

Smashing Magazine has a great article for those who want to get up and running with Grunt.


Image optimization is an essential step in a designer’s workflow, and with so many tools to choose from, there’s bound to be one that suits your needs.

Data should weigh heavily in your decision, so that you reap bigger rewards, but choose a tool that is convenient: using a weak tool every time is better than using a strong tool sometimes. You’ll rarely make a decision in your career that doesn’t have some kind of trade-off, and this is no different.


If you’ve made it this far, I thank you for reading and welcome your questions, comments and ideas.

(al, ea)

© Jamie for Smashing Magazine, 2013.

November 28 2013


The Mystery Is Resolved: GIFs, Crazy People And All That Malarkey


Ah, the good ol’ Smashing Book Mystery riddle. Now that two days have passed since the announcement, it’s time to reveal the mystery and announce the winners. Oh, you want to figure it out first? Well, then please close this window, since there are (obviously) spoilers in this post.

The Mystery Riddle

Alright, it wasn’t easy indeed. But with a quite smashing prize for the winners, we didn’t want to make it easy for you. However, we did want it to be hard enough to be a challenge, but not so hard that most readers would stop trying. So after a series of usability rounds, we thoroughly adjusted and edited the copy of the riddle, explaining that the mysterious animated GIF contains a hidden message.

Once you resolved the mystery, you’d see a Twitter hashtag displayed in plain text. So there shouldn’t be any room for vague interpretations. The solution should be displayed in plain text. The only hint was to watch out for the file name.

Smashing Book Mystery

The only hint was to watch out for the file name.

Now, in our usability tests, many people went out of their way to go through each and every frame, check the colors of each frame in Photoshop, rotate the images, run a diff between images, color-highlight different areas of the images and even play around with hex-colors which are (surprise, surprise!) actually hashtags as well (#aed4dc, anyone?).

Exploring the GIF in Photoshop
Okay, this (#merryxmas) could be a solution but it isn’t. Well done, @jussivirtanen!

Of course, the initial reaction was to click anywhere and everywhere on the image. The house, the trees, the lake. And, according to our heatmaps, our dear readers literally set the house on fire. And yes, it’s lots and lots of clicks! Unfortunately, since it was an animated GIF and it wasn’t linked and had no clickable areas, all those clicks didn’t help resolve the riddle at all. Sorry about that!

The clicks heatmap for the riddle. The lighter an area, the more clicks it had. Our readers set the house on fire!
Large view.

The solution is a bit easier, though. Well, if you have to watch out for the file name, the first thing you probably do is look up the file name, and the file name itself was the clue. So how can you look into the house? Since it’s an animated GIF and there are no clickable elements on it, what about… changing the URL in the browser address bar? What about enter-the-house.gif or just house.gif?

Smashing Book Mystery
enter-the-house.gif revealed a person sitting in the house, distracted by a lamp.

Yes, the key was to figure out that the URL had to be adjusted. That’s it. In fact, we created a number of animated GIFs, including common aliases such as home.gif, the-home.gif, house.gif and the-house.gif, for example, to make sure that people who “get” the idea, follow it through and get to the next level.

Well, since you entered the house already, at this point you might start to look closely at what’s going on in the image. If an image doesn’t exist on the server, you’ll be forwarded to nobody-is-home.gif.

Well, first you might want to explore lamp.gif or light-off.gif

Smashing Book Mystery
Alright, the lamp is turned off now. Now we can keep reading.

…and book.gif.

Smashing Book Mystery
Reading the book, day and night long.

Hmmm… What about sun.gif? No, no luck. Dead-end: the image doesn’t change. Alright. What else can we do? Well, what if we look under the carpet.gif?

Smashing Book Mystery
Aha! The mouse is running under the carpet!

Aha! So we just found a mouse! But… it doesn’t help us much. So what else have we got? Well, what about the window.gif?

Smashing Book Mystery
Oh… that’s… just… surreal.

Oh. That’s… weird. Maybe couch.gif?

Smashing Book Mystery
The person still sitting on the couch reading the book.

Oh! Wait a second, what do we have here? Hasn’t something changed? Oh, look, we now have a pillow on the couch! You know what it means! pillow.gif.

Smashing Book Mystery
Aha! The hashtag was hidden under the pillow!

But wait a second. We haven’t explored the picture just yet! picture.gif:

Smashing Book Mystery
Aha! There is another hashtag hidden behind the picture!

So now it’s just about time to tweet both hashtags. You could tweet the color hex code of the house door as well (it doesn’t change in the animation), but you didn’t have to. So does it mean that the mystery is resolved? Oh, wait a second, mystery-resolved.gif?

Smashing Book Mystery
The mystery is resolved! The winner will be going to Oxford in March 2014!

The Winners

Well, we knew that the riddle wouldn’t be that easy, but it was remarkable to see the first tweets with both correct hashtags appearing within minutes after the riddle was announced. In fact, it turned out that our dear readers are absolutely smashing indeed: #feelsmashing and #newsmb4perspective.

First place went to the person who was quickest to resolve the riddle. We raffled the other lucky winners randomly; however, the requirement was that at least one of the hashtags had to be found. The winners can select any prize of their choice (be it a bag, a Smashing Library account, printed books or anything else). And they all get the Smashing Cody Stress-Relief Ball, too. All winners are being contacted. The winners are:

Congratulations! And thanks to everyone who participated! We hope we didn’t waste too much of your productive work time (and didn’t cause too many headaches, of course).

Behind The Scenes

Guillaume Kurdjian
Guillaume Kurdjian is a 22-year-old freelance illustrator and animator from Nantes, France. Guillaume likes to experiment with stuff on his computer and climb trees.

Do you know that feeling when you find a designer and just want to work with them no matter what? Well, as we had stumbled upon Guillaume Kurdjian‘s animated GIFs, we knew that at some point we’d love to work on a series of animations with Guillaume. With the Smashing Book Mystery the opportunity presented itself.

So in a long thread, we designed all the different variations of the GIFs, ending up with over 30 of them, including drafts and ideas that were thrown away as well as the ones that made it to the final stage. Since we wanted the theme of the book — New Perspectives — to be recognizable, the design was quite surreal (the guy fishing a book from a lake, the lamp going crazy, scary people outside the window). Also, the visual design and the perspective of the person sitting in the house weren’t chosen randomly.

Another tricky thing was the speed of the animations and the way animations would loop. We didn’t want to make it too difficult, or too fast, or have many things happening at the same time since the riddle would get more difficult in the end. In fact, we had poor souls staring at the GIF literally for hours to figure out what’s going on and trying to click all the different areas because the animation was a bit too quick.

All designs were a series of iterations to make the overall riddle not necessarily perfect, but just right. So a big “thank you” to Guillaume for following through and being so enthusiastic and kind about all the changes made.

Now, are you ready for the second round? ;-)

© Vitaly Friedman for Smashing Magazine, 2013.

November 05 2013


Killer Responsive Layouts With CSS Regions


As Web designers, we are largely constrained by the layout features available to us. Content placed inside a container will often naturally extend the container vertically, wrapping the content. If a design requires elements to remain a certain height, then our options are limited. In these cases, we can only add a scroll bar or hide the overflow. The CSS Regions specification provides a new option.


Regions are a new part of the CSS specification, so not all browsers have implemented them, and in some cases you might have to enable a flag to use them. They have recently gained support in iOS 7 and in Safari 6.1+ (including Safari 7). Adobe maintains a list of supported browsers and instructions on enabling regions and other features. However, support for regions is constantly growing. For a robust list of which browsers have implemented regions and the various features available, see Adobe’s “CSS Regions Support” page.
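
Given this patchy support, it may help to feature-detect regions before relying on them. Here is a minimal sketch; the exact property names probed are assumptions based on the prefixed implementations of the time:

```javascript
// Sketch: feature-detect CSS Regions by probing a style object for
// the flow-into property in unprefixed and vendor-prefixed forms.
// The property names are assumptions, so verify against your targets.
function supportsRegions(style) {
  return ['flowInto', 'webkitFlowInto', 'msFlowInto'].some(function (name) {
    return name in style;
  });
}

// In a browser, you could probe a real element's style object:
if (typeof document !== 'undefined') {
  console.log('Regions supported:', supportsRegions(document.body.style));
}
```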

Regions 101

CSS regions enable us to disperse content across multiple containing elements. They provide a flow, which consists of content that may appear within multiple elements, and a region chain, which is the collection of elements the flow is spread across. Once these elements have been defined, the flow dynamically fills the elements in the region chain. We can then size our containers vertically without worrying about the content getting cut off, because it simply overflows into the next element in the chain. This creates new opportunities for layout with responsive design.

To use regions, start by creating a named flow; simply add the CSS property flow-into to your content element, with the value of your flow’s name. Then, for each region through which you want the content to flow, apply the CSS property flow-from with the same flow name value. The content element will then flow through the region elements. Current implementations in browsers require the property to be prefixed, but we are using the unprefixed version here.

#myContent {
	flow-into: myNamedFlow;
}

.myRegion {
	flow-from: myNamedFlow;
}
Your HTML would contain a content element and the scaffolding of all of the regions that this content will flow through. When you use regions, the content element will not be visible in its original location and any HTML already in your region elements will disappear, replaced by the content being flowed into them. Because of this, we can have placeholder or fallback content within our region elements.

<div class="myRegion"></div>
<div class="myRegion"></div>
<div class="myRegion"></div>

<div id="myContent">...</div>

When using regions, the content being flowed is not a child of the region elements. You are only changing where the content is displayed. According to the DOM, everything remains the same, so the content does not inherit styles from the region in which it lives. Instead, the specification defines a CSS pseudo-selector, ::region(), which allows you to style the content within a region. Apply the pseudo-element to the region’s selector and then pass a selector as an argument, specifying the elements that will be styled within a particular region.

.myRegion::region(p) {
    /* styles for all the paragraphs flowing inside our regions */
}

Responsive Design With Regions

Responsive design is the technique of creating malleable layouts that stretch and change according to the given context. Frequently, designers will make elements flexible with percentages and media queries to adapt a layout to different screen sizes. Responsive design adapts content to every screen without requiring the designer to completely overhaul the design or code.

Regions facilitate responsive design in several ways. First, you no longer have to rely on height: auto for every element to ensure content fits. Instead, you can allow the content to flow into different elements within the layout. This means that the content does not dictate the layout, but rather adapts to the intended design. You can still use height: auto on the last region in the chain to ensure that it extends to display all remaining content. You can see this technique in the CodePen example below.

See the Pen Region Auto Height by CJ Gammon (@cjgammon) on CodePen.

Regions And Events

You can use JavaScript events with regions to manage your layout and to ensure that content is displayed properly. The regions specification defines events that you can use to respond to certain conditions. The regionoversetchange event is dispatched when the regionOverset property changes for any region. This can occur when a user resizes the page, stretching out the container element so that the content no longer flows into certain regions. The value of regionOverset is either fit, overset or empty. A value of empty specifies no content inside the region. The regionOverset property is set to overset when the last region in the chain is unable to display all of the remaining content, making some of the content unreadable.

The fit value sets content to fit within the region properly, either completely (if earlier in the chain) or partially (if it is the last region in the chain). How you respond to these events will depend on the design, content and other aspects of your layout. These events could be used to dynamically add or remove regions or to apply a class that changes the layout. You can see an example of the former technique in the CodePen below.
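
As a sketch of responding to these events, the handler below maps an overset state to an action; the mapping itself is illustrative, and the browser wiring follows the NamedFlow API described in the specification (implementations of the time used prefixed variants):

```javascript
// Sketch: decide what to do when a named flow's overset state
// changes. The state-to-action mapping is an illustrative assumption.
function actionForOverset(regionOverset) {
  switch (regionOverset) {
    case 'overset': return 'add-region';    // content doesn't fit: add a region
    case 'empty':   return 'remove-region'; // region has no content: drop it
    default:        return 'no-change';     // 'fit': leave the chain alone
  }
}

// Browser-only wiring, guarded so the sketch also runs outside a browser.
if (typeof document !== 'undefined' && document.getNamedFlows) {
  var flow = document.getNamedFlows().namedItem('myNamedFlow');
  flow.addEventListener('regionoversetchange', function () {
    console.log(actionForOverset(flow.overset ? 'overset' : 'fit'));
  });
}
```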

Note: Some implementations call the event regionlayoutupdate, instead of regionoversetchange, based on an earlier version of the specification.

See the Pen okmGu by CJ Gammon (@cjgammon) on CodePen.

Regions And Media Queries

Regions are defined entirely in CSS, making them easy to use in combination with media queries. In addition to resizing and positioning elements, you can completely change which elements are defined as regions. You can also set a region to display: none, which will cause it to be skipped entirely in the region chain. This capability makes it easy to remove particular regions from a layout without worrying about the continuity of the content. You can also use this technique to display whole new templates with completely different layouts, without ever changing the content.

Regions And Break Properties

Regions also extend break properties from the multi-column layout specification, which you can use to define how content breaks within your regions. You can apply these properties to elements within the flow either to always break or to avoid breaking a region relative to the element. Using the value region for break-before or break-after will always force a region to break before or after the element, respectively. The value avoid-region can be used for break-before, break-after or break-inside to prevent regions from breaking before, after or inside the element.

This technique is useful for keeping related elements grouped together and for preventing important elements from being split. The demo below shows images along the right column and long descriptive text flowing along the left. If you reduce the width of your browser, then media queries will change the layout, causing the images to redistribute over the narrower single-column structure. Applying break-after: region to the image containers ensures that a new region break will occur after each image in the image flow.

Note: Some implementations use non-standard regions-specific break properties with a region prefix; for example, region-break-before or, with a vendor prefix, -webkit-region-break-before.

Responsive Layout
The break-after property is applied to regions with media queries.

Regions And Viewport Units

Viewport units enable you to use the window (or viewport) as the basis for sizing elements, which creates a consistent aspect ratio and harmony in the layout. You can simulate pages or blocks that break up the content cohesively. A potential pitfall of this approach is that, if you use the aspect ratio of the device to size containers, defining both the width and the height, then your content might no longer fit inside the containers.

You could, however, use regions to break up the content while respecting the variable-sized elements across different screen sizes. You can see this technique being applied in the “Demo for National Geographic Orphan Elephants.” On this website, images and text are alternated to maintain the height of the viewport. We use regions to flow the content through all of the text sections, and we adjust them when the user shrinks the screen.

Regions being used with viewport units. Notice how the image fits the window exactly. (Large view)

The typical navigation paradigm for magazines and books on a tablet is pagination — i.e. enabling the user to swipe or tap to page through the content. As a designer, you want these pages to respond to a variety of screen sizes. Regions are particularly useful for this kind of layout, because you can size columns using viewport units and create a variety of different layouts that enable content to flow across the columns. An example of this done in HTML is shown in the video below:

The Kindle Cloud Reader website has a similar two-page spread but uses JavaScript to manage the layout. Implementing this kind of layout in JavaScript requires significant development overhead, and manipulating the DOM so heavily will usually incur a performance penalty. You can use regions to bring these capabilities natively to the browser, increasing the website’s performance while reducing development time.


When working with regions, it’s helpful to have tools to easily manage and debug various features. In Chrome Developer Tools, you can enable debugging features specific to regions. Detailed instructions on enabling these tools can be found in Christian Cantrell’s post “Web Inspector Support for CSS Regions.” With these features, you can find all of the named flows in a document, find the content and region chain associated with each named flow, and get visual cues for whether content fits in a region based on the regionOverset property.

Webkit Nightly also has some helpful visual cues. When you open the Web Inspector and inspect a region’s container, you will see a region number and links between the region containers showing the flow of the content.

Webkit Nightly allows you to inspect region containers, showing their number and the flow chain.

Further Reading

Regions open up many new opportunities for designing responsively and ensuring that content looks great at any size. One responsive website whose unique layout was created with regions is Adobe’s demo for a bike company, created with Edge Reflow. Follow @adobeweb for the latest updates on regions and other new Web features. Also, be sure to check out Adobe’s CodePen collection, which shows regions in use; you may want to fork one or more of the examples to explore different ways to use regions.

For more on regions, visit Adobe’s Web Platform Team Blog, which often provides updates about the specification and implementations. Full details can be found in the CSS Regions specification, which outlines all of the topics covered here and more. You can also find more information and examples in the “Regions” section of Adobe & HTML.

Front page image credits: Adobe & HTML

(al, il)

© CJ Gammon for Smashing Magazine, 2013.

October 29 2013


Get Up And Running With Grunt


In this article, we’ll explore how to use Grunt in a project to speed up and change the way you develop websites. We’ll look briefly at what Grunt can do, before jumping into how to set up and use its various plugins to do all of the heavy lifting in a project.

We’ll then look at how to build a simple input validator, using Sass as a preprocessor, how to use grunt-cssc and CssMin to combine and minify our CSS, how to use HTMLHint to make sure our HTML is written correctly, and how to build our compressed assets on the fly. Lastly, we’ll look at using UglifyJS to reduce the size of our JavaScript and ensure that our website uses as little bandwidth as possible.

Grunt.js is a JavaScript task runner that helps you perform repetitive tasks such as minification, compilation, unit testing or linting.

Getting Started With Grunt

Most developers would agree that the speed and pace of JavaScript development over the last few years has been pretty astounding. Whether with frameworks such as Backbone.js and Ember.js or with communities such as JS Bin, the development of this language is changing not only the way we experience websites as users but also the way we build them.

When you are working with JavaScript, you will likely need to execute multiple tasks regularly. While this is pretty much a given in most projects, it’s a time-consuming and repetitive way to work. Being in such an active community, you would assume that tools are available to automate and speed up this process. This is where Grunt comes in.

What Is Grunt?

Built on top of Node.js, Grunt is a task-based command-line tool that speeds up workflows by reducing the effort required to prepare assets for production. It does this by wrapping up jobs into tasks that are compiled automatically as you go along. Basically, you can use Grunt on most tasks that you consider to be grunt work and would normally have to manually configure and run yourself.

While earlier versions came bundled with plugins like JSHint and UglifyJS, the most recent release (version 0.4) relies on plugins for everything.

What kind of tasks? Well, the list is exhaustive. Suffice it to say, Grunt can handle most things you throw at it, from minifying to concatenating JavaScript. It can also be used for a range of tasks unrelated to JavaScript, such as compiling CSS from LESS and Sass. We’ve even used it with blink(1) to notify us when a build fails.

Why Use Grunt?

One of the best things about Grunt is the consistency it brings to teams. If you work collaboratively, you’ll know how frustrating inconsistency in the code can be. Grunt enables teams to work with a unified set of commands, thus ensuring that everyone on the team is writing code to the same standard. After all, nothing is more frustrating than a build that fails because of little inconsistencies in how a team of developers writes code.

Grunt also has an incredibly active community of developers, with new plugins being released regularly. The barrier to entry is relatively low because a vast range of tools and automated tasks are already available to use.

Setting Up

The first thing to do in order to use Grunt is to set up Node.js. (If you know nothing about Node.js, don’t worry — it merely needs to be installed in order for Grunt to be able to run.)

Once Node.js is installed, run this command:

$ npm install -g grunt-cli

To make sure Grunt has been properly installed, you can run the following command:

$ grunt --version

The next step is to create a package.json and a gruntfile.js file in the root directory of your project.

Creating the package.json File

The JSON file enables us to track and install all of our development dependencies. Then, anyone who works on the project will have the most current dependencies, which ultimately helps to keep the development environments in sync.

Create a file in the root of your project that contains the following:

    "name" : "SampleGrunt",
    "version" : "0.1.0",
    "author" : "Brandon Random",
    "private" : true,

    "devDependencies" : {
        "grunt" :                   "~0.4.0"

Once you have done this, run the following command:

$ npm install

This tells npm which dependencies to install and places them in a node_modules folder.

Creating the gruntfile.js File

Gruntfile.js is essentially made up of a wrapper function that takes grunt as an argument.

module.exports = function(grunt){

    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json')
    });

    grunt.registerTask('default', []);

};


You are now set up to run Grunt from the command line at the root of your project. But if you do so at this stage, you will get the following warning:

$ grunt
> Task "default" not found. Use --force to continue. 

We’d get this because we haven’t specified any tasks or dependencies yet other than Grunt. So, let’s do that. But first, let’s look at how to extend the package.json file.

Extending the package.json File

The best thing about working with Node.js is that npm can find packages and install them in one go, simply based on the contents of the package file. To install all of the new dependencies, just add this to the file:

    "name" : "SampleGrunt",
    "version" : "0.1.0",
    "author" : "Mike Cunsolo",
    "private" : true,

    "devDependencies" : {
        "grunt" :                       "~0.4.0",
        "grunt-contrib-cssmin":         "*",
        "grunt-contrib-sass":           "*",
        "grunt-contrib-uglify":         "*",
        "grunt-contrib-watch":          "*",
        "grunt-cssc":                   "*",
        "grunt-htmlhint":               "*",
        "matchdep":                     "*"

And to complete the process? You guessed it:

$ npm install

Loading npm Tasks In Grunt

Now that the packages have been installed, they have to be loaded in Grunt before we can do anything with them. We can load all of the tasks automatically with a single line of code, using the matchdep dependency. This is a boon for development because now the dependency list will be included only in the package file.

At the top of gruntfile.js, above grunt.initConfig, paste this:

require("matchdep").filterDev("grunt-*").forEach(grunt.loadNpmTasks);
Without matchdep, we would have to write grunt.loadNpmTasks("grunt-task-name"); for each dependency, which would quickly add up as we find and install other plugins.
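A rough simulation of what that dependency matching does, in plain JavaScript (a simplified sketch for illustration; the real matchdep plugin reads package.json itself):

```javascript
// Simplified simulation of matchdep: filter devDependencies for names
// matching "grunt-*", then hand each one to a loader — in a real
// gruntfile that loader is grunt.loadNpmTasks.
var devDependencies = {
    "grunt": "~0.4.0",
    "grunt-contrib-uglify": "*",
    "grunt-contrib-watch": "*",
    "matchdep": "*"
};

function filterDev(pattern) {
    var prefix = pattern.replace("*", ""); // "grunt-*" -> "grunt-"
    return Object.keys(devDependencies).filter(function (name) {
        return name.indexOf(prefix) === 0;
    });
}

var loaded = [];
filterDev("grunt-*").forEach(function (name) {
    loaded.push(name); // grunt.loadNpmTasks(name) in a real gruntfile
});

console.log(loaded); // ["grunt-contrib-uglify", "grunt-contrib-watch"]
```

Note that "grunt" and "matchdep" themselves are skipped, because only plugin packages start with the "grunt-" prefix.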

Because the plugins are loaded into Grunt, we may start specifying options. First off is the HTML file (index.html), which contains the following:

<!DOCTYPE html>
<html lang="en">

    <head>

        <meta charset="utf-8">
        <meta name="viewport"   content="width=device-width; initial-scale=1.0; maximum-scale=1.0;">

        <title>Enter your first name</title>

        <link rel="stylesheet"  href="build/css/master.css">

    </head>

    <body>

        <label for="firstname">Enter your first name</label>
        <input id="firstname" name="firstname" type="text">
        <p id="namevalidation" class="validation"></p>

        <script type="text/javascript" src="build/js/base.min.js"></script>

    </body>

</html>



Validating With HTMLHint

Add this configuration to grunt.initConfig:

htmlhint: {
    build: {
        options: {
            'tag-pair': true,
            'tagname-lowercase': true,
            'attr-lowercase': true,
            'attr-value-double-quotes': true,
            'doctype-first': true,
            'spec-char-escape': true,
            'id-unique': true,
            'head-script-disabled': true,
            'style-disabled': true
        },
        src: ['index.html']
    }
}

A plugin is typically configured like this: the plugin’s name (without the grunt-contrib-/grunt- prefix), then one or more targets of your choosing (which can be used to create custom options for the plugin for different files), an options object, and the files it affects. Now, when we run grunt htmlhint from the terminal, it will check through the source file and make sure that our HTML has no errors! However, manually typing this command several times an hour would get tedious pretty quickly.

Automate Tasks That Run Every Time A File Is Saved

The watch task can run a unique set of tasks according to the file being saved, using targets. Add this configuration to grunt.initConfig:

watch: {
    html: {
        files: ['index.html'],
        tasks: ['htmlhint']
    }
}

Then, run grunt watch in the terminal. Now, try adding a comment to index.html. You’ll notice that when the file is saved, validation is automatic! This is a boon for development because it means that watch will silently validate as you write code, and it will fail if the code hasn’t passed the relevant tests (and it will tell you what the problem is).

Note that grunt watch will keep running until the terminal is closed or until it is stopped (Control + C on a Mac).

Keeping The JavaScript As Lean As Possible

Let’s set up a JavaScript file to validate a user’s name. To keep this as simple as possible, we’ll check only for non-alphabetical characters. We’ll also use the strict mode of JavaScript, which prevents us from writing valid but poor-quality JavaScript. Paste the following into assets/js/base.js:

function Validator()
{
    "use strict";
}

Validator.prototype.checkName = function(name)
{
    "use strict";
    return (/[^a-z]/i.test(name) === false);
};

window.addEventListener('load', function(){
    "use strict";
    document.getElementById('firstname').addEventListener('blur', function(){
        var _this = this;
        var validator = new Validator();
        var validation = document.getElementById('namevalidation');
        if (validator.checkName(_this.value) === true) {
            validation.innerHTML = 'Looks good! :)';
            validation.className = "validation yep";
            _this.className = "yep";
        } else {
            validation.innerHTML = 'Looks bad! :(';
            validation.className = "validation nope";
            _this.className = "nope";
        }
    }, false);
}, false);

Let’s use UglifyJS to minify this source file. Add this to grunt.initConfig:

uglify: {
    build: {
        files: {
            'build/js/base.min.js': ['assets/js/base.js']
        }
    }
}

UglifyJS compresses all of the variable and function names in our source file to take up as little space as possible, and then trims out white space and comments — extremely useful for production JavaScript. Again, we have to set up a watch task to build our Uglify’ed JavaScript. Add this to the watch configuration:

watch: {
    js: {
        files: ['assets/js/base.js'],
        tasks: ['uglify']
    }
}

Building CSS From Sass Source Files

Sass is incredibly useful for working with CSS, especially on a team. Less code is usually written in the source file because Sass can generate large CSS code blocks with such things as functions and variables. Walking through Sass itself is a little beyond the scope of this article; so, if you are not comfortable with learning a preprocessor at this stage, you can skip this section. But we will cover a very simple use case, using variables, one mixin and the Sassy CSS (SCSS) syntax, which is very similar to CSS!

Grunt’s Sass plugin requires the Sass gem. You will need to install Ruby on your system (it comes preloaded in OS X). You can check whether Ruby is installed with this terminal command:

ruby -v

Install Sass by running the following:

gem install sass

Depending on your configuration, you might need to run this command via sudo — i.e. sudo gem install sass — at which point you will be asked for your password. When Sass is installed, create a new directory named assets and, inside that, another named sass. Create a new file named master.scss in this directory, and paste the following in it:

@mixin prefix($property, $value, $prefixes: webkit moz ms o spec) {
    @each $p in $prefixes {
        @if $p == spec {
            #{$property}: $value;
        }
        @else {
            -#{$p}-#{$property}: $value;
        }
    }
}

$input_field:            #999;
$input_focus:           #559ab9;
$validation_passed:     #8aba56;
$validation_failed:     #ba5656;
$bg_colour:             #f4f4f4;
$box_colour:            #fff;
$border_style:          1px solid;
$border_radius:         4px;

html {
    background:         $bg_colour;
}

body {
    width:              720px;
    padding:            40px;
    margin:             80px auto;
    background:         $box_colour;
    box-shadow:         0 1px 3px rgba(0, 0, 0, .1);
    border-radius:      $border_radius;
    font-family:        sans-serif;
}

input[type="text"] {
    @include            prefix(appearance, none, webkit moz);
    @include            prefix(transition, border .3s ease);
    border-radius:      $border_radius;
    border:             $border_style $input_field;
    width:              220px;
}

input[type="text"]:focus {
    border-color:       $input_focus;
    outline:            0;
}

.validation {
    line-height:        1;
    font-size:          1em;
    padding:            10px;
    display:            inline;
    margin-right:       20px;
}

input.yep {
    border-color:       $validation_passed;
}

input.nope {
    border-color:       $validation_failed;
}

p.yep {
    color:              $validation_passed;
}

p.nope {
    color:              $validation_failed;
}

You will notice that the SCSS extension looks a lot more like CSS than conventional Sass. This style sheet makes use of two Sass features: mixins and variables. A mixin constructs a block of CSS based on some parameters passed to it, much like a function would, and variables allow common fragments of CSS to be defined once and then reused.

Variables are especially useful for hex colours; we can build a palette that can be changed in one place, which makes tweaking aspects of a design very fast. The mixin is used to prefix rules such as for appearance and transitions, and it reduces bulk in the file itself.
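If it helps to see the expansion spelled out, here is a JavaScript simulation of the prefix mixin’s logic (for illustration only; Sass performs this at compile time):

```javascript
// Simulation of the prefix mixin's expansion: each vendor prefix yields a
// prefixed declaration, and the special "spec" entry yields the
// unprefixed one.
function prefix(property, value, prefixes) {
    return prefixes.map(function (p) {
        return p === "spec"
            ? property + ": " + value + ";"
            : "-" + p + "-" + property + ": " + value + ";";
    });
}

console.log(prefix("transition", "border .3s ease", ["webkit", "moz", "spec"]));
// [ "-webkit-transition: border .3s ease;",
//   "-moz-transition: border .3s ease;",
//   "transition: border .3s ease;" ]
```

One `@include` line in the source thus stands in for several repetitive declarations in the generated CSS.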

When working with a large style sheet, anything that can be done to reduce the number of lines will make the file easier to read when a team member other than you wants to update a style.

In addition to Sass, grunt-cssc combines CSS rules together, ensuring that the generated CSS has minimal repetition. This can be very useful in medium- to large-scale projects in which a lot of styles are repeated. However, the outputted file is not always the smallest possible. This is where the cssmin task comes in. It not only trims out white space, but transforms colors to their shortest possible values (so, white would become #fff). Add these tasks to gruntfile.js:

cssc: {
    build: {
        options: {
            consolidateViaDeclarations: true,
            consolidateViaSelectors:    true,
            consolidateMediaQueries:    true
        },
        files: {
            'build/css/master.css': 'build/css/master.css'
        }
    }
},

cssmin: {
    build: {
        src: 'build/css/master.css',
        dest: 'build/css/master.css'
    }
},

sass: {
    build: {
        files: {
            'build/css/master.css': 'assets/sass/master.scss'
        }
    }
}
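One of the cssmin optimisations mentioned above — collapsing a colour like white to #fff — can be sketched like this (a simplified illustration, not the plugin’s actual code):

```javascript
// Sketch of one cssmin-style optimisation: collapse a six-digit hex
// colour whose digit pairs repeat (#ffffff) into the three-digit
// shorthand (#fff); anything else is left untouched.
function shortenHex(hex) {
    var m = /^#([0-9a-f])\1([0-9a-f])\2([0-9a-f])\3$/i.exec(hex);
    return m ? "#" + m[1] + m[2] + m[3] : hex;
}

console.log(shortenHex("#ffffff")); // "#fff"
console.log(shortenHex("#559ab9")); // "#559ab9" — unchanged
```

Applied across a whole style sheet, many small savings like this add up.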

Now that we have something in place to handle style sheets, these tasks should also be run automatically. The build directory is created automatically by Grunt to house all of the production scripts, CSS and (if this were a full website) compressed images. This means that the contents of the assets directory may be heavily commented and may contain more documentation files for development purposes; then, the build directory would strip all of that out, leaving the assets as optimized as possible.

We’re going to define a new set of tasks for working with CSS. Add this line to gruntfile.js, below the default task:

grunt.registerTask('buildcss',  ['sass', 'cssc', 'cssmin']);

Now, when grunt buildcss is run, all of the CSS-related tasks will be executed one after another. This is much tidier than running grunt sass, then grunt cssc, then grunt cssmin. All we have to do now is update the watch configuration so that this gets run automatically.

watch: {
    css: {
        files: ['assets/sass/**/*.scss'],
        tasks: ['buildcss']
    }
}

This path might look a little strange to you. Basically, it recursively checks any directory in our assets/sass directory for .scss files, which allows us to create as many Sass source files as we want, without having to add the paths to gruntfile.js. After adding this, gruntfile.js should look like this:

module.exports = function(grunt){

    "use strict";

    require("matchdep").filterDev("grunt-*").forEach(grunt.loadNpmTasks);

    grunt.initConfig({

        pkg: grunt.file.readJSON('package.json'),

        cssc: {
            build: {
                options: {
                    consolidateViaDeclarations: true,
                    consolidateViaSelectors:    true,
                    consolidateMediaQueries:    true
                },
                files: {
                    'build/css/master.css': 'build/css/master.css'
                }
            }
        },

        cssmin: {
            build: {
                src: 'build/css/master.css',
                dest: 'build/css/master.css'
            }
        },

        sass: {
            build: {
                files: {
                    'build/css/master.css': 'assets/sass/master.scss'
                }
            }
        },

        watch: {
            html: {
                files: ['index.html'],
                tasks: ['htmlhint']
            },
            js: {
                files: ['assets/js/base.js'],
                tasks: ['uglify']
            },
            css: {
                files: ['assets/sass/**/*.scss'],
                tasks: ['buildcss']
            }
        },

        htmlhint: {
            build: {
                options: {
                    'tag-pair': true,                 // Force tags to have a closing pair
                    'tagname-lowercase': true,        // Force tags to be lowercase
                    'attr-lowercase': true,           // Force attribute names to be lowercase e.g. <div ID="header"> is invalid
                    'attr-value-double-quotes': true, // Force attributes to have double quotes rather than single
                    'doctype-first': true,            // Force the DOCTYPE declaration to come first in the document
                    'spec-char-escape': true,         // Force special characters to be escaped
                    'id-unique': true,                // Prevent using the same ID multiple times in a document
                    'head-script-disabled': true,     // Prevent script tags being loaded in the <head> for performance reasons
                    'style-disabled': true            // Prevent style tags. CSS should be loaded through <link> tags
                },
                src: ['index.html']
            }
        },

        uglify: {
            build: {
                files: {
                    'build/js/base.min.js': ['assets/js/base.js']
                }
            }
        }

    });

    grunt.registerTask('default',   []);
    grunt.registerTask('buildcss',  ['sass', 'cssc', 'cssmin']);

};
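The recursive pattern in the watch task’s css target can be approximated like this (a simplified re-implementation for illustration; Grunt’s actual globbing is richer):

```javascript
// Sketch of how the "assets/sass/**/*.scss" glob behaves: any depth of
// subdirectories under assets/sass, any file ending in .scss.
function matchesSassGlob(path) {
    return /^assets\/sass\/(.+\/)*[^\/]+\.scss$/.test(path);
}

console.log(matchesSassGlob("assets/sass/master.scss"));        // true
console.log(matchesSassGlob("assets/sass/modules/_grid.scss")); // true
console.log(matchesSassGlob("assets/js/base.js"));              // false
```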


We should now have a static HTML page, along with an assets directory with the Sass and JavaScript source, and a build directory with the optimized CSS and JavaScript inside, along with the package.json and gruntfile.js files.

By now, you should have a pretty solid foundation for exploring Grunt further. As mentioned, an incredibly active community of developers is building front-end plugins. My advice is to head on over to the plugin library and explore the more than 300 plugins.


© Mike Cunsolo for Smashing Magazine, 2013.

October 02 2013


Powerful Workflow Tips, Tools And Tricks For Web Designers


Designing and developing can be time-consuming, especially when the project involves a new challenge, putting the team or freelancer into unknown territory. Moreover, time is a key factor in productivity. Working efficiently enables us to deliver better value at a competitive price.

However, some steps can be repeated for every project. These are steps we know and should make as quick as possible in order to have more freedom to experiment with new solutions.

This article presents a collection of tools, tips and tricks that will make your standard workflow as fast and practical as possible, so that you have more time for the exciting parts of the project.

Ready? Here we go!

Tips And Tricks

Design Workflow

My Secret for Color Schemes
Erica Schoonmaker shares her trick for matching up colors and creating a nice color scheme. Read more…


Useful Aligning and Spacing
Kris Jolls creates squares for the various spaces he has between elements. This cuts down time and makes sure everything is aligned and spaced properly. Read more…

Creating squares for various spacings between elements

The Ultimate Photoshop Web Design Workspace
Jacob Cass shares his set-up for the ultimate Web design workspace in Photoshop. Read more…

The Ultimate Photoshop Web Design Workspace

Coding Workflow

Perfect Workflow in Sublime Text 2
This is a must for all Sublime users. Ilya Grigorik has put together a two-hour tour de force to make you a Sublime ninja! Read more…

Perfect workflow in Sublime Text 2

Development Workflow for 2013
Learn what a modern development workflow looks like, from editors and plugins to authoring abstractions, testing and DVCS integration. Read more…

Development workflow for 2013

“Vertical Editing” (with TextMate)
Learn how to vertically edit in general and with TextMate in particular. It pairs best practices with vendor-specific redundant properties. Read more…

Vertical editing (with textmate)

Prevent background-color Bleed on Touch Screens
Add outline: 1px solid #fff to your code to stop background-color bleeding on touchscreens. Read more…

Prevent background-color bleed on touch screens

Quick Tip: Rounded Corners Done Right
Improperly nested corners are a detail that can ruin a brilliant design. Learn how to do it the right way. Read more…

Quick Tip: Rounded Corners Done Right | Webdesigntuts

The //* /*/ //*/ Comment Toggle Trick
This is a little trick to make development faster. Not suitable for production code. Read more…

The //* /*/ //*/ comment toggle trick

Outdenting Properties for Debugging CSS
Martin Sutherland usually ends up adding a ton of properties to figure out how things fit together. Here is a little trick to remove the properties before a project goes live. Read more…

Outdenting properties for debug CSS

Favicons Next to External Links
A little trick to display an external favicon next to the corresponding link, using simple lightweight jQuery. Read more…

Favicons Next To External Links

DevTools Tips and Tricks
These slides include tips and tricks for performance. You will be surprised what Chrome DevTools can do. (Use the arrow keys to navigate the slides.) Read more…

DevTools Tips and Tricks

Sublime Text Workflow That Beats Coda and Espresso
Andrey Tarantsov talks about jumping into Sublime Text 2 and setting up a workflow that beats traditional tools such as Coda and Espresso. Watch the video or read more…

Sublime Text Workflow That Beats Coda and Espresso

Speed Up CSS Prototyping
This is a simple trick to overlay a grid or a mock-up over a page that you’re styling. It also allows you to edit content directly in the browser to see how the layout responds to various lines of text. Read more…

Speed Up CSS Prototyping

Git: Twelve Curated Tips and Workflows From the Trenches
12 simple tips for using Git, including: make “git diff” wrap long lines, set a global proxy, and clone a specific branch. Read more…

Git: Twelve Curated Tips And Workflows From the Trenches

The JavaScript “Ah ha!” Moment
This article collects comments of people having their “Ah ha!” moment with JavaScript — that is, the moment they learned something that made JavaScript click for them. Read more…

The JavaScript Ah-ha! Moment

Here are more articles and thoughts to help you improve your coding workflow:

OS Productivity

Alfred workflow tips and tricks
David Ferguson shares tips and tricks for working with Alfred. Read more…

Alfred Workflow Tips & Tricks

SSH: More Than Secure Shell
This article covers less common SSH use cases, such as using password-less, key-based login, setting up local per-host configurations, exporting a local service through a firewall, accessing a remote service through a firewall and more. Read more…

SSH: More Than Secure Shell

  • Share Your Hidden OS X Features or Tips and Tricks
    This is a thread with a whole range of OS X tips and tricks. So far, there are 126 comments, and you can add your own. Read more…
  • Alfred Workflows
    You’ll find various workflows provided by Isometry, including UNIX man page search and filtering text through arbitrary shell one-liners. Read more…
  • Tricks
    This is a collection of tricks for various areas, collected by Carles Fenollosa. Areas include bash, pseudo-aliases for commonly used long commands, VIM, tools and networking. Read more…




Timezoneslider
Time-syncing around the globe can be tricky. This tool lets you add the names and locations of people involved to find the best meeting time. Send the synced time to others and don’t risk hard feelings about time-conversion mistakes. Read more…

Timezoneslider: time syncing app

World Time Buddy
World Time Buddy is a cross between a time-zone converter, a world clock converter and an online meeting scheduler. It is an online productivity tool for those who often find themselves traveling, in flight, in online meetings or just calling friends and family abroad. Read more…

WorldTimeBuddy: A sync tool for scheduling meetings

Doodle can’t be recommended enough. It is an easy and uncomplicated tool for finding and scheduling a date that suits everyone — with only one email. Read more…

Doodle: easy scheduling

How to Solve the ‘Sharing Huge Design Files Amongst Teams’ Problem
This article is about how BitTorrent can be used to sync large files between team members. Key problems to overcome were how to share files between designers and between designers and developers and how to resolve points of failure. Read more…

How to solve the ‘sharing huge design files amongst teams’ problem

Screenhero is another tool for collaborative screen-sharing. The great thing is that each user gets their own mouse, and both users are always in control. Read more…



SoFresh: Automatically Refreshing Your Browser
SoFresh is a CSS refresh bookmarklet. It allows you to select which files to refresh. The files are refreshed every time you save them, so that you don’t need to refresh your browser. Read more…

SoFresh!: Automatically refreshing your browser

Divvy is a new way to manage your workspace. It enables you to quickly and efficiently “divvy up” your screen into precise portions. Read more…


Shortcat: Keyboard Productivity App for Mac OS X
It takes an average of three seconds to move your hand from the keyboard to mouse, click once, and then return to the keyboard. Shortcat is a keyboard tool for Mac OS X that allows you to keep your hands on the keyboard, saving time and energy. Read more…

Shortcat: Keyboard productivity app for Mac OS X

The Thinkerbot
Logic is the enemy of creativity. By grabbing a steady stream of pure Internet randomness, this app injects non-linear inspiration into any brainstorming session. Read more…

The Thinkerbot: a brainstorming app

SizeUp allows you to quickly resize and position windows with keyboard shortcuts or a handy menu-bar icon. Read more…


DragonDrop lets you set down what you’re dragging, leaving you free to find the destination without having to worry about holding down the mouse button. Read more…


  • Slate
    Slate is a window-management application similar to Divvy and SizeUp (covered above), but free and less limited. It attempts to overcome their limitations by simply being extremely configurable. Read more…

Making Use of the Cloud

SortMyBox works like email filters, but for your Dropbox files. It magically moves files to folders based on your rules. Read more…

Organize your Dropbox with SortMyBox

Servus: For Mac and Dropbox
Give your files some meaning and a nice layout when you share them with others. Servus for Mac easily turns any file on your computer into a branded download page, hosted on Dropbox. Read more…

Create a branded download page with Dropbox

Send to Dropbox
Ever wish you could email files to your Dropbox? With this tool, you can. All you have to do is log into Dropbox, get your unique email address, and start sending files. Read more…

Send to Dropbox

Versioning Your Graphics Files With Dropbox
This quick tutorial explains how you can version graphic files via Dropbox. Read more…

Versioning Your Graphics Files With Dropbox

Create your own music-streaming service with this little app. DropTunes lets you stream music from Dropbox. Add tracks to your playlist, and browse while a song is playing. Read more…

DropTunes: Stream music from the Dropbox

This tool lets you send files from a URL directly to the cloud (currently, Dropbox and email) without the need to download them. This is perfect for saving bandwidth when you discover great stuff on your phone while on the go. Read more…

sideCLOUDload: Send files from an url to the cloud

  • Post Via Dropbox
    This WordPress plugin allows you to post and edit on your blog with text files uploaded via Dropbox. Read more…
  • Site44
    Site44 turns Dropbox folders into websites. You can edit the HTML locally; this way, your website will always be up to date. Read more…

Design, Color and Image Tools

Hex Color Tool
Hex is a color tool that automatically displays any hex color in darker and lighter shades. Read more…

Hex Color Tool

GuideGuide makes dealing with grids in Photoshop easy. Pixel-accurate columns, rows, midpoints and baselines can be created based on a document or selection with the click of a button. Guide sets can be saved for repeated use. Read more…


Ever fire up Photoshop just to multiply a couple of colors? ColorBlendy can do this easily in your browser. Read more…

ColorBlendy - Blend colors with different modes like multiply, overlay, dodge.

CMYK to Pantone
Input a CMYK color code, and this tool will work out which Pantone colors are close. Read more…

CMYK to Pantone

ImageMagick is a command-line program that can do many operations on images quickly and with high quality. It’s especially useful for resizing and sharpening images, generating thumbnails, etc. Read more…


Development and Testing

Reconciling SVG and Icon Fonts
This is the first in a series of three articles on SVG. Romain over at Hull explains how to set up a powerful design workflow, going from Sketch all the way to icon fonts, all automated. Part two shows how to dissect the fonts and go crazy with their components. Part three shows how to do the same with sprites and Photoshop. Read more…

Reconciling SVG and Icon Fonts

Emmet Documentation
Emmet (previously known as Zen Coding) is a Web developer’s toolkit that could greatly improve your HTML and CSS workflow. Read more…

Emmet Documentation

Alias enables you to manage all of your aliases online and to browse a list of cool aliases submitted by others. From there, you can run a single command to copy your aliases back into your profile should you require them. Read more…

Alias: Manage your aliases in the cloud

Anvil is a menu-bar application for managing local websites. It takes your website and serves it up locally with a .dev URL, without requiring you to change system-level configuration files or hack around in the Terminal. Read more…

Anvil for Mac - Run your sites locally

DOM Monster
DOM Monster is a cross-platform, cross-browser bookmarklet that will analyze the DOM and other features of the page you’re on, and give you its bill of health. Read more…

DOM Monster Bookmarklet

prettyPrint.js is an in-browser JavaScript variable dumper, similar in functionality to ColdFusion’s cfdump tag. Read more…

padolsey/prettyPrint.js · GitHub

Resemble.js analyzes and compares images with HTML5 canvas and JavaScript. It can be used for any image analysis and comparison need you might have in the browser. Read more…

Resemble.js : Image analysis

Updating large and possibly responsively designed websites can be a hassle. You never know whether a change will break anything. This tool gets screenshots of all of your running websites in different resolutions, so that you can spot any issues. Read more…

Review: Screenshots in different resolutions

BrowserStack gives you instant access to all desktop and mobile browsers, which is great for testing your designs, especially if you cannot afford to buy many devices. Read more…

BrowserStack: Live, Web-Based Browser Testing

Zippopotamus makes working with postal codes and ZIP codes easy. It delivers a free API in JSON response format, supports over 60 countries, is perfect for autocompletion and open for crowdsourcing and contribution. Read more…

Zippopotamus- Zip Code Galore

Here are more development and testing tools you can check out:

Little Helpers

Style Manual
Is English not really your thing, or not your first language? This reference document by Andy Taylor will help you find the right answer to style-related issues. Read more…


13 Bills
This is a great tool for complicated bill-splitting. It’s especially handy when you have to split a bill according to the amount of time people have been around. Read more…

The easy peasy bill splitter

The Universal Packing List
Feed in the details of your next trip (timeframe, climate, gender, accommodation, kids, type of trip, activities, transportation and bag size), and this dynamic tool will work out what you should pack. Read more…

The Universal Packing List

Long Press
This tool simulates the alternate character choice that you have on Android and iOS keyboards. Read more…

Long Press

Sejda is a great online tool for manipulating PDF files when your preferred software is not at hand. It has many advanced features, including merging, splitting and combining. Read more…

Sejda: Edit PDF files online (for free)

If you need a quick and simple invoice, this tool turns your raw data into a presentable invoice. The tool offers five free invoices per month, which is great if you only need to use it every now and then. Read more…

CreateMyInvoice - invoice from your inbox

This tool becomes active when your service experiences downtime of any kind. Activity incidents are prominently displayed at the top of your page for visitors to see right when they arrive. Read more…

Hosted Status Pages for Your Company

Something for music lovers: feed it your iTunes library XML file and get a weekly update of new releases from your favorite artists. Read more…

Beathound: iTunes new releases for your library

This tool converts your favorite music or ringtones to MP3, M4A, M4R or CAF format, and converts video to MP4, M4V or MOV format for enjoyment on your iPhone, iPod or iPad. It works both ways. It also includes other handy features that are worth checking out. Read more…

Syncios: Free iPod Transfer, Free iPod to PC Transfer, Transfer App Music Video Photo Ebook from iPod/iPhone/iPad to PC

This is for minimalists who don’t want to deal with a cluttered admin interface that makes publishing complicated. Skrivr lets you write, save and publish your writing. Read more…

Skrivr: writing and publishing process made simpler

List of All Countries in All Languages and All Data Formats
This is a great resource that lists all countries in all languages and in all data formats. Read more…

List of All Countries in All Languages and All Data Formats

  • TokenPhrase
    TokenPhrase is a simple gem that generates unique phrases to use in your app as tokens. Read more…
  • Linkrr
    Linkrr is a small tool that transforms multiple unclickable links into clickable ones. Once you’ve gathered all of your links, Linkrr can open them with only one click. In some cases, you’ll have to disable your popup blocker. Read more…
  • LinkChecker
    This highly rated and popular Firefox add-on tests the validity of links on any Web page. Read more…

Useful Chrome Extensions

  • Tab Wrangler
    Tab Wrangler automatically closes inactive tabs but makes it easy to get them back, too. It works similarly to AutoClose Tabs for Firefox. Read more…
  • Responsive Inspector
    Responsive Inspector is a simple Chrome extension that allows you to view the media queries of websites you visit. It is very useful when developing responsive layouts because it visually shows what resolutions are defined in style sheets. Read more…
  • Shortcut Manager
    With this extension, change the browser’s default shortcut keys, and assign any bookmarklets or JavaScript actions to your hotkeys. It works like Keyconfig on Firefox. Read more…
  • Auto Login
    Your browser already fills in your user name and password, so why not have it click “Submit,” too? This tool automatically logs you into websites that Chrome has saved a password for. Read more…
  • Tincr
    Tincr lets you edit and save files from Chrome Developer tools. It supports live reloading and saves changes to the right file automatically. Works for Mac, Windows and Linux. Read more…
  • OneTab
    OneTab is perfect for anyone who tends to open too many tabs in Chrome. It saves up to 95% of memory and minimizes clutter by reducing all tabs into one. Read more…

Last Clicks…

Browser Pong
Here is an attempt to expand how you think of the browser. Browser Pong lives between multiple open windows. During play, the space between windows is transformed into a playing field — the abstracted tennis court of Pong. Browser Pong really is thinking outside the box. Read more…

Browser Pong

Talks to Help You Become a Better Front-End Engineer in 2013
Addy Osmani has curated talks that he has found helpful this year. The advice shared in them will equip you with the knowledge to become a better front-end engineer. Read more…

Talks To Help You Become A Better Front-End Engineer In 2013 | Smashing Magazine

The Setup
This collection of interviews asks people from all walks of life what they use to get the job done. Read more…

The Setup

Learn Something Every Day
UK-based design studio Young has published a book of 265 facts to help you learn something new every day. Additionally, you can purchase some great fact shirts. There is also a free iPhone app. Read more…

Learn Something Every Day

  • eBooks Compiled From Stack Overflow
    These books contain the top questions from a selection of the top tags on Stack Overflow. The top questions include those with a score of 10 or greater. Read more…
  • Jourrrnal
    Jourrrnal is a blog that publishes interviews about the workflows of some of the most active and talented Dribbble members. Read more…
  • How I Work
    This page collects little tips on how other people work. Rather than reading blog posts on why one way is better than another, read why one person loves a certain way of working, and judge for yourself whether it’s worth adopting. Read more…
  • My Radical Productivity Experiment
    Michael Schechter has experimented with different approaches to find what works for his own productivity. If you haven’t found a decent workflow for yourself, figure one out. Read more…

Further Reading

There you have it! A collection of great tools, tips and tricks that members of our community have found very useful. Hopefully, some of them will speed up your workflow or solve one of your confounding issues. Maybe they will even inspire you to share some of your hidden secrets of productivity.

If your favorite tool, tip or trick is not in this list, make sure you share it with us in the comments section below. Have any of the above changed your life? If so, let us know more!

(al, il, ea)

© Melanie Lang for Smashing Magazine, 2013.

July 29 2013


All That Glitters Is Not Gold: A Common Misconception About Designing With Data


Too often when working in information design, you’ll hear the request, “Can you make the data look cool?” “Cool” is a rather ambiguous term, and means different things to different people. Unfortunately, when working with data, the term “cool” is often directly juxtaposed with common charts like line, bar and pie, which are often relegated to the backbench, and written off as “boring.”

The emphasis on the instant visual appeal of an infographic over the clarity of the presentation of data is a troubling trend that plagues the modern incarnation of the infographic.

Data Visualization And The “Cool” Factor

“The important criterion for a graph is not simply how fast we can see a result; rather it is whether through the use of the graph we can see something that would have been harder to see otherwise or that could not have been seen at all.”

– William Cleveland, The Elements of Graphing Data

When presenting data visually, our main aim should be to look for ways to effectively communicate the stories hidden in the depths of the spreadsheet. Data visualization grants a means for discovery. It frees the data from the confines of its tabulated raw format and provides a way for the viewer to see trends, gather insights and understand complex stories without the need for words.

However, the average infographic that you’ll likely come across online today focuses more on the overall aesthetic of the piece, the “cool” factor, than on the data itself. “Info-posters” that are heavily illustrated and have a low information-to-pixel density currently dominate the infographic landscape, many of which dumb down data and spoon-feed the audience a choice set of factoids for easy digestion of the chosen topic.

In an effort to make the data appear more interesting, some infographics use design elements to distract the viewer. Large view.

In striving to achieve coolness, the examples above forgo the integrity of the data and, hence, the main function of the infographic. The design, rather than clarifying the data, actually distracts the audience from the information presented, encouraging them to engage with the visual elements rather than with the data itself.

Data Vs. Design?

Now, that’s not to say that being informative and beautiful are mutually exclusive. There are many wonderful examples of data being presented in an efficient, effective and visually stunning way. But the beauty of most of them lies in their simplicity.

“US Gun Deaths” by Periscopic. Large view.

The most effective infographics and visualizations present data as clearly and concisely as possible, free from most distractions and embellishments — the emphasis being not on making the data look cool, but on ensuring that the data’s story is clearly communicated.

Decisions made during the design process will either help or hinder the viewer’s comprehension of the subject matter. So, although it is often said that the type of data will determine the type of visualization, design choices involving color, shape, layout and typography are also extremely important and will affect the ease with which the information can be understood. For example, color can highlight key insights, size can demonstrate change, and proximity can imply correlation.

A common misconception is that the audience will be interested in your data only if it looks exciting and is broken down into bite-sized chunks. In reality, you don’t need to dress up the data to make it more interesting. The information in the graphic, or the story you’re revealing, will ultimately determine the appeal of the piece, not the fancy illustrations. As Brian Suda puts it in his book Designing With Data, “Graphic design is not a panacea for bad copy, poor layout or misleading statistics”.

Bar Charts Are Your Friend, Not Your Enemy

When the focus is on communicating data effectively, common chart types should not be dismissed as boring alternatives, and you certainly shouldn’t be afraid to use them when telling stories with data.

News organizations such as The New York Times and The Guardian rely heavily on these common chart types, because simplicity and efficiency will often trump novelty in conveying complex information to a wide audience.

“Why Is Her Paycheck Smaller?” by The New York Times. Large view.

Charts such as line, bar and pie have been in constant use since the 18th century for good reason. How information is displayed and organized in these charts appeals directly to the intrinsic way in which humans perceive things. According to the Gestalt laws of perception, the human brain automatically organizes and groups the things it sees by proximity, similarity, continuity and symmetry.

Designing with data in a way that supports these relationships (for example, by using color to show similarity, or by using a connecting line to imply continuity) makes it easier for viewers to understand what the data is showing and to draw certain conclusions that would be a lot harder to come to had the data been merely displayed in tabular format.

According to Andy Kirk in his book Data Visualization: A Successful Design Process, by presenting data in a visually effective way, you “move from just looking at the data, to actually seeing it.”

In Practice

The following “infographic” was probably the unfortunate result of a designer being asked to make the data look cool — a predicament I can certainly sympathize with, having been given the exact same creative direction in a number of projects. On more than one occasion, I have been explicitly asked not to use bar or line charts and been told that I simply needed to “jazz up the data,” much to my concern.

The presentation of the information below certainly is unique, and effort has been exerted to make the information look interesting. The graphic fails, however, in actually conveying information.

“Top 10 Salaries at Google”: Making the data appear more interesting does not necessarily make it any easier to understand. Large view.

This graphic’s primary purpose should have been to present the data in a format that is easier to read than a table.

Large view.

In this instance, however, the table functions more efficiently than the graphic as a way to explore the data. If the salary range for each job were not present in the graphic, it would be nigh impossible to read. The pie-chart format adopted here actually obscures the data, rather than makes it accessible. Additionally, the colors are somewhat confusing; the radii of the pie slices are meaningless; and the number and size of the people figures don’t seem to correspond to anything.

So, how could this be better visualized? Focusing first and foremost on the clarity of the data, I’ve explored just two possible ways in which this small data set could have been presented. I’m sure there are many, many more.

One possible alternative to the infographic above. Large view.

This first option gives the viewer a much clearer representation of the data. With the linear organization, the viewer can understand at a glance what the data is showing, without having to work too hard. In stark contrast to the original, this graph makes the data instantly accessible, allowing for easy comparison between the jobs.

The first complaint I would likely hear is that the graph has an abundance of empty space. This space, however, actually tells a big part of the story. On the one hand, we’re presenting a comparison of 10 jobs; on the other, we also want to highlight that these salaries are actually very high, and the white space reinforces that.

Alternatively, the data could be presented as a slope graph:

Any one data set can be presented in a number of ways. Large view.

Here, we’ve zoomed into a portion of the graph. On the left side, markers pinpoint the low end of the salary range; on the right, markers pinpoint the high end. This version more clearly highlights those jobs with the widest ranges in salary, and it shows us that different presentation methods can tell slightly different stories.

These two examples may not have the visual pizazz of the original, but unlike the original, the data and the story are easily accessible to the viewer and not hindered by the motivation to look cool.

The request to simply make data look interesting is all too common in information design, and it might seem innocent at first. But if you forgo the integrity of the data to achieve this goal, then you run the risk of turning your infographic into merely a graphic with information, and that’s a big difference.

All that glitters is not necessarily gold.

(al) (ea)

© Tiffany Farrant for Smashing Magazine, 2013.

July 25 2013


Introduction To Photoshop Scripting


Automation is useful in the work of every designer. It saves precious time on repetitive tasks and helps us solve certain problems more quickly and easily.

You can automate your workflow in Photoshop with actions, which are pretty popular and which most of you already know about and use. Today, we’ll introduce you to an advanced automation technique: scripting. All you need for this is basic knowledge of JavaScript, which some of us Web designers already have.

I’ve known about Photoshop scripts for years but decided to really dive in a few months ago. I had avoided it because I thought it was the domain of smart math-minded programmers. I was wrong, and today I’ll show that, although it requires some basic programming skills, scripting isn’t hard to grasp.

But first, we have to answer the obvious question.

Why Do We Need Scripts?

Why should we learn to script if Photoshop already has pretty nice actions? The answer is interactivity. When you use an action, you can’t really control how it behaves in different situations; it is like a videotape that just keeps playing again and again without any change.

Why Do We Need Scripts?

A script is more dynamic; its behavior changes according to the parameters you input or the context of its application. Sounds useful, no?


You don’t have to be an advanced programmer to be able to write scripts; I’m just a graphic designer, like most of you. But you should at least have a basic understanding of JavaScript and some experience with properties and methods to get the most out of this article.


If you are not familiar with JavaScript at all, fear not! There are plenty of places to learn the basics of programming. Codecademy, for example, has pretty neat interactive lessons.

I work in Adobe Photoshop CS5, but everything we’ll cover applies to newer versions, too; Adobe hasn’t made any major updates to its scripting API since CS5. I will refer to the latest version of the scripting documentation, though, which is CS6.

Getting Started

When you record actions in Photoshop, you set the order of the steps to achieve a certain result — that’s your algorithm. Then, you press “Record” and replicate them in Photoshop one by one. Scripting is similar, but instead of doing these steps in Photoshop, you write them down as lines of code. Most actions that you do in Photoshop have their own script equivalent as a function.

Start here

Let’s say you are creating an action that scales a document to 150% of its original size. You’d go through these steps:

  1. Open Image → Image Size.
  2. Enter 150% in width and height.
  3. Hit “OK.”

The same process with a script would look like this:

  1. Call the application: app
  2. Target a document: activeDocument
  3. Call the function to resize the image: resizeImage(width, height)

And the code would look like this:

app.activeDocument.resizeImage("150%", "150%");
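
To make the percentages concrete, here is a plain-JavaScript sketch (runnable outside Photoshop; the helper name is ours, not part of the API) of what a 150% resize works out to in pixels:

```javascript
// Hypothetical helper, not part of Photoshop's API: compute the pixel
// dimensions that a percentage-based resizeImage() call would produce.
function scaleSize(widthPx, heightPx, percent) {
  var factor = percent / 100;
  return {
    width: Math.round(widthPx * factor),
    height: Math.round(heightPx * factor)
  };
}

console.log(scaleSize(800, 600, 150)); // → { width: 1200, height: 900 }
```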


There are three ways to write scripts for Photoshop: using AppleScript on Mac, VBScript on Windows or JavaScript on either platform. I use the third because it is cross-platform and I already have some experience with it.


Adobe has its own utility for writing scripts, called ExtendScript Toolkit.

Adobe ExtendScript Toolkit
The main window for Adobe’s ExtendScript Toolkit. (View large version.)

The toolkit comes with Photoshop, and you can find it in the following folder:

  • Mac OS X
    /Applications/Utilities/Adobe Utilities CS6/ExtendScript Toolkit CS6/
  • Windows
    C:\Program Files\Adobe\Adobe Utilities - CS6\ExtendScript Toolkit CS6\
    (or Program Files (x86) for 64-bit machines)

The user interface of the ExtendScript Toolkit is pretty straightforward. To start writing scripts, first select the target application in the drop-down menu. If Photoshop is running, then look for the green chain icon near the drop-down menu:

Application Select

Now you can write something like this:

alert("Hello Photoshop!");

Press cmd + R (or just hit the “Play” button in the toolbar) to run your script. ExtendScript Toolkit should switch to Photoshop and show an alert box:

Hello Photoshop!

ExtendScript Toolkit has some other neat features for debugging scripts, but this is enough for this article. You can learn more about how to use it by going to Help → JavaScript Tools Guide.

You can use any plain-text editor to write a script; just save it as a .jsx file. To run it, you’ll have to go to File → Scripts → Browse in Photoshop and select it. Alternatively, just open the script file with Photoshop. You can also add a line of code at the top of the script so that the file always opens in Photoshop:

#target photoshop

Save your scripts in the Photoshop/Presets/Scripts/ directory, and access them with File → Scripts. You can also set up a hotkey; just go to Edit → Keyboard Shortcuts, navigate to File → Scripts → [your script’s name], and set the shortcut you want.

ExtendScript Toolkit can run and debug code from the integrated development environment, and it has an object model viewer built in, which is useful. So, I recommend using the toolkit to write your scripts. Unfortunately, the Mac version crashes sometimes, so keep that in mind.

Photoshop Object Model

To make writing scripts easier, you should understand how things relate to each other in Photoshop’s Document Object Model (DOM). Understanding it is not so hard if you look at Photoshop itself. The main object in Photoshop’s DOM is the application. In the application, we have a collection of documents that are currently open in Photoshop.

Each document contains elements — such as layers (called ArtLayers), groups of layers (LayerSets), channels, history states and so on — just like in a regular PSD document.

A simplified visualization of Photoshop’s DOM is below. A more detailed containment hierarchy can be found on page 12 of “Adobe Photoshop CS6 Scripting Guide” (PDF).

Simplified visualization of Photoshop API
A simplified visualization of Photoshop’s DOM.

Each of these objects has its own properties and methods that you can work with. For example, to change the opacity of the selected layer in a document, you would go to Application → Document → Layer → Opacity and set the desired value. The code would look like this:

app.activeDocument.activeLayer.opacity = 50;

As you may have guessed, activeDocument and activeLayer determine the currently selected document and layer.
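
The dot-chained traversal is easy to model. Here is a toy sketch in plain JavaScript (the nested objects are mocks that only mirror the DOM’s property names; they are not the real API):

```javascript
// Mock of the containment hierarchy: Application → Document → Layer.
// Only the property names mirror Photoshop's DOM; the objects are plain JS.
var app = {
  activeDocument: {
    activeLayer: { name: "Stem", opacity: 100 }
  }
};

// The same traversal as in the real script:
app.activeDocument.activeLayer.opacity = 50;
console.log(app.activeDocument.activeLayer.opacity); // → 50
```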

You can find descriptions of most objects and their properties and methods in “Adobe Photoshop CS6 JavaScript Scripting Reference” (PDF), or in ExtendScript Toolkit by going to Help → Object Model Viewer.

Let’s see how this works in a real-world example. In this next section, we’ll write our own script based on an action.

Remastering The RotateMe Action As A Script

A few years ago at Christmas time, I had an idea for an action to help me draw snowflakes.

Drawing Snowflake 101

  1. Draw one stem of the snowflake with a pattern.

    Step One

  2. Duplicate the stem, and rotate it a few degrees.

    Step Two

  3. Repeat the second step until you have a full circle.

    Step Three

Duplicating and rotating each stem manually is tedious, so I came up with an action to automate it. The algorithm looks like this:

  1. Duplicate the stem.
  2. Rotate it by however many degrees you’ve chosen, using the Transform tool.
  3. Duplicate the layer.
  4. Use the “Repeat Transform” function.
  5. Repeat steps 4 and 5 until you have a full circle.

Pretty neat. But the action had a disadvantage: You can set only a certain number of stems for the snowflake, according to the number of degrees you set in third step of the algorithm.

Back when I wasn’t familiar with scripting, I just made a few versions of the action, each of which produced a snowflake with a different number of stems.

Today, we will remaster this action as a dynamic script that takes your input on the number of stems. Let’s get started!


When you start writing a script, defining the algorithm first before digging into the code itself is always a good idea. In our case, the algorithm will work like this:

  1. Ask the user to enter the number of stems.
  2. Calculate the rotation angle.
  3. Duplicate and rotate the layer by the number set in the first step.

Let’s start with saving the current or selected layer to a variable for further use:

// Save selected layer to variable:
var originalStem = app.activeDocument.activeLayer;

Note that in JavaScript, you can mark a line with double slashes (//) to make it a comment. Comments are used to describe parts of code for future reference and don’t affect the behavior of the script.

Let’s move on to our algorithm now.

1. Ask User for Input

We can take input from the user with the prompt(message, default value[, title]) function. This function shows a dialog box with the message and an input field that contains the default value. When the user hits “OK,” the function returns the inputted value, so we have to save it to a variable in order to use it later.

// Ask user for input by showing prompt box and save inputted value to variable:
var stemsAmount = prompt("Processing \"" + + "\"\nHow many stems do you need?", 12);

Note that I used in the message, so the dialog box will show the name of the selected layer.

On Mac OS X, the first line of the message is in bold and functions as the title. So, our main message should be on the second line. To make a new line, type \n.

In Windows, you can specify a third argument in the function to set the title:

// Ask user for input by showing prompt box and save inputted value to variable:
var stemsAmount = prompt("How many stems do you need?", 12, "Processing " +;

If we run the code in Photoshop, it will show this dialog box:

Prompt dialog

When the user hits “OK,” the inputted value will be saved to the stemsAmount variable. If the user clicks “Cancel,” then the function will return a null value. We’ll use this later.

2. Calculate the Rotation Angle

To calculate the rotation angle, we have to divide 360 degrees (a full circle) by the number of stems:

// Calculate the rotation angle
var angle = 360 / stemsAmount;

3. Duplicate and Rotate

Now we have everything we need to make duplicates of our stem. To do this, we’ll use the for loop. It lets us repeatedly run lines of code as many times as we’d like. Our loop will look like this:

for(var i = 1; i < stemsAmount; i++){
	// This code will run "stemsAmount - 1" times
}

Note that indices in programming start from 0, but because our first stem is already on the canvas, we’re starting the loop from 1 instead.

To duplicate and rotate our layer, we will use the duplicate() and rotate(angle, AnchorPosition) methods. Each duplicate is rotated by angle multiplied by its index. AnchorPosition determines the point around which the layer will rotate. You can see this point when you use the Transform tool in Photoshop — it looks like a small circle with a crosshair. In scripting, it has only 9 specified values — i.e. the 9 positions of the anchor point:

AnchorPosition visualization

In our case, it is the bottom center of the layer, BOTTOMCENTER. Photoshop uses a lot of other constants here and there in some of the functions, which you can find on page 197 of “Adobe Photoshop CS6 JavaScript Reference” (PDF).

So, our loop will look like this:

// Duplicate and rotate layers:
for(var i = 1; i < stemsAmount; i++){
	// Duplicate original layer and save it to the variable
	var newStem = originalStem.duplicate();

	// Rotate new layer
	newStem.rotate(angle * i, AnchorPosition.BOTTOMCENTER);
}
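
The loop’s arithmetic can be checked outside Photoshop. Here is a plain-JavaScript sketch (runnable in Node, not ExtendScript; the function name is ours) in which the duplicate() and rotate() calls are replaced by collecting the angle each copy would receive:

```javascript
// Plain-JavaScript stand-in for the loop above: instead of duplicating
// and rotating Photoshop layers, we collect the angle each copy gets.
function stemRotations(stemsAmount) {
  var angle = 360 / stemsAmount;
  var rotations = [];
  for (var i = 1; i < stemsAmount; i++) {
    rotations.push(angle * i); // stand-in for newStem.rotate(angle * i, …)
  }
  return rotations;
}

console.log(stemRotations(6)); // → [ 60, 120, 180, 240, 300 ]
```

A 6-stem snowflake needs only five copies, rotated in 60-degree steps; the original stem supplies the sixth position.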

And the completed code will look like the following. You can try to run it.

// Save selected layer to variable:
var originalStem = app.activeDocument.activeLayer;

// Ask user for input by showing prompt box and save inputted value to variable:
var stemsAmount = prompt("Processing \"" + + "\"\nHow many stems do you need?", 12);

// Calculate the rotation angle:
var angle = 360 / stemsAmount;

// Duplicate and rotate layers:
for(var i = 1; i < stemsAmount; i++){
	// Duplicate original layer and save it to the variable
	var newStem = originalStem.duplicate();

	// Rotate new layer
	newStem.rotate(angle * i, AnchorPosition.BOTTOMCENTER);
}

Final Touches

I’ll usually try to achieve the main goal with a script, and when everything works correctly, I’ll start to refine the code. In our case, we have to make sure that the user inputs a valid number in the prompt box — i.e. a positive integer, greater than one.

Also, to prevent Photoshop from going crazy, we will restrict the number of stems — let’s say, to 100. To do this, we will use a while loop to show the user an error message in the event of an invalid submission, and the prompt box will continue to be shown until the user enters a valid value or hits the “Cancel” button (remember that the prompt returns null if the user hits “Cancel”).

The new code looks like this:

// Save selected layer to variable:
var originalStem = app.activeDocument.activeLayer;

// Ask user for input by showing prompt box and save inputted value to variable:
var stemsAmount = prompt("Processing \"" + + "\"\nHow many stems do you need? (From 2 to 100)", 12);

// Check that user entered a valid number and, if invalid, show error message and ask for input again
while(isNaN(stemsAmount) || stemsAmount < 2 || stemsAmount > 100){
	// If user clicks "Cancel" button, then exit loop
	if(stemsAmount == null) break;

	// Show error message…
	alert("Please enter number in range from 2 to 100");
	// …and ask for input again
	stemsAmount = prompt("Processing \"" + + "\"\nHow many stems do you need? (From 2 to 100)", 12);
};

// Run the copying process
if(stemsAmount != null){
	// Calculate the rotation angle
	var angle = 360 / parseInt(stemsAmount);

	// Duplicate and rotate layers:
	for(var i = 1; i < stemsAmount; i++){
		// Duplicate original layer and save it to the variable
		var newStem = originalStem.duplicate();

		// Rotate new layer
		newStem.rotate(angle * i, AnchorPosition.BOTTOMCENTER);
	};
};

As you may have noticed, we’re using the isNaN(value) function, which returns true if value is “not a number,” and parseInt(value) to convert the value to an integer when we calculate the rotation angle.
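The validation rules themselves are plain JavaScript, so they can be tested outside Photoshop. Here is a small sketch (the helper name is ours, not from the script) that captures the same contract: prompt() hands back a string, or null on “Cancel,” and only whole numbers from 2 to 100 are acceptable:

```javascript
// promptValue stands in for whatever prompt() returned: a string, or null
// if the user hit "Cancel".
function isValidStemCount(promptValue) {
  if (promptValue === null) return false;  // "Cancel" is never valid input
  var n = parseInt(promptValue, 10);       // "12" → 12, "abc" → NaN
  return !isNaN(n) && n >= 2 && n <= 100;
}

console.log(isValidStemCount("12"));  // → true
console.log(isValidStemCount("abc")); // → false
console.log(isValidStemCount(null));  // → false
```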

The next thing we will do is manage the layers, renaming our new layers by adding an index to each. Also, to make sure that we don’t mess up the document’s layers, let’s place our stems in a group.

Renaming the layers is not a hard task. We will just use the name property of the layer and add an index number to it:

// Add index to new layers = + " " + (i+1);

A group in Photoshop’s API is called a LayerSet, and we can access all of a document’s groups through the layerSets property. To add a new group to a document, we have to call the layerSets object’s add() method:

// Create a group for stems
var stemsGroup = app.activeDocument.layerSets.add(); = + " (" + stemsAmount + " stems)";

Then, to add a layer to the group, we will use the move(relativeObject, ElementPlacement) function. Note that the move() function moves a layer in the layer stack, not on the canvas. (You can move a layer on the canvas with the translate(deltaX[, deltaY]) function.)

ElementPlacement is another constant, this one determining how we will place our layer relative to… well, relativeObject. In our case, we will use ElementPlacement.INSIDE to place the original layer inside a group:

// Place original layer in group
originalStem.move(stemsGroup, ElementPlacement.INSIDE);

We will place each new copy of the layer at the bottom of all layers in the group using ElementPlacement.PLACEATEND. The result is all of our layers arranged in ascending order, the first layer at the top and the last at the bottom:

// Place new layer inside stems group
newStem.move(stemsGroup, ElementPlacement.PLACEATEND);

You can read more about the ElementPlacement constant on page 202 of “Adobe Photoshop CS6 JavaScript Reference” (PDF).

Final Code

That’s it! RotateMe.jsx is done. Our final code looks like this:

// Save selected layer to variable:
var originalStem = app.activeDocument.activeLayer;

// Ask user for input by showing prompt box and save inputted value to variable:
var stemsAmount = prompt("Processing \"" + + "\"\nHow many stems do you need? (From 2 to 100)", 12);

// Check that user entered a valid number and, if invalid, show error message and ask for input again
while(isNaN(stemsAmount) || stemsAmount < 2 || stemsAmount > 100){
	// If user clicks "Cancel" button, then exit loop
	if(stemsAmount == null) break;

	// Show error message…
	alert("Please enter number in range from 2 to 100");
	// …and ask for input again
	stemsAmount = prompt("Processing \"" + + "\"\nHow many stems do you need? (From 2 to 100)", 12);
};

// Run the copying process
if(stemsAmount != null){
	// Calculate the rotation angle
	var angle = 360 / parseInt(stemsAmount);

	// Create a group for stems
	var stemsGroup = app.activeDocument.layerSets.add(); = + " (" + stemsAmount + " stems)";

	// Place original layer in group
	originalStem.move(stemsGroup, ElementPlacement.INSIDE);

	// Duplicate and rotate layers:
	for(var i = 1; i < stemsAmount; i++){
		// Duplicate original layer and save it to the variable
		var newStem = originalStem.duplicate();

		// Rotate new layer
		newStem.rotate(angle * i, AnchorPosition.BOTTOMCENTER);

		// Add index to new layers = + " " + (i+1);

		// Place new layer inside stems group
		newStem.move(stemsGroup, ElementPlacement.PLACEATEND);
	};

	// Add index to the original layer += " 1";
};

That wasn’t too hard, was it?


Now you can put it in the Photoshop/Presets/Scripts/ folder and run it by going to File → Scripts in Photoshop. Using different shapes with different values can yield interesting results:


As you can see from the number of links in the resources section below, there’s much more to say about scripting than can fit in an introductory article. But I hope the little that we’ve described today piques your interest and shows how powerful and helpful scripting is.

Community Power!

If you decide to dive into it, let’s learn together and share our experience. Ask your questions and share what you’ve done in the comments. If you are not a coder, consider leaving an idea for a script; maybe another reader will make it happen.

Let’s make Photoshop more useful together!


I’m still learning about Photoshop scripts, too, and here are some resources that are helping me along the way:

  • “Adobe Photoshop Scripting,” Adobe Developer Connection
    All of the documentation and utilities for scripting.
  • “Adobe Introduction to Scripting” (PDF), Adobe
    Here are the basics on scripting for Adobe applications. The nice thing about scripting for Photoshop is that you can apply your knowledge to other Adobe products; you just need to learn the application’s DOM, and you’ll be ready to go.
  • “Adobe Photoshop CS6 Scripting Guide” (PDF), Adobe
    In this introductory guide to scripting for Photoshop, you’ll find the basics on getting started with scripting.
  • “Photoshop CS6 JavaScript Reference” (PDF), Adobe
    This describes all of the objects and their functions and methods that you can use in scripting for Photoshop. This is one of the documents I use most when writing scripts.
  • “JavaScript,” Mozilla Developer Network
    Here are answers to all kinds of questions about general JavaScript functions and usage.
  • “JavaScript Tools Guide” (PDF), Adobe
    This has basic information about ExtendScript Toolkit and some advanced techniques, such as file system access, ScriptUI, and working with XML, sockets and more.
  • PS-Scripts
    An independent forum about scripting for Photoshop. I haven’t signed up to participate in discussions, but it has plenty of answered questions and solved problems to discover.
  • “Photoshop Scripting,” Adobe Community
    Adobe’s official forum for Photoshop scripting has some good discussion on problems encountered by users.

(al) (ea)

© darkwark for Smashing Magazine, 2013.

July 09 2013


Retinize It: Free Photoshop Action For Slicing Graphics For HD Screens


High-definition (or “Retina”) displays have spread wider and wider, and evidently their numbers will keep growing. So, as creators of products that will be consumed on Retina devices, we have to optimize our design and development workflow accordingly.

Slicing graphics from finished designs to use for development is one of the less enjoyable parts of building a website or app. And it takes a long time. Because slicing is a monotonous and straightforward task, using the right tool and workflow can save you hours or even days of work.


Preparing graphics for development mostly entails saving user-interface elements from the final mockups, with transparent backgrounds. And to support Retina displays, we also need to create double-sized versions of elements.

Upon failing to find a tool that fits my design team’s workflow, I created a set of two time-saving Photoshop actions for slicing graphics for Retina and standard displays. The great feedback from my team inspired me to share it with other designers. The tool has gotten good buzz in the Web design and front-end development community, so today I’m happy to introduce Retinize It on Smashing Magazine.

Retinize It website
Retinize It uses Photoshop actions and retina.js to optimize for Retina displays.

How It Works

Select one layer, several layers or a group of layers, and run the action. Once you’ve activated it, you just need to name the files and set the directory to save them to.

Retinize It

In the background, Retinize It copies the selected layers to a new file, makes the background transparent and trims the space around the element. Once that’s done, the action asks what you want to name the file, saves it, scales the original element by 200%, and saves that as a separate file. After this process, you’ll be returned to the original file.

How Much Of The Slicing Process The Action Saves

Almost all of it. The only thing you have to do is choose the directory and name the files. Remember to add a high-resolution modifier, @2x, for the Retina versions of the files. This convention was established in Apple’s iOS Developer Library.

If you’re building an iOS application, then you’ll need to provide a background and splash screen in three resolutions: standard (320 × 480 pixels), Retina (640 × 960 pixels) and iPhone 5 (1136 × 640 pixels). The naming convention for the standard and Retina versions is straightforward. For images for the iPhone 5’s screen resolution, Apple recommends adding a -568h@2x suffix, although Apple doesn’t require it.

Apparently, this happens because Xcode does not automatically associate -568h@2x images with the iPhone 5’s resolution; developers may set the suffix manually for this kind of file. I’ve worked with an iOS developer who has asked Apple to add a @5x suffix. So, the best way to determine the naming convention for iPhone 5 images in future is to ask your developer. In other cases, use the -568h@2x suffix.

The techniques presented in the article “Towards A Retina Web” bring the @2x convention from mobile apps to the Web, helping us to optimize websites for Retina displays very quickly.
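The convention itself is mechanical: insert the @2x modifier just before the file extension. As a quick illustration in plain JavaScript (the helper name is my own invention; on the Web, retina.js performs this swap for you):

```javascript
// Derive a Retina asset path from a standard one by inserting the "@2x"
// high-resolution modifier before the file extension, per Apple's convention.
function toRetinaSrc(src) {
  var dot = src.lastIndexOf(".");
  if (dot === -1) return src + "@2x"; // no extension: just append the modifier
  return src.slice(0, dot) + "@2x" + src.slice(dot);
}

console.log(toRetinaSrc("images/logo.png")); // "images/logo@2x.png"
```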

Why Scale by 200% and Not 50%?

Retinize It is good for those who start designing at non-Retina sizes, which is a better practice for two reasons. First, the non-Retina version of an image will look much closer to the final product, giving you more accurate feedback on how the design will actually look.

Secondly, an element with an odd size value that is scaled by 50% will end up with a x.5 pixel value, making the element blurry. Marc Edwards of Bjango explains this issue in his article “Designing for Retina Display,” as does Niklaus Gerber in his article “Designing for the iPhone 4.”
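A bit of arithmetic makes the problem concrete. Scaling up by 200% always lands on whole pixels, while scaling down by 50% lands on a half-pixel whenever a dimension is odd — and half-pixels get anti-aliased into blur. A minimal sketch (plain JavaScript, helper name invented for illustration):

```javascript
// Check whether a 2x asset's dimensions survive a 50% downscale
// without producing half-pixel values (which render blurry).
function downscalesCleanly(widthPx, heightPx) {
  return widthPx % 2 === 0 && heightPx % 2 === 0;
}

console.log(downscalesCleanly(88, 88)); // true  — 44 × 44 at 1x, crisp
console.log(downscalesCleanly(89, 88)); // false — 44.5 px wide at 1x, blurry
```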

What Kind of Layers Will This Work With?

The non-Retina action in this pack will work with any kind of layer. If you’re using the Retina version, then you should work with shapes and smart objects, so that the 200%-scaled file will not look pixelated.

Also, if your layer has an inner or drop shadow, then uncheck “Use Global Light”; otherwise, those effects in the sliced version of the layer will inherit Photoshop’s default angle.

What Does The Set Include?

The set includes two Photoshop actions:

  • Slice It
    This action slices a 100%-sized version of an element.
  • Retinize It
    This action saves a 100%-sized version and a 200%-scaled version.

What Makes It Special?

  • It’s free to use.
  • It installs in one click.
  • You don’t need to change the layer structure in PSD files.
  • You don’t need to name layers.
  • It’s optimized for Retina displays.
  • It runs in one click, with no setup needed.
  • It supports both Windows and Mac.

Download And Documentation

Additional Tools

PNG Express (Mac & Windows, $29)

A specification created by PNG Express

If you’ll have limited interaction with the developer coding your design and you’re not sure the result will be pixel-perfect, PNG Express can be a great time-saver. It helps you to create specifications with instructions on element positions, margins, fonts and font sizes.

PNG Express also has an option for slicing images, including Retina support.

ImageOptim (Mac Only, Free)


ImageOptim reduces image sizes while maintaining quality. The tool removes internal data embedded by graphics editors, such as comments and color profiles. I recommend dragging your website’s images folder into the app before compiling and going live. ImageOptim reduces an image’s size by around 30% on average.

Slicy (Mac Only, $29)


Slicy is a well-designed tool that many designers and developers use to slice graphics for iOS apps. It exports graphics from PSDs automatically, but it requires you to organize your layers in Photoshop and name them in a certain way. In addition, once you make changes in Photoshop, Slicy updates the slices automatically. The main reason I decided not to use Slicy is that there is no way to quickly export a couple of elements from a PSD without first preparing the file for Slicy.


© Artiom Dashinsky for Smashing Magazine, 2013.

July 02 2013


Taming The Email Beast


In the 1950s, when consumer electronics such as vacuum cleaners and washing machines emerged, there was a belief that household chores would be done in a fraction of the time.

We know now it didn’t work out that way. Our definition of clean changed. Instead of wearing underwear for multiple days, we started using a fresh pair every day, and so the amount of washing required increased. In short, technology enabled us to do more, not less.

Our work environments have followed a similar path. Tools such as email enable us to communicate more, rather than make life easier. In fact, many people are now overwhelmed by the amount of email they receive.

The Problem Of Email

Email has changed our expectations of communication; most of us feel like we need to be constantly available. We are tied to our email-enabled devices, and, like Pavlov’s dog, we have to check email every time the bell rings.

We are constantly available, constantly interrupted and continually overwhelmed.

Going offline isn’t the answer. As Web designers, we do not just build websites; we provide services to our clients. Therefore, we need to keep our clients happy, and that can only be done by regular communication. Clients need constant reassurance that their project is in hand, and they need continual chivying to provide the feedback and contributions we require to do our job.

Like it or not, email is a necessary evil. But that doesn’t mean it needs to rule us. We can tame the beast, and it all starts by doing less.

Like any beast, the more you feed email, the bigger it becomes. It’s time to put email on a diet. We can achieve this in a simple way: by using email less.

Send Less

Believe it or not, doing considerably less with email while still effectively communicating with our clients and colleagues is perfectly possible.

You probably don’t need to send out nearly as many emails as you do. You could almost certainly reduce the number of people you copy in your emails. Remember that the more email you send out, the more email you will get back. It’s that simple.

Email is not always the best form of communication. A face-to-face meeting or a phone call is usually much more effective. After all, the words themselves are only a small part of communication; tone of voice and body language are critically important.

Instant messaging (IM) is another option to consider. While it is intrusive at times, it can be perfect for quick questions. Email encourages long-form communication, while IM tends to be shorter.

That being said, there is no reason why emails need to be long.

Write Less

The less you write in emails, the less people will write in reply. People tend to mirror the behavior of others; so, if you want to receive more concise emails, start writing emails that are to the point yourself.

You might feel that short emails are less friendly and come across as cold, but these problems can be worked around.

Try linking to in your signature. That website will perfectly explain the brevity of your emails.

Linking to it makes it clear to clients that you keep your emails short because you value their time.

An even easier option is to adopt the “Sent from my phone” signature that many people use these days, a good excuse for getting to the point.

Please don’t misunderstand. Being friendly and personable with clients is important. But email is not the place to do that. If you want to chat, pick up the phone.

Email should feel more like Twitter than traditional mail. In fact, many people are abandoning email entirely and turning to Twitter as their primary communication tool.

If this step feels too big, try summarizing your email at the top. This will make it easier for the reader to get the gist of your message if they are busy. Also, you will find that people start doing the same in their emails, making reading much quicker.

In addition to sending less email and shortening your messages, reducing the amount you receive is possible.

Receive Less Email

The easiest way to cut down on replies is to tell people that they do not need to reply. Putting abbreviations such as NRN (no reply necessary) or FYI (for your information) in the subject line will help with this. But that won’t stop unsolicited email.

Most of us get a lot of unsolicited email, despite the excellent spam filters that most email services provide. These emails are often newsletters that we’ve never subscribed to or announcements from companies from which we once made a purchase. Regardless of whether we ever did agree to receive these emails, they are now cluttering our inbox.

You might be tempted to just delete these and keep wading through the rest of your email. But take the time to find the “Unsubscribe” link, because these companies will not contact you just once. They will email you again and again until you stop them.

If they don’t include an “Unsubscribe” link, create an email rule that automatically deletes them. Those couple of minutes now will save you time and distraction in the long run. If you really are too busy to find those “Unsubscribe” links, then try Unroll.me, which makes unsubscribing even easier.

Unroll.me makes unsubscribing from emails easier than ever.

However you do it, unsubscribing from mass emails will dramatically reduce your load. But don’t stop there; consider unsubscribing from newsletters that you did sign up for.

Keep Email For Communication Only

Part of our problem is that we have turned email into something it naturally is not. For example, many people use their inbox as a place to read news. Email was never really meant for that. Plenty of apps (such as the wonderful Feedly) provide this functionality.

Use an app like Feedly to read news, rather than your email client.

Others use their email client as a repository for files that they want to keep. This makes little sense because a much more powerful filing system is built into their operating system.

And yet others use their inbox as a task manager, marking emails as starred or unread to remind themselves to take some action. However, dedicated tasks managers will help you work much more efficiently.

Your email client is not nearly as good a task manager as applications such as OmniFocus. Always use the best tool for the job.

Turning email into something else merely clutters our inbox, making the job of reading and writing actual email less efficient.

To tame the beast, use email as a communication tool, not as a way to manage files, read news or schedule tasks.

While the techniques above will reduce the amount of email coming in, they address only the symptoms and not the root cause of our problem — which is our addiction to email.

Breaking Our Addiction

The reference earlier to our Pavlovian response to the audio notification of incoming email was slightly tongue in cheek, but accurate nonetheless.

Upon hearing that beep, we find it hard not to look. But checking email every five minutes adds up to over 32,000 interruptions a year! That is a phenomenal number.

Do we really need to check email that much? Almost certainly not. The majority of email that comes in either is unsolicited or can wait a few hours. The number of emails that genuinely require urgent action is relatively low.

The problem is that we perceive certain emails as being urgent when they are not. It’s just a matter of training our clients not to expect an immediate response. Of course, that is not always possible.

What we need is a way to be notified of only the important emails. Fortunately, achieving this is relatively easy. Start by turning off notifications in your email client. They are just too indiscriminate, notifying you of every single message that comes in.

Instead, sign up for a service, such as AwayFind, that will notify you by text or app notification when an email comes in that meets certain requirements. For example, you could choose to receive notifications only of emails from a particular client or about that day’s meeting.

AwayFind notifies you about only the most important emails, freeing you from the shackles of constant alerts.

If you don’t want to pay for this service, you could try IFTTT.

The point is to free yourself from constant interruption. Knowing that important messages will reach you instantly, you can comfortably check email only a couple of times a day. I check email first thing in the morning, at lunchtime and at the end of the business day. That way, I can respond reasonably promptly without having my workflow interrupted.

And when you do check your email, be organized in the way you deal with it.

Organizing Your Email

A lot of people make email more complicated than it needs to be because they are not organized. The biggest offenders are those who never move email out of their inbox.

Having an inbox filled with hundreds or thousands of emails increases the time it takes to process new messages. With so much clutter, figuring out what needs to be dealt with and what has already been read becomes confusing. No matter how in control you may feel, things are bound to fall through the cracks.

Your inbox is where email arrives, but it shouldn’t stay there. Instead, clear your inbox every time you open your email client. You don’t necessarily have to act on every email right away — just read it and decide what to do with it.

You have five options upon reading an email:

  • Act on it.
    If you have time to act on the email immediately, then do so. This could mean responding or completing a task. But don’t feel obliged to act immediately if you have higher priorities.
  • Defer it.
    Too busy to deal with the email immediately? No problem. Turn it into a task that sits in your task manager. You can then deal with it on your own time and view it alongside your other tasks.
  • File it.
    Many emails we receive require no particular action, but merely provide useful information. In such cases, archive the email for future reference. With today’s powerful search tools, there is little need to tag it or add it to a folder. But do move it out of the inbox.
  • Delete it.
    If the email is spam or has no long-term value, delete it.
  • Delegate it.
    Some emails require action, but you might not be the best person to do it. In those cases, delegate the task by forwarding the email to the relevant person.

The lesson in all of this is that your inbox is just a holding place for unprocessed email. Once you have read it and decided what to do with it, move it out of your inbox to make room for future emails.

Start Today

You might be intimidated by the prospect of having to process all of those emails staring back at you in your inbox. This might all sound like too much work. I promise you it will be worth it.

If the inbox is too overwhelming, just declare bankruptcy. Archive everything except this week’s email. If any emails from more than a week ago haven’t been addressed yet, replying to them now would probably be too late anyway.

Archiving all of that email will leave you with a manageable load. Work through each email and decide what to do with it. If you get a lot of email, this could take some time, but it will be worth it. Remember that you don’t have to act on everything immediately. Defer actions until later by bouncing them to your task list. The trick is to process everything out of your inbox. Do that and I promise you will never look at email with the same horror again.

So, those are my tips on managing email. What are yours? What do you think of email clients such as Mailbox? Or do you have a completely different approach? Let us know in the comments. We’d love to hear your perspective.


© Paul Boag for Smashing Magazine, 2013.

June 20 2013


Email Templates For Web Designers And Developers (PDF, ODT, TXT)


You know how it goes: you are facing a difficult situation with a client, and you aren’t quite sure how to respond to it to navigate the conversation into a meaningful direction. This is where email templates can come in handy. This article features email templates for communicating with clients, superiors, teammates and the like. You can easily customize them. They balance firmness and tact, professionalism and friendliness.

Please note, though, that these templates are subjective. They’ve been created to the best of my ability, with the help and input of dozens of designers and developers. Once you load the templates into your email program, remember to format them first, OK? Use the “Paste as plain text” command and you’ll be fine.

Here is a short overview of all templates:

The Dreaded Price Email

First, try to defer talking about price until you have all of the details. I do this all of the time with prospective clients of mine. Tell them that you’ll send an accurate estimate once they share some thoughts on what they’re looking for. And if they insist, go ahead and send it. Be done with it.

[Subject:] Answer to your question on my rates

[Client’s name],

My rate varies, depending on the project and its scope. Generally, though, my rate is [$X] for [work Y], just so that you have a ballpark idea.

If you send me more details about the kind of work you have in mind, I can send you a more accurate estimate. For now, though, let’s get back to where we were, regarding [matter Z], and we can discuss pricing when more information is available.

[Your name]
[contact details, website]

Questions About The Design Brief

Trust me, iron out any questions you have before the project starts. The client will appreciate your initiative and your willingness to approach them when help is really needed. Swallowing a bit of pride and asking is always better than wallowing in confusion and causing problems down the road.

[Subject:] Some questions about the design brief

[Client’s name],

I appreciate your quick provision of the design brief. It’s really allowed me to get a good idea of where you want this project to go. I’m excited to start working on the project!

I have just a few questions to clear up before we go full steam ahead.

  • [question X]
  • [question Y]
  • [question Z]

If you could get back to me with your input by [date and/or time], that’d be great. If you also have things you’d like to discuss, please reach out.

It’s always best that we’re on the same page.

[Your name]
[contact details, website]

Sending The Final Plan To A New Client

To give a new client a good impression of you from the outset, make it clear that professional-quality work and a strong work ethic are important to you. Send out this email along with the project plan.

[Subject:] Would love your input. Project plan attached.

[Client’s name],

To start, thanks for your vote of trust. I’ll be working hard to make sure you love your decision to work with me — that’s a promise.

I’ve attached the final project plan here, for your input. Below are its main points, in case you don’t have time right away to read the full plan.

  • The total estimated cost is [$X].
  • The estimated time is [Y].
  • [other important point]
  • [other important point]

If you could send me your comments by [date and/or time], I’d appreciate it.

Should you have things you’d like to discuss, please feel free to reach out. If a meeting is needed, I’m OK with that as well.

Thanks again for your business, and I look forward to getting to the work!

[Your name]
[contact details, website]

The Cost In The Final Plan Is Damned Far From Your Initial Estimate

You’ll almost always have to submit an initial estimate to the client. If all goes well, that estimate will be reflected in the final plan, without much change. But for those times when a drastic departure is needed, take heart.

[Subject:] Final project plan, based on recent info

[Client’s name],

Two things here.

First, thanks for providing the full details on the project you’d like us to work on together. I’ve prepared the project plan based on the information you’ve given.

The plan is attached here, for your evaluation.

Secondly, I’d like to inform you of the revised estimate, reflected in the plan.

Very briefly, the project will now take [time X], at a cost of [$Y].

I’m aware this is far from the previous estimates I talked about with you. I’ve given these figures a lot of thought, and I believe they’re fair, considering the work to be done on both of our ends.

To close, please send me your feedback on the plan by [date and/or time].

Then, we can work out an arrangement that’s a win for both of us.

[Your name]
[contact details, website]

Scope Is Creeping But Can Be Accommodated

I’ve yet to see a large project that doesn’t have scope creep, one way or another. Still, it’s important to manage the creep, quickly and proactively. Otherwise, the project will bloat, bringing a completely new set of problems.

[Subject:] A quick note on your new requirements

[Client’s name],

Thanks for providing input on the project — I appreciate your direction!

Regarding the changes we talked about, I’m happy to tell you that they can be accommodated. But because they aren’t a part of our initial agreement, they’ve caused shifts in the plan for this project.

That revised plan is attached, showing the new timelines and associated costs.

I’d appreciate feedback regarding the attachment by [date and/or time], so that the design work can get back to its usual speed.

[Your name]
[contact details, website]

You Won’t Be Able To Deliver The Design On Time

First things first, problems like this happen sooner or later. What’s important is that you apologize, not try to shirk responsibility, and fix the situation fast. If you do these three actions, you’ll be fine most of the time.

[Subject:] Important notice, and an apology

[Client’s name],

I’m sure this isn’t the type of email you expected to get from me. Still, I’d like to deal with the facts as they are and get a solution in place, ASAP.

So, here goes. I’m sorry, but the design won’t be delivered on time. There are a couple of reasons for this, but rest assured, I take full responsibility.

  • [reason A]
  • [reason B]
  • [reason C]

To get the project back on track, I’ve done [action X], [action Y] and [action Z].

I’m also taking steps to ensure that we don’t go through this headache again.

Anyway, if you’d like to discuss the effects of this issue, feel free to reach out.

[Your name]
[contact details, website]

Dealing With Late Payment

Thankfully, I’ve not had to send these emails often, and usually my clients have only forgotten to deal with an invoice out of busyness. But if you’re in the unfortunate position of having to collect a very late payment, read on!

[Subject:] Your payment for [work X]

[Client’s name],

I recently sent you an invoice dated [date], for [services rendered]. The total cost reflected in the invoice is [$X].

While I’ve worked up to standard and delivered on time, the compensation still hasn’t arrived. According to our agreement, the payment terms are below.

[Insert relevant details here, preferably in bold for emphasis.]

According to these terms, the payment ball is clearly in your court. If you’re going through difficulties, please let me know, and we’ll work to reach a solution together.

Otherwise, I’ll be expecting your payment by [date X], and will be contacting you on [date Y] if any issues still remain.

[Your name]
[contact details, website]

Discussing Other Aspects Of The Website

Design is rarely the only thing a client has to consider. As the one with the knowledge, you would do well to bring related aspects of website performance and usability to the client’s attention. As a result, you might earn not only their respect, but perhaps even higher compensation.

[Subject:] Wanted to bring these to your attention

[Client’s name],

As you may know, design isn’t the only thing that matters on your website. So, I feel it’s my responsibility to bring your attention to related issues that you may need to consider.

  • [first consideration (such as website performance)]
  • [second consideration (such as usability or functionality)]
  • [yet another consideration]

The factors above will all have an impact on your website and its users. They’re important because of [reason X, reason Y and reason Z].

I’m bringing these things to your attention now so that we can act on them promptly. If you’d like to talk about what I’ve shared here, please let me know!

[Your name]
[contact details, website]

Justifying The Need For Extra Hours

As mentioned, problems and changes always come up, whatever the project. Here is yet another template for such instances, this one an email to soften the client’s heart on the subject of extra hours.

[Subject:] Important project update

[Client’s name],

I just wanted to tell you about some important changes to the project.

From my most recent check of what still needs to get done, I’ve come to the realization that extra hours are needed, for these reasons:

  • [first reason and why it matters]
  • [second reason and why it matters]
  • [third reason and so on]

I know this is a surprise, and I would have liked to have avoided this. But my responsibility is to keep you in the loop, especially about any changes such as these.

If you’d like to discuss the new hours, please do reach out. Or we can meet at [date and/or time]. If then doesn’t work, let me know when is most convenient for you.

[Your name]
[contact details, website]

No Need For Extra Hours

On rare occasions you will tell the client that you need extra time, only to realize later that in fact you don’t. Be honest and promptly share the good news with them.

[Subject:] Some good news for you

[Client’s name],

I recently sent you a [revised plan, email, etc.], indicating the need for [X] extra hours. The reasons for those hours were [A, B and C].

On a happier note, I’d like to share with you that those hours are no longer needed. They’ll no longer be billed, and the invoice will reflect that.

To be clear, the total project cost is now [$Y].

Everything else remains as is. If you’d like formal documentation to indicate this change, please let me know and I’ll prepare it.

Thank you,
[Your name]
[contact details, website]

Declining A Project

Oh, it’s a happy day when you have too many projects to accept a new one. If it ever does happen, a polite decline will stand as proof of your professionalism and will leave a good impression on the inquirer, who may need you in future.

[Subject:] Sorry I cannot take on your project

[Client’s name],

Thanks for your [inquiry or offer to hire me].

Unfortunately, I have a lot on my plate right now. I won’t be able to take you up on your offer. I wouldn’t want to accept and then commit at anything less than 100%.

For now, I’d like to focus on current projects, but I expect to have a free period open by [date X]. Would this work for you?

[Your name]
[contact details, website]

Stopping Work Because Of Delinquent Payment

All projects come with a payment risk. You could ask for a deposit up front to mitigate the risk, but sometimes you have to stop work altogether and accept the reality. Still, tell your client so that they’re clear that you haven’t shirked any responsibilities of your own.

[Subject:] Will have to stop work until dues are paid

[Client’s name],

This is a situation I would have preferred to avoid, but we both have to deal with the facts as they are. Due to delinquent payment, work on the project will have to stop, according to the terms of our agreement.

For the sake of our relationship, I’ll just assume that the invoice fell through the cracks. I’m sending a copy later today, and look forward to your payment by [date X].

I’ll also send a reminder by [date Y] if the issue remains.

[Your name]
[contact details, website]

The Client Refuses To Sign A Contract

Contracts, whether written in legalese or plain language, protect you. For this reason, a client’s refusal to sign one should throw up a red flag, and you should make it clear that you won’t work without the right measures in place.

[Subject:] Clarification

[Client’s name],

This is just a quick note about the contract I presented to you. You’ve stated that it’s unnecessary, but I really can’t overemphasize that it is necessary.

A contract clarifies our shared responsibilities and is an important safeguard for both of us. It’s an assurance that we’ll both comply with what’s expected of us, within the bounds of our professional relationship.

For these reasons, I really would never work without one. Not only is a contract standard practice, but it’s also demanded by common business sensibilities.

I hope you’ll understand. Should you wish to discontinue work because of the contract requirement, please inform me.

[Your name]
[contact details, website]

Funds Needed For Materials

Some designs require third-party resources, such as stock photography or original artwork. Most contracts have a clause that the client will pay for these materials, but reminding the client of as much via email is always prudent.

[Subject:] Materials needed for the design

[Client’s name],

I’m sending you this as a record of my request for materials. Specifically, the design requires the following items:

  • [first item (such as a stock photograph, with link)]
  • [second item (such as artwork, with link)]

These materials will be used for [insert intended use].

The total price for such materials is [$X], which breaks down as [$Y] for the first item, and [$Z] for the second item.

According to our agreement, the funds for such materials will come from you.

Please reply with your approval, and send the payment over by [date X].

[Your name]
[contact details, website]

Rates Are Going Up

Regularly increasing your rates is a normal part of business. This protects your margins and offsets inflation and higher taxes. Still, higher rates could mean disgruntled clients, so ease them into the idea early on.

[Subject:] I’ll be raising my rates

[Client’s name],

Because your business is extremely important to me, I’d like to personally explain the reasons for my raised rates.

  • [reason X]
  • [reason Y]
  • [reason Z]

As you’re aware, increases like these are an unavoidable part of business. That being said, I believe the new rate reflects the accompanying increase in my skills. For example, I’ve recently [insert latest big achievement].

If you have questions or clarifications, please let me know. I’d be happy to talk through any concerns you may have.

Thanks for your time,
[Your name]
[contact details, website]

Request For Testimonial

Testimonials are some of the most powerful marketing materials out there. The best can allay apprehensions, reinforce credibility and solidify your reputation. So, actively gather them when the opportunities present themselves. Don’t let your good work go unnoticed!

[Subject:] Can I get your approval for this quote?

[Client’s name],

Hope I haven’t caught you at a bad time.

I’m sure you know how important testimonials are for securing new clients. And because I want to make things super-easy, I’ve prepared a template for you. You’re free to edit it as you like, of course.

[Insert pre-written testimonial.]

If this testimonial is OK, can I get your approval to feature it on my website? Also, if you could send a photo by [date and/or time], I’d really appreciate it.

[Your name]
[contact details, website]

Request For Case Study

In addition to testimonials or social networking, case studies are another form of marketing. If you put effort into making great case studies, you’ll greatly reduce apprehensions about your services on the part of potential clients.

[Subject:] Can I feature you as a case study on my website?

[Client’s name],

The subject line pretty much says everything, but I’d like to ask again. Can I feature you as a case study? I think our project had a lot of highlights, and I’m eager to get the word out about our work together.

Specifically, I plan to dig into these main aspects:

  • [first main aspect to highlight in case study]
  • [second main aspect to highlight]
  • [third aspect and so on]

If being featured is OK with you, can we chat over coffee on [date and/or time]? Or if that doesn’t work, I’m free on [date X].

I look forward to meeting you!

[Your name]
[contact details, website]

Request For Referral

If you do good work, referrals will come automatically. But it never hurts to be proactive and ask whether your clients know people whom you could help. At least you’ll get the benefit of their introduction, which will alleviate any anxiety on the part of the prospective client.

[Subject:] Know any people I could help?

[Client’s name],

As you probably know, referrals are an important source of customers. So, I’d like to check in and ask: do you know people I could help with my skills?

If you do, I’ve written an introductory email that you can send them.

Introductory email: Hi, [friend’s name]. I’m introducing you to [your name]. [He/she] is the designer who did my website, and [he/she] is great: solid design skills, good work ethic and very responsive. I think you’d get some benefit from getting in touch with [him/her]. Contact details: [your email address, phone number, website].

Thanks for your help with this, [client name].

[Your name]
[contact details, website]

Download The Templates For Free


Thanks to the dozens of designers who have provided input and help. Also, a big debt of gratitude to the editors of Smashing Magazine for providing the platform to share this with the world.

Download the Set for Free

This set of templates is completely free to use for commercial or personal use. Go ahead and share this with anyone whom you think it’ll help. But please don’t sell it or claim it as your own. Putting this together was hard work!

(Credits of image on front page: Sarah Joy)

(al) (ea)

© Bea Kylene Jumarang for Smashing Magazine, 2013.

June 13 2013


Building An App In 45 Minutes With Meteor


The other day, I finally accomplished one of my long-standing goals: to go from one of those “Wouldn’t it be cool…” ideas to a working, live app in less than 1 hour. 45 minutes, actually.

It all started with a design meet-up in San Francisco. I can honestly say this was the best meet-up I’ve ever been to: Even though it was announced only two days in advance, more than 200 people RSVPed, and a good number of them showed up. It was a great chance to put faces to familiar names, as well as to make new friends.

But I got to talking with so many people that I didn’t have a chance to get contact info for everybody. So, the next day, I asked the organizers about it and they suggested that everyone who attended leave a link to their Twitter account in a shared Google Doc.

That would work, but I was afraid it would prove to be too much effort. If I’ve learned one thing in my years as a designer, it’s that people are lazy. Instead, what if I built an app that lets the user add their Twitter account to a list in a single click?

The app would work something like this:

  1. The user signs into Twitter,
  2. A link to their Twitter profile appears on the page,
  3. That’s pretty much it!

With my list of requirements complete, I set to work to see how fast I could build this, and I thought it’d be interesting to walk you through the process.

First, take a peek at what the finished app looks like:

Our final bare-bones (but working!) app.

You can also see a demo of the finished product, and find the code on GitHub. (Note: Give it some time to load. Apps hosted on Meteor’s free hosting service often slow down under a lot of traffic.)

A word of warning: This won’t be a traditional tutorial. Instead, it will be a play-by-play walkthrough of how I coded the app in one hour, including the usual dumb mistakes and wrong turns.

Introducing Meteor

I decided to build the app with Meteor. Meteor is a fairly young JavaScript framework that works on top of Node and has a few interesting characteristics.

Meteor’s home page

First, it’s all JavaScript, so you don’t need to deal with one language in the browser and another on the server. That’s right: the same language you use to set up jQuery slider plugins can also be used to query your app’s database! The added benefit of this is that your app now has only a single code base — meaning you can make the same code accessible from both the client and server if you need to.

Meteor is also reactive, meaning that any change to your data is automatically reflected everywhere throughout the app (including the user interface) without the need for callbacks. This is a powerful feature. Imagine adding a task to a to-do list. With reactivity, you don’t need a callback to insert the new HTML element into the list. As soon as Meteor receives the new item, it automatically propagates the change to the user interface, without any intervention on your part!
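To make the reactivity idea concrete, here is a toy dependency-tracking sketch in plain JavaScript — my own illustration, not Meteor’s actual implementation. A computation registered with `autorun` re-runs whenever a reactive value it read changes, with no callbacks at the call site:

```javascript
var currentComputation = null;

function ReactiveVar(value) {
  this.value = value;
  this.dependents = [];
}
ReactiveVar.prototype.get = function () {
  // Record which computation is reading us, so we can re-run it later.
  if (currentComputation && this.dependents.indexOf(currentComputation) === -1) {
    this.dependents.push(currentComputation);
  }
  return this.value;
};
ReactiveVar.prototype.set = function (value) {
  this.value = value;
  // Re-run every computation that read this value -- no callbacks needed.
  this.dependents.forEach(function (fn) { fn(); });
};

function autorun(fn) {
  currentComputation = fn;
  fn(); // the first run registers the reactive values it reads
  currentComputation = null;
}

// Usage: the computation re-runs automatically on every set().
var itemCount = new ReactiveVar(0);
var log = [];
autorun(function () { log.push(itemCount.get()); });
itemCount.set(1);
itemCount.set(2);
// log is now [0, 1, 2]
```

Meteor’s real machinery is far more sophisticated (it invalidates and re-runs computations through its dependency tracker), but the principle is the same: reads register dependencies, and writes propagate automatically.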

What’s more, Meteor is real time, so both your changes and the changes made by other users are instantly reflected in the UI.

Like many other modern frameworks, Meteor also speeds up your Web app by transforming it into a single-page Web app. This means that instead of refreshing the whole browser window every time the user changes the page or performs an action, Meteor modifies only the part of the app that actually changes without reloading the rest, and then it uses the HTML5 pushState API to change the URL appropriately and make the back button work.

Not having to update the whole page enables another very powerful feature. Instead of sending HTML code over the network, Meteor sends the raw data and lets the client decide how to render it.
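As a rough illustration of this “data on the wire” idea, the server sends plain JSON and the client builds the markup itself. Here, `renderUsers` and the sample data are hypothetical:

```javascript
// The client receives raw data (e.g. over Meteor's DDP protocol)...
var wireData = [
  { screenName: "alice", fullName: "Alice Example" },
  { screenName: "bob", fullName: "Bob Example" }
];

// ...and decides for itself how to render it as HTML.
function renderUsers(users) {
  return users.map(function (u) {
    return '<a href="https://twitter.com/' + u.screenName + '">' +
           u.fullName + '</a>';
  }).join('\n');
}

var html = renderUsers(wireData);
```

In a real Meteor app, the template engine does this rendering for you; the point is simply that HTML is produced on the client, not shipped from the server.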

Finally, one of my favorite features of Meteor is simply that it automates a lot of boring tasks, such as linking up and minifying style sheets and JavaScript code. It also takes care of routine stuff for you on the back end, letting you add user accounts to the app with a single line of code.

I’ve been experimenting with Meteor for the past six months, using it first to build Telescope (an open-source social news app), and then in turn using Telescope as a base to create Sidebar (a design links website), and I’ve just released a book about it. I believe that, more than any other framework, Meteor helps you get from idea to app in the shortest possible amount of time. So, if all of this has made you curious, I recommend you give it a try and follow along this short walkthrough.

Step 0: Install Meteor (5 Minutes)

First, let’s install Meteor. If you’re on Mac or Linux, simply open a Terminal window and type:

curl https://install.meteor.com | /bin/sh

Installing Meteor on Windows is a little trickier; you can refer to this handy guide to get started.

Step 1: Create The App (1 Minute)

Creating a Meteor app is pretty easy. Once you’ve installed Meteor, all you need to do is go back to the Terminal and type this:

meteor create myApp

You’ll then be able to run your brand new app locally with this:

cd myApp
meteor

In my case, I decided to call my app twitterList, but you can call yours whatever you want!

Once you run the app, it will be accessible at http://localhost:3000/ in your browser.

Step 2: Add Packages (1 Minute)

Because I want users to be able to log in with Twitter, the first step is to set up user accounts. Thankfully, Meteor makes this trivially easy as well. First, add the required Meteor packages, accounts-ui and (since we want users to log in with Twitter) accounts-twitter.

Open up a new Terminal window (since your app is already running in the first one) and enter:

meteor add accounts-ui
meteor add accounts-twitter

You’ll now be able to display a log-in button just by inserting {{loginButtons}} anywhere in your Handlebars code.

A more complex version of the accounts-ui widget, as seen in Telescope.

I didn’t want to have to bother with styling, so I decided to also include Twitter Bootstrap with my app.

I went to the Twitter Bootstrap website, downloaded the framework, extracted the ZIP file, copied it to my app’s Meteor directory, and then hooked up the required CSS files in the head of my app’s main file.

Ha ha, not really. What is this, 2012? That’s not how it works with Meteor. Instead, we just go back to the Terminal and type:

meteor add bootstrap

Client Vs. Server

I guess at this point I should briefly tell you more about how Meteor apps work. First, we’ve already established that a Meteor app’s code is all JavaScript. This JavaScript can be executed in the browser like regular JavaScript code (think a jQuery plugin or an alert() message), but can additionally be executed on the server (like PHP or Ruby code). What’s more, the same code can even be executed in both environments!

So, how do you keep track of all this? It turns out Meteor has two mechanisms to keep client and server code separate: the Meteor.isClient and Meteor.isServer booleans, and the /client and /server directories.

I like to keep things clean; so, unlike the default Meteor app that gets generated with meteor create (which uses the booleans), I’d rather use separate directories.

Also, note that anything that isn’t in the /client or /server directories will be executed in both environments by default.

Step 3: Create the Markup (10 Minutes)

I like to start from a static template and then fill in the holes with dynamic data, so that’s what I did. Just write your template as if it were static HTML, except replace every “moving part” with Handlebars tags. So, something like this…

<p><a href="">Sacha Greif</a></p>

… becomes this:

<p><a href="{{userName}}">{{fullName}}</a></p>
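Conceptually, the interpolation works something like this toy stand-in (not the real Handlebars library; the regex only handles simple `{{key}}` tags):

```javascript
// Toy template interpolation: replace each {{key}} with data[key].
function renderTemplate(tpl, data) {
  return tpl.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] != null ? String(data[key]) : "";
  });
}

var html = renderTemplate(
  '<a href="{{userName}}">{{fullName}}</a>',
  { userName: "alice", fullName: "Alice Example" }
);
// html === '<a href="alice">Alice Example</a>'
```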

Of course, those tags won’t do anything yet and will appear blank. But we’ll match them up with real data pretty soon. Next, I deleted the contents of twitterlist.html and got to work on my HTML. This is the code I had after this step:

<head>
  <title>Who Was There?</title>
</head>

<body>
  <div class="container">
    <div class="row">
      <div class="span6">
        <div class="well">
          <h4>Did you go to the <a href="">Designer Potluck</a>? Sign in with Twitter to add your name.</h4>
          {{loginButtons}}
        </div>
        <table class="table">
          <tr>
            <td>
              <a target="_blank" href="{{userName}}"><img src="{{image}}"/> {{fullName}}</a>
            </td>
          </tr>
        </table>
      </div>
    </div>
  </div>
</body>


Step 4: Configure Twitter Sign-In (3 Minutes)

You’ll have noticed the {{loginButtons}} Handlebars tag, which inserts a log-in button on your page. If you try to click it right now, it won’t work, and Meteor will ask you for additional information.

You’ll need to fill in your app’s Twitter credentials.

To get this information, we first need to tell Twitter about our app. Follow the steps on the screen and create a new Twitter app; once you’re done, try logging in. If everything has worked right, you should now have a user account in the app!

Creating a new Twitter app.

To test this out, open your browser’s console (in the WebKit inspector or in Firebug) and type this:

Meteor.user()
This will retrieve the currently logged-in user, and, if everything has gone right, it will give you your own user object in return (something like Object {_id: "8ijhgK5icGrLjYTS7", profile: Object, services: Object}).

Step 5: Split It Into Templates (5 Minutes)

You’ll have noticed that our HTML has room to display only a single user. We’ll need some kind of loop to iterate over the whole list. Thankfully, Handlebars provides us with the {{#each xyz}}{{/each}} helper (where xyz are the objects you want to iterate on, usually an array), which does just that.

We’ll also split the code into a few templates to keep things organized. The result is something like this:

<head>
  <title>Who Was There?</title>
</head>

<body>
  <div class="container">
    {{> content}}
  </div>
</body>

<template name="content">
  <div class="row">
    <div class="span6">
      <div class="well">
        <h4>Did you go to the <a href="">Designer Potluck</a>? Sign in with Twitter to add your name.</h4>
        {{loginButtons}}
      </div>
      <table class="table">
        {{#each users}}
          {{> user}}
        {{/each}}
      </table>
    </div>
  </div>
</template>

<template name="user">
  <tr>
    <td>
      <a target="_blank" href="{{userName}}"><img src="{{image}}"/> {{fullName}}</a>
    </td>
  </tr>
</template>

Step 6: Hook Up Our Template (5 Minutes)

Our template is all set up, but it’s iterating over empty air. We need to tell it what exactly this users variable in the {{#each users}} block is. This block is contained in the content template, so we’ll give that template a template helper.

Delete the contents of twitterlist.js, and write this instead:

Template.content.users = function () {
  return Meteor.users.find();
};

What we’re doing here is defining Template.content.users as a function that returns Meteor.users.find().

Meteor.users is a special collection created for us by Meteor. Collections are Meteor’s equivalent of MySQL tables. In other words, they’re a list of items of the same type (such as users, blog posts or invoices). And find() simply returns all documents in the collection.
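If it helps, the insert/find pair of a collection can be sketched with a toy in-memory stand-in (illustration only; real Meteor collections are backed by MongoDB and support selectors, sorting and reactivity):

```javascript
// Toy in-memory "collection": a list of documents of the same type.
function Collection() { this.docs = []; }
Collection.prototype.insert = function (doc) {
  this.docs.push(doc);
  return doc;
};
// find() with no arguments returns all documents in the collection.
Collection.prototype.find = function () {
  return this.docs.slice();
};

var users = new Collection();
users.insert({ profile: { name: "Alice" } });
users.insert({ profile: { name: "Bob" } });
var all = users.find(); // all.length === 2
```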

We’ve now told Meteor where to find that list of users, but nothing’s happening yet. What’s going on?

Step 7: Fix Our Tags (5 Minutes)

Remember when we typed this?

<a target="_blank" href="{{userName}}"><img src="{{image}}"/> {{fullName}}</a>

The {{userName}}, {{image}} and {{fullName}} are just random placeholders that I picked for the sake of convenience. We’d be pretty lucky if they corresponded to actual properties of our user object! (Hint: they don’t.)

Let’s find out the “real” properties with the help of our friend, the browser console. Open it up, and once more type this:

Meteor.user()
The object returned has all of the fields we need. By exploring it, we can quickly find out that the real properties are actually these:

  • {{services.twitter.screenName}}
  • {{services.twitter.profile_image_url}}
  • {{profile.name}}

Let’s make the substitutions in our template and see what happens.

It works! Our first and only user (you!) should now appear in the list. We’re still missing some fields, though, and only the user’s full name appears. We need to dig deeper into Meteor to understand why.

A Database On The Client

We haven’t really touched on what Meteor does behind the scenes yet. Unlike, say, PHP and MySQL, with which your data lives only on the server (and stays there unless you extract it from the database), Meteor replicates your server-side data in the client and automatically syncs both copies.

This accomplishes two things. First, reading data becomes very fast because you’re reading from the browser’s own memory, and not from a database somewhere in a data center.

Secondly, modifying data is extremely fast as well, because you can just modify the local copy of the data, and Meteor will replicate the changes for you server-side in the background. But this new paradigm comes with a caveat: We have to be more careful with data security.

Step 8: Make the App Secure (1 Minute)

We’ll address data security in terms of both writing and reading. First, let’s prevent people from writing whatever they want to our database. This is simple enough because all we need to do is remove Meteor’s insecure package:

meteor remove insecure

This package comes bundled with every new Meteor app to speed up development (letting you insert data client-side without having to set up all of the necessary checks and balances first), but it is obviously not meant for production. And because our app won’t need to write to the database at all (except for creating new users — but that’s a special case that Meteor already takes care of), we’re pretty much done!

More On Security

While we’re on the topic of security, Meteor apps also come with a second default package, autopublish, which takes care of sending all of the data contained in your server-side collections to the client.

Of course, for a larger app, you probably won’t want to do that. After all, some of the information in your database is supposed to remain private, and even if all your data is public, sending all of it to the client might not be good for performance.

In our case, this doesn’t really matter because we do want to “publish” (i.e. send from the server to the client) all of our users. Don’t worry, though — Meteor is still smart enough not to publish sensitive information, such as passwords and authentication tokens, even with autopublish on.

Step 9: Add Follow Buttons (8 Minutes)

While visitors can now click on a name to go to their Twitter profile, simply displaying follow buttons for each user would be much better. This step took a little tinkering to get right. It turns out that Twitter’s default follow button code doesn’t play nice with Meteor.

After 15 minutes of unsuccessful attempts, I turned to the Google and quickly found that for single-page apps, Twitter suggests using an iframe instead.

This worked great:

<iframe style="width: 300px; height: 20px;" src="//platform.twitter.com/widgets/follow_button.html?screen_name={{services.twitter.screenName}}" height="240" width="320" frameborder="0" scrolling="no"></iframe>

Step 10: Deploy (1 Minute)

The last step is to deploy our app and test it in production. Once again, Meteor makes this easy. No need to find a hosting service, register, launch an instance, and do a Git push. All you need to do is go back to the Terminal and type this:

meteor deploy myApp

Here, myApp is a unique subdomain that you pick (it doesn’t have to be the same as the app’s name). Once you’ve deployed, your app will live at myApp.meteor.com. Go ahead and ask a few people to register: You’ll see their Twitter profiles added to the list in real time!

Going Further

Of course, I had to gloss over a lot of key Meteor concepts to keep this tutorial light. I barely mentioned collections and publications, and I didn’t even really talk about Meteor’s most important concept, reactivity. To learn more about Meteor, here are a few good resources:

  • Documentation, Meteor
    This is a required reference for any Meteor developer. And it’s cached, meaning you can even access it offline.
  • EventedMind
    Chris Mather puts out two Meteor screencasts every Friday. They’re a great help when you want to tackle Meteor’s more advanced features.
  • Discover Meteor
    I’m obviously biased, but I think our book is one of the best resources to get started with Meteor. It takes you through building a real-time social news app (think Reddit or Hacker News) step by step.
  • Blog, Discover Meteor
    We also make a lot of information available for free on our blog. We suggest looking at “Getting Started With Meteor” and “Useful Meteor Resources.”
  • Prototyping With Meteor
    A tutorial we wrote for NetTuts that takes you through building a simple chat app.

I truly believe Meteor is one of the best frameworks out there for quickly building apps, and it’s only going to get better. Personally, I’m really excited to see how the framework evolves in the next couple of months. I hope this short tutorial has given you a taste of what Meteor’s all about and has made you curious to learn more!

(il) (ea) (al)

© Sacha G for Smashing Magazine, 2013.

June 11 2013


Front-End Ops


When a team builds a complex application, there is often a common breakdown of roles. Specifically on the back end, there are database engineers, application engineers and operations engineers, or something close to this. In recent years, more and more application logic is being deferred to the client side. For some reason, though, operations folks aren’t going with it.

I recently wrote an article on “Deploying JavaScript Applications.” It was largely well received, and I was happy with the content, but one negative comment stuck out to me. I probably didn’t have the reaction that the commenter was intending, but it pointed out something to me nonetheless.

“With all due respect, may I ask if you actually enjoy your job? I am a dev, and I do enjoy using tech to do stuff to a point. If your role is to squeeze every last second of performance out of your app, then yea, all this stuff must be cool. BUT if you are a coder doing something else and then come back to all of this as well, then wow, I don’t know how you haven’t gone mad already. I’d be sick to the stomach if I had to do all of this, in addition to my usual work.”

See, I had written my article with a few too many assumptions. I understood ahead of time that a few of my solutions weren’t globally applicable, and that many people wouldn’t have the time or energy to implement them. What I didn’t fully grasp was how different the role in that article is from the picture that people have of a front-end developer in their head. Up to this point, a front-end developer had just the few operations duties lumped into their role, and even then, many people chose to skip those steps (that’s why Steve Souders is constantly yelling at you to make your pages faster).

I think things are about to shift, and I’d (humbly) like to help guide that shift, because I think it’ll be great for the Web.

The Front-End Operations Engineer

A front-end operations engineer is not a title you’ve likely come across, but hopefully one that you will. Such a person would need to be an expert at serving and hosting front-end resources. They’d need to be pros at Grunt (or something similar) and have strong opinions about modules. They would find the best ways to piece together the parts of a Web application, and they’d be pros at versioning, caching and deployment.

A front-end operations engineer would own external performance. They would be critical of new HTTP requests, and they would constantly be measuring file size and page-load time. They wouldn’t necessarily always worry about the number of times that a loop can run in a second — that’s still an application engineer’s job. They own everything past the functionality. They are the bridge between an application’s intent and an application’s reality.

A front-end operations engineer would be very friendly with the quality assurance team, and they would make sure that “performance” is a test that comes up green. They’d monitor client-side errors and get alerts when things go wrong. They’d make sure that migrations to new versions of the application go smoothly, and they’d keep all external and internal dependencies up to date, secure and stable. They are the gatekeepers of the application.


We have reached a point where there is enough work to be done in the operations space that it often no longer serves us to have an application engineer do both jobs. When the application’s features are someone’s priorities, and that person has a full plate, they will typically deprioritize the critical steps in delivering their application most successfully to the end users.

Not every company or team can afford this person, but even if someone puts on the “front-end operations” hat for one day a week and prioritizes their work accordingly, users win. It doesn’t matter how many features you have or how sexy your features are if they aren’t delivered to the user quickly, with ease, and then heavily monitored. Front-end operations engineers are the enablers of long-term progress.

Builds And Deployment

If you were to ask most back-end engineers which person on their team has traditionally worried about builds and deployment, I’m sure you’d get a mixed bag. However, a very sizeable chunk of engineers would tell you that they have build engineers or operations engineers who handle these things. In that world, this often entails generating an RPM file, spinning up EC2 instances, running things through continuous integration tools, and switching load balancers over to new machines. Not all of this will necessarily go away for a front-end operations engineer, but there will be new tools as well.

A front-end operations engineer will be a master of the build tool chain. They’ll help run and set up the continuous integration (or similar) server but, more specifically, they’ll set up the testing instances that their application runs on and then, eventually, the deployment instances. They’ll integrate Git post-commit hooks into the application and run the tests (either in Node.js and PhantomJS or against something like Sauce Labs, Testling or BrowserStack) before anything gets merged into the master. They’ll need to make sure that those servers can take the raw code and, with a few commands, build up the resulting application.

This is where many people use Grunt these days. With a quick grunt build, these machines could be serving the built version of an application in order to enable proper testing environments. The front-end operations engineer would be in charge of much that’s behind that command as well. grunt build could call out to RequireJS’ r.js build tool, or a Browserify process, or it could simply minify and concatenate a list of files in order. It would also do similar things to the CSS (or your favorite preprocessed CSS dialect), in addition to crushing images, building sprites and reducing requests in any other way necessary or possible.
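The concatenate-and-minify step can be sketched in a few lines (a deliberately naive illustration; a real build would delegate to r.js, Browserify or a proper minifier like UglifyJS):

```javascript
// Naive bundling: join source files, strip block comments,
// and drop blank lines and indentation.
function concatAndMinify(sources) {
  return sources
    .join("\n")
    .replace(/\/\*[\s\S]*?\*\//g, "")                       // strip /* ... */ comments
    .split("\n")
    .map(function (line) { return line.trim(); })            // drop indentation
    .filter(function (line) { return line.length > 0; })     // drop blank lines
    .join("");
}

var bundle = concatAndMinify([
  "/* module a */\nvar a = 1;",
  "var b = 2;"
]);
// bundle === "var a = 1;var b = 2;"
```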

Front-end operations engineers would make sure that all of this stuff works on people’s local machines. A quick grunt test should be able to build everything locally, serve it and test it (likely with some WebDriver API-compatible server). They’d make sure that team members have the power to push their applications into the continuous integration environment and test them there. And they’d remove single points of failure from deployment (GitHub being down during launch wouldn’t scare them).

They’d facilitate internal deployments of feature branches and future release branches. They’d make sure that the quality assurance team has an easy time of testing anything and that the managers have an easy time of demoing things that aren’t ready.

They’d help build multiple versions of an application to best suit each of their core sets of users. This could mean builds for mobile or for old versions of Internet Explorer, but all of it should be relatively transparent to those who are programming against those feature, browser or device tests.

They’d facilitate the process of taking a release, building it, uploading it to a static edge-cached content delivery network, and flipping the switch to make it live. And they’d have a documented and fast roll-back mechanism in place.

Perhaps most importantly, they’d automate everything.

(Image credits: Rube Goldberg)

Tracking Speed

The metric by which a front-end operations engineer would be judged is speed: the speed of the application, the speed of the tests, of the builds and deployment, and the speed at which other teammates understand the operational process.

A front-end operations engineer would live in a dashboard that feeds them data. Data is king when it comes to speed. This dashboard would integrate as much of it as possible. Most importantly, it would constantly be running the team’s app in multiple browsers and tracking all important metrics of speed. This space currently doesn’t have a ton of options, so they’d likely set up a private cloud of WebPageTest instances. They’d put them in multiple zones around the world and just run them non-stop.

They’d run against production servers and new commits and pull requests and anything they can get their hands on. At any given point, they’d be able to tell when, where, and what the surrounding circumstances were behind a slow-down. A decrease in speed would be directly correlated to some change, whether a new server, a diff of code, a dependency or third-party outage, or something similar.

They’d have a chart that graphs the number of HTTP requests on load. They’d also have a chart that tells them the Gzip’ed and minified payload of JavaScript, CSS and images that are delivered on load. And they’d also go crazy and have the unGzip’ed payload of JavaScript so that they can measure the effect of code parsing, because they know how important it can be on mobile. They’d instrument tools like mod_pagespeed and nginx_pagespeed to catch any mistakes that fall through the cracks.

They’d be masters of the latest development and measurement tools. They’d read flame graphs and heap snapshots of their apps from their development tools (in each browser that has them). They’d measure frames per second on scrolling and animations, prevent layout thrashing, build memory profiles, and keep a constant eye on compositing, rendering and the overall visual performance of the application. They’d do all of this for desktop and mobile devices, and they’d track trends in all of these areas.

They’d religiously parallelize tasks. They’d track the application via waterfalls and .har data to make sure that all serial operations are necessary or intentional.

They’d chart the average run time of the tests, builds and deploys. And they’d fight to keep them low. They’d chart their external dependencies in size and speed. They may not have control over slow API requests, but they’d want to be able to point to the reasons why their numbers are increasing.

They’d set an alarm if any of these numbers rose above an acceptable limit.

Monitoring Errors And Logs

Managing logging is a critical job of a normal operations engineer. The data that is generated from running an application is vital to understanding where things go wrong in the real world. A front-end operations engineer would also instrument tools and code that allow the same level of introspection on the client side.

This would often manifest itself as an analytics tool. Application engineers would be encouraged to log important events and any errors at certain levels to a logging service. These would be appropriately filtered and batched on the client and sent back as events to an internal or external analytics-style provider. The engineer would have enough information to identify the circumstances, such as browser name and version, application deployment version, screen size and perhaps a bit of other data. (Though they’d want to avoid storing personally identifiable information here.)
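
A minimal sketch of such client-side batching might look like the following. The batch size, field names and `send` hook are assumptions for illustration; a real setup would wire `send` to `XMLHttpRequest` or `navigator.sendBeacon`:

```javascript
// Sketch: batch client-side errors and flush them to a logging
// endpoint. The transport is injected as `send`, so the batching
// logic stays testable outside a browser.
function createErrorLogger(send, batchSize) {
  let buffer = [];
  return {
    log(message, meta) {
      // `meta` carries the context described above: browser name
      // and version, deployment version, screen size and so on.
      buffer.push(Object.assign({ message: message }, meta));
      if (buffer.length >= batchSize) this.flush();
    },
    flush() {
      if (buffer.length === 0) return;
      send(buffer);
      buffer = [];
    },
    pending() {
      return buffer.length;
    },
  };
}

// Usage with a spy instead of a real network call:
const batches = [];
const logger = createErrorLogger((batch) => batches.push(batch), 2);
logger.log('TypeError: x is undefined', { browser: 'Firefox 20' });
logger.log('Image failed to load', { browser: 'Firefox 20' });
console.log(batches.length); // → 1 (flushed automatically at size 2)
```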

Logging stack traces can be very helpful in browsers that support them. You can integrate third-party services that do this for you.

The front-end operations engineer would encourage a very small tolerance for errors. Any error that happened would be investigated and either fixed or logged differently. With the data that comes back, they should be able to visualize groups of errors by browser or by state information from the application. A threshold of errors would be allowed to occur, and when that is passed, engineers would be notified. Severities would be assigned, and people would be responsible for getting patches out or rolling back as necessary (with quick patches being heavily favored to roll backs).

Much like today’s operations people focus on the security of the systems they manage, a front-end operations engineer would have probes for XSS vulnerabilities and would constantly be looking for holes in the app (along with the quality assurance team).

A front-end operations engineer would have an up-to-date picture of the state of the application in production. This is challenging in the front-end world, because your application doesn’t run on your machines — but that makes it even more necessary.

Keeping Things Fresh And Stable

My favorite quality of the good operations people I’ve worked with in the past is that they kept things up to date. For some applications, stability and security are so deeply necessary that caution is the larger priority; but in most cases, failure to keep dependencies and environments up to date is what causes applications to get stale over time. We’ve all worked on a four-year-old project in which all of the tools are very old versions of the ones we know, and getting good performance out of it is impossible.

A front-end operations engineer would be effective at keeping dependencies up to date and at removing cruft in systems. When the next version of jQuery is released, they’d use their skills to switch out the dependency in the application to work with the new version and then test it to validate the change. They’d keep Grunt up to date (and Node.js along with it). When WebP becomes viable, they’d automate moving the application’s images over to that format.

They’d work closely with the more architecture-oriented application engineers to make sure that the entire system still feels viable and is not lagging behind in any one area. They would keep on top of this stuff as often as possible. Updating a dependency here and there as you build is far easier than having a big “Update Everything” day. It encourages application developers to loosely couple dependencies and to build good, consistent interfaces for their own modules.

A front-end operations engineer makes it viable and fun to work on a project long after it’s new.

The Future

I’m sure plenty of commenters will tell me that these tasks have been going on for years, and plenty will tell me that they should be the concern of all developers on a team. I would agree with both statements. I am not introducing new concepts; I’m compiling tasks we’ve all been doing for years and giving them a name. I think this will help us build better tools and document better processes in the future.

The addition of this role to a team doesn’t absolve the other members of performance responsibilities. It’s just that right now, front-end operations are no one’s explicit priority on most of the teams that I’ve encountered, and because of that, they often get skipped in crunch time. I think there’s enough to be done, especially in the configuration and monitoring of these tools, outside of the normal job of a front-end engineer, to justify this role.

Most importantly, regardless of whether a new job comes from these tasks, or whether we solve the problem in a different way, we do all need to be conscious of the importance of solving these problems in some way. You simply can’t ignore them and still achieve reliable, robust, high-experience applications. Addressing these concerns is critical to the stability and longevity of our applications and to the happiness of programmers and users.

If we build with that in mind, it helps the Web win, and we all want the Web to win.


© Alex Sexton for Smashing Magazine, 2013.

May 06 2013


New Defaults In Web Design: How Much Has The Web Really Changed?


Responsive design is about more than just layout; it’s about designing for the Web, which means, mostly, for people with browsers. And that’s just about everything we know about the people who visit our websites: they are probably using a browser. All the rest we just don’t know.

Up until not so long ago, we used to base our designs on some rather general assumptions about screen size and input type. With the rise of devices with various screen sizes and alternative ways to interact, these assumptions have turned out to be unreliable. We need to upgrade the defaults that we use when we start designing our websites.

A Closer Look

People keep saying that the Web has changed. But has it really? Let’s take a look at all of the things that have actually changed.

Screen Sizes

In the 1990s, the Web was 640 pixels wide. In the early 2000s, it grew to 800 pixels. A few years later, we decided it should be 1024 pixels. But five years ago, all of a sudden, something strange happened. A device with a very small screen entered the market. Suddenly, our ideas about the size of the Web did not work anymore. Later on, tablets entered the market. People hold these things however they want. Today, the height of the viewport could be bigger than the width! But is that new? Not really.

Screen sizes, shown in a non-flexible medium. Photo and work by Aram Bartholl.
Screen sizes, shown in a non-flexible medium. (Photo and work: Aram Bartholl)

We never really knew what size the window of our visitors would be. We just assumed it was at least the random pixel width that we felt comfortable with. These numbers were always arbitrary, and there were always people who could not see the entire website. We simply ignored them.

“Everyone Has a Mouse”

We’ve always assumed that everyone uses a mouse. Even though we knew that this was not always true, most designs completely ignored alternative ways of interacting. People who had to use a keyboard, for whatever reason, had a very hard time interacting with our websites.

But because most people did use a mouse, and because back then many designers thought that designing only for the majority was OK, we created websites that were unusable for a lot of people. And this turned out to be a growing number. Many mouseover interactions are completely dysfunctional on a touch device. Because people love these devices, and even managers and designers use them, they are harder to ignore.

“Everyone Has Broadband Internet”

Another thing we always assumed was that everyone had a super-fast Internet connection, at least as fast as our own. And if they didn’t already have it, they’d have it soon. This was again mostly true; speeds were increasing. But today, more and more people use crappy, unreliable 3G connections all the time. If you’ve ever travelled on a train in The Netherlands, you know what I mean. And if you’ve ever had to rely on the mythical “free hotel Wi-Fi,” then you know for sure that the assumption about the ever-increasing speed of our Internet connections is just not true. This is a big change in our thinking; we really should consider these users. This will have a major impact on what our designs look like.

“Everyone’s Computer Gets Faster Every Year”

It used to be true that computers would get faster and faster. If you waited half a year before buying a computer, you would get one that was twice as fast, for the same price. This was true of new desktop computers, but mobile devices have priorities other than processor speed. The most important thing for a phone, for instance, is battery life: you really don’t want to have to charge it after every phone call.

And there’s another trend: instead of creating ever-faster devices, many manufacturers are starting to sell ever-cheaper devices. Many people care about price and battery life more than about processor speed. This is also not new: what happened to your old computers? You probably sold them or gave them away. People keep using old stuff. Not everyone has the same hardware as we designers do.

“All Monitors Are Calibrated”

Well, we always knew this to be untrue, right? Only the monitors of visual professionals are calibrated. Most other monitors don’t display colors accurately, and many monitors are downright crappy. Most mobile phones that I’ve tested have pretty decent screens, until you start using them outside, in the sunshine. If you’re lucky, you can read the content, but you definitely cannot see the subtle gradients in low-contrast designs.

I haven’t even mentioned “modern” black-and-white screens. These, too, are not new. People have always used crappy monitors, and people with bad eyesight have always visited your websites. It’s just that more and more people are seeing a subpar color palette. Instead of buying a state-of-the-art monitor, a better investment might be a cheap monitor and several low-end devices to test your work on.

All of these things are not new. In 2000, John Allsopp wrote the monumental article “A Dao of Web Design.” People such as Jeremy Keith and Roger Johansson have written about all of these facts for years and years. And yet, somehow, we’ve always managed to actively ignore them. But we really can’t anymore. The Web actually did change in the last five years, with new devices, new browsers and many, many cool new features. We need new defaults. The old ways of creating websites just don’t work anymore.

This Is Responsive, the excellent resource about responsive design by Brad Frost.
This Is Responsive, the excellent resource about responsive design by Brad Frost.

In the past few years, we’ve been actively researching new ways to deal with all of these different screen sizes. But apart from responsive design, there are many more challenges in today’s ever-growing pile of devices. We have to find new patterns of interaction: we need interfaces that work on any device. Maybe we have to reconsider that enormous photo carousel on the home page, now that we know that not everyone has a cheap and fast connection. New defaults are emerging, and I’ve collected a few for you here.

The things in this article are not new. Many clever people have written about them in many articles and many books. But these ideas, like all good stories, have to be repeated many times so that people understand and remember them.

New Default: Activate

I initially titled this section “New Default: Touch.” But I came to realize that “touch” has a different meaning for everyone. Some people, like me, think of a single tap when they hear the word. Others think about swiping and complex gestures. That’s why I settled on the heading “New Default: Activate.” All devices, no matter what kind of input they offer, let the user activate something in some way.

With a mouse, it’s a click; with a touch device, it’s a tap; on a keyboard, it’s the “Enter” key. There are ways to activate things by voice, and by waving your arms in the air. And many devices offer more than one way to interact. The only thing that all of these devices have in common is the action of activating. Most of them are capable of doing many other things, too, but all of them can activate stuff.

Only recently have we really started thinking about alternative methods of user input. We used to assume that everyone uses a mouse. Hiding content and showing it on mouseover was considered to be a decent design pattern. And it used to work for most people — until all of these wonderful touch devices entered the market. What should a device without a mouse do when content can be revealed only with a mouse? Different devices have different solutions. Let’s look at a simple drop-down menu.

You can find a live example of this navigation pattern right here.
See a live example of this navigation pattern.

When you hover over a menu item, a submenu appears. But apart from hovering over an item, you can also simply click on it to follow the link. Now, what should happen when you tap on the item with a touch device? Should the submenus appear, or should the link activate? Or both? Or should something else happen? On iOS, something else happens. The first time you tap a link like that, the submenu appears; in other words, the hover event fires. You have to tap a second time to actually follow the link. This is confusing, and not many people will tap a second time. On Android, the submenu appears and the link is followed simultaneously. I don’t have to explain to you that this is confusing.

It’s very well possible to think of complex solutions whereby you define different interactions for different input devices. But the better solution, I think, is to make sure that the default interaction, the activate event, just works for everybody. If you really need to, you could choose to enhance this default experience for certain users.
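
One way to sketch this “activate first” thinking in code is to make every activation follow the link by default, and to tie submenu opening to a separate, explicit toggle control. This is a simplified model, not a drop-in menu implementation; the item shape and target names are assumptions:

```javascript
// Sketch: a single "activate" decision for a menu item. The default
// is always to follow the link; opening a submenu is a separate,
// explicit enhancement, so no device ends up with hover-only or
// confusing double-tap behavior.
function onActivate(item, target) {
  if (target === 'toggle' && item.submenu) {
    return { action: 'toggle-submenu', submenu: item.submenu };
  }
  // Mouse clicks, taps and the Enter key all take this path.
  return { action: 'navigate', href: item.href };
}

// A hypothetical menu item with a submenu:
const products = {
  href: '/products',
  submenu: ['/products/phones', '/products/tablets'],
};

console.log(onActivate(products, 'link').action);   // → "navigate"
console.log(onActivate(products, 'toggle').action); // → "toggle-submenu"
```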

For instance, if you are certain that someone is using a mouse, you could enable some mouseover interactions. Or if you’re sure that someone has fat fingers, you could make small buttons a bit bigger. But only do so in addition to the default activate interaction, and only if there’s no doubt about it, and only if the enhancement would really make things better. Those are quite a few ifs, and some of them, such as the mouse usage, are very hard to detect — especially on devices that offer more than one way to interact, such as a laptop with an optional mouse, touch pad, camera, microphone, keyboard and touchscreen. Give it some serious thought. Do you really need to optimize for a mouse?

New Default: Small Screens

Growing is easy. Most things grow. Babies grow, trees grow, curious minds grow. They don’t grow by themselves, but you don’t need much energy to make things bigger. This is just what things do when they live. While shrinking things is definitely possible, it’s also much harder. You could, for instance, compress a car to a fraction of its original size. A compressed car does have a certain aesthetic appeal to it, but it is definitely not as useful as it was before. The same goes for websites. Shrinking a desktop website does not always result in a pleasant experience on a small screen.

Trees grow on their own, cars are less useful when they shrink.
Cedro di Versailles by Italian artist Giuseppe Penone clearly shows that things grow. On the other hand, the work Papalote Goliad by American artist John Chamberlain shows that shrinking can be aesthetically appealing but may result in less useful results.

To build a responsive website that works on all kinds of screens, designing for a small screen first is easiest. It forces you to focus on what’s really important: if it doesn’t fit in this small square, it is probably not terribly important. It forces you to think better about hierarchy, about the right order of components on the page.

The same principle that we follow for interactions — whereby we design the activate event first and enhance it later — applies to graphic design. We should start designing the things that we know everyone will see. That’s the content. No matter how big or small a screen is and no matter how minimal the feature set of a browser, it will be able to show letters. Because this is about the only thing we know for certain — since color is absent on most Kindles, most of the latest CSS doesn’t work on old browsers, and layout is of minor importance on small screens — starting with the text is logical.

I wrote an in-depth article about defining breakpoints on the basis of typography, so I won’t repeat every detail here. But the basic idea is that you start by designing the relationship between the different font sizes. Almost everyone, no matter what device they have, will be able to see this. When the typography is done, you would start designing the layout for bigger screens; you can think of this as an enhancement for people with bigger screens. And after that, when the different layouts are done, you could add the paint. And by paint, I mean color, gradients, borders, etc.

I’ve presented this as a very strict way of working; in real life, of course, things are not as rigid. I’m not talking about “activate only” or “small screen only.” When I say to start with typography, I don’t mean that you aren’t allowed to think about paint at the same time. Rather, I’m trying to find the things that all of these different devices, with all of their different screen sizes and all of their different features, have in common. It just seems logical to first design this shared core thoroughly. The strange thing is that this core is often overlooked: Web professionals tend to view their own creations with top-of-the-line devices with up-to-date browsers. They see only the enhancements. The shared core with the basic experience is often invisible.

New Default: Content

The way we designed our websites until recently was by putting a header with the logo and navigation at the top, putting the subnavigation on the left, putting some widgets on the right, and putting the footer at the bottom. When all of that was done, we’d cram the content into the little space that was left in the middle. All of the things we created first — the navigation, the widgets, the footer — they all helped the visitor to leave the page. But the visitor probably wanted to be there! That was weird. It was as if we were not so confident in our own content and tried our best to come up with something else that our guests might like.

But rather than pollute the page with all kinds of links to get people out of there, we should really focus on that thing in the middle. Make sure it works. Make sure it looks good. Make sure it’s readable. Make sure people will understand it and find it useful. Perhaps even delight them with it!

Once you’re done with the content, you can start to ask yourself whether this content needs a header. Or a logo. Or subnavigation. Does it need navigation at all? And does it really need all of those widgets? The answer to that last question is “No.” I’ve never understood what those widgets are for. I have never seen a useful widget. I have never seen a widget that’s better than white space.

A typical news site with more attention for widgets versus the complete focus on the content on Medium.
Compare a typical news website’s attention to widgets with Medium’s complete focus on content.

By starting with the content first, you can come up with some very interesting solutions. For instance, does the logo really need to be at the top of every page? It could very well go in the footer on many websites; such as in digital style guides or on pages for registered users. Many links that we used to put in the subnavigation might work better in relevant spots in the main content.

For instance, the option to add extra luggage to a flight booking might be most effective right there in the overview of the flight, instead of in the middle of a list of links somewhere on the left of the page. And when looking at the hierarchy of a page, does the main navigation look more important than the main content? Most of the time it shouldn’t be, and I usually consider the navigation to be footer content. A simple “skip” link at the top of the page could either take the visitor to the navigation or fetch the navigation and show it at the top of the page.

In this era of responsive Web design, we need many new clever solutions. As we’ve seen here, our old defaults don’t work anymore. We need to reconsider how we work with interaction, how we approach design and how we shape our content. But we need to think about one other very important thing, and that is where our content comes from.

New Default: The API

Luke Wroblewski wrote a fantastic article about designing an application for the command line first, and then enhancing it for different needs. This is not just a nerdy idea, but a very practical idea, too. If you are able to design and develop your own application, you could test the functionality relatively easily before even starting to think about what it will look like on different devices. This requires designers to work with developers to design a feature that at first works only from the command line. If the feature does not work as expected, then you merely have to change the API, rather than also a bunch of visual designs. Once the API works as you want it to, enhancing it for all of the devices and screen sizes that you want to support becomes easier.

Most of the time, you wouldn’t design the entire API of the application that you’re building. Most companies would choose a content management system (CMS) of sorts or a specialized tool to help them achieve what they want to do. I’ve always been amazed that CMSes are so often chosen only by technical people and business people. This causes many problems during the design process.

Developers and business people have different goals than designers. Developers want stuff that is easy to develop on. Business people want stuff that’s cheap. But designers want to make the best and most beautiful things possible. These goals can easily conflict.

I’m not saying that designers alone should choose the system, but they should definitely be a part of the decision-making process. I’m convinced that the selection of CMSes will improve. And I’m convinced that CMS makers will start to improve their products once designers get involved. Right now, all CMSes I know of deliver hostile cruft unless you tweak them extensively.

But it works the other way around, too. If designers are involved in the selection process, they will have a say in the choice of tool and will understand how it works, what’s possible, what’s easy and what’s hard. This will result in designs that are based in part on the tool, not just on imagination. This is an important part of the design process that has not yet been optimized. Right now, the command line and the systems that deliver the content we design for are the domain of the developers, and designers have nothing to do with them. That is a pity. Just as you would want to take advantage of the knowledge of developers in the design process, you would want to take advantage of the knowledge of designers in the development process.

Progressive Enhancement

If you review the sections above, you’ll see that what I’ve described is nothing other than progressive enhancement. You start with the content, then design the content and optimize it for different screen sizes and devices, and after that you can further optimize for very specific features such as mouse usage and fat fingers. Many Web developers build websites according to this principle. They transform the beautiful Photoshop documents that they receive into all of the different layers described above.

This can work out fine if the developer has a good sense of design and a delicate attention to detail. But if they don’t — which is often the case — this can easily result in crappy usability and ugly details. I’m not saying that designers shouldn’t use Photoshop anymore. If that’s your tool, go ahead and use it. But do remember that you’re designing the layers of the Web, not the layers in Photoshop. There’s much more to the Web than a single beautiful image. People will see our creations in innumerable ways. We design for all of these people — remember that. We don’t just design for the CEO with a laptop. We also design for the people on the train and the people with “free hotel Wi-Fi.”


I’ve mentioned Photoshop a few times because it’s still widely misused for designing websites. One reason we have a hard time with progressive enhancement in the design process is due to a lack of good Web design tools. The tools we use are built to wow; they mostly help you to create the “paint,” not to design the core. Fortunately, more tools are popping up with very specific functions in the design process. These are micro-tools such as the International Measure Slider, which helps you to define breakpoints in your grid; tools such as Gridset, which helps you to create grids for different screen sizes; and excellent tools that help you to define typography. By incorporating these tools into our design workflow, we might start making better stuff.


The Web has always been a weird, borderless, flexible medium. In the last couple of years, we’ve started to realize that designing for this medium is fundamentally different from the design work we’ve done previously. The fixed dimensions and the singular ways of interacting that formed the basis of all types of media that we’ve worked with for centuries just don’t work on the Web. This truly is a unique medium.

We have to find new defaults, new starting points for our design process. I’ve explained some of these new defaults here, but of course there are many more. The way we work with forms, for instance, could probably use a whole series of articles by itself. Some new starting points are well established by now, but I’m sure many more will be invented in the near future. I am curious to hear about new patterns and new defaults that you have discovered and have used successfully in your projects.


© Vasilis van Gemert for Smashing Magazine, 2013.

March 21 2013


Device Size Matters: Responsive Web Design With Physical Units


This post should be titled “Getting Ahead of Yourself.” “…By a Few Years,” actually. Here’s the deal: at the time I’m writing this, early 2013, there’s no way to accurately design for the Web using physical units, nor will there be for a very long time. But there is a way to design while knowing the physical characteristics of the device — or, at least, there will be in the very near future.

Mobile devices
Different devices can have a similar screen resolution, yet entirely different physical factors. iPad (1st generation) has the diagonal size of 9.7″, the resolution 1024 × 768 and 132 ppi. Kindle Keyboard 3G has the diagonal size of 6″, also the resolution 768 × 1024, yet 212 ppi. Image source: kodomut.

It’s called the “resolution media query”, and it’s been in the specification for media queries for some time. However, while it has been in the spec, that doesn’t mean anyone has actually implemented it yet. Fortunately, WebKit is leading the way and pushing for this feature to be implemented. So, how will we use this nifty little feature, exactly? Here’s how.

The Thin Line Between Queries

First off, I posit that there will be only one use case for a resolution-only media query. Something along the lines of

@media (min-resolution: 250dpi) {
    /* … */
}

has, at this time, only one good use: swapping out low- for high-resolution images. I’ve tried imagining other uses and, as far as I can tell, there just aren’t any. But resolution is not what we, as Web designers, are truly interested in. Since we are designing for humans, shouldn’t we be thinking about the physical side of human data consumption and designing using this kind of a metric? And in a perfect world we could simply say width: 1in and have a one-inch wide element, regardless of the device. Unfortunately, we live in a digital world in which the physical and digital pixels are not the same. We need something to bridge the gap. That something is the resolution media query.

Good. Now that that’s out of the way, let me show you how this one little piece of code can make so much difference that your head will promptly explode. (I take no responsibility for actual blown heads as a result of this post.)

Let’s compare two media-query declarations:

@media (min-resolution: 341dpi) and (min-width: 767px) {
    /* … */
}

@media (max-resolution: 131dpi) and (min-width: 767px) {
    /* … */
}

At first glance, this doesn’t seem like much of a separation, right? Wrong. The numbers I’ve used are specific to the HTC Windows Phone 8X (the first snippet) and the iPad 2 (the second snippet). By using the resolution query, one can basically separate physically small devices from large devices.

As it currently stands, a query that looks like @media (min-width: 767px){ } will affect both the HTC and the iPad, with no other possibility of separation, because both have a resolution that is 768 pixels wide. In fact, the iPad has a lower resolution, at 1024 × 768, whereas the HTC is 1280 × 768. In case you haven’t realized yet, the problem with all of this is that the iPad is a 10-inch device, while the HTC is a 4.3-inch one. That’s less than half the physical size!

By using the resolution media query together with a width query, we can distinguish between physically small and large devices and adjust design elements and layouts accordingly. As mentioned above, screen resolution isn’t really what interests us, since we use logical breakpoints in responsive design; but it is useful to know whether a website is being displayed on a physically small or large display (for example, to increase the font size or rearrange design elements in the layout). But where do we draw the line between small and large? Quite simply, we can’t. Each of us has to draw the line, possibly on a project-by-project basis, between “This is a small device” and “This is a large device.” As far as ballpark numbers go, I’ve done a few calculations and developed a theorem that should give you a rough idea of how this works.

The Physical Size Inquiry Non-Exhaustive Theorem (PSINET)

Here’s the theory: take the smaller of the two dimensions of a device’s resolution and divide it by its pixel density (PPI). The result, called the PSINET score, is in effect the physical length, in inches, of the screen’s shorter side. If the score is higher than 5, the device falls into the category of physically large devices. If the score is lower than 5, it is a physically small device. Devices that score very close to 5 are considered to be medium-sized, close to the physical size of an A4 sheet of paper (21 × 29.7 cm).
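
The theorem is straightforward to express in code. Here is a sketch in JavaScript; the “very close to 5” band is not defined precisely above, so the cutoffs in `classify` are my own assumption, and rounding may differ from the tables below by a hundredth:

```javascript
// Sketch of the PSINET score: the smaller dimension of the screen's
// resolution divided by its pixel density. That quotient is the
// physical length, in inches, of the screen's shorter side.
function psinetScore(widthPx, heightPx, ppi) {
  return Math.min(widthPx, heightPx) / ppi;
}

function classify(score) {
  // The "very close to 5" band is not defined precisely in the
  // theorem; a band from 4 to 6 is an assumption for this sketch.
  if (score > 6) return 'large';
  if (score < 4) return 'small';
  return 'medium';
}

// Apple MacBook Pro RD: 2560 × 1600 at 227 PPI
console.log(psinetScore(2560, 1600, 227).toFixed(2)); // → "7.05"
// iPhone 5: 640 × 1136 at 326 PPI
console.log(psinetScore(640, 1136, 326).toFixed(2));  // → "1.96"
// Apple iPad: 1024 × 768 at 132 PPI
console.log(classify(psinetScore(1024, 768, 132)));   // → "medium"
```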

Here’s a non-exhaustive list of devices to test the formula above. For each device, I’ve listed its diagonal size, resolution, pixel density and PSINET score.

Physically Large Devices

Device name           Diagonal size (inches)  Resolution   PPI  PSINET score
Apple iMac            27                      2560 × 1440  109  13.00
Sony Vaio F           16.4                    1920 × 1080  134   8.05
Apple MacBook Pro RD  13                      2560 × 1600  227   7.04

Physically Small Devices

Device name                 Diagonal size (inches)  Resolution   PPI  PSINET score
Sony PSP                    4.3                     480 × 272    128  3.75
Kindle Keyboard 3G          6                       768 × 1024   212  3.62
Kindle Fire                 7                       1024 × 600   169  3.55
Samsung Galaxy S            4                       480 × 800    160  3.00
Samsung Galaxy NoteII       5.5                     720 × 1280   267  2.69
Samsung Galaxy S IV         5                       1080 × 1920  441  2.62
HTC Butterfly               5                       1080 × 1920  441  2.62
Samsung Galaxy Grand I9082  5                       480 × 800    187  2.56
Palm Pre                    3.1                     480 × 320    186  2.50
Sony Xperia Z               5                       1920 × 1080  443  2.43
Samsung Galaxy SIII         4.8                     720 × 1280   306  2.35
LG Nexus 4 E960             4.7                     768 × 1280   318  2.41
Nokia Lumia 920             4.5                     1280 × 768   332  2.31
HTC One                     4.7                     1080 × 1920  469  2.30
HTC One X                   4.7                     720 × 1280   312  2.30
HTC Desire HD               4.3                     480 × 800    217  2.21
BlackBerry Q10              3.1                     720 × 720    328  2.19
BlackBerry Z10              4.2                     768 × 1280   355  2.16
Motorola Droid X            4.3                     854 × 480    228  2.10
Sony Ericsson S             4.3                     720 × 1280   342  2.10
Motorola RAZR i XT890       4.3                     540 × 960    256  2.10
iPhone 5                    4                       640 × 1136   326  1.96
Apple iPod Touch            3.5                     960 × 640    326  1.96
Nokia Lumia 620             3.8                     480 × 800    246  1.95
HTC Wildfire                3.2                     240 × 320    125  1.92
Nokia Lumia 710             3.7                     800 × 480    252  1.90
Motorola Defy               3.7                     854 × 480    265  1.81
LG Optimus One              3.2                     320 × 480    180  1.77
Nokia N96                   2.8                     240 × 320    143  1.67
Sony Ericsson W810i         1.9                     176 × 220    148  1.18

Medium-Sized Devices

Device name | Diagonal size (inches) | Resolution | PPI | PSINET score
Apple iPad (1 & 2) | 9.7 | 1024 × 768 | 132 | 5.81
Apple iPad (3rd Gen) | 9.7 | 2048 × 1536 | 264 | 5.81
Amazon Kindle DX | 9.7 | 824 × 1200 | 150 | 5.49
Acer Iconia Tab A500 | 10.1 | 800 × 1280 | 149 | 5.36
Samsung Galaxy Tab | 10.1 | 1280 × 800 | 149 | 5.36
Motorola Xoom | 10.1 | 1280 × 800 | 149 | 5.36
Asus Transformer Pad Infinity | 10.1 | 1920 × 1200 | 224 | 5.35
Microsoft Surface | 10.1 | 1366 × 768 | 148 | 5.18
Asus VivoTab RT TF600T | 10.1 | 1366 × 768 | 155 | 4.95
iPad Mini | 7.9 | 768 × 1024 | 162 | 4.74
Amazon Kindle Fire HD | 8.9 | 1920 × 1200 | 254 | 4.72

Is this method of determining device size foolproof? Hardly — that’s why it’s a theorem. It’s based on solid reasoning and empirical evidence and has come about by using the scientific method, but it is not a rule, law or axiom. Take it with a pinch of salt (or, better yet, a truckload of NaCl) and refine it. It is a theorem, a proposition, to be remembered in the future when the resolution media query and our work with it become a mainstay of the Web.

Breaking the Theorem

Like any self-respecting follower of the scientific method, I’ve tried to break my own theorem. Thus, I imagined a freak of a device, 2 inches wide and 20 inches long, putting its diagonal size at 20.09 inches, with a 24 × 240 pixel display, yielding a resolution of just 11.94 PPI. It gets a PSINET score of 2.01, which puts it well into the small device category, even though it’s about half a meter long. The reason is simple: that 2-inch-wide dimension. Because the PSINET score ignores the longer of the device’s physical width and height, the greater the difference between those two dimensions, the less accurate the score will be. Sure, this beast of a device is unlikely to ever become reality, but it’s worth understanding why it breaks the theorem.

Device name | Diagonal size (inches) | Resolution | PPI | PSINET score
ThinLong | 20.09 | 24 × 240 | 11.94 | 2.01
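Plugging the ThinLong figures into the same calculation shows the misclassification directly (a sketch; the helper simply divides the smaller pixel dimension by the PPI):

```javascript
// The score only ever sees the device's smaller dimension, so the
// 20-inch length of this imaginary device is invisible to it.
function psinetScore(widthPx, heightPx, ppi) {
  return Math.min(widthPx, heightPx) / ppi;
}

const thinLong = psinetScore(24, 240, 11.94);
console.log(thinLong); // ≈ 2.01, well below 5: "physically small"
```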

Real-World Applications

Apart from the obvious visual changes and tweaks mentioned above, there are other ways to use the resolution media query.

Enter Enquire.js. For those of you who haven’t heard of it, it’s a very nice JavaScript library that helps you execute particular scripts on matching media queries.

We could use Enquire.js or even just window.matchMedia, which is a native JavaScript method, to differentiate between mobile, tablet and computer users much more reliably than by using width queries alone. Here’s a not-very-polite example using Enquire.js:

enquire.register("screen and (max-resolution: 150dpi) and (max-width: 300px)", function() {
    alert('My, what a small screen you have there, Grandma!');
});

Combining media query types with CSS and using a resolution-aware JavaScript library is just the right formula to give us real future-proof control over what I call the “physical Web.” Imagine being able to view a priceless sculpture located in a museum halfway across the Earth on a 1:1 ratio on any device, or shopping for an engagement ring online and seeing exactly how big that 24-carat diamond is. The real-world applications, pun intended, are nearly endless.
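For example, here is a sketch of the arithmetic behind 1:1 rendering, assuming the device’s true pixel density is known (say, from a resolution media query); the helper name is hypothetical, and note that CSS’s own in unit is always 96 CSS pixels, which is why physical sizing needs the real PPI:

```javascript
// Hypothetical helper: convert a real-world length to device pixels,
// given the true pixel density of the display in pixels per inch.
function physicalLengthToPixels(lengthCm, ppi) {
  const inches = lengthCm / 2.54;   // centimeters → physical inches
  return Math.round(inches * ppi);  // inches × pixels-per-inch → pixels
}

// A ring 2 cm across, rendered at true size on a 326 PPI display:
console.log(physicalLengthToPixels(2, 326)); // 257 device pixels
```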

In our world of responsive Web design, we’d very much like to provide the best experience to users, whatever their device. Without a reliable resolution media query, that task becomes less of a challenge and more of a windmill fight. Assigning blame is pointless, because none of us can do anything to change the current ecosystem of devices. Manufacturers will continue to put out devices with resolutions and pixel densities that they’ve pulled out of their butts, and that’s fine — that’s their business. Staying on top of the situation by providing us designers with the tools we need (but can’t easily build ourselves) to create the best user experience possible is the job of browser makers, and I salute the good people at WebKit for spearheading the implementation of the resolution media query.


© Radu Chelariu for Smashing Magazine, 2013.

September 04 2012


The State Of E-Commerce Checkout Design 2012 // UX Study


A year ago we published an article on 11 fundamental guidelines for e-commerce checkout design here at Smashing Magazine. The guidelines presented were based on the 63 findings of a larger E-Commerce Checkout Usability research study we conducted in 2011 focusing strictly on the checkout user experience, from “cart” to “completed order”.

This year we’ve taken a look at the state of e-commerce checkouts by documenting and benchmarking the checkout processes of the top 100 grossing e-commerce websites based on the findings from the original research study. This has led to a massive checkout database with 508 checkout steps reviewed, 975 screenshots, and 3,000+ examples of adherences and violations of the checkout usability guidelines.

Here’s a walkthrough of just a handful of the interesting stats we’ve found when benchmarking the top 100 grossing e-commerce websites’ checkout processes:

  1. The average checkout process consists of 5.08 steps.
  2. 24% require account registration.
  3. 81% think their newsletter is a must-have (opt-out or worse).
  4. 41% use address validators.
  5. 50% ask for the same information twice.
  6. The average top-100 checkout violates 33% of the checkout usability guidelines.

In this article I’ll go over each of them and explain exactly what’s behind these numbers, showing you some real-life implementations of dos and don’ts when it comes to checkout processes.

The Average Checkout Process Consists Of 5.08 Steps (But It Doesn’t Influence Usability Too Much)

The average checkout consists of 5.08 steps, counting from the shopping cart to the step where the order is actually placed — often a “review and confirm order” step. The shortest checkout process is one step (including cart), and the longest is a massive nine steps.

Average Number Of Checkout Steps

Above you see how the top 100 grossing e-commerce websites are distributed by number of checkout steps. Note that only a single website had one step (including cart), so the “average” for that group rests on a single data point and shouldn’t be given too much weight.

Score As A Function Of Steps

Above, we’ve plotted the websites grouped by number of checkout steps along the x-axis, with each group’s average checkout usability score on the y-axis. As you can see, up until six checkout steps there isn’t a noticeable relation between the number of checkout steps and the quality of the user’s checkout experience. This matches the behavior of the test subjects we observed during the checkout usability test back in 2011. What matters most for the checkout experience isn’t the number of steps in a checkout process, but rather what the customer has to do at each step.

With that being said, there does seem to be an upper limit to the number of steps practically achievable in a checkout process before it begins to hurt the checkout experience. The websites with eight or nine steps accumulated a significantly lower score in checkout usability than the rest of the checkout processes. This is often a result of required account registration (which typically induces more steps and is bad for checkout usability), as well as the fact that websites that end up with eight or more checkout steps simply have more chances to screw up the experience for their customers. At the time of testing, these were the websites with eight or more steps: Sephora (8), Amazon (8), Peapod (8), Sony (8), Safeway (9), ShopNBC (9) and W.W. Grainger (9).

To recap: don’t focus too much on the number of steps in your checkout — instead spend your resources on what the customers have to do at each step, as that is what matters most for the checkout experience. Three examples of this are the checkout processes of Apple, Walmart and Gap, which are all seven-step checkouts that perform approximately 50% higher than the average of the top 100 grossing checkouts (not to say that they are perfect; there is still room for further checkout improvements). While in theory it is possible, in practice none of the benchmarked websites with eight or more checkout steps had a checkout process that wasn’t greatly under-delivering on the checkout user experience for a new customer.

81% Think Their Newsletter Is A “Must Have” (And Don’t Value Customer Privacy)

81% of the 100 largest e-commerce websites “assume” that their customers want their promotional emails by having a pre-checked newsletter checkbox (or worse) at some point during checkout.

Sephora Pre-Checks The Newsletter Box
Large view.

During the user testing, we heard the same complaint over and over again: people hate creating an account when buying online. When we asked the test subjects why, 40% told us that they “didn’t want any newsletters”. One reason customers hate being required to create an account to complete a purchase is that they have a mental model of account = newsletter.

For years, websites, including e-commerce websites, have tricked customers into “accidentally” signing up for newsletters they didn’t want by visually downplaying a pre-checked “subscribe to newsletter” checkbox. So people have come to expect that when they sign up for a new account, they also sign up for a newsletter, or “spam” (as more than half of the test subjects referred to such newsletters).

This mental model sadly isn’t just a misconception, but evidently something learned the hard way. Pre-checking the newsletter checkbox is one thing, but of the 81% of websites that think their newsletter is a “must have”, 32 of them proceed to do something even worse than pre-checking a checkbox:

Amazon Checkout Step 3
Amazon is just one of the 32% of the top 100 grossing e-commerce websites that automatically sign customers up for their newsletters without clearly informing them (only via the privacy link) and without giving an opt-out option during checkout. Large view.

These 32% automatically sign up their new customers for their newsletters with no way of opting out during the checkout process, often burying this fact deep down in their privacy policy. Typically, the only way for customers to “opt out” on these websites is either via a privacy tab in an account settings section (if they were forced to register for an account) or via an unsubscribe link in the newsletters that they will automatically start receiving.

So the account = newsletter mental model the test subjects displayed is something they learned from shopping at websites such as these 32%. Only 8% of the top 100 e-commerce websites value their customers’ inboxes and ask them to opt in if they want to receive newsletters (as do the remaining 11%, which don’t offer newsletter subscriptions at all during checkout).

24% Require Account Registration

To put it differently: 24% don’t offer the customer a “guest checkout” option when placing an order, but force them to create accounts on their websites.

Sony Electronics Checkout Step 2 Account
Sony (step 2) is just one of the 24% that require every new customer to register for an account when placing an order. Large view.

During the checkout usability study, we (as many others have before us) identified multiple reasons why potential customers resent being forced to register for an account just to place a simple order. We’ve already touched upon one of them, the mental model of account = newsletter. But let’s quickly list a handful more that we found during the study:

  1. Signing up for an account means more steps and form fields to complete during checkout — essentially taking longer to complete.
  2. Most customers already have a myriad of logins and passwords to remember and don’t want more of them.
  3. When creating an account, customers are more likely to realize that you’re storing their information indefinitely.
  4. Many customers just don’t understand why they need an account to buy a product. As one test subject clearly expressed during testing: “I don’t need to sign up for anything when I’m buying a perfume in a regular [brick and mortar] store.”

Nordstrom's Checkout Process Step 3
Nordstrom (step 3) is one example of the 76% of the top 100 grossing e-commerce websites that offer new customers the much-appreciated “guest checkout” option while at the same time offering easy, optional account registration. Large view.

When you do it right (as 76% of the e-commerce websites have done) and provide the much-appreciated guest checkout option, you still have the possibility of asking for an optional account creation along with (or after) the purchase. This can be done simply by creating a short section with a brief description and an optional password field. During the checkout usability study, no test subjects were put off by this approach; they just left the optional field(s) blank if they weren’t interested in creating an account with that particular website. But they generally liked the option on websites where they were interested in becoming repeat customers.

If we look into the type of websites that typically require account registration, there is a slight tendency towards them being the highest grossing websites:

Require Registration Compared To Size

Of the 23 websites with more than $1 billion in online sales (Internet Retailer 2010 sales estimates), 35% required account registration, whereas among the rest, grossing between $148 million and $1 billion, only 21% required account registration during checkout.

41% Use Address Validators

Of these 41%, 12% (relative) don’t allow their customers to override the validation mechanism in case the address isn’t recognized (even when the customer is absolutely sure the address is correct).

Amway is one example of the 12% (relative) that don’t allow the customer to proceed in any way when the address validator is outright wrong or the address validation database isn’t properly updated. Large view.

An address validator can be a smart way to avoid common customer typos that might cause shipping problems and would otherwise result in undelivered or delayed orders. But street names, postal codes, etc. aren’t consistent, nor permanent. So the possibility still exists that it’s the address validation mechanism/database that is erroneous — not the customer’s input. The subset of websites that don’t allow the customer to force-proceed past a potentially wrong address validator (at the time of testing: Office Depot, ShopNBC, Amway Global, FreshDirect and CafePress) leave the customer with no other option but to abandon the purchase, as they are technically locked out of completing the checkout process.

Overstock Adhered
A decent implementation by Overstock (step 3) that informs customers that the typed address doesn’t match the address validation — and is therefore likely to be wrong — while still giving them an option to force-proceed.

The advisable approach — implemented by the vast majority of the 41% of websites utilizing address validators — informs the customer that the typed address doesn’t match, yet still allows them to force-proceed if they are sure that the address is right.
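A minimal sketch of that pattern, with a hypothetical in-memory stand-in for the validation database (the function and variable names are mine):

```javascript
// Hypothetical sketch: validate an address but never lock the customer out.
// `knownAddresses` stands in for a real address-validation database.
const knownAddresses = new Set(['123 Main St, 10001']);

function checkAddress(input, userConfirmedOverride) {
  if (knownAddresses.has(input)) {
    return { proceed: true, warning: null };
  }
  // Unrecognized address: warn, but let a customer who is sure force-proceed.
  return {
    proceed: Boolean(userConfirmedOverride),
    warning: 'We could not verify this address. Please double-check it.'
  };
}

console.log(checkAddress('123 Main St, 10001', false).proceed); // true
console.log(checkAddress('1 New Street, 99999', true).proceed); // true (override)
```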

50% Ask For The Same Information Twice

Instead of pre-filling the already typed-in information for the customer, 50% of the e-commerce websites add needless friction to their checkout experience by asking for the same information more than once. This rarely happens on the same page (although it does happen) but most often across multiple pages. Sometimes it’s the customer’s name that isn’t pre-filled from the address step to the billing step. Other times it’s the zip code that the customer provided at the cart step (e.g. for a shipping calculator) which isn’t pre-filled at the shipping address step. It’s only fair to assume that in most cases the zip code users calculate shipping to is also the zip code they plan on shipping the order to.

Apple Step5 Crop
Apple is one of the 50% of e-commerce websites that ask for the same information more than once. At the fifth checkout step, the billing email address isn’t pre-filled — even when the customer clicks the “Same as shipping information” link. Large view.

Retyping information is a tedious task on a regular computer, but on a mobile device most users will find it outright annoying. Considering that all the benchmarked websites gross $148+ million per year in online sales, it seems rather sloppy that only half of them have dedicated the resources to removing needless checkout friction by ensuring that they don’t ask for the same information more than once (across multiple pages).

Hayneedle Step2 Cropped
On the path to reducing needless checkout friction, only 10% of the websites helped their customers by pre-filling the state and/or city fields based on the zip code provided. Hayneedle (step 2) was one of them. The result: three fewer fields for the customer to fill in, plus shipping dates and costs already updated on page entry. Large view.

On the same note of reducing needless checkout friction, only 10% of the websites helped their customers fill out even fewer form fields by pre-filling the state and/or city fields based on the zip code that the customer provides.
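A minimal sketch of such a zip-based prefill, using a hypothetical in-memory lookup table in place of a real postal-code service:

```javascript
// Hypothetical zip → locality lookup; a real store would query a
// postal-code service or database instead of a hard-coded table.
const zipTable = {
  '10001': { city: 'New York', state: 'NY' },
  '94103': { city: 'San Francisco', state: 'CA' }
};

function prefillFromZip(zip) {
  const match = zipTable[zip];
  // Return empty strings for unknown zips so the customer can type them in.
  return match ? { ...match } : { city: '', state: '' };
}

console.log(prefillFromZip('94103')); // { city: 'San Francisco', state: 'CA' }
```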

The Influence Of Revenue And Industry

The e-commerce websites grossing above the $1 billion mark scored 44% worse on checkout usability (for a first-time customer) than the e-commerce websites grossing below $1 billion.

When taking a closer look at the checkout experience of the 23 websites that gross over $1 billion, it’s likely that some of that gap exists because these websites are more focused on forcing as many customers into their account eco-system as possible. Furthermore, the top grossing e-commerce websites also tend to be the ones with the most complex marketplace systems. These marketplace systems often end up inducing a lot of derived complexity into the checkout process, due to shipping and legal constraints, for a deal where the website only acts as the middleman. In comparison, some of the “smaller” websites in the top 24 to 100 grossing range had one simplified goal for their checkouts: to let the customer move as swiftly as possible through the checkout process.

Usability Score Vs. Online Sales Scatterplot
All of the top 100 e-commerce websites plotted with checkout usability score on the y-axis and online sales on the x-axis (logarithmic scale). Notice that the vast majority of checkouts that scored highest on checkout usability are below the $1 billion sales mark. Large view.

If we take a look at specific e-commerce industries, the Automotive Parts industry had much better checkout usability than the rest of the industries (scoring 110% higher) whereas the Office Supplies industry scored the lowest (38% lower than average). Food & Drugs followed right behind in providing the worst checkout experience.

It’s interesting to see that the three Office Supplies websites have very similar checkout processes. In fact, their checkouts are almost identical; have a look at Staples’ checkout, Office Depot’s checkout and OfficeMax’s checkout. I’m not going to speculate on who “was inspired” by whom, nor does it really matter. But in the Office Supplies industry it’s unfortunate, because as a consequence they all suffer from a very sub-standard checkout experience (38% lower than the average). While it’s clear that some of the top 100 e-commerce websites use the same system vendor (and thus end up with similar features and sequences in their checkout flow), this tendency toward similar checkouts between competitors wasn’t noticed to nearly the same degree in some of the other e-commerce industries (e.g. Electronics).

The General State Of E-Commerce Checkouts

If we take an overall look at the top 100 grossing e-commerce checkout processes, the average checkout violates 21 checkout usability guidelines. This is an indication that checkout improvements are still much needed if the average cart abandonment rate of 65.95% is to be lowered (“50% Ask For The Same Information Twice” also points in this direction).

This overall lack of checkout quality — even among the highest grossing e-commerce stores — is hardly rooted in an unwillingness to improve the checkout experience, but is most likely due to a combination of factors, such as:

  1. Flows are much more difficult to improve than single pages.
  2. Checkouts often need deep back-end integration, and thus require more IT capabilities to modify and test.
  3. Checkouts haven’t been on the agenda for top management (although I believe this has changed a lot in recent years).
  4. For most designers, checkouts are much duller to work on than product pages, home pages or new ad campaigns.
  5. In a few cases, a poor user experience can still be good for business, at least in the short run (e.g. sneaking people into your newsletter).
  6. No Web convention for a checkout process exists.
  7. “Best practice” for checkout design is scattered and scarce (only two to three research-based resources exist).
  8. Feedback from those who use the checkout process is several degrees of separation away from those who design and develop it.
  9. Improving a somewhat-optimized, decent checkout rarely means 1 to 3 “big fixes”, but more likely 10 to 30 smaller changes.

If you want to further examine the checkout processes and flows of the 100 top grossing e-commerce websites for yourself — without filling out some 1,300 form fields, as we have done — you can do so in the free part of the 2012 E-Commerce Checkout Benchmark, as we’ve decided to make that part of the database publicly available.


© Christian Holst for Smashing Magazine, 2012.

August 09 2012


The Art Of Staying Up To Date


An important part of our job is staying up to date. Technologies don’t really change that fast — HTML5 and CSS3 take a long time to be specified and implemented. But the ideas surrounding these technologies and the things we can do with them are constantly evolving, and hundreds of blog posts and articles are published every day. There’s no way you can read all of those but you’ll still have to keep up to date. Here are some tips on doing that while still having some time left to work.

Ideas Surrounding These Technologies and the Things we can do With Them are Constantly Evolving


The hardest part of staying up to date is not reading too much: so many articles are published daily that you need filters. It’s unfortunately hard to make a living by reading articles all day, so you don’t want to read marginally interesting stuff, and you don’t want to read too much; you only want to read relevant stuff. You could try to automate this filtering, but I’ve found that the best filters are actually people and time.


Some people produce lots and lots of ideas. Not all of these ideas are worth your time, but some of them are excellent. If you follow these people directly there’s a lot of noise you have to filter and you need a good sensor to recognize the good stuff. A very easy way to solve this is to not follow them directly but only follow the people surrounding them — they will do the filtering for you. If there’s an excellent idea, they will link to it. So in order to keep your sanity, don’t follow loudmouths (follow their more silent friends).

Don't Follow Loudmouths Directly

This tip works very well for Twitter, but it works for blogs as well. Don’t follow overactive sources, follow the people who follow these sources.


A few years ago I noticed that my RSS feeds had started to dry up — especially blogs with opinionated articles. Articles where many people would leave their comments were all of a sudden gone. These discussions had moved to Twitter overnight. That’s the reason I started tweeting (although I have to admit I was addicted to it within a week). If you tend to your Twitter stream with care, it can become a very valuable source of good and relevant information. But if you follow the wrong people, or too many people, it will be exactly the opposite. My stream consists mostly of people who generally agree with each other. This means that it usually isn’t filled with tedious discussions about irrelevant details that can easily grow to gargantuan proportions. Now, I’m not saying you shouldn’t listen to people you don’t agree with; I just think that Twitter is not the right place to follow them.


Related to this Twitter-management (where I try to avoid heated discussions) is this other excellent filter I use: time. I almost never read articles the moment they are published, I wait a few days, or weeks or even months. If they are still interesting after a period of time, they are worth reading. You’ll see that lots of stuff is outdated even after a few days. Many articles are written in an emotional state, and many responses to these articles are written with even more emotion. These rows can certainly be entertaining, but they are rarely interesting after a week. I use Pinboard to create this buffer of unread articles, but there are many other handy tools available like Instapaper or Pocket (or you could just use your browser’s bookmark functionality).

Being up to date isn’t about knowing the latest trends and keeping track of all the gossip, it’s about knowing the right stuff by reading only the right stuff. But it isn’t just about reading the right stuff, it’s also about remembering it.

Back Up Your Knowledge

The good thing about our current era is that we don’t have to learn everything we read by heart: we have computers these days to do the remembering for us. We just have to make sure that our computer can find the stuff we want it to remember. So create a database of links to the interesting articles you read. I always write a small comment with these links when I save them to Pinboard; this way I can easily find them when I need them. You could buy the archival option from Pinboard, which makes it even easier to find older articles. I also created some IFTTT rules to back these links up to Evernote and Dropbox. I don’t want to depend on one tool, so I spread my knowledge around.

Use Your Knowledge

A very important part of understanding a new technique or design trick is playing with it. You could of course immediately start using it in a big production website (or you could just try it out first). There are many tools out there that make it easy to test snippets of code, like the amazing Dabblet and the incredible JS Bin. Playing around with code snippets that you find in articles will greatly improve your understanding of how things work.


There are many tools you can use for gathering and keeping your knowledge (and I already named quite a few). Here are a few more:


I use YoruFukurou as my Twitter client. It’s an unobtrusive client with some very handy tools for power users, like muting certain words. It has some very handy advanced custom filter options as well. Tweetbot is a similar tool that works especially well on iOS devices. I fave every tweet that might have an interesting link (yes, that’s why I fave all of your tweets, but I’m not stalking you). All of these faves are automatically stored as unread items in a Pinboard account.


I read my feeds using the excellent self-hosted feed reader Fever. It has a feature that detects which articles are hot by checking how many people link to them. It uses the clever principle of Sparks (feeds that link to interesting things but are not worth following themselves) to determine what’s hot. You can save articles for later (and yes, these articles are also saved as unread items in my Pinboard account as well).

I Use Fever to Read My Feeds


As I mentioned before, by creating some clever filters you can make sure that your list of unread articles is manageable. But reading the articles and actually doing something with that knowledge can be pretty time-consuming. Every now and then I hit one of my two Pinboard bookmarklets that either show me the oldest unread item or a random one. As I said, many articles are outdated after a few days (but still many remain to be read). If an article is small, I read it right away. If it’s very long and very interesting, I either e-mail it to myself or I save it to Instapaper.

I save every article that is worth remembering to a second Pinboard account using Delibar, with a small comment and a few tags attached to it. There are many more ways to better organize these links, but this system works for me (I usually find a link I need within a few seconds).


IFTTT is a very handy tool that connects Web services. I use it to store my bookmarks on as many locations as possible. For instance, every article I save to my second Pinboard account is saved to Evernote and Dropbox. This makes it easy to access all these bookmarks from every device I use with specialized tools like nvAlt.


This whole article is just about staying up to date by reading articles, but one of the best ways to stay up to date is by talking to people. In real life you can talk to colleagues or attend conferences and workshops, as there are many regular meet-ups of like-minded people all around the world. You can use things like Twitter or IRC to start discussions or ask questions, or post your question on one of the many online fora out there.

Talking About Your Work is a Great Way to Form an Opinion

Other tools

There are many other tools out there that can help you with staying up to date. Many people use Instapaper, Delicious or Pocket to collect links. Others use email to send these links to themselves. Some people use the native bookmarks of their browser and others write their own bookmarking service.


Professionally, I specialize in HTML and CSS, and I have an interest in Web design and some other areas. Since I have expert knowledge of CSS, it doesn’t make much sense for me to follow websites that offer CSS tutorials for beginners. So on this particular subject I follow the real experts and even the people who write the specs: my knowledge of CSS has to be more than up to date. Bas Poppink, a colleague of mine, calls this principle following the sources of your sources until you find the headspring. I call it the Poppink principle. So if you’ve outgrown tutorials, ask the authors of those tutorials which websites and people they follow.

What sources are right for you depends on a lot of things, like your experience and your fields of interests. Below you’ll find some of my sources. You might find some interesting stuff in there…

My Sources

My main source of information comes from people who tweet something that might interest me. Twitter is also great for discussing articles and opinions, or asking for advice. But there is more…


There are some feeds I rely on: the bookmarks saved by Jeremy Keith, Kazuhito Kidachi, Peter van der Zee, and Paul Irish. They usually add a helpful description to their bookmarks. There are a few people who regularly post high quality reading lists: you should definitely follow a few of those too, if not all. The rest of the links are distilled from a somewhat random collection of ancient and newer RSS feeds that definitely need some weeding. Do you really want to know what they are? Here is the OPML file.

But you’ll probably be better served by the excellent collection of Front-End and Web Standards feeds that Paul Irish curates. He also points at these great weekly email newsletters about JavaScript, Web design, CSS and HTML5. Definitely worth a follow if email is more your thing.

Your Own Sources

Whether you want to be the very best in your profession or someone who is good enough, staying up to date is essential for every professional. The exact people and feeds to follow depends on your own interests. Just take your time to find and collect them and be sure to critically look at them every now and then. Also, what tools you choose to use in order to stay up to date is totally up to you, as there are many more ways to stay up to date than I described here. I hope that this article somehow helps you in finding the right sources and in creating your own, better, flow of information.

Image source of picture used on front page.


© Vasilis van Gemert for Smashing Magazine, 2012.

March 01 2012


The Basics Of Creating A Magento Module


A lot of community extensions (or modules) are available for the feature-rich open-source e-commerce solution Magento, but what if they don’t quite work as you want them to? What if you could understand the structure of a Magento module a little better, to the point that you could modify it to suit your needs or, better yet, write your own module from scratch?

In this tutorial, we will introduce the coding of Magento in the form of a “Hello World”-style module. The goal of the module will be simply to write some information to a log file every time a product is saved. This very basic module will allow us to cover a number of interesting topics, including:

  • The app/code directories,
  • The structure and creation of a Magento module,
  • Event observers,
  • Logging.

Before We Begin

This tutorial assumes that you already have an installation of Magento up and running, either locally or on a development server, that you can add new files to. The version of Magento that you use doesn’t really matter, because we will be covering fundamental aspects that exist across all versions and editions: Community, Professional and Enterprise.

Disable the Cache

This is one of the first lessons a Magento developer should learn: disable the cache! You can do this by going to Admin Panel → System → Cache Management → Select All → Actions: Disable → Submit.

While very good at boosting performance in a production environment, the cache is a developer’s enemy. Leave it enabled at your peril! Every Magento developer I have met has on more than one occasion spent an hour or so wondering why their latest update is not showing up, only to find that Magento is still displaying the version of the website that it conveniently cached earlier that day.

The app/code Directory

The brains of Magento can be found in individual modules inside the app/code directory, which is split into three areas: core, community and local.


The app/code/core directory contains all of the functionality for products, categories, customers, payments, etc. Until you know what you are doing (and even afterwards), keep app/code/core off limits because these files should not be modified.

Magento is structured in such a way that you can alter the functionality of any of these core files without modifying them directly, which ensures that your application remains upgrade-proof. By all means, look in order to better understand how Magento works, but do not touch.


As the name suggests, app/code/community is where you will find modules that have been provided by third parties (i.e. not Magento’s core team). Hundreds of modules are available through Magento Connect, and when you install them through the built-in “Package Manager,” this is where they end up.


Magento ships with an empty app/code/local directory, ready for you to add bespoke modules for your own Magento installation. This is where we will be working for the duration of this tutorial.

Structuring Our Directory

Open your favorite editor, and navigate to app/code/local to add some new directories and files.

Module Namespace

The first directory we will create is a “namespace.” This can be called anything you like, but the convention is some form of the name of the company or module’s author. Magento uses “Mage” as its namespace. Here at Ampersand Commerce, we use “Ampersand.” For this tutorial, we will use “SmashingMagazine” as our namespace. So, create the directory app/code/local/SmashingMagazine.

Module Name

For the next directory, we will give our module a descriptive name. The module we are creating will write log entries each time a product is saved, so a logical name would be LogProductUpdate. Create the directory app/code/local/SmashingMagazine/LogProductUpdate.

We should now have the following directory structure for our module. These directory and file names are case-sensitive, so capitalize where appropriate.


Configuring Our Module

Next, we will begin to configure our module. The configuration files belong inside our module in a directory named etc, so let’s create that along with a new XML file: app/code/local/SmashingMagazine/LogProductUpdate/etc/config.xml. This XML file will inform Magento of the location of the files in our module, as well as many other things, such as version number and events to observe. For now, we will create a simple config.xml file, which contains comments that explain the meaning of each section.

<?xml version="1.0" encoding="UTF-8"?>

<!-- The root node for Magento module configuration -->
<config>

    <!-- The module's node contains basic information about each Magento module -->
    <modules>

        <!--
            This must exactly match the namespace and module’s folder names,
            with directory separators replaced by underscores
        -->
        <SmashingMagazine_LogProductUpdate>

            <!-- This is the version of our module, starting at 0.0.1 -->
            <version>0.0.1</version>

        </SmashingMagazine_LogProductUpdate>

    </modules>

</config>


Activating Our Module

The next step is to inform our Magento installation that our module exists, which we do by creating a new XML file in app/etc/modules. The name of this XML file can be anything you like, since Magento will read all XML files in this directory and will be interested only in the content. However, by convention we should give the file and module the same name. Let’s create app/etc/modules/SmashingMagazine_LogProductUpdate.xml with the following content:
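The “directory separators become underscores” convention that produced the SmashingMagazine_LogProductUpdate name is purely mechanical, and it will come up again when we name our model class. A quick stand-alone illustration (this snippet is not part of the module itself):

```php
<?php
// Magento derives module and class identifiers from paths under app/code:
// directory separators simply become underscores.
$modulePath = 'SmashingMagazine/LogProductUpdate';
echo str_replace('/', '_', $modulePath) . "\n"; // SmashingMagazine_LogProductUpdate

// The same rule later gives our observer model its class name:
$classPath = 'SmashingMagazine/LogProductUpdate/Model/Observer';
echo str_replace('/', '_', $classPath) . "\n"; // SmashingMagazine_LogProductUpdate_Model_Observer
```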

<?xml version="1.0" encoding="UTF-8"?>
<config>
    <modules>
        <SmashingMagazine_LogProductUpdate>

            <!-- Whether our module is active: true or false -->
            <active>true</active>

            <!-- Which module code pool to use: core, community or local -->
            <codePool>local</codePool>

        </SmashingMagazine_LogProductUpdate>
    </modules>
</config>

Sanity Check: Is The Module Enabled?

We now have a fully functional module that is enabled in Magento. It doesn’t do anything, but it is a valid module. This is our first opportunity to see whether we have correctly configured everything so far. If we log into the Magento admin panel and navigate to System → Configuration → Advanced → Advanced and view the “Disable Modules Output” listing, we should see our SmashingMagazine_LogProductUpdate module listed as enabled. If it is not listed, then something has gone wrong, so carefully run through the steps up to this point again. This is usually when new Magento developers discover the cache!

Our module’s structure now looks like this:



Defining An Event Observer

Event observers are extremely powerful and are one of the cleanest ways to extend Magento’s functionality without having to rewrite or override any core methods or classes. We want to observe the event that Magento dispatches just after a product is saved, so the code for the event we are interested in is catalog_product_save_after. Determining which event code to use when defining a new observer requires a basic understanding of Magento’s model layer, which is beyond the scope of this tutorial. Don’t worry, though: we’ll cover it another time!
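To make the configuration below easier to follow, here is a self-contained sketch of the idea behind Magento’s event system: observers are registered against an event code, and dispatching that event calls each registered method with the event data. All names in this sketch are illustrative; the real logic lives in Mage::dispatchEvent() and the Mage_Core_Model_App class.

```php
<?php
// Observers as registered in config.xml: event code => list of [class, method].
$eventObservers = array(
    'catalog_product_save_after' => array(
        array('Example_Model_Observer', 'logUpdate'),
    ),
);

// When Magento "dispatches" an event, it looks up every observer registered
// for that event code and calls the configured method with the event data.
function dispatchEvent($eventCode, array $data, array $eventObservers)
{
    if (!isset($eventObservers[$eventCode])) {
        return; // no observers registered for this event
    }
    foreach ($eventObservers[$eventCode] as $observerConfig) {
        list($class, $method) = $observerConfig;
        $instance = new $class(); // the "singleton" type would reuse one instance
        $instance->$method($data);
    }
}

// A stand-in for the observer model we will write shortly.
class Example_Model_Observer
{
    public function logUpdate(array $data)
    {
        echo "Observed save of {$data['sku']}\n";
    }
}

dispatchEvent('catalog_product_save_after', array('sku' => 'ABC-123'), $eventObservers);
// Prints: Observed save of ABC-123
```

The key point: the core code that saves a product never knows our module exists; it just announces “a product was saved,” and the configuration wires our method to that announcement.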

We now need to modify our config.xml to include the event observer definition:

<?xml version="1.0" encoding="UTF-8"?>
<config>

    <modules>
        <SmashingMagazine_LogProductUpdate>
            <version>0.0.1</version>
        </SmashingMagazine_LogProductUpdate>
    </modules>

    <!-- Configure our module's behavior in the global scope -->
    <global>

        <!-- Defining an event observer -->
        <events>

            <!-- The code of the event we want to observe -->
            <catalog_product_save_after>

                <!-- Defining an observer for this event -->
                <observers>

                    <!--
                        Unique identifier in the catalog_product_save_after node.
                        By convention, we put the module's name in lowercase.
                    -->
                    <smashingmagazine_logproductupdate>

                        <!-- The model to be instantiated -->
                        <class>smashingmagazine_logproductupdate/observer</class>

                        <!-- The method of the above model to be called -->
                        <method>logUpdate</method>

                        <!-- We generally use the singleton type for observers -->
                        <type>singleton</type>

                    </smashingmagazine_logproductupdate>

                </observers>

            </catalog_product_save_after>

        </events>

    </global>

</config>




Configuring Our Model’s Directory

In the event observer defined above, we made reference to a model that we have not yet created. We need to inform Magento where to find models in our module by updating config.xml with the following:

<?xml version="1.0" encoding="UTF-8"?>
<config>

    <modules>
        <SmashingMagazine_LogProductUpdate>
            <version>0.0.1</version>
        </SmashingMagazine_LogProductUpdate>
    </modules>

    <!-- Configure our module's behavior in the global scope -->
    <global>

        <events>
            <catalog_product_save_after>
                <observers>
                    <smashingmagazine_logproductupdate>
                        <class>smashingmagazine_logproductupdate/observer</class>
                        <method>logUpdate</method>
                        <type>singleton</type>
                    </smashingmagazine_logproductupdate>
                </observers>
            </catalog_product_save_after>
        </events>

        <!-- Defining models -->
        <models>

            <!--
                Unique identifier in the models node.
                By convention, we put the module’s name in lowercase.
            -->
            <smashingmagazine_logproductupdate>

                <!--
                    The path to our models directory, with directory
                    separators replaced by underscores
                -->
                <class>SmashingMagazine_LogProductUpdate_Model</class>

            </smashingmagazine_logproductupdate>

        </models>

    </global>

</config>


Creating An Observer Model

We will now create the model to be instantiated when the event is dispatched. Create a new PHP file in app/code/local/SmashingMagazine/LogProductUpdate/Model/Observer.php with the following content:

<?php
/**
 * Our class name should follow the directory structure of
 * our Observer.php model, starting from the namespace,
 * replacing directory separators with underscores.
 * i.e. app/code/local/SmashingMagazine/LogProductUpdate/Model/Observer.php
 */
class SmashingMagazine_LogProductUpdate_Model_Observer
{
    /**
     * Magento passes a Varien_Event_Observer object as
     * the first parameter of dispatched events.
     */
    public function logUpdate(Varien_Event_Observer $observer)
    {
        // Retrieve the product being updated from the event observer
        $product = $observer->getEvent()->getProduct();

        // Write a new line to var/log/product-updates.log
        $name = $product->getName();
        $sku = $product->getSku();
        Mage::log("{$name} ({$sku}) updated", null, 'product-updates.log');
    }
}

We’re done! Try it out.
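One detail worth noting first: getProduct() is not a method that Magento declares anywhere. Varien_Object, which the event object extends, resolves such calls with PHP’s magic methods, turning getProduct() into a lookup of the “product” key in the event’s data. A minimal self-contained sketch of that idea (illustrative only; the real class does considerably more):

```php
<?php
// Why $observer->getEvent()->getProduct() works without getProduct()
// being declared: a rough imitation of Varien_Object's magic accessors.
class SketchVarienObject
{
    protected $_data = array();

    public function __call($method, $args)
    {
        // getProduct -> product, getSomeValue -> some_value
        $key = strtolower(preg_replace('/(.)([A-Z])/', '$1_$2', substr($method, 3)));
        if (substr($method, 0, 3) === 'get') {
            return isset($this->_data[$key]) ? $this->_data[$key] : null;
        }
        if (substr($method, 0, 3) === 'set') {
            $this->_data[$key] = $args[0];
            return $this;
        }
        return null;
    }
}

$event = new SketchVarienObject();
$event->setProduct('the product model');
echo $event->getProduct(); // the product model
```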

The directory structure for our completed module should now look like this:




Now that our module is complete, it’s time to try it out! Log into the Magento admin panel, create or update a product in your catalog, and then check the var/log folder to see your product-updates.log file populated.

If nothing appears or the directory does not exist, ensure that the correct permissions are set to allow Magento to write to this directory, and that logging is enabled in Admin Panel → System → Configuration → Developer → Log Settings → Enabled.
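If it helps to have a mental model of what the logging call does, here is a rough self-contained sketch of Mage::log() with a custom file name: it appends a timestamped line to var/log/<file>. This is a simplification written for this article (the real implementation goes through Zend_Log), but it shows why write access to that directory matters.

```php
<?php
// Roughly what Mage::log($message, null, 'product-updates.log') does.
// Simplified sketch only, not Magento's actual implementation.
function sketchLog($message, $file = 'system.log', $dir = 'var/log')
{
    if (!is_dir($dir)) {
        mkdir($dir, 0777, true); // this is why Magento needs write access here
    }
    $line = date('Y-m-d\TH:i:sP') . ' DEBUG (7): ' . $message . "\n";
    file_put_contents($dir . '/' . $file, $line, FILE_APPEND);
}

// Demonstrate against a temporary directory rather than a real Magento root:
$demoDir = sys_get_temp_dir() . '/maglog-demo';
sketchLog('Blue Widget (BW-001) updated', 'product-updates.log', $demoDir);
echo file_get_contents($demoDir . '/product-updates.log');
```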

This basic tutorial is meant to give you an overall understanding of how Magento modules work. After completing this tutorial, spend some time exploring the Magento modules in app/code/core and see if you now have a better idea of how it all works.

We welcome any questions and would love to hear any feedback in the comments area below.


© Joseph McDermott for Smashing Magazine, 2012.
