

February 12 2014

13:50

26 Web Design Tutorials for Learning New CSS3 Properties



Over the past few years a number of prominent CSS3 tutorials have been published online. These are perfect for beginners who don’t have a lot of experience coding webpages. But developers who do have that experience may also find the more detailed articles quite insightful.

Take a look through this collection of posts on newer CSS3 properties. There may be some you’re already familiar with, and others you’ve never even heard of before. CSS3 gives developers many different tools for solving individual problems. And if you know of any other great posts I’ve forgotten, you can share them with us in the post discussion area.

  • CSS3 Box Shadow Effects
  • CSS Text Stroke
  • Cubic-Bezier In CSS3 Transition
  • CSS3 :not Selector
  • CSS3 Multiple Columns Layout
  • CSS3 :First-Of-Type Structural Selector
  • CSS3 Linear Gradients
  • Transparencies and Gradients With CSS
  • Using any font with CSS3
  • CSS3 multiple background images
  • CSS background-size property
  • The Skinny on CSS Attribute Selectors
  • CSS3 Reflection
  • Box Model Control with box-sizing
  • Annotation Overlay Effect with CSS3
  • Understanding the CSS Clip Property
  • Intro to CSS Pseudo-Classes
  • List of CSS3 Pseudo-classes
  • 2D and 3D Transform Techniques
  • CSS3 Transitions from Scratch
  • Smooth Fonts with CSS3
  • Background-clip with CSS3
  • Change Default Text Selection Color
  • Awesome Flat Buttons with CSS
  • CSS3 Animations Using Keyframes
  • Experimenting With CSS3 Keyframe Animations




February 10 2014

16:57

How To Create A Self-Paced Email Course


  

When I realized I had written what seemed to be a course (i.e. not my usual article or book), I was left with a sense of panic. There are so many options for running an online course, and all of them seem slightly confusing or time-intensive to set up.

Then I remembered the autoresponders feature in my newsletter application (I use MailChimp, although every newsletter software has it). I could trigger lessons with autoresponders and deliver course material to where most people spend most of their day: the inbox.

Another problem was that the course was about writing a book, and some of the lessons were slightly onerous — like “Write a first draft.” So, setting a fixed time delay wouldn’t work because some people complete things like that much more quickly than others.

Instead of automatically firing off each lesson after a set amount of time, I created a series of lessons via autoresponders that fired only when the previous lesson was marked as finished. That way, I got to deliver each new lesson only when the student had finished the previous. This method does not require you to configure any website, plugins or additional software (beyond setting up a mailing list and creating pages on your existing website, which you probably already know how to do).

I made my own course, Write and Sell Your Damn Book, free for a few reasons. First, I was able to bring some sponsors on board to give me enough money to make it worthwhile to create and set up. Secondly, I felt that the course material should be available to anyone, on any budget, who is writing a book.

Thus, the course made money before it launched, but the downside is that it made a fixed amount of money. I set up additional (albeit minor) revenue streams for it — affiliate links on Amazon to recommended books on the same subject, links to my own paid books, as well as the course in Kindle format, just in case people wanted to read the material all at once.

Using the method outlined below, I created a self-paced email course that had over 1,000 registrations in the first 24 hours, and almost 2,500 in the first week. There are other ways to do this using MailChimp, such as triggering the completion of a course with a URL, but this is how I set up mine.

1. Create A List

This list is only for people who will take your email course. Make sure the publicity settings are set to non-public and non-archivable (otherwise, people will be able to share the lessons with whomever they want).

Check “No, my campaigns are not public,” and uncheck “Activate the archive bar.”

When creating autoresponders, ensure that you remove the “View this campaign in a browser” link, to further discourage shareability. To take things one step further and make sure only subscribers see some or all of the course’s content, read up on conditional merge tags.

2. Match The Colors And Fonts In The Course Material To The Registration Process

You’ll find these by going to “Signup Forms” and then “General Forms.” Match the fonts, colors and logo of the course’s website for a consistent user experience.

3. Select “Send A Final Welcome Email”

You’ll find this option in the drop-down menu on the “Create Forms” page; it will be automatically selected, unless you’ve unchecked the box. Add text to this email (scroll down to edit the contents), telling users to click the “completed lesson” button in each lesson to get the next one.

Later, we’ll get into how to set this up, but essentially each lesson’s email will have a link that users can click when they’re finished to notify MailChimp to deliver the next lesson.

Also, in this final welcome email, let users know when the first lesson will be delivered.

4. Set Up The First Lesson And Autoresponder

Go to “Autoresponders” and then “Create autoresponder.” Select the entire list to be the recipients. On the next page, the event that triggers this autoresponder is “Subscription to the list.” Make sure that “Also trigger on list import” is checked if you want to use Twitter cards or if you will be charging for the course (more on this later).


Choose whether to send it within the hour or at another time and date. On the set-up and campaign information page, give the campaign a subject line and make sure that, under “Tracking,” “Goal Tracking” is checked — this is important because it will trigger the next autoresponder lesson.

5. Set A Goal For Your Campaign

A goal is simply a URL that you add to the lesson. For my own email course, I created a few pages on my website that thanked the user for completing the lesson. For example, I added a button to the campaign for lesson 1, reading “I have completed this lesson” and linking to http://mydamnbook.com/lessons/lessonone.

The URL may be anything, but if you are setting a reminder email (more on this later), then the URL must contain the same folder — in this case, lessons. If you use WordPress, this is simply the parent page, and each individual lesson would be a child page of the parent, /lessons/.

These pages that live on your website are important for firing off autoresponders, as well as for letting the user know that a lesson has been completed and that a new one is on the way.

A good marketing strategy is to add some social engagement to the completion page for each lesson, such as “Tweet that you’ve finished lesson 1,” with a hash tag for your course or sharing buttons, so that users can let others in their network know about the course and where to sign up.

6. Set Up Subsequent Lessons By Creating A New Autoresponder

Set the entire list as the recipients. On the next page, set “Specific link in the campaign is clicked” as the event to trigger the autoresponder.


Then, select the previous lesson. If you’re creating lesson 2, then select lesson 1’s autoresponder from the drop-down menu. Then, choose “Select a link from your campaign” and select the URL that you used for the button that tells the user they have completed that lesson.

Set the autoresponder to send either within the hour or at a time and date of your choosing. I always pick “Within the hour,” so that the user gets the next lesson fairly quickly.

Make sure to track goals for every lesson you create (otherwise, the URL clicks won’t be tracked by the following lesson).

To create lesson 3, you’d follow the steps above but would select lesson 2 from “For what campaign” and the lesson 2 completion URL for “Select a link from your campaign.” And so on, until you’ve added all of the lessons.

7. Set Up Additional Emails (If Needed)

In addition to the lessons, you may want to send out a different type of email a day or two after the final lesson has opened. The email could include additional resources, an “About the author” section, or perhaps a review of the course (if you’ve got one).

Select “Send to the entire list” for the recipients.

For the autoresponder, select “Campaign is opened” as the event to trigger the follow-up email, and select “For what campaign” as the final lesson (via the drop-down menu). Then, select the amount of time for “When the autoresponder should be sent.” If the email is a review or list of resources, then sending it a day or two after makes sense, while the lessons are still fresh.

8. Set Up A Reminder Email

Because the course is self-paced, people won’t get the next lesson if they forget about the email for the current lesson, so setting up a course reminder autoresponder is another good idea.

From step 5, if the same folder is in the URL for each lesson (in this example, /lessons/), then it’s simply a matter of creating a new reminder autoresponder that sends to a new segment of the list.


To do this, start an autoresponder, select “Send to a new segment,” then pick “Subscribers match,” and then “Any” from the drop-down menu.

In the next drop-down menu, choose “Goal Activity,” then “Doesn’t match,” and in the field type the folder shared by all of the lesson URLs (in this case, lessons); don’t type the full URL or any slashes.

On the next page, select “Subscription to the list” as the event to trigger the autoresponder.

When setting the autoresponder, estimate a reasonable time which people would take to complete a lesson. For my own list, I’ve set the reminder to “45 days” after a user has stopped clicking anything.

Remind people that they’ve signed up for the course and, if they’ve forgotten about the lessons, to go back and read the current one (and click that they’ve finished it once they have). I also offer helpful suggestions on how to get over being stuck in the writing process.

Integrate With Payment Solution (Optional)

If you want to charge for the course, you will need to collect the user’s money before the course lessons start firing. I use Gumroad to sell items online; while it doesn’t directly integrate with MailChimp, one easy additional step makes it happen.

Giving your course a price can be done in one easy step.

In your Gumroad account, click “Add a Product,” and then select the product. Where it asks for a file, create and upload a PDF of the text in your “Final welcome email” that tells people they’ve successfully signed up for the course and will get the first lesson within an hour.

Give it a price, and then “Add” the product. The next screen lets you upload a graphic (or video) and a description of the course. When it’s ready, click “Publish.”

To add an incentive (for example, to reward users with a discount for signing up early), click on the “Options” tab and create an offer. Otherwise, you’re done!

To integrate with Zapier, create a secret “free” offer, which you can use to finish the process, and delete it when you’re done.

Next, to connect Gumroad to MailChimp, sign up for an account with Zapier. Free and paid options are available. If you expect fewer than 100 users, go with a free account. Otherwise, it’s fairly cheap, and it scales. If 50,000 people are signing up a month, then the $99 per month price tag is well worth it.

By connecting Gumroad to MailChimp, you can automatically add purchasers of your Gumroad product to your course’s mailing list.

Once you’ve got an account, click “Create a Zap.” The trigger service is Gumroad, and the action service is MailChimp. For “Choose a trigger,” select “New sale.” For “Choose an action,” select “Add subscriber,” and then continue. From there, follow the steps to connect both your Gumroad and MailChimp accounts to Zapier.

Next, choose your “Product” (which would be your course if you have more than one product in Gumroad), and then continue. Then, choose which MailChimp mailing list to put subscribers in. When you click on “Insert fields” in the email section, Zapier will ask you to create a new purchase of your product. Go back to Gumroad and do that (using the free discount code), and continue with the process. Make sure to select “Email” in the email drop-down menu.

Also, select “No” for “Send a welcome email” because subscribers will get the PDF as a download immediately upon paying. Click “Continue,” name the Zap whatever you’d like, and turn it on!

Now, whenever someone purchases the Gumroad product for your email course, they will be automatically added to the course’s mailing list and will start receiving lessons.

And that’s how you create a self-paced email course using MailChimp, Gumroad and Zapier.


Credits for the image used on the front page: zapier.


© Paul Jarvis for Smashing Magazine, 2014.

14:00

CSS Best Practices to Follow and Bad Habits to Avoid

Creating a well-managed set of CSS rules can be a challenge. As technology evolves, it’s not always easy to tell whether you’re following good CSS practices or just cluttering the code and compromising the quality of your website across different browsers.

With practice, you’ll learn to avoid common CSS errors. To give you a guide to the dos and don’ts of writing CSS, we’ve listed below the best practices to follow as well as the bad habits to avoid. So get ready and let’s get started.


CSS Best Practices 

Use CSS Reset

Browser inconsistencies are one of the biggest problems in front-end development today. Default styles such as margins, padding, line heights, heading sizes and font sizes can all differ from browser to browser. The goal of a reset stylesheet is to reduce these inconsistencies by providing general base styles that you can then edit and extend.

One great example of a reset stylesheet is normalize.css, a modern, HTML5-ready CSS reset. All you have to do is include it in the head of your HTML file before your own style definitions; otherwise its rules will override your own.
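To give a sense of what a reset does, here is a minimal illustrative sketch; these are not the actual normalize.css rules, just the flavor of them:

/* Illustrative reset rules only, not normalize.css itself */
body {
  margin: 0; /* browsers apply different default body margins */
}
h1, h2, h3, p, ul, ol, figure {
  margin: 0;
  padding: 0; /* give headings, paragraphs and lists a common starting point */
}
img {
  border: 0; /* some older browsers draw borders around linked images */
}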

Provide Style Sheet Information

Put the title, author, tags, description, URL and other information at the top of your stylesheet. This gives any developer using it a reference and a person to contact whenever they need support regarding your work.

/*
Theme Name: Simple Parallax Website
Description: Simple Parallax Scrolling Effect
Author: Samuel Norton
Author URI: http://www.1stwebdesigner.com/
Tags: Parallax, Website
*/

Organize Elements on the Stylesheet from Top to Bottom

Beginners often add elements to the stylesheet in whatever order they happen to write them. This is not a good structure, because it makes rules hard to find later. Ordering them from the most general styles (such as body, h1, p and a) down through the header, the main sections and finally the footer makes a lot more sense.

As an example consider the code structure below.


/****** General Styles *********/

body {...}
h1, h2, h3 {..}
p {...}
a {...}

/****** Header Style *********/
#header {...}

/****** Navigation Style *********/
#nav {...}

/****** Footer Style *********/
#footer {...}

Shrink CSS file size with CSS Compressors

It’s a great idea to shrink your CSS file size with a compressor, which removes white space, line breaks and redundant styles. This helps browsers load your CSS faster. Tools like CSS Compressor and CSS Compressor & Minifier can do this for you.

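As a rough before-and-after sketch (the exact output varies from tool to tool, and the .page-title selector is just a placeholder), a compressor turns something like this:

/* Before minification */
.page-title {
  color: #ff0000;
  font-size: 30px;
}

into this:

.page-title{color:#ff0000;font-size:30px}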

Group IDs and Classes That Fall under the Same Element

If you have an element that contains several IDs and classes, group their rules together so they stay organized and easy to find; tracking down errors then takes far less time.

As an example, say you have a container class that holds one div with an ID of logo and another div with an ID of tagline.

<div class="container">
  <div id="logo"></div>
  <div id="tagline"></div>
</div>

You can group them on your CSS code like this:

.container {width: 960px; margin: 0; padding: 0;}
.container #logo {font-family: Arial, sans-serif; font-size: 30px; color: red;}
.container #tagline {font-family: Verdana; font-size: 10px;}

Use Annotations/Comments to Identify a Set of CSS

Another best practice is to put a comment above each group of CSS rules. This makes it easy to locate a specific group when you run into errors.


/****** General Styles *********/

body{
	margin: 0;
	padding: 0;
	width: 100%;
}

h1, h2, h3 {
    font-family: Arial, sans-serif;
	font-weight:normal;
	font-size: 55px;
	text-align: center;
	color: #fff;
	margin: 0;
	padding: 0;
}

Structure Naming Convention

Using proper naming conventions for IDs and classes pays off. It makes your work easier and faster whenever you need to add elements or redesign the website.

For instance, a class of title-red stops making sense as soon as you change the color of the title, so why not just use title instead? Always name your elements based on their purpose, not on their presentation, such as the color or font size the element happens to have.
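As a quick illustration (the .page-title class name here is just hypothetical):

/* Fragile: the name breaks as soon as the title color changes */
.title-red {color: red;}

/* Better: the name describes the element's role, not its look */
.page-title {color: red;}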

Use Hex Codes Instead of Color Names

According to a performance test run by Sean Connon, senior web developer at Alien Creations, Inc., hex codes seem to be just barely faster in 4 out of 5 runs. Check out the test results here. We therefore recommend using hex codes rather than color names.

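For example, both of these declarations produce the same red, with the hex form being the recommended one (the .alert selector is just a placeholder):

.alert {color: red;}     /* color name */
.alert {color: #ff0000;} /* hex code */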

Use CSS Vendor Prefixes

If you follow the new features of CSS3, you’ll also know that browsers implement some of them behind their own experimental prefixes. That’s why vendor prefixes are used: to make sure each browser recognizes the specific feature or style you want to use.

Many designers and developers run into errors over this simple matter because they forget to add the vendor prefixes that target specific browsers.

The CSS browser prefixes are:

  • Chrome: -webkit-
  • Firefox: -moz-
  • Internet Explorer: -ms-
  • iOS: -webkit-
  • Opera: -o-
  • Safari: -webkit-

For instance, if you want to add a CSS3 transition to your code, you use the transition property along with its vendor-prefixed versions. Check out the code below.

-webkit-transition: all 1s ease;
-moz-transition: all 1s ease;
-ms-transition: all 1s ease;
-o-transition: all 1s ease;
transition: all 1s ease;

Validate Your CSS

Running your code through the W3C’s free CSS validator will tell you whether your CSS is properly structured. It can also point out the errors in your stylesheet, saving you the time of troubleshooting them manually.

Bad Habits to Avoid

Creating Redundant CSS

Repeating the same styles over and over for different elements is not good practice. It’s very important to clean your code and remove redundant styles. For example, if a paragraph tag and a span tag share the same font size and color, just group them with a comma.

Take a look at the code below.

BAD PRACTICE

span {font-size: 12px; color: red;}
p {font-size: 12px; color: red;}

BEST PRACTICE

span, p {font-size: 12px; color: red;}

Mixing Tag Names with ID or Class Name

Prepending a tag name to an ID or class selector is not good practice, since it slows down the matching process unnecessarily.

Check out the code below.

BAD PRACTICE

p#container {color: red; font-size: 8px;}

BEST PRACTICE

#container {color: red; font-size: 8px;}

Targeting Margin and Padding Sides Separately

Using separate declarations to set the top, right, bottom and left margin or padding is not a good idea. Combining them into one shorthand line makes your code shorter and easier to read.

BAD PRACTICE

For example, you have a div with an ID of container and you set each side’s margin separately. See the code below.

#container {
margin-top: 10px;
margin-right: 2px;
margin-left: 5px;
margin-bottom: 14px;
}

BEST PRACTICE

You can just combine all these styles in one line. Check out the code below.

#container {margin: 10px 2px 14px 5px;} /* top, right, bottom, left */

Using Underscores on ID or Class Name

Using underscores can give you unexpected results in older browsers. It is highly recommended to use hyphens instead.

Take a look at the example below.

BAD PRACTICE

.left_col {margin: 0; padding: 0;}

BEST PRACTICE

.left-col {margin: 0; padding: 0;}

Final Words

Throughout this article, I’ve shared some tips and guidelines on how to write better CSS and which mistakes to avoid. Keeping these rules in mind while you code will help you write clean styles and prevent errors down the road. Finally, make sure you validate your CSS with the W3C CSS Validator to keep it error-free.

February 06 2014

15:58

How To Code a Hidden Author Bio Display using jQuery



I was browsing through websites one day and came across one really interesting feature. Jennifer Perrin has a small blog on her website which uses a fixed top navbar. In the center you’ll find a profile avatar photo which displays her full author bio (triggered by hover). This is a really interesting feature, and I’ve set out to replicate the idea using jQuery.

So in this tutorial I want to demonstrate how we can build a very simple HTML5 webpage recreating a full author bio display. The entire bio container is hidden until the user hovers over the avatar photo. Take a peek at my live demo to see what it should look like:

[Screenshot: the hidden author bio box revealed on hover]

Page Setup

First I know the effect will be created using jQuery so I’ll be downloading a local copy for this project. It’s all pretty simple except for the hovering triggers based on event delegation. But we can get into that later in the article.

<!doctype html>
<html lang="en-US">
<head>
  <meta charset="utf-8">
  <meta http-equiv="Content-Type" content="text/html">
  <title>Hidden Author Bio Display Demo - DesignM.ag</title>
  <meta name="author" content="Jake Rocheleau">
  <link rel="shortcut icon" href="http://designm.ag/favicon.ico">
  <link rel="icon" href="http://designm.ag/favicon.ico">
  <link rel="stylesheet" type="text/css" media="all" href="css/styles.css">
  <script type="text/javascript" src="js/jquery-1.10.2.min.js"></script>
</head>

I’ve also created a new stylesheet named styles.css. This has all my typical page resets along with a number of extra styles to position the bio and top navbar. To keep things simpler I didn’t use a whole lot of extra text or links within this toolbar. But for your own website it’s entirely possible to move things around as necessary.

  <header id="topbar">
    <div id="logo"><h1>MySite Logo</h1></div>
    
    <div id="avatar-icon">
      <img src="img/avatar.jpg" alt="Author avatar photo" class="authoravi">
      <div id="author-bio-box">
        <img src="img/avatar.jpg" alt="author big photo" class="bigavi">
        <h2>Jake Rocheleau</h2>
        <p>Jake is a writer and designer found all over the Internet. He specializes in UI design, frontend development, branding, and content strategy.</p>
      </div>
    </div>
  </header>

This is really the most important HTML on the webpage. Directly after the opening body tag I’ve setup a new header element as the base container. The logo and the avatar have their own separate divs positioned within this container. Whenever a user hovers on top of the #avatar-icon image, we display the internal #author-bio-box.

CSS Design

You should take a look over my basic styles and CSS resets if you’re unfamiliar with page structure. At the top I’m referencing an external web font named Montserrat Alternates. Beneath the top navigation bar I’ve setup a wrapper div which contains filler Lorem Ipsum text. You can see how the bio display works even when scrolling down through a webpage.

#avatar-icon {
  position: relative;
  width: 100px;
  margin: 0 auto;
  text-align: center;
  padding-top: 6px;
  cursor: pointer;
}

#avatar-icon .authoravi {
  width: 55px;
  height: 55px;
  -webkit-border-radius: 5px;
  -moz-border-radius: 5px;
  border-radius: 5px;
}

/* header top bar */
#topbar {
  display: block;
  position: fixed;
  top: 0;
  width: 100%;
  min-width: 750px;
  height: 70px;
  padding: 0 45px;
  z-index: 9999;
  background-color: #6e7cc9;
  background-image: -webkit-gradient(linear, left top, left bottom, from(#6e7cc9), to(#6a74af));
  background-image: -webkit-linear-gradient(top, #6e7cc9, #6a74af);
  background-image: -moz-linear-gradient(top, #6e7cc9, #6a74af);
  background-image: -ms-linear-gradient(top, #6e7cc9, #6a74af);
  background-image: -o-linear-gradient(top, #6e7cc9, #6a74af);
  background-image: linear-gradient(top, #6e7cc9, #6a74af);
  border-bottom: 1px solid #53597c;
}

To keep this avatar centered I’m using margin: 0 auto on the fixed-width container. We also need relative positioning so the interior author box can use absolute positioning: #avatar-icon becomes the positioning context for the absolutely positioned box, as opposed to the entire body element.

Using a fixed position on the #topbar header is key for the navigation to follow along with the user. I’m using large padding on the HTML body so everything on the page gets pushed down beneath this fixed bar. Using the CSS property top: 0 will force this bar to stay fixed at the top of the page, while the other content appears below.
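The body padding itself isn’t shown in the snippets above; a minimal sketch of the idea looks like this (the 90px figure is an assumption for illustration, not the exact value from my styles.css):

body {
  /* push the page content below the 70px-tall fixed bar */
  padding-top: 90px; /* bar height plus some breathing room; adjust to taste */
}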

/** bio box **/
#author-bio-box {
  display: none;
  position: absolute;
  cursor: default;
  width: 300px;
  top: 0;
  left: -100px;
  padding: 5px 15px 8px 15px;
  background: rgba(20,20,20,0.9);
  z-index: 99999;
  -webkit-border-radius: 0 0 7px 7px;
  -moz-border-radius: 0 0 7px 7px;
  border-radius: 0 0 7px 7px;
  -webkit-box-shadow: 1px 1px 4px rgba(0,0,0,0.6), -1px -1px 3px rgba(0,0,0,0.5) inset;
  -moz-box-shadow: 1px 1px 4px rgba(0,0,0,0.6), -1px -1px 3px rgba(0,0,0,0.5) inset;
  box-shadow: 1px 1px 4px rgba(0,0,0,0.6), -1px -1px 3px rgba(0,0,0,0.5) inset;
}

#author-bio-box .bigavi {
  width: 95px;
  height: 95px;
  -webkit-border-radius: 5px;
  -moz-border-radius: 5px;
  border-radius: 5px;
}

#author-bio-box h2 {
  display: block;
  font-family: 'Trebuchet MS', Helvetica, Arial, sans-serif;
  font-size: 2.5em;
  line-height: 1.5em;
  text-align: center;
  color: #fff;
  cursor: text;
  font-variant: small-caps;
  letter-spacing: -0.03em;
}
#author-bio-box p {
  display: block;
  font-size: 1.5em;
  line-height: 1.35em;
  color: #fff;
  cursor: text;
}

Finally we get to the hidden author box and the other internal elements. The #topbar header uses a z-index of 9999 so that it appears on top of all other page content. Since we need the author bio to appear on top of that bar, I’m using a z-index of 99999 to set up the correct stacking order.

The width is fixed at 300px, but the box is anchored inside the 100px-wide avatar container, so left in its default position it wouldn’t appear centered and the content would spill out to one side. It’s smarter to use a negative left offset (left: -100px) to pull the bio box back so it sits exactly where it should appear. Note that this value should also change as you update the bio box width, so keep that in mind when adjusting it for your own project(s).
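To make the relationship between those two values concrete, here is the pairing used in this demo (the values are the ones from the stylesheet above); whenever one changes, the other should be revisited:

/* The avatar container is 100px wide and the bio box is 300px wide, so pulling
   the box left by roughly half the difference (100px) re-centers it under the avatar. */
#avatar-icon    { width: 100px; }
#author-bio-box { width: 300px; left: -100px; }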

jQuery Display Effects

Possibly the most confusing piece of my jQuery code is event delegation with the .on() method. By using the body element as my selector it works like an updated event handler to also bind events onto hidden or dynamically-appended elements. Check out this Stack Overflow post to get a better understanding.

$(function(){
  $('body').on('mouseenter', '#avatar-icon', function(e){ 
    $('#author-bio-box').css('display','block');
  });

  $('body').on('mouseleave', '#author-bio-box', function(e){
    $(this).css('display','none');
  });
});

Also worth noting are the mouseenter and mouseleave event triggers. mouseover and mouseout fire again whenever the pointer moves onto a different element, including children of the target. So if the author bio is open and the user hovers the larger avatar image or the bio paragraph, the browser would see a new hover target and close the whole box. mouseleave only triggers when the pointer leaves the entire container – much safer, and it provides exactly the effect we need.

So once the user hovers onto #avatar-icon we immediately display the author bio box. And once the mouse leaves those boundaries this box gets re-hidden from view. Truly a simple yet powerful script if you know how to use it.


Final Thoughts

I can admit that this solution probably won’t work well on every website. But if you’re designing a small author-based blog, why not try it out? It’s a really interesting concept that fits nicely into any fixed top title bar. Feel free to download a copy of my source code and let us know what you think in the post discussion area.



February 04 2014

14:00

JavaScript Tools of the Trade: JSBin

We’ve all been there. There are times when you simply want to throw some JavaScript code up and see how it works. Sure, you could go through the hassle of:

  • Setting up a dedicated directory
  • Creating a file with markup
  • Finding the latest version of your favorite libraries, downloading them and including them in your code
  • Creating your stylesheet
  • Configuring your webserver

That seems like an awful lot of work just to do some simple code testing. Thankfully there are tools that make this type of work trivial.

In this tutorial, I’d like to go over one of my favorite tools for interactive JavaScript testing, JSBin.


The Case for JSBin

As I mentioned previously, in many cases you simply need to test a small subset of JavaScript code. Setting up a whole development environment for such a use case, in most cases, doesn’t really make a lot of sense unless there’s a clear dependency on hardware (for example, WebRTC) or reliance on a third party API or product where you need backend services to successfully access information.

What JSBin offers is a browser-based user interface where you can enter:

  • HTML markup
  • CSS
  • JavaScript

… and get immediate feedback based on your code. In addition, you can optionally include any number of popular frameworks into your onscreen code, allowing you to leverage the framework’s capabilities as well. The main benefit is the real-time feedback you get from the updates you make.

Let’s look at these tools a little more closely.


Getting to Know JSBin

JSBin was created and is actively maintained by well-respected developer Remy Sharp. The idea to develop it came from the need to collaborate interactively with other developers to debug JavaScript code. It has since matured into a robust tool which:

  • Allows groups of developers to work together to solve code problems
  • Serves as a sort of bin that developers can go back to reference
  • Makes sharing code and solutions incredibly easy

JSBin is also open source, licensed under the liberal MIT license, allowing community members to freely contribute to it or fork it to create their own customized solutions.

JSBin offers a straightforward UI that breaks each type of code into individual vertical panels.

[Screenshot: the JSBin editor with its HTML, CSS, JavaScript and Output panels]

Each panel provides a mini-IDE that allows you to enter code and receive immediate feedback via the Output panel. For example, if I add the following code to the HTML panel:

<div>Rey Bango</div>

I’ll immediately see the new element and the text render in the Output panel.


Of course, you can add any number of elements to the markup allowing you to create a page quickly and interactively. Being able to style your markup is equally important since in some cases, the JavaScript you’re testing is explicitly designed to manipulate styles and CSS rules applied to your elements. That’s where the CSS panel comes in. It offers full CSS style capability, so you can layout your elements to suit your needs, even taking advantage of CSS3 rules. So adding the following code:

div { 
  color: red; 
  font:20px Tahoma,sans-serif; 
  border: 1px solid black; 
  width: 100px;
  margin: 30px;
  padding: 10px; 
  transform:rotate(15deg);
  -webkit-transform:rotate(15deg); 
}

… provides the following results:

[Screenshot: the styled, rotated div rendered in the Output panel]

So far, the code has been simple but I need to stress that the important thing here is not the complexity of the code but the fact that you’re able to receive immediate feedback. I could easily grab more involved code, like that of the CSS Transitions demo on the Mozilla Developer Network and add that into JSBin to produce a similar effect for my test code:

div { 
  color: red; 
  font:20px Tahoma,sans-serif; 
  border: 1px solid black; 
  width: 100px;
  margin: 30px;
  padding: 10px; 
  -moz-transition:width 2s, height 2s, background-color 2s, -moz-transform 2s;
    -webkit-transition:width 2s, height 2s, background-color 2s, -webkit-transform 2s;
    -o-transition:width 2s, height 2s, background-color 2s, -o-transform 2s;
    transition:width 2s, height 2s, background-color 2s, transform 2s;
}

div:hover {
    background-color: #FFCCCC;
    width:200px;
    height:200px;
    -moz-transform:rotate(180deg);
    -webkit-transform:rotate(180deg);
    -o-transform:rotate(180deg);
    transform:rotate(180deg);
}

[Screenshot: the CSS transition demo running in the Output panel]

So while I’m more specifically focused on the JavaScript aspect of JSBin, it’s clear that web developers in general can benefit from the interactive nature of the tool.


Using JavaScript

For me, the main benefit of JSBin is the ability to test JavaScript quickly. I’m able to whip up quick and dirty code that I can test and adjust on-the-fly without the need to spin up a whole work environment. Sure, most browsers provide developer tools that offer a console where you can enter quick snippets but they’re not yet at a point where you can interactively test large amounts of code, let alone define complementary custom markup and styling to the output.

JSBin’s JavaScript panel is where you’re able to define your custom JavaScript code. As expected, you have full access to the language as well as the DOM API supported by the browser. This means that when I write:

var myDiv = document.querySelector( "div" );

myDiv.innerHTML = "Foo";

it:

  • Allows me to create a local variable
  • Provides access to the div element I created in the HTML panel
  • Changes the element’s content

The results are immediate, allowing me to debug as I’m writing the code.

Having access to plain ole JavaScript is great but it’s very common to use a JavaScript utility library like jQuery or a full-blown framework like Ember to abstract the complexities of cross-browser development or create app-like experiences in the browser. JSBin addresses this by allowing you to include most of the popular libraries into your test code.

Clicking on the Add library menu option provides a very long list of supported libraries that can be injected into your JSBin project. What this does is create a script tag in your code that pulls the JavaScript file from a CDN. Selecting “jQuery 2.0.2” from the list injects the following:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/2.0.2/jquery.min.js"></script>
<meta charset=utf-8 />

… while selecting Backbone adds the following:

<script src="http://documentcloud.github.io/underscore/underscore-min.js"></script>
<script src="http://documentcloud.github.io/backbone/backbone-min.js"></script>

Notice how JSBin uses different CDNs based on where the files are available. Most of the big name projects are listed, including:

  • jQuery
  • Dojo
  • Modernizr
  • Bootstrap

… and many more.

Adding in jQuery gives me full access to all of the library’s excellent helper methods and capabilities. I can switch to using its nice, terse API to access DOM elements and set values in one nicely-chained line of code:

$( "div" ).text( "Foo" );

Or, I can take it a bit further and test out an Ajax request to Flickr’s API to pull back JSON data and render images based on it:

(function() {
  var flickerAPI = "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?";
  $.getJSON( flickerAPI, {
    tags: "mount rainier",
    tagmode: "any",
    format: "json"
  })
    .done(function( data ) {
      $.each( data.items, function( i, item ) {
        $( "<img>" ).attr( "src", item.media.m ).appendTo( "div" );
        if ( i === 3 ) {
          return false;
        }
      });
    });
})();

The code above would render as follows:

[Screenshot: Flickr images rendered in the Output panel]

Having the full power of these libraries and frameworks really opens up the testing scenarios that you can setup with JSBin.

Again, this is a list of the most popular libraries and frameworks available and clearly, some niche ones just won’t be in the list. If you need to add your own custom library, the documentation shows how you can add it in yourself.


Additional Features and Resources

I find JSBin invaluable for my desktop development and as I shift to focusing on mobile devices, I’m glad to see that I’ll be able to continue to use it to test on those devices as well. As of version three, JSBin has incorporated a feature called “live rendering” which acts as a simulcast across multiple connected devices. These devices aren’t explicitly connected but instead, leverage a specific URL which allows them to essentially render the results at the same time. You can see this feature in action in the following video.

Another important feature is the ability to create your own JSBin account where you can save your bins for future reference and sharing. Registration is simple and you can even leverage your Github credentials via Github’s OAuth functionality.


The key benefit to registering is the ability to keep a history of the bins you create so that you can revisit them later.

To really get a feel for the full breadth of functionality offered by JSBin, I urge you to go to Remy’s Youtube channel for JSBin, where he’s done a bang up job of creating tutorial videos exploring all of the excellent features of the service. The FAQ also does a great job of answering common questions you might have.

JSBin is one of the most valuable tools I’ve found for JavaScript development. The fact that it’s free and open source makes it a no-brainer to add to any toolkit. And Remy’s continued commitment to the service is commendable. All of this combined makes it easy for me to spread the word about such a great tool.

February 03 2014

14:00

Creating Brackets Extensions

A little while ago I wrote about the recent updates to the Brackets editor. Brackets is an open source project focused on web standards and built with web technologies. It has a narrow focus and therefore may not have a particular feature you’ve come to depend upon. Luckily, Brackets ships with a powerful extension API that lets you add any number of new features. In this article, I’m going to discuss this API and demonstrate how you can build your own extensions.

It is crucial that you remember that Brackets is in active development. This article is being written in December of 2013. It is certainly possible that the code demonstrated below is now out of date. Keep that in mind and be sure to check the wiki for the latest updates to the extension API.


Getting Started

I’m going to assume you read my last article and are already familiar with the extension manager. This provides a simple, one click method of installing extensions. One of the best ways you can learn to write extensions is by looking at the work done by others (that’s how I learned). I’d recommend grabbing a few extensions (there’s almost 200 available now) and tearing apart their code. Don’t be afraid to break a few while you’re at it.

Brackets puts all installed extensions within one main folder. To find that folder, go to the Help menu and select “Show Extensions Folder“. For my OS X install, this was located at /Users/ray/Library/Application Support/Brackets/extensions/user. If you go up from that folder, you’ll notice a disabled folder as well. Brackets will make a valiant effort to load no matter what, but if you ever find yourself in a situation where Brackets has completely crapped the bed and simply will not work, consider moving potentially bad extensions into the disabled folder. Later on in the article, I’ll discuss how you can monitor and debug extensions to help prevent such problems in the first place.

Begin by going to your user folder and creating a new folder, helloworld1. Yes, even though it is completely lame, we’re going to build a HelloWorld extension. Don’t hate me, I like simple. Inside that folder create a new file called main.js. Listing 1 shows what the contents of this file should be. Note that in this article I’ll go through a couple of different iterations of the helloworld extension. Each one will be named with a progressively higher number. So our first example is from helloworld1, the next helloworld2, and so on. It would make sense for you to simply copy the code into one folder, helloworld, instead of keeping a separate folder for each iteration. If you copy each one separately, you’ll have multiple related extensions running at once, and that can definitely confuse things.

Listing 1: helloworld1/main.js
define(function(require, exports, module) {

    function log(s) {
            console.log("[helloworld] "+s);
    }

    log("Hello from HelloWorld.");
});

The first line defines our extension as a module that will be picked up by Brackets automatically on application load. The rest of the extension is a custom log message (you will see why in a second) and a call to that logger. Once you have this file saved, go back to Brackets, select the Debug menu, and hit Reload. (You can also use Command/Control+R to reload as well.)

Brackets will reload and … nothing else will happen. The extension we built didn’t actually do anything that we could see, but it did log to the console. But where is that console? Brackets provides an easy way to view the console. Simply go back to the Debug menu and select Show Developer Tools. This will open a new tab in Chrome with a familiar Dev Tools UI. In the screen shot below I’ve highlighted our log. Other extensions, and Brackets itself, will also log messages to this screen. By prefixing my log messages with [helloworld], I can make my own stuff a bit easier to find.

Brackets console messages

Note that the full Chrome console.api works here. You can do stuff like this to format your console messages:

    
function log(s) {
    console.log("%c[helloworld] "+s,"color:blue;font-size:large");
}

Go crazy, but try to remove these messages before you share your code with the rest of the world. In case you’re curious, you can’t use dev tools in another browser, like Firefox, at this time.


Integration Points

Now that you know the (very) basics, let’s talk about what Brackets extensions can do to the editor:

  • They can create keyboard shortcuts, allowing them to respond to custom keystrokes.
  • They can add to the top level menu.
  • They can add context menus (and to a specific area, like the file listing or the editor window).
  • They can create UI items. This can be a modal dialog or even a panel. (Currently the panel is locked to the bottom of the screen).
  • They can create a linting provider (essentially they can register themselves as a code checker for a file type).
  • They can create their own inline editors (a major feature of Brackets).
  • They can register as a documentation provider (for example, adding MDN support for docs).
  • They can integrate with Quick Find and Quick Open.
  • They can add custom code hints and syntax colors.
  • They can read the current file open in the editor as well as modify it. (They can also see the current selected text, if any.)

That describes how extensions can modify Brackets, but what can extensions actually do in terms of code? Keeping in mind that you’re writing extensions in pure web standards (HTML, JavaScript, and CSS), you actually have quite a bit of power. The only real limits relate to binary data. There is a File system API that gives you control over files but is limited to text data only. Luckily, you have a way out.

Any Brackets extension can integrate with Node.js. If you’ve got an existing Node.js package your extension can make calls to it and do, well, whatever Node.js can do, which is essentially anything.

Let’s update our extension to integrate with the editor a bit better. I’ll start by simply adding a menu item for the extension.

Listing 2: helloworld2/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {

    var CommandManager = brackets.getModule("command/CommandManager"),
            Menus = brackets.getModule("command/Menus"),
            AppInit = brackets.getModule("utils/AppInit");
    function log(s) {
            console.log("[helloworld2] "+s);
    }
    function handleHelloWorld() {
            alert("You ran me, thanks!");
    }
    AppInit.appReady(function () {

            log("Hello from HelloWorld2.");

            var HELLOWORLD_EXECUTE = "helloworld.execute";

            CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

            var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
            menu.addMenuItem(HELLOWORLD_EXECUTE);

    });

});

We’ve got a few changes here so let’s tackle them one by one. You’ll notice that the extension begins with three calls to brackets.getModule. All extensions have access to a brackets object that provides an API where we can load in core functionality from the editor. In this case the extension has loaded two libraries we’ll need for the menu (CommandManager and Menus) and one which will be used to help initialize the extension (AppInit).

Let’s talk about AppInit. You can see that most of the extension is now loaded within an appReady callback. This callback fires when Brackets has completed loading, and using it is generally considered “best practice” for extensions.

Registering a menu item takes a few steps. I begin by defining a “command ID”, a unique identifier for the item I’ll be adding to the UI. The typical way to do this is with the format extensionname.someaction. In my case, I used helloworld.execute. I can then register this command along with the function (handleHelloWorld) that should be called when the command is fired.

The final step is to add this command to the menu. You can probably guess that my menu item will be added under the View menu based on this value: Menus.AppMenuBar.VIEW_MENU. How did I know that value? Simple, I saw other extensions do it. Seriously though, there is no specific list of items like this yet. Don’t forget that Brackets is open source. I can easily pop over to the GitHub repo and check it out. In this case, the file is Menus.js, located on Github. In there I can see where the various different core menus are defined:

/**
  * Brackets Application Menu Constants
  * @enum {string}
*/
var AppMenuBar = {
     FILE_MENU       : "file-menu",
     EDIT_MENU       : "edit-menu",
     VIEW_MENU       : "view-menu",
     NAVIGATE_MENU   : "navigate-menu",
     HELP_MENU       : "help-menu"
};      

As a general rule of thumb, it makes sense to have at least a cursory understanding of what’s available in Brackets itself. Your extensions will, from time to time, make use of multiple different features so it’s definitely in your best interest to at least know the lay of the land.

After reloading Brackets, you’ll now see the menu item in the View menu. Exactly where it is may be a bit random as you may have other extensions installed.

View menu updated

You can actually be a bit more specific about your position. Again, this is where the source code will help you. The same file I linked to above also contains the addMenuItem definition.


Put Some Lipstick on That Pig

Now that you’ve seen a simple example of how an extension can integrate into Brackets, let’s look at how we update the UI. In the previous version of our code, an alert was used to send a message. While this works, it isn’t very pretty. Your code can access the Brackets editor just like any other DOM modification code. While you can do anything you want, there are a few standard ways extensions update the UI in Brackets. (As a warning, in general you do not want to touch the DOM of the main editor UI. You can, but with future updates, your code may break. Also, users may not be happy if your extension changes something core to Brackets.)

The first method we’ll look at uses modal dialogs. Brackets already uses this and has an API available for extensions to call. As a simple example, let’s just update the HelloWorld extension to use a modal instead.

Listing 3: helloworld3/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {

    var CommandManager = brackets.getModule("command/CommandManager"),
        Menus = brackets.getModule("command/Menus"),
        Dialogs = brackets.getModule("widgets/Dialogs"),
        DefaultDialogs = brackets.getModule("widgets/DefaultDialogs"),
        AppInit = brackets.getModule("utils/AppInit");

    function log(s) {
            console.log("[helloworld3] "+s);
    }

    function handleHelloWorld() {
        Dialogs.showModalDialog(DefaultDialogs.DIALOG_ID_INFO, "Hello World", "Same Important Message");
    }

    AppInit.appReady(function () {

        log("Hello from HelloWorld3.");

        var HELLOWORLD_EXECUTE = "helloworld.execute";

        CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

        var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
        menu.addMenuItem(HELLOWORLD_EXECUTE);

    });

});

Note the addition of two additional Brackets modules: Dialogs and DefaultDialogs. The next change is in handleHelloWorld. One of the methods in the Dialog library is the ability to show a dialog (no surprise there, I suppose). The method wants a class, a title, and a body, and that’s it. There’s more you can do with dialogs, but for now, this demonstrates the feature. Now when we run the command, we get a much prettier UI. (Along with default buttons and behaviours to handle closing the dialog.)

Dialog example

That’s one example; now let’s look at another: creating a bottom panel. As with dialogs, we’ve got support from Brackets to make it easier. Let’s look at an example and then I’ll explain the changes.

Listing 4: helloworld4/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {

    var CommandManager = brackets.getModule("command/CommandManager"),
    Menus = brackets.getModule("command/Menus"),
    PanelManager = brackets.getModule("view/PanelManager"),
    AppInit = brackets.getModule("utils/AppInit");

    var HELLOWORLD_EXECUTE = "helloworld.execute";
    var panel;

    function log(s) {
            console.log("[helloworld4] "+s);
    }

    function handleHelloWorld() {
        if(panel.isVisible()) {
            panel.hide();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(false);
        } else {
            panel.show();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(true);
        }
    }

    AppInit.appReady(function () {

            log("Hello from HelloWorld4.");

            CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

            var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
            menu.addMenuItem(HELLOWORLD_EXECUTE);

            panel = PanelManager.createBottomPanel(HELLOWORLD_EXECUTE, $("<div class='bottom-panel'>HTML for my panel</div>"),200);

    });

});

Let’s focus on the changes. First, I dropped the Dialog modules as I’m no longer using them. Instead, we load up the PanelManager. Down in the appReady block I’ve defined a new panel using the PanelManager API method createBottomPanel. Like the menu command this takes in a unique ID so I just reuse HELLOWORLD_EXECUTE. The second argument is a jQuery-wrapped block of HTML (and in case you’re wondering, yes we can do this nicer), and finally, a minimum size. This sets up the panel but doesn’t actually execute it.

In the event handler we have tied to the menu, we can ask the panel whether it is visible and then either hide or show it. That part should be pretty trivial. For fun I’ve added in a bit more complexity. Notice that CommandManager lets us get a menu item and set a checked property. This may be unnecessary as the user can see the panel easily enough themselves, but adding the check just makes things a little bit more obvious. In the screen shot below you can see the panel in its visible state.

Panel example

Right away you may be wondering about the panel HTML. Is there a better way to provide the HTML? Any way to style it? Yep, let’s look at a more advanced version.

Listing 5: helloworld5/main.js
/*
Based - in part - on the HelloWorld sample extension on the Brackets wiki:

https://github.com/adobe/brackets/wiki/Simple-%22Hello-World%22-extension

*/
define(function(require, exports, module) {
    var CommandManager = brackets.getModule("command/CommandManager"),
    Menus = brackets.getModule("command/Menus"),
    PanelManager = brackets.getModule("view/PanelManager"),
    ExtensionUtils          = brackets.getModule("utils/ExtensionUtils"),        
    AppInit = brackets.getModule("utils/AppInit");

    var HELLOWORLD_EXECUTE = "helloworld.execute";
    var panel;
    var panelHtml     = require("text!panel.html");

    function log(s) {
            console.log("[helloworld5] "+s);
    }

    function handleHelloWorld() {
        if(panel.isVisible()) {
            panel.hide();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(false);
        } else {
            panel.show();
            CommandManager.get(HELLOWORLD_EXECUTE).setChecked(true);
        }
    }

    AppInit.appReady(function () {

        log("Hello from HelloWorld5.");
        ExtensionUtils.loadStyleSheet(module, "helloworld.css");
        CommandManager.register("Run HelloWorld", HELLOWORLD_EXECUTE, handleHelloWorld);

        var menu = Menus.getMenu(Menus.AppMenuBar.VIEW_MENU);
        menu.addMenuItem(HELLOWORLD_EXECUTE);

        panel = PanelManager.createBottomPanel(HELLOWORLD_EXECUTE, $(panelHtml),200);

    });

});

As before, I’m going to focus on the changes. First note that I’ve included a variable called panelHtml that is loaded via require. This lets me define my HTML outside of my JavaScript code. (You can also use templating engines. Brackets ships with Mustache.) The HTML behind the panel is rather simple.

Listing 6: helloworld5/panel.html
<div class="bottom-panel helloworld-panel" id="helloworldPanel">
<h1>My Panel</h1>

<p>
My panel brings all the boys to the yard,<br/>
And they're like<br/>
It's better than yours,<br/>
Damn right it's better than yours,<br/>
I can teach you,<br/>
But I have to charge
</p>
</div>

Returning to main.js, I've demonstrated another feature, loadStyleSheet. This lets you load an extension-specific style sheet. I created a file, helloworld.css, with some simple (but tasteful) CSS styles.

Listing 7: helloworld5/helloworld.css
.helloworld-panel h1 {
        color: red;
}

.helloworld-panel p {
        color: blue;
        font-weight: bold;
}

Note that I prefixed my styles with a unique name. This helps ensure my classes don't conflict with anything built into Brackets. With these simple changes my panel now looks much better, and you can see why I'm known worldwide for my superior design skills.

Epic CSS

Packaging and Sharing Your Kick Butt Extension

Of course, just creating the coolest Brackets extension isn’t quite enough. You probably (hopefully!) want to share it with others. One option is to just zip up the directory and put it on your website. Folks can download the zip, extract it, and copy it to their Brackets extensions folder.

But that’s not cool. You want to be cool, right? In order to share your extension and make it available via the Brackets Extension manager, you simply need to add a package.json file to your extension. If you’ve ever used Node.js, then this will seem familiar. Here is a sample one for our extension.

Listing 8: helloworld6/package.json
{
    "name": "camden.helloworld",
    "title": "HelloWorld",
    "description": "Adds HelloWorld support to Brackets.",
    "homepage": "https://github.com/cfjedimaster/something real here",
    "version": "1.0.0",
    "author": "Raymond Camden <raymondcamden@gmail.com> (http://www.raymondcamden.com)",
    "license": "MIT",
    "engines": {
        "brackets": "<=0.34.0"
    }
}

Most of this is self-explanatory, but the truly crucial portion is the engines block. Brackets updates itself pretty rapidly. If Brackets added a particular feature at some point that your extension relies on, you can add a simple constraint here to ensure folks don't try to install your extension on an incompatible version. (You can find a full listing of the possible settings on the Wiki.)

Once you’ve done this, the next part is to upload it to the Brackets Registry. You will need to log in via your GitHub account, but once you’ve done that, you can then simply upload your zip. Your extension will then be available to anyone using Brackets. Even better, if you update your extension, the Extension Manager will actually be able to flag this to the user so they know an update is available.


What Else?

Hopefully, you’ve seen how easy it is to extend Brackets. There’s more we didn’t cover, like the Linting API and NodeJS integration, but this article should be more than enough to get you started. As a reminder, do not forget there is a large collection of extensions available for you to start playing with right now. Good luck!

January 31 2014

14:00

Learning SASS: A Beginner’s Guide to SASS

Writing a lot of CSS can be overwhelming; that is why learning SASS and LESS can make any web developer's and designer's life much easier. As a beginner you might find plain CSS fine, but as time goes by and your CSS skills improve, you begin to wonder whether there is a way to avoid repeating so much CSS in your style sheet. The good news is, there is! Thanks to CSS pre-processors, it's now possible to write concise CSS without repeating yourself again and again, and the output is even formatted nicely. You can perform computations and do dynamic styling using these pre-processing tools. There are two pre-processors I will tackle: SASS and LESS. In this tutorial I will talk about SASS first, and then in a separate tutorial I'll talk about LESS.

CSS Drawbacks

Using CSS alone might work for you, but when building big websites with multiple pages, there may be some features you wish CSS had. Take a look at the following disadvantages of using CSS alone.

  • No way to re-use common style rules.
  • No way to specify variables that can be defined and re-used all through the style sheet.
  • No way to perform computations, such as adding numerical values to a property.

Advantages of Using Pre-Processing Methods

While using CSS alone can be a nuisance, using pre-processing methods can save you a lot of time and effort. Check out the list of advantages of using pre-processing methods below.

  • Allows you to use variables that can be re-used all throughout the style sheet.
  • Higher level style syntax that provides advanced CSS features.
  • Compiled CSS files are uploaded to the production web server.

What Is SASS?

SASS stands for Syntactically Awesome Style Sheets and was designed and created by Hampton Catlin. SASS extends CSS with variables, mixins, inheritance and nesting rules. Written in files with the .sass or .scss extension, it is translated to well-formatted CSS using a command-line tool or web-framework plugin.

SASS makes it easier to write less CSS and to manipulate it dynamically. It's a great way to write more functional stylesheets and can speed up the workflow of every web developer and designer.

.sass VS. .scss Format

Before we begin using SASS, let's compare the .sass and .scss extensions. First I will provide a simple piece of CSS, and then I will show you how to write it in both SASS formats.

CSS Code

For our CSS, I used a header tag, gave it a zero value for margin and padding, and set white as its text color.


header {
     margin: 0;
     padding: 0;
     color: #fff;
}

.scss Extension Format (New Way of Writing SASS)

To write this in the .scss format, we declare a variable $color and give it the hexadecimal value #fff for white. Then, inside the rule, instead of repeating the hexadecimal value #fff, we use the $color variable that was set at the beginning of the code.

$color:  #fff;
header {
    margin: 0;
    padding:0;
    color: $color;
}

.sass Extension Format (Old Way of Writing SASS)

For the .sass format, we will have the same variable and value as in the .scss format, but this time without semi-colons and brackets. Notice that the format relies on indentation instead. This is the old way of writing SASS.

$color: #fff
header
   margin: 0
   padding: 0
   color: $color

Resources you need to complete this tutorial:

Installing Ruby

Before you can test how SASS works, you need to download Ruby to your computer. Launch the Ruby installer and you will be prompted with the Setup page. Select your preferred language and click OK.

1

Then select the I accept the License radio button.

2

Next, install it to your preferred location and make sure that Add Ruby executables to your PATH is checked. Click the Install button. This will install the software; when it's done, just click the Finish button.

34

Checking if Ruby is Running Properly

Now that you have installed Ruby, let's go ahead and check that it is working properly. Open Command Prompt and type ruby -v. It should return the currently installed version of Ruby along with its release date. If it returns an error, Ruby may not have been installed correctly, or the Ruby executable may not be on your PATH.

5

Installing SASS

To install SASS, open Command Prompt and type gem install sass. The output will confirm that the gem was installed successfully.

6

Preparing the Necessary Files

Before digging in with SASS, we need to prepare the files we will work with. Create a new folder in your preferred location (for this tutorial, I placed mine on my desktop) and name it SASS, or any name you prefer. Inside the SASS folder, create an HTML file named index.html and give it the following content.


<!DOCTYPE html>
<html>
<head>
<title>Introduction to SASS</title>
<link href="style.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="container">
<header>
<h1>Sass Sample Document</h1>
<h2>A 1stwebdesigner tutorial</h2>
</header>
<div>
<p>Simple paragraph of text</p>
<p>Another paragraph of text</p>
</div>
<div>
<ul id="list1">
	<li>List Item 1</li>
	<li>List Item 2</li>
	<li>List Item 3</li>
</ul>
</div>
<footer>
<h3>This is a cool footer content</h3>
</footer>
</div>
</body>
</html>

Now for our SASS file, create a new blank file in your text editor and name it style.scss. If you followed the steps, by this time you will have the following file structure.

7

Converting SASS code into CSS code

To convert the SASS code into CSS, we're going to use the --watch command at the command prompt. This compiles the SASS code to CSS and also watches the directories for changes or updates. Let's try converting the SASS file to a CSS file. Before we start, we need to put some sample code in our style.scss to see if this is working. Copy and paste the following sample SASS code into the style.scss file you created under the SASS folder.


$myMargin: 0px auto;
$myColor: red;
$myWidth: 600px;

h1 {
	color: $myColor;
	margin: $myMargin;
}

Next, open your command prompt and go to the location where you put your files. In my case, I put them on my desktop, so I will type cd Desktop to switch to the Desktop directory.

8

Now that we are in the Desktop directory, type in sass --watch Sass:Sass

9

Using the --watch command, we convert all of the .scss files in the SASS folder and keep watching them for changes or updates. Notice that Sass appears twice, divided by a colon: the first is the folder containing the .scss files, while the second is the folder where the compiled CSS will be written. Make sure you link the compiled CSS file in your HTML file to see this working.

 10

Using Variables

SASS variables are declared using the $ character and are defined like CSS values. Using SASS, you can declare variables for styles like font size, margin, padding and so on. Assigning a style value to a variable makes it easy to reuse that value repeatedly.

There are six different types of variables you can use with SASS.

  1. Strings (e.g. $myString: “your text here”;)
  2. Numbers (e.g. $myNum: 10px;)
  3. Colors (e.g. $myColor: white;)
  4. Booleans (e.g. $myBool: true;)
  5. Lists (e.g. $myItemList: 1px solid red;)
  6. Nulls (e.g. $myVar: null;)

Let’s put some of these types into practice. Open up your style.scss file and type the following code.


$myColor: #009a82;
$myString: " some text here ";
$myFontSize: 13px;
$myMargin: 0px auto;
$myWidth: 460px;

h1 {
	color: $myColor;
	margin: 0;
	padding: 0;
}

#container {
	width: $myWidth;
	margin: $myMargin;
}

Now when you run this code in your browser, you will have this output.

12
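
To make the variable substitution concrete, the SCSS above compiles to roughly the following CSS (shown in the expanded style for readability; the unused $myString and $myFontSize variables simply produce no output):

h1 {
  color: #009a82;
  margin: 0;
  padding: 0;
}

#container {
  width: 460px;
  margin: 0px auto;
}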

Nesting

SASS also allows you to define nested styles. This lets you write easy-to-read code. For instance, say you have the following CSS.

#container p {
   font-family: Arial;
   font-size: 13px;
}

#container h1 {
   font-family: Tahoma;
   font-size: 15px;
}

#container h2 {
   font-family: Helvetica;
   font-size: 14px;
}

For the SASS version, you will have a format like this.

$myFontsize1: 13px;
$myFontsize2: 18px;
$myFontsize3: 25px;
$myWidth: 500px;
$myMargin: 0px auto;

#container {
	width: $myWidth;
	margin: $myMargin;

	p {
		font-family: Arial;
		font-size: $myFontsize1;
	}

	h1 {
		font-family: Tahoma;
		font-size: $myFontsize3;
	}

	h2 {
		font-family: Helvetica;
		font-size: $myFontsize2;
	}
}

Notice that we put all of the element styles inside the #container rule, instead of repeating the #container selector for each element. If you run this code in your browser, you will have something like this.

12
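
For reference, the nested SCSS above compiles to roughly the following CSS (again shown expanded for readability). Notice how the parent selector #container is automatically prepended to each nested rule:

#container {
  width: 500px;
  margin: 0px auto;
}

#container p {
  font-family: Arial;
  font-size: 13px;
}

#container h1 {
  font-family: Tahoma;
  font-size: 25px;
}

#container h2 {
  font-family: Helvetica;
  font-size: 18px;
}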

Mixins

Mixins let you define common properties once and then re-use them over and over again. A mixin is defined with the @mixin directive, contains a block of code, and is re-used with the @include directive. Let's put this into practice. Copy the code below to your style.scss file.

@mixin border {
   border: 1px solid red;
}

#container {
   width: 960px;
   margin: 0 auto;
   @include border;
}

As you can see, we use the @mixin directive to create a border style and then include that style in the #container rule using the @include directive. If you run this in your browser, you will have the output below.

13
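
For reference, the compiled CSS is roughly the following; the @include line is simply replaced by the declarations from the mixin:

#container {
  width: 960px;
  margin: 0 auto;
  border: 1px solid red;
}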

Operators

Performing mathematical operations is one of the best features of pre-processors like SASS, something you can't do with normal CSS alone. This feature allows you to write more complex and dynamic code. Let's take a look at how it works. Go ahead and open your style.scss file and copy and paste the code below.


$myColor: #aa30ff;
$myPadding: 20px;
$thickness: 1px;
$border: solid red;

#samplepara{
	color: $myColor;
	padding: $myPadding+30;
}

#list1 {
	color: $myColor;
	border: $thickness+5 $border;
}

As you can see, we perform calculations by adding 30 to the 20px padding and 5 to the 1px border thickness. If you run this in your browser, the output will look like this.

14
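
The compiled CSS looks roughly like this, with the arithmetic already resolved (20px + 30 = 50px for the padding, 1px + 5 = 6px for the border):

#samplepara {
  color: #aa30ff;
  padding: 50px;
}

#list1 {
  color: #aa30ff;
  border: 6px solid red;
}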

Functions

SASS offers a variety of functions. A good example is the set of color functions, which let you manipulate colors. Check out the list of color functions below.

  1. darken(color, amount)
  2. lighten(color, amount)
  3. saturate(color, amount)
  4. desaturate(color, amount)
  5. alpha(color)

These are just a few examples of functions. To learn more about functions, check out the SASS documentation. For now, let's try an example to see how SASS functions work. Open your style.scss file and then copy and paste the code below.


$myColor: #202020;
$myBackground: #e6e6e6;

body {
     background: darken($myBackground, 20%);
}

h1, h2 {
     color: lighten($myColor, 40%);
}

Notice that the body background is darkened by 20% using the darken function, while the h1 and h2 text colors are lightened by 40% using the lighten function. If you run this in your browser, you will see output similar to the image below.

15
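
The compiled CSS looks roughly like the following; the exact hexadecimal values are approximate, since darken() and lighten() adjust the HSL lightness of the color and round the result:

body {
  background: #b3b3b3;
}

h1, h2 {
  color: #868686;
}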

SASS Output Formatting

One of the best features of SASS is that it offers several options to control how the .scss code is formatted when compiled to CSS. Using the --style option, we can control the formatting of the compiled CSS. The following are the SASS formatting styles.

Nested Format

Nested is the default output format of SASS. This format indents all of the styles in your compiled CSS file. To see this in action, using the code already in your style.scss, open your Command Prompt (make sure you are inside the SASS directory) and type in sass --update style.scss. This recompiles the CSS file using the update command.

16

Now go ahead and open your compiled style.css file. As mentioned above, all of the styles will be properly indented.

  17
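
Since the screenshot only hints at the difference, here is a rough sketch of the nested format using a small hypothetical rule (not the exact code from this tutorial). Child selectors are indented under their parent and the closing braces stay on the same line as the last declaration:

/* SCSS source: #container { width: 500px; p { color: #333; } } */
#container {
  width: 500px; }
  #container p {
    color: #333; }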

Expanded Format

This is the most user-friendly and readable format, as the braces are fully expanded and each property gets its own line. Let's see it in action. Using the same code as above, open your command prompt and type sass --update style.scss --style expanded. Notice that we added the --style option, which formats the compiled CSS file in a particular style.

18

If you look at the compiled CSS file in the SASS folder, you will see formatting similar to the image below. Each property has its own line, and the braces are fully expanded.

19
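
The same hypothetical rule in the expanded format, where every declaration and every brace gets its own line:

#container {
  width: 500px;
}
#container p {
  color: #333;
}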

Compact Format

The compact format outputs CSS in a condensed but still readable form: each rule sits on a single line, with spaces between the braces. To see how this works, using the same code as above, open your command prompt and type sass --update style.scss --style compact.

20

If you check the compiled CSS file, you will see something similar to the image below: each rule, with all of its properties, is on a single line.

 21
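
The same hypothetical rule in the compact format, with each rule condensed to a single line:

#container { width: 500px; }
#container p { color: #333; }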

Compressed Format

The compressed format produces minified output and is suitable for production environments. Using the same code, open your command prompt and type sass --update style.scss --style compressed.

22

If you look at the compiled CSS file, you will see formatting similar to the image below: all of the code sits on a single line, with no unnecessary whitespace at all.

23
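
And the same hypothetical rule in the compressed format, stripped of all optional whitespace:

#container{width:500px}#container p{color:#333}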

Conclusion

SASS offers a variety of ways to simplify your CSS. It allows you to reduce repetition by using variables, and it can improve the workflow of both web designers and web developers. Learn SASS now and discover how it can help you build professional websites faster. Hope you enjoyed this tutorial and see you again next time.

January 30 2014

14:00

JavaScript Animation That Works (Part 2 of 4)

In the last post, we introduced the idea of spriting, an easy way to animate in JavaScript that works in all browsers. We also walked through how to set up the sprite as a background image for a div and then use a line of JavaScript to change the background position to make it appear as if the image has moved.

In this post, we will use this technique to animate both running and jumping motions. In order to create the animation, we will need to quickly change the background position at a regular interval. Take a look again at the sprite we are using.

javascript-spriting-j

Meet J, the mascot for my company, Joust Multimedia.

In our example, we have ten total images: one of J standing facing right, three of J running to the right and one of J jumping while facing right (with the same number of each frame facing left). Let’s start with making him run to the right. In order to make our image look like it is running, we will need to do two things: change the sprite to a different image and move the div towards the right.


Running to the Right Animation

We certainly won’t want to be stuck clicking different buttons to cycle through the sprites, so we will need to create some functions that do this automatically.

For our running function, we want to:

  1. Move the div towards the right slightly
  2. Move to the next frame of animation
  3. Pause for a fraction of a second (to preserve the “persistence of vision” illusion)
  4. Loop the function again

Fortunately, there is an easy way to loop with functions. A native command in JavaScript called setTimeout will allow us to create a timed delay, after which we will call the function again (from inside the function).

function run_right(){
  // Move slightly to the right ...
  // Change to the next frame of animation ...

  // this will call 'run_right' again after 200 milliseconds
  setTimeout(function(){run_right();}, 200); 
}

So now we have a function that will call itself again five times a second (which will be fast enough to create animation for our purposes). Remember here that browsers are not terribly accurate with their timers. You can specify timing to the millisecond, but that doesn’t mean your script will run at that timing exactly!

Our next problem to tackle is how is our function going to know which sprite to change to? In our example, we will need to cycle back and forth through our three images (to have four total frames of animation). To do this, we are going to pass our function a bit of information to tell it which slide to switch to. Once in the function, we will do a test that will check which slide we should be on, then switch the background position to the correct sprite. When we call the function again, we will pass the next slide as the argument.

function run_right(slide){
  // Move slightly to the right ...
  switch (slide){ // this switch statement checks for different possibilities for 'slide'
    case 1: // if 'slide' equals '1' ...
      document.getElementById('j').style.backgroundPosition = "-40px 0px";
      setTimeout(function(){run_right(2);}, 200);
      break;
    case 2: // if 'slide' equals '2' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(3);}, 200);
      break;
    case 3: // if 'slide' equals '3' ...
      document.getElementById('j').style.backgroundPosition = "-120px 0px";
      setTimeout(function(){run_right(4);}, 200);
      break;
    case 4: // if 'slide' equals '4' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(1);}, 200);
      break;
  }
}

And now when we call the function for the first time, we will need to make sure we pass the starting slide.

<input type="button" value="Run Right" onclick="run_right(1);" />

Similarly, to move our div to the right slightly, we can pass the initial left attribute of the div, then move the div slightly each time the function is called.

function run_right(slide, left){
  
  left = left + 15; // Increase his left attribute by 15px
  document.getElementById('j').style.left = left+"px";
  
  switch (slide){ // this switch statement checks for different possibilities for 'slide'
    case 1: // if 'slide' equals '1' ...
      document.getElementById('j').style.backgroundPosition = "-40px 0px";
      setTimeout(function(){run_right(2, left);}, 200);
      break;
    case 2: // if 'slide' equals '2' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(3, left);}, 200);
      break;
    case 3: // if 'slide' equals '3' ...
      document.getElementById('j').style.backgroundPosition = "-120px 0px";
      setTimeout(function(){run_right(4, left);}, 200);
      break;
    case 4: // if 'slide' equals '4' ...
      document.getElementById('j').style.backgroundPosition = "-80px 0px";
      setTimeout(function(){run_right(1, left);}, 200);
      break;
  }
}

And when we initially call the function, we need to make sure we pass the current left position of our div.

<input type="button" value="Run Right" onclick="run_right(1, document.getElementById('j').offsetLeft);" />

Stopping the Animation

So, now we have a function that, when called, will animate J to run to the right. Unfortunately, we have no way to stop it. First of all, we will need to make the function stop calling itself if J runs to the edge of our stage. To do that, every time the function runs, we will check an if statement to see if J has room to keep running. If so, we will run the function like normal. If not, we will stop calling the function and return him to the standing sprite.

function run_right(slide, left){
  // If we can add 15 pixels to the left and have J's right edge not be at the stage's right edge ...
  if ((left + 15) < (document.getElementById('stage').offsetWidth - document.getElementById('j').offsetWidth)){
    // We have room! Continue like normal here
  } else { // if we are on the right edge, we need to stop calling the function and return to standing
    document.getElementById('j').style.backgroundPosition = "0px 0px";
  }
}

Finally, we will want to have a way to stop the function, when needed. We can set the setTimeout() command to a variable, then stop it with the clearTimeout() command. In order to do this, we will need to declare that variable outside of the function, so that we will be able to refer to it later. For now, we will declare it as a global variable. This is terrible coding practice, but we will correct this in the next post. This is what our function looks like.

var timer;

function run_right(slide, left){
  if ((left + 15) < (document.getElementById('stage').offsetWidth - document.getElementById('j').offsetWidth)){
    left = left + 15; // Increase his left attribute by 15px
    document.getElementById('j').style.left = left+"px";
  
    switch (slide){ // this switch statement checks for different possibilities for 'slide'
      case 1: // if 'slide' equals '1' ...
        document.getElementById('j').style.backgroundPosition = "-40px 0px";
        timer = setTimeout(function(){run_right(2, left);}, 200);
        break;
      case 2: // if 'slide' equals '2' ...
        document.getElementById('j').style.backgroundPosition = "-80px 0px";
        timer = setTimeout(function(){run_right(3, left);}, 200);
        break;
      case 3: // if 'slide' equals '3' ...
        document.getElementById('j').style.backgroundPosition = "-120px 0px";
        timer = setTimeout(function(){run_right(4, left);}, 200);
        break;
      case 4: // if 'slide' equals '4' ...
        document.getElementById('j').style.backgroundPosition = "-80px 0px";
        timer = setTimeout(function(){run_right(1, left);}, 200);
        break;
    }
  } else {
    document.getElementById('j').style.backgroundPosition = "0px 0px";
  }
}

And we can create another function to stop the running timer and return the sprite to the standing image.

function stop_running(){
  document.getElementById('j').style.backgroundPosition = "0px 0px";
  clearTimeout(timer);
}

Running to the Left Animation

Now, by borrowing the code from our run_right function, we can create a run_left function with just a few modifications.

function run_left(slide, left){
  if ((left - 15) > 0){
    left = left - 15;
    document.getElementById('j').style.left = left+"px";
    switch (slide){
      case 1:
        document.getElementById('j').style.backgroundPosition = "-40px -50px";
        timer = setTimeout(function(){run_left(2, left);}, 200);
        break;
      case 2:
        document.getElementById('j').style.backgroundPosition = "-80px -50px";
        timer = setTimeout(function(){run_left(3, left);}, 200);
        break;
      case 3:
        document.getElementById('j').style.backgroundPosition = "-120px -50px";
        timer = setTimeout(function(){run_left(4, left);}, 200);
        break;
      case 4:
        document.getElementById('j').style.backgroundPosition = "-80px -50px";
        timer = setTimeout(function(){run_left(1, left);}, 200);
        break;
    }
  } else {
    document.getElementById('j').style.backgroundPosition = "0px -50px";
  }
}

Jumping Animation

Finally, we need to create a jump function. We will pass two arguments to this function: one that tracks whether the div is currently moving up or down, and another that tracks the current top attribute of the div. Between the two, we will determine which direction the div needs to move next, and how far (we will move the div a shorter distance near the top of the jump arc to simulate acceleration due to gravity).

function jump(up, top){
  /*
   * We change J to his jumping sprite ...
   */
  document.getElementById('j').style.backgroundPosition = "-160px 0px";
  /*
   * Here, we need to decide whether he should be traveling up or down...
   */
  if (up && (document.getElementById('j').offsetTop > 20)){
    // if he is currently moving up, and he is more than 20 pixels from the top of the stage ...
    top = top - (top * .1); // This gives us a slight arc in the jump, rather than a constant movement like running
    document.getElementById('j').style.top = top+"px"; // Change his position
    timer = setTimeout(function(){jump(up, top);}, 60); // Then call the function again
  } else if (up) {
    // if he is currently moving up, but he is almost at the top of the stage and needs to come back down...
    up = false; // we switch the 'up' variable so he will be falling in the next loop
    timer = setTimeout(function(){jump(up, top);}, 60);
  } else if (!up && (document.getElementById('j').offsetTop < 115)){
    // if he is moving down, but is more than 5px from the ground, he will continue to fall...
    top = top + (top * .1); // His fall will slightly accelerate
    document.getElementById('j').style.top = top+"px";
    timer = setTimeout(function(){jump(up, top);}, 60);
  } else {
    // If he is moving down, and he is within 5px of the ground...
    document.getElementById('j').style.top = "120px"; // Place him on the ground
    document.getElementById('j').style.backgroundPosition = "0px 0px"; // return to standing sprite
    // We do not call the loop anymore since he is standing still at this point
  }
}

Now we can put all four of our functions into buttons and have a working prototype of a running and jumping animation! Please check out the source code for this page with comments and download the sprite sheet that I used, if you’d like.


Conclusion

Now, although we have a working prototype here, you may notice it is a little buggy. When you click on more than one button at a time, the script will try to run both at once. Or, if you click the jump button again on the way down, J will continue to fall forever. Also, as I mentioned earlier, we have global variables in our script, which means it might be difficult to add this code into an existing page without crashing other JavaScript (which is also why I didn’t try to run this code within this blog page). In our next post, we will clean up all of these bugs and talk about the concept of encapsulation and why it is important to write good code in the real world.

January 29 2014

14:00

How to Use UI Kits – Plus Free UI Kits to Choose From!

UI kits can be seen everywhere on the Web. But what exactly is a UI kit, and do you know how to use one? You can find a lot of resources with beautiful interface elements such as buttons, sliders, breadcrumbs, media players, forms and the like. Perhaps you have wondered how these elements came to be.

First, let me give you a brief introduction to UI kits. UI kit stands for “user interface kit”: a set of PSD files composed of user interface elements. These come with a variety of colors, patterns and .asl files that can be integrated into web and mobile designs. Sometimes a package includes UI elements you don't think you will ever use; which kit makes sense will vary according to your web design needs. Usually the premium versions of UI kits come with many more user interface elements than the free ones.

Using UI kits is all about improving and speeding up your workflow without spending a lot of time thinking about what to design. This allows you to focus more on the functionality and usability of the website you are working on.

OK, enough for introduction. I know you’re all fired up and want to know how to use UI kits. So let’s rock!

Choosing a UI Kit to Use

There are a lot of UI kit resources on the Web that you can use, depending on what you need. For this tutorial, we will be using the Flat UI PSD by Designmodo.com, since it has a lot of user interface elements to choose from.

Feel free to check these out too!

Flat UI Pro, a professional design framework

1.  Flat AP UI Kit

From: Andrew Preble
free-flat-UI-kit-1

2. Featherweight UI – A free, vector based and retina ready UI kit

From: Sara Hunt
free-flat-UI-kit-2

3. Flat Design UI Kit Vol. 1

From: Bloom Web Design
free-flat-UI-kit-15

4. FREE flat UI kit.

From: Visualcreative.cz
free-flat-UI-kit-4

5. UI Kit

From: Abhimanyu Rana
free-flat-UI-kit-5

Opening the UI Kit PSD File

Unzip the file first to view the contents of the folder. You will see four folders and two .txt files inside. Open the UI folder and then view the flat-ui-free.psd file in Adobe Photoshop.

file-structure

photshop-opened

By default, there are three folders on the Flat UI PSD file:

  • Basic Elements – consists of a series of individual UI element folders.
  • Samples – consists of the actual application design of the UI elements.
  • Background – the white background of the UI elements.

psd-file-structure

Using the UI Kit Elements

Now that the Flat UI PSD file is open, we can go ahead and use the UI elements. Enable Auto-Select in the options bar (in the top-left section of Photoshop, near the menu; make sure Group is selected instead of Layer) and then open the Basic Elements folder.

auto-select

open-basic-element

Next, select the UI elements that you want to use. For this example, we will select the menu, share status buttons, radio buttons and checkboxes. Click on the selected UI elements and then move them to a new Photoshop document file.

menu-ui

Resizing UI Kit Elements

To resize UI elements, click on the Direct Selection Tool in the toolbar on the left side of Photoshop.

direct-selection-tool

Then select the path on the right side of the UI element you want to resize. For this example, we will resize the menu element: press Shift and drag the path to the right to resize it.

resize

Changing the UI Element Color Scheme

To change the color scheme, you need to select the specific element you want to change, then click on the shape and select your preferred color in the color picker. For this example, we will change the background color of the menu. Go to the Menu folder and look for the shape layer that has the same background color as the menu. Change the background color of the menu to the hexadecimal color #00acc0.

change-color

Using Vectors and Glyphs

Vectors and glyphs add a touch of creativity to web design elements. Let's use the existing vectors and glyphs of the Flat UI kit. Create a new Photoshop document. Next, create a box using the Rounded Rectangle Tool with dimensions of 372px by 372px. Now drag the retina vector icon and the static green button onto the box you created. Then drag the round check icon glyph to the left of the static green button. You have just used the vectors and glyphs of the UI kit.

vector-glyphs

Exporting UI Elements for Web

Now that we know how to resize UI elements and change their color scheme, let's export them for web use. Simply drag your preferred UI element into a new Photoshop document with a transparent background. Make sure you select only the layers of that UI element, not entire folders. For this example, let's use the paginator. Drag the UI element into the new Photoshop document, go to Image and choose Trim.

trim1

The Trim window will then appear. Next, select the Transparent Pixels radio button, make sure that all of the checkboxes in the Trim Away section (Top, Bottom, Left, Right) are checked, and then click OK.

trim2

Then go to File > Save for Web & Devices. In the Save for Web & Devices window, select the preset you want to use. For this example, PNG-24 is used, since we want a high-quality image that keeps its transparent background. Click Save and choose your preferred location.

save-for-web

Congratulations! You have just learned how to use UI kit elements: resizing them, changing their color scheme, applying vectors and glyphs, and exporting them for web use.

Conclusion

UI kits are very helpful to both web designers and web developers. If you are a freelancer, using UI kits will save you a lot of time, since you don't need to spend as much time thinking about the design. It can please your clients too, because you can deliver projects faster. You can explore more UI kits here. The web offers hundreds of UI kits, in both free and premium versions; you just need to choose the one that suits your needs. You might also want to apply one to an actual web or mobile design and then code it with HTML and CSS.

January 28 2014

14:00

Setting Up a Local Mirror for Composer Packages With Satis

Installing all your PHP libraries with Composer is a great way to save time. But larger projects that are automatically tested and run at each commit to your version control system will take a long time to install all the required packages from the Internet. You want to run your tests as soon as possible through your continuous integration (CI) system, so that you have fast feedback and can react quickly to failures. In this tutorial we will set up a local mirror to proxy all the packages required in your project's composer.json file. This will make our CI runs much faster, install the packages over the local network (or even from the same machine), and make sure the specific versions of the packages are always available.


What Is Satis?

Satis is the name of the application we will use to mirror the various repositories for our project. It sits as a proxy between the Internet and your composer. Our solution will create a local mirror of a few packages and instruct our composer to use it instead of the sources found on the Internet.

Here is an image that says more than a thousand words.

Architecture

Our project will use composer as usual. It will be configured to use the local Satis server as the primary source. If a package is found there, it will be installed from there. If not, we will let composer use the default packagist.org to retrieve the package.


Getting Satis

Satis is available through composer, so installing it is very simple. In the attached source code archive, you will find Satis installed in the Sources/Satis folder. First we will install composer itself.

$ curl -sS https://getcomposer.org/installer | php
#!/usr/bin/env php
All settings correct for using Composer
Downloading...

Composer successfully installed to: /home/csaba/Personal/Programming/NetTuts/Setting up a local mirror for Composer packages with Satis/Sources/Satis/composer.phar
Use it: php composer.phar

Then we will install Satis.

$ php composer.phar create-project composer/satis --stability=dev --keep-vcs
Installing composer/satis (dev-master eddb78d52e8f7ea772436f2320d6625e18d5daf5)
  - Installing composer/satis (dev-master master)
    Cloning master
Created project in /home/csaba/Personal/Programming/NetTuts/Setting up a local mirror for Composer packages with Satis/Sources/Satis/satis
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
  - Installing symfony/process (dev-master 27b0fc6)
    Cloning 27b0fc645a557b2fc7bc7735cfb05505de9351be

  - Installing symfony/finder (v2.4.0-BETA1)
    Downloading: 100%

  - Installing symfony/console (dev-master f44cc6f)
    Cloning f44cc6fabdaa853335d7f54f1b86c99622db518a

  - Installing seld/jsonlint (1.1.1)
    Downloading: 100%

  - Installing justinrainbow/json-schema (1.1.0)
    Downloading: 100%

  - Installing composer/composer (dev-master f8be812)
    Cloning f8be812a496886c84918d6dd1b50db5c16da3cc3

  - Installing twig/twig (v1.14.1)
    Downloading: 100%
symfony/console suggests installing symfony/event-dispatcher ()
Generating autoload files

Configuring Satis

Satis is configured by a JSON file very similar to composer's own. You can use whatever name you want for the file and specify it later when running Satis. We will use “mirrored-packages.conf”.

{
    "name": "NetTuts Composer Mirror",
    "homepage": "http://localhost:4680",

    "repositories": [
        { "type": "vcs", "url": "https://github.com/SynetoNet/monolog" },
        { "type": "composer", "url": "https://packagist.org" }
    ],

    "require": {
        "monolog/monolog": "syneto-dev",
        "mockery/mockery": "*",
        "phpunit/phpunit": "*"
    },
    "require-dependencies": true
}

Let’s analyze this configuration file.

  • name – represents a string that will be shown on the web interface of our mirror.
  • homepage – the web address where our packages will be served. This does not tell our web server to use that address and port; it is just informational for a working configuration. We will set up access to it on that address and port later.
  • repositories – a list of repositories ordered by preference. In our example, the first repository is a Github fork of the monolog logging libraries. It has some modifications and we want to use that specific fork when installing monolog. The type of this repository is “vcs“. The second repository is of type “composer“. Its URL is the default packagist.org site.
  • require – lists the packages we want to mirror. It can represent a specific package with a specific version or branch, or any version for that matter. It uses the same syntax as your “require” or “require-dev” in your composer.json.
  • require-dependencies – is the final option in our example. It will tell Satis to mirror not only the packages we specified in the “require” section but also all their dependencies.

To quickly try out our settings we first need to tell Satis to create the mirrors. Run this command in the folder where you installed Satis.

$ php ./satis/bin/satis build ./mirrored-packages.conf ./packages-mirror
Scanning packages
Writing packages.json
Writing web view

While the process is taking place, you will see how Satis mirrors each found version of the required packages. Be patient; it may take a while to build all those packages.

Satis requires date.timezone to be specified in the php.ini file, so make sure it is set to your local timezone. Otherwise an error like the following will appear.

[Twig_Error_Runtime]
  An exception has been thrown during the rendering of a template
("date_default_timezone_get(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set) function. 

Then we can run a PHP server instance in our console pointing to the recently created repository. PHP 5.4 or newer is required.

$ php -S localhost:4680 -t ./packages-mirror/
PHP 5.4.22-pl0-gentoo Development Server started at Sun Dec  8 14:47:48 2013
Listening on http://localhost:4680
Document root is /home/csaba/Personal/Programming/NetTuts/Setting up a local mirror for Composer packages with Satis/Sources/Satis/packages-mirror
Press Ctrl-C to quit.
[Sun Dec  8 14:48:09 2013] 127.0.0.1:56999 [200]: /
[Sun Dec  8 14:48:09 2013] 127.0.0.1:57000 [404]: /favicon.ico - No such file or directory

And we can now browse our mirrored packages and even search for specific ones by pointing our web browser to http://localhost:4680:

MirrorWebpage

Let’s Host It on Apache

If you have a running Apache at hand, creating a virtual host for Satis will be quite simple.

Listen 4680

<Directory "/path/to/your/packages-mirror"> Options -Indexes FollowSymLinks AllowOverride all Order allow,deny Allow from all
</Directory>

<VirtualHost *:4680> DocumentRoot "/path/to/your/packages-mirror" ServerName 127.0.0.1:4680 ServerAdmin admin@example.com
 ErrorLog syslog:user

</VirtualHost>

We just put a .conf file like this in Apache's conf.d folder, usually /etc/apache2/conf.d. It creates a virtual host on port 4680 and points it to our folder. Of course, you can use whatever port you want.


Updating Our Mirrors

Satis cannot automatically update the mirrors unless we tell it to. The easiest way, on any UNIX-like system, is to add a cron job that runs a simple script executing our update command.

#!/bin/bash

php /full/path/to/satis/bin/satis build \
/full/path/to/mirrored-packages.conf \
/full/path/to/packages-mirror

The drawback of this solution is that it is static. We have to manually update the mirrored-packages.conf every time we add another package to our project’s composer.json. If you are part of a team in a company with a big project and a continuous integration server, you can’t rely on people remembering to add the packages on the server. They may not even have permissions to access the CI infrastructure.


Dynamically Updating Satis Configuration

It’s time for a PHP TDD exercise. If you just want your code ready and running, check out the source code attached to this tutorial.

require_once __DIR__ . '/../../../../vendor/autoload.php';
 class SatisUpdaterTest extends PHPUnit_Framework_TestCase {
    function testBehavior() {
        $this->assertTrue(true);
    }
}

As usual we start with a degenerate test, just enough to make sure we have a working testing framework. You may notice that I have quite a strange-looking require_once line; this is because I want to avoid having to reinstall PHPUnit and Mockery for each small project, so I keep them in a vendor folder in my NetTuts' root. You should just install them with composer and drop the require_once line altogether.

class SatisUpdaterTest extends PHPUnit_Framework_TestCase {
	function testDefaultConfigFile() {
		$expected = '{
    "name": "NetTuts Composer Mirror",
    "homepage": "http://localhost:4680",

    "repositories": [
        { "type": "vcs", "url": "https://github.com/SynetoNet/monolog" },
        { "type": "composer", "url": "https://packagist.org" }
    ],

    "require": {
    },
    "require-dependencies": true
}';
		$actual = $this->parseComposerConf('');
		$this->assertEquals($expected, $actual);
	}
}

That looks about right. All the fields except “require” are static. We need to generate only the packages. The repositories are pointing to our private git clones and to packagist as needed. Managing those is more of a sysadmin job than a software developer’s.

Of course this fails with:

PHP Fatal error:  Call to undefined method SatisUpdaterTest::parseComposerConf()

Fixing that is easy.

private function parseComposerConf($string) {
}

I just added an empty method with the required name, as private, to our test class. Cool, but now we have another error.

PHPUnit_Framework_ExpectationFailedException : Failed asserting that null matches expected '{ ... }'

So, null does not match our string containing all that default configuration.

private function parseComposerConf($string) {
    return '{
"name": "NetTuts Composer Mirror",
"homepage": "http://localhost:4680",

"repositories": [
    { "type": "vcs", "url": "https://github.com/SynetoNet/monolog" },
    { "type": "composer", "url": "https://packagist.org" }
],

"require": {
},
"require-dependencies": true
}';
}

OK, that works. All tests are passing.

PHPUnit 3.7.28 by Sebastian Bergmann.

Time: 15 ms, Memory: 2.50Mb

OK (1 test, 1 assertion)

But we introduced a horrible duplication: all that static text, written character by character in two different places. Let's fix it:

class SatisUpdaterTest extends PHPUnit_Framework_TestCase {
	static $DEFAULT_CONFIG = '{
    "name": "NetTuts Composer Mirror",
    "homepage": "http://localhost:4680",

    "repositories": [
        { "type": "vcs", "url": "https://github.com/SynetoNet/monolog" },
        { "type": "composer", "url": "https://packagist.org" }
    ],

    "require": {
    },
    "require-dependencies": true
}';

	function testDefaultConfigFile() {
		$expected =  self::$DEFAULT_CONFIG;

		$actual = $this->parseComposerConf('');
		$this->assertEquals($expected, $actual);
	}

	private function parseComposerConf($string) {
		return self::$DEFAULT_CONFIG;
	}
}

Ahhh! That’s better.

	function testEmptyRequiredPackagesInComposerJsonWillProduceDefaultConfiguration() {
    $expected = self::$DEFAULT_CONFIG;

    $actual = $this->parseComposerConf('{"require": {}}');
    $this->assertEquals($expected, $actual);
}

Well. That also passes. But it also highlights some duplication and useless assignment.

	function testDefaultConfigFile() {
    $actual = $this->parseComposerConf('');
    $this->assertEquals(self::$DEFAULT_CONFIG, $actual);
}
 function testEmptyRequiredPackagesInComposerJsonWillProduceDefaultConfiguration() {
    $actual = $this->parseComposerConf('{"require": {}}');
    $this->assertEquals(self::$DEFAULT_CONFIG, $actual);
}

We inlined the $expected variable. $actual could also be inlined, but I like it better this way. It keeps the focus on what is tested.

Now we have another problem. The next test I want to write would look like this:

function testARequiredPackageInComposerWillBeInSatisAlso() {
    $actual = $this->parseComposerConf(
    '{"require": {
        "Mockery/Mockery": ">=0.7.2"
    }}');
    $this->assertContains('"Mockery/Mockery": ">=0.7.2"', $actual);
}

But after writing the simple implementation, we will notice it requires json_decode() and json_encode(). And of course these functions reformat our string and matching strings will be difficult at best. We have to take a step back.

function testDefaultConfigFile() {
    $actual = $this->parseComposerConf('');
    $this->assertJsonStringEqualsJsonString($this->jsonRecode(self::$DEFAULT_CONFIG), $actual);
}
 function testEmptyRequiredPackagesInComposerJsonWillProduceDefaultConfiguration() {
    $actual = $this->parseComposerConf('{"require": {}}');
    $this->assertJsonStringEqualsJsonString($this->jsonRecode(self::$DEFAULT_CONFIG), $actual);
}
 private function parseComposerConf($jsonConfig) {
    return $this->jsonRecode(self::$DEFAULT_CONFIG);
}
 private function jsonRecode($json) {
    return json_encode(json_decode($json, true));
}

We changed our assertion method to compare JSON strings, and we also re-encode the configuration with a new jsonRecode() helper. parseComposerConf() was also modified to use this method. You will see in a moment how it helps us. Our next test becomes more JSON-specific.

function testARequiredPackageInComposerWillBeInSatisAlso() {
    $actual = $this->parseComposerConf(
        '{"require": {
            "Mockery/Mockery": ">=0.7.2"
        }}');
    $this->assertEquals('>=0.7.2', json_decode($actual, true)['require']['Mockery/Mockery']);
}

And making this test pass, along with the rest of the tests, is quite easy, again.

private function parseComposerConf($jsonConfig) {
    $addedConfig = json_decode($jsonConfig, true);
    $config = json_decode(self::$DEFAULT_CONFIG, true);
    if (isset($addedConfig['require'])) {
        $config['require'] = $addedConfig['require'];
    }
    return json_encode($config);
}

We take the input JSON string, decode it, and if it contains a “require” field we use that in our Satis configuration file instead. But we may want to mirror all versions of a package, not just the last one. So maybe we want to modify our test to check that the version is “*” in Satis, regardless of what exact version is in composer.json.

function testARequiredPackageInComposerWillBeInSatisAlso() {
    $actual = $this->parseComposerConf(
        '{"require": {
            "Mockery/Mockery": ">=0.7.2"
        }}');
    $this->assertEquals('*', json_decode($actual, true)['require']['Mockery/Mockery']);
}

That obviously fails with a cool message:

PHPUnit_Framework_ExpectationFailedException : Failed asserting that two strings are equal.
Expected :*
Actual   :>=0.7.2

Now, we need to actually edit our JSON before re-encoding it.

private function parseComposerConf($jsonConfig) {
    $addedConfig = json_decode($jsonConfig, true);
    $config = json_decode(self::$DEFAULT_CONFIG, true);
    $config = $this->addNewRequires($addedConfig, $config);
    return json_encode($config);
}
 private function toAllVersions($config) {
    foreach ($config['require'] as $package => $version) {
        $config['require'][$package] = '*';
    }
    return $config;
}
 private function addNewRequires($addedConfig, $config) {
    if (isset($addedConfig['require'])) {
        $config['require'] = $addedConfig['require'];
        $config = $this->toAllVersions($config);
    }
    return $config;
}

To make the test pass we have to iterate over each element of the required packages array and set each version to '*'. See the toAllVersions() method for details. And to speed up this tutorial a little bit, we also extracted some private methods in the same step. This way, parseComposerConf() becomes very descriptive and easy to understand. We could also inline $config into the arguments of addNewRequires(), but for aesthetic reasons I left it on two lines.

But what about “require-dev” in composer.json?

function testARequiredDevPackageInComposerWillBeInSatisAlso() {
    $actual = $this->parseComposerConf(
        '{"require-dev": {
            "Mockery/Mockery": ">=0.7.2",
            "phpunit/phpunit": "3.7.28"
        }}');
    $this->assertEquals('*', json_decode($actual, true)['require']['Mockery/Mockery']);
    $this->assertEquals('*', json_decode($actual, true)['require']['phpunit/phpunit']);
}

That obviously fails. We can make it pass with just copy/pasting our if condition in addNewRequires():

private function addNewRequires($addedConfig, $config) {
    if (isset($addedConfig['require'])) {
        $config['require'] = $addedConfig['require'];
        $config = $this->toAllVersions($config);
    }
    if (isset($addedConfig['require-dev'])) {
        $config['require'] = $addedConfig['require-dev'];
        $config = $this->toAllVersions($config);
    }
    return $config;
}

Yep, that makes it pass, but those duplicated if statements are nasty looking. Let’s deal with them.

private function addNewRequires($addedConfig, $config) {
    $config = $this->addRequire($addedConfig, 'require', $config);
    $config = $this->addRequire($addedConfig, 'require-dev', $config);
    return $config;
}
 private function addRequire($addedConfig, $string, $config) {
    if (isset($addedConfig[$string])) {
        $config['require'] = $addedConfig[$string];
        $config = $this->toAllVersions($config);
    }
    return $config;
}

We can be happy again, tests are green and we refactored our code. I think only one test is left to be written. What if we have both “require” and “require-dev” sections in composer.json?

function testItCanParseComposerJsonWithBothSections() {
    $actual = $this->parseComposerConf(
        '{"require": {
            "Mockery/Mockery": ">=0.7.2"
            },
        "require-dev": {
            "phpunit/phpunit": "3.7.28"
        }}');
    $this->assertEquals('*', json_decode($actual, true)['require']['Mockery/Mockery']);
    $this->assertEquals('*', json_decode($actual, true)['require']['phpunit/phpunit']);
}

That fails because the packages set by “require-dev” will overwrite those of “require” and we will have an error:

Undefined index: Mockery/Mockery

Just add a plus sign to merge the arrays, and we are done.

private function addRequire($addedConfig, $string, $config) {
    if (isset($addedConfig[$string])) {
        $config['require'] += $addedConfig[$string];
        $config = $this->toAllVersions($config);
    }
    return $config;
}

Tests are passing. Our logic is finished. All that is left to do is to extract the methods into their own file and class. The final version of the tests and the SatisUpdater class can be found in the attached source code.

We can now modify our cron script to load our parser and run it on our composer.json. This will be specific to your projects’ particular folders. Here is an example you can adapt to your system.

#!/usr/local/bin/php

<?php

require_once __DIR__ . '/Configuration.php';

$outputDir = '/path/to/your/packages-mirror';
$composerJsonFile = '/path/to/your/projects/composer.json';
$satisConf = '/path/to/your/mirrored-packages.conf';

$satisUpdater = new SatisUpdater();
$conf = $satisUpdater->parseComposerConf(file_get_contents($composerJsonFile));
file_put_contents($satisConf, $conf);

system(sprintf('/path/to/satis/bin/satis build %s %s', $satisConf, $outputDir), $retval);
exit($retval);

Making Your Project Use the Mirror

We talked about a lot of things in this article, but we did not mention how we will instruct our project to use the mirror instead of the Internet. You know, the default is packagist.org? Unless we do something like this:

   "repositories": [
        {
            "type": "composer",
            "url": "http://your-mirror-server:4680"
        }
    ],

That will make your mirror the first choice for composer. But adding just that to the composer.json of your project will not disable access to packagist.org. If a package cannot be found on the local mirror, it will be downloaded from the Internet. If you wish to block this behavior, you may also want to add the following line to the repositories section above:

"packagist": false

Final Thoughts

That's it: a local mirror that automatically adapts to your projects and keeps its packages up to date. Your colleagues will never have to wait a long time while they or the CI server install all the requirements of your projects. Have fun.

January 27 2014

14:00

Simple Yet Important Cross-Browser Styling Tips Everyone Should Know

Many people have been searching for cross-browser styling tips that can help them solve their browser compatibility problems. After all, creating a website can be easy, but developing one that looks the same in all browsers can be tough. For many years, browser compatibility has been one of the hardest parts of web development.

As web designers and developers, it's always our goal to give visitors a better experience when viewing our site. Having a consistent look in every browser leaves a visitor with a good impression of the site.

cross-browser-styling

If you've been developing websites for a long time, you might agree with me that IE8 and the Jurassic lower versions of IE are one of the web designer's and developer's worst nightmares. Despite this, there are still people using these versions of IE to view websites.

This is just one of the issues we might encounter in developing a cross-browser website. The good thing is that we can adopt some good practices to make a site compatible and consistent-looking in every browser. In this article, I will show you some cross-browser styling tips and tricks to make your website development easier.

Use CSS Reset

Let’s face the fact that web browsers apply different default styling to HTML elements. Some browsers use different default values for properties like margin and padding.

To fix this issue, adding a CSS reset at the start of your style sheet is good practice. This resets the styling of all HTML elements, so you start from a zero base and gain complete control over your style sheet without worrying about each browser’s defaults for margin, padding and alignment.

Eric Meyer has a good set of CSS rules to do this. You might also want to consider Nicolas Gallagher’s Normalize.css, a modern, HTML5-ready alternative to CSS resets. Once you’ve linked the reset rules into your markup, you can be confident you are starting from the same zero base in every browser.
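As a rough illustration only (not a replacement for Meyer’s reset or Normalize.css), a very minimal reset might look like this:

* {
    margin: 0;
    padding: 0;
}

img {
    border: 0; /* older IE draws borders around linked images */
}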

Validate Your HTML and CSS

Before running or publishing your website, it’s good practice to validate both your HTML and CSS, as this will catch minor errors that might cause problems in the future.

You can use the W3C HTML Validator and CSS Validator. These are the official W3C validators, but feel free to use your own tools if you have another HTML and CSS validator that works for you.

IE Conditional Comments

The Jurassic versions of IE have given web designers and developers headaches when it comes to compatibility issues. As an answer to this issue, Microsoft added conditional comments to their browser, which allow linking a style sheet that will be interpreted by IE alone. Check out the code below.

The code below will target all versions of IE.

<!--[if IE]>
    <link href="ie.css" rel="stylesheet" type="text/css" />
<![endif]-->

The code below will target a specific version of IE.

<!--[if IE 6]>
    <link href="ie.css" rel="stylesheet" type="text/css" />
<![endif]-->

Use Vendor Prefixes

W3C standards are continually evolving. It’s good practice to know which CSS3 properties each browser supports and what the drawbacks are when support is incomplete. Styles like rounded corners, drop shadows and gradients can now be implemented using CSS alone, without the use of images.

Using vendor prefixes will save you a lot of time otherwise spent worrying about whether a browser supports your CSS.

Check out the following list of vendor prefixes.

1. Safari and Chrome (-webkit-)
2. Firefox (-moz-)
3. Opera (-o-)
4. Internet Explorer (-ms-)

As an example, let us use the transition property along with vendor prefixes to target specific browsers.


-webkit-transition: all 4s ease;
-moz-transition: all 4s ease;
-ms-transition: all 4s ease;
-o-transition: all 4s ease;
transition: all 4s ease;

Clear Your Floats

We all know how to float elements to the left or right using the float property. By default, if you float just one element, any non-floated content will simply flow along the side of the floated element.

But that is not the issue here; most often we float divs to the left and right and wrap them in a container or wrapper. Consider the images below.

This image shows what we are trying to achieve.

But instead our divs end up like this image.

This issue can be solved by simply clearing the float using the CSS clear property with a value of both, which means floats on both sides will be cleared. A couple of common ways to do this are sketched below.
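For example, you could add a clearing element after the floated divs, or apply a clearfix rule to the wrapper itself (the class names here are just placeholders):

/* Option 1: an explicit clearing element placed after the floated divs */
.clear {
    clear: both;
}

/* Option 2: a clearfix applied to the wrapper that contains the floats */
.wrapper:after {
    content: "";
    display: table;
    clear: both;
}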

Font Size Measurement

Although there are no fixed rules of thumb about which font size unit to use, I would recommend using ems (em) and percentages (%), since both are relative to the default preference set in the user’s browser.

Most people use pixels or points, which are fixed measurements based on the resolution of the screen.

Regardless of your preference, this How to Size Text in CSS article will help you learn more about sizing text in CSS.
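As a quick sketch of the relative approach, you could set a percentage base on the body and size everything else in ems:

body {
    font-size: 100%; /* respects the user's browser default (usually 16px) */
}

p {
    font-size: 1em; /* same as the body's computed size */
}

h1 {
    font-size: 2em; /* scales along with the user's preference */
}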

Always Test Your Website Using Different Browsers

You might feel lazy about testing your website in different browsers, thinking that if it looks good in the modern browsers it must be fine. Believe me, I’ve been there, but not doing it might give your website visitors a bad experience.

Using an emulator like the Spoon plugin might save you some time, but it is highly recommended that you test your creation directly in the actual browsers themselves.

Code Simply

Coding simply makes the whole workflow easier and faster. What do I mean? Using simple elements to build up your website makes it a lot easier for the browser to read your site.

As an example, instead of using tables or paragraphs to create navigation, you can just use an unordered list to form the navigation, as shown in the sketch below. When creating a responsive site, divs work perfectly instead of tables. Keep it simple, organize your code and name things properly. Believe me, this will save you a lot of time and headaches when you run into problems.
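A minimal sketch of list-based navigation might look like this (the class name is just a placeholder):

<ul class="main-nav">
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
    <li><a href="/contact">Contact</a></li>
</ul>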

Polishing Off

There are more good tips and tricks out there about cross-browser compatibility, but these should give you a solid foundation for creating a firm cross-browser experience.

It’s a good idea to take some time to read your code and check it across different browsers. If you know some more tricks and tips that we’ve not listed here, or you have some comments or suggestions, I would love to hear from you. Please feel free to drop them in the comment section below.

14:00

How to Use New Relic Custom Dashboards & Why You’d Want To

Today we’re going to look at New Relic custom dashboards. Specifically, I will show you three ways that I tend to use custom dashboards:

  • creating an overview dashboard from existing charts
  • creating your own charts from existing data captured by New Relic
  • creating dashboards for your custom metrics

But, before we get into any of that, we must first ask ourselves a question, why should we use custom dashboards at all? New Relic does a pretty good job of presenting your application performance data. Just using the regular UI, you can glean way more information about the performance of your app than you ever could have before you started using New Relic. So, before I show you how to use custom dashboards, I’ll explain why I think anyone using New Relic should look at them sooner rather than later.


Why Use Custom Dashboards at All?

It’s true, most people happily use New Relic without ever looking at the custom dashboard functionality that it provides. It’s not until you’re a fairly advanced user that you may decide to look at custom dashboards and see what they have to offer you. I think this is a shame; playing around with custom dashboards can not only allow you to slice and dice your app performance data in different ways, but can also:

  • teach you quite a bit about how New Relic captures metrics
  • allow you to learn exactly what kind of data gets stored for the metrics that get captured
  • teach you about the limitations of New Relic charts

You can learn all those things by reading the documentation, but playing around with custom dashboards allows us to begin to understand some of these things by example (on a more intuitive level), rather than just knowing them as a bunch of dot points. As is often the case with technology, tinkering with an unrelated area of a tool will sometimes give you more insight and understanding into how the more commonly used areas of the same tool work. Using custom dashboards will make you a more savvy New Relic user, and if you’ve read the other New Relic posts that I’ve written, you’ll know how I feel about knowing your tools.


Creating an Overview Dashboard from Existing Charts

The one custom dashboard I always like to build is what I call ’24 hours at a glance’. I take a bunch of existing charts that I consider important for a single application, lock the time period to the last 24 hours and put them all together on one dashboard. Whenever I am looking at a specific application in New Relic, this will be the first screen I look at to see if there is anything particularly bad that jumps out at me from the last 24 hours. Let’s see how we can build this kind of dashboard.

Firstly, we need to create a new custom dashboard. In the New Relic UI click on Dashboards->Create custom dashboard. On the next screen, we give our dashboard a name (you could use “24 hours at a glance”, or your own name) and pick the grid layout. Essentially, grid layout is a collection of charts all the same size and overview layout is one big chart surrounded by a bunch of smaller charts (we will use overview layout in the next section).

01_create_dashboard

Now we need to select the app for which we want to create our dashboard and then find some relevant charts to add. I like to add the following:

  • server average response time
  • historical server average response time
  • browser average response time
  • error rate
  • throughput
  • historical throughput
  • application CPU usage by host
  • application memory usage by host
  • top five web transactions by wall clock time
  • downtime
  • top countries by throughput
  • top five database operations by wall clock time

Depending on your application you may want to add some others, but I find this gives me a fairly good snapshot of what’s happening with the application and what to investigate if anything is wrong. Let’s find one of these charts and add them to our new dashboard. The server average response time is an easy one, as it’s the big chart in the Overview section of the Monitoring tab for an application. Each chart in the New Relic UI has a little New Relic logo in the bottom right corner, when you hover your mouse over this logo it turns into a plus sign, clicking on the plus will allow you to add this chart to a dashboard:

02_add_chart_to_dashboard

But before we add our chart, we need to change the time window for the New Relic UI to be 24 hours. We need to do this in order to give ourselves the ability to ‘lock’ the chart to the last 24 hours when we actually add it to the dashboard (this is awkward UX in my opinion, but at least we have a way to do what we need):

03_change_time_window

We can now go ahead and add the chart:

04_add_chart_to_dashboard

Don’t forget to tick the Lock to span checkbox. When we now visit our new dashboard, the chart we’ve just added should be there:

05_dashboard_with_one_chart

We can rinse and repeat the above process until we’ve added all the charts that we want. In the end, it should look something like this:

06_full_dashboard

You can click the Edit dashboard button in the top right corner which will let you drag the charts around and arrange them in the order you want. The only thing to note is that you can’t modify the individual charts in any way (for example, you may have wanted to have a more descriptive chart title, but you can’t change it) since they are standard New Relic charts.

The other custom dashboard I always like to build from existing charts is the ‘All applications at a glance’. This is only applicable if you have multiple applications you’re looking after. Here we pick one or two of the most important charts for every relevant application and put them together. It’s usually a safe bet to use the ‘Response Time’ chart from each application. The actual process of putting the dashboards together is the same as described above; you’ll just need to switch applications to get the relevant charts from each. In the end, you should end up with something like this:

07_all_applications

This is the screen I tend to look at first when I log in to New Relic. It may be useful to lock the time of each chart to 24 hours just like we did for our ’24 hours at a glance’ dashboard, but that’s up to you. Of course, this is only relevant if you’re supporting multiple applications. Having said that, if you have multiple pre-production environments for your application (for staging or load testing), you may want to put them all together into a dashboard similar to this one; it may help you catch changes that degrade performance before the code ends up in production.


Creating Your Own Charts from Existing Data

The New Relic UI suffers from some necessary limitations. It needs to be all things to all people, so it can only group together charts and tables that would make sense for all web applications. Most of the time the UI screens will limit you to looking at one transaction at a time and one or two sets of metrics at a time; if you need to access others you have to click around. The good news is that with custom dashboards this limitation no longer applies. We know which transactions are related within our domain, and we also know which metrics are important to us on a per transaction basis. We can build a dashboard that groups together several related transactions with all the important metrics for each and look at it on the one screen.

Let’s say we have a particularly important transaction in our application; it might make sense to have a dashboard where we can see most of the important information about this transaction at a glance. Here on Tuts+ we have a concept of articles (obviously) and articles are pretty important to us, so let’s build a dashboard to keep an eye on them.

Once again we need to create a new dashboard just like before, we’ll call it ‘Article overview’, but this time we’ll use an overview layout. We don’t have to go hunting for charts as we’ll be creating our own custom charts, so click the big button to create the main chart for our dashboard:

08_add_custom_chart

It will ask you whether you want to add a chart or a table; we will add a table later, so for now choose chart. You will see a screen that looks like this:

09_new_chart

The main thing to look at here is the metric that you want to display. When you click inside the metrics text-box it will drop down a list of top level metrics that you can select. The metrics in New Relic are named as prefix/category/label. In the case of a Rails app, the prefix might be Controller or ActiveRecord (if you’re not using Rails, the prefix for transactions will be WebTransactions). For Controller, the category would be the name of the controller and the label will be the action name. If you explore some of the metrics while you’re playing around with your first chart, you will start to get a feel for the kind of metrics you have access to and where to find them. If you don’t see the metrics you expect, make sure that you have the right application selected within New Relic; this always trips me up.

Back to our main chart. The metric we’re after will be tied to our ArticlesController, so its name is Controller/articles/show. Once we’ve picked the metric, the contents of the Value drop-down will change to contain all the values that make sense for this metric. It’s worth once again exploring all the various values and seeing what the resulting chart actually contains. In our case, “Average response time” seems like a good thing to have as our main chart.

At this point, if we give our chart a title and click the preview button we can see what it looks like:

10_articles_preview

This looks OK, but I want the Y axis to be in milliseconds and I want the units on the axis as well. So, let’s drop down the advanced options for the chart and change the number format to ‘To Milliseconds’; we will also set the Y-axis unit label to ‘ms’:

11_filled_out_main_chart_form

Our chart now looks good in the preview. The only thing of note that we haven’t talked about is the Chart click-through drop-down. This essentially allows your chart to be a link to another custom dashboard: when you click on the chart, that dashboard will be displayed. We don’t need this functionality, so we leave the drop-down alone. We’ll now go ahead and save our chart.

12_completed_chart_preview

We can now add the various sub-charts. In our case, I happen to know that Tuts+ has a concept of tutorials (also, obviously) which is closely related to articles, so if I am going to keep an eye on article average response time, it’s probably a good idea to have the tutorial average response time close by as a comparison, so I will create a chart for that. We’ll follow the same steps as above; in the end our dashboard will look like this:

13_article_and_tutorial_dashboard

Hmm, it looks like the average response time for articles is much higher than tutorials, but I happen to know that both of them share a significant amount of code, strange. But it’s also an indication that our custom dashboard is already paying dividends and we haven’t even finished constructing it yet. I could have found this information by looking it up in the regular New Relic UI, but having the charts side by side like this helps to really bring home the fact that there might be an issue.

It might also be good to see where our ArticlesController sits compared to other controllers as far as maximum response time goes; this is a job for a table. We add another chart just like before, but this time pick table instead of chart. To create tables with multiple rows, we need to use wildcards in our metric name. In our case, I will set the metric to Controller/*, which will select all the metrics under the Controller prefix. I will then set the limit text-box to 10, which will do exactly what you expect and cap the number of rows in our table at ten. In the end, we should have something that looks like this, right before we save:

14_table

Our custom dashboard will now be:

15_custom_dashboard_with_table

It seems that ArticlesController#show has the longest maximum response time out of all the controller actions, including TutorialsController#show. This is very curious and I should probably make a note to look into it.

We can continue adding a bunch of other charts such as end user average response time or calls per minute. However, some things you just can’t construct using a custom chart, such as historical throughput or response time. Luckily we can always fall back on finding the charts that we want somewhere else in New Relic and simply adding those ones to our custom dashboard.

The only limitation with using a dashboard which has custom charts is that you have to have the correct app selected within New Relic, otherwise all the custom charts on your dashboard will be blank.


Creating Dashboards for Your Custom Metrics

If you read my last article on custom metrics, you might remember me mentioning that the only way to view the custom metric data that you’ve collected is to create a custom dashboard in New Relic; this is the third reason to use custom dashboards. If you collect a lot of custom metrics, this might be the best reason of all.

On Tuts+ we have a concept of categories (yet again, obviously), and I happen to know that we have a few custom metrics floating around for categories. Let’s see if we can put these on a dashboard and actually get an idea of what’s happening. We’ll create another dashboard and call it ‘Categories Custom Metrics’. All custom metrics in New Relic should live under the Custom prefix, and this is where we find the metrics we’re looking for:

16_custom_metrics

We’ll create a couple of charts, one to see how long building presenters in CategoriesController takes and the other to see how long it takes to get a link hash from the presenters. The key thing to know with custom metrics is what kind of data you’re actually sending to New Relic. In this case I happen to know that we’re measuring time, so I can pick ‘Average value’ as my metric value and set the Number format to milliseconds to get a reasonable looking chart. After creating both charts, our custom dashboard looks like this:

17_dashboard_for_custom_metrics

It looks like getting the link hash from the presenters is very quick and doesn’t fluctuate too much; there is no need to optimize anything here and I can probably stop collecting this metric completely (no need to capture unnecessary data). However, building the presenters takes significantly more time, so we can look into whether it can be optimized. We can also keep an eye on the chart (by looking at the custom dashboard occasionally) to make sure the performance doesn’t degrade as we continue to work on the application.


Conclusion

Custom dashboards are not a panacea. Aside from creating charts for custom metrics, everything custom dashboards offer can also be done through the regular New Relic UI. However, playing with custom dashboards will definitely help you become more of a New Relic power user, with a deeper understanding of how it works under the hood. In addition, the ability to look at your performance metrics in different ways can be an invaluable tool to help you surface potential performance problems before they have a significant impact on your application.

If you have any questions about New Relic custom dashboards, don’t be afraid to leave a comment and I’ll do my best to answer. Also, if you’ve used custom dashboards to good effect in the past, please share any tips you might have, it’s always interesting to see how other people use their tools.

January 24 2014

19:37

SOLID: Part 3 – Liskov Substitution & Interface Segregation Principles

The Single Responsibility (SRP), Open/Closed (OCP), Liskov Substitution, Interface Segregation, and Dependency Inversion. Five agile principles that should guide you every time you write code.

Because both the Liskov Substitution Principle (LSP) and the Interface Segregation Principle (ISP) are quite easy to define and exemplify, in this lesson we will talk about both of them.


Liskov Substitution Principle (LSP)

Child classes should never break the parent class’ type definitions.

The concept of this principle was introduced by Barbara Liskov in a 1987 conference keynote and later published in a paper together with Jeannette Wing in 1994. Their original definition is as follows:

Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.

Later on, with the publication of the SOLID principles by Robert C. Martin in his book Agile Software Development, Principles, Patterns, and Practices and then republished in the C# version of the book Agile Principles, Patterns, and Practices in C#, the definition became known as the Liskov Substitution Principle.

This leads us to the definition given by Robert C. Martin:

Subtypes must be substitutable for their base types.

As simple as that, a subclass should override the parent class’ methods in a way that does not break functionality from a client’s point of view. Here is a simple example to demonstrate the concept.

class Vehicle {

	function startEngine() {
		// Default engine start functionality
	}

	function accelerate() {
		// Default acceleration functionality
	}
}

Given a class Vehicle – it may be abstract – and two implementations:

class Car extends Vehicle {

	function startEngine() {
		$this->engageIgnition();
		parent::startEngine();
	}

	private function engageIgnition() {
		// Ignition procedure
	}

}

class ElectricBus extends Vehicle {

	function accelerate() {
		$this->increaseVoltage();
		$this->connectIndividualEngines();
	}

	private function increaseVoltage() {
		// Electric logic
	}

	private function connectIndividualEngines() {
		// Connection logic
	}

} 

A client class should be able to use either of them, if it can use Vehicle.

class Driver {
	function go(Vehicle $v) {
		$v->startEngine();
		$v->accelerate();
	}
} 
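Under this design, the Driver does not need to know which concrete vehicle it receives; a quick usage sketch based on the classes above:

$driver = new Driver();

// Any Vehicle subtype can be substituted without changing the Driver.
$driver->go(new Car());
$driver->go(new ElectricBus());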

Which leads us to a simple implementation of the Template Method Design Pattern as we used it in the OCP tutorial.

template_method

Based on our previous experience with the Open/Closed Principle, we can conclude that Liskov’s Substitution Principle is in strong relation with OCP. In fact, “a violation of LSP is a latent violation of OCP” (Robert C. Martin), and the Template Method Design Pattern is a classic example of respecting and implementing LSP, which in turn is one of the solutions to respect OCP also.


The Classic Example of LSP Violation

To illustrate this completely, we will go with a classic example because it is highly significant and easily understandable.

class Rectangle {

	private $topLeft;
	private $width;
	private $height;

	public function setHeight($height) {
		$this->height = $height;
	}

	public function getHeight() {
		return $this->height;
	}

	public function setWidth($width) {
		$this->width = $width;
	}

	public function getWidth() {
		return $this->width;
	}

}

We start with a basic geometrical shape, a Rectangle. It is just a simple data object with setters and getters for width and height. Imagine that our application is working and it is already deployed to several clients. Now they need a new feature. They need to be able to manipulate squares.

In real life, in geometry, a square is a particular form of rectangle. So we could try to implement a Square class that extends a Rectangle class. It is frequently said that a child class is a parent class, and this expression also conforms to LSP, at least at first sight.

SquareRect

But is a Square really a Rectangle in programming?

class Square extends Rectangle {

	public function setHeight($value) {
		$this->width = $value;
		$this->height = $value;
	}

	public function setWidth($value) {
		$this->width = $value;
		$this->height = $value;
	}
} 

A square is a rectangle with equal width and height, so we could do a strange implementation like the one in the above example, overriding both setters to set the height as well as the width. But how would that affect client code?

class Client {

	function areaVerifier(Rectangle $r) {
		$r->setWidth(5);
		$r->setHeight(4);

		if($r->area() != 20) {
			throw new Exception('Bad area!');
		}

		return true;
	}

} 

It is conceivable to have a client class that verifies the rectangle’s area and throws an exception if it is wrong.

function area() {
	return $this->width * $this->height;
}

Of course we added the above method to our Rectangle class to provide the area.

class LspTest extends PHPUnit_Framework_TestCase {

	function testRectangleArea() {
		$r = new Rectangle();
		$c = new Client();
		$this->assertTrue($c->areaVerifier($r));
	}

}

And we created a simple test by sending an empty rectangle object to the area verifier, and the test passes. If our Square class is correctly defined, sending it to the Client’s areaVerifier() should not break its functionality. After all, a Square is a Rectangle in every mathematical sense. But is our class?

function testSquareArea() {
	$r = new Square();
	$c = new Client();
	$this->assertTrue($c->areaVerifier($r));
}

Testing it is very easy and it breaks big time. An exception is thrown to us when we run the test above.

PHPUnit 3.7.28 by Sebastian Bergmann.

Exception : Bad area!
#0 /paht/: /.../.../LspTest.php(18): Client->areaVerifier(Object(Square))
#1 [internal function]: LspTest->testSquareArea()

So, our Square class is not a Rectangle after all. It breaks the laws of geometry. It fails and it violates the Liskov Substitution Principle.

I especially love this example because it not only violates LSP, it also demonstrates that object oriented programming is not about mapping real life to objects. Each object in our program must be an abstraction over a concept. If we try to map one-to-one real objects to programmed objects, we will almost always fail.


The Interface Segregation Principle

The Single Responsibility Principle is about actors and high level architecture. The Open/Closed Principle is about class design and feature extensions. The Liskov Substitution Principle is about subtyping and inheritance. The Interface Segregation Principle (ISP) is about business logic to clients communication.

In all modular applications there must be some kind of interface that the client can rely on. These may be actual Interface typed entities or other plain objects implementing design patterns like Facades. It doesn’t matter which solution is used; it always has the same purpose: to communicate to the client code how to use the module. These interfaces can reside between different modules in the same application or project, or between one project serving as a third party library and another project. Again, it doesn’t matter. Communication is communication and clients are clients, regardless of the actual individuals writing the code.

So, how should we define these interfaces? We could think about our module and expose all the functionalities we want it to offer.

hugeInterface

This looks like a good start, a great way to define what we want to implement in our module. Or is it? A start like this will lead to one of two possible implementations:

  • A huge Car or Bus class implementing all the methods on the Vehicle interface. Only the sheer dimensions of such classes should tell us to avoid them at all costs.
  • Or, many small classes like LightsControl, SpeedControl, or RadioCD which are all implementing the whole interface but actually providing something useful only for the parts they implement.

It is obvious that neither solution is acceptable to implement our business logic.

specializedImplementationInterface

We could take another approach: break the interface into pieces, each specialized for an implementation. This helps us use small classes that care only about their own interface. The objects implementing the interfaces will be used by the different types of vehicles, like the car in the image above. The car will use the implementations but will depend on the interfaces. So a schema like the one below may be even more expressive.

carUsingInterface

But this fundamentally changes our perception of the architecture. The Car becomes the client instead of the implementation. We still want to provide our clients with ways to use our whole module, which is a type of vehicle.

oneInterfaceManyClients

Assume we solved the implementation problem and we have stable business logic. The easiest thing to do is to provide a single interface with all the implementations and let the clients, in our case BusStation, HighWay, Driver and so on, use whatever they want from the interface’s implementation. Basically, this shifts the behavior selection responsibility to the clients. You can find this kind of solution in many older applications.

The interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.

However, this solution has its problems. Now all the clients depend on all the methods. Why should a BusStation depend on the state of the lights of the bus, or on the radio channels selected by the driver? It should not. But what if it does? Does it matter? Well, if we think about the Single Responsibility Principle, it is a sister concept to this one. If BusStation depends on many individual implementations, not even used by it, it may require changes whenever any of the individual small implementations change. This is especially true for compiled languages, but we can still see the effect of a LightControl change impacting BusStation. These things should never happen.

Interfaces belong to their clients and not to the implementations. Thus, we should always design them in a way that best suits our clients. Sometimes we can, and sometimes we cannot, know our clients exactly. But when we can, we should break our interfaces into many smaller ones, so they better satisfy the exact needs of our clients.

segregatedInterfaces

Of course, this will lead to some degree of duplication. But remember! Interfaces are just plain function name definitions. There is no implementation of any kind of logic in them. So the duplication is small and manageable.

Then, we have the great advantage of clients depending only on what they actually need and use. In some cases, clients may use and need several interfaces; that is OK, as long as they use all the methods from all the interfaces they depend on.

Another nice trick is that in our business logic, a single class can implement several interfaces if needed, so we can provide a single implementation for all the common methods between the interfaces, as sketched below. The segregated interfaces will also force us to think of our code more from the client’s point of view, which will in turn lead to loose coupling and easy testing. So, not only have we made our code better for our clients, we also made it easier for ourselves to understand, test and implement.
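As a minimal sketch (the interface and class names here are hypothetical, loosely based on the vehicle example above), segregated interfaces might look like this:

interface EngineControl {
    function startEngine();
    function accelerate();
}

interface LightsControl {
    function turnLightsOn();
    function turnLightsOff();
}

// A single business logic class may implement several small interfaces...
class SimpleCar implements EngineControl, LightsControl {
    function startEngine() { /* ignition logic */ }
    function accelerate() { /* acceleration logic */ }
    function turnLightsOn() { /* lights on */ }
    function turnLightsOff() { /* lights off */ }
}

// ...while each client depends only on the interface it actually needs.
class SimpleDriver {
    function go(EngineControl $vehicle) {
        $vehicle->startEngine();
        $vehicle->accelerate();
    }
}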


Final Thoughts

LSP taught us why reality cannot be represented as a one-to-one relation with programmed objects and how subtypes should respect their parents. We also put it in the light of the other principles that we already knew.

ISP teaches us to respect our clients more than we thought necessary. Respecting their needs will make our code better and our lives as programmers easier.

Thank you for your time.

14:36

40 Tutorials And Tools That Will Make You A CSS3 Master

Having a tough time learning CSS? Well, not to worry, as here we are presenting 40 excellent and extremely useful tutorials and tricks, as well as some tools, to make you a CSS3 master. With these resources, you can develop your CSS3 skills and bring them into your work.

Without any further ado, here we are presenting a list of 40 awesome tutorials and tricks to help you become a CSS master, and along with them you will also find some tools to help you with CSS. Here is the full collection after the jump. We hope that you will enjoy this collection. Feel free to share your opinions with us via the comment section below. Enjoy!

Tutorials

Make a Stopwatch Using CSS3

This tutorial focuses on step based CSS3 keyframe animation, along with the usage of the animation-play-state property to start/stop/reset the stopwatch.

Create a flat bubble navigation

In this tutorial I’m going to show you how to create a flat bubble navigation with HTML and CSS3.

CSS3 Shapes

Here you’ll find a range of shapes, all coded with just pure CSS3. Unfortunately not all shapes will render correctly in all browsers; currently only web browsers that support CSS3 will produce the correct geometric shapes.

Expanding Search Bar Deconstructed

A tutorial on how to create a mobile-friendly and responsive expanding search bar.

Caption Hover Effects

A tutorial on how to create some subtle and modern caption hover effects.

Using CSS3 Pseudo-Classes And Transitions

Hyperlinks and buttons are a functional yet generally boring aspect of a website or web application. They tend to provide very little feedback when you interact with them. This doesn’t have to be the case; it’s very easy to add some simple styling to let the user know that they have either interacted with or can interact with an element.

Using CSS3 To Provide Smooth Resize Effects

Ever noticed if you resize your browser window with Gmail open (or Asana or a host of other sites) the various elements on screen resize automatically in a smooth animation? This can be done using JavaScript or jQuery, but can also be achieved using CSS3 transitions and @media selectors.

Animating CSS3 Gradients

In this tutorial we will learn about animating CSS3 Gradients.

CSS3 Radial Gradients

Gradients are much-used on web sites: if you want to liven up pretty much any UI feature (buttons, panels, headers, etc.) you can use a gradient, although you should use them sparingly to avoid the “web site christmas tree effect”™. Traditionally we used CSS background images to add gradients to our UIs, and they worked ok, but they were rather inflexible. You’d have to go to your image editor and make changes to the image file every time you wanted to change the colours, size or direction of the gradient.

How To Create Gmail Logo With CSS3

In this post, I’m going to show you how to create not one, but two variations of Gmail logo using just CSS3.

Code an Awesome Animated Download Button

Follow along as we create a simple and fun download button using some fancy CSS3.

Mastering CSS Gradients in Under an Hour

If you’re just curious about how to use CSS Gradients, this is the place for you. We’ll start with the basics of syntax to very advanced effects with lots of tips and examples.

Semantic CSS With Intelligent Selectors

CSS Technique

Absolute Horizontal And Vertical Centering In CSS.

CSS Layouts

Designing CSS Layouts With Flexbox Is As Easy As Pie.

Building A Circular Navigation

Natural Language Form With Custom Input Elements

An experimental form that uses natural language instead of the usual form display. Values are entered using custom input elements.

Web Audio Stylophone

The mighty stylophone. One of the greatest musical instruments ever created. Used by the likes of David Bowie, Kraftwerk and pretty much no-one else inbetween, I’ll show you how to make this fantastic invention using the Web Audio API.

CSS3 Cookbook

Below you’ll find seven fun and attractive CSS tricks that you can grab, insert right into your own projects, and customize at will. Keep in mind that since this stuff is still cutting edge, older browsers won’t support most of it. I’ve tried to ensure graceful degradation where possible so that you can provide a working experience to all users and a better experience to those using WebKit.

CSS Transitions

If you haven’t used transitions before, here’s a brief introduction. On the element you want to have animate, add the following CSS:

CSS3 Navigation Menu

It’s been a while since I posted an update as I’ve been working a lot, so I think it’s time for another CSS3 navigation menu tutorial. The tutorial will walk you through the process of structuring a navigation menu in HTML. Once the HTML is complete you will use Photoshop to create a textured background, and then the menu will be brought to life using CSS.

Google Nexus Website Menu

A tutorial on how to re-create the slide out sidebar menu that can be seen on the Google Nexus 7 website.

Height Equals Width With Pure CSS

CSS3 And The Nth-Child

How many times have you seen a piece of JavaScript or PHP code whose only purpose is to add a specific class to the first, alternate or last element of a list or row of a table? With CSS3 you can easily get rid of all of that logic and replace it with clean and simple CSS selectors. This will help tidy up your code and, depending on what exactly the code is doing, reduce CPU load by removing unnecessary recursions (this is of course a minimal gain, but every little bit counts).

How To Edit CSS Dotted Outline

By default, when an anchor tag element is in active or focus state, a dotted line appears around it, as shown in the image below.

Tools

Same Gradient Generator

The coveted CSS3 Gradient Generator has undergone a redesign and moved to its new home within the CSS3 Factory. It has maintained all of its tools and functions but looks even better now. The redesign enhances the generator’s usability and I will be adding even more advanced functions soon.

Border Radius

CSS border radius generator for lazy people.

CSS3.0 Maker

CSS Maker is a free tool to experiment with CSS properties and values and generate a simple stylesheet for your site.

CSS3 Generators

A fully customizable CSS3 generator to fit your needs. Now with older browser and Internet Explorer support!

HTML5 & CSS3 Support

CSS3 PIE

PIE makes Internet Explorer 6-9 capable of rendering several of the most useful CSS3 decoration features.

Css Sprit

Please choose and upload the image files you want to use in your CSS sprite and click “Create CSS Sprite”. The application will join all your image files into a single file and will generate the corresponding CSS and HTML code, along with the rollover effect if any is chosen.

Css3 Button

Css Arrow Please

Css Trick Button Maker

Layer Styles

Just like your favorite graphics editor, but in your browser. And it creates CSS.

Gradients

Spritebox

Spritebox is a WYSIWYG tool to help web designers quickly and easily create CSS classes and IDs from a single sprite image. It is based on the principle of using the background-position property to align areas of a sprite image into block elements of a web page. It was made using a combination of jQuery, CSS3 and HTML5, and is totally free to use.

Fontdragr

A revolutionary way to test custom fonts in the browser. No coding, no uploading, just drag and drop.

CSS3 Cheat Sheet

Sizzle

A pure-JavaScript CSS selector engine designed to be easily dropped in to a host library.

HTML 5 Visual Cheat Sheet

HTML 5 Visual Cheat Sheet is a useful cheat sheet for web designers and developers, designed by me. This cheat sheet is essentially a simple visual grid with a list of all HTML tags and their related attributes supported by HTML versions 4.01 and/or 5. The simple visual style I used to design this sheet allows you to find everything you are looking for at a glance.

January 23 2014

19:41

Validation and Exception Handling: From the UI to the Backend

Sooner or later in your programming career you will be faced with the dilemma of validation and exception handling. This was the case with me and my team as well. A couple of years or so ago we reached a point where we had to take architectural action to accommodate all the exceptional cases our quite large software project needed to handle. Below is a list of practices we came to value and apply when it comes to validation and exception handling.


Validation vs. Exception Handling

When we started discussing our problem, one thing surfaced very quickly. What is validation and what is exception handling? For example, in a user registration form we have some rules for the password (it must contain both numbers and letters). If the user enters only letters, is that a validation issue or an exception? Should the UI validate that, or just pass it to the backend and catch any exceptions that may be thrown?

We reached a common conclusion that validation refers to rules defined by the system and verified against user provided data. A validation should not care about how the business logic works, or how the system for that matter works. For example, our operating system may expect, without any protests, a password composed of plain letters. However we want to enforce a combination of letters and numbers. This is a case for validation, a rule we want to impose.

On the other hand, exceptions are cases where our system may function in an unpredicted way, wrongly, or not at all, if some specific data is provided in the wrong format. For example, in the above example, if the username already exists on the system, that is a case for an exception. Our business logic should be able to throw the appropriate exception, and the UI should catch and handle it so that the user will see a nice message.


Validating in the User Interface

Now that we made clear what our goals are, let’s see some examples based on the same user registration form idea.

Validating in JavaScript

To most of today’s browsers, JavaScript is second nature. There is almost no webpage without some degree of JavaScript in it. One good practice is to validate some basic things in JavaScript.

Let’s say we have a simple user registration form in index.php, as described below.

<!DOCTYPE html>
<html>
	<head>
		<title>User Registration</title>
		<meta charset="UTF-8">
	</head>
	<body>
		<h3>Register new account</h3>
		<form>
			Username:
			<br/>
			<input type="text" />
			<br/>
			Password:
			<br/>
			<input type="password" />
			<br/>
			Confirm:
			<br/>
			<input type="password" />
			<br/>
			<input type="submit" name="register" value="Register">
		</form>
	</body>
</html>

This will output something similar to the image below:

RegistrationForm

Every such form should validate that the text entered in the two password fields is equal. Obviously this is to ensure the user does not make a mistake when typing in his or her password. With JavaScript, doing the validation is quite simple.

First we need to update a little bit of our HTML code.

<form onsubmit="return validatePasswords(this);">
	Username:
	<br/>
	<input type="text" />
	<br/>
	Password:
	<br/>
	<input type="password" name="password"/>
	<br/>
	Confirm:
	<br/>
	<input type="password" name="confirm"/>
	<br/>
	<input type="submit" name="register" value="Register">
</form>

We added names to the password input fields so we can identify them. Then we specified that on submit the form should return the result of a function called validatePasswords(). This function is the JavaScript we’ll write. Simple scripts like this can be kept in the HTML file; other, more sophisticated ones should go in their own JavaScript files.

<script>
	function validatePasswords(form) {
		if (form.password.value !== form.confirm.value) {
			alert("Passwords do not match");
			return false;
		}
		return true;
	}

</script>

The only thing we do here is to compare the values of the two input fields named “password” and “confirm“. We can reference the form by the parameter we send in when calling the function. We used “this” in the form’s onsubmit attribute, so the form itself is sent to the function.

When the values are the same, true will be returned and the form will be submitted, otherwise an alert message will be shown telling the user the passwords do not match.

PasswordDoNotMatchAlert

HTML5 Validations

While we can use JavaScript to validate most of our inputs, there are cases when we want to take an easier path. Some degree of input validation is available in HTML5, and most browsers are happy to apply it. Using HTML5 validation is simpler in some cases, though it offers less flexibility.

<head>
	<title>User Registration</title>
	<meta charset="UTF-8">
	<style>
		input {
			width: 200px;
		}
		input:required:valid {
			border-color: mediumspringgreen;
		}
		input:required:invalid {
			border-color: lightcoral;
		}
	</style>
</head>
<body>
	<h3>Register new account</h3>
	<form onsubmit="return validatePasswords(this);">
		Username:
		<br/>
		<input type="text" name="userName" required/>
		<br/>
		Password:
		<br/>
		<input type="password" name="password"/>
		<br/>
		Confirm:
		<br/>
		<input type="password" name="confirm"/>
		<br/>
		Email Address:
		<br/>
		<input type="email" name="email" required placeholder="A Valid Email Address"/>
		<br/>
		Website:
		<br/>
		<input type="url" name="website" required pattern="https?://.+"/>
		<br/>
		<input type="submit" name="register" value="Register">
	</form>
</body>

To demonstrate several validation cases, we extended our form a little bit. We added an email address and a website also. HTML validations were set on three fields.

  • The text input username is just simply required. It will validate with any string longer than zero characters.
  • The email address field is of type “email” and when we specify the “required” attribute, browsers will apply a validation to the field.
  • Finally, the website field is of type “url“. We also specified a “pattern” attribute where you can write your regular expressions that validate the required fields.

To make the user aware of the state of the fields, we also used a little bit of CSS to color the borders of the inputs in red or green, depending on the state of the required validation.

HTMLValidations

The problem with HTML validations is that different browsers behave differently when you try to submit the form. Some browsers will just apply the CSS to inform the users; others will prevent the submission of the form altogether. I recommend that you test your HTML validations thoroughly in different browsers and, if needed, also provide a JavaScript fallback for those browsers that are not smart enough.


Validating in Models

By now many people know about Robert C. Martin’s clean architecture proposal, in which the MVC framework is only for presentation and not for business logic.

HighLevelDesign

Essentially, your business logic should reside in a separate, well isolated place, organized to reflect the architecture of your application, while the framework’s views and controllers should control the delivery of the content to the user. Models could be dropped altogether or, if needed, used only to perform delivery related operations. One such operation is validation. Most frameworks have great validation features. It would be a shame not to put your models to work and do a little validation there.

We will not install several MVC web frameworks to demonstrate how to validate our previous forms, but here are two approximate solutions in Laravel and CakePHP.

Validation in a Laravel Model

Laravel is designed so that you have more access to validation in the controller, where you also have direct access to the input from the user. The built-in validator kind of prefers to be used there. However, there are suggestions on the Internet that validating in models is still a good thing to do in Laravel. A complete example and solution by Jeffrey Way can be found in his GitHub repository.

If you prefer to write your own solution, you could do something similar to the model below.

class UserACL extends Eloquent {

    private $rules = array(
        'userName' => 'required|alpha|min:5',
        'password' => 'required|min:6',
        'confirm'  => 'required|min:6',
        'email'    => 'required|email',
        'website'  => 'url'
    );

    private $errors;

    public function validate($data) {
        $validator = Validator::make($data, $this->rules);

        if ($validator->fails()) {
            // errors() returns the validator's MessageBag with all failure messages
            $this->errors = $validator->errors();
            return false;
        }
        return true;
    }

    public function errors() {
        return $this->errors;
    }
}

You can use this from your controller by simply creating a UserACL object and calling validate() on it, as sketched below. You will probably have a “register” method on this model as well, and register() will just delegate the already validated data to your business logic.
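A minimal controller sketch, assuming a Laravel 4 style controller and the UserACL model above (the register() method and the redirect targets are just placeholders):

class RegistrationController extends BaseController {

    public function store() {
        $userACL = new UserACL();

        if (!$userACL->validate(Input::all())) {
            // Send the user back to the form together with the validation messages.
            return Redirect::back()->withErrors($userACL->errors())->withInput();
        }

        // Validation passed; delegate the sanitized data to the business logic.
        $userACL->register(Input::all());

        return Redirect::to('/');
    }
}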

Validation in a CakePHP Model

CakePHP promotes validation in models as well. It has extensive validation functionality at the model level. Here is roughly how validation for our form would look in CakePHP.

class UserACL extends AppModel {

    public $validate = [
		'userName' => [
			'rule' => ['minLength', 5],
			'required' => true,
			'allowEmpty' => false,
			'on' => 'create',
			'message' => 'User name must be at least 5 characters long.'
		],
        'password' => [
            'rule'    => ['equalsTo', 'confirm'],
            'message' => 'The two passwords do not match. Please re-enter them.'
        ]
    ];

    public function equalsTo($checkedField, $otherField = null) {
		$value = $this->getFieldValue($checkedField);
        return $value === $this->data[$this->name][$otherField];
    }

	private function getFieldValue($checkedField) {
		// CakePHP passes the checked field as array('fieldName' => 'value'),
		// so we simply extract the value.
		return array_values($checkedField)[0];
	}
}

We only exemplified the rules partially; it is enough to highlight the power of validation in the model. CakePHP is particularly good at this. It has a great number of built-in validation functions, like “minLength” in the example, and various ways to provide feedback to the user. Even more, concepts like “required” or “allowEmpty” are not actually validation rules. Cake will look at these when generating your view and put HTML validations on fields marked with these parameters. However, rules are great and can easily be extended by simply creating methods on the model class, as we did to compare the two password fields. Finally, you can always specify the message you want to send to the views in case of validation failure. More on CakePHP validation can be found in the cookbook.

Validation at the model level has its advantages in general. Each framework provides easy access to the input fields and creates the mechanism to notify the user in case of validation failure. There is no need for try-catch statements or any other sophisticated steps. Validation on the server side also ensures that the data gets validated, no matter what. The user can no longer trick our software the way they can with HTML or JavaScript validation. Of course, each server side validation comes at the cost of a network round-trip and computing power on the provider’s side instead of the client’s side.


Throwing Exceptions from the Business Logic

The final step in checking data before committing it to the system is at the level of our business logic. Information that reaches this part of the system should be sanitized enough to be usable. The business logic should only check for cases that are critical for it. For example, adding a user that already exists is a case where we throw an exception. Checking that the username is at least five characters long should not happen at this level. We can safely assume that such limitations were enforced at higher levels.

On the other hand, comparing the two passwords is a matter for discussion. For example, if we just encrypt and save the password alongside the user in a database, we could drop the check and assume previous layers made sure the passwords are equal. However, if we create a real user on the operating system using an API or a CLI tool that actually requires a username, a password, and a password confirmation, we may want to take the second entry as well and send it to the CLI tool. Let it re-validate whether the passwords match and be ready to throw an exception if they do not. This way we modeled our business logic to match how the real operating system behaves.

Throwing Exceptions from PHP

Throwing exceptions from PHP is very easy. Let’s create our user access control class, and demonstrate how to implement a user addition functionality.

class UserControlTest extends PHPUnit_Framework_TestCase {
	function testBehavior() {
		$this->assertTrue(true);
	}
}

I always like to start with something simple that gets me going. Creating a stupid test is a great way to do so. It also forces me to think about what I want to implement. A test named UserControlTest means I thought I would need a UserControl class to implement my method.

require_once __DIR__ . '/../UserControl.php';
class UserControlTest extends PHPUnit_Framework_TestCase {

	/**
	 * @expectedException Exception
	 * @expectedExceptionMessage User can not be empty
	 */
	function testEmptyUsernameWillThrowException() {
		$userControl = new UserControl();
		$userControl->add('');
	}

}

The next test to write is a degenerate case. We will not test for a specific username length, but we want to make sure we do not add an empty user. It is sometimes easy to lose track of a variable’s content on its way from the view to the business logic, across all those layers of our application. This code will obviously fail, because we do not have the class yet.

PHP Warning:  require_once([long-path-here]/Test/../UserControl.php):
failed to open stream: No such file or directory in
[long-path-here]/Test/UserControlTest.php on line 2

Let’s create the class and run our tests. Now we have another problem.

PHP Fatal error:  Call to undefined method UserControl::add()

But we can fix that, too, in just a couple of seconds.

class UserControl {

	public function add($username) {

	}

}

Now we can have a nice test failure telling us the whole story of our code.

1) UserControlTest::testEmptyUsernameWillThrowException
Failed asserting that exception of type "Exception" is thrown.

Finally we can do some actual coding.

public function add($username) {
	if(!$username) {
		throw new Exception();
	}
}

That makes the expectation for the exception pass, but without specifying a message the test will still fail.

1) UserControlTest::testEmptyUsernameWillThrowException
Failed asserting that exception message '' contains 'User can not be empty'.

Time to write the exception’s message:

public function add($username) {
	if(!$username) {
		throw new Exception('User can not be empty!');
	}
}

Now, that makes our test pass. As you can observe, PHPUnit verifies that the expected exception message is contained in the actually thrown exception. This is useful because it allows us to dynamically construct messages and only check for the stable part. A common example is when you throw an error with a base text and specify the reason for that exception at the end. Reasons are usually provided by third party libraries or applications.

/**
 * @expectedException Exception
 * @expectedExceptionMessage Cannot add user George
 */
function testWillNotAddAnAlreadyExistingUser() {
	$command = \Mockery::mock('SystemCommand');
	$command->shouldReceive('execute')->once()->with('adduser George')->andReturn(false);
	$command->shouldReceive('getFailureMessage')->once()->andReturn('User already exists on the system.');
	$userControl = new UserControl($command);
	$userControl->add('George');
}

Throwing errors on duplicate users allows us to explore this message construction a step further. The test above creates a mock which simulates a system command; it will fail and, on request, return a nice failure message. We will inject this command into the UserControl class for internal use.

class UserControl {

	private $systemCommand;

	public function __construct(SystemCommand $systemCommand = null) {
		$this->systemCommand = $systemCommand ? : new SystemCommand();
	}

	public function add($username) {
		if (!$username) {
			throw new Exception('User can not be empty!');
		}
	}

}

class SystemCommand {

}

Injecting a SystemCommand instance was quite easy. We also created a SystemCommand class inside our test just to avoid syntax problems. We won’t implement it; that is beyond the scope of this tutorial. However, we now have another test failure message.

1) UserControlTest::testWillNotAddAnAlreadyExistingUser
Failed asserting that exception of type "Exception" is thrown.

Yep. We are not throwing any exceptions. The logic to call the system command and try to add the user is missing.

public function add($username) {
	if (!$username) {
		throw new Exception('User can not be empty!');
	}

	if(!$this->systemCommand->execute(sprintf('adduser %s', $username))) {
		throw new Exception(
				sprintf('Cannot add user %s. Reason: %s',
						$username,
						$this->systemCommand->getFailureMessage()
				)
			);
	}
}

Those modifications to the add() method do the trick. We try to execute our command on the system no matter what, and if the system says it cannot add the user, for whatever reason, we throw an exception. The exception’s message is partly hard-coded, with the user’s name attached and the reason from the system command concatenated at the end. As you can see, this code makes our test pass.

Custom Exceptions

Throwing exceptions with different messages is enough in most cases. However, when you have a more complex system you also need to catch these exceptions and take different actions based on them. Analyzing an exception’s message and acting solely on that can lead to some annoying problems. First, strings belong to the UI, to presentation, and they are volatile by nature; basing logic on ever-changing strings will lead to a dependency-management nightmare. Second, calling getMessage() on every caught exception is also an awkward way to decide what to do next.

With all these in mind, creating our own exceptions is the next logical step to take.

/**
 * @expectedException ExceptionCannotAddUser
 * @expectedExceptionMessage Cannot add user George
 */
function testWillNotAddAnAlreadyExistingUser() {
	$command = \Mockery::mock('SystemCommand');
	$command->shouldReceive('execute')->once()->with('adduser George')->andReturn(false);
	$command->shouldReceive('getFailureMessage')->once()->andReturn('User already exists on the system.');
	$userControl = new UserControl($command);
	$userControl->add('George');
}

We modified our test to expect our own custom exception, ExceptionCannotAddUser. The rest of the test is unchanged.

class ExceptionCannotAddUser extends Exception {

	public function __construct($userName, $reason) {
		$message = sprintf(
			'Cannot add user %s. Reason: %s',
			$userName, $reason
		);
		// The second argument is the exception code (13 here); the third is the previous exception.
		parent::__construct($message, 13, null);
	}
}

The class that implements our custom exception is like any other class, but it has to extend Exception. Using custom exceptions also gives us a great place to do all the presentation-related string manipulation. By moving the concatenation here, we remove presentation concerns from the business logic and respect the single responsibility principle.

public function add($username) {
	if (!$username) {
		throw new Exception('User can not be empty!');
	}

	if(!$this->systemCommand->execute(sprintf('adduser %s', $username))) {
		throw new ExceptionCannotAddUser($username, $this->systemCommand->getFailureMessage());
	}
}

Throwing our own exception is just a matter of swapping the old throw statement for the new one and passing in two parameters instead of composing the message here. Of course, all tests still pass.

PHPUnit 3.7.28 by Sebastian Bergmann.

..

Time: 18 ms, Memory: 3.00Mb

OK (2 tests, 4 assertions)

Done.

Catching Exceptions in Your MVC

Exceptions must be caught at some point, unless you want your users to see them as they are. If you are using an MVC framework, you will probably want to catch exceptions in the controller or model. After the exception is caught, it is transformed into a message for the user and rendered inside your view. A common way to achieve this is to create a “tryAction($action)” method in your application’s base controller or model and always call it with the current action. In that method you can do the catching logic and generate a user-friendly message to suit your framework, along the lines of the sketch below.
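
This is only a rough, hypothetical illustration of that idea, not code from the project above; the tryAction() and renderError() names, and the exact wiring to your framework’s view layer, are assumptions you would adapt:

class BaseController {

	protected function tryAction(callable $action) {
		try {
			return $action();
		} catch (ExceptionCannotAddUser $exception) {
			// A known business exception: its message is already user-friendly.
			return $this->renderError($exception->getMessage());
		} catch (Exception $exception) {
			// Anything unexpected: show a generic message instead of the raw error.
			return $this->renderError('Something went wrong. Please try again.');
		}
	}

	protected function renderError($message) {
		// Hypothetical helper: hand the message to your framework's view layer here.
		return $message;
	}

}

A concrete action could then wrap its risky code in a closure, for example: return $this->tryAction(function () use ($userControl) { return $userControl->add($_POST['username']); });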

If you do not use a web framework, or a web interface for that matter, your presentation layer should take care of catching and transforming these exceptions.

If you develop a library, catching your exceptions will be the responsibility of your clients.


Final Thoughts

That’s it. We traversed all the layers of our application. We validated in JavaScript, in HTML and in our models. We threw and caught exceptions from our business logic and even created our own custom exceptions. This approach to validation and exception handling can be applied to projects small and large without any serious problems. However, if your validation logic becomes very complex and different parts of your project use overlapping pieces of that logic, consider extracting all validations that can be done at a specific level into a validation service or validation provider. These levels may include, but are not limited to, a JavaScript validator, a backend PHP validator, a third-party communication validator and so on.
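
To make that last suggestion a bit more concrete, here is a minimal, hypothetical sketch of what such an extracted validator could look like; the interface and class names are illustrative and not part of the code above:

interface Validator {

	/** @throws Exception when the given value is not acceptable. */
	public function validate($value);

}

class NotEmptyUsernameValidator implements Validator {

	public function validate($value) {
		if (!$value) {
			throw new Exception('User can not be empty!');
		}
	}

}

UserControl::add() could then delegate its first check to such a validator, and the same class could be reused anywhere else a username needs to be checked.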

Thank you for reading. Have a nice day.

January 22 2014

15:00

Get ‘Layed’: What You Need to Know about Photoshop Layers

Photoshop is revered as one of the best pieces of digital multimedia software, and there is always something new to learn about it, so there is no question that you need to know about Photoshop layers. The software covers just about every design need there is, and artists reach for it because of its flexibility: it can be used by any artist, whatever their design inclinations. Most photographers use it to edit their photos, some digital artists use Photoshop (if not Illustrator) for their designs, and most web designers use it to build the visuals of their layouts. Truly, with all these uses, Adobe’s flagship software has been the one-man wrecking machine that solves most of our digital problems.

photoshop-layers-02

photo from Wallpaper Stock.net

Having said that Photoshop is must-have software, it is equally a must to master it, and that entails a great deal of effort. With all the tools, options and features of Photoshop, which is still being developed to this day, it is difficult to know where to start training with the software. But never fret, you have us. What is 1stwebdesigners for, right?

photoshop-layers-01

Photo from Deviantart

One of the probably gazillion things you need to know about Photoshop is its layers. Mastering this feature is especially pertinent for a web designer, as a web design is most probably created in Photoshop before being coded.

Layers are free and independent elements in a Photoshop workspace. They can be scaled, recolored, repositioned, given effects, made more or less opaque and more. Layers often represent separate elements that a designer wants to tweak or customize. In the realm of web design, layers play an important role: they are used to represent the design elements of a webpage, such as a text box, an image, a background, the content and a lot more.

photoshop-layers-03

Photo from Designcontest.com

Most Photoshop users will agree that one of the key features of software like Photoshop is its layering options. Here are some of the ones you need to know.

Fill and Opacity

Fill and opacity are two different things. Although they are commonly confused as the same term and often joined into a single one (i.e. Fill-Opacity), they are completely different from one another.

photoshop-layers-04

Fill is a layer option that controls what percentage of the layer’s background color is seen. This option is commonly used for shapes.

photoshop-layers-03

On the other hand, opacity is the option that dictates how transparent the whole layer is relative to the other layers in the design.

The difference between the two is that the Fill option does not affect layer styles. For example, add a stroke to the image: when you dial down the Fill percentage, the stroke is still visible, whereas if you lower the Opacity percentage, the whole image, layer styles included, becomes transparent.

photoshop-layers-05

Grouping

This is one of the most rudimentary skills a Photoshop user should have when it comes to layers. Grouping does not really change the design itself, but its importance should never be overlooked: grouping helps you keep layers organized. Along with naming layers properly, which is also very useful, grouping improves your workflow because you don’t need to search the whole design for a single pixel. Just find the group it belongs to and voila! You have it!

photoshop-layers-09

photo from elated.com

Masking

Masking is the act of hiding particular portions of a layer to reveal the layers below it. This technique is a very important skill in a designer’s arsenal, as it is a good way of combining multiple photos into one seamless image.

A surprisingly large number of masks can be used on a single layer. According to TutsPlus.com, you can use up to 11 pixel masks and 11 vector masks!

If you want a good tutorial, better visit this.

photoshop-layers-06

Photo from Photoshopsupport

photoshop-layers-07

Photo from Amazon news

Selecting

Suppose you’re slicing up particular photos in a design. You have a few options to choose from: click the layer you want to copy or cut, press Ctrl+A and paste it wherever you want, or use the Lasso or Quick Selection tool. But you have a third option: quick-select the layer’s contents by Ctrl+clicking the desired layer’s thumbnail.

photoshop-layers-11

Layer Styles

Layer styles give you the liberty to select specific options to enhance the design. There are some options to tinker on:

  • General Blending – here you can choose the Blending Mode, which controls how the layer interacts with the layers beneath it, whether to complement them, contrast with them or both. There are also advanced options that let you reduce the opacity of the layer as a whole or per channel.
  • Bevel and Emboss – this option gives your layers a 3D feel, as they tend to pop out of the design. Bevel and Emboss adds depth, making the design more ‘realistic.’ Here you can select contouring options such as inner bevel, emboss and more.
  • Stroke – one of the most used options in the menu. Stroke adds an outline to the main image; you can choose the stroke’s opacity, color and blending options.
  • Inner Shadow – gives you a subtle darkening inside the edges of the layer, which also provides depth. (Drop Shadow is its outward counterpart.)
  • Inner Glow – adds a soft, feathered glow just inside the outline of the image. (Outer Glow is its outward counterpart.)
  • Color Overlay – fills the whole of the layer’s contents with a chosen color, covering everything inside it.

photoshop-layers-10

Conclusion

Layers are one nasty piece of business to study; it takes a lot of time and patience. With their importance highlighted, it is equally important to study them well. The key to this lesson is to try using them and to make mistakes, because that is where we really learn: from our mistakes. So the next time you’re dealing with layers, stand up and be proud that once in your life, you got ‘layed’ here at 1stwebdesigner. Again, pun intended.

January 21 2014

19:50

Working With LESS and the Chrome DevTools

This is a complete tutorial to using LESS with Chrome’s DevTools. If you’ve already used Sass with Chrome’s DevTools, you’ll most likely already be familiar with the concepts introduced here.

The TL;DR

  • LESS has very recently added support for Source Maps, so this is new and exciting!
  • DevTools mapping means you can view LESS styles for all your relevant CSS.
  • Edit LESS source files within DevTools and have them save to disk.
  • Source Map adoption is improving with Sass, CoffeeScript, UglifyJS2 and more.

Introduction

Not too long ago, the Chrome DevTools added a number of IDE-like features (and continues to do so), one of which is the ability to understand the mapping between a compiled file and its corresponding source file. This is made possible thanks to Source Maps.

image_0-2

What This Tutorial Will Cover

  • How to generate Source Maps from LESS files.
  • Debugging LESS code through the DevTools.
  • Authoring LESS code in the DevTools and seeing changes immediately via Workspaces and Grunt.

Everything mentioned here is available in Chrome Canary.


Configuring LESS for Source Maps

First things first: install the latest version of LESS (1.5.1 at the time of this writing) through npm (Source Maps arrived in 1.5.0):

$ npm install -g less
$ lessc -v
lessc 1.5.1 (LESS Compiler) [JavaScript]
image_1

At this point, you can run the LESS compiler in your terminal using the lessc command.

image_2

Create a new LESS file; for demonstration purposes, it’ll be kept short and simple.

@color: #4D926F;
#header {
  color: @color;
}

To turn the LESS file into a CSS file, it’s a simple case of running:

lessc header.less header.css
image_3

We now need a Source Map file. Create it with the --source-map flag.

lessc --source-map=header.map header.less header.css

Using the LESS code shown above, running this command produces a Source Map file with the following contents:

{"version":3,"file":"header.css","sources":["header.less"],"names":[],"mappings":"AAEQ;EACP,cAAA"}

Using DevTools to Map to LESS Source Files

In a world without Source Maps, DevTools would display styles originating from the compiled CSS, and line numbers would not be useful due to the mismatch between the compiled CSS and the LESS source. With Source Maps, when inspecting a DOM node on the page, DevTools automatically shows the styles originating from the LESS source file.

A web page which references the previously mentioned header.css can now be inspected in a more meaningful way.

image_4

While holding Command (Ctrl on Windows), click any property, value or selector to jump to the line of code in the corresponding LESS source file within the Sources Panel.


Authoring Workflow With DevTools & LESS

Viewing LESS code in the DevTools is certainly useful, but integrating an authoring workflow can speed up your iteration cycle. The next step is to tell DevTools how the resources on a web page map to files on a file system. Enter: Workspaces.

Note: Workspaces are suitable for many projects, not just those using CSS preprocessors.

Workspaces

You might already be familiar with the Sources panel from an editing perspective. The general idea is that you have a file navigator in the sidebar where each file is typically a stylesheet, script or HTML resource that the web page has loaded.

image_6

Clicking on a resource displays its contents in the main panel. You may notice the similarity to the Resources panel; however, there is at least one major difference: the Sources panel contains a live editor. Changes you make to stylesheets are applied to the page instantly, and changes you make to scripts are injected back into the V8 engine and evaluated immediately. Not only does this work for remotely hosted files, but also for local ones, with the added benefit of persisting your changes to a file.

Note: To make use of Workspaces, the following steps are only required once per project.

Step 1.

Open up a local webpage and add its corresponding folder on your file system to the workspace by Right-Clicking in the Sources panel sidebar and selecting Add folder to Workspace.

image_7

Step 2.

Allow DevTools access to the folder you’ve just added.

image_8

Step 3.

Your newly added folder will appear in the sidebar navigation.

image_9

Right-click on a file within a network-resource folder in the Sources Panel (make sure it has an obvious mapping to a file on your file system) and select Map to File System Resource.

image_10

Step 4.

The first suggestion provided by DevTools is the correct one. It has noticed that the file on my file system (/Users/.../bootstrap.css) has the same name as a network resource file (http://localhost/.../bootstrap.css). Verify the mapping by selecting the correct resource.

image_11

DevTools now understands the mapping between file system resources and network resources. When you Cmd/Ctrl-click a selector in the Styles pane and end up in the Sources Panel, you are shown your actual LESS source file. You can edit a file in the Sources panel, and those changes will persist to disk when you press Cmd/Ctrl+S.

We’ve come all this way, so let’s complete this workflow by using a Grunt watch task to watch for changes made to the LESS files and then automatically compile our CSS and make a corresponding Source Map file.

Using Grunt to Watch & Compile LESS

With Workspaces set up, configure Grunt (or another tool of your choice) to watch for changes to LESS source files and compile a new CSS file (with a Source Map). DevTools will pick up this change automatically and apply any new styles to the page.

Note: Enable the Auto-reload generated CSS flag in the Settings panel to use this workflow.

image_12

Here is an outline of the automated steps which will occur:

  1. You save a change to a LESS file via DevTools.
  2. A watch task monitors LESS files for changes and then runs a LESS task.
  3. The LESS task compiles a new CSS file plus a Source Map file.
  4. DevTools injects the new CSS file onto the current page without a page refresh.

Here’s a simplified Gruntfile:

module.exports = function(grunt) {
    'use strict';

    require('matchdep').filterDev('grunt-!(cli)').forEach(grunt.loadNpmTasks);

    grunt.initConfig({
        less: {
            dev: {
                options: {
                    sourceMap: true,
                    sourceMapFilename: 'bootstrap.map'
                },
                files: {
                    'less/bootstrap.css': 'less/bootstrap.less'
                }
            }
        },
        watch: {
            all: {
                files: ['less/**/*.less'],
                tasks: ['less'],
            }
        }
    });

    grunt.registerTask('default', ['less', 'watch']);
};

Note: The code snippet above comes from the demo repository.

After an npm install, running grunt in your terminal should show a watch task running.

image_13

DevTools already has write access to your development folder (through Workspaces). Cmd/Ctrl+S your changes in the Sources Panel to have DevTools overwrite the source LESS file with your new change; Grunt then compiles a new CSS file, which DevTools pulls in and applies to the page.


Conclusion

  • During development and debugging, looking at your source file (rather than the compiled file) will almost always be more useful to you.
  • For DevTools to understand source file mappings, your compilation tool needs to produce output compatible with the Source Map v3 proposal, which is up to that tool to implement.
  • Tool support for Source Maps is improving: we have Sass, Compass, LESS, autoprefixer, UglifyJS2, CoffeeScript and more. There are grunt-contrib-* tasks for most of these tools (Sass, Compass, LESS, autoprefixer, UglifyJS2, CoffeeScript) which tie in nicely with a livereload workflow.
  • Viewing LESS files will work out of the box with DevTools. To actually edit files, try out Workspaces which gives you the ability to persist changes to disk.

Further Reading

Source Maps

LESS

Chrome DevTools

16:53

How To Design Custom Pricing Plan Tables with HTML5 and CSS3



Any company selling a service with payment plans usually breaks up the structure into different packages. Smaller packages will cost less but also provide fewer features. Pricing tables are the perfect UI component to display these packages in an easy-to-compare design.

For this tutorial I want to demonstrate how we can create pricing tables with just a bit of HTML5 and CSS3. Despite the name, I haven’t actually used any <table> elements within the design. You might feel these are easier but they also depend on a rigid formulaic structure. Pricing tables can have many facets including service terms, plan details, pricing details, and of course the purchase/signup button.

custom designed html5 css3 pricing tables tutorial open source

Live Demo | Download Source Code

Getting Started

This demonstration is mostly built around aesthetics so I haven’t even bothered touching the jQuery library. The document header references a single stylesheet named styles.css which imports an external web font Alegreya Sans.

Looking at the table itself, you will see it’s built using a container div along with internal block elements like <header> and <section>. This makes everything easier to read when skimming through the code, and we can separate the block sections of each table to distinguish between background colors and font styles.

<div id="pricing" class="clearfix">
  <div class="price-table">
    <header>
      <h3>Basic</h3>
    </header>
    <section class="price-details">
      <p>$19<span class="price-recur">per month</span></p>
    </section>
    <section class="details">
      <ul class="details-list">
        <li class="wizard">Easy Setup Wizard</li>
        <li class="storage">250GB Storage</li>
        <li class="scripts">Open Source Scripts</li>
        <li class="support">24/7 Tech Support</li>
      </ul>
    </section>
    <section class="purchase-btn"><a href="javascript:void(0);">Purchase</a></section>
  </div><!-- @end "Basic" .price-table -->

To save room I’ve only copied the HTML for the very first pricing table, the “Basic” plan. The header section uses a background gradient plus a small text shadow to stand apart from the page. Pricing details are split using a span element with the class .price-recur, so the numeric cost appears much larger than the denoted timeframe (per month).

You’ll also notice the .details-list contains list items with different classes. These each represent a different background icon from the Gentle Edges icon set. The PNGs are white by default which is perfect for this dark tabular color scheme.

The outer container #pricing uses a clearfix class to keep everything aligned properly. The tables are floated next to each other using fixed width values. These could be redefined to use flexible widths based on the container, if your layout is responsive. Basically you can adjust these numbers in CSS to get them aligned any way you’d like.
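
As a rough, illustrative sketch only (not part of the demo stylesheet), the fixed values could be swapped for percentages so all four tables flex with the #pricing container; the exact numbers depend on how much gutter you want:

.price-table {
  float: left;
  width: 23.5%;     /* four tables per row: 4 x 23.5% + 3 x 2% = 100% */
  margin-right: 2%; /* flexible gutter instead of the fixed 10px */
}
.price-table:last-child {
  margin-right: 0;  /* drop the gutter after the final table */
}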

Typical CSS Styles

Each of the table designs follows the same class structure. Higher-priced tables have more listed icon features, and so they appear longer on the page.

/* page structure */
#w {
  display: block;
  width: 800px;
  margin: 0 auto;
  padding: 15px 10px;
  background: #fff;
  -webkit-border-radius: 4px;
  -moz-border-radius: 4px;
  border-radius: 4px;
  -webkit-box-shadow: 1px 2px 4px rgba(0,0,0,0.55);
  -moz-box-shadow: 1px 2px 4px rgba(0,0,0,0.55);
  box-shadow: 1px 2px 4px rgba(0,0,0,0.55);
}


#pricing {
  display: block;
  margin-bottom: 20px;
}

.price-table {
  display: block;
  float: left;
  width: 185px;
  text-align: center;
  color: #fff;
  background: #6f8590;
  margin-right: 10px;
  -webkit-border-radius: 5px;
  -moz-border-radius: 5px;
  border-radius: 5px;
}

Each inner table is fixed at 185px wide, which comes to 740px for all four, plus an extra 10px right margin on each (780px in total). The body wrapper is 800px wide, so these tables come in at just about the perfect width. The table itself also has rounded corners, which are carried over to inner elements such as the header.

.price-table header {
  display: block;
  padding: 15px 0;
  border-bottom: 1px solid #54656d;
  -webkit-border-top-right-radius: 5px;
  -webkit-border-top-left-radius: 5px;
  -moz-border-radius-topright: 5px;
  -moz-border-radius-topleft: 5px;
  border-top-right-radius: 5px;
  border-top-left-radius: 5px;
  background-color: #5f8597;
  background-image: -webkit-gradient(linear, left top, left bottom, from(#324a56), to(#5f8597));
  background-image: -webkit-linear-gradient(top, #324a56, #5f8597);
  background-image: -moz-linear-gradient(top, #324a56, #5f8597);
  background-image: -ms-linear-gradient(top, #324a56, #5f8597);
  background-image: -o-linear-gradient(top, #324a56, #5f8597);
  background-image: linear-gradient(top, #324a56, #5f8597);
}
.price-table header h3 {
  font-size: 2.7em;
  font-weight: bold;
  text-shadow: 1px 2px 0 rgba(0,0,0,0.3);
}

.price-table .price-details {
  padding: 20px 0;
  background: #cedee6;
  font-size: 4.46em;
  line-height: 1.1em;
  font-weight: bold;
  color: #4b5d72;
  margin-bottom: 15px;
  text-shadow: 1px 2px 0 rgba(255,255,255,0.6);
}
.price-table .price-details .price-recur {
  display: block;
  font-size: 0.4em;
  line-height: 0.9em;
  font-weight: normal;
}

.price-table .details-list {
  list-style: none;
  text-align: left;
  margin-bottom: 10px;
}
.price-table .details-list li {
  display: block;
  padding: 8px 0;
  padding-left: 40px;
  font-size: 1.2em;
  line-height: 20px;
  font-weight: bold;
  background-position: 6px 5px;
  background-repeat: no-repeat;
  border-bottom: 1px solid #a7b7bf;
}

You’ll notice the header uses a CSS3 background gradient, with browser prefixes, to create the glossy effect. As we move down to .price-details the background becomes a solid color, and the pricing table itself uses a darker gray for its base background. Each of these sections appears divided, yet they still come together to form a recognizable pricing table.

On each of the .details-list li elements we need to keep the same background properties but swap in different images. The simplest way to do this is to share the same background-position and background-repeat values across all list items, and then switch only the background-image URL in each item’s individual class, along the lines of the snippet below.
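
For reference, the individual classes then only need to set background-image; the file names below are placeholders, since the real paths depend on where the Gentle Edges icons are saved in your project:

.details-list li.wizard  { background-image: url('images/icon-wizard.png'); }
.details-list li.storage { background-image: url('images/icon-storage.png'); }
.details-list li.scripts { background-image: url('images/icon-scripts.png'); }
.details-list li.support { background-image: url('images/icon-support.png'); }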

The Recommended Table

You may have already noticed that the “Pro” table has a lighter purple hue in comparison to the darker teal. Many pricing tables highlight one of the plans to make it appear more noticeable than the others. Companies present that plan as the best deal for your money and hope to persuade visitors to choose it for its greater value.

The recommended table’s header is much shorter in height because I’ve included a small notice to catch people’s attention. You could get fancy with ribbons or badges; however, I wanted to avoid extraneous images for this design. The table itself has an additional class so we can target internal elements using .price-table.recommended as the selector.

.price-table.recommended {
  background: #6c7b9b;
}

.price-table.recommended header {
  padding: 5px 0;
  background-color: #5a76b1;
  background-image: -webkit-gradient(linear, left top, left bottom, from(#495e8a), to(#5a76b1));
  background-image: -webkit-linear-gradient(top, #495e8a, #5a76b1);
  background-image: -moz-linear-gradient(top, #495e8a, #5a76b1);
  background-image: -ms-linear-gradient(top, #495e8a, #5a76b1);
  background-image: -o-linear-gradient(top, #495e8a, #5a76b1);
  background-image: linear-gradient(top, #495e8a, #5a76b1);
  border-bottom: 1px solid #3b577e;
}
.price-table.recommended .notice {
 font-size: 1.2em;
 line-height: 20px;
 background: #3b577e;
 font-weight: bold;
}

.price-table.recommended .price-details {
  background: #bac7ea;
  padding: 20px 0;
}

To compensate for the loss of space I’ve reduced the header’s vertical padding from 15px to 5px (30px down to 10px in total). This is also why the small .notice text uses a line-height of 20px: I wanted the last two tables to stay perfectly aligned, even though they feature slightly different content. Almost all of the background colors have been updated, along with the purchase button.

.price-table.recommended .purchase-btn a {
  border-color: #6a3fc2 #7045bf #651fbb;
  background: #6149ad;
  background-image: -webkit-gradient(linear, left top, left bottom, from(#7a67d8), to(#6149ad));
  background-image: -webkit-linear-gradient(top, #7a67d8, #6149ad);
  background-image: -moz-linear-gradient(top, #7a67d8, #6149ad);
  background-image: -ms-linear-gradient(top, #7a67d8, #6149ad);
  background-image: -o-linear-gradient(top, #7a67d8, #6149ad);
  background-image: linear-gradient(to bottom, #7a67d8, #6149ad);
}
.price-table.recommended .purchase-btn a:hover {
  background: #634fc2;
  background-image: -webkit-gradient(linear, left top, left bottom, from(#8876e2), to(#634fc2));
  background-image: -webkit-linear-gradient(top, #8876e2, #634fc2);
  background-image: -moz-linear-gradient(top, #8876e2, #634fc2);
  background-image: -ms-linear-gradient(top, #8876e2, #634fc2);
  background-image: -o-linear-gradient(top, #8876e2, #634fc2);
  background-image: linear-gradient(to bottom, #8876e2, #634fc2);
}
.price-table.recommended .purchase-btn a:active {
  background: #503d88;
  background-image: -webkit-gradient(linear, left top, left bottom, from(#6d58bd), to(#503d88));
  background-image: -webkit-linear-gradient(top, #6d58bd, #503d88);
  background-image: -moz-linear-gradient(top, #6d58bd, #503d88);
  background-image: -ms-linear-gradient(top, #6d58bd, #503d88);
  background-image: -o-linear-gradient(top, #6d58bd, #503d88);
  background-image: linear-gradient(to bottom, #6d58bd, #503d88);
}

custom designed html5 css3 pricing tables tutorial open source

Live Demo | Download Source Code

Closing

You won’t find a need for pricing plan tables in every project; in fact, you probably won’t need them for the majority of web projects. But many companies offer a recurring service, like Typekit or MaxCDN, and those are the designs which can truly incorporate pricing tables with a sense of purpose. Feel free to use my source code in your own website layouts, and to share any questions or ideas you may have in the post discussion area below.


