
November 19 2019


No, Absolutely Not

I think the difference between a junior and senior front-end developer isn't in their understanding or familiarity with a particular tech stack, toolchain, or whether they can write flawless code. Instead, it all comes down to this: how they push back against bad ideas.

What I've learned this year is that web performance will suffer if you don't say no to the marketing department because you'll suddenly find yourself with eighteen different analytics scripts on your website. If you don't say no to engineers, then you'll have a codebase that's half React, a quarter Vue and another quarter built in a language you don't even recognize. If you don't say no to designers, then you'll have a ton of components that are very similar to one another and that will eventually end up confusing everyone in your organization. And if you don’t say no to project managers, then you'll forfeit the time necessary to build an accessible, responsive, baseline experience.

The true beauty of web design is that you can pick up HTML, CSS, and the basics of JavaScript within a dedicated week or two. But over the past year, I've come to the conclusion that building a truly great website doesn't require much skill and it certainly doesn't require years to figure out how to perform the coding equivalent of a backflip.

What you need to build a great website is restraint.

But! The problem with working on large-scale projects with hundreds of people is that saying "no" can be political suicide. Instead, you have to learn how to say it without sounding like a jerk. You need to educate everyone about performance, responsive design, and accessibility. You'll need to explain to folks what front-end development even is.

And that's because the hard part is that saying "no" requires justification—even mentorship—as to why something is a bad idea when it comes to building a website.

The even harder part of all this is that front-end development is boring to everyone else except us. No one cares about the three weird languages we have to write. And certainly, no one cares about performance or accessibility until things suddenly stop working for them. This is why the broken parts of the internet are felt by everyone but are mostly invisible to those who build it.

All of these lessons have reminded me of a piece by Robinson Meyer for The Atlantic about the threat of climate change and how the solutions are "boring as dirt", or BAD for short:

The BAD problem recognizes that climate change is an interesting challenge. It is scary and massive and apocalyptic, and its attendant disasters (especially hurricanes, wildfires, and floods) make for good TV. But the policies that will address climate change do not pack the same punch. They are technical and technocratic and quite often dull. At the very least, they will never be as immediate as climate change itself. Floods are powerful, but stormwater management is arcane. Wildfires are ravenous, but electrical-grid upgrades are tedious. Climate change is frightening, but dirt is boring. That's the BAD problem.

The "boring as dirt" problem exists in our industry and every organization we work with. For instance, the performance of a website is obviously a terrible problem for users when they're trying to report a blackout in their area and the website can't load because there are a dozen or more third-party scripts loading at any given time.

But fixing that problem? It requires going through each script, talking to the marketing department, finding out who owns what script, why they use it, what data is ultimately useful to the organization and what is not. Then, finally, you can delete the script. The solution to the problem is boring as dirt and trying to explain why the work is important—even vital—will get you nowhere in many organizations.

So, how do we avoid boredom when it comes to solving front-end development problems?

We must realign it with the goals of the business. We must mention our customers and why they need responsive interfaces when we talk about CSS. We should start a newsletter when we do a ton of great work that no one can see.

And when someone has a bad idea that hurts the web? We should, as politely as we can, say no.

The post No, Absolutely Not appeared first on CSS-Tricks.


JAMstack, Fugu, and Houdini

What has me really excited about building websites recently is the fact that we, as front-end developers, have the power to do so much more. Only a few years ago, I would need a whole team of developers to accomplish what can now be done with just a few amazing tools.

Although the projects/tools/technologies are almost endless, in this article I'd like to talk about the top three that have me the most excited about building websites today, and for the future.

Serverless and the JAMstack

Serverless functions, which are really just server-side functions that you don't host yourself, have been around for a few years, but they've really picked up in the past year or so. They allow us to host simple Node functions that don't require persistent state and can be called from a frontend website the way we would call any other server-side API.

Serverless functions have really changed the game for me and I like to think that they did for frontend developers what sites like Squarespace did for non-developers. For the latter group, they no longer need a developer to build something simple like a portfolio website. For us frontend developers, we no longer need a backend developer to accomplish tasks like creating a contact form on a website. These are things we really should never have needed a whole API for in the first place!
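To give a sense of how little code that takes, here's a minimal sketch of a contact-form function in the handler shape most providers (Netlify Functions, AWS Lambda) use. What happens with the submission is up to you; everything past validation is illustrative.

```javascript
// A minimal serverless contact-form handler (Netlify/AWS Lambda shape).
// In a functions directory you'd export this as `exports.handler`.
const handler = async (event) => {
  let data;
  try {
    data = JSON.parse(event.body || '{}');
  } catch (err) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Invalid JSON' }) };
  }
  if (!data.name || !data.email) {
    return { statusCode: 400, body: JSON.stringify({ error: 'name and email are required' }) };
  }
  // A real function would forward the submission to an email or CRM API here.
  return { statusCode: 200, body: JSON.stringify({ message: `Thanks, ${data.name}!` }) };
};
```

The front end then just POSTs to the function's URL with fetch(), the same way it would call any other API.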

The popularity of serverless functions has led to the creation of a new tech stack: JavaScript, APIs, and Markup (JAMstack). I really love the concept of the JAMstack because it's a move toward more static, performant websites, which I'm a big fan of. If you want to learn more about this stack, JAMstack_conf is a great conference to attend. I spoke at this year's conference in San Francisco about using headless Chrome and Cloudinary to create progressively enhanced dynamic content (long title, I know). You can watch my talk below.

Project Fugu

Project Fugu is an initiative started by the Chromium team to bring as many of the capabilities available to native applications to the web as possible. A lot of these features are small and incremental, but the sum of the parts is going to make a huge change in the way we build progressive web applications.

One of the APIs I'm really looking forward to is the Native File System API, which will give websites the ability, with the user's permission, to access files on their system. A great use case for this would be Figma, the online interface design tool. Instead of having files "saved" online-only, it could work directly with files on your machine, the same way native applications do!
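A sketch of what using it could look like, assuming the picker lands as currently proposed (`showOpenFilePicker`); the fallback message is illustrative, and the `scope` parameter exists only so the check works outside a browser:

```javascript
// Feature-detect the Native File System API (shipping as the File System
// Access API) before relying on it.
function supportsFilePicker(scope = globalThis) {
  return typeof scope.showOpenFilePicker === 'function';
}

// Ask the user to pick a file and read it as text, or fail loudly so the
// caller can fall back to a classic <input type="file">.
async function openTextFile(scope = globalThis) {
  if (!supportsFilePicker(scope)) {
    throw new Error('File picker unavailable; fall back to <input type="file">');
  }
  const [handle] = await scope.showOpenFilePicker();
  const file = await handle.getFile();
  return file.text();
}
```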
Some other APIs I think are interesting are:

  • Wake Lock API - will allow websites to prevent the device from dimming or falling asleep
  • Contacts Picker API - will allow websites to access contacts from the user’s device
  • Get Installed Related Apps API - will allow websites to check if a native application is installed

You can view the full list of APIs.

CSS Houdini

Although Houdini isn't exactly ready yet, it's probably the technology I am most excited for as a lover of CSS because I believe it will be a true game-changer in how we build websites.

Houdini is a collection of APIs that exposes "hooks" into certain parts of the browser's rendering engine. This gives us low-level access to the different stages at which CSS is applied, allowing us to essentially create our own CSS!

A great example of this is using the Layout Houdini API to create the infamous masonry layout as a new value for the display property. Once these APIs are out, the possibilities for what we will be able to create will be endless!
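The spec is still in flux, but the proposed Layout API shape looks roughly like this sketch (the class body is pseudocode, not a working masonry implementation):

```js
// masonry-worklet.js — a rough sketch of the proposed CSS Layout API shape.
registerLayout('masonry', class {
  async intrinsicSizes(children, edges, styleMap) {
    // report min/max content sizes for the children
  }
  async layout(children, edges, constraints, styleMap) {
    // measure each child, place it in the shortest column,
    // and return the overall size of the layout
  }
});
```

You'd load it with `CSS.layoutWorklet.addModule('masonry-worklet.js')` and then use `display: layout(masonry);` in your CSS.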

The post JAMstack, Fugu, and Houdini appeared first on CSS-Tricks.


Oh, the Places JavaScript Will Go

I tend to be pretty vocal about the problems client-side JavaScript causes from a performance perspective. We're shipping more JavaScript than ever to our users' devices and the result is increasingly brittle and resource-intensive experiences. It's... not great.

But that doesn't mean I don't like JavaScript. On the contrary, I enjoy working in JavaScript quite a bit. I just wish we were a little more selective about where we use it.

What excites me is when JavaScript starts reaching into parts of the technical stack where it didn't live before. Both server-side programming and the build tool process weren't exactly off-limits to front-end developers, but before Node.js and tools like Grunt, Gulp, webpack, and Parcel came along, they required different languages. There are a lot of improvements (asset optimization, test running, server-side adjustments necessary for improved front-end performance, etc.) that required server-side languages, which meant most front-end developers tended not to go there. Now that those tools are powered by JavaScript, it's far more likely that front-end developers can make those changes themselves.

Whenever we take a part of the technology stack and make it more approachable to a wider audience, we'll start to see an explosion of creativity and innovation. That's exactly what's happened with build processes and bundlers. There's been an explosion of innovation in no small part thanks to extending where front-end developers can reach.

That's why I'm really excited about edge computing solutions.

Using a CDN is one of the most valuable things you can do to improve performance and extend your reach. But configuring that CDN and getting the maximum amount of value has been out of reach for most front-end teams.

That's changing.

Cloudflare has Cloudflare Workers, powered by JavaScript. Akamai has EdgeWorkers, powered by JavaScript. Amazon has Lambda@Edge, powered by JavaScript. Fastly just announced Compute@Edge which is powered by WebAssembly. You can't write JavaScript at the moment for Compute@Edge (you can write TypeScript if that's your thing), but I suspect it's only a matter of time before that changes.

Each of these tools provides a programmable layer between your CDN and the people visiting your site, enabling you to transform your content at the edge before it ever gets to your users. Critically, all of these tools make doing these things much more approachable to front-end developers.

For example, instead of making the client do all the work for A/B testing, you can use any one of these tools to handle all the logic on the CDN instead, helping to make client-side A/B testing (an annoyance of every performance-minded engineer ever) a thing of the past. Optimizely's already using this technology to do just that for their own A/B testing solution.
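As a sketch of how that logic might look in a Workers-style script, here's a deterministic bucketing function with the wiring shown in comments (the cookie-reading step and the origin hostnames are hypothetical):

```javascript
// Deterministically assign a visitor to an experiment bucket at the edge,
// so the same visitor always sees the same variant with no client-side JS.
function assignBucket(visitorId, buckets = ['control', 'variant']) {
  let hash = 0;
  for (const ch of String(visitorId)) {
    hash = (hash * 31 + ch.codePointAt(0)) >>> 0; // simple unsigned rolling hash
  }
  return buckets[hash % buckets.length];
}

// In a Cloudflare Worker, the wiring would look roughly like this:
// addEventListener('fetch', (event) => {
//   const id = /* read a visitor-id cookie from event.request */;
//   const origin = assignBucket(id) === 'variant'
//     ? 'https://variant.example.com'
//     : 'https://www.example.com';
//   event.respondWith(fetch(origin + new URL(event.request.url).pathname));
// });
```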

Using a third-party resource? Edge computing makes it much easier to proxy those requests through your own CDN, sparing you the extra connection cost and helping eliminate single points of failure.

Custom error messages? Sure. User authentication? You betcha. Personalization? Yup. There's even been some pretty creative technical SEO work happening thanks to edge computing.

Some of this work was achievable before, but often it required digging through archaic user interfaces to find the right setting, or using entirely different languages and tools like ESI or Varnish that don't really exist outside of the little sliver of space they operate in.

Making these things approachable to anyone with a little JavaScript knowledge has the potential to act as a release valve of sorts, making it easier for folks to move some of that heavy work away from client devices and back to a part of the tech stack that is much more predictable and reliable. Like Node.js and JavaScript-driven build tools, they extend the reach of front-end developers further.

I can't wait to see all the experimentation that happens.

The post Oh, the Places JavaScript Will Go appeared first on CSS-Tricks.


Understanding and Mastering the Art of Ombre Design

The term Ombre Design refers to the gradual blending of one hue of a colour into another. Generally, the progression is from light tints and shades toward darker ones. The effect has gained wide popularity in many fields, such as book cover design, book design, interior design, home décor, graphic design and website design.

The effect is known for conveying drama and shifting moods. The term derives from the French word “ombrer”, which essentially means ‘to shade’.

Let’s understand the various implications of Ombre Design:

1. Book Design:

In the example above, you can see how the book uses a subtle yet very attractive ombre design at the centre of the page. The centre of the book has a darker shade of teal, dispersing off to lighter shades outwards. This helps create an atmospheric vibe and guides the reader to the centre of the book, giving them a visual direction to follow.

The design helps the reader's eye gravitate towards the centre, managing focus and keeping the reader engaged. The darker shade of ombre creates drama. Note that the design element doesn't compromise the legibility of the printed text; a good designer always takes care of such details when using these design practices.

2. Paintings:

Ombre is often used as an abstract element added to a design. However, many designers see the potential of incorporating Ombre principles to enhance the elements of their original design. In this painting, you can see the different gradients used throughout to show depth of field and layering.

The forests are deep and saturated with blue that gradually gradates to lighter tones and distant light as we move up toward the sky. This shows how Ombre can be used to express distance and depth, and that there is no single method of creating an Ombre design.

3. Interior Design:

Using Ombre in interior design is a new trend, but it has quickly caught many interior designers' attention. It is based on the colour transition from darker tones to lighter tones; even a transition from one tint to another can be considered Ombre. This is also known as degradation or gradient. There are many ways you can use Ombre in designing office or home spaces.

  Ombre in wall decorations:

Walls are the most noticeable element of any room. You can use Ombre on any wall of any room; the key is to find the right tone. Executing such ideas requires a careful and knowledgeable approach. To bring your idea to reality, the painter or artist should know exactly how to apply the right strokes and texture to create the Ombre effect.

Generally, it is difficult to make such art on such a big canvas as a wall. However, if done correctly, it would enhance the look of the room significantly.

Interior walls:

When applying Ombre to an interior wall, one important thing to remember is that if the room is small and has low ceilings, you should use the darkest shades at the bottom, then transition to lighter colours on the way up.

Doing so helps elevate the visual height of the space, psychologically. If you have a spacious room to play around with, you need not stick to this rule and can experiment with Ombre in any direction you want.

Decoration of windows:

The textile industry didn't miss out on the Ombre Design trend. Using suitable fabrics, many manufacturers have applied these design principles to window curtains in materials such as silk, cotton, jute and linen. Using a gradient on such a surface creates a sense of lightness.

Designers generally go for hand strokes to give the fabric a carefree feel that makes it look natural, though the strokes remain picturesque rather than bizarre. Such designs work best when they complement the furniture of the room in subtle yet effective ways.

Ombre Design in Furniture:

Using Ombre Design in furniture has led to some interesting dynamics in interiors. Ombre is often applied to framed pieces; for instance, cabinets with several drawers and doors. You could colour a cabinet yourself with just one can of white paint and another can of the colour you want.

By mixing the white and the desired colour in different ratios, you can create shades of varying saturation for the cabinets, which creates the Ombre effect.

4. Cosmetics:

There are various options available as far as colour range goes in the world of cosmetics, which opens up great scope for experimenting with Ombre effects in makeup. An ombre effect can be achieved by blending two or more shades and applying them to the lips, eyes or even cheeks. The dark-to-light concept is similar to one that already exists in cosmetics: contouring.

Contouring is a process where different shades and tints of natural skin tones are blended. The major difference is that contouring is used to artificially sculpt the face to look a certain way, whereas ombre is simply the blending of two or more shades in any format.


5. Cooking and Baking:

In the world of cooking and baking, presentation plays an important role. It holds just as much value as the taste, the preparation and the accuracy of the dish. Probably a bit more, as it is the first interaction the customer has with the dish and forms their first impression of it.

Hence, a lot of professional bakers have realized the advantage of using Ombre design elements in the frosting of their cakes, or colouring individual cake layers in gradual tones from light to dark. They also use the dyeing and stacking method to achieve an ombre fade.

6. Hairstyles:

Hairstyle trends change quickly and are constantly evolving, which makes them the most open to experimentation and quick to accept any trend. Ombre design is no exception. Many hairstylists have realized that an Ombre texture adds depth and layers when highlighting hair, and many customers are willing to experiment with Ombre shades of hair colour. It looks well thought out and beautiful compared to flat colours most of the time.

Generally, the idea with hair colour is a dramatic two-toned effect: darker at the top and lighter at the bottom. Many colour combinations work well, like natural blonde, brown or red, or even unconventional colours such as pink, green and purple if someone is confident enough.

7. Graphic Design:

Ombre Design initially gained popularity for its tangible aesthetics in industries like interiors, textiles and furniture. It has also made its way into graphic and website design. Whether for creating logos, using elements of Ombre design in wallpapers, or any other such use, several designers now see the value of Ombre design.

For the same reason, any designer should know how to create their own Ombre Design. Let's look at the five-step process:

How to Create Your Ombre Design

Creating different layers:

For this example, we'll use an image of an interlocking arrow pattern. To start, we create a white background layer, then add a black arrows layer, and finally a paper texture layer so that the gradation blends and diffuses properly.

Working on the individual layers:

Now select the contents of the Black Arrows layer by holding Control and clicking its thumbnail. With the arrows selected, add the new gradient fill layer, then click on the layer, go to Layer Mask and finally Reveal Selection. Doing so blends the gradient fill layer into the shape of the arrows. You no longer need the Black Arrows layer.

Now go to the White background layer and replace it with another gradient layer. You don't need to create a mask for this layer; it is already masked by the arrows from the layer above. You can now control the right-facing arrows and left-facing arrows separately.

Working with Gradients:

Open the Gradient Editor, situated at the left. Inside, you'll see small swatches along the bottom and the top. Use the top swatches to control opacity: we leave the left side of this gradient 100 per cent opaque (black) and the right side 100 per cent transparent (white).

Now switch to the swatches situated below; these choose the colours of the gradient. For our example, we'll use blue and green, which sit close together on the colour wheel and blend smoothly. Following the dark-to-light concept, adjust both colours to create subtle differences between the background arrows and the foreground arrows.

Add a Brightness/Contrast Layer:

Now to further enhance our Ombre design, we can add a brightness and contrast layer. This would further distinguish the transition from light to dark. By doing this, we can refine the design without having to use all the functions of the gradient editor. This eases the process and gives you more control over the adjustments.

Add a Hue/Saturation layer:

Now to wrap it up, we use a hue/saturation layer that allows us to try different colour palettes. Drag the hue slider left and right to make the necessary adjustments until you find what works best for you.

8. Website Design:

Many bloggers use Ombre Design on their blog pages to add aesthetic value, and it has become a popular trend. Ombre Design adds a subtle yet effective touch to a blog's design. The reason it works so well is that it isn't overwhelming to look at, yet not as boring as a single flat colour. To create an Ombre Design on your website, you can use a CSS gradient tool for the code. Let's look at how to use Ombre Design on a website.

First, visit one of the CSS gradient resources, like the Ultimate CSS Gradient Generator. Next, create your gradient by selecting a preset from the top or creating your own mix. Double-click the coloured boxes below the gradient bar to change the colours. You can drag the colours left or right to increase or decrease the fade between them; if two colour boxes are close together, they create a hard line, and if you pull them further apart, they create a fade. Once satisfied with the gradient, choose the orientation: vertical, horizontal or diagonal.

Once you're done tweaking and are satisfied with the resulting Ombre design, copy the generated CSS and paste it into your website's style sheet, inside the body rule:
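The pasted rule ends up looking something like this (the colour values are placeholders; use whatever your generator outputs, which may also include vendor-prefixed variants):

```css
/* A two-stop vertical Ombre background; colour values are illustrative. */
body {
  background: #1e5799; /* solid-colour fallback for very old browsers */
  background: linear-gradient(to bottom, #1e5799 0%, #7db9e8 100%);
}
```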


Now you have an overall understanding of Ombre Design and its various applications. It is used in many industries, and you can draw inspiration for your next website or graphic design assignment from all the available sources: home décor, furniture, interiors, paintings or even book designs. You can practice Ombre Design anywhere, on any surface.

The post Understanding and Mastering the Art of Ombre Design appeared first on Line25.


How Do You Remove Unused CSS From a Site?

Here's what I'd like you to know upfront: this is a hard problem. If you've landed here because you're hoping to be pointed at a tool you can run that tells you exactly what CSS you can delete from your project, well... there are tools out there, but I'm warning you to be very careful with them because none of them can ever tell you the complete story.

I know what you want. You want to run the tool, delete what it tells you, and you have a faster site in 2.2 minutes. I'm sorry, but I'm going to disappoint you.

I think you should have a healthy level of skepticism for any tool like that. None of them are exactly lying to you — they often just don't have enough information to give you results that are safe and actionable. That's not to say you can't use them or it can't be done. Let's take a walk.

The motivation

I imagine the #1 driver for the desire to remove unused CSS is this:

You used a CSS framework (e.g. Bootstrap), included the framework's entire CSS file, and you only used a handful of the patterns it provides.

I can empathize with that. CSS frameworks often don't provide simple ways to opt-in to only what you are using, and customizing the source to work that way might require a level of expertise that your team doesn't have. That might even be the reason you reached for a framework to begin with.

Say you're loading 100 KB of CSS. I'd say that's a lot. (As I write, this site has ~23 KB, and there are quite a lot of pages and templates. I don't do anything special to reduce the size.) You have a suspicion, or some evidence, that you aren't using a portion of those bytes. I can see the cause for alarm. If you had a 100 KB JPG that you could compress to 20 KB by dropping it onto some tool, that's awesome and totally worth it. But the gain in doing that for CSS is even more important because CSS is loaded in the head and is render blocking. The JPG is not.

😬 Looking at "coverage"

Chrome's DevTools has a "Coverage" tab that will tell you how much of your CSS and JavaScript is in use. For example, if I visit the homepage of CSS-Tricks right now...

It tells me that 70.7% of my style.css file is unused. I imagine it's right, and that the rest of the CSS is used elsewhere. I didn't just dump a big style library onto this site; I wrote each line of that by hand, so I have my doubts that more than 2/3 of it is unused globally.

I assumed I could start "recording" then click around different areas of the site and watch that unused number go down as different pages with different HTML are rendered, but alas, when the page refreshes, so does the Coverage tab. It's not very useful in getting a multi-page look at CSS coverage, unless you have a Single Page App I guess?
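One workaround is to drive Chrome with Puppeteer, collect the results of page.coverage.stopCSSCoverage() from each page you visit, and merge them yourself. Here's a sketch of the merging step; it assumes entries shaped like Puppeteer's ({ url, text, ranges }), and the list of pages to visit would be yours to supply.

```javascript
// Merge CSS coverage entries gathered across several page visits, so a byte
// counts as "used" if ANY visited page used it. Entries follow Puppeteer's
// stopCSSCoverage() shape: { url, text, ranges: [{ start, end }] }.
function aggregateCoverage(entries) {
  const byUrl = new Map();
  for (const { url, text, ranges } of entries) {
    const record = byUrl.get(url) || { text, used: new Set() };
    for (const { start, end } of ranges) {
      for (let i = start; i < end; i++) record.used.add(i); // mark each used byte
    }
    byUrl.set(url, record);
  }
  return [...byUrl].map(([url, { text, used }]) => ({
    url,
    totalBytes: text.length,
    usedBytes: used.size,
    usedPercent: Math.round((used.size / text.length) * 100),
  }));
}
```

Even then, remember the numbers only reflect the pages and states you actually visited.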

I hate to say it but I find looking at code coverage pretty useless. For me, it paints a dire picture of all this unused code on the site, which preys upon my doubts, but all I can do is worry about it.

This might be the very thing that's given you the idea that unused CSS needs to be discovered and deleted in the first place.

My primary concern

My biggest concern is that you look at something like code coverage and see your unused lines:

And you go, Perfect! I'll delete that CSS! And you do, only to find out it wasn't unused at all and you caused big styling problems throughout the site. Here's the thing: you don't actually know if a CSS selector is unused unless you:

  1. check coverage on every single page of your entire site...
  2. while executing all JavaScript...
  3. under every possible combination of state...
  4. in every possible combination of media queries you've used.

Checking your homepage doesn't count. Checking all your top-level pages doesn't count. You gotta dig through every page, including states that aren't always top-of-mind, not to mention all of the edge-case scenarios. Otherwise, you might end up deleting the dropdown styling for the credit card choice dropdown in the pop-up modal that appears for users with a disabled account who've logged in during their grace period that also have a gift card to apply.

This is too complex for automated tooling to promise their approach works perfectly, particularly when factoring in the unknowns of browser context (different screen sizes, different capabilities, different browsers) and third parties.

Here's an example of my concern playing out:

PurifyCSS Online takes some URLs and instantly provides a copy-pasteable chunk of CSS to use

Here's me dropping my site into PurifyCSS Online and getting new CSS.


On the left, CSS-Tricks as normal. On the right, I applied the new "purified" CSS, which deleted a bunch of CSS necessary for other pages.

It gave me the opportunity to put in other URLs (which is nice) but there are tens of thousands of URLs on CSS-Tricks. Many of them are fairly similar, but all of them have the potential of having selectors that are used. I get the impression it didn't execute JavaScript, because anything that came onto the page via JavaScript was left unstyled. It even deleted my :hover states.

Perhaps you can see why my trust in these tools is so low.

Part of a build process

PurifyCSS is probably more regularly used as a build process tool rather than the online interface. Their docs have instructions for Grunt, Gulp, and webpack. For example, globbing files to check and process them:

var purify = require('purify-css');

var content = ['**/src/js/*.js', '**/src/html/*.html'];
var css = ['**/src/css/*.css'];

var options = {
  // Will write purified CSS to this file.
  output: './dist/purified.css'
};

purify(content, css, options);

This gives you a lot more opportunity for accuracy. That content blob could be a list of every single template, partial, and JavaScript file that builds your site. That might be a pain to maintain, but you'll certainly get more accuracy. It doesn't account for content in data stores (e.g. this blog post that lives in a database) and third-party JavaScript, but maybe that doesn't matter to you or you can account for it some other way.

PurgeCSS, a competitor to PurifyCSS, warns about PurifyCSS's comparison technique:

PurifyCSS can work with any file type, not just HTML or JavaScript. PurifyCSS works by looking at all of the words in your files and comparing them with the selectors in your CSS. Every word is considered a selector, which means that a lot of selectors can be erroneously considered used. For example, you may happen to have a word in a paragraph that matches a selector in your CSS.

So keep that in mind as well. It's dumb in the way it compares potential selector matches, which is both clever and dangerous.

UnusedCSS is an online service that crawls your site for you

Manually configuring a tool to look at every page on your site from every angle is certainly a chore and something that will need to be kept in sync day-to-day as your codebase evolves. Interestingly, the online service UnusedCSS tries to overcome this burden by crawling the site itself based on a single URL you give it.

I signed up for the paid service and pointed it at CSS-Tricks. I admit, with just a glance at the results, it feels a lot more accurate to me:

It's telling me I'm using 93% of my CSS, which feels more in line with what I'd expect as the hand-author of all the CSS on this site.

It also lets you download the cleaned file and offers lots of customization, like checking/unchecking selectors you actually want/don't want (e.g. you see a class name it doesn't think you need, but you know for sure you actually do need it) as well as prefixing and removing duplicate selectors.

I enjoyed the increased accuracy of the online crawling service, but there was a lot of noise, and I also can't see how I'd incorporate it practically into a day-to-day build and release process.

Tooling is generally used post-processing

Say your CSS is built with Less or Sass, then uses a postprocessor to compile it into CSS. You'd probably incorporate automated unused CSS cleaning at the very end of whatever other CSS preprocessing you do. Like...

  1. Sass
  2. PostCSS / Autoprefixer
  3. [ Clean Unused CSS ]
  4. Production CSS

That both makes sense and is slightly funny to me. You don't actually fix the styling that generates unused CSS. Instead, you just wipe it away at the end of the build. I suppose JavaScript has been doing that kind of thing with tree shaking for a while, so there is a precedent, but it still feels weird to me because a CSS codebase is so directly hands-on. This setup almost encourages you to dump CSS wherever because there is no penalty for overdoing it. It removes any incentive to understand how CSS is applied and used.

PurgeCSS is another tool that takes explicit input and gives you the results

PurgeCSS is another player in the unused CSS market. One tangential thing I like about it is that it clearly explains how it differs from other tools. For example, compared to PurifyCSS:

The biggest flaw with PurifyCSS is its lack of modularity. However, this is also its biggest benefit. PurifyCSS can work with any file type, not just HTML or JavaScript. PurifyCSS works by looking at all of the words in your files and comparing them with the selectors in your CSS. Every word is considered a selector, which means that a lot of selectors can be erroneously considered used. For example, you may happen to have a word in a paragraph that matches a selector in your CSS.

PurgeCSS fixes this problem by providing the possibility to create an extractor. An extractor is a function that takes the content of a file and extracts the list of CSS selectors used in it. It allows a perfect removal of unused CSS.
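To make that concrete, here is a hypothetical sketch of the extractor idea (not PurgeCSS's actual code, just the shape of it): the naive version treats every word as a potential selector, while a stricter one only harvests class attributes, which is exactly the difference described above:

```javascript
// Naive extractor, similar in spirit to the "every word is a selector"
// approach: pull out every run of characters that could be a selector.
const extractor = (content) => content.match(/[A-Za-z0-9_-]+/g) || [];

// A stricter, hypothetical extractor: only look inside class="..."
// attributes, so ordinary words in copy text are never mistaken for selectors.
const classAttrExtractor = (content) => {
  const matches = content.match(/class="([^"]*)"/g) || [];
  return matches
    .flatMap((m) => m.slice(7, -1).split(/\s+/)) // strip class=" and trailing "
    .filter(Boolean);
};

// The naive version picks up "Hello" as a candidate selector;
// the class-attribute version does not.
console.log(extractor('<p class="intro highlight">Hello</p>'));
console.log(classAttrExtractor('<p class="intro highlight">Hello</p>'));
```

A real extractor would need to handle templating syntax, dynamic class names, and so on, which is why PurgeCSS lets you plug in your own per file type.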

PurgeCSS seems like the big dog at the moment. Lots of people are using it and writing about it.

Despite PurgeCSS needing special configuration to work with Tailwind, it seems like Tailwind and PurgeCSS are two peas in a pod. In fact, their docs recommend using them together and provide a CLI for using it in a build process.

I believe the gist of it is this: Tailwind produces this big CSS file full of utility selectors. But they don't intend for you to use the entire thing. You use these utility selectors in your HTML to do all your styling, then use PurgeCSS to look at all your HTML and shake out the unused utility selectors in your production CSS.

Still, it will be an ongoing maintenance issue: you'll need to teach it about every single template on your site (JavaScript, HTML, or otherwise), manually configure anything that relies on third-party resources, and remember that markup generated from a data store probably can't be seen during a build process at all, so it has to be accounted for by hand.

My favorite technique: have someone who is really familiar with your CSS codebase be aware of the problem and aim to fix it over time

Perhaps this feels like the approach of an old-timer who needs to get with the times, but hey, this just feels like the most practical approach to me. Since this problem is so hard, I think hard work is the answer to it. It's understanding the problem and working toward a solution over time. A front-end developer who is intimately involved in your front end will, over time, develop an understanding of what is used and unused in CSS-land and can whittle it down.

An extreme testing approach I've seen is putting a tiny probe image in a CSS block (i.e. background-image: url(/is-this-being-used.gif?selector);) and then checking server logs over time to see if that image has been accessed. If it was accessed, the selector was used; if not, it wasn't.
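Checking the logs can be automated with a small script. This is a hypothetical sketch (the log format and probe-image naming are invented for illustration): it scans access-log lines for requests to the probe image and collects the selector names that were actually hit:

```javascript
// Hypothetical sketch: given access-log lines, find which probe images
// (is-this-being-used.gif?<selector>) were ever requested. Any selector
// that never shows up was, presumably, never applied on a real page view.
function usedSelectors(logLines) {
  const used = new Set();
  for (const line of logLines) {
    const match = line.match(/is-this-being-used\.gif\?([\w-]+)/);
    if (match) used.add(match[1]);
  }
  return used;
}

// Invented sample log lines:
const log = [
  '203.0.113.9 GET /is-this-being-used.gif?hero-banner 200',
  '203.0.113.7 GET /styles.css 200',
  '198.51.100.2 GET /is-this-being-used.gif?old-promo 200',
];
console.log([...usedSelectors(log)]); // ['hero-banner', 'old-promo']
```

The obvious caveats apply: a selector has to be instrumented in the first place, caching can hide repeat hits, and you need enough traffic over enough time before "never requested" means anything.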

But perhaps my favorite tool in the potential toolbox is this:

Visual regression testing

You screenshot as much of your site as possible — like all of the most important pages and those pages manipulated into different states — plus across different browsers and screen sizes. Those screenshots are created from your master branch on Git.

Then, before any branch gets merged into master, you take the same set of screenshots and compare them to the ones from master. Not manually, but programmatically.

That's exactly what Percy does, so watch this:

There have been other stabs at visual regression testing tools over the years, but Percy is the only one I've seen that makes clear sense to me. I don't just need to take screenshots; I want them compared so I can see visual differences between them. I don't just want to see the differences; I want to approve or disapprove them. I also want that approval to block or allow merges and I want to be able to control the browser before the screenshot is taken. I don't want to manually update the comparison images. That's all bread-and-butter Percy stuff.

Full disclosure: Percy has sponsored things on CSS-Tricks before — including that video above — but not this post.

The relation to Atomic CSS and CSS-in-JS

I'm sure there are lots of people reading this that would say: I don't have unused CSS because the tooling I use generates the exact CSS it needs and nothing more.

Hey, that's kinda cool.

Maybe that's Atomizer. Maybe that's Tachyons that you also run through UnCSS and you are super careful about it. Maybe it's the Tailwind + PurgeCSS combo that's all the rage right now.

Maybe you tackle styles some other way. If you're tightly coupling JavaScript components and styles, like React and Emotion, or even just using CSS modules with whatever, less unused CSS is an advantage of CSS-in-JS. And because tree-shaking and code-splitting come along for the ride in many JavaScript-based build processes, you not only have less CSS but only load what you need at the moment. There are tradeoffs to all this though.

How do you avoid unused CSS in future projects?

I think the future of styling is an intentional split between global and componentized styles. Most styles are scoped to components, but there are global styling choices that are made that take clear advantage of the cascade (e.g. global typography defaults).

If most styling is left scoped to components, I think there is less opportunity for unused styles to build up as it's much easier to wrap your mind around a small block of HTML and a small block of CSS that directly relate to each other. And when components die or evolve, the styling dies or evolves with it. CSS bundles are made from components that are actually used.

CSS-in-JS solutions naturally head in this direction as styles are bound to components. That's the main point, really. But it's not required. I like the generic approach of CSS modules, which is pretty much entirely for style scoping and doesn't mandate that you use some particular JavaScript framework.

If all that seems theoretical or out-of-reach, and you just have a Bootstrap site where you're trying to reduce the size of all that Bootstrap CSS, I'd recommend starting by using Bootstrap from the source instead of the final default distributed bundle. The source is SCSS and built from a bunch of high-level includes, so if you don't need particular parts of Bootstrap, you can remove them.

Removing dropdowns, badges, and breadcrumbs from Bootstrap before the build.

Good luck out there, gang.

The post How Do You Remove Unused CSS From a Site? appeared first on CSS-Tricks.


Six Months Using Firebase Web Performance Monitoring

I don't really think of Firebase as a performance monitoring tool (all I ever think about is auth and real-time data storage), but nevertheless, it totally has that feature.

Justin Ribeiro...

[A] tool to track what real users in the wild are experiencing with an easy setup? Yes, please. [...] I’ve been using Firebase web perf tracking since June on this very blog. Let’s take a look at the good, the bad, and the downright confusing portions of the Firebase web performance monitoring.

Justin talks about the good and bad of this particular product, but what I think is notable about this kind of performance tooling is that it reflects real users using your production site. A lot of performance tooling is just fancied-up WebPageTest that runs your site once under probably-simulated browser conditions. I don't see as much happening in the real user performance monitoring space.

I think I'd rank performance testing by type like this:

  1. Run simulated performance metrics in CI. Stop merge requests that break metrics/budgets.
  2. Measure real user monitoring in production.
  3. Run simulated performance metrics in production.
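For the top item on that list, the gist of a CI budget gate can be sketched in a few lines. Everything here is hypothetical (the metric names, the numbers, and where they come from); in a real pipeline the metrics would be produced by a tool like Lighthouse or WebPageTest:

```javascript
// Hypothetical performance budgets: metric name -> maximum allowed value.
const budgets = { 'first-contentful-paint': 2000, 'total-js-bytes': 300000 };

// Compare a run's measured metrics against the budgets and report failures.
// A CI script would exit non-zero (blocking the merge) if this is non-empty.
function checkBudgets(metrics) {
  const failures = [];
  for (const [name, limit] of Object.entries(budgets)) {
    if (metrics[name] > limit) {
      failures.push(`${name}: ${metrics[name]} exceeds budget of ${limit}`);
    }
  }
  return failures;
}

// A run that ships too much JavaScript fails on that one budget:
console.log(checkBudgets({ 'first-contentful-paint': 1800, 'total-js-bytes': 350000 }));
```

The mechanics are trivial; the hard part is agreeing on the budget numbers and keeping the simulated conditions close to what real users experience.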


The post Six Months Using Firebase Web Performance Monitoring appeared first on CSS-Tricks.


The Tools are Here

Heading into 2020, it occurs to me that I've now been making websites for 20 years. Looking back on that time, it seems as though our practices have been in near-constant churn, and that our progress did not always seem linear. But ultimately, even the missteps and tangents along the way have contributed to a pattern of refinement, and now for the first time, it feels like we'll have a standard pattern for most of the technical challenges we face. I think 2020 looks to be a stabilizing moment for web standards.

Given that delivery is inherent to our medium, many of our challenges have come from network constraints. Early on, networks offered limited bandwidth, so we developed tools and practices to reduce the physical size of our files. Soon enough, bandwidth er… widened, and latency–the time spent making trips between servers and devices–became our next bottleneck. To mitigate latency, we developed techniques to deliver more code in fewer trips, like combining like-files, splitting our resources across many domains to allow more downloads at a given time, and inlining unlike-files into our HTML to avoid waiting for additional requests. We also learned to distribute our code around the world on CDNs, as physical proximity always helps. But latency itself is improving now, especially with the arrival of 5G, and advancements in how browsers communicate with servers now allow us to request any number of files at a time, or even push files to the browser before it asks for them. All of this has simplified our ability to deliver quickly and reliably, and it's only just recently become available without complicated workarounds.

Device differences used to confound us as well. In the early years of the mobile web, we had to find creative and often clumsy workarounds to deliver contextually appropriate features and assets, but nowadays we have fantastic tools to deliver device-appropriate experiences. We can use media queries to fluidly adapt our visual layouts across screen sizes, and we can build those layouts using proper design tools like grid and flexbox. We can use standard feature queries to test if we can rely on a particular tool before using it, or even to decide whether to load costly files in the first place. For media delivery, we now have powerful options for delivering appropriately sized images and videos to any device. All of this required less-ideal or non-standard practices only a few years ago, but things have changed for the better.

Accessibility has become simpler to achieve too, which is timely since awareness of its importance has likely never been greater. Standards have given us tools to better communicate the meaning and state of our components to assistive technology, and browsers and OSs have dramatically improved their interaction with those standards.

I don't mean to suggest that we don't still face hard technical problems, but I think it is increasingly our own practices and assumptions that create those problems, rather than any forces beyond our control. For example, we still see few sites that manage to reconcile fast delivery with smooth responsiveness at runtime, particularly on the average devices that people are using worldwide. But problems like that aren't absolute–they're caused by faults in our own priorities, or in over-relying on patterns we already know to be costly.

In short, the tools we need to do our jobs well are here. Except for container queries. We still really need container queries to do our jobs well, and it's frankly ridiculous that in 2020 we—ahem. Where was I? Oh, right.

So heading into 2020, it feels like we finally have a well-rounded standard toolset for building and analyzing our sites. Nowadays, if a site is slow or expensive to deliver, or slow to respond to user interaction, or inaccessible to assistive technology, or poorly designed on a particular screen, we can take comfort in knowing that it's probably our own fault and that we can fix it. And that's great because the web has much bigger, more pressing, non-technical problems that need our attention much more.

The post The Tools are Here appeared first on CSS-Tricks.


Make Your Own Expanding And Contracting Content Panels


Ben Frain

We’ve called them an ‘opening and closing panel’ so far, but they are also described as expansion panels, or more simply, expanding panels.

To clarify exactly what we’re talking about, head on over to this example on CodePen:

Easy show/hide drawer (Multiples) by Ben Frain on CodePen.


That is what we’ll be building in this short tutorial.

From a functionality point of view, there are a few ways to achieve the animated open and close we're looking for, each approach with its own benefits and trade-offs. I'm going to share my ‘go-to’ method in detail in this article. Let's consider the possible approaches first.


There are variations on these techniques, but broadly speaking, the approaches fall into one of three categories:

  1. Animate/transition the height or max-height of content.
  2. Use transform: translateY to move elements into a new position, giving the illusion of a panel closing and then re-render the DOM once the transform is complete with the elements in their finishing position.
  3. Use a library that does some combination/variation of 1 or 2!

Considerations Of Each Approach

From a performance perspective, using a transform is more effective than animating or transitioning the height/max-height. With a transform, the moving elements are rasterized and get shifted around by the GPU. This is a cheap and easy operation for a GPU so performance tends to be much better.

The basic steps when using a transform approach are:

  1. Get the height of the content to be collapsed.
  2. Move the content and everything after by the height of the content to be collapsed using transform: translateY(Xpx). Operate the transform with the transition of choice to give a pleasing visual effect.
  3. Use JavaScript to listen to the transitionend event. When it fires, display: none the content and remove the transform and everything should be in the right place.

Doesn’t sound too bad, right?

However, there are a number of considerations with this technique so I tend to avoid it for casual implementations unless performance is absolutely crucial.

For example, with the transform: translateY approach, you need to consider the z-index of the elements. By default, the elements that transform up come after the trigger element in the DOM and therefore appear on top of the things before them when translated up.

You also need to consider how many things appear after the content you want to collapse in the DOM. If you don’t want a big hole in your layout, you might find it easier to use JavaScript to wrap everything you want to move in a container element and just move that. Manageable but we have just introduced more complexity! This is, however, the kind of approach I went for when moving players up and down in In/Out. You can see how that was done here.

For more casual needs, I tend to go with transitioning the max-height of the content. This approach doesn’t perform as well as a transform. The reason being that the browser is tweening the height of the collapsing element throughout the transition; that causes a lot of layout calculations which are not as cheap for the host computer.

However, this approach wins from a simplicity point of view. The pay-off for suffering the aforementioned computational hit is that the DOM re-flow takes care of the position and geometry of everything. We have very little in the way of calculations to write, plus the JavaScript needed to pull it off well is comparatively simple.

The Elephant In The Room: Details And Summary Elements

Those with an intimate knowledge of HTML’s elements will know there is a native HTML solution to this problem in the form of the details and summary elements. Here’s some example markup:

    <details>
        <summary>Click to open/close</summary>
        Here is the content that is revealed when clicking the summary...
    </details>

By default, browsers provide a little disclosure triangle next to the summary element; click the summary and the contents below the summary is revealed.

Great, hey? Details even support the toggle event in JavaScript so you can do this kind of thing to perform different things based upon whether it is open or closed (don’t worry if that kind of JavaScript expression seems odd; we’ll get to that in more detail shortly):

details.addEventListener("toggle", e => {
    e.target.open ? thisCoolThing() : thisOtherThing();
});

OK, I’m going to halt your excitement right there. The details and summary elements don’t animate. Not by default and it is not currently possible to get them animating/transitioning open and closed with additional CSS and JavaScript.

If you know otherwise, I’d love to be proved wrong.

Sadly, as we need an opening and closing aesthetic we’ll have to roll up our sleeves and do the best and most accessible job we can with the other tools at our disposal.

Right, with the depressing news out of the way, let’s get on with making this thing happen.

Markup Pattern

The basic markup is going to look like this:

<div class="container">
    <button type="button" class="trigger">Show/Hide content</button>
    <div class="content">
        All the content here
    </div>
</div>
We have an outer container to wrap the expander, and the first element inside is the button which serves as a trigger for the action. Notice the type attribute on the button? I always include that because, by default, a button inside a form performs a submit. If you find yourself wasting a couple of hours wondering why your form isn’t working and there are buttons involved, make sure you check the type attribute!

The next element after the button is the content drawer itself; everything you want to be hiding and showing.

To bring things to life, we will make use of CSS custom properties, CSS transitions, and a little JavaScript.

Basic Logic

The basic logic is this:

  1. Let the page load, measure the height of the content.
  2. Set the height of the content onto the container as the value of a CSS Custom Property.
  3. Immediately hide the content by adding an aria-hidden="true" attribute to it. Using aria-hidden ensures assistive technology knows that content is hidden too.
  4. Wire up the CSS so that the max-height of the content class is the value of the custom property.
  5. Pressing our trigger button toggles the aria-hidden property from true to false which in turn toggles the max-height of the content between 0 and the height set in the custom property. A transition on that property provides the visual flair — adjust to taste!

Note: Now, this would be a simple case of toggling a class or attribute if max-height: auto equalled the height of the content. Sadly it doesn't. Go and shout about that to the W3C here.

Let’s have a look how that approach manifests in code. Numbered comments show the equivalent logic steps from above in code.

Here is the JavaScript:

// Get the containing element
const container = document.querySelector(".container");
// Get content
const content = document.querySelector(".content");
// 1. Get height of content you want to show/hide
const heightOfContent = content.getBoundingClientRect().height;
// Get the trigger element
const btn = document.querySelector(".trigger");

// 2. Set a CSS custom property with the height of content
container.style.setProperty("--containerHeight", `${heightOfContent}px`);

// Once height is read and set
setTimeout(e => {
    // 3. Hide the content
    content.setAttribute("aria-hidden", "true");
}, 0);

btn.addEventListener("click", function(e) {
    container.setAttribute("data-drawer-showing", container.getAttribute("data-drawer-showing") === "true" ? "false" : "true");
    // 5. Toggle aria-hidden
    content.setAttribute("aria-hidden", content.getAttribute("aria-hidden") === "true" ? "false" : "true");
});

The CSS:

.content {
  transition: max-height 0.2s;
  overflow: hidden;
}
.content[aria-hidden="true"] {
  max-height: 0;
}
/* 4. Set height to value of custom property */
.content[aria-hidden="false"] {
  max-height: var(--containerHeight, 1000px);
}

Points Of Note

What about multiple drawers?

When you have a number of open-and-hide drawers on a page you’ll need to loop through them all as they will likely be differing sizes.

To handle that, we'll need to do a querySelectorAll to get all the containers and then re-run the setting of the custom property for each piece of content inside a forEach.

That setTimeout

I have a setTimeout with 0 duration before setting the container to be hidden. This is arguably unneeded but I use it as a ‘belt and braces’ approach to ensure the page has rendered first so the heights for the content are available to be read.

Only fire this when the page is ready

If you have other stuff going on, you might choose to wrap your drawer code up in a function that gets initialised on page load. For example, suppose the drawer function was wrapped up in a function called initDrawers we could do this:

window.addEventListener("load", initDrawers);

In fact, we will add that in shortly.

Additional data-* attributes on the container

There is a data attribute on the outer container that also gets toggled. This is added in case there is anything that needs to change with the trigger or container as the drawer opens/closes. For example, perhaps we want to change the color of something or reveal or toggle an icon.

Default value on the custom property

There’s a default value set on the custom property in CSS of 1000px. That’s the bit after the comma inside the value: var(--containerHeight, 1000px). This means if the --containerHeight gets screwed up in some way, you should still have a decent transition. You can obviously set that to whatever is suitable to your use case.

Why Not Just Use A Default Value Of 10000000px?

Given that max-height: auto doesn’t transition, you may be wondering why you don’t just opt for a set height of a value greater than you would ever need. For example, 10000000px?

The problem with that approach is that it will always transition from that height. If your transition duration is set to 1 second, the transition will ‘travel’ 10000000px in a second. If your content is only 50px high, you’ll get quite a quick opening/closing effect!
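You can put rough numbers on that. A minimal sketch, assuming a linear transition timing for simplicity (real easing curves change the shape of the motion but not the point):

```javascript
// How much of the transition's duration is spent visibly moving when the
// content is much shorter than the max-height being tweened (linear timing).
function visibleTransitionTime(durationMs, maxHeightPx, contentHeightPx) {
  return durationMs * (contentHeightPx / maxHeightPx);
}

// A 1s transition from max-height: 10000000px collapses 50px of real
// content in roughly 0.005ms, which is why it looks like it snaps shut.
console.log(visibleTransitionTime(1000, 10000000, 50));

// With a sane fallback like 200px, the same 50px of content gets a
// quarter of the duration of visible movement instead.
console.log(visibleTransitionTime(1000, 200, 50));
```

That is the whole argument for measuring the real height into a custom property rather than reaching for an enormous constant.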

Ternary operator for toggles

We’ve made use of a ternary operator a couple of times to toggle attributes. Some folks hate them but I, and others, love them. They might seem a bit weird and a little ‘code golf’ at first but once you get used to the syntax, I think they are a more straightforward read than a standard if/else.

For the uninitiated, a ternary operator is a condensed form of if/else. They are written so that the thing to check comes first, then the ? separates what to execute if the check is true, and then the : distinguishes what should run if the check is false.

isThisTrue ? doYesCode() : doNoCode();

Our attribute toggles work by checking if an attribute is set to "true" and if so, set it to "false", otherwise, set it to "true".
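For instance, that repeated toggle expression could be pulled out into a tiny helper (a refactor sketch, not part of the article's own code):

```javascript
// Toggle between the strings "true" and "false", as used for the
// aria-hidden and data-drawer-showing attribute values.
const toggleTrueFalse = value => (value === "true" ? "false" : "true");

console.log(toggleTrueFalse("true"));  // "false"
console.log(toggleTrueFalse("false")); // "true"
```

In the drawer code, the click handler line would then read content.setAttribute("aria-hidden", toggleTrueFalse(content.getAttribute("aria-hidden"))), which keeps the ternary in one place.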

What happens on page resize?

If a user resizes the browser window, there’s a high probability the heights of our content will change. Therefore you might want to re-run setting the height for containers in that scenario. Now we are considering such eventualities, it seems like a good time to refactor things a little.

We can make one function to set the heights and another function to deal with the interactions. Then add two listeners on the window; one for when the document loads, as mentioned above, and then another to listen for the resize event.

All Together

With the page load, multiple drawers, and handling resize events, our JavaScript code looks like this:

var containers;
function initDrawers() {
    // Get the containing elements
    containers = document.querySelectorAll(".container");
    setHeights();
    wireUpTriggers();
    window.addEventListener("resize", setHeights);
}

window.addEventListener("load", initDrawers);

function setHeights() {
    containers.forEach(container => {
        // Get content
        let content = container.querySelector(".content");
        // Height of content to show/hide
        let heightOfContent = content.getBoundingClientRect().height;
        // Set a CSS custom property with the height of content
        container.style.setProperty("--containerHeight", `${heightOfContent}px`);
        // Once height is read and set
        setTimeout(e => {
            content.setAttribute("aria-hidden", "true");
        }, 0);
    });
}

function wireUpTriggers() {
    containers.forEach(container => {
        // Get each trigger element
        let btn = container.querySelector(".trigger");
        // Get content
        let content = container.querySelector(".content");
        btn.addEventListener("click", function(e) {
            container.setAttribute("data-drawer-showing", container.getAttribute("data-drawer-showing") === "true" ? "false" : "true");
            content.setAttribute("aria-hidden", content.getAttribute("aria-hidden") === "true" ? "false" : "true");
        });
    });
}

You can also play with it on CodePen over here:

Easy show/hide drawer (Multiples) by Ben Frain on CodePen.



It’s possible to go on for some time further refining and catering for more and more situations but the basic mechanics of creating a reliable opening and closing drawer for your content should now be within your reach. Hopefully, you are also aware of some of the hazards. The details element can’t be animated, max-height: auto doesn’t do what you hoped, you can’t reliably add a massive max-height value and expect all content panels to open as expected.

To re-iterate our approach here: measure the container, store its height as a CSS custom property, hide the content, and then use a simple toggle to switch between a max-height of 0 and the height you stored in the custom property.

It might not be the absolute best performing method but I have found for most situations it is perfectly adequate and benefits from being comparatively straightforward to implement.

Smashing Editorial (dm, yk, il)

Smashing Podcast Episode 3 With Jina Anne: What Are Design Tokens?


Drew McLellan

In this episode of the Smashing Podcast, we’re talking about Design Tokens. What are they, what problem do they solve, and how can they be used within an existing Design System? Drew McLellan talks to someone who is much more than a token expert: Jina Anne.

Show Notes


Drew: She’s a design systems advocate and coach. While at Amazon, she was senior design systems lead and she was lead designer on the Lightning Design System at Salesforce, while at Apple she led the CSS architecture and style guide for the Apple Online Store. She’s worked with GitHub, Engine Yard, and the Memphis Brooks Museum of Art and more. She founded and organizes Clarity, the first design systems conference, and is on the Sass core team where she leads the brand design and website for Sass. When it comes to design systems, you’d be hard pushed to find anyone more qualified, but did you know that she’s never seen a sidewalk? My smashing friends, please welcome Jina Anne. Hello, Jina.

Jina Anne: Hello.

Drew: How are you?

Jina Anne: I’m smashing.

Drew: I wanted to talk to you today about design tokens, which I think is a phrase many of us have probably heard passed about, but we perhaps aren’t sure what it means. But before we get to that, I guess we should talk a little bit about design systems. I mean, design systems are your thing, right?

Jina Anne: Yeah. It rules everything around me. Yeah.

Drew: I think that there’s something that we’re seeing is becoming increasingly common in projects and people are making them public and seems to be a real movement around design systems. But I think there are plenty of organizations that don’t have them in place still. What problem does a formalized design system solve from your point of view?

Jina Anne: It can solve many problems. I think some of the more common problems that people seek to solve is around maintainability and consistency. That usually has to do with design debt or in some cases code debt, some cases both. I also look at it as a… Like, it’s not just about the code or the design, but also the problems around how people work together. So, I look at it as a way to also solve some of the issues around communication and workflow process and so on.

Drew: Are design systems then something exclusively that are useful to really big teams and big organizations?

Jina Anne: I don’t think so. I’ve seen them work really well with smaller teams or sometimes even with a lone designer. They definitely help with larger teams for sure, but they are definitely not exclusive to large teams. In fact, I think if you see yourself perhaps growing at some point to be a large team, then having the system in place already will help you do that more efficiently.

Drew: What did you think are the sort of symptoms that somebody might be looking for if they’re working and they’re still having problems? What do those problems look like that might be solved by putting a design system in place?

Jina Anne: There’s a few, duplication of efforts, duplication of code. You might have a breakdown in communication where things just aren’t being built the way they’re expected to be built. It could come down to things that aren’t documented well, so people don’t really quite know what the best thing is to use or where to look. Yeah, there are all sorts of signs.

Drew: I guess design systems are generally a concept, rather than a specific technical solution. In your work, you must see people using all sorts of different tools to achieve design systems.

Jina Anne: Yeah.

Drew: What are some of the more common ways that people actually go about it?

Jina Anne: I think the most common ways are having a component library done in code and often cases you’ll see it in it like a React library or an Angular library, whatever, platform you’re using. There’s usually also a website associated with it that will display those components. Then you’ll usually see perhaps like a Sketch or a Figma library as well.

Jina Anne: But one of the things that I like to stress to people is that if you look at that website that displays your documentation and your components, that website is not actually your design system. It’s a representation of your design system. So, I think a lot of people spend a lot of time on making this gorgeous, beautiful website and it’s fine. They’re nice to look at and they’re nice to share and they help a lot with communicating what you’re doing and even with recruiting.

Jina Anne: But it’s the system itself that it represents that I want people to spend their love and care into, so thinking through what’s going into that website, like the content and how you’ve organized things, how you’ve named things, the things that you’re systemizing, so, yeah. I think a lot of people think about the artifacts, like the deliverables, but really it’s a lot more than that. It’s a lot of process and workflow as well.

Drew: Is it exclusively web projects that the design system would help with?

Jina Anne: Not at all. It is the most common, I believe, from, at least, what I’ve seen, but design systems definitely can cover many things. In the digital space, you have native platforms, but even outside the digital space, I think a lot of people talk about design systems in a digital product space. But they’ve been around for ages for traditional media and real-world scenarios. If you have seen the NASA graphic standards manual from like the ‘70s, that was a design system. It just was across all the different like rockets and spacesuits and all that, instead of digital products.

Drew: So, I guess, there must be some overlap between things, traditional things like brand guidelines and that sort of documentation that I think probably people are familiar with in all sorts of walks of life. There must be a crossover between that sort of documentation of a system and a more modern concept of a design system.

Jina Anne: Yeah, I believe so. I think a lot of people forget that it’s all about branding. The whole reason any of this even started and why we want to display these things in a uniform or unified way is all about the brand because brand isn’t just logos. It’s how people use and experience your company’s service or product or whatever it is that you offer. So, yeah, absolutely.

Drew: So, I’ve got a design system in place in an organization. We’ve done a whole lot of work. We’ve got a design system. There are creatives within the organization working in, maybe, like you mentioned, Figma or Sketch. We’ve got web designers using that in CSS. Perhaps we’ve got a mobile team doing Android and iOS development, building apps. Loads of people working with a design system, contributing into it and consuming stuff from it. Where do design tokens come in? What problem do they solve?

Jina Anne: Ooh, yes. Let me first take it back to a story. When I first joined Salesforce, I was actually part of a small project team. It was a different product, a productivity tool with tasks and notes and things like that. We were only three designers, and I was the only one that was, I wouldn’t say brave enough, but maybe interested enough to work with the Android designs. The other two designers, I think, just weren’t quite as interested. So, I was basically the main designer on our Android app. Then I also did a lot of design for the iOS app and, of course, the web application as well and the marketing website, so lots of different projects in play.

Jina Anne: With the website, since I like to design and code, it was pretty straightforward. I could go ahead and build the buttons and typography and everything that we needed for the web application or the marketing website, document it in code and deliver that.

Jina Anne: However, with both the Android and iOS apps, I don’t really know how to code for those, and so I wasn’t able to deliver the same thing. So, I was having to do a ton of redline specs, which, if you’re not familiar with redlines, are essentially where you spec out every single spacing, font size, color, anything, to indicate to the engineer how to build it. I would do these for many, many, many screens and, of course, a lot of those screens had variations, because maybe you’re showing what happens when you clicked that button or when a certain state happens. So, I was doing this across many, many screens, then uploading those to Dropbox and documenting it in a wiki. That was the process I was having to do at the time.

Jina Anne: I usually think about things in a CSS way, especially the C in CSS, so I usually think, “Oh, well, font sizes should only need to be declared one time because it’s going to cascade everywhere.” But I found that with certain engineers I’ve worked with in the past, if you don’t spec it, they’re not going to build it, and I guess with native it works a little differently. So I would have to be very explicit and name pretty much everything per screen. I was just like, “Oh, why is it like this?” Then any time we made any changes, I had to go back through and change all those screens again. It was not fun at all.

Jina Anne: Fast forward to when I moved over to the core team at Salesforce. I had been working on the Sass website, and I’d been playing around with using a YAML file to store the data for colors, typography, spacing and so on, and was looping over that data to create the style guide as well as the Sass variables and the classes. The reason I did that was that we had open-sourced the Sass website and I wanted people to be able to contribute to the design as well. But I didn’t want to make it a tedious process where you had to update the style guide along with any colors you’re adding, and doing it this way just kind of automated that process.

Jina Anne: I showed that to the team at Salesforce, and that’s kind of where the concept of design tokens spawned from. So they built a tool called Theo, and there are other tools out now that do the same thing, like Style Dictionary. But the idea is that you have this automated tool that takes the data you give it and generates the code. You might think, “Well, that might be over-engineering variables. Why not just use variables?”

Jina Anne: Well, the idea is, as you alluded to earlier, native platforms just take those attributes in a totally different way, and so we were trying to scale design to Android and iOS and whatever other platforms were in use at Salesforce. We had some people on Java, we had some people on React, some people on Angular, PHP, not just internally at Salesforce, but also externally with all our partners and customers that were building their own applications. So, this was a way to store our visual information as data and then, in an automated way, generate the variables or the XML data or the JSON data, whatever format that particular platform looked for.
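The workflow Jina describes can be sketched in a few lines of JavaScript: store the visual information once as data, then generate each platform’s syntax from it. The token names, data shape, and output formats below are invented for illustration; they are not the actual API of Theo or Style Dictionary.

```javascript
// A single source of truth for visual design decisions, stored as data.
const tokens = {
  "color-background-card": "#ffffff",
  "font-size-base": "16px",
};

// Web target: emit CSS custom properties.
function toCss(tokens) {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --${name}: ${value};`
  );
  return `:root {\n${lines.join("\n")}\n}`;
}

// Android target: emit a (simplified) resource XML file.
function toAndroidXml(tokens) {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  <item name="${name.replace(/-/g, "_")}">${value}</item>`
  );
  return `<resources>\n${lines.join("\n")}\n</resources>`;
}
```

Change a value once in `tokens` and every generated file picks it up on the next build, which is exactly the benefit of a single source of truth.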

Jina Anne: Then what was great about it was, let’s say we found a color that doesn’t pass contrast ratios. I didn’t have to then notify the Android team and the iOS team and the web team. I just made that change, and they would get it automatically the next time they pulled in the latest. So, it really helped streamline a lot of that, and it helped take some of the burden of updating visual designs off the engineers and let us do that ourselves.

Drew: So, instead of being sort of variables within one particular code base, within your own React codebase or within your PHP or within your Java or wherever, they’re like variables across an entire organization? Is that fair to say?

Jina Anne: Correct. Correct. Then what’s cool is things like colors, for example, like transparent colors, you do that differently in Android, like eight-digit hex, instead of RGBA like you would with web. So that tool that you use, if you’re using one that is built to think through all this, does that transformation for you. So, rather than saying RGBA 50 comma, 40 comma, whatever the color, you can just say color background card or something like that. It’s really more of a named entity now and then you can all be speaking the same language, even though it might render a different syntax.
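That transformation can be sketched as a pair of hypothetical helper functions; the point is that one named color renders as web RGBA in one place and as Android’s eight-digit AARRGGBB hex in another:

```javascript
// Convert a color to Android's eight-digit hex: alpha byte first, then RGB.
function toAndroidHex(r, g, b, a) {
  const hex = (n) => n.toString(16).padStart(2, "0");
  return `#${hex(Math.round(a * 255))}${hex(r)}${hex(g)}${hex(b)}`;
}

// The same color in the web's rgba() syntax.
function toWebRgba(r, g, b, a) {
  return `rgba(${r}, ${g}, ${b}, ${a})`;
}

// Half-transparent red, rendered for each platform:
toAndroidHex(255, 0, 0, 0.5); // "#80ff0000"
toWebRgba(255, 0, 0, 0.5);    // "rgba(255, 0, 0, 0.5)"
```

A token tool performs this kind of conversion for you, so everyone refers to a shared name like color-background-card while each platform receives its own syntax.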

Drew: Right. So, although variables are kind of the nuts and bolts of how it might be implemented, the idea is much bigger than just what you’d think of as variables. I mean, I guess in a way RSS could be called just variables. But, actually, the way it enables us to distribute blog content and podcasts and everything has a much wider impact than just the core technology that’s there.

Jina Anne: Yeah, I think that’s actually a really good metaphor. I do see a lot of people when they use it or talk about it in their own design system website, they’re usually only talking about like Sass variables or CSS variables. I think that’s why there’s this confusion, like, “Well, isn’t that just variables?” It’s, like, “Why are we renaming it?” But it is that much broader application of it with a whole process around it. It even gets into like how you distribute those variables across components, like on a global level or on an individual component level. You can have multi-layers and so on. It can get pretty interesting.

Drew: So, I suppose as well as helping with maintenance, you mentioned being able to change a color in one central location and then have everything that is using those design tokens pick it up on the next build or next refresh from the system. Presumably this has the potential to enable all sorts of other interesting things. I know a lot of people make sort of white-labeled products. It’s the same core product, but it’s customized with different design tweaks for different customers. So, using design tokens could actually be a solution for those sorts of applications as well, the need to span more than just one particular codebase.

Jina Anne: Right. Yeah. So, that was definitely a use case at Salesforce. We have a lot of, I don’t know why I’m still using present tense, but we had a lot of customers that wanted to be able to brand their UI that they were using. So, we had this concept of certain variables that we wanted to actually be seen more as like a constant, like maybe it’s an error color versus colors that were meant to be configured, like brandable colors. So, for some people’s needs that can get interesting, too, white labeling or offering any sort of theming, dark mode or night mode, even offering a feature, which you may have seen in Gmail, but it’s like that comfortable, cozy, compact spacing density. So, there are all sorts of extra stuff that you can get with it across multiple products very quickly, which is really nice.

Drew: It is really an extension of core principles of programming where you make sure that you’ve really defined things once in one place, so you don’t have multiple instances so it’s easy to update. But it is looking at that as a much, much bigger idea than just one small element of a product, looking at it across everything and centralizing that.

Jina Anne: Yeah, so we definitely looked at these as our source of truth. However, in case anybody is worried, like, “Well, Android does things differently than iOS,” or you might have some concerns there: depending on how you’ve architected things, you can still solve for those use cases. So, we would have a global token set that all our products would basically import in, but then we made them in a way where you could either alter it for that particular context or extend it, like offer maybe additional tokens that only that particular context needs. So, you can still give the fine-tuned experience you need to give to each of those contexts, while bringing in the most common shared things.
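As data, that layering might look something like this; the token names and values are invented for illustration:

```javascript
// Global tokens: the shared source of truth every product imports.
const globalTokens = {
  "color-brand": "#0070d2",
  "font-size-base": "16px",
};

// One context (say, Android) imports the globals, alters one value
// for its own needs, and extends the set with a token only it uses.
const androidTokens = {
  ...globalTokens,
  "font-size-base": "15px", // altered for this context
  "elevation-card": "2dp",  // extension only this context needs
};
```

The shared tokens flow through unchanged, while each context fine-tunes only what it must.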

Drew: On a technical level, how would this actually work? Is there like a common file format the different systems share? Is there like an established standard for how you declare your design tokens?

Jina Anne: It’s interesting that you asked that. There’s actually a community group formed through… W3C has all these community groups. It’s not really exactly a working group, but it’s still an initiative across various people in this space trying to come up with a recommendation of what those standards could be. Even how people store their data can change. It could be YAML, it could be JSON, it could even be a spreadsheet. Then what you export would be different because you might be using Sass, you might be using LESS, you might be using some sort of XML-based system. We actually don’t want to tell you which of those things to use because, depending on your use case, you might need to use spreadsheets instead of JSON or YAML, or you might need to use XML instead of Sass or LESS or even CSS variables. That’s because everybody’s products are so different and have different needs.

Jina Anne: But what we can standardize on is the tooling to generate these things. The reason we want to try to come to some sort of standard is because so many design tools are starting to implement this: InVision, Adobe, Figma. All these tools are looking at design tokens because there is a need to not just make this a code-based thing, but make this a design-tool-driven thing as well. We don’t want to do it in a way where those tools don’t feel like they can innovate. We want them to be able to innovate, but at least offer some sort of standard so that new tool-makers can get into this space and already have sort of an established understanding of how to set that up. So, while we’re not going to get strict on what file format you’re using or what tool you’re using, we’re going to try to standardize more on the internal process and basically the API of it.

Drew: Because, like you said, once that API has been defined, tooling can spring up around it that speaks that API, for whatever tools people want to use. So, somebody could write a Java library that speaks that API, and then anything that’s using Java could make use of it, and so on. Are there any tools currently that support design tokens in any way?

Jina Anne: Yeah. On the code side, I mentioned already Theo and Style Dictionary. There’s also one called Diez, D-I-E-Z. That’s kind of newer to the space and it’s taking it beyond, just like doing the transformation process, but kind of treating design tokens as a component in a way and so that’s cool.

Jina Anne: Then on the design side, InVision already has it in their DSM tool, which is their Design System Manager tool. The last I looked at it, it was just colors and typography, but I do know when I… I talked to Evan, who is one of the main folks behind that product. He did tell me other things like spacing should be coming into play, if they’re not already. I haven’t looked at it super recently. I know there are newer tools that are really catching my eye, like Modulz and Interplay. Both of those are code-driven design tools.

Jina Anne: Then I’ve been told that it’s supposed to come into some of the stuff that Figma and Adobe are doing, so I’m not sure if I’m revealing secrets. I don’t think I am. I think it’s all stuff they’ve talked about publicly. But, yeah, I’m really excited because I think, while it was something we were doing really just to make our design system work easier, it’s kind of almost accidentally created a path for bringing design tools and code closer together. That’s really exciting to me.

Drew: The makers of these various tools, are they working with the design tokens community group?

Jina Anne: Yeah, a lot of them have joined. Since I’m a chair member, I get to see by email everybody who joins. It sends me a notice. What’s cool is not only seeing all these design tool people joining, but also seeing big companies. I saw Google and Salesforce and all that, so it’s really exciting. Because I think it shows that this really matters to what a lot of people are doing, on a large scale and a small scale, and that’s pretty cool.

Drew: So, if I was sort of listening to this and thinking about my own projects, thinking, “Ah, yes, design tokens are absolutely the answer to all these problems that I’m having,” where would I go to find out more to start learning and start maybe using design tokens?

Jina Anne: It’s a really good question. There are a few articles, and I can send you some links to include with this, but I think one of the first articles, which I wish I had written, was by Nathan Curtis, and he actually kind of helped bring attention to them. I think he inspired a lot of people to start using them, so he kind of discusses what they are and how to use them, his recommended way.

Jina Anne: I don’t like the title of this next article I’m going to mention, but it’s called Design Tokens for Dummies. I’m not a fan of using that terminology, but it is a pretty well thought-through article that goes through pretty much everything about them. There was a CSS-Tricks article by Robin Rendle recently that explains really well what they are. I did an All You Can Learn Library session for Jared Spool a while back, but it is a membership-based thing, so you would have to have access to that to see it. I know there have been a lot of presentations and stuff, but there’s not an official book on it yet. But that’s perhaps something I’m working on. It’s one of two books I’m working on, actually.

Drew: So, if I’m a toolmaker or I work for maybe a big organization that’s having these sorts of problems and they’ve got some ideas about maybe contributing to the process of designing how the standard works, is the design tokens community group something that I could get involved in?

Jina Anne: Absolutely. I think you’ll want a GitHub account because that’s where all of the public discussions and notes and things are happening. Then on the W3C community group website, you can create an account there. Having that account enables you to join other community groups as well. But then, yeah, at that point, once you’ve created your account there and… I think it asks if you have any affiliations, like if you work for a big company or anything like that, just so it’s transparent if you have any, I wouldn’t say necessarily bias, but a certain interest. It just helps everybody understand where you’re coming from. Anyway, at that point, yeah, you join and you’re pretty much in.

Drew: It’s quite an open process then.

Jina Anne: Yeah.

Drew: What’s in the future for design tokens? What’s coming down the line?

Jina Anne: I’m really excited about what’s going on with the community group. Kaelig’s been doing most of the leading of it. He’s the co-chair with me and I really love seeing his passion behind this. My particular interests in this are really around the education of it. So, kind of similarly to the work I’ve been doing with the Sass community, I kind of want to do a little bit of that for the design token community, like talk through how to educate people on what this is and not just make it an API doc, but also like where to get started, how to get into this. That’s something I’m interested in project-wise.

Jina Anne: I’m also really keen to see where this evolves, especially with all these design tool companies getting involved. Then a lot of people mostly think about design tokens as a visual abstraction, but really what it came from was the same technology that you used for localizing content. You wrap things in strings and then you can pass through different stuff, so bringing it back to its roots. I’d love to see the application of this apply in different ways, like interactions and content. I’m not really super keen on AR/VR-type stuff, but how does it maybe manifest there? Yeah, really just seeing it kind of go beyond just like the visual layer of what we see.

Drew: I guess that’s the beauty of having an open process like the W3C community group, is that people who do have specialisms in things like AR and VR can contribute to the conversation and bring their expertise to it as well.

Jina Anne: Absolutely.

Drew: I’ve been learning a lot about design tokens today. What have you been learning about lately?

Jina Anne: I’m always trying to learn something, but I’ve actually been occasionally taking some cocktail classes. Yeah. I’m not really with the interest of becoming a bartender, but more of just having an appreciation for cocktails. What’s cool about these classes is they’re beyond just making cocktails. They actually talk about business practices and ethical practices, the hygiene of your bar, all sorts of stuff like that, so it’s been really fascinating because I think I have like this weird fantasy of one-day leaving tech and maybe going into that. Let’s see.

Drew: Do you have a favorite cocktail?

Jina Anne: Manhattan.

Drew: It’s good. It’s good.

Jina Anne: Yeah.

Drew: You can’t go wrong with a Manhattan.

Jina Anne: I have been ordering a lot of Old Fashioneds lately so that would probably be number two.

Drew: Do you have a favorite bourbon?

Jina Anne: Ooh. The first one that came to mind is Angel’s Envy. It’s finished in port barrels, so it has kind of this slightly port-like essence to it. Their rye is really good, too. It’s finished in rum barrels, so it almost has a banana bread-like flavor to it.

Drew: This is a direction I wasn’t expecting to go in today.

Jina Anne: Yeah.

Drew: Was there anything else you’d like to talk about design tokens?

Jina Anne: My take is, just like with design systems, people are going to use them in different ways and also there might be people out there that don’t even need to use this. If you just have like an editorial website that is pretty straightforward, maybe all you really need are CSS variables and that’s it. There’s no need to over-engineer things.

Jina Anne: This is really more for people that really need to scale or if you have a theming context then maybe. But, yeah, it’s really not meant for everyone. So, just because it’s becoming kind of a hot thing to talk about, you might not need to even bother with it.

Drew: If you, dear listener, would like to hear more from Jina, you can follow her on Twitter where she’s @Jina, or find her and all her projects on the web at Thanks for joining us today, Jina. Do you have any parting words?

Jina Anne: Design systems are for people.

Smashing Editorial (dm, ra, il)

Teaching CSS

I've been using CSS as a web developer since CSS became something we could actually use. My first websites were built using <font> tags and <table>s for layout. I remember arguments about whether this whole CSS thing was a good idea at all. I was quickly convinced, mostly due to the ability to easily change the font on an entire site in one place. Managing common styles was so useful at a time when most websites were just a stack of HTML pages with no management of content or any form of templating. I was an early adopter of using CSS rather than tables for layout, despite the backdrop of people asking, "but what about Netscape 4?"

CSS is a remarkable language. Those early sites were developed in a time when the best we standards advocates hoped for was that browsers would support the CSS that existed; that developers would validate their HTML and CSS and use the CSS that existed. Yet a website built back then that is still online, or one accessed via the Wayback Machine, will still work in a modern browser. Such is the care that has been taken not to break the web by the CSS Working Group and the other groups working to add features to the web platform.

I've been teaching CSS for almost as long as I've been using CSS. I'm incapable of having a thought without turning it into words. I write things down to remember them, I write things down to help them make sense to me. This leaves me with a lot of words, and from the earliest days of my career I had an idea that they might be useful to other people and so I started to publish them. Over the years I've learned how to teach people, discovered the things which seem to help the various concepts click for folk with different ways of learning and processing information. Since the early days of CSS layout, we've been teaching it along the following lines.

  • this is a block thing
  • this is an inline thing
  • you can turn the block things into inline things and vice versa using the display property
  • this is the Box Model, it is very important and also kind of weird.

Typically we would teach CSS by jumping right in, styling up a layout and explaining the strange collection of hacks that allowed for a layout as we went along. Unlike other languages, where we might start with the core fundamentals of programming, in CSS we had very few concepts to teach outside of building things and explaining the weirdness in the context of actual layouts. The Box Model was important because it was all we really had in terms of layout. It was core to our method of giving things a size and pushing them around in a way that would allow them to line up with other carefully sized things to make something that looked like a grid. If you didn't understand the standard Box Model, and that the width you set wasn't actually the width the thing took up, your carefully calculated percentages would add up to more than 100%, and bad things would happen.

Over the last few years, we've been handed all of these new tools: Flexbox and Grid give us a layout system designed for CSS. Perhaps less obviously, however, a set of concepts is emerging that gives us a real way to explain CSS layout for the first time. There has been something of a refactoring of the language, turning it from a collection of hacks into something that can be taught as a cohesive system. We can start with normal flow and move on to what it means to change the value of display, because it is here that all of our new layout capabilities live. We can share how display controls two things: the outer value of block or inline, and the inner formatting context, which might be grid, or flex, or normal flow.
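That two-part model corresponds to the two-value display syntax specified in CSS Display Level 3 (browser support was still limited at the time of writing):

```css
.card    { display: block flow; }  /* outer: block-level; inner: normal flow (a div's default) */
.gallery { display: block grid; }  /* a block-level box whose children form a grid */
.tag     { display: inline flex; } /* an inline-level box whose children are flex items */
```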

Explaining Writing Modes early on is vital. Not because our beginner is going to need to format a vertical script, or even use vertical writing creatively, immediately. It matters because writing modes explain why we talk about start and end, and the block and inline dimensions, rather than the physical top, right, bottom and left of their screen. Understanding these things makes alignment in grid and flexbox, and the line-based positioning in grid, much easier to understand. The Box Model can then drop back to a brief explanation of the fact that width and height (or inline-size and block-size) relate to the content-box, and that we can change them to relate to the border-box with the box-sizing property. In a world where we aren't giving things a size and pushing them around, the Box Model becomes just part of our discussion on box sizing, which includes the intrinsic sizing that is far more useful when working with flexbox and grid.
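As a brief sketch, the flow-relative properties and the box-sizing change described above look like this:

```css
.box {
  box-sizing: border-box;  /* size now refers to the border-box, not the content-box */
  inline-size: 20rem;      /* the inline dimension: width in horizontal writing modes */
  block-size: auto;        /* the block dimension: height in horizontal writing modes */
  margin-block-start: 1em; /* flow-relative: top in horizontal-tb, right in vertical-rl */
}
```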

Finally, we need to focus on the idea of Conditional CSS. Media Queries and Feature Queries mean we can test the environment of our user using metrics such as their viewport size, whether they are using a pointing device or a touchscreen, and the capabilities of their browser. We can never be sure how our websites are going to be encountered, but we increasingly have the ability in CSS to optimize for the environment once we are there. One of the greatest skills we can give to the person beginning their journey as a web developer is an understanding of this truth. The person visiting your site might have a touchscreen, they might be using a screen reader, they may be on a small-screen device, and they might be on IE11. In all of these cases, there are things you want to do that will not work in their situation; your job is to deal with it, and CSS has given you the tools to do so.
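A minimal sketch of that approach: serve a baseline that works everywhere, then use Feature Queries and Media Queries to enhance where the environment allows:

```css
/* Baseline: normal flow, which every browser (including IE11) understands */
.gallery > * { margin-bottom: 1rem; }

/* Enhance only where the browser understands grid */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
    gap: 1rem;
  }
  .gallery > * { margin-bottom: 0; }
}

/* Adapt to the environment: a wider viewport with a hover-capable pointer */
@media (min-width: 60em) and (hover: hover) {
  .gallery a:hover { text-decoration: underline; }
}
```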

As I started my CSS layout journey with a backdrop of people complaining about Netscape 4, I now continue against a backdrop of people whining about IE11. As our industry grows up, I would love to see us leaving these complaints behind. I think that this starts with us teaching CSS as a robust language, one which has been designed to allow us to present information to multiple environments, to many different people, via a sea of ever-changing devices.

The post Teaching CSS appeared first on CSS-Tricks.

How A VPN Can Help Digital Marketers

November 18 2019


Use WP Scheduled Posts to Manage Your Publishing Calendar

Managing a dynamic content strategy requires big-picture thinking. It means staying on top of your content strategy and your content team. It means planning and scheduling your posts in advance: what day and time they will go live. And, once they go live, publicizing them on social media. 

It means taking stock of your content to see what type of content you have a lot of and what kind of content you need more of. It means posting new content and updating old content. 

Fortunately, you can ditch the spreadsheets, turn off those calendar reminders, and manage your scheduled and draft posts in one place with a convenient month-to-month look at your content. 

Forgot to add a photo to a post? Easy. Need to reschedule a blog for another day? Done!

With WP Scheduled Posts, you can schedule your WordPress posts in advance and manage all authors in one place.

WP Scheduled Posts


What Is WP Scheduled Posts?

The WP Scheduled Posts plugin is an editorial calendar tool that helps to streamline your content strategy and optimize productivity for your WordPress website. It is available for purchase from AppSumo.

As your one-stop shop for planning, creating, and scheduling blog posts on your WordPress site, WP Scheduled Posts makes it easy to stay on top of your content strategy and your content team.  

You get a full view of your content in the Editorial Calendar, where you can see every scheduled post. You don't have to jump from spreadsheet to spreadsheet. 

Manage Everything—and Everyone—in One Place

With WP Scheduled Posts, keeping track of what and when you're posting is a breeze. You can do it all from the editorial calendar. 

It lets you manage multiple authors all in one place. You can easily manage your scheduled and draft posts right from the stunning calendar. You can create, edit and move posts easily with the intuitive drag-and-drop feature. 


Plan Content Ahead of Time

Plan your content and schedule hundreds of posts ahead of time by specifying the time and date you want the post to go live. Then just set it and forget it—WP Scheduled Posts automatically publishes it for you! You can even go on vacation and enjoy some peace of mind knowing that posts will still go live when scheduled. 

Also, WP Scheduled Posts easily integrates with Twitter and Facebook. Sharing on these platforms increases the visibility of your content. So, once the post is live, WP Scheduled Posts will automatically share it on social media. No need for third-party websites or another tool to do the auto-sharing on Twitter or Facebook. 

No More Missed Posts! 

In a perfect world, scheduling a post on WordPress means that your content goes live at a designated time, every time. In WordPress world, sometimes you’re hit with the Missed Schedule Error, meaning a scheduled post doesn't get published. 

But now, you don’t have to worry anymore because WP Scheduled Posts comes equipped with a Missed Schedule Handler.

How does it work? It checks every 15 minutes whether any scheduled content has been missed, and it automatically posts anything that missed the given schedule.

This means you get to focus on the important things, like actually running your business—or going on an off-the-grid getaway.
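The handler’s logic, as described, amounts to a periodic sweep. This is an illustrative JavaScript sketch of the idea, not the plugin’s actual implementation (which runs inside WordPress in PHP):

```javascript
// Publish anything that was scheduled for a time that has now passed.
// In the real plugin, this kind of check runs roughly every 15 minutes.
function publishMissedPosts(posts, now) {
  return posts.map((post) =>
    post.status === "scheduled" && post.publishAt <= now
      ? { ...post, status: "published" }
      : post
  );
}
```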

Automate Your WordPress Content

Your days of slaving over spreadsheets and setting endless iPhone reminders are over. Let WP Scheduled Posts work for you by choosing and setting the days and times when your posts will go live. It even shares your posts on social media for you. 

How about taking that vacation you’ve been dreaming about and leaving the laptop at home? 

How to Use WP Scheduled Posts

WP Scheduled Posts is a plugin calendar that lets you streamline your content strategy and reach maximum productivity on your WordPress site. 

Once you've installed the plugin, go to the WP Scheduled Posts menu item in the WordPress admin sidebar and select either the Auto Scheduler or the Manual Scheduler option. 

Auto Scheduler

If you want a more hands-off, automated option, go for the Auto Scheduler. There, all you have to do is: 

  1. select your time zone 
  2. determine how many articles you want to go out each day
  3. select your posting time parameters

When you're setting up your next post, WP Scheduled Posts will automatically show you the next available time slots.

Auto Scheduler vs Manual Scheduler

Manual Scheduler

If you're looking for more control, select the Manual Scheduler. With this option, you will select what time you want your posts to be published each day. 

You can also toggle the Activate Missed Schedule option so you can trust a WordPress post will never go unpublished. Make sure to click on the Save Changes button after you activate the Missed Schedule Handler. 

Manual Scheduler options

The Schedule Calendar

To get a big-picture view of all the posts you're publishing, navigate to WP Scheduled Posts > Calendar in the WordPress admin menu. Here, you can easily see and manage your content.

Schedule Calendar view

In this calendar, you can use drag-and-drop features to:

1. Easily Reschedule Your Posts 

You only need to drag your post to your preferred date and, just like that, the post will be scheduled. This makes it easy to plan or reshuffle your publication schedule. 

2. Schedule Draft Posts

You can easily schedule a new post by dragging it into the calendar. Unscheduled drafts are shown in the sidebar to the left.

3. Edit Post Title 

You can also edit a post's title, content, and time directly in this view. To edit, just hover over a post card in the calendar view, and a popup will allow you to make quick changes. 

Editing a post in the calendar view

Twitter and Facebook Integration

WP Scheduled Posts provides Twitter and Facebook integration, so all of your published posts will be sent directly to your feed. You don't have to go elsewhere to distribute your content. 

Make sure you have created a Facebook page on your account. 

Go to your WordPress Dashboard and navigate to WP Scheduled Posts from the admin menu.  Open up the Integrations tab and you will find the Twitter Form and Facebook Form sections.

Facebook integrations in WP Scheduled Posts

Each section contains a field for you to add an access token that will allow WP Scheduled Posts to post content to your social media accounts. Simply click the link to generate Twitter API Keys or click the Generate Access Key button for Facebook. Then follow the instructions, and copy the resulting key or token information back into the respective fields in WP Scheduled Posts. 

Once you've completed this setup, WP Scheduled Posts will be able to add posts to your social accounts.


The WP Scheduled Posts plugin gives you control over how you manage and schedule posts. It gives you an overall picture of your scheduled content. You can plan, create, and schedule content on your WordPress site from the Editorial Calendar. You can choose between an automated or manual scheduler. And it comes with a missed schedule handler, so you can rest assured your posts will not go unpublished.

Try out WP Scheduled Posts today!


The Communal Cycle of Sharing

What I'm interested in this year is how we're continuing to expand on tools, services, and shared side projects to collectively guide where we take the web next, and the way we're sharing that.

So many other mediums—mostly analog ones—have been around for ages and have a deeper history. In the grand scheme of things, the web, and thus the job of building for it, are still pretty new. We talk about open source and licenses, the ebb and flow of web-related education (public and for-profit), the never-ending conversation about what job titles web builders should have, tooling, and so much more. The communal experience of this field is what makes and keeps all of this very interesting.

The sharing aspect is just as important as the building itself, if not more so.

I thoroughly enjoy seeing browsers share more of what their new builds include. I'm grateful that we have multiple browsers to work with and not one monolithic giant. I'm obsessed that websites like CodePen and Glitch exist, that sharing is the main goal of those services, and that people's lives have changed because of an experiment they created or came across. I'm touched that people make things for their own needs and feel inclined to share that code or design process with someone else. I'm also glad to see design tools focus on collaboration and version control to improve our process.

Recently, I was thinking about how delightful it was to set up Netlify to host my site and also use it for client work at thoughtbot. I used to try to understand how to set up staging previews based on pull requests or scratch my head as I tried to understand why the "s" in "https" was so important. But now Netlify helps with those things so much that it's almost like that side of their service was built for people like me.

But, it gets better. In a community Slack, a fellow web builder says "Hey, Netlify's a great tool and my static site generator now works on it."

So then here I am at midnight and wide awake, starting a new demo repository using 11ty.

Fast forward, and another fellow builder shares their project Hylia, which makes starting an 11ty site on Netlify delightfully easy.

And all of this is freely available to use.

Putting this all together, I realize we're moving to a place where we're not just sharing what we have; we're working to build and improve on what others have built, then sharing that, and the cycle continues. In a way, we've been doing this all along, but it feels more noticeable now. We're not just building websites, but building and iterating on the way we build websites, and that is exciting.

The post The Communal Cycle of Sharing appeared first on CSS-Tricks.


The Best Cocktail in Town

I admit I've held in a lot of pent-up frustration about the direction web development has taken these past few years. There's the complexity. There's the steep learning curve. And there's the focus on configuration over actual development.

That's not exactly great news for folks like me who consider themselves to be more on the design side of the front-end spectrum. I remember grimacing the first time I found myself using a Grunt workflow on a project. Now, how I long for the "simplicity" of those days.

That's not to say I haven't enjoyed experimenting with new development workflows and frameworks. I actually find Vue to be pretty pleasant. But I think that might have to do with the fact that it's organized in an HTML-CSS-JS structure that feels familiar and that it works with straight-up HTML.

I'm finding myself rekindling my love for a development workflow that's as close to a vanilla combination of HTML, CSS, and JavaScript as I can get. Everything generally compiles back to these languages anyway. CSS has gotten more complex, yes, but it has also gotten more powerful and empowering (hello, CSS grid, custom properties, and calc!) to the point that using a preprocessor requires an intentional choice for me. And JavaScript? Yeah, it done got big, but it's getting nicer to write all the time.

HTML, CSS, and JavaScript: it's still the best cocktail in town.

If there's one new thing in the dev landscape that's caught my attention more than anything in the past year, it's the evolution of JAMstack. Hot dang if it isn't easier to deploy sites and changes to them while getting continuous delivery and a whole lot of performance value to boot. Plus, it abstracts server work to the extent that I no longer feel beholden to help from a back-end developer to set me up with different server environments, fancy testing tools, and deployment integrations. It's all baked into an online dashboard that I can configure in a matter of minutes. All hail the powerful front-end developer!

I've been building websites for nearly 20 years and I feel like the last five have seen the most changes in the way we develop for the web. Progressive web apps? Bundlers and tree-shaking? Thinking in components? Serverless? Yes, it's a crazy time for an old dog like me to learn new tricks, but it brings a level of excitement I haven't experienced since learning code the View Source way.

That's why I still find myself loving and using a classic workflow as much as I can in 2019, while still appreciating the new treats we've gotten in recent years and how they open my mind to new possibilities that challenge the status quo.


The post The Best Cocktail in Town appeared first on CSS-Tricks.


The Kind of Development I Like

I'm turning 40 next year (yikes!) and even though I've been making websites for over 25 years, I feel like I'm finally beginning to understand the kind of development I like. Expectedly, these are not new revelations and my views can be summed up by two older Computer Science adages that pre-date my career.

  1. Composition over inheritance
  2. Convention over configuration

Allow me to take you on a short journey. In modern component-driven web development, I often end up with or see structures like this:

    <ComponentA>
      <ComponentB>
        <ComponentC />
      </ComponentB>
    </ComponentA>

Going down this route produces a system where everything is nested child components, and props or data are passed down from parent components. It works, but for me, it zaps the fun out of programming. It feels more like plumbing than programming.

Seeing Mozilla's new ECSY framework targeted at 2D games and 3D virtual reality scenes, I immediately found myself gravitating towards its programming model where Components chain their behaviors onto objects called Entities.
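The flavor of that model looks something like this (a toy sketch with a made-up Entity class, not ECSY's actual API):

```javascript
// Toy sketch of ECSY-style composition (made-up API, not ECSY's actual one):
// an entity gains behavior by chaining addComponent() calls onto itself.
class Entity {
  constructor() {
    this.components = {};
  }
  addComponent(name, data = {}) {
    this.components[name] = data;
    return this; // returning `this` is what makes the chaining work
  }
}

const player = new Entity()
  .addComponent("Position", { x: 0, y: 0 })
  .addComponent("Velocity", { dx: 1, dy: 0 });
```

Each call composes one more behavior onto the same object instead of inheriting it from a parent.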


Hey! That looks like a chained jQuery method. I like this and not just for nostalgia's sake. It's the "composition" of functionality that I like. I know CSS is fraught with inheritance problems, but it reminds me of adding well-formed CSS classes. I gravitate towards that. Knowing I personally favor composition actually helped me resolve some weird inconsistent feelings on why I genuinely like React Hooks (composition) even though I'm not particularly fond of the greater React ecosystem (inheritance).

I think I must confess and apologize for a lot of misplaced anger at React. As a component system, it's great. I used it on a few projects but never really bonded with it. I think I felt shame that I didn't enjoy this very popular abstraction and felt out of sync with popular opinion. Now I think I understand more about why.

I should apologize to webpack too. As a bundling and tree shaking tool, it does a great job. It's even better when all the configuration is hidden inside tools like Angular CLI and Nuxt. My frustrations were real, but as I learn more about myself, I realized it might be something else...

My frustrations with modern web development have continued to tumble downwards in levels of abstraction. I now think about npm and wonder if it's somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we've co-opted on the client and I think we're feeling those repercussions in the browser.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs. So we build processes and tools to bundle these 46,000 scripts together. But that obfuscates the end product. It's not uncommon that a site could be using fetch, axios, and bluebird all at the same time and all of lodash just to write a forEach loop.

In an "npm install your problems away" world, I feel like we do less programming and more configuring things we installed from the Internet. As dependencies grow in features and become more flexible, they allow you to configure some of the option flags. As a one-off, configs are a great feature. But cumulatively, even on a "simple" project, I can find myself managing and battling over a half dozen config files. One day while swimming in a sea of JSON configs it dawned on me: I don't like configuration.

"Convention over configuration" was a set of ideals popularized by David Heinemeier Hansson (@DHH) and it guided a lot of the design of Ruby on Rails. While the saying has waned in popularity some, I think it sums up the kind of development I like, especially when frameworks are involved. Frameworks should try to be a collection of best practices, to save others from having to overthink decisions. I've said it before, but I think Nuxt does this really well. When I step into a system of predefined conventions and minor configuration, I'm much happier than the opposite system of no conventions and lots of configuration.

It's a little weird to be turning 40 and discovering so much about the job I do on a daily basis. But it's nice to have found some vocabulary and principles for what I like about development. Your list of things you like may be different than mine and that's a good thing. I'd love to know more about the kind of development you like. What do you like to build? What are you optimizing for? What is your definition of success?

The post The Kind of Development I Like appeared first on CSS-Tricks.


We asked web developers we admire: “What about building websites has you interested this year?”

For the first time ever here on CSS-Tricks, we're going to do an end-of-year series of posts. Like an Advent calendar riff, only look at us, we're beating the Advent calendar rush! We'll be publishing several articles a day from a variety of web developers we look up to, where they were all given the same prompt:

What about building websites has you interested this year?

We're aiming for a bit of self-reflection and real honesty. As in, not what you think you should care about or hot takes on current trends, but something that has quite literally got you thinking. Our hope is that all put together, the series paints an interesting picture of where we are and where we're going in the web development industry.

We didn't directly ask people for their future predictions. Instead, we will perhaps get a glimpse of the future through seeing what is commanding the attention of developers today. I wanted to mention that because this series takes some inspiration from the one Nieman Lab runs each year (e.g. 2019, 2018, 2017...) which directly asks for people's predictions about journalism. Maybe we'll try that one year!

Automattic has been a wonderful partner to us for a while now, and so I'm using this series as another way to thank them for that. Automattic is the maker of and a big contributor to WordPress itself, which is what this site runs on. They also make premium plugins like WooCommerce and Jetpack, which we also use.

Stay tuned for all the wonderful thoughts we'll be publishing this week (hey, I even hear RSS is still cool) or bookmark the homepage for the series.

The post We asked web developers we admire: “What about building websites has you interested this year?” appeared first on CSS-Tricks.


Ways to Organize and Prepare Images for a Blur-Up Effect Using Gatsby

Gatsby does a great job processing and handling images. For example, it helps you save time with image optimization because you don’t have to manually optimize each image on your own.

With plugins and some configuration, you can even set up image preloading and a technique called blur-up for your images using Gatsby. The result is a smoother, faster, and more appealing user experience.

I found the combination of gatsby-source-filesystem, GraphQL, the Sharp plugins, and gatsby-image quite tedious and unintuitive to organize, especially considering it's fairly common functionality. Adding to the friction, gatsby-image works quite differently from a regular <img> tag, and implementing general use cases can end up complex as you configure the whole system.

Medium uses the blur-up technique for images.

If you haven’t done it already, you should go through the gatsby-image docs. It is the React component that Gatsby uses to process and place responsive, lazy-loaded images. Additionally, it holds the image position which prevents page jumps as they load and you can even create blur-up previews for each image.

For responsive images you’d generally use an <img> tag with a bunch of appropriately sized images in a srcset attribute, along with a sizes attribute that informs the layout situation the image will be used in.

<img srcset="img-320w.jpg 320w,
              img-480w.jpg 480w,
              img-800w.jpg 800w"
      sizes="(max-width: 320px) 280px,
             (max-width: 480px) 440px,
             800px"
      src="img-800w.jpg" alt="">

You can read up more on how this works in the Mozilla docs. This is one of the benefits of using gatsby-image in the first place: it does all the resizing and compressing automatically while doing the job of setting up srcset attributes in an <img /> tag.

Directory structure for images

Projects can easily grow in size and complexity. Even a single page site can contain a whole bunch of image assets, ranging from icons to full-on gallery slides. It helps to organize images in some order rather than piling all of them up in a single directory on the server. This helps us set up processing more intuitively and create a separation of concerns.

While attempting to organize files, another thing to consider is that Gatsby uses a custom webpack configuration to process, minify, and export all of the files in a project. The generated output is placed in a /public folder. The overall structure gatsby-starter-default uses looks like this:

|-- /.cache
|-- /plugins
|-- /public
|-- /src
    |-- /pages
    |-- /components
    |-- /images
    |-- html.js
|-- /static (not present by default)
|-- gatsby-config.js
|-- gatsby-node.js
|-- gatsby-ssr.js
|-- gatsby-browser.js

Read more about how the Gatsby project structure works here.

Let’s start with the common image files that we could encounter and would need to organize.

For instance:

  • icons
  • logos
  • favicon
  • decorative images (generally vector or PNG files)
  • image galleries (like team head shots on an About page or something)

How do we group these assets? Considering our goal of efficiency and the Gatsby project structure mentioned above, the best approach would be to split them into two groups: one group that requires no processing and can be directly imported into the project, and another group for images that require processing and optimization.

Your definitions may differ, but that grouping might look something like this:

Static, no processing required:

  • icons and logos that require no processing
  • pre-optimized images
  • favicons
  • other vector files (like decorative artwork)

Processing required:

  • non-vector artwork (e.g. PNG and JPG files)
  • gallery images
  • any other image that can be processed, which are basically common image formats other than vectors

Now that we have things organized in some form of order, we can move onto managing each of these groups.

The "static" group

Gatsby provides a very simple process for dealing with the static group: add all the files to a folder named static at the root of the project. The bundler automatically copies the contents to the public folder where the final build can directly access the files.

Say you have a file named logo.svg that requires no processing. Place it in the static folder and use it in a component file like this:

import React from "react"

// Tell webpack this JS file requires this image
import logo from "../../static/logo.svg"

function Header() {
  // This can be directly used as the image src
  return <img src={logo} alt="Logo" />
}

export default Header

Yes, it’s as simple as that — much like importing a component or variable and then directly using it. Gatsby has detailed documentation on importing assets directly into files you could refer to for further understanding.

Special case: Favicon

The plugin gatsby-plugin-manifest not only adds a manifest.json file to the project but also generates favicons for all required sizes and links them up in the site.

With minimal configuration, we have favicons, no more manually resizing, and no more adding individual links in the HTML head. Place favicon.svg (or .png or whatever format you’re using) in the static folder and tweak the gatsby-config.js file with settings for gatsby-plugin-manifest:

{
  resolve: `gatsby-plugin-manifest`,
  options: {
    name: `Absurd`,
    icon: `static/favicon.svg`,
  },
},

The "processed" group

Ideally, what we’d like is for gatsby-image to work like an img tag where we specify the src and it does all the processing under the hood. Unfortunately, it’s not that straightforward. Gatsby requires you to configure gatsby-source-filesystem for the files, then use GraphQL to query and process them using the Gatsby Sharp plugins (e.g. gatsby-transformer-sharp, gatsby-plugin-sharp) with gatsby-image. The result is a responsive, lazy-loaded image.

Rather than walking you through how to set up image processing in Gatsby (which is already well documented in the Gatsby docs), I’ll show you a couple of approaches to optimize this process for a couple of common use cases. I assume you have a basic knowledge of how image processing in Gatsby works — but if not, I highly recommend you first go through the docs.

Use case: An image gallery

Let’s take the common case of profile images on an About page. The arrangement is basically an array of data with title, description and image as a grid or collection in a particular section.

The data array would be something like:

const TEAM = [
  {
    name: 'Josh Peck',
    image: 'josh.jpg',
    role: 'Founder',
  },
  {
    name: 'Lisa Haydon',
    image: 'lisa.jpg',
    role: 'Art Director',
  },
  {
    name: 'Ashlyn Harris',
    image: 'ashlyn.jpg',
    role: 'Frontend Engineer',
  },
];

Now let’s place all the images (josh.jpg, lisa.jpg and so on) in src/images/team. You can create a folder in images based on what group it is. Since we’re dealing with team members on an About page, we’ve gone with images/team. The next step is to query these images and link them up with the data.

To make these files available in the Gatsby system for processing, we use gatsby-source-filesystem. The configuration in gatsby-config.js for this particular folder would look like:

{
  resolve: `gatsby-source-filesystem`,
  options: {
    name: `team`,
    path: `${__dirname}/src/images/team`,
  },
},

To query for an array of files from this particular folder, we can use sourceInstanceName. It takes the value of the name specified in gatsby-config.js:

{
  allFile(filter: { sourceInstanceName: { eq: "team" } }) {
    edges {
      node {
        relativePath
        childImageSharp {
          fluid(maxWidth: 300, maxHeight: 400) {
            ...GatsbyImageSharpFluid
          }
        }
      }
    }
  }
}
This returns an array:

// Sharp-processed image data is removed for readability
{
  "data": {
    "allFile": {
      "edges": [
        { "node": { "relativePath": "josh.jpg" } },
        { "node": { "relativePath": "ashlyn.jpg" } },
        { "node": { "relativePath": "lisa.jpg" } }
      ]
    }
  }
}

As you can see, we’re using relativePath to associate the images we need to the item in the data array. Some quick JavaScript could help here:

// Img is gatsby-image
// TEAM is the data array
{TEAM.map(({ name, image, role }) => {
  // Finds the associated image from the array of images
  const img = data.allFile.edges.find(
    ({ node }) => node.relativePath === image
  ).node

  return (
    <Img key={name} fluid={img.childImageSharp.fluid} alt={name} />
  )
})}

That’s the closest we’re getting to using src similar to what we do for <img> tags.

Use case: Artwork

Although artwork may be created using the same type of file, the files are usually spread throughout the project in different sections (e.g. pages and components), with each image usually coming in different dimensions.

It’s pretty clear that querying the whole array, as we did previously, won’t work. However, we can still organize all the images in a single folder, which means we can still use sourceInstanceName to specify which folder we are querying the image from.

Similar to our previous use case, let’s create a folder called src/images/art and configure gatsby-source-filesystem. While querying, rather than getting the whole array, here we will query for the particular image we need in the size and specification as per our requirements:

art_team: file(
  sourceInstanceName: { eq: "art" }
  name: { eq: "team_work" }
) {
  childImageSharp {
    fluid(maxWidth: 1600) {
      ...GatsbyImageSharpFluid
    }
  }
}

This can be directly used in the component:

<Img fluid={data.art_team.childImageSharp.fluid} />

Further, this can be repeated for each component or section that requires an image from this group.

Special case: Inlining SVGs

Gatsby automatically encodes smaller images into base64 format and places the data inline, reducing the number of requests to boost performance. That's great in general, but it might actually be a detriment for SVG files. Instead, we can manually wrangle SVGs to get the same performance benefits, or, in cases where we want things to be more interactive, incorporate animations.

I found gatsby-plugin-svgr to be the most convenient solution here. It allows us to import all SVG files as React components:

import { ReactComponent as GithubIcon } from './github.svg';

Since we’re technically processing SVG files instead of raster images, it’d make sense to move the SVG file out of the static folder and place it in the folder of the component that’s using it.


After working with Gatsby on a couple of projects, these are a few of the ways I overcame hurdles when working with images to get that nice blur-up effect. I figured they might come in handy for you, particularly for the common use cases we looked at.

All the conventions used here came from the gatsby-absurd starter project I set up on GitHub.

It’s a good idea to check that out if you’d like to see examples of it used in a project. Take a look at Team.js to see how multiple images are queried from the same group. Other sections — such as About.js and Header.js — illustrate how design graphics (the group of images shared across different sections) are queried. Footer.js and Navbar.js have examples for handling icons.

The post Ways to Organize and Prepare Images for a Blur-Up Effect Using Gatsby appeared first on CSS-Tricks.


The Department of Useless Images

Gerry McGovern:

The Web is smothering in useless images. These clichéd, stock images communicate absolutely nothing of value, interest or use. They are one of the worst forms of digital pollution because they take up space on the page, forcing more useful content out of sight. They also slow down the site’s ability to download quickly.

😂 😭

It's so true, isn't it? How much bandwidth and electricity is spent sending middle-aged-man-staring-into-camera.jpg?

Great photography can be a powerful emotional trigger and be a distinguishing feature of a design, but there is a line between that and some random Unsplash thing. (Says the guy who absolutely loves the Unsplash integration on Notion.)


The post The Department of Useless Images appeared first on CSS-Tricks.
