
How Frontend Developers Can Empower Designer’s Work

Sandrina Pereira

This article is mostly directed at you, dear Frontend Developer, who enjoys implementing user interfaces but struggles to align expectations with the designers you work with. Perhaps you are referred to as the “UI Developer” or “UX Engineer.” Regardless of the title you carry around, your job (as well as mine) consists of more than breathing life into design files. We are also responsible for bridging the gap between the design and development workflows. However, when crossing that bridge, we are faced with multiple challenges.

Today, I’d like to share with you practical tips that have helped me to collaborate more efficiently with designers in the past years.

I believe it’s our job, as UI Developers, to not only help designers in their journey to learn how the web works, but also to get to know their reality and learn their language.

Understanding UX Designers’ Background

Most of the UX designers (also referred to as Web Designers or Product Designers) out there took their first steps in the design world through tools like Photoshop and Illustrator. Perhaps they were Graphic Designers: their main goal was to create logos and brand identities and to design layouts for magazines. They could also have been Marketing Designers: printing billboards, designing banners and creating infographics.

This means that most UX designers spent their early days designing for print, which is a totally different paradigm from their current medium, the screen. That was their first big challenge. When dealing with print, designers cared about pixel alignment, but on a fixed area (paper). They didn’t have to contend with a dynamic layout (screens). Thinking about text breaking or states of interactions was simply not part of their job either. Designers also had complete freedom over colors, images, and typography without performance constraints.

Fortunately, there have been many efforts within the self-taught UX designer community to teach development fundamentals, discuss whether designers should learn to code, and understand how to best hand off designs to developers. The same holds true on the development side (more about that in a minute). However, there is still friction between the two fields.

On the one hand, designers complain each time an implementation doesn’t match their designs, or feel misunderstood when their designs are rejected by developers without a clear explanation. On the other hand, developers might take for granted that designers know something technical when that might not be true. I believe we can all do better than that.

Embracing A New Way Of Thinking

The websites and apps that we are building will be displayed on a wide range of screen sizes. Each person will use them on multiple devices. Our common goal is to create a familiar experience across their journeys.

When we, as developers, think that designers are being picky about pixel alignments, we need to understand why that is. We need to recognize that it’s beyond fidelity and consistency. It’s about the sum of all the parts working together. It’s cohesion. We have to embrace it and do our best to accomplish it. Learning the fundamentals of design principles is a good starting point. Understand the importance of selecting the right colors, how the hierarchy works, and why white space matters.

Note: There’s an online course known as “Design for Developers” and a book called “Refactoring UI” — both are great to get you started. With these, you’ll be able to implement user interfaces with a sharp eye for detail and gain significant leverage when communicating with designers.

Magnifying Your Designers’ Awareness

You might take for granted that designers know the web as much as you do. Well, they might not. Actually, they don’t have to! It’s our responsibility, as developers, to keep them updated with the current state of the web.

I’ve worked with the two sides of this spectrum: designers who think that anything can be built (such as complex filters, new scroll behaviors or custom form inputs) without giving browser compatibility a thought, and designers who overestimate the limitations of the web and assume by themselves that something is impossible to implement. We need to show them the true possibilities of web design and not let perceived limitations hold back their skills.

Teach Them Code, Not How To Code

This seems contradictory, but bear with me: knowing how to code is at the core of a developer’s job. We work in a fast-paced industry with new stuff popping up every day. It would be hypocritical of us to demand that designers learn how to code. However, we can help them to understand code.

Sit next to the designer you work with for 15 minutes and teach them how they can see for themselves whether the specs of an element are correct and how to change them. I find Firefox Page Inspector remarkably user-friendly for this.

Nowadays, it’s a joy to visualize layouts, debug CSS animations and tweak typography. Show them a ready-to-use code playground like Codepen for them to explore. They don’t need to know all CSS specs to understand how the layout paradigm works. However, they need to know how elements are created and behave in order to empower their daily work.

Include Designers In The Development Process

Invite designers to join you in the stand-up meeting, to be part of the QA process and to sit down with you while you refine visual details in your implementations. This will make them understand the constraints of the web and, soon enough, they’ll recognize why a feature takes time to implement.

Ask Designers To Include You In Their Design Process

You’ll realize that they do much more than “make things pretty”. You’ll learn more about the research process and how user testing is done. You’ll discover that for each design proposal they show to you, several other studies were previously dropped. When designers request a change, ask the reason behind it, so you can learn more about the decisions being made. Ultimately, you’ll start to understand why they are picky about white space and alignments, and hopefully, soon you’ll be too!

I find it crucial to treat frontend development as a core part of the design process, and design as a core part of the development process. Advocating a mindset where everyone gets the chance to be involved in the design and development cycle will help us all to better understand each other’s challenges and to create great experiences as well.

We May Use Different Tools, But We Must Speak The Same Language

Now that we are starting to understand each other’s workflow a little better, it’s time for the next step: to speak the same language.

Looking Beyond The Code Editor

Once you start to pay attention to your surroundings, you’ll be better equipped to tackle problems. Get to know the business better and be willing to listen to what designers have to say. That will make you a team member with richer contributions to the project. Collaborating in areas beyond our expertise is the key to creating meaningful conversations and mutual trust.

Using UI Systems As A Contract

Ask designers to share their design system with you (and if they don’t use one, it’s never too late to start). That will save you the bother of handpicking the colors used, figuring out margins or guessing a text style. Make sure you are familiar with the UI System as much as they are.

You might already be familiar with the component-based concept. However, some designers might not perceive it in the same way as you do. Show them how components can help you to build user interfaces more efficiently. Spread that mindset and say bye-bye to similar-but-not-equal names: header vs hero, pricing info vs price selector. When a piece of the user interface (a.k.a. ‘a component’) is built, strive to use the same names so they can be reflected in both design and code files. Then, when someone says, “We need to tweak the proposal invitation widget,” everyone knows exactly what is being talked about.

Acknowledging The Real Source Of Truth

Despite the fact that the user interface is first created in the design files, the real source of truth is on the development side. At the end of the day, it is what people actually see, right? When a design is updated, it’s a good idea to leave a side note about its development status, especially in projects where a large number of people are involved. This trick helps to keep the communication smooth, so nobody (including you) wonders: “This is different from the live version. Is it a bug or hasn’t so-and-so been implemented just yet?”

Keeping The Debt Under Control

So, you know all about technical debt — it happens when the cost to implement something new is high because of a shortcut we took in the past to meet a deadline. The same can happen on the design side too and we call it design debt.

You can think about it like this: The higher the UX & UI inconsistency, the higher the debt (technical and design). One of the most common inconsistencies is in having different elements to represent the same action. This is why bad design is usually reflected in bad code. It’s up to all of us, both as designers and developers, to treat our design debt seriously.

Being Accessible Doesn’t Mean It Has To Be Ugly

I’m really pleased to see that both we, as developers and designers, are finally starting to bring accessibility into our work. However, a lot of us still think that designing accessible products is hard or limits our skills and creativity.

Let me remind you that we are not creating a product just for ourselves. We are creating a product to be used by everyone, including people who use the product in different ways than you do. Take into account how individual elements can be more accessible while keeping the current user flows clear and coherent.

For example, if a designer really believes that creating an enhanced select input is necessary, stand by their side and work together to design and implement it in an accessible way to be used by a wide range of people with diverse abilities.

Giving Valuable Feedback To Designers

It’s unavoidable that questions will pop up when going through the designs. However, that’s not a reason for you to start complaining about the designers’ mistakes or about their ambitious ideas. The designers are not there to drive you mental, they just don’t always intuitively know what you need to do your job. The truth is that, in the past, you didn’t know about this stuff either, so be patient and embrace collaboration as a way of learning.

The Earlier The Feedback, The Better

The timing for feedback is crucial. The feedback workflow depends a lot on the project structure, so there isn’t a one-size-fits-all solution for it. However, when possible, I believe we should aim for a workflow of periodic feedback — starting in the early stages. Having this mindset of open collaboration is the way to reduce the possibility of big, unexpected iterations later down the road. Keep in mind that the later you give the designer your first feedback, the higher the cost for them to explore a new approach if needed.

Explain Abstract Ideas In Simple Terms

Remember when I said that performance was not a concern in designers’ previous jobs? Don’t freak out when they are not aware of how to create optimized SVGs for the web. When faced with a design that requires a lot of different fonts to be loaded, explain to them why we should minimize the number of requests, and perhaps even take advantage of Variable Fonts. Aside from loading faster, it also provides a more consistent user experience. Unless designers are keen to learn, avoid using too many technical terms when explaining something. You can see this as an opportunity to improve your communication skills and clarify your thoughts.

Don’t Let Assumptions Turn Into Decisions

Some questions about a design spec only show up when we get our hands dirty in the code. To speed things up, we might feel tempted to make decisions based on our assumptions. Please, don’t. Each time you turn an assumption into a decision, you are risking the trust that the design team puts in you to implement the UI. Whenever in doubt, reach out and ask. This will show them that you care about the final result as much as they do.

Don’t Do Workarounds By Yourself

When you are asked to implement a design that is too hard, don’t say “It’s impossible” or start to implement a cheap alternative version of it by yourself. This immediately causes friction with the design team when they see their designs were not implemented as expected. They might react angrily, or not say anything but feel defeated or frustrated. That can all be avoided if we explain, in simple terms, why something isn’t possible and suggest alternative approaches. Avoid dogmatic behavior and be open to collaborating on a new solution.

Let them know about the Graceful Degradation and Progressive Enhancement techniques and build together an ideal solution and a fallback solution. For example, we can enhance a layout from flexbox to CSS Grid. This allows us to respect the core purpose of a feature and at the same time make the best use of web technologies.
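
As a minimal sketch of that flexbox-to-Grid enhancement (the .cards class name and the sizes are only for illustration), the fallback and the enhanced layout can live side by side:

/* Fallback: flexbox renders the layout in browsers without Grid support */
.cards {
  display: flex;
  flex-wrap: wrap;
}
.cards > * {
  flex: 1 1 300px;
}

/* Enhancement: browsers that understand Grid get the richer layout */
@supports (display: grid) {
  .cards {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
    grid-gap: 1rem;
  }
}

Designers and developers can then agree on what the fallback is allowed to look like, instead of treating it as an afterthought.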

When it comes to animations, let’s be real: deep down you are as thrilled to implement the next wow animation as much as the designers are to create it. The difference between us and them is that we are more aware of the web’s constraints. However, don’t let that limit your own skills! The web evolves fast, so we must use that in our favor.

The next time you are asked to bring a prototype to life, try not to reject it upfront or do it all by yourself. See it as an opportunity to push yourself. Animations are one of the pickiest parts of web development, so make sure to refine each keyframe with your designer — in person or by sharing a private synced link.

Think Beyond The Ideal State — Together

Here’s the thing: it’s not all about you. One of the designers’ challenges is to really understand their users and present the designs in the most attractive way to sell to the Product Manager. Sometimes they forget about what’s beyond the ideal state and developers forget it too.

In order to help avoid these gaps from happening, align with designers the scenario requirements:

  • The worst-case scenario
    A scenario where the worst possibilities are happening. This helps designers to say no to fixed content and let it be fluid. What happens if the title has more than 60 characters or if the list is too long? The same applies to the opposite, the empty scenario. What should the user do when the list is empty?
  • Interaction states
    The browser is more than hovering and clicking around. There are a bunch of states: default, hover, focus, active, disabled, error… and some of them can happen at the same time. Let’s give them the proper attention.
  • The loading state
    When building stuff locally, we might use mocks and forget that things actually take time to load. Let designers know about that possibility too, and show them that there are ways to make people perceive that things don’t take that long to load.

Taking into consideration all these scenarios will save you a lot of time going back and forth during the development phase.

Note: There’s a great article by Scott Hurff about how to fix bad user interfaces that will take us a step closer to an accessible product.

Embrace Change Requests

Developers are known for not being too happy about someone finding a bug in their code — especially when it’s a design bug reported by a designer. There are a lot of memes around it, but have you ever reflected on how casually dismissing those design bugs can rot both the quality of the experience and your relationship with the design team? It happens slowly, almost like falling asleep: bit by bit, then all at once. Designers might start out saying, “It’s just a tiny little detail,” until the indifference and resentment build up and nothing is said. The damage has then already been done.

The damage is two-fold: to your peers and to your work. Don’t let that happen. Work on what’s coloring your reaction. A designer being ‘picky’ can be inconvenient, but what reads as pickiness to an engineer is often just attentiveness and enthusiasm. When someone tells you that your implementation is not perfect (yet), put your ego aside and show them how you recognize their hard work in refining the final result.

Go Off The Record Once In A While

We all have tasks to deliver and roadmaps to finish. However, some of the best work happens off the record. It won’t be logged into the TO DO board and that’s okay. If you find a designer you have a rapport with, sneak into their workspace and explore new experiments together. You never know what can come from there!

Conclusion

When you are willing to learn from the design team, you are also learning different ways of thinking. You’ll develop a collaborative mindset with areas outside your expertise while refining your eye for creating new experiences. Be there to help and be open to learning, because sharing is what makes us better.


This article wouldn’t be possible without the feedback of many great people. I want to gratefully thank the designers Jasmine Meijer and Margarida Botelho for helping me to share a balanced perspective about the misunderstandings between designers and developers.



Machine Learning For Front-End Developers With Tensorflow.js

Charlie Gerard

Machine learning often feels like it belongs to the realm of data scientists and Python developers. However, over the past couple of years, open-source frameworks have been created to make it more accessible in different programming languages, including JavaScript. In this article, we will use Tensorflow.js to explore the different possibilities of using machine learning in the browser through a few example projects.

What Is Machine Learning?

Before we start diving into some code, let’s talk briefly about what machine learning is as well as some core concepts and terminology.

Definition

A common definition is that it is the ability for computers to learn from data without being explicitly programmed.

If we compare it to traditional programming, it means that we let computers identify patterns in data and generate predictions without us having to tell them exactly what to look for.

Let’s take the example of fraud detection. There is no fixed set of criteria for what makes a transaction fraudulent; fraud can be committed in any country, on any account, targeting any customer, at any time, and so on. It would be pretty much impossible to track all of this manually.

However, using previous data around fraudulent expenses gathered over the years, we can train a machine-learning algorithm to understand patterns in this data to generate a model that can be given any new transaction and predict the probability of it being fraud or not, without telling it exactly what to look for.

Core Concepts

To understand the following code samples, we need to cover a few common terms first.

Model

When you train a machine-learning algorithm with a dataset, the model is the output of this training process. It’s a bit like a function that takes new data as input and produces a prediction as output.

Labels And Features

Labels and features relate to the data that you feed an algorithm in the training process.

A label represents how you would classify each entry in your dataset — the name you would give it. For example, if our dataset was a CSV file describing different animals, our labels could be words like “cat”, “dog” or “snake” (depending on what each animal represents).

Features, on the other hand, are the characteristics of each entry in your dataset. For our animals example, it could be things like “whiskers, meows”, “playful, barks”, “reptile, rampant”, and so on.

Using this, a machine-learning algorithm will be able to find some correlation between features and their label that it will use for future predictions.
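
As a rough sketch (the object shape below is made up purely for illustration, not taken from any particular dataset), a single labeled entry could look like this in JavaScript:

// One entry of a hypothetical animals dataset:
// the features describe the animal, the label classifies it.
const entry = {
  features: ["whiskers", "meows", "playful"],
  label: "cat"
};

During training, the algorithm sees many such entries and learns which feature combinations tend to map to which label.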

Neural Networks

Neural networks are a set of machine-learning algorithms that try to mimic the way the brain works by using layers of artificial neurons.

We don’t need to go in-depth about how they work in this article, but if you want to learn more, here’s a really good video:

Now that we’ve defined a few terms commonly used in machine learning, let’s talk about what can be done using JavaScript and the Tensorflow.js framework.

Features

Three features are currently available:

  1. Using a pre-trained model,
  2. Transfer learning,
  3. Defining, running, and using your own model.

Let’s start with the simplest one.

1. Using A Pre-Trained Model

Depending on the problem you are trying to solve, there might be a model already trained with a specific dataset and for a specific purpose, which you can leverage and import into your code.

For example, let’s say we are building a website to predict if an image is a picture of a cat. A popular image classification model is called MobileNet and is available as a pre-trained model with Tensorflow.js.

The code for this would look something like this:

<html lang="en">   <head>     <meta charset="UTF-8">     <meta name="viewport" content="width=device-width, initial-scale=1.0">     <meta http-equiv="X-UA-Compatible" content="ie=edge">     <title>Cat detection</title>     <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.1"> </script>     <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet@1.0.0"> </script>   </head>   <body>     <img id="image" alt="cat laying down" src="cat.jpeg"/>        <script>       const img = document.getElementById('image');          const predictImage = async () => {         console.log("Model loading...");         const model = await mobilenet.load();         console.log("Model is loaded!")            const predictions = await model.classify(img);         console.log('Predictions: ', predictions);       }       predictImage();     </script>   </body> </html>

We start by importing Tensorflow.js and the MobileNet model in the head of our HTML:

<script src="https://cdnjs.cloudflare.com/ajax/libs/tensorflow/1.0.1/tf.js"> </script> <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet@1.0.0"> </script>

Then, inside the body, we have an image element that will be used for predictions:

<img id="image" alt="cat laying down" src="cat.jpeg"/>

And finally, inside the script tag, we have the JavaScript code that loads the pre-trained MobileNet model and classifies the image found in the image tag. It returns an array of 3 predictions which are ordered by probability score (the first element being the best prediction).

const predictImage = async () => {
  console.log("Model loading...");
  const model = await mobilenet.load();
  console.log("Model is loaded!")
  const predictions = await model.classify(img);
  console.log('Predictions: ', predictions);
}

predictImage();

And that’s it! This is the way you can use a pre-trained model in the browser with Tensorflow.js!

Note: If you want to have a look at what else the MobileNet model can classify, you can find a list of the different classes available on Github.

An important thing to know is that loading a pre-trained model in the browser can take some time (sometimes up to 10s), so you will probably want to preload the model or adapt your interface so that users are not impacted.
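
For example, here is a minimal sketch of that idea (the #loader element is an assumption — any spinner or skeleton UI in your markup would do):

const loader = document.getElementById('loader'); // hypothetical spinner element
const img = document.getElementById('image');

const init = async () => {
  loader.hidden = false;               // show the spinner while the model downloads
  const model = await mobilenet.load();
  loader.hidden = true;                // model is ready, reveal the result
  const predictions = await model.classify(img);
  console.log('Predictions: ', predictions);
};

init();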

If you prefer using Tensorflow.js as an NPM module, you can do so by importing the module this way:

import * as mobilenet from '@tensorflow-models/mobilenet';
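
With the NPM setup, you would typically also import the core library alongside the model package; a minimal sketch (assuming a bundler such as Webpack or Parcel) might look like this:

// The core library is imported alongside the model package.
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

const run = async () => {
  const model = await mobilenet.load();
  // model.classify(...) then works exactly as in the script-tag version.
};

run();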

Feel free to play around with this example on CodeSandbox.

Now that we’ve seen how to use a pre-trained model, let’s look at the second feature available: transfer learning.

2. Transfer Learning

Transfer learning is the ability to combine a pre-trained model with custom training data. What this means is that you can leverage the functionality of a model and add your own samples without having to create everything from scratch.

For example, say an algorithm has been trained on thousands of images to create an image classification model. Instead of creating your own model from scratch, transfer learning allows you to combine new custom image samples with that pre-trained model to create a new image classifier. This feature makes it really fast and easy to have a more customized classifier.

To provide an example of what this would look like in code, let’s repurpose our previous example and modify it so we can classify new images.

Note: The end result is the experiment below that you can try live here.


Below are a few code samples of the most important part of this setup, but if you need to have a look at the whole code, you can find it on this CodeSandbox.

We still need to start by importing Tensorflow.js and MobileNet, but this time we also need to add a KNN (k-nearest neighbor) classifier:

<!-- Load TensorFlow.js -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<!-- Load MobileNet -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet"></script>
<!-- Load KNN Classifier -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/knn-classifier"></script>

The reason we need a classifier is that (instead of only using the MobileNet module) we’re adding custom samples it has never seen before, so the KNN classifier will allow us to combine everything together and run predictions on the combined data.

Then, we can replace the image of the cat with a video tag to use images from the camera feed.

<video autoplay id="webcam" width="227" height="227"></video>

Finally, we’ll need to add a few buttons on the page that we will use as labels to record some video samples and start the predictions.

<section>
  <button class="button">Left</button>
  <button class="button">Right</button>
  <button class="test-predictions">Test</button>
</section>

Now, let’s move to the JavaScript file where we’re going to start by setting up a few important variables:

// Number of classes to classify
const NUM_CLASSES = 2;
// Labels for our classes
const classes = ["Left", "Right"];
// Webcam Image size. Must be 227.
const IMAGE_SIZE = 227;
// K value for KNN
const TOPK = 10;

const video = document.getElementById("webcam");

In this particular example, we want to be able to classify the webcam input between our head tilting to the left or the right, so we need two classes labeled left and right.

The image size set to 227 is the size of the video element in pixels. Based on the Tensorflow.js examples, this value needs to be set to 227 to match the format of the data the MobileNet model was trained with. For the classifier to handle our new data, that data needs to fit the same format.

If you really need it to be larger, it is possible, but you will have to transform and resize the data before feeding it to the KNN classifier.
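
As a rough sketch of what that resizing could look like with the tf.image API (assuming the same class context as the snippets below, where the model lives on this.mobilenetModule):

// Capture the current video frame and scale it down to the 227x227 format
// that MobileNet expects before handing it to the classifier.
const frame = tf.browser.fromPixels(video);
const resized = tf.image.resizeBilinear(frame, [IMAGE_SIZE, IMAGE_SIZE]);
const logits = this.mobilenetModule.infer(resized, "conv_preds");

// Tensors are not garbage-collected automatically, so free them when done.
frame.dispose();
resized.dispose();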

Then, we’re setting the value of K to 10. The K value in the KNN algorithm is important because it represents the number of instances that we take into account when determining the class of our new input.

In this case, the value of 10 means that, when predicting the label for some new data, we will look at the 10 nearest neighbors from the training data to determine how to classify our new input.

Finally, we’re getting the video element. For the logic, let’s start by loading the model and classifier:

async load() {
  this.knn = knnClassifier.create();
  this.mobilenetModule = await mobilenet.load();
  console.log("model loaded");
}

Then, let’s access the video feed:

navigator.mediaDevices
  .getUserMedia({ video: true, audio: false })
  .then(stream => {
    video.srcObject = stream;
    video.width = IMAGE_SIZE;
    video.height = IMAGE_SIZE;
  });

Following that, let’s set up some button events to record our sample data:

setupButtonEvents() {
  for (let i = 0; i < NUM_CLASSES; i++) {
    let button = document.getElementsByClassName("button")[i];

    button.onmousedown = () => {
      this.training = i;
      this.recordSamples = true;
    };
    button.onmouseup = () => (this.training = -1);
  }
}

Let’s write our function that will take the webcam images samples, reformat them and combine them with the MobileNet module:

// Get image data from video element
const image = tf.browser.fromPixels(video);

let logits;
// 'conv_preds' is the logits activation of MobileNet.
const infer = () => this.mobilenetModule.infer(image, "conv_preds");

// Train class if one of the buttons is held down
if (this.training != -1) {
  logits = infer();

  // Add current image to classifier
  this.knn.addExample(logits, this.training);
}

And finally, once we have gathered some webcam image samples, we can test our predictions with the following code:

logits = infer();
const res = await this.knn.predictClass(logits, TOPK);
const prediction = classes[res.classIndex];

Lastly, we can dispose of the webcam data as we don’t need it anymore:

// Dispose image when done
image.dispose();
if (logits != null) {
  logits.dispose();
}

Once again, if you want to have a look at the full code, you can find it in the CodeSandbox mentioned earlier.

3. Training A Model In The Browser

The last feature is to define, train and run a model entirely in the browser. To illustrate this, we’re going to build the classic example of recognizing Irises.

For this, we’ll create a neural network that can classify Irises in three categories: Setosa, Virginica, and Versicolor, based on an open-source dataset.

Before we start, here’s a link to the live demo and here’s the CodeSandbox if you want to play around with the full code.

At the core of every machine learning project is a dataset. One of the first steps we need to take is to split this dataset into a training set and a test set.

The reason for this is that we’re going to use our training set to train our algorithm and our test set to check the accuracy of our predictions, to validate if our model is ready to be used or needs to be tweaked.

Note: To make it easier, I already split the training set and test set into two JSON files that you can find in the CodeSandbox.

The training set contains 130 items and the test set 14. If you have a look at what this data looks like, you’ll see something like this:

{   "sepal_length": 5.1,   "sepal_width": 3.5,   "petal_length": 1.4,   "petal_width": 0.2,   "species": "setosa" }

What we can see is four different features for the length and width of the sepal and petal, as well as a label for the species.

To be able to use this with Tensorflow.js, we need to shape this data into a format that the framework will understand. In this case, the training data has the shape [130, 4] — 130 samples with four features per iris.

import * as trainingSet from "training.json";
import * as testSet from "testing.json";

const trainingData = tf.tensor2d(
  trainingSet.map(item => [
    item.sepal_length,
    item.sepal_width,
    item.petal_length,
    item.petal_width
  ]),
  [130, 4]
);

const testData = tf.tensor2d(
  testSet.map(item => [
    item.sepal_length,
    item.sepal_width,
    item.petal_length,
    item.petal_width
  ]),
  [14, 4]
);

Next, we need to shape our output data as well:

const output = tf.tensor2d(trainingSet.map(item => [
  item.species === 'setosa' ? 1 : 0,
  item.species === 'virginica' ? 1 : 0,
  item.species === 'versicolor' ? 1 : 0
]), [130, 3])

Then, once our data is ready, we can move on to creating the model:

const model = tf.sequential();

model.add(tf.layers.dense({
  inputShape: 4,
  activation: 'sigmoid',
  units: 10
}));

model.add(tf.layers.dense({
  inputShape: 10,
  units: 3,
  activation: 'softmax'
}));

// The model has to be compiled before it can be trained; the loss and
// optimizer below are common choices for this kind of classification task.
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: tf.train.adam()
});

In the code sample above, we start by instantiating a sequential model, then add an input and an output layer, and compile the model so it is ready for training.

The parameters you can see used inside (inputShape, activation, and units) are outside the scope of this post, as they can vary depending on the model you are creating, the type of data used, and so on.

Once our model is ready, we can train it with our data:

async function train_data() {
  for (let i = 0; i < 15; i++) {
    const res = await model.fit(trainingData, output, { epochs: 40 });
  }
}

async function main() {
  await train_data();
  model.predict(testData).print();
}

main();

If this works well, you can start replacing the test data with custom user inputs.

Once we call our main function, the output of the prediction will look like one of these three options:

[1, 0, 0] // Setosa
[0, 1, 0] // Virginica
[0, 0, 1] // Versicolor

The prediction returns an array of three numbers representing the probability of the data belonging to one of the three classes. The number closest to 1 is the highest prediction.

For example, if the output of the classification is [0.0002, 0.9494, 0.0503], the second element of the array is the highest, so the model predicted that the new input is likely to be a Virginica.
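
If you want to read that result programmatically rather than by eye, a small sketch (assuming the class names are kept in an array, in the same order as the output tensor) can use argMax():

const speciesNames = ["setosa", "virginica", "versicolor"];

// argMax(-1) returns the index of the highest probability for each sample;
// this needs to run inside an async function, like main() above.
const predictions = model.predict(testData);
const classIndices = await predictions.argMax(-1).data();
console.log(speciesNames[classIndices[0]]);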

And that’s it for a simple neural network in Tensorflow.js!

We only talked about a small dataset of irises, but if you want to move on to larger datasets or to working with images, the steps will be the same:

  • Gathering the data;
  • Splitting between training and testing set;
  • Reformatting the data so Tensorflow.js can understand it;
  • Picking your algorithm;
  • Fitting the data;
  • Predicting.

If you want to save the model created to be able to load it in another application and predict new data, you can do so with the following line:

await model.save('file:///path/to/my-model'); // in Node.js

Note: For more options on how to save a model, have a look at this resource.

Limits

That’s it! We’ve just covered the three main features currently available using Tensorflow.js!

Before we finish, I think it is important to briefly mention some of the limits of using machine learning in the frontend.

1. Performance

Importing a pre-trained model from an external source can have a performance impact on your application. Some object detection models, for example, are more than 10MB, which can significantly slow down your website. Make sure to think about your user experience and optimize the loading of your assets to improve your perceived performance.

2. Quality Of The Input Data

If you build a model from scratch, you’re going to have to gather your own data or find some open-source dataset.

Before doing any kind of data processing or trying different algorithms, make sure to check the quality of your input data. For example, if you are trying to build a sentiment analysis model to recognize emotions in pieces of text, make sure that the data you are using to train your model is accurate and diverse. If the quality of the data used is low, the output of your training will be useless.

3. Liability

Using an open-source pre-trained model can be very fast and effortless. However, it also means that you don’t always know how it was generated, what the dataset was made of, or even which algorithm was used. Some models are called “black boxes”, meaning that you don’t really know how they predicted a certain output.

Depending on what you are trying to build, this can be a problem. For example, if you are using a machine-learning model to help detect the probability of someone having cancer based on scan images, then in the case of a false negative (the model predicted that a person didn’t have cancer when they actually did), there could be real legal liability, and you would have to be able to explain why the model made a certain prediction.

Summary

In conclusion, using JavaScript and frameworks like Tensorflow.js is a great way to get started and learn more about machine learning. Even though a production-ready application should probably be built in a language like Python, JavaScript makes it really accessible for developers to play around with the different features and get a better understanding of the fundamental concepts before eventually moving on and investing time into learning another language.

In this tutorial, we’ve only covered what is possible using Tensorflow.js; however, the ecosystem of other libraries and tools is growing. More specialized frameworks are also available, allowing you to explore using machine learning in other domains, such as music with Magenta.js, or predicting user navigation on a website using Guess.js!

As tools get more performant, the possibilities of building machine learning-enabled applications in JavaScript are likely to be more and more exciting, and now is a good time to learn more about it, as the community is putting effort into making it accessible.


Thanks for reading!



How Frontend Developers Can Help To Bridge The Gap Between Designers And Developers

Stefan Kaltenegger

Within the last nine years, almost every designer I have worked with has expressed their frustration to me about frequently having to spend days giving feedback to developers to correct spacings, font sizes, and visual as well as layout aspects that had simply not been implemented correctly. This often led to a weakening of the trust between designers and developers, left designers unmotivated, and created a bad atmosphere between the two disciplines.

A lot of times developers still seem to have the bad reputation of being overly technical and ignorant when it comes to being considerate about details the design team came up with. According to an article by Andy Budd, “[…] a lot of developers are in the same position about design — they just don’t realize it.” In reality though (as Paul Boag points out), “developers [need to] make design decisions all the time.”

In this article, I’ll provide practical points of advice for frontend developers to avoid frustration and increase productivity when working with their creative counterpart.

Looking Through The Eyes Of A Designer

Let’s for one moment imagine you were a designer and spent the last weeks — if not months — working out a design for a website. You and your teammates went through multiple internal revisions as well as client presentations, and put a solid effort into fine-tuning visual details such as white space, font styles, and sizes. (In a responsive era — for multiple screen sizes, of course.) The designs have been approved by the client and handed off to the developers. You feel relieved and happy.

A few weeks later, you receive an email from your developer that says:

“Staging site is set up. Here’s the link. Can you please QA?”

In a thrill of anticipation, you open that staging link and after scrolling through some of the pages, you notice that the site looks a little off. Spacings are not even close to what your design suggested and you notice some kinks in the layout: wrong font faces and colors as well as incorrect interactions and hover states. Your excitement starts to slowly fade and turn into a feeling of frustration. You can’t help but ask yourself, “How could that have happened?”

The Search For Reasons

Maybe there were just a lot of unfortunate misunderstandings in the communication between the designers and developers. Nevertheless, you continue asking yourself:

  • What did the handover of designs look like? Were there just some PDFs, Photoshop or Sketch files shared via e-mail with some comments, or was there an actual handover meeting in which various aspects such as a possible design system, typography, responsive behavior, interactions and animations were discussed?
  • Did interactive or motion prototypes that help to visualize certain interactions exist?
  • Was a list of important aspects with defined levels of priority created?
  • How many conversations took place — with both designers and developers in the same room together?

Since communication and handover are two very important key points, let’s take a closer look at each.

Communication Is Key

Designers and developers, please talk to each other. Talk a lot. The earlier on in the project and the more often, the better. If possible, review design work in progress together early in the project (and regularly) in order to constantly evaluate feasibility and get cross-disciplinary input. Designers and developers naturally both focus on different aspects of the same part and therefore see things from different angles and perspectives.

Checking in early on lets developers become familiarized with the project so they can start researching and planning ahead on technical terms and bring in their ideas on how to possibly optimize features. Having frequent check-ins also brings the team together on a personal and social level, and you learn how to approach each other to communicate effectively.

The Handover From Design To Development

Unless an organization follows a truly agile workflow, an initial handover of design comps and assets (from the design team to the developers) will likely happen at some point in a project. This handover — if done thoroughly — can be a solid foundation of knowledge and agreements between both sides. Therefore, it is essential not to rush through it and to plan some extra time for it.

Ask a lot of questions and talk through every requirement, page, component, feature, interaction, animation, anything — and take notes. If things are unclear, ask for clarification. For example, when working with external or contract-based teams, both designers and developers can sign off the notes taken as a document of mutual agreement for future reference.

Flat and static design comps are good for showing graphical and layout aspects of a website but obviously lack the proper representation of interactions and animations. Asking for prototypes or working demos of complex animations will create a clearer vision of what needs to be built for everyone involved.

Nowadays, there is a wide range of prototyping tools available that designers can utilize to mock up flows and interactions at different levels of fidelity. Javier Cuello explains how to choose the right prototyping tool for your project in one of his comprehensive articles.

Every project is unique, and so are its requirements. Due to these requirements, not all conceptualized features can always be built. Often the available time and resources to build something can be a limiting factor. Furthermore, constraints can come from technical requirements such as feasibility, accessibility, performance, usability and cross-browser support, economic requirements like budget and license fees or personal constraints like the skill level and availability of developers.

So, what if these constraints cause conflicts between designers and developers?

Finding Compromises And Building Shared Knowledge

In order to successfully ship a project on time and meet all defined requirements, finding compromises between the two disciplines is mostly inevitable. Developers need to learn to speak to designers in non-technical terms when they explain reasons why things need changes or can’t be built in a specific situation.

Instead of just saying, “Sorry, we can’t build this,” developers should try to give an explanation that is understandable for designers and — in the best case — prepare suggestions for an alternative solution that works within the known constraints. Backing your point with statistics, research, or articles, can help to emphasize your argument. Also, if timing is an issue, maybe the implementation of some time-consuming parts can be moved to a later phase of the project?

Even though it is not always possible, having designers and developers sit next to each other can shorten feedback loops and make it easier to work out a compromise. Adapting and prototyping can be done directly through coding and optimizing with DevTools open.

Show your fellow designers how to use DevTools in a browser so that they can alter basic information and preview small changes in their browser (e.g. paddings, margins, font sizes, class names) on the fly.

If the project and team structure allow it, building and prototyping in the browser as soon as possible can give everyone involved a better understanding of the responsive behavior and can help eliminate bugs and errors in the early stage of the project.

The longer designers and developers work together, the better designers will understand what is easier and what is more difficult for the developers to build. Over time, they can eventually refer to solutions that have worked for both sides in the past:

“We’ve used that solution to find a compromise in Project A. Can we use it for this project as well?”

This also helps developers get a better sense of what details the designers are very specific about and what visual aspects are important to them.

Designers Expect The Frontend To Look (And Function) Like Their Design

The Design File Vs. Browser Comparison

A helpful technique to prevent designers from getting frustrated is to make a simple left-right comparison between the design file you were handed and what your current state of development looks like. This might sound trivial, but as a developer, you have to take care of so many things that need to function under the hood that you might have missed some visual details. If you see some noticeable discrepancies, simply correct them.

Think of it this way: Every detail in your implementation that looks exactly as it was designed saves both you and the designer valuable time and headaches, and encourages trust. Not everyone might have the same level of attention to detail, but in order to train your eye to notice visual differences, a quick round of Can’t Unsee might be a good help.

“Can’t Unsee” is a game where you need to choose the most correct design out of two choices.
(Image credits: Can’t Unsee)

This nostalgically reminds me of a game we used to play a long time ago called “Find it”. You had to find discrepancies by comparing two seemingly similar images in order to score points.

In “Find it”, players have to find errors comparing two images
(Image credits: Mordillo find them)

Still, you may be thinking:

“What if there simply is no noticeable system of font sizes and spacings in the design?”

Well, good point! Experience has shown me that it can help to start a conversation with the designer(s) by asking for clarification rather than radically starting to change things on your own and creating unwanted surprises for the designer(s) later.

Learn Basic Typographic And Design Rules

As Oliver Reichenstein states in one of his articles, 95% of the information on the web is written language. Therefore, typography plays a vital role not only in web design but also in development. Understanding basic terms and concepts of typography can help you communicate more effectively with designers, and will also make you more versatile as a developer. I recommend reading Oliver’s article as he elaborates the importance of typography on the web and explains terms such as micro- and macro-typography.

In the “Reference Guide For Typography In Mobile Web Design”, Suzanne Scacca thoroughly covers typography terminology such as typeface, font, size, weight, kerning, leading and tracking as well as the role of typography in modern web design.

If you would like to further expand your typographical horizon, Matthew Butterick’s book “Butterick’s Practical Typography” might be worth reading. It also provides a summary of key rules of typography.

One thing I found particularly useful in responsive web design is that one should aim for an average line length (characters per line) of 45 to 90 characters since shorter lines are more comfortable to read than longer lines.

Comparing two text paragraphs with different line lengths
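
In CSS, a rough way to keep paragraphs inside that range (the 65ch value below is just an illustrative middle ground, not a hard rule) is to cap the measure with the ch unit:

/* 1ch is roughly the width of the "0" glyph of the current font,
   so capping paragraphs at about 65ch keeps most lines in the 45-90 range. */
p {
  max-width: 65ch;
}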

Should Developers Design?

There has been a lot of discussion whether designers should learn to code, and you may be asking yourself the same question the other way around. I believe that one can hardly excel in both disciplines, and that’s totally fine.

Rachel Andrew nicely outlines in her article “Working Together: How Designers And Developers Can Communicate To Create Better Projects” that in order to collaborate more effectively, we all need to learn something of the language, skills, and priorities of our teammates so that we can create a shared language and overlapping areas of expertise.

One way to become more knowledgable in the field of design is an online course known as “Design for Developers” that is offered by Sarah Drasner in which she talks about basic layout principles and color theory — two fundamental areas in web design.

“The more you learn outside of your own discipline, is actually better for you […] as a developer.”

— Sarah Drasner

The Visual Center

By collaborating with designers, I learned the difference between the mathematical and visual center. When we want to draw the reader’s attention to a certain element, our eye’s natural focal point lies just slightly above the mathematical center of the page.

We can apply this concept, for example, to position modals or any kinds of overlays. This technique helps us to naturally get the user’s attention and makes the design appear more balanced:

Comparing two page layouts where one shows a text aligned to the mathematical and the other a text aligned to the visual center
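
One way to approximate the visual center in CSS (the 45% offset is an assumption to tweak per layout, not a fixed rule) is to anchor a modal slightly above the exact middle of the viewport:

.modal {
  position: fixed;
  left: 50%;
  /* 50% would be the mathematical center; nudging the anchor up to ~45%
     places the modal at the visual center instead. */
  top: 45%;
  transform: translate(-50%, -50%);
}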

We’re All In This Together

In fast-paced and not-so-agile agency environments with tight deadlines, developers are often asked to implement fully functional responsive frontends based on a mobile and desktop mockup. This inevitably forces the developer to take design decisions throughout the process. Questions such as, “At what width will we decrease the font size of headlines?” or “When should we switch our three-column layout to a single column?” may arise.

Also, in the heat of the moment, it may happen that details like error states, notifications, loading states, modals or styles of 404 pages simply fall through the cracks. In such situations, it’s easy to start finger-pointing and blaming the people who should have thought about this earlier on. Ideally, developers shouldn’t ever be put in such a situation, but what if that’s the case?

When I listened to Ueno’s founder and CEO, Haraldur Thorleifsson, speak at a conference in San Francisco in 2018, he presented two of their core values:

“Nothing here is someone else’s problem.”

“We pick up the trash we didn’t put down.”

What if more developers proactively started mocking up the above-mentioned missing parts as well as they can in the first place, and then refined them together with the designer sitting next to them? Websites live in the browser, so why not utilize it to build and refine?

While winging missing or forgotten parts might not always be ideal, I’ve learned in my past experiences that it has always helped us to move forward faster and eliminate errors on the fly — as a team.

Of course, this does not mean that designers should be overruled in the process. It means that developers should try to respectfully meet designers halfway by showing initiative in problem-solving. Besides that, as a developer, I was valued far more by the team simply for caring and taking on responsibility.

Building Trust Between Designers And Developers

Having a trustful and positive relationship between the creative and tech team can strongly increase productivity and outcome of work. So what can we, as developers, do to increase trust between the two disciplines? Here are a few suggestions:

  1. Show an eye for details.
    Building things exactly as they were designed will show the designers that you care and put a big smile on their faces.
  2. Communicate with respect.
    We’re all human beings in a professional environment striving for the best possible outcome. Showing respect for each other’s discipline should be the basis for all communication.
  3. Check in early on and regularly.
    Involving developers from the start can help to eliminate errors early on. Through frequent communication, team members can develop a shared language and better understanding of each other’s positions.
  4. Make yourself available.
    Having at least an optional 30-minute window a day when designers can discuss ideas with developers can give designers a feeling of being supported. This also gives developers the opportunity to explain complex technical things in words that not-so-technical people can understand better.

The Result: A Win-Win Situation

Having to spend less time in QA through effective communication and a proper handover of designs gives both the creative and dev teams more time to focus on building actual things, and fewer headaches. It ultimately creates a better atmosphere and builds trust between designers and developers. The voices of frontend developers who show interest and knowledge in design-related fields will be heard more in design meetings.

Proactively contributing to finding a compromise between designers and developers and problem-solving as a developer can give you a broader sense of ownership and involvement with the whole project. Even in today’s booming creative industry, it’s not easy to find developers who — besides their technical skillset — care about and have an eye for visual details. This can be your opportunity to help bridge the gap in your team.



Why Mason And Front-End As A Service Will Be A Game Changer For Product Development

Suzanne Scacca

(This is a sponsored article.) Take a look around at the apps and software you interact with on a regular basis. Each has its own unique design, right? And, yet, there’s something similar about each of them. Navigation bars, contact forms, feature boxes, CTAs — certain elements tend to be present no matter where you go.

That’s because these elements play a crucial part in how users engage with the products that you’ve built. From the users’ end, this is a good thing. By including these recognizable and predictable elements within the frontend structure of an application, the user’s focus is on the content before them; not on trying to solve the mystery of the UI.

From the software developers’ end, though, this is a pain. You know what kinds of components need to be included in a product. However, until now, you’ve had to build them from scratch time and time again. Worse, any time something needs to be updated, it rides on you to implement the update and push it to the live site — and it’s not often you have the bandwidth to make those changes right away.

That’s why what Mason is doing with front-end-as-a-service (FEaaS) is so interesting. In this article, I’m going to give you a closer look at FEaaS, who it’s for and why empowering product and marketing teams with it is a big deal.

What Is FEaaS?

You know what software-as-a-service (SaaS) is. But have you heard of software-components-as-a-service (SCaaS)?

There were some light grumblings around SCaaS a number of years back. The basic idea was that you could create and easily maintain reusable UI components and widgets for your software. However, it never really caught on — and that’s probably because it was too restrictive in what it enabled developers to do.

With FEaaS, though, you have a much more valuable and powerful solution. Essentially, Mason’s front-end-as-a-service solution enables you to quickly and effectively build the visual aspects and functionality of your software.

This means that you’re building complex features and making them talk to APIs. An example of differently designed, complex forms connected to Airtable as the datasource can be found here.

What’s more, each feature you build with Mason lives in the same codebase as the rest of your product. Let’s take a look at a customizable Apixu-powered Chatbot that was made with Mason:

Mason Airtable to-do list demo
An example of the kinds of dynamic content you can build with FEaaS. (Source: Mason) (Large preview)

And this is a hero banner I created for an ebook giveaway using a Mason template:

Mason hero template
An example of a hero image template, customized and published with Mason. (Source: Mason) (Large preview)

Make no mistake: this is not a website builder. Mason empowers you and your team to build individual components and fully functional features; not entire managed hosting websites or products. This way, you’re no longer restricted by the capabilities of your site builder solution.

You can build your website, app, or other software product in the tool of your choice. Then, design and export really complex features from Mason to be integrated into your code base. It’s important to point out that unlike a platform that locks you and your customer data in, Mason lets product teams augment their current products and own everything (not like some website builders that own your whole website and business instead).

Who Is Mason For?

With Mason, you should already have a fully developed software product or, at the very least, a solution to build it with. Mason is the tool you’ll use to build and design the features (and their functionality) for your product — and to do so with ease (i.e. no coding).

Those features will then be self-contained and dropped into the product when they’re good to go.

As for the actual people that Mason was built for, Tom McLaughlin, Mason’s CEO, explains:

“Today, the entire product lives in the codebase, so it becomes the de facto realm of the engineering team only, even though so many of the features that make up the product can be found in every other codebase on earth — they’re just not that unique. Mason lets your product team build these common features faster, but, more importantly, lets anyone in the organization — technical or not — manage them, even once they’re in production.”

Your product team — your software developers and designers — are the ones that will use Mason to build your software. However, your marketing and content teams will gain the ability to update the features you’ve built in Mason after they’ve been deployed, without needing to wait on engineering to deploy every new edit or tweak.

This means that maintenance of front end features is no longer solely dependent on you, the developer. Anyone on your team — designers, marketers, content creators, etc. — can use Mason’s FEaaS platform to build and update your software’s features.

So, not only can you more efficiently build powerful features for your products, your team can deploy updates in real-time, rather than allowing them to pile up on your open ticket list.

Why FEaaS Matters

Has your software development, deployment or update schedule suffered in the past due to (albeit completely understandable) software developer bottlenecks? If so, then FEaaS should sound like a dream to you.

Until now, there’s really been no other option for software engineers. If you wanted to build a product for the web, everything had to be built from scratch and it required a good amount of time on your part to do so, especially if your aims were more complex in nature. All the while, the rest of your team was left waiting in the wings for you to do your part.

With Mason leading the charge with its FEaaS solution, I’d like to take a look at how its capabilities are going to revolutionize your product development workflow.

Design UI Components Visually

FEaaS takes engineers and developers out of a product’s code base and into a visual build interface. As such, you see exactly what you’re building without having to switch back and forth between your code and a visual preview of what it’ll look like once deployed.

With Mason’s visual builder, you can design complex but essential UI components using a system of containers, columns, layers and pre-configured elements like text, form fields, buttons and more.

Mason visual builder
A look inside Mason’s visual builder tool. (Source: Mason) (Large preview)

Similar to how other modern builder tools work, there is an abundance of options available to help you do more without ever having to write a line of code. And thanks to a convenient toggle between desktop, mobile and tablet views, responsive design is no problem either.

In addition, Mason comes with a full-featured UI kit that includes a variety of templates for the most common UI components. Or you can hand-select the ones you need:

Mason templates
Mason’s pre-built UI component library of templates. (Source: Mason) (Large preview)

Feature cards. Login screens. Blog content blocks. Hero images. CTA buttons. All of the core components you need to get visitors to engage with your product and take action have already been built for you.

If you’re tired of creating them from scratch in each and every product you build, these templates will be a huge help. As you can imagine, having the ability to design and customize product components this way would be a great boon to your team’s productivity.

Build Components And Functionality More Quickly

Now, being able to style components visually is just one of the benefits of using an FEaaS platform like Mason. As you might expect, a tool like this was manufactured for speed.

In terms of actually using Mason, it’s a tool that loads insanely quickly — which would make this quite valuable to anyone who’s lost time in the past waiting for their tools to launch, save changes or move from one view to another.

In terms of how it affects your workflow, expect to gain speed there as well.

With the Mason builder, you can:

  • Add new containers, columns, rows, content blocks or custom-coded elements with a simple drag-and-drop:
Mason Design choices
Mason’s comprehensive set of design layouts, components, and coding options. (Source: Mason) (Large preview)
  • Review, edit, duplicate, move or delete the layers of your component using this visualized hierarchy of elements:
Mason layers
Mason’s easy-to-manage layer controls. (Source: Mason) (Large preview)
  • It’s not just the UI design piece that’s simplified either. You can connect your components to other sources through APIs with ease, too.
mason datasources
Connecting datasources to Mason features to prepare for mapping. (Source: Mason) (Large preview)
Mason API mapping
Connecting APIs to Mason components through mapping. (Source: Mason) (Large preview)

Mason’s “Configure” tab enables you to quickly integrate with other applications, like:

  • Authy
  • Facebook
  • Hubspot
  • Stripe
  • Twilio
  • And more.

So, let’s say you want to sell the eBook promoted in your hero banner instead of just collecting leads with it. The first thing you’d do is set up the Stripe integration.

All you need are the Publishable and Secret keys from Stripe’s Developer dashboard:

Stripe API keys
Retrieve the API keys from your Stripe developer dashboard. (Source: Stripe) (Large preview)

Then, input each of the keys into the corresponding field in Mason:

Mason third-party integrations
Integrate other applications with Mason components as datasources, like this Stripe example. (Source: Mason) (Large preview)

Return to the “Design” tab and drag the Credit Card Form Element into the component.

Mason credit card form element
Use the Credit Card Form Element in Mason to accept payments through components. (Source: Mason) (Large preview)

With your form added to the page, there’s one final step to take to set up this integration.

Return to the Configure tab. You’ll now see a new option on the sidebar called “Forms”:

Mason integration with Stripe
Configure a Stripe payment form in just a few steps with Mason. (Source: Mason) (Large preview)

You can see that all of the pertinent details have already been added here and the connection already made to your form.

Again, Mason makes light work of something that would typically take software engineers a while to do if they were to build a component from scratch. Instead, you now have all the tools you need to quickly design and program new features for your product.

Deploy New Features With Ease

To be sure, being able to design new features quickly is important for product teams. However, that still doesn’t fix the issue of deployment.

Bottlenecks can occur at various points of a product’s development. And when you build a piece of software that’s so complex that only an engineer can readily launch it or deploy updates, well, you can only expect further delays in the pipeline.

Mason has developed a solution for this. To start, publishing a component to Mason’s library is a cinch. Simply click the “Publish” button in the top-right corner of the builder and Mason takes care of the rest.

To get the component inside your product or app, though, a developer needs to get involved — but only just this once and it shouldn’t take more than five minutes.

To do this, use the “< > Deploy” button in the top-right corner of the builder. It will then prompt you to do the following:

Mason deployment process
Deploying a Mason component takes just five minutes and four steps. (Source: Mason) (Large preview)

Essentially, what you’re doing is taking the unique identifier Mason has created for the feature you built and adding it to your codebase. When set up correctly, your product will call on the Mason API to render the component app-side. And those on the front end of the site will be none the wiser.

It’s as simple as that to push a new component and all its functionality live.

Empower Everyone To Make Changes And Push Updates

All of the points I’ve made here about the benefits of FEaaS have been dancing around this final — and huge — benefit, which is this:

FEaaS empowers everyone to make changes to features and push them to a live application.

Think about that for a moment.

How much time has your team spent waiting around for an engineer to implement the changes that they need? And what has that done in terms of stunting your app’s ability to engage and convert visitors? Without an impressive-looking UI, without properly functioning features, without all of the critical elements needed to compel visitors to take action.

You’re ultimately costing the business money by holding the software hostage. Until now, that’s something that software product teams couldn’t help. It was just the nature of their work. But with FEaaS by Mason, this finally changes.

Once you have (1) published your component and (2) deployed it to your application, the feature will appear within your product. But let’s say changes are needed to it. For example:

  • Your designer wants to change the style of a form to mirror the revamped landing page design.
  • Your marketing manager has a new branded image that needs to replace the home page hero image.
  • Your editor has decided to tweak the wording for the latest lead gen offer and wants to update the CTA.

It doesn’t matter who steps inside the Mason builder to change a component. The second it happens, that faded “Saved” button in the top-right corner of the builder turns to a green “Go to Publish” button.

Mason software component updates
Updating Mason components can be done by anyone. (Source: Mason) (Large preview)

And guess what? No technical experience is needed to click it.

Mason simplifies publication
Mason takes care of publishing your component and its updates to its library. (Source: Mason) (Large preview)

Mason takes care of publishing and deploying the changes, so as long as the connection has already been made between your app and Mason, these updates should instantly go live for visitors to see.

If you and your product team have been bogged down with a barrage of tickets, asking you to build new components or to tweak existing ones, this will effectively put a stop to that.

Wrapping Up

One of the wonderful things about building products for the web is that someone’s always developing a new way to help us accomplish more while simultaneously doing less.

With software applications, in general, it’s been a long time coming. Thankfully, FEaaS is here and it looks as though Mason has developed a supremely valuable tool to speed up software development, improve output, and also empower more of your team to pitch in.

Smashing Editorial (md, yk, il)





Front-End Performance Checklist 2019 [PDF, Apple Pages, MS Word]

Vitaly Friedman

Web performance is a tricky beast, isn’t it? How do we actually know where we stand in terms of performance, and what our performance bottlenecks exactly are? Is it expensive JavaScript, slow web font delivery, heavy images, or sluggish rendering? Is it worth exploring tree-shaking, scope hoisting, code-splitting, and all the fancy loading patterns with intersection observer, server push, client hints, HTTP/2, service workers and — oh my — edge workers? And, most importantly, where do we even start improving performance and how do we establish a performance culture long-term?

Back in the day, performance was often a mere afterthought. Often deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s config file. Looking back now, things seem to have changed quite significantly.

Performance isn’t just a technical concern: it matters, and when baking it into the workflow, design decisions have to be informed by their performance implications. Performance has to be measured, monitored and refined continually, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance).

So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) front-end performance checklist for 2019 — an updated overview of the issues you might need to consider to ensure that your response times are fast, user interaction is smooth and your sites don’t drain your users’ bandwidth.


(You can also just download the checklist PDF (166 KB) or download editable Apple Pages file (275 KB) or the .docx file (151 KB). Happy optimizing, everyone!)

Getting Ready: Planning And Metrics

Micro-optimizations are great for keeping performance on track, but it’s critical to have clearly defined targets in mind — measurable goals that would influence any decisions made throughout the process. There are a couple of different models, and the ones discussed below are quite opinionated — just make sure to set your own priorities early on.

  1. Establish a performance culture.
    In many organizations, front-end developers know exactly what common underlying problems are and what loading patterns should be used to fix them. However, as long as there is no established endorsement of the performance culture, each decision will turn into a battlefield of departments, breaking up the organization into silos. You need business stakeholder buy-in, and to get it, you need to establish a case study on how speed benefits the metrics and Key Performance Indicators (KPIs) they care about.

    Without a strong alignment between dev/design and business/marketing teams, performance isn’t going to sustain long-term. Study common complaints coming into customer service and see how improving performance can help relieve some of these common problems.

    Run performance experiments and measure outcomes — both on mobile and on desktop. It will help you build up a company-tailored case study with real data. Furthermore, using data from case studies and experiments published on WPO Stats will help make the business more sensitive to why performance matters and what impact it has on user experience and business metrics. Stating that performance matters alone isn’t enough though — you also need to establish some measurable and trackable goals and observe them.

    How to get there? In her talk on Building Performance for the Long Term, Allison McKnight shares a comprehensive case-study of how she helped establish a performance culture at Etsy (slides).

Brad Frost and Jonathan Fielding’s Performance Budget Calculator
The performance budget builder by Brad Frost and Jonathan Fielding can help you set up your performance budget and visualize it. (Large preview)
  2. Goal: Be at least 20% faster than your fastest competitor.
    According to psychological research, if you want users to feel that your website is faster than your competitor’s website, you need to be at least 20% faster. Study your main competitors, collect metrics on how they perform on mobile and desktop and set thresholds that would help you outpace them. To get accurate results and goals though, first study your analytics to see what your users are on. You can then mimic the 90th percentile’s experience for testing.

    To get a good first impression of how your competitors perform, you can use Chrome UX Report (CrUX, a ready-made RUM data set, video introduction by Ilya Grigorik), Speed Scorecard (also provides a revenue impact estimator), Real User Experience Test Comparison or SiteSpeed CI (based on synthetic testing).

    Note: If you use Page Speed Insights (no, it isn’t deprecated), you can get CrUX performance data for specific pages instead of just the aggregates. This data can be much more useful for setting performance targets for assets like “landing page” or “product listing”. And if you are using CI to test the budgets, you need to make sure your tested environment matches CrUX if you used CrUX for setting the target (thanks Patrick Meenan!).

    Collect data, set up a spreadsheet, shave off 20%, and set up your goals (performance budgets) this way. Now you have something measurable to test against. If you’re keeping the budget in mind and trying to ship down just the minimal script to get a quick time-to-interactive, then you’re on a reasonable path.


    Once you have a budget in place, incorporate it into your build process with Webpack Performance Hints and Bundlesize, Lighthouse CI, PWMetrics or Sitespeed CI to enforce budgets on pull requests and provide a score history in PR comments. If you need something custom, you can use webpagetest-charts-api, an API of endpoints to build charts from WebPagetest results.
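
    As a rough illustration, and assuming a 170KB gzipped budget (the numbers below are placeholders for your own, team-agreed budget), Webpack's built-in performance hints could be configured like this:

      // webpack.config.js: a minimal sketch of performance hints enforcing a budget.
      module.exports = {
        // ... entry, output, loaders, plugins ...
        performance: {
          hints: 'error',                // fail the build instead of merely warning
          maxEntrypointSize: 170 * 1024, // budget for everything needed for the initial load
          maxAssetSize: 170 * 1024       // budget for any individual emitted asset
        }
      };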

    For instance, just like Pinterest, you could create a custom eslint rule that disallows importing from files and directories that are known to be dependency-heavy and would bloat the bundle. Set up a listing of “safe” packages that can be shared across the entire team.
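
    Pinterest's rule is custom and project-specific, but a rough approximation is possible with ESLint's built-in no-restricted-imports rule; the package names below are purely illustrative:

      // .eslintrc.js: a sketch using the built-in no-restricted-imports rule.
      module.exports = {
        rules: {
          'no-restricted-imports': ['error', {
            paths: [
              { name: 'moment', message: 'Heavy dependency; prefer date-fns or native Intl.' },
              { name: 'lodash', message: 'Import individual helpers, e.g. lodash/debounce.' }
            ],
            patterns: ['../legacy/*'] // directories known to be dependency-heavy
          }]
        }
      };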

    Beyond performance budgets, think about critical customer tasks that are most beneficial to your business. Set and discuss acceptable time thresholds for critical actions and establish “UX ready” user timing marks that the entire organization has agreed on. In many cases, user journeys will touch on the work of many different departments, so alignment in terms of acceptable timings will help support or prevent performance discussions down the road. Make sure that additional costs of added resources and features are visible and understood.

    Also, as Patrick Meenan suggested, it’s worth planning out a loading sequence and trade-offs during the design process. If you prioritize early on which parts are more critical, and define the order in which they should appear, you will also know what can be delayed. Ideally, that order will also reflect the sequence of your CSS and JavaScript imports, so handling them during the build process will be easier. Also, consider what the visual experience should be in “in-between”-states, while the page is being loaded (e.g. when web fonts aren’t loaded yet).

    Planning, planning, planning. It might be tempting to get into quick “low-hanging-fruits”-optimizations early on — and sometimes that can be a good strategy for quick wins — but it will be very hard to keep performance a priority without planning and setting realistic, company-tailored performance goals.

The difference between First Paint, First Contentful Paint, First Meaningful Paint, Visual Complete and Time To Interactive. Large view. Credit: @denar90
  3. Choose the right metrics.
    Not all metrics are equally important. Study what metrics matter most to your application: usually it will be related to how fast you can start rendering the most important pixels of your product and how quickly you can provide input responsiveness for these rendered pixels. This knowledge will give you the best optimization target for ongoing efforts.

    One way or another, rather than focusing on full page loading time (via onLoad and DOMContentLoaded timings, for example), prioritize page loading as perceived by your customers. That means focusing on a slightly different set of metrics. In fact, choosing the right metric is a process without obvious winners.

    Based on Tim Kadlec’s research and Marcos Iglesias’ notes in his talk, traditional metrics could be grouped into a few sets. Usually, we’ll need all of them to get a complete picture of performance, and in your particular case some of them might be more important than others.

    • Quantity-based metrics measure the number of requests, weight and a performance score. Good for raising alarms and monitoring changes over time, not so good for understanding user experience.
    • Milestone metrics use states in the lifetime of the loading process, e.g. Time To First Byte and Time To Interactive. Good for describing the user experience and monitoring, not so good for knowing what happens between the milestones.
    • Rendering metrics provide an estimate of how fast content renders (e.g. Start Render time, Speed Index). Good for measuring and tweaking rendering performance, but not so good for measuring when important content appears and can be interacted with.
    • Custom metrics measure a particular, custom event for the user, e.g. Twitter’s Time To First Tweet and Pinterest’s PinnerWaitTime. Good for describing the user experience precisely, not so good for scaling the metrics and comparing with competitors.

    To complete the picture, we’d usually look out for useful metrics among all of these groups. Usually, the most specific and relevant ones are:

    • First Meaningful Paint (FMP)
      Provides the timing when primary content appears on the page, providing an insight into how quickly the server outputs any data. Long FMP usually indicates JavaScript blocking the main thread, but could be related to back-end/server issues as well.
    • Time to Interactive (TTI)
      The point at which layout has stabilized, key webfonts are visible, and the main thread is available enough to handle user input — basically the time mark when a user can interact with the UI. The key metric for understanding how long a user has to wait before they can use the site without a lag.
    • First Input Delay (FID), or Input responsiveness
      The time from when a user first interacts with your site to the time when the browser is actually able to respond to that interaction. Complements TTI very well as it describes the missing part of the picture: what happens when a user actually interacts with the site. Intended as a RUM metric only. There is a JavaScript library for measuring FID in the browser.
    • Speed Index
      Measures how quickly the page contents are visually populated; the lower the score, the better. The Speed Index score is computed based on the speed of visual progress, but it’s merely a computed value. It’s also sensitive to the viewport size, so you need to define a range of testing configurations that match your target audience (thanks, Boris!).
    • CPU time spent
      A metric that indicates how busy the main thread is with the processing of the payload. It shows how often and how long the main thread is blocked, working on painting, rendering, scripting and loading. High CPU time is a clear indicator of a janky experience, i.e. when the user experiences a noticeable lag between their action and a response. With WebPageTest, you can select “Capture Dev Tools Timeline” on the “Chrome” tab to expose the breakdown of the main thread as it runs on any device using WebPageTest.
    • Ad Weight Impact
      If your site depends on the revenue generated by advertising, it’s useful to track the weight of ad related code. Paddy Ganti’s script constructs two URLs (one normal and one blocking the ads), prompts the generation of a video comparison via WebPageTest and reports a delta.
    • Deviation metrics
      As noted by Wikipedia engineers, data of how much variance exists in your results could inform you how reliable your instruments are, and how much attention you should pay to deviations and outliers. Large variance is an indicator of adjustments needed in the setup. It also helps understand if certain pages are more difficult to measure reliably, e.g. due to third-party scripts causing significant variation. It might also be a good idea to track browser version to understand bumps in performance when a new browser version is rolled out.
    • Custom metrics
      Custom metrics are defined by your business needs and customer experience. It requires you to identify important pixels, critical scripts, necessary CSS and relevant assets and measure how quickly they get delivered to the user. For that one, you can monitor Hero Rendering Times, or use the Performance API, marking particular timestamps for events that are important for your business (see the sketch after this list). Also, you can collect custom metrics with WebPagetest by executing arbitrary JavaScript at the end of a test.
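
    As a small sketch of that last point, a hypothetical “hero image rendered” metric could be captured with User Timing marks and measures; the mark names and the /analytics endpoint are made up for illustration:

      // A sketch of a custom metric using the User Timing API.
      const heroImage = document.querySelector('.hero img');
      performance.mark('hero-start');

      heroImage.addEventListener('load', () => {
        performance.mark('hero-visible');
        performance.measure('hero-render-time', 'hero-start', 'hero-visible');

        const [measure] = performance.getEntriesByName('hero-render-time');
        // Ship the value to your analytics endpoint (placeholder URL).
        navigator.sendBeacon('/analytics', JSON.stringify({ heroRenderTime: measure.duration }));
      });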

    Steve Souders has a detailed explanation of each metric. It’s important to notice that while Time-To-Interactive is measured by running automated audits in the so-called lab environment, First Input Delay represents the actual user experience, with actual users experiencing a noticeable lag. In general, it’s probably a good idea to always measure and track both of them.

    Depending on the context of your application, preferred metrics might differ: e.g. for Netflix TV UI, key input responsiveness, memory usage and TTI are more critical, and for Wikipedia, first/last visual changes and CPU time spent metrics are more important.

    Note: both FID and TTI do not account for scrolling behavior; scrolling can happen independently since it’s off-main-thread, so for many content consumption sites these metrics might be much less important (thanks, Patrick!).

User-centric performance metrics provide a better insight into the actual user experience. First Input Delay (FID) is a new metric that tries to achieve just that. (Large preview)
  4. Gather data on a device representative of your audience.
    To gather accurate data, we need to thoroughly choose devices to test on. It’s a good option to choose a Moto G4, a mid-range Samsung device, a good middle-of-the-road device like a Nexus 5X and a slow device like Alcatel 1X, perhaps in an open device lab. For testing on slower thermal-throttled devices, you could also get a Nexus 2, which costs just around $ 100.

    If you don’t have a device at hand, emulate mobile experience on desktop by testing on a throttled network (e.g. 150ms RTT, 1.5 Mbps down, 0.7 Mbps up) with a throttled CPU (5× slowdown). Eventually switch over to regular 3G, 4G and Wi-Fi. To make the performance impact more visible, you could even introduce 2G Tuesdays or set up a throttled 3G network in your office for faster testing.

    Keep in mind that on a mobile device, you should be expecting a 4×–5× slowdown compared to desktop machines. Mobile devices have different GPUs, CPUs, memory and battery characteristics. While download times are critical for low-end networks, parse times are critical for phones with slow CPUs. In fact, parse times on mobile are 36% higher than on desktop. So always test on an average device — a device that is most representative of your audience.

    Introducing the slowest day of the week: Facebook has introduced 2G Tuesdays to increase visibility of and sensitivity to slow connections. (Image source)

    Luckily, there are many great options that help you automate the collection of data and measure how your website performs over time according to these metrics. Keep in mind that a good performance picture covers a set of performance metrics, lab data and field data:

    • Synthetic testing tools collect lab data in a reproducible environment with predefined device and network settings (e.g. Lighthouse, WebPageTest) and
    • Real User Monitoring (RUM) tools evaluate user interactions continuously and collect field data (e.g. SpeedCurve, New Relic — both tools provide synthetic testing, too).

    The former is particularly useful during development as it will help you identify, isolate and fix performance issues while working on the product. The latter is useful for long-term maintenance as it will help you understand your performance bottlenecks as they are happening live — when users actually access the site.

    By tapping into built-in RUM APIs such as Navigation Timing, Resource Timing, Paint Timing, Long Tasks, etc., synthetic testing tools and RUM together provide a complete picture of performance in your application. You could use PWMetrics, Calibre, SpeedCurve, mPulse and Boomerang, Sitespeed.io, which are all great options for performance monitoring. Furthermore, with the Server Timing header, you could even monitor back-end and front-end performance all in one place.
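
    As a minimal sketch of tapping into those RUM APIs, a PerformanceObserver can collect paint, long task and navigation entries and beacon them home (the /rum endpoint is a placeholder; sampling and batching are left out for brevity):

      // A sketch of field-data collection with PerformanceObserver.
      const entries = [];

      const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          entries.push({
            type: entry.entryType,
            name: entry.name,
            start: entry.startTime,
            duration: entry.duration
          });
        }
      });
      observer.observe({ entryTypes: ['paint', 'longtask', 'navigation'] });

      // Flush the collected entries when the page is hidden or unloaded.
      addEventListener('visibilitychange', () => {
        if (document.visibilityState === 'hidden' && entries.length) {
          navigator.sendBeacon('/rum', JSON.stringify(entries));
          entries.length = 0;
        }
      });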

    Note: It’s always a safer bet to choose network-level throttlers, external to the browser, as, for example, DevTools has issues interacting with HTTP/2 push, due to the way it’s implemented (thanks, Yoav, Patrick!). For Mac OS, we can use Network Link Conditioner, for Windows Windows Traffic Shaper, for Linux netem, and for FreeBSD dummynet.

Lighthouse
Lighthouse, a performance auditing tool integrated into DevTools. (Large preview)
  5. Set up “clean” and “customer” profiles for testing.
    While running tests in passive monitoring tools, it’s a common strategy to turn off anti-virus and background CPU tasks, remove background bandwidth transfers and test with a clean user profile without browser extensions to avoid skewed results (Firefox, Chrome).

    However, it’s also a good idea to study which extensions your customers are using frequently, and test with a dedicated “customer” profile as well. In fact, some extensions might have a profound performance impact on your application, and if your users use them a lot, you might want to account for it up front. “Clean” profile results alone are overly optimistic and can be crushed in real-life scenarios.

  6. Share the checklist with your colleagues.
    Make sure that the checklist is familiar to every member of your team to avoid misunderstandings down the line. Every decision has performance implications, and the project would hugely benefit from front-end developers properly communicating performance values to the whole team, so that everybody would feel responsible for it, not just front-end developers. Map design decisions against performance budget and the priorities defined in the checklist.

Setting Realistic Goals

  1. 100-millisecond response time, 60 fps.
    For an interaction to feel smooth, the interface has 100ms to respond to the user’s input. Any longer than that, and the user perceives the app as laggy. RAIL, a user-centered performance model, gives you healthy targets: to allow for a <100 milliseconds response, the page must yield control back to the main thread at latest after every 50 milliseconds (a minimal yielding sketch follows below). Estimated Input Latency tells us if we are hitting that threshold, and ideally, it should be below 50ms. For high-pressure points like animation, it’s best to do nothing else where you can and the absolute minimum where you can’t.

    RAIL
    RAIL, a user-centric performance model.

    Also, each frame of animation should be completed in less than 16 milliseconds, thereby achieving 60 frames per second (1 second ÷ 60 = 16.6 milliseconds) — preferably under 10 milliseconds. Because the browser needs time to paint the new frame to the screen, your code should finish executing before hitting the 16.6 milliseconds mark. We’re starting to have conversations about 120fps (e.g. iPad’s new screens run at 120Hz) and Surma has covered some rendering performance solutions for 120fps, but that’s probably not a target we’re looking at just yet.

    Be pessimistic in performance expectations, but be optimistic in interface design and use idle time wisely. Obviously, these targets apply to runtime performance, rather than loading performance.
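
    A common way to respect that 50-millisecond guideline is to chunk long-running work and yield back to the main thread between chunks. A rough sketch:

      // Process a large queue in small chunks so the main thread stays
      // responsive to input between chunks.
      function processInChunks(items, processItem) {
        function work() {
          const deadline = performance.now() + 45; // leave headroom under 50ms
          while (items.length && performance.now() < deadline) {
            processItem(items.shift());
          }
          if (items.length) {
            setTimeout(work, 0); // yield, continue in the next task
          }
        }
        work();
      }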

  2. Speed Index < 1250, TTI < 5s on 3G, Critical file size budget < 170KB (gzipped).
    Although it might be very difficult to achieve, a good ultimate goal would be First Meaningful Paint under 1 second and a Speed Index value under 1250. Considering the baseline to be a $200 Android phone (e.g. Moto G4) on a slow 3G network, emulated at 400ms RTT and 400kbps transfer speed, aim for Time To Interactive under 5s, and for repeat visits, aim for under 2s (achievable only with a service worker).

    Notice that, when speaking about interactivity metrics, it’s a good idea to distinguish between First CPU Idle and Time To Interactive to avoid misunderstandings down the line. The former is the earliest point after the main content has rendered (where there is at least a 5-second window where the page is responsive). The latter is the point where the page can be expected to always be responsive to input (thanks, Philip Walton!).

    We have two major constraints that effectively shape a reasonable target for speedy delivery of the content on the web. On the one hand, we have network delivery constraints due to TCP Slow Start. The first 14KB of the HTML is the most critical payload chunk — and the only part of the budget that can be delivered in the first roundtrip (which is all you get in 1 sec at 400ms RTT due to mobile wake-up times).

    On the other hand, we have hardware constraints on memory and CPU due to JavaScript parsing times (we’ll talk about them in detail later). To achieve the goals stated in the first paragraph, we have to consider the critical file size budget for JavaScript. Opinions vary on what that budget should be (and it heavily depends on the nature of your project), but a budget of 170KB JavaScript gzipped already would take up to 1s to parse and compile on an average phone. Assuming that 170KB expands to 3× that size when decompressed (0.7MB), that already could be the death knell of a “decent” user experience on a Moto G4 or Nexus 2.

    Of course, your data might show that your customers are not on these devices, but perhaps they simply don’t show up in your analytics because your service is inaccessible to them due to slow performance. In fact, Google’s Alex Russell recommends aiming for 130–170KB gzipped as a reasonable upper boundary, and exceeding this budget should be an informed and deliberate decision. In the real world, most products aren’t even close: an average bundle size today is around 400KB, which is up 35% compared to late 2015. On a middle-class mobile device, that accounts for 30–35 seconds for Time-To-Interactive.

    We could also go beyond the bundle size budget though. For example, we could set performance budgets based on the activities of the browser’s main thread, i.e. paint time before start render, or track down front-end CPU hogs. Tools such as Calibre, SpeedCurve and Bundlesize can help you keep your budgets in check, and can be integrated into your build process.

    Also, a performance budget probably shouldn’t be a fixed value. Performance budgets should adapt depending on the network connection, but a payload on a slower connection is much more “expensive”, regardless of how it’s used.

From Fast By Default: Modern Loading Best Practices by Addy Osmani (Slide 19)
Performance budgets should adapt depending on the network conditions for an average mobile device. (Image source: Katie Hempenius) (Large preview)

Defining The Environment

  1. Choose and set up your build tools.
    Don’t pay too much attention to what’s supposedly cool these days. Stick to your environment for building, be it Grunt, Gulp, Webpack, Parcel, or a combination of tools. As long as you are getting results you need and you have no issues maintaining your build process, you’re doing just fine.

    Among the build tools, Webpack seems to be the most established one, with literally hundreds of plugins available to optimize the size of your builds. Getting started with Webpack can be tough though, but if you want to get started, there are some great resources out there.

  2. Use progressive enhancement as a default.
    Keeping progressive enhancement as the guiding principle of your front-end architecture and deployment is a safe bet. Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating resilient experiences. If your website runs fast on a slow machine with a poor screen in a poor browser on a sub-optimal network, then it will only run faster on a fast machine with a good browser on a decent network.

  3. Choose a strong performance baseline.
    With so many unknowns impacting loading — the network, thermal throttling, cache eviction, third-party scripts, parser blocking patterns, disk I/O, IPC latency, installed extensions, antivirus software and firewalls, background CPU tasks, hardware and memory constraints, differences in L2/L3 caching, RTTs — JavaScript has the heaviest cost of the experience, next to web fonts blocking rendering by default and images often consuming too much memory. With the performance bottlenecks moving away from the server to the client, as developers, we have to consider all of these unknowns in much more detail.

    With a 170KB budget that already contains the critical-path HTML/CSS/JavaScript, router, state management, utilities, framework and the application logic, we have to thoroughly examine network transfer cost, the parse/compile time and the runtime cost of the framework of our choice.

    As noted by Seb Markbåge, a good way to measure start-up costs for frameworks is to first render a view, then delete it and then render again as it can tell you how the framework scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from when it scales. The second render is basically an emulation of how code reuse on a page affects the performance characteristics as the page grows in complexity. A rough measurement sketch follows below.

'Fast By Default: Modern Loading Best Practices' by Addy Osmani
From Fast By Default: Modern Loading Best Practices by Addy Osmani (Slides 18, 19).
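
    A rough sketch of that “render, delete, render again” measurement with React could look like the snippet below; the App component and the #root element are placeholders for your own code, and a bundler with JSX support is assumed:

      // Measuring framework start-up vs. re-render cost, following Seb Markbåge's approach.
      import React from 'react';
      import ReactDOM from 'react-dom';
      import App from './App';

      const root = document.getElementById('root');

      const t0 = performance.now();
      ReactDOM.render(<App />, root);        // first render: includes lazy compilation warm-up
      const t1 = performance.now();

      ReactDOM.unmountComponentAtNode(root); // delete the view

      const t2 = performance.now();
      ReactDOM.render(<App />, root);        // second render: approximates code-reuse cost
      const t3 = performance.now();

      console.log(`cold render: ${t1 - t0}ms, warm render: ${t3 - t2}ms`);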
  4. Evaluate each framework and each dependency.
    Now, not every project needs a framework and not every page of a single-page-application needs to load a framework. In Netflix’s case, “removing React, several libraries and the corresponding app code from the client-side reduced the total amount of JavaScript by over 200KB, causing an over-50% reduction in Netflix’s Time-to-Interactivity for the logged-out homepage.” The team then utilized the time spent by users on the landing page to prefetch React for subsequent pages that users were likely to land on (read on for details).

    It might sound obvious, but it is worth stating: some projects can also benefit from removing an existing framework altogether. Once a framework is chosen, you’ll be staying with it for at least a few years, so if you need to use one, make sure your choice is informed and well considered.

    Inian Parameshwaran has measured the performance footprint of the top 50 frameworks (against First Contentful Paint — the time from navigation to the time when the browser renders the first bit of content from the DOM). Inian discovered that, out there in the wild, Vue and Preact are the fastest across the board — both on desktop and mobile, followed by React (slides). You could examine your framework candidates and the proposed architecture, and study how most solutions out there perform, e.g. with server-side rendering or client-side rendering, on average.

    Baseline performance cost matters. According to a study by Ankur Sethi, “your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it. Your Angular app will always take at least 2.7 seconds to boot up. The users of your Vue app will need to wait at least 1 second before they can start using it.” You might not be targeting India as your primary market anyway, but users accessing your site with suboptimal network conditions will have a comparable experience. In exchange, your team gains maintainability and developer efficiency, of course. But this consideration needs to be deliberate.

    You could go as far as evaluating a framework (or any JavaScript library) on Sacha Greif’s 12-point scale scoring system by exploring features, accessibility, stability, performance, package ecosystem, community, learning curve, documentation, tooling, track record, team, compatibility, security for example. But on a tough schedule, it’s a good idea to consider at least the total cost on size + initial parse times before choosing an option; lightweight options such as Preact, Inferno, Vue, Svelte or Polymer can get the job done just fine. The size of your baseline will define the constraints for your application’s code.

    A good starting point is to choose a good default stack for your application. Gatsby.js (React), Preact CLI, and PWA Starter Kit provide reasonable defaults for fast loading out of the box on average mobile hardware.

JavaScript processing times in 2018 by Addy Osmani
(Image credit: Addy Osmani) (Large preview)
  5. Consider using the PRPL pattern and app shell architecture.
    Different frameworks will have different effects on performance and will require different strategies of optimization, so you have to clearly understand all of the nuts and bolts of the framework you’ll be relying on. When building a web app, look into the PRPL pattern and application shell architecture. The idea is quite straightforward: push the minimal code needed to get interactive for the initial route to render quickly, then use a service worker for caching and pre-caching resources and then lazy-load routes that you need, asynchronously (a minimal service worker sketch follows below).
PRPL Pattern in the application shell architecture
PRPL stands for Pushing critical resources, Rendering the initial route, Pre-caching remaining routes and Lazy-loading remaining routes on demand.
Application shell architecture
An application shell is the minimal HTML, CSS, and JavaScript powering a user interface.
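
    A minimal service worker sketch for pre-caching such an app shell (the file names are illustrative):

      // sw.js: a minimal app-shell pre-caching sketch.
      const CACHE = 'app-shell-v1';
      const SHELL = ['/', '/app-shell.css', '/app-shell.js'];

      self.addEventListener('install', (event) => {
        event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
      });

      self.addEventListener('fetch', (event) => {
        // Serve from the cache first, fall back to the network for everything else.
        event.respondWith(
          caches.match(event.request).then((cached) => cached || fetch(event.request))
        );
      });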
  6. Have you optimized the performance of your APIs?
    APIs are communication channels for an application to expose data to internal and third-party applications via so-called endpoints. When designing and building an API, we need a reasonable protocol to enable the communication between the server and third-party requests. Representational State Transfer (REST) is a well-established, logical choice: it defines a set of constraints that developers follow to make content accessible in a performant, reliable and scalable fashion. Web services that conform to the REST constraints are called RESTful web services.

    As with good ol’ HTTP requests, when data is retrieved from an API, any delay in server response will propagate to the end user, hence delaying rendering. When a resource wants to retrieve some data from an API, it will need to request the data from the corresponding endpoint. A component that renders data from several resources, such as an article with comments and author photos in each comment, may need several roundtrips to the server to fetch all the data before it can be rendered. Furthermore, the amount of data returned through REST is often more than what is needed to render that component.

    If many resources require data from an API, the API might become a performance bottleneck. GraphQL provides a performant solution to these issues. Per se, GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. Unlike REST, GraphQL can retrieve all data in a single request, and the response will be exactly what is required, without over- or under-fetching data, as typically happens with REST (see the query sketch below).

    In addition, because GraphQL is using schema (metadata that tells how the data is structured), it can already organize data into the preferred structure, so, for example, with GraphQL, we could remove JavaScript code used for dealing with state management, producing a cleaner application code that runs faster on the client.

    If you want to get started with GraphQL, Eric Baer published two fantastic articles on yours truly Smashing Magazine: A GraphQL Primer: Why We Need A New Kind Of API and A GraphQL Primer: The Evolution Of API Design (thanks for the hint, Leonardo!).

Hacker Noon
A difference between REST and GraphQL, illustrated via a conversation between Redux + REST on the left, and Apollo + GraphQL on the right. (Image source: Hacker Noon) (Large preview)
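
    As a small illustration, a single GraphQL request can fetch an article together with its comments and the commenters’ photos in one round trip; the /graphql endpoint and the field names below are hypothetical:

      // A sketch of a single-round-trip GraphQL query.
      const query = `
        query ArticleWithComments($id: ID!) {
          article(id: $id) {
            title
            comments {
              body
              author { name photoUrl }
            }
          }
        }
      `;

      fetch('/graphql', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query, variables: { id: '42' } })
      })
        .then((response) => response.json())
        .then(({ data }) => renderArticle(data.article)); // renderArticle is a placeholder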
  7. Will you be using AMP or Instant Articles?
    Depending on the priorities and strategy of your organization, you might want to consider using Google’s AMP or Facebook’s Instant Articles or Apple’s Apple News. You can achieve good performance without them, but AMP does provide a solid performance framework with a free content delivery network (CDN), while Instant Articles will boost your visibility and performance on Facebook.

    The seemingly obvious benefit of these technologies for users is guaranteed performance, so at times they might even prefer AMP, Apple News or Instant Articles links over “regular” and potentially bloated pages. For content-heavy websites that are dealing with a lot of third-party content, these options could potentially help speed up render times dramatically.

    Unless they don’t. According to Tim Kadlec, for example, “AMP documents tend to be faster than their counterparts, but they don’t necessarily mean a page is performant. AMP is not what makes the biggest difference from a performance perspective.”

    A benefit for the website owner is obvious: discoverability of these formats on their respective platforms and increased visibility in search engines. You could build progressive web AMPs, too, by reusing AMPs as a data source for your PWA. Downside? Obviously, a presence in a walled garden places developers in a position to produce and maintain a separate version of their content, and in case of Instant Articles and Apple News without actual URLs (thanks Addy, Jeremy!).

  8. Choose your CDN wisely.
    Depending on how much dynamic data you have, you might be able to “outsource” some part of the content to a static site generator, pushing it to a CDN and serving a static version from it, thus avoiding database requests. You could even choose a static-hosting platform based on a CDN, enriching your pages with interactive components as enhancements (JAMStack). In fact, some of those generators (like Gatsby on top of React) are actually website compilers with many automated optimizations provided out of the box. As compilers add optimizations over time, the compiled output gets smaller and faster over time.

    Notice that CDNs can serve (and offload) dynamic content as well. So, restricting your CDN to static assets is not necessary. Double-check whether your CDN performs compression and conversion (e.g. image optimization in terms of formats, compression and resizing at the edge), support for service workers, edge-side includes, which assemble static and dynamic parts of pages at the CDN’s edge (i.e. the server closest to the user), and other tasks.

    Note: based on research by Patrick Meenan and Andy Davies, HTTP/2 is effectively broken on many CDNs, so we shouldn’t be too optimistic about the performance boost there.

Assets Optimizations

  1. Use Brotli or Zopfli for plain text compression.
    In 2015, Google introduced Brotli, a new open-source lossless data format, which is now supported in all modern browsers. In practice, Brotli appears to be much more effective than Gzip and Deflate. It might be (very) slow to compress, depending on the settings, but slower compression will ultimately lead to higher compression rates. Still, it decompresses fast. You can also estimate Brotli compression savings for your site.

    Browsers will accept it only if the user is visiting a website over HTTPS though. What’s the catch? Brotli still doesn’t come preinstalled on some servers today, and it’s not straightforward to set up without self-compiling Nginx. Still, it’s not that difficult, and its support is coming, e.g. it’s available since Apache 2.4.26. Brotli is widely supported, and many CDNs support it (Akamai, AWS, KeyCDN, Fastly, Cloudflare, CDN77) and you can enable Brotli even on CDNs that don’t support it yet (with a service worker).

    At the highest level of compression, Brotli is so slow that any potential gains in file size could be nullified by the amount of time it takes for the server to begin sending the response as it waits to dynamically compress the asset. With static compression, however, higher compression settings are preferred.

    Alternatively, you could look into using Zopfli’s compression algorithm, which encodes data to Deflate, Gzip and Zlib formats. Any regular Gzip-compressed resource would benefit from Zopfli’s improved Deflate encoding because the files will be 3 to 8% smaller than Zlib’s maximum compression. The catch is that files will take around 80 times longer to compress. That’s why it’s a good idea to use Zopfli on resources that don’t change much, files that are designed to be compressed once and downloaded many times.

    If you can bypass the cost of dynamically compressing static assets, it’s worth the effort. Both Brotli and Zopfli can be used for any plaintext payload — HTML, CSS, SVG, JavaScript, and so on.

    The strategy? Pre-compress static assets with Brotli+Gzip at the highest level and compress (dynamic) HTML on the fly with Brotli at level 1–4. Make sure that the server handles content negotiation for Brotli or gzip properly (a minimal Node sketch follows below). If you can’t install/maintain Brotli on the server, use Zopfli.
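
    Such negotiation usually lives in the server or CDN configuration, but as a minimal Node sketch, serving pre-compressed files based on the Accept-Encoding header could look like this (content types and error handling are omitted for brevity):

      // A sketch of serving pre-compressed .br / .gz files, assuming the static assets
      // were compressed at build time and sit next to the originals in ./public.
      const http = require('http');
      const fs = require('fs');
      const path = require('path');

      http.createServer((req, res) => {
        const accepts = req.headers['accept-encoding'] || '';
        const filePath = path.join(__dirname, 'public', req.url);

        if (accepts.includes('br') && fs.existsSync(filePath + '.br')) {
          res.writeHead(200, { 'Content-Encoding': 'br', 'Vary': 'Accept-Encoding' });
          fs.createReadStream(filePath + '.br').pipe(res);
        } else if (accepts.includes('gzip') && fs.existsSync(filePath + '.gz')) {
          res.writeHead(200, { 'Content-Encoding': 'gzip', 'Vary': 'Accept-Encoding' });
          fs.createReadStream(filePath + '.gz').pipe(res);
        } else {
          fs.createReadStream(filePath).pipe(res);
        }
      }).listen(8080);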

  2. Use responsive images and WebP.
    As far as possible, use responsive images with srcset, sizes and the <picture> element. While you’re at it, you could also make use of the WebP format (supported in Chrome, Opera, Firefox 65, Edge 18) by serving WebP images with the <picture> element and a JPEG fallback (see Andreas Bovens’ code snippet) or by using content negotiation (using Accept headers). Ire Aderinokun has a very detailed tutorial on converting images to WebP, too.

    Sketch natively supports WebP, and WebP images can be exported from Photoshop using a WebP plugin for Photoshop. Other options are available, too. If you’re using WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla’s own supported extension (via Cody Arsenault).

    It’s important to note that while WebP image file sizes are usually smaller than equivalent Guetzli- and Zopfli-optimized files, the format doesn’t support progressive rendering like JPEG does, which is why users might see an actual image faster with a good ol’ JPEG although WebP images might travel through the network faster. With JPEG, we can serve a “decent” user experience with half or even a quarter of the data and load the rest later, rather than have a half-empty image as is the case with WebP. Your decision will depend on what you are after: with WebP, you’ll reduce the payload, and with JPEG you’ll improve perceived performance.

    On Smashing Magazine, we use the postfix -opt for image names — for example, brotli-compression-opt.png; whenever an image contains that postfix, everybody on the team knows that the image has already been optimized. And — shameless plug! — Jeremy Wagner even published a Smashing book on WebP.

Responsive Image Breakpoints Generator
The Responsive Image Breakpoints Generator automates images and markup generation.
  3. Are images properly optimized?
    When you’re working on a landing page on which it’s critical that a particular image loads blazingly fast, make sure that JPEGs are progressive and compressed with mozJPEG (which improves the start rendering time by manipulating scan levels) or Guetzli, Google’s new open-source encoder focusing on perceptual performance, and utilizing learnings from Zopfli and WebP. The only downside: slow processing times (a minute of CPU per megapixel). For PNG, we can use Pingo, and for SVG, we can use SVGO or SVGOMG. And if you need to quickly preview and copy or download all the SVG assets from a website, svg-grabber can do that for you, too.

    Every single image optimization article would state it, but keeping vector assets clean and tight is always worth a reminder. Make sure to clean up unused assets, remove unnecessary metadata and reduce the number of path points in artwork (and thus SVG code). (Thanks, Jeremy!)

    There are more advanced options though. You could:

    • Use Squoosh to compress, resize and manipulate images at the optimal compression levels (lossy or lossless),
    • Use the Responsive Image Breakpoints Generator or a service such as Cloudinary or Imgix to automate image optimization. Also, in many cases, using srcset and sizes alone will reap significant benefits.
    • To check the efficiency of your responsive markup, you can use imaging-heap, a command line tool that measures the efficiency across viewport sizes and device pixel ratios.
    • Lazy load images and iframes with lazysizes, a library that detects any visibility changes triggered through user interaction (or IntersectionObserver, which we’ll explore later; see the sketch after this list).
    • Watch out for images that are loaded by default, but might never be displayed — e.g. in carousels, accordions and image galleries.
    • Consider Swapping Images with the Sizes Attribute by specifying different image display dimensions depending on media queries, e.g. to manipulate sizes to swap sources in a magnifier component.
    • Review image download inconsistencies to prevent unexpected downloads for foreground and background images.
    • To optimize storage internally, you could use Dropbox’s new Lepton format for losslessly compressing JPEGs by an average of 22%.
    • Watch out for the aspect-ratio property in CSS and intrinsicsize attribute which will allow us to set aspect ratios and dimensions for images, so browser can reserve a pre-defined layout slot early to avoid layout jumps during the page load.
    • If you feel adventurous, you could chop and rearrange HTTP/2 streams using Edge workers, basically a real-time filter living on the CDN, to send images faster through the network. Edge workers use JavaScript streams that use chunks which you can control (basically they are JavaScript that runs on the CDN edge that can modify the streaming responses), so you can control the delivery of images. With service worker it’s too late as you can’t control what’s on the wire, but it does work with Edge workers. So you can use them on top of static JPEGs saved progressively for a particular landing page.
    A sample output by imaging-heap, a command line tool that measures the efficiency across viewport sizes and device pixel ratios. (Image source) (Large preview)

    The future of responsive images might change dramatically with the adoption of client hints. Client hints are HTTP request header fields, e.g. DPR, Viewport-Width, Width, Save-Data, Accept (to specify image format preferences) and others. They are supposed to inform the server about the specifics of the user’s browser, screen, connection, etc. As a result, the server can decide how to fill in the layout with appropriately sized images, and serve only those images in the desired formats. With client hints, we move the resource selection from HTML markup into the request-response negotiation between the client and server.

    As Ilya Grigorik noted, client hints complete the picture — they aren’t an alternative to responsive images. “The <picture> element provides the necessary art-direction control in the HTML markup. Client hints provide annotations on resulting image requests that enable resource selection automation. Service Worker provides full request and response management capabilities on the client.” A service worker could, for example, append new client hints headers values to the request, rewrite the URL and point the image request to a CDN, adapt response based on connectivity and user preferences, etc. It holds true not only for image assets but for pretty much all other requests as well.

    For clients that support client hints, one could measure 42% byte savings on images and 1MB+ fewer bytes for the 70th+ percentile. On Smashing Magazine, we could measure a 19–32% improvement, too. Unfortunately, client hints still have to gain broader browser support; they are under consideration in Firefox and Edge. However, if you supply both the normal responsive images markup and the <meta> tag for Client Hints, then the browser will evaluate the responsive images markup and request the appropriate image source using the Client Hints HTTP headers.
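
    As a rough illustration of combining the two — the opt-in mechanism and the file names below are assumptions, not prescriptions — the markup stays a plain srcset/sizes fallback while the server advertises which hints it accepts:

        <!-- Server response header opting into client hints, e.g.: Accept-CH: DPR, Width, Viewport-Width -->
        <img src="hero-800.jpg"
             srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
             sizes="(min-width: 60em) 50vw, 100vw"
             alt="Hero illustration">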

    Not good enough? Well, you can also improve perceived performance for images with the multiple background images technique. Keep in mind that playing with contrast and blurring out unnecessary details (or removing colors) can reduce file size as well. Ah, you need to enlarge a small photo without losing quality? Consider using Letsenhance.io.

    These optimizations so far cover just the basics. Addy Osmani has published a very detailed guide on Essential Image Optimization that goes very deep into details of image compression and color management. For example, you could blur out unnecessary parts of the image (by applying a Gaussian blur filter to them) to reduce the file size, and eventually you might even start removing colors or turn the picture into black and white to reduce the size even further. For background images, exporting photos from Photoshop with 0 to 10% quality can be absolutely acceptable as well. Ah, and don’t use JPEG-XR on the web — “the processing of decoding JPEG-XRs software-side on the CPU nullifies and even outweighs the potentially positive impact of byte size savings, especially in the context of SPAs”.

  2. Are videos properly optimized?
    We covered images so far, but we’ve avoided a conversation about good ol’ GIFs. Frankly, instead of loading heavy animated GIFs which impact both rendering performance and bandwidth, it’s a good idea to switch either to animated WebP (with GIF being a fallback) or replace them with looping HTML5 videos altogether. Yes, browser performance with <video> is slow, and, unlike with images, browsers do not preload <video> content, but HTML5 videos tend to be lighter and smaller than GIFs. Not an option? Well, at least we can add lossy compression to GIFs with Lossy GIF, gifsicle or giflossy.
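
    As a minimal sketch (file names are placeholders), a looping, muted HTML5 video replacing an animated GIF might look like this:

        <video autoplay loop muted playsinline width="640" height="360">
          <source src="animation.webm" type="video/webm">
          <source src="animation.mp4" type="video/mp4">
          <!-- Shown only by browsers that don't support <video> -->
          <img src="animation.gif" alt="Short demo animation">
        </video>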

    Early tests show that inline videos within img tags display 20× faster and decode 7× faster than the GIF equivalent, in addition to being a fraction of the file size. Although support for <img src=".mp4"> has landed in Safari Technology Preview, we are far from it being adopted widely as it’s not coming to Blink any time soon.

    Addy Osmani recommends to replace animated GIFs with looping inline videos. The file size difference is noticeable (80% savings). (Large preview)

    In the land of good news though, video formats have been advancing massively over the years. For a long time, we had hoped that WebM would become the format to rule them all, and that WebP (which is basically one still image inside of the WebM video container) would become a replacement for dated image formats. But despite WebP and WebM gaining support these days, the breakthrough didn’t happen.

    In 2018, the Alliance for Open Media released a new promising video format called AV1. AV1 has compression similar to the H.265 codec (the evolution of H.264) but, unlike the latter, AV1 is free. The H.265 license pricing pushed browser vendors to adopt the comparably performant AV1 instead: AV1 (just like H.265) compresses twice as well as WebP.

    AV1 Logo 2018
    AV1 has good chances of becoming the ultimate standard for video on the web. (Image credit: Wikimedia.org) (Large preview)

    In fact, Apple currently uses HEIF format and HEVC (H.265), and all the photos and videos on the latest iOS are saved in these formats, not JPEG. While HEIF and HEVC (H.265) aren’t properly exposed to the web (yet?), AV1 is — and it’s gaining browser support. So adding the AV1 source in your <video> tag is reasonable, as all browser vendors seem to be on board.
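
    A hedged sketch of what that could look like — the codec strings and file names here are illustrative assumptions, and browsers simply skip sources they can’t decode:

        <video controls>
          <source src="clip.av1.mp4" type='video/mp4; codecs="av01.0.05M.08"'>
          <source src="clip.h264.mp4" type='video/mp4; codecs="avc1.4D401E"'>
        </video>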

    For now, the most widely used and supported encoding is H.264, served by MP4 files, so before serving the file, make sure that your MP4s are processed with multi-pass encoding, blurred with the frei0r iirblur effect (if applicable), and that the moov atom metadata is moved to the head of the file, while your server accepts byte serving. Boris Schapira provides exact instructions for FFmpeg to optimize videos to the maximum. Of course, providing WebM format as an alternative would help, too.

    Video playback performance is a story on its own, and if you’d like to dive into it in detail, take a look at Doug Sillars’ series on The Current State of Video and Video Delivery Best Practices, which includes details on video delivery metrics, video preloading, compression and streaming.

Zach Leatherman’s Comprehensive Guide to Font-Loading Strategies
Zach Leatherman’s Comprehensive Guide to Font-Loading Strategies provides a dozen options for better web font delivery.
  1. Are web fonts optimized?
    The first question that’s worth asking if you can get away with using UI system fonts in the first place. If it’s not the case, chances are high that the web fonts you are serving include glyphs and extra features and weights that aren’t being used. You can ask your type foundry to subset web fonts or if you are using open-source fonts, subset them on your own with Glyphhanger or Fontsquirrel. You can even automate your entire workflow with Peter Müller’s subfont, a command line tool that statically analyses your page in order to generate the most optimal web font subsets, and then inject them into your page.

    WOFF2 support is great, and you can use WOFF as fallback for browsers that don’t support it — after all, legacy browsers would probably be served well enough with system fonts. There are many, many, many options for web font loading, and you can choose one of the strategies from Zach Leatherman’s “Comprehensive Guide to Font-Loading Strategies,” (code snippets also available as Web font loading recipes).

    Probably the better options to consider today are Critical FOFT with preload and “The Compromise” method. Both of them use a two-stage render for delivering web fonts in steps — first a small supersubset required to render the page fast and accurately with the web font, and then load the rest of the family async. The difference is that “The Compromise” technique loads polyfill asynchronously only if font load events are not supported, so you don’t need to load the polyfill by default. Need a quick win? Zach Leatherman has a quick 23-min tutorial and case study to get your fonts in order.

    In general, it’s a good idea to use the preload resource hint to preload fonts, but in your markup include the hints after the link to critical CSS and JavaScript. Otherwise, font loading will cost you in the first render time. Still, it might be a good idea to be selective and choose files that matter most, e.g. the ones that are critical for rendering or that would help you avoiding visible and disruptive text reflows. In general, Zach advises to preload one or two fonts of each family — it also makes sense to delay some font loading if they are less-critical.

    Nobody likes waiting for the content to be displayed. With the font-display CSS descriptor, we can control the font loading behavior and enable content to be readable immediately (font-display: optional) or almost immediately (font-display: swap). However, if you want to avoid text reflows, we still need to use the Font Loading API, specifically to group repaints, or when you are using third party hosts. Unless you can use Google Fonts with Cloudflare Workers, of course. Talking about Google Fonts: consider using google-webfonts-helper, a hassle-free way to self-host Google Fonts. Always self-host your fonts for maximum control if you can.

    In general, if you use font-display: optional, it might not be a good idea to also use preload as it will trigger that web font request early (causing network congestion if you have other critical path resources that need to be fetched). Use preconnect for faster cross-origin font requests, but be cautious with preload as preloading fonts from a different origin will incur network contention. All of these techniques are covered in Zach’s Web font loading recipes.

    Also, it might be a good idea to opt out of web fonts (or at least second stage render) if the user has enabled Reduce Motion in accessibility preferences or has opted in for Data Saver Mode (see Save-Data header). Or when the user happens to have slow connectivity (via Network Information API).

    To measure the web font loading performance, consider the All Text Visible metric (the moment when all fonts have loaded and all content is displayed in web fonts), as well as Web Font Reflow Count after first render. Obviously, the lower both metrics are, the better the performance is. It’s important to note that variable fonts might require a significant performance consideration. They give designers a much broader design space for typographic choices, but it comes at the cost of a single serial request, as opposed to a number of individual file requests. That single request might be slow, blocking the entire typographic appearance on the page. On the good side though, with a variable font in place, we’ll get exactly one reflow by default, so no JavaScript will be required to group repaints.

    Now, what would make a bulletproof web font loading strategy? Subset fonts and prepare them for the 2-stage-render, declare them with a font-display descriptor, use Font Loading API to group repaints and store fonts in a persistent service worker’s cache. You could fall back to Bram Stein’s Font Face Observer if necessary. And if you’re interested in measuring the performance of font loading, Andreas Marschke explores performance tracking with Font API and UserTiming API.
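
    For the “group repaints” part, a minimal sketch with the Font Loading API could look like the following (the family name “Elena” and the CSS hook are assumptions):

        if ('fonts' in document) {
          Promise.all([
            document.fonts.load('1em Elena'),
            document.fonts.load('bold 1em Elena')
          ]).then(function () {
            // Both weights are ready, so we switch to web fonts in a single repaint
            document.documentElement.classList.add('fonts-loaded');
          });
        }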

    Finally, don’t forget to include unicode-range to break down a large font into smaller language-specific fonts, and use Monica Dinculescu’s font-style-matcher to minimize a jarring shift in layout, due to sizing discrepancies between the fallback and the web fonts.

Build Optimizations

  1. Set your priorities straight.
    It’s a good idea to know what you are dealing with first. Run an inventory of all of your assets (JavaScript, images, fonts, third-party scripts and “expensive” modules on the page, such as carousels, complex infographics and multimedia content), and break them down in groups.

    Set up a spreadsheet. Define the basic core experience for legacy browsers (i.e. fully accessible core content), the enhanced experience for capable browsers (i.e. the enriched, full experience) and the extras (assets that aren’t absolutely required and can be lazy-loaded, such as web fonts, unnecessary styles, carousel scripts, video players, social media buttons, large images). A while back, we published an article on “Improving Smashing Magazine’s Performance,” which describes this approach in detail.

    When optimizing for performance we need to reflect our priorities. Load the core experience immediately, then enhancements, and then the extras.

  2. Revisit the good ol’ “cutting-the-mustard” technique.
    These days we can still use the cutting-the-mustard technique to send the core experience to legacy browsers and an enhanced experience to modern browsers. An updated variant of the technique would use ES2015+ <script type="module">. Modern browsers would interpret the script as a JavaScript module and run it as expected, while legacy browsers wouldn’t recognize the attribute and ignore it because it’s unknown HTML syntax.

    These days we need to keep in mind that feature detection alone isn’t enough to make an informed decision about the payload to ship to that browser. On its own, cutting-the-mustard deduces device capability from browser version, which is no longer something we can do today.

    For example, cheap Android phones in developing countries mostly run Chrome and will cut the mustard despite their limited memory and CPU capabilities. Eventually, using the Device Memory Client Hints Header, we’ll be able to target low-end devices more reliably. At the moment of writing, the header is supported only in Blink (it goes for client hints in general). Since Device Memory also has a JavaScript API which is already available in Chrome, one option could be to feature detect based on the API, and fall back to “cutting the mustard” technique only if it’s not supported (thanks, Yoav!).
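
    A rough sketch of that fallback logic — the 1GB threshold and the feature checks are assumptions chosen for illustration:

        function shouldServeLiteExperience() {
          // Prefer the Device Memory API where available (Chrome 63+)
          if ('deviceMemory' in navigator) {
            return navigator.deviceMemory <= 1; // roughly 1GB of RAM or less
          }
          // Otherwise, fall back to classic cutting-the-mustard feature detection
          return !('querySelector' in document && 'addEventListener' in window);
        }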

  3. Parsing JavaScript is expensive, so keep it small.
    When dealing with single-page applications, we need some time to initialize the app before we can render the page. Your setup will require a custom solution, but you can watch out for modules and techniques that speed up the initial rendering time. For example, here’s how to debug React performance and eliminate common React performance issues, and here’s how to improve performance in Angular. In general, most performance issues come from the initial parsing time to bootstrap the app.

    JavaScript has a cost, but it’s rarely the file size alone that drains on performance. Parsing and executing times vary significantly depending on the hardware of a device. On an average phone (Moto G4), a parsing time alone for 1MB of (uncompressed) JavaScript will be around 1.3–1.4s, with 15–20% of all time on mobile spent on parsing. With compiling in play, just prep work on JavaScript takes 4s on average, with around 11s before First Meaningful Paint on mobile. Reason: parse and execution times can easily be 2–5x times higher on low-end mobile devices.

    To guarantee high performance, as developers, we need to find ways to write and deploy less JavaScript. That’s why it pays off to examine every single JavaScript dependency in detail.

    There are many tools to help you make an informed decision about the impact of your dependencies and find viable alternatives.

    An interesting way of avoiding parsing costs is to use binary templates that Ember has introduced in 2017. With them, Ember replaces JavaScript parsing with JSON parsing, which is presumably faster. (Thanks, Leonardo, Yoav!)

    Measure JavaScript parse and compile times. We can use synthetic testing tools and browser traces to track parse times, and browser implementors are talking about exposing RUM-based processing times in the future. Alternatively, consider using Etsy’s DeviceTiming, a little tool allowing you to instruct your JavaScript to measure parse and execution time on any device or browser.

    Bottom line: while size matters, it isn’t everything. Parse and compiling times don’t necessarily increase linearly when the script size increases.

  4. Are you using tree-shaking, scope hoisting and code-splitting?
    Tree-shaking is a way to clean up your build process by only including code that is actually used in production and eliminating unused imports in Webpack. With Webpack and Rollup, we also have scope hoisting that allows both tools to detect where import chaining can be flattened and converted into one inlined function without compromising the code. With Webpack, we can also use JSON Tree Shaking.

    Also, you might want to consider learning how to write efficient CSS selectors as well as how to avoid bloat and expensive styles. Feeling like going beyond that? You can also use Webpack to shorten the class names and use scope isolation to rename CSS class names dynamically at the compilation time.

    Code-splitting is another Webpack feature that splits your code base into “chunks” that are loaded on demand. Not all of the JavaScript has to be downloaded, parsed and compiled right away. Once you define split points in your code, Webpack can take care of the dependencies and outputted files. It enables you to keep the initial download small and to request code on demand as the application needs it. Alexander Kondrov has a fantastic introduction to code-splitting with Webpack and React.

    Consider using preload-webpack-plugin, which takes the routes you code-split and then prompts the browser to preload them using <link rel="preload"> or <link rel="prefetch">. Webpack inline directives also give some control over preload/prefetch.
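
    For instance, a dynamic import() with Webpack’s inline directives might look like this sketch (the chunk and module names are assumptions):

        document.querySelector('#show-charts').addEventListener('click', function () {
          import(/* webpackChunkName: "charts", webpackPrefetch: true */ './charts.js')
            .then(function (module) { module.renderCharts(); })
            .catch(function (error) { console.error('Chunk failed to load', error); });
        });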

    Where to define split points? By tracking which chunks of CSS/JavaScript are used, and which aren’t used. Umar Hansa explains how you can use Code Coverage from Devtools to achieve it.

    If you aren’t using Webpack, note that Rollup shows significantly better results than Browserify exports. While we’re at it, you might want to check out rollup-plugin-closure-compiler and Rollupify, which converts ECMAScript 2015 modules into one big CommonJS module — because small modules can have a surprisingly high performance cost depending on your choice of bundler and module system.

  5. Can you offload JavaScript into a Web Worker?
    To reduce the negative impact to Time-to-Interactive, it might be a good idea to look into offloading heavy JavaScript into a Web Worker or caching via a Service Worker.

    As the code base keeps growing, the UI performance bottlenecks will show up, slowing down the user’s experience. That’s because DOM operations are running alongside your JavaScript on the main thread. With web workers, we can move these expensive operations to a background process that’s running on a different thread. Typical use cases for web workers are prefetching data and Progressive Web Apps to load and store some data in advance so that you can use it later when needed. And you could use Comlink to streamline the communication between the main page and the worker. Still some work to do, but we are getting there.

    Workerize allows you to move a module into a Web Worker, automatically reflecting exported functions as asynchronous proxies. And if you’re using Webpack, you could use workerize-loader. Alternatively, you could use worker-plugin as well.

    Note that Web Workers don’t have access to the DOM because the DOM is not “thread-safe”, and the code that they execute needs to be contained in a separate file.
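
    As a minimal sketch (the worker file name and payload are assumptions), offloading work to a worker looks roughly like this:

        // main.js
        var worker = new Worker('/js/heavy-work.js');
        worker.onmessage = function (event) {
          console.log('Result computed off the main thread:', event.data);
        };
        worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

        // /js/heavy-work.js
        self.onmessage = function (event) {
          // The expensive work happens here, off the main thread
          var sum = event.data.numbers.reduce(function (a, b) { return a + b; }, 0);
          self.postMessage(sum);
        };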

  6. Can you offload JavaScript into WebAssembly?
    We could potentially also convert JavaScript into WebAssembly, a binary instruction format, designed as a portable target for compilation of high-level languages like C/C++/Rust. Its browser support is remarkable, and it has recently become viable as function calls between JavaScript and WASM are getting faster, at least in Firefox.

    In real-world scenarios, JavaScript seems to perform better than WebAssembly on smaller array sizes and WebAssembly performs better than JavaScript on larger array sizes. For most web apps, JavaScript is a better fit, and WebAssembly is best used for computationally intensive web apps, such as web games. However, it might be worth investigating if a switch to WebAssembly would result in noticeable performance improvements.

    If you’d like to learn more about WebAssembly:

A general overview of how WebAssembly works and why it’s useful.
Milica Mihajlija provides a general overview of how WebAssembly works and why it’s useful. (Large preview)
  1. Are you using an ahead-of-time compiler?
    Use an ahead-of-time compiler to offload some of the client-side rendering to the server and, hence, output usable results quickly. Finally, consider using Optimize.js for faster initial loading by wrapping eagerly invoked functions (it might not be necessary any longer, though).
'Fast By Default: Modern Loading Best Practices' by Addy Osmani
From Fast By Default: Modern Loading Best Practices by the one-and-only Addy Osmani. Slide 76.
  1. Serve legacy code only to legacy browsers.
    With ES2015 being remarkably well supported in modern browsers, we can use babel-preset-env to only transpile ES2015+ features unsupported by the modern browsers you are targeting. Then set up two builds, one in ES6 and one in ES5. As mentioned above, JavaScript modules are now supported in all major browsers, so use <script type="module"> to let browsers with ES module support load the file, while older browsers load legacy builds served with <script nomodule>. And we can automate the entire process with Webpack ESNext Boilerplate.
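
    In markup, that pattern could look like this sketch (file names are assumptions):

        <!-- Modern browsers load the ES2015+ build and ignore nomodule -->
        <script type="module" src="/js/app.esm.js"></script>
        <!-- Legacy browsers ignore type="module" and load the transpiled build -->
        <script nomodule src="/js/app.legacy.js" defer></script>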

    Note that these days we can write module-based JavaScript that runs natively in the browser, without transpilers or bundlers. The <link rel="modulepreload"> header provides a way to initiate early (and high-priority) loading of module scripts. Basically, it’s a nifty way to help in maximizing bandwidth usage, by telling the browser about what it needs to fetch so that it’s not stuck with nothing to do during those long roundtrips. Also, Jake Archibald has published a detailed article with gotchas and things to keep in mind with ES Modules that’s worth reading.

    For lodash, use babel-plugin-lodash that will load only modules that you are using in your source. Your dependencies might also depend on other versions of Lodash, so transform generic lodash requires to cherry-picked ones to avoid code duplication. This might save you quite a bit of JavaScript payload.

    Shubham Kanodia has written a detailed, low-maintenance guide on smart bundling: shipping legacy code only to legacy browsers in production, with a code snippet you can use right away.

As explained in Jake Archibald’s article, inline scripts are deferred until blocking external scripts and inline scripts are executed.
Jake Archibald has published a detailed article with gotchas and things to keep in mind with ES Modules, e.g. inline scripts are deferred until blocking external scripts and inline scripts are executed. (Large preview)
  1. Are you using differential serving for JavaScript?
    We want to send just the necessary JavaScript through the network, yet it means being slightly more focused and granular about the delivery of those assets. A while back Philip Walton introduced the idea of differential serving. The idea is to compile and serve two separate JavaScript bundles: the “regular” build with Babel transforms and polyfills, served only to legacy browsers that actually need them, and another bundle (same functionality) that has no transforms or polyfills.

    As a result, we help reduce blocking of the main thread by reducing the amount of scripts the browser needs to process. Jeremy Wagner has published a comprehensive article on differential serving and how to set it up in your build pipeline in 2019, from setting up Babel, to what tweaks you’ll need to make in Webpack, as well as the benefits of doing all this work.

  2. Identify and rewrite legacy code with incremental decoupling.
    Long-living projects have a tendency to gather dust and dated code. Revisit your dependencies and assess how much time would be required to refactor or rewrite legacy code that has been causing trouble lately. Of course, it’s always a big undertaking, but once you know the impact of the legacy code, you could start with incremental decoupling.

    First, set up a metric that tracks whether the ratio of legacy code calls is staying constant or going down, not up. Publicly discourage the team from using the library, and make sure that your CI alerts developers if it’s used in pull requests. Polyfills could help with the transition from legacy code to a rewritten codebase that uses standard browser features.

  3. Identify and remove unused CSS/JS.
    CSS and JavaScript code coverage in Chrome allows you to learn which code has been executed/applied and which hasn’t. You can start recording the coverage, perform actions on a page, and then explore the code coverage results. Once you’ve detected unused code, find those modules and lazy load with import() (see the entire thread). Then repeat the coverage profile and validate that it’s now shipping less code on initial load.

    You can use Puppeteer to programmatically collect code coverage, and Canary already allows you to export code coverage results, too. As Andy Davies noted, you might want to collect code coverage for both modern and legacy browsers though. There are many other use cases for Puppeteer, such as automatic visual diffing or monitoring unused CSS with every build.
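
    A hedged sketch of collecting JavaScript coverage with Puppeteer (the URL is a placeholder):

        const puppeteer = require('puppeteer');

        (async () => {
          const browser = await puppeteer.launch();
          const page = await browser.newPage();
          await page.coverage.startJSCoverage();
          await page.goto('https://example.com');
          const coverage = await page.coverage.stopJSCoverage();
          // Sum up the bytes that were never executed during the visit
          const unusedBytes = coverage.reduce((total, entry) => {
            const used = entry.ranges.reduce((sum, range) => sum + (range.end - range.start), 0);
            return total + (entry.text.length - used);
          }, 0);
          console.log('Roughly ' + unusedBytes + ' bytes of JavaScript never executed');
          await browser.close();
        })();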

    Furthermore, purgecss, UnCSS and Helium can help you remove unused styles from CSS. And if you aren’t certain if a suspicious piece of code is used somewhere, you can follow Harry Roberts’ advice: create a 1×1px transparent GIF for a particular class and drop it into a dead/ directory, e.g. /assets/img/dead/comments.gif. After that, you set that specific image as a background on the corresponding selector in your CSS, sit back and wait for a few months if the file is going to appear in your logs. If there are no entries, nobody had that legacy component rendered on their screen: you can probably go ahead and delete it all.

    For the I-feel-adventurous department, you could even automate gathering of unused CSS through a set of pages by monitoring DevTools using DevTools.

  4. Trim the size of your JavaScript bundles.
    As Addy Osmani noted, there’s a high chance you’re shipping full JavaScript libraries when you only need a fraction, along with dated polyfills for browsers that don’t need them, or just duplicate code. To avoid the overhead, consider using webpack-libs-optimizations that removes unused methods and polyfills during the build process.

    Add bundle auditing into your regular workflow as well. There might be some lightweight alternatives to heavy libraries you’ve added years ago, e.g. Moment.js could be replaced with date-fns or Luxon. Benedikt Rötsch’s research showed that a switch from Moment.js to date-fns could shave around 300ms for First paint on 3G and a low-end mobile phone.

    That’s where tools like Bundlephobia could help find the cost of adding a npm package to your bundle. You can even integrate these costs with a Lighthouse Custom Audit. This goes for frameworks, too. By removing or trimming the Vue MDC Adapter (Material Components for Vue), styles drop from 194KB to 10KB.

    Feeling adventurous? You could look into Prepack. It compiles JavaScript to equivalent JavaScript code but, unlike Babel or Uglify, lets you write normal JavaScript and outputs equivalent code that runs faster.

    Alternatively to shipping the entire framework, you could even trim your framework and compile it into a raw JavaScript bundle that does not require additional code. Svelte does it, and so does Rawact Babel plugin which transpiles React.js components to native DOM operations at build-time. Why? Well, as maintainers explain, “react-dom includes code for every possible component/HTMLElement that can be rendered, including code for incremental rendering, scheduling, event handling, etc. But there are applications which do not need all these features (at initial page load). For such applications, it might make sense to use native DOM operations to build the interactive user interface.”

Webpack comparison
In his article, Benedikt Rötsch showed that a switch from Moment.js to date-fns could shave around 300ms for First Paint on 3G and a low-end mobile phone. (Large preview)
  1. Are you using predictive prefetching for JavaScript chunks?
    We could use heuristics to decide when to preload JavaScript chunks. Guess.js is a set of tools and libraries that use Google Analytics data to determine which page a user is most likely to visit next from a given page. Based on user navigation patterns collected from Google Analytics or other sources, Guess.js builds a machine-learning model to predict and prefetch the JavaScript that will be required on each subsequent page.

    Hence, every interactive element is receiving a probability score for engagement, and based on that score, a client-side script decides to prefetch a resource ahead of time. You can integrate the technique to your Next.js application, Angular and React, and there is a Webpack plugin which automates the setup process as well.

    Obviously, you might be prompting the browser to consume unneeded data and prefetch undesirable pages, so it’s a good idea to be quite conservative in the number of prefetched requests. A good use case would be prefetching validation scripts required in the checkout, or speculative prefetch when a critical call-to-action comes into the viewport.

    Need something less sophisticated? Quicklink is a small library that automatically prefetches links in the viewport during idle time in an attempt to make next-page navigations load faster. However, it’s also data-considerate, so it doesn’t prefetch on 2G or if Data-Saver is on.

  2. Take advantage of optimizations for your target JavaScript engine.
    Study what JavaScript engines dominate in your user base, then explore ways of optimizing for them. For example, when optimizing for V8 which is used in Blink-browsers, Node.js runtime and Electron, make use of script streaming for monolithic scripts. It allows async or defer scripts to be parsed on a separate background thread once downloading begins, hence in some cases improving page loading times by up to 10%. Practically, use <script defer> in the <head>, so that the browsers can discover the resource early and then parse it on the background thread.

    Caveat: Opera Mini doesn’t support script deferment, so if you are developing for India or Africa, defer will be ignored, resulting in blocking rendering until the script has been evaluated (thanks Jeremy!).

Progressive booting
Progressive booting means using server-side rendering to get a quick first meaningful paint, but also include some minimal JavaScript to keep the time-to-interactive close to the first meaningful paint.
  1. Client-side rendering or server-side rendering?
    In both scenarios, our goal should be to set up progressive booting: Use server-side rendering to get a quick first meaningful paint, but also include some minimal necessary JavaScript to keep the time-to-interactive close to the first meaningful paint. If JavaScript is coming too late after the First Meaningful Paint, the browser might lock up the main thread while parsing, compiling and executing late-discovered JavaScript, hence handcuffing the interactivity of site or application.

    To avoid it, always break up the execution of functions into separate, asynchronous tasks, and where possible use requestIdleCallback. Consider lazy loading parts of the UI using WebPack’s dynamic import() support, avoiding the load, parse, and compile cost until the users really need them (thanks Addy!).

    In its essence, Time to Interactive (TTI) tells us the time between navigation and interactivity. The metric is defined by looking at the first five-second window after the initial content is rendered, in which no JavaScript tasks take longer than 50ms. If a task over 50ms occurs, the search for a five-second window starts over. As a result, the browser will first assume that it reached Interactive, just to switch to Frozen, just to eventually switch back to Interactive.

    Once we’ve reached Interactive, we can then — either on demand or as time allows — boot non-essential parts of the app. Unfortunately, as Paul Lewis noticed, frameworks typically have no concept of priority that can be surfaced to developers, and hence progressive booting is difficult to implement with most libraries and frameworks. If you have the time and resources, use this strategy to ultimately boost performance.

    So, client-side or server-side? If there is no visible benefit to the user, client-side rendering might not be really necessary — actually, server-side-rendered HTML could be faster. Perhaps you could even pre-render some of your content with static site generators and push them straight to the CDNs, with some JavaScript on top.

    Limit the use of client-side frameworks to pages that absolutely require them. Server-rendering and client-rendering are a disaster if done poorly. Consider pre-rendering at build time and CSS inlining on the fly to produce production-ready static files. Addy Osmani has given a fantastic talk on the Cost of JavaScript that might be worth watching.

  2. Constrain the impact of third-party scripts.
    With all performance optimizations in place, often we can’t control third-party scripts coming from business requirements. Metrics for third-party scripts rarely account for the end-user experience, so too often one single script ends up calling a long tail of obnoxious third-party scripts, ruining a dedicated performance effort. To contain and mitigate performance penalties that these scripts bring along, it’s not enough to just load them asynchronously (probably via defer) and accelerate them via resource hints such as dns-prefetch or preconnect.

    As Yoav Weiss explained in his must-watch talk on third-party scripts, in many cases these scripts download resources that are dynamic. The resources change between page loads, so we don’t necessarily know which hosts the resources will be downloaded from and what resources they would be.

    What options do we have then? Consider using service workers by racing the resource download with a timeout and if the resource hasn’t responded within a certain timeout, return an empty response to tell the browser to carry on with parsing of the page. You can also log or block third-party requests that aren’t successful or don’t fulfill certain criteria. If you can, load the 3rd-party-script from your own server rather than from the vendor’s server.
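
    One way such a race could be sketched inside a service worker — the third-party domain and the 3-second timeout are assumptions:

        self.addEventListener('fetch', function (event) {
          if (event.request.url.indexOf('third-party.example.com') !== -1) {
            var timeout = new Promise(function (resolve) {
              // Give up after 3 seconds and return an empty response so the page can carry on
              setTimeout(function () { resolve(new Response('', { status: 408 })); }, 3000);
            });
            event.respondWith(Promise.race([fetch(event.request), timeout]));
          }
        });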

    Casper.com published a detailed case study on how they managed to shave 1.7 seconds off the site by self-hosting Optimizely. It might be worth it.
    Casper.com published a detailed case study on how they managed to shave 1.7 seconds off the site by self-hosting Optimizely. It might be worth it. (Image source) (Large preview)

    Another option is to establish a Content Security Policy (CSP) to restrict the impact of third-party scripts, e.g. disallowing the download of audio or video. The best option is to embed scripts via <iframe> so that the scripts are running in the context of the iframe and hence don’t have access to the DOM of the page, and can’t run arbitrary code on your domain. Iframes can be further constrained using the sandbox attribute, so you can disable any functionality that iframe may do, e.g. prevent scripts from running, prevent alerts, form submission, plugins, access to the top navigation, and so on.

    For example, it’s probably going to be necessary to allow scripts to run with <iframe sandbox="allow-scripts">. Each of the limitations can be lifted via various allow values on the sandbox attribute (supported almost everywhere), so constrain them to the bare minimum of what they should be allowed to do.

    Consider using Intersection Observer; that would enable ads to be iframed while still dispatching events or getting the information that they need from the DOM (e.g. ad visibility). Watch out for new policies such as Feature policy, resource size limits and CPU/Bandwidth priority to limit harmful web features and scripts that would slow down the browser, e.g. synchronous scripts, synchronous XHR requests, document.write and outdated implementations.

    To stress-test third parties, examine bottom-up summaries on the Performance profile page in DevTools, and test what happens if a request is blocked or has timed out — for the latter, you can use WebPageTest’s Blackhole server blackhole.webpagetest.org that you can point specific domains to in your hosts file. Preferably self-host and use a single hostname, but also generate a request map that exposes fourth-party calls and detect when the scripts change. You can use Harry Roberts’ approach for auditing third parties and produce spreadsheets like this one. Harry also explains the auditing workflow in his talk on third-party performance and auditing.

request blocking
Image credit: Harry Roberts
  1. Set HTTP cache headers properly.
    Double-check that expires, max-age, cache-control, and other HTTP cache headers have been set properly. In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static) — you can just change their version in the URL when needed. Disable the Last-Modified header, as any asset with it will result in a conditional request with an If-Modified-Since header even if the resource is in cache. Same with Etag.

    Use Cache-control: immutable, designed for fingerprinted static resources, to avoid revalidation (as of December 2018, supported in Firefox, Edge and Safari; in Firefox only on https:// transactions). In fact “across all of the pages in the HTTP Archive, 2% of requests and 30% of sites appear to include at least 1 immutable response. Additionally, most of the sites that are using it have the directive set on assets that have a long freshness lifetime.”

    Remember the stale-while-revalidate? As you probably know, we specify the caching time with the Cache-Control response header, e.g. Cache-Control: max-age=604800. After 604800 seconds have passed, the cache will re-fetch the requested content, causing the page to load slower. This slowdown can be avoided by using stale-while-revalidate; it basically defines an extra window of time during which a cache can use a stale asset as long as it revalidates it async in the background. Thus, it “hides” latency (both in the network and on the server) from clients.
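
    Illustratively, such a response header could look like this (the durations are arbitrary examples): cache for a week, then allow a stale copy for one extra day while revalidating in the background:

        Cache-Control: max-age=604800, stale-while-revalidate=86400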

    In October 2018, Chrome published an intent to ship handling of stale-while-revalidate in HTTP Cache-Control header, so as a result, it should improve subsequent page load latencies as stale assets are no longer in the critical path. Result: zero RTT for repeat views.

    You can use Heroku’s primer on HTTP caching headers, Jake Archibald’s “Caching Best Practices” and Ilya Grigorik’s HTTP caching primer as guides. Also, be wary of the vary header, especially in relation to CDNs, and watch out for the Key header which helps avoiding an additional round trip for validation whenever a new request differs slightly (but not significantly) from prior requests (thanks, Guy!).

    Also, double-check that you aren’t sending unnecessary headers (e.g. x-powered-by, pragma, x-ua-compatible, expires and others) and that you include useful security and performance headers (such as Content-Security-Policy, X-XSS-Protection, X-Content-Type-Options and others). Finally, keep in mind the performance cost of CORS requests in single-page applications.

Delivery Optimizations

  1. Do you load all JavaScript libraries asynchronously?
    When the user requests a page, the browser fetches the HTML and constructs the DOM, then fetches the CSS and constructs the CSSOM, and then generates a rendering tree by matching the DOM and CSSOM. If any JavaScript needs to be resolved, the browser won’t start rendering the page until it’s resolved, thus delaying rendering. As developers, we have to explicitly tell the browser not to wait and to start rendering the page. The way to do this for scripts is with the defer and async attributes in HTML.

    In practice, it turns out we should prefer defer to async (at a cost to users of Internet Explorer up to and including version 9, because you’re likely to break scripts for them). According to Steve Souders, once async scripts arrive, they are executed immediately. If that happens very fast, for example when the script is in the cache already, it can actually block the HTML parser. With defer, the browser doesn’t execute scripts until the HTML is parsed. So, unless you need JavaScript to execute before the start of rendering, it’s better to use defer.
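
    In markup, a sketch of the two attributes (script names are placeholders):

        <!-- Independent script, fine to execute as soon as it arrives -->
        <script src="/js/analytics.js" async></script>
        <!-- Executes in order, only after the HTML has been parsed -->
        <script src="/js/app.js" defer></script>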

    Also, as mentioned above, limit the impact of third-party libraries and scripts, especially with social sharing buttons and <iframe> embeds (such as maps). Size Limit helps you prevent JavaScript libraries bloat: If you accidentally add a large dependency, the tool will inform you and throw an error. You can use static social sharing buttons (such as by SSBG) and static links to interactive maps instead.

    You might want to revise your non-blocking script loader for CSP compliance.

  2. Lazy load expensive components with IntersectionObserver.
    In general, it’s a good idea to lazy-load all expensive components, such as heavy JavaScript, videos, iframes, widgets, and potentially images. The most performant way to do so is by using the Intersection Observer API that provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport. Basically, you need to create a new IntersectionObserver object, which receives a callback function and a set of options. Then we add a target to observe.

    The callback function executes when the target becomes visible or invisible, so when it intersects the viewport, you can start taking some actions before the element becomes visible. In fact, we have granular control over when the observer’s callback should be invoked, with rootMargin (margin around the root) and threshold (a single number or an array of numbers which indicate at what percentage of the target’s visibility we are aiming).
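
    A minimal sketch of that pattern for images — the data-src convention and the rootMargin/threshold values are assumptions:

        var observer = new IntersectionObserver(function (entries, obs) {
          entries.forEach(function (entry) {
            if (!entry.isIntersecting) { return; }
            var img = entry.target;
            img.src = img.dataset.src; // swap in the real source
            obs.unobserve(img);        // no need to keep watching it
          });
        }, { rootMargin: '200px 0px', threshold: 0.01 });

        document.querySelectorAll('img[data-src]').forEach(function (img) {
          observer.observe(img);
        });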

    Alejandro Garcia Anglada has published a handy tutorial on how to actually implement it, Rahul Nanwani wrote a detailed post on lazy-loading foreground and background images, and Google Fundamentals provide a detailed tutorial on lazy loading images and video with Intersection Observer as well. Remember art-directed storytelling long reads with moving and sticky objects? You can implement performant scrollytelling with Intersection Observer, too.

    Also, watch out for the lazyload attribute that will allow us to specify which images and iframes should be lazy loaded, natively. Feature policy: LazyLoad will provide a mechanism that allows us to force opting in or out of LazyLoad functionality on a per-domain basis (similar to how Content Security Policies work). Bonus: once shipped, priority hints will allow us to specify importance on scripts and preloads in the header as well (currently in Chrome Canary).

  3. Load images progressively.
    You could even take lazy loading to the next level by adding progressive image loading to your pages. Similarly to Facebook, Pinterest and Medium, you could load low quality or even blurry images first, and then as the page continues to load, replace them with the full quality versions by using the LQIP (Low Quality Image Placeholders) technique proposed by Guy Podjarny.

    Opinions differ on whether these techniques improve the user experience, but they definitely improve the time to first meaningful paint. We can even automate it by using SQIP, which creates a low-quality version of an image as an SVG placeholder, or Gradient Image Placeholders with CSS linear gradients. These placeholders can be embedded within HTML as they naturally compress well with text compression methods. In his article, Dean Hume has described how this technique can be implemented using Intersection Observer.

    Browser support? Decent, with Chrome, Firefox, Edge and Samsung Internet on board; WebKit support is currently in preview. Fallback? If the browser doesn’t support Intersection Observer, we can still lazy load a polyfill or load the images immediately. And there is even a library for it.

    Want to go fancier? You could trace your images and use primitive shapes and edges to create a lightweight SVG placeholder, load it first, and then transition from the placeholder vector image to the (loaded) bitmap image.

  4. SVG lazy loading technique by José M. Pérez
    SVG lazy loading technique by José M. Pérez. (Large preview)
  5. Do you send critical CSS?
    To ensure that browsers start rendering your page as quickly as possible, it’s become a common practice to collect all of the CSS required to start rendering the first visible portion of the page (known as “critical CSS” or “above-the-fold CSS”) and add it inline in the <head> of the page, thus reducing roundtrips. Due to the limited size of packages exchanged during the slow start phase, your budget for critical CSS is around 14 KB.

    If you go beyond that, the browser will need additional roundtrips to fetch more styles. CriticalCSS and Critical enable you to do just that. You might need to do it for every template you’re using. If possible, consider using the conditional inlining approach used by the Filament Group, or convert inline code to static assets on the fly.

    With HTTP/2, critical CSS could be stored in a separate CSS file and delivered via a server push without bloating the HTML. The catch is that server pushing is troublesome with many gotchas and race conditions across browsers. It isn’t supported consistently and has some caching issues (see slide 114 onwards of Hooman Beheshti’s presentation). The effect could, in fact, be negative and bloat the network buffers, preventing genuine frames in the document from being delivered. Also, it appears that server pushing is much more effective on warm connections due to the TCP slow start.

    Even with HTTP/1, putting critical CSS in a separate file on the root domain has benefits, sometimes even more than inlining due to caching. Chrome speculatively opens a second HTTP connection to the root domain when requesting the page, which removes the need for a TCP connection to fetch this CSS (thanks, Philip!)

    A few gotchas to keep in mind: unlike preload that can trigger preload from any domain, you can only push resources from your own domain or domains you are authoritative for. It can be initiated as soon as the server gets the very first request from the client. Server pushed resources land in the Push cache and are removed when the connection is terminated. However, since an HTTP/2 connection can be re-used across multiple tabs, pushed resources can be claimed by requests from other tabs as well (thanks, Inian!).

    At the moment, there is no simple way for the server to know if pushed resources are already in one of the user’s caches, so resources will keep being pushed with every user’s visit. You may then need to create a cache-aware HTTP/2 server push mechanism: once fetched, you could try to serve resources from a cache based on an index of what’s already there, avoiding secondary server pushes altogether.

    Keep in mind, though, that the new cache-digest specification negates the need to manually build such “cache-aware” servers, basically declaring a new frame type in HTTP/2 to communicate what’s already in the cache for that hostname. As such, it could be particularly useful for CDNs as well.

    For dynamic content, when a server needs some time to generate a response, the browser isn’t able to make any requests since it’s not aware of any sub-resources that the page might reference. For that case, we can warm up the connection and increase the TCP congestion window size, so that future requests can be completed faster. Also, all inlined assets are usually good candidates for server pushing. In fact, Inian Parameshwaran did remarkable research comparing HTTP/2 Push vs. HTTP Preload, and it’s a fantastic read with all the details you might need. Server Push or Not Server Push? Colin Bendell’s Should I Push? might point you in the right direction.

    Bottom line: As Sam Saccone noted, preload is good for moving the start download time of an asset closer to the initial request, while Server Push is good for cutting out a full RTT (or more, depending on your server think time) — if you have a service worker to prevent unnecessary pushing, that is.

  6. Experiment with regrouping your CSS rules.
    We’ve got used to critical CSS, but there are a few optimizations that could go beyond that. Harry Roberts conducted a remarkable research with quite surprising results. For example, it might be a good idea to split the main CSS file out into its individual media queries. That way, the browser will retrieve critical CSS with high priority, and everything else with low priority — completely off the critical path.

    Also, avoid placing <link rel="stylesheet" /> before async snippets. If scripts don’t depend on stylesheets, consider placing blocking scripts above blocking styles. If they do, split that JavaScript in two and load it either side of your CSS.

    Scott Jehl solved another interesting problem by caching an inlined CSS file with a service worker, a problem familiar to anyone using critical CSS. Basically, we add an ID attribute onto the style element so that it’s easy to find using JavaScript, then a small piece of JavaScript finds that CSS and uses the Cache API to store it in a local browser cache (with a content type of text/css) for use on subsequent pages. To avoid inlining on subsequent pages and instead reference the cached assets externally, we then set a cookie on the first visit to a site. Voilà!

Do we stream responses? With streaming, HTML rendered during the initial navigation request can take full advantage of the browser’s streaming HTML parser.
  1. Do you stream responses?
    Often forgotten and neglected, streams provide an interface for reading or writing asynchronous chunks of data, only a subset of which might be available in memory at any given time. Basically, they allow the page that made the original request to start working with the response as soon as the first chunk of data is available, and use parsers that are optimized for streaming to progressively display the content.

    We could create one stream from multiple sources. For example, instead of serving an empty UI shell and letting JavaScript populate it, you can let the service worker construct a stream where the shell comes from a cache, but the body comes from the network. As Jeff Posnick noted, if your web app is powered by a CMS that server-renders HTML by stitching together partial templates, that model translates directly into using streaming responses, with the templating logic replicated in the service worker instead of your server. Jake Archibald’s The Year of Web Streams article highlights how exactly you could build it. Performance boost is quite noticeable.

    One important advantage of streaming the entire HTML response is that HTML rendered during the initial navigation request can take full advantage of the browser’s streaming HTML parser. Chunks of HTML that are inserted into a document after the page has loaded (as is common with content populated via JavaScript) can’t take advantage of this optimization.

    Browser support? Getting there with Chrome 52+, Firefox 57+ (behind flag), Safari and Edge supporting the API and Service Workers being supported in all modern browsers.

  2. Consider making your components connection-aware.
    Data can be expensive and with growing payload, we need to respect users who choose to opt into data savings while accessing our sites or apps. The Save-Data client hint request header allows us to customize the application and the payload to cost- and performance-constrained users. In fact, you could rewrite requests for high DPI images to low DPI images, remove web fonts, fancy parallax effects, preview thumbnails and infinite scroll, turn off video autoplay, server pushes, reduce the number of displayed items and downgrade image quality, or even change how you deliver markup. Tim Vereecke has published a very detailed article on data-s(h)aver strategies featuring many options for data saving.

    The header is currently supported only in Chromium, on the Android version of Chrome or via the Data Saver extension on a desktop device. Finally, you can also use the Network Information API to deliver low/high resolution images and videos based on the network type. Network Information API and specifically navigator.connection.effectiveType (Chrome 62+) use RTT, downlink, effectiveType values (and a few others) to provide a representation of the connection and the data that users can handle.

    In this context, Max Stoiber speaks of connection-aware components. For example, with React, we could write a component that renders different elements for different connection types. As Max suggested, a <Media /> component in a news article might output:

    • Offline: a placeholder with alt text,
    • 2G / save-data mode: a low-resolution image,
    • 3G on non-Retina screen: a mid-resolution image,
    • 3G on Retina screens: high-res Retina image,
    • 4G: an HD video.
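
    A rough, framework-agnostic sketch of such a decision, using the Network Information API mentioned above (the file names and thresholds are assumptions):

        function pickMediaSource() {
          var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
          if (!navigator.onLine) {
            return { type: 'placeholder', src: '/img/placeholder.svg' };
          }
          if (!connection) {
            return { type: 'image', src: '/img/photo-medium.jpg' };
          }
          if (connection.saveData || connection.effectiveType === 'slow-2g' || connection.effectiveType === '2g') {
            return { type: 'image', src: '/img/photo-low.jpg' };
          }
          if (connection.effectiveType === '3g') {
            return {
              type: 'image',
              src: window.devicePixelRatio > 1 ? '/img/photo-high.jpg' : '/img/photo-medium.jpg'
            };
          }
          return { type: 'video', src: '/video/story.mp4' }; // effectively 4g
        }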

    Dean Hume provides a practical implementation of a similar logic using a service worker. For a video, we could display a video poster by default, and then display the “Play” icon as well as the video player shell, meta-data of the video etc. on better connections. As a fallback for non-supporting browsers, we could listen to canplaythrough event and use Promise.race() to timeout the source loading if the canplaythrough event doesn’t fire within 2 seconds.

  3. Consider making your components device memory-aware.
    Network connection gives us only one perspective on the context of the user though. Going further, you could also dynamically adjust resources based on available device memory, with the Device Memory API (Chrome 63+). navigator.deviceMemory returns how much RAM the device has in gigabytes, rounded down to the nearest power of two. The API also features a Client Hints Header, Device-Memory, that reports the same value.

The priority column in DevTools
The ‘Priority’ column in DevTools. Image credit: Ben Schwarz, The Critical Request
  1. Warm up the connection to speed up delivery.
    Use resource hints to save time: dns-prefetch (which performs a DNS lookup in the background), preconnect (which asks the browser to start the connection handshake (DNS, TCP, TLS) in the background), prefetch (which asks the browser to request a resource) and preload (which prefetches resources without executing them, among other things).

    Most of the time these days, we’ll be using at least preconnect and dns-prefetch, and we’ll be cautious with using prefetch and preload; the former should only be used if you are confident about what assets the user will need next (for example, in a purchasing funnel).

    Note that even with preconnect and dns-prefetch, the browser has a limit on the number of hosts it will look up/connect to in parallel, so it’s a safe bet to order them based on priority (thanks Philip!).

    In fact, using resource hints is probably the easiest way to boost performance, and it works well indeed. When to use what? As Addy Osmani has explained, we should preload resources that we have high-confidence will be used in the current page. Prefetch resources likely to be used for future navigations across multiple navigation boundaries, e.g. Webpack bundles needed for pages the user hasn’t visited yet.

    Addy’s article on “Loading Priorities in Chrome” shows how exactly Chrome interprets resource hints, so once you’ve decided which assets are critical for rendering, you can assign high priority to them. To see how your requests are prioritized, you can enable a “priority” column in the Chrome DevTools network request table (as well as Safari Technology Preview).

    For example, since fonts usually are important assets on a page, it’s always a good idea to request the browser to download fonts with preload. You could also load JavaScript dynamically, effectively lazy-loading execution. Also, since <link rel="preload"> accepts a media attribute, you could choose to selectively prioritize resources based on @media query rules.
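    As a rough sketch of that “preload now, execute later” pattern (the URL is a placeholder), you could inject the hint from JavaScript and only execute the script once it’s actually needed:

    // Preload a script without executing it; the as attribute is required.
    const url = '/assets/non-critical.js'; // placeholder URL
    const hint = document.createElement('link');
    hint.rel = 'preload';
    hint.as = 'script';
    hint.href = url;
    document.head.appendChild(hint);

    // Later, when the functionality is needed, executing it should hit the cache.
    function runWhenNeeded() {
      const script = document.createElement('script');
      script.src = url;
      document.body.appendChild(script);
    }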

    A few gotchas to keep in mind: preload is good for moving the start download time of an asset closer to the initial request, but preloaded assets land in the memory cache which is tied to the page making the request. preload plays well with the HTTP cache: a network request is never sent if the item is already there in the HTTP cache.

    Hence, it’s useful for late-discovered resources, a hero image loaded via background-image, inlining critical CSS (or JavaScript) and pre-loading the rest of the CSS (or JavaScript). Also, a preload tag can initiate a preload only after the browser has received the HTML from the server and the lookahead parser has found the preload tag.

    Preloading via the HTTP header is a bit faster since we don’t have to wait for the browser to parse the HTML to start the request. Early Hints will help even further, enabling preload to kick in even before the response headers for the HTML are sent, and Priority Hints (coming soon) will help us indicate loading priorities for scripts.

    Beware: if you’re using preload, the as attribute must be defined or nothing loads; also, preloaded fonts without the crossorigin attribute will be fetched twice.

  2. Use service workers for caching and network fallbacks.
    No performance optimization over a network can be faster than a locally stored cache on a user’s machine. If your website is running over HTTPS, use the “Pragmatist’s Guide to Service Workers” to cache static assets in a service worker cache and store offline fallbacks (or even offline pages) and retrieve them from the user’s machine, rather than going to the network. Also, check Jake’s Offline Cookbook and the free Udacity course “Offline Web Applications.”

    Browser support? As stated above, it’s widely supported (Chrome, Firefox, Safari TP, Samsung Internet, Edge 17+) and the fallback is the network anyway. Does it help boost performance? Oh yes, it does. And it’s getting better, e.g. with Background Fetch allowing background uploads/downloads from a service worker. Shipped in Chrome 71.

    There are a number of use cases for a service worker. For example, you could implement “Save for offline” feature, handle broken images, introduce messaging between tabs or provide different caching strategies based on request types. In general, a common reliable strategy is to store the app shell in the service worker’s cache along with a few critical pages, such as offline page, frontpage and anything else that might be important in your case.
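    As a starting point, a minimal app-shell service worker might look like the sketch below (the cache name and URLs are placeholders; a library such as Workbox handles the many edge cases for you):

    // sw.js — a bare-bones "app shell + offline fallback" sketch.
    const CACHE = 'app-shell-v1';
    const SHELL = ['/', '/offline.html', '/styles/main.css', '/scripts/app.js'];

    self.addEventListener('install', (event) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) =>
          // Cache first, then network, then the offline page as a last resort.
          cached || fetch(event.request).catch(() => caches.match('/offline.html'))
        )
      );
    });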

    There are a few gotchas to keep in mind though. With a service worker in place, we need to beware of range requests in Safari (if you are using Workbox for a service worker, it has a range requests module). If you ever stumble upon a DOMException: Quota exceeded error in the browser console, then look into Gerardo’s article When 7KB equals 7MB.

    As Gerardo writes, “If you are building a progressive web app and are experiencing bloated cache storage when your service worker caches static assets served from CDNs, make sure the proper CORS response header exists for cross-origin resources, you do not cache opaque responses with your service worker unintentionally, you opt-in cross-origin image assets into CORS mode by adding the crossorigin attribute to the <img> tag.”

    A good starting point for using service workers would be Workbox, a set of service worker libraries built specifically for building progressive web apps.

  3. Are you using service workers on the CDN/Edge, e.g. for A/B testing?
    At this point, we are quite used to running service workers on the client, but with CDNs implementing them on the server, we could use them to tweak performance on the edge as well.

    For example, in A/B tests, when HTML needs to vary its content for different users, we could use Service Workers on the CDN servers to handle the logic. We could also stream HTML rewriting to speed up sites that use Google Fonts.
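    To make the idea concrete, here is a hedged sketch of cookie-based A/B bucketing written against the service-worker-style API used by Cloudflare Workers; the cookie name, variant paths and 50/50 split are assumptions:

    addEventListener('fetch', (event) => {
      event.respondWith(handleRequest(event.request));
    });

    async function handleRequest(request) {
      const cookie = request.headers.get('Cookie') || '';
      const variant = cookie.includes('ab-test=b') ? 'b'
        : cookie.includes('ab-test=a') ? 'a'
        : (Math.random() < 0.5 ? 'a' : 'b');

      // Fetch the variant-specific HTML from the origin.
      const url = new URL(request.url);
      url.pathname = variant === 'b' ? '/index-b.html' : '/index.html';
      const response = await fetch(url.toString(), request);

      // Persist the assignment so the user keeps seeing the same variant.
      const headers = new Headers(response.headers);
      headers.append('Set-Cookie', `ab-test=${variant}; Path=/`);
      return new Response(response.body, { status: response.status, headers });
    }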

  4. Optimize rendering performance.
    Isolate expensive components with CSS containment — for example, to limit the scope of the browser’s styles, of layout and paint work for off-canvas navigation, or of third-party widgets. Make sure that there is no lag when scrolling the page or when an element is animated, and that you’re consistently hitting 60 frames per second. If that’s not possible, then at least making the frames per second consistent is preferable to a mixed range of 60 to 15. Use CSS’ will-change to inform the browser of which elements and properties will change.

    Also, measure runtime rendering performance (for example, in DevTools). To get started, check Paul Lewis’ free Udacity course on browser-rendering optimization and Georgy Marchuk’s article on Browser painting and considerations for web performance.
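    For a quick sanity check in the wild, you could also count frames with requestAnimationFrame. This is only a rough debugging sketch; the Performance panel in DevTools remains the more reliable tool:

    // Log an approximate frames-per-second reading once per second.
    let frames = 0;
    let last = performance.now();

    function tick(now) {
      frames++;
      if (now - last >= 1000) {
        console.log(`~${frames} fps`); // ideally a steady 60
        frames = 0;
        last = now;
      }
      requestAnimationFrame(tick);
    }

    requestAnimationFrame(tick);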

    If you want to dive deeper into the topic, Nolan Lawson has shared tricks to accurately measure layout performance in his article, and Jason Miller suggested alternative techniques, too. We also have a lil’ article by Sergey Chikuyonok on how to get GPU animation right. Quick note: changes to GPU-composited layers are the least expensive, so if you can get away by triggering only compositing via opacity and transform, you’ll be on the right track. Anna Migas has provided a lot of practical advice in her talk on Debugging UI Rendering Performance, too.

  5. Have you optimized rendering experience?
    While the sequence of how components appear on the page, and the strategy of how we serve assets to the browser matter, we shouldn’t underestimate the role of perceived performance, too. The concept deals with psychological aspects of waiting, basically keeping customers busy or engaged while something else is happening. That’s where perception management, preemptive start, early completion and tolerance management come into play.

    What does it all mean? While loading assets, we can try to always be one step ahead of the customer, so the experience feels swift while there is quite a lot happening in the background. To keep the customer engaged, we can test skeleton screens (implementation demo) instead of loading indicators, add transitions/animations and basically cheat the UX when there is nothing more to optimize. Beware though: skeleton screens should be tested before deploying as some tests showed that skeleton screens can perform the worst by all metrics.

HTTP/2

  1. Migrate to HTTPS, then turn on HTTP/2.
    With Google moving towards a more secure web and the eventual treatment of all HTTP pages in Chrome as being “not secure,” a switch to an HTTP/2 environment is unavoidable. HTTP/2 is supported very well; it isn’t going anywhere; and, in most cases, you’re better off with it. Once you’re already running on HTTPS, you can get a major performance boost with service workers and server push (at least in the long term).
    Eventually, Google plans to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that Chrome uses for broken HTTPS. (Image source)

    The most time-consuming task will be to migrate to HTTPS, and depending on how large your HTTP/1.1 user base is (that is, users on legacy operating systems or with legacy browsers), you’ll have to send a different build for legacy browsers, with their own performance optimizations, which would require you to adopt a different build process. Beware: Setting up both migration and a new build process might be tricky and time-consuming. For the rest of this article, I’ll assume that you’re either switching to or have already switched to HTTP/2.

  2. Properly deploy HTTP/2.
    Again, serving assets over HTTP/2 requires a partial overhaul of how you’ve been serving assets so far. You’ll need to find a fine balance between packaging modules and loading many small modules in parallel. At the end of the day, the best request is still no request; however, the goal is to find a good balance between quick first delivery of assets and caching.

    On the one hand, you might want to avoid concatenating assets altogether, instead breaking down your entire interface into many small modules, compressing them as a part of the build process, referencing them via the “scout” approach and loading them in parallel. A change in one file won’t require the entire style sheet or JavaScript to be re-downloaded. It also minimizes parsing time and keeps the payloads of individual pages low.

    On the other hand, packaging still matters. First, compression will suffer. The compression of a large package will benefit from dictionary reuse, whereas small separate packages will not. There’s standard work to address that, but it’s far out for now. Secondly, browsers have not yet been optimized for such workflows. For example, Chrome will trigger inter-process communications (IPCs) linear to the number of resources, so including hundreds of resources will have browser runtime costs.

    Progressive CSS loading
    To achieve the best results with HTTP/2, consider loading CSS progressively, as suggested by Chrome’s Jake Archibald.

    Still, you can try to load CSS progressively. In fact, since Chrome 69, in-body CSS no longer blocks rendering for Chrome. Obviously, by doing so, you are actively penalizing HTTP/1.1 users, so you might need to generate and serve different builds to different browsers as part of your deployment process, which is where things get slightly more complicated. You could get away with HTTP/2 connection coalescing, which allows you to use domain sharding while benefiting from HTTP/2, but achieving this in practice is difficult, and in general, it’s not considered to be good practice.

    What to do? Well, if you’re running over HTTP/2, sending around 6–10 packages seems like a decent compromise (and isn’t too bad for legacy browsers). Experiment and measure to find the right balance for your website.

  3. Do your servers and CDNs support HTTP/2?
    Different servers and CDNs are probably going to support HTTP/2 differently. Use Is TLS Fast Yet? to check your options, or quickly look up how your servers are performing and which features you can expect to be supported.

    Consult Pat Meenan’s incredible research on HTTP/2 priorities and test server support for HTTP/2 prioritization. According to Pat, it’s recommended to enable BBR congestion control and set tcp_notsent_lowat to 16KB for HTTP/2 prioritization to work reliably on Linux 4.9 kernels and later (thanks, Yoav!). Andy Davies did similar research on HTTP/2 prioritization across browsers, CDNs and cloud hosting services.

Is TLS Fast Yet?
Is TLS Fast Yet? allows you to check your options for servers and CDNs when switching to HTTP/2. (Large preview)
  1. Is OCSP stapling enabled?
    By enabling OCSP stapling on your server, you can speed up your TLS handshakes. The Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol. Both protocols are used to check whether an SSL certificate has been revoked. However, the OCSP protocol does not require the browser to spend time downloading and then searching a list for certificate information, hence reducing the time required for a handshake.
  2. Have you adopted IPv6 yet?
    Because we’re running out of space with IPv4 and major mobile networks are adopting IPv6 rapidly (the US has reached a 50% IPv6 adoption threshold), it’s a good idea to update your DNS to IPv6 to stay bulletproof for the future. Just make sure that dual-stack support is provided across the network — it allows IPv6 and IPv4 to run simultaneously alongside each other. After all, IPv6 is not backwards-compatible. Also, studies show that IPv6 has made websites 10 to 15% faster due to neighbor discovery (NDP) and route optimization.
  3. Is HPACK compression in use?
    If you’re using HTTP/2, double-check that your servers implement HPACK compression for HTTP response headers to reduce unnecessary overhead. Because HTTP/2 servers are relatively new, they may not fully support the specification, with HPACK being an example. H2spec is a great (if very technically detailed) tool to check that. HPACK’s compression algorithm is quite impressive, and it works.
  4. Make sure the security on your server is bulletproof.
    All browser implementations of HTTP/2 run over TLS, so you will probably want to avoid security warnings or some elements on your page not working. Double-check that your security headers are set properly, eliminate known vulnerabilities, and check your certificate. Also, make sure that all external plugins and tracking scripts are loaded via HTTPS, that cross-site scripting isn’t possible and that both HTTP Strict Transport Security headers and Content Security Policy headers are properly set.

Testing And Monitoring

  1. Have you optimized your auditing workflow?
    It might not sound like a big deal, but having the right settings in place at your fingertips might save you quite a bit of time in testing. Consider using Tim Kadlec’s Alfred Workflow for WebPageTest for submitting a test to the public instance of WebPageTest.

    You could also drive WebPageTest from a Google Spreadsheet and incorporate accessibility, performance and SEO scores into your Travis setup with Lighthouse CI or straight into Webpack.
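    If you’d rather script the audit yourself than wire up Lighthouse CI, a rough Node sketch using the lighthouse and chrome-launcher packages could fail the build when the score drops below a budget (the URL and the 90-point threshold are assumptions):

    const lighthouse = require('lighthouse');
    const chromeLauncher = require('chrome-launcher');

    (async () => {
      const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
      const result = await lighthouse('https://example.com/', {
        port: chrome.port,
        onlyCategories: ['performance'],
      });
      await chrome.kill();

      const score = result.lhr.categories.performance.score * 100;
      console.log(`Performance score: ${score}`);
      if (score < 90) process.exit(1); // fail the CI job below the budget
    })();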

    And if you need to debug something quickly but your build process seems to be remarkably slow, keep in mind that “whitespace removal and symbol mangling accounts for 95% of the size reduction in minified code for most JavaScript — not elaborate code transforms. You can simply disable compression to speed up Uglify builds by 3 to 4 times.”

Integrating accessibility, performance and SEO scores into your Travis setup with Lighthouse CI will highlight the performance impact of a new feature to all contributing developers. (Image source) (Large preview)
  1. Have you tested in proxy browsers and legacy browsers?
    Testing in Chrome and Firefox is not enough. Look into how your website works in proxy browsers and legacy browsers. UC Browser and Opera Mini, for instance, have a significant market share in Asia (up to 35%). Measure average Internet speed in your countries of interest to avoid big surprises down the road. Test with network throttling, and emulate a high-DPI device. BrowserStack is fantastic, but test on real devices as well.
k6 allows you to write unit-test-like performance tests.
  1. Have you tested the accessibility performance?
    When the browser starts to load a page, it builds a DOM, and if there is an assistive technology like a screen reader running, it also creates an accessibility tree. The screen reader then has to query the accessibility tree to retrieve the information and make it available to the user — sometimes by default, and sometimes on demand. And sometimes it takes time.

    When talking about fast Time to Interactive, usually we mean an indicator of how soon a user can interact with the page by clicking or tapping on links and buttons. The context is slightly different with screen readers. In that case, fast Time to Interactive means how much time passes until the screen reader can announce navigation on a given page and a screen reader user can actually hit the keyboard to interact.

    Léonie Watson has given an eye-opening talk on accessibility performance and specifically the impact slow loading has on screen reader announcement delays. Screen readers are used to fast-paced announcements and quick navigation, and therefore might potentially be even less patient than sighted users.

    Large pages and DOM manipulations with JavaScript will cause delays in screen reader announcements. This is a rather unexplored area that could use some attention and testing, as screen readers are available on literally every platform (JAWS, NVDA, VoiceOver, Narrator, Orca).

  2. Is continuous monitoring set up?
    Having a private instance of WebPageTest is always beneficial for quick and unlimited tests. However, a continuous monitoring tool — like Sitespeed, Calibre or SpeedCurve — with automatic alerts will give you a more detailed picture of your performance. Set your own user-timing marks to measure and monitor business-specific metrics. Also, consider adding automated performance regression alerts to monitor changes over time.

    Look into using RUM-solutions to monitor changes in performance over time. For automated unit-test-alike load testing tools, you can use k6 with its scripting API. Also, look into SpeedTracker, Lighthouse and Calibre.
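    A k6 script is plain JavaScript, so a load test with a latency budget can stay close to a unit test. The sketch below (URL, virtual-user count and thresholds are assumptions) would be run with k6 run script.js:

    import http from 'k6/http';
    import { check, sleep } from 'k6';

    export const options = {
      vus: 10,            // 10 virtual users
      duration: '30s',
      thresholds: {
        http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500 ms
      },
    };

    export default function () {
      const res = http.get('https://example.com/');
      check(res, { 'status is 200': (r) => r.status === 200 });
      sleep(1);
    }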

Quick Wins

This list is quite comprehensive, and completing all of the optimizations might take quite a while. So, if you had just 1 hour to get significant improvements, what would you do? Let’s boil it all down to 12 low-hanging fruits. Obviously, before you start and once you finish, measure results, including start rendering time and Speed Index on a 3G and cable connection.

  1. Measure the real-world experience and set appropriate goals. A good goal to aim for is First Meaningful Paint < 1 s, a Speed Index value < 1250, and Time to Interactive < 5 s on slow 3G; for repeat visits, aim for TTI < 2 s. Optimize for start rendering time and time to interactive.
  2. Prepare critical CSS for your main templates, and include it in the <head> of the page. (Your budget is 14 KB). For CSS/JS, operate within a critical file size budget of max. 170KB gzipped (0.7MB decompressed).
  3. Trim, optimize, defer and lazy-load as many scripts as possible, check lightweight alternatives and limit the impact of third-party scripts.
  4. Serve legacy code only to legacy browsers with <script type="module">.
  5. Experiment with regrouping your CSS rules and test in-body CSS.
  6. Add resource hints to speed up delivery with dns-prefetch, preconnect, prefetch and preload.
  7. Subset web fonts and load them asynchronously, and utilize font-display in CSS for fast first rendering.
  8. Optimize images, and consider using WebP for critical pages (such as landing pages).
  9. Check that HTTP cache headers and security headers are set properly.
  10. Enable Brotli or Zopfli compression on the server. (If that’s not possible, don’t forget to enable Gzip compression.)
  11. If HTTP/2 is available, enable HPACK compression and start monitoring mixed-content warnings. Enable OCSP stapling.
  12. Cache assets such as fonts, styles, JavaScript and images in a service worker cache.

Download The Checklist (PDF, Apple Pages)

With this checklist in mind, you should be prepared for any kind of front-end performance project. Feel free to download the print-ready PDF of the checklist as well as an editable Apple Pages document to customize the checklist for your needs:

If you need alternatives, you can also check the front-end checklist by Dan Rublic, the “Designer’s Web Performance Checklist” by Jon Yablonski and the FrontendChecklist.

Off We Go!

Some of the optimizations might be beyond the scope of your work or budget or might just be overkill given the legacy code you have to deal with. That’s fine! Use this checklist as a general (and hopefully comprehensive) guide, and create your own list of issues that apply to your context. But most importantly, test and measure your own projects to identify issues before optimizing. Happy performance results in 2019, everyone!


A huge thanks to Guy Podjarny, Yoav Weiss, Addy Osmani, Artem Denysov, Denys Mishunov, Ilya Pukhalski, Jeremy Wagner, Colin Bendell, Mark Zeman, Patrick Meenan, Leonardo Losoviz, Andy Davies, Rachel Andrew, Anselm Hannemann, Patrick Hamann, Andy Davies, Tim Kadlec, Rey Bango, Matthias Ott, Peter Bowyer, Phil Walton, Mariana Peralta, Philipp Tellis, Ryan Townsend, Ingrid Bergman, Mohamed Hussain S. H., Jacob Groß, Tim Swalling, Bob Visser, Kev Adamson, Adir Amsalem, Aleksey Kulikov and Rodney Rehm for reviewing this article, as well as our fantastic community which has shared techniques and lessons learned from its work in performance optimization for everybody to use. You are truly smashing!

Smashing Editorial (ra, il)





Common CSS Issues For Front-End Projects

Ahmad Shadeed

When implementing a user interface in a browser, browsers render some details differently, and it’s good to minimize those differences and issues wherever you can, so that the UI is predictable. Keeping track of all of those differences is hard, so I’ve put together a list of common issues, with their solutions, as a handy reference guide for when you’re working on a new project.

Let’s begin.

1. Reset The Backgrounds Of button And input Elements

When adding a button, reset its background, or else it will look different across browsers. In the example below, the same button is shown in Chrome and in Safari. The latter adds a default gray background.


Resetting the background will solve this issue:

button {
  appearance: none;
  background: transparent;
  /* Other styles */
}

See the Pen Button and Inputs by Ahmad Shadeed (@shadeed) on CodePen.

2. Overflow: scroll vs. auto

To limit the height of an element and allow the user to scroll within it, add overflow-y: scroll. This will look good in Chrome on macOS. However, in Chrome on Windows, the scroll bar is always there (even if the content is short). This is because overflow-y: scroll will show a scroll bar regardless of the content, whereas overflow-y: auto will show a scroll bar only when needed.

Left: Chrome on macOS. Right: Chrome on Windows. (Large preview)
.element {
  height: 300px;
  overflow-y: auto;
}

See the Pen overflow-y by Ahmad Shadeed (@shadeed) on CodePen.

3. Add flex-wrap

You can make an element behave as a flex container simply by adding display: flex. However, when the screen size shrinks, the browser will display a horizontal scroll bar if flex-wrap is not added.

<div class="wrapper">   <div class="item"></div>   <div class="item"></div>   <div class="item"></div>   <div class="item"></div>   <div class="item"></div>   <div class="item"></div> </div> 
.wrapper {
  display: flex;
}

.item {
  flex: 0 0 120px;
  height: 100px;
}

The example above will work great on big screens. On mobile, the browser will show a horizontal scroll bar.

Left: A horizontal scroll bar is shown, and the items aren’t wrapped. Right: The items are wrapped onto two rows. (Large preview)

The solution is quite easy. The wrapper should know that when space is not available, it should wrap the items.

.wrapper {
  display: flex;
  flex-wrap: wrap;
}

See the Pen flex-wrap by Ahmad Shadeed (@shadeed) on CodePen.

4. Don’t Use justify-content: space-between When The Number Of Flex Items Is Dynamic

When justify-content: space-between is applied to a flex container, it will distribute the elements and leave an equal amount of space between them. Our example has eight card items, and they look good. What if, for some reason, the number of items was seven? The second row of elements would look different than the first one.

The wrapper with eight items. (Large preview)
The wrapper with seven items. (Large preview)

See the Pen justify-content by Ahmad Shadeed (@shadeed) on CodePen.

In this case, using CSS grid would be more suitable.

5. Long Words And Inline Links Breaking The Layout

When an article is being viewed on a mobile screen, a long word or inline link might cause a horizontal scroll bar to appear. Using CSS’ word-break will prevent that from happening.

.article-content p {
  word-break: break-all;
}

Check out CSS-Tricks for the details.

6. Transparent Gradients

When adding a gradient with a transparent start or end point, it will look blackish in Safari. That’s because Safari doesn’t recognize the keyword transparent. By substituting it with rgba(0, 0, 0, 0), the gradient will work as expected. Note the screenshot below:

Top: Chrome 70. Bottom: Safari 12. (Large preview)
.section-hero {
  background: linear-gradient(transparent, #d7e0ef), #527ee0;
  /* Other styles */
}

This should instead be:

.section-hero {
  background: linear-gradient(rgba(0, 0, 0, 0), #d7e0ef), #527ee0;
  /* Other styles */
}

7. The Misconception About The Difference Between auto-fit And auto-fill In CSS Grid

In CSS grid, the repeat function can create a responsive column layout without requiring the use of media queries. To achieve that, use either auto-fill or auto-fit.

.wrapper {
  grid-template-columns: repeat(auto-fill, minmax(100px, 1fr));
}

In short, auto-fill keeps any empty column tracks in place, so the filled columns don’t expand beyond their maximum size, whereas auto-fit collapses the empty tracks to zero width, letting the filled columns grow to take up the available space. Sara Soueidan has written an excellent article on the topic.

8. Fixing Elements To The Top Of The Screen When The Viewport Is Not Tall Enough

If you fix an element to the top of the screen, what happens if the viewport is not tall enough? Simple: It will take up screen space, and, as a result, the vertical area available for the user to browse the website will be small and uncomfortable, which will detract from the experience.

@media (min-height: 500px) {
  .site-header {
    position: sticky;
    top: 0;
    /* Other styles */
  }
}

In the snippet above, we’re telling the browser to fix the header to the top only if the viewport’s height is equal to or greater than 500 pixels.

Also important: When you use position: sticky, it won’t work unless you specify the top property.


See the Pen Vertical media queries: Fixed Header by Ahmad Shadeed (@shadeed) on CodePen.

9. Setting max-width For Images

When adding an image, define max-width: 100%, so that the image resizes when the screen is small. Otherwise, the browser will show a horizontal scroll bar.

img {
  max-width: 100%;
}

10. Using CSS Grid To Define main And aside Elements

CSS grid can be used to define the main and aside sections of a layout, which is a perfect use for grid. One side effect, though, is that the aside section’s height will be equal to that of the main element, even if the aside is empty.

To fix this, align the aside element to the start of its parent, so that its height doesn’t expand.

.wrapper {
  display: grid;
  grid-template-columns: repeat(12, minmax(0, 1fr));
  grid-gap: 20px;
}

/* align-self will tell the aside element to align itself with the start of its parent. */
aside {
  grid-column: 1 / 4;
  grid-row: 1;
  align-self: start;
}

main {
  grid-column: 4 / 13;
}

See the Pen main and aside by Ahmad Shadeed (@shadeed) on CodePen.

11. Adding fill To An SVG

Sometimes, while working with SVGs, fill won’t work as expected if the fill attribute has been added inline in the SVG. To solve this, either remove the fill attribute from the SVG itself, or override the fill by targeting the inner path element in CSS.

Take this example:

.some-icon {
  fill: #137cbf;
}

This won’t work if the SVG has an inline fill. It should be this instead:

.some-icon path {
  fill: #137cbf;
}

12. Working With Pseudo-Elements

I love to use pseudo-elements whenever I can. They provide us with a way to create fake elements, mostly for decorative purposes, without adding them to the HTML.

When working with them, the author might forget to do one of the following:

  • add the content: "" property,
  • set the width and height without defining the display property for it.

In the example below, we have a title with a badge as a pseudo-element. The content: "" property should be added. Also, the element should have display: inline-block set in order for the width and height to work as expected.


13. The Weird Space When Using display: inline-block

Setting two or more elements to display: inline-block or display: inline will create a tiny space between each one. The space is added because the browser is interpreting the elements as words, and so it’s adding a character space between each one.

In the example below, each item has a space of 8px on the right side, but the tiny space caused by using display: inline-block is making it 12px, which is not the desired result.

li:not(:last-child) {
  margin-right: 8px;
}

A simple fix for this is to set font-size: 0 on the parent element.

ul {
  font-size: 0;
}

li {
  font-size: 16px; /* The font size should be reassigned here because it will inherit font-size: 0 from its parent. */
}

See the Pen Inline Block Spacing by Ahmad Shadeed (@shadeed) on CodePen.

14. Add for="ID" When Assigning A Label Element To An Input

When working with form elements, make sure that every label element has a for attribute that matches the ID of its associated input. This makes the form more accessible, and when the label is clicked, the associated input will get focus.

<label for="emailAddress">Email address:</label> <input type="email" id="emailAddress"> 

15. Fonts Not Working With Interactive HTML Elements

When assigning fonts to the whole document, they won’t be applied to elements such as input, button, select and textarea. They don’t inherit by default because the browser applies the default system font to them.

To fix this, assign the font property manually:

input,
button,
select,
textarea {
  font-family: your-awesome-font-name;
}

16. Horizontal Scroll Bar

Some elements will cause a horizontal scroll bar to appear, due to the width of those elements.

The easiest way to find the cause of this issue is to use CSS outline. Addy Osmani has shared a very handy script that can be added to the browser console to outline every element on the page.

[].forEach.call($$("*"), function(a) {
  a.style.outline =
    "1px solid #" + (~~(Math.random() * (1 << 24))).toString(16);
});

17. Compressed Or Stretched Images

When you resize an image in CSS, it could be compressed or stretched if the aspect ratio is not consistent with the width and height of the image.

The solution is simple: Use CSS’ object-fit. Its functionality is similar to that of background-size: cover for background images.

img {
  object-fit: cover;
}

Using object-fit won’t be the perfect solution in all cases. Some images need to appear without cropping or resizing, and some platforms force the user to upload or crop an image at a defined size. For example, Dribbble accepts thumbnail uploads at 800 by 600 pixels.

18. Add The Correct type For input

Use the correct type for an input field. This will enhance the user experience in mobile browsers and make it more accessible to users.

Here is some HTML:

<form action="">   <p>     <label for="name">Full name</label>     <input type="text" id="name">   </p>   <p>     <label for="email">Email</label>     <input type="email" id="email">   </p>   <p>     <label for="phone">Phone</label>     <input type="tel" id="phone">   </p> </form> 

This is how each input will look once it’s focused:


19. Phone Numbers In RTL Layouts

When adding a phone number like +972-123555777 in a right-to-left layout, the plus symbol will be positioned at the end of the number. To fix that, reassign the direction of the phone number.

p {
  direction: ltr;
}

Conclusion

All of the issues mentioned here are among the most common ones I’ve faced in my front-end development work. My goal is to keep a list to check regularly while working on a web project.

Do you have an issue that you always face in CSS? Let us know in the comments!

Smashing Editorial (dm, ra, al, yk, il)




New Treasures In Front-End And UX — Meet SmashingConf NYC 2018

Vitaly Friedman

With so much happening around the web, what is important and what should you pay attention to? SmashingConf NYC 2018 will explore how new web technologies, findings and emerging front-end/UX techniques can make us all better designers and developers. Early Bird tickets are now live →

On Oct 23–24 2018, we’ll dive deep into everything from Progressive Web Apps, Vue.js, Webpack, performance patterns, and CSS tricks all the way to accessibility, eCommerce UX, DesignOps, and strategies for better planning and estimates. With Sara Soueidan, Dan Mall, Lea Verou, Jason Grigsby, and many other speakers and workshops.

The night before the conference we’ll be hosting a FailNight, a warm-up party with a twist. The sessions will highlight the stories nobody likes to talk about but which are invaluable to get better and smarter at what you do: stories about how we all failed at one point or another — be it at a small or big scale. Sounds like fun? Well, it will be.

Speakers & Workshops

One track, two conference days, 15 speakers, and just 400 available seats. With highly practical talks featuring techniques and strategies that you can apply to your work right away. Here’s what you should expect:

Dan Mall and Sara Soueidan are some of the wonderful speakers we are glad to be having in New York
  • Design Workflow and Process
    Dan Mall, SuperFriendly
  • CSS Tricks and Techniques
    Lea Verou, MIT
  • Design In The Era Of AI & Machine Learning
    Josh Clark, Big Medium
  • Web Font Loading Optimization
    Monica Dinculescu, Google
  • Productivity, Pushing Ideas Forward
    Paul Boag
  • Serverless and Vue.js
    Sarah Drasner, Microsoft
  • Progressive Web Apps
    Jason Grigsby, Cloud Four
  • Refactoring and UI Improvements With CSS
    Sara Soueidan
  • Setting Up Webpack
    Sean Thomas Larkin, Webpack Core Team
  • Graphic Design Workflow
    Chiara Aliotta, Until Sunday
  • Web Performance Best Practices
    Patrick Hamann, Fastly
  • … plus Tim Kadlec (Performance Audit), Laurence Penney (Variable Fonts), Michael Flarup (Icon Design), and, of course, the Mystery Speaker.

Conference Ticket

$549 Get Your Ticket

Two days of mind-blowing talks
Check all speakers →

Conf + Workshop Tickets

Save $100 Conf + Workshop

Three days full of learning
Check all workshops →

Workshops At SmashingConf New York

Our workshops give you the opportunity to spend an entire day on the topic of your choice. SVG? Vue.js? Performance? Whatever it might be that sparks your interest, we’ve got you covered. A ticket for the full-day workshop is $499, and if you combine it with a conference ticket, you’ll save $100 on the regular workshop price. Seats are limited.

Workshops on Monday, October 22nd

Dan Mall on Design Workflow For A Multi-Device World
Dan Mall: In this workshop, Dan will share insights into his tools and techniques for integrating design thinking into your product development process. You’ll learn how to craft powerful design approaches through collaborative brainstorming techniques, and how to involve your entire team in the design process. Read more…

Josh Clark on Design For What’s Next
Josh Clark: Spend a day exploring the web’s emerging interactions and how you can put them to work today. Your guide is designer Josh Clark, author of Designing for Touch and ambassador of the near future. As you move into newer design tools — speech, bots, physical interfaces, artificial intelligence, and more — you’ll learn the tools and techniques for prototyping and launching these new interfaces and get answers to foundational questions for all your projects. Read more…

Sarah Drasner on Intro To Vue.js
Sarah Drasner: Vue.js elegantly brings together the best features of the JavaScript framework landscape. If you’re interested in writing maintainable, clean code in an exciting and expressive manner, you should consider joining this class. Read more…

Vitaly Friedman on Dirty Little Tricks From The Dark Corners Of eCommerce
Vitaly Friedman: In this workshop, we will use real-life examples as a case study and examine refinements of the interface on the spot. The goal is to set up a very clear roadmap on how we can do the right things in the right order to improve conversion and customer experience. That means removing distractions, minimizing friction and avoiding disruptions and dead ends caused by the interface. Read more…

Workshops on Thursday, October 25th

Lea Verou on WORKSHOP TITLE PLACEHOLDER SOMETHING WITH CSS SECRETS
Lea VerouTHIS IS A PLACEHOLDER TEXT, YOU DON’T REALLY WANT TO US THIS IN YOUR FINAL VERSION 🙂 I REPEAT THIS IS A PLACEHOLDER TEXT, YOU DON’T REALLY WANT TO US THIS IN YOUR FINAL VERSION 🙂 Read more…

Paul Boag on How To Convince Clients And Colleagues The Right Way
Paul Boag: This full-day workshop offers practical guidance on one of the most critical skills you need in your career: the ability to persuade. Persuade colleagues and clients to take a risk, to step out of their comfort zones and embrace change. Read more…

Sara Soueidan on The CSS & SVG Power Combo
Sara Soueidan: The workshop with the strongest punch of creativity. The CSS & SVG Power Combo is where you will learn about the latest, cutting-edge CSS and SVG techniques to create crisp and beautiful interfaces. We will also be looking at existing browser inconsistencies as well as performance considerations to keep in mind. And there will be lots of exercises and practical examples that can be taken and applied directly in real-life projects. Read more…

Tim Kadlec on Building Performant Websites
Tim Kadlec: Performance is a critical consideration for online experiences. We don’t stand still on the web. We click links, we expand menus, we move from page to page — we are constantly having our experiences shaped by how quickly each of these interactions takes place. Read more…

Conference Ticket

$549 Get Your Ticket

Two days of mind-blowing talks
Check all speakers →

Conf + Workshop Tickets

Save $100 Conf + Workshop

Three days full of learning
Check all workshops →

Location

New York City is buzzing with energy and excitement — the perfect backdrop for SmashingConf. Just like in past years, we will welcome you at the New World Stages, an old movie theater renovated into one of the top off-Broadway houses, just off the Times Square district. Workshops will take place at the Microsoft Technology Center right in the heart of Times Square, close to transportation and lots of food and entertainment options.

SmashingConf at the New World Stages
The cats are taking over! We’re heading back to the New World Stages, a renowned performing arts complex in the heart of New York’s theater district. (Image credit: Marc Thiele)

Why This Conference Could Be For You

Each SmashingConf is an intimate, familiar experience. A gathering of new and old friends, good conversations, and lots of sharing and learning. At SmashingConf New York you will learn how to:

  1. Drastically improve the accessibility and overall experience of your components and interfaces,
  2. Scale up your web app with serverless functions and Vue.js (and save hosting costs),
  3. Make a website more performant, resilient, and accessible,
  4. Convince your bosses and clients to give their approval,
  5. Take full advantage of Progressive Web App technology,
  6. Avoid the typical pitfalls of the handover process,
  7. Decide what to consider when using AI in your products,
  8. Load web fonts fast by default,
  9. … and a lot more.

Download “Convince Your Boss” PDF

Think your boss needs a little more persuasion? We’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor to send you to the event.

Conference Ticket

$549 Get Your Ticket

Two days of mind-blowing talks
Check all speakers →

Conf + Workshop Tickets

Save $100 Conf + Workshop

Three days full of learning
Check all workshops →

See You In New York!

We’d love you to join us on this exciting adventure. Let’s spend some memorable days together to explore the hidden corners of front-end — and hunt for some new treasures, too, of course. See you there!

Smashing Editorial (cm)


