Smashing Podcast Episode 34 With Harry Roberts: What’s The State Of Web Performance?
In this episode, we’re talking about Web Performance. What does the performance landscape look like in 2021? I spoke with expert Harry Roberts to find out.
Show Notes
Harry is running a Web Performance Masterclass workshop with Smashing in May 2021. At the time of publishing, big earlybird discounts are still available.
- Harry on Twitter
- Harry’s consulting site CSS Wizardry
- Video course Everything I Have Done to Make CSS Wizardry Fast including 15% discount
- Questions for Consultants eBook including 15% discount
- Web.dev guide to Web Vitals
- Drew’s favorite OXO GoodGrips egg beater 😉
Weekly Update
- Web Design Trends 2021: The Report written by Suzanne Scacca
- Using Grommet In React Applications written by Fortune Ikechi
- How To Build A Node.js API For Ethereum Blockchain written by John Agbanusi
- How We Improved SmashingMag Performance written by Vitaly Friedman
- When To Say No To Freelance Projects written by Becca Kennedy
Transcript
Drew McLellan: He’s an independent Consultant Web Performance Engineer from Leeds in the UK. In his role, he helps some of the world’s largest and most respected organizations deliver faster and more reliable experiences to their customers. He’s an invited Google Developer Expert, a Cloudinary Media Developer Expert, an award-winning developer, and an international speaker. So we know he knows his stuff when it comes to web performance, but did you know he has 14 arms and seven legs? My Smashing friends, please welcome Harry Roberts. Hi Harry, how are you?
Harry Roberts: Hey, I’m smashing thank you very much. Obviously the 14 arms, seven legs… still posing its usual problems. Impossible to buy trousers.
Drew: And bicycles.
Harry: Yeah. Well I have three and a half bicycles.
Drew: So I wanted to talk to you today, not about bicycles unfortunately, although that would be fun in itself. I wanted to talk to you about web performance. It’s a subject that I’m personally really passionate about but it’s one of those areas where I worry, when I take my eye off the ball and get involved in some sort of other work and then come back to doing a bit of performance work, I worry that the knowledge I’m working with goes out of date really quick… Is web performance as fast-moving these days as I perceive?
Harry: This is… I'm not even just saying this to be nice to you, that's such a good question, because I've been thinking on this quite a bit lately and I'd say there are two halves of it. One thing I would try and tell clients is that actually it doesn't move that fast. Predominantly because, and this is the soundbite I always use, you can bet on the browser. Browsers aren't really allowed to change fundamentally how they work, because, of course, there's two decades of legacy they have to uphold. So, generally, if you bet on the browser and you know how those internals work, and TCP/IP, which is never changing… there are certain things that are fairly set in stone, which means that best practice will, by and large, always be best practice where the fundamentals are concerned.
Harry: Where it does get more interesting is… The thing I'm seeing more and more is that we're painting ourselves into corners when it comes to site-speed issues. So we actually create a lot of problems for ourselves. So what that means realistically is performance… it's the moving goalpost, I suppose. The more the landscape or the topography of the web changes, and the way it's built and the way we work, the more we pose ourselves new challenges. So the advent of doing a lot more work on the client poses different performance issues than we'd be solving five years ago, but those performance issues still pertain to browser internals which, by and large, haven't changed in five years. So a lot of it depends… And I'd say there's definitely two clear sides to it… I encourage my clients to lean on the browser, lean on the standards, because they can't just be changed, the goalposts don't really move. But, of course, that needs to meld with more modern and, perhaps, slightly more interesting development practices. So you keep your… Well, I was going to say "a foot in both camps" but with my seven feet, I'd have to… four and three.
Drew: You mentioned that the fundamentals don’t change and things like TCP/IP don’t change. One of the things that did change in… I say “recent years”, this is actually probably going back a little bit now but, is HTTP in that we had this established protocol HTTP for communicating between clients and servers, and that changed and then we got H2 which is then all binary and different. And that changed a lot of the… It was for performance reasons, it was to take away some of the existing limitations, but that was a change and the way we had to optimize for that protocol changed. Is that now stable? Or is it going to change again, or…
Harry: So one thing that I would like to be learning more about is the latter half of the question, the changing again. I need to be looking more into QUIC and H3, but it's a bit too far around the corner to be useful to my clients. When it comes to H2, things have changed quite a lot, but I genuinely think H2 is a lot of false promise and I do believe it was rushed over the line, which is remarkable considering H1 was launched… and I mean 1.1, which was 1997, so we had a lot of time to work on H2.
Harry: I guess the primary benefit, as web developers understand or perceive it, is unlimited in-flight requests now. So rather than six dispatched and/or six in-flight requests at a time, it's potentially unlimited, infinite. It brings really interesting problems, though, because… it's quite hard to describe without visual aids, but you've still got the same amount of bandwidth available whether you're running H1 or H2; the protocol doesn't make your connection any faster. So it's quite possible that you could flood the network by requesting 24 files at once, but you don't have enough bandwidth for that. So you don't actually get any faster, because you can only manage, perhaps, a fraction of that at a time.
Harry: And also what you have to think about is how the files respond. And this is another pro-tip I go through in client workshops, et cetera. People will look at an H2 waterfall and they will see that instead of the traditional six dispatched requests, they might see 24. Dispatching 24 requests isn't actually that useful. What is useful is when those responses are returned. And what you'll notice is that you might dispatch 24 requests, so the left-hand side of your waterfall looks really nice and steep, but they all return in a fairly staggered, sequential manner, because you've only got a limited amount of bandwidth, so you can't fulfill all responses at the same time.
Harry: Well, the other thing is, if you were to fulfill all responses at the same time, you'd be interleaving responses. So you might get the first 10% of each file, and the next 20%… 20% of a JavaScript file is useless. JavaScript isn't usable until 100% of it has arrived. So what you'll see is, in actual fact, the way an H2 waterfall manifests itself when you look at the responses… it looks a lot more like H1 anyway, it's a lot more staggered. So, H2, I think it was oversold, or perhaps engineers weren't led to believe that there are caps on how effective it could be. Because you'll see people overly sharding their assets and they might have twenty… let's keep the number at 24. Instead of having two big JS files, you might have 24 little bundles. They'll still return fairly sequentially. They won't all arrive at the same time because you've not magicked yourself more bandwidth.
Harry: And the other problem is each request has a constant amount of latency. So let's say you're requesting two big files and it's a hundred millisecond roundtrip and 250 milliseconds downloading, that's two times 250 milliseconds. If you multiply up to 24 requests, you've still got constant latency, which we've decided is 100 milliseconds, so now you've got 2,400 milliseconds of latency, and 24 times… instead of 250 milliseconds download, let's say it's 25 milliseconds download. It's actually taken longer, because the latency stays constant and you just multiply that latency over more requests. So I'll see clients who will have read that H2 is this magic bullet. They'll shard… "Oh! We can simplify the development process, we don't need to do bundling or concatenation, et cetera, et cetera." And ultimately it will end up slower, because you've managed to spread your requests out, which was the promise, but your latency stays constant, so you've actually just got n times more latency in the browser. Like I said, really hard, probably pointless trying to explain that without visuals, but it's remarkable how H2 manifests itself compared to what people are hoping it might do.
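To put rough numbers on what Harry is describing, here is his arithmetic as a deliberately naive model. It ignores parallelism and congestion control entirely, and the file counts and timings are the ones from the conversation:

```js
// Naive model: every request pays the same round-trip latency, and the
// downloads all share one fixed pool of bandwidth.
const latencyMs = 100; // constant per-request round trip

// Two big files, 250 ms of download time each:
const twoBigFiles = 2 * (latencyMs + 250); // 700 ms

// The same bytes sharded into 24 small files, ~25 ms of download each.
// The bandwidth hasn't changed, but the latency is now paid 24 times over:
const twentyFourSmallFiles = 24 * (latencyMs + 25); // 3,000 ms

console.log({ twoBigFiles, twentyFourSmallFiles });
```

In reality requests overlap, so the absolute numbers are wrong, but the shape holds: latency is paid per request, and it doesn't shrink when you shard.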
Drew: Is there still benefit in that sharding process in that, okay, to get the whole lot still takes the same amount of time, but by the time you get 100% of the first of the 24 back, you can start working on it and you can start executing it before the 24th one is through?
Harry: Oh, man, another great question. So, absolutely, if things go correctly and it does manifest itself in a more H1-looking response, the idea would be file one returns first, two, three, four, and then they can execute in the order they arrive. So you can actually shorten the aggregate time by ensuring that things arrive sequentially rather than all at the same time. If we have a look at a WebPageTest waterfall and you notice that requests are interleaved, that's bad news. Because like I said, 10% of a JavaScript file is useless.
Harry: If the server does its job properly and it sends, sends, sends, sends, sends, then it will get faster. And then you've got knock-on benefits: your caching strategy can be more granular. So, really annoying would be: you update the font size on your date picker widget. In the H1 world you've got to cache bust perhaps 200 kilobytes of your site-wide CSS. Whereas now, you just cache bust datepicker.css. So we've got offshoot benefits like that, which are definitely, definitely very valuable.
Drew: I guess, in the scenario where you magically did get all your requests back at once, that would obviously bog down the client potentially, wouldn’t it?
Harry: Yeah, potentially. And then what would actually happen is the client would have to do a load of resource scheduling, so what you'd end up with is a waterfall where all your responses return at the same time, then you'd have a fairly large gap between the last response arriving and its ability to execute. So, ideally, when we're talking about JavaScript, you'd want the browser to request them all in the request order, basically the order you defined them in, and the server to return them all in the correct order, so then the browser can execute them in the correct order. Because, as you say, if they all returned at the same time, you've just got a massive glut of JavaScript to run at once, but it still needs to be scheduled. So you could have a gap of up to a second between a file arriving and it becoming useful. So, actually, H1… I guess, ideally, what you're after is H2 request scheduling with H1-style responses, so then things can be made useful as they arrive.
Drew: So you’re basically looking for a response waterfall that looks like you could ski down it.
Harry: Yeah, exactly.
Drew: But you wouldn’t need a parachute.
Harry: Yeah. And it’s a really difficult… I think to say it out loud it sounds really trivial, but given the way H2 was sold, I find it quite… not challenging because that makes my client sound… dumb… but it’s quite a thing to to explain to them… if you think about how H1 works, it wasn’t that bad. And if we get responses that look like that and “Oh yeah, I can see it now”. I’ve had to teach performance engineers this before. People who do what I do. I’ve had to teach performance engineers that we don’t mind too much when requests were made, we really care about when responses become useful.
Drew: One of the reasons things seem to move on quite quickly, especially over the last five years, is that performance is a big topic for Google. And when Google puts weight behind something like this then it gains traction. Essentially though, performance is an aspect of user experience, isn’t it?
Harry: Oh, I mean… this is a really good podcast, I was thinking about this half an hour ago, I promise you I was thinking about this half an hour ago. Performance is applied accessibility. You're guaranteeing, or increasing the chances, that someone can access your content, and I think accessibility is always assumed to be just… Oh, it's screen readers, right? It's people without sight. The decision to build a website rather than an app… that decision is about accessing more of an audience. So yeah, performance is applied accessibility, which is therefore the user experience. And that user experience could come down to "Could somebody even experience your site?" full stop. Or it could be "Was that experience delightful? When I clicked a button, did it respond in a timely manner?". So I 100% agree, and I think that's a lot of the reason why Google are putting weight on it: it affects the user experience, and if someone's going to be trusting search results, we want to try and give that person a site that they're not going to hate.
Drew: And it’s… everything that you think about, all the benefits you think about, user experience, things like increased engagement, it’s definitely true isn’t it? There’s nothing that sends the user away from a site more quickly than a sluggish experience. It’s so frustrating, isn’t it? Using a site where you know that maybe the navigation isn’t that clear and if you click through to a link and you think “Is this what I want? Is it not?” And just the cost of making that click, just waiting, and then you’ve got to click the back button and then that waiting, and it’s just… you give up.
Harry: Yeah, and it makes sense. If you were to nip to the supermarket and you see that it’s absolutely rammed with people, you’ll do the bare minimum. You’re not going to spend a lot of money there, it’s like “Oh I just need milk”, in and out. Whereas if it’s a nice experience, you’ve got “Oh, well, while I’m here I’ll see if… Oh, yeah they’ve got this… Oh, I’ll cook this tomorrow night” or whatever. I think still, three decades into the web, even people who build for the web struggle, because it’s intangible. They struggle to really think that what would annoy you in a real store would annoy you online, and it does, and the stats show that it has.
Drew: I think that in the very early days, I'm thinking late 90s, showing my age here, when we were building websites we very much thought about performance, but we thought about performance from the point of view that the connections people were using were so slow. We're talking about dial-up, modems over phone lines, 28K, 56K modems, and there was a trend at one point of styling images so that every other line of the image was blanked out with a solid color to give this… if you can imagine it, like looking through a venetian blind at the image. And we did that because it helped with the compression. Because every other line the compression algorithm could-
Harry: Collapse into one pointer.
Drew: Yeah. And so you’ve significantly reduced your image size while still being able to get… And it became a design element. Everybody was doing it. I think maybe Jeffrey Zeldman was one of the first who pioneered that approach. But what we were thinking about there was primarily how quickly could we get things down the wire. Not for user experience, because we weren’t thinking about… I mean I guess it was user experience because we didn’t want people to leave our sites, essentially. But we were thinking about not optimizing things to be really fast but trying to avoid them being really slow, if that makes sense.
Harry: Yeah, yeah.
Drew: And then, I think as speeds like ADSL lines became more prevalent, we stopped thinking in those terms and started just not thinking about it at all. And now we're at the situation where we're using mobile devices and they've got constrained connections and perhaps slower CPUs, and we're having to think about it again, but this time in terms of getting an advantage. As well as the user experience side of things, it can have real business benefits in terms of costs and ability to make profit, can't it?
Harry: Yeah, tremendously. I mean, not sure how to word it. Not shooting myself in the foot here, but one thing I do try and stress to clients is that site-speed is going to give you a competitive advantage, but it's only one thing that could give you some competitive advantage. If you've got a product no one wants to buy, then it doesn't matter how fast your site is. And equally, if someone genuinely wants the world's fastest website, you have to delete your images, delete your CSS, delete your JavaScript, and then see how many products you sell, because I guarantee site-speed wasn't the factor. But studies have shown that there are huge benefits to being fast, to the order of millions. I'm working with a client as we speak. We worked out for them that if they could render a given page one second faster, or rather their Largest Contentful Paint was one second faster, it's worth 1.8 mil a year, which is… that's a big number.
Drew: That would almost pay your fee.
Harry: Hey! Yeah, almost. I did say to them "Look, after two years this'll be all paid off. You'll be breaking even". I wish. But yeah, there's the client-facing aspect… sorry, the customer-facing aspect: if you've got an E-Com site, they're going to spend more money. If you're a publisher, they're going to read more of an article, or they will view more minutes of content, or whatever it is that is your KPI that you measure. It could be, on the Smashing site, it could be they didn't bounce, they actually clicked through a few more articles because we made it really easy and fast. And then faster sites are cheaper to run. If you've got your caching strategy sorted, you're going to keep people away from your servers. If you optimize your assets, anything that does have to come from your server is going to weigh a lot less. So much cheaper to run.
Harry: The thing is, there’s a cost in getting there. I think Scott Jehl probably said one of the most… And I heard it from him first, so I’m going to assume he came up with it but the saying is “It’s easy to make a fast website but it’s difficult to make a website fast”. And that is just so succinct. Because the reason web perf might get pushed down the list of things to do is because you might be able to say to a client “If I make your site a second faster you’ll make an extra 1.8 mil a year” or it can be “If you just added Apple Pay to your checkout, you’re going to make an extra five mil.” So it’s not all about web perf and it isn’t the deciding factor, it is one part of a much bigger strategy, especially for E-Com online. But the evidence is that I’ve measured it firsthand with my retail clients, my E-Com clients. The case for it is right there, you’re absolutely right. It’s competitive advantage, it will make you more money.
Drew: Back in the day, again, I’m harping back to a time past, but people like Steve Souders were some of the first people to really start writing and speaking about web performance. And people like Steve were basically saying “Forget the backend infrastructure, where all the gains to be had are in the browser, in the front end, that’s where everything slow happens.” Is that still the case 15 years on?
Harry: Yeah, yeah. He re-ran the test between then and now, and the gap had actually widened, so it's actually more costly over the wire. But there is a counter to that, which is if you've got really bad backend performance, if you set out of the gate slowly, there's only so much you can claw back on the front end. I've got a client at the moment whose time to first byte is 1.5 seconds. We can therefore never render faster than 1.5 seconds, so that's going to be a cap. We can still claw time back on the front end, but if you've got a really, really bad time to first byte, if you have got backend slowdowns, there's a limit on how much faster your front end performance efforts could get you. But absolutely.
Harry: That is, however, changing because… Well, no, it's not changing I guess, it's getting worse. We're pushing more onto the client. It used to be a case of "Your server is as fast as it is, but then after that we've got a bunch of question marks", because I hear this all the time: "All our users run on WiFi. They've all got desktop machines because they all work from our office." Well, no, now they're all working from home. You don't get to choose. So, that's where all the question marks come in, which is where the slowdowns happen, where you can't really control it. After that, there's the fact that now we are tending to put more on the client. By that I mean entire runtimes on the client. You've moved all your application logic off of a server anyway, so your time to first byte should be very, very minimal. It should be a case of sending some bundles from a CDN to my… but you've gone from being able to spec your own servers to hoping that somebody's not got Netflix running on the same machine they're trying to view your website on.
Drew: It’s a really good point about the way that we design sites, and I think the traditional best practice has always been that you should try and cater for all sorts of browsers, all sorts of connection speeds, all sorts of screen sizes, because you don’t know what the user is going to be expecting. And, as you said, you have these scenarios where people say "Oh no, we know all our users are on their work-issued desktop machine, they’re running this browser, it’s the latest version, they’re hardwired into the LAN", but then things happen. One of the great benefits of having web apps is that we can do things like distribute our workforce suddenly back to their homes and they can keep working, but that only holds true if the quality of the engineering was such that somebody who’s spinning up their home machine, that might have IE11 on it or whatever, can still use it. Whether the quality of the work is there determines whether the web fulfills its potential in being a truly accessible medium.
Drew: As you say, there’s this trend to shift more and more stuff into the browser, and, of course, then if the browser is slow, that’s where the slowness happens. You have to wonder "Is this a good trend? Should we be doing this?" I’ve got one site that I particularly think of; I noticed it is almost 100% server rendered. There’s very little JavaScript and it is lightning fast. Every time I go to it I think "Oh, this is fast, who wrote this?" And then I realize "Oh yeah, it was me".
Harry: That’s because you’re on localhost, no wonder it feels fast. It’s your dev site.
Drew: Then, in my day job, we’re building out our single page application and shifting stuff away from the server, because the server’s the bottleneck in that case. Can you just say that it’s more performant to be in the browser? Or more performant to be on the server? Is it just a case of measuring and taking it on a case-by-case basis?
Harry: I think you need to be very, very, very aware of your context and… genuinely I think a big error is… narcissism, where people think "Oh, my blog deserves to be rendered in someone’s browser. My blog with a bounce rate of 89% needs its own runtime in the browser, because I need subsequent navigations to be fast, I just want to fetch a… basically a diff of the data." No one’s clicking onto your next article anyway, mate, don’t push a runtime down the pipe. So you need to be very aware of your context.
Harry: And I know that… if Jeremy Keith’s listening to this, he’s probably going to put a hit out on me, but there is, I would say, a difference between a website and a web app, and the definition of that is very, very murky. But if you’ve got a heavy read-and-write application, so something where you’re inputting data, manipulating data, et cetera… Basically, my site is not a web app, it’s a website, it’s read-only; that I would firmly put in the website camp. Something like my accountancy software, I would say, is a web app, and I am prepared to suffer a bit of boot time cost, because I know I’ll be there for 20 minutes, an hour, whatever. So you need a bit of context, and again, maybe narcissism’s a bit harsh, but you need to have a real "Do we need to make this newspaper a client side application?" No, you don’t. No, you don’t. People have got ad-blockers on, people don’t like commuter newspaper sites anyway. They’re probably not even going to read the article and rant about it on Facebook. Just don’t build something like that as a client rendered application, it’s not suitable.
Harry: So I do think there is definitely a point at which moving more onto the client would help, and that’s when you’ve got less sensitivity to churn. So, E-Com, for example: I’m doing an audit at the moment for a site which… I think it’s an E-Com site, but it’s 100% on the client. You disable JavaScript and you see nothing, just an empty div id="app". E-Com is… you’re very sensitive to any issues. If your checkout flow is even subtly wrong, I’m off somewhere else. If it’s too slow, I’m off somewhere else. You don’t have the context where someone’s willing to bed into that app for a while.
Harry: Photoshop. I pop open Photoshop and I’m quite happy to know that it’s going to take 45 seconds of splash screen, because I’m going to be in there for… basically, the 45 seconds is worth the 45 minutes. And it’s so hard to define, which is why I really struggle to convince clients "Please don’t do this", because I can’t just say "How long do you think your user’s going to be there for?". And you can proxy it from… if your bounce rate’s 89%, don’t optimize for a second page view. Get that bounce rate down first. I do think there’s definitely a split, but what I would say is that most people fall on the wrong side of that line. Most people put stuff in the client that shouldn’t be there. CNN, for example: you cannot read a single headline on the CNN website until it has fully booted a JavaScript application. The only thing server rendered is the header and footer, which is the only thing people don’t care about.
Harry: And I feel like that is just… I don’t know how we arrive at that point. It’s never going to be the better option. You deliver a page that is effectively useless which then has to say “Cool, I’ll go fetch what would have been a web app but we’re going to run it in the browser, then I’ll go and ask for a headline, then you can start to… oh, you’re gone.” That really, really irks me.
Harry: And it’s no one’s fault, I think it’s the infancy of this kind of JavaScript ecosystem, the hype around it, and also, this is going to sound really harsh but… It’s basically a lot of naïve implementation. Sure, Facebook have invented React and whatever, it works for them. Nine times out of 10 you’re not working at Facebook scale, 95 times out of 100 you’re probably not the smartest Facebook engineers, and that’s really, really cruel and it sounds horrible to say, but you can only get… None of these things are fast by default. You need a very, very elegant implementation of these things to make them correct.
Harry: I was having this discussion with my old… he was a lead engineer on the squad that I was on 10 years ago at Sky. I was talking to him the other day about this and he had to work very hard to make a client rendered app fast, whereas making a server rendered app fast, you don’t need to do anything. You just need to not make it slow again. And I feel like there’s a lot of rose tinted glasses, naivety, marketing… I sound so bleak, we need to move on before I start really losing people here.
Drew: Do you think we have the tendency, as an industry, to focus more on developer experience than user experience sometimes?
Harry: Not as a whole, but I think that problem crops up in the places you’d expect. If you look at the disparity… I don’t know if you’ve seen this, but I’m going to presume you have, you seem to very much have your finger on the pulse: the disparity between HTTP Archive’s data about what frameworks and JavaScript libraries are used in the wild versus the State of JavaScript survey. If you follow the State of JavaScript survey, it would say "Oh yes, 75% of developers are using React", whereas fewer than 5% of sites are using React. So, I feel like, en masse, I don’t think it’s a problem, but I think in the areas you’d expect it, heavy loyalty to one framework for example, developer experience is… evangelized probably ahead of the user. I don’t think developer experience should be overlooked, I mean, everything has a maintenance cost. Your car: there was a decision when it was designed that "Well, if we hide this key functionality from a mechanic, it’s going to take that mechanic a lot longer to fix it, therefore we don’t do things like that". So there does need to be a balance of ergonomics and usability, I think that is important. I think focusing primarily on developer experience is just baffling to me. Don’t optimize for you, optimize for your customer: your customer pays you, it’s not the other way around.
Drew: So the online echo chamber isn’t exactly representative of reality when you see everybody saying “Oh you should be using this, you should be doing that” then that’s actually only a very small percentage of people.
Harry: Correct, and that’s a good thing, that’s reassuring. The echo chamber… it’s not healthy to have that kind of monoculture perhaps, if you want to call it that. But also, I feel like… and I’ve seen it in a lot of my own work, a lot of developers… As a consultant, I work with a lot of different companies. A lot of people are doing amazing work in WordPress. And WordPress powers 24% of the web. And I feel like it could be quite easy for a developer like that working in something like WordPress or PHP on the backend, custom code, whatever it is, to feel a bit like “Oh, I guess everyone’s using React and we aren’t” but actually, no. Everyone’s talking about React but you’re still going with the flow, you’re still with the majority. It’s quite reassuring to find the silent majority.
Drew: The trend towards static site generators and then hosting sites entirely on a CDN, sort of JAMstack approach, I guess when we’re talking about those sorts of publishing type sites, rather than software type sites, I guess that’s a really healthy trend, would you think?
Harry: I love that, absolutely. You remember when we used to call SSGs "flat file", right?
Drew: Yeah.
Harry: So, I built CSS Wizardry on Jekyll back when Jekyll was called a flat-file website. But now it’s a static site generator; huge, huge fan of that. There’s no disadvantage to it really. You pay maybe a slightly larger upfront compute cost of pre-compiling the site, but then your compute cost is… well, Cloudflare fronts it, right? It’s on a CDN, so your application servers are largely shielded from that.
Harry: Anything interactive that does need doing can be done on the client or, if you want to get fancy, one really nice approach, if you are feeling ambitious, is to use Edge Side Includes, so you can keep your shopping cart server rendered, but at the edge. You can do stuff like that. Tremendous performance benefits there. Not appropriate for a huge swathe of sites, but, like you say, if we’re thinking publishing… for an E-Com site it wouldn’t work, you need realtime stock levels, you need… search that doesn’t just… I don’t know, you just need far more functionality. But yeah, I think the Smashing site is a great example, my site is an example, much smaller than Smashing, but yeah, SSGs, flat files, I’m really fond of it.
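As a sketch of that "server rendered, but at the edge" idea: real ESI is a markup directive (something like `<esi:include src="/cart"/>`) resolved by the CDN, but the same pattern can be expressed as a Cloudflare Worker. The following is an assumed, minimal stand-in, with made-up URLs and selector, not a recommendation of one vendor's approach:

```js
// Hypothetical Cloudflare Worker: static page from the edge cache,
// cart fragment rendered fresh per request and spliced in.
export default {
  async fetch(request, env, ctx) {
    // The pre-built static page (cacheable at the edge for everyone)…
    const page = await fetch(request);

    // …and the personalised cart fragment (never cached).
    const cart = await fetch('https://example.com/fragments/cart', {
      headers: { cookie: request.headers.get('cookie') ?? '' },
    });
    const cartHtml = await cart.text();

    // Replace the contents of a placeholder element in the static HTML.
    return new HTMLRewriter()
      .on('#cart', {
        element(el) {
          el.setInnerContent(cartHtml, { html: true });
        },
      })
      .transform(page);
  },
};
```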
Drew: Could it work going deeper into the JAMstack approach of shifting everything into the client and building an E-Commerce site? I think the Smashing E-Commerce site is essentially using JavaScript in the client and server APIs to do the actual functionality as serverless functions or what have you.
Harry: Yeah. I’ve got to admit, I haven’t done any stuff with serverless. But yeah, that hybrid approach works. Perhaps my E-Commerce example was a bit clunky, because you could get a hybrid between statically rendering a lot of the stuff, because most things on an E-Com site don’t really change, and filtering what you can on the client. Search is a little more difficult, stock levels do need to go back to an API somewhere, but yeah, you could definitely do a hybrid for an E-Com site.
Drew: Okay, so then it’s just down to monitoring all those performance metrics again, really caring about the network, about latency, about all these sorts of things, because you’re then leaning on the network a lot more to fetch all those individual bits of data. It poses a whole new set of problems.
Harry: Yeah, I mean, you kind of… I wouldn’t say "robbing Peter to pay Paul", but you are going to have to keep an eye on other things elsewhere. I’ve not got fully to the bottom of it, before anyone tweets it at us, but an E-Commerce client of mine recently moved. I worked with them two years ago and that site was already pretty fast. It was built on… I can’t remember which E-Com platform, it was .net, hosted on IIS, server rendered, obviously, and it was really fast because of that. It was great and we just wanted to maintain, maybe find a couple of hundred milliseconds here and there, but really good. Half way through last year, they moved to client side React for key pages. PDPs, product details pages, PLPs, product listing pages, and stuff just got markedly slower. Much slower. To the point they got back in touch needing help again.
Harry: And one of the interesting things I spotted when they were putting a case for "We need to actually revert this"… I was thinking about all the… what’s slower, obviously it’s slower, how could doing more work ever be faster, blah blah blah. One of their own bullet points in the audit was: based on projections, their yearly hosting costs have gone up by a factor of 10. Because all of a sudden they’ve gone from having one application server and a database to having loads of different gateways, loads of different APIs, loads of different microservices they’re calling on. It increased the surface area of their application massively. And the basic reason for this, I’ll tell you exactly why this happened: the developer, it was a very small team, the developer who decided "I’m going to use React because it seems like fun" didn’t do any business analysis. It was never expected of them to actually put forward a case of how much is it going to cost to build, how much is it going to return, what’s the maintenance cost of this?
Harry: And that’s a thing I come up against really frequently in my work, and it’s never the developer’s fault. It’s usually because the business keeps financials away from the engineering team. If your engineers don’t know the cost or value of their work, then they’re not informed to make those decisions, so this guy was never to know that that was going to be the outcome. But yeah, interestingly, moving to a more microservice-y approach… And this is an outlier, and I’m not going to say that that 10 times figure is typical, it definitely seems atypical, but it’s true that there is at least one incident I’m aware of where moving to this approach, because they just had to use more providers, 10x’ed their… there’s your 10-times engineer: increased hosting by 10 times.
Drew: I mean, it’s an important point, isn’t it? Before starting out down any particular road with architectural changes and things, about doing your research and asking the right questions. If you were going to embark on some big changes, say you’ve got a really old website and you’re going to restructure it and you want it to be really fast and you’re making all your technology choices, I mean it pays, doesn’t it, to talk to different people in the business to find out what they want to be doing. What sort of questions should you be asking other people in the business, as a web developer or as a performance engineer? Who should you be talking to and what should you be asking them?
Harry: I’ve got a really annoying answer to the "Who should you be talking to?" And the answer is: everyone should be available to you. And it will depend on the kind of business, but you should be able to speak to marketing: "Hey, look, we’re using this AB testing tool. How much does that cost a year and how much do you think it nets a year?" And that developer should feel comfortable. I’m not saying developers need to change their attitude, what I mean is the company should make the developers able to ask those kinds of questions. How much does Optimizely cost a year? Right, well that seems like a lot, does it make that much in return? Okay, whatever, we can make a decision based on that. That’s who you should be talking to, and then the questions you should ask, it should be things like…
Harry: The amount of companies I work with who won’t give their own developers access to Google Analytics! How are you meant to build a website if you don’t know who you’re building it for? So the questions should be… I work a lot with E-Com clients, so every developer should know things like "What is our average order value? What is our conversion rate? What is our revenue, how much do we make?" These things mean that you can at least understand that "Oh, people spend a lot of money on this website and I’m responsible for a big chunk of that, and I need to take that responsibility."
Harry: Beyond that, other things are hard to put into context. So, for me, one of the things that I, as a consultant, so this is very different to an engineer in the business, I need to know is how sensitive you are to performance. So if a client gives me the average order value, monthly traffic, and their conversion rate, I can work out how much 100 milliseconds, 500 milliseconds, a second will save them a year, or return them; just based on those three numbers I can work out roughly "Well, a second’s worth 1.8 mil". It’s a lot harder for someone in the business to get at all that background information, whereas as a performance engineer it’s second nature to me. But if you can work that kind of stuff out, it unlocks a load of doors. Okay, well if a second’s worth this much to us, I need to make sure that I never lose a second and, if I can, gain a second back. And that will inform a lot of things going forward. A lot of these developers are kept quite siloed. "Oh well, you don’t need to know about business stuff, just shut up and type".
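To make that back-of-the-envelope concrete, here is a sketch of the kind of model Harry describes. Every input is a placeholder (chosen, admittedly, so the output lands on his 1.8 mil figure), and the uplift-per-100-milliseconds number is exactly the sort of thing that has to come from your own data rather than be assumed:

```js
// Rough model: what is one second worth a year? All numbers are placeholders.
const monthlySessions = 1_000_000;
const conversionRate = 0.02;    // 2% of sessions place an order
const averageOrderValue = 75;   // per order

const annualRevenue = monthlySessions * 12 * conversionRate * averageOrderValue;

// Assume (from your own A/B tests or regression analysis, not from a podcast!)
// that every 100 ms saved lifts conversion by ~1% relative.
const upliftPer100ms = 0.01;
const valueOfOneSecond = annualRevenue * upliftPer100ms * 10;

console.log({ annualRevenue, valueOfOneSecond }); // ≈ 18 mil and ≈ 1.8 mil
```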
Drew: I’ve heard you say, it is quite a nice soundbite, that nobody wants a faster website.
Harry: Yeah.
Drew: What do you mean by that?
Harry: Well it kind of comes back to, I think I’ve mentioned it already in the podcast, that if my clients truly wanted the world’s fastest website, they would allow me to go in and delete all their JavaScript, all their CSS, all their images. Give that customer a Times New Roman stack.
Harry: But fast for fast’s sake is… not chasing the wrong thing, but you need to know what fast means to you. Because, I see it all the time with clients: there’s a point at which you can stop. You might find that your customers are only so sensitive to web perf that getting a First Contentful Paint from four seconds to two seconds might give you a 10% increase in revenue, but getting from that two to a one might only give you a 1% increase. It’s still twice as fast, but you get minimal gains. So what I need to do with my clients is work out "How sensitive are you? When can we take our foot off the gas?" And also, like I said, towards the top of the show… You need to have a product that people want to buy.
Harry: If people don’t want to buy your product, it doesn’t matter how quickly you show it to them, it’ll just disgust them faster, I guess. Is your checkout flow really, really, really seamless on mobile, for example? So there’s a number of factors. For me and my clients, it’ll be working out a sweet spot, and also working out "If getting from here to here is going to make you 1.8 mil a year, I can find you that second for a fraction of that cost." If you want me to get you an additional second on top of that, it’s going to get a lot harder. So my cost to you will probably go up, and that won’t return an extra 1.8, because it’s not linear; you don’t get 1.8 mil for every one second.
Harry: It will tail off at some point. And clients will get to a point where… they’ll still be making gains but it might be a case of your engineering effort doubles, meaning your returns halve, you can still be in the green, hopefully it doesn’t get more expensive and you’re losing money on performance, but there’s a point where you need to slow down. And that’s usually things that I help clients find out because otherwise they will just keep chasing speed, speed, speed and get a bit blinkered.
Drew: Yeah, it is sort of diminishing returns, isn’t it?
Harry: That’s what I was looking for-
Drew: Yeah.
Harry: … diminishing returns, that’s exactly it. Yeah, exactly.
Drew: And in terms of knowing where to focus your effort… Say you’ve got the bulk of your users, 80% of your users are getting a response within two, three seconds, and then you’ve got 20% who may be in the long-tail that might end up with responses five, ten seconds. Is it better to focus on that 80% where the work’s really hard, or is it better to focus on the 20% that’s super slow, where the work might be easier, but it’s only 20%. How do you balance those sorts of things?
Harry: Drew, can you write all podcast questions for everyone else? This is so good. Well, a bit of a shout out to Tim Kadlec, he’s done great talks on this very topic and he calls it "The Long-Tail of Web Performance", so anyone listening who wants to look at that, Tim’s done a lot of good firsthand work there. The 80/20, let’s just take those as good example figures: by the time you’re dealing with the 80th percentile, you’re definitely in the edge cases. All your CrUX and web vitals data is based around the 75th percentile. I think there’s a lot of value in investing in that top 20th percentile, the worst 20%. Several reasons for this.
Harry: The first thing I’m going to start with is one of the most beautiful, succinct soundbites I’ve ever heard. And the guy who told me it, I can guarantee, did not mean it to be this impactful. I was 15 years old and I was studying product design, GCSE. The final project, it was a bar stool, so it was a good sign of things to come. And we were talking about how you design furniture. And my teacher basically said… I don’t know if I should… I’m going to say his name, Mr. Brocklesby.
Harry: He commanded respect but he was one of the lads, we all really liked him. But he was massive in every dimension. Well over six foot tall, but just a big lad. Big, big, big, big man. And he said to us "If you were to design a doorway, would you design it for the average person?" And 15-year-old brains are going "Well yeah, if everyone’s roughly 5’9 then yeah". He was like "Well, immediately, Harry can’t use that door." You don’t design for the average person, you design for the extremities, because you want it to be useful to the most people. If you designed a chair for the average person, Mr. Brocklesby wasn’t going to fit in it. So he taught me, from a really, really young age, design to your extremities.
Harry: And where that becomes really interesting in web perf is… If you imagine a ladder, and you pick up the ladder by the bot… Okay I’ve just realized my metaphor might… I’ll stick with it and you can laugh at me afterwards. Imagine a ladder and you lift the ladder up by the bottom rungs. And that’s your worst experiences. You pick the bottom rung in the ladder to lift it up. The whole ladder comes with it, like a rising tide floats all boats. The reason that metaphor doesn’t work is if you pick a ladder up by the top rung, it all lifts as well, it’s a ladder. And the metaphor doesn’t even work if I turn it into a rope ladder, because a rope ladder then, you lift the bottom rung and nothing happens but… my point is, if you can improve experience for your 90th percentile, it’s got to get that up for your 10th percentile, right?
Harry: And this is why I tell clients… they’ll say to me things like "Oh well, most of our users are on 4G on iPhones", so, all right, okay, and we start testing 3G on Android, and they’re like "No, no, most of our users are on iPhones". Okay… that means your average user’s going to have a better experience, but anyone who isn’t already in the 50th percentile just gets left further behind. So set the bar pretty high for yourself by setting expectations pretty low.
Harry: Sorry, I’ve got a really bad habit of giving really long answers to short questions. But it was a fantastic question and, to try and wrap up, 100% definitely I agree with you that you want to look at that long-tail, you want to look at that… your 80th percentile because if you take all the experiences on the site and look at the median, and you improve the median, that means you’ve made it even better for people who were already quite satisfied. 50% of people being effectively ignored is not the right approach. And yeah, it always comes back to Mr Brocklesby telling me “Don’t design for the average person because then Harry can’t use the door”. Oh, for anyone listening, I’m 193 centimeters, so I’m quite lanky, that’s what that is.
Drew: And all those arms and legs.
Harry: Yeah. Here’s another good one as well. My girlfriend recently discovered the accessibility settings in iOS… so everyone has their phone on silent, right? Nobody actually has a phone that actually rings, everyone’s got it on silent. She found that “Oh you know, you can set it so that when you get a message, the flash flashes. And if you tap the back of the phone twice, it’ll do a screenshot.” And these are accessibility settings, these are designed for that 95th percentile. Yet she’s like “Oh, this is really useful”.
Harry: Same with OXO Good Grips. OXO Good Grips, the kitchen utensils. I’ve got a load of them in the kitchen. They’re designed because the founder’s wife had arthritis and he wanted to make more comfortable utensils. He designed for the 99th percentile; most people don’t have arthritis. But by designing for the 99th percentile, inadvertently, everyone else is like "Oh my God, why can’t all potato peelers be this comfortable?" And I feel like it’s really, really… it’s a feel-good anecdote that I like to wheel out in these sorts of scenarios. But yeah, if you optimize for them… well, a rising tide floats all boats, and that therefore optimizes the tail-end of people, and you’re going to capture a lot of even happier customers above that.
Drew: Do you have the OXO Good Grips manual hand whisk?
Harry: I don’t. I don’t, is it good?
Drew: Look into it. It’s so good.
Harry: I do have the OXO Good Grips mandolin slicer which took the end of my finger off last week.
Drew: Yeah, I won’t get near one of those.
Harry: Yeah, it’s my own stupid fault.
Drew: Another example from my own experience of catering for that long-tail is that, in the project I’m working on at the moment, that long-tail is right at the end, you’ve got people with the slowest performance, but it turns out, if you look at who those customers are, they’re the most valuable customers to the business-
Harry: Okay.
Drew: … because they are the biggest organizations with the most amount of data.
Harry: Right.
Drew: And so they’re hitting bottlenecks because they have so much data to display on a page and those pages need to be refactored a little bit to help that use case. So they’re having the slowest experience and they’re, when it comes down to it, paying the most money and making so much more of a difference than all of the people having a really fast experience because they’re free users with a tiny amount of data and it all works nice and it is quick.
Harry: That’s a fascinating dimension, isn’t it? In fact, I had a similar… I had nowhere near the business impact of what you’ve just described, but I worked with a client a couple of years ago, and their CEO got in touch because their site was slow. Like, slow, slow, slow. Really nice guy as well, he’s just a really nice, down-to-earth guy, but he’s minted, like proper rich. And he’s got the latest iPhone, he can afford that. He’s a multimillionaire, he spends a lot of his time flying between Australia, where he is from, and Estonia, where he is now based.
Harry: And he’s flying first class, course he is. But it means most of his time on his nice, shiny iPhone 12 Pro Max whatever, whatever, is over airplane WiFi, which is terrible. And it was this really amazing juxtaposition, where he owns the site and he uses it a lot, it’s a site that he uses. And he was pushing it… I mean, easily their richest customer was their CEO. And he’s in this weirdly privileged position where he’s on a worse connection than Joe Public, because he’s somewhere above Singapore on a Qantas flight getting champagne poured down his neck, and he’s struggling. And that was a really fascinating insight, that… oh yeah, your 95th percentile can basically go in either direction.
Drew: Yeah, it’s when you start optimizing for using a site with a glass of champagne in one hand that you think “Maybe we’re starting to lose the way a bit.”
Harry: Yeah, exactly.
Drew: We talked a little bit about measurement of performance, and in my own experience with performance work it’s really essential to measure everything. A, so you can identify where problems are, but B, so that when you actually start tackling something you can tell if you’re making a difference, and how much of a difference you’re making. How should we be going about measuring the performance of our sites? What tools can we use and where should we start?
Harry: Oh man, another great question. So there’s a range of answers depending on how much time, resource, and inclination there is toward fixing site-speed. So what I will try and do with clients is… Certain off-the-shelf metrics are really good. Load time: do not care about that anymore. It’s very, very, very… I mean, it’s a good proxy; if your load time’s 120 seconds I’m going to guess you don’t have a very fast website, but it’s too obscure and it’s not really customer facing. I actually think vitals are a really good step in the right direction, because they do measure user experience, but they’re based on technical input. Largest Contentful Paint is a really nice thing to visualize, but the technical stuff there is: unblock your critical path, make sure hero images arrive quickly, and make sure your web font strategy is decent. There’s a technical undercurrent to these metrics. Those are really good off the shelf.
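For those off-the-shelf vitals, Google’s web-vitals package reports them from real users in a few lines. A minimal sketch, using the v1 API that was current around the time of this episode:

```js
// Capture Core Web Vitals from real users (web-vitals v1 API).
import { getCLS, getFID, getLCP } from 'web-vitals';

function report(metric) {
  // metric.name is 'CLS', 'FID', or 'LCP'; metric.value is the measurement.
  // Swap console.log for a beacon to your analytics endpoint.
  console.log(metric.name, metric.value);
}

getCLS(report);
getFID(report);
getLCP(report);
```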
Harry: However, if clients have got the time, it’s usually time, because you want to capture the data but you need time to actually capture the data. So what I try and do with clients is let’s go “Look, we can’t work together for the next three months because I’m fully booked. So, what we can do is really quickly set you up with a free trial of Speedcurve, set up some custom metrics” so that means that for a publisher client, a newspaper, I’d be measuring “How quickly was the headline of the article rendered? How quickly was the lead image for the article rendered?” For an E-Commerce client I want to measure, because obviously you’re measuring things like start render passively. As soon as you start using any performance monitoring software, you’re capturing your actual performance metrics for free. So your First Contentful Paint, Largest Contentful, etc. What I really want to capture is things that matter to them as a business.
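One possible way to capture that kind of "when was the headline rendered?" metric (an assumption here, not necessarily Harry’s own setup) is the Chromium-only Element Timing API; the elementtiming identifier below is made up:

```js
// In the markup: <h1 elementtiming="article-headline">…</h1>
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (entry.identifier === 'article-headline') {
      // renderTime can be 0 for some cross-origin content; fall back to loadTime.
      console.log('Headline rendered at', entry.renderTime || entry.loadTime, 'ms');
    }
  }
}).observe({ type: 'element', buffered: true });
```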
Harry: So, working with an E-Com client at the moment, we are able to correlate: the faster your start render, what is the probability of an add-to-cart? If you can show them a product sooner, they’re more likely to buy it. And this is a lot of effort to set up; this is kind of the stretch goal for clients who are really ambitious. But anything that you really want to measure, because like I say, you don’t really want to measure what your Largest Contentful Paint is, you want to measure your revenue, and was that influenced by Largest Contentful Paint? So the stretch goal, the ultimate thing, would be anything you would see as a KPI for that business. It could be, on newspapers, how far down the article did someone scroll? And does that correlate in any way to First Input Delay? Did people read more articles if CLS was lower? But then, before we start doing custom, custom metrics, I honestly think web vitals is a really good place to start, and it’s also been quite well normalized. It becomes a… I don’t know what the word is. Lowest common denominator, I guess, where everyone in the industry now can discuss performance on this level playing field.
Harry: One problem I’ve got, and I actually need to set up a meeting with the vitals team, is I also really think Lighthouse is great, but CLS is 33% of web vitals. You’ve got LCP, FID, CLS. CLS is 33% of your vitals. Vitals is what normally goes in front of your marketing team, your analytics department, because it pops up in Search Console, it’s mentioned in the context of search results pages. Where vitals are concerned, you’ve got a heavy weighting, 33%, a third of vitals is CLS; it’s only 5% of your Lighthouse score. So what you’re going to get is developers who build around Lighthouse, because it can be integrated into tooling, it’s a lab metric. Vitals is field data, it’s RUM.
Harry: So you’ve got this massive disconnect where you’ve got your marketing team saying "CLS is really bad", and developers are thinking "Well, it’s 5% of the Lighthouse score that DevTools is giving me, it’s 5% of the score that Lighthouse CLI gives us in CircleCI", or whatever you’re using, yet for the marketing team it’s 33% of what they care about. So the problem there is a bit of a disconnect, because I do think Lighthouse is very valuable, but I don’t know how they reconcile that fairly massive difference where in vitals, CLS is 33% of your score… well, not score, because you don’t really have one, and in Lighthouse it’s only 5%. And it’s things like that that still need ironing out before we can make this discussion seamless.
Harry: But, again, long answer to a short question. Vitals is really good. LCP is a good user experience metric which can be boiled down to technical solutions; same with CLS. So I think that’s a really good jump-off point. Beyond that, it’s custom metrics. What I try and get my clients to is a point where they don’t really care how fast their site is, they just care that they make more money than yesterday. And if they did, is that because the site was running fast? If they made less, is that because it was running slower? I don’t want them to chase a mystical two-second LCP, I want them to chase the optimal LCP. And if that actually turns out to be slower than what you think, then whatever, that’s fine.
Drew: So, for the web developer who’s just interested in… they’ve not got budget to spend on tools like Speedcurve and things, they can obviously run tools like Lighthouse just within their browser, to get some good measurement… Are things like Google Analytics useful for that level?
Harry: They are, and they can be made more useful. Analytics, for many years now, has captured rudimentary performance information. And that is going to be DNS time, TCP and TLS, time to first byte, page download time, which is a proxy… well, whatever, just page download time, and load time. So, fairly clunky metrics. But it’s a good jump-off point, and normally, for every project I start with a client, if they don’t have New Relic or Speedcurve or whatever, I’ll just say "Well, let me have a look at your analytics", because I can at least proxy the situation from there. And it’s never going to be anywhere near as good as something like Speedcurve or New Relic or Dynatrace or whatever.

Harry: You can send custom metrics really, really, really easily off to analytics. Take my site, for example. I’ve got metrics like "How quickly can you read the heading of one of my articles? At what point was the About page image rendered? At what point was the call to action that implores you to hire me? How soon is that rendered to screen?" Really trivial to capture this data, and almost as trivial to send it to analytics. So if anyone wants to view source on my site, scroll down to the closing body tag and find the analytics snippet, you will see just how easy it is for me to capture custom data and send that off to analytics.

Harry: And, in the analytics UI, you don’t need to do anything. Normally you’d have to set up custom reports and mine the data and make it presentable. These are a first-class citizen in Google Analytics. So the moment you start capturing custom analytics, there’s a whole section of the dashboard dedicated to it. There’s no setup, no heavy lifting in GA itself, so it’s really trivial and, if clients are on a real budget or maybe I want to show them the power of custom monitoring, I don’t want to say "Oh yeah, I promise it’ll be really good, can I just have 24 grand for Speedcurve?" I can start by just saying "Look, this is rudimentary. Let’s see the possibilities here, and now we can maybe convince you to upgrade to something like Speedcurve."
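The shape of what Harry describes might look roughly like this. A minimal sketch, assuming the older analytics.js ga() snippet and a made-up mark name:

```js
// Inline, immediately after the element you care about, drop a mark…
performance.mark('headline-rendered');

// …then, once the page has loaded, turn the mark into a GA "User Timing" hit.
window.addEventListener('load', () => {
  const [mark] = performance.getEntriesByName('headline-rendered');
  if (mark && window.ga) {
    // analytics.js timing hit: category, variable, value (integer milliseconds).
    ga('send', 'timing', 'Performance', 'headline-rendered', Math.round(mark.startTime));
  }
});
```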
Drew: I’ve often found that my gut instinct on how fast something should be, or what impact a change should have, can be wrong. I’ll make a change and think I’m making things faster and then I measure it and actually I’ve made things slower. Is that just me being rubbish at web perf?
Harry: Not at all. I’ve got a really pertinent example of this. Preload… a real quick intro for anyone who’s not heard of preload: loading certain assets on the web is inherently very slow, and the two primary candidates here are background images in CSS and web fonts, because before you can download a background image, you have to download the HTML, which then downloads the CSS, and then the CSS says “Oh, this div on the homepage needs this background image.” So it’s inherently very slow, because you’ve got that entire chunk of CSS time in between. With preload, you can put one line in the HTML, in the head tag, that says “Hey, you don’t know it yet but, trust me, you’ll need this image really, really, really soon.” So you can put a preload in the HTML which preemptively fires off this download. By the time the CSS needs the background image, it’s like “Oh cool, we’ve already got it, that’s fast.” And this is touted as this web perf Messiah.

Harry: Here’s the thing, and I promise you, I tweeted this last week and I’ve been proved right twice since. People hear about preload and the promise it gives, and it’s also very heavily pushed by Lighthouse; in theory, it makes your site faster. People get so married to the idea of preload that even when I can prove it isn’t working, they will not remove it again. Because “No, but Lighthouse said.” Now, this is one of those things where the theory is sound: if you’d otherwise have to wait for your web font, downloading it earlier means you’re going to see stuff faster. The problem is, when you think of how the web actually works, on any page you first hit, any brand-new domain you hit, you’ve got a finite amount of bandwidth, and the browser’s very smart at spending that bandwidth correctly. It will look through your HTML really quickly and make a shopping list. The most important thing is CSS, then it’s this jQuery, then it’s this… and the next few things are these, these, and these, at lower priority. As soon as you start loading your HTML with preloads, you’re telling the browser “No, no, no, this isn’t your shopping list anymore, buddy, this is mine. You need to go and get these.” That finite amount of bandwidth is still finite, but now it’s spent across more assets, so everything gets marginally slower. And I’ve had to prove this twice in the past week, and still people are like “Yeah, but no, it’s because it’s downloading sooner.” No, it’s being requested sooner, but it’s stealing bandwidth from your CSS. You can literally see your web fonts stealing bandwidth from your CSS.

Harry: So it’s one of those things where you have to, have to, have to follow the numbers. I’ve done it before with a large-scale client. If you’re listening to this, you’ve heard of this client. And I was quite insistent: “No, no, your head tags are in the wrong order, and you need to have them in this order, because theoretically, because of how browsers work, it has to be faster.” Even as I was saying it to the client, I knew I was setting myself up for a fall. So we deployed this change… to many millions of people, and it got slower. It got slower. And me sitting there, indignantly insisting “No, but browsers work like this” is useless, because it’s not working. And we reverted it, and I was like “Sorry! Still going to invoice you for that!” So it’s not you at all.
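For anyone who hasn’t seen one, a preload hint really is just a single line in the head. A minimal sketch, with hypothetical file names, showing the pattern and why it competes with the stylesheet below it:

```html
<head>
  <!-- Preemptively request assets the browser can't discover until the
       CSS has been downloaded and parsed. Each hint shares the same
       finite bandwidth as the stylesheet below, so measure the effect
       before and after adding one. -->
  <link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
  <link rel="preload" href="/img/hero.jpg" as="image">

  <link rel="stylesheet" href="/css/main.css">
</head>
```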
Drew: Follow the numbers.
Harry: Yeah, exactly. “I actually have to charge you more, because I spent time reverting it, it took me longer.” But yeah, you’re absolutely right. It’s not you, it’s one of those things where… I’ve done it a bunch of times on a much smaller scale, where I’ll be like “Well, this theoretically must work,” and it doesn’t. You’ve just got to follow what happens in the real world. Which is why that monitoring is really important.
Drew: As the landscape changes and technology develops, and Google rolls out new technologies that help us make things faster, is there a good way that we can keep up with the changes? Are there any resources we should be looking at to keep our skills up to date when it comes to web perf?
Harry: To quickly address the whole “Google making…” thing… I know it’s slightly tongue-in-cheek, but I’m going to focus on this. As I said right towards the beginning, bet on the browser. Things like AMP, for example, are at best an afterthought of a solution. There’s no replacement for building a fast site, and the moment you start using things like AMP, you have to hold on to those non-standard standards, at the mercy of the AMP team changing their mind. I had a client spend a fortune licensing a font from an AMP-allow-listed font provider, then at some point AMP decided “Oh, actually, no, that font provider, we’re going to blocklist them now.” So I had a client who’d invested heavily in AMP and this font provider, and they had to choose: “Well, do we undo all the AMP work, or do we just waste this very big sum a year on the web font?” Blah, blah, blah. So I’d be very wary of anything like that… I’m a Google Developer Expert, but I’m not under any gagging order… I can be critical, and I would say: avoid things that are hailed as a one-size-fits-all solution, things like AMP.
Harry: And to dump on someone else for a second, Cloudflare has a thing called Rocket Loader, which is AMP-esque in its endeavor. It’s pitched like “Oh, just turn this thing on at your CDN, it’ll make your site faster.” And actually it’s just a replacement for building your site properly in the first place. So… to address that aspect of it: try and remain as independent as possible, know how browsers work… which, with the Chrome monoculture, immediately puts you back in Google’s lap, but still, know how browsers work, stick to some fundamental ideologies. When you’re building a site, look at the page. Whether that’s in Figma, or Sketch, or wherever it is, look at the design and say “Well, that is what a user wants to see first, so I’ll put nothing in the way of that. I won’t lazy-load this main image, because that’s daft. Why would I do that?” So just think about what you want the user to see first. On an e-com site, it’s going to be that product image, probably the nav at the same time, but reviews of the product, Q&A of the product… lazy-load that. Tuck that behind JavaScript.
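As a rough sketch of that prioritization on a product page (the element IDs and the `reviews.js` module here are hypothetical): the product image loads eagerly, while the reviews widget is only fetched once the user scrolls near it.

```html
<!-- The product image is what the user came for: no lazy-loading,
     nothing in its way. -->
<img src="/img/product.jpg" alt="Product photo">

<!-- Below-the-fold imagery can use native lazy-loading. -->
<img src="/img/size-chart.png" alt="Size chart" loading="lazy">

<!-- The reviews code is only downloaded and run once the user
     scrolls to within 200px of the section. -->
<section id="reviews"></section>
<script>
  new IntersectionObserver(function (entries, observer) {
    if (entries[0].isIntersecting) {
      observer.disconnect();
      import('/js/reviews.js').then(function (m) { m.render('#reviews'); });
    }
  }, { rootMargin: '200px' }).observe(document.getElementById('reviews'));
</script>
```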
Harry: There are certain fundamental ways of working that will serve you well no matter what technology you’re reading up on, and that’s “Prioritize what your customer prioritizes.” Anything more than that will only slow things down, so don’t put things in the way of it. Then, for more tactical things to be aware of, to keep abreast of… and again, straight back to Google, but web.dev is proving to be a phenomenal resource for framework-agnostic, stack-agnostic insights. So if you want to learn about Vitals, or you want to learn about PWAs, web.dev is really great.
Harry: There are actually very few performance-centric publications. Calibre’s email… I think it’s fortnightly… is just phenomenal, it’s a really good digest. Keep an eye on the web platform in general: there’s the Performance Working Group, and they’ve got a load of proposals on GitHub. Again, back to Google, but no one knows about this website and it’s phenomenal: chromestatus.com. It tells you exactly what Chrome’s working on and what the signals are from other browsers, so if you want to see what the work is on priority hints, you can go and get links to all the relevant bug trackers. Chrome Status shows you milestones for each feature… “This is coming out in M88, this was released in 67,” or whatever. That’s a really good thing for quite technical insights.
Harry: But I keep coming back to this thing, and I know I probably sound like “old man shouts at cloud,” but stick to the basics. Nearly every single pound, or dollar, or euro I’ve ever earned has been from teaching clients “You know the browser does this already, right?” or “You know that this couldn’t possibly be faster?” And that sounds really righteous of me… I’ve never made a cent off of selling extra technology. Every bit of money I make is about removing, subtracting. If you find yourself adding things to make your site faster, you’re heading in the wrong direction.
Harry: Case in point, I’m not going to name… the big advertising/search engine/browser company at all, not going to name them, and I’m not going to name the JavaScript framework, but I’m currently in discussions with a very, very big, very popular JavaScript framework about removing, or making optional, something that’s actively harming the performance of a massive number of websites. And they were like “Oh, we’re going to loop in…” someone from this big company, because they did some research… and it’s like “We need an option to remove this thing, because you can see here, and here, and here, it’s making this site slower.” And their solution was to add more, like “Oh, but if you do this as well, then you can sidestep that,” and it’s like “No, no. Adding more to make a site faster must be the wrong solution. Surely you can see that you’re heading in the wrong direction if it takes more code to end up with a faster site.”
Harry: Because it was fast to start with, and everything you add is what makes it slower. And the idea of adding more to make it faster… although it might manifest itself in a faster website, it’s the wrong way to go about it. It’s a race to the bottom. Sorry, I’m getting really het up; you can tell I’ve not ranted for a while. So that’s the other thing: if you find yourself adding features to make a site faster, you’re probably heading in the wrong direction. It’s far more effective to make a site faster by removing things than it is to add them.
Drew: You’ve put together a video course called “Everything I Have Done to Make CSS Wizardry Fast”.
Harry: Yeah!
Drew: It’s a bit different from traditional online video courses, isn’t it?
Harry: It is. I’ll be honest, it’s partly… I don’t want to say laziness on my part, but I didn’t want to design a curriculum which had to be very rigid and take you from zero to hero, because the time involved in doing that is enormous, and time I didn’t know if I would have. So what I wanted to do was have ready-to-go material: just screencast myself talking through it. It doesn’t start off with “Here is a browser and here’s how it works,” so you do need to be at least aware of web perf fundamentals, but it’s hacks and pro tips and real-life examples.
Harry: And because I didn’t need to do a full curriculum, I was able to slam the price way down. So it’s not a big ten-hour course that will take you from zero to hero; you nip in and out as you see fit. It’s basically just looking at my site, which is an excellent playground for things that are unstable, or… it’s very low-risk for me to experiment there. So I’ve just done a video series. It was a ton of fun to record, just tearing down my own site and talking about “Well, this is how this works, and here’s how you could use it.”
Drew: I think it’s really great how it’s split up into solving different problems. If I want to find out more about optimizing images or whatever, I can think “Right, what does my mate Harry have to say about this?”, dip into the video about images, and off I go. It’s really accessible in that way; you don’t have to sit through hours and hours of stuff, you can just go to the bit you want, learn what you need to learn, and then get out.
Harry: I think I tried to keep it more… The benefit of not doing a rigid curriculum is you don’t need to watch a certain video first. There’s no intro, it’s just “Go and look around and see what you find interesting,” which means that someone suffering with LCP issues can think “Oh, well, I’ve got to dive into this folder here,” or if they’re suffering with CSS problems, they can go dive into that folder. Obviously I have no stats, but I imagine there’s a high abandonment rate on courses, purely because you have to trudge through three hours of intro in case you do miss something, and it’s like “Oh, do you know what, I can’t keep doing this every day,” and people might just abandon a lot of courses. So my thinking was: just dive in. You don’t need to have seen the preceding three hours, you can just go and find whatever you want. And feedback’s been really, really good. In fact, what I’ll do is… it doesn’t exist yet, but I’ll set it up straight after this call… anybody who uses the discount code SMASHING15 will get 15% off of it.
Drew: So it’s almost like you’ve performance-optimized the course itself, because you can just go straight to the bit you want and you don’t have to do all the negotiation and-
Harry: Yeah, unintentional but I’ll take credit for that.
Drew: So, I’ve been learning all about web performance, what have you been learning about lately, Harry?
Harry: Technical stuff… not really. I’ve got a lot on my “to learn” list, so QUIC and HTTP/3 sort of stuff I would like to get a bit more working knowledge of, but I wrote an E-Book during the first lockdown in the UK, so I learned how to make E-Books, which was a ton of fun because they’re just HTML and CSS, and I know my way around that, so that was a ton of fun. I learnt very rudimentary video editing for the course, and what I liked about those is that none of it is conceptual work. Obviously, learning a programming language, you’ve got to wrestle with concepts, whereas making an E-Book was just workflows and… stuff I’d never tinkered with before, so it was interesting to learn, but it didn’t require a change of career, so that was quite nice.
Harry: And then, non-technical stuff… I ride a lot of bikes, I fall off a lot of bikes… and because I’ve not traveled at all since last March, nearly a year now, I’ve been doing a lot more cycling and focusing a lot more on… improving that. So I’ve been doing a load of research around power outputs and functional threshold power. I’m doing a training program at the moment, so constantly, constantly exhausted legs, but I’m learning a lot about the physiology around cycling. I don’t know why, because I’ve got no plans of doing anything with it other than keep riding. It’s been really fascinating. I feel like I’ve been very fortunate during lockdowns, plural, because I’ve managed to stay active. A lot of people will miss out on simple things like a daily commute to the office, a good chance to stretch your legs. In the UK, as you’ll know, cycling has been very much championed, so I’ve been tinkering a lot more with learning about riding bikes from a more physiological aspect, which means… I don’t know, just being a nerd about something else for a change.
Drew: Is there perhaps not all that much difference between performance optimization on the web and performance optimization in cycling, it’s all marginal gains, right?
Harry: Yeah, exactly. And the amount of graphs I’ve been looking at on the bike… I’ve got power data from the bike, and I’ll go out on a ride and come back like “Oh, if I had five more watts here but then saved 10 watts there, I could do this, this, and this the fastest ever”… I’ve been a massive anorak about it. But yeah, you’re right. Do you know what, I think you’ve hit upon something really interesting there. I think that kind of thing is a good sport/pastime for somebody who is a bit obsessive, who does like chasing numbers. There are things on… I mean, you’ll know this, but… Strava, you’ve got your KOMs. I bagged 19 of them last year, which is, for me, a phenomenal amount. And it’s nearly all from obsessing over the available data and looking at “This guy that I’m trying to beat was doing 700 watts at this point, so if I could get up to 1,000 and then tail off…” and blah, blah, blah… it’s being obsessive. Nerdy. But you’re right, I guess it is a similar kind of thing, isn’t it? It’s learning where you can afford to tweak things, where you can squeeze the last little drops out…
Drew: And you’ve still got limited bandwidth in both cases. You’ve got limited energy and you’ve got limited network connection.
Harry: Exactly, you can’t just magic some more bandwidth there.
Drew: If you, the listener, would like to hear more from Harry, you can find him on Twitter, where he’s @csswizardry, or go to his website at csswizardry.com, where you’ll find some fascinating case studies of his work and find out how to hire him to help solve your performance problems. Harry’s E-Book that he mentioned, and the video course, we’ll link up from the show notes. Thanks for joining us today, Harry. Do you have any parting words?
Harry: I’m not one for soundbites and motivational quotes, but I heard something really, really, really insightful recently. Everyone keeps saying “Oh well, we’re all in the same boat,” and we’re not. We’re all in the same storm, and some people have got better boats than others. Some people are in little dinghies, some people have got mega yachts. Oh, is that a bit dreary to end on… don’t worry about Corona, you’ll be dead soon anyway!
Drew: Keep hold of your oars and you’ll be all right.
Harry: Yeah. I was on a call last night with some web colleagues, and we were talking about this, and missing each other a lot. The web is, by default, remote; that’s the whole point of the web. But… we’re missing a lot of human connection, so chatting to you for this hour and a bit has been wonderful, it’s been really nice. I don’t know what my parting words are really meant to be, I should have prepared something, but I just hope everyone’s well, and that everyone’s making what they can out of lockdown and keeping busy.