Written by Arthur Luimes.
If you ask people what they mean when they talk about the performance of a website, you will hear different stories. An end user might say that a site has bad performance when they look at the screen and see the page loading slowly. IT test managers or traditional performance testers, however, will tell you that performance is defined by the response times of the HTTP calls that the website makes to the back-end web server.
This latter definition of performance is usually related to what the user sees on the screen, but it is not always the same thing. You might wonder why these technical people, who often have deep knowledge of performance, do not test, measure, or even write requirements for the thing that really matters to their end users: the browser screen. The fact is, compliance with traditional performance requirements is no guarantee of a good user experience – and that is a problem!
A brand new, browser-based world
I think the discrepancy occurs for several reasons. Firstly, traditional web pages consisted of only a few HTTP calls, and the majority of these HTTP elements were not dynamic in any way, so testing with HTTP scripts was roughly equivalent to testing at the visual level. More importantly, load testing with actual browsers requires a lot of system resources for the load generators, and until now those resources were simply not available. I have written another blog on performance testing in the cloud using browsers, which deals with this topic in more depth. And lastly, some performance testers (not you, of course) thought that browser-based test scripts were just too easy and did not show off their added value in the market. They may be in for a surprise in the not-too-distant future, when browser-based performance tests become more and more common.
These days, web pages are dynamic, change often, contain animations, produce HTTP calls that are unrelated to the loading time of the browser, and spend a lot of time in the browser itself, parsing and painting, even when no HTTP calls are being processed. A recent study I did on a heavily used public banking website showed that at least 50% of the loading time was produced by the browser. That means HTTP-based (protocol-level) testing will produce results that look less and less like what the user actually sees. We need a better solution.
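To make this concrete: the standard Navigation Timing API lets you split page-load time into a rough "network" part and a "browser" part. The sketch below is a minimal approximation (not the methodology of the study above – late subresource downloads get lumped into the "browser" share), to be run after the load event:

```ts
// Rough split of page-load time into "network" and "browser" portions
// using the Navigation Timing API. Run after the load event;
// loadEventEnd is 0 until the page has finished loading.
const [nav] = performance.getEntriesByType(
  'navigation'
) as PerformanceNavigationTiming[];

if (nav && nav.loadEventEnd > 0) {
  // Time until the main document finished downloading.
  const networkMs = nav.responseEnd;
  // Remaining time until the load event, dominated by parsing,
  // script execution, layout, and paint inside the browser.
  const browserMs = nav.loadEventEnd - nav.responseEnd;
  const totalMs = nav.loadEventEnd;
  console.log(`network: ${networkMs.toFixed(0)} ms (${Math.round((networkMs / totalMs) * 100)}%)`);
  console.log(`browser: ${browserMs.toFixed(0)} ms (${Math.round((browserMs / totalMs) * 100)}%)`);
}
```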
Browser-based performance requirements – the response time
We start off by needing performance requirements for the application as it is seen in the browser, by the user. The first question that comes to mind is: what is the response time that the user sees?
The answer might seem simple: one could look at the Visually Complete time, the time it takes until the page is loaded and nothing changes any more. However, while the page is loading, the end user may be completely uninterested in some parts that are still loading: they have already seen the content they came for and want to progress to the next step, for example by clicking on that content. The time it takes for the user to see what is relevant to them is called the Time to First Meaningful Paint. This comes closest to what the user wanted from the web page, but it has a drawback: what is "meaningful" is subjective and changes from page to page, which makes it difficult to measure in an automated fashion 1). So you need to make up your mind: use a measure that corresponds exactly to what the user came for and give up precise measurability, or go for measurability and accept that the response time may include actions a real user has no interest in. The choice is yours. Or should we use both measures?
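If you opt for measurability, reference 1) notes that Google is replacing the First Meaningful Paint with the Largest Contentful Paint, which you can capture in the browser with the standard PerformanceObserver API. A minimal sketch:

```ts
// Capture the Largest Contentful Paint (LCP): the render time of the
// largest element painted so far. Later, larger paints produce new
// entries, so the last entry seen before user input is the final LCP.
let lcpMs = 0;

const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    lcpMs = entry.startTime; // render time of the largest element so far
  }
});

// `buffered: true` also delivers entries from before we subscribed.
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });

// LCP is considered final once the user interacts with the page.
addEventListener('keydown', report, { once: true });
addEventListener('click', report, { once: true });

function report(): void {
  lcpObserver.disconnect();
  console.log(`Largest Contentful Paint: ${lcpMs.toFixed(0)} ms`);
}
```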
Whatever you choose, be aware that a lot of research has shown that response times over three seconds turn your users off, and that the best response time for any application is at most one second: users start to lose focus on the task they are performing when things get slower than one second 2). No, this is not just the case for the spoiled internet generation; it also applies to the older generation, myself included.
What about slower mobile networks?
Users on slower networks, such as 3G mobile connections, might be more tolerant of your website's response time. They may be used to the fact that applications that need the network are slower on a slow network than on a faster one, like WiFi. In that case, you might consider a separate, larger response-time requirement for situations where the network is less than optimal.
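If you want client-side code to apply such a split budget, one option is the Network Information API. Note that this API is non-standard and only available in Chromium-based browsers, and the budget values below are illustrative assumptions, not research-backed thresholds:

```ts
// Choose a response-time budget based on network quality, using the
// non-standard Network Information API (Chromium-only), hence the
// feature check and the fallback.
type EffectiveType = 'slow-2g' | '2g' | '3g' | '4g';

const connection = (navigator as any).connection as
  | { effectiveType?: EffectiveType }
  | undefined;

function responseTimeBudgetMs(): number {
  switch (connection?.effectiveType) {
    case 'slow-2g':
    case '2g':
    case '3g':
      return 5000; // relaxed budget on slow mobile networks (assumed value)
    default:
      return 3000; // standard budget on fast networks, or if API unavailable
  }
}

console.log(`response-time budget: ${responseTimeBudgetMs()} ms`);
```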
However, that does not mean your site should be allowed to load forever. Eventually the user will give up: researchers have found this happens between 8 and 10 seconds. If loading lasts longer than that, the website is simply blocking the use of the mobile device. Between 8 and 10 seconds is therefore a particularly good point to implement a timeout: your application should stop loading, show a "sorry" page to the end user, and free up the browser for other activities. This requirement is often forgotten, but it will make your site feel more robust and thereby more performant.
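A minimal sketch of such a timeout around a single fetch call, using the standard AbortController; the render and showSorryPage helpers are hypothetical stand-ins for your own UI code:

```ts
// Abort a slow request after 9 seconds (within the 8-10 s window) and
// fall back to an apology page instead of letting the browser hang.
async function loadWithTimeout(url: string, timeoutMs = 9000): Promise<void> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    render(await response.text());
  } catch (err) {
    // An AbortError means we hit the timeout; anything else is a
    // network error. Either way, give the browser back to the user.
    showSorryPage();
  } finally {
    clearTimeout(timer);
  }
}

// Hypothetical rendering helpers, stand-ins for your own UI code.
function render(html: string): void {
  document.body.innerHTML = html;
}
function showSorryPage(): void {
  document.body.innerHTML =
    '<p>Sorry, this is taking too long. Please try again.</p>';
}
```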
Remember that screen transitions can make your site feel fast – or slow!
When discussing whether a response time is the only performance requirement you will ever need, you will undoubtedly find that there is more to performance than meets the eye. Screen transition effects can make your site feel fast, like a beautifully animated menu that slides to the left when you click the "X" in the menu. However, when the frame rate of that animation drops below 60 frames per second (FPS), the user will see a stuttering menu and wonder why your site is so sluggish.
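A crude way to check this during development is to count requestAnimationFrame callbacks per second. Browser dev tools provide far more precise frame profiling, but this sketch gives a quick indication:

```ts
// Rough frames-per-second meter: count requestAnimationFrame callbacks
// per second. A smooth animation calls back ~60 times per second on a
// 60 Hz display; fewer callbacks mean dropped frames and visible stutter.
let frames = 0;
let windowStart = performance.now();

function countFrame(now: DOMHighResTimeStamp): void {
  frames++;
  if (now - windowStart >= 1000) {
    console.log(`~${frames} FPS`);
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(countFrame);
}
requestAnimationFrame(countFrame);
```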
Show the information that is relevant first – and keep it there
When discussing how to measure response time on a dynamically built website, we mentioned the First Meaningful Paint. Most developers know that they should show the information the user came for as soon as possible, and show less relevant information, such as advertisements, later. However, if you open a site, see the relevant information, start to read it, and then the text block is pushed down because an advertisement appears above it, you lose your focus. The loading of the advertisement disturbed your reading: you have to find the relevant text block again, locate where you were interrupted, and continue from there, and you will feel that you have lost a lot of time. You will therefore rate the website as slow. An important requirement for fast sites is therefore to show the meaningful content first, and then to keep it in place. How to test this? Try the nifty Cumulative Layout Shift measure Google has recently introduced 3).
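Cumulative Layout Shift can be captured with the same PerformanceObserver API used earlier. The layout-shift entry type is not yet in the standard TypeScript DOM typings, hence the small interface below; as a rule of thumb, Google rates scores below 0.1 as good:

```ts
// Accumulate the Cumulative Layout Shift (CLS) score: the sum of all
// layout-shift values not caused by recent user input. Lower is better.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;           // viewport-weighted size of the shift
  hadRecentInput: boolean; // shifts right after input are expected; skip them
}

let cls = 0;

const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
    }
  }
});
clsObserver.observe({ type: 'layout-shift', buffered: true });

// Report the accumulated score when the page is hidden or closed.
addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    console.log(`CLS: ${cls.toFixed(3)}`);
  }
});
```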
Optimize the speed index – but what is that?
The speed index 4) is a measure of the time it takes for the visible parts of the page to be displayed. It focuses on how quickly large portions of the final page appear during the loading process. When most of the page is displayed quickly and only a small part arrives much later, the site is still considered to "feel" fast and will have a low speed index: think of it as a sort of weighted response time for the loading of the page.
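Unlike the browser metrics above, the speed index is computed by tooling such as WebPageTest 4) or Lighthouse rather than measured in page code. As a sketch, assuming the lighthouse and chrome-launcher npm packages are installed and their current APIs match, you can retrieve it programmatically:

```ts
// Retrieve the speed index for a page using the Lighthouse Node API.
// Assumes `npm install lighthouse chrome-launcher` and a recent Node.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function speedIndex(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    // The speed-index audit reports the metric in milliseconds.
    console.log(result?.lhr.audits['speed-index'].displayValue);
  } finally {
    await chrome.kill();
  }
}

speedIndex('https://example.com').catch(console.error);
```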
So how do you display the majority of the page quickly? There are many good practices, including the one on the First Meaningful Paint mentioned above, but I want to highlight two others: do not block the CSS with JavaScript, and introduce skeletons 5). Blocked CSS just produces white pages during the transition, and chances are that the majority of the resulting page is not white.
The latter, skeletons (or shimmers), are "mockups" of text, images, or other parts of the site. They give a feel of what the actual content, which usually arrives much later, will look like. The big advantage of skeletons is that they guide the eye to the right location and let it get used to what the text block or image will roughly look like, so the eye does not have to search for the content when the skeleton is eventually replaced by the real thing. Really cool stuff, used extensively by large companies like Facebook, Google, and LinkedIn, and by other sites that care about perceived performance (and they should).
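A minimal sketch of the pattern in plain browser code, assuming a hypothetical /api/article endpoint and a .skeleton CSS class that draws the grey shimmer of reference 5):

```ts
// Minimal skeleton pattern: render a grey placeholder immediately, then
// swap in the real content when it arrives. Because the placeholder
// reserves the same space, the swap causes no layout shift.
const article = document.getElementById('article')!;

// 1. Show the skeleton right away. The .skeleton class (assumed to be
//    defined in your CSS) draws a grey, optionally shimmering, block.
article.innerHTML = '<div class="skeleton" style="height: 12em"></div>';

// 2. Replace it with real content once the data is in. `/api/article`
//    is a hypothetical endpoint standing in for your own data source.
fetch('/api/article')
  .then((response) => response.text())
  .then((html) => {
    article.innerHTML = html;
  });
```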
Conclusion
The browser continues to become ever more important, and sites in the browser are getting more complex – and with them, the user experience. When considering the requirements you want to impose on your site, think about which method you will use to measure response time (First Meaningful Paint or something else), remember that a loading time of one second is ideal, introduce timeouts after 8 to 10 seconds, run animations at a minimum of 60 frames per second, do not move the relevant content around while the page is loading, and optimize the speed index. These are the basic criteria to make your site feel fast.
About the author
Arthur Luimes is a Senior Performance Consultant at Altersis Performance, with more than twenty-five years of experience in performance management, and is leading the way to embrace new technologies to enable performance optimization.
References and further reading
1) Google is going to remove the First Meaningful Paint measurement from the Chrome Lighthouse measurements in favor of the Largest Contentful Paint, see here.
2) The Google RAIL model defines 1000 ms as the time at which users start to lose focus.
3) The Cumulative Layout Shift (CLS) measurement is described here.
4) The speed index was introduced as part of WebPageTest, an open-source tool for measuring the performance of internet sites.
5) Skeletons can be used as a nice CSS trick – see here, for instance.