Does Cumulative Layout Shift stop measuring on user interaction? - performance-testing

Does the Cumulative Layout Shift metric stop measuring on user interaction? Google's guidance on Cumulative Layout Shift says:
Layout shifts that occur in response to user interactions (clicking a
link, pressing a button, typing in a search box and similar) are
generally fine, as long as the shift occurs close enough to the
interaction that the relationship is clear to the user.

It's important to remember that Cumulative Layout Shift (CLS) is not just focused on page load. Many of the most problematic layout shifts we see happen after the initial load of a page has completed. You may have started reading an article or scrolled when all of a sudden content "shifts" because of an ad that has been injected or updated.
So, CLS is accumulated until the page is unloaded. It does not "stop" measuring upon user input, as one of the goals of the metric is to capture the experience real users have of unstable page layouts.
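You can watch this behavior yourself by observing layout-shift entries directly. Here is a minimal sketch (TypeScript, using the browser's PerformanceObserver API; the cast to any is only because layout-shift entry types are not in the standard DOM typings). Note the hadRecentInput flag on each entry, which is how shifts that closely follow user input are excluded from the score:

```typescript
// Minimal sketch: accumulate layout-shift values over the page's lifetime.
// Entries flagged with hadRecentInput (input occurred shortly before the
// shift) are excluded, mirroring how the metric treats user interactions.
let cumulativeScore = 0;

const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as any[]) {
    if (!entry.hadRecentInput) {
      cumulativeScore += entry.value;
    }
  }
  console.log("Current CLS:", cumulativeScore);
});

// buffered: true replays shifts that happened before the observer started.
observer.observe({ type: "layout-shift", buffered: true });
```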
You might wonder if this means that long-lived pages or sessions could end up with a worse CLS. Thought is going into how best to factor in this nuance via the WebPerf Working Group. Ultimately, we need to normalize how we measure metrics like layout shift across the length of a session.

Related

Does Google PSI "trailing thirty days" testing still occur?

I noticed that this Google PSI FAQ, written for a previous, now-deprecated version of the test, says that changes made to the website do not affect the PSI score immediately.
"The speed data shown in PSI is not updated in real-time. The reported metrics reflect the user experience over the trailing thirty days and are updated on a daily basis."
https://developers.google.com/speed/docs/insights/faq
Does this part of the FAQ still apply today? I've noticed that if I reduce the number of DOM elements, the "Avoid an excessive DOM size" complaint in Google PSI immediately shows the correct new count of DOM elements but scores still remain in the same range.
The part you are referring to is "field data", which is indeed still calculated over a trailing 30-day period.
However, when you run your website through PageSpeed Insights, the test itself runs without any cache and is recalculated each time you run it (known as "lab data").
Think of field data as "real world" information, based on visitors to your site and their experiences; it is a far more accurate representation of what is really happening when people visit your site.
Think of "lab data" as a synthetic test and a diagnostic tool. It tries to simulate a slower CPU and a 4G connection, but it is still a simulation, designed to give you feedback on potential problems. It does have the advantage of updating instantly when you make changes, though.
For this reason, your "field data" will always lag behind your "lab data" when you make changes to the site.
Also bear in mind that some items in the report are purely diagnostics. In your example of "excessive DOM size", this has no direct scoring implications. However, it is there to explain why you might be getting slow render times and/or a large Cumulative Layout Shift, as lots of DOM elements mean more rendering time and more chance of a reflow.
See this answer I gave on the new scoring model PSI uses.
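If you want to compare the two data sets programmatically, the PageSpeed Insights v5 API returns both in a single response. A rough sketch in TypeScript (the target URL is a placeholder; exact response shape may vary by page, so the optional chaining is deliberate):

```typescript
// Sketch: fetch both field data (real-user metrics over the trailing
// ~30 days) and lab data (a fresh synthetic Lighthouse run) from the
// PageSpeed Insights v5 API in one request.
const target = "https://example.com"; // placeholder URL
const endpoint =
  "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" +
  `?url=${encodeURIComponent(target)}&strategy=mobile`;

async function comparePsiData(): Promise<void> {
  const res = await fetch(endpoint);
  const data = await res.json();

  // Field data: aggregated real-user experience; lags behind site changes.
  console.log("Field data:", data.loadingExperience?.metrics);

  // Lab data: synthetic run; reflects your changes immediately.
  console.log("Lab score:", data.lighthouseResult?.categories?.performance?.score);
}

comparePsiData().catch(console.error);
```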

What is the name of the widget or design element that shows where you are in the process of filling out a form?

I am pretty sure it has a name other than a progress bar, which fluidly shows the amount loaded or percent complete. I am not interested in what shows percent complete. I am looking for the name of the thing that shows you, in named steps or page numbers, where you are in filling out a form.
I remember seeing somewhere a line of interlocking arrow-shaped buttons that accomplished this, but the specifics of the graphic representation are not so important to me. I will experiment with various ways to render this. But in order for me to research more: what is this thing called by web developers?

Heavy numerical data entry UI pattern?

Is there any UWP "pattern", or maybe just some tips, for a data entry form geared towards letting the user enter a series of numerical fields quickly and efficiently? We are creating a tablet-only app (so UWP may not have been the best choice, but due to circumstances beyond our control we're committed). There's one form where the user will enter 10-12 numerical values in rapid succession.
Our users will likely use an external keyboard. I'm a newbie, but am inheriting a fairly functional, incomplete application to finish. We've been throwing around ideas like a number-pad keyboard that is somehow locked visible, and/or maybe tabbing through the controls? Not sure if any of this is possible via UWP. Would appreciate any guidance or reference sites! I'm sure we're not the only ones working on this problem... at least I hope not!
TIA!
You could use a TextBox with InputScope set to Number to display a numeric on-screen keyboard. If your numbers have a known number of digits, you could automatically switch focus once the proper number of digits is entered. You could also provide +/- buttons if the numbers are small. The NumericUpDown control in WinRT XAML Toolkit has the +/- buttons and can also change values by sliding up/down or left/right on the entry box. And since we're talking about sliding, the platform's Slider control allows you to select values by sliding a slider.
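The auto-advance idea is platform-agnostic. Here is a rough sketch of the same focus-switching pattern in web terms (TypeScript) purely to illustrate the logic, not UWP specifics; the field width is an assumed constant:

```typescript
// Sketch of the auto-advance pattern: once a field holds the expected
// number of digits, move focus to the next input automatically.
const EXPECTED_DIGITS = 4; // assumption: fixed width per field

function wireAutoAdvance(inputs: HTMLInputElement[]): void {
  inputs.forEach((input, i) => {
    input.addEventListener("input", () => {
      if (input.value.length >= EXPECTED_DIGITS && i + 1 < inputs.length) {
        inputs[i + 1].focus(); // jump to the next field in entry order
      }
    });
  });
}

// Usage: wire up all numeric fields on the form, in tab order.
wireAutoAdvance(
  Array.from(document.querySelectorAll<HTMLInputElement>("input[inputmode='numeric']"))
);
```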

How to make a series of dice rolls secure in a client/server scenario?

Imagine the following situation in a game: a series of random numbers is presented to the player. Each number is shown for a short period of time before the next one appears. The player's aim is to pick a high number; he or she just needs to "click" at the right moment, and that number is chosen.
The question is about how to implement this scenario in a secure way in a client/server scenario.
That means there is a game client which displays the scene described above, and a server to which the chosen number needs to be sent (in whatever form). The catch is to make this secure, so that cheating (e.g. by modifying the client) is not possible.
There's really no way to make this completely secure. Even without modifying your client, all someone needs to do is have another program running in the background reading the monitor and looking for a certain number to appear and then send a mouse click event to your client program. This can be done without modifying your client program at all. Even if you somehow managed to make it secure so that they couldn't run any other programs or services at the same time as your program, they could just point a webcam at the screen hooked to a different computer to do the optical digit recognition and send a mouse click event over USB to the computer running your program.
Fortunately, you may be able to get around the problem. Generally, if something is to be a "dice roll", you want a random, luck-based outcome. By allowing the player to click at a certain time, you are making this a skill-based game instead of a luck-based one, and therefore not really a dice roll. You could introduce a slight delay between the click and the moment the dice stops rolling: the player sees it rolling around, clicks, and it slows down and lands on a number. That way, the number displayed at the moment of the click does not determine the outcome; the next (random) number after it does. This eliminates the possibility of cheating and makes it a luck-based roll instead of a skill-based one.
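One way to implement that suggestion is to make the server authoritative over both timing and randomness, so the client only ever reports "the user clicked now." A rough sketch in TypeScript (all names and the session mechanism are hypothetical):

```typescript
// Sketch: the server is authoritative. The client never sends a number,
// only a "stop" signal; the server draws the result from its own RNG
// after the click arrives, so a modified client gains nothing.
interface RollSession {
  id: string;
  startedAt: number;
}

const sessions = new Map<string, RollSession>();

function startRoll(sessionId: string): void {
  sessions.set(sessionId, { id: sessionId, startedAt: Date.now() });
}

function stopRoll(sessionId: string): number {
  const session = sessions.get(sessionId);
  if (!session) throw new Error("unknown session");
  sessions.delete(sessionId);

  // The outcome is drawn *after* the click, never read from the client:
  // this is the "next random number" idea from the answer above.
  return 1 + Math.floor(Math.random() * 6);
}

// Client flow (pseudo-usage): startRoll("abc"); ...user clicks... stopRoll("abc");
```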

How do you measure if an interface change improved or reduced usability?

For an ecommerce website how do you measure if a change to your site actually improved usability? What kind of measurements should you gather and how would you set up a framework for making this testing part of development?
Multivariate testing and reporting is a great way to actually measure these kinds of things.
It allows you to test what combination of page elements has the greatest conversion rate, providing continual improvement on your site design and usability.
Google Website Optimizer has support for this.
Use methods similar to those you used to identify the usability problems to begin with: usability testing. Typically you identify your use cases and then run a lab study evaluating how users go about accomplishing certain goals. Lab testing is typically good with 8-10 people.
A more data-driven methodology we have adopted to understand our users is anonymous data collection (you may need user permission; make your privacy policies clear, etc.). This simply means recording which buttons and navigation menus users click on, or how users delete something (e.g. when changing quantity, are more users entering 0 and updating the quantity, or hitting X?). This is a bit more complex to set up: you have to develop an infrastructure to hold the data (which is really just counters, e.g. "Times clicked x: 138838383, Times entered 0: 390393") and allow new data points to be created as needed and plugged into the design.
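As a rough illustration of how small that counter infrastructure can start out, here is a sketch in TypeScript (the event names are made up for the quantity example above; a real system would persist the counters rather than keep them in memory):

```typescript
// Sketch: anonymous event counters. Each UI action increments a named
// counter; new data points are added just by using a new name.
const counters = new Map<string, number>();

function track(event: string): void {
  counters.set(event, (counters.get(event) ?? 0) + 1);
}

// Example instrumentation for the quantity-removal question above:
track("quantity_set_to_zero"); // user typed 0 and updated quantity
track("remove_via_x_button");  // user clicked the X instead

console.log(Object.fromEntries(counters));
```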
To push the measurement of a UI change's improvement upstream from the end user (where data gathering can take a while) to design or implementation, some simple heuristics can be used:
Does the scenario take fewer actions to perform? (If yes, it has improved.) Measurement: # of steps reduced / added.
Does the change reduce the number of kinds of input devices used (even if the # of steps is the same)? By this I mean: if you take something that relied on both the mouse and keyboard and change it to rely only on the mouse or only on the keyboard, then you have improved usability. Measurement: change in # of devices used.
Does the change make different parts of the website consistent? E.g. if one part of the e-commerce site loses changes made while you are not logged on and another part does not, this is inconsistent. Changing them to have the same behavior improves usability (preferably toward the more fault-tolerant behavior, please!). Measurement: make a graph (a flow chart, really) mapping the ways a particular action could be done. Improvement is a reduction in the # of edges on the graph.
And so on... find some general UI tips, figure out some metrics like the above, and you can approximate usability improvement.
Once you have these design-level approximations of usability improvement and have gathered longer-term data, you can see whether they have any predictive ability for the end-user reaction (e.g.: over the last 10 projects, we've seen scenarios get 1% quicker on average for each action removed, with a range of 0.25% and a standard deviation of 0.32%).
The first way can be fully subjective or partly quantified: user complaints and positive feedback. The problem with this is that you may have strong biases when filtering that feedback, so you had better make it as quantitative as possible. Having a ticketing system to file every report from users and gathering statistics about each version of the interface might be useful. Just get your statistics right.
The second way is to measure the difference via a questionnaire about the interface, taken by end users. Answers to each question should be a set of discrete values, and then again you can gather statistics for each version of the interface.
The latter approach may be much harder to set up (designing a questionnaire, possibly a controlled environment for it, and guidelines to interpret the results is a craft by itself), but the former makes it unpleasantly easy to mess up the measurements. For example, you have to consider that the number of tickets you get for each version depends on how long it has been in use, and that all time ranges are not equal (e.g. a whole class of critical issues may never be discovered before the third or fourth week of usage, or users might tend not to file tickets in the first days of use even if they find issues, etc.).
Torial stole my answer. Although: if there is a measure of how long it takes to do a certain task, and the time is reduced while the task is still completed, then that's a good thing.
Also, if there is a way to record the number of cancels, then that would work too.
