PageSpeed inconsistent result against webpagetest.org - performance-testing

When testing my website performance with WebPageTest I get excellent results, with my pages being fully loaded in under 1s, taking around 0.6s.
Those tests are run from my user base's location (Brazil - São Paulo), so they should be similar to what my users experience.
But when I check the speed report in Google Search Console, it shows around 1.4s, which is far from the results I get here.
What I am unsure about is:
Is it because the speed report in Google Search Console is still experimental?
Or is there something wrong with how I am running these tests?
The webpage I am testing is:
https://www.99contratos.com.br/contrato-locacao-residencial.php
And a result I get from WebPageTest can be seen by clicking the link below:
Results
I appreciate any help, tips, or explanations.
Kind Regards

The data in search console is 'real world data' based on visitor experiences.
It is more accurate than synthetic tests.
What you need to look at is the breakdown, not just the speeds. If you have a small percentage in the "red" and "orange" (less than 20%), then you do not need to worry; those users probably have very cheap phones and/or a poor 3G connection. You cannot do much about that.
You also need to think about where people are accessing your site from. If they are mostly abroad, then you need a CDN as close to them as possible, as latency will ruin your site's load speed (so look at your visitor stats in Google Analytics).
Look at what devices your users access your site on; if they mostly have very cheap Android phones, then expect higher load times while they process the page (this is hard to determine from the outside).
Just to reassure you: your page scores 98/100 for me using the Developer Tools audit, which, considering I am in the UK, is plenty quick enough.
A couple of suggestions to improve the speed:
The main thing you haven't done is inline your critical CSS.
This means that your 'above the fold' content can display the second all of the HTML has loaded. Not having to wait for the CSS to arrive via a separate request can really speed up your FCP and FMP, especially if someone is on a high-latency connection.
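As a minimal sketch of what that build step could look like (the file names here are assumptions for illustration, not your actual setup):

```python
from pathlib import Path

# Sketch of a build step that inlines critical CSS, assuming a
# stylesheet named critical.css and a page named index.html.
critical_css = Path("critical.css").read_text()
html = Path("index.html").read_text()

# Swap the render-blocking <link> for an inline <style> block so the
# above-the-fold content can render without waiting on a second request.
html = html.replace(
    '<link rel="stylesheet" href="critical.css">',
    f"<style>{critical_css}</style>",
)
Path("index.built.html").write_text(html)
```

The rest of the CSS can then be loaded asynchronously so it does not block the first paint.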
Also, your number of requests could be reduced by using inline SVGs for your icons, making your page smaller and reducing network requests. This again helps with round-trip latency: up to 8 requests can be in flight at a time, so with 26 requests you have at least 5 round trips to the server (1 (HTML), 8, 8, 8, 1).
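To make that arithmetic concrete, here is a rough sketch of the lower bound (the 8-connection limit is the assumption from above):

```python
import math

def min_round_trips(total_requests: int, max_concurrent: int = 8) -> int:
    """Lower bound on round trips: 1 for the HTML, then the remaining
    requests in batches of up to max_concurrent parallel connections."""
    return 1 + math.ceil((total_requests - 1) / max_concurrent)

print(min_round_trips(26))  # 1 (html) + ceil(25 / 8) = 5
```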

The speed (in seconds, not scores) displayed in speed test results is strongly influenced by the test server's region. The closer the test server region is, the faster the measured loading time will be.
For example, here are speed test results for your page using GTmetrix with servers in Australia - Sydney, Canada - Vancouver, and your base location, Brazil - São Paulo:
Australia - Sydney (3.2s)
Canada - Vancouver (1.9s)
Brazil - São Paulo (0.8s)
So it can be concluded that the test server region used by Google Search Console is most likely far from your base location.
By the way, when I open your page from Indonesia, it only takes about 0.9-1.2 seconds. So, congratulations, your page is fast!

Related

What to understand by 'loaded by' in a performance result

Lately I have seen a high time to first byte for my sites.
Most of the time it is caused by JavaScript. A test at webpagetest.org usually shows something like:
URL: http://example.com/
Loaded by: http://example.com/some-kind-of-javascript.js
When I remove that JavaScript, another JavaScript file appears in its place.
What does 'loaded by' mean, exactly?
Check the example test result:
https://www.webpagetest.org/result/190729_JY_cb028989b0f44671fba830c9eaca29d7/1/details/#waterfall_view_step1
I'm not sure why you think JavaScript is the problem? It looks to me like it's the initial HTML that is causing it.
Time to first byte for a resource is the time taken from after sending the request (e.g. GET /) until the first byte comes back. It excludes the DNS lookup, TCP connection, and SSL handshake time, so it really is a measure of the time taken to start receiving that resource. Note that the "first byte" time at the top of the waterfall is the full end-to-end time, including DNS/TCP/SSL and any redirects, but the TTFB for each resource is split out in more detail.
I don't know how your home page is created; I would guess it's not a static page, so whatever is generating it (PHP?) is taking too long. Whether this is due to bad backend code, an under-resourced server, a slow database, or something else is impossible to say from the outside. I would suggest getting in touch with your hosting provider and/or reviewing your code and server.
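If you want to see that breakdown for yourself, pycurl exposes the same timing phases; here is a minimal sketch (the URL is a placeholder):

```python
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")
c.setopt(pycurl.WRITEDATA, buf)
c.perform()

dns = c.getinfo(pycurl.NAMELOOKUP_TIME)
tcp = c.getinfo(pycurl.CONNECT_TIME)        # includes DNS
tls = c.getinfo(pycurl.APPCONNECT_TIME)     # includes DNS + TCP
first_byte = c.getinfo(pycurl.STARTTRANSFER_TIME)
c.close()

# TTFB in the per-resource sense: time from the end of the handshake
# until the first byte of the response arrives
print(f"server think time: {(first_byte - tls) * 1000:.0f} ms")
```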

Chrome Lighthouse comma and full stop in performance option

I have a few questions regarding the report of lighthouse (see screenshot below)
The first is a culture thing: I assume the value 11.930 ms stands for 11 seconds and 930 ms. Is this the case?
The second is the delayed paint compared to the size. The third entry (7.22 KB) delays the paint by 3,101 ms, while the fourth entry delays the paint by only 1,226 ms, although that JavaScript file is more than three times the size (24.03 KB versus 7.22 KB). Does anybody know what might be the cause?
Screenshot of Lighthouse
This is an extract of a Lighthouse report. In the screenshot of google-chrome-lighthouse you can see that some metrics are written with a comma (11,222 ms) and others with a full stop (7.410 ms).
Thank you for discovering quite a bug! An issue has been filed in the Lighthouse GitHub repo.
To explain what's likely going on, it looks like this report was generated with the CLI (or at least a locale that is different from the one that it is being displayed in). Some numbers (such as the ones in the table) are converted to strings ahead of time while others are converted at display time in the browser. The browser numbers are respecting your OS/user-selected locale while the pre-stringified numbers are not.
To answer your questions...
Yes, the value it's reporting is 11930 milliseconds or 11 seconds and 930 milliseconds (11,930 ms en-US or 11.930 ms de-DE).
The delayed paint metric is reporting to you how many milliseconds after the load started the asset finished loading. There are multiple factors that influence this number including when the asset was discovered by the browser, queuing time, server response time, network variability, and payload size. The small script that delayed paint longer likely had a lower priority or was added to your page later than the larger script was and thus was pushed out farther.
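To see the first point concretely, here is a quick locale experiment (a sketch; the locale names must be installed on your system and vary by OS):

```python
import locale

# The same 11930 ms renders as "11,930" under en-US conventions
# and "11.930" under de-DE conventions.
for loc in ("en_US.UTF-8", "de_DE.UTF-8"):
    locale.setlocale(locale.LC_NUMERIC, loc)
    print(loc, locale.format_string("%d", 11930, grouping=True))
```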

Speaker Recognition and Response Time?

I know that Speaker Recognition is in preview and the only available location is the West Coast, and I'm hoping that's why I am seeing this 'delay'.
I'm on the East Coast (NY), and with just 3 speakers in my search it takes 6 seconds to return a confirmation. Don't get me wrong, 6 seconds is impressive for what it does, but that long a delay makes the use case more limited than a quicker reply would.
My main question is: should I see a quicker reply once the service adds a closer location? (It's not like the latency should cause a big issue...) Or is there anything else that may speed up replies? Or, of course, is this simply 'the way it's going to be'?
Thanks!
I assume you're talking about Microsoft Speaker Recognition.
The processing time is a function of the audio length. For a 15-second audio clip, you can expect less than 1 second of latency, and yes, in general you should see a quicker response when the service expands to closer locations.
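If you want to estimate how much of your 6 seconds is network versus processing, one crude sketch is to time the call and subtract a small-request baseline to the same host. The endpoint path and payload below are placeholders, not the real Speaker Recognition API contract:

```python
import time
import requests

HOST = "https://westus.api.cognitive.microsoft.com/"  # region under test

# Baseline: a tiny request roughly approximates pure network round trip
t0 = time.perf_counter()
requests.head(HOST, timeout=10)
network = time.perf_counter() - t0

# Timed identification call (placeholder path and body)
t0 = time.perf_counter()
requests.post(HOST + "identify", data=b"<audio bytes>", timeout=30)
total = time.perf_counter() - t0

print(f"~network: {network * 1000:.0f} ms, "
      f"~processing: {(total - network) * 1000:.0f} ms")
```

If the processing share dominates, a closer region will not help much; if the network share dominates, it will.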

Setting up Google Cloud Speech API to transcribe interviews

I've got over 100 hours of audio associated with video interviews for a documentary that need to be transcribed to text, hopefully with some kind of timecode marker every 30 seconds or so, so the video can easily be matched up to the text in the edit suite.
The files are BWAV 24-bit 96 kHz and WAV 16-bit 48 kHz, and they last anywhere from 20 minutes to 2 hours.
What kind of resources need to be set up in a VM to do this kind of activity? I suspect it will be rather compute-intensive, so the VM might need 32 cores and a fair amount of memory, but there is no need for real-time response, so it is fine if priorities are low and it takes several hours to process a file. My budget is minuscule: $300 is about the most we can afford for all the files (which is one reason we aren't sending them out to a transcription service at $75+/hour).
I've already got a Cloud Platform account but have never used it. There is no point in my floundering around if someone has already done something similar and can give me some help.
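For what it's worth, Cloud Speech-to-Text does the transcription on Google's side, so a large VM shouldn't be necessary. A minimal sketch with a recent google-cloud-speech Python client, assuming the audio has been uploaded to a Cloud Storage bucket (the bucket and file names are made up, and the 24-bit BWAVs would likely need converting to 16-bit LINEAR16 first):

```python
from google.cloud import speech

client = speech.SpeechClient()

# Long audio must be read from a Cloud Storage bucket (name assumed here)
audio = speech.RecognitionAudio(uri="gs://my-interviews/interview01.wav")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=48000,
    language_code="en-US",
    enable_word_time_offsets=True,  # per-word timestamps for the edit suite
)

operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=3600)

for result in response.results:
    for word in result.alternatives[0].words:
        print(f"{word.start_time.total_seconds():8.1f}s  {word.word}")
```

The per-word start times can then be thinned down to a marker every 30 seconds for the edit suite.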

Average internet delay

Just wondering: what is the average packet transmission delay between two hosts over the internet (ignoring packet loss and retransmission)?
Now, hang on a second before you write that it's too general and depends on too many factors (the location of the two hosts and the network load at a specific time, just to name a few); I'm aware of that.
Yet that's why I'm asking what the AVERAGE delay might be. There must be some record of that.
Maybe it's appropriate to ask for separate countrywide/continent-wide/intercontinental average values, too. Whatever makes sense.
However you ask it, this question is way too general. Ping times can give you a reasonable approximation, though. My average to a Google host:
round-trip (ms) min/avg/max/med = 20/23/37/21
Yahoo:
round-trip (ms) min/avg/max/med = 19/23/38/23
Baidu (China):
round-trip (ms) min/avg/max/med = 269/272/275/272
Pair (Pittsburgh):
round-trip (ms) min/avg/max/med = 63/66/73/67
Google and Y! are using content-distribution networks, so I am most likely hitting servers very nearby. Baidu is across the world from me. Pair is across the country. These are all from a relatively fast connection.
I'd expect a dial-up user to see figures that are approximately 100-200 ms higher (depending on network activity at the time). Similarly, my figures would increase significantly if my network were heavily loaded (it's not at the moment).
Does that help at all?
You may find the discussion of this stuff on this page interesting. The author argues that traffic travels at about half the speed of light (the speed of light being the best you can possibly do for traffic speed, assuming various scientists are right).
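That figure is easy to sanity-check against the ping times above (the distance below is a rough assumption):

```python
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def min_rtt_ms(distance_km: float, fraction_of_c: float = 0.5) -> float:
    """Theoretical round-trip time if traffic moves at a fraction of c."""
    return 2 * distance_km / (C_KM_PER_S * fraction_of_c) * 1000

# A ~16,000 km route (roughly US to China) at half light speed:
print(f"{min_rtt_ms(16_000):.0f} ms")  # ~213 ms, close to the ~272 ms to Baidu
```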
