YSlow: Incorrect number of HTTP requests? - performance-testing

A page I'm looking at optimising has around 83-87 HTTP requests as measured by Chrome dev tools and WebPageTest (the exact number is slightly variable depending on affiliate libraries).
However, the YSlow Chrome extension claims there are only 51 requests. Likewise, YSlow run from ShowSlow is showing 60 requests.
Setting the differences between the YSlow measurements aside, it does look like YSlow is counting HTTP requests incorrectly, which undermines my confidence in its recommendations and grade.
The page in question does load some components post-onload (which YSlow doesn't measure), but only around 10 components load after onload, which doesn't account for the 20-30 request discrepancy with the other tools.
Does anyone know why this might be happening, or have any suggestions on how to debug or diagnose it?

I took a look at the link you suggested (bally.co.uk) to compare YSlow with WebPageTest. YSlow reported 56 components and WebPageTest 76. Here's the breakdown:
Doc/HTML: YSlow 1, WPT 3 (diff: 2 zero-byte files)
JavaScript: YSlow 37, WPT 39 (diff: 2 zero-byte files)
CSS: YSlow 5, WPT 5
Images: YSlow 12, WPT 19 (diff: 7 1x1 beacon GIFs)
Favicon: YSlow 1, WPT 1
JSON: YSlow 0, WPT 7 (diff: 7 dynamically loaded)
Font: YSlow 0, WPT 2 (diff: 2 dynamically loaded)
My conclusion goes back to the link you provided to the YSlow FAQ. The differences all seem to be dynamic requests that are either 0-byte or very small (like the 1x1 gifs). I think it's due to the combined DOM and network sniffing approach that YSlow takes.
Also if I compare the total size loaded for the first view, they are very close to each other:
YSlow: 1,683 KB
WebPageTest: 1,711 KB
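If you want to dig into a gap like this yourself, one rough way (just a sketch, not part of either tool) is to export a HAR file from Chrome DevTools or WebPageTest and tally the requests by MIME type and size, then line that up against YSlow's component list. The filename page.har is an assumption:

import json
from collections import Counter

# Load a HAR export (e.g. Chrome DevTools "Save all as HAR with content").
with open("page.har") as f:  # "page.har" is an assumed filename
    entries = json.load(f)["log"]["entries"]

by_type = Counter()
zero_byte = 0
for e in entries:
    content = e["response"]["content"]
    by_type[content.get("mimeType", "unknown").split(";")[0]] += 1
    if content.get("size", 0) <= 0:
        zero_byte += 1

print("total requests:", len(entries))
print("zero-byte responses:", zero_byte)
for mime, count in by_type.most_common():
    print(mime, count)

Beacons, zero-byte redirects, and post-onload fetches show up clearly in a listing like this, which makes it easier to see which requests YSlow is ignoring.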

Related

PageSpeed inconsistent result against webpagetest.org

When testing my website performance with WebPageTest I get excellent results, with my pages being fully loaded in under 1s, taking around 0.6s.
Those tests are run from my user base's location (Brazil - São Paulo), so the results should be similar to what my users experience.
But when I check Google Search Console for the speed result, it shows around 1.4s, which is far from the results I get here.
What I am unsure about is:
Is it because the speed report in Google Search Console is still experimental?
Or is there something wrong with how I am running those tests?
The webpage I am testing is:
https://www.99contratos.com.br/contrato-locacao-residencial.php
And a result I get from WebPageTest can be seen by clicking the link below:
Results
I do appreciate all the help / tips / explanations.
Kind Regards
The data in search console is 'real world data' based on visitor experiences.
It is more accurate than synthetic tests.
What you need to look at is your breakdown, not just the speeds. If you have a small percentage in the "red" and "orange" (less than 20%), then you do not need to worry; those users probably have super cheap phones and/or a poor 3G connection. You cannot do much about that.
What you need to think about also is where people are accessing your site from. If they are all from abroad then you need a CDN as close to them as possible as latency will ruin your site load speed (so look at your visitor stats in Google Analytics).
Also look at what devices your users are accessing your site on; if they all have super cheap Android phones, then expect higher load times while they process the page (this is hard to determine).
Just to reassure you - your page scores 98 / 100 for me using Developer Tools Audit, which considering I am in the UK is plenty quick enough.
A couple of suggestions to improve the speed:
The main thing you haven't done is inline your critical CSS.
This means your 'above the fold' content can display as soon as the HTML has loaded. By not having to wait for the CSS to arrive in a separate request, this can really improve your FCP and FMP, especially for someone on a high-latency connection.
Also, your number of requests could be reduced by using inline SVGs for your icons, making your page smaller and reducing network requests. That again helps with round-trip latency: browsers typically complete up to 8 requests at a time per host, so with 26 requests you have at least 5 round trips to the server (1 for the HTML, then 8, 8, 8, 1).
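As a quick sanity check on that round-trip arithmetic, here is the same calculation as a tiny sketch (the 8-connection limit is an assumption; real browsers vary and HTTP/2 multiplexes requests over one connection):

import math

parallel = 8        # assumed per-host connection limit
total_requests = 26

# One round trip for the HTML itself, then the rest in waves of up to 8.
round_trips = 1 + math.ceil((total_requests - 1) / parallel)
print(round_trips)  # 5, i.e. 1 (html) + 8 + 8 + 8 + 1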
The speed (in seconds, not scores) displayed in speed test results is heavily influenced by the test server's region. The closer the test server region is to your server, the faster the reported loading time will be.
Here are example speed test results for your page from GTmetrix, using servers in Australia - Sydney, Canada - Vancouver, and your base location, Brazil - São Paulo:
Australia - Sydney (3.2s)
Canada - Vancouver (1.9s)
Brazil - São Paulo (0.8s)
So it is very likely that the test server region used by Google Search Console is far from your base location.
By the way, when I open your page from Indonesia, it only takes about 0.9-1.2 seconds. So, congratulations, your page is fast!

Chrome Lighthouse comma and full stop in performance option

I have a few questions regarding the report of lighthouse (see screenshot below)
The first is a culture thing: I assume the value 11.930 ms stands for 11 seconds and 930 ms. Is this the case?
The second is about Delayed Paint compared to size. The third entry (7.22 KB) delays the paint by 3,101 ms, while the fourth entry delays the paint by only 1,226 ms, although that JavaScript file is more than three times the size (24.03 KB versus 7.22 KB). Does anybody know what might be the cause?
Screenshot of Lighthouse
This is an extract from a Lighthouse report. In the screenshot you can see that some metrics are written with a comma (11,222 ms) and others with a full stop (7.410 ms).
Thank you for discovering quite a bug! An issue has been filed in the Lighthouse GitHub repo.
To explain what's likely going on: it looks like this report was generated with the CLI (or at least under a locale different from the one it is being displayed in). Some numbers (such as the ones in the table) are converted to strings ahead of time, while others are converted at display time in the browser. The browser-formatted numbers respect your OS/user-selected locale, while the pre-stringified numbers do not.
To answer your questions...
Yes, the value it's reporting is 11930 milliseconds or 11 seconds and 930 milliseconds (11,930 ms en-US or 11.930 ms de-DE).
The delayed paint metric is reporting to you how many milliseconds after the load started the asset finished loading. There are multiple factors that influence this number including when the asset was discovered by the browser, queuing time, server response time, network variability, and payload size. The small script that delayed paint longer likely had a lower priority or was added to your page later than the larger script was and thus was pushed out farther.
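To make the locale point concrete, here is a minimal sketch of the same number rendered under two locales (it uses the third-party Babel library purely as an analogy; it is not what Lighthouse itself does internally):

from babel.numbers import format_decimal  # pip install Babel

value_ms = 11930
print(format_decimal(value_ms, locale="en_US"))  # 11,930
print(format_decimal(value_ms, locale="de_DE"))  # 11.930

Both lines print the same duration; only the grouping separator changes with the locale, which is exactly the mixture you are seeing in the report.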

Watir Measure Page Performance

I've found this gem: http://watirwebdriver.com/page-performance/
But I can't seem to understand what this measures:
browser.performance.summary[:response_time]/1000
Does it start measuring from the second I open the browser?
Watir::Browser.new :chrome
or from the last watir-webdriver command written?
And how can i set when it starts the timer?
I've tried several scripts but I keep getting 0 seconds; that's why I'm not sure.
From what I have read (I have not actually used it on a project), the response_time is the time from starting navigation to the end of the page loading - see Tim's (the gem's author) answer to a previous question. The graphical image on Tim's blog helps to understand the different values - http://90kts.com/2011/04/19/watir-webdriver-performance-gem-released/.
The gem is for getting performance results of single response, rather than overall usage of a browser during a script. So there is no need to start/stop the timer.
If you are getting 0 seconds, it likely means that the response_time is less than 1000 milliseconds (i.e. integer division in Ruby: 999/1000 gives 0). To make sure you are getting something non-zero, try doing:
browser.performance.summary[:response_time]/1000.0
Dividing by 1000.0 will ensure that you get the decimal values (eg 0.013).

Browser limitation with maximum Page length

We manage HTML content from a data source and write it directly onto the web pages using ASP.NET C#.
Here is the problem we are facing:
On the page, the complete content is not displayed, but when we check the page source and copy/paste it into a static HTML page, all of the content is displayed.
Is there any browser limitation on the maximum length of a web page?
I googled and found claims that a web page should be limited to 10-30 KB, but in the same project we have pages up to 55 KB in length.
Can anyone help me out?
I've recently been benchmarking browser load times for very large text files. Here are some data:
IE --dies around 35 megs
Firefox --dies around 60 megs
Safari --dies around 60 megs
Chrome --dies around 50 megs
Again, this is simple browser load time of a basic (if large) English text file. One strange note is that Firefox seems to handle close to 60 megs before becoming non-responsive, but it only puts 55.1 megs out in the viewport. (However, I can Ctrl-A to get all 60 megs onto the clipboard.)
Naturally your mileage will vary, and this is all affected by network latency; you would probably see vast differences if you're talking about downloading pictures etc. This is just for a single very large file of English text.
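If you want to reproduce this kind of test yourself, here is a minimal sketch that generates a large plain-text file to open in a browser (the 50 MB target and the big.txt name are arbitrary choices):

# Write an approximately 50 MB plain-text file to load in a browser.
size_mb = 50                       # arbitrary target size
line = ("lorem ipsum " * 8).strip() + "\n"

with open("big.txt", "w") as f:    # arbitrary output filename
    written = 0
    target = size_mb * 1024 * 1024
    while written < target:
        f.write(line)
        written += len(line)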
The limits (if they exist at all) are higher than 50 KB:
$ wget --quiet "http://www.cnn.com" -O- | wc -c
99863
I don't believe there is any particular constant limit on page size. I would guess it depends instead on how much memory the web browser process can allocate.
Install Firefox and Firebug and try to examine any factors that could be affecting the source code. Unless you are doing something odd in the C#, it shouldn't be cut off.

How do I stress test a web form file upload?

I need to test a web form that takes a file upload.
The filesize in each upload will be about 10 MB.
I want to test if the server can handle over 100 simultaneous uploads, and still remain
responsive for the rest of the site.
Repeated form submissions from our office will be limited by our local DSL line.
The server is offsite with higher bandwidth.
Answers based on experience would be great, but any suggestions are welcome.
Use the ab (ApacheBench) command-line tool that is bundled with Apache (I have just discovered this great little tool). Unlike cURL or wget, ApacheBench was designed for performing stress tests on web servers (any type of web server!). It generates plenty of statistics too. The following command will send an HTTP POST request with the contents of test.jpg as the body to http://localhost/ 100 times, with up to 4 concurrent requests. Note that -p posts the raw file contents rather than a multipart form upload; add -T to set the Content-Type header if your form handler needs it.
ab -n 100 -c 4 -p test.jpg http://localhost/
It produces output like this:
Server Software:
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 0 bytes
Concurrency Level: 4
Time taken for tests: 0.78125 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Non-2xx responses: 100
Total transferred: 2600 bytes
HTML transferred: 0 bytes
Requests per second: 1280.00 [#/sec] (mean)
Time per request: 3.125 [ms] (mean)
Time per request: 0.781 [ms] (mean, across all concurrent requests)
Transfer rate: 25.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.6 0 15
Processing: 0 2 5.5 0 15
Waiting: 0 1 4.8 0 15
Total: 0 2 6.0 0 15
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 15
95% 15
98% 15
99% 15
100% 15 (longest request)
Automate Selenium RC using your favorite language. Start 100 threads of Selenium, each typing the path of the file into the input and clicking submit.
You could generate 100 sequentially named files to make looping over them easy, or just use the same file over and over again.
I would perhaps point you towards using cURL and submitting random data (e.g. read 10 MB out of /dev/urandom and encode it in base32) through a POST request, manually fabricating the body to be a file upload (it's not rocket science).
Fork that script 100 times, perhaps across a few servers. Just make sure the sysadmins don't think you are doing a DDoS, or something :)
Unfortunately, this answer remains a bit vague, but hopefully it helps by nudging you onto the right track.
Continued as per Liam's comment:
If the server receiving the uploads is not on the same LAN as the clients connecting to it, it would be better to use test nodes as remote as possible, if only to simulate behavior as authentically as possible. But if you don't have access to computers outside the local LAN, the local LAN is still better than nothing.
Stress testing from the same hardware would not be a good idea, as you would put a double load on the server: generating the random data, packing it, and sending it through the TCP/IP stack (although probably not over Ethernet), and only then can the server do its work. If the sending part is outsourced, you get double (take that with an arbitrarily sized grain of salt) the capacity at the receiving end.
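If you'd rather script this than fork cURL, here is a rough sketch of the same idea in Python (the requests library, the field name "file", and the URL are all assumptions you would need to adapt to your form):

# Fire N simultaneous multipart uploads and report per-request status and timing.
import os
import time
from concurrent.futures import ThreadPoolExecutor

import requests

UPLOAD_URL = "http://example.com/upload"  # placeholder, not your real endpoint
CONCURRENCY = 100
PAYLOAD = os.urandom(10 * 1024 * 1024)    # ~10 MB of random bytes, reused for every upload

def upload(i):
    start = time.time()
    r = requests.post(UPLOAD_URL, files={"file": ("test%d.bin" % i, PAYLOAD)})
    return r.status_code, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    for status, elapsed in pool.map(upload, range(CONCURRENCY)):
        print(status, "%.2fs" % elapsed)

Keep in mind the bandwidth caveat from the question: run this from machines with enough upstream capacity (ideally outside the server's LAN), otherwise your office DSL line becomes the bottleneck rather than the server.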
