Why do mobile devices access web pages slower than PCs? - frontend

I have a question about an iPhone and a PC: connected to the same WiFi and opening the same web page, the PC takes less than 1 s but the iPhone needs 3-5 s. I am currently developing a web app, and in view of this situation, how should I optimize it?

Slower network access (lower bandwidth + higher latency)
+ slower CPU
_____________
= slower web rendering
Solutions?
1) Minimize the number of connections (the number of objects on your page).
2) Minimize the total size.
3) Minimize client-side computations (rarely needed except for complex web apps).
The latency problem is important. Consider using CSS sprites to combine images. And of course, don't use big images when small ones are enough.
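As a rough illustration of the sprite idea (the file name img/icons-sprite.png and the 32px offsets are assumptions): one image holds several icons and CSS background positioning selects the one to show, so the browser makes a single request instead of one per icon.

<style>
  /* One combined image instead of many small requests;
     background-position picks the icon inside the sprite. */
  .icon { width: 32px; height: 32px; background-image: url("img/icons-sprite.png"); }
  .icon-search { background-position: 0 0; }
  .icon-cart { background-position: -32px 0; }
</style>
<span class="icon icon-search"></span>
<span class="icon icon-cart"></span>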
As mobile devices are more and more diverse, it's better not to focus on device detection and device-specific optimization, but to apply general website optimization (Google here would be your friend).

First, probably because the iPhone and the PC don't have the same performance.
Even if it's the same web page, the rendering engine isn't the same.
Optimising your web page with a stylesheet focused on mobile devices is a good start:
<link rel="stylesheet" href="assets/css/mobile.css" type="text/css" media="handheld" />
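Note that the handheld media type is ignored by most modern mobile browsers, so a width-based media query is usually the safer option today; a minimal sketch, where the 768px breakpoint and the file path are assumptions:

<link rel="stylesheet" href="assets/css/mobile.css" media="only screen and (max-width: 768px)">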

There can be a number of factors affecting performance:
CPU processing speed/power: your PC probably has a better CPU. This means it can execute more instructions more quickly.
web browser: the specific web browsers might be slower/faster at rendering the page. The browsers are probably using different JavaScript engines, and that will also affect performance.
memory: the amount and speed of memory in the two systems will affect performance.
etc.

Related

Drastically different Google PageSpeed Insights "Lab Data" speeds between Mobile and Desktop experiences?

When running the pages of this website through the Google PageSpeed Insights tool, I receive drastically different "Lab Data" (Time to Interactive, First Contentful Paint, Speed Index) speeds when comparing Mobile and Desktop. Desktop tends to receive values under 2 seconds, and as a result the PageSpeed Insights score is generally in the 80s or 90s on each page. The Mobile score, however, suggests the page load speed is much slower, upwards of 10 seconds. As you may guess, I cannot reproduce anything close to these loading times on mobile. The mobile and desktop experiences do not differ dramatically, the primary differences being styling via CSS media queries. I would love any help understanding why these values are so dramatically different!
Images for reference: desktop and mobile metrics (screenshots omitted).
PageSpeed Insights uses simulated CPU and connection throttling when calculating your mobile score, to mimic the conditions people may experience (no throttling is applied to the desktop score).
Not everyone has a flagship phone (far from it), so they slow the CPU speed of their server by a factor of 4 to simulate the slower CPU speeds of mid- and low-end phones.
Similarly, they also simulate a slow 4G connection to account for when people are out and about or have no WiFi connection. So they add additional latency and slow the upload and download speeds to reflect this.
This is why you see such big differences on your site score between mobile and desktop.
If you want to simulate a similar speed yourself you can open developer tools in Google Chrome -> Network -> Look for the drop down that says "online" and change it to "Fast 3G".
Now reload your page and you can see the effects of additional latency and slower download speeds on your waterfall.
According to my analysis, this is due to the images on this page. However, Google PageSpeed Insights is much more sensitive on mobile scores than on desktop scores, so a stark difference between mobile and desktop scores is normal for this tool.
Try compressing the images first (you can use tinypng.com or other online tools), then lazy-load the images.
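For the lazy-loading part, most current browsers support the native loading attribute; a minimal sketch, where the file name and dimensions are placeholders:

<!-- width/height reserve space so lazy-loaded images don't cause layout shifts -->
<img src="img/product-photo.jpg" loading="lazy" width="640" height="480" alt="Product photo">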

Why are Google Page Speed insight scores so different from GTMetrix, WebPageTest.org, Pingdom, etc?

Is this because it uses a slower connection to the website? I have read that it's a fast 3G connection? Is that used as well as field data?
I have websites that load in under 2 seconds but they fail the PSI tests.
Ryan - Google PageSpeed is a very robust tool. It is also stricter than other tools such as GTMetrix or Pingdom.
There are several factors that impact speed. Expect a variance of 5 to 7 points depending on the location of Google's servers relative to your server. If you are getting a larger variation, that could be your CDN rather than your server.
Double-check the results by running Google Lighthouse. You can find it under Chrome DevTools.
Late answer but hopefully this will help people understand the difference.
Short Answer
PageSpeed Insights (PSI) simulates a mid-tier mobile phone on a slow 4G connection. You will always score lower on PSI mobile tests, as the other sites do not use throttling.
The desktop tab of PSI should be similar, but it again uses different scoring metrics that the other tools do not appear to have adopted yet (at the time of writing).
Longer Answer
Is this because it uses a slower connection to the website? I have read that it's a fast 3G connection?
PageSpeed Insights (PSI) is powered by Lighthouse.
As part of this it uses simulated network throttling to simulate network latency and slower connection speeds (comparable to fast 3G / slow 4G).
It also simulates a slower CPU.
It does both of these to simulate a mid-tier mobile phone on a 4G connection. Mobiles have lower processing power and may be used "on the go" without WiFi.
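To see roughly what that simulation looks like, Lighthouse (which powers PSI) can be run programmatically with an explicit throttling block. This is only a sketch: it assumes Node with CommonJS-era versions of the lighthouse and chrome-launcher packages, the URL is a placeholder, and the throttling numbers approximate the documented mobile defaults.

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(
    'https://example.com',
    { port: chrome.port, onlyCategories: ['performance'] },
    {
      extends: 'lighthouse:default',
      settings: {
        // Roughly the "mid-tier phone on slow 4G" simulation described above.
        throttlingMethod: 'simulate',
        throttling: { rttMs: 150, throughputKbps: 1638.4, cpuSlowdownMultiplier: 4 },
      },
    }
  );
  console.log('Mobile performance score:', result.lhr.categories.performance.score);
  await chrome.kill();
})();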
GTMetrix, WebPageTest.org, Pingdom etc. all check the desktop version of the site.
This is the main reason you will see vastly different scores as they do not apply any form of throttling to the CPU or network speeds.
You should find that you get similar scores if you compare the desktop tab of the PSI report to them, as that is unthrottled.
Another difference (although I am not 100% sure) is that I think those sites are still using Lighthouse version 5 scoring at their core. Lighthouse changed to version 6 scoring earlier this year, to reflect the items that really matter to the end user. This is why I said "similar" scores in the previous paragraph.
Is that used as well as field data?
No, field data is real-world data, also known as RUM (Real User Metrics). It is collected from real visitors to your site.
It has no effect on your PSI score, as that is calculated each time from "lab data".
Field data is there for diagnostics, as RUM are far more reliable and help identify errors automated testing may miss, such as an overloaded server, problems at certain screen sizes, etc.
I have websites that load in under 2 seconds but they fail the PSI tests.
Are you sure? It may show 2 seconds on automated tests (for desktop) but in the real world how can you know that?
One way to check is to actually monitor this information on your site. This answer I gave has all the relevant metrics you may want to gather and monitor for site performance.
If you combine that information with screen size and device information you have everything you need to identify issues in near real time.
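As a starting point for collecting that information yourself, here is a minimal sketch using the PerformanceObserver API; the /rum endpoint is a placeholder and the entry types are just examples of metrics you might care about.

// Beacon a few paint/navigation metrics plus device context to a hypothetical /rum endpoint.
const send = (payload) => navigator.sendBeacon('/rum', JSON.stringify(payload));

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    send({
      metric: entry.entryType === 'paint' ? entry.name : entry.entryType,
      value: Math.round(entry.startTime + entry.duration),
      screen: `${window.screen.width}x${window.screen.height}`,
      ua: navigator.userAgent,
    });
  }
}).observe({ entryTypes: ['navigation', 'paint', 'largest-contentful-paint'] });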

Real Browser based load testing or Browser level user testing

I am currently working with multiple load testing tools such as JMeter, LoadRunner and Gatling.
All of the above tools work at the protocol level, except for the TruClient protocol offered by LoadRunner. Now something like real browser testing is in place, which is definitely high on resource consumption; tools such as LoadNinja and Flood.IO work on this novel concept.
I have a few queries in this regard:
What will be the scenario where real browser based load testing fits perfectly?
What real browser testing offers which is not possible in protocol based load testing?
I know we can use JMeter to mimic browser behaviour for load testing, but is there anything different that real browser testing has to offer?
....this novel concept.....
You're showing your age a bit here. Full client testing was state of the art in 1996, before companies shifted en masse to protocol-based testing because it's more efficient in terms of resources. (Mercury, HP, Micro Focus) LoadRunner, (Segue, Borland, Micro Focus) Silk, and (Rational, IBM) Robot have retained the ability to use full GUI virtual users (running full clients with functional automation tools) since that time. TruClient is a more recent addition which runs a full client but simply does not write the output to the screen, so you get 99% of the benefits and the measurements.
What is the benefit? Well, historically, two-tier client-server clients were thick, with lots of application processing going on. So having GUI virtual users in small quantities, combined with protocol virtual users, allowed you to measure the cost/weight of the client. The flows to the server might take two seconds, but with the transform and presentation in the client it might take an additional 10 seconds. You then know where the bottleneck is/was in the user experience.
Well, welcome to the days of future past. The web, once super thin as a presentation layer, has become just as thick as the classical two-tier client-server applications. I might argue thicker, as the modern browser interpreting JavaScript is more of a resource hog than the two-tier compiled apps of years past. It is simply universally available and based upon a common client-server protocol: HTTP.
Now that the web is thick, there is value in understanding the delta between arrival and presentation. You can observe much of this data in the Performance tab of Chrome. We also have great W3C in-browser metrics which can provide insight into the cost/weight of the local code execution.
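That delta between arrival and presentation can be read directly from the W3C Navigation Timing API; a minimal sketch to run in the browser console (or inject from a test):

const [nav] = performance.getEntriesByType('navigation');
console.log({
  // time until the last byte of the HTML arrived (network + server cost)
  arrivalMs: Math.round(nav.responseEnd - nav.startTime),
  // time the browser then spent parsing, executing JS, laying out and painting
  presentationMs: Math.round(nav.loadEventEnd - nav.responseEnd),
});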
Shifting the logic to the client has also created a challenge in reproducing the logic and flow of the JavaScript frameworks when producing the protocol-level dataflows back and forth. Here's where the old client-server interfaces have a distinct advantage: the protocols were highly structured in terms of data representation. So, even with a complex thick client, it was easy to represent and modify the dataflows at the protocol level (think of a database as an example: rows, columns...). HTML/HTTP is very much unstructured. Your developer can send and receive virtually anything, as long as the carrier is HTTP, and you can transform it to be used in JavaScript.
To make script creation easier and more time-efficient with complex JavaScript frameworks, the GUI virtual user has come back into vogue. Instead of running a full functional testing tool driving a browser, where we can have one browser and one copy of the test tool per OS instance, we now have something that scales a bit more efficiently, TruClient, where multiple instances can be run per OS instance. There is no getting around the high resource cost of the underlying browser instance, however.
Let me try to answer your questions below:
What will be the scenario where real browser based load testing fits perfectly?
What real browser testing offers which is not possible in protocol based load testing?
Some companies do real browser-based load testing. However, as you rightly concluded, it is extremely costly to simulate such scenarios. Fintech companies mostly do this when the load is fairly small (say 100 users) and the application they want to test is extremely critical; such applications often cannot be tested using standard API load tests, as they are mostly legacy applications.
I know we can use JMeter to mimic browser behaviour for load testing, but is there anything different that real browser testing has to offer?
Yes, real browsers execute JavaScript. If the implementation on the front end (the website) is poor, you cannot catch those issues using service-level load tests. It makes sense to do browser-level load testing if you want to see how the JS written by the developers, or other client-side logic, affects page load times.
It is important to understand that performance testing is not limited to APIs alone but the entire user experience as well.
Hope this helps.
There are 2 types of test you need to consider:
Backend performance test: simulating X real users concurrently accessing the web application. The goal is to determine the relationship between the increasing number of virtual users and response time/throughput (number of requests per second), identify the saturation point, the first bottleneck, etc.
Frontend performance test: protocol-based load testing tools don't actually render the page, so even if the response from the server arrives quickly, a bug in client-side JavaScript may still make rendering take a long time. Therefore you might want to use a real browser (1-2 instances) in order to collect browser performance metrics.
A well-behaved performance test should check both scenarios: it's better to conduct the main load using protocol-based tools and, at the same time, access the application with a real browser in order to perform client-side measurements.
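A sketch of the real-browser half of that approach, assuming Node with the selenium-webdriver package and a matching chromedriver on the PATH (the URL is a placeholder): drive one browser alongside the protocol-level load and pull its timings.

const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    // Visit the application while the protocol-based load test is running.
    await driver.get('https://example.com');
    const nav = await driver.executeScript(
      'return performance.getEntriesByType("navigation")[0].toJSON()'
    );
    console.log('Client-side load time (ms):', Math.round(nav.loadEventEnd));
  } finally {
    await driver.quit();
  }
})();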

By hosting some assets on a server and others on a CDN, will this allow a browser to negotiate more connections?

Short question
I'm trying to speed up a static website (HTML, CSS, JS). The site also has many images that weigh a lot, and during testing it seems the browser is struggling to manage all these connections.
Apart from the normal compression, I'm wondering: if I host my site files (HTML, CSS, JS) on the VPS and move all the images onto a CDN, would this allow the browser to negotiate with two servers and be quicker overall? (Obviously there is going to be no Mbps speed gain, as that is limited by the user's connection, but I'm wondering if this would allow me to have more open connections, and thus a quicker TTFB.)
Long question with background
I've got a site that I'm working on speeding up; the site itself is all client-side HTML, JS and CSS. The speed issues on this site tend to be a) the size of the files and b) the quantity of the files, i.e. there are lots of images, and they each weigh a lot.
I'm going to do all the usual stuff: combine the images used for the UI into a sprite sheet and compress all images using JPEGmini, etc.
I've moved the site to a VPS, which has made a big difference. The next move I'm pondering is setting up a CDN to host the images. I'm not trying to host them for any geographical distribution or load balancing reason (although that is an added benefit), but I was wondering whether the bottleneck in downloading assets would be smaller if the user's browser got all the site files (HTML, JS, CSS) from the VPS while at the same time getting the images from the CDN. Is that where the bottleneck is, i.e. the user's browser can only have so many connections to one server at a time, but with two servers it could negotiate the same number of connections to both servers concurrently?
I'm guessing there could also be an issue with load on the server, but for testing I'm using a 2 GB multi-core VPS which no one else is on, so that shouldn't be a problem that comes up during my tests.
Context
Traditionally, web browsers place a limit on the number of simultaneous connections a browser can make to one domain. These limits were established in 1999 in the HTTP/1.1 specification by the Internet Engineering Task Force (IETF). The intent of the limit was to avoid overloading web servers and to reduce internet congestion. The commonly used limit is no more than two simultaneous connections with any server or proxy.
Solution to that limitation: Domain Sharding
Domain sharding is a technique to accelerate page load times by tricking browsers into opening more simultaneous connections than are normally allowed.
See this article for an example of using multiple subdomains in order to multiply parallel connections to the CDN.
So instead of:
http://cdn.domain.com/img/brown_sheep.jpg
http://cdn.domain.com/img/green_sheep.jpg
The browser can use parallel connections by using a subdomain:
http://cdn.domain.com/img/brown_sheep.jpg
http://cdn1.domain.com/img/green_sheep.jpg
Present: beware of Sharding
You might consider the downsides of using domain sharding, because it isn't necessary and can even hurt performance under SPDY. If the browsers you're targeting mostly support SPDY (it is supported by Chrome, Firefox, Opera, and IE 11), you might want to skip domain sharding.

How important is it to measure the page render/load time in a web application? [closed]

When we test the performance of a web application, what do people generally concentrate on? Is it the HTTP response time? Or is it the time the page takes to load/render completely in the client browser after it receives the response from the server?
What is generally measured across the industry? Do you have any recommendations in terms of which should be done when?
Do you have any tool recommendations for the same?
Can I use Visual Studio Web Tests to measure performance in terms of web page load/render time after the client receives the response, or is it just the HTTP response time?
In three words: performance really matters!
My golden rule is pretty simple: you have to measure everything and optimize everything. It's not only a pure tech challenge, but also concerns your business team. Here are some classic examples from Velocity Conf.
Bing – A page that was 2 seconds slower resulted in a 4.3% drop in revenue/user.
Google – A 400 millisecond delay caused a 0.59% drop in searches/user.
Yahoo! – A 400 milliseconds slowdown resulted in a 5-9% drop in full-page traffic.
Shopzilla – Speeding up their site by 5 seconds increased the conversion rate 7-12%, doubled the number of sessions from search engine marketing, and cut the number of required servers in half.
Mozilla – Shaving 2.2 seconds off their landing pages increased download conversions by 15.4%, which they estimate will result in 60 million more Firefox downloads per year.
Netflix – Adopting a single optimization, gzip compression, resulted in a 13-25% speedup and cut their outbound network traffic by 50%.
What is generally measured across the industry? Do you have any recommendations in terms of which should be done when?
From Steve Souders, a pioneer in web performance optimization: "80-90% of the end-user response time is spent on the frontend." Start here first: too many requests, non-optimized images, un-minified content (JS/CSS), and not distributing static assets through a CDN are common errors.
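For the compression/static-delivery part of that list, here is a minimal sketch; it assumes a Node server with the express and compression packages (the directory name and port are placeholders), but the same ideas (gzip plus long cache lifetimes) apply to any stack.

const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());                               // gzip responses when the client accepts it
app.use(express.static('public', { maxAge: '30d' })); // long cache lifetime for static assets
app.listen(3000);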
On the other hand, do not forget your backend, because this part really depends on load and activity. Some sites pay the largest performance tax due to backend issues. As page generation time increases proportionally to user load, you have to find the throughput peak of your app and check that it meets your own SLA.
Do you have any tool recommendations for the same?
There is no magic tool that covers all topics, but many great tools that will help for a specific part of your app.
Page Rendering : Google Chrome SpeedTracer or IE 11 UI Responsiveness tool
FrontEnd : PageSpeed, YSlow, WebPageTest.org (online), GtMetrix(online), Pingdom (online)
Backend : asp.net Mini-Profiler, Glimpse, Visual Studio Profiler & Visual Studio Web/Load Tests
Google Analytics for RUM (Real User Monitoring)
Can I use Visual Studio Web Tests to measure performance in terms of web page load/render time after the client receives the response, or is it just the HTTP response time?
No, Visual Studio Web & Load Tests focus only on HTTP requests. JavaScript is not executed and virtual users are not real browsers: it's impossible to measure page load/render time. In my company, we use it only for integration tests and load testing.
If you want to read more, you can look at this post (disclaimer: I am the author).
Another interesting link is from Jeff Atwood (co-founder of Stack Overflow): Performance is a Feature.
Performance is a vast topic, and I have covered only a small part here, but you have a good starting point.

Resources