First Contentful Paint measured at almost 6 s although content is visible in the second screenshot - pagespeed-insights

I did everything I could to optimize my WordPress site and got rid of most of the recommendations from PageSpeed Insights. I use the WP Rocket caching plugin, Optimole image optimization and the Cloudflare CDN.
The Google PageSpeed Insights results got somewhat better, but they are still far from good, especially on mobile, even though all of the recommendations that were there in the beginning and that I could address (without custom coding and without breaking the site) are now gone.
One thing strikes me as odd about the PageSpeed Insights results: First Contentful Paint is measured at somewhere between 5 and 6 seconds, although the filmstrip screenshots of the page that Google presents clearly show a contentful paint in the second frame already. See image 1.
Any ideas on this?

The only remaining points in your suggestions are 1. Remove unused CSS and 2. Defer non-critical resources (I think, because the text is in German).
Point 2 affects time to first paint the most.

The screenshots you see in PSI are not in real time.
There is also a slight discrepancy (a bug?) between the screenshots and the actual performance figures, because PSI uses a simulated slowdown of the page rather than an applied one: it loads the page at full speed and then adjusts the figures to account for the bandwidth and the round-trip time to the server that a slower, higher-latency connection would cause.
If you run a Lighthouse audit (Google Chrome -> F12 -> Audits -> Run audits) with throttling set to 'applied' rather than 'simulated', you will see it is about 5 seconds before a meaningful paint.
Lighthouse is the engine that powers PageSpeed Insights now, so the same information should be available in both.
With regards to your site speed, you have a load of blank SVGs being loaded for some reason, and your CSS and JS files need combining (do it manually; plugins don't tend to do a good job here) to reduce the number of requests your site makes. The browser can only make about 8 requests at a time, and on 4G the round-trip latency to your server makes these add up quickly: e.g. 40 files = 5 batches of 8 requests = 5 round trips, which at 100 ms latency is 500 ms of dead time spent waiting for responses.

Related

NavTreeBuilder in preview mode

I was wondering whether the Crafter engine in preview mode changes how NavTreeBuilder behaves.
I have observed that the exact same call to navTreeBuilder.getNavTree(url, 2, ...) takes over 5 s to respond in preview, whereas it takes less than a second on regular Crafter delivery nodes.
This has been observed in all the environments we manage, with exactly the same speed behavior. To be precise, this is Crafter 2.5.
Any suggestions?
Thanks,
Nicolas
Preview and Engine run exactly the same code. The only difference is that Preview does not cache descriptors.
You can prove that caching is what improves the performance by trying the same call in a (non-production) delivery environment immediately after a restart, before the cache has warmed up.
If that proves out then the question is:
* How large is the tree you are walking (breadth and depth [looks like depth 2])
* What filters are you applying?
5 s is an unusually long time. I expect the culprit is either an enormous number of objects, complex filters, or some complicating environmental factor.

Synthetic performance A/B test

I have deployed two versions of our single-page web app: master (A) and a branch with some changes that might somehow affect load time (B). The change is usually a new front-end feature, a refactoring, a small performance optimization, etc. The difference is not big, and the load time varies much more for other reasons (load on the testing machines, load on the servers, the network, etc.). So webpagetest.org, even with 9 runs, varies far more (14-20 s Speed Index) than the real difference could plausibly be (for example, 0.5 s on average).
Basically, I need one number that tells me: this feature increases/decreases load time by this much.
Is there a tool that could measure such differences?
My idea was to deploy WebPageTest on a server with minimal load and run it against both versions in random order at the same time, so that most of the noise cancels out, then take a lot of samples (1000+) and compare the average (or median) values; a sketch of the idea follows below.
But before I start working on that, I would like to ask whether there is a service that already solves this problem.
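A minimal sketch of that interleaving idea, assuming a hypothetical MeasureSpeedIndex helper that wraps a run against a private WebPageTest instance (the URLs are placeholders too):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class AbSampler
{
    // Hypothetical stand-in for one synthetic measurement, e.g. a run against
    // a private WebPageTest instance, returning Speed Index in milliseconds.
    static double MeasureSpeedIndex(string url)
    {
        throw new NotImplementedException("call your WebPageTest instance here");
    }

    static double Median(List<double> xs)
    {
        var s = xs.OrderBy(x => x).ToList();
        int n = s.Count;
        return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
    }

    static void Main()
    {
        var rng = new Random();
        var a = new List<double>();
        var b = new List<double>();

        // Interleave the runs in random order within each pair so that noise
        // (machine load, network weather, time of day) hits both variants equally.
        for (int i = 0; i < 1000; i++)
        {
            bool aFirst = rng.Next(2) == 0;
            if (aFirst) a.Add(MeasureSpeedIndex("https://a.example.com"));
            b.Add(MeasureSpeedIndex("https://b.example.com"));
            if (!aFirst) a.Add(MeasureSpeedIndex("https://a.example.com"));
        }

        Console.WriteLine("A: {0:F0} ms  B: {1:F0} ms  diff: {2:F0} ms",
            Median(a), Median(b), Median(b) - Median(a));
    }
}
```

Pairing and randomizing the runs means environmental noise hits both variants roughly equally, and comparing medians instead of means keeps single outlier runs from dominating the result.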

Heap Generation 2 and Large Object Heap climbs

I am not sure if I am posting to the right Stack Overflow forum, but here goes.
I have a C# desktop app. It receives images from 4 analogue cameras, tries to detect motion, and saves the frame if motion is detected.
When I leave the app running, say over a 24-hour cycle, I notice in Task Manager that the private working set climbs by almost 500%.
Now, I know using Task Manager is not a reliable way to measure memory, but it does give me an indication that something is wrong.
To that end I purchased the dotMemory profiler from JetBrains.
I have used its tools to determine that Heap Generation 2 grows a lot in size, and, to a lesser degree, the Large Object Heap does as well.
The latter is a surprise, as the image size is 360x238 and the byte array size is always less than 20 K - well below the 85,000-byte threshold at which .NET allocates objects on the LOH.
So, my issues are:
Should I explicitly call GC.Collect(2) for instance?
Should I be worried that my app is somehow responsible for this?
Andrew, my recommendation is to take a memory snapshot in dotMemory, then explore it to find out what retains most of the memory. This video will help you. If you are not sure about GC.Collect, you can just press the "Force GC" button; it will collect all available garbage in your app.
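If you want to run the same experiment from code, here is a minimal diagnostic sketch using only standard .NET GC APIs. If the total drops sharply after the forced collection, the gen-2 growth was collectable garbage; if it barely moves, something (static lists, event handlers, ...) is still rooting those objects:

```csharp
using System;

class GcProbe
{
    static void Main()
    {
        // Baseline without forcing a collection.
        Console.WriteLine("Before: {0:N0} bytes, gen-2 collections so far: {1}",
            GC.GetTotalMemory(false), GC.CollectionCount(2));

        // Force a full, blocking gen-2 collection (which also sweeps the LOH),
        // then let finalizers run and collect anything they released.
        GC.Collect(2, GCCollectionMode.Forced, true);
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Console.WriteLine("After:  {0:N0} bytes, gen-2 collections so far: {1}",
            GC.GetTotalMemory(true), GC.CollectionCount(2));
    }
}
```

As a general rule, calling GC.Collect in production code is discouraged; a forced collection like this is mainly a diagnostic to separate collectable garbage from rooted objects.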

Why hassle with requireJS, when caching performs so well?

If you look at cached JS files being reloaded, you can see in the network panel that it takes literally NO time at all to reload them.
Why hassle with requireJS when you can basically load 3 MB of JS out of main memory in less than a microsecond?
Why bother?
On the one hand, it's all about lazy loading of modules that aren't used frequently. On the other hand, it's always good practice to bundle AMD modules into packages for production, at least until HTTP/2 is widely adopted.
Looking at jQuery parse times, which typically run 10-100 ms depending on the device, I would conclude that when there really are many libraries, it is worth the hassle: the cache makes the network cost disappear, but the scripts still have to be parsed and executed on every page load.
This need will probably diminish in the future, though (top-end devices like the iPhone 5 are already down to about 10 ms of parse time).

MonoTouch Memory Use High

I have MonoDevelop 2.8 on top of MonoTouch 5 against the Xcode 4.2 SDK. I have been having memory issues with my iPhone app. I have been struggling to identify the cause, so I created a test app with a master-detail view. I made a minor modification to the root controller to have it show 5 root items instead of the default 1. Each click of a root item pushes a new DetailViewController onto the navigation controller.
controller.NavigationController.PushViewController (DetailViewController, true);
In my detail view controller I've added logic that simply takes an input governing the number of times a loop runs, and a button that triggers the loop and makes a call to a REST-based service. Very minimal code changes from the default.
Just running the example and looking at it in Instruments, I seem to be up to 1.2 MB of live bytes. I then launch the detail view by touching items in the root view controller and I get up over 2 MB. Rotating the display or triggering the keyboard to open gets memory up near 3 MB. I navigate back in the controller, open a different view from the root view controller, and I can see the memory grow some more. Just moving in and out of views, without even triggering my custom code, I can get the memory use in Instruments over 3 MB. I've seen my app receive memory warnings when it was over 3 MB before. My test detail view is very basic, with a text box, a label, and a button that all have outlets on them. I was under the impression I don't need to do anything special to have them cleaned up. However, I don't see live bytes drop in Instruments.
As an additional test, I added a Done button. When the Done button is pressed I call RemoveFromSuperview() on each outlet, then Dispose(), and then set it to null; a sketch of this follows below. I see the live bytes drop. But that doesn't do anything for me if the back navigation is used instead.
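For illustration, a minimal sketch of that cleanup, moved into ViewDidDisappear so that it also runs on back navigation (the outlet names are hypothetical, and the IsMovingFromParentViewController guard needs iOS 5+):

```csharp
using MonoTouch.UIKit;

public partial class DetailViewController : UIViewController
{
    // Hypothetical outlets; in a real project these come from the designer file.
    UITextField inputTextField;
    UILabel resultLabel;
    UIButton runButton;

    public override void ViewDidDisappear (bool animated)
    {
        base.ViewDidDisappear (animated);

        // Only clean up when this controller is actually being popped off the
        // navigation stack, not merely covered by another view.
        if (!IsMovingFromParentViewController)
            return;

        ReleaseOutlet (ref inputTextField);
        ReleaseOutlet (ref resultLabel);
        ReleaseOutlet (ref runButton);
    }

    // Dispose() releases the managed wrapper's reference to the native object
    // right away instead of waiting for the GC and the finalizer queue.
    static void ReleaseOutlet<T> (ref T view) where T : UIView
    {
        if (view == null)
            return;
        view.RemoveFromSuperview ();
        view.Dispose ();
        view = null;
    }
}
```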
I'm curious whether anyone can verify my expectation of seeing memory drop. I'm not sure if using Instruments to look at live bytes is even valid. I'd like to determine whether my testing is valid, and whether there are tips for reducing the memory footprint. Any links to best practices on reducing the memory footprint are appreciated, as I seem to be able to get the memory to climb, and my app to start getting memory warnings, just by navigating around between screens.
It's hard to comment without seeing the code for the test app. Is there any way you could submit a bug report to http://bugzilla.xamarin.com and attach your test project?
There's a developer working hard to add additional smarts to the GC for MonoTouch 5.2 who I'm sure would love to have more test cases.
I would also be very interested in looking over your test case.
