NavTreeBuilder in preview mode - crafter-cms

I was wondering whether Crafter Engine in preview mode changes how NavTreeBuilder behaves.
I have observed that the exact same call to navTreeBuilder.getNavTree(url, 2, ...) takes more than 5 s to respond in preview, whereas it takes less than a second on regular Crafter delivery nodes.
We have observed this in all the environments we manage, with exactly the same speed behavior. To be precise, this is Crafter 2.5.
Any suggestions?
Thanks,
Nicolas

Preview and Engine run exactly the same code. The only difference is that Preview does not cache descriptors.
You can prove that caching is what improves the performance by trying the same call in a (non-production) delivery environment immediately after a restart (see the timing sketch below).
If that proves out, then the questions are:
* How large is the tree you are walking (breadth and depth [looks like depth 2])?
* What filters are you applying?
5 s is an unusually long time. I would expect either an enormous number of objects, complex filters, or some complicating environmental factor to be the culprit.
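As a rough way to quantify it, a hypothetical timing wrapper in a Crafter Groovy script could look like this (a sketch only: it assumes navTreeBuilder, url and logger are already in the script scope, and currentPageUrl is a placeholder for whatever remaining arguments your real getNavTree call uses):

    // Time the same navTreeBuilder call the question describes.
    def start = System.currentTimeMillis()
    def tree = navTreeBuilder.getNavTree(url, 2, currentPageUrl)  // placeholder args
    def elapsed = System.currentTimeMillis() - start
    logger.info("getNavTree took ${elapsed} ms for ${url}")

Running it once right after a restart and again on a warm cache, in both preview and delivery, should show how much of the 5 s comes from descriptor loading rather than from walking the tree itself.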

Related

Why should I not use incremental builds for release binaries?

I noticed that as my project grows, the release compilation/build time increases at a faster pace than I expected (and hoped for). I decided to look into what I could do to improve compilation speed. I am not talking about the initial build time, which involves compiling dependencies and is largely irrelevant here.
One thing that seems to help significantly is the incremental = true profile setting. On my project, it seems to shorten build time by ~40% on 4+ cores. With fewer cores the gains are even larger, as builds with incremental = true don't seem to use (much) parallelization. With the default for --release, incremental = false, build times are 3-4 times slower on a single core compared to 4+ cores.
What are the reasons to refrain from using incremental = true for production builds? I don't see any (significant) increase in binary size or in the storage size of cached objects. I read somewhere that incremental builds can lead to slightly worse performance of the built binary. Is that the only reason to consider, or are there others, like stability, etc.?
I know this could vary, but is there any data available on how much of a performance impact might be expected on real-world applications?
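For reference, this is roughly the profile section I am talking about (a sketch, assuming a standard Cargo project):

    # Cargo.toml: enable incremental compilation for release builds
    # (the release profile defaults to incremental = false)
    [profile.release]
    incremental = true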
Don't use an incremental build for production releases, because it is:
* not reproducible (i.e. you can't get the exact same binary by compiling it again), and
* quite possibly subtly broken (incremental compilation is way more complex and way less tested than clean compilation, in particular with optimizations turned on).

First Contentful Paint measured at almost 6 s although content is visible in the second screenshot

I did everything I could to optimize my WordPress site and got rid of most of the recommendations from PageSpeed Insights. I use the WP Rocket caching plugin, Optimole image optimization and the Cloudflare CDN.
The Google PageSpeed Insights score got somewhat better, but the results, especially on mobile, are still far from good, even though all of the recommendations that were there in the beginning and that I could address (without custom coding and without breaking the site) are now gone.
There is one thing that strikes me as odd about the PageSpeed Insights results: First Contentful Paint is measured at somewhere between 5 and 6 seconds, although the screenshots of the page that Google presents clearly show that there is contentful paint in the second frame already. See image 1.
Any ideas on this?
The only remaining points from your suggestions are 1. Remove unused CSS and 2. Defer non-critical resources (I think, because the text is in German).
Point 2 affects time to first paint the most.
The screenshots you see in PSI are not in real time.
Also, there is a slight discrepancy (a bug?) between the screenshots and actual performance, because PSI uses a simulated slow-down of the page rather than an applied slow-down (it loads the page at full speed and then adjusts the figures to account for the bandwidth and the round-trip time to the server caused by higher latency).
If you run a Lighthouse audit (Google Chrome -> F12 -> Audits -> Run audits) with throttling set to 'applied' rather than 'simulated', you will see that it is about 5 seconds before a meaningful paint.
Lighthouse is the engine that powers PageSpeed Insights now, so the same information should be in both.
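If you prefer the command line, something like this should be roughly equivalent (a sketch: example.com stands in for your site, and the exact flags depend on your Lighthouse version); the 'devtools' throttling method applies a real slow-down instead of simulating one:

    # Run Lighthouse with applied (DevTools) throttling and open the report
    npx lighthouse https://example.com --throttling-method=devtools --view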
With regard to your site speed, you have a load of blank SVGs being loaded for some reason, and your CSS and JS files need combining (do it manually; plugins don't tend to do a good job here) to reduce the number of requests your site makes. The browser can only make 8 requests at a time, and on 4G the round-trip latency to your server means these add up quickly, e.g. 40 files = 5 rounds of 8 requests at 100 ms latency = 500 ms of dead time waiting for responses.

About managing file system space

We have space issues in a filesystem on Linux; let's call it FILESYSTEM1.
Normally, space in FILESYSTEM1 is only about 40-50% used. Then clients run some reports or queries, these reports produce massive files, about 4-5 GB in size, and this instantly fills up FILESYSTEM1.
We have some cleanup scripts in place, but they never catch this because it happens in a matter of minutes, while the cleanup scripts only clean data that is more than 5-7 days old. Another set of scripts is also in place to report when free space in a filesystem drops below a certain threshold.
We thought of a possible solution to detect and act on this proactively:
* Increase the FILESYSTEM1 filesystem to double its size.
* Set the threshold in the alert scripts for this filesystem to alert when it is 50% full.
This would hopefully give us enough time to catch the problem and act before the client reports issues due to FILESYSTEM1 being full. Even though this solution works, it does not seem to be the best way to deal with the situation.
Any suggestions / comments / solutions are welcome.
Thanks
It sounds like what you've found is that simple threshold-based monitoring doesn't work well for the usage patterns you're dealing with. I'd suggest something that pairs high-frequency sampling (say, once a minute) with a monitoring tool that can do some kind of regression on your data to predict when space will run out.
In addition to knowing when you've already run out of space, you also need to know whether you're about to run out of space. Several tools can do this, or you can write your own. One existing tool is Zabbix, which has predictive trigger functions that can be used to alert when file system usage seems likely to cross a threshold within a certain period of time. This may be useful in reacting to rapid changes that, left unchecked, would fill the file system.
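As a very rough sketch of the "sample frequently and extrapolate" idea (this is not the Zabbix setup mentioned above; the mount point, window and alert threshold are made up for illustration):

    #!/usr/bin/env python3
    """Sample free space once a minute and linearly extrapolate when the
    filesystem will be full. Illustration only; tune the constants."""
    import os
    import time

    MOUNT = "/filesystem1"    # hypothetical mount point for FILESYSTEM1
    SAMPLE_SECONDS = 60       # high-frequency sampling: once a minute
    WINDOW = 10               # number of recent samples used for the trend
    ALERT_SECONDS = 30 * 60   # warn if projected to fill within 30 minutes

    def bytes_free(path):
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize

    samples = []              # (timestamp, bytes_free) pairs
    while True:
        samples.append((time.time(), bytes_free(MOUNT)))
        samples = samples[-WINDOW:]
        if len(samples) >= 2:
            (t0, f0), (t1, f1) = samples[0], samples[-1]
            rate = (f1 - f0) / (t1 - t0)      # bytes/second; negative = filling up
            if rate < 0 and f1 / -rate < ALERT_SECONDS:
                print(f"WARNING: {MOUNT} projected to be full in "
                      f"{f1 / -rate / 60:.1f} minutes")
        time.sleep(SAMPLE_SECONDS)

In practice you would feed the warning into whatever alerting you already have rather than just printing it.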

Heap Generation 2 and Large Object Heap climbs

I am not sure if I am posting to the right Stack Overflow forum, but here goes.
I have a C# desktop app. It receives images from 4 analogue cameras, tries to detect motion, and saves the image if motion is detected.
When I leave the app running, say over a 24-hour cycle, I notice that the private working set in Task Manager has climbed by almost 500%.
Now, I know using Task Manager is not a good idea, but it does give me an indication that something is wrong.
To that end I purchased the dotMemory profiler from JetBrains.
I have used its tools to determine that the Generation 2 heap increases a lot in size, and, to a lesser degree, the Large Object Heap does as well.
The latter is a surprise, as the image size is 360x238 and the byte array size is always less than 20 KB.
So, my questions are:
* Should I explicitly call GC.Collect(2), for instance?
* Should I be worried that my app is somehow responsible for this?
Andrew, my recommendation is to take a memory snapshot in dotMemory, then explore it to find what retains most of the memory. This video will help you. If you are not sure about GC.Collect, you can just press the "Force GC" button; it will collect all available garbage in your app.
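If you want to check whether the Gen 2 / LOH growth is reclaimable garbage or objects that are still referenced, one quick diagnostic (a hypothetical sketch, not something from the answer above) is to force a full collection and compare the managed heap size before and after; if the number barely drops, live references are holding the memory:

    // Diagnostic sketch only - not a recommendation to call GC.Collect routinely.
    long before = GC.GetTotalMemory(forceFullCollection: false);

    GC.Collect(2, GCCollectionMode.Forced, blocking: true);
    GC.WaitForPendingFinalizers();
    GC.Collect(2, GCCollectionMode.Forced, blocking: true);

    long after = GC.GetTotalMemory(forceFullCollection: true);
    Console.WriteLine($"Managed heap before: {before:N0} bytes, after: {after:N0} bytes");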

Should I go for faster queries or less CPU consumption / faster processing?

I have to choose between performing a query for data of size X and not processing it, just sending it to the client,
OR
performing a query for data half the size of X, doing a little processing, and then sending it to the client.
Now, in my life as a programmer I have met the storage vs. speed problem quite a bit, but in this case I have to choose between "fast query + processing" and "slow query + no processing".
If it matters, I am using Node.js for the server and MongoDB for the database.
In case you care: I am storing non-intersecting map areas and testing whether a given area intersects any of them (or none). All areas are boxes. If I keep each one as just its origin point, that is only one pair of coordinates, but I have to expand the point into an area (all map areas have the same size). If I store each one as an area directly, I don't have to process it anymore, but it is now 4 pairs of coordinates: 4 times the size and, I think, a slower query.
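For concreteness, storing the whole box would look roughly like this in the mongo shell (a sketch only: it assumes a 2dsphere index, GeoJSON polygons and a fixed box size W x H; the values below are placeholders, not my real data):

    // Placeholder values - replace with real coordinates and the fixed area size.
    const W = 0.01, H = 0.01;     // fixed dimensions of every map area (assumed)
    const x = -73.99, y = 40.73;  // origin of the candidate box (lng, lat)

    // Index the stored areas and let MongoDB do the intersection test.
    db.areas.createIndex({ area: "2dsphere" });
    db.areas.findOne({
      area: {
        $geoIntersects: {
          $geometry: {
            type: "Polygon",
            coordinates: [[
              [x, y], [x + W, y], [x + W, y + H], [x, y + H], [x, y]  // closed ring
            ]]
          }
        }
      }
    });

The point-only alternative keeps the documents 4 times smaller, but moves that polygon construction into my Node.js code instead.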
There is no right answer to this question; it all depends on your infrastructure. If you are, for example, using Amazon Web Services for this, it depends on the transaction price. If you have your own infrastructure, it depends on the load of the DB and web servers. If they are on the same server, it is a matter of the underlying hardware whether the I/O from the DB starts to limit before the CPU/memory become the bottleneck.
The only way to determine the right answer for your specific situation is to set it up and do a stress test, for example using Load Impact or one of the tons of other good tools for this. While it is getting hammered, monitor your system load using top and watch the wa (I/O wait) column specifically: if it consistently goes above 50%, you are I/O limited and should shift work from the DB to the CPU (i.e. the smaller query plus processing).
