HTML website with Knockoutjs bindings has performance issue - azure

I am working on my client's pure HTML/CSS website, which has data bindings to JSON datasets using Knockout.js. For tables I have used the DataTables library.
I have hosted the website on Windows Azure Websites.
Here is the link to the website: http://bit.ly/(REMOVED SINCE IT IS CONFIDENTIAL)
It takes around 4 seconds to load the website even though I have used a CDN for the common JS libraries.
It should not take that long to load, and I am unable to find the culprit. I am fetching data from 4 different datasets; does that impact performance? Or is there a problem with the Windows Azure datacenter, since it takes a while to get a response from the Azure server? Is Azure the culprit?
You can examine the page load time on the website link given above.
Any help would be appreciated.
Solution:
Instead of using synchronous calls, I used

$.getJSON(url, function (data) {
    // whole Knockout.js logic and bindings
});

All model .js files (starting with patientMedicationChart-Index.js) are loaded synchronously (async:false is set in that file). This means that the browser has to wait for each script file to be loaded before continuing to load the next.
I count about 10 files loaded like that for your demo, each of which (for me) takes about 200 ms to load (about 95% of that 200 ms is spent waiting for a response, which also seems rather slow; that might be a server issue with Azure). Ten files times 200 ms is already 2 seconds spent loading those files, and only after all of them are loaded will the ready event for the page be triggered.
There might be a reason for wanting to load those files synchronously, but as it is, it's causing a significant part of the loading time for the entire page.
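
For reference, here is a minimal sketch of the difference, assuming the model files use $.ajax with async: false (the URLs and view-model names below are hypothetical, not the site's actual code):

// Synchronous pattern: the browser is blocked until each response arrives.
$.ajax({
    url: '/data/medications.json',
    dataType: 'json',
    async: false, // forces the browser to wait
    success: function (data) {
        ko.applyBindings(new MedicationViewModel(data));
    }
});

// Asynchronous alternative; several datasets can even be fetched in parallel.
$.when(
    $.getJSON('/data/medications.json'),
    $.getJSON('/data/patients.json')
).done(function (medications, patients) {
    // each argument is an array of [data, statusText, jqXHR]
    ko.applyBindings(new PageViewModel(medications[0], patients[0]));
});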

Related

Azure CDN high max latency

We are experimenting with using Blazor WebAssembly with Angular. It works nicely, but Blazor requires a lot of DLLs to be loaded, so we decided to store them in Azure Blob Storage and to serve them through the Microsoft CDN on Azure.
When we check the average latency as users start working, it shows values between 200 and 400 ms, but the maximum latency values jump to 5-6 minutes.
This happens for our usual workload of 1k-2k users over the course of 1 hour. If they don't have the Blazor files cached locally yet, that can be over 60 files per user requested from the CDN.
My question is whether this is expected behaviour or whether we have a bad configuration somewhere.
I mention Blazor WebAssembly just in case; I'm not sure whether the problem is specific to the way these files are loaded, or whether it is simply the large number of fetched files.
Thanks for any advice in advance.
I did check whether the files are served from cache, and from the response headers it seems so: x-cache: TCP_HIT. The byte hit ratio from the CDN profile also seems OK: mostly 100%, and it never falls below 65%.

Streaming data from node js controller to web page continuously

I have a very large Node.js controller that runs through huge tables and checks data against other huge tables. This takes quite a while to run.
Today my page only shows the final result after 20 minutes.
Is there any way to stream the result continuously from the controller to the web page, like a real-time scrolling log of what's going on?
(Or is Socket.IO my only option?)
You can try node-cron: set up a dynamic query that fetches data with different limits, and append the data on the front-end side.
I am not sure whether this is the proper way to do it or not.
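
As an illustration of that suggestion, a minimal client-side sketch that polls the server in pages and appends rows as they arrive (the endpoint name, query parameters and response shape are assumptions):

// Poll the server for the next batch of rows and append them to a log element.
// Assumes a <ul id="log"> element exists on the page.
let offset = 0;
const limit = 100;

async function pollResults() {
    const response = await fetch('/api/results?offset=' + offset + '&limit=' + limit);
    const rows = await response.json();
    for (const row of rows) {
        const li = document.createElement('li');
        li.textContent = JSON.stringify(row);
        document.getElementById('log').appendChild(li);
    }
    offset += rows.length;
    setTimeout(pollResults, 2000); // ask again in 2 seconds
}

pollResults();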

Scalability for intensive pdf generation tasks on a node.js app using puppeteer?

The goal of the app is to generate a PDF using Puppeteer: we fetch the data, build the HTML template, generate the PDF with headless Chrome, and then return a link to the newly generated PDF.
The issue is that it takes about 7000 ms to generate a PDF, mainly because of three Puppeteer functions: launch (launches the headless browser), goto (navigates to the HTML template) and pdf (generates the PDF).
So with around 7-8 seconds to answer one request, more incoming requests or a sudden spike could easily push it to about 40 to 50 seconds for 30 simultaneous requests, which I find unacceptable.
After much time spent on research, I will implement the cluster module to take advantage of multiple processes.
But besides clustering, are there any other possible options to optimize the time on a single instance?
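
For context, a minimal sketch of the cluster module approach referred to above ('./server' is a hypothetical entry point that starts the HTTP listener):

const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on older Node versions
    // Fork one worker per CPU core; each worker runs its own headless browser.
    for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
    cluster.on('exit', () => cluster.fork()); // replace a worker that crashes
} else {
    require('./server'); // hypothetical module that listens on the shared port
}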
There are a few things to consider:
Consider calling puppeteer.launch once, at application start. Your conversion script then just checks whether a browser instance already exists and uses it by calling newPage(), which basically creates a new tab, instead of launching a new browser every time.
You may also consider intercepting requests with page.on('request', this.onPageRequest); when calling goto(), and filtering out certain types of files that the page loads but that you don't need for PDF rendering; you may filter out external resources as well, if that applies to your case.
When using pdf(), you may return a Buffer from your service instead of writing to the file system and returning a link to the location of the created PDF file. This may or may not speed things up, depending on your service setup; in any case, less I/O should be better (see the combined sketch below).
This is probably all you can do for a single instance of your app. With the implementation above, a regular (couple of pages) PDF with a few images renders for me in 1-2 seconds.
To speed things up further, use clustering. Rather than embedding it inside your application, you may consider using the PM2 process manager to start and scale multiple instances of your service.
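
A minimal sketch combining the first three suggestions; the function names, the template URL argument and the intercepted resource types are assumptions, not the poster's actual code:

const puppeteer = require('puppeteer');

let browser; // launched once and reused across requests

async function getBrowser() {
    if (!browser) {
        browser = await puppeteer.launch({ headless: true });
    }
    return browser;
}

async function renderPdf(templateUrl) {
    const page = await (await getBrowser()).newPage();
    try {
        // Skip resource types that are not needed for the PDF (adjust to your template).
        await page.setRequestInterception(true);
        page.on('request', request => {
            if (['media', 'font'].includes(request.resourceType())) {
                request.abort();
            } else {
                request.continue();
            }
        });
        await page.goto(templateUrl, { waitUntil: 'networkidle0' });
        // Return the PDF as a Buffer instead of writing a file and returning a link.
        return await page.pdf({ format: 'A4', printBackground: true });
    } finally {
        await page.close();
    }
}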

Apache - strange time-to-first-byte issue

Somehow I think there's a problem with stat on Linux, but I tested a regular empty folder on Linux vs. an icon of less than 1000 bytes in size. The test was done with Apache 2.2, and the server is located in eastern Canada.
Webpagetest results:
http://www.webpagetest.org/result/160202_KK_9HJ/1/details/
I'm curious as to why the time to first byte for the directory listing is a third higher than the time to first byte for the icon.
What settings do I use in linux to fix this?
The time to first byte represents the time taken to 1) send the request to the server, 2) process the request and 3) return at least some of the results from the server.
For similarly sized resources, 1) and 3) should be the same, so let's concentrate on 2) for now.
When you request the directory, Apache has to check whether the directory contains an index.html file; if not, it reads the directory, then constructs the HTML listing page, creating links to the parent directory and to each file and subdirectory, and only then returns the result.
When you request the .ico file, Apache just has to pick up the file and return it to you, nice and simple.
So, as you can see, there is more work in the first case than in the second, and I don't think this is a fair test. Compare a static index.html file to a static .ico file for a fairer test, and then you'll know whether you have an issue.
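
One way to run that comparison, sketched here in Node (the URLs are placeholders; WebPageTest or curl would do equally well):

const http = require('http'); // use require('https') for TLS sites

// Resolve with the milliseconds from sending the request to the first response byte.
function timeToFirstByte(url) {
    return new Promise((resolve, reject) => {
        const start = process.hrtime.bigint();
        http.get(url, response => {
            response.once('data', () => {
                resolve(Number(process.hrtime.bigint() - start) / 1e6);
                response.destroy();
            });
        }).on('error', reject);
    });
}

(async () => {
    console.log('directory listing:', await timeToFirstByte('http://example.com/static/'), 'ms');
    console.log('static icon:', await timeToFirstByte('http://example.com/static/favicon.ico'), 'ms');
})();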
Additionally, depending on your MPM choice, settings, server load and server history, there may already be a thread or process waiting to handle the first request (fast), or the first request may have to start one (slow). This is likely to be less of an issue for a second request, particularly with keep-alive enabled. See here for more details: https://serverfault.com/questions/383526/how-do-i-select-which-apache-mpm-to-use.
There is also the TCP slow-start issue, which particularly affects older versions of OSes and software, but that is unlikely to have an impact here with the small loads you are talking about, and it should affect total download time rather than TTFB. Still, it's yet another reason to ensure you're running up-to-date software.
And finally, your TTFB is mostly influenced by your hosting provider, the pipes to your server and the number of hops before the request reaches Apache, so, once you have chosen a hosting provider, it is mostly out of your control. That will usually be reflected across the board, though, rather than in the variance you see between these two requests.

Limit the number of concurrent connections from the server side?

I'm writing my own webserver and I don't yet handle concurrent connections properly, which causes massive page-loading lag (I respond to the SYN, but somehow I lose the GET packet. The browser retries after a while, but it takes 3 seconds!). I'm trying to figure out whether there's a way to instruct the browser to stop loading things concurrently, because debugging this is taking a long time. The webserver is very stripped down, is not going to be public and is not the main purpose of this application, which is why I'm willing to cut corners in this fashion.
It'd be nice to just limit the concurrent connections to 1, because modifying that parameter via a registry hack for IE and via about:config for Firefox makes things work perfectly.
Any other workaround ideas would be useful, too. A couple I can think of:
1 - Instruct the browser to cache everything with no expiration, so the slow loads (.js, .css and image files) happen only once. I can append a checksum to the end of the file (img src="/img/blah.png?12345678") to make sure that if I update a file, it's reloaded properly.
2 - Inline the .js and .css in the .html files - but this still doesn't fix the image issue, and it's just plain ugly anyway.
I don't believe it's possible to tell a browser like Firefox not to load resources concurrently, at least not for your users via some HTTP header or the like.
So I never found a way to do this.
My underlying issue was that too many requests were coming in and overflowing my limited receive buffers in EMAC RAM, and overflowed receive buffers mean discarded packets. The resolution was to combine all the .js files and all the .css files into one .js and one .css file in order to get the number of requests down. I set all image, .js and .css responses to a one-year expiration, while the HTML pages are set to expire immediately. I wrote a Perl script to append MD5 checksums to the file references so that changed files are refetched. It works great now: pages load instantly after the first load caches everything.
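
A sketch of the checksum-appending idea in Node rather than Perl (the file locations and the URL pattern matched are assumptions):

const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

// Short MD5 digest of a file's contents, used as a cache-busting query string.
function md5(file) {
    return crypto.createHash('md5').update(fs.readFileSync(file)).digest('hex').slice(0, 8);
}

// Rewrite src/href references in an HTML file so each asset URL carries its checksum.
function addChecksums(htmlFile, webRoot) {
    let html = fs.readFileSync(htmlFile, 'utf8');
    html = html.replace(/(src|href)="(\/[^"?]+\.(?:js|css|png|ico))"/g, (match, attr, url) => {
        return attr + '="' + url + '?' + md5(path.join(webRoot, url)) + '"';
    });
    fs.writeFileSync(htmlFile, html);
}

addChecksums('public/index.html', 'public');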