I suspect there may be a problem with stat on Linux. I tested a regular empty folder on Linux against an icon of less than 1000 bytes in size. The test was done with Apache 2.2, and the server is located in eastern Canada.
Webpagetest results:
http://www.webpagetest.org/result/160202_KK_9HJ/1/details/
I'm curious why the time to first byte for the directory listing is about one third higher than the time to first byte for the icon.
What settings do I use in Linux to fix this?
The time to first byte represents the time taken to 1) send the request to the server, 2) process the request and 3) return at least some of the results from the server.
For similarly sized resources, 1) and 3) should be roughly the same, so let's concentrate on 2) for now.
When you request the directory, Apache has to check whether the directory contains an index.html file; if not, it reads the directory, constructs an HTML page with links to the parent directory and to each file or subdirectory, and then returns that page.
When you request the .ico file, Apache just has to pick up the file and return it to you, nice and simple.
So as you can see, there is more work in the first case than in the second, and I don't think this is a fair test. Compare a static index.html file to a static .ico file for a fairer test, and then you'll know whether you actually have an issue.
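If you want to double-check the numbers yourself, a small sketch along these lines can compare the two resources from any machine with Java installed. The URLs are placeholders for your own static index.html and .ico files, and the timing includes DNS lookup and connection setup, so treat it as a rough comparison rather than a reproduction of the webpagetest TTFB.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TtfbCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder URLs: point these at your own static files.
        measure("http://www.example.com/index.html");
        measure("http://www.example.com/favicon.ico");
    }

    static void measure(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        long start = System.nanoTime();
        try (InputStream in = conn.getInputStream()) {
            in.read(); // blocks until the first byte of the response is available
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(url + " -> roughly " + ms + " ms to first byte");
        } finally {
            conn.disconnect();
        }
    }
}

Run it a few times and compare the two numbers; if the gap persists for two static files of similar size, then there is something worth investigating.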
Additionally, depending on your MPM choice, settings, server load and server history, there may already be a thread or process waiting to handle the first request (fast), or the first request may have to start one up (slow). This is likely to be less of an issue for a second request, particularly with keep-alive enabled. See here for more details: https://serverfault.com/questions/383526/how-do-i-select-which-apache-mpm-to-use.
There is also the TCP slow-start issue, which particularly affects older versions of operating systems and software, but that is unlikely to have an impact on the small loads you are talking about, and it should affect total download time rather than TTFB. Still, it's yet another reason to make sure you're running up-to-date software.
And finally, your TTFB is mostly influenced by your hosting provider, the pipes to your server and the number of hops before the request reaches Apache, so once you have chosen a hosting provider it is mostly out of your control. Again, this will usually show up across the board rather than in the variance you see between these two requests.
We have an XPages application, and we serialize all pages to disk for this specific application. We already use the gzip option, but it seems the serialized files are removed from disk only when the HTTP task is stopped or restarted.
As this application is used by many different customers from different places around the globe, we try to avoid restarting the server or the HTTP task as much as possible. The drawback is that serialized files are never deleted, so sooner or later we face a disk space problem, even though the gzipped serialized files are not that big.
A secondary issue is that the HTTP task takes quite a long time to stop, because it has to remove all the serialized files.
Is there any way to have the Domino server clean up old/unused serialized files without restarting the HTTP task?
Currently we have implemented an OS script which cleans up serialized files older than two days, which works, but I would prefer a solution within Domino.
Thanks in advance for your answers/suggestions !
Renaud
I believe the httpSessionId is used to store the file(s) on disk. You could try the following:
Alter the xsp.persistence.dir.xspstate property to point to a friendlier location (e.g. /temp/xspstate)
Register a SessionListener with your XPage application
Inside the SessionListener's sessionDestroyed method, recursively search through the folders to find the file or folder that matches the sessionId and delete it (a rough sketch follows below)
When the sessionDestroyed method is called in the listener, any file locks should already have been released. Also note that, as of right now, the sessionDestroyed method is not called immediately after a user logs out (see my question here: SessionListener sessionDestroyed not called)
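For step 3, here is a minimal sketch of the recursive search-and-delete in plain Java, to be called from the listener's sessionDestroyed method once you have the session id. The /temp/xspstate location and the assumption that the session id appears in the file or folder name are taken from the steps above, so adjust both to whatever you actually see on disk.

import java.io.File;

public class XspStateCleaner {

    // e.g. deleteSessionState(new File("/temp/xspstate"), sessionId);
    public static void deleteSessionState(File dir, String sessionId) {
        File[] children = dir.listFiles();
        if (children == null) {
            return; // not a directory, or not readable
        }
        for (File child : children) {
            if (child.getName().contains(sessionId)) {
                deleteRecursively(child); // matches the session: remove it entirely
            } else if (child.isDirectory()) {
                deleteSessionState(child, sessionId); // keep searching deeper
            }
        }
    }

    // Deletes a file, or a folder and everything inside it.
    private static void deleteRecursively(File file) {
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        file.delete();
    }
}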
hope this helps...
I am working on my client's pure HTML/CSS website, which binds data from JSON datasets using Knockout.js. For tables I have used the DataTables library.
I have hosted the website on Windows Azure websites.
Here is the link to the website: http://bit.ly/(REMOVED SINCE IT IS CONFIDENTIAL)
It takes around 4 seconds to load the website, even though I have used a CDN for the common JS libraries.
It should not take that long to load, and I am unable to find the culprit. I am fetching data from 4 different datasets; does that impact performance? Or is there a problem with the Windows Azure datacenter, since it takes a while to get a response from the Azure server? Is Azure the culprit?
You can examine the page load time on the website link given above.
Any help would be appreciated.
Solution:
Instead of using synchronous calls, I used:
$.getJSON(url, function (data) {
    // whole knockoutjs logic and bindings
});
All model .js files (starting with patientMedicationChart-Index.js) are loaded synchronously (async:false is set in that file). This means that the browser has to wait for each script file to be loaded before continuing to load the next.
I count about 10 files loaded like that in your demo, and (for me) each takes about 200 ms to load (about 95% of that 200 ms is spent waiting for a response, which also seems rather slow; that might be a server issue with Azure). Ten files at 200 ms each is already 2 seconds spent loading those files, and only after all of them have loaded is the page's ready event triggered.
There might be a reason for wanting to load those files synchronously, but as it is, it's causing a significant part of the loading time for the entire page.
I put redirection code at the top of a page which has bootstrapping code below it. Does a redirect spawn a different process, turning the redirecting page's process into a background process, or does it kill the current process entirely?
I'm using header() for the redirect, but surprisingly the remaining code below header(), which requires a database connection, also got executed. That got me curious.
First, you should never call header() just like that, unless you are very sure that the Drupal helpers are not appropriate (I have never come across such a situation in my 10+ years of Drupal development).
header() will not call the shutdown and other closing functions in Drupal, resulting in potentially broken sessions, wrong statistics and broken modules (which depend on the closing hooks being invoked). The fact that sockets and other low-level resources are not closed in such cases might even crash your Apache server (or other servers) at some point.
Rather, call drupal_set_header() if you want to set a header. Nine times out of ten you want a redirection header, in which case you are best off calling drupal_goto(), which does all the closing and even supports a destination parameter to be followed.
In drupal_goto(), all processing is terminated (see the exit() at the bottom), hence no running process will be kept. The module_invoke_all('exit') call makes sure that all modules get a turn at closing their sockets, connections and whatever else.
As far as I understand, a single process handles both the original request and the redirect request. The redirect request is processed once the original request has completed.
Since there are many URL redirection techniques, we cannot be sure unless you let us know the particular URL redirection technique that you are using.
From my understanding so far: if you redirect directly from the Apache .htaccess file, then no new process is created. If the header() function is used, it sends the response to the Apache server, and Apache redirects the page. In both cases no new process is created, but the latter runs the PHP script before redirecting (more costly?).
Even if the image is changed, overwritten or modified, IIS still serves the cached copy.
I am trying to upload an image from a webcam, taken every 15 seconds. The image makes it onto the server, but when I refresh the browser with the image FROM the server, it does not refresh.
IIS apparently caches the file for more than 2 minutes. I want this to be in real time. I have tried disabling caching everywhere I could think of, with no luck.
Embed your image as follows:
<img src="WebCamImage.aspx?data={auto-generated guid}" ... >
And create a page (WebCamImage.aspx) that streams the static image file back to the browser while ignoring the "data" request parameter, which is only there to defeat caching (make sure to set the response content type to "image/jpeg", or whatever is appropriate, in the @ Page directive).
Are you sure that the image is cached on the server and not on the client? Have you tried requesting the same image from a different client?
If this IS server side caching then this article has all the answers for you:
http://blogs.msdn.com/david.wang/archive/2005/07/07/HOWTO-Use-Kernel-Response-Cache-with-IIS-6.aspx
You are most likely "affected" by the kernel-mode caching.
See that scavenger time?
Scavenger - 120 seconds by default and controlled by the registry key HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\UriScavengerPeriod
That is probably what you experience (2min caching)
Try turning kernel-mode caching off to see if it makes a difference (performance may suffer but it will be no worse than IIS5)
I'm writing my own webserver and I don't yet handle concurrent connections properly. I get massive page loading lag due to inappropriately handling concurrent connections (I respond to SYN, but I lose the GET packet somehow. The browser retries after a while, but it takes 3 seconds!) I'm trying to figure out if there's a way to instruct the browser to stop loading things concurrently, because debugging this is taking a long time. The webserver is very stripped down, is not going to be public and is not the main purpose of this application, which is why I'm willing to cut corners in this fashion.
It'd be nice to just limit the concurrent connections to 1, because modifying that parameter using a registry hack for IE and using about:config for Firefox both make things work perfectly.
Any other workaround ideas would be useful, too. A couple I can think of:
1 - Instruct the browser to cache everything with no expiration, so the slow loads (.js, .css and image files) happen only once. I can append a checksum to the URL (img src="/img/blah.png?12345678") to make sure that if I update the file, it's reloaded properly.
2 - Inline the .js and .css in the .html files - but this still doesn't fix the image issue, and it's just plain ugly anyway.
I don't believe it's possible to tell a browser like Firefox not to load resources concurrently, at least not for your users via some HTTP header or the like.
So I never found a way to do this.
My underlying issue was that too many requests were coming in and overflowing my limited receive buffers in EMAC RAM. Overflowing receive buffers = discarded packets. The resolution was to combine all the .js files and all the .css files into one .js and one .css file in order to get the number of requests down. I set all image, .js and .css responses to have a year's expiration. The HTML pages are set to expire immediately. I wrote a Perl script to append MD5 checksums to the file references so that changed files are refetched. It works great now: pages load instantly after the first load caches everything.
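The Perl script itself isn't shown above, but the checksum trick it implements looks roughly like this, sketched here in Java purely for illustration (the file path and URL are made-up examples): hash the file's contents and append the hex digest to the URL emitted in the HTML, so the URL only changes when the file's contents change, and the browser refetches only then.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class CacheBuster {

    public static void main(String[] args) throws Exception {
        // Hypothetical local file and the URL path it is served under.
        System.out.println(versionedUrl("img/blah.png", "/img/blah.png"));
    }

    // Returns e.g. /img/blah.png?<md5-of-file-contents>
    static String versionedUrl(String localPath, String urlPath) throws Exception {
        byte[] content = Files.readAllBytes(Paths.get(localPath));
        byte[] digest = MessageDigest.getInstance("MD5").digest(content);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return urlPath + "?" + hex;
    }
}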