Limit the number of concurrent connections from the server side? - browser

I'm writing my own webserver, and I don't yet handle concurrent connections properly. I get massive page-loading lag because of it (I respond to the SYN, but somehow lose the GET packet; the browser retries after a while, but that takes 3 seconds!). I'm trying to figure out whether there's a way to instruct the browser to stop loading things concurrently, because debugging this properly is taking a long time. The webserver is very stripped down, will never be public, and is not the main purpose of this application, which is why I'm willing to cut corners in this fashion.
It'd be nice to just limit the concurrent connections to 1: changing that setting via a registry hack for IE and via about:config for Firefox both make things work perfectly.
Any other workaround ideas would be useful, too. A couple I can think of:
1 - Instruct the browser to cache everything with no expiration so the slow loads (.js, .css and image files) happen only once. I can append a checksum to the end of the file (img src="/img/blah.png?12345678") to make sure if I update the file, it's reloaded properly.
2 - Add the .js and .css to load inline with the .html files - but this still doesn't fix the image issue, and is just plain ugly anyway.

I don't believe it's possible to tell a browser like Firefox not to load resources concurrently, at least not on your users' behalf via an HTTP header or similar.

So I never found a way to do this.
My underlying issue was that too many requests were coming in and overflowing my limited receive buffers in EMAC RAM, and overflowed receive buffers mean discarded packets. The fix was to combine all the .js files into one and all the .css files into one in order to reduce the number of requests. I set all image, .js and .css responses to expire in a year; the HTML pages expire immediately. I wrote a Perl script that appends MD5 checksums to file URLs so changed files are refetched. It works great now: pages load instantly after the first load caches everything.

Apache - strange time-to-first-byte issue

I suspect there's a problem with stat on Linux, but what I actually tested was requesting a regular empty folder (a directory listing) versus an icon of less than 1000 bytes. The test was done with Apache 2.2, and the server is located in eastern Canada.
Webpagetest results:
http://www.webpagetest.org/result/160202_KK_9HJ/1/details/
Why is the time to first byte for the directory listing a third higher than the time to first byte for the icon?
What settings do I use in linux to fix this?
The time to first byte represents the time taken to 1) send the request to the server, 2) process the request and 3) return at least some of the results from the server.
For similarly sized resources, 1) and 3) should be the same, so let's concentrate on 2) for now.
When you request the directory, Apache has to check whether it contains an index.html file; if not, it reads the directory, constructs an HTML page with links to the parent directory and to each file and subdirectory, and then returns that page.
When you request the .ico file, Apache just has to pick up the file and return it to you, nice and simple.
So as you can see, there is more work in the first case than in the second, which means this isn't a fair test. Compare a static index.html file to a static .ico file for a fairer test, and then you'll know whether you have a real issue.
Additionally, depending on your MPM choice, settings, server load and server history, there may already be a thread or process waiting to handle the first request (fast), or the first request may have to spawn one (slow). This is likely to be less of an issue for a second request, particularly with keep-alive enabled. See here for more details: https://serverfault.com/questions/383526/how-do-i-select-which-apache-mpm-to-use.
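For example, with the prefork MPM on Apache 2.2, keeping a few spare child processes around so the first request never pays the startup cost looks something like this (the directive names are real; the values are purely illustrative, not tuned recommendations):

```apache
# Keep idle children available so incoming requests don't wait for a fork.
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150

# Reuse connections so follow-up requests skip connection setup entirely.
KeepAlive On
KeepAliveTimeout 5
```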
There is also TCP slow start, which particularly affects older versions of the OS and software, but that is unlikely to have an impact for the small loads you are talking about, and it should affect total download time rather than TTFB anyway. Still, it's yet another reason to make sure you're running up-to-date software.
And finally, your TTFB is mostly determined by your hosting provider, the pipes to your server, and the number of hops before a request reaches Apache, so once you have chosen a hosting provider it is mostly out of your control. This will usually be consistent across the board, though, rather than explaining the variance you see between these two requests.

Session store get and set on every http request?

I am using node.js with https://github.com/visionmedia/connect-redis to store session variables in redis.
I ran redis-cli monitor and noticed that on a single page load, three pairs of get and setex commands are executed. The three pairs come from the three HTTP requests made on page load (favicon.ico, /, and index.css).
My question: is it normal for a Redis get and setex to run on every HTTP request? Each pair contains identical data.
The three HTTP GETs that you are seeing are normal for a web application.
You can set a very long expiration date on your favicon.ico so that the browser only requests it once.
For static assets (i.e. CSS, JS, images) you can do the same, or serve them from a different domain (or subdomain) so the session cookie is not sent with those requests in the first place.
Be aware that if you put a very long expiration date on a CSS/JS file, the browser will not request it again, and you might run into weird "issues" where you make a change to a CSS/JS file but the browser never fetches the updated version. This is one of the reasons a lot of sites "version" their CSS files (e.g. styles-2013-02-17.css), so that a change can be released under a different file name.

Does browser timeout on huge request?

Assume that I use a web browser to upload a huge file (maybe several GB), so it may take hours for all the data to be transferred to the server. Assume the server places no limit on upload size and just keeps accepting data. Will the browser work earnestly for hours until all the data is transferred, or will it raise an error after some time? Or is this browser-specific?
A request will always time out at some point, regardless of the web server you are using (Apache, IIS, etc.). The defaults are usually a couple of minutes, so you will need to increase them. There are also other limits, such as maximum file size, that you would have to raise. An example of doing this with PHP/Apache can be found here:
Increase max_execution_time in PHP?
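To give a sense of what is involved on a PHP/Apache stack, the settings below are the usual suspects (the directive names are real; the values are arbitrary examples you would tune for your own uploads):

```apache
# Apache: how long an idle connection may sit before being dropped,
# and the maximum request body size (0 = unlimited).
Timeout 3600
LimitRequestBody 0
```

```ini
; php.ini: script runtime, input-parsing time, and upload size caps.
max_execution_time = 3600
max_input_time = 3600
upload_max_filesize = 4G
post_max_size = 4G
```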
The browser will eventually give up and show a request-timeout error page.

cache external files eg, i.ytimg.com/vi/#code#/0.jpg with apache .htaccess?

As the title says: is it possible to set caching on external resources with .htaccess?
I have some third-party stuff on my site, e.g. Google Web Elements and embedded YouTube clips.
I want to raise my Google Page Speed score.
Error output from Page Speed:
The following resources are missing a cache validator.
http://i.ytimg.com/vi/-MfM1fVSFnM/0.jpg
http://i.ytimg.com/vi/-PxVKNJmw4M/0.jpg
http://i.ytimg.com/vi/3nxENc_msc0/0.jpg
http://i.ytimg.com/vi/5Bra7rbGb7g/0.jpg
http://i.ytimg.com/vi/5P76PKybW5o/0.jpg
http://i.ytimg.com/vi/9l9BzKfI88o/0.jpg
http://i.ytimg.com/vi/E7hvBxMB4XI/0.jpg
http://i.ytimg.com/vi/IiocozLHFis/0.jpg
http://i.ytimg.com/vi/JIHohC8fydQ/0.jpg
http://i.ytimg.com/vi/P66uwFpmQSE/0.jpg
http://i.ytimg.com/vi/TXLTbARnRdU/0.jpg
http://i.ytimg.com/vi/bPBrRzckfEQ/0.jpg
http://i.ytimg.com/vi/dajcIH9YUuI/0.jpg
http://i.ytimg.com/vi/g4roerqw090/0.jpg
http://i.ytimg.com/vi/h1imBHP3DdA/0.jpg
http://i.ytimg.com/vi/hRvW5ndLLEk/0.jpg
http://i.ytimg.com/vi/kzahftbo6Qc/0.jpg
http://i.ytimg.com/vi/lta2U3hkC4k/0.jpg
http://i.ytimg.com/vi/n1o9bGF88HY/0.jpg
http://i.ytimg.com/vi/n3csJN0wXew/0.jpg
http://i.ytimg.com/vi/q0Xu-0moeew/0.jpg
http://i.ytimg.com/vi/tPCDPKirZBM/0.jpg
http://i.ytimg.com/vi/uLxsPImMJmg/0.jpg
http://i.ytimg.com/vi/x33B_iBn2_M/0.jpg
No, it's up to them to cache it.
The best you could do would be to download them onto your server and then serve them, but that would be slower anyway!
Nope, setting cache headers for third-party resources is not possible unless you start proxying those resources through your own server, which you usually don't want for reasons of speed and traffic.
As far as I can see, there's nothing you can do here.
You could delay your YouTube videos from loading until something like a holding image is clicked. This wouldn't cache the thumbnails when (or if) they are loaded, but they would no longer hurt your Page Speed score, because they wouldn't be fetched on page load any more.
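A minimal sketch of that click-to-load idea, assuming a locally hosted placeholder image and a hypothetical .yt-holder wrapper (class names, paths and the video code are all placeholders):

```html
<!-- The i.ytimg.com thumbnails are only requested once the iframe exists,
     so nothing third-party loads until the user clicks the placeholder. -->
<div class="yt-holder" data-video="VIDEO_CODE">
  <img src="/img/yt-placeholder.jpg" alt="Play video">
</div>
<script>
  document.querySelectorAll(".yt-holder").forEach(function (holder) {
    holder.addEventListener("click", function () {
      var frame = document.createElement("iframe");
      frame.src = "https://www.youtube.com/embed/" + holder.dataset.video;
      frame.allowFullscreen = true;
      holder.innerHTML = "";
      holder.appendChild(frame);
    });
  });
</script>
```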

IIS 6 caches static image

Even if the image is changed, overwritten, modified, IIS still serves the cached copy.
I am uploading an image from a webcam every 15 seconds. The image makes it onto the server, but when I refresh the browser with the image from the server, it does not update.
IIS apparently caches the file for more than 2 minutes. I want this to be real-time. I've tried disabling caching everywhere I could think of, with no luck.
Embed your image as follows:
<img src="WebCamImage.aspx?data={auto-generated guid}" ... >
And create a page (WebCamImage.aspx) that streams the static image file back to the browser while ignoring the "data" request parameter, which exists only to defeat caching (make sure to set the response content type to "image/jpeg" or whatever is appropriate in the @ Page directive).
Are you sure that the image is cached on the server and not on the client? Have you tried requesting the same image from a different client?
If this IS server-side caching, then this article has all the answers for you:
http://blogs.msdn.com/david.wang/archive/2005/07/07/HOWTO-Use-Kernel-Response-Cache-with-IIS-6.aspx
You are most likely "affected" by the kernel-mode caching.
See that scavenger time? The scavenger runs every 120 seconds by default, controlled by the registry key HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\UriScavengerPeriod.
That is probably what you are experiencing (2-minute caching).
Try turning kernel-mode caching off to see if it makes a difference (performance may suffer, but it will be no worse than IIS 5).
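If you only want to shorten the scavenger interval rather than disable kernel caching entirely, a .reg fragment would look roughly like this (assuming the value is a DWORD in seconds, as the 120-second default suggests; verify on a test box before relying on it):

```reg
Windows Registry Editor Version 5.00

; Example: scavenge the kernel URI cache every 10 seconds instead of 120.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters]
"UriScavengerPeriod"=dword:0000000a
```

A restart of the HTTP service is needed for the change to take effect.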
