Assume that I use a web browser to upload a huge file (maybe several GB); it may take hours for all the data to be transferred to the server. Assume the server has no limit on the size of file uploads and just keeps engulfing data. Will the browser work earnestly for hours until all the data is transferred? Will the browser prompt some error after a while? Or is it a browser-specific issue?
A request will always time out at some point, regardless of the web server you are using (Apache, IIS, etc.). The defaults are usually a couple of minutes, so you will need to increase those. There are also other limits, such as the maximum file size, that you would have to increase. An example of doing this with PHP/Apache can be found here:
Increase max_execution_time in PHP?
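For reference, a rough sketch of the PHP side of that, with placeholder values rather than recommendations:

; php.ini - example values for long-running, large uploads
max_execution_time = 0        ; no script time limit
max_input_time = -1           ; no limit on reading the request
upload_max_filesize = 8G      ; largest single uploaded file
post_max_size = 8G            ; must be at least upload_max_filesize

On the Apache side, the Timeout directive (and LimitRequestBody, if it is set) would need raising as well.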
The browser will eventually show a request-timeout error page.
We are experimenting with using Blazor WebAssembly with Angular. It works nicely, but Blazor requires a lot of DLLs to be loaded, so we decided to store them in Azure Blob Storage and serve them through Microsoft CDN on Azure.
When we check average latency as users start working, it shows values between 200-400 ms, but the maximum latency values jump to 5-6 minutes.
This happens for our usual workload of 1k-2k users over the course of 1 hour. If they don't have the Blazor files cached locally yet, that can be over 60 files per user requested from the CDN.
My question is whether this is expected behaviour or whether we have some bad configuration somewhere.
I mention Blazor WebAssembly just in case; I'm not sure whether the problem is specific to the way these files are loaded, or whether it is only because of the large number of fetched files.
Thanks for any advice in advance.
I did check whether the files are served from cache, and from the response headers it seems so: x-cache: TCP_HIT. Also, the byte hit ratio from the CDN profile seems OK: mostly 100%, and it never falls under 65%.
In my Lotus Notes web application, I have file upload functionality. I want to validate the attachment file size before uploading, which I did through WebQuerySave. My problem is that whenever the attached file size exceeds the limit configured in the server document, the server throws an error page like “HTTP: 500 Invalid POST Request Exception”.
I tried some methods to resolve this, but they’re not working:
In domcfg.nsf, I mapped the target form called "CustomGeneralErrorForm".
I created a "$$ReturnGeneralError" form to show the error page.
In Notes.ini, I added "HTTPMultiErrorPage=/error.html"
How can I resolve this issue?
I suppose there's no way. I've tried several times to catch that error, but I think the only way is to test the file size with JavaScript. Obviously that works only with HTML5 browsers, as you can find in this post:
Using jQuery, Restricting File Size Before Uploading
So... you have to write code to detect browser features: use JavaScript with HTML5 browsers and find alternative ways for older browsers.
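A minimal sketch of the HTML5 check, assuming a file input with id "upload" and an arbitrary 5 MB limit (neither comes from the original post):

// Reject oversized files client-side using the HTML5 File API.
var MAX_BYTES = 5 * 1024 * 1024; // example limit
document.getElementById('upload').addEventListener('change', function () {
    var file = this.files[0];            // first selected file, if any
    if (file && file.size > MAX_BYTES) {
        alert('File is too large: ' + file.size + ' bytes.');
        this.value = '';                 // clear the selection
    }
});

Older browsers simply skip this check and fall back to the server-side limit.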
As an alternative for older browsers, you can use a Flash plugin and post to server-side code, depending on your backend.
Uploadify (http://www.uploadify.com/) is a good candidate, but do an internet search and choose the one that works best for you.
In this way you can stop large user posts, but if you need to upload large files (more than the 10 MB default) you must set up a secondary Internet Site server document with a greater post size limit.
We've hit a problem with some forms in the admin portion of our web app. There are a handful of forms that contain a large number of fields (it can range anywhere from one input field to the hundreds).
We've found that as these forms grow, there is a point where the server will throw 500 errors when a form is posted.
After running a test, I was able to find that the server can handle forms with 100 fields in them; once 101 or more fields are used, we get the errors.
We run ColdFusion, and we have determined that ColdFusion is not throwing this error. We never see this error logged in ColdFusion, so we are assuming IIS is throwing an error even before it sends the request to the ColdFusion server.
I'm assuming there is some setting in IIS 7.5 where we can raise this limit. I've searched the web, but all I can find is how to raise the byte-size limits on this data, not any kind of limit on the number of fields that are allowed.
So, am I right in assuming that this can be changed, and if so, how can it be done?
This is an issue introduced with hotfix APSB12-06. While it is a ColdFusion error, people have reported receiving the error in Tomcat, before it supposedly hit the CF server.
There is a setting in neo-runtime.xml which defines this post parameter limit, and it defaults to 100.
The full notes are located here, but here is the short version.
This hotfix adds a new setting to ColdFusion, Post Parameter Limit. It limits the number of parameters in a post request; the default value is 100. If a post request contains more parameters than specified, the server doesn't process the request and throws an exception. This protects against DoS attacks using hash collisions. This setting is different from Post Size Limit (ColdFusion Administrator > Settings > Maximum size of post data). It isn't exposed in the ColdFusion Administrator console, but you can easily change the limit in the neo-runtime.xml file, as described below.
Customers who want to change postParametersLimit: go to {ColdFusion-Home}/lib for a Server installation, or {ColdFusion-Home}/WEB-INF/cfusion/lib for a Multiserver or J2EE installation. Open the file neo-runtime.xml and find the line:
<var name='postSizeLimit'><number>100.0</number></var>
Below it, add the following line; you can replace 100 with the desired number:
<var name='postParametersLimit'><number>100.0</number></var>
CF10+ exposes the setting in the CF Administrator under Server Settings -> Settings -> Maximum number of POST request parameters.
On our 9.0.1 server, we just increased the setting up to 10000 and have seen no adverse effects.
I believe you are bumping up against a security feature of ColdFusion. What ColdFusion version are you running? In ColdFusion Security Hotfix APSB12-06 they introduced a fix to protect against DoS attack using Hash Collision. From that page:
This hotfix implements a new setting in ColdFusion, Post Parameter Limit. This limits the number of parameters in a post request. The default value is 100. If a post request contains more parameters as specified, server will not process the request and throws an exception. This is done to protect against DoS attack using Hash Collision. This setting is different from Post Size Limit (ColdFusion Administrator > Settings > Maximum size of post data). We are not exposing this setting in ColdFusion Administrator console, but this limit can be easily changed in neo-runtime.xml file. See point 5 below.
Also on that page are instructions on how to increase that limit. Basically you have to make a change in your neo-runtime.xml file.
Customers who want to change postParameterLimit, go to
{ColdFusion-Home}/lib for Server Installation or
{ColdFusion-Home}/WEB-INF/cfusion/lib for Multiserver or J2EE
installation. Open file neo-runtime.xml, after line:
<var name='postSizeLimit'><number>100.0</number></var>
add the below line and you can change 100 with desired number.
<var name='postParametersLimit'><number>100.0</number></var>
Even if the image is changed, overwritten, or modified, IIS still serves the cached copy.
I am trying to upload an image from a webcam, taken every 15 seconds. The image makes it onto the server, but when I refresh the browser, the image FROM the server does not update.
IIS caches the file apparently for more than 2 minutes. I want this to be in real-time. Tried disabling caching everywhere I could think of. No luck.
Embed your image as follows:
<img src="WebCamImage.aspx?data={auto-generated guid}" ... >

And create a page (WebCamImage.aspx) that streams the static image file back to the browser while ignoring the "data" request parameter, which is only used to avoid any caching (make sure to set the response content type to "image/jpeg" or whatever is adequate in the @ Page directive).
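As a client-side sketch of that idea (the element id and the 15-second interval are assumptions, and a timestamp serves the same purpose as a GUID):

// Re-request the webcam image periodically with a unique query string,
// so neither IIS nor the browser can serve a stale cached copy.
setInterval(function () {
    var img = document.getElementById('webcam');        // assumed <img id="webcam">
    img.src = 'WebCamImage.aspx?data=' + Date.now();    // timestamp as cache-buster
}, 15000);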
Are you sure that the image is cached on the server and not on the client? Have you tried requesting the same image from a different client?
If this IS server-side caching, then this article has all the answers for you:
http://blogs.msdn.com/david.wang/archive/2005/07/07/HOWTO-Use-Kernel-Response-Cache-with-IIS-6.aspx
You are most likely "affected" by the kernel-mode caching.
See that scavenger time?
Scavenger - 120 seconds by default and controlled by the registry key HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\UriScavengerPeriod
That is probably what you are experiencing (2-minute caching).
Try turning kernel-mode caching off to see if it makes a difference (performance may suffer, but it will be no worse than IIS 5).
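If you would rather just shorten the window than disable kernel caching, that registry value can be changed, e.g. (the value is assumed to be in seconds, matching the 120-second default; HTTP.sys needs a restart or a reboot to pick it up):

reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v UriScavengerPeriod /t REG_DWORD /d 15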
I'm writing my own webserver and I don't yet handle concurrent connections properly. I get massive page loading lag due to inappropriately handling concurrent connections (I respond to SYN, but I lose the GET packet somehow. The browser retries after a while, but it takes 3 seconds!) I'm trying to figure out if there's a way to instruct the browser to stop loading things concurrently, because debugging this is taking a long time. The webserver is very stripped down, is not going to be public and is not the main purpose of this application, which is why I'm willing to cut corners in this fashion.
It'd be nice to just limit the concurrent connections to 1, because modifying that parameter using a registry hack for IE and using about:config for Firefox both make things work perfectly.
Any other workaround ideas would be useful, too. A couple I can think of:
1 - Instruct the browser to cache everything with no expiration so the slow loads (.js, .css and image files) happen only once (see the header sketch after this list). I can append a checksum to the end of the file name (img src="/img/blah.png?12345678") to make sure that if I update the file, it's reloaded properly.
2 - Inline the .js and .css in the .html files - but this still doesn't fix the image issue, and it's just plain ugly anyway.
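For workaround 1, the assets would be served with a far-future caching header along these lines (the one-year lifetime is only an example); the checksum in the URL then makes sure a changed file is fetched under a new URL:

Cache-Control: public, max-age=31536000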
I don't believe it's possible to tell a browser like Firefox not to load concurrently, at least not for your users via some HTTP header or the like.
So I never found a way to do this.
My underlying issue was that too many requests were coming in and overflowing my limited receive buffers in EMAC RAM. Overflowing receive buffers = discarded packets. The resolution was to combine all .js and all .css files into one .js and one .css file to get my request count down. I set all image, .js and .css responses to have a year's expiration. The .html pages are set to expire immediately. I wrote a Perl script to append MD5 checksums to the file URLs so changed files are refetched. It works great now: pages load instantly after the first load caches everything.
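The Perl script itself isn't shown here; as an illustration of the same idea in Node.js (the file names are placeholders), it could look like:

// Append an MD5 checksum to each asset reference so a changed file gets a new URL.
const fs = require('fs');
const crypto = require('crypto');

// Short MD5 digest of a file's contents.
function checksum(path) {
    return crypto.createHash('md5').update(fs.readFileSync(path)).digest('hex').slice(0, 8);
}

let html = fs.readFileSync('index.html', 'utf8');
for (const asset of ['style.css', 'app.js']) {
    // String replace swaps only the first occurrence; a real script would be more careful.
    html = html.replace(asset, asset + '?' + checksum(asset));
}
fs.writeFileSync('index.html', html);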