Is there any length limitation on the query string in iOS PhoneGap?
Thanks!
Although the HTTP specification does not define a maximum length, practical limits are imposed by web browser and server software.
Microsoft Internet Explorer (Browser)
Microsoft states that the maximum length of a URL in Internet Explorer is 2,083 characters, with no more than 2,048 characters in the path portion of the URL. In my tests, attempts to use URLs longer than this produced a clear error message in Internet Explorer.
Firefox (Browser)
After 65,536 characters, the location bar no longer displays the URL in Windows Firefox 1.5.x. However, longer URLs will work. I stopped testing after 100,000 characters.
Safari (Browser)
At least 80,000 characters will work. I stopped testing after 80,000 characters.
Opera (Browser)
At least 190,000 characters will work. I stopped testing after 190,000 characters. Opera 9 for Windows continued to display a fully editable, copyable and pasteable URL in the location bar even at 190,000 characters.
Apache (Server)
My early attempts to measure the maximum URL length in web browsers bumped into a server URL length limit of approximately 4,000 characters, after which Apache produces a "413 Entity Too Large" error. I used the current, up-to-date Apache build found in Red Hat Enterprise Linux 4. The official Apache documentation only mentions an 8,192-byte limit on an individual field in a request.
Microsoft Internet Information Server
The default limit is 16,384 characters (yes, Microsoft's web server accepts longer URLs than Microsoft's web browser). This is configurable.
Perl HTTP::Daemon (Server)
Up to 8,000 bytes will work. Those constructing web application servers with Perl's HTTP::Daemon module will encounter a 16,384 byte limit on the combined size of all HTTP request headers. This does not include POST-method form data, file uploads, etc., but it does include the URL. In practice this resulted in a 413 error when a URL was significantly longer than 8,000 characters. This limitation can be easily removed. Look for all occurrences of 16x1024 in Daemon.pm and replace them with a larger value. Of course, this does increase your exposure to denial of service attacks.
Recommendations
Extremely long URLs are usually a mistake. URLs over 2,000 characters will not work in the most popular web browser. Don't use them if you intend your site to work for the majority of Internet users.
Reference: http://www.boutell.com/newfaq/misc/urllength.html
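If you find yourself approaching these limits, the usual workaround is to move the data out of the query string and into a POST body. Below is a minimal sketch of that idea in Python (standard library only); the endpoint and payload are hypothetical, and the 2,000-character threshold is simply the Internet Explorer figure quoted above.

from urllib.parse import urlencode
from urllib.request import Request, urlopen

payload = {"q": "some very long piece of data " * 200}   # hypothetical data
query = urlencode(payload)
base_url = "https://example.com/search"                   # hypothetical endpoint

if len(base_url) + 1 + len(query) <= 2000:
    req = Request(base_url + "?" + query)          # short enough: plain GET
else:
    req = Request(base_url, data=query.encode())   # too long: send it as a POST body
# response = urlopen(req)                          # uncomment to actually send the request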
Related
I'm not quite sure whether this is the right forum to post my question. I'm analyzing web server logs in both Apache and IIS log formats. I want to find evidence of automatic browsing (e.g. web robots, spiders, bots, etc.). I used the Python robot-detection 0.2.8 package for detecting robots in my log files. However, there may be other robots (automatic programs) that have traversed the web site but that robot-detection cannot identify.
So are there any specific clues that can be found in log files (actions that software performs but human users do not)?
Do they follow a specific navigation pattern?
I saw some requests for favicon.ico. Does this indicate automatic browsing?
I found this article with some valuable points.
The article on how to identify robots has some good information. Other things you might consider:
If you see a request for an HTML page, but it isn't followed by requests for the images or script files that the page uses, it's very likely that the request came from a crawler. If you see lots of those from the same IP address, it's almost certainly a crawler. It could be the Lynx browser (text only), but it's more likely a crawler.
It's pretty easy to spot a crawler that scans your entire site very quickly. But some crawlers go more slowly, waiting 5 minutes or more between page requests. If you see multiple requests from the same IP address, spread out over time but at very regular intervals, it's probably a crawler.
Repeated 403 (Forbidden) entries in the log from the same IP. It's rare that a human will suffer through more than a handful of 403 errors before giving up. An unsophisticated crawler will blindly try URLs on the site, even if it gets dozens of 403s.
Repeated 404s from the same IP address. Again, a human will give up after some small number of 404s. A crawler will blindly push on ... "I know there's a good URL in here somewhere."
A user-agent string that isn't one of the major browsers' agent strings. If the user-agent string doesn't look like a browser's user agent string, it's probably a bot. Note that the reverse isn't true; many bots set the user agent string to a known browser user agent string.
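If you want to apply these heuristics to your logs automatically, a rough sketch along these lines may help. It assumes the Apache "combined" log format; the file name, regular expression, and thresholds are arbitrary assumptions you would tune for your own site.

import re
from collections import defaultdict

LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (?P<path>\S+)[^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"')
ASSET = re.compile(r'\.(png|jpe?g|gif|css|js|ico)(\?|$)', re.I)

errors = defaultdict(int)   # 403/404 responses per IP
pages = defaultdict(int)    # page (non-asset) requests per IP
assets = defaultdict(int)   # image/css/script requests per IP
agents = {}                 # last user-agent string seen per IP

with open("access.log") as fh:
    for line in fh:
        m = LINE.match(line)
        if not m:
            continue
        ip, path, status, agent = m.group("ip", "path", "status", "agent")
        agents[ip] = agent
        if status in ("403", "404"):
            errors[ip] += 1
        if ASSET.search(path):
            assets[ip] += 1
        else:
            pages[ip] += 1

for ip in pages:
    # Heuristics from the answer above: lots of 403/404s, or many pages
    # fetched without ever requesting the images/scripts they reference.
    if errors[ip] > 20 or (pages[ip] > 10 and assets[ip] == 0):
        print(ip, agents.get(ip, ""), "looks like a crawler")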
I am designing a web site, spread across many servers, and there are pages where I need to chain up to 4 redirections in a row. I tried a few browsers (Firefox, Chrome, IE) and it seems to work fine.
Apparently, Firefox's default limit is 20 redirections in a row, Chrome's default seems to be 20, and IE8's limit seems to be 10 redirections.
What is the maximum number of HTTP redirections allowed by all major browsers? Is it 10?
Edit:
Why do I need 4 redirections? Basically, the user is in a hotspot, she tries to go to (say) google.com, there is a local captive portal that captures the request and redirects (#1) the user to a local server. The local server checks some things about the user, but if it does not have the data locally, it redirects (#2) the user to the central web site. If the user is already logged in to this central Web site, she gets redirected (#3) to another server (there are different portals depending on the user). Finally, the server checks the user's rights, and if she has the appropriate rights, there is a final (#4) redirection to the local access controller, in order to get access to the appropriate service. Believe me, I tried my best to remove redirections, but I cannot see where this can be optimized.
4 redirections should work in all major browsers. However, consider reducing the number to give users a faster experience. Each redirection requires a round trip between the user and the server (and requires creating a new connection, if it's redirecting to a different server). In total, the latency will be significant, likely annoying your users.
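If you want to verify a chain like this yourself (and see exactly how many hops it takes), here is a small sketch in Python using only the standard library; the starting URL is a placeholder and the hop limit of 10 is an arbitrary choice.

import http.client
from urllib.parse import urlsplit, urljoin

def follow(url, max_hops=10):
    """Print every hop of a redirect chain, stopping at the first non-3xx response."""
    for hop in range(max_hops):
        parts = urlsplit(url)
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parts.netloc)
        path = (parts.path or "/") + ("?" + parts.query if parts.query else "")
        conn.request("GET", path)
        resp = conn.getresponse()
        print(hop, resp.status, url)
        if resp.status not in (301, 302, 303, 307, 308):
            return url                                   # final destination reached
        url = urljoin(url, resp.getheader("Location"))   # Location may be relative
        conn.close()
    raise RuntimeError("gave up after %d redirects" % max_hops)

# follow("http://example.com/start")   # placeholder URL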
I was also curious about this, but couldn't find information about the maximum number of redirections for each browser. So I tested it myself by running this PHP code against each web browser that I have:
<?php
// Redirect back to this same script, incrementing the ?c= counter each time,
// until the browser gives up with a "too many redirects" error.
$count = (int)($_GET["c"] ?? 0);
$next = $count + 1;
header('Location: ' . $_SERVER['PHP_SELF'] . '?c=' . $next);
exit;
The result is:
20 = Firefox
19 = Chrome, Opera, Brave, Opera Mini, Puffin, UC Browser, etc. (probably all Blink-based browsers)
16 = Safari
Question: What is the maximum number of HTTP redirections allowed by all major browsers? Is it 10?
Answer: Based on my tests, the maximum allowed by all major browsers is 16.
But I could not test IE/Edge because I don't have Windows. Since Microsoft Edge uses the Blink engine, I think its maximum is 19 too.
You can test the above code using your own server or this link (mine). The last URL query number shown before the ERR_TOO_MANY_REDIRECTS error appears is the maximum number of redirections.
Having a strange issue - we have a page that uses a query string to get some information, and this query string happens to contain the word "set". When this happens, the page returns a 406 error (Client browser does not accept the MIME type of the requested page).
The URL looks like example.com/folder/file.asp?variable=sunset boulevard. If I change the space to %20 it still returns 406.
On my local machine running IIS 5.x this doesn't happen, and on our test server running IIS 7.x this doesn't happen; it only happens on our production server running IIS 7.x over SSL. Note, however, that a self-signed certificate on my local machine over SSL still doesn't produce the error.
So my question is, what does the "set" keyword in the URL tell IIS to do, and is there an easy way to avoid it happening? I would like to avoid changing the space to a different character if possible.
Does your server have additional filters installed? It smells like the work of an aggressive filter designed to prevent certain types of attack.
We have a fairly high-traffic static site (i.e. no server code), with lots of images, scripts, css, hosted by IIS 7.0
We'd like to turn on some caching to reduce server load, and are considering setting the expiry of web content to some time in the future. In IIS, we can do this at a global level via the "Expire web content" section of the common HTTP headers in the IIS response header module. Perhaps setting content to expire 7 days after serving.
All this actually does is set the max-age HTTP response header, as far as I can tell, which makes sense, I guess.
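For reference (assuming the 7-day setting described above), the header added to each response would look like the line below, since max-age is expressed in seconds and 7 days is 604,800 seconds:

Cache-Control: max-age=604800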
Now, the confusion:
Firstly, all browsers I've checked (IE9, Chrome, FF4) seem to ignore this and still make conditional requests to the server to see if content has changed. So I'm not entirely sure what the max-age response header will actually affect. Could it be older browsers? Or web caches?
It is possible that we may want to change an image on the site at short notice... I'm guessing that if the max-age is actually honoured by something, then by its very nature it won't check whether this image has changed for 7 days... so that's not what we want either.
I wonder if a best practice is to partition one's site into folders of content that really won't change often, and only turn on long-term expiry for those folders? Perhaps varying the query string to force a refresh of content in these folders if needed (e.g. /assets/images/background.png?version=2)?
Anyway, having looked through the (rather dry!) HTTP specification, and some of the tutorials, I still don't really have a feel for what's right in our situation.
Any real-world experience of a situation similar to ours would be most appreciated!
Browsers fetch the HTML first, then all the resources inside (css, javascript, images, etc).
If you make the HTML expire soon (e.g. 1 hour or 1 day) and then make the other resources expire after 1 year, you can have the best of both worlds.
When you need to update an image, or other resource, you just change the name of that file, and update the HTML to match.
The next time the user gets fresh HTML, the browser will see a new URL for that image, and get it fresh, while grabbing all the other resources from a cache.
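One common way to automate the "change the name of that file" step is to derive the published file name from a hash of the file's contents, so the URL changes exactly when the content does. A minimal sketch in Python follows; the paths are hypothetical and the 8-character MD5 prefix is an arbitrary choice.

import hashlib
import shutil
from pathlib import Path

def publish(src, out_dir):
    """Copy an asset to out_dir under a content-hashed name and return that name."""
    data = Path(src).read_bytes()
    digest = hashlib.md5(data).hexdigest()[:8]
    hashed_name = f"{Path(src).stem}.{digest}{Path(src).suffix}"   # e.g. background.3f2a9c1d.png
    shutil.copyfile(src, Path(out_dir) / hashed_name)
    return hashed_name   # reference this name from the freshly generated HTML

# print(publish("assets/images/background.png", "dist/images"))   # hypothetical paths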
Also, at the time of this writing (December 2015), Firefox limits the maximum number of concurrent connections to a server to six (6). This means if you have 30 or more resources that are all hosted on the same website, only 6 are being downloaded at any time until the page is loaded. You can speed this up a bit by using a content delivery network (CDN) so that everything downloads at once.
Is it browser dependent? Also, do different web stacks have different limits on how much data they can get from the request?
RFC 2616 (Hypertext Transfer Protocol, HTTP/1.1) states there is no limit to the length of a query string (section 3.2.1). RFC 3986 (Uniform Resource Identifier, URI) also states there is no limit, but notes that the hostname is limited to 255 characters because of DNS limitations (section 3.2.2).
While the specifications do not specify any maximum length, practical limits are imposed by web browser and server software. Based on research which is unfortunately no longer available on its original site (it now leads to a shady-looking loan site) but which can still be found in the Internet Archive's copy of Boutell.com:
Microsoft Edge (Browser)
The limit appears to be around 81,578 characters. See "URL Length limitation of Microsoft Edge".
Chrome (Browser)
It stops displaying the URL after 64k characters, but can serve more than 100k characters. No further testing was done beyond that.
Firefox (Browser)
After 65,536 characters, the location bar no longer displays the URL in Windows Firefox 1.5.x. However, longer URLs will work. No further testing was done after 100,000 characters.
Safari (Browser)
At least 80,000 characters will work. Testing was not tried beyond that.
Opera (Browser)
At least 190,000 characters will work. Stopped testing after 190,000 characters. Opera 9 for Windows continued to display a fully editable, copyable and pasteable URL in the location bar even at 190,000 characters.
Microsoft Internet Explorer (Browser)
Microsoft states that the maximum length of a URL in Internet Explorer is 2,083 characters, with no more than 2,048 characters in the path portion of the URL. Attempts to use URLs longer than this produced a clear error message in Internet Explorer.
Apache (Server)
Early attempts to measure the maximum URL length in web browsers bumped into a server URL length limit of approximately 4,000 characters, after which Apache produces a "413 Entity Too Large" error. The current, up-to-date Apache build found in Red Hat Enterprise Linux 4 was used. The official Apache documentation only mentions an 8,192-byte limit on an individual field in a request.
Microsoft Internet Information Server (Server)
The default limit is 16,384 characters (yes, Microsoft's web server accepts longer URLs than Microsoft's web browser). This is configurable.
Perl HTTP::Daemon (Server)
Up to 8,000 bytes will work. Those constructing web application servers with Perl's HTTP::Daemon module will encounter a 16,384 byte limit on the combined size of all HTTP request headers. This does not include POST-method form data, file uploads, etc., but it does include the URL. In practice this resulted in a 413 error when a URL was significantly longer than 8,000 characters. This limitation can be easily removed. Look for all occurrences of 16x1024 in Daemon.pm and replace them with a larger value. Of course, this does increase your exposure to denial of service attacks.
Recommended security and performance maximum: 2,048 characters
Although officially there is no limit specified by RFC 2616, many security protocols and recommendations state that maxQueryString on a server should be set to a maximum of 1,024 characters, while the entire URL, including the query string, should be limited to a maximum of 2,048 characters. This is to prevent the Slow HTTP Request DDoS/DoS attack vulnerability on a web server. This typically shows up as a vulnerability on the Qualys Web Application Scanner and other security scanners.
Please see the example code below for Windows IIS servers using Web.config:
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxQueryString="1024" maxUrl="2048">
        <headerLimits>
          <add header="Content-type" sizeLimit="100" />
        </headerLimits>
      </requestLimits>
    </requestFiltering>
  </security>
</system.webServer>
This would also work on a server level using machine.config.
This is just for Windows-based servers; I'm not sure whether there is a similar setting for Apache or other servers.
Note: Limiting query string and URL length may not completely prevent Slow HTTP Request DDoS attacks, but it is one step you can take to prevent them.
Adding a reference as requested in the comments:
https://www.raiseupwa.com/writing-tips/what-is-the-limit-of-query-string-in-asp-net/
Different web stacks do support different lengths of HTTP requests. I know from experience that the early stacks of Safari only supported 4,000 characters and thus had difficulty handling ASP.NET pages because of the USER-STATE. This applied even to POST, so you would have to check the browser and see what the stack limit is. I think you may reach a limit even on newer browsers. I cannot remember exactly, but one of them (IE6, I think) had a 16-bit limit: 32,768 characters or something like that.