Something that seems easy, but I can't find a way to do it. Is it possible to change the header sent in a response
server: ArangoDB
to something else (in order to be less verbose and more secure)?
Also, I need to store a large string (a very long URL plus lots of information) in a document, but what is the max length of a joi.string?
Thx,
The internal string limit in V8 (the JavaScript engine used by ArangoDB) is around 256 MB in the V8 version ArangoDB currently ships. Thus 256 MB is the absolute maximum string length that can be used from JavaScript code executed in ArangoDB.
Regarding maximum URL lengths as mentioned above: URLs should not get too long, because very long URLs may not be portable across browsers. In practice, several browsers enforce URL length limits of around 64 K, so URLs should definitely not get longer than that. I would recommend using much shorter URLs and passing huge payloads in the HTTP request body instead. This also means you may need to change from HTTP GET to HTTP POST or HTTP PUT, but it's at least portable.
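For illustration, a rough sketch of moving the huge value into the body of a POST (shown with a generic Express handler, not ArangoDB-specific; the route and field names are made up):

const express = require('express');
const app = express();

// Accept the large payload in the JSON body instead of the URL.
app.post('/documents', express.json({ limit: '1mb' }), (req, res) => {
  const { longUrl, info } = req.body;   // arbitrarily large values travel in the body
  res.status(201).json({ stored: true });
});

app.listen(3000);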
Finally regarding the HTTP response header "Server: ArangoDB" that is sent by ArangoDB in every HTTP response: starting with ArangoDB 2.8, there is an option to turn this off: --server.hide-product-header true. This option is not available in the stable 2.7 branch yet.
No, there currently is no configuration to disable the server: header in ArangoDB.
I would recommend putting NGiNX or a similar HTTP proxy in front of it to achieve that (and other possible hardening for your service).
The implementation of the Server header can be found in lib/Rest/HttpResponse.cpp.
Regarding Joi -
I only found how to specify a string length in joi, not what its maximum could be.
I guess the general JavaScript limit for strings applies.
However, it seems you shouldn't exceed the common limit of about 2000 characters for URLs, so that should effectively be your limit.
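For example, a sketch of making that limit explicit in a joi schema (the concrete numbers are just illustrative; pick caps that fit your data):

const joi = require('joi');

// A joi string has no documented hard maximum of its own, so cap it explicitly.
const schema = joi.object({
  url: joi.string().max(2000),      // conventional practical URL limit
  info: joi.string().max(65536)     // arbitrary cap for the extra information
});

// Depending on the joi version, validation is schema.validate(value) or joi.validate(value, schema).
const result = schema.validate({ url: 'https://example.com/page', info: 'payload' });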
Related
I want to ignore requests with a big cookie size. We have some requests dropped in Varnish due to "BogoHeader Header too long: Cookie: xyz". How can this be done in VCL? I didn't find any len, length or strlen function in VCL; I know that it can be done in the vcl_recv phase.
The strlen() feature won't help to fix your problem. Varnish discards the request due to the large Cookie header before vcl_recv is executed. If you don't want those requests to be discarded you need to check and adjust some runtime parameters: http_req_hdr_len, http_req_size, http_resp_hdr_len, etc.
In any case, if you are still interested in the strlen() feature, it would be trivial to add it to the std VMOD, but that support doesn't exist at the moment. You could consider using an existing VMOD including utilities like strlen() (or implement it on your own), but that's probably too much work. Finally, you could consider using a hacky approach using just VCL and a regexp:
# Match requests whose Cookie header is 1024 characters or longer
if (req.http.Cookie ~ "^.{1024,}$") {
    ...
}
I am making a request to an ordinary express.js server where it's supposed to parse the param that looks like this:
app.get('/:param', function(req, res) {
// do something
})
This works for 99% of ordinary cases, but when I try to pass a very long parameter (about 10,000 characters) it fails with a 400 error.
The server doesn't give any details beyond the 400 error. I've looked all over the internet, and while there is a limit on URL length, it's way above 10,000 characters, so I don't think that's the reason.
Again, shorter URLs work just fine with exactly the same code. It's long URLs that fail. So my question is:
Am I mistaken about the limits and this is not supposed to be possible?
How can I debug this situation? All I get is 400 error.
Headers received by Node's HTTP server must not exceed 8192 bytes in total (the request line, i.e. the URL, counts toward that total) to prevent possible Denial-of-Service attacks. That is a compile-time constant; you would have to use a custom-compiled version of Node to set that constant larger.
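A back-of-the-envelope sketch of why a ~10,000-character parameter trips that limit (the values are illustrative; only the total size matters):

const param = 'x'.repeat(10000);                        // the long route parameter
const requestLine = 'GET /' + param + ' HTTP/1.1\r\n';  // the URL is part of the request head
const headers = 'Host: example.com\r\nConnection: keep-alive\r\n\r\n';

console.log(Buffer.byteLength(requestLine + headers));  // well above the 8192-byte limit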
I'm trying to get Varnish's cached response to be chunked... (that's possible, right?)
I have the following scenario:
1 - cache is clean, good to go (service varnish restart)
2 - access the www.mywebsite.com/page for the first time
(no content-length is returned, and chunking is there, great!)
3 - the next time I access the page (a simple reload) it will be cached... and now I get this:
(now we have content-length... which means no chunking :( not great!)
After reading some Varnish docs/blogs (and this: http://book.varnish-software.com/4.0/chapters/VCL_Basics.html), looks like there are two "last" returns: return(fetch) or return(deliver).
When forcing a return(fetch), the chunked encoding works... but it also means the response won't be cached, right? return(deliver), on the other hand, caches correctly but adds the Content-Length header.
I've tried adding these to my default.vcl file:
set beresp.do_esi = true; (at vcl_backend_response stage)
and
unset beresp.http.content-length; (at different stages, without success)
So... how can I get Varnish caching to work with Transfer-Encoding: chunked?
Thanks for your attention!
Is there a reason why you want to send it chunked? Chunked transfer encoding is kind of a clumsy workaround for when the content length isn't known ahead of time. What's actually happening here is Varnish is able to compute the length of the gzipped content after caching it for the first time, and so doesn't have to use the workaround! Rest assured that you are not missing out on any performance gains in this scenario.
I want to send a JSON response using Node and Express. I'm trying to compare the performance of res.end and res.json for this purpose.
Version 1: res.json
res.json(anObject);
Version 2: res.end
res.setHeader('Content-Type', 'application/json');
res.end(JSON.stringify(anObject));
Running some benchmarks I can see that the second version is almost 15% faster than the first one. Is there a particular reason I have to use res.json if I want to send a JSON response?
Yes, it is very desirable to use json despite the overhead.
setHeader and end come from the native http module. By using them, you're effectively bypassing a lot of Express's added features, hence the moderate speed bump in a benchmark.
However, benchmarks in isolation don't tell the whole story. json is really just a convenience method that sets the Content-Type and then calls send. send is an extremely useful function because it:
Supports HEAD requests
Sets the appropriate Content-Length header to ensure that the response does not use Transfer-Encoding: chunked, which wastes bandwidth.
Most importantly, provides ETag support automatically, allowing conditional GETs.
The last point is the biggest benefit of json and probably the biggest part of the 15% difference. Express calculates a CRC32 checksum of the JSON string and adds it as the ETag header. This allows a browser making subsequent requests for the same resource to issue a conditional GET (the If-None-Match header), and your server will respond 304 Not Modified if the JSON string is the same, meaning the actual JSON need not be sent over the network again.
This can add up to substantial bandwidth (and thus time) savings. Because the network is a much larger bottleneck than CPU, these savings are almost sure to eclipse the relatively small CPU savings you'd get from skipping json().
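As a sketch of what that buys you in practice (an Express 4-style app; the route is made up for illustration):

const express = require('express');
const app = express();

app.get('/data', (req, res) => {
  // res.json() serializes the object, sets Content-Type and Content-Length,
  // and lets Express attach an ETag computed from the body.
  res.json({ hello: 'world' });
});

app.listen(3000);

// First request:   GET /data                         -> 200 with the body and an ETag header
// Repeat request:  GET /data + If-None-Match: <etag> -> 304 Not Modified, body not resent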
Finally, there's also the issue of bugs. Your "version 2" example has a bug.
JSON is stringified as UTF-8, and Chrome (contrary to spec) does not default to handling application/json responses as UTF-8; you need to supply a charset. This means non-ASCII characters will be mangled in Chrome. This issue has already been discovered by Express users, and Express sets the proper header for you.
This is one of the many reasons to be careful of premature/micro-optimization. You run the very real risk of introducing bugs.
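If you do stick with the res.end() approach, a minimal sketch of the fix is to declare the charset yourself, mirroring what Express sets for you:

app.get('/data', (req, res) => {
  // Explicit charset so Chrome decodes non-ASCII characters correctly.
  res.setHeader('Content-Type', 'application/json; charset=utf-8');
  res.end(JSON.stringify(anObject));
});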
What is the maximum size of a web browser's cookie's key?
I know the maximum size of a cookie is 4KB, but does the key have a limitation as well?
The 4K limit you read about is for the entire cookie, including name, value, expiry date etc. If you want to support most browsers, I suggest keeping the name under 4000 bytes, and the overall cookie size under 4093 bytes.
One thing to be careful of: if the name is too big you cannot delete the cookie (at least in JavaScript). A cookie is deleted by updating it and setting it to expire. If the name is too big, say 4090 bytes, I found that I could not set an expiry date. I only looked into this out of interest, not that I plan to have a name that big.
To read more about it, here are the "Browser Cookie Limits" for common browsers.
While on the subject, if you want to support most browsers, then do not exceed 50 cookies per domain, and 4093 bytes per domain. That is, the size of all cookies should not exceed 4093 bytes.
This means you can have 1 cookie of 4093 bytes, or 2 cookies of 2045 bytes, etc.
I used to say 4095 bytes due to IE7, however now Mobile Safari comes in with 4096 bytes with a 3 byte overhead per cookie, so 4093 bytes max.
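If you want to stay on the safe side programmatically, here is a rough browser-side sketch (the 4093-byte figure is the one discussed above; the function and variable names are made up):

// Estimate the encoded size of a name=value pair before writing it.
function cookieFits(name, value, limit) {
  var pair = encodeURIComponent(name) + '=' + encodeURIComponent(value);
  return pair.length <= (limit || 4093);
}

if (cookieFits('session', longValue)) {
  document.cookie = 'session=' + encodeURIComponent(longValue) + '; path=/';
}

// Deleting a cookie means re-setting it with an expiry in the past,
// which is why an oversized name can make a cookie impossible to delete.
document.cookie = 'session=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';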
Actually, RFC 2965, the document that defines how cookies work, specifies that there should be no maximum length of a cookie's key or value size, and encourages implementations to support arbitrarily large cookies. Each browser's implementation maximum will necessarily be different, so consult individual browser documentation.
See section 5.3, "Implementation Limits", in the RFC.
Not entirely a direct answer to the original question, but for the curious who want a quick visual sense of how much fits in a cookie without implementing a complex limiter algorithm: this string is 4096 ASCII characters (bytes):
"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqr
stuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn"
You can also use Web Storage if the app's requirements allow it (it is supported in IE8+).
It has 5 MB (most browsers) or 10 MB (IE) of storage at its disposal.
"Web Storage (Second Edition)" is the API and "HTML5 Local Storage" is a quick start.
A cookie key (used to identify a session) and a cookie are the same thing being used in different ways, so the limit would be the same. According to Microsoft, it's 4096 bytes.
MSDN
cookies are usually limited to 4096 bytes and you can't store more than 20 cookies per site. By using a single cookie with subkeys, you use fewer of those 20 cookies that your site is allotted. In addition, a single cookie takes up about 50 characters for overhead (expiration information, and so on), plus the length of the value that you store in it, all of which counts toward the 4096-byte limit. If you store five subkeys instead of five separate cookies, you save the overhead of the separate cookies and can save around 200 bytes.
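A quick sketch of the "subkeys" idea from that quote (the key names are invented for illustration):

// Pack several values into one cookie instead of spending several of the
// per-domain cookie slots and paying the per-cookie overhead each time.
var subkeys = { theme: 'dark', lang: 'en', tz: 'UTC' };
var packed = Object.keys(subkeys)
  .map(function (k) { return encodeURIComponent(k) + '=' + encodeURIComponent(subkeys[k]); })
  .join('&');
document.cookie = 'prefs=' + encodeURIComponent(packed) + '; path=/';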