I am trying to use NestJS to develop an API. I am getting HTTP error code 431 when the bearer token passed as an HTTP header is too long (around 2400 characters), but it works when the token is around 1200 characters. Is there any setting we can use to increase the header size limit? I am using Node.js 12.
The HTTP 431 Request Header Fields Too Large response status code indicates that the server refuses to process the request because the request’s HTTP headers are too long. The request may be resubmitted after reducing the size of the request headers.
431 can be used when the total size of request headers is too large, or when a single header field is too large. To help those running into this error, indicate which of the two is the problem in the response body — ideally, also include which headers are too large. This lets users attempt to fix the problem, such as by clearing their cookies.
Servers will often produce this status if:
The Referer URL is too long
There are too many Cookies sent in the request
The solution below is not specific to NestJS; it applies to any Node.js server.
On running node --help, you'll see that one of the flags is:
...
--max-http-header-size=... set the maximum size of HTTP headers (default: 8KB)
...
This Node.js CLI flag can help:
--max-http-header-size=16384
It sets the maximum HTTP header size to 16 KB.
You can set the flag to whatever value you need.
See the Node.js CLI documentation for reference.
The documentation also notes the maximum size allowed for this flag, so keep that in mind.
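For example, assuming the compiled entry point of the NestJS app is dist/main.js (an illustrative path, adjust it to your setup), the server could be started with:

node --max-http-header-size=16384 dist/main.js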
Related
curl -kvs -X POST https://xxxx-xxxx.azure-api.net/*******/reports -k -H "account: ********" -H "Ocp-Apim-Subscription-Key: ***********"
On making this curl request, I'm getting the error HTTP/1.1 411 Length Required.
I know we can fix this by adding the Content-Length header to the curl request, but can we do something at the Azure APIM level to fix it?
Thanks in advance.
As per the HTTP/1.1 protocol, a request SHOULD supply the Content-Length header when an HTTP request is made to an endpoint. The following is an extract from the standard:
The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET. Applications SHOULD use this field to indicate the transfer-length of the message-body, unless this is prohibited by the rules in section 4.4.
Content-Length = "Content-Length" ":" 1*DIGIT
An example is
Content-Length: 3495
This header is used as part of the logic that determines the length of the message. That logic is explained in Section 4.4 of RFC 2616; you can read about it at RFC 2616 Section 4.4, Message Length.
If you have used an API testing tool like Postman, you will have seen that it automatically adds this header to the requests that you send.
The same is true of the Azure API Management developer portal: if you inspect the network traffic originating from the portal, you will see the Content-Length header added to the request.
In a nutshell, you should not avoid sending the Content-Length header.
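For illustration, either of the following adjustments to the curl call from the question supplies the missing header (the URL and header values are the redacted ones from the question):

# Declare an empty body; curl then computes and sends Content-Length: 0 itself
curl -kvs -X POST https://xxxx-xxxx.azure-api.net/*******/reports -k -H "account: ********" -H "Ocp-Apim-Subscription-Key: ***********" -d ""

# Or set the header explicitly
curl -kvs -X POST https://xxxx-xxxx.azure-api.net/*******/reports -k -H "account: ********" -H "Ocp-Apim-Subscription-Key: ***********" -H "Content-Length: 0"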
I am studying Node.js and have learned about streams and buffers. I found them very interesting because it occurred to me to use them on an HTTP server: if a request is too big, the data can be sent little by little, especially when using the POST verb to upload big files, for example. The problem is that not all data (files, in this case) is very big. Is there a way to know the size of a request before it reaches its destination?
From the receiving end, you can look at the Content-Length header.
It may or may not be present, depending upon the sender. For info about that, see Is the Content-Length header required for a HTTP/1.0 response?.
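A minimal sketch of checking it on the receiving end, assuming a plain Node.js http server (remember the header is only a declaration by the sender and may be absent, e.g. with chunked transfer encoding):

const http = require('http');

const server = http.createServer((req, res) => {
  // The declared body size, if the sender provided one.
  const declared = req.headers['content-length'];

  if (declared !== undefined) {
    console.log(`Sender declares a body of ${declared} bytes`);
  } else {
    console.log('No Content-Length header; size is unknown until the stream ends');
  }

  // The body still arrives as a stream, chunk by chunk.
  let received = 0;
  req.on('data', (chunk) => { received += chunk.length; });
  req.on('end', () => res.end(`Received ${received} bytes\n`));
});

server.listen(3000);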
I'm trying to figure out how to make Google App Engine (standard environment) apply compression to the output of my Next.js/Node.js/Express application.
As far as I've gathered, the problem is that
1) Google's load balancer strips from the request the headers indicating that the client supports compression, and thus app.use(compression()) in server.js won't do anything. I've tried to force compression using a {filter: shouldCompress} function (a sketch of that attempt is shown after this list), but it doesn't seem to matter, since Google's front end still returns an uncompressed result. (Locally, compression works fine.)
2) How and when Google's load balancer chooses to apply compression is a mystery to me. (And particularly, why not to my silly but large application/javascript content :))
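For reference, the forced-compression attempt from point 1 looks roughly like this (shouldCompress is my own illustrative filter, not a library API):

const express = require('express');
const compression = require('compression');

const app = express();

// Always opt in to compression from the middleware's filter hook.
// Even so, the middleware still negotiates against the Accept-Encoding
// request header, which App Engine's front end strips, so responses
// still come back uncompressed.
function shouldCompress(req, res) {
  return true;
}

app.use(compression({ filter: shouldCompress }));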
Here's what they say in the docs:
If the client sends HTTP headers with the original request indicating that the client can accept compressed (gzipped) content, App Engine compresses the handler response data automatically and attaches the appropriate response headers. It uses both the Accept-Encoding and User-Agent request headers to determine if the client can reliably receive compressed responses.
How Requests are Handled: Response Compression
So there's that. I'd love to use App Engine for this project but when index.js is 700KB instead of a compressed 200KB, it's kind of a showstopper.
As per the Request Headers and Responses documentation for Node.js, the Accept-Encoding header is removed from the request for security purposes.
Note: Entity headers (headers relating to the request body) are not sanitized or checked, so applications should not rely on them. In particular, the Content-MD5 request header is sent unmodified to the application, so may not match the MD5 hash of the content. Also, the Content-Encoding request header is not checked by the server, so if the client sends a gzipped request body, it will be sent in compressed form to the application.
Also note the response on the Google Groups thread, which states:
Today, we are not passing through the Accept-Encoding header, so it is not possible for your middleware to decide that it should compress.
We will roll out a fix for this in the next few weeks.
I am trying to benchmark the Node.js Ghost blogging platform with JMeter. I want to create a test plan that just signs in and then creates and publishes a post.
My problem is that I do not get any session cookies, so every request to the backend fails. I have already tried changing the CookieManager settings in the user.properties file.
I tried the following configuration:
CookieManager.check.cookies=false
CookieManager.delete_null_cookies=false
CookieManager.save.cookies=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.requestHeaders=true
This is the results tree (on the left side you can see my test plan setup):
I don't think Ghost uses cookies at all; the errors you're seeing are most likely due to a failed login.
Looking into the response to the first request, it seems Ghost uses OAuth authentication.
So you need to do the following:
Extract the access_token value from the response to the /ghost/api/v0.1/authentication/token request. You can do this with a JSON Path PostProcessor (a sketch is shown below).
Configure an HTTP Header Manager for the subsequent requests to send an Authorization header with the value Bearer ${access_token}
The whole process of extracting dynamic content from a previous request, converting it into a JMeter variable, and adding it as a parameter to the next request is known as correlation.
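For illustration, assuming the token endpoint returns a standard OAuth-style JSON body (the exact field layout may differ between Ghost versions), the extraction and header setup look roughly like this:

Token response (assumed shape):  {"access_token": "<token value>", "token_type": "Bearer", ...}

JSON Path PostProcessor:
    Variable name:  access_token
    JSON Path:      $.access_token

HTTP Header Manager (applied to the subsequent requests):
    Authorization:  Bearer ${access_token}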
I know that the default Varnish vcl_fetch looks at beresp.ttl and beresp.http.* to reference the HTTP headers returned from the backend, but is it possible to examine the content of the response also? Our backend sometimes fails with junk HTML but with a status of 200 OK. We'd like to be able to run a regex on the result and retry if possible.
I understand that versions of Varnish <= 3.0 don't stream anyway and download the entire object before passing to the client, but I can't find the appropriate field in beresp in the documentation - I'm looking for something like beresp.http.content
Yes and no. It's accessible, but only through inline C, not VCL configuration (to the best of my knowledge). However, it's not easy to do and not really recommended due to the additional overhead of parsing body text. That said, you can see an attempt at something like what you're looking for here: rewrite vmod for varnish 3
If your junk HTML responses are of a specific length, you can retry the request based on the response's Content-Length header (a sketch of that approach is below). Alternatively, you might consider adding client-side JS to evaluate the HTML and make an AJAX request to a URL that clears the cache of any junk pages. Lastly, if you know that only a specific subset of your site returns invalid results, you can try proxying those URLs through something like OpenResty with LuaJIT or nginx with the subs module enabled, and do the body parsing there.
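As a rough sketch of the Content-Length approach in Varnish 3 VCL (the 1024-byte threshold and the restart limit are assumptions you would tune to your particular junk responses):

import std;

sub vcl_fetch {
    # Junk pages are assumed here to be unusually small 200 responses.
    # Note: this only works when the backend actually sends a Content-Length header.
    if (beresp.status == 200 &&
        std.integer(beresp.http.Content-Length, 0) < 1024 &&
        req.restarts < 2) {
        # Restart the transaction instead of caching/delivering the junk page.
        return (restart);
    }
}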