Failed to load resource: the server responded with a status of 431 (Request Header Fields Too Large) - node.js

When I try to use my Node.js API from the React app (I'm building a MERN stack app), I get the error mentioned in the question:
"Failed to load resource: the server responded with a status of 431 (Request Header Fields Too Large)"
The API works fine from Postman.
const onSubmit = async (e) => {
  e.preventDefault();
  if (password !== password2) {
    console.log('passwords dont match');
  } else {
    const newUser = {
      name: name,
      email: email,
      password: password
    };
    try {
      const config = {
        headers: {
          'Content-Type': 'application/json'
        }
      };
      const body = JSON.stringify(newUser);
      // axios has been set up with a proxy (http://localhost:3000),
      // so we don't need to prepend it to the URL
      const res = await axios.post('/api/users', body, config);
      console.log(res.data);
    } catch (error) {
      console.error(error.response.data);
    }
  }
};

The HTTP 431 Request Header Fields Too Large response status code indicates that the server refuses to process the request because the request's HTTP headers are too long. The request may be resubmitted after reducing the size of the request headers.
431 can be used when the total size of request headers is too large, or when a single header field is too large. To help those running into this error, indicate which of the two is the problem in the response body; ideally, also include which headers are too large. This lets users attempt to fix the problem, such as by clearing their cookies.
Servers will often produce this status if:
The Referer URL is too long
There are too many Cookies sent in the request
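As an illustration of that advice, here is a minimal sketch (not from any answer here; the 4 KB cookie limit is an arbitrary choice for the example) of a raw Node server that names the offending header in its 431 body:

const http = require('http');

const MAX_COOKIE_BYTES = 4096; // arbitrary illustrative limit

http.createServer((req, res) => {
  const cookie = req.headers.cookie || '';
  if (Buffer.byteLength(cookie) > MAX_COOKIE_BYTES) {
    // tell the client which header is too large, so the user can fix it
    res.writeHead(431, { 'Content-Type': 'text/plain' });
    res.end('Request Header Fields Too Large: the Cookie header exceeds ' +
      MAX_COOKIE_BYTES + ' bytes; try clearing your cookies.');
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
}).listen(3000);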
In my case I was sending too many cookies, because localhost:4200 was used as the domain of 3 different projects ... The solution: delete the useless cookies.
Hope this helps...

I was getting this error when I accidentally used the same local port in the proxy destination as the port of the React app. This created an internal forwarding loop, resulting in "Request Header Fields Too Large".

I was getting that error when I forgot to run my server first, before running the React app.
I used a simple knex.js and express.js based back end and forgot to initialise it before starting React. Now it all works fine.

Clear your browser cache, or open the page in an incognito window.

Go to Chrome DevTools > Application > Cookies, clear the cookies, and you are ready to rock!
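For what it's worth, the same cleanup can be scripted from the DevTools console; a rough sketch (it cannot touch HttpOnly cookies, which still need the Application tab):

// expire every cookie visible to the current page
document.cookie.split(';').forEach(function (c) {
  var name = c.trim().split('=')[0];
  document.cookie = name + '=;expires=Thu, 01 Jan 1970 00:00:00 GMT;path=/';
});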

You must be following Traversy Media!
Remove the proxy statement from package.json and write the full Node URL in the axios call.
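A sketch of what that suggestion looks like, assuming the API listens on port 5000 (adjust to your server's actual port): inside the onSubmit handler above, replace the relative URL with the absolute one.

// instead of axios.post('/api/users', body, config):
const res = await axios.post('http://localhost:5000/api/users', body, config);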

After I used chrispytoe's suggestion to debug, I knew the problem was related to a wrong URL, not to axios or React. To be more specific, something related to the URL on the server side.
Here's what I would suggest to help debug. On the server side, in that route, do console.log(req.headers). Then make the request from Postman, then make it from your React app, and see what the differences are. – chrispytoe
How I fixed this problem was to make sure the proxy port in my client's package.json is the same as the port in my server.js. Add the line below to the end of your client's package.json (before the last "}"):
"proxy": "http://localhost:5500"
Next, set the port in your server.js to the same value as the proxy's port:
const PORT = process.env.PORT || 5500;
In addition, it doesn't matter what port you configure your client itself to run on; that has nothing to do with the server and its proxy.
Hope my 2 cents help. Please feel free to correct me. Thanks.

This error arises when the API server port you entered in your front-end proxy (e.g. "proxy": "http://localhost:5000/") isn't the same as the port the server is actually listening on.

When you get back to the front end and start building with React hooks, you'll have to do a bit of jumping around on a PC.
Install the cross-env lib:
npm i --save-dev cross-env
Alter the start section in your package.json:
"start": "cross-env PORT=8000 react-scripts start",
This lets you declare the port the project starts on, to avoid conflicts with other tools, in this case create-react-app.

The 431 HTTP response status code is sent by the server when the client's HTTP headers are larger than the HTTP header limit the server accepts. The maximum HTTP header sizes for well-known web servers are provided here.
This also makes complete sense of the API working fine from Postman but not from the browser: there are no residual cookies present in Postman.
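If trimming cookies is not an option, Node's own limit can also be raised. A sketch, assuming Node 13.3+ where http.createServer accepts a maxHeaderSize option (the same limit is exposed as the --max-http-header-size CLI flag; the default is 16 KB on recent Node versions, 8 KB on older ones):

const http = require('http');

// double the default request-header budget for this server only
http.createServer({ maxHeaderSize: 32768 }, (req, res) => {
  res.end('ok');
}).listen(5000);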

Related

angular universal https problems

I have an Angular Universal app set up. I do POST requests on the server side using localhost to pre-render my app, and this works fine.
An example working URL would be http://localhost:8000/api/get-info.
I've now put the app into production on an external URL (Apache server). I'm also using SSL.
Now when I try to do a POST request on the server side to pre-render my app, I get back a response with status: 0, url: null (I'm assuming this means the connection was refused).
An example non-working URL would be https://mywebsite.com/api/get-info.
What really stumps me is that when the app loads on the client, all HTTPS requests start working. So the problem is that I cannot get the Express server to send POST requests to my external URL.
I've tested a POST request on the server side to a different website (Twitter), and that seems to work fine as well, so I'm not entirely sure where I've gone wrong.
I already have CORS set to '*' as well.
Try using
http://localhost:8000/api/get-info
in production as well. Since your Angular app is rendered on the same server your API is running on, using localhost should work just fine. It doesn't matter that you are on an external URL.
I do something similar (it's a GET, but that shouldn't matter) with my translations:
if (this.isServer) {
  translateLoader.setUrl('http://localhost:4000/assets/localization/');
} else {
  translateLoader.setUrl('assets/localization/');
}
It works locally and in production (both server and client).
I just wrestled with this problem myself for two days. Please take a look at my comment on https://github.com/angular/universal/issues/856#issuecomment-426254727.
Basically, I added a conditional check in Angular to see whether the app is running in the browser or on the server (rendered by Angular Universal), and I change my API endpoint to the actual IP over HTTPS or to localhost over HTTP accordingly. Also, in my Nginx settings, I only redirect incoming requests from the browser to HTTPS, by checking whether the server_name is localhost.
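A framework-agnostic sketch of that conditional check (the endpoint URLs are assumptions based on the question; Angular projects would typically use isPlatformBrowser for the same test):

// during Universal pre-rendering there is no window object
const isServer = typeof window === 'undefined';

const apiBase = isServer
  ? 'http://localhost:8000/api'   // server-side render: same machine, plain HTTP
  : 'https://mywebsite.com/api';  // browser: public HTTPS endpoint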
Hope it helps!

request to nodejs proxy: Provisional headers are shown

In my web app project, I came across a cross-domain issue. The back-end API was deployed to the server, which provides the IP and port (we can assume the back-end API works well). I am working on the front-end side, still in the local development stage, so there is a cross-domain issue.
To overcome it I used a Node.js proxy. For the Node.js proxy part, I am not an expert; I just got it from my teammates, who had met the same issue before.
In my client-side JS code, I use the axios library to send the HTTP request to the Node.js proxy as follows:
axios.get('http://127.0.0.1:8888/service/delete/v1?id=5').then((response) => {
  console.log(response);
}).catch((error) => {
  console.log(error);
});
The Node.js proxy then replaces the 127.0.0.1:8888 part with the real API's IP and port, and sends the request on. That's my understanding of it.
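For reference, such a forwarding proxy can be a few lines with the http-proxy module. A sketch with a placeholder target (I don't know the real API's address):

var httpProxy = require('http-proxy');

// everything received on 8888 is re-sent to the real API host;
// 'http://REAL-API-IP:PORT' is a placeholder, not a working address
httpProxy.createProxyServer({
  target: 'http://REAL-API-IP:PORT',
  changeOrigin: true // rewrite the Host header to match the target
}).listen(8888);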
So I run the Node.js proxy, which listens on port 8888, and run my front-end code in another console. But when I send the above-mentioned request, I get the following error:
GET http://127.0.0.1:8888/service/delete/v1?id=5 net::ERR_CONNECTION_TIMED_OUT
Error: Network Error
at createError (createError.js?f777:16)
at XMLHttpRequest.handleError (xhr.js?14ed:87)
I dug into the error in the Chrome DevTools and found Provisional headers are shown in the request headers.
I searched some previous articles about this issue; they say the potential reason is that the request is blocked.
But I can successfully send other HTTP requests to third-party APIs, for example my mock data service Mockaroo:
const key = 'mykeyxxx';
const url = `http://www.mockaroo.com/api/generate.json?schema=firstapis&key=${key}`;

axios.get(url).then((response) => {
  console.log(response);
});
So only the request through the proxy is blocked. I am very confused. My guess is that the issue is in the Node.js proxy part. Right?

Getting proxy error when using "gulp serve" when API returns 204 with no content

I am using Gulp to develop an Angular application generated by Yeoman's gulp-angular generator. I have configured it to proxy requests to /api to another port, the port my API is listening on. That port is actually forwarded via an SSH tunnel to an external server.
Here is the config generated by Yeoman that I have edited for my own API:
gulp/server.js
'use strict';

var gulp = require('gulp');
var browserSync = require('browser-sync');
var httpProxy = require('http-proxy');

/* This configuration allows you to configure browser-sync to proxy your back end */
var proxyTarget = 'http://localhost:3434/api'; // the location of your back end
var proxyApiPrefix = 'api'; // the element in the URL which differentiates between an API request and a static file request

var proxy = httpProxy.createProxyServer({
  target: proxyTarget
});

function proxyMiddleware(req, res, next) {
  if (req.url.indexOf(proxyApiPrefix) !== -1) {
    proxy.web(req, res);
  } else {
    next();
  }
}
// ...rest of config truncated
stdout
[BS] Watching files...
/Users/jason/dev/web/node_modules/http-proxy/lib/http-proxy/index.js:114
throw err;
^
Error: Parse Error
at Socket.socketOnData (http.js:1583:20)
at TCP.onread (net.js:527:27)
I get the above error when my application attempts to hit a particular API URL which sends back a response of 204, no content.
URL structure: POST /api/resource/delete
(The API doesn't support the actual DELETE HTTP method, so we POST to this endpoint.)
Response: 204 No Content
The API is also in development and is being served via the built-in PHP web server. What the server is telling us is that the client (Node in this case, because it is the proxy) is hanging up before PHP can send the response.
I thought perhaps it was just choking on the fact that there was no content, so we created a second endpoint that also returned 204 No Content and it seemed to work fine. But, to be fair, this issue appears to be intermittent: it works sometimes and sometimes it does not. It's very confusing.
As far as we can tell, it only happens on this delete URL. I am pretty new to Node and am having a very hard time figuring out what the issue is or where to look. Does anyone have any clues, or has anyone seen this before?
It turns out that the developer of the API was sending me content along with his 204 when he shouldn't have been: some debug code left in. The HTTP parser that http-proxy uses was then reading that content from the buffer at the beginning of the subsequent request, and throwing an error because it wasn't seeing a properly formed HTTP request, since the first thing in the buffer was a PHP var_dump.
As it happens, my front-end app does the delete call and then refreshes another object via a GET request. They happen so fast that it seemed like the DELETE call killed the gulp server, when it was actually the GET command afterwards.
The http-proxy module for node explicitly does not do error handling, leaving the onus on the end user. If you don't handle an error, it bubbles up into an uncaught exception and will cause the application to close, as I was seeing.
So, the fix was simply:
gulp/server.js
var proxy = httpProxy.createProxyServer({
  target: proxyTarget
}).on('error', function (e) {
  console.log(JSON.stringify(e, null, ' '));
});
The console will now log all proxy errors, but the process won't die and subsequent requests will continue to be served as expected.
For the error in question, the console output is:
{
  "bytesParsed": 191,
  "code": "HPE_INVALID_CONSTANT"
}
Additionally, we've fixed the API to honor its 204 and actually, you know, not send content.

Redirect an http request from a remote server to a local server using nodejs

There is a feature in a tool called charles that allows you to map remote requests:
http://www.charlesproxy.com/documentation/tools/map-remote/
Basically, it can take any request to a server (even if you're not the one running it) and then make a new request to another server, preserving the path and the query string. The response from the second server then overwrites the response from the first server.
I just want to know if there is a Node module that can do this. I tried using http-proxy, but I have a feeling this map-remote tool is a bit different from a proxy, since it seems like you must own both servers with a proxy.
EDIT: Tried using the http-proxy node module again, but can't seem to get it to work. Here's my code:
var http = require('http'),
    httpProxy = require('http-proxy');

httpProxy.createServer({
  hostnameOnly: true,
  router: {
    'www.stackoverflow.com': 'localhost:9000'
  }
}).listen(80);

// Create your target server
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('request successfully proxied!' + '\n' + JSON.stringify(req.headers, true, 2));
  res.end();
}).listen(9000);
My expectation is that when I go to www.stackoverflow.com or www.stackoverflow.com:80, it will instead redirect to my localhost:9000
No, what you are asking for is indeed a simple proxy. And no, you don't have to "own" both servers to run a proxy. You simply proxy the request, and at that point you can modify the data however you wish.
The proxy module you mention will work fine, and there are many others. You can also do this with simple Nginx config if you wish.
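For example, a minimal map-remote-style setup with http-proxy might look like the following sketch (the target is assumed to be the local server from your snippet):

var http = require('http'),
    httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  // req.url already carries the path and query string, so both are preserved
  proxy.web(req, res, { target: 'http://localhost:9000' });
}).listen(8080);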
I made this pastebin with my solution:
http://pastebin.com/TfG67j1x
Save the content of the pastebin as proxy.js and make sure you install the dependencies in the same folder as the proxy.js file (npm install http-proxy colors connect util --save).
When you run the proxy, it will:
start a new server listening on 8013, acting as a proxy server;
start a demo target server listening at 9013.
When you access the demo target through the proxy, it rewrites the string "Ruby" to "nodejitsu", for easy testing. If you are behind a corporate firewall/proxy, this script fails for now.
UPDATE: The problem with "headers already sent" was at lines 32/33. It turns out that several errors occurred on the same connection: when the first error occurs, the headers get sent; when the second error occurs, the headers have already been sent. As a result a "headers already sent" exception is raised and the server is killed.
With this fix the server no longer dies, but it still does not fix the source of the error, which is that Node.js cannot reach your target site. You must be behind another proxy/firewall, and Node.js would need to forward the HTTP request to a second proxy. If you normally use a proxy in your browser to connect to the Internet, my solution will fail. You did not specify this to be a requirement, though.
You may verify this by accessing, through my proxy, a server inside your network (one that normally needs no proxy).
UPDATE 2: You should not try to access http://localhost:8013 directly; instead, set it as the proxy in your browser. Take note of your original browser proxy settings (see above). Then try to access http://localhost:9013.
Did you add that proxy to your browser config? Otherwise the underlying OS will route your request directly to www.stackoverflow.com and there is no way your proxy can catch it.
Could you confirm that www.stackoverflow.com is ending up at your Node app at all? The name currently resolves to the IP address that leads you to this website, so you would have to make sure that the name now resolves to your Node app. In this case, that probably means editing your hosts file.
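For example, a hosts-file entry of that kind (the file is /etc/hosts on most systems, C:\Windows\System32\drivers\etc\hosts on Windows) would be:

127.0.0.1 www.stackoverflow.com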

Socket.io + Express CORS Error on localhost (not allowed by Access-Control-Allow-Origin)

I have a working Node.js Express server to which I would like to add socket.io support (allowing JavaScript clients to connect via socket.io). I can connect to the Express server via a JavaScript $.get(), but the io.connect() call fails due to a CORS error.
My testing machine is OS X with Apache serving the client, so port 80 is taken, and Node.js/Express runs on port 8888. I added socket.io per the documentation:
var exp = express();
var server = require('http').createServer(api.server);
exp.listen(8888);

var io = require('socket.io').listen(server);
io.sockets.on('connection', function (socket) {
  console.log('connection');
});
I properly see "info: socket.io started" in my node.js logs.
Then, on the client, I attempt to connect to the server...
this.socket = io.connect('http://localhost:8888');
this.socket.on('connect', function () {
  socket.emit('install', 'test');
});
However, I'm getting a CORS error in the console in Chrome:
XMLHttpRequest cannot load http://localhost:8888/socket.io/1/?t=1358715637192. Origin http://localhost is not allowed by Access-Control-Allow-Origin.
HOWEVER, THIS works fine!
$.get('http://localhost:8888', function (e, d) {
  console.log(e, d);
});
So I double-checked my headers, for both localhost:8888 and localhost; both properly return the header which (should) allow the cross-domain requests:
Access-Control-Allow-Origin: *
Any ideas?
CORS is a very tricky thing to get working (or at least it was for me). I recommend this resource: http://enable-cors.org/
Following what they do very carefully helped me. I also found that different browsers give different visibility over the CORS requests/responses, which helped.
I found that Chrome was easier to get working than Firefox, but Firefox's tools, such as Firebug, were quite nice to work with.
My gut feeling from your information is that you might need your request to carry an X-Requested-With header.
I also found that using Fiddler to send the HTTP requests allowed me to narrow my problems down to the server side initially and get that working. Browsers enforce CORS, but something like Fiddler doesn't, and thus it provides another way of inspecting what is happening.
I definitely recommend trying to break the problem in half so that you can see whether it is the server side or the client side that is not behaving as you expect.
My problem was related to returning the same CORS response for the OPTIONS request as for the POST or GET. That was wrong: Chrome allowed it, Firefox didn't. Any OPTIONS request that is sent out will be sent once, then in the future it will be cached and not resent (which caused a lot of confusion for me initially). For the OPTIONS request you just need a standard response saying it's OK to proceed; then, in the POST or GET response, I believe you want your CORS headers there only.
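To make that last point concrete, here is a sketch (my assumption of the shape, assuming Express 4, not the answerer's actual code) of answering the preflight separately from the real route:

var express = require('express');
var app = express();

// the preflight only needs the permissive headers and an empty 204
app.options('*', function (req, res) {
  res.set({
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type, X-Requested-With'
  });
  res.sendStatus(204);
});

// the real route only needs Allow-Origin on its own response
app.get('/', function (req, res) {
  res.set('Access-Control-Allow-Origin', '*');
  res.send('ok');
});

app.listen(8888);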
