Apache & Node Reverse proxy, Socket Timeout, Keepalive - node.js

I have an API application built with Node and Express, with Apache acting as a reverse proxy in front of it, keep-alive enabled.
Some requests (specifically POST/PUT) end up hanging for 2 minutes because of the default 2-minute socket timeout. It doesn't happen every time, but often. As soon as the timeout is hit, the client finally gets the response and continues sending other requests.
It seems to be due to Keep-Alive, although I'm not 100% sure.
Adding the header:
res.set('Connection', 'close');
makes the problem go away, which is why I think this is related to keep-alive.
I've been researching the issue for two days with no success.
Is it worth setting the header and accepting the consequences, or is there another solution/explanation for this behaviour?

It turns out this was all caused by a "204 - No Content" response on DELETE requests sent before the PUT/POST requests. A 204 response must not carry a body, so sending one leaves the keep-alive connection in a bad state. Changing
res.send(data)
to
res.status(204).end()
fixed the issue for me.
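For reference, a minimal Express sketch of that change; the route path and the deleteItem() helper are hypothetical:

const express = require('express');
const app = express();

const deleteItem = async (id) => { /* hypothetical datastore call */ };

app.delete('/items/:id', async (req, res) => {
  await deleteItem(req.params.id);
  // Before: res.send(data) put a body on the "no content" response,
  // which left the proxied keep-alive connection waiting on the socket.
  // After: end the 204 response explicitly, with no body.
  res.status(204).end();
});

app.listen(3000);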

Related

524 response has no CORS headers

I have a Couch/Pouch app that seems to work correctly, but it suffers strange delays and fills the browser log with CORS errors. The CORS errors occur only on timed-out GETs, because their responses don't supply CORS headers.
Using browser dev tools I can see many successful polling requests that look like this:
GET https://couch.server/mydb/_changes
?style=all_docs
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 200
... and a bad one like this ...
OPTIONS https://couch.server/mydb/_changes
?style=all_docs
&feed=longpoll
&heartbeat=10000
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 200
GET https://couch.server/mydb/_changes
?style=all_docs
&feed=longpoll
&heartbeat=10000
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 524
So there are just two differences between the good case and the bad case. In the bad case PouchDB:
precedes the GET request with an OPTIONS (preflight) request
specifies a longpoll feed with a 10-second heartbeat
The defect, apparently, is that CouchDB's 524 response has no CORS headers!
I have four such live: true, retry: true replicators (set up roughly as sketched below), so my browser logs show four red-inked errors every ten seconds.
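A sketch of what each replicator looks like; the database names and the filter are assumptions:

var PouchDB = require('pouchdb');

var localDB = new PouchDB('mydb');
var remoteDB = new PouchDB('https://couch.server/mydb');

localDB.replicate.from(remoteDB, {
  live: true,       // keeps a longpoll _changes request open
  retry: true,      // reconnects after errors, repeating the failing GET
  heartbeat: 10000, // matches the heartbeat=10000 seen above
  filter: '_view',
  view: 'visible/person'
});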
I would post this as an issue in the CouchDB repository, but I wanted some feedback here first; I could easily be misunderstanding something.
Other factors:
I host the client code on GitHub Pages and serve it through Cloudflare.
Access to CouchDB from clients also goes through Cloudflare.
CouchDB sits behind Nginx on my VPS.
Please let me know if there are further details I should provide.
Credit for answering this question should actually go to @sideshowbarker, since he made me realize that the error was not in CouchDB but in my Cloudflare settings.
In those settings I had my CouchDB site set to "DNS and HTTP proxy (CDN)" (orange cloud icon) mode rather than "DNS only" (grey cloud icon). Switching to DNS only and, perhaps unnecessarily, wiping the cache, solved the problem after a considerable delay (over an hour?). In hindsight this makes sense: 524 is a Cloudflare-specific status ("A Timeout Occurred"), generated by Cloudflare's proxy when the origin takes too long to respond, which is why the response carried none of CouchDB's CORS headers.

sanic.exceptions.RequestTimeout: Request Timeout in Sanic

I run a Sanic application and it raises an exception every few seconds, even without any request coming in.
sanic.exceptions.RequestTimeout: Request Timeout
How do I fix this?
I would point you towards the documentation so that you understand what you are doing and why you are receiving that exception. Blindly changing KEEP_ALIVE to False may not be what you want.
The KEEP_ALIVE config variable is set to True in Sanic by default. If you don’t need this feature in your application, set it to False to cause all client connections to close immediately after a response is sent, regardless of the Keep-Alive header on the request.
The amount of time the server holds the TCP connection open is decided by the server itself. In Sanic, that value is configured using the KEEP_ALIVE_TIMEOUT value. By default, it is set to 5 seconds; this is the same default as the Apache HTTP server, and it is a good balance between allowing enough time for the client to send a new request and not holding too many connections open at once. Do not exceed 75 seconds unless you know your clients are using a browser which supports TCP connections held open for that long.
The issue comes from the fact that the connection remains alive. Adding the following configuration seems to have fixed my issue:
from sanic.config import Config
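# Class-level default: disables keep-alive for every app, so each
# connection is closed as soon as its response has been sent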
Config.KEEP_ALIVE = False

Requests being doubled if Tomcat is slow to respond

We are working with the following stack:
A Node Express middleware running behind Nginx communicates with an Apache server, which proxies the requests to a Tomcat instance located on another server. Now, when we request an operation that takes more than 15 seconds to complete, another identical request gets sent. There is obviously a 15-second retry policy somewhere.
So far, I have been unable to detect exactly what is doing this, and my Google searches have also been fruitless. So my question is whether anyone has experience with something like this: could it be Node, Nginx or Apache that is sending the second request?
Any suggestions on where the double requests are coming from and what property I need to adjust to turn them off would be greatly appreciated.
The solution was to set the socket timeout property in Apache's mod_jk to 0.
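A sketch of the corresponding workers.properties entry; the worker name and host are assumptions:

# workers.properties
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=tomcat.example.internal
worker.tomcat1.port=8009
# 0 disables the socket timeout, so mod_jk no longer abandons a slow
# response and re-sends the request over a new connection
worker.tomcat1.socket_timeout=0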

intermittent responses with node.js/ socket.io after authorization handshake

OK, I have a WAMP installation with CakePHP under Windows XP 64-bit. I am using a WebSocket PHP plugin with the latest Node (0.8.17) and Socket.IO version 0.9.13. None of my co-workers seem to know what the problem is and I have been stuck for two weeks. I was able to narrow down the problem, but I have no clue how to fix it.
After my CakePHP plugin makes the request to the Socket.IO server, I can capture the authorization handshake request. According to the Socket.IO protocol, the response body should contain the handshake id, the heartbeat interval, the timeout interval, etc. Sometimes I get the proper response, but the majority of the time (like 90%) I get a null body even though the headers return a 200 OK response, which throws an error in my application. Is there a way I can get consistent results? I am more than happy to post debug information so you can see what I am talking about. I read somewhere that it might be a gzip compression problem, but I believe that has been fixed by the Socket.IO update.
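A minimal Node sketch for probing the Socket.IO 0.9 handshake endpoint directly, outside PHP; the host and port are assumptions:

var http = require('http');

// A healthy handshake body looks like "sid:heartbeat:closeTimeout:transports".
http.get('http://localhost:3000/socket.io/1/', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log(res.statusCode, body === '' ? '(empty body!)' : body);
  });
});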
Any help would be much appreciated!!!
The problem was with line 80 of the mgcrea/cake_websocket plugin. For some reason, although it was issuing the correct request, it was receiving an intermittent response. Overriding it solved the problem.

Uncatchable exception NodeJS

I have an application which writes to a datastore (HBase + Stargate), so I send HTTP requests out to it. Occasionally a request fails with an ETIMEDOUT exception, which kills the process.
I have an .on('error') handler on every socket connection present, or at least seemingly present, including requests and responses. I even took the extreme step of making a change to the source code that is supposed to "ignore" those errors, described in the third post here:
http://comments.gmane.org/gmane.comp.lang.javascript.nodejs/25283
I even have a
process.on('uncaughtException', function(){})
All of this is still to no avail, and my processes keep dying, potentially losing everything that has built up in the ZMQ stream queue.
The weirdest part is that one server out of the 4-server cluster behaves just fine.
I had a similar issue with our datastore, which relied on HTTP requests.
How are you sending the "HTTP request out"? Is it with a library? Have you tried putting a timeout limit on the HTTP request to avoid the ETIMEDOUT exception? Although this does not address the main issue, it will give you the ability to catch the timeout by raising your own controlled error.
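A minimal sketch of that approach with Node's built-in http module; the host, port and path are assumptions:

var http = require('http');

// Abort the request before the OS-level ETIMEDOUT fires, so the failure
// surfaces as a catchable 'error' event instead of an uncaught exception.
var req = http.request({ host: 'stargate.internal', port: 8080, path: '/table/row' }, function (res) {
  res.resume(); // drain the response
});
req.setTimeout(5000, function () {
  req.abort(); // triggers the 'error' handler below ("socket hang up")
});
req.on('error', function (err) {
  console.error('caught:', err.message); // handled; the process stays alive
});
req.end();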
