Cloudflare HTTP POST 524 Timeout with node.js + express - node.js

I am having trouble using HTTP POST when Cloudflare is enabled.
It keeps returning a 524 timeout:
Failed to load resource: the server responded with a status of 524 (Origin Time-out)
But when I disable Cloudflare, the HTTP POST works fine.
Any idea what might have caused this?
UPDATE
I am using AJAX POST; does this have anything to do with AJAX?
Thanks.

General causes for a CloudFlare 524 error.
Support should be able to provide more detailed troubleshooting.

The console utility netstat shows that some connections from CloudFlare are in the CLOSE_WAIT state, which indicates that the server just sits there without correctly closing connections. Looking at my web server's TCP traffic with Message Analyzer, I found several connections that were established and over which an HTTP request was sent, but that were never processed by my server.
So we get an answer: the number of simultaneously established connections outnumbered the available Accept() calls. The TCP stack completes the connection and then waits for the application to handle it. Depending on the situation this may never happen, so the client side just drops the connection after a 30-second timeout without getting any response.
To fix this, you must increase the number of outstanding accepts possible. This parameter may be named "max simultaneous connections" or something similar; check your web server's documentation or ask its support to find out.
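On a Node.js/Express origin like the one in the question, the closest equivalents are the listen() backlog and server.maxConnections. A minimal sketch, with the port and numbers picked arbitrarily:
const express = require('express');
const app = express();

// listen(port, host, backlog, callback): backlog is the queue of
// pending connections the OS holds before the app accepts them.
const server = app.listen(8080, '0.0.0.0', 511, () => {
  console.log('listening');
});

// Hard cap on concurrently open sockets; connections beyond this are
// dropped, so raise it if legitimate CloudFlare traffic is hitting it.
server.maxConnections = 1000;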
Also, as an experiment, you can force your server to reply with a "Connection: close" header to each request. This may prevent you from reaching the active-connections limit, because CloudFlare otherwise keeps connections alive for a very long time.
Also, the more simultaneous requests you make, the more likely you are to run into trouble. You can also try setting a small server-side timeout for idle connections.
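A sketch of both experiments on a Node.js/Express origin (the values are illustrative; server.keepAliveTimeout requires Node 8+):
const express = require('express');
const app = express();

// Experiment 1: answer every request with "Connection: close" so the
// proxy cannot hold the socket open after the response.
app.use((req, res, next) => {
  res.set('Connection', 'close');
  next();
});

app.get('/', (req, res) => res.send('ok'));

const server = app.listen(8080);

// Experiment 2: a short server-side timeout for idle keep-alive sockets.
server.keepAliveTimeout = 5000; // milliseconds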
P.S. An illustration of the number of connections from CloudFlare after one client loaded a page:
(http://i.imgur.com/IgwGLCf.png)

Related

ISAPI Filter modifies 302 response - IIS drops request and puts into HTTPERR - IPv6 / HTTP2.0

Need some help to dig deeper into why IIS is behaving in a certain way. Edge/Chrome makes an HTTP2.0 request to IIS, using the IPv6 address in the header (https://[ipv6]/) which results in the server generating a 302 response. The ISAPI filter makes some mods to the 302 response and replaces the response buffer. IIS drops the request/response and logs in HTTPERR log:
<date> <time> fe80::a993:90bf:89ff:1a54%2 1106 fe80::bdbf:5254:27d2:33d8%2 443 HTTP/2.0 GET <url> 1 - 1 Connection_Dropped_List_Full <pool>
I suspect this is related to HTTP/2.0: when putting Fiddler in the middle it isn't HTTP/2.0 anymore, it downgrades to HTTP/1.1, and it works.
When using an IPv4 address, it works. In either case the filter goes through the identical steps. There is no indication in the filter that anything went wrong.
Failed Request Tracing will not write buffers for incomplete/dropped requests that appear in HTTPERR log.
Is there a place where I can find out more detailed information about why IIS is dropping the request?
I did a network capture, and it looks like the browser is initiating the FIN teardown of the session.
Do you use any load balancer or reverse proxy before requests reach IIS? This error indicates that the log cannot store more dropped connections, so the underlying problem is that your connections are being dropped.
If you use a load balancer, the web application may be under heavy load, and because of this no threads are available to provide logging data to HTTP.sys. Check this.
Or, the client closed the request before IIS responded, but IIS still sent the response. This is more likely to be a problem with the application itself rather than with IIS and HTTP.sys. Check this.
One thing I noticed is that if you change HTTP/2 to HTTP/1.1, it works well. The difference between HTTP/1.1 and HTTP/2 is performance.
HTTP/1.1 practically allows only one outstanding request per TCP connection (HTTP pipelining allows more than one outstanding request, but it doesn't solve the problem completely).
HTTP/2.0 allows using the same TCP connection for multiple parallel requests.
So it looks like when you use HTTP/2, one connection carries multiple requests, and the application cannot handle those requests well, especially the requests for images.
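To make the difference concrete, here is a minimal Node.js sketch of HTTP/2 multiplexing (the host and paths are placeholders): several requests share one TCP connection as independent streams.
const http2 = require('http2');

// One TCP connection to the server...
const session = http2.connect('https://example.com');

const paths = ['/', '/style.css', '/image.png'];
let pending = paths.length;

for (const path of paths) {
  // ...carrying one HTTP/2 stream per request, all in parallel.
  const stream = session.request({ ':path': path });
  stream.on('response', (headers) => console.log(path, '->', headers[':status']));
  stream.resume(); // discard the body
  stream.on('end', () => {
    if (--pending === 0) session.close();
  });
}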
Another thing: Failed Request Tracing can capture all requests and responses, including those with status codes 200 and 302.

HTTP with TCP keep alive?

I'm writing an HTTP/1.1 client in (asyncio) Python, and wondering if sockets should be created with the SO_KEEPALIVE option:
import socket

# Enable TCP-level keep-alive probes on the socket (probe timing is
# tuned via OS-specific options such as socket.TCP_KEEPIDLE on Linux).
sock = socket.socket(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=socket.IPPROTO_TCP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
Should it always be enabled or disabled? Are there situations where it is better to enable it or not? Are there tradeoffs to be made? Is the answer different if it's HTTPS?
I am specifically thinking in reference to connections used for more than one HTTP request (i.e. using HTTP Keep-Alive).
TCP keep-alive is used to detect loss of connectivity on TCP connections that are idle (i.e. no data transfer) for a long time. HTTP/1 usually does not fit this use case, so it makes little sense to have TCP keep-alive active. But it also does no harm; in fact, it likely makes no difference at all to what shows up on the wire.
In HTTP/1 the client sends a request, which is more or less immediately followed by a response from the server. If HTTP keep-alive is active, another request might follow, which again results in a response. The duration between these requests is usually short, i.e. it is not common to keep an idle connection open for long. It is also expected that client and server can close the connection at any time after a request-response exchange is done, and that they must be able to handle such a close from the peer. Thus it is likely that the connection either gets closed, or new data gets transferred, before the TCP keep-alive timer could trigger delivery of an empty keep-alive packet.

Cloudflare 524 w/ nodejs + express

I'm running a Node.js web server on Azure, using the Express library for HTTP handling. We've been attempting to enable Cloudflare protection on the domains pointing to this box, but when we turn Cloudflare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to answer the connection with an HTTP response in time, but I'm having a hard time figuring out why it is
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn cloudflare proxying off.
I've been attempting to confirm the TCP connection using
tcpdump -i eth0 port 443 | grep cloudflare (the requests come over HTTPS), and have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For further reference, these requests should be, and are, quite quick when they succeed, so I'm having a hard time believing the issue is due to a long-running process stalling the response.
I do not believe we have any sort of IP based throttling or firewall (at least not intentionally?)
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use CloudFlare you need to switch DNS resolution over to CloudFlare's DNS servers; see https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/ for more information on configuring a domain name.
You can also refer to the CloudFlare FAQ "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
Try clearing your cookies.
I had a similar issue when I pointed Cloudflare at a new host, but Cloudflare cookies for the domain were doing something funky to the request (I am guessing it might have been trying to contact the old host?).

Node.js Reverse Proxy/Load Balancer

I am checking out node-http-proxy and nodejs-proxy to build a DIY reverse proxy/load balancer in Node.js. After coding a small version, I set up two WEBrick servers for the same Rails app so I could load balance (round-robin) between them. However, each HTTP request is sent to one server or the other, which is very inefficient, since loading the CSS and JavaScript files for the home page alone takes more than 25 GET requests.
I tried to play a bit with socket events, but I didn't get anywhere, because by default keep-alive connections are used (possibly this is why nginx only supports HTTP/1.0 here).
OK, so I am wondering how my proxy can send a block of HTTP requests (for instance, an entire page load) to only one server, so that I can send the next block to another server.
You need to consider stickiness or session persistence. This ensures that connections after the first inbound connection get 'stuck' to the chosen server for the duration of the session, or until the persistence times out.
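As a minimal sketch of stickiness with node-http-proxy (the module mentioned in the question), hashing the client IP so each client keeps hitting the same backend; the backend addresses are made up for illustration:
const http = require('http');
const httpProxy = require('http-proxy');

const backends = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002'];
const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  // Hash the client address so all of this client's requests (the whole
  // "block" of page-load requests) land on the same backend.
  const ip = req.socket.remoteAddress || '';
  const hash = ip.split('').reduce((h, c) => (h * 31 + c.charCodeAt(0)) >>> 0, 0);
  proxy.web(req, res, { target: backends[hash % backends.length] });
}).listen(8000);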

Running out of tcp connection using httpclient?

In our project, the front-end UI makes a lot of HTTP requests, using httpclient, to the backend REST service.
I noticed that sometimes an HTTP request was never even made to the server (verified using tcpdump).
Is there some kind of limit in Linux on the total number of TCP socket connections one can have?
I was playing with lsof, but can't seem to make much out of it...
Sorry for the poorly phrased question.
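For what it's worth, Linux does cap this indirectly: every socket consumes a file descriptor (bounded per process by ulimit -n), and every outbound connection consumes an ephemeral port (bounded by net.ipv4.ip_local_port_range). A common client-side mitigation is to reuse connections instead of opening one per request; here is a sketch in Node.js, since the question doesn't say which httpclient is in use:
const http = require('http');

// Reuse a bounded pool of sockets instead of a new socket per request.
const agent = new http.Agent({ keepAlive: true, maxSockets: 50 });

http.get({ host: 'localhost', port: 8080, path: '/api', agent }, (res) => {
  res.resume(); // drain the response so the socket returns to the pool
});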
