Kubernetes ECONNRESET nodejs - node.js

I have a frontend application that calls a microservice; both are in Node.js.
I do not really understand why, from time to time, the HTTP call to the internal URL http://service.env:3027 returns connect ECONNRESET 100.66.156.188:3027.
It happens on roughly 0.1% of the requests, but I cannot figure out why.

"ECONNRESET" means the other side of the TCP conversation abruptly closed its end of the connection. This is most probably due to one or more application protocol errors. You could look at the API server logs to see if it complains about something.
It's probably related to clients connected to your frontend just closed the browser in the middle of the request - I wouldn't be to worry about that as long as it stays low in numbers.
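If you want to see exactly where the resets happen, here is a minimal sketch (not from the original answer) of catching ECONNRESET on the outgoing call. The URL matches the question; the path and the single retry are illustrative assumptions:

```js
const http = require('http');

function callService(path, retriesLeft = 1) {
  const req = http.get('http://service.env:3027' + path, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => console.log('response:', res.statusCode, body));
  });

  req.on('error', (err) => {
    if (err.code === 'ECONNRESET' && retriesLeft > 0) {
      // The peer closed the TCP connection abruptly; try once more.
      console.warn('ECONNRESET, retrying once');
      callService(path, retriesLeft - 1);
    } else {
      console.error('request failed:', err);
    }
  });
}

callService('/health'); // hypothetical endpoint for illustration
```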

Related

Frequent xhr request by socket.io

When I connect to the socket server from the client side (a React app), a repeated request is sent by the socket client every few seconds. Generally the requests are GET requests, and most of the time they stay pending. Sometimes the response to these requests is 2.
What do you think is causing these repeated requests after connecting or doing anything with the socket?
UPDATE
This problem occurs when I use a namespace. I tried all the solutions, but the problem was not solved.
This is expected behavior when the transport in use is polling (long-polling).
What happens is that, by default, the transports parameter is ["polling", "websocket"] (client and server), and the order of the elements matters. So the first connection attempt is made via polling (which is faster to start than websocket), and then (or in parallel, I don't know the exact details) a connection attempt is made via websocket (which takes a little longer to establish but is faster for subsequent communication).
If the websocket connection is established successfully, communication is carried out that way. But if an error occurs, or the connection takes too long to establish, or the websocket transport is not present in the instance's options, then communication keeps going over polling, which is what those requests that remain pending are. It is normal for them to remain pending: they stay open until there is an update, so the server can inform the requester immediately without the client needing to make several quick status requests.
Check the options you set for this connection to find out whether the websocket transport is enabled. Be careful when running the socket server behind a reverse proxy, as the reverse proxy needs to be properly configured to accept websocket connections, otherwise websocket won't work.
You can check the websocket requests in the browser inspector, on the Network tab, by enabling the WS filter.
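For reference, a minimal sketch of how the transport is configured on the client, assuming the standard socket.io-client v4 API (the URL and namespace are illustrative). Putting websocket first makes the client move off polling as soon as it can, provided the proxy allows the upgrade:

```js
const { io } = require('socket.io-client');

const socket = io('https://example.com/my-namespace', {
  // Order matters: try websocket first, fall back to polling.
  transports: ['websocket', 'polling'],
});

socket.on('connect', () => {
  // Engine.IO exposes which transport is actually in use.
  console.log('connected via', socket.io.engine.transport.name);
});
```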
Here are some additional links if you want to read more:
https://socket.io/docs/v4/how-it-works/
https://socket.io/docs/v4/using-multiple-nodes/
https://socket.io/docs/v4/reverse-proxy/
https://ably.com/blog/websockets-vs-long-polling

Heroku H12 Request Timeout for Server Sent Events (SSE) Route

I have a NodeJS app that uses a Server Sent Events (SSE) route to send updates from the server to the client. In my local dev environment this works great: the client remains connected to the SSE route at all times and reconnects immediately if it is disconnected.
However, once I deployed the app to Heroku, it all went awry. Within a few seconds of not sending any data over the SSE route, I get a 503 Service Unavailable error on the client side, and the client loses its connection to the server, so it can no longer receive real-time updates. Looking at the Heroku server logs, I see an H12 Request Timeout error.
On some further research, I came across this article on the Heroku website:
If you’re sending a streaming response, such as with server-sent events, you’ll need to detect when the client has hung up, and make sure your app server closes the connection promptly. If the server keeps the connection open for 55 seconds without sending any data, you’ll see a request timeout.
However, it does not mention how to solve the issue.
Is there a way to set the timeout to infinity?
Or does this mean I have to keep sending heartbeats from my server to the client just to keep the SSE connection alive? That seems tedious and unnecessary, since I want to keep the connection open at all times anyway.
Received this from Heroku:
I wish I had better news for you but unfortunately, there's nothing you can do to avoid this other than sending a ping within every 55 seconds to keep the SSE persistent.
Heartbeat is the only way to keep an SSE route alive with Heroku unfortunately.
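A minimal sketch of such a heartbeat in an Express SSE route (the route path and the 30-second interval are illustrative choices; anything under Heroku's 55-second window works). SSE lines starting with a colon are comments and are ignored by EventSource clients, so they keep the connection alive without affecting the data stream:

```js
const express = require('express');
const app = express();

app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  res.flushHeaders();

  // Write an SSE comment every 30s so Heroku never sees 55s of silence.
  const heartbeat = setInterval(() => res.write(': ping\n\n'), 30000);

  // Stop the timer when the client hangs up, as Heroku recommends.
  req.on('close', () => clearInterval(heartbeat));
});

app.listen(process.env.PORT || 3000);
```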

Error "close (transport close)" on Socket client side

In my Express/Socket.IO app (which is running behind an HAProxy server), I am using sticky sessions (cookie based) to route requests to the same worker. I have 16 processes in total (8 per machine, 2 machines). Socket session data is stored in the Redis adapter.
The problem I have is that when an event is fired from the server, the client can't receive it. Instead, it keeps throwing disconnection errors every few seconds (4-5):
Update: the event is only received if the transport happened to be open when the event was fired, but the transport keeps getting closed almost instantly and then restarting.
Can someone please suggest something on this?
Finally, I found the solution. It was the timeout client setting in the HAProxy config, which was set too low. Increasing it fixed the issue.
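For illustration, a minimal sketch of the relevant HAProxy timeout directives (the values are placeholders, not the poster's actual config). timeout client covers idle client connections, while timeout tunnel governs a connection once a WebSocket upgrade has turned it into a tunnel:

```
defaults
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Applies after a successful WebSocket upgrade.
    timeout tunnel  1h
```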

Node.js with Socket.io - Long Polling fails and throws {"code":1,"message":"Session ID unknown"} response

Please give a solution.
All my socket polling requests are failing with the following error:
{"code":1,"message":"Session ID unknown"}
?EIO=3&transport=polling&t=LqtR6Rn&sid=0JFGcEFNdrS-XBZxHAXM is the long-polling call that the client makes to the server. If you look at it, it passes the session ID; the node identifies the socket connection the request was made for and responds.
But in some cases, such as multiple nodes behind an Amazon ELB, the call may go to a node other than the one that generated this session ID. That node cannot identify the session ID the call was made for, and hence responds with {"code":1,"message":"Session ID unknown"}.
You will also see this error if long-polling requests go unanswered or time out.
Nginx
You will need ip_hash in the upstream server definition, plus a few headers.
SocketIO NginX Configuration (Using Multiple Nodes)
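As a rough sketch of what that configuration looks like (host names and port are illustrative; the pattern follows the Socket.IO multi-node documentation): ip_hash keeps each client on the node that created its session, and the Upgrade headers let the websocket transport pass through the proxy:

```
upstream socketio_nodes {
    ip_hash;
    server app01:3000;
    server app02:3000;
}

server {
    listen 80;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass http://socketio_nodes;
    }
}
```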
Amazon ELB
For those who are having this issue behind an Amazon ELB, make sure you enable application-controlled session stickiness.

Which is the better way to implement heartbeat on the client side for websockets?

On the server side for WebSockets there is already a ping/pong implementation, where the server sends a ping and the client replies with a pong to let the server know whether a client is still connected. But there isn't anything implemented in reverse to let the client know whether the server is still connected.
There are two ways to go about this that I have read about:
1. Every client sends a message to the server every x seconds, and whenever an error is thrown when sending, that means the server is down, so reconnect.
2. The server sends a message to every client every x seconds; the client receives this message and updates a variable, and on the client side a check runs every x seconds to see whether this variable has changed. If it hasn't changed in a while, it means no message has arrived from the server, so you can assume the server is down and reestablish the connection.
You can figure out on the client side whether the server is still online using either method. With the first, the clients send traffic to the server; with the second, the server sends traffic out to the clients. Both seem easy enough to implement, but I'm not sure which is the better way in terms of being more efficient/cost effective.
Server upload speeds are higher than client upload speeds, but server CPUs are an expensive resource while client CPUs are relatively cheap. Offloading logic onto the client is the more cost-effective approach...
Having said that, servers must implement this specific logic (actually, all ping/timeout logic), otherwise they might be left with "half-open" sockets that drain resources but aren't connected to any client.
Remember that sockets (file descriptors) are a limited resource. Not only do they use memory even when no traffic is present, but they prevent new clients from connecting when the resource is maxed out.
Hence, servers must clear out dead sockets, either using timeouts or by implementing ping.
P.S.
I'm not a node.js expert, but this type of logic should be implemented using the WebSocket protocol ping rather than by your application. You should probably look into your node.js server / websocket framework and check how to enable pinging.
You should set pings to accommodate your specific environment, i.e., if you host on Heroku, then Heroku will enforce a timeout of ~55 seconds, and your pings should be sent before this timeout occurs.
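A minimal sketch of protocol-level pings on the server, assuming the ws package (the question doesn't name a framework, so this is an illustrative choice). The server pings every client on an interval and terminates sockets that never answered the previous ping, which also clears out half-open connections:

```js
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  ws.isAlive = true;
  // Browsers answer protocol-level pings with pongs automatically.
  ws.on('pong', () => { ws.isAlive = true; });
});

// Every 30 seconds (illustrative; keep it under your platform's idle timeout),
// drop sockets that never answered the previous ping, then ping again.
const interval = setInterval(() => {
  for (const ws of wss.clients) {
    if (!ws.isAlive) { ws.terminate(); continue; }
    ws.isAlive = false;
    ws.ping();
  }
}, 30000);

wss.on('close', () => clearInterval(interval));
```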

Resources