I have a Node.js app that uses a Server-Sent Events (SSE) route to send updates from the server to the client. On my local dev environment this works great: the client stays connected to the SSE route at all times and tries to reconnect immediately if it is disconnected.
However, once I deployed the app to Heroku, it all went awry. Within a few seconds of not sending any data over the SSE route, I get a 503 Service Unavailable error on the client side, the client loses its connection to the server, and it can no longer receive real-time updates. Looking at the Heroku server logs, I found an H12 Request Timeout error.
After some further research, I came across this note on the Heroku website:
If you’re sending a streaming response, such as with server-sent events, you’ll need to detect when the client has hung up, and make sure your app server closes the connection promptly. If the server keeps the connection open for 55 seconds without sending any data, you’ll see a request timeout.
However, it does not mention how to solve the issue.
Is there a way to set the timeout to infinity?
Or does this mean I have to keep sending heartbeats from my server to client just to keep the SSE route connection alive? This seems tedious and unnecessary since I want to keep the connection alive at all times.
I received this reply from Heroku:
I wish I had better news for you but unfortunately, there's nothing you can do to avoid this other than sending a ping at least once every 55 seconds to keep the SSE connection persistent.
A heartbeat is the only way to keep an SSE route alive on Heroku, unfortunately.
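For example, in an Express SSE route the heartbeat can simply be an SSE comment line (a line starting with a colon), which EventSource clients ignore. A minimal sketch, assuming Express and an illustrative 30-second interval to stay well under Heroku's 55-second limit:

    // Express route serving Server-Sent Events with a periodic keep-alive comment.
    const express = require('express');
    const app = express();

    app.get('/events', (req, res) => {
      res.set({
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        Connection: 'keep-alive'
      });
      res.flushHeaders();

      // SSE comment lines (": ...") are ignored by EventSource but still count as
      // traffic, so Heroku's router never sees 55 seconds of silence.
      const heartbeat = setInterval(() => res.write(': heartbeat\n\n'), 30000);

      // Real updates still go out as normal SSE messages, e.g.:
      // res.write(`data: ${JSON.stringify(update)}\n\n`);

      req.on('close', () => clearInterval(heartbeat)); // stop when the client hangs up
    });

    app.listen(process.env.PORT || 3000);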
Related
In my Express/socket app (which is running behind an HAProxy server), I am using sticky sessions (cookie based) to route requests to the same worker. I have 16 processes running in total (8 per machine, 2 machines). Socket session data is being stored in the Redis adapter.
The problem I have is that when an event is fired from the server, the client can't receive it. Instead, the client keeps throwing disconnection errors every few seconds (4-5).
Update: the client only receives the event if the transport happens to be open when the event is fired, and the transport keeps closing almost instantly and then reconnecting.
Can someone please suggest something on this?
Finally, I found the solution: the timeout client setting in the HAProxy config was set too low. Increasing it fixed the issue.
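For reference, the relevant directives live in haproxy.cfg (in the defaults section or per frontend/backend); the values below are only illustrative:

    # haproxy.cfg (illustrative values)
    defaults
        timeout connect 5s
        timeout client  60s   # this was the one set too low; raise it above your heartbeat interval
        timeout server  60s
        timeout tunnel  1h    # applies after the connection is upgraded (WebSockets)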
On the server side for WebSockets there is already a ping/pong implementation where the server sends a ping and the client replies with a pong, letting the server know whether a client is connected or not. But there isn't anything implemented in reverse to let the client know if the server is still connected to it.
From what I have read, there are two ways to go about this:
Every client sends a message to the server every x seconds; whenever an error is thrown while sending, that means the server is down, so reconnect.
The server sends a message to every client every x seconds; each client records when it last received one, and a client-side timer periodically checks whether that timestamp has changed. If it hasn't changed in a while, the client hasn't heard from the server recently, so you can assume the server is down and re-establish the connection.
Either method lets the client figure out whether the server is still online. With the first you're sending traffic to the server, whereas with the second you're sending traffic out of the server. Both seem easy enough to implement, but I'm not sure which is more efficient/cost-effective.
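For concreteness, here is a minimal browser-side sketch of the second approach. It assumes a socket object with a socket.io-style on('message') API and a page-specific reconnect() helper; the interval and timeout values are illustrative:

    // Watchdog for the second approach: the server is expected to send *something*
    // (a heartbeat or regular data) at least every few seconds.
    const heartbeatTimeoutMs = 20000; // treat 20 s of silence as "server is down"
    let lastSeen = Date.now();

    socket.on('message', () => {
      lastSeen = Date.now(); // any message from the server counts as proof of life
    });

    setInterval(() => {
      if (Date.now() - lastSeen > heartbeatTimeoutMs) {
        reconnect(); // hypothetical helper: tear down the socket and create a new one
      }
    }, 5000);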
Server upload speeds are higher than client upload speeds, but server CPUs are an expensive resource while client CPUs are relatively cheap. Offloading logic onto the client is the more cost-effective approach...
Having said that, servers must implement this specific logic (actually, all ping/timeout logic), otherwise they might be left with "half-open" sockets that drain resources but aren't connected to any client.
Remember that sockets (file descriptors) are a limited resource. Not only do they use memory even when no traffic is present, but they prevent new clients from connecting when the resource is maxed out.
Hence, servers must clear out dead sockets, either using timeouts or by implementing ping.
P.S.
I'm not a node.js expert, but this type of logic should be implemented using the WebSocket protocol ping rather than by your application. You should probably look into your node.js server / WebSocket framework and check how to enable pinging.
You should set pings to accommodate your specific environment. E.g., if you host on Heroku, Heroku enforces a timeout of ~55 seconds, so your pings should be sent before that timeout occurs.
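For example, with the widely used ws library on the server, protocol-level pings can be combined with a liveness flag to drop dead sockets. This is a sketch based on the pattern from the ws documentation; the 30-second interval is chosen to stay under Heroku's ~55-second window:

    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (ws) => {
      ws.isAlive = true;
      // Browsers answer protocol-level pings with pongs automatically,
      // so no application code is needed on the client.
      ws.on('pong', () => { ws.isAlive = true; });
    });

    // Every 30 s: terminate clients that never answered the last ping, then ping the rest.
    setInterval(() => {
      wss.clients.forEach((ws) => {
        if (!ws.isAlive) return ws.terminate(); // dead socket: free its resources
        ws.isAlive = false;
        ws.ping();
      });
    }, 30000);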
I have a Socket.IO server in Node.js. All connections come through nginx. The client is written in C# with the Quobject/SocketIoClientDotNet library.
The problem is that the client receives messages from the server only from time to time.
I have logging in the Node.js code, so I can see that the server does try to send the messages. Moreover, netstat shows multiple connections in the TIME_WAIT state on the server, and the number of those connections equals the number of unsuccessful send attempts by the Socket.IO server.
In the other direction, the server always receives messages from clients.
I applied the nginx settings ("Upgrade" headers, etc.), but it didn't help.
I turned off Windows Firewall but it didn't help.
So I don't know why this happens or where else to look, and I would appreciate any help from the community.
Hi, I've been struggling with this issue for a few days now. I have a simple Node.js app that connects to Twitter's streaming API and tracks a few terms; as a term is found, the client side gets a WebSocket notification. I've made sure that my OAuth credentials are only used by this app and that the connection to the streaming API is opened only on app startup. What keeps happening is that I get a 200 OK response but the stream then disconnects. I have it set to reconnect after 30 seconds, but it's becoming ridiculous. It seems to be fine for a few minutes after restarting the app and then goes back to repeatedly disconnecting.
The error is {"disconnect":{"code":7,"stream_name":"XXXXX-statuses158325","reason":"admin logout"}}. I have run the same app locally with multiple client connections and not had a problem. I looked into other hosting services, but I can't find one that supports WebSockets without reverting to a slow long-polling option on socket.io (which won't work for my app's purposes).
Any ideas for why this keeps happening?
That error means that you're connecting again with the same credentials (https://dev.twitter.com/discussions/11251).
One cause might be running more than one drone.
If this doesn't help, join us on http://webchat.jit.su and we'll do our best to help you :D
-yawnt
Socket.io allows you to use heartbeats to "check the health of Socket.IO connections." What exactly are heartbeats and why should or shouldn't I use them?
A heartbeat is a small message sent from a client to a server (or from a server to a client and back to the server) at periodic intervals to confirm that the client is still around and active.
For example, if you have a Node.js app serving a chat room and a user doesn't say anything for many minutes, there's no way to tell whether they're really still connected. By sending a heartbeat at a predetermined interval (say, every 15 seconds), the client informs the server that it's still there. If it's been, e.g., 20 seconds since the server has gotten a heartbeat from a client, that client has likely been disconnected.
This is necessary because you cannot be guaranteed a clean connection termination over TCP--if a client crashes, or something else happens, you won't receive the termination packets from the client, and the server won't know that the client has disconnected. Furthermore, Socket.IO supports various other mechanisms (other than TCP sockets) to transfer data, and in these cases the client won't (or can't) send a termination message to the server.
By default, a Socket.IO client will send a heartbeat to the server every 15 seconds (heartbeat interval), and if the server hasn't heard from the client in 20 seconds (heartbeat timeout) it will consider the client disconnected.
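The interval and timeout are configurable on the server. The option names depend on your Socket.IO version: in 1.x and later they are pingInterval and pingTimeout (older 0.9 releases used io.set('heartbeat interval', ...) and io.set('heartbeat timeout', ...)). A minimal sketch with illustrative values, in milliseconds:

    const http = require('http');
    const server = http.createServer();

    // Socket.IO 1.x+ server with explicit heartbeat settings.
    const io = require('socket.io')(server, {
      pingInterval: 15000, // send a ping every 15 s
      pingTimeout: 20000   // consider the client gone if no pong arrives within 20 s
    });

    io.on('connection', (socket) => {
      socket.on('disconnect', (reason) => {
        console.log('client disconnected:', reason); // e.g. "ping timeout"
      });
    });

    server.listen(3000);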
I can't think of many use cases where you wouldn't want to use heartbeats.