I'm working on a project that includes a BeagleBone Black and a web server written in Node.js. The idea is that when the BeagleBone detects something in the environment it sends it to the web server, and if the connection is lost for some reason the BBB stores the log in its own database.
So I use Socket.IO, and I emit when the BBB detects something. I use a flag variable isConnected that I set to false on the "disconnect" event, and if isConnected is false I'm not emitting to the server, just writing to the database.
The problem is that when the computer where the server is running goes to sleep (simulating a lost connection), Socket.IO sometimes needs more than a minute to detect that the connection is lost and emit the disconnect event. Is there any way to get this information faster? During that time the program tries to send readings to the server and can't, but they're not written to the database either.
The client could perhaps emit myping every 5 seconds (or more often) and receive mypong from the server confirming that the server is still reachable.
...or the server can send a confirmation that it received the data, and the client keeps resending the data until it gets that confirmation.
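For example, a minimal sketch of that idea on the BBB side, assuming a Node.js client with a recent socket.io-client (the server URL and the 2-second reply window are assumptions; myping/mypong and the 5-second interval come from the suggestion above):

const { io } = require('socket.io-client');
const socket = io('http://my-server:3000');   // hypothetical server URL

let isConnected = false;

socket.on('mypong', function () {
  isConnected = true;                         // the server answered, the link is alive
});

setInterval(function () {
  isConnected = false;                        // assume the link is down...
  socket.emit('myping');                      // ...until the server answers
  setTimeout(function () {
    if (!isConnected) {
      // no mypong within 2 seconds: write new readings to the local DB instead of emitting
    }
  }, 2000);
}, 5000);

On the server side the handler is just socket.on('myping', ...) replying with socket.emit('mypong').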
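That second approach maps directly onto Socket.IO acknowledgements: the client passes a callback as the last argument to emit, and the server invokes it to confirm receipt. A rough sketch (the reading event name, the timeout, and the saveToLocalDb helper are made up; socket and io are the same objects as in the sketch above):

// client (BBB)
function sendReading(reading) {
  let acknowledged = false;
  socket.emit('reading', reading, function () {
    acknowledged = true;                      // server confirmed receipt
  });
  setTimeout(function () {
    if (!acknowledged) {
      saveToLocalDb(reading);                 // hypothetical helper: persist locally and retry later
    }
  }, 5000);
}

// server
io.on('connection', function (socket) {
  socket.on('reading', function (reading, ack) {
    // ...store or forward the reading...
    ack();                                    // confirm receipt back to the client
  });
});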
I used SSE for push notifications, but I can't get an error or a close event on the server when the client's Wi-Fi/mobile data drops unexpectedly, without the connection being closed properly.
The server keeps seeing the client as online for about 15 minutes before it gets a connection-closed message.
I used a regular implementation of SSE in Node.js and Express.
Is there any way to check, via response.write(), whether the message was delivered to the user or not?
To get a socket error you need the server to attempt to write data; there is no other way to detect when the connection has dropped except to try using it.
As described in chapter 5 of Data Push Apps with HTML5 SSE, what I do is: a) have the server send out a keep-alive message every 30 seconds or so. I normally have the message just be the current datestamp, but it could be anything; b) have the client disconnect and reconnect if it hasn't received any messages for 45 seconds.
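A rough Express sketch of that pattern (the /events route, the port, and the intervals are just examples):

const express = require('express');
const app = express();

app.get('/events', function (req, res) {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });
  res.flushHeaders();

  // a) keep-alive every 30 seconds; the periodic write is also what
  //    eventually surfaces a socket error for a dead connection
  const keepAlive = setInterval(function () {
    res.write('data: ' + new Date().toISOString() + '\n\n');
  }, 30000);

  req.on('close', function () {
    clearInterval(keepAlive);                 // runs once the dropped connection is finally detected
  });
});

app.listen(3000);

And on the browser side, reconnect if nothing has arrived for 45 seconds:

let lastMessage = Date.now();
let source;

function connect() {
  source = new EventSource('/events');
  source.onmessage = function () { lastMessage = Date.now(); };
}
connect();

setInterval(function () {
  if (Date.now() - lastMessage > 45000) {     // b) silent for 45 seconds: reconnect
    source.close();
    lastMessage = Date.now();
    connect();
  }
}, 5000);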
I'm trying to have two servers communicate with each other. I'm pretty new to websockets, so it's kind of confusing. Also, just to put it out there, I'm not trying to do this: websocket communication between servers;
My goal here is basically to use a socket to read data from another server (if this is possible?). I'll try to explain it simply below:
We'll assume there is a website called https://www.test.com (going to this website returns an object)
With a normal HTTP request, you would just do:
$.get('https://www.test.com').done(function (r) {
  console.log(r);
});
And this would return r, which is an object that's something like this: {test: '1'}.
Now, from what I understand about websockets, you cannot return data from them because you don't actually 'request' data; you just send data through said socket.
Since I know what test.com returns, and I know all of the headers that I'm going to need, is it possible to just open a socket with test.com and wait for that data to change, without requesting it?
I understand how client-server communication works with Socket.IO/websockets; I'm just not sure if it's possible to do server-server communication.
If anyone has any links to documentation or anything that helps explain it, it would be much appreciated. I just want to learn how this works (or if it's even possible).
Yes, you can do that (assuming I understood your needs correctly). You can establish a websocket connection between two servers and then either side can just send data to the other. That will trigger an event at the other server, and it will receive the sent data as part of that event. You can do this in either direction, from serverA to serverB or vice versa, or both.
In node.js, everything is event-driven. So, you would establish the webSocket connection and then just set up an event handler to be triggered when data arrives. The other server can then just send new data whenever it has updated data to send. This is referred to as the "push" model. So, rather than serverA asking serverB if it has any new data, you establish the webSocket connection and serverB just sends new data to serverA whenever that new data is available. Done correctly, this is both more efficient and more timely (as there is no polling interval and no cycles wasted asking for data when there is nothing new).
The identical model can be used between servers or client to server. The only difference with the client/server model is that the webSocket must be initially established client to server. With the server to server model, either server can initiate the connection.
You can think of a webSocket connection like establishing a phone call. Once the phone call is established, either side can just say something and the other end hears what they're saying. The webSocket connection is similar. Once it's established, either side can just send some data to the other end and the other end will receive it. It's an open pipeline ready to have data sent either way. In node.js, when data arrives on that pipeline, it triggers an event, so the listener will get that event and see the data that was sent.
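As a hedged sketch with a recent Socket.IO, serverB can run a plain Socket.IO server and serverA can connect to it with socket.io-client, which runs fine inside Node (the port, hostname, and update event name are made up):

// serverB: accepts the connection and pushes data whenever it changes
const { Server } = require('socket.io');
const io = new Server(3001);

io.on('connection', function (socket) {
  // whenever serverB has new data, just push it down the open socket
  socket.emit('update', { test: '1' });
});

// serverA: connects out and receives pushed data as events
const { io: connect } = require('socket.io-client');
const socket = connect('http://serverB.example.com:3001');

socket.on('update', function (data) {
  console.log('new data from serverB:', data);
});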
I am going to design a system where there is two-way communication between clients and a web application. The web application can receive data from the client so it can persist it to a DB and so forth, while it can also send instructions to the client. For this reason, I am going to use Node.js and Socket.IO.
I also need to use RabbitMQ: if the web application sends an instruction to a client and the client is down (hence the socket has dropped), I want the instruction to be queued so it can be sent whenever the client connects again and creates a new socket.
From the client to the web application it should be pretty straightforward, since the client uses the socket to send the data to the Node.js app, which in turn sends it to the queue so it can ultimately be forwarded to the web application. In this direction, if the socket is down there is no internet connection, so the data is either not sent in the first place or is cached on the client.
My concern lies with the other direction, and I would like an answer before I design it this way and actually implement it, so I can avoid hitting any brick walls. Let's say the web application tries to send an instruction to the client. If the socket is available, the web app forwards the instruction to the queue, which in turn forwards it to the Node.js app, which in turn uses the socket to forward it to the client. So far so good. If, on the other hand, the client's internet connection has dropped and hence the socket is currently down, the web app will still send the instruction to the queue. My question is: when the queue forwards the instruction to Node.js, and Node.js figures out that the socket does not exist and hence cannot send the instruction, will the queue receive a reply from Node.js that it could not forward the data, and hence that the message should remain in the queue? If that is the case, it would be perfect. When the client manages to connect to the internet again, it will perform the handshake once more, the queue will once again try to send to Node.js, and this time Node.js manages to send the instruction to the client.
Is this the correct reasoning of how those components would interact together?
This won't work the way you want it to.
When the Node process receives the message from RabbitMQ and sees the socket is gone, you can easily nack the message back to the queue.
However, that message will be processed again immediately. It won't sit there doing nothing; the Node process will just pick it up again. You'll end up with your Node process and RabbitMQ thrashing as the same message gets nacked over and over, waiting for the socket to come back online.
If you have dozens or hundreds of messages for a client that isn't connected, you'll have dozens or hundreds of messages thrashing around in circles like this. It will destroy the performance of both your Node process and RabbitMQ.
My recommendation:
When the Node app receives the message from RabbitMQ and the socket is not available for that client, put the message in a database table and mark it as waiting for that client.
When the client re-connects, check the database for any pending messages and forward them all at that point.
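A rough sketch of that flow with amqplib (the instructions queue, the connectedSockets map, the clientId field, and the db helpers are all assumptions standing in for your own setup):

// assumes an amqplib channel, a Socket.IO io server, a connectedSockets map
// and a db helper have already been created elsewhere in the app

channel.consume('instructions', async function (msg) {
  const instruction = JSON.parse(msg.content.toString());
  const socket = connectedSockets[instruction.clientId];    // hypothetical clientId -> socket map

  if (socket) {
    socket.emit('instruction', instruction);
  } else {
    // client is offline: park the message in the DB instead of nack-ing it
    await db.pendingInstructions.insert(instruction);       // hypothetical DB helper
  }
  channel.ack(msg);                                          // ack either way, so nothing thrashes
});

// when the client reconnects, flush anything that was parked for it
io.on('connection', async function (socket) {
  const pending = await db.pendingInstructions.findByClient(socket.clientId);
  pending.forEach(function (instruction) {
    socket.emit('instruction', instruction);
  });
  await db.pendingInstructions.removeByClient(socket.clientId);
});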
I have a page on which users connect to my node server with socket.io, but I only allow them to have one open socket.io connection to the server (by passing along their account id when authorizing them and storing it in an array), and this works fine 99% of the time. The problem is that sometimes when users disconnect, the server-side disconnect event doesn't fire for some reason, so I can't clear their account from the array of clients, which ends up locking them out.
Is there a way for me to check if their old socket connection (which I have the ID of) is still active? (So if it isn't I can clear their old connection and let them connect again)
Make sure the heartbeats option is set to true (the default). If the timeout lapses, the disconnect should happen automatically. However, there was a bug report about heartbeats not working, so make sure you have the latest version of Socket.IO (I'm not sure of the current status of that bug).
If you still need help, you could send a ping to the old connection when the user tries to reconnect:
Emit a 'ping' from the server, and reply with 'pong' from the client if the connection is still alive. If the ping times out (after, say, 20 seconds), drop the connection manually so the user can reconnect. You could also use the ping to notify the original client that another client is trying to connect, and raise some UI to that effect.
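One way to sketch that probe on the server (the ping-check/pong-check event names and the 20-second timeout are made up; custom names also avoid clashing with Socket.IO's own heartbeat events in some versions):

// server: when an account already has a registered socket, probe it first
function probeOldSocket(oldSocket, onResult) {
  let settled = false;

  oldSocket.once('pong-check', function () {
    if (settled) return;
    settled = true;
    onResult(true);                           // old connection is still alive, refuse the new one
  });

  oldSocket.emit('ping-check');

  setTimeout(function () {
    if (settled) return;
    settled = true;
    oldSocket.disconnect(true);               // no reply in 20 s: drop the stale connection
    onResult(false);                          // ...and let the new client connect
  }, 20000);
}

// client: answer the server's probe if still alive
socket.on('ping-check', function () {
  socket.emit('pong-check');
});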
Socket.io allows you to use heartbeats to "check the health of Socket.IO connections." What exactly are heartbeats and why should or shouldn't I use them?
A heartbeat is a small message sent from a client to a server (or from a server to a client and back to the server) at periodic intervals to confirm that the client is still around and active.
For example, if you have a Node.js app serving a chat room, and a user doesn't say anything for many minutes, there's no way to tell if they're really still connected. By sending a heartbeat at a predetermined interval (say, every 15 seconds), the client informs the server that it's still there. If it's been, say, 20 seconds since the server last got a heartbeat from a client, that client has likely been disconnected.
This is necessary because you cannot be guaranteed a clean connection termination over TCP--if a client crashes, or something else happens, you won't receive the termination packets from the client, and the server won't know that the client has disconnected. Furthermore, Socket.IO supports various other mechanisms (other than TCP sockets) to transfer data, and in these cases the client won't (or can't) send a termination message to the server.
By default, a Socket.IO client will send a heartbeat to the server every 15 seconds (heartbeat interval), and if the server hasn't heard from the client in 20 seconds (heartbeat timeout) it will consider the client disconnected.
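In current Socket.IO versions the equivalent settings are the server-side pingInterval and pingTimeout options, given in milliseconds; a short sketch with purely illustrative values:

const { Server } = require('socket.io');

const io = new Server(3000, {
  pingInterval: 15000,   // how often a heartbeat/ping is sent
  pingTimeout: 20000     // how long to wait for the reply before considering the client disconnected
});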
I can't think of many use cases where you wouldn't want to use heartbeats.