I am using mongoose in a C++ application to provide a websocket server. Now I would like to send a larger amount of data to the connected clients, triggered by the server itself. The problem is that the server and the send (or broadcast) function run in different threads. Does anybody know of a way to synchronize these two threads?
BR, M
I am developing a server that should handle many requests. I'm using Delphi and the Indy library. Each request/connection has its own thread. I want to send messages from the server to the clients, so I keep a thread list that holds the client contexts and keeps the connections alive; when I want to send a message from the server, I pick the client context from this list and send the message to it.
There are many connections and many threads. The question is: if I save and keep all the contexts, will the threads stay alive? If yes, is it possible to use a thread pool (TIdSchedulerOfThreadPool) in this situation?
Is there a way to send messages from the server to the clients without keeping the connection threads alive?
I have a node server accepting websocket connections from the clients. Each client can broadcast a message to all of the other clients.
UPDATE: I am using https://github.com/websockets/ws as my library of choice.
At the moment, the server has an array with all of the connections. Each connection has a tabId. When one of the clients emits a message, I go through all of the connections and check: if a connection's tabId doesn't match, I send the message to that client.
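A minimal sketch of that local relay with the ws library, assuming each client identifies its tab via a ?tabId=... query parameter on connect (the identification mechanism is an assumption; the post doesn't say how tabId is attached):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws, req) => {
  // Assumption: the client supplies its tabId in the connection URL
  ws.tabId = new URL(req.url, 'http://localhost').searchParams.get('tabId');

  ws.on('message', (message) => {
    // Relay to every other open connection whose tabId differs
    for (const client of wss.clients) {
      if (client !== ws && client.readyState === WebSocket.OPEN && client.tabId !== ws.tabId) {
        client.send(message.toString());
      }
    }
  });
});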
Because of load, I am facing the problem of having to run more than one server. So there will be, say, two servers, each with a number of clients.
How do I make sure that a message gets broadcast to all of the websocket clients, and not only the ones connected to the same server?
One possible solution I thought of is to have the connections stored in a database, where each record has the tabId and the serverId. However, even a simple broadcast gets tricky: messages to "local" sockets are easy to broadcast (the socket is local and available), whereas messages to "remote" sockets would imply inter-server communication.
Is there a good pattern to solve this? Surely, this is something that people face every day.
You could use a message queue like RabbitMQ.
When a client logs in to your server, create a consumer which listens to a queue that will receive messages directed to that particular client. And when the clients are sending messages, just use a publisher to publish them to the recipient's queue.
This way it doesn't matter which node a client is on, and you don't need to know, even if clients jump from one node to another.
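A minimal sketch of this per-client-queue pattern using the amqplib package (the queue naming scheme, clientId, and the ws-style socket are illustrative assumptions):

const amqp = require('amqplib');

async function attachClient(clientId, socket) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  // One durable queue per client; messages wait here while the client is offline
  const queue = 'client.' + clientId;
  await ch.assertQueue(queue, { durable: true });

  // Consume messages addressed to this client and push them down its socket
  ch.consume(queue, (msg) => {
    if (msg !== null) {
      socket.send(msg.content.toString());
      ch.ack(msg);
    }
  });

  // Outgoing messages are published straight to the recipient's queue,
  // regardless of which node the recipient is connected to
  socket.on('message', (raw) => {
    const { to, payload } = JSON.parse(raw);
    ch.sendToQueue('client.' + to, Buffer.from(payload), { persistent: true });
  });
}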
I am going to design a system where there is two-way communication between clients and a web application. The web application can receive data from the client so it can persist it to a DB and so forth, while it can also send instructions to the client. For this reason, I am going to use Node.JS and Socket.IO.
I also need to use RabbitMQ since I want that if the web application sends an instruction to a client, and the client is down (hence the socket has dropped), I want it to be queued so it can be sent whenever the client connects again and creates a new socket.
From the client to the web application it should be pretty straightforward: the client uses the socket to send the data to the Node.JS app, which in turn sends it to the queue so it can ultimately be forwarded to the web application. In this direction, if the socket is down there is no internet connection, so the data is either not sent in the first place or is cached on the client.
My concern lies with the other direction, and I would like an answer before I design it this way and actually implement it, so I can avoid hitting any brick walls. Let's say that the web application tries to send an instruction to the client. If the socket is available, the web app forwards the instruction to the queue, which in turn forwards it to the Node.JS app, which in turn uses the socket to forward it to the client. So far so good.

If, on the other hand, the client's internet connection has dropped, and hence the socket is currently down, the web app will still send the instruction to the queue. My question is: when the queue forwards the instruction to Node.JS, and Node.JS figures out that the socket does not exist and hence cannot send the instruction, will the queue receive a reply from Node.JS that it could not forward the data, and hence keep the message in the queue? If that is the case, it would be perfect. When the client manages to connect to the internet, it will perform a handshake once again, the queue will once again try to send to Node.JS, and this time Node.JS manages to send the instruction to the client.
Is this the correct reasoning of how those components would interact together?
This won't work the way you want it to.
When the Node process receives the message from RabbitMQ and sees the socket is gone, you can easily nack the message back to the queue.
However, that message will be processed again immediately. It won't sit there doing nothing; the Node process will just pick it up again. You'll end up with your Node process and RabbitMQ thrashing as the message is nacked over and over, waiting for the socket to come back online.
If you have dozens or hundreds of messages for a client that isn't connected, you'll have dozens or hundreds of messages thrashing around in circles like this. It will destroy the performance of both your Node process and RabbitMQ.
My recommendation:
When the Node app receives the message from RabbitMQ and the socket is not available to the client, put the message in a database table and mark it as waiting for that client (sketched below).
When the client re-connects, check the database for any pending messages and forward them all at that point.
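A minimal sketch of that store-and-forward approach, assuming amqplib for the RabbitMQ side, a ws-style socket, and a hypothetical db module with savePending/takePending helpers:

const amqp = require('amqplib');
const sockets = new Map(); // clientId -> connected socket, maintained elsewhere

async function start(db) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('instructions', { durable: true });

  ch.consume('instructions', async (msg) => {
    const { clientId, payload } = JSON.parse(msg.content.toString());
    const socket = sockets.get(clientId);

    if (socket) {
      socket.send(payload);                    // client online: deliver now
    } else {
      await db.savePending(clientId, payload); // client offline: park it in the DB
    }
    ch.ack(msg); // ack either way, so the message never thrashes back to the queue
  });
}

// On reconnect, flush everything that accumulated while the client was away
async function onReconnect(db, clientId, socket) {
  sockets.set(clientId, socket);
  const pending = await db.takePending(clientId); // hypothetical: reads and deletes
  for (const payload of pending) {
    socket.send(payload);
  }
}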
I'm using socket.io and Node.js.
I have a server that I use as my Node.js server. What I'm trying to do is move clients according to messages sent as client -> server -> clients.
For example: client1 sends the message "MOVE-RIGHT" to the server. The server redirects this message to all clients as "MOVE-RIGHT-CLIENT1", and according to this message, all clients start moving client1 to the right.
The problem is that all clients may have different latency depending on their network status. For example, if server->client1 communication happens in 50 ms, server->client2 communication may happen in 250 ms. Therefore, client1 does this job nearly 200 ms earlier, so we can say that these two movements are not synchronized, because one happens earlier than the other.
As you know, the latency between the clients and the server may be different for each client, and it can also be different for each message from the same client.
My question is: which method should I use to synchronize these clients so they do their jobs at the same time? Is there any feature of socket.io or Node.js for this? What would you recommend?
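For reference, a minimal sketch of the relay this question describes, in socket.io (the event names and payload shape are illustrative assumptions; it shows only the message path, not a synchronization mechanism):

const io = require('socket.io').listen(3000);

io.on('connection', function (socket) {
  socket.on('MOVE-RIGHT', function () {
    // Rebroadcast to every connected client, tagged with the sender's id
    io.emit('MOVE', { action: 'MOVE-RIGHT', client: socket.id });
  });
});

Each emit arrives only after that client's own network delay, which is exactly the asymmetry the question is about.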
The Socket.io API has the ability to send messages to all clients.
With one server and all sockets in memory, I understand how that server can send a message to all its clients; that's pretty obvious. But what about with multiple servers using Redis to store the sockets?
If I have client a connected to server y and client b connected to server z (and a Redis box for the store) and I do socket.broadcast.emit on one server, the client on the other server will receive this message. How?
How do the clients that are actually connected to the other server get that message?
Is one server telling the other server to send a message to its connected client?
Is the server establishing its own connection to the client to send that message?
Socket.io uses a MemoryStore by default, so all the connected clients are stored in memory, making it impossible (well, not quite, but more on that later) to send and receive events from clients connected to a different socket.io server.
One way to make all the socket.io servers receive all the events is to have every server use Redis's pub/sub. So, instead of using socket.emit, one can publish to Redis:
// Publisher: push the event into a Redis channel instead of emitting it locally
var redis_client = require('redis').createClient();
redis_client.publish('channelName', data);
And all the socket servers subscribe to that channel through Redis and, upon receiving a message, emit it to the clients connected to them:
// Subscriber: every socket.io server listens on the same channel
var redis_sub = require('redis').createClient();
redis_sub.subscribe('channelName', 'moreChannels');
redis_sub.on('message', function (channel, message) {
  // io is this server's socket.io instance; emit to all of its connected clients
  io.sockets.emit(channel, message);
});
Complicated stuff! But wait, it turns out you don't actually need this sort of code to achieve the goal. Socket.io has a RedisStore which essentially does what the code above is supposed to do, in a nicer way, so that you can write socket.io code as you would for a single server and it will still get propagated to the other socket.io servers through Redis.
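A sketch of that configuration, matching the socket.io 0.9-era API (later socket.io versions moved this functionality into the separate socket.io-redis adapter):

var sio = require('socket.io');
var io = sio.listen(8080);

// RedisStore ships with socket.io 0.9 and needs three Redis connections
var RedisStore = require('socket.io/lib/stores/redis');
var redis = require('redis');

io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));

// From here on, a plain io.sockets.emit(...) is propagated across servers via Redis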
To summarise: socket.io sends messages across multiple servers by using Redis as the channel instead of memory.
There are a few ways you can do this. More info in this question. A good explanation of how pub/sub in Redis works is here, in Redis' docs. An explanation of how the paradigm works in general is here, on Wikipedia.
Quoting the Redis docs:
SUBSCRIBE, UNSUBSCRIBE and PUBLISH implement the Publish/Subscribe messaging paradigm where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels, and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.