Sending Aiohttp Websocket messages between workers - python-3.x

Aiohttp has great websocket support:
# dictionary where the keys are user ids and the values are websocket connections
users_websock = {}

class WebSocket(web.View):
    async def get(self):
        # create the websocket instance
        ws = web.WebSocketResponse()
        await ws.prepare(self.request)
        users_websock['some_user_id'] = ws
        async for msg in ws:
            # handle incoming messages
            ...
And now when I need to send a message to a specific user:
# using ws.send_str() to send data over the websocket connection
ws = users_websock['user_id']
await ws.send_str('Some data')
That's good as long as I have only one server worker. But in production we always have multiple workers, and of course every worker has its own users_websock dictionary.
So the actual problem occurs when we need to send a message from worker 1 to some user connected to worker 2.
And the question is how and where I should store the list of websocket connections so that each worker can get the necessary connection?
Maybe I can store in the DB some connection id or something to create websocket instance everywhere?
Or there is another way to approach this?

aiohttp doesn't provide interprocess/internode communication channels; that is another big and interesting area, not related to HTTP processing itself.
You need channels between your workers. They can be Redis pub/sub, RabbitMQ, or even websockets -- depending on your stack.
Or tools like https://crossbar.io/
There is no single solution that covers all possible cases.
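The pattern this answer describes is language-agnostic, so here is a hypothetical miniature of it in JavaScript: each "worker" keeps only its own local connection map, and a shared channel (standing in for Redis pub/sub or a broker) fans every published message out to all workers; only the worker that actually holds the target user's connection delivers it. All names here (Channel, Worker, deliver) are made up for illustration.

```javascript
// A toy broker: in production this would be Redis pub/sub, RabbitMQ, etc.
class Channel {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(msg) { this.subscribers.forEach((fn) => fn(msg)); }
}

class Worker {
  constructor(channel) {
    this.local = new Map(); // userId -> websocket (fake here)
    channel.subscribe((msg) => this.deliver(msg));
  }
  connect(userId, ws) { this.local.set(userId, ws); }
  // Every worker sees every published message, but only the worker
  // holding the target user's connection actually sends it.
  deliver({ userId, data }) {
    const ws = this.local.get(userId);
    if (ws) ws.send(data);
  }
}

const channel = new Channel();
const worker1 = new Worker(channel);
const worker2 = new Worker(channel);

const received = [];
worker2.connect('alice', { send: (d) => received.push(d) });

// worker1 doesn't know alice; the publish still reaches her via worker2.
channel.publish({ userId: 'alice', data: 'hello from worker 1' });
```

The point is that no worker needs the full connection table; the broker makes routing a non-problem.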

Related

Reuse socket after connect fails in node

I need to reuse the socket for two connect calls made using http.request. I tried passing a custom agent limiting the number of sockets, but the first socket is removed before the 2nd connect call is made, by this code:
https://github.com/nodejs/node/blob/master/lib/_http_client.js#L438
mock code:
var options = {
    method: 'CONNECT', agent: new http.Agent({ keepAlive: true, maxSockets: 1 })
};
var request = this.httpModule.request(options);
request.on('connect', (res, sock, head) => {
    console.log(sock.address());
    // some processing...
    var request2 = this.httpModule.request(options);
    request2.on('connect', (res, sock, head) => {
        console.log(sock.address());
    });
    request2.end();
});
request.end();
Is there some way by which I can reuse the same socket for two connect calls?
The two unique sockets are required for this form of communication.
Each socket in this case represents a connection between a client and a server. There is no such socket that represents n clients and one server, so to speak. They also don't act like "threads" here, where one socket can perform work for many clients.
By setting the max sockets to 1, you've requested that only 1 client connection be active at any time. When you try to connect that second client, it kills the first one because the max is reached and we need room for a new connection!
If you want to recycle sockets -- For example, a client connects, refreshes the page after an hour, and the same client triggers another connection -- There's probably not a way to do it this high in the technology stack, and it would be far more complicated and unnecessary than destroying the old socket to make way for a new one anyway. If you don't understand why you would or wouldn't need to do this, you don't need to do it.
If you want to send a message to many clients (and you wanted to accomplish it "under one socket" in your question), consider using the broadcast and emit methods.

Socket.io : How to handle/manage multiple clients requests and responses?

I am very eager to integrate Socket.io into my node.js project, but I have some confusion about how to properly use socket.io. I have been looking through documentation and tutorials, but I have not been able to understand some concepts of how socket.io works.
The scenario I have in mind is the following:
There are multiple clients C1, C2, ..., Cn
Clients emit requests to the server: R1, ..., Rn
The server receives the requests and does data processing
When data processing is complete, the server emits responses to the clients: Rs1, ..., Rsn
The confusion I have in this scenario is that, when the server has finished data processing it emits the response in the following way:
// server listens for requests from clients
socket.on('request_from_client', function(data){
    // user data and request_type are stored in the data variable
    var user = data.user.id;
    var action = data.action;
    // server does data processing
    do_some_action(..., function(rData){
        // when processing completes, the response data is emitted as a response_event
        // The problem is here: how to make sure the response data goes to the right client
        socket.emit('response_to_client', rData);
    });
});
But here I have NOT defined which client I am sending the response to!
How does socket.io handle this ?
How does socket.io make sure that: response Rs1 is sent to C1 ?
What is making sure that: response Rs1 is not sent to C2 ?
I hope I have well explained my doubts.
The instance of the socket object corresponds to a client connection. So every message you emit from that instance is sent to the client that opened that socket connection. Remember that upon the connection event you get the socket connection object (through the connection callback). This event triggers every time a client connects to the socket.io server.
If you want to send a message to all clients you can use
io.sockets.emit("message-to-all-clients")
and if you want to send an event to every client apart the one that emits the event
socket.broadcast.emit("message-to-all-other-clients");

Websocket transport reliability (Socket.io data loss during reconnection)

Used
NodeJS, Socket.io
Problem
Imagine there are 2 users U1 & U2, connected to an app via Socket.io. The algorithm is the following:
U1 completely loses Internet connection (ex. switches Internet off)
U2 sends a message to U1.
U1 does not receive the message yet, because the Internet is down
Server detects U1 disconnection by heartbeat timeout
U1 reconnects to socket.io
U1 never receives the message from U2 - it is lost on Step 4 I guess.
Possible explanation
I think I understand why it happens:
at Step 4 the server kills the socket instance, and the queue of messages to U1 as well
Moreover, at Step 5 U1 and the server create a new connection (the old one is not reused), so even if the message were still queued, the previous connection is lost anyway.
Need help
How can I prevent this kind of data loss? I have to use heartbeats, because I don't want people to hang in the app forever. Also I must still allow reconnection, because when I deploy a new version of the app I want zero downtime.
P.S. The thing I call a "message" is not just a text message I can store in a database, but a valuable system message whose delivery must be guaranteed, or the UI screws up.
Thanks!
Addition 1
I already have a user account system. Moreover, my application is already complex. Adding offline/online statuses won't help, because I already have this kind of stuff. The problem is different.
Check out step 2. At this step we technically cannot say whether U1 has gone offline; he just loses connection, let's say for 2 seconds, probably because of bad internet. So U2 sends him a message, but U1 doesn't receive it because the internet is still down for him (step 3). Step 4 is needed to detect offline users; let's say the timeout is 60 seconds. Eventually, in another 10 seconds, the internet connection for U1 is up and he reconnects to socket.io. But the message from U2 is lost in space, because on the server U1 was disconnected by timeout.
That is the problem. I want 100% delivery.
Solution
Store each emit (event name and data) in a per-user object {}, keyed by a random emitID, then send the emit
Confirm the emit on the client side (send the emitID back to the server)
If confirmed, delete the entry from {} by its emitID
If the user reconnects, loop through {} for that user, re-running step 1 for each entry
When the user disconnects and/or connects, flush {} for that user if necessary
// Server
const pendingEmits = {};
socket.on('reconnection', () => resendAllPendingEmits(socket));
socket.on('confirm', (emitID) => { delete pendingEmits[emitID]; });
// Client
socket.on('something', (emitID, data) => {
    // handle data, then acknowledge receipt
    socket.emit('confirm', emitID);
});
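The steps above can be sketched as plain objects, with no real sockets involved; the `transport` callback below is a stand-in for socket.emit, and all names are illustrative.

```javascript
// Minimal model of the pending-emit / confirm cycle described above.
let nextId = 0;
const pendingEmits = {}; // emitID -> { name, data }

function sendEmit(transport, name, data) {
  const emitID = String(nextId++);
  pendingEmits[emitID] = { name, data }; // step 1: store before sending
  transport(emitID, name, data);         // then actually emit
}

function confirm(emitID) {
  delete pendingEmits[emitID];           // step 3: delete once confirmed
}

function resendPending(transport) {      // step 4: on reconnect, resend all
  for (const [emitID, { name, data }] of Object.entries(pendingEmits)) {
    transport(emitID, name, data);
  }
}

// Simulate: two emits, the client only confirms the first.
const delivered = [];
sendEmit((id, name, data) => delivered.push(data), 'chat', 'hi');
sendEmit(() => { /* lost in transit */ }, 'chat', 'are you there?');
confirm('0'); // client acked the first emit

// On reconnect, only the unconfirmed emit is resent.
resendPending((id, name, data) => delivered.push(data));
```

Note the store-before-send order: if the process dies between the two lines, the emit is resent on reconnect rather than lost.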
Solution 2 (kinda)
Added 1 Feb 2020.
While this is not really a solution for websockets, someone may still find it handy. We migrated from websockets to SSE + Ajax. SSE lets a client keep a persistent HTTP connection open and receive messages from the server in realtime. To send messages from the client to the server, simply use Ajax. There are disadvantages, like latency and overhead, but SSE guarantees reliability because it is a TCP connection.
Since we use Express we use this library for SSE https://github.com/dpskvn/express-sse, but you can choose the one that fits you.
SSE is not supported in IE and most Edge versions, so you would need a polyfill: https://github.com/Yaffle/EventSource.
Others have hinted at this in other answers and comments, but the root problem is that Socket.IO is just a delivery mechanism, and you cannot depend on it alone for reliable delivery. The only person who knows for sure that a message has been successfully delivered to the client is the client itself. For this kind of system, I would recommend making the following assertions:
Messages aren't sent directly to clients; instead, they get sent to the server and stored in some kind of data store.
Clients are responsible for asking "what did I miss" when they reconnect, and will query the stored messages in the data store to update their state.
If a message is sent to the server while the recipient client is connected, that message will be sent in real time to the client.
Of course, depending on your application's needs, you can tune pieces of this--for example, you can use, say, a Redis list or sorted set for the messages, and clear them out if you know for a fact a client is up to date.
Here are a couple of examples:
Happy path:
U1 and U2 are both connected to the system.
U2 sends a message to the server that U1 should receive.
The server stores the message in some kind of persistent store, marking it for U1 with some kind of timestamp or sequential ID.
The server sends the message to U1 via Socket.IO.
U1's client confirms (perhaps via a Socket.IO callback) that it received the message.
The server deletes the persisted message from the data store.
Offline path:
U1 loses internet connectivity.
U2 sends a message to the server that U1 should receive.
The server stores the message in some kind of persistent store, marking it for U1 with some kind of timestamp or sequential ID.
The server sends the message to U1 via Socket.IO.
U1's client does not confirm receipt, because they are offline.
Perhaps U2 sends U1 a few more messages; they all get stored in the data store in the same fashion.
When U1 reconnects, it asks the server "The last message I saw was X / I have state X, what did I miss."
The server sends U1 all the messages it missed from the data store, based on U1's request.
U1's client confirms receipt and the server removes those messages from the data store.
If you absolutely want guaranteed delivery, then it's important to design your system in such a way that being connected doesn't actually matter, and that realtime delivery is simply a bonus; this almost always involves a data store of some kind. As user568109 mentioned in a comment, there are messaging systems that abstract away the storage and delivery of said messages, and it may be worth looking into such a prebuilt solution. (You will likely still have to write the Socket.IO integration yourself.)
If you're not interested in storing the messages in the database, you may be able to get away with storing them in a local array; the server tries to send U1 the message, and stores it in a list of "pending messages" until U1's client confirms that it received it. If the client is offline, then when it comes back it can tell the server "Hey I was disconnected, please send me anything I missed" and the server can iterate through those messages.
Luckily, Socket.IO provides a mechanism that allows a client to "respond" to a message, which looks like native JS callbacks. Here is some pseudocode:
// server
pendingMessagesForSocket = [];

function sendMessage(message) {
    pendingMessagesForSocket.push(message);
    socket.emit('message', message, function() {
        // the client acknowledged receipt; stop tracking the message
        pendingMessagesForSocket.remove(message);
    });
}

socket.on('reconnection', function(lastKnownMessage) {
    // you may want to make sure you resend them in order, or one at a time, etc.
    for (message in pendingMessagesForSocket since lastKnownMessage) {
        socket.emit('message', message, function() {
            pendingMessagesForSocket.remove(message);
        });
    }
});

// client
socket.on('connection', function() {
    if (previouslyConnected) {
        socket.emit('reconnection', lastKnownMessage);
    } else {
        // first connection; any further connections mean we disconnected
        previouslyConnected = true;
    }
});

socket.on('message', function(data, callback) {
    // do something with `data`
    lastKnownMessage = data;
    callback(); // confirm we received the message
});
This is quite similar to the last suggestion, simply without a persistent data store.
You may also be interested in the concept of event sourcing.
Michelle's answer is pretty much on point, but there are a few other important things to consider. The main question to ask yourself is: "Is there a difference between a user and a socket in my app?" Another way to ask that is "Can each logged in user have more than 1 socket connection at one time?"
In the web world it is probably always possible for a single user to have multiple socket connections, unless you have specifically put something in place to prevent it. The simplest example is a user with two tabs of the same page open. In these cases you don't care about sending a message/event to the human user just once... you need to send it to each socket instance for that user, so that each tab can run its callbacks to update the UI state. Maybe this isn't a concern for certain applications, but my gut says it would be for most. If this is a concern for you, read on....
To solve this (assuming you are using a database as your persistent storage) you would need 3 tables.
users - 1 to 1 with real people
clients - represents a "tab" that could hold a single connection to the socket server (any 'user' may have multiple)
messages - a message that needs to be sent to a client (not a message that needs to be sent to a user or to a socket)
The users table is optional if your app doesn't require it, but the OP said they have one.
The other thing that needs to be properly defined is: "what is a socket connection?", "when is a socket connection created?", "when is a socket connection reused?". Michelle's pseudocode makes it seem like a socket connection can be reused. With Socket.IO, it CANNOT be. I've seen this be the source of a lot of confusion. There are real-life scenarios where Michelle's example does make sense, but I have to imagine those scenarios are rare. What really happens is that when a socket connection is lost, that connection, its ID, etc. will never be reused. So any messages marked for that socket specifically will never be delivered to anyone, because when the client who had originally connected reconnects, they get a completely brand new connection and a new ID. This means it's up to you to track clients (rather than sockets or users) across multiple socket connections.
So for a web based example here would be the set of steps I'd recommend:
When a user loads a client (typically a single webpage) that has the potential for creating a socket connection, add a row to the clients database which is linked to their user ID.
When the user actually does connect to the socket server, pass the client ID to the server with the connection request.
The server should validate the user is allowed to connect and the client row in the clients table is available for connection and allow/deny accordingly.
Update the client row with the socket ID generated by Socket.IO.
Send any items in the messages table connected to the client ID. There wouldn't be any on initial connection, but if this was from the client trying to reconnect, there may be some.
Any time a message needs to be sent to that socket, add a row in the messages table which is linked to the client ID you generated (not the socket ID).
Attempt to emit the message and listen for the client's acknowledgement.
When you get the acknowledgement, delete that item from the messages table.
You may wish to create some logic on the client side that discards duplicate messages sent from the server since this is technically a possibility as some have pointed out.
Then when a client disconnects from the socket server (purposefully or via error), DO NOT delete the client row, just clear out the socket ID at most. This is because that same client could try to reconnect.
When a client tries to reconnect, send the same client ID it sent with the original connection attempt. The server will view this just like an initial connection.
When the client is destroyed (user closes the tab or navigates away), this is when you delete the client row and all messages for this client. This step may be a bit tricky.
Because the last step is tricky (at least it used to be; I haven't done anything like that in a long time), and because there are cases like power loss where the client disconnects without cleaning up the client row and never tries to reconnect with the same client row, you probably want something that runs periodically to clean up any stale client and message rows. Or you can just permanently store all clients and messages forever and mark their state appropriately.
So just to be clear: in cases where one user has two tabs open, you will be adding two identical messages to the messages table, each marked for a different client, because your server needs to know whether each client received them, not just each user.
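The users/clients/messages design above can be modeled in a few lines; this is a toy in-memory version using Maps instead of database tables, and every name in it is illustrative rather than part of any real API.

```javascript
// Toy in-memory version of the users/clients/messages tables.
const clients = new Map();  // clientId -> { userId, socketId }
const messages = new Map(); // messageId -> { clientId, body }
let nextMessageId = 0;

function registerClient(clientId, userId) {
  clients.set(clientId, { userId, socketId: null });
}

// A message for a *user* becomes one row per *client* (e.g. one per tab),
// because each client must acknowledge it independently.
function queueMessageForUser(userId, body) {
  for (const [clientId, c] of clients) {
    if (c.userId === userId) {
      messages.set(String(nextMessageId++), { clientId, body });
    }
  }
}

function acknowledge(messageId) {
  messages.delete(messageId); // delivered: drop the row
}

// User u1 has two tabs open.
registerClient('tab-A', 'u1');
registerClient('tab-B', 'u1');
queueMessageForUser('u1', 'hello');

// Only tab-A acked; tab-B's copy stays queued for redelivery.
acknowledge('0');
```

The key property is that delivery state is tracked per client, never per user or per socket ID.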
As already written in another answer, I also believe you should treat realtime as a bonus: the system should be able to work without realtime too.
I'm developing an enterprise chat for a large company (iOS, Android, web frontend and a .NET Core + Postgres backend), and after having developed a way for the websocket to re-establish the connection (through a socket uuid) and fetch undelivered messages (stored in a queue), I realized there was a better solution: resync via a REST API.
Basically I ended up using the websocket just for realtime, with an integer tag on each realtime message (user online, typers, chat message and so on) to detect lost messages.
When the client gets an id which is not monotonic (last + 1), it knows it is out of sync, so it drops all the socket messages and requests a resync of all its observers through the REST API.
This way we can handle many changes in application state during the offline period without having to parse tons of websocket messages in a row on reconnection, and we are sure to be in sync (because the last sync date is set only by the REST API, not by the socket).
The only tricky part is monitoring realtime messages between the moment you call the REST API and the moment the server replies: what is read from the DB takes time to get back to the client, and in the meanwhile changes could happen, so they need to be cached and taken into account.
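The monotonic-tag check this answer describes is easy to sketch; the following is a hypothetical distillation, where `triggerResync` is a made-up callback standing in for the REST resync, and setting `lastSeen` to the incoming id is a simplification (in the real scheme the REST reply would establish the new sync point).

```javascript
// Every realtime message carries an integer id; a gap means we missed
// something while offline and must resync via the REST API.
function makeSyncChecker(triggerResync) {
  let lastSeen = 0;
  return function onRealtimeMessage(id) {
    if (id === lastSeen + 1) {
      lastSeen = id;   // in order: apply the message normally
      return true;
    }
    triggerResync();   // gap detected: drop socket state, resync via REST
    lastSeen = id;     // simplification; really set by the REST resync
    return false;
  };
}

let resyncs = 0;
const check = makeSyncChecker(() => resyncs++);

check(1); // in order
check(2); // in order
check(4); // id 3 was lost while offline -> triggers a resync
```

After the resync, in-order delivery resumes from the new sync point.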
We are going into production in a couple of months,
I hope to get back sleeping by then :)
It seems that you already have a user account system. You know which accounts are online/offline, so you can handle connect/disconnect events.
So the solution is: add online/offline statuses and offline messages in the database for each user:
chatApp.onLogin(function (user) {
    user.readOfflineMessage(function (msgs) {
        user.sendOfflineMessage(msgs, function (err) {
            if (!err) user.clearOfflineMessage();
        });
    });
});

chatApp.onMessage(function (fromUser, toUser, msg) {
    if (toUser.isOnline()) {
        toUser.sendMessage(msg, function (err) {
            // alert CAN NOT SEND; RETRY?
        });
    } else {
        toUser.addToOfflineQueue(msg);
    }
});
Look here: Handle browser reload socket.io.
I think you could use solution which I came up with. If you modify it properly, it should work as you want.
What I think you want is to have a reusable socket for each user, something like:
Client:
socket.on("msg", function(){
    socket.emit("msg-conf"); // acknowledge receipt (the server listens for "msg-conf")
});
Server:
// Add this socket property to all users, with your existing user system
user.socket = {
    messages: [],
    io: null
};

user.send = function(msg){ // Call this method to send a message
    if(this.socket.io){ // socket.io will be set to null when disconnected
        var self = this;
        // Wait for confirmation that the message was received.
        var hasconf = false;
        this.socket.io.on("msg-conf", function(data){
            // Expect the client to emit "msg-conf"
            hasconf = true;
        });
        // send the message
        this.socket.io.emit("msg", msg); // if connected, emit over the socket
        setTimeout(function(){
            if(!hasconf){
                self.socket.io = null;         // client did not respond: mark them as offline
                self.socket.messages.push(msg); // and add the message to the queue
            }
        }, 60 * 1000); // Make sure this is the same as your timeout.
    } else {
        this.socket.messages.push(msg); // Otherwise the user is offline; queue the message
    }
};

user.flush = function(){ // Call this when the user comes back online
    var queued = this.socket.messages;
    this.socket.messages = []; // empty the queue; send() re-queues on failure
    for(var i = 0; i < queued.length; i++){ // For every queued message, send it
        this.send(queued[i]);
    }
};

// Make sure this runs whenever the user gets logged in/comes online
user.onconnect = function(socket){
    this.socket.io = socket; // Set the socket.io socket
    this.flush(); // Send all messages that are waiting
};

// Make sure this is called when the user disconnects/logs out
user.disconnect = function(){
    this.socket.io = null; // Set the socket to null, so messages are queued, not sent
};
Then the socket queue is preserved between disconnects.
Make sure it saves each users socket property to the database and make the methods part of your user prototype. The database does not matter, just save it however you have been saving your users.
This will avoid the problem mentioned in Addition 1 by requiring a confirmation from the client before marking the message as sent. If you really wanted to, you could give each message an id and have the client send the message id with msg-conf, then check it.
In this example, user is the template user that all users are copied from, or like the user prototype.
Note: This has not been tested.
I've been looking at this stuff lately and think a different path might be better.
Try looking at Azure Service Bus; queues and topics take care of the offline states.
The messages wait for the user to come back, and then they get delivered.
There is a cost to run a queue, but it's around $0.05 per million operations for a basic queue, so the development cost would be higher, given the hours of work needed to write a queuing system.
https://azure.microsoft.com/en-us/pricing/details/service-bus/
And Azure Service Bus has libraries and examples for PHP, C#, Xamarin, Angular, JavaScript, etc.
So the server sends the message and does not need to worry about tracking it.
The client can use messages to send back as well, which means you can handle load balancing if needed.
Try this emit cheat sheet:
io.on('connect', onConnect);

function onConnect(socket){
    // sending to the client
    socket.emit('hello', 'can you hear me?', 1, 2, 'abc');
    // sending to all clients except sender
    socket.broadcast.emit('broadcast', 'hello friends!');
    // sending to all clients in 'game' room except sender
    socket.to('game').emit('nice game', "let's play a game");
    // sending to all clients in 'game1' and/or in 'game2' room, except sender
    socket.to('game1').to('game2').emit('nice game', "let's play a game (too)");
    // sending to all clients in 'game' room, including sender
    io.in('game').emit('big-announcement', 'the game will start soon');
    // sending to all clients in namespace 'myNamespace', including sender
    io.of('myNamespace').emit('bigger-announcement', 'the tournament will start soon');
    // sending to individual socketid (private message)
    socket.to(<socketid>).emit('hey', 'I just met you');
    // sending with acknowledgement
    socket.emit('question', 'do you think so?', function (answer) {});
    // sending without compression
    socket.compress(false).emit('uncompressed', "that's rough");
    // sending a message that might be dropped if the client is not ready to receive messages
    socket.volatile.emit('maybe', 'do you really need it?');
    // sending to all clients on this node (when using multiple nodes)
    io.local.emit('hi', 'my lovely babies');
}

Overwriting Backbone.sync for socket.io

I'm working on a socket.io based server/client connection instead of ajax.
The client uses Backbone, and I've overwritten the Backbone.sync function with a
half-assed one of my own:
Backbone.sync = function (method, collection, options) {
    // use the window.io variable that was attached on init
    var socket = window.io.connect('http://localhost:3000');
    // emit the collection/model data with standard ajax method names and options
    socket.emit(method, {collection: collection.name, url: collection.url});
    // create a model in the collection for each frame coming in through that connection
    socket.on(collection.url, function(socket_frame){
        collection.create(socket_frame['model']);
    });
};
Instead of ajax calls I simply emit through the socket attached to the window.io global var. The server listens for those emits and routes based on the model url. I don't want to change that behaviour, and I use the default CRUD method names (read, patch, ...) inside each emitted frame. The logic behind it (a bit far-fetched, but who knows) is that if the client doesn't support websockets I can easily fall back to the default jQuery ajax. I attached the original Backbone.sync to a var so I can pass the same arguments to it when no websocket is available.
All that greatness behaves properly, and the server answers the client's events. The server then emits each model's data as a separate websocket frame over one connection.
I see the frames in the Network/Websocket filter as one (concurrent/established) connection, and things seem to be working.
Currently the function assumes I pass a collection and not a model.
Questions:
Is that approach ok with you?
How can I use the socket.io callbacks for 'success' and 'failure' etc. in Backbone the right way, so I don't have to call the collection.create function by hand?
Is it better to establish different concurrent connections for models/collections or use the one already established instead?
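One possible direction for the second question: instead of calling collection.create by hand, hand the incoming frame to the options.success callback that Backbone already passes into sync, and let Backbone's own machinery create/merge the models. This is only a hypothetical sketch, with a fake socket object (no real socket.io), and it deliberately registers the listener before emitting so a fast reply isn't missed.

```javascript
// Hypothetical sync wrapper: route the socket frame into Backbone's
// options.success / options.error callbacks.
function makeSync(socket) {
  return function sync(method, collection, options) {
    // register the listener before emitting, so a fast reply isn't missed
    socket.on(collection.url, function (frame) {
      if (frame.error) {
        if (options.error) options.error(frame.error);
      } else if (options.success) {
        options.success(frame.model); // Backbone creates/merges models itself
      }
    });
    socket.emit(method, { collection: collection.name, url: collection.url });
  };
}

// Fake socket: emit triggers an immediate "response" on the url channel.
const handlers = {};
const fakeSocket = {
  emit(method, payload) {
    (handlers[payload.url] || []).forEach((h) => h({ model: { id: 1 } }));
  },
  on(channel, handler) {
    (handlers[channel] = handlers[channel] || []).push(handler);
  },
};

const results = [];
const sync = makeSync(fakeSocket);
sync('read', { name: 'frames', url: '/frames' }, {
  success: (data) => results.push(data),
});
```

With a real socket you would also want to remove the listener once the response arrives, or you will stack up handlers on repeated syncs.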

Communicating between two different processes in Node.js

The issue is:
Let's assume we have two Node.js processes running: example1.js and example2.js.
In example1.js there is function func1(input) which returns result1 as a result.
Is there a way from within example2.js to call func1(input) and obtain result1 as the outcome?
From what I've learned about Node.js, I have only found one solution, which uses sockets for communication. This is less than ideal, however, because it would require one process to listen on a port. If possible, I wish to avoid that.
EDIT: After some questions I'd love to add that, in the hierarchy, example1.js cannot be a child process of example2.js, but rather the opposite. Also, if it helps, there can be only one example1.js processing its own data, and many example2.js's, each processing its own data plus data from the first process.
The use case you describe makes me think of dnode, with which you can easily expose functions to be called by different processes, coordinated by dnode, which uses network sockets (and socket.io, so you can use the same mechanism in the browser).
Another approach would be to use a message queue, there are many good bindings for different message queues.
The simplest way to my knowledge, is to use child_process.fork():
This is a special case of the spawn() functionality for spawning Node processes. In addition to having all the methods in a normal ChildProcess instance, the returned object has a communication channel built-in. The channel is written to with child.send(message, [sendHandle]) and messages are received by a 'message' event on the child.
So, for your example, you could have example2.js:
var fork = require('child_process').fork;
var example1 = fork(__dirname + '/example1.js');

example1.on('message', function(response) {
    console.log(response);
});

example1.send({input: 'world'});
And example1.js:
function func(input) {
    process.send('Hello ' + input);
}

process.on('message', function(m) {
    func(m.input);
});
Maybe you should try Messenger.js. It can do IPC in a handy way.
You don't have to implement the communication between the two processes yourself.
Use Redis as a message bus/broker.
https://redis.io/topics/pubsub
You can also use socket messaging like ZeroMQ, which are point to point / peer to peer, instead of using a message broker like Redis.
How does this work?
With Redis, in both your node applications you have two Redis clients doing pub/sub. So each node.js app would have a publisher and a subscriber client (yes, you need 2 clients per node process for Redis pub/sub).
With ZeroMQ, you can send messages via IPC channels, directly between node.js processes, (no broker involved - except perhaps the OS itself..).
