I am building a chat application on Node.js using socket.io.
The problem is with concurrent callbacks; the explanation is as follows:
I store the socket IDs of related users in the database.
On the socket.io disconnect event I look up the user by socket.id in MongoDB, do some DB operations, remove the socket.id from the subscriber socket list, and write the updated list (an array in MongoDB) back to the same field.
Now the problem is with concurrent requests:
All the callbacks find the socketId.
Then each callback builds a list of socket IDs excluding its own (since the event is disconnect), and then they all write back to the DB.
The problem with this approach is that even though I expect the field to end up empty, I end up with socket IDs remaining; only one is removed.
I hope I was able to explain. Can anybody please let me know what I am doing wrong here? This seems to be a very common problem; what is the standard approach to solving it?
Code snippet:
if (sSocketIds.length > 1) {
    console.log("{disconnect} session contains multiple sockets; removing the current one");
    var nSubscriberSocketids = [];
    for (var cnt = 0; cnt < sSocketIds.length; cnt++) {
        if (sSocketIds[cnt] != socket.id) {
            nSubscriberSocketids.push(sSocketIds[cnt]);
        }
    }
    // The line below is the problem: I always end up with n-1 socketIds due to concurrency.
    __db.SessionFrame.update({ "_id": session._id }, { "subscriberSocketIds": nSubscriberSocketids }, function (err, count) {
        if (err) {
            console.error("{disconnect} error while removing socket: " + socket.id);
        } else {
            console.log("{disconnect} subscriber socket removed successfully: socketId: " + socket.id + " session will still persist");
        }
    });
}
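For context, the usual way to avoid this read-modify-write race is to let the database do the removal atomically with MongoDB's $pull operator, instead of computing the new array in application code. A minimal sketch, assuming the same __db.SessionFrame model and callback style as above:

// Atomically pull this socket.id out of the array in a single operation;
// concurrent disconnects can no longer overwrite each other's writes.
__db.SessionFrame.update(
    { "_id": session._id },
    { "$pull": { "subscriberSocketIds": socket.id } },
    function (err, count) {
        if (err) {
            console.error("{disconnect} error while removing socket: " + socket.id);
        } else {
            console.log("{disconnect} subscriber socket removed: " + socket.id);
        }
    }
);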
Related
I have run into an unforeseen problem with my socket.io setup.
I use socket.io to live load data from my database (mongoDB, nodejs, react).
To accomplish this, I use mongoDB's changestream to detect changes and then push them to the front-end via socket.io.
Now this works perfectly as long as the user is connected. Right now, when the user reconnects, the front-end just reloads all data. While this is fine for most users, there is a small group with a very bad network connection, so their front-end is reloading data all the time, which makes it unresponsive for stretches.
So, I am looking for a way to only send events that occurred during the front-end being offline. While the front-end can do this quite easily: https://socket.io/docs/v4/client-offline-behavior/
It doesn't seem possible to do this on the server side, since socket.io (server side) immediately forgets sockets that have disconnected and thus can't buffer events.
So, I was wondering if there is a good way to do this? Or would this need a full "wrapper" around socket.io that caches disconnected sockets?
Any help or advice would be appreciated!
I find this a really interesting and painful problem! ^^'
If you can give more details, it may help people give you a better answer.
For instance:
How much data is stored in the database, how much will a typical user receive, and how many events are triggered within a given time frame?
How quickly must an event become visible? I mean, if users receive an event with a 10s or 30s delay, does it harm the service you provide?
How is your data structured? Is it a simple JSON array with the same fields everywhere, custom fields, dynamic JSON objects, etc.?
How is your React app structured? Do you run heavy logic when your data is updated, etc.?
I think you should put more controls in your front-end code and update only when there is new data.
Some paths to explore:
1. Put more controls in your front end
As you stated, for users with a bad connection, the React client seems to update its state too quickly, reloading data after every websocket reconnection, again and again. The UI may well freeze in this case.
For this, I can think of two approaches:
Before updating the state, check whether the current React state is the same as the data you receive over the websocket connection. If the reconnection is quick enough and no new data has arrived, it should be the same; in that case, do not update the React state.
If too many events are triggered and new data arrives after each reconnection, you can buffer the data from the websocket and apply it only once per time frame. By time frame, I mean you can use functions like setInterval or requestAnimationFrame to trigger the React update. Here is some pseudo React code to illustrate this:
import { useState, useRef, useEffect } from "react";

function App() {
    const [events, setEvents] = useState({ datas: [] });
    const bufferedEvents = useRef([]);

    useEffect(() => {
        websocket.on("connected", (newEvents) => {
            bufferedEvents.current = bufferedEvents.current.concat(newEvents);
        });
        websocket.on("data", (newEvent) => {
            bufferedEvents.current = bufferedEvents.current.concat(newEvent);
        });
        // In the setInterval callback, take all the events received at connection
        // time plus the new ones, update the React state, and clear the buffer.
        const intervalId = setInterval(() => {
            const events = bufferedEvents.current;
            bufferedEvents.current = [];
            // Update only if there is new data.
            if (events.length > 0) {
                setEvents((prevState) => ({ datas: prevState.datas.concat(events) }));
            }
        }, 1000); // Trigger a data update every second; you could use requestAnimationFrame instead, and adapt the refresh interval as needed.
        // Do not forget to clear the interval when the component unmounts.
        return () => {
            clearInterval(intervalId);
        };
    }, []);

    return (
        <div>
            <span>Total events: {events.datas.length}</span>
            <br />
            {events.datas.map((event, index) => (
                <div key={index}>{event.data}</div>
            ))}
        </div>
    );
}
You can look at this article for details on using requestAnimationFrame.
I think modifying the front end is needed in any case, but on its own it is not great for performance.
2. Fetch only new data in your back end
For this approach, it really depends on how your data is structured in the database.
If the data carries a timestamp, I can think of a naive but simple cookie holding a timestamp:
When the user connects for the first time, this cookie is null.
When they fetch the data over the websocket connection, they receive all of it. When the data arrives, you update the cookie timestamp with the most recent date found in the data.
When the websocket disconnects, you open a new websocket that carries the cookie timestamp. With this information you can query only the data more recent than the timestamp in the cookie.
This way you don't have to download the entirety of the data, only the fresh part; see the sketch below.
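A minimal sketch of that handshake, assuming a socket.io v4 client/server pair and a hypothetical Events collection with a createdAt field (all names here are illustrative, and I use localStorage rather than a cookie for brevity):

// Client: send the last seen timestamp when (re)connecting,
// so the server can replay only the newer events.
const socket = io("https://example.com", {
    auth: { lastSeen: localStorage.getItem("lastSeen") || null }
});
socket.on("data", (events) => {
    if (events.length > 0) {
        // Remember the newest timestamp we have processed.
        localStorage.setItem("lastSeen", events[events.length - 1].createdAt);
    }
    // ...update the UI with `events`...
});

// Server: on (re)connection, send only what the client missed.
io.on("connection", async (socket) => {
    const lastSeen = socket.handshake.auth.lastSeen;
    const query = lastSeen ? { createdAt: { $gt: new Date(lastSeen) } } : {};
    const missed = await Events.find(query).sort({ createdAt: 1 });
    socket.emit("data", missed);
});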
Other approaches may be more helpful, but without more information on your data and more precise requirements, it is hard to say.
If you have a lot of data, I would personally look into a pagination mechanism, and maybe combine classic HTTP requests for fetching the data with websockets, SSE, or long polling for the live events.
You can add a comment if needed and I will update my answer!
Cheers
I am unable to get two users chatting to each other, despite reducing the complexity and the potential code that could have caused the issue.
I am able to emit to all connected sockets, so I have established it's not an issue with the emit/on structure but rather with the way I'm handling the private socket IDs.
I have tried various ways of sending the private message to the correct socket ID: older ones such as socket.to, and the current way from the docs, which is io.to(sockid).emit('event', message); none of these variations has helped. I have logged the socket ID on my Angular client side with console.log('THIS IS MY SOCKET ' + this.socket.id) and compared it to the value I have in Redis using redis-cli, and they match perfectly every time, which doesn't give me much to go on.
The problem arises here:
if (res === 1) {
    _active_users.get_client_key(recipient)
        .then(socket_id => {
            console.log('=======' + io.sockets.name);
            console.log('I am sending the message to: ' + recipient + ' and my socket id is ' + socket_id);
            // socket.to(socket_id).emit('incoming', "this is top secret")
            io.of('/chat').to(socket_id).emit('incoming', "this is top secret");
        })
        .catch(error => {
            console.log("COULD NOT RETRIEVE KEY: " + error);
        });
}
Here is the link to the pastebin with more context:
https://pastebin.com/fYPJSnWW
The classes I import are essentially just setters and getters for handling the socket ID; you can think of them as worker classes that handle Redis actions.
Expected: two clients can communicate based on just their socket IDs.
Actual: I am able to emit to all connected sockets and receive the expected results, but the problem arises when trying to send to a specific socket ID, for an unknown reason.
The issue was coming from my front end. I hope nobody else gets a headache like this! Here is what happened: when you're digging your own hole, you often don't realise how deep you've got if you don't take the time to look around. I had two instances of the socket: I instantiated both, used one to connect, and used the other to send the message, which of course you cannot do if you want things to work properly. So what I did was create only one instance of the socket and pass that reference around wherever I needed it, i.e. in sendMessage(username, socket) and getMessage(socket):
ngOnInit() {
    this.socket = io.connect('localhost:3600', {
        reconnection: true,
        reconnectionDelay: 1000,
        reconnectionDelayMax: 5000,
        reconnectionAttempts: Infinity
    });
}
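A minimal sketch of that shared-single-instance pattern as a standalone module (the file layout and event names are illustrative, not from the original code):

// socket-client.js: create the socket exactly once and export the same reference,
// so connecting and sending always go through the same instance.
const io = require('socket.io-client');

const socket = io.connect('localhost:3600', { reconnection: true });

function sendMessage(username, message) {
    socket.emit('private', { to: username, message: message });
}

function getMessage(handler) {
    socket.on('incoming', handler);
}

module.exports = { socket, sendMessage, getMessage };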
I'm trying to develop an API for online multiplayer using socket programming in Node.js.
I have some basic questions:
1. How do I know which connection is related to which user?
2. How do I create a socket object related to another person?
3. When it's the opponent's turn, how do I fire an event?
4. There is a limited time per move; how do I handle the timing to create an event and change turns?
As is probably obvious, I don't know how to handle users or, for example, how to list online users.
If you can suggest some articles or answer these questions, that would be great.
Thanks
1. Keep some sort of data structure in memory where you save your sockets. You may want to wrap the Node.js socket in your own object that carries an id property, then store those objects in the in-memory structure:
class User {
    constructor(socket) {
        this.socket = socket;
        this.id = User.counter++; // some random id, or even a counter
    }
}
User.counter = 0;
Then save this object in memory when you get a new socket:
const net = require('net');

const sockets = {};
const server = net.createServer((socket) => {
    const user = new User(socket);
    sockets[user.id] = user;
});
2. I am unsure what you mean by that, but maybe the point above helps out.
3. This depends on when you define a new turn to start. Does the new turn start with something triggered by another user? If so, use the in-memory structure from point 1 to look up the related user and write something back to that socket.
4. Use a timeout. Maybe give your User class an additional timeout property; whenever you want to start a new timer, do timeout = setTimeout(timeoutHandler, howLong). If the timeout handler fires, the user is out of time, so write to the socket. Don't forget to cancel your timeouts when you need to; see the sketch below.
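A small sketch of that timer, assuming the User class above (the event shape and the 30-second limit are illustrative):

// Start (or restart) the per-move timer for a user.
function startTurnTimer(user, ms) {
    clearTimeout(user.timeout); // cancel any previous timer first
    user.timeout = setTimeout(() => {
        // Out of time: tell the user and hand the turn to the opponent.
        user.socket.write(JSON.stringify({ type: "turn-expired" }) + "\n");
    }, ms);
}

// A move arrived in time: stop this user's timer, start the opponent's.
function onMove(user, opponent) {
    clearTimeout(user.timeout);
    startTurnTimer(opponent, 30000);
}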
Also, as a side note: if you are doing this with pure Node.js TCP sockets, you need to come up with some ad-hoc framing protocol. Here is why:
socket.on("data", (data) => {
    // This can fire multiple times for a single socket.write(),
    // due to the streaming nature of TCP.
});
You could do something like:
class User {
    constructor(socket) {
        this.socket = socket;
        this.id = User.counter++; // some random id, or even a counter
        socket.on("data", (data) => {
            // On each message you get, find out the type of message,
            // which could be anything you define. Is it a login?
            // End of turn? A logout?
        });
    }
}
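One common ad-hoc framing scheme is newline-delimited JSON: buffer incoming chunks and split on newlines, so each logical message is parsed exactly once. A minimal sketch (the message shape is an assumption):

let buffered = "";
socket.on("data", (chunk) => {
    buffered += chunk.toString();
    let boundary;
    // One "data" event may carry a partial message or several messages,
    // so keep everything after the last newline for the next event.
    while ((boundary = buffered.indexOf("\n")) !== -1) {
        const line = buffered.slice(0, boundary);
        buffered = buffered.slice(boundary + 1);
        const message = JSON.parse(line); // e.g. { type: "login", ... }
        // ...dispatch on message.type here...
    }
});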
EDIT: This is not something that scales well; it is just to give you an idea of what can be done. Imagine that for some reason you decide to run a single Node.js server instance for hundreds of users: all of those users' socket instances would be held in that one server's memory.
I need to know how to omit two or more sockets in sails.sockets.broadcast in Sails.js. I tried this:
function sendMessage(data) {
    var socketIds = ['socketId1', 'socketId2'];
    sails.sockets.broadcast("room", "event", data, socketIds);
    // ...this sends the data to ALL sockets in the room :/
}
but it doesn't work.
I need this because I have to omit the sockets that belong to the same session (for example, a user's session in both a desktop browser and an Android browser).
Can somebody help?
There's nothing built-in that will do this for you, but broadcast is just a wrapper around emit anyway, so you can roll your own by getting all of the socket IDs in the room you want to broadcast to and omitting the IDs in your array:
// Get all the IDs of the sockets subscribed to "room"
var socketIds = sails.sockets.subscribers("room");
// Remove the IDs you want to omit
socketIds = _.difference(socketIds, ['socketId1','socketId2']);
// Emit your event to the rest!
sails.sockets.emit(socketIds, "event", data);
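Applied to the sendMessage function from the question, that might look like the following sketch (assuming lodash is available as _, as in classic Sails apps):

function sendMessage(data) {
    // Sockets to leave out, e.g. the other sockets of the same session.
    var omitIds = ['socketId1', 'socketId2'];
    var socketIds = _.difference(sails.sockets.subscribers("room"), omitIds);
    sails.sockets.emit(socketIds, "event", data);
}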
I have a MongoDB collection of 3,257,477 cities, and I'm using Mongoose on Node.js to access it. I query it repeatedly (once every 500 ms), and requests are usually answered very quickly. However, when I make a bad typo, the query takes a long time, and requests start to pile up until the initial request is answered. Here are some logs I collected of requests and responses:
21:48:50 started query for "new"
21:48:50 finished query for "new"
21:48:52 started query for "newj ljl" // blockage
21:48:54 started query for "newj"
21:48:55 started query for "new"
21:48:57 started query for "new ye"
21:48:59 started query for "new york"
21:49:08 finished query for "newj ljl" // blockage removed, quick queries flood in
21:49:08 finished query for "new"
21:49:08 finished query for "new york"
21:49:08 finished query for "new ye"
21:49:23 finished query for "newj"
I'm able to cancel the requests made by the client so I'm not worried about queries coming back in the wrong order. And I'm not interested in how to make that query faster at this point, since queries for actual correct spellings are quick.
I'm wondering how a new request can cancel an old request that was made by the same client. In other words "newj ljl" gets canceled when "newj" arrives, "newj" gets canceled when "new" arrives, and so on. If it's just going to be thrown out, why tie up the database?
Is there a proper way to do this?
Update:
I'm aware of db.currentOp().inprog, and I'm thinking I can use the client property of the documents within that array to tell whether it's a repeat request, but I can't quite figure out how to access that from Mongoose. I'm also not sure when to do it, or how to know which operation was spawned by this client (and therefore which to kill). I'd like an actual code example using Mongoose, or the native Node.js MongoDB driver if possible!
Here's some sample code to go off of:
models.City.find({ ... })
.exec(function (err, cities) {
});
Below is what I came up with to solve the issue.
I can easily run db.currentOp().inprog and db.killOp() from the Mongo shell, but I really need this to happen automatically, when needed, from Mongoose. Since you can reference the MongoDB driver via require('mongoose').connection.db, you can execute those commands by running "queries" against the following pseudo-collections:
db.collection('$cmd.sys.inprog');
db.collection('$cmd.sys.killop');
The full solution:
var db = require('mongoose').connection.db,
    // get the client IP address
    ip = request.headers['x-forwarded-for'] ||
        request.connection.remoteAddress ||
        request.socket.remoteAddress ||
        request.connection.socket.remoteAddress;

// same thing as db.currentOp().inprog
db.collection('$cmd.sys.inprog').findOne(function (err, data) {
    if (err) throw err;
    data.inprog.filter(function (op) {
        // get the operation's client IP address without the port
        return ip == op.client.split(':')[0];
    }).forEach(function (op) {
        // same thing as db.killOp()
        db.collection('$cmd.sys.killop')
            .findOne({ 'op': op.opid }, function (err, data) {
                if (err) throw err;
            });
    });
    // start the new cities query
    models.City.find({ ... })
        .exec(function (err, cities) {
        });
});
Helpful links:
https://groups.google.com/forum/#!topic/mongodb-user/1wFp7AqWnM4
drop database with mongoose
How to determine a user's IP address in node
You can try using db.killOp()
http://docs.mongodb.org/manual/reference/method/db.killOp/#db.killOp
UPDATE: You can get the list of current operations from db.currentOp() and identify the operation to be cancelled by matching fields like op, query, and client:
http://docs.mongodb.org/manual/reference/method/db.currentOp/#db.currentOp
You can definitely do this with killOp, and the above solution looks like it could work for the problem as stated. However, I think it may be worthwhile to dig a bit deeper.
The fact that a query which returns no results is noticeably slow is unusual; it reeks of a full collection scan. The questions to ask are, first, do you have indexes set up, and second, are you querying with an unanchored regex? MongoDB doesn't handle regex searches like { "name" : /.*new york.*/ } particularly well, since only anchored prefix regexes such as /^new york/ can use an index.
Also, the whole "send an HTTP request every time the user hits a key" approach is simple and elegant, but it causes unnecessary server load. Perhaps a search button, or a client-side debounce where you only send a request once the user hasn't hit a key for a second, could alleviate the need for the killOp approach; see the sketch below.
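A minimal sketch of that debounce (the 1-second delay, searchInput, and the sendSearchRequest helper are illustrative, not from the answer):

// Fire the search only after the user has stopped typing for `delay` ms,
// so intermediate typo states like "newj ljl" never reach the server.
function debounce(fn, delay) {
    var timer = null;
    return function () {
        var args = arguments;
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(null, args);
        }, delay);
    };
}

var debouncedSearch = debounce(function (text) {
    sendSearchRequest(text); // hypothetical helper that queries the cities endpoint
}, 1000);

searchInput.addEventListener("keyup", function (e) {
    debouncedSearch(e.target.value);
});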