Will the PubNub presence join event fire if the user is already subscribed to the channel? - pubnub

Suppose the same uuid calls subscribe() on a channel, but the subscriptions happen at different times and from different browsers.
Scenario:
10 minutes ago, using Chrome:
subscribe with channel => channel_1
with uuid => abcValue
5 minutes later, using Mozilla:
subscribe with channel => channel_1
with uuid => abcValue
var pubnub = PUBNUB.init({
    publish_key: "demo",
    subscribe_key: "demo",
    uuid: "abcValue"
});
pubnub.subscribe({
    channel: 'channel_1',
    presence: function(value, envelope, source_channel) {
        if (value.action === "join" && value.uuid === 'abcValue') {
            console.log('Join Called');
        }
    }
});
Will 'Join Called' get logged twice or not?

The browser isn't the factor.
And no, you don't get two presence join notices when you use the same uuid, 'abcValue', from Chrome and Firefox; only the first browser (Chrome, in this case) registers the event.
Because 'abcValue' never left, and 'abcValue' joining while 'abcValue' is already joined means you won't get another join.
If you assign a new uuid at each connection (so each browser has a different uuid), you should see two separate join events.
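A minimal sketch of that per-connection approach, assuming the same legacy PUBNUB JavaScript SDK as above and its PUBNUB.uuid() helper (verify the helper exists in your SDK version):

// Generate a fresh uuid for each browser/tab so every connection
// registers its own join event, instead of sharing 'abcValue'.
var pubnub = PUBNUB.init({
    publish_key: "demo",
    subscribe_key: "demo",
    uuid: PUBNUB.uuid()
});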

Related

How to leave a socket room with vue-socket and rejoin without duplicate messages?

When I join the room, then leave the route and go back, and then use the chat I've built, each message is duplicated as many times as I left and rejoined.
This problem goes away when I hard refresh.
I've tried everything I could find thus far, and have been unable to get it to work.
I tried the following on the client side, during beforeRouteLeave, beforeDestroy and window.onbeforeunload:
this.$socket.removeListener("insertListener"); --> tried with all
this.$socket = null
this.$socket.connected = false
this.$socket.disconnected = true
this.$socket.removeAllListeners()
this.$socket.disconnect()
During the same events, I also sent a this.$socket.emit("leaveChat", roomId) and then, on the server side, tried the following inside the io.on("connection") handler's socket.on("leaveChat", function(roomId) {}):
socket.leave(roomId) --> this is what, according to the docs, should work;
socket.disconnect()
socket.off() --> seems to be deprecated
socket.removeAllListeners(roomId)
There were a bunch of other things I tried that I can't remember but will update the post if I do.
Either it somehow disconnects but, upon rejoining, previous listeners are somehow still around, meaning each message is received as many times as I've rejoined; or, if I disconnect, I don't seem to be able to reconnect at all.
On joining, I emit the room id to the server and use socket.join(roomId).
All I want is that, without a refresh, the user leaves the room when they leave the page, and when they go back they rejoin, with no duplicate messages occurring.
I am currently trying to chew through the source code.
Full disclosure here: I didn't read the full response posted by roberfoenix, but this is a common issue with socket.io, and it comes down to calling the 'on' event multiple times.
When you create an .on event for your socket it's a bind, and you can bind to the same event multiple times.
My assumption is that when a user hits a page you run something like
socket.on("joinRoom", data)
This in turn will join the room, pull your messages from Mongo (or something else) and then emit to the room (side note: using .once can help so you don't emit to every user when a user joins a room).
Now you leave the room and call socket.emit('leaveRoom', room); cool, you left the room. Then you go back into the room, and guess what: you have now bound to the same on event again, so when you emit, it emits twice to that user, and so on.
The way we addressed this is to place all our on-events into a function and call the function once. So, when a user hits a page, it runs something like socketInit();
The socketInit function would look something like this:
var init = false;

function socketInit() {
    if (init === false) {
        // Cool, it has not run yet, so bind our on events
        socket.on("event", function(data) { /* handle event */ });
        socket.on("otherEvent", function(data) { /* handle other event */ });
        init = true;
    }
}
Basically, init is a global variable: if it's false, bind your events; otherwise don't rebind.
This could be improved to use a promise, or be done on connect, but if a user reconnects it may run again.
If you're using Vue-Socket and feel like going slightly mad having tried everything, this may be your solution.
Turns out challenging core assumptions and investigating from the ground up pays off. It is possible to bury yourself so deeply in Socket.io that you forget you were using Vue-Socket.
The solution in my case was using Vue-Socket's built in unsubscribe function.
With Vue-Socket, one of the ways you can initially subscribe to events is as follows:
this.sockets.subscribe('EVENT_NAME', (data) => {
    this.msg = data.message;
});
Because you're using Vue-Socket, not plain Socket.io, you also need to use Vue-Socket's way of unsubscribing right before you leave the room (unless you were looking for a very custom solution). This, I suspect, is why many of the other things I tried didn't work and did next to nothing!
The way you do that is as follows:
this.sockets.unsubscribe('EVENT_NAME');
Do that for any events causing you trouble in the form of duplicates. The reason you'd be getting duplicates in the first place, especially upon rejoining after leaving a room, is that the previous event listeners were still running, so a single user ends up acting as two or more listeners.
An alternative possibility is that you're emitting the message to everyone, including the original sender, when you should most likely be emitting it to everyone else except the sender (see the socket.io emit cheatsheet).
If the above doesn't solve it for you, make sure you're actually leaving the room, and doing so server-side. You can accomplish that by emitting a signal to the server right before leaving the route (in case you're using a reactive single-page application), receiving it server-side, and calling socket.leave(yourRoomName) inside your io.on("connection", function(socket) {}) handler, as sketched below.
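A minimal sketch of that client-plus-server handshake, assuming vue-socket.io on the client and a hypothetical "leaveChat" event name:

// Client (inside the Vue component options):
beforeRouteLeave(to, from, next) {
    this.sockets.unsubscribe('EVENT_NAME');      // drop the listener so rejoining doesn't double-bind
    this.$socket.emit('leaveChat', this.roomId); // tell the server to remove us from the room
    next();
},

// Server: actually leave the socket.io room
io.on('connection', function(socket) {
    socket.on('leaveChat', function(roomId) {
        socket.leave(roomId);
    });
});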

Can't get key's value after subscribing for expiration

I have a shadow key which is supposed to expire. After listening for its expiration, I take the key name, derive the key which holds the real value, and try to get its value.
Code:
// Set the config for "notify-keyspace-events" so expired-type events are published
listener.send_command('config', ['set', 'notify-keyspace-events', 'Ex']);
// __keyevent@0__:expired is the channel name we need to subscribe to; 0 is the default DB
listener.subscribe('__keyevent@0__:expired');
listener.on('message', (chan, msg) => {
    listener.get(`${msg}-details`, redis.print);
});
Getting the error below after running listener.get:
ReplyError: ERR only (P)SUBSCRIBE / (P)UNSUBSCRIBE / PING / QUIT allowed in this context
I need the real key's value.
As noted in the SUBSCRIBE command documentation:
Once the client enters the subscribed state it is not supposed to
issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE,
UNSUBSCRIBE, PUNSUBSCRIBE, PING and QUIT commands.
The usual pattern is to have two client connections (you would call redis.createClient() twice). Here is an example: How to receive Redis expire events with node?
Basically, you would have one connection for the expiration events, and one for the other logic you want (getting the key value, etc.), as sketched below.
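A minimal sketch of the two-connection pattern, assuming the node_redis client: the subscriber connection holds the subscription, while the second connection stays free for normal commands.

const redis = require('redis');

// One connection dedicated to pub/sub; once subscribed, it cannot run GET
const listener = redis.createClient();
// A second connection for regular commands
const client = redis.createClient();

listener.send_command('config', ['set', 'notify-keyspace-events', 'Ex']);
listener.subscribe('__keyevent@0__:expired');

listener.on('message', (chan, msg) => {
    // Use the non-subscribed connection to read the real key's value
    client.get(`${msg}-details`, redis.print);
});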

Mongo Change Streams running multiple times (kind of): Node app running multiple instances

My Node app uses Mongo change streams, and the app runs 3+ instances in production (more eventually, so this will become more of an issue as it grows). So, when a change comes in, the change stream functionality runs as many times as there are processes.
How do I set things up so that the change stream only runs once?
Here's what I've got:
const options = { fullDocument: "updateLookup" };
const filter = [
    {
        $match: {
            $and: [
                { "updateDescription.updatedFields.sites": { $exists: true } },
                { operationType: "update" }
            ]
        }
    }
];
const sitesStream = Client.watch(filter, options);

// Start listening to the site stream
sitesStream.on("change", async change => {
    console.log("in site change stream", change);
    console.log(
        "in site change stream, update desc",
        change.updateDescription
    );
    // Do work...
    console.log("site change stream done.");
    return;
});
This can easily be done with MongoDB query operators alone. You can add a modulo query on the ID field, where the divisor is the number of your app instances (N). The remainder is then an element of {0, 1, 2, ..., N-1}. If your app instances are numbered in ascending order from zero to N-1, you can write the filter like this:
const filter = [
    {
        "$match": {
            "$and": [
                // Other filters
                { "_id": { "$mod": [<number of instances>, <this instance's id>] } }
            ]
        }
    }
];
Doing this with strong guarantees is difficult but not impossible. I wrote about the details of one solution here: https://www.alechenninger.com/2020/05/building-kafka-like-message-queue-with.html
The examples are in Java but the important part is the algorithm.
It comes down to a few techniques:
Each process attempts to obtain a lock
Each lock (or each change) has an associated fencing token
Processing each change must be idempotent
While processing the change, the token is used to ensure ordered, effectively-once updates (see the sketch below).
More details in the blog post.
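A minimal sketch of the fencing-token check in MongoDB terms, under the assumption that each change carries a monotonically increasing token and each document records the last token applied (all names here are hypothetical):

async function applyChange(collection, docId, updates, fencingToken) {
    // Apply the update only if our fencing token is newer than the one
    // last recorded on the document; a stale lock holder is rejected here.
    await collection.updateOne(
        { _id: docId, lastToken: { $lt: fencingToken } },
        { $set: { ...updates, lastToken: fencingToken } }
    );
}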
It sounds like you need a way to partition updates between instances. Have you looked into Apache Kafka? Basically, you would have a single application that writes the change data to a partitioned Kafka topic, and your Node application would be a Kafka consumer. This would ensure only one application instance ever receives a given update.
Depending on your partitioning strategy, you could even ensure that updates for the same record always go to the same Node app (if your application needs to maintain its own state). Otherwise, you can spread the updates out in a round-robin fashion.
The biggest benefit of using Kafka is that you can add and remove instances without having to adjust configuration. For example, you could start one instance and it would handle all updates. Then, as soon as you start another instance, each starts handling half of the load. You can continue this pattern for as many instances as there are partitions (and you can configure the topic to have thousands of partitions if you want); that is the power of the Kafka consumer group. Scaling down works in reverse.
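A minimal sketch of the producer side using the kafkajs client (the library choice and topic name are assumptions, not from the original answer); keying messages by document id means all updates for one record land on the same partition, and therefore the same consumer:

const { Kafka } = require("kafkajs");

const kafka = new Kafka({ brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function start() {
    await producer.connect();
    sitesStream.on("change", async change => {
        await producer.send({
            topic: "site-changes",
            messages: [{
                key: String(change.documentKey._id), // same key => same partition
                value: JSON.stringify(change)
            }]
        });
    });
}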
While the Kafka option sounded interesting, it was a lot of infrastructure work on a platform I'm not familiar with, so I decided to go with something a little closer to home for me: sending an MQTT message to a little standalone app, and letting the MQTT server monitor messages for uniqueness.
siteStream.on("change", async change => {
console.log("in site change stream);
const mqttClient = mqtt.connect("mqtt://localhost:1883");
const id = JSON.stringify(change._id._data);
// You'll want to push more than just the change stream id obviously...
mqttClient.on("connect", function() {
mqttClient.publish("myTopic", id);
mqttClient.end();
});
});
I'm still working out the final version of the MQTT server, but the method for evaluating the uniqueness of messages will probably store an array of change stream IDs in application memory, as there is no need to persist them, and decide whether to proceed based on whether that change stream ID has been seen before.
var mqtt = require("mqtt");
var client = mqtt.connect("mqtt://localhost:1883");
var seen = [];

client.on("connect", function() {
    client.subscribe("myTopic");
});

client.on("message", function(topic, message) {
    var context = message.toString().replace(/"/g, "");
    if (seen.indexOf(context) < 0) {
        seen.push(context);
        // Do stuff
    }
});
This doesn't include security, etc., but you get the idea.
What about having a field in the DB called status, updated with findOneAndUpdate based on the event received from the change stream? Say you get two events at the same time from the change stream: the first event updates the status to start, and the other fails because the status is already start. So the second event will not process any business logic.
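One way to realize that idea, assuming a dedicated claims collection keyed by the change stream id (collection and field names are hypothetical): inserting the claim is atomic, so exactly one instance wins and the rest hit a duplicate-key error.

sitesStream.on("change", async change => {
    try {
        // The unique _id makes this insert an atomic claim on the event
        await db.collection("claims").insertOne({ _id: change._id._data, status: "start" });
    } catch (err) {
        if (err.code === 11000) return; // duplicate key: another instance claimed it first
        throw err;
    }
    // Only the winning instance reaches this point; do the business logic here
});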
I'm not claiming these are rock-solid, production-grade solutions, but I believe something like this could work.
Solution 1
Applying read-modify-write:
Add a version field to the document; all newly created docs have version=0
Receive a ChangeStream event
Read the document that needs to be updated
Perform the update on the model
Increment version
Update the document where both id and version match; otherwise discard the change (see the sketch below)
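A minimal sketch of that last conditional update (field and variable names are hypothetical):

async function applyWithVersion(collection, doc, updatedFields) {
    // Succeeds only if no other replica has already bumped the version
    const result = await collection.updateOne(
        { _id: doc._id, version: doc.version },
        { $set: updatedFields, $inc: { version: 1 } }
    );
    if (result.modifiedCount === 0) {
        // Another replica won the race; discard this change
    }
}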
Yes, this creates 2 * n_application_replicas useless queries, so there is another option.
Solution 2
Create a collection of ResumeTokens in Mongo which stores a collection -> token mapping
In the changeStream handler code, after a successful write, update the ResumeToken in the collection
Create a feature toggle that can disable reading the ChangeStream in your application
Configure only a single instance of your application to be the "reader"
In case of "reader" failure, you can either enable reading on another node or redeploy the "reader" node.
As a result, there can be any number of non-reader replicas and there won't be any useless queries; a sketch of the resume-token bookkeeping follows.
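A minimal sketch of that bookkeeping on the "reader" node, assuming a resumeTokens collection (names hypothetical); change._id is the stream's resume token:

async function startReader(db) {
    const saved = await db.collection("resumeTokens").findOne({ collection: "sites" });
    const stream = Client.watch(filter, {
        ...options,
        ...(saved ? { resumeAfter: saved.token } : {}) // pick up where the last reader stopped
    });
    stream.on("change", async change => {
        // ...do the work...
        await db.collection("resumeTokens").updateOne(
            { collection: "sites" },
            { $set: { token: change._id } },
            { upsert: true }
        );
    });
}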

[Node.js Socket.io] Communication between two targeted sockets

I'm currently working on a small Node.js game.
The game has a global chat, with a "logged in" list for challenging people.
For the chat and the logged-in list, I'm using the default socket.io room/namespace.
I successfully send the challenge request with the following code:
// When a user sends a battle challenge
socket.on("sendChallenge", function(data) {
    // Try with room
    socket.join('battleRoom');
    var payload = {
        "userID": data.targetID,
        "challengerName": data.challengerName
    };
    console.log(payload.challengerName + " challenged " + payload.userID);
    // Broadcast the message; only the concerned player will answer, thanks to his ID
    socket.broadcast.emit('receiveChallenge', payload);
});
Client side, I then have this code :
socket.on('receiveChallenge', function (data) {
    if (data.userID == userID) {
        alert("received challenge from " + data.challengerName);
        socket.emit('ack');
    }
});
The right player indeed receives the alert and sends 'ack' to the server:
socket.on('ack', function() {
    socket.join('battleRoom');
    socket.to('battleRoom').broadcast.emit('receiveMessage', 'SYSTEM: Battle begun');
    //socket.to('battleRoom').emit('receiveMessage', 'SYSTEM: Battle begun');
});
Except that the "socket.to('battleRoom').broadcast.emit('receiveMessage', 'SYSTEM: Battle begun');" is only received by the challenger and not by the challenged, and I'm stuck.
(the 2nd line is commented because challenger received 2 messages and challenged received none)
The way I understand it, on the server the socket.on() handlers have "socket" as the client that sent the message, and you then use broadcast to send to all the others.
Which is why, in the sendChallenge event, I have socket.join('battleRoom') so the challenger enters battleRoom.
I then broadcast the challenge and ask the client to acknowledge it.
In the ack, the client is then supposed to join battleRoom too.
But I'm obviously doing something wrong, and I can't seem to see what...
I want both the challenger and the challenged communicating through the server.
A link to an image more or less showing the situation: Here
(The screenshot is from the second time I clicked the quickFight button, showing the "Battle begun" from the first click on the left, and the alert that the user was challenged on the right.)
Thanks in advance for your help!
As per the socket.io documentation, broadcast does the following:
Broadcasting messages
To broadcast, simply add a broadcast flag to emit and send method calls. Broadcasting means sending a message to everyone else except for the socket that starts it.
So when you do:
socket.to('battleRoom').broadcast.emit('receiveMessage', 'SYSTEM: Battle begun');
You do not emit to the socket that broadcast to the room. Simply replace that line with the following:
io.to('battleRoom').emit('receiveMessage', 'SYSTEM: Battle begun');
This is assuming that you named your socket.io object io, if you named it differently use that.
Just as a side note, it's not necessary here to broadcast your original message to everyone. If you keep an associative array of all your connected socket ids (or the sockets themselves), you can look up the challenged player's socket and emit only to them specifically. This would cut down on your server calls; a sketch follows.
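A minimal sketch of that registry approach, reusing the question's event names (the "login" event and the usersById map are hypothetical):

var usersById = {};

io.on("connection", function(socket) {
    // Register the socket under the user's id once they identify themselves
    socket.on("login", function(userID) {
        usersById[userID] = socket;
    });

    // Emit the challenge only to the targeted player; no broadcast needed
    socket.on("sendChallenge", function(data) {
        var target = usersById[data.targetID];
        if (target) {
            target.emit("receiveChallenge", data);
        }
    });

    // Clean up on disconnect so the map doesn't hold dead sockets
    socket.on("disconnect", function() {
        for (var id in usersById) {
            if (usersById[id] === socket) delete usersById[id];
        }
    });
});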

Meteor publish method

I just started with Meteor.js, and I'm struggling with its publish method. Below is one publish method.
//Server side
Meteor.publish('topPostsWithTopComments', function() {
    var topPostsCursor = Posts.find({}, {sort: {score: -1}, limit: 30});
    var userIds = topPostsCursor.map(function(p) { return p.userId; });
    return [
        topPostsCursor,
        Meteor.users.find({'_id': {$in: userIds}})
    ];
});
// Client side
Meteor.subscribe('topPostsWithTopComments');
Now I'm not getting how I can use the published data on the client. I mean, I want to use the data that is given by topPostsWithTopComments.
The problem is detailed below:
When a new post enters the top 30 list, two things need to happen:
The server needs to send the new post to the client.
The server needs to send that post’s author to the client.
Meteor is observing the returned Posts cursor (topPostsCursor), and so will send the new post down as soon as it's added, ensuring the client will receive the new post straight away.
However, consider the returned Meteor.users cursor. Even though the cursor itself is reactive, it's now using an outdated value for the userIds array (which is a plain old non-reactive variable), which means its result set will be out of date as well.
As far as that cursor is concerned, nothing has changed, so there is no need to re-run the query, and Meteor will happily continue to publish the same 30 authors for the original 30 top posts ad infinitum.
So unless the whole code of the publication runs again (to construct a new list of userIds), the cursor is no longer going to return the correct information.
Basically what I need is: if any change happens in Posts, the publication should have the updated users list, without querying the users collection again. I found some useful mrt modules:
link1 |
link2 |
link3
Please share your views!
-Neelesh
When you publish data on the server you're just defining what the client is allowed to query. This is for security. After you subscribe to your publication, you still need to query what the publication returned.
if (Meteor.isClient) {
    Meteor.subscribe('topPostsWithTopComments');
    // This returns all the records published with topPostsWithTopComments from the Posts collection
    var posts = Posts.find({});
}
If you wanted to publish only the posts that the current user owns, you would filter them in the publish method on the server, not on the client, as sketched below.
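A minimal sketch of such an owner-only publication (the publication name is hypothetical); inside Meteor.publish, this.userId is the id of the subscribing user:

if (Meteor.isServer) {
    Meteor.publish('myPosts', function() {
        // Only publish documents owned by the subscribing user
        return Posts.find({ userId: this.userId });
    });
}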
I think @Will Brock already answered your question, but maybe it becomes clearer with an abstract example.
Let's construct two collections named collectiona and collectionb.
// server and client
CollectionA = new Meteor.Collection('collectiona');
CollectionB = new Meteor.Collection('collectionb');
On the server you could now call Meteor.publish with 'collectiona' and 'collectionb' separately to publish both record sets to the client. That way the client could also subscribe to each of them separately.
But instead you can publish multiple record sets in a single call to Meteor.publish by returning multiple cursors in an array. Just as in the standard publishing procedure, you can of course define what is being sent down to the client. Like so:
if (Meteor.isServer) {
    Meteor.publish('collectionAandB', function() {
        // Constrain records from 'collectiona': limit the number of documents to one
        var onlyOneFromCollectionA = CollectionA.find({}, {limit: 1});
        // All cursors in the array are published
        return [
            onlyOneFromCollectionA,
            CollectionB.find()
        ];
    });
}
Now on the client there is no need to subscribe to 'collectiona' and 'collectionb' separately. Instead you can simply subscribe to 'collectionAandB':
if (Meteor.isClient) {
    Meteor.subscribe('collectionAandB', function () {
        // Callback runs once collections A and B are ready on the client
        // Only one document of collection A will be available here
        console.log(CollectionA.find().fetch());
        // All documents from collection B will be available here
        console.log(CollectionB.find().fetch());
    });
}
So I think what you need to understand is that no array containing the two cursors is sent to the client. Returning an array of cursors from the function passed to Meteor.publish merely tells Meteor to publish all the cursors contained in the array. You still need to query the individual records using your collection handles on the client (see @Will Brock's answer).

Resources