Building an app with presence following the Firebase docs, is there a scenario where the onDisconnect() fires while the app is still connected? We see instances where the presence node shows the app going offline and then back online within a few seconds, even though we aren't losing a network connection.
We are seeing this on multiple embedded devices installed in the field: presence is set to false and then almost immediately back to true, and it occurs on all the devices within a few seconds of each other. From the testing we have done and the docs online, we know that if we lose the internet connection on the device, it takes roughly 60 seconds before the timeout on the server fires the onDisconnect() method.
We have since added code to the presence method so that if the device sees the presence node set to false while the app is actually running, it resets the presence back to true. Sometimes when this happens we get a single write back to true and that is the end of it; other times it is like the server and client are fighting each other, and the node is reset to true numerous times over the course of 50-200 milliseconds. We monitor this by pushing to another node within the device GUID each time we force presence back to true. This only occurs while the module is running and after it initially establishes presence.
Here is the method that we call from our various modules that are running on the device so that we can monitor the status of each of the modules at any given time.
exports.online = function (program, currentProgram) {
  var programPath = process.env.FIREBASE_DEVICES + process.env.GUID + '/status/' + program;
  var onlinePath = process.env.FIREBASE_DEVICES + process.env.GUID + '/statusOnlineTimes/' + program;
  var programRef = new firebase(programPath);
  var statusRef = new firebase(process.env.FIREBASE_DEVICES + process.env.GUID + '/status/bootup');
  var onlineRef = new firebase(onlinePath);
  // amOnline is a ref to the '/.info/connected' node, set up elsewhere
  amOnline.on('value', function (snapshot) {
    if (snapshot.val()) {
      programRef.onDisconnect().set(false);
      programRef.set(true);
      programRef.on('value', function (snapshot) {
        if (snapshot.val() == false) {
          programRef.set(true);
          console.log('[NOTICE] Resetting', program, 'module status back to True after Firebase set to False');
          var objectToPush = {
            program: program,
            time: new Date().toJSON()
          };
          onlineRef.push(objectToPush);
        }
      });
      if (currentProgram != undefined) {
        statusRef.onDisconnect().set('Offline');
        statusRef.set(currentProgram);
      }
    }
  });
};
The question we have is: is there ever an instance where Firebase calls the onDisconnect() method even though the client really isn't losing its connection? We had instances where we would see the device go offline and then back online within 60 seconds before we added the reset code. The reset code was to combat another issue we had in the field: if the power were interrupted to the device and it did not make a clean exit, the device could reboot and reset the presence with a new UID before the timeout for the prior instance had fired. Then, once the timeout fired, the device would show as offline even though it was actually online.
So we were able to stop the multiple pushes that were happening when the device reconnected by adding a programRef.off() call directly before the programRef.on(...) call (see the sketch below). What we determined to be happening is that any time the device went online from an offline state and the amOnline.on(...) callback fired, it created a new listener.
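For reference, a minimal sketch of that change, using the same refs as the method above (only the relevant lines shown):
// inside the amOnline.on('value', ...) callback, once connected:
programRef.onDisconnect().set(false);
programRef.set(true);

// detach any 'value' listener left over from a previous online/offline
// cycle before attaching a new one, so each reconnect doesn't stack
// another callback on the same ref
programRef.off();
programRef.on('value', function (snapshot) {
  if (snapshot.val() == false) {
    programRef.set(true);
  }
});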
Now we are able to handle the case where an onDisconnect() fires from an earlier program PID and overwrites the currently active program with a status of offline. This seems to solve the issue we were having with the race condition, where devices in the field were able to reboot and regain connection prior to the onDisconnect() firing for the instance that was not cleanly exited.
We are still having an issue where all of the devices go offline and then back online at approximately the same time (within 1-3 seconds of each other). Are there any times where Firebase resets the .info/connected node? Because we are monitoring presence and actually logging on and off events, maybe we are just catching an event that most people don't see? Or is there something that we are doing wrong?
Basically, I'm challenging myself to build something similar to watch2gether, where you can watch YouTube videos simultaneously, using the YouTube API and Socket.io.
My problem is that there's no way to check whether the video has been paused other than utilizing the 'onStateChange' event of the YouTube API.
But since I can only listen to the actual pause EVENT rather than the CLICK itself, when I emit a pause command and broadcast it via socket, the player pausing in the other sockets fires the event again, and thus I'm not able to track who clicked pause first NOR prevent the pauses from looping.
This is what I currently have:
// CLIENT SIDE
// onStateChange event
function YtStateChange(event) {
  if (event.data == YT.PlayerState.PAUSED) {
    socket.emit('pausevideo', $user); // I'm passing the current user for future implementations
  }
  // (...) other states
}

// SERVER SIDE
socket.on('pausevideo', user => {
  io.emit('smsg', `${user} paused the video`);
  // broadcast sends the pause to all sockets besides the one that first
  // clicked pause, since that player already paused from interacting
  // with the iframe
  socket.broadcast.emit('pausevideo');
});

// CLIENT SIDE
socket.on('pausevideo', () => {
  // The problem here: once this pauses the video, onStateChange obviously
  // fires again and results in an infinite amount of pauses (as long as
  // there's more than one user in the room)
  ytplayer.pauseVideo();
});
The only possible solution I've thought of is to use a separate PLAY/PAUSE button, rather than the actual YouTube player in the iframe, to catch the click events and pause the player from there. But I know countless websites that use the plain iframe and catch these kinds of events, and I couldn't find a way to do it with my current knowledge.
If the goal here is to ignore a YT.PlayerState.PAUSED event when it was specifically caused by you earlier calling ytplayer.pauseVideo(), then you can do that by recording a timestamp when you call ytplayer.pauseVideo(). When you then get a YT.PlayerState.PAUSED event, check that timestamp to see whether the pause occurred because you just called ytplayer.pauseVideo().
The general concept is like this:
let pauseTime = 0;
const kPauseIgnoreTime = 250; // experiment with what this value should be

// CLIENT SIDE
// onStateChange event
function YtStateChange(event) {
  if (event.data == YT.PlayerState.PAUSED) {
    // only send pausevideo message if this pause wasn't caused by
    // our own call to .pauseVideo()
    if (Date.now() - pauseTime > kPauseIgnoreTime) {
      socket.emit('pausevideo', $user); // I'm passing the current user for future implementations
    }
  }
  // (...) other states
}

// CLIENT SIDE
socket.on('pausevideo', () => {
  pauseTime = Date.now();
  ytplayer.pauseVideo();
});
If you have more than one of these in your page, then (rather than a variable like this) you can store the pauseTime on a relevant DOM element for the player that the event is associated with.
You can do some experimentation to see what value is best for kPauseIgnoreTime. It needs to be large enough that any YT.PlayerState.PAUSED event caused by you specifically calling ytplayer.pauseVideo() is detected, but not so long that it catches a case where someone pauses, then unpauses relatively soon after.
I actually found a solution while working around the other answer; I'm going to post it here in case anyone gets stuck with the same problem and ends up here.
Since socket.broadcast.emit doesn't emit to the sender itself, I created a bool ignorePause and made it true only when the client receives the pause request.
Then I only emit the event if the pause request wasn't already broadcast and thus received; if it was, the emit is skipped and the bool is set back to false in case this client/socket pauses the video afterwards. A sketch of this is below.
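For illustration, a minimal client-side sketch of that flag approach (the flag name ignorePause is from my description above; the rest mirrors the earlier snippets):
let ignorePause = false;

// onStateChange event
function YtStateChange(event) {
  if (event.data == YT.PlayerState.PAUSED) {
    if (ignorePause) {
      // this pause came from a broadcast we already handled,
      // so don't re-emit it; just reset the flag
      ignorePause = false;
    } else {
      // a real user-initiated pause: tell the server
      socket.emit('pausevideo', $user);
    }
  }
}

// a pause broadcast from the server: remember to ignore the
// PAUSED state change this is about to trigger
socket.on('pausevideo', () => {
  ignorePause = true;
  ytplayer.pauseVideo();
});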
Our Chrome extension has both content and background scripts communicating with each other. When the plugin is updated, the background script is stopped and the content scripts start getting Error: Extension context invalidated. In V2, we used the port.onDisconnect event as described here to clean things up. But in V3, this event is also sent after 5 minutes (when the background service worker is automatically terminated). So this event now means either extension unloading (and the cleanup should be done) or just a SW lifecycle event (no need to clean up; reconnecting is fine).
So the question is how to unambiguously determine whether the cleanup is necessary.
I've tried:
1. chrome.management events: onDisabled etc. But unfortunately chrome.management is undefined in my content script.
2. Checking for chrome.runtime.id inside the port.onDisconnected callback to determine that the plugin is unloaded. But the id is still present at that moment.
3. Again inside port.onDisconnected, trying to do chrome.runtime.connect() again and catching the exception. But there's no exception! The port is created successfully, but it receives neither messages nor its own onDisconnected events.
4. Trying point 3 inside setTimeout(..., 0) and setTimeout(..., 100). The former doesn't produce exceptions either. The latter does, but it introduces a delay of questionable duration (why 100? would it work if the CPU is overloaded?) and potential race conditions, where other plugin functionality could try to send messages with unpredictable results. So I'd appreciate a more bullet-proof solution.
Thanks to wOxxOm's suggestions, I've found a solution that seems to work for now: every once in a while (under the 5-minute service worker timeout), disconnect the port in the content script and then reconnect again. The code looks like this:
let portToBackground: chrome.runtime.Port | undefined = openPortToBackground();

function openPortToBackground(): chrome.runtime.Port {
  const port = chrome.runtime.connect();
  const timeout = setTimeout(() => {
    console.log('reconnecting');
    portToBackground = openPortToBackground();
    port.disconnect();
  }, 2 * 60 * 1000); // 2 minutes here, just to be sure
  port.onDisconnect.addListener(() => {
    clearTimeout(timeout);
    // if this is not the current port, the disconnect was our own
    // scheduled reconnect above, not a real invalidation
    if (port !== portToBackground) return;
    // perform the cleanup (e.g. set portToBackground to undefined,
    // so isExtensionContextInvalidated() starts reporting true)
  });
  return port;
}

export function isExtensionContextInvalidated(): boolean {
  return !portToBackground;
}
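To give an idea of how this gets used (a hypothetical example of mine, not from the original post): any messaging in the content script can be guarded with the check, since posting on an invalidated context is what throws.
// hypothetical helper in the same content script
function sendToBackground(message) {
  if (isExtensionContextInvalidated()) {
    // the extension was unloaded or updated; stop touching chrome.* APIs
    return;
  }
  portToBackground.postMessage(message);
}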
When I join the room, then leave the route and go back, and then use the chat I've built, I get duplicate messages: each message arrives as many times as I have left and rejoined.
This problem goes away when I hard refresh.
I've tried everything I could find thus far, and have been unable to get it to work.
I tried the following on the client side, during beforeRouteLeave, beforeDestroy and window.onbeforeunload:
this.$socket.removeListener("insertListener"); --> tried with all
this.$socket = null
this.$socket.connected = false
this.$socket.disconnected = true
this.$socket.removeAllListeners()
this.$socket.disconnect()
During the same events, I also sent this.$socket.emit("leaveChat", roomId), and then on the server side, inside the io.on("connection") receiver's socket.on("leaveChat", function(roomId) {}), tried the following:
socket.leave(roomId) --> this is what, according to the docs, should work;
socket.disconnect()
socket.off() -- seems to be deprecated
socket.removeAllListeners(roomId)
There were a bunch of other things I tried that I can't remember but will update the post if I do.
Either it somehow disconnects and, upon rejoining, previous listeners or something remain, meaning all the messages are received as many times as I have rejoined. OR, if I disconnect, I don't seem to be able to reconnect.
On joining, I emit to server the room id and use socket.join(roomId).
All I want is that, without a refresh, when the user leaves the page they also leave the room beforehand, and when they go back they get to rejoin, with no duplicate messages occurring.
I am currently trying to chew through the source code.
Full disclosure here: I didn't read the full response posted by roberfoenix, but this is a common issue with socket.io, and it comes down to calling the 'on' event multiple times.
When you create an .on event for your socket, it's a binding, and you can bind to the same event multiple times.
My assumption is, when a user hits a page you run something like
socket.on("joinRoom", data)
This in turn will say: join the room, pull your messages from Mongo (or something else), and then emit to the room. (Side note: using .once can help so you don't emit to every user when a user joins a room.)
Now you leave the room and call socket.emit('leaveRoom', room); cool, you left the room. Then you go back into the room, and guess what, you have now just bound to the same on event again, so when you emit, it emits two times to that user, etc. etc.
The way we addressed this is to place all our on events into a function and call that function once. So, when a user first hits a page, it runs something like socketInit();
The socketInit function will have something like this:
let init = false; // global flag: have the on events been bound yet?

function socketInit() {
  if (init === false) {
    // cool, it has not run yet; bind our on events
    socket.on("event");
    socket.on("otherEvent");
    init = true;
  }
}
Basically, init is a global variable: if it is false, bind your events; otherwise don't rebind.
This can be improved to use a promise, or it could be done on connect, but if a user reconnects it may run again; see the sketch below.
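For example, a minimal client-side sketch of the "done on connect" variant just mentioned; the handler names are hypothetical, and the guard is still needed because 'connect' fires again on every reconnect:
function handleJoinRoom(data) { /* ... */ }
function handleLeaveRoom(data) { /* ... */ }

let bound = false;

socket.on('connect', () => {
  // 'connect' fires on every reconnect, so guard the bindings
  // or they will stack up again
  if (!bound) {
    socket.on('joinRoom', handleJoinRoom);
    socket.on('leaveRoom', handleLeaveRoom);
    bound = true;
  }
});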
If you're using Vue-Socket and feel like you're going slightly mad having tried everything, this may be your solution.
Turns out challenging core assumptions and investigating from the ground up pays off. It is possible to bury yourself so deeply in Socket.io that you forget you were using Vue-Socket.
The solution in my case was using Vue-Socket's built in unsubscribe function.
With Vue-Socket, one of the ways you can initially subscribe to events is as follows:
this.sockets.subscribe('EVENT_NAME', (data) => {
  this.msg = data.message;
});
Because you're using Vue-Socket, not the regular Socket.io client, you also need to use Vue-Socket's way of unsubscribing right before you leave the room (unless you were looking for a very custom solution). This is why I suspect many of the other things I tried didn't work and did next to nothing!
The way you do that is as follows:
this.sockets.unsubscribe('EVENT_NAME');
Do that for any events causing you trouble in the form of duplicates. The reason you'd be getting duplicates in the first place, especially upon rejoining after leaving a room, is that the previous event listeners were still running; a single user ends up acting as if they were two or more listeners.
An alternative possibility is that you're emitting the message to everyone, including the original sender, when you should most likely be emitting it to everyone else except the sender (check here for a socket.io emit cheatsheet).
If the above doesn't solve it for you, then make sure you're actually leaving the room, and doing so server-side. You can accomplish that by emitting a signal to the server right before leaving the route (in case you're using a reactive single-page application), receiving it server-side, and calling socket.leave(yourRoomName) inside your io.on("connection", function(socket) {}) handler, as sketched below.
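A minimal server-side sketch of that, reusing the 'leaveChat' event name from the question (the other event names are mine):
io.on('connection', function (socket) {
  socket.on('joinChat', function (roomId) {
    socket.join(roomId);
  });

  socket.on('leaveChat', function (roomId) {
    // actually leave the room server-side, so this socket stops
    // receiving anything addressed to roomId
    socket.leave(roomId);
  });

  socket.on('message', function (roomId, msg) {
    // emit to everyone else in the room, not back to the sender
    socket.broadcast.to(roomId).emit('message', msg);
  });
});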
I'm using the net library of Node.js to connect to a server that is publishing data, so I'm listening for 'data' events on the client side. When a data event fires, I append the received data to my rx-buffer and check whether I have a complete message by reading some bytes. If I have a valid message, I remove it from the buffer and process it. The source code looks like:
let rxBuffer = ''
client.on('data', (data) => {
  rxBuffer += data
  // for example... byte 10 stores the message length...
  while (rxBuffer.length > 10 && rxBuffer.length >= (10 + rxBuffer[10])) {
    const msg = rxBuffer.slice(0, 10 + rxBuffer[10])
    rxBuffer = rxBuffer.slice(msg.length) // remove message from buffer
    processMsg(msg) // process message..
  }
})
As far as I know, that's the typical way. But... what happens if the data event fires multiple times? Imagine I get a data event, and while I'm appending the data to my rx-buffer I get the next data event. The "new" data event will also append its data to rxBuffer and start my while-loop. So I'd have two handlers processing the same messages, because they share the same rx-buffer. Is this correct?
How can I handle this? In other languages I'd use something like a mutex to prevent concurrent access to the rx-buffer... but what's the solution for JS?! Or maybe I'm wrong and I never get multiple data events while one handler is still active? Any ideas?
JavaScript is single threaded. The second event will not run until the first one either completes or blocks, the latter of which could presumably happen in your processMsg(). If that's the case, multiple executions of processMsg() could be interleaved. If they aren't changing any global data (rxBuffer included), then you shouldn't have a problem.
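To illustrate (my own sketch, not from the answer): as long as the handler extracts complete messages from the shared buffer synchronously, before any asynchronous work, two 'data' events can never interleave their buffer manipulation, so no mutex is needed.
let rxBuffer = ''

// hypothetical helper: returns one complete message plus the remaining
// buffer, or null if no full message has arrived yet (mirroring the
// question's example, where byte 10 holds the payload length)
function tryExtractMessage(buf) {
  if (buf.length > 10 && buf.length >= 10 + Number(buf[10])) {
    const len = 10 + Number(buf[10])
    return { msg: buf.slice(0, len), rest: buf.slice(len) }
  }
  return null
}

client.on('data', (data) => {
  rxBuffer += data
  // all rxBuffer access happens synchronously in this handler, so the
  // next 'data' event is queued until this loop returns
  let extracted
  while ((extracted = tryExtractMessage(rxBuffer)) !== null) {
    rxBuffer = extracted.rest
    processMsg(extracted.msg) // safe even if processMsg is async,
                              // as long as it doesn't touch rxBuffer
  }
})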
Something strange is happening with my app. I am using Sails.js with the official PostgreSQL driver, and my data gets deleted. I don't have any pattern or list of specific events that deletes the data, but I have the following observations.
A few days back I was writing a function to destroy data, and when I executed that function it gave me an error. I fixed the error, ran my web app again, and whoa, the data from one of my tables was all gone.
Yesterday I wrote a function and tried to make the HTTP call to it, but it was giving me a 500 server error. I started debugging, and after executing my program 3 to 4 times with this error, partial data was deleted from one of my database tables. The error turned out to be a typo in the URL.
If any of you have had any experience with what is happening to me, please let me know how to fix it, or at least help me figure out how to reproduce this issue.
EDIT
I activated the logs and was waiting for it to happen again, and it happened again; here is the log from Sails.js.
In the logs I saw that it's talking about the alter.js sync strategy, but I have selected the safe strategy.
It has happened to me quite a few times: when lifting the app, Sails is in the process of making changes to the db and it fails, sometimes due to an ORM timeout.
What Sails does when it's lifting and needs to update the data structure is controlled in config/models.js by migrate: 'alter'. It is usually commented out, and you get a prompt for what to do, 1... 2... 3... (writing from the top of my head, I don't remember the actual messages), plus a warning about using alter on a production system. A sketch of the safe setting is below.
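For reference, a minimal config/models.js sketch with the safe strategy (the standard Sails setting; 'safe' never touches the schema on lift):
// config/models.js
module.exports.models = {
  // 'safe' never auto-migrates the database on lift;
  // 'alter' attempts to migrate and can lose data when it fails
  migrate: 'safe'
};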
Changing
config/orm.js to have this
// config/orm.js
module.exports.orm = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
And, for reasons I don't know, changing config/pubsub.js to have this:
// config/pubsub.js
module.exports.pubsub = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
has helped me avoid data loss.