How can I avoid receiving accumulated pushes from Pusher after coming back online? - pusher

How can one prevent Pusher from automatically pushing all the piled-up messages to the client once it comes back online after being offline, i.e. after it re-establishes the connection?

After exchanging messages with a Pusher support engineer, the issue became clearer.
The connection may still be open even when the laptop goes to sleep (this behaviour varies among computers). Thus, after waking up, it may still be connected. (This is exactly what happened in my case, so everything looked as if Pusher had pushed the accumulated messages.)
However, the default activity timeout is 120s, and the time to wait for a pong response before closing the connection is 30s. So, leaving the machine asleep for around three minutes would make the client disconnect completely, and the behaviour I encountered would not occur.
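That timeout arithmetic (120 s activity timeout plus a 30 s pong wait, so roughly 150 s until the socket is actually considered dead) can be sketched as a small staleness check. The class and its names are illustrative and not part of the pusher-js API:

```javascript
// Illustrative staleness check mirroring Pusher's defaults:
// 120 s activity timeout + 30 s pong wait before the socket is closed.
// Not part of the pusher-js API; `now` is injectable for testing.
class StalenessCheck {
  constructor({ activityTimeoutMs = 120000, pongTimeoutMs = 30000, now = Date.now } = {}) {
    this.activityTimeoutMs = activityTimeoutMs;
    this.pongTimeoutMs = pongTimeoutMs;
    this.now = now;
    this.lastActivity = now();
  }
  // Call whenever any message arrives on the connection.
  seen() { this.lastActivity = this.now(); }
  // True once the connection should be considered dead.
  isStale() {
    return this.now() - this.lastActivity > this.activityTimeoutMs + this.pongTimeoutMs;
  }
}
```

With these defaults a connection is only declared dead after about two and a half minutes of silence, which matches the "around three minutes" observation above.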

Pusher doesn't presently buffer messages to be delivered upon reconnection, so the functionality described in the question isn't something an application needs to consider right now.
Future releases may contain something called an Event Buffer which will offer this functionality. Documentation will be released around that time to detail how to avoid receiving buffered events.


Websockets, SSE, or HTTP when auto updating a constantly open dashboard page

My app is built in Angular (2+) and NodeJS. One of the pages is basically a dashboard that shows the current tasks of a company; this dashboard is shown all day on a TV to the company's staff.
Rarely is it refreshed or reloaded manually.
Tasks are updated every 5-10 mins by a staff member from another computer.
Tasks on the dashboard need to be updated ASAP after any task is updated.
The app should limit data transfer when updating the dashboard after a task update.
I initially tried websockets but had a problem with connection reliability as sometimes the board would never get updated because the websocket would lose its connection. I could never figure this problem out and read that websockets can be unreliable.
Currently I'm just running an http call every 15 seconds to retrieve a new set of data from the backend. But this can be costly with data transfer as the app scales.
I've just recently heard about SSE but know nothing about it.
At the moment my next plan is to set up a "last updated" status check: still run an HTTP call every 15 seconds, but pass a "last updated" time from the frontend, compare it to the backend's "last updated" time (which is updated whenever a task is changed), and only return data if the frontend's time is outdated, to reduce data transfer.
Does that sound like a good idea, or should I try websockets again, or SSE?
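The "last updated" check described above can be sketched as a small server-side decision function (the names here are illustrative, not from the actual app):

```javascript
// Sketch of the "last updated" handshake: the server only returns the task
// list when the client's timestamp is stale; otherwise it sends a tiny reply.
function buildDashboardResponse(clientLastUpdated, serverLastUpdated, tasks) {
  if (clientLastUpdated >= serverLastUpdated) {
    return { upToDate: true };                    // no payload transferred
  }
  return { upToDate: false, lastUpdated: serverLastUpdated, tasks };
}
```

The full payload is then only transferred on the polls that actually follow a task change, which is what reduces the data cost.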
I initially tried websockets but had a problem with connection reliability as sometimes the board would never get updated because the websocket would lose its connection.
Handle the event for when the connection is lost and reconnect it.
I could never figure this problem out and read that websockets can be unreliable.
Don't let some random nonsense you read on the internet keep you from owning the problem and figuring it out. Web Sockets are as reliable as anything else. And, like anything else, they can get disconnected. And, like many of the newer APIs, they leave re-connection logic up to you... the app developer. If you simply don't want to deal with it, there are many packages on NPM for auto-reconnecting Web Sockets which do exactly what I suggested: they handle the events for disconnection and immediately reconnect.
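A minimal sketch of what those auto-reconnecting wrappers do, assuming a generic `connect` function that reports open/close events (all names here are illustrative):

```javascript
// Exponential backoff: 500 ms, 1 s, 2 s, 4 s ... capped at 30 s,
// so a flapping server isn't hammered with reconnect attempts.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// `connect` is any function that opens a socket and invokes the given
// callbacks; on close we schedule a reconnect with growing delay.
function keepConnected(connect) {
  let attempt = 0;
  const open = () => connect({
    onOpen: () => { attempt = 0; },               // reset backoff on success
    onClose: () => setTimeout(open, backoffDelay(attempt++)),
  });
  open();
}
```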
Currently I'm just running an http call every 15 seconds to retrieve a new set of data from the backend. But this can be costly with data transfer as the app scales.
It can be, yes.
I've just recently heard about SSE but know nothing about it.
From what little we know about your problem, SSE sounds like the right way to go. SSE is best for:
Evented data
Data that can be somehow serialized to text (JSON is fine, but don't base64 encode large binary streams as you'll make them too big)
Unidirectional messages, from server to client
Most implementations will reconnect for you, and it even supports a method of picking up where it left off, if a disconnection actually occurs.
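For a sense of how simple SSE is on the wire, here is an illustrative frame builder. The `id` field is what enables the "picking up where it left off" behaviour: after a disconnect, the browser sends the last seen id back in the `Last-Event-ID` request header.

```javascript
// Build one SSE frame. The wire format is plain text: optional "id:" and
// "event:" fields, one "data:" line per payload line, blank line to end.
function sseFrame(data, { event, id } = {}) {
  let frame = '';
  if (id !== undefined) frame += `id: ${id}\n`;
  if (event !== undefined) frame += `event: ${event}\n`;
  for (const line of String(data).split('\n')) frame += `data: ${line}\n`;
  return frame + '\n';                    // blank line terminates the frame
}
```

On the browser side, `new EventSource(url)` consumes these frames and reconnects automatically.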
If you only need to push data from the server to the client, it may be worth having a look at Server-Sent Events.
You can have a look at this article (https://streamdata.io/blog/push-sse-vs-websockets/) and this video (https://www.youtube.com/watch?v=NDDp7BiSad4) to get an insight about this technology and whether it could fit your needs. They summarize pros & cons of both SSE and WebSockets.

How can I make socketio wait before sending the next event?

So I have a socket.io server which works well. It's very simple: kind of mimicking screen sharing, as it broadcasts one client's position on the page to the other, who catches it and moves to said location, etc. All of this works fine, but because of the way I'm catching movement, it's possible (and quite common) for too many messages to be sent at once, making it impossible for the other client to keep up.
I was wondering if there is a way to make socket.io 'sleep' or 'wait' for a certain interval, ignore the messages sent during this interval without returning an error, and then begin listening again?
It is feasible to implement this in each client (and this may be the better option), but I just wanted to know if this is possible on the server side too.
Use volatile messages. If there are too many messages, the excess will simply be dropped so that delivery keeps up with the real-time messages.
socket.volatile.emit('msg', data);
From the socket.io website:
Sending volatile messages.
Sometimes certain messages can be dropped. Let's say you have an app that shows realtime tweets for the keyword `bieber`.
If a certain client is not ready to receive messages (because of network slowness or other issues, or because he's connected through long polling and is in the middle of a request-response cycle), if he doesn't receive ALL the tweets related to bieber your application won't suffer.
In that case, you might want to send those messages as volatile messages.
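On the server side, the same "sleep and ignore" effect can be had with a small rate gate that forwards at most one message per interval and silently drops the rest. This sketch is illustrative and independent of socket.io:

```javascript
// Rate gate: forward at most one message per `intervalMs`; messages arriving
// during the cooldown are silently dropped (no error), like volatile emits.
// `now` is injectable for testing.
function makeGate(intervalMs, now = Date.now) {
  let nextAllowed = 0;
  return function tryPass(handler, msg) {
    const t = now();
    if (t < nextAllowed) return false;   // dropped without error
    nextAllowed = t + intervalMs;
    handler(msg);
    return true;
  };
}
```

A hypothetical hookup in the server would be `const pass = makeGate(50); socket.on('move', msg => pass(m => socket.broadcast.emit('move', m), msg));`, so each client receives at most ~20 position updates per second.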

NodeJS close/error event when computer gets killed?

I have a NodeJS server set up that accepts TLS connections using the tls module: http://nodejs.org/api/tls.html
The clients are using the NodeJS TLS module for the connections. I'm also storing a list/hashmap of all connected clients and their IDs. If a client disconnects, I remove it from the list using the "error", "clientError" and "close" events.
This works in any normal case - however, when I "kill" the client (unplug power, unplug the network cable) it seems that no event is fired and the stream stays open forever. Maybe I have overlooked something, but is there an event for this, or how can I detect when the stream is no longer there?
Sure, I could poll it at a certain interval, but that does not sound like a good solution, since it would cause a lot of traffic for almost no reason.
In the end, the stream is actually closed. If you try to call write, it will cause a "write after end" error. Sadly, it seems that no event is fired when the stream itself closes.
So right now, I'm just trying to write something every few minutes to see if the stream is still alive.
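That idea can be generalized into an application-level heartbeat registry (all names here are illustrative): clients that send anything are marked as seen, and a periodic sweep reaps the silent ones. Separately, Node's `net.Socket` exposes `socket.setKeepAlive(true)` to enable TCP-level keepalive probes, though OS defaults can take a long time to notice a dead peer, which is why an application-level check is common.

```javascript
// Heartbeat registry: record when each client was last heard from and
// reap the ones that stayed silent longer than `timeoutMs` (e.g. a peer
// whose power or network was pulled). `now` is injectable for testing.
function makeRegistry(timeoutMs, now = Date.now) {
  const lastSeen = new Map();
  return {
    seen(id) { lastSeen.set(id, now()); },
    // Returns the ids of peers presumed dead, removing them from the map.
    reap() {
      const dead = [];
      for (const [id, t] of lastSeen) {
        if (now() - t > timeoutMs) { dead.push(id); lastSeen.delete(id); }
      }
      return dead;
    },
  };
}
```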

Does sockets.io emit sometimes fail?

I have a web-based multiplayer game. From time to time someone is kicked out because the server did not get an expected message from a client. It seems from my logs that the client did not disconnect; it just did not send the message, or the server did not receive it. My question here is: does this happen normally from time to time? Should I use some kind of callback mechanism to ensure a message is delivered, and resend it if not, or is there some issue that I am not aware of?
socket.io already provides ACKs and message ID tracking, on top of TCP.
Also, socket.io uses pings to check the connection. So, if you say that the client is not disconnected, and the server also reports the client as connected, then the connection is still there.
The problem must be situated elsewhere.
Are you sure there is not a bug in either part of the implementation? Showing some code snippets could help, as well as the environment you are using.
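If you do want an explicit delivery guarantee on top, socket.io's `emit` accepts an acknowledgement callback as its last argument, invoked when the peer confirms receipt. An illustrative retry wrapper around any such ack-style `emit` could look like this (the wrapper itself is a sketch, not part of socket.io):

```javascript
// Retry wrapper around an ack-style emit. `emit(msg, ack)` stands for any
// transport call that invokes `ack` when the peer confirms receipt.
// Resolves once acked; rejects after `retries` additional attempts.
function emitWithRetry(emit, msg, { retries = 3, timeoutMs = 1000, setTimer = setTimeout } = {}) {
  return new Promise((resolve, reject) => {
    const attempt = (left) => {
      let settled = false;
      emit(msg, () => { settled = true; resolve(true); });
      setTimer(() => {
        if (settled) return;               // ack arrived in time
        if (left > 0) attempt(left - 1);   // resend
        else reject(new Error('no ack after retries'));
      }, timeoutMs);
    };
    attempt(retries);
  });
}
```

With socket.io, `emit` here would be something like `(m, ack) => socket.emit('msg', m, ack)`.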

Node.js game logic

I'm in the process of making a realtime multiplayer racing game. Now I need help writing the game logic in a Node.js TCP (net) server. I don't know if it's possible, and I don't know if I'm doing it right, but I'm trying my best. I know it's hard to understand my broken English, so I made this "painting" :)
Thank you for your time
To elaborate on driushkin's answer, you should use remote procedure calls (RPC) and an event queue. This works like in the image you've posted, where each packet represents a 'command' or RPC with some arguments (i.e. movement direction). You'll also need an event queue to make sure RPCs are executed in order and on time. This will require a timestamp or framecount for each command to be executed on (at some point in the future, in a simple scheme), and synchronized watches (World War II style).
You might notice one critical weakness in this scheme: RPC messages can be late (arrive after the time they should be applied) due to network latency, malicious users, etc. In a simple scheme, late RPCs are dropped. This is fine since all clients (even the originator!) wait for the server to send an RPC before acting (if the originating client didn't wait for the server message, his game state would be out of sync with the server, and your game would be broken).
Consider the impact of lag on such a scheme. Let's say the lag for Client A to the server was 100ms, and the return trip was also 100ms. This means that client input goes like:
Client A presses key, and sends RPC to server, but doesn't add it locally (0ms)
Server receives and rebroadcasts RPC (100ms)
Client A receives his own event, and now finally adds it to his event queue for processing (200ms)
As you can see, the client reacts to his own event 1/5 of a second after he presses the key. This is with fairly nice 100ms lag. Transoceanic lag can easily be over 200ms each way, and dialup connections (rare, but still existent today) can have lag spikes > 500ms. None of this matters if you're playing on a LAN or something similar, but on the internet this unresponsiveness could be unbearable.
This is where the notion of client side prediction (CSP) comes in. CSP is made out to be big and scary, but implemented correctly and thoughtfully it's actually very simple. The interesting feature of CSP is that clients can process their input immediately (the client predicts what will happen). Of course, the client can (and often will) be wrong. This means that the client will need a way of applying corrections from the Server. Which means you'll need a way for the server to validate, reject, or amend RPC requests from clients, as well as a way to serialize the gamestate (so it can be restored as a base point to resimulate from).
There are lots of good resources about doing this. I like http://www.gabrielgambetta.com/?p=22 in particular, but you should really look for a good multiplayer game programming book.
I also have to suggest socket.io, even after reading your comments regarding Flex and AS3. The ease of use (and simple integration with node) make it one of the best (the best?) option(s) for network gaming over HTTP that I've ever used. I'd make whatever adjustments necessary to be able to use it. I believe that AIR/AS3 has at least one WebSockets library, even if socket.io itself isn't available.
This sounds like something socket.io would be great for. It's a library that gives you real time possibilities on the browser and on your server.
You can model this as commands and events: the client sends a move command to the server, the server validates this command, and if everything is OK, it publishes an is moving event.
In your case, there is probably no need for different responses to P1 ("ok, you can move") and the rest ("P1 is moving"); the latter suffices in both cases. The is moving event should contain all necessary info (like current position, velocity, etc.).
In this simplest form, the one issuing the command would experience some lag until the event from the server arrives; to avoid that, you could start moving immediately and then apply some compensating actions if necessary when the event arrives. But this can get complicated.
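The queue-plus-timestamps idea described above can be sketched as follows (names are illustrative; frames stand in for the synchronized timestamps): commands are executed in frame order, and commands whose frame has already passed are dropped, as in the simple scheme described earlier.

```javascript
// Event queue for the RPC scheme: each command carries the frame it must
// run on. `step` runs everything due on the current frame, in frame order;
// late commands (frame already passed) are silently dropped. In a real game
// loop you'd call step once per frame so nothing is skipped.
function makeEventQueue() {
  const pending = [];
  return {
    push(cmd) { pending.push(cmd); },              // cmd: { frame, run }
    step(currentFrame) {
      pending.sort((a, b) => a.frame - b.frame);
      while (pending.length && pending[0].frame <= currentFrame) {
        const cmd = pending.shift();
        if (cmd.frame === currentFrame) cmd.run(); // on time: apply
        // frame < currentFrame: arrived late, dropped
      }
    },
  };
}
```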
