NodeJS close/error event when computer gets killed?

I have a NodeJS server set up that accepts TLS connections using the tls module: http://nodejs.org/api/tls.html
The clients are using the NodeJS TLS module for the connections. I'm also storing a list/hashmap of all connected clients and their IDs. If a client disconnects, I remove it from the list using the "error", "clientError" and "close" events.
This works in any normal case. However, when I "kill" the client (unplug its power or its network cable), it seems that no event is fired and the stream stays open forever. Maybe I have overlooked something, but is there an event for this, or how else can I detect that the stream is no longer there?
Sure, I could poll at a set interval, but that does not sound like a good solution, since it would cause a lot of traffic for almost no reason.

In the end, the stream is actually closed. If you try to call write, it will cause a "write after end" error. Sadly, it seems that no event is fired when the stream itself closes.
So right now, I'm just trying to write something every few minutes to see if the stream is still alive.
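A minimal sketch of that workaround, assuming the connections live in a map keyed by client ID (the map, the probe payload, and the interval are all illustrative):

```js
const clients = new Map(); // id -> tls.TLSSocket

function track(id, socket) {
  clients.set(id, socket);
  // A probe written to a dead stream surfaces here as "write after end"
  socket.on('error', () => {
    clients.delete(id);
    socket.destroy();
  });
  socket.on('close', () => clients.delete(id));
}

// Probe every few minutes; a dead stream triggers 'error' above
setInterval(() => {
  for (const socket of clients.values()) {
    socket.write('ping\n');
  }
}, 5 * 60 * 1000);
```

Node sockets also expose socket.setKeepAlive(), which pushes the same probing down into the kernel; see the TCP keep-alive discussion in the last answer below.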

Related

How to not receive the accumulated pushes from Pusher after returning online?

How can one prevent Pusher from automatically pushing all the piled-up messages to the client once the client comes back online after being offline, i.e. after it re-establishes the connection?
After exchanging messages with a Pusher support engineer, the issue became clearer.
The connection may stay open even when the laptop goes to sleep (this behaviour varies among computers). Thus, after waking up, it may still be connected. (This is exactly what happened in my case, which made it look as if Pusher had pushed the accumulated messages.)
However, the default activity timeout is 120s, and the time to wait for a pong response before closing the connection is 30s. So allowing around three minutes would make the client disconnect completely, and the behaviour I encountered would not occur.
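Both timeouts are configurable on the client if you want the failure detected sooner; a hedged sketch using pusher-js options (the values shown are the defaults mentioned above, and 'app-key' is a placeholder):

```js
const pusher = new Pusher('app-key', {
  activityTimeout: 120000, // ms of inactivity before the client sends a ping
  pongTimeout: 30000       // ms to wait for the pong before closing the connection
});
```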
Pusher doesn't presently buffer messages for delivery upon reconnection, so the functionality described in the question isn't something an application needs to consider right now.
Future releases may contain something called an Event Buffer which will offer this functionality. Documentation will be released around that time detailing how to avoid receiving buffered events.

How can I make socket.io wait before sending the next event?

So I have a socket.io server which works well. It's very simple: it mimics screen sharing by broadcasting one client's position on the page to the other, who catches it and moves to that location, etc. All of this works fine, but because of the way I'm catching movement, it's possible (and quite common) for too many messages to be sent at once, making it impossible for the other client to keep up.
I was wondering if there is a way to make socket.io 'sleep' or 'wait' for a certain interval, ignore the messages sent during this interval without returning an error, and then begin listening again?
It is feasible to implement this in each client (and this may be the better option), but I just wanted to know if this is possible on the server side too.
Use volatile messages. If there are too many messages, the excess will simply be dropped in favour of keeping up with real-time messages.
socket.volatile.emit('msg', data);
From the socket.io website:
Sending volatile messages.
Sometimes certain messages can be dropped. Let's say you have an app that shows realtime tweets for the keyword `bieber`.
If a certain client is not ready to receive messages (because of network slowness or other issues, or because he's connected through long polling and is in the middle of a request-response cycle), if he doesn't receive ALL the tweets related to bieber your application won't suffer.
In that case, you might want to send those messages as volatile messages.
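If you additionally want the server itself to ignore bursts for a fixed interval, as the question asks, a simple throttle combined with the volatile flag could look like this sketch (the event name and interval are illustrative):

```js
const MIN_INTERVAL_MS = 50; // ignore positions arriving faster than this
let lastBroadcast = 0;

io.on('connection', (socket) => {
  socket.on('position', (pos) => {
    const now = Date.now();
    if (now - lastBroadcast < MIN_INTERVAL_MS) return; // dropped, no error
    lastBroadcast = now;
    socket.broadcast.volatile.emit('position', pos);
  });
});
```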

Identifying remote disconnection in socket client

How do I find out from a socket client program that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set any timeout. In my case, however, I cannot pick a reliable timeout value to get around this, because otherwise the recv times out even when the server is up but the response genuinely takes longer than the timeout I have set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
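A sketch of that heartbeat pattern, written here in Node.js terms to match the rest of this page (the intervals and payload are illustrative; the same idea applies to any transport):

```js
const HEARTBEAT_MS = 5000;  // sender pings when otherwise idle
const TIMEOUT_MS = 15000;   // receiver gives up after this much silence

// Sender side
setInterval(() => socket.write('heartbeat\n'), HEARTBEAT_MS);

// Receiver side: any traffic counts as a sign of life
let lastSeen = Date.now();
socket.on('data', () => { lastSeen = Date.now(); });
const watchdog = setInterval(() => {
  if (Date.now() - lastSeen > TIMEOUT_MS) {
    clearInterval(watchdog);
    socket.destroy(); // treat silence as a failure condition
  }
}, TIMEOUT_MS / 3);
```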
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands. Any commands it doesn't have, you know didn't get through, and you can resubmit them. Once you get a reply with the result of a command, you can tell the other side that it can now forget the response.
This allows you to keep the connection active while a long-running command is ongoing. Every so often you ask, "is everything okay". The other side responds, "yes". You can use long polling where the other side delays responding for a second or so while the command is in process. This allows it to return the results immediately rather than having to wait a second for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
If the remote host goes down without sending you a TCP FIN packet, you have no way to detect that directly. You can test this behaviour by firewalling a port after a connection has been established on it: your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keep-alives, which is meant to close a TCP connection after a given period of inactivity. If you can't specify a timeout for your application, there is no reliable way to use it. The last resort might be to use features of the application protocol (can you name it?); if that protocol does not support connection-handling features, you may have to invent something of your own on top of it.
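In Node.js, keep-alives can be enabled per socket. A minimal sketch (the 60-second initial delay is an illustrative value; probe interval and count are governed by kernel settings such as Linux's tcp_keepalive_* sysctls):

```js
// Ask the kernel to probe the peer after 60s of idle time; a dead peer
// eventually causes the socket to emit 'error' and 'close'
socket.setKeepAlive(true, 60 * 1000);
```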

Does socket.io emit sometimes fail?

I have a web-based multiplayer game. From time to time someone is kicked out because the server did not get an expected message from the client. It seems from my logs that the client did not disconnect; it just did not send the message, or the server did not receive it. My question is: do these things happen normally from time to time? Should I use some kind of callback mechanism to ensure a message is delivered and resend it if not, or is there some issue I am not aware of?
socket.io already provides ACKs and message ID tracking, on top of TCP.
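If you still want explicit, application-level delivery confirmation, socket.io's acknowledgement callbacks provide it; a sketch with illustrative event names and helpers:

```js
// Client: the extra callback argument runs when the server acknowledges
socket.emit('move', data, (ok) => {
  if (!ok) resendMove(data); // resendMove is a hypothetical retry helper
});

// Server: invoke the callback to send the acknowledgement back
socket.on('move', (data, callback) => {
  applyMove(data); // hypothetical game logic
  callback(true);
});
```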
Also, socket.io uses pings to check the connection. So if you say that the client did not disconnect, and the server also reports that the client is not disconnected, then the connection is still there. The problem must lie elsewhere.
Are you sure there is not a bug in either part of the implementation? Showing some code snippets could help, as well as the environment you are using.

Node.js game logic

I'm in the process of making a realtime multiplayer racing game. Now I need help writing the game logic in a Node.js TCP (net) server. I don't know if it's possible, and I don't know if I'm doing it right, but I'm trying my best. I know it's hard to understand my broken English, so I made this "painting" :)
Thank you for your time
To elaborate on driushkin's answer, you should use remote procedure calls (RPC) and an event queue. This works like in the image you've posted, where each packet represents a 'command' or RPC with some arguments (i.e. movement direction). You'll also need an event queue to make sure RPCs are executed in order and on time. This will require a timestamp or framecount for each command to be executed on (at some point in the future, in a simple scheme), and synchronized watches (World War II style).
You might notice one critical weakness in this scheme: RPC messages can be late (arrive after the time they should be applied) due to network latency, malicious users, etc. In a simple scheme, late RPCs are dropped. This is fine since all clients (even the originator!) wait for the server to send an RPC before acting (if the originating client didn't wait for the server message, his game state would be out of sync with the server, and your game would be broken).
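A sketch of such an event queue, with commands scheduled by frame number (all names are illustrative):

```js
const queue = []; // pending RPCs, kept sorted by execution frame

function enqueue(cmd) {
  // e.g. cmd = { frame: 120, type: 'steer', args: { dir: 'left' } }
  queue.push(cmd);
  queue.sort((a, b) => a.frame - b.frame);
}

function update(currentFrame) {
  while (queue.length && queue[0].frame <= currentFrame) {
    const cmd = queue.shift();
    if (cmd.frame < currentFrame) continue; // late RPC: drop it
    applyCommand(cmd); // hypothetical dispatch into the game logic
  }
}
```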
Consider the impact of lag on such a scheme. Let's say the lag for Client A to the server was 100ms, and the return trip was also 100ms. This means that client input goes like:
Client A presses key, and sends RPC to server, but doesn't add it locally (0ms)
Server receives and rebroadcasts RPC (100ms)
Client A receives his own event, and now finally adds it to his event queue for processing (200ms)
As you can see, the client reacts to his own event 1/5 of a second after he presses the key. This is with fairly nice 100ms lag. Transoceanic lag can easily be over 200ms each way, and dialup connections (rare, but still existent today) can have lag spikes > 500ms. None of this matters if you're playing on a LAN or something similar, but on the internet this unresponsiveness could be unbearable.
This is where the notion of client side prediction (CSP) comes in. CSP is made out to be big and scary, but implemented correctly and thoughtfully it's actually very simple. The interesting feature of CSP is that clients can process their input immediately (the client predicts what will happen). Of course, the client can (and often will) be wrong. This means that the client will need a way of applying corrections from the Server. Which means you'll need a way for the server to validate, reject, or amend RPC requests from clients, as well as a way to serialize the gamestate (so it can be restored as a base point to resimulate from).
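A sketch of the client side of CSP with server reconciliation, in the spirit of the resources linked below (every name here is illustrative):

```js
let state = initialState(); // hypothetical game state
let pending = [];           // inputs predicted locally, not yet confirmed
let nextSeq = 0;

function onLocalInput(input) {
  input.seq = nextSeq++;
  sendToServer(input);            // ask the authority...
  state = simulate(state, input); // ...but predict immediately
  pending.push(input);
}

function onServerUpdate(serverState, lastProcessedSeq) {
  state = serverState; // accept the authoritative correction
  pending = pending.filter((i) => i.seq > lastProcessedSeq);
  for (const input of pending) {
    state = simulate(state, input); // replay unconfirmed inputs on top
  }
}
```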
There are lots of good resources about doing this. I like http://www.gabrielgambetta.com/?p=22 in particular, but you should really look for a good multiplayer game programming book.
I also have to suggest socket.io, even after reading your comments regarding Flex and AS3. The ease of use (and simple integration with node) make it one of the best (the best?) option(s) for network gaming over HTTP that I've ever used. I'd make whatever adjustments necessary to be able to use it. I believe that AIR/AS3 has at least one WebSockets library, even if socket.io itself isn't available.
This sounds like something socket.io would be great for. It's a library that gives you real time possibilities on the browser and on your server.
You can model this with commands and events: the client sends a command move to the server, the server validates the command, and if everything is OK it publishes an event is moving.
In your case, there is probably no need for different responses to P1 ("OK, you can move") and to the rest ("P1 is moving"); the latter suffices in both cases. The is moving event should contain all the necessary info (current position, velocity, etc.).
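A sketch of that command/event split over socket.io (event names and helpers are illustrative):

```js
io.on('connection', (socket) => {
  socket.on('move', (cmd) => {      // command from a client
    if (!isValidMove(cmd)) return;  // server-side validation
    const event = applyMove(cmd);   // e.g. { playerId, position, velocity }
    io.emit('isMoving', event);     // one event for everyone, sender included
  });
});
```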
In this simplest form, the player issuing the command would experience some lag until the event from the server arrives. To avoid that, you could start moving immediately and then apply compensating corrections if necessary when the event arrives, but this can get complicated.