I have a server-side function to check if a player is idle:
socket.idle = 0;
socket.cached = {};
socket.checkIdle = function(){
    socket.idle++;
    console.log(socket.idle);
    if(socket.cached.x != players[id].x || socket.cached.y != players[id].y){
        socket.idle = 0;
    }
    // copy the values rather than the object reference, otherwise the
    // comparison above always sees the current position and never differs
    socket.cached = { x: players[id].x, y: players[id].y };
    if(socket.idle > 12){
        socket.disconnect();
    }
}
socket.interval = setInterval(socket.checkIdle, 1000);
I've noticed that even after the player gets booted/disconnected for being idle too long, the server still console logs socket.idle for it.
Am I going about this the wrong way? Also, should I then clear the interval when the player disconnects?
socket.on('disconnect', function(){
    clearInterval(socket.interval);
});
You certainly shouldn't leave a setInterval() running for a player that is no longer connected. Your server will just have more and more of these running, wasting CPU cycles and possibly impacting your server's responsiveness or scalability.
I've noticed that even after the player gets booted/disconnected for being idle too long, the server still console logs socket.idle for it.
Yeah, that's because the interval is still running and, in fact, it even keeps the socket object from getting garbage collected. All of that is bad.
Also, should I then clear the interval when the player disconnects?
Yes, when a socket disconnects, you must clear the interval timer associated with that socket.
Am I going about this the wrong way?
If you keep with the architecture of a polling interval timer for each separate socket, then this is the right way to clear the timer when the socket disconnects.
But I would think that you could come up with another design that doesn't need to regularly "poll" for idleness at all. It appears you want a 12 second timeout such that if the player hasn't moved within 12 seconds, you disconnect them. There's really no reason to check for this every second. You could just set a single timer with setTimeout() for 12 seconds when the user connects, and then each time you get notified of player movement (which your server must already be notified about, since you're referencing players[id].x and players[id].y), you clear the old timer and set a new one. When the timer fires, you must have gone 12 seconds without motion and can then disconnect. This is more typically how a timeout-type timer works; a sketch follows below.
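A minimal sketch of that idea, assuming your socket.io server object is io and that the hypothetical 'playerMove' handler stands in for wherever your server already updates players[id]:

var IDLE_MS = 12 * 1000; // 12 seconds without movement

function armIdleTimer(socket) {
    clearTimeout(socket.idleTimer);           // drop any previous timer
    socket.idleTimer = setTimeout(function(){
        socket.disconnect();                  // 12 seconds passed with no movement
    }, IDLE_MS);
}

io.on('connection', function(socket){
    armIdleTimer(socket);                     // start counting on connect

    // 'playerMove' is a stand-in for however movement already reaches your server
    socket.on('playerMove', function(){
        armIdleTimer(socket);                 // any movement resets the timeout
    });

    socket.on('disconnect', function(){
        clearTimeout(socket.idleTimer);       // nothing left running for this socket
    });
});

This way there is exactly one pending timer per connected socket and nothing to poll.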
Issue / How I ran into it
I'm writing a batch processing Lambda function in Node.js that makes calls to Redis in two different places. Some batch items may never reach the second Redis call. This is all happening asynchronously, so if I close the connection as soon as the batch queue is empty, any future Redis calls would fail. How do I close the connection?
What I've tried
process.on('beforeExit', callback) -- doesn't get called, as the event loop still contains the Redis connection
client.unref() -- closes the connection if no commands are pending, but doesn't handle future calls
client.on('idle', callback) -- works, but is deprecated and may still miss future calls
What I'm currently doing
Once the batch queue is empty, I call:
intervalId = setInterval(closeRedis, 1000);
I close the Redis connection and clear the interval in the callback after a timeout:
function closeRedis() {
    // CLIENT LIST reports how long each connection has been idle
    redis.client('list', (err, result) => {
        var idle = parseClientList(result, 'idle');
        if (idle > timeout) {
            redis.quit();
            clearInterval(intervalId);
        }
    });
}
This approach mostly works, but by checking only for a timeout there is still a chance that other work is in progress and a Redis call may be made later. I'd like to close the connection when the idle Redis connection is the only thing remaining in the event loop. Is there a way to do this?
I ended up using process._getActiveHandles(). Once the batch queue is empty, I set an interval to check every half second whether only the minimum processes remain. If so, I unref the redisClient.
redisIntervalId = setInterval(closeRedis, 500);

// close the Redis client connection if it's the last required process
function closeRedis() {
    // 2 core processes plus Redis and the interval
    var minimumProcesses = 4;
    if (process._getActiveHandles().length > minimumProcesses)
        return;
    clearInterval(redisIntervalId);
    redisClient.unref();
}
The advantage of this approach is that I can be sure the Redis client will not close the connection while other important processes are running. I can also be sure that the client won't keep the event loop alive after all the important processes have completed.
The downside is that _getActiveHandles() is an undocumented Node function, so it may be changed or removed later. Also, unref() is experimental and doesn't consider some Redis commands when closing the connection.
Before everybody marks this as a dup, let me state that I know my fair share of network programming and that this question is my attempt to understand something that riddles me even after finding the "solution".
The setup
I've spent the last few weeks writing some glue code to incorporate a big industrial system into our current setup. The system is controlled by a Windows XP computer (PC A), which is in turn controlled from an Ubuntu 14.04 system (PC B) by a steady stream of UDP packets sent at 2000 Hz. It responds with UDP packets containing the current state of the system.
Care was taken to ensure that the 2000 Hz rate was held, because there is a 3 ms timeout after which the system faults and returns to a safe state. This involved measuring and accounting for inaccuracies in std::this_thread::sleep_for. Measurements show only a 0.1% deviation from the target rate.
The observation
Problems started when I began receiving the state response from the system. The controlling side on PC B looks roughly like this:
forever at 2000Hz {
    send current command;
    if ( socket.available() >= 0 ) {
        receive response;
    }
}
Edit 2: Or in real code:
auto cmd_buf = ...
auto rsp_buf = ...

while (true) {
    // prepare and send the command buffer
    cmd_buf = ...
    socket.send(cmd_buf, endpoint);
    if (socket.available() >= 0) {
        socket.receive(rsp_buf);
        // the results are then parsed and stored, nothing fancy
    }
    // time keeping
}
The problem is that whenever the receiving portion of the code was present on PC B, PC A started to run out of memory within seconds while trying to allocate receive buffers. Additionally, it raised errors stating that the timeout was missed, which was probably due to packets not reaching the control software.
Just to highlight the strangeness: PC A is the PC sending the UDP packets in this case.
Edit in response to EJP: this is the (now) working setup. It started out as:
forever at 2000Hz {
    send current command;
    receive response;
}
But by the time the response was received (blocking), the deadline was missed. Hence the availability check.
Another thing that was tried was to receive in a separate thread:
// thread A
forever at 2000Hz {
    send current command;
}

// thread B
forever {
    receive response;
}
This displays the same behavior as the first version.
The solution
The solution was to set the socket on PC B to non-blocking mode. One line and all problems were gone.
I am pretty sure that even in blocking mode the deadline was met. There should be no performance difference between blocking and non-blocking mode when there is just one socket involved. Even if checking the socket for available data takes a few microseconds longer than in non-blocking mode, it shouldn't make a difference when the overall deadline is met accurately.
Now ... what is happening here?
If I read your code correctly, and referring to this code:
forever at 2000Hz {
    send current command;
    receive response;
}
Examine the difference between a blocking and a non-blocking socket. With a blocking socket, you send the current command and then you are stuck waiting for the response. By that time, I would guess, you have already missed the 2 kHz goal.
With a non-blocking socket, you send the current command and try to receive whatever is in the receive buffers, but if there is nothing there you return immediately and continue your tight 2 kHz send loop. This explains to me why your industrial control system works fine with the non-blocking code.
I have a Visual Studio C++ 2013 MFC application that writes/reads data from a serial rotary encoder and sends the read data to a stepper motor. The reading happens in a while (flagTrue) {} loop, which runs in a separate thread.
It does the job, but when I try to exit the application gracefully, I keep getting this:
'System.ObjectDisposedException' mscorlib.dll
I tried setting timers for 1-2 seconds to let the serial listening finish, but it seems like the listening keeps going even when I seemingly have exited the thread. Here are snippets of the code:
// this is inside of the main window CDlg class
pSerialThread = (CSerialThread*)AfxBeginThread(RUNTIME_CLASS(CSerialThread));

// this is inside the CSerialThread::InitInstance() function
init_serial_port();

// this is the serial listening while loop
void init_serial_port() {
    SerialPort^ serialPort = gcnew SerialPort();
    while (bSerialListen) {
        // do read/write using the serialPort object
    }
    serialPort->Close();
}

// this is in the OnOK() function
bSerialListen = false;
pSerialThread->ExitInstance();
An incomplete workaround, inspired by Hans's answer below, was to have the thread reset a flag after the port closes:
SerialPort^ serialPort = gcnew SerialPort();
serialIsOpen = true;
while (bSerialListen) {
    // do read/write using the serialPort object
}
serialPort->Close();
serialIsOpen = false;
Then inside OnOK() (which does result in a clean exit):
bSerialListen = false;
// do other stuff which normally takes longer than the port closing
if (serialIsOpen) {
    // ask the user to press Exit again
    return;
}
OnOK();
However, the user always has to press Exit twice, because the following never works:
while (serialIsOpen) {
    Sleep(100);
    // safety counter, do not proceed to OnOK();
}
OnOK();
The while loop expires before the serial thread resets the flag, even if one waits for 10 seconds -- much longer than it takes the user to press the button twice:
while (bSerialListen) {
Very troublesome. First of all, a bool is not a proper thread synchronization primitive by a very long shot. Second, and surely the most likely problem, your code isn't actually checking it, because the thread is stuck in the SerialPort::Read() call, which isn't completing because the device isn't sending anything at the moment you want to terminate your program.
What happens next is very rarely graceful. A good way to trigger an uncatchable ObjectDisposedException is to jerk the USB connector, the only other thing you can do when you see it not working and have no idea what to do next. Very Bad Idea. That makes many a USB driver throw up its hands in disgust: it knows a userland app has the port opened, but the device isn't there anymore, so it starts failing any request, sometimes even the close request. Very unpleasant. There is no graceful way to handle this; serial ports are not plug & play devices. The ObjectDisposedException is raised in a worker thread that SerialPort starts to raise events, so it is uncatchable.
Never, never, never jerk the USB connector; using the "Safely Remove Hardware" tray icon is a rock-hard requirement for legacy devices like serial ports. Don't force it.
So what to do next? The only way to get the SerialPort::Read() call to complete gracefully is to jerk the floor mat out from under it: you have to call SerialPort::Close(). You still get an ObjectDisposedException, but now it is one you can actually catch. Immediately get out of the loop, don't do anything else, and let the thread terminate.
But of course you have to do that from another thread, since this one is stuck. No doubt there is plenty of trouble doing that when you use MFC, since the thread that wants it to exit is not a managed thread in your program. Sounds like you already discovered that.
The better way is the one you might find acceptable after you read this post: just don't.
I am using socket.io to send packets via WebSockets. They seem to disappear from time to time, so I have to implement some sort of acknowledgement system. My idea was to immediately respond to a packet with an ACK packet. If the server does not receive this ACK packet within a given time, it will resend the original packet (up to 3 times, then disconnect the socket).
My first thought was to start a timer (setTimeout) after sending a packet. If the timeout event occurs, the packet has to be sent again. If the ACK arrives, the timeout gets cleared. Quite easy and short.
var io = require('socket.io').listen(80);

// ... connection handling ...

function sendData(someData, socket) {
    // TODO: Some kind of counter to stop after 3 tries.
    socket.emit("someEvent", someData);
    var timeout = setTimeout(function(){ sendData(someData, socket); }, 2000);
    socket.on("ack", function(){
        // Everything went ok.
        clearTimeout(timeout);
    });
}
But I will have 1k-3k clients connected, with a lot of traffic. I can't imagine that 10k timers running at the same time are manageable by Node.js. Even worse, I read that Node.js will not fire the event if there is no time for it.
How do I implement a well-working and efficient packet acknowledgement system?
If socket.io is not reliable enough for you, you might want to consider implementing your own websocket interface instead of adding a layer on top of socket.io. But to answer your question, I don't think running 10k timers is going to be a big deal. For example, the following code ran in under 3 seconds for me and printed out the expected result of 100000:
var x = 0;
for (var i = 0; i < 100000; i++) {
    setTimeout(function() { x++; }, 1000);
}
setTimeout(function() { console.log(x); }, 2000);
There isn't actually that much overhead for a timeout; it essentially just gets put in a queue until it's time to execute it.
I read that Node.js will not fire the event if there is no time for it.
This is a bit of an exaggeration; Node.js timers are reliable. A timer set by setTimeout will fire at some point. It may be delayed if the process is busy at the exact scheduled time, but the callback will be called eventually.
Quoted from Node.js docs for setTimeout:
The callback will likely not be invoked in precisely delay milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering. The callback will be called as close as possible to the time specified.
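To fill in the TODO from the question's code (the 3-try counter), one possible sketch, assuming the same "someEvent"/"ack" naming and that the client echoes a packet id back in its "ack" event (the id field and the sendReliable name are made up for illustration):

var nextId = 0;

function sendReliable(someData, socket, tries) {
    tries = tries || 0;
    if (tries >= 3) {
        socket.disconnect();                        // gave up after 3 attempts
        return;
    }

    var id = nextId++;
    socket.emit("someEvent", { id: id, data: someData });

    var timeout = setTimeout(function(){
        socket.removeListener("ack", onAck);        // stop waiting for this attempt
        sendReliable(someData, socket, tries + 1);  // resend
    }, 2000);

    function onAck(ackedId) {
        if (ackedId !== id) return;                 // ack for a different packet
        clearTimeout(timeout);
        socket.removeListener("ack", onAck);
    }

    socket.on("ack", onAck);
}

Each attempt gets its own listener that is removed once the packet is acknowledged or the retry timer fires, so handlers don't pile up on the socket.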
I want to do multithreading! I want to make a timer with play/pause/stop buttons, and when the user presses the play button, the timer starts counting.
While the timer is counting, another operation should run at the same time, because the user wants to use the timer to measure something that is going on somewhere else in the scene.
In short, I want to do something and let the user measure how long it takes.
I'm a newbie to Flash, but as far as I know the solution is multithreading! Or is there some kind of timer that can measure time without causing the program to hang?
I'm working with AS2, but if AS3 is the only way, that's fine.
Thanks
Flash Player 11.4 offers multithreading of sorts with the new concurrency (ActionScript workers) features. Read about it here: http://blogs.adobe.com/flashplayer/2012/08/flash-player-11-4-and-air-3-4.html
Flash Player 11.3 and under don't offer multithreading. Your question, though, doesn't really require multithreading. The flash.utils.Timer class and flash.utils.setTimeout() are asynchronous and don't block your code.
I would recommend looking at these classes in the Adobe docs:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/Timer.html
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/package.html#setTimeout()
To address your question in the comments:
var timer:Timer = new Timer(1000); // fire every second; make this number smaller to have it update faster
timer.addEventListener(TimerEvent.TIMER, updateLabel);

var timeStamp:int;

function startTimer():void {
    timeStamp = getTimer();
    timer.reset();
    timer.start();
}

function updateLabel(e:Event):void {
    var timePassedSoFar:int = getTimer() - timeStamp;
    // update your label to show how much time has passed (it's in milliseconds)
}
If you only want seconds, you could also just use the timer.currentCount property (instead of getTimer()), which tells you how many times the timer has fired; in the example above that is the number of seconds elapsed, since the timer fires once per second.