So I'm working on a text-game-thingy and I've noticed that I'm getting an error every once in a while (~1/100 refreshes) when leaving the game. I have a basic object (PLAYER_LIST) whose properties are player objects. Each player object has a 'textCooldown' property (to stop you from spamming), and a setInterval function reduces this property's value at a fixed interval. It looks, for all intents and purposes, like this:
setInterval(function() {
    for (var p in PLAYER_LIST) {
        var player = PLAYER_LIST[p];
        player.textCooldown -= 0.1;
    }
}, 100);
I also have a function that gets called whenever a client disconnects, which deletes that player's entry from PLAYER_LIST. The problem is, every once in a while when a client disconnects the server is still running the 'for' loop (which made me think NodeJS must be multithreaded in at least some respects) and tries to change the property 'textCooldown' of an undefined object.
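For reference, the disconnect handler looks roughly like this (a sketch; the socket.io-style 'disconnect' event and the socket.id key are assumptions, since the original handler isn't shown):

io.sockets.on('connection', function(socket) {
    socket.on('disconnect', function() {
        // remove this player's entry from the shared list
        delete PLAYER_LIST[socket.id];
    });
});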
How can I fix this problem without putting if (typeof player != 'undefined') everywhere? Is there any way I can implement callbacks into this seemingly synchronous task?
Related
While this code is running I can't do anything. Is there an asynchronous way to run loops?
// This object is very large
var listOfUsers = {};

for (var key in listOfUsers) {
    delete listOfUsers[key];
}
Is there an asynchronous way to run loops?
No. Since delete listOfUsers[key] is itself a synchronous operation, there is no way to do anything else while that loop is running. The JS interpreter is busy executing the loop and the delete operations. JavaScript in node.js is single threaded, so there's only ever one piece of JavaScript executing at a time; you can't execute anything else until the loop is done.
It occurs to me that if you're just trying to get listOfUsers back to an empty object and nobody else holds a reference to the original object, you could perhaps replace your existing loop with just this:
listOfUsers = {};
which would be a lot faster. The old object (and its properties) would then get garbage collected.
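One caveat, as a minimal sketch: reassigning the variable only changes what your variable points to; any other reference still sees the old, fully populated object:

// reassignment doesn't empty the object for other references
var listOfUsers = { alice: 1, bob: 2 };
var sameUsers = listOfUsers;   // someone else keeps a reference
listOfUsers = {};              // our variable now points at a fresh object
console.log(sameUsers.alice);  // 1 -- the old object is untouched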
In rare circumstances, you can solve problems like this and lessen the impact of a synchronous operation by breaking it into chunks: do one chunk, let the event loop run, then do the next chunk.
For example, you might be able to do something like this:
// remove all users, chunked to 100 at a time,
// allowing the event loop to run between chunks
function removeUsers() {
    const chunkSize = 100;
    // grab the next chunk of keys (at most chunkSize of them)
    let usersToDelete = Object.keys(listOfUsers).slice(0, chunkSize);
    if (!usersToDelete.length) {
        // everything deleted, no more work to do
        return;
    } else {
        for (let key of usersToDelete) {
            delete listOfUsers[key];
        }
        // delete some more after other things get a chance to run in the event loop
        setTimeout(removeUsers, 20);
    }
}
removeUsers();
A problem with this approach is that you can't add any new users to the listOfUsers until this is done or they will get deleted.
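One way around that limitation (my sketch, not part of the original answer) is to swap in a fresh object up front so new users land there, then chunk-delete the detached old object; this assumes no other code holds a reference to the old listOfUsers:

// detach the old object first so newly added users aren't deleted
let oldUsers = listOfUsers;
listOfUsers = {};              // new users go into the fresh object

function removeOldUsers() {
    const chunkSize = 100;
    const keys = Object.keys(oldUsers).slice(0, chunkSize);
    if (!keys.length) return;  // done
    for (const key of keys) {
        delete oldUsers[key];
    }
    // let the event loop breathe, then continue with the next chunk
    setTimeout(removeOldUsers, 20);
}
removeOldUsers();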
In my node.js application I have a collection of client sockets as an array. When a communication error occurs, I simply call destroy on the socket.
My question is: should I destroy the socket before or after removing it from the array? The documentation doesn't say much.
var clientSockets = [];

var destroySocketBefore = function(socket) {
    socket.destroy();
    var socketIdx = clientSockets.indexOf(socket);
    if (socketIdx > -1) {
        clientSockets.splice(socketIdx, 1);
    }
};

var destroySocketAfter = function(socket) {
    var socketIdx = clientSockets.indexOf(socket);
    if (socketIdx > -1) {
        clientSockets.splice(socketIdx, 1);
    }
    socket.destroy();
};
In the case of destroySocketBefore, I am not sure the socket will still be found in the array if I destroy it before searching for it, so there is a possibility that the array keeps invalid sockets around for subsequent logic.
In the case of destroySocketAfter, I am not sure that calling destroy on a socket that was already removed from the array will have the desired result. Is there a possibility that the system deletes the socket object once it is spliced out of the array, so that I sometimes end up calling destroy on a null object?
I tested, and it seems both methods work with no observable difference between them, so I am not sure which one is correct.
Either solution is valid and the two are effectively the same. The destroyed socket will get removed no matter what, since there are no race conditions or anything like that (JavaScript execution in node all happens on the same thread).
splice only removes the socket from a user-defined array and has no effect on the socket being closed; going by the answers here, the second method is therefore the best option.
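To see why the splice can't invalidate the socket object itself, consider this minimal sketch: removing an object from an array only drops the array's reference to it, and any other reference still points at the same, untouched object:

// splice removes the array's reference, not the object itself
var sockets = [{ destroyed: false }];
var s = sockets[0];      // a second reference to the same object
sockets.splice(0, 1);    // the array no longer references it
s.destroyed = true;      // still safe: the object is untouched
console.log(s);          // { destroyed: true }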
I am creating my first Meteor app with the Spheron smart package. I can control the Sphero OK and change its colors, but I'm trying to create a delay between the color changes.
Here is my code:
function makePrettyLights(sphero, color) {
    // map of color names to hex strings (an object, not an array)
    var colors = {};
    colors['red'] = '0xB36305';
    colors['green'] = '0xE32017';
    colors['blue'] = '0xFFD300';
    console.log(color);
    var spheroPort = '/dev/tty.Sphero-OBB-RN-SPP';
    var timer = 2000;
    Meteor.setTimeout(function() {
        sphero.on('open', function() {
            sphero.setRGB(colors[color], false);
        });
        sphero.open(spheroPort);
    }, timer);
}
This function is being called from inside a loop. I haven't included the loop, as it involves parsing some XML and other bits, but it works.
if (Meteor.isServer) {
    /**** Loop Code Here ****/
    makePrettyLights(sphero, color);
    /**** End Loop Code ****/
}
I have also tried setting the timeout wrapper around the function where it is called instead of inside it.
But basically they all run at the end of my code at the same time.
I20140806-09:49:35.946(1)? set color
I20140806-09:49:35.946(1)? set color
I20140806-09:49:35.946(1)? set color
The problem is most probably in your loop. I assume it's a pretty standard for loop, in which case such behavior is expected. When you call:
for (var i = 0; i < 5; ++i) {
    setTimeout(someFunction, 2000);
}
the setTimeout method is called 5 times in a row, all in a single moment. This means that someFunction will be called 5 times in a row after 2000 milliseconds.
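If the goal is one call every 2000 milliseconds rather than five calls at once, a common fix (a sketch, not the asker's exact code) is to stagger the delays:

// stagger the timeouts so someFunction fires once every 2000 ms
for (var i = 0; i < 5; ++i) {
    setTimeout(someFunction, 2000 * (i + 1));
}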
Your sphero variable is scoped outside the timeout, so every time a connection is opened, the previously added callbacks all fire at the same time, since you're just adding more listeners onto the globally scoped sphero variable.
Try defining sphero (not currently shown with your code above) inside the Meteor.setTimeout callback instead of outside of it.
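Combining that with the staggering idea from the first answer, a sketch might look like this (the colorsToShow list and the single-open structure are assumptions, since the original loop isn't shown): open the connection once, then schedule each color change at an increasing delay:

// open once, then stagger the color changes (names assumed)
sphero.on('open', function() {
    colorsToShow.forEach(function(c, i) {
        Meteor.setTimeout(function() {
            sphero.setRGB(colors[c], false);
        }, 2000 * (i + 1));
    });
});
sphero.open(spheroPort);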
TL;DR
What is the best way to forcibly keep a Node.js process running, i.e., keep its event loop from running empty and hence keeping the process from terminating? The best solution I could come up with was this:
const SOME_HUGE_INTERVAL = 1 << 30;
setInterval(() => {}, SOME_HUGE_INTERVAL);
This will keep an interval running without causing too much disturbance, provided you keep the interval period long enough.
Is there a better way to do it?
Long version of the question
I have a Node.js script using Edge.js to register a callback function so that it can be called from inside a DLL in .NET. This function will be called 1 time per second, sending a simple sequence number that should be printed to the console.
The Edge.js part is fine, everything is working. My only problem is that my Node.js process executes its script and after that it runs out of events to process. With its event loop empty, it just terminates, ignoring the fact that it should've kept running to be able to receive callbacks from the DLL.
My Node.js script:
var edge = require('edge');

var foo = edge.func({
    assemblyFile: 'cs.dll',
    typeName: 'cs.MyClass',
    methodName: 'Foo'
});

// The callback function that will be called from C# code:
function callback(sequence) {
    console.info('Sequence:', sequence);
}

// Register for a callback:
foo({ callback: callback }, true);

// My hack to keep the process alive:
setInterval(function() {}, 60000);
My C# code (the DLL):
public class MyClass
{
    Func<object, Task<object>> Callback;

    void Bar()
    {
        int sequence = 1;
        while (true)
        {
            Callback(sequence++);
            Thread.Sleep(1000);
        }
    }

    public async Task<object> Foo(dynamic input)
    {
        // Receives the callback function that will be used:
        Callback = (Func<object, Task<object>>)input.callback;

        // Starts a new thread that will call back periodically:
        (new Thread(Bar)).Start();

        return new object { };
    }
}
The only solution I could come up with was to register a timer with a long interval that calls an empty function, just to keep the scheduler busy and keep the event loop from emptying, so that the process keeps running.
Is there any way to do this better than I did? I.e., keep the process running without having to use this kind of "hack"?
The simplest, least intrusive solution
I honestly think my approach is the least intrusive one:
setInterval(() => {}, 1 << 30);
This will set a harmless interval that will fire approximately once every 12 days, effectively doing nothing, but keeping the process running.
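For the record, the arithmetic behind "approximately once every 12 days": 1 << 30 is 1,073,741,824 ms, and 1073741824 / (1000 * 60 * 60 * 24) ≈ 12.4 days.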
Originally, my solution used Number.POSITIVE_INFINITY as the period, so the timer would actually never fire, but this behavior was recently changed by the API and now it doesn't accept anything greater than 2147483647 (i.e., 2 ** 31 - 1). See docs here and here.
Comments on other solutions
For reference, here are the other two answers given so far:
Joe's (deleted since then, but perfectly valid):
require('net').createServer().listen();
Will create a "bogus listener", as he called it. A minor downside is that we'd allocate a port just for that.
Jacob's:
process.stdin.resume();
Or the equivalent:
process.stdin.on("data", () => {});
Puts stdin into "old" mode, a deprecated feature that is still present in Node.js for compatibility with scripts written prior to Node.js v0.10 (reference).
I'd advise against it. Not only is it deprecated, it also unnecessarily messes with stdin.
Use "old" Streams mode to listen for a standard input that will never come:
// Start reading from stdin so we don't exit.
process.stdin.resume();
Here is an IIFE based on the accepted answer:
(function keepProcessRunning() {
    setTimeout(keepProcessRunning, 1 << 30);
})();
and here is a conditional exit:
let flag = true;
(function keepProcessRunning() {
    setTimeout(() => flag && keepProcessRunning(), 1000);
})();
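With the conditional variant, setting flag to false from anywhere later stops the timer chain, so the process can exit once no other work remains.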
You could use a setTimeout(function() {}, 2147483647); command to keep your script alive without overhead (remember that, as noted above, delays greater than 2147483647 ms are not accepted).
Spin up a nice repl; it's what node itself does when it isn't given a script to run:
import("repl").then(repl=>
repl.start({prompt:"\x1b[31m"+process.versions.node+": \x1b[0m"}));
I'll throw another hack into the mix. Here's how to do it with Promise:
new Promise(_ => null);
Throw that at the bottom of your .js file and it should run forever (though be aware that a pending promise is not an event-loop handle, so depending on your Node version this alone may not actually keep the process from exiting).
I'm attempting to load a store catalog into MongoDb (2.2.2) using Node.js (0.8.18) and Mongoose (3.5.4) -- all on Windows 7 64bit. The data set contains roughly 12,500 records. Each data record is a JSON string.
My latest attempt looks like this:
var fs = require('fs');
var odir = process.cwd() + '/file_data/output_data/';
var mongoose = require('mongoose');
var Catalog = require('./models').Catalog;
var conn = mongoose.connect('mongodb://127.0.0.1:27017/sc_store');

exports.main = function(callback){
    var catalogArray = fs.readFileSync(odir + 'pc-out.json','utf8').split('\n');
    var i = 0;
    Catalog.remove({}, function(err){
        while(i < catalogArray.length){
            new Catalog(JSON.parse(catalogArray[i])).save(function(err, doc){
                if(err){
                    console.log(err);
                } else {
                    i++;
                }
            });
            if(i === catalogArray.length -1) return callback('database populated');
        }
    });
};
I have had a lot of problems trying to populate the database. Under previous scenarios (and this one), node pegs the processor and eventually runs out of memory. Note that in this scenario, I'm trying to allow Mongoose to save a record, and then iterate to the next record once the record saves.
But the iterator inside of the Mongoose save function never gets incremented. In addition, it never throws any errors. But if I put the iterator (i) outside of the asynchronous call to Mongoose, it will work, provided the number of records that I try to load are not too big (I have successfully loaded 2,000 this way).
So my questions are: Why isn't the iterator inside of the Mongoose save call ever incremented? And, more importantly, what is the best way to load a large data set into MongoDb using Mongoose?
Rob
i is your index into catalogArray for pulling input data, but you're also trying to use it to keep track of how many documents have been saved, which isn't possible. Try tracking them separately, like this:
var i = 0;
var saved = 0;
Catalog.remove({}, function(err){
    while(i < catalogArray.length){
        new Catalog(JSON.parse(catalogArray[i])).save(function(err, doc){
            saved++;
            if(err){
                console.log(err);
            } else {
                if(saved === catalogArray.length) {
                    return callback('database populated');
                }
            }
        });
        i++;
    }
});
UPDATE
If you want to add tighter flow control to the process, you can use the async module's forEachLimit function to limit the number of outstanding save operations to whatever you specify. For example, to limit it to one outstanding save at a time:
// assumes the async module is installed and required:
// var async = require('async');
Catalog.remove({}, function(err){
    async.forEachLimit(catalogArray, 1, function (catalog, cb) {
        new Catalog(JSON.parse(catalog)).save(function (err, doc) {
            if (err) {
                console.log(err);
            }
            cb(err);
        });
    }, function (err) {
        callback('database populated');
    });
});
Rob,
The short answer:
You created an infinite loop. You're thinking synchronously, with blocking; JavaScript works asynchronously, without blocking. What you are trying to do is like trying to turn the feeling of hunger directly into a sandwich. You can't. The closest thing is to use the feeling of hunger to motivate you to go to the kitchen and make one. Don't try to make JavaScript block. It won't work. Instead, learn async.forEachLimit. It will work for what you want to do here.
You should probably review asynchronous design patterns and understand what it means on a deeper level. Callbacks are not simply an alternative to return values. They are fundamentally different in how and when they are executed. Here is a good primer: http://cs.brown.edu/courses/csci1680/f12/handouts/async.pdf
The long answer:
There is an underlying problem here, and that is your lack of understanding of what non-blocking IO and asynchronous execution mean. I'm not sure if you are breaking into node development or this is just a one-off project, but if you do plan to continue using node (or any asynchronous language), it is worth the time to understand the difference between synchronous and asynchronous design patterns and the motivations for them. That is why you have the logic error: putting the loop-invariant increment inside an asynchronous callback is what creates the infinite loop.
In non-computer-science terms, that means your increment to i will never occur. The reason is that JavaScript executes a single block of code to completion before any asynchronous callbacks are called. So in your code, the loop runs over and over without i ever incrementing, while in the background you keep starting saves of the same document. Each iteration of the loop starts sending the document at index 0 to mongo; the callback can't fire until your loop ends and all other code outside the loop runs to completion, so the callbacks queue up. But the loop runs again, since i++ is never executed (remember, the callbacks are queued until your code finishes), inserting record 0 again and queueing yet another callback to execute AFTER your loop is complete. This goes on until memory fills with callbacks waiting to inform your infinite loop that document 0 has been inserted millions of times.
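A minimal demonstration of that queueing behavior:

// the timeout callback is queued, but the loop never yields to let it run
var done = false;
setTimeout(function() { done = true; }, 0);  // queued behind the current code
while (!done) {
    // spins forever: 'done' can only be set after this block finishes,
    // and this block never finishes
}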
In general, there is no way to make JavaScript block without doing something really, really bad. For example, something tantamount to setting your kitchen on fire to fry some eggs for that sandwich I talked about in the "short answer".
My advice is to take advantage of libraries like async (https://github.com/caolan/async). JohnnyHK mentioned it here, and he was correct to do so.