I have the following code:
let startTime;
let stopTime;
const arr = [1, 2, 3, 8, 5, 0, 110, 4, 4, 16, 3, 8, 7, 56, 1, 2, 3, 8, 5, 0, 110, 16, 3, 8, 7, 56];
const sum = 63;
const durationTime = (start, stop, desc) => {
  const duration = stop - start;
  console.info('Duration' + ((desc !== undefined) ? '(' + desc + ')' : '') + ': ' + duration + 'ms');
};
const findPair = (arr, sum) => {
  const result = [];
  // Drop values that are too large to be part of a pair
  const filterArr = arr.filter((number) => {
    return number <= sum;
  });
  filterArr.forEach((valueFirst, index) => {
    for (let i = index + 1; i < filterArr.length; i++) {
      const valueSecond = filterArr[i];
      const checkSum = valueFirst + valueSecond;
      if (sum === checkSum) {
        result.push([valueFirst, valueSecond]);
      }
    }
  });
  //console.info(result);
  return result;
};
for (let i = 0; i < 5; i++) {
  startTime = new Date();
  findPair(arr, sum);
  stopTime = new Date();
  durationTime(startTime, stopTime, i);
}
When I run this locally on Node.js (v8.9.3), the console shows:
Duration(0): 4ms
Duration(1): 0ms
Duration(2): 0ms
Duration(3): 0ms
Duration(4): 0ms
My question: why does the first call of findPair take 4ms while the other calls take only 0ms?
When the loop runs the first time, the JavaScript engine (Google's V8) interprets the code, compiles it, and executes it. However, code that runs often will have its compiled code optimized and cached, so subsequent runs of that code can run faster. Code inside loops is a good example of code that runs often.
Unless you fiddle with prototypes or otherwise invalidate that cached code, V8 will keep running the cached version, which is a lot faster than interpreting the code every time it runs.
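To see the warm-up effect in isolation, one option (a rough sketch, not a rigorous benchmark) is to call the function a few times before timing it, so the timed runs hit the optimized code path:
// Warm-up runs: give V8 a chance to profile and optimize findPair
for (let i = 0; i < 10; i++) {
  findPair(arr, sum);
}

// Timed runs now mostly hit the optimized, cached machine code
console.time('optimized findPair');
findPair(arr, sum);
console.timeEnd('optimized findPair');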
There are a lot of smart things V8 does to make your code run faster. If you are interested in this stuff, I'd highly recommend reading the sources for my answer:
Dynamic Memory and V8 with JavaScript
How JavaScript works: inside the V8 engine + 5 tips on how to write optimized code
Before anything else, a better way to measure the timing is:
for (let i = 0; i < 5; i++) {
  console.time('timer ' + i);
  findPair(arr, sum);
  console.timeEnd('timer ' + i);
}
...
The first function call is slower, probably because V8 dynamically creates hidden classes behind the scenes and (initially) has to build them before the function can run fast.
More information: https://github.com/v8/v8/wiki/Design%20Elements#fast-property-access
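As a rough illustration of hidden classes (the mechanism itself lives inside V8 and is only indirectly observable from JavaScript), objects that get the same properties in the same order share a hidden class, which is what makes property access fast:
// p1 and p2 get the same hidden class: same properties, same order
const p1 = { x: 1, y: 2 };
const p2 = { x: 3, y: 4 };

// p3 gets a different hidden class chain because the properties are
// added in a different order; code that sees both shapes can fall back
// to slower, generic property lookups
const p3 = { y: 5, x: 6 };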
Related
let orders = [1, 2, 3, 4, 5, ..., 100];
let drivers = [1, 2, 3, 4, ..., 50];
for (let order of orders) {
  for (let driver of drivers) {
    // run the 2nd iteration after 30 seconds or one minute
  }
}
I have a list of orders and drivers, and I want to assign one order to one driver. If the driver rejects the order (e.g. id = 1), the same order should go to the next driver after 30 seconds or one minute.
Please guide me.
I have tried this, but it is not working correctly in my case.
Generally, it is not a good design choice to block the thread for a given amount of time, but this seems to be sufficient for the OP, so I'm just going to try to answer without going into "best practice" territory.
You need to put this double loop in an async function and delay the flow for a given amount of time using a promise-based helper:
const delay = (msec) => new Promise((resolve) => setTimeout(resolve, msec));

let orders = [1,2,3,4,5,......100];
let drivers = [1,2,3,4,....50];

(async function main() {
  for (let order of orders) {
    for (let driver of drivers) {
      // (pseudocode) suggest the order to the driver
      driver.suggestOrder(order);
      // wait for half a minute
      await delay(30_000);
      // (pseudocode) don't suggest the same order to another driver
      if (driver.agreedToOrder(order)) break;
    }
  }
})();
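One note on the design: await delay(...) suspends only the surrounding async function; the event loop keeps running, so nothing is actually blocked while the loop waits. A minimal, self-contained sketch to convince yourself:
const delay = (msec) => new Promise((resolve) => setTimeout(resolve, msec));

(async () => {
  // This interval keeps firing while we "wait", showing nothing is blocked
  const tick = setInterval(() => console.log('event loop still responsive'), 1000);
  await delay(5000);
  clearInterval(tick);
  console.log('done waiting');
})();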
I'm trying to understand how Node.js uses the libuv threadpool. It turns out some modules, like fs and crypto, use this threadpool, so I tried this:
const crypto = require("crypto")
const start = Date.now()
for (let i = 0; i < 5; i++) {
  crypto.pbkdf2('a', 'b', 100000, 512, 'sha512', () => {
    console.log(`${i + 1}: ${Date.now() - start}`)
  })
}
The results were:
1: 1178
2: 1249
3: 1337
4: 1344
5: 2278
And that was expected. The default threadpool size is 4, so 4 hashes will finish almost at the same time and the fifth will wait.
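To double-check that the pool size is what gates this, the pool can be enlarged; a sketch (UV_THREADPOOL_SIZE must be set before the pool is first used, so it goes at the very top of the script):
// Must run before any threadpool work is queued
process.env.UV_THREADPOOL_SIZE = 5;

const crypto = require('crypto');
const start = Date.now();

for (let i = 0; i < 5; i++) {
  crypto.pbkdf2('a', 'b', 100000, 512, 'sha512', () => {
    // With 5 threads, all five callbacks should report similar times
    console.log(`${i + 1}: ${Date.now() - start}`);
  });
}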
But when I do the same thing with fs.readFile():
const start = Date.now()
const fs = require('fs')
for (let i = 0; i < 5; i++) {
  fs.readFile('./test.txt', () => {
    console.log(`${i + 1} : ${Date.now() - start}`)
  })
}
I get this:
1 : 7
2 : 20
3 : 20
4 : 20
5 : 21
There is always one read that finishes first, and the others finish around the same time. I tried reading the file just twice: the same thing happens, one finishes first and the second after. I also tried reading it 10 times, with the same result. Why is this happening?
Edit:
test.txt is just 5 bytes
I'm using a quad-core laptop running Windows 10
I heard on the grapevine a while ago that reading from process.env is a performance hit in Node. Can someone clarify whether this is still the case: should calls to process.env be avoided, or do they make no difference to performance?
Thanks!
You can set up your own test for this using process.hrtime(). Let's try reading it a bunch of times and see what we get:
const time = process.hrtime();
const NS_PER_SEC = 1e9;
const loopCount = 10000000;

let hrTime1 = process.hrtime(time);
for (var i = 0; i < loopCount; i++) {
  let result = process.env.TEST_VARIABLE;
}
let hrTime2 = process.hrtime(time);

let ns1 = hrTime1[0] * NS_PER_SEC + hrTime1[1];
let ns2 = hrTime2[0] * NS_PER_SEC + hrTime2[1];
console.log(`Read took ${(ns2 - ns1) / loopCount} nanoseconds`);
The result on my machine (oldish Windows tower, Node v8.11.2):
Read took 222.5536641 nanoseconds
So around ~0.2 microseconds.
This is pretty fast; when we talk about performance issues, everything is relative. If you really need to read the value very frequently, it would be best to cache it.
To make this clear, let's test both scenarios:
// Cache
const test = process.env.TEST_VARIABLE;
let loopCount = 10000000;

console.time("process.env cached");
for (var i = 0; i < loopCount; i++) {
  let result = test;
}
console.timeEnd("process.env cached");

// No cache
loopCount = 10000000;
console.time("process.env uncached");
for (var i = 0; i < loopCount; i++) {
  let result = process.env.TEST_VARIABLE;
}
console.timeEnd("process.env uncached");
This takes ~10ms when caching, and ~2s when no variable is used to cache the value.
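If you want to apply the caching advice in a real app, one common pattern (a sketch; TEST_VARIABLE is just a placeholder name) is to read the environment once at startup and export the values from a config module:
// config.js: read process.env once at module load
// TEST_VARIABLE is a hypothetical variable name for illustration
module.exports = {
  testVariable: process.env.TEST_VARIABLE,
};

// Elsewhere: require('./config').testVariable is a plain property read,
// so hot loops avoid the slower process.env lookup entirely.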
I currently use socket.io v1.4.2 and node.js v0.10.29 on my server.
I'm trying to track down a memory leak in my app. I'm not sure, but I think socket.io is part of my problem.
So here is the server code (a demo example):
var server = require('http').createServer();
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  socket.on('disconnect', function (data) { /* Do nothing */ });
});
Step 1: Memory: 58 MB
Step 2: I create A LOT of clients (~10000). Memory: 300 MB
Step 3: I close all clients and wait for the GC to do its work
Step 4: I look at my memory: 100 MB :'(
Step 5: Same as steps 2 and 3
Step 6: Memory: 160 MB...
And so on, and the memory keeps growing.
I presumed the GC was being lazy, so I retried with the following code:
setInterval(function () {
  global.gc();
}, 30000);
And I started my app.js with:
node --expose-gc app.js
But I had the same result.
Finally, I tried:
var server = require('http').createServer();
var io = require('socket.io')(server);

var clients = {};
io.on('connection', function (socket) {
  clients[socket.id] = socket;
  socket.on('disconnect', function (data) {
    delete clients[socket.id];
  });
});
And I had the same result.
How can I free this memory?
EDIT
I created snapshots directly from my main source.
I installed the heapdump module with the following command:
npm install heapdump
Then I added this to my code:
var heapdump = require('heapdump');
setInterval(function () { heapdump.writeSnapshot(); }, 30000);
This takes a heapdump of the program every 30 seconds and saves it in the current directory.
I read the heapdumps with the Profiles panel of Chrome DevTools.
So the issue is probably socket.io, because I found many strings, emitted through the socket, that are never released.
Perhaps I'm not writing the emit the right way? I do this:
var data1 = [1, 2, 3];
var data2 = [4, 5, 6];
var data3 = [7, 8, 9];
socket.emit('b', data1, data2, data3);
data1 = [];
data2 = [];
data3 = [];
And my snapshots say that the program keeps the following string in memory, millions of times: "b [1, 2, 3] [4, 5, 6] [7, 8, 9]".
What am I supposed to do?
I also made another (perhaps stupid?) test:
var t1 = new Date();
...
var t2 = new Date();
var data1 = [1, 2, 3];
var data2 = [4, 5, 6];
var data3 = [7, 8, 9];
socket.emit('b', data1, data2, data3);
data1 = [];
data2 = [];
data3 = [];
// Parentheses matter here: "LAG: " + t2 - t1 would evaluate to NaN
console.log("LAG: " + (t2 - t1));
t1 = new Date();
I had this result :
LAG: 1
LAG: 1
...
LAG: 13
LAG: 2
LAG: 26
LAG: 3
...
LAG: 100
LAG: 10
LAG: 1
LAG: 1
LAG: 120
...
keeps growing
EDIT 2:
This is my entire test code:
/* Make a snapshot every 30s in the current directory */
var heapdump = require('heapdump');
setInterval(function () { heapdump.writeSnapshot(); }, 30000);

/* Create server */
var server = require('http').createServer();
var io = require('socket.io')(server);

var t1 = new Date();
var clients = {};

io.on('connection', function (socket) {
  clients[socket.id] = socket;
  socket.on('disconnect', function (data) {
    delete clients[socket.id];
  });
});

setInterval(function () {
  var t2 = new Date();
  for (var c in clients) {
    var data1 = [1, 2, 3];
    var data2 = [4, 5, 6];
    var data3 = [7, 8, 9];
    clients[c].emit('b', data1, data2, data3);
    data1 = [];
    data2 = [];
    data3 = [];
  }
  // Parentheses matter here as well, otherwise this prints NaN
  console.log("LAG: " + (t2 - t1));
  t1 = new Date();
}, 100);
I'm not giving the client code, because I assume that if the problem were in the client, it would be a security issue: it would be an easy way to saturate the server's RAM, a kind of improved DDoS. I just hope the problem is not in the client.
Edit based on the server code you included
On your server:
c.emit('b', data1, data2, data3);
should be changed to:
clients[c].emit('b', data1, data2, data3);
c.emit() was probably throwing an exception because c is the socket.id string and strings don't have a .emit() method.
Original answer
What you need to determine is whether the growth in memory is actually memory allocated within the node.js heap, or free memory that has simply not been returned to the operating system and remains available for future reuse within node.js. Measuring the memory used by the node.js process shows what it has taken from the system, and that should not grow forever over time, but it doesn't tell you what is really going on inside.
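One quick way to separate those two cases (a sketch using only built-ins) is to log process.memoryUsage(), which distinguishes what the OS has assigned to the process (rss) from what is actually live inside the V8 heap (heapUsed):
setInterval(function () {
  var m = process.memoryUsage();
  // rss: memory the OS has given the process (may stay high after a GC)
  // heapUsed: memory actually occupied by live JS objects
  console.log('rss=' + (m.rss / 1048576).toFixed(1) + 'MB' +
              ' heapTotal=' + (m.heapTotal / 1048576).toFixed(1) + 'MB' +
              ' heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + 'MB');
}, 30000);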
FYI, as long as your node.js app has a few free cycles, you shouldn't ever have to manually call the GC. It will do that itself.
The usual way to measure what is being used within the node.js heap is to take a heap snapshot, run your steps 1-4, take another heap snapshot, run those steps again, take a third heap snapshot, and then diff the snapshots to see what memory in the node.js heap actually differs between the two states.
That will show you what is actually in use within node.js that has changed.
Here's an article on taking heap snapshots and reading them in the debugger: https://strongloop.com/strongblog/how-to-heap-snapshots/
My basic setup using the cluster module is as follows (I have 6 cores):
var cluster = require('cluster');

if (cluster.isMaster) {
  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Code here
  console.time("Time: ");
  var obj = { 'abcdef': 1, 'qqq': 13, '19': [1, 2, 3, 4] };
  for (var i = 0; i < 500000; i++) {
    JSON.parse(JSON.stringify(obj));
  }
  console.timeEnd("Time: ");
}
If I run that test, it outputs the timing once per forked worker (six times on my machine).
But if I run that same exact test inside the cluster.isMaster block, it outputs the timing only once.
1) Why is my code being executed multiple times instead of once?
2) Since I have 6 CPU cores helping me run that test, shouldn't it run that code only once, but perform the operation faster?
1) You're forking os.cpus().length separate processes, so if os.cpus().length === 6, you should see 6 separate outputs (which is the case from the output you've posted).
2) No, that's not how it works. Each process is scheduled on a separate core; it's not about running the same code faster, but about being able to do more processing in parallel.
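If the goal is to make the whole job finish sooner, the work itself has to be divided among the workers. A sketch (SHARE is a made-up env variable name used to pass each worker its slice of the iterations):
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
var TOTAL = 500000;

if (cluster.isMaster) {
  // Give each worker an equal slice of the iterations via its environment
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork({ SHARE: Math.ceil(TOTAL / numCPUs) });
  }
} else {
  var share = Number(process.env.SHARE);
  var obj = { 'abcdef': 1, 'qqq': 13, '19': [1, 2, 3, 4] };
  console.time('worker ' + process.pid);
  for (var i = 0; i < share; i++) {
    JSON.parse(JSON.stringify(obj));
  }
  // Each worker does 1/numCPUs of the work, so wall-clock time drops
  console.timeEnd('worker ' + process.pid);
}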