I'd heard on the grapevine a while ago that reading from process.env is a performance hit in Node. Can someone clarify whether this is still the case and calls to process.env should be avoided, or whether it makes no difference to performance?
Thanks!
You can set up your own test for this using process.hrtime(). Let's read it a bunch of times and see what we get:
const NS_PER_SEC = 1e9;
const loopCount = 10000000;

// Reference point for both measurements.
const time = process.hrtime();

// Measure right before and right after reading process.env in a tight loop.
let hrTime1 = process.hrtime(time);
for (let i = 0; i < loopCount; i++) {
  let result = process.env.TEST_VARIABLE;
}
let hrTime2 = process.hrtime(time);

// Convert the [seconds, nanoseconds] tuples to nanoseconds and average per read.
let ns1 = hrTime1[0] * NS_PER_SEC + hrTime1[1];
let ns2 = hrTime2[0] * NS_PER_SEC + hrTime2[1];
console.log(`Read took ${(ns2 - ns1) / loopCount} nanoseconds`);
The result on my machine (oldish Windows tower, Node v8.11.2):
Read took 222.5536641 nanoseconds
So around 0.2 microseconds per read.
This is pretty fast. When we talk about performance issues, everything is relative. If you really need to read it very frequently, it would be best to cache it.
To make this clear, let's test both scenarios:
// Cache
const test = process.env.TEST_VARIABLE;
let loopCount = 10000000;
console.time("process.env cached");
for (let i = 0; i < loopCount; i++) {
  let result = test;
}
console.timeEnd("process.env cached");

// No cache
loopCount = 10000000;
console.time("process.env uncached");
for (let i = 0; i < loopCount; i++) {
  let result = process.env.TEST_VARIABLE;
}
console.timeEnd("process.env uncached");
This takes ~10 ms with the cached variable, and ~2 s when process.env is read on every iteration.
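If reads like this do end up on a hot path, the usual pattern is to read whatever you need from process.env once at startup into a plain object and require that everywhere else. A minimal sketch, where the variable names are just placeholders:
// config.js - read process.env once at module load, then reuse the plain object
const config = {
  testVariable: process.env.TEST_VARIABLE,
  port: Number(process.env.PORT) || 3000 // hypothetical variable, shown for illustration
};

module.exports = config;

// elsewhere:
// const config = require('./config');
// console.log(config.testVariable);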
In order to tell NetworkManager to create a Wi-Fi access point over D-Bus using Node.js with the node-dbus library, I need to provide an SSID as a byte array. As Node.js doesn't have the Blob class from client-side JavaScript, my understanding is that I need to use a Buffer for this, but it's not working.
I can successfully turn a byte array into a string with the following code:
let bytes = new Uint8Array(ssidBytes);
let string = new TextDecoder().decode(bytes);
How do I reverse this to get a byte array from a string?
I've tried:
let ssidBytes = Buffer.from(ssid);
And I've tried:
let ssidBytes = [];
for (let i = 0; i < ssid.length; ++i) {
ssidBytes.push(ssid.charCodeAt(i));
}
Assuming there isn't another error in my code (or the library I'm using), neither of these seems to have the desired effect.
For more background information see https://github.com/Shouqun/node-dbus/issues/228 and https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/663
Thanks
I found the solution in another post, here: https://stackoverflow.com/a/36389863/1782967
let ssid = 'my-ap';
let ssidByteArray = [];
let buffer = Buffer.from(ssid);
for (var i = 0; i < buffer.length; i++) {
ssidByteArray.push(buffer[i]);
}
A more compact solution that does the same:
const ssid = 'my-ap';
const ssidByteArray = Array.from(Buffer.from(ssid));
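As a quick sanity check (a sketch using the same 'my-ap' value), the byte array converts back to the original string, so the round trip through Buffer is lossless:
const ssid = 'my-ap';
const ssidByteArray = Array.from(Buffer.from(ssid)); // [109, 121, 45, 97, 112]
const roundTripped = Buffer.from(ssidByteArray).toString('utf8');
console.log(roundTripped === ssid); // true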
I'm doing a few tutorials on CosmosDB. I've got the database set up with the Core (SQL) API, and I'm using Node.js to interface with it. For development, I'm using the emulator.
This is the bit of code that I'm running:
const CosmosClient = require('@azure/cosmos').CosmosClient
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
const options = {
endpoint: 'https://localhost:8081',
key: REDACTED,
userAgentSuffix: 'CosmosDBJavascriptQuickstart'
};
const client = new CosmosClient(options);
(async () => {
let cost = 0;
let i = 0
while (i < 2000) {
i += 1
console.log(i+" Creating record, running cost:"+cost)
let response = await client.database('TestDB').container('TestContainer').items.upsert({}).catch(console.log);
cost += response.requestCharge;
}
})()
This, without fail, stops at around iteration 1565 and doesn't continue. I've tried it with different payloads without much difference (it may do a few more or a few fewer iterations, but it almost always stops around that number).
On the flipside, a similar .NET Core example works great to insert 10,000 documents:
double cost = 0.0;
int i = 0;
while (i < 10000)
{
i++;
ItemResponse<dynamic> resp = await this.container.CreateItemAsync<dynamic>(new { id = Guid.NewGuid() });
cost += resp.RequestCharge;
Console.WriteLine("Created item {0} Operation consumed {1} RUs. Running cost: {2}", i, resp.RequestCharge, cost);
}
So I'm not sure what's going on.
So, after a bit of fiddling, this doesn't seem to have anything to do with CosmosDB or its library.
I was running this in the debugger, and Node would just crap out after x iterations. I noticed that if I didn't use console.log it would actually work. Also, if I ran the script with node file.js it also worked. So there seems to be some sort of issue with debugging the script while also printing to the console. Not exactly sure what's up with that, but I'm going to go ahead and mark this as solved.
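As an aside about the loop itself: with .catch(console.log), a failed upsert resolves to undefined, so the following response.requestCharge would throw. Here's a sketch of the same loop with the error handled explicitly; it's unrelated to the debugger issue above, just a robustness note, and it assumes the client from the setup earlier:
// assumes the CosmosClient `client`, database and container from the setup above
(async () => {
  let cost = 0;
  for (let i = 1; i <= 2000; i++) {
    try {
      const response = await client.database('TestDB').container('TestContainer').items.upsert({});
      cost += response.requestCharge;
      console.log(i + ' Creating record, running cost: ' + cost);
    } catch (err) {
      // log and skip failed writes instead of crashing the charge accounting
      console.log('upsert ' + i + ' failed: ' + err.message);
    }
  }
})();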
My app will require up to 1000 timers at any given moment. Do timers consume a lot of resources? Is it an accepted practice to deploy multiple timers? Or should I avoid it?
Following @jfriend00's suggestion, I made a sample check below. It might not be perfectly accurate (because of the DOM manipulation), but I hope it gives you the concept.
// let's use two measurements: window.performance.now and console.time
// start console.time; it usually gives the same result as window.performance.now,
// but window.performance.now uses 10 decimal points
console.time('timer_perf_test');
var now1 = window.performance.now();
// our timer count; this test really lags at 100,000
// CHANGE THIS
var count = 10000;
// some dom elements for text
var elem = document.querySelector('img');
var counter = document.querySelector('#counter');
var perf = document.querySelector('#perf');
// how smoothly will our image rotate?
var rotate = function(degree){
elem.style.transform = 'rotate(' + degree +'deg)';
counter.textContent = 'timers executed: ' + degree;
}
// create a bunch of timers with different timeouts
var timer = function(added_time){
setTimeout(function(){
rotate(added_time);
}, 1000 + (added_time * 10));
}
// test begins
for (var i = 0; i < count; i++) {
timer(i)
}
// check results
var now2 = window.performance.now();
perf.textContent = now2 - now1 + ' MS required to create ' + count + ' timers';
console.timeEnd('timer_perf_test');
<img src="https://km.support.apple.com/library/APPLE/APPLECARE_ALLGEOS/HT4211/HT4211-ios7-activity_icon-001-en.png" width="100" height="100">
<p id="counter"></p>
<p id="perf"></p>
I am trying to divide a task in Node.js onto several cores (using an i5, I have 4 cores available). So far every explanation I've found was too cryptic for me (especially the ones talking about servers, which I have no idea about). Can someone show me, using the simple example below, how I can split the task onto several cores?
Example:
I just want to split the task, so that each core runs one of the loops. How do I do that?
var fs = require('fs');
var greater = fs.createWriteStream('greater.txt');
var smaller = fs.createWriteStream('smaller.txt');
for (var i=0; i<10000; i++){
var input = Math.random()*100;
if (input > 50){
greater.write(input + '\r\n');
}
}
for (var i=0; i<10000; i++){
var input = Math.random()*100;
if (input < 50){
smaller.write(input + '\r\n');
}
}
greater.end();
smaller.end();
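One way to do this is with Node's built-in worker_threads module (available without a flag on reasonably recent Node versions). The sketch below spawns one worker per loop from the example, so each loop can run on its own core; it illustrates the mechanism rather than a tuned solution:
// split.js - run with: node split.js
const { Worker, isMainThread, workerData } = require('worker_threads');
const fs = require('fs');

if (isMainThread) {
  // one worker per task; here, the two loops from the example above
  new Worker(__filename, { workerData: { file: 'greater.txt', keepAbove: true } });
  new Worker(__filename, { workerData: { file: 'smaller.txt', keepAbove: false } });
} else {
  // each worker runs this branch on its own thread (and typically its own core)
  const { file, keepAbove } = workerData;
  const stream = fs.createWriteStream(file);
  for (let i = 0; i < 10000; i++) {
    const input = Math.random() * 100;
    if (keepAbove ? input > 50 : input < 50) {
      stream.write(input + '\r\n');
    }
  }
  stream.end();
}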
I'm deciding on the best way to store a lot of timeseries data in memory and I made a simple benchmark to compare buffers vs simple arrays:
var buffers = {};
var started = Date.now();
var before = process.memoryUsage().heapUsed;
for (var i = 0; i < 100000; i++) {
buffers[i] = new Buffer(4);
buffers[i].writeFloatLE(i+1.2, 0);
// buffers[i] = [i+1.2];
}
console.log(Date.now() - started, 'ms');
console.log((process.memoryUsage().heapUsed - before) / 1024 / 1024);
And the results are as follows:
Arrays:
22 ms
8.391242980957031 MB heap used

Buffers:
123 ms
9.9490966796875 MB heap used
So according to this benchmark arrays are 5+ times faster and take 18% less memory. Is this correct? I certainly expected buffers to take less memory.
There's an overhead (in time and space) for each Buffer you create.
I expect you'll get better space (and maybe time) performance if you compare
buffers[i] = new Buffer(4 * 1000);
for (let j = 0; j < 1000; ++j) {
  buffers[i].writeFloatLE(i + j + 1.2, 4 * j);
}
with
buffers[i] = [];
for (let j = 0; j < 1000; ++j) {
  buffers[i].push(i + j + 1.2);
}
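For reference, here's a self-contained version of that comparison. It assumes the same 100,000-value total, split into 100 chunks of 1,000, and uses Buffer.alloc instead of the deprecated new Buffer constructor:
// compare one Buffer per 1,000 floats vs. one plain array per 1,000 floats
const CHUNKS = 100;      // 100 chunks x 1,000 values = 100,000 values in total
const CHUNK_SIZE = 1000;

function bench(label, makeChunk) {
  const chunks = {};
  const started = Date.now();
  const before = process.memoryUsage().heapUsed;
  for (let i = 0; i < CHUNKS; i++) {
    chunks[i] = makeChunk(i);
  }
  const heapMB = (process.memoryUsage().heapUsed - before) / 1024 / 1024;
  console.log(label, Date.now() - started, 'ms', heapMB.toFixed(2), 'MB heap');
  return chunks; // return the data so a caller could keep it alive
}

// one Buffer per chunk, floats packed with writeFloatLE
bench('buffers', function (i) {
  const buf = Buffer.alloc(4 * CHUNK_SIZE);
  for (let j = 0; j < CHUNK_SIZE; j++) buf.writeFloatLE(i + j + 1.2, 4 * j);
  return buf;
});

// one plain array per chunk
bench('arrays', function (i) {
  const arr = [];
  for (let j = 0; j < CHUNK_SIZE; j++) arr.push(i + j + 1.2);
  return arr;
});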