When memory runs out, my process stops itself and throws a bunch of 'out of memory' errors.
Is there any way to console.log() anything I want when my code stops working, or when I stop it myself (Ctrl+C)?
Everything just scrolls up in the terminal.
You may try something like:
const os = require('os');

const THRESHOLD = 1000000 * 100; // 100 MB

// Check how much memory is left, once a minute
setInterval(function () {
  if (os.freemem() - process.memoryUsage().rss < THRESHOLD) {
    console.log('Running low on memory!');
  }
}, 1000 * 60);
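If the goal is to log something at the moment the process stops, you can also hook the exit paths directly. A minimal sketch (the logMemory helper and its messages are illustrative; note that a hard V8 out-of-memory abort kills the process without running these hooks, so they cover normal exits and Ctrl+C):

```javascript
// Log memory stats on the way out. A hard V8 OOM abort bypasses these
// hooks entirely; they fire on normal exits and on handled signals.
const logMemory = (label) => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  const mb = (n) => `${(n / 1024 / 1024).toFixed(1)} MB`;
  console.log(`[${label}] rss=${mb(rss)} heapTotal=${mb(heapTotal)} heapUsed=${mb(heapUsed)}`);
};

// Runs on normal exit and after process.exit(); only synchronous code works here.
process.on('exit', () => logMemory('exit'));

// Ctrl+C sends SIGINT; handling it ourselves lets the 'exit' hook fire too.
process.on('SIGINT', () => {
  logMemory('SIGINT');
  process.exit(130); // conventional exit status for an interrupted process
});
```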
Alternatively, you can increase the memory limit of Node.js so it doesn't crash as early:
# Increase the max old-space size to 4 GB.
$ node --max-old-space-size=4096 index.js
I am trying to debug an out-of-memory error in Node.js.
My script fetches data and pushes it into an array. It looks like this:
class Fetcher {
  async fetchAll () {
    const hits = []
    let page = 1
    while (true) {
      const result = await fetchData(page)
      hits.push(...result)
      if (!result.length) {
        break
      }
      page++
    }
    return hits
  }
}
When running this against a rather large database, I hit an out-of-memory error around page 50 (all pages are approximately the same size):
If I set --max-old-space-size to 8096, the error disappears. If I set it to 2048, the error comes back.
However, I'm sure my data is not that large, and I'm trying to see which operation exactly is taking so much space, so I tried to log memory usage with a function like this:
import * as v8 from 'v8'

const printMemory = (logger: Logger) => {
  const formatMemory = memory => {
    return `${Math.round(memory / 1024 / 1024 * 100) / 100} MB`
  }
  const used = process.memoryUsage()
  const v8HeapStatistics = v8.getHeapStatistics()
  logger.debug(`V8 [used_heap_size]: ${formatMemory(v8HeapStatistics.used_heap_size)}`)
  logger.debug(`V8 [total_available_size]: ${formatMemory(v8HeapStatistics.total_available_size)}`)
  for (let key in used) {
    logger.debug(`${key} ${formatMemory(used[key])}`)
  }
}
This function is supposed to print the memory usage by the process and by V8.
When I run this function inside the loop (where the application crashes), it never reports more than 250 MB of usage, even though memory usage grows over time. Example from the last loop iteration before the crash:
I'd really like to see something like v8.getHeapStatistics().total_available_size get closer to 0 just before the crash in order to debug my code.
Why is my app crashing with an out of memory error, even if it looks like I still have plenty of available memory?
I wrote a simple load-testing script that runs N hits against an HTTP endpoint over M parallel async lanes. Each lane waits for its previous request to finish before starting a new one. For my specific use case, the script randomly picks a numeric "width" parameter to add to the URL each time. The endpoint returns between 200k and 900k of image data per request depending on the width parameter, but my script does not care about this data and simply relies on garbage collection to clean it up.
const fetch = require('node-fetch');

const MIN_WIDTH = 200;
const MAX_WIDTH = 1600;
const loadTestUrl = `
http://load-testing-server.com/endpoint?width={width}
`.trim();

async function fetchAll(url) {
  const res = await fetch(url, {
    method: 'GET'
  });
  if (!res.ok) {
    throw new Error(res.statusText);
  }
}

async function doSingleRun(runs, id) {
  const runStart = Date.now();
  console.log(`(id = ${id}) - Running ${runs} times...`);
  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    const width = Math.floor(Math.random() * (MAX_WIDTH - MIN_WIDTH)) + MIN_WIDTH;
    try {
      const result = await fetchAll(loadTestUrl.replace('{width}', `${width}`));
      const duration = Date.now() - start;
      console.log(`(id = ${id}) - Width ${width} Success. ${i + 1}/${runs}. Duration: ${duration}`)
    } catch (e) {
      const duration = Date.now() - start;
      console.log(`(id = ${id}) - Width ${width} Error fetching. ${i + 1}/${runs}. Duration: ${duration}`, e)
    }
  }
  console.log(`(id = ${id}) - Finished run. Duration: ` + (Date.now() - runStart));
}

(async function () {
  const RUNS = 200;
  const parallelRuns = 10;
  const promises = [];
  const parallelRunStart = Date.now();
  console.log(`Running ${parallelRuns} parallel runs`)
  for (let i = 0; i < parallelRuns; i++) {
    promises.push(doSingleRun(RUNS, i))
  }
  await Promise.all(promises);
  console.log(`Finished parallel runs. Duration ${Date.now() - parallelRunStart}`)
})();
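One detail worth flagging in the script above: fetchAll checks res.ok but never reads the response body, and with node-fetch an unread body can keep the underlying socket and its buffers alive longer than expected. A hedged variant that drains the body (written against the global fetch available in Node 18+, so it is an approximation of the node-fetch version, not a drop-in fix):

```javascript
// Variant of fetchAll that reads and discards the body so the connection
// is released promptly instead of waiting on garbage collection.
// Assumes Node 18+ (global fetch); behavior with node-fetch is analogous.
async function fetchAndDrain(url) {
  const res = await fetch(url, { method: 'GET' });
  if (!res.ok) {
    throw new Error(res.statusText);
  }
  await res.arrayBuffer(); // drain the body, then let it be collected
}
```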
When I run this in Node 14.17.3 on my MacBook Pro running macOS 10.15.7 (Catalina), with even a modest parallel lane count of 3, the following happens in succession after about 120 (x 3) hits of the endpoint:
1. Console output ceases in the terminal for the script, indicating the script has halted.
2. Other applications such as my browser are unable to make network connections.
3. Within 1-2 minutes, other applications on my machine begin to slow down and eventually freeze up.
4. My entire system crashes with a kernel panic and the machine reboots.
panic(cpu 2 caller 0xffffff7f91ba1ad5): userspace watchdog timeout: remoted connection watchdog expired, no updates from remoted monitoring thread in 60 seconds, 30 checkins from thread since monitoring enabled 640 seconds ago after loadservice: com.apple.logd, total successful checkins since load (642 seconds ago): 64, last successful checkin: 10 seconds ago
service: com.apple.WindowServer, total successful checkins since load (610 seconds ago): 60, last successful checkin: 10 seconds ago
I can easily stop the progression of these symptoms with a Ctrl+C in the script's terminal, or by force-quitting it. Everything quickly returns to normal, and I can repeat the experiment multiple times before allowing it to crash my machine.
I've watched Activity Monitor during the progression: there is very little (~1%) CPU usage and memory usage reaches maybe 60-70 MB, though it is pretty evident that network activity peaks during the script's run.
In my search for others with this problem, only two Stack Overflow questions came close:
node.js hangs other programs on my mac
Node script causes system freeze when uploading a lot of files
Anyone have any idea why this would happen? It seems very dangerous that a single app/script could so easily bring down a machine without being killed first by the OS.
var redis = require("redis"),
    client = redis.createClient();

for (var i = 0; i < 1000000; i++) {
  client.publish('channel_1', 'hello!');
}
After the code executes, the Node process consumes 1.2 GB of memory and stays there; GC does not reclaim the allocated memory. If I simulate 2 million messages, or 4 batches of 500,000, Node crashes with a memory error.
Node: 0.8.*; I later tried 4.1.1, but nothing changed.
Redis: 2.8, works fine (1 MB allocated memory).
My server will be publishing more than 1 million messages per hour, so this is absolutely unacceptable (the process would crash every hour).
updated test
var redis = require("redis"),
    client = redis.createClient();

var count = 0;
var x;

function loop() {
  count++;
  console.log(count);
  if (count > 2000) {
    console.log('cleared');
    clearInterval(x);
  }
  for (var i = 0; i < 100000; i++) {
    client.set('channel_' + i, 'hello!');
  }
}

x = setInterval(loop, 3000);
This allocates ~50 MB, with a peak at 200 MB, and now GC drops memory back to 50 MB.
If you take a look at the node_redis client source, you'll see that every send operation returns a boolean indicating whether the command queue length has passed the high-water mark (1000 by default). If you log this return value (or, alternatively, enable redis.debug_mode), there is a good chance you'll see false a lot, an indication that you're sending more requests than Redis can handle all at once.
If this turns out not to be the case, then the command queue is indeed being cleared regularly, which means GC is most likely the issue.
Either way, try jfriend00's suggestion. Sending 1M+ async messages with no delay (so basically all at once) is not a good test. The queue needs time to clear and GC needs time to do its thing.
Sources:
Backpressure and Unbounded Concurrency & Node-redis client return values
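The pacing suggested above can be sketched as batching with a yield to the event loop between batches. The publishInBatches helper below is hypothetical, and takes the client as a parameter so the pacing logic is independent of Redis itself:

```javascript
// Hypothetical helper: publish in bounded batches, yielding to the event
// loop between batches so the command queue can drain and GC can run.
async function publishInBatches(client, channel, total, batchSize = 1000) {
  for (let sent = 0; sent < total; sent += batchSize) {
    const n = Math.min(batchSize, total - sent);
    for (let i = 0; i < n; i++) {
      client.publish(channel, 'hello!');
    }
    // Give I/O and the garbage collector a turn before the next batch.
    await new Promise((resolve) => setImmediate(resolve));
  }
}

// Demo with a stub client so the sketch runs without a Redis server.
let published = 0;
const stub = { publish: () => { published++; } };
publishInBatches(stub, 'channel_1', 5000).then(() => {
  console.log(`published ${published} messages`); // published 5000 messages
});
```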
I use node-memwatch to monitor the memory usage of a Node application. The simplified code is below:
#file test.js
var memwatch = require('memwatch');
var util = require('util');

var leak = [];

setInterval(function() {
  leak.push(new Error("leak string"));
}, 1);

memwatch.on('stats', function(stats) {
  console.log('MEM watch: ' + JSON.stringify(stats));
  console.log('Process: ' + util.inspect(process.memoryUsage()));
});
Running 'node test.js' produces the output below.
MEM watch: {"num_full_gc":1,"num_inc_gc":6,"heap_compactions":1,"usage_trend":0,"estimated_base":8979176,"current_base":8979176,"min":0,"max":0}
Process: { rss: 28004352, heapTotal: 19646208, heapUsed: 9303856 }
Does anyone know what estimated_base and current_base mean? On the project page, https://github.com/lloyd/node-memwatch, they are not described in detail.
Memwatch splits its results into two periods: the RECENT_PERIOD, which covers 10 consecutive GCs, and the ANCIENT_PERIOD, which covers 120 consecutive GCs.
estimated_base = the heap size after 10 consecutive GCs have executed (the RECENT_PERIOD).
current_base = the heap size immediately after a GC.
base min = the minimum heap size recorded for the given period.
base max = the maximum heap size recorded for the given period.
If you follow this link you will be able to check out the code: Memwatch
Is there a way to find out the CPU usage in % of a Node.js process from code, so that when the application is running on the server and detects that CPU usage exceeds a certain percentage, it prints an alert or console output?
On Linux you can get process stats by reading the /proc/[pid]/stat virtual file.
For example, this will check the CPU usage every ten seconds and print to the console if it's over 20%. It works by reading the number of CPU ticks used by the process and comparing the value to a second measurement taken one second later. The difference is the number of ticks used by the process during that second. On Linux there are normally 100 ticks per second per processor (the USER_HZ kernel constant, reported by sysconf(_SC_CLK_TCK)), so dividing the one-second delta by 100 gives the fraction of one core used, and multiplying by 100 gives a percentage.
var fs = require('fs');

// USER_HZ is 100 on virtually all Linux systems (see sysconf(_SC_CLK_TCK)).
var TICKS_PER_SECOND = 100;

var getUsage = function (cb) {
  fs.readFile("/proc/" + process.pid + "/stat", function (err, data) {
    var elems = data.toString().split(' ');
    var utime = parseInt(elems[13]);
    var stime = parseInt(elems[14]);
    cb(utime + stime);
  });
};

setInterval(function () {
  getUsage(function (startTime) {
    setTimeout(function () {
      getUsage(function (endTime) {
        var delta = endTime - startTime; // ticks used during that second
        var percentage = 100 * (delta / TICKS_PER_SECOND);
        if (percentage > 20) {
          console.log("CPU Usage Over 20%!");
        }
      });
    }, 1000);
  });
}, 10000);
Try looking at this code: https://github.com/last/healthjs
Network service for getting CPU of remote system and receiving CPU usage alerts...
Health.js serves 2 primary modes: "streaming mode" and "event mode". Streaming mode allows a client to connect and receive streaming CPU usage data. Event mode enables Health.js to notify a remote server when CPU usage hits a certain threshold. Both modes can be run simultaneously...
You can use the os module now.
var os = require('os');
var loads = os.loadavg();
This gives you the load average over the last 1, 5, and 15 minutes.
It doesn't give you the CPU usage as a percentage, though.
Use the process.cpuUsage function (introduced in Node v6.1.0).
It shows the time the CPU spent on your Node process. Example adapted from the docs:
const previousUsage = process.cpuUsage();
// { user: 38579, system: 6986 }

// Spin the CPU for 500 milliseconds
const startDate = Date.now();
while (Date.now() - startDate < 500);

// At this moment you can expect a result close to 100%.
// Time is *1000 because cpuUsage reports microseconds.
const usage = process.cpuUsage(previousUsage);
const result = 100 * (usage.user + usage.system) / ((Date.now() - startDate) * 1000);
console.log(result);

// Set a 2-second "non-busy" timeout
setTimeout(function () {
  console.log(process.cpuUsage(previousUsage));
  // { user: 514883, system: 11226 } ~ 0.5 sec
  // Here you can expect a result of about 20% (0.5s busy out of 2.5s total
  // runtime, relative to previousUsage, which was taken about 2.5s ago).
}, 2000);
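The same measurement can be wrapped into a reusable sampler (the makeCpuSampler name is illustrative); each call returns the percentage of one core used since the previous call, which fits the "alert when over a threshold" use case from the question:

```javascript
// Illustrative sampler around process.cpuUsage(): each call to sample()
// reports CPU time used since the previous call as a percent of one core.
function makeCpuSampler() {
  let lastUsage = process.cpuUsage();
  let lastTime = process.hrtime.bigint();
  return function sample() {
    const used = process.cpuUsage(lastUsage);        // µs since last sample
    const now = process.hrtime.bigint();
    const elapsedUs = Number(now - lastTime) / 1000; // ns -> µs
    lastUsage = process.cpuUsage();
    lastTime = now;
    return 100 * (used.user + used.system) / elapsedUs;
  };
}

const sample = makeCpuSampler();
setInterval(() => {
  const pct = sample();
  if (pct > 20) {
    console.log(`CPU usage over 20% (${pct.toFixed(1)}%)`);
  }
}, 10000).unref(); // unref so the timer doesn't keep the process alive
```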
See node-usage for tracking process CPU and memory usage (not the system's).
Another option is the node-red-contrib-os package.