How can I set the memory size for Intern when run from Grunt?

Presumably, to increase Intern's memory limit you would do something like:
node --max-old-space-size=8192 node_modules/.bin/intern
But how do I do that when running Intern from Grunt? For Intern 3, it looks like node_modules/intern/tasks/intern.js spawns Intern as a separate process from Grunt, so the option should be passed there. But for Intern 4 it's less clear to me. Maybe Intern runs in the Grunt process, in which case I would need to increase the memory limit for Grunt itself?

You're correct that Intern is run in the grunt process, so increasing grunt's memory allocation is the way to go.
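For example (a sketch: grunt is assumed to be installed locally, the 8192 MB value is illustrative, and test stands in for whatever task you run), you can pass the V8 flag to the node process that runs grunt:
node --max-old-space-size=8192 ./node_modules/.bin/grunt test
or, equivalently, via the NODE_OPTIONS environment variable:
NODE_OPTIONS=--max-old-space-size=8192 grunt test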

Related

How to force nodejs to do garbage collection?

At startup, my node.js app seems to use around 200MB of memory. If I leave it alone for a while, it shrinks to around 9MB.
Is it possible from within the app to:
Check how much memory the app is using?
Request the garbage collector to run?
The reason I ask is that I load a number of files from disk, which are processed temporarily. This probably causes the memory usage to spike. But I don't want to load more files until the GC runs; otherwise there is the risk that I will run out of memory.
Any suggestions?
If you launch the node process with the --expose-gc flag, you can then call global.gc() to force node to run garbage collection. Keep in mind that all other execution within your node app is paused until GC completes, so don't use it too often or it will affect performance.
You might want to include a check when making GC calls from within your code so things don't go bad if node was run without the flag:
if (global.gc) {
  global.gc();
} else {
  console.log('Garbage collection is not exposed; start the app with `node --expose-gc index.js`');
  process.exit(1);
}
If for some reason you cannot pass the --expose-gc flag to your node process at startup, you can try this:
import { setFlagsFromString } from 'v8';
import { runInNewContext } from 'vm';
setFlagsFromString('--expose_gc');
// grab a reference to the gc() function from a fresh context
const gc = runInNewContext('gc');
gc();
Notes:
- This worked for me in node 16.x.
- You may want to check process.memoryUsage() before and after running the gc (see the sketch below).
- Use with care. Quoting the node docs for v8.setFlagsFromString: "This method should be used with care. Changing settings after the VM has started may result in unpredictable behavior, including crashes and data loss; or it may simply do nothing."
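As a minimal sketch of that before/after check (assuming gc was obtained as above, or the process was started with --expose-gc):
const before = process.memoryUsage().heapUsed;
gc();
const after = process.memoryUsage().heapUsed;
console.log(`heapUsed: ${(before / 1048576).toFixed(1)} MB -> ${(after / 1048576).toFixed(1)} MB`);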
One thing I would suggest: unless you need those files right at startup, try to load them only when you need them.

Is it possible to create a child process to handle some tasks in Node?

I have a pretty big and terribly written piece of code that eventually crashes the main Node.js process, probably due to memory leaks. I tried fixing it, but it's very bad (single-letter variables and such).
Sometimes it crashes in 10 seconds, sometimes after 5 hours, but it always crashes eventually.
It is not mission critical; it reads emails over IMAP.
I don't want to integrate a queue processor right now. Can I simply spawn a child Node.js process and run this block of code inside it? What is the correct way of doing that?
You can use .spawn() or .exec() from the child_process module. If you're running a node.js script, the program you run is node: pass the script you want to execute as the first argument and any arguments for that script as subsequent arguments. The script will then run in another process.
You just separate the troublesome code out into its own node.js script and run it this way.
If you want to understand more about the difference between spawn and exec, this is a good article on that.
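A minimal sketch of that setup (the script name imap-worker.js and its --verbose flag are made up for illustration):
const { spawn } = require('child_process');
// run the flaky code in its own process; process.execPath is the path of the current node binary
const child = spawn(process.execPath, ['imap-worker.js', '--verbose'], {
  stdio: 'inherit' // share the parent's stdout/stderr
});
child.on('exit', (code, signal) => {
  console.log(`worker exited with code ${code}, signal ${signal}`);
  // the worker is known to crash, so this is a natural place to restart it
});
Because the child is a separate OS process, its memory leaks and crashes cannot take down the parent.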

How can I automatically test for memory leaks in Node?

I have some code in a library that has leaked badly in the past, and I would like to add regression tests to avoid that in the future. I understand how to find memory leaks manually, by looking at memory usage profiles or using Valgrind, but I have had trouble writing automated tests for them.
I tried calling global.gc() followed by process.memoryUsage() after running the operation I was checking for leaks, then repeating this to try to establish a linear relationship between the number of operations and memory usage, but there is noise in the memory usage numbers that makes this hard to measure accurately.
So, my question is this: is there an effective way to write a test in Node that consistently fails when an operation leaks memory, and passes when it does not?
One wrinkle that I should mention is that the memory leaks were occurring in a C++ addon, and some of the leaked memory was not managed by the Node VM, so I was measuring process.memoryUsage().rss.
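For reference, a minimal sketch of the sampling approach described above (requires --expose-gc; batch sizes are arbitrary):
// Run `operation` in batches, force GC, then sample RSS after each batch.
// Fitting a line to the samples gives a rough bytes-leaked-per-operation estimate.
function sampleRss(operation, batches, opsPerBatch) {
  const samples = [];
  for (let i = 0; i < batches; i++) {
    for (let j = 0; j < opsPerBatch; j++) {
      operation();
    }
    global.gc();
    samples.push(process.memoryUsage().rss);
  }
  return samples;
}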
Here is one way of automating and logging information to test for memory leaks in node js.
There is a great module called memwatch-next.
npm install --save memwatch-next
Add to app.js:
const memwatch = require('memwatch-next');
// ...
memwatch.on('leak', (info) => {
  // Some logging code...
  console.error('Memory leak detected:\n', info);
});
This will allow you to automatically measure if there is a memory leak.
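memwatch-next also provides a HeapDiff class that can be dropped into an automated test (a sketch; runOperationManyTimes is a hypothetical function exercising the code under test):
const hd = new memwatch.HeapDiff();
runOperationManyTimes(); // hypothetical: exercise the code under test
const diff = hd.end();
// diff lists what was allocated and freed between the two snapshots, grouped by object type
console.log(JSON.stringify(diff, null, 2));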
Now, to put it to the test:
A good tool for this is Apache JMeter. More information here.
If your app speaks HTTP, you can use JMeter to soak test the application's endpoints.
Soak testing is done to verify a system's stability and performance characteristics over an extended period of time, which makes it a good fit when you are looking for memory leaks, connection leaks, etc.
Continuous integration software:
If you are using continuous integration software like Jenkins, you can create a Jenkins job to do this for you prior to deployment to production: it will test the application with the parameters provided and, depending on how the job is configured, either deploy the application or report that there is a memory leak.
Hope it helps, and good luck!
Given an arbitrary program, is it always possible to determine whether it will ever terminate? This is the halting problem, and the answer is no. Consider the following program:
function collatz(n) {
  if (n === 1)
    return;
  if (n % 2 === 0)
    return collatz(n / 2);
  else
    return collatz(3 * n + 1);
}
No one knows whether this function terminates for every positive integer n; that is the Collatz conjecture. The same idea applies to data in memory: it's not always possible to identify what memory is no longer needed and can thus be garbage collected. There is also the case of a program that is designed to consume a lot of memory in some situations. The only known option is coming up with a heuristic like the one you have, but it will most likely produce false positives and false negatives. It may be easier to determine the root cause of the leak so it can be corrected.

Running garbage collection manually in node

I am using node and am considering manually running garbage collection. Are there any drawbacks to this? The reason I am considering it is that it looks like node is not running garbage collection frequently enough. Does anyone know how often V8 runs its garbage collection routine in node?
Thanks!
I actually had the same problem running node on heroku with 1GB instances.
When running the node server on production traffic, the memory would grow constantly until it exceeded the memory limit, which caused it to run slowly.
This was probably caused by the app generating a lot of garbage; it mostly serves JSON API responses. But it wasn't a memory leak, just uncollected garbage.
It seems that node didn't prioritize garbage collection of the old object space for my app, so memory would grow constantly.
Running global.gc() manually (enabled with node --expose_gc) would reduce memory usage by 50MB every time and would pause the app for about 400ms.
What I ended up doing is running gc manually on a randomized schedule (so that heroku instances wouldn't do GC all at once). This decreased the memory usage and stopped the memory quota exceeded errors.
A simplified version would be something like this:
function scheduleGc() {
  if (!global.gc) {
    console.log('Garbage collection is not exposed');
    return;
  }

  // schedule next gc within a random interval (e.g. 15-45 minutes)
  // tweak this based on your app's memory usage
  var nextMinutes = Math.random() * 30 + 15;

  setTimeout(function () {
    global.gc();
    console.log('Manual gc', process.memoryUsage());
    scheduleGc();
  }, nextMinutes * 60 * 1000);
}

// call this in the startup script of your app (once per process)
scheduleGc();
You need to run your app with garbage collection exposed:
node --expose_gc app.js
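If you want to see what a manual collection costs in your own app (the ~400ms pause mentioned above will vary), a quick sketch:
// measure how long a forced GC blocks the process (requires --expose_gc)
const start = process.hrtime.bigint();
global.gc();
const pauseMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`manual gc took ${pauseMs.toFixed(1)} ms`);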
I know this may be a bit of a tardy reply to help the OP, but I thought I would share my recent experiences with Node JS memory allocation and garbage collection.
We are currently working on a node JS server running on a Raspberry Pi 3. Every so often it would crash due to running out of memory. I initially thought this was a memory leak, but after a week and a half of searching through my code and coming up with nothing, I realized the problem could instead be that Node JS was allocating more memory for its processes than was available on the RPi 3 before doing a GC.
I have been running new instances of my server with the following command (note that the V8 flags must come before the script name, or node will pass them to the script instead):
node --max-executable-size=96 --max-old-space-size=128 --max-semi-space-size=2 server.js
This effectively limits the total amount of space that node is allowed to take up on the local machine and forces garbage collections to be done more frequently. Thus far, we are seeing a constant usage of memory and it confirms to me that my code was not leaking initially, but rather node was allocating more memory than possible.
EDIT: These links outline in more specific terms the issue I was dealing with:
- nodejs decrease v8 garbage collector memory usage
- https://github.com/nodejs/node/issues/2738
V8 runs garbage collection when it decides it is useful; there is no fixed interval. You can read this article to learn about V8 garbage collection in node: https://strongloop.com/strongblog/node-js-performance-garbage-collection/
In any case, it's a bad idea to run the garbage collector manually in your project, because it completely blocks the node process: while garbage collection is running, your program won't handle any requests.

Is it possible to run Watir test in parallel?

I have simple Watir tests.
Each test is self-contained, with no shared state or dependencies of any kind, and each test opens and closes the browser.
Is it possible to run the tests in parallel to reduce the total run time?
Even just 2 or 3 tests in parallel would reduce the time dramatically.
Take a look at the parallel_tests Ruby gem. Depending on your setup, running the tests in parallel could be as simple as this:
parallel_cucumber features/
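The gem also ships parallel_rspec and parallel_test binaries, so if your Watir tests run under RSpec rather than Cucumber, the equivalent (assuming a conventional spec/ directory) would be:
parallel_rspec spec/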
