How do you increase NodeJS memory limit on Windows?

I need to run npm install with 8GB memory limit on NodeJS 14 on Windows 10 (with 16GB RAM).
I have tried the following to no avail:
node --max-old-space-size=8192 "C:\Program Files\nodejs\npm" install - unexpected behavior but still seems to be at 4GB after using CTRL + C
set NODE_OPTIONS=--max-old-space-size=8192&&npm install - still 4GB
Adding NODE_OPTIONS environment variable (to both User and System variables) - still 4GB
Related questions:
Where do I set 'NODE_OPTIONS="--max-old-space-size=2048"'
npm install - javascript heap out of memory

TROUBLESHOOTING THE ISSUE
You haven't shown any troubleshooting results, thus far, that lead me to believe the issue is being caused by the heap's max-size setting, nor any evidence that the variable isn't being assigned the value you are attempting to assign to it. I am not saying that the issue isn't being caused by the environment variable; I am saying that more troubleshooting needs to be done to know for certain what the underlying cause of your issue is.
Personally, from what you have shown the community thus far, I believe the problem is that you are running out of memory somewhere other than the heap. If you're using a typical PC with Windows 10, it's probably that your machine doesn't have enough memory available, not in general, but at the moment you attempt the install.
This is why I think you may be running out of memory:
In my machine I have:
16GB of RAM,
4x HDDs @ 2TB each, for a total of 8TB of storage,
an 8-thread quad-core Intel i7-6700 (6th gen),
an ASUS motherboard w/ a Z190 chipset.
My Operating Systems
I use Linux, but I got Windows for free because it was cheaper for me to buy a new PC and upgrade it than to buy the parts separately. I never boot into Windows, so it's always sitting as an untouched fresh install.
Currently I just upgraded to Fedora Workstation 36.
HD #1 (2TB): Windows 10
HD #2 (2TB): Fedora Workstation
HD #3 & #4 (2TB x2 for 4TB total): Local Storage
TESTING
I ran performance tests using a Node program that spawns child processes on other threads, plus the Firefox browser with up to 70 tabs open, which I would attempt to refresh all at once.
The results were staggering.
I won't go into detail, but the important thing to note is that I have 16GB of DDR4 RAM.
When Windows would need 15+GB of memory, Linux would only be using about 10GB. That is a 37.5%-42.5% gain.
However, I have Windows Pro, which allowed me to disable much of the telemetry, and I found that with telemetry and some other features disabled, Windows was much more performant. Where Windows would use 15+GB, Linux would be at 12+GB; that's still around a 20-25% gain in memory.
One important thing to note, though: sometimes Linux would freeze up on me. Windows was more robust at recovering when I overloaded the system, but it took quite a bit less to bog Windows down.
The point that I am making:
...is that you're only allowing 8GB, and that's not very much nowadays.
To be honest, I feel like the 16GB I have is far too little; I plan on upgrading soon.
STEP #1 — Troubleshooting the Issue Further
When troubleshooting, you always need to check that your syntax and semantics are correct, so you can rule out the possibility that the problem is due to a silly typo.
"I know what you're thinking..."
"It's not a typo!"
But as it turns out, there is a typo: the snippet in your question uses the Node command-line-flag syntax as the syntax for the Node environment variable. This is incorrect, as the two use different sets of characters. At first glance they look identical, but if you look at the two-row MD table I created below, you'll see there is actually a pretty big difference between them.
| SEMANTICS | SYNTAX | DIFFERENCE |
| --- | --- | --- |
| Command-line Flag | "--max-old-space-size" | DASHES |
| Environment Variable | "--max_old_space_size" | UNDERSCORES |
"For those with less-than-perfect vision, the difference is that the Environment Variable is typed using underscores, and the other is typed using dashes"
STEP #2 — TEST LOG "MAX-OLD-SPACE" POST-CONFIG
"If you used the right syntax and you still are having an issue, you can check to see that the max size of the heap is actually being configured to the size value that you are assigning to it. The process for logging the configured V8 Max Heap Size in the console can be done by completing the stops below."
TO START: Create a completely empty JavaScript file in an environment where you can run it using the node command — name the file test.js.
Add the code below to test.js, and don't add anything else.
/**
 * @file ./test.js
 * @desc Prints the upper memory limit for V8's heap.
 */
// heap_size_limit is the maximum size, in bytes, that V8 will allow the heap to grow to.
const maxHeapSz = require('v8').getHeapStatistics().heap_size_limit;
// Convert bytes to gigabytes, keeping one decimal place.
const maxHeapSz_GB = (maxHeapSz / 1024 ** 3).toFixed(1);
console.log('--------------------------');
console.log(`${maxHeapSz_GB}GB`);
STEP #3 — Run the Following Commands
~/$ node --max-old-space-size=8192 test.js; node --max-old-space-size=4096 test.js; node --max-old-space-size=2048 test.js; node --max-old-space-size=1024 test.js; node --max-old-space-size=512 test.js
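Note that the chained form above is Bash syntax. On Windows cmd.exe (which I assume you're using, per your question) run the commands one at a time, or chain them with && instead; for example:
node --max-old-space-size=8192 test.js && node --max-old-space-size=4096 test.js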
It would be very strange if you produced a different result than what my editor produced (shown in the image), and here is why:
The commands above configure the max-old-space setting. The program that I asked you to copy and paste into test.js prints the result of each command's configuration change: every time you set the value to something different, the value is different at run time. It's important to note that it is just a variable, an environment setting; it doesn't actually change the size of the heap. The value it represents is just saying:
"I, the developer, approve of the heap growing to the size that is set in MAX_OLD_SPACE_SIZE."
...which does not instantly mean Node is able to reach such sizes. Hell, you could set it to 10e1000 MB if you wanted, and it would most likely accept that number as a valid configuration. I tested all kinds of different stuff, and there is no doubt that you can set it to several hundred times the amount of RAM you actually have. In other words, it's a variable (i.e. an address to a location in memory that holds a value); there shouldn't be any issue setting it. It is far more likely that your machine is unable to allocate the amount of space needed: 8GB of RAM, when you only have 16GB, can be, for some systems, a large amount to offer to a single process, especially if you're running Windows or macOS. Linux is king in resource-intensive situations.
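As a quick illustration of that point, here is a minimal sketch of my own (not part of the original answer; the file name limit-vs-ram.js is just a placeholder). Run it with an absurd limit, e.g. node --max-old-space-size=131072 limit-vs-ram.js, and V8 will usually report the configured ceiling back even though it could never allocate that much:
// limit-vs-ram.js: compare the configured V8 heap limit with physical RAM.
const v8 = require('v8');
const os = require('os');
const GB = 1024 ** 3;
// heap_size_limit reflects --max-old-space-size plus some V8 overhead.
const limitGB = v8.getHeapStatistics().heap_size_limit / GB;
const ramGB = os.totalmem() / GB;
console.log(`Configured heap limit: ${limitGB.toFixed(1)}GB`);
console.log(`Physical RAM: ${ramGB.toFixed(1)}GB`);
if (limitGB > ramGB) {
  console.log('The limit exceeds physical RAM: it is only a ceiling, not an allocation.');
}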
STEP #4 — Let's See What Your System Has to Offer
"I assume your on Windows from the image you uploaded into your question, therefore, I will continue to answer this question for Windows only. If you need help with a different platform please let me know by editing your question."
Check your OS performance tool, see what the memory preformace chart looks like for your machine. If 8GBs aren't available, it won't matter how high you set the Heap Limit to. This is a very probable case, because you can set the HeapSize limit to use more ram that you physically have, and just because you got a 16MB stick doesn't mean half of it isn't being used, you might find that your machine is using 7-10Gbs for background processes. Windows is hard on system resources, that's why people use Linux/Unix for launching applications."_
Perform the Following Steps
Press the following keys: CTRL + ALT + DEL
Click on “Task Manager”
Click on the “Performance” tab and check the section titled “Memory”
If the performance tool shows that your memory use is above 45%, then that is your problem. A quick solution to not having enough memory is simply to add more. This can be done by using additional sticks of memory if your machine has room, upgrading to sticks with more capacity, upgrading your entire machine, switching to a cloud VM with enough memory, upgrading your VM if you're already using one, etc. There's an endless number of ways to go about increasing memory; however, all of them require money, so it would be best to try to utilize your current memory to the best of your abilities, which would include some of the following options (see the sketch after this list for a quick programmatic check):
Eliminating background processes
Changing your platform to something less resource-demanding. If you want to keep it GUI-based, Ubuntu 21.04 is a good option, or Mint is a good alternative if you're used to Windows.
If you really want to conserve as many resources as possible, spending the majority of your memory on your application/program/server, you will obviously want a command-line-only install, such as Debian, Red Hat, or Ubuntu Server. Make sure to choose an LTS version: unlike with programming languages, non-LTS versions don't give you many extra goodies, while LTS is extremely important for security.
Lastly, this is obvious, but if it is possible, don't run installs that need more memory than you can give.
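As promised above the list, here is a minimal sketch of my own (not from the original answer) that checks from Node itself whether the machine can plausibly satisfy an 8GB heap before you try:
// check-mem.js: report free vs. total physical memory before raising the heap limit.
const os = require('os');
const GB = 1024 ** 3;
const free = os.freemem() / GB;
const total = os.totalmem() / GB;
console.log(`Free: ${free.toFixed(1)}GB of ${total.toFixed(1)}GB total`);
// 8GB is the target heap size from the question; adjust to taste.
if (free < 8) {
  console.warn('Less than 8GB free: an 8GB heap may not be satisfiable right now.');
}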
"If this question still is not resolved at this point I will need a bit more info from you, such as the error messages you receive. Also a more in-depth explanation of what is actually happening, for instance, a further more indepth explanation about the statement below would be very helpful. As well as providing the screen shot asked for above"
What does this mean? What was unexpected?
"unexpected behavior but seems to still be 4GB after hitting CTRL + C"

The NODE_OPTIONS approach should work, especially as an environment variable. However, some flags should use underscores instead of hyphens. Try --max_old_space_size=8192 in the NODE_OPTIONS environment variable instead. Here is a link to support that.
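For example, on Windows cmd.exe this would look like the following (mirroring the command already tried in the question, with the underscore form swapped in):
set NODE_OPTIONS=--max_old_space_size=8192
npm install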

Related

Code working on windows but launch failures on Linux

First and foremost: I am completely unable to create an MCVE, as I can only reproduce this when running the full code; any attempt to measure or replicate the error in a simpler environment makes it disappear. TL;DR: I suspect it's not a code problem, but a configuration problem.
I have a piece of code for some mathematics on kernels in CUDA. I have a Windows machine (Win10 x64, GTX 1050, CUDA 9.2) and an Ubuntu 17.04 machine (2x GTX 1080 Ti, CUDA 9.1).
My code runs fine on the Windows machine. It is long (~700ms per kernel call for big samples), so I needed to increase the TDR value in Windows. The code also (for now) forces everything to run on a single GPU, the first one, selected with cudaSetDevice(0).
When I copy the same input data and code to the linux machine (I am using git, it is the same code), I get either
an illegal memory access was encountered
or
unspecified launch failure
in my error checking after the GPU call.
If I change the kernel so that, instead of doing the math, it just writes a number to the output, the kernel executes properly. Other CUDA code (different functions that I have) works fine too. All this leads me to think that there is a problem outside the code, not with the code itself, nor with the general configuration of the drivers/environment variables.
I read that xorg.conf can have an effect on the timeout of the kernels. I generated an xorg.conf (I had none) and removed the devices from there, as suggested here. I am connecting to the server remotely and have no monitor plugged in. This changed nothing in the behavior; my kernels still error.
My question is: where else should I look? What Linux-specific configuration should I have a look at to pinpoint the cause of the kernel halts?
The error ended up being, indeed, an illegal memory access.
It was caused by the fact that sizeof(unsigned long) is machine-specific: my Linux machine returns 8, while my Windows machine returns 4. As this code is called from MATLAB, and MATLAB (like some other high-level languages such as Python) defines the sizes of variables in bits (such as uint32(1)), there was a mismatch on the Linux machine when doing memcpys: a copy of N*sizeof(unsigned long) bytes moves twice as many bytes on Linux as the uint32 source actually holds. It turned out that this happened in a variable that is an index, so the kernels were reading garbage (due to the bad memcpy) and then trying to access another array at that location, creating an illegal memory access.
Too specific? Yeah.

How do I ensure accurate results of stress tests?

I have written a script for a project that stress-tests the CPU, the VM, and the I/O while running vmstat, iostat, and sar. The scripts all run for 30 seconds. My tutor has asked me, however, to ensure that the results are accurate. How can I ever be sure? Surely I just take the machine's word for it after running a few tests? The tests have been run for 60 seconds each, and so have the commands, to try to ensure a fair test, but how can I be sure that they are accurate according to my tutor's concerns? Any ideas?
The systems are server versions of Ubuntu 12.04, Debian 7 and Suse 11
There is no way for us to know what your tutor's concerns are, so you should ask him!
"accuracy" usually means that your test results should not be offset by a factor you're not taking into account, like some CPU features being disabled or not used, differences in software configuration, etc.
What is it that you are evaluating, anyway? Evaluating CPU performance is not the same as evaluating a particular hardware system, which is different again if you consider the software as well. Basically, you need to eliminate all differences which are not part of your evaluation, and make sure the rest of the configuration is representative (e.g. installing a modern OS which supports all the features the CPU provides).
And remember that in the end you will always take the machine's word for it, there's just no other way. All you can say is that you have considered all factors you're aware of, and hope that the factors remaining unknown don't have a big influence.
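One concrete way to address the accuracy concern (a hedged sketch of my own, in Node as a stand-in for whatever language the test scripts use) is to repeat each test several times and report the spread, so a one-off outlier can't skew the result:
// bench-stats.js: run a workload N times and report mean and standard deviation.
function stats(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return { mean, stddev: Math.sqrt(variance) };
}
function timeOnce(work) {
  const start = process.hrtime.bigint();
  work();
  return Number(process.hrtime.bigint() - start) / 1e6; // milliseconds
}
// Example workload: a CPU-bound loop standing in for a real stress test.
const runs = Array.from({ length: 10 }, () =>
  timeOnce(() => { let x = 0; for (let i = 0; i < 1e7; i++) x += Math.sqrt(i); return x; })
);
const { mean, stddev } = stats(runs);
console.log(`mean: ${mean.toFixed(1)}ms, stddev: ${stddev.toFixed(1)}ms`);
A large standard deviation relative to the mean is a sign that something outside the test (caching, other processes, thermal throttling) is influencing the numbers.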

Limiting the memory usage of a program in Linux

I'm new to Linux and Terminal (or whatever kind of command prompt it uses), and I want to control the amount of RAM a process can use. I have already looked for hours to find an easy-to-use guide. I have a few requirements for limiting it:
Multiple instances of the program will be running, but I only want to limit some of the instances.
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
The program will run under WINE, and is a .exe.
So can somebody please help with the command to limit the RAM usage on a process in Linux?
The fact that you’re using Wine makes no difference in this particular context, which leaves requirements 1 and 2. Requirement 2 –
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
– is known as limiting the resident set size or rss of the process, and it’s actually rather nontrivial to do on Linux, as is demonstrated by a question asked in 2010. You’ll need to set up Linux control groups (cgroups). Fortunately, Justin L.’s answer gives a brief rundown on how to do so. Note that
instead of jlebar, you should use your own Unix user name, and
instead of your/program, you should use wine /path/to/Windows/program.exe.
Using cgroups will also satisfy your other requirements – you can start as many instances of the program as you wish, but only those which you start with cgexec -g memory:limited will be limited.
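For reference, a hedged sketch of what that setup might look like with the cgroups v1 tools (cgcreate and cgexec come from the cgroup-tools / libcgroup package; the 2G limit and the group name "limited" are just example values):
sudo cgcreate -g memory:limited
echo 2G | sudo tee /sys/fs/cgroup/memory/limited/memory.limit_in_bytes
cgexec -g memory:limited wine /path/to/Windows/program.exe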

Node JS, Highcharts Memory usage keeps climbing

I am looking after an app built with Node JS that's producing some interesting issues. It was originally running on Node JS v0.3.0 and I've since upgraded to v0.10.12. We're using Node JS to render charts on the server and we've noticed the memory usage keeps climbing chart after chart.
Q1: I've been monitoring the RES column in top for the Node JS process, is this correct or should I be monitoring something else?
I've been setting variables to null to try to release memory back to the system (I read this somewhere as a solution), and it makes only a slight difference.
I've pushed the app all the way to 1.5GB, at which point it ceases to function, yet the process doesn't appear to die. No error messages, which I found odd.
Q2: Is there anything else I can do?
Thanks
Steve
That is a massive jump in versions. You may want to share what code changes you may have made to get it working on latest stable. The api is not the same as back in v0.3, so that may be part of the problem.
If not, then the issue you see is more likely from heap fragmentation than from an actual leak. In later V8 versions, garbage collection is more liberal with cleanup to improve performance. (See http://code.google.com/p/chromium/issues/detail?id=112386 for some discussion on this.)
You may try running the application with --max_old_space_size=32 which will limit the amount of memory v8 can use to around 32MB. Note the docs say "max size of the old generation", so it won't be exactly 32MB. Just around it, for lack of a better technical explanation.
Also you can track the amount of external memory usage with --trace_external_memory. This will allow you to know if external memory (i.e. Buffers) are being retained in your application.
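If you'd rather watch memory from inside the process than via top, here is a minimal sketch of my own (process.memoryUsage() has existed since early Node, though older versions report fewer fields):
// mem-watch.js: log RSS and heap usage every 5 seconds.
const MB = 1024 * 1024;
setInterval(() => {
  const m = process.memoryUsage();
  console.log(`rss: ${(m.rss / MB).toFixed(1)}MB, heapUsed: ${(m.heapUsed / MB).toFixed(1)}MB of ${(m.heapTotal / MB).toFixed(1)}MB`);
}, 5000);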
Your note on the application hanging around 1.5GB would tell me you're probably on a 64-bit system. You only mentioned that it ceases to function, but didn't note whether the CPU is spinning during that time. Also, since I don't have example code, I'm not sure what might be causing this to happen.
I'd try running on latest development (v0.11.3 at the time of this writing) and see if the issue is fixed. A lot of performance/memory enhancements are being worked on that may help your issue.
I guess you have a memory leak somewhere (in the form of a closure?) that keeps the (no longer used?) diagrams(?) in memory.
V8 sometimes needs a bit of tweaking when it comes to >1GB of memory. Try out --noincremental_marking and/or --max_old_space_size=8192 (if you have 8GB available; note the flag is in MB).
Check node --v8-options for more options, and go through the --trace* parameters to find out what slows down/stops Node.
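For example, to list just the heap-limit flag (a quick illustration of my own; grep assumes a Unix-like shell):
node --v8-options | grep -i 'old.space'
The dot in the pattern matches either the underscore or the dash, since different Node versions print the flag differently.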

Monitoring the instructions of a running program in ubuntu?

I'm a little stuck here.
The idea is that I'd like to get a file of every instruction run by a program during its execution. I'd like to do it with just the executable in hand (no source) and be able to determine what operation is occurring at what address, and when.
For example, I'd like to be able to run it on Google Chrome, Firefox, etc.
I want to use this for a performance-prediction system I'm working on. I figure that if I'm able to obtain each instruction in the order it is executed on system 1, I can attempt to simulate/model the run time of an identical program being run on system 2, because I'll be able to predict (although I know not with 100% accuracy) L1/L2 cache misses, L1/L2 cache hits, TLB hits/misses, page faults, time taken on floating-point multiplication operations, etc.
I'd like to try to do this on two different systems:
System 1: Ubuntu 10.10 on Intel Core 2 Duo CPU
System 2: Ubuntu 12.04 on system with 2x AMD Sixteen Core Opteron model 6274
(I can definitely change the OSes as necessary, but would prefer to stay with Ubuntu, if possible)
Is this possible / how could I go about doing it? I know with debuggers, you can use them to step through everything, but I don't have the source available.
I think you can use qemu (or even bochs) or valgrind to monitor every executed instruction. qemu and valgrind are x86 binary-translation tools (bochs, by contrast, is an interpreter of x86 code). There is a valgrind tool called cachegrind (plus the kcachegrind GUI) which is ready to emulate a cache by instrumenting every memory access and simulating an L1/L2 cache model (sizes may be configured via command-line options).
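For example, a cache-simulation run might look like this (a hedged illustration; --I1, --D1, and --LL are real cachegrind options taking size,associativity,line-size, but the values and the program name are just samples):
valgrind --tool=cachegrind --I1=32768,8,64 --D1=32768,8,64 --LL=8388608,16,64 ./your_program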
To get deeper (into the pipeline), you may want to look at the free PTLsim (http://www.ptlsim.org/).
