FATAL ERROR: JS Allocation failed when trying to publish an npm module - node.js

I'm trying to publish a module with:
npm publish ./
Following this gist: https://gist.github.com/coolaj86/1318304
But when I try to run this command I get the following error:
FATAL ERROR: JS Allocation failed - process out of memory
The npm process reaches 1.2 GB of memory, but my computer has more free RAM. Any ideas?

Turns out I had a huge ignored "tags" file in my root dir. It was 1.5 GB. I removed it and was able to publish the package.
npm publish --dd provides some additional info. I was able to see that the compression step was taking too long, which gave me the hint.
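A quick way to catch a stray file like this before publishing is to list exactly what npm will pack. This is a sketch, assuming npm >= 6 for the `--dry-run` flag; the demo directory and package name are made up:

```shell
# Sketch: inspect and trim the publish payload before running `npm publish`.
# /tmp/pkg-demo stands in for your package root.
mkdir -p /tmp/pkg-demo && cd /tmp/pkg-demo
printf '{ "name": "pkg-demo", "version": "1.0.0" }\n' > package.json

# Keep large working files (like a ctags "tags" file) out of the tarball
printf 'tags\n*.log\n' > .npmignore

# List exactly which files npm would bundle, with sizes (npm >= 6)
npm pack --dry-run 2>/dev/null || true
```

An oversized entry in the dry-run listing points straight at the file that is blowing up the compression step.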

Related

npm CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

I am running Node.js on Ubuntu on a Raspberry Pi. I recently tried to install the moment package, and now, when I am in my project directory, I get the following error whenever I run any command with npm:
FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory
I am aware of answers to the seemingly same problem such as:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory - webpack sample Angular 2
JS : CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
npm install - javascript heap out of memory
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
However, these approaches do not seem to work on my setup. I can't even run commands such as "increase-memory-limit", since any command I run with npm results in the error. I also tried reinstalling Node.js, but that does not seem to remove the packages. I tried removing them with "npm uninstall ...", but like any other npm command, that results in the same error.
I should mention that my Raspi only has 850 MB of memory, but I also added 6 GB of swap.
Any suggestions how I might approach this issue?
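One thing worth trying, since the wrapper tools themselves crash: set the heap limit through the `NODE_OPTIONS` environment variable (honored by Node.js 8 and later), which every node process npm spawns inherits, so no npm command has to run first. Whether it helps depends on where the crash happens; the 512 MB value below is a guess sized to the Pi's 850 MB of RAM:

```shell
# Raise V8's old-space limit for every node process started from this shell.
# 512 MB is a guess that stays under the Pi's 850 MB of physical RAM.
export NODE_OPTIONS="--max-old-space-size=512"

# Optional check that the option is picked up (prints the heap limit in bytes)
node -e 'console.log(require("v8").getHeapStatistics().heap_size_limit)' \
  2>/dev/null || true
```

Keeping the limit below physical RAM also avoids pushing the workload into the (much slower) swap.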

Angular 8 error: '"node --max-old-space-size=10240"' is not recognized as an internal or external command

First I had the issue with the allocation limit, and I tried to resolve it using the answers here:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory in ionic 3
But after that, I have this issue:
I didn't type this number 10240... I don't know where this number comes from or how to solve the problem. Any idea?
Finally, I deleted the existing node_modules folder from my project and ran the commands below:
npm install -all
npm audit fix
If anyone else is still having this issue: the solution for me was to run a script via node once, which converts every "%prog%" into %prog% in each .cmd file in node_modules, like this one
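The rewrite that script performs can be sketched in a few lines: it unquotes the `"%...%"`-style tokens that `increase-memory-limit` writes into the `.cmd` shims under `node_modules/.bin`. The demo file below is made up; on Windows you would run the same `sed` from Git Bash over `node_modules/.bin/*.cmd`:

```shell
# Demo .cmd shim containing the quoted variable that breaks on Windows
mkdir -p /tmp/cmd-demo
printf '"%%~dp0\\node.exe" "%%prog%%" %%*\n' > /tmp/cmd-demo/demo.cmd

# Strip the quotes around any "%...%" token, leaving other quoting intact
sed 's/"\(%[^"%]*%\)"/\1/g' /tmp/cmd-demo/demo.cmd > /tmp/cmd-demo/demo.cmd.new
mv /tmp/cmd-demo/demo.cmd.new /tmp/cmd-demo/demo.cmd
cat /tmp/cmd-demo/demo.cmd
```

Note the pattern only matches tokens that both start and end with `%`, so a quoted path like `"%~dp0\node.exe"` is left alone.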

Getting error when deploying a Google Cloud Function after Node upgrade

I updated Homebrew, then updated Node from 10.12.0 -> 13.8.0.
Now, I get the following error when trying to deploy a Google Cloud Function
firebase deploy --only functions:createJWT
i functions: preparing functions directory for uploading...
Error: Error parsing triggers: Failed to load gRPC binary module
because it was not installed for the current system Expected
directory: node-v79-darwin-x64-unknown Found:
[node-v64-darwin-x64-unknown] This problem can often be fixed by
running "npm rebuild" on the current system Original error: Cannot
find module
'/Users/.../cloud-functions/functions/node_modules/grpc/src/node/extension_binary/node-v79-darwin-x64-unknown/grpc_node.node'
Require stack:
- /Users/.../cloud-functions/functions/node_modules/grpc/src/grpc_extension.js
- /Users/.../cloud-functions/functions/node_modules/grpc/src/client_interceptors.js
- /Users/.../cloud-functions/functions/node_modules/grpc/src/client.js
- /Users/.../cloud-functions/functions/node_modules/grpc/index.js
- /Users/.../cloud-functions/functions/node_modules/@google-cloud/common-grpc/src/service.js
- /Users/.../cloud-functions/functions/node_modules/@google-cloud/common-grpc/src/operation.js
- /Users/.../cloud-functions/functions/node_modules/@google-cloud/common-grpc/src/index.js
- /Users/.../cloud-functions/functions/node_modules/@google-cloud/logging/src/index.js
- /Users/.../cloud-functions/functions/index.js
- /usr/local/lib/node_modules/firebase-tools/lib/triggerParser.js
Try running "npm install" in your functions directory before
deploying.
Tried npm rebuild and npm install in my functions directory and nothing works
Furthermore... could this issue be due to the fact that the GCF Node runtime environment is Node 10 while I have Node 13 installed on my machine? - according to these docs:
https://cloud.google.com/functions/docs/concepts/nodejs-10-runtime
I am struggling to revert back to Node 10; I tried by running brew install node@10 and get this:
Then I tried running the following command as per the output above to symlink it to /usr/local, but still no luck:
echo 'export PATH="/usr/local/opt/node@10/bin:$PATH"' >> ~/.bash_profile
Searching around about this error, it does seem that the problem is your system expecting one Node version's binary but finding another one - as per this part of the error:
Error: Error parsing triggers: Failed to load gRPC binary module because it was not installed for the current system Expected directory: node-v79-darwin-x64-unknown Found: [node-v64-darwin-x64-unknown]
There are some options you can try besides npm rebuild. Another option might be updating the package.json - as per this case solved here - which would return your npm version to an older one.
Besides that, on this question in the Community there are a few solutions that helped other users, which I would recommend you take a look at: NodeJs Error - Failed to load gRPC binary module because it was not installed for the current system Expected directory?
Let me know if the information helped you!
Trying to deploy to an unsupported Google Cloud Functions execution environment won't work. According to the Google docs, the currently supported environments are Node 8 and Node 10 (beta); reinstalling Node 10 worked for me.
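For context, the v79/v64 numbers in the error are Node's native-addon ABI versions: node-v64 corresponds to Node 10 and node-v79 to Node 13. A quick diagnostic (not a fix) is to compare the ABI your active node reports against the node-vXX directory named in the error:

```shell
# Print the native-module ABI version of the currently active node.
# node-v64-... = Node 10, node-v79-... = Node 13.
node -p 'process.versions.modules' 2>/dev/null || echo 'node not found'
# If the number printed doesn't match the node-vXX directory in the error,
# the native module was built under a different Node; `npm rebuild` (or a
# fresh `npm install`) under the matching version is the usual remedy.
```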

How to allocate more memory to my Virtual Machine running on Fedora to avoid Heap out of Memory Error

I'm running Jenkins on a Fedora virtual machine and have an app created by create-react-app.
When I try to build for production on my local machine, it compiles successfully after ~8 minutes (although with the message 'the bundle size is significantly larger than recommended...').
However, when I run the same script during my Jenkins build process, I get the following error: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory.
The build script is as follows: npm run build-css && node --max_old_space_size=8192 node_modules/.bin/react-scripts-ts build && npm run copy-to-build.
My question is: how can I allocate more memory to my virtual machine running on Fedora so the script can run successfully without throwing FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory?
The solution for me was to set GENERATE_SOURCEMAP=false in the .env.production file as described here.
A better solution (although more time-consuming) is to code-split the huge files (>1 MB).
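For reference, the source-map fix amounts to one line in `.env.production`, which create-react-app reads automatically; the directory below is a stand-in for the project root:

```shell
# Turn off source-map generation for CRA production builds; source maps are
# typically what blow the heap on large bundles. /tmp/cra-demo is illustrative.
mkdir -p /tmp/cra-demo
printf 'GENERATE_SOURCEMAP=false\n' > /tmp/cra-demo/.env.production

# The same flag can also be set inline for a single build instead:
# GENERATE_SOURCEMAP=false npm run build
cat /tmp/cra-demo/.env.production
```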

npm publish - out of memory

I'm on Windows (and using nvm). I want to publish a ~200 MB module to our private npm repo, but I get
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
I found out that I should try to increase the memory limit. I'm not sure if I did it correctly, because I'm on Windows and use nvm (currently node v6.10.0), so my command looked like this:
node --max-old-space-size=4096 C:\\Users\\MyName\\AppData\\Roaming\\nvm\\v6.10.0\\node_modules\\npm\\bin\\npm-cli.js publish
but I ran into the same issue.
While doing so, I had a look at my task manager, and the node process only allocated ~1.4 GB of memory - so how can I get an out-of-memory error when my memory limit isn't reached? And, of course, how can I publish my module?
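Worth noting: ~1.4 GB is roughly V8's default old-space ceiling on 64-bit builds of that era, so hitting it suggests the `--max-old-space-size` flag never reached the process doing the packing. On Node 8+ a simpler route is the `NODE_OPTIONS` environment variable, which every node process (including the one npm runs as) inherits; Node 6, as used in the question, predates it:

```shell
# Apply the heap flag to every node process in this shell, then publish.
# Requires Node >= 8; Node 6 does not support NODE_OPTIONS.
export NODE_OPTIONS="--max-old-space-size=4096"
# npm publish    # run as usual; the npm process now inherits the larger heap
```

This sidesteps the nvm-specific path to npm-cli.js entirely, which is easy to get wrong on Windows.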
