node-webkit-agent won't connect to my application - node.js

I've been trying to debug my node application as I keep getting 'stack size exceeded' errors. I've replaced accidental globals by declaring them with var, but I still can't figure out what the problem is. I tried using node-inspector, but apparently the profiling tab is no longer available. I looked for other ways to analyze my garbage collection with heap analysis tools, but they all seem to be paid services that I can't really afford just for a grad school project.
I tried using node-webkit-agent but I am not able to connect to the agent. After starting my app with node myFile.js and running kill -USR2 (pid of myFile), whenever I restart my app I get the following error:
https://www.dropbox.com/s/1xvz4rnf69zc68d/Screenshot%202014-10-07%2018.17.42.png?dl=0
I don't really know why this is happening... I'm running it on a free EC2 instance and have gotten node-inspector to work there before, but as mentioned it's not helping me debug memory leaks. I think I may just set up node-memwatch instead; I'm not sure why this is so hard to get working, and I'm not very optimistic about finding a free, open-source GUI tool for Node.js memory profiling.
Any help would be greatly appreciated!!
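(For what it's worth, the basic node-memwatch setup the asker is considering looks roughly like this, going by the module's README; the handlers and the "leaky code path" below are illustrative, not from the question:)

var memwatch = require('memwatch');

// Fired when the heap appears to keep growing across several consecutive GCs.
memwatch.on('leak', function (info) {
  console.error('possible leak:', info);
});

// Fired after every full GC with heap usage statistics.
memwatch.on('stats', function (stats) {
  console.log('gc stats:', stats);
});

// Diff two heap snapshots to see which object types are growing.
var hd = new memwatch.HeapDiff();
// ... exercise the suspected leaky code path here (illustrative) ...
var diff = hd.end();
console.log(JSON.stringify(diff, null, 2));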

Related

How do I re-open nodejs server info

For some reason I often notice, after having my terminal open for a while, that I can't see the output from my nodejs server anymore. I know it's still running, though. How do I re-open the node output info? (By "output info" I mean the console.log output from my nodejs code.) I'm running on Linux.
For anyone looking for an answer to this, my conclusion after some research is that it's an artifact of nodemon. You have to go in and kill the process listening on that port yourself, which I've spent an enormous amount of time doing. Not very helpful if you were running this in production and couldn't see output anymore. But it is what it is.

Deploy node code without restart

According to Uber, it's possible to deploy code to Node without a restart. Where can I read more about this? I expect they don't mean forever or pm2.
That’s where the second strength Uber found in Node.js (quick iteration) comes into play: through an interactive testing environment called REPL - Read Eval Print Loop - JavaScript allows developers to deploy new code - and fix the errors that new code may create - without having to stop any processes.
"One of the things that makes Node.js uniquely suited to running in production is that you can inspect and change a program without restarting it," said Ranney. "So very few other languages offer that capability. Not a lot of people seem to know that ability exists, but indeed you can inspect and even change your program while it's running without restarting it."
Source: https://nodejs.org/static/documents/casestudies/Nodejs-at-Uber.pdf
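For context, one common way to get this kind of live access is to have the app itself expose a REPL over a socket using Node's built-in repl and net modules. A minimal sketch (the socket path and the state object are illustrative, not from the case study):

const net = require('net');
const repl = require('repl');

// Some live application state we want to be able to inspect and change.
const state = { greeting: 'hello' };

// Expose a REPL on a Unix socket; connect with e.g. `nc -U /tmp/app-repl.sock`.
net.createServer(function (socket) {
  const r = repl.start({ prompt: 'app> ', input: socket, output: socket });
  r.context.state = state;                 // make live objects available inside the REPL
  r.on('exit', function () { socket.end(); });
}).listen('/tmp/app-repl.sock');           // illustrative socket path

Anything typed at the app> prompt runs inside the live process, so you can inspect or mutate state without restarting it.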

Nodejs --debug-brk extremely slow

I'm using node v6.10.0 and trying to figure out why --debug-brk is so incredibly slow. Without this flag (with just --inspect or --debug), startup is almost instantaneous, though the debugger still takes forever to attach.
This one flag dramatically increases the load time. My project takes 50s+ to start up when debugging is enabled.
Any ideas on how to start debugging this issue?
Edit: To be clear, it's happening across two computers and does NOT happen with Hello World.
Edit 2: More detail. I'm using ES6. I used WebStorm to log what was going on and found that it was simply taking forever to read all my modules. Perhaps that's what's going on?
Is there a way to speed this up? It's taking 34 seconds just to load all the require statements.
Edit 3: It's absolutely the files and require statements. I moved some of the require statements so they only load after the database connection is established. The connection is established instantly, but then the process hangs before moving forward (again for several seconds).
Is there any way to speed this up?
What do you mean by "load time"? Are you talking about time between opening the frontend (e.g. Chrome DevTools) and hitting the breakpoint on the first line of your script?
From your description it sounds like there's an issue with the socket connection being slow. Some things to check:
If the URL your Node.js version outputs contains localhost, replace it with 127.0.0.1. Some OSes use DNS to resolve this name and may fail to resolve it or be slow to do so.
Do you have any issues with network access? A specific Chrome DevTools version has to be downloaded for your Node version, and that download might be slow.
This might be a bug in a particular Node.js version (I cannot recall anything specific that might have caused this). What is puzzling is that it is app-specific: when you run with --debug-brk or --inspect-brk, no JS is executed until after the debug frontend is connected.
Please consider reporting this issue on the Node.js bug tracker - feel free to CC me directly (add #eugeneo anywhere in the bug description)... Is there any chance I could see your code - e.g. is it on GitHub? Also, can you please try a newer Node version?
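(As an aside, the workaround described in Edit 3 - deferring require calls until they are actually needed - looks roughly like this; ./lib/heavy-report is a hypothetical module name:)

// Eager: pays the module-loading cost at startup.
// const heavyReport = require('./lib/heavy-report');   // hypothetical module

// Lazy: pays the cost on first use instead; require() caches, so later calls are cheap.
function generateReport(data) {
  const heavyReport = require('./lib/heavy-report');    // hypothetical module
  return heavyReport.build(data);
}

module.exports = { generateReport };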

Segmentation Fault running Express

I'm receiving SIGSEGV quite randomly when running an express app with PM2. The strange thing is that the server has been running quite well for the past few weeks. It does not print any error message except:
App [XXX] with id [7] and pid [27757], exited with code [255] via signal [SIGSEGV]
After adding the "segfault-handler" module, I started to receive some stack traces. It seems the app hits a few different segmentation faults:
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7fd211f87330]
node(_ZN2v88internal9HashTableINS0_15ObjectHashTableENS0_20ObjectHashTableShapeENS0_6HandleINS0_6ObjectEEEE18FindInsertionEntryEj+0x40)[0xc0b680]
node(_ZN2v88internal15ObjectHashTable3PutENS0_6HandleIS1_EENS2_INS0_6ObjectEEES5_i+0x124)[0xc0c0a4]
node(_ZN2v88internal7Runtime17WeakCollectionSetENS0_6HandleINS0_16JSWeakCollectionEEENS2_INS0_6ObjectEEES6_i+0x59)[0xc7d639]
node(_ZN2v88internal25Runtime_WeakCollectionSetEiPPNS0_6ObjectEPNS0_7IsolateE+0x11d)[0xc7d89d]
[0x2acdd80963b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f0fc311c330]
node(_ZN2v88internal32IncrementalMarkingMarkingVisitor26VisitFixedArrayIncrementalEPNS0_3MapEPNS0_10HeapObjectE+0x376)[0xad8a16]
node(_ZN2v88internal18IncrementalMarking4StepElNS1_16CompletionActionENS1_18ForceMarkingActionENS1_21ForceCompletionActionE+0x2c1)[0xad6181]
node(_ZN2v88internal8NewSpace15SlowAllocateRawEiNS0_19AllocationAlignmentE+0x74)[0xb05244]
node(_ZN2v88internal4Heap11AllocateRawEiNS0_15AllocationSpaceES2_NS0_19AllocationAlignmentE+0x1b9)[0xa678c9]
node(_ZN2v88internal4Heap20AllocateFillerObjectEibNS0_15AllocationSpaceE+0x19)[0xab00b9]
node(_ZN2v88internal7Factory15NewFillerObjectEibNS0_15AllocationSpaceE+0x2d)[0xa67d1d]
node(_ZN2v88internal29Runtime_AllocateInTargetSpaceEiPPNS0_6ObjectEPNS0_7IsolateE+0x5e)[0xc99e8e]
[0x249862c06355]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7fbebabd2330]
node(_ZN2v88internal9HashTableINS0_15ObjectHashTableENS0_20ObjectHashTableShapeENS0_6HandleINS0_6ObjectEEEE18FindInsertionEntryEj+0x40)[0xc0b680]
node(_ZN2v88internal15ObjectHashTable3PutENS0_6HandleIS1_EENS2_INS0_6ObjectEEES5_i+0x124)[0xc0c0a4]
node(_ZN2v88internal7Runtime17WeakCollectionSetENS0_6HandleINS0_16JSWeakCollectionEEENS2_INS0_6ObjectEEES6_i+0x59)[0xc7d639]
node(_ZN2v88internal25Runtime_WeakCollectionSetEiPPNS0_6ObjectEPNS0_7IsolateE+0x11d)[0xc7d89d]
[0x125b9620963b]
I know there is little information here. Can anyone tell me a good way to start diagnosing this? I've checked the PM2 and MongoDB logs, but no luck.
Thanks!
Mars
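(For anyone wanting to reproduce this setup, registering segfault-handler is roughly a one-liner; the log file name below is illustrative:)

var SegfaultHandler = require('segfault-handler');

// Write a native stack trace to crash.log (and stderr) when a SIGSEGV is caught.
SegfaultHandler.registerHandler('crash.log');   // illustrative file name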
Since the stack trace is different every time and not very illuminating, all you can do is try things. The first main suspects are things that use native code, because it's unlikely that plain JavaScript is causing a segfault. It is probably native code that is somehow corrupting memory or not interacting properly with the garbage collector in node.js.
So, the things to look for are the interactions between your current version of node.js and the things you have that use native code (such as MongoDB). Here are some things to try:
Identify all modules that use native code and temporarily remove any that you can live without (one way to enumerate them is sketched after this list).
Upgrade both node.js and mongoDB to recent versions in case you have some interaction between their specific versions that is causing the problem. If you can't upgrade node.js to a recent stable version, then make absolutely sure that all the modules you are running are certified to be stable with the version of node.js that you do have.
Restart your server just in case there's anything goofed up in the OS that is contributing to the problem.
Start with a clean database or run some sort of database check on your database in order to verify that there is no corruption there.
Whenever you update your DB schema, make sure you have a strategy for moving the prior database forward (it looks like in MongoDB you can just make sure you assign a default value to new schema elements).
Gather new info after making changes and repeat the process, changing only one thing at a time so that if the issue goes away you know exactly which change fixed it.
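For the first suggestion, one rough way to enumerate native addons is to walk node_modules looking for compiled .node files; this is a sketch, not something from the original answer:

// Walk node_modules and list compiled native addons (*.node files).
// Run from the project root.
var fs = require('fs');
var path = require('path');

function findNativeAddons(dir, found) {
  found = found || [];
  fs.readdirSync(dir).forEach(function (name) {
    var full = path.join(dir, name);
    if (fs.lstatSync(full).isDirectory()) {
      findNativeAddons(full, found);
    } else if (name.slice(-5) === '.node') {
      found.push(full);
    }
  });
  return found;
}

console.log(findNativeAddons('node_modules').join('\n'));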
Something like that can happen when you copy code along with a node_modules directory that contains binary modules compiled for a different architecture than the one you're trying to run on.
Try either removing node_modules and running npm install from scratch, or running npm rebuild without removing node_modules.

Not able to run Meteor in a cloud IDE, need help understanding Meteor memory usage

I’m new to both meteor and web frameworks [Core C/C++ developer].
When I tried Meteor apps in a cloud IDE (both Cloud9 and Koding), the sample apps run fine. But if I add the twbs:bootstrap package, the IDE kills Meteor (MongoDB) due to insufficient memory (Cloud9 has 768MB and Koding provides 1GB).
I also noticed that the disk usage grows from an initial 60 MB to some 200+ MB, just from adding one package (twbs:bootstrap).
Hence, I'm not able to proceed further with Meteor in the cloud. Is it normal for Meteor to use this much RAM and disk space? If so, why does it use so much memory? Wouldn't this be a problem for real production web apps?
Please guide me.
The first time you install a package and start Meteor, it tries to update the package and Meteor itself (if there's a newer version). This can take a lot more memory than usual. I have been able to get around this by running meteor update and then restarting the Meteor server. Note that sometimes even meteor update complains of being out of memory, but it should still complete. If it truly runs out of memory, it will say 'Killed' in the terminal; contact support in that case.
I have tried using the bootstrap package and have been able to make it work on Cloud9 workspaces using the technique above (full disclosure: I work at Cloud9). We do try to keep the Meteor version up to date because of this issue, but if you have an older workspace, you might still run into it each time the Meteor version increases.
The other thing I've noticed is that memory consumption tends to increase with each hot reload. If the workspace starts complaining, simply shut the Meteor server down and restart it. Memory usage should return to normal levels.
Hope this helps!
