If I use gcore to make a core dump of a Node.js process, what are the best tools to analyze it?
Inspired by:
Tool for analyzing java core dump
In my specific case, I'm interested in investigating some memory leaks, so I'm really curious to get some heap analysis. General tools and even instrumentation packages and techniques are also welcome. I'm finding Node.js to be very interesting, but the runtime analysis tools are just not there yet.
For investigating crashes, I've found node-segfault-handler to be invaluable. It's a module I cooked up to get a native-code stack trace in the event of a hard crash with a signal, e.g. a NULL dereference leading to SIGSEGV.
For investigating memory / allocation issues, here's some of the data I've collected thus far:
1) Blog post by Dave Pacheco - the author talks about using a plugin for MDB to get JS stacks and such. Sadly, as far as I can tell, the source of that plugin was never released (nor any binary form).
2) Postmortem Debugging in Dynamic Environments - ACM Queue article, also written by Dave Pacheco (linked from the blog post). While it makes for GREAT background reading, the article doesn't have many concrete tools and techniques in it.
3) node-panic - A pure-JS tool for dumping state in the event of an assert-failure type crash. Does nothing to help debug crashes that originate from native code faults (SIGSEGV, etc)
4) Joyent: Debugging Production Systems - talk by Bryan Cantrill on the tools and techniques he recommends (thanks, crickeys).
On Linux and Mac you can use llnode, a plugin for the lldb debugger. The project is available under the nodejs organization on GitHub:
https://github.com/nodejs/llnode
You can install it from source via GitHub or use brew on Mac. The README on GitHub should help you get it installed, and there's an introductory blog article here:
https://developer.ibm.com/node/2016/08/15/exploring-node-js-core-dumps-using-the-llnode-plugin-for-lldb/
The original question was about memory analysis, and the v8 findjsobjects and v8 findjsinstances commands will help there by generating a basic histogram of object counts and letting you list the instances of each type.
There's a full article on using llnode for memory analysis here:
http://www.brendangregg.com/blog/2016-07-13/llnode-nodejs-memory-leak-analysis.html
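For reference, a minimal llnode session for that kind of memory analysis might look like the following (the executable path and core file name are placeholders; the v8 subcommands are the ones mentioned above):

$ npm install -g llnode              # or build from source, per the README
$ llnode /usr/bin/node -c core.12345
(llnode) v8 findjsobjects            # histogram of object counts per constructor
(llnode) v8 findjsinstances Socket   # list the instances of one type
(llnode) v8 bt                       # mixed native/JavaScript backtrace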
2017 update: now you can use h-hellyer's solution (llnode, based on lldb rather than mdb). https://stackoverflow.com/a/40045103/3221630
mdb + mdb_v8 is the way to go.
In order to use mdb, you will need a supported OS.
Now, most likely you will be running on Linux. If this is your case:
Part 1. get your core dump
You can get your core dump in many ways.
To get your core dump from a running process you can do this:
pgrep -lf node # get pids
gdb -p your_pid
# once in gdb..
gcore # this will output your core dump
detach # this will allow the process to continue to run.
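Alternatively, if the gcore utility shipped with gdb is installed, you can dump the core in one step (the pid is a placeholder):

gcore -o node-core 12345   # writes node-core.12345; the process keeps running afterwards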
Part 2. use mdb
There is a chance you know about Solaris, OpenSolaris, illumos or SmartOS. Most likely this is not the case. If you can afford the time to set up SmartOS and mdb_v8, fine.
If not, install VirtualBox, and then autopsy. This handles the ritual of installing SmartOS as well as uploading your core dump files to the VM.
Once that is done and you are in your mdb session, you can follow some of the steps from this presentation.
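To give a rough idea, such an mdb_v8 session tends to look like this (the core file name and object address are placeholders, and the module path can vary by setup):

mdb core.12345
> ::load v8.so          # load the mdb_v8 dcmds
> ::jsstack             # JavaScript-aware stack trace
> ::findjsobjects       # histogram of objects by constructor
> 8080674d::jsprint     # print one object (address is a placeholder)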
Related
I'm relatively new to Linux and to building binaries using CMake.
On Windows, I'm accustomed to seeing small binaries with the debug info stored in an external PDB. On Linux, however, I asked this question, Is there a way to determine what code is bloating a Linux shared object?, and it was noted that the -g build option embeds the debug info directly into the binary, which causes the code bloat.
At this point, I have two concerns:
Performance
Luckily, after reading this online, RelWithDebInfo performance is equivalent to Release mode:
Code built with RelWithDebInfo mode should not have its performance degraded in comparison to Release mode, as the symbol table and debug metadata that are generated do not live in the executable code section, so they should not affect the performance when the code is run without a debugger attached.
Memory
Some of the Linux binaries created with RelWithDebInfo are ~100 MB, so in a Google Cloud VM environment, how does running many of these binaries affect memory usage/cost? I googled but didn't find any definitive answers.
Question
Can RelWithDebInfo negatively affect memory usage/cost, and if so, are there methods to avoid this, but still retain the DebugInfo?
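(Not a full answer, but for context: a common way to keep the deployed binaries small while still retaining the debug info is to split it into separate files after the build; myapp is a placeholder here.)

# build with RelWithDebInfo as usual, then split the debug info out:
objcopy --only-keep-debug myapp myapp.debug    # keep the debug data in a separate file
objcopy --strip-debug myapp                    # shrink the binary you actually deploy
objcopy --add-gnu-debuglink=myapp.debug myapp  # let gdb find the .debug file later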
I have an Amazon Linux instance with Tomcat 8.5.29 installed.
Apache has announced 8.5.40 as the latest version.
I want to upgrade Tomcat from 8.5.29 to 8.5.40.
I ran sudo yum update tomcat8, but it will only upgrade to 8.5.32, not 8.5.40.
I also checked with sudo yum info tomcat8, and 8.5.32 is the only available package.
Please tell me why I can't upgrade to version 8.5.40, and how to upgrade it.
Thank you.
AWS Support answered me as below:
I checked the Tomcat versions from the Tomcat website (https://tomcat.apache.org) and I could find that version 8.5.40 has been available since April 10, 2019. Usually it takes some time for these packages to reach the Amazon Linux repository after rpm building and basic testing, once they are available from the package vendor. I have reached out to the service team who maintains the 'amzn-updates' repository to ask when the 8.5.40 version will be available via the Amazon Linux repositories. Unfortunately, I do not have any ETA for when it will be available. Please rest assured that I will inform you as soon as I hear from the service team. I am keeping this case open with the status "Pending on Amazon Action" until we get a reply from the service team. Please feel free to update the case if you have any additional queries in the meantime.
Our internal service team has accepted our request to add the Tomcat 8.5.40 version to the Amazon Linux repository. However, there are several blockers to getting it added quickly, so it may be some time before it is available via the Amazon Linux repository. Unfortunately, the service team is unable to provide any ETA for when this package will be available, but they are actively working on getting Tomcat 8.5.40 available for Amazon Linux. I understand it is frustrating, but I would ask you to bear with us until we complete the process of making this package version available for Amazon Linux. On behalf of AWS, I apologize for any inconvenience caused by this. Since it is not possible to keep the case open until the package is available, I am flipping the case status to "Pending merchant action" for now. Inactivity on this case with the above status might cause the case to auto-resolve. Even if the case gets auto-resolved, I will send you a reply on this case once the package is available. Also, you may re-open this case at any time by adding a correspondence to seek updates.
Hope this helps. Please feel free to revert to us in case you have any additional queries, and we will be glad to be of assistance.
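(For what it's worth, until the package lands in the Amazon Linux repository, one workaround is to run the upstream binary distribution outside of yum; the install path below is just an example.)

# see what the repository actually offers
sudo yum list available tomcat8 --showduplicates

# or install the upstream 8.5.40 release manually (not managed by yum)
wget https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.40/bin/apache-tomcat-8.5.40.tar.gz
sudo tar xzf apache-tomcat-8.5.40.tar.gz -C /opt
sudo /opt/apache-tomcat-8.5.40/bin/startup.sh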
Back when the powers that be didn't squeeze the middle class as much and there was time to waste "fooling around", etc., I used to compile everything from scratch from .tgz, manually get the dependencies, and make install to a local dir.
Sadly, there's no more time for such l(in)uxuries these days, so I need a quick, lazy way to keep my 16 GB Linux boot OS partition as small as possible and have apps/software/development environments and other data on a separate partition.
I can deal with mounting my home dir on another partition, but my remaining issue is with /var, /usr, etc. and all the stuff that gets installed there. Every time I apt-get some packages I end up with a trillion dependencies installed, because the author of a 5 kB app decided not to include a 3 kB parser and wanted me to install another 50 MB package to get that 3 kB library :) yay!
Of course, later when I uninstall those packages, all those dependencies that got installed and are never really needed anymore get left behind.
But anyway, the point is that I don't want to have to manually compile and spend hours chasing down dependencies just so I can compile and install to my own paths, and then have to tinker with a bunch of configuration files. So after some research, this is the best I could come up with. Did I miss some easier solution?
Use OverlayFS and overlayroot to overlay my root (/) partition onto my secondary drive or partition, so that my Linux OS is never written to anymore and everything is transparently written to the other partition.
I like the idea of this method and I want to know who uses it and whether it's working out well. What I like is that this way I can continue to be lazy and just blindly apt-get install a toolchain, and everything should work as normal without any special tinkering with each app's config files to change paths.
It's also nice that dependencies will easily be re-used by the different apps.
Any problems I haven't foreseen with this method? Is anyone using this solution?
Docker or other application containers, libvirt/LXC, etc.?
This might be THE way to go? With this method I assume I should install ALL the apps I want to install/try out inside ONE Docker container, otherwise I will be wasting storage space by duplicating dependencies in each container? Or do Docker or other app containers deduplicate files/libs across containers?
Does this work fine for graphical/X Window apps inside Docker/containers?
If you know of something easier than OverlayFS/overlayroot or Docker/LXC to accomplish what I want, and that's not any more hassle to set up, please tell me. Thanks.
After further research and testing/trying out Docker for this, I've decided that "containers" like Docker are the easy way to go for installing apps you may want to purge later. It seems that this technique already uses the OverlayFS/overlayroot kind of technique under the hood to make use of already installed dependencies in the base image, and installs the other needed dependencies in the Docker image. So basically I gain the same advantages as the manual overlayroot technique I talked about, yet without having to do the work of setting all that up.
So yep, I'm a believer in application containers now! It even works for GUI apps.
Now I can keep a very lightweight, small main root partition and simply install anything I want to try out inside app containers, and delete them when done.
This also solves the problem of lingering, no-longer-needed dependencies which I'd have to deal with myself if I were manually doing an overlayroot over the main /.
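As a minimal illustration of that throw-away workflow (the image name and package are just examples, and the X11 bind mount is one common way to run GUI apps from a container, not the only one):

$ docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ubuntu:22.04 bash
# inside the container, install whatever you want to try out
apt-get update && apt-get install -y x11-apps
xeyes     # a GUI app rendered on the host's X server (you may need xhost +local: on the host)
# exit the shell: thanks to --rm the container and everything installed in it is gone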
I am considering developing on the Yocto project for an embedded Linux project (an industrial application) and I have a few questions for those with experience with embedded Linux in general -- Yocto experience a bonus. Just need to get an idea of what is being commonly done in firmware updates.
I have a few requirements: authentication, a secure communications protocol, and some type of rollback if the update fails. Also, if there is a way to gradually release the patch across the fleet of devices, that would be interesting, as I want to avoid bricked devices in the field.
How do you deploy updates/patches to field devices today – and how long did it take to develop it? Are there any other considerations I am missing?
Although you certainly can use rpm, deb, or ipk for your upgrades, my preferred way (at least for small to reasonably sized images) is to have two images stored on flash, and to only update complete rootfs images.
Today I would probably look at meta-swupdate if I were to start working with embedded Linux using OpenEmbedded / Yocto Project.
What I've been using for myself and multiple clients is something more like this:
A container upgrade file, which is a tarball consisting of another tarball (hereafter called the upgrade file), the md5sum of the upgrade file, and often a GPG signature.
An updater script stored in the running image. This script is responsible for unpacking the outer container of the upgrade file, verifying the correctness of the upgrade file using md5sum, and often verifying a cryptographic signature (normally GPG-based). If the upgrade file passes these tests, the updater script looks for an upgrade script inside the upgrade file and executes it.
The upgrade script inside the upgrade file performs the actual upgrade, i.e. it normally rewrites the non-running image, extracts and rewrites the kernel and, if these steps are successful, instructs the bootloader to use the newly written kernel and image instead of the currently running system.
The benefit of having the script that performs the actual upgrade inside the upgrade file is that you can do whatever you need in the future in a single step. I've made special upgrade images that upgrade the FW of attached modems, or that extract some extra diagnostic information instead of performing an actual upgrade. This flexibility will pay off in the future.
To make the system even more reliable, the bootloader uses a feature called bootcount, which counts the number of boot attempts, and if this number gets above a threshold, e.g. 3, the bootloader chooses to boot the other image (as the image configured to be booted is considered to be faulty). This ensures that if one image is completely corrupt, the other, stored image will automatically be booted.
The main risk with this scheme is that you upgrade to an image whose upgrade mechanism is broken. Normally, we also implement some kind of restoration mechanism in the bootloader, such that the bootloader can reflash a completely new system; though this rescue mechanism usually means that the data partition (used to store configurations, databases, etc.) will also be erased. This is partly for security (not leaking info) and partly to ensure that after this rescue operation the system state is completely known to us. (Which is a great benefit when this is performed by an inexperienced technician far away.)
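To make this concrete, a rough sketch of such an updater script is shown below; the file names, signature handling and the inner script name are assumptions, not the author's actual implementation:

#!/bin/sh
# updater.sh <container.tar> - verify the upgrade file and hand over to its embedded script
set -e
WORKDIR=$(mktemp -d)
tar -xf "$1" -C "$WORKDIR"                       # outer container: upgrade.tar.gz + md5 + signature
cd "$WORKDIR"
md5sum -c upgrade.tar.gz.md5                     # integrity check
gpg --verify upgrade.tar.gz.sig upgrade.tar.gz   # authenticity check against a provisioned keyring
tar -xzf upgrade.tar.gz
./upgrade.sh                                     # the script inside the upgrade file does the real work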
If you do have enough flash storage, you can do the following. Make two identical partitions, one for the live system, the other for the update. Let the system pull the updated image over a secure method and write it directly to the other partition. It can be as simple as plugging in a flash drive, with the USB socket behind a locked plate (physical security), or using ssh/scp with appropriate host and user keys.
Swap the partitions with sfdisk, or edit the settings of your bootloader, only if the image is downloaded and written correctly. If not, then nothing happens, the old firmware lives on, and you can retry later.
If you need gradual releases, then let the clients select an image based on the last byte of their MAC address, as sketched below.
All this can be implemented with a few simple shell scripts in a few hours. Or a few days with actually testing it :)
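For the gradual-release idea, the client-side check can be as small as this sketch (the interface name and the 25% rollout threshold are assumptions):

#!/bin/sh
# Only fetch the update if the last byte of the MAC address is below 0x40 (~25% of devices).
LAST=$(awk -F: '{print $6}' /sys/class/net/eth0/address)
if [ $((0x$LAST)) -lt 64 ]; then
    echo "in rollout group, fetching update..."
fi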
Anders' answer is complete, exhaustive, and very good. The only thing I can add as a suggestion is to think about a few things:
Does your application have an internet connection/USB/SD card to store a complete new rootfs? Working with embedded Linux is not like writing a 128 K firmware for a Cortex-M3.
Does your end user have the ability to do the update work?
Is your application installed in an accessible area, or is it remotely controlled?
As for the time you need to develop a complete/robust/stable solution, that is not such a simple question, but note that it is a key point that impacts the market perception of your application, especially in the early days/months of the first deployment, when it is usual to send updates to fix little and big teething bugs.
I am looking for ways to test the network speed from the command line on a Linux box with no GUI. I am not interested in tools like bmon/iftop/wget/curl, especially on the upload side of things; for download it is pretty easy with wget against different targets and servers (places), but I am more interested in the upload side, which is the most important part of a server's bandwidth. I want to test the upload speed to different servers and places around the world, just like you can by visiting speedtest.net using a browser with Flash. If the tool can handle download speeds that way too, then all the better.
I'm not aware of a way to do this without a cooperating remote server. If you upload data, it has to go somewhere... Sites like speedtest.net do exactly that (they have a data sink somewhere).
Provided you do have ssh access to a remote server with a download link somewhat faster than the upload link you want to test, you can achieve this rather simply with netcat:
On your remote server (let's assume IP 1.2.3.4):
$ nc -kl 12345 > /dev/null
On the machine you want to test :
$ time nc 1.2.3.4 12345 < large-file
$ stat -c'%s' large-file
Divide the file size by the "real" time and you have an estimation of your speed.
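For example, if large-file is 104857600 bytes (100 MiB) and time reports 83 s of real time, the upload rate is roughly 104857600 / 83 ≈ 1.26 MB/s, i.e. about 10 Mbit/s.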
Note that you only need to run nc once on the server, and it will accept any number of sequential tests. If you only want it to work once (for security reasons or whatever), omit the -k flag.
I took this from another post I found here and thought I would pass it on:
It looks like there is a tool available on sourceforge that uses speedtest.net from the terminal.
Terminal speedtest: http://sourceforge.net/projects/tespeed/
iperf is a tool designed for this.
You run it on both sides of the connection and it can measure bandwidth either way, with TCP or UDP, and has many tweakable parameters.
tespeed is a great tool. It tests your upload and download speeds in great detail.
I don't think a command-line tool exists for this kind of test, but someone seems to have had the same question; take a look at the solutions suggested there.
Speed testing using iperf is advisable if you want a command-line tool for that.
iperf is an awesome tool for the following reasons:
It allows you to make parallel connections. It can also modify the window size. It reports jitter and dropped packets.
Refer to the link below for the complete explanation.
Network speed test using iperf
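For reference, a typical run looks roughly like this (host name, duration, stream count and UDP bandwidth are placeholders; these options exist in both iperf2 and iperf3, though output formats differ):

# on the remote end (the data sink)
iperf -s

# on the machine you want to test: 4 parallel TCP streams for 30 seconds
iperf -c remote.example.com -t 30 -P 4

# UDP test, which is what reports jitter and dropped packets
iperf -c remote.example.com -u -b 10M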