I would like to monitor and log the memory usage of the processes run by the different nodes in my ROS-based system.
Ideally, the information would be similar to the output of the Linux top command, but only for ROS processes. The rqt_top plugin seems promising, but I am unclear about how to use and store the information from this package.
Examples/tutorials for rqt_top or alternative ways to store memory usage data would be appreciated!
Using ROS Indigo and Ubuntu 14.04.
Found an answer!
Jeet Sukumaran's Syrupy (https://github.com/jeetsukumaran/Syrupy) seems to solve my problem quite well and can be easily added to the launch of my ROS nodes.
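For anyone else trying this: Syrupy polls ps at a fixed interval and logs CPU and memory snapshots for whatever command it wraps. A minimal sketch of how I attach it, assuming syrupy.py is executable and on the PATH; the package and node names here are made up:

    # Wrap a single node so its memory usage is sampled and
    # logged to files in the current directory
    syrupy.py rosrun my_package my_node

In a launch file, the same wrapping can be done per node with roslaunch's launch-prefix attribute, e.g. launch-prefix="syrupy.py".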
Little intro:
I have two OSes on my PC: Linux and Windows. I need Linux for work, but it freezes on my machine, while Windows does not. I've heard that is a common issue with ASRock motherboards.
That's why I want to switch to Windows for work.
So my idea was to create a Docker image with everything I need for work, such as yarn, make, and a lot of other tools, and run it on Windows to get Linux functionality. You get the idea.
I know that Docker is designed to do only one thing per image, but I gave this a shot.
But there are problems constantly. For example, right now I'm trying to install nvm in my image, but after building the image the command 'nvm' is not found in bash. This is a known problem: running source ~/.profile makes the command available in the console, but running it while building the image doesn't affect the shell you get when you run the image, so you have to do that manually every time you use it.
People suggest putting this in .bashrc, which gives a segmentation error.
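For reference, here is the kind of Dockerfile I'm experimenting with, reduced to a minimal sketch (the base image, nvm version, and Node version are arbitrary choices). Sourcing nvm.sh inside each RUN step is the workaround people suggest, since every RUN starts a fresh shell:

    FROM ubuntu:20.04

    RUN apt-get update && apt-get install -y curl ca-certificates

    # Install nvm into a fixed, known location
    ENV NVM_DIR=/root/.nvm
    RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

    # Each RUN is a fresh non-login shell, so source nvm.sh explicitly
    RUN bash -c "source $NVM_DIR/nvm.sh && nvm install 18"

    # Make nvm available in interactive shells started from the image
    RUN echo 'source $NVM_DIR/nvm.sh' >> /root/.bashrc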
And that's just my problem for today; I've encountered many more, as I've been trying to create this image for a couple of days already.
So my question is basically this: is it possible to create a fully operational OS in one Docker image, or could multiple images be connected to form one, or do I just need to stop this and use a virtual machine like a sensible person?
I would recommend using a virtual machine for your use case. Since you will be using this for work, modifying settings, and installing new software, these operations are better suited to a virtual machine, where changing state and configuration is expected.
In contrast, Docker containers are generally meant to be immutable: the running instance of the image should not be altered or configured, so that others can pull down the image and it works "out of the box." Additionally, most Docker images available on Docker Hub are made to be lean, built with only one or two use cases in mind and nothing extra (for security and image size), so I expect you would frequently run into problems trying to set up a Docker image to work inside day to day. Lastly, since this is not done frequently, there is less help available online, and Docker-level virtualization does not really suit your situation.
I recently installed a 5-node DataStax Enterprise/Cassandra 2.1 cluster on Ubuntu 14.04 using a DataStax AMI. I was able to bring the cluster up successfully; however, when I did, I received errors on the console about some Linux limits, shown below:
    The linux limit 'memlock' is '64'. The recommended is 'unlimited'. Check your limits.conf.
    The linux limit 'nofile' is '4096'. The recommended is '100000'. Check your limits.conf.
    Check our documentation for more details:
    http://docs.datastax.com/en/cassandra/2.1/cassandra/install/installRecommendSettings.html?scroll=reference_ds_sxl_gf3_2k__user-resource-limits
When I checked the limits file, the memlock and nofile limits are set properly for both the cassandra and root users, yet the settings are being ignored. Has anyone else experienced this problem? I'm not that experienced with Linux settings and I'm not sure where to look for an error. Sorry if this has already been asked and answered; I didn't find this question when searching.
Bob Glassett
You may want to try rebooting the systems. The package installer will try to use sysctl to set the various options at runtime, but maybe there was an issue with that.
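To check whether the limits actually took effect, you can compare what the running process sees against a fresh shell for the cassandra user. A quick sketch (the process match string is a guess; adjust it to your install):

    # Limits of the running Cassandra JVM (if it's up)
    cat /proc/$(pgrep -f CassandraDaemon | head -n 1)/limits

    # Limits a new shell for the cassandra user would get
    sudo -u cassandra bash -c 'ulimit -l -n'

If the running process shows the old values but a fresh shell shows the new ones, the service just needs a restart (or the reboot) to pick them up.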
Currently I am under the impression that Postgres-XL can only be installed on Linux/Unix machines. Is there a way to install Postgres-XL on a Windows server machine?
If yes, any help or resources would be appreciated.
Thanks
Postgres-XL does not currently compile on Windows. IIRC, this was due to the threading that the Global Transaction Manager uses. It probably would not be difficult to build the other components, the Coordinator and the Datanode, but in any event some testing would be needed. Feel free to send me a message if you are interested in helping test a Windows build if we do one.
I'm running a Raspberry Pi Model B (512 MB RAM) with a 16 GB SD card (rated 300 MB/s) and a recent Raspbian with all updates.
On this machine I've set up an apache2 server, plus node.js with socket.io and firmata.
Within my web-application, video streaming is a key feature.
When I access my web server just for streaming the videos (without node/socket.io/firmata), everything streams with good performance. But when I switch on node.js/socket.io/firmata, it's rather slow: it takes 5-7 seconds to start streaming the videos.
I had problems installing node.js in the first place. Node.js compiled and installed from source like a charm, but when I tried to run it, I got a mysterious "Illegal instruction" message.
As an alternative, I took the precompiled Debian packages and installed them with dpkg, using this repo:
http://revryl.com/2014/01/04/nodejs-raspberry-pi/
They say that node.js will run slower that way, but that's not acceptable for me.
Any hints?
Thanks and regards!
Alright, it's faster now.
For everyone with this issue:
- Drop apache2 and use lighttpd instead. Check out this page to see why: http://www.jeremymorgan.com/blog/programming/raspberry-pi-web-server-comparison/
- Start node.js via a script invoked from /etc/rc.local (a sketch follows below). For some reason it uses much less RAM and CPU when idle.
- Try to avoid firmata. If you need to control hardware that requires simple wiring, try "pi-gpio" instead. It's MUCH faster and uses fewer resources. You also don't need your Arduino anymore, since the RPi alone is enough.
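Here's roughly what the rc.local startup looks like; a minimal sketch, where the script path, user, and log file are whatever fits your setup:

    #!/bin/sh -e
    #
    # /etc/rc.local -- executed at the end of each multiuser runlevel

    # Start the node server as the 'pi' user, detached from the
    # boot process, with output going to a log file
    su - pi -c 'nohup node /home/pi/app/server.js >> /home/pi/app/server.log 2>&1 &'

    exit 0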
I need to build a small cluster for my research. It's pretty humble: I'd like to build a cluster with just my other 3 laptops at home.
I'm writing in C++. My MPI code is ready; I can run it under Visual Studio 2010 and it works fine. Now I want to see the real thing.
I want to do it for free (I'm a student). I have Ubuntu installed, and I wonder:
whether I can build a cluster using Ubuntu. I couldn't find a clear answer to that on the net.
if not, whether there is a free Linux distro that I can use to build the cluster.
I also wonder whether I have to install Ubuntu (or whichever distro the host machine runs) on all the other laptops. Will a different Linux distribution (like openSUSE) work with the one on the host machine, or do they all have to be the same distro?
Thank you all.
In principle, any Linux distro will work in the cluster, and also in principle, they can all be different distros. In practice, it'll be enormously easier with everything the same, and if you pick a distribution that already has a lot of your tools set up for you, it'll go much more quickly.
To get started, something like the Bootable Cluster CD should be fairly easy -- I've not used it myself yet, but know some who have. It'll let you boot up a small cluster without overwriting anything on the host computer, which lets you get started very easily. Other distributions of software for clusters include Rocks and Oscar. A technical discussion on building a cluster can be found here.
I also liked PelicanHPC when I used it a few years back. I was more successful getting it to work than with Rocks, but it is much less popular.
http://pareto.uab.es/mcreel/PelicanHPC/
Just getting a cluster up and running is actually not very difficult for the situation you're describing; getting everything installed and configured just how you want it, though, can be quite challenging. Good luck!
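To give a sense of scale: once the machines can reach each other over SSH, running an existing MPI code on Ubuntu looks roughly like this (a sketch only; the hostnames and program name are made up, and it assumes passwordless SSH plus the binary present at the same path on every machine):

    # On every laptop: install the MPI runtime and compiler wrappers
    sudo apt-get install openmpi-bin libopenmpi-dev

    # On one machine: compile the existing C++ MPI code
    mpic++ -O2 -o my_sim my_sim.cpp

    # Hostfile naming each machine and how many processes it runs
    echo "laptop1 slots=2" >  hostfile
    echo "laptop2 slots=2" >> hostfile
    echo "laptop3 slots=2" >> hostfile

    # Launch 6 processes spread across the three laptops
    mpirun --hostfile hostfile -np 6 ./my_sim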