I wrote a C++ process which is running inside a VMWare machine with 512 MB of assigned RAM.
top/htop show a value of 490 MB in the VIRT column for my process, while other processes show only a few kilobytes in the same field.
Do you know why? Do I have to set up something for my process?
Thank you very much!
VIRT really doesn't matter; use -a and look at the resident size (RES) instead. VIRT will even count pages that have been swapped out, and I think it's probably useless for what you're trying to figure out.
Here is a good explanation that I'm going to read through and learn from...
Edit (2016-04-07): I've just gone through it, and it is brilliant! Look at /proc/<pid>/smaps to see how physical RAM is used by your process.
Edit (2016-04-08): Digging deeper into the problem, I discovered that each time I create a thread, the process's VIRT increases. I have also seen that every other Linux process with threads allocates a lot of VIRT, so I think it is absolutely normal!
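For anyone who wants to see the difference directly, here is a small illustrative sketch (not from the original post) that prints VmSize and VmRSS from /proc/self/status. Thread stacks inflate VmSize (VIRT) but barely touch VmRSS until the pages are actually used:

    /* Illustration only: compare the process's virtual size (VmSize) with its
     * resident size (VmRSS) by reading /proc/self/status. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];

        while (f && fgets(line, sizeof(line), f)) {
            if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        }
        if (f)
            fclose(f);
        return 0;
    }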
Here is my system, based on Linux 2.6.32.12:
1. It runs 20 processes which use a lot of user CPU.
2. It needs to write data to disk at a rate of 100 MB/s, and that data will not be read again any time soon.
What I expect:
It can run steadily and disk I/O would not affect my system.
My problem:
At the beginning, the system ran as I expected. But as time passed, Linux cached more and more data for the disk I/O, which reduced the available physical memory. Eventually there was not enough memory left, and Linux started swapping my processes in and out. That caused an I/O problem, with a lot of CPU time spent on I/O.
What I have tried:
I tried to solve the problem by calling fsync every time I write a large block, but physical memory keeps decreasing while the cache keeps growing.
How can I stop the page cache here? It's useless to me.
More information:
When top shows 46963m free, all is well: %wa is low and vmstat shows no si or so.
When top shows 273m free, %wa is so high that it affects my processes, and vmstat shows a lot of si and so.
I'm not sure that changing something will affect overall performance.
Maybe you might use posix_fadvise(2) and sync_file_range(2) in your program (and more rarely fsync(2) or fdatasync(2) or sync(2) or syncfs(2), ...). Also look at madvise(2), mlock(2) and munlock(2), and of course mmap(2) and munmap(2). Perhaps ionice(1) could help.
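For the streaming-write pattern described in the question, one common approach is to start writeback with sync_file_range and then tell the kernel the data will not be reused with posix_fadvise(POSIX_FADV_DONTNEED). Here is a hedged sketch of that idea (the file name, block size and block count are made up for illustration):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK (8 * 1024 * 1024)   /* 8 MB per write, an arbitrary choice */

    int main(void)
    {
        int fd = open("/tmp/stream.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        char *buf = malloc(BLOCK);
        memset(buf, 0, BLOCK);

        off_t off = 0;
        for (int i = 0; i < 100; i++) {
            if (write(fd, buf, BLOCK) != BLOCK)
                break;

            /* start asynchronous writeback of the block we just wrote ... */
            sync_file_range(fd, off, BLOCK, SYNC_FILE_RANGE_WRITE);

            if (off > 0) {
                /* ... then wait for the previous block to hit disk and drop it
                 * from the page cache, since it will not be read again soon */
                sync_file_range(fd, off - BLOCK, BLOCK,
                                SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE |
                                SYNC_FILE_RANGE_WAIT_AFTER);
                posix_fadvise(fd, off - BLOCK, BLOCK, POSIX_FADV_DONTNEED);
            }
            off += BLOCK;
        }
        free(buf);
        close(fd);
        return 0;
    }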
In the reader process, you might perhaps use readahead(2) (perhaps in a separate thread).
Upgrading your kernel (to a 3.6 or better) could certainly help: Linux has improved significantly on these points since 2.6.32 which is really old.
To drop pagecache you can do the following:
"echo 1 > /proc/sys/vm/drop_caches"
drop_caches is usually 0 and can be changed as needed. As you've identified yourself that you need to free the page cache, this is how to do it. You can also take a look at dirty_writeback_centisecs (and its related tunables) (http://lxr.linux.no/linux+*/Documentation/sysctl/vm.txt#L129) to make writeback happen sooner, but note it might have consequences, as it wakes up the kernel flusher threads to write out dirty pages. Also note dirty_expire_centisecs, which defines how long dirty data must age before it becomes eligible for writeout.
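If you would rather do this from a program than from the shell, a rough sketch (assuming the process runs as root, since /proc/sys/vm/drop_caches is only writable by root) is:

    /* Sketch: flush dirty pages, then ask the kernel to drop the clean page cache. */
    #include <stdio.h>
    #include <unistd.h>

    int drop_page_cache(void)
    {
        sync();  /* drop_caches only frees clean pages, so write dirty data out first */

        FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
        if (!f)
            return -1;               /* typically fails when not running as root */
        int rc = (fputs("1\n", f) >= 0) ? 0 : -1;
        fclose(f);
        return rc;
    }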
How do I find the stack size of a process?
/proc/5848/status gives me VmStk, but this doesn't change.
No matter how many while loops or how much recursion I run in my test program, this value hardly changes.
When I looked at /proc/<pid>/status, every process shows 136k, and I have no idea where that value comes from.
Thanks,
There really is no such thing as the "stack size of a process" on Linux. Processes have a starting stack, but as you see, they rarely allocate much from the standard stack. Instead, processes just allocate generic memory from the operating system and use it as a stack. So there's no way for the OS to know -- that detail is only visible from inside the process.
A typical, modern OS may impose a stack size limit of around 8 MB on a process. Yet processes routinely allocate much larger objects on their stack. That's because the application is using a stack that is purely application-managed, not a stack as far as the OS is concerned.
This is always true for multi-threaded processes. For single-threaded processes, it's possible they are actually just using very, very little stack.
Maybe you just want to get the address map of some process. For process 1234, read sequentially the /proc/1234/maps pseudo-file. For your own process, read /proc/self/maps
Try
cat /proc/self/maps
to get a feeling of it (the above command displays the address map of the cat process executing it).
Read proc(5) man page for details.
You might also be interested in process limits, e.g. getrlimit(2) and related syscalls.
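For example, here is a tiny sketch (my own, not from the answer) that queries the stack soft limit with getrlimit; this is the value ulimit -s reports, and it applies to the main thread's stack:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("stack soft limit: unlimited\n");
            else
                printf("stack soft limit: %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);
        }
        return 0;
    }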
I am not sure that "stack size" has a precise meaning, notably for multi-threaded processes.
Maybe you are interested in mmap(2)-ed segments with MAP_GROWSDOWN.
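If you are curious, a hedged sketch of such a mapping follows (the size and flags are arbitrary choices for illustration; application code rarely needs this):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 1024;  /* arbitrary 64 kB region */

        /* MAP_GROWSDOWN marks the mapping as a stack-like segment that can
         * extend downward; MAP_STACK flags it as suitable for use as a stack. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_GROWSDOWN | MAP_STACK,
                       -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("stack-like mapping at %p\n", p);
        munmap(p, len);
        return 0;
    }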
The stack size can be obtained with the pidstat command. Install it with apt install sysstat, then run:
pidstat -p 11577 -l -s
I had a problem in which my server began failing some of its normal processes and checks because the server's memory was completely full and taken.
I looked in the logging history and found that what it killed were some Java processes.
I used the "top" command to see which processes are taking up the most memory right now (after the issue was fixed), and it was a Java process. So in essence, I can tell which processes are taking up the most memory right now.
What I want to know is if there is a way to see what processes were taking up the most memory at the time when the failures started happening? Perhaps Linux keeps track or a log of the memory usage at particular times? I really have no idea but it would be great if I could see that kind of detail.
@Andy has answered your question. However, I'd like to add that for future reference you should use a monitoring tool; such a tool will record what happened around a crash, since you obviously cannot watch all your servers all the time. Hope it helps.
Are you saying the kernel OOM killer went off? What does the log in dmesg say? Note that you can constrain a JVM to use a fixed heap size, which means it will fail affirmatively when full instead of letting the kernel kill something else. But the general answer to your question is no: there's no way to reliably run anything at the time of an OOM failure, because the system is out of memory! At best, you can use a separate process to poll the process table and log process sizes to catch memory leak conditions, etc...
There is no history of memory usage in linux be default, but you can achieve it with some simple command-line tool like sar.
Regarding your problem with memory:
If it was OOM-killer that did some mess on machine, then you have one great option to ensure it won't happen again (of course after reducing JVM heap size).
By default the Linux kernel allocates more memory than it really has. In some cases this can lead to the OOM killer killing the most memory-hungry process when there is no memory left for kernel tasks.
This behavior is controlled by the vm.overcommit_memory sysctl parameter.
So you can try setting vm.overcommit_memory = 2 in sysctl.conf and then running sysctl -p.
This will forbid overcommitting and make it very unlikely that the OOM killer does nasty things. You can also think about adding a little bit of swap space (if you don't have it already) and setting vm.swappiness to a really low value (like 5, for example; the default is 60), so that in the normal workflow your application won't go into swap, but if you are really short on memory it will start using swap temporarily, and you will be able to see that, e.g. with free.
WARNING: this can lead to processes receiving "Cannot allocate memory" errors if your server is overloaded on memory. In that case:
Try to restrict memory usage by applications
Move part of them to another machine
The question:
How can I tell how much memory is in use by the VMA's of my process (either when I'm in userspace or in kernel) ?
I'll give a short explanation of what I'm doing, so you can understand why I'm asking this.
I run a few processes and one driver (kernel module) on my Linux machine. The processes' memory is locked (not swappable), therefore I want to make sure that the memory consumed by the module along with the processes doesn't exceed 90% of my total physical memory. In order to reduce malloc overhead I'm using mmap.
What I really need to know is how much memory my processes are really consuming rather than how much they asked for, and as far as I can tell I'm only missing the VMA overhead of each allocation.
After digging I've found the answer:
While I'm in the driver I can use
current->mm->map_count
To know the current number of VMA's for the current process.
Multiplying it by sizeof(struct vm_area_struct) gives me what I was looking for.
From here the accounting is pretty simple.
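For completeness, a rough sketch of that accounting inside a module could look like the following (this is an illustration written against the field names of kernels from that era, not the asker's driver code):

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/sched.h>
    #include <linux/mm_types.h>

    /* Number of VMAs of the current process times the size of one vm_area_struct. */
    static unsigned long vma_overhead_bytes(void)
    {
        struct mm_struct *mm = current->mm;

        if (!mm)                /* kernel threads have no user address space */
            return 0;

        return (unsigned long)mm->map_count * sizeof(struct vm_area_struct);
    }

    static int __init vma_acct_init(void)
    {
        pr_info("VMA overhead of the loading process: %lu bytes\n",
                vma_overhead_bytes());
        return 0;
    }

    static void __exit vma_acct_exit(void)
    {
    }

    module_init(vma_acct_init);
    module_exit(vma_acct_exit);
    MODULE_LICENSE("GPL");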
Hello, I developed a multi-threaded TCP server application that allows 10 concurrent connections, receives continuous requests from them and, after some processing of the requests, responds to the clients. I'm running it on a TI OMAP L137 processor based board which runs MontaVista Linux. Threads are created per client, i.e. 10 threads, and the server is pre-threaded. Its physical memory usage is about 1.5% and CPU usage is about 2% according to ps, top and meminfo. Its VM usage rises up to 80M, where I only have 48M (I reduced it from U-Boot to reserve some memory for the DSP). Any help is appreciated: how can I reduce it? (/proc/sys/vm/.. tricks don't help :)
Thanks.
You can try using a drop in garbage collecting replacement for malloc(), and see if that solves your problem. If it does, find the leaks and fix them, then get rid of the garbage collector.
It's 'interesting' to chase these kinds of problems on platforms that most heap analyzers and profilers (e.g. valgrind) don't fully (if at all) support.
On another note, given the constraints .. I'm assuming you have decreased the default thread stack size? I think the default is 8M, you probably don't need that much. See pthread_attr_setstacksize() if you haven't adjusted it.
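As a hedged illustration (the stack size and thread count are my own example values, not from the question), pre-creating the 10 worker threads with a smaller per-thread stack could look like this (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* handle one client connection here */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        /* 256 kB per thread instead of the default (often 8 MB) */
        pthread_attr_setstacksize(&attr, 256 * 1024);

        pthread_t tids[10];
        for (int i = 0; i < 10; i++)
            pthread_create(&tids[i], &attr, worker, NULL);

        for (int i = 0; i < 10; i++)
            pthread_join(tids[i], NULL);

        pthread_attr_destroy(&attr);
        return 0;
    }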
Edit:
You can check the default stack size with pthread_attr_getstacksize(). If it is at 8M, you've already blown your ceiling during thread creation (10 threads, as you mentioned).
Most of the VM is probably just for stacks. Of course, it's virtual, so it doesn't get committed if you don't use it.
(I'm wondering if a thread's default stack size has anything to do with ulimit -s)
Apparently yes, according to
this other SO question
Does it rise to that level and stay there? Or does it eventually run out of memory? If the former, you simply need to figure out a way to have a smaller working set. If the latter, you have a memory leak and need to fix it.