Twice recently I have seen RHEL 6 boxes where 'free' reports a swap-used value somewhere in the 10^15-byte range. This is, of course, far in excess of what is actually allocated. Further, the '-/+ buffers/cache' line shows around 3 GB free. Both of these machines subsequently became unstable and had to be rebooted.
Does anyone have any ideas as to what might cause this? Someone told me that this could be indicative of a memory leak, but I cannot find any supporting information online.
Are the systems running kernel-2.6.32-573.1.1.el6.x86_64?
Could be this bug:
access.redhat.com/solutions/1571043
Linux moves unused memory pages to swap (see Linux swappiness), so if you still have free memory (after accounting for buffers/cache), you are good to go.
You probably have a process that is not used much, and it was swapped out.
Recently, I've been playing around with flutter. Between running an emulator, using the browser, and using vscode, my system memory has been getting decently close to maxed out. My laptop has crashed twice now before I started paying attention to memory usage.
Looking at Ubuntu's system manager, I noticed that my Swap frequently goes up to 100%, even though I still have some free ram. Is this expected behavior, or should I be concerned?
Here's a picture of memory usage in System manager
Swap space usage becomes an issue only when there is not enough RAM available. You can reduce swap usage by configuring /etc/sysctl.conf as root: change vm.swappiness to any value lower than 60 (the default).
In short, no. Swap is less efficient than RAM, which is why you don't want to maximize swap usage.
I have a use case where I have bursts of allocations in the range of 5-6 GB, specifically when Visual Studio Code compiles my D project while I'm typing. (The compiler doesn't release memory at all, in order to be as fast as possible.)
DMD does memory allocation in a bit of a sneaky way. Since compilers are short-lived programs, and speed is of the essence, DMD just mallocs away, and never frees. This eliminates the scaffolding and complexity of figuring out who owns the memory and when it should be released. (It has the downside of consuming all the resources of your machine if the module being compiled is big enough.)
source
The machine is a Dell XPS 13 running Manjaro 64-bit, with 16 GB of memory, and I'm hitting that roof. The system seizes up completely, REISUB may or may not work, etc. I can leave it for an hour and it's still hung, not slowly resolving itself. The times I've been able to get to a tty, dmesg has had all kinds of jovial messages. So I thought to enable a big swap partition to alleviate the pressure, but it isn't helping.
I realise that swap won't be used until it's needed, but by then it's too late. Even with the swap, when I run out of memory everything segfaults: Qt, zsh, fuse-ntfs, Xorg. At that point it will report a typical 70 MB of swap in use.
vm.swappiness is at 100. swapon reports the swap as being active, automatically enabled by systemd.
NAME TYPE SIZE USED PRIO
/dev/nvme0n1p8 partition 17.6G 0B -2
What can I do to make it swap more?
Try this. Remember to post this kind of question on Super User or Server Fault; Stack Overflow is only for programming questions.
https://askubuntu.com/questions/371302/make-my-ubuntu-use-more-swap-than-ram
Background:
I was trying to set up an Ubuntu machine on my desktop computer. The whole process took an entire day, including installing the OS and software. I didn't think much of it, though.
Then I tried doing my work on the new machine, and it was significantly slower than my laptop, which was very strange.
I ran iotop and found that disk traffic while decompressing a package was around 1-2 MB/s, which is definitely abnormal.
Then, after hours of research, I found this article that describes exactly the same problem and provides an ugly solution:
We recently had a major performance issue on some systems, where disk write speed is extremely slow (~1 MB/s, where normal performance is 150+ MB/s).
...
EDIT: to solve this, either remove enough RAM, or add "mem=8G" as a kernel boot parameter (e.g. in /etc/default/grub on Ubuntu; don't forget to run update-grub!)
I also looked at this post
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
and did
cat /proc/vmstat | egrep "dirty|writeback"
output is:
nr_dirty 10
nr_writeback 0
nr_writeback_temp 0
nr_dirty_threshold 0 // and here
nr_dirty_background_threshold 0 // here
Those two values were 8223 and 4111 when mem=8G was set.
So it's basically showing that when system memory is greater than 8 GB (32 GB in my case), regardless of the vm.dirty_background_ratio and vm.dirty_ratio settings (5% and 10% in my case), the actual dirty thresholds drop to 0 and write buffering is disabled?
Why is this happening?
Is this a bug in the kernel or somewhere else?
Is there a solution other than unplugging RAM or using "mem=8g"?
UPDATE: I'm running the 3.13.0-53-generic kernel with Ubuntu 12.04 32-bit, so it's possible that this only happens on 32-bit systems.
If you use a 32 bit kernel with more than 2G of RAM, you are running in a sub-optimal configuration where significant tradeoffs must be made. This is because in these configurations, the kernel can no longer map all of physical memory at once.
As the amount of physical memory increases beyond this point, the tradeoffs become worse and worse, because the struct page array that is used to manage all physical memory must be kept mapped at all times, and that array grows with physical memory.
The physical memory that isn't directly mapped by the kernel is called "highmem", and by default the writeback code treats highmem as undirtyable. This is what results in your zero values for the dirty thresholds.
You can change this by setting /proc/sys/vm/highmem_is_dirtyable to 1, but with that much memory you will be far better off if you install a 64-bit kernel instead.
Is this a bug in the kernel
According to the article you quoted, this is a bug, which did not exist in earlier kernels, and is fixed in more recent kernels.
Note that this issue seems to be fixed in later releases (3.5.0+) and is a regression (doesn’t happen on e.g. 2.6.32)
I'm trying to track down a segfault in some old C code (not written by me). The segfaults occur only if the addresses of certain variables in that code exceed the 32-bit integer limit. (So I've got a pretty good idea of what's going wrong, but not where.)
So, my question is: is there any way to force Linux to allocate memory for a process in the high address space? At the moment it's pretty much down to chance whether the segfaults happen, which makes debugging a bit difficult.
I'm running Ubuntu 10.04, kernel 2.6.31-23-generic, on a Dell Inspiron 1525 laptop with 2 GB of RAM, if that's any help.
Thanks in advance,
Martin.
You can allocate an anonymous block of memory with the mmap() system call; its first argument is the address where you want the block mapped (treated as a hint unless you also pass MAP_FIXED).
I would turn on the -Wpointer-to-int-cast and -Wint-to-pointer-cast warning options and check out any warnings they turn up (I believe these are included in -Wall on 64-bit targets). The cause is very likely something related to this, and simply auditing the warnings the compiler turns up may be a better approach than using a debugger.
Hello, I developed a multi-threaded TCP server application that allows 10 concurrent connections, receives continuous requests from them, and, after some processing, sends responses back to the clients. I'm running it on a TI OMAP L137 processor-based board running MontaVista Linux. Threads are created per client, i.e. 10 threads, and they are pre-spawned. Its physical memory usage is about 1.5% and its CPU usage about 2% according to ps, top, and meminfo. Its VM usage rises up to 80 MB, where I have 48 MB (I reduced it from U-Boot to reserve some memory for the DSP). Any help is appreciated: how can I reduce it? (/proc/sys/vm/.. tricks don't help :)
Thanks.
You can try using a drop-in garbage-collecting replacement for malloc() and see if that solves your problem. If it does, find the leaks and fix them, then get rid of the garbage collector.
It's 'interesting' to chase these kinds of problems on platforms that most heap analyzers and profilers (e.g. valgrind) don't fully (if at all) support.
On another note, given the constraints .. I'm assuming you have decreased the default thread stack size? I think the default is 8M, you probably don't need that much. See pthread_attr_setstacksize() if you haven't adjusted it.
Edit:
You can check the default stack size with pthread_attr_getstacksize(). If it is at 8M, you've already blown your ceiling during thread creation (10 threads, as you mentioned).
Most of the VM usage is probably just for stacks. Of course, it's virtual, so it doesn't get committed if you don't use it.
(I'm wondering if thread's default stack size has anything to do with ulimit -s)
Apparently yes, according to
this other SO question
Does it rise to that level and stay there? Or does it eventually run out of memory? If the former, you simply need to figure out a way to have a smaller working set. If the latter, you have a memory leak and need to fix it.