Should I be concerned if my swap usage is 100%? - linux

Recently, I've been playing around with Flutter. Between running an emulator, using the browser, and using VS Code, my system memory gets fairly close to maxed out. My laptop had crashed twice before I started paying attention to memory usage.
Looking at Ubuntu's system monitor, I noticed that my swap usage frequently goes up to 100%, even though I still have some free RAM. Is this expected behavior, or should I be concerned?
Here's a picture of memory usage in the system monitor.

Swap space usage becomes an issue only when there is not enough RAM available. You can reduce swap usage by editing /etc/sysctl.conf as root: set vm.swappiness to a value lower than 60 (the default).
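For example, something like this (the value 10 is just an illustration; pick whatever suits your workload):
$ cat /proc/sys/vm/swappiness                                # current value, 60 by default
60
$ sudo sysctl vm.swappiness=10                               # apply immediately, lost on reboot
$ echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf     # persist across reboots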

In short, no. Swap is less efficient than RAM, which is why you don't want to max out swap usage.

Related

inotifywait secretly consumes a lot of memory

On Ubuntu 14.04:
$ cat /proc/sys/fs/inotify/max_queued_events
16384
$ cat /proc/sys/fs/inotify/max_user_instances
128
$ cat /proc/sys/fs/inotify/max_user_watches
1048576
Right after a restart I had about 1 GB of RAM in use. After 20-30 minutes (with just one terminal open) 6 GB was used and growing, yet none of the processes appeared to be using that much memory (according to htop and top). When I killed the inotifywait process the memory was not freed, but it stopped growing. Then I restarted the PC, killed inotifywait right away, and memory usage stopped at 1 GB.
I have two hard drives, one 1 TB and one 2 TB. Was inotifywait somehow causing them to be cached, or is it normal in general that it triggered this behavior?
This is the Linux disk cache at work; it is not a memory leak in inotifywait or anything else.
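You can see the cache for yourself; it is counted separately from application memory (column names vary slightly between procps versions):
$ free -m                                        # the buff/cache column is the page cache; 'available' treats it as reclaimable
$ grep -E '^(Cached|Buffers)' /proc/meminfo      # the same figures straight from the kernel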
I've accepted the previous answer because it explains what's going on. However, I have some second thoughts on this topic. What that page is basically saying is "caching doesn't occupy memory, because this memory can be taken back at any point, so you should not worry, you should not panic, you should be grateful!". Well... I'm not. I believe there should be a generous but still hard limit on caching.
The idea behind this behavior is good: "let's not waste the user's time, and cache everything while we have space". However, in my case it is actually wasting my time. I currently work on Linux running in a virtual machine. Since I have a lot of Jira pages open, many terminals across several desktops, various tools, and so on, I don't want to reopen all of that every day, so I save the virtual machine state instead of shutting it down at the end of the day. Now let's say my stuff occupies 4 GB of RAM. Then 4 GB should be written to my hard drive when I save the state, and 4 GB loaded back into RAM when I start the virtual machine. Unfortunately, that's only the theory. In practice, because inotifywait happily fills my 32 GB of RAM with cache, I have to wait eight times longer to save and restore the virtual machine. Yes, my RAM is "fine" as the page says, and yes, I can open another app without hitting OOM, but at the same time the caching is wasting my time.
If there were a reasonable limit, say 1 GB for caching, it would not be so painful. If the VM were on an HDD instead of an SSD, I think saving the state would take forever and I probably wouldn't use it at all.

Why would free be reporting a freakishly high swap used value?

Twice recently I have seen RHEL 6 boxes showing a swap-used value, as reported by 'free', somewhere in the 10^15 bytes range. This is, of course, far in excess of what is allocated. Further, the '-/+ buffers/cache' line shows around 3 GB free. Both of these machines subsequently became unstable and had to be rebooted.
Does anyone have any ideas as to what might cause this? Someone told me it could be indicative of a memory leak, but I cannot find any supporting information online.
Are the systems running kernel-2.6.32-573.1.1.el6.x86_64?
Could be this bug:
access.redhat.com/solutions/1571043
Linux moves unused memory pages to swap (see Linux swappiness), so if you still have free memory (counting buffers/cache) you are good to go.
You probably have a process that isn't used much and got swapped out.
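If you want to see which processes those are, a quick (if rough) check is to read VmSwap from /proc; the field is not present on very old kernels:
$ grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -n -r | head    # per-process swap usage in kB, largest first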

Docker not reporting memory usage correctly?

Through some longevity testing with Docker (Docker 1.5 and 1.6 with no memory limit) on CentOS 7 / RHEL 7, and observing the systemd-cgtop stats for the running containers, I noticed what appeared to be very high memory use. Typically the particular application, running in a non-containerized state, only uses around 200-300 MB of memory. Over a three-day period I saw systemd-cgtop reporting that my container was up to 13 GB of memory used. While I am not an expert Linux admin by any means, I started digging into this, which pointed me to the following articles:
https://unix.stackexchange.com/questions/34795/correctly-determining-memory-usage-in-linux
http://corlewsolutions.com/articles/article-6-understanding-the-free-command-in-ubuntu-and-linux
So basically, what I understand is that to determine the actual free memory within the system you should look at the '-/+ buffers/cache:' line of "free -m", not the top line. I noticed that the top line of "free -m" constantly increases its used figure and shows a correspondingly shrinking free figure, just like what I am observing with my container through systemd-cgtop, whereas the '-/+ buffers/cache:' line shows stable amounts of memory used/free. Also, if I observe the actual process in top on the host, I can see that the process itself only ever uses less than 1% of memory (0.8% of 32 GB).
I am a bit confused as to what's going on here. If I set a memory limit of 500-1000 MB for a container (I believe it would effectively be twice as much once swap is counted), would my process eventually be stopped when I reach that limit, even though the process itself is not using anywhere near that much memory? If anybody out there has any feedback on the above, that would be great. Thanks!
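(For reference, this is the kind of limit I mean; the image name is just a placeholder, and with -m set but --memory-swap left unset, Docker of that era allows swap up to the same amount again, i.e. 2x in total:)
$ docker run -m 500m my-app-image        # hard memory limit of 500 MB for the container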
I used Docker on CentOS 7 for a while and was confused by the same thing. Judging by the GitHub issue below, docker stats in this release is somewhat misleading:
https://github.com/docker/docker/issues/10824
So I just ignored the memory usage reported by docker stats.
A year since you asked, but adding an answer here for anyone else interested. If you set a memory limit, I think the container would not be killed unless the kernel fails to reclaim unused memory. The cgroup metrics, and consequently docker stats, report page cache + RSS. You can look at the detailed cgroup metrics to see the breakdown.
I had a similar issue, and when I tested with a memory limit I saw that the container was not killed. Rather, the memory was reclaimed and reused.
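On a cgroup v1 host (which is what CentOS/RHEL 7 uses) that breakdown looks roughly like this; the docker/<container-id> path is typical for Docker of that era, so adjust it for your setup:
$ grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/docker/<container-id>/memory.stat
# 'cache' is page cache charged to the cgroup; 'rss' is the anonymous memory the processes are actually using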

linux CPU cache slowdown

We're getting overnight lockups on our embedded (Arm) linux product but are having trouble pinning it down. It usually takes 12-16 hours from power on for the problem to manifest itself. I've installed sysstat so I can run sar logging, and I've got a bunch of data, but I'm having trouble interpreting the results.
The targets only have 512 MB of RAM (we have other models with 1 GB, which see this issue much less often), and have no disk swap files, to avoid wearing the eMMCs.
Some kind of paging / virtual memory event is initiating the problem. In the sar logs, pgpgin/s, pgscand/s, pgsteal/s and majflt/s all increase steadily before snowballing to crazy levels. This pushes the CPU load to correspondingly high levels (30-60 on dual-core ARM chips). At the same time, the frmpg/s values go very negative, whilst campg/s goes highly positive. The upshot is that the system is trying to allocate a large number of cache pages all at once. I don't understand why this would be.
The target then essentially locks up until it's rebooted, or until someone kills the main GUI process, or it crashes and is restarted (we have a monolithic GUI application that runs all the time and generally does all the serious work on the product). The network shuts down, telnet blocks forever, as do /proc filesystem queries and things that rely on them, like top. The memory allocation profile of the main application in this test is dominated by reading data in from file and caching it as textures in video memory (shared with main RAM) in an LRU, using OpenGL ES 2.0. Most of the time it'll be accessing a single file (they are about 50 MB in size), but I guess the problem could be triggered by having to suddenly switch to a new file and trying to cache all 50 MB of it in one go. I haven't yet done the test (putting more logging in) to correlate this event with these system effects.
The odd thing is that the actual free and cached RAM levels don't show an obvious lack of memory (I have seen the oom-killer swoop in to kill the main application with >100 MB free and 40 MB of cache RAM). The main application's memory usage seems reasonably well-behaved, with a VmRSS value that looks pretty stable. Valgrind hasn't found any progressive leaks that would happen during operation.
The behaviour seems like that of a system frantically swapping out to disk and making everything run dog slow as a result, but I don't know if this is a known effect in a free<->cache RAM exchange system.
My problem is superficially similar to question: linux high kernel cpu usage on memory initialization but that issue seemed driven by disk swap file management. However, dirty page flushing does seem plausible for my issue.
I haven't tried playing with the various vm files under /proc/sys/vm yet. vfs_cache_pressure and possibly swappiness would seem good candidates for some tuning, but I'd like some insight into good values to try here. vfs_cache_pressure in particular seems ill-defined: it's unclear what the quantitative difference between setting it to 200 and setting it to 10000 would be.
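In case anyone wants to point me at specific values, this is how I'd be setting them (200 is just an example; the default is 100, and higher values make the kernel reclaim the dentry/inode caches more aggressively):
$ echo 200 | sudo tee /proc/sys/vm/vfs_cache_pressure    # takes effect immediately; not persistent across reboots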
The other interesting fact is that it is a progressive problem. It might take 12 hours for the effect to appear the first time; if the main app is killed and restarted, it then seems to happen every 3 hours after that. A full cache purge might push this back out, though.
Here's a link to the log data, with two files: sar1.log, the complete output of sar -A, and overview.log, an extract of free/cache memory, CPU load, MainGuiApp memory stats, and the -B and -R sar outputs for the interesting period between midnight and 3:40 am:
https://drive.google.com/folderview?id=0B615EGF3fosPZ2kwUDlURk1XNFE&usp=sharing
So, to sum up, what's my best plan here? Tune vm to tend to recycle pages more often to make it less bursty? Are my assumptions about what's happening even valid given the log data? Is there a cleverer way of dealing with this memory usage model?
Thanks for your help.
Update, 5th June 2013:
I've tried the brute-force approach and put a script in place which echoes 3 to drop_caches every hour. This seems to be maintaining the steady state of the system right now, and the sar -B stats stay on the flat portion, with very few major faults and 0.0 pgscand/s. However, I don't understand why keeping the cache RAM very low mitigates a problem where the kernel is trying to add the universe to cache RAM.
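For reference, the hourly purge is just this (root crontab; echoing 3 drops the page cache plus dentries and inodes):
0 * * * *  /bin/sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'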

How to tell where the memory went in Linux

I have a long-running process that I suspect has a memory leak. I use top to monitor the memory levels of each process, and nothing uses more than 15% of the total RAM. The machine has 4 GB of RAM, and the process starts with well over 3 GB free. The process itself performs very heavy, custom calculations on several MB of data. It keeps a single core at 100%.
As time goes on, memory disappears, but top does not blame my long-running process. Instead, the "cached" and "buffers" memory increases and the "free" memory drops to as low as 2 MB. The process eventually finishes its job and exits without issue, but the memory never comes back. Should I be concerned, or is this "normal"? Are there other tools besides top that can provide a deeper understanding?
Thanks.
That's normal. Your process is operating on files, which are getting cached in memory. If there is "memory pressure" (demand from other programs) then that cache memory will be relinquished. The first time I wrote an X widget to show how much memory was "free", it took me a while to get used to the idea that free memory is doing you no good: best to have it all in use doing some kind of caching until it's needed elsewhere!
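If you want a number that already accounts for that, reasonably recent kernels (3.14+) expose an estimate directly; newer versions of free report it as the "available" column:
$ grep MemAvailable /proc/meminfo      # memory usable without swapping, counting reclaimable cache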
