OS: Ubuntu 14.04, 4 GB RAM, i5, 1.5 GB swap space
I'm running a compilation command, ./build.sh, but the computer keeps freezing while it runs. I checked the RAM, CPU and swap space usage during the compilation and all of them are completely used. What is the best way to tackle this? I tried increasing the swap space using a swap file, but the maximum size (for a swap file) seems to be 3 GB, which is also completely used.
Can you share the command you are using? In any case, if you are invoking make like this:
make -j4 -l4
change it to:
make -j2 -l2
Also, I think the swap space is too low; it should be at least equal to the amount of RAM. If the change above does not solve the problem, follow this tutorial to increase the swap space.
I solved the problem by increasing the RAM to 8 GB and the swap space to 5 GB. While the compilation was running I observed that up to 5.5 GB of RAM was used, so the original RAM and swap space were simply not sufficient.
On Ubuntu 14.04:
$ cat /proc/sys/fs/inotify/max_queued_events
16384
$ cat /proc/sys/fs/inotify/max_user_instances
128
$ cat /proc/sys/fs/inotify/max_user_watches
1048576
Right after a restart I had 1 GB of RAM in use. After 20-30 minutes (with just one terminal open) I had 6 GB of RAM used and growing, yet none of the processes seemed to be using that much memory (according to htop and top). When I killed the inotifywait process the memory was not freed, but it stopped growing. Then I restarted the PC, killed inotifywait right away, and memory usage stayed at 1 GB.
I have two hard drives, one 1 TB and the other 2 TB. Was inotifywait somehow caching them, or is it normal in general that it caused such behaviour?
This is the Linux disk cache at work; it is not a memory leak in inotifywait or anything else.
I've accepted the previous answer because it explains what's going on. However, I have some second thoughts on this topic. What the page is basically saying is: "caching doesn't occupy memory, because that memory can be taken back at any point, so you should not worry, you should not panic, you should be grateful!" Well... I'm not. I believe there should be a sensible, but hard, limit on caching.
The idea behind this behaviour is good: "let's not waste the user's time and cache everything while we have space." However, what it is actually doing in my case is wasting my time. I'm currently working on Linux running in a virtual machine. Since I have a lot of Jira pages open, many terminals across several desktops, various tools and so on, I don't want to reopen all of that every day, so I save the virtual machine state instead of shutting it down at the end of the day.

Now let's say my stuff occupies 4 GB of RAM. In theory, 4 GB should be written to the hard drive when I save the state, and 4 GB should be loaded back into RAM when I start the virtual machine. Unfortunately that's only the theory. In practice, because of inotifywait happily filling my 32 GB of RAM with cache, I have to wait 8 times longer to save and restore the virtual machine. Yes, my RAM is "fine" as the page says, and yes, I can open another app without hitting the OOM killer, but at the same time the caching is wasting my time.
If the limit were sensible, say 1 GB for caching, it would not be so painful. I think that if the VM were on an HDD instead of an SSD, saving the state would take forever and I would probably not use it at all.
We have a 12-node Cassandra cluster across 2 different datacenters. We are migrating data from a SQL database to Cassandra through a .NET application, and there is another .NET app that reads data from Cassandra. Recently we have been seeing one node or another go down (nodetool status shows DN and the service is stopped on it). Below is the output of nodetool status. We have to start the service to get it working again, but it stops again.
https://ibb.co/4P1T453
Path to the log: https://pastebin.com/FeN6uDGv
Looking through your pastebin, I see a few things that can be adjusted.
First I'm reasonably sure that this is your primary issue:
Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out,
especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
From GNU Error Codes:
Macro: int ENOMEM
“Cannot allocate memory.” The system cannot allocate more virtual
memory because its capacity is full.
-Xms12G, -Xmx12G, -Xmn3000M,
How much RAM is on your instance? From what I'm seeing your node is dying from an OOM (Out of Memory error). My guess is that you're designating too much RAM to the heap, and there isn't enough for the OS/page-cache. In fact, I wouldn't designate much more than 50%-60% of RAM to the heap.
For example, I mostly build instances on 16GB of RAM, and I've found that a 10GB max heap is about as high as you'd want to go on that.
-XX:+UseParNewGC, -XX:+UseConcMarkSweepGC
In fact, as you're using CMS GC, I wouldn't go higher than 8GB for max heap size.
Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low,
recommended value: 1048575, you can change it with sysctl.
This means you haven't adjusted your limits.conf or sysctl.conf. Check through the guide (DSE 6.0 - Recommended Production Settings), but generally it's a good idea to add the following to these files:
/etc/security/limits.conf
* - memlock unlimited
* - nofile 100000
* - nproc 32768
* - as unlimited
/etc/sysctl.conf
vm.max_map_count = 1048575
Note: After adjusting sysctl.conf, you'll want to run a sudo sysctl -p or reboot.
Is swap disabled? : false,
You will want to disable swap. If Cassandra starts swapping contents of RAM to disk, things will get really slow. Run a swapoff -a and then edit /etc/fstab and remove any swap entries.
tl;dr summary
Set your initial and max heap sizes to 8GB (heap new size is fine).
Modify your limits.conf and sysctl.conf files appropriately.
Disable swap.
It's also a good idea to get on the latest version of 3.11 (3.11.4).
Hope this helps!
Background:
I was trying to set up an Ubuntu machine on my desktop computer. The whole process took a full day, including installing the OS and software. I didn't think much of it, though.
Then I tried doing my work using the new machine, and it was significantly slower than my laptop, which is very strange.
I ran iotop and found that the disk traffic while decompressing a package was around 1-2 MB/s, which is definitely abnormal.
Then, after hours of research, I found this article that describes exactly the same problem and provides an ugly solution:
We recently had a major performance issue on some systems, where disk write speed is extremely slow (~1 MB/s — where normal performance
is 150+MB/s).
...
EDIT: to solve this, either remove enough RAM, or add “mem=8G” as kernel boot parameter (e.g. in /etc/default/grub on Ubuntu — don’t
forget to run update-grub !)
I also looked at this post
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
and did
cat /proc/vmstat | egrep "dirty|writeback"
output is:
nr_dirty 10
nr_writeback 0
nr_writeback_temp 0
nr_dirty_threshold 0 // and here
nr_dirty_background_threshold 0 // here
Those values were 8223 and 4111 when mem=8g was set.
So it's basically showing that when system memory is greater than 8 GB (32 GB in my case), regardless of the vm.dirty_background_ratio and vm.dirty_ratio settings (5% and 10% in my case), the actual dirty thresholds go to 0 and the write buffer is disabled?
Why is this happening?
Is this a bug in the kernel or somewhere else?
Is there a solution other than unplugging RAM or using "mem=8g"?
UPDATE: I'm running the 3.13.0-53-generic kernel with Ubuntu 12.04 32-bit, so it's possible that this only happens on 32-bit systems.
If you use a 32-bit kernel with more than 2 GB of RAM, you are running in a sub-optimal configuration where significant tradeoffs must be made. This is because in these configurations the kernel can no longer map all of physical memory at once.
As the amount of physical memory increases beyond this point, the tradeoffs become worse and worse, because the struct page array that is used to manage all physical memory must be kept mapped at all times, and that array grows with physical memory.
The physical memory that isn't directly mapped by the kernel is called "highmem", and by default the writeback code treats highmem as undirtyable. This is what results in your zero values for the dirty thresholds.
You can change this by setting /proc/sys/vm/highmem_is_dirtyable to 1, but with that much memory you will be far better off if you install a 64-bit kernel instead.
Is this a bug in the kernel
According to the article you quoted, this is a bug, which did not exist in earlier kernels, and is fixed in more recent kernels.
Note that this issue seems to be fixed in later releases (3.5.0+) and is a regression (doesn’t happen on e.g. 2.6.32)
I have an Ubuntu 12.10 (kernel 3.9.0-rc2) installation running on VMware. I've given it 512 MB of RAM.
cat /proc/meminfo shows:
MemTotal: 507864 kB
MemFree: 440180 kB
I want to use the swap (for some reason) so I wrote a C program which allocates a 500MB array (using malloc()) and fills it with junk. However, the program gets killed before it can fill the entire array and a message "Killed" appears on the screen.
I wanted to ask if this is normal behavior and what is the reason behind this? In my opinion, the swap should be used because the free RAM is insufficient.
Edit: I did not mention that I have 1GB swap. cat /proc/swaps shows:
/dev/sda5 Size: 1046524 Used: 14672.
The "Used" amount increases when I run the memory-eating program. But as you can see, a lot of swap is leftover. So why did the program have to be 'Killed'?
So I couldn't find a valid answer. I have a temporary solution:
I had modified the virtual machine settings to give 512 MB of RAM to the VM. Now I have reverted to 2 GB and run 5 parallel programs, each consuming 500 MB. Thankfully, all of them run and the swap gets used.
I just needed to use the swap for a project on swap management.
It also matters how your C program allocates the memory and which compiler flags you use. For example, if you allocate memory statically (such as double A[N][N]), the behaviour is different from allocating it dynamically (such as with malloc/calloc). Static allocations are limited by the compiler's memory model (small, medium, etc., which can often be specified). Perhaps a good starting point is:
http://en.wikipedia.org/wiki/C_dynamic_memory_allocation
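For illustration, here is a minimal sketch contrasting the two approaches; the array size N is just a hypothetical example:

#include <stdio.h>
#include <stdlib.h>

#define N 1000

/* Static allocation: the array lives in the program image (BSS segment)
   and its size must be known at compile time. */
static double A[N][N];

int main(void)
{
    /* Dynamic allocation: the size is decided at run time and the memory
       comes from the heap via malloc. */
    double *B = malloc((size_t)N * N * sizeof *B);
    if (B == NULL) {
        perror("malloc");
        return 1;
    }
    A[0][0] = 1.0;
    B[0] = 1.0;
    printf("A and B each occupy %zu bytes\n", (size_t)N * N * sizeof(double));
    free(B);
    return 0;
}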
Does this help?
I have written a program which works on a huge set of data. My CPU and OS (Ubuntu) are both 64-bit and I have 4 GB of RAM. Using top (the %MEM field), I saw that the process's memory consumption went up to around 87%, i.e. 3.4+ GB, and then it got killed.
I then checked how much memory a process is allowed to use with ulimit, which reports "unlimited".
Now, since both the OS and CPU are 64-bit and a swap partition also exists, the OS should have used virtual memory, i.e. more than 3.4 GB plus some amount of swap space in total, and it should have killed the process only if it required even more memory than that.
So, I have the following questions:
How much memory can a process theoretically access on a 64-bit machine? My answer is 2^48 bytes (256 TiB).
If less than 2^48 bytes of physical memory exist, then the OS should use virtual memory, correct?
If the answer to the above question is yes, then the OS should have used the swap space as well; why did it kill the process without even using it? I don't think we have to use any specific system calls while coding our program to make this happen.
Please suggest.
It's not only the data size that could be the reason. For example, run ulimit -a and check the maximum stack size. Do you have a kill reason? Set ulimit -c 20000 to get a core file; it will show you the reason when you examine it with gdb.
Check with file and ldd that your executable is indeed 64-bit.
Check also the resource limits. From inside the process you could use the getrlimit system call (and setrlimit to change them, when possible). From a bash shell, try ulimit -a. From a zsh shell, try limit.
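Here is a minimal sketch of querying one such limit from inside the process; the choice of RLIMIT_AS (the address-space limit, the same value ulimit -v reports) is just an example:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_AS limits the total virtual address space of the process. */
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("address space limit: unlimited\n");
    else
        printf("address space limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}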
Check also that your process really consumes the memory you believe it does. If its pid is 1234 you could try pmap 1234. From inside the process you could read /proc/self/maps, or /proc/1234/maps from a terminal. There are also /proc/self/smaps or /proc/1234/smaps, /proc/self/status or /proc/1234/status, and other files inside /proc/self/ ...
Check with free that you have the memory (and the swap space) you believe you have. You can add some temporary swap space with swapon /tmp/someswapfile (after initializing it with mkswap).
A few months (and a couple of years) ago I was routinely able to run a 7 GB process (a huge cc1 compilation) under GNU/Linux Debian Sid AMD64 on a machine with 8 GB of RAM.
You could also try a tiny test program which, for example, allocates several memory chunks of 32 MB each with malloc. Don't forget to write some bytes inside each chunk (at least one per megabyte), as in the sketch below.
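A minimal sketch of such a test program; the 32 MB chunk size is the one suggested above, and the chunk count and output are just illustrative:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 32UL * 1024 * 1024;   /* 32 MB per chunk */
    size_t total = 0;

    for (int i = 0; i < 256; i++) {            /* up to 8 GB in total */
        char *p = malloc(chunk);
        if (p == NULL) {
            printf("malloc failed after %zu MB\n", total >> 20);
            return 1;
        }
        /* Touch at least one byte per megabyte so the pages are really
           allocated, not just reserved by the overcommit machinery. */
        for (size_t off = 0; off < chunk; off += 1024 * 1024)
            p[off] = 1;
        total += chunk;
        printf("allocated %zu MB so far\n", total >> 20);
    }
    return 0;
}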
Standard C++ containers like std::map or std::vector are rumored to consume more memory than we usually think.
Buy more RAM if needed. It is quite cheap these days.
Literally everything has to fit into what can be addressed, including your graphics adapters, the OS kernel, the BIOS, and so on, and the amount that can be addressed cannot be extended by swap either.
It is also worth noting that the process itself needs to be 64-bit. And some operating systems may become unstable, and therefore kill the process, if it uses an excessive amount of RAM.