We have a Red Hat 6 server with around 64 GB of memory, and we are planning to configure kdump. I am confused about the disk size I should set. Red Hat suggests it should be the memory size plus 2% more (that means around ~66 GB of disk space). I need your suggestion on what the best size to define for kdump would be.
First, don't enable kdump unless Red Hat support tells you to. Kdumps don't really produce anything useful for most Linux 'customers'.
Second, kdump could (potentially) dump the entire contents of RAM into the dump file. If you have 64GB of RAM and it is full when the kdump is triggered, then yes, the space for your kdump file will need to be what RH suggested. That said, most problems can be identified with partial kdumps. RH support has even said before to run 'head -c' on the file before sending it in to reduce its size, usually trimming it down to the first 64 MB.
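For example, a trimmed copy of the dump could be produced like this before uploading it (a sketch only; the vmcore path is a placeholder, since on RHEL 6 the dump normally lands under a timestamped directory in /var/crash/):

# Keep only the first 64 MB of the crash dump (path is an example)
head -c 64M /var/crash/example-timestamp/vmcore > vmcore-first64M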
Finally, remember to disable kdump after you have finished troubleshooting the issue. This isn't something you want running constantly on any system above a 'Development/Test' level. Most importantly, remember to clean this space out after a kdump has occurred.
Unless the system has enough memory, the kdump crash recovery service will not be operational. For information on minimum memory requirements, refer to the Required minimums section of the Red Hat Enterprise Linux Technology capabilities and limits comparison chart. When kdump is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and when the crashkernel=auto option is used, it defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).
In Red Hat Enterprise Linux 6, the crashkernel=auto only reserves memory if the system has 4 GB of physical memory or more.
To configure the amount of memory that is reserved for the kdump kernel, as root, open the /boot/grub/grub.conf file in a text editor and add the crashkernel=<size>M (or crashkernel=auto) parameter to the list of kernel options.
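For reference, a kernel stanza in /boot/grub/grub.conf would then look roughly like this (a sketch only; the kernel version, root device, and other options shown are examples, not your actual values):

title Red Hat Enterprise Linux (2.6.32-279.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root crashkernel=auto
        initrd /initramfs-2.6.32-279.el6.x86_64.img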
I had the same question before.
The amount of memory reserved for the kdump kernel can be estimated with the following scheme:
base memory to be reserved = 128 MB
an additional 64 MB added for each TB of physical RAM present in the system
So, for example, if a system has 1 TB of memory, 192 MB (128 MB + 64 MB) will be reserved.
So I believe 128MB will be sufficient in your case.
You may want to have a look at this link: Configuring crashkernel on RHEL6.2 (and later) kernels.
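After a reboot with the crashkernel= option in place, you can check how much memory actually got reserved (a sketch; both of these interfaces exist on RHEL 6 kernels):

# Size of the reserved crash kernel region, in bytes
cat /sys/kernel/kexec_crash_size
# The reserved physical range also shows up in /proc/iomem
grep -i "crash kernel" /proc/iomem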
Background:
I was trying to set up an Ubuntu machine on my desktop computer. The whole process took a whole day, including installing the OS and software. I didn't think much about it, though.
Then I tried doing my work on the new machine, and it was significantly slower than my laptop, which was very strange.
I ran iotop and found that disk traffic when decompressing a package was around 1-2 MB/s, which is definitely abnormal.
Then, after hours of research, I found this article that describes exactly the same problem and provides an ugly solution:
We recently had a major performance issue on some systems, where disk write speed is extremely slow (~1 MB/s, where normal performance is 150+ MB/s).
...
EDIT: to solve this, either remove enough RAM, or add "mem=8G" as a kernel boot parameter (e.g. in /etc/default/grub on Ubuntu; don't forget to run update-grub!)
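On Ubuntu that amounts to something like the following (a sketch; the exact contents of GRUB_CMDLINE_LINUX_DEFAULT vary from system to system):

# /etc/default/grub - append mem=8G to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=8G"
# then regenerate the grub configuration
sudo update-grub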
I also looked at this post
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
and did
cat /proc/vmstat | egrep "dirty|writeback"
output is:
nr_dirty 10
nr_writeback 0
nr_writeback_temp 0
nr_dirty_threshold 0 // and here
nr_dirty_background_threshold 0 // here
Those values were 8223 and 4111 when mem=8g was set.
So it's basically showing that when system memory is greater than 8 GB (32 GB in my case), regardless of the vm.dirty_background_ratio and vm.dirty_ratio settings (5% and 10% in my case), the actual dirty thresholds go to 0 and the write buffer is effectively disabled?
Why is this happening?
Is this a bug in the kernel or somewhere else?
Is there a solution other than unplugging RAM or using "mem=8g"?
UPDATE: I'm running the 3.13.0-53-generic kernel with Ubuntu 12.04 32-bit, so it's possible that this only happens on 32-bit systems.
If you use a 32-bit kernel with more than 2 GB of RAM, you are running in a sub-optimal configuration where significant tradeoffs must be made. This is because in these configurations, the kernel can no longer map all of physical memory at once.
As the amount of physical memory increases beyond this point, the tradeoffs become worse and worse, because the struct page array that is used to manage all physical memory must be kept mapped at all times, and that array grows with physical memory.
The physical memory that isn't directly mapped by the kernel is called "highmem", and by default the writeback code treats highmem as undirtyable. This is what results in your zero values for the dirty thresholds.
You can change this by setting /proc/sys/vm/highmem_is_dirtyable to 1, but with that much memory you will be far better off if you install a 64-bit kernel instead.
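A minimal sketch of flipping that switch at runtime (not persistent across reboots unless you also add it to /etc/sysctl.conf):

# Allow highmem pages to be counted as dirtyable
sysctl -w vm.highmem_is_dirtyable=1
# or, equivalently:
echo 1 > /proc/sys/vm/highmem_is_dirtyable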
Is this a bug in the kernel
According to the article you quoted, this is a bug, which did not exist in earlier kernels, and is fixed in more recent kernels.
Note that this issue seems to be fixed in later releases (3.5.0+) and is a regression (doesn’t happen on e.g. 2.6.32)
On a 4 GB RAM system running Linux, 3 GB is given to user space and 1 GB to the kernel. Does that mean that even if the kernel is using only 50 MB and user space is running low, user space cannot use the kernel's space? If not, why? Why can't Linux map those pages to user space?
The 3/1 separation refers to VIRTUAL memory. The virtual memory, however, is sparse. Meaning that even though there is "on paper" 1 GB, in practice a LOT less than that is used. Whenever possible, the "virtual" memory is backed by physical pages (meaning, if your virtual memory footprint is 50MB, then you're using 50 MB of physical memory), up until the point where there is no more physical memory, in which case you either A) spill over to swap or B) the system encounters a low memory condition and frees memory the hard way - by killing processes.
It gets more complicated. Virtual memory is not really used (committed) until actually used. This means when you allocate memory, you get an "IOU" or "promise" for memory, but the memory only gets consumed when you actually use it, as in write some value to it. Overall, however, you are correct in that there is segregation - at the hardware level - between kernel and user mode. In other words, of the 4 GB addressable (assuming 32-bit), the top 1 GB, even though it is in your address space, is not accessible to you, and in practice belongs to the kernel. (The limit of 4 GB stems from 32-bit pointers; for 64 bits, the usable address space is effectively 48 bits, which means 256 TB: 128 TB user, 128 TB kernel, by the way.) Further, this 1 GB of your space that is the kernel's is identical in other processes, too. So it doesn't matter which process you are in: when you "call the kernel" (i.e. make a system call), you end up in the top 1 GB, which is shared between all processes.
Again, the key point is that the 1 GB isn't REALLY used in full. The actual memory footprint of the kernel is a lot smaller - in the tens of MB. It's just that theoretically the kernel can use UP TO 1 GB, but that assumes it can be backed either by RAM or (rarely) swap. You can look at /proc/meminfo. As for the answer above, about changing the 3/1 split: it actually CAN be changed (in Windows it's as easy as a kernel command line option in boot.ini; in Linux it requires recompilation).
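As a rough illustration of that footprint, the main kernel-side consumers are visible in /proc/meminfo (these field names are standard; the values naturally vary per system):

# Main kernel-side memory consumers reported by /proc/meminfo
egrep 'Slab|KernelStack|PageTables|VmallocUsed' /proc/meminfo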
The 3GB/1GB split in process space is fixed. There is no way to change it regardless of how much RAM is actually in use.
Today, I received a few alerts about swapping activity of 3000 KB/sec. This Linux box has very few processes running and has 32 GB of RAM in total. When I logged in and ran free, I did not see anything suspicious. The ratio of free memory to used in the (buffers/cache) row was high enough (25 GB free to 5 GB in use).
So I am wondering: what are the main causes of paging on a Linux system?
How does swappiness impact paging?
How long does a page stay in physical RAM before it is swapped out? What controls this behavior on Linux?
Is it possible that even if there is adequate free physical RAM, a process's memory access pattern is such that data is spread over many pages, and this causes paging?
For example, consider a 5 GB array that the program accesses in a loop, although slowly, such that pages which are not currently in use get swapped out. Again, keep in mind that even if the buffer is 5 GB, there could be 20 GB of physical RAM available.
UPDATE:
The Linux vendor is RHEL 6.3, kernel version 2.6.32-279.el6.x86_64.
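For reference, a quick way to see the current swappiness setting and watch actual swap-in/swap-out activity while the alerts fire (a sketch using standard tools):

# Current swappiness (RHEL 6 defaults to 60)
cat /proc/sys/vm/swappiness
# Watch the si (swap-in) and so (swap-out) columns, sampled every 5 seconds
vmstat 5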
We use a Tool (Whats Up Gold) to monitor memory usage on a Linux Box.
We see Memory usage (graphs) related to:
Physical, Real, Swap, Virtual Memory, and ALL Memory (which is an average of all of these).
The 'ALL Memory' graph shows low memory usage of about 10%.
But Physical memory shows as 95% used.
Swap memory shows as 2% used.
So, do I need more memory on this Linux box?
In other words, should I go by:
the ALL Memory graph (which says the memory situation is good), OR
the Physical Memory graph (which says the memory situation is bad)?
Real and Physical
Physical memory is the amount of DRAM which is currently used. Real memory shows how much of the system's DRAM your applications are using; it is generally lower than physical memory. The Linux system caches some disk data, and that caching is the difference between physical and real memory. In fact, whenever there is free memory, Linux uses it for caching. Do not worry: as your applications demand memory, they will get the cached space back.
Swap and Virtual
Swap is additional space beyond your actual DRAM. This space is borrowed from disk, and once your applications fill up the entire DRAM, Linux transfers some unused memory to swap to let all applications stay alive. The total of swap and physical memory is the virtual memory.
Do you need extra memory?
In answer to your question, you need to check real memory. If your real memory is full, you need to get more RAM. Use the free command to check the amount of actually free memory. For example, on my system free says:
$ free
total used free shared buffers cached
Mem: 16324640 9314120 7010520 0 433096 8066048
-/+ buffers/cache: 814976 15509664
Swap: 2047992 0 2047992
You need to check the buffers/cache line. As shown above, there is really about 15 GB of free DRAM (second line) on my system. Check this on your system and find out whether you need more memory or not. The lines represent physical, real, and swap memory, respectively.
free -m
As for analyzing a lack of memory on Linux with the free tool, I have an opinion proved by experiments (practice):
~# free -m
total used free shared buff/cache available
Mem: 2000 164 144 1605 1691 103
You should sum 'used' + 'shared' and compare that with 'total'.
The other columns are useless; they just add confusion and nothing more.
I would say [ total - (used + shared) ] should always be at least 200 MB.
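Plugging the free -m output above into that rule: 2000 - (164 + 1605) = 231 MB, so this box sits just above the 200 MB floor.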
You can also get almost the same number if you check MemAvailable in /proc/meminfo:
# cat /proc/meminfo
MemAvailable: 107304 kB
MemAvailable is how much memory Linux thinks is really free right now, before active swapping kicks in.
So right now you can consume at most 107304 kB; if you consume more, heavy swapping starts happening.
MemAvailable also correlates well with real-world practice.
What is the limitation on the virtual address space available to a process?
Is it:
the 32-bit vs. 64-bit address bus?
the 32-bit vs. 64-bit processor?
the secondary storage available?
the maximum swap space configured?
Thanks in advance
Secondary storage / swap space have nothing to do with it, because pages can be mapped into your address space without being allocated. And the same page can be mapped at multiple virtual addresses. ([edit] This is the default behavior, but the vm.overcommit_memory sysctl setting can be used to prevent the mapping of VM pages for which there is no RAM or swap available. Do a search on that sysctl setting for more information.)
The CPU certainly puts an upper limit, and that is essentially the only limit on 64-bit systems. Although note that current x86_64 processors do not actually let you use the entire 64-bit space.
On 32-bit Linux, things get more complicated. Older versions of Linux reserved 2GB of virtual space of each process for the kernel; newer ones reserve 1GB. (If memory serves, that is. I believe these are configurable when the kernel is compiled.) Whether you consider that space "available to a process" is a matter of semantics.
Linux also has a per-process resource limit RLIMIT_AS accessible via setrlimit and getrlimit.
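In a shell, RLIMIT_AS corresponds to ulimit -v (values in kB), so a quick sketch of inspecting and lowering it looks like this (./some_program is just a placeholder):

# Show the current address-space limit for this shell ("unlimited" is typical)
ulimit -v
# Run a command in a subshell with its address space capped at ~4 GB
( ulimit -v 4194304; ./some_program )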