Linux process memory scheme [closed]

AFAIK there's a part of a process's memory that stores kernel-related data and is marked as read-only.
I can't find a factual explanation for why this happens. What is the purpose of this area, and why should it be included in every process's memory space?

Just like the user-mode memory space, the kernel needs its own code section (RX), data sections (R/RW), and stack frames for its threads (RW).
I would not say that it needs to be included in the process memory space, but rather that it's where the kernel always resides. Unlike the user part of the address space, which gets replaced whenever there's a context switch between processes, the kernel space (>=0xC0000000 on 32-bit x86; on 64-bit x86 the kernel half starts at 0xFFFF800000000000, with the kernel text mapped at 0xFFFFFFFF80000000) never gets replaced, in its entirety.
This is a necessary requirement, since there's only one kernel on the system and it must remain at the same place in virtual memory at all times to handle system calls and interrupts and to run various kernel tasks.
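To make the split concrete, here is a minimal C sketch (an illustration, assuming x86-64 Linux with the usual 48-bit address layout): it prints the addresses of a few user-space objects, all of which fall in the lower, user half of the address space; the kernel half above 0xFFFF800000000000 is never directly visible to the process.

/* Minimal sketch (x86-64 Linux assumed): print the addresses of a few
 * user-space objects to see that they all fall below the user/kernel split. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int global_var;                      /* data segment */

int main(void) {
    int stack_var;                   /* stack */
    void *heap_ptr = malloc(16);     /* heap */

    /* On x86-64 with 48-bit virtual addresses, the kernel half of the
     * address space starts at 0xFFFF800000000000. */
    const uintptr_t kernel_start = 0xFFFF800000000000UL;

    printf("code  : %p\n", (void *)main);
    printf("data  : %p\n", (void *)&global_var);
    printf("heap  : %p\n", heap_ptr);
    printf("stack : %p\n", (void *)&stack_var);
    printf("kernel half starts at 0x%lx (never visible here)\n",
           (unsigned long)kernel_start);

    free(heap_ptr);
    return 0;
}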

Related

Swap memory using [closed]

Our system is consuming swap memory even though the system has available memory. Is that behaviour normal?
Our system is Red Hat 8.6.
[memory usage figure]
Solution for the memory usage problem:
The Linux 2.6 kernel introduced a new kernel parameter called swappiness, which allows administrators to customize how Linux swaps.
Swappiness is a property of the Linux kernel that changes the balance between swapping out runtime memory and dropping pages from the system page cache. Swappiness can be set to values between 0 and 100, inclusive. A low value means the kernel will try to avoid swapping as much as possible, whereas a higher value will make the kernel use swap space more aggressively.
Since Linux kernel 5.8 the swappiness value range is 0 to 200, with a default value of 60. You can change it temporarily (until your next reboot) with the following command
echo 42 > /proc/sys/vm/swappiness
If you want to change it permanently, edit the vm.swappiness parameter in the /etc/sysctl.conf file.
It should be noted that the swappiness value is not a percentage: a value of 60 does not imply that 60% of memory will be moved into swap. There is a swap algorithm that determines when and how much data is put into swap.
The following formula is provided by Red Hat to determine the swap tendency:
swap_tendency = mapped_ratio/2 + distress + vm_swappiness;
You can read more here
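As a rough illustration of where these numbers come from, here is a small C sketch (not part of the original answer) that reads the current swappiness from /proc/sys/vm/swappiness and the swap usage from /proc/meminfo, the same figures that free and swapon -s report:

/* Minimal sketch: read vm.swappiness and current swap usage from procfs. */
#include <stdio.h>
#include <string.h>

/* Return the value in kB for a /proc/meminfo key such as "SwapTotal:". */
static long meminfo_kb(const char *key) {
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long kb = -1;
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            sscanf(line + strlen(key), "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void) {
    long swap_total = meminfo_kb("SwapTotal:");
    long swap_free  = meminfo_kb("SwapFree:");
    long mem_avail  = meminfo_kb("MemAvailable:");
    int swappiness = -1;

    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (f) {
        if (fscanf(f, "%d", &swappiness) != 1)
            swappiness = -1;
        fclose(f);
    }

    printf("vm.swappiness     : %d\n", swappiness);
    printf("swap in use (kB)  : %ld\n", swap_total - swap_free);
    printf("MemAvailable (kB) : %ld\n", mem_avail);
    return 0;
}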

Benefits of allocating huge pages at boot [closed]

The kernel boots with the default_hugepagesz=1G option, which defines the default huge page size. So when an application wants a large amount of memory, the kernel will allocate it with 1G pages.
If the kernel boots with hugepages=N, it allocates N huge pages at boot. In that case, will the kernel automatically take a page from this pool, thus saving time on allocating memory?
When this pool runs out of available pages, how will the kernel allocate huge memory?
The hugepages kernel option reserves contiguous ranges of page frames (RAM), so that the user can allocate that many huge pages without fail.
When there are no reserved contiguous huge pages left, the kernel tries to find more, which may fail once physical memory has become fragmented.
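Here is a minimal C sketch of how a program draws from that reserved pool, assuming the default huge page size is 2 MB and hugetlbfs support is enabled: the mmap(MAP_HUGETLB) call below fails with ENOMEM once the boot-time pool (and any surplus the kernel can assemble) is exhausted.

/* Minimal sketch: request one 2 MB huge page from the reserved pool.
 * (MAP_HUGETLB | MAP_HUGE_1GB from <linux/mman.h> would select 1G pages
 * when a 1G pool exists.) */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)   /* one 2 MB huge page */

int main(void) {
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* Pool exhausted (or no huge page support): ENOMEM. */
        fprintf(stderr, "mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
        return 1;
    }
    memset(p, 0, LEN);            /* touch it so a physical huge page is used */
    puts("got a huge page from the reserved pool");
    munmap(p, LEN);
    return 0;
}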

Does Linux have a page file? [closed]

I found in several places that Linux uses pages and a paging mechanism, but I couldn't find anywhere where this file is or how to configure it.
All the information I found is about the Linux swap file / partition. There is a difference between paging and swapping:
Paging moves individual pages (a small frame which contains a piece of data, usually 4 KB but it can vary between OSes) from main memory to backing storage, and happens all the time as a normal function of the operating system.
Swapping moves an entire process to storage and happens when the system is memory stressed or, on Windows 8, when a new application is hibernating.
Does Linux use its swap file / partition for both cases?
If so, how can I see how many pages are currently paged out? This information is not in the vmstat, free or swapon commands (or I fail to see it).
Or is there another file used for paging?
If so, how can I configure it (and watch its usage)?
Or perhaps Linux does not use paging at all and I was misled?
I would appreciate answers specific to Red Hat Enterprise Linux versions 6 and 7, but a general answer covering all Linux distributions would also be good.
Thanks in advance.
On Linux, the swap partition(s) are used for paging.
Linux does not respond to memory pressure by swapping out whole processes. The virtual memory system does demand paging, page by page. Under extreme memory pressure, one or more processes will be killed by the OOM killer. (There are some useful links to documentation in the first NOTE in man malloc)
There is a line in the top header which shows swap partition usage, but if that is all the information you want, use
swapon -s
man swapon for more information.
The swap partition usage is not the same as the number of paged-out pages. A page might be memory-mapped to a file using the mmap call; since that page has backing store in the file, there is no need to also write it to a swap partition, and the system won't use swap space for it. But swap partition usage is a pretty good indicator.
Also note that Linux (unlike Windows) does not allocate swap space for pages when they are allocated. Instead, it adds the new page to the virtual memory map without any backing store, and allocates the swap space only when the page needs to be swapped out. The consequence (as described in the malloc manpage referenced earlier) is that a malloc call may succeed in allocating virtual memory, but a subsequent attempt to use that virtual memory may fail.
Although Linux retains the term 'swap partition' as a historical relic, it actually performs paging. So your expectation is borne out; you were just thrown by the archaic terminology.
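If you only want a number, a small C sketch like the following (an illustration, not part of the original answer) derives the count of currently swapped-out pages from SwapTotal and SwapFree in /proc/meminfo, the same figures swapon -s and free report:

/* Minimal sketch: estimate how many pages are currently swapped out. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long swap_total = 0, swap_free = 0;

    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        /* Lines that don't match leave the values untouched. */
        sscanf(line, "SwapTotal: %ld kB", &swap_total);
        sscanf(line, "SwapFree: %ld kB", &swap_free);
    }
    fclose(f);

    long page_kb = sysconf(_SC_PAGESIZE) / 1024;   /* usually 4 kB */
    printf("pages currently swapped out: %ld\n",
           (swap_total - swap_free) / page_kb);
    return 0;
}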

With virtual memory, programs running on the system can allocate far more memory than is physically available; [closed]

How is the OS able to do this?
With virtual memory, programs running on the system can allocate far
more memory than is physically available;
In practice it is "a little more memory", not "far more memory"; otherwise you are experiencing thrashing.
Every desktop, laptop or server processor has an MMU. It is used by the virtual memory system to provide a virtual address space through paging and the page cache. When the kernel gets a page fault, it can either fetch a page from disk (e.g. from a segment of an ELF executable or shared object or some other mapped file, or from the swap area) or send a SIGSEGV signal; see signal(7).
On Linux, several system calls can change the address space: mmap(2) and munmap (and also the obsolete sbrk, etc.), and execve(2). You might advise the kernel using madvise(2).
You could use cat /proc/$somepid/maps (e.g. cat /proc/$$/maps in your shell) to understand the address space map of some process. See proc(5).
Follow all the links above, and also read Advanced Linux Programming and Operating Systems: Three Easy Pieces.
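A minimal C sketch of those calls (using /etc/hostname merely as an arbitrary readable file for the example): it maps the file with mmap(2), passes a paging hint with madvise(2), and unmaps with munmap(2).

/* Minimal sketch: map a file, hint the kernel, read it, unmap it. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);
    struct stat st;

    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
        perror("open/fstat");
        return 1;
    }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    madvise(p, st.st_size, MADV_SEQUENTIAL);   /* hint: sequential access */
    fwrite(p, 1, st.st_size, stdout);          /* first access page-faults the data in */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}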

MMU related to physical memory address handling [closed]

What happens when physical memory is fully occupied by processes and a new process (of similar priority) is introduced? How does the memory management unit handle the pages (resources) requested by the new and old processes (tasks of the same priority)?
So I mean to ask: how is swapping of memory done for processes of similar priority when physical memory is already full? Please explain with an example.
You should not care about what happens in that case, and on current Linux desktops and laptops it is an improbable case (because usually the kernel steals pages from the filesystem cache).
When a new program is started with the execve(2) syscall, new memory mappings are set up (nearly as if done by mmap(2)), possibly with a copy-on-write mechanism. When the program accesses them, the kernel takes a page fault and ultimately loads the page into physical RAM. It may have to choose which pages should be stolen. If they are dirty, it has to write them to some swap zone (or to some mmap-ed file if the mapping is MAP_SHARED). Otherwise, it just reuses them (and reassigns the physical pages).
If all memory resources are used, memory overcommit may happen.
The MMU is used by the Linux kernel for virtual memory management. Applications only see some virtual address space (look into /proc/, e.g. with cat /proc/self/maps, to understand it).
The MMU does the virtual-to-physical address translation and raises page faults. The kernel is responsible for configuring the MMU (i.e. setting up the virtual address space translation mechanism) and for handling page faults, which are usually invisible to the application (e.g. because the kernel fetches a page from the disk, the filesystem or the swap area), except for the SIGSEGV signal, which occurs when a "non-existent" page is accessed.
Please take time to read all the links given here.
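To see demand paging in action, here is a small C sketch (an illustration, not a definitive tool): it reserves 64 MB of anonymous virtual memory, then touches it page by page; VmRSS in /proc/self/status only grows as the kernel resolves the resulting page faults.

/* Minimal sketch: anonymous mmap reserves address space only; physical
 * pages are assigned lazily, on first touch, via page faults. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

/* Read the resident set size (kB) of this process from /proc/self/status. */
static long vm_rss_kb(void) {
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f))
        sscanf(line, "VmRSS: %ld kB", &kb);
    fclose(f);
    return kb;
}

int main(void) {
    size_t len = 64UL * 1024 * 1024;
    long page = sysconf(_SC_PAGESIZE);

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("VmRSS after mmap    : %ld kB\n", vm_rss_kb());
    for (size_t off = 0; off < len; off += (size_t)page)
        p[off] = 1;                       /* each first touch is a page fault */
    printf("VmRSS after touching: %ld kB\n", vm_rss_kb());

    munmap(p, len);
    return 0;
}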

Resources