Shared memory between threads and processes [closed] - multithreading

Could someone help me with these questions:
What memory (code/data) sections are shared by threads within the same process, but not shared by different processes?
Can two processes share their virtual address space?
Can two processes share global variables?
What kind of data sharing can be implemented among processes, using memory mapped files?
Is it possible to share a linked list using memory mapped files? And an array of numbers?

A process has only one address space. All threads in a single process can access all of the process's memory.
No. On Windows, to share memory across process boundaries, you have to use either a shared data segment, or a memory-mapped file object.
Only if the variables are stored in shared memory.
Any POD data can be shared using a memory mapped file. Consider it a block of raw contiguous bytes. You can share anything that would normally fit in a byte array.
A linked list cannot be shared because its nodes contain pointers to each other in memory, and pointers cannot be used across process boundaries. You would have to serialize the list into a flat format that uses offsets instead of pointers. An array of POD types, like integers, can be shared, yes.
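To make the memory-mapped-file points concrete, here is a minimal POSIX sketch (my own illustration, not part of the answer above; on Windows you would use CreateFileMapping/MapViewOfFile instead). A parent and a forked child share an array of ints through a shm_open'd, memory-mapped object; the name /demo_shm and the array size are arbitrary, and older glibc may require linking with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define N 16

int main(void) {
    /* Create a named shared-memory object and size it for N ints. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, N * sizeof(int)) == -1) { perror("ftruncate"); return 1; }

    /* Both processes map the same object; plain ints (POD) are fine here. */
    int *arr = mmap(NULL, N * sizeof(int), PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    if (arr == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {               /* child: write into the shared array */
        for (int i = 0; i < N; i++)
            arr[i] = i * i;
        return 0;
    }
    wait(NULL);                      /* parent: read what the child wrote */
    for (int i = 0; i < N; i++)
        printf("%d ", arr[i]);
    printf("\n");

    munmap(arr, N * sizeof(int));
    shm_unlink("/demo_shm");
    return 0;
}

Note that the mapping may land at different virtual addresses in unrelated processes, which is exactly why raw pointers (and therefore linked-list nodes) cannot be shared this way.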

Related

benefits of allocating huge pages at boot [closed]

The kernel boots with the default_hugepagesz=1G option, which sets the default huge page size. So when an application wants a large amount of memory, the kernel will allocate it with 1 GB pages.
If the kernel boots with hugepages=N, it reserves N huge pages at boot. In this case, will the kernel automatically take a page from this pool, thus saving time when allocating memory?
When this pool runs out of available pages, how will the kernel allocate huge memory?
The hugepages kernel option reserves contiguous ranges of page frames (RAM) so that the user can allocate that many huge pages without failure.
When no reserved huge pages are left, the kernel tries to find more contiguous memory, which may fail once physical memory becomes fragmented.
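As a hedged, Linux-specific sketch of what drawing from the boot-time pool looks like from user space: an anonymous mmap with MAP_HUGETLB takes a page of the default huge page size (1 GB here, per default_hugepagesz=1G). If the reserved pool is empty and the kernel cannot assemble enough contiguous memory, the call fails with ENOMEM rather than silently falling back to 4 KB pages.

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_1G (1UL << 30)          /* matches default_hugepagesz=1G */

int main(void) {
    /* MAP_HUGETLB draws from the reserved huge-page pool (default size). */
    void *p = mmap(NULL, HUGE_1G, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* Typically ENOMEM: pool exhausted and no contiguous memory found. */
        fprintf(stderr, "mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
        return 1;
    }
    memset(p, 0, HUGE_1G);           /* touching the pages actually backs them */
    munmap(p, HUGE_1G);
    return 0;
}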

Does Linux have a page file? [closed]

I have read in several places that Linux uses pages and a paging mechanism, but I couldn't find where this file is or how to configure it.
All the information I found is about the Linux swap file / partition. There is a difference between paging and swapping:
Paging moves individual pages (small blocks of data, usually 4 KB, though the size varies between OSs) from main memory to backing storage, and happens all the time as a normal function of the operating system.
Swapping moves an entire process out to storage, and happens when the system is under memory pressure, or on Windows 8 when an application is hibernated.
Does Linux use its swap file / partition for both cases?
If so, how can I see how many pages are currently paged out? I can't find this information in the vmstat, free, or swapon commands (or perhaps I'm failing to see it).
Or is there another file used for paging?
If so, how can I configure it (and watch its usage)?
Or perhaps Linux does not use paging at all and I was misled?
I would appreciate answers specific to Red Hat Enterprise Linux versions 6 and 7, but a general answer covering all Linux distributions would also be good.
Thanks in advance.
On Linux, the swap partition(s) are used for paging.
Linux does not respond to memory pressure by swapping out whole processes. The virtual memory system does demand paging, page by page. Under extreme memory pressure, one or more processes will be killed by the OOM killer. (There are some useful links to documentation in the first NOTE in man malloc)
There is a line in the top header which shows swap partition usage, but if that is all the information you want, use
swapon -s
man swapon for more information.
Swap partition usage is not the same as the number of paged-out pages. A page might be memory-mapped to a file using the mmap call; since that page has backing store in the file, there is no need to also write it to a swap partition, and the system won't use swap space for it. But swap partition usage is a pretty good indicator.
Also note that Linux (unlike Windows) does not allocate swap space for pages when they are allocated. Instead, it adds the new page to the virtual memory map without any backing store, and allocates the swap space only when the page needs to be swapped out. The consequence (as described in the malloc manpage referenced earlier) is that a malloc call may succeed in allocating virtual memory, but a subsequent attempt to use that virtual memory may fail.
Although Linux retains the term 'swap partition' as a historical relic, it actually performs paging. So your expectation is borne out; you were just thrown by the archaic terminology.
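A small sketch of the overcommit behavior described above (illustrative only; whether the reservation succeeds depends on the vm.overcommit_memory setting and on how much RAM and swap the machine has): the mmap itself consumes neither RAM nor swap, and backing store is assigned only when pages are actually touched.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* 64 GB of virtual address space; purely illustrative, pick something
     * larger than the machine's RAM to see the effect. */
    size_t len = 64UL << 30;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }   /* may be refused under strict overcommit */
    printf("Reserved %zu GB of address space without using RAM or swap\n",
           len >> 30);
    /* Uncomment to actually dirty the pages; under memory pressure this is
     * where the OOM killer, not a failed allocation, ends the program:
     * for (size_t i = 0; i < len; i += 4096) p[i] = 1; */
    munmap(p, len);
    return 0;
}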

With virtual memory, programs running on the system can allocate far more memory than is physically available; [closed]

How is the OS able to do this?
With virtual memory, programs running on the system can allocate far
more memory than is physically available;
In practice it is "a little more memory", not "far more memory"; otherwise you will experience thrashing.
Every desktop, laptop, or server processor has an MMU. It is used by the virtual memory system to provide a virtual address space through paging and the page cache. When the kernel gets a page fault, it can either fetch a page from disk (e.g. from a segment of an ELF executable or shared object, from some other mapped file, or from the swap area) or send a SIGSEGV signal; see signal(7).
On Linux, several system calls can change the address space: mmap(2) and munmap (and also the obsolete sbrk, etc.), and execve(2). You can advise the kernel using madvise(2).
You could use cat /proc/$somepid/maps (e.g. cat /proc/$$/maps in your shell) to understand the address space map of some process. See proc(5).
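For instance, here is a small sketch (mine, not from the links) that maps the first page of its own executable with mmap(2) and then prints /proc/self/maps, so the new mapping shows up in the address-space listing:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Map the first page of this program's own executable, read-only. */
    int fd = open("/proc/self/exe", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }
    void *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Dump the address-space map; same content as `cat /proc/$$/maps`. */
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps) { perror("fopen"); return 1; }
    char line[512];
    while (fgets(line, sizeof line, maps))
        fputs(line, stdout);
    fclose(maps);

    munmap(p, 4096);
    close(fd);
    return 0;
}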
Follow all the links above, and also read Advanced Linux Programming and Operating Systems: Three Easy Pieces.

Who determines the block size when writing to a disk? [closed]

This might be a naive question but I can't find a straight answer for it.
While using I/O tools such as dd, fio, and bonnie++, one of the tools' parameters sets the block size to be used in the test. So one can set the block size to 512 KB, 1 MB, or even more. And as the block size gets larger, the reported MB/s also gets higher, which seems logical, since you write fewer, larger blocks.
So my questions are:
- How does this work when the default block size is 4 KB (or 32 KB in some kernels)?
- In any other application, who determines the block size used when writing to disk? Is it the application itself, or the operating system?
- What would be a typical block size for a database application, for instance?
Thanks in advance :)
If you use something like dd, you're doing a block-level operation, so you get to specify a block size. Up to a point, you'll get greater speed by using a bigger block size, but it will quickly tail off. It's very inefficient to read from a disk byte by byte, but by the time you've hit a few megabytes, you won't notice any further speed increase.
When an application writes to disk, it is generally not doing block-level access, but reading and writing files. It's the operating system that is responsible for turning this file-level access into block-level access. An application, unless it's a specialised one running as root, won't care about block-level access, and won't be involved in determining block sizes for this kind of thing.
It's further complicated by the disk cache: when you read something at the application level, if you're lucky, you won't touch the disk at all: it'll be something already cached, and you'll retrieve it from there (without being aware of it). When you write, you will hopefully find that you write into the cache and appear to finish immediately, and then the operating system will do the actual write when it gets round to it. It's only if you're doing lots of writing, or if the cache is turned off, that you will exhaust the cache and the writes will need to happen before control gets passed back to your application.
In short: unless you muck about at a fairly low level, you don't need to worry about block sizes.
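If you do want to see the effect yourself, here is a rough, hedged micro-benchmark sketch (file path and sizes are arbitrary, and the page cache dominates the numbers unless you fsync or use O_DIRECT): it writes the same amount of data with a 4 KB and a 1 MB write() size.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Write `total` bytes in chunks of `bs` bytes and return elapsed seconds. */
static double write_with_bs(const char *path, size_t bs, size_t total) {
    char *buf = malloc(bs);
    memset(buf, 'x', bs);
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < total; done += bs)
        if (write(fd, buf, bs) < 0) break;
    fsync(fd);                       /* force the data out of the page cache */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    size_t total = 256UL << 20;      /* 256 MB per run */
    printf("4 KB writes: %.2f s\n", write_with_bs("/tmp/bs_test", 4096, total));
    printf("1 MB writes: %.2f s\n", write_with_bs("/tmp/bs_test", 1UL << 20, total));
    unlink("/tmp/bs_test");
    return 0;
}

The gap between the two runs mostly reflects per-call overhead, which is exactly why the speed-up tails off once blocks reach a few megabytes.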

Linux process memory scheme [closed]

AFAIK there's a region of a process's memory that stores kernel-related data and is marked as read-only.
I can't find a factual explanation of why this is the case: what is the purpose of this area, and why should it be included in every process's memory space?
Just like the user-mode memory space, the kernel needs its own code section (RX), data section (R/RW), and stack frames for the threads (RW).
I would not say that it needs to be included in the process memory space, but rather that it's where the kernel always resides. Unlike the process memory space, which gets replaced whenever there's a context switch between processes, the kernel space (>= 0xC0000000 on 32-bit and >= 0xFFFFFFFF80000000 on 64-bit), in its entirety, never gets replaced.
This is a necessary requirement since there's only one kernel on a system and it must remain at the same place in the memory (virtual) at all times for handling system calls, interrupts, and running various kernel tasks.
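As a hedged illustration of that protection (my own example): user-mode code that dereferences an address in the 64-bit kernel half simply receives SIGSEGV, because those pages are never accessible from user mode.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void on_segv(int sig) {
    (void)sig;
    puts("SIGSEGV: kernel-space address is not accessible from user mode");
    exit(0);
}

int main(void) {
    signal(SIGSEGV, on_segv);
    /* An address in the 64-bit kernel half; never mapped for user access. */
    volatile long *kernel_addr = (long *)0xFFFFFFFF80000000UL;
    long v = *kernel_addr;           /* faults and delivers SIGSEGV */
    printf("unexpectedly read %ld\n", v);
    return 0;
}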
