Is there a way to have a.out loaded in linux x86_64 "high memory"? - linux

If I look at the memory mapping for a 64-bit process on Linux (x86_64) I see that the a.out is mapped in fairly low memory:
$ cat /proc/1160/maps
00400000-004dd000 r-xp 00000000 103:03 536876177 /usr/bin/bash
006dc000-006dd000 r--p 000dc000 103:03 536876177 /usr/bin/bash
006dd000-006e6000 rw-p 000dd000 103:03 536876177 /usr/bin/bash
006e6000-006ec000 rw-p 00000000 00:00 0
00e07000-00e6a000 rw-p 00000000 00:00 0 [heap]
7fbeac11c000-7fbeac128000 r-xp 00000000 103:03 1074688839 /usr/lib64/libnss_files-2.17.so
7fbeac128000-7fbeac327000 ---p 0000c000 103:03 1074688839 /usr/lib64/libnss_files-2.17.so
I'd like to map a 2G memory region in the very lowest portion of memory, but with these a.out mappings in place I would have to put it after them, crossing into the second 2G region.
Is this a.out load address mandated by the x86_64 ABI, or can it be moved to a different region using one of:
runtime loader flags
linker flags when the executable is created
?

Yes. Building a Linux x86-64 application as a position-independent executable will cause both it and its heap to be mapped into high memory, right along with libc and other libraries. This should leave the space under 2GB free for your use. (However, note that the kernel will probably protect the first 64KB or so of memory from being mapped to protect it from certain exploits; look up vm.mmap_min_addr for information.)
To build your application as a position-independent executable, compile with -fPIE and link with -pie (both can be passed through the gcc driver).

Related

why and how do the addresses returned by 'cat /proc/self/maps' change when it's executed again

I'm trying to understand linux memory management.
Why and how do the addresses returned by 'cat /proc/self/maps' change when it's executed again
user@notebook:/$ cat /proc/self/maps | grep heap
55dc94a7c000-55dc94a9d000 rw-p 00000000 00:00 0 [heap]
user@notebook:/$ cat /proc/self/maps | grep heap
562609879000-56260989a000 rw-p 00000000 00:00 0 [heap]
This is due to Address Space Layout Randomization, aka ASLR. Linux will load code and libraries at different locations each time to make it harder to exploit buffer overflows and similar.
You can disable it with
echo 0 > /proc/sys/kernel/randomize_va_space
which will make the addresses the same each time. You can then re-enable it with:
echo 2 > /proc/sys/kernel/randomize_va_space
and the addresses will be randomized again.

Why is vdso appearing during execution of static binaries? [duplicate]

This question already has answers here:
What are vdso and vsyscall?
Here is a quick sample program. (This will basically get the procmap associated with the process)
> cat sample.c
#include <stdio.h>
#include <stdlib.h>   /* for system() */
#include <unistd.h>   /* for getpid() */

int main()
{
    char buffer[1000];
    sprintf(buffer, "cat /proc/%d/maps\n", getpid());
    int status = system(buffer);
    return status;
}
Preparing it statically
> gcc -static -o sample sample.c
> file sample
sample: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.24, BuildID[sha1]=9bb9f33e867df8f2d56ffb4bfb5d348c544b1050, not stripped
Executing the binary
> ./sample
00400000-004c0000 r-xp 00000000 08:01 12337398 /home/admin/sample
006bf000-006c2000 rw-p 000bf000 08:01 12337398 /home/admin/sample
006c2000-006c5000 rw-p 00000000 00:00 0
0107c000-0109f000 rw-p 00000000 00:00 0 [heap]
7ffdb3d78000-7ffdb3d99000 rw-p 00000000 00:00 0 [stack]
7ffdb3de7000-7ffdb3de9000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
I googled vDSO but did not understand it properly. Wikipedia says "these are ways in which kernel routines can be accessed from user space". My question is: why are these shared objects appearing in the execution of static binaries?
They appear because your kernel "injects" them into every process.
Read more about them here and here.

virtual memory in linux

I am debugging an issue where the same program behaves differently on different Linux boxes (all 2.6 kernels). Basically, on some Linux boxes an mmap() of 16MB always succeeds, but on others the same mmap() fails with ENOMEM.
I checked /proc/<pid>/maps and saw that the virtual memory maps on the different Linux boxes are quite different. One difference is the address range returned near the heap:
linux box1: the mmap() returns addresses around 31162000-31164000
linux box2: the mmap() returns addresses around a9a67000-a9a69000
My question is: for a particular process, how is the Linux virtual address space arranged? What decides the actual address ranges? And why would mmap() fail even though I can still see some large unused virtual address ranges?
UPDATE: here is an example of address space where mmap() of 16MB would fail:
10024000-10f6b000 rwxp 10024000 00:00 0 [heap]
30000000-3000c000 rw-p 30000000 00:00 0
3000c000-3000d000 r--s 00000000 00:0c 5461 /dev/shm/mymap1
3000d000-3000e000 rw-s 00001000 00:0c 5461 /dev/shm/mymap2
3000e000-30014000 r--s 00000000 00:0c 5463 /dev/shm/mymap3
30014000-300e0000 r--s 00000000 00:0c 5465 /dev/shm/mymap4
300e0000-310e0000 r--s 00000000 00:0b 2554 /dev/mymap5
310e0000-31162000 rw-p 310e0000 00:00 0
31162000-31164000 rw-s 00000000 00:0c 3554306 /dev/shm/mymap6
31164000-312e4000 rw-s 00000000 00:0c 3554307 /dev/shm/mymap7
312e4000-32019000 rw-s 00000000 00:0c 3554309 /dev/shm/mymap8
7f837000-7f84c000 rw-p 7ffeb000 00:00 0 [stack]
In the above example there is still a big gap between the last mymap8 and [stack], but a further mmap() of 16MB fails. My question is: how does Linux decide the mmap() base address and the allowed range?
Thanks.

identifying glibc mmap areas (VMA's) from a Linux kernel module

I understand that when allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping using mmap(), and that this mmap-allocated area does not show up as part of [heap] in the Linux VMA list.
So is there any method available to identify all the glibc mmap areas from a Linux kernel module?
For example, a test program that calls malloc() with sizes greater than MMAP_THRESHOLD many times shows this cat /proc/<pid>/maps output:
00013000-00085000 rw-p 00000000 00:00 0 [heap]
40000000-40016000 r-xp 00000000 00:0c 14107305 /lib/arm-linux-gnueabi/ld-2.13.so
4025e000-4025f000 r--p 00001000 00:0c 14107276 /lib/arm-linux-gnueabi/libdl-2.13.so
4025f000-40260000 rw-p 00002000 00:0c 14107276 /lib/arm-linux-gnueabi/libdl-2.13.so
.....
.....
40260000-40261000 ---p 00000000 00:00 0
40261000-40a60000 rw-p 00000000 00:00 0
40a60000-40a61000 ---p 00000000 00:00 0
40a61000-42247000 rw-p 00000000 00:00 0
beed8000-beef9000 rw-p 00000000 00:00 0 [stack]
Here a few of these ranges (40a61000-42247000, 40261000-40a60000) are actually glibc mmap areas. So, from a Linux kernel module, is there any way to identify these areas, something like the code below, which identifies the stack and heap?
if (vma->vm_start <= mm->start_brk &&
    vma->vm_end >= mm->brk) {
        name = "[heap]";
} else if (vma->vm_start <= mm->start_stack &&
           vma->vm_end >= mm->start_stack) {
        name = "[stack]";
}
I believe you should not dump the memory of your application from a kernel module. Consider using application checkpointing instead; see this answer and the Berkeley Lab Checkpoint/Restart (BLCR) library.
You could also consider using the core-dumping facilities inside the kernel.

What do the "---p" permissions in /proc/self/maps mean?

I understand the meaning of the r/w/x/p/s bits. r-xp is for .text; rw-p is for .data/.bss/heap/stack. What is the use of pages that are just ---p?
For example see this output of cat /proc/self/maps
00400000-0040b000 r-xp 00000000 08:03 827490 /bin/cat
0060b000-0060c000 rw-p 0000b000 08:03 827490 /bin/cat
0060c000-0062d000 rw-p 00000000 00:00 0 [heap]
3819a00000-3819a1e000 r-xp 00000000 08:03 532487 /lib64/ld-2.11.2.so
3819c1d000-3819c1e000 r--p 0001d000 08:03 532487 /lib64/ld-2.11.2.so
3819c1e000-3819c1f000 rw-p 0001e000 08:03 532487 /lib64/ld-2.11.2.so
3819c1f000-3819c20000 rw-p 00000000 00:00 0
3819e00000-3819f70000 r-xp 00000000 08:03 532490 /lib64/libc-2.11.2.so
3819f70000-381a16f000 ---p 00170000 08:03 532490 /lib64/libc-2.11.2.so
381a16f000-381a173000 r--p 0016f000 08:03 532490 /lib64/libc-2.11.2.so
381a173000-381a174000 rw-p 00173000 08:03 532490 /lib64/libc-2.11.2.so
381a174000-381a179000 rw-p 00000000 00:00 0
7fb859c49000-7fb85fa7a000 r--p 00000000 08:03 192261 /usr/lib/locale/locale-archive
7fb85fa7a000-7fb85fa7d000 rw-p 00000000 00:00 0
7fb85fa95000-7fb85fa96000 rw-p 00000000 00:00 0
7fff64894000-7fff648a9000 rw-p 00000000 00:00 0 [stack]
7fff649ff000-7fff64a00000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
According to the man page, it means private (copy on write). No idea what the usefulness of such a mapping is without being able to read/write/execute anything in it, though.
Possibly it is private to libc, allowing it to modify the permissions to access it without a user program accidentally mucking it up.
This is something I've wondered about the specifics of too. It didn't appear until sometime in the last few years, but I'm unsure whether GNU binutils or the glibc dynamic linker (ld-linux.so.2) is responsible for the change.
At first I thought it was a sort of guard region created by the dynamic linker to protect against out-of-bounds access to a library's data segment, but it makes no sense for it to be so large. It's possible that it's a complete map of the whole library file, so that the dynamic linker can make it readable again at some point in the future (perhaps during dlopen or dlsym calls) to access ELF metadata that doesn't normally need to be mapped.
In any case, it's nasty bloat, especially on 32-bit machines where virtual address space is a precious resource. It also bloats the kernel page tables, increasing the kernelspace resources used by a process.
P.S. Sorry this isn't really an answer. I know it's just random bits and pieces that might help lead to an answer, but it was way too long for a comment.
Private mapping (MAP_PRIVATE): Modifications to the contents of the mapping are not visible to other processes.
For file mapping they are not carried through to the underlying file. Changes to the contents of the mapping are nevertheless private to each process.
The kernel accomplishes this by using the copy-on-write technique. This means that whenever a process attempts to modify the contents of a page, the kernel first creates a new, separate copy of that page for the process (and adjusts the process’s page tables).
For this reason, a MAP_PRIVATE mapping is sometimes referred to as a private, copy-on-write mapping. (Source: The Linux Programming Interface book)
