Is there a way to check which memory protection mechanism is used by the OS?
I have a program that fails with a segmentation fault on one computer (Ubuntu) but not on another (RHEL 6).
One of the suggested explanations was the memory protection mechanism used by the OS.
Is there a way I can find or change it?
Thanks,
You might want to learn more about virtual memory, system calls, the Linux kernel, and ASLR.
Then you could study the role and usage of the mmap & munmap system calls (and also mprotect). They are the syscalls used to obtain memory (e.g. to implement malloc & free), sometimes alongside obsolete syscalls like sbrk (which is increasingly useless).
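As an illustration, here is a minimal C sketch of mmap and mprotect (not specific to your crash): it maps one anonymous page read-write, then revokes write permission, after which any store to that page raises SIGSEGV.

#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    p[0] = 42;                            /* fine: the page is writable */
    if (mprotect(p, pagesz, PROT_READ))   /* drop write permission */
        perror("mprotect");
    /* p[0] = 43;  -- would now raise SIGSEGV */
    printf("%d\n", p[0]);
    munmap(p, pagesz);
    return 0;
}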
You should use the gdb debugger (its watch command may be handy) and the valgrind utility. strace could also be useful.
Look also inside the /proc pseudo file system. Try to understand what
cat /proc/self/maps
is telling you (about the process running that cat). Look also inside /proc/$(pidof your-program)/maps.
Consider also using the pmap utility.
If it is your own source code, always compile it with all warnings and debugging info, e.g. gcc -Wall -Wextra -g, and improve it until the compiler does not give any warnings. Use a recent version of gcc (e.g. 4.7) and of gdb (e.g. 7.4).
I'm seeking a possible solution to achieve Kernel Mode Linux without modifying glibc.
The project is called "Kernel Mode Linux on aarch64"; it makes specified processes (e.g. programs under /trusted/) execute in kernel mode, not all processes. This speeds up invoking system calls. The background research is Toshiyuki Maeda's website and sonicyang/KML.
If a user program executes in kernel mode, it can call syscall functions directly (monolithic kernel). However, the syscall path is hard-coded in arm64 glibc: a syscall eventually executes "svc 0", which causes an "Instruction Abort" exception (see # define INTERNAL_SYSCALL_RAW(name, nr, args...) \ in sysdeps/unix/sysv/linux/aarch64/sysdep.h). Of course, there is the vDSO (vsyscall) route, but the current implementation does not give most syscall functions the option to go the vsyscall way.
In this situation, I have two modification plans, but both are missing a critical step.
1. Modify INTERNAL_SYSCALL_RAW to multiplex between syscall and dl-call (or vsyscall) in glibc. How can I determine whether the process is in kernel mode or user mode without heavy overhead? (mrs x0, CurrentEL isn't allowed at EL0.)
2. Replace svc 0 with bl dl-call when the binfmt_elf loader loads the program. We set the process to kernel mode, no problem; but as we know, libc.so is a dynamically linked library: it is kept as one piece in the VMA, and other normal user programs use it too. How can I deal with this situation? Compiling statically would work, but the size is really not acceptable.
Given my limited understanding, please drop me any practical idea.
After some research: option 1 can work well, as long as a customized glibc is compiled. A program that runs in kernel mode must link against the customized glibc; this does not affect the system's glibc.
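For reference, a minimal C sketch of that multiplexed path, under stated assumptions: __kml_mode and __kml_direct_call are hypothetical names, not real glibc symbols, and the real change would live inside INTERNAL_SYSCALL_RAW.

/* Hypothetical multiplexed syscall path for a customized aarch64 glibc. */
extern long __kml_direct_call(long nr, long a0, long a1, long a2,
                              long a3, long a4, long a5); /* assumed entry point */
extern int __kml_mode; /* assumed: set once by the loader for kernel-mode tasks */

static inline long kml_syscall6(long nr, long a0, long a1, long a2,
                                long a3, long a4, long a5)
{
    if (__kml_mode)  /* already in EL1: branch to the kernel entry directly */
        return __kml_direct_call(nr, a0, a1, a2, a3, a4, a5);
    /* normal EL0 path: trap with svc 0, as glibc does today */
    register long x8 asm("x8") = nr;
    register long x0 asm("x0") = a0;
    register long x1 asm("x1") = a1;
    register long x2 asm("x2") = a2;
    register long x3 asm("x3") = a3;
    register long x4 asm("x4") = a4;
    register long x5 asm("x5") = a5;
    asm volatile("svc 0"
                 : "+r"(x0)
                 : "r"(x8), "r"(x1), "r"(x2), "r"(x3), "r"(x4), "r"(x5)
                 : "memory");
    return x0;
}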
I want to know whether an instruction comes from the application itself or from library code.
I observed that some application code/data is located at around 0x000055xxxx, while libraries and mmapped regions are by default located around 0x00007fcxxxx. Can I use, for example, 0x00007f00...00 as a boundary to tell whether an instruction is from the application itself or from a library?
How can I configure this boundary in Linux kernel?
Update:
Can I prevent (or detect) a syscall instruction being issued from application code (i.e. only allow syscalls to go through libc)? Maybe we could do a binary scan, but due to the variable length of instructions, it's hard to prevent unintended syscall instructions.
Do it the other way. You need to learn a lot.
First, read a lot more about operating systems. So read the Operating Systems: Three Easy Pieces textbook.
Then, learn more about ASLR.
Read also Drepper's How to Write Shared Libraries paper and Levine's Linkers and Loaders book.
You want to use pmap(1) and proc(5).
You probably want to parse the /proc/self/maps pseudo-file from inside your program, or use dladdr(3).
To get some insight, run cat /proc/$$/maps and cat /proc/self/maps in a Linux terminal.
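For instance, a minimal C sketch using dladdr(3) (where_is is just an illustrative name; link with -ldl on older glibc):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

/* Print which loaded object (main executable or library) contains addr. */
static void where_is(void *addr)
{
    Dl_info info;
    if (dladdr(addr, &info) && info.dli_fname)
        printf("%p lives in %s\n", addr, info.dli_fname);
    else
        printf("%p: no containing object found\n", addr);
}

int main(void)
{
    where_is((void *)&main);    /* an address in the main executable */
    where_is((void *)&printf);  /* an address inside libc */
    return 0;
}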
I wanted to know whether an instruction is from userspace or from library code.
You are confused: both library code and main executable code are userspace.
On Linux x86_64, you can distinguish kernel addresses from userspace addresses, because kernel addresses are in the FFFF8000'00000000 through FFFFFFFF'FFFFFFFF range on current (48-bit) implementations. See the canonical form address description here.
I observed some application code/data are located at about 0x000055xxxx while libraries and mmaped regions are by default located at 0x00007fcxxxx. Can I use for example, 0x00007f00...00 as a boundary to tell instruction is from the application itself or from the library?
No, in general you can't. An application can be linked to load anywhere within the canonical address space (though most applications aren't).
As Basile Starynkevitch already answered, you'll need to parse /proc/$pid/maps, or know what address the executable is linked to load at (for a non-PIE binary).
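A rough C sketch of the /proc/$pid/maps approach, here for the current process (find_mapping is an illustrative name; the parsing is simplified, and real maps lines may have no pathname):

#include <stdio.h>

/* Find which mapping of the current process contains addr. */
static void find_mapping(unsigned long addr)
{
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("fopen"); return; }
    char line[512];
    while (fgets(line, sizeof line, f)) {
        unsigned long start, end;
        char path[256] = "";
        /* fields: start-end perms offset dev inode pathname */
        if (sscanf(line, "%lx-%lx %*s %*s %*s %*s %255s",
                   &start, &end, path) >= 2
            && addr >= start && addr < end) {
            printf("%#lx -> %s\n", addr,
                   path[0] ? path : "[anonymous]");
            break;
        }
    }
    fclose(f);
}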
I want to analyse the content of each memory block produced by a particular process. What I did was use "gcore pid" to get a core dump of the process, but I do not know how to retrieve the content from it. Can anyone help?
In general, the good tool to analyze a core dump is the gdb debugger.
So you should compile all your code with the -g flag passed to gcc, g++, or clang (to get DWARF debug information inside your ELF executable).
Then, you can analyze the (post-mortem or not) core dump of your program myprog with the command gdb myprog core. Learn how to use gdb. Notice that gdb is scriptable and extensible (in Python and Guile).
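For example, a minimal post-mortem session could look like this (all standard gdb commands; myprog and core as above):

gdb myprog core
(gdb) bt              # backtrace at the point of death
(gdb) frame 2         # select a particular stack frame
(gdb) info locals     # inspect that frame's local variables
(gdb) x/8gx $rsp      # dump raw memory near the stack pointer (x86_64)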
You could (but probably should not) analyze the core file otherwise (without gdb). Then you need to understand its detailed format (and that could require months of work). See elf(5) and core(5).
BTW, valgrind could also be useful.
You could even use gdb to analyze a core dump from a program compiled without -g but that is much less useful.
I have a process waiting on a futex:
# strace -p 5538
Process 5538 attached - interrupt to quit
futex(0x7f86c9ed6a0c, FUTEX_WAIT, 20, NULL
How can I best debug such a situation? Can I identify who holds the futex? Are there any tools similar to ipcs and ipcrm but for futexes?
Try using gdb -p PID and then run where or bt to see a backtrace.
It won't be spectacularly useful with binaries and libraries that have had their debugging symbols stripped, but you may be able to deduce a fair bit from the context. It might indicate which part of a complex process is hanging, and then you could examine the right part of the sources to look for the lock.
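Note that the third argument to FUTEX_WAIT (20 in the trace above) is the value the thread expected to find at the futex word. Once attached with gdb you can also inspect the word itself and every thread's stack:

(gdb) thread apply all bt
(gdb) print *(int *)0x7f86c9ed6a0c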
I have the same problem with a piece of C++ code, running Ubuntu 12.10 64-bit. It looks similar to a problem from 2007, where libc was buggy (and maybe still is?).
I start a pthread which runs a traceroute via a system() call. printf statements before and after the system() call indicate that the operating system hangs on the system() call, WITHOUT executing the traceroute.
I am not sure if my Linux is broken once again because of the Ubuntu update, or if it's a libc-related bug. Since a lot of applications seem to have "similar" problems, I assume it's stuck somewhere in userspace.
My C++ code runs perfectly on 32-bit systems and even on 64-bit OS X, so I assume the Ubuntu 12.10 + 64-bit libc combination is broken.
I want to install Qt on my Dreamhost Linux host. As you know, any hosting service limits its users' resources such as CPU and memory. When linking Qt, the ld linker needs more than 400M of memory, and then it gets killed by Dreamhost's process monitor...
I have googled for hours without finding any real answer to my problem. I am searching for a Linux command-line utility which can run a program under a certain amount of physical memory. I mean, I could run it as:
memory-limit -m 200M ld ld-args ...
Then ld would run under 200M of physical memory, but that does not mean ld couldn't allocate more than 200M. When ld allocates more than 200M, physical memory use would not increase; it would spill to swap instead, and the RES part of ld's memory would not exceed 200M...
I know the feature I need sounds like a virtual machine; I am wondering whether KVM can provide such a feature, and whether there really is such a tool... :) Please help if you know something about this.
Thanks!
Add some swap space; Linux can swap on a file, so if you can create a few gigabytes of swap file, that will get the linking done.
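For example (run as root; the 2 GB size and the /swapfile path are just placeholders):

dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile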
However, you really ought to be able to get a binary package built for Dreamhost's distribution and just install it, rather than trying to compile Qt there.
If this is just about compiling Qt, the easiest solution is to compile it somewhere else (a virtual machine with the same OS and architecture, maybe?) and then just copy the binaries over.
Have you tried reducing dependencies? I assume you do not use the GUI at all for a web application; maybe you only need the QtCore shared library, which should be significantly smaller.
By default, qmake links against QtGui.
Not entirely an answer to your question, but you can try running ld with these options set, which may improve its chances of survival:
--no-keep-memory
--reduce-memory-overheads
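If ld is invoked indirectly through gcc or g++, the standard -Wl, syntax passes these options through (your-objects.o stands for whatever you are linking):

g++ your-objects.o -Wl,--no-keep-memory -Wl,--reduce-memory-overheads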