Run a program under a certain amount of physical memory? - Linux

I want to install Qt on my Dreamhost Linux host. As you know, hosting services limit their users' resources such as CPU and memory. When linking Qt, the ld linker uses more than 400 MB of memory and then gets killed by Dreamhost's process monitor...
I have googled for hours without finding a real answer to my problem. I am looking for a Linux command-line utility that can run a program under a certain amount of physical memory. I mean, I could run it as:
memory-limit -m 200M ld ld-args ...
Then ld would run within 200 MB of physical memory. This does not mean ld couldn't allocate more than 200 MB: when it allocates more, its physical memory footprint would not grow, the excess would go to swap, and the RES part of ld's memory would never exceed 200 MB...
I know the feature I need sounds like a virtual machine, and I am wondering whether KVM can provide it. Is there really such a tool... :) Please help if you know something about this.
Thanks!

Add some swap space; Linux can swap on a file, so if you can create a few gigabytes of swap file, that will get the linking done.
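Roughly like this, assuming you have root on the box, which a shared host such as Dreamhost may not give you (the size and path are only illustrative):
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048   # a 2 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile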
However, you really ought to be able to get a binary package for whatever distribution your Dreamhost machine runs and just install it, rather than trying to compile Qt there.

If this is just about compiling Qt, the easiest solution is to compile it somewhere else (a virtual machine with the same OS and architecture, maybe?) and then just copy the binaries.

Have you tried reducing the dependencies? I assume you do not use the GUI at all for a web application; maybe you only need the QtCore shared library, which should be significantly smaller.
By default, qmake links against QtGui.
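If your project really only needs QtCore, something along these lines should drop the GUI module (a sketch; myproject.pro is a placeholder and the exact syntax depends on your Qt/qmake version):
# in the project's .pro file, drop the GUI module with:  QT -= gui
qmake "QT -= gui" myproject.pro   # or pass the assignment straight to qmake
make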

Not entirely an answer to your question, but you can try running ld with these options set, which may improve its chances of survival:
--no-keep-memory
--reduce-memory-overheads
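Since ld is normally invoked through the compiler driver (as it is for a Qt build), the flags have to be passed via -Wl,; a sketch, assuming the build system honors LDFLAGS (myapp and the object files are placeholders):
g++ -o myapp main.o widgets.o -Wl,--no-keep-memory -Wl,--reduce-memory-overheads
export LDFLAGS="-Wl,--no-keep-memory -Wl,--reduce-memory-overheads"   # or set it once for the whole build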

Related

How to prevent compiler & linker from taking over entire machine?

When I compile a large project, the compiler slows the machine down tremendously, virtually freezing it. If I'm lucky, a keystroke in vim takes a few seconds to register. If I'm not, I may as well go for a walk, since nothing can be done on my workstation at all.
Is there any way to prevent the compiler and linker from consuming the entire machine? More generally, is it possible to limit a family of processes to a portion of computing resources, such as threads, memory, and disk bandwidth?
Something like limiting the resources available to the process tree that originates from the shell that runs the build would be ideal.
Most Linux distros have a package called cpulimit. You can use it to limit the CPU usage of the gcc toolchain binaries.
It's mentioned as an answer to this question:
Limiting certain processes to CPU % - Linux
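A hedged sketch of how cpulimit is typically invoked (the 50% figure, the cc1 process name and the PID are just examples):
cpulimit -e cc1 -l 50    # throttle a process named cc1 to roughly 50% of one CPU
cpulimit -p 1234 -l 25   # or throttle an already-running process by PID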
I'm not an expert on it, but you could try starting the compilation in a dedicated cgroup with limited resources. I don't know exactly how complicated it is to set up, but it shouldn't be too hard.
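A rough sketch of what that could look like with a cgroup-v1 memory controller (the paths, the group name and the 2 GB figure are only illustrative, and it needs root):
sudo mkdir /sys/fs/cgroup/memory/build
echo $((2*1024*1024*1024)) | sudo tee /sys/fs/cgroup/memory/build/memory.limit_in_bytes
echo $$ | sudo tee /sys/fs/cgroup/memory/build/tasks   # move the current shell into the group
make -j4                                               # everything spawned by the build inherits the limit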
You could also try changing the niceness of the process to give it a lower priority, so that it can still use the entire machine when idle but is easily bumped by other processes.
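For example (the -j value is arbitrary, and ionice handles the disk side on most distros):
nice -n 19 ionice -c3 make -j2   # lowest CPU priority plus idle-only disk I/O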

Does executing a binary on a RAMDisk reload the executable into memory?

Let's say I have two copies of the same 10MB binary executable, A and B.
If I have plenty of available memory and run ./A, my understanding is that A will be loaded into memory and run from there. This will take around 10MB of RAM to accomplish.
If I have plenty of available memory, create a RAMDisk, copy B to the RAMDisk, and run ./B from the RAMDisk, my understanding is that B will be (re)loaded into memory and run from there. This will take around 10MB of RAM for the executable, plus the memory in use for the RAMDisk.
Is this correct? Is a RAMDisk smart enough to say "oh, I already have that binary executable in memory, let's just run it in place?" Even if it was, wouldn't the loader have to do its magic to run the thing?
I'm using QNX and running ELF (not COFF) binaries, but I would appreciate answers for any *nix system.
I would really expect it to be loaded; typical ELF binaries are not an "execute in place" format.
There are things that still have to be done, like relocating any position-independent code and, of course, loading dynamic libraries, none of which the file system on the RAM disk knows anything about.
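On an ordinary Linux/glibc system you can see both points with the binutils tools (QNX has its own equivalents; ./A is the binary from the question):
readelf -h ./A   # the ELF header: Type EXEC or DYN, which the loader must map and relocate
readelf -d ./A   # the dynamic section: NEEDED libraries, relocation entries
ldd ./A          # the shared libraries that would be pulled in at load time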

Changing Linux memory protection

Is there a way to check which memory protection mechanism is used by the OS?
I have a program that fails with a segmentation fault on one computer (Ubuntu) but not on another (RH6).
One of the suggested explanations was the memory protection mechanism used by the OS.
Is there a way I can find / change it?
Thanks,
You might want to learn more about virtual memory, system calls, the Linux kernel, and ASLR.
Then you could study the role and usage of the mmap & munmap system calls (also mprotect). They are the syscalls used to obtain memory (e.g. to implement malloc & free), sometimes alongside obsolete syscalls like sbrk (which is increasingly useless).
You should use the gdb debugger (its watch command may be handy) and the valgrind utility. strace could also be useful.
Look also inside the /proc pseudo file system. Try to understand what
cat /proc/self/maps
is telling you (about the process running that cat). Look also inside /proc/$(pidof your-program)/maps
Consider also using the pmap utility.
If it is your own source code, always compile it with all warnings and debugging info enabled, e.g. gcc -Wall -Wextra -g, and improve it until the compiler doesn't give any warnings. Use a recent version of gcc (i.e. 4.7) and of gdb (i.e. 7.4).
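A hedged sketch of that workflow, using a hypothetical program called myprog:
gcc -Wall -Wextra -g -o myprog myprog.c
valgrind ./myprog                # reports the invalid reads/writes behind many segfaults
gdb ./myprog                     # then: run, bt, and watch <variable> where useful
cat /proc/$(pidof myprog)/maps   # while it is running, from another terminal
pmap $(pidof myprog)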

Stripping down a kernel in Linux?

I recently read a post (admittedly it's a few years old) giving advice for a fast number-crunching program:
"Use something like Gentoo Linux with 64 bit processors as you can compile it natively as you install. This will allow you to get the maximum punch out of the machine as you can strip the kernel right down to only what you need."
Can anyone elaborate on what they mean by stripping down the kernel? Also, as this post is about 6 years old, which current version of Linux would be best for this (to aid my Google searches)?
There is some truth in the statement, as well as something somewhat nonsensical.
You do not spend resources on processes you are not running. So, as a first step, I would try to minimise the number of processes running. For that we quite enjoy the Ubuntu Server ISO images at work: if you install from one of those, log in and run ps or pstree, you see a thing of beauty: six or seven processes. Nothing more. That is good.
That the kernel is big (in terms of source size or installation) does not matter per se. Much of that size comes from drivers you may not be using anyway. And the same rule applies again: what you do not run does not compete for resources.
So think of a headless server, stripped down, rather than your average desktop installation with more than a screenful of processes trying to make the desktop user's life easier.
You can create a custom linux kernel for any distribution.
Start by going to kernel.org and downloading the latest source. Then choose your configuration interface (these days you have the choice of plain console text with make config, ncurses-style make menuconfig, KDE-style make xconfig and GNOME-style make gconfig) and run it. After choosing all the options, type make to build the kernel, then make modules to compile all the selected modules for this kernel. make install will copy the files to your /boot directory, and make modules_install copies the modules. Next, go to /boot and use mkinitrd to create the RAM disk needed to boot properly, if needed. Then add the kernel to your GRUB menu.lst by copying the latest entry and adding a similar one pointing to the new kernel version.
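As a rough sketch, the sequence described above looks like this when run inside the unpacked kernel source tree (whether you need mkinitrd, and how GRUB is configured, depends on your distribution):
make menuconfig     # or: make config / make xconfig / make gconfig
make                # build the kernel image
make modules        # build the selected modules
sudo make install   # copy the kernel files into /boot
sudo make modules_install
# create an initrd with mkinitrd if your setup needs one, then add a matching entry to GRUB's menu.lst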
Of course, that's a basic overview and you should probably search for 'linux kernel compile' to find more detailed info. Selecting the necessary kernel modules and options takes a bit of experience - if you choose the wrong options, the kernel might not be bootable and you'll have to start over, which is a pain because selecting the options and compiling the kernel can take 15-30 minutes.
Ultimately, it isn't going to make a large difference to compile a stripped-down custom kernel unless your given task is very, very performance sensitive. It makes sense to remove things you're never going to use from the kernel, though, like say ISDN support.
I'd have to say this question is more suited to SuperUser.com, by the way, as it's not quite about programming.

Why is the startup of an app on Linux slower when using shared libs?

On the embedded device I'm working on, the startup time is an important issue. The whole application consists of several executables that use a set of libraries. Because space in FLASH memory is limited we'd like to use shared libraries.
The application works as usual when compiled and linked with shared libraries, and the amount of FLASH memory used is reduced as expected.
The difference from the version linked against static libs is that the startup time of the application is about 20 s longer, and I have no idea why.
The application runs on an ARM9 CPU at 180 MHz with Linux 2.6.17 OS,
16 MB FLASH (JFFS File System) and 32 MB RAM.
Because shared libraries have to be linked at runtime, usually by dlopen() or something similar. There's no such step for static libraries.
Edit: some more detail. dlopen has to perform the following tasks.
Find the shared library
Load it into memory
Recursively load all dependencies (and their dependencies....)
Resolve all symbols
This requires quite a lot of I/O to accomplish.
In a statically linked program all of the above is done at compile time, not runtime. Therefore it's much faster to load a statically linked program.
In your case, the difference is exaggerated by the relatively slow hardware your code has to run on.
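If the device uses glibc's dynamic linker, its built-in tracing shows where that time goes; this is only a sketch and may not apply to uClibc or other embedded libcs (your-app is a placeholder):
LD_DEBUG=statistics ./your-app   # prints time spent in the dynamic loader, relocations, symbol lookups
LD_DEBUG=libs ./your-app         # logs the search for every shared library, one line per path tried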
This is a fine example of the classic tradeoff of speed and space.
You can statically link all your executables so that they are faster, but then they will take up more space,
OR
you can have shared libraries that take less space but also more time to load.
So decide what you want to sacrifice.
There are many factors behind this difference (OS, compiler, etc.), but a good list of reasons can be found here. Basically, shared libraries were created to save space, and much of the "magic" involved in making them work comes with a performance hit.
(As a historical note, the original Netscape Navigator on Linux/Unix was a statically linked big fat executable.)
This may help others with similar problems:
The reason startup took so long in my case was that GCC's default is to export all symbols of a library.
A big improvement comes from the compiler flag -fvisibility=hidden.
All symbols that the lib has to export then have to be marked with
__attribute__ ((visibility("default")))
See the GCC wiki
and the very fine article "How to Write Shared Libraries".
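A minimal sketch of the effect, assuming a hypothetical library source foo.c; nm lets you verify what is still exported:
gcc -fPIC -fvisibility=hidden -shared -o libfoo.so foo.c
nm -D --defined-only libfoo.so   # only symbols marked visibility("default") should remain exported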
OK, I have now learned that the use of shared libraries has its disadvantages concerning speed. I found this article about dynamic linking and loading enlightening. The loading process seems to be much lengthier than I expected.
Interesting... typically the loading time of a shared library is not noticeably different from that of a statically linked fat app. So I can only surmise that the system is either very slow to load a library from flash memory, or the loaded library is being checked in some way (e.g. .NET apps run a checksum on all loaded DLLs, which adds considerably to startup time in some cases). It could be that the shared libraries are being loaded as needed and unloaded afterwards, which could indicate a configuration problem.
So, sorry I can't say why, but I think it's an issue with your ARM device/OS. Have you tried instrumenting the startup code, or statically linking with one of the most commonly used libraries, to see if that makes a large difference? Also, put the shared libs in the same directory as the app to reduce the time it takes to search the file system for them.
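Assuming a glibc-style dynamic linker on the device, two ways to make the loader look right next to the executable instead of searching the flash file system (app and libfoo are placeholder names):
gcc -o app app.o -L. -lfoo -Wl,-rpath,'$ORIGIN'   # bake "the binary's own directory" into the search path
LD_LIBRARY_PATH=/opt/app ./app                    # or point the runtime search path there without relinking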
One option which seems obvious to me, is to statically link the several programs all into a single binary. That way you continue to share as much code as possible (probably more than before), but you will also avoid the overhead of the dynamic linker AND save the space of having the dynamic linker on the system at all.
It's pretty easy to combine several executables into one; you normally just examine argv and decide which routine to call based on that.
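One common variant of that trick (all names here are hypothetical) is the busybox-style approach: build a single binary whose main() dispatches on argv[0], then install one symlink per tool:
gcc -o multitool prog1.o prog2.o dispatch.o   # dispatch.o picks a code path from argv[0]
ln -s multitool prog1
ln -s multitool prog2
./prog1                                       # runs multitool, which sees argv[0] == "prog1"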
