When I compile a GTK4 "Hello World" application in Rust, I get a 192 MB binary in debug mode. I use an old SSD and I worry about its wear, since I compile and debug very frequently. I tried the -C prefer-dynamic flag, but the binary size only dropped to 188 MB.
Is there a way to make the application binary much smaller?
PS: I work on Windows 10 and use MSYS2.
PPS: I don't have a problem with the release build's size. With -C link-arg=-s and lto = true it is about 200 KB.
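For reference, the flags above are typically passed like this (a sketch, not necessarily the exact invocation used here):
RUSTFLAGS="-C prefer-dynamic" cargo build              # debug build, dynamically linked std
RUSTFLAGS="-C link-arg=-s" cargo build --release       # release build; lto = true goes in [profile.release] in Cargo.toml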
When I run objcopy --strip-all on any Rust program, it greatly reduces its size. For example, if I compile a plain "Hello World" application with cargo build --release, I end up with a 3 MB executable (on Linux). When I then run objcopy --strip-all on it, I end up with a 330 KB executable. Why does this happen?
I also tested this on Windows with x86_64-pc-windows-gnu as my toolchain, and it also lowered the size of the executable, from 4 MB to 1 MB.
On Windows my toolchain is nightly 2021-07-22; on Linux it is nightly 2021-07-05.
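Roughly the steps I'm running (a sketch; the project name is a placeholder):
cargo new hello && cd hello
cargo build --release
ls -l target/release/hello          # about 3 MB on Linux
objcopy --strip-all target/release/hello
ls -l target/release/hello          # about 330 KB afterwards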
When you generate a binary, the Rust compiler emits a lot of debugging information, as well as other data such as the name of each symbol. Most other compilers do this as well, sometimes behind an option (e.g., -g). Having this data is very helpful for debugging, even if you're compiling in release mode.
What you're doing with --strip-all is removing all of this extra data. In Rust, most of the data you're removing is just debugging information; in a typical Linux distro, this data is stripped out of the binary and stored in separate debug packages so it can be used if needed but isn't downloaded otherwise.
This data isn't strictly needed to run the program, so you may decide to strip it (which is usually done with the strip binary). If size isn't a concern for you, it may be more helpful to keep it so you can debug a problem if one comes up.
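The "keep the debug info separately" workflow mentioned above can also be done by hand with objcopy; a minimal sketch (file names are placeholders):
objcopy --only-keep-debug myprog myprog.debug      # copy the debug data into its own file
objcopy --strip-debug myprog                       # remove it from the shipped binary
objcopy --add-gnu-debuglink=myprog.debug myprog    # let a debugger find the separate file again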
I have a process on a server. The process uses a shared lib and runs in the background on Linux. I use the CPU profiler from gperftools to examine its functions. The steps are as follows:
1. In my app:
#include <gperftools/profiler.h>

int main(void)
{
    ProfilerStart("dump.txt");
    /* ... application code ... */
    ProfilerFlush();
    ProfilerStop();
    return 0;
}
2. CPUPROFILE_FREQUENCY=1000000 LD_LIBRARY_PATH=/usr/local/lib/libprofiler.so CPUPROFILE=dump.txt ./a.out
3. pprof --text a.out dump.txt
I checked my steps on another process (one that does not use a shared lib), and everything works fine.
Problem: The dump.txt file just stays at an unchanged size (8 or 9 KB) and shows no useful output, even after the app has been running for 2 or 3 hours and receiving messages from clients. I suspect something is going wrong because my app uses the shared lib, but I am not at all clear about this.
Can you please explain what is happening? Any solution?
Thanks a lot,
The LD_LIBRARY_PATH=/usr/local/lib/libprofiler.so part of your run is incorrect.
According to the documentation at http://goog-perftools.sourceforge.net/doc/cpu_profiler.html:
To install the CPU profiler into your executable, add -lprofiler to the link-time step for your executable. (It's also probably possible to add in the profiler at run-time using LD_PRELOAD, but this isn't necessarily recommended.)
you can either add libprofiler to the link step of your application with -lprofiler, like this:
gcc -c myapp.c -o myapp.o
gcc myapp.o mystaticlib.a -Lmypath -lmydynamiclib -lprofiler -o myapp
or add it at run time with the environment variable LD_PRELOAD (not LD_LIBRARY_PATH as you did):
LD_PRELOAD=/usr/lib/libprofiler.so ./myapp
When the gperftools CPU profiler is used correctly, it prints information about the event count and the output file size when the application terminates.
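Applied to the run from step 2 of the question, the corrected invocation would look roughly like this (the library path is the one from the question and may differ on your system):
CPUPROFILE_FREQUENCY=1000000 CPUPROFILE=dump.txt LD_PRELOAD=/usr/local/lib/libprofiler.so ./a.out
pprof --text a.out dump.txt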
I'm using the following command on Win7 x64
.\b2 --cxxflags=/MP --build-type=complete
also tried
.\b2 --cxxflags=-MP --build-type=complete
However, cl.exe is still using only one of the 8 cores of my system. Any suggestions?
Make the compilation parallel at the build-tool level, not per translation unit, with
.\b2 -j8
or similar (if you have n cores, -j(n+1) is often used).
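For example, combined with the command from the question (the value just follows the n+1 rule for the 8 cores mentioned):
.\b2 -j9 --build-type=complete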
Turns out Malwarebytes was the culprit. It was slowing down the compilation by scanning newly generated files and memory. I turned it off, and now I sometimes see 50% utilization (4 cores). It's still between 5% and 14% most of the time, though.
We have an application that is about 20 MB in release mode. It is meant to run on MIPS under Linux 2.6.12. The debug build of the same application is about 42 MB, with optimization switched off and the -g flag added. The additional 22 MB comes only from the gdb debug symbols embedded in the application (no logs or print statements were added).
Will the debug build run slower than the release image, and if yes, why?
Also, AFAIK strip debug_image should give me release_image, but in my case I observe the following:
debug_image = 42 MB
stripped debug_image = 24 MB
release_image = 20 MB
Why is there a difference between the stripped debug_image and release_image?
Are there any other side effects of embedding gdb symbols into the application?
Will the debug build run slower than the release image, and if yes, why?
Yes it will, if optimizations are off, which is true in your case.
Why is there a difference between the stripped debug_image and release_image?
Because optimizations are enabled in the release build, the generated code itself is smaller. That is why the release image is smaller even than the stripped debug image.
Are there any other side effects of embedding gdb symbols into the application?
It will take gdb longer to load the symbols, and more memory will be required.
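One way to see where the remaining difference between the stripped debug image and the release image comes from is to compare their section sizes, e.g. (a sketch; the file names are placeholders based on the question):
size stripped_debug_image release_image            # compare .text/.data/.bss sizes
readelf -S stripped_debug_image | grep -i debug    # look for leftover debug-related sections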
I am running an embedded application on an ARM9 board, where the total flash size is only 180 MB. I am able to run gdb, but when I do
(gdb) generate-core-file
I get the following warnings and output:
warning: Memory read failed for corefile section, 1048576 bytes at 0x4156c000.
warning: Memory read failed for corefile section, 1048576 bytes at 0x50c00000.
Saved corefile core.5546
The program is running. Quit anyway (and detach it)? (y or n) [answered Y; input not from terminal]
Tamper Detected
**********OUTSIDE ifelse 0*********
length validation is failed
I also set ulimit -c 50000, but the core dump still exceeds this limit. When I check the file size with ls -l, it is over 300 MB. How should I limit the size of the core dump in this case?
GDB does not respect 'ulimit -c', only the kernel does.
It's not clear whether you run GDB on the target board or on a development host (using gdbserver on the target). You should probably use the latter, which will let you collect a full core dump on the host, roughly as sketched below.
Truncated core dumps are a pain anyway, as often they will not contain exactly the info you need to debug the problem.
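A rough sketch of the gdbserver approach (the port, PID, and cross-gdb name below are assumptions, not taken from the question):
# on the ARM9 target:
gdbserver --attach :2345 5546
# on the development host, using a cross gdb for the board:
arm-linux-gnueabi-gdb ./myapp
(gdb) target remote <board-ip>:2345
(gdb) generate-core-file core.5546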
in your shell rc-file:
limit coredumpsize 50000 # or whatever limit size you like
That should set the limit for everything, including GDB.
Note: if you set it to 0, you can make sure your home directory is not cluttered with core dump files.
When did you run ulimit -c? It must be run before starting the program for which you're generating a core dump, and inside the same session.
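For example (bash syntax; 50000 is just the value from the question):
ulimit -c 50000      # set the limit first, in this shell session
./a.out              # then start the program from the same shell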