I'm trying to use the 64-bit MinGW from http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Automated%20Builds/ but when I compile a program with it, the resulting executable fails to run when a required DLL isn't available.
How do I get this compiler to do static linking with the standard library?
Or is there another distribution of 64-bit MinGW that I should be using instead?
The g++ switch is supposed to be
-static
See
http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html.
-static
On systems that support dynamic linking, this prevents linking with
the shared libraries. On other systems, this option has no effect.
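For example, a minimal sketch of using that switch (hello.cpp is a placeholder source file):
g++ -static hello.cpp -o hello.exe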
If this does not work for you, post the command line you use to compile and link so that you can get more help.
Related
In Windows, the dynamic loader always looks for modules in the directory of the loaded executable first, making it possible to have private libraries without affecting system libraries.
The dynamic loader on Linux only looks for libraries in a fixed set of paths, in the sense that the search is independent of the binary being run. I needed GCC 5 for its overflow-checked arithmetic functions, but since the C++ ABI changed between 4.9 and 5, some applications became unstable, and recompiling them solved the issue. While waiting for my distro [kubuntu] to upgrade the default compiler, is it possible to have newly compiled applications link against the new runtime while packaged applications still link against the old library, either by static linkage or by something that mimics the Windows behavior?
One way of emulating it would be to create a wrapper script:
#!/bin/bash
# run your_file with its own directory as the library search path for this one invocation
LD_LIBRARY_PATH="$(dirname "$(which your_file)")" your_file
and, after the linking step, copy the affected library next to the executable, but it is sort of a hack.
You can use rpath.
Let's say your "new ABI" shared libraries are in /usr/local/newapi-libs.
gcc -L/usr/local/newapi-libs \
    -Wl,-rpath,/usr/local/newapi-libs \
    program.cpp -o program -lsomething
The -rpath option of the linker is the runtime counterpart to -L. When a program compiled this way is run, the dynamic loader will first look in /usr/local/newapi-libs before searching the system library paths.
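To verify that the path was actually embedded, you can inspect the binary's dynamic section, for example:
readelf -d program | grep -iE 'rpath|runpath'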
You can emulate the Windows behavior of looking in the executable's directory by using the special rpath token $ORIGIN; note that -Wl,-rpath,. resolves against the current working directory, not the executable's location.
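A minimal sketch of that, assuming the shared library has been copied into the same directory as the executable (the single quotes keep the shell from expanding $ORIGIN):
gcc -L. -Wl,-rpath,'$ORIGIN' program.cpp -o program -lsomething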
[edit] added missing -L parameter and dashes before rpath.
Context: I'm using a Linux toolchain (g++, other build tools, libraries, headers, etc.) to build my code with statically linked libraries. I want to ensure that I'm using ONLY libraries/headers from my toolchain, not the default ones on the build machine. I can use strace to see what g++ is doing (which libraries it is using) while it is compiling, which would be helpful in a normal scenario, but my build system has many wrappers around g++ that hide all of the output.
Question: is there a way to obtain from a statically-linked binary any useful information regarding the library and header files which were used to create the binary? I've taken a look at the objdump tool but I'm not sure if it will help much.
Just pass -v to g++ or gcc at link time. It will show all the linked libraries. Perhaps try make CC='gcc -v' CXX='g++ -v'
More generally, -v passed to g++ or gcc shows you the underlying commands with their arguments, because gcc or g++ is just a driver program (which starts cc1, ld or collect2, as, ...).
By passing the -H flag to GCC (i.e. g++ or gcc) you can see every included header, so you can check that only the headers you expect are included.
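For example (source.cpp is a placeholder; the header list is printed to stderr):
g++ -H -c source.cpp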
You cannot see which static library has been linked, because linking a static library just means linking the relevant object-file members from it, so a static library can be (and usually is) linked in only partly.
You could use the nm command to find names from such libraries.
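For example, to list the symbols defined by a static library, or to see which symbols ended up in the final binary (libfoo.a and program are placeholders):
nm --defined-only libfoo.a
nm -C program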
If you can simply recompile, then there are ways (using some of the techniques that Basile explained) to get the headers and libraries (static or dynamic) but, unfortunately, there is no way to know which libraries were used after the compilation is complete.
I wrote a very simple ncurses program to be run in a BusyBox environment. However, it seems that I cannot get my program to build with everything statically linked. I used:
g++ menu.cpp -ohello -lncurses --> Works fine
g++ -static menu.cpp -ohello -lncurses --> Undefined reference to SP (many times)
I found this question, but it ignores linking to ncurses. I need a single, self-contained executable. My target environment is fixed, so I am not concerned about portability.
You should paste the exact compiler calls and the exact error messages that you are getting.
Do you have a static version of the ncurses library?
More importantly, do you have a static version of the ncurses library compiled for your target environment? For example, your target environment may be using uClibc instead of glibc, or it could even be a whole different platform (hint: tell us what your target platform is).
Are you certain that you are compiling with the right flags? The compiler flags that you are showing seem more suited to compiling an application for use in the build host environment...
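Purely as an illustration, if your target needs a cross toolchain, the link step would look something like the following sketch; the arm-linux-gnueabi- prefix and the library path are assumptions, so substitute whatever your target actually uses:
arm-linux-gnueabi-g++ -static menu.cpp -o menu -L/path/to/target/ncurses/lib -lncurses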
I have a shared library which is supposed to export only one function which is marked with __attribute__ ((visibility ("default"))). It also links with another static library (fftw), and
#include <fftw3.h>
is preceded with:
#pragma GCC visibility push(hidden)
The linker command used:
g++.exe -fvisibility=hidden -shared -o mylib.dll -Wl,--out-implib,mylib.dll.a -Wl,--no-whole-archive libfftw3.a libfftw3_omp.a -lgomp
Now the resulting library is huge, and if I check the exported functions it includes ALL fftw functions and ALL functions from my files. It looks like MinGW ignores the visibility options. I read that it previously gave a warning about -fvisibility, but now it compiles with no warnings whatsoever.
Do MinGW and GCC 4.6.1 support visibility flags? If yes, how do I get rid of all the unnecessary stuff in my shared library?
MinGW is a Windows port of the GCC toolchain, but Windows DLLs are not Linux shared objects, so the link step in particular is different. To specify visibility with MinGW you have to go the Windows way and annotate your classes and functions with:
__declspec(dllexport) when compiling the library
__declspec(dllimport) when compiling code that links against it
If you want multiplatform support with the GCC toolchain, you can add a header to your project that does this for you. For a step-by-step example and lots of details, have a look at GCC's visibility guide.
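A minimal sketch of such a header, following the pattern from the visibility guide (MYLIB_API and BUILDING_MYLIB are hypothetical names):
#if defined(_WIN32) || defined(__CYGWIN__)
  #ifdef BUILDING_MYLIB            /* defined only when compiling the library itself */
    #define MYLIB_API __declspec(dllexport)
  #else
    #define MYLIB_API __declspec(dllimport)
  #endif
#else
  #define MYLIB_API __attribute__ ((visibility ("default")))
#endif

/* the one function meant to be exported */
MYLIB_API int my_exported_function(int x);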
Windows PE object files do not have visibility attributes. The closest thing is dllexport/dllimport, but that is only for shared libraries (DLLs). So either you don't mark all FFTW functions with __declspec(dllexport) and hope that linking the static library does The Right Thing (tm), or you take care not to link to FFTW when linking with your library.
It should warn about bad visibility attributes; perhaps you need to turn up the warning level (-Wall -Wextra -pedantic).
What is the best way to compile programs with DMD on a 64-bit machine? It doesn't need to compile to 64-bit code. I know about GDC, but I want to work with D2 as well. There is also chroot, but I am hoping for a simpler way.
The actual problem isn't with compiling, but with linking. DMD calls on GCC to perform linking with system libraries. Could I get DMD to have GCC link against the 32-bit libraries? Or how would I do it manually?
I already have the ia32 libraries installed which is why I can run DMD.
Ask GCC to perform a 32-bit link by passing it the '-m32' flag.
It appears that DMD doesn't invoke gcc to perform the link, but rather invokes ld directly. The equivalent ld switch is '-melf_i386', and apparently the way to make DMD pass that option to the linker is with '-L-melf_i386' flag.
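For example (hello.d is a placeholder):
dmd -L-melf_i386 hello.d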
Note that many systems separate runtime and development libraries. 32-bit runtime packages are almost always installed by default, but 32-bit development packages may not be.
You need development 32-bit packages to build 32-bit programs. The fact that 32-bit DMD can run does not in itself prove that you have all the 32-bit libraries you need in order to build 32-bit programs.
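On Debian/Ubuntu-based systems, for example, something like this should pull in the 32-bit development libraries (package names vary by distribution):
sudo apt-get install gcc-multilib g++-multilib libc6-dev-i386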