I'm running into some segfaults that seem to be resolved on Linux platforms by symbol versioning inside the ELF dynamic libraries. But I'm still getting segfaults on macOS. Is there a comparable feature in Mach-O shared libraries? And if so, how can I see the version information in the file?
For example, I know that on Linux I can do readelf -s libsomething.so, and it will output the version info along with the symbols. But readelf chokes on .dylib files.
Not really, but you can kinda roll your own.
To serve as an example, Apple has actually applied a kind of symbol versioning to the stat() family of functions. Depending on macro definitions, the declaration of stat will expand to either this:
int stat(const char *, struct stat *) __asm("_stat");
Or this:
int stat(const char *, struct stat *) __asm("_stat$INODE64");
And this is what the library exports look like:
% nm -arch x86_64 /usr/lib/system/libsystem_kernel.dylib | fgrep ' _stat'
0000000000009650 T _stat
0000000000001fd8 T _stat$INODE64
0000000000001fd8 T _stat64
00000000000250dc T _statfs
0000000000002e4c T _statfs$INODE64
0000000000002e4c T _statfs64
(Note that you have to use the x86_64 version there, as the arm64 version never had the old implementation to begin with, and so _stat there is already the new version.)
But if you're just a user of a library, then unless you have broken headers with mismatched struct and function definitions, this is not going to solve your segfault issue.
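If you're the author of the library, though, you can mimic what Apple did and roll your own versioning with __asm renaming. Here's a minimal sketch, assuming a hypothetical library whose struct layout changed; widget_size, the $V2 suffix, and MYLIB_NEW_LAYOUT are made-up names, and only the __asm mechanism itself is the real feature:

/* widget.h -- roll-your-own symbol versioning via __asm renaming.
 * Clients compiled with the new layout bind to _widget_size$V2;
 * binaries built against the old header keep resolving _widget_size. */
struct widget;

#ifdef MYLIB_NEW_LAYOUT
int widget_size(struct widget *w) __asm("_widget_size$V2");
#else
int widget_size(struct widget *w) __asm("_widget_size");
#endif

The library then defines both implementations, each pinned to its export name the same way, so nm shows two symbols side by side just like _stat and _stat$INODE64 above.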
Related
I am porting a Windows VC++ application to Linux that links to an assembler module currently produced by MASM. After changing its Windows ABI assumptions to Linux ABI, I would like to again assemble the module with MASM to OMF (in Windows), then directly input that object file into the GCC build (in Linux). This would greatly simplify maintenance over time and guarantee an identical assembly under both operating systems. The alternative is porting the assembler code to YASM/NASM, with all its complications. The assembler code is entirely leaf routines (no calls), with no macros, no Unicode data, and scant integer/real data; it includes 32-bit and 64-bit assembler versions. Barring endian issues, does it really matter whose toolchain generates the OMF representation for this module?
I tested it out using a simple test case and it worked fine when linked using the GNU linker under Linux. So you probably don't need to do anything special.
Here's the assembly file I tested it with:
_TEXT SEGMENT USE32
PUBLIC foo
foo:
mov eax, 1234
ret
_TEXT ENDS
END
And here's the C program:
#include <stdio.h>
extern int foo();
int
main() {
printf("%d\n", foo());
return 0;
}
I assembled the first file on Windows using MASM, copied the resulting .OBJ file to a Linux machine (Debian x86_64) and compiled/linked it with the following command:
gcc -m32 main.c foo.obj
Running the generated executable (a.out) produced the expected output: 1234. I also tested the equivalent 64-bit case and it worked as well.
Unless you're dependent on PECOFF-specific section (segment) ordering or other PECOFF-specific features, it looks like you shouldn't have any problems, at least as far as the object file format goes. Note that it's possible the version of the GNU linker installed on your Linux machine wasn't built with support for PECOFF; in that case you may need to build your own version from source.
In Windows, the dynamic loader always looks for modules in the path of the loaded executable first, making it possible to have private libraries without affecting system libraries.
The dynamic loader on Linux only looks for libraries in a fixed set of paths, in the sense that the search is independent of the binary being loaded. I needed GCC 5 for its overflow-checked arithmetic functions, but since the C++ ABI changed between 4.9 and 5, some applications became unstable, and recompiling them solved the issue. While waiting for my distro (Kubuntu) to upgrade the default compiler, is it possible to have newly compiled applications link against the new runtime while packaged applications still link against the old library, either by static linkage or by something that mimics the Windows behavior?
One way of emulating it would be to create a wrapper script:
#!/bin/bash
LD_LIBRARY_PATH="$(dirname "$(which your_file)")" your_file
and, after the linking step, copy the affected library next to the executable. But it is sort of a hack.
You can use rpath.
Let's say your "new ABI" shared libraries are in /usr/local/newapi-libs.
gcc -L/usr/local/newapi-libs \
    -Wl,-rpath,/usr/local/newapi-libs \
    program.cpp -o program -lsomething
The -rpath option of the linker is the runtime counterpart to -L. When a program compiled this way is run, the dynamic linker will first look in /usr/local/newapi-libs before searching the system library paths.
You can emulate the Windows behavior of looking in the executable's directory by specifying -Wl,-rpath,'$ORIGIN'. The literal string $ORIGIN is expanded by the dynamic linker at run time to the directory containing the executable; note that a plain -Wl,-rpath,. would be resolved relative to the current working directory instead.
[edit] added missing -L parameter and dashes before rpath.
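If you want to confirm which copy of the library the dynamic linker actually picked at run time, ldd program will tell you from the shell; programmatically, here's a minimal glibc-specific sketch using dl_iterate_phdr:

/* which_libs.c -- print the path of every shared object mapped into
 * the process, to verify that the -rpath directory won over the
 * system paths. Build: gcc which_libs.c -o which_libs */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int print_object(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size; (void)data;
    /* The main executable appears with an empty name; skip it. */
    if (info->dlpi_name[0] != '\0')
        printf("%s\n", info->dlpi_name);
    return 0; /* 0 = keep iterating */
}

int main(void)
{
    dl_iterate_phdr(print_object, NULL);
    return 0;
}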
Ok, so I want to link against a lower version of libc / glibc, for compatibility. I noticed this answer about how to do this, on a case-by-case basis:
How can I link to a specific glibc version?
https://stackoverflow.com/a/2858996/920545
However, when I tried to apply this myself, I ran into problems, because I can't figure out what lower-version-number I should use to link against. Using the example in the answer, if I use "nm" to inspect the symbols provided by my /lib/libc.so.6 (which, in my case, is a link to libc-2.17.so), I see that it seems to provide versions 2.0 and 2.3 of realpath:
> nm /lib/libc.so.6 | grep realpath@
4878d610 T realpath@@GLIBC_2.3
48885c20 T realpath@GLIBC_2.0
However, if I try to link against realpath@GLIBC_2.0:
__asm__(".symver realpath,realpath@GLIBC_2.0");
...i get an error:
> gcc -o test_glibc test_glibc.c
/tmp/ccMfnLmS.o: In function `main':
test_glibc.c:(.text+0x25): undefined reference to `realpath@GLIBC_2.0'
collect2: error: ld returned 1 exit status
However, using realpath@GLIBC_2.3 works... and the code from the example, realpath@GLIBC_2.2.5, works - even though, according to nm, no such symbol exists. (FYI, if I compile without any __asm__ instruction and then inspect with nm, I see that it linked against realpath@GLIBC_2.3, which makes sense; and I confirmed that linking to realpath@GLIBC_2.2.5 works.)
So, my question is, how the heck do I know which versions of the various functions I can link against? Or even which versions are available? Are there some other flags I should be feeding to nm? Am I inspecting the wrong library?
Thanks!
It seems to me that you have mixed up your libraries and binaries a bit...
/lib/libc.so.6 on most Linux distributions is a 32-bit shared object and should contain the *@GLIBC_2.0 symbols. If you are on an x86_64 platform, though, I would expect GCC to produce a 64-bit binary by default. 64-bit binaries are generally linked against /lib64/libc.so.6, which would not contain compatibility symbols for an old glibc version like 2.0 - the x86_64 architecture did not even exist back then...
Try compiling your *@GLIBC_2.0 program with the -m32 GCC flag to force linking against the 32-bit C library.
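For reference, here's a minimal sketch of the whole experiment, assuming your 32-bit libc is the one carrying the GLIBC_2.0 compatibility symbol (build with gcc -m32 -o test_glibc test_glibc.c):

/* test_glibc.c -- force references to realpath to bind to the old
 * GLIBC_2.0 version instead of the default GLIBC_2.3. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

__asm__(".symver realpath,realpath@GLIBC_2.0");

int main(void)
{
    char buf[PATH_MAX];
    if (realpath(".", buf) != NULL)
        printf("%s\n", buf);
    return 0;
}

Afterwards, objdump -T test_glibc should show the undefined realpath reference tagged with GLIBC_2.0 instead of GLIBC_2.3.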
If I compile this simple program fn main() { println!("Hello"); } with rustc test.rs -o test then I can run it with ./test, but double clicking it in the file manager gives this error: Could not display "test". There is no application installed for "shared library" file. Running file test seems to agree: test: ELF 64-bit LSB shared object....
How can I get rustc, and also tools that use it such as cargo, to produce executables rather than shared objects?
I am using 64-bit Linux (Ubuntu 14.10).
EDIT: I have posted on the Rust forum about this.
EDIT 2: So it turns out this is an issue with the file executable.
I don't have the Rust compiler and can't find its docs on the internet, but I know how to do shared object vs. executable in C, so maybe this info will help you out in solving it.
The difference is the -pie option to the linker. With a hello world C program:
$ gcc test.c -ohello -fPIC -pie
$ file hello
hello: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
If we remove the position-independent flags, we get an executable:
$ gcc test.c -ohello
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
Both generated files work the same way from the command line, but I suspect the difference that file sees is what changes how your GUI treats it. (Again, I'm not on Ubuntu... I use Slackware without a GUI file manager, so I can't confirm myself, but I hope my guesses will help you finish solving the problem yourself.)
So, what I'd try next if I was on your computer would be to check the rustc man page or rustc --help and see if there's an option to turn off that -pie option to the linker. It stands for "position independent executable", so look for those words in the help file too.
If it isn't mentioned, try rustc -v test.rs -o test - or whatever the verbose flag is in the help file. That should print out the command it uses to link at the end. It'll probably be a call to gcc or ld. You can use that to link it yourself (there's probably a flag -c or something that you can pass to rustc to tell it to compile only, do not link, which will leave just the .o file it generates).
Once you have that, just copy/paste the final link command rustc called and remove the -pie option yourself.... if it is there... and see what happens.
Manually copy/pasting isn't fun to do and won't work with tools, but if you can get it to work at least once, you can confirm my hunch and maybe ask a differently worded question to get more rust users' attention.
You might also be able to tell the file manager how to open the shared object files and just use them. If the manager treated them the same as programs file identifies as executables, as the command line does, everything should work without changing the build process. I don't know how to do that though, but maybe asking on the ubuntu stack exchange will find someone who does.
What you have described is not entirely consistent with what I observe:
$ rustc - -o test <<< 'fn main() { println!("Hello"); }'
$ ./test
Hello
$ file test
test: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=99e97fcdc41eb2d36c950b8b6794816b05cf854c, not stripped
Unless you tell rustc to produce a library with e.g. #![crate_type = "lib"] or --crate-type lib, it will produce an executable.
It sounds like your file manager may be being too smart for its own good. It should just be trusting the executable bit and executing it.
I have a piece of C/C++ code that uses the __thread keyword for thread-local storage, but I'm having trouble getting it compiled on 64-bit Solaris SPARC with g++ (version 4.0.2), while it compiles and runs OK on Linux with the g++34 compiler. Here is an example of the source code:
__thread int count = 0;
'g++ -dumpversion' returns '4.0.2', 'g++ -dumpmachine' shows 'sparc-sun-solaris2.8', and 'uname -a' displays 'SunOS devsol1 5.9 Generic_118558-26 sun4u sparc SUNW,UltraAX-i2'.
The error message when running make with g++ is "error: thread-local storage not supported for this target", and the compiler options I am using are:
-m64 -g -fexceptions -fPIC -I../fincad -I/usr/java_1.6.0_12/include -I/usr/java_1.6.0_12/include/solaris -I/opt/csw/gcc4/lib/sparcv9 -I/opt/csw/gcc4/lib/gcc/sparc-sun-solaris2.8/4.0.2/sparcv9 -I. -I/usr/include -I/usr/include/iso -I/usr/local/include
Any help is very much appreciated, as I have been struggling with this over the weekend and am facing a deadline.
Thanks,
Charles
You can ignore the GCC-specific thread-local storage and use POSIX thread-specific storage instead. It should work, and it's not GNU-specific. There's an example on the Sun site.
Here's a condensed example from IBM; obviously you'd want to use more than one thread.
#include <pthread.h>
#include <stdlib.h>

pthread_key_t tlsKey = 0;

/* Runs at thread exit for any thread whose value is still non-NULL */
void globalDestructor(void *data)
{
    free(data);
}

int main(int argc, char **argv)
{
    int rc = pthread_key_create(&tlsKey, globalDestructor);
    /* The key can now be used from all threads */

    /* Each thread can now use the key: */
    char *myThreadDataStructure = malloc(15); /* your data structure */
    void *global;
    pthread_setspecific(tlsKey, myThreadDataStructure);
    /* Get the data back */
    global = pthread_getspecific(tlsKey);
    pthread_setspecific(tlsKey, NULL); /* so the destructor won't free it twice */
    free(myThreadDataStructure);
    rc = pthread_key_delete(tlsKey);
    return rc;
}
You can try adding the -pthread command-line option to g++: in GCC parlance, this option means "do everything required for POSIX threading support". This might unlock support for __thread.
Thread-local storage with __thread requires specific system support, in the compiler but also in the linkers (both the static linker, invoked at the end of the compilation, and the dynamic linker, when the program is executed). I do not know whether your specific combination (a rather old g++ with a rather old Solaris) is supported (some googling shows that some people can use it with an older gcc [3.4.3] and a newer Solaris [10]). If it is not supported, you can use the POSIX / Single Unix functions pthread_key_create(), pthread_setspecific() and pthread_getspecific(). They are somewhat slower than, and not nearly as convenient as, the __thread qualifier, but at least they work.
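As a sketch of what that POSIX fallback looks like for your __thread int count = 0; line (get_count and the key names are mine; only the pthread_* calls are the real API):

#include <pthread.h>
#include <stdlib.h>

static pthread_key_t count_key;
static pthread_once_t count_once = PTHREAD_ONCE_INIT;

static void make_count_key(void)
{
    /* free() as destructor releases each thread's slot at thread exit */
    pthread_key_create(&count_key, free);
}

/* Returns a pointer to this thread's private counter, created lazily
 * and zero-initialized like the original __thread int count = 0; */
static int *get_count(void)
{
    int *p;
    pthread_once(&count_once, make_count_key);
    p = pthread_getspecific(count_key);
    if (p == NULL) {
        p = calloc(1, sizeof *p);
        pthread_setspecific(count_key, p);
    }
    return p;
}

Every use of count in the original code then becomes *get_count().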
You could implement this in a portable way using thread_specific_ptr from Boost.Thread.
If nothing else, you should be able to work out how to do this on Solaris using that as a reference.