Error while executing the fortran code through gfortran [duplicate] - linux

I am trying to build a Fortran program, but I get errors about an undefined reference or an unresolved external symbol. I've seen another question about these errors, but the answers there are mostly specific to C++.
What are common causes of these errors when writing in Fortran, and how do I fix/prevent them?
This is a canonical question for a whole class of errors when building Fortran programs. If you've been referred here or had your question closed as a duplicate of this one, you may need to read one or more of several answers. Start with this answer which acts as a table of contents for solutions provided.

A link-time error like these messages can arise for many of the same reasons as in more general uses of the linker, not only those specific to having compiled a Fortran program. Some of these are covered in the linked question about C++ linking and in another answer here: failing to specify the library, or providing the libraries in the wrong order.
However, there are common mistakes in writing a Fortran program that can lead to link errors.
Unsupported intrinsics
If a subroutine reference is intended to refer to an intrinsic subroutine, but that intrinsic isn't offered by the compiler, then the reference is taken to be to an external subroutine and we get a link-time error.
implicit none
call unsupported_intrinsic
end
With unsupported_intrinsic not provided by the compiler we may see a linking error message like
undefined reference to `unsupported_intrinsic_'
If we are using a non-standard, or not commonly implemented, intrinsic we can help our compiler report this in a couple of ways:
implicit none
intrinsic :: my_intrinsic
call my_intrinsic
end program
If my_intrinsic isn't a supported intrinsic, then the compiler will complain with a helpful message:
Error: ‘my_intrinsic’ declared INTRINSIC at (1) does not exist
We don't have this problem with intrinsic functions, because with implicit none the compiler reports the unknown function at compile time:
implicit none
print *, my_intrinsic()
end
Error: Function ‘my_intrinsic’ at (1) has no IMPLICIT type
With some compilers we can use the Fortran 2018 form of the implicit statement to do the same for subroutines:
implicit none (external)
call my_intrinsic
end
Error: Procedure ‘my_intrinsic’ called at (1) is not explicitly declared
Note that it may be necessary to specify a compiler option when compiling to request the compiler support non-standard intrinsics (such as gfortran's -fdec-math). Equally, if you are requesting conformance to a particular language revision but using an intrinsic introduced in a later revision it may be necessary to change the conformance request. For example, compiling
intrinsic move_alloc
end
with gfortran and -std=f95:
intrinsic move_alloc
1
Error: The intrinsic ‘move_alloc’ declared INTRINSIC at (1) is not available in the current standard settings but new in Fortran 2003. Use an appropriate ‘-std=*’ option or enable ‘-fall-intrinsics’ in order to use it.
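Similarly, for the first case, here is a minimal sketch using one of gfortran's DEC extension intrinsics (sind, a degree-argument sine, chosen purely for illustration; depending on the gfortran version, accepting it may require -fdec-math):
implicit none
intrinsic :: sind
print *, sind(30.0)
end
Without the option (on versions where it is needed) the intrinsic statement produces the "does not exist" error shown earlier; compiled with gfortran -fdec-math it prints 0.5.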
External procedure instead of module procedure
Just as we can try to use a module procedure in a program, but forget to give the object defining it to the linker, we can accidentally tell the compiler to use an external procedure (with a different link symbol name) instead of the module procedure:
module mod
implicit none
contains
integer function sub()
sub = 1
end function
end module
use mod, only :
implicit none
integer :: sub
print *, sub()
end
Or we could forget to use the module at all. Equally, we often see this when mistakenly referring to external procedures instead of sibling module procedures.
Using implicit none (external) can help us when we forget to use a module but this won't capture the case here where we explicitly declare the function to be an external one. We have to be careful, but if we see a link error like
undefined reference to `sub_'
then we should suspect we've referred to an external procedure sub instead of a module procedure: the absence of any name mangling for "module namespaces" is a strong hint about where we should be looking.
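For contrast, a minimal sketch of a corrected main program: importing the module function means the reference resolves to the module procedure's mangled symbol rather than to an external sub.
use mod, only : sub
implicit none
print *, sub()
end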
Mis-specified binding label
If we are interoperating with C then we can specify the link names of symbols incorrectly quite easily. Getting them wrong is so easy when not using the standard interoperability facility that I won't cover that case here. If you see link errors relating to what should be C functions, check the names carefully.
If using the standard facility there are still ways to trip up. Case sensitivity is one way: link symbol names are case sensitive, but your Fortran compiler has to be told the case if it's not all lower:
interface
function F() bind(c)
use, intrinsic :: iso_c_binding, only : c_int
integer(c_int) :: f
end function f
end interface
print *, F()
end
tells the Fortran compiler to ask the linker about a symbol f, even though we've called it F here. If the symbol really is called F, we need to say that explicitly:
interface
function F() bind(c, name='F')
use, intrinsic :: iso_c_binding, only : c_int
integer(c_int) :: f
end function f
end interface
print *, F()
end
If you see link errors which differ by case, check your binding labels.
The same holds for data objects with binding labels; also make sure that any data object with linkage association has a matching name in the C definition and linked object.
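As a hedged sketch of the data-object case (the names here are illustrative, not from any real library), a module variable associated with a C object defined as int c_counter; could be declared with the binding label spelled exactly as in the C definition:
module counters
  use, intrinsic :: iso_c_binding, only : c_int
  implicit none
  integer(c_int), bind(c, name='c_counter') :: counter
end module counters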
Equally, forgetting to specify C interoperability with bind(c) means the linker may look for a mangled name with a trailing underscore or two (depending on compiler and its options). If you're trying to link against a C function cfunc but the linker complains about cfunc_, check you've said bind(c).
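For instance, a minimal sketch of an interface for a C function declared as int cfunc(void) (cfunc is the illustrative name from above; adjust to your own function):
interface
   function cfunc() bind(c, name='cfunc')
     use, intrinsic :: iso_c_binding, only : c_int
     integer(c_int) :: cfunc
   end function cfunc
end interface
print *, cfunc()
end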
Not providing a main program
A compiler will often assume, unless told otherwise, that it's compiling a main program in order to generate (with the linker) an executable. If we aren't compiling a main program, that's not what we want. That is, if we're compiling a module or external subprogram for later use:
module mod
implicit none
contains
integer function f()
f = 1
end function f
end module
subroutine s()
end subroutine s
we may get a message like
undefined reference to `main'
This means that we need to tell the compiler that we aren't providing a Fortran main program. This will often be with the -c flag, but there will be a different option if trying to build a library object. The compiler documentation will give the appropriate options in this case.
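For example, assuming the module and subroutine above are saved in a file called mod.f90 (the file name is just illustrative), they can be compiled without a main program with
gfortran -c mod.f90
which produces mod.o (plus a .mod file for the module) for linking into a program, or archiving into a library, later.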

There are many possible ways you can see an error like this. You may see it when trying to build your program (link error) or when running it (load error). Unfortunately, there's rarely a simple way to see which cause of your error you have.
This answer provides a summary of and links to the other answers to help you navigate. You may need to read all answers to solve your problem.
The most common cause of a link error like this is that you haven't correctly specified external dependencies or haven't put all parts of your code together correctly.
When trying to run your program you may have a missing or incompatible runtime library.
If building fails and you have specified external dependencies, you may have a programming error which means that the compiler is looking for the wrong thing.

Not linking the library (properly)
The most common reason for the undefined reference/unresolved external symbol error is the failure to link the library that provides the symbol (most often a function or subroutine).
For example, when a subroutine from the BLAS library, such as DGEMM, is used, the library that provides this subroutine must be supplied in the linking step.
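As a hedged illustration, the my_source.f90 used below might contain something like the following; compiled without the BLAS library it gives an undefined reference to dgemm_ (or a similarly mangled name, depending on the compiler):
implicit none
double precision :: a(2,2), b(2,2), c(2,2)
a = 1; b = 2; c = 0
call dgemm('N', 'N', 2, 2, 2, 1.0d0, a, 2, b, 2, 0.0d0, c, 2)
print *, c
end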
In the simplest use cases, the linking is combined with compilation:
gfortran my_source.f90 -lblas
The -lblas tells the linker (here invoked by the compiler) to link the libblas library. It can be a dynamic library (.so, .dll) or a static library (.a, .lib).
In many cases, it will be necessary to provide the library object defining the subroutine after the object requesting it. So, the linking above may succeed where switching the command line options (gfortran -lblas my_source.f90) may fail.
Note that the name of the library can be different as there are multiple implementations of BLAS (MKL, OpenBLAS, GotoBLAS,...).
But it will always be shortened from lib... to -l..., as in libopenblas.so and -lopenblas.
If the library is in a location where the linker does not see it, you can use the -L flag to explicitly add the directory for the linker to consider, e.g.:
gfortran my_source.f90 -L/usr/local/lib -lopenblas
You can also try to add the path into some environment variable the linker searches, such as LIBRARY_PATH, e.g.:
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib
When linking and compilation are separated, the library is linked in the linking step:
gfortran -c my_source.f90 -o my_source.o
gfortran my_source.o -lblas

Not providing the module object file when linking
We have a module in a separate file module.f90 and the main program program.f90.
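For concreteness, a minimal sketch of what the two files might contain (the contents are illustrative):
module.f90
module mod
  implicit none
contains
  subroutine hello()
    print *, 'hello from the module'
  end subroutine hello
end module mod
program.f90
program main
  use mod
  implicit none
  call hello()
end program main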
If we do
gfortran -c module.f90
gfortran program.f90 -o program
we receive an undefined reference error for the procedures contained in the module.
If we want to keep separate compilation steps, we need to link the compiled module object file
gfortran -c module.f90
gfortran module.o program.f90 -o program
or, when separating the linking step completely
gfortran -c module.f90
gfortran -c program.f90
gfortran module.o program.o -o program

Problems with the compiler's own libraries
Most Fortran compilers need to link your code against their own libraries. This should happen automatically without you needing to intervene, but this can fail for a number of reasons.
If you are compiling with gfortran, this problem will manifest as undefined references to symbols in libgfortran, which are all named _gfortran_.... These error messages will look like
undefined reference to '_gfortran_...'
The solution to this problem depends on its cause:
The compiler library is not installed
The compiler library should have been installed automatically when you installed the compiler. If the compiler did not install correctly, this may not have happened.
This can be solved by installing the library correctly, usually by reinstalling the compiler. It may be worth uninstalling the incorrectly installed compiler first to avoid conflicts.
N.B. proceed with caution when uninstalling a compiler: if you uninstall the system compiler it may uninstall other necessary programs, and may render other programs unusable.
The compiler cannot find the compiler library
If the compiler library is installed in a non-standard location, the compiler may be unable to find it. You can tell the compiler where the library is using LD_LIBRARY_PATH, e.g. as
export LD_LIBRARY_PATH="/path/to/library:$LD_LIBRARY_PATH"
If you can't find the compiler library yourself, you may need to install a new copy.
The compiler and the compiler library are incompatible
If you have multiple versions of the compiler installed, you probably also have multiple versions of the compiler library installed. These may not be compatible, and the compiler might find the wrong library version.
This can be solved by pointing the compiler to the correct library version, e.g. by using LD_LIBRARY_PATH as above.
The Fortran compiler is not used for linking
If you are linking by invoking the linker directly, or indirectly through a C (or other) compiler, then you may need to tell this compiler/linker to include the Fortran compiler's runtime library. For example, if using GCC's C frontend:
gcc -o program fortran_object.o c_object.o -lgfortran

Related

multiple definition of `__udivti3' in libgcc and Rust system library

I am trying to link an executable (compiled with gcc) with a library that is compiled with cargo build.
cargo generates both .a and .so libraries from code written in the Rust language.
The linkage error is:
/sharedhome/maxaxe01/mbed-cloud-client-example-internal/mbed-cloud-client/parsec-se-driver/target/debug/libparsec_tpm_direct_se_driver.a(compiler_builtins-2541f1e09df1c67d.compiler_builtins.dh9snxly-cgu.0.rcgu.o): In function `__udivti3':
/cargo/registry/src/github.com-1ecc6299db9ec823/compiler_builtins-0.1.25/src/int/udiv.rs:247: multiple definition of `__udivti3'
/usr/lib/gcc/x86_64-linux-gnu/7/libgcc.a(_udivdi3.o):(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
As I understand it, the problem is that some low-level processor math function is defined twice: once in my libgcc and once in the Rust system library (compiler_builtins-0.1.25/src/int/udiv.rs). Maybe somebody has an idea how to solve this?
If I link the executable with the library as a shared object, it links successfully, but I need to link with the static library! (cargo build generates both .so and .a.)
This thread ("multiple definition of `memcmp" error when linking Rust staticlib with embedded C program) does not help me.
If the two versions of the function __udivti3 are equivalent, you could try linking your program with -Wl,--allow-multiple-definition. This is obviously an ugly hack and I would be interested in getting a proper solution, but it worked for me. I was getting a similar conflict on __muloti4 between the compiler-builtins crate (part of the standard library) and a static version of LLVM libc++.
Obviously, I am assuming that using a cdylib is not an option for you and your Rust library needs to be static.
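For reference, a hedged sketch of how the flag can be passed through gcc when linking against the static library (the object file, library path, and output name are placeholders):
gcc main.o path/to/libparsec_tpm_direct_se_driver.a -Wl,--allow-multiple-definition -o my_program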

Why are some foreign functions statically linked while others are dynamically linked?

I'm working on a program that needs to manipulate git repositories. I've decided to use libgit2. Unfortunately, the haskell bindings for it are several years out of date and lack several functions that I require. Because of this I've decided to write the portions that use libgit2 in C and call them through the FFI. For demonstration purposes one of them is called git_update_repo.
git_update_repo works perfectly when used in a pure C program, however when it's called from haskell an assertion fails indicating that the libgit2 global init function, git_libgit2_init, hasn't been called. But, git_libgit2_init is called by git_update_repo. And if I use gdb I can see that git_libgit2_init is indeed called and reports that the initialization has been successful.
I've used nm to examine the executables and found something interesting. In a pure C executable, all the libgit2 functions are dynamically linked (as expected). However, in my haskell executable, git_libgit2_init is dynamically linked, while the rest of the libgit2 functions are statically linked. I'm certain that this mismatch is the cause of my issue.
So why do certain functions get linked dynamically and others statically? How can I change this?
The relevant settings in my .cabal file are
cc-options: -g
c-sources:
src/git-bindings.c
extra-libraries:
git2

Dynamic loading and weak symbol resolution

Analyzing this question I found out some things about the behavior of weak symbol resolution in the context of dynamic loading (dlopen) on Linux. Now I'm looking for the specifications governing this.
Let's take an example. Suppose there is a program a which dynamically loads libraries b.so and c.so, in that order. If c.so depends on two other libraries foo.so (actually libgcc.so in that example) and bar.so (actually libpthread.so), then usually symbols exported by bar.so can be used to satisfy weak symbol linkages in foo.so. But if b.so also depends on foo.so but not on bar.so, then these weak symbols will apparently not be linked against bar.so. It seems as if the foo.so linkages only look for symbols from a and b.so and all their dependencies.
This makes sense, to some degree, since otherwise loading c.so might change the behavior of foo.so at some point where b.so has already been using the library. On the other hand, in the question that got me started this caused quite a bit of trouble, so I wonder whether there is a way around this problem. And in order to find ways around, I first need a good understanding about the very exact details how symbol resolution in these cases is specified.
What is the specification or other technical document to define correct behavior in these scenarios?
Unfortunately, the authoritative documentation is the source code. Most distributions of Linux use glibc or its fork, eglibc. In the source code for both, the file that should document dlopen() reads as follows:
manual/libdl.texi
#c FIXME these are undocumented:
#c dladdr
#c dladdr1
#c dlclose
#c dlerror
#c dlinfo
#c dlmopen
#c dlopen
#c dlsym
#c dlvsym
What technical specification there is can be drawn from the ELF specification and the POSIX standard. The ELF specification is what makes a weak symbol meaningful. POSIX is the actual specification for dlopen() itself.
This is what I find to be the most relevant portion of the ELF specification.
When the link editor searches archive libraries, it extracts archive
members that contain definitions of undefined global symbols. The
member’s definition may be either a global or a weak symbol.
The ELF specification makes no reference to dynamic loading so the rest of this paragraph is my own interpretation. The reason I find the above relevant is that resolving symbols occurs at a single "when". In the example you give, when program a dynamically loads b.so, the dynamic loader attempts to resolve undefined symbols. It may end up doing so with either global or weak symbols. When the program then dynamically loads c.so, the dynamic loader again attempts to resolve undefined symbols. In the scenario you describe, symbols in b.so were resolved with weak symbols. Once resolved, those symbols are no longer undefined. It doesn't matter if global or weak symbols were used to define them. They're already no longer undefined by the time c.so is loaded.
The ELF specification gives no precise definition of what a link editor is or when the link editor must combine object files. Presumably it's a non-issue because the document has dynamic-linking in mind.
POSIX describes some of the dlopen() functionality but leaves much up to the implementation, including the substance of your question. POSIX makes no reference to the ELF format or weak symbols in general. For systems implementing dlopen() there need not even be any notion of weak symbols.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/dlopen.html
POSIX compliance is part of another standard, the Linux Standard Base. Linux distributions may or may not choose to follow these standards and may or may not go to the trouble of being certified. For example, I understand that a formal Unix certification by Open Group is quite expensive -- hence the abundance of "Unix-like" systems.
An interesting point about the standards compliance of dlopen() is made on the Wikipedia article for dynamic loading. dlopen(), as mandated by POSIX, returns a void*, but C, as mandated by ISO, says that a void* is a pointer to an object and such a pointer is not necessarily compatible with a function pointer.
The fact remains that any conversion between function and object
pointers has to be regarded as an (inherently non-portable)
implementation extension, and that no "correct" way for a direct
conversion exists, since in this regard the POSIX and ISO standards
contradict each other.
The standards that do exist contradict each other, and what standards documents there are may not be especially meaningful anyway. Here's Ulrich Drepper writing about his disdain for Open Group and their "specifications".
http://udrepper.livejournal.com/8511.html
Similar sentiment is expressed in the post linked by rodrigo.
The reason I've made this change is not really to be more conformant
(it's nice but no reason since nobody complained about the old
behaviour).
After looking into it, I believe the proper answer to the question as you've asked it is that there is no right or wrong behavior for dlopen() in this regard. Arguably, once a search has resolved a symbol it is no longer undefined and in subsequent searches the dynamic loader will not attempt to resolve the already defined symbol.
Finally, as you state in the comments, what you describe in the original post is not correct. Dynamically loaded shared libraries can be used to resolve undefined symbols in previously dynamically loaded shared libraries. In fact, this isn't limited to undefined symbols in dynamically loaded code. Here is an example in which the executable itself has an undefined symbol that is resolved through dynamic loading.
main.c
#include <dlfcn.h>
void say_hi(void);
int main(void) {
void* symbols_b = dlopen("./dyload.so", RTLD_NOW | RTLD_GLOBAL);
/* uh-oh, forgot to define this function */
/* better remember to define it in dyload.so */
say_hi();
return 0;
}
dyload.c
#include <stdio.h>
void say_hi(void) {
puts("dyload.so: hi");
}
Compile and run.
gcc-4.8 main.c -fpic -ldl -Wl,--unresolved-symbols=ignore-all -o main
gcc-4.8 dyload.c -shared -fpic -o dyload.so
$ ./main
dyload.so: hi
Note that the main executable itself was compiled as PIC.

What is the difference between _imp and __imp?

I came across an interesting error when I was trying to link to an MSVC-compiled library using MinGW while working in Qt Creator. The linker complained of a missing symbol that went like _imp_FunctionName. When I realized that it was due to a missing extern "C", and fixed it, I also ran the MSVC compiler with /FAcs to see what the symbols are. Turns out, it was __imp_FunctionName (which is also the way I've read on MSDN and quite a few guru bloggers' sites).
I'm thoroughly confused about how the MinGW linker complains about a symbol beginning with _imp, but is able to find it nicely although it begins with __imp. Can a deep compiler magician shed some light on this? I used Visual Studio 2010.
This is fairly straightforward identifier decoration at work. The imp_ prefix is auto-generated by the compiler; it exports a function pointer that allows optimizing binding to DLL exports. By language rules, the imp_ is prefixed by a leading underscore, required since it lives in the global namespace and is generated by the implementation and doesn't otherwise appear in the source code. So you get _imp_.
Next thing that happens is that the compiler decorates identifiers to allow the linker to catch declaration mis-matches. Pretty important because the compiler cannot diagnose declaration mismatches across modules and diagnosing them yourself at runtime is very painful.
First there's C++ decoration, a very involved scheme that supports function overloads. It generates pretty bizarre looking names, usually including lots of ? and @ characters, with extra characters for the argument and return types so that overloads are unambiguous. Then there's decoration for C identifiers, which is based on the calling convention. A cdecl function has a single leading underscore; an stdcall function has a leading underscore and a trailing @n that permits diagnosing argument declaration mismatches before they imbalance the stack. The C decoration is absent in 64-bit code; there is (blissfully) only one calling convention.
So you got the linker error because you forgot to specify C linkage: the linker was asked to match the heavily decorated C++ name with the mildly decorated C name. You then fixed it with extern "C"; now you get the single added underscore for cdecl, turning _imp_ into __imp_.

Linker Errors for Unused Functionality: When do they occur?

Assume a static library libfoo that depends on another static library libbar for some functionality. These and my application are written in D. If my application only uses libfoo directly, and only calls functions from libfoo that do not reference symbols from libbar, sometimes the program links successfully without passing libbar to the linker and other times it doesn't.
Which of these happens seems to depend on what compiler I'm using to compile libfoo, libbar and my application, even though all compilers use the GCC toolchain to link. If I'm using DMD, I never receive linker errors if I don't pass libbar to the linker. If I'm using GDC, I sometimes do, for reasons I don't understand. If I'm using LDC, I always do.
What determines whether the GCC linker fails when a symbol referred in libfoo is undefined, but this symbol occurs in a function not referred to by the application object file?
What determines whether the GCC linker fails when a symbol referred in libfoo is undefined, but this symbol occurs in a function not referred to by the application object file?
If the linker complains about an unresolved symbol, then that symbol is referenced from somewhere.
Usually the linker will tell you which object the unresolved reference comes from, but if it doesn't, the -Wl,-y,unres_symbol option should (where unres_symbol is the name of the unresolved symbol).
You may also want to read this description of how the whole thing works.
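As a hedged illustration (the object, library, and symbol names are placeholders), the tracing option can be passed through the compiler driver like this:
gcc app.o libfoo.a -Wl,-y,the_missing_symbol -o app
The link may still fail, but the linker will print each file in which the_missing_symbol appears, showing where the reference comes from.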
If the linker makes no effort to eliminate dead (unused) code in the libraries, it simply assumes all referenced symbols are used and tries to link them in.
If it does the elimination (through, for example, a simple mark-and-sweep algorithm; note that you cannot fully decide whether some code is unused, as that problem can be reduced to the halting problem), it can eliminate the unused libraries if they are never referenced.
This behavior is implementation defined (and there may be linker flags you can set to enable/disable it).
