How to install and set up the pthread library in Dev-C++ - multithreading

I am a student and new to multithreading concepts. By default, my compiler does not have the pthread.h header file required for it. I have searched a lot, but I was unable to find the correct method to set up the pthread.h header file for my compiler. The platform has a lot of information about pthread.h handling, but I could not find any method for setting up the pthread header file itself. First I tried VS Code, but it did not work, so I moved to Dev-C++ just to make things simpler, but I cannot find the correct method to install the required files there either. Also, there are many different pthread setup directories available on the internet, and I could not tell which one was the right one. Any help configuring pthread on Dev-C++ will be appreciated. Thanks.
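For reference, once the toolchain is set up, a minimal sanity-check program looks like the sketch below. This assumes a MinGW-based Dev-C++ with a pthreads package (pthreads-win32 or winpthreads) installed, and that you compile with the -pthread flag (e.g. gcc test.c -o test -pthread); the filename is just an example.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread entry point: print a message and exit. */
    static void *worker(void *arg)
    {
        printf("hello from thread %d\n", *(int *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int id = 1;

        /* Create one thread, then wait for it to finish. */
        if (pthread_create(&tid, NULL, worker, &id) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(tid, NULL);
        return 0;
    }

If this compiles and prints the message, the header and library are wired up correctly.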

Related

Loading Linux libraries at runtime

I think a major design flaw in Linux is the shared object hell when it comes to distributing programs in binary instead of source code form.
Here is my specific problem: I want to publish a Linux program in ELF binary form that should run on as many distributions as possible, so my mandatory dependencies are as minimal as it gets: the only libraries required under any circumstances are libpthread, libX11, librt and libm (and glibc of course). I link dynamically against these libraries when I build my program using gcc.
Optionally, however, my program should also support ALSA (sound interface), the Xcursor, Xfixes, and Xxf86vm extensions, as well as GTK. But these should only be used if they are available on the user's system; otherwise my program should still run, but with limited functionality. For example, if GTK isn't there, my program will fall back to terminal mode. Because my program should still be able to run without ALSA, Xcursor, Xfixes, etc., I cannot link dynamically against these libraries, because then the program won't start at all if one of them is missing.
So I need to manually check if the libraries are present and then open them one by one using dlopen() and import the necessary function symbols using dlsym(). This, however, leads to all kinds of problems:
1) Library naming conventions:
Shared objects often aren't simply called "libXcursor.so" but have some kind of version extension like "libXcursor.so.1" or even really funny things like "libXcursor.so.0.2000". These extensions seem to differ from system to system. So which one should I choose when calling dlopen()? Using a hardcoded name here seems like a very bad idea because the names differ from system to system. So the only workaround that comes to my mind is to scan the whole library path and look for filenames starting with a "libXcursor.so" prefix and then do some custom version matching. But how do I know that they are really compatible?
2) Library search paths: Where should I look for the *.so files after all? This is also different from system to system. There are some default paths like /usr/lib and /lib but *.so files could also be in lots of other paths. So I'd have to open /etc/ld.so.conf and parse this to find out all library search paths. That's not a trivial thing to do because /etc/ld.so.conf files can also use some kind of include directive which means that I have to parse even more .conf files, do some checks against possible infinite loops caused by circular include directives etc. Is there really no easier way to find out the search paths for *.so?
So, my actual question is this: isn't there a more convenient, less hackish way of achieving what I want? Is it really so complicated to create a Linux program that has some optional dependencies like ALSA, GTK, libXcursor... but that also works without them? Is there some kind of standard for doing what I want to do? Or am I doomed to do it the hackish way?
Thanks for your comments/solutions!
I think a major design flaw in Linux is the shared object hell when it comes to distributing programs in binary instead of source code form.
This isn't a design flaw as far as the creators of the system are concerned; it's an advantage -- it encourages you to distribute programs in source form. Oh, you wanted to sell your software? Sorry, that's not the use case Linux is optimized for.
Library naming conventions: Shared objects often aren't simply called "libXcursor.so" but have some kind of version extension like "libXcursor.so.1" or even really funny things like "libXcursor.so.0.2000".
Yes, this is called external library versioning. Read about it here. As should be clear from that description, if you compiled your binaries using headers on a system that would normally give you libXcursor.so.1 as a runtime reference, then the only shared library you are compatible with is libXcursor.so.1, and trying to dlopen libXcursor.so.0.2000 will lead to unpredictable crashes.
Any system that provides libXcursor.so but not libXcursor.so.1 is either a broken installation, or is also incompatible with your binaries.
Library search paths: Where should I look for the *.so files after all?
You shouldn't be trying to dlopen any of these libraries using their full path. Just call dlopen("libXcursor.so.1", RTLD_LAZY | RTLD_GLOBAL);, and the runtime loader will search for the library in the system-appropriate locations. (dlopen requires one of RTLD_LAZY or RTLD_NOW in its mode argument.)
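As a sketch of what that looks like for an optional dependency (the soname matches the question; the symbol name frobnicate and its signature are placeholders, since the real import list depends on your program):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Function pointer for the optional feature; stays NULL when the
       library is absent. "frobnicate" is a placeholder symbol name. */
    static int (*optional_frobnicate)(int);

    static void load_optional_lib(void)
    {
        /* Open by soname and let the runtime loader do the searching. */
        void *h = dlopen("libXcursor.so.1", RTLD_LAZY | RTLD_GLOBAL);
        if (!h)
            return;  /* fall back to reduced functionality */
        optional_frobnicate = (int (*)(int))dlsym(h, "frobnicate");
    }

    int main(void)
    {
        load_optional_lib();
        if (optional_frobnicate)
            printf("feature enabled: %d\n", optional_frobnicate(42));
        else
            printf("running without the optional feature\n");
        return 0;
    }

Link with -ldl on glibc systems.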

Finding the shared library name to use with dlopen

In my open-source project Artha I use libnotify for showing passive desktop notifications to the user.
Instead of linking against libnotify at build time, a lookup for the shared object (.so) file is made at runtime via dlopen; if it is available on the target machine, Artha exposes the notification feature in its GUI. On application start, dlopen is called with the filename libnotify.so.1, and if it returns a non-null handle, the feature is exposed.
A recurring problem with this model is that every time the library's version number is bumped, Artha's code needs to be updated; currently libnotify.so.4 is the latest to require such a change.
Is there a Linux system call (irrespective of the distro the application is running on) that can tell me whether a particular library's shared object is available at runtime? I know there is the brute-force option of enumerating version numbers from 1 to, say, 10, but I find that solution ugly and inelegant.
Also, if this can be addressed via autoconf, then that solution is welcome too, i.e. at build time, based on the target machine, the generated config.h should have the right .so name to pass to dlopen.
P.S.: I think good distros follow the convention of creating a libnotify.so link to libnotify.so.x, so that a programmer can just do dlopen("libnotify.so", RTLD_LAZY) and the right version-numbered .so is loaded; unfortunately not all distros follow this, including Ubuntu.
The answer is: you don't.
dlopen() is not designed to deal with things like that, and trying to load whichever soversion you find on the system just because it happens to have the symbols you need is not a good way to do it.
Different sonames have different ABIs, and different ABIs mean that you may be calling the same exact symbol name while it expects a different set (or different sizes) of parameters, which will cause crashes or misbehaviour that is extremely difficult to debug.
You should have a read on how shared object versions work and what an ABI is.
The libfoo.so link is there for the link editor (ld) and is usually installed with the -devel packages for that reason; it might also very well not be a link but rather a text file containing a linker script, oftentimes on purpose, to avoid exactly what you're trying to do.

How to get a list of paths in /etc/ld.so.conf on Linux

What is the most portable and robust way to get the list of paths configured by /etc/ld.so.conf and the files it includes? Parsing the file manually does not seem like a good idea; the format is likely to change in future revisions.
To allow better understanding of the question, I will give you specific details below. Note that, despite these details, this is a general programming question, applicable to other situations.
There is a program called LuaRocks. It is a package manager for the Lua programming language (somewhat like Ruby gems or Python eggs). LuaRocks packages are called "rocks".
As a convenience feature, LuaRocks allows a rock author to specify a list of external dependencies for a rock, formulated as a list of C header files and/or dynamic library files (.so on Linux). If a specified file does not exist, the rock can't be installed.
Currently, on Linux, LuaRocks by default checks for a .so file's existence by searching for the file in two hardcoded paths, /usr/lib and /usr/local/lib.
I believe that this is incorrect behaviour, and it is broken by recent changes in Ubuntu and other Debian-based distributions.
Update: the paths are not hardcoded per se, but are user-configurable in the config file. Still, IMO, not the best solution.
Instead (as I understand it), LuaRocks should look the file up in the paths specified by /etc/ld.so.conf and the files it includes.
(Now please re-read the question above ;-) )
You shouldn't need to parse /etc/ld.so.conf or any of the config files - if you run 'ldconfig', it will scan the configured directories and generate a cache file.
Then, when you subsequently attempt a dlopen, the loader will automatically find the files by consulting the cached library directories. The same goes for compiling with -lSomeLib: you shouldn't need to specify -L/my/other/path if it's configured in ld.so.conf(.d).
autoconf accomplishes this by attempting to compile and link a test program against the shared library, but that's just a functional wrapper around the same lookup.
So, while other methods may not necessarily be 'wrong', at the root of it, attempting to link against the library or doing a dlopen() are the 'most right' ways of doing it.
Consider this: if you link against a library in a directory that ISN'T cached in /etc/ld.so.cache, then when you try to run the program it will fail, because the loader won't be able to find the library!
Hence, any 'good' shared library will be in /etc/ld.so.cache and be linkable/dlopen()able; this means that gcc can use it at link time and that the user-generated library or executable will be able to open it when it executes.
You can circumvent this by expressly setting the environment variable LD_LIBRARY_PATH (or LD_PRELOAD), but each of these has its own caveats and should be avoided if possible for 'standard' use.
Ulrich Drepper's "How To Write Shared Libraries" covers some of these issues and is a good read for anyone whose program consumes other shared libraries.
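If you do need to enumerate what the loader can actually see (for an existence check like the LuaRocks one), one option is to ask ldconfig for its cache rather than parsing the conf files yourself. A rough sketch, which assumes glibc's ldconfig -p output format (one indented "libfoo.so.1 (flags) => /path" line per cache entry):

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if a cached library name starts with `prefix`, else 0. */
    static int lib_in_cache(const char *prefix)
    {
        FILE *p = popen("/sbin/ldconfig -p", "r");
        char line[1024];
        int found = 0;

        if (!p)
            return 0;
        while (fgets(line, sizeof line, p)) {
            const char *name = line;
            while (*name == ' ' || *name == '\t')
                name++;                       /* entries are indented */
            if (strncmp(name, prefix, strlen(prefix)) == 0)
                found = 1;
        }
        pclose(p);
        return found;
    }

    int main(void)
    {
        return lib_in_cache("libm.so") ? 0 : 1;
    }

This still inherits ldconfig's view of the world, which is exactly what the linker and loader use, so it stays consistent with the advice above.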
According to the FHS, the following are valid locations for dynamic libraries:
/lib*/
/opt/*/lib*/
/usr/lib*/
/usr/local/lib*/
(And most likely ~/lib*/ as well.)
All entries in my /etc/ld.so.conf.d/* conform to this. Some entries reference subdirectories below the FHS dirs, which probably means that you can use the libraries in there without path information.
Now, I don't know enough about LuaRocks. If you're limited to Lua-path-style globs (only ?), you cannot match these patterns and have to parse the configs. Otherwise, you could just try to find the libraries anywhere under these directories.
This would break on non-FHS-conforming systems (where the only option is parsing the config), and if a directory is not included in the config, the installer might see libraries that the dynamic linker cannot find.
These two risks seem acceptable to me, therefore I'd simply ignore the config and look at these dirs.
(Another possibility could be trying to link the library; this should automagically use the right path. However, this is platform-specific and maybe dangerous.)
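For what the "just look at these dirs" idea might look like, here is a hedged sketch using glob(3); the pattern list and the library name libfoo are illustrative only, and GLOB_BRACE is a GNU extension:

    #define _GNU_SOURCE   /* for GLOB_BRACE */
    #include <glob.h>
    #include <stdio.h>

    int main(void)
    {
        glob_t g;

        /* Look for any version of libfoo in the common FHS library dirs.
           Spell the directories out individually on non-GNU libcs. */
        if (glob("/{lib,usr/lib,usr/local/lib}*/libfoo.so*",
                 GLOB_BRACE, NULL, &g) == 0) {
            for (size_t i = 0; i < g.gl_pathc; i++)
                printf("found: %s\n", g.gl_pathv[i]);
            globfree(&g);
        }
        return 0;
    }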

mixing code compiled with /MT and /MD

I have a large body of code, compiled with /MT (i.e. expecting to statically link against the CRT). I need to combine this with a static third-party library, which has been built with /MD (i.e. expecting to link the CRT dynamically).
Is it theoretically possible to link the two into one executable without recompiling either?
If I link with /nodefaultlib:msvcrt, I end up with a small number of undefined references to things like __imp__wgetenv. I'm tempted to try implementing those functions in my own code, forwarding to wgetenv, etc. Is that worth trying, or will I run straight into the next problem?
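For what it's worth, the forwarding idea can be sketched as below. The __imp_ prefix names the import-table pointer that /MD-built objects call through, so defining a pointer with exactly that symbol name and aiming it at the static CRT's function can satisfy the reference. This is a fragile, assumption-laden hack: the exact symbol spelling depends on the target's name decoration (shown for x64; x86 adds another leading underscore), and you would need one such shim per unresolved import.

    #include <stdlib.h>   /* declares _wgetenv and wchar_t */

    /* The /MD objects reference __imp__wgetenv, the pointer through
       which a DLL import would normally be called. Define that pointer
       ourselves, aimed at the static CRT's _wgetenv. */
    wchar_t *(__cdecl *__imp__wgetenv)(const wchar_t *) = _wgetenv;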
Unfortunately, I'm forbidden from taking the easy option of packing the third-party code into a separate DLL :-/
No. /MT and /MD are mutually exclusive.
All modules passed to a given invocation of the linker must have been compiled with the same run-time library compiler option (/MD, /MT, /LD).
Source
I found such a solution in the OpenSSL sources: all of the library's object files are compiled with the combination /MT /Zl (e.g. cl /c /MT /Zl foo.c). As the author described, this combination makes it possible to build a static library that can be linked into applications using either the dynamic CRT (/MD) or the static CRT (/MT): /Zl omits the default CRT library name from each object file, so the final application decides which CRT gets pulled in.
I faced a similar situation, in which I had two libraries: one built with /MT and the other with /MD. I had to build an executable that uses functionality from both. The /MD library was third-party, so I couldn't rebuild it, and the /MT library had many dependencies, and building all of them as /MD would have been a big pain. I was getting an error from the third party's config header file that made it mandatory to build the executable as /MD. I was looking for the easy route of packaging the third-party code as a separate DLL, as mentioned in the question; however, I couldn't find enough explanation online of this easy way. Hence my two cents below.
The following is the way I circumvented it.
I built another DLL that acted as an interface. This interface wrapped all the API calls made to the third-party DLL. Its header file did not include any header files from the third-party DLL; instead, all those headers were included only in the interface.cpp file. The interface, as you would expect, was built as /MD.
In my main.cpp file I then included this interface header so that all calls to the third-party DLL go through the interface.
Extra care has to be taken when passing arguments to the interface. Basic types like int and bool can be passed by value. However, any class or structure needs to be passed by const reference to avoid heap corruption, because each CRT manages its own heap. This applies even to strings.
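A bare-bones sketch of that interface layer (all names here are hypothetical; the point is that only interface.cpp ever sees the third-party headers, and the interface DLL itself is built /MD to match them):

    /* interface.h -- the only header the main application includes.
       No third-party types appear here. */
    #ifdef INTERFACE_EXPORTS
    #define IFACE_API __declspec(dllexport)
    #else
    #define IFACE_API __declspec(dllimport)
    #endif

    #ifdef __cplusplus
    extern "C" {
    #endif
    IFACE_API int iface_do_work(const char *input);
    #ifdef __cplusplus
    }
    #endif

    /* interface.cpp -- compiled /MD into interface.dll; wraps the real call. */
    #include "interface.h"
    #include "thirdparty.h"   /* hypothetical third-party header */

    int iface_do_work(const char *input)
    {
        return thirdparty_do_work(input);   /* hypothetical third-party API */
    }

Keeping the boundary to plain C types (or const references, as noted above) avoids handing CRT-owned objects across the /MT-/MD line.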
Happy to share more details if it is not clear!

How To: Referencing LAPACK library from FORTRAN 95 in Cygwin

I have a FORTRAN 95 program that needs to make some calls to the LAPACK library. I recently found out that Cygwin can install LAPACK as an extra option.
Well, LAPACK exists in the /lib/lapack/ directory as "cyglapack.dll". Having had only very informal training in Fortran programming, I have no idea how to reference a .dll library as opposed to a .mod module.
Any suggestions or directions to articles answering my question are GREATLY appreciated!
(P.S. I did search first... I don't think I know the proper terms to find a useful article.)
Conceptually, calling LAPACK should be as easy as calling any other DLL. You just have to figure out what link flags and statements to include in your build command(s).
From Fortran you would probably declare as EXTERNAL the functions from LAPACK that you want to use. This tells the compiler not to bother looking for a definition of the function in your sources or in a .mod file, and that the definition will instead be provided at link time. This is where the fun begins, as you try to ensure that the signatures of your calls match the signatures expected by the DLL.
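To make the signature-matching point concrete, here is the same model sketched in C rather than Fortran (the trailing underscore and pass-everything-by-reference conventions are assumptions that hold for GNU-built LAPACK, such as Cygwin's cyglapack): declare the routine yourself and let the linker resolve it from the library.

    /* LAPACK's dgesv solves A*x = b. Fortran passes all arguments by
       reference, and GNU toolchains append an underscore to the name. */
    extern void dgesv_(const int *n, const int *nrhs, double *a,
                       const int *lda, int *ipiv, double *b,
                       const int *ldb, int *info);

    int main(void)
    {
        int n = 2, nrhs = 1, lda = 2, ldb = 2, ipiv[2], info;
        double a[4] = { 1, 3, 2, 4 };   /* 2x2 matrix, column-major */
        double b[2] = { 5, 6 };         /* right-hand side; overwritten with x */

        dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
        return info;                    /* 0 on success */
    }

The link step would be something like gcc main.c -llapack; the exact library argument depends on how the Cygwin package names its import library.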
I might be able to provide more help if you provide more information. What is your Windows development environment? What Fortran compiler are you using? What compile and link tools are you using? What does your current link statement look like?
Search terms: dynamic linking fortran
Take a look at this page:
http://sources.redhat.com/ml/binutils/2001-12/msg00471.html
It mentions using dlltool to generate a .a file from a .dll file. Presumably you should then be able to link against that in the normal way (usually a -l switch on the compile command).
Otherwise, consider running a Linux live CD to avoid the problem in the first place! If you're a student or an academic, see if you can find a server with Fortran installed (the IT staff are usually pretty helpful) where you can compile and run your program.
