Importance of compiling single-threaded v. multi-threaded (and lib naming conventions)?

[ EDIT ]
To clarify, in those environments where multiple targets are deployed to the same directory, Planet Earth has decided on a convention to append "d" or "_d" or "_debug" to the "DEBUG" version (of a library or executable). Such a convention can be considered "ubiquitous" and "understood", although (of course) not everybody does this.
SIMILARLY, to resolve ambiguity between "shared" and "static" versions of a library, a common convention is to append something that distinguishes the static from the shared build (like "myfile.lib" for the shared-import-lib-on-Windows and "myfile_s.lib" for the static-lib-on-Windows). While POSIX does not have this ambiguity based on file extension, remember that the file extension is not used on the "link line", so it is similarly useful to be able to explicitly specify the "static" or "shared" version of a library.
For the purpose of this question, both "debug/release" and "static/shared" are promoted to "ubiquitous convention to decorate the file name root".
QUESTION: Does any other deployment configuration get "promoted" to this level of "ubiquitous convention" such that it would become explicit in the file target root name?
My current guess is "no". For the answer to be "yes", it would require that more than one configuration for a given target is intended to be "used" (and thus deployed to a common directory, which is the assumed basis for the question).
In the past, we compiled with-and-without "web plug-in" capability, which similarly required that name decoration, but we no longer build those targets (so I won't assert that as an example). Similarly, we sometimes compile with-and-without multi-byte character support, but I hate that, so I won't assert that either.
[ORIGINAL QUESTION]
We're establishing library naming conventions/policy, to be applied across languages and platforms (e.g., we support hybrid products using several languages on different platforms, including C/C++, C#, Java). A particular goal is to ensure we handle targets/resources for mobile development (which is new to us) in addition to our traditional desktop (and embedded) applications.
Of course, one option is to have different paths for targets from different build configurations. For the purpose of this question, the decision is made to have all targets co-locate to a single directory, and to "decorate" the library/resource/executable name to avoid collisions based on build configuration (e.g., "DEBUG" v. "RELEASE", "static lib" v. "shared/DLL", etc.)
Current decision is similar to others on the web, where we append tokens to avoid naming collisions:
MyName.lib (release build, import for shared/dll)
MyName_s.lib (release build, static lib)
MyName_d.lib (debug build, import for shared/DLL)
MyName_ud.lib (Unicode/wide-char, debug, import for shared/DLL)
MyName_usd.lib (Unicode/wide-char, static lib, debug)
(The above are Windows examples, but these policies similarly apply to our POSIX systems.)
These are based on:
d (release or debug)
u (ASCII or Unicode/wide-char)
s (shared/DLL or static-lib)
QUESTION: We do not have legacy applications that must be compiled single-threaded, and my understanding is that (unlike Microsoft) POSIX systems can link single- and multi-threaded targets into a single application without issue. Given today's push towards multi-core and multi-threading, is there a need in a large enterprise to establish the following token to identify "single-" versus "multi-threaded" compiled targets?
t (single-threaded or multi-threaded) *(??needed??)*
...and did we miss any other target collision, like compiling with-and-without the STL (in C++)?
As an aside, Microsoft has library naming conventions at:
http://msdn.microsoft.com/en-us/library/aa270400(v=vs.60).aspx and their DLL naming conventions at: http://msdn.microsoft.com/en-us/library/aa270964(v=vs.60).aspx
A similar question on SO a year ago that didn't talk about threading and didn't reference the Microsoft conventions can be found at: What is proper naming convention for MSVC dlls, static libraries and import libraries

You are using an ancient compiler. There is no need to establish such a standard in an enterprise; the vendor has already done this. Microsoft hasn't shipped a single-threaded version of the CRT for the past 13 years. Similarly, Windows has been a Unicode operating system for the past 17 years. It makes zero sense to still write Unicode-agnostic code these days.
But yes, the common convention is to append a "d" for the debug build of a library. And to give a DLL version of a library a completely different name.

Related

Runtime plugins in Rust

We have a commercially sold application that is presently written in Java and Python. We are currently looking at moving to Rust for performance and non-crashy reasons.
In our present Java/Python architecture, we have a feature that manages customisations that particular customers want. This involves placing Java jars/classes and python files under a specific folder designated for customisation for specific customers. In the application configuration, the Java classpath and the PYTHON_PATH have this folder precede the folders containing the normal, uncustomised application code. Because of this, any code in this special folder will override the normal, uncustomised behaviour of the application.
We would like to keep this feature in some form when moving to Rust. We certainly want to avoid distributing source code for the core app (mostly Java now) to our customers and having them compile it, which is what we would need to do if we used Rust's module feature.
Is there a way we can implement this feature when we go to Rust?
Target OS's are a mix of Linux and Windows.
Sounds like you want some kind of plugin architecture, with a dynamic library (also written in Rust) that's loaded at runtime.
Unfortunately, Rust doesn't have a stable ABI yet, meaning that those libraries would have to be compiled with the exact same compiler that built the main application. One workaround is to expose a C ABI from the plugin side and use C FFI to call it, if you can live with the unsafety and hassle that entails. There's also the abi_stable crate, which might be safer/simpler to use.
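To make that concrete, here is a minimal sketch of the C-ABI workaround (the customisations/libacme.so path and the customise_discount function are invented for illustration; the host side uses the libloading crate):

// Plugin side: a separate crate built with crate-type = ["cdylib"] in
// its Cargo.toml, so it produces a plain shared library.
#[no_mangle]
pub extern "C" fn customise_discount(price_cents: u64) -> u64 {
    // A customer-specific override of the default pricing rule.
    price_cents * 90 / 100
}

// Host side: load the per-customer override at runtime, mirroring the
// "customisation folder first" lookup from the Java/Python setup.
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let lib = unsafe { Library::new("customisations/libacme.so")? };
    let f: Symbol<unsafe extern "C" fn(u64) -> u64> =
        unsafe { lib.get(b"customise_discount")? };
    println!("discounted: {}", unsafe { f(1000) });
    Ok(())
}

Note that only C-compatible types (integers, pointers, #[repr(C)] structs) can safely cross this boundary; anything richer is where abi_stable starts to pay off.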
Scripting languages might be another avenue to explore. For example, Rhai is a language specifically developed for use in Rust applications, and interoperates as seamlessly as these things get. Of course, performance of the scripted parts will not be as great as native Rust code.
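For a feel of the embedding API, a tiny sketch (the script is inlined here to keep it self-contained; in the customisation scenario it would be read from the per-customer folder, e.g. with Engine::eval_file):

use rhai::Engine;

fn main() {
    let engine = Engine::new();
    // A customer-supplied script overriding the default pricing rule.
    match engine.eval::<i64>("let price = 1000; price * 90 / 100") {
        Ok(v) => println!("discounted: {}", v),
        Err(e) => eprintln!("script error: {}", e),
    }
}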
I don't think that it is possible without recompiling it or at least compiling the config.rs file that you intend to create for individual users.
Assuming that the end user does not have Rust installed on their system, a few alternatives might be:
Using .yaml files for loading configs (similar to how GitHub Actions work)
Allowing users to run custom programs (you can use tokio::process to run them in an async manner; see the sketch after this list)
Using rhaiscript (I personally prefer this option)
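A rough sketch of the second option (the hook path and arguments are invented; assumes the tokio crate with its process, macros and rt-multi-thread features enabled):

use tokio::process::Command;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Run a customer-supplied hook program and capture its output.
    let output = Command::new("customisations/acme-hook")
        .arg("--order-total")
        .arg("1000")
        .output()
        .await?;
    println!("hook said: {}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}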
Taken from the official language docs for the modules feature:
you could set up your commercial project in such a way that the source code is treated as an external crate, and then load it into the main project with the path attribute.
A minimal example, already in the docs:
#[path = "thread_files"]
mod thread {
    // Load the `local_data` module from `thread_files/tls.rs` relative to
    // this source file's directory.
    #[path = "tls.rs"]
    mod local_data;
}

C++ .a: what affects portability across distros?

I'm building a .a from C++ code. It only depends on the standard library (libc++/libstdc++). From general reading, it seems that portability of binaries depends on
compiler version (because it can affect the ABI). For gcc, the ABI is linked to the major version number.
libc++/libstdc++ versions (because they could pass a vector<T> into the .a and its representation could change).
I.e. someone using the .a needs to use the same (major version of the) compiler and the same standard library.
As far as I can see, if compiler and standard library match, a .a should work across multiple distros. Is this right? Or is there gubbins relating to system calls, etc., meaning a .a for Ubuntu should be built on Ubuntu, .a for CentOS should be built on CentOS, and so on?
Edit: see If clang++ and g++ are ABI incompatible, what is used for shared libraries in binary? (though it doesn't answer this question)
Edit 2: I am not accessing any OS features explicitly (e.g. via system calls). My only interaction with the system is to open files and read from them.
It only depends on the standard library
It could also depend implicitly upon other things (think of resources like fonts, configuration files under /etc/, header files under /usr/include/, availability of /proc/, of /sys/, external programs run by system(3) or execvp(3), specific file systems or devices, particular ioctl-s, available or required plugins, etc...)
These are the kinds of details that might make porting difficult. For example, look into nsswitch.conf(5).
The evil is in the details.
(in other words, without a lot more details, your question doesn't make much sense)
Linux is perceived as a free software ecosystem. The usual way of porting something is to recompile it on -or at least for- the target Linux distribution. When you do that several times (for different and many Linux distros), you'll understand what details are significant in your particular software (and distributions).
Most of the time, recompiling and porting a library on a different distribution is really easy. Sometimes, it might be hard.
For shared libraries, reading Program Library HowTo, C++ dlopen miniHowTo, elf(5), your ABI specification (see here for some incomplete list), Drepper's How To Write Shared Libraries could be useful.
My recommendation is to prepare binary packages for various common Linux distributions. For example, a .deb for Debian & Ubuntu (some particular versions of them).
Of course a .deb for Debian might not work on Ubuntu (sometimes it does).
Look also into things like autoconf (or cmake). You may want at least to have some externally provided #define-d preprocessor strings (often passed by -D to gcc or g++) which would vary from one distribution to the next (e.g. on some distributions, you print by popen-ing lp, on others, by popen-ing lpr, on others by interacting with some CUPS server etc...). Details matter.
My only interaction with the system is to open files
But even these vary a lot from one distribution to another.
It is probable that you won't be able to provide a single -and the same one- lib*.a for several distributions.
NB: you probably need to budget more work than what you believe.

Finding the shared library name to use with dlopen

In my open-source project Artha I use libnotify for showing passive desktop notifications to the user.
Instead of linking against libnotify at build time, a lookup is made at runtime for the shared object (.so) file via dlopen; if it is available on the target machine, Artha exposes the notification feature in its GUI. On app start, a call to dlopen is made with libnotify.so.1 as the filename parameter, and if it returns a non-null pointer the feature is exposed.
A recurring problem with this model is that every time the version number of the library is bumped, Artha's code needs to be updated; currently libnotify.so.4 is the latest to entail such an occurrence.
Is there a Linux system call (irrespective of the distro the app is running on) which can tell me if a particular library's shared object is available at runtime? I know that there exists the brute-force option of enumerating the library by going from 1 to, say, 10, but I find that solution ugly and inelegant.
Also, if this can be addressed via autoconf, then that solution is welcome too, i.e. at build time, based on the target machine, the generated config.h should have the right .so name that can be passed to dlopen.
P.S.: I think good distros follow the style of creating links to libnotify.so.x so that a programmer can just do dlopen("libnotify.so", RTLD_LAZY) and the right version-numbered .so is loaded; unfortunately not all distros follow this, including Ubuntu.
The answer is: you don't.
dlopen() is not designed to deal with things like that, and trying to load whichever soversion you find on the system just because it happens to have the symbols you need is not a good way to do it.
Different sonames have different ABIs, and different ABIs mean that you may be calling the same exact symbol name that is expecting a different set (or different size) of parameters, which will cause crashes or misbehaviour that are extremely difficult to debug.
You should have a read on how shared object versions work and what an ABI is.
The libfoo.so link is there for the link editor (ld) and is usually installed with the -devel packages for that reason; it might also very well not be a link but rather a text file with a linker script, oftentimes on purpose, to avoid exactly what you're trying to do.
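To illustrate the shape of what this answer recommends instead, here is a minimal sketch (in Rust with the libloading crate, standing in for the raw dlopen call Artha actually makes; notify_init is libnotify's real initialisation function, the rest is assumed): ask for the one soversion your code was written against, and degrade gracefully if it is absent rather than probing other versions.

use libloading::{Library, Symbol};
use std::ffi::CString;
use std::os::raw::c_char;

fn main() {
    // Open the exact soversion this code targets; if it is missing,
    // disable the feature instead of guessing at other versions.
    let lib = match unsafe { Library::new("libnotify.so.4") } {
        Ok(l) => l,
        Err(_) => {
            eprintln!("libnotify.so.4 not present; notifications disabled");
            return;
        }
    };
    unsafe {
        // notify_init(app_name) returns a gboolean (a C int).
        let notify_init: Symbol<unsafe extern "C" fn(*const c_char) -> i32> =
            match lib.get(b"notify_init") {
                Ok(s) => s,
                Err(_) => return,
            };
        let app = CString::new("artha").unwrap();
        println!("notify_init returned {}", notify_init(app.as_ptr()));
    }
}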

Combining C++/CLI, x86, x64 and strong naming

Let me get right to the point:
Main application:
C# (4.0), AnyCPU.
Library:
Wrapper for a native .dll, written in C++/CLI. Compiled in two versions, x86 and x64, both signed with the same .snk key (using this workaround).
Limitations:
In the end a single distribution package is required for x86 and x64 platforms.
Main application needs strong name due to references to other strongly named libs.
Rewriting the library using managed C# and P/Invoke is an absolute last way out.
The problem:
As long as the main application, at compile time, references the version (x86 or x64) of the library that is needed when run, this is all working fine.
Moving the same compiled output - and exchanging the library with the right platform version during installation - does not work since the signature of the library changes from that of the referenced one.
In a test application without any strong naming I can switch between them as needed.
The question:
Is there a way to enable switching between the x86 and x64 libraries within the set limitations, or is strong naming preventing any possible solution other than rewriting the lib?
Let me clarify that it is not a question about finding the correct .dll (as discussed here) but about being able to load the .dll once found.
@Damien_The_Unbeliever's comment got me thinking, and he is right that the strong names are the same; that was not the actual issue.
I found another difference between the two versions of the library; the output name was set to nnn.dll and nnnx64.dll. Changing it so that both have the same output name magically made it all work.
Perhaps someone knows why such a setting matters; I certainly don't.

Specifying different platform specific package at compile time in Ada (GNAT)

I'm still new to the Ada programming world so forgive me if this question is obvious.
I am looking at developing an application (in Ada, using the features in the 2005 revision) that reads from the serial port and basically performs manipulation of the strings and numbers it receives from an external device.
Now my intention was to likely use Florist and the POSIX terminal interfaces to do all the serial work on Linux first. I'll get to Windows/macOS/etc. some other time, but I want to leave that option open.
I would like to follow Ada best practices in whatever I do with this. So instead of a hack like conditional compilation under C (which I know Ada does not have anyway), I would like to find out how you are supposed to specify a change in package files from the command line (with gnatmake, for example).
The only thing I can think of right now is that I could name all platform packages exactly the same (i.e. package name Serial.Connector with the same filenames), place them in different folders in the project archive, and then upon compilation specify the directories to look in for the files with the -I argument, changing the directory names for different platforms.
This is the way I was shown for GCC using C/C++... is this still the best way with Ada using GNAT?
Thanks,
-Josh
That's a perfectly acceptable way of handling this kind of situation. If at all possible you should have a common package specification (or specifications if more than one is appropriate), with all the platform-specific stuff strictly confined to the corresponding package body variations.
(If you did want to go down the preprocessor path, there's a GNAT preprocessor called gnatprep that can be used, but I don't like conditional compilation either, so I'd recommend staying with the separate subdirectories approach.)
You could use the GNAT Project file package Naming: an extract from a real example, where I wanted to choose between two versions of a package in the same directory, one with debug additions, is
...
type Debug_Code is ("no", "yes");
Debug : Debug_Code := External ("DEBUG", "no");
...
package Naming is
   case Debug is
      when "yes" =>
         for Spec ("BC.Support.Managed_Storage")
           use "bc-support-managed_storage.ads-debug";
         for Body ("BC.Support.Managed_Storage")
           use "bc-support-managed_storage.adb-debug";
      when "no" =>
         null;
   end case;
end Naming;
To select the special naming, either set the environment variable DEBUG to yes or build with gnatmake -XDEBUG=yes.
Yes, the generally accepted way to handle this in Ada is to do it with different files, selected by your build system. GNU Make is about as multi-platform as it gets, and can allow you to build different files (with different names and/or directories and everything) under different configurations.
As a matter of fact, I find this a superior way (over #ifdefs) to do it in C as well.
