My background is in Windows and I'm a Linux noob. Still trying to wrap my head around some basic concepts, and specifically the system libraries:
Windows has ntdll.dll, which wraps system calls, and a CRT DLL, which interfaces between C code and the OS services exposed by ntdll.
(For simplicity I ignore the intermediate layer of user32, kernel32, kernelbase etc. I also realize the CRT is several DLLs; that is not the point.)
It seems Unix/Linux has pretty much just libc, which wraps system calls and is called directly from your application code.
Is this the right analogy? (ntdll + CRT) <===> libc ?
I realize that C and Unix evolved together, but I am still surprised. Can it be that the C interface is hard-wired into the OS for Unix/Linux? On Windows, non-C programs link against the underlying OS-provided DLLs. Is it possible that in Linux there is no OS/C-runtime border?
In general, most programs link against libc, even if they are written in another language. It provides the C standard library interface (like MSVCRT), POSIX features (the equivalent of certain parts of the Win32 subsystem), and wrappers around system calls. For example, Rust uses libc because it provides a portable environment to link against.
However, on Linux, you don't have to link against libc. Go chooses to make system calls directly, which means it can ship static binaries that have no runtime dependencies. This is possible because Linux guarantees a stable kernel ABI, but not all operating systems do this (e.g., macOS). So unless you have significant resources (like an entire programming language team), this isn't generally a smart move unless you're only working with a handful of syscalls.
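To make the Go point concrete, here is a minimal sketch of what a direct system call looks like in C without libc. It is only a sketch: it assumes x86-64 Linux (the syscall numbers and register convention are architecture-specific) and a GCC or Clang toolchain for the inline assembly, built with something like gcc -nostdlib -static.

    /* write_then_exit.c: x86-64 Linux only, no libc at all. */
    static long raw_syscall3(long nr, long a1, long a2, long a3)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void)
    {
        static const char msg[] = "hello from a raw syscall\n";
        raw_syscall3(1, 1, (long)msg, sizeof msg - 1);   /* SYS_write = 1 */
        raw_syscall3(60, 0, 0, 0);                       /* SYS_exit  = 60 */
    }

Run ldd on the resulting binary and it reports "not a dynamic executable": exactly the kind of dependency-free binary Go produces.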
I should point out that even Windows is intrinsically wired into the C language: it uses C strings (granted, usually wide C strings) for its system calls, and much of the kernel is written in C. Even if you were starting a kernel from scratch, you'd still need a general C interface, because virtually every programming language has a way to interact with C.
The Linux system calls are documented in syscalls(2) and are the foundation of user-land programs. The calling conventions are documented in the ABI specifications. The ELF executable format is documented, e.g. in elf(5).
Read also Advanced Linux Programming and about the Unix philosophy.
You can make system calls directly in assembler; the Linux Assembly HowTo explains how. But you will usually prefer the C interface, and for that libc is the natural choice. In practice, libc.so is the cornerstone of most Linux systems.
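As a small illustration of that C interface, here is a Linux-specific sketch (SYS_write makes it non-portable) that writes a message twice, once through the usual libc wrapper write(2) and once through libc's generic syscall(2) entry point.

    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg1[] = "via the libc wrapper\n";
        write(STDOUT_FILENO, msg1, sizeof msg1 - 1);

        const char msg2[] = "via syscall(2)\n";
        syscall(SYS_write, STDOUT_FILENO, msg2, sizeof msg2 - 1);
        return 0;
    }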
Play with ldd(1), pmap(1), strace(1), BusyBox.
The GCC compiler provides useful language extensions and supports mixing C and assembler code.
Some programming language implementations barely use C and can make system calls directly (look into SBCL or Go, for example).
The Linux kernel, the usual GNU libc (or musl libc), the GCC compiler, and the binutils are all free software or open source, so you can study their source code.
Things become trickier with systemd and vdso(7).
See also http://linuxfromscratch.org/
Graphical applications use a display server, often Xorg or Wayland. Read about X11. You would typically use a GUI toolkit such as GTK or Qt to write them.
Related
I'm building a .a from C++ code. It only depends on the standard library (libc++/libstdc++). From general reading, it seems that portability of binaries depends on
compiler version (because it can affect the ABI). For gcc, the ABI is linked to the major version number.
libc++/libstdc++ versions (because they could pass a vector<T> into the .a and its representation could change).
I.e. someone using the .a needs to use the same (major version of the) compiler and the same standard library.
As far as I can see, if compiler and standard library match, a .a should work across multiple distros. Is this right? Or is there gubbins relating to system calls, etc., meaning a .a for Ubuntu should be built on Ubuntu, .a for CentOS should be built on CentOS, and so on?
Edit: see If clang++ and g++ are ABI incompatible, what is used for shared libraries in binary? (though it doesn't answer this question).
Edit 2: I am not accessing any OS features explicitly (e.g. via system calls). My only interaction with the system is to open files and read from them.
It only depends on the standard library
It could also depend implicitly upon other things (think of resources like fonts, configuration files under /etc/, header files under /usr/include/, availability of /proc/, of /sys/, external programs run by system(3) or execvp(3), specific file systems or devices, particular ioctl-s, available or required plugins, etc...)
These are the kind of details that might make porting difficult. For example, look into nsswitch.conf(5).
The devil is in the details.
(In other words, without a lot more detail, your question doesn't make much sense.)
Linux is perceived as a free software ecosystem. The usual way of porting something is to recompile it on, or at least for, the target Linux distribution. Once you have done that several times (for many different Linux distros), you'll understand which details are significant for your particular software (and distributions).
Most of the time, recompiling and porting a library on a different distribution is really easy. Sometimes, it might be hard.
For shared libraries, reading Program Library HowTo, C++ dlopen miniHowTo, elf(5), your ABI specification (see here for some incomplete list), Drepper's How To Write Shared Libraries could be useful.
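If the shared-library material is new, a minimal dlopen()/dlsym() sketch may help. The library name libm.so.6 and the symbol cos are just convenient examples, and on older glibc you may need to link with -ldl.

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libm.so.6", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look up cos() at run time and call it through a function pointer. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }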
My recommendation is to prepare binary packages for various common Linux distributions. For example, a .deb for Debian & Ubuntu (some particular versions of them).
Of course a .deb for Debian might not work on Ubuntu (sometimes it does).
Look also into things like autoconf (or cmake). You may at least want to have some externally provided #define-d preprocessor strings (often passed with -D to gcc or g++) that vary from one distribution to the next (e.g. on some distributions you print by popen-ing lp, on others by popen-ing lpr, on others by interacting with some CUPS server, etc...). Details matter.
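For instance, here is a hypothetical sketch of such a distribution-dependent #define; the macro name PRINT_COMMAND is made up for illustration and would be passed at build time, e.g. with -DPRINT_COMMAND='"lpr"'.

    #include <stdio.h>

    #ifndef PRINT_COMMAND
    #define PRINT_COMMAND "lp"   /* default; overridden per distribution via -D */
    #endif

    /* Send a small text file to the printer through the distribution's spooler. */
    int print_file(const char *path)
    {
        FILE *in = fopen(path, "r");
        if (!in)
            return -1;

        FILE *out = popen(PRINT_COMMAND, "w");   /* e.g. "lp" or "lpr" */
        if (!out) {
            fclose(in);
            return -1;
        }

        int c;
        while ((c = fgetc(in)) != EOF)
            fputc(c, out);

        fclose(in);
        return pclose(out);
    }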
My only interaction with the system is to open files
But even these vary a lot from one distribution to another.
It is probable that you won't be able to provide a single, identical lib*.a for several distributions.
NB: you probably need to budget more work than what you believe.
Since dlopen uses libdl.so, but I am working on a standalone application that does not use OS support, my idea is to implement dlopen directly in my own code. Is there any way to do that?
Loading shared libraries is intrinsically dependent on the operating system's runtime loader, and in turn on the operating system's executable file format and its process construction model. There is no OS-independent way to do it.

The GNU source code of dlopen is of course freely available, but that does not make it independent of an operating system.

The maximum degree of OS independence you can achieve in C is obtained by restricting yourself to software that you can write entirely with the resources of the Standard C Library. The Standard C Library does not contain dlopen or any equivalent functionality, because such functionality is intrinsically OS-dependent.
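For contrast, here is a sketch of what that "Standard C only" restriction looks like in practice; nothing below depends on the OS beyond what <stdio.h> guarantees (the file name is just an example).

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("example.txt", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);

        fclose(f);
        return 0;
    }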
As your question is tagged Linux, it is not quite clear why you would want your application
to be independent of OS support that is provided by Linux.
What is the difference between arm-none-eabi and arm-linux-gnueabi? I know the difference in how to use them (one for bare metal software, the other one for software meant to be run on linux). But what is the technical background?
I see there is a difference in the ABI which is, as far as I understood, something like an API but on binary level. It ensures interoperability of different applications.
But I don't really understand in which way having or not having an operating system affects my toolchain. The only thing that came to my mind is that libraries may have to be statically linked (do they?) when compiling bare-metal software, because there is no OS to dynamically provide them.
Most pages I found on this topic just explain how to use the toolchains, not the technical background. I'm a student of mechatronics and new to embedded systems, so my experience in this field is somewhat limited.
Maybe this link to a description will help.
Probably the biggest difference:
"The bare-metal ABI will assume a different C library (newlib for example, or even no C library) to the Linux ABI (which assumes glibc). Therefore, the compiler may make different function calls depending on what it believes is available above and beyond the Standard C library."
I am aware that there are (at least) two radically different kinds of shared-library files on Unix-type systems. One is the kind used on GNU/Linux systems and probably other systems as well (with the filename ending in ".so"), and the other is used on Mac OS X and possibly other systems too (with the filename ending in ".dylib").
My question is this --- is there any type of test I could do from a shell-script that would easily detect which of these two paradigms the current OS uses for shared libraries?
I'm sure I could find some way to easily deal with this variance --- if only I knew of a simple test I could run from a shell-script that would tell me which type of shared library is used on the current system.
Well, I guess you need to check the file types of executables on the target platform. You may use file(1) for that (check its output for, say, /bin/ls). ELF is the most widely used executable format on Linux, while Mach-O is the "native" one on Mac OS X.
A note: technically there are other executable formats on these systems, say a.out and PEF, and, as you might guess, those formats have their own kinds of dynamic libraries. Frankly speaking, Linux has pluggable support for executable formats, and even Win32 .EXEs may be executed "quasi-natively" on Linux (of course, they need an implementation of the Win32 API working above the kernel API; Wine is one such implementation).
Also, if you need to create a dynamically loaded library, you should use one of the portable build systems (to name a few: GNU autotools, CMake, qmake...). That way you get not only the ordinary shared-library extension but also the right linker flags, portable installation/uninstallation methods, and so on.
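If a compile-time answer is acceptable instead of a run-time shell test, the compiler's predefined macros already encode the platform. A small sketch; this is essentially what the portable build systems above figure out for you.

    #include <stdio.h>

    #if defined(__APPLE__)
    #  define SHLIB_SUFFIX ".dylib"
    #elif defined(_WIN32)
    #  define SHLIB_SUFFIX ".dll"
    #else
    #  define SHLIB_SUFFIX ".so"
    #endif

    int main(void)
    {
        printf("shared libraries here end in %s\n", SHLIB_SUFFIX);
        return 0;
    }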
I want to write a socket program in Linux. So it'll use glibc system calls like socket(), bind(), listen(), write() etc.
I wonder, can I compile it without any changes on FreeBSD, Solaris, or Mac OS? If yes, is that what is meant by the "POSIX standards"?
socket(), bind(), and write() are all POSIX functions, and using them will make your code portable across a wide range of POSIX-compliant operating systems.
Linux uses glibc; other POSIX-compliant OSes may use some other libc, not necessarily glibc. But all of the above functions (system calls) are implemented with the same signatures and functionality, so you can compile and run the same code everywhere.
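As a concrete illustration, here is a minimal sketch of a TCP server that sticks to POSIX.1-2001 interfaces only; the port number 12345 is arbitrary. It should build unchanged with cc on Linux, the BSDs, Solaris, and Mac OS.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0)
            return 1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(12345);

        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(srv, 5) < 0)
            return 1;

        for (;;) {                       /* greet each client, then hang up */
            int c = accept(srv, NULL, NULL);
            if (c < 0)
                continue;
            static const char msg[] = "hello, POSIX sockets\n";
            write(c, msg, sizeof msg - 1);
            close(c);
        }
    }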
http://en.wikipedia.org/wiki/Berkeley_sockets#BSD_vs_POSIX
The socket calls originated with BSD but today all Unix-like OSs support them. Windows also somewhat supports these in its own flavor (called Winsock).
These are in fact part of POSIX (since POSIX.1-2001), so in reality you shouldn't have portability issues.
Btw, when you do a 'man 2 socket' (or whatever call) it shows useful history and standards info at the bottom.
All the systems you mention follow the Single UNIX Specification, and have POSIX:2001 as a common denominator (see the compliance section), so that's what you want to target.
The GNU libc has many functions that are not in POSIX however. To see whether you can use a particular function you can look at the "CONFORMING TO" section of the relevant manpage, or refer to the GNU libc manual. For example we can see in socket(2) that socket conforms to POSIX.1-2001, so you can use it.
For more background on this, read 1.2 Standards and Portability.
One of the most portable ways you can do networking is using the Asio library from Boost:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio.html
It's easy to use and portable to both Windows and Unix/POSIX systems (like Linux, Mac, the various BSDs, etc.).