Do the Linux system calls and the Linux API depend on the Linux distribution (e.g. Debian, Fedora, Ubuntu, Arch, Gentoo, ...), on the Linux kernel, or on both?
To answer your question about Linux system calls, read man syscalls. So yes: with a different distribution the Linux kernel version can change, and hence the set of available system calls. As for the "Linux API" - what do you mean by that, the kernel's internal API or the C ABI?
The answer depends on whether the call is POSIX-compliant and whether you are using a POSIX-compliant system.
If you use a POSIX call, most of the systems you mention will support it and behave in pretty much the same way, since POSIX is a well-defined standard that they follow closely.
There are also many system calls that are specific to certain systems. If you use such system calls or APIs, your code is at risk, since there is a good chance they will not be available on other systems.
More on POSIX here.
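As a rough illustration (my own sketch, not from the answer above): a POSIX call such as write(2) behaves the same way on any POSIX system, while a Linux-only interface such as inotify has to be guarded or avoided in portable code.

```c
#include <unistd.h>      /* write() - POSIX */

#ifdef __linux__
#include <sys/inotify.h> /* inotify - Linux-only, used here purely as an example */
#endif

int main(void)
{
    /* POSIX: behaves the same on Linux, the BSDs, macOS, ... */
    write(STDOUT_FILENO, "hello\n", 6);

#ifdef __linux__
    /* Linux-specific: this system call does not exist on other systems,
       so portable code must guard or avoid it. */
    int fd = inotify_init1(0);
    if (fd >= 0)
        close(fd);
#endif
    return 0;
}
```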
To answer the first part: the answer is a mixture of yes and no. The yes part: all Linux distributions share the same core, the kernel, which comes from the mainline repository tree.
The no part: there are huge differences between kernel 2.x, kernel 3.x and kernel 4.x, so the underlying implementation of the APIs governing parts of the system, such as device drivers, differs. For example, a kernel module written against a kernel 3.x implementation will not work under kernel 2.x.
That is not to say that differing implementations cannot be backported to older versions of the kernel.
The system calls, however, are relatively static and have not changed much. (see SysCalls)
Distributions, on the other hand, encompass the kernel plus all the libraries, notably the GNU C library, which are recompiled as updates are made where applicable.
Provided the API behind those runtime libraries has not changed, code that targets one version of a library can be recompiled against a newer version of the runtime libraries.
I am curious about the mechanism behind FreeBSD's Linux compatibility layer and found some information below.
https://en.wikipedia.org/wiki/FreeBSD#Compatibility_layers_with_other_operating_systems
https://unix.stackexchange.com/questions/172038/what-allows-bsd-to-run-linux-binaries-but-not-vice-versa
The key difference between the two OSes is the difference in system calls.
I also know that Linux apps and BSD apps depend on different standard dynamic libraries (linux-gate.so.1, for example).
Is there anything else in the implementation?
The approach to being able to run Linux apps in FreeBSD is multi-faceted.
The parts of the strategy, as I understand it, are as follows:
Provide a system call layer that mimics the Linux system call structure and semantics as closely as it can. In FreeBSD, this layer is called 'the linuxolator'.
Install a set of vanilla pre-compiled Linux userland libraries. These libraries work because the linuxolator provides the right system calls that they depend on.
Install/provide/mount platform services that Linux userland libraries and apps expect. For example:
Mount a Linux-compatible procfs - linprocfs.
Install pre-compiled Linux apps and have them depend on these Linux userland libraries.
The Linux apps call the Linux libraries, which call the linuxolator's Linux system calls, which in turn call the FreeBSD system calls.
Some functionality is available on Linux (udev, systemd, inotify(7), ...) but not on FreeBSD (and probably vice versa).
Some system calls have different flags. FreeBSD mmap(2) is not exactly the same as Linux mmap(2), etc...
Both are Unix systems, but the devil is in the details.
If you want to code in C an application for both OSes, try hard to follow POSIX.
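As a small sketch of that advice (assuming /etc/hosts exists and is non-empty, which is not guaranteed): stick to the POSIX-specified protection and flag bits of mmap(2) and avoid platform-specific extensions such as Linux's MAP_POPULATE or FreeBSD's MAP_NOCORE.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hosts", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Only POSIX prot/flags: PROT_READ and MAP_PRIVATE exist on both
       Linux and FreeBSD; platform-specific flags would not. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```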
LSM (Linux Security Modules) was introduced in kernel 2.6. Now that the kernel has reached 4.x, I can't figure out whether anything still uses LSM. Has the latest kernel dropped support for LSM?
According to Wikipedia, the kernel still uses LSM. Modules like AppArmor are implemented on top of it. So yes, it is still used, and modules that take advantage of it do exist.
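If you want to check on your own machine, here is a minimal sketch; it assumes a reasonably recent kernel where securityfs exposes the list of active LSMs at /sys/kernel/security/lsm (the path is an assumption about your setup, not something the answer above mentions):

```c
#include <stdio.h>

int main(void)
{
    /* On recent kernels this file lists the active LSMs,
       e.g. "capability,yama,apparmor". */
    FILE *f = fopen("/sys/kernel/security/lsm", "r");
    if (!f) { perror("fopen"); return 1; }

    char buf[256];
    if (fgets(buf, sizeof buf, f))
        printf("Active LSMs: %s\n", buf);
    fclose(f);
    return 0;
}
```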
I have read that "system call APIs are for user-space access and system APIs are for system-space access". I am new to Linux OS concepts and have no knowledge of the System API. Can anyone explain the difference between the two?
A system call is an explicit request to the kernel, made via a software interrupt. It is the lowest-level mechanism for talking to the operating system: making a system call means calling into the kernel. System calls are intended to be very low-level interfaces to a very specific piece of functionality that your program cannot accomplish on its own.
A system API, on the other hand, is the set of functions used to invoke system calls.
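As a concrete sketch of the distinction (my example, not part of the answer): getpid() is the API entry point a program normally uses, while syscall(2) lets you issue the underlying system call by number.

```c
#include <stdio.h>
#include <sys/syscall.h>   /* SYS_getpid */
#include <unistd.h>

int main(void)
{
    /* "System API": the libc wrapper that programs normally call. */
    pid_t a = getpid();

    /* The raw system call, invoked directly by number via syscall(2). */
    pid_t b = (pid_t)syscall(SYS_getpid);

    printf("getpid() = %ld, syscall(SYS_getpid) = %ld\n", (long)a, (long)b);
    return 0;
}
```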
Read the system call and Linux kernel wikipages first.
As Rahul Triparhi answered, system calls are the elementary operations, as seen from a user-mode application. Use strace(1) to find out which syscalls are made by some program.
The system calls are well documented in the section 2 of the man pages (type first man man in a terminal on your Linux system). So read intro(2) and then syscalls(2).
Stricto sensu, syscalls have an interface, notably specified in ABI specifications like x86-64 ABI, defined at the lowest possible machine level - in terms of machine instructions and registers, etc... The functions in section 2 are tiny C wrappers above them. See also Linux Assembly HowTo
Please read also Advanced Linux Programming which explains quite well many of them.
BTW, I am not sure that "System API" has a well defined meaning, even if I guess what it could be. See also the several answers to this question.
Probably "System API" refers to the many functions standardized by POSIX, implemented in the POSIX C library such as GNU libc (but you could use some other libc on Linux, like MUSL libc, if you really wanted to). I am thinking of functions like dlopen (to dynamically load a plugin) or getaddrinfo(3) (to get information about some network things) etc... The Linux implementation (e.g. dlopen(3)) is providing a super-set of it.
More generally, section 3 of the man pages (see intro(3)) provides a lot of library functions, most of them built above system calls: dlopen actually calls the mmap(2) syscall, and getaddrinfo may use syscalls to connect to some server - see nsswitch.conf(5), etc. But some library functions probably do not make any syscall at all, like snprintf(3), sqrt(3) or longjmp(3): they just do internal computation without needing any additional kernel service.
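A tiny sketch of that difference; running it under strace(1) should show a write syscall for the second call, but nothing kernel-side for the snprintf formatting:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Pure library work: snprintf formats into a buffer and needs no
       kernel service, so no system call is made here. */
    int n = snprintf(buf, sizeof buf, "2 + 2 = %d\n", 2 + 2);

    /* write(2) is a thin wrapper around a real system call. */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}
```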
We are about to release a couple of software products with Linux support.
As for Mac and Windows, the number of versions to support is quite limited (XP, 2000, Vista, 7 for Windows; 10.4-10.6 for Mac). But for Linux it's another story.
We'd like to support as many Linux variants as possible, but the choice is large.
The questions are:
Which distribution format (binaries) should we use to support as many Linux variants as possible?
For testing, what "base Linux" can we test on and extend our results to other Linuxes?
Given that we provide a statically linked binary with all the dependencies, what do we need to check? I assume kernel version and libc version, but I'm wondering.
Our software is written in ANSI compliant C with a bit of BSD and POSIX (gettimeofday, pthreads).
So you think three versions each for Mac and Windows is normal, but you shy away from Linux? Hm.
Just make sure it builds using the standard tool chains -- configure, make and make install traditionally. The rest should take care of itself.
Else, pick what you are comfortable with. For me that would be Debian/Ubuntu, others prefer Fedora. Look at the Linux Standard Base and things like FreeDesktop.org for other standards. Kernel and libc should not matter unless you are doing something very hardware- or driver-specific.
The kernel strives to maintain a backwards-compatible binary API. Statically linked binaries built against 1.0 series kernels are supposed to still run fine to this day on the latest 2.6 series kernels.
If you are statically linking with everything (including libc), then the major problem you are likely to face is different filesystem arrangements, which may not even be a great issue for you. (Testing is the only way to find out, though).
An idea is to survey your proposed customer base to see which Linux versions they run and make a short list from their feedback. However, from what I know (which is subjective!) ...
I would suggest offering two different distribution types -- rpm and .tar.gz. With rpm you cater for the latest Fedora/openSUSE/RHEL/SLES (and derived distros, which is a fair chunk of the corporate market). You are already handling a lot of the dependency problem by static linking, so kernel version should be sufficient.
With the .tar.gz distribution you cater for 'all others', but watch out for support and configuration problems, as they quickly become a time sink.
For testing, have virtual machines of each version you choose to support. These can also be used for product support (I assume you will need to provide product support?). I wouldn't try to extrapolate results between Linux versions because there are too many hidden 'gotchas'.
You can release statically compiled Linux binaries against the kernel and a version of glibc. You really only need to worry about compatibility-breaking revisions. If you have some time, you can set everything up to cross-compile on the same host. The kernel is backward compatible; glibc is more temperamental.
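If you want the shipped binary to report what it is running against, a minimal sketch is below. Note that gnu_get_libc_version() is glibc-specific (it does not exist under musl), and a fully statically linked binary will simply report the glibc it was built with:

```c
#include <stdio.h>
#include <sys/utsname.h>        /* uname(2) */
#include <gnu/libc-version.h>   /* gnu_get_libc_version(), glibc-specific */

int main(void)
{
    struct utsname u;
    if (uname(&u) == 0)
        printf("kernel: %s %s\n", u.sysname, u.release);

    /* Only meaningful when actually linked against glibc. */
    printf("glibc:  %s\n", gnu_get_libc_version());
    return 0;
}
```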
File paths can be assumed to follow the Linux Standard Base if you want to package it with an installer. The more flexible you can be here, the better. I've never heard a customer complain about receiving a tarball of binaries, which I'd recommend offering. I have had customers complain about incorrect assumptions.
Your best bet for a formal package format is probably between DEB (Debian Linux and derivatives, like Ubuntu) and RPM (Red Hat and derivatives, like CentOS). Packages are nice to have, but are just a headache if you don't plan on utilizing the native update manager.
For test & build, I'd personally recommend Gentoo. It's pretty raw, however, so you might want to look into Ubuntu as a distant second choice.
This is an issue for your product management team. Once they have determined that producing a Linux version is a desirable idea (i.e. on a cost-benefit basis), then you will need to find out what distros your customers use or want supported.
In principle you can support any of them, but the more you support, the more of a headache it will be, so you want as FEW as possible.
Support as few OS / architecture combinations as your PM thinks you can get away with
Deprecate OSs / architectures as soon as you can
Only take on new ones if premium support customers demand it, or to get big deals, as per your PM's decision.
How hard it is to support them is largely dependent on how complex your product is (esp. dependencies) and how complete its auto-test suite is. Adding more supported OSs ties your hands with respect to library usage, kernel feature usage etc as well as testing, so it's not something you want to be lumbered with long-term.
So in short, it's not a software engineering issue, but a product management one.
After a Linux kernel upgrade, my VMware Server cannot start until I use vmware-config.pl to do some reconfiguration work (including building some kernel modules).
If I update my Windows VMware host with the latest Windows service pack, I usually do not need to do anything for VMware to keep running.
Why does VMware work differently between Linux and Windows? Does this recompilation bring any benefit on the Linux platform over Windows?
Go read The Linux Kernel Driver Interface.
This is being written to try to explain why Linux does not have a binary kernel interface, nor does it have a stable kernel interface. Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still works just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
It reflects the view of a large portion of Linux kernel developers:
the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.
Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMWare's to work reliably on multiple kernels.
As an example, if some structures change on a new kernel release (for better performance or more features or whatever other reason), a binary VMWare module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout, and thus stand a better chance of working -- though still not 100%, in case fields have been removed or renamed or given different purposes.
If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work: the module will have to be adapted to the new kernel. Since everybody (should) have the source and (can find somebody who) is able to modify it to fit, this is considered acceptable. "Push work to the end-nodes" is a common idea in both networking and free software: since the resources [at the fringes]/[of the developers outside the Linux kernel] are larger than the limited resources [of the backbone]/[of the Linux developers], the trade-off of making the former do more of the work is accepted.
On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft's developers and (at worst) is inefficient, causes bugs, and prevents forward progress.
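For illustration, here is a minimal out-of-tree module sketch (a hello-world, not VMware's code): it has to be compiled against the headers of the exact kernel it will be loaded into, which is precisely why vmware-config.pl rebuilds VMware's modules after every kernel upgrade.

```c
/* hello.c - minimal kernel module.  Built against the running kernel's
   headers with a Makefile containing "obj-m += hello.o" and:
       make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
   A module built this way for one kernel version will generally not
   load into another. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```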
Linux does not have a stable kernel ABI - things like the internal layout of datastructures, etc changes from version to version. VMWare needs to be rebuilt to use the ABI in the new kernel.
On the other hand, Windows has a very stable kernel ABI that does not change from service pack to service pack.
To add to bdonlan's answer, ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel. On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open-source, and because kernel developers question whether distributing binary modules should even be allowed, the ability to do so isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.
It's a consequence of Linux and Windows being developed in different cultural environments and expectations: http://www.joelonsoftware.com/articles/Biculturalism.html. In short: Windows is designed to be suitable for users, whereas Linux evolves to be suitable for open source developers.