For example, my local machine only has gcc 4.4. Can I run a binary compiled with gcc 4.8?
Running a binary depends on your libraries, not your compiler, so it doesn't matter which gcc is installed. Check your library versions.
There are several possible answers here, depending on the particular circumstances:
First, an important factor is whether the binary in question is C or C++. The C++ ABI is tightly coupled to the compiler. It's highly unlikely that you will be able to run C++ code built by a higher version of gcc. The C++ standard library comes with the compiler: if you do not have gcc 4.8 installed, you won't have that version's libstdc++ installed either, and it's unlikely that any gcc 4.8 C++ code will run. Generally, a newer libstdc++ can run binaries built against older versions, but an older libstdc++ cannot be expected to run binaries built against newer ones.
The C ABI, by contrast, is far less tightly married to the compiler, and there is a moderate chance that pure C code will run, as long as it doesn't use any symbols from the C standard library that are specific to a newer version of libc. The same goes for every other shared library used by the particular code in question.
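As a concrete check, you can list which versioned glibc symbols a binary references; a sketch, with ./the_binary standing in for the binary in question:

```
# Show the glibc symbol versions the binary requires. If any tag (e.g.
# GLIBC_2.14) is newer than the libc on this machine, the binary won't load.
objdump -T ./the_binary | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```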
But there is also one other answer:
Simply "no", that's it. This question is often asked in the context of having an established seat of an older version of a commercial Linux distribution which ships with a stale version of some popular package, often Apache, or it could also be Mysql, perhaps PHP or something similar; and there is a requirement to install another product that requires a newer version of the component, that's available in the current version of the commercial Linux distribution, but not in the older version being used.
So this kind of question is an attempt to figure out whether it's possible to simply grab Apache, or whatnot, from the current version of the distro and somehow cram it inside the older version. In my experience, when this question is asked here, it's really a manifestation of this class of XY problem.
And the proper answer is to either update your commercial Linux distro to the current version, or grab the source code for the newer version of the software package in question, build it using your commercial Linux distro's packaging tool, then execute a standard update of this system component through the distro's regular package update process. Attempting this kind of half-hearted transplant from a newer version of the Linux distro, usually for lack of a local developer or sysadmin with the knowledge needed to prepare a proper software update build, rarely works; at best it results in random instability and crashes, and it becomes a perpetual drain on support resources.
The answer here needs to be the right answer: either update your Linux distro properly, or build the newer version of the software in question from source, using the system compiler.
Related
I'm trying to update glibc 2.19-r1 to the newer version 2.23-r1 in order to fix some security vulnerabilities. I generated a new binary package (tbz2) using a Gentoo system, but now I'm having problems installing it on my system.
My question is: how can I know if there is another feature/application that also needs to be updated? Which dependencies does glibc have?
Which dependencies does glibc have?
It doesn't have any.
When configured, it may require a minimal kernel version to run on. It usually supports kernels up to at least five years old; on x86, much older kernels are often supported as well.
To build it, you also need sufficiently recent versions of gcc, make and some other tools (but these dependencies don't transitively apply to the system on which you want to install it).
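For illustration, that minimum kernel is chosen at configure time via glibc's --enable-kernel switch; a sketch of an out-of-tree build, with the prefix, version, and kernel number as assumptions:

```
# Build glibc in a separate directory (it refuses to build in-tree),
# requiring at least kernel 2.6.32 at run time.
mkdir build && cd build
../glibc-2.23/configure --prefix=/usr --enable-kernel=2.6.32
make -j4 && make check
```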
The code is written in C/C++ and may depend on some libs on the compiling host; it should run on another host without library dependency problems. Both hosts are Linux, but may be different versions.
Do you have a good strategy?
Pure static linking on Linux is discouraged; it's only really possible if you use an alternative libc (such as dietlibc), which isn't an option with C++. My favoured approach is to build the application on the oldest version of Linux you have to support, as newer libc builds will be backwards compatible with it.
This only covers libc; other requirements, such as gtk, pango, etc., will have to be compiled directly into your binary.
Link the application statically, so that it depends on as few dynamically loaded libraries as possible. This is the general solution to this problem.
Other solutions include:
Ship the required libraries with the application, and set the LD_LIBRARY_PATH environment variable so the included versions are preferred (see the wrapper-script sketch after this list).
Write code to dynamically load the libraries using dlopen() and friends, and handle differences in library versions manually.
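A minimal sketch of the bundled-libraries approach; the names and layout (bin/ and lib/) are assumptions:

```
#!/bin/sh
# Hypothetical launcher installed as bin/myapp; the real executable is
# bin/myapp.bin and bundled copies of the libraries live in ../lib.
HERE="$(dirname "$(readlink -f "$0")")"
export LD_LIBRARY_PATH="$HERE/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/myapp.bin" "$@"
```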
Most platforms have a well-defined ABI that covers C code, but ABIs that cover C++ functionality are not yet common.
A program written in C++ using just libc may not run on another platform. If binary compatibility is an important issue, consider using C.
Take all answers to this question into account (static linking, compiling on the oldest Linux, etc.) and then check your final binary with the Linux App Checker to surface compatibility issues with other Linux distributions.
We are about to release a couple of software products with Linux support.
For Mac and Windows, the number of versions to support is quite limited (XP, 2000, Vista, 7 for Windows; 10.4-10.6 for Mac). But for Linux it's another story.
We'd like to support as many Linux distributions as possible, but the choice is large.
The questions are:
Which distribution format (binaries) should we use to support as many Linux distributions as possible?
For testing, what "base Linux" can we test on and extend our results to other Linuxes?
Assuming we provide a statically linked binary with all the dependencies, what do we need to check? I assume kernel version and libc version, but I wonder what else.
Our software is written in ANSI-compliant C with a bit of BSD and POSIX (gettimeofday, pthreads).
So you think three versions each for Mac and Windows is normal, but you shy away from Linux? Hm.
Just make sure it builds using the standard tool chain -- configure, make and make install, traditionally. The rest should take care of itself.
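That is, the traditional autotools sequence; a quick sketch (the install prefix is just an illustration):

```
# The classic build-from-source dance the answer refers to:
./configure --prefix=/usr/local
make
sudo make install
```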
Else, pick what you are comfortable with. For me that would be Debian/Ubuntu; others prefer Fedora. Look at the Linux Standard Base and things like FreeDesktop.org for other standards. Kernel and libc should not matter unless you are doing something very hardware- or driver-specific.
The kernel strives to maintain a backwards-compatible binary API. Statically linked binaries built against 1.0 series kernels are supposed to still run fine to this day on the latest 2.6 series kernels.
If you are statically linking with everything (including libc), then the major problem you are likely to face is different filesystem arrangements, which may not even be a great issue for you. (Testing is the only way to find out, though).
An idea is to survey your proposed customer base to see which Linux versions they run and make a short list from their feedback. However, from what I know (which is subjective!)...
I would suggest offering two different distribution formats -- rpm and .tar.gz. With rpm you cater for the latest Fedora/openSUSE/RHEL/SLES (and derived distros, which is a fair chunk of the corporate market). You are already handling a lot of the dependency problems by static linking, so the kernel version should be a sufficient check.
With a .tar.gz distribution you cater for 'all others', but watch out for support and configuration problems, as they quickly become a time sink.
For testing, have virtual machines of each version you choose to support. These can also be used for product support (I assume you will need to provide product support?). I wouldn't try to extrapolate results between Linux versions because there are too many hidden 'gotchas'.
You can release statically compiled Linux binaries built against a given kernel and glibc version. You really only need to worry about compatibility-breaking revisions. If you have some time, you can set everything up to cross-compile on the same host. The kernel is backward compatible; glibc is more temperamental.
File paths can be assumed to follow the Linux Standard Base if you want to package it with an installer. The more flexible you can be here, the better. I've never heard a customer complain about receiving a tarball of binaries, which I'd recommend offering. I have had customers complain about incorrect assumptions.
Your best bet for a formal package format is probably DEB (Debian and derivatives, like Ubuntu) or RPM (Red Hat and derivatives, like CentOS). Packages are nice to have, but are just a headache if you don't plan on utilizing the native update manager.
For test & build, I'd personally recommend Gentoo. It's pretty raw, however, so you might want to look into Ubuntu as a distant second choice.
This is an issue for your product management team. Once they have determined that producing a Linux version is a desirable idea (i.e. on a cost-benefit basis), then you will need to find out what distros your customers use or want supported.
In principle you can support any of them, but the more you support, the more of a headache it will be, so you want as FEW as possible.
Support as few OS / architecture combinations as your PM thinks you can get away with
Deprecate OSs / architectures as soon as you can
Only take on new ones if premium support customers demand it, or to get big deals, as per your PM's decision.
How hard it is to support them is largely dependent on how complex your product is (esp. dependencies) and how complete its auto-test suite is. Adding more supported OSs ties your hands with respect to library usage, kernel feature usage etc as well as testing, so it's not something you want to be lumbered with long-term.
So in short, it's not a software engineering issue, but a product management one.
I distribute a statically linked binary version of my application on linux. However, on systems with the 2.4 kernel, I get a segfault on startup, and the message: "FATAL: kernel too old."
How can I easily get a version up and running with a 2.4 kernel? Some of the libraries I need aren't even available on old linux distributions circa 2003. Is there an apt-get install or something that will allow me to easily target older kernels?
The easiest way is to simply install VirtualBox (or something similar, e.g. VMware), install CentOS 3 or any suitably old distro with a 2.4 kernel, and build/test your app on that.
Since you're getting a "kernel too old" message, chances are you're relying on some features not present in 2.4 kernels, so you'll have to track those down and rework them.
The error might simply be caused by linking statically against glibc. You could try linking glibc dynamically and all your other libs statically, though to be backwards compatible you'd have to build your app on an old-glibc system. Using the LSB tools to build could help too.
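A hedged sketch of that mixed linking using GCC's -static-libgcc / -static-libstdc++ switches; -lfoo stands in for whatever other libraries you use:

```
# Link libgcc and libstdc++ statically, but leave glibc dynamic.
# -Wl,-Bstatic / -Wl,-Bdynamic toggle how the -l options between them resolve.
g++ -o myapp main.o \
    -static-libgcc -static-libstdc++ \
    -Wl,-Bstatic -lfoo -Wl,-Bdynamic
```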
For my use case, I can't statically link my supporting libraries, and current Linux distributions seem to make this difficult to accomplish in certain situations anyway. But I needed my application binaries to run on 10-year-old Linux systems.
I also didn't want to limit myself to an ancient 10-year-old C/C++ compiler, and the hardware I needed to use prevented me from installing a 10-year-old Linux distribution directly.
So, I did this:
Installed docker.
Within a docker instance, installed a 10-year-old Linux system (I used Debian's Lenny distribution). This has the added advantage of making the build system available to any other machine that can run docker.
Within the docker instance, built the current GNU compilers (8.3.0 when I did this).
This gave me a modern compiler that compiled binaries that would run on very old Linux systems. I did this for both 32-bit and 64-bit processors.
From there, I created a series of scripts that allowed me to use the docker-contained cross-compiler to build all my supporting libraries. I made sure to set the rpath of my compiled binaries to a path relative to the binaries (using -Wl,-rpath,$ORIGIN/../lib), and built a script to retrieve the supporting libraries: g++ -print-search-dirs gives the paths, ldd lists the supporting libraries my binaries need, and some aggressive bash scripting finds those libraries within the g++ search dirs and drops them into the rpath directory I set up.
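A hedged sketch of that collection step; the binary name, layout, and the decision to copy everything ldd resolves are illustrative (in practice you'd exclude libraries you want taken from the host):

```
# Link with an rpath relative to the binary's own location.
# ('$ORIGIN' must be protected from shell expansion, hence the quotes.)
g++ -o bin/myapp main.o -Wl,-rpath,'$ORIGIN/../lib'

# Copy the shared libraries the binary resolves at run time into lib/.
mkdir -p lib
for so in $(ldd bin/myapp | awk '/=> \//{print $3}'); do
    cp -v "$so" lib/
done
```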
From there, I package my binary accordingly, with all supporting libs.
Yeah, this is somewhat painful, but it results in a fully functioning binary capable of working on ridiculously old Linux systems without having to install different Linux distributions on multiple virtual machines.
I tried creating a proper cross-compiler (native to the current Linux distribution hosting my docker images), but found it too difficult to work with, even with the best tools I could find to help me. Compiling the compiler within a docker image took far less of my time, and worked rather smoothly.
Is there any way to make a binary on one Linux distribution and run it on another distribution with the same architecture? Or should I compile and build it on each distribution separately?
Is there any binary compatibility between Red Hat and Debian-based distributions?
(I want to use my Ubuntu binary on Fedora!)
Enter the Linux Standard Base, which exists to reduce the differences between individual Linux distributions.
See
http://www.linuxfoundation.org/collaborate/workgroups/lsb
http://en.wikipedia.org/wiki/Linux_Standard_Base
Statically linking your binaries makes them LESS portable because some libraries won't then work correctly for that machine (differing authentication methods, etc).
If you statically link any "unusual" libraries and keep your set of supported distros to a minimum, you should be ok.
Don't statically link the C library (or the whole binary), that's a recipe for trouble :)
Look at what (e.g.) Google do with Chrome.
What language is your application coded in? If it's in a language like Python (with no C bindings), or Java, or any other VM-based language, then I think you can trust the VM to make sure your application will work across different Linux distributions.
Also, there is the Linux Standard Base which you can refer to.
I realize this is a very old question, but it comes up high in search results and this hasn't been mentioned:
CDE is a tool to create portable Linux applications. It packages together all needed files (including libraries) by analyzing the executable at run-time. I have used it successfully on command-line tools several times; one example was getting tcpdump to run on an old hardware appliance running a custom distribution. CDE also doesn't require source; it just packages an executable you are able to run.
At one point I had an error running the cde command, which was fixed by prepending the command with LD_ASSUME_KERNEL=2.4.1; this might not be necessary in recent versions, as that was years back.
Code is also on GitHub: https://github.com/pgbovine/CDE
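Usage is minimal; a sketch under the assumption that tcpdump is the program being packaged (flags illustrative):

```
# Run the program once under cde; it records every file the process touches
# (executable, libraries, data files) into a self-contained cde-package/.
cde tcpdump -i eth0

# Copy cde-package/ to the target machine and run the generated wrapper there.
```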
It works, but it also depends on the versions of the shared libraries you use, including libc and libstdc++, which are dictated by the compiler version and may differ from distro to distro.
The best way is to distribute the source code and to make it easy to build on any reasonable Linux distribution. This is better than binary distribution because it is not enough to make the binary compatible with shared libraries: you also need to adapt your program to distribution-specific locations and conventions for where web apps go, how e-mail is sent, how services are started, how to determine the default paper size, and a myriad of other details.
See for example the Debian Policy Manual for a document describing many of the things a distribution needs to decide to ensure compatibility between applications running on it. You don't need to read it through or learn it by heart, but it shows the scope of the issues that may trip you up.
You should probably work together with several of the major distributions to ensure your application works well with all of them. Most distributions' developers will happily help if you approach them politely. If you're lucky, you can attract volunteers from the distros to make the binary packaging for you, and that will quickly give you feedback on what you need to change at the source level to make your application easy to package.
The Linux Standard Base already mentioned by others attempts to work out a cross-distribution solution to these variables, but it is not comprehensive and not fully supported by most distributions. However, most distributions consider it a problem if they accidentally break LSB compatibility.
Normally it's OK to use binaries across Linux distributions as long as you have the same set of libraries available. You can use ldd to check which libraries a binary needs. libc should be the same version in the distributions involved.
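For example (the binary name is hypothetical and the output shown is abbreviated; exact paths vary by system):

```
# List the shared libraries a binary needs; anything missing on the target
# system shows up as "not found".
$ ldd ./myapp
        linux-vdso.so.1 (0x...)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
```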
You could statically link your executables for portability.
LSB is definitely worth checking out, though with regard to working with libraries I was most satisfied by this answer here on SO https://stackoverflow.com/questions/1209674/shipping-closed-source-application-for-linux/1242738#1242738 and this detailed treatment of the rpath mechanism: http://www.eyrie.org/~eagle/notes/rpath.html
How about HTML?
It's cross-platform, it's been around forever, and if you consult caniuse, you know your target environment. It can render any UI I ever dreamt of, and if you're willing to learn JavaScript, you can approach this from both a server programming perspective and a client programming perspective without switching languages, which comes in handy if you're the one doing both.
It's probably the closest thing we have to a machine lingua franca these days, and that is a good thing, because it means there are potentially more pickings for everybody involved.
People don't realize that the web browser does most things you would want from a program, including rendering 3D graphics and PGP-style encryption.
The biggest benefit I see in the browser as a platform is that everyone's nephew knows how to install a browser on a new computer, and from there it's just a URL away, including packaging in some of the stores.