Library version differences between Linux distros

I'm debugging a program that runs fine on one machine running (fill in the blank with some linux distro and version) but acts flaky on another machine running (different distro/version).
A helpful clue would be to see a side-by-side list of versions of all the major libraries provided by each distro - Qt, boost, libpng, fftw3, etc. including all the obscure ones. We're not concerned about upgrades to libraries that may have happened since (for now).
Today it's Red Hat 5 vs. Fedora 13, but in the past I've wanted to compare Fedora and Ubuntu, and in the future any combination of major distros and versions, from at least four years ago up to last week, could come up. I'm hunting for a general way to find these library version differences.
A web app that lets one pick any two distros and get a list would be awesome. Does such a thing exist? (If not, there's an idea for some entrepreneur.)
Note: I don't have access to one of the machines, so sorting and comparing 'ls /usr/lib' is out.

The closest thing I know of would be this one (the name rpmfind.net gives away that it's RPM-centric, so Ubuntu will probably not be listed): http://rpmfind.net/linux/RPM/
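If you at least have shell access to the machines you control, a rough approach is to dump each box's installed package name/version pairs and diff the lists; for the machine you cannot reach, a list for its exact release can often be reconstructed from the distro's published repository metadata (which is essentially what rpmfind indexes). A minimal sketch, assuming an RPM-based system on one side and a dpkg-based one on the other; the output file names are placeholders:

    # On an RPM-based box (Red Hat, Fedora, SUSE): dump name/version of every installed package.
    rpm -qa --qf '%{NAME}\t%{VERSION}-%{RELEASE}\n' | sort > packages-$(hostname).txt

    # On a dpkg-based box (Debian, Ubuntu) the equivalent would be:
    dpkg-query -W -f '${Package}\t${Version}\n' | sort > packages-$(hostname).txt

    # Once you have a list per release (a local dump, or one reconstructed from
    # repository metadata), compare them side by side:
    diff --side-by-side packages-rhel5.txt packages-fedora13.txt | less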

Related

How to manage multiple Haskell installations on macOS?

I have on my computer (macOS High Sierra) multiple installations of the standard (Glasgow) Haskell compiler. The oldest one (as far as I can remember) is a minimal installation, while I obtained the most recent version from the "Platform" installer, something like five or six months ago. I'm trying to get rid of the whole ecosystem (ghc, stack, cabal, and friends), mostly because I don't know which tools I will want to use if I ever come back to the language.
So my questions are:
Where does the minimal/"Platform" installer install Haskell and all related stuff?
How can one remove the whole Haskell language and all its components mentioned above (in the case of a "Platform" install, and even in the case of a minimal installation)?
EDIT: I've just remembered the uninstall-hs command (see also here); should I run this instead of removing files manually?
I think I have come up with a solution and I'll answer the latter question first:
All the details I was looking for (what is actually installed after launching the minimal / "Platform" installer and, most importantly, where those pieces end up) can be found in the documentation welcome file at /Library/Haskell/Doc/Start.html. More precisely:
On Mac OS X, the Haskell Platform is installed in two major pieces: GHC and Haskell Platform. They are installed respectively in: /Library/Frameworks/GHC.framework and /Library/Haskell. Executables are symlinked in /usr/local/bin [...].
There are also two or three more directories in the home folder, namely ~/.ghc and ~/.stack.
As mentioned in the edit above, the uninstall-hs script did its work, showing a (hopefully) comprehensive list of all the files that constitute the installed Haskell "SDK".
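For reference, a manual removal based only on the locations listed above would look roughly like the sketch below. This is a hedged sketch, not the official procedure: uninstall-hs is the safer route, ~/.cabal is my assumption (it only exists if you used cabal), and you should review every path before deleting anything.

    # Remove the two main installation trees named in Start.html:
    sudo rm -rf /Library/Frameworks/GHC.framework
    sudo rm -rf /Library/Haskell
    # Remove the per-user state directories (~/.cabal is an assumption):
    rm -rf ~/.ghc ~/.stack ~/.cabal
    # List the symlinks in /usr/local/bin that are now dangling; review the output before deleting:
    find /usr/local/bin -type l ! -exec test -e {} \; -print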
Feel free to post your answers, I hope this post will be useful to the community.

Linux kernel version numbers?

I'm somewhat confused about Linux kernel version numbers. I noticed that many Linux distributions do not use the latest kernels; in fact I have seen a few using version 2.4. Is there a reason for this? Are there any differences between 2.4, 2.6, and 3.2 apart from age? What are the security implications of using an older kernel?
Different kernels have different features and improvements. The developers of a distribution look at the features they are going to support and then decide which kernel to use based on which one provides the most stable support for those features.
It is the decision of the distribution's developers which one to use, and they will go with the one they feel comfortable with.
This is a feature of software development; it's like asking what the difference between Java 1.1 and Java 1.7 is apart from age... the answer is: many things.
Most kernels and software also have a security patch schedule, and it is up to the user to keep their systems patched and up to date. If you do not, you invite security issues, as they will never be fixed.
Between major releases (e.g. 2.4 and 2.6), new features are introduced and old features are deprecated, eventually making the kernel binary-incompatible.
This is important if you depend on some kernel module that is not part of the mainline kernel (e.g. it is proprietary and the original authors don't provide updated modules, or it is open source but never made it into the kernel and nobody is willing to spend the time to migrate the code).
Also, a new major release might change the system behaviour significantly. I remember that when switching from 2.4 to 2.6, many people who needed low-latency audio (that's my background, so forgive me) stayed with the old kernel, since the new scheduling algorithms performed worse in some situations.
I hope the page below helps in understanding the Linux kernel version numbering system:
http://www.linfo.org/kernel_version_numbering.html
A snippet from http://www.tldp.org/FAQ/Linux-FAQ/kernel.html#linux-versioning:
Linux version numbers follow a longstanding tradition. Each version has three numbers, i.e., X.Y.Z. The "X" is only incremented when a really significant change happens, one that makes software written for one version no longer operate correctly on the other. This happens very rarely -- in Linux's history it has happened exactly once.
The "Y" tells you which development "series" you are in. A stable kernel will always have an even number in this position, while a development kernel will always have an odd number.
The "Z" specifies which exact version of the kernel you have, and it is incremented on every release.
There is a human-readable changelog for the Linux Kernel.
Some people take the attitude "if it works - don't change it". So, there are security implications of running an older kernel - but there's also the risk that upgrading may break something.
As kernels improve/change, they might break expectations that held on earlier versions. If a distribution becomes unstable because too much software breaks under the new version of the kernel, the developers might opt to use an earlier version until the software is able to handle the changes in the new kernel.
Typically a binary image may not be portable between kernels where the first or second part of the version number changes. So in order to upgrade from 2.4 to 2.6, a large proportion of libs and executables will need to be upgraded / recompiled too. This can be a particular problem if you use closed source applications/drivers.
Most Linux distributors will back port patches from newer kernel versions depending on the security impact / demand for additional functionality. If you're not able to upgrade your entire system at regular intervals, then you need to find a distribution which explicitly supports a long support cycle with a good history of back porting (e.g. RHEL, SuSE).

Linux/Mac: What is a good method to determine the platform at compile time?

I would like to generalize a build system to compile on several (somewhat similar) platforms. What is a good method for determining the type of host that the shell script or Makefile is running on? I would like to distinguish between Mac and Linux, but also between specific Linux distributions (e.g. RHEL, Ubuntu). Cygwin is not important for me, but if you include it in your response I am sure others will find it valuable.
The rationale may include using the host type to fetch and install the correct versions of binary packages when it is more convenient to do so than compile from source. In addition, some commercial software is binary-packaged for specific distros, so part of the motivation is to grab the right binary.
Thanks,
SetJmp
Autotools to the rescue. It has tons of macros that help you do this kind of stuff.
http://www.lrde.epita.fr/~adl/autotools.html
uname -a to distinguish major *nix variants
Not so sure what the best way to distinguish Red Hat from Ubuntu would be; you could look for package-management tools and query installed packages, which would eventually help you narrow down the different Debian derivatives, etc. There's probably something more obvious and up-front, though.
Linux variants generally store distro information in /etc/issue.
Most kernels will put info in /proc/version.
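Putting those pieces together, a build script could do something along these lines. This is only a sketch: /etc/os-release is an assumption (it exists on reasonably modern distros), and the fallbacks are deliberately crude.

    #!/bin/sh
    # Hedged sketch of host detection for a build script.
    case "$(uname -s)" in
      Darwin)
        PLATFORM=mac ;;
      Linux)
        if [ -r /etc/os-release ]; then
          . /etc/os-release          # defines ID, e.g. "ubuntu", "fedora", "rhel"
          PLATFORM="linux-$ID"
        elif [ -r /etc/issue ]; then
          PLATFORM="linux-$(head -n1 /etc/issue | awk '{print tolower($1)}')"
        else
          PLATFORM=linux-unknown
        fi ;;
      CYGWIN*)
        PLATFORM=cygwin ;;
      *)
        PLATFORM=unknown ;;
    esac
    echo "Detected platform: $PLATFORM"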
It's not completely straightforward. You can use uname to find out the general parameters but to differentiate between distributions is a harder task. Maybe you should consider using something like autoconf to generalise your build system?
Just in case you're using Qt, there's this really nice set of defines, Q_OS_*, that tell you which operating system you're compiling on:
Q_OS_AIX
Q_OS_BSD4
Q_OS_BSDI
Q_OS_CYGWIN
Q_OS_DARWIN
Q_OS_DGUX
Q_OS_DYNIX
Q_OS_FREEBSD
Q_OS_HPUX
Q_OS_HURD
Q_OS_IRIX
Q_OS_LINUX
Q_OS_LYNX
Q_OS_MAC
Q_OS_MSDOS
Q_OS_NETBSD
Q_OS_OS2
Q_OS_OPENBSD
Q_OS_OS2EMX
Q_OS_OSF
...
They are defined in QtGlobal. There are even defines that help you figure out the compiler used (Q_CC_*) or the target windowing system (Q_WS_*).
But if you're not using Qt and want to go for a generic method, you most likely have to fall back to the Autotools package or CMake.
Determining Linux distributions is pretty tricky, but not hard. You first have to figure out which distributions you care about and then make all kinds of distribution-specific file/configuration checks, like in this example, for the ones you've chosen, since you can't really support all of the myriad Linux distros out there. :-)
As for the Mac side, I'll let the Mac experts answer, but it shouldn't be that hard, since at least the diversity issue doesn't come into play.

Contributing to a Linux distribution

I'm interested in contributing to a Linux distro, but regarding the various distro's developer communities, I'm having a bit of trouble figuring out which one I'd most like to join.
What languages I know: C, C++, Lua, Python, and fairly familiar with Perl (though I wouldn't say I "know" it). In particular, I have very little experience with x86 assembly besides hacking stuff together for performance tweaks, though that will be partially rectified soon.
What I'm looking for: A community that provides plenty of opportunities for developers to work on various aspects of the distribution. To be honest, I'm most interested in reading and working on the kernel source (in which case the distro doesn't matter), but it's pretty daunting, and I figure getting into the Linux community and working with experienced Linux developers might give me a better idea of how to jump into the guts (let me know if this is bogus, or if you have any advice regarding that).
So...
Which distro has the "best" developer community in terms of organization, people who are fun to work with, and opportunities to contribute?
I've read various "Contributing to XXX" pages and mailing lists for distros like Ubuntu, OpenSuse, Fedora, etc. but I'd rather get a more personal testament from an actual developer.
Unless you have a specific desire to learn the ins and outs of various packaging formats, you would probably be better off contributing directly upstream to applications/libraries that you find interesting. While individual distributions often have a few management applications that are unique(ish) to them, most core applications and libraries are shared between them.
As you have expressed an interest in the guts, it would make sense to stick to one of the main community distros (Fedora and Ubuntu/Debian), as the rest tend to be variations on a base distro. The other option is to choose a source-based distribution, which has a number of advantages for developers, although you may find yourself spending a bit of time keeping your machine trim.
As a developer I personally use Gentoo, which gives me a number of things:
Rolling release: New versions of applications are generally available soon after release
Stable/Unstable mix: I can run stable core with bleeding edge on upstream packages I care about
Development ready: Any installed package is by default a "dev" package, the distinction between buildtime/runtime dependencies is blurred
Packaging is easy: If it's as simple as "configure/make/make install", writing an ebuild is very easy (see the sketch after this list).
Contribution is easy: Contributing new ebuilds is fairly painless, from there you can get as involved as you like
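As an illustration of that point, here is roughly what a trivial ebuild for a standard autotools-style package can look like. Everything here (name, homepage, URI) is a placeholder, and this is a sketch rather than a polished, repository-quality ebuild:

    # myapp-1.0.ebuild (hypothetical package, placed in a local overlay)
    EAPI=8
    DESCRIPTION="Example application built from a plain autotools tarball"
    HOMEPAGE="https://example.org/myapp"
    SRC_URI="https://example.org/releases/${P}.tar.gz"
    LICENSE="MIT"
    SLOT="0"
    KEYWORDS="~amd64 ~x86"
    # No src_configure/src_compile/src_install phases are needed here:
    # the default phase functions already run ./configure, make and make install.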
Of course there are downsides, not least of which is that your machine spends a considerable amount of time building things, and if you run a large selection of "unstable" packages you may find you occasionally need to fix up your machine. However, I find these disadvantages minor compared to having an up-to-date platform from which to contribute upstream.
If you want to work with the kernel then you shouldn't be picking a distribution, but rather working upstream.
Somebody correct me if I'm wrong, but I think that contributing to Ubuntu can be very easy and fun if you use Launchpad. I haven't tried contributing code, but I contribute translations and file bugs on some projects.

Distributing a program in linux without the source

I want to be able to distribute a program in Linux without distributing the source with it. The current solution is distributing a tar.gz with a precompiled binary. What is the easiest way to have this binary placed in the Applications menu? Is there a way to do this that is common across most Linux distributions? Ubuntu, Fedora, and OpenSUSE would be the priority.
You will want to create a .deb and a .rpm. The former covers Ubuntu (Debian variants), and the latter Red Hat variants. You can also supply a standalone executable for other users who can deal with things like menus themselves.
You will have to deal with Gnome and KDE menu management, and also different distributions lay out their menus differently. There is also the issue of netbook variants such as Moblin, that have a netbook interface that probably has its own "add application" mechanism. I don't know if it is possible for a single .deb to handle both Gnome and KDE menus systems (for Ubuntu and Kubuntu respectively) but I imagine the capability is there to reduce duplication of effort for Ubuntu.
All recent distributions should have xdg-utils installed, which provides scripts such as
xdg-desktop-icon
xdg-desktop-menu
which seem to be what you're looking for.
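For example (the names, paths, and the vendor prefix below are placeholders), you could ship a .desktop entry and install it into the menu from your package's post-install step with xdg-desktop-menu, roughly like this:

    # A minimal desktop entry for a hypothetical application:
    cat > example-myapp.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=MyApp
    Exec=/opt/myapp/bin/myapp
    Icon=example-myapp
    Categories=Utility;
    EOF

    # Install it into the menus of any XDG-compliant desktop (GNOME, KDE, Xfce, ...):
    xdg-desktop-menu install example-myapp.desktop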
Haven't looked into it lately... but back in the day (which really wasn't all that long ago) when I was using Linux, RPM was the easiest way to distribute precompiled binaries (most distributions had, and still have, some kind of support for RPM packages).
Here's an old how-to on building an RPM package:
Linux Online - RPM How-To
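To give a feel for it, a minimal spec file for packaging a prebuilt binary can look roughly like the following. All names and paths are placeholders, and details (debuginfo handling, dist tags, proper %prep/%build sections) vary between distributions; the how-to above covers the full story.

    # myapp.spec: hedged sketch for packaging a prebuilt binary
    %global debug_package %{nil}   # skip debuginfo extraction for the prebuilt binary
    Name:     myapp
    Version:  1.0
    Release:  1
    Summary:  Example application shipped as a prebuilt binary
    License:  Proprietary
    Source0:  myapp

    %description
    Example application distributed without source.

    %install
    mkdir -p %{buildroot}/usr/bin
    install -m 0755 %{SOURCE0} %{buildroot}/usr/bin/myapp

    %files
    /usr/bin/myapp

With the binary dropped into the rpmbuild SOURCES directory, rpmbuild -bb myapp.spec should produce the package.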
You could look at the BitRock installer.
Try Autopackage or other solutions posted in another question.
Ship a tar.gz and give the community the right to redistribute modified packages. They will make RPMs, DEBs, and any other packages for their beloved distributions... which will probably fit their distros much better than anything you could make.
There are really too many differences between distributions, often subtle ones, to make a one-size-fits-all package. For example, some distributions have an "Application" menu section, others "Applications"... and this made menu items disappear on some distros. Libraries can be different, default settings can be different, and so on...
RPMs and DEBs aren't as portable as people believe. A single package might have problems even across different versions of a single distribution, and there is nothing worse than fighting to correctly install a badly prepared package.
JeeBee is correct that you would want to go with .deb or .rpm.
For Ubuntu/Debian (the .deb) I would add that you do not send the package to people directly; instead you create a "repository" and have the users add its URL to their /etc/apt/sources.list, which also gives you an easy way to update the software.
That way you solve the distribution and update problems at the same time.
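The line users add to /etc/apt/sources.list typically looks something like this (the URL, suite, and component are placeholders for your own repository):

    deb http://packages.example.com/ubuntu stable main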
And here is an example of how this can look:
http://www.avrfreaks.net/wiki/index.php/Documentation:AVR32_General/Installing_tools_on_Ubuntu_Linux#Ubuntu_8.04_-_Hardy_Heron
And how a repository can look:
http://www.atmel.no/avr32/ubuntu/
But don't repeat Atmel's mistake of only doing i386, because there are a lot of other common architectures out there right now, like amd64.
/Johan
For RPM, this three-part tutorial by IBM is the best beginner's guide to packaging I know:
http://www.ibm.com/developerworks/library/l-rpm1/
http://www.ibm.com/developerworks/library/l-rpm2/
http://www.ibm.com/developerworks/library/l-rpm3.html

Resources