Linux/Mac: What is a good method to determine the platform at compile time?

I would like to generalize a build system to compile on several (somewhat similar) platforms. What is a good method for determining the type of host that the shell script or Makefile is running on? I would like to distinguish between Mac and Linux, but also between specific Linux distributions (e.g. RHEL, Ubuntu). Cygwin is not important for me, but if you include it in your response I am sure others will find it valuable.
The rationale includes using the host type to fetch and install the correct versions of binary packages when that is more convenient than compiling from source. In addition, some commercial software is binary-packaged for specific distros, so part of the motivation is to grab the right binary.
Thanks,
SetJmp

Autotools to the rescue. It has tons of macros that help you do this kind of stuff.
http://www.lrde.epita.fr/~adl/autotools.html

uname -a to distinguish the major *nix variants.
I'm not so sure what the best way to distinguish Red Hat from Ubuntu would be - you could look for the package-management tools and query the installed packages, which would eventually let you narrow down the different Debian derivatives, etc. There's probably something more obvious and up-front, though.
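A minimal sketch of that idea, assuming uname(1) and the usual package-manager binaries are on the PATH; the distro labels are only illustrative:

#!/bin/sh
# Rough heuristic only: presence of dpkg/rpm narrows down the distro family.
case "$(uname -s)" in
    Darwin)   echo "mac" ;;
    CYGWIN*)  echo "cygwin" ;;
    Linux)
        if command -v dpkg >/dev/null 2>&1; then
            echo "debian-family (Debian, Ubuntu, ...)"
        elif command -v rpm >/dev/null 2>&1; then
            echo "rpm-family (RHEL, Fedora, SUSE, ...)"
        else
            echo "some other linux"
        fi
        ;;
    *)        echo "unknown" ;;
esac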

Linux variants generally store distro information in /etc/issue.
Most kernels will put info in /proc/version.
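A quick illustration of reading both; keep in mind /etc/issue is free-form text and is sometimes customized by admins, so treat it as a hint rather than gospel:

#!/bin/sh
# Both files are informational free-form text; parse with care.
[ -r /etc/issue ]    && echo "distro banner: $(head -n 1 /etc/issue)"
[ -r /proc/version ] && echo "kernel build:  $(cat /proc/version)"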

It's not completely straightforward. You can use uname to find out the general parameters, but differentiating between distributions is a harder task. Maybe you should consider using something like autoconf to generalise your build system?

Just in case you're using Qt, there's this really nice set of defines, Q_OS_*, that tell you which operating system you're compiling on:
Q_OS_AIX
Q_OS_BSD4
Q_OS_BSDI
Q_OS_CYGWIN
Q_OS_DARWIN
Q_OS_DGUX
Q_OS_DYNIX
Q_OS_FREEBSD
Q_OS_HPUX
Q_OS_HURD
Q_OS_IRIX
Q_OS_LINUX
Q_OS_LYNX
Q_OS_MAC
Q_OS_MSDOS
Q_OS_NETBSD
Q_OS_OS2
Q_OS_OPENBSD
Q_OS_OS2EMX
Q_OS_OSF
...
They are defined in QtGlobal. There are even defines that help you figure out the compiler used (Q_CC_*) or the target windowing system (Q_WS_*).
But if you're not using Qt and want to go for a generic method, you most likely have to fall back to the Autotools package or CMake.
Determining Linux distributions is pretty tricky, but not hard. You first have to figure out which distributions you care about and then make all kinds of distribution-specific file/configuration checks, like in this example, for the ones you've chosen, since you can't really support the whole myriad of Linux distros out there. :-)
As for the Mac side, I'll let the Mac experts answer, but it shouldn't be that hard, since at least the diversity issue doesn't arise.

Related

Contributing to a Linux distribution

I'm interested in contributing to a Linux distro, but regarding the various distros' developer communities, I'm having a bit of trouble figuring out which one I'd most like to join.
Languages I know: C, C++, Lua, Python, and I'm fairly familiar with Perl (though I wouldn't say I "know" it). I have very little experience with x86 assembly besides hacking stuff together for performance tweaks, though that will be partially rectified soon.
What I'm looking for: a community that provides plenty of opportunities for developers to work on various aspects of the distribution. To be honest, I'm most interested in reading and working on the kernel source (in which case the distro doesn't matter), but it's pretty daunting, and I figure getting into the Linux community and working with experienced Linux developers might give me a better idea of how to jump into the guts (let me know if this is bogus, or if you have any advice on that).
So...
Which distro has the "best" developer community in terms of organization, people who are fun to work with, and opportunities to contribute?
I've read various "Contributing to XXX" pages and mailing lists for distros like Ubuntu, OpenSuse, Fedora, etc., but I'd rather get a more personal account from an actual developer.
Unless you have a specific desire to learn the ins and outs of various packaging formats, you would probably be better off contributing directly upstream to applications/libraries that you find interesting. While individual distributions often have a few management applications that are unique(ish) to them, most core applications and libraries are shared between them.
As you have expressed an interest in the guts, it would make sense to stick to one of the main community distros (Fedora and Ubuntu/Debian), as the rest tend to be variations on a base distro. The other option is to choose a source-based distribution, which has a number of advantages for developers, although you may find yourself spending a bit of time keeping your machine trim.
As I'm a developer I personally use Gentoo which gives me a number of things:
Rolling release: New versions of applications are generally available soon after release
Stable/Unstable mix: I can run stable core with bleeding edge on upstream packages I care about
Development ready: Any installed package is by default a "dev" package, the distinction between buildtime/runtime dependencies is blurred
Packaging is easy: If it's as simple as "configure/make/make install", writing an ebuild is very easy.
Contribution is easy: Contributing new ebuilds is fairly painless, from there you can get as involved as you like
Of course there are downsides, not least of all that your machine spends a considerable amount of time building things, and if you run a large selection of "unstable" packages you may find you occasionally need to fix up your machine. However, I find these disadvantages minor compared to having an up-to-date platform from which to contribute upstream.
If you want to work with the kernel then you shouldn't be picking a distribution, but rather working upstream.
Somebody correct me if I'm wrong, but I think that contributing to Ubuntu can be very easy and fun if you use Launchpad. I haven't tried contributing code, but I contribute translations and file bugs on some projects.

How to compile Intel Mac binaries on Linux?

I was reading an article about cross-compiling for OSX on linux, but it was quite hard to understand.
What tools do I need? And what configurations are necessary?
Are there any tools for creating packages too?
First you need odcctools, which contains the assembler and linker and such (like binutils but capable of handling the Mach-O object format). Then you need the system libraries from the official SDK. You can download it from Apple, but you must agree to some stuff and become a member to do so. And finally, good old gcc. Quite easy in theory, but in reality a horrible mess. The easiest way to go (that I know of) is to use I'm Cross!.
Update: I found a newer, more actively updated method called xchain. It requires more manual work than I'm Cross! though.
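Once a toolchain is in place, the invocation itself looks like any other cross-compile. The target prefix and SDK path below are placeholders; the actual names depend on how I'm Cross!/xchain set things up on your machine:

# Hypothetical example only - adjust the prefix and SDK path to your setup.
SDK=/opt/MacOSX10.5.sdk
i686-apple-darwin9-gcc -isysroot "$SDK" -o hello hello.c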

What is the easiest x86 Embedded Linux? [closed]

I want to play around with some embedded Linux. I want it to be able to run on an x86 processor (for a start, it will be running on my regular PC). I have looked online, but the ones I have found seem hard to set up or lack proper documentation. So what are some good embedded x86-compatible Linux distros that are easy to set up or have good documentation on how to get things set up?
Since the definition of "embedded" varies depending on who you talk to, what is considered an embedded Linux distribution will also vary.
As others have said, you can go with a distribution-building toolkit, like:
T2 SDE
OpenEmbedded
LinuxFromScratch
Buildroot
You can also use any "standard" Linux distribution, which can often be customized for an embedded environment. These have the advantage of being heavily tested in their normal environments. So you can choose any of:
Fedora (with Revisor, Instalinux)
OpenSuse (with SuseStudio, Instalinux)
Debian (with Reconstructor, Instalinux)
Ubuntu (with Reconstructor, Instalinux)
Gentoo
Slackware (with NimbleX)
CentOS (with Instalinux)
gNewSense (with Builder)
Finally, you can also build your own completely from source. In that case, BusyBox will probably be helpful, since it provides a lot of functionality and common applications. To help you with that, there is the nice three-part series Building Tiny Linux Systems with Busybox (part 1, part 2, and part 3).
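To give a taste of that from-source route, here is a rough sketch of a tiny BusyBox-based initramfs; it assumes a statically linked busybox binary in the current directory, and every path and applet name here is a placeholder:

#!/bin/sh
# Sketch only: build a minimal BusyBox root and pack it as an initramfs.
mkdir -p rootfs/bin rootfs/proc rootfs/sys rootfs/dev
cp busybox rootfs/bin/
for applet in sh ls cat echo mount; do
    ln -sf busybox "rootfs/bin/$applet"   # applets are just links to the one binary
done

# Minimal init: mount the pseudo-filesystems, then drop to a shell.
cat > rootfs/init <<'EOF'
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
exec /bin/sh
EOF
chmod +x rootfs/init

# Pack it up in the format the kernel expects for an initramfs.
( cd rootfs && find . | cpio -o -H newc | gzip ) > initramfs.gz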
You may want to take a look at the OpenEmbedded project. It is a meta-distribution, meaning it's more of a distribution construction kit than a ready-made distribution. But using it may take some effort on your part. The same applies to all embedded solutions, though.
BusyBox
BusyBox is designed to be a small executable for use with the Linux kernel, which makes it ideal for use with embedded devices. It provides a fairly complete environment for any small or embedded system.
What do you actually mean by "embedded Linux"? It depends on what you want to run on it.
For example you can use OpenWRT, but there are surely others which might better fit your purpose.
If you want to build some multimedia thing, Moblin might be a solution as well.
You might want to look at the Beagle Board.
It's not x86, but it has a decent community of developers, and it will give you a good idea of how to build and run embedded Linux (i.e. flash file system, somewhat limited RAM...), and it's really cheap!
I can also recommend these two books:
Building Embedded Linux Systems and
Embedded Linux Primer
I'd start by having a look at the output of the buildroot tool which comes with busybox.
You are suggesting that you want to make your own Linux distribution. This is fine, but you really need to know how to use an existing one first. I am assuming you fully understand how Linux boots and works on a basic level. You'll need:
Some kind of boot media (in some cases this CAN be a ROM, but usually isn't) that the firmware can boot from (in most cases the firmware on x86 is some kind of BIOS, or BIOS-like - except on things like Macs)
A boot loader - I like to use syslinux because it's easy (and boots from a DOS filesystem)
A kernel
A root filesystem of some kind - you can use an initramfs for this, in which case it's loaded by the bootloader and expanded at boot time. Initramfs is cool; it avoids the need for a "real" root fs or block device drivers etc. (at the expense of some RAM, but RAM is easy).
A C library (unless all your executables are statically linked)
Some userspace software
I'd strongly recommend using an emulator (such as VMware) to test this; it reduces turnaround time a lot. A development system will need to have rather a lot of disk space, as you'll probably need to compile everything in the above list, and possibly some other tools as well (such as gcc and the C library), which aren't small. Your build box will probably need to be running a proper Linux distribution.
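To make the bootloader step concrete, a syslinux configuration for the setup above might look roughly like this; the mount point and file names are placeholders, and the exact installer invocation depends on your syslinux version:

# Hypothetical syslinux.cfg written onto the FAT/DOS boot partition.
cat > /mnt/boot/syslinux.cfg <<'EOF'
DEFAULT linux
LABEL linux
    KERNEL vmlinuz
    APPEND initrd=initramfs.gz console=tty0
EOF
# ...then run the syslinux installer on that partition and mark it bootable.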
I have done this and it's good fun, but frustrating at times (debugging can be a mission in itself)
Happy hacking :)
Busybox + LFS, Gentoo, Arch all do the job well
With the first two you'd compile your stuff in a chroot jail on a dev computer; with the last you don't need to compile, but you do need to mirror/keep your own repository, because you can't get old packages from the official Arch repositories.
I suggest Debian.

Distributing a program in linux without the source

I want to be able to distribute a program on Linux without distributing the source with it. The current solution is distributing a tar.gz with a precompiled binary. What is the easiest way to have this binary placed in the Applications menu? Is there a way to do this that is common across most Linux distributions? Ubuntu, Fedora, and OpenSUSE would be the priority.
You will want to create a .deb and a .rpm. The former covers Ubuntu (Debian variants), and the latter Red Hat variants. You can also supply a standalone executable for other users who can deal with things like menus themselves.
You will have to deal with Gnome and KDE menu management, and different distributions also lay out their menus differently. There is also the issue of netbook variants such as Moblin, which have a netbook interface that probably has its own "add application" mechanism. I don't know if it is possible for a single .deb to handle both the Gnome and KDE menu systems (for Ubuntu and Kubuntu respectively), but I imagine the capability is there to reduce duplication of effort for Ubuntu.
All recent distributions should have xdg-utils installed, which provides scripts such as
xdg-desktop-icon
xdg-desktop-menu
which seem to be what you're looking for.
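For instance, a hedged sketch of installing a menu entry this way; the vendor prefix, names and paths are made up for illustration:

#!/bin/sh
# Illustrative only: "example-myapp" is a made-up vendor-prefixed name.
cat > example-myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=MyApp
Exec=/opt/myapp/bin/myapp
Icon=example-myapp
Categories=Utility;
EOF

xdg-desktop-menu install example-myapp.desktop
xdg-desktop-icon install example-myapp.desktop   # optionally also a desktop icon

Because the .desktop format and xdg-utils are shared by Gnome, KDE and most other desktops, this sidesteps a lot of the per-desktop menu differences mentioned above.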
Haven't looked into it lately... but back in the day (which really wasn't all that long ago) when I was using Linux, RPM was the easiest way to distribute pre-compiled binaries (most distributions had, and still have, some kind of support for RPM packages).
Here's an old how-to on building an RPM package:
Linux Online - RPM How-To
You could look at the BitRock installer.
Try Autopackage or other solutions posted in another question.
Do a tar.gz and then give the community the right to redistribute modified packages. They will make RPMs, DEBs and any other packages for their beloved distributions... which will probably fit their distros much better than you could ever manage.
There are really too many differences between distributions to make a one-size-fits-all package, often subtle ones. For example, some distributions have an "Application" menu section, others "Applications"... and this made menu items disappear on some distros. Libraries can be different, default settings can be different, and so on...
RPMs and DEBs aren't as portable as people believe. With one package there might be problems even between different versions of a single distribution, and there is nothing worse than fighting to install a badly prepared package correctly.
JeeBee is correct that you would want to go with .deb or .rpm.
For Ubuntu/Debian (the .deb), I would add that you do not send it to people; instead you create a "repository" and have the users add its URL to their /etc/apt/sources.list. Then you get an easy way to update the software as well.
That way you solve the distribution and update problems at the same time.
And here is an example of how this could look:
http://www.avrfreaks.net/wiki/index.php/Documentation:AVR32_General/Installing_tools_on_Ubuntu_Linux#Ubuntu_8.04_-_Hardy_Heron
And here is what a repository could look like:
http://www.atmel.no/avr32/ubuntu/
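On the user's side, adding such a repository boils down to something like the following; the URL, suite and package name are placeholders for your own repository:

# Placeholder repository line - substitute your own URL/suite/component.
echo "deb http://packages.example.com/ubuntu hardy main" | \
    sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install myapp    # "myapp" stands in for your actual package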
But don't repeat Atmel's mistake of only doing i386, because there are a lot of other common architectures out there right now, like amd64.
/Johan
For RPM, this three-part tutorial by IBM is the best beginner's guide to packaging I know:
http://www.ibm.com/developerworks/library/l-rpm1/
http://www.ibm.com/developerworks/library/l-rpm2/
http://www.ibm.com/developerworks/library/l-rpm3.html

Why use build tools like Autotools when we can just write our own makefiles?

Recently, I switched my development environment from Windows to Linux. So far, I have only used Visual Studio for C++ development, so many concepts, like make and Autotools, are new to me. I have read the GNU make documentation and have a rough idea of how it works, but I am kind of confused about Autotools.
As far as I know, makefiles are used to make the build process easier.
Why do we need tools like Autotools just for creating the makefiles? Since everyone knows how to create a makefile, I am not getting the real use of Autotools.
What is the standard? Do we need to use tools like this or would just handwritten makefiles do?
You are talking about two separate but intertwined things here:
Autotools
GNU coding standards
Within Autotools, you have several projects:
Autoconf
Automake
Libtool
Let's look at each one individually.
Autoconf
Autoconf easily scans an existing tree to find its dependencies and creates a configure script that will run under almost any kind of shell. The configure script allows the user to control the build behavior (i.e. --with-foo, --without-foo, --prefix, --sysconfdir, etc.) as well as doing checks to ensure that the system can compile the program.
Configure generates a config.h file (from a template) which programs can include to work around portability issues. For example, if HAVE_LIBPTHREAD is not defined, use forks instead.
I personally use Autoconf on many projects. It usually takes people some time to get used to m4. However, it does save time.
You can have makefiles inherit some of the values that configure finds without using automake.
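From the user's side this is the familiar drill; the switches shown are just examples of the kind of options a generated configure script accepts:

# Typical workflow for an Autoconf-based package (flags are examples).
./configure --prefix=/usr/local --sysconfdir=/etc --without-foo
make
make install    # often as root, or staged with DESTDIR=/tmp/pkg make install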
Automake
Automake automatically creates Makefiles that adhere to the GNU coding standards from a short template describing what programs will be built and what objects need to be linked to build them. This includes dependency handling and all of the required GNU targets.
Some people find this easier. I prefer to write my own makefiles.
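To show how short those templates really are, here is a minimal, hypothetical hello-world project; the package name and file names are made up:

#!/bin/sh
# Hypothetical minimal Autoconf/Automake setup; "myapp" and main.c are made up.
cat > main.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF

cat > configure.ac <<'EOF'
AC_INIT([myapp], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF

cat > Makefile.am <<'EOF'
bin_PROGRAMS = myapp
myapp_SOURCES = main.c
EOF

autoreconf --install   # generates configure, Makefile.in and the helper scripts
./configure && make    # dist, distcheck, install, uninstall etc. come for free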
Libtool
Libtool is a very cool tool for simplifying the building and installation of shared libraries on any Unix-like system. Sometimes I use it; other times (especially when just building static link objects) I do it by hand.
There are other options too, see StackOverflow question Alternatives to Autoconf and Autotools?.
Build automation & GNU coding standards
In short, you really should use some kind of portable build configuration system if you release your code to the masses. What you use is up to you. GNU software is known to build and run on almost anything. However, you might not need to adhere to such strict (and sometimes extremely pedantic) standards.
If anything, I'd recommend giving Autoconf a try if you're writing software for POSIX systems. Just because Autotools produce part of a build environment that's compatible with GNU standards doesn't mean you have to follow those standards (many don't!) :) There are plenty of other options, too.
Edit
Don't fear m4 :) There is always the Autoconf macro archive. Plenty of examples and drop-in checks. Write your own or use what's tested. Autoconf is far too often confused with Automake. They are two separate things.
First of all, the Autotools are not an opaque build system but a loosely coupled tool-chain, as tinkertim already pointed out. Let me just add some thoughts on Autoconf and Automake:
Autoconf is the configuration system that creates the configure script based on feature checks that are supposed to work on all kinds of platforms. A lot of system knowledge has gone into its m4 macro database during the 15 years of its existence. On the one hand, I think the latter is the main reason Autotools have not been replaced by something else yet. On the other hand, Autoconf used to be far more important when the target platforms were more heterogeneous and Linux, AIX, HP-UX, SunOS, ..., and a large variety of different processor architectures had to be supported. I don't really see its point if you only want to support recent Linux distributions and Intel-compatible processors.
Automake is an abstraction layer for GNU Make and acts as a Makefile generator from simpler templates. A number of projects eventually got rid of the Automake abstraction and reverted to writing Makefiles manually because you lose control over your Makefiles and you might not need all the canned build targets that obfuscate your Makefile.
Now to the alternatives (and I strongly suggest an alternative to Autotools based on your requirements):
CMake's most notable achievement is replacing Autotools in KDE. It's probably the closest you can get if you want to have Autoconf-like functionality without the m4 idiosyncrasies. It brings Windows support to the table and has proven to be applicable in large projects. My beef with CMake is that it is still a Makefile generator (at least on Linux) with all its inherent problems (e.g. Makefile debugging, timestamp signatures, implicit dependency order).
SCons is a Make replacement written in Python. It uses Python scripts as build control files allowing very sophisticated techniques. Unfortunately, its configuration system is not on par with Autoconf. SCons is often used for in-house development when adaptation to specific requirements is more important than following conventions.
If you really want to stick with Autotools, I strongly suggest reading Recursive Make Considered Harmful (archived) and writing your own GNU Makefile configured through Autoconf.
The answers already provided here are good, but I'd strongly recommend not taking the advice to write your own makefile if you have anything resembling a standard C/C++ project. We need the autotools instead of handwritten makefiles because a standard-compliant makefile generated by automake offers a lot of useful targets under well-known names, and providing all these targets by hand is tedious and error-prone.
Firstly, writing a Makefile by hand seems a great idea at first, but most people will not bother to write more than the rules for all, install and maybe clean. automake generates dist, distcheck, clean, distclean, uninstall and all these little helpers. These additional targets are a great boon to the sysadmin that will eventually install your software.
Secondly, providing all these targets in a portable and flexible way is quite error-prone. I've done a lot of cross-compilation to Windows targets recently, and the autotools performed just great, in contrast to most hand-written makefiles, which were mostly a pain in the ass to compile. Mind you, it is possible to create a good Makefile by hand. But don't overestimate yourself; it takes a lot of experience and knowledge about a bunch of different systems, and automake creates great Makefiles for you right out of the box.
Edit: And don't be tempted to use the "alternatives". CMake and friends are a horror to the deployer because they aren't interface-compatible to configure and friends. Every half-way competent sysadmin or developer can do great things like cross-compilation or simple things like setting a prefix out of his head or with a simple --help with a configure script. But you are damned to spend an hour or three when you have to do such things with BJam. Don't get me wrong, BJam is probably a great system under the hood, but it's a pain in the ass to use because there are almost no projects using it and very little and incomplete documentation. autoconf and automake have a huge lead here in terms of established knowledge.
So, even though I'm a bit late with this advice for this question: Do yourself a favor and use the autotools and automake. The syntax might be a bit strange, but they do a way better job than 99% of the developers do on their own.
For small projects or even for large projects that only run on one platform, handwritten makefiles are the way to go.
Where autotools really shine is when you are compiling for different platforms that require different options. Autotools is frequently the brains behind the typical
./configure
make
make install
compilation and install steps for Linux libraries and applications.
That said, I find autotools to be a pain and I've been looking for a better system. Lately I've been using bjam, but that also has its drawbacks. Good luck finding what works for you.
Autotools are needed because Makefiles are not guaranteed to work the same across different platforms. If you handwrite a Makefile, and it works on your machine, there is a good chance that it won't on mine.
Do you know what unix your users will be using? Or even which distribution of Linux? Do you know where they want software installed? Do you know what tools they have, what architecture they want to compile on, how many CPUs they have, how much RAM and disk might be available to them?
The *nix world is a cross-platform landscape, and your build and install tools need to deal with that.
Mind you, the auto* tools date from an earlier epoch, and there are many valid complaints about them, but the several projects to replace them with more modern alternatives are having trouble developing a lot of momentum.
Lots of things are like that in the *nix world.
Autotools is a disaster.
The generated ./configure script checks for features that have not been present on any Unix system for the last 20 years or so, and it spends a huge amount of time doing so.
Running ./configure takes ages. Although modern server CPUs can have dozens of cores, and there may be several such CPUs per server, ./configure is single-threaded. We still have enough years of Moore's law left that the number of CPU cores will keep going up, so the time ./configure takes will stay approximately constant, whereas parallel build times are cut in half roughly every two years thanks to Moore's law. Actually, I would say the time ./configure takes might even increase, due to increasing software complexity taking advantage of improved hardware.
The mere act of adding just one file to your project requires you to run automake, autoconf and ./configure, which takes ages, and then you'll probably find that since some important files have changed, everything will be recompiled. So add just one file, and make -j${CPUCOUNT} recompiles everything.
And about make -j${CPUCOUNT}: the generated build system is recursive. Recursive make has long been considered harmful.
Then when you install the software that has been compiled, you'll find that it doesn't work. (Want proof? Clone protobuf repository from Github, check out commit 9f80df026933901883da1d556b38292e14836612, install it to a Debian or Ubuntu system, and hey presto: protoc: error while loading shared libraries: libprotoc.so.15: cannot open shared object file: No such file or directory -- since it's in /usr/local/lib and not /usr/lib; workaround is to do export LD_RUN_PATH=/usr/local/lib before typing make).
The theory is that by using autotools, you could create a software package that can be compiled on Linux, FreeBSD, NetBSD, OpenBSD, DragonflyBSD and other operating systems. The fact? Every non-Linux system that builds packages from source has numerous patch files in its repository to work around autotools bugs. Just take a look at, e.g., the FreeBSD /usr/ports: it's full of patches. So, it would have been just as easy to create a small patch for a non-autotools build system on a per-project basis as to create one for an autotools build system on a per-project basis. Or perhaps even easier, as standard make is much easier to use than autotools.
The fact is, if you create your own build system based on standard make (and make it inclusive and not recursive, following the recommendations of the "Recursive make considered harmful" paper), things work much better. Also, your build time goes down by an order of magnitude, perhaps even two orders of magnitude if your project is a very small one of 10-100 C files and you have dozens of cores per CPU and multiple CPUs. It's also much easier to interface custom automatic code-generation tools with a custom build system based on standard make than to deal with the m4 mess of autotools. With standard make, you can at least type a shell command into the Makefile.
So, to answer your question: why use autotools? Answer: there is no reason to do so. Autotools has been obsolete ever since commercial Unix became obsolete, and the advent of multi-core CPUs has made it even more obsolete. Why programmers haven't realized that yet is a mystery. I'll happily use standard make for my build systems, thank you. Yes, it takes some amount of work to generate the dependency files for C-language header inclusion, but that work is more than made up for by not having to fight with autotools.
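For what it's worth, that dependency-file work is only a few lines with GCC's -MMD/-MP options; here is a hedged sketch of such a flat, non-recursive Makefile (the source files and program name are assumptions, and note that the recipe lines must start with a literal tab):

# Sketch of a plain, non-recursive Makefile with automatic header dependencies.
cat > Makefile <<'EOF'
CC     := cc
CFLAGS := -O2 -MMD -MP        # -MMD/-MP write a .d dependency file next to each .o

SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

myprog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

clean:
	rm -f myprog $(OBJS) $(OBJS:.o=.d)

-include $(OBJS:.o=.d)        # pull in the generated header dependencies
EOF
make -j"$(nproc)"             # parallel builds are safe because nothing recurses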
I don't feel I'm enough of an expert to answer this, but I'll still give you a bit of an analogy from my experience.
To some extent it is similar to why we should write embedded code in C (a high-level language) rather than in assembly language.
Both serve the same purpose, but the latter is lengthier, more tedious, more time-consuming and more error-prone (unless you know the ISA of the processor very well).
The same is the case with the Automake tool versus writing your own makefile.
Writing Makefile.am and configure.ac is much simpler than writing an individual project Makefile.
