I am interested in cross-compiling a Linux kernel for an ARM target on an x86 host. Are there any good practices you recommend? Which cross-compile suite do you consider the best?
Have you set up a custom cross-compile environment? If so, what advice do you have? Is it a good idea?
There are two approaches I've used for ARM/Linux tools. The easiest is to download a pre-built tool chain directly.
Pro: It just works and you can get on with the interesting part of your project
Con: You are stuck with whichever version of gcc/binutils/libc they picked
If the latter matters to you, check out crosstool-ng. This project is a configuration tool similar to the Linux kernel configuration application. Set which versions of gcc, binutils, libc (GNU or uClibc), threading, and Linux kernel to build, and crosstool-ng does the rest (i.e. downloads the tarballs, configures the tools, and builds them); a typical session is sketched after the pro/con list below.
Pro: You get exactly what you selected during the configuration
Con: You get exactly what you selected during the configuration
meaning you take on full responsibility for the choice of compiler/binutils/libc and their associated features/shortcomings/bugs. Also, as mentioned in the comments, there is some "pain" involved in selecting the versions of binutils, C library, etc., as not all combinations necessarily work together or even build.
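A typical crosstool-ng session looks roughly like this (the sample name is only an example; run the list command to see what your version actually ships):
$ ct-ng list-samples                  # show the bundled example configurations
$ ct-ng arm-unknown-linux-gnueabi     # start from a sample close to your target
$ ct-ng menuconfig                    # pick the gcc/binutils/libc/kernel versions
$ ct-ng build                         # download the tarballs, configure and build everything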
One hybrid approach might be to start with the pre-built tools and replace them later with a custom solution via crosstool-ng if necessary.
Update: The answer originally used the CodeSourcery tools as an example of a pre-built tool chain. The CodeSourcery tools for ARM were free to download from Mentor Graphics, but they are now called the Sourcery CodeBench and must be purchased from Mentor Graphics. Other options now include Linaro as well as distribution specific tools from Android, Ubuntu, and others.
I use the emdebian toolchain for compiling stuff for my ARM machines that isn't happy being compiled natively within the limited resources available (/me glares at the kernel). The main package is gcc-4.X-arm-linux-gnueabi (X = 1, 2, 3), and it provides appropriately suffixed gcc/cpp/ld/etc. commands. I add this to my sources.list:
deb http://www.emdebian.org/debian/ unstable main
Of course, if you're not using Debian, this probably isn't so useful, but by gum it works well for me.
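For illustration, installing and invoking one of those suffixed compilers goes roughly like this (the exact 4.X version available from emdebian will vary, so treat the names as placeholders):
$ sudo apt-get update
$ sudo apt-get install gcc-4.2-arm-linux-gnueabi    # one of the gcc-4.X-arm-linux-gnueabi packages
$ arm-linux-gnueabi-gcc-4.2 -o hello hello.c        # the suffixed cross compiler it provides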
I've used Scratchbox while experimenting with building apps for Maemo (Nokia N810), which uses an ARM processor. Supposedly, Scratchbox is not restricted to Maemo development.
I've used crosstool on several targets. It's great as long as you want to build your toolchain from scratch.
Of course there are several pre-built toolchains for ARM as well; just google it -- too many to mention here.
1) In my opinion building your own toolchain works best. You end up having tight control over everything, plus if you're new to embedded Linux, it's a GREAT learning experience.
2) Don't go with a commercial toolchain. Even if you don't want to take the time to build your own, there are free alternatives out there.
If your company will spend the money, have them buy you a JTAG debugger.
It will save you tons of time, and it allows you to easily learn and step through the kernel startup, etc.
I highly recommend the Lauterbach JTAG products... They work with a ton of targets and the software is cross-platform. Their support is great as well.
If you can't get a JTAG debugger and you're working in the kernel, use a VM to do that: User-Mode Linux, VMware, etc. Your code will be debugged on x86; porting it to your ARM target will be a different story, but it's a cheaper way to iron out some bugs.
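QEMU is one such option, and its built-in gdb stub makes the workflow convenient. A rough sketch for an x86 kernel build (the paths and the breakpoint are just illustrative):
$ qemu-system-x86_64 -kernel arch/x86/boot/bzImage -append "console=ttyS0" -nographic -s -S
$ gdb vmlinux
(gdb) target remote :1234      # -s opened a gdb stub on port 1234; -S halted the CPU until we attach
(gdb) break start_kernel
(gdb) continue                 # step through early boot from here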
If you're porting a bootloader, use U-Boot. Of course, if you're using a reference platform, then you're probably better off using what they provide with the BSP.
I hope that helps.
Buildroot is a tool I've had reasonably good luck with for building a custom uClibc-based toolchain from scratch. It's very customizable, and not excessively particular about what distribution you happen to be running on.
Also, many of its existing users (e.g. embedded router distros) are targeting ARM.
If you're using Gentoo, getting a cross-compiling toolchain is as easy as
$ emerge crossdev
$ crossdev -t $ARCH-$VENDOR-$OS-$LIBC
where ARCH is arm or armeb, VENDOR is unknown or softfloat, OS is linux, and LIBC is gnu or uclibc.
If all you want is a compiler (and linker) for the kernel, the LIBC part is irrelevant, and you can use -s1/--stage1 to inform crossdev that you only need binutils and gcc.
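For example, building just enough toolchain for a kernel and then pointing the kernel build at it might look like this (the tuple and the defconfig are only examples):
$ crossdev -s1 -t arm-unknown-linux-gnu                              # binutils + stage-1 gcc only
$ cd linux
$ make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnu- versatile_defconfig
$ make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnu- zImage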
This is what Eurotech uses for their Debian ARM distribution. You'll note that they don't recommend using a cross compiler if you can avoid it. Compiling on the target itself tends to be a more reliable way of getting output that you know will run.
There are many distributions of Linux. All of them have one thing in common, however: the kernel. And Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run? What defines the differences between distributions? I am a beginner at this stuff, so don't be harsh if it is a stupid question. Thank you.
Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed or installable, or you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, that is different: the binaries will likely require shared libraries that they would have had on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
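A quick way to check a downloaded binary against your minimal system is ldd (the binary name here is hypothetical):
$ ldd ./some-downloaded-binary
# every "not found" line in the output is a shared library you would have to
# provide before that binary can run on your system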
What defines the differences between distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
Licensing, e.g. Red Hat vs. Debian
Stance on things like GPL/BSD/non-free software
Release schedules, e.g. Debian vs. Ubuntu
Target audience, e.g. Ubuntu vs. Debian
I think the biggest defining factor is package management, i.e. yum/rpm vs. apt/dpkg, and how the base configuration is managed on the machine. This is certainly the thing I use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which is in large part a measure of its success.
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. If you want to create a base distribution of your own, that's a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They have millions, perhaps even billions, of lines of code in them, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it's likely to fail unless the planets are in alignment. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free enough... Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them... The Linux package management method system works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux on the other hand stopped scaling 10 years ago, at the 1000 packages limit...
If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs actually use the kernel directly. They also need a libc, which is responsible for actually implementing most of the C routines used by either the programs themselves or the VMs running their code.
It is possible to statically link libc into the program, but this both bloats the size of the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
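The size cost is easy to see with a trivial program; a rough sketch (the exact numbers depend on your toolchain):
$ gcc -o hello hello.c                 # dynamically linked against the system libc
$ gcc -static -o hello-static hello.c  # libc copied into the binary
$ ls -lh hello hello-static            # the static build is typically much larger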
Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. the nvidia proprietary drivers: some parts run in kernel space while others run in user space, but they require that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature, so to use them you need a reasonably fresh kernel.
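Checking whether a running kernel provides a feature like that is straightforward; a couple of typical places to look:
$ uname -r                       # kernel version
$ grep cgroup /proc/filesystems  # present if cgroup support was compiled in
$ cat /proc/cgroups              # lists the cgroup controllers this kernel offers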
Nevertheless, a lot of the kernel API is stable, so you can rely on it. But usually programs don't call kernel routines directly. The typical way to use a kernel function is to call the corresponding library routine, which wraps the kernel API. The main, most basic library of that kind is libc.
Technically, programs compiled for one version of libc (as well as other shared libraries) can be used with slightly different versions of the corresponding libraries. For example, a lot of people use the Skype build compiled for SuSE on completely different Linux distributions. Skype is a pretty complex application with a lot of libraries linked in, and nevertheless it works without any significant problem. The same goes for a lot of other proprietary programs that couldn't be compiled for a given distribution or even for a given installation. But sometimes shit just happens :) Those binary incompatibilities are quite rare, but they do happen from time to time.
From what I have been reading, a Linux distribution is little more than a packaging of a kernel with various packages and some limited configuration details, such as which window manager and GUI to use by default (assuming you even want a GUI, blech). In the old days there were apparently some unique advantages to distributions. For example, Red Hat had the Red Hat Package Manager (rpm). Of course, nowadays rpm is no longer a unique advantage of Red Hat.
So, why even bother with a distribution? Why not just install a kernel and a bunch of packages of one's own choosing? What's the complexity?
Basically, a GNU/Linux Distro IS a kernel and a "bunch of packages" (GNU packages) of one's choosing.
People create distros to perform specific tasks: server distros, desktop distros, multimedia-oriented distros, etc.
Creating a Linux distro can be a really educational task, as you get to know how a Linux system is built from scratch.
I recommend checking out LFS (Linux From Scratch). It's a project that guides you through assembling your own Linux distro from scratch, and believe me, it's great fun and indeed YOU WILL LEARN A LOT.
If you're interested in getting to know how a Linux distro works, don't miss this.
The webpage says:
Many wonder why they should go through the hassle of building a Linux system from scratch when they could just download an existing Linux distribution. However, there are several benefits of building LFS. Consider the following:
LFS teaches people how a Linux system works internally
Building LFS teaches you about all that makes Linux tick, how things work together and depend on each other. And most importantly, how to customize it to your own tastes and needs.
Building LFS produces a very compact Linux system
When you install a regular distribution, you often end up installing a lot of programs that you would probably never use. They're just sitting there taking up (precious) disk space. It's not hard to get an LFS system installed under 100 MB. Does that still sound like a lot? A few of us have been working on creating a very small embedded LFS system. We installed a system that was just enough to run the Apache web server; total disk space usage was approximately 8 MB. With further stripping, that can be brought down to 5 MB or less. Try that with a regular distribution.
LFS is extremely flexible
Building LFS could be compared to a finished house. LFS will give you the skeleton of a house, but it's up to you to install plumbing, electrical outlets, kitchen, bath, wallpaper, etc. You have the ability to turn it into whatever type of system you need it to be, customized completely for you.
LFS offers you added security
You will compile the entire system from source, thus allowing you to audit everything, if you wish to do so, and apply all the security patches you want or need to apply. You don't have to wait for someone else to provide a new binary package that (hopefully) fixes a security hole. Often, you never truly know whether a security hole is fixed or not unless you do it yourself.
Of course, there are other tools that create a Linux distro based on your hard-disk installation, maybe for backup purposes:
Linux Live
And there are lots of other scripts to get you started; just google for them.
Of course, all of them are automated tools oriented toward the end user, so don't expect to learn a lot from them.
There are lots, thousands, of Linux distros out there, so it is obviously a waste of time to try to make the "ideal" Linux distro and compete with Ubuntu, Mint, etc.
I still recommend you check out Linux From Scratch, just as a weekend educational project. Trust me, you will learn a lot.
It also covers creating embedded Linux distros targeting ARM processors and the like.
If you're in the embedded world, the Yocto Project is worth a look.
I spent the last three weeks researching cross-development under Mac OS X. I want to achieve two separate results, but I believe they can be reached through the same path.
I want to
set up distcc to help my old Gentoo laptop using the iMac I recently got at home (OS X 10.6, 64 bit native) which I also use for iOS development, so Xcode 4 tools are already there;
develop my pet project which is an elf kernel for x86, x86_64, and arm (and I'll stop here as it's OT).
So, after a lot of that thinking we all do in these cases, I came up with the idea that to reach the first goal I need to set up an i686-pc-linux-gnu toolchain (or is it i686-unknown-linux-gnu?) with all the appropriate versions (e.g. gcc-4.4) and make it callable by distcc. It seems like a reasonable task, but unfortunately there seem to be clearer tools and instructions for building toolchains for obscure archs like SPARC or MIPS, and not a single reasonably up-to-date resource on the best way to go about it for x86. Therefore, first question: is there anybody who has successfully built such a toolchain and feels like sharing the pain? :)
Second goal. My current workbench is Gentoo on an i686 laptop (yes, the same one as in the first goal) with all the regular development stuff, and I use QEMU to test it (its gdb integration is awesome). What I'd really like to do is keep using the laptop while travelling (I do a lot of commuting) and continue to work and test on the iMac when I'm home (git is awesome in this respect). Hence, second question: has anybody done something like this and wants to share?
I'd really appreciate any input. Seriously.
EDIT: I know about MacPorts, crosstool, and crosstool-ng. I tried installing i386-elf-binutils 2.18 from MacPorts only to discover I have 2.20 on my laptop. Also, I couldn't get gcc44 to produce i686-pc-linux-gnu ELF objects, and using i386-elf-gcc is not an option as I need 4.4 and the packaged one is 4.3.
This is no easy task, especially because you want to cross-compile for so many different platforms.
The most common approach is to run a virtual machine with the desired OS (e.g. VirtualBox, Parallels, VMware Fusion) and install your workbench tools in it. This is very often done because it's not complex to set up, and it also makes it easier to write, test, and debug code for/from the target system.
Of course, if you search enough you'll find all sorts of hacks/tricks to set up a toolchain on Mac OS X and compile code for other architectures:
One of these uses Buildroot, though that means there is no official support for Mac OS X.
Another one, also interesting, offers a .dmg package with the tools needed to compile for Linux on Mac OS X.
You already mentioned Gentoo, so I think you should take a look at Gentoo Prefix. Gentoo Prefix lets you install a small Gentoo system in a user defined directory (= prefix). From there, you may start a shell which lets you use portage (= Gentoo's package system) which should enable you to install the necessary tools.
I do not know what shape Prefix on OS X is in today, but I was able to install it on a friend's MacBook a year or so ago. If you are interested, I can give further details about the installation process, which can be a bit tricky.
We are about to release a couple of software products with Linux support.
For Mac and Windows the number of versions to support is quite limited (XP, 2000, Vista, 7 for Windows; 10.4-10.6 for Mac). But for Linux it's another story.
We'd like to support as many Linux distributions as possible, but the choice is large.
The questions are:
Which distribution format (binaries) should we use to support as many Linux distributions as possible?
For testing, what "base Linux" can we test on and extend our results to other Linuxes?
Assuming we provide a statically linked binary with all the dependencies, what do we need to check? I assume kernel version and libc version, but I'm wondering.
Our software is written in ANSI-compliant C with a bit of BSD and POSIX (gettimeofday, pthreads).
So you think three versions each for Mac and Windows is normal, but you shy away from Linux? Hm.
Just make sure it builds using the standard toolchain -- traditionally configure, make, and make install. The rest should take care of itself.
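In practice that means the familiar three-step dance (the prefix is just an example):
$ ./configure --prefix=/usr/local
$ make
$ sudo make install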
Else, pick what you are comfortable with. For me that would be Debian/Ubuntu; others prefer Fedora. Look at the Linux Standard Base and things like FreeDesktop.org for other standards. Kernel and libc should not matter unless you are doing something very hardware- or driver-specific.
The kernel strives to maintain a backwards-compatible binary API. Statically linked binaries built against 1.0 series kernels are supposed to still run fine to this day on the latest 2.6 series kernels.
If you are statically linking with everything (including libc), then the major problem you are likely to face is different filesystem arrangements, which may not even be a great issue for you. (Testing is the only way to find out, though).
One idea is to survey your proposed customer base to see which Linux versions they run and make a short list from their feedback. However, from what I know (which is subjective!)...
I would suggest providing two different distribution formats -- rpm and .tar.gz. With rpm you cater for the latest Fedora/openSUSE/RHEL/SLES (and derived distros, which is a fair chunk of the corporate market). You are already handling a lot of the dependency problems by static linking, so checking the kernel version should be sufficient.
With the .tar.gz distribution you cater for 'all others', but watch out for support and configuration problems, as they quickly become a time sink.
For testing, have virtual machines of each version you choose to support. These can also be used for product support (I assume you will need to provide product support?). I wouldn't try to extrapolate results between Linux versions because there are too many hidden 'gotchas'.
You can release statically compiled Linux binaries against the kernel and a given version of glibc. You really only need to worry about compatibility-breaking revisions. If you have some time, you can set everything up to cross-compile on the same host. The kernel is backward compatible; glibc is more temperamental.
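A handy way to see exactly which glibc revisions a binary depends on is to inspect its versioned symbols (the binary name is a placeholder):
$ objdump -T ./myapp | grep GLIBC_
# the highest GLIBC_x.y version listed is the minimum glibc the target system must provide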
File paths can be assumed to follow the Linux Standard Base if you want to package it with an installer. The more flexible you can be here, the better. I've never heard a customer complain about receiving a tarball of binaries, which I'd recommend offering. I have had customers complain about incorrect assumptions.
Your best bet for a formal package format is probably between DEB (Debian and derivatives, like Ubuntu) and RPM (Red Hat and derivatives, like CentOS). Packages are nice to have, but are just a headache if you don't plan on utilizing the native update manager.
For test & build, I'd personally recommend Gentoo. It's pretty raw, however, so you might want to look into Ubuntu as a distant second choice.
This is an issue for your product management team. Once they have determined that producing a Linux version is a desirable idea (i.e. on a cost-benefit basis), then you will need to find out what distros your customers use or want supported.
In principle you can support any but the more you support the more of a headache it will be, so you want as FEW as possible.
Support as few OS / architecture combinations as your PM thinks you can get away with
Deprecate OSs / architectures as soon as you can
Only take on new ones if premium support customers demand it, or to get big deals, as per your PM's decision.
How hard it is to support them is largely dependent on how complex your product is (esp. dependencies) and how complete its auto-test suite is. Adding more supported OSs ties your hands with respect to library usage, kernel feature usage etc as well as testing, so it's not something you want to be lumbered with long-term.
So in short, it's not a software engineering issue, but a product management one.
Is there any way to build a binary on one Linux distribution and run it on another distribution with the same architecture? Or should I compile and build it separately on each distribution?
Is there any binary compatibility between Red Hat and Debian-based distributions?
(I want to use my Ubuntu binary file on Fedora!)
Enter the Linux Standard Base, which aims to reduce the differences between individual Linux distributions.
See
http://www.linuxfoundation.org/collaborate/workgroups/lsb
http://en.wikipedia.org/wiki/Linux_Standard_Base
Statically linking your binaries makes them LESS portable because some libraries won't then work correctly for that machine (differing authentication methods, etc).
If you statically link any "unusual" libraries and keep your set of supported distros to a minimum, you should be ok.
Don't statically link the C library (or the whole binary), that's a recipe for trouble :)
Look at what (e.g.) Google do with Chrome.
What language is your application coded in? If it's in a language like Python (with no C bindings), or Java, or any other VM-based language, then I think you can trust the VM to make sure your application works across different Linux distributions.
Also, there is the Linux Standard Base which you can refer to.
HTH, Amit
I realize this is a very old question, but it comes up high in search results and this hasn't been mentioned:
CDE is a tool for creating portable Linux applications. It packages together all the needed files (including libraries) by analyzing the program at run-time. I have used it successfully on command-line tools several times; one example was getting tcpdump to run on an old hardware appliance running a custom distribution. CDE also doesn't require source; it just packages an executable you are able to run.
At one point I had an error running the cde command, which was fixed by prepending the command with LD_ASSUME_KERNEL=2.4.1; this might not be necessary in recent versions, as that was years back.
Code is also on GitHub: https://github.com/pgbovine/CDE
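Usage is roughly as follows; the directory layout may differ between CDE versions, so treat this as a sketch:
$ cde tcpdump -i eth0                    # run the tool once under cde; it records every file it touches
# this produces a cde-package/ directory containing the binary, its libraries and a wrapper
$ scp -r cde-package/ user@appliance:    # copy the whole directory to the target machine
# on the target, run the generated wrapper inside cde-package/ instead of the system tcpdump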
It works. But it also depends on the versions of the shared libraries you use, including libc and libstdc++, which are tied to the compiler version and may differ from distro to distro.
The best way is to distribute the source code and to make it easy to build that source on any reasonable Linux distribution. This is better than binary distribution because it is not enough to make the binary compatible with shared libraries: you also need to adapt your program to things like distribution-specific locations and conventions for where web apps go, how e-mail is sent, how services are started, how the default paper size is determined, and a myriad of other details.
See for example the Debian Policy Manual, a document describing many of the things a distribution needs to decide to ensure compatibility between applications running on it. You don't need to read it through or learn it by heart, but it shows the scope of the issues that may trip you up.
You should probably work together with several of the major distributions to ensure your application works well with all of them. Most distributions' developers will happily help if you approach them politely. If you're lucky, you can attract volunteers from the distros to make the binary packaging for you, and that will quickly give you feedback on what you need to change at the source level to make your application easy to package.
The Linux Standard Base already mentioned by others attempts to work out a cross-distribution solution to these variables, but it is not comprehensive and not fully supported by most distributions. However, most distributions consider it a problem if they accidentally break LSB compatibility.
Normally it's OK to use binaries across Linux distributions as long as you have the same set of libraries available. You can use ldd to check which libraries a binary needs. libc in particular should be the same version on the distributions involved.
You could statically link your executables for portability.
LSB is definitely worth checking out. Though with regard to working with libraries, I was most satisfied by this answer here on SO https://stackoverflow.com/questions/1209674/shipping-closed-source-application-for-linux/1242738#1242738 and this detailed treatment of the rpath mechanism: http://www.eyrie.org/~eagle/notes/rpath.html
How about HTML?
It's cross-platform, it's been around forever, and if you consult caniuse you know your target environment. It can render any UI I've ever dreamt of, and if you're willing to learn JavaScript, you can approach this from both a server programming perspective and a client programming perspective without switching languages, which comes in handy if you're the one doing both.
It's probably the closest thing we have to a machine lingua franca these days, and that is a good thing, because it means there are potentially more pickings for everybody involved.
People don't realize that the web browser does most things you would want from a program, including rendering 3D graphics and PGP-style encryption.
The biggest benefit I see in the browser as a platform is that everyone's nephew knows how to install a browser on a new computer, and from there your app is just a URL away, including packaging in some of the stores.