Will using the Linux Kernel support current programs? [closed] - linux

There are many distributions of Linux, but they all have one thing in common: the kernel, and Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run on it? What defines the differences between distributions? I am a beginner at this stuff, so please don't be harsh if it is a stupid question. Thank you.

Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed (or installable); otherwise you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, that is a different matter: the binaries will likely require shared libraries that were present on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
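A quick way to see what an existing binary expects from your system is to list its shared-library dependencies; a minimal sketch (the binary name is just a placeholder):
ldd ./some-downloaded-binary    # prints each required .so and where it was found
# any line ending in "not found" is a library your minimal system is missing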
What defines the differences of distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
Licensing, e.g. Red Hat vs Debian
Stance on things like GPL/BSD/non-free software
Release schedules, e.g. Debian vs Ubuntu
Target audience, e.g. Ubuntu vs Debian
...I think the biggest defining factor is package management, i.e. yum/rpm vs apt/dpkg, and how the base configuration is managed on the machine. This is certainly the thing I use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which is a large part of its success.
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. Creating a base distribution yourself is a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They contain millions, perhaps even billions, of lines of code, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it is likely to fail unless the planets are in alignment. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free enough... Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them... The Linux package management method system works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux on the other hand stopped scaling 10 years ago, at the 1000 packages limit...

If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs use the kernel directly. They also need a libc, which is responsible for implementing most of the C routines used either by the programs themselves or by the VMs running their code.
It is possible to statically link libc into a program, but this both bloats the size of the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
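To see the trade-off concretely, here is a minimal sketch comparing a dynamic and a static build of the same trivial C program (hello.c is a placeholder source file):
gcc -o hello-dyn hello.c               # dynamic: small binary, needs libc.so.6 at run time
gcc -static -o hello-static hello.c    # static: self-contained, but much larger
ldd hello-dyn                          # lists libc.so.6 and the dynamic loader
ldd hello-static                       # reports "not a dynamic executable"
ls -lh hello-dyn hello-static          # compare the sizes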

Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. the NVIDIA proprietary drivers: part of them runs in kernel space while the rest runs in user space, but requires that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature, so to use them you need a reasonably fresh kernel.
Nevertheless, a lot of the kernel API is stable, so you can rely on it. But programs usually don't call kernel routines directly. The typical way to use a kernel function is to call the corresponding library routine, which wraps the kernel API. The main, most basic library of that kind is libc.
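You can watch that wrapping happen with strace, which prints the system calls a libc-using program actually makes; a rough illustration, assuming strace is installed and /etc/hostname exists:
strace -e trace=openat,read,write,close cat /etc/hostname
# every line shown is a kernel system call issued on cat's behalf, mostly via libc wrappers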
Technically, programs compiled against one version of libc (and of other shared libraries) can be used with slightly different versions of those libraries. For example, a lot of people run the Skype build for SuSE on completely different Linux distributions. That Skype is a pretty complex application with a lot of libraries linked in, yet it works without any significant problem, and the same goes for many other proprietary programs that cannot be recompiled for a given distribution or even for a given installation. But sometimes things just break :) Such binary incompatibilities are quite rare, but they do happen from time to time.

Related

Why do I need to choose my processor architecture when downloading an application for Linux? [closed]

Isn't the operating system an abstraction on top of the hardware?
Doesn't that make hardware architectures irrelevant for software run on the same operating system?
If so, why do I need to choose my processor architecture (e.g. ARM or amd64) when downloading Node.js, for example?
Different platforms abstract away different things:
Java/WASM abstract away CPU architecture, memory model, device access, terminal output and file access.
Any program can run anywhere.
Linux/Windows abstract away device access, terminal output and file access.
Any program built for that CPU and ABI can run.
DOS abstracts away terminal output and file access.
Any program built for that CPU and ABI that includes drivers for devices can run.
BIOS abstracts away terminal output.
Any program built for that CPU and ABI that includes device drivers and file system drivers to load its own data can run.
You need to account for everything that is not abstracted away, and on Linux that includes the CPU architecture.
It's better than DOS where you additionally needed to make sure your program supported your sound card, but not as convenient as Java where a single Android app can run on both x86 and arm64.
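On Linux you can inspect both sides of that contract from the shell: the architecture of the machine and the architecture a particular binary was built for (the path below is only an example):
uname -m              # machine architecture, e.g. x86_64 or aarch64
file /usr/bin/node    # reports which architecture the binary was compiled for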
You've probably heard programs can be compiled to "machine code." These are low-level instructions for the hardware, different for every type of machine (and are influenced not only by CPUs but also by peripherals).
The Node.js interpreter is written in C and C++ and compiled to machine code. This compiled code is only valid on a particular type of machine, so you need to download the version of the Node.js interpreter built for your machine.
You can write pure JS code to run on Node.js, and it will usually not depend on the machine type; it is "universal" to an extent. But as soon as the JS code (this is usually true of certain modules and libraries) uses native code (C, C++, and others) for performance reasons, that code gets compiled for a specific machine, and then the JS module also becomes bound to a specific machine.
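From inside Node.js you can see the same split: the interpreter reports the architecture it was compiled for, while your pure JS stays neutral until a native addon is loaded. A small check, assuming node is on your PATH:
node -p "process.arch"        # e.g. x64 or arm64: the architecture of the node binary itself
node -p "process.platform"    # e.g. linux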
The operating system has little to no influence in all of this. It basically says how the machine code will be written into a file (e.g. which file format to use), and abstracts access to hardware (such as the disk drives) in a way this code can use.
Historically, there have been attempts to create operating systems that completely abstract the underlying machine in a way that makes programs fully portable. They usually do this by disallowing user-written machine code (i.e. user-compiled programs) from executing and only allowing interpreted code to run.
The operating system installed must support the processor(s), data buses and memory addressing of the hardware.
At a systems level, in kernel code and device drivers it is impossible to ignore details of the hardware architecture. Applications typically sit a level above all this but are still dependent on the abstraction layers below.
Incidentally, Node.js is written in part in C and C++, which lets it take advantage of the performance improvements offered by 64-bit processing. Wringing out optimised performance has been a key objective of Node.js's design; it has been refactored more than once to that end.

How to cross-compile CUPS on ARM in a Linux environment? [duplicate]

I am interested in cross-compiling a Linux kernel for an ARM target on an x86 host. Are there any good practices you recommend? Which is the best cross-compile suite in your opinion?
Have you set up a custom cross-compile environment? If so, what advice do you have? Is it a good idea?
There are two approaches I've used for ARM/Linux tools. The easiest is to download a pre-built tool chain directly.
Pro: It just works and you can get on with the interesting part of your project
Con: You are stuck with whichever version of gcc/binutils/libc they picked
If the latter matters to you, check out crosstool-ng. This project is a configuration tool similar to the Linux kernel configuration application. Set which versions of gcc, binutils, libc (GNU or uClibc), threading, and the Linux kernel to build, and crosstool-ng does the rest (i.e. downloads the tarballs, configures the tools, and builds them).
Pro: You get exactly what you selected during the configuration
Con: You get exactly what you selected during the configuration
meaning you take on full responsibility for the choice of compiler/binutils/libc and their associated features/shortcomings/bugs. Also, as mentioned in the comments, there is some "pain" involved in selecting the versions of binutils, the C library, etc., as not all combinations necessarily work together or even build.
One hybrid approach might be to start with the pre-built tools and replace them later with a custom solution via crosstool-ng if necessary.
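As a rough sketch, a crosstool-ng session looks something like this (assuming the ct-ng command is installed; the sample name is just one of the bundled examples):
ct-ng list-samples                 # show the bundled example configurations
ct-ng arm-unknown-linux-gnueabi    # start from the sample closest to your target
ct-ng menuconfig                   # pick gcc/binutils/libc/kernel versions
ct-ng build                        # download, patch and build the whole toolchain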
Update: The answer originally used the CodeSourcery tools as an example of a pre-built toolchain. The CodeSourcery tools for ARM used to be a free download from Mentor Graphics; they are now called Sourcery CodeBench and must be purchased from Mentor Graphics. Other options now include Linaro as well as distribution-specific toolchains from Android, Ubuntu, and others.
I use the Emdebian toolchain for compiling things for my ARM machines that aren't happy being compiled natively within the limited resources available (/me glares at the kernel). The main package is gcc-4.X-arm-linux-gnueabi (X = 1, 2, 3), which provides appropriately suffixed gcc/cpp/ld/etc. commands. I add this to my sources.list:
deb http://www.emdebian.org/debian/ unstable main
Of course, if you're not using Debian, this probably isn't so useful, but by gum it works well for me.
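Once installed, the suffixed tools are used just like the native ones; a minimal sketch (the exact prefix depends on which gcc-4.X package you installed):
arm-linux-gnueabi-gcc -o hello hello.c    # cross-compile a test program on the x86 host
file hello                                # should report an ARM executable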
I've used Scratchbox while experimenting with building apps for Maemo (Nokia N810), which uses an ARM processor. Supposedly, Scratchbox is not restricted to Maemo development.
I've used crosstool on several targets. It's great as long as you want to build your toolchain from scratch.
Of course there are several pre-built toolchains for ARM as well; just Google for them -- too many to mention here.
1) In my opinion, building your own toolchain works best. You end up with tight control over everything, and if you're new to embedded Linux, it's a GREAT learning experience.
2) Don't go with a commercial toolchain. Even if you don't want to take the time to build your own, there are free alternatives out there.
If your company will spend the money, have them buy you a JTAG debugger.
It will save you tons of time, and it allows you to easily learn and step through kernel startup, etc.
I highly recommend the Lauterbach JTAG products... They work with a ton of targets and the software is cross-platform. Their support is great as well.
If you can't get a JTAG debugger and you're working in the kernel, use a VM for that: User-Mode Linux, VMware, etc. Your code will be debugged on x86; porting it to your ARM target will be a different story, but it's a cheaper way to iron out some bugs.
If you're porting a bootloader, use U-Boot. Of course, if you're using a reference platform, then you're probably better off using what they provide with the BSP.
I hope that helps.
Buildroot is a tool I've had reasonably good luck with for building a custom uClibc-based toolchain from scratch. It's very customizable, and not excessively particular about what distribution you happen to be running on.
Also, many of its existing users (e.g. embedded router distros) are targeting ARM.
If you're using Gentoo, getting a cross-compiling toolchain is as easy as
$ emerge crossdev
$ crossdev -t $ARCH-$VENDOR-$OS-$LIBC
where ARCH is arm or armeb, VENDOR is unknown or softfloat, OS is linux, and LIBC is gnu or uclibc.
If all you want is a compiler (and linker) for the kernel, the LIBC part is irrelevant, and you can use -s1/--stage1 to inform crossdev that you only need binutils and gcc.
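For example, here is a sketch of building a stage-1 toolchain with crossdev and using it to cross-compile a kernel (the target tuple and the defconfig are assumptions; pick the ones matching your board):
crossdev -s1 -t arm-unknown-linux-gnueabi
cd linux
make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- versatile_defconfig
make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- zImage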
This is what Eurotech uses for their Debian ARM distribution. You'll note that they don't recommend using a cross compiler if you can avoid it; compiling on the target itself tends to be a more reliable way of getting output that you know will run.

How easy is it to make a Linux distribution? [closed]

From what I have been reading, a Linux distribution is little more than a packaging of a kernel with various packages and some limited configuration details, such as which window manager and GUI to use by default (assuming you even want a GUI, blech). In the old days there apparently were some unique advantages to distributions. For example, Red Hat had the Red Hat Package Manager (rpm). Of course, nowadays rpm is no longer a unique advantage of Red Hat.
So, why even bother with a distribution? Why not just install a kernel and a bunch of packages of one's own choosing? What's the complexity?
Basically, a GNU/Linux distro IS a kernel and a "bunch of packages" (GNU packages) of one's choosing.
People create distros to perform specific tasks: server distros, desktop distros, multimedia-oriented distros, etc.
Creating a Linux distro can be a really educational task, as you get to know how a Linux system is built from scratch.
I recommend checking out LFS (Linux From Scratch). It's a project that guides you through assembling your own Linux distro from scratch, and believe me, it's great fun and indeed YOU WILL LEARN A LOT.
If you're interested in getting to know how a Linux distro works, don't miss it.
The webpage says:
Many wonder why they should go through the hassle of building a Linux system from scratch when they could just download an existing Linux distribution. However, there are several benefits of building LFS. Consider the following:
LFS teaches people how a Linux system works internally
Building LFS teaches you about all that makes Linux tick, how things work together and depend on each other. And most importantly, how to customize it to your own tastes and needs.
Building LFS produces a very compact Linux system. When you install a regular distribution, you often end up installing a lot of programs that you would probably never use. They're just sitting there taking up (precious) disk space. It's not hard to get an LFS system installed under 100 MB. Does that still sound like a lot? A few of us have been working on creating a very small embedded LFS system. We installed a system that was just enough to run the Apache web server; total disk space usage was approximately 8 MB. With further stripping, that can be brought down to 5 MB or less. Try that with a regular distribution.
LFS is extremely flexible. Building LFS could be compared to a finished house. LFS will give you the skeleton of a house, but it's up to you to install plumbing, electrical outlets, kitchen, bath, wallpaper, etc. You have the ability to turn it into whatever type of system you need it to be, customized completely for you.
LFS offers you added security. You will compile the entire system from source, thus allowing you to audit everything, if you wish to do so, and apply all the security patches you want or need to apply. You don't have to wait for someone else to provide a new binary package that (hopefully) fixes a security hole. Often, you never truly know whether a security hole is fixed or not unless you do it yourself.
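To give a sense of what "building everything from source" means in practice, nearly every LFS package is built with the same pattern; a generic sketch, not a recipe for any specific package:
tar -xf package-1.0.tar.xz && cd package-1.0
./configure --prefix=/usr    # choose where the package will be installed
make                         # compile
make check                   # run the package's test suite, where one is provided
make install                 # install into the system being built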
Of course, there are other tools that create a Linux distro based on your hard-disk installation, perhaps for backup purposes.
Linux Live
And lots of other scripts to get you started; just Google for them.
Of course, all of them are automated, user-oriented tools, so don't expect to learn a lot from them.
There are lots (thousands) of Linux distros out there, so it is obviously a waste of time to try to make the "ideal" Linux distro and compete with Ubuntu, Mint, etc.
I still recommend checking out Linux From Scratch, if only as an educational weekend project. Trust me, you will learn a lot.
It also covers creating embedded Linux distros targeting ARM processors and the like.
If you're in the embedded world, the Yocto Project is worth a look.

Reprogram a device [closed]

Is it possible to take a device, say a PDA, wipe all software off of it, and install your own?
For example, could I take a Mac terminal program and install it onto a PDA (with WiFi) and do SSHing and such?
And what language would/could it be in?
The language this could be in is not really the issue; it is, mostly, a matter of system compatibility.
Software applications do not run in a vacuum: they rely on the underlying operating system, or at the very least on some form of virtual environment or runtime such as Java, Silverlight, etc.
Before one can re-purpose a PDA or other similar device, he/she needs to install some system/host software of sorts on it, and doing so can be rather complicated because of the proprietary and dedicated nature of many of the hardware subsystems involved.
General purpose systems such as Linux or Windows can be installed on various hardware platforms (including appliances) provided that:
- said hardware subsystems (CPU, keyboard/input devices, display device, storage devices...) comply with some specification, and
- the corresponding device drivers are available.
In the case of PDA, GPS appliances, smartphones and various other hardware platforms (and while many such platforms run on custom versions of Windows, Linux, Android etc.), there is typically enough proprietary differences, custom hardware and other deviations from specifications that installing alternative operating systems or runtimes is typically a challenge. Lack of documentation can also be a limiting factor.
Many such devices, however, host some form of runtime atop the system (Java in many cases), and rather than installing an alternative operating system, it is possible, in some cases, to install and run applications written in these hosted languages.
Even so, uninstalling existing applications (say, to make room) and installing new applications may be a challenge as well. Difficulties arise because of
- purposeful "locking down" of the appliances (the manufacturers purposely prevent such re-purposing, using various forms of encryption, undocumented features and the like)
- intrinsic limitations of the runtime (whereby only a subset/sandboxed version of the language features is available).
In short, the specific approach for re-purposing appliances depends on:
the specific appliance/device: make, version etc.
the intended purpose: which particular uses are desired for the new device
the technical expertise and patience of the implementers ;-)
In general this is far from trivial: beginners beware! (*)
(*) BTW, the relative lack of sophistication apparent in the question seems to indicate that the OP may not have the necessary skills for this kind of "hacking". It can, however, be a very fun and rewarding learning experience.
No, but you can probably find a terminal program for the PDA and do SSH with it.
Macs and PDAs have different architectures (their processors speak different languages).

Linux distribution binary compatibility

Is there any way to build a binary on one Linux distribution and run it on another distribution with the same architecture, or should I compile and build it separately on each distribution?
Is there any binary compatibility between Red Hat and Debian based distributions?
(I want to use my Ubuntu binary on Fedora!)
Enter the Linux Standard Base (LSB), an effort to reduce the differences between individual Linux distributions.
See
http://www.linuxfoundation.org/collaborate/workgroups/lsb
http://en.wikipedia.org/wiki/Linux_Standard_Base
Statically linking your binaries completely makes them LESS portable, because some libraries then won't work correctly for that machine (differing authentication methods, etc.).
If you statically link any "unusual" libraries and keep your set of supported distros to a minimum, you should be OK.
Don't statically link the C library (or the whole binary); that's a recipe for trouble :)
Look at what (e.g.) Google does with Chrome.
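A minimal sketch of that middle ground: statically linking one "unusual" third-party library while leaving libc and everything else dynamic (libfoo is a hypothetical library):
gcc -o myapp main.o -Wl,-Bstatic -lfoo -Wl,-Bdynamic    # take libfoo from libfoo.a, the rest shared
ldd myapp                                               # libfoo should no longer be listed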
What language is your application coded in? If it's in a language like Python (with no C bindings), or Java, or any other VM-based language, then I think you can trust the VM to make sure your application will work across different Linux distributions.
Also, there is the Linux Standard Base, which you can refer to.
HTH, Amit
I realize this is a very old question, but it comes up high in search results and this hasn't been mentioned:
CDE is a tool for creating portable Linux applications. It packages together all needed files (including libraries) by analyzing the program at run time. I have used it successfully on command-line tools several times, one example being getting tcpdump to run on an old hardware appliance running a custom distribution. CDE also doesn't require source; it just packages an executable you are able to run.
At one point I had an error running the cde command, which was fixed by prepending the command with LD_ASSUME_KERNEL=2.4.1; this might not be necessary in recent versions, as that was years back.
Code is also on GitHub: https://github.com/pgbovine/CDE
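As far as I recall, the workflow is roughly: run the program once under cde on a machine where it works, then copy the resulting package to the target (the tcpdump invocation is only an illustration):
cde tcpdump -c 10 -i eth0    # CDE records every file the program touches while it runs
# this produces a cde-package/ directory containing the binary, its libraries and a launcher;
# copy that whole directory to the other machine and run the packaged launcher from inside it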
It works, but it also depends on the versions of the shared libraries you use, including libc and libstdc++, which are tied to the compiler version and may differ from distro to distro.
The best way is to distribute the source code and to make it easy to build the source on any reasonable Linux distribution. This is better than binary distribution because it is not enough to make the binary compatible with shared libraries. You also need to make sure you adapt your program to things like distribution specified locations and conventions for where web apps go, or how e-mail is sent, or how services are started, or how to determine the default paper size, or a myriad of other details.
See for example the Debian Policy Manual for a document describing many of the things a distribution needs to decide to ensure compatibility between applications running on it. You don't need to read it through or learn it by heart, but it shows the scope of the issues that may trip you.
You should probably work together with several of the major distributions to ensure your application works well with all of them. Most distributions' developers will happily help if you approach them politely. If you're lucky, you can attract volunteers from the distros to make the binary packaging for you, and that will quickly give you feedback on what you need to change at the source level to make your application easy to package.
The Linux Standard Base already mentioned by others attempts to work out a cross-distribution solution to these variables, but it is not comprehensive and not fully supported by most distributions. However, most distributions consider it a problem if they accidentally break LSB compatibility.
Normally it's OK to use binaries across Linux distributions as long as you have the same set of libraries available. You can use 'ldd' to check which libraries a binary needs. libc in particular should be the same (or at least a compatible) version on the distributions involved.
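Besides ldd, you can also list which glibc symbol versions a binary demands, which is usually what breaks when you move it to a distribution with an older glibc (myprog is a placeholder):
ldd ./myprog                                             # the shared libraries it needs
objdump -T ./myprog | grep -o 'GLIBC_[0-9.]*' | sort -u  # the glibc symbol versions it requires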
You could statically link your executables for portability.
LSB is definitely worth checking out. Though, with regard to working with libraries, I was most satisfied by this answer here on SO https://stackoverflow.com/questions/1209674/shipping-closed-source-application-for-linux/1242738#1242738 and by this detailed treatment of the rpath mechanism: http://www.eyrie.org/~eagle/notes/rpath.html
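The rpath approach boils down to shipping your "unusual" libraries next to the executable and telling the dynamic linker to look there first. A minimal sketch (the lib/ layout and libfoo are assumptions):
gcc -o myapp main.o -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'    # embed a relative rpath
# ship myapp together with lib/libfoo.so.1; at run time the loader will also look in
# $ORIGIN/lib, i.e. the lib/ directory next to the executable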
How about HTML?
It's cross-platform, it's been around forever, and if you consult caniuse you know your target environment. It can render any UI I've ever dreamt of, and if you're willing to learn JavaScript, you can approach this from both a server programming perspective and a client programming perspective without switching languages, which comes in handy if you're the one doing both.
It's probably the closest thing we have to a machine lingua franca these days, and that is a good thing, because it means there are potentially more pickings for everybody involved.
Many people don't realize that the web browser does most things you would want from a program, including rendering 3D graphics and PGP-style encryption.
The biggest benefit I see in the browser as a platform is that everyone's nephew knows how to install a browser on a new computer, and from there it's just a url away, including packaging in some of the stores.
