Does the Debian system use the FreeBSD kernel and the Red Hat system the GNU kernel? - gnu

Is it true that the Debian system uses the FreeBSD kernel and the Red Hat system uses the GNU kernel? Does it make any difference whether you use the FreeBSD kernel or the GNU kernel?

Well, the comment grew too long, so I might as well post an answer. It is quite unclear what you are asking, since you don't present any specific problem; at first glance this question does not belong on SO. But since you posted it here, I assume you are asking about the differences from a coder's point of view.
First, to clear things up: the FreeBSD kernel is one of the alternative kernels, like Hurd. A Debian system can use the FreeBSD kernel, but it is not the default; its status is "official port".
Probably the biggest difference lies in the philosophy the user subscribes to and their peace of mind, or alternatively the neighborhood they grew up in, unless you are a distro developer or work even lower in the system.
Generally, as a programmer, you work against a standard library implementation (libc, libc++, libstdc++ or whatever). These have to follow the standard, and the differences between them are really small. On Debian this is natively provided by the GNU toolset, so you can assume you are working in GNU wonderland. The kernel lies even lower, and you would have to work on really low-level, kernel-related code to notice a difference in the kernel itself. Usually you work a few layers above it, and the kernel should be pleasantly abstracted away.
So there is no need to worry unless you actually run into trouble.
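As a minimal sketch of that last point (my own hypothetical snippet, not part of the original answer): ordinary standard C/POSIX code builds and runs unchanged on a Debian system whether Linux or the FreeBSD kernel sits underneath, because everything goes through libc.

    /* portable.c - hypothetical example: plain C and POSIX calls only */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* printf() and getpid() go through libc; which kernel answers the
           underlying system call is invisible at this level */
        printf("hello from pid %ld\n", (long)getpid());
        return 0;
    }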

Related

Difference between arm-none-eabi and arm-linux-gnueabi?

What is the difference between arm-none-eabi and arm-linux-gnueabi? I know the difference in how they are used (one for bare-metal software, the other for software meant to run on Linux). But what is the technical background?
I see there is a difference in the ABI, which is, as far as I understand, something like an API but at the binary level. It ensures interoperability between different applications.
But I don't really understand in what way having or not having an operating system affects my toolchain. The only thing that came to my mind is that libraries might have to be statically linked (do they?) when compiling bare-metal software, because there is no OS to provide them dynamically.
Most pages I found on this topic just explained how to use the toolchains, not the technical background. I'm a student of mechatronics and new to embedded systems, so my experience in this field is somewhat limited.
Maybe this link to a description will help.
Probably the biggest difference:
"The bare-metal ABI will assume a different C library (newlib for example, or even no C library) to the Linux ABI (which assumes glibc). Therefore, the compiler may make different function calls depending on what it believes is available above and beyond the Standard C library."

Is it possible to run the BSD userland as a replacement to GNU coreutils with the linux kernel?

I have been looking for a Linux distribution that is not for embedded systems and does not use many of the GNU utilities found in many popular distributions. I want to develop a (pet project) Linux distribution that uses musl libc, the BSD userland, and Plan 9 from User Space. Before I start and possibly waste time attempting the impossible: is it feasible/practical to use the BSD userland as a replacement for GNU coreutils? If not, what is an alternative?
Your goal appears to be very close to the stali project (the only difference is the BSD userland requirement).
http://sta.li/
I don't know much about the current stage of the project, but you can get some help on the project mailing list.
As far as I know, the BSD tools use a lot of direct syscalls and make little use of the POSIX API. I don't believe the BSD guys have written their code with a lot of #ifdefs to get fully compliant programs (but I could be wrong)...
The suckless site ported the Plan 9 userland to Unix (based on plan9port too); it's called 9base (and it is available in the Arch Linux repo to install).
I think you'll have the same problem I had in the past when trying to assemble a similar distro: too much effort to get rid of GNU... The base system is easy, but for anything useful you'll need a C compiler, and then you're out of good alternatives. GCC is GNU and has dozens of GNU dependencies, and the stock FreeBSD GCC port will never work on Linux for obvious reasons.
My current effort is helping to finish the ken-c (or 9-cc) port for Linux.
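To illustrate what porting via #ifdefs looks like (a hypothetical sketch of mine, not code from any BSD tool; uptime_seconds is an invented helper): portable userland code typically guards OS-specific interfaces behind the compiler-defined macros, and the point above is that the BSD utilities largely don't bother.

    /* hypothetical example: report system uptime on either kernel */
    #include <stdio.h>

    #if defined(__linux__)
    #  include <sys/sysinfo.h>
    static long uptime_seconds(void) {
        struct sysinfo si;
        return sysinfo(&si) == 0 ? si.uptime : -1;   /* Linux-only interface */
    }
    #elif defined(__FreeBSD__)
    #  include <sys/types.h>
    #  include <sys/sysctl.h>
    #  include <sys/time.h>
    #  include <time.h>
    static long uptime_seconds(void) {
        struct timeval boot;
        size_t len = sizeof(boot);
        int mib[2] = { CTL_KERN, KERN_BOOTTIME };    /* BSD sysctl interface */
        if (sysctl(mib, 2, &boot, &len, NULL, 0) != 0)
            return -1;
        return (long)(time(NULL) - boot.tv_sec);
    }
    #endif

    int main(void) {
        printf("up for %ld seconds\n", uptime_seconds());
        return 0;
    }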

Difference between the Linux kernel and a UNIX kernel (such as FreeBSD) from a programmer's point of view

What is the difference between the Linux kernel and a UNIX kernel (such as FreeBSD) from a programmer's point of view?
I searched several articles about this. They compared them from a user's view and an administrator's view, and also from a company manager's view.
Can anybody find an article, or say something, from a programmer's view?
By programmer I mean both userland programmers and kernel-level programmers.
Any hints or enlightenment are really appreciated.
I hope this is not a cliché question that makes everybody sick. :P
From a standards point of view there really isn't any difference. Linux is a "POSIX"-compliant OS; FreeBSD, Mac OS X and Solaris are also all "POSIX" compliant. In theory, at least.
Once you move past the standards there are quite a few differences. Linux has inotify, udev and a bunch of other subsystems that are unique to it. FreeBSD has kqueue. There are also differences in their exact implementations of things like ptrace; for example, Mac OS X's ptrace has almost none of the functionality you will find in the other Unix systems.
Beyond custom libraries there are differences in development tools. Solaris and FreeBSD have DTrace. Linux has Valgrind. Mac OS X has Instruments.
What level you are looking at will affect what differences you see or don't see.
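As a hedged sketch of one such difference (my own example, not from the answer above): waiting for a file to change uses inotify on Linux and kqueue's EVFILT_VNODE on FreeBSD, so code that reaches past POSIX quickly becomes OS-specific.

    /* hypothetical example: block until "example.txt" is modified */
    #include <stdio.h>
    #include <unistd.h>

    #if defined(__linux__)
    #  include <sys/inotify.h>
    static void wait_for_change(const char *path) {
        char buf[4096];
        int fd = inotify_init();
        inotify_add_watch(fd, path, IN_MODIFY);
        read(fd, buf, sizeof(buf));              /* blocks until the file changes */
        close(fd);
    }
    #elif defined(__FreeBSD__)
    #  include <sys/types.h>
    #  include <sys/event.h>
    #  include <sys/time.h>
    #  include <fcntl.h>
    static void wait_for_change(const char *path) {
        int kq = kqueue();
        int fd = open(path, O_RDONLY);
        struct kevent change, event;
        EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_ONESHOT, NOTE_WRITE, 0, NULL);
        kevent(kq, &change, 1, &event, 1, NULL); /* blocks until the file changes */
        close(fd);
        close(kq);
    }
    #endif

    int main(void) {
        wait_for_change("example.txt");
        puts("file was modified");
        return 0;
    }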
For a userland programmer, there is essentially no difference. Userland programming targets a language environment such as C, and it is up to the C library routines to translate that into the lower-level system calls.
Those using other tools such as Perl, Python, Java and so on are even further removed from the kernel, so it will not directly affect them either.
In terms of kernel programming, the differences are likely to be significant, since the kernels themselves are different. I haven't seen the FreeBSD internals, although I've done a fair bit of work inside Linux, so I can't comment intelligently on the low-level differences, but (and this final bit is informed opinion, not gospel) since they run independent development streams, the chance of their having exactly the same view is small.

Is assembler portable between Linux distros?

Is a program shipped in assembler format portable between Linux distributions (modulo CPU architecture differences)?
Here's the background to my question: I'm working on a new programming language (named Aklo), whose modus operandi will be the classic one of compiling to .s and feeding the result to the GNU assembler.
Obviously it would be nice ultimately to have the implementation written in itself, but I had resigned myself to maintaining it in C++ to solve the chicken-and-egg problem: suppose you download the compiler for the first time and it is itself written in Aklo - how do you compile it? As I understand it, different Linux distributions and other UNIX-like systems have different conventions for binary formats.
But it just occurred to me that a solution might be to ship the .s file (well, one per CPU architecture): it's fair to assume you have, or can install, the GNU assembler. Of course I'd still need a bootstrap compiler, but that doesn't need to be fast; I can write it in Python.
Is assembler portable in the way that binaries are not? Are there any other stumbling blocks I haven't thought of?
Added in response to one answer:
I had looked wistfully at LLVM, there is certainly a lot of good stuff there and it would make my life easier -- except that it would incur a dependency on the correct version of LLVM being installed. It wouldn't be so bad having that dependency on development machines, but in a world where it's common to ship programs as source, the same dependency would be incurred for every user of every program ever written in Aklo, and I decided that was too high a price to pay.
But if the solution of shipping compiled programs as assembler works... then that solves that problem, and I can use LLVM after all, which would be a big win.
So the question about the portability of assembler is considerably more important than I had first realized.
Conclusion: from the answers here and on the LLVM mailing list http://lists.cs.uiuc.edu/pipermail/llvmdev/2010-January/028991.html it seems the bad news is that the problem is unsolvable, but the good news is that this means using LLVM makes it no worse, so I'm free to do so and obtain all the advantages thereof.
You might want to check out LLVM before going down this particular path. It might make your life a lot easier, as it provides a low level virtual machine that makes a lot of hard stuff just work and has been very popular.
At a very high level, the ABI consists of { instruction set, system calls, binary format, libraries }.
Distribution as .s may free you from the binary format, but it is still rather pointless, because you are fixed to a particular ISA and still need to use libraries and/or make system calls. Libraries vary from distribution to distribution (although this isn't really that bad, especially if you just use libc), and syscalls vary from OS to OS.
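As a hedged illustration of the syscall part (my own sketch, not from this answer): the very same "write to standard output" is a different raw syscall number on each OS - 1 on x86-64 Linux, 4 on FreeBSD - and that number is exactly the kind of detail a shipped .s file would have baked in. Portable C hides it behind libc:

    /* hypothetical example: SYS_write expands to the OS-specific number
       at compile time, so this source is portable while the generated
       assembly is not */
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        syscall(SYS_write, 1, "hello\n", (size_t)6);
        return 0;
    }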
It's been basically 20 years since I last bootstrapped a C compiler. At the level of compilers, the differences between Linux distributions are minimal.
The much more important reason for going with LLVM is cross-platform support: if you're not emitting some intermediate language, your compiler will be extremely difficult to retarget to different processors. And seeing as, on my laptop, I have compilers for x86, x86_64, two kinds of MIPS, PowerPC, ARM and AVR... you see where I'm going? I can compile multiple languages for most of those targets too (only C for AVR).

Licensing and using the Linux kernel [closed]

I would like to write my own OS, and would like to temporarily skip the complicated task of writing the kernel and come back to it later, by using the Linux kernel in the meantime. However, I would like to provide the OS as closed source for now. What license is the Linux kernel under, and is it possible to release it with a closed-source OS?
Edit: I am not interested in closing the source of the Linux kernel; I would still provide that as open source. I am wondering if I could ship a closed-source OS with an open-source kernel.
Further edit: By OS, I mean the system that runs on top of the kernel and is used to launch other programs. I certainly did not mean to include the kernel in the closed source statement.
You can of course write whatever closed-source OS over the Linux kernel that you like provided you are compatible with the licensing of components you link against.
Of course that's likely to include the GNU C library (or some other C library). You may also need some command-line utilities, which will probably be GPL, to do things such as filesystem maintenance, network setup, etc. But provided you leave those as their own standalone programs, it should not be a problem.
Anything that you link into the kernel itself (e.g. custom modules, patches) should be released as open source GPL to comply with the kernel's licence.
The Linux kernel is released under the GPLv2, and you can use it as part of a closed-source OS, but you have to keep the kernel and all modifications to it released under the GPLv2.
Edit: Btw, you may want to use something like OpenSolaris instead. It's much easier to work with, in my opinion (obviously very subjective), and you can keep modifications closed-source if you so choose, so long as you follow the terms of the CDDL.
I think you're going to have to be more specific about what you mean by 'OS'. It's by no means a clear concept. Some would say that the kernel is all of the OS. Others would say that the shell and core utilities such as 'ls' are part of the OS. Others would go as far as to say that standard applications such as Notepad are part of the OS.
IANAL, but I don't believe there's anything to stop you from bundling the Linux kernel with a load of closed-source programs of your own. Take care not to use any GPL library code however (LGPL is OK).
I do question your motives.
It's GPL version 2 and you may certainly not close its source.
You must keep the source of the kernel open, along with any works derived from its code; however, if you use the kernel and write your own application stack on top of it (pretty much ALL the GNU stuff), then you don't have to open that up.
The GPL talks about "derived" works, so if you're writing new code, instead of expanding on existing code, that's fine. In fact, you could even, for example, use the GNU toolchain and the Linux kernel, and then have your own system on top of that (or just a DE) that is closed source.
It's when you modify or derive from something that you have to keep it open!
Linux has the GPL (v2) as its licence, which means you have to open-source any derivative works.
You may want to use BSD instead; its license is a lot less restrictive in what you can do with derived works.
If the filesystem you use is to be linked into the kernel itself, and if you plan to distribute it to others, the GPL pretty unambiguously requires that the filesystem be GPL'ed as well.
That being said: one way to legally interface Linux with a GPL-incompatible filesystem is via FUSE (filesystem in userspace). This has been used, for example, to run the GPL-incompatible ZFS filesystem on top of Linux. Running a filesystem in userspace does, however, carry a performance penalty that may be significant.
It is GPL. Short answer -- no.
You can always keep any extensions (modules) and/or applications you write closed source, but the kernel itself will need to remain open source.
There's a not-so-obvious aspect of the GPLv2 that you can exploit while testing the system: you only need to release the source code to those who have access to the system. The GPLv2 states that you need to give full access to the source code to anyone with access to the binary/compiled distribution of the program. So, if you are only going to use the software inside the company that is paying to develop it, you don't need to distribute the source code to the rest of the world, just to them.
Generally I would say that you're allowed to do such a thing, as long as you provide the source for the kernel, but there's one point where I'm unsure:
On a normal Linux system, between the (GPL) kernel and a non-GPL-compatible application there is always the GNU libc, which is LGPL and thus allows non-free derived works. Now, if you ship a non-free libc instead, that might be considered a derived work, since you are directly calling the kernel and also using kernel headers.
As many others have said before, you might be better off using a *BSD.
If you're serious about developing a new operating system and want a working kernel to start with, I suggest that you look into the FreeBSD kernel. It has a much more relaxed license than Linux; I think you might find it worthwhile.
Just my 2 cents...
I agree with MarkR, but nobody has stated the obvious to you: if you are serious, you need to consult a lawyer with expertise in this area.

Resources