I was wondering how Linux and GNU are related to each other. Can anyone clear this up for me?
Thanks!
GNU, founded by Richard Stallman, is a project that provides a collection of tools which together make up most of a fully functional operating system. GNU's goal was to create a fully free, open-source replacement for UNIX.
Linux was created by Linus Torvalds with no connection to GNU. Linux functions as an operating system kernel. When Linux was created, many GNU components already existed, but GNU lacked a kernel, so Linux was combined with the GNU components to form a complete operating system. There is also a kernel developed by the GNU project (GNU Hurd) which can be used instead of Linux, producing a fully GNU-based operating system. However, GNU Hurd has remained in development for some 20 years, and Linux is a far more mature kernel.
It is possible also (such as in the case of Android) to have a Linux-based operating system which has no GNU components.
But usually a complete operating system will consist of Linux + many GNU components, which is sometimes referred to as GNU/Linux.
Originally, GNU was a project to build a complete Unix-compatible operating system piece by piece.
The plan was to rewrite each small utility according to specification, testing it on a working Unix by replacing the original one. It went very well, except for the kernel, where progress was especially slow, probably because several good developers couldn't agree on the absolute best design.
The planned HURD kernel had in fact a very advanced design, with a lot of innovations, but it seemed that it wouldn't be completed any time soon.
Meanwhile, Linus Torvalds was writing his own kernel, mainly to teach himself how to control the low-level aspects of the Intel 80386 processor. At first it was just a task switcher, but he quickly implemented most of the old syscall specifications, until he managed to run most of the MINIX environment (another Unix-like system, mostly used in education) on top of the new kernel.
Soon, other people suggested using GNU utilities instead of the MINIX ones, which produced a much more complete system. It worked so well that most GNU developers simply adopted the Linux kernel instead of perpetually waiting for the HURD kernel.
The resulting OS is commonly called just "Linux", but it is true that Linux is only the kernel. The GNU utilities account for far more lines of code, so it would more properly be called GNU/Linux.
I'm trying to develop a new system call and add it to the kernel, but since the C file that contains the syscall implementation can only use functions that reside inside the kernel address space, I'm pretty sure I can't use functions like popen, stat, etc.
I did a bit of research on the Internet, but I couldn't find anything that lists the functions I can use inside the kernel.
Probably the biggest difference (among many big differences) that you will need to get your head around is this: the kernel is not linked against libc. So, take everything provided by libc: you don't get any of that...
...well, sort of. Some of the functionality that libc provides is actually implemented inside the kernel itself. You need to include the kernel versions of those headers:
#include <linux/[header file].h>
To get an idea of what is available inside the kernel, you'll need to look at the functions defined in the header files of the kernel source tree.
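As a rough sketch of how you might do that exploration, assuming you have an unpacked kernel source tree in the current directory, you can simply search it from the shell:

    # Symbols the kernel explicitly exports for use by modules and other in-kernel code
    grep -rn "EXPORT_SYMBOL" kernel/ lib/ | less

    # In-kernel string helpers (strcpy, strlen, ... come from here, not from libc)
    less include/linux/string.h

    # Kernel-space memory allocation: kmalloc/kfree are declared here
    less include/linux/slab.h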
A few other points to keep in mind:
The Linux kernel is written in GNU C, not strict ANSI C, which makes sense: as some folks are quick to point out, Linux is just the kernel and GNU provides much of everything else, including the GCC compiler.
No easy floating-point math. The kernel manages floating-point state on behalf of user space, but that mechanism cannot easily be used inside the kernel itself. See here for more.
A good book on the subject is Linux Kernel Development by Robert Love (I am in no way affiliated; it's just a good book).
For example, to convert an integer UNIX timestamp to a readable date, one can use the date utility. However, the syntax differs between platforms:
on Linux - date --date='@123456789'
on BSD - date -r 123456789
I'm not sure about Mac OS X, AIX, HP-UX and all the other flavors.
So programs are very different across systems; they can even have a different name and usage syntax. For example, it's seq 1 10 on Linux, but jot 10 1 on BSD.
I know that there is a POSIX standard for the shell (i.e. sh), but as far as I know, there are no POSIX standards for all the other utilities (for example, the Linux man page for date says nothing about conformance to POSIX / SysV / BSD standards).
This question - Portability between Unix shells - am I thinking about the issue correctly? - tackles the issue, but only for the shell itself, not all the other utilities.
So, I have 2 questions:
Is there some sort of compatibility / portability comparison table available, that lists relevant differences on various major implementations of popular UNIX CLI tools, like date, sed, awk, etc?
Is there some sort of compatibility layer available, i.e. a minimal subset of what I can use to make sure that my shell scripts are portable, or some sort of shims (like in HTML / JavaScript) that bring missing functionality to alternative systems?
There is definitely a standard for basic commands like date. There is, however, no support for "seconds since the epoch" input with it.
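As a hedged workaround sketch, a small wrapper can feature-test the GNU syntax and fall back to the BSD one; the function name epoch_to_date is just an invented example:

    # Hypothetical helper: print a readable date for a UNIX timestamp.
    # GNU date (Linux) understands --date=@SECONDS; BSD/macOS date uses -r SECONDS.
    epoch_to_date() {
        date --date="@$1" 2>/dev/null || date -r "$1"
    }

    epoch_to_date 123456789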
seq and jot are not specified by POSIX but it would be trivial to implement them with a shell script.
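For instance, a minimal POSIX-sh stand-in for the two-argument form seq FIRST LAST (step of 1 only, none of the GNU options) could look like this sketch:

    # Minimal seq replacement: prints FIRST..LAST inclusive, step 1
    seq() {
        i=$1
        while [ "$i" -le "$2" ]; do
            printf '%s\n' "$i"
            i=$((i + 1))
        done
    }

    seq 1 10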
The minimal subset or compatibility layer you are looking for is the POSIX standard. Have a look at http://pubs.opengroup.org/onlinepubs/9699919799/utilities/contents.html
For this sort of thing I'd highly recommend O'Reilly's UNIX in a Nutshell, which does an excellent job of collating the options for various flavors of UNIX for each command.
I am a computer science minor and I appreciate *nix a lot more since I started to delve into computer science. I used to be a Windows fanboy and now I own two Macs (as well as my PC, which has Windows and Ubuntu on it).
I want to learn more about how Linux was developed. I know that Linux is only the kernel and that GNU is actually most of what I am interfacing with. So when I type ls -al on my Mac, which uses Unix, how is that different from typing ls -al on my Ubuntu boot on my PC? Does the difference actually lie in the differences between Linux and Unix? Or does Unix use non-GNU libraries for things like ls and cd?
So what exactly are the differences between Linux and Unix? Does Unix use GNU libraries for ls, cd, and all those common terminal operations?
First of all, you need to know that ... Linux Is Not UniX. :)
Good question, but it's difficult to give a straight answer.
The kernel is different. The design is different. The software is different (!)
That said, if you have Mac OS X (UNIX), you can build almost any command-line tool that was written for Linux.
Most free, open-source software is compatible with both Linux and UNIX, so depending on your level, you might never notice the difference.
But technically, there's a huge difference. If you work at the hardware and driver level you will start noticing differences, but if you stay above those levels, you can easily write portable code.
Some people would claim that Linux is the poor-man's UNIX (which is probably also true), while others would say that Linux fixes the problems that UNIX has.
Due to the nature of the question (it's fairly broad), it's difficult to go in details.
I work with both and do not feel a huge difference. My UNIX was set up for me, so I'm basically a novice user there, but I had to install and configure parts of my Linux system myself.
I would say (in my own opinion) that most of the time, Linux is something you build yourself, you decide which components you want. UNIX on the other hand is a little more "one big package", though you can still add components.
Looking at it from a different angle: Linux is open-source and free, where some versions of UNIX aren't. UNIX is often found in enterprise servers from large companies.
Take a command like 'ls' as you mentioned. Older versions of UNIX had a command called 'lc' which listed directories instead of files (as far as I recall). This command does not exist in the UNIX that Mac OS X is based upon, so there's a difference between UNIX and UNIX.
On the other hand, Linux did not make a straight copy of the UNIX command 'ls'. The output often differs slightly, and the switches are different. But!
If you're running Bash, then the Bash on your Mac OS X is most likely exactly the same Bash you've got on Linux; only the version differs.
If you got 'curl' on your Mac, and 'curl' on Linux, then it IS the same tool, because it's built from the same sources; it's just built for two different Operating Systems.
GCC is the same as well. (GNU's Not Unix - but it works well on UNIX).
If you install the gitolite server (which I'm quite fond of), you will find that it does not install on stock Mac OS X 10.5.8; this is because the arguments for the 'cp' command differ. The author refused to correct the problem when I suggested a solution that would work on all platforms. So 'cp' may not always be 100% compatible, and I don't know whether it would be a good idea to 'upgrade' it: the 'cp' you have now is compatible with the scripts that Apple shipped with your system, and upgrading 'cp' to a different version could break that compatibility, which could leave your system corrupted and in need of a re-install. So it's better not to upgrade that particular command. ;)
What does "opt" mean (as in the "opt" directory)? I commonly see this directory in Unix systems with development tools inside.
Is it an abbreviation?
In the old days, "/opt" was used by UNIX vendors like AT&T, Sun, DEC and 3rd-party vendors to hold "Option" packages; i.e. packages that you might have paid extra money for. I don't recall seeing "/opt" on Berkeley BSD UNIX. They used "/usr/local" for stuff that you installed yourself.
But of course, the true "meaning" of the different directories has always been somewhat vague. That is arguably a good thing, because if these directories had precise (and rigidly enforced) meanings you'd end up with a proliferation of different directory names.
The Filesystem Hierarchy Standard says this about "/opt/*":
"/opt is reserved for the installation of add-on application software packages."
By contrast it says this about "/usr/local/*":
"The /usr/local hierarchy is for use by the system administrator when installing software locally."
These days, "/usr/local/*" is typically used for installing software that has been built locally, possibly after tweaking configuration options, etcetera.
It's usually described as being for optional add-on software packages, or anything that isn't part of the base system. Only some distributions use it; others simply use /usr/local.
OPTional
It holds optional software and packages that you install that are not required for the system to run.
Add-on software packages.
See http://www.pathname.com/fhs/2.2/fhs-3.12.html for details.
Also described at Wikipedia.
Its use dates back at least to the late 1980s, when it was a standard part of System V UNIX. These days, it's also seen in Linux, Solaris (which is SysV), OS X, Cygwin, etc. Other BSD unixes (FreeBSD, NetBSD, etc.) tend to follow other rules, so you don't usually see BSD systems with an /opt unless they're administered by someone who is more comfortable in other environments.
It is an abbreviation for 'optional', used for optional software in some distros.
Which is the best lightweight distro for learning Linux kernel development? It should have a lot of debugging and profiling tools available along with it :)
LFS. Then install every debugger and profiler you can find.
I've heard Linus himself uses Fedora. I'd recommend Gentoo, which lets (in fact, intends for) you to hand-customize your kernel; it's the perfect setting for it (and I've spent many hours squeezing out every last bit of performance for the fun of it).
Naturally Ubuntu is my preferred distro, but you may have trouble if you start hijacking and removing expected kernel features. Gentoo won't complain, and doesn't expect them to be around to begin with.
I've enjoyed using Gentoo for fiddling around with the kernel.
The distro does not really matter. What matters is what you want to do with the kernel and how you develop and test its features.
Here are a few things to do.
a. Turn on the kernel debugging and logging options. Those will definitely help you in debugging.
See "useful Linux kernel debug options to turn on".
b. Get tools that check for memory leaks, such as Valgrind for user space or the kernel's own kmemleak detector; see the documentation at https://www.kernel.org/doc/Documentation/kmemleak.txt (a usage sketch follows this list).
c. Find a good editor. I don't want to start a vim vs. emacs war; it is really a personal preference. Just make sure you follow the Linux kernel coding style guidelines: https://www.kernel.org/doc/Documentation/CodingStyle
d. Get familiar with the logging facilities and the proc filesystem, as they provide valuable information.
e. Read the documentation in the directory /usr/src/linux/Documentation; it is a very good starting point for understanding the kernel.
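As a rough sketch of point b, assuming the kernel was built with CONFIG_DEBUG_KMEMLEAK and debugfs is available, kmemleak is driven from the shell roughly as described in kmemleak.txt:

    # Mount debugfs if it is not already mounted
    mount -t debugfs nodev /sys/kernel/debug

    # Trigger an immediate scan for possible memory leaks
    echo scan > /sys/kernel/debug/kmemleak

    # Show the leak reports collected so far
    cat /sys/kernel/debug/kmemleak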
The distro probably doesn't make much difference, since you'll be working on your own kernel and not the "kitchen sink" kernel that distros tend to provide, usually with a bunch of patches.
If you're doing kernel development work then I suppose you want a distro that boots quickly; something like Puppy might be ideal here, and you can do your actual coding from something like Ubuntu.
Buildroot
Buildroot is a set of scripts that generates tiny distros with rootfs images smaller than 10MiB.
It downloads everything from source and compiles it, so it is trivial to patch packages up.
The generated images are so tiny, that it becomes possible to understand the entire userland setup, which will make it easier to focus on the kernel.
Advantage over LFS: everything is fully automated. Because of this, Buildroot is used professionally in large organizations.
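As a hedged sketch of a typical workflow (the clone URL and the qemu_x86_64_defconfig name reflect Buildroot's public repository and bundled configs, and may change between releases):

    # Fetch Buildroot and pick a ready-made minimal QEMU configuration
    git clone https://git.buildroot.net/buildroot
    cd buildroot
    make qemu_x86_64_defconfig

    # Build the toolchain, kernel and tiny rootfs entirely from source (takes a while)
    make

    # The resulting kernel and root filesystem images end up under output/images/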
I have created this setup to automate things as much as possible: https://github.com/cirosantilli/linux-kernel-module-cheat