Linux kernel version numbers?

I'm somewhat confused by Linux kernel version numbers. I noticed that many Linux distributions do not use the latest kernels; in fact, I have seen a few using version 2.4. Is there a reason for this? Are there any differences between 2.4, 2.6, and 3.2 apart from age? What are the security implications of using an older kernel?

Different kernels have different features and improvements. The developers of a distribution look at the features they are going to support, then decide which kernel to use based on which one supports those features most stably.
It is the distribution developers' decision which one to use, and they go with the one they feel comfortable with.
This is a fact of software development. It's like asking what the difference is between Java 1.1 and Java 1.7 apart from age... the answer is: many things.
Most kernels and software also have a security patch schedule, and it is up to the user to keep their systems patched and up to date. If you do not, you invite security issues, as they are never fixed.

Between major releases (e.g. 2.4 and 2.6), new features are introduced and old features are deprecated, eventually making the kernel binary-incompatible.
This is important if you depend on some kernel module that is not part of the mainline kernel (e.g. it is proprietary and the original authors don't provide updated modules, or it is open source but never made it into the kernel and nobody is willing to spend time migrating the code).
Also, a new major release might change the system behaviour significantly. I remember that when switching from 2.4 to 2.6, many people who needed low-latency audio (that's my background, so forgive me) stayed with the old kernel, since the new scheduling algorithms performed worse in some situations.

The page below should help in understanding the Linux version numbering system:
http://www.linfo.org/kernel_version_numbering.html
A snippet from http://www.tldp.org/FAQ/Linux-FAQ/kernel.html#linux-versioning:
Linux version numbers follow a longstanding tradition. Each version has three numbers, i.e., X.Y.Z. The "X" is only incremented when a really significant change happens, one that makes software written for one version no longer operate correctly on the other. This happens very rarely -- in Linux's history it has happened exactly once.
The "Y" tells you which development "series" you are in. A stable kernel will always have an even number in this position, while a development kernel will always have an odd number.
The "Z" specifies which exact version of the kernel you have, and it is incremented on every release.

There is a human-readable changelog for the Linux Kernel.
Some people take the attitude "if it works - don't change it". So, there are security implications of running an older kernel - but there's also the risk that upgrading may break something.

As kernels improve and change, they might break expectations that held on earlier versions. If a distribution becomes unstable because too much software breaks under the new version of the kernel, the developers might opt to use an earlier version until the software can handle the changes in the new kernel.

Typically a binary image may not be portable between kernels where the first or second part of the version number changes. So in order to upgrade from 2.4 to 2.6, a large proportion of libs and executables will need to be upgraded / recompiled too. This can be a particular problem if you use closed source applications/drivers.
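To make that rule of thumb concrete, here is a minimal sketch (Python; the function name is mine) that compares just the first two version components:

    def same_series(v1, v2):
        # Compare only the first two components: "2.6.18" -> ["2", "6"].
        return v1.split(".")[:2] == v2.split(".")[:2]

    print(same_series("2.6.18", "2.6.32"))  # True: same series
    print(same_series("2.4.37", "2.6.18"))  # False: expect recompilation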
Most Linux distributors will backport patches from newer kernel versions, depending on the security impact and the demand for additional functionality. If you're not able to upgrade your entire system at regular intervals, then you need to find a distribution which explicitly supports a long support cycle and has a good history of backporting (e.g. RHEL, SuSE).

Related

Does the Debian system use the FreeBSD kernel and the Red Hat system the GNU kernel?

Is it true that the Debian system uses the FreeBSD kernel and the Red Hat system uses the GNU kernel? Does it make any difference whether you use the FreeBSD kernel or the GNU kernel?
Well, the comment grew too long, so I might as well post an answer. It is quite unclear what you are asking, since you don't present any specific problem; at first glance this question does not belong on SO. But since you posted it here, I assume you are asking about the difference from a coder's point of view.
First, to clear things up: the BSD kernel is an alternative one, like Hurd. A Debian system can use the FreeBSD kernel, but it's not the default; its status is actually "official port".
Probably the biggest difference lies in the philosophy the user believes in and their peace of mind, or alternatively the hood they grew up in. That is, unless you are a distro developer or work even lower on the system.
Generally, as a programmer you work with a standard library implementation (libc, libc++, libstdc++ or whatever); these need to follow the standard, and the differences are really small. On Debian this is natively provided by the GNU toolset, so you can assume you are working in GNU wonderland. The kernel lies even lower, and you would need to work on really low-level, kernel-related stuff to notice differences in the kernel itself. Usually you work a few layers above it, and the kernel should be pleasantly abstracted away.
So, no need to worry unless you actually run into trouble.

Contributing to a Linux distribution

I'm interested in contributing to a Linux distro, but regarding the various distro's developer communities, I'm having a bit of trouble figuring out which one I'd most like to join.
What languages I know: C, C++, Lua, Python, and fairly familiar with Perl (though I wouldn't say I "know" it). In particular, I have very little experience with x86 assembly besides hacking stuff together for performance tweaks, though that will be partially rectified soon.
What I'm looking for: A community that provides plenty of opportunities for developers to work on various aspects of the distribution. To be honest, I'm most interested in reading and working on the kernel source (in which case the distro doesn't matter), but it's pretty daunting, and I figure getting into the Linux community and working with experienced Linux developers might give me a better idea of how to jump into the guts (let me know if this is bogus, or if you have any advice regarding that).
So...
Which distro has the "best" developer community in terms of organization, people who are fun to work with, and opportunities to contribute?
I've read various "Contributing to XXX" pages and mailing lists for distros like Ubuntu, OpenSuse, Fedora, etc. but I'd rather get a more personal testament from an actual developer.
Unless you have a specific desire to learn the ins and outs of various packaging formats, you would probably be better off contributing directly upstream to applications/libraries that you find interesting. While individual distributions often have a few management applications that are unique(ish) to them, most core applications and libraries are shared between them.
As you have expressed an interest in guts, it would make sense to stick to one of the main community distros (Fedora and Ubuntu/Debian), as the rest tend to be variations on a base distro. The other option is to choose a source-based distribution, which has a number of advantages for developers, although you may find yourself spending a bit of time keeping your machine trim.
As I'm a developer I personally use Gentoo which gives me a number of things:
Rolling release: New versions of applications are generally available soon after release
Stable/Unstable mix: I can run stable core with bleeding edge on upstream packages I care about
Development ready: Any installed package is by default a "dev" package, the distinction between buildtime/runtime dependencies is blurred
Packaging is easy: If it's as simple as "configure/make/make install", writing an ebuild is very easy.
Contribution is easy: Contributing new ebuilds is fairly painless, from there you can get as involved as you like
Of course there are downsides, not least of all that your machine spends a considerable amount of time building things, and if you run a large selection of "unstable" packages you may find you occasionally need to fix up your machine. However, I find these disadvantages minor compared to having an up-to-date platform from which to contribute upstream.
If you want to work with the kernel then you shouldn't be picking a distribution, but rather working upstream.
Somebody correct me if I'm wrong, but I think that contributing to Ubuntu can be very easy and fun if you use Launchpad. I haven't tried contributing code, but I contribute translations and file bugs on some projects.

How to build Linux system from kernel to UI layer

I have been looking into the MeeGo, Maemo, and Android architectures.
They all have a Linux kernel, build some libraries on it, then build middle-layer libraries [e.g. telephony, media, etc.].
Suppose I want to build my own system, say a Linux kernel with some binaries like glibc, D-Bus, ..., a UI toolkit like GTK+, and its binaries.
I want to compile every project from source to customize my own Linux system for desktop, netbook, and handheld devices. [Starting with the netbook first :)]
How can I build my own customized system from kernel to UI?
I apologize in advance for a very long-winded answer to what you thought would be a very simple question. Unfortunately, piecing together an entire operating system from many different bits in a coherent and unified manner is not exactly a trivial task. I'm currently working on my own Xen-based distribution; I'll share my experience thus far (beyond Linux From Scratch):
1 - Decide on a scope and stick to it
If you have any hope of actually completing this project, you need to write an explanation of what your new OS will be and do once it's completed, in a single paragraph. Print that out and tape it to your wall, directly in front of you. Read it, chant it, practice saying it backwards, and do whatever else may help you keep it directly in front of any urge to succumb to feature creep.
2 - Decide on a package manager
This may be the single most important decision that you will make. You need to decide how you will maintain your operating system in regard to updates and new releases, even if you are the only subscriber. Anyone, including you, who uses the new OS will surely find a need to install something that was not included in the base distribution. Even if you are pushing out an OS to power a kiosk, it's critical for all deployments to keep themselves up to date in a sane and consistent manner.
I ended up going with apt-rpm because it offered the flexibility of the popular .rpm package format while leveraging apt's known sanity when it comes to dependencies. You may prefer using yum, apt with .deb packages, slackware style .tgz packages or your own format.
Decide on this quickly, because it's going to dictate how you structure your build. Keep track of dependencies in each component so that it's easy to roll packages later.
3 - Re-read your scope then configure your kernel
Avoid the kitchen sink syndrome when making a kernel. Look at what you want to accomplish and then decide what the kernel has to support. You will probably want full gadget support, compatibility with file systems from other popular operating systems, security hooks appropriate for people who do a lot of browsing, etc. You don't need to support crazy RAID configurations, advanced netfilter targets and minixfs, but wifi better work. You don't need 10GBE or infiniband support. Go through the kernel configuration carefully. If you can't justify including a module by its potential use, don't check it.
Avoid pulling in out of tree patches unless you absolutely need them. From time to time, people come up with new scheduling algorithms, experimental file systems, etc. It is very, very difficult to maintain a kernel that consumes from anything else but mainline.
There are exceptions, of course, such as when going out of tree is the only way to meet one of the goals stated in your scope. Just remain conscious of how much additional work you'll be making for yourself in the future.
4 - Re-read your scope then select your base userland
At the very minimum, you'll need a shell, the core utilities, and an editor that works without a window manager. Paying attention to dependencies will tell you that you also need a C library and whatever else is needed to make the base commands work. As Eli answered, Linux From Scratch is a good resource to check. I also strongly suggest looking at the LSB (Linux Standard Base), a specification that lists common packages and components that are 'expected' to be included with any distribution. Don't follow the LSB as a standard; compare its suggestions against your scope. If the purpose of your OS does not necessitate the inclusion of something and nothing you install will depend on it, don't include it.
5 - Re-read your scope and decide on a window system
Again, to avoid kitchen sink syndrome, try to resist the urge to just slap a stock install of KDE or GNOME on top of your base OS and call it done. Another common pitfall is to install a full-blown version of either and work backwards by removing things that aren't needed. For the sake of sane dependencies, it's really better to work on this from the bottom up rather than the top down.
Decide quickly on the UI toolkit that your distribution is going to favor and get it (with supporting libraries) in place. Define consistency in UIs quickly and stick to it. Nothing is more annoying than having 10 windows open that behave completely differently as far as controls go. When I see this, I diagnose the OS with multiple personality disorder and want to medicate its developer. There was just an uproar regarding Ubuntu moving window controls around, and they were doing it consistently... the inconsistency was the behavior changing between versions. People get very upset if they can't immediately find a button or have to increase their mouse mileage.
6 - Re-read your scope and pick your applications
Avoid kitchen sink syndrome here as well. Choose your applications not only based on your scope and their popularity, but also on how easy they will be for you to maintain. It's very likely that you will be applying your own patches to them (even simple ones, like messengers updating a blinking light on the toolbar).
It's important to keep every architecture that you want to support in mind as you select what you want to include. For instance, if Valgrind is your best friend, be aware that you won't be able to use it to debug issues on certain ARM platforms.
Pretend you are a company and will be an employee there. Does your company pass the Joel test? Consider a continuous integration system like Hudson, as well. It will save you lots of hair pulling as you progress.
As you begin unifying all of these components, you'll naturally be establishing your own SDK. Document it as you go, and avoid breaking it on a whim (refer to your scope, always). It's perfectly acceptable to just let Linux be Linux, which turns your SDK more into formal guidelines than anything else.
In my case, I'm rather fortunate to be working on something that is designed strictly as a server OS. I don't have to deal with desktop caveats and I don't envy anyone who does.
7 - Additional suggestions
These are in random order, but noting them might save you some time:
Maintain patch sets for every line of upstream code that you modify, in numbered sequence. An example might be 00-make-bash-clairvoyant.patch; this allows you to maintain patches instead of entire forked repositories of upstream code (see the sketch after these suggestions). You'll thank yourself for this later.
If a component has a testing suite, make sure you add tests for anything that you introduce. It's easy to just say "great, it works!" and leave it at that; keep in mind that you'll likely be adding even more later, which may break what you added previously.
Use whatever version control system is in use by the authors when pulling in upstream code. This makes merging of new code much, much simpler and shaves hours off of re-basing your patches.
Even if you think upstream authors won't be interested in your changes, at least alert them to the fact that they exist. Coordination is essential, even if you simply learn that a feature you just put in is already in planning and will be implemented differently in the future.
You may be convinced that you will be the only person to ever use your OS. Design it as though millions will use it, you never know. This kind of thinking helps avoid kludges.
Don't pull in upstream alpha code, no matter what the temptation may be. Red Hat tried that; it did not work out well. Stick to stable releases unless you are pulling in bug fixes. Major bug fixes usually result in upstream releases, so make sure you watch and coordinate.
Remember that it's supposed to be fun.
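As referenced in the first suggestion, here is a minimal sketch (Python) of applying a numbered patch series in lexical order, the way tools like quilt do; the file and directory names are hypothetical:

    import glob
    import os
    import subprocess

    def apply_patch_series(src_dir, patch_dir="patches"):
        # Lexical sort makes 00-... apply before 01-..., and so on.
        for patch in sorted(glob.glob(os.path.join(patch_dir, "*.patch"))):
            print("applying", patch)
            subprocess.run(
                ["patch", "-p1", "-d", src_dir, "-i", os.path.abspath(patch)],
                check=True,
            )

    apply_patch_series("bash-4.0")  # directory names are illustrative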
Finally, realize that rolling an entire from-scratch distribution is exponentially more complex than forking an existing distribution and simply adding whatever you feel it lacks. You need to reward yourself often by booting your OS and actually using it productively. If you get too frustrated, consistently confused, or find yourself putting off work on it, consider making a lightweight fork of Debian or Ubuntu. You can then go back and duplicate it entirely from scratch. It's no different than prototyping an application in a simpler / rapid language first before writing it for real in something more difficult. If you want to go this route (first), gNewSense offers utilities to fork your own OS directly from Ubuntu. Note that, by default, their utilities will strip any non-free bits (including binary kernel blobs) from the resulting distro.
I strongly suggest going the completely-from-scratch route (first), because the experience that you will gain is far greater than making yet another fork. However, it's also important that you actually complete your project. "Best" is subjective; do what works for you.
Good luck on your project, see you on distrowatch.
Check out Linux From Scratch:
Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own customized Linux system entirely from source.
Use Gentoo Linux. It is a compile-from-source distribution and very customizable. I like it a lot.

Ada compilers for Linux

I'm doing a trade study for Ada development on Linux. Do you have any good compiler/OS recommendations?
So far, I've got GNAT from AdaCore running on CentOS 5.4, and I have license requests in for Rational Apex and Aonix ObjectAda.
This is a porting effort. The original codebase is Apex 3.0 on OSF1 4.0d.
Anything else I should be considering? Ideally, it would be a supported environment.
One issue you need to take into consideration is determining to what degree the system being ported utilizes vendor-supplied packages to perform its function. What I've seen with older, large systems, especially Apex ones, is a propensity for the language gurus during development to have decided that vanilla Ada just wasn't good enough, and so to have tied into all these vendor-supplied packages. If that's what your system does right now, it's a strong argument for upgrading within the vendor and sticking with Apex (all other things being mostly equal).
Whenever I've done ports of such systems, if given the opportunity I've done my best to tear out all the vendor-supplied stuff; nine times out of ten, replacing the vendor-specific stuff with vanilla Ada implementations worked just as well, and you no longer have to deal with the quirks of a compiler-specific package. Plus, you increase the portability and maintainability of the system, allowing it to better adapt to future changes.
There is always SPARK, but I believe it's a specialized/subsetted version of the Ada language. You might want to contact SIGAda or the Ada Usenet group to see if there are any other ideas.
Honestly though, GNAT is a great tool set. You can use GNATBench, an Eclipse interface, or GPS, a light-weight GTK+ IDE, to interface with the GNAT tools.
Other compilers I am aware of are Green Hills AdaMULTI (for various RTOSes) and DDC-I's SCORE (also for various RTOSes).
Providers of certified compilers that support Linux (in addition to those listed in the question):
Irvine Compiler Corp.
OC Systems
RR Software
Sofcheck

Getting software version numbers right. v1.0.0.1 [closed]

I distribute software online, and always wonder if there is a proper way to better define version numbers.
Let's assume A.B.C.D in the answers. When do you increase each of the components?
Do you use any other version-number tricks, such as D mod 2 == 1 meaning an in-house-only release?
Do you have beta releases with their own version numbers, or do you have beta releases per version number?
I'm starting to like the Year.Release[.Build] convention that some apps (e.g. Perforce) use. Basically it just gives the year in which you release and the sequence within that year. So 2008.1 would be the first version, and if you released another a month or three later, it would go to 2008.2.
The advantage of this scheme is there is no implied "magnitude" of release, where you get into arguments about whether a feature is major enough to warrant a major version increment or not.
An optional extra is to tag on the build number, but that tends to be for internal purposes only (e.g. added to the EXE/DLL so you can inspect the file and ensure the right build is there).
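A minimal sketch (Python; the helper name is mine) of computing the next tag under this scheme, assuming you can list the existing release tags:

    from datetime import date

    def next_release(existing_tags):
        # "2008.1" -> 1; take the highest sequence for this year and add 1.
        year = str(date.today().year)
        seq = [int(t.split(".")[1]) for t in existing_tags if t.split(".")[0] == year]
        return f"{year}.{max(seq, default=0) + 1}"

    print(next_release(["2007.3", "2008.1"]))  # "2008.2" when run in 2008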
In my opinion, almost any release number scheme can be made to work more or less sanely. The system I work on uses version numbers such as 11.50.UC3, where the U indicates 32-bit Unix, and the C3 is a minor revision (fix pack) number; other letters are used for other platform types. (I'd not recommend this scheme, but it works.)
There are a few golden rules which have not so far been stated, but which are implicit in what people have discussed.
Do not release the same version twice - once version 1.0.0 is released to anyone, it can never be re-released.
Release numbers should increase monotonically. That is, the code in version 1.0.1 or 1.1.0 or 2.0.0 should always be later than version 1.0.0, 1.0.9, or 1.4.3 (respectively).
Now, in practice, people do have to release fixes for older versions while newer versions are available -- see GCC, for example:
GCC 3.4.6 was released after 4.0.0, 4.1.0 (and AFAICR 4.2.0), but it continues the functionality of GCC 3.4.x rather than adding the extra features added to GCC 4.x.
So, you have to build your version numbering scheme carefully.
One other point which I firmly believe in:
The release version number is unrelated to the CM (VCS) system version numbering, except for trivial programs. Any serious piece of software with more than one main source file will have a version number unrelated to the version of any single file.
With SVN, you could use the SVN version number - but probably wouldn't as it changes too unpredictably.
For the stuff I work with, the version number is a purely political decision.
Incidentally, I know of software that went through releases from version 1.00 through 9.53, but that then changed to 2.80. That was a gross mistake, dictated by marketing. Granted, version 4.x of the software is/was obsolete, so it didn't immediately make for confusion, but version 5.x of the software is still in use and sold, and the revisions have already reached 3.50. I'm very worried about what my code that has to work with both the 5.x (old style) and 5.x (new style) is going to do when the inevitable conflict occurs. I guess I have to hope that they will dilly-dally on changing to 5.x until the old 5.x really is dead, but I'm not optimistic.
I also use an artificial version number, such as 9.60, to represent the 3.50 code, so that I can do sane "if VERSION > 900" testing, rather than having to do "if (VERSION >= 900 || (VERSION >= 280 && VERSION < 400))", where I represent version 9.00 by 900. And then there's the significant change introduced in version 3.00.xC3... my scheme fails to detect changes at the minor release level... grumble... grumble...
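A minimal sketch (Python) of the usual alternative to integer-encoded versions, comparing them as tuples instead; note that it neither handles suffixes like xC3 nor repairs the renumbering problem described above:

    def vtuple(version):
        # "9.53" -> (9, 53); tuples compare element by element, left to right.
        return tuple(int(part) for part in version.split("."))

    # The renumbering still bites: 2.80 (new numbering) compares below 9.53
    # (old numbering), even though 2.80 is the newer release.
    print(vtuple("9.53") > vtuple("2.80"))     # True
    print(vtuple("1.0.9") < vtuple("1.0.10"))  # True: no string-compare trap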
NB: Eric Raymond provides Software Release Practice HOWTO including the (linked) section on naming (numbering) releases.
I usually use D as a build counter (automatically incremented by the compiler).
I increment C every time a build is released to "public" (not every build is released)
A and B are used as major/minor version number and changed manually.
I think there are two ways to answer this question, and they are not entirely complementary.
Technical: Increment versions based on technical tasks. Example: D is build number, C is Iteration, B is a minor release, A is a major release. Defining minor and major releases is really subjective, but could be related things like changes to underlying architecture.
Marketing: Increment versions based on how many "new" or "useful" features are being provided to your customers. You may also tie the version numbers to an update policy...Changes to A require the user to purchase an upgrade license, whereas other changes do not.
The bottom line, I think, is finding a model that works for you and your customers. I've seen some cases where even versions are public releases, and odd versions are considered beta, or dev releases. I've seen some products which ignore C and D all together.
Then there is the example from Microsoft, where the only rational explanation for the version numbers of the .NET Framework is that Marketing was involved.
Our policy:
A - Significant (> 25%) changes or additions in functionality or interface.
B - Small changes or additions in functionality or interface.
C - Minor changes that break the interface.
D - Fixes to a build that do not change the interface.
People tend to want to make this much harder than it really needs to be. If your product has only a single long-lived branch, just name successive versions by their build number. If you've got some kind of "minor bug fixes are free, but you have to pay for major new versions", then use 1.0, 1.1 ... 1.n, 2.0, 2.1... etc.
If you can't immediately figure out what the A,B,C, and D in your example are, then you obviously don't need them.
The only use I have ever made of the version number was so that a customer could tell me they're using version 2.5.1.0 or whatever.
My only rule is designed to minimize mistakes in reporting that number: all four numbers have to be 1 digit only.
1.1.2.3
is ok, but
1.0.1.23
is not. Customers are likely to report both numbers (verbally, at least) as "one-one-two-three".
Auto-incrementing build numbers often results in version numbers like
1.0.1.12537
which doesn't really help, either.
A good and non-technical scheme just uses the build date in this format:
YYYY.MM.DD.BuildNumber
Where BuildNumber is either a continuous number (changelist) or just starts over at 1 each day.
Examples: 2008.03.24.1 or 2008.03.24.14503
This is mainly for internal releases; public releases would see the version printed as 2008.03 if you don't release more often than once a month. Maintenance releases get flagged as 2008.03a, 2008.03b, and so on. They should rarely go past "c", but if they do, it's a good indicator that you need better QA and/or testing procedures.
Version fields that are commonly seen by the user should be printed in a friendly "March 2008" format, reserve the more technical info in the About dialog or log files.
Biggest disadvantage: just compiling the same code on another day might change the version number. But you can avoid this by using the version control changelist as the last number and checking against it to determine whether the date needs to be changed as well.
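A minimal sketch (Python; the helper name is mine) of producing such a version string from the current date and a changelist number:

    from datetime import date

    def build_version(changelist):
        # E.g. "2008.03.24.14503" for changelist 14503 built on 2008-03-24.
        return f"{date.today():%Y.%m.%d}.{changelist}"

    print(build_version(14503))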
In the GitHub world, it has become popular to follow Tom Preston-Werner's "semver" spec for version numbers.
From http://semver.org/ :
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
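A minimal sketch (Python; names are mine) of bumping a version according to these rules; it handles only the core MAJOR.MINOR.PATCH format, without pre-release or build-metadata labels:

    import re

    CORE = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")  # core format only, no labels

    def bump(version, change):
        # change is "major" (incompatible), "minor" (compatible feature),
        # or "patch" (compatible bug fix).
        major, minor, patch = map(int, CORE.match(version).groups())
        if change == "major":
            return f"{major + 1}.0.0"
        if change == "minor":
            return f"{major}.{minor + 1}.0"
        return f"{major}.{minor}.{patch + 1}"

    print(bump("1.4.2", "minor"))  # 1.5.0
    print(bump("1.5.0", "major"))  # 2.0.0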
I use V.R.M e.g. 2.5.1
V (version) changes are a major rewrite
R (revision) changes are significant new features or bug fixes
M (modification) changes are minor bug fixes (typos, etc.)
I sometimes use an SVN commit number on the end too.
It's all really subjective at the end of the day and simply up to you and your team.
Just take a look at all the answers already - all very different.
Personally I use Major.Minor.*.*, where Visual Studio fills in the revision/build number automatically. This is used where I work too.
I like Year.Month.Day. So, v2009.6.8 would be the "version" of this post. It is (reasonably) impossible to duplicate, and it is very clear when something is a newer release. You could also drop the decimals and make it v20090608.
In the case of a library, the version number tells you about the level of compatibility between two releases, and thus how difficult an upgrade will be.
A bug fix release needs to preserve binary, source, and serialization compatibility.
Minor releases mean different things to different projects, but usually they don't need to preserve source compatibility.
Major version numbers can break all three forms.
I wrote more about the rationale here.
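A minimal sketch (Python; names are mine) mapping those compatibility promises onto the component that must change, under the conventions described above:

    def required_bump(breaks_binary, breaks_source, breaks_serialization):
        # Bug fixes must preserve all three kinds of compatibility;
        # minor releases often drop source compatibility; major may break all.
        if breaks_binary or breaks_serialization:
            return "major"
        if breaks_source:
            return "minor"
        return "patch"

    print(required_bump(False, True, False))  # 'minor'
    print(required_bump(True, False, False))  # 'major'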
For in-house development, we use the following format.
[Program #] . [Year] . [Month] . [Release # of this app within the month]
For example, if I'm releasing application # 15 today, and it's the third update this month, then my version # will be
15.2008.9.3
It's totally non-standard, but it is useful for us.
For the past six major versions, we've used M.0.m.b where M is the major version, m is the minor version, and b is the build number. So released versions included 6.0.2, 7.0.1, ..., up to 11.0.0. Don't ask why the second number is always 0; I've asked a number of times and nobody really knows. We haven't had a non-zero there since 5.5 was released in 1996.
