What if the kernel version is different from the one the module was built for - linux

Suppose I build a module against kernel 2.6.32-431, but then load it successfully on kernel 2.6.32-432. Can this module work properly? Or could it harm the system?

With such a small difference between kernel versions (2.6.32-431 vs 2.6.32-432), and provided the module passes the checksum check (see e.g. this answer about checksums in Linux kernel modules), your module will very likely operate correctly.
Of course, no one can be sure about correctness.

The version of your kernel is 2.6.32.
The number you see after the dash (-432) is an iteration of the patch sets applied by your distribution's developers. Most of those changes are likely security patches.
Moreover, the 2.6.32 kernel is an LTS release, which normally doesn't accept anything but security updates and fixes for severe issues.
So, you should not worry that a module compiled against the 2.6.32-431 kernel sources won't work on the 2.6.32-432 kernel.
What you should really worry about is that the 2.6.32 kernel has not been supported since February 2016.

As long as your changes compiled successfully when the module was built against your new kernel version, it should not be a problem. It should work normally 99% of the time.
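
To make the load-time check concrete: kbuild embeds a "vermagic" string (kernel release, SMP/preempt flags, compiler version) into every module and, when CONFIG_MODVERSIONS is enabled, per-symbol CRCs as well; insmod/modprobe compare these against the running kernel. Below is a minimal, hypothetical out-of-tree module you can use to see this (the file and function names are invented for the sketch); inspect the recorded string with modinfo hello_vermagic.ko | grep vermagic.

/*
 * hello_vermagic.c - minimal sketch of an out-of-tree module
 * (hypothetical name). When built against the 2.6.32-431 headers,
 * kbuild records a "vermagic" string and, with CONFIG_MODVERSIONS,
 * per-symbol CRCs; loading it on 2.6.32-432 succeeds only if those
 * checks pass.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/utsname.h>

static int __init hello_init(void)
{
	/* Report the release string of the kernel we actually loaded on. */
	pr_info("hello_vermagic: running on kernel %s\n",
		init_utsname()->release);
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello_vermagic: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrates vermagic/CRC checking at module load time");

A module rejected for a vermagic mismatch can be forced in with modprobe --force-vermagic, but that puts you squarely back in "no one can be sure about correctness" territory.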

Related

How to find out, on a Linux system, which version of glibc can be installed

I'm trying to update glibc 2.19-r1 to the newer version 2.23-r1 in order to overcome some security vulnerabilities. I generated a new binary package (tbz2) using a Gentoo system, but now I'm having problems installing it on my system.
My question is: how can I know if there is another feature/application that also needs to be updated? Which dependencies does glibc have?
Thank you,
Sami
Which dependencies does glibc have?
It doesn't have any.
When configured, it may require a minimum kernel version to run on. Usually it supports kernels going back at least 5 years; on x86, much older kernels are often supported as well.
To build it, you also need sufficiently recent versions of gcc, make and some other tools (but these dependencies don't transitively apply to the system on which you want to install it).
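
If you want to double-check, at run time, which glibc a program actually resolves to while juggling the 2.19-r1 and 2.23-r1 packages, glibc reports its own version through a couple of GNU extensions. A minimal sketch (the file name is arbitrary):

/* glibc_version.c - print the glibc version this binary is running
 * against. gnu_get_libc_version() and gnu_get_libc_release() are GNU
 * extensions declared in <gnu/libc-version.h>.
 */
#include <stdio.h>
#include <gnu/libc-version.h>

int main(void)
{
	printf("glibc version: %s\n", gnu_get_libc_version()); /* e.g. "2.19" */
	printf("glibc release: %s\n", gnu_get_libc_release()); /* e.g. "stable" */
	return 0;
}

Running ldd --version gives the same information without compiling anything.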

Is there some security software using Linux Security Module?

LSM (Linux Security Module) was used in kernel 2.6. Now the kernel is at 4.x and I can't figure out whether anything still uses LSM. Has the latest kernel given up support for LSM?
According to Wikipedia, the kernel still uses LSM. Modules like AppArmor are implemented on top of LSM. So yes, it is still used, and modules that take advantage of it do exist.

New Linux kernels, no LSM using LKMs, no kernel hooks - now what?

For security reasons, the kernel ceased to export the symbols necessary for writing security modules in the form of loadable kernel modules (Linux Kernel Module, LKM) starting with version 2.6.24.
And sys_call_table isn't exported either, again for security reasons.
But then, how can I filter filesystem requests?
I'll state it simply: I want to hook the "open" function!
I don't want to have to compile my own version of the kernel; otherwise, what's the point of drivers? It should work for all kernels.
Please help. I thought I would have more freedom with Linux than with Windows, but now I see that the most precious parts are locked away in Linux.
I've written a kernel module called tpe-lkm that can do this. I've also mentioned it in some other questions similar to this one here on StackOverflow:
access to the sys_call_table in kernel 2.6+
Reading kernel memory using a module
intercepting file system system calls
Hope one of these helps you out.
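
If patching sys_call_table is off the table, one supported way for a loadable module to observe the open path is kprobes (CONFIG_KPROBES). The sketch below is illustrative only: it assumes the running kernel still has a function named do_sys_open (true for most 2.6-4.x kernels; newer kernels renamed it, e.g. to do_sys_openat2), and it only logs the event rather than filtering it.

/*
 * open_probe.c - sketch of watching the open path with kprobes
 * instead of patching sys_call_table. Assumes CONFIG_KPROBES and a
 * kernel that still has a symbol named "do_sys_open"; treat it as
 * illustrative, not portable.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

static int open_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/* Runs just before do_sys_open(). Extracting the filename from
	 * the registers is architecture-specific, so this sketch only
	 * logs who is opening something. */
	pr_info("open_probe: %s (pid %d) entered do_sys_open\n",
		current->comm, current->pid);
	return 0;
}

static struct kprobe open_probe = {
	.symbol_name = "do_sys_open",
	.pre_handler = open_pre_handler,
};

static int __init open_probe_init(void)
{
	int ret = register_kprobe(&open_probe);

	if (ret < 0)
		pr_err("open_probe: register_kprobe failed: %d\n", ret);
	return ret;
}

static void __exit open_probe_exit(void)
{
	unregister_kprobe(&open_probe);
}

module_init(open_probe_init);
module_exit(open_probe_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("kprobe on do_sys_open, for illustration only");

Note that a kprobe pre-handler can observe a call but is not designed to veto it; for actual enforcement, the in-tree LSM hooks (or fanotify permission events from user space) are the supported routes.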

is it really necessary to recompile the kernel in order to do linux driver development?

As a newbie in Linux driver development, I've successfully compiled a new kernel (2.6.39.4) on my previous Ubuntu 11.04 (2.6.38-8-generic) installation, and it's going well so far. But I am wondering: why do I need to recompile the kernel? Is it really necessary? I played with some simple driver samples on my previous kernel and they compiled and ran well.
It depends on the driver you are working on.
If the driver does not rely on any features that differ between the two kernel versions, you don't need to recompile the kernel; just compile the driver against the corresponding kernel headers. Otherwise, you must build and run the right kernel so that the driver can work properly.
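
To illustrate "features that differ between kernel versions": out-of-tree driver source usually adapts to whatever headers it is compiled against, so rebuilding the module (not the kernel) is enough. A hypothetical sketch, keyed off the real change where struct file_operations lost its .ioctl member around 2.6.36:

/*
 * compat_sketch.c - hypothetical driver fragment showing how source
 * adapts to the headers it is built against. LINUX_VERSION_CODE and
 * KERNEL_VERSION come from <linux/version.h> of the target kernel.
 */
#include <linux/module.h>
#include <linux/version.h>
#include <linux/fs.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 36)
/* Older kernels still offered the BKL-based .ioctl callback. */
static int sketch_ioctl(struct inode *inode, struct file *file,
			unsigned int cmd, unsigned long arg)
{
	return 0;
}
#else
/* 2.6.36 removed .ioctl; only .unlocked_ioctl remains. */
static long sketch_unlocked_ioctl(struct file *file,
				  unsigned int cmd, unsigned long arg)
{
	return 0;
}
#endif

static const struct file_operations sketch_fops = {
	.owner = THIS_MODULE,
#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 36)
	.ioctl = sketch_ioctl,
#else
	.unlocked_ioctl = sketch_unlocked_ioctl,
#endif
};

static int __init sketch_init(void)
{
	/* A real driver would register sketch_fops with a char device. */
	pr_info("compat_sketch: fops at %p\n", &sketch_fops);
	return 0;
}

static void __exit sketch_exit(void)
{
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");

The usual out-of-tree build invocation ("make -C /lib/modules/$(uname -r)/build M=$PWD modules") picks up the headers of the running kernel automatically, which is what makes this work without touching the kernel itself.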

Why do I need to re-compile the VMWare kernel module after a Linux kernel upgrade?

After a Linux kernel upgrade, my VMWare server cannot start until I use vmware-config.pl to do some re-configuration work (including building some kernel modules).
If I update my Windows VMWare host with the latest Windows Service Pack, I usually don't need to do anything to run VMWare.
Why does VMWare work differently between Linux and Windows? Does this re-compilation bring any benefits on the Linux platform over Windows?
Go read The Linux Kernel Driver Interface.
This is being written to try to explain why Linux does not have a binary kernel interface, nor does it have a stable kernel interface. Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still works just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
It reflects the view of a large portion of Linux kernel developers:
the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.
Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMWare's to work reliably on multiple kernels.
As an example, if some structures change on a new kernel release (for better performance or more features or whatever other reason), a binary VMWare module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout, and thus stand a better chance of working -- though still not 100%, in case fields have been removed or renamed or given different purposes.
If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work: the module will have to be adapted to the new kernel. Since everybody has (or should have) the source, and is able to modify it to fit (or can find somebody who can), this is considered acceptable. "Push work to the end-nodes" is a common idea in both networking and free software: since the resources at the fringes (i.e., of the developers outside the Linux kernel) are larger than the limited resources of the backbone (i.e., of the Linux developers), the trade-off of making the former do more of the work is accepted.
On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft's developers and (at worst) is inefficient, causes bugs, and prevents forward progress.
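
To make the structure-layout argument concrete, here is a purely hypothetical user-space illustration (neither struct exists in the real kernel): a binary module keeps the field offsets it saw at compile time, so if a release inserts a field, its reads land on the wrong bytes; recompiling against the new headers picks up the new offsets automatically.

/*
 * layout_example.c - hypothetical illustration of why an in-kernel
 * structure change breaks binary-only modules. The structs are
 * invented; only the shifting offsets matter.
 */
#include <stddef.h>
#include <stdio.h>

/* Layout the binary module was compiled against ("old kernel"). */
struct old_device_state {
	int flags; /* offset 0 */
	int irq;   /* offset 4 */
};

/* Layout after a release inserts a field ("new kernel"). */
struct new_device_state {
	int flags;     /* offset 0 */
	int numa_node; /* new field pushes everything below it down */
	int irq;       /* now at offset 8 */
};

int main(void)
{
	/* A pre-built module would still read irq at the old offset (4),
	 * which in the new layout holds numa_node instead. */
	printf("irq offset, old layout: %zu\n",
	       offsetof(struct old_device_state, irq));
	printf("irq offset, new layout: %zu\n",
	       offsetof(struct new_device_state, irq));
	return 0;
}
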
Linux does not have a stable kernel ABI - things like the internal layout of data structures change from version to version. VMWare needs to be rebuilt to use the ABI of the new kernel.
On the other hand, Windows has a very stable kernel ABI that does not change from service pack to service pack.
To add to bdonlan's answer, ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel. On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers question whether distributing binary modules is even allowed, the ability to distribute binary modules isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.
It's a consequence of Linux and Windows being developed in different cultural environments and expectations: http://www.joelonsoftware.com/articles/Biculturalism.html. In short: Windows is designed to be suitable for users, whereas Linux evolves to be suitable for open source developers.
