How to build Linux kernel to support SO_ATTACH_BPF socket option?

I want to build an application that uses eBPF on CentOS 7 (the kernel version is 3.10.0):
if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd))) {
    ......
}
So I downloaded the 4.0.5 sources and turned the following configuration options on:
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
Then I followed this link to build and install the 4.0.5 kernel.
After executing make modules_install install, I found there was still no SO_ATTACH_BPF in <asm-generic/socket.h>, so the code above can't be compiled successfully.
How to build Linux kernel to support SO_ATTACH_BPF socket option?

In my setup, which is based on Fedora 21, I use steps very similar to those you linked to compile and install the latest kernel. As an additional step, I do the following from the kernel build tree to install the kernel header files into /usr/local/include:
sudo make INSTALL_HDR_PATH=/usr/local headers_install
This leaves the stock kernel header files installed in /usr/include/{linux,asm,asm-generic,...}, while the new kernel header files go into /usr/local/include/{linux,asm,asm-generic,...}. When compiling your test program, depending on which build system you use, you may need to pass -I/usr/local/include to gcc/clang.
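As a quick sanity check that the compiler is picking up the new headers, you can build a tiny probe like the following (the file name is made up for illustration; SO_ATTACH_BPF only appears in kernel headers from 3.19 onward):

/* check_so_attach_bpf.c — a minimal sketch that reports whether the
 * headers visible to the compiler define SO_ATTACH_BPF. */
#include <stdio.h>
#include <asm-generic/socket.h>

int main(void)
{
#ifdef SO_ATTACH_BPF
    printf("SO_ATTACH_BPF is defined: %d\n", SO_ATTACH_BPF);
#else
    puts("SO_ATTACH_BPF is not defined by these headers");
#endif
    return 0;
}

Compile it once with and once without the extra include path to see the difference:

gcc -I/usr/local/include check_so_attach_bpf.c -o check && ./check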

Your newly installed kernel supports SO_ATTACH_BPF, but your current libc package doesn't know about it (as you mention, the distro's native 3.10.0 kernel lacks support for that option).
You need to update the libc package as well to use the new kernel's features in user-space programs.
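Until your userspace headers catch up, a common stopgap (assuming the asm-generic value; a few architectures such as sparc number their socket options differently) is to define the constant yourself:

/* Fallback for userspace headers that predate the running kernel.
 * 50 is the asm-generic/socket.h value of SO_ATTACH_BPF; the running
 * kernel must still be >= 3.19 for the option to work. */
#ifndef SO_ATTACH_BPF
#define SO_ATTACH_BPF 50
#endif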

Related

How can I install GCC and other developer tools inside a QEMU virtual machine that only has BusyBox?

I downloaded the Linux kernel source code, successfully compiled it, and ran it with BusyBox in QEMU.
Thanks to BusyBox, I can use some frequently used tools, such as vi, ls, cp, cat, etc.
But when I try to compile a simple "hello world" C/C++ program, I get gcc: not found.
In addition, I can't build a new Linux module with make -C /lib/modules/$(shell uname -r)/build/ M=$(PWD) modules inside QEMU.
I have googled a lot but still can't figure it out.
So my question is: how can I install common developer tools like gcc, make, etc. inside my bare-bones QEMU VM that is running my custom Linux kernel (and not a standard distribution)?
I see that you are trying to compile some program (or module) to use inside your QEMU machine, but you do not have a compiler toolchain installed in the machine itself. You have a couple of options:
Probably the easiest: since you already compiled the kernel for QEMU externally (on your host machine), you can just as easily compile anything else that way. For modules, pointing make at the same kernel source directory where you built the VM kernel should suffice (see the sketch below). Once compiled, you can copy them into the VM disk/image just as you did for BusyBox.
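For reference, a minimal out-of-tree module Makefile might look like the following; KDIR and hello.c are placeholders for your kernel tree and module source:

# Makefile — builds hello.ko against the kernel tree used for the VM.
# KDIR is a placeholder: point it at the tree you built the QEMU kernel from.
KDIR ?= $(HOME)/src/linux

obj-m := hello.o

all:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean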
You can download and compile your own GCC from source (again on the host), and then install it inside the QEMU virtual machine. This is usually done by mounting the VM disk (QEMU image or whatever you are using) somewhere (e.g. /mnt/my-qemu-disk), configuring GCC with the prefix it will have inside the VM (e.g. --prefix=/usr/local), and installing it into the mounted disk with make install DESTDIR=/mnt/my-qemu-disk (a prefix baked in as the mount-point path would break GCC's own path lookups once it runs inside the VM). This and other stuff is explained in this documentation page.
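A rough sketch of that flow; image name, mount point, and GCC version are placeholders, and this assumes the VM disk is a raw image containing a single mountable filesystem:

# (run ./contrib/download_prerequisites in the GCC source tree first if needed)
sudo mount -o loop qemu-disk.img /mnt/my-qemu-disk
mkdir gcc-build && cd gcc-build
../gcc-X.Y.Z/configure --prefix=/usr/local --disable-multilib --enable-languages=c,c++
make -j"$(nproc)"
sudo make install DESTDIR=/mnt/my-qemu-disk
sudo umount /mnt/my-qemu-disk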
Once you have GCC installed inside the machine, you should be able to use it as you normally would. You can now use it to compile GNU Make inside the VM, or you can just compile that outside in the same way.
For complex stuff like building kernel modules, you will probably also need to build and install GNU binutils in the machine, again either from the inside with the GCC you just installed, or from the outside.

Build and bind against older libc version

I have dependencies in my code that require libc. When building (cargo build --release) on Ubuntu 20.04 (glibc 2.31), the resulting executable doesn't run on CentOS 7 (glibc 2.17); it throws an error saying it requires GLIBC 2.18.
When the same code is built on CentOS 7, the resulting executable runs on both CentOS 7 and Ubuntu 20.04.
Is there a way to control which GLIBC version is required, so that a binary built on Ubuntu 20.04 also runs on CentOS 7?
If your project does not depend on any native libraries, then probably the easiest way would be to use the x86_64-unknown-linux-musl target.
This target statically links against musl libc rather than dynamically linking against the system's libc. As a result, it produces completely static binaries that should run on a wide range of systems.
To install this target:
rustup target add x86_64-unknown-linux-musl
To build your project using this target:
cargo build --target x86_64-unknown-linux-musl
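To confirm the result really is static, you can inspect the binary (the name myapp is a placeholder for your crate's binary):

file target/x86_64-unknown-linux-musl/release/myapp
# look for "statically linked" in the output
ldd target/x86_64-unknown-linux-musl/release/myapp
# expect something like "not a dynamic executable"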
See the edition guide for more details.
If you are using any non-Rust libraries, it becomes more difficult, because they may be dynamically linked and may in turn depend on the system libc. In that case you would either need to statically link the external libraries (assuming that is even possible, and that the libraries you are using work with musl libc), or make different builds for each platform you want to target.
If you end up having to make different builds for each platform, a Docker container would be the easiest way to achieve that.
Try cross.
Install it globally:
cargo install cross
Then build your project with it:
cross build --target x86_64-unknown-linux-gnu --release
cross takes the same arguments as cargo, but you have to specify a target explicitly. Also, the build directory is always target/{TARGET}/(debug|release), not target/(debug|release).
cross uses Docker images prebuilt for different target architectures, but nothing stops you from "cross-compiling" for the host architecture. The glibc version in these Docker images should be conservative enough. If it isn't, you can always configure cross to use a custom image, as sketched below.
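A Cross.toml next to your Cargo.toml along these lines should do it (the image name is a placeholder for whatever image you build or pull):

[target.x86_64-unknown-linux-gnu]
image = "my-registry/rust-old-glibc:latest"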
In general, you need to build binaries for a given OS on that OS, or at the very least build on the oldest OS you intend to support.
glibc uses symbol versioning to preserve the behavior of older programs while adding support for new functionality. For example, a newer version of pthread_mutex_lock may support lock elision, while the old one would not. You're seeing this error because when you link against libc, you link against the default version of each symbol unless a version is explicitly specified, and in at least one case the version you linked against comes from glibc 2.18. Changing this would require recompiling libstd (and the libc crate, if you're using it) with custom changes to pick the old versioned symbols, which is a lot of work for little gain.
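You can see exactly which symbols drag in the 2.18 requirement by dumping the dynamic symbol table (the binary path is a placeholder; __cxa_thread_atexit_impl, added in glibc 2.18 and used for thread-local destructors, is the usual culprit for Rust binaries):

objdump -T target/release/myapp | grep GLIBC_2.18
# e.g.: 0000000000000000  w  DF *UND*  ...  GLIBC_2.18  __cxa_thread_atexit_impl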
If your only dependency is glibc, then it might be sufficient to just compile on CentOS 7. However, if you depend on other libraries, like OpenSSL, then those just aren't compatible across OS versions because their SONAMEs differ, and there's no way around that. So that's why generally you want to build different binaries per OS.

How to build glibc with reduced size?

I'm trying to download the glibc 2.23 sources and build them on my Ubuntu system.
I need to build that specific version from source to get a modified glibc customized for my research; it will be used only by my research apps, via the loader environment variables (e.g., LD_PRELOAD or LD_LIBRARY_PATH).
But when building it as follows, I get a huge file as output (libc.so weighs about 11 MB):
download the sources to some local dir (let's say /tmp/glibc/)
create new directory for build results (/tmp/glibc/build)
run configure from build dir:
<build-dir>$ ../configure --prefix=<build-dir>
As a result, the build process produces a libc.so under build-dir with a size of 11 MB.
Is there any way to reduce the size of the built libc.so?
p.s.
Here are my system details:
Linux version 4.4.0-93-generic (buildd@lgw01-03) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #116-Ubuntu SMP Fri Aug 11 21:17:51 UTC 2017
Thanks :)
Building glibc from source could be a bad idea. See this and some comments there. The current version is GNU libc 2.26... Consider instead upgrading your entire Ubuntu distribution (Ubuntu 17.10 should be released in a few weeks, at the end of October 2017).
../configure --prefix=<build-dir>
is a misunderstanding of the role of --prefix in autoconf-ed software. It relates to where the software is installed, not to its build directory.
(And I don't know exactly what your --prefix should be, since libc is so essential to your system; perhaps it should be --prefix=/, but you should check carefully.)
Is there any way to reduce the size of the built libc.so?
You might use strip(1) (very carefully), but you risk breaking your system.
And you might not care about reducing the size of libc, since it is used (and shared) by almost every program on your Linux system!
BTW, consider also musl-libc. It can cohabit nicely with GNU glibc, and in practice it is used only by programs built with the musl-gcc wrapper it provides.
If you are doing research, it would be reasonable to work in a chroot(2)-ed environment. See also schroot. You could install with the help of make install DESTDIR=/tmp/instmylibc and then copy /tmp/instmylibc to the appropriate place, as sketched below. Read more about autoconf.
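A sketch of that staging flow, assuming --prefix=/ so the libraries land directly under /tmp/instmylibc/lib (paths and the test program name are placeholders; running against the staged glibc is most reliably done by invoking its dynamic loader directly rather than relying on LD_LIBRARY_PATH alone):

make install DESTDIR=/tmp/instmylibc
/tmp/instmylibc/lib/ld-linux-x86-64.so.2 --library-path /tmp/instmylibc/lib ./my-research-app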
PS. Be sure to at least back up your important data before such dangerous experiments. I don't think the size of your libc.so should be a significant concern. But you do need to use chroot, perhaps with the help of debootstrap when installing the chrooted environment.

Cross compiled ARM Kernel instead of ARMHF

I cross-compiled an ARM kernel instead of an ARMHF one (for my Cubietruck). I followed this tutorial:
https://romanrm.net/a10/cross-compile-kernel
How can I determine which architecture I'm cross-compiling for?
I also got a new error that linux/utsrelease.h is not found.
From your comment above, it is clear that the kernel module you are building must match the running kernel version: the module loading mechanism does not allow loading modules that were not compiled against the running kernel, which is why you get the mismatch error.
The UTS_RELEASE macro is required by your driver in order to rebuild kernel modules from source. The header that provides this version string constant has moved over time:
older kernels require you to include <linux/version.h>,
others <linux/utsrelease.h>,
and newer ones <generated/utsrelease.h>.
So my suggestion is the following workaround:
you can find utsrelease.h in the kernel source code (make sure your running kernel matches your source code), and
copy linux-x.x.x/include/generated/utsrelease.h to the installed headers, i.e. ../include/linux/utsrelease.h.
I'm not sure this will work, but give it a try.
If the above doesn't work, please update your question with:
1) which kernel source code version you have
2) what kernel version is running on the target
When you compile your kernel, specify the architecture you are compiling for:
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- EXTRAVERSION=-custom1 uImage
For example, here ARCH=arm, so you are compiling for ARM; if your target were x86, you would replace it with ARCH=x86. Check which architecture your target board uses.
EDIT: gnueabihf is for armhf.
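If you want to check what you are actually producing, a few quick probes (paths and the toolchain prefix are whatever your setup uses):

arm-linux-gnueabihf-gcc -dumpmachine
# prints the toolchain's target triplet; the "hf" suffix means hard-float (armhf)
file vmlinux
# e.g. "ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), ..."
readelf -A some-userspace-binary | grep Tag_ABI_VFP_args
# "Tag_ABI_VFP_args: VFP registers" appears only for hard-float (armhf) user-space code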

Compiling program for old kernel

I statically compiled and linked a program on an up-to-date Linux machine and ran it on another Linux system that is 9 years old. It gave the error "FATAL: kernel too old" and quit. Specifically, the new machine is Fedora 18 (gcc 4.7.2, glibc 2.16, kernel 3.7.2) and the old one is RHEL 4.8 (glibc 2.3.4, kernel 2.6.9). Since it's statically linked, the glibc version shouldn't matter. I guess the problem here is that the program makes system calls that don't exist in the old kernel.
If development on the old system is not an option, how can I build the program on the new system and run it on the older one (or, even better, on both)? I was looking for a way to run gcc in a compatibility mode that emits only old system calls, but no luck yet.
The easiest option is to always build on the older system.
Alternatively, copy the glibc headers and static libraries from the old system to the new and link against those.
If that doesn't work, you'll have to rebuild glibc with --enable-kernel=2.6.9 or something like that.
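A sketch of that last option (the prefix and version are placeholders; --enable-kernel tells glibc the oldest kernel it must support at runtime, and glibc must be built in a separate build directory):

mkdir build && cd build
../glibc-X.Y/configure --prefix=/opt/glibc-compat --enable-kernel=2.6.9
make && make install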
