I have developed an eBPF program that needs to be compiled against the kernel headers.
My code works properly on AKS; however, on an EKS cluster I couldn't find the kernel headers.
The kernel version of my VMs on EKS is "5.4.117-58.216.amzn2.x86_64".
Running "apt install linux-headers-$(uname -r)" results in:
What is the right way to get the kernel headers when they aren't available via apt?
You can find kernel headers for Amazon Linux 2 kernels by searching the SQLite databases of its package repositories.
In your case, the procedure is as follows:
Download the mirror list
wget amazonlinux.us-east-1.amazonaws.com/2/extras/kernel-5.4/latest/x86_64/mirror.list
Notice that for other kernel versions you may want to substitute "extras/kernel-5.4/latest" with "core/latest" or "core/2.0".
It should contain one (or more) URL(s) like this one:
http://amazonlinux.us-east-1.amazonaws.com/2/extras/kernel-5.4/stable/x86_64/be95e4ca87d6c3b5eb71edeaded5b3b9b216d7cdd330d44f1489008dc4039789
Append the suffix repodata/primary.sqlite.gz to the URL(s) and download the SQLite database(s)
wget "$(head -1 mirror.list)/repodata/primary.sqlite.gz"
Notice the URL(s) may contain the placeholder "$basearch". If that's the case, substitute it with the target architecture (e.g., x86_64).
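A quick way to do that substitution in place (illustrative, for x86_64):
sed -i 's/\$basearch/x86_64/g' mirror.list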
Unarchive it
gzip -d primary.sqlite.gz
Query it to find where to download your kernel-headers package.
sqlite3 primary.sqlite \
"SELECT location_href FROM packages WHERE name LIKE 'kernel%' AND name NOT LIKE '%tools%' AND name NOT LIKE '%doc%' AND version='5.4.117' AND release='58.216.amzn2'" | \
sed 's#\.\./##g'
You'll obtain these:
blobstore/e12d27ecb4df92edc6bf25936a732c93f55291f7c732b83f4f37dd2aeaad5dd4/kernel-headers-5.4.117-58.216.amzn2.x86_64.rpm
blobstore/248b2b078145c4cc7c925850fc56ec0e3f0da141fb1b269fd0c2ebadfd8d41cd/kernel-devel-5.4.117-58.216.amzn2.x86_64.rpm
blobstore/7d82d21a61fa03af4b3afd5fcf2309d5b6a1f5a01909a1b6190e8ddae8a67b89/kernel-5.4.117-58.216.amzn2.x86_64.rpm
Download the package you want by appending its path to the initial base URL, like so:
wget amazonlinux.us-east-1.amazonaws.com/blobstore/e12d27ecb4df92edc6bf25936a732c93f55291f7c732b83f4f37dd2aeaad5dd4/kernel-headers-5.4.117-58.216.amzn2.x86_64.rpm
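Once the RPM is downloaded, you can unpack it without a package manager (for example, inside a container image), assuming rpm2cpio and cpio are available; the headers end up under ./usr/include:
rpm2cpio kernel-headers-5.4.117-58.216.amzn2.x86_64.rpm | cpio -idmv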
A similar procedure can be used for Amazon Linux 1 kernel headers.
I don't know if it's an ideal solution, but you could always download the kernel sources and install the headers from there.
$ git clone --depth 1 -b v5.4.117 \
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
$ cd linux
$ make headers_install    # installs the sanitized headers under ./usr/include by default
The headers you get from the stable branch are likely close enough to those of your kernel for your eBPF program to work. I don't know if there's a way to retrieve the header files used for building EKS' kernel.
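As a rough illustration of how those headers might then be consumed, assuming a clang-based eBPF build (my_prog.c and the /tmp/linux-headers path are placeholders, not something from the question):
$ make headers_install INSTALL_HDR_PATH=/tmp/linux-headers
$ clang -O2 -target bpf -I/tmp/linux-headers/include -c my_prog.c -o my_prog.o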
I'm using Linux 3.3, and recently, while building busybox with a new command, I found that the added busybox source file uses Linux kernel headers.
So I looked it up on the internet and ran 'make headers_install ARCH=.. CROSS_COMPILE=.. INSTALL_HDR_PATH=..' to extract headers usable by user-space programs.
Then I used the new header files instead of the files under sparc-snake-linux/sys-include.
But I had to copy over some missing files from sys-include to the new header directories, and I had to copy some missing definitions from the sys-include files into the corresponding files in the new headers. (Somewhere on the internet I read that 'make headers_install' was not kept up to date after Linux 2.6 or so.)
Is this what I am supposed to do? (Why are there some missing files? I guess it's because 'make headers_install' is not well maintained and doesn't work well for versions later than 2.6? Am I correct?)
Using this method I have removed tens of 'undefined' errors, but now I see some definitions conflict between the files from sparc-snake-linux/sys-include (the new, cleaned and beefed-up version, of course) and sparc-snake-linux/include. Which version should be used?
And if I get the compilation to succeed (by fixing the header problems), do I have to rebuild glibc against these new header files? (I'm afraid that's the case; I'm using glibc for busybox.)
Any help would be deeply appreciated.
Thanks,
Chan
ADD: I've extracted the new header files using the above command and built busybox with the newly added command (route and other IP-related functions). It works fine now; the reason it didn't work before was that I had the variable __KERNEL__ defined for busybox, which should not be done (because busybox is not kernel code but a user-space program).
When you use
echo "" | arch-abc-linux-gcc -c -o /tmp/tmp.o -v -x c -
you can see what the standard include paths are. If the cross-compiler is intended for compiling applications on Linux (like the one above), it will have the Linux system header path among its standard include paths. Replace that with the path of the newly extracted headers. What I did was to use the -nostdinc option and provide the include paths explicitly.
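A rough sketch of that -nostdinc approach; the header and libc locations are illustrative placeholders, and the compiler's own include directory still has to be passed explicitly because -nostdinc drops it as well:
arch-abc-linux-gcc -nostdinc \
  -I/path/to/extracted/headers/include \
  -I/path/to/libc/include \
  -I"$(arch-abc-linux-gcc -print-file-name=include)" \
  -c prog.c -o prog.o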
I am running Ubuntu 13.10 on Linux kernel version 3.11.0-12. I have to add a system call to it, but I am facing a problem. The very first step says that I have to change my current working directory to the kernel directory.
I used the command "cd linux-3.11.0-12", but it says that no such file or directory exists. Please tell me where I am going wrong and how to correct this mistake.
Wait, you want to add a system call to the Linux kernel, but you don't know how to reach the source code? Are you sure you are able to modify, configure, build, install and boot the Linux kernel?
Assuming yes, you need to get the source code of Linux first (for example, by cloning https://github.com/torvalds/linux or just downloading the version you are interested in), extract it somewhere, and then cd to the path where you extracted it. Then you can begin modifying it to your heart's content.
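For example, a minimal sequence (a shallow clone is enough if you only want the latest source):
$ git clone --depth 1 https://github.com/torvalds/linux.git
$ cd linux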
Perhaps this blog post could help you.
To get the source of the installed kernel on Ubuntu, you can use the following command (for Ubuntu 13.04+):
apt-get source linux-image-`uname -r`
The source is typically placed under /usr/src.
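Roughly, assuming deb-src lines are enabled in /etc/apt/sources.list:
sudo apt-get build-dep linux-image-$(uname -r)   # optional: pull the build dependencies
cd /usr/src                                      # or any directory you can write to
apt-get source linux-image-$(uname -r)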
Reference:
[1] https://help.ubuntu.com/community/Kernel/Compile
[2] https://wiki.ubuntu.com/Kernel/BuildYourOwnKernel
I want to add some debug info or printf calls in random.c in order to look more deeply into the Linux random number generator. The entropy in /dev/random and /dev/urandom is generated by random.c. My questions are:
1. Where can I find the random.c file in Linux 2.6.32?
2. What is the best way to add my modifications of the random source code to the kernel? Is it OK to just compile random.c and load it as a loadable kernel module, or do I have to recompile and reinstall the kernel for the new random.c with debug messages to take effect? The key point is to make sure that only one copy of the random number generator is running in the kernel.
Thank you. Any kind of suggestion is highly appreciated.
random.c is linked directly into the kernel; it isn't built as a module, so you can't just recompile it alone and load it into your kernel. You need to rebuild the whole kernel.
To build the kernel, make sure you have the usual development tools installed: gcc, GNU make, etc. Some distros provide a "build-essential" or "Development Tools" metapackage that includes all of the usual development tools for building the core system packages.
How you build your kernel depends on whether you need any distribution-specific patches to use your system, or whether you want to use your distro's packaging system to install the kernel. If so, you should follow your distro's instructions for building the kernel: for example, Ubuntu's instructions, Arch's instructions, Fedora's instructions, or CentOS's instructions (likely similar on RHEL 6; Red Hat doesn't provide documentation, as they don't support building custom kernels), or SuSE's instructions.
Otherwise, if you don't mind configuring and installing your kernel manually, you can do it by hand. The following instructions should cover most distros reasonably well, but be sure to check your distro's docs in case there are any distro-specific gotchas.
Download the appropriate tarball from kernel.org and decompress it somewhere, or, if you prefer, check it out using Git. Since you reference 2.6.32, I've used the latest 2.6.32 release in the instructions below.
$ curl -O https://www.kernel.org/pub/linux/kernel/v2.6/longterm/v2.6.32/linux-2.6.32.61.tar.xz
$ xzcat linux-2.6.32.61.tar.xz | tar xvf -
$ cd linux-2.6.32.61
# or...
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ cd linux
$ git checkout -b my-branch v2.6.32.61
Now you need to configure it, build it, and install it. Greg Kroah-Hartman, a leading kernel developer and stable kernel maintainer, has a free book on the subject. I'd recommend reading his book, but if you want a quick rundown, I'll summarize the highlights.
There are several ways to configure it. A good way to start is to copy your current config in and then run make menuconfig or make xconfig to get a curses-based or graphical kernel configuration utility that lets you easily browse and choose the right options (as there may be new options in the newer kernel that you are building). Many distros install the config for a given kernel as /boot/config or /boot/config-version, corresponding to the kernel version. Copy that into your source tree as .config, and then run make menuconfig or make xconfig:
$ cp /boot/config .config
$ make xconfig
After configuring it, I'd recommend adding something to the EXTRAVERSION definition in the Makefile. Its contents are tacked onto the version string, which helps distinguish your modified kernel from the upstream one and keeps track of which kernel is yours.
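For instance (the "-random-debug" suffix is just an illustrative value), you could append to the existing definition like this:
$ sed -i 's/^\(EXTRAVERSION =.*\)/\1-random-debug/' Makefile
$ grep '^EXTRAVERSION' Makefile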
Once it's configured, just build it like anything else. I recommend using -j to run a parallel build if you have multiple cores.
$ make -j8
Now it's built, and you can install it. On most systems, the following works; if not, check out Greg's book or check your distro's documentation:
$ sudo make modules_install
$ sudo make install
And finally, you have to add it to your bootloader (on some systems make install may do this, on others it may not). Depending on whether you use LILO, GRUB, or GRUB 2, you may need to edit /etc/lilo.conf (followed by running sudo lilo to install the changes), /boot/grub/menu.lst, or /boot/grub/custom.cfg (followed by sudo grub-mkconfig -o /boot/grub/grub.cfg to install the changes). See the relevant documentation for your bootloader for more details. Generally you want to copy an existing entry, duplicate it, and update it to point to your new kernel. Make sure you leave the existing entries in place so you can boot back into your old kernel if this doesn't work.
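For instance, on a GRUB 2 system you can regenerate the config and check that the new kernel shows up as a menu entry (a sketch, using the paths mentioned above):
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
$ grep menuentry /boot/grub/grub.cfg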
Now reboot, select your new kernel, and hope your system boots. Woo! You've built your own kernel.
Now that you've made sure you can do that successfully without modifications, you can make your change. You are going to want to modify drivers/char/random.c. To print out debugging statements, use printk(). It works mostly like printf(), though it's not exactly the same, so check the documentation before using it. After you modify it, rebuild and reinstall your new kernel, and reboot into it; you can then see the messages printed by your printk() statements using the dmesg command.
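For example, assuming your added printk() lines use a recognizable prefix such as "random-debug:" (an arbitrary choice), you could pull them out of the kernel log with:
$ dmesg | grep 'random-debug:'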
For more information, check out Greg's book that I linked to above, the kernel README, and HOWTO, browse around the kernel's Documentation directory, and various other docs.
If you look at the Makefile for it, the char driver is not meant to be compiled as a module (random.o is included as obj-y in drivers/char/Makefile).
You can read more about how the kbuild (kernel build) system works here: https://www.kernel.org/doc/Documentation/kbuild/makefiles.txt
In particular, section 3.1 "Goal definitions" touches on this topic.
Generally, you can search for files in the kernel sources using a source cross-reference (LXR). One is provided at http://lxr.free-electrons.com/, for example.
Indeed, you can add your modifications to drivers/char/random.c and recompile the char driver. After that you will have to rebuild the kernel so that it also links your new random.o into the kernel image, and then you will have to boot that kernel; that process will depend on your distribution.
Most distributions have decent instructions on how to recompile and boot your own kernel.
I have been tasked with identifying new (non-operating-system) software installed on several Red Hat Enterprise Linux (RHEL) machines. Can anyone suggest an efficient way to do this? The way I have been doing it is by manually comparing the list of installed software with the list on Red Hat's FTP site for the relevant operating system:
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/
The problems I am encountering with this method are that it is tedious and time-consuming, and that only the source packages are listed (e.g., I can't tell whether avahi-glib is installed as part of the avahi package). If anyone can suggest a more efficient way to identify the software that doesn't come with the operating system on a RHEL machine, it would be greatly appreciated!
Here is what I have come up with so far as a more efficient method (though I still haven't figured out the last part, and there may be more efficient methods). If anyone can help me with the last step of this method, or can share a better method, it would be greatly appreciated!
New method (work in progress):
Copy the list of packages from Red Hat's FTP site into a text file (OSPackages.txt).
To fix the problem of only source RPMs being listed, also copy the package list for the corresponding version from http://vault.centos.org into a text file, and merge this data into OSPackages.txt.
Do a rpm -qa > list1, yum -y list installed > list2, ls /usr/bin > list3, ls /usr/share > list4, ls /usr/lib > list5.
Use cat to merge all the listX files together into InstalledPackages.txt.
Use sort to sort out the unique entries, perhaps like: sort -u -k 1 InstalledPackages.txt > SortedInstalledPackages.txt
Do a diff between SortedInstalledPackages.txt and OSPackages.txt, using a regular expression (-I regexp) to match the package names (and ignore the version numbers). I would also need a "one-way diff", i.e., one that ignores the extra OS packages in OSPackages.txt that do not appear in the installed-packages file.
Note: I asked the following question to help me with this part, and believe I am now fairly close to a solution:
How do I do a one way diff in Linux?
If diff (or another command) can perform the last step, it should produce a list of packages that don't come with the OS. This is the step I am stuck on and would appreciate further help with. What command would I use to perform step 6?
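For what it's worth, a minimal sketch of that one-way comparison using comm, assuming the version numbers have already been stripped so both files hold one bare package name per line (file names as in the steps above):
sort -u OSPackages.txt > SortedOSPackages.txt
comm -23 SortedInstalledPackages.txt SortedOSPackages.txt > NewPackages.txt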
rpm -qa --last | less
This lists the installed RPMs together with their installation dates, most recently installed first.
yum provides some useful information about when and from where a package was installed. If you have the system installation date, then you can pull out packages that were installed after that date, as well as packages that were installed from different sources and locations.
Coming at it from the other direction, you can query rpm to find out which package provides each of the binaries in /sbin, /lib, etc. Any package that doesn't provide a "system" binary or library is part of your initial set for consideration.
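A rough sketch of that reverse approach (the directories are illustrative; rpm -qf also reports files that are not owned by any package, which stand out immediately):
for f in /sbin/* /usr/sbin/* /bin/* /usr/bin/*; do rpm -qf "$f"; done | sort -u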
Get a list of configured repository ids:
yum repolist | tail -n +3 | grep -v 'repolist:' | cut -f1 -d' '
Now identify which of these are the valid Red Hat repositories. Once you do that, you can list all the packages from those repositories. For example, if I were to do this for the official Fedora repositories, I would list the package names like so:
yum list installed --disablerepo="*" --enablerepo="fedora*"
From this list you get which packages you have installed. Then, with those package names in $PACKAGES, you can list the files each one owns (rpmls comes from the rpmdevtools package):
for p in $PACKAGES; do rpmls $p; done
Or like this:
yum list installed --disablerepo="*" --enablerepo="fedora*" \
| cut -f1 -d' ' \
| ( while read p; do rpmls $p; done ) \
| cut -c13-
So now you have a list of the files which are supposed to come from the official repositories.
Now you can list all the installed files using rpm:
rpm -qal
With these two lists, it is easy to compare the contents of the two outputs.
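For example (a sketch, assuming the output of the previous pipeline was saved to official-repo-files.txt), files present on the system but not owned by an official-repo package could be isolated with comm:
rpm -qal | sort -u > all-files.txt
sort -u official-repo-files.txt -o official-repo-files.txt
comm -23 all-files.txt official-repo-files.txt > non-repo-files.txt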
If Red Hat has an equivalent of Ubuntu's /var/log/installer/initial-status.gz, then you could cat that to a temp file, then list the installed packages and grep -v against the temp file.
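On the Ubuntu side, a minimal sketch of that idea, assuming the manifest uses the usual dpkg status format with "Package:" lines:
zcat /var/log/installer/initial-status.gz | awk '/^Package:/ {print $2}' | sort -u > /tmp/initial-packages.txt
dpkg-query -W -f='${Package}\n' | sort -u | comm -23 - /tmp/initial-packages.txt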
One of the first scripts I wrote to learn Linux did this exact same thing on Ubuntu:
https://gist.github.com/sysadmiral/d58388e315a6c6384053aa6b0af66c5f
This works on Ubuntu and may work on other Debian-based systems or systems that use the aptitude package manager. It doesn't work on Red Hat/CentOS, but I added it here as a starting point, I guess.
Disclaimer: it will not pick up manually compiled things, i.e., your package manager needs to know about a package for this script to show it.
Personal disclaimer: please forgive the non-use of tee. I was still learning the ropes when I wrote this and have never updated the code, for nostalgia's sake.
I have a Debian package that I built; it contains a tarball of the files, a control file, and a postinst file. It's built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files determined at runtime, based on an environment variable that will be set when dpkg -i is run on the .deb file. I echo the environment variable in the postinst script and I can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would allow me to accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way, if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries you can't build a Debian package whose destination directory is dynamically determined at runtime. I believe this might be possible when installing a package built from source, where you can set the install directory using configure; but in this case, since these are embedded Ubuntu machines, they don't have make, so I didn't pursue that option. I did work out a non-traditional method (a hack) for installing that did work: since Debian packages simply contain a tarball relative to /, build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from that staging area into a permanent location.
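A minimal sketch of such a postinst, assuming (as in the question) that the environment variable, called INSTALL_DIR here, is visible to the maintainer script and that the payload is staged under /tmp/mypackage; both names are placeholders:
#!/bin/sh
# postinst sketch: relocate the staged payload at install time.
set -e
case "$1" in
    configure)
        # Fail loudly if the variable was not passed through to dpkg -i.
        : "${INSTALL_DIR:?INSTALL_DIR must be set when running dpkg -i}"
        mkdir -p "$INSTALL_DIR"
        cp -a /tmp/mypackage/. "$INSTALL_DIR"/
        ;;
esac
exit 0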
I expected that, after rebooting and the automatic deletion of the subdirectory under /tmp, dpkg might no longer know that the package existed. This wasn't a problem: when I ran 'dpkg -l myapp' it still showed as installed. Updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempt to remove the package using 'dpkg -r myapp', dpkg will try to remove /tmp, which isn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages but simply upgrade them.
I eventually had to abandon the universal package due to code differences in the sources, which meant recompiling per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package; it does relocate the files, but dpkg fails since its database can't be found relative to the new instdir. Using --instdir is sort of like a chroot. I also tried --admindir and --root in various combinations, to see if I could keep dpkg's database relative to / but relocate the installed files, but they didn't work. I guess rpm has a relocate option that works, but dpkg does not.
You can also write a script that runs dpkg-deb six times with a different environment each time, generating six different packages. When you make a modification, you simply rerun your script, all six packages get regenerated, and you can install them on your machines, avoiding any postinst hacking!
Why not install to a standard location and simply use a postinst script to create symbolic links to the desired location? This is much cleaner, and it shouldn't break anything in dpkg -i.
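A sketch of that symlink variant, with /opt/mypackage as a hypothetical fixed install location and INSTALL_DIR again assumed to reach the postinst:
#!/bin/sh
# postinst sketch: fixed install path, runtime-chosen access path via a symlink.
set -e
if [ "$1" = "configure" ] && [ -n "$INSTALL_DIR" ]; then
    mkdir -p "$(dirname "$INSTALL_DIR")"
    ln -sfn /opt/mypackage "$INSTALL_DIR"
fi
exit 0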