zfs installation issue Ubuntu 12.04 on Amazon EC2 - linux

I attached three EBS volumes of 3 GB each to an Amazon EC2 micro instance and mounted the disks xvdd, xvdc and xvdb.
My aim was to create a ZFS pool using these 3 disks.
I updated and upgraded Ubuntu 12.04, installed the zfs-linux dependencies, added the zfs-native PPA, and then issued the ZFS install command, which is
sudo apt-get install ubuntu-zfs
After this I get the console output below, and after the "run-parts:" line the install process never proceeded further. I waited for 20+ minutes and got this:
Setting up zfs-dkms (0.6.0.91-0ubuntu1~precise1) ...
Loading new zfs-0.6.0.91 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-31-virtual
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Setting up linux-headers-3.2.0-35 (3.2.0-35.55) ...
Setting up linux-headers-3.2.0-35-generic (3.2.0-35.55) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.2.0-35-generic /boot/vmlinuz-3.2.0-35-generic
Is this issue related to the EC2 kernel for Ubuntu, or does the machine running ZFS need to have higher capacity?

Usually with hosting providers the kernel is the issue. My provider (OVH) delivers its own customized (and allegedly more secure) kernel (alas, without sources), although it reluctantly permits installing the generic kernel, which solved the problem for me. I don't know about Amazon; perhaps their customized kernel is crucial for their EC2 service. On the other hand, I very much doubt any hosting provider would release the source code of their kernel.
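In the DKMS output above, the build for the running 3.2.0-31-virtual kernel is skipped because its headers are missing, while headers for 3.2.0-35 were pulled in instead. A minimal sketch of one way to resolve this, assuming the matching header package is still available in the archive and that the three disks are not otherwise in use ("tank" is just a placeholder pool name):

# Install headers that match the running kernel so DKMS can build the module
sudo apt-get install linux-headers-$(uname -r)
# Rebuild the ZFS module for the running kernel
sudo dpkg-reconfigure zfs-dkms    # or: sudo dkms autoinstall
# Once the module loads, create the pool from the three EBS disks
sudo modprobe zfs
sudo zpool create tank xvdb xvdc xvdd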

Related

Any success installing VMware Workstation on virgin Rocky Linux 8.5?

Using a virgin (but updated) version of Rocky Linux 8.5, I am trying to install VMware Workstation 16.2.1 (and others), but get compile errors during the first attempt to run, when vmmon and vmnet are being built.
All the proper, current headers from kernel-devel and kernel-headers are installed.
I tried upgrading to the 5.16.4 kernel from kernel.org, with all associated headers, and basically get the same errors.
"Unable to install all modules." i.e., vmmon and vmnet
Posts i have found with searching the net seem to indicate that there was a "back-port" of an upstream fix to Rocky that has affected the ability to build the loadable kernel modules necessary to run vmware - but i cannot confirm this is actually the problem that I am experiencing.
So i simply ask these questions: Can anyone (today) install VMware Workstation 16.2.1 (or any version), on a fresh install of Rocky Linux 8.5?
If so, would you please point me at your installation instructions, because I am unable to build "vmmon" and "vmnet" modules today (2022-01-04), that allow me to actually run virtual machines with vmware? (The kernel modules fail to compile and build.)
(and after 15 years of using stackoverflow i do not have the reputation to create a "rocky-linux" question tag...)
See https://unix.stackexchange.com/questions/689436/the-vmmon-and-vmnet-vmware-workstation-kernel-modules-fail-to-build-on-rocky-lin
mkubecek's instructions work for a variety of releases and should compile perfectly and run without issue if you follow them.
I have successfully used these methods at least half a dozen times with Rocky 8.5 and 8.6 and VMware Workstation 16.1 up to version 16.2.1.
NOTE: This error is NOT Rocky Linux specific. It also happens on some versions of RHEL 8 and CentOS 8.x, and I would expect this fix to work on the other RHEL 8-derived Linux distributions as well.
I've been having difficulty with the same issue, and a colleague pointed me to check my kernel. This is our "official" resolution. See if the below works for you.
This is due to differences between the kernel and the source code for the VMware modules; see here for more information. You can get the correct kernel module sources and build them by executing the following commands:
wget https://github.com/mkubecek/vmware-host-modules/archive/workstation-16.1.0.tar.gz
tar -xf workstation-16.1.0.tar.gz
cd vmware-host-modules-workstation-16.1.0/
make
sudo make install
If you get the error,
crosspage.c:53:16: fatal error: linux/frame.h: No such file or directory
The error is described here. The solution is to remove (i.e. comment out) the offending include in crosspage.c. After doing the sudo make install, it is a very good idea to restart your host.
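A minimal sketch of that edit, assuming the file sits at vmmon-only/common/crosspage.c inside the unpacked vmware-host-modules tree (the path may differ between branches):

cd vmware-host-modules-workstation-16.1.0/
# Comment out the include of linux/frame.h, which no longer exists on newer kernels
sed -i 's|^#include <linux/frame.h>|/* #include <linux/frame.h> */|' vmmon-only/common/crosspage.c
make
sudo make install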
You may need to manually insert the modules into the kernel the first time after running make install. The kernel modules (vmmon.ko and vmnet.ko) will be found in /lib/modules/$(uname -r)/misc. The following set of commands will do this:
cd /lib/modules/$(uname -r)/misc
sudo insmod vmmon.ko
sudo insmod vmnet.ko
The modules should be loaded automatically after a restart/reboot.
If you update VMware to a different version (say 16.2.1) you may need to do this again; just change the versions in the above commands. If you hit the update button on the splash screen and failed to notice the version you are updating to, you can run `vmware -v` at a command prompt to get the version you updated to.
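For example, a sketch of the same steps for 16.2.1, assuming a matching workstation-16.2.1 branch exists in the mkubecek/vmware-host-modules repository:

wget https://github.com/mkubecek/vmware-host-modules/archive/workstation-16.2.1.tar.gz
tar -xf workstation-16.2.1.tar.gz
cd vmware-host-modules-workstation-16.2.1/
make
sudo make install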

Which commands of the defined Linux Distribution are available in a Docker container?

I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as the default package manager of Ubuntu. But how deep does this go? Can I assume that typical commands of the distribution, like lsb_release, are available just as in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions are different. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I assume not all the tools of a real installation are included. But how can I know which commands I can count on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu they are removing more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of; you'll only know by building your image. But when images are tagged you can assume a future minor update will not break downstream images, which is a good argument for explicitly specifying a tag in your Dockerfile.
E.g, this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you had just used FROM ubuntu, the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
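For instance, a sketch of the Dockerfile from the question extended so the commands it relies on are explicitly installed (lsb_release comes from the lsb-release package, ping from iputils-ping; 16.10 is long past end of life, so on a current system you would substitute a supported tag):

FROM ubuntu:16.10
# Install nginx plus the extra commands the base image does not ship with
RUN apt-get update && apt-get install -y nginx lsb-release iputils-ping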
One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.
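As an illustration of that last point, a common pattern (sketched here for the nginx example from the question) is to run the service in the foreground as the container's main process instead of starting it via an init system:

FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
# No init/systemd inside the container, so run nginx in the foreground
# as the main process instead of "service nginx start"
CMD ["nginx", "-g", "daemon off;"]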

/lib/modules/*/build points to non-existent files

I have CentOS Linux release 7.2.1511 (Core).
I want to build some kernel code for the currently running kernel.
My uname -r says 3.10.0-327.10.1.el7.x86_64, but ls -l /usr/src/kernels/
shows only 3.10.0-327.13.1.el7.x86_64. Why do I have sources for a kernel other than the running one on my filesystem (a vanilla, freshly provisioned DigitalOcean box)?
Why does yum install kernel-devel not install headers for the currently running kernel?
uname is a system call to the kernel to get information; it tells you what's running on that machine. What's physically present on the hard drive can be anything that anyone has installed: someone may have downloaded the wrong package, or you may have multiple kernels installed, etc. But the one that's running is what uname is telling you.
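A sketch of the two usual ways to get the build tree to match, assuming the older kernel-devel package is still available in the repositories (on a point release it may have been superseded, in which case updating the kernel and rebooting is the practical option):

# Pin kernel-devel to the running kernel
sudo yum install "kernel-devel-uname-r == $(uname -r)"
# ...or update the kernel to match the installed headers and reboot
sudo yum update kernel
sudo reboot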

How to update/check the Ubuntu kernel configuration?

I am new to Linux, and currently I want to try Kubernetes on my laptop. The official tutorial says:
You need to have docker installed on one machine.
Your kernel should support memory and swap accounting. Ensure that the following configs are turned on in your linux kernel:
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
I just installed Ubuntu 14.04, and I don't know whether the above configs are already turned on or not. How do I check them, and if some are still off, how do I turn them on?
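A sketch of how one might check: Ubuntu ships the running kernel's configuration in /boot, so the flags can be inspected with grep (options built in show =y, modules show =m, disabled ones appear as "is not set"; enabling a disabled option generally means rebuilding the kernel, except for memcg swap accounting, which can be switched on with the swapaccount=1 boot parameter).

# Check the memory/swap accounting options against the running kernel's config
grep -E 'CONFIG_RESOURCE_COUNTERS|CONFIG_MEMCG' /boot/config-$(uname -r)
# Some kernels expose the config here instead (not the Ubuntu default)
zgrep CONFIG_MEMCG /proc/config.gz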

About compiling Linux kernel in Debian Live

This is my first time compiling the Linux kernel. I am using Debian Live. I used kernel-package to compile, and I also added a new system call that returns an arbitrary integer value greater than zero.
Everything went fine and I got both the headers and image .deb files. When I tried to install them with dpkg, there was a warning that said I needed to configure LILO. I then aborted the installation and looked for LILO, only to find that Debian Live has neither LILO nor GRUB. I installed GRUB, but it could not be installed on my sda1 (the USB disk running Debian Live); it said that it was not a proper block device. Debian Live uses squashfs (a file system).
Then I ignored the bootloader and installed the custom kernel. After I rebooted my computer, I was booted straight into the old Debian Live, and my system call returns -1.
Please provide some solutions, guys.
Thanks,
Debian Live is not a suitable base for you to do your own kernel development on. As you've found, it doesn't contain the tools needed to rebuild itself (that's not what it's designed to do).
Install the regular Debian distribution (perhaps inside a virtualisation environment like VMware Server or VirtualBox) and do your kernel development there.
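A rough sketch of the same build on a standard Debian install, using the kernel tree's in-tree Debian packaging target rather than kernel-package (the package names are the usual build prerequisites and may vary by release):

# Build prerequisites
sudo apt-get install build-essential bc bison flex libssl-dev libelf-dev libncurses-dev
# From the patched kernel source tree: reuse the current config, then build .deb packages
make olddefconfig
make -j"$(nproc)" bindeb-pkg
# Install the image; on a standard install the dpkg hooks update GRUB automatically
sudo dpkg -i ../linux-image-*.deb
sudo reboot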
