ERROR minimum kernel version required for CoreDNS is 4.14.35 (UEKR5)

I'm trying to set up a highly available Kubernetes cluster on Oracle Linux 7 with kernel version "3.10.0-514.10.2.el7.x86_64". All the prerequisites have been performed as per the link "https://docs.oracle.com/cd/E52668_01/E88884/html/kube_ha_install_master.html".
Now when bringing up the cluster with the below command:
$ kubeadm-ha-setup up ~/ha.yaml
I'm getting the below error:
[ERROR] minimum kernel version required for CoreDNS is 4.14.35 (UEKR5)
It worked for kubeadm (below command) but I want to know how to achieve the same in kubeadm-ha-setup.
$ kubeadm-ha-setup up ~/ha.yaml --feature-gates=CoreDNS=false
I can't upgrade the kernel version; I'm already on the latest one supported by our organization.

It seems that you can't, as per the documentation sections "Requirements to Use Oracle Container Services for use with Kubernetes" and "Replacement of KubeDNS with CoreDNS":
Note:
A future version of Oracle Linux Container Services for use with Kubernetes will migrate existing single master clusters from KubeDNS to CoreDNS. CoreDNS requires an Oracle Linux 7 Update 5 image or later with the Unbreakable Enterprise Kernel Release 5 (UEK R5).
Existing Oracle Linux Container Services for use with Kubernetes 1.1.9 installations may already run on an Oracle Linux 7 Update 3 image, with Unbreakable Enterprise Kernel Release 4 (UEK R4), but you must upgrade your environment to permit future product upgrades.
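As a quick sanity check before running kubeadm-ha-setup, you can compare the running kernel against the UEK R5 minimum the error cites (a minimal sketch; uname -r simply prints the kernel release):
$ uname -r
3.10.0-514.10.2.el7.x86_64
Anything below 4.14.35 will trip the CoreDNS check in the HA setup tool.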


Related

Azure Site Recovery - CentOS 8.3 unsupported kernel

I'm trying to protect a VMware VM running CentOS 8.3, but I'm getting an unsupported kernel error.
The current kernel is 4.18.0-240 and I couldn't find a list of supported CentOS kernel versions anywhere.
Mobility service is being installed in auto mode.
Configuration Server version: 9.42.1.0
Agent version: 5.1.6784.0
Thanks in advance.
Some kernel versions may have LIS components missing. You can try reinstalling the LIS components and then enabling the replication again (a quick module check is sketched after the references below).
Reference:
Support matrix for VMware to Azure
ASR CentOS 8.3
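A hedged diagnostic sketch for checking whether the Hyper-V / LIS kernel modules are loaded on the VM before retrying replication (the module names are the standard in-kernel Hyper-V drivers that LIS provides; this is not an official ASR requirement list):
$ lsmod | grep -E 'hv_vmbus|hv_storvsc|hv_netvsc|hv_utils'
If nothing shows up, reinstalling the LIS components as suggested above is the likely next step.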

Are there any limitations regarding the age of a linux distribution which can be used to create a docker base-image?

I'm wondering if it's possible to use a very old Linux distribution like Debian GNU/Linux 3.1 (Sarge) and create a base image from it, to run legacy code that doesn't work under "younger" distros.
The only thing I found about it was somebody successfully using Ubuntu Feisty: Run old Linux release in a Docker container?
Are there any known limitations?
Your host needs to have a minimum version of the Linux kernel, and that version is 3.10.
See
Docker minimum kernel version 3.8.13 or 3.10
An extract from the previous link:
There's also a shell script to check if your system has the required dependencies in place and to check which features are available:
https://github.com/docker/docker/blob/master/contrib/check-config.sh
So you can use this to check whether you will be able to use Docker on this host.
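A minimal usage sketch, assuming the raw.githubusercontent.com counterpart of the path linked above is still valid:
$ curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
$ bash check-config.sh
The script reads the running kernel's configuration and reports which Docker features are available or missing on the host.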
From
https://wiki.debian.org/DebianSarge?action=show&redirect=Sarge
I see
kernel: Linux 2.4.27 and 2.6.8
So it may not work: the container would be running on the host's (3.10+) kernel, not the 2.4.27/2.6.8 kernels that Sarge's userland was built against.
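A tiny illustration of that point (assuming Docker is installed and some image is available; the exact image doesn't matter):
$ docker run --rm debian uname -r
This prints the host's kernel version, not anything from the image, which is why the age of the base image mostly matters for userland compatibility rather than for the kernel itself.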

Datastax Enterprise Installation on Virtual Box CentOS

Can anyone please guide me step by step through installing DataStax Enterprise on CentOS in VirtualBox?
I checked the DataStax documentation, but got a little confused at a few steps, so I'm not satisfied with it. I also checked other resources but couldn't understand them completely.
So please help me understand the installation process one step at a time, with all the basic steps.
Thanks in advance.
You may have an easier time using OpsCenter's Lifecycle Manager to deploy DSE. (Disclaimer: I am a Lifecycle Manager dev, so I am biased.)
First you need to install OpsCenter in a separate VM or CentOS box. If you're able to get through the Java install and yum repository setup parts of DSE setup, this won't be difficult: https://docs.datastax.com/en/opscenter/6.0/opsc/install/opscInstallRHEL_t.html
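A rough sketch of that install step, assuming Java is present and the DataStax yum repository from the linked page is already configured (package and service names as in the OpsCenter RHEL/CentOS docs):
$ sudo yum install opscenter
$ sudo service opscenterd start
After that, the OpsCenter UI is normally reachable on port 8888 of that machine.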
Then run an install job from LCM: https://docs.datastax.com/en/opscenter/6.0/opsc/LCM/opscLCMinstallJob.html Examine the prerequisite section of that page carefully. It will show you the things you need to do in LCM to get ready to run the job; it's all point-and-click, though.
The only prerequisites on your target DSE machine are "python" (usually installed by default) and, for the moment, "which", though we'll be removing that dependency in an upcoming version.
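A quick way to confirm both of those are present on the target machine (just a shell check of the two dependencies named above):
$ command -v python && command -v which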
Note that at the end of this process, you'll need to provide cqlsh an IP address, username, and password to connect to the cluster, even when making a "local" connection from your DSE VM. For example: "cqlsh 192.168.1.100 -u cassandra -p the-password-you-chose-during-lcm-install"

Hadoop multi-node cluster manual installation over Ubuntu 14.04

I am a newcomer to Hadoop. For my college project we were given 4 VMs. I need to configure a multi-node Hadoop cluster on them (1 master, 3 slaves) and run my webapp on it. I will be using HBase in my project. Usually CentOS is used for installation and deployment of HDP, whereas I was given Ubuntu. I cannot use the Apache Ambari plugin for installation as it is not supported on Ubuntu. I need to deploy manually, hence I tried looking for tutorials.
I looked for a tutorial to install HDP multi-node clusters on Ubuntu and found this: [http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/]
But it's too outdated (2010).
I have the official documentation here, [http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_installing_manually_book/content/rpm-chap2-3.html] and I tried following it, but I am not able to follow it properly.
Could someone suggest some more recent links, ideally a tutorial with a decent amount of screenshots, for installing multi-node clusters on Ubuntu 14.04 (12.04 is also fine)?
Thanks a lot!!
The Michael Noll tutorial is too old, I think. I found this site:
https://www.digitalocean.com/community/tutorials/how-to-install-hadoop-on-ubuntu-13-10
I have a mini cluster (with 5 slaves and a master) in my university lab, running Ubuntu 12.04 and Hadoop 2.5.0. Furthermore, I have a VM cluster on my laptop (2 slaves and a master) running Hadoop 1.2.1, also on Ubuntu 12.04.
But I couldn't install Hadoop (any version) on Ubuntu 14.04. I don't remember the cause, but I think it was some problem with the Java version (I didn't check).
I hope that helps you!
I came across the same issue installing HDP 2.2 on Ubuntu 14.04, and found a solution.
I documented everything here: http://www.swiss-scalability.com/2014/12/install-hdp-22-on-ubuntu-1404-trusty.html
In a nutshell, the magic happens here:
sed -e "s/14.04/12.04/g" -i /etc/*-release
And then you can install or restart ambari-agent; it will be able to communicate with ambari-server.
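A minimal sketch of the whole workaround (assuming the Ambari packages are already installed from the HDP repositories; the restart subcommand is the agent's standard restart):
# Make the release files report 12.04 so Ambari treats the host as a supported OS
$ sudo sed -e "s/14.04/12.04/g" -i /etc/*-release
# Then restart the agent so it re-registers with the Ambari server
$ sudo ambari-agent restart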

zfs installation issue Ubuntu 12.04 on Amazon EC2

I attached three EBS volumes of 3 GB each to an Amazon EC2 micro instance and mounted the disks xvdd, xvdc and xvdb.
My aim was to create a zfs pool using these 3 disks.
I had updated and upgraded Ubuntu 12.04, installed the zfs-linux dependencies, added the zfs-native PPA, and then issued the zfs install command, which is
sudo apt-get install ubuntu-zfs
After this, I get the console status below, and after the "run-parts:" line shown at the end, the install process never proceeded further. I waited for 20+ minutes and got this:
Setting up zfs-dkms (0.6.0.91-0ubuntu1~precise1) ...
Loading new zfs-0.6.0.91 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-31-virtual
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Setting up linux-headers-3.2.0-35 (3.2.0-35.55) ...
Setting up linux-headers-3.2.0-35-generic (3.2.0-35.55) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.2.0-35-generic /boot/vmlinuz-3.2.0-35-generic
Is this issue related to the EC2 kernel for Ubuntu? Or does the machine running ZFS need to be of higher capacity?
Usually with hosting providers the kernel is the cause. My provider (OVH) delivers their own customized (and allegedly more secure) kernel (alas, without sources), although they reluctantly permit installing the generic kernel, which solved the problem for me. I don't know about Amazon; perhaps their customized kernel is crucial for their EC2 service. On the other hand, I very much doubt any hosting provider would release the source code of their kernel.
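Going by the DKMS output above, the module build was skipped because the headers for the running kernel (3.2.0-31-virtual) are missing, while headers for 3.2.0-35 were pulled in instead. A hedged workaround sketch, assuming the matching headers package is still in the Ubuntu archives:
# Install headers that match the kernel currently running on the instance
$ sudo apt-get install linux-headers-$(uname -r)
# Rebuild the ZFS modules against them
$ sudo dpkg-reconfigure zfs-dkms
Alternatively, rebooting into the 3.2.0-35 kernel (whose headers are already installed) should let DKMS build the modules.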

Resources