CentOS 6 kickstart cloud - Linux

I am trying to create a Kickstart script for CentOS 6 that is cloud-ready; as a basic prerequisite it should have just one partition, so that the cloud-init scripts are able to grow the partition.
While I have been successful with CentOS 7, I am finding a lot of issues with CentOS 6.
The furthest I have got is creating just one partition, but Kickstart seems to fail to make it bootable, and there it breaks.
Also note I am using QEMU + Packer, so I have the VirtIO drivers loaded as part of the build.
This is my code so far:
install
url --url http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/
repo --name updates --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/
repo --name="os" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/ --cost=100
repo --name="updates" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/ --cost=100
repo --name="extras" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/extras/x86_64/ --cost=100
# for too new hardware
unsupported_hardware
text
skipx
bootloader
firewall --disabled
selinux --disabled
firstboot --disabled
lang en_GB.UTF-8
keyboard uk
timezone --utc Etc/UTC
zerombr
clearpart --all --initlabel
part / --ondisk=vda --size=8191 --grow
rootpw password
authconfig --enableshadow --passalgo=sha512
reboot
%packages --nobase
#core
-*firmware
-b43-openfwwf
-efibootmgr
-audit*
-libX*
-fontconfig
-freetype
sudo
openssh-clients
openssh-server
gcc
make
perl
kernel-firmware
kernel-devel
%end
%post
# allow sudo without a tty, which SSH-based provisioning needs
sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers
# drop the graphical boot splash from the kernel command line
sed -i 's/rhgb //' /boot/grub/grub.conf
%end
And I just got stuck there.
I have tried many combinations of partitioning, but nothing seems to work.
For CentOS 7 I do not have any of these problems, but CentOS 6.9 seems harder.
Any help, please?
Many thanks.

In the end, the following just worked:
install
url --url http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/
repo --name updates --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/
repo --name="os" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/ --cost=100
repo --name="updates" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/ --cost=100
repo --name="extras" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/extras/x86_64/ --cost=100
# for too new hardware
unsupported_hardware
text
skipx
bootloader
firewall --disabled
selinux --disabled
firstboot --disabled
lang en_GB.UTF-8
keyboard uk
timezone --utc Etc/UTC
zerombr
clearpart --all --initlabel
part / --ondisk=vda --size=3000 --grow
rootpw password
authconfig --enableshadow --passalgo=sha512
I assume --ondisk was the missing bit in all this.
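Beyond what the post shows: a single partition by itself is not enough for cloud-init to grow the root filesystem on CentOS 6, which, unlike CentOS 7, does not ship the growroot pieces by default. A minimal, untested %post sketch, assuming EPEL is reachable during the build and still serves the EL6 packages:
%post
# growroot lives in EPEL on EL6 (assumption: package names unchanged)
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
yum install -y cloud-init cloud-utils-growpart dracut-modules-growroot
# rebuild the initramfs so the growroot dracut module is included;
# in kickstart %post you may need to name the installed kernel explicitly
dracut -f
%end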

Related

Ubuntu Focal headless setup on Raspberry Pi 4 - cloud-init WiFi initialisation before first reboot

I'm having trouble setting up a fully headless install of Ubuntu Server Focal (ARM) on a Raspberry Pi 4 using cloud-init config. The whole purpose of doing this is to simplify the SD card swap in case of failure. I'm trying to use cloud-init config files to apply static config for LAN/WLAN, create a new user, add SSH authorized keys for the new user, install Docker, etc. However, whatever I do, it seems the WiFi settings are not applied before the first reboot.
Step 1: burn the image onto the SD card.
Step 2: overwrite system-boot/network-config and system-boot/user-data on the SD card with the config files below (a sketch of this copy step follows the two listings).
network-config
version: 2
renderer: networkd
ethernets:
  eth0:
    dhcp4: false
    optional: true
    addresses: [192.168.100.8/24]
    gateway4: 192.168.100.2
    nameservers:
      addresses: [192.168.100.2, 8.8.8.8]
wifis:
  wlan0:
    optional: true
    access-points:
      "AP-NAME":
        password: "AP-Password"
    dhcp4: false
    addresses: [192.168.100.13/24]
    gateway4: 192.168.100.2
    nameservers:
      #search: [mydomain, otherdomain]
      addresses: [192.168.100.2, 8.8.8.8]
user-data
chpasswd:
  expire: true
  list:
    - ubuntu:ubuntu
# Enable password authentication with the SSH daemon
ssh_pwauth: true
groups:
  - myuser
  - docker
users:
  - default
  - name: myuser
    gecos: My Name
    primary_group: myuser
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA....
    lock_passwd: false
    passwd: $6$rounds=4096$7uRxBCbz9$SPdYdqd...
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - git
runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
## TODO: add git deployment and configure folders
power_state:
  mode: reboot
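As promised in step 2, a sketch of the copy step (my addition; /dev/sdX1 stands in for the card's boot partition, which is labelled system-boot):
# mount the flashed card's boot partition and drop both files in place
sudo mount /dev/sdX1 /mnt
sudo cp network-config /mnt/network-config
sudo cp user-data /mnt/user-data
sudo umount /mnt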
During the first boot cloud-init always applies the fallback network config.
I also tried to apply the headless config for WiFi as described here.
I created a wpa_supplicant.conf and copied it to the SD card's system-boot folder:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=RO
network={
    ssid="AP-NAME"
    psk="AP-Password"
}
I also created an empty ssh file and copied it to system-boot.
The run commands always fail, since during the first boot cloud-init applies the fallback network config. After a reboot, the LAN/WLAN settings are applied, the user is created, and the SSH authorized keys are added. However, I still need to SSH into the Pi and install the remaining packages (Docker etc.), which I wanted to avoid. Am I doing something wrong?
I'm not sure if you ever found a workaround, but I'll share some information I found when researching options.
Ubuntu's Raspberry Pi WiFi Setup Page states the need for a reboot when using network-config with WiFi:
Note: During the first boot, your Raspberry Pi will try to connect to this network. It will fail the first time around. Simply reboot (sudo reboot) and it will work.
There's an interesting workaround & approach in this repo.
It states it was created for 18.04, but it should work with 20.04 as both Server versions use netplan and systemd-networkd.
Personally, I've gone a different route.
I create custom images that contain my settings and packages, then burn them to uSD or share them via a TFTP server. I was surprised at how easy this was.
There's a good post on creating custom images here.
Some important additional info is here.
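To give a flavour of that route, here is my own rough sketch (not from the linked posts; the image name and the kpartx/qemu-user-static approach are assumptions about one common way to do it):
# map the image's partitions (creates /dev/mapper/loop0p1 and loop0p2)
sudo kpartx -av ubuntu-20.04-preinstalled-server-arm64+raspi.img
sudo mount /dev/mapper/loop0p2 /mnt
sudo mount /dev/mapper/loop0p1 /mnt/boot/firmware
# on an x86 host, qemu-user-static lets the chroot run arm64 binaries
sudo cp /usr/bin/qemu-aarch64-static /mnt/usr/bin/
sudo chroot /mnt apt-get update
sudo chroot /mnt apt-get install -y docker.io
# clean up and unmap
sudo umount /mnt/boot/firmware /mnt
sudo kpartx -dv ubuntu-20.04-preinstalled-server-arm64+raspi.img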

HDP 2.5 Hortonworks ambari-admin-password-reset missing

I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seems like the ambari-admin-password-reset command is not there and missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now it seems like the command is there, but I have one password for the console and a different one for PuTTY, for the same user.
I have tried to find out why, for the same user 'root', I have two different passwords I can log in with (one for the VirtualBox console and one for PuTTY). I see different commands on each box; more than that, when I share a folder I can only see it from the VirtualBox console but not from the PuTTY session, which is really frustrating.
How can I ensure that what I see from PuTTY is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
Running commands from the VirtualBox machine gives this output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about docker containers. It seems like the machine's port 2222 is the SSH port of the HDP 2.5 container, not of the hosting machine.
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to all helpers.
So now I have had the time to analyze the sandbox VM and write it up for other users.
As you stated correctly in your edit of the question, it is the docker container setup of the sandbox which confuses, with two separate root users:
via ssh root@127.0.0.1 -p 2222 you get into the docker container called "sandbox". This is a CentOS release 6.8 (Final) containing all the HDP services, especially the ambari service. The configuration enforces a password change at first login for the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the ambari admin there.
via console access you reach the docker host running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question about the hanging docker exec: it seems to be a bug in that specific docker version. If you google it, you will find issues discussing this or similar problems with docker.
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there was not enough space on the boot partition.
So I moved the boot partition onto the root partition:
edit /etc/fstab and comment out the /boot entry
umount /boot
mv /boot
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I found out that the docker configuration was broken and docker did not start anymore. In the logs it complained about:
"Error starting daemon: error initializing graphdriver: \"/var/lib/docker\" contains other graphdrivers: devicemapper; Please cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
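One step the write-up skips (my addition, standard systemd behaviour): after editing a unit file, systemd has to reload its configuration before the change takes effect:
# pick up the edited docker.service before restarting the daemon
systemctl daemon-reload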
After a service docker start and a docker start sandbox, the container worked again; I could log in to the container, and after an ambari-server restart everything worked again.
And now - with the new docker version 1.12.2, docker exec sandbox ls works again.
So to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice before upgrading your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
I could SSH into the container IP (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all the "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
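As an aside (my addition, using standard docker inspect templating), you can ask Docker for the container IP directly instead of reading it out of the ps output:
# prints e.g. 172.17.0.2 for a container on the default bridge network
docker inspect -f '{{ .NetworkSettings.IPAddress }}' sandbox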
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions for installing Hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts ambari. Enter "root" as the login and "hadoop" as the password, change the root password, and then enter "ambari-admin-password-reset" to reset the ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
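That last hosts edit can be done in one command (a sketch; the path is /private/etc/hosts on macOS, plain /etc/hosts on Linux):
echo "127.0.0.1 sandbox-hdp.hortonworks.com" | sudo tee -a /private/etc/hosts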
If the password is incorrect, you can reset it from the recovery menu:
Click the power button in the corner, choose Restart from the power-off drop-down, and press the Esc key while it boots up to get into the recovery menu.
Select the advanced options entry and hit Enter.
Select recovery mode and hit Enter.
Select the root option and hit Enter.
Then run:
mount -rw -o remount /
ls /home
passwd username
(with your own username in place of username), and enter the new password twice when prompted.
Hopefully you have changed the password (:

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to set up an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to set up SSH key authentication to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps and create a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privilege to the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right.
5. Now back to your development server
Navigate to your directory where you'd like to work, and run the following;
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine with DigitalOcean, but they shouldn't differ from using Google CE.
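Before step 5, it can help to verify key authentication on its own (my addition; my-gitlab-server.com is a placeholder for your GitLab host, which greets you by username once the key works):
$ ssh -T git@my-gitlab-server.com
Welcome to GitLab, @username!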
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
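Note (my addition, standard SELinux practice): semanage fcontext -a only records the rule; restorecon applies it to files that already exist:
sudo restorecon -Rv /var/opt/gitlab/.ssh # relabel using the rule added above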
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
In my situation the git user wasn't set up completely. If you get messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via "passwd -d git".
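For reference (my addition, standard shadow-utils commands), you can confirm the lock state before and after:
passwd -S git # "LK" in the status field means the account is locked
passwd -d git # delete the password, which removes the lock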

Dockerfile: Docker build can't download packages: centos->yum, debian/ubuntu->apt-get behind intranet

PROBLEM: Any build with a Dockerfile based on centos, ubuntu, or debian fails to build.
ENVIRONMENT: I have Mac OS X running VMware with an Ubuntu 14.04 guest, which runs Docker:
mdesales@ubuntu ~ $ sudo docker version
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): d84a070
BEHAVIOR: Using "docker build" fails to download packages. Here are examples of such Dockerfiles: https://github.com/Krijger/docker-cookbooks/blob/master/jdk8-oracle/Dockerfile, https://github.com/ottenhoff/centos-java/blob/master/Dockerfile
I know that we can run a container with --dns, but this happens during build time.
CENTOS
FROM centos
RUN yum install a b c
UBUNTU
FROM ubuntu
RUN apt-get install a b c
Users have reported that it might be a problem with the DNS configuration; others found that their configuration had Google's DNS servers commented out.
Step 2 : RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
---> Running in 5f11b65c87b8
Loaded plugins: fastestmirror
Couldn't resolve host 'mirrorlist.centos.org'
Still the problem persisted... So most users on #docker on Freenode mentioned that it might be a problem with the DNS configuration. So here's my Ubuntu:
$ sudo cat /etc/resolv.conf
nameserver 127.0.1.1
search localdomain
I tried changing that; same problem...
PROBLEM
Talking to some developers on #docker on Freenode, the problem was clear to everyone: DNS and the environment. The build works just fine on a regular Internet connection at home.
SOLUTION:
This problem occurs in an environment that has a private DNS server, or whose network blocks Google's DNS servers. Even if the docker container can ping 8.8.8.8, the build still needs access to the same private DNS server behind your firewall or data center.
Start the Docker daemon with the --dns switch to point to your private DNS server, just as your host OS is configured. This was found by trial and error.
Details
My Mac OS X host OS had a different DNS configured in its /etc/resolv.conf:
mdesales@Marcello-Work ~ (mac) $ cat /etc/resolv.conf
search corp.my-private-company.net
nameserver 172.18.20.13
nameserver 172.20.100.29
My host might be dropping the packets to Google's IP address 8.8.8.8 while building... I just took those two IP addresses and placed them in Ubuntu's Docker daemon configuration:
mdesales#ubuntu ~ $ cat /etc/default/docker
...
...
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 172.18.20.13 --dns 172.20.100.29 --dns 8.8.8.8"
...
The build now works as expected!
$ sudo ./build.sh
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM centos
---> b157b77b1a65
Step 1 : MAINTAINER Marcello_deSales@intuit.com
---> Running in 49bc6e233e4c
---> 2a380810ffda
Removing intermediate container 49bc6e233e4c
Step 2 : RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
---> Running in 5f11b65c87b8
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirror.supremebytes.com
* extras: centos.mirror.ndchost.com
* updates: mirrors.tummy.com
Resolving Dependencies
--> Running transaction check
---> Package systemd.x86_64 0:208-11.el7 will be updated
---> Package systemd.x86_64 0:208-11.el7_0.2 will be an update
---> Package systemd-libs.x86_64 0:208-11.el7 will be updated
---> Package systemd-libs.x86_64 0:208-11.el7_0.2 will be an update
--> Finished Dependency Resolution
Thanks to @BrianF and others who helped in the IRC channel!
Permanent VM Solution - UPDATE JULY 2, 2015
We now have GitHub Enterprise and CoreOS Enterprise Docker Registry in the mix... So, it was important for me to add the corporate DNS servers from the HOST machine in order to get the VM also to work.
I just created a new VM using Ubuntu 15.04 on VMware Fusion (Docker 1.7.0) and had this problem again... Replacing the guest OS's /etc/resolv.conf with the host's /etc/resolv.conf also resolved the problem!
/etc/resolv.conf BEFORE
~/dev/github/public/stackedit on ⭠ master ⌚ 20:31:02
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
search localdomain
/etc/resolv.conf AFTER
~/dev/github/public/stackedit on ⭠ master ⌚ 20:56:09
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
search corp.mycompany.net
nameserver 10.180.194.35
nameserver 10.180.194.36
nameserver 192.168.1.1
I had pretty much the same problem. The provided solution didn't help in my case, but it worked as soon as I updated my Dockerfile, adding environment variables for the proxy:
ENV HTTP_PROXY http://<proxy_host>:<port>
ENV HTTPS_PROXY http://<proxy_host>:<port>
ENV http_proxy http://<proxy_host>:<port>
ENV https_proxy http://<proxy_host>:<port>
It's likely due to your local caching name server listening on 127.0.1.1, which is not accessible from within the container.
Try putting the following into your Dockerfile:
CMD "sh" "-c" "echo nameserver 8.8.8.8 > /etc/resolv.conf"
Also, just adding the nameservers from the host (in my case Mac OS X) to the docker-machine VM solves the problem.
For me the problem was that my ISP blocked Google's DNS (8.8.8.8), which docker uses as a fallback default.
The trick here is to find out your DNS IP and tell docker to use it.
In my case (running Ubuntu 17.04), trying to get this information from /etc/resolv.conf did not work, but I used this command:
nmcli dev show | grep IP4.DNS
Then I took this IP and added it in /etc/default/docker:
DOCKER_OPTS="--dns 192.168.50.1"
Now restart your docker daemon and try building again.
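A sketch of that restart (assuming the init-script setup that goes with the /etc/default/docker file above; a systemd host would use systemctl restart docker instead):
sudo service docker restart
sudo docker build . # retry the previously failing build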
In my case, the issue is that our company's DNS is flawed in a few ways, which requires tampering with /etc/hosts and, for docker, with /etc/docker/daemon.json. That's the file which was hiding the error:
{
"dns": ["10.5...", "10.5...", "10.5..."]
}
I backed this up and replaced it with
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
And it started working. I am looking for a solution that works in all cases - on our VPN, which needs the custom DNS servers, as well as at home on a normal network.
Note that in modern Linux, /etc/hosts is generated and DNS is managed by systemd. I am not sure how Docker handles this, but perhaps it could be enough to point it to systemd's stub resolver at 127.0.0.53.
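If you go that route, you can first check which upstream servers the systemd stub forwards to (my addition; resolvectl on newer systems, systemd-resolve --status on older ones):
resolvectl status | grep -A2 'DNS Servers'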
Create a local repo mirror - this can also be done as a docker-mirror-packages-repo.
Then run docker build --add-host "archive.ubuntu.com:repo-docker-ip" to have the build process download from your local mirror. That is not only faster but also ensures better reproducibility of your builds.
I am using that for the test suite of the docker-systemctl-replacement, which tests compatibility with a number of distros, each with dozens of docker rebuilds.

How to access local apt repository that requires HTTP authentication?

I have a couple of local apt repositories which are setup using reprepro.
I want to make them available on the local network and I've got next to everything setup.
However, the repos are sitting behind HTTPS with authentication (this is out of my control), so when I try to pull from them from another server the request just hangs; I think it's because it is waiting for the username/password to be supplied.
I'm not sure how to supply these. Do they go in the sources.list file on the pulling server? What would the format be?
Cheers
Instead of hardcoding the password in sources.list, it's better to create an auth.conf file and supply your credentials there, as documented on the Debian page:
/etc/apt/auth.conf:
machine example.org
login apt
password debian
for an entry like below in sources.list
deb https://example.org/debian buster main
For more info, refer to:
Debian Page Reference
In order to supply a password to a Debian-style repository over HTTPS, add this line to /etc/apt/sources.list:
deb https://user:password@repo.server.com/debian ./
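One caveat with this form (my addition): special characters in the password must be percent-encoded so apt can parse the URL, e.g. an @ becomes %40:
deb https://user:p%40ssword@repo.server.com/debian ./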
If you want per-user authentication, you can pass a custom auth.conf per apt call:
trap "rm ./auth.conf.tmp" EXIT
cat << EOF > ./auth.conf.tmp
machine example.org
login apt
password debian
EOF
# overrule /etc/apt/auth.conf with Dir::Etc::netrc config
# see all options with
# zcat /usr/share/doc/apt/examples/configure-index.gz
sudo apt -o Dir::Etc::netrc=./auth.conf.tmp update
Sources:
https://manpages.debian.org/bullseye/apt/apt_auth.conf.5.en.html#FILES
https://manpages.ubuntu.com/manpages/xenial/man8/apt-get.8.html
