Ubuntu Focal headless setup on Raspberry Pi 4 - cloud-init Wi-Fi initialisation before first reboot - ubuntu-server

I'm having trouble setting up a fully headless install of Ubuntu Server Focal (ARM) on a Raspberry Pi 4 using cloud-init config. The whole purpose of doing this is to simplify the SD card swap in case of failure. I'm trying to use cloud-init config files to apply static config for LAN/WLAN, create a new user, add SSH authorized keys for the new user, install Docker, etc. However, whatever I do, the Wi-Fi settings are not applied before the first reboot.
Step 1: burn the image to the SD card.
Step 2: overwrite system-boot/network-config and system-boot/user-data on the SD card with the following config files.
network-config
version: 2
renderer: networkd
ethernets:
  eth0:
    dhcp4: false
    optional: true
    addresses: [192.168.100.8/24]
    gateway4: 192.168.100.2
    nameservers:
      addresses: [192.168.100.2, 8.8.8.8]
wifis:
  wlan0:
    optional: true
    access-points:
      "AP-NAME":
        password: "AP-Password"
    dhcp4: false
    addresses: [192.168.100.13/24]
    gateway4: 192.168.100.2
    nameservers:
      #search: [mydomain, otherdomain]
      addresses: [192.168.100.2, 8.8.8.8]
user-data
chpasswd:
  expire: true
  list:
    - ubuntu:ubuntu
# Enable password authentication with the SSH daemon
ssh_pwauth: true
groups:
  - myuser
  - docker
users:
  - default
  - name: myuser
    gecos: My Name
    primary_group: myuser
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA....
    lock_passwd: false
    passwd: $6$rounds=4096$7uRxBCbz9$SPdYdqd...
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - git
runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
## TODO: add git deployment and configure folders
power_state:
  mode: reboot
During the first boot cloud-init always applies the fallback network config.
I also tried to apply the headless config for Wi-Fi as described here.
Created wpa_supplicant.conf and copied it to the SD card's system-boot folder.
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=RO
network={
ssid="AP-NAME"
psk="AP-Password"
}
I also created an empty ssh file and copied it to system-boot.
The run commands always fail, since during the first boot cloud-init applies the fallback network config. After a reboot, the LAN/WLAN settings are applied, the user is created, and the SSH authorized keys are added. However, I still need to SSH into the Pi and install the remaining packages (Docker etc.), which I wanted to avoid. Am I doing something wrong?

I'm not sure if you ever found a workaround, but I'll share some information I found when researching options.
Ubuntu's Raspberry Pi WiFi Setup Page states the need for a reboot when using network-config with WiFi:
Note: During the first boot, your Raspberry Pi will try to connect to this network. It will fail the first time around. Simply reboot (sudo reboot) and it will work.
There's an interesting workaround & approach in this repo.
It states it was created for 18.04, but it should work with 20.04 as both Server versions use netplan and systemd-networkd.
Personally, I've gone a different route.
I create custom images that contain my settings & packages, then burn to uSD or share via a TFTP server. I was surprised at how easy this was.
There's a good post on creating custom images here
Some important additional info is here
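The SD-card preparation the question describes (steps 1-2) can itself be scripted, so a replacement card is ready in seconds. A minimal sketch, using scratch directories in place of the real mounted partition; the real mount point (e.g. /media/$USER/system-boot) and the file contents are assumptions you would substitute:

```shell
# Demo: overwrite the cloud-init seed files on the system-boot partition.
# Scratch directories stand in for the real seed folder and mount point.
set -eu

SEED_DIR="$(mktemp -d)"   # stands in for the directory holding your configs
BOOT="$(mktemp -d)"       # stands in for the mounted system-boot partition

# pretend these are your real config files (assumption: yours are prepared)
printf 'version: 2\n' > "$SEED_DIR/network-config"
printf '#cloud-config\n' > "$SEED_DIR/user-data"

# copy both seed files onto the card, failing loudly if one is missing
for f in network-config user-data; do
    [ -f "$SEED_DIR/$f" ] || { echo "missing $SEED_DIR/$f" >&2; exit 1; }
    cp "$SEED_DIR/$f" "$BOOT/$f"
done
echo "seed files copied to $BOOT"
```

On a real card you would point BOOT at the mounted system-boot partition instead of a temp directory.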

Related

Setting up a Remotely Accessible Postgres Database with Linux and PGAdmin

I'm trying to set up a remotely accessible Postgres database. I want to host this database on one Linux-based device (HOST), and access it from another Linux-based device (CLIENT).
In my specific case, HOST is a desktop device running Ubuntu. CLIENT is a Chromebook with a Linux virtual system. (I know. But it's the closest thing to a Linux-based device that I have to hand.)
Steps Already Taken to Set Up the Database
Installed the required software on HOST using APT.
PGP_KEY_URL="https://www.postgresql.org/media/keys/ACCC4CF8.asc"
POSTGRES_URL_STEM="http://apt.postgresql.org/pub/repos/apt/"
POSTGRES_URL="$POSTGRES_URL_STEM `lsb_release -cs`-pgdg main"
POSTGRES_VERSION="12"
PGADMIN_URL_SHORT="https://www.pgadmin.org/static/packages_pgadmin_org.pub"
PGADMIN_URL_STEM="https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt"
PGADMIN_TO_ECHO="deb $PGADMIN_URL_STEM/`lsb_release -cs` pgadmin4 main"
PGADMIN_PATH="/etc/apt/sources.list.d/pgadmin4.list"
sudo apt install curl --yes
sudo apt install gnupg2 --yes
wget --quiet -O - $PGP_KEY_URL | sudo apt-key add -
echo "deb $POSTGRES_URL" | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt install postgresql-$POSTGRES_VERSION --yes
sudo apt install postgresql-client-$POSTGRES_VERSION --yes
sudo curl $PGADMIN_URL_SHORT | sudo apt-key add -
sudo sh -c "echo \"$PGADMIN_TO_ECHO\" > $PGADMIN_PATH && apt update"
sudo apt update
sudo apt install pgadmin4 --yes
Created a new Postgres user.
NU_USERNAME="my_user"
NU_PASSWORD="guest"
NU_QUERY="CREATE USER $NU_USERNAME WITH superuser password '$NU_PASSWORD';"
sudo -u postgres psql -c "$NU_QUERY"
Created the new server and database. I did this manually, using the PGAdmin GUI.
Added test data, a table with a couple of records. I did this with a script.
Followed the steps given in this answer to make the database remotely accessible.
Steps Already Taken to Connect to the Database REMOTELY
Installed PGAdmin on CLIENT.
Attempted to connect using PGAdmin. I used the "New Server" wizard, and entered:
Host IP Address: 192.168.1.255
Port: 5432 (same as when I set up the database on HOST)
User: my_user
Password: guest
However, when I try to save the connection, PGAdmin responds after a few seconds saying that the connection has timed out.
You have to configure listen_addresses in postgresql.conf (e.g. /var/lib/pgsql/data/postgresql.conf; on Ubuntu it is usually /etc/postgresql/12/main/postgresql.conf) like this:
listen_addresses = '*'
Next make sure your firewall doesn't block the connection by checking if telnet can connect to your server:
$ telnet 192.168.1.255 5432
Connected to 192.168.1.255.
Escape character is '^]'.
If you see Connected, network connectivity is OK. Next you have to configure access rights for remote hosts in pg_hba.conf.
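The "access rights" step means adding a host line to pg_hba.conf. A minimal sketch; the subnet and the real file path (on Ubuntu usually under /etc/postgresql/12/main/) are assumptions, and this demo edits a scratch copy so nothing on the machine running it is touched:

```shell
# Demo: append a pg_hba.conf entry allowing password logins from the LAN.
set -eu

PG_HBA="$(mktemp)"   # stands in for the real pg_hba.conf (path is an assumption)

# allow md5 (password) logins to all databases from the 192.168.1.0/24 subnet
echo 'host    all    all    192.168.1.0/24    md5' >> "$PG_HBA"

# after editing the real file, reload the server, e.g.:
#   sudo systemctl reload postgresql
cat "$PG_HBA"
```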

CentOS 6 kickstart cloud

I am trying to create a Kickstart script for CentOS 6 that is cloud-ready, so as a basic prerequisite it will have just one partition, allowing the cloud-init scripts to grow the partition.
While I have been successful with CentOS 7, I am finding a lot of issues with CentOS 6.
The furthest I have got is creating just one partition, but Kickstart seems to fail to make it bootable, and there it breaks.
Also note I am using QEMU + Packer, so I have the virtio drivers loaded as part of the build.
So, this has been my code so far
install
url --url http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/
repo --name updates --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/
repo --name="os" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/ --cost=100
repo --name="updates" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/ --cost=100
repo --name="extras" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/extras/x86_64/ --cost=100
# for too new hardware
unsupported_hardware
text
skipx
bootloader
firewall --disabled
selinux --disabled
firstboot --disabled
lang en_GB.UTF-8
keyboard uk
timezone --utc Etc/UTC
zerombr
clearpart --all --initlabel
part / --ondisk=vda --size=8191 --grow
rootpw password
authconfig --enableshadow --passalgo=sha512
reboot
%packages --nobase
#core
-*firmware
-b43-openfwwf
-efibootmgr
-audit*
-libX*
-fontconfig
-freetype
sudo
openssh-clients
openssh-server
gcc
make
perl
kernel-firmware
kernel-devel
%end
%post
sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers
sed -i 's/rhgb //' /boot/grub/grub.conf
%end
And I just got stuck there.
I have tried many combinations of partitioning, but nothing seems to work.
For CentOS 7 I do not have any of these problems, but CentOS 6.9 seems harder.
Any help, please?
Many thanks.
In the end this just worked, as stated below:
install
url --url http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/
repo --name updates --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/
repo --name="os" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/os/x86_64/ --cost=100
repo --name="updates" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/updates/x86_64/ --cost=100
repo --name="extras" --baseurl=http://mirrors.ukfast.co.uk/sites/ftp.centos.org/6/extras/x86_64/ --cost=100
# for too new hardware
unsupported_hardware
text
skipx
bootloader
firewall --disabled
selinux --disabled
firstboot --disabled
lang en_GB.UTF-8
keyboard uk
timezone --utc Etc/UTC
zerombr
clearpart --all --initlabel
part / --ondisk=vda --size=3000 --grow
rootpw password
authconfig --enableshadow --passalgo=sha512
I assume --ondisk was the missing bit in all this.

Sending mail using GNU Mailutils inside Docker container

I have created a docker image where I install the mailutils package using:
RUN apt-get update && apt-get install -y mailutils
As a sample command, I am running:
mail -s 'Hello World' {email-address} <<< 'Message body'
When I execute the same command on my local machine, it sends the mail. However, in the docker container, it shows no errors but no mail is received at the specified email address.
I tried the --net=host argument when spawning my docker container.
Following is my docker command:
docker run --net=host -p 0.0.0.0:8000:8000 {imageName}:{tagName} {arguments}
Is there anything that I am missing? Could someone explain the networking concepts behind this problem?
Install ssmtp and configure to send all mails to your relayhost.
https://wiki.debian.org/sSMTP
Thanks for the response @pilasguru. ssmtp works for sending mail from within a docker container.
Just to make the response more verbose, here are the things one would need to do.
Install ssmtp in the container. You could do this by the following command.
RUN apt-get update && apt-get -y install ssmtp
You can configure ssmtp at /etc/ssmtp/ssmtp.conf
A typical configuration:
#
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root={root-name}
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub={smtp-server}
# Where will the mail seem to come from?
rewriteDomain={domain-name}
# The full hostname
hostname=c67fcdc6361d
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
You can directly copy that from your root directory where you're building your docker image. For eg. you keep your configurations in file named: my.conf.
You can copy them in your docker container using the command:
COPY ./my.conf /etc/ssmtp/ssmtp.conf
Send a mail using a simple command such as:
ssmtp recipient_name@gmail.com < filename.txt
You can even send an attachment, specify to and from using the following command:
echo -e "to: {to-addr}\nFrom: {from-addr}\nsubject: {subject}\n"| (cat - && uuencode /path/to/file/inside/container {attachment-name-in mail}) | ssmtp recipient_name@gmail.com
uuencode can be installed with apt-get install sharutils.
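Rather than hand-editing ssmtp.conf, it can be generated at build time from variables. A sketch; the relay host and domain are placeholder assumptions, and the demo writes to a scratch file rather than the real /etc/ssmtp/ssmtp.conf:

```shell
# Demo: render an ssmtp.conf from variables into a scratch file.
set -eu

MAILHUB="smtp.example.com:587"   # your relay host (assumption)
DOMAIN="example.com"             # your mail domain (assumption)
CONF="$(mktemp)"                 # stands in for /etc/ssmtp/ssmtp.conf

cat > "$CONF" <<EOF
root=postmaster
mailhub=$MAILHUB
rewriteDomain=$DOMAIN
hostname=$(hostname)
FromLineOverride=YES
EOF

cat "$CONF"
```

In a Dockerfile you would COPY the rendered file to /etc/ssmtp/ssmtp.conf, exactly as the answer above describes.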

Timeout while waiting for the machine to boot!! Vagrant-Virtualbox

I have a Gentoo (Linux) host machine, on which I have VirtualBox 4.3.28 and Vagrant 1.4.3 installed (these are the latest versions available for Gentoo).
On vagrant up, Ubuntu 14.04 gets launched. I'm also able to ssh into Ubuntu. But as soon as it gets launched I get the following error. Below are my Vagrantfile and the output error.
P.S. I have created the Ubuntu 14.04 base box from scratch.
-----------Vagrantfile-------------
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.box = "Ubuntu"
config.vm.boot_timeout = "700"
config.vm.provider :virtualbox do |vb|
vb.gui = true
end
end
-----------Output in terminal------------
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period. This can
mean a number of things.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Any solution to fix this problem?
P.S. I have created the Ubuntu 14.04 base box from scratch
That could be the missing piece. When you package a box, you need to run a few commands, as explained below.
It is very common for Linux-based boxes to fail to boot initially.
This is often a very confusing experience because it is unclear why it
is happening. The most common case is because there are persistent
network device udev rules in place that need to be reset for the new
virtual machine. To avoid this issue, remove all the persistent-net
rules. On Ubuntu, these are the steps necessary to do this:
$ rm /etc/udev/rules.d/70-persistent-net.rules
$ mkdir /etc/udev/rules.d/70-persistent-net.rules
$ rm -rf /dev/.udev/
$ rm /lib/udev/rules.d/75-persistent-net-generator.rules
Make sure to run the commands above before packaging the box.
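The cleanup above can be wrapped in an idempotent script. In this sketch ROOT points at a scratch directory populated with fake rule files, so nothing on the machine running it is touched; on the guest being packaged you would run the same commands against / as root:

```shell
# Demo: remove persistent-net udev rules and stop them being regenerated.
set -eu

ROOT="$(mktemp -d)"   # stands in for / on the guest being packaged

# set up fake files mimicking a real guest, for demonstration only
mkdir -p "$ROOT/etc/udev/rules.d" "$ROOT/lib/udev/rules.d"
touch "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
touch "$ROOT/lib/udev/rules.d/75-persistent-net-generator.rules"

# the actual cleanup: drop the generated rules and the generator,
# then create a directory with the rules file's name so it cannot
# be re-created on next boot
rm -f "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
mkdir -p "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
rm -rf "$ROOT/dev/.udev/"
rm -f "$ROOT/lib/udev/rules.d/75-persistent-net-generator.rules"
```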

Dockerfile: Docker build can't download packages: centos->yum, debian/ubuntu->apt-get behind intranet

PROBLEM: Any build with a Dockerfile based on centos, ubuntu or debian fails to build.
ENVIRONMENT: I have Mac OS X, running VMware with a guest Ubuntu 14.04, running Docker:
mdesales#ubuntu ~ $ sudo docker version
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): d84a070
BEHAVIOR: Using "docker build" fails to download packages. Here's an example of such Dockerfile: https://github.com/Krijger/docker-cookbooks/blob/master/jdk8-oracle/Dockerfile, https://github.com/ottenhoff/centos-java/blob/master/Dockerfile
I know that we can run a container with --dns, but this is during the build time.
CENTOS
FROM centos
RUN yum install a b c
UBUNTU
FROM ubuntu
RUN apt-get install a b c
Users have reported that it might be a problem with the DNS configuration; others noted that the configuration has Google's DNS servers commented out.
Step 2 : RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
---> Running in 5f11b65c87b8
Loaded plugins: fastestmirror
Couldn't resolve host 'mirrorlist.centos.org'
Still the problem persisted... So most users on #docker on Freenode mentioned that it might be a problem with the DNS configuration... So here's my Ubuntu:
$ sudo cat /etc/resolv.conf
nameserver 127.0.1.1
search localdomain
I tried changing that, same problem...
PROBLEM
Talking to some developers in #docker on Freenode, the problem was clear to everyone: DNS and the environment. The build works just fine on a regular Internet connection at home.
SOLUTION:
This problem occurs in an environment that has a private DNS server, or the network blocks the Google's DNS servers. Even if the docker container can ping 8.8.8.8, the build still needs to have access to the same private DNS server behind your firewall or Data Center.
Start the Docker daemon with the --dns switch pointing to your private DNS server, just as your host OS is configured. This was found by trial and error.
Details
My MAC OS X, host OS, had a different DNS configured on my /etc/resolv.conf:
mdesales#Marcello-Work ~ (mac) $ cat /etc/resolv.conf
search corp.my-private-company.net
nameserver 172.18.20.13
nameserver 172.20.100.29
My host might be dropping the packets to Google's IP address 8.8.8.8 while building... I just took those two IP addresses and placed them in the Ubuntu docker daemon configuration:
mdesales#ubuntu ~ $ cat /etc/default/docker
...
...
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 172.18.20.13 --dns 172.20.100.29 --dns 8.8.8.8"
...
The build now works as expected!
$ sudo ./build.sh
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM centos
---> b157b77b1a65
Step 1 : MAINTAINER Marcello_deSales@intuit.com
---> Running in 49bc6e233e4c
---> 2a380810ffda
Removing intermediate container 49bc6e233e4c
Step 2 : RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
---> Running in 5f11b65c87b8
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirror.supremebytes.com
* extras: centos.mirror.ndchost.com
* updates: mirrors.tummy.com
Resolving Dependencies
--> Running transaction check
---> Package systemd.x86_64 0:208-11.el7 will be updated
---> Package systemd.x86_64 0:208-11.el7_0.2 will be an update
---> Package systemd-libs.x86_64 0:208-11.el7 will be updated
---> Package systemd-libs.x86_64 0:208-11.el7_0.2 will be an update
--> Finished Dependency Resolution
Thanks to @BrianF and others who helped in the IRC channel!
Permanent VM Solution - UPDATE JULY 2, 2015
We now have GitHub Enterprise and CoreOS Enterprise Docker Registry in the mix... So, it was important for me to add the corporate DNS servers from the HOST machine in order to get the VM also to work.
I just created a new VM using Ubuntu 15.04 on VMware Fusion (Docker 1.7.0) and I had this problem again... Replacing the guest OS's /etc/resolv.conf with the host's /etc/resolv.conf also resolved the problem!
/etc/resolv.conf BEFORE
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
search localdomain
/etc/resolv.conf AFTER
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
search corp.mycompany.net
nameserver 10.180.194.35
nameserver 10.180.194.36
nameserver 192.168.1.1
I had pretty much the same problem. The provided solution didn't help in my case. But it worked as soon as I updated my Dockerfile, adding environment variables for the proxy:
ENV HTTP_PROXY http://<proxy_host>:<port>
ENV HTTPS_PROXY http://<proxy_host>:<port>
ENV http_proxy http://<proxy_host>:<port>
ENV https_proxy http://<proxy_host>:<port>
It's likely due to your local caching name server listening on 127.0.1.1 which is not accessible from within the container.
Try putting the following into your Dockerfile:
CMD "sh" "-c" "echo nameserver 8.8.8.8 > /etc/resolv.conf"
Also, just adding the nameservers from the host (in my case Mac OS X) to the docker-machine VM solves the problem.
For me the problem was that my ISP blocked google's DNS (8.8.8.8) which docker uses as a fallback default.
The trick here is to find out your DNS IP and tell docker to use it.
In my case (running Ubuntu 17.04), trying to get this information from /etc/resolv.conf did not work, but I used this command:
nmcli dev show | grep IP4.DNS
Then I took this IP and added it in /etc/default/docker:
DOCKER_OPTS="--dns 192.168.50.1"
Now restart your docker daemon, and try building again.
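The two steps above can be glued together in a small script: pull the DNS address out of nmcli's output and turn it into a DOCKER_OPTS line. Since nmcli is not present everywhere, this demo parses a canned sample of its output (an assumption about the format), and writes to a scratch file standing in for /etc/default/docker:

```shell
# Demo: derive a DOCKER_OPTS line from nmcli's IP4.DNS output.
set -eu

# on a real host: NMCLI_OUT="$(nmcli dev show | grep IP4.DNS)"
NMCLI_OUT="IP4.DNS[1]:                              192.168.50.1"

DNS_IP="$(echo "$NMCLI_OUT" | awk '{print $2}' | head -n1)"
CONF="$(mktemp)"   # stands in for /etc/default/docker

echo "DOCKER_OPTS=\"--dns $DNS_IP\"" >> "$CONF"
cat "$CONF"
```

After writing the real file, restart the daemon (e.g. sudo service docker restart) so the option takes effect.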
In my case, the issue is that our company's DNS is flawed in a few ways, which requires tampering with /etc/hosts and, for docker, /etc/docker/daemon.json. That's the file which was hiding the error:
{
"dns": ["10.5...", "10.5...", "10.5..."]
}
I have backed this up and replaced with
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
And it started working. I am looking for a solution that works in all cases - on our VPN, which needs the custom DNS servers, as well as at home on a normal network.
Note that in modern Linux, /etc/resolv.conf is generated and DNS is managed by systemd-resolved. I am not sure how Docker handles this, but perhaps it could be enough to point it at systemd-resolved's stub resolver at 127.0.0.53.
Create a local repo mirror - this can also be done as a docker-mirror-packages-repo.
Then run docker build --add-host "archive.ubuntu.com:repo-docker-ip" to have the build process download from your local mirror. That is not only faster but also ensures better reproducibility of your builds.
I am using that for the test suite of the docker-systemctl-replacement, which tests compatibility with a number of distros, each with dozens of docker rebuilds.
