Creating a Docker base image - Linux

I have a private Linux distribution (based on Red Hat 7).
I have an ISO file which holds the installation of that distribution; it can only be used to install the OS on a clean system.
I have some programs I would like to run as images on Docker, each program in a different image.
Each program can only run on my Linux environment, so I am looking for a way to create the appropriate images so they can be run under Docker.
I tried following Solomon's instructions here:
mkdir rootfs
mount -o loop /path/to/iso rootfs
tar -C rootfs -c . | docker import - rich/mybase
But I don't know how to proceed. I can't run any command, since the machine isn't running yet (there is no /bin/bash etc.).
How can I open the installation shell?
Is there a better way to run programs via Docker on a private Linux distribution?
(Just to be clear: the programs can run only on that specific OS, and that OS can only be installed on a clean machine. I'm not sure I need a base image, but I'd like to run these programs with Docker, and that is possible only on this OS.)
I ran into many questions like mine (like this one) but couldn't find an answer that helped me.

Assumptions
Server A: where the ISO will be mounted
Server R: your private repository
Server N: where the containers will run
All servers can connect to Server R.
How to
Build a base image as mentioned in your OP (named base/myimage).
Push the image to your private repository: https://docs.docker.com/registry/deploying/
Create application images from your base image base/myimage, then push them to your private repo.
From Server N, run the application image:
docker run application/myapp
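For illustration, the tag/push/run round trip might look like the following sketch (the registry address registry.example.com:5000, the app name, and paths are assumptions, not values from your setup):
# On Server A: tag the base image for the registry on Server R, then push it
docker tag base/myimage registry.example.com:5000/base/myimage
docker push registry.example.com:5000/base/myimage
# Build an application image from the base and push it too.
# A hypothetical Dockerfile for it could be:
#   FROM registry.example.com:5000/base/myimage
#   COPY myapp /opt/myapp
#   CMD ["/opt/myapp/run"]
docker build -t registry.example.com:5000/application/myapp .
docker push registry.example.com:5000/application/myapp
# On Server N: run the application image (docker pulls it from the registry automatically)
docker run registry.example.com:5000/application/myapp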

This script is from the official Docker contrib repo. It's used to create CentOS images from scratch, it should work with any Red Hat/CentOS-based system, and it gives you plenty of control over the various steps. Anything beyond that you can then modify post-base-image through a Dockerfile.
The file is here:
#!/usr/bin/env bash
#
# Create a base CentOS Docker image.
#
# This script is useful on systems with yum installed (e.g., building
# a CentOS image on CentOS). See contrib/mkimage-rinse.sh for a way
# to build CentOS images on other systems.
usage() {
    cat <<EOOPTS
$(basename $0) [OPTIONS] <name>
OPTIONS:
  -p "<packages>"  The list of packages to install in the container.
                   The default is blank.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core".
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora
EOOPTS
    exit 1
}
# option defaults
yum_config=/etc/yum.conf
if [ -f /etc/dnf/dnf.conf ] && command -v dnf &> /dev/null; then
    yum_config=/etc/dnf/dnf.conf
    alias yum=dnf
fi
install_groups="Core"
while getopts ":y:p:g:h" opt; do
    case $opt in
        y)
            yum_config=$OPTARG
            ;;
        h)
            usage
            ;;
        p)
            install_packages="$OPTARG"
            ;;
        g)
            install_groups="$OPTARG"
            ;;
        \?)
            echo "Invalid option: -$OPTARG"
            usage
            ;;
    esac
done
shift $((OPTIND - 1))
name=$1
if [[ -z $name ]]; then
    usage
fi
target=$(mktemp -d --tmpdir $(basename $0).XXXXXX)
set -x
mkdir -m 755 "$target"/dev
mknod -m 600 "$target"/dev/console c 5 1
mknod -m 600 "$target"/dev/initctl p
mknod -m 666 "$target"/dev/full c 1 7
mknod -m 666 "$target"/dev/null c 1 3
mknod -m 666 "$target"/dev/ptmx c 5 2
mknod -m 666 "$target"/dev/random c 1 8
mknod -m 666 "$target"/dev/tty c 5 0
mknod -m 666 "$target"/dev/tty0 c 4 0
mknod -m 666 "$target"/dev/urandom c 1 9
mknod -m 666 "$target"/dev/zero c 1 5
# amazon linux yum will fail without vars set
if [ -d /etc/yum/vars ]; then
    mkdir -p -m 755 "$target"/etc/yum
    cp -a /etc/yum/vars "$target"/etc/yum/
fi
if [[ -n "$install_groups" ]]; then
    yum -c "$yum_config" --installroot="$target" --releasever=/ --setopt=tsflags=nodocs \
        --setopt=group_package_types=mandatory -y groupinstall $install_groups
fi
if [[ -n "$install_packages" ]]; then
    yum -c "$yum_config" --installroot="$target" --releasever=/ --setopt=tsflags=nodocs \
        --setopt=group_package_types=mandatory -y install $install_packages
fi
yum -c "$yum_config" --installroot="$target" -y clean all
cat > "$target"/etc/sysconfig/network <<EOF
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOF
# effectively: febootstrap-minimize --keep-zoneinfo --keep-rpmdb --keep-services "$target".
# locales
rm -rf "$target"/usr/{{lib,share}/locale,{lib,lib64}/gconv,bin/localedef,sbin/build-locale-archive}
# docs and man pages
rm -rf "$target"/usr/share/{man,doc,info,gnome/help}
# cracklib
rm -rf "$target"/usr/share/cracklib
# i18n
rm -rf "$target"/usr/share/i18n
# yum cache
rm -rf "$target"/var/cache/yum
mkdir -p --mode=0755 "$target"/var/cache/yum
# sln
rm -rf "$target"/sbin/sln
# ldconfig
rm -rf "$target"/etc/ld.so.cache "$target"/var/cache/ldconfig
mkdir -p --mode=0755 "$target"/var/cache/ldconfig
version=
for file in "$target"/etc/{redhat,system}-release; do
    if [ -r "$file" ]; then
        version="$(sed 's/^[^0-9\]*\([0-9.]\+\).*$/\1/' "$file")"
        break
    fi
done
if [ -z "$version" ]; then
    echo >&2 "warning: cannot autodetect OS version, using '$name' as tag"
    version=$name
fi
tar --numeric-owner -c -C "$target" . | docker import - $name:$version
docker run -i -t --rm $name:$version /bin/bash -c 'echo success'
rm -rf "$target"

Related

Identifying architecture dependent location of nsswitch libraries

I have a DEB package which dynamically creates a chroot filesystem in the package's postinst helper script. The package works fine for x86, amd64, and arm64 on Debian Stretch/Buster/Bullseye and Ubuntu Bionic/Focal/Jammy. However, I recently tried to install it on Raspbian arm32 and it failed.
The problem is that the pathname of the nsswitch libraries is constructed differently than on the other platforms. In other words, the piecemeal assembly of the library path using uname -m does not match what's present in the filesystem.
#!/bin/bash -eu
U=chroot_user
UHOME=/home/$U
ARCH=$(uname -m)
function add_executable () {
    FROM="$1"; shift
    TO="$(basename $FROM)"
    if [ $# -ge 1 ]; then
        TO=$1; shift
    fi
    cp "$FROM" "$UHOME/bin/$TO"
    ldd "$FROM" | grep "=> /" | awk '{print $3}' | xargs -I '{}' cp '{}' $UHOME/lib/
    LIBNAME="ld-linux-$(echo $ARCH | tr '_' '-').so*"
    if compgen -G "/lib64/${LIBNAME}" > /dev/null; then
        cp /lib64/${LIBNAME} $UHOME/lib64/
    elif compgen -G "/lib/${LIBNAME}" > /dev/null; then
        cp /lib/${LIBNAME} $UHOME/lib/
    fi
}
if [ "$1" = "configure" ]; then
# Create a system user that has restricted bash as its login shell.
IS_USER=$(grep $U /etc/passwd || true)
if [ ! -z "$IS_USER" ]; then
killall -u $U || true
userdel -f $U > /dev/null 2>&1 || true
fi
adduser --system --home ${UHOME} --no-create-home --group --shell /bin/rbash ${U}
# Create a clean usable chroot
rm -rf $UHOME
mkdir -p $UHOME
mkdir -p $UHOME/dev/
mknod -m 666 $UHOME/dev/null c 1 3
mknod -m 666 $UHOME/dev/tty c 5 0
mknod -m 666 $UHOME/dev/zero c 1 5
mknod -m 666 $UHOME/dev/random c 1 8
mknod -m 644 $UHOME/dev/urandom c 1 9
chown root:root $UHOME
chmod 0755 $UHOME
mkdir -p $UHOME/bin
mkdir -p $UHOME/etc
mkdir -p $UHOME/lib
mkdir -p $UHOME/usr
cd $UHOME/usr
ln -s ../bin bin
cd - > /dev/null
cd $UHOME
ln -s lib lib64
cd - > /dev/null
mkdir $UHOME/lib/${ARCH}-linux-gnu
cp /lib/${ARCH}-linux-gnu/libnss* $UHOME/lib/${ARCH}-linux-gnu
cat <<EOT>$UHOME/etc/nsswitch.conf
passwd: files
group: files
EOT
chmod 0444 $UHOME/etc/nsswitch.conf
echo "127.0.0.1 localhost" > $UHOME/etc/hosts
chmod 0444 $UHOME/etc/hosts
if [ -d /etc/terminfo/ ]; then
cp -R /etc/terminfo $UHOME/etc
fi
if [ -d /lib/terminfo/ ]; then
cp -R /lib/terminfo $UHOME/lib
fi
# Add restricted bash and ssh/scp executables into the chroot. There is no
# need for any other executable.
add_executable /bin/bash rbash
add_executable /usr/bin/ssh
add_executable /usr/bin/scp
add_executable /bin/date
add_executable /bin/ls
add_executable /bin/rm
add_executable /bin/mv
add_executable /bin/cp
grep $U /etc/passwd > $UHOME/etc/passwd
grep $U /etc/group > $UHOME/etc/group
mkdir -p $UHOME/.ssh
chmod 700 $UHOME/.ssh
chown -R $U:$U $UHOME/.ssh
# When using SSH to get out of the jail onto localhost machine, we don't want
# to be constantly told about fingerprints and permanently added hosts
mkdir -p $UHOME/home/$U/.ssh
chmod 0700 $UHOME/home/$U/.ssh
chown -R $U:$U $UHOME/home/$U
fi
#DEBHELPER#
exit 0
# vim: set ts=2 sw=2 tw=0 et :
Not in the expected location ... well, it's more that the architecture string in uname's output doesn't match the directory name you want to construct.
But you can find the directory a different way, since you're on apt-based distros:
dpkg -L libnss3 | awk '/libnss3.so/{gsub(/\/libnss3.so/,"",$0);print}'
This worked for me on both Ubuntu 20.04 and Raspbian GNU/Linux 10 (buster):
Raspbian:
$ dpkg -L libnss3 | awk '/libnss3.so/{gsub(/\/libnss3.so/,"",$0);print}'
/usr/lib/arm-linux-gnueabihf
Ubuntu:
$ dpkg -L libnss3 | awk '/libnss3.so/{gsub(/\/libnss3.so/,"",$0);print}'
/usr/lib/x86_64-linux-gnu
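Folded back into the postinst, a minimal sketch (assuming libnss3 is installed on the target, as in the tests above) would replace the uname -m assembly with:
# Find the multiarch directory via dpkg instead of constructing it from uname -m
NSS_DIR=$(dpkg -L libnss3 | awk '/libnss3.so/{gsub(/\/libnss3.so/,"",$0);print}')
TRIPLET=$(basename "$NSS_DIR")   # e.g. arm-linux-gnueabihf or x86_64-linux-gnu
mkdir -p $UHOME/lib/${TRIPLET}
cp /lib/${TRIPLET}/libnss* $UHOME/lib/${TRIPLET}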

OpenBSD 6.7 how to install xbase

I am updating our integration test environments to OpenBSD 6.7 (from 6.5).
We use Ansible to install all the packages on the target system (OpenBSD 6.7, Vagrant image https://app.vagrantup.com/generic/boxes/openbsd6/versions/3.0.6).
With the above image, I cannot install Java (OpenJDK 11):
obsd-31# pkg_add -r jdk%11
quirks-3.325 signed on 2020-05-27T12:56:02Z
jdk-11.0.7.10.2p0v0:lz4-1.9.2p0: ok
jdk-11.0.7.10.2p0v0:zstd-1.4.4p1: ok
jdk-11.0.7.10.2p0v0:jpeg-2.0.4p0v0: ok
jdk-11.0.7.10.2p0v0:tiff-4.1.0: ok
jdk-11.0.7.10.2p0v0:lcms2-2.9p0: ok
jdk-11.0.7.10.2p0v0:png-1.6.37: ok
jdk-11.0.7.10.2p0v0:giflib-5.1.6: ok
Can't install jdk-11.0.7.10.2p0v0 because of libraries
|library X11.17.0 not found
| not found anywhere
|library Xext.13.0 not found
| not found anywhere
|library Xi.12.1 not found
| not found anywhere
|library Xrender.6.0 not found
| not found anywhere
|library Xtst.11.0 not found
| not found anywhere
|library freetype.30.0 not found
| not found anywhere
Direct dependencies for jdk-11.0.7.10.2p0v0 resolve to png-1.6.37 libiconv-1.16p0 giflib-5.1.6 lcms2-2.9p0 jpeg-2.0.4p0v0
Full dependency tree is giflib-5.1.6 lz4-1.9.2p0 tiff-4.1.0 png-1.6.37 xz-5.2.5 jpeg-2.0.4p0v0 lcms2-2.9p0 zstd-1.4.4p1 libiconv-1.16p0
Couldn't install jdk-11.0.7.10.2p0v0
My guess is that xbase is not installed.
However, I cannot figure out how to install xbase without rebooting into a bootable installer (I need to do it via a shell command run from Ansible).
Is there a way?
The generic OpenBSD Vagrant image you're using was created as a command-line environment, so the X windows file sets were excluded during the install process.
There are lots of ways to add X windows to OpenBSD after installation, but the quickest method that comes to mind would be:
sudo su -l
curl -LO 'https://ftp.usa.openbsd.org/pub/OpenBSD/6.7/amd64/x{base,serv,font,share}67.tgz'
tar xzf xbase67.tgz -C /
tar xzf xserv67.tgz -C /
tar xzf xfont67.tgz -C /
tar xzf xshare67.tgz -C /
rm -f xbase67.tgz xfont67.tgz xserv67.tgz xshare67.tgz
ldconfig /usr/local/lib /usr/X11R6/lib
If you would like to test for the presence of X windows on OpenBSD, try using the following shell snippet:
if [ -d /usr/X11R6/bin/ ] && [ -f /usr/X11R6/bin/xinit ]; then
    echo "X windows has been installed."
else
    echo "This is a command line only system."
fi
The xbase file set can be extracted manually via the following commands:
cd /
curl -LO https://ftp.usa.openbsd.org/pub/OpenBSD/6.7/amd64/xbase67.tgz
tar xzvf xbase67.tgz
Note: this is the mirror used in the vagrant sources.
If you care about security enough to use OpenBSD, then you really shouldn't grab new package sets from the internet without also checking the hashes/signatures are valid. Try this script:
#!/bin/ksh

echo -n "Downloading ... "
curl --silent --fail --fail-early -O "https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/SHA256.sig" -O "https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/x{base,font,serv,share}70.tgz"
if [ $? != 0 ]; then
    echo "X windows download failed. Terminating."
    exit 1
fi
echo "complete."

signify -Cp /etc/signify/openbsd-70-base.pub -x SHA256.sig xbase70.tgz xfont70.tgz xserv70.tgz xshare70.tgz
if [ $? != 0 ]; then
    echo "X windows signature verification failed. Terminating."
    exit 1
fi

tar -z -x -C / -f xbase70.tgz && tar -z -x -C / -f xfont70.tgz && tar -z -x -C / -f xserv70.tgz && tar -z -x -C / -f xshare70.tgz
if [ $? != 0 ]; then
    echo "X windows installation failed. Terminating."
    exit 1
fi

echo "Installation complete. Happy hacking."
On the other hand, if you just want one-liners:
# Install just the x11 base set.
sudo ksh -c 'curl --silent https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/xbase70.tgz | gzip -d -c | tar -x -C / -f - '
# Install all the x11 sets.
sudo ksh -c 'curl --silent -O "https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/x{base,font,serv,share}70.tgz" && tar -z -x -C / -f xbase70.tgz && tar -z -x -C / -f xfont70.tgz && tar -z -x -C / -f xserv70.tgz && tar -z -x -C / -f xshare70.tgz'
You can omit the sudo portion if you are already logged in as root. And for the vagrant folks, the lazy version looks like this:
# Install just x11 base set from the host, to a vagrant guest.
vagrant ssh -c "sudo ksh -c 'curl --silent https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/xbase70.tgz | gzip -d -c | tar -x -C / -f - '"
# Install all the x11 sets from the host, to a vagrant guest.
vagrant ssh -c "sudo ksh -c 'curl --silent -O \"https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/x{base,font,serv,share}70.tgz\" && tar -z -x -C / -f xbase70.tgz && tar -z -x -C / -f xfont70.tgz && tar -z -x -C / -f xserv70.tgz && tar -z -x -C / -f xshare70.tgz'"
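To tie this back to the original question's Ansible requirement: one option (a sketch; the group name openbsd, the inventory path, and the filename xinstall.sh are assumptions) is to save the verification script above locally and push it out with Ansible's script module, which copies a local script to the target and runs it there:
ansible openbsd -i inventory -b -m script -a xinstall.sh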

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However, it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $
description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"
: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}
depend() {
use logger dns
if [ "${rc_need+set}" = "set" ] ; then
: # Do nothing, the user has explicitly set rc_need
else
local x warn_addr
for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
case "${x}" in
0.0.0.0|0.0.0.0:*) ;;
::|\[::\]*) ;;
*) warn_addr="${warn_addr} ${x}" ;;
esac
done
if [ -n "${warn_addr}" ] ; then
need net
ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
ewarn "where FOO is the interface(s) providing the following address(es):"
ewarn "${warn_addr}"
fi
fi
}
checkconfig() {
if [ ! -d /var/empty ] ; then
mkdir -p /var/empty || return 1
fi
if [ ! -e "${SSHD_CONFIG}" ] ; then
eerror "You need an ${SSHD_CONFIG} file to run sshd"
eerror "There is a sample file in /usr/share/doc/openssh"
return 1
fi
if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
ssh-keygen -A || return 1
fi
[ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
&& SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
[ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
&& SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
"${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}
start() {
checkconfig || return 1
ebegin "Starting ${SVCNAME}"
start-stop-daemon --start --exec "${SSHD_BINARY}" \
--pidfile "${SSHD_PIDFILE}" \
-- ${SSHD_OPTS}
eend $?
}
stop() {
if [ "${RC_CMD}" = "restart" ] ; then
checkconfig || return 1
fi
ebegin "Stopping ${SVCNAME}"
start-stop-daemon --stop --exec "${SSHD_BINARY}" \
--pidfile "${SSHD_PIDFILE}" --quiet
eend $?
if [ "$RC_RUNLEVEL" = "shutdown" ]; then
_sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
if [ -n "$_sshd_pids" ]; then
ebegin "Shutting down ssh connections"
kill -TERM $_sshd_pids >/dev/null 2>&1
eend 0
fi
fi
}
reload() {
checkconfig || return 1
ebegin "Reloading ${SVCNAME}"
start-stop-daemon --signal HUP \
--exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
eend $?
}
A container is not a fully installed environment.
The official documentation is for Alpine installed on a real machine, which has a power-on boot sequence, startup services, and so on; a container has none of that.
So nothing in /etc/init.d/ can be used directly in a container: those scripts are run by the boot-up service manager (systemd, or Alpine's rc*), and you got that error message because the rc* tooling isn't installed in the container.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
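For reference, a minimal sketch of starting sshd manually (the alpine:3.12 tag is an assumption; the package name and sshd flags match the answers below):
FROM alpine:3.12
# Install the server and generate host keys at build time
RUN apk add --no-cache openssh-server && \
    ssh-keygen -A
EXPOSE 22
# -D keeps sshd in the foreground as PID 1, -e logs to stderr for docker logs
CMD ["/usr/sbin/sshd", "-D", "-e"]
You would still have to set up authentication (an authorized key or a password) before you can actually log in.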
Although some details are still not clear to me, let me add my voice to the discussion. The configuration below works for me; it's the result of arduous experimentation.
First, the Dockerfile:
FROM alpine
RUN apk update && \
    apk add --no-cache sudo bash openrc openssh
RUN mkdir -p /run/openrc && \
    touch /run/openrc/softlevel && \
    rc-update add sshd default
RUN adduser --disabled-password reguser && \
    sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
    sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers
VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]
USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution
ADD ./entrypoint.sh /home/reguser/solution/
EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh:
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
    case "$f" in
        *.sh) echo "$0: running $f"; . "$f" ;;
        *)    echo "$0: ignoring $f" ;;
    esac
    echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh:
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are the heart of the trick. In particular, sudo /etc/init.d/sshd --dry-run start is what makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerRegUser/sshdImg:0.0.1' --file='./dockerfile' .
docker container create --tty \
--volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
--name sshdCnt 'dockerRegUser/sshdImg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
sleep 5 && \
ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs here. The example also goes against the single-service docker container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt) extending a container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try to run these commands:
apk add --no-cache openrc
rc-update add sshd
First check that sshd is actually present in /usr/bin or /usr/sbin.
Then, init.d will contain an sshd service only if you set it up to start automatically with:
rc-update add sshd
rc-status
I needed sshd for a very specific reason: I had to run front-end (Cypress) and back-end (Django) tests on a CI server. Running them in one container is tricky to say the least, so I decided to go with two containers. Also, there had to be a single entrypoint that would run the tests in both containers. So the idea was that one container runs its tests, then runs the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting empty root password, empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A
passwd -d root
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''
cp ~/.ssh/id_rsa.pub .
wait4ports -s 1 tcp://c1:22
ssh-keyscan -t rsa c1 > ~/.ssh/known_hosts
ssh c1 echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server in your Docker container with Alpine, try this Dockerfile.
In this example, I am using the docker:dind image:
FROM docker:dind

# Setup SSH service
RUN apk update && \
    apk add openrc --no-cache && \
    apk add openssh-server && \
    rc-update add sshd && \
    rc-status && \
    touch /run/openrc/softlevel

# Expose port for ssh
EXPOSE 22

# Start SSH service
CMD ["sh", "-c", "service sshd restart && sh"]
Once your container is up and running try running this command to make sure ssh works fine:
ssh localhost

Installing a custom RPM hangs when trying to run a post-install script

I'm building an RPM with my own code and am running into an issue where the RPM hangs when I install it with yum, if I try to launch something as part of the post-install script. For some background, this is a virtual appliance where we've replaced the shell with a program with minimal configuration options. Previously, I couldn't get the program to load via the post-install options in my spec file; it causes the 'installing' line during a yum install to hang. I put a bandaid on it by having a reboot occur, but I need this menu to load after installation, and everything I've tried results in yum hanging on the 'installing' line. My latest attempt, using the trap command, causes the same hang. Does anyone have any other ideas?
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}
%description
Installs the program.
%prep
%setup -q
%install
rsync -ar /home/makerpm/rpmbuild/BUILD/%{name}-%{version}/ /home/makerpm/rpmbuild/BUILDROOT/%{name}-%{version}-%{release}.x86_64
find $RPM_BUILD_ROOT -not -type d -printf "%%%attr(%%m,root,root) %%p\n" | sed -e "s|$RPM_BUILD_ROOT||g" > %{_tmppath}/%{name}_contents.txt
%clean
rm -rf ${RPM_BUILD_ROOT}
rm -rf %{_tmppath}/*
rm -rf ${RPM_BUILD_DIR}/*
%changelog
* Sun Jan 25 2014 - %{version} - Modified menu with trap ability so the first reboot isn't required.
%post
function finish {
    /bin/launcher
}
chmod 755 /bin/launcher
cat /etc/shells | grep "/bin/launcher"
if [ $? -eq 1 ]; then
    echo "/bin/bash" >> /etc/shells
fi
chsh -s /bin/launcher root
trap finish EXIT
#List of all files to be extracted
%files -f %{_tmppath}/%{name}_contents.txt

Detect host operating system distro in chef-solo deploy bash script

When deploying a chef-solo setup, you need to switch between using sudo or not, e.g.:
bash install.sh
and
sudo bash install.sh
depending on the distro on the host server. How can this be automated?
Ohai already populates these attributes, and they are readily available in your recipe.
For example:
"platform": "centos",
"platform_version": "6.4",
"platform_family": "rhel",
You can reference these as:
if node[:platform_family].include?("rhel")
...
end
To see what other attributes ohai sets, just type
ohai
on the command line.
You can detect the distro on the remote host and deploy accordingly. In deploy.sh:
DISTRO=`ssh -o 'StrictHostKeyChecking no' ${host} 'bash -s' < bootstrap.sh`
The DISTRO variable is populated by whatever the bootstrap.sh script echoes; that script runs on the host machine. So we can use bootstrap.sh to detect the distro (or any other server settings we need), and whatever it echoes bubbles up to the local script, which can respond accordingly.
example deploy.sh:
#!/bin/bash
# Usage: ./deploy.sh [host]
host="${1}"
if [ -z "$host" ]; then
echo "Please provide a host - eg: ./deploy root#my-server.com"
exit 1
fi
echo "deploying to ${host}"
# The host key might change when we instantiate a new VM, so
# we remove (-R) the old host key from known_hosts
ssh-keygen -R "${host#*@}" 2> /dev/null
# rough test for what distro the server is on
DISTRO=`ssh -o 'StrictHostKeyChecking no' ${host} 'bash -s' < bootstrap.sh`
if [ "$DISTRO" == "FED" ]; then
echo "Detected a Fedora, RHEL, CentOS distro on host"
tar cjh . | ssh -o 'StrictHostKeyChecking no' "$host" '
rm -rf /tmp/chef &&
mkdir /tmp/chef &&
cd /tmp/chef &&
tar xj &&
bash install.sh'
elif [ "$DISTRO" == "DEB" ]; then
echo "Detected a Debian, Ubuntu distro on host"
tar cj . | ssh -o 'StrictHostKeyChecking no' "$host" '
sudo rm -rf ~/chef &&
mkdir ~/chef &&
cd ~/chef &&
tar xj &&
sudo bash install.sh'
fi
example bootstrap.sh:
#!/bin/bash
# Fedora/RHEL/CentOS distro
if [ -f /etc/redhat-release ]; then
    echo "FED"
# Debian/Ubuntu
elif [ -r /lib/lsb/init-functions ]; then
    echo "DEB"
fi
This will allow you to detect the platform very early in the deploy process.
