Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However, it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $
description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"
: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}
depend() {
    use logger dns
    if [ "${rc_need+set}" = "set" ] ; then
        : # Do nothing, the user has explicitly set rc_need
    else
        local x warn_addr
        for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
            case "${x}" in
                0.0.0.0|0.0.0.0:*) ;;
                ::|\[::\]*) ;;
                *) warn_addr="${warn_addr} ${x}" ;;
            esac
        done
        if [ -n "${warn_addr}" ] ; then
            need net
            ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
            ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
            ewarn "where FOO is the interface(s) providing the following address(es):"
            ewarn "${warn_addr}"
        fi
    fi
}

checkconfig() {
    if [ ! -d /var/empty ] ; then
        mkdir -p /var/empty || return 1
    fi
    if [ ! -e "${SSHD_CONFIG}" ] ; then
        eerror "You need an ${SSHD_CONFIG} file to run sshd"
        eerror "There is a sample file in /usr/share/doc/openssh"
        return 1
    fi
    if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
        ssh-keygen -A || return 1
    fi
    [ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
        && SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
    [ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
        && SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
    "${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}

start() {
    checkconfig || return 1
    ebegin "Starting ${SVCNAME}"
    start-stop-daemon --start --exec "${SSHD_BINARY}" \
        --pidfile "${SSHD_PIDFILE}" \
        -- ${SSHD_OPTS}
    eend $?
}

stop() {
    if [ "${RC_CMD}" = "restart" ] ; then
        checkconfig || return 1
    fi
    ebegin "Stopping ${SVCNAME}"
    start-stop-daemon --stop --exec "${SSHD_BINARY}" \
        --pidfile "${SSHD_PIDFILE}" --quiet
    eend $?
    if [ "$RC_RUNLEVEL" = "shutdown" ]; then
        _sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
        if [ -n "$_sshd_pids" ]; then
            ebegin "Shutting down ssh connections"
            kill -TERM $_sshd_pids >/dev/null 2>&1
            eend 0
        fi
    fi
}

reload() {
    checkconfig || return 1
    ebegin "Reloading ${SVCNAME}"
    start-stop-daemon --signal HUP \
        --exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
    eend $?
}

A container is not a fully installed environment.
The official documentation is for Alpine installed on an actual machine, with power-on, boot-up services, etc., which a container does not have.
So nothing in /etc/init.d/ can be used directly in a container: those scripts are run by the boot service manager (systemd, or Alpine's OpenRC). That's also why the file exists yet you get "not found": its shebang line is #!/sbin/openrc-run, and since OpenRC isn't installed in the container, the shell can't find that interpreter.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
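If all you need is a running sshd (and not the whole OpenRC machinery), a minimal manual start looks like this; a sketch only, with -D keeping sshd in the foreground so the container stays alive:
apk add --no-cache openssh
ssh-keygen -A                 # generate the host keys sshd refuses to start without
/usr/sbin/sshd -D -e          # -D: don't daemonize, -e: log to stderr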

Although some details are still not clear to me, let me add my voice to the discussion. The solution given by the configuration below works for me; it's the result of arduous experiments.
First, the Dockerfile:
FROM alpine

RUN apk update && \
    apk add --no-cache sudo bash openrc openssh

RUN mkdir -p /run/openrc && \
    touch /run/openrc/softlevel && \
    rc-update add sshd default

RUN adduser --disabled-password reguser && \
    sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
    sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers

VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]

USER reguser
WORKDIR /home/reguser

RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution

ADD ./entrypoint.sh /home/reguser/solution/

EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
    case "$f" in
        *.sh) echo "$0: running $f"; . "$f" ;;
        *)    echo "$0: ignoring $f" ;;
    esac
    echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMotd\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start is what makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerreguser/sshdimg:0.0.1' --file='./dockerfile' .
docker container create --tty \
    --volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
    --name sshdCnt 'dockerreguser/sshdimg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
    ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
    sleep 5 && \
    ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs here. The example also goes against the single-service-per-container principle. But there are phases and situations in a solution's development and delivery lifecycle that justify (or at least tempt one into) extending a container with sshd or other OpenRC-controlled services.

/etc/init.d/sshd: not found
Try to run these commands:
apk add --no-cache openrc
rc-update add sshd
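Note that inside a container OpenRC has never gone through a boot, so you may also have to fake its "softlevel" state before it will start anything; a sketch, matching what the Dockerfile in the answer above does:
mkdir -p /run/openrc
touch /run/openrc/softlevel   # pretend a runlevel has been reached
rc-service sshd start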

Check first whether sshd is present in /usr/bin or /usr/sbin.
Then, /etc/init.d should contain sshd only if you set it up to start automatically with:
rc-update add sshd
rc-status

I needed sshd for a very specific reason. I had to run front-end (Cypress) and back-end (Django) tests on a CI server. Running them in one container is tricky to say the least, so I decided to go with two containers. Also, there had to be one entrypoint that would run the tests in both containers. So the idea was that one container runs its tests, then runs the tests in the other container over ssh.
In your case you might not want to do exactly as I did, e.g. setting an empty root password and an empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A                                # generate host keys
passwd -d root                               # empty root password
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done  # wait for the client to drop its public key
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De                           # foreground, log to stderr
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''            # key pair with an empty passphrase
cp ~/.ssh/id_rsa.pub .                       # share the public key via the common volume
wait4ports -s 1 tcp://server:22              # wait for sshd ("server" is the compose service name)
ssh-keyscan -t rsa server > ~/.ssh/known_hosts
ssh server echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f

If you want to set up an OpenSSH server in your Alpine-based Docker container, try this Dockerfile.
In this example I am using the docker:dind image.
FROM docker:dind
# Setup SSH Service
RUN \
    apk update && \
    apk add openrc --no-cache && \
    apk add openssh-server && \
    rc-update add sshd && \
    rc-status && \
    touch /run/openrc/softlevel
# Expose port for ssh
EXPOSE 22
# Start SSH Service
CMD ["sh" , "-c", "service sshd restart && sh"]
Once your container is up and running, try running this command inside it to make sure ssh works fine:
ssh localhost
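For example, from the host side it could look like this (the image tag is hypothetical, and you would still need to set a root password or install an authorized key before an external login succeeds):
docker build -t dind-sshd .
docker run -d -p 2222:22 dind-sshd   # publish container port 22 on host port 2222
ssh root@localhost -p 2222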

Related

Docker run is unable to locate the directory

I've built a docker image, docker build -t dockeragent:latest ., but can't seem to get the container to run. The command docker run -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest produces the following error: exec ./start.sh: no such file or directory.
I understand that the start.sh script is called by the Dockerfile, and I've ensured that the Dockerfile is in the same directory as the start.sh script. I've also tested referencing the start.sh script with interpolation pointing to its absolute path. Example:
ENTRYPOINT [ "${pwd}/start.sh" ]
Any ideas on what parameter has been misconfigured? The files are copied directly from Microsoft's guide on building self-hosted agents with Docker.
For reference, please see the below Dockerfile and associated start.sh
Dockerfile
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
    apt-transport-https \
    apt-utils \
    ca-certificates \
    curl \
    git \
    iputils-ping \
    jq \
    lsb-release \
    software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
start.sh
#!/bin/bash
set -e

if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    # If the agent has some running jobs, the configuration removal process will fail.
    # So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

print_header "1. Determining matching Azure Pipelines agent..."

AZP_AGENT_PACKAGES=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")

AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')

if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
  echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi

print_header "2. Downloading and extracting Azure Pipelines agent..."

curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!

source ./env.sh

print_header "3. Configuring Azure Pipelines agent..."

./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!

print_header "4. Running Azure Pipelines agent..."

trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

chmod +x ./run-docker.sh

# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
Thanks in advance!
Check the Dockerfile and start.sh file. The settings themselves are correct.
Refer to this doc: Linux Docker container agent
It says: "Save the following content to ~/dockeragent/start.sh, making sure to use Unix-style (LF) line endings."
start.sh needs Linux LF line endings when creating a Linux Docker container agent.
When you create start.sh on a Windows system, it will use Windows CRLF line endings; the shebang line then effectively asks for the interpreter "/bin/bash\r", which does not exist, hence "no such file or directory" for a file that is plainly there.
You can convert the start.sh file from Windows CRLF to Linux LF on this online site: LF and CRLF converter online. Then you can run the same command to create the pipeline agent.
Or you can directly create the file on a Linux system.
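If you prefer converting locally rather than using an online site, either of these should work (a sketch; dos2unix may need installing first):
file start.sh                 # reports "with CRLF line terminators" if affected
dos2unix start.sh             # convert in place
sed -i 's/\r$//' start.sh     # equivalent using sed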
You can also override the entrypoint and exec into the container to check whether start.sh is actually there (note that --entrypoint must come before the image name):
docker run -it -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent --entrypoint sh dockeragent:latest

How to check if user has sudo privileges inside the bash script?

I would like to check whether the user has sudo privileges. This is an approximate example of what I am trying to do; I am trying to get it to work across the following OSes: CentOS, Ubuntu, Arch.
if userIsSudo; then
    chsh -s $(which zsh)
fi
Try with this:
$ sudo -v &> /dev/null && echo "Sudoer" || echo "Not sudoer"
Also, I don't know how reliable it would be to search for the user's membership in the sudo group, i.e.:
$ groups "$(id -un)" \
| grep -q ' sudo ' \
&& echo In sudo group \
|| echo Not in sudo group
Or:
$ getent group sudo \
| grep -qE "(:|,)$(id -un)(,|$)" \
&& echo in sudo group \
|| echo not in sudo group
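A variant of the same membership idea that avoids the fragile space-padded grep is to let id print the groups and match a whole word; note the admin group is typically wheel rather than sudo on CentOS and Arch:
id -nG "$(id -un)" | tr ' ' '\n' | grep -qx 'sudo' \
&& echo In sudo group \
|| echo Not in sudo group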
sudo -l will display the commands that the user can run with sudo privileges. If there are no commands that can be run, sudo -l will return an error code and so you could try:
sudo -l && chsh -s $(which zsh)
Usually when you run a script you want to know whether it ended well, or whether there was an error and what kind of error it was.
This is a more elaborate snippet, sudoer-script.sh:
## Define error code
E_NOTROOT=87 # Non-root exit error.

## check if the user is a sudoer
if ! sudo -l &> /dev/null; then
    echo 'Error: root privileges are needed to run this script'
    exit $E_NOTROOT
fi

## do something else here;
## exit 0 means it was executed successfully
exit 0
Now you can reuse your script, pipe it, or concatenate it with other commands:
./sudoer-script.sh && ls
## in a script
if ./sudoer-script.sh; then
    echo 'success'
fi
## capture the error
stderr=$(./sudoer-script.sh 2>&1 > /dev/null)
echo "$stderr"
As a function:
is_sudoer() {
    ## Define error code
    E_NOTROOT=87 # Non-root exit error.

    ## check if the user is a sudoer
    if ! sudo -l &> /dev/null; then
        echo 'Error: root privileges are needed to run this script'
        return $E_NOTROOT
    fi
    return 0
}

if is_sudoer; then
    echo "Sudoer"
else
    echo "Not sudoer"
fi
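Combining these ideas, a non-interactive variant with sudo -n avoids hanging on a password prompt; a sketch only:
if sudo -n true 2> /dev/null; then
    # -n makes sudo fail instead of prompting for a password
    chsh -s "$(which zsh)"
else
    echo 'no sudo privileges (or a password would be required)' >&2
fi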

X11 Display variable is not set - can't run Docker Image

I made a Docker image of JMeter because I want to run it remotely (and from a cloud). When I run the image I get the error: 'No X11 DISPLAY variable was set, but this program performed an operation which requires it.'
I've updated the ssh_config file and the sshd_config file (as mentioned in similar questions) but it still doesn't work.
My DISPLAY variable is set to localhost:10.0. It's maybe useful to know that I am doing this in a VM on Ubuntu 19.04.
Thanks for your help.
After a few hours searching I found the solution: (credit)
My setup is Ubuntu 18.04, LXDE, and this Docker build.
I modified the run script like this:
#!/bin/bash
#
# Run JMeter Docker image with options

NAME="jmeter"
JMETER_VERSION=${JMETER_VERSION:-"5.4"}
IMAGE="justb4/jmeter:${JMETER_VERSION}"

# Finally run
xhost +
docker run -e DISPLAY=$DISPLAY --rm --name ${NAME} -i -v ${PWD}:${PWD} -v /tmp/.X11-unix:/tmp/.X11-unix:ro -w ${PWD} ${IMAGE} "$@"
xhost -
This works, and in terms of effort it's much less than the alternative methods (VNC, etc.).
You should declare this DISPLAY variable using ENV command like:
ENV DISPLAY :10
But be aware that you need to have a display server, at least Xvfb.
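For instance, a virtual display matching that variable can be started inside the container like this (a sketch; the screen geometry is arbitrary):
Xvfb :10 -screen 0 1280x1024x24 &   # headless X server on display :10
export DISPLAY=:10
jmeter                              # GUI programs now find a display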
So running the JMeter GUI in a Docker container is possible, but you will have to treat it like a normal Linux desktop; it can be a minimal one like Xfce.
Here is an example Dockerfile which downloads JMeter 5.1.1, installs a virtual desktop, and makes it available via VNC and RDP:
FROM alpine:edge
ENV DISPLAY :99
ENV RESOLUTION 1366x768x24
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& apk add --no-cache curl xfce4-terminal xvfb x11vnc xfce4 openjdk8-jre bash xrdp \
&& curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.1.1.tgz > /tmp/jmeter.tgz \
&& tar -xvf /tmp/jmeter.tgz -C /opt \
&& rm /tmp/jmeter.tgz \
&& curl -L https://jmeter-plugins.org/get/ > /opt/apache-jmeter-5.1.1/lib/ext/jmeter-plugins-manager.jar \
&& echo "[Globals]" > /etc/xrdp/xrdp.ini \
&& echo "bitmap_cache=true" >> /etc/xrdp/xrdp.ini \
&& echo "bitmap_compression=true" >> /etc/xrdp/xrdp.ini \
&& echo "autorun=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "[jmeter]" >> /etc/xrdp/xrdp.ini \
&& echo "name=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "lib=libvnc.so" >> /etc/xrdp/xrdp.ini \
&& echo "ip=localhost" >> /etc/xrdp/xrdp.ini \
&& echo "port=5900" >> /etc/xrdp/xrdp.ini \
&& echo "username=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "password=" >> /etc/xrdp/xrdp.ini
EXPOSE 5900
EXPOSE 3389
CMD ["bash", "-c", "rm -f /tmp/.X99-lock && rm -f /var/run/xrdp.pid\
&& nohup bash -c \"/usr/bin/Xvfb :99 -screen 0 ${RESOLUTION} -ac +extension GLX +render -noreset && export DISPLAY=99 > /dev/null 2>&1 &\"\
&& nohup bash -c \"startxfce4 > /dev/null 2>&1 &\"\
&& nohup bash -c \"x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :99 -forever -bg -nopw -rfbport 5900 > /dev/null 2>&1\"\
&& nohup bash -c \"xrdp > /dev/null 2>&1\"\
&& nohup bash -c \"/opt/apache-jmeter-5.1.1/bin/./jmeter -Jjmeter.laf=CrossPlatform > /dev/null 2>&1 &\"\
&& tail -f /dev/null"]
You can build it like:
docker build -t jmeter .
and once done kick off the container using Docker run command like:
docker run -p 5900:5900 -p 3389:3389 jmeter
You might also find Make Use of Docker with JMeter - Learn How guide useful.
There is NO solution for Docker images: Docker does not provide a GUI out of the box, and that is why I was getting this error. So if you are working with Docker and get this error, either ignore it or update your image to a non-GUI one.
Cheers

Creating a docker Base Image

I have a private Linux distribution (based on RedHat 7).
I have an ISO file which holds the installation of that distribution; it can be used to install the OS on a clean system only.
I have some programs I would like to run as images on Docker, each program in a different image.
Each program can only run in my Linux environment, so I am looking for a way to create the appropriate images so they can be run under Docker.
I tried following Solomon's instructions here:
mkdir rootfs
mount -o loop /path/to/iso rootfs
tar -C rootfs -c . | docker import - rich/mybase
But I don't know how to proceed. I can't run any commands, since the machine isn't running yet (there is no /bin/bash, etc.).
How can I open the installation shell?
Is there a better way to run programs via docker on a private Linux distribution?
(Just to be clear, the programs can run only on that specific OS, and that OS can only be installed on a clean machine. I'm not sure whether I need a base image, but I'd like to run these programs with Docker, and that is possible only on this OS.)
I ran into many questions like mine (like this one) but I couldn't find an answer that helped me.
Assumptions:
Server A: where the ISO will be mounted
Server R: your private repository
Server N: where the container will be run
All servers can connect to Server R.
How to:
Build a base image as mentioned in your OP (named base/myimage).
Push the image to your private repository: https://docs.docker.com/registry/deploying/
Create application images FROM your base/myimage, then push them to your private repo.
From Server N, run the application image:
docker run application/myapp
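For instance, with a plain self-hosted registry the push/pull flow could look like this (the hostname and port are hypothetical):
# on Server A: tag and push the base image to Server R
docker tag base/myimage server-r:5000/base/myimage
docker push server-r:5000/base/myimage
# on Server N: pull and run the application image
docker run server-r:5000/application/myapp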
This script is from the official Docker contrib repo. It's used to create CentOS images from scratch. It should work with any RedHat/CentOS-based system and gives you plenty of control over the various steps. Anything beyond that you can modify after the base image exists, through a Dockerfile.
The file is here
#!/usr/bin/env bash
#
# Create a base CentOS Docker image.
#
# This script is useful on systems with yum installed (e.g., building
# a CentOS image on CentOS). See contrib/mkimage-rinse.sh for a way
# to build CentOS images on other systems.

usage() {
    cat <<EOOPTS
$(basename $0) [OPTIONS] <name>
OPTIONS:
  -p "<packages>"  The list of packages to install in the container.
                   The default is blank.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core".
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora
EOOPTS
    exit 1
}

# option defaults
yum_config=/etc/yum.conf
if [ -f /etc/dnf/dnf.conf ] && command -v dnf &> /dev/null; then
    yum_config=/etc/dnf/dnf.conf
    alias yum=dnf
fi
install_groups="Core"
while getopts ":y:p:g:h" opt; do
    case $opt in
        y)
            yum_config=$OPTARG
            ;;
        h)
            usage
            ;;
        p)
            install_packages="$OPTARG"
            ;;
        g)
            install_groups="$OPTARG"
            ;;
        \?)
            echo "Invalid option: -$OPTARG"
            usage
            ;;
    esac
done
shift $((OPTIND - 1))
name=$1

if [[ -z $name ]]; then
    usage
fi

target=$(mktemp -d --tmpdir $(basename $0).XXXXXX)

set -x

mkdir -m 755 "$target"/dev
mknod -m 600 "$target"/dev/console c 5 1
mknod -m 600 "$target"/dev/initctl p
mknod -m 666 "$target"/dev/full c 1 7
mknod -m 666 "$target"/dev/null c 1 3
mknod -m 666 "$target"/dev/ptmx c 5 2
mknod -m 666 "$target"/dev/random c 1 8
mknod -m 666 "$target"/dev/tty c 5 0
mknod -m 666 "$target"/dev/tty0 c 4 0
mknod -m 666 "$target"/dev/urandom c 1 9
mknod -m 666 "$target"/dev/zero c 1 5

# amazon linux yum will fail without vars set
if [ -d /etc/yum/vars ]; then
    mkdir -p -m 755 "$target"/etc/yum
    cp -a /etc/yum/vars "$target"/etc/yum/
fi

if [[ -n "$install_groups" ]]; then
    yum -c "$yum_config" --installroot="$target" --releasever=/ --setopt=tsflags=nodocs \
        --setopt=group_package_types=mandatory -y groupinstall $install_groups
fi

if [[ -n "$install_packages" ]]; then
    yum -c "$yum_config" --installroot="$target" --releasever=/ --setopt=tsflags=nodocs \
        --setopt=group_package_types=mandatory -y install $install_packages
fi

yum -c "$yum_config" --installroot="$target" -y clean all

cat > "$target"/etc/sysconfig/network <<EOF
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOF

# effectively: febootstrap-minimize --keep-zoneinfo --keep-rpmdb --keep-services "$target".
# locales
rm -rf "$target"/usr/{{lib,share}/locale,{lib,lib64}/gconv,bin/localedef,sbin/build-locale-archive}
# docs and man pages
rm -rf "$target"/usr/share/{man,doc,info,gnome/help}
# cracklib
rm -rf "$target"/usr/share/cracklib
# i18n
rm -rf "$target"/usr/share/i18n
# yum cache
rm -rf "$target"/var/cache/yum
mkdir -p --mode=0755 "$target"/var/cache/yum
# sln
rm -rf "$target"/sbin/sln
# ldconfig
rm -rf "$target"/etc/ld.so.cache "$target"/var/cache/ldconfig
mkdir -p --mode=0755 "$target"/var/cache/ldconfig

version=
for file in "$target"/etc/{redhat,system}-release; do
    if [ -r "$file" ]; then
        version="$(sed 's/^[^0-9\]*\([0-9.]\+\).*$/\1/' "$file")"
        break
    fi
done

if [ -z "$version" ]; then
    echo >&2 "warning: cannot autodetect OS version, using '$name' as tag"
    version=$name
fi

tar --numeric-owner -c -C "$target" . | docker import - $name:$version

docker run -i -t --rm $name:$version /bin/bash -c 'echo success'

rm -rf "$target"

Detect host operating system distro in chef-solo deploy bash script

When deploying a chef-solo setup, you need to switch between using sudo or not, e.g.:
bash install.sh
and
sudo bash install.sh
depending on the distro of the host server. How can this be automated?
Ohai already populates these attributes, and they are readily available in your recipes.
For example:
"platform": "centos",
"platform_version": "6.4",
"platform_family": "rhel",
You can reference these as:
if node[:platform_family].include?("rhel")
  ...
end
To see what other attributes ohai sets, just type
ohai
on the command line.
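For example, you can ask ohai for a single attribute instead of the full JSON dump (output shown for a RHEL-family host):
ohai platform_family
# => [
#      "rhel"
#    ]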
You can detect the distro on the remote host and deploy accordingly. In deploy.sh:
DISTRO=`ssh -o 'StrictHostKeyChecking no' ${host} 'bash -s' < bootstrap.sh`
The DISTRO variable is populated by whatever the bootstrap.sh script echoes; that script runs on the host machine, so we can use bootstrap.sh to detect the distro (or any other server settings we need) and echo the result, which bubbles up to the local script so it can respond accordingly.
example deploy.sh:
#!/bin/bash
# Usage: ./deploy.sh [host]

host="${1}"
if [ -z "$host" ]; then
    echo "Please provide a host - eg: ./deploy root@my-server.com"
    exit 1
fi

echo "deploying to ${host}"

# The host key might change when we instantiate a new VM, so
# we remove (-R) the old host key from known_hosts
ssh-keygen -R "${host#*@}" 2> /dev/null

# rough test for what distro the server is on
DISTRO=`ssh -o 'StrictHostKeyChecking no' ${host} 'bash -s' < bootstrap.sh`

if [ "$DISTRO" == "FED" ]; then
    echo "Detected a Fedora, RHEL, CentOS distro on host"
    tar cjh . | ssh -o 'StrictHostKeyChecking no' "$host" '
        rm -rf /tmp/chef &&
        mkdir /tmp/chef &&
        cd /tmp/chef &&
        tar xj &&
        bash install.sh'
elif [ "$DISTRO" == "DEB" ]; then
    echo "Detected a Debian, Ubuntu distro on host"
    tar cj . | ssh -o 'StrictHostKeyChecking no' "$host" '
        sudo rm -rf ~/chef &&
        mkdir ~/chef &&
        cd ~/chef &&
        tar xj &&
        sudo bash install.sh'
fi
example bootstrap.sh:
#!/bin/bash
# Fedora/RHEL/CentOS distro
if [ -f /etc/redhat-release ]; then
    echo "FED"
# Debian/Ubuntu
elif [ -r /lib/lsb/init-functions ]; then
    echo "DEB"
fi
This will allow you to detect the platform very early in the deploy process.
