X11 DISPLAY variable is not set - can't run Docker image - Linux

I made a Docker image of JMeter because I want to run it remotely (and from a cloud). When I run the image I get the error: 'No X11 DISPLAY variable was set, but this program performed an operation which requires it.'
I've updated the ssh_config file and the sshd_config file (as mentioned in similar questions), but it still doesn't work.
And my DISPLAY variable is set to localhost:10.0. It may be useful to know that I am doing this in a VM running Ubuntu 19.04.
Thanks for your help.

After a few hours of searching I found the solution: (credit)
My setup is Ubuntu 18.04, LXDE, and this Docker build.
I modified the run script like this:
#!/bin/bash
#
# Run JMeter Docker image with options
NAME="jmeter"
JMETER_VERSION=${JMETER_VERSION:-"5.4"}
IMAGE="justb4/jmeter:${JMETER_VERSION}"
# Finally run
xhost +
docker run -e DISPLAY=$DISPLAY --rm --name ${NAME} -i -v ${PWD}:${PWD} -v /tmp/.X11-unix:/tmp/.X11-unix:ro -w ${PWD} ${IMAGE} "$@"
xhost -
This works, and in terms of effort it's much less than the alternative methods (VNC, etc.).
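For reference, assuming the script above is saved as run.sh next to your test plan (the .jmx file name below is a placeholder), launching the GUI looks like:
chmod +x run.sh
./run.sh -t MyTestPlan.jmx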

You should declare this DISPLAY variable using the ENV command, like:
ENV DISPLAY :10
But be aware that you need to have a display server, at least Xvfb.
So running the JMeter GUI in a Docker container is possible, but you will have to treat it like a normal Linux desktop; it can be a minimal one like Xfce.
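If you only need the GUI to start rather than to interact with it, a lighter sketch is to run JMeter under a virtual framebuffer, assuming the xvfb-run wrapper (shipped with Xvfb on most distributions) is installed in the image:
# Start an off-screen X server and run the JMeter GUI against it
xvfb-run --server-args="-screen 0 1366x768x24" jmeter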
Here is an example Dockerfile which downloads JMeter 5.1.1, installs a virtual desktop, and makes it available via VNC and RDP:
FROM alpine:edge
ENV DISPLAY :99
ENV RESOLUTION 1366x768x24
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& apk add --no-cache curl xfce4-terminal xvfb x11vnc xfce4 openjdk8-jre bash xrdp \
&& curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.1.1.tgz > /tmp/jmeter.tgz \
&& tar -xvf /tmp/jmeter.tgz -C /opt \
&& rm /tmp/jmeter.tgz \
&& curl -L https://jmeter-plugins.org/get/ > /opt/apache-jmeter-5.1.1/lib/ext/jmeter-plugins-manager.jar \
&& echo "[Globals]" > /etc/xrdp/xrdp.ini \
&& echo "bitmap_cache=true" >> /etc/xrdp/xrdp.ini \
&& echo "bitmap_compression=true" >> /etc/xrdp/xrdp.ini \
&& echo "autorun=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "[jmeter]" >> /etc/xrdp/xrdp.ini \
&& echo "name=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "lib=libvnc.so" >> /etc/xrdp/xrdp.ini \
&& echo "ip=localhost" >> /etc/xrdp/xrdp.ini \
&& echo "port=5900" >> /etc/xrdp/xrdp.ini \
&& echo "username=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "password=" >> /etc/xrdp/xrdp.ini
EXPOSE 5900
EXPOSE 3389
CMD ["bash", "-c", "rm -f /tmp/.X99-lock && rm -f /var/run/xrdp.pid\
&& nohup bash -c \"/usr/bin/Xvfb :99 -screen 0 ${RESOLUTION} -ac +extension GLX +render -noreset && export DISPLAY=99 > /dev/null 2>&1 &\"\
&& nohup bash -c \"startxfce4 > /dev/null 2>&1 &\"\
&& nohup bash -c \"x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :99 -forever -bg -nopw -rfbport 5900 > /dev/null 2>&1\"\
&& nohup bash -c \"xrdp > /dev/null 2>&1\"\
&& nohup bash -c \"/opt/apache-jmeter-5.1.1/bin/./jmeter -Jjmeter.laf=CrossPlatform > /dev/null 2>&1 &\"\
&& tail -f /dev/null"]
You can build it like:
docker build -t jmeter .
and once done, kick off the container using the docker run command like:
docker run -p 5900:5900 -p 3389:3389 jmeter
You might also find the Make Use of Docker with JMeter - Learn How guide useful.

There is NO solution for Docker images, because Docker does not support a GUI, and that is why I am getting this error. So if you are working with Docker and you are getting this error, just ignore it or update your image to be non-GUI only.
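For completeness, the non-GUI invocation that avoids the X11 requirement entirely looks like this (test plan and result file names are placeholders):
jmeter -n -t TestPlan.jmx -l results.jtl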
Cheers

Related

Docker run is unable to locate the directory

I've built a Docker image with docker build -t dockeragent:latest . but can't seem to get the container to run. The command docker run -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest produces the following error: exec ./start.sh: no such file or directory.
I understand that the start.sh script is called by the Dockerfile, and I've ensured that the Dockerfile is in the same directory as the start.sh script. I've also tested referencing the start.sh script by using interpolation to point to its absolute path. Example:
ENTRYPOINT [ "${pwd}/start.sh" ]
Any ideas on what parameter has been misconfigured? The files are copied directly from Microsoft's guide on building self-hosted agents with Docker.
For reference, please see the below Dockerfile and associated start.sh
Dockerfile
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
apt-transport-https \
apt-utils \
ca-certificates \
curl \
git \
iputils-ping \
jq \
lsb-release \
software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
start.sh
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
echo 1>&2 "error: missing AZP_URL environment variable"
exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
if [ -z "$AZP_TOKEN" ]; then
echo 1>&2 "error: missing AZP_TOKEN environment variable"
exit 1
fi
AZP_TOKEN_FILE=/azp/.token
echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
mkdir -p "$AZP_WORK"
fi
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
if [ -e config.sh ]; then
print_header "Cleanup. Removing Azure Pipelines agent..."
# If the agent has some running jobs, the configuration removal process will fail.
# So, give it some time to finish the job.
while true; do
./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
echo "Retrying in 30 seconds..."
sleep 30
done
fi
}
print_header() {
lightcyan='\033[1;36m'
nocolor='\033[0m'
echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_PACKAGES=$(curl -LsS \
-u user:$(cat "$AZP_TOKEN_FILE") \
-H 'Accept:application/json;' \
"$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")
AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')
if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
exit 1
fi
print_header "2. Downloading and extracting Azure Pipelines agent..."
curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
--agent "${AZP_AGENT_NAME:-$(hostname)}" \
--url "$AZP_URL" \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE") \
--pool "${AZP_POOL:-Default}" \
--work "${AZP_WORK:-_work}" \
--replace \
--acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
chmod +x ./run-docker.sh
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
Thanks in advance!
Check the Dockerfile and the start.sh file. The settings themselves should be correct.
Refer to this doc: Linux Docker container agent
Save the following content to ~/dockeragent/start.sh, making sure to use Unix-style (LF) line endings:
The start.sh needs to use Linux LF line endings when creating a Linux Docker container agent.
When you create start.sh on a Windows system, it will use Windows CRLF line endings.
You can convert the start.sh file from Windows CRLF to Linux LF on this online site: LF and CRLF converter online. Then you can run the same command to create the pipeline agent.
Or you can create the files directly on a Linux system.
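As a sketch, the conversion can also be done from the command line, assuming dos2unix or GNU sed is available:
# either of these strips the trailing CR from every line
dos2unix start.sh
sed -i 's/\r$//' start.sh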
You can also run the following and exec into the container to check your start.sh file:
docker run -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent --entrypoint sh dockeragent:latest
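Once inside, a quick way to confirm whether the file still has Windows endings (a sketch; cat -A is from GNU coreutils, which ubuntu:20.04 ships):
# lines ending in ^M$ instead of plain $ still have CRLF endings
head -n 3 /azp/start.sh | cat -A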

OpenBSD 6.7 how to install xbase

I am updating our integration test environments to OpenBSD 6.7 (from 6.5).
We use Ansible to install all the packages on the target system (OpenBSD 6.7, Vagrant image https://app.vagrantup.com/generic/boxes/openbsd6/versions/3.0.6).
With the above image, I cannot install Java OpenJDK 11.
obsd-31# pkg_add -r jdk%11
quirks-3.325 signed on 2020-05-27T12:56:02Z
jdk-11.0.7.10.2p0v0:lz4-1.9.2p0: ok
jdk-11.0.7.10.2p0v0:zstd-1.4.4p1: ok
jdk-11.0.7.10.2p0v0:jpeg-2.0.4p0v0: ok
jdk-11.0.7.10.2p0v0:tiff-4.1.0: ok
jdk-11.0.7.10.2p0v0:lcms2-2.9p0: ok
jdk-11.0.7.10.2p0v0:png-1.6.37: ok
jdk-11.0.7.10.2p0v0:giflib-5.1.6: ok
Can't install jdk-11.0.7.10.2p0v0 because of libraries
|library X11.17.0 not found
| not found anywhere
|library Xext.13.0 not found
| not found anywhere
|library Xi.12.1 not found
| not found anywhere
|library Xrender.6.0 not found
| not found anywhere
|library Xtst.11.0 not found
| not found anywhere
|library freetype.30.0 not found
| not found anywhere
Direct dependencies for jdk-11.0.7.10.2p0v0 resolve to png-1.6.37 libiconv-1.16p0 giflib-5.1.6 lcms2-2.9p0 jpeg-2.0.4p0v0
Full dependency tree is giflib-5.1.6 lz4-1.9.2p0 tiff-4.1.0 png-1.6.37 xz-5.2.5 jpeg-2.0.4p0v0 lcms2-2.9p0 zstd-1.4.4p1 libiconv-1.16p0
Couldn't install jdk-11.0.7.10.2p0v0
My guess is that xbase is not installed.
However, I cannot figure out how to install xbase without rebooting into a bootable installer (because I need to do it via a shell command run from Ansible).
Is there a way?
The generic OpenBSD Vagrant image you're using was created as a command-line environment, so the X Window System files were excluded during the install process.
There are lots of ways to add X to OpenBSD after installation, but the quickest method that comes to mind would be:
sudo su -l
curl -LO 'https://ftp.usa.openbsd.org/pub/OpenBSD/6.7/amd64/x{base,serv,font,share}67.tgz'
tar xzf xbase67.tgz -C /
tar xzf xserv67.tgz -C /
tar xzf xfont67.tgz -C /
tar xzf xshare67.tgz -C /
rm -f xbase67.tgz xfont67.tgz xserv67.tgz xshare67.tgz
ldconfig /usr/local/lib /usr/X11R6/lib
If you would like to test for the presence of X windows on OpenBSD, try using the following shell snippet:
if [ -d /usr/X11R6/bin/ ] && [ -f /usr/X11R6/bin/xinit ]; then
echo "X windows has been installed."
else
echo "This is a command line only system."
fi
The xbase file set can be extracted manually via the following commands:
cd /
curl -LO https://ftp.usa.openbsd.org/pub/OpenBSD/6.7/amd64/xbase67.tgz
tar xzvf xbase67.tgz
Note: this is the mirror used in the Vagrant sources.
If you care about security enough to use OpenBSD, then you really shouldn't grab new file sets from the internet without also checking that the hashes/signatures are valid. Try this script:
#!/bin/ksh
echo -n "Downloading ... "
curl --silent --fail --fail-early -O "https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/SHA256.sig" -O "https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/x{base,font,serv,share}70.tgz"
if [ $? != 0 ]; then
echo "X windows download failed. Terminating."
exit 1
fi
echo "complete."
signify -Cp /etc/signify/openbsd-70-base.pub -x SHA256.sig xbase70.tgz xfont70.tgz xserv70.tgz xshare70.tgz
if [ $? != 0 ]; then
echo "X windows signature verification failed. Terminating."
exit 1
fi
tar -z -x -C / -f xbase70.tgz && tar -z -x -C / -f xfont70.tgz && tar -z -x -C / -f xserv70.tgz && tar -z -x -C / -f xshare70.tgz
if [ $? != 0 ]; then
echo "X windows installation failed. Terminating."
exit 1
fi
echo "Installation complete. Happy hacking."
On the other hand, if you just want one-liners:
# Install just x11 base set.
sudo ksh -c 'curl --silent https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/xbase70.tgz | gzip -d -c | tar -x -C / -f - '
# Install all the x11 sets.
sudo ksh -c 'curl --silent -O "https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/x{base,font,serv,share}70.tgz" && tar -z -x -C / -f xbase70.tgz && tar -z -x -C / -f xfont70.tgz && tar -z -x -C / -f xserv70.tgz && tar -z -x -C / -f xshare70.tgz'
You can omit the sudo portion if you are already logged in as root. And for the Vagrant folks, the lazy version looks like:
# Install just x11 base set from the host, to a vagrant guest.
vagrant ssh -c "sudo ksh -c 'curl --silent https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/xbase70.tgz | gzip -d -c | tar -x -C / -f - '"
# Install all the x11 sets from the host, to a vagrant guest.
vagrant ssh -c "sudo ksh -c 'curl --silent -O \"https://ftp.usa.openbsd.org/pub/OpenBSD/7.0/amd64/x{base,font,serv,share}70.tgz\" && tar -z -x -C / -f xbase70.tgz && tar -z -x -C / -f xfont70.tgz && tar -z -x -C / -f xserv70.tgz && tar -z -x -C / -f xshare70.tgz'"
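For the Ansible use case from the question, the same extraction can be pushed to the targets as an ad-hoc shell command; a sketch, where the inventory group name openbsd is an assumption:
# extract the xbase set on every host in the 'openbsd' group
ansible openbsd --become -m shell -a \
  'curl --silent https://ftp.usa.openbsd.org/pub/OpenBSD/6.7/amd64/xbase67.tgz | tar -xzf - -C /'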

In Docker, JMeter HTML report is not generating

The issue is that the container stops after Docker completes the JMeter run.
The Dockerfile's last line:
CMD jmeter -n -t Get_Ping_Node_API.jmx -l .csv -e -o Get_Ping_Node_API2.html
Running:
ubuntu@ubuntu:~/sumit/docker-jmeter$ docker exec -it 3f2092a9895d bash
Error response from daemon: Container 3f2092a9895d881b97459af9f9c7982e06c696d1b0d4dc1484ee9dd75a3368ee is not running
ubuntu@ubuntu:~/sumit/docker-jmeter$
You're misinterpreting the command-line options.
As per this documentation:
http://jmeter.apache.org/usermanual/generating-dashboard.html
The parameter following -o should be a folder; you have put a file. It should be:
-o OUTPUT_FOLDER
The parameter following -l should be a CSV file; you have put .csv, which is only a suffix.
It should be:
-l results.csv
Your JMeter execution line is not quite correct; you should amend it like:
CMD jmeter -n -t Get_Ping_Node_API.jmx -l result.csv -e -o Get_Ping_Node_API2
The -l command-line argument expects the file where results will be stored; you cannot leave the filename empty.
The -o command-line argument expects the folder where the dashboard will be generated; you should remove the .html extension from it.
References:
Non-GUI Mode (Command Line mode)
Full list of command-line options
Generating Report Dashboard
Make Use of Docker with JMeter - Learn How
Here is an example Dockerfile you can use as a basis; it executes the Test.jmx file from JMeter's "extras" folder. Feel free to amend it as required to kick off your own test plan:
# 1
FROM alpine:3.6
# 2
LABEL maintainer="vincenzo.marrazzo@domain.personal"
# 3
ARG JMETER_VERSION="4.0"
# 4
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV MIRROR_HOST http://mirrors.ocf.berkeley.edu/apache/jmeter
ENV JMETER_DOWNLOAD_URL ${MIRROR_HOST}/binaries/apache-jmeter-${JMETER_VERSION}.tgz
ENV JMETER_PLUGINS_DOWNLOAD_URL http://repo1.maven.org/maven2/kg/apc
ENV JMETER_PLUGINS_FOLDER ${JMETER_HOME}/lib/ext/
# 5
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& cp /usr/share/zoneinfo/Europe/Rome /etc/localtime \
&& echo "Europe/Rome" > /etc/timezone \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# 6
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-dummy/0.2/jmeter-plugins-dummy-0.2.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-dummy-0.2.jar
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-cmn-jmeter/0.5/jmeter-plugins-cmn-jmeter-0.5.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-cmn-jmeter-0.5.jar
# 7
ENV PATH $PATH:$JMETER_BIN
# 8
WORKDIR ${JMETER_BIN}
# 9
CMD ./jmeter -n -t ../extras/Test.jmx -l result.jtl -e -o Get_Ping_Node_API2
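Once the run finishes and the container exits, the generated dashboard can be copied out with docker cp; a sketch, assuming the container was started with --name jmeter (given the WORKDIR above, the report lands in JMeter's bin folder):
docker cp jmeter:/opt/apache-jmeter-4.0/bin/Get_Ping_Node_API2 ./report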

Bash - combine 2 ssh calls into 1 (with optional and mandatory commands)

I have a script with 2 ssh commands. The first uses SSH to log into a remote server and delete Docker images:
ssh person@someserver.com 'set -x &&
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q)'
Note the use of ; to separate commands (we don't care if one or more of the commands fail).
The 2nd ssh command uses SSH to log into the same server, grab a docker-compose file, and run docker-compose:
ssh person@someserver.com 'set -x &&
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
Note the use of && to separate commands (this time we do care if one or more of the commands fail, as we grab the exit code, i.e. exitCode=$?).
I don't like the fact that I have to split this into 2, so my question is: can these 2 sections of bash commands be combined into a single SSH call (with both ; and && combinations)?
Although it is possible to pass a set of commands as a single quoted string, I wouldn't recommend it, because:
internal quotation marks would have to be escaped
it is difficult to read (and maintain!) code that looks like a string in a text editor
I find it better to keep the scripts in separate files and pass them to ssh on standard input:
cat script.sh | ssh -T user@host -- bash -s -
Execution of several scripts is done in the same way. Just concatenate more scripts:
cat a.sh b.sh | ssh -T user@host -- bash -s -
If you still want to use a string, use a here document instead:
ssh -T user@host -- <<'END_OF_COMMANDS'
# put your script here
END_OF_COMMANDS
Note the -T option. You don't need pseudo-terminal allocation for non-interactive scripts.
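Applied to the two snippets from the question, the here-document version would look like this (a sketch, keeping the ; and && semantics of the originals):
ssh -T person@someserver.com -- <<'END_OF_COMMANDS'
set -x
echo "Stop docker images"
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q)
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d
END_OF_COMMANDS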
ssh person@someserver.com 'set -x;
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q) ;
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
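Since the question mentions grabbing the exit code (exitCode=$?), that works the same with the combined call; a sketch:
# run the combined command above, then branch on its exit status
ssh person@someserver.com '...combined commands from above...'
exitCode=$?
if [ "$exitCode" -ne 0 ]; then
    echo "Remote deployment failed with exit code $exitCode" >&2
fi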

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However, it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $
description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"
: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}
depend() {
use logger dns
if [ "${rc_need+set}" = "set" ] ; then
: # Do nothing, the user has explicitly set rc_need
else
local x warn_addr
for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
case "${x}" in
0.0.0.0|0.0.0.0:*) ;;
::|\[::\]*) ;;
*) warn_addr="${warn_addr} ${x}" ;;
esac
done
if [ -n "${warn_addr}" ] ; then
need net
ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
ewarn "where FOO is the interface(s) providing the following address(es):"
ewarn "${warn_addr}"
fi
fi
}
checkconfig() {
if [ ! -d /var/empty ] ; then
mkdir -p /var/empty || return 1
fi
if [ ! -e "${SSHD_CONFIG}" ] ; then
eerror "You need an ${SSHD_CONFIG} file to run sshd"
eerror "There is a sample file in /usr/share/doc/openssh"
return 1
fi
if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
ssh-keygen -A || return 1
fi
[ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
&& SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
[ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
&& SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
"${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}
start() {
checkconfig || return 1
ebegin "Starting ${SVCNAME}"
start-stop-daemon --start --exec "${SSHD_BINARY}" \
--pidfile "${SSHD_PIDFILE}" \
-- ${SSHD_OPTS}
eend $?
}
stop() {
if [ "${RC_CMD}" = "restart" ] ; then
checkconfig || return 1
fi
ebegin "Stopping ${SVCNAME}"
start-stop-daemon --stop --exec "${SSHD_BINARY}" \
--pidfile "${SSHD_PIDFILE}" --quiet
eend $?
if [ "$RC_RUNLEVEL" = "shutdown" ]; then
_sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
if [ -n "$_sshd_pids" ]; then
ebegin "Shutting down ssh connections"
kill -TERM $_sshd_pids >/dev/null 2>&1
eend 0
fi
fi
}
reload() {
checkconfig || return 1
ebegin "Reloading ${SVCNAME}"
start-stop-daemon --signal HUP \
--exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
eend $?
}
A container is not a fully installed environment.
The official documentation is for Alpine installed on a real machine, with a power-on boot sequence, boot-up services, and so on, which a container does not have.
So nothing in /etc/init.d/ can be used directly in a container, because those scripts are run by the boot-up service manager (like systemd, or Alpine's rc*), and rc* isn't installed in the container. That's why you got the error message.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
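A minimal sketch of that manual approach (the Alpine tag is an assumption; ssh-keygen -A generates any missing host keys):
FROM alpine:3.12
RUN apk add --no-cache openssh && ssh-keygen -A
EXPOSE 22
# run sshd in the foreground, logging to stderr
CMD ["/usr/sbin/sshd", "-D", "-e"]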
Although some details are still not clear to me, let me add a voice to the discussion. The solution specified by the configuration below works for me. It's the result of arduous experiments.
First, the Dockerfile:
FROM alpine
RUN apk update && \
apk add --no-cache sudo bash openrc openssh
RUN mkdir -p /run/openrc && \
touch /run/openrc/softlevel && \
rc-update add sshd default
RUN adduser --disabled-password reguser && \
sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers
VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]
USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution
ADD ./entrypoint.sh /home/reguser/solution/
EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are at the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerRegUser/sshdImg:0.0.1' --file='./dockerfile' .
docker container create --tty \
--volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
--name sshdCnt 'dockerRegUser/sshdImg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
sleep 5 && \
ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs here. The example also goes against the single-service Docker container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt one into) considering extending a container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try running these commands:
apk add --no-cache openrc
rc-update add sshd
First check whether sshd is present in /usr/bin or /usr/sbin.
Then, init.d will contain sshd only if you set it up to start automatically with:
rc-update add sshd
rc-status
I needed sshd for a very specific reason. I had to run front-end (Cypress) and back-end (Django) tests on a CI server. Running them in one container is tricky to say the least, so I decided to go with 2 containers. Also, there had to be one entrypoint that would run the tests in both containers. So the idea was that one container would run its tests, then run the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting an empty root password and an empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A
passwd -d root
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''
cp ~/.ssh/id_rsa.pub .
wait4ports -s 1 tcp://c1:22
ssh-keyscan -t rsa c1 > ~/.ssh/known_hosts
ssh c1 echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server in your Docker container with Alpine, try this Dockerfile.
In this example, I am using the docker:dind image.
FROM docker:dind
# Setup SSH Service
RUN \
apk update && \
apk add openrc --no-cache && \
apk add openssh-server && \
rc-update add sshd && \
rc-status && \
touch /run/openrc/softlevel
# Expose port for ssh
EXPOSE 22
# Start SSH Service
CMD ["sh" , "-c", "service sshd restart && sh"]
Once your container is up and running, try running this command to make sure SSH works fine:
ssh localhost
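If you publish the container's SSH port to the host, the same check works from outside; a sketch, assuming the image is tagged my-sshd and login credentials have been configured:
docker run -d -p 2222:22 --name sshd-test my-sshd
ssh root@localhost -p 2222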
