Docker run is unable to locate the directory - linux

I've built a Docker image with docker build -t dockeragent:latest . but can't seem to get the container to run. The command docker run -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest produces the following error: exec ./start.sh: no such file or directory.
I understand that the start.sh script is called by the Dockerfile, and I've ensured that the Dockerfile is in the same directory as the start.sh script. I've also tested referencing the start.sh script via interpolation of its absolute path. Example:
ENTRYPOINT [ "${pwd}/start.sh" ]
Any ideas on what parameter has been misconfigured? The files are copied directly from Microsoft's guide on building self-hosted agents with Docker.
For reference, please see the Dockerfile and associated start.sh below.
Dockerfile
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
    apt-transport-https \
    apt-utils \
    ca-certificates \
    curl \
    git \
    iputils-ping \
    jq \
    lsb-release \
    software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
start.sh
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    # If the agent has some running jobs, the configuration removal process will fail.
    # So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}
print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_PACKAGES=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")
AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')
if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
  echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi
print_header "2. Downloading and extracting Azure Pipelines agent..."
curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
chmod +x ./run-docker.sh
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
Thanks in advance!

Check the Dockerfile and start.sh file. The settings should be correct.
Refer to this doc: Linux Docker container agent
It says to save the content to ~/dockeragent/start.sh, making sure to use Unix-style (LF) line endings.
start.sh needs Linux LF line endings when you create a Linux Docker container agent. When you create start.sh on Windows, it is saved with Windows CRLF line endings.
You can convert start.sh from Windows CRLF to Linux LF on this online site: LF and CRLF converter online. Then you can run the same command to create the pipeline agent.
Or you can create the files directly on a Linux system.
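If you'd rather convert from the command line, a minimal sketch (assuming GNU sed or the dos2unix package is available):
sed -i 's/\r$//' start.sh    # strip the carriage returns in place
dos2unix start.sh            # equivalent, if dos2unix is installed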

You can also override the entrypoint to get a shell in the container and check the start.sh file yourself (note that --entrypoint must come before the image name):
docker run -it --entrypoint sh -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest
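Once inside, a quick way to confirm the line-ending problem (od is part of coreutils/BusyBox) is to dump the first line and look for a trailing \r:
head -n 1 start.sh | od -c
If the output ends in \r \n, the file has Windows (CRLF) line endings.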

Related

Azure agents with docker ./start.sh no such file or directory

So I am following the documentation from Microsoft here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#add-tools-and-customize-the-container
This is my dockerfile:
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
    apt-transport-https \
    apt-utils \
    ca-certificates \
    curl \
    git \
    iputils-ping \
    jq \
    lsb-release \
    software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
This is my start.sh:
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    # If the agent has some running jobs, the configuration removal process will fail.
    # So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}
print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_PACKAGES=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")
AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')
if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
  echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi
print_header "2. Downloading and extracting Azure Pipelines agent..."
curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
chmod +x ./run-docker.sh
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
I can build everything fine and this is what it spits out:
[+] Building 185.9s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04 0.6s
=> CACHED [1/8] FROM docker.io/library/ubuntu:20.04@sha256:fd92c36d3cb9b1d027c4d2a72c6bf0125da82425fc2ca37c414d4 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 30B 0.0s
=> [2/8] RUN DEBIAN_FRONTEND=noninteractive apt-get update 10.4s
=> [3/8] RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y 8.8s
=> [4/8] RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends apt-transport-ht 53.5s
=> [5/8] RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash 54.7s
=> [6/8] WORKDIR /azp 0.1s
=> [7/8] COPY ./start.sh . 0.1s
=> [8/8] RUN chmod +x start.sh 0.5s
=> exporting to image 56.8s
=> => exporting layers 56.8s
=> => writing image sha256:fadefaae070c65381941b5a17a063d2248ebaba97c10d8a131dac711f153ae50 0.0s
=> => naming to docker.io/library/dockeragent:latest 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
But when I go to run the image with
docker run -e AZP_URL=https://myazureurl -e AZP_TOKEN=myTokenIGenerated -e AZP_POOL=myAgentPool -e AZP_AGENT_NAME=myAgentName dockeragent:latest
I get this error:
exec ./start.sh: no such file or directory
But start.sh is in the same folder as my Dockerfile.
I found out the issue was that creating the start.sh file on Windows gives it CRLF line endings. Linux uses LF, so to convert it, open the file in Notepad++, right-click where it says "Windows (CR LF)" in the bottom-right corner, and switch it to "Unix (LF)".
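To keep this from recurring, two hedged alternatives: convert once with dos2unix (if installed), or pin the line ending in Git so Windows checkouts never get CRLF in the first place:
dos2unix start.sh
# .gitattributes
*.sh text eol=lf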

X11 Display variable is not set - can't run Docker Image

I made a Docker image of JMeter because I want to run it remotely (and from a cloud). When I run the image I get the error: 'No X11 DISPLAY variable was set, but this program performed an operation which requires it.'
I've updated the ssh_config file and the sshd_config file (as mentioned in similar questions) but it still doesn't work.
And my DISPLAY variable is set to localhost:10.0. It may be useful to know that I am doing this in a VM running Ubuntu 19.04.
Thanks for your help.
After a few hours of searching I found the solution (credit):
My setup is Ubuntu 18.04 with LXDE, using this Docker build.
I modified the run script like this:
#!/bin/bash
#
# Run JMeter Docker image with options
NAME="jmeter"
JMETER_VERSION=${JMETER_VERSION:-"5.4"}
IMAGE="justb4/jmeter:${JMETER_VERSION}"
# Finally run
xhost +
docker run -e DISPLAY=$DISPLAY --rm --name ${NAME} -i -v ${PWD}:${PWD} -v /tmp/.X11-unix:/tmp/.X11-unix:ro -w ${PWD} ${IMAGE} $@
xhost -
This works, and in terms of effort it's much less than the alternative methods (VNC, etc.).
You should declare this DISPLAY variable in your Dockerfile using the ENV command, like:
ENV DISPLAY :10
But be aware that you need to have a display server, at least Xvfb.
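For instance, a minimal sketch of what would need to run at container startup, assuming Xvfb and JMeter are on the PATH inside the image (:10 matches the ENV above):
Xvfb :10 -screen 0 1024x768x24 &    # start a virtual framebuffer on display :10
export DISPLAY=:10                  # point X clients at it
jmeter                              # the GUI now has a display to render to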
So running the JMeter GUI in a Docker container is possible, but you will have to treat it like a normal Linux desktop; it can be a minimal one like Xfce.
Here is an example Dockerfile which downloads the latest JMeter, installs a virtual desktop, and makes it available via VNC and RDP:
FROM alpine:edge
ENV DISPLAY :99
ENV RESOLUTION 1366x768x24
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
    && apk add --no-cache curl xfce4-terminal xvfb x11vnc xfce4 openjdk8-jre bash xrdp \
    && curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.1.1.tgz > /tmp/jmeter.tgz \
    && tar -xvf /tmp/jmeter.tgz -C /opt \
    && rm /tmp/jmeter.tgz \
    && curl -L https://jmeter-plugins.org/get/ > /opt/apache-jmeter-5.1.1/lib/ext/jmeter-plugins-manager.jar \
    && echo "[Globals]" > /etc/xrdp/xrdp.ini \
    && echo "bitmap_cache=true" >> /etc/xrdp/xrdp.ini \
    && echo "bitmap_compression=true" >> /etc/xrdp/xrdp.ini \
    && echo "autorun=jmeter" >> /etc/xrdp/xrdp.ini \
    && echo "[jmeter]" >> /etc/xrdp/xrdp.ini \
    && echo "name=jmeter" >> /etc/xrdp/xrdp.ini \
    && echo "lib=libvnc.so" >> /etc/xrdp/xrdp.ini \
    && echo "ip=localhost" >> /etc/xrdp/xrdp.ini \
    && echo "port=5900" >> /etc/xrdp/xrdp.ini \
    && echo "username=jmeter" >> /etc/xrdp/xrdp.ini \
    && echo "password=" >> /etc/xrdp/xrdp.ini
EXPOSE 5900
EXPOSE 3389
CMD ["bash", "-c", "rm -f /tmp/.X99-lock && rm -f /var/run/xrdp.pid\
&& nohup bash -c \"/usr/bin/Xvfb :99 -screen 0 ${RESOLUTION} -ac +extension GLX +render -noreset && export DISPLAY=99 > /dev/null 2>&1 &\"\
&& nohup bash -c \"startxfce4 > /dev/null 2>&1 &\"\
&& nohup bash -c \"x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :99 -forever -bg -nopw -rfbport 5900 > /dev/null 2>&1\"\
&& nohup bash -c \"xrdp > /dev/null 2>&1\"\
&& nohup bash -c \"/opt/apache-jmeter-5.1.1/bin/./jmeter -Jjmeter.laf=CrossPlatform > /dev/null 2>&1 &\"\
&& tail -f /dev/null"]
You can build it like:
docker build -t jmeter .
and once done, kick off the container using the docker run command:
docker run -p 5900:5900 -p 3389:3389 jmeter
You might also find the Make Use of Docker with JMeter - Learn How guide useful.
There is NO solution for Docker images as such: Docker does not provide a GUI, and that is why I am getting this error. So if you are working with Docker and you get this error, just ignore it or update your image to be non-GUI only.
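For reference, JMeter's standard non-GUI mode sidesteps the display entirely (test.jmx and results.jtl are placeholder file names):
jmeter -n -t test.jmx -l results.jtl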
Cheers

Exception while executing Docker command in Jenkinsfile

I have a test project for end2end tests based on Nightwatch.js, which is a NodeJS framework. I want to use a 'Jenkinsfile' for my project to build a pipeline for my end2end tests and execute them with Jenkins in a Docker container. So, I want to start a Docker container and execute the tests inside it, all driven by a Jenkinsfile. Everything works perfectly when I don't use a Jenkinsfile and instead run the shell commands in a manually created job. With the Jenkinsfile, I get a MultipleCompilationErrorsException while running the pipeline, and I don't know why.
This is my Jenkinsfile:
pipeline {
    agent any
    parameters {
        text(defaultValue: 'grme/nightwatch-chrome-firefox:0.0.3', description: '', name: 'docker_image')
        text(defaultValue: 'npm-test-chrome', description: '', name: 'run_script_method')
        text(defaultValue: '/Applications/Docker.app/Contents/Resources/bin/docker', description: '', name: 'docker')
    }
    stages {
        stage('Test') {
            steps {
                sh 'sudo chmod -R 777 $(pwd)'
                echo "------ stop all Docker containers ------"
                sh '(sudo ${params.docker} stop $(sudo ${params.docker} ps -a -q) || echo "------ all Docker containers are still stopped ------")'
                echo "------ remove all Docker containers ------"
                sh '(sudo ${params.docker} rm $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still removed ------")'
                echo "------ pull Docker image from Docker Cloud ------"
                sh 'sudo ${params.docker} pull "${params.docker_image}"'
                echo "------ start Docker container from image ------"
                sh 'sudo ${params.docker} run -d -t -i -v $(pwd):/my_tests/ "${params.docker_image}" /bin/bash'
                echo "------ execute end2end tests on Docker container ------"
                sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args='-screen 0 1600x1200x24' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
                echo "------ cleanup all temporary files ------"
                sh 'sudo rm -Rf $(pwd)/tmp-*'
                sh 'sudo rm -Rf $(pwd)/.com.google*'
                sh 'sudo rm -Rf $(pwd)/rust_mozprofile*'
                sh 'sudo rm -Rf $(pwd)/.org.chromium*'
                echo "------ stop all Docker containers again ------"
                sh '(sudo ${params.docker} stop $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still stopped ------")'
                echo "------ remove all Docker containers again ------"
                sh '(sudo ${params.docker} rm $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still removed ------")'
            }
        }
    }
}
And this is the exception I get when running the pipeline:
Started by user GRme
> git rev-parse --is-inside-work-tree # timeout=10
Setting origin to https://github.com/GRme/e2e-web-tests
> git config remote.origin.url https://github.com/GRme/e2e-web-tests # timeout=10
Fetching origin...
Fetching upstream changes from origin
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git fetch --tags --progress origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/master
Seen 1 remote branch
Obtained Jenkinsfile from 0eb7d8c437df1efc56e46171d945e7f2806b838b
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 23: Expected a symbol # line 23, column 9.
sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args='-screen 0 1600x1200x24' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:129)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:123)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:516)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:479)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:269)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:419)
Finished: FAILURE
What am I doing wrong, and how can I solve this exception?
After escaping the ' characters in the line, the pipeline no longer has a syntax error :)
sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args=\'-screen 0 1600x1200x24\' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
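A hedged aside: Groovy does not interpolate single-quoted strings, so the ${params.docker} and ${params.run_script_method} references inside sh '...' reach the shell verbatim, where they are not valid parameter expansions. If the intent is for Jenkins to substitute them, the steps need double-quoted Groovy strings (with the inner quoting adjusted), along the lines of:
sh "sudo ${params.docker} pull '${params.docker_image}'"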

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $
description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"
: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}
depend() {
    use logger dns
    if [ "${rc_need+set}" = "set" ] ; then
        : # Do nothing, the user has explicitly set rc_need
    else
        local x warn_addr
        for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
            case "${x}" in
                0.0.0.0|0.0.0.0:*) ;;
                ::|\[::\]*) ;;
                *) warn_addr="${warn_addr} ${x}" ;;
            esac
        done
        if [ -n "${warn_addr}" ] ; then
            need net
            ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
            ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
            ewarn "where FOO is the interface(s) providing the following address(es):"
            ewarn "${warn_addr}"
        fi
    fi
}
checkconfig() {
    if [ ! -d /var/empty ] ; then
        mkdir -p /var/empty || return 1
    fi
    if [ ! -e "${SSHD_CONFIG}" ] ; then
        eerror "You need an ${SSHD_CONFIG} file to run sshd"
        eerror "There is a sample file in /usr/share/doc/openssh"
        return 1
    fi
    if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
        ssh-keygen -A || return 1
    fi
    [ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
        && SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
    [ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
        && SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
    "${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}
start() {
    checkconfig || return 1
    ebegin "Starting ${SVCNAME}"
    start-stop-daemon --start --exec "${SSHD_BINARY}" \
        --pidfile "${SSHD_PIDFILE}" \
        -- ${SSHD_OPTS}
    eend $?
}
stop() {
    if [ "${RC_CMD}" = "restart" ] ; then
        checkconfig || return 1
    fi
    ebegin "Stopping ${SVCNAME}"
    start-stop-daemon --stop --exec "${SSHD_BINARY}" \
        --pidfile "${SSHD_PIDFILE}" --quiet
    eend $?
    if [ "$RC_RUNLEVEL" = "shutdown" ]; then
        _sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
        if [ -n "$_sshd_pids" ]; then
            ebegin "Shutting down ssh connections"
            kill -TERM $_sshd_pids >/dev/null 2>&1
            eend 0
        fi
    fi
}
reload() {
    checkconfig || return 1
    ebegin "Reloading ${SVCNAME}"
    start-stop-daemon --signal HUP \
        --exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
    eend $?
}
A container is not a fully installed environment. The official documentation is for Alpine installed on an actual machine, which has a power-on sequence, boot-up services, and so on that a container does not have.
So nothing in /etc/init.d/ can be used directly in a container: those scripts are run by a boot-up service (such as systemd, or Alpine's rc*), and you got the error because rc* isn't installed in the container.
What you need to do is start sshd manually.
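A minimal sketch of doing that by hand (standard OpenSSH commands; -D keeps sshd in the foreground so the container stays alive, -e sends the log to stderr):
apk add --no-cache openssh-server
ssh-keygen -A          # generate the host keys
/usr/sbin/sshd -D -e   # run sshd in the foreground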
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
Although some details are still not clear to me, let me add my voice to the discussion. The configuration below works for me; it is the result of arduous experimentation.
First, the Dockerfile:
FROM alpine
RUN apk update && \
    apk add --no-cache sudo bash openrc openssh
RUN mkdir -p /run/openrc && \
    touch /run/openrc/softlevel && \
    rc-update add sshd default
RUN adduser --disabled-password reguser && \
    sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
    sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers
VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]
USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution
ADD ./entrypoint.sh /home/reguser/solution/
EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
  case "$f" in
    *.sh) echo "$0: running $f"; . "$f" ;;
    *) echo "$0: ignoring $f" ;;
  esac
  echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start is what makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerRegUser/sshdImg:0.0.1' --file='./dockerfile' .
docker container create --tty \
    --volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
    --name sshdCnt 'dockerRegUser/sshdImg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
    ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
    sleep 5 && \
    ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs here. The example also goes against the single-service-per-container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt one into) extending a container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try to run these commands:
apk add --no-cache openrc
rc-update add sshd
First check whether sshd is present in /usr/bin or /usr/sbin.
Then, init.d will have sshd only if you set it up to start automatically with:
rc-update add sshd
rc-status
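Note that in a container, openrc also needs its state directory before it can start anything; a hedged addition (the softlevel touch is the same trick used in the Dockerfiles elsewhere in this thread):
mkdir -p /run/openrc && touch /run/openrc/softlevel
rc-service sshd start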
I needed sshd for a very specific reason: I had to run front-end (Cypress) and back-end (Django) tests on a CI server. Running them in one container is tricky to say the least, so I decided to go with two containers, with a single entrypoint that runs the tests in both. The idea was that one container runs its own tests, then runs the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting an empty root password and an empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A
passwd -d root
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''
cp ~/.ssh/id_rsa.pub .
wait4ports -s 1 tcp://c1:22
ssh-keyscan -t rsa c1 > ~/.ssh/known_hosts
ssh c1 echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server in your Alpine-based Docker container, try this Dockerfile.
In this example, I am using the docker:dind image.
FROM docker:dind
# Setup SSH Service
RUN \
    apk update && \
    apk add openrc --no-cache && \
    apk add openssh-server && \
    rc-update add sshd && \
    rc-status && \
    touch /run/openrc/softlevel
# Expose port for ssh
EXPOSE 22
# Start SSH Service
CMD ["sh" , "-c", "service sshd restart && sh"]
Once your container is up and running, try running this command to make sure ssh works fine:
ssh localhost

Docker Compose Image Failing

I am learning about Dockerfiles and docker-compose. When I manually run the Dockerfiles and create the containers, they all work as they should; however, when I trigger the build and deploy with a docker-compose.yml script, the subscriber container stops working.
The process that works successfully is:
docker build -t cu/broker:1.0.0 broker/
docker run -d --name broker -p 6379:6379 cu/broker:1.0.0
docker build -t cu/subscriber:1.0.0 subscriber/
docker run -d --name subscriber --link broker:db cu/subscriber:1.0.0
docker build -t cu/publisher:1.0.0 publisher/
docker run --name publisher --link broker:db -ti cu/publisher:1.0.0
After running these commands I have two running containers and an interactive console where I can send individual publish commands to the redis server.
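For example, from that console (standard redis-cli syntax; the subscriber's index.js below listens on the 'test' channel):
publish test "hello world"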
Rather than adding each script to this question together with a folder structure I have written a shell script that resets everything and generates the correct structures and files.
When I trigger the docker-compose.yml script, it completes successfully, but only the broker container is running; the other two terminate as soon as they start. I don't understand why, and even with the --verbose flag I get no useful information to help debug this. This is the command I use to run the script:
docker-compose --verbose up -d
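To see why the containers terminate, the per-container logs and exit codes usually say more than --verbose. These are standard Docker commands; docker_subscriber_1 is a guess at the generated container name:
docker-compose logs subscriber
docker inspect --format '{{ .State.ExitCode }}' docker_subscriber_1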
And here is the shell script config.sh that builds the folder and file structure.
#!/bin/sh
# builds the folder structures and dockerfiles
echo "STOPPING RUNNING CONTAINERS"
docker stop $(docker ps -a -q)
echo "DELETING ALL CONTAINERS"
docker rm $(docker ps -a -q)
echo "DELETING ALL IMAGES"
docker rmi $(docker images -q)
docker ps
docker images
if [ -d docker ]; then
  echo "DELETING EXISTING FILES AND DIRECTORIES"
  rm -rf docker
fi
echo "CREATING DIRECTORIES AND FILES"
mkdir docker
cd docker
echo -e "broker:" >> docker-compose.yml
echo -e " build: broker/" >> docker-compose.yml
echo -e " ports:" >> docker-compose.yml
echo -e " - \"6379:6379\"\n" >> docker-compose.yml
echo -e "subscriber:" >> docker-compose.yml
echo -e " build: subscriber/" >> docker-compose.yml
echo -e " links:" >> docker-compose.yml
echo -e " - \"broker:db\"\n" >> docker-compose.yml
echo -e "publisher:" >> docker-compose.yml
echo -e " build: publisher/" >> docker-compose.yml
echo -e " links:" >> docker-compose.yml
echo -e " - \"broker:db\"\n" >> docker-compose.yml
mkdir broker
cd broker
echo "CREATING BROKER DOCKERFILE"
touch Dockerfile
echo -e "FROM redis:3.0.3" >> Dockerfile
echo -e "RUN apt-get update -y && apt-get upgrade -y\n" >> Dockerfile
cd ..
mkdir publisher
cd publisher
echo "CREATING PUBLISHER DOCKERFILE"
touch Dockerfile
echo -e "FROM ubuntu:14.04" >> Dockerfile
echo -e "RUN apt-get update -y && apt-get upgrade -y" >> Dockerfile
echo -e "RUN apt-get install -y redis-server && service redis-server stop" >> Dockerfile
echo -e "CMD redis-cli -h $DB_PORT_6379_TCP_ADDR\n" >> Dockerfile
cd ..
mkdir subscriber
cd subscriber
echo "CREATING SUBSCRIBER DOCKERFILE"
touch Dockerfile
echo -e "FROM node:0.12.7" >> Dockerfile
echo -e "RUN apt-get update -y && apt-get upgrade -y" >> Dockerfile
echo -e "RUN apt-get install -y apt-utils tree wget nano" >> Dockerfile
echo -e "WORKDIR /home" >> Dockerfile
echo -e "ADD index.js /home/index.js" >> Dockerfile
echo -e "RUN npm install ioredis" >> Dockerfile
echo -e "CMD ["node", "/home/index.js"]\n" >> Dockerfile
echo "CREATING JAVASCRIPT FILE"
echo -e "var redis = require('ioredis');" >> index.js
echo -e "var port = process.env.DB_PORT_6379_TCP_PORT;" >> index.js
echo -e "var ip = process.env.DB_PORT_6379_TCP_ADDR;\n" >> index.js
echo -e "client = redis.createClient(port, ip, {});\n" >> index.js
echo -e "console.log('REDIS PORT: '+port);" >> index.js
echo -e "console.log('REDIS IP: '+ip);" >> index.js
echo -e "console.log('subscribed to "test" channel');\n" >> index.js
echo -e "client.subscribe('test');\n" >> index.js
echo -e "client.on('message', function(channel, message) {" >> index.js
echo -e " console.log('MESSAGE RECEIVED');" >> index.js
echo -e " console.log('CHANNEL: '+channel);" >> index.js
echo -e " console.log('MESSAGE: '+message);" >> index.js
echo -e "});\n" >> index.js
cd ..
