Exception while executing Docker command in Jenkinsfile - node.js

I have a test project for end2end tests based on Nightwatch.js, a Node.js framework. I want to use a Jenkinsfile to build a pipeline for my end2end tests: Jenkins should start a Docker container and execute the tests inside that container. Everything works fine when I don't use a Jenkinsfile and instead run the shell commands directly in a manually created job. With the Jenkinsfile, however, I get a MultipleCompilationErrorsException when running the pipeline and I don't know why.
This is my Jenkinsfile:
pipeline {
    agent any
    parameters {
        text(defaultValue: 'grme/nightwatch-chrome-firefox:0.0.3', description: '', name: 'docker_image')
        text(defaultValue: 'npm-test-chrome', description: '', name: 'run_script_method')
        text(defaultValue: '/Applications/Docker.app/Contents/Resources/bin/docker', description: '', name: 'docker')
    }
    stages {
        stage('Test') {
            steps {
                sh 'sudo chmod -R 777 $(pwd)'
                echo "------ stop all Docker containers ------"
                sh '(sudo ${params.docker} stop $(sudo ${params.docker} ps -a -q) || echo "------ all Docker containers are still stopped ------")'
                echo "------ remove all Docker containers ------"
                sh '(sudo ${params.docker} rm $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still removed ------")'
                echo "------ pull Docker image from Docker Cloud ------"
                sh 'sudo ${params.docker} pull "${params.docker_image}"'
                echo "------ start Docker container from image ------"
                sh 'sudo ${params.docker} run -d -t -i -v $(pwd):/my_tests/ "${params.docker_image}" /bin/bash'
                echo "------ execute end2end tests on Docker container ------"
                sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args='-screen 0 1600x1200x24' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
                echo "------ cleanup all temporary files ------"
                sh 'sudo rm -Rf $(pwd)/tmp-*'
                sh 'sudo rm -Rf $(pwd)/.com.google*'
                sh 'sudo rm -Rf $(pwd)/rust_mozprofile*'
                sh 'sudo rm -Rf $(pwd)/.org.chromium*'
                echo "------ stop all Docker containers again ------"
                sh '(sudo ${params.docker} stop $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still stopped ------")'
                echo "------ remove all Docker containers again ------"
                sh '(sudo ${params.docker} rm $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still removed ------")'
            }
        }
    }
}
And this is the exception I get when running the pipeline:
Started by user GRme
> git rev-parse --is-inside-work-tree # timeout=10
Setting origin to https://github.com/GRme/e2e-web-tests
> git config remote.origin.url https://github.com/GRme/e2e-web-tests # timeout=10
Fetching origin...
Fetching upstream changes from origin
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git fetch --tags --progress origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/master
Seen 1 remote branch
Obtained Jenkinsfile from 0eb7d8c437df1efc56e46171d945e7f2806b838b
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 23: Expected a symbol # line 23, column 9.
sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args='-screen 0 1600x1200x24' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:129)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:123)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:516)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:479)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:269)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:419)
Finished: FAILURE
What am I doing wrong and how can I solve this exception?

After escaping the ' characters in that line, the pipeline no longer has a syntax error :)
sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args=\'-screen 0 1600x1200x24\' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
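Note that Groovy interpolates ${params.*} only inside double-quoted strings; in the single-quoted sh '...' strings above, the ${...} text is passed to the shell literally. As a sketch (illustration only, not from the original question), one of the steps rewritten with an interpolating double-quoted Groovy string would look like:
sh "sudo ${params.docker} pull '${params.docker_image}'"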

Related

Docker run is unable to locate the directory

I've built a docker image, docker build -t dockeragent:latest . but can't seem to trigger the container to run. The command: docker run -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest produces the following error: exec ./start.sh: no such file or directory.
I understand that the start.sh script is called by the Dockerfile, and I've ensured that the Dockerfile is in the same directory as the start.sh script. I've also tried referencing the start.sh script via interpolation of its absolute path. Example:
ENTRYPOINT [ "${pwd}/start.sh" ]
Any ideas on what parameter has been misconfigured? The files are directly copied from Microsoft's guide on building self-hosted agents with Docker.
For reference, please see the below Dockerfile and associated start.sh
Dockerfile
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
apt-transport-https \
apt-utils \
ca-certificates \
curl \
git \
iputils-ping \
jq \
lsb-release \
software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
start.sh
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    # If the agent has some running jobs, the configuration removal process will fail.
    # So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}
print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_PACKAGES=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")
AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')
if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
  echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi
print_header "2. Downloading and extracting Azure Pipelines agent..."
curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
chmod +x ./run-docker.sh
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
Thanks in advance!
Check the Dockerfile and the start.sh file; the settings themselves should be correct.
Refer to this doc: Linux Docker container agent
Save the following content to ~/dockeragent/start.sh, making sure to use Unix-style (LF) line endings:
The start.sh needs to use Linux LF line endings when creating a Linux Docker container agent.
When you create start.sh on a Windows system, it will use Windows CRLF line endings.
You can convert the start.sh file from Windows CRLF to Linux LF on this online site: LF and CRLF converter online. Then you can run the same command to create the pipeline agent.
Or you can directly create the files on a Linux system.
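Alternatively, a sketch of fixing the line endings at build time (assuming GNU sed is available in the base image, which it is in ubuntu:20.04), placed right after the COPY in the Dockerfile:
COPY ./start.sh .
# Strip Windows CR characters so the #!/bin/bash shebang resolves correctly
RUN sed -i 's/\r$//' start.sh && chmod +x start.sh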
You can also run the container with a shell entrypoint and check the start.sh file from inside:
docker run -it --entrypoint sh -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest

Jira doesn't seem to be installing on my EC2 instance at all: Failed to start jira.service: Unit jira.service not found

I'm trying to run a user-data script to install Jira on my RHEL 8 EC2 instance. When I SSH into the instance and run ps -ef | grep jira (a check I found on their website), it returns only this in my terminal:
ec2-user 5256 5199 0 20:39 pts/0 00:00:00 grep --color=auto jira
I also tried running sudo systemctl start jira in my terminal to start Jira, and this is the error I get: Failed to start jira.service: Unit jira.service not found
I'm not sure why Jira isn't being installed or why I can't start it manually when I SSH into my instance. Here's the user-data script I'm using to install Jira:
#!/usr/bin/env bash
set \
-o nounset \
-o pipefail \
-o errexit
echo "===== Downloading Jira ====="
cd ~
wget https://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-software-8.22.4-x64.bin
chmod +x atlassian-jira-software-8.22.4-x64.bin
wget -O ~/jira.bin ${jira_dl_url}
cat <<\EOF >> ~/jira.varfile
app.confHome=/var/atlassian/application-data/jira
app.install.service$Boolean=false
portChoice=custom
httpPort$Long=8080
rmiPort$Long=8443
launch.application$Boolean=false
sys.adminRights$Boolean=true
sys.confirmedUpdateInstallationString=false
sys.installationDir=/opt/atlassian/jira
sys.languageId=en
EOF
echo "done"
echo "===== Modify Jira Permissions ====="
chmod +x ~/jira.bin
echo "===== Start Jira Install ====="
sudo ./atlassian-jira-software-8.22.4-x64.bin -q
#sudo ~/jira.bin -q -varfile ~/jira.varfile
#sudo /opt/atlassian/jira/bin/startup.sh
echo "===== Stopping Jira Service ====="
sudo systemctl stop jira
echo "====== Change Ownership to Jira user ========="
sudo chown -R jira:jira /opt/atlassian/jira
sudo chown -R jira:jira /var/atlassian/application-data/jira
echo "===== Cleaning up Jira Files ====="
rm -f ~/jira.bin
rm -f ~/jira.varfile
chmod 750 /opt/atlassian/jira/logs/

Bash - combine 2 ssh calls into 1 (with optional and mandatory commands)

I have a script with 2 ssh commands. The first ssh command logs into a remote server and deletes docker images:
ssh person@someserver.com 'set -x &&
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q)'
Note use of ; to separate commands (we don't care if one or more of the commands fail).
The 2nd ssh command uses SSH to log into the same server, grab a docker compose file and run docker.
ssh person@someserver.com 'set -x &&
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
Note use of && to separate commands (this time we do care whether one or more of the commands fail, as we grab the exit code, i.e. exitCode=$?).
I don't like the fact I have to split this into 2 so my question is can these 2 sections of bash commands be combined into a single SSH call (with both ; and && combinations)?
Although it is possible to pass a set of commands as a simple single-quoted string, I wouldn't recommend that, because:
internal quotation marks should be escaped
it is difficult to read (and maintain!) a code that looks like a string in a text editor
I find it better to keep the scripts in separate files, then pass them to ssh as standard input:
cat script.sh | ssh -T user@host -- bash -s -
Execution of several scripts is done in the same way. Just concatenate more scripts:
cat a.sh b.sh | ssh -T user@host -- bash -s -
If you still want to use a string, use a here document instead:
ssh -T user@host -- <<'END_OF_COMMANDS'
# put your script here
END_OF_COMMANDS
Note the -T option. You don't need pseudo-terminal allocation for non-interactive scripts.
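For illustration, a sketch of the two command groups from the question combined via the here-document form; the exit status of ssh is the exit status of the remote script, so it can still be captured afterwards:
ssh -T person@someserver.com <<'END_OF_COMMANDS'
set -x
echo "Stop docker images"
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q) ;
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d
END_OF_COMMANDS
exitCode=$?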
ssh person@someserver.com 'set -x;
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q) ;
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $
description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"
: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}
depend() {
  use logger dns
  if [ "${rc_need+set}" = "set" ] ; then
    : # Do nothing, the user has explicitly set rc_need
  else
    local x warn_addr
    for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
      case "${x}" in
        0.0.0.0|0.0.0.0:*) ;;
        ::|\[::\]*) ;;
        *) warn_addr="${warn_addr} ${x}" ;;
      esac
    done
    if [ -n "${warn_addr}" ] ; then
      need net
      ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
      ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
      ewarn "where FOO is the interface(s) providing the following address(es):"
      ewarn "${warn_addr}"
    fi
  fi
}
checkconfig() {
  if [ ! -d /var/empty ] ; then
    mkdir -p /var/empty || return 1
  fi
  if [ ! -e "${SSHD_CONFIG}" ] ; then
    eerror "You need an ${SSHD_CONFIG} file to run sshd"
    eerror "There is a sample file in /usr/share/doc/openssh"
    return 1
  fi
  if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
    ssh-keygen -A || return 1
  fi
  [ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
    && SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
  [ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
    && SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
  "${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}
start() {
  checkconfig || return 1
  ebegin "Starting ${SVCNAME}"
  start-stop-daemon --start --exec "${SSHD_BINARY}" \
    --pidfile "${SSHD_PIDFILE}" \
    -- ${SSHD_OPTS}
  eend $?
}
stop() {
  if [ "${RC_CMD}" = "restart" ] ; then
    checkconfig || return 1
  fi
  ebegin "Stopping ${SVCNAME}"
  start-stop-daemon --stop --exec "${SSHD_BINARY}" \
    --pidfile "${SSHD_PIDFILE}" --quiet
  eend $?
  if [ "$RC_RUNLEVEL" = "shutdown" ]; then
    _sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
    if [ -n "$_sshd_pids" ]; then
      ebegin "Shutting down ssh connections"
      kill -TERM $_sshd_pids >/dev/null 2>&1
      eend 0
    fi
  fi
}
reload() {
  checkconfig || return 1
  ebegin "Reloading ${SVCNAME}"
  start-stop-daemon --signal HUP \
    --exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
  eend $?
}
A container is not a fully installed environment.
The official documentation is written for Alpine installed on a machine, with a boot process, startup services, and so on, which a container does not have.
So anything in /etc/init.d/ cannot be used directly in a container, because those scripts are run by a boot-up service (like systemd, or Alpine's OpenRC). That's why you got the error message: the rc* tooling isn't installed in the container.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
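A minimal sketch of doing that (an illustration, not the contents of the linked image): install the server, generate host keys, and run sshd in the foreground as the container's main process. Authentication (a user with a password, or authorized keys) still has to be set up before you can actually log in.
FROM alpine:3.12
RUN apk add --no-cache openssh && ssh-keygen -A
# sshd runs in the foreground (-D) and logs to stderr (-e) so the container stays up
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]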
Although some details are still not clear to me, let me add my voice to the discussion. The configuration below works for me; it's the result of arduous experiments.
First, the Dockerfile:
FROM alpine
RUN apk update && \
apk add --no-cache sudo bash openrc openssh
RUN mkdir -p /run/openrc && \
touch /run/openrc/softlevel && \
rc-update add sshd default
RUN adduser --disabled-password reguser && \
sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers
VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]
USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution
ADD ./entrypoint.sh /home/reguser/solution/
EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
  case "$f" in
    *.sh) echo "$0: running $f"; . "$f" ;;
    *) echo "$0: ignoring $f" ;;
  esac
  echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are at the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start is what makes the solution work.
Finally, command-line controls
docker build --tag='dockerreguser/sshdimg:0.0.1' --file='./dockerfile' .
docker container create --tty \
--volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
--name sshdCnt 'dockerreguser/sshdimg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
sleep 5 && \
ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs. The example also goes against the single-service-per-container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt one into) extending a container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try to run these commands:
apk add --no-cache openrc
rc-update add sshd
First check whether sshd is present in /usr/bin or /usr/sbin.
Then, init.d will contain sshd only if you set it up to start automatically with:
rc-update add sshd
rc-status
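Putting those pieces together, a sketch of the full sequence inside an Alpine container, borrowing the /run/openrc/softlevel workaround shown in the other answers here:
apk add --no-cache openrc openssh
rc-update add sshd
mkdir -p /run/openrc && touch /run/openrc/softlevel
rc-service sshd start
rc-status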
I needed sshd for a very specific reason: I had to run front-end (cypress) and back-end (django) tests on a CI server. Running them in one container is tricky to say the least, so I decided to go with 2 containers. There also had to be a single entry point that runs the tests in both containers. So the idea was that one container runs its own tests, then runs the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting an empty root password and an empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A
passwd -d root
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''
cp ~/.ssh/id_rsa.pub .
wait4ports -s 1 tcp://c1:22
ssh-keyscan -t rsa c1 > ~/.ssh/known_hosts
ssh c1 echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server in your Alpine-based Docker container, try this Dockerfile.
In this example, I am using the docker:dind image.
FROM docker:dind
# Setup SSH Service
RUN \
apk update && \
apk add openrc --no-cache && \
apk add openssh-server && \
rc-update add sshd && \
rc-status && \
touch /run/openrc/softlevel
# Expose port for ssh
EXPOSE 22
# Start SSH Service
CMD ["sh" , "-c", "service sshd restart && sh"]
Once your container is up and running try running this command to make sure ssh works fine:
ssh localhost

Docker Compose Image Failing

I am learning about Dockerfiles and docker-compose. When I manually build the images and create the containers, they all work as they should; however, when I trigger the build and deploy using a docker-compose.yml file, the subscriber container stops working.
The process that works successfully is:
docker build -t cu/broker:1.0.0 broker/
docker run -d --name broker -p 6379:6379 cu/broker:1.0.0
docker build -t cu/subscriber:1.0.0 subscriber/
docker run -d --name subscriber --link broker:db cu/subscriber:1.0.0
docker build -t cu/publisher:1.0.0 publisher/
docker run --name publisher --link broker:db -ti cu/publisher:1.0.0
After running these commands I have two running containers and an interactive console where I can send individual publish commands to the redis server.
Rather than adding each script to this question together with a folder structure, I have written a shell script that resets everything and generates the correct structure and files.
When I trigger the build with the docker-compose.yml file, it completes successfully, but only the broker container is running; the other two terminate as soon as they start. I don't understand why, and even with the --verbose flag I get no useful information to help debug this. This is the command I use to run it:
docker-compose --verbose up -d
And here is the shell script config.sh that builds the folder and file structure.
#!/bin/sh
# builds the folder structures and dockerfiles
echo "STOPPING RUNNING CONTAINERS"
docker stop $(docker ps -a -q)
echo "DELETING ALL CONTAINERS"
docker rm $(docker ps -a -q)
echo "DELETING ALL IMAGES"
docker rmi $(docker images -q)
docker ps
docker images
if [ -d docker ]; then
echo "DELETING EXISTING FILES AND DIRECTORIES"
rm -rf docker
fi
echo "CREATING DIRECTORIES AND FILES"
mkdir docker
cd docker
echo -e "broker:" >> docker-compose.yml
echo -e " build: broker/" >> docker-compose.yml
echo -e " ports:" >> docker-compose.yml
echo -e " - \"6379:6379\"\n" >> docker-compose.yml
echo -e "subscriber:" >> docker-compose.yml
echo -e " build: subscriber/" >> docker-compose.yml
echo -e " links:" >> docker-compose.yml
echo -e " - \"broker:db\"\n" >> docker-compose.yml
echo -e "publisher:" >> docker-compose.yml
echo -e " build: publisher/" >> docker-compose.yml
echo -e " links:" >> docker-compose.yml
echo -e " - \"broker:db\"\n" >> docker-compose.yml
mkdir broker
cd broker
echo "CREATING BROKER DOCKERFILE"
touch Dockerfile
echo -e "FROM redis:3.0.3" >> Dockerfile
echo -e "RUN apt-get update -y && apt-get upgrade -y\n" >> Dockerfile
cd ..
mkdir publisher
cd publisher
echo "CREATING PUBLISHER DOCKERFILE"
touch Dockerfile
echo -e "FROM ubuntu:14.04" >> Dockerfile
echo -e "RUN apt-get update -y && apt-get upgrade -y" >> Dockerfile
echo -e "RUN apt-get install -y redis-server && service redis-server stop" >> Dockerfile
echo -e "CMD redis-cli -h $DB_PORT_6379_TCP_ADDR\n" >> Dockerfile
cd ..
mkdir subscriber
cd subscriber
echo "CREATING SUBSCRIBER DOCKERFILE"
touch Dockerfile
echo -e "FROM node:0.12.7" >> Dockerfile
echo -e "RUN apt-get update -y && apt-get upgrade -y" >> Dockerfile
echo -e "RUN apt-get install -y apt-utils tree wget nano" >> Dockerfile
echo -e "WORKDIR /home" >> Dockerfile
echo -e "ADD index.js /home/index.js" >> Dockerfile
echo -e "RUN npm install ioredis" >> Dockerfile
echo -e "CMD ["node", "/home/index.js"]\n" >> Dockerfile
echo "CREATING JAVASCRIPT FILE"
echo -e "var redis = require('ioredis');" >> index.js
echo -e "var port = process.env.DB_PORT_6379_TCP_PORT;" >> index.js
echo -e "var ip = process.env.DB_PORT_6379_TCP_ADDR;\n" >> index.js
echo -e "client = redis.createClient(port, ip, {});\n" >> index.js
echo -e "console.log('REDIS PORT: '+port);" >> index.js
echo -e "console.log('REDIS IP: '+ip);" >> index.js
echo -e "console.log('subscribed to "test" channel');\n" >> index.js
echo -e "client.subscribe('test');\n" >> index.js
echo -e "client.on('message', function(channel, message) {" >> index.js
echo -e " console.log('MESSAGE RECEIVED');" >> index.js
echo -e " console.log('CHANNEL: '+channel);" >> index.js
echo -e " console.log('MESSAGE: '+message);" >> index.js
echo -e "});\n" >> index.js
cd ..
