GitLab CI/CD ERROR: Job failed: exit code 1 on chmod og= $STAGE_ID_RSA - node.js

This is my .gitlab-ci.yml:
deploy-stage:
  stage: deploy
  image: alpine:latest
  script:
    - chmod og= $STAGE_ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker stop $CI_PROJECT_NAME || true"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker rm $CI_PROJECT_NAME || true"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker image rm $CI_REGISTRY_IMAGE:latest || true"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker pull $CI_REGISTRY_IMAGE:latest"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker run -d --restart unless-stopped --name $CI_PROJECT_NAME -p 8882:4000 -e 'variableData=Docker-Run-Command' $CI_REGISTRY_IMAGE:latest"
I get this error:
BusyBox v1.35.0 (2022-05-09 17:27:12 UTC) multi-call binary.
Usage: chmod [-Rcvf] MODE[,MODE]... FILE...
MODE is octal number (bit pattern sstrwxrwxrwx) or [ugoa]{+|-|=}[rwxXst]
-R Recurse
-c List changed files
-v Verbose
-f Hide errors
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
And this is my variable:
https://i.stack.imgur.com/zegmz.png
And I entered a line break at the end of the STAGE_ID_RSA value.

First, you can use chmod og= -- $STAGE_ID_RSA to make sure chmod will not treat $STAGE_ID_RSA as an option, but as a parameter.
Second, chmod [OPTIONS] [ugoa…][-+=]perms…[,…] FILE... means that chmod applies to a file.
If $STAGE_ID_RSA represents a value rather than a path, that would not work.
You would need to redirect its content to a file first.
That said, I do see STAGE_ID_RSA here, used the way you do in .gitlab-ci.yml, but with the kroniak/ssh-client image rather than alpine:latest.
The chmod in alpine:latest (BusyBox) might not behave like the chmod in a more traditional image.
See "Linux commands in Alpine docker image +755 invalid mode".
That is most likely your issue.
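If the variable does hold the key content rather than a path, a minimal sketch of the file-based approach (the ~/.ssh/stage_id_rsa path and the trailing hostname check are my own illustration, not from the original job):

mkdir -p ~/.ssh
echo "$STAGE_ID_RSA" > ~/.ssh/stage_id_rsa   # turn the variable's value into a real file
chmod og= -- ~/.ssh/stage_id_rsa             # chmod now has an actual file to act on
ssh -i ~/.ssh/stage_id_rsa -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP hostname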

Related

GitLab CI - Restart service on remote host

I have a GitLab pipeline that deploys a site and needs to restart the fpm service.
stages:
  - deploy
Deploy:
  image: gotechnies/alpine-ssh
  stage: deploy
  before_script:
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    # other steps
    - ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'bash -O extglob -c "(flock -w 10 9 || exit 1\n echo 'Restarting FPM...'; echo "$FORGE_SUDO_PWD" | sudo -S service $FORGE_PHP_FPM reload) 9>/tmp/fpmlock"'
  variables:
    FORGE_PHP_FPM: php8.1-fpm
    FORGE_SUDO_PWD: $PRODUCTION_SUDO_PWD
  only:
    - master
$PRODUCTION_SUDO_PWD is added in GitLab variables and marked as protected.
My problem is with this line:
- ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'bash -O extglob -c "(flock -w 10 9 || exit 1\n echo 'Restarting FPM...'; echo "$FORGE_SUDO_PWD" | sudo -S service $FORGE_PHP_FPM reload) 9>/tmp/fpmlock"'
I want to restart php8.1-fpm service but each time I run the pipeline I get:
[sudo] password for forge: Sorry, try again.
[sudo] password for forge:
sudo: no password was provided
sudo: 1 incorrect password attempt
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
As far as I know, SendEnv should pass the value of the variable, and if I remove the bash command and just add echo $FORGE_SUDO_PWD it prints the value.
What am I missing?
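One thing worth checking, as an aside: OpenSSH silently drops SendEnv variables unless the remote sshd whitelists them with AcceptEnv, so passing them this way requires a server-side change too. A minimal sketch, assuming a Debian/Ubuntu host where the service is called ssh:

# On the remote host: allow the two variables, then reload sshd
echo 'AcceptEnv FORGE_PHP_FPM FORGE_SUDO_PWD' | sudo tee -a /etc/ssh/sshd_config
sudo service ssh reload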

SSH error when trying to deploy to DigitalOcean via GitLab CI/CD

Good Evening,
I am trying to deploy to DigitalOcean via a GitLab CI/CD pipeline, but when I run the pipeline I get:
"chmod: /root/.ssh/id_rsa: No such file or directory
$ chmod og= ~/.ssh/id_rsa
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1".
For some reason it's not using the user that I made for deployment and is using root instead, but when I use the cat command to view the SSH key on my server, it shows up for both the root and deployer users.
Below is my .yml file.
before_script:
  - echo $PATH
  - pwd
  - whoami
  - mkdir -p ~/.ssh
  - cd ~/.ssh
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > id_rsa
  - echo "$SSH_PUBLIC_KEY" | tr -d '\r' > id_rsa.pub
  - chmod 700 id_rsa id_rsa.pub
  - cp id_rsa.pub authorized_keys
  - cp id_rsa.pub known_hosts
  - ls -ld *
  - cd -
stages:
  - build
  - publish
  - deploy
variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
build:
  image: node:latest
  stage: build
  script:
    - npm install
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env
publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - docker build . -t $TAG_COMMIT -t $TAG_LATEST
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST
deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - whoami
    - uname -a
    - echo "user $SERVER_USER"
    - echo "ip $SERVER_IP"
    - echo "id_rsa $ID_RSA"
    - (which ifconfig) || (apt install net-tools)
    - /sbin/ifconfig
    - touch blah
    - find .
    - apk update && apk add openssh-client
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production
    url: http://167.172.225.124
  only:
    - master
After hours of work and errors:
cat id_rsa.pub >> authorized_keys: fixed the Permission denied (publickey,password) error.
ssh-keyscan gitlab.com >> authorized_keys: fixed the connection refused error.
Below is the final .yml file that works.
# ssh-keyscan gitlab.com >> authorized_keys: use this command to add GitLab SSH keys to the server. Run it in the server terminal.
# cat id_rsa.pub >> authorized_keys: run this command on the server, in the terminal.
# Both commands above are necessary.
stages:
  - build
  - publish
  - deploy
variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
build:
  image: node:latest
  stage: build
  script:
    - npm install
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env
publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - docker build . -t $TAG_COMMIT -t $TAG_LATEST
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST
deploy:
  image: ubuntu:latest
  stage: deploy
  tags:
    - deployment
  before_script:
    ##
    ## Install ssh-agent if not already installed, it is required by Docker.
    ## (change apt-get to yum if you use an RPM-based image)
    ##
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    ##
    ## Run ssh-agent (inside the build environment)
    ##
    - eval $(ssh-agent -s)
    ##
    ## Create the SSH directory and give it the right permissions
    ##
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    ##
    ## Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store.
    ## We're using tr to fix line endings, which makes ed25519 keys work
    ## without extra base64 encoding.
    ## https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
    ##
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - echo "$SSH_PUBLIC_KEY" | tr -d '\r' > ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/*
    - chmod 644 ~/.ssh/*.pub
    - ssh-add
    ##
    ## Use ssh-keyscan to scan the keys of your private server. Replace gitlab.com
    ## with your own domain name. You can copy and repeat that command if you have
    ## more than one server to connect to.
    ##
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ls -ld ~/.ssh/*
    - cat ~/.ssh/*
    ##
    ## Alternatively, assuming you created the SSH_SERVER_HOSTKEYS variable
    ## previously, uncomment the following two lines instead.
    ##
    #- echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts
    #- chmod 644 ~/.ssh/known_hosts
    ##
    ## You can optionally disable host key checking. Be aware that by doing so
    ## you are susceptible to man-in-the-middle attacks.
    ## WARNING: Use this only with the Docker executor; if you use it with shell
    ## you will overwrite your user's SSH config.
    ##
    #- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    ##
    ## Optionally, if you will be using any Git commands, set the user name and
    ## email.
    ##
  script:
    - ssh -v -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production
    url: http://167.172.225.124
  only:
    - master
The prerequisites of the DigitalOcean tutorial you are following include a sudo non-root user and a user account on a GitLab instance with an enabled container registry.
The gitlab-runner service installed through script.deb.sh needs a non-root user's password to proceed.
And it involves creating a user that is dedicated to the deployment task, with a CI/CD pipeline configured later to log in to the server with that user.
That means the GitLab CI job is not supposed to be executed by root, which is not involved at any stage.
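For reference, a minimal sketch of the user setup such a tutorial expects (the deployer name is illustrative; adapt it to your server):

# On the droplet, as root: create a dedicated non-root deploy user
adduser deployer
usermod -aG sudo deployer     # sudo rights, so root itself never logs in
usermod -aG docker deployer   # lets the user run docker without sudo
# The CI job then connects as that user, never as root:
# ssh -o StrictHostKeyChecking=no deployer@$SERVER_IP "docker pull ..."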

Executing a command inside docker shows wrong $PATH

I am trying to run a bash command inside Docker from the host:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "rosbag"
/bin/bash: rosbag: command not found
So I tried:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "echo \$PATH"
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
But when I run docker interactively:
$ docker exec -it -u weiss apollo_dev /bin/bash
weiss@docker$ echo $PATH
/usr/local/cuda-8.0/bin:/home/tmp/ros/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Any reason why I am getting different results for $PATH?
This path is most likely changed in your .bashrc file, and that file is not loaded when the shell is non-interactive (see https://www.gnu.org/software/bash/manual/bash.html#Bash-Startup-Files).
So /bin/bash will load it; /bin/bash -c will not.
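A common workaround, assuming the extra PATH entries really are set in .bashrc: pass -i so bash starts interactively and reads .bashrc even with -c (alternatively, move the PATH export to a file non-interactive shells read, such as /etc/environment):

# -i forces an interactive shell, so ~/.bashrc is sourced before the command runs
docker exec -it -u weiss apollo_dev /bin/bash -ic "rosbag"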
Here you are getting the $PATH of your host. Before the command reaches the container, the shell replaces the variable with the host's $PATH:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "echo \$PATH"
You need to pass the command without expanding the variable, so that $PATH is only evaluated when the command runs inside the container:
$ docker exec -it -u weiss apollo_dev /bin/bash -c 'echo $PATH'
The single quotes are the key. Bye.

ls sort order inside container

Running ls -d to list directories prints them in a different order if a trailing / is present in the name. Why is that? What sorting rules apply? And why does this happen only with Docker?
With trailing /
$ docker run --rm ubuntu:16.04 /bin/bash -c "mkdir foo ; mkdir foo-bar ; ls -d foo/ foo-bar/"
foo-bar/
foo/
Without trailing /
$ docker run --rm -it ubuntu:16.04 /bin/bash -c "mkdir foo ; mkdir foo-bar ; ls -d foo foo-bar"
foo
foo-bar
I found out that I get the same behavior using the sort command:
docker run --rm ubuntu:16.04 /bin/bash -c "echo -e 'foo/\nfoo-bar/' | sort"
But the sorting order changes when using sort -d:
docker run --rm ubuntu:16.04 /bin/bash -c "echo -e 'foo/\nfoo-bar/' | sort -d"
Thanks to David for pointing me in the right direction: this is caused by the locale settings, as described here.
In a bare Ubuntu container the POSIX locale is used, which has different sorting rules than en_US.
I solved my problem by installing the en_US locale in the Docker image, and sorting works as expected again.
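A minimal sketch of that fix, assuming an ubuntu:16.04 base (locales is the Debian/Ubuntu package name):

# Install the locale, then sort with en_US collation instead of POSIX
docker run --rm ubuntu:16.04 /bin/bash -c \
  "apt-get update -qq && apt-get install -y -qq locales >/dev/null && \
   locale-gen en_US.UTF-8 >/dev/null && \
   echo -e 'foo/\nfoo-bar/' | LC_ALL=en_US.UTF-8 sort"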

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However, it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $

description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"

: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}

depend() {
    use logger dns
    if [ "${rc_need+set}" = "set" ] ; then
        : # Do nothing, the user has explicitly set rc_need
    else
        local x warn_addr
        for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
            case "${x}" in
                0.0.0.0|0.0.0.0:*) ;;
                ::|\[::\]*) ;;
                *) warn_addr="${warn_addr} ${x}" ;;
            esac
        done
        if [ -n "${warn_addr}" ] ; then
            need net
            ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
            ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
            ewarn "where FOO is the interface(s) providing the following address(es):"
            ewarn "${warn_addr}"
        fi
    fi
}

checkconfig() {
    if [ ! -d /var/empty ] ; then
        mkdir -p /var/empty || return 1
    fi
    if [ ! -e "${SSHD_CONFIG}" ] ; then
        eerror "You need an ${SSHD_CONFIG} file to run sshd"
        eerror "There is a sample file in /usr/share/doc/openssh"
        return 1
    fi
    if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
        ssh-keygen -A || return 1
    fi
    [ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
        && SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
    [ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
        && SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
    "${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}

start() {
    checkconfig || return 1
    ebegin "Starting ${SVCNAME}"
    start-stop-daemon --start --exec "${SSHD_BINARY}" \
        --pidfile "${SSHD_PIDFILE}" \
        -- ${SSHD_OPTS}
    eend $?
}

stop() {
    if [ "${RC_CMD}" = "restart" ] ; then
        checkconfig || return 1
    fi
    ebegin "Stopping ${SVCNAME}"
    start-stop-daemon --stop --exec "${SSHD_BINARY}" \
        --pidfile "${SSHD_PIDFILE}" --quiet
    eend $?
    if [ "$RC_RUNLEVEL" = "shutdown" ]; then
        _sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
        if [ -n "$_sshd_pids" ]; then
            ebegin "Shutting down ssh connections"
            kill -TERM $_sshd_pids >/dev/null 2>&1
            eend 0
        fi
    fi
}

reload() {
    checkconfig || return 1
    ebegin "Reloading ${SVCNAME}"
    start-stop-daemon --signal HUP \
        --exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
    eend $?
}
A container is not a fully installed environment.
The official documentation is for Alpine installed on an actual machine, with power-on, boot-up services, etc., which a container does not have.
So anything in /etc/init.d/ that is meant to be driven by a boot-up service (like systemd, or Alpine's rc*) cannot be used directly in a container. That's why you got the error message: the rc* machinery isn't installed in the container.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
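For reference, a minimal sketch of starting sshd by hand in an Alpine container, with no OpenRC involved at all (assuming openssh is installed via apk):

apk add --no-cache openssh
ssh-keygen -A            # generate the host keys sshd refuses to start without
/usr/sbin/sshd -D -e     # -D: stay in the foreground, -e: log to stderr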
Although some details are still not clear to me, let me add my voice to the discussion. The solution specified by the configuration below works for me. It's the result of arduous experiments.
First, the Dockerfile:
FROM alpine

RUN apk update && \
    apk add --no-cache sudo bash openrc openssh

RUN mkdir -p /run/openrc && \
    touch /run/openrc/softlevel && \
    rc-update add sshd default

RUN adduser --disabled-password reguser && \
    sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
    sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers

VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]

USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution

ADD ./entrypoint.sh /home/reguser/solution/

EXPOSE 22

ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh:
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
    case "$f" in
        *.sh) echo "$0: running $f"; . "$f" ;;
        *)    echo "$0: ignoring $f" ;;
    esac
    echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh:
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are at the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerreguser/sshd-img:0.0.1' --file='./dockerfile' .
docker container create --tty \
    --volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
    --name sshdCnt 'dockerreguser/sshd-img:0.0.1' tail -f /dev/null
docker start sshdCnt && \
    ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
    sleep 5 && \
    ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs here. The example also goes against the single-service Docker container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt one into) extending the container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try running these commands:
apk add --no-cache openrc
rc-update add sshd
Check first that sshd is present in /usr/bin or /usr/sbin.
Then, init.d will only have sshd if you set it up to start automatically with:
rc-update add sshd
rc-status
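Putting those pieces together, a minimal sketch of driving sshd through OpenRC inside a running container (touching /run/openrc/softlevel stands in for a real boot, as in the Dockerfiles elsewhere on this page):

apk add --no-cache openrc openssh
rc-update add sshd                            # register the service with OpenRC
mkdir -p /run/openrc && touch /run/openrc/softlevel
rc-service sshd start                         # now /etc/init.d/sshd has something to run under
rc-status                                     # verify sshd is listed as started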
I needed sshd for a very specific reason. I had to run front-end (cypress) and back-end (django) tests on a CI server. Running them in one container is tricky, to say the least, so I decided to go with two containers. Also, there had to be one entrypoint that would run the tests in both containers. So the idea was that one container would run its tests, then run the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting an empty root password and an empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A
passwd -d root
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''
cp ~/.ssh/id_rsa.pub .
wait4ports -s 1 tcp://server:22
ssh-keyscan -t rsa server > ~/.ssh/known_hosts
ssh server echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server on your Docker container with Alpine, try this Dockerfile.
In this example, I am using the docker:dind image:
FROM docker:dind

# Setup SSH Service
RUN \
    apk update && \
    apk add openrc --no-cache && \
    apk add openssh-server && \
    rc-update add sshd && \
    rc-status && \
    touch /run/openrc/softlevel

# Expose port for ssh
EXPOSE 22

# Start SSH Service
CMD ["sh", "-c", "service sshd restart && sh"]
Once your container is up and running, try running this command to make sure SSH works fine:
ssh localhost
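To reach it from the host rather than from inside the container, you would also publish the port; the host port 2222 and the image tag below are my own illustration:

docker build -t alpine-sshd-test .
docker run -d -p 2222:22 --name sshd-test alpine-sshd-test
ssh -p 2222 root@localhost   # assumes root login / password auth is set up in sshd_config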
