Linux: paste content into the Nginx default file without formatting errors in cloud-init - linux

I have a problem with my formatting in YAML. First of all, here is the file:
#cloud-config
runcmd:
- mkdir react
- cd react
- type -p curl >/dev/null || sudo apt install curl -y
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy --token AVYXWHXNRBPIDXJDPUDK6QTD2LIPE
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- gh auth login --hostname github.com --with-token <<< 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 --git-protocol https
- gh repo clone yuuval/react-deploy
- cd react-deploy
- gh workflow run node.js.yml
- sleep 70
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {
listen 80 default_server;
server_name _;
# react app & front-end files
location / {
root /home/ubuntu/react/_work/react-deploy/react-deploy/build;
try_files \$uri /index.html;
}
}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy
- sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy
- sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy/build
The problem is here:
echo "server {
listen 80 default_server;
server_name _;
# react app & front-end files
location / {
root /home/ubuntu/react/_work/react-deploy/react-deploy/build;
try_files \$uri /index.html;
}
}" | sudo tee /etc/nginx/sites-available/default
The lines that go into the default file must have a certain structure. The current cloud-init preserves that structure, but the YAML becomes invalid because the continuation lines do not start in the same column as the "-". Does anyone have an idea how to get around this?
These are the errors from a YAML linter tool:
All mapping items must start at the same column at line 25, column 1
Implicit keys need to be on a single line at line 25, column 3
Implicit map keys need to be followed by map values at line 25, column 3
Unexpected flow-map-end token in YAML stream: "}" at line 32, column 1
Unexpected double-quoted-scalar token in YAML stream: "\" | sudo tee /etc/nginx/sites-available/default\n - sudo service nginx restart\n - sudo chmod +x /home\n - sudo chmod +x /home/ubuntu\n - sudo chmod +x /home/ubuntu/react\n - sudo chmod +x /home/ubuntu/react/_work\n - sudo chmod +x /home/ubuntu/react/_work/react-deploy\n - sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy\n - sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy/build" at line 32, column 2
line 25 is here: listen 80 default_server;
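One way around this in cloud-init is to make the whole command a single YAML literal block scalar (|), so the continuation lines only need to be indented under the list item rather than aligned with the "-". A minimal, untested sketch of just that runcmd entry, using the same server block as above:

runcmd:
- |
  echo "server {
    listen 80 default_server;
    server_name _;
    # react app & front-end files
    location / {
      root /home/ubuntu/react/_work/react-deploy/react-deploy/build;
      try_files \$uri /index.html;
    }
  }" | sudo tee /etc/nginx/sites-available/default

Alternatively, cloud-init's write_files section can write the server block to /etc/nginx/sites-available/default directly, which avoids the shell quoting entirely.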

Related

Cloud-init File Command line option 'S' [from -fsSL] is not understood in combination with the other options

I want to execute this cloud-init file and Terraform file:
Cloud-init:
#cloud-config
runcmd:
- mkdir react
- cd react
- type -p curl >/dev/null || sudo apt install curl -y
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy --token AVYXWHXNRBPIDXJDPUDK6QTD2LIPE
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- gh auth login --hostname github.com --with-token <<< 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 --git-protocol https
- gh repo clone yuuval/react-deploy
- cd react-deploy
- gh workflow run node.js.yml
- sleep 70
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {
listen 80 default_server;
server_name _;
# react app & front-end files
location / {
root /home/ubuntu/react/_work/react-deploy/react-deploy/build;
try_files \$uri /index.html;
}
}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy
- sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy
- sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy/build
The Terraform file isn't relevant, I think. So when I run this whole thing with terraform init and terraform apply, it goes through but nothing happens. In /var/log, in the cloud-init-output file, I found this error:
dd: unrecognized operand ‘ ’
Try 'dd --help' for more information.
E: Command line option 'S' [from -fsSL] is not understood in combination with the other options.
I guess it's from this command, which should install the gh CLI (found here: https://github.com/cli/cli/blob/trunk/docs/install_linux.md):
type -p curl >/dev/null || sudo apt install curl -y
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
If I run this whole cloud-init file manually, it works. So I don't know what else to do.
You seem to be missing \ and && after install curl -y; I just tried this on two WSL machines (that's all I have with me right now) and it was fine there.
So my suspicion is that your curl command got folded into the apt line, since you're not actually running the smaller command and the bigger one as separate commands, even though they should be separate, so maybe give that a shot?
On this weird page (which came up in an exact search), https://ouyen.github.io/github/, the install curl -y part is not there, only the next command, which clearly indicates it is run separately, so I think your issue is just there.
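For what it's worth, the whole gh install step is less fragile when it is written as one YAML literal block (|) with explicit && between the pieces, so nothing can get folded together by the YAML parser. A sketch of that single runcmd entry, untested, with the same URLs and package names as in the question:

- |
  (type -p curl >/dev/null || sudo apt install curl -y) && \
  curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg && \
  sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg && \
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null && \
  sudo apt update && \
  sudo apt install gh -y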

ClamAV docker & GKE deployment error connection ECONNREFUSED when I run docker image

I am trying to build a ClamAV malware scanner docker image that runs behind a squid proxy, and I get:
!NotifyClamd: Can't connect to clamd on 127.0.0.1:3310: Connection refused
and error:
connect ECONNREFUSED 127.0.0.1:3310
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1158:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 3310 }
Stopping ClamAV daemon:
clamd.
Clamav signatures not found in /var/lib/clamav ... failed!
Please retrieve them using freshclam ... failed!
Then run 'invoke-rc.d clamav-daemon start' ... failed!
This is my Dockerfile:
FROM node:17.6.0-bullseye-slim
# Set versions
ENV CLOUD_SDK_VERSION=372.0.0
# Install base packages
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
RUN apt-get update && \
apt-get install -y build-essential clamav-daemon clamav-freshclam curl python3 sudo && \
rm -rf /var/lib/apt/lists/* && \
mkdir -p /usr/local/gcloud && \
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
tar -C /usr/local/gcloud -xvf google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
rm google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
ln -s /lib /lib64 && \
gcloud config set core/disable_usage_reporting true && \
gcloud config set component_manager/disable_update_check true && \
mkdir -p /home/node/app && \
chown -R node:node /home/node/app && \
chmod 777 /var/log/clamav/freshclam.log && \
chmod 777 /var/lib/clamav && \
echo "TCPSocket 3310" >> /etc/clamav/clamd.conf && \
echo "TCPAddr 127.0.0.1" >> /etc/clamav/clamd.conf && \
echo "User node" >> /etc/clamav/clamd.conf && \
echo "DatabaseOwner node" >> /etc/clamav/freshclam.conf && \
echo "HTTPProxyServer squid-proxy.neds.local" >> /etc/clamav/freshclam.conf && \
echo "HTTPProxyPort 3128" >> /etc/clamav/freshclam.conf && \
echo "node ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/node
# Bring in app code
WORKDIR /home/node/app
COPY --chown=node:node . .
# Set up app
RUN npm config set python $(which python3) && \
npm install
# Run the rest as the node user
USER 1000
CMD ["/bin/bash", "bootstrap.sh"]
and this is bootstrap.sh:
#!/bin/bash
sudo service clamav-freshclam stop && \
sudo freshclam && \
sudo service clamav-freshclam start && \
sudo service clamav-daemon force-reload && \
npm start
It fails when I docker run it or when I deploy it on a GKE cluster; all required IPs are whitelisted on the squid.

/bin/sh: passwd: command not found

I tried to execute docker-compose build but am getting the error below.
I'm using CentOS 7 and am completely new to Linux.
/bin/sh: passwd: command not found.
ERROR: Service 'remote_host' failed to build: The command '/bin/sh -c useradd remote_user && echo "welcome1" | passwd remote_user --stdin && mkdir /home/remote_user/.ssh && chmod 700 /home/remote_user/.ssh' returned a non-zero code: 127.
Dockerfile:
FROM centos:latest
RUN yum -y install OpenSSH-server
RUN useradd remote_user && \
echo "welcome1" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
whoami: mosses987
$PATH: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/mosses987/.local/bin:/home/mosses987/bin
Add this line and it's working:
RUN yum install -y passwd
And comment this line:
RUN /usr/sbin/sshd-keygen
This should work:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
#RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
You need to install passwd because the remote_host image does not have it installed. Add the line below before the passwd command.
RUN yum install -y passwd
Add this line:
RUN yum install -y passwd
That should work:
FROM centos:7
RUN yum update -y && \
yum -y install openssh-server && \
yum install -y passwd
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown -R remote_user:remote_user /home/remote_user/.ssh && \
chmod -R 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D

How do I fix my script problem issuing a sed command

I am trying to write my first script, and everything is working fine. It is meant to automatically install a new server. The only problem I have is using sed to change the SSL certificate file; I have followed all the answers available in the forums here but I still can't get it to overwrite. I have used 2 other sed commands and they are working fine.
I am running the script on Ubuntu 16.04 with an Apache2 and PHP 7.0 LAMP stack.
The script completes, but the conf is not rewritten.
This is my script, just in case anything is conflicting:
#!/bin/bash
apt-get -y update
apt-get -y upgrade
apt-get -y install apache2
apt-get install -y php7.0 libapache2-mod-php7.0 php7.0-cli php7.0-common php7.0-mbstring php7.0-gd php7.0-intl php7.0-xml php7.0-mysql php7.0-mcrypt php7.0-zip
echo mysql-server-5.1 mysql-server/root_password password PASSWORD | debconf-set-selections
echo mysql-server-5.1 mysql-server/root_password_again password PASSWORD | debconf-set-selections
apt-get install -y mysql-server
/etc/init.d/mysql restart
a2enmod ssl
a2ensite default-ssl.conf
service apache2 restart
APP_PASS="PASSWORD"
ROOT_PASS="PASSWORD"
APP_DB_PASS="PASSWORD"
echo "phpmyadmin phpmyadmin/dbconfig-install boolean true" | debconf-set-selections
echo "phpmyadmin phpmyadmin/app-password-confirm password $APP_PASS" | debconf-set-selections
echo "phpmyadmin phpmyadmin/mysql/admin-pass password $ROOT_PASS" | debconf-set-selections
echo "phpmyadmin phpmyadmin/mysql/app-pass password $APP_DB_PASS" | debconf-set-selections
echo "phpmyadmin phpmyadmin/reconfigure-webserver multiselect apache2" | debconf-set-selections
apt-get install -y phpmyadmin
sed -i 's/Port 22/Port 4747/g' /etc/ssh/sshd_config
sed -i 's/PermitRootLogin yes/PermitRootLogin no/g' /etc/ssh/sshd_config
service sshd restart
apt-get install vsftpd -y
sed -i 's/root/#root/g' /etc/ftpusers
service vsftpd restart
apt-get install software-properties-common -y
add-apt-repository ppa:certbot/certbot -y
apt-get update -y
apt-get install python-certbot-apache -y
service apache2 stop
certbot certonly --standalone --non-interactive --agree-tos -m EMAIL@mymail.com -d domain.com
adduser --quiet --disabled-password --shell /bin/bash --home /home/USERNAME --gecos "User" USERNAME
echo "USERNAME:PASSWORD" | chpasswd
usermod -aG sudo USERNAME
iptables -I INPUT 1 -p udp -m udp --dport 1900 -j DROP
crontab -l > mycron
echo "#daily letsencrypt renew --quiet && systemctl reload apache2" >> mycron
crontab mycron
rm mycron (WORKS BUT GIVES ERROR no crontab for root)
#sed -i "s|SSLCertificateFile=/etc/ssl/certs/ssl-cert-snakeoil.pem|SSLCertificateFile=/letsencrypt/live/domain.com/fullchain.pem|g" /etc/apache2/sites-enabled/default-ssl.conf (NOT WORKING)
#SSL_DEFAULT_CERT_PATH="SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem"
#SSL_CERT_PATH="SSLCertificateFile /letsencrypt/live/domain.com/fullchain.pem"
#sed -i "s|.*\b$SSL_DEFAULT_CERT_PATH\b.*|$SSL_CERT_PATH|" /etc/apache2/sites-enabled/default-ssl.conf (NOT WORKING)
service apache2 restart
These are the two I have tried, but no luck:
sed -i "s|SSLCertificateFile=/etc/ssl/certs/ssl-cert-snakeoil.pem|SSLCertificateFile=/letsencrypt/live/domain.com/fullchain.pem|g" /etc/apache2/sites-enabled/default-ssl.conf
does not work
SSL_DEFAULT_CERT_PATH="SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem"
SSL_CERT_PATH="SSLCertificateFile /letsencrypt/live/domain.com/fullchain.pem"
sed -i "s|.*\b$SSL_DEFAULT_CERT_PATH\b.*|$SSL_CERT_PATH|" /etc/apache2/sites-enabled/default-ssl.conf
does not work
The original file has: SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
Not sure if the spaces make a difference.
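For comparison, the failing patterns above look for SSLCertificateFile=... with an equals sign, while the original line quoted from the file separates the directive and the path with whitespace. A sed that matches the line as it actually appears (untested sketch, keeping the domain.com placeholder from the question) would be something like:

sed -i 's|SSLCertificateFile[[:space:]]\+/etc/ssl/certs/ssl-cert-snakeoil.pem|SSLCertificateFile /letsencrypt/live/domain.com/fullchain.pem|g' /etc/apache2/sites-enabled/default-ssl.conf

Note that certbot normally writes certificates under /etc/letsencrypt/live/<domain>/, so the replacement path may need adjusting as well.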

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However, it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $
description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"
: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}
depend() {
use logger dns
if [ "${rc_need+set}" = "set" ] ; then
: # Do nothing, the user has explicitly set rc_need
else
local x warn_addr
for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
case "${x}" in
0.0.0.0|0.0.0.0:*) ;;
::|\[::\]*) ;;
*) warn_addr="${warn_addr} ${x}" ;;
esac
done
if [ -n "${warn_addr}" ] ; then
need net
ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
ewarn "where FOO is the interface(s) providing the following address(es):"
ewarn "${warn_addr}"
fi
fi
}
checkconfig() {
if [ ! -d /var/empty ] ; then
mkdir -p /var/empty || return 1
fi
if [ ! -e "${SSHD_CONFIG}" ] ; then
eerror "You need an ${SSHD_CONFIG} file to run sshd"
eerror "There is a sample file in /usr/share/doc/openssh"
return 1
fi
if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
ssh-keygen -A || return 1
fi
[ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
&& SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
[ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
&& SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
"${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}
start() {
checkconfig || return 1
ebegin "Starting ${SVCNAME}"
start-stop-daemon --start --exec "${SSHD_BINARY}" \
--pidfile "${SSHD_PIDFILE}" \
-- ${SSHD_OPTS}
eend $?
}
stop() {
if [ "${RC_CMD}" = "restart" ] ; then
checkconfig || return 1
fi
ebegin "Stopping ${SVCNAME}"
start-stop-daemon --stop --exec "${SSHD_BINARY}" \
--pidfile "${SSHD_PIDFILE}" --quiet
eend $?
if [ "$RC_RUNLEVEL" = "shutdown" ]; then
_sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
if [ -n "$_sshd_pids" ]; then
ebegin "Shutting down ssh connections"
kill -TERM $_sshd_pids >/dev/null 2>&1
eend 0
fi
fi
}
reload() {
checkconfig || return 1
ebegin "Reloading ${SVCNAME}"
start-stop-daemon --signal HUP \
--exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
eend $?
}
A container is not a fully installed environment.
The official documentation is for Alpine installed on an actual machine, with power-on, boot-up services, etc., which a container does not have.
So anything in /etc/init.d/ cannot be used directly in a container, because those scripts are run by the boot-up service (like systemd, or Alpine's rc*). That's why you got the error message: the rc* tooling isn't installed in the container.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
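As an illustration of "start sshd manually", a minimal Alpine Dockerfile along those lines (an untested sketch, not the linked image's exact content; user accounts and authorized_keys handling are deliberately left out, so one of the approaches below is still needed to actually log in) could be:

FROM alpine
# sshd refuses to start without host keys, so generate them at build time
RUN apk add --no-cache openssh-server && \
    ssh-keygen -A
EXPOSE 22
# run sshd in the foreground as the container's main process, logging to stderr
CMD ["/usr/sbin/sshd", "-D", "-e"]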
Although some details are still not clear to me, let me add my voice to the discussion. The solution specified by the configuration below works for me. It's the result of arduous experiments.
First, the Dockerfile:
FROM alpine
RUN apk update && \
apk add --no-cache sudo bash openrc openssh
RUN mkdir -p /run/openrc && \
touch /run/openrc/softlevel && \
rc-update add sshd default
RUN adduser --disabled-password reguser && \
sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers
VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]
USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution
ADD ./entrypoint.sh /home/reguser/solution/
EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are at the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerRegUser/sshdImg:0.0.1' --file='./dockerfile' .
docker container create --tty \
--volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
--name sshdCnt 'dockerRegUser/sshdImg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
sleep 5 && \
ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs. The example also goes against the single-service docker container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt one into) considering extending a container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try to run these commands:
apk add --no-cache openrc
rc-update add sshd
Check first whether sshd is present in /usr/bin or /usr/sbin.
Then, /etc/init.d should have sshd only if you set it up to start automatically with:
rc-update add sshd
rc-status
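A quick way to do those checks from a shell inside the container (a hypothetical session, package names as in this answer) might be:

command -v sshd            # prints /usr/sbin/sshd once openssh is installed
ls /etc/init.d/sshd        # the init script installed along with openssh, as in the question
apk add --no-cache openrc  # rc-update / rc-status come from openrc
rc-update add sshd
rc-status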
I needed sshd for a very specific reason. I had to run front-end (Cypress) and back-end (Django) tests on a CI server. Running them in one container is tricky to say the least, so I decided to go with 2 containers. Also, there had to be one entrypoint that would run the tests in both containers. So the idea was that one container would run its tests, then run the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting an empty root password and an empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A
passwd -d root
mkdir ~/.ssh
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''
cp ~/.ssh/id_rsa.pub .
wait4ports -s 1 tcp://c1:22
ssh-keyscan -t rsa c1 > ~/.ssh/known_hosts
ssh c1 echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
server:
image: alpine:3.12
command: sh -c 'cd app && ./server.sh'
volumes:
- .:/app
client:
image: alpine:3.12
command: sh -c 'cd app && ./client.sh'
volumes:
- .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server in your Alpine docker container, try this Dockerfile.
In this example, I am using the docker:dind image:
FROM docker:dind
# Setup SSH Service
RUN \
apk update && \
apk add openrc --no-cache && \
apk add openssh-server && \
rc-update add sshd && \
rc-status && \
touch /run/openrc/softlevel
# Expose port for ssh
EXPOSE 22
# Start SSH Service
CMD ["sh" , "-c", "service sshd restart && sh"]
Once your container is up and running, try running this command to make sure ssh works fine:
ssh localhost
