Makefile:108: *** recipe commences before first target - linux

GNU Make 4.1 Built for x86_64-pc-linux-gnu
Below is the Makefile:
# Project variables
PROJECT_NAME ?= todobackend
ORG_NAME ?= shamdockerhub
REPO_NAME ?= todobackend
# File names
DEV_COMPOSE_FILE := docker/dev/docker-compose.yml
REL_COMPOSE_FILE := docker/release/docker-compose.yml
# Docker compose project names
REL_PROJECT := $(PROJECT_NAME)$(BUILD_ID)
DEV_PROJECT := $(REL_PROJECT)dev
# Check and inspect logic
INSPECT := $$(docker-compose -p $$1 -f $$2 ps -q $$3 | xargs -I ARGS docker inspect -f "{{ .State.ExitCode }}" ARGS)
CHECK := @bash -c '\
if [[ $(INSPECT) -ne 0 ]]; \
then exit $(INSPECT); fi' VALUE
# Use these settings to specify a custom Docker registry
DOCKER_REGISTRY ?= docker.io
APP_SERVICE_NAME := app
.PHONY: test build release clean tag
test: # Run unit & integration test cases
${INFO} "Pulling latest images..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) pull
${INFO} "Building images..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) build cache
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) build --pull test
${INFO} "Ensuring database is ready..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) run --rm agent
${INFO} "Running tests..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) up test
@docker cp $$(docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) ps -q test):/reports/. reports
${CHECK} ${DEV_PROJECT} ${DEV_COMPOSE_FILE} test
${INFO} "Testing complete"
build: # Create deployable artifact and copy to ../target folder
${INFO} "Creating builder image..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) build builder
${INFO} "Building application artifacts..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) up builder
${CHECK} ${DEV_PROJECT} ${DEV_COMPOSE_FILE} builder
${INFO} "Copying artifacts to target folder..."
@docker cp $$(docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) ps -q builder):/wheelhouse/. target
${INFO} "Build complete"
release: # Creates release environment, bootstrap the environment
${INFO} "Building images..."
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) build webroot
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) build app
${INFO} "Ensuring database is ready..."
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) run --rm agent
${INFO} "Collecting static files..."
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) run --rm app manage.py collectstatic --noinput
${INFO} "Running database migrations..."
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) run --rm app manage.py migrate --noinput
${INFO} "Pull external image and build..."
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) build --pull nginx
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) pull test
${INFO} "Running acceptance tests..."
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) up test
${CHECK} $(REL_PROJECT) $(REL_COMPOSE_FILE) test
@docker cp $$(docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) ps -q test):/reports/. reports
${INFO} "Acceptance testing complete"
clean:
${INFO} "Destroying development environment..."
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) kill
@docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) rm -f -v
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) kill
@docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) rm -f -v
@docker images -q -f dangling=true -f label=application=$(REPO_NAME) | xargs -I ARGS docker rmi -f ARGS
${INFO} "Clean complete"
tag:
$(INFO) "Tagging release image with tags $(TAG_ARGS)"
@$(foreach tag, $(TAG_ARGS), docker tag $(IMAGE_ID) $(DOCKER_REGISTRY)/$(ORG_NAME)/$(REPO_NAME):$(tag);)
${INFO} "Tagging complete"
# Cosmetics
YELLOW := "\e[1;33m"
NC := "\e[0m"
# Shell functions
INFO := @bash -c '\
printf $(YELLOW); \
echo "=> $$1"; \
printf $(NC)' VALUE
# Get container id of application service container
APP_CONTAINER_ID := $$(docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) ps -q $(APP_SERVICE_NAME))
# Get image id of application service
IMAGE_ID := $$(docker inspect -f '{{ .Image }}' $(APP_CONTAINER_ID))
# Extract tag arguments
ifeq (tag, $(firstword $(MAKECMDGOALS)))
TAG_ARGS := $(wordlist 2, $(words $(MAKECMDGOALS)), $(MAKECMDGOALS))
ifeq ($(TAG_ARGS),)
$(error You must specify a tag)
endif
$(eval $(TAG_ARGS):;@:) # line 108: do not interpret "0.1 latest whatever" as make target files
endif
Below is the error when running the make command:
$ make tag 0.1 latest $(git rev-parse --short HEAD)
Makefile:108: *** recipe commences before first target. Stop.
The purpose of $(eval $(TAG_ARGS):;@:) on line 108 is to convey that 0.1, latest, and $(git rev-parse --short HEAD) are not make targets.
Why does $(eval $(TAG_ARGS):;@:) give this error?

That particular error happens because your $(eval ...) line is indented by a TAB (something that is hidden by this horribly broken web interface).
Example:
$ make -f <(printf '\t$(eval foo:;echo yup)')
/dev/fd/63:1: *** recipe commences before first target. Stop.
# now with spaces instead of TAB
$ make -f <(printf ' $(eval foo:;echo yup)')
echo yup
yup
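You can make the invisible TAB visible with cat -A, which prints TABs as ^I; a sketch (the line number and its contents below are just illustrative):
$ cat -A Makefile | sed -n '108p'
^I$(eval $(TAG_ARGS):;@:) # line 108 ...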
The error is documented in the make manual:
recipe commences before first target. Stop.
This means the first thing in the makefile seems to be part of a
recipe: it begins with a recipe prefix character and doesn't appear
to be a legal make directive (such as a variable assignment).
Recipes must always be associated with a target.
The "recipe prefix character" is TAB by default.
$ make -f <(printf '\tfoo')
/dev/fd/63:1: *** recipe commences before first target. Stop.
It doesn't have to be the "first thing in the makefile", though: the same error will trigger after a number of rules, if preceded by a directive like a macro assignment or such:
$ make -f <(printf 'all:;\nkey=val\n\tfoo')
/dev/fd/63:3: *** recipe commences before first target. Stop.
And even if a macro expands to an empty string, GNU make will not treat a line containing only macros that expand to empty strings as an empty line:
$ make -f <(printf '\t\nfoo:;@:')
$ make -f <(printf '\t$(info foo)\nfoo:;@:')
/dev/fd/63:1: *** recipe commences before first target. Stop.
$ make -f <(printf ' $(info foo)\nfoo:;@:')
foo
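Incidentally, GNU make 3.82 and later let you change the recipe prefix character with .RECIPEPREFIX, which sidesteps invisible-TAB problems entirely; for example:
$ make -f <(printf '.RECIPEPREFIX = >\nfoo:\n>echo yup')
echo yup
yup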

I can't reproduce this problem. I put your last ifeq statement into a makefile and it works fine for me with GNU make 4.1 and 4.2.1. There must be something more unusual about your situation.
The classic way to debug issues with eval is to duplicate the line and replace the eval with info; this way make will print out exactly what it sees. Often this will show you what is wrong.
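For example, a sketch of that technique applied to this Makefile; the info line prints the exact text the eval would parse:
$(info $(TAG_ARGS):;@:)
$(eval $(TAG_ARGS):;@:)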
There are other confusing things about this makefile.
First, why are you using eval here in the first place? Why not just write the rule directly? There's nothing wrong with:
$(TAG_ARGS):;@:
no need to wrap it in an eval.
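That is, the whole extraction block can carry the rule directly; a sketch of the same logic without eval:
ifeq (tag, $(firstword $(MAKECMDGOALS)))
TAG_ARGS := $(wordlist 2, $(words $(MAKECMDGOALS)), $(MAKECMDGOALS))
ifeq ($(TAG_ARGS),)
$(error You must specify a tag)
endif
$(TAG_ARGS):;@:
endif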
Second, why are you using := then escaping the variables? Why not just use = instead and not bother with the escapes?
INSPECT = $(docker-compose -p $1 -f $2 ps -q $3 | xargs -I ARGS docker inspect -f "{{ .State.ExitCode }}" ARGS)
works just fine.
Finally, I strongly urge you to not add @ to your recipes. It makes debugging makefiles very difficult and frustrating. Instead consider using a method such as Managing Recipe Echoing to handle this.
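One common pattern in that spirit (a sketch, with VERBOSE as an assumed knob, not a quote from that article) is to hide the @ behind a variable so echoing can be switched back on for debugging:
# invoke as "make test VERBOSE=1" to see every command
ifndef VERBOSE
Q := @
endif
test:
	$(Q)docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) pull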

Docker run is unable to locate the directory

I've built a Docker image with docker build -t dockeragent:latest . but can't seem to get the container to run. The command docker run -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest produces the following error: exec ./start.sh: no such file or directory.
I understand that the start.sh script is called by the Dockerfile, and I've ensured that the Dockerfile is in the same directory as the start.sh script. I've also tested referencing the start.sh script by using interpolation to point to its absolute path. Example:
ENTRYPOINT [ "${pwd}/start.sh" ]
Any ideas on what parameter has been misconfigured? The files are directly copied from Microsoft's guide on building self-hosted agents with Docker.
For reference, please see the below Dockerfile and associated start.sh
Dockerfile
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
    apt-transport-https \
    apt-utils \
    ca-certificates \
    curl \
    git \
    iputils-ping \
    jq \
    lsb-release \
    software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
start.sh
#!/bin/bash
set -e

if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    # If the agent has some running jobs, the configuration removal process will fail.
    # So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break
      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

print_header "1. Determining matching Azure Pipelines agent..."

AZP_AGENT_PACKAGES=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")

AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')

if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
  echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi

print_header "2. Downloading and extracting Azure Pipelines agent..."

curl -LsS $AZP_AGENT_PACKAGE_LATEST_URL | tar -xz & wait $!

source ./env.sh

print_header "3. Configuring Azure Pipelines agent..."

./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!

print_header "4. Running Azure Pipelines agent..."

trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

chmod +x ./run-docker.sh

# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
Thanks in advance!
Check your Dockerfile and start.sh against the official sample; the settings there are correct.
Refer to this doc: Linux Docker container agent.
The doc says: "Save the following content to ~/dockeragent/start.sh, making sure to use Unix-style (LF) line endings."
start.sh needs Unix-style (LF) line endings when creating a Linux Docker container agent.
When you create start.sh on a Windows system, it will get Windows (CRLF) line endings.
You can convert start.sh from Windows CRLF to Linux LF with an online converter such as "LF and CRLF converter online". Then you can run the same command to create the pipeline agent.
Or you can create the files directly on a Linux system.
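You can also check and fix the endings locally; a sketch assuming GNU tools (dos2unix works as well):
$ file start.sh
start.sh: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
$ sed -i 's/\r$//' start.sh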
You can also override the entrypoint to get a shell in the container and check the start.sh file yourself (note that --entrypoint must come before the image name):
docker run -it --entrypoint sh -e AZP_URL=<obfuscate> -e AZP_TOKEN=<obfuscate> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest

gitlab ci/cd ERROR: Job failed: exit code 1 on chmod og= $STAGE_ID_RSA

This is my ci.yml:
deploy-stage:
  stage: deploy
  image: alpine:latest
  script:
    - chmod og= $STAGE_ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker stop $CI_PROJECT_NAME || true"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker rm $CI_PROJECT_NAME || true"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker image rm $CI_REGISTRY_IMAGE:latest || true"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker pull $CI_REGISTRY_IMAGE:latest"
    - ssh -i $STAGE_ID_RSA -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker run -d --restart unless-stopped --name $CI_PROJECT_NAME -p 8882:4000 -e "variableData=Docker-Run-Command" $CI_REGISTRY_IMAGE:latest"
I get this error:
BusyBox v1.35.0 (2022-05-09 17:27:12 UTC) multi-call binary.
Usage: chmod [-Rcvf] MODE[,MODE]... FILE...
MODE is octal number (bit pattern sstrwxrwxrwx) or [ugoa]{+|-|=}[rwxXst]
-R Recurse
-c List changed files
-v Verbose
-f Hide errors
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
And this is my variable:
https://i.stack.imgur.com/zegmz.png
And I entered a line break at the end of the STAGE_ID_RSA value.
First, you can use chmod og= -- $STAGE_ID_RSA to make sure chmod will not treat $STAGE_ID_RSA as an option, but as a parameter.
Second, chmod [OPTIONS] [ugoa…][-+=]perms…[,…] FILE... means the chmod applies to a file.
If $STAGE_ID_RSA represents a value, that would not work.
You would need to redirect its content to a file.
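A minimal sketch of that (assuming STAGE_ID_RSA holds the key material itself rather than a path; stage_id_rsa is just an illustrative file name):
    - echo "$STAGE_ID_RSA" > stage_id_rsa
    - chmod og= -- stage_id_rsa
    - ssh -i stage_id_rsa -o StrictHostKeyChecking=no $STAGE_SERVER_USER@$STAGE_SERVER_IP "docker ps"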
Yet, I do see STAGE_ID_RSA here, used as you do in .gitlab-ci.yml, but using the kroniak/ssh-client rather than alpine:latest.
The chmod in alpine:latest might not behave like a chmod in a more traditional image.
See "Linux commands in Alpine docker image +755 invalid mode".
That is most likely your issue.

ls sort order inside container

Running ls -d to list directories prints them in a different order when a trailing / is present in the name. Why is that? What sorting rules apply? And why does this happen only with Docker?
With trailing /
$ docker run --rm ubuntu:16.04 /bin/bash -c "mkdir foo ; mkdir foo-bar ; ls -d foo/ foo-bar/"
foo-bar/
foo/
Without trailing /
$ docker run --rm -it ubuntu:16.04 /bin/bash -c "mkdir foo ; mkdir foo-bar ; ls -d foo foo-bar"
foo
foo-bar
I found out I get the same behavior using the sort command:
docker run --rm ubuntu:16.04 /bin/bash -c "echo -e 'foo/\nfoo-bar/' | sort"
But the sorting order changes when using sort -d
docker run --rm ubuntu:16.04 /bin/bash -c "echo -e 'foo/\nfoo-bar/' | sort -d"
Thanks to David for pointing me in the right direction: this is caused by the locale settings, as described here.
On a bare Ubuntu container, the POSIX locale is used, which has different sorting rules than en_US.
I solved my problem by installing the en_US locale in the Docker image, and sorting works as expected again.
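A minimal sketch of that fix (assuming ubuntu:16.04 and the en_US.UTF-8 locale):
FROM ubuntu:16.04
# generate the locale so collation follows en_US rules instead of POSIX byte order
RUN apt-get update && apt-get install -y locales && locale-gen en_US.UTF-8
ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8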

Running OpenSSH in an Alpine Docker Container

I've installed OpenSSH and now I wish to run it as described in the documentation by running /etc/init.d/sshd start. However, it does not start:
/ # /etc/init.d/sshd start
/bin/ash: /etc/init.d/sshd: not found
Thoughts?
P.S.
/ # ls -la /etc/init.d/sshd
-rwxr-xr-x 1 root root 2622 Jan 14 20:48 /etc/init.d/sshd
Contents of /etc/init.d/sshd:
#!/sbin/openrc-run
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/openssh/files/sshd.rc6.4,v 1.5 2015/05/04 02:56:25 vapier Exp $

description="OpenBSD Secure Shell server"
description_checkconfig="Verify configuration file"
description_reload="Reload configuration"
extra_commands="checkconfig"
extra_started_commands="reload"

: ${SSHD_CONFDIR:=/etc/ssh}
: ${SSHD_CONFIG:=${SSHD_CONFDIR}/sshd_config}
: ${SSHD_PIDFILE:=/var/run/${SVCNAME}.pid}
: ${SSHD_BINARY:=/usr/sbin/sshd}

depend() {
  use logger dns
  if [ "${rc_need+set}" = "set" ] ; then
    : # Do nothing, the user has explicitly set rc_need
  else
    local x warn_addr
    for x in $(awk '/^ListenAddress/{ print $2 }' "$SSHD_CONFIG" 2>/dev/null) ; do
      case "${x}" in
        0.0.0.0|0.0.0.0:*) ;;
        ::|\[::\]*) ;;
        *) warn_addr="${warn_addr} ${x}" ;;
      esac
    done
    if [ -n "${warn_addr}" ] ; then
      need net
      ewarn "You are binding an interface in ListenAddress statement in your sshd_config!"
      ewarn "You must add rc_need=\"net.FOO\" to your /etc/conf.d/sshd"
      ewarn "where FOO is the interface(s) providing the following address(es):"
      ewarn "${warn_addr}"
    fi
  fi
}

checkconfig() {
  if [ ! -d /var/empty ] ; then
    mkdir -p /var/empty || return 1
  fi
  if [ ! -e "${SSHD_CONFIG}" ] ; then
    eerror "You need an ${SSHD_CONFIG} file to run sshd"
    eerror "There is a sample file in /usr/share/doc/openssh"
    return 1
  fi
  if ! yesno "${SSHD_DISABLE_KEYGEN}"; then
    ssh-keygen -A || return 1
  fi
  [ "${SSHD_PIDFILE}" != "/var/run/sshd.pid" ] \
    && SSHD_OPTS="${SSHD_OPTS} -o PidFile=${SSHD_PIDFILE}"
  [ "${SSHD_CONFIG}" != "/etc/ssh/sshd_config" ] \
    && SSHD_OPTS="${SSHD_OPTS} -f ${SSHD_CONFIG}"
  "${SSHD_BINARY}" -t ${SSHD_OPTS} || return 1
}

start() {
  checkconfig || return 1
  ebegin "Starting ${SVCNAME}"
  start-stop-daemon --start --exec "${SSHD_BINARY}" \
    --pidfile "${SSHD_PIDFILE}" \
    -- ${SSHD_OPTS}
  eend $?
}

stop() {
  if [ "${RC_CMD}" = "restart" ] ; then
    checkconfig || return 1
  fi
  ebegin "Stopping ${SVCNAME}"
  start-stop-daemon --stop --exec "${SSHD_BINARY}" \
    --pidfile "${SSHD_PIDFILE}" --quiet
  eend $?
  if [ "$RC_RUNLEVEL" = "shutdown" ]; then
    _sshd_pids=$(pgrep "${SSHD_BINARY##*/}")
    if [ -n "$_sshd_pids" ]; then
      ebegin "Shutting down ssh connections"
      kill -TERM $_sshd_pids >/dev/null 2>&1
      eend 0
    fi
  fi
}

reload() {
  checkconfig || return 1
  ebegin "Reloading ${SVCNAME}"
  start-stop-daemon --signal HUP \
    --exec "${SSHD_BINARY}" --pidfile "${SSHD_PIDFILE}"
  eend $?
}
A container is not a fully installed environment.
The official documentation is for Alpine installed on a real machine, with power-on, boot-up services, etc. that a container does not have.
So anything in /etc/init.d/ cannot be used directly in a container: those scripts are run by the boot-up service (systemd, or Alpine's rc*), and rc* isn't installed in the container, which is why you got the "not found" error.
What you need to do is start sshd manually.
You can take a look at the example below:
https://hub.docker.com/r/danielguerra/alpine-sshd/~/dockerfile/
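A minimal sketch of starting sshd manually (you still need to set up users and keys before you can actually log in):
FROM alpine
RUN apk add --no-cache openssh-server && ssh-keygen -A
EXPOSE 22
# run sshd in the foreground as PID 1, logging to stderr
CMD ["/usr/sbin/sshd", "-D", "-e"]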
Although some details are still not clear to me, let me add my voice to the discussion. The solution specified by the configuration below works for me. It's the result of arduous experiments.
First, the Dockerfile:
FROM alpine

RUN apk update && \
    apk add --no-cache sudo bash openrc openssh

RUN mkdir -p /run/openrc && \
    touch /run/openrc/softlevel && \
    rc-update add sshd default

RUN adduser --disabled-password reguser && \
    sh -c 'echo "reguser:<encoded_passwd>"' | chpasswd -e > /dev/null 2>&1 && \
    sh -c 'echo "reguser ALL=NOPASSWD: ALL"' >> /etc/sudoers

VOLUME ["/home/reguser/solution/entrypoint-init.d","/sys/fs/cgroup"]

USER reguser
WORKDIR /home/reguser
RUN mkdir -p $HOME/solution && sudo chown reguser:reguser $HOME/solution
ADD ./entrypoint.sh /home/reguser/solution/
EXPOSE 22
ENTRYPOINT ["./solution/entrypoint.sh"]
CMD ["/bin/bash"]
Next, /home/reguser/solution/entrypoint.sh
#!/bin/bash
for f in ./solution/entrypoint-init.d/*; do
  case "$f" in
    *.sh) echo "$0: running $f"; . "$f" ;;
    *) echo "$0: ignoring $f" ;;
  esac
  echo
done
exec "$@"
Next, /home/reguser/solution/entrypoint-init.d/10-ssh-up.sh
#!/bin/bash
sudo sed --in-place --expression='/^#[[:space:]]*Port[[:space:]]\+22$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*AddressFamily[[:space:]]\+any$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostKey[[:space:]]\+\/etc\/ssh\/ssh_host_rsa_key$/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]].*/ s/^[[:space:]]*\(HostbasedAuthentication\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*HostbasedAuthentication[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]].*/ s/^[[:space:]]*\(IgnoreRhosts\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*IgnoreRhosts[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]].*/ s/^[[:space:]]*\(PasswordAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PasswordAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]].*/ s/^[[:space:]]*\(PubkeyAuthentication\)[[:space:]]\(.*\)/\1 yes/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PubkeyAuthentication[[:space:]]\+no.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^#[[:space:]]*PrintMotd[[:space:]].*/ s/^#//i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]].*/ s/^[[:space:]]*\(PrintMOTD\)[[:space:]]\(.*\)/\1 no/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='/^[[:space:]]*PrintMotd[[:space:]]\+yes.*/ s/^/#/i' -- /etc/ssh/sshd_config
sudo sed --in-place --expression='$ a\' --expression='\nAcceptEnv LANG LC_\*' -- /etc/ssh/sshd_config
sudo /etc/init.d/sshd --dry-run start
sudo /etc/init.d/sshd start
The last two lines are at the heart of the trick. In particular, the sudo /etc/init.d/sshd --dry-run start makes the solution work.
Finally, the command-line controls:
docker build --tag='dockerRegUser/sshdImg:0.0.1' --file='./dockerfile' .
docker container create --tty \
  --volume $(pwd)/dock/entrypoint-init.d:/home/reguser/solution/entrypoint-init.d:ro \
  --name sshdCnt 'dockerRegUser/sshdImg:0.0.1' tail -f /dev/null
docker start sshdCnt && \
  ssh-keygen -f "/home/user/.ssh/known_hosts" -R "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)" && \
  sleep 5 && \
  ssh-copy-id -i ~/.ssh/sshkey reguser@$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshdCnt)
I know, I know, there are a lot of unnecessary constructs. The example also goes against the single-service-per-container principle. But there are phases and situations in the solution development and delivery lifecycle that justify (or at least tempt one into) extending a container with sshd or other openrc-controlled services.
/etc/init.d/sshd: not found
Try to run these commands:
apk add --no-cache openrc
rc-update add sshd
First check whether sshd is present in /usr/bin or /usr/sbin.
Then /etc/init.d will contain sshd only if you set it up to start automatically with:
rc-update add sshd
rc-status
I needed sshd for a very specific reason. I had to run front-end (cypress) and back-end (django) tests on a CI server. Running them in one container is tricky at the least, so I decided to go with two containers. Also, there had to be one entrypoint that would run the tests in both containers. So the idea was that one container runs its tests, then runs the tests in the other container over ssh.
In your case, you might not want to do exactly as I did, e.g. setting empty root password, empty passphrase.
It's best to run it in a separate directory, since it creates files (id_rsa.pub).
server.sh:
#!/bin/sh -eux
apk add openssh-server
ssh-keygen -A    # generate all missing host keys
passwd -d root   # empty root password (test-only setup)
mkdir ~/.ssh
# wait for the client to drop its public key into the shared directory
while ! [ -e id_rsa.pub ]; do sleep 1; done
cp id_rsa.pub ~/.ssh/authorized_keys
/usr/sbin/sshd -De
client.sh:
#!/bin/sh -eux
apk add openssh-client wait4ports
ssh-keygen -f ~/.ssh/id_rsa -N ''   # new key pair, empty passphrase
cp ~/.ssh/id_rsa.pub .              # share the public key via the mounted directory
wait4ports -s 1 tcp://server:22     # wait until sshd is up in the server container
ssh-keyscan -t rsa server > ~/.ssh/known_hosts
ssh server echo DO SOMETHING
echo done
docker-compose.yml:
version: '3'
services:
  server:
    image: alpine:3.12
    command: sh -c 'cd app && ./server.sh'
    volumes:
      - .:/app
  client:
    image: alpine:3.12
    command: sh -c 'cd app && ./client.sh'
    volumes:
      - .:/app
$ docker-compose up -d && docker-compose logs -f
If you decide to run it again:
$ rm -f id_rsa.pub && docker-compose down && docker-compose up -d && docker-compose logs -f
If you want to set up an OpenSSH server in your Docker container with Alpine, try this Dockerfile.
In this example, I am using the docker:dind image:
FROM docker:dind

# Setup SSH Service
RUN \
    apk update && \
    apk add openrc --no-cache && \
    apk add openssh-server && \
    rc-update add sshd && \
    rc-status && \
    touch /run/openrc/softlevel

# Expose port for ssh
EXPOSE 22

# Start SSH Service
CMD ["sh", "-c", "service sshd restart && sh"]
Once your container is up and running, try running this command to make sure ssh works fine:
ssh localhost

setting up variables in alias is not working

I want my alias to pick up variables that I pass on the command line.
For example, startdocker 004 should execute the following:
docker run -d -t -p 8080:8080 -v /var/source:/source -P mydockerregistry:5000/foo/bar:004
(004 is the tag I am passing on the command line.)
I tried the following in my .bashrc (sourced after the change).
alias startdocker='function dockerStartTag(){ echo "docker run -d -t -p 8080:8080 -v /var/source:/source -P mydockerregistry:5000/foo/bar: $1";};dockerStartTag'
When I run startdocker 004, it is not picking up 004.
The tag is empty, and Docker hence defaults it to "latest".
What is the correct way to do it? What am I doing wrong?
Thanks
EDIT
I don't want to echo the command but execute it.
The correct answer is provided below
There's no reason to put a function inside an alias. Just do
function startdocker(){ echo "docker run -d -t -p 8080:8080 -v /var/source:/source -P mydockerregistry:5000/foo/bar:$1"; }
This now works on my system:
$ startdocker 004
docker run -d -t -p 8080:8080 -v /var/source:/source -P mydockerregistry:5000/foo/bar:004
And to actually run it, we just need:
function startdocker(){ docker run -d -t -p 8080:8080 -v /var/source:/source -P "mydockerregistry:5000/foo/bar:$1"; }
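For a bit more safety you can also refuse to run without a tag (a small sketch; the usage message is just illustrative):
function startdocker() {
    # avoid silently falling back to the implicit "latest" tag
    [ -n "$1" ] || { echo "usage: startdocker <tag>" >&2; return 1; }
    docker run -d -t -p 8080:8080 -v /var/source:/source -P "mydockerregistry:5000/foo/bar:$1"
}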
