Why use sleep after chmod in a Dockerfile - azure

The Azure documentation gives instructions on how to enable SSH in a custom container. It suggests adding these commands to my Dockerfile:
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
RUN apk add openssh \
&& echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
# Open port 2222 for SSH access
EXPOSE 80 2222
Why is there a sleep 1 after the chmod +x command? I know it's not harmful, but I'd really like to understand why it's there.

The sleep 1 command pauses execution for one second before continuing. It is a common, if crude, way to give the system time to finish or settle after a previous operation before the next one starts.
In this case, chmod +x makes the ssh_setup.sh script executable, and the sleep 1 is presumably there to give the filesystem a moment to register that change before the script is run.
Keep in mind that this is just defensive: the sleep 1 is probably not necessary in most cases. It is included to avoid any issue that might arise if the script were executed before the system had fully processed the previous command.
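For comparison, a minimal sketch of the same step with the pause dropped (assuming, as the documentation implies, that ssh_setup.sh only prepares sshd and nothing needs to settle first):
# note: "2>&1 > /dev/null" silences stdout only; stderr still reaches the build log
RUN chmod +x /tmp/ssh_setup.sh \
&& /tmp/ssh_setup.sh 2>&1 > /dev/null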

Related

docker RUN mkdir does not work when folder exist in prev image

The only difference between them is that the "dev" folder already exists in the centos image.
Check the comments in this piece of code (seen while executing docker build); I'd appreciate it if anyone could explain why.
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
# below works well!
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
# below fails with: /bin/sh: line 0: cd: dev/script: No such file or directory
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
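If the intent was to change the working directory for later steps, the usual alternatives (not part of the original question) are to chain the cd with the command that needs it inside a single RUN, or to use WORKDIR:
# chain the cd with the command that needs it, inside one shell
RUN mkdir -p /oop/script && cd /oop/script && pwd
# or let Docker set the working directory for all later instructions
WORKDIR /oop/script
ADD text.txt .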
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime via your CMD or ENTRYPOINT copy it into an appropriate location in /dev.
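A minimal sketch of that approach (file names and the final command, "my-app", are illustrative placeholders, not from the original question):
# Dockerfile: stage the file in a persistent location at build time
COPY text.txt /docker/text.txt
# at container start, recreate the path under /dev, copy the file in,
# then hand off to the real command ("my-app" is a placeholder)
CMD ["sh", "-c", "mkdir -p /dev/script && cp /docker/text.txt /dev/script/ && exec my-app"]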

What is the proper way to run a cronjob as a non-root user in a Docker image?

It's been a really long time since I've posted something here. I've been struggling with this for a while now and thought it would be the perfect time to come here.
I need a container image that will execute a cron job. The issue is that I need to run the container as a non-root user due to security reasons and best practices; however, the default crond (busybox) won't run as non-root. Therefore, I decided to use dcron, which is a lightweight cron daemon.
Dockerfile:
FROM alpine:3.12.1
RUN apk --no-cache add dcron
RUN adduser -S 11111 -u 11111 -G cron -s /bin/ash && \
chgrp cron /usr/sbin/crond && \
chmod 4770 /usr/sbin/crond
RUN echo "* * * * * date >> /tmp/log/test 2>&1" >> /etc/crontabs/11111
RUN chown 11111 /etc/crontabs/11111
USER 11111
CMD ["crond", "-f"]
Problem:
When I run this container, I get the following output: setpgid: Operation not permitted.
Interestingly though, if I omit the CMD from the Dockerfile and run crond -f inside the shell it will work just fine!
Any suggestions would be highly appreciated!
So after some investigation, I ended up not using cron at all and doing something simple instead. It is not an ideal solution, but it does what I need at the moment. I am going to share it here, but please do share a proper way to solve this if you know one. For example, with the following, a 'job' will run every 12 hours:
Dockerfile:
FROM alpine:3.12.1
COPY entrypoint.sh /usr/local/bin
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
entrypoint.sh:
#!/usr/bin/env ash
while true; do
echo "$(date) Hello there!"
sleep "12h"
done
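For completeness, a hypothetical build and non-root run of that image (the image name is a placeholder) would be:
docker build -t simple-scheduler .
# run as an arbitrary non-root UID; the loop needs no special privileges
docker run --user 11111 simple-scheduler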
Additional information
Some other solutions that you might find interesting:
https://github.com/aptible/supercronic/
https://hub.docker.com/r/blacklabelops/logrotate
https://github.com/blacklabelops/logrotate
https://medium.com/@geekidea_81313/running-cron-jobs-as-non-root-on-alpine-linux-e5fa94827c34
Some of the suggestions like (3) didn't work, and (1,2) were a little too complex for what I needed.
If your Docker container runs as a non-root user, you cannot start the cron daemon, because it requires root privileges to start.
It is worth exploring https://github.com/aptible/supercronic/, which provides a cron-equivalent scheduler that can run without root privileges.
Build from Source:
go get -d github.com/aptible/supercronic
cd "${GOPATH}/src/github.com/aptible/supercronic"
go mod vendor
go install
Download Binaries:
https://github.com/aptible/supercronic/releases
For better performance, you can store the binaries in Artifactory and pull them in during docker build.
Hack:
You can also cross-compile and generate the binaries yourself.
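Putting the pieces together, a rough sketch of how supercronic could slot into the original Alpine image (the binary is assumed to be downloaded next to the Dockerfile beforehand; the user name, UID, and paths are placeholders):
FROM alpine:3.12.1
# assumption: the supercronic release binary was downloaded next to this Dockerfile
COPY supercronic /usr/local/bin/supercronic
RUN chmod +x /usr/local/bin/supercronic \
&& adduser -S -u 11111 scheduler \
&& echo "* * * * * date >> /tmp/test.log 2>&1" > /etc/crontab \
&& chown scheduler /etc/crontab
USER 11111
# supercronic runs in the foreground and does not require root
CMD ["supercronic", "/etc/crontab"]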

Why do I get "s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory"?

I'm learning about s6 and I've come to a point where I want to use s6-log. I have the following Dockerfile
FROM alpine:3.10
RUN apk --no-cache --update add s6
WORKDIR /run/service
COPY \
./rootfs/run \
./rootfs/app /run/service/
CMD ["s6-supervise", "."]
with ./rootfs/app being just a simple sh script
#!/bin/sh
while true;
do
sleep 1
printf "Hello %s\n" "$(date)"
done
and run being
#!/bin/execlineb -P
fdmove -c 2 1
s6-log -b n20 s1000000 t /var/log/app/
/run/service/app
Why do I keep getting
s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory
? Without the s6-log line it all works fine.
So it seems that I've been doing this incorrectly. Namely, I should have used s6-svscan instead of s6-supervise.
Using s6-svscan, I can create a log/ subdirectory in my service's directory so that my app's stdout is redirected to the logger's stdin, as described on the s6-svscan website:
For every new subdirectory dir it finds, the scanner spawns a s6-supervise process on it. If dir/log exists, it spawns a s6-supervise process on both dir and dir/log, and maintains a never-closing pipe from the service's stdout to the logger's stdin.
I've written the log/run script like so:
#!/bin/execlineb -P
s6-log -b n20 s512 T /var/log/app
and with that I've changed the CMD to
CMD ["s6-svscan", "/run/"]
where /run/service/ contains both the run script for my service (without the s6-log call) and a log subdirectory containing the run script above.
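Putting it together, the service tree described above ends up looking roughly like this:
/run/
  service/
    run      # starts the app itself (no s6-log call here)
    app      # the "Hello" shell loop from the question
    log/
      run    # the execlineb script above that calls s6-log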

How to sudo run a local script over ssh

I'm trying to run a local script with sudo over ssh:
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
I've searched a lot on Google and found some similar questions; however, I haven't found a solution that can run a local script with sudo.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs in order to ask for a password) while redirecting your script to the stdin of bash in your remote session.
If it is acceptable for you, you could transfer the local script via scp to your remote machine and then execute it without the need for I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix your issue is to use sudo in non-interactive mode (-n), but for this you need to set NOPASSWD in the remote machine's sudoers file for the executing user. Then you can use:
ssh $HOST "sudo -n -s bash" < script.sh
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where you only run a one line script that can be quickly ported to any host, file or command in the following manner:
Create a script in your Scripts directory, if you have one, naming it whatever you want the script to be called (I use this pattern frequently: change one word for the script name, then create the file, set permissions, and open it for editing):
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit lines 1-3):
HOSTtoCONTROL="sudoadmin@192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, create a bash alias for the script, and you have yourself an app-ified version of this frequently used operation.
First of all, divide your objective into two parts: 1) ssh to the host, and 2) run the command you want as sudo. Once you are certain that you can 1) access the host and 2) have sudo privileges, you can combine the two commands with &&. With x_cmd && y_cmd, y_cmd is executed only after x_cmd has exited successfully.
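A hypothetical illustration of that pattern, chaining the steps so each one runs only if the previous one succeeded:
# verify sudo works first, then copy the script over, then run it with sudo
ssh -t $HOST "sudo -v" && scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo bash /tmp/script.sh"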

Docker can't write to directory mounted using -v unless it has 777 permissions

I am using the docker-solr image with docker, and I need to mount a directory inside it, which I achieve using the -v flag.
The problem is that the container needs to write to the directory I have mounted into it, but it doesn't appear to have permission to do so unless I run chmod 777 on the entire directory. I don't think setting the permissions to allow all users to read and write to it is the real solution; it's just a temporary workaround.
Can anyone guide me in finding a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious if there are any other solutions.
More recently, after looking through some official docker repositories, I've realized the more idiomatic way to solve these permission problems is to use something called gosu in tandem with an entrypoint script. For example, take an existing docker project such as solr, the same one I was having trouble with earlier.
The dockerfile on Github very effectively builds the entire project, but does nothing to account for the permission problems.
So to overcome this, I first added the gosu setup to the dockerfile (if you implement this, note that version 1.4 is hardcoded; check the gosu GitHub releases page for the latest version).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the exact same as su or sudo, but works much more nicely with docker. From the description for gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
so that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec "$@"
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically, the entrypoint says: if I want to run solr as per usual, I pass the argument start at the end of the command, like this:
docker run <flags> <image-name> start
and otherwise run the commands you pass as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because, unlike the dockerfile setup, which is a one-time thing, the entrypoint runs every single time.
So now if I mount directories using the -v flag, before the entrypoint actually runs solr, it will chown the files inside of the docker container for you.
As for what this does to your files outside the container, I've had mixed results, because docker acts a little weird on OSX. For me it didn't change the files outside the container, but on another OS, where docker plays more nicely with the filesystem, it might. I guess that's what you have to deal with if you want to mount files into the container instead of just copying them in.
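As a usage sketch (the image name and host path are placeholders), mounting a host directory and letting the entrypoint fix ownership before starting Solr would look like:
# the entrypoint chowns /opt/solr/server/solr to the solr user, then execs solr via gosu
docker run -v "$PWD/solr-data:/opt/solr/server/solr" -p 8983:8983 my-solr start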
