How to ping/ssh to a running docker by name? [closed] - linux

I want to access a running Docker container via SSH, by name.
How can I ping the container?
How can I connect to the container using SSH?
Bonus: How can I connect to the container, using SSH, from a different computer than the one it runs on?
I am aware that it is considered better to access the container via docker exec, but that does not work for me, as I have to use SSH in my case [I am trying to use CLion's fully remote mode on a remotely hosted Docker container via SSH tunneling. Their docs only cover a remote non-Docker setup or a local Docker one].
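For what it's worth, on a Linux host you can usually ping a container without any extra setup by resolving its IP address from its name with docker inspect. A minimal sketch, assuming the container is named my_docker and is attached to a Docker bridge network (this does not apply with --net host, as used in the run command below, because the container then shares the host's network stack and has no IP of its own):
# look up the container's IP address by name and ping it (bridge networking only)
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_docker)
ping -c 3 "$CONTAINER_IP"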
This is my Dockerfile:
ARG VER=
ARG TOOL_DOCKER=
ARG BASE_IMAGE=
ARG TOOL_DIR=
FROM devsrv:5000/${TOOL_DOCKER}:${VER} AS tool_base
ARG VER=
ARG BASE_IMAGE=
ARG TOOL_DIR=
FROM ${BASE_IMAGE}
ARG VER=
ARG BASE_IMAGE=
ARG TOOL_DOCKER=
ARG TOOL_DIR=
ARG UNAME=
ARG UID=
USER root
COPY launchpad.key /tmp/launchpad.key
RUN apt-get update && \
apt-get install -y software-properties-common && \
apt-key add /tmp/launchpad.key && \
add-apt-repository -y ppa:git-core/ppa && apt-get update && \
apt-get install -y git libxt-dev libxtst6 libnss3 libnspr4 \
libgbm-dev libxss-dev libasound2 libatk-bridge2.0-0 \
libcanberra-gtk-module libcanberra-gtk3-module valgrind sudo \
libx11-xcb-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /tmp/*
RUN groupadd --system ${UNAME} --gid ${UID} && \
useradd --uid ${UID} --system --gid ${UNAME} --home-dir /home/${UNAME} --create-home --comment "Docker image user" ${UNAME} && \
chown -R ${UNAME}:${UNAME} /home/${UNAME} && \
usermod -aG sudo ${UNAME} && \
echo "${UNAME} ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${UNAME}
COPY --from=tool_base ${TOOL_DIR} ${TOOL_DIR}
and this is the gist of how it is run
BUILD_CMDLINE="docker build \
--build-arg UID=${UID} \
--build-arg UNAME=${USER} \
--build-arg VER=${VERSION} \
--build-arg BASE_IMAGE=${BASE_DOCKER} \
--build-arg TOOL_DOCKER=$(${DIR}/impl/known-tools.py docker ${TOOL}) \
--build-arg TOOL_DIR=$(${DIR}/impl/known-tools.py tool-dir ${TOOL}) \
-f ${DIR}/impl/personal-tool.dockerfile -t ${IMAGE} ${DIR}/impl"
echo "Building docker using: ${BUILD_CMDLINE}"
${BUILD_CMDLINE} || exit 1
# Need to give the container access to your windowing system
xhost +
echo $HOME
echo ${USER_ID}:${GROUP_ID}
RUN_CMD="docker run --group-add ${DOCKER_GROUP_ID} \
--env HOME=${HOME} \
--env="DISPLAY" \
--entrypoint /bin/bash \
--interactive \
--net "host" \
--rm \
--tty \
--user=${USER_ID}:${GROUP_ID} \
--volume ${HOME}:${HOME} \
--volume /isilon:/isilon \
--volume /mnt:/mnt \
$(cat ${HOME}/personal-uv-docker-flags) \
-v "${HOME}/.Xauthority:${HOME}/.Xauthority:rw" \
--volume /var/run/docker.sock:/var/run/docker.sock \
--workdir ${HOME} \
--cap-add sys_ptrace \
-p127.0.0.1:2222:22 \
--name my_docker \
${IMAGE} $(${DIR}/impl/known-tools.py cmd-line ${TOOL})"
echo "Running docker using: ${RUN_CMD}"
${RUN_CMD}
When this container is running, docker ps gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a5f15b6f5e7b clion-professional_devsrv_5000/acq-base-docker_latest:noam "/bin/bash /opt/clio…" 18 minutes ago Up 18 minutes my_docker
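Note that the PORTS column is empty: with --net "host" the container shares the host's network namespace, so the -p127.0.0.1:2222:22 publish option is ignored (Docker prints a warning to that effect). You can check what is actually published, for example, with:
docker port my_docker   # no output means no ports are published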
If information is missing please say so and I will edit the question.
Edit:
I edited the Dockerfile to:
COPY launchpad.key /tmp/launchpad.key
RUN apt-get update && \
apt-get install -y software-properties-common && \
apt-get install -y openssh-client && \
apt-get install -y openssh-server && \
systemctl enable sshd && \
apt-key add /tmp/launchpad.key && \
add-apt-repository -y ppa:git-core/ppa && apt-get update && \
apt-get install -y git libxt-dev libxtst6 libnss3 libnspr4 \
libgbm-dev libxss-dev libasound2 libatk-bridge2.0-0 \
libcanberra-gtk-module libcanberra-gtk3-module valgrind sudo \
libx11-xcb-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /tmp/*
RUN groupadd --system ${UNAME} --gid ${UID} && \
useradd --uid ${UID} --system --gid ${UNAME} --home-dir /home/${UNAME} --create-home --comment "Docker image user" ${UNAME} && \
chown -R ${UNAME}:${UNAME} /home/${UNAME} && \
usermod -aG sudo ${UNAME} && \
echo "${UNAME} ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${UNAME}
COPY --from=tool_base ${TOOL_DIR} ${TOOL_DIR}
USER ${UNAME}
output:
...
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 ssh-import-id all 5.5-0ubuntu1 [10.2 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 1003 kB in 0s (1184 kB/s)
Selecting previously unselected package libwrap0:amd64.
(Reading database ... 25945 files and directories currently installed.)
Preparing to unpack .../libwrap0_7.6.q-25_amd64.deb ...
Unpacking libwrap0:amd64 (7.6.q-25) ...
Selecting previously unselected package ncurses-term.
Preparing to unpack .../ncurses-term_6.0+20160213-1ubuntu1_all.deb ...
Unpacking ncurses-term (6.0+20160213-1ubuntu1) ...
Selecting previously unselected package openssh-sftp-server.
Preparing to unpack .../openssh-sftp-server_1%3a7.2p2-4ubuntu2.10_amd64.deb ...
Unpacking openssh-sftp-server (1:7.2p2-4ubuntu2.10) ...
Selecting previously unselected package openssh-server.
Preparing to unpack .../openssh-server_1%3a7.2p2-4ubuntu2.10_amd64.deb ...
Unpacking openssh-server (1:7.2p2-4ubuntu2.10) ...
Selecting previously unselected package python3-pkg-resources.
Preparing to unpack .../python3-pkg-resources_20.7.0-1_all.deb ...
Unpacking python3-pkg-resources (20.7.0-1) ...
Selecting previously unselected package python3-chardet.
Preparing to unpack .../python3-chardet_2.3.0-2_all.deb ...
Unpacking python3-chardet (2.3.0-2) ...
Selecting previously unselected package python3-six.
Preparing to unpack .../python3-six_1.10.0-3_all.deb ...
Unpacking python3-six (1.10.0-3) ...
Selecting previously unselected package python3-urllib3.
Preparing to unpack .../python3-urllib3_1.13.1-2ubuntu0.16.04.4_all.deb ...
Unpacking python3-urllib3 (1.13.1-2ubuntu0.16.04.4) ...
Selecting previously unselected package python3-requests.
Preparing to unpack .../python3-requests_2.9.1-3ubuntu0.1_all.deb ...
Unpacking python3-requests (2.9.1-3ubuntu0.1) ...
Selecting previously unselected package tcpd.
Preparing to unpack .../tcpd_7.6.q-25_amd64.deb ...
Unpacking tcpd (7.6.q-25) ...
Selecting previously unselected package ssh-import-id.
Preparing to unpack .../ssh-import-id_5.5-0ubuntu1_all.deb ...
Unpacking ssh-import-id (5.5-0ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21.16) ...
Setting up libwrap0:amd64 (7.6.q-25) ...
Setting up ncurses-term (6.0+20160213-1ubuntu1) ...
Setting up openssh-sftp-server (1:7.2p2-4ubuntu2.10) ...
Setting up openssh-server (1:7.2p2-4ubuntu2.10) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
Creating SSH2 RSA key; this may take some time ...
2048 SHA256:Wlq9V+siHa4herOkUxo+f7Gsy+Dr5obNzd21YlvcTxw root@20cd14a69430 (RSA)
Creating SSH2 DSA key; this may take some time ...
1024 SHA256:PHYTyaGyXHO7N5V3VOGoFcBY23FDBydEcCdrrI01ZpU root@20cd14a69430 (DSA)
Creating SSH2 ECDSA key; this may take some time ...
256 SHA256:/T4agN5tch9KKW3+vp7jdFhGBGHtZ2lA7rD9BFk/vfM root@20cd14a69430 (ECDSA)
Creating SSH2 ED25519 key; this may take some time ...
256 SHA256:xm6KylI0biBsq1imRWYuTecinrwTAlFE+ekVlWV8G3o root@20cd14a69430 (ED25519)
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Setting up python3-pkg-resources (20.7.0-1) ...
Setting up python3-chardet (2.3.0-2) ...
Setting up python3-six (1.10.0-3) ...
Setting up python3-urllib3 (1.13.1-2ubuntu0.16.04.4) ...
Setting up python3-requests (2.9.1-3ubuntu0.1) ...
Setting up tcpd (7.6.q-25) ...
Setting up ssh-import-id (5.5-0ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Processing triggers for systemd (229-4ubuntu21.16) ...
Operation failed: Too many levels of symbolic links
The command '/bin/sh -c apt-get update && apt-get install -y software-properties-common && apt-get install -y openssh-client && apt-get install -y openssh-server && systemctl enable sshd && apt-key add /tmp/launchpad.key && add-apt-repository -y ppa:git-core/ppa && apt-get update && apt-get install -y git libxt-dev libxtst6 libnss3 libnspr4 libgbm-dev libxss-dev libasound2 libatk-bridge2.0-0 libcanberra-gtk-module libcanberra-gtk3-module valgrind sudo libx11-xcb-dev && apt-get clean && rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*' returned a non-zero code: 1
With the highlighted errors being
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
and
Operation failed: Too many levels of symbolic links
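For context: no init system (systemd) is running inside a container during docker build, so systemctl cannot enable or start services there, and the error right after the openssh-server setup points at the systemctl enable sshd step. The usual workaround is to drop that line and start sshd in the foreground when the container runs. A rough sketch, reusing the image/container names from above and dropping --net host so the -p mapping takes effect:
# build without the "systemctl enable sshd" line, then start sshd at run time
docker build -t ${IMAGE} .
docker run -d -p 127.0.0.1:2222:22 --name my_docker ${IMAGE} \
    sh -c 'mkdir -p /var/run/sshd && exec /usr/sbin/sshd -D'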

Based on @vector's answer, here is a complete solution:
#!/bin/bash
# docker.sh
docker run --rm --hostname dns.mageddo \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
-d defreitas/dns-proxy-server
docker run -ti --hostname my_docker --name my_docker \
-p 2222:22 --rm debian bash -c "
apt update -y;apt install -y openssh-server; service ssh start;
useradd pi; mkdir -p /home/pi; chown pi /home/pi;
passwd pi <<< \$'password\npassword'; exec bash"
In one terminal, run ./docker.sh. Once both containers are running,
open another terminal:
ping my_docker
ssh pi@my_docker   # password: password
From a computer other than your-machine:
ssh -p 2222 pi@your-machine   # password: password
Dockerfile version:
cat << EOF > Dockerfile
FROM debian
RUN apt update && apt install openssh-server sudo -y
RUN useradd -rm -d /home/pi -s /bin/bash -g root -G sudo -u 1000 pi
RUN echo 'pi:password' | chpasswd
RUN service ssh start
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]
EOF
docker build -t my_docker .
docker run --hostname my_docker --name my_docker -it -p 2222:22 my_docker
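Once that container is up, sshd is its only process, so you can connect either through the published port or (with the dns-proxy-server container from the script above still running) by name:
ssh -p 2222 pi@localhost   # via the published port; password: password
ssh pi@my_docker           # by name, if dns-proxy-server is resolving container names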

I think Docker does not support connecting to a container by name out of the box; you have to expose the port to the host machine and then connect through it.
If you still want to connect by name, you can look at defreitas/dns-proxy-server.
Example:
# First run DPS
$ docker run --rm --hostname dns.mageddo \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
# Then run the container
$ docker run --hostname my_docker --name my_docker -d my_image
# Now, you can connect by name
$ ping my_docker
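If the name does not resolve right away, it can help to confirm that the host's resolver is actually going through DPS before suspecting the container. A quick check, assuming standard glibc tools on the host:
getent hosts my_docker   # should print the container's IP once DPS has registered it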

Related

Run Ubuntu Docker container as User with sudo access

I'm trying to write a Dockerfile that creates a user with a home directory who is part of the sudoers group, and that launches the container as this user.
The problem I'm facing is that, from within the container, every command needs to be prefixed with sudo, which obviously creates permission issues for every file that's created.
My reasoning behind doing this is that I want a container that mimics a clean Linux environment from which I can write install scripts for users.
Here is a copy of my Dockerfile so far:
FROM ubuntu:20.04
# Make user home
RUN mkdir -p /home/nick
# Create a nick user
RUN useradd -r -d /home/nick -m -s /sbin/nologin -c "Docker image user" nick
# Add to sudoers
RUN usermod -a -G sudo nick
# Change ownership of home directory
RUN chown -R nick:nick $HOME
# Set password
RUN echo "nick:******" | chpasswd
# Install sudo
RUN apt-get -y update && apt-get -y install sudo
ENV HOME=/home/nick
WORKDIR $HOME
USER nick
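Two things worth double-checking in the Dockerfile above: ENV HOME=/home/nick is set only after the chown -R nick:nick $HOME line, so at that point $HOME still expands to root's home during the build, and -s /sbin/nologin gives the user no interactive login shell. A minimal sketch of a more conventional setup (not a drop-in replacement, and the NOPASSWD rule is only an example policy):
cat << 'EOF' > Dockerfile
FROM ubuntu:20.04
RUN apt-get -y update && apt-get -y install sudo
# create the user with a home directory and a real shell, then grant passwordless sudo
RUN useradd -m -d /home/nick -s /bin/bash -c "Docker image user" nick && \
    echo "nick ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/nick
USER nick
WORKDIR /home/nick
EOF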
I don't understand why this doesn't work:
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "brandojazz@gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
# CMD /bin/bash
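A quick way to sanity-check a user setup like this is to build the image and try a sudo command as the new user. A rough check, assuming the image is tagged bot-img:
docker build -t bot-img .
docker run --rm -it bot-img bash -c 'whoami && sudo whoami'   # expect "bot", then "root" after entering the password "bot"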

python3: command not found

I have a Dockerfile in which I have specified the entrypoint as a shell script named run-services.sh.
Contents of the shell script are as follows:
apache2ctl start
echo "Started apache2ctl..."
python3 mock_ta.py
Now when I deploy this service on my local machine, I get an error saying
python3: command not found
I removed the entrypoint, went inside the container, and executed which python3; I can see that python3 is installed at /usr/bin/python3.
Ideally it should run the Python script if Python is installed, right? Any idea why this happens?
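One way to narrow this down is to compare what the entrypoint actually sees with what you see interactively, i.e. check PATH and the script's shebang from outside the entrypoint. A rough sketch, with <container> standing in for your container name or ID:
docker exec -it <container> sh -c 'command -v python3; echo "$PATH"'
docker exec -it <container> head -1 /var/www/simplesamlphp/config/run-services.sh   # check the shebang line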
============================================================
Edit: Added Dockerfile
FROM php:7.1-apache
# Utilities
RUN apt-get update && \
apt-get -y install apt-transport-https git curl vim --no-install-recommends && \
rm -r /var/lib/apt/lists/*
# SimpleSAMLphp
ARG SIMPLESAMLPHP_VERSION=1.15.2
RUN curl -s -L -o /tmp/simplesamlphp.tar.gz https://github.com/simplesamlphp/simplesamlphp/releases/download/v$SIMPLESAMLPHP_VERSION/simplesamlphp-$SIMPLESAMLPHP_VERSION.tar.gz && \
tar xzf /tmp/simplesamlphp.tar.gz -C /tmp && \
rm -f /tmp/simplesamlphp.tar.gz && \
mv /tmp/simplesamlphp-* /var/www/simplesamlphp && \
touch /var/www/simplesamlphp/modules/exampleauth/enable
COPY config/simplesamlphp/config.php /var/www/simplesamlphp/config
COPY config/simplesamlphp/authsources.php /var/www/simplesamlphp/config
COPY config/simplesamlphp/saml20-sp-remote.php /var/www/simplesamlphp/metadata
COPY config/simplesamlphp/server.crt /var/www/simplesamlphp/cert/
COPY config/simplesamlphp/server.pem /var/www/simplesamlphp/cert/
# Apache
COPY config/apache/ports.conf /etc/apache2
COPY config/apache/simplesamlphp.conf /etc/apache2/sites-available
COPY config/apache/cert.crt /etc/ssl/cert/cert.crt
COPY config/apache/private.key /etc/ssl/private/private.key
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf && \
a2enmod ssl && \
a2dissite 000-default.conf default-ssl.conf && \
a2ensite simplesamlphp.conf
COPY config/run-services.sh /var/www/simplesamlphp/config/run-services.sh
ENTRYPOINT ["/var/www/simplesamlphp/config/run-services.sh"]
# Set work dir
WORKDIR /var/www/simplesamlphp
# General setup
EXPOSE 8080 8443
Thanks @David.
With your help I was able to figure out that python3 was indeed not accessible inside the container.
So I had to install python3 and pip with the following command:
RUN apt update -y && apt upgrade -y && apt install -y python3 && apt install -y python3-pip
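To confirm the interpreter is now visible to the image itself (not just to an interactive shell), you can bypass the entrypoint for a one-off check, e.g. with <image> standing in for your image tag:
docker run --rm --entrypoint python3 <image> --version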

useradd with -u option causes docker to hang

I have the following docker file
FROM ubuntu:18.04
ARG user_id
ARG user_gid
# Essential packages for building on a Ubuntu host
# https://docs.yoctoproject.org/ref-manual/system-requirements.html#ubuntu-and-debian
# Note, we set DEBIAN_FRONTEND=noninteractive prior to the call to apt-get
# install because otherwise we'll get prompted to select a timezone when the
# tzdata package gets included as a dependency.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential \
chrpath socat cpio python3 python3-pip python3-pexpect xz-utils \
debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa \
libsdl1.2-dev pylint3 xterm python3-subunit mesa-common-dev sudo
# Add a user and set them up for passwordless sudo. We're using the same user
# ID and group numbers as the host system. This allows us to give the yocto
# user ownership of files and directories in the poky volume we're going to add
# without needing to change ownership which would also affect the host system.
RUN groupadd -g $user_gid yoctouser
RUN useradd -m yoctouser -u $user_id -g $user_gid
#echo "yoctouser ALL=(ALL:ALL) NOPASSWD:ALL" | tee -a /etc/sudoers
USER yoctouser
WORKDIR /home/yoctouser
ENV LANG=en_US.UTF-8
CMD /bin/bash
The useradd command is hanging, and specifically the -u option is the issue. If I remove -u $user_id, everything works fine. Furthermore, Docker is filling up my disk: /var/lib/docker/overlay2/ goes from 852 MB before adding the -u option to gigabytes after just a few seconds. If I don't kill it, it fills up my disk entirely and I end up having to stop the Docker daemon and manually remove folders inside the overlay2 directory.
Why might specifying this uid be an issue?
Here is the relevant section of a python script I wrote to drive this so you can see how I'm getting the user ID and passing it to docker build.
def build_docker_image():
    print("Building a docker image named:", DOCKER_IMAGE_NAME)
    USERID_ARG = "user_id=" + str(os.getuid())
    USERGID_ARG = "user_gid=" + str(os.getgid())
    print(USERID_ARG)
    print(USERGID_ARG)
    try:
        subprocess.check_call(['docker', 'build',
                               '--build-arg', USERID_ARG,
                               '--build-arg', USERGID_ARG,
                               '-t', DOCKER_IMAGE_NAME, '.',
                               '-f', DOCKERFILE_NAME])
    except:
        print("Failed to create the docker image")
        sys.exit(1)
FWIW, on my system
user_id=1666422094
user_gid=1666400513
I am running Docker version 20.10.5, build 55c4c88 on a Ubuntu 18.04 host.
I need to use the -l / --no-log-init option when calling useradd to work around a bug in Docker relating to how large UIDs are handled.
My final Dockerfile looks like this:
FROM ubuntu:18.04
ARG user_id
ARG user_gid
# Essential packages for building on a Ubuntu host
# https://docs.yoctoproject.org/ref-manual/system-requirements.html#ubuntu-and-debian
# Note, we set DEBIAN_FRONTEND=noninteractive prior to the call to apt-get
# install because otherwise we'll get prompted to select a timezone when the
# tzdata package gets included as a dependency.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential \
chrpath socat cpio python3 python3-pip python3-pexpect xz-utils \
debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa \
libsdl1.2-dev pylint3 xterm python3-subunit mesa-common-dev
# Set up locales
RUN apt-get install -y locales
RUN dpkg-reconfigure locales && \
locale-gen en_US.UTF-8 && \
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US.UTF-8
# Add a user and set them up for passwordless sudo. We're using the same user
# ID and group numbers as the host system. This allows us to give the yocto
# user ownership of files and directories in the poky mount we're going to add
# without needing to change ownership which would also affect the host system.
# Note the use of the --no-log-init option for useradd. This is a workaround to
# [a bug](https://github.com/moby/moby/issues/5419) relating to how large UIDs
# are handled.
RUN apt-get install -y sudo && \
groupadd --gid ${user_gid} yoctouser && \
useradd --create-home --no-log-init --uid ${user_id} --gid yoctouser \
yoctouser && \
echo "yoctouser ALL=(ALL:ALL) NOPASSWD:ALL" | tee -a /etc/sudoers
USER yoctouser
WORKDIR /home/yoctouser
CMD ["/bin/bash"]

using a docker app to make a new directory in an external hard drive

I am using a Docker container to execute a Python script located on my host machine. The script should make a new directory at a target location.
When the target location is under $HOME or $HOME/*, everything works. However, when I want to create a directory at /media/my_name/external_drive, the terminal says PermissionError: [Errno 13] Permission denied: '/media/my_name'
Here is the command I run
sudo docker-compose run --rm --user="$(id -u):$(id -g)" main process_all.py
Here is docker-compose.yml:
version: '2.3'
services:
  main:
    build: .
    volumes:
      - .:/app
      - /etc/localtime:/etc/localtime:ro
    environment:
      - PYTHONIOENCODING=utf_8
    init: true
    network_mode: host
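One thing to note about this compose file: only the project directory and /etc/localtime are mounted into the container, so /media/my_name/external_drive is not visible inside it; the container's own /media is empty and owned by root, which is exactly the kind of path where a non-root user gets Errno 13. If the drive should be reachable from the script, it needs its own bind mount. A sketch of a one-off form (you could equally add the mount under volumes: in the compose file):
sudo docker-compose run --rm --user="$(id -u):$(id -g)" \
    -v /media/my_name/external_drive:/media/my_name/external_drive \
    main process_all.py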
Here is the Dockerfile:
FROM ubuntu:16.04
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
axel \
&& rm -rf /var/lib/apt/lists/*
# Create a working directory
RUN mkdir /app
WORKDIR /app
# Create a non-root user and switch to it
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install Miniconda
RUN curl -so ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-4.4.10-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh
ENV PATH=/home/user/miniconda/bin:$PATH
# Create a Python 3.6 environment
RUN /home/user/miniconda/bin/conda install conda-build \
&& /home/user/miniconda/bin/conda create -y --name py36 python=3.6.4 \
&& /home/user/miniconda/bin/conda clean -ya
ENV CONDA_DEFAULT_ENV=py36
ENV CONDA_PREFIX=/home/user/miniconda/envs/$CONDA_DEFAULT_ENV
ENV PATH=$CONDA_PREFIX/bin:$PATH
# Ensure conda version is at least 4.4.11
# (because of this issue: https://github.com/conda/conda/issues/6811)
ENV CONDA_AUTO_UPDATE_CONDA=false
RUN conda install -y "conda>=4.4.11" && conda clean -ya
# Install FFmpeg
RUN conda install --no-update-deps -y -c conda-forge ffmpeg=3.2.4 \
&& conda clean -ya
# Install NumPy
RUN conda install --no-update-deps -y numpy=1.13.3 \
&& conda clean -ya
# Install build tools
RUN sudo apt-get update \
&& sudo apt-get install -y build-essential gfortran libncurses5-dev \
&& sudo rm -rf /var/lib/apt/lists/*
# Build and install CDF
RUN cd /tmp \
&& curl -O https://spdf.sci.gsfc.nasa.gov/pub/software/cdf/dist/cdf36_4/linux/cdf36_4-dist-all.tar.gz \
&& tar xzf cdf36_4-dist-all.tar.gz \
&& cd cdf36_4-dist \
&& make OS=linux ENV=gnu CURSES=yes FORTRAN=no UCOPTIONS=-O2 SHARED=yes all \
&& sudo make INSTALLDIR=/usr/local/cdf install
# Install other dependencies from pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create empty SpacePy config (suppresses an annoying warning message)
RUN mkdir /home/user/.spacepy && echo "[spacepy]" > /home/user/.spacepy/spacepy.rc
# Copy scripts into the image
COPY --chown=user:user . /app
# Set the default command to python3
CMD ["python3"]
Untested and going by memory, but I would debug the issue with an interactive version of your container.
Something like:
sudo docker run -t -i --rm --user="$(id -u):$(id -g)" main /bin/bash
You'll get a bash shell. Then you can debug it by
cd /media
ls -l
What I think you'll find is that the drive is probably not mounted, or the user doesn't have permission to access it.
With regard to mounts, either pass the drive through from the host or create a volume mount. I'm a little unsure about what you can do there, because many changes around mounting and volume drivers have been introduced since I last used Docker, but the documentation on the Docker website is pretty good, so experiment.
This is the command-line reference for docker run: https://docs.docker.com/engine/reference/run/
The key is to use the -t -i parameters to make it interactive.
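Concretely, for this question the interactive check could include passing the drive through and looking at its ownership and mode, something like the following (reusing the docker run form above, which assumes an image named main):
sudo docker run -t -i --rm --user="$(id -u):$(id -g)" \
    -v /media/my_name/external_drive:/media/my_name/external_drive \
    main /bin/bash
# then, inside the container:
ls -ld /media/my_name/external_drive   # verify it is visible and writable for your uid/gid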

Azure Hybrid Worker Docker

I am currently trying to docker-ize an Azure Hybrid Worker using the instructions provided at:
https://learn.microsoft.com/en-us/azure/automation/automation-linux-hrw-install
I am 90% successful; however, when I try to run the final step using onboarding.py, the script is not found in the location specified by the documentation. Basically, the file is not found anywhere in the container. Any help would be great.
FROM ubuntu:14.04
RUN apt-get update && \
apt-get -y install sudo
ENV user docker
RUN useradd -m -d /home/${user} ${user} && \
chown -R ${user} /home/${user} && \
adduser ${user} sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ${user}
#WORKDIR /home/${user}
RUN sudo apt-get -y install apt-utils && \
sudo apt-get -y install openssl && \
sudo apt-get -y install curl && \
sudo apt-get -y install wget && \
sudo apt-get -y install cron && \
sudo apt-get -y install net-tools && \
sudo apt-get -y install auditd && \
sudo apt-get -y install python-ctypeslib
RUN sudo wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && \
sudo sh onboard_agent.sh -w <my-workplace-id> -s <my-workspace-key>
RUN sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/scripts/onboarding.py --register <arguments-removed-for-stackoverflow-post>
EXPOSE 443
Although I don't know the exact reason why it doesn't work yet, I have made some progress that I would like to share.
I've been experimenting with this problem by comparing the differences between CentOS running on a VM and a CentOS Docker container. Although I haven't been able to pinpoint exactly what is missing, I was able to get the onboarding.py file to show up in a CentOS Docker container.
First, I created a file listing the packages installed on a minimal CentOS VM. In my Dockerfile I run through this file and install each package; I plan to cut the list down to see what's actually necessary for this to work.
Second, you must have systemd, which is not usable in the container by default. Here is what my Dockerfile looks like while I'm testing:
FROM centos:7
RUN yum -y update && yum install -y sudo
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
ENV user docker
RUN useradd -m -d /home/${user} ${user}
RUN chown -R ${user} /home/${user}
RUN echo "docker ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER ${user}
WORKDIR /home/${user}
COPY ./install_packages .
RUN sudo yum install -y $(cat ./install_packages)
RUN sudo wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
CMD ["/usr/sbin/init"]
After that I use docker run to run my container locally and start systemd:
docker run -v /run -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d container_id
I then exec into my container and run the onboard script:
sudo sh onboard_agent.sh -w 'xxx' -s 'xxx'
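For completeness, the exec step in between is just the usual one, with the container ID taken from docker ps:
docker exec -it <container_id> /bin/bash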
After it's done, you sometimes need to wait about 5 minutes for the missing folders to appear. To trigger this to happen sooner, you need to run this command:
/opt/microsoft/omsagent/bin/service_control restart {OMS_WORKSTATION_ID}
My understanding is that this command restarts the OMS agent, and it requires systemctl.
I understand this doesn't answer your question on how to get it working from building and running the container without having to remote into it. I'm still working on that and I'll let you know if I find an answer.
Good luck.
