Starting a shell in the Docker Alpine container

To start an interactive shell for the Ubuntu image we can run:
ole@T:~$ docker run -it --rm ubuntu
root@1a6721e1fb64:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
But when this is run for the Alpine Docker image, the following results:
ole@T:~$ docker run -it --rm alpine
Error response from daemon: No command specified
What is the command for starting an interactive shell in an Alpine base container?

ole@T:~$ docker run -it --rm alpine /bin/ash
(inside container) / #
Options used above:
/bin/ash is Ash (Almquist Shell) provided by BusyBox
--rm Automatically remove the container when it exits (docker run --help)
-i Interactive mode (Keep STDIN open even if not attached)
-t Allocate a pseudo-TTY

Usually, an Alpine Linux image doesn't contain bash. Instead, you can use /bin/ash, /bin/sh, ash, or just sh.
/bin/ash
docker run -it --rm alpine /bin/ash
/bin/sh
docker run -it --rm alpine /bin/sh
ash
docker run -it --rm alpine ash
sh
docker run -it --rm alpine sh
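All four spellings end up in the same place, since on Alpine both paths are BusyBox symlinks; a quick way to confirm this (a sketch, assuming the stock alpine image):

```shell
# /bin/sh and /bin/ash both point at /bin/busybox on Alpine
docker run --rm alpine ls -l /bin/sh /bin/ash
```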
I hope this information helps you.

Nowadays, Alpine images will boot directly into /bin/sh by default, without having to specify a shell to execute:
$ sudo docker run -it --rm alpine
/ # echo $0
/bin/sh
This is because the alpine image Dockerfiles now contain a CMD instruction that specifies the shell to execute when the container starts: CMD ["/bin/sh"].
In older Alpine image versions (pre-2017), the CMD instruction was not used, since Docker used to create an additional layer for CMD, which increased the image size. This was something the Alpine image maintainers wanted to avoid. In recent Docker versions (1.10+), CMD no longer occupies a layer, so it was added to the alpine images. Therefore, as long as CMD is not overridden, recent Alpine images will boot into /bin/sh.
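For illustration, the whole official Alpine Dockerfile is essentially three lines these days (the rootfs filename below is a stand-in; the real one is versioned):

```dockerfile
FROM scratch
# unpack the Alpine mini root filesystem (actual filename is versioned)
ADD alpine-minirootfs.tar.gz /
# the CMD that makes `docker run -it alpine` land in a shell
CMD ["/bin/sh"]
```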
For reference, see the following commit to the official Alpine Dockerfiles by Glider Labs:
https://github.com/gliderlabs/docker-alpine/commit/ddc19dd95ceb3584ced58be0b8d7e9169d04c7a3#diff-db3dfdee92c17cf53a96578d4900cb5b

In case the container is already running:
docker exec -it container_id_or_name ash
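If you are not sure which shell a given image ships, a sketch that prefers bash and falls back to POSIX sh ("web" is a hypothetical container name):

```shell
# use bash when the image has it, otherwise fall back to sh
docker exec -it web sh -c 'command -v bash >/dev/null && exec bash || exec sh'
```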

Related

OCI runtime exec failed: exec failed: container_linux.go:344: starting container process

When I run the command below:
$ docker container exec -it nginx1 ping nginx2
This is the error I faced:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"ping\": executable file not found in $PATH": unknown
How can I resolve this issue?
Before reading this answer, just so you know: it's my 2nd day of learning docker, so this may not be the perfect help for you.
This error may also occur when the ping package is not installed in the container. I resolved the problem as follows. First, get a bash shell inside the container:
docker container exec -it my_nginx /bin/bash
then install ping package
apt-get update
apt-get install inetutils-ping
This solved my problem.
Please use the alpine image of nginx:
docker container run -d --name my_nginx_name nginx:alpine
docker container run -d --name my_nginx_name2 nginx:alpine
Then try to ping using below command:
docker container exec -it my_nginx_name ping my_nginx_name2
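One caveat worth adding: Docker's embedded DNS resolves container names only on a user-defined network, not on the default bridge, so if the ping fails with an unknown host, put both containers on the same network first (network and container names below are the hypothetical ones from this answer):

```shell
# container-name DNS works on user-defined networks, not the default bridge
docker network create mynet
docker run -d --name my_nginx_name --network mynet nginx:alpine
docker run -d --name my_nginx_name2 --network mynet nginx:alpine
docker exec -it my_nginx_name ping -c 1 my_nginx_name2
```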
I had the same problem and managed to solve it by accessing:
docker exec -ti <CONTAINER ID> /bin/sh
This is something I came across recently. When you run a docker container with a custom name, anything placed after the name (a command, options, etc.) is passed to the container as a command. So here the container tried to find the ping command inside itself but couldn't. As in the answer above, you must install inetutils-ping inside the container and then run the command.
Try this; it worked for me:
docker container exec -it new_nginx bash
apt-get update
apt-get install inetutils-ping
Do it for both containers, then run your command:
docker container exec -it nginx1 ping nginx2
Install ping utilities in the container.
docker container exec -it webhost /bin/bash
apt-get update
apt-get install inetutils-ping
docker container exec -it webhost ping new_nginx
Try to install ping in both containers:
apt-get update
apt-get install inetutils-ping
After that, try the ping command.
This error is reported when you try to run a command that is not found in the docker image. Please check whether ping is installed in the image.
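A quick way to check, before exec'ing, whether a binary exists in a running container (container name taken from the question):

```shell
# prints the path if ping is present, or a message if it is missing from the image
docker exec nginx1 sh -c 'command -v ping || echo "ping is not installed"'
```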

A python script in Docker needs to access API data present in a different Docker image

I am new to docker and trying to figure out the following:
I need to access API data by running a docker image: docker run -dit -p 5000:5000 abc/xyz:v1.0.0
I have created a python application which can access this data.
I have created a Dockerfile for the python application too. I am trying to run the API docker image while building the python app docker image (I am sure that's not the right way). Please tell me how to approach this situation.
I want docker run -i my-python-app to somehow access docker run -dit -p 5000:5000 abc/xyz:v1.0.0
This is what my Dockerfile looks like:
COPY variants /usr/local/variants
COPY requirements /usr/local/requirements
COPY tests /usr/local/tests
RUN apk add -U python3 g++ docker \
&& python3 -m ensurepip \
&& rm -r /usr/lib/python*/ensurepip \
&& pip3 install --upgrade pip \
&& pip install --upgrade setuptools \
&& pip3 install -r /usr/local/requirements/common.txt
ENTRYPOINT docker run --privileged -dit -p 5000:5000 abc/xyz:v1.0.0
ENTRYPOINT ["python3", "-m", "usr.local.variants.main"]
With your -p 5000:5000 option your container can communicate with the host, but if you want two containers on the same host to communicate with each other, you need to define a docker network for them.
The easiest way to do that is to launch the containers with the --net=host option. This network mode allows your container to use the host's interfaces, including localhost.
EDIT: Adding more info
Create the docker image (I recommend without ENTRYPOINT lines, copying your binary to a directory that is on the path).
After creating the image with docker build, just add --net=host to your docker run command: docker run -dit --net=host abc/xyz:v1.0.0 python3 <your_entry_point_path> (in host network mode the -p option is ignored, since all host ports are shared anyway).
Add --net=host to the other containers that you launch.
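For what it's worth, only the last ENTRYPOINT in a Dockerfile takes effect, and a container cannot docker run another container from its ENTRYPOINT anyway. A sketch of the application image with the docker-in-docker line dropped (the FROM line is an assumption on my part; the paths are from the question):

```dockerfile
FROM alpine
COPY variants /usr/local/variants
COPY requirements /usr/local/requirements
COPY tests /usr/local/tests
RUN apk add -U python3 g++ \
 && python3 -m ensurepip \
 && pip3 install --upgrade pip \
 && pip3 install -r /usr/local/requirements/common.txt
# a single ENTRYPOINT; the API container is started separately with docker run
ENTRYPOINT ["python3", "-m", "usr.local.variants.main"]
```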

Running a desktop environment in docker on headless linux

Is it possible to run a GUI from inside docker on headless linux, to be exact, linux with no desktop environment?
(Only if it couldn't be done differently with an X server of some sort; I would rather run everything within docker.)
I want to run a GUI only on occasions, and I don't want it to share userspace with the base system's programs. I also don't want to preserve the DE until the next occasion it is needed.
Sure it's possible!
First let's create a docker volume to store the X11 socket:
docker volume create --name xsocket
Now we can create an image with X Server:
FROM ubuntu
RUN apt-get update && \
DEBIAN_FRONTEND='noninteractive' apt-get install -y xorg
CMD /usr/bin/X :0 -nolisten tcp vt1
Let's build it and start it, storing the X11 socket in the xsocket docker volume:
docker build . -t docker-x-server:latest
docker run --privileged -v xsocket:/tmp/.X11-unix -d docker-x-server:latest
Now we can run a GUI application in another docker container (yay!) and point it to our X server using xsocket volume:
docker run --rm -it -e DISPLAY=:0 -v xsocket:/tmp/.X11-unix:ro stefanscherer/xeyes
If you need input (like keyboard) install xserver-xorg-input-evdev package and add -v /run/udev/data:/run/udev/data since there's no udev in containers by default.
You can even get rid of --privileged flag by granting SYS_TTY_CONFIG capability and binding some devices into container:
docker run --name docker-x-server --device=/dev/input --device=/dev/console --device=/dev/dri --device=/dev/fb0 --device=/dev/tty --device=/dev/tty1 --device=/dev/vga_arbiter --device=/dev/snd --device=/dev/psaux --cap-add=SYS_TTY_CONFIG -v xsocket:/tmp/.X11-unix -d docker-x-server:latest

Share folders between host and container in docker for Windows

I'm using the latest Docker for Windows, which needs Hyper-V to be enabled; VirtualBox cannot be used in this case.
I've installed the ubuntu container and started it, and I want to mount C:\Users\username in the docker container. I've tried the following methods.
docker run -t -i -v /c/Users/username:/mnt/c ubuntu /bin/bash
docker run -d -P --name windows -v C:\Users\username:/mnt/c ubuntu /bin/bash
None of them worked. I noticed that /mnt/c was created automatically, but it contained nothing.
Given that Docker for Windows is pretty new, most information I found online was about Boot2Docker or virtualbox, which is useless to me.
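One thing worth checking (an assumption on my part, since I can't reproduce the setup): Docker for Windows only populates such mounts after the drive is shared in Settings > Shared Drives. With that enabled, a forward-slash form like this is commonly used (username is a placeholder):

```shell
docker run -t -i -v C:/Users/username:/mnt/c ubuntu /bin/bash
```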

How to SSH into running docker container from jenkins execute shell

I am running a docker container (dind) from jenkins execute shell
CONTAINER_ID="$(sudo docker run --privileged -i -d jpetazzo/dind)"
To execute docker commands inside the container, I get into the container shell:
sudo docker exec -it --privileged ${CONTAINER_ID} bash
and then I try to execute these commands inside the dind container:
sudo docker pull hubuser/hello-world
sudo docker run hubuser/hello-world
sudo docker tag imageId hubuser/hello-world:123
sudo docker login --username=hubuser --password=password
sudo docker push hubuser/hello-world
All 7 of these commands are in my jenkins execute shell. The 5 commands below execute outside the dind container, not inside it. If I try from a terminal, it attaches to the container shell and executes properly. I want to execute them inside the container, but from the jenkins execute shell. I also tried adding exec before every command, like this:
sudo docker exec -it --privileged ${CONTAINER_ID} sudo docker pull hubuser/hello-world
sudo docker exec -it --privileged ${CONTAINER_ID} sudo docker run hubuser/hello-world
and so on. This executes the commands inside the dind container, but all the commands execute in parallel, so before pulling and running the hello-world image it tries to tag and push it. At that point it doesn't find any hello-world image to tag, and it doesn't do anything.
I want all my below 5 commands to execute serially inside dind container, that too from jenkins execute shell.
The title of your post is "how to ssh into running docker". I just want to point out this article, written by a Docker engineer: If you run SSHD in your Docker containers, you're doing it wrong!
After reading your post, which is not really an ssh issue, I thought: why not execute a bash script that does sequentially what you want?
I'm not sure I understood correctly, btw: is Jenkins inside a docker? Are you running docker in docker?
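To make the sequential-script idea concrete, a sketch (untested; image names and credentials are the placeholders from the question, and I tag by name rather than image ID for illustration) that pipes one script into the dind container so the steps run strictly in order. Note that inside dind, sudo is not needed:

```shell
CONTAINER_ID="$(sudo docker run --privileged -i -d jpetazzo/dind)"
# feed the whole script to one shell inside the container; steps run in order
sudo docker exec -i --privileged "${CONTAINER_ID}" sh -s <<'EOF'
set -e                                   # abort on the first failing step
docker pull hubuser/hello-world
docker run hubuser/hello-world
docker tag hubuser/hello-world hubuser/hello-world:123
docker login --username=hubuser --password=password
docker push hubuser/hello-world:123
EOF
```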
