docker container exits immediately after run [mosquitto broker container] - linux

Hello, I have a problem with Docker. I recently wrote a Dockerfile to build a "mosquitto-mqtt" image so I can run my own MQTT broker with SSL protection. The image builds fine, no problem there, but if I run a new container with "docker run -itd --name broken ce69ee4b2f4e", the container starts and then exits immediately, and when I check the log everything looks good: "[ ok .] Starting network daemon:: mosquitto.". I don't understand why. Please check my Dockerfile below. I need help to solve this, thank you.
#Download base image debian
FROM debian:latest
#Update system
RUN apt-get update -y
#Install wget and gnupg2
RUN apt-get install -y wget gnupg2
#Download and add key
RUN wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
RUN apt-key add mosquitto-repo.gpg.key
RUN rm mosquitto-repo.gpg.key
## append apt mirror for debian
RUN echo "# mirror" >> /etc/apt/sources.list
RUN echo "deb http://repo.mosquitto.org/debian stretch main" >> /etc/apt/sources.list
#Update and upgrade system
RUN apt-get update -y && apt-get upgrade -y
#install mosquitto
RUN apt-get install mosquitto -y
#Copy configuration file
COPY mosquitto.conf /etc/mosquitto/
#Copy certificates
COPY certs/mosquitto-ca.crt /etc/mosquitto/certs/
COPY certs/mosquitto-server.crt /etc/mosquitto/certs/
COPY certs/mosquitto-server.key /etc/mosquitto/certs/
#Run command
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
Log output:
[ ok .] Starting network daemon:: mosquitto.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d00bd23ae2d6 ce69ee4b2f4e "/etc/init.d/mosquit…" 9 minutes ago Exited (0) 9 minutes ago broken

Containers are a wrapper around a process, and when that process exits, the container exits. In this case:
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
That process is /etc/init.d/mosquitto, which almost certainly runs, spawns a daemon in the background, and exits (standard for anything in init.d). You should instead run mosquitto directly with foreground options, if available.
If that's not possible, something like supervisord would be a less-than-optimal fallback, with the ability to watch a background daemon.
And if neither of those works, you can run your command from a script that ends with a tail -f /dev/null, but that would be the worst option, since you would ignore any errors from the daemon.
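For completeness, here's a minimal sketch of that last approach, using the init script path from the question (the wrapper name start.sh is made up here):
#!/bin/sh
# start.sh - hypothetical wrapper: start the daemon in the background,
# then block forever so the container stays alive (daemon errors go unnoticed)
/etc/init.d/mosquitto start
tail -f /dev/null
You would then COPY the script into the image and point the ENTRYPOINT at it instead of the init script.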

It works! I found the solution: you just need to run mosquitto directly, adding "-c" to the command to point it at the configuration file.
This is the working method:
ENTRYPOINT ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
Thanks all for the help!

Related

Nextcloud docker install with SSH access enabled

I'm trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work on reboot. Having run through many Dockerfile and docker-compose combinations, I can't seem to get this to work. I've tried using entrypoint.sh scripts with the Dockerfile, but it wants a CMD at the end, and then it doesn't execute the "normal" Nextcloud startup.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$@"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then - test just the required parts of your Dockerfile addressing the issue - simplify it just to the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container on port 8080 in detached mode, and give the associated container the name of nextcloud.
Then - with the container running - you should be able to enter into it using the following command
docker container exec -u 0 -it nextcloud bash
as root.
Now that you are in, you should be able to startup the ssh server via the command
service ssh start
Having followed a set of steps like this to confirm that you can indeed start an ssh server in the nextcloud container, begin adding back in your additional logic (beginning with the original Dockerfile).
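Once the manual test works, a wrapper entrypoint is one way to wire it back together. A sketch, assuming the apache variant of the official nextcloud image, whose stock entrypoint is /entrypoint.sh and whose default command is apache2-foreground (verify against the tag you use):
#!/bin/sh
# custom-entrypoint.sh - start the ssh service, then hand off to the
# stock nextcloud entrypoint so the normal startup still runs
service ssh start
exec /entrypoint.sh "$@"
And in the Dockerfile, keep the image's default command as CMD:
COPY custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]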

Run sshd in Docker container

I found this Dockerfile sample here:
// version 1
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
CMD ["/usr/sbin/sshd","-D"]
When I build and run this Dockerfile, it runs an SSH server in the foreground, which is great.
If I use the following Dockerfile though:
// version 2
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
# CMD ["/usr/sbin/sshd","-D"] // without this line
And then run the container:
~$ docker run -p 2222:22 -it ssh_server
And try to connect to it from another terminal, it doesn't work. Seemingly this call to sshd is necessary. On the other hand, if I just install SSH in the Dockerfile:
// version 3
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ssh
And run the container like this:
~$ docker run -p 2222:22 -it ssh:test
~$ service ssh start
* Starting OpenBSD Secure Shell server sshd
Now I'm able to connect to the container. So I wonder: if the line RUN service ssh start
in version 1 is necessary, why isn't it necessary for version 3?
To add more to the confusion, if I build and run version 4:
// version 4
FROM ubuntu:latest
RUN apt update && apt install ssh -y
#RUN service ssh start // without this line
CMD ["/usr/sbin/sshd","-D"]
It doesn't work either.
Can someone please explain those behaviours? What is the relation between service ssh start and /usr/sbin/sshd?
OK, everything is clear now:
Basically, running /usr/sbin/sshd is what runs the ssh server. The reason it didn't work on its own (version 4) is that the script that runs when you execute service ssh start - the script /etc/init.d/ssh - creates the directory /run/sshd, which is required for running sshd.
This script also calls the executable /usr/sbin/sshd, but since this was run as part of the build, the process didn't survive beyond the temporary container that the layer was made from.
What did survive is the /run/sshd directory! That's why when we run /usr/sbin/sshd as the CMD, it works!
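Put differently, version 4 can be fixed by creating that directory yourself instead of relying on the init script - a sketch based on the explanation above:
FROM ubuntu:latest
RUN apt update && apt install ssh -y
# create the privilege-separation directory that /etc/init.d/ssh would have created
RUN mkdir -p /run/sshd
CMD ["/usr/sbin/sshd","-D"]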
Thanks all!
To build on @YoavKlein's answer, service ssh start can take arguments, which are passed on to sshd, so rather than
# Incidentally creates /run/sshd
RUN service ssh start
# Run the service in the foreground when starting the container
CMD ["/usr/sbin/sshd", "-D"]
you can just do
# Run the service in the foreground when starting the container
CMD ["service", "ssh", "start", "-D"]
which will start the SSH server through service, but run it in the foreground, avoiding having to have a separate RUN to do first time setup.
I have taken the idea from @mark-raymond :)
The following docker run command with the -D flag worked for me:
docker run -itd -p 2222:22 <dockerImageName:Tag> /usr/sbin/sshd -D

Ubuntu Docker container immediately stops, issue with Dockerfile?

I'm pretty new to Docker, and completely baffled as to why my container exits upon start.
I've built an Ubuntu image which starts Apache and fail2ban upon boot. I'm unsure whether it's an issue with the Dockerfile or with the command I am running to start the container.
I've tried:
docker run -d -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image
docker run -d -ti -p 127.0.0.1:80:80 image /bin/bash
The Dockerfile is as follows:
FROM ubuntu:latest
RUN \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y iptables && \
apt-get install -y software-properties-common && \
apt-get install -y apache2 fail2ban && \
rm -rf /etc/fail2ban/jail.conf
ADD index.html /var/www/html/
ADD jail.conf /etc/fail2ban/
ENV HOME /root
WORKDIR /root
EXPOSE 80 443
ENTRYPOINT service apache2 start && service fail2ban start
CMD ["bash"]
I can jump into the container itself with:
docker exec -it image /bin/bash
But the moment I try to run it whilst staying within the host, it fails. Help?
Considering your question, where you mention "upon boot", I think it would be useful to read https://docs.docker.com/config/containers/multi-service_container/.
In a nutshell, docker containers do not "boot" like a normal system; they start a process and execute it until it exits.
So, if you want to start two processes, you can use a wrapper script as explained at the link above.
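For the Dockerfile in the question, such a wrapper might look like this (a sketch; the script name start.sh is made up here):
#!/bin/bash
# start.sh - hypothetical wrapper: start fail2ban in the background,
# then run apache in the foreground as the container's main process
service fail2ban start
exec apache2ctl -D FOREGROUND
Then COPY start.sh into the image, make it executable, and use CMD ["/start.sh"] in place of the ENTRYPOINT/CMD pair.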
Remove the following line from your Dockerfile:
CMD ["bash"]
Also, when you want to get a shell instead of the image's entrypoint, you have to override the ENTRYPOINT definition of your Dockerfile at run time (note that --entrypoint is a docker run flag, not a docker exec one):
docker run -it --entrypoint /bin/bash image
See Dockerfile "ENTRYPOINT" documentation for more details

How to continue using docker container after startup application exit?

I want to create a docker image that starts the nano editor when it runs and lets users keep working in the container after nano closes.
For that I wrote the following Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nano
RUN mkdir /home/working
ENV EDITOR /bin/nano
WORKDIR /home/working
ENTRYPOINT /bin/nano
After running the container (docker run -it --rm test), nano starts, but after exiting nano, the container closes. I want to keep working in the ubuntu container after closing nano. What should I change in my Dockerfile?
I wouldn't set the ENTRYPOINT to nano; better to use /bin/bash.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nano
RUN mkdir /home/working
ENV EDITOR /bin/nano
WORKDIR /home/working
ENTRYPOINT /bin/bash
Now you can (if the container is running) access the container with
docker exec -it <containername> /bin/bash
and use nano as often as you want, for example to edit multiple files. After you close nano, /bin/bash is still running and the container doesn't exit.

How do I connect to the localhost of a docker container (from inside the container)

I have a nodejs app that connects to a blockchain on the same server. Normally I use 127.0.0.1 + the port number (each chain gets a different port).
I decided to put the chain and the app in the same container, so that the frontend developers don't have to bother with setting up the chain.
However, when I build the image the chain appears to start, but when I run the image it isn't running. Furthermore, when I go into the container and try to run it manually it says "besluitChain2#xxx.xx.x.2:PORT". So I thought that instead of 127.0.0.1 I needed to connect to the port on 127.0.0.2, but that doesn't seem to work.
I'm sure connecting like this isn't new, and it should work the same way with a database. Can anyone help? The first piece of advice I need is how to debug these images, because I have no idea where it goes wrong.
here is my dockerfile
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
ADD workfolder/app /root/applications/app
ADD .multichain /root/.multichain
RUN npm install \
&& apt-get upgrade -q -y \
&& apt-get dist-upgrade -q -y \
&& apt-get install -q -y wget curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& cd /tmp \
&& wget http://www.multichain.com/download/multichain-1.0-beta-1.tar.gz \
&& tar -xvzf multichain-1.0-beta-1.tar.gz \
&& cd multichain-1.0-beta-1 \
&& mv multichaind multichain-cli multichain-util /usr/local/bin \
&& cd /tmp \
&& rm -Rf multichain*
RUN multichaind Chain -daemon
RUN cd /root/applications/app && npm install
CMD cd /root/applications/app && npm start
EXPOSE 8080
By the way, due to policies I can only connect to the server on port 80 to check if it works. When I run the docker image I can go to my /api-docs but not to any of the endpoints where I start interacting with the blockchain.
I decided to put the chain and the app in the same container
That was a mistake, I think.
Docker is not a virtual machine. It's a virtual application or process instance.
A Docker container runs a linux distro under the hood, but this is a detail that should be ignored when thinking about the purpose of Docker.
You should think of a Docker container as a single application process, not as a full virtual machine that generally runs multiple processes. This is evidenced by the way Docker shuts the container down once the main process exits (the process with PID 1).
I've got a longer post about this, here: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
Additionally, the RUN multichaind instruction in your dockerfile doesn't run the chain in your image / container. It tells the builder to run this command during the build process.
A Dockerfile is a list of instructions for building an image. The wording here is important. An image is not executed, it is built. An image is a static, immutable template from which a Container is executed.
RUN multichaind Chain -daemon
By putting this RUN instruction in your image, you are temporarily starting the chain, but it is immediately halted (forcefully) when the image layer is done building. It will not remain running, because an image is not executed, it is built.
My advice is to put the chain in a separate image.
You'll have one image for the chain, and one for the node.js app.
You can use docker-compose to make it easier to run containers from both of these at the same time. Or you can run containers manually from them. Either way, you need two images.
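If you go the manual route, a minimal sketch with made-up image names (my-multichain and my-node-app are placeholders) could look like:
# create a shared network so the app can reach the chain by name
docker network create chainnet
# run the chain image, which must keep multichaind in the foreground
docker run -d --name chain --network chainnet my-multichain
# run the app; inside it, connect to host "chain" instead of 127.0.0.1
docker run -d --name app --network chainnet -p 8080:8080 my-node-app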
