I am new to Docker and have some beginner misunderstandings about its usage and concepts.
How I start my built image:
docker run -d -p 7070:80 --name mov_container my_image
My Dockerfile:
FROM php:7.1-apache
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install requests
RUN pip3 install pymysql
COPY src/ /var/www/html/
COPY Movement_Tracker.py /var/movtrack/
RUN docker-php-ext-install mysqli
RUN docker-php-ext-enable mysqli
RUN apachectl restart
EXPOSE 80
CMD python3 /var/movtrack/Movement_Tracker.py > flog.log
Is this a proper Dockerfile?
The "Movement_Tracker.py" seems to stop every second or third day. But the script is designed to run endlessly.
More precisely:
ps aux | grep python3 on the host (where Docker is installed) shows the python3 process.
Inside the container (docker exec -it ... ps aux | grep python3) there is NO python3 process.
The script's task is to write sensor data to a database, and that no longer happens (as mentioned, after 2 or 3 days).
My questions:
Do I have an anti-pattern, because an Apache service and a never-ending python3 script are running in the same container?
Why is the python3 script still visible on the host but not in the container, even though it is obviously no longer working?
Is it allowed to have Apache/PHP and a Python script running in a single container?
Why does the python3 script stop randomly (assuming the script itself has no errors)?
Thank you in advance.
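For reference, answers to similar questions usually suggest one long-running process per container. A minimal docker-compose sketch, reusing the image built above (the service names are illustrative, and it assumes the CMD line is dropped from the Dockerfile so the web service falls back to the base image's stock Apache startup):
version: "3"
services:
  web:
    image: my_image
    ports:
      - "7070:80"
  tracker:
    image: my_image
    command: ["python3", "-u", "/var/movtrack/Movement_Tracker.py"]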
Related
I am using a Docker container to host a music bot that uses ffmpeg. With every song it processes, its memory usage keeps climbing until it eventually runs out of memory. Is there any fix? When I use it on Windows/Linux without Docker it works really well, uses 150 MB at most and never even goes above 40% CPU, but in Docker it grows to about 2.3 GB and doesn't go back down below that.
P.S. I am using fluent-ffmpeg.
Here is the Dockerfile:
FROM ubuntu:latest
USER root
WORKDIR /app
RUN apt-get update
RUN apt-get -y install curl gnupg
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash -
RUN apt-get -y install nodejs
RUN apt install ffmpeg -y
COPY . .
RUN mkdir /app/storage/
RUN npm install
CMD ["node" , "."]
I'm trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work after a reboot. Having run through many Dockerfile and docker-compose combinations, I can't seem to get this to work. I've tried using entrypoint.sh scripts with a Dockerfile, but it wants a CMD at the end, and then it doesn't execute the "normal" Nextcloud startup.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$#"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then - test just the required parts of your Dockerfile addressing the issue - simplify it just to the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container on port 8080 in detached mode, and give the associated container the name of nextcloud.
Then - with the container running - you should be able to enter it as root using the following command
docker container exec -u 0 -it nextcloud bash
Now that you are in, you should be able to startup the ssh server via the command
service ssh start
Having followed a set of steps like this to confirm that you can indeed start up an SSH server in the Nextcloud container, begin adding back in your additional logic (beginning with the original Dockerfile).
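When you do add the logic back, one sketch that keeps the "normal" Nextcloud startup is to chain to the stock entrypoint rather than replace it (the official apache-variant image uses /entrypoint.sh and apache2-foreground; the custom script name here is an assumption):
#!/bin/sh
# custom-entrypoint.sh - start sshd first, then hand off to the stock Nextcloud entrypoint
service ssh start
exec /entrypoint.sh "$@"
and in the Dockerfile:
COPY custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]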
I found this Dockerfile sample here:
// version 1
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
CMD ["/usr/sbin/sshd","-D"]
When I build and run this Dockerfile, it runs an SSH server in the foreground, which is great.
If I use the following Dockerfile though:
// version 2
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
# CMD ["/usr/sbin/sshd","-D"] // without this line
And then run the container:
~$ docker run -p 2222:22 -it ssh_server
And try to connect to it from another terminal, it doesn't work. Seemingly this call to sshd is necessary. On the other hand, If I just install SSH in the Dockerfile:
// version 3
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ssh
And run the container like this:
~$ docker run -p 2222:22 -it ssh:test
~$ service ssh start
* Starting OpenBSD Secure Shell server sshd
Now I'm able to connect to the container. So I wonder: if the line RUN service ssh start
in version 1 is necessary, why isn't it necessary for version 3?
To add more to the confusion, if I build and run version 4:
// version 4
FROM ubuntu:latest
RUN apt update && apt install ssh -y
#RUN service ssh start // without this line
CMD ["/usr/sbin/sshd","-D"]
It doesn't work either.
Can someone please explain those behaviours? What is the relation between service ssh start and /usr/sbin/sshd?
OK, everything is clear now:
Basically, running /usr/sbin/sshd is what runs the SSH server. The reason it didn't work on its own (version 4) is that the script that runs when you execute service ssh start - the script /etc/init.d/ssh - creates the directory /run/sshd, which is required for sshd to run.
That script also calls the executable /usr/sbin/sshd, but since it was run as part of the build, the process didn't persist beyond the temporary container the layer was made from.
What did persist is the /run/sshd directory! That's why when we run /usr/sbin/sshd as the CMD it works!
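So a version that works without relying on that side effect is simply (a sketch that follows directly from the above):
FROM ubuntu:latest
RUN apt update && apt install ssh -y
# create the privilege-separation directory that /etc/init.d/ssh would otherwise create
RUN mkdir -p /run/sshd
CMD ["/usr/sbin/sshd", "-D"]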
Thanks all!
To build on #YoavKlein's answer, service ssh start can take arguments which are passed to sshd, so rather than
# Incidentally creates /run/sshd
RUN service ssh start
# Run the service in the foreground when starting the container
CMD ["/usr/sbin/sshd", "-D"]
you can just do
# Run the service in the foreground when starting the container
CMD ["service", "ssh", "start", "-D"]
which will start the SSH server through service, but run it in the foreground, avoiding having to have a separate RUN to do first time setup.
I have taken the idea from #mark-raymond :)
The following docker run command with the -D flag worked for me:
docker run -itd -p 2222:22 <dockerImageName:Tag> /usr/sbin/sshd -D
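To verify from the host (the account is whatever user exists in the image, and root logins may need to be enabled in sshd_config first):
ssh -p 2222 root@localhost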
Hello, I have a problem with Docker. I recently made a Dockerfile to create a "mosquitto-mqtt" image so I can run my own MQTT broker with SSL protection. I build the Dockerfile and all is good, I don't have a problem, but if I run a new container with "docker run -itd --name broken ce69ee4b2f4e", the container runs and then exits automatically, and when I check the logs everything looks fine: "[ ok .] Starting network daemon:: mosquitto.". I don't understand why. Check my Dockerfile below. I need help to solve it, thank you.
#Download base image debian
FROM debian:latest
#Update system
RUN apt-get update -y
#Install wget and gnupg2
RUN apt-get install wget -y && apt-get install gnupg2 -y
#Download and add key
RUN wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
RUN apt-key add mosquitto-repo.gpg.key
RUN rm mosquitto-repo.gpg.key
## append apt mirror for debian
RUN echo "# mirror" >> /etc/apt/source.list
RUN echo "deb http://repo.mosquitto.org/debian stretch main" >> /etc/apt/source.list
#Update and upgrade system
RUN apt-get update -y && apt-get upgrade -y
#install mosquitto
RUN apt-get install mosquitto -y
#Copy file configuration
COPY mosquitto.conf /etc/mosquitto
#Copy certificate folder
COPY certs/mosquitto-ca.crt /etc/mosquitto/certs
COPY certs/mosquitto-server.crt /etc/mosquitto/certs
COPY certs/mosquitto-server.key /etc/mosquitto/certs
#Run command
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
Log output:
[ ok .] Starting network daemon:: mosquitto.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d00bd23ae2d6 ce69ee4b2f4e "/etc/init.d/mosquit…" 9 minutes ago Exited (0) 9 minutes ago broken
Containers are a wrapper around a process, and when that process exits, the container exits. In this case:
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
That process is /etc/init.d/mosquitto, which almost certainly runs, spawns a daemon in the background, and exits (standard for anything in init.d). You should instead run mosquitto directly with foreground options if available.
If that's not possible, something like supervisord would be a less-than-optimal fallback, with the ability to watch a background daemon.
And if neither of those works, you can run your command from a script that ends with a tail -f /dev/null, but that would be the worst option since you would ignore any errors.
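For completeness, a minimal sketch of that supervisord fallback (Debian default paths assumed):
# /etc/supervisor/conf.d/mosquitto.conf
[supervisord]
nodaemon=true

[program:mosquitto]
command=/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
autorestart=true
with the Dockerfile installing the supervisor package and ending in CMD ["/usr/bin/supervisord", "-n"].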
It works! I found the solution: I just needed to add "-c" to the command and specify the configuration file.
This is a good method:
ENTRYPOINT ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
Thanks all for helping me!
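With that ENTRYPOINT in place, the broker can then be started with its TLS port published (8883 is the conventional MQTT-over-TLS port; use whatever port the listener in mosquitto.conf declares):
docker run -itd --name broker -p 8883:8883 <image-id>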
I have a Flask server running on port 8090
...code of the Flask server...
app.run(host='0.0.0.0', port=8090, debug=True)
Then I have the Dockerfile as follow:
FROM ubuntu
WORKDIR home
MAINTAINER Califfo
# copy files into the image
ADD files /home/files
ADD ServerCategory.py /home
ADD requirements.txt /home
# install python3, pip and Flask
RUN apt-get update && apt-get install -y python3 python3-pip net-tools lsof && pip3 install -r requirements.txt
# launch flask server
RUN python3 ServerCategory.py flask run
When I build the image and run the container with this command
docker build -t server_category . && docker run -p 8090:8090 -it --rm server_category
everything is OK.
Running on http://0.0.0.0:8090/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 280-257-458
But I cannot connect to the server from my browser, for example with localhost:8090/.
I get this error
Error: Couldn't connect to server
As nauer says in the comments, one of the problems is that you don't have any ENTRYPOINT or CMD instruction in your Dockerfile. So whenever you start the container, it will close immediately. With your docker run command, the container stays alive because you open bash with the -it flags, but this is not the optimal approach.
For the problem "Error: Couldn't connect to server" you will need to give some more information, since the problem appears to be with Flask and not with Docker itself.