I need to set the environment in my Dockerfile (I can't use docker-compose because I will need this Dockerfile for Kubernetes).
Here is my Dockerfile:
FROM tensorflow/tensorflow:2.1.0-py3
RUN apt -qq -y update \
&& apt -qq -y upgrade
WORKDIR /socialworks-api
COPY . /socialworks-api
RUN python -m pip install --upgrade pip
RUN apt -y install git
#for mysql
RUN apt install -y libmysqlclient-dev
RUN pip --no-cache-dir install -r requirements.txt
EXPOSE 5000
CMD ["/bin/bash", "./start.sh"]
Here is my start.sh file:
#!/bin/bash
export $(grep -v '^#' env/docker.env | xargs)
source env/docker.env
python app.py
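As a side note, that export line works outside Docker too; here is a minimal sketch of what it does, using a throwaway file name (demo.env, made up for this illustration):

```shell
# Create a sample env file (demo.env is a made-up name for this sketch).
cat > demo.env <<'EOF'
# comment lines like this one must be skipped
APP_MODE=docker
APP_PORT=5000
EOF

# grep -v '^#' drops the comment lines; xargs joins the remaining
# KEY=VALUE pairs into a single argument list for export.
export $(grep -v '^#' demo.env | xargs)

echo "$APP_MODE on port $APP_PORT"
rm demo.env
```

Be aware that this simple pattern breaks if any value contains spaces or quoting; it is fine for plain KEY=VALUE files like the one above.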
Here is where I start my flask server in app.py:
if __name__ == '__main__':
    print("test")
    socketio.run(app, host="0.0.0.0", debug=True, use_reloader=False)
The issue is that when I run my docker container, the Flask server launches (it even prints test in the terminal), but when I open http://0.0.0.0:5000/ I get an "unable to connect" error in the browser.
Maybe the issue is with the CMD in my Dockerfile or with the start.sh file? I am new to Docker, so any help will be appreciated. Locally everything works fine.
Here is the command I use to run the docker container:
docker run -it flaskapp
Here is the docker ps:
CONTAINER ID   IMAGE      COMMAND                  CREATED          STATUS          PORTS      NAMES
337f1bdacb3e   flaskapp   "/bin/bash ./start.sh"   44 seconds ago   Up 42 seconds   5000/tcp   dazzling_mcclintock
Here are the logs:
docker logs 337f1bdacb3e
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
2020-04-20 10:02:44.287504: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-20 10:02:44.309581: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2394225000 Hz
2020-04-20 10:02:44.310031: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4971910 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-20 10:02:44.310068: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
test
(10) wsgi starting up on http://0.0.0.0:5000
The logs are the same as what I get when running it locally, and locally everything works. Could it be that the Flask application is somehow being stopped when run in the container?
The trouble seems to be that you have not published the exposed port of your Docker container to your local machine. To remedy this you need to change your run command to
docker run -it -p 5000:5000 flaskapp
This tells Docker to map port 5000 on your local machine (the first 5000) to port 5000 inside the running container (the second 5000). The flag has the form -p <host port>:<container port>, so if you wanted to reach the app on port 8080 of your local machine instead, the flag would be -p 8080:5000.
When you do so, the ports section output from docker ps should look something like:
0.0.0.0:5000->5000/tcp
Related
I am quite new to Docker and know only a few commands for testing my tasks. I have built a Docker image for 64-bit Python because I had to install CatBoost in it. My Dockerfile looks like this:
FROM amd64/python:3.9-buster
RUN apt update
RUN apt install -y libgl1-mesa-glx apt-utils
RUN pip install -U pip
RUN pip install --upgrade setuptools
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip --default-timeout=100 install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
I built the image with docker build -t myimage ., and it builds for a long time, presumably successfully. When I run it with docker run -p 5000:5000 myimage, it prints the warning
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
then downloads some things for a while and exits. I'm not sure what is happening. How do I tackle this?
I managed to solve it by running the same thing on a Linux machine.
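If switching machines isn't an option, another route (a sketch, not tested on your setup) is to pin the platform explicitly, either in the Dockerfile with BuildKit or via the CLI flags; note that running amd64 images on an arm64 host relies on emulation and can be slow:

```dockerfile
# Hypothetical variant of the first line: resolve the base image
# as amd64 regardless of the host (requires BuildKit).
FROM --platform=linux/amd64 python:3.9-buster

# ...rest of the Dockerfile unchanged...

# Alternatively, keep the Dockerfile as-is and pass the platform
# at build and run time:
#   docker build --platform linux/amd64 -t myimage .
#   docker run --platform linux/amd64 -p 5000:5000 myimage
```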
I found this Dockerfile sample here:
// version 1
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
CMD ["/usr/sbin/sshd","-D"]
When I build and run this Dockerfile, it runs an SSH server in the foreground, which is great.
If I use the following Dockerfile though:
// version 2
FROM ubuntu:latest
RUN apt update && apt install ssh -y
RUN service ssh start
# CMD ["/usr/sbin/sshd","-D"] // without this line
And then run the container:
~$ docker run -p 2222:22 -it ssh_server
And try to connect to it from another terminal, it doesn't work. So it seems this call to sshd is necessary. On the other hand, if I just install SSH in the Dockerfile:
// version 3
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ssh
And run the container like this:
~$ docker run -p 2222:22 -it ssh:test
~$ service ssh start
* Starting OpenBSD Secure Shell server sshd
Now I'm able to connect to the container. So I wonder: if the line RUN service ssh start
in version 1 is necessary, why isn't it necessary for version 3?
To add more to the confusion, if I build and run version 4:
// version 4
FROM ubuntu:latest
RUN apt update && apt install ssh -y
#RUN service ssh start // without this line
CMD ["/usr/sbin/sshd","-D"]
It doesn't work either.
Can someone please explain those behaviours? What is the relation between service ssh start and /usr/sbin/sshd?
OK, everything is clear now:
Basically, running /usr/sbin/sshd is what runs the SSH server. The reason it didn't work on its own (version 4) is that the script invoked by service ssh start (namely /etc/init.d/ssh) creates a directory, /run/sshd, which sshd requires.
That script also calls the executable /usr/sbin/sshd, but since this ran as part of the build, the process didn't survive beyond the temporary container that the layer was made from.
What did survive is the /run/sshd directory! That's why running /usr/sbin/sshd as the CMD works.
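Based on that explanation, a version 4 that works without ever calling service ssh start should just need the directory created up front; a sketch:

```dockerfile
FROM ubuntu:latest
RUN apt update && apt install ssh -y
# Create the directory that /etc/init.d/ssh would normally create,
# so sshd can start without the service script ever having run.
RUN mkdir -p /run/sshd
CMD ["/usr/sbin/sshd", "-D"]
```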
Thanks all!
To build on @YoavKlein's answer, service ssh start can take arguments which are passed to sshd, so rather than
# Incidentally creates /run/sshd
RUN service ssh start
# Run the service in the foreground when starting the container
CMD ["/usr/sbin/sshd", "-D"]
you can just do
# Run the service in the foreground when starting the container
CMD ["service", "ssh", "start", "-D"]
which will start the SSH server through service but run it in the foreground, avoiding the need for a separate RUN to do the first-time setup.
I have taken the idea from @mark-raymond :)
The following docker run command with the -D flag worked for me:
docker run -itd -p 2222:22 <dockerImageName:Tag> /usr/sbin/sshd -D
I have a Flask server running on port 8090
...code of the Flask server...
app.run(host='0.0.0.0', port=8090, debug=True)
Then I have the Dockerfile as follow:
FROM ubuntu
WORKDIR home
MAINTAINER Califfo
# copy files into the image
ADD files /home/files
ADD ServerCategory.py /home
ADD requirements.txt /home
# install python3, pip and Flask
RUN apt-get update && apt-get install -y python3 python3-pip net-tools lsof && pip3 install -r requirements.txt
# launch flask server
RUN python3 ServerCategory.py flask run
When I build the image and run the container with this command
docker build -t server_category . && docker run -p 8090:8090 -it --rm server_category
everything is OK.
Running on http://0.0.0.0:8090/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 280-257-458
But I cannot connect to the server from my browser, for example with localhost:8090/.
I get this error
Error: Couldn't connect to server
As nauer says in the comments, one of the problems is that you don't have any ENTRYPOINT or CMD instruction in the Dockerfile (RUN python3 ServerCategory.py flask run executes at build time, not when the container starts). So whenever you start the container it will exit immediately. With your docker run command the container stays alive only because you open a shell with the -it flags, but this is not the right approach.
For the "Error: Couldn't connect to server" problem you will need to give some more information, since the issue appears to be with Flask rather than with Docker itself.
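As a sketch of the fix (assuming ServerCategory.py calls app.run itself, as in the snippet above), the last line of the Dockerfile would become a CMD instead of a RUN:

```dockerfile
# RUN executes at build time; CMD runs when the container starts.
# The "flask run" arguments are dropped because the script starts
# the server itself via app.run(...).
CMD ["python3", "ServerCategory.py"]
```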
Here is my code (app.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World composcan"

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8002)
When i use this dockerfile :
FROM ubuntu:18.04
MAINTAINER bussiere "bussiere@composcan.fr"
RUN echo "nameserver 8.8.8.8" >> /etc/resolv.conf
RUN echo "nameserver 80.67.169.12" >> /etc/resolv.conf
RUN echo "nameserver 208.67.222.222" >> /etc/resolv.conf
#RUN echo "dns-nameservers 8.8.8.8 8.8.4.4 80.67.169.12 208.67.222.222" >> /etc/network/interfaces
ENV LANG C.UTF-8
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y python3 python3-pip python3-dev build-essential
RUN python3 -m pip install pip --upgrade
RUN python3 -m pip install pipenv
RUN export LC_ALL=C.UTF-8
RUN export LANG=C.UTF-8
COPY app /app
WORKDIR /app
RUN pipenv --python 3.6
RUN pipenv install -r requirements.txt
ENTRYPOINT ["pipenv"]
CMD ["run","python","app.py"]
it works on azure perfectly :
http://koalabourre.azurewebsites.net/
But when I try to run it locally with Docker on Ubuntu with:
docker run --rm -it -p 8002:8002 flask-quickstart
I get:
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:8002/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 101-413-323
I can't open it in my browser with localhost:8002.
Here is the organisation of the project:
[screenshot of the project layout omitted]
And the Docker image is here:
https://hub.docker.com/r/dockercompo/koalabourre/
And running app.py locally outside a container works perfectly...
You'll have to be listening on an "external" (Docker network) address for the port forwarding to work. It looks to me like your code doesn't quite match the program's output in that regard.
Your code says
app.run(debug=True,host='0.0.0.0', port=8002)
But your output says
* Running on http://127.0.0.1:8002/ (Press CTRL+C to quit)
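The practical difference between the two addresses can be sketched with plain sockets (no Flask involved): a 127.0.0.1 listener is only reachable from inside the container's own network namespace, while a 0.0.0.0 listener also accepts connections arriving through the container's external interface, which is what Docker's port forwarding needs. And since the code shown does pass host='0.0.0.0' yet the log says 127.0.0.1, the image may have been built from an older copy of app.py, so rebuilding is worth a try.

```python
import socket

# Listener bound to loopback only: this is what the log line
# "Running on http://127.0.0.1:8002/" means. Nothing outside the
# container's network namespace can reach it.
loopback = socket.socket()
loopback.bind(("127.0.0.1", 0))
lo_addr = loopback.getsockname()[0]

# Listener bound to the wildcard address: this is what
# app.run(host='0.0.0.0', ...) asks for, and what Docker's
# published-port forwarding can connect to.
wildcard = socket.socket()
wildcard.bind(("0.0.0.0", 0))
wc_addr = wildcard.getsockname()[0]

print(lo_addr)  # 127.0.0.1
print(wc_addr)  # 0.0.0.0

loopback.close()
wildcard.close()
```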
I am new to Docker and have some beginner misunderstandings about its usage and concepts.
How I start my built image:
docker run -d -p 7070:80 --name mov_container my_image
My Dockerfile:
FROM php:7.1-apache
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install requests
RUN pip3 install pymysql
COPY src/ /var/www/html/
COPY Movement_Tracker.py /var/movtrack/
RUN docker-php-ext-install mysqli
RUN docker-php-ext-enable mysqli
RUN apachectl restart
EXPOSE 80
CMD python3 /var/movtrack/Movement_Tracker.py > flog.log
Is this a proper Dockerfile?
The Movement_Tracker.py script seems to stop every second or third day, even though it is designed to run endlessly.
More precisely:
ps aux | grep python3 on the host (where Docker is installed) shows the python3 process.
Inside the container (docker exec -it ... ps aux | grep python3) there is NO python3 process.
The task of the script is to write sensor data to a database, and that no longer happens (as mentioned, after 2 or 3 days).
My questions:
Do I have an anti-pattern here, because there is an Apache service and a never-ending python3 script running in the same container?
Why is the python3 script still visible on the host but not in the container, even though it is obviously no longer working?
Is it acceptable to have Apache/PHP and a Python script running in a single container?
Why does the python3 script stop randomly (assuming there is no script error)?
Thank you in advance.