Here is my code (app.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World composcan"

if __name__ == "__main__":
    app.run(debug=True,host='0.0.0.0', port=8002)
When I use this Dockerfile:
FROM ubuntu:18.04
MAINTAINER bussiere "bussiere#composcan.fr"
RUN echo "nameserver 8.8.8.8" >> /etc/resolv.conf
RUN echo "nameserver 80.67.169.12" >> /etc/resolv.conf
RUN echo "nameserver 208.67.222.222" >> /etc/resolv.conf
#RUN echo "dns-nameservers 8.8.8.8 8.8.4.4 80.67.169.12 208.67.222.222" >> /etc/network/interfaces
ENV LANG C.UTF-8
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y python3 python3-pip python3-dev build-essential
RUN python3 -m pip install pip --upgrade
RUN python3 -m pip install pipenv
RUN export LC_ALL=C.UTF-8
RUN export LANG=C.UTF-8
COPY app /app
WORKDIR /app
RUN pipenv --python 3.6
RUN pipenv install -r requirements.txt
ENTRYPOINT ["pipenv"]
CMD ["run","python","app.py"]
It works on Azure perfectly:
http://koalabourre.azurewebsites.net/
But when I try to run it locally with Docker on Ubuntu using:
docker run --rm -it -p 8002:8002 flask-quickstart
I get:
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:8002/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 101-413-323
I can't open it in my browser at localhost:8002.
Here is the organisation of the project (screenshot of the directory layout):
And the Docker image is here:
https://hub.docker.com/r/dockercompo/koalabourre/
And running app.py locally, outside a container, works perfectly...
You'll have to be listening on an "external" (Docker network) address, i.e. 0.0.0.0 rather than 127.0.0.1, for the port forwarding to work. It looks to me like the code you posted doesn't quite match the program's output in that regard.
Your code says
app.run(debug=True,host='0.0.0.0', port=8002)
But your output says
* Running on http://127.0.0.1:8002/ (Press CTRL+C to quit)
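One quick sanity check is to rebuild the image from the project directory and run it again, so you are sure the container is actually running the code you posted (image name taken from your docker run command):
docker build -t flask-quickstart .
docker run --rm -it -p 8002:8002 flask-quickstart
With host='0.0.0.0' in app.run(), the startup line should read * Running on http://0.0.0.0:8002/ rather than 127.0.0.1.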
I'm trying to install SSH (and enable the service) on top of my Nextcloud installation in Docker, and have it work on reboot. Having run through many Dockerfile and docker-compose combinations, I can't seem to get this to work. I've tried using entrypoint.sh scripts with the Dockerfile, but it wants a CMD at the end, and then it doesn't execute the "normal" Nextcloud startup.
entrypoint.sh:
#!/bin/sh
# Start the ssh server
service ssh start
# Execute the CMD
exec "$#"
Dockerfile:
FROM nextcloud:latest
RUN apt update -y && apt-get install ssh -y
RUN apt-get install python3 -y && apt-get install sudo -y
RUN echo 'ansible ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN useradd -m ansible -s /bin/bash
RUN sudo -u ansible mkdir /home/ansible/.ssh
RUN mkdir -p /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
Any help would be much appreciated. Thank you
In general I'd say - break the problem you're having down into smaller parts - it'll help isolate the source of the problem.
Here's how I'd approach the reported issue.
First - replace (in your Dockerfile)
apt-get install -y ssh
with the recommended
apt install -y openssh-server
Then test just the parts of your Dockerfile needed to address the issue - simplify it down to the following:
FROM nextcloud:latest
RUN apt update
RUN apt install -y openssh-server
Then build a test image using this Dockerfile via the command
docker build . -t test_nextcloud
This will build the image - giving it the name (tag) of test_nextcloud.
Then run a container from this newly built image via the docker run command
docker run -p 8080:80 -d --name nextcloud test_nextcloud
This will run the container in detached mode, publishing it on port 8080, and give the associated container the name nextcloud.
Then - with the container running - you should be able to enter it as root using the following command
docker container exec -u 0 -it nextcloud bash
Now that you are in, you should be able to startup the ssh server via the command
service ssh start
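If you want to double-check that the daemon really came up before moving on, the same service wrapper can also report its status (tools like ps or ss may not be installed in the image):
service ssh status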
Having followed a set of steps like this to confirm that you can indeed start up an SSH server in the nextcloud container, begin adding back in your additional logic (beginning with the original Dockerfile).
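Once that works, one way to wire it back together without overwriting the image's own /entrypoint.sh is sketched below. It assumes the Apache variant of nextcloud:latest, whose default startup is /entrypoint.sh apache2-foreground (check with docker image inspect nextcloud); ssh-entrypoint.sh is just a name chosen here to avoid clobbering the original script.
ssh-entrypoint.sh:
#!/bin/sh
# Start the ssh server, then hand off to whatever command was passed in
# (here: the Nextcloud image's original entrypoint and command).
service ssh start
exec "$@"
Added at the end of the Dockerfile:
COPY ssh-entrypoint.sh /ssh-entrypoint.sh
RUN chmod +x /ssh-entrypoint.sh
ENTRYPOINT ["/ssh-entrypoint.sh"]
CMD ["/entrypoint.sh", "apache2-foreground"]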
For a personal project, I want to create a container with Docker for a Python script (a bot for Discord) to isolate it from the system.
I need to use PM2 to run the script, but I can't use the Python from keymetrics/pm2:latest-alpine because of its version (I need 3.9, not the 3.8 it ships with).
So I decided to use a multi-stage build to take the files from a Python image first and then execute them inside the other image.
Before wiring up my bot, I am working step by step. So for now I'm only trying to get the Python version (and then I'll try to run a hello world script with Python).
My trouble is in this first step.
My Dockerfile is:
# =============== Python slim ========================
FROM python:3.9-slim as base
# Setup env
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
FROM base AS python-deps
# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends gcc
COPY requirements.txt .
# Install python dependencies in /opt/venv
# . Create env, activate
RUN python3 -m venv --copies /opt/venv && cd /opt/venv/bin/ && chmod a+x activate && ./activate && chmod a-x activate && cd -
# . Install packages with pip
RUN python3 -m pip install --upgrade pip && pip3 install --no-cache-dir --no-compile pipenv && PIPENV_VENV_IN_PROJECT=1 pip install --user -r requirements.txt
# >> Here, I can call :
# CMD ["/opt/venv/bin/python3.9", "--version"]
# =============== PM2 ================================
# second stage
FROM keymetrics/pm2:latest-alpine
WORKDIR /code
# Copy datas from directory
COPY ./src .
COPY ecosystem.config.js .
# Copy datas from previous
# Copy virtual env from python-deps stage
COPY --from=python-deps /opt/venv /opt/venv
# Install app dependencies: useless... (it's python3.8 anyway, and I need to be on 3.9)
# RUN apk add --no-cache git python3
# Python environment variables:
ENV PYROOT=/opt/venv
ENV PYTHONUSERBASE=$PYROOT
ENV PATH="${PYROOT}/bin:${PATH}"
ENV PYTHONPATH="${PYROOT}/lib/python3.9/site-packages/"
# CMD ["ls", "-la", "/opt/venv/bin/python3"] # Ok here : file exists
# CMD ["which", "python3"] # Ok here : output: /opt/venv/bin/python3
CMD ["/opt/venv/bin/python3", "--version"] # not ok (cf below)
# ..... Then I will call after other stuff once Python works ....
# ENV NPM_CONFIG_LOGLEVEL warn
# RUN npm install pm2 -g
# RUN npm install --production
# RUN pm2 update && pm2 install pm2-server-monit # && pm2 install pm2-auto-pull
# CMD ["pm2-runtime", "ecosystem.config.js" ]
My requirements.txt is:
Flask==1.1.1
And my error is:
/usr/local/bin/docker-entrypoint.sh: exec: line 8: /opt/venv/bin/python3: not found
I really don't understand why...
I tried to go inside my image with
$ docker run -d --name hello myimage watch "date >> /var/log/date.log"
$ docker exec -it hello sh
And inside, I saw with ls that Python exists, and which sees it too, but if I go into the directory and call it with ./python3, I get the message sh: python: not found
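A further check from that same shell would be to look at how the binary was linked, since a dynamically linked executable whose loader is missing is also reported as "not found" - on Alpine, ldd is available through musl, while file may first need an apk add file:
ldd /opt/venv/bin/python3
file /opt/venv/bin/python3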
I am a noob with Docker; I have done some things with it before, but I never took a course on it because I only use it for a few personal things (and this is my first big problem with it).
Thanks!
File: Dockerfile
FROM ubuntu:focal-20211006
RUN apt-get update
RUN apt-get -y install python3 python3-pip
RUN pip install asyncio
RUN pip install apscheduler==3.7.0
COPY test.py /home/testing/test.py
WORKDIR /home/testing
CMD python3 -u ./test.py
File: test.py
import asyncio

async def main():
    print('Comes to main')

if __name__ == '__main__':
    try:
        print('Comes to Main 1')
        asyncio.ensure_future(main())
        print('Comes to Main 2')
        asyncio.get_event_loop().run_forever()
    except (KeyboardInterrupt, SystemExit):
        print('Comes to interrupt')
        raise
Commands:
sudo docker build -t test:test .
sudo docker run test:test
I am unable to exit the process with Ctrl+C on ubuntu:20.04.
Any help would be appreciated
Note: if I change the CMD in the Dockerfile to CMD python3 ./test.py (without the -u option) then there is no output. And running it with docker-compose fails to attach.
You need to run your container in interactive TTY mode so that your terminal is attached to the container and Ctrl+C reaches the process.
sudo docker run -it test:test
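As an aside, if the container is already running without -it, it can still be stopped from another terminal (the placeholder below would be the name or ID shown by docker ps):
docker stop <container-id-or-name>
docker kill --signal=SIGINT <container-id-or-name>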
I need to set the environment in my Dockerfile (I can't use docker-compose because I will need this Dockerfile for Kubernetes).
Here is my Dockerfile:
FROM tensorflow/tensorflow:2.1.0-py3
RUN apt -qq -y update \
&& apt -qq -y upgrade
WORKDIR /socialworks-api
COPY . /socialworks-api
RUN python -m pip install --upgrade pip
RUN apt -y install git
#for mysql
RUN apt install -y libmysqlclient-dev
RUN pip --no-cache-dir install -r requirements.txt
EXPOSE 5000
CMD ["/bin/bash", "./start.sh"]
Here is my start.sh file:
#!/bin/bash
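# Load every non-comment line of env/docker.env into the environment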
export $(grep -v '^#' env/docker.env | xargs)
source env/docker.env
python app.py
Here is where I start my flask server in app.py:
if __name__ == '__main__':
    print("test")
    socketio.run(app, host="0.0.0.0", debug=True, use_reloader=False)
The issue is that after these commands, when I run my Docker container, the Flask server is launched (it even prints test in the terminal), but when I try to open http://0.0.0.0:5000/ I get an "unable to connect" error in the browser.
Maybe the issue is with the CMD in my Dockerfile or with the start.sh file? I am new to Docker. Any help will be appreciated. Locally everything works fine.
Here is the command I use to run the docker container:
docker run -it flaskapp
Here is the docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
337f1bdacb3e flaskapp "/bin/bash ./start.sh" 44 seconds ago Up 42 seconds 5000/tcp dazzling_mcclintock
Here are the logs:
docker logs 337f1bdacb3e
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
2020-04-20 10:02:44.287504: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-20 10:02:44.309581: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2394225000 Hz
2020-04-20 10:02:44.310031: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4971910 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-20 10:02:44.310068: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
test
(10) wsgi starting up on http://0.0.0.0:5000
The logs are the same as what I get when running it locally, and locally everything works. Could it be that the Flask application is somehow being stopped when running in the container?
The trouble seems to be that you have not mapped the exposed port of your Docker container to your local machine. To remedy this you need to change your run command to
docker run -it -p 5000:5000 flaskapp
This tells Docker to map port 5000 on your local machine (the first 5000) to port 5000 inside the running container (the second 5000). [If you wanted to reach it on port 8080 on your local machine instead, the flag would be -p 8080:5000.]
When you do so, the ports section output from docker ps should look something like:
0.0.0.0:5000->5000/tcp
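Once the container is running with the port published, a quick end-to-end check could look like this (the image name comes from the question; the container name here is just an example):
docker run -d -p 5000:5000 --name flaskapp-test flaskapp
docker ps --format '{{.Names}}: {{.Ports}}'
curl -i http://localhost:5000/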
I have a Flask server running on port 8090.
...code of the Flask server...
app.run(host='0.0.0.0', port=8090, debug=True)
Then I have the Dockerfile as follow:
FROM ubuntu
WORKDIR home
MAINTAINER Califfo
# copy files into the image
ADD files /home/files
ADD ServerCategory.py /home
ADD requirements.txt /home
# install python3, pip and Flask
RUN apt-get update && apt-get install -y python3 python3-pip net-tools lsof && pip3 install -r requirements.txt
# launch flask server
RUN python3 ServerCategory.py flask run
When I build the image and run the container with this command
docker build -t server_category . && docker run -p 8090:8090 -it --rm server_category
everything is OK.
Running on http://0.0.0.0:8090/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 280-257-458
But I cannot connect to the server from my browser, for example with localhost:8090/.
I get this error
Error: Couldn't connect to server
As nauer says in the comments, one of the problems is that you don't have any ENTRYPOINT or CMD instruction in your Dockerfile, so whenever you start the container it will exit immediately. With your docker run command the container stays alive only because you open bash with the -it flags, but this is not the right approach.
For the "Error: Couldn't connect to server" problem you will need to give some more information, since the problem appears to be with Flask and not with Docker itself.
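For the first problem, a minimal sketch of how the end of the Dockerfile could look - the RUN line only executes while the image is being built, so starting the server has to happen in a CMD instead (this assumes ServerCategory.py itself calls app.run(host='0.0.0.0', port=8090), as shown above):
# launch flask server when the container starts, not at build time
EXPOSE 8090
CMD ["python3", "ServerCategory.py"]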