I have a Flask-based Python app which simply connects to MongoDB. It has two routes, GET and POST: GET simply returns "Hello World", and POST accepts any JSON data, which is then saved in MongoDB. This Python code works fine on its own. MongoDB is hosted in the cloud.
I have now created a Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
RUN pip3 install pymongo
ENV LISTEN_PORT=8000
EXPOSE 8000
COPY /app /app
I use this command to run it:
docker run --rm -it -p 8000:8000 myflaskimage
After starting the container for this Docker image, I get a response from GET but no response from POST. I am using Postman to post the JSON data, and I get the error below:
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
I am a bit confused as to why the Python code works fine on its own, but throws this error when I put the same code in Docker and start a container. Do we have to include anything in the Dockerfile to enable connections to MongoDB?
Please help. Thanks.
Python Code:
from flask import Flask, request
from pymongo import MongoClient

app = Flask(__name__)


def connect_db():
    try:
        client = MongoClient(<mongodbURL>)
        return client.get_database(<DBname>)
    except Exception as e:
        print(e)


def main():
    db = connect_db()
    collection = db.get_collection('<collectionName>')

    @app.route('/data', methods=['POST'])
    def data():
        j_data = request.get_json()
        x = collection.insert_one(j_data).inserted_id
        return "Data added successfully"

    @app.route('/')
    def hello_world():
        return "Hello World"


main()

if __name__ == '__main__':
    app.run()
You probably don't have an internet connection from the container. I had a similar issue when connecting from a containerized Java application to a public web service.
At first I would try to restart docker:
systemctl restart docker
If that does not help, then look at resolv.conf in your container:
docker run --rm myflaskimage cat /etc/resolv.conf
If it shows nameserver 127.x.x.x, then you can try:
1) on the host system, comment out the dns=dnsmasq line in the /etc/NetworkManager/NetworkManager.conf file with a # and restart NetworkManager using systemctl restart network-manager
2) or explicitly set DNS for Docker by adding this to the /etc/docker/daemon.json file and restarting Docker:
{
"dns": ["my.dns.server"]
}
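Another quick thing to try is passing a DNS server directly to docker run; the 8.8.8.8 below is only an example resolver, substitute whichever one your network uses:
docker run --rm -it -p 8000:8000 --dns 8.8.8.8 myflaskimage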
I want to know the correct way to start a Flask application. The docs show two different commands:
$ flask -a sample run
and
$ python3.4 sample.py
Both produce the same result and run the application correctly.
What is the difference between the two and which should be used to run a Flask application?
The flask command is a CLI for interacting with Flask apps. The docs describe how to use CLI commands and add custom commands. The flask run command is the preferred way to start the development server.
Never use this command to deploy publicly; use a production WSGI server such as Gunicorn, uWSGI, Waitress, or mod_wsgi instead.
As of Flask 2.2, use the --app option to point the command at your app. It can point to an import name or file name. It will automatically detect an app instance or an app factory called create_app. Use the --debug option to run in debug mode with the debugger and reloader.
$ flask --app sample --debug run
Prior to Flask 2.2, the FLASK_APP and FLASK_ENV=development environment variables were used instead. FLASK_APP and FLASK_DEBUG=1 can still be used in place of the CLI options above.
$ export FLASK_APP=sample
$ export FLASK_ENV=development
$ flask run
On Windows CMD, use set instead of export.
> set FLASK_APP=sample
For PowerShell, use $env:.
> $env:FLASK_APP = "sample"
The python sample.py command runs a Python file and sets __name__ == "__main__". If the main block calls app.run(), it will run the development server. If you use an app factory, you could also instantiate an app instance at this point.
if __name__ == "__main__":
app = create_app()
app.run(debug=True)
Both of these commands ultimately start the Werkzeug development server, which, as the name implies, is a simple HTTP server that should only be used during development. You should prefer the flask run command over calling app.run().
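For a concrete picture, here is a minimal sketch of what such a sample.py might look like (the file name and route are only illustrative); both flask --app sample run and python sample.py will serve it:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    # Only reached via "python sample.py"; "flask --app sample run"
    # ignores this block and discovers the module-level "app" on its own.
    app.run(debug=True)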
The latest documentation has the following example, assuming you want to run hello.py (using the .py file extension is optional):
Unix, Linux, macOS, etc.:
$ export FLASK_APP=hello
$ flask run
Windows:
> set FLASK_APP=hello
> flask run
You just need to run this command:
python app.py
(app.py is your desired Flask file.)
But make sure your .py file has the following Flask settings (related to port and host):
from flask import Flask, request
from flask_restful import Resource, Api
import sys
import os

app = Flask(__name__)
api = Api(app)

# Default port; can be overridden from the command line, e.g. python app.py 8080
port = 5100
if len(sys.argv) > 1:
    port = int(sys.argv[1])
print("Api running on port : {} ".format(port))


class topic_tags(Resource):
    def get(self):
        return {'hello': 'world world'}


api.add_resource(topic_tags, '/')

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=port)
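Assuming the file above is saved as app.py, it can then be started on the default port or, thanks to the sys.argv check, on a port of your choice:
python app.py          # serves on the default port 5100
python app.py 8080     # serves on port 8080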
The simplest automatic way, without exporting anything, is to use python app.py; see the example here:
from flask import (
    Flask,
    jsonify
)


# Function that creates the app
def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)

    # Simple route
    @app.route('/')
    def hello_world():
        return jsonify({
            "status": "success",
            "message": "Hello World!"
        })

    return app  # do not forget to return the app


APP = create_app()

if __name__ == '__main__':
    # APP.run(host='0.0.0.0', port=5000, debug=True)
    APP.run(debug=True)
For Linux/Unix/macOS:
export FLASK_APP=sample.py
flask run
For Windows:
python sample.py
OR
set FLASK_APP=sample.py
flask run
You can also run a Flask application this way while being explicit about activating debug mode:
FLASK_APP=app.py FLASK_DEBUG=true flask run
I'm making a fictional character generator API. It runs fine when I send requests through Postman locally, but it gives a 500 error and times out when run through Docker. In the Flask API app:
from flask import Flask, jsonify, request
from flask_restful import Api, Resource
...
class AddCharacter(Resource):
    def post(self):
        ...
        p.add_person()
        # saves the character to a mongodb
        p.save_person()  # <-- causes a 500 error in Postman when run through docker build and docker up
        retJson = {
            "Message": "Character has been created",
            "Status Code": 200
        }
        return jsonify(retJson)
How I connect to mongo:
db = MongoEngine()
app = Flask(__name__)
app.config['MONGODB_SETTINGS'] = {
    'db': 'projectdb',
    'host': 'localhost',
    'port': 27017
}

with app.app_context():
    db.init_app(app)

try:
    db_client = db.connection['projectdb']
except ConnectionFailure as e:
    sys.stderr.write("Could not connect to MongoDB: %s" % e)
    sys.exit(1)
the database manager:
class DatabaseManager():
    def save_user(self, first_name, last_name, openness, conscientiousness, extraversion,
                  agreeableness, emotional_stability, organization, anxiety,
                  knowledgeableness, sympathy, talkativeness, accommodation,
                  expressiveness, carefulness, depressiveness, gregariousness,
                  altruism, inquisitiveness):
        new_user = User(first_name=first_name, last_name=last_name)
        new_user.save()
        new_personality = Personality(Openness=openness, Conscientiousness=conscientiousness, Extraversion=extraversion,
                                      Agreeableness=agreeableness, EmotionalStability=emotional_stability,
                                      Organization=organization, Anxiety=anxiety,
                                      Knowledgeableness=knowledgeableness, Sympathy=sympathy,
                                      Talkativeness=talkativeness, Accommodation=accommodation,
                                      Expressiveness=expressiveness, Carefulness=carefulness,
                                      Depressiveness=depressiveness, Gregariousness=gregariousness,
                                      Altruism=altruism, Inquisitiveness=inquisitiveness)
        new_profile = Profile(person=new_user, personality=new_personality)
        new_profile.save()
In the Person class:
def save_person(self):
    dm.save_user(first_name=self.first_name, last_name=self.last_name, openness=aff.Openness,
                 conscientiousness=aff.Conscientiousness, extraversion=aff.Extraversion,
                 agreeableness=aff.Agreeableness, emotional_stability=aff.EmotionalStability,
                 organization=aff.Organization, anxiety=aff.Anxiety,
                 knowledgeableness=aff.Knowledgeableness, sympathy=aff.Sympathy, talkativeness=aff.Talkativeness,
                 accommodation=aff.Accommodation,
                 expressiveness=aff.Expressiveness, carefulness=aff.Carefulness, depressiveness=aff.Depressiveness,
                 gregariousness=aff.Gregariousness,
                 altruism=aff.Altruism, inquisitiveness=aff.Inquisitiveness)
(The file structure, Dockerfile, requirements.txt, and docker-compose .yml file were attached as images.)
I run docker with sudo docker-compose build and then sudo docker-compose up
It seems from your setup that the host machine is running the database server, and you are unable to reach it from your container's virtual network. What you need to do is interface the host's network and ports with the container's virtual network, so that your application can reach the database just as it can when it runs directly on the host, per your application config.
By default, a Docker container starts in bridge mode when nothing is explicitly specified about network modes at container startup, so you can either bind the database server to the bridge IP, or set the network mode to host for your container.
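For example, switching the container onto the host network (so that localhost inside the container is the host itself) is a one-flag change; the image name below is just a placeholder:
docker run --network host myimage
With docker-compose, the equivalent is setting network_mode: host on the service.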
The problem is the way you connect to MongoDB from the backend.
Keep in mind that localhost for a Docker container is different from localhost on the host machine, and points to the container itself (unless you run the container on the host network). So in almost 99% of cases you don't want to connect to localhost in a dockerized environment.
By default, docker-compose creates an internal network between the running containers, such that you can resolve a container's IP by its service name. That is, if you replace localhost with db (your database service name), you can connect to your database.
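Applied to the configuration from the question, only the host value needs to change (this assumes the MongoDB service in your docker-compose.yml is in fact named db):
app.config['MONGODB_SETTINGS'] = {
    'db': 'projectdb',
    'host': 'db',  # the docker-compose service name instead of localhost
    'port': 27017
}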
I found that adding networks to the docker-compose.yml fixed the problem.
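For reference, a minimal docker-compose.yml along those lines could look like the sketch below; the service names, the mongo image tag, the ports, and the network name are assumptions for illustration, not taken from the question:
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
    networks:
      - backend
  db:
    image: mongo:4.2
    networks:
      - backend
networks:
  backend: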
Only when trying to connect to my Azure DB from Python 3.7 running in an OpenShift container (FROM rhel7:latest) do I see the following error:
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('IM004', "[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed (0) (SQLDriverConnect)
I tried the exact same code in Docker on my Mac, on Windows, and in a RHEL7 VirtualBox VM running the RHEL7 base container - it always works! The problem occurs only in my container running in OpenShift!
I checked that I can telnet to my Azure DB server on port 1433 from OpenShift.
I enabled the ODBC logs as well but there is no more information than the above error.
What else should I check?
Here is how I set up the MSODBC driver in my Dockerfile:
RUN curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/mssql-release.repo && \
yum remove unixODBC-utf16 unixODBC-utf16-devel && \
ACCEPT_EULA=Y yum install -y msodbcsql17 && \
yum install -y unixODBC-devel
And here is the code that throws the error:
inside modules.database:
pyodbc_connstring_safe = 'DRIVER={{ODBC Driver 17 for SQL Server}};SERVER='+config.settings["DB_HOST"]+\
    ';PORT=1433;DATABASE='+config.settings["DB_NAME"]+';UID='+config.usernames["database"]+\
    ';PWD={};MARS_Connection=Yes'

if config.settings["debug"]:
    print("Using DB connection string: {}".format(pyodbc_connstring_safe.format("SAFE_DB_PASS")))

pyodbc_connstring = pyodbc_connstring_safe.format(config.passwords["database"])

Base = declarative_base()
quoted = urllib.parse.quote_plus(pyodbc_connstring)

def get_engine():
    return create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted), echo=config.settings["debug"], pool_pre_ping=True)
Inside my flask app (the error gets thrown in the call to 'has_table'):
@app.route("/baselinedb", methods=["POST"])
def create_db():
    from modules.database import Base
    engine = database.get_engine()
    if not engine.dialect.has_table(engine, database.get_db_object_name("BaselineDefinition"), schema='dbo'):
        Base.metadata.create_all(engine)
        db.session.commit()
    return "OK"
As I mentioned in the beginning, the same Dockerfile gives me a working container in Docker, whether locally on Mac or Windows or inside a RHEL7 VM.
Thanks for having a look!
unixODBC is trying to find odbc.ini in the current user's home directory. It does this by looking up the user in /etc/passwd. Since OpenShift uses a project-specific UID which does not exist in /etc/passwd, the user lookup does not work and the connection fails.
To resolve this, add the following to the Dockerfile:
ADD entrypoint.sh .
RUN chmod 766 /etc/passwd
..
..
ENTRYPOINT entrypoint.sh
And the following in the entrypoint script
export $(id)
echo "default:x:$uid:0:user for openshift:/tmp:/bin/bash" >> /etc/passwd
python3.7 app.py
The above inserts the current user into /etc/passwd during startup of the container.
An alternative and probably better approach might be to use nss_wrapper:
https://cwrap.org/nss_wrapper.html
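A rough sketch of what the nss_wrapper variant of the entrypoint could look like follows; the library path and the temporary passwd location are assumptions, and the nss_wrapper package has to be installed in the image (e.g. via yum) for this to work:
# entrypoint.sh (sketch)
uid=$(id -u)
gid=$(id -g)
# write a passwd entry for the arbitrary OpenShift UID into a writable file
echo "default:x:${uid}:${gid}:user for openshift:/tmp:/bin/bash" > /tmp/passwd
export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
export LD_PRELOAD=/usr/lib64/libnss_wrapper.so  # path may differ on your base image
python3.7 app.py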
I encountered the same problem while using Django on Windows. Upgrading the 'SQL Server 2017 client' to the latest version resolved my issue.
Use the link below to download the latest patch:
https://www.microsoft.com/en-us/download/details.aspx?id=56567
I have been stuck trying to figure out how to edit Python Flask code after pulling it from a Docker Hub repository on a different computer. I want to create a folder on my Linux desktop that contains all of the files the image has when running as a container (Dockerfile, requirements.txt, app.py). That way I can edit app.py regardless of what computer I am on, and my classmates can simply pull my image, run the container, and have a copy of the code saved on their local machine to open in Visual Studio Code (or any IDE) and edit. This is what I tried.
I first pulled from the Docker hub:
sudo docker pull woonx/dockertester1
Then I used this command to run the image as a container and create a directory:
sudo docker run --name=test1 -v ~/testfile:/var/lib/docker -p 4000:80 woonx/dockertester1
I was able to create a local directory called testfile, but it was an empty folder when I opened it. No app.py, no Dockerfile, nothing.
The example code I am using to test is from following the example guide on the Docker website: https://docs.docker.com/get-started/part2/
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
requirements.txt:
Flask
Redis
app.py:
from flask import Flask
from redis import Redis, RedisError
import os
import socket
# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)
app = Flask(__name__)
@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
What I do is:
First, I issue docker run command.
sudo docker run --name=test1 -v ~/testfile:/var/lib/docker -p 4000:80 woonx/dockertester1
At this stage, the files are created in the container. Then I stop the container (let's say the container ID is 0101010101).
docker container stop 0101010101
Then I simply copy those files from the container to the appropriate directory on my machine by using:
docker cp <container_name>:/path/in/container /path/of/host
or
cd ~/testfile
docker cp <container_name>:/path/in/container .
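For this particular image, since the tutorial Dockerfile copies the project into /app, that would be something like:
docker cp test1:/app ~/testfile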
So you have the files created by docker run on your local host. Now you can use them with the -v option.
sudo docker run --name=test1 -v ~/testfile:/var/lib/docker -p 4000:80 woonx/dockertester1
Normally, when you change a setting in your configuration, it should be enough to stop/start the container for it to take effect.
I hope this approach solves your problem.
Regards
I am new to Flask/Python and the question might be silly or I might be missing something obvious, so please bear with me.
I have created a Flask app and the structure is as follow:
myproject
    api
        __init__.py
        api.py
        application.py
        config.py
        models.py
    migrations
        ...
    appserver.py
    manage.py
    Procfile
    requirements.txt
The contents of my appserver.py:
from api.application import create_app

if __name__ == '__main__':
    create_app = create_app()
    create_app.run()
The contents of my api/application.py:
from flask import Flask

def create_app(app_name='MYAPPNAME'):
    app = Flask(app_name)
    app.config.from_object('api.config.DevelopmentConfig')

    from api.api import api
    app.register_blueprint(api, url_prefix='/api')

    from api.models import db
    db.init_app(app)

    return app
When I run my server locally with python appserver.py everything works as expected. When I try to run gunicorn like so: gunicorn --bind 127.0.0.1:5000 appserver:create_app I get this error: TypeError: create_app() takes from 0 to 1 positional arguments but 2 were given
What am I doing wrong here?
I would suggest you update the code inside the appserver.py file as shown below:
from api.application import create_app

if __name__ == '__main__':
    create_app = create_app()
    create_app.run()
else:
    gunicorn_app = create_app()
and then run the app as follows
gunicorn --bind 127.0.0.1:5000 appserver:gunicorn_app
The reason for the above steps is as follows:
Running the server locally
When you run the server locally with python appserver.py the if block gets executed. Hence the Flask object gets created via your create_app method and you are able to access the server.
Running the server via Gunicorn
When you run the server via Gunicorn, you need to specify the module name and the variable name of the app so that Gunicorn can access it. Note that this variable should be a WSGI callable object, e.g. a Flask app object, as per the definition in the Gunicorn docs.
When you ran the Gunicorn command gunicorn --bind 127.0.0.1:5000 appserver:create_app, it mistook create_app for the WSGI callable object (the Flask app object). This threw the error, since create_app is just a regular function that returns the Flask app object when invoked correctly.
So we added the creation of the object in the else block (gunicorn_app = create_app()) and pointed Gunicorn at it with gunicorn --bind 127.0.0.1:5000 appserver:gunicorn_app.
The other thing to note is that when you run python appserver.py, the if block gets triggered since appserver.py is the main file being executed, whereas when you start the server via Gunicorn, appserver.py gets imported by Gunicorn and the else block gets triggered. This is why we placed gunicorn_app = create_app() in the else block.
I hope the above explanation was satisfactory. Let me know if you have not understood any part.
Pranav Kundaikar's answer is excellent. I don't know if they have updated Gunicorn since, but I could avoid adding the main block in my app by using:
gunicorn -w 4 'api:create_app()'
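Applied to the layout from the question, where the factory lives in api/application.py, that would presumably become:
gunicorn --bind 127.0.0.1:5000 'api.application:create_app()'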