Cannot configure Openshift 3 with Tornado server - python-3.x

I'm trying to migrate my Tornado app from OpenShift 2 to OpenShift 3 and don't know how to set up the route, service, etc.
First I created a simple Python 3.5 application on RHEL 7. In the advanced options I set up the git repo and added the APP_FILE variable. Cloning and the app build finish successfully, and when I execute curl localhost:8080 in the web console terminal, it seems to work.
But service root link returns me this message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
I haven't actually changed anything in the route or service configuration. I guess I should set them up somehow, but right now I have no idea how to do this.
Here is my wsgi.py:
#!/usr/bin/env python
import importlib.machinery

if __name__ == '__main__':
    print('Executing __main__ ...')

    ip = 'localhost'
    port = 8080

    app = importlib.machinery.SourceFileLoader("application", 'wsgi/application').load_module("application")

    from wsgiref.simple_server import make_server
    httpd = make_server(ip, port, app.application)
    print('Starting server on http://{0}:{1}'.format(ip, port))
    httpd.serve_forever()
And application:
#!/usr/bin/env python
import os
import sys

import tornado.wsgi

from wsgi.openshift import handlers

if 'OPENSHIFT_REPO_DIR' in os.environ:
    sys.path.append(os.path.join(os.environ['OPENSHIFT_REPO_DIR'], 'wsgi',))
    virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/venv'
    os.environ['PYTHON_EGG_CACHE'] = os.path.join(virtenv, 'lib/python3.3/site-packages')
    virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
    try:
        exec(compile(open(virtualenv).read(), virtualenv, 'exec'), dict(__file__=virtualenv))
    except IOError:
        pass

settings = {
    'cookie_secret': 'TOP_SECRET',
    'static_path': os.path.join(os.getcwd(), 'wsgi/static'),
    'template_path': os.path.join(os.getcwd(), 'wsgi/templates'),
    'xsrf_cookies': False,
    'debug': True,
    'login_url': '/login',
}

application = tornado.wsgi.WSGIApplication(handlers, **settings)
EDIT:
Here is some console oc output:
> oc status
In project photoservice on server https://api.starter-us-west-1.openshift.com:443
http://photoservice-photoservice.a3c1.starter-us-west-1.openshiftapps.com to pod port 8080-tcp (svc/photoservice)
dc/photoservice deploys istag/photoservice:latest <-
bc/photoservice source builds git#bitbucket.org:ashchuk/photoservice.git#master on openshift/python:3.5
deployment #1 deployed 3 minutes ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
> oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
photoservice photoservice-photoservice.a3c1.starter-us-west-1.openshiftapps.com photoservice 8080-tcp None

I just changed ip = 'localhost' to ip = '0.0.0.0' as Graham said, and this worked.
Here is an explanation:
If you use localhost or 127.0.0.1 it will only accept requests from the network loopback device. This can only be connected to by clients running on the same host (container). You need to listen on all network interfaces, indicated by 0.0.0.0 to be able to accept requests from outside of the host (container). If you don't do that, OpenShift cannot connect to your application to proxy requests to it.
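For reference, a minimal sketch of the working binding, using the same wsgiref server as the wsgi.py above (the handler here is a trivial stand-in for the real Tornado WSGI application):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Trivial placeholder WSGI app standing in for the Tornado application
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

# '0.0.0.0' listens on all network interfaces, so the OpenShift router
# can reach the server from outside the container; 'localhost' would
# only accept connections originating inside the container itself.
httpd = make_server('0.0.0.0', 8080, application)
# httpd.serve_forever()  # uncomment to start handling requests
```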

Related

Unable to run flask app if starter file is other than app.py - Test-Driven Development with Python, Flask, and Docker [duplicate]

I want to know the correct way to start a flask application. The docs show two different commands:
$ flask -a sample run
and
$ python3.4 sample.py
Both produce the same result and run the application correctly.
What is the difference between the two and which should be used to run a Flask application?
The flask command is a CLI for interacting with Flask apps. The docs describe how to use CLI commands and add custom commands. The flask run command is the preferred way to start the development server.
Never use this command to deploy publicly, use a production WSGI server such as Gunicorn, uWSGI, Waitress, or mod_wsgi.
As of Flask 2.2, use the --app option to point the command at your app. It can point to an import name or file name. It will automatically detect an app instance or an app factory called create_app. Use the --debug option to run in debug mode with the debugger and reloader.
$ flask --app sample --debug run
Prior to Flask 2.2, the FLASK_APP and FLASK_ENV=development environment variables were used instead. FLASK_APP and FLASK_DEBUG=1 can still be used in place of the CLI options above.
$ export FLASK_APP=sample
$ export FLASK_ENV=development
$ flask run
On Windows CMD, use set instead of export.
> set FLASK_APP=sample
For PowerShell, use $env:.
> $env:FLASK_APP = "sample"
The python sample.py command runs a Python file and sets __name__ == "__main__". If the main block calls app.run(), it will run the development server. If you use an app factory, you could also instantiate an app instance at this point.
if __name__ == "__main__":
    app = create_app()
    app.run(debug=True)
Both of these commands ultimately start the Werkzeug development server, which, as the name implies, is a simple HTTP server that should only be used during development. Prefer the flask run command over app.run().
The latest documentation has the following example, assuming you want to run hello.py (using the .py file extension is optional):
Unix, Linux, macOS, etc.:
$ export FLASK_APP=hello
$ flask run
Windows:
> set FLASK_APP=hello
> flask run
You just need to run this command:
python app.py
(app.py is your desired Flask file.)
But make sure your .py file has the following Flask settings (related to port and host):
from flask import Flask, request
from flask_restful import Resource, Api
import sys
import os

app = Flask(__name__)
api = Api(app)

port = 5100
if len(sys.argv) > 1:
    port = int(sys.argv[1])
print("Api running on port : {} ".format(port))

class topic_tags(Resource):
    def get(self):
        return {'hello': 'world world'}

api.add_resource(topic_tags, '/')

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=port)
The simplest automatic way, without exporting anything, is to use python app.py; see the example here:
from flask import (
    Flask,
    jsonify
)

# Function that creates the app
def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)

    # Simple route
    @app.route('/')
    def hello_world():
        return jsonify({
            "status": "success",
            "message": "Hello World!"
        })

    return app  # do not forget to return the app

APP = create_app()

if __name__ == '__main__':
    # APP.run(host='0.0.0.0', port=5000, debug=True)
    APP.run(debug=True)
For Linux/Unix/macOS:
export FLASK_APP=sample.py
flask run
For Windows:
python sample.py
OR
set FLASK_APP=sample.py
flask run
You can also run a Flask application this way, while explicitly activating DEBUG mode:
FLASK_APP=app.py FLASK_DEBUG=true flask run

Seeing terminal logs of Flask App after session on server ended but app is still running on background

The scenario is below:
I SSH to the server, Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-96-generic x86_64), using PuTTY with my credentials, from Windows.
I go to the directory where I put my source code.
I start the Flask app by running the command python3 main.py; logs are shown on the terminal.
However, after I left my computer for some time, the session was disconnected/ended.
I know the app is still running because another team can still test it.
When I re-login to the server and go to the same directory, I don't want to kill/restart the already running app, because that would interfere with the others doing the test.
How can I see the running log, so I know what the testers are doing and can occasionally catch what's wrong?
my main.py code:
if __name__ == "__main__":
    ip = 'someip'
    port = 9053
    app.run(debug=True, host=os.getenv('IP', ip),
            port=int(os.getenv('PORT', port)), threaded=True)
You can save your Python log to a file, so you can review it at any time. This is an example using the logging lib:
import logging
logger = logging.getLogger(<logging_name>)
fh = logging.FileHandler(<logging file>)
logger.addHandler(fh)
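Filling in the placeholders, a complete sketch might look like this (the logger name "myapp" and file path "app.log" are arbitrary example choices):

```python
import logging

# Arbitrary example names: logger "myapp", log file "app.log"
logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

fh = logging.FileHandler("app.log")
fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(fh)

# Messages now go to app.log instead of only the terminal
logger.info("request received")
```

You can then follow the file from any later SSH session with tail -f app.log, without touching the already running process.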

How to send a post request with postman while also saving to a Mongo database?

I'm making a fictional character generator API. It runs fine when I send requests through Postman locally, but it gives a 500 error and times out when run through Docker. In the Flask API app:
from flask import Flask, jsonify, request
from flask_restful import Api, Resource
...
class AddCharacter(Resource):
    def post(self):
        ...
        p.add_person()
        # saves the character to a mongodb
        p.save_person()  # <-- causes a 500 error in postman when run through docker build and docker up
        retJson = {
            "Message": "Character has been created",
            "Status Code": 200
        }
        return jsonify(retJson)
How I connect to mongo:
db = MongoEngine()
app = Flask(__name__)
app.config['MONGODB_SETTINGS'] = {
    'db': 'projectdb',
    'host': 'localhost',
    'port': 27017
}
with app.app_context():
    db.init_app(app)
try:
    db_client = db.connection['projectdb']
except ConnectionFailure as e:
    sys.stderr.write("Could not connect to MongoDB: %s" % e)
    sys.exit(1)
the database manager:
class DatabaseManager():
    def save_user(self, first_name, last_name, openness, conscientiousness, extraversion,
                  agreeableness, emotional_stability, organization, anxiety,
                  knowledgeableness, sympathy, talkativeness, accommodation,
                  expressiveness, carefulness, depressiveness, gregariousness,
                  altruism, inquisitiveness):
        new_user = User(first_name=first_name, last_name=last_name)
        new_user.save()
        new_personality = Personality(Openness=openness, Conscientiousness=conscientiousness, Extraversion=extraversion,
                                      Agreeableness=agreeableness, EmotionalStability=emotional_stability,
                                      Organization=organization, Anxiety=anxiety,
                                      Knowledgeableness=knowledgeableness, Sympathy=sympathy,
                                      Talkativeness=talkativeness, Accommodation=accommodation,
                                      Expressiveness=expressiveness, Carefulness=carefulness,
                                      Depressiveness=depressiveness, Gregariousness=gregariousness,
                                      Altruism=altruism, Inquisitiveness=inquisitiveness)
        new_profile = Profile(person=new_user, personality=new_personality)
        new_profile.save()
In the Person class:
def save_person(self):
    dm.save_user(first_name=self.first_name, last_name=self.last_name, openness=aff.Openness,
                 conscientiousness=aff.Conscientiousness, extraversion=aff.Extraversion,
                 agreeableness=aff.Agreeableness, emotional_stability=aff.EmotionalStability,
                 organization=aff.Organization, anxiety=aff.Anxiety,
                 knowledgeableness=aff.Knowledgeableness, sympathy=aff.Sympathy, talkativeness=aff.Talkativeness,
                 accommodation=aff.Accommodation,
                 expressiveness=aff.Expressiveness, carefulness=aff.Carefulness, depressiveness=aff.Depressiveness,
                 gregariousness=aff.Gregariousness,
                 altruism=aff.Altruism, inquisitiveness=aff.Inquisitiveness)
The file structure, Dockerfile, requirements.txt, and .yml file are attached as images.
I run docker with sudo docker-compose build and then sudo docker-compose up
It seems from your setup that the host machine is running the database server, which you are unable to reach from your container's virtual network. What you need to do here is interface the host network and ports with the container's virtual network, so that your application can reach the database just as it can when running directly on the host, according to your application config.
By default, a Docker container starts in bridge mode when nothing is explicitly specified about network modes at container startup, so you can either bind the database server to the bridge IP or set the container's network mode to host.
The problem is the way you connect to MongoDB from the backend.
Keep in mind that localhost for a Docker container is different from the running host and points to the container itself (unless you run containers in host network). So in almost all cases you don't want to connect to localhost in a dockerized environment.
By default, docker-compose creates an internal network between running containers, such that you can resolve a container's ip by its service name. That is, if you replace localhost with db (your database service name), you can connect to your database.
I found adding networks to the docker-compose.yml fixed the problem:
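The compose file itself was only attached as an image, so as an illustration, a hypothetical docker-compose.yml with an explicit network shared by the app and the database (all service names, ports, and the network name here are assumptions) could look like:

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    networks:
      - backend
  db:
    image: mongo
    networks:
      - backend
networks:
  backend:
```

With a layout like this, the application connects to the database using the service name as the hostname, e.g. 'db' instead of 'localhost'.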

Docker container not able to connect to remote MongoDB

I have a Flask-based Python app which simply connects to MongoDB. It has two routes, GET and POST: GET simply prints "hello world", and with POST we can post any JSON data, which is then saved in MongoDB. This Python code is working fine. MongoDB is hosted in the cloud.
I have now created a Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
RUN pip3 install pymongo
ENV LISTEN_PORT=8000
EXPOSE 8000
COPY /app /app
Command used to run it:
docker run --rm -it -p 8000:8000 myflaskimage
After starting the container for this Docker image, I get a response from GET but no response from POST. I am using the Postman software to post JSON data. I get the error below:
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
I am a bit confused as to why the Python code works fine, but when I put the same code in Docker and start a container, it throws this error. Do we have to include anything in the Dockerfile to enable connections to MongoDB?
Please help. Thanks.
Python Code:
from flask import Flask, request
from pymongo import MongoClient

app = Flask(__name__)

def connect_db():
    try:
        client = MongoClient(<mongodbURL>)
        return client.get_database(<DBname>)
    except Exception as e:
        print(e)

def main():
    db = connect_db()
    collection = db.get_collection('<collectionName>')

    @app.route('/data', methods=['POST'])
    def data():
        j_data = request.get_json()
        x = collection.insert_one(j_data).inserted_id
        return "Data added successfully"

    @app.route('/')
    def hello_world():
        return "Hello World"

main()

if __name__ == '__main__':
    app.run()
You probably don't have an internet connection from the container. I had a similar issue when connecting from a containerized Java application to a public web service.
At first I would try to restart docker:
systemctl restart docker
If that does not help, then look into resolv.conf in your container:
docker run --rm myflaskimage cat /etc/resolv.conf
If it shows nameserver 127.x.x.x then you can try:
1) on the host system, comment out the dns=dnsmasq line in the /etc/NetworkManager/NetworkManager.conf file with a # and restart NetworkManager using systemctl restart network-manager
2) or explicitly set DNS for Docker by adding this to the /etc/docker/daemon.json file and restarting Docker:
{
"dns": ["my.dns.server"]
}

Using gevent-socketio paster integration causes my application to be unresponsive

I am writing a Pyramid application that relies on gevent-socketio and redis. However, I noticed that when I navigate away from the view that establishes the socket.io connection, my application becomes unresponsive. In order to try and isolate the issue, I created another bare-bones application and discovered that using pubsub.listen() was causing the issue:
class TestNamespace(BaseNamespace):
    def initialize(self):
        self.spawn(self.emitter)

    def emitter(self):
        client = redis.pubsub()
        client.subscribe('anything')
        for broadcast in client.listen():
            if broadcast['type'] != 'message':
                continue
The way I'm starting up my application is as follows:
pserve --reload development.ini
However, I can only get my application to work if I use the serve.py from the examples:
import os.path

from socketio.server import SocketIOServer
from pyramid.paster import get_app
from gevent import monkey; monkey.patch_all()

HERE = os.path.abspath(os.path.dirname(__file__))

if __name__ == '__main__':
    app = get_app(os.path.join(HERE, 'development.ini'))
    print 'Listening on port http://0.0.0.0:8080 and on port 10843 (flash policy server)'
    SocketIOServer(('0.0.0.0', 8080), app,
                   resource="socket.io", policy_server=True,
                   policy_listener=('0.0.0.0', 10843)).serve_forever()
Unfortunately, this is rather cumbersome for development, as I lose the --reload functionality. Ideally, I'd like to use the paster integration entry point.
Another thing I noticed is that the gevent-socketio paster integration does not monkey patch gevent, whereas the examples' serve.py does.
How can I get pserve --reload to work with gevent-socketio?
I've uploaded my test application to github: https://github.com/m-martinez/iotest
Under [server:main] in your ini file.
use = egg:gevent-socketio#paster
transports = websocket, xhr-multipart, xhr-polling
policy_server = True
host = 0.0.0.0
port = 6543
If you get an error, make sure you're using the latest version of gevent-socketio.
With no success using egg:gevent-socketio#paster, I ended up using gunicorn with watchdog to achieve what I wanted for development:
watchmedo auto-restart \
--pattern "*.py;*.ini" \
--directory ./iotest/ \
--recursive \
-- \
gunicorn --paste ./iotest/development.ini
This is what my [server:main] section looks like:
[server:main]
use = egg:gunicorn#main
worker_class = socketio.sgunicorn.GeventSocketIOWorker
host = 0.0.0.0
port = 8080
debug = True
logconfig = %(here)s/development.ini
