Using gevent-socketio paster integration causes my application to be unresponsive - pyramid

I am writing a Pyramid application that relies on gevent-socketio and redis. However, I noticed that when I navigate away from the view that establishes the socket.io connection, my application becomes unresponsive. In order to try and isolate the issue, I created another bare-bones application and discovered that using pubsub.listen() was causing the issue:
class TestNamespace(BaseNamespace):

    def initialize(self):
        self.spawn(self.emitter)

    def emitter(self):
        client = redis.pubsub()
        client.subscribe('anything')
        for broadcast in client.listen():
            if broadcast['type'] != 'message':
                continue
The way I'm starting up my application is as follows:
pserve --reload development.ini
However, I can only get my application to work if I use the serve.py from the examples:
import os.path

from socketio.server import SocketIOServer
from pyramid.paster import get_app
from gevent import monkey; monkey.patch_all()

HERE = os.path.abspath(os.path.dirname(__file__))

if __name__ == '__main__':
    app = get_app(os.path.join(HERE, 'development.ini'))
    print 'Listening on port http://0.0.0.0:8080 and on port 10843 (flash policy server)'
    SocketIOServer(('0.0.0.0', 8080), app,
                   resource="socket.io", policy_server=True,
                   policy_listener=('0.0.0.0', 10843)).serve_forever()
Unfortunately this is rather cumbersome for development, as I lose the --reload functionality. Ideally I'd like to use the paster integration entry point.
Another thing I noticed is that the gevent-socketio paster integration does not monkey patch gevent, whereas the example serve.py does.
How can I get pserve --reload to work with gevent-socketio?
I've uploaded my test application to github: https://github.com/m-martinez/iotest

Put the following under [server:main] in your ini file:
use = egg:gevent-socketio#paster
transports = websocket, xhr-multipart, xhr-polling
policy_server = True
host = 0.0.0.0
port = 6543
If you get an error, make sure you are using the latest version of gevent-socketio.
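The question also notes that the paster entry point does not monkey patch gevent for you. A minimal sketch of a workaround (exactly where you patch is an assumption; the point is to do it before redis or any other blocking library is imported) is to patch at the very top of your package's __init__.py:

# top of your package's __init__.py (placement is an assumption)
from gevent import monkey
monkey.patch_all()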

With no success using egg:gevent-socketio#paster, I ended up using gunicorn with watchdog to achieve what I wanted for development:
watchmedo auto-restart \
--pattern "*.py;*.ini" \
--directory ./iotest/ \
--recursive \
-- \
gunicorn --paste ./iotest/development.ini
This is what my [server:main] section looks like:
[server:main]
use = egg:gunicorn#main
worker_class = socketio.sgunicorn.GeventSocketIOWorker
host = 0.0.0.0
port = 8080
debug = True
logconfig = %(here)s/development.ini
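Note that the watchmedo command comes from the watchdog package, so (assuming a pip-based setup) both tools need to be installed into the virtualenv first:

pip install gunicorn watchdog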

Related

Unable to run flask app if starter file is other than app.py - Test-Driven Development with Python, Flask, and Docker [duplicate]

I want to know the correct way to start a flask application. The docs show two different commands:
$ flask -a sample run
and
$ python3.4 sample.py
Both produce the same result and run the application correctly.
What is the difference between the two and which should be used to run a Flask application?
The flask command is a CLI for interacting with Flask apps. The docs describe how to use CLI commands and add custom commands. The flask run command is the preferred way to start the development server.
Never use this command to deploy publicly; use a production WSGI server such as Gunicorn, uWSGI, Waitress, or mod_wsgi instead.
As of Flask 2.2, use the --app option to point the command at your app. It can point to an import name or file name. It will automatically detect an app instance or an app factory called create_app. Use the --debug option to run in debug mode with the debugger and reloader.
$ flask --app sample --debug run
Prior to Flask 2.2, the FLASK_APP and FLASK_ENV=development environment variables were used instead. FLASK_APP and FLASK_DEBUG=1 can still be used in place of the CLI options above.
$ export FLASK_APP=sample
$ export FLASK_ENV=development
$ flask run
On Windows CMD, use set instead of export.
> set FLASK_APP=sample
For PowerShell, use $env:.
> $env:FLASK_APP = "sample"
The python sample.py command runs a Python file and sets __name__ == "__main__". If the main block calls app.run(), it will run the development server. If you use an app factory, you could also instantiate an app instance at this point.
if __name__ == "__main__":
    app = create_app()
    app.run(debug=True)
Both of these commands ultimately start the Werkzeug development server, which, as the name implies, is a simple HTTP server that should only be used during development. You should prefer the flask run command over app.run().
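As a quick illustration (a minimal sketch; the module name sample and the route are assumptions), the following file can be started either way:

# sample.py -- runnable with "flask --app sample run" or "python sample.py"
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    # only reached when the file is executed directly, not via the flask CLI
    app.run(debug=True)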
The latest documentation has the following example, assuming you want to run hello.py (the .py file extension is optional):
Unix, Linux, macOS, etc.:
$ export FLASK_APP=hello
$ flask run
Windows:
> set FLASK_APP=hello
> flask run
You just need to run this command:
python app.py
(app.py is your desired Flask file), but make sure your .py file has the following Flask settings (related to port and host):
from flask import Flask, request
from flask_restful import Resource, Api
import sys
import os

app = Flask(__name__)
api = Api(app)

port = 5100
if len(sys.argv) > 1:
    # allow overriding the default port from the command line
    port = int(sys.argv[1])
print("Api running on port : {} ".format(port))

class topic_tags(Resource):
    def get(self):
        return {'hello': 'world world'}

api.add_resource(topic_tags, '/')

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=port)
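For example, assuming the file is saved as app.py, the port can then be overridden from the command line:

python app.py 8080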
The simplest automatic way, without exporting anything, is to use python app.py; see the example here:
from flask import (
    Flask,
    jsonify
)

# Function that creates the app
def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)

    # Simple route
    @app.route('/')
    def hello_world():
        return jsonify({
            "status": "success",
            "message": "Hello World!"
        })

    return app  # do not forget to return the app

APP = create_app()

if __name__ == '__main__':
    # APP.run(host='0.0.0.0', port=5000, debug=True)
    APP.run(debug=True)
For Linux/Unix/macOS:
export FLASK_APP=sample.py
flask run
For Windows:
python sample.py
OR
set FLASK_APP=sample.py
flask run
You can also run a Flask application this way while being explicit about activating debug mode:
FLASK_APP=app.py FLASK_DEBUG=true flask run

How do I serve a Websocket Application written in Python using Twisted Framework

I have written a websocket server application using the Twisted Framework. I am new to this and am trying to figure out how to serve it as an application so I can use NGINX to reverse proxy it.
The main body of the application looks as below:
if __name__ == "__main__":
    # Clear redis cache
    R.flushdb()

    log.startLogging(sys.stdout)

    contextFactory = ssl.DefaultOpenSSLContextFactory('keys/server.key',
                                                      'keys/server.crt')

    ServerFactory = BroadcastServerFactory
    factory = BroadcastServerFactory("wss://127.0.0.1:8080")
    factory.protocol = BroadcastServerProtocol

    resource = WebSocketResource(factory)
    root = File(".")
    root.putChild(b"ws", resource)
    site = Site(root)

    reactor.listenSSL(8080, site, contextFactory)
    reactor.run()
My understanding is that I need to create a WSGI application, but I am confused as to how to do this. I am not sure how to change this program into WSGI. When I have worked with Django and Flask they have a WSGI file, but this new project is just a single Python file using the Twisted framework.
Sorry as I am struggling a bit to explain this.
What I have done is change the code so it doesn't have the if statement anymore and looks as below:
# New imports
from twisted.application import internet, service

# Bottom of file
R.flushdb()
log.startLogging(sys.stdout)
contextFactory = ssl.DefaultOpenSSLContextFactory('keys/server.key',
'keys/server.crt')
ServerFactory = BroadcastServerFactory
factory = BroadcastServerFactory("wss://127.0.0.1:8080")
factory.protocol = BroadcastServerProtocol
resource = WebSocketResource(factory)
application = service.Application("picserver")
service = internet.TCPServer('8080', factory)
resource = WSGIResource(reactor, reactor.getThreadPool(), factory)
root = File(".")
root.putChild(b"ws", resource)
site = Site(root)
reactor.listenSSL(8080, site, contextFactory)
service.setServiceParent(application)
reactor.run()
I renamed the file to 'server.tap', but I don't think this is necessary. The code changes then allow me to run the program as a daemon using:
twistd -y server.tap
I then created a .service file in /etc/systemd/system as below:
[Unit]
Description=picserver startup script
After=network.target
[Service]
User=django
Group=www-data
Environment="DBNAME=mydb"
Environment="DBUSER=dbuser"
Environment="DBPASSWORD=password"
ExecStart=/home/<username>/Documents/python/environments/gameservertest/bin/python /home/<username>/Documents/python/environments/gameservertest/bin/twistd -y /home/<username>/Documents/python/picturegameserver/server.tap
WorkingDirectory=/home/<username>/Documents/python/picturegameserver/
Restart=always
[Install]
WantedBy=multi-user.target
I am now able to use "systemctl" to run this as a service and can connect the front end running locally on the server. At the moment, I don't think I will need to configure Nginx to reverse proxy it as I can just have the front end running on the same server.
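If a reverse proxy does become necessary later, a minimal nginx location block would need the WebSocket upgrade headers (a sketch only; the /ws path and upstream port are taken from the code above, everything else is an assumption):

location /ws {
    proxy_pass https://127.0.0.1:8080/ws;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}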

Seeing terminal logs of Flask App after session on server ended but app is still running on background

The scenario is below:
I SSH to server Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-96-generic
x86_64) using putty with my credentials, from Windows
Go to the directory where I put my source code
Start the Flask app by running the command python3 main.py; logs are showing on the terminal.
However, after I left my computer for some time, the session was disconnected/ended.
I know the app is still running because another team can still test it.
When I re-login to the server and go to the same directory, I don't want to kill/restart the already-running app because it would interfere with others doing the test.
How can I see the running log so I know what the testers are doing and occasionally catch what's wrong?
my main.py code:
if __name__ == "__main__":
    ip = 'someip'
    port = 9053
    app.run(debug=True, host=os.getenv('IP', ip),
            port=int(os.getenv('PORT', port)), threaded=True)
You can save your Python log to a file so you can review it at any time. This is an example using the logging library:
import logging
logger = logging.getLogger(<logging_name>)
fh = logging.FileHandler(<logging file>)
logger.addHandler(fh)
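A slightly fuller sketch (the file name, format, and wiring into the Flask app are assumptions) attaches a FileHandler to the app's logger so that a new SSH session can follow the file:

import logging

# write application log records to a file next to main.py (path is an assumption)
fh = logging.FileHandler("app.log")
fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
app.logger.addHandler(fh)
app.logger.setLevel(logging.INFO)
# to also capture the request log, attach the same handler to logging.getLogger('werkzeug')

After reconnecting, tail -f app.log shows new entries as the testers hit the app.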

Cannot configure Openshift 3 with Tornado server

I'm trying to migrate my Tornado app from OpenShift 2 to OpenShift 3 and don't know how to actually set up the route, service, etc.
First I create a simple Python 3.5 application on RHEL 7. In the advanced options I set up the git repo and add the APP_FILE variable. Cloning and the app build finish successfully. I also executed curl localhost:8080 in the web console terminal, and it seems to be working.
But the service root link returns this message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
I haven't actually changed anything in the route and service configuration. I guess I should set it up somehow, but right now I have no idea how to do this.
Here is my wsgi.py:
#!/usr/bin/env python
import importlib.machinery

if __name__ == '__main__':
    print('Executing __main__ ...')

    ip = 'localhost'
    port = 8080

    app = importlib.machinery.SourceFileLoader("application", 'wsgi/application').load_module("application")

    from wsgiref.simple_server import make_server
    httpd = make_server(ip, port, app.application)
    print('Starting server on http://{0}:{1}'.format(ip, port))
    httpd.serve_forever()
And application:
#!/usr/bin/env python
import os
import sys

import tornado.wsgi

from wsgi.openshift import handlers

if 'OPENSHIFT_REPO_DIR' in os.environ:
    sys.path.append(os.path.join(os.environ['OPENSHIFT_REPO_DIR'], 'wsgi',))
    virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/venv'
    os.environ['PYTHON_EGG_CACHE'] = os.path.join(virtenv, 'lib/python3.3/site-packages')
    virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
    try:
        exec(compile(open(virtualenv).read(), virtualenv, 'exec'), dict(__file__=virtualenv))
    except IOError:
        pass

settings = {
    'cookie_secret': 'TOP_SECRET',
    'static_path': os.path.join(os.getcwd(), 'wsgi/static'),
    'template_path': os.path.join(os.getcwd(), 'wsgi/templates'),
    'xsrf_cookies': False,
    'debug': True,
    'login_url': '/login',
}

application = tornado.wsgi.WSGIApplication(handlers, **settings)
EDIT:
Here is some console oc output:
> oc status
In project photoservice on server https://api.starter-us-west-1.openshift.com:443
http://photoservice-photoservice.a3c1.starter-us-west-1.openshiftapps.com to pod port 8080-tcp (svc/photoservice)
dc/photoservice deploys istag/photoservice:latest <-
bc/photoservice source builds git#bitbucket.org:ashchuk/photoservice.git#master on openshift/python:3.5
deployment #1 deployed 3 minutes ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
> oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
photoservice photoservice-photoservice.a3c1.starter-us-west-1.openshiftapps.com photoservice 8080-tcp None
I just changed ip = 'localhost' to ip = '0.0.0.0', as Graham said, and this worked.
Here is an explanation:
If you use localhost or 127.0.0.1 it will only accept requests from the network loopback device. This can only be connected to by clients running on the same host (container). You need to listen on all network interfaces, indicated by 0.0.0.0 to be able to accept requests from outside of the host (container). If you don't do that, OpenShift cannot connect to your application to proxy requests to it.
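In terms of the wsgi.py above, the change is just:

ip = '0.0.0.0'  # was 'localhost'; bind to all interfaces so the OpenShift router can reach the pod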

pyramid + gunicorn_paster development.ini : Error waitress

I am trying to use gunicorn with pyramid.
I installed gunicorn 18 into a dedicated Pyramid 1.5 virtualenv,
and after activating it, I start gunicorn_paster, but it stops at once with an error:
(venv) gunicorn_paster development.ini
Error: waitress
What does this error mean?
I tried --debug but it did not give me more clues.
--preload does not work either.
'pserve development.ini' and mod_wsgi work well, so my virtualenv should be OK.
You need a configuration file.
# gunicorn_conf.py
import os

def numCPUs():
    if not hasattr(os, "sysconf"):
        raise RuntimeError("No sysconf detected.")
    return os.sysconf("SC_NPROCESSORS_ONLN")

workers = numCPUs() * 2 + 1
bind = "127.0.0.1:8001"
pidfile = "/tmp/gunicorn-app.pid"
backlog = 2048
logfile = "/var/log/gunicorn-app.log"
loglevel = "info"
Then launch as shown below (note: gunicorn_conf.py needs to be in the same directory as development.ini):
gunicorn --paste development.ini
You can leave your development.ini as it is; there is no need to edit it.
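If gunicorn does not pick the configuration file up on its own, you can also point to it explicitly with gunicorn's -c/--config option (the file name is the one used above):

gunicorn --paste development.ini -c gunicorn_conf.py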
I found the problem: I just had to deactivate and reactivate the virtualenv after installing gunicorn to get it to work.
What server setting does your development.ini have? By default, it might be using waitress. Check the ini file configuration.
Try this:
# ini file
[server:main]
use = egg:gunicorn#main
host = 0.0.0.0
port = 5000
