I'm unable to connect to a CherryPy server running inside a Docker container from my host machine when I use cherrypy.tree.mount, but when I use cherrypy.quickstart() I can connect to the server. With cherrypy.tree.mount, a curl request to localhost:8080 fails with curl: (56) Recv failure: Connection reset by peer.
App file which works
import cherrypy

class HelloWorld(object):
    @cherrypy.expose
    def index(self):
        return "Hello World!"

cherrypy.quickstart(HelloWorld(), '/', {'global': {'server.socket_host': '0.0.0.0', 'server.socket_port': 8080}})
App file which fails
import cherrypy

class HelloWorld(object):
    @cherrypy.expose
    def index(self):
        return "Hello World!"

cherrypy.tree.mount(HelloWorld(), '/', {'global': {'server.socket_host': '0.0.0.0', 'server.socket_port': 8080}})
cherrypy.engine.start()
cherrypy.engine.block()
Dockerfile
FROM python:3.6
RUN mkdir -p /opt/server
WORKDIR /opt/server
ADD . /opt/server
VOLUME /opt/server
RUN pip install cherrypy
EXPOSE 8080
CMD python app.py
I have to use cherrypy.tree.mount because I have to run multiple applications on the same server.
I was setting the configuration in the wrong place. The right way to set global config is:
cherrypy.config.update({'server.socket_host': '0.0.0.0', 'server.socket_port': 8080})
Once that was set, it works fine. The config dict passed to cherrypy.tree.mount is per-application, not global.
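For reference, a minimal sketch of the corrected "failing" app file with this fix applied (the HelloWorld class is the same as above):

import cherrypy

class HelloWorld(object):
    @cherrypy.expose
    def index(self):
        return "Hello World!"

# Global server settings belong on cherrypy.config, not on the mount call
cherrypy.config.update({'server.socket_host': '0.0.0.0', 'server.socket_port': 8080})

# The config argument to tree.mount (omitted here) is for per-application settings
cherrypy.tree.mount(HelloWorld(), '/')

cherrypy.engine.start()
cherrypy.engine.block()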
Related
I'm using Socket.IO in my Django app, and I want to create one Gunicorn systemd service that is responsible for starting the Socket.IO server.
Below is the Socket.IO server code
File name: server.py
import eventlet
import socketio
from django.core.management.base import BaseCommand
from wsgi import application
from server_events import sio

# Wrap the Django WSGI application with the Socket.IO server
app = socketio.WSGIApp(sio, application)

class Command(BaseCommand):
    help = 'Start the server'

    def handle(self, *args, **options):
        eventlet.wsgi.server(eventlet.listen(('127.0.0.1', 8001)), app)
Below is the actual server code, with connect, disconnect, and one custom event handler
File name: server_events.py
import socketio

from wsgi import application

sio = socketio.Server(logger=True)
app = socketio.WSGIApp(sio, application)

@sio.event
def connected(sid, environ):
    print("Server connected with sid: {}".format(sid))

@sio.event
def disconnect(sid):
    print("Server disconnected with sid: {}".format(sid))

@sio.event
def run_bots(sid):
    print("func executed")
    # Custom logic here: a normal Python function is called, e.g. do_something()
When I run python manage.py server locally, it works fine, but on the production server I don't want to have to type the python manage.py server command. What I want is to create one Gunicorn service and give it the right instructions, so that starting the service brings up the Socket.IO server automatically, just like the runserver command does.
I tried to implement this by creating the Gunicorn service file below, but it doesn't work.
socket-gunicorn.service
[Unit]
Description=SocketIO server
After=network.target
[Service]
Type=simple
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/crypto-trading-bot
ExecStart=/home/ubuntu/crypto-trading-bot/venv/bin/python3 manage.py server >> /home/ubuntu/crypto-trading-bot/socketIO.log 2>&1
Restart=always
[Install]
WantedBy=multi-user.target
I am new to server-side stuff, so any help will be good.
Thanks
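In case it helps, a hedged sketch of the usual direction for this: python-socketio's WSGIApp can be served by Gunicorn directly with an eventlet worker, so the unit's ExecStart can point at Gunicorn instead of wrapping manage.py (the module path server_events:app is an assumption based on the files shown above, and gunicorn plus eventlet would need to be installed in the venv):

ExecStart=/home/ubuntu/crypto-trading-bot/venv/bin/gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:8001 server_events:app

Separately, systemd does not run ExecStart through a shell, so the >> ... 2>&1 redirection in the unit above is not interpreted; output normally goes to the journal instead.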
I have this structure in FastAPI.
project_folder/
project_folder/app/
project_folder/app/main.py (with app object of FastAPI)
project_folder/app/rest/panel.py (here I try import app object from main)
In panel.py I import it with:
from ..main import app
or
from app.main import app
It works with other files, like "from app.models import XYZModel".
I run it with this command:
bash -c "uvicorn main:app --host 0.0.0.0 --port 8000 --reload"
and I also tried this:
bash -c "uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload"
That one works: all paths and imports across the different directories are fine. The problem appears only when I try to import the app object from main.
Then I get an error like this:
Error loading ASGI app. Could not import module "main".
Notice that there are a couple of confusing things in your setup:
a conflicting name between the package name and the object name: app
an unclear source root for your project
I suggest:
rename the directory app to src, and treat it as your source root
when you import, use an absolute import: from main import app
run the server from inside the src folder with uvicorn main:app
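A minimal sketch of what that layout and run command could look like (the comments show the intended imports; the exact file contents are assumptions):

project_folder/
project_folder/src/
project_folder/src/main.py          # defines: app = FastAPI()
project_folder/src/rest/panel.py    # uses:    from main import app

cd project_folder/src
uvicorn main:app --host 0.0.0.0 --port 8000 --reload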
I am developing an app and the development setup was really easy.
I run it locally with:
$ . .venv/bin/activate
(.venv) $
(.venv) $ python -m flask run
* Serving Flask app 'app'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:8080
Press CTRL+C to quit
* Restarting with stat
* Debugger is active!
* Debugger PIN: -###-###
and I have configured apache2 on my (ubuntu) laptop with:
ProxyPass / http://127.0.0.1:8080
My code is structured like:
app.py
pages/scc_main/scc.html
...
The code has this:
import re
import jinja2
from flask import Flask
from flask import request
import data

app = Flask(__name__)
env = jinja2.Environment(loader=jinja2.FileSystemLoader("pages"))

@app.route('/')
def hello_world():
    return '<h2>Hello, World!</h2>'

@app.route('/contracts/scc')
@app.route('/contracts/scc/')
def contracts_main():
    main = env.get_template('scc_main/scc.html')
    context = data.build('scc_main')
    return main.render(**context)
And everything works great. As in:
$ curl 'http://localhost/'
<h2>Hello, World!</h2>$
But when I deploy, wow. I set my site's root to point to the app, and that part actually works: I can hit https://opencalaccess.org/ and it serves my static content.
I have:
import sys
import logging

logging.basicConfig(
    level=logging.DEBUG,
    filename='/var/www/<full-path>/logs/contracts_scc.log',
    format='%(asctime)s %(message)s')

sys.path.insert(0, '/var/www/<full-path>')
sys.path.insert(0, '/var/www/<full-path>/.venv/lib/python3.8/site-packages')
And https://opencalaccess.org/contracts/scc works. But only after I change the Environment call above to:
env = jinja2.Environment(loader=jinja2.FileSystemLoader("/var/www/<full-path>/pages"))
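One way to avoid hard-coding the deployment path (a sketch, not from the original code): resolve the pages directory relative to the module file, so the same loader works both under flask run and under the WSGI deployment.

import os
import jinja2

# Resolve "pages" next to this file instead of relative to the process's
# working directory, which differs between the dev server and mod_wsgi.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
env = jinja2.Environment(loader=jinja2.FileSystemLoader(os.path.join(BASE_DIR, "pages")))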
Now, any plain link is fine. But anything that looks at flask.request.path gives me:
The browser (or proxy) sent a request that this server could not understand.
What the heck? Setting up the dev environment was so easy. What do you have to do to get this working in deployment? Any suggestions?
ADDED:
Well, it seems clear that it is the WSGI part that is having the problem. My script is not receiving the request structure and so it cannot read any parameters. I have all my parameters on the URL, so my data building method reads the request.path to see what to do.
So, where to go from here. We will see.
I am no longer able to reproduce this.
I have a Flask application with a uWSGI configuration. This Flask app processes requests such as addition, subtraction and multiplication. In my current project structure I have a single app, and that app is referenced in the uWSGI config. Now I need a separate Flask application for each operation, i.e. flask1 for addition, flask2 for subtraction, and so on. I am a total beginner and have no idea how to achieve this with uWSGI.
I have heard about uWSGI emperor mode but don't know much about it.
My app file:
from myapp import app

if __name__ == '__main__':
    app.run()
uWSGI config:
module = wsgi:app
You could do this by using Werkzeug's Dispatcher Middleware.
With a sample application like this:
# application.py
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware

def create_app_root():
    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'I am the root app'

    return app

def create_app_1():
    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'I am app 1'

    return app

def create_app_2():
    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'I am app 2'

    return app

# Mount each sub-application under its own URL prefix
dispatcher_app = DispatcherMiddleware(create_app_root(), {
    '/1': create_app_1(),
    '/2': create_app_2(),
})
You can then run this with gunicorn:
gunicorn --bind 0.0.0.0:5000 application:dispatcher_app
And test with curl:
$ curl -L http://localhost:5000/
I am the root app%
$ curl -L http://localhost:5000/1
I am app 1%
$ curl -L http://localhost:5000/2
I am app 2%
This seems to work by issuing a redirect, which is the reason for the -L flag here.
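Since the original setup uses uWSGI rather than Gunicorn, the same dispatcher object should also work from a uWSGI config along these lines (a sketch; it assumes the code above is saved as application.py next to the ini):

[uwsgi]
module = application:dispatcher_app
http = 0.0.0.0:5000
master = true
processes = 2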
I have developed an API using the Falcon framework (v1.0). Now I want to deploy this API on an apache2 server with mod_wsgi on an Amazon EC2 instance.
At the moment I'm running my app with the wsgiref package on the EC2 server.
import falcon
from wsgiref import simple_server

api = app = falcon.API()

class Resource(object):
    def on_get(self, req, resp):
        print("i was here :(")
        if 'fields' in req.params:
            print(req.params['fields'])
            print(len(req.params['fields']))
            print(type(req.params['fields']))

res = Resource()
api.add_route('/', res)

if __name__ == '__main__':
    http = simple_server.make_server('0.0.0.0', 8000, app)
    http.serve_forever()
When I call https://example.com:8000/, I don't get any response, and the server doesn't even receive the request.
wsgi.py file contains:
from test import app as application
I've added the following lines to /etc/apache2/sites-available/000-default.conf:
WSGIDaemonProcess test python-path=/var/www/test/test:/var/www/test/env/lib/python3.4/site-packages
WSGIProcessGroup test
WSGIScriptAlias / /var/www/test/wsgi.py
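For context, those directives normally sit inside the site's VirtualHost, roughly like this (a sketch; ServerName and the Directory block are assumptions, the paths are the ones from the question):

<VirtualHost *:80>
    ServerName example.com

    WSGIDaemonProcess test python-path=/var/www/test/test:/var/www/test/env/lib/python3.4/site-packages
    WSGIProcessGroup test
    WSGIScriptAlias / /var/www/test/wsgi.py

    <Directory /var/www/test>
        Require all granted
    </Directory>
</VirtualHost>

Note that under mod_wsgi the if __name__ == '__main__' block never runs, so the app is served on apache's port (80/443 on example.com), not on :8000; the wsgiref simple_server only comes into play when test.py is run directly.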