Deploying Python using CherryPy in Elastic Beanstalk - python-3.x

I am new to Python. I have to run a Python application on the Amazon cloud. I am using CherryPy and deploying through Elastic Beanstalk. Here is my simple HelloWorld code:
import cherrypy

class Hello(object):
    @cherrypy.expose
    def index(self):
        return "Hello world!"

if __name__ == '__main__':
    cherrypy.config.update({'server.socket_host': '0.0.0.0',
                            'server.socket_port': 80,})
    cherrypy.quickstart(Hello())
In the requirements.txt file I have CherryPy==10.2.2. Still, I am not able to see any output at the Beanstalk URL. While deploying I get the following error:
Your WSGIPath refers to a file that does not exist.
Can anyone give any insight?

The problem was that the WSGIPath variable under Software Configuration specifies application.py as the entry file, while the Hello class in the code above was in a file with a different name.
Make sure the startup code is in a file named application.py, or change the configuration to point at your file.
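If renaming the file is not convenient, the same setting can also be changed with an .ebextensions config file. A minimal sketch, assuming the older Python platform and that your code lives in hello.py (the config file name is arbitrary):
# .ebextensions/options.config (hypothetical file name)
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: hello.py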

Related

Unable to run flask app if starter file is other than app.py - Test-Driven Development with Python, Flask, and Docker

I want to know the correct way to start a flask application. The docs show two different commands:
$ flask -a sample run
and
$ python3.4 sample.py
Both produce the same result and run the application correctly.
What is the difference between the two and which should be used to run a Flask application?
The flask command is a CLI for interacting with Flask apps. The docs describe how to use CLI commands and add custom commands. The flask run command is the preferred way to start the development server.
Never use this command to deploy publicly; use a production WSGI server such as Gunicorn, uWSGI, Waitress, or mod_wsgi instead.
As of Flask 2.2, use the --app option to point the command at your app. It can point to an import name or file name. It will automatically detect an app instance or an app factory called create_app. Use the --debug option to run in debug mode with the debugger and reloader.
$ flask --app sample --debug run
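For example, a minimal sample.py using an app factory (a hypothetical sketch) that the command above would detect automatically:
# sample.py -- "flask --app sample --debug run" finds create_app() on its own
from flask import Flask

def create_app():
    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from the factory!"

    return app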
Prior to Flask 2.2, the FLASK_APP and FLASK_ENV=development environment variables were used instead. FLASK_APP and FLASK_DEBUG=1 can still be used in place of the CLI options above.
$ export FLASK_APP=sample
$ export FLASK_ENV=development
$ flask run
On Windows CMD, use set instead of export.
> set FLASK_APP=sample
For PowerShell, use $env:.
> $env:FLASK_APP = "sample"
The python sample.py command runs a Python file with __name__ set to "__main__". If the main block calls app.run(), it will run the development server. If you use an app factory, you can also instantiate an app instance at this point.
if __name__ == "__main__":
    app = create_app()
    app.run(debug=True)
Both these commands ultimately start the Werkzeug development server, which, as the name implies, is a simple HTTP server that should only be used during development. You should prefer the flask run command over app.run().
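For reference, a minimal sample.py (hypothetical) that can be started either way:
# sample.py -- "flask --app sample run" uses the module-level app;
# "python sample.py" falls through to the __main__ block instead
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True)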
The latest documentation has the following example, assuming you want to run hello.py (including the .py file extension is optional):
Unix, Linux, macOS, etc.:
$ export FLASK_APP=hello
$ flask run
Windows:
> set FLASK_APP=hello
> flask run
You just need to run this command:
python app.py
(app.py is your desired Flask file), but make sure your .py file has the following Flask settings (related to port and host):
from flask import Flask
from flask_restful import Resource, Api
import sys

app = Flask(__name__)
api = Api(app)

# default port, optionally overridden by the first command-line argument
port = 5100
if len(sys.argv) > 1:
    port = int(sys.argv[1])
print("Api running on port : {} ".format(port))

class topic_tags(Resource):
    def get(self):
        return {'hello': 'world world'}

api.add_resource(topic_tags, '/')

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=port)
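With that in place you can start the API on the default port, or pass a different one as the first command-line argument:
$ python app.py          # serves on port 5100
$ python app.py 8080     # serves on port 8080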
The simplest automatic way, without exporting anything, is to use python app.py. See the example here:
from flask import (
    Flask,
    jsonify
)

# Function that creates the app
def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)

    # Simple route
    @app.route('/')
    def hello_world():
        return jsonify({
            "status": "success",
            "message": "Hello World!"
        })

    return app  # do not forget to return the app

APP = create_app()

if __name__ == '__main__':
    # APP.run(host='0.0.0.0', port=5000, debug=True)
    APP.run(debug=True)
For Linux/Unix/macOS:
export FLASK_APP=sample.py
flask run
For Windows:
python sample.py
OR
set FLASK_APP=sample.py
flask run
You can also run a Flask application this way, while being explicit about activating DEBUG mode:
FLASK_APP=app.py FLASK_DEBUG=true flask run

Why when I start uvicorn in my FastAPI service does my configuration method run twice?

I have written a service using FastAPI and Uvicorn. I have a main in my service that starts uvicorn (see below). In that main, the first thing I do is load configuration settings. I have some INFO outputs that print the settings when I load the configuration. I notice that when I start my service, the configuration loading method seems to run twice.
# INITIALIZE
if __name__ == "__main__":
    # Load the config once at bootstrap time. This outputs the string "Loading configuration settings..."
    config = CdfAuthConfig()
    print("Loaded Configuration")

    # Create FastAPI object
    app = FastAPI()

    # Start uvicorn
    uvicorn.run(app, host="127.0.0.1", port=5050)
The output when I run the service looks like:
Loading configuration settings...
Loading configuration settings...
Loaded Configuration
Why is the "CdfAuthConfig()" class being instantiated twice? It obviously has something to do with the "uvicorn.run" command.
I had a similar setup and this behavior made me curious, so I did some tests, and now I probably see why.
Your if __name__ == "__main__": is being reached only once; this is a fact.
Here is how you can test this.
Add the following line before your if:
print(__name__)
If you run your code as is, with that line added, it will print:
__main__ # in the first run
Then uvicorn will call your program again and will print something like:
__mp_main__ # after uvicorn starts your code again
And right after it will also print:
app # since this is the argument you gave to uvicorn
If you want to avoid that, you should call uvicorn from the command line, like:
uvicorn main:app --reload --host 0.0.0.0 --port 5000 # assuming main.py is your file name
Uvicorn will reload your code since you are calling it from inside the code. A workaround would be to have the uvicorn call in a separate file, or, as I said, just use the command line.
If you don't want to write the command with all the arguments every time, you can write a small script (app_start.sh).
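A sketch of such a script, assuming your file is main.py and the app object is called app:
#!/bin/sh
# app_start.sh -- wraps the uvicorn command line so you don't retype the arguments
uvicorn main:app --reload --host 0.0.0.0 --port 5000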
I hope this helps you understand a little bit better.

waitress + flask + gcloud: how to set up the server

I have been trying to deploy a basic app to Google App Engine (because Azure is an extortion) for the past few days. I have learned that Gunicorn does not work on Windows systems and that the alternative is waitress. I read all the answers related to the subject here before I posted this question!
So I have been trying different setups and reading about it, and I still can't get it running. My field is data science, but deployment seems to be obligatory nowadays. If someone can help me out, it would be very appreciated.
app.py file
from flask import Flask, render_template, request
from waitress import serve

app = Flask(__name__)

@app.route('/')
def index():
    name = request.args.get("name")
    if name == None:
        name = "Reinhold"
    return render_template("index.html", name=name)

if __name__ == '__main__':
    # app.run(debug=True)
    serve(app, host='0.0.0.0', port=8080)
gcloud app deploy will look for the entrypoint (normally Gunicorn) in the app.yaml file. I tried different setups there and ended up setting it to None, since Flask will look for an alternative, in my humble view. Though I still think it would be better to set up the waitress server there.
app.yaml file
runtime: python37
#entrypoint: None
entrypoint: waitress-serve --listen=*:8080 serve:app
GCloud will also look for an appengine_config.py file where it will find the dependencies (I think):
from google.appengine.ext import vendor
vendor.add('venv\Lib')
The requirements.txt file will be the following:
astroid==2.3.3
autopep8==1.4.4
Click==7.0
colorama==0.4.3
dominate==2.4.0
Flask==1.1.1
Flask-Bootstrap==3.3.7.1
Flask-WTF==0.14.2
isort==4.3.21
itsdangerous==1.1.0
Jinja2==2.10.3
lazy-object-proxy==1.4.3
MarkupSafe==1.1.1
mccabe==0.6.1
pycodestyle==2.5.0
pylint==2.4.4
six==1.13.0
typed-ast==1.4.1
visitor==0.1.3
waitress==1.4.2
Werkzeug==0.16.0
wrapt==1.11.2
WTForms==2.2.1
In the Google console I could access the log view to see what went wrong during deployment, and this is what I got for the code I shared here:
{
  insertId: "5e1e9b4500029d71f92c1db9"
  labels: {…}
  logName: "projects/bokehflaskgcloud/logs/stderr"
  receiveTimestamp: "2020-01-15T04:55:33.288839846Z"
  resource: {…}
  textPayload: "/bin/sh: 1: exec: None: not found"
  timestamp: "2020-01-15T04:55:33.171377Z"
}
If someone could help solve this, that would be great, because Google seems to be a good alternative for deploying some work. Azure and VS Code have good integration, so it isn't as hard to deploy there, but the cost after the trial is insane.
That is what I get once I try to deploy the application.
Error: Server Error
The server encountered an error and could not complete your request.
Please try again in 30 seconds.
Easily run your Flask app using Gunicorn:
runtime: python37
entrypoint: gunicorn -b :$PORT main:app
You need to add gunicorn to your requirements.txt.
Check this documentation on how to define application startup in Python 3.
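If you would rather keep waitress instead of Gunicorn, a sketch of an equivalent entrypoint (assuming the Flask object is named app inside app.py) would be:
runtime: python37
entrypoint: waitress-serve --port=$PORT app:app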
Make sure that you run your app using the Flask run method in case you want to test it locally:
if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
appengine_config.py is not used in Python 3. The Python 2 runtime uses this file to install client libraries and provide values for constants and "hook functions". The Python 3 runtime doesn't use this file.
In the app.py file there is no mention of the Flask library. Please add the following import at line 2:
from flask import Flask, request, render_template

Why am I getting : Unable to import module 'handler': No module named 'paramiko'?

I needed to move files with an AWS Lambda from an SFTP server to my AWS account, and then I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
It talks about paramiko as an SSH client candidate for moving files over SSH.
Then I wrote this class wrapper in Python to be used from my serverless handler file:
import paramiko
import sys

class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        creates ftp connection
        Args:
            hostname (string): endpoint of the ftp server
            username (string): username for logging in on the ftp server
            password (string): password for logging in on the ftp server
        """
        try:
            self._host = hostname
            self._port = 22
            # lets you save results of the download into a log file.
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        downloads a file from the sftp server and returns its contents
        Args:
            sftpPath = "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        self._sftp.close()
        tmpfile = open(localPath, 'r')
        return tmpfile.read()

    def close(self):
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client

sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")

file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with serverless and run the handler.py function (same as the test.py above) I get back the error:
Unable to import module 'handler': No module named 'paramiko'
It looks like the deployment is unable to import paramiko (from the article above it seems like it should be available for Lambda Python 3 on AWS), shouldn't it?
If not, what's the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive guide/tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package as a Serverless (node) plugin.
Creating a virtualenv and a Docker daemon will be required to package up your serverless project before deploying it to AWS Lambda.
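As a sketch, the relevant serverless.yml lines that enable the plugin (the rest of the file is omitted):
# serverless.yml (excerpt)
plugins:
  - serverless-python-requirements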
In case you use
custom:
  pythonRequirements:
    zip: true
in your serverless.yml, you have to use this code snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All details can be found in the Serverless Python Requirements documentation.
You have to create a virtualenv, install your dependencies, and then zip all files under site-packages/:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cp handler.py myvirtualenv/lib/python3.6/site-packages/
cd myvirtualenv/lib/python3.6/site-packages/
zip -r ../../../../package.zip .
then upload package.zip to lambda
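Uploading can be done from the console or, as a sketch, with the AWS CLI (my-function is a placeholder for your Lambda's name):
aws lambda update-function-code --function-name my-function --zip-file fileb://package.zip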
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file. So I'd expect your ZIP file to contain the following:
your worker_function.py at the top level
a folder paramiko with the files installed in the virtual env
Please let me know if this helps.
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
spending hours trying things out, facing SIZE issues and being unable to import modules, etc.
I nearly reached the end (that is, invoking my handler function LOCALLY), but even though my function was fully deployed correctly and invoked LOCALLY with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far best guide or example that ACTUALLY works is the one mentioned above by @koalaok! Thanks buddy!
actual link

How to get Flask app running with gunicorn

I am new to Flask/Python and the question might be silly or I might be missing something obvious, so please bear with me.
I have created a Flask app and the structure is as follow:
myproject
api
__init__.py
api.py
application.py
config.py
models.py
migrations
...
appserver.py
manage.py
Procfile
requirements.txt
The contents of my appserver.py:
from api.application import create_app

if __name__ == '__main__':
    create_app = create_app()
    create_app.run()
The contents of my api/application.py:
from flask import Flask

def create_app(app_name='MYAPPNAME'):
    app = Flask(app_name)
    app.config.from_object('api.config.DevelopmentConfig')

    from api.api import api
    app.register_blueprint(api, url_prefix='/api')

    from api.models import db
    db.init_app(app)

    return app
When I run my server locally with python appserver.py everything works as expected. When I try to run gunicorn like so: gunicorn --bind 127.0.0.1:5000 appserver:create_app I get this error: TypeError: create_app() takes from 0 to 1 positional arguments but 2 were given
What am I doing wrong here?
I would suggest you update the code inside the appserver.py file as shown below:
from api.application import create_app

if __name__ == '__main__':
    create_app = create_app()
    create_app.run()
else:
    gunicorn_app = create_app()
and then run the app as follows
gunicorn --bind 127.0.0.1:5000 appserver:gunicorn_app
The reason for the above steps is as follows:
Running the server locally
When you run the server locally with python appserver.py the if block gets executed. Hence the Flask object gets created via your create_app method and you are able to access the server.
Running the server via Gunicorn
When you run the server via Gunicorn, you need to specify the module name and the variable name of the app for Gunicorn to access it. Note that the variable should be a WSGI callable object, e.g. a Flask app object. This is as per the definition in the Gunicorn docs.
When you were running the Gunicorn command gunicorn --bind 127.0.0.1:5000 appserver:create_app, it mistook create_app for the WSGI callable object (the Flask app object). This threw the error, as create_app is just a regular function which only returns the Flask app object when invoked correctly.
So we added the creation of the object in the else block (gunicorn_app = create_app()) and called it via Gunicorn using gunicorn --bind 127.0.0.1:5000 appserver:gunicorn_app.
The other thing to note is that when you run python appserver.py the if block gets triggered, since it is the main file being executed. Whereas when you run gunicorn --bind 127.0.0.1:5000 appserver:create_app, appserver.py gets imported by Gunicorn, hence the else block gets triggered. This is the reason we placed gunicorn_app = create_app() in the else block.
I hope the above explanation was satisfactory. Let me know if you have not understood any part.
Pranav Kundaikar's answer is excellent. I don't know if they updated Gunicorn, but I could avoid adding the main in my app by using:
gunicorn -w 4 'api:create_app()'
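Applied to the project layout from the question, a sketch of the equivalent call (leaving appserver.py untouched, and assuming a Gunicorn version new enough to support the factory syntax) would be:
gunicorn -w 4 --bind 127.0.0.1:5000 'appserver:create_app()'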
