I have deployed a simple Flask application on an Azure web app by forking the repo from https://github.com/Azure-Samples/python-docs-hello-world
Here is my application.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

@app.route("/sms")
def hello_sms():
    return "Hello World SMS!"

# if __name__ == '__main__':
#     app.run(debug=True)
And this is my requirements.txt
click==6.7
Flask==1.0.2
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
Werkzeug==0.14.1
At first, when I opened the URL (https://staysafe.azurewebsites.net/), I got this message: "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."
After that I went to the application settings in the web app dashboard in Azure and set a Python version.
Ever since, this is what I get when I open my URL.
Any clue as to what is going wrong?
It seems that your code has not been uploaded to the portal.
Please follow this official document for your test.
I used your code from https://github.com/Azure-Samples/python-docs-hello-world and it works fine. The steps are as below:
Environment: Python 3.7, Windows 10
1. Open Git Bash and download the code locally using git clone https://github.com/Azure-Samples/python-docs-hello-world.git
2. In Git Bash, execute cd python-docs-hello-world
3. In Git Bash, execute the following commands one by one:
py -3 -m venv venv
source venv/Scripts/activate
pip install -r requirements.txt
FLASK_APP=application.py flask run
4. Open a web browser and navigate to the sample app at http://localhost:5000/.
This is to make sure it works well locally.
5. Then follow the article to create the deployment credential, resource group, App Service plan, and web app.
6. If there are no issues, push the code to Azure from Git Bash:
git remote add azure <deploymentLocalGitUrl-from-create-step>
Then execute git push azure master
7. Browse to the website, e.g. https://your_app_name.azurewebsites.net or https://your_app_name.azurewebsites.net/sms;
it works fine.
After spending many hours reading dozens of guides, I finally got into a working setup, and decided to publish the instructions here.
The problem: I have a working Flask app running on my machine. How do I launch it as a web app on the Microsoft Azure platform?
So here is my guide. I hope it will help others!
Steps for launching a new web app under Azure:
0. Log in to Azure
Go to the Azure portal at https://portal.azure.com/ and sign in using your Microsoft account.
1. Create a resource group:
Home > create a resource > Resource group
fill in: subscription(Free Trial), name (something with _resgrp), Region (e.g. West Europe)
2. DB:
Home > create a resource > create Azure Cosmos DB > Azure Cosmos DB for MongoDB
fill in: subscription(Free Trial), resource group (see above), account name (something with _db), Region (West Europe), [create]
go to Home > db account > connection strings, copy the line marked "PRIMARY CONNECTION STRING" and keep it aside.
3. App:
Home > create a resource > create Web App
fill in: subscription(Free Trial), resource group (see above), name (will appear in the site url!),
publish: code, runtime stack: Python 3.9, region: West Europe, plan: Basic B1 ($13/month), [create]
Home > our-web-app > configuration > Application settings > Connection strings
click "New Connection strings" and set MYDB with the connection string from step 2.
4. Code:
We will use a nice minimalist "to-do list" app published by Prashant Shahi. Thank you, Prashant!
Clone code from https://github.com/prashant-shahi/ToDo-List-using-Flask-and-MongoDB into some local folder.
Delete everything except app.py, static, templates, and requirements.txt.
Edit requirements.txt so that Flask appears without "==version", because an older version is pinned there by default.
create wsgi.py with:
from app import app
if __name__ == '__main__':
    app.run()
Create go.sh with the following code. These commands set up the environment and then start gunicorn to respond to web requests. Some of them are for debugging only.
# azure webapp: called under sh from /opt/startup/startup.sh
set -x
ls -la
pip install -r /home/site/wwwroot/requirements.txt
echo "$(pwd) $(date)"
ps aux
gunicorn --bind=0.0.0.0 --log-level=debug --timeout 600 wsgi:app
Edit app.py:
replace the first 3 lines about the db connection with the following (MYDB comes from step 3):
CON_STR = os.environ['CUSTOMCONNSTR_MYDB']
client = MongoClient(CON_STR) #Configure the connection to the database
after app = Flask(__name__) add these lines for logging (this also requires import logging at the top):
if __name__ != '__main__':
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)
add this as the first line under def about(): (clicking [about] in the app will dump environment variables to the logs)
    app.logger.debug('\n'.join([f'{k}={os.environ[k]}' for k in os.environ.keys()]))
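For orientation, here is a hedged sketch of what the top of the edited app.py might end up looking like. The pymongo import and the /about route name are assumptions based on the description above, not an exact copy of the to-do app:

# sketch only: imports and routes in the actual to-do app may differ
import os
import logging
from flask import Flask
from pymongo import MongoClient

CON_STR = os.environ['CUSTOMCONNSTR_MYDB']   # App Service exposes the MYDB connection string under this name
client = MongoClient(CON_STR)                # configure the connection to the database

app = Flask(__name__)

if __name__ != '__main__':
    # running under gunicorn: reuse its handlers so app.logger output reaches the container logs
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)

@app.route('/about')
def about():
    # debugging aid: dump environment variables to the logs
    app.logger.debug('\n'.join([f'{k}={os.environ[k]}' for k in os.environ.keys()]))
    return 'about'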
5. Ftp:
Home > our-web-app > Deployment Center > FTPS Credentials
Open FileZilla, click the top-left icon, [new site]
copy and paste from the portal into FileZilla: FTPS endpoint into host, user into username, password into password, [connect]
upload the contents (not the parent folder!) of the folder from step 4 to the remote path /site/wwwroot
6. Launch:
Home > our-web-app > configuration > General settings > Startup Command
paste this: sh -c "cp go.sh go_.sh && . go_.sh"
7. Test:
Browse to https://[our-web-app].azurewebsites.net
8. Logging / debugging:
Install Azure CLI (command line interface) from https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
Open a command prompt and run:
az login
# turn on container logging (run once):
az webapp log config --name [our-web-app] --resource-group [our-step1-group] --docker-container-logging filesystem
# tail the logs:
az webapp log tail --name [our-web-app] --resource-group [our-step1-group]
9. Kudu SCM management for the app
(must be logged into Azure for these to work):
Show a file/dir: https://[our-web-app].scm.azurewebsites.net/api/vfs/site/[path]
Download the full site: https://[our-web-app].scm.azurewebsites.net/api/zip/site/wwwroot
Status: https://[our-web-app].scm.azurewebsites.net/Env
SSH: https://[our-web-app].scm.azurewebsites.net/webssh/host
Bash: https://[our-web-app].scm.azurewebsites.net/DebugConsole
More on REST API here: https://github.com/projectkudu/kudu/wiki/REST-API
10. Notes:
I don't recommend using automatic deployment from GitHub / Bitbucket unless you have Azure support available. We encountered many difficulties with that.
Any comments are most welcome.
Let me preface this with the fact that I am fairly new to Docker, Jenkins, GCP/Cloud Storage and Python.
Basically, I would like to write a Python app, that runs locally in a Docker container (alpine3.7 image) and reads chunks, line by line, from a very large text file that is dropped into a GCP cloud storage bucket. Each line should just be output to the console for now.
I learn best by looking at working code, but I am spinning my wheels trying to put all the pieces together using these technologies (all new to me).
I already have the key file for that cloud storage bucket on my local machine.
I am also aware of these posts:
How to Read .json file in python code from google cloud storage bucket.
Lazy Method for Reading Big File in Python?
I just need some help putting all these pieces together into a working app.
I understand that I need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the key file in the container. However, I don't know how to do that in a way that works well for multiple developers and multiple environments (Local, Dev, Stage and Prod).
This is just a simple quickstart (I am sure it can be done better) for reading a file from a Google Cloud Storage bucket via a Python app (Docker container deployed to Google Cloud Run):
You can find more information here link
Create a directory with the following files:
a. app.py
import os

from flask import Flask
from google.cloud import storage

app = Flask(__name__)

@app.route('/')
def hello_world():
    storage_client = storage.Client()
    file_data = 'file_data'
    bucket_name = 'bucket'
    temp_file_name = 'temp_file_name'
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.get_blob(file_data)
    blob.download_to_filename(temp_file_name)
    temp_str = ''
    with open(temp_file_name, "r") as myfile:
        temp_str = myfile.read().replace('\n', '')
    return temp_str

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
b. Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
RUN pip install google-cloud-storage
# Make port 80 available to the world outside the container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
c. requirements.txt
Flask==1.1.1
gunicorn==19.9.0
google-cloud-storage==1.19.1
Create a service account to access the storage from Cloud Run:
gcloud iam service-accounts create cloudrun --description 'cloudrun'
Set the permissions of the service account:
gcloud projects add-iam-policy-binding wave25-vladoi --member serviceAccount:cloudrun@project.iam.gserviceaccount.com --role roles/storage.admin
Build the container image:
gcloud builds submit --tag gcr.io/project/hello
Deploy the application to Cloud Run:
gcloud run deploy --image gcr.io/project/hello --platform managed --service-account cloudrun@project.iam.gserviceaccount.com
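Since the original question asks about reading a very large file line by line rather than loading it all into memory (as the app.py above does), here is a hedged sketch that streams the object with the blob's file-like open() method. Note that this needs a newer google-cloud-storage release than the 1.19.1 pinned above, and the bucket/object names are just the placeholders from the example:

from google.cloud import storage

def print_lines(bucket_name, blob_name):
    # sketch only: blob.open() requires a recent google-cloud-storage version
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    with blob.open("rt") as f:           # streams the object instead of reading it all at once
        for line in f:
            print(line.rstrip("\n"))     # for now, just echo each line to the console

if __name__ == "__main__":
    print_lines("bucket", "file_data")   # placeholder names from the example above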
EDIT :
One way to develop locally is:
Your DevOps team will generate the service account key.json:
gcloud iam service-accounts keys create ~/key.json --iam-account cloudrun@project.iam.gserviceaccount.com
Store the key.json file in the same working directory
The Dockerfile command COPY . /app will copy the file into the Docker container.
Change the app.py to:
storage_client = storage.Client.from_service_account_json('key.json')
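To address the multiple-developers / multiple-environments concern from the question, one hedged pattern is to rely on Application Default Credentials (GOOGLE_APPLICATION_CREDENTIALS or the Cloud Run service account) when they are available and only fall back to a local key.json during development. The key.json filename is just the assumption carried over from above:

import os
from google.cloud import storage

def make_storage_client():
    # Local dev: a key file next to the code (never commit it to source control).
    if os.path.exists("key.json") and not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"):
        return storage.Client.from_service_account_json("key.json")
    # Dev/Stage/Prod: Application Default Credentials, i.e. the
    # GOOGLE_APPLICATION_CREDENTIALS env var or the Cloud Run service account.
    return storage.Client()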
As the wfastcgi module is not compatible with Python 3.7, what is the best way to host a Python Flask application on a Windows Server?
You need to install Python, wfastcgi, and Flask on your server.
You can download Python from the link below:
https://www.python.org/downloads/
After installing Python, install wfastcgi:
pip install wfastcgi
Run the command prompt as administrator and run the following command to enable wfastcgi:
wfastcgi-enable
Below is my Flask example:
app.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from FastCGI via IIS!"

if __name__ == "__main__":
    app.run()
After creating the application, run it with the command below:
python app.py
Now enable the CGI feature of IIS.
Open IIS Manager.
Right-click the server name and select Add Site.
Enter the site name, physical path, and site binding.
After adding the site, select the site name and open the Handler Mappings feature from the middle pane.
Click “Add Module Mapping”
Add the values below:
Executable path value:
C:\Python37-32\python.exe|C:\Python37-32\Lib\site-packages\wfastcgi.py
Click "Request Restrictions" and make sure the "Invoke handler only if request is mapped to:" checkbox is unchecked.
Click "Yes" in the confirmation dialog.
Now go back, select the server name again, and open FastCGI Settings from the middle pane.
Double-click it, then click the "..." button for the Environment Variables collection to launch the EnvironmentVariables Collection Editor.
Set the PYTHONPATH variable.
Then set WSGI_HANDLER (my Flask app is named app.py, so the value is app.app; if yours is named site.py it would be site.app or similar).
Click OK and browse to your site.
Note: do not forget to grant the IUSR and IIS_IUSRS users permission on the Flask site folder and the Python folder.
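To make the WSGI_HANDLER naming rule concrete, here is a hedged illustration (not wfastcgi's actual source) of how a "module.attribute" value such as app.app resolves to the Flask application object:

# illustration only: resolving "app.app" to the Flask object defined in app.py
import importlib

def resolve_wsgi_handler(handler="app.app"):
    module_name, _, attr = handler.rpartition(".")
    module = importlib.import_module(module_name)   # e.g. import app (app.py)
    return getattr(module, attr)                    # e.g. the variable named app inside it

application = resolve_wsgi_handler("app.app")       # for site.py it would be "site.app"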
I needed to move files from an SFTP server to my AWS account with an AWS Lambda,
and then I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
It talks about paramiko as an SSH client candidate for moving files over SSH.
I then wrote this class wrapper in Python to be used from my Serverless handler file:
import paramiko
import sys

class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        Creates the SFTP connection.
        Args:
            hostname (string): endpoint of the SFTP server
            username (string): username for logging in on the SFTP server
            password (string): password for logging in on the SFTP server
        """
        try:
            self._host = hostname
            self._port = 22
            # lets you save results of the download into a log file
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        Downloads a file from the SFTP server and returns its contents.
        Args:
            sftpPath: "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        self._sftp.close()
        with open(localPath, 'r') as tmpfile:
            return tmpfile.read()

    def close(self):
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client
sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")
file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with Serverless and run the handler.py function (the same as test.py above), I get this error:
Unable to import module 'handler': No module named 'paramiko'
It looks like the deployment cannot import paramiko (according to the article above, it should be available for the Python 3 Lambda runtime on AWS), shouldn't it?
If not, what is the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive guide/tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package
as a Serverless node plugin.
A virtualenv and a running Docker daemon are required to package up your Serverless project before deploying it to AWS Lambda.
In case you use
custom:
  pythonRequirements:
    zip: true
in your serverless.yml, you have to add this snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All the details can be found in the Serverless Python Requirements documentation.
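For context, here is a hedged sketch of where that snippet sits in handler.py when the zip option is enabled; the handler name and return value are placeholders, not part of the original question:

try:
    import unzip_requirements    # unpacks the zipped requirements on cold start
except ImportError:
    pass

import paramiko                  # now importable from the unpacked requirements

def handler(event, context):
    # placeholder body; the real logic would use the FTPClient wrapper above
    return {"paramiko_version": paramiko.__version__}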
You have to create a virtualenv, install your dependencies, and then zip everything under site-packages/:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cp handler.py myvirtualenv/lib/python3.6/site-packages/
cd myvirtualenv/lib/python3.6/site-packages/
zip -r ../../../../package.zip .
Then upload package.zip to Lambda.
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file. So I'd expect your ZIP file to contain the following:
your worker_function.py at the top level
a folder paramiko with the files installed in the virtual env
Please let me know if this helps.
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
spending hours trying things out, facing SIZE issues like that or being unable to import modules, etc.
...and I nearly reached the end (that is, invoking my handler function LOCALLY), but then, even though my function was fully deployed correctly and even invoked LOCALLY with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far the best guide or example that is ACTUALLY working is the one mentioned above by @koalaok! Thanks, buddy!
actual link
I have a problem pretty much exactly like this:
How to preserve a SQLite database from being reverted after deploying to OpenShift?
I don't fully understand his answer, and certainly not well enough to apply it to my own app, and since I can't comment on his answer (not enough rep) I figured I had to ask my own question.
The problem is that when I push my local files (not including the database file), my database on OpenShift becomes the one I have locally (all changes made through the server are reverted).
I've googled a lot and pretty much understand that the problem is that the database should be located somewhere else, but I can't fully grasp where to place it and how to deploy it if it's outside the repo.
EDIT: Quick solution: If you have this problem, try connecting to your openshift app with rhc ssh appname
and then cp app-root/repo/database.db app-root/data/database.db
if your SQLALCHEMY_DATABASE_URI references the OpenShift data dir. I recommend the accepted answer below, though!
I've attached my filestructure and here's some related code:
config.py
import os
basedir = os.path.abspath(os.path.dirname(__file__))
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'database.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')
app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
#so that flask doesn't swallow error messages
app.config['PROPAGATE_EXCEPTIONS'] = True
app.config.from_object('config')
db = SQLAlchemy(app)
from app import rest_api, models
wsgi.py:
#!/usr/bin/env python
import os
virtenv = os.path.join(os.environ.get('OPENSHIFT_PYTHON_DIR', '.'), 'virtenv')
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
from app import app as application
## runs server locally
if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    httpd = make_server('localhost', 4599, application)
    httpd.serve_forever()
filestructure: http://sv.tinypic.com/r/121xseh/8 (can't attach image..)
Via the note at the top of the OpenShift Cartridge Guide:
"Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR."
You can keep your existing project structure as-is and just use a deploy hook to move your database to persistent storage.
Create a deploy action hook (executable file) .openshift/action_hooks/deploy:
#!/bin/bash
# This deploy hook gets executed after dependencies are resolved and the
# build hook has been run but before the application has been started back
# up again.
# if this is the initial install, copy the DB from the repo to the persistent storage directory
if [ ! -f ${OPENSHIFT_DATA_DIR}database.db ]; then
    cp -f ${OPENSHIFT_REPO_DIR}database.db ${OPENSHIFT_DATA_DIR}database.db 2>/dev/null
fi
# remove the database from the repo during all deploys
if [ -f ${OPENSHIFT_REPO_DIR}database.db ]; then
    rm -f ${OPENSHIFT_REPO_DIR}database.db
fi
# create symlink from repo directory to new database location in persistent storage
ln -sf ${OPENSHIFT_DATA_DIR}database.db ${OPENSHIFT_REPO_DIR}database.db
As another person pointed out, also make sure you are actually committing/pushing your database (make sure your database isn't included in your .gitignore).
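If you instead prefer the approach hinted at in the question's edit (pointing SQLALCHEMY_DATABASE_URI straight at the persistent data directory rather than symlinking), a hedged sketch of config.py could look like this; the fallback to basedir for local runs is an assumption:

import os

basedir = os.path.abspath(os.path.dirname(__file__))

# use OpenShift's persistent data dir when present, otherwise fall back to the project dir (local dev)
data_dir = os.environ.get('OPENSHIFT_DATA_DIR', basedir)

SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(data_dir, 'database.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')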