Serving Flask-RESTPlus on https server - python-3.x

I am relatively new to Python and I created a microservice using flask-restplus.
It works fine on my computer and on the dev server served over http.
I don't have control over where the microservice could be deployed. In this case it seems to be behind a load balancer (I'm not sure of the details), served over https.
The actual error given by the browser: Can't read from server. It may not have the appropriate access-control-origin settings.
When I check the Network tab in the developer tools I see that it fails to load swagger.json, but it requests it using:
http://hostname/api/swagger.json, instead of https.
I have been googling and ran into discussions of this issue.
This seemed to be the fix that could work without me having to change the library or the configuration on the server.
However, I still couldn't get it to work.
This is what I have:
On the api file:
api_blueprint = Blueprint('api', __name__, url_prefix='/api')
api = Api(api_blueprint, doc='/doc/', version='1.0', title='My api',
          description="My api")
On the main app file:
from flask import Flask
from werkzeug.contrib.fixers import ProxyFix
from lib.api import api_blueprint
app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app)
app.register_blueprint(api_blueprint)
I also tried adding:
app.config['SERVER_URL'] = 'http://testfsdf.co.za'  # but it doesn't look like it is being considered
I am using flask-restplus==0.9.2.
Any solution will be appreciated, as long as I don't need to make configuration changes on the container where the service will be deployed (I am OK with setting environment variables), i.e. the service needs to be self-contained. If there is a version of flask-restplus that I can install with pip that already has a fix, I would appreciate that too.
Thanks a lot guys,

Override the Api class with the _scheme='https' option in the specs_url property.
from flask import url_for
from flask_restplus import Api

class MyApi(Api):
    @property
    def specs_url(self):
        """Monkey patch for HTTPS"""
        scheme = 'http' if '5000' in self.base_url else 'https'
        return url_for(self.endpoint('specs'), _external=True, _scheme=scheme)

api = MyApi(api_blueprint, doc='/doc/', version='1.0', title='My api',
            description="My api")

The solution above works like a charm. There are a couple of things you should check.
Before applying the fix, make sure in your Chrome developer tools -> Network tab that whenever you reload the page (on the https server) that shows the Swagger UI, you get a mixed-content error for the swagger.json request.
The solution in the above post solves the issue when deployed on an https server, but locally it might cause problems. For that you can use the environment variable trick.
Set a custom environment variable, or use any variable that is already there on your https server, while deploying your app. Check for the existence of that environment variable before applying the solution, to make sure your app is running on the https server.
Now when you run the app locally this hack won't be applied, so swagger.json will be served over http, while on your server it will be served via https. The implementation might look similar to this:
import os
from flask import Flask, url_for
from flask_restplus import Api

app = Flask(__name__)

if os.environ.get('CUSTOM_ENV_VAR'):
    @property
    def specs_url(self):
        return url_for(self.endpoint('specs'), _external=True, _scheme='https')
    Api.specs_url = specs_url

api = Api(app)
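For illustration, here is a minimal sketch (not from the original answers) that combines the blueprint setup from the question with the environment-variable guard above; CUSTOM_ENV_VAR is just a placeholder for whatever variable exists only on the https deployment:
import os
from flask import Flask, Blueprint, url_for
from flask_restplus import Api

# Only patch specs_url when running behind the https load balancer
if os.environ.get('CUSTOM_ENV_VAR'):
    @property
    def specs_url(self):
        return url_for(self.endpoint('specs'), _external=True, _scheme='https')
    Api.specs_url = specs_url

api_blueprint = Blueprint('api', __name__, url_prefix='/api')
api = Api(api_blueprint, doc='/doc/', version='1.0', title='My api',
          description="My api")

app = Flask(__name__)
app.register_blueprint(api_blueprint)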

Related

How do you host a multi-page Dash application on IIS?

I am trying to host a multi-page Dash application on IIS for internal use at work. I have been able to get single-page Dash applications, such as the one below, to work.
from flask import Flask
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

server = Flask(__name__)
app = dash.Dash(__name__, suppress_callback_exceptions=True, show_undo_redo=True, server=server)
app.layout = html.Div([dcc.Location(id='url'), html.Div(id='page-content')])

@app.callback(Output('page-content', 'children'),
              Input('url', 'pathname'))
def hello(pathname):
    return "Single page dash app works but not multi-page"

if __name__ == "__main__":
    app.run_server(host='0.0.0.0', port=84)
However, when I try to do the same with multi-page Dash applications, as described in the user manual here, it doesn't work correctly. I am able to host the application using Waitress, but would prefer something more stable and secure. IIS is how my company hosts other internal applications, so I was asked to host it that way. When attempting to use what I believe are the correct settings and handler mappings, I receive an error message on the page that simply says:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I believe that the issue is due to the fact that the flask application behind my Dash application is created in one script, and the entry point to the overall application is in another. I am not sure how to reconcile this, as the example page in the Dash documentation says that it must be this way.
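For reference, a minimal sketch of the split being described (file and variable names here are hypothetical, not taken from the Dash documentation):
# dash_app.py - the Dash app and its underlying Flask server are created here
import dash
import dash_html_components as html
from flask import Flask

server = Flask(__name__)
app = dash.Dash(__name__, server=server, suppress_callback_exceptions=True)
app.layout = html.Div([])

# wsgi.py - the entry point IIS is pointed at; it only needs the Flask server
from dash_app import server as application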
So, how can I make this work?
cheers

Gcloud not found in flask api

I am using the following Python method in GCP. The method is in test.py:
import functools
import subprocess

@functools.lru_cache()
def get_project():
    return subprocess.check_output("gcloud config list --format 'value(core.project)'", shell=True).decode(
        "utf-8").strip()
When I run test.py on its own on a TPU it works, but when I use this method inside a Flask API I get the error
'gcloud not found'.
However, the same method works both on its own and under a Flask API on a GCP VM.
I am not able to figure out what the possible cause of this could be.
This is not exactly an answer to your question, but you might be interested in knowing about the metadata server.
From this answer we can more or less deduce that the metadata server also works with TPUs. Note that I'm not 100% sure on this though.
Try the following code to see if you can get the project id with it.
import requests

def get_project():
    # https://cloud.google.com/compute/docs/metadata/default-metadata-values#project_metadata
    response = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/project/project-id",
        headers={"Metadata-Flavor": "Google"}
    )
    return response.text
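A minimal usage sketch, assuming the code runs on a GCE/TPU VM where the metadata server is reachable (the route name is made up for illustration):
from flask import Flask

app = Flask(__name__)

@app.route("/project")
def project():
    # Returns the project ID fetched from the metadata server
    return get_project()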

Problems with axios and Azure Application Insights

The NPM package @microsoft/applicationinsights-web from Microsoft, when used in the frontend, will add headers for tracking calls across different parts of the application (e.g. frontend, backend, services, etc.).
I'm using axios in the frontend, which out of the box does not work with the package. Neither disableFetchTracking: false nor disableAjaxTracking: false works. I don't want to replace axios with fetch, because axios is more convenient to use and this would also be a lot of rewriting.
What can I do?
The @microsoft/applicationinsights-web package does inject correlation headers into axios calls (by instrumenting XMLHttpRequest). There might be a few causes of the problem in your application:
Another library broke the instrumentation
Something else might be hijacking the XMLHttpRequest object and affecting the AppInsights instrumentation. One such library is pace.js, which overwrites the window.XMLHttpRequest constructor but does not add open, send and abort to its prototype. AppInsights expects these functions to be present on the XMLHttpRequest prototype: https://github.com/microsoft/ApplicationInsights-JS/blob/91f08a1171916a1bbf14c03a019ebd26a3a69b86/extensions/applicationinsights-dependencies-js/src/ajax.ts#L330
In a working axios + @microsoft/applicationinsights-web example, the Request-Id header is added to outgoing requests; in the same example with pace.js loaded, the Request-Id header is not added.
Either removing pace.js or placing its script tag / import after the AppInsights initialization code should fix the problem.
Cross-origin requests
Another explanation might be that the frontend app is making cross-origin requests, which are not processed by AppInsights by default - the enableCorsCorrelation: true config setting is needed.

Google Cloud Console - Flask - main.py vs package

OK, so I have been through some tutorials to get a Flask app onto Google Cloud, which is fine.
I have also been through the flask tutorial to build a flaskr blog:
http://flask.pocoo.org/docs/1.0/tutorial/
It occurred to me that a sensible thing to do would be to create a database (MySQL in my case) on Google and then modify the code so that it uses that. This is fine and I can get it to work on my local machine.
However, now that I am coming to deploy this, I have hit a problem.
The Google Cloud tutorials tend to use a Flask app that is initiated in a single file such as main.py, e.g.:
from flask import Flask, render_template
app = Flask(__name__)
....
The Flask tutorial mentioned above uses a package and puts the create_app() code in the __init__.py file, and at present I cannot get this to start in the same way (see sample code).
from flask import Flask

def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev'
    )
    return app
Are there some adjustments that I need to make, to something like the app.yaml file, to get it to recognise the Flask app as the flaskr package, or do I need to rewrite the whole thing so that it uses a main.py file?
I feel that this is one of those points in time where I could really pick up a bad habit. What, in general, is the preferred way to write Flask apps on Google Cloud?
I am using the App Engine standard environment.
Thanks for your advice.
Mark
Since you have an application factory, you can create the app anywhere. Just create it in main.py, since the App Engine standard environment looks for a WSGI object named app in main.py by default:
from my_package import create_app
app = create_app()

Flask + Bokeh Server on Azure Web App Service

I want to host my Bokeh server app in Azure Web App Service. Following the example in flask_embed.py, I created a minimal example with a Bokeh server process running on localhost:5006, served with server_document in a Flask route. Locally, on my computer, it runs normally without any errors:
from threading import Thread
from bokeh.embed import server_document
from bokeh.server.server import Server
from bokeh.models.widgets import Select, Div
from bokeh.layouts import column
from flask import Flask
from flask import render_template
from tornado.ioloop import IOLoop

app = Flask(__name__)

# This is the bokeh page
def modify_doc(doc):
    dropdown = Select(title="Cities", options=["New York", "Berlin"])
    title_row = Div(text="Home Page")
    main_layout = column([
        title_row,
        dropdown
    ])
    doc.add_root(main_layout)
    doc.title = "My bokeh server app"

# This is the subprocess serving the bokeh page
def bk_worker():
    server = Server(
        {'/bkapp': modify_doc},
        io_loop=IOLoop(),
        allow_websocket_origin=["*"],
    )
    server.start()
    server.io_loop.start()

Thread(target=bk_worker).start()

# This is the flask route showing the bokeh page
@app.route("/", methods=["GET"])
def my_app():
    script = server_document("http://localhost:5006/bkapp")
    return render_template("embed.html", script=script, template="Flask")
However, when I push it to the Azure web app, the page is blank, and on inspecting the page an error message is shown:
GET https://<my-azure-site>.azurewebsites.net:5006/bkapp/autoload.js?bokeh-autoload-element=0bfb1475-9ddb-4af5-9afe-f0c4a681d7aa&bokeh-app-path=/bkapp&bokeh-absolute-url=https://<my-azure-site>.azurewebsites.net:5006/bkapp net::ERR_CONNECTION_TIMED_OUT
It seems like I don't have access to the localhost of the remote Azure server. Actually, it's not yet clear to me whether the Bokeh server runs, or is allowed to run, at all. In the server_document call I have tried putting server_document("<my-azure-site>:5006/bkapp"), but the problem remains the same.
Any help is appreciated.
This post is related to another question: Bokeh embedded in flask app in azure web app
I realize this is from a while ago, but I've spent many hours in the past several days figuring this out, so this is for future people:
The issue is that server_document() is just creating a <script> tag that gets embedded into a jinja2 template, where it executes.
Locally it's not an issue because your bokeh server is running on YOUR MACHINE'S localhost:5006. To demonstrate, you can see that you can navigate directly to localhost:5006/bkapp to see your bokeh document.
Once you're hosting it on Azure, server_document() is creating the exact same script that a browser will try to execute - that is, your browser is going to try to execute a <script> tag that references localhost:5006, except that there isn't anything running on localhost:5006 because your bokeh app is actually running on Azure's server now.
I'm not sure what the best way to do it is, but the essence of it is that you need server_document() to point to the bokeh server that's running remotely. To do this you'll need to make sure that {your_remote_bokeh_server}:5006 is publicly accessible.
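As a rough sketch (not from the original answer), the route from the question could build that URL from a publicly reachable host instead of localhost; BOKEH_PUBLIC_URL here is a hypothetical environment variable holding the Bokeh server's public address:
import os
from bokeh.embed import server_document
from flask import render_template

@app.route("/", methods=["GET"])
def my_app():
    # Fall back to localhost for local development
    bokeh_url = os.environ.get("BOKEH_PUBLIC_URL", "http://localhost:5006")
    script = server_document(bokeh_url + "/bkapp")
    return render_template("embed.html", script=script, template="Flask")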
