I am using the following Python method in GCP. This method is in test.py:
import functools
import subprocess

@functools.lru_cache()
def get_project():
    return subprocess.check_output(
        "gcloud config list --format 'value(core.project)'", shell=True
    ).decode("utf-8").strip()
When I run test.py on its own on a TPU, it works, but when I call this method from a Flask API I get the error 'gcloud not found'.
However, the same method works both on its own and under a Flask API on a GCP VM.
I am not able to figure out what could be causing this.
This is not exactly an answer to your question, but you might be interested in knowing about the metadata server.
From this answer we can more or less deduce that the metadata server also works with TPUs. Note that I'm not 100% sure on this though.
Try the following code to see if you can get the project id with it.
import requests

def get_project():
    # https://cloud.google.com/compute/docs/metadata/default-metadata-values#project_metadata
    response = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/project/project-id",
        headers={"Metadata-Flavor": "Google"}
    )
    return response.text
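If you want the same caching behaviour as your original helper, you can wrap the metadata-based version the same way. A minimal sketch of how it could be wired into the Flask app (the /project route is just for illustration):
import functools

import requests
from flask import Flask

app = Flask(__name__)

@functools.lru_cache()
def get_project():
    response = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/project/project-id",
        headers={"Metadata-Flavor": "Google"},
        timeout=5,
    )
    response.raise_for_status()
    return response.text

@app.route("/project")
def project():
    # Returns the project ID without shelling out to gcloud.
    return get_project()
Since this only talks to the metadata server, it does not depend on gcloud being on the PATH of whatever user the Flask process runs as.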
I have a very simple Flask application, like:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def sth():
    return 'STH'
I'd like to profile it (with cProfile or profile) in a way that collects profiling information from the start of the webserver until the end of the webserver run, and writes the data only at the end of the run. I need the results written to a file, not to any database.
When I start the webserver with the command
/path/venv/bin/python -m cProfile -m flask --debug --app entry.py run
it works as expected; however, for some reason I'd like to add this logic to the entry.py file. (Note the -m cProfile switch.)
What'd be the best way to achieve this?
I tried playing around with cProfile.run() and app.run(), but that did not really work, and it is also not advised for production use.
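Roughly, this is the kind of thing I have in mind inside entry.py (just a sketch; the output file name, the atexit hook and disabling the reloader are my own assumptions):
import atexit
import cProfile

from flask import Flask

app = Flask(__name__)

@app.route('/')
def sth():
    return 'STH'

def _dump_profile(profiler):
    # Write the collected stats once, when the interpreter shuts down.
    profiler.disable()
    profiler.dump_stats('flask_profile.prof')

if __name__ == '__main__':
    profiler = cProfile.Profile()
    atexit.register(_dump_profile, profiler)
    profiler.enable()
    # The reloader re-executes the module in a child process, which would
    # duplicate or lose the profile, so it is disabled here.
    app.run(debug=True, use_reloader=False)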
I also tried ProfilerMiddleware from werkzeug.middleware.profiler and flask_profiler, but had no luck reaching my goal.
Thanks.
Sorry for this basic question, but I would like some help from you experts as I am still learning FastAPI.
I have a simple test application running Python FastAPI, and I am trying to use it with the Azure CLI.
What I am trying to do is have a GET request in FastAPI that lists all the resource groups I have in my subscriptions.
Reading the documentation and some forums, I have this code:
import tempfile

from azure.cli.core import get_default_cli
from fastapi import FastAPI

app = FastAPI()

@app.get("/azure")
def az_cli(args_str):
    temp = tempfile.TemporaryFile()
    args = args_str.split()
    code = get_default_cli().invoke(['login', '--service-principal', '-u', '', '-p', '', '--tenant', ''])
    resource = get_default_cli().invoke(args)
    data = temp.read().strip()
    temp.close()
    return [args, resource]
This function authenticates the user with a service principal and then invokes an az command with args.
If I run uvicorn, head to /docs, and type resource list in the args field, the code works just fine and doesn't throw any error, but nothing shows up in the response body; the full output is visible in the terminal, though.
Can somebody please explain how I can get that output into the response body shown in the docs?
Thank you very much for any help you can provide. I hope my example is clear enough; if not, please feel free to ask for more information.
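For reference, one variation I have been considering is handing the temp file to invoke() so the CLI output ends up somewhere I can read back (just a sketch; I'm assuming invoke() accepts an out_file argument, and the login step is omitted for brevity):
import tempfile

from azure.cli.core import get_default_cli
from fastapi import FastAPI

app = FastAPI()

@app.get("/azure")
def az_cli(args_str: str):
    args = args_str.split()
    # Text-mode temp file so the CLI output can be read back as a string.
    with tempfile.TemporaryFile(mode="w+") as temp:
        exit_code = get_default_cli().invoke(args, out_file=temp)
        temp.seek(0)
        data = temp.read().strip()
    return {"exit_code": exit_code, "output": data}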
I have a couple of Cloud Functions that make remote calls out to a third-party API, and I would like to be able to gather latency metrics in Cloud Trace for those calls.
I'm trying to find a barebones working piece of example code that I can build off of. The only one I found is at https://medium.com/faun/tracing-python-cloud-functions-a17545586359.
It is essentially:
import requests
from opencensus.trace import tracer as tracer_module
from opencensus.trace.exporters import stackdriver_exporter
from opencensus.trace.exporters.transports.background_thread \
    import BackgroundThreadTransport

PROJECT_ID = 'test-trace'

# instantiate trace exporter
exporter = stackdriver_exporter.StackdriverExporter(project_id=PROJECT_ID, transport=BackgroundThreadTransport)

def main_fun(data, context):
    tracer = tracer_module.Tracer(exporter=exporter)
    with tracer.span(name='get_token'):
        print('Getting Token')
        authtoken = get_token(email, password)
        print('Got Token')

def get_token(email, password):
    # Make some request out to an API and get a Token
    return accesstoken
There are no errors and everything works as intended, except that no trace shows up in Cloud Trace or Stackdriver.
Am I doing something wrong here? Does anyone have some simple code that works for Cloud Trace within a Cloud Function?
OK, so I have been through some tutorials to get a Flask app onto Google Cloud, which is fine.
I have also been through the flask tutorial to build a flaskr blog:
http://flask.pocoo.org/docs/1.0/tutorial/
It occurred to me that a sensible thing to do would be to create a database (MySQL in my case) on Google Cloud and then modify the code so that it uses that. This is fine and I can get it to work on my local machine.
However, now that I am coming to deploying this, I have hit a problem.
The Google Cloud tutorials tend to use a Flask app that is initiated in a single file such as main.py, e.g.:
from flask import Flask, render_template
app = Flask(__name__)
....
The Flask tutorial mentioned above uses a package and puts the create_app() code in the __init__.py file, and at present I cannot get this to start in the same way (see sample code).
from flask import Flask

def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev'
    )
Are there some adjustments that I need to make to something like the app.yaml file to get it to recognise the flaskr package, or do I need to rewrite the whole thing so that it uses a main.py file?
I feel that this is one of those points in time where I could really pick up a bad habit. What, in general, is the preferred way to write Flask apps on Google Cloud?
I am using the standard environment on Google Cloud.
Thanks for your advice.
Mark
Since you have an application factory, you can create the app anywhere. Just create it in main.py, since this is what App Engine expects:
from my_package import create_app
app = create_app()
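As a concrete sketch, assuming the tutorial's package is named flaskr and sits next to main.py at the root of the service:
# main.py
from flaskr import create_app

# App Engine's default entrypoint for the Python 3 standard runtime serves
# the module-level `app` object from main.py.
app = create_app()
The flaskr package itself does not need to be renamed or flattened; main.py just acts as the entry point App Engine expects.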
I am relatively new to Python and I created a microservice using flask-restplus.
It works fine on my computer and on the dev server, served over HTTP.
I don't have control over where the microservice gets deployed. In this case it seems to be behind a load balancer (I'm not sure of the details), served over HTTPS.
The actual error given by the browser is: Can't read from server. It may not have the appropriate access-control-origin settings.
When I check the Network tab in the developer tools, I see that it fails to load swagger.json, but it requests it via http://hostname/api/swagger.json instead of https.
I have been googling and ran into discussions of this issue, and this seemed to be the fix that could work without me having to change the library or the configuration on the server.
However, I still couldn't get it to work.
This is what I have:
In the API file:
api_blueprint = Blueprint('api', __name__, url_prefix='/api')
api = Api(api_blueprint, doc='/doc/', version='1.0', title='My api',
          description="My api")
In the main app file:
from flask import Flask
from werkzeug.contrib.fixers import ProxyFix
from lib.api import api_blueprint
app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app)
app.register_blueprint(api_blueprint)
I also tried adding:
app.config['SERVER_URL'] = 'http://testfsdf.co.za'  # but it doesn't look like it is being considered
I am using flask-restplus==0.9.2.
Any solution will be appreciated, as long as I don't need to change the configuration of the container where the service will be deployed (I am OK with setting environment variables), i.e. the service needs to be self-contained. If there is a version of flask-restplus that I can install with pip and that already has a fix, I would appreciate that too.
Thanks a lot guys,
Override the Api class with the _scheme='https' option in the specs_url property.
from flask import url_for
from flask_restplus import Api

class MyApi(Api):
    @property
    def specs_url(self):
        """Monkey patch for HTTPS"""
        scheme = 'http' if '5000' in self.base_url else 'https'
        return url_for(self.endpoint('specs'), _external=True, _scheme=scheme)

api = MyApi(api_blueprint, doc='/doc/', version='1.0', title='My api',
            description="My api")
The solution above works like a charm, but there are a couple of things you should check.
Before applying the fix, make sure in your Chrome developer tools -> Network tab that whenever you reload the page showing the Swagger UI (on the HTTPS server), you get a mixed-content error for the swagger.json request.
The solution in the above post solves the issue when deployed on an HTTPS server, but locally it might cause problems. For that you can use the environment-variable trick.
Set a custom environment variable (or use any variable that already exists on your HTTPS server) when deploying your app, and check for its existence before applying the solution, to make sure your app is running on the HTTPS server.
Now when you run the app locally, this hack won't be applied and swagger.json will be served over HTTP, while on your server it will be served via HTTPS. The implementation might look similar to this:
import os

from flask import Flask, url_for
from flask_restplus import Api

app = Flask(__name__)

if os.environ.get('CUSTOM_ENV_VAR'):
    @property
    def specs_url(self):
        return url_for(self.endpoint('specs'), _external=True, _scheme='https')

    Api.specs_url = specs_url

api = Api(app)