I am exploring Google Cloud Functions in Python to write HTTP-triggered functions. I have a main.py with all my triggered functions structured like in this post, but I would like to be able to wrap them in routed endpoints. On Node.js one could do so with Express, as in this post, and on Python very similarly using Flask.
I have attempted to wrap my Cloud Functions using Flask, but Google redirects me to its authentication page. My code is as follows:
from flask import Flask, jsonify, request

# Initialize Flask application
application = Flask(__name__)

@application.route('/some/endpoint/path', methods=['GET'])
def predict():
    inputs = request.args.get('inputs')
    # Some logic that computes `results` from `inputs`...
    response_object = {}
    response_object['statusCode'] = 200
    response_object['results'] = results
    return jsonify(response_object)
Is there a way to wrap Python Cloud Functions so that I can achieve something like this?
https://us-central1-my-project.cloudfunctions.net/some
https://us-central1-my-project.cloudfunctions.net/some/endpoint
https://us-central1-my-project.cloudfunctions.net/some/endpoint/path
I believe you are getting Google's authentication screen because you are trying to access the base URL for Cloud Functions on your project.
With HTTP Cloud Functions the trigger URL is usually https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME], so any routes need to come after a further slash following the function name.
That being said, I found this post where the provided solution manages to set up routes within the same main.py file, so that several endpoints can be reached through a single Cloud Function. I had to adapt some things, but in the end it worked for me.
The following is the source code I tested at my end:
import flask
import werkzeug.datastructures

app = flask.Flask(__name__)

@app.route('/')
def root():
    return 'Hello World!'

@app.route('/hi')
def hi():
    return 'Hi there'

@app.route('/hi/<username>')
def hi_user(username):
    return 'Hi there, {}'.format(username)

@app.route('/hi/<username>/congrats', methods=['POST'])
def hi_user_congrat(username):
    achievement = flask.request.form['achievement']
    return 'Hi there {}, congrats on {}!'.format(username, achievement)

def main(request):
    with app.app_context():
        # Copy the incoming request's headers into a fresh Headers object
        headers = werkzeug.datastructures.Headers()
        for key, value in request.headers.items():
            headers.add(key, value)
        # Replay the incoming request against the Flask app's own router
        with app.test_request_context(method=request.method,
                                      base_url=request.base_url,
                                      path=request.path,
                                      query_string=request.query_string,
                                      headers=headers,
                                      data=request.form):
            try:
                rv = app.preprocess_request()
                if rv is None:
                    rv = app.dispatch_request()
            except Exception as e:
                rv = app.handle_user_exception(e)
            response = app.make_response(rv)
            return app.process_response(response)
This defined the following routes within a single Cloud Function:
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]/hi
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]/hi/<username>
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]/hi/<username>/congrats
And the following was the command used to deploy this function:
gcloud functions deploy flask_function --entry-point main --runtime python37 --trigger-http --allow-unauthenticated
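Once deployed, the routes can be exercised with curl; for example (the region, project and sample values here are placeholders):
curl https://us-central1-my-project.cloudfunctions.net/flask_function/hi
curl -d 'achievement=deploying' https://us-central1-my-project.cloudfunctions.net/flask_function/hi/alice/congrats
The second command issues a POST with an achievement form field, matching the hi_user_congrat route above.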
Cloud Functions is designed to be used as a single endpoint. You might consider using Cloud Run instead, as it's more suited towards applications with multiple routes, and has many of the same benefits as Cloud Functions.
If you're dead set on using Cloud Functions, something like the answer at Injecting a Flask Request into another Flask App should work, but it's not ideal.
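For illustration, a minimal Cloud Run setup for the same Flask app could look roughly like this; the Dockerfile details and service name are assumptions, not prescriptions:

# Dockerfile: serve the Flask app defined in main.py with gunicorn
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
# requirements.txt should list flask and gunicorn
RUN pip install -r requirements.txt
COPY . .
# Cloud Run injects the PORT environment variable at runtime
CMD exec gunicorn --bind :$PORT main:app

Then deploy with something like gcloud run deploy flask-service --source . --allow-unauthenticated, and all of the app's routes are served under the service URL without any request-replay glue.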
I am new to Python; I mainly work in infrastructure.
To test one of the AWS services I have written the Python code below, which listens for GET on "/ping" and POST on "/invocations".
from flask import Flask, Response

app = Flask(__name__)

@app.route("/ping", methods=["GET"])
def ping():
    return Response(response="ping endpoint", status=200)

@app.route("/invocations", methods=["POST"])
def predict():
    return Response(response="invocation endpoint here", status=200)

if __name__ == "__main__":
    print("Training started")
    app.run(host="localhost", port=8080)
This code works fine, but Flask gives a warning like "Don't use the development server in production".
So I was looking to do the same in Django, but I was not able to find an appropriate way to achieve this. I am not sure whether what I am trying to do is even doable, but I hope I have made it clear.
Use the Gunicorn server for your Flask projects in production.
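As a minimal sketch, assuming your app above lives in app.py, running it under Gunicorn looks like this:
pip install gunicorn
gunicorn --bind 0.0.0.0:8080 app:app
Gunicorn replaces the built-in development server that app.run() starts, which is what triggers the "don't use in production" warning.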
I am using the following Python method in GCP. This method is in test.py:
import functools
import subprocess

@functools.lru_cache()
def get_project():
    return subprocess.check_output(
        "gcloud config list --format 'value(core.project)'", shell=True
    ).decode("utf-8").strip()
When I run test.py on its own on a TPU, it works, but when I use this method behind a Flask API I get the error
'gcloud not found'.
However, the same method works both on its own and behind a Flask API on a GCP VM.
I am not able to figure out what the possible cause of this could be.
This is not exactly an answer to your question, but you might be interested in knowing about the metadata server.
From this answer we can more or less deduce that the metadata server also works with TPUs. Note that I'm not 100% sure about this, though.
Try the following code to see if you can get the project ID with it.
import requests

def get_project():
    # https://cloud.google.com/compute/docs/metadata/default-metadata-values#project_metadata
    response = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/project/project-id",
        headers={"Metadata-Flavor": "Google"}
    )
    return response.text
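If that works, you can call it from the Flask app in place of the subprocess-based version; a hypothetical route might look like:

from flask import Flask

app = Flask(__name__)

@app.route("/project")
def project():
    # Reads the project ID from the metadata server instead of shelling out to gcloud
    return get_project()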
I have a couple of Cloud Functions that make remote calls out to a 3rd party API and I would like to be able to gather latency metrics in Cloud Trace for those calls.
I'm trying to find a barebones working piece of example code that I can build on. The only one I found is at https://medium.com/faun/tracing-python-cloud-functions-a17545586359. It is essentially:
import requests
from opencensus.trace import tracer as tracer_module
from opencensus.trace.exporters import stackdriver_exporter
from opencensus.trace.exporters.transports.background_thread \
    import BackgroundThreadTransport

PROJECT_ID = 'test-trace'

# instantiate trace exporter
exporter = stackdriver_exporter.StackdriverExporter(project_id=PROJECT_ID, transport=BackgroundThreadTransport)

def main_fun(data, context):
    tracer = tracer_module.Tracer(exporter=exporter)
    with tracer.span(name='get_token'):
        print('Getting Token')
        authtoken = get_token(email, password)
        print('Got Token')

def get_token(email, password):
    # Make some request out to an API and get a token
    return accesstoken
There are no errors and everything works as intended, except that no trace shows up in Cloud Trace or Stackdriver.
Am I doing something wrong here? Does anyone have some simple code that works for Cloud Trace within a Cloud Function?
I am attempting to deploy a basic Python function, which calls an API, using an HTTP trigger through Google Cloud Functions (browser editor).
Here's the function I'm trying to deploy:
import requests
import json

def call_api():
    API_URL = 'https://some-api.com/v1'
    API_TOKEN = 'some-api-token'
    result = requests.get(API_URL + "/contacts?access_token=" + API_TOKEN).json()
    print(result)

call_api()
My requirements.txt contains:
requests==2.21.0
However, every time I try to deploy the function, the following error occurs:
Unknown resource type
What am I doing wrong? The function works just fine on my local machine.
Please refer to Writing HTTP Functions for more information. This is what comes to mind when looking at your code (a corrected sketch follows the list):
The request parameter is missing (def call_api(request):)
A return is missing at the end (you don't need to print the result, just return it to the caller)
Calling call_api() at the end of the file only invokes the function locally; Cloud Functions doesn't need this
Make sure you are deploying with gcloud functions deploy call_api --runtime python37 --trigger-http
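Putting those points together, a minimal corrected sketch might look like this (the API URL and token are still placeholders):

import json
import requests

def call_api(request):
    API_URL = 'https://some-api.com/v1'
    API_TOKEN = 'some-api-token'
    result = requests.get(API_URL + "/contacts?access_token=" + API_TOKEN).json()
    # Return the payload to the caller instead of printing it
    return json.dumps(result)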
I recently posted a question about How to allow invoking an AWS Lambda function only from EC2 instances inside a VPC.
I managed to get it working by attaching an IAM role with an "AWS lambda role" policy to the EC2 instances and now I can invoke the lambda function using boto3.
Now, I would like to make the call to the Lambda function asynchronous using the asyncio await syntax. I read that Lambda offers an asynchronous invocation mode by setting InvocationType='Event', but that makes the call return immediately without getting the result of the function.
Since the function takes some time and I want to launch many in parallel, I would like to avoid blocking execution while waiting for each call to return.
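For reference, the fire-and-forget invocation mentioned above looks roughly like this in boto3 (the function name and payload are placeholders):

import json
import boto3

client = boto3.client('lambda')

# Returns immediately with a 202 status code; the function's result is not available
response = client.invoke(
    FunctionName='my-function',
    InvocationType='Event',
    Payload=json.dumps({'key': 'value'}),
)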
I tried using aiobotocore, but that only supports basic 's3' service functionality.
The best way to solve this (in my humble opinion) would be to use the AWS API Gateway service to invoke the Lambda function through a GET/POST request, which can easily be handled with aiohttp.
Nevertheless, I can't manage to make it work.
I added the "AmazonAPIGatewayInvokeFullAccess" policy to the EC2 IAM role, but every time I try:
import requests
r = requests.get('https://url_to_api_gateway_for_function')
I get a forbidden response: <Response [403]>.
I created the API Gateway directly from the trigger in the Lambda function.
I also tried to edit the API Gateway settings by adding a POST method on the function path and setting "AWS_IAM" authentication, then deploying it as a "prod" deployment... no luck, still the same forbidden response. When I test it through the test screen on the API Gateway, it works fine.
Any idea how to fix this? Am I missing some step?
I managed to solve my issue after some struggling.
The problem is that curl and Python modules like requests do not sign their HTTP requests with the IAM credentials of the EC2 machine they are running on. HTTP requests to the AWS API Gateway must be signed using the AWS Signature Version 4 protocol.
An example is here:
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
Luckily, to keep things simple, there are some helper modules like requests-aws-sign:
https://github.com/jmenga/requests-aws-sign
In the end, the code could look something like this:
import aiohttp
import asyncio
from requests_aws_sign import AWSV4Sign
from boto3 import session

session = session.Session()
credentials = session.get_credentials()
region = session.region_name or 'ap-southeast-2'
service = 'execute-api'
url = "get_it_from_api->stages->your_deployment->invoke_url"
auth = AWSV4Sign(credentials, region, service)

async def invoke_func(loop):
    async with aiohttp.request('GET', url, auth=auth, loop=loop) as resp:
        html = await resp.text()
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(invoke_func(loop))
Hope this will save somebody else some time!
EDIT:
For the sake of completeness, and to help others, I have to add that the code above does not work, because requests_aws_sign is not compatible with aiohttp; I was getting an "auth field error".
I managed to solve it by using:
async with session.get(url, headers=update_headers()) as resp:
where update_headers() is a simple function that mimics what requests_aws_sign was doing to the headers (so that I can set them directly on the request above via the headers parameter).
It looks like this:
from urllib.parse import urlparse, urlencode, parse_qs
from botocore.awsrequest import AWSRequest
from botocore.auth import SigV4Auth

def update_headers():
    url = urlparse("get_it_from_api->stages->your_deployment->invoke_url")
    path = url.path or '/'
    querystring = ''
    if url.query:
        querystring = '?' + urlencode(parse_qs(url.query), doseq=True)
    safe_url = url.scheme + '://' + url.netloc.split(':')[0] + path + querystring
    # Sign the request with SigV4 and return the resulting headers
    request = AWSRequest(method='GET', url=safe_url)
    SigV4Auth(credentials, service, region).add_auth(request)
    return dict(request.headers.items())