How should I automate variable creation from a string in Python?

I am exploring ways to automate REST API resource and endpoint implementation using flask-restful. In flask-restful, the add_resource(UpdateOrders, '/orders') function links the endpoint to the resource in the framework. UpdateOrders is the endpoint, and also the name of a class containing logic for handling requests sent to '/orders'.
I could manually specify the endpoints and resources like this:
app = Flask(__name__)
api = Api(app)

# only Orders endpoint included for simplicity
class Orders(Resource):
    def get(self, item_id):
        return order_data[item_id]

def add_resources():
    api.add_resource(UpdateOrders, '/orders')
    api.add_resource(Orders, '/orders/<item_id>')
    api.add_resource(UpdatePayments, '/payments')
    api.add_resource(Payments, '/payments/<item_id>')

if __name__ == '__main__':
    add_resources()
    app.run()
However, both the endpoints and resources will change depending on the use case of the REST API, e.g. instead of '/orders' I could have '/appointments'. I don't know what the use case will be (it is generated from the user's business process choreography).
My initial thought was to start by adding resources dynamically:
def add_resources():
    # generate list of endpoint, resource tuples
    endpoint_resource = [('UpdateOrders', '/orders'),
                         ('Orders', '/orders/<item_id>'),
                         ('UpdatePayments', '/payments'),
                         ('Payments', '/payments/<item_id>')]
    for endpoint, resource in endpoint_resource:
        api.add_resource(endpoint, resource)
Of course, this won't work as endpoint here is a string, while add_resource requires a class. So my question is: can/should I convert endpoint from a string to a variable class name so that the API's resources can be created 'dynamically'?
In other words, instead of endpoint being assigned first to string 'UpdateOrders', it will be assigned to <class '__main__.UpdateOrders'>.
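To illustrate, the conversion I have in mind could be sketched with globals(), which maps a top-level name to the object bound to it (the class below is a stand-in; the real ones would subclass Resource):

```python
# Sketch: globals() resolves the string 'UpdateOrders' to the class itself.
class UpdateOrders:  # stand-in for the real Resource subclass
    pass

endpoint_resource = [('UpdateOrders', '/orders')]

resolved = []
for endpoint, resource in endpoint_resource:
    cls = globals()[endpoint]  # 'UpdateOrders' -> <class '__main__.UpdateOrders'>
    resolved.append((cls, resource))
# api.add_resource(cls, resource) would now receive a class, not a string
```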
Any guidance is appreciated!

Related

Implementing roles with Flask-JWT-Extended

I am currently developing a Flask API that uses flask-jwt-extended to protect endpoints. I have the jwt_required decorator working correctly, but I would like to add roles for more granular control over access. In my imagination it would be best to have three tables, Users, Roles and UserRoles: UserRoles would map users to roles using foreign ids, and then a custom decorator would check the role for each endpoint.
I have never done this before, how would you implement this and why?
As you suggested, having some basic tables and methods + decorators is the way to go.
You can also look into how this is implemented in Flask-Security
(or in packages Flask-Login and Flask-Principal, which are used in Flask-Security). It can give you some suggestions on what kind of functions you'd like to have.
According to the docs, it should be possible with custom decorators, like this:
from functools import wraps

from flask import jsonify
from flask_jwt_extended import get_jwt
from flask_jwt_extended import verify_jwt_in_request

# Here is a custom decorator that verifies the JWT is present in the request,
# as well as ensuring that the JWT has a claim indicating that this user is
# an administrator
def admin_required():
    def wrapper(fn):
        @wraps(fn)
        def decorator(*args, **kwargs):
            verify_jwt_in_request()
            claims = get_jwt()
            if claims["is_administrator"]:
                return fn(*args, **kwargs)
            else:
                return jsonify(msg="Admins only!"), 403
        return decorator
    return wrapper
Just make sure that you save roles information using additional claims.
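For completeness, here is a sketch (helper name hypothetical) of how the role could be computed when issuing the token; in flask-jwt-extended, the resulting dict would be passed via create_access_token's additional_claims argument so that get_jwt() can see it later:

```python
# Hypothetical helper: derive the claim checked by admin_required()
# from a user's role list (e.g. loaded via the UserRoles table).
def build_additional_claims(user_roles):
    return {"is_administrator": "admin" in user_roles}

# In the login view this would be used roughly as:
#   token = create_access_token(identity=username,
#                               additional_claims=build_additional_claims(roles))
claims = build_additional_claims(["admin", "editor"])
```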

Building a jump-table for boto3 clients/methods

I'm trying to build a jump table of API methods for a variety of boto3 clients, so I can pass an AWS service name and an authenticated/authorized low-level boto3 client to my utility code and execute the appropriate method to get a list of resources from the AWS service.
I'm not willing to hand-code and maintain a massive if..elif..else statement with >100 clauses.
I have a dictionary of service names (keys) and API method names (values), like this:
jumpTable = { 'lambda' : 'list_functions' }
I'm passed the service name ('lambda') and a boto3 client object ('client') already connected to the right service in the region and account I need.
I use the dict's get() to find the method name for the service, and then use a standard getattr() on the boto3 client object to get a method reference for the desired API call (which of course vary from service to service):
apimethod = jumpTable.get(service)
methodptr = getattr(client, apimethod)
Sanity-checking says I've got a "botocore.client.Lambda object" for 'client' (that looks OK to me) and a "bound method ClientCreator._create_api_method.<locals>._api_call of <botocore.client.Lambda object>" for the methodptr, which reports itself as of type 'method'.
None of the API methods I'm using require arguments. When I invoke it directly:
response = methodptr()
it returns a boto3 ClientError, while invoking it through the client:
response = client.methodptr()
returns a boto3 AttributeError.
Where am I going wrong here?
I'm locked into boto3, Python3, AWS and have to talk to 100s of AWS services, each of which has a different API method that provides the data I need to gather. To an old C coder, a jump-table seems obvious; a more Pythonic approach would be welcome...
The following works for me:
import boto3

client = boto3.Session().client("lambda")
methodptr = getattr(client, apimethod)
methodptr()
Note that the boto3.Session() part is required. When calling boto3.client(..) directly, I get a 'UnrecognizedClientException' exception.
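Putting the pieces together, the whole dispatch can be sketched like this; a stub class stands in for the real boto3 client so the pattern is visible without AWS credentials:

```python
# Stub standing in for botocore.client.Lambda; only the method name matters here.
class FakeLambdaClient:
    def list_functions(self):
        return {"Functions": []}

jumpTable = {'lambda': 'list_functions'}

def fetch_resources(service, client):
    apimethod = jumpTable.get(service)      # service name -> API method name
    methodptr = getattr(client, apimethod)  # method name -> bound method
    return methodptr()                      # all methods here take no arguments

response = fetch_resources('lambda', FakeLambdaClient())
```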

Google Cloud Functions Python3: Wrapping HTTP trigger functions with endpoints

I am exploring Google Cloud Functions in Python to write HTTP triggered functions. I have a main.py with all my triggered functions structured like in this post, but would like to be able to wrap in some endpoints. On nodejs, one could do so like in this post using Express, and on Python, very similarly using Flask.
I have attempted to wrap my Cloud Functions using Flask, but Google brings me to its authentication page. My code is as follows:
from flask import Flask, jsonify, request

# Initialize Flask application
application = Flask(__name__)

@application.route('/some/endpoint/path', methods=['GET'])
def predict():
    inputs = request.args.get('inputs')
    # Some logic...
    response_object = {}
    response_object['statusCode'] = 200
    response_object['results'] = results
    return jsonify(response_object)
Is there a way to wrap the Python Cloud Functions in such a way to achieve something like this?
https://us-central1-my-project.cloudfunctions.net/some
https://us-central1-my-project.cloudfunctions.net/some/endpoint
https://us-central1-my-project.cloudfunctions.net/some/endpoint/path
I believe you are getting the authentication Google screen because you are trying to access the base url for Cloud Functions on your project.
With HTTP Cloud Functions the trigger url is usually https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME], so any routes would need to follow another slash after the function name.
That being said, I found this post where the solution provided manages to set routes within the same main.py file to access the endpoints from a single Cloud Function. I had to adapt some things, but in the end it worked for me.
The following is the source code I tested at my end:
import flask
import werkzeug.datastructures

app = flask.Flask(__name__)

@app.route('/')
def root():
    return 'Hello World!'

@app.route('/hi')
def hi():
    return 'Hi there'

@app.route('/hi/<username>')
def hi_user(username):
    return 'Hi there, {}'.format(username)

@app.route('/hi/<username>/congrats', methods=['POST'])
def hi_user_congrat(username):
    achievement = flask.request.form['achievement']
    return 'Hi there {}, congrats on {}!'.format(username, achievement)

def main(request):
    with app.app_context():
        # Copy the incoming request's headers into a fresh request context,
        # then run Flask's normal dispatch cycle by hand.
        headers = werkzeug.datastructures.Headers()
        for key, value in request.headers.items():
            headers.add(key, value)
        with app.test_request_context(method=request.method, base_url=request.base_url, path=request.path, query_string=request.query_string, headers=headers, data=request.form):
            try:
                rv = app.preprocess_request()
                if rv is None:
                    rv = app.dispatch_request()
            except Exception as e:
                rv = app.handle_user_exception(e)
            response = app.make_response(rv)
            return app.process_response(response)
This defined the following routes within a single Cloud Function:
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]/hi
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]/hi/<username>
https://[REGION]-[PROJECT_ID].cloudfunctions.net/[FUNCTION_NAME]/hi/<username>/congrats
And the following was the command used to deploy this function:
gcloud functions deploy flask_function --entry-point main --runtime python37 --trigger-http --allow-unauthenticated
Cloud Functions is designed to be used as a single endpoint. You might consider using Cloud Run instead, as it's more suited towards applications with multiple routes, and has many of the same benefits as Cloud Functions.
If you're dead set on using Cloud Functions, something like the answer at Injecting a Flask Request into another Flask App should work, but it's not ideal.

How to pass in the model name during init in Azure Machine Learning Service?

I am deploying 50 NLP models on Azure Container Instances via the Azure Machine Learning service. All 50 models are quite similar and have the same input/output format with just the model implementation changing slightly.
I want to write a generic score.py entry file and pass in the model name as a parameter. The interface method signature does not allow a parameter in the init() method of score.py, so I moved the model loading into the run method. I am assuming the init() method runs once, whereas run(data) executes on every invocation, so this is possibly not ideal (the models are 1 GB in size).
So how can I pass in some value to the init() method of my container to tell it what model to load?
Here is my current, working code:
def init():
    pass

def loadModel(model_name):
    model_path = Model.get_model_path(model_name)
    return fasttext.load_model(model_path)

def run(raw_data):
    # extract model_name from raw_data omitted...
    model = loadModel(model_name)
    ...
but this is what I would like to do (which breaks the interface)
def init(model_name):
    model = loadModel(model_name)

def loadModel(model_name):
    model_path = Model.get_model_path(model_name)
    return fasttext.load_model(model_path)

def run(raw_data):
    ...
If you're looking to use the same deployed container and switch models between requests, that's not the preferred design for the Azure Machine Learning service: the model name to load must be specified at build/deploy time.
Ideally, each deployed web-service endpoint should serve inference for one model only, with the model name fixed before the container image is built and deployed.
It is mandatory that the entry script has both init() and run(raw_data) with those exact signatures.
At the moment, we can't change the signature of init() method to take a parameter like in init(model_name).
The only dynamic user input you'd ever get to pass into this web service is via the run(raw_data) method. As you have found, given the size of your models, loading them via run is not feasible.
init() runs first and only once after your web service is deployed. Even if init() took a model_name parameter, there isn't a straightforward way to call this method directly and pass your desired model name.
But, one possible solution is:
You can create a params file like the one below and store it in Azure Blob Storage.
Example runtime parameters generation script:
import pickle

params = {'model_name': 'YOUR_MODEL_NAME_TO_USE'}

with open('runtime_params.pkl', 'wb') as file:
    pickle.dump(params, file)
You'll need to use the Azure Storage Python SDK to write code that reads from your Blob Storage account. This is also mentioned in the official docs here.
Then you can access this from init() function in your score script.
Example score.py script:
from azure.storage.blob import BlockBlobService
import pickle

def init():
    global model
    block_blob_service = BlockBlobService(connection_string='your_connection_string')
    blob_item = block_blob_service.get_blob_to_bytes('your-container-name', 'runtime_params.pkl')
    # blob_item.content is raw bytes, so deserialize with pickle.loads
    params = pickle.loads(blob_item.content)
    model = loadModel(params['model_name'])
You can store connection strings in Azure Key Vault for secure access; Azure ML workspaces come with built-in Key Vault integration. More info here.
With this approach, you're abstracting runtime params config to another cloud location rather than the container itself. So you wouldn't need to re-build the image or deploy the web-service again. Simply restarting the container will work.
If you're looking to simply re-use score.py (not changing code) for multiple model deployments in multiple containers then here's another possible solution.
You can define the model name each web service should use in a small params file and read it in score.py. You'll need to pass this file as a dependency when setting up the image config.
This would, however, need multiple params files for each container deployment.
Passing 'runtime_params.pkl' in dependencies to your image config (More detail example here):
image_config = ContainerImage.image_configuration(execution_script="score.py",
                                                  runtime="python",
                                                  conda_file="myenv.yml",
                                                  dependencies=["runtime_params.pkl"],
                                                  docker_file="Dockerfile")
Reading this in your score.py init() function:
def init():
    global model
    with open('runtime_params.pkl', 'rb') as file:
        params = pickle.load(file)
    model = loadModel(params['model_name'])
Since you're creating a new image config with this approach, you'll need to build the image and re-deploy the service.

using Locust.io for REST web service

I want to use Locust for performance testing of a Spring REST web service, where each service is secured by a token.
Has anyone tried to do the same by nesting task sets?
How can we maintain the same token for all requests from a single user?
Is it possible to move to a task based on the response from another task?
I had a similar scenario. If you know what the token is in advance, you can do:
def on_start(self):
    """ on_start is called when a Locust starts, before any task is scheduled """
    self.access_token = "XYZ"  # method 1
    # self.login()  # <-- method 2
Otherwise you could call something like a login method that would authenticate your user, and then store the resulting token on self.
Since on_start happens before any tasks, I never had to worry about nesting task sets.
If you need things to happen in a certain order within tasks, you can just run something like:
@task(1)
def mytasks(self):
    self.get_service_1()
    self.get_service_2()
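A plain-Python sketch of the token lifecycle (class and method names hypothetical; in a real Locust HttpUser the login would be a self.client.post(...) call): the token fetched once in on_start is reused by every task through a shared headers helper.

```python
class TokenSession:
    """Mimics the Locust user lifecycle: on_start runs once per simulated user."""

    def on_start(self):
        self.access_token = self.login()

    def login(self):
        # Real version would be something like:
        #   return self.client.post("/auth", json=creds).json()["token"]
        return "XYZ"

    def auth_headers(self):
        # Every task attaches the same token to its requests.
        return {"Authorization": "Bearer " + self.access_token}

user = TokenSession()
user.on_start()
```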
