I am using FastAPI to take in a JSON file, which then becomes the body of an outgoing API request. Orwell and Good so far. Now I want to do the same with Robyn (built on Rust) instead of FastAPI, but I have not managed to get any joy calling the API at the point marked ??.
What do I need to consider (documentation is sparse)? Does Robyn cut it alone, or am I missing something?
from robyn import Robyn, jsonify

app = Robyn(__file__)

@app.post("/yt")
async def json(request):
    body = request["body"]
    outurl = "https://translate.otherapi.com/translate/v1/translate"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer {0}".format(TOKEN)
    }
    ?? response_data = await call_api(data)
    return response_data['translations'][0]

app.start(port=5000)
With FastAPI:
import aiohttp
import aiofiles
import json
import requests
from fastapi import FastAPI, Header, Depends, HTTPException, Request

app = FastAPI()

async def call_api(data):
    async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(ssl=False)) as session:
        async with session.post(url, headers=headers, json=data) as resp:
            response_data = await resp.json()
            return response_data

@app.post("/yt")
async def root(request: Request):
    data = await request.json()
    file_path = "data.json"
    await write_json_to_file(data, file_path)
    data = await read_json_from_file(file_path)
    response_data = await call_api(data)
    return response_data['translations'][0]

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)
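The snippet calls write_json_to_file and read_json_from_file, which aren't shown above. A minimal sketch of what they might look like, using the already-imported aiofiles (names and signatures are assumed from the call sites):

async def write_json_to_file(data, file_path):
    # Serialize the dict and write it asynchronously
    async with aiofiles.open(file_path, "w") as f:
        await f.write(json.dumps(data))

async def read_json_from_file(file_path):
    # Read the file back and parse it into a dict
    async with aiofiles.open(file_path, "r") as f:
        contents = await f.read()
    return json.loads(contents)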
Robyn author here. I am not entirely sure what you are trying to achieve, but there is one issue I can point out: request["body"] currently returns a byte string array.
You need to alter your code to this:
import json

@app.post("/yt")
async def yt(request):  # renamed from `json` so the handler does not shadow the imported json module
    body = bytearray(request["body"]).decode("utf-8")
    data = json.loads(body)
    outurl = "https://translate.otherapi.com/translate/v1/translate"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer {0}".format(TOKEN)
    }
    response_data = await call_api(data)
    return response_data['translations'][0]
This is a peculiarity that I am not very fond of. We are hoping to fix it within the next few releases.
I hope this helped :D
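Putting the author's fix together with an outbound call gives a complete sketch of the Robyn app. Robyn only handles the inbound side; the outgoing request still needs an HTTP client, so aiohttp is reused here just as in the FastAPI version (TOKEN and the translate URL are placeholders from the question):

import json

import aiohttp
from robyn import Robyn

app = Robyn(__file__)

TOKEN = "..."  # placeholder, as in the question
OUTURL = "https://translate.otherapi.com/translate/v1/translate"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer {0}".format(TOKEN),
}

async def call_api(data):
    # Same outbound call as in the FastAPI version
    async with aiohttp.ClientSession() as session:
        async with session.post(OUTURL, headers=HEADERS, json=data) as resp:
            return await resp.json()

@app.post("/yt")
async def yt(request):
    # Decode the byte-string body, per the author's note above
    body = bytearray(request["body"]).decode("utf-8")
    data = json.loads(body)
    response_data = await call_api(data)
    return response_data['translations'][0]

app.start(port=5000)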
I'm using AWS Elasticsearch and the async elasticsearch-py package in my project to connect to the cluster.
AWS Elasticsearch version: 7.9
Python package: elasticsearch[async]==7.12.0
I'm not able to initialize the async Elasticsearch client using the AWS4Auth library (mentioned in the official AWS ES client documentation for Python).
The client should connect successfully. Instead, it raises:
AttributeError: 'AWS4Auth' object has no attribute 'encode'
Here is my code snippet:
from elasticsearch import AsyncElasticsearch, AIOHttpConnection
from requests_aws4auth import AWS4Auth
import asyncio

host = 'my-test-domain.us-east-1.es.amazonaws.com'
region = 'us-east-1'
service = 'es'
credentials = {
    'access_key': "MY_ACCESS_KEY",
    'secret_key': "MY_SECRET_KEY"
}

awsauth = AWS4Auth(credentials['access_key'], credentials['secret_key'], region, service)

es = AsyncElasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=AIOHttpConnection
)

async def test():
    print(await es.info())

asyncio.run(test())
from urllib.parse import urlencode

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from elasticsearch import AIOHttpConnection

class AWSAuthAIOHttpConnection(AIOHttpConnection):
    """Enable AWS Auth with AIOHttpConnection for AsyncElasticsearch

    The AIOHttpConnection class built into elasticsearch-py is not currently
    compatible with passing AWSAuth as the `http_auth` parameter, as suggested
    in the docs when using AWSAuth for the non-async RequestsHttpConnection class:
    https://docs.aws.amazon.com/opensearch-service/latest/developerguide/request-signing.html#request-signing-python

    To work around this we patch the `AIOHttpConnection.perform_request` method
    to add in AWS Auth headers before making each request.

    This approach was synthesized from:
    * https://stackoverflow.com/questions/38144273/making-a-signed-http-request-to-aws-elasticsearch-in-python
    * https://github.com/DavidMuller/aws-requests-auth
    * https://github.com/jmenga/requests-aws-sign
    * https://github.com/byrro/aws-lambda-signed-aiohttp-requests
    """
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._credentials = boto3.Session().get_credentials()

    async def perform_request(
        self, method, url, params=None, body=None, timeout=None, ignore=(), headers=None
    ):
        def _make_full_url(url: str) -> str:
            # These steps are copied from the parent class' `perform_request` implementation.
            # The Elasticsearch client only passes in the request path as the url,
            # and that partial url format is rejected by the `SigV4Auth` implementation.
            if params:
                query_string = urlencode(params)
            else:
                query_string = ""
            full_url = self.host + url
            full_url = self.url_prefix + full_url
            if query_string:
                full_url = "%s?%s" % (full_url, query_string)
            return full_url

        full_url = _make_full_url(url)
        if headers is None:
            headers = {}
        # This request object won't be used; we just want to copy its auth headers
        # after `SigV4Auth` processes it and adds the headers.
        _request = AWSRequest(
            method=method, url=full_url, headers=headers, params=params, data=body
        )
        SigV4Auth(self._credentials, "es", "us-west-1").add_auth(_request)
        headers.update(_request.headers.items())
        # Passing in the original `url` param here works too.
        return await super().perform_request(
            method, full_url, params, body, timeout, ignore, headers
        )
I took @francojposa's answer above and fixed/adapted it. I tried to submit an edit to that answer, but the suggestion queue was full.

requirements.txt:

boto3<2.0
elasticsearch[async]<7.14  # in 7.14 they "shut out" anything other than Elastic Cloud

And here's the main definition:
from urllib.parse import urlencode

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from elasticsearch import AsyncElasticsearch, AIOHttpConnection

class AWSAuthAIOHttpConnection(AIOHttpConnection):
    """Enable AWS Auth with AIOHttpConnection for AsyncElasticsearch

    The AIOHttpConnection class built into elasticsearch-py is not currently
    compatible with passing AWSAuth as the `http_auth` parameter, as suggested
    in the docs when using AWSAuth for the non-async RequestsHttpConnection class:
    https://docs.aws.amazon.com/opensearch-service/latest/developerguide/request-signing.html#request-signing-python

    To work around this we patch the `AIOHttpConnection.perform_request` method
    to add in AWS Auth headers before making each request.

    This approach was synthesized from:
    * https://stackoverflow.com/questions/38144273/making-a-signed-http-request-to-aws-elasticsearch-in-python
    * https://github.com/DavidMuller/aws-requests-auth
    * https://github.com/jmenga/requests-aws-sign
    * https://github.com/byrro/aws-lambda-signed-aiohttp-requests
    """
    def __init__(self, *args, aws_region=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.aws_region = aws_region
        self._credentials = boto3.Session().get_credentials()
        self.auther = SigV4Auth(self._credentials, "es", self.aws_region)

    def _make_full_url(self, url: str, params=None) -> str:
        # These steps are copied from the parent class' `perform_request` implementation.
        # The Elasticsearch client only passes in the request path as the url,
        # and that partial url format is rejected by the `SigV4Auth` implementation.
        query_string = urlencode(params) if params else None
        full_url = self.url_prefix + self.host + url
        if query_string:
            full_url = "%s?%s" % (full_url, query_string)
        return full_url

    async def perform_request(
        self, method, url, params=None, body=None, timeout=None, ignore=(), headers=None
    ):
        full_url = self._make_full_url(url)
        if headers is None:
            headers = {}
        # This request object won't be used; we just want to copy its auth headers
        # after `SigV4Auth` processes it and adds the headers.
        _request = AWSRequest(
            method=method, url=full_url, headers=headers, params=params, data=body
        )
        self.auther.add_auth(_request)
        headers.update(_request.headers.items())
        # Passing in the original `url` param here works too.
        return await super().perform_request(
            method, url, params, body, timeout, ignore, headers
        )
Usage:

es_client = AsyncElasticsearch(
    ['https://aws-es-or-opensearch-url-goes-here'],
    use_ssl=True, verify_certs=True,
    connection_class=AWSAuthAIOHttpConnection, aws_region='us-east-1'
)

async def test():
    body = {...}
    results = await es_client.search(body=body, index='test', doc_type='test')  # I use ES 5/6
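To actually drive the coroutine, the same asyncio pattern as in the question applies:

import asyncio

asyncio.run(test())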
I think that with AWS4Auth you are bound to RequestsHttpConnection.

"The default connection class is based on urllib3 which is more performant and lightweight than the optional requests-based class. Only use RequestsHttpConnection if you have need of any of requests advanced features like custom auth plugins etc."

from https://elasticsearch-py.readthedocs.io/en/master/transports.html
Try:

from elasticsearch import RequestsHttpConnection

es = AsyncElasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)

or the non-async version if the code above doesn't work:

from elasticsearch import Elasticsearch, RequestsHttpConnection

es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)
I am trying to write a Lambda function on the Python 3.8 runtime, but I am getting an error when making a GET request:
[ERROR] AttributeError: module 'botocore.vendored.requests' has no attribute 'get' Traceback (most recent call last): File "/var/task/lambda_function.py"
import json
from botocore.vendored import requests

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    print(request)
    print(request['headers'])
    token = request['headers']['cookie'][0]['value'].partition("=")[2]
    print(token)
    print(type(request['uri']))
    consumer_id = request['uri'].rpartition('/')[-1]
    print(consumer_id)
    # Take the token and send it somewhere
    token_response = requests.get(url='https://url/api/files/' + consumer_id, params={'token': token})
    print(token_response)
    return request
I tried following this blog: https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/
but I was not able to identify which layer to add. Could anyone please help?
According to the link you provided, and assuming that requests is correctly installed, you should be using

import requests

instead of

from botocore.vendored import requests
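Note that requests is not bundled with the Python 3.8 Lambda runtime, so it has to ship inside your deployment package or come from a layer. A minimal sketch of the corrected handler, reusing the event handling from the question (the pip command is one common way to vendor the package):

# Vendor requests into the deployment package before zipping, e.g.:
#   pip install requests --target .
import requests

def lambda_handler(event, context):
    # Same CloudFront event handling as in the question
    request = event['Records'][0]['cf']['request']
    token = request['headers']['cookie'][0]['value'].partition("=")[2]
    consumer_id = request['uri'].rpartition('/')[-1]
    token_response = requests.get('https://url/api/files/' + consumer_id, params={'token': token})
    print(token_response)
    return request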
I'm creating an AWS API endpoint (GET) to return a PDF file, and I'm facing a serialization issue.
An AWS Lambda function is mapped to fetch the file from S3.
import boto3
import base64

def lambda_handler(event, context):
    response = client.get_object(
        Bucket='test-bucket',
        Key=file_path,
    )
    data = response['Body'].read()
    return {
        'statusCode': 200,
        'isBase64Encoded': True,
        'body': data,
        'headers': {
            'content-type': 'application/pdf',
            'content-disposition': 'attachment; filename=test.pdf'
        }
    }
[ERROR] Runtime.MarshalError: Unable to marshal response: bytes is not JSON serializable.
If I return str(data, "utf-8") instead, a PDF file downloads, but it is corrupted and will not open.
Please suggest where I'm going wrong.
Thanks.
You will need to initialize the client variable first, and then base64-encode the data coming back from S3, as follows:

import json
import boto3
import base64

client = boto3.client('s3')

def lambda_handler(event, context):
    bucket_name = 'bucket-name'
    file_name = 'file-name.pdf'
    fileObject = client.get_object(Bucket=bucket_name, Key=file_name)
    file_content = fileObject["Body"].read()
    print(bucket_name, file_name)
    return base64.b64encode(file_content)
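If you want to keep the statusCode and headers from the original attempt (e.g. behind an API Gateway proxy integration, where isBase64Encoded tells the gateway to decode the body back to binary), a sketch of the return value might look like this. Note that for REST APIs the content type must also be registered as a binary media type on the API Gateway side:

    return {
        'statusCode': 200,
        'isBase64Encoded': True,
        'body': base64.b64encode(file_content).decode('utf-8'),  # base64 text, not raw bytes, so it is JSON serializable
        'headers': {
            'content-type': 'application/pdf',
            'content-disposition': 'attachment; filename=test.pdf'
        }
    }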
Here is the related code:
import logging
logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)
import datetime
import io
import json

import httplib2
from flask import Flask, render_template, request
from flask import make_response
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload
from oauth2client.client import AccessTokenCredentials

...

@app.route('/callback_download')
def userselectioncallback_with_drive_api():
    """
    Need to make it a background process
    """
    logging.info("In download callback...")
    code = request.args.get('code')
    fileId = request.args.get('fileId')
    logging.info("code %s", code)
    logging.info("fileId %s", fileId)
    credentials = AccessTokenCredentials(
        code,
        'flex-env/1.0')
    http = httplib2.Http()
    http_auth = credentials.authorize(http)
    # Exports a Google Doc to the requested MIME type and returns the exported
    # content. Please note that the exported content is limited to 10MB.
    # v3 does not work? over quota?
    drive_service = build('drive', 'v3', http=http_auth)
    drive_request = drive_service.files().export(
        fileId=fileId,
        mimeType='application/pdf')
    b = bytes()
    fh = io.BytesIO(b)
    downloader = MediaIoBaseDownload(fh, drive_request)
    done = False
    try:
        while done is False:
            status, done = downloader.next_chunk()
            logging.info("Download %d%%.", int(status.progress() * 100))
    except Exception as err:
        logging.error(err)
        logging.error(err.__class__)
    response = make_response(fh.getbuffer())
    response.headers['Content-Type'] = 'application/pdf'
    response.headers['Content-Disposition'] = \
        'inline; filename=%s.pdf' % 'yourfilename'
    return response
It is based on a Drive API code example. I am trying to export some files from Google Drive to PDF format.
The exception comes from the line
response = make_response(fh.getbuffer())
It throws:
TypeError: 'memoryview' object is not callable
How can I retrieve the PDF content properly from fh? Do I need to apply some base64 encoding?
My local runtime is Python 3.4.3.
I had used the wrong API. I should do this instead:
response = make_response(fh.getvalue())
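For reference: BytesIO.getvalue() returns a bytes copy of the buffer, which Flask's make_response accepts as a response body, while getbuffer() returns a memoryview, which it does not handle. A minimal illustration:

import io

fh = io.BytesIO(b"%PDF-1.4 ...")  # stands in for the downloaded content
print(type(fh.getvalue()))   # <class 'bytes'> - works with make_response
print(type(fh.getbuffer()))  # <class 'memoryview'> - triggers the error above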