Is it possible to set a custom header when using Lambda non-proxy integrations?
At the moment I have enabled binary support and I am returning straight from my handler, but I have a requirement to set the file name of the download. I was planning to use Content-Disposition: attachment; filename="filename.xlsx", but I am not sure how I can do this with Lambda proxy integration turned off.
Reading https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-integration-settings-integration-response.html, I am not sure whether it only works for JSON responses.
The example shows the response body as taking a JSON object, but then says there is a base64-encoding option for binary support. I am just returning my binary data straight from my function, and I had not planned to use Lambda proxy integration at all if possible.
I currently have files downloading, but I am using temporary files and I want to name the downloads.
# In my service
import tempfile
from io import BytesIO
import pandas as pd

with tempfile.NamedTemporaryFile(suffix=".xlsx") as tmp:
    pd.DataFrame(report_json).to_excel(tmp.name)
    bytes_io = BytesIO(tmp.read())
return bytes_io

# In my handler
import base64
return base64.b64encode(bytes_io.getvalue())
Using later versions of the Serverless Framework, a custom header such as Content-Disposition can be set like the following.
integration: lambda
response:
  headers:
    Content-Type: "'text/csv'"
    Content-Disposition: "'attachment; filename=abc.csv'"
I am not sure yet whether it is possible to interpolate values from the context into these header values.
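For comparison, in plain CloudFormation the same header appears to be set as a response-parameter mapping on the method's integration response. Below is a minimal sketch on AWS::ApiGateway::Method; the status code and ContentHandling value are my assumptions, and the single quotes inside the double quotes mark a static value (mapping from the body via integration.response.body.* only works for JSON payloads, so a static value seems to be the practical option for a binary response).
IntegrationResponses:
  - StatusCode: 200
    ContentHandling: CONVERT_TO_BINARY  # assumption: pairs with binary media type support
    ResponseParameters:
      method.response.header.Content-Disposition: "'attachment; filename=filename.xlsx'"
MethodResponses:
  - StatusCode: 200
    ResponseParameters:
      method.response.header.Content-Disposition: true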
Related
I'm trying to use the Put Blob REST API from Postman (at the moment), using a code-generated SAS.
If I set the body as binary in Postman and select my file, everything works just fine: I get my file in the blob storage as expected.
However, if I send the file using multipart/form-data, the file is uploaded, but I get additional data at the beginning of the file, such as:
----------------------------515848534032814231487294
Content-Disposition: form-data; name="file"; filename="my_file.json"
Content-Type: application/json
Does anybody know why that is, and how I could use multipart/form-data for uploading my file to the blob storage?
Thank you in advance!
This is the expected behavior when using multipart/form-data.
With multipart/form-data, the boundary (like ----------------------------515848534032814231487294) and the part headers are auto-generated and inserted into the request body, and the blob storage backend stores that body verbatim; it does not strip these auto-generated lines out.
One more thing: multipart/form-data is mostly used in web projects. If you have to accept it, you can write a function in your backend that parses these extra lines out of the content.
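If what you actually need is the raw file in the blob, the simplest option is to send the file as the raw request body, which is exactly what Postman's binary mode does. A minimal sketch in TypeScript, assuming Node 18+ (for the global fetch) and a placeholder SAS URL:

import { readFile } from "node:fs/promises";

// Placeholder SAS URL; substitute your code-generated SAS.
const sasUrl = "https://myaccount.blob.core.windows.net/mycontainer/my_file.json?sv=...";

const body = await readFile("my_file.json");
const resp = await fetch(sasUrl, {
  method: "PUT",
  headers: {
    "x-ms-blob-type": "BlockBlob",      // required by the Put Blob operation
    "Content-Type": "application/json", // stored as the blob's content type
  },
  body,
});
if (resp.status !== 201) {              // Put Blob returns 201 Created on success
  throw new Error(`Put Blob failed with status ${resp.status}`);
}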
I've created a Lambda function so that I can use it for validation purposes and then proxy the request to the service layer. The service layer's response contains a binary blob (a PDF), which goes back through the Lambda function and the API Gateway and finally reaches the client.
The first problem we ran into was that the PDF got transformed or corrupted and just came back blank. Then I found this post, which did not make any sense to me at first, until I saw this AWS doc. It turns out you are required to encode the binary data as base64 and set the indicator 'isBase64Encoded' to true; the gateway then converts the response back to the binary blob.
TBH, I am new to AWS and I don't really understand why it works this way. What's wrong with passing through the original binary blob? Why are those conversion steps necessary?
Here is the list of things I had to do:
Configured */* as a binary media type on the gateway. (I tried to use application/pdf, but it did not work?)
Made sure the response body from the service layer is not transformed into a string (I am using request, which by default gives me a string) by sending encoding: null along with the request.
When I get the Buffer data from the service layer, I use Buffer to convert the response body into base64 encoding.
In the Lambda output, I set isBase64Encoded to true.
Finally, I get the unaltered PDF...
I am wondering if someone can confirm I am doing this in the expected way? Or maybe there is a better way?
Also, when we set the binary support media type to */*, doesn't this mean it accepts all media types? I only want PDF to be supported.
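For reference, the output side of my handler looks roughly like the sketch below; fetchPdfFromService is a stand-in for our call to the service layer, and the proxy-style output shape is what API Gateway's Lambda proxy integration expects.

// Stand-in for the request to the service layer; resolves to the raw PDF
// bytes as a Buffer (hence encoding: null on the request call).
declare function fetchPdfFromService(): Promise<Buffer>;

export const handler = async () => {
  const pdf = await fetchPdfFromService();
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/pdf" },
    body: pdf.toString("base64"), // convert the Buffer to base64
    isBase64Encoded: true,        // tell API Gateway to decode it back to binary
  };
};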
This doc (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html) should be able to answer your question. There are two things to note:
You can pass the original binary file (blob) as well as a base64-encoded binary file through API Gateway.
Ref: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-content-encodings-examples-image-lambda.html
*/* works in your case, but it means API Gateway will treat all payloads as binary data, and this breaks payloads with text data, for example JSON payloads. So ideally application/pdf should be used as the binary media type.
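If you manage the API with CloudFormation, restricting binary handling to PDF would look roughly like the sketch below (the resource and API names are illustrative):

MyRestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: pdf-proxy-api       # illustrative name
    BinaryMediaTypes:
      - application/pdf       # only PDF payloads are treated as binary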
Please let me know if the openapi-request-validator Node.js library can be used to validate requests against an OpenAPI 3 spec YAML file. I had a look at express-openapi-validator, but my application does not use Express. My service is a Lambda function (Node.js) deployed in AWS.
I believe you can use openapi-request-validator in your Lambda function; its function signature is already very friendly to the OpenAPI spec YAML file. What you can do:
Include the openapi spec yaml file in the zip file when deploying to AWS.
At runtime, load the yaml file and convert it into a Javascript object using some library (e.g. js-yaml).
Write a simple function to do the following:
Look up the JavaScript spec object based on the request path to find the related parameters, requestBody, schemas, etc. required by OpenAPIRequestValidator.
Transform the incoming API Gateway proxy event object (I assume it's a proxy integration) into the format that validateRequest expects.
Then you will be able to call new OpenAPIRequestValidator(...) and its validateRequest method to validate the transformed request object.
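A rough sketch of how those steps could fit together; the spec lookup is deliberately simplified to one hard-coded operation, and the exact constructor options should be double-checked against the library's README.

import { readFileSync } from "node:fs";
import { load } from "js-yaml";
import OpenAPIRequestValidator from "openapi-request-validator";

// Steps 1-2: the spec YAML ships in the deployment package and is parsed once.
const spec: any = load(readFileSync("openapi.yaml", "utf8"));

export const handler = async (event: any) => {
  // Step 3: look up the operation for this request (hard-coded here).
  const operation = spec.paths["/pets"].post;
  const validator = new OpenAPIRequestValidator({
    parameters: operation.parameters,
    requestBody: operation.requestBody,
    schemas: spec.components?.schemas, // used to resolve $ref schemas
  });
  // Transform the proxy event into the shape validateRequest expects.
  const errors = validator.validateRequest({
    headers: event.headers,
    body: event.body ? JSON.parse(event.body) : undefined,
    params: event.pathParameters ?? {},
    query: event.queryStringParameters ?? {},
  });
  if (errors) {
    return { statusCode: 400, body: JSON.stringify(errors) };
  }
  // ...request is valid, hand off to the service layer...
};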
I would like some assistance with CloudFront distributions and their YAML templates if anyone has experience here.
We use CloudFront as an internal CDN for media files. To get around a tainted-canvas error in the UI (selecting a poster image for a video), I manually added some headers to the whitelist, and this resolved the issue.
However, this needs to be part of our automated deployments, and I cannot seem to find anything concrete on how to replicate it via a YAML template.
From the CloudFormation documentation:
Specifies the headers that you want Amazon CloudFront to forward to the origin for this cache behavior (whitelisted headers). For the headers that you specify, Amazon CloudFront also caches separate versions of a specified object that is based on the header values in viewer requests.
Cookies:
  Cookies
Headers:
  - String
QueryString: Boolean
QueryStringCacheKeys:
  - String
When navigating through the AWS template documentation, use the Type links to dig further into the specifications.
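Put together, a cache behavior that whitelists headers might look like the sketch below; the header names are illustrative (these are ones commonly whitelisted for CORS issues such as a tainted canvas), so substitute the ones you added manually.

DistributionConfig:
  DefaultCacheBehavior:
    ForwardedValues:
      QueryString: false
      Headers:
        - Origin
        - Access-Control-Request-Method
        - Access-Control-Request-Headers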
As an aside, I prefer to use Terraform to configure these resources:
cache_behavior {
  forwarded_values {
    headers = ["Host"]
  }
}
My first deploy to AWS.
The files are all in place, and index.html loads.
There are two files in a subdir, one .js and one .css.
They both return 200 but fail to load. Chrome says it's the 'parser'.
After trying a few things, I noted that this property is causing it: ContentEncoding: "gzip".
If I remove this property, the files load correctly.
Am I using this property incorrectly?
I am using the Node AWS SDK via this great project: https://github.com/MathieuLoutre/grunt-aws-s3
You can witness this behavior for yourself at http://tidepool.co.s3-website-us-west-1.amazonaws.com/
If you specify Content-Encoding: gzip, then you need to make sure that the content is actually gzipped on S3.
From what I see in this CSS file:
http://tidepool.co.s3-website-us-west-1.amazonaws.com/08-26_6483218-dirty/all-min.css
the actual content is not gzipped, but the Content-Encoding: gzip header is present.
Also keep in mind that S3 is unable to compress your content on the fly based on the Accept-Encoding header in the request. You can either store it uncompressed, and it will work for all browsers/clients, or store it in a compressed format (gzip/deflate), and it will only work on clients that can handle compressed content.
You could also take a look at the official AWS SDK for Node.js.
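For example, a minimal sketch with the AWS SDK for JavaScript (v3 here; the bucket and key are taken from the URLs above and are illustrative) that gzips the file before upload, so the stored bytes actually match the ContentEncoding header:

import { readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-west-1" });

await s3.send(new PutObjectCommand({
  Bucket: "tidepool.co",                       // illustrative bucket
  Key: "08-26_6483218-dirty/all-min.css",      // illustrative key
  Body: gzipSync(readFileSync("all-min.css")), // content is now actually gzipped
  ContentType: "text/css",
  ContentEncoding: "gzip",                     // header now matches the stored bytes
}));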