Hosting images on Lambda + API Gateway - Node.js

I have a Node.js application running on AWS Lambda + API Gateway.
I am deploying my app with the Serverless Framework (https://www.npmjs.com/package/serverless). My assets (including images) are packed into a zip archive, uploaded to S3, and served via CloudFront (the regular Serverless flow).
Requests for images respond with a 200 OK status, but the images are not displayed, and I have no idea where to start looking for the issue.
I enabled binary media types in API Gateway and added the following types: image/gif and image/jpeg.
For example I am trying to display this image:
http://www.top13.net/wp-content/uploads/2015/10/perfectly-timed-funny-cat-pictures-5.jpg
Here is URL to it in my app:
http://angular-universal-serverless.maciejtreder.com/assets/img/cat.jpg
Is it even possible to serve images this way, or should I upload them to S3 instead?
Here are some entries from logs (before enabling binary media types):
http://www.heypasteit.com/clip/0IILRO
and after enabling:
http://www.heypasteit.com/clip/0IILS2

I have solved the problem on my own.
Here is the boilerplate repository: https://github.com/maciejtreder/angular-universal-serverless
The point was to base64-encode the files on the Lambda side and send them in encoded form to API Gateway with the proper headers.
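For reference, a minimal sketch of that approach (a hypothetical handler; the file path and names are assumptions, not the exact code from the repository):

'use strict';
const fs = require('fs');

// Hypothetical handler: read an image bundled with the function, base64-encode it,
// and let API Gateway turn it back into binary thanks to the whitelisted media types.
exports.handler = async () => {
  const image = fs.readFileSync('./assets/img/cat.jpg'); // assumed path
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/jpeg' },
    body: image.toString('base64'),
    isBase64Encoded: true, // tells API Gateway the body is base64-encoded binary
  };
};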

I had this problem on Angular 12.0.5 using regular Angular Universal (ng add @nguniversal/express-engine), Serverless, and @vendia/serverless-express.
The solution was:
1 - Add the media types in the API Gateway console.
2 - Add apiGateway.binaryMediaTypes to my serverless.yml file:
provider:
  ...
  apiGateway:
    binaryMediaTypes:
      - '*/*'
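For context, a rough sketch of how @vendia/serverless-express is typically wired up for this setup (the ./server import is an assumption about how your Universal build exposes the Express app):

// lambda.js - sketch of the Lambda entry point (names and paths are illustrative)
const serverlessExpress = require('@vendia/serverless-express');
const { app } = require('./server'); // assumed export of the Express app from the Universal build

// serverlessExpress wraps the Express app in a Lambda handler; with
// binaryMediaTypes set to '*/*', API Gateway returns images and fonts intact.
exports.handler = serverlessExpress({ app });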

Related

Serverless AWS Lambda project running on deploy

I have built a simple Node.js application that consists of a single API endpoint. Deployment works, and I can call the API endpoint as expected. However, the handler function also runs on deployment of the code, which is problematic because the handler posts a tweet to Twitter, and I don't want it to tweet every time I deploy a code update. I haven't been able to find anyone online reporting a similar problem, but I'm sure this is not expected functionality.
This is my serverless.yml file (I originally created it from the GitLab Node.js serverless template here: https://gitlab.com/gitlab-org/project-templates/serverless-framework/):
service: my-project

provider:
  name: aws
  region: ${env:AWS_REGION}
  runtime: nodejs10.x

plugins:
  - serverless-offline
  - serverless-jest-plugin
  - serverless-stack-output # Allows us to output endpoint url to json file

functions:
  post:
    handler: main.main
    events:
      - http:
          path: post
          method: post

custom:
  output:
    handler: main.main
    file: stack.json
I believe the problem in your case is that the handler you configured for the serverless-stack-output plugin is the same as the handler for your function. That plugin explicitly calls its configured handler after deployment (https://github.com/sbstjn/serverless-stack-output#handler), which executes your function. If you drop the plugin, or configure a different handler for it as in the sketch below, the problem should disappear.
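For illustration, a dedicated handler for the plugin might look like this (a sketch; the file name is made up, and the callback signature is my assumption based on the plugin's README, so double-check it there):

// scripts/stack-output.js - hypothetical handler used only by serverless-stack-output,
// so that main.main (the function that tweets) is no longer invoked on deploy.
function handler(data, serverless, options) {
  // 'data' contains the stack outputs (e.g. the endpoint URL) collected after deployment.
  console.log('Stack outputs:', data);
}

module.exports = { handler };

Then point custom.output.handler at something like scripts/stack-output.handler instead of main.main, and keep functions.post.handler as main.main.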

Moving an existing Lambda@Edge function to the Serverless Framework

I have a Lambda@Edge function attached to a CloudFront distribution. What I want to do is use the Serverless Framework to publish the Lambda (instead of manually uploading files and clicking "Deploy to Lambda@Edge"). What I've tried, looking at the Serverless documentation, is to add this yml file to the project and run the deployment script:
service: cloudfront-service

provider:
  name: aws
  runtime: nodejs10.x

functions:
  cfLambda:
    handler: index.handler
    events:
      - cloudFront:
        eventType: origin-request
        origin: <CloudFront-Origin-ID>
This deployed the Lambda, but it didn't attach it to CloudFront (the function hasn't been published, and there are no related versions or triggers). So how can I do this using an existing CloudFront distribution?
The plugin @silvermine/serverless-plugin-cloudfront-lambda-edge will not help if you want to use an existing CloudFront distribution. It is only helpful if you are going to create a new one.
This issue has already been reported, and per the forum thread, this functionality is not supported.
Lambda@Edge with the Serverless Framework is quite easy. We use this plugin:
plugins:
  - '@silvermine/serverless-plugin-cloudfront-lambda-edge'
Please go directly to the plugin author's website for complete examples: https://github.com/silvermine/serverless-plugin-cloudfront-lambda-edge
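For completeness, the index.handler referenced above is just a regular Node.js export; a minimal origin-request sketch (the custom header is purely illustrative):

'use strict';

// Minimal Lambda@Edge origin-request handler sketch.
// CloudFront passes the request in event.Records[0].cf.request;
// returning the request object forwards it on to the origin.
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;

  // Illustrative change: add a custom header before the origin sees the request.
  request.headers['x-edge-example'] = [{ key: 'X-Edge-Example', value: 'true' }];

  return request;
};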
Based on your implementation, you have wrong indentation, so I think it won't actually attach the function to your CloudFront distribution. Wrong indentation means the events will not be created on your Lambda function, so instead of this:
events:
  - cloudFront:
    eventType: origin-request
    origin: <CloudFront-Origin-ID>
Do this:
events:
  - cloudFront:
      eventType: origin-request
      origin: <CloudFront-Origin-ID>
I hope this solves your problem. I ran into this wrong indentation myself and wondered why the trigger was not being attached properly.

openapi-request-validator: validate requests against a YAML spec

Please let me know if the openapi-request-validator Node.js library can be used to validate requests against an OpenAPI 3 spec YAML file. I had a look at express-openapi-validator, but my application does not use Express. My service is a Lambda function (Node.js) deployed in AWS.
I believe you can use openapi-request-validator in your Lambda function; its constructor arguments already map closely onto the OpenAPI spec YAML file. What you can do:
Include the OpenAPI spec YAML file in the zip file when deploying to AWS.
At runtime, load the YAML file and convert it into a JavaScript object using a library such as js-yaml.
Write a simple function to do the following:
Look up the JavaScript spec object by request path to find the parameters, requestBody, schemas, etc. required by OpenAPIRequestValidator.
Transform the incoming API Gateway proxy event object (I assume it's a proxy integration) into the format that validateRequest expects.
Then you will be able to construct a new OpenAPIRequestValidator and call its validateRequest method to validate the transformed request object. A rough sketch of this flow is below.
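Here is what that could look like (the spec file name, the path lookup, and the event-to-request mapping are assumptions; double-check the constructor options against the openapi-request-validator documentation):

const fs = require('fs');
const yaml = require('js-yaml'); // used to parse the bundled spec
const OpenAPIRequestValidator = require('openapi-request-validator').default;

// Load the OpenAPI 3 spec that was included in the deployment zip (assumed file name).
const spec = yaml.load(fs.readFileSync('./openapi.yaml', 'utf8'));

exports.handler = async (event) => {
  // Hypothetical lookup: find the operation matching the incoming path and method.
  const operation = spec.paths[event.resource][event.httpMethod.toLowerCase()];

  const validator = new OpenAPIRequestValidator({
    parameters: operation.parameters,
    requestBody: operation.requestBody,
    schemas: spec.components ? spec.components.schemas : undefined, // assumption about the option shape
  });

  // Transform the API Gateway proxy event into the request shape validateRequest expects.
  const errors = validator.validateRequest({
    headers: event.headers || {},
    body: event.body ? JSON.parse(event.body) : undefined,
    params: event.pathParameters || {},
    query: event.queryStringParameters || {},
  });

  if (errors) {
    return { statusCode: 400, body: JSON.stringify(errors) };
  }

  // ...continue with the actual business logic here.
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};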

What could be causing this mystery gcloud app deploy error? (Node.js, App Engine, Standard Environment)

ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build 6axxx...xxx9b status: FAILURE.
I'm trying to understand whether I can use a Node.js / Express server with Google Cloud App Engine, standard environment. My application started out from the Express generator. There is a single-page app and some calls back to the server via custom routes. Nothing terribly crazy.
I set up the repo and ran $ git clone https://gitlab.com/my_repo in the Cloud Shell. Test, test, and retest using the sandbox (local development server). The test URL is of the form https://8080-dot-xxxxxx-dot-devshell.appspot.com. Yippee.
The next step is a real deploy: I start with $ gcloud app create followed by $ gcloud app deploy (with a side trip to make sure authorization and billing are set up correctly). The website / server works as intended. The URL is of the form https://my-custom-XYZ-website.appspot.com/. Works great.
I can check the version in the Google Cloud Platform -> App Engine -> Versions console. The output there shows me:
Version: 20181120t103136
Status: Deployed
Traffic Allocation: 100%
Instances: 1
Runtime: Node10
Environment: Standard
Size: 748.8 KB
Deployed: (Date/Time by me)
So that's the background. The problem is that I can no longer update the content. I can easily push code via the terminal interface, but the command $ gcloud app deploy fails for any sort of update / new version. Sigh.
Log related info -- Build steps:
Fetcher = successful
Builder = status, Step Failed
Builder Arguments
--name=us.gcr.io/my-custom-XYZ-website/app-engine-tmp/app/ttl-2h:12xxxxxxa5a0 --directory=/workspace --destination=/srv --cache-repository=us.gcr.io/my-custom-XYZ-website/app-engine-tmp/build-cache/ttl-7d --cache --base=gcr.io/gae-runtimes/nodejs10:nodejs10_10_13_0_20181111_RC00
Directory /workspace/
"builder": Permission denied for "d71xxxxxxxxxxxxxxxxxx88b5" from request "/v2/my-custom-XYZ-website/app-engine-tmp/build-cache/ttl-7d/node-cache/manifests/d71xxxxxxxxxxxxxxxxxx88b5". : None
app.yaml:

# [START runtime]
runtime: nodejs10
# [END runtime]

handlers:
- url: /images
  static_dir: public/images

- url: /javascript
  static_dir: public/javascript

- url: /red-canoe
  static_dir: public/alt-content

- url: /stylesheets
  static_dir: public/stylesheets

- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
Any idea on how to identify and correct what's wrong here?
Note: I did create another simple test project in Node.js, and I can easily update the versions there. That test project had only a simple app.js with a simple Hello World response. Version #2 had "Hello There, World" (okay, so yeah, not the world's most robust test...). But the version update via $ gcloud app deploy worked just fine there. I did note that the version size of the Hello World app was around 245 KB or so.
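For reference, the kind of minimal Hello World entrypoint described in that note looks roughly like this (a sketch, not the exact test code); App Engine standard starts it via npm start and supplies the port in process.env.PORT:

// app.js - minimal sketch of a Hello World Express entrypoint for App Engine standard
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello, World');
});

// App Engine standard injects the port to listen on through the PORT environment variable.
const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});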
So, after a whole lot of testing I think I figured out what is happening here.
The node.js application actually utilizes three different Google related components / tools.
Google Firebase Authentication
Google Sheets API, V4
Google App Engine (Deployment)
When I created those components, the system prompted me to either create a new project or use an existing one. I chose the exact same project for all three tools. I believe the fact that these were all tied together broke the ability to perform updates to Google App Engine via gcloud app deploy.
The fix was to delete that combined project and create three separate projects:
MyProject_Sheets
MyProject_Firebase_Auth
MyProject_AppEngineDeploy
This works reliably. All done.
And for anybody who may be interested in the Firebase / Sheets API stuff I did here, check out this link. I built an online phone directory, protected by login via mobile phone, with contact data stored on a private Google sheet.

Add CloudFront Whitelist Headers to YAML template

I would like some assistance with CloudFront distributions and their YAML templates if anyone has experience here.
We use CloudFront as an internal CDN for media files. To get around a tainted-canvas error in the UI (when selecting a poster image for a video), I manually added some headers to the whitelist, and this resolved the issue.
However, this needs to be part of our automated deployments, and I cannot find anything concrete on how to replicate it via a YAML template.
From the CloudFormation documentation:
Specifies the headers that you want Amazon CloudFront to forward to the origin for this cache behavior (whitelisted headers). For the headers that you specify, Amazon CloudFront also caches separate versions of a specified object that is based on the header values in viewer requests.
Cookies:
  Cookies
Headers:
  - String
QueryString: Boolean
QueryStringCacheKeys:
  - String
When navigating through AWS template documentation, use the Type links to dig further into specifications.
As an aside, I prefer to use Terraform to configure these resources:
cache_behavior {
  forwarded_values {
    headers = ["Host"]
  }
}
