How to pass API Gateway authorizer context to an HTTP integration - Node.js

I have successfully implemented a Lambda authorizer for my AWS API Gateway, but I want to pass a few custom properties from it to my Node.js endpoint.
The output from my authorizer follows the format specified by AWS, as seen below.
{
  "principalId": "yyyyyyyy",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow|Deny",
        "Resource": "arn:aws:execute-api:<regionId>:<accountId>:<appId>/<stage>/<httpVerb>/[<resource>/<httpVerb>/[...]]"
      }
    ]
  },
  "context": {
    "company_id": "123",
    ...
  }
}
In my case, context contains a few parameters, like company_id, that I would like to pass along to my Node endpoint.
If I were to use a Lambda endpoint, I understand that this is done with a mapping template, something like this:
{
  "company_id": "$context.authorizer.company_id"
}
However, the Body Mapping Template is only available under Integration Request when Lambda is selected as the integration type, not when HTTP is selected.
In short, how do I pass company_id from my Lambda authorizer to my Node API?

Most of the credit goes to @Michael-sqlbot in the comments on my question, but I'll put the complete answer here in case someone else finds this question.
Authorizer Lambda
It has to return an object in this format, where context contains the parameters you want to forward to your endpoint, as specified in the question.
{
  "principalId": "yyyyyyyy",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [{
      "Action": "execute-api:Invoke",
      "Effect": "Allow|Deny",
      "Resource": "arn:aws:execute-api:<regionId>:<accountId>:<appId>/<stage>/<httpVerb>/[<resource>/<httpVerb>/[...]]"
    }]
  },
  "context": {
    "company_id": "123", <-- the part you want to forward
    ...
  }
}
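For reference, a minimal Node.js authorizer sketch that returns an object of this shape (the Allow decision and the company_id value are hard-coded for illustration; a real authorizer would derive them from the incoming token):

// Hypothetical Node.js authorizer returning the format above.
exports.handler = async (event) => {
  return {
    principalId: 'yyyyyyyy',
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Action: 'execute-api:Invoke',
        Effect: 'Allow',
        Resource: event.methodArn // ARN of the method being authorized
      }]
    },
    context: {
      company_id: '123' // the value to forward to the endpoint
    }
  };
};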
Method Request
Under Method Request / HTTP Request Headers, add the context property you want to forward:
Name: company_id
Required: optional
Caching: optional
Integration Request
And under Integration Request / HTTP Headers, add:
Name: company_id
Mapped from: context.authorizer.company_id
Caching: optional
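With both mappings in place, the value arrives at your HTTP backend as an ordinary request header. A minimal sketch of reading it in a Node endpoint (Express and the route path are assumptions, not part of the question):

const express = require('express');
const app = express();

app.get('/my-endpoint', (req, res) => {
  // Node exposes incoming header names in lowercase.
  const companyId = req.headers['company_id'];
  res.json({ company_id: companyId });
});

app.listen(3000);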

If you're using Lambda proxy integration, you can access the context from event.requestContext.authorizer.
So your company_id can be accessed using event.requestContext.authorizer.company_id.
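For example, a minimal proxy-integration handler (a sketch; the response shape is just for illustration):

exports.handler = async (event) => {
  // With Lambda proxy integration, the authorizer context is passed on the event.
  const companyId = event.requestContext.authorizer.company_id;

  return {
    statusCode: 200,
    body: JSON.stringify({ company_id: companyId })
  };
};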

If you're using Lambda proxy integration (at least with a Golang backend), you can access the values stored in the authorizer context without using a mapping template!
Remember to redeploy the API and wait a few minutes!
It's working for me.

Related

Google Cloud Function always receives / from API Gateway

Let's set up the basics:
I'm using Google API Gateway with different backends, like Google Cloud Functions.
First, I was parsing the request parameters with a switch statement on a header containing the original request URL. (Very messy, but working.)
So I decided to use an Express app for my cloud function instead.
But here is the thing: my functions always receive / from the gateway, which raises errors like CANNOT GET / when my path is https://mygateway/api/subservice/action
So my question is: can I change the handling of the Express app so it parses the header containing the original request URL instead of the default path?
Here is a part of my config:
{
  "swagger": "2.0",
  "info": {
    "title": "my API",
    "version": "1.0.0"
  },
  "basePath": "/api",
  "host": "mygateway.[REGION].gateway.dev",
  "schemes": [
    "https"
  ],
  "paths": {
    "/subservice/action": {
      "get": {
        "x-google-backend": {
          "address": "https://[REGION]-[ProjectID].cloudfunctions.net/[mycloudfunction]"
        },
        "security": [
          {
            "jwt_security": []
          }
        ],
I found something similar on this question that guided my search for the answer: possible duplicate here.
According to Google's explanation of path translation, when we use x-google-backend the backend only receives the basic request; we have to define the behaviour we expect with the path_translation parameter. In my case, I want to receive the same path, so I use APPEND_PATH_TO_ADDRESS.
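For example, the backend block from the config above becomes something like this (a sketch; only the path_translation line is new):

"x-google-backend": {
  "address": "https://[REGION]-[ProjectID].cloudfunctions.net/[mycloudfunction]",
  "path_translation": "APPEND_PATH_TO_ADDRESS"
}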

Azure API Management: Discriminate operations by both path and query parameters

I have a backend API (that implements ApiController) which I'd like to put behind an APIM API. ApiController allows us to discriminate between two different GET operations based on the query parameters that are passed in. When I attempt to define these endpoints in APIM, I get the following error:
The message suggests an endpoint is defined solely by the path and operation. But that seems to contradict documentation I found here, which suggests there's a way to differentiate between operations based on query parameters:
Required parameters across both path and query must have unique names.
(In OpenAPI a parameter name only needs to be unique within a
location, for example path, query, header. However, in API Management
we allow operations to be discriminated by both path and query
parameters (which OpenAPI doesn't support). That's why we require
parameter names to be unique within the entire URL template.)
I have an ApiController that defines two different Get operations, differing only by the query parameters. How do I represent that in my APIM API?
The problem comes from multiple operation objects with the same operationId, which is invalid Swagger. In my case, the title in the Swagger file did not match the name of the selected API; after I changed the title attribute of the doc tag to match the destination API, it worked.
Here is a similar SO thread you could refer to.
I got my answer from Azure support, sharing the info here:
APIM endpoints are defined by the path, method, and the name you assign to the operation. To differentiate between two GET endpoints to the same controller, differing only by query parameters, you need to hardcode required query parameters into the path. See the following two images:
In the latter image, the hardcoded query parameter is classified by the UI as a template parameter, but it still behaves like a regular query parameter. Query arguments defined in this way:
Are required
Can appear anywhere in a request's list of query arguments
Are not case-sensitive
Are listed as a "Request Parameter" alongside all other path parameters and query arguments in the developer portal
Edit:
There's a typo in the screenshots. The URLs are case-sensitive, and the casing of "blah" was different in each case. Here's what the OpenAPI specification looks like when the casing is consistent. The overloaded path (with the query parameter hardcoded into the path template) appears in a section called x-ms-paths:
{
  "swagger": "2.0",
  "info": {
    "title": "Echo API",
    "version": "1.0"
  },
  "host": "<hostUrl>",
  "basePath": "/echo",
  "schemes": ["https"],
  "securityDefinitions": {
    "apiKeyHeader": {
      "type": "apiKey",
      "name": "Ocp-Apim-Subscription-Key",
      "in": "header"
    },
    "apiKeyQuery": {
      "type": "apiKey",
      "name": "subscription-key",
      "in": "query"
    }
  },
  "security": [{
    "apiKeyHeader": []
  }, {
    "apiKeyQuery": []
  }],
  "paths": {
    "/Blah": {
      "get": {
        "operationId": "blah",
        "summary": "Blah",
        "responses": {}
      }
    }
  },
  "tags": [],
  "x-ms-paths": {
    "/Blah?alpha={alpha}": {
      "get": {
        "operationId": "blah2",
        "summary": "Blah2",
        "parameters": [{
          "name": "alpha",
          "in": "query",
          "required": true,
          "type": "string"
        }],
        "responses": {}
      }
    }
  }
}

Micronaut AWS API Gateway Authorizer JSON output issue

I've written a simple Lambda function in Micronaut/Groovy to return Allow/Deny policies as an AWS API Gateway authorizer. When it is used as the API Gateway authorizer, the JSON cannot be parsed:
Execution failed due to configuration error: Could not parse policy
When testing locally, the response has the correct property case in the JSON, e.g.:
{
  "principalId": "user",
  "PolicyDocument": {
    "Context": {
      "stringKey": "1551172564541"
    },
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
      }
    ]
  }
}
When this is run in AWS, the JSON response has the property names all in lowercase:
{
  "principalId": "user",
  "policyDocument": {
    "context": {
      "stringKey": "1551172664327"
    },
    "version": "2012-10-17",
    "statement": [
      {
        "resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/",
        "action": "execute-api:Invoke",
        "effect": "Allow"
      }
    ]
  }
}
I'm not sure if the casing is the issue, but I cannot see what else might be wrong (I've tried many variations of the output).
I've tried various Jackson annotations (@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class), etc.) but they do not seem to have an effect on the output in AWS.
Any idea how to sort this? Thanks.
Example code (trying to get the output to look like the example above).
Running the example locally using:
runtime "io.micronaut:micronaut-function-web"
runtime "io.micronaut:micronaut-http-server-netty"
Lambda function handler:
AuthResponse sessionAuth(APIGatewayProxyRequestEvent event) {
    AuthResponse authResponse = new AuthResponse()
    authResponse.principalId = 'user'
    authResponse.policyDocument = new PolicyDocument()
    authResponse.policyDocument.version = "2012-10-17"
    authResponse.policyDocument.setStatement([new session.auth.Statement(
        Effect: Statement.Effect.Allow,
        Action: "execute-api:Invoke",
        Resource: "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
    )])
    return authResponse
}
AuthResponse looks like:
@CompileStatic
class AuthResponse {
    String principalId
    PolicyDocument policyDocument
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class PolicyDocument {
    String Version
    List<Statement> Statement = []
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class Statement {
    String Action
    String Effect
    String Resource
}
It looks like you cannot rely on the AWS Lambda Java serializer not to change your JSON response if you are relying on some kind of annotation or mapper. If you want the response to be untouched, you'll need to use the raw OutputStream type of handler.
See the end of this AWS doc: Handler Input/Output Types (Java).

Amazon S3 waitFor() inside AWS Lambda

I'm having an issue when calling the S3.waitFor() function from inside a Lambda function (Serverless Node.js). I'm trying to asynchronously write a file into Amazon S3 using S3.putObject() from one REST API, and poll for the result file from another REST API using S3.waitFor() to see if the write is finished.
Please see the following snippet:
...
S3.waitFor('objectExists', {
  Bucket: bucketName,
  Key: fileName,
  $waiter: {
    maxAttempts: 5,
    delay: 3
  }
}, (error, data) => {
  if (error) {
    console.log("error:" + JSON.stringify(error))
  } else {
    console.log("Success")
  }
});
...
Given a valid bucketName and an invalid fileName, when the code runs in my local test script it returns an error after 15 seconds (3 seconds x 5 retries) and produces the following result:
error: {
  "message": "Resource is not in the state objectExists",
  "code": "ResourceNotReady",
  "region": null,
  "time": "2018-08-03T06:08:12.276Z",
  "requestId": "AD621033DCEA7670",
  "extendedRequestId": "JNkxddWX3IZfauJJ63SgVwyv5nShQ+Mworb8pgCmb1f/cQbTu3+52aFuEi8XGro72mJ4ik6ZMGA=",
  "retryable": true,
  "statusCode": 404,
  "retryDelay": 3000
}
Meanwhile, when it runs inside the AWS Lambda function, it returns the result immediately, as follows:
error: {
  "message": "Resource is not in the state objectExists",
  "code": "ResourceNotReady",
  "region": null,
  "time": "2018-08-03T05:49:43.178Z",
  "requestId": "E34D731777472663",
  "extendedRequestId": "ONMGnQkd14gvCfE/FWk54uYRG6Uas/hvV6OYeiax5BTOCVwbxGGvmHxMlOHuHPzxL5gZOahPUGM=",
  "retryable": false,
  "statusCode": 403,
  "retryDelay": 3000
}
As you can see, the retryable and statusCode values differ between the two.
On Lambda, it seems that it always gets statusCode 403 when the file doesn't exist, while locally everything works as expected (retried 5 times every 3 seconds and received statusCode 404).
I wonder if it has anything to do with the IAM role. Here are my IAM role statements inside my serverless.yml:
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "logs:CreateLogGroup"
      - "logs:CreateLogStream"
      - "logs:PutLogEvents"
      - "ec2:CreateNetworkInterface"
      - "ec2:DescribeNetworkInterfaces"
      - "ec2:DeleteNetworkInterface"
      - "sns:Publish"
      - "sns:Subscribe"
      - "s3:*"
    Resource: "*"
How do I make it work from the Lambda function?
Thank you in advance!
It turned out that the key is how you set the IAM role for the bucket and all the objects under it.
Based on the AWS docs here, S3.waitFor() relies on the underlying S3.headObject():
Waits for the objectExists state by periodically calling the underlying S3.headObject() operation every 5 seconds (at most 20 times).
Meanwhile, S3.headObject() itself relies on the HEAD Object API, which has the following rule, as stated in the AWS docs here:
You need the s3:GetObject permission for this operation. For more information, go to Specifying Permissions in a Policy in the Amazon Simple Storage Service Developer Guide. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
This means I need to add the s3:ListBucket action on the bucket containing the objects in order to get a 404 response when an object doesn't exist.
Therefore, I've configured the CloudFormation AWS::IAM::Policy resource as below, where I added the s3:Get* and s3:List* actions specifically on the bucket itself (i.e. S3FileStorageBucket).
"IamPolicyLambdaExecution": {
"Type": "AWS::IAM::Policy",
"DependsOn": [
"IamRoleLambdaExecution",
"S3FileStorageBucket"
],
"Properties": {
"PolicyName": { "Fn::Join": ["-", ["Live-RolePolicy", { "Ref": "environment"}]]},
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3FileStorageBucket"
}
]
]
}
},
{
"Effect":"Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3FileStorageBucket"
},
"/*"
]
]
}
},
...
Now I'm able to use S3.waitFor() to poll for a file/object under the bucket with only a single API call, getting the result only when it's ready, or an error when the resource is still not ready after the specified timeout.
That way, the client implementation is much simpler, as it doesn't have to implement the polling itself; see the sketch below.
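For instance, with the permissions above in place, the polling call can be wrapped in a promise (a sketch; the bucket and key names are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

function waitForFile(bucketName, fileName) {
  // Resolves once the object exists; rejects with ResourceNotReady
  // after maxAttempts * delay seconds if it never appears.
  return s3.waitFor('objectExists', {
    Bucket: bucketName,
    Key: fileName,
    $waiter: { maxAttempts: 5, delay: 3 }
  }).promise();
}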
Hope someone finds it useful. Thank you.

KeyConditionExpression error and empty error response

I'm using Lambda as my backend and I'm performing all the DynamoDB operations from it.
I have a user table, Users, and I want to query it via its hash key, Username, using the KeyConditionExpression statement on my params variable, but I get the following error:
There were 2 validation errors:\n* MissingRequiredParameter: Missing required key 'KeyConditions' in params\n* UnexpectedParameter: Unexpected key 'KeyConditionExpression' found in params
So yeah, I tried the following legacy statement:
var userQuery = {
  TableName: "Users",
  KeyConditions: {
    Username: {
      ComparisonOperator: 'EQ',
      AttributeValueList: [{ S: "some_username" }]
    }
  }
};
For some reason, I get empty errors in the query callback, which looks like this:
dynamo.query(userQuery, function(err, data) {
  if (err) console.log("error " + JSON.stringify(err, null, 2));
  else console.log("pass " + JSON.stringify(data, null, 2));
});
I've tried literally everything and have gotten to the point of desperation...
I can't seem to query any table, but I can scan and use putItem with no problem. My policy includes the query action as well.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "my_Stmt_num",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "",
      "Resource": "*",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow"
    }
  ]
}
In case it's relevant, at the top of my handler JS file I'm getting a reference to DynamoDB like this:
var doc = require('dynamodb-doc');
var dynamo = new doc.DynamoDB();
My whole application is 'new', meaning nothing prior to February 2015 exists, so I don't see any point in using legacy APIs, as the docs say.
It sounds like the AWS SDK associated with your document client may be out of date and may not support the new KeyConditionExpression feature. Could you please try re-installing your AWS SDK and the document SDK? Please also attach the versions you are installing if you continue to have issues after re-installing.
The previous DynamoDB Document SDK was deprecated; the new client from the standard JavaScript SDK should be used from now on:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html
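With the DocumentClient, the query from the question would look something like this (a sketch using the table and key names from the question):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const userQuery = {
  TableName: 'Users',
  KeyConditionExpression: 'Username = :u',
  // The DocumentClient takes plain values; no { S: ... } type wrappers needed.
  ExpressionAttributeValues: { ':u': 'some_username' }
};

docClient.query(userQuery, (err, data) => {
  if (err) console.log('error ' + JSON.stringify(err, null, 2));
  else console.log('pass ' + JSON.stringify(data, null, 2));
});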
