Amazon S3 waitFor() inside AWS Lambda - node.js

I'm having an issue when calling the S3.waitFor() function from inside a Lambda function (Serverless Node.js). I'm trying to asynchronously write a file to Amazon S3 using S3.putObject() from one REST API, and to poll for the result file from another REST API using S3.waitFor(), to see whether the write is ready/finished.
Please see the following snippet:
...
S3.waitFor('objectExists', {
  Bucket: bucketName,
  Key: fileName,
  $waiter: {
    maxAttempts: 5,
    delay: 3
  }
}, (error, data) => {
  if (error) {
    console.log("error:" + JSON.stringify(error))
  } else {
    console.log("Success")
  }
});
...
Given a valid bucketName and an invalid fileName, when the code runs in my local test script it returns an error after 15 seconds (3 seconds x 5 retries) with the following result:
error: {
  "message": "Resource is not in the state objectExists",
  "code": "ResourceNotReady",
  "region": null,
  "time": "2018-08-03T06:08:12.276Z",
  "requestId": "AD621033DCEA7670",
  "extendedRequestId": "JNkxddWX3IZfauJJ63SgVwyv5nShQ+Mworb8pgCmb1f/cQbTu3+52aFuEi8XGro72mJ4ik6ZMGA=",
  "retryable": true,
  "statusCode": 404,
  "retryDelay": 3000
}
Meanwhile, when it runs inside the AWS Lambda function, it returns an error immediately, as follows:
error: {
  "message": "Resource is not in the state objectExists",
  "code": "ResourceNotReady",
  "region": null,
  "time": "2018-08-03T05:49:43.178Z",
  "requestId": "E34D731777472663",
  "extendedRequestId": "ONMGnQkd14gvCfE/FWk54uYRG6Uas/hvV6OYeiax5BTOCVwbxGGvmHxMlOHuHPzxL5gZOahPUGM=",
  "retryable": false,
  "statusCode": 403,
  "retryDelay": 3000
}
As you can see, the retryable and statusCode values differ between the two.
On Lambda it always gets statusCode 403 when the file doesn't exist, while locally everything works as expected (it retries 5 times every 3 seconds and receives statusCode 404).
I wonder if it has anything to do with the IAM role. Here are my IAM role statements in my serverless.yml:
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "logs:CreateLogGroup"
      - "logs:CreateLogStream"
      - "logs:PutLogEvents"
      - "ec2:CreateNetworkInterface"
      - "ec2:DescribeNetworkInterfaces"
      - "ec2:DeleteNetworkInterface"
      - "sns:Publish"
      - "sns:Subscribe"
      - "s3:*"
    Resource: "*"
How can I make it work from the Lambda function?
Thank you in advance!

It turned out that the key is how you set the IAM role for the bucket and the objects under it.
Based on the AWS docs here, S3.waitFor() relies on the underlying S3.headObject():
Waits for the objectExists state by periodically calling the underlying S3.headObject() operation every 5 seconds (at most 20 times).
Meanwhile, S3.headObject() itself relies on the HEAD Object API, which has the following rule, as stated in the AWS docs here:
You need the s3:GetObject permission for this operation. For more information, go to Specifying Permissions in a Policy in the Amazon Simple Storage Service Developer Guide. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
This means I need to add the s3:ListBucket action on the bucket resource containing the objects, in order to get a 404 response when an object doesn't exist.
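Before granting the extra permission, you can confirm which case you are hitting by calling the underlying operation directly and inspecting the error. A minimal sketch, assuming aws-sdk v2 and hypothetical bucket/key names:

const AWS = require('aws-sdk');
const S3 = new AWS.S3();

// headObject is what waitFor('objectExists') calls under the hood:
// 404 (NotFound) means headObject is allowed but the key is absent (retryable);
// 403 (Forbidden) usually means s3:ListBucket is missing on the bucket itself.
S3.headObject({ Bucket: 'my-file-storage-bucket', Key: 'some/missing/file.txt' }, (error, data) => {
  if (error) {
    console.log(error.statusCode, error.code);
  } else {
    console.log('object exists, size:', data.ContentLength);
  }
});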
Therefore, I've configured the CloudFormation AWS::IAM::Policy resource as below, where I added the s3:Get* and s3:List* actions on the bucket itself (i.e. S3FileStorageBucket):
"IamPolicyLambdaExecution": {
"Type": "AWS::IAM::Policy",
"DependsOn": [
"IamRoleLambdaExecution",
"S3FileStorageBucket"
],
"Properties": {
"PolicyName": { "Fn::Join": ["-", ["Live-RolePolicy", { "Ref": "environment"}]]},
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3FileStorageBucket"
}
]
]
}
},
{
"Effect":"Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3FileStorageBucket"
},
"/*"
]
]
}
},
...
Now I can use S3.waitFor() to poll for a file/object under the bucket with only a single API call, and either get the result when it's ready or receive an error when the resource is still not ready after the specified timeout.
That way the client implementation is much simpler, as it doesn't have to implement the polling itself.
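For reference, with the permissions above in place, the polling endpoint can be reduced to something like the following sketch (assuming aws-sdk v2; bucketName and fileName stand in for whatever the REST API receives):

const AWS = require('aws-sdk');
const S3 = new AWS.S3();

// Resolves once the object exists; rejects with ResourceNotReady
// after maxAttempts x delay seconds otherwise.
function waitForFile(bucketName, fileName) {
  return S3.waitFor('objectExists', {
    Bucket: bucketName,
    Key: fileName,
    $waiter: { maxAttempts: 5, delay: 3 }
  }).promise();
}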
Hope someone finds it useful. Thank you.

Related

AWS SQS SendMessage isn't working with serverless-offline-sqs

I'm working with Serverless, porting two functions: one (call it generator) that is currently a long-running Node process, and another (call it checker) triggered by cron, so that both become Lambdas, checker triggered by a schedule and generator by receiving an SQS notification. I'm also a big fan of being able to do local dev, so I added serverless-offline and serverless-offline-sqs (backed by ElasticMQ).
I ported the generator, and it deploys and runs just fine locally. Here's the function as defined in serverless.yml:
reportGenerator:
  handler: src/reportGenerator.handleReportRequest
  events:
    - sqs:
        arn:
          Fn::GetAtt:
            - ReportRequestQueue
            - Arn
I can use the AWS CLI to send messages through ElasticMQ (AWS account number obfuscated):
$ aws sqs send-message --queue-url https://sqs.us-west-1.amazonaws.com/000000000000/local-report-request --message-body file://./test/reportGenerator/simple_payload.json --endpoint-url http://localhost:9324
{
"MD5OfMessageBody": "267bd9129a76e3f48c26903664824c13",
"MessageId": "12034cd8-3783-403a-92eb-a5935c8759ae"
}
$
And the message is received fine; the generator Lambda triggers:
{
  "requestId": "ckokpja2e0001nbyocufzdrxk",
  "function": "src/reportGenerator.js::exports:14",
  "level": "info",
  "message": "Found 0 waiting report request(s)"
}
offline: (λ: reportGenerator) RequestId: ckokpja2e0001nbyocufzdrxk Duration: 214.53 ms Billed Duration: 215 ms
Now I want to do that SQS send programmatically from the second function, which is pretty much the only real change to that existing (non-Lambda) function. It also deploys/launches just fine, but the aws-sdk call to send the same message isn't working.
Here's the function definition in serverless.yml. Note that I commented out the normal cron schedules, since those aren't supported by serverless-offline, and just used a 10-minute rate for testing:
reportChecker:
  handler: src/reportScheduler.check
  environment:
    REPORT_REQUEST_QUEUE_URL: "https://sqs.us-west-1.amazonaws.com/${self:custom.awsAccountId}/${opt:stage, self:custom.defaultStage}-report-request"
  events:
    # note that cron does not work for offline. Comment these out, and uncomment the rate to test locally
    # - schedule:
    #     rate: cron(0 11 ? * 3 *)
    #     input:
    #       reportType: weekly
    # - schedule:
    #     rate: cron(0 11 2 * ? *)
    #     input:
    #       reportType: monthly
    - schedule:
        rate: rate(10 minutes)
        input:
          reportType: weekly
Here's what I added to the rest of the function as the Node version of the above command line:
const AWS = require('aws-sdk');
AWS.config.update({region: process.env.AWS_REGION});
...
try {
  const sqs = new AWS.SQS({endpoint: 'http://localhost:9324'});
  logger.debug('SQS info', {queueUrl: process.env.REPORT_REQUEST_QUEUE_URL, endpoint: sqs.endpoint});
  await sqs.sendMessage({
    QueueUrl: process.env.REPORT_REQUEST_QUEUE_URL,
    MessageBody: `{"reportType":${checkType}}`
  })
  .promise();
}
catch (err) {
  logger.error('SQS failed to send', {error: err});
}
The output from those logger statements is
{
  "requestId": "ckokpulxw0002nbyo19sj87g2",
  "function": "src/reportScheduler.js::exports:100",
  "data": {
    "queueUrl": "https://sqs.us-west-1.amazonaws.com/000000000000/local-report-request",
    "endpoint": {
      "protocol": "http:",
      "host": "localhost:9324",
      "port": 9324,
      "hostname": "localhost",
      "pathname": "/",
      "path": "/",
      "href": "http://localhost:9324/"
    }
  },
  "level": "debug",
  "message": "SQS info"
}
{
  "requestId": "ckokpulxw0002nbyo19sj87g2",
  "function": "src/reportScheduler.js::exports:109",
  "data": {
    "error": {
      "message": "The specified queue does not exist for this wsdl version.",
      "code": "AWS.SimpleQueueService.NonExistentQueue",
      "time": "2021-05-12T00:20:00.497Z",
      "requestId": "e55124c3-248e-5afd-acce-7dd605fe1fe5",
      "statusCode": 400,
      "retryable": false,
      "retryDelay": 95.52839314049268
    }
  },
  "level": "error",
  "message": "SQS failed to send"
}
offline: (λ: reportChecker) RequestId: ckokpulxw0002nbyo19sj87g2 Duration: 477.95 ms Billed Duration: 478 ms
It sure looks to me like the code is using all the same values as the command line, but it just isn't working. Hoping someone else can see something obvious here.
Thanks.
Shouldn't the env variable REPORT_REQUEST_QUEUE_URL locally point to ElasticMQ? E.g. http://0.0.0.0:9324/queue/local-report-request.
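In other words, when running offline both the endpoint and the QueueUrl have to point at ElasticMQ; the amazonaws.com-style URL is what ElasticMQ rejects with NonExistentQueue. A sketch of one way to select them, assuming serverless-offline's IS_OFFLINE environment variable and the queue name from the question:

const AWS = require('aws-sdk');

function buildSqsClient() {
  if (process.env.IS_OFFLINE) {
    // serverless-offline sets IS_OFFLINE; use ElasticMQ's path-style queue URL
    return {
      sqs: new AWS.SQS({ endpoint: 'http://0.0.0.0:9324' }),
      queueUrl: 'http://0.0.0.0:9324/queue/local-report-request',
    };
  }
  // Deployed: real queue URL from the environment, default endpoint
  return {
    sqs: new AWS.SQS(),
    queueUrl: process.env.REPORT_REQUEST_QUEUE_URL,
  };
}

// Usage inside the handler:
// const { sqs, queueUrl } = buildSqsClient();
// await sqs.sendMessage({ QueueUrl: queueUrl, MessageBody: JSON.stringify({ reportType: checkType }) }).promise();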

S3 Bucket Policy to Deny access to all except an IAM Role and InstanceProfile

I have an EMR cluster with steps that write and delete objects on an S3 bucket. I have been trying to create a bucket policy in the S3 bucket that denies delete access to all principals except the EMR role and the instance profile. Below is my policy.
{
  "Version": "2008-10-17",
  "Id": "ExamplePolicyId123458",
  "Statement": [
    {
      "Sid": "ExampleStmtSid12345678",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteObject*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AROAI3FK4OGNWXLHB7IXM:*",  # EMR Role ID
            "AROAISVF3UYNPH33RYIZ6:*",  # Instance Profile Role ID
            "AIPAIDBGE7J475ON6BAEU"     # Instance Profile ID
          ]
        }
      }
    }
  ]
}
As I found elsewhere, it is not possible to use wildcard entries to specify every role session in the NotPrincipal element, so I used an aws:userId condition to match instead.
Whenever I run the EMR step without the bucket policy, the step completes successfully. But when I add the policy to the bucket and re-run, the step fails with the following error.
diagnostics: User class threw exception:
org.apache.hadoop.fs.s3a.AWSS3IOException: delete on s3://vr-dump/metadata/test:
com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted
(Service: null; Status Code: 200; Error Code: null; Request ID: 9FC4797479021CEE; S3 Extended Request ID: QWit1wER1s70BJb90H/0zLu4yW5oI5M4Je5aK8STjCYkkhZNVWDAyUlS4uHW5uXYIdWo27nHTak=), S3 Extended Request ID: QWit1wER1s70BJb90H/0zLu4yW5oI5M4Je5aK8STjCYkkhZNVWDAyUlS4uHW5uXYIdWo27nHTak=: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 9FC4797479021CEE; S3 Extended Request ID: QWit1wER1s70BJb90H/0zLu4yW5oI5M4Je5aK8STjCYkkhZNVWDAyUlS4uHW5uXYIdWo27nHTak=)
What is the problem here? Is this related to EMR Spark Configuration or the bucket policy?
Assuming these role IDs are correct (they start with AROA, so they have a valid format), I believe you also need the AWS account number in the policy. For example:
{
  "Version": "2008-10-17",
  "Id": "ExamplePolicyId123458",
  "Statement": [
    {
      "Sid": "ExampleStmtSid12345678",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteObject*"
      ],
      "Resource": [
        "arn:aws:s3:::vr-dump",
        "arn:aws:s3:::vr-dump/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AROAI3FK4OGNWXLHB7IXM:*",  # EMR Role ID
            "AROAISVF3UYNPH33RYIZ6:*",  # Instance Profile Role ID
            "AIPAIDBGE7J475ON6BAEU",    # Instance Profile ID
            "1234567890"                # Your AWS account number
          ]
        }
      }
    }
  ]
}
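If you need to look up those unique IDs, the IAM API returns them. A small sketch using the aws-sdk for Node (the role and instance profile names here are hypothetical):

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

async function printUserIds() {
  // Role.RoleId is the AROA... value; append ":*" to cover every role session
  const { Role } = await iam.getRole({ RoleName: 'EMR_DefaultRole' }).promise();
  console.log(`${Role.RoleId}:*`);

  // InstanceProfile.InstanceProfileId is the AIPA... value
  const { InstanceProfile } = await iam
    .getInstanceProfile({ InstanceProfileName: 'EMR_EC2_DefaultRole' })
    .promise();
  console.log(InstanceProfile.InstanceProfileId);
}

printUserIds().catch(console.error);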

MalformedResponse: Webhook error (206) whilst a seemingly valid webhook response - Actions on Google Dialogflow app in Node.js

Problem
I am developing an Actions on Google app (intended for the Google Home), using the Actions on Google library for Node.js, hosted on Firebase Cloud Functions, via Dialogflow. I often, but irregularly (and it's hard to replicate), encounter an error that forces Actions on Google (in the simulator or on the Google Home itself) to shut down my app. I route every Dialogflow intent (including fallbacks) to my webhook fulfilment, and, based on the log files, the webhook responds quickly (within ~200 ms) and with valid responses (inspecting the JSON responses). However, Actions on Google seems to reject the response and triggers the Dialogflow default text response. My biggest concern is that it happens at different stages in the conversation, sometimes already at the Welcome event. It is also noticeable that, even though the fulfilment responds in milliseconds (~200), Actions on Google / Dialogflow takes its time and, I believe, hits a timeout. Below are my explorations into the potential cause. But frankly, I am out of ideas.
-- Edit --
The service seems to run better now; I haven't experienced the error since. I changed the code to call admin.firestore() less often, by using a global database reference and passing it through. I had a hunch that the HTTPS functions were potentially being called simultaneously, which might somehow have caused a malformed response.
const database = admin.firestore();
// code
if (!process.env.FUNCTION_NAME || process.env.FUNCTION_NAME === 'functionname') {
  exports.functionname = functions.https.onRequest((req, res) => {
    require('./functions').functionname(req, res, database);
  });
}
I also found a flaw in my intent handling, which caused no intent to be matched. I've restructured the Dialogflow intents to make this no-match less likely (I basically made it a two-step question to determine the user's intention). However, since the webhook did respond correctly, I don't believe this was the problem. Still, I find it an odd error, so if anyone has more pointers, please share!
-- end edit --
Logs
This is a response from the webhook as seen in the Google Functions log, indicating a sent response to the Dialogflow app:
Function execution took 289 ms, finished with status code: 200
Response {
  "status": 200,
  "headers": {
    "content-type": "application/json;charset=utf-8"
  },
  "body": {
    "payload": {
      "google": {
        "expectUserResponse": true,
        "richResponse": {
          "items": [{
            "simpleResponse": {
              "textToSpeech": "Welcome back! Which Widget do you want to work with?"
            }
          }]
        }
      }
    },
    "outputContexts": [{
      "name": "****anonymized****/contexts/widget",
      "lifespanCount": 1
    },
    {
      "name": "***anonymized****/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{\"started\":1552071670}"
      }
    }]
  }
}
However, this is what the Actions on Google console shows. The textToSpeech response is the default text response that Dialogflow falls back to when fulfilment fails (as suggested by https://medium.com/google-developers/debugging-common-actions-on-google-errors-7c8527378d27):
{
  "conversationToken": "[]",
  "finalResponse": {
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Sorry! I cannot access my online service. Please try again!"
          }
        }
      ]
    }
  },
  "responseMetadata": {
    "status": {
      "code": 14,
      "message": "Webhook error (206)"
    },
    "queryMatchInfo": {
      "queryMatched": true,
      "intent": "e0bf4b96-9440-4545-a8a6-d0915cacd34f"
    }
  }
}
The Stackdriver logs also indicate the latter case, where the default response from Dialogflow is received.
Thoughts
I have tried to replicate this error in the simulator, on my phone and on a Google Home. It occurs at different moments, though it seems to occur more frequently than before. My three hunches are the following:
Overload. My Google Functions project hosts 8 functions, including the Dialogflow app. Perhaps if the other functions are called frequently at a given moment, something goes wrong with the Dialogflow app handling the HTTPS requests from Dialogflow (the Google Assistant). However, I do not see how this could happen. I could (sometimes) replicate the problem by invoking the app on two devices with two different accounts, but since it also happens with one account, I don't see how this can be the only problem. I've followed this tip to reduce overload (https://stackoverflow.com/a/47985480/7053198), but that did not seem to help my issue. Could the upscaling of Google Functions be a problem here?
Promise hell. I am using a number of promises (or rather, callbacks) to communicate with a Firestore database to retrieve user data. I can invoke these intents quite rapidly, and they resolve nicely, up to a point where they don't. Below is a snippet of one such intent.
Size of my app. The number of intents has grown quite large, and there is a lot of communication between the Google Functions and the Firestore database. I just don't know how this might lead to Dialogflow rejecting a seemingly correct response.
Intent with promises snippet:
app.intent(I.WIDGETTEST, conv => {
  let database = admin.firestore();
  let offline = [];
  return database.collection('users').doc(conv.user.storage.userId).collection('widgets').get()
    .then((snapshot) => {
      const promises = [];
      snapshot.forEach(doc => {
        if (doc.data().active) {
          if ((moment().unix() - doc.data().server.seen) > 360) {
            offline.push(doc.data().name);
          }
          promises.push(doc.ref.update({'todo.test': true}));
        }
      });
      return Promise.all(promises);
    })
    .then(() => {
      conv.contexts.set(C.WIDGET, 1);
      return conv.ask("Testing in progress. Which Widget do you want to work with now?");
    })
    .catch((err) => {
      console.log(err);
      throw err;
    });
});
Problem Example Log
For good measure, here are three log entries from the Stackdriver side of things:
{
"textPayload": "Sending request with post data: {\"user\":{\"userId\":\"***anonymized***\",\"locale\":\"en-US\",\"lastSeen\":\"2019-03-08T18:54:47Z\",\"userStorage\":\"{\\\"data\\\":{\\\"userId\\\":\\\"***anonymized***\\\"}}\",\"idToken\":\"eyJhbGciOiJSUzI1NiIsImtpZCI6ImNmMDIyYTQ5ZTk3ODYxNDhhZDBlMzc5Y2M4NTQ4NDRlMzZjM2VkYzEiLCJ0eXAiOiJKV1QifQ.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJuYmYiOjE1NTIwNzEzNjAsImF1ZCI6Ijk4NzgxNjU4MTIzMC1ibWVtMm5wcTRsNnE3c2I5MnVpM3BkdGRhMWFmajJvNy5hcHBzLmdvb2dsZXVzZXJjb250ZW50LmNvbSIsInN1YiI6IjEwMjk3OTk3NzIyNTM3NjY4MDEyOSIsImVtYWlsIjoiZGF2aWR2ZXJ3ZWlqQGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJuYW1lIjoiRCBWZXJ3ZWlqIiwicGljdHVyZSI6Imh0dHBzOi8vbGg2Lmdvb2dsZXVzZXJjb250ZW50LmNvbS8taldtOFd3eE5iS3MvQUFBQUFBQUFBQUkvQUFBQUFBQUFyLUUvNEE2UmJiYVNwem8vczk2LWMvcGhvdG8uanBnIiwiZ2l2ZW5fbmFtZSI6IkQiLCJmYW1pbHlfbmFtZSI6IlZlcndlaWoiLCJpYXQiOjE1NTIwNzE2NjAsImV4cCI6MTU1MjA3NTI2MCwianRpIjoiYjE4MDYwMjc0YmE4MjJhYzFhYzc0MTYwZjI2YWM2MDk3MzBmZDY4ZSJ9.Y9G0qo0Gf28-noF7RYPhtfHRuA7Qo6bCBSuN56Y0AtgIXaQKZjnmYvABIt9u8WQ1qPWwQc3jOLyhfoXIk8j0zhcQ0M0oc7LjkBwVCgFnJHvUAiV5fGEqQa95pZyrZhYmHipTDdwk0UhJHFGJOXAHDPP6oBSHKC9h48jqUjVszz6iEy4frV0XIKIzRR2U2iY6OgJuxPsV0A7xNjvLXiMmwaRUVtlj9CPmiizd3G2PhqD5C54Fy2Qg5ch89qMOA10vNB5B4AX9pmAXHpmtIqFo7ljvAeGAj-pRuqyMllz2awAdvqqOFRERDYfm5Fyh7N0l1OhR2A2XRegsUIL1I1EVPQ\"},\"conversation\":{\"conversationId\":\"ABwppHH8PXibDZg8in1DjbP-caFy67Dtq025k_Uq2ofoPNXKtiPXrbJTmpGVUnVy-aY6H1MeZCFIpQ\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to ***anonymized appname***\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"}]}],\"requestType\":\"SIMULATOR\"}.",
"insertId": "120zsprg2nqfayb",
"resource": {
"type": "assistant_action",
"labels": {
"action_id": "actions.intent.MAIN",
"project_id": "***anonymized***",
"version_id": ""
}
},
"timestamp": "2019-03-08T19:01:00.243055001Z",
"severity": "DEBUG",
"labels": {
"channel": "preview",
"source": "AOG_REQUEST_RESPONSE",
"querystream": "GOOGLE_USER"
},
{
"textPayload": "Received response from agent with body: HTTP/1.1 200 OK\r\nServer: nginx/1.13.6\r\nDate: Fri, 08 Mar 2019 19:01:10 GMT\r\nContent-Type: application/json;charset=UTF-8\r\nContent-Length: 330\r\nX-Cloud-Trace-Context: 7761b69610701e0a7d18cbc69eef9bde/3001560133061614360;o=0\r\nGoogle-Actions-API-Version: 2\r\nAccess-Control-Allow-Credentials: true\r\nVia: 1.1 google\r\nAlt-Svc: clear\r\n\r\n{\"conversationToken\":\"[]\",\"finalResponse\":{\"richResponse\":{\"items\":[{\"simpleResponse\":{\"textToSpeech\":\"Sorry! I cannot access my online service. Please try again!\"}}]}},\"responseMetadata\":{\"status\":{\"code\":14,\"message\":\"Webhook error (206)\"},\"queryMatchInfo\":{\"queryMatched\":true,\"intent\":\"e0bf4b96-9440-4545-a8a6-d0915cacd34f\"}}}.",
"insertId": "120zsprg2nqfayc",
"resource": {
"type": "assistant_action",
"labels": {
"project_id": "***anonymized***",
"version_id": "",
"action_id": "actions.intent.MAIN"
}
},
"timestamp": "2019-03-08T19:01:10.376894030Z",
"severity": "DEBUG",
"labels": {
"channel": "preview",
"source": "AOG_REQUEST_RESPONSE",
"querystream": "GOOGLE_USER"
},
"logName": "projects/***anonymized***/logs/actions.googleapis.com%2Factions",
"trace": "projects/987816581230/traces/ABwppHH8PXibDZg8in1DjbP-caFy67Dtq025k_Uq2ofoPNXKtiPXrbJTmpGVUnVy-aY6H1MeZCFIpQ",
"receiveTimestamp": "2019-03-08T19:01:10.389139428Z"
},
{
"textPayload": "MalformedResponse: Webhook error (206)",
"insertId": "1d4bzl9g3lossug",
"resource": {
"type": "assistant_action",
"labels": {
"project_id": "***anonymized***",
"version_id": "",
"action_id": "actions.intent.MAIN"
}
},
"timestamp": "2019-03-08T19:01:10.377231474Z",
"severity": "ERROR",
"labels": {
"channel": "preview",
"source": "JSON_RESPONSE_VALIDATION",
"querystream": "GOOGLE_USER"
},
"logName": "projects/***anonymized***/logs/actions.googleapis.com%2Factions",
"trace": "projects/987816581230/traces/ABwppHH8PXibDZg8in1DjbP-caFy67Dtq025k_Uq2ofoPNXKtiPXrbJTmpGVUnVy-aY6H1MeZCFIpQ",
"receiveTimestamp": "2019-03-08T19:01:10.388395945Z"
}
Help!
Any help or guidance towards unraveling this issue is much appreciated. Let me know if you have experienced this before, have a potential solution, or would like to see more details of the code or logs. Many thanks!
I got the same error when I tried to modify one of my Google Assistant apps.
Everything was working well, but I suddenly got the same error as you, which I was not able to understand and which was not related to my new development.
{
  insertId: "102mhl8g1omvh70"
  labels: {
    channel: "preview"
    querystream: "GOOGLE_USER"
    source: "JSON_RESPONSE_VALIDATION"
  }
  logName: "projects/myprojectID/logs/actions.googleapis.com%2Factions"
  receiveTimestamp: "2019-04-22T16:56:11.508733115Z"
  resource: {
    labels: {
      action_id: "actions.intent.MAIN"
      project_id: "myprojectID"
      version_id: ""
    }
    type: "assistant_action"
  }
  severity: "ERROR"
  textPayload: "MalformedResponse: Webhook error (206)"
  timestamp: "2019-04-22T16:56:11.498787357Z"
  trace: "projects/168413137357/traces/ABwppHG8ckJJgXMT5Jedih2WUtGNZZc9i0BVG5S-CkxCT8mkhy7mDr8L9GPd9p_EvXIIlTz3SK2z16jBK8Id"
}
I tried to solve it by disabling the webhook and setting the payload in the Dialogflow intent, but I still got the error.
So I tried to roll back my development by re-uploading the old version of the intent, but that didn't work either and I still got the error.
Looking at my function logs, I saw this:
SyntaxError: Unexpected token : in JSON at position 6
    at Object.parse (native)
    at new User (/user_code/node_modules/actions-on-google/dist/service/actionssdk/conversation/user.js:75:43)
    at DialogflowConversation.Conversation (/user_code/node_modules/actions-on-google/dist/service/actionssdk/conversation/conversation.js:47:21)
    at DialogflowConversation (/user_code/node_modules/actions-on-google/dist/service/dialogflow/conv.js:36:9)
    at WebhookClient.conv (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:415:14)
    at welcome (/user_code/index.js:37:26)
    at WebhookClient.handleRequest (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:273:44)
    at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/user_code/index.js:247:11)
    at cloudFunction (/user_code/node_modules/firebase-functions/lib/providers/https.js:37:41)
    at /var/tmp/worker/worker.js:783:7
I didn't understand how the JSON from the Assistant could fail to parse; I was very surprised. I compared the JSON input I got before and after the bug, and saw that the userStorage entry was not well formed.
I got something like this:
"user":{"userStorage":"\"data\":}","lastSeen":"2019-04-22T16:22:10Z","locale":"Mylanguage","userId":"ABwppHGoZFMQwTpdqZaOYBaIh32-hXH2kUN_EPJglUXMl5MeH7QlLyB9O0bCgZ0SQ9L0B2Ks-JQ9rHfRoAh-"}
instead of this :
"user":{"userStorage":"{\"data\":{}}","lastSeen":"2019-04-21T17:55:21Z","locale":"Mylanguage","userId":"ABwppHGoZFMQwTpdqZaOYBaIh32-hXH2kUN_EPJglUXMl5MeH7QlLyB9O0bCgZ0SQ9L0B2Ks-JQ9rHfRoAh-"}
And this JSON will provoke an error when you try to get the conversation from the agent, like this:
let conv = agent.conv();
I still don't understand how this is even possible, but I have a trick to work around it:
I fix the JSON before getting the conversation.
In my app I don't need any userStorage, so I always initialise userStorage properly before getting the conversation, like this:
request.body.originalDetectIntentRequest.payload.user.userStorage = "{\"data\":{}}";
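For context, here is roughly where that line sits in a dialogflow-fulfillment webhook; a sketch under the assumption that your function is structured like the stack trace above suggests:

const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const payload = request.body.originalDetectIntentRequest
    && request.body.originalDetectIntentRequest.payload;

  // Workaround: reset the malformed userStorage before the library parses it
  if (payload && payload.user) {
    payload.user.userStorage = "{\"data\":{}}";
  }

  const agent = new WebhookClient({ request, response });
  // ... build the intent map and call agent.handleRequest(intentMap) as usual
});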
I am about to contact Dialogflow support to inform them of what I consider to be a bug.
I hope this answer helps you!

Micronaut AWS API Gateway Authorizer JSON output issue

I've written a simple Lambda function in Micronaut/Groovy to return Allow/Deny policies as an AWS API Gateway authorizer. When used as the API Gateway authorizer, the JSON cannot be parsed:
Execution failed due to configuration error: Could not parse policy
When testing locally, the response has the correct property case in the JSON, e.g.:
{
  "principalId": "user",
  "PolicyDocument": {
    "Context": {
      "stringKey": "1551172564541"
    },
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
      }
    ]
  }
}
When this runs in AWS, the JSON response has the properties all in lowercase:
{
  "principalId": "user",
  "policyDocument": {
    "context": {
      "stringKey": "1551172664327"
    },
    "version": "2012-10-17",
    "statement": [
      {
        "resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/",
        "action": "execute-api:Invoke",
        "effect": "Allow"
      }
    ]
  }
}
I'm not sure the case is the issue, but I cannot see what else it might be (I've tried many variations of the output).
I've tried various Jackson annotations (@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class), etc.) but they do not seem to have an effect on the output in AWS.
Any idea how to sort this? Thanks.
Example code (trying to get the output to look like the example above). Running the example locally using:
runtime "io.micronaut:micronaut-function-web"
runtime "io.micronaut:micronaut-http-server-netty"
Lambda function handler:
AuthResponse sessionAuth(APIGatewayProxyRequestEvent event) {
    AuthResponse authResponse = new AuthResponse()
    authResponse.principalId = 'user'
    authResponse.policyDocument = new PolicyDocument()
    authResponse.policyDocument.version = "2012-10-17"
    authResponse.policyDocument.setStatement([new session.auth.Statement(
        Effect: Statement.Effect.Allow,
        Action: "execute-api:Invoke",
        Resource: "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
    )])
    return authResponse
}
AuthResponse looks like:
@CompileStatic
class AuthResponse {
    String principalId
    PolicyDocument policyDocument
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class PolicyDocument {
    String Version
    List<Statement> Statement = []
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class Statement {
    String Action
    String Effect
    String Resource
}
It looks like you cannot rely on the AWS Lambda Java serializer not to change your JSON response if you are relying on some kind of annotation or mapper. If you want the response left untouched, you'll need to use the raw output stream type of handler.
See the end of this AWS doc: Handler Input/Output Types (Java).

How to pass API Gateway authorizer context to a HTTP integration

I have successfully implemented a Lambda authorizer for my AWS API Gateway, but I want to pass a few custom properties from it to my Node.js endpoint.
The output from my authorizer follows the format specified by AWS, as seen below.
{
  "principalId": "yyyyyyyy",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow|Deny",
        "Resource": "arn:aws:execute-api:<regionId>:<accountId>:<appId>/<stage>/<httpVerb>/[<resource>/<httpVerb>/[...]]"
      }
    ]
  },
  "context": {
    "company_id": "123",
    ...
  }
}
In my case, context contains a few parameters, like company_id, that I would like to pass along to my Node endpoint.
If I were to use a Lambda endpoint, I understand this would be done with a mapping template, something like this:
{
  "company_id": "$context.authorizer.company_id"
}
However, Body Mapping Template is only available under Integration Request if Lambda is selected as the integration type, not if HTTP is selected.
In short, how do I pass company_id from my Lambda authorizer to my Node API?
Most of the credit goes to @Michael-sqlbot in the comments to my question, but I'll put the complete answer here in case someone else finds this question.
Authorizer Lambda
It has to return an object in this format, where context contains the parameters you want to forward to your endpoint, as specified in the question.
{
  "principalId": "yyyyyyyy",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [{
      "Action": "execute-api:Invoke",
      "Effect": "Allow|Deny",
      "Resource": "arn:aws:execute-api:<regionId>:<accountId>:<appId>/<stage>/<httpVerb>/[<resource>/<httpVerb>/[...]]"
    }]
  },
  "context": {
    "company_id": "123", <-- The part you want to forward
    ...
  }
}
Method Request
Under Method Request / HTTP Request Headers, add the context property you want to forward:
Name: company_id
Required: optional
Caching: optional
Integration Request
And under Integration Request / HTTP Headers, add:
Name: company_id
Mapped from: context.authorizer.company_id
Caching: optional
If you're using lambda-proxy, you can access the context from event.requestContext.authorizer.
So your company_id can be accessed as event.requestContext.authorizer.company_id.
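For example, a minimal lambda-proxy handler in Node (a sketch; it assumes the authorizer above put company_id into its context):

exports.handler = async (event) => {
  // Values from the authorizer's "context" object arrive as strings
  const companyId = event.requestContext.authorizer.company_id;

  return {
    statusCode: 200,
    body: JSON.stringify({ company_id: companyId }),
  };
};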
If you're using lambda-proxy (at least with a Golang backend), you can access the values stored in the authorizer context without using a mapping template.
Remember to re-deploy the API and wait a minute!
It's working for me.
