I am using Lambda@Edge to redirect my sites with CloudFront.
I have attached my versioned Lambda ARN to my CloudFront cache behavior for all four events it supports.
When I hit my CloudFront endpoint it says:
502 ERROR
The request could not be satisfied.
The Lambda function returned invalid json: The json output must be an object type.
When I check my Lambda logs and invocation metrics, I don't see any hits at all.
What may be the reason behind this?
I tried my best to find out why my Lambda is not getting triggered.
There are some common "gotchas" with Lambda@Edge and CloudFront. You need to:
Publish a new version of your Lambda function
Update the CloudFront Lambda association to point to your new version, e.g. arn:aws:lambda:us-east-1:572007530218:function:gofaas-WebAuthFunction:45
Look for Lambda@Edge logs in the region of the requestor
This is different from the "normal" Lambda web console flow of saving a code change and jumping to the logs from the monitoring tab.
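For the first two steps, a minimal sketch with the AWS SDK for JavaScript (v2), assuming configured credentials; the function name is taken from the example ARN above:

const AWS = require('aws-sdk');
// Lambda@Edge functions must live in us-east-1
const lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.publishVersion({ FunctionName: 'gofaas-WebAuthFunction' }, (err, data) => {
    if (err) return console.error(err);
    // data.FunctionArn is the new versioned ARN to associate with the
    // CloudFront cache behavior; an unqualified or $LATEST ARN is rejected.
    console.log('New versioned ARN:', data.FunctionArn);
});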
I missed adding the region under the headers in my Lambda code.
Since Lambda@Edge runs at edge locations, the function executes in the AWS Region closest to the viewer and its logs are written there; adding the region as a response header dynamically tells you which regional CloudWatch log group to look in.
'x-lae-region': [ { key: 'x-lae-region', value: process.env.AWS_REGION } ]
const response = {
    status: '302',
    statusDescription: 'Found',
    headers: {
        location: [{
            key: 'Location',
            value: 'http://<destinationdomainname>/goto/hello.html',
        }],
        'x-lae-region': [{ key: 'x-lae-region', value: process.env.AWS_REGION }],
    },
};
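For context, a minimal sketch of a full handler around that response object, assuming a viewer-request trigger; returning anything other than a plain object is what produces the "must be an object type" 502 from the question:

exports.handler = (event, context, callback) => {
    const response = {
        // ... the response object shown above ...
    };
    // Lambda@Edge expects the response object itself, not a JSON string
    callback(null, response);
};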
Related
Pretty much as the title suggests, any request to the Vault returns { errors: [ '1 error occurred:\n\t* unsupported operation\n\n' ] }. I've checked whether it has anything to do with permissions, but changing the X-Vault-Token to something else returns "permission denied" instead. The code used to work, but it seems to have broken when I migrated from plain JS to TypeScript. I can't figure out what TypeScript would do differently, though, so I'm inclined to say it isn't TypeScript related.
I'm using a simple fetch:
// this url should be valid
// https://www.vaultproject.io/api/secret/pki#list-certificates
const url = "https://localhost:8200/v1/pki/certs";
const request = await fetch(url, {
    // note: "credentials" is a fetch option, not a header
    credentials: "include",
    headers: {
        "X-Vault-Token": token,
    },
    agent: tlsAgent,
});
const result = await request.json();
/*
this results in { errors: [ '1 error occurred:\n\t* unsupported operation\n\n' ] }
*/
I've googled the symptoms but found nothing relevant :(. Since this error is also not mentioned in Vault's documentation, it is possible that it is not Vault related at all, but that would be weird considering the "permission denied" when the token is changed.
Any hints in the right direction are much appreciated.
Listing the certificates is a LIST operation. Vault requires that you use the right HTTP verb because it uses the verb to evaluate which policy applies to a request. So even if you are using the root token for your test (so that no policy applies to you), you still need to use the right verb.
You can probably make your code work by adding method: 'LIST' to your request, or by adding the parameter list=true to the URL.
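A minimal sketch of that fix, assuming the same node-fetch setup, token and tlsAgent as in the question:

const response = await fetch("https://localhost:8200/v1/pki/certs", {
    method: "LIST", // non-standard verb; Node's HTTP client passes it through
    headers: { "X-Vault-Token": token },
    agent: tlsAgent,
});
const certs = await response.json();
// Equivalent alternative: GET https://localhost:8200/v1/pki/certs?list=true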
I am currently working on a project, one element of which is Amazon Connect. So far I have a function that triggers the call locally from disk, and following the documentation I am using the code below.
const AWS = require('aws-sdk');
AWS.config.loadFromPath('./configuration/keys/key-aws.json');

exports.makeCall = (number) => {
    const connect = new AWS.Connect();
    const params = {
        InstanceId: 'xxxxxx',
        ContactFlowId: 'xxxxxx',
        SourcePhoneNumber: 'xxxxxx',
        DestinationPhoneNumber: number,
        Attributes: {},
    };
    connect.startOutboundVoiceContact(params, (error, response) => {
        if (error) {
            console.log(error);
        } else {
            console.log('Initiated an outbound call with Contact Id ' + JSON.stringify(response.ContactId));
        }
    });
};
My questions about this:
Is it possible to track the call status (in progress / completed / rejected)? In the above solution we only get information that the connection has been initiated, and we only have the ContactId in the response.
Is it possible to use a custom function in Amazon Connect without using AWS Lambda, but with an external source (e.g. App Engine on GCP)?
Is it possible to create a solution where I can make another call only after finishing the first one?
Thanks in advance for your help!
There are a couple of options here. You could call DescribeContact and check the DisconnectTimestamp and the ConnectedToAgentTimestamp; these tell you whether the call is still in progress and whether it reached an agent, and therefore whether it was answered. However, you'd have to keep calling this function to track the call.
The other option involves using the Kinesis data stream, monitoring it to track the contact, and calling the Lambda when the contact completes. See https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
I would write the ContactId to a DynamoDB table and use that in the Lambda that watches the Kinesis stream.
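A hedged sketch of the polling approach, assuming the AWS SDK for JavaScript (v2) and a stored ContactId; the field names follow the DescribeContact response shape:

const connect = new AWS.Connect();

async function isCallFinished(instanceId, contactId) {
    const { Contact } = await connect
        .describeContact({ InstanceId: instanceId, ContactId: contactId })
        .promise();
    // A populated DisconnectTimestamp means the contact has ended;
    // Contact.AgentInfo.ConnectedToAgentTimestamp indicates it reached an agent.
    return Boolean(Contact.DisconnectTimestamp);
}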
Do you mean from within a contact flow? If so, then not directly; you can only call Lambdas, but the Lambda could call the external source.
Sure, either using the DescribeContact call as above, or, if you send the outbound call to a queue that has only a single agent associated with it (via a Routing Profile or an Agent Queue), you could use GetCurrentMetricData and check the AGENTS_AVAILABLE and CONTACTS_IN_QUEUE metrics to decide whether to create a new call.
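A rough sketch of that metrics check; the instance ID and queue ID are placeholders:

const params = {
    InstanceId: 'xxxxxx',
    Filters: { Queues: ['<queue-id>'], Channels: ['VOICE'] },
    CurrentMetrics: [
        { Name: 'AGENTS_AVAILABLE', Unit: 'COUNT' },
        { Name: 'CONTACTS_IN_QUEUE', Unit: 'COUNT' },
    ],
};
connect.getCurrentMetricData(params, (err, data) => {
    if (err) return console.log(err);
    // Only place the next call when the agent is free and nothing is queued
    console.log(data.MetricResults);
});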
Why don't you use the Amazon Connect Streams API? You will get the call status in every single event, e.g. OnConnecting, OnConnected, OnMissed and OnCompleted.
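For reference, a sketch of subscribing to those events with the amazon-connect-streams library; note this runs in the browser alongside an embedded CCP, not in Lambda, and the library's callback for completion is onEnded:

connect.contact((contact) => {
    contact.onConnecting(() => console.log('connecting'));
    contact.onConnected(() => console.log('connected'));
    contact.onMissed(() => console.log('missed'));
    contact.onEnded(() => console.log('ended'));
});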
I have a Node Lambda function in which I want to decode and read the payload of a JWT. I created a GET method with Lambda proxy integration enabled, and I'm only passing the authorization bearer token to the endpoint. When I do this, the event is completely empty. How can I pass the JWT to the Lambda function?
If you return the event and it's empty, then it sounds like some part of the configuration is not set up correctly, rather than the authorization header not being passed.
According to the docs, the proxy integration requires the Lambda output in the following format:
{
    "isBase64Encoded": "boolean",
    "statusCode": "number",
    "headers": { ... },
    "body": "JSON string"
}
Make sure your Lambda returns a similar output.
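To tie both points together, a minimal sketch of a proxy-integration handler that reads the bearer token from the event and returns that shape; note it only decodes the payload and does not verify the signature:

exports.handler = async (event) => {
    const auth = (event.headers && (event.headers.Authorization || event.headers.authorization)) || '';
    const token = auth.replace(/^Bearer\s+/i, '');
    // A JWT is three base64url segments; the payload is the middle one
    const payload = token
        ? JSON.parse(Buffer.from(token.split('.')[1], 'base64').toString('utf8'))
        : null;
    return {
        isBase64Encoded: false,
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
    };
};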
I have created a Lambda function using Serverless. This function is fired via API Gateway on a GET request and should return a PDF file from a buffer. I'm using html-pdf to create the buffer, and I'm trying to return the PDF file with the following command:
let response = {
    statusCode: 200,
    headers: { 'Content-type': 'application/pdf' },
    body: buffer.toString('base64'),
    isBase64Encoded: true,
};
return callback(null, response);
but the browser just fails to load the PDF, so I don't know exactly how to return the PDF file directly to the browser. I couldn't find a solution for that.
Well, I found the answer.
The settings in my response object are fine; I just had to manually change the settings in API Gateway for this to work in the browser. I added "*/*" to the binary media types under the binary settings in the API Gateway console.
API GATEWAY
Log into your console
Choose your API
Click on binary support in the dropdown
Edit the binary media types and add "*/*"
FRONTEND
Open the API URL in a new tab (target="_blank"). The browser then handles the base64-encoded response; in my case with Chrome, the browser just opens the PDF in a new tab, exactly like I want it to.
After spending several hours on this, I found out that if you set Content handling to Convert to binary (CONVERT_TO_BINARY), the entire response has to be base64; otherwise I would get the error: Unable to base64 decode the body.
Therefore my response now looks like:
callback(null, buffer.toString('base64'));
[The original answer included screenshots of the Integration response, the Method response, and the Binary Media Types settings.]
If you have a gigantic PDF, it will take a long time for Lambda to return it, and Lambda is billed per 100 ms.
I would save it to S3 first, then have the Lambda return the S3 URL to the client for downloading.
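A hedged sketch of that approach with the AWS SDK for JavaScript (v2); the bucket, key and renderPdf helper are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
    const buffer = await renderPdf(); // hypothetical PDF-rendering helper
    const Bucket = 'my-pdf-bucket';
    const Key = 'reports/report.pdf';
    await s3.putObject({ Bucket, Key, Body: buffer, ContentType: 'application/pdf' }).promise();
    // Hand back a short-lived download URL instead of the bytes themselves
    const url = s3.getSignedUrl('getObject', { Bucket, Key, Expires: 300 });
    return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ url }),
    };
};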
I was having a similar issue where PDFs were downloaded as base64; it started happening when I changed the serverless.yml file from:
binaryMediaTypes:
  - '*/*'
to
binaryMediaTypes:
  - 'application/pdf'
  - '....other media types'
The issue comes from the way AWS implemented this feature. From the AWS documentation here:
When a request contains multiple media types in its Accept header, API Gateway honors only the first Accept media type. If you can't control the order of the Accept media types and the media type of your binary content isn't the first in the list, add the first Accept media type in the binaryMediaTypes list of your API. API Gateway handles all content types in this list as binary.
Basically, if the first media type in the request's Accept header is not in your binaryMediaTypes list, you will get base64 back.
I checked the request in the browser and the first media type in the Accept header was text/html, so I got it working after changing my settings to:
binaryMediaTypes:
  - 'application/pdf'
  - '....other media types'
  - 'text/html'
Hope this helps anyone with the same issue.
The above solution only works for a particular content type; it can't handle multiple content types. Follow just these two steps to resolve the multiple-content-type issue:
Tick the Use Lambda Proxy integration checkbox under
API Gateway --> API --> method --> Integration Request
Then create your response as:
let response = {
    statusCode: 200,
    headers: {
        'Content-type': 'application/pdf', // you can change to any content type
        'content-disposition': 'attachment; filename=test.pdf' // key of success
    },
    body: buffer.toString('base64'),
    isBase64Encoded: true
};
return response;
Note: this is not secure.
Instead of doing all this, it's better to use the serverless-apigw-binary plugin in your serverless.yaml file.
Add:
plugins:
  - serverless-apigw-binary

custom:
  apigwBinary:
    types:
      - "application/pdf"
Hope that will help someone.
I'm using the googleapis npm package ("apis/drive/v3.js") for the Google Drive service. On the backend I'm using Node.js, with ngrok for local testing. My problem is that I can't get notifications.
The following code:
drive.changes.watch({
    pageToken: startPageToken,
    resource: {
        id: uuid.v1(),
        type: 'web_hook',
        address: 'https://7def94f6.ngrok.io/notifications'
    }
}, function (err, result) {
    console.log(result);
});
returns something like:
{
    kind: 'api#channel',
    id: '8c9d74f0-fe7b-11e5-a764-fd0d7465593e',
    resourceId: '9amJTbMCYabCkFvn8ssPrtzWvAM',
    resourceUri: 'https://www.googleapis.com/drive/v3/changes?includeRemoved=true&pageSize=100&pageToken=6051&restrictToMyDrive=false&spaces=drive&alt=json',
    expiration: '1460227829000'
}
When I change any file in Google Drive, the notifications do not come. Dear colleagues, what is wrong?
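For reference, a minimal sketch of what the receiving end at that ngrok address could look like (assuming Express); Drive push notifications arrive as POST requests whose payload is carried in X-Goog-* headers:

const express = require('express');
const app = express();

app.post('/notifications', (req, res) => {
    console.log('Channel:', req.headers['x-goog-channel-id']);
    console.log('State:', req.headers['x-goog-resource-state']); // 'sync' on setup, then e.g. 'change'
    res.sendStatus(200); // acknowledge quickly so Google does not keep retrying
});

app.listen(8080);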
This should be a comment, but I do not have enough reputation (50 points) to post one. Sorry if this is not a real answer, but it might help.
I learned this today. I'm doing practically the same thing as you, only with the Gmail API rather than Drive.
I see you have this error:
"push.webhookUrlUnauthorized", "message": "Unauthorized WebHook etc..."
I think this is because of one of two reasons:
You didn't give the Drive API publisher permissions to your topic.
Second, if you want to receive notifications, the authorized WebHook URL must be set both on the server (your project) and in your Pub/Sub service (Google Cloud).
See below; for me this setup works:
1. Create a topic.
2. Give Drive publish permissions to your topic. This is done by adding the Drive scope in the box and following steps 2 and 3.
3. Configure authorized WebHooks. From the Create Topic page, click on add subscriptions. Not really visible here, but once you are there you can manage it.