I'm fairly new to node.js and Promises in general, although I think I get the gist of how they are supposed to work (I've been forced to use ES5 for a looooong time). I also have little in-depth knowledge of Cloud Functions (GCF), though again I do understand at a high level how they work.
I'm using GCF for part of my app, which is meant to receive an HTTP request, translate it and send it onward to another endpoint. I need to use Promises, as there are occasions when the originating HTTP request has multiple 'messages' sent at once.
My function works in terms of making sure messages are sent in the correct order, but the subsequent messages are sent on very slowly (the logs suggest around a 20-second gap between them actually being sent onward).
I'm not entirely sure why that is happening - I would've expected it to be less than a couple of seconds. Maybe it's something to do with GCF and not my code? Or maybe it is my code? Either way, I'd like to know if there's something I can do to speed it up, especially since it's supposed to send messages onward to a user in Google Chat.
(Before anyone comments on why it's request.body.body, I don't have control over the format of the incoming request)
exports.onward = (request, response) => {
    response.status(200).send();
    let bodyArr = request.body.body;
    // Chain Promises over multiple messages sent at once, stored in bodyArr
    bodyArr.reduce(async (previous, next) => {
        await previous;
        return process(next);
    }, Promise.resolve());
};
function process(body){
    return new Promise((resolve, reject) => {
        // Obtain JWT from Google
        let jwtClient = new google.auth.JWT(
            privatekey.client_email,
            null,
            privatekey.private_key,
            ['https://www.googleapis.com/auth/chat.bot']
        );
        // Authorise JWT, reject promise or continue as appropriate
        jwtClient.authorize((err, tokens) => {
            if(err){
                console.error('Google OAuth failure ' + err);
                reject(err);
            }else{
                let payload = copyPayload();
                setValues(payload, body); // Other function which sets payload values
                axios.post(url, payload, {
                    headers: {
                        'Content-Type': 'application/json',
                        'Accept': 'application/json',
                        'Authorization': 'Bearer ' + tokens.access_token
                    },
                })
                .then(response => {
                    // HTTP 2xx response received
                    resolve();
                })
                .catch(error => {
                    // Something bad happened
                    reject(error);
                });
            }
        });
    });
}
EDIT: After testing the same thing again, the delay has gone down a bit, to around 3-6 seconds between promises. Given that the code didn't change, I suspect it's something to do with GCF?
By doing
exports.onward = (request, response) => {
    response.status(200).send();
    let bodyArr = request.body.body;
    // Any work
};
you are incorrectly managing the life cycle of your Cloud Function: by calling response.status(200).send(); you indicate to the Cloud Functions platform that your function has successfully reached its terminating condition or state and that, consequently, the platform can shut it down. See the doc for more explanations.
Since you send this signal at the beginning of your Cloud Function, the platform may shut the function down before the asynchronous work is finished.
In addition, you are potentially generating some "erratic" behavior that makes the Cloud Function difficult to debug: sometimes it is terminated before the asynchronous work is completed, for the reason explained above, but at other times the platform does not terminate it immediately and the asynchronous work has the chance to complete first.
So, you should send the response after all the work is completed.
If you want to acknowledge the user immediately that the work has been started, without waiting for this work to be completed, you should use Pub/Sub: in your main Cloud Function, delegate the work to a Pub/Sub-triggered Cloud Function and then return the response.
If you want to acknowledge the user when the work is completed (i.e. when the Pub/Sub triggered Cloud Function is completed), there are several options: Send a notification, an email or write to a Firestore document that you watch from your app.
Related
I have an array, let's say with a length of 9999. Now I want to send a message to SQS with the contents of each object in the array.
Here is how I want to send the messages:
const promises = tokens.map((token) => {
    const params = {
        MessageBody: JSON.stringify(token),
        QueueUrl: 'my-queue-url',
    };
    return sqs.sendMessage(params).promise();
});
await Promise.all(promises);
Now, the problem is that I trigger this function via API Gateway, which times out after 30 seconds. How can I avoid this? Is there a better way to do this?
Based on your response in the comments I suggest you do two things:
Use the SendMessageBatch API to reduce the number of API calls and speed up the process, as @Marc B suggests
Set up asynchronous invocation of the backend Lambda function. This means the API Gateway will just send the Event to your Lambda and not wait for the response. That means the Lambda function is free to run for up to 15 minutes. If you have more messages than can be handled in that period of time, you may want to look into StepFunctions.
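A rough sketch of the batching idea (the helper names are mine, not from the SDK): split the array into chunks of 10, which is SendMessageBatch's per-call entry limit, and send the batches in parallel.

```javascript
// Split an array into chunks of `size` elements
const chunk = (arr, size) =>
    Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
        arr.slice(i * size, i * size + size)
    );

// Hypothetical helper: send all tokens via SendMessageBatch, 10 per call
const sendAll = async (sqs, queueUrl, tokens) => {
    const batches = chunk(tokens, 10); // 10 is the SendMessageBatch limit
    await Promise.all(
        batches.map((batch, b) =>
            sqs.sendMessageBatch({
                QueueUrl: queueUrl,
                Entries: batch.map((token, i) => ({
                    Id: `${b}-${i}`, // must be unique within each batch request
                    MessageBody: JSON.stringify(token),
                })),
            }).promise()
        )
    );
};
```

This cuts 9999 `sendMessage` calls down to 1000 `sendMessageBatch` calls, which may already bring you under the 30-second mark even without asynchronous invocation.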
I started implementing a slash command which kept evolving and eventually might hit the 3-second Slack response limit. I am using serverless-stack with Node and TypeScript. With sst (and the vscode launch file) it hooks and attaches the debugger to the lambda function, which is pretty neat for debugging.
When hitting the api endpoint I tried various methods to send back an acknowledgement to Slack, do my thing and send a delayed message back, without success. I didn't have much luck finding info on this, but one good source was this SO answer - unfortunately it didn't work. I didn't use request-promise since it's deprecated and tried to implement it with vanilla methods (maybe that's where I failed?). But also invoking a second lambda function from within (like in the first example of the post) didn't seem to fit within the 3s limitation.
I am wondering if I am doing something wrong or if attaching the debugger is just taking too long, etc.
However, before attempting to send a delayed message it was fine, including accessing and scanning dynamodb records, manipulating the results and then responding back to Slack with the debugger attached, without hitting the timeout.
Attempting to use a POST
export const answer: APIGatewayProxyHandlerV2 = async (
    event: APIGatewayProxyEventV2, context, callback
) => {
    const slack = decodeQueryStringAs<SlackRequest>(event.body);
    axios.post(slack.response_url, {
        text: "completed",
        response_type: "ephemeral",
        replace_original: "true"
    });
    return { statusCode: 200, body: '' };
}
The promise never resolved; I guess that once the function hits return, the lambda gets disposed of, and so does the promise?
Invoking 2nd Lambda function
export const v2: APIGatewayProxyHandlerV2 = async (
    event: APIGatewayProxyEventV2, context, callback
): Promise<any> => {
    // tried with CB here and without
    // callback(null, { statusCode: 200, body: 'processing' });
    const slack = decodeQueryStringAs<SlackRequest>(event.body);
    const originalMessage = slack.text;
    const responseInfo = url.parse(slack.response_url);
    const data = JSON.stringify({
        ...slack,
    });
    const lambda = new AWS.Lambda();
    const params = {
        FunctionName: 'dev-****-FC******SmE7',
        InvocationType: 'Event', // Ensures asynchronous execution
        Payload: data
    };
    // Returns 200 immediately after invoking the second lambda, not waiting for the result
    return lambda.invoke(params).promise()
        .then(() => callback(null, { statusCode: 200, body: 'working on it' }));
};
Looking at the debugger logs it does send the 200 code and invoke the new lambda function, though Slack still times out.
Nothing special happens logic-wise... the current non-delayed-message implementation does much more (accessing the DB and manipulating result data) and manages not to time out.
Any suggestions or help is welcome.
Quick side note, I used request-promise in the linked SO question's answer since the JS native Promise object was not yet available on AWS Lambda's containers at the time.
There's a fundamental difference between the orchestration of the functions in the linked question and your own from what I understand but I think you have the same goal:
> Invoke an asynchronous operation from Slack which posts back to slack once it has a result
Here's the problem with your current approach: Slack sends a request to your (1st) lambda function, which returns a response to Slack and then invokes the second lambda function.
The Slack event is no longer accepting responses once your first lambda returns the 200. Here lies the difference between your approach and the linked SO question.
The desired approach would sequentially look like this:
Slack sends a request to Lambda no. 1
Lambda no. 1 returns a 200 response to Slack
Lambda no. 1 invokes Lambda no. 2
Lambda no. 2 sends a POST request to a Slack URL (google "incoming webhooks" for Slack)
Slack receives the POST requests and displays it in the channel you chose for your webhook.
Code wise this would look like the following (without request-promise lol):
Lambda 1
module.exports = async (event, context) => {
    // Invoking the second lambda function
    const AWS = require('aws-sdk')
    const lambda = new AWS.Lambda()
    const params = {
        FunctionName: 'YOUR_SECOND_FUNCTION_NAME',
        InvocationType: 'Event', // Ensures asynchronous execution
        Payload: JSON.stringify({
            ... your payload for lambda 2 ...
        })
    }
    await lambda.invoke(params).promise() // Starts Lambda 2
    return {
        text: "working...",
        response_type: "ephemeral",
        replace_original: "true"
    }
}
Lambda 2
module.exports = async (event, context) => {
    const axios = require('axios')
    // Use event (payload sent from Lambda 1) and do what you need to do
    return axios.post('YOUR_INCOMING_WEBHOOK_URL', {
        text: 'this will be sent to slack'
    });
}
I have the following timed function to periodically refetch credentials from an external API in your usual movie fetching IMDB clone app:
// This variable I pass later to Apollo Server, below all this code.
let tokenToBeUsedLater: string;

// Fetch credentials and schedule a refetch before they expire
const fetchAndRefreshToken = async () => {
    try {
        // fetchCredentials() sends an http request to the external API:
        const { access_token, expires_in } = await fetchCredentials() as Credentials;
        tokenToBeUsedLater = access_token;
        // The returned token expires, so the timeout below is meant to recursively
        // loop this function to refetch fresh credentials shortly before expiry.
        // This timeout should not stop the app's execution, so we can't await it.
        // Have also tried the following with just setTimeout() and
        // no `return new Promise()` but it throws a 2nd identical error.
        return new Promise(resolve => setTimeout(() => {
            resolve(fetchAndRefreshToken());
        }, expires_in - 60000));
    } catch (err) { throw new Error(err); }
};

fetchAndRefreshToken(); // <-- TypeScript error here
I have tried rewriting it in a thousand ways, but no matter what I do I get a 'Promises must be handled appropriately or explicitly marked as ignored with the void operator' error.
I can get rid of the error by:
Using .then().catch() when calling fetchAndRefreshToken(). Not ideal since I don't want to mix it with async and try/catch.
Putting a void operator ahead of fetchAndRefreshToken(). Feels like cheating.
Putting await ahead of fetchAndRefreshToken(). Essentially breaks my app (pauses execution while waiting for the token to expire, so users in the frontend can't search for movies).
Any idea about how to solve this?
Also, any suggested resources/topics to study about this? Because I had a similar issue yesterday and I still can't figure this one out despite having already solved the other one. Cheers =)
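For reference, a common pattern here (a sketch, not tied to any particular linter config): keep the call fire-and-forget, but attach a `.catch` handler so the rejection is observed. That typically satisfies a `no-floating-promises` style rule without `await`ing (so startup isn't blocked) and without mixing `.then` into the main flow. The stub below stands in for the question's `fetchAndRefreshToken`.

```javascript
// Hypothetical stand-in for the question's fetchAndRefreshToken()
let token;
const fetchAndRefreshToken = async () => { token = 'fresh-token'; };

// Start the refresh loop without blocking startup; the .catch makes the
// promise "handled" in the linter's eyes and surfaces real failures.
fetchAndRefreshToken().catch((err) => {
    console.error('Token refresh loop failed:', err);
});
```

Note that this only logs failures of the loop itself; each recursive iteration should handle its own errors, since the outer `.catch` fires at most once.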
I am writing a Lambda function which is going to be used to send a test message to an API. If there are errors, I will need it to run certain functionality (like notifying me via AWS messaging). I would like to have a simple test by status code: for example, if I get a 2XX, do nothing; if I get a 4XX or 5XX, notify me so I can research the issue. In the test environment I am passing the body as an XML string as a value in a JSON object.
example Lambda Test Event
{
"data": "<xml stuff, credentials, etc"
}
here is my function
exports.handler = async (event, context) => {
    const https = require('https');
    const options = {
        hostname: 'mythingy.com', // note: hostname must not include the protocol
        port: 443,
        path: '/target',
        method: 'POST',
        headers: { 'Content-Type': 'application/xml' }
    };
    const req = https.request(options, res => {
        console.log(`statusCode: ${res.statusCode}`);
        res.on('data', d => {
            process.stdout.write(d);
        });
    });
    req.on('error', error => {
        console.error(error);
    });
    req.write(event.data);
    req.end();
};
I'm using node 10.x in Lambda, and I am getting a "result succeeded" message from Lambda, but no logged response statusCode. I've done it several ways, and have easily pulled status codes from Node fetch, ajax and http requests in the past. I know this probably has something to do with Lambda's environment and the promise. Can anyone help me figure out how to log the status code in Lambda?
You don't see it printed because your function is async while https.request uses a callback approach, which is run asynchronously by Node.js. The function reaches its end before the code inside the callback has a chance to execute. And yes, you are right, this is due to the way Lambda functions work: they are short-lived (contexts can be reused, but that's a story for another question), so the processes are terminated by the underlying containers. It never happened to you in traditional Node.js applications because they usually run behind a webserver, which keeps the process up and running, so callbacks are eventually executed.
You have to either promisify https.request or use a library which already works with Promises, so you can simply await the calls. Axios and Request are good options.
Once you have chosen your library - or have promisified https.request - (I'll use axios for my example), you can simply await the call, get its result and do whatever you want with it.
const res = await axios.post('https://service-you-want-to-connect-to.com', {})
console.log(JSON.stringify(res)) // here you inspect the res object and decide what do to with the status code.
I'm running into an issue with my http-proxy-middleware setup. I'm using it to proxy requests to another service which, e.g., might resize images et al.
The problem is that multiple clients might call the method multiple times and thus create a stampede on the original service. I'm now looking into a solution (what some services, e.g. varnish, call request coalescing) that would call the service once, wait for the response, 'queue' the incoming requests with the same signature until the first is done, and then answer them all in a single go... This is different from 'caching' results: I want to prevent calling the backend multiple times simultaneously, not necessarily cache the results.
I'm trying to find out whether something like this might be called differently, or if I'm missing something that others have already solved some way... but I can't find anything...
As the use case seems pretty 'basic' for a reverse-proxy type setup, I would have expected a lot of hits on my searches, but since the problem space is pretty generic I'm not getting anything...
Thanks!
A colleague of mine has helped me hack together my own answer. It's currently used as an (express) middleware for specific GET endpoints: it hashes the request into a map and starts a new, separate request. Concurrent incoming requests are hashed, checked against the map, and answered from the first request's callback, so the single backend response is reused. This also means that if the first response is particularly slow, all coalesced requests are too.
This seemed easier than hacking it into http-proxy-middleware, but oh well, this got the job done :)
const axios = require('axios');

const responses = {};

module.exports = (req, res) => {
    const queryHash = `${req.path}/${JSON.stringify(req.query)}`;
    if (responses[queryHash]) {
        console.log('re-using request', queryHash);
        responses[queryHash].push(res);
        return;
    }
    console.log('new request', queryHash);
    const axiosConfig = {
        method: req.method,
        url: `[the original backend url]${req.path}`,
        params: req.query,
        headers: {}
    };
    if (req.headers.cookie) {
        axiosConfig.headers.Cookie = req.headers.cookie;
    }
    responses[queryHash] = [res];
    axios.request(axiosConfig).then((axiosRes) => {
        responses[queryHash].forEach((coalescingRequest) => {
            coalescingRequest.json(axiosRes.data);
        });
        responses[queryHash] = undefined;
    }).catch((err) => {
        responses[queryHash].forEach((coalescingRequest) => {
            coalescingRequest.status(500).json(false);
        });
        responses[queryHash] = undefined;
    });
};
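As an aside, a common variant of the same idea (a sketch, not tied to express or http-proxy-middleware) stores the in-flight promise itself keyed by hash, rather than an array of response objects. Every concurrent caller then just awaits the same promise, and the bookkeeping cleans itself up via `.finally`:

```javascript
const inflight = new Map();

// Coalesce concurrent calls with the same key onto one in-flight promise.
// `fetcher` is any function returning a promise for the backend response.
const coalesce = (key, fetcher) => {
    if (inflight.has(key)) return inflight.get(key); // reuse the pending request
    const p = fetcher().finally(() => inflight.delete(key)); // clean up when settled
    inflight.set(key, p);
    return p;
};
```

A nice property of this shape is that error propagation comes for free: if the single backend call rejects, every coalesced caller sees the same rejection through the shared promise.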