Axios always times out on AWS Lambda for a particular API - node.js

Describe the issue
I'm not really sure if this is an Axios issue or not. The following code runs successfully on my local development machine but always times out whenever I run it from the cloud (e.g. AWS Lambda). The same thing happens when I run it on repl.it.
I can confirm that AWS Lambda has internet access and the code works for any other API but this one:
https://www.target.com.au/ws-api/v1/target/products/search?category=W95362
Example Code
https://repl.it/repls/AdeptFluidSpreadsheet
const axios = require('axios');

const handler = async () => {
  const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';
  const response = await axios.get(url, { timeout: 10000 });
  console.log(response.data.data.productDataList);
};

handler();
Environment
Axios Version: 0.19.2
Runtime: nodejs12x
Update 1
I tried the native require('https') and it times out on both localhost and the cloud server. Please find sample code here: https://repl.it/repls/TerribleViolentVolume
const https = require('https');

const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';

https.get(url, res => {
  var body = '';
  res.on('data', chunk => {
    body += chunk;
  });
  res.on('end', () => {
    var response = JSON.parse(body);
    console.log("Got a response: ", response);
  });
}).on('error', e => {
  console.log("Got an error: ", e);
});
Again, I can confirm that the same code works with any other API.
Update 2
I suspect that this is something server-side, as it also behaves very weirdly with curl:
curl from local -> 403 access denied
curl from local with User-Agent header -> success
curl from cloud server -> 403 access denied
It must be server-side validation, something related to AkamaiGHost.
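Since curl from localhost succeeds once a User-Agent header is added, a minimal sketch of the same workaround in Axios is shown below. The header value is just an example browser string, and the Akamai edge may still block requests from data-centre IPs such as Lambda's, so treat this as an experiment rather than a fix.
const axios = require('axios');

const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';

const handler = async () => {
  // Send a browser-like User-Agent so the edge server does not reject the request outright.
  // Note: this may not be enough if the origin also filters by source IP.
  const response = await axios.get(url, {
    timeout: 10000,
    headers: { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)' },
  });
  console.log(response.data.data.productDataList);
};

handler();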

You have probably placed your Lambda function in a VPC without internet access to the outside world. Try checking the VPC section in your Lambda configuration, and set up an internet gateway accordingly.

You should try wrapping the axios call in a try/catch; maybe that will surface the actual error.
const axios = require('axios');

const handler = async () => {
  try {
    const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';
    const response = await axios.get(url, { timeout: 10000 });
    console.log(typeof (response));
    console.log(response);
  } catch (e) {
    console.log(e, "error api call");
  }
};

handler();

As suggested by Akshay, you can use a try/catch block to get the error. Maybe it helps you out.
Have you configured Error Handling for Asynchronous Invocation?
To configure error handling follow the below steps:
Open the Lambda console Functions page.
Choose a function.
Under Asynchronous invocation, choose Edit.
Configure the following settings.
Maximum age of event – The maximum amount of time Lambda retains an event in the asynchronous event queue, up to 6 hours.
Retry attempts – The number of times Lambda retries when the function returns an error, between 0 and 2.
Choose Save.
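If you prefer to set this programmatically instead of through the console, a rough sketch using the AWS SDK for JavaScript follows; the function name and limit values are placeholders, so adjust them to your own setup.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// Placeholder function name and values; adjust to your own function and limits.
lambda.putFunctionEventInvokeConfig({
  FunctionName: 'my-function',
  MaximumEventAgeInSeconds: 3600, // keep events in the async queue for up to 1 hour
  MaximumRetryAttempts: 0         // do not retry when the function returns an error
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data);
});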
axios is just a Promise-based HTTP client for the browser and Node.js, and since you set timeout: 10000, I believe the timeout issue is not on its end.
Although your API
https://www.target.com.au/ws-api/v1/target/products/search?category=W95362
is working fine in the browser and returning JSON data.
Also, the Lambda function timeout can be raised to as much as 15 minutes, which I believe is enough for the response. There may be another issue.
Make sure you have set the other configuration, such as permissions, as suggested in the documentation.
Here you can check the default limits for AWS Lambda.

Related

Node.js GET API is getting called twice intermittently

I have a node.js GET API endpoint that calls some backend services to get data.
app.get('/request_backend_data', function(req, res) {
  ---------------------
});
When there is a delay getting a response back from the backend services, this endpoint (request_backend_data) gets triggered again exactly after 2 minutes. I have checked my application code, but there is no retry logic written anywhere for handling a delay.
Does a Node.js API endpoint get called twice in any case (like a delay or timeout)?
There might be a few reasons:
Some Chrome extensions can cause bugs; they have been causing a lot of issues recently. Run your app in a different browser; if the issue disappears, it is a Chrome-specific problem.
The browser might be making an extra request for favicon.ico that hits your routes. To prevent this, use this module: https://www.npmjs.com/package/serve-favicon
Add a CORS policy. The browser might be sending preflight requests. Use this npm package: https://www.npmjs.com/package/cors (a combined sketch follows below).
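For the favicon and CORS points above, a minimal Express sketch could look like the following; the favicon path, port, and route body are assumptions for illustration.
const express = require('express');
const path = require('path');
const favicon = require('serve-favicon'); // answers the browser's favicon.ico request before it reaches your routes
const cors = require('cors');             // handles CORS headers and preflight OPTIONS requests

const app = express();
app.use(favicon(path.join(__dirname, 'public', 'favicon.ico'))); // path is an assumption
app.use(cors());

app.get('/request_backend_data', function (req, res) {
  // ... call backend services and respond ...
  res.json({ ok: true });
});

app.listen(3000);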
No, there are no default timeouts or automatic retries like that in Node.js itself.
Look for the issue on your frontend side:
it could be the JavaScript fetch API with a 'retry' option set
it could be a messed-up RxJS operator chain that emits events implicitly and triggers another REST request
it could be an entire page reload on timeout, which re-fetches all necessary data from the backend
it could be request interceptors (in axios, Angular, etc.) that modify something and re-send
... many potential reasons, but almost certainly not in the backend (Node.js)
Just make a simple example and invoke your Node.js 'request_backend_data' endpoint directly with axios or XMLHttpRequest; you will see that the problem is not on the backend side.
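For example, a minimal standalone check might look like the sketch below; the localhost base URL and port are assumptions, so point it at wherever server A actually runs.
// Minimal sanity check: call the endpoint directly, outside the browser/frontend.
const axios = require('axios');

axios.get('http://localhost:3000/request_backend_data') // base URL and port are assumptions
  .then(res => console.log('status:', res.status))
  .catch(err => console.error('request failed:', err.message));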
Try checking the API call with the code below, which follows redirects. Add headers as needed (e.g. 'Authorization': 'Bearer dhqsdkhqd...', etc.).
var https = require('follow-redirects').https;

var options = {
  'method': 'GET',
  'hostname': 'foo.com',
  'path': '/request_backend_data',
  'headers': {
  },
  'maxRedirects': 20
};

var req = https.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) {
    chunks.push(chunk);
  });
  res.on("end", function (chunk) {
    var body = Buffer.concat(chunks);
    console.log(body.toString());
  });
  res.on("error", function (error) {
    console.error(error);
  });
});

req.end();
Paste this into a file called test.js, then run it with node test.js.

Slack delayed message integration with Node TypeScript and Lambda

I started implementing a slash command which kept evolving and eventually might hit the 3-second Slack response limit. I am using serverless-stack with Node and TypeScript. With sst (and the VS Code launch file) it hooks and attaches the debugger to the Lambda function, which is pretty neat for debugging.
When hitting the API endpoint I tried various methods to send back an acknowledgement to Slack, do my thing, and send a delayed message back, without success. I didn't have much luck finding info on this, but one good source was this SO answer; unfortunately it didn't work. I didn't use request-promise since it's deprecated and tried to implement it with vanilla methods (maybe that's where I failed?). But invoking a second Lambda function from within (like in the first example of the post) also didn't seem to fit within the 3s limitation.
I am wondering if I am doing something wrong or if attaching the debugger is just taking too long, etc.
However, before attempting to send a delayed message it was fine: accessing and scanning DynamoDB records, manipulating the results and then responding back to Slack, all with the debugger attached, without hitting the timeout.
Attempting to use a POST
export const answer: APIGatewayProxyHandlerV2 = async (
  event: APIGatewayProxyEventV2, context, callback
) => {
  const slack = decodeQueryStringAs<SlackRequest>(event.body);
  axios.post(slack.response_url, {
    text: "completed",
    response_type: "ephemeral",
    replace_original: "true"
  });
  return { statusCode: 200, body: '' };
}
The promise never resolved; I guess that once the function hits return, the Lambda function gets disposed, and so does the promise?
Invoking 2nd Lambda function
export const v2: APIGatewayProxyHandlerV2 = async (
  event: APIGatewayProxyEventV2, context, callback
): Promise<any> => {
  // tried with CB here and without
  // callback(null, { statusCode: 200, body: 'processing' });
  const slack = decodeQueryStringAs<SlackRequest>(event.body);
  const originalMessage = slack.text;
  const responseInfo = url.parse(slack.response_url)
  const data = JSON.stringify({
    ...slack,
  })
  const lambda = new AWS.Lambda()
  const params = {
    FunctionName: 'dev-****-FC******SmE7',
    InvocationType: 'Event', // Ensures asynchronous execution
    Payload: data
  }
  return lambda.invoke(params).promise() // Returns 200 immediately after invoking the second lambda, not waiting for the result
    .then(() => callback(null, { statusCode: 200, body: 'working on it' }))
};
Looking at the debugger logs, it does send the 200 code and invokes the new Lambda function, though Slack still times out.
Nothing special happens logic-wise... the current non-delayed-message implementation does much more (accessing the DB and manipulating result data) and manages not to time out.
Any suggestions or help is welcome.
Quick side note, I used request-promise in the linked SO question's answer since the JS native Promise object was not yet available on AWS Lambda's containers at the time.
There's a fundamental difference between the orchestration of the functions in the linked question and your own from what I understand but I think you have the same goal:
> Invoke an asynchronous operation from Slack which posts back to slack once it has a result
Here's the problem with your current approach: Slack sends a request to your (1st) Lambda function, which returns a response to Slack and then invokes the second Lambda function.
The Slack event is no longer accepting responses once your first Lambda returns the 200. Here lies the difference between your approach and the linked SO question.
The desired approach would sequentially look like this:
Slack sends a request to Lambda no. 1
Lambda no. 1 returns a 200 response to Slack
Lambda no. 1 invokes Lambda no. 2
Lambda no. 2 sends a POST request to a Slack URL (search for Slack incoming webhooks)
Slack receives the POST request and displays it in the channel you chose for your webhook.
Code wise this would look like the following (without request-promise lol):
Lambda 1
module.exports = async (event, context) => {
  // Invoking the second lambda function
  const AWS = require('aws-sdk')
  const lambda = new AWS.Lambda()
  const params = {
    FunctionName: 'YOUR_SECOND_FUNCTION_NAME',
    InvocationType: 'Event', // Ensures asynchronous execution
    Payload: JSON.stringify({
      ... your payload for lambda 2 ...
    })
  }
  await lambda.invoke(params).promise() // Starts Lambda 2
  return {
    text: "working...",
    response_type: "ephemeral",
    replace_original: "true"
  }
}
Lambda 2
module.exports = async (event, context) => {
  // Use event (payload sent from Lambda 1) and do what you need to do
  const axios = require('axios') // make sure axios is bundled with this function
  return axios.post('YOUR_INCOMING_WEBHOOK_URL', {
    text: 'this will be sent to slack'
  });
}

Firebase Cloud Functions throws a timeout exception but a standalone script works fine

I am trying to call a third-party API from my Firebase Cloud Functions. I have billing enabled and all my other functions are working fine.
However, I have one method that throws a timeout exception when it tries to call the third-party API. The interesting thing is, when I run the same method from a standalone Node.js file, it works fine. But when I deploy it to Firebase or start the function locally, it shows a timeout error.
Following is my function:
exports.fetchDemo = functions.https.onRequest(async (req, response) => {
  var res = {};
  res.started = true;
  await myMethod();
  res.ended = true;
  response.status(200).json({ data: res });
});

async function myMethod() {
  var url = 'my third party URL';
  console.log('Line 1');
  const res = await fetch(url);
  console.log('Line 2'); // never prints when run with cloud functions
  var data = await res.text();
  console.log(`Line 3: ${data}`);
}
Just now I also noticed that when I hit the same URL in the browser, it gives the following error. That means it works only with standalone Node.
<errorDTO>
<code>INTERNAL_SERVER_ERROR</code>
<uid>c0bb83ab-233c-4fe4-9a9e-3f10063e129d</uid>
</errorDTO>
Any help will be appreciated...
It turned out that one of my colleagues had written a new method named fetch. I was not aware of it. So when my method called fetch, it was actually calling the method he wrote further down the file. I had just pulled the latest changes and did not notice he had added this method.
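One way to avoid that kind of shadowing is to bind the HTTP client to an unambiguous local name. A minimal sketch follows, assuming the node-fetch package is what the standalone script was relying on; the package choice and variable name are assumptions.
// Explicitly require the HTTP client under a distinct name so a local
// helper called `fetch` elsewhere in the file cannot shadow it.
const nodeFetch = require('node-fetch'); // assumption: node-fetch is the client used in the standalone script

async function myMethod() {
  const url = 'my third party URL';
  const res = await nodeFetch(url);
  const data = await res.text();
  console.log(`Got: ${data}`);
}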

How would I return status codes and responses in AWS Lambda

I am writing a Lambda function which is going to be used to send a test message to an API. If there are errors I will need it to run certain functionality (like notifying me with AWS messaging). I would like to have a simple test by status code: for example, if I get a 2XX, do nothing; if I get a 4XX or 5XX, notify me so I can research the issue. In the test environment I am passing the body as an XML string as a value in a JSON object.
example Lambda Test Event
{
  "data": "<xml stuff, credentials, etc"
}
Here is my function:
exports.handler = async (event, context) => {
  const https = require('https');
  const options = {
    hostname: 'https://mythingy.com',
    port: 443,
    path: '/target',
    method: 'POST',
    headers: { 'Content-Type': 'application/xml' }
  };
  const req = https.request(options, res => {
    console.log(`statusCode: ${res.statusCode}`);
    res.on('data', d => {
      process.stdout.write(d);
    });
  });
  req.on('error', error => {
    console.error(error);
  });
  req.write(event.data);
  req.end();
};
I'm using Node 10.x in Lambda, and I am getting a "result succeeded" message from Lambda, but no logged response statusCode. I've done this several ways, and have easily pulled status codes from Node fetch, AJAX, and http requests in the past. I know this probably has something to do with Lambda's environment and the promise. Can anyone help me figure out how to log the status code in Lambda?
You don't see it printed out because your function is async while https.request uses a callback approach, which is run asynchronously by Node.js. The function reaches its end before the code inside the callback has a chance to execute. And yes, you are right, this is due to the way Lambda functions work: they are short-lived (contexts can be reused, but that's a story for another question), so the process is terminated by the underlying container once the handler returns. It never happened to you in traditional Node.js applications because they usually run behind a web server, which keeps the process up and running, so callbacks are eventually executed.
You have to either promisify https.request or use a library that already works with Promises, so you can simply await the calls. Axios and Request are good options.
Once you have chosen your library, or have promisified https.request (I'll use axios for my example), you can simply await the call, get its result and do whatever you want with it.
const res = await axios.post('https://service-you-want-to-connect-to.com', {})
console.log(JSON.stringify(res)) // here you inspect the res object and decide what to do with the status code.
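If you would rather stay with the built-in module, a rough sketch of promisifying https.request is shown below; the hostname, path, and notification step are placeholders, not a definitive implementation.
const https = require('https');

// Wrap https.request in a Promise so the status code can be awaited inside the async handler.
function postXml(hostname, path, body) {
  return new Promise((resolve, reject) => {
    const req = https.request(
      { hostname, port: 443, path, method: 'POST', headers: { 'Content-Type': 'application/xml' } },
      res => {
        let data = '';
        res.on('data', chunk => { data += chunk; });
        res.on('end', () => resolve({ statusCode: res.statusCode, body: data }));
      }
    );
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}

exports.handler = async (event) => {
  const res = await postXml('mythingy.com', '/target', event.data); // hostname/path are placeholders
  console.log(`statusCode: ${res.statusCode}`);
  if (res.statusCode >= 400) {
    // notify yourself here (e.g. via AWS messaging) before returning
  }
  return { statusCode: res.statusCode };
};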

How to poll another server periodically from a node.js server?

I have a Node.js server A with MongoDB as the database.
There is another remote server B (it doesn't need to be Node-based) which exposes an HTTP GET API '/status' and returns either 'FREE' or 'BUSY' as the response.
When a user hits a particular API endpoint on server A (say POST /test), I wish to start polling server B's status API every minute until server B returns 'FREE'. The user doesn't need to wait until server B returns a 'FREE' response (polling B is a background job in server A). Once server A gets a 'FREE' response from B, it will send out an email to the user.
How can this be achieved in server A, keeping in mind that the number of concurrent users can grow large?
I suggest you use Agenda. https://www.npmjs.com/package/agenda
With Agenda you can create recurring schedules under which you can schedule pretty much anything, quite flexibly (a sketch follows below).
I suggest you use the request module to make HTTP GET/POST requests.
https://www.npmjs.com/package/request
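A minimal sketch of the Agenda approach is below; the MongoDB connection string, job name, and email address are assumptions, and the call to server B is left as a comment.
const Agenda = require('agenda');

const agenda = new Agenda({ db: { address: 'mongodb://127.0.0.1/agenda' } }); // connection string is an assumption

agenda.define('poll server B', async job => {
  const { userEmail } = job.attrs.data;
  // Call server B's /status endpoint here; if it answers 'FREE',
  // send the email to userEmail and stop this recurring job:
  // await job.remove();
});

(async () => {
  await agenda.start();
  // Kick this off from the POST /test handler for each user.
  await agenda.every('1 minute', 'poll server B', { userEmail: 'user@example.com' });
})();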
Going from the example in the Node.js docs, I'd go with something like the code here. I tested it and it works. BTW, I'm assuming here that the API response is something like {"status":"BUSY"} or {"status":"FREE"}.
const http = require('http');

const poll = {
  pollB: function() {
    http.get('http://serverB/status', (res) => {
      const { statusCode } = res;
      let error;
      if (statusCode !== 200) {
        error = new Error(`Request Failed.\n` +
          `Status Code: ${statusCode}`);
      }
      if (error) {
        console.error(error.message);
        res.resume();
      } else {
        res.setEncoding('utf8');
        let rawData = '';
        res.on('data', (chunk) => { rawData += chunk; });
        res.on('end', () => {
          try {
            const parsedData = JSON.parse(rawData);
            // The important logic comes here
            if (parsedData.status === 'BUSY') {
              setTimeout(poll.pollB, 10000); // request again in 10 secs
            } else {
              // Call the background process you need to
            }
          } catch (e) {
            console.error(e.message);
          }
        });
      }
    }).on('error', (e) => {
      console.error(`Got error: ${e.message}`);
    });
  }
}

poll.pollB();
You probably want to play with this script and get rid of unnecessary code for you, but that's homework ;)
Update:
For coping with a lot of concurrency in Node.js, I'd recommend implementing a cluster or using a framework. Here are some links to start researching the subject:
How to fully utilise server capacity for Node.js Web Apps
How to Create a Node.js Cluster for Speeding Up Your Apps
Node.js v7.10.0 Documentation :: cluster
ActionHero.js :: Fantastic node.js framework for implementing an API, background tasks, cluster using http, sockets, websockets
Use a library like request, superagent, or restify-clients to call server B. I would recommend you avoid polling and instead use a webhook when calling B (assuming you also author B); a rough sketch of that approach follows below. If you can't change B, then setTimeout can be used to schedule subsequent calls on a 1-second interval.
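Sketch of the webhook alternative on server A's side; the route name, port, and notification logic are assumptions for illustration, and it presumes server B can be told to call this endpoint once it becomes FREE.
const express = require('express');
const app = express();
app.use(express.json());

// Server B would POST here when it becomes FREE, instead of A polling /status.
app.post('/callbacks/server-b-free', (req, res) => {
  // Look up which users were waiting and email them here.
  console.log('Server B reported FREE:', req.body);
  res.sendStatus(204);
});

app.listen(3000);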
