Firebase Functions timeout when querying AWS RDS PostgreSQL database - node.js

I am trying to query an Amazon RDS database from a Firebase Node JS cloud function. I built the query and can successfully run the code locally using firebase functions:shell. However, when I deploy the function and call it from client-side js on my site I receive errors on both the client and server side.
Client-side:
Error: internal
Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Fetch API cannot load https://us-central1-*****.cloudfunctions.net/query due to access control checks.
Failed to load resource: Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Server-side:
Function execution took 60004 ms, finished with status: 'timeout'
I believe the issue has two parts:
CORS
pool.query() is async
I have looked at multiple questions for a CORS solution, here and here for example, but none of the solutions have worked for me. With regard to pool.query() being async, I believe I am handling it correctly; however, neither the result nor an error is printed to the server's logs.
Below is all the relevant code from my project.
Client-side:
var queryRDS = firebase.functions().httpsCallable('query');
queryRDS({
    query: document.getElementById("search-input").value
})
    .then(function (result) {
        if (result) {
            console.log(result)
        }
    })
    .catch(function (error) {
        console.log(error);
    });
Server-side:
const functions = require('firebase-functions');
const { Pool } = require('pg');

const pool = new Pool({
    user: 'postgres',
    host: '*****.*****.us-west-2.rds.amazonaws.com',
    database: '*****',
    password: '*****',
    port: 5432
})

exports.query = functions.https.onCall((data, context) => {
    // This is not my real query, I just changed it for the
    // simplicity of this question
    var query = "Select * FROM table"
    pool.query(query)
        .then(result_set => {
            console.log(result_set)
            return result_set
        }).catch(err => {
            console.log(err)
            return err
        })
})
I know everything works up until pool.query(); based on my logs it seems that neither the .then() nor the .catch() is ever reached, and the returned values never make it to the client side.
Update:
I increased the timeout of the Firebase Functions from 60s to 120s and changed my server function code by adding a return statement before pool.query():
return pool.query(query)
    .then(result_set => {
        console.log(result_set)
        return result_set
    }).catch(err => {
        console.log("Failed to execute query: " + err)
        return err
    })
I now get an error message reading Failed to execute query: Error: connect ETIMEDOUT **.***.***.***:5432, with the IP address being that of my AWS RDS database. It seems this might have been the underlying problem all along, but I am not sure why the RDS connection is timing out.

CORS should be handled automatically by the onCall handler. The CORS error message is likely inaccurate and a side effect of the function timing out, as the server-side error shows.
That being said, according to the Cloud Functions documentation on function timeouts, the default timeout for Cloud Functions is 60 seconds, which corresponds to the ~60000 ms in your error message. This means 1 minute is not enough for your function to execute such a query, which makes sense if you consider that the function is accessing an external provider, the Amazon RDS database.
In order to fix it you will have to redeploy your function with a flag for setting the function execution timeout, as follows:
gcloud functions deploy FUNCTION_NAME --timeout=TIMEOUT
The value of TIMEOUT can be anything up to 540, the maximum number of seconds (9 minutes) that Cloud Functions allows before timing out.
NOTE: This could also be mitigated by deploying your function to the location closest to where your Amazon RDS database is located. You can check this link for the locations available for Cloud Functions, and you can use --region=REGION on the deploy command to specify the region to deploy to.
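If you deploy through the Firebase CLI rather than gcloud, the timeout (and memory) can also be raised in code via runWith. A minimal sketch, assuming firebase-functions v2+ and an async rewrite of the handler (the 1GB memory value is just an illustration):

const functions = require('firebase-functions');

// runWith raises the execution timeout (up to 540 seconds) without
// needing any flags on the deploy command.
exports.query = functions
    .runWith({ timeoutSeconds: 540, memory: '1GB' })
    .https.onCall(async (data, context) => {
        const result_set = await pool.query('SELECT * FROM table');
        // Return only the rows; the full pg result object may not
        // serialize cleanly back to the client.
        return result_set.rows;
    });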

Related

Connections from mongoose createConnection not getting closed. Multi tenant app

I've been creating a multi-tenant app where I create database connections on the fly, as soon as I resolve the tenant's database connection string from the request that has just hit the server.
It's working as expected, but the connections keep adding up and are never getting disconnected.
From what I've been reading, it seems like mongoose.connect manages the connections but mongoose.createConnection doesn't; I'm not sure if my understanding is correct here.
I thought of creating my own connection pool with a map in memory and using the connection from the map if it already exists there, but I'm not sure if this is a good approach.
Does anyone know if there is an npm connection pool package already built for this issue? Or any implementation ideas?
I also thought about closing each connection manually when the request lifecycle ends, but it would hurt performance to connect and disconnect from mongo on every request instead of using a connection pool.
Here is the part of the code where I create the connection; nothing special here, because I'm always creating a new connection.
// ... Resolve connection string from request
let tentantConn;
try {
    // One connection per tenant
    tentantConn = await mongoose.createConnection(
        decrypt(tenant.dbUrl),
        {
            useNewUrlParser: true,
            useUnifiedTopology: true
        });
} catch (e) {
    req.log.info({ message: `Unauthorized - Error connecting to tenant database: ${currentHostname}`, error: e.message });
    return reply.status(401).send({ message: `Unauthorized - Error connecting to tenant database: ${currentHostname}`, error: e.message });
}
// ...
// ...
The connection pool is implemented on the driver level:
https://github.com/mongodb/node-mongodb-native/blob/main/src/cmap/connection_pool.ts
By default it opens 5 connections per server. You can change the pool size, but you cannot disable the pool.
Now, terminology is a bit confusing, as a single mongodb server / cluster can have multiple databases. They share the same connection string - the same 5 connections from the pool regardless of the number of databases.
Assuming your tenants have individual clusters and connect to different mongodb servers, in order to close these connections you need to explicitly close each connection you created (mongoose.connection.close() would only close the default connection, which createConnection does not use):
await tentantConn.close()
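For illustration, a minimal sketch of bounding and releasing the pool per tenant, assuming mongoose 6+ (where the driver option is named maxPoolSize and connections expose asPromise()); withTenantConnection is a hypothetical helper, not part of the question's code:

const mongoose = require('mongoose');

// Hypothetical helper: open a tenant connection with a bounded pool,
// run some work against it, then close it so the pooled sockets are released.
async function withTenantConnection(connStr, work) {
    const conn = await mongoose.createConnection(connStr, {
        maxPoolSize: 5 // driver-level pool size for this tenant
    }).asPromise();
    try {
        return await work(conn);
    } finally {
        await conn.close(); // closes every pooled socket for this tenant
    }
}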
Took me a few days to get back to this issue, but I was able to tweak my code and now the connection count on MongoDB Atlas seems to be stable. I'm not super happy about using a global variable to fix this, but it is solving my issue for now.
async function switchTenantConnection(aConnStr, aDbName, aAsyncOpenCallback) {
    // Loose != also treats undefined as "no connection yet";
    // a strict !== null check would miss the initial undefined value.
    const hasConn = global.connectionPoolTest != null;
    if (!hasConn) {
        const tentantConn = await getTenantConnectionFromEncryptStr(aConnStr);
        if (aAsyncOpenCallback) {
            tentantConn.once('open', aAsyncOpenCallback);
        }
        tentantConn.once('disconnected', async function () {
            global.connectionPoolTest = null;
        });
        tentantConn.once('error', async function () {
            global.connectionPoolTest = null;
        });
        global.connectionPoolTest = { dbName: aDbName, connection: tentantConn, createdAt: new Date() };
        return tentantConn;
    }
    return global.connectionPoolTest.connection.useDb(aDbName);
}
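A hypothetical call site from the request handler, reusing tenant.dbUrl from the earlier snippet (tenant.dbName is an assumed field here):

// Returns either a freshly created connection or the cached one
// switched to the tenant's database via useDb().
const conn = await switchTenantConnection(tenant.dbUrl, tenant.dbName, null);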

Axios always time out on AWS Lambda for a particular API

Describe the issue
I'm not really sure if this is an Axios issue or not. The following code runs successfully on my local development machine but always times out when I run it from the cloud (e.g. AWS Lambda). The same thing happens when I run it on repl.it.
I can confirm that AWS Lambda has internet access and it works for any other API but this one:
https://www.target.com.au/ws-api/v1/target/products/search?category=W95362
Example Code
https://repl.it/repls/AdeptFluidSpreadsheet
const axios = require('axios');

const handler = async () => {
    const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';
    const response = await axios.get(url, { timeout: 10000 });
    console.log(response.data.data.productDataList);
}

handler();
Environment
Axios Version: 0.19.2
Runtime: nodejs12x
Update 1
I tried the native require('https') and it times out on both localhost and the cloud server. Please find sample code here: https://repl.it/repls/TerribleViolentVolume
const https = require('https');

const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';

https.get(url, res => {
    var body = '';
    res.on('data', chunk => {
        body += chunk;
    });
    res.on('end', () => {
        var response = JSON.parse(body);
        console.log("Got a response: ", response);
    });
}).on('error', e => {
    console.log("Got an error: ", e);
});
Again, I can confirm that same code works on any other API.
Update 2
I suspect this is something server-side, as it also behaves very weirdly with curl:
curl from local -> 403 access denied
curl from local with User-Agent header -> success
curl from cloud server -> 403 access denied
It must be server-side validation, something related to AkamaiGHost.
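Given those curl results, one thing worth trying is sending a browser-like User-Agent from axios as well. A sketch, assuming the endpoint gates only on that header (the header value below is illustrative):

const axios = require('axios');

const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';

// curl only succeeded once a User-Agent was set, so mimic a browser here.
axios.get(url, {
    timeout: 10000,
    headers: {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/537.36'
    }
})
    .then(response => console.log(response.data.data.productDataList))
    .catch(error => console.log('Request failed:', error.message));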
You have probably placed your Lambda function in a VPC without internet access to the outside world. Check the VPC section in your Lambda configuration and set up an internet gateway accordingly.
You should try wrapping the axios call in a try/catch; maybe that will catch the issue.
const axios = require('axios');

const handler = async () => {
    try {
        const url = 'https://www.target.com.au/ws-api/v1/target/products/search?category=W95362';
        const response = await axios.get(url, { timeout: 10000 });
        console.log(typeof (response));
        console.log(response);
    } catch (e) {
        console.log(e, "error api call");
    }
}

handler();
As suggested by Akshay, you can use a try and catch block to get the error. Maybe that helps you out.
Have you configured Error Handling for Asynchronous Invocation?
To configure error handling follow the below steps:
Open the Lambda console Functions page.
Choose a function.
Under Asynchronous invocation, choose Edit.
Configure the following settings.
Maximum age of event – The maximum amount of time Lambda retains an event in the asynchronous event queue, up to 6 hours.
Retry attempts – The number of times Lambda retries when the function returns an error, between 0 and 2.
Choose Save.
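For reference, the same settings can also be applied from the AWS CLI; a sketch with a placeholder function name:

aws lambda put-function-event-invoke-config \
    --function-name my-function \
    --maximum-event-age-in-seconds 3600 \
    --maximum-retry-attempts 0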
axios is just a Promise-based HTTP client for the browser and node.js, and since you set timeout: 10000, I believe the timeout is not coming from its end.
Your API
https://www.target.com.au/ws-api/v1/target/products/search?category=W95362
is working fine in the browser and rendering JSON data.
The Lambda function timeout can be raised to at most 15 minutes (note the default is only 3 seconds), which should be plenty for the response. There may be another issue.
Make sure you have set the other configurations, like permissions etc., as suggested in the documentation.
Here you can check the default limits for AWS Lambda.

Node function in AWS Lambda timing out

I'm trying to call a lambda function written in Node.js, hosted in the SAM local environment. The function connects to a locally hosted MySQL database.
The code is as follows:
var mysql = require('mysql');

exports.handler = (event, context, callback) => {
    let id = (event.pathParameters || {}).division || false;
    var con = mysql.createConnection({
        host: "host.docker.internal",
        user: "root",
        password: "root",
        database: "squashprod"
    });
    switch (event.httpMethod) {
        case "GET":
            con.connect(function (err) {
                if (err) throw err;
                con.query("SELECT * FROM players where division_id = 1",
                    function (err, result, fields) {
                        if (err) throw err;
                        //console.log(result);
                        return callback(null, {body: "This message does not work"});
                    }
                );
            });
            // return callback(null, {body: "This message works"});
            break;
        default:
            // Send HTTP 501: Not Implemented
            console.log("Error: unsupported HTTP method (" + event.httpMethod + ")");
            callback(null, { statusCode: 501 })
    }
}
However, the callback (with the message "This message does not work") never comes back. I know it's reaching the DB, as the console.log call prints the result. When this code runs I get an internal server error in the browser and the following messages from SAM Local:
2018-09-13 20:46:18 Function 'TableGetTest' timed out after 3 seconds
2018-09-13 20:46:20 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received: b''
2018-09-13 20:46:20 127.0.0.1 - - [13/Sep/2018 20:46:20] "GET /TableGetTest/2 HTTP/1.1" 502 -
2018-09-13 20:46:20 127.0.0.1 - - [13/Sep/2018 20:46:20] "GET /favicon.ico HTTP/1.1" 403 -
If I comment out the call to the DB and just go with the callback that says "This message works", then there is no timeout and that message appears in the browser.
I know the DB code is sound, as it works standalone. I feel it's got something to do with the callback, but I don't know Node well enough to understand why.
I'm pulling what little hair I've got out. Any help would be greatly appreciated!
I had the same problem and here is how I solved it.
The first problem is that the time is not enough for the cold start.
Increase the execution time of your Lambda; the initial connection setup will take longer.
Further,
you need to close the connection once you are done with the query. Otherwise the Node event loop is never empty, which makes Lambda assume the function is still working.
I resolved it in two ways:
Close all connections as soon as everything is complete (see the sketch below).
Use Sequelize rather than the plain mysql library. Sequelize helps maintain connection pools and share connections.
https://www.npmjs.com/package/sequelize
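A minimal sketch of the first fix applied to the question's handler: con.end() flushes any pending queries and then closes the socket, letting the event loop drain:

con.connect(function (err) {
    if (err) return callback(err);
    con.query("SELECT * FROM players where division_id = 1",
        function (err, result) {
            // Close the connection so the event loop can empty;
            // otherwise Lambda keeps waiting until it times out.
            con.end(function () {
                if (err) return callback(err);
                return callback(null, { statusCode: 200, body: JSON.stringify(result) });
            });
        }
    );
});

Alternatively, setting context.callbackWaitsForEmptyEventLoop = false makes Lambda return as soon as the callback fires even if sockets are still open, though closing connections explicitly is the cleaner fix.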
Hope it helps.

Request Timeout while uploading image

I'm developing a web application using Go for the web server, with a frontend built in React and served by Node.js. I have two issues when uploading images that are big (currently I'm testing with a 2.9 MB file). The first is a request timeout: within 10 seconds the browser reports "request timeout", even though the upload is successfully written to the database. The second issue is that the request is duplicated, so it is saved to the database twice. I have searched on Stack Overflow but nothing seems to work.
First Option
Here is the code using an ajax call, i.e. fetch from isomorphic-fetch, following the suggestion to implement a timeout wrapper at https://github.com/github/fetch/issues/175
static addEvent(events) {
    let config = {
        method: 'POST',
        body: events
    };
    function timeout(ms, promise) {
        return new Promise(function (resolve, reject) {
            setTimeout(function () {
                reject(new Error("timeout"))
            }, ms)
            promise.then(resolve, reject)
        })
    }
    return timeout(120000, fetch(`${SERVER_HOSTNAME}:${SERVER_PORT}/event`, config))
        .then(function (response) {
            if (response.status >= 400) {
                return {
                    "error": "Bad Response from Server"
                };
            } else if (response.ok) {
                browserHistory.push({
                    pathname: '/events'
                });
            }
        });
}
The request timeout still occurs within 10 seconds.
Second Option
I have tried a different node module for the ajax call, i.e. axios, since it has a timeout option, but this also didn't fix the timeout issue.
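For reference, a minimal sketch of that axios variant, reusing events and the SERVER_HOSTNAME/SERVER_PORT constants from the fetch version above:

const axios = require('axios');

// axios aborts the request itself once the timeout elapses,
// rejecting the promise with a timeout error.
axios.post(`${SERVER_HOSTNAME}:${SERVER_PORT}/event`, events, { timeout: 120000 })
    .then(response => {
        browserHistory.push({ pathname: '/events' });
    })
    .catch(error => {
        console.log(error);
    });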
Third Option
I tried to set read and write timeouts on the server side, similar to https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
server := &http.Server{
    Addr:         ":9292",
    Handler:      router,
    ReadTimeout:  180 * time.Second,
    WriteTimeout: 180 * time.Second,
}
Again I'm getting a request timeout on the browser side within 10 seconds.
What should I do to fix this, or can you point out where I made a mistake?

How to check if ElasticSearch client is connected?

I'm working with elasticsearch-js (NodeJS) and everything works just fine as long as ElasticSearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a bit of a synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Looking at a snippet like this:
var elasticClient = new elasticsearch.Client({
    host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});

// Note, I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
// ...
Is there some mechanism I can pass in on the client config to fail fast, or some test I can perform on the client to make sure it's open before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();
getStats.then(function (o) {
    console.log(o);
})
.catch(function (e) {
    console.log(e);
    throw e;
});
Via node-debug, I see the promise rejected when ElasticSearch is down / inaccessible with "Error: No Living connections". When it does connect, o in my then handler seems to have details about the connection state. Would this approach be correct, or is there a preferred way to check connection viability?
Getting stats can be a heavy call just to ensure your client is connected. You should use ping instead; see the 2nd example at https://github.com/elastic/elasticsearch-js#examples
We use ping too, right after instantiating the elasticsearch-js client connection on startup.
// example from above link
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
    host: 'localhost:9200',
    log: 'trace'
});

client.ping({
    // ping usually has a 3000ms timeout
    requestTimeout: Infinity,
    // undocumented params are appended to the query string
    hello: "elasticsearch!"
}, function (error) {
    if (error) {
        console.trace('elasticsearch cluster is down!');
    } else {
        console.log('All is well');
    }
});