Node function in AWS Lambda timing out - node.js

I'm trying to call a Lambda function written in Node.js, hosted in the SAM local environment. The function connects to a locally hosted MySQL database.
The code is as follows:
var mysql = require('mysql');

exports.handler = (event, context, callback) => {
    let id = (event.pathParameters || {}).division || false;

    var con = mysql.createConnection({
        host: "host.docker.internal",
        user: "root",
        password: "root",
        database: "squashprod"
    });

    switch (event.httpMethod) {
        case "GET":
            con.connect(function (err) {
                if (err) throw err;
                con.query("SELECT * FROM players where division_id = 1",
                    function (err, result, fields) {
                        if (err) throw err;
                        //console.log(result);
                        return callback(null, {body: "This message does not work"});
                    }
                );
            });
            // return callback(null, {body: "This message works"});
            break;
        default:
            // Send HTTP 501: Not Implemented
            console.log("Error: unsupported HTTP method (" + event.httpMethod + ")");
            callback(null, { statusCode: 501 });
    }
};
However, the callback (with the message "This message does not work") never fires. I know the DB is being called because the console.log call prints the result. When this code runs I get an internal server error in the browser and the following messages from SAM Local:
2018-09-13 20:46:18 Function 'TableGetTest' timed out after 3 seconds
2018-09-13 20:46:20 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received: b''
2018-09-13 20:46:20 127.0.0.1 - - [13/Sep/2018 20:46:20] "GET /TableGetTest/2 HTTP/1.1" 502 -
2018-09-13 20:46:20 127.0.0.1 - - [13/Sep/2018 20:46:20] "GET /favicon.ico HTTP/1.1" 403 -
If I comment out the call to the DB and just go with the callback that says "This message works", then there is no timeout and that message appears in the browser.
I know the DB code is sound because it works standalone. I feel it's got something to do with the callback, but I don't know Node well enough to understand why.
I'm pulling out what little hair I've got. Any help would be greatly appreciated!

I had the same problem, and here is how I solved it.
First, the default timeout is not enough for a cold start: increase your Lambda's execution timeout, because the initial connection setup takes longer.
Second, you need to close the connection once you are done with the query. Otherwise the open socket keeps Node's event loop non-empty, which makes Lambda assume the function is still doing work.
I resolved it in two ways:
Close all connections as soon as everything is complete.
Use Sequelize rather than the plain mysql library. Sequelize helps maintain connection pools and share connections across invocations.
https://www.npmjs.com/package/sequelize
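The event-loop point can be demonstrated without a database at all. The sketch below is self-contained and all names in it are illustrative (a timer stands in for the open MySQL socket); the process can only exit once end() releases the handle, just as Lambda can only return once the event loop drains:

```javascript
// An open handle (here a timer, in Lambda an open MySQL socket) keeps the
// Node event loop busy, so the runtime thinks work is still in progress.
function makeFakeConnection() {
    var timer = setInterval(function () {}, 1000); // stands in for the socket
    return {
        query: function (cb) { setImmediate(cb, null, [{ id: 1 }]); },
        end: function () { clearInterval(timer); } // releases the handle
    };
}

function queryAndClose(con, cb) {
    con.query(function (err, rows) {
        con.end(); // without this, the process (or Lambda) hangs until timeout
        cb(err, rows);
    });
}

queryAndClose(makeFakeConnection(), function (err, rows) {
    console.log(rows.length); // prints 1, then the process exits cleanly
});
```

With the real mysql library the equivalent fix is calling con.end() before invoking the Lambda callback.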
Hope it helps.

Related

Firebase Functions timeout when querying AWS RDS PostgreSQL database

I am trying to query an Amazon RDS database from a Firebase Node.js cloud function. I built the query and can successfully run the code locally using firebase functions:shell. However, when I deploy the function and call it from client-side JS on my site, I receive errors on both the client and server side.
Client-side:
Error: internal
Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Fetch API cannot load https://us-central1-*****.cloudfunctions.net/query due to access control checks.
Failed to load resource: Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Server-side:
Function execution took 60004 ms, finished with status: 'timeout'
I believe the issue has two parts:
CORS
pool.query() is async
I have looked at multiple questions for a CORS solution, here and here for example, but none of the solutions have worked for me. As for pool.query() being async, I believe I am handling it correctly; however, neither the result nor an error is printed to the server's logs.
Below is all the relevant code from my projects.
Client-side:
var queryRDS = firebase.functions().httpsCallable('query');
queryRDS({
    query: document.getElementById("search-input").value
})
.then(function (result) {
    if (result) {
        console.log(result);
    }
})
.catch(function (error) {
    console.log(error);
});
Server-side:
const functions = require('firebase-functions');
const { Pool } = require('pg');

const pool = new Pool({
    user: 'postgres',
    host: '*****.*****.us-west-2.rds.amazonaws.com',
    database: '*****',
    password: '*****',
    port: 5432
});

exports.query = functions.https.onCall((data, context) => {
    // This is not my real query, I just changed it for the
    // simplicity of this question
    var query = "Select * FROM table";
    pool.query(query)
        .then(result_set => {
            console.log(result_set);
            return result_set;
        }).catch(err => {
            console.log(err);
            return err;
        });
});
I know everything works up until pool.query(); based on my logs it seems that the .then() and .catch() are never reached, and the returns never make it back to the client side.
Update:
I increased the timeout of the Firebase Functions from 60s to 120s and changed my server function code by adding a return statement before pool.query():
return pool.query(query)
    .then(result_set => {
        console.log(result_set);
        return result_set;
    }).catch(err => {
        console.log("Failed to execute query: " + err);
        return err;
    });
I now get an error message reading Failed to execute query: Error: connect ETIMEDOUT **.***.***.***:5432 with the IP address being my AWS RDS database. It seems this might have been the underlying problem all along, but I am not sure why the RDS is giving me a timeout.
CORS should be handled automatically by the onCall handler. The error message about CORS is likely inaccurate, and a result of the function timing out, as the server-side error shows.
That said, according to the Cloud Functions documentation on function timeouts, the default timeout for Cloud Functions is 60 seconds, which translates to the ~60000 ms in your error message. This means that 1 minute is not enough for your function to execute such a query, which makes sense considering the function is accessing an external provider: the Amazon RDS database.
In order to fix it you will have to redeploy your function with a flag for setting the function execution timeout, as follows:
gcloud functions deploy FUNCTION_NAME --timeout=TIMEOUT
The value of TIMEOUT can be anything up to 540, the maximum number of seconds that Cloud Functions allows before timing out (9 minutes).
NOTE: This could also be mitigated by deploying your function to the closest location possible to where your Amazon RDS database is located, you can check this link on what locations are available for Cloud Functions and you can use --region=REGION on the deploy command to specify region to be deployed.
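If the function is deployed through the Firebase CLI rather than gcloud, the same timeout can be set in code via runWith. This is a configuration sketch only; it assumes the pool from the question is in scope and uses the runtime-options API of recent firebase-functions versions:

```javascript
const functions = require('firebase-functions');

// Configuration sketch: allow this callable up to 540 seconds (the maximum)
// before Cloud Functions times it out.
exports.query = functions
    .runWith({ timeoutSeconds: 540 })
    .https.onCall((data, context) => {
        // As in the question's update: return the promise so the result
        // propagates back to the caller.
        return pool.query(data.query);
    });
```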

Looking for direction on a pouchdb error

error:"unauthorized"
id:"_design/db"
message:"You are not a db or server admin."
name:"unauthorized"
ok:true
reason:"You are not a db or server admin."
rev:"137-81fe83389359c1cfb50bf928f3558b81"
status:500
PouchDB is trying to push a design document after a full uninstall/reinstall of the app (so the local PouchDB should have been erased). I am guessing this is in the changes stream somewhere. But the weird part is that CouchDB is on revision 133, not 137.
How do I fix this? I tried a compact, but that didn't work. The only obvious answer I can think of is to manually make a bunch of revisions to the design doc on Couch, so that it's newer than 137.
I ran a search on the changes stream using this code
var http = require('http');
var url = "http://server/db/_changes?style=all_docs";

http.get(url, function (res) {
    var body = '';
    res.on('data', function (chunk) {
        body += chunk;
    });
    res.on('end', function () {
        var test = JSON.parse(body);
        test.results.forEach(function (item, index) {
            if (item.id === "_design/db") {
                console.log(item);
            }
        });
    });
}).on('error', function (e) {
    console.log("Got an error: ", e);
});
And got 1 result, rev 133, the correct one. So where is pouchdb getting this from?
--Edit
Deleting the pouch database seems to fix it until the next app install.
The error status code is 500, which according to the documentation is:
500 - Internal Server Error
The request was invalid, either because the supplied JSON was invalid, or invalid information was supplied as part of the request.
Also, the error message and reason mention that:
message:"You are not a db or server admin."
reason:"You are not a db or server admin."
I think the error might be caused by database admin and member permissions: ordinary database member users/roles cannot PUT design docs; only database admin users/roles can.
You mentioned that:
It's really just because the phone has some future version of the
design doc ...
If the problem were with the revision, you should receive a 409 - Conflict error, NOT a 500 - Internal Server Error.
I'm not sure, it's just an idea.
So it turns out Android now uses Google Drive to make backups of IndexedDB. This was causing the installed version of the app to keep getting a future version of the document after database rollbacks during testing.
The only way around it I found was to do this:
.on('denied', function (result) {
    if (result.doc.error === "unauthorized" && result.doc.id === "_design/db") {
        // catastrophic failure
        var DBDeleteRequest = window.indexedDB.deleteDatabase("_pouch_");
        DBDeleteRequest.onerror = function (event) {
            console.error("Error deleting database.");
            throw new Error("Error deleting database.");
        };
        DBDeleteRequest.onsuccess = function (event) {
            console.log("Database deleted successfully");
            window.location.reload(); // reload the app after purge
        };
    }
})
Even a pouchdb.destroy() would not fully clear the problem. It's a bit of a nuke-from-orbit solution.

Node.js connecting to multiple databases on the fly gives me an error

I am creating an API where I need to connect to different databases on the fly using their credentials. I need functionality similar to MySQL Workbench's "Test Connection". Currently, I need to deal with MySQL and MSSQL Server, and I have to check every permutation and combination of wrong credentials, i.e. if I pass correct credentials (host, username, password, port) but the wrong connector, MSSQL instead of MySQL, that throws an exception.
var db = {
    host      : data.hostName,
    port      : data.port,
    database  : data.database,
    username  : data.userName,
    password  : data.password,
    connector : response[0].node_js_connector
};

var dataSource = new DataSource(db.connector, db);

dataSource.on('connected', function (er) {
    if (er) {
        console.log("reject");
        reject(er);
    } else {
        console.log("resolve");
        resolve('Work With Database');
    }
});

dataSource.on('error', function (er) {
    if (er) {
        console.log("reject1");
        reject(er);
    } else {
        console.log("reject1");
        reject('Not Connected Databse');
    }
});
I have also put the code in a try/catch block to handle the exception; however, I am not able to catch it. Currently, I am getting the following error:
throw new RangeError('Index out of range');
RangeError: Index out of range
at checkOffset (buffer.js:968:11)
at Buffer.readUInt8 (buffer.js:1006:5)
It would be a great help, if someone can assist me in solving this issue.
Thanks in advance.
I used the process.on() method to handle uncaughtException. This handles all exceptions raised during execution, so the uncaughtException handler catches my "Index out of range" exception and lets me reject my dataSource connection:
process.on('uncaughtException', function (err) {
    console.log('UNCAUGHT EXCEPTION - keeping process alive:', err);
    reject('Connection Failed');
});

Azure Mobile Services An unhandled exception occurred. Error: One of your scripts caused the service to become unresponsive

Apologies for my English.
I have a Node.js script that has to send AMQP messages to a device using IoT Hub. I took this script from the Azure IoT GitHub; here is the sample.
Here is my script, based on that one:
console.log("creating the client");
var Client = require('azure-iothub').Client;
console.log("client has been created");
var Message = require('azure-iot-common').Message;
console.log("message has been created");

var connectionString = "HostName=id**.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=***";
console.log(connectionString);
var targetDevice = 'devicesergey';

var client = Client.fromConnectionString(connectionString);

client.open(function (err) {
    if (err) {
        console.error('Could not connect: ' + err.message);
    } else {
        console.log('Client connected');
        var data = JSON.stringify({ text: 'foo' });
        var message = new Message(data);
        console.log("json message is created");
        console.log('Sending message: ' + message.getData());
        client.send(targetDevice, message, printResultFor('send'));
        console.log("message has been sent");
    }
});

function printResultFor(op) {
    return function printResult(err, res) {
        if (err) {
            console.log(op + ' error: ' + err.toString());
        } else {
            console.log(op + ' status: ' + res.constructor.name);
        }
    };
}
That works fine locally and I see messages in my device emulator. But when I try to put it into an Azure Mobile Services API and run it, I see this message in the logs:
An unhandled exception occurred. Error: One of your scripts caused the service to become unresponsive and the service was restarted. This is commonly caused by a script executing an infinite loop or a long, blocking operation. The service was restarted after the script continuously executed for longer than 5000 milliseconds. at process.Server._registerUncaughtExceptionListenerAndCreateHttpServer._onUncaughtException (D:\home\site\wwwroot\node_modules\azure-mobile-services\runtime\server.js:218:17) at process.EventEmitter.emit (events.js:126:20)
And sometimes I see this IIS error.
I know that the problem occurs in this call: client.open(function....
I've even tried to leave only client.open() and send the messages outside of this function, but in that case I see "client is not connected".
I asked about this on GitHub and they advised me to ask here. Maybe someone knows how to solve this issue (with the script or with Azure). I would be very grateful!
Thank you!
A Mobile Services custom API is a script that exposes the functionality of the express.js library; please see the section "Overview of custom APIs" in the official document "Work with a JavaScript backend mobile service".
I reproduced the issue successfully. I guess your script was not wrapped in the code below as the body block, and did not send a response back to the client (e.g. the browser):
exports.get = function (request, response) {
    // The body block
    ....
    response.send(200, "<response-body>");
};
For more details of Mobile Service Custom API, please see https://msdn.microsoft.com/library/azure/dn280974.aspx.
Update:
I changed your code as below.
In order to facilitate the test, I also changed the permission for the API so that I can access the link https://<mobile-service-name>.azure-mobile.net/api/test with a browser.
I've just tried to execute my script on the new Azure MS and it was unsuccessful.
I will write out my step-by-step actions; maybe you can spot something wrong, because I'm not so good at Node.js.
Add a new Azure MS with a new SQL database.
Add a new API "dev", with access set to "everyone" for all endpoints. Here is the source code:
exports.get = function (request, response) {
    console.log("creating the client");
    var Client = require('azure-iothub').Client;
    console.log("client has been created");
    var Message = require('azure-iot-common').Message;
    console.log("message has been created");
    var connectionString = "HostName=i***.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey***";
    console.log(connectionString);
    var targetDevice = 'devicesergey';
    var client = Client.fromConnectionString(connectionString);
    client.open(function (err) {
        if (err) {
            console.error('Could not connect: ' + err.message);
        } else {
            console.log('Client connected');
            var data = JSON.stringify({ text: 'foo' });
            var message = new Message(data);
            console.log("json message is created");
            console.log('Sending message: ' + message.getData());
            client.send(targetDevice, message, printResultFor('send'));
            console.log("message has been sent");
        }
    });
    response(200, "Hello, world!");
};

function printResultFor(op) {
    return function printResult(err, res) {
        if (err) {
            console.log(op + ' error: ' + err.toString());
        } else {
            console.log(op + ' status: ' + res.constructor.name);
        }
    };
}
If I try to execute this it says "no azure-iothub" and "no azure-iot-common", so I need to use git to add these npm modules.
I cloned the repository to my local dir using the Azure MS git access https://id.scm.azure-mobile.net/id.git, entered the "API" folder and added the npm modules.
Then I performed "Rescan", "Save changes", "Commit" and "Push".
After these actions I execute my script at "http://id**.mobile-services.net/api/dev" and see the error "500.1013" and these messages in the logs (the id varies):
An unhandled exception occurred. Error: One of your scripts caused the
service to become unresponsive and the service was restarted. This is
commonly caused by a script executing an infinite loop or a long,
blocking operation. The service was restarted after the script
continuously executed for longer than 5000 milliseconds. at
process.Server._registerUncaughtExceptionListenerAndCreateHttpServer._onUncaughtException
(D:\home\site\wwwroot\node_modules\azure-mobile-services\runtime\server.js:218:17)
at process.EventEmitter.emit (events.js:126:20)
I can't realize what I'm doing wrong
UPDATE:
I've tried to use the Kudu console to install the npm modules, but it returns many errors. If I figured it out correctly, I need to update my Node.js and npm, but I don't know how to do this and didn't manage to find a solution.
Here are the logs:
I lack the reputation to paste the log output.
I've tried to do these actions, but it doesn't help:
At the root of the repo, you'll find a .deployment file that has:
command = ..\ZumoDeploy.cmd
Change it to:
command = deploy.cmd
And create a deploy.cmd next to it containing:
set NPM_JS_PATH=%ProgramFiles(x86)%\npm\1.4.9\node_modules\npm\bin\npm-cli.js
..\ZumoDeploy.cmd
Commit both files and push.
I'm confused. How is this possible? Azure Mobile Services doesn't permit installing the azure-iot-hub npm module. What can I do about this issue?
UPDATE2:
Peter Pan - MSFT, you advised me to use the Kudu debug console to install the necessary npm modules, but when I try to do that I see errors.
I raised this with the npm team on GitHub; they say the npm version that Azure uses is completely unsupported.
https://github.com/npm/npm/issues/12210#event-615573997
UPDATE 3 (04/12/2016)
I've solved this issue a different way: I created my own Node.js script that listens on a port, reads GET params (deviceId and message) and sends D2C messages.
Unfortunately, I still can't get through the Azure issue.
UPDATE 4
Peter Pan gave me advice on how to use another version of Node.js and npm, and I have now successfully installed the necessary npm modules. But now the Azure Mobile script APIs don't work; any script I request in my browser shows {"code":404,"error":"Error: Not Found"}.
Maybe I deleted something while trying these steps.

How to check if ElasticSearch client is connected?

I'm working with elasticsearch-js (Node.js) and everything works just fine as long as ElasticSearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a bit of a synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Looking at a snippet like this:
var elasticClient = new elasticsearch.Client({
    host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});

// Note, I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
/// ...
Is there some mechanism to send in on the client config to fail fast, or some test I can perform on the client to make sure it's open before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();
getStats.then(function(o){
console.log(o);
})
.catch(function(e){
console.log(e);
throw e;
});
Via node-debug, I see the promise rejected when ElasticSearch is down/inaccessible with "Error: No Living connections". When it does connect, o in my then handler has details about the connection state. Is this approach correct, or is there a preferred way to check connection viability?
Getting stats can be a heavy call just to ensure your client is connected. You should use ping instead; see the second example at https://github.com/elastic/elasticsearch-js#examples
We are using ping too, after instantiating elasticsearch-js client connection on start up.
// example from above link
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
    host: 'localhost:9200',
    log: 'trace'
});

client.ping({
    // ping usually has a 3000ms timeout
    requestTimeout: Infinity,
    // undocumented params are appended to the query string
    hello: "elasticsearch!"
}, function (error) {
    if (error) {
        console.trace('elasticsearch cluster is down!');
    } else {
        console.log('All is well');
    }
});
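If you prefer the promise style used elsewhere in the question, the callback-based ping can be wrapped. This is just a sketch (isAlive is a made-up helper name, and the stand-in client in the last lines exists only so the example is self-contained):

```javascript
// Wrap the callback-based ping in a Promise resolving to a boolean, so
// callers can branch on connectivity instead of catching a rejection.
function isAlive(client) {
    return new Promise(function (resolve) {
        client.ping({ requestTimeout: 3000 }, function (error) {
            resolve(!error);
        });
    });
}

// Illustration with a stand-in client whose ping always succeeds:
isAlive({ ping: function (opts, cb) { cb(null); } })
    .then(function (ok) { console.log(ok); }); // prints true
```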
