Saving data to Postgres from AWS Lambda - node.js

I'm building a Lambda function that is supposed to save game feedback, like a performance grade, into my Postgres database, which is in AWS RDS.
I'm using Node.js with TypeScript, and the function kind of works, but in a strange way.
I made an API Gateway so I can POST data to a URL, and the Lambda processes it and saves it. The thing is, when I POST data, the function seems to work until it reaches a maximum number of connected clients, and then it seems to lose the other clients' data.
Another problem is that every time I POST data I get a response saying there was an Internal Server Error, with an 'X-Cache: Error from cloudfront' header. For a GET request I figured out that this response was caused by an incorrect response format, but in this case I fixed the response format and I still get the problem...
Sometimes I get a timeout response.
My function's code:
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { QueryConfig, Client, Pool, PoolConfig } from "pg";

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  // context.callbackWaitsForEmptyEventLoop = false;
  const config: PoolConfig = {
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DB,
    password: process.env.PG_PASS,
    port: parseInt(process.env.PG_PORT),
    idleTimeoutMillis: 0,
    max: 10000
  };
  const pool = new Pool(config);
  let postdata = event.body || event;
  console.log("POST DATA:", postdata);
  if (typeof postdata == "string") {
    postdata = JSON.parse(postdata);
  }
  let query: QueryConfig = <QueryConfig>{
    name: "get_all_questions",
    text:
      "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
    values: [
      parseInt(postdata["game_id"]),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"]),
      parseInt(postdata["game_stars"])
    ]
  };
  console.log("Before Connect");
  let con = await pool.connect();
  let res = await con.query(query);
  console.log("res.rowCount:", res.rowCount);
  if (res.rowCount != 1) {
    cb(new Error("Error saving the feedback."), {
      statusCode: 400,
      body: JSON.stringify({
        message: "Error saving data!"
      })
    });
  }
  cb(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: "Saved successfully!"
    })
  });
  console.log("The End");
};
Then the log from CloudWatch for the max-number-of-clients error looks like this:
2018-08-03T15:56:04.326Z b6307573-9735-11e8-a541-950f760c0aa5 (node:1) UnhandledPromiseRejectionWarning: error: sorry, too many clients already
at u.parseE (/var/task/webpack:/node_modules/pg/lib/connection.js:553:1)
at u.parseMessage (/var/task/webpack:/node_modules/pg/lib/connection.js:378:1)
at Socket.<anonymous> (/var/task/webpack:/node_modules/pg/lib/connection.js:119:1)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at Socket.Readable.push (_stream_readable.js:208:10)
at TCP.onread (net.js:607:20)
Can any of you guys help me with this strange problem?
Thanks

Well, for one thing, you need to move the pool creation above the handler, like so:
const config: PoolConfig = {
  user: process.env.PG_USER,
  ...
};
const pool = new Pool(config);

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  // ...etc
The way you have it, you are creating a new pool on every invocation. If you create the pool outside the handler, Lambda gets a chance to reuse the pool across invocations that land on the same container, instead of opening a fresh set of connections every time.
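For illustration, a minimal sketch of the reshaped handler, assuming the same environment variables and table as in the question (capping the pool at one connection and returning the response from the async handler instead of calling the callback are my own adjustments, not something the pool change requires):

import { APIGatewayEvent, Handler } from "aws-lambda";
import { Pool, PoolConfig } from "pg";

const config: PoolConfig = {
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DB,
  password: process.env.PG_PASS,
  port: parseInt(process.env.PG_PORT || "5432", 10),
  max: 1 // a Lambda container handles one request at a time, so one connection is enough
};

// created once per container and reused across invocations
const pool = new Pool(config);

export const insert: Handler = async (event: APIGatewayEvent) => {
  const postdata =
    typeof event.body === "string" ? JSON.parse(event.body) : event.body;

  const res = await pool.query(
    "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
    [
      parseInt(postdata["game_id"], 10),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"], 10),
      parseInt(postdata["game_stars"], 10)
    ]
  );

  // an async handler can simply return the response, so only one response is ever sent
  return {
    statusCode: res.rowCount === 1 ? 200 : 400,
    body: JSON.stringify({
      message: res.rowCount === 1 ? "Saved successfully!" : "Error saving data!"
    })
  };
};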

Related

Adding firebase firestore entry failed : 7 PERMISSION_DENIED

I'm trying to use a Firebase Cloud Function to create a document in the Firestore database from my Node.js environment with Express.js, but it fails with the error below in the function logs.
Error: Process exited with code 16
at process.on.code (/layers/google.nodejs.functions-framework/functions-framework/node_modules/@google-cloud/functions-framework/build/src/invoker.js:275:22)
at process.emit (events.js:198:13)
at process.EventEmitter.emit (domain.js:448:20)
at process.exit (internal/process/per_thread.js:168:15)
at Object.sendCrashResponse (/layers/google.nodejs.functions-framework/functions-framework/node_modules/@google-cloud/functions-framework/build/src/logger.js:37:9)
at process.on.err (/layers/google.nodejs.functions-framework/functions-framework/node_modules/@google-cloud/functions-framework/build/src/invoker.js:271:22)
at process.emit (events.js:198:13)
at process.EventEmitter.emit (domain.js:448:20)
at emitPromiseRejectionWarnings (internal/process/promises.js:140:18)
at process._tickCallback (internal/process/next_tick.js:69:34)
firebase.ts file:
import * as admin from 'firebase-admin'
import * as functions from 'firebase-functions'

admin.initializeApp({
  credential: admin.credential.cert({
    privateKey: functions.config().private.key.replace(/\\n/g, '\n'),
    projectId: functions.config().project.id,
    clientEmail: functions.config().client.email
  }),
  databaseURL: 'https://app-id.firebaseio.com'
})

const db = admin.firestore()

export { admin, db }
controller.ts:
import { Response } from 'express'
import { db } from './config/firebase'

type EntryType = {
  title: string,
  text: string,
}

type Request = {
  body: EntryType,
  params: { entryId: string }
}

const addEntry = async (req: Request, res: Response) => {
  const { title, text } = req.body
  try {
    const entry = db.collection('entries').doc()
    const entryObject = {
      id: entry.id,
      title,
      text,
    }
    await entry.set(entryObject).catch(error => {
      return res.status(400).json({
        status: 'error',
        message: error.message
      })
    })
    return res.status(200).json({
      status: 'success',
      message: 'entry added successfully',
      data: entryObject
    })
  } catch (error) {
    console.log(error);
    return res.status(500).json(error.message)
  }
}
I'm receiving the following response from this trigger:
{
  "status": "error",
  "message": "7 PERMISSION_DENIED: Invalid project number: 113102533737774060828"
}
Is this related to the Cloud Firestore rules in Google Cloud? I'm fairly new to Google Cloud Functions.
Any suggestions would be appreciated.
This typically means that the credentials you're using are not for the project you're trying to use them on.
Check your functions.config().private.key to ensure it is indeed for the project you run this code on.
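As a quick sanity check (just a sketch built on the config keys shown in firebase.ts), you can compare the project id you configured against the project encoded in the service-account email, since a service account created in a project normally gets an address of the form name@<project-id>.iam.gserviceaccount.com:

import * as functions from 'firebase-functions'

const cfg = functions.config()
// a service-account email usually looks like name@<project-id>.iam.gserviceaccount.com
const keyProject = cfg.client.email.split('@')[1].split('.')[0]
if (keyProject !== cfg.project.id) {
  console.warn(`Credentials belong to project "${keyProject}" but the config points at "${cfg.project.id}"`)
}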

Node.JS aws-sdk getting "socket hang up" error

I am trying to get the pricing information for AmazonEC2 machines using the "@aws-sdk/client-pricing-node" package.
Each time I send a request to get pricing information, I process it and send the next request, until all the information has been retrieved (no NextToken anymore). The following is my code.
const https = require('https');
const {
  PricingClient,
} = require('@aws-sdk/client-pricing-node/PricingClient');
const {
  GetProductsCommand,
} = require('@aws-sdk/client-pricing-node/commands/GetProductsCommand');

const agent = new https.Agent({
  maxSockets: 30,
  keepAlive: true,
});

const pricing = new PricingClient({
  region: "us-east-1",
  httpOptions: {
    timeout: 45000,
    connectTimeout: 45000,
    agent,
  },
  maxRetries: 10,
  retryDelayOptions: {
    base: 500,
  },
});

const getProductsCommand = new GetProductsCommand({ ServiceCode: 'AmazonEC2' });

async function sendRequest() {
  let result = false;
  while (!result) {
    try {
      const data = await pricing.send(getProductsCommand);
      result = await handleReqResults(data);
    } catch (error) {
      console.error(error);
    }
  }
}

async function handleReqResults(data) {
  // some data handling code here
  // ...
  // return false when there is "NextToken" in the response data
  if (data.NextToken) {
    setNextToken(data.NextToken);
    return false;
  }
  return true;
}
The code will run for a while (variable time) and then stop with the following error:
{ Error: socket hang up
at createHangUpError (_http_client.js:332:15)
at TLSSocket.socketOnEnd (_http_client.js:435:23)
at TLSSocket.emit (events.js:203:15)
at TLSSocket.EventEmitter.emit (domain.js:448:20)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
code: 'ECONNRESET',
'$metadata': { retries: 0, totalRetryDelay: 0 } }
I tried running it on a GCP VM instance and there was no such problem, but the problem happens when I run it on my local machine.
Does anyone have any idea how to solve this?
(BTW: my node version is v10.20.1)

Socket Hangup Error In Node JS On Force API Timeout

I am using the request module in Node.js (v8.12) to call a third-party API. Since the API is not very reliable, and for lack of a better option, I am timing the call out after 2 seconds if there is no response. But in doing so I get a socket hang up error. Below are the code and stack trace.
const options = {
  url: resource_url,
  rejectUnauthorized: false,
  timeout: 2000,
  method: 'GET',
  headers: {
    'content-Type': 'application/json',
  }
};

return new Promise(function (resolve, reject) {
  request(options, function (err, res, body) {
    if (!err) {
      resolve(JSON.parse(body.data));
    } else {
      if (err.code === 'ETIMEDOUT' || err.code == 'ESOCKETTIMEDOUT') {
        resolve(someOldData);
      } else {
        resolve(someOldData);
      }
    }
  });
});
Error: socket hang up
at createHangUpError (_http_client.js:331:15)
at TLSSocket.socketCloseListener (_http_client.js:363:23)
at scope.activate (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/base.js:54:19)
at Scope._activate (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/async_hooks.js:51:14)
at Scope.activate (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/base.js:12:19)
at TLSSocket.bound (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/base.js:53:20)
at emitOne (events.js:121:20)
at TLSSocket.emit (events.js:211:7)
at _handle.close (net.js:554:12)
at TCP.done [as _onclose] (_tls_wrap.js:356:7)
After doing a bit of reading and research I found this article pointing out a similar issue, so I switched to the http module as mentioned in one of the solutions in the article. But switching to the http module did not resolve the issue either. Below is the implementation using http, with its stack trace.
let responseData;
const requestOptions = {
  hostname: resource_host,
  path: resource_path,
  method: 'GET',
  timeout: 2000,
};

return new Promise((resolve, reject) => {
  const requestObject = http.request(requestOptions, (responseObj) => {
    responseObj.setEncoding('utf8');
    responseObj.on('data', (body) => {
      responseData = body;
    });
    responseObj.on('end', () => {
      resolve(responseData);
    });
  });
  requestObject.on('error', (err) => {
    responseData = someOldData;
    resolve(responseData);
  });
  requestObject.on('timeout', () => {
    responseData = someOldData;
    requestObject.abort();
  });
  requestObject.end();
});
Error: socket hang up
at connResetException (internal/errors.js:608:14)
at Socket.socketCloseListener (_http_client.js:400:25)
at <trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/base.js:54:19
at Scope._activate (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/async_hooks.js:51:14)
at Scope.activate (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/base.js:12:19)
at Socket.bound (<trace-log-base-path>/dd-trace/packages/dd-trace/src/scope/base.js:53:20)
at Socket.emit (events.js:322:22)
at Socket.EventEmitter.emit (domain.js:482:12)
at TCP.<anonymous> (net.js:672:12)
I went through multiple SO posts and various other resources on the web, but I am unable to resolve this issue.
Could it be caused by the third party? I tried to reproduce the issue with a dummy server that sleeps for a while after the request is fired and then timing that request out, but I was unable to reproduce it.
I'll be very grateful for any help in this regard.
Removing requestObject.abort() in timeout event block when using http module resolves this issue.
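For illustration, the adjusted version could look like the sketch below (same requestOptions, resource names and someOldData placeholder as in the question); resolving directly in the timeout handler is safe because any later resolve call on the same promise is a no-op:

let responseData;
return new Promise((resolve) => {
  const requestObject = http.request(requestOptions, (responseObj) => {
    responseObj.setEncoding('utf8');
    responseObj.on('data', (body) => { responseData = body; });
    responseObj.on('end', () => resolve(responseData));
  });
  requestObject.on('error', () => resolve(someOldData));
  requestObject.on('timeout', () => {
    // no requestObject.abort() here; just fall back to the stale data
    resolve(someOldData);
  });
  requestObject.end();
});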

How to catch UnhandledPromiseRejectionWarning for GCS WriteStream

Observed Application Behavior
I'm getting an UnhandledPromiseRejectionWarning: Error: Upload failed when using @google-cloud/storage in Node.js.
These errors come when processing thousands of requests. It's a small percentage that cause errors, but due to the lack of ability to handle the errors, and the lack of proper context from the error message, it's very difficult to determine WHICH files are failing.
I know in general promises must have a .catch or be surrounded by a try/catch block. But in this case I'm using a write stream. I'm a little bit confused as to where the promise that's being rejected is actually located and how I would intercept it. The stack trace is unhelpful, as it only contains library code:
UnhandledPromiseRejectionWarning: Error: Upload failed
at Request.requestStream.on.resp (.../node_modules/gcs-resumable-upload/build/src/index.js:163:34)
at emitTwo (events.js:131:20)
at Request.emit (events.js:214:7)
at Request.<anonymous> (.../node_modules/request/request.js:1161:10)
at emitOne (events.js:121:20)
at Request.emit (events.js:211:7)
at IncomingMessage.<anonymous> (.../node_modules/request/request.js:1083:12)
at Object.onceWrapper (events.js:313:30)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
My Code
The code that's creating the writeStream looks like this:
const {join} = require('path')
const {Storage} = require('@google-cloud/storage')

module.exports = (config) => {
  const storage = new Storage({
    projectId: config.gcloud.project,
    keyFilename: config.gcloud.auth_file
  })
  return {
    getBucketWS(path, contentType) {
      const {bucket, path_prefix} = config.gcloud
      // add path_prefix if we have one
      if (path_prefix) {
        path = join(path_prefix, path)
      }
      let setup = storage.bucket(bucket).file(path)
      let opts = {}
      if (contentType) {
        opts = {
          contentType,
          metadata: {contentType}
        }
      }
      const stream = setup.createWriteStream(opts)
      stream._bucket = bucket
      stream._path = path
      return stream
    }
  }
}
And the consuming code looks like this:
const gcs = require('./gcs-helper.js')

module.exports = ({writePath, contentType, item}, done) => {
  let ws = gcs.getBucketWS(writePath, contentType)
  ws.on('error', (err) => {
    err.message = `Could not open gs://${ws._bucket}/${ws._path}: ${err.message}`
    done(err)
  })
  ws.on('finish', () => {
    done(null, {
      path: writePath,
      item
    })
  })
  ws.write(item)
  ws.end()
}
Given that I'm already listening for the error event on the stream, I don't see what else I can do here. There isn't a promise happening at the level of @google-cloud/storage that I'm consuming.
Digging into the @google-cloud/storage Library
The first line of the stack trace brings us to a code block in the gcs-resumable-upload node module that looks like this:
requestStream.on('complete', resp => {
  if (resp.statusCode < 200 || resp.statusCode > 299) {
    this.destroy(new Error('Upload failed'));
    return;
  }
  this.emit('metadata', resp.body);
  this.deleteConfig();
  this.uncork();
});
This is passing the error to the destroy method on the stream. The stream is being created by the @google-cloud/common project's utility module, and this is using the duplexify node module to create the stream. The destroy method is defined on the duplexify stream and can be found in the README documentation.
Reading the duplexify code, I see that it first checks this._ondrain before emitting an error. Maybe I can provide a callback to avoid this error being unhandled?
I tried ws.write(item, null, cb) and still got the same UnhandledPromiseRejectionWarning. I tried ws.end(item, null, cb) and even wrapped the .end call in a try catch, and ended up getting this error which crashed the process entirely:
events.js:183
throw er; // Unhandled 'error' event
^
Error: The uploaded data did not match the data from the server. As a precaution, the file has been deleted. To be sure the content is the same, you should try uploading the file again.
at delete (.../node_modules/@google-cloud/storage/build/src/file.js:1295:35)
at Util.handleResp (.../node_modules/@google-cloud/common/build/src/util.js:123:9)
at retryRequest (.../node_modules/@google-cloud/common/build/src/util.js:404:22)
at onResponse (.../node_modules/retry-request/index.js:200:7)
at .../node_modules/teeny-request/build/src/index.js:208:17
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:189:7)
My final code looks something like this:
let ws = gcs.getBucketWS(writePath, contentType)

const handleErr = (err) => {
  if (err) err.message = `Could not open gs://${ws._bucket}/${ws._path}: ${err.message}`
  done(err)
}

ws.on('error', handleErr)

// trying to do everything we can to handle these errors
// for some reason we still get UnhandledPromiseRejectionWarning
try {
  ws.write(item, null, err => {
    handleErr(err)
  })
  ws.end()
} catch (e) {
  handleErr(e)
}
Conclusion
It's still a mystery to me how a user of the @google-cloud/storage library, or duplexify for that matter, is supposed to perform proper error handling. Comments from library maintainers of either project would be appreciated. Thanks!
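As a stopgap (not the stream-level handling I'm after), a process-wide hook using Node's standard unhandledRejection event at least makes the failures visible so they can be correlated with the file being uploaded; just a sketch:

process.on('unhandledRejection', (reason) => {
  // last-resort logging so failed uploads are at least visible in the logs
  console.error('Unhandled rejection:', reason)
})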

Google Cloud Function - Storage - Delete Image - "ApiError: Error during request"

UPDATED QUESTION
The problem is ApiError: Error during request.
Code:
import * as functions from 'firebase-functions';
const cors = require('cors')({ origin: true });
import * as admin from 'firebase-admin';

const gcs = admin.storage();

export const deleteImage = functions.https.onRequest((req, res) => {
  return cors(req, res, async () => {
    res.set('Content-Type', 'application/json');
    const id = req.body.id;
    const name = req.body.name;
    const imageRef = gcs.bucket(`images/${name}`);
    if (!name || !id) {
      return res.status(400).send({message: 'Missing parameters :/'});
    }
    try {
      await imageRef.delete();
      console.log('Image deleted from Storage');
      return res.status(200).send({status: 200, message: `Thank you for id ${id}`});
    }
    catch (error) {
      console.log('error: ', error);
      return res.status(500).send({message: `Image deletion failed: ${error}`});
    }
  });
});
And the problem is here: await imageRef.delete(); when it runs, I get the following error:
ApiError: Error during request.
I do, indeed, have admin.initializeApp(); in one of my other functions, so that can't be the issue, unless GCF has a bug.
More In-Depth Error:
{ ApiError: Error during request.
at Object.parseHttpRespBody (/user_code/node_modules/firebase-admin/node_modules/@google-cloud/common/src/util.js:187:32)
at Object.handleResp (/user_code/node_modules/firebase-admin/node_modules/@google-cloud/common/src/util.js:131:18)
at /user_code/node_modules/firebase-admin/node_modules/@google-cloud/common/src/util.js:496:12
at Request.onResponse [as _callback] (/user_code/node_modules/firebase-admin/node_modules/@google-cloud/common/node_modules/retry-request/index.js:198:7)
at Request.self.callback (/user_code/node_modules/firebase-admin/node_modules/request/request.js:185:22)
at emitTwo (events.js:106:13)
at Request.emit (events.js:191:7)
at Request.<anonymous> (/user_code/node_modules/firebase-admin/node_modules/request/request.js:1161:10)
at emitOne (events.js:96:13)
at Request.emit (events.js:188:7)
code: undefined,
errors: undefined,
response: undefined,
message: 'Error during request.' }
(old question removed)
"Error: Can't set headers after they are sent" means that you tried to send two responses to the client. This isn't valid - you can send only one response.
Your code is clearly sending two 200 type responses to the client in the event that imageRef.delete() fails and the catch callback on it is triggered.
Also, you're mixing up await with then/catch. They're not meant to be used together; you choose one or the other. Typically, if you're using await for async programming, you don't also use then/catch with the same promise. This is a more idiomatic use of await with error handling:
try {
  await imageRef.delete()
  res.status(200).send({status: 200, message: `Thank you for id ${id}`});
} catch (error) {
  res.status(500).send({message: `Image deletion failed: ${error}`});
}
Note also that you typically send a 500 response to the client on failure, not 200, which indicates success.
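Putting those points together, a condensed sketch of the handler could look like the following (same request shape and imports as in the question; switching from gcs.bucket(`images/${name}`).delete(), which tries to delete a bucket by that name, to a File inside the default bucket is my own assumption about the intent, and may also be what's behind the ApiError):

export const deleteImage = functions.https.onRequest((req, res) => {
  return cors(req, res, async () => {
    res.set('Content-Type', 'application/json');
    const { id, name } = req.body;
    if (!name || !id) {
      // early return so only one response is ever sent
      return res.status(400).send({ message: 'Missing parameters :/' });
    }
    try {
      // assumption: the image lives at images/<name> in the project's default bucket
      await admin.storage().bucket().file(`images/${name}`).delete();
      return res.status(200).send({ status: 200, message: `Thank you for id ${id}` });
    } catch (error) {
      console.log('error: ', error);
      return res.status(500).send({ message: `Image deletion failed: ${error}` });
    }
  });
});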
