Node.js aws-sdk getting "socket hang up" error

I am trying to get the pricing information for AmazonEC2 machines using the "@aws-sdk/client-pricing-node" package.
Each time, I send a request to get a page of pricing information, process it, and then send the next request, until all the information has been obtained (no NextToken remains). The following is my code.
const https = require('https');
const {
  PricingClient,
} = require('@aws-sdk/client-pricing-node/PricingClient');
const {
  GetProductsCommand,
} = require('@aws-sdk/client-pricing-node/commands/GetProductsCommand');

const agent = new https.Agent({
  maxSockets: 30,
  keepAlive: true,
});

const pricing = new PricingClient({
  region: "us-east-1",
  httpOptions: {
    timeout: 45000,
    connectTimeout: 45000,
    agent,
  },
  maxRetries: 10,
  retryDelayOptions: {
    base: 500,
  },
});

const getProductsCommand = new GetProductsCommand({ ServiceCode: 'AmazonEC2' });
async function sendRequest() {
  let result = false;
  while (!result) {
    try {
      const data = await pricing.send(getProductsCommand);
      result = await handleReqResults(data);
    } catch (error) {
      console.error(error);
    }
  }
}

async function handleReqResults(data) {
  // some data handling code here
  // ...
  // return false when there is a "NextToken" in the response data
  if (data.NextToken) {
    setNextToken(data.NextToken);
    return false;
  }
  return true;
}
The code runs for a while (a variable amount of time) and then stops with the following error:
{ Error: socket hang up
at createHangUpError (_http_client.js:332:15)
at TLSSocket.socketOnEnd (_http_client.js:435:23)
at TLSSocket.emit (events.js:203:15)
at TLSSocket.EventEmitter.emit (domain.js:448:20)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
code: 'ECONNRESET',
'$metadata': { retries: 0, totalRetryDelay: 0 } }
I tried running it on a GCP VM instance and there was no such problem, but the error does occur when I run it on my local machine.
Does anyone have any idea how to solve this problem?
(BTW: my node version is v10.20.1)
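For reference, a minimal sketch of what this pagination loop could look like, assuming the same preview @aws-sdk/client-pricing-node API used above: each page's NextToken is passed into a fresh GetProductsCommand, and an ECONNRESET is retried after a short pause. handlePriceList and sleep are hypothetical helpers, not part of the original code.
// Minimal sketch (not a verified fix for the hang-up itself).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function getAllProducts() {
  let nextToken;
  do {
    const command = new GetProductsCommand({
      ServiceCode: 'AmazonEC2',
      NextToken: nextToken,
    });
    try {
      const data = await pricing.send(command);
      await handlePriceList(data.PriceList); // hypothetical data-handling helper
      nextToken = data.NextToken;
    } catch (error) {
      if (error.code === 'ECONNRESET') {
        await sleep(1000); // brief pause, then retry the same page
      } else {
        throw error;
      }
    }
  } while (nextToken);
}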

Related

Node.js redis@4 Upgrade: SocketClosedUnexpectedlyError: Socket closed unexpectedly

I've got some legacy code that I'm upgrading from version 3 to version 4 of the Node.js redis library. The basic shape of the code looks like this:
var redis = require('redis')
var client = redis.createClient({
  port: '6379',
  host: process.env.REDIS_HOST,
  legacyMode: true
})
client.connect()
client.flushall(function (err, reply) {
  client.hkeys('hash key', function (err, replies) {
    console.log("key set done")
    client.quit()
  })
})
console.log("main done")
When I run this code with redis@4.3.1, I get the following error, and Node.js exits with a non-zero status code:
main done
key set done
events.js:292
throw er; // Unhandled 'error' event
^
SocketClosedUnexpectedlyError: Socket closed unexpectedly
at Socket.<anonymous> (/Users/astorm/Documents/redis4/node_modules/@redis/client/dist/lib/client/socket.js:182:118)
at Object.onceWrapper (events.js:422:26)
at Socket.emit (events.js:315:20)
at TCP.<anonymous> (net.js:673:12)
Emitted 'error' event on Commander instance at:
at RedisSocket.<anonymous> (/Users/astorm/Documents/redis4/node_modules/@redis/client/dist/lib/client/index.js:350:14)
at RedisSocket.emit (events.js:315:20)
at RedisSocket._RedisSocket_onSocketError (/Users/astorm/Documents/redis4/node_modules/@redis/client/dist/lib/client/socket.js:205:10)
at Socket.<anonymous> (/Users/astorm/Documents/redis4/node_modules/@redis/client/dist/lib/client/socket.js:182:107)
at Object.onceWrapper (events.js:422:26)
at Socket.emit (events.js:315:20)
at TCP.<anonymous> (net.js:673:12)
While in redis@3.1.2 it runs (minus the client.connect()) without issue.
I've been able to work around this by replacing client.quit() with client.disconnect(), but the actual code is a little more complex than the above example and I'd rather use the graceful shutdown of client.quit than the harsher "SHUT IT DOWN NOW" of client.disconnect().
Does anyone know what the issue here might be? Why is redis@4 failing with a SocketClosedUnexpectedlyError: Socket closed unexpectedly error?
What I have found so far is that after a while without any requests (the keepAlive default is 5 minutes), the Redis client closes the socket and emits an error event; if you don't handle this event, it will crash your application.
My solution for that was:
/* eslint-disable no-inline-comments */
import type { RedisClientType } from 'redis'
import { createClient } from 'redis'
import { config } from '@app/config'
import { logger } from '@app/utils/logger'

let redisClient: RedisClientType
let isReady: boolean

const cacheOptions = {
  url: config.redis.tlsFlag ? config.redis.urlTls : config.redis.url,
}

if (config.redis.tlsFlag) {
  Object.assign(cacheOptions, {
    socket: {
      // keepAlive: 300, // 5 minutes DEFAULT
      tls: false,
    },
  })
}

async function getCache(): Promise<RedisClientType> {
  if (!isReady) {
    redisClient = createClient({
      ...cacheOptions,
    })
    redisClient.on('error', err => logger.error(`Redis Error: ${err}`))
    redisClient.on('connect', () => logger.info('Redis connected'))
    redisClient.on('reconnecting', () => logger.info('Redis reconnecting'))
    redisClient.on('ready', () => {
      isReady = true
      logger.info('Redis ready!')
    })
    await redisClient.connect()
  }
  return redisClient
}

getCache().then(connection => {
  redisClient = connection
}).catch(err => {
  // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
  logger.error({ err }, 'Failed to connect to Redis')
})

export {
  getCache,
}
Anyway, in your situation, try to handle the error event:
client.on('error', err => logger.error(`Redis Error: ${err}`))
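Applied to the legacy-mode snippet from the question, that might look like the sketch below; the only addition is the error handler (logging with console.error here, in place of a real logger), which keeps a late 'error' event from crashing the process:
var redis = require('redis')
var client = redis.createClient({
  port: '6379',
  host: process.env.REDIS_HOST,
  legacyMode: true
})

// Without this handler, an 'error' event emitted after quit() is unhandled
// and terminates the process with a non-zero exit code.
client.on('error', function (err) {
  console.error('Redis Error: ' + err)
})

client.connect()
client.flushall(function (err, reply) {
  client.hkeys('hash key', function (err, replies) {
    console.log("key set done")
    client.quit()
  })
})
console.log("main done")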

Azure Functions EventhubTrigger - database inserts

I implemented some code in Azure Functions which is triggered by an Event Hub.
When it is triggered, I want to insert the data into an Azure SQL database.
I got my code running and the bulk insert is working, but I often get a RequestError timeout.
Could somebody give me some advice on how to implement this use case the right way with Azure Functions? The function is triggered pretty often because of the data that is sent to the Event Hub.
[2022-03-30T13:13:07.464Z] Executing 'Functions.hotDatatoSql' (Reason='(null)', Id=a329b220-6ef6-4d75-9c15-beed6d7375cb)
[2022-03-30T13:13:07.466Z] Trigger Details: PartionId: 0, Offset: 85929768072-85930225256, EnqueueTimeUtc: 2022-03-29T16:46:03.5990000Z-2022-03-29T16:47:49.0860000Z, SequenceNumber: 351947-352202, Count: 256
[2022-03-30T13:13:23.109Z] RequestError: Timeout: Request failed to complete in 15000ms
[2022-03-30T13:13:23.111Z] Executed 'Functions.hotDatatoSql' (Succeeded, Id=a329b220-6ef6-4d75-9c15-beed6d7375cb, Duration=15717ms)
[2022-03-30T13:13:23.113Z] at BulkLoad.done [as callback] (C:\Home\Software\azure_functions\node_modules\mssql\lib\tedious\request.js:307:19)
[2022-03-30T13:13:23.142Z] at Parser.<anonymous> (C:\Home\Software\azure_functions\node_modules\tedious\lib\connection.js:2910:26)
[2022-03-30T13:13:23.145Z] at Object.onceWrapper (node:events:509:28)
[2022-03-30T13:13:23.148Z] at Parser.emit (node:events:390:28)
[2022-03-30T13:13:23.150Z] at Readable.<anonymous> (C:\Home\Software\azure_functions\node_modules\tedious\lib\token\token-stream-parser.js:32:12)
[2022-03-30T13:13:23.152Z] at Readable.emit (node:events:390:28)
[2022-03-30T13:13:23.153Z] at endReadableNT (node:internal/streams/readable:1343:12)
[2022-03-30T13:13:23.162Z] at processTicksAndRejections (node:internal/process/task_queues:83:21) {
[2022-03-30T13:13:23.164Z] code: 'ETIMEOUT',
[2022-03-30T13:13:23.167Z] originalError: RequestError: Timeout: Request failed to complete in 15000ms
My Azure Function Code:
const mssql = require('mssql');
const { get } = require('./pool-manager')

const config = {
  user: "...",
  password: "...",
  server: '....',
  database: '...',
  pool: {
    max: 10,
    min: 0,
    idleTimeoutMillis: 30000
  },
  options: {
    encrypt: true,
    trustServerCertificate: true
  }
};

const table = new mssql.Table('dbo.testTable');
table.columns.add('row1', mssql.VarChar(512), {nullable: true});
table.columns.add('row2', mssql.DateTime2, {nullable: true});
table.columns.add('row3', mssql.NVarChar(mssql.MAX), {nullable: true});

module.exports = async function (context, eventHubMessages) {
  const pool = await get('default', config);
  eventHubMessages.forEach((message, index) => {
    table.rows.add(message.id, message.time, JSON.stringify(message.data));
  });
  const request = new mssql.Request(pool);
  try {
    let result = await request.bulk(table);
    //console.log(result);
  } catch (err) {
    console.log(err);
  }
};
To achieve the above requirements, as suggested by @Peter Bons, we need to increase the timeout, since the default timeout is 15 seconds.
For example, to increase the idle timeout:
const config = {
  user: '...',
  password: '...',
  server: 'localhost',
  database: '...',
  pool: {
    max: 10,
    min: 0,
    idleTimeoutMillis: 300000
  }
}
For more information you can refer to this SO thread as well.
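Note that the 15000 ms in the error message matches the mssql requestTimeout default (15 seconds) rather than the pool idle timeout, so that setting may also need raising. A sketch of a config that does so, with 60000 as an arbitrary illustrative value, could look like this:
const config = {
  user: '...',
  password: '...',
  server: '...',
  database: '...',
  // requestTimeout governs how long a single query/bulk insert may run
  // before tedious raises "Request failed to complete in 15000ms".
  requestTimeout: 60000, // illustrative value, not a recommendation
  pool: {
    max: 10,
    min: 0,
    idleTimeoutMillis: 30000
  },
  options: {
    encrypt: true,
    trustServerCertificate: true
  }
};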

Dropbox API integration on AWS Lambda gets FetchError (ETIMEDOUT)

I have a Node.js app which runs on AWS Lambda. The Lambda is connected to a VPC and reaches the internet through a static IP. I use dropbox-sdk-js v10.23.0. It always runs fine on my local machine, but on the Lambda it sometimes runs and sometimes gets a fetch error.
My code is like this:
async function main() {
  const Dropbox = require('dropbox').Dropbox;
  const dropbox = {
    dbx: new Dropbox({
      accessToken: process.env.ACCESS_TOKEN,
      pathRoot: JSON.stringify({ '.tag': 'namespace_id', 'namespace_id': process.env.NAMESPACE_ID })
    })
  };
  const payload = {
    path: '',
    recursive: true,
    include_media_info: false,
    include_deleted: false,
    include_has_explicit_shared_members: true,
    include_mounted_folders: true,
    include_non_downloadable_files: true
  };
  let hasMore = true;
  let entries = [];
  let response;
  let cursor;
  while (hasMore) {
    try {
      if (cursor) {
        response = await dropbox.dbx.filesListFolderContinue({ cursor: cursor });
      } else {
        response = await dropbox.dbx.filesListFolderGetLatestCursor(payload);
        response = await dropbox.dbx.filesListFolderContinue({ cursor: response.result.cursor });
      }
      console.info('Entries: ', JSON.stringify(response.result.entries));
      cursor = response.result.cursor;
      entries = entries.concat(response.result.entries);
      hasMore = response.result.has_more;
    } catch (error) {
      console.info(error);
      return error;
    }
  }
}

main();
Error log:
2022-01-20T08:22:18.579Z 67caa239-e75c-46ce-be4c-0fcf6c154694 INFO FetchError: request to https://api.dropboxapi.com/2/files/list_folder/continue failed, reason: connect ETIMEDOUT 162.125.4.19:443
at ClientRequest.<anonymous> (/var/task/node_modules/dropbox/node_modules/node-fetch/lib/index.js:1483:11)
at ClientRequest.emit (events.js:400:28)
at TLSSocket.socketErrorListener (_http_client.js:475:9)
at TLSSocket.emit (events.js:400:28)
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
type: 'system',
errno: 'ETIMEDOUT',
code: 'ETIMEDOUT'
}
I detached the VPC from the Lambda and it worked. I think the Lambda could not reach the internet from inside the VPC (a Lambda attached to a VPC needs a NAT gateway for outbound internet access).

Saving data to Postgres from AWS Lambda

I'm building a Lambda function that is supposed to save game feedback, like a performance grade, into my Postgres database, which is in AWS RDS.
I'm using Node.js with TypeScript, and the function is kind of working, but in a strange way.
I made an API Gateway so I can POST data to the URL for the Lambda to process and save it. The thing is, when I POST the data, the function seems to process it until it reaches a maximum limit of connected clients, and then it seems to lose the other clients' data.
Another problem is that every time I POST data I get a response saying there was an Internal Server Error, with an 'X-Cache: Error from cloudfront' header. For a GET request I figured out that it was giving me this response because the format of the response was incorrect, but in this case I fixed the response format and still get this problem...
Sometimes I get a timeout response.
My function's code:
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { QueryConfig, Client, Pool, PoolConfig } from "pg";

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  // context.callbackWaitsForEmptyEventLoop = false;
  const config: PoolConfig = {
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DB,
    password: process.env.PG_PASS,
    port: parseInt(process.env.PG_PORT),
    idleTimeoutMillis: 0,
    max: 10000
  };
  const pool = new Pool(config);
  let postdata = event.body || event;
  console.log("POST DATA:", postdata);
  if (typeof postdata == "string") {
    postdata = JSON.parse(postdata);
  }
  let query: QueryConfig = <QueryConfig>{
    name: "get_all_questions",
    text:
      "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
    values: [
      parseInt(postdata["game_id"]),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"]),
      parseInt(postdata["game_stars"])
    ]
  };
  console.log("Before Connect");
  let con = await pool.connect();
  let res = await con.query(query);
  console.log("res.rowCount:", res.rowCount);
  if (res.rowCount != 1) {
    cb(new Error("Error saving the feedback."), {
      statusCode: 400,
      body: JSON.stringify({
        message: "Error saving data!"
      })
    });
  }
  cb(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: "Saved successfully!"
    })
  });
  console.log("The End");
};
Then the CloudWatch log for the error about the maximum number of connected clients looks like this:
2018-08-03T15:56:04.326Z b6307573-9735-11e8-a541-950f760c0aa5 (node:1) UnhandledPromiseRejectionWarning: error: sorry, too many clients already
at u.parseE (/var/task/webpack:/node_modules/pg/lib/connection.js:553:1)
at u.parseMessage (/var/task/webpack:/node_modules/pg/lib/connection.js:378:1)
at Socket.<anonymous> (/var/task/webpack:/node_modules/pg/lib/connection.js:119:1)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at Socket.Readable.push (_stream_readable.js:208:10)
at TCP.onread (net.js:607:20)
Can any of you guys help me with this strange problem?
Thanks
Well, for one thing, you need to move the creation of the pool above the handler, like so:
const config: PoolConfig = {
  user: process.env.PG_USER,
  ...
};
const pool = new Pool(config);

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  ..etc
The way you have it you are creating a pool on every invocation. If you create the pool outside the handler it gives Lambda a chance to share the pool between invocations.
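A fuller sketch of that layout, with the pool created once at module scope and an illustrative small max in place of 10000 (the exact values here are examples, not recommendations), might look like this:
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { Pool, PoolConfig } from "pg";

// Created once per container, so warm invocations reuse the same connections.
const config: PoolConfig = {
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DB,
  password: process.env.PG_PASS,
  port: parseInt(process.env.PG_PORT || "5432", 10), // fallback only for illustration
  max: 5 // illustrative small per-container pool instead of 10000
};
const pool = new Pool(config);

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  let postdata: any = event.body || event;
  if (typeof postdata === "string") {
    postdata = JSON.parse(postdata);
  }
  // pool.query checks out a client, runs the statement, and releases the client.
  await pool.query(
    "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
    [
      parseInt(postdata["game_id"]),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"]),
      parseInt(postdata["game_stars"])
    ]
  );
  cb(null, {
    statusCode: 200,
    body: JSON.stringify({ message: "Saved successfully!" })
  });
};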

Elasticsearch gives Bad Request for ping

Code in the elasticsearch.js file:
function es() {
  throw new Error('Looks like you are expecting the previous "elasticsearch" module. ' +
    'It is now the "es" module. To create a client with this module use ' +
    '`new es.Client(params)`.');
}
es.Client = require('./lib/client');
es.ConnectionPool = require('./lib/connection_pool');
es.Transport = require('./lib/transport');
es.errors = require('./lib/errors');
module.exports = es;

var elasticsearch = require('elasticsearch')
var client = new es.Client({
  host: 'localhost:9200',
  log: 'trace',
})

// Ping the cluster
client.ping({
  requestTimeOut: 30000,
}, function (error) {
  if (error) {
    console.log(error)
    console.error("elasticsearch cluster is down!")
  } else {
    console.log("All is well")
  }
})
and I am running Elasticsearch locally with the command $ bin/elasticsearch,
but when I run $ node elasticsearch.js it gives the following error:
Elasticsearch INFO: 2018-01-22T11:17:50Z
Adding connection to http://localhost:9200/
Elasticsearch DEBUG: 2018-01-22T11:17:50Z
starting request {
"method": "HEAD",
"requestTimeout": 3000,
"castExists": true,
"path": "/",
"query": {
"requestTimeOut": 30000
}
}
Elasticsearch TRACE: 2018-01-22T11:17:50Z
-> HEAD http://localhost:9200/?requestTimeOut=30000
<- 400
Elasticsearch DEBUG: 2018-01-22T11:17:50Z
Request complete
{ Error: Bad Request
at respond (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/transport.js:307:15)
at checkRespForFailure (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/transport.js:266:7)
at HttpConnector.<anonymous> (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)
at IncomingMessage.bound (/Users/ElasticSearchServer/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
status: 400,
displayName: 'BadRequest',
message: 'Bad Request',
path: '/',
query: { requestTimeOut: 30000 },
body: undefined,
statusCode: 400,
response: '',
toString: [Function],
toJSON: [Function] }
elasticsearch cluster is down!
If I try adding a new index, deleting an index, checking the cluster health, or searching, it works fine and gives the appropriate result.
Can anyone help me fix the issue? Thanks in advance!
In the new JavaScript client, every option that is not intended for Elasticsearch lives in a second object, so your code should be updated as follows:
'use strict'

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.ping({}, { requestTimeout: 20000 }, (err, response) => {
  ...
})
In the response object, besides body, statusCode, and headers, you will also find a warnings array and a meta object, which should help you debug issues.
In this case, warnings contained the following message: 'Client - Unknown parameter: "requestTimeout", sending it as query parameter'.
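For example, a quick way to surface those warnings in the callback style used above is a sketch like the following (the requestTimeout value is simply reused from the snippet above):
'use strict'
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.ping({}, { requestTimeout: 20000 }, (err, result) => {
  if (err) {
    console.error('elasticsearch cluster is down!', err)
    return
  }
  // result.warnings is null when the client has nothing to report.
  console.log('All is well', result.statusCode, result.warnings)
})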
