I implemented some code in Azure Functions which is triggered by an Event Hub.
When triggered, I want to insert the data into an Azure SQL database.
I got my code running and the bulk insert is working, but I often get a RequestError timeout.
Could somebody give me some advice on how to implement this use case the right way with Azure Functions? The function is triggered pretty often because of the data that is sent to the Event Hub.
[2022-03-30T13:13:07.464Z] Executing 'Functions.hotDatatoSql' (Reason='(null)', Id=a329b220-6ef6-4d75-9c15-beed6d7375cb)
[2022-03-30T13:13:07.466Z] Trigger Details: PartionId: 0, Offset: 85929768072-85930225256, EnqueueTimeUtc: 2022-03-29T16:46:03.5990000Z-2022-03-29T16:47:49.0860000Z, SequenceNumber: 351947-352202, Count: 256
[2022-03-30T13:13:23.109Z] RequestError: Timeout: Request failed to complete in 15000ms
[2022-03-30T13:13:23.111Z] Executed 'Functions.hotDatatoSql' (Succeeded, Id=a329b220-6ef6-4d75-9c15-beed6d7375cb, Duration=15717ms)
[2022-03-30T13:13:23.113Z] at BulkLoad.done [as callback] (C:\Home\Software\azure_functions\node_modules\mssql\lib\tedious\request.js:307:19)
[2022-03-30T13:13:23.142Z] at Parser.<anonymous> (C:\Home\Software\azure_functions\node_modules\tedious\lib\connection.js:2910:26)
[2022-03-30T13:13:23.145Z] at Object.onceWrapper (node:events:509:28)
[2022-03-30T13:13:23.148Z] at Parser.emit (node:events:390:28)
[2022-03-30T13:13:23.150Z] at Readable.<anonymous> (C:\Home\Software\azure_functions\node_modules\tedious\lib\token\token-stream-parser.js:32:12)
[2022-03-30T13:13:23.152Z] at Readable.emit (node:events:390:28)
[2022-03-30T13:13:23.153Z] at endReadableNT (node:internal/streams/readable:1343:12)
[2022-03-30T13:13:23.162Z] at processTicksAndRejections (node:internal/process/task_queues:83:21) {
[2022-03-30T13:13:23.164Z] code: 'ETIMEOUT',
[2022-03-30T13:13:23.167Z] originalError: RequestError: Timeout: Request failed to complete in 15000ms
My Azure Function Code:
const mssql = require('mssql');
const { get } = require('./pool-manager')

const config = {
    user: "...",
    password: "...",
    server: '....',
    database: '...',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    },
    options: {
        encrypt: true,
        trustServerCertificate: true
    }
};

const table = new mssql.Table('dbo.testTable');
table.columns.add('row1', mssql.VarChar(512), {nullable: true});
table.columns.add('row2', mssql.DateTime2, {nullable: true});
table.columns.add('row3', mssql.NVarChar(mssql.MAX), {nullable: true});

module.exports = async function (context, eventHubMessages) {
    const pool = await get('default', config);

    eventHubMessages.forEach((message, index) => {
        table.rows.add(message.id, message.time, JSON.stringify(message.data));
    });

    const request = new mssql.Request(pool);
    try {
        let result = await request.bulk(table);
        //console.log(result);
    }
    catch (err) {
        console.log(err);
    }
};
To achieve the above, as suggested by @Peter Bons, we need to increase the timeout, since the default timeout is 15 seconds.
For example, to increase the idle timeout:
const config = {
    user: '...',
    password: '...',
    server: 'localhost',
    database: '...',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 300000
    }
}
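Since the log reports the request itself failing after 15000 ms, the mssql requestTimeout option (which defaults to 15000 ms) is the setting that most directly matches that error. A minimal sketch, assuming the rest of the config stays the same:

const config = {
    user: '...',
    password: '...',
    server: '...',
    database: '...',
    requestTimeout: 60000, // allow the bulk insert up to 60 s instead of the 15 s default
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 300000
    },
    options: {
        encrypt: true,
        trustServerCertificate: true
    }
};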
For more information you can refer to this SO thread as well.
Related
I am having a problem (ConnectionError) while trying to connect Next.js to SQL Server using mssql. I have enabled TCP/IP, and SQL Server Browser is running.
This is my code:
// db.js
import sql from 'mssql'

// connection configs
const config = {
    user: 'test',
    password: '1000',
    server: '.\sqlexpress',
    database: 'DATABASE_NAME',
    port: 1433,
    options: {
        instancename: 'SQLEXPRESS',
        trustedconnection: true,
        trustServerCertificate: true
    },
}

export default async function ExcuteQuery(query, options) {
    try {
        let pool = await sql.connect(config);
        let products = await pool.request().query(query);
        return products.recordsets;
    }
    catch (error) {
        console.log(error);
    }
}
// api/hello.js
import ExcuteQuery from '../../utils/db';

export default async function handler(req, res) {
    console.log(await ExcuteQuery('select * from tbl_category'));
    res.status(200).json({})
}
this is the error:
ConnectionError: getaddrinfo ENOTFOUND .
at E:\0 - WEB\pos\node_modules\mssql\lib\tedious\connection-pool.js:70:17
at Connection.onConnect (E:\0 - WEB\pos\node_modules\tedious\lib\connection.js:1012:9)
at Object.onceWrapper (node:events:628:26)
at Connection.emit (node:events:513:28)
at Connection.emit (E:\0 - WEB\pos\node_modules\tedious\lib\connection.js:1040:18)
at E:\0 - WEB\pos\node_modules\tedious\lib\connection.js:1081:16
at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
code: 'EINSTLOOKUP',
originalError: ConnectionError: getaddrinfo ENOTFOUND .
at E:\0 - WEB\pos\node_modules\tedious\lib\connection.js:1081:32
at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
code: 'EINSTLOOKUP',
isTransient: undefined
}
}
These are also the settings I changed (screenshots omitted):
I tried setting the server to MY_DESKTOP_NAME\SQLEXPRESS, but it didn't work. However, after disabling and re-enabling all the settings it works now; it was kind of like a glitch in the system.
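For reference, the ENOTFOUND error shows the client trying to resolve '.' as a DNS host name, which will always fail. A minimal sketch of a config that avoids that, assuming a local SQLEXPRESS named instance and SQL authentication (the machine name is a placeholder; whether this resolves the original issue depends on the local setup):

// db.js — sketch of a corrected config
import sql from 'mssql'

const config = {
    user: 'test',
    password: '1000',
    server: 'MY_DESKTOP_NAME',        // machine name instead of '.'
    database: 'DATABASE_NAME',
    options: {
        instanceName: 'SQLEXPRESS',   // named instance; SQL Server Browser resolves the port, so no fixed port is set
        trustServerCertificate: true
    },
}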
I am setting up a Node app that has connections to multiple databases. I am using a map to create Pools for all my databases as such:
const { Pool } = require('pg');

const stnPool = new Map();

async function getOtherDb(stnName){
    if(!stnPool.has(stnName)){
        stnPool.set(stnName, new Pool({
            host: 'localhost',
            database: stnName.toLowerCase(),
            user: USERNAME,
            password: PASSWORD,
            port: 5432,
            max: 2000,
            idleTimeoutMillis: 0,
            connectionTimeoutMillis: 0,
        }))
    }
    return stnPool.get(stnName);
}
PostgreSQL currently has 10 'station' databases. I have 10 remote servers that connect to my server and upload data every 3 seconds. I also have an X number of clients that connect to view the uploaded data in real time. Both server and client connections upload/request data via WebSockets.
To establish and query a Pool connection:
var uvDb = await db.getOtherDb("wx_uv")

if( uvDb != -1 ) {
    const dbT = await uvDb.connect()
    ... do various db queries ...
    dbT.release()
}
After a few hours of getting Server uploads and Client requests I get this error:
/.../node/node_modules/pg-protocol/dist/parser.js:287
const message = name === 'notice' ? new messages_1.NoticeMessage(length, messageValue) : new messages_1.DatabaseError(messageValue, length, name);
^
error: parallel worker failed to initialize
at Parser.parseErrorMessage (/.../node/node_modules/pg-protocol/dist/parser.js:287:98)
at Parser.handlePacket (/.../node/node_modules/pg-protocol/dist/parser.js:126:29)
at Parser.parse (/.../node/node_modules/pg-protocol/dist/parser.js:39:38)
at Socket.<anonymous> (/.../node/node_modules/pg-protocol/dist/index.js:11:42)
at Socket.emit (node:events:365:28)
at addChunk (node:internal/streams/readable:314:12)
at readableAddChunk (node:internal/streams/readable:289:9)
at Socket.Readable.push (node:internal/streams/readable:228:10)
at TCP.onStreamRead (node:internal/stream_base_commons:190:23) {
length: 163,
severity: 'ERROR',
code: '55000',
detail: undefined,
hint: 'More details may be available in the server log.',
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parallel.c',
line: '826',
routine: 'WaitForParallelWorkersToFinish'
}
Anyone have any ideas on what causes this?
=======UPDATE=======
Tried adding a try/catch to release the connection as well:
if( uvDb != -1 ) {
    const dbT = await uvDb.connect()
    try{
        ... do various db queries ...
        dbT.release()
    } catch (err) {
        dbT.release()
    }
}
But now I get other errors...
"error: lost connection to parallel worker"
"error: parallel worker failed to initialize"
I am trying to get the pricing information of Amazon EC2 machines using the "@aws-sdk/client-pricing-node" package.
Each time, I send a request to get the pricing information, process it, and send the request again until all the information has been obtained (no NextToken anymore). The following is my code.
const https = require('https');
const {
    PricingClient,
} = require('@aws-sdk/client-pricing-node/PricingClient');
const {
    GetProductsCommand,
} = require('@aws-sdk/client-pricing-node/commands/GetProductsCommand');

const agent = new https.Agent({
    maxSockets: 30,
    keepAlive: true,
});

const pricing = new PricingClient({
    region: "us-east-1",
    httpOptions: {
        timeout: 45000,
        connectTimeout: 45000,
        agent,
    },
    maxRetries: 10,
    retryDelayOptions: {
        base: 500,
    },
});
const getProductsCommand = new GetProductsCommand({ ServiceCode: 'AmazonEC2', });

async function sendRequest() {
    let result = false;
    while (!result) {
        try {
            const data = await pricing.send(getProductsCommand);
            result = await handleReqResults(data);
        } catch (error) {
            console.error(error);
        }
    }
}

async function handleReqResults(data) {
    // some data handling code here
    // ...
    // return false when there is "NextToken" in the response data
    if (data.NextToken) {
        setNextToken(data.NextToken);
        return false;
    }
    return true;
}
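For context, a minimal sketch of the NextToken loop this describes. The setNextToken helper is not shown above, so the token handling and the use of the PriceList field are assumptions based on the GetProducts request/response shape:

// sketch: paginate GetProducts by feeding NextToken back into the next request
async function getAllProducts() {
    const priceList = [];
    let nextToken;
    do {
        const command = new GetProductsCommand({
            ServiceCode: 'AmazonEC2',
            ...(nextToken ? { NextToken: nextToken } : {}),
        });
        const data = await pricing.send(command);
        priceList.push(...(data.PriceList || []));
        nextToken = data.NextToken;
    } while (nextToken);
    return priceList;
}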
My code runs for a while (the exact time varies) and then stops with the following error:
{ Error: socket hang up
at createHangUpError (_http_client.js:332:15)
at TLSSocket.socketOnEnd (_http_client.js:435:23)
at TLSSocket.emit (events.js:203:15)
at TLSSocket.EventEmitter.emit (domain.js:448:20)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
code: 'ECONNRESET',
'$metadata': { retries: 0, totalRetryDelay: 0 } }
I tried running it on a GCP VM instance, and there was no such problem. But the problem happens when I run it on my local machine.
Does anyone have any idea how to solve this problem?
(BTW: my Node version is v10.20.1)
I'm building a Lambda function that is supposed to save game feedback, like a performance grade, into my Postgres database, which is in AWS RDS.
I'm using Node.js with TypeScript, and the function kind of works, but in a strange way.
I made an API Gateway so I can POST data to the URL for the Lambda to process and save it. The thing is, when I POST the data, the function seems to process it until it reaches a maximum limit of connected clients, and then it seems to lose the other clients' data.
Another problem is that every time I POST data I get a response saying that there was an Internal Server Error, with an 'X-Cache: Error from cloudfront' header. For a GET request I figured out that it was giving me this response because the format of the response was incorrect, but in this case I fixed the response format and still get this problem...
Sometimes I get a timeout response.
My function's code:
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { QueryConfig, Client, Pool, PoolConfig } from "pg";

export const insert: Handler = async (
    event: APIGatewayEvent,
    context: Context,
    cb: Callback
) => {
    // context.callbackWaitsForEmptyEventLoop = false;
    const config: PoolConfig = {
        user: process.env.PG_USER,
        host: process.env.PG_HOST,
        database: process.env.PG_DB,
        password: process.env.PG_PASS,
        port: parseInt(process.env.PG_PORT),
        idleTimeoutMillis: 0,
        max: 10000
    };
    const pool = new Pool(config);

    let postdata = event.body || event;
    console.log("POST DATA:", postdata);
    if (typeof postdata == "string") {
        postdata = JSON.parse(postdata);
    }

    let query: QueryConfig = <QueryConfig>{
        name: "get_all_questions",
        text:
            "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
        values: [
            parseInt(postdata["game_id"]),
            postdata["user_id"],
            parseInt(postdata["presenter_stars"]),
            parseInt(postdata["game_stars"])
        ]
    };

    console.log("Before Connect");
    let con = await pool.connect();
    let res = await con.query(query);
    console.log("res.rowCount:", res.rowCount);
    if (res.rowCount != 1) {
        cb(new Error("Error saving the feedback."), {
            statusCode: 400,
            body: JSON.stringify({
                message: "Error saving data!"
            })
        });
    }
    cb(null, {
        statusCode: 200,
        body: JSON.stringify({
            message: "Saved successfully!"
        })
    });
    console.log("The End");
};
Then the CloudWatch log for the error about the maximum number of connected clients looks like this:
2018-08-03T15:56:04.326Z b6307573-9735-11e8-a541-950f760c0aa5 (node:1) UnhandledPromiseRejectionWarning: error: sorry, too many clients already
at u.parseE (/var/task/webpack:/node_modules/pg/lib/connection.js:553:1)
at u.parseMessage (/var/task/webpack:/node_modules/pg/lib/connection.js:378:1)
at Socket.<anonymous> (/var/task/webpack:/node_modules/pg/lib/connection.js:119:1)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at Socket.Readable.push (_stream_readable.js:208:10)
at TCP.onread (net.js:607:20)
Can any of you guys help me with this strange problem?
Thanks
Well, for one thing, you need to move the pool creation above the handler, like so:
const config: PoolConfig = {
    user: process.env.PG_USER,
    ...
};
const pool = new Pool(config);

export const insert: Handler = async (
    event: APIGatewayEvent,
    context: Context,
    cb: Callback
) => {
    ..etc
The way you have it, you are creating a pool on every invocation. If you create the pool outside the handler, Lambda has a chance to reuse the pool across invocations.
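A minimal sketch of the resulting shape, assuming the same environment variables; the small pool size and the try/finally release are illustrative additions, not part of the original answer:

import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { Pool, PoolConfig } from "pg";

// created once per Lambda container and reused across invocations
const config: PoolConfig = {
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DB,
    password: process.env.PG_PASS,
    port: parseInt(process.env.PG_PORT),
    max: 5 // each container holds its own pool, so keep it small
};
const pool = new Pool(config);

export const insert: Handler = async (
    event: APIGatewayEvent,
    context: Context,
    cb: Callback
) => {
    const client = await pool.connect();
    try {
        // ... run the INSERT from the question here ...
        cb(null, {
            statusCode: 200,
            body: JSON.stringify({ message: "Saved successfully!" })
        });
    } finally {
        client.release(); // always return the client to the pool
    }
};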
Code in elasticsearch.js file
function es() {
    throw new Error('Looks like you are expecting the previous "elasticsearch" module. ' +
        'It is now the "es" module. To create a client with this module use ' +
        '`new es.Client(params)`.');
}

es.Client = require('./lib/client');
es.ConnectionPool = require('./lib/connection_pool');
es.Transport = require('./lib/transport');
es.errors = require('./lib/errors');

module.exports = es;
var elasticsearch = require('elasticsearch')

var client = new es.Client({
    host: 'localhost:9200',
    log: 'trace',
})

// Ping the cluster
client.ping({
    requestTimeOut: 30000,
}, function(error){
    if(error) {
        console.log(error)
        console.error("elasticsearch cluster is down!")
    }
    else {
        console.log("All is well")
    }
})
I am running Elasticsearch locally with the command $ bin/elasticsearch,
but when I run $ node elasticsearch.js it gives the following error:
Elasticsearch INFO: 2018-01-22T11:17:50Z
Adding connection to http://localhost:9200/
Elasticsearch DEBUG: 2018-01-22T11:17:50Z
starting request {
"method": "HEAD",
"requestTimeout": 3000,
"castExists": true,
"path": "/",
"query": {
"requestTimeOut": 30000
}
}
Elasticsearch TRACE: 2018-01-22T11:17:50Z
-> HEAD http://localhost:9200/?requestTimeOut=30000
<- 400
Elasticsearch DEBUG: 2018-01-22T11:17:50Z
Request complete
{ Error: Bad Request
at respond (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/transport.js:307:15)
at checkRespForFailure (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/transport.js:266:7)
at HttpConnector.<anonymous> (/Users/ElasticSearchServer/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)
at IncomingMessage.bound (/Users/ElasticSearchServer/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
status: 400,
displayName: 'BadRequest',
message: 'Bad Request',
path: '/',
query: { requestTimeOut: 30000 },
body: undefined,
statusCode: 400,
response: '',
toString: [Function],
toJSON: [Function] }
elasticsearch cluster is down!
If I try adding new index, delete index, check the health or search, it works fine and gives the appropriate result.
Can anyone help me to fix the issue? thanks in advance!
In the new JavaScript client, every option that is not intended for Elasticsearch lives in a second object; your code should be updated as follows:
'use strict'

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

client.ping({}, { requestTimeout: 20000 }, (err, response) => {
    ...
})
In the response object, besides body, statusCode, and headers, you will also find a warnings array and a meta object, which should help you debug issues.
In this case, warnings contained the following message: 'Client - Unknown parameter: "requestTimeout", sending it as query parameter'.
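A minimal sketch of inspecting those fields, assuming the same @elastic/elasticsearch client as above and using its promise API for brevity:

client.ping({}, { requestTimeout: 20000 })
    .then(response => {
        console.log(response.statusCode) // e.g. 200 when the cluster answers
        console.log(response.warnings)   // null, or an array of warning strings like the one above
        console.log(response.meta)       // request/connection metadata useful for debugging
    })
    .catch(err => {
        console.error("elasticsearch cluster is down!", err)
    })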