First, I made an API using Node.js and oracledb.
I have two routes with different response times, say route A with a 10s response time and route B with a 1s response time. When I execute route A followed by route B, I get the error NJS-003: invalid connection, because route B finishes and closes the connection before route A does.
Any ideas how to solve this problem?
I'm using an Oracle pool, calling getConnection and closing the connection on every API request.
async function DBGetData(req, res, query, params = {}) {
  try {
    connection = await oracledb.getConnection();
    connection.callTimeout = 10 * 1000;
    result = await connection.execute(
      query,
      params,
      {
        outFormat: oracledb.OUT_FORMAT_OBJECT,
      }
    );
    // send query result
    res.json({
      status: res.statusCode,
      length: result.rows.length,
      results: result.rows,
    });
  } catch (err) {
    return res.status(400).json({ error: err.toString() });
  } finally {
    if (connection) {
      // Always close connections
      await connection.close();
    }
  }
}
Add a let connection; before the try so that each DBGetData() invocation is definitely using its own connection. Currently it seems that you are referencing a global variable.
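For reference, a minimal sketch of the function with that fix applied (result is scoped locally too, since it has the same implicit-global problem):

async function DBGetData(req, res, query, params = {}) {
  let connection; // local to this invocation, not shared across requests
  try {
    connection = await oracledb.getConnection();
    connection.callTimeout = 10 * 1000;
    const result = await connection.execute(query, params, {
      outFormat: oracledb.OUT_FORMAT_OBJECT,
    });
    res.json({
      status: res.statusCode,
      length: result.rows.length,
      results: result.rows,
    });
  } catch (err) {
    res.status(400).json({ error: err.toString() });
  } finally {
    if (connection) {
      await connection.close(); // releases the connection back to the pool
    }
  }
}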
Related
I am facing a performance issue during load testing of a REST API for Oracle Database, developed using the Express and oracledb Node.js modules.
I see a decrease in performance during load testing as the number of requests per second to the API increases. The issue is reproducible with both stored procedure calls and standard select requests to the database.
From the database side, I see a standard and stable response time for each of the requests. Requests are received by the database at the same time as they were initiated.
From the application (response) side, it looks like responses are being put in some kind of queue and read by Node.js (the oracledb module) in batches.
With a standard response time of 0.5 seconds per request, I might receive a 3-5 second response time at 10 requests per second.
The issue is reproducible with different connection pool sizes (greater than the number of requests per second).
At the moment I am stuck on the possible causes. What might be the reason for such behaviour? And what options for diagnostics or tuning are available for the oracledb module for Node.js?
Some code below:
Creating the connection:
const init = async () => {
  try {
    Logger.info(`oracle instant client address ${process.env.LD_LIBRARY_PATH}`);
    const pool = await oracledb.createPool(dbCredentials);
    Logger.info('db connections pool created');
    return pool;
  } catch (err) {
    Logger.error(`init() error: ${err.message}`);
  }
};

let pool = await init();
route:
router.get('/test', async (req, res, next) => {
  try {
    const result = await testController(pool);
    sendResponse(res, 200, result);
  } catch (e) {
    sendErrResponse(res, 500, e.message);
  }
});
controller:
const testController = async (pool) => {
  let connection;
  try {
    connection = await pool.getConnection();
  } catch (error) {
    return error;
  }
  try {
    let items = {
      items: [
        { itemSKU: 'xxxxx', itemQTY: 1234 },
        { itemSKU: 'yyyyy', itemQTY: 123 },
      ],
    };
    items = JSON.stringify(items);
    const { rows: data } = await connection.execute(
      'select shop_id, prod_id, qnt from TABLE(pkg_ecom.check(:items))',
      { items },
    );
    return data;
  } catch (error) {
    return error;
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
};
The common problem with scaling node-oracledb connection load is Node.js thread starvation. Increase the value of the environment variable UV_THREADPOOL_SIZE before your application starts. See the documentation Connections, Threads, and Parallelism. On Linux, your package.json file might have:
"scripts": {
"start": "export UV_THREADPOOL_SIZE=10 && node index.js"
},
. . .
You can monitor pool usage by setting the enableStatistics attribute during pool creation and then calling pool.getStatistics() or pool.logStatistics(), see Connection Pool Monitoring. Look out for too many getConnection() requests being queued.
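For example, a minimal sketch of the monitoring setup described above, assuming a node-oracledb version that supports these attributes (dbCredentials stands in for the existing pool config):

// Enable statistics when creating the pool.
const pool = await oracledb.createPool({
  ...dbCredentials,
  enableStatistics: true,
});

// Later, e.g. from a diagnostic route or on a timer:
pool.logStatistics();               // dumps pool and queue statistics to stdout
const stats = pool.getStatistics(); // or inspect them programmatically
console.log(stats);                 // watch for a growing getConnection() queue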
You should also read the node-oracledb case study Always Use Connection Pools — and How.
I am trying to connect to my MongoDB from Node/Express and I am receiving a connection timed out error when trying to connect. This is the code I am working with at the moment to find the solution.
await client.connect().then((res: any) => console.log(res))
And this is the error code given.
MongoServerSelectionError: connect ETIMEDOUT 52.64.110.205:27017
So far I have tried adding additional timeout params including
keepAlive=true&socketTimeoutMS=360000&connectTimeoutMS=360000
I have also tried connecting to another cluster with a different username/password and received the same error. I don't think it's an error with env variables, as all the other .env variables are working. And I think it's worth mentioning that this function was working for the first day or two after I put it in.
Below is the entire function. I have commented some parts out for debugging purposes. It returns the same error either way, so I assume it can only be something to do with the connection.
export const handleCreateRequestDB = async (input: any) => {
  console.log(`creating new user in DB # ${input}`)
  const createUserAccount = async (client: any, newUser: object) => {
    await client.connect().then((res: any) => console.log(res)
    // await client.db('onlinestore').collection('user_data').insertOne(newUser).then((result: any) => {
    //   console.log(result)
    //   return result
    // })
    )
  }
  try {
    createUserAccount(client, input)
      .then((result) => { return result })
  } catch (e) {
    console.error(e);
    return false
  }
  // finally {
  //   await client.close()
  // }
}
With the help from everyone here, I was able to solve the issue. It appears that my IP had changed since I created the Atlas cluster, and to fix this I just needed to add my updated IP address to the access list. It also appears that a connection timed out error from Mongo can be caused by access restrictions, such as username/password or, in my case, an IP ban from my own service.
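For future readers, a small hedged sketch of a related tweak: the official mongodb driver accepts a serverSelectionTimeoutMS option, so an unreachable cluster (for example, an IP missing from the access list) fails within seconds instead of hanging. MONGO_URI is a placeholder for your own connection string.

// Assumption: official mongodb Node.js driver; MONGO_URI is a placeholder.
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGO_URI, {
  serverSelectionTimeoutMS: 5000, // raise MongoServerSelectionError after ~5s
});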
If the RabbitMQ instance is unreachable, the connection attempt takes about 120 seconds to time out before surfacing the error.
Here is my code used for connecting:
async function connectAmqp() {
  try {
    // Create the amqp connection; if there is an error, rethrow it to the caller.
    cluster = await amqp.connect(`amqp://127.0.0.1:5672`, {
      timeout: 2000,
    });
    return cluster;
  } catch (error) {
    throw error;
  }
}
Assuming you are using amqplib: the second argument to connect() takes the timeout value.
It was added in this PR.
const amqp = require('amqplib');

const connection = await amqp.connect('amqp://localhost', {
  timeout: 2000,
});
I'm trying to do a get() from my AWS Lambda (Node.js) on ElastiCache Redis using the node_redis client. I believe I'm able to connect to Redis, but I get a timeout (the Lambda 60-second timeout) when I try to perform a get() operation.
I have also granted my AWS Lambda administrator access, just to be certain that it's not a permissions issue. I'm invoking the Lambda by going to the AWS console and clicking the Test button.
Here is my redisClient.js:
const util = require('util');
const redis = require('redis');

console.info('Start to connect to Redis Server');

const client = redis.createClient({
  host: process.env.ElastiCacheEndpoint,
  port: process.env.ElastiCachePort
});

client.get = util.promisify(client.get);
client.set = util.promisify(client.set);

client.on('ready', function() {
  console.log("subs Redis is ready"); // Can see this output in logs
});

client.on('connect', function() {
  console.log('subs connected to redis'); // Can see this output in logs
});

exports.set = async function(key, value) {
  console.log("called set!");
  return await client.set(key, value);
}

exports.get = async function(key) {
  console.log("called get!"); // Can see this output in logs
  return await client.get(key);
}
Here's my index.js which calls the redisClient.js:
const redisclient = require("./redisClient");

exports.handler = async (event) => {
  const params = event.params;
  const operation = event.operation;
  try {
    console.log("Checking RedisCache by calling client get"); // Can see this output in logs
    const cachedVal = await redisclient.get('mykey');
    console.log("Checked RedisCache by calling client get"); // This doesn't show up in logs.
    console.log(cachedVal);
    if (cachedVal) {
      return {
        statusCode: 200,
        body: JSON.stringify(cachedVal)
      };
    } else {
      const setCache = await redisclient.set('myKey', 'myVal');
      console.log(setCache);
      console.log("*******");
      let response = await makeCERequest(operation, params, event.account);
      console.log("CE Request returned");
      return response;
    }
  } catch (err) {
    return {
      statusCode: 500,
      body: err,
    };
  }
};
This is the output (time out error message) that I get:
{
  "errorMessage": "2020-07-05T19:04:28.695Z 9951942c-f54a-4b18-9cc2-119eed65e9f1 Task timed out after 60.06 seconds"
}
I have tried using Bluebird (changing get to getAsync()) per this: https://github.com/UtkarshYeolekar/promisify-redis-client/blob/master/redis.js but still got the same behavior.
I also changed the port to a random value (like 8088) when creating the client, to see the behavior of the connect event for a failed connection. In this case I still get a timed out error response, but I don't see subs Redis is ready or subs connected to redis in my logs.
Can anyone please point me in the right direction? I don't understand why I'm able to connect to Redis but the get() request times out.
I figured out the issue and am posting it here in case it helps anyone in the future, as the behavior wasn't very intuitive to me.
I had enabled the AuthToken param while setting up my Redis cluster. I was passing the param to Lambda in the environment variables but wasn't using it when sending the get()/set() requests. When I disabled the AuthToken requirement in the Redis configuration, Lambda was able to hit Redis with get/set requests. More details on AuthToken can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html#cfn-elasticache-replicationgroup-authtoken
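For future readers, the alternative to disabling AUTH is to pass the token to the client. A hedged sketch, assuming node_redis v3 (where createClient accepts a password option that issues AUTH on connect) and a hypothetical AUTH_TOKEN environment variable:

const redis = require('redis');

const client = redis.createClient({
  host: process.env.ElastiCacheEndpoint,
  port: process.env.ElastiCachePort,
  password: process.env.AUTH_TOKEN, // assumption: token passed via env var
  tls: {}, // ElastiCache requires in-transit encryption when AUTH is enabled
});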
I need to store the values from the request body in Cloud Firestore and send foruminsertdata.Name back in the response, but I am not able to do this.
const functions = require('firebase-functions');
const admin = require("firebase-admin");

admin.initializeApp(functions.config().firebase);
const db = admin.firestore();

exports.helloWorld = functions.https.onRequest((req, res) => {
  if (req.method === 'POST') {
    foruminsertdata = req.body;
    db.collection('forum').add({
      Name: foruminsertdata.Name,
      Description: foruminsertdata.Description,
      Heading: foruminsertdata.Heading,
      PostedOn: foruminsertdata.PostedOn,
      Status: foruminsertdata.Status,
    })
      .then(ref => {
        console.log('Added document with ID: ', ref.id);
        return res.status(200).json({
          message: foruminsertdata.Name
        });
      })
      .catch(err => {
        console.log('Error getting documents', err);
      });
    res.json({
      message: foruminsertdata.Status,
    });
  }
})
I don't know what is happening. Whatever I do, I always get the output as
{
  message: foruminsertdata.Status,
}
in which "foruminsertdata.Status" has some value that I give,
but what I expect as the output is
{
  message: foruminsertdata.Name
}
Your function is immediately returning foruminsertdata.Status to the client without waiting for the promise from the database operations to resolve. Any function that returns a promise is asynchronous and returns immediately. Execution will continue in the callbacks you attach to it.
I'm not sure why you have two calls to res.json() in your code, but if you want to send a response only after your query completes, you'll remove the second one and just send a response after the query is done. You will probably also want to send a response in the catch callback as well to indicate an error.
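A minimal sketch of that adjustment applied to the function above, with the premature res.json() removed and an error response added in the catch:

exports.helloWorld = functions.https.onRequest((req, res) => {
  if (req.method === 'POST') {
    const foruminsertdata = req.body;
    db.collection('forum').add({
      Name: foruminsertdata.Name,
      Description: foruminsertdata.Description,
      Heading: foruminsertdata.Heading,
      PostedOn: foruminsertdata.PostedOn,
      Status: foruminsertdata.Status,
    })
      .then(ref => {
        console.log('Added document with ID:', ref.id);
        // Respond only after the write has completed.
        return res.status(200).json({ message: foruminsertdata.Name });
      })
      .catch(err => {
        console.log('Error adding document', err);
        // Also respond on failure so the client isn't left hanging.
        return res.status(500).json({ error: err.toString() });
      });
  }
});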