I have a system that checks a database to see if a user's UserToken is present. If it isn't, it stops the bot and displays an error message. I'm trying to make the bot repeat the same check every minute to see if my database has been updated. Here is the code I'm using:
setInterval(() => {
  const getToken = dtoken => new Promise((resolve, reject) => {
    MongoClient.connect(url, {
      useUnifiedTopology: true,
    }, function(err, db) {
      if (err) {
        reject(err);
      } else {
        let dbo = db.db("heroku_fkcv4mqk");
        let query = {
          dtoken: dtoken
        };
        dbo.collection("tokens").find(query).toArray(function(err, result) {
          resolve(result);
        });
      }
    })
  })
  bot.on("ready", async message => {
    const result = await getToken(dtoken)
    if (result.length == 1) {
      return
    } else {
      console.error('Error:', 'Your token has been revoked.')
      bot.destroy()
    }
  })
}, 5000);
But it doesn't work and I keep getting this error message:
(node:9808) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 ready listeners added to [Client]. Use emitter.setMaxListeners() to increase limit
If I could get some help with the timeout, that would be amazing.
The bot object registers a new ready listener on every execution of the setInterval() callback. So every 5 seconds another listener is added to the bot object and never removed, which is why Node warns that the listener limit has been exceeded.
Take the listener out of the setInterval and it will work.
Updated code:
let isReady = false;
bot.on("ready", () => {
  isReady = true;
});

const getToken = dtoken => new Promise((resolve, reject) => {
  MongoClient.connect(url, {
    useUnifiedTopology: true,
  }, function(err, db) {
    if (err) {
      reject(err);
    } else {
      let dbo = db.db("heroku_fkcv4mqk");
      let query = {
        dtoken: dtoken
      };
      dbo.collection("tokens").find(query).toArray(function(err, result) {
        resolve(result);
      });
    }
  })
})
setInterval(async () => { // async so that await can be used inside the callback
  if (isReady) {
    const result = await getToken(dtoken)
    if (result.length == 1) {
      return
    } else {
      console.error('Error:', 'Your token has been revoked.')
      isReady = false
      bot.destroy()
    }
  }
}, 5000);
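As a variation (my own sketch, not part of the original answer), you can start the interval inside a one-time ready handler and clear it when the token check fails, which removes the need for the isReady flag:

// Sketch: begin polling only after the ready event has fired once.
// bot, getToken and dtoken are the same names used in the question.
bot.once("ready", () => {
  const timer = setInterval(async () => {
    try {
      const result = await getToken(dtoken);
      if (result.length !== 1) {
        console.error("Error:", "Your token has been revoked.");
        clearInterval(timer); // stop polling before shutting the client down
        bot.destroy();
      }
    } catch (err) {
      console.error("Token check failed:", err);
    }
  }, 5000);
});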
I'm new to using the generic-pool. I see that there is a maxWaitingClients setting, but it doesn't specify for how long a request can stay in the queue before it times out. Is there any way for me to specify such a timeout setting?
EDIT: Added my code for how I'm currently using generic-pool:
function createPool() {
  const factory = {
    create: function() {
      return new Promise((resolve, reject) => {
        const socket = new net.Socket();
        socket.connect({
          host: sdkAddress,
          port: sdkPort,
        });
        socket.setKeepAlive(true);
        socket.on('connect', () => {
          resolve(socket);
        });
        socket.on('error', error => {
          reject(error);
        });
        socket.on('close', hadError => {
          console.log(`socket closed: ${hadError}`);
        });
      });
    },
    destroy: function(socket) {
      return new Promise((resolve) => {
        socket.destroy();
        resolve();
      });
    },
    validate: function (socket) {
      return new Promise((resolve) => {
        if (socket.destroyed || !socket.readable || !socket.writable) {
          return resolve(false);
        } else {
          return resolve(true);
        }
      });
    }
  };
  return genericPool.createPool(factory, {
    max: poolMax,
    min: poolMin,
    maxWaitingClients: poolQueue,
    testOnBorrow: true
  });
}
const pool = createPool();

async function processPendingBlocks(ProcessingMap, channelid, configPath) {
  setTimeout(async () => {
    let nextBlockNumber = fs.readFileSync(configPath, "utf8");
    let processBlock;
    do {
      processBlock = ProcessingMap.get(channelid, nextBlockNumber);
      if (processBlock == undefined) {
        break;
      }
      try {
        const sock = await pool.acquire();
        await blockProcessing.processBlockEvent(channelid, processBlock, sock, configPath, folderLog);
        await pool.release(sock);
      } catch (error) {
        console.error(`Failed to process block: ${error}`);
      }
      ProcessingMap.remove(channelid, nextBlockNumber);
      fs.writeFileSync(configPath, parseInt(nextBlockNumber, 10) + 1);
      nextBlockNumber = fs.readFileSync(configPath, "utf8");
    } while (true);
    processPendingBlocks(ProcessingMap, channelid, configPath);
  }, blockProcessInterval)
}
Since you are using pool.acquire(), you would use the option acquireTimeoutMillis to set a timeout for how long .acquire() will wait for an available resource from the pool before timing out.
You would presumably add that option here:
return genericPool.createPool(factory, {
  max: poolMax,
  min: poolMin,
  maxWaitingClients: poolQueue,
  testOnBorrow: true,
  acquireTimeoutMillis: 3000 // max time to wait for an available resource
});
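With that option set, pool.acquire() rejects once the wait exceeds the limit, so the try/catch already present in the question's loop will see the timeout as an error. A minimal sketch of the acquire/release part of that loop, with the release moved into a finally so a failed block does not leak a socket (that change is my own suggestion, not part of the original code):

// Sketch: pool.acquire() now rejects if no socket becomes available
// within acquireTimeoutMillis; the existing catch handles it.
try {
  const sock = await pool.acquire();
  try {
    await blockProcessing.processBlockEvent(channelid, processBlock, sock, configPath, folderLog);
  } finally {
    await pool.release(sock); // always return the socket to the pool
  }
} catch (error) {
  // Reached both on processing failures and on acquire timeouts.
  console.error(`Failed to acquire socket or process block: ${error}`);
}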
I have a typescript module.
public multipleQuery(queries: string[]) {
  return new Promise(async (resolve, reject) => {
    const cPool = new sql.ConnectionPool(this.room.db);
    await cPool.connect().then((pool: any) => {
      const transaction = new sql.Transaction(pool);
      return transaction.begin(async (err: any) => {
        const request = new sql.Request(transaction, { stream: true });
        try {
          queries.forEach(async (q) => {
            await request.query(q);
          });
          transaction.commit((err2: any) => {
            pool.close();
            if (err2) {
              reject(err2);
            } else {
              resolve(true);
            }
          });
        } catch (err) {
          transaction.rollback(() => {
            pool.close();
            reject(err);
          });
        }
      });
    }).catch((err: Error) => {
      cPool.close();
      reject(err);
    });
  });
}
The queries variable is an array of strings into which I put a lot of SQL insert queries.
No matter what I write in queries, I still receive this error. Why?
RequestError: Requests can only be made in the LoggedIn state, not the SentClientRequest state
TransactionError: Can't acquire connection for the request. There is another request in progress.
The problem is that queries.forEach does not wait for each query to finish, so several requests are fired on the same connection at once, which the driver does not allow. The solution is to run the queries sequentially, for example with the async library (a plain for...of loop with await, sketched after the code, works as well):
const async = require("async");
public multipleQuery(queries: string[]) {
  return new Promise((resolve, reject) => {
    const pool = new sql.ConnectionPool(this.room.db);
    return pool.connect().then((p: any) => {
      const transaction = new sql.Transaction(p);
      return transaction.begin((err: any) => {
        const request = new sql.Request(transaction);
        if (err) {
          reject(err);
        }
        return async.eachSeries(queries, async (query: any, callback: any) => {
          return request.query(query);
        }, async (err2: any) => {
          if (err2) {
            await transaction.rollback(() => {
              pool.close();
              reject(err2);
            });
          } else {
            await transaction.commit(() => {
              pool.close();
              resolve(true);
            });
          }
        });
      });
    });
  });
}
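For comparison, the same serialization can be done without the async library by using a for...of loop with await. This is only a sketch, assuming the mssql promise API for begin/commit/rollback; the names come from the question:

// Sketch: sequential execution with async/await instead of the async library.
public async multipleQuery(queries: string[]) {
  const pool = new sql.ConnectionPool(this.room.db);
  await pool.connect();
  const transaction = new sql.Transaction(pool);
  await transaction.begin();
  try {
    for (const q of queries) {
      // One request per query, awaited before the next one starts,
      // so only one request is ever in flight on the connection.
      await new sql.Request(transaction).query(q);
    }
    await transaction.commit();
    return true;
  } catch (err) {
    await transaction.rollback();
    throw err;
  } finally {
    await pool.close();
  }
}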
In RethinkDB I need to execute 10,000 update queries one by one, each with its own data. It runs, but it takes 25 to 30 minutes to get through all 10,000 records. I don't know what the issue is. Why does the query run so slowly?
let mergeDuplicateContactsInDB = (data) => {
  try {
    return new Promise((resolve, reject) => {
      let arrPromise = [];
      let primaryRecordIden;
      if (data.arrMergeIdens.length > 0) {
        forEach(data.arrMergeIdens, (arrContactIdens, index) => {
          if (arrContactIdens.length > 0) {
            arrPromise.push(
              new Promise((resolve, reject) => {
                primaryRecordIden = arrContactIdens[0];
                arrContactIdens.splice(0, 1);
                delete data.finalMergedData[index]['id'];
                arrDeletedIden = arrDeletedIden.concat(arrContactIdens);
                rdbdash.table("person")
                  .filter({
                    "id": primaryRecordIden
                  })
                  .update(data.finalMergedData[index])
                  //.run(DBConfig.rConnection(), function (err, cursor) {
                  .run()
                  .then((result) => {
                    if (!!result) {
                      return resolve(true);
                    }
                  });
              })
            );
          }
        });
        Promise.all(arrPromise).then((data) => {
          resolve({
            "processed_data_length": data.length
          });
        });
      } else {
        resolve({
          "processed_data_length": 0
        });
      }
    });
  } catch (e) {
    throw new Error(e);
  }
};
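One thing worth checking (my own observation, not an answer from the original thread): filter({ id: ... }) walks the whole table for every update, while get() fetches the row by its primary key. If id is the primary key, the inner query could be rewritten roughly like this:

// Sketch: update by primary key instead of a table scan per record.
// Assumes the same rdbdash instance and data shape as in the question.
rdbdash.table("person")
  .get(primaryRecordIden)                 // primary-key lookup
  .update(data.finalMergedData[index])
  .run()
  .then((result) => resolve(!!result));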
I created a promise function to process a long-running query task. Sometimes the task blocks for hours, so I want to set a timeout to stop it. Below is the code.
It returns the error message correctly, but connection.execute() keeps running for a long time before stopping. So how can I stop it immediately when the promise rejects?
Thanks!
function executeQuery(connection, query) {
  return new Promise((resolve, reject) => {
    "use strict";
    // long-running query
    connection.execute(query, function (err, results) {
      if (err) reject('Error when fetch data');
      else resolve(results);
      clearTimeout(t);
    });
    let t = setTimeout(function () {
      reject('Time Out');
    }, 10);
  });
}

(async () => {
  "use strict";
  oracle.outFormat = oracle.OBJECT;
  try {
    let query = fs.readFileSync("query.sql").toString();
    let results = await executeQuery(connection, query);
    console.log(results.rows);
  } catch (e) {
    console.log(`error:${e}`);
  }
})();
So how can I stop it immediately when the promise rejects?
According to the docs, you can use connection.break:
return new Promise((resolve, reject) => {
  connection.execute(query, (err, results) => {
    if (err) reject(err);
    else resolve(results);
    clearTimeout(t);
  });
  const t = setTimeout(() => {
    connection.break(reject); // is supposed to call the execute callback with an error
  }, 10);
})
Make sure to also release the connection in a finally block.
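A minimal sketch of that calling pattern, assuming a node-oracledb connection pool and the executeQuery wrapper from the question:

// Sketch: always release the connection, whether the query resolves,
// rejects, or is broken by the timeout.
async function runQueryWithTimeout(pool, query) {
  const connection = await pool.getConnection();
  try {
    return await executeQuery(connection, query);
  } finally {
    await connection.close(); // returns the connection to the pool
  }
}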
Try this (using bluebird promises):
var execute = Promise.promisify(connection.execute);

function executeQuery(connection, query) {
  return execute.call(connection, query)
    .timeout(10000)
    .then(function (results) {
      // handle results here
    })
    .catch(Promise.TimeoutError, function (err) {
      // handle timeout error here
    })
    .catch(function (err) {
      // handle other errors here
    });
}
If this still blocks, there's a possibility that the database driver you are using is actually synchronous rather than asynchronous. In that case, that driver would be incompatible with the node event loop and you may want to look into another one.
As Bergi mentioned, you'll need to use the connection.break method.
Given the following function:
create or replace function wait_for_seconds(
  p_seconds in number
)
return number
is
begin
  dbms_lock.sleep(p_seconds);
  return 1;
end;
Here's an example of its use:
const oracledb = require('oracledb');
const config = require('./dbConfig.js');

let conn;
let err;
let timeout;

oracledb.getConnection(config)
  .then((c) => {
    conn = c;
    timeout = setTimeout(() => {
      console.log('Timeout expired, invoking break');
      conn.break((err) => {
        console.log('Break finished', err);
      });
    }, 5000);
    return conn.execute(
      `select wait_for_seconds(10)
       from dual`,
      [],
      {
        outFormat: oracledb.OBJECT
      }
    );
  })
  .then(result => {
    console.log(result.rows);
    clearTimeout(timeout);
  })
  .catch(err => {
    console.log('Error in processing', err);
    if (/^Error: ORA-01013/.test(err)) {
      console.log('The error was related to the timeout');
    }
  })
  .then(() => {
    if (conn) { // conn assignment worked, need to close
      return conn.close();
    }
  })
  .catch(err => {
    console.log('Error during close', err)
  });
Keep in mind that the setTimeout call is just before the execute (because of the return statement). That timeout will start counting down immediately. However, the execute call isn't guaranteed to start immediately as it uses a thread from the thread pool and it may have to wait till one is available. Just something to keep in mind...
Using promises with NodeJS, I load a model that can then be re-used by subsequent calls to the NodeJS app. How can I prevent the same object/model being loaded twice from the database if a second request arrives while the first is still being loaded?
I set a "loading flag" to say that the object is being retrieved from the database and "loaded" when done. If there is a second request that attempts to load the same object, it needs to wait until the initial model is filled and then both can use the same object.
Sample Code (simplified, ES6, Node 0.10 [old for a reason]).
It's the TODO that needs solving.
App:
import ClickController from './controllers/ClickController'
import express from 'express'

const app = express()

app.get('/click/*', (req, res) => {
  // Get the parameters here
  let gameRef = "test";
  ClickController.getGameModel(gameRef)
    .then(() => {
      console.log('Got game model')
      return this.handleRequest()
    }, (err) => {
      throw err
    })
})
Controller:
import gameModel from '../models/GameModel'

class ClickController {
  constructor(config) {
    // Stores the objects so they can be re-used multiple times.
    this.loadedGames = {}
  }

  // getGameModel() as a promise; return model (or throw error if it doesn't exist)
  getGameModel(gameRef) {
    return new Promise((resolve, reject) => {
      let oGame = false
      if (typeof this.loadedGames[gameRef] === 'undefined') {
        oGame = new gameModel()
        this.loadedGames[gameRef] = oGame
      } else {
        oGame = this.loadedGames[gameRef]
      }
      oGame.load(gameRef)
        .then(function() {
          resolve()
        }, (err) => {
          reject(err)
        })
    })
  }
}
Model / Object:
class GameModel {
  constructor() {
    this.loading = false
    this.loaded = false
  }

  load(gameRef) {
    return new Promise((resolve, reject) => {
      if (this.loading) {
        // TODO: Need to wait until loaded, then resolve or reject
      } else if (!this.loaded) {
        this.loading = true
        this.getActiveDBConnection()
          .then(() => {
            return this.loadGame(gameRef)
          }, (err) => {
            console.log(err)
            reject(err)
          })
          .then(() => {
            this.loading = false
            this.loaded = true
            resolve()
          })
      } else {
        // Already loaded, we're fine
        resolve()
      }
    })
  }

  // As this uses promises, another event could jump in and call "load" while this is working
  loadGame(gameRef) {
    return new Promise((resolve, reject) => {
      let sql = `SELECT ... FROM games WHERE gameRef = ${mysql.escape(gameRef)}`
      this.dbConnection.query(sql, (err, results) => {
        if (err) {
          reject('Error querying db for game by ref')
        } else if (results.length > 0) {
          // handle results
          resolve()
        } else {
          reject('Game Not Found')
        }
      })
    })
  }
}
I don't follow exactly which part of your code you are asking about, but the usual scheme for caching a value with a promise while a request is already "in-flight" works like this:
var cachePromise;

function loadStuff(...) {
  if (cachePromise) {
    return cachePromise;
  } else {
    // cache this promise so any other requests while this one is still
    // in flight will use the same promise
    cachePromise = new Promise(function(resolve, reject) {
      doSomeAsyncOperation(function(err, result) {
        // clear cached promise so subsequent requests
        // will do a new request, now that this one is done
        cachePromise = null;
        if (err) {
          reject(err);
        } else {
          resolve(result);
        }
      });
    });
    return cachePromise;
  }
}

// all these will use the same result that is in progress
loadStuff(...).then(function(result) {
  // do something with result
});
loadStuff(...).then(function(result) {
  // do something with result
});
loadStuff(...).then(function(result) {
  // do something with result
});
This keeps a cached promise and, as long as a request is "in-flight", the cachePromise value is in place and is returned for any subsequent requests.
As soon as the request actually finishes, cachePromise is cleared so that the next request issues a new one.
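Applied to the question's controller, the same pattern can be keyed by gameRef so that each game caches its own in-flight load. A sketch reusing the question's loadedGames map and gameModel class (the catch handler that clears the cache entry on failure is my own addition):

// Sketch: cache the load promise per gameRef so concurrent requests
// for the same game share one in-flight load.
getGameModel(gameRef) {
  if (!this.loadedGames[gameRef]) {
    const oGame = new gameModel();
    this.loadedGames[gameRef] = oGame.load(gameRef)
      .then(() => oGame)                    // every caller resolves to the same model
      .catch((err) => {
        delete this.loadedGames[gameRef];   // allow a retry after a failed load
        throw err;
      });
  }
  return this.loadedGames[gameRef];
}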