When the server is restarted, previous cron jobs keep working correctly (Node.js, Bull.js and Redis) - node.js

How can I make cron jobs run in the background, so that when the server is restarted, previously scheduled jobs keep working correctly?
const queue = new Queue('make_recurring', {
  redis: { host: '127.0.0.1', port: 6379 }
});
queue.add(order, {
  repeat: {
    cron: date // e.g. 38,40 12 12,13,14,15 3 *
  },
  removeOnComplete: true
});
queue.process(async (job, done) => {
  console.log(job.data);
  done();
});
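For context, Bull stores repeatable-job metadata in Redis, so the schedule itself survives a process restart; what the process must do on every boot is re-register the processor. A minimal sketch of that startup path (assuming Bull v3 and a local Redis, as in the question):

const Queue = require('bull');

// Same queue name and Redis connection as before.
const queue = new Queue('make_recurring', {
  redis: { host: '127.0.0.1', port: 6379 }
});

// Re-attach the processor on startup; Redis still holds the repeat
// schedule, so previously added repeatable jobs resume on their own.
queue.process(async (job) => {
  console.log(job.data);
});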

Related

Dynamic Crons are executed many times at the same time

I'm creating new CronJobs and scheduling them to run in the future, but when the execution time arrives, the same job fires three times.
Even though I remove the job from the registry after it executes, that doesn't prevent the tripling.
On localhost it's triggered only once; in the published environment it's triggered three times.
We have three pods behind Kubernetes; I guess it's something related to that.
const date = dateFns.addMinutes(new Date(), 10);
const job = new CronJob({
  cronTime: date,
  start: true,
  onTick: async () => {
    await this.sendEmail(params);
  }
});
this.schedulerRegistry.addCronJob('job01', job);
Based on your reply in the comment, you are using clusters, so a cron is created for every running instance: if you have three instances in the cluster, you will get three crons. What you need to do is assign a name to each app in your cluster. I'll give you an example of the setup we have.
We run instances based on the number of CPU cores.
We assigned names to the instances and gave one of them the name primary, set to run on one core.
It doesn't matter what you name the remaining instances, but we set their instance count to -1 so that they use all the remaining cores on the machine.
Here's an example of the ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'nest-primary',
      script: './dist/src/main.js',
      instances: '1',
      exec_mode: 'cluster',
      time: true,
      combine_logs: true,
      max_memory_restart: '3500M',
      max_old_space_size: 3000,
      log_date_format: 'HH:mm YYYY-MM-DD Z',
      log_type: 'json',
      merge_logs: true,
      env_local: {
        NODE_ENV: 'local',
        HOST: 'localhost',
        PORT: 3000,
        DATABASE_URL: 'mysql://user:password@localhost:3306/himam',
        DATABASE_URL_PG: 'postgresql://postgres:password@localhost:5432/himam',
      },
      env_development: {
        NODE_ENV: 'development',
        PORT: 3000,
        HOST: '0.0.0.0',
        DATABASE_URL: 'mysql://user:password@localhost:3306/himam',
        DATABASE_URL_PG: 'postgresql://postgres:password@localhost:5432/himam',
      },
    },
    {
      name: 'nest-replica',
      script: './dist/src/main.js',
      instances: '-1',
      exec_mode: 'cluster',
      time: true,
      combine_logs: true,
      max_memory_restart: '3500M',
      max_old_space_size: 3000,
      log_date_format: 'HH:mm YYYY-MM-DD Z',
      log_type: 'json',
      merge_logs: true,
      env_local: {
        NODE_ENV: 'local',
        HOST: 'localhost',
        PORT: 3000,
        DATABASE_URL: 'mysql://user:password@localhost:3306/himam',
        DATABASE_URL_PG: 'postgresql://postgres:password@localhost:5432/himam',
      },
      env_development: {
        NODE_ENV: 'development',
        PORT: 3000,
        HOST: '0.0.0.0',
        DATABASE_URL: 'mysql://user:password@localhost:3306/himam',
        DATABASE_URL_PG: 'postgresql://postgres:password@localhost:5432/himam',
      },
    },
  ],
};
When I launch the cluster, I pass --env production
pm2 start ecosystem.config.js --env production
The most important part: in your crons, you need to check the name of the instance. You can do this by adding the names you used in the config above to your .env:
PM2_PRIMARY_NAME=nest-primary
PM2_REPLICA_NAME=nest-replica
Finally, in your code when you want to run the cron, check the name of the process, like this:
async handleCron() {
  if (process.env.name !== this.configService.get('PM2_PRIMARY_NAME')) {
    return;
  }
  // do your cron logic here.
}
This ensures that your cron runs only once: your primary instance runs on just one core, so you won't get duplicate triggers. Please do update us.
thanks for the help!
I was able to resolve this using cron settings from the YAML file:
cronjob:
  use: true
  schedule: '*/5 * * * *'
  env:
    CRON: 1
and calling that on app bootstrap like this:
appBootstrapInstance
  .bootstrap()
  .then((app) => {
    return app;
  })
  .then(async (app) => {
    const env = app.get(EnvService);
    if (env.getAsBoolean('CRON')) {
      await new Promise((resolve) => setTimeout(resolve, CRON_TIMEOUT));
      const task = app.get(MyTaskService);
      await task.doSomething();
      await new Promise((resolve) => setTimeout(resolve, CRON_TIMEOUT));
    }
  });

Why is Google Cloud Run getting massive container restarts / new instance creation?

I've been using Google Cloud Run for a year now, and the issue with Cloud Run container restarts / new container starts has been there from the beginning.
I've hosted a Node + MongoDB app in Cloud Run, but the container restarts frequently. It gets around 10-12 requests/second and I couldn't find any performance bottleneck; requests are served smoothly, though sometimes they take longer than usual, which might be new-instance cold-start delay.
The issue I am facing is the HIGH number of connections to the MongoDB server. After some research I found that I have to close the MongoDB connection on Node process exit, so I added a graceful shutdown function.
// Function to terminate the app gracefully:
const gracefulShutdown = async () => {
  console.log(`MONGODB CONNECTION CLOSED!`);
  await mongoose.connection.close();
};

// This will handle process.exit():
process.on('exit', gracefulShutdown);

// This will handle kill commands, such as CTRL+C:
process.on('SIGINT', gracefulShutdown);
process.on('SIGTERM', gracefulShutdown);

// This will prevent dirty exit on code-fault crashes:
process.on('uncaughtException', gracefulShutdown);
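One caveat worth noting here: Node's 'exit' handlers run synchronously, so an async close like the one above cannot complete there, and a SIGINT/SIGTERM handler must eventually call process.exit itself or the process keeps running. A hedged variant of the shutdown hook that closes the connection in the signal handlers instead:

// Sketch: close the MongoDB connection on a termination signal, then exit.
// ('exit' handlers cannot await async work, so the close happens here.)
const shutdown = async (signal) => {
  console.log(`${signal} received, closing MongoDB connection`);
  await mongoose.connection.close();
  process.exit(0);
};

process.on('SIGINT', () => shutdown('SIGINT'));
process.on('SIGTERM', () => shutdown('SIGTERM'));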
But even after adding this, checking the logs I couldn't find any sign that the graceful shutdown function was invoked.
Does Google Cloud Run actually signal the container when the Node.js process crashes?
Is there any way to identify a container restart or new instance creation in Cloud Run?
Here is the MongoDB connection code
exports.connect = () => {
  try {
    mongoose
      .connect(MONGO.URI, {
        useCreateIndex: true,
        keepAlive: 1,
        useNewUrlParser: true,
        useUnifiedTopology: true,
        useFindAndModify: false,
      })
      .then((docs) => {
        console.log(`DB Connected`);
      })
      .catch((err) => {
        console.log(`err`, err);
      });
    return mongoose.connection;
  } catch (err) {
    console.log(`#### Error Connecting DB`, err);
    console.log(`Mongo URI: `, MONGO.URI);
  }
};
Sometimes Cloud Run opens a high number of connections to MongoDB and hits the connection limit of 1500 connections.
Any suggestions are appreciated! I've been facing this issue for a year now.
You should not start the node process via npm or yarn but directly, as CMD ["node", "index.js"] (when you are inside a Docker container built from a Dockerfile).
Explanation here https://maximorlov.com/process-signals-inside-docker-containers/
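For illustration, a minimal Dockerfile along those lines (the base image tag and the index.js entry point are assumptions, not from the original question):

# Run node directly so it receives SIGTERM from Cloud Run; wrappers like
# `npm start` may not forward signals to the child process.
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]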

MongoDB gets locked when working with crontab

I'm creating an application that pulls data from APIs and save it to a MongoDB using crontab.
This is an example of what my crontab file look like:
*/2 * * * * /home/`whoami`/TLC/dataProvider1.js
*/1 * * * * /home/`whoami`/TLC/dataProvider2.js
*/3 * * * * /home/`whoami`/TLC/dataProvider3.js
The scripts are JS files that I've made executable, and they all require a separate file containing the database connection, so each of them connects to the DB in order to save the pulled data.
DB connection file:
var mongoose = require('mongoose');

var connect = function () {
  mongoose.connect('mongodb://localhost:27017/vario', {
    socketTimeoutMS: 0,
    keepAlive: true,
    reconnectTries: 30,
    useNewUrlParser: true,
    autoReconnect: true,
    keepAliveInitialDelay: 300000
  });
};

connect();

mongoose.connection.on('connected', function () {
  console.log('Connection established to the Database...');
});

mongoose.connection.on('error', function (err) {
  console.log('Could not connect to Database.\nError: ' + err);
});

process.on('SIGINT', function () {
  mongoose.connection.close(function () {
    console.log('Forced to close the MongoDB connection');
    process.exit(0);
  });
});

module.exports = connect;
Problem: after these jobs run for about an hour, MongoDB somehow gets locked and no more data can be saved. Starting the mongo shell produces an error saying the connection was refused. Using mongod to recover also gives an error that another instance is already running on the data files (/data/db/), so it cannot lock mongod.lock. The only way I'm dealing with this right now is by shutting down the daemon and restarting it. How can I handle this?
Note: I used crontab because setInterval was interfering with browsing the app while it was saving to the DB.
After a careful inspection, I realized that my crontab jobs were creating new connections to the DB that were never closed. All I had to do was ensure that the connection is closed when a script finishes running.
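A minimal sketch of that fix, assuming each cron script requires the connection file shown above (the run() body and the './db' path are placeholders):

var mongoose = require('mongoose');
require('./db'); // the connection file shown above (path is an assumption)

async function run() {
  // ... pull data from the API and save it to MongoDB ...
}

run()
  .catch(function (err) { console.error(err); })
  // Close the connection when the script is done, so each cron run
  // doesn't leave an open connection behind.
  .finally(function () { return mongoose.connection.close(); });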

Node.js server not waiting for more than approx. 15 sec

I am running a GET route on a Node.js server with Express.js.
I am fetching data from MSSQL, but my MSSQL server takes a while, and my Node.js server won't wait for it more than approximately 15 seconds.
What should I do?
You're doing everything right, but the default request timeout in tedious is 15 seconds.
Use requestTimeout to increase it:
const sql = require('mssql');

let config = {
  user: global.config.database.username,
  password: process.env.database_pwd || process.env.DATABASE_PWD,
  server: global.config.database.host,
  port: 1433,
  database: global.config.database.database,
  requestTimeout: 180000 // 3 minutes instead of the 15-second default
};

let pool;
module.exports.connect = async () => {
  pool = await sql.connect(config);
};

mongoError: Topology was destroyed

I have a REST service built in Node.js with Restify and Mongoose, and a MongoDB collection with about 30,000 regular-sized documents.
I have my node service running through pmx and pm2.
Yesterday, suddenly, node started throwing errors with the message "MongoError: Topology was destroyed", nothing more.
I have no idea what this means or what could have triggered it, and there is not much to be found when googling it. So I thought I'd ask here.
After restarting the node service today, the errors stopped coming in.
I also have one of these running in production, and it scares me that this could happen at any given time to a pretty crucial part of the setup running there...
I'm using the following versions of the mentioned packages:
mongoose: 4.0.3
restify: 3.0.3
node: 0.10.25
It seems to mean your node server's connection to your MongoDB instance was interrupted while it was trying to write to it.
Take a look at the Mongo source code that generates that error
Mongos.prototype.insert = function(ns, ops, options, callback) {
  if(typeof options == 'function') callback = options, options = {};
  if(this.s.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
  // Topology is not connected, save the call in the provided store to be
  // executed at some point when the handler deems it's reconnected
  if(!this.isConnected() && this.s.disconnectHandler != null) {
    callback = bindToCurrentDomain(callback);
    return this.s.disconnectHandler.add('insert', ns, ops, options, callback);
  }
  executeWriteOperation(this.s, 'insert', ns, ops, options, callback);
}
This does not appear to be related to the Sails issue cited in the comments, as no upgrades were installed to precipitate the crash or the "fix"
I know that Jason's answer was accepted, but I had the same problem with Mongoose and found that the service hosting my database recommended applying the following settings to keep MongoDB's connection alive in production:
var options = {
  server: { socketOptions: { keepAlive: 1, connectTimeoutMS: 30000 } },
  replset: { socketOptions: { keepAlive: 1, connectTimeoutMS: 30000 } }
};
mongoose.connect(secrets.db, options);
I hope that this reply may help other people having "Topology was destroyed" errors.
This error is due to the mongo driver dropping the connection for any reason (for example, the server was down).
By default mongoose will try to reconnect for 30 seconds, then stop retrying and throw errors forever until restarted.
You can change this by editing these two fields in the connection options:
mongoose.connect(uri, {
  server: {
    // sets how many times to try reconnecting
    reconnectTries: Number.MAX_VALUE,
    // sets the delay between every retry (milliseconds)
    reconnectInterval: 1000
  }
});
connection options documentation
In my case, this error was caused by a db.close() outside the promise chain inside the async flow, so it ran before the queries finished:
const assert = require('assert');
const { MongoClient } = require('mongodb');

MongoClient.connect(url, { poolSize: 10, reconnectTries: Number.MAX_VALUE, reconnectInterval: 1000 }, function(err, db) {
  // Validate the connection to Mongo
  assert.equal(null, err);
  // Query the SQL table
  querySQL()
    .then(function (result) {
      console.log('Print results SQL');
      console.log(result);
      if (result.length > 0) {
        processArray(db, result)
          .then(function (result) {
            console.log('Res');
            console.log(result);
          })
          .catch(function (err) {
            console.log('Err');
            console.log(err);
          });
      } else {
        console.log('Nothing to show in MySQL');
      }
    })
    .catch(function (err) {
      console.log(err);
    });
  db.close(); // <-------------------------------- THIS LINE runs before the queries above finish
});
Just a minor addition to Gaafar's answer: it gave me a deprecation warning. Instead of on the server object, like this:
MongoClient.connect(MONGO_URL, {
  server: {
    reconnectTries: Number.MAX_VALUE,
    reconnectInterval: 1000
  }
});
it can go on the top-level object. Basically, just take it out of the server object and put it in the options object, like this:
MongoClient.connect(MONGO_URL, {
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 1000
});
"Topology was destroyed" might be caused by mongoose disconnecting before mongo document indexes are created, per this comment
In order to make sure all models have their indexes built before disconnecting, you can:
await Promise.all(mongoose.modelNames().map(model => mongoose.model(model).ensureIndexes()));
await mongoose.disconnect();
I ran into this in a Kubernetes/minikube + Node.js + Mongoose environment.
The problem was that the DNS service came up with some latency. Waiting until DNS was ready before connecting solved my problem.
const dns = require('dns');

var db_options = {
  autoReconnect: true,
  poolSize: 20,
  socketTimeoutMS: 480000,
  keepAlive: 300000,
  keepAliveInitialDelay: 300000,
  connectTimeoutMS: 30000,
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 1000,
  useNewUrlParser: true
};

var dnsTimer = setInterval(() => {
  dns.lookup('mongo-0.mongo', (err, address, family) => {
    if (err) {
      console.log('DNS LOOKUP ERR', err.code ? err.code : err);
    } else {
      console.log('DNS LOOKUP: %j family: IPv%s', address, family);
      clearInterval(dnsTimer); // was clearTimeout, but dnsTimer is an interval
      mongoose.connect(mongoURL, db_options);
    }
  });
}, 3000);

var db = mongoose.connection;
(the numbers in db_options are arbitrary values found on Stack Overflow and similar sites)
I also had the same error. Eventually I found an error in my code: I use a load balancer in front of two Node.js servers, but I had only updated the code on one of them.
I had changed my mongod server from standalone to a replica set, but forgot to update the connection string accordingly, so I hit this error.
standalone connection string:
mongodb://server-1:27017/mydb
replication connection string:
mongodb://server-1:27017,server-2:27017,server-3:27017/mydb?replicaSet=myReplSet
details here: [mongo doc for connection string]
Sebastian's comment on Adrien's answer deserves more attention; it helped me, but since it's a comment it might get overlooked, so here it is as a solution:
var options = { useMongoClient: true, keepAlive: 1, connectTimeoutMS: 30000, reconnectTries: 30, reconnectInterval: 5000 };

mongoose.connect(config.mongoConnectionString, options, (err) => {
  if (err) {
    console.error("Error while connecting", err);
  }
});
Here's what I did; it works fine. The issue was gone after adding the options below.
const dbUrl = "mongodb://localhost:27017/sampledb";
const options = { useMongoClient: true, keepAlive: 1, connectTimeoutMS: 30000, reconnectTries: 30, reconnectInterval: 5000, useNewUrlParser: true };

mongoose.connect(dbUrl, options, function (error) {
  if (error) {
    console.log("mongoerror", error);
  } else {
    console.log("connected");
  }
});
You need to restart mongo to clear the topology error, then change a few mongoose or MongoClient options to keep it from recurring:
var mongoOptions = {
  useMongoClient: true,
  keepAlive: 1,
  connectTimeoutMS: 30000,
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 5000,
  useNewUrlParser: true
};
mongoose.connect(mongoDevString, mongoOptions);
I got this error while creating a new database in MongoDB Compass Community. The issue was that mongod was not running, so as a fix I had to run the mongod command as follows:
C:\Program Files\MongoDB\Server\3.6\bin>mongod
I was able to create the database after running that command.
Hope it helps.
I was struggling with this for some time. As you can see from the other answers, the cause can vary quite a bit.
The easiest way to find out what's causing it is to turn on loggerLevel: 'info' in the connection options.
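For example, a minimal sketch (the URL is a placeholder; loggerLevel is an option of the legacy 3.x Node.js driver):

const { MongoClient } = require('mongodb');

// Verbose driver logging makes it easier to see what destroys the topology.
MongoClient.connect('mongodb://localhost:27017/mydb', { loggerLevel: 'info' })
  .then((client) => {
    // ... use client and watch the driver's log output ...
  });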
In my case, this error was caused by an identical server instance already running in the background.
The weird thing is that when I started my server without noticing one was already running, the console didn't show anything like 'something is already using port xxx'. I could even upload something to the server. So it took me quite a while to locate the problem.
What's more, after closing all the applications I could think of, I still could not find the process using the port in my Mac's Activity Monitor. I had to use lsof to trace it. The culprit was not surprising: a node process. However, with the PID shown in the terminal, I found the port number in the monitor was different from the one used by my server.
All in all, killing all the node processes may solve this problem directly.
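For reference, the kind of lsof check described above (the port number is just an example, and <PID> is a placeholder):

# find the process listening on the server's port, then kill it
lsof -i :3000
kill -9 <PID>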
Using mongoose here, but you could do a similar check without it
export async function clearDatabase() {
  if (mongoose.connection.readyState === mongoose.connection.states.disconnected) {
    return Promise.resolve();
  }
  return mongoose.connection.db.dropDatabase();
}
My use case was just tests throwing errors, so if we've disconnected, I don't run operations.
I ran into this problem recently. Here's what I did:
Restart MongoDB: sudo service mongod restart
Restart my Node.js app. I use pm2 to handle this: pm2 restart [your-app-id]. To get the ID, use pm2 list.
var mongoOptions = {
  useNewUrlParser: true,
  useUnifiedTopology: true,
};
mongoose.connect(mongoDevString, mongoOptions);
I solved this issue by:
ensuring mongo is running
restarting my server
