Prehistory
I have a Node.js server whose instances run on multiple machines, and every instance runs a cron job once per day at the same time.
While one instance is running its job or has just finished it, the other instances should skip executing the job's logic.
I already have a MongoDB connection, so I decided to save the state of a running job and its timestamp to the DB and check/change it inside every job's callback.
The document model I chose to save job state in the collection is:
interface JobDao {
_id: ObjectId;
type: string;
state: 'locked' | 'failed' | 'completed';
updatedDate: DateISO;
}
I use the "mongodb": "^3.6.3" package to make queries.
After several attempts, I wonder whether the behavior described below can be implemented at all. Maybe somebody can also suggest another way to synchronize jobs running on multiple machines.
So, the solution I'm trying to implement and asking for help with:
When the cron job fires, get the job document from the DB.
Check the job's state against these conditions:
if it's locked and not expired -> skip the logic (note: I use a one-hour expiration to recover from unexpected issues where a server crashed while running the job)
if it's locked but expired -> change the job's state to locked again (take over the stale lock)
if it's not locked and wasn't updated within the last 5 minutes -> change the job's state to locked
execute the logic according to the conditions above
"unlock" the job (update the job's state in the document)
But here's the issue I ran into: there's no concurrency control. Between getting and updating the document on one machine, other machines can read or update a stale document with out-of-date data.
I've tried solutions such as:
findOneAndUpdate
an aggregation pipeline (the problem there is comparing the expiration, which looks impossible)
transactions
bulk operations
But nothing has worked. I'm starting to think about changing the DB, but maybe somebody can tell me how to implement this with MongoDB, or recommend a more suitable solution?
After a short rest I decided to start from scratch, and I finally found a solution.
Here's my code example. It's not perfect, so I'm planning to refactor it; however, it works and solves my issue! Hope it helps somebody.
A short description of its logic:
The "mediator" is the public method scheduleJob. Logic order:
when we schedule job, it creates new document for the type in DB if it doesn't exist.
unlocks job if it's stale (it's stale if has been locked more than for a half an hour). Server can fall down while running job what cause infinite lock of job but checking stale job should help
next step is locking unlocked job, othervise, finish the logic. It's possible when one instance finishes job just before next instance starts, so I added finishing of the job if the same job was running for last 5 minutes. It's important that such condition restricts frequency as jobs can't bet runned every 5 minutes but in my case it's suitable solution
CronJobDao and CronJobModel are identical and represent the document in the DB:
export interface CronJobDao {
type: CronJobTypeEnum;
isLocked: boolean;
updatedAt: Date;
completedAt: Date;
}
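The helper mapCronJobDaoToModel imported below isn't shown in my code; a minimal sketch, assuming CronJobModel simply mirrors the DAO fields, could look like this (the import path is a guess):

import { CronJobModel } from '../models/cron-job.model';

// Hypothetical mapper: copies the raw DB document onto the application model one-to-one.
export function mapCronJobDaoToModel(dao: CronJobDao): CronJobModel {
  return {
    type: dao.type,
    isLocked: dao.isLocked,
    updatedAt: dao.updatedAt,
    completedAt: dao.completedAt,
  };
}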
Service with scheduleJob method:
import { inject, injectable } from 'inversify';
import { Job, scheduleJob } from 'node-schedule';
import { CronJobTypeEnum } from '../core/enums/cron-job-type.enum';
import { CronJobRepository } from './cron-job.repository';
@injectable()
export class CronJobService {
readonly halfHourMs = 30 * 60 * 1000;
readonly fiveMinutesMs = 5 * 60 * 1000;
constructor(
@inject(CronJobRepository) private cronJobRepository: CronJobRepository,
) {}
scheduleJob(type: CronJobTypeEnum, timeRule: string, callback: Function): Job {
this.cronJobRepository.registerJob(type).then();
return scheduleJob(
type,
timeRule,
async () => {
await this.unlockStaleJob(type);
const lockedJob = await this.cronJobRepository.lockJob(type);
if (!lockedJob) {
console.warn('Job has already been locked');
return;
}
if ((new Date().getTime() - lockedJob.completedAt?.getTime()) < this.fiveMinutesMs) {
await this.cronJobRepository.unlockJob(type);
console.warn('Job has recently been completed');
return;
}
console.info('Job is locked');
// Await so an async callback finishes before the job is marked as completed.
await callback();
await this.cronJobRepository.completeJob(type);
console.info('Job is completed');
},
);
}
private async unlockStaleJob(type: CronJobTypeEnum): Promise<void> {
const staleJob = await this.cronJobRepository.unlockIfTimeExpired(type, this.halfHourMs);
if (!staleJob) {
return;
}
console.warn('Has stale job: ', JSON.stringify(staleJob));
}
}
Class for communication with DB:
import { inject, injectable } from 'inversify';
import { Db } from 'mongodb';
import { CronJobDao, mapCronJobDaoToModel } from '../core/daos/cron-job.dao';
import { CronJobTypeEnum } from '../core/enums/cron-job-type.enum';
import { CronJobModel } from '../core/models/cron-job.model';
import { AbstractRepository } from '../core/utils/abstract.repository';
@injectable()
export class CronJobRepository extends AbstractRepository<CronJobDao> {
constructor(@inject(Db) db: Db) {
super(db, 'cron_jobs');
}
async registerJob(type: CronJobTypeEnum) {
const result = await this.collection.findOneAndUpdate(
{ type },
{
$setOnInsert: {
type,
isLocked: false,
updatedAt: new Date(),
},
},
{ upsert: true, returnOriginal: false },
);
return result.value;
}
async unlockIfTimeExpired(type: CronJobTypeEnum, expiredFromMs: number): Promise<CronJobModel | null> {
const expirationDate = new Date(new Date().getTime() - expiredFromMs);
const result = await this.collection.findOneAndUpdate(
{
type,
isLocked: true,
updatedAt: { $lt: expirationDate },
},
{
$set: {
updatedAt: new Date(),
isLocked: false,
},
});
return result.value ? mapCronJobDaoToModel(result.value) : null;
}
async lockJob(type: CronJobTypeEnum) {
return this.toggleJobLock(type, false);
}
async unlockJob(type: CronJobTypeEnum) {
return this.toggleJobLock(type, true);
}
private async toggleJobLock(type: CronJobTypeEnum, stateForToggle: boolean): Promise<CronJobModel | null> {
const result = await this.collection.findOneAndUpdate(
{
type,
isLocked: stateForToggle,
},
{
$set: {
isLocked: !stateForToggle,
updatedAt: new Date(),
},
},
);
return result.value ? mapCronJobDaoToModel(result.value) : null;
}
async completeJob(type: CronJobTypeEnum): Promise<CronJobModel | null> {
const currentDate = new Date();
const result = await this.collection.findOneAndUpdate(
{
type,
isLocked: true,
},
{
$set: {
isLocked: false,
updatedAt: currentDate,
completedAt: currentDate,
},
},
);
return result.value ? mapCronJobDaoToModel(result.value) : null;
}
}
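For completeness, here's a rough usage sketch of the service (the service instance name, enum member, and cron rule below are made up for illustration):

// Somewhere during application bootstrap:
cronJobService.scheduleJob(
  CronJobTypeEnum.DailyCleanup, // hypothetical enum member
  '0 3 * * *',                  // node-schedule rule: every day at 03:00
  () => {
    // the actual job logic goes here
  },
);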
I would recommend using a locking mechanism to synchronize between multiple services.
You can use a basic mutex if you want only one service to write/read in your critical section.
I'm not sure exactly what you want when one service tries to read while another service is performing changes (wait, skip, or something else).
You can use a shared component such as Redis to store the locking key.
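For example, a minimal sketch of such a lock with Redis (using ioredis here; the key name and TTL are arbitrary choices, not requirements):

import Redis from 'ioredis';

const redis = new Redis();

// SET ... NX PX is atomic: only one instance can create the key, so only one acquires the lock.
async function tryAcquireLock(jobName: string, ttlMs: number): Promise<boolean> {
  const result = await redis.set(`lock:${jobName}`, process.pid.toString(), 'PX', ttlMs, 'NX');
  return result === 'OK';
}

async function releaseLock(jobName: string): Promise<void> {
  await redis.del(`lock:${jobName}`);
}

// Inside the cron callback:
// if (await tryAcquireLock('daily-job', 60 * 60 * 1000)) { /* run job */ await releaseLock('daily-job'); }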
Related
I have the following document structure called activity:
{
duration: {
start?: Timestamp,
end?: Timestamp
}
}
If I want to copy duration.end into duration.start I do this:
...
const activity = (await activityRef.get()).data()
const { duration } = activity;
duration.start = duration.end
await activityRef.update({
duration
});
The problem is that now my duration.start is no longer a Timestamp and is instead a plain seconds/nanoseconds object:
{
duration: {
start: {
seconds: Number,
nanoseconds: Number,
},
end: Timestamp
}
}
How do I move a timestamp from one field to another without it losing its type?
Also, is there a proper way to avoid having to deal with seconds/nanoseconds objects, without having to re-instantiate an admin.firestore.Timestamp?
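For reference, the re-instantiation workaround mentioned above would look roughly like this (a sketch, assuming the Firebase Admin SDK; toTimestamp is a hypothetical helper):

import * as admin from 'firebase-admin';

// Hypothetical helper: rebuilds a proper Timestamp from a plain { seconds, nanoseconds } object.
function toTimestamp(value: any): admin.firestore.Timestamp {
  return value instanceof admin.firestore.Timestamp
    ? value
    : new admin.firestore.Timestamp(value.seconds, value.nanoseconds);
}

async function copyEndToStart(activityRef: admin.firestore.DocumentReference) {
  const { duration } = (await activityRef.get()).data()!;
  duration.start = toTimestamp(duration.end);
  await activityRef.update({ duration });
}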
I am trying to get soft-deleted records (deletedAt column) when using query from TypeOrmQueryService, but it looks like there is no way to do that. I know that I can use withDeleted with find or findOne, but I cannot switch to these methods or use a query builder, since that would require a lot of changes in the front-end.
@QueryService(Patient)
export class PatientQueryService extends TypeOrmQueryService<Patient> {
constructor(@InjectRepository(Patient) repo: Repository<Patient>) {
super(repo);
}
async getOnePatient(currentUser: User, patientId: number) {
const result = await super.query({
paging: { limit: 1 },
filter: { id: { eq: 1 } },
});
}
}
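For reference, the withDeleted option mentioned above looks like this with plain TypeORM find/findOne (the exact options shape may vary with your TypeORM version); this is what I cannot switch to:

// Plain repository call that includes soft-deleted rows (rows with deletedAt set):
const patient = await this.repo.findOne({
  where: { id: patientId },
  withDeleted: true,
});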
I want to implement ban functionality with NestJS and MongoDB. An admin can ban or block a user for a certain period of time. After that period, the ban is automatically removed and the user can log in again.
It might work like this...
If we want to ban someone, we can change his role, for example from user(1) to ban(0). For the next 7 days the role stays ban(0), and after the 7th day it automatically converts back to user(1). The user can also see how many days are left until the ban restriction is removed from his account.
But I'm not finding anything on the internet about this topic. Can anybody tell me how to implement this functionality, or point me to a blog or document that can help?
I think you are probably looking for task scheduling:
https://docs.nestjs.com/techniques/task-scheduling
You can execute a function periodically and either delete users or update their role depending on your business requirements.
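For instance, a minimal sketch of that idea (the banExpiresAt field, the service name, and the schema path are assumptions for illustration, not part of the question):

import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';
import { InjectModel } from '@nestjs/mongoose';
import { Model } from 'mongoose';
import { User, userDocument } from './schemas/user.schema'; // assumed path

@Injectable()
export class BanSweepService {
  constructor(@InjectModel(User.name) private readonly userModel: Model<userDocument>) {}

  // Runs every hour and re-activates users whose ban period has expired.
  // banExpiresAt is a hypothetical Date field on the user schema.
  @Cron(CronExpression.EVERY_HOUR)
  async removeExpiredBans(): Promise<void> {
    await this.userModel.updateMany(
      { isActive: false, banExpiresAt: { $lte: new Date() } },
      { $set: { isActive: true }, $unset: { banExpiresAt: '' } },
    );
  }
}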
So, following the answer and the NestJS documentation @mh377 provided, I wrote code for my question. There is a lot to improve here, but as a first step I tried to show how changing a role and reverting it automatically after some time can be done in NestJS with MongoDB.
Add a new property to the MongoDB schema. The @ApiProperty() decorator is for swagger-ui documentation. isActive will be used to ban and unban users based on its true/false value. Every newly registered user has it set to true by default, which means he has access.
@ApiProperty()
@Prop({ required: true, default: true })
isActive: boolean;
Install The Following Task Scheduling Dependencies
$ npm install --save @nestjs/schedule
$ npm install --save-dev @types/cron
Register it in app.module.ts. You can register it in other modules as well, if you're working in some other nested module.
import { ScheduleModule } from '@nestjs/schedule';
@Module({
imports: [
ScheduleModule.forRoot()
],
})
user.service.ts. Register SchedulerRegistry in the constructor:
constructor(
@InjectModel(User.name) private readonly userModel: Model<userDocument>,
private scheduler: SchedulerRegistry,
) {}
user.service.ts. A function to ban a user and remove the restriction automatically after 10 seconds. You can adjust the time according to your requirements. If you don't stop the job, it will keep executing every 10 seconds in an infinite loop.
new CronJob() is the NestJS task-scheduling part.
import { CronExpression, SchedulerRegistry } from '@nestjs/schedule';
import { CronJob } from 'cron';
private banned: userDocument;
private banRemoved: userDocument;
async banUser(id: string): Promise<userDocument> {
  let user: User = await this.userModel.findById(id);
  user.isActive = false;
  this.banned = await this.userModel.findByIdAndUpdate(user._id, user, {
    new: true,
  });
  console.log('Banned user', this.banned);
  const job: CronJob = new CronJob(
    CronExpression.EVERY_10_SECONDS,
    async () => {
      let user: User = await this.userModel.findById(id);
      user.isActive = true;
      this.banRemoved = await this.userModel.findByIdAndUpdate(user._id, user, {
        new: true,
      });
      console.log('Revoked', this.banRemoved);
    },
  );
  // Use a unique job name so banning a second user doesn't collide with an already registered job.
  this.scheduler.addCronJob(`unban-${id}`, job);
  job.start();
  setTimeout(() => {
    job.stop();
  }, 10100);
  // Return the just-banned user; the unban happens asynchronously in the cron job above.
  return this.banned;
}
user.controller.ts
@Patch('banUser/:id')
async banUser(@Param('id') id: string): Promise<userDocument> {
return await this.userService.banUser(id);
}
Hit the endpoint to check:
nestjs-server-port / @Controller('name') / http-request-decorator('name') / userId
http://localhost:3000/users/banUser/639b47a25Lef4ddd7a48bb60
I'm having a little trouble with an integration test for my Mongoose application. The problem is that my unique setting constantly gets ignored. The schema looks more or less like this (so no fancy stuff in there):
const RealmSchema:Schema = new mongoose.Schema({
Title : {
type : String,
required : true,
unique : true
},
SchemaVersion : {
type : String,
default : SchemaVersion,
enum: [ SchemaVersion ]
}
}, {
timestamps : {
createdAt : "Created",
updatedAt : "Updated"
}
});
It looks like basically all the rules set in the schema are being ignored. I can pass in a Number/Boolean where a String is required. The only thing that works is that fields which have not been declared in the schema won't be saved to the DB.
First probable cause:
I have the feeling that it might have to do with the way I test. I have multiple integration tests, and after each one my database gets dropped (so I have the same conditions for every test and precondition the database within that test).
Is it possible that the reason is my indices being dropped with the database and not being re-created when the next test creates the database and collection again? And if so, how could I make sure that after every test I get an empty database that still respects all my schema settings?
Second probable cause:
I'm using TypeScript in this project. Maybe there is something wrong in how I define the Schema and the Model. This is what I do:
1. Create the Schema (code from above)
2. Create an interface for the model (where IRealmM extends the interface for use with Mongoose)
import { SpecificAttributeSelect } from "../classes/class.specificAttribute.Select";
import { SpecificAttributeText } from "../classes/class.specificAttribute.Text";
import { Document } from "mongoose";
interface IRealm{
Title : String;
Attributes : (SpecificAttributeSelect | SpecificAttributeText)[];
}
interface IRealmM extends IRealm, Document {
}
export { IRealm, IRealmM }
3. Create the model
import { RealmSchema } from '../schemas/schema.Realm';
import mongoose, { Model } from 'mongoose';
import { IRealmM } from '../interfaces/interface.realm';
// Apply Authentication Plugin and create Model
const RealmModel:Model<IRealmM> = mongoose.model('realm', RealmSchema);
// Export the Model
export { RealmModel }
The unique option is not a validator. Check out this link from the Mongoose docs.
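In practice that means a duplicate Title only fails at the index level with a duplicate-key error (code 11000), not during validation. A minimal sketch of handling that, assuming the RealmModel from the question:

// Hypothetical create helper around the question's RealmModel.
async function createRealm(title: string) {
  try {
    return await RealmModel.create({ Title: title });
  } catch (err: any) {
    // unique is enforced by the index, so a duplicate surfaces as a MongoDB
    // duplicate-key error rather than a Mongoose validation error.
    if (err.code === 11000) {
      throw new Error(`Realm with Title "${title}" already exists`);
    }
    throw err;
  }
}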
OK, I finally figured it out. The key issue is described here:
Mongoose Unique index not working!
Solstice333 states in his answer that ensureIndex is deprecated (a warning I had been getting for some time; I thought it was still working, though).
After adding .createIndexes() to the model, leaving me with the following code, it works (at least as long as I'm not testing; more on that after the code):
// Apply Authentication Plugin and create Model
const RealmModel:Model<IRealmM> = mongoose.model('realm', RealmSchema);
RealmModel.createIndexes();
Now the problem with this is that the indexes are set when your connection is first established, but not if you drop the database in your process (which, at least for me, happens after every integration test).
So in my tests the resetDatabase function looks like this, to make sure all the indexes are set:
const resetDatabase = done => {
if(mongoose.connection.readyState === 1){
mongoose.connection.db.dropDatabase( async () => {
await resetIndexes(mongoose.models);
done();
});
} else {
mongoose.connection.once('open', () => {
mongoose.connection.db.dropDatabase( async () => {
await resetIndexes(mongoose.models);
done();
});
});
}
};
const resetIndexes = async (Models: Object) => {
  let indexesReset: any[] = [];
  for (let key in Models) {
    indexesReset.push(Models[key].createIndexes());
  }
  // Await every createIndexes() call; otherwise this resolves before the indexes actually exist.
  await Promise.all(indexesReset);
  return true;
}
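As a usage note, with a Mocha-style test suite this can then be hooked in like so (a sketch, assuming done-callback hooks):

// Runs before every test so each one starts with an empty database that still has its indexes.
beforeEach(done => {
  resetDatabase(done);
});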
I would like baffle.where({id: 1}).fetch() to always get the typeName attribute as part of the baffle model, without fetching it from baffleType explicitly each time.
The following works for me, but it seems that withRelated only loads the relation if the baffle model is fetched directly, not when it is loaded as a relation itself:
let baffle = bookshelf.Model.extend({
constructor: function() {
bookshelf.Model.apply(this, arguments);
this.on('fetching', function(model, attrs, options) {
options.withRelated = options.withRelated || [];
options.withRelated.push('type');
});
},
virtuals: {
typeName: {
get: function () {
return this.related('type').attributes.typeName;
}
}
},
type: function () {
return this.belongsTo(baffleType, 'type_id');
}
});
let baffleType = bookshelf.Model.extend({});
What is the proper way to do that?
The issue on the repo is related to the fetched event; however, the fetching event works fine (v0.9.2).
So, just for example, if you have a third model like
var Test = bookshelf.Model.extend({
tableName : 'test',
baffleField : function(){
return this.belongsTo(baffle)
}
})
and then do Test.forge().fetch({ withRelated : ['baffleField'] }), the fetching event on baffle will fire. However, the ORM will not include type (the nested related model) unless you specifically tell it to do so with
Test.forge().fetch({ withRelated : ['baffleField.type']})
However, I would try to avoid this if it ends up making N queries for N records.
UPDATE 1
I was talking about the same thing that you were doing with the fetching event, like
fetch: function fetch(options) {
var options = options || {}
options.withRelated = options.withRelated || [];
options.withRelated.push('type');
// Fetch uses all set attributes.
return this._doFetch(this.attributes, options);
}
in model.extend. However, as you can see, this might break across version changes, since it relies on the internal _doFetch method.
This question is super old, but I'm answering anyway.
I solved this by just adding a new function, fetchFull, which keeps things pretty DRY.
let MyBaseModel = bookshelf.Model.extend({
fetchFull: function() {
let args;
if (this.constructor.withRelated) {
args = {withRelated: this.constructor.withRelated};
}
return this.fetch(args);
},
});
let MyModel = MyBaseModel.extend({
tableName: 'whatever',
}, {
withRelated: [
'relation1',
'relation1.related2'
]
}
);
Then whenever you're querying, you can either call Model.fetchFull() to load everything, or in cases where you don't want to take a performance hit, you can still resort to Model.fetch().
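For example, a quick usage sketch with the models defined above:

// Loads the model together with relation1 and relation1.related2:
new MyModel({ id: 1 }).fetchFull().then(model => {
  console.log(model.related('relation1').toJSON());
});

// Plain fetch() when the related data isn't needed:
new MyModel({ id: 1 }).fetch().then(model => {
  console.log(model.toJSON());
});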