NestJS dynamic cron scheduling testing with Jest

I am trying to use NestJS's dynamic cron scheduling feature as documented here: https://docs.nestjs.com/techniques/task-scheduling
When running a unit test with Jest, I am facing this issue: A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks.
After narrowing it down, the issue points to job.start(). Since this is an 'open' job, how do I close it in Jest to resolve the error above?
Any help would be much appreciated!
addCronJob(name: string, seconds: string) {
  const job = new CronJob(`${seconds} * * * * *`, () => {
    this.logger.warn(`time (${seconds}) for job ${name} to run!`);
  });

  this.schedulerRegistry.addCronJob(name, job);
  job.start();

  this.logger.warn(`job ${name} added for each minute at ${seconds} seconds!`);
}
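One way to clear the open handle is to stop and remove every registered job in a Jest teardown hook. A minimal sketch, assuming schedulerRegistry is the same SchedulerRegistry instance that was injected into the service under test:
afterEach(() => {
  // Stop each cron job so its timer no longer holds the event loop open,
  // then drop it from the registry so the next test starts clean.
  schedulerRegistry.getCronJobs().forEach((job, name) => {
    job.stop();
    schedulerRegistry.deleteCronJob(name);
  });
});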

Related

Cron job in Strapi v3 does not get triggered when a method from Strapi services is called

config/functions/cron.js
async function logger() {
  console.log("Hello, I'm Async");
}

module.exports = {
  '*/10 * * * * *': async () => {
    console.log("Before");
    await strapi.services.collectionName.someMethodName();
    console.log("After");
  },
  '*/10 * * * * *': async () => {
    await logger();
  },
};
In the example above, logger gets called properly, but someMethodName doesn't get called at all. "Before" and "After" are also not printed. I don't know what is wrong or how to check it.
The same code works on the staging site but not on the production server.
I don't understand what is happening or how to solve this.
Does anyone know a solution to this?
Thanks!
Found the solution. Posting it here so that it will help someone else in the future.
The time format for both of the cron tasks is the same. When I checked the list of active cron tasks in the Strapi config, there was only one available. Since the export key was the same for both entries, the second one was overriding the first. Once I gave each task a different time format, the list of active cron jobs showed all of them.
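For illustration, a sketch of the fixed config with unique keys (the '*/15' schedule is arbitrary; any expression distinct from the first key works):
module.exports = {
  // Each key must be unique: duplicate keys in an object literal mean
  // later entries silently override earlier ones.
  '*/10 * * * * *': async () => {
    console.log("Before");
    await strapi.services.collectionName.someMethodName();
    console.log("After");
  },
  '*/15 * * * * *': async () => {
    await logger();
  },
};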
The commands below will help you check the list of active cron tasks:
npm run strapi console
const strapi = require('strapi');
strapi().config

How to complete a process in Node.js after executing all the operations

I am very new to Node.js and am trying to develop an application that acts as a scheduler: it fetches data from one ELK instance and sends the processed data to another ELK instance. I am able to achieve the expected behaviour, but after completing all the processing, the scheduler job does not exit and instead waits for the next scheduled run to come up.
Note: This scheduler runs every 3 minutes.
job.js
const moment = require("moment");
// assumed path: the module that exposes the async sendData() method
const reports = require("./reports");

const self = module.exports = {
  async schedule() {
    if (process.env.SCHEDULER == "MinuteFrequency") {
      var timenow = moment().seconds(0).milliseconds(0).valueOf();
      var endtime = timenow - 60000;
      var starttime = endtime - 60000 * 3;
      // sendData is an async method
      reports.sendData(starttime, endtime, "SCHEDULER");
    }
  }
}
I tried various solutions such as Promise.allSettled(....., Promise.resolve(true), etc., but was not able to fix this.
As per my requirement, I want the scheduler to complete its processing and then exit so that I can save some resources, as I am planning to deploy the application using Kubernetes CronJobs.
When all your work is done, you can call process.exit() to cause your application to exit.
In this particular code, you may need to know when reports.sendData() is actually done before exiting. We would have to see that code to know how to tell when it is done. Just because it's an async function doesn't mean it's written properly to return a promise that resolves when it's done. If you want further help, show us the code for sendData() and any code that it calls too.
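Assuming sendData() does return a promise that resolves when the transfer is finished, a minimal sketch of the exit logic would be:
async schedule() {
  if (process.env.SCHEDULER == "MinuteFrequency") {
    var timenow = moment().seconds(0).milliseconds(0).valueOf();
    var endtime = timenow - 60000;
    var starttime = endtime - 60000 * 3;
    // Wait for the transfer to finish, then exit so the Kubernetes
    // CronJob pod is released instead of idling until the next run.
    await reports.sendData(starttime, endtime, "SCHEDULER");
    process.exit(0);
  }
}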

Repeatable jobs not getting triggered at given cron timing in Bull

I wanted to perform some data processing in parallel using the Bull npm package and start processing each job at the given cron time.
const Queue = require("bull");

/**
 * initialize the queue for executing cron jobs
 **/
this.workQueue = new Queue(this.queueOptions.name, {
  redis: redisConfig
});

this.workQueue.process((job, done) => {
  done();
  this.processJob(job)
    .then(data => {
      global.logger.info(`successfully-executed-job ${job.id}`);
    })
    .catch(error => {
      global.logger.error(`${JSON.stringify(error)}-in-executing-job-${job.id}`);
    });
});

// here I have included a unique jobId
this.workQueue.add({}, { repeat: { cron: "5 * * * *" }, jobId: Date.now() });
Any suggestions to achieve the same?
The issue is resolved now. If you're facing the same issue, make sure that you're referring to the correct timezone.
Cheers!!
I also faced this same issue. One thing to note with respect to the above code is that a QueueScheduler instance is not initialized. Of course the timezone also plays a crucial role, but without a QueueScheduler instance (which has the same name as the Queue), the jobs don't get added into the queue. The QueueScheduler instance acts as a book-keeper. Also take care with one more important parameter, "limit": if you don't set the limit to 1, then a job which is scheduled at a particular time will get triggered an unlimited number of times.
For example: To run a job at german time 22:30 every day the configuration would look like:
repeat: {
  cron: '0 30 22 * * *', // seconds field pinned to 0 so the job fires once at 22:30, not every second of that minute
  offset: new Date().getTimezoneOffset(),
  tz: 'Europe/Berlin',
  limit: 1
}
Reference: https://docs.bullmq.io/guide/queuescheduler - in this link, the documentation clearly mentions that the QueueScheduler instance does the book-keeping of the jobs.
In this link - https://docs.bullmq.io/guide/jobs/repeatable - the documentation specifically warns us to ensure that we instantiate a QueueScheduler instance.
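A minimal sketch of that setup in BullMQ, assuming a BullMQ version that still ships QueueScheduler (in later releases its book-keeping moved into Worker); the queue name is illustrative:
const { Queue, QueueScheduler } = require('bullmq');

const queueName = 'cron-jobs'; // illustrative; must match the Queue's name
// The scheduler does the book-keeping that promotes delayed/repeatable
// jobs into the wait list, so repeatable jobs never fire without it.
const queueScheduler = new QueueScheduler(queueName, { connection: redisConfig });
const queue = new Queue(queueName, { connection: redisConfig });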

Cypress: interrupt all tests on first failure

How to interrupt all Cypress tests on the first test failure?
We are using Semaphore to launch complete e2e tests with Cypress for each PR, but it takes too much time.
I'd like to interrupt all tests on the first test failure.
Getting the complete errors is each developer's business when they develop. I just want to be informed ASAP if anything is wrong prior to deploying, without having to wait for the full tests to complete.
So far the only solution I came up with was interrupting the tests located in the current spec file with Cypress:
afterEach(function () { // a regular function, not an arrow function, so Mocha binds this.currentTest
  if (this.currentTest.state === 'failed') {
    Cypress.runner.end();
  }
});
But this is not enough, since it only interrupts the tests located in the spec file, not ALL the other files. I've done some intensive searching on this matter today, and it doesn't seem like this is a thing in Cypress.
So I'm trying other solutions.
1: with Semaphore
fail_fast:
  stop:
    when: "true"
It is supposed to interrupt the script on error, but it doesn't work: tests keep running after an error. My guess is that Cypress will throw an error only when all tests are complete.
2: maybe with the script launching Cypress, but I'm out of ideas
Right now here are my scripts
"cy:run": "npx cypress run",
"cy:run:dev": "CYPRESS_env=dev npx cypress run",
"cy:test": "start-server-and-test start http-get://localhost:4202 cy:run"
EDIT: It seems like this feature has been introduced, but it requires a paid version of Cypress (Business Plan). More about it: Docs, comment in the thread
Original answer:
This has been a long-requested feature in Cypress that for some reason still has not been introduced. There are some workarounds proposed by the community; however, it is not guaranteed they will work. Check this thread on Cypress' GitHub for more details; maybe you will find a workaround that works for your case.
The solution by #user3504541 is excellent! Thanks a ton. I had already started giving up on using Cypress since these issues keep popping up. In any case, here's my config:
support/index.ts
declare global {
  // eslint-disable-next-line
  namespace Cypress {
    interface Chainable {
      interrupt: () => void
    }
  }
}

// Register with beforeEach(abortEarly) and afterEach(abortEarly) so it
// runs around every test in every spec file.
function abortEarly() {
  if (this.currentTest.state === 'failed') {
    return cy.task('shouldSkip', true)
  }
  cy.task('shouldSkip').then(value => {
    if (value) return cy.interrupt()
  })
}
commands/index.ts
Cypress.Commands.add('interrupt', () => {
  eval("window.top.document.body.querySelector('header button.stop').click()")
})
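For the cy.task('shouldSkip') calls above to work, a matching task has to be registered in the plugins file. A hypothetical sketch (the file name and flag variable are illustrative) that keeps the flag in the plugin process so it is shared across spec files:
// plugins/index.js
let shouldSkipFlag = false

module.exports = (on, config) => {
  on('task', {
    // Called with an argument: record it. Called without: report the flag.
    shouldSkip(value) {
      if (value != null) shouldSkipFlag = value
      return shouldSkipFlag
    },
  })
}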
In my case the Cypress tests were left pending indefinitely on the CI (a GitHub Actions workflow), but with this fix they interrupt properly.
A little hack that worked for me
Cypress.Commands.add('interrupt', () => {
  eval("window.top.document.body.querySelector('header button.stop').click()");
});
This is available as the Auto Cancelation feature, which is part of Smart Orchestration, but it is only available on the Business Plan. From the Auto Cancelation docs:
Continuous Integration (CI) pipelines are typically costly processes that can demand significant compute time. When a test failure occurs in CI, it often does not make sense to continue running the remainder of a test suite since the process has to start again upon merging of subsequent fixes and other code changes. When Auto Cancellation is enabled, once the number of failed tests goes over a preset threshold, the entire test run is canceled. Note that any in-progress specs will continue to run to completion.
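If your Cypress version supports it, the threshold can reportedly also be passed on the command line when recording to Cypress Cloud. A sketch (the record key is a placeholder, and the flag requires a plan that includes Smart Orchestration):
npx cypress run --record --key <record-key> --auto-cancel-after-failures 1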

Hudson SCM Polling Thread hangs while polling

We use Hudson for our continuous build environment. For some reason, the thread for SCM polling sometimes hangs after a while. I've experimented a lot with the settings, but nothing seems to really work. How can this be fixed, and are there some scripts out there which can detect such a case, to be able to restart Hudson? By the way, restarting Hudson is the only way to solve this issue for us at the moment.
That is similar to bug 5413, which should be solved since late 2010 with HUDSON-5977 (Hudson 1.380+, or now Jenkins).
Those threads include a way to kill any thread stuck on the polling step:
A very primitive Groovy script (I'm too lazy to develop something better, as this is not a very important issue) is below.
It could happen that it also kills SCM polling threads which are not stuck, but we run this script automatically only once a day, so it doesn't cause any trouble for us.
You can improve it, e.g. by saving the ids and names of SCM polling threads, checking again after some time, and killing only threads whose ids are on the list from the previous check.
Thread.getAllStackTraces().keySet().each() { item ->
  if (item.getName().contains("SCM polling") &&
      item.getName().contains("waiting for hudson.remoting")) {
    println "Interrupting thread " + item.getId()
    item.interrupt()
  }
}
The other answer didn't work for me, but the following script, found in the issue for this problem, did:
Jenkins.instance.getTrigger("SCMTrigger").getRunners().each() { item ->
  println(item.getTarget().name)
  println(item.getDuration())
  println(item.getStartTime())
  long millis = Calendar.instance.time.time - item.getStartTime()
  if (millis > (1000 * 60 * 3)) // 1000 millis in a second * 60 seconds in a minute * 3 minutes
  {
    Thread.getAllStackTraces().keySet().each() { tItem ->
      if (tItem.getName().contains("SCM polling") && tItem.getName().contains(item.getTarget().name)) {
        println "Interrupting thread " + tItem.getName();
        tItem.interrupt()
      }
    }
  }
}
