Sometimes my crons run and sometimes they get missed. I have attached all the settings and results. Can anyone check and advise?
It's completely normal behaviour. Some jobs are skipped because they fall outside the time frame scheduled for that cron job. In your case the reindex process is scheduled every minute. If there is a lot to index (many changes to products, categories, etc.), one minute isn't enough to complete. Also, there is only one process per cron group, in your case index. "Use Separate Process" in the cron configuration means that the index group will run as a separate process in relation to the other cron groups.
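If you want the index group to run on its own schedule, one option is a dedicated crontab entry for that group. A minimal sketch, assuming Magento is installed in /var/www/magento (adjust the path and log file to your setup):

# Run only the 'index' cron group every minute, in its own process
* * * * * php /var/www/magento/bin/magento cron:run --group=index >> /var/log/magento.index.cron.log 2>&1

You can also check which jobs were skipped by querying the cron_schedule table for rows with status 'missed'.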
Dear UNIX/PBS experts:
I am a user of a UNIX HPC system (CentOS Linux 7 (Core), Linux 3.10.0-693.5.2.el7.x86_64) and I do not have any root privileges.
Various jobs have been submitted to the HPC system and almost all resources are in use.
Jobs from other users may run for weeks, while my submitted job would finish in less than a day.
My goal is to have my job run as soon as the first resources are freed, instead of waiting for all the other users' jobs to finish.
My submitted job has the id 66005.pbs.
However, the last job running at this moment has the id 55004.pbs.
When I check the status of the next id with qstat 55005,
I obtain: qstat: Unknown Job Id 55005.pbs
Thus my question is whether it is possible to change the id of job 66005.pbs to 55005.pbs, and whether this would allow my job to run sooner.
If yes, how can this be achieved?
If not, are there any other solutions or alternatives for making sure that my jobs run before those of other users in the queue?
Thank you very much for your help and any suggestion.
The good thing about the computer system is that it is not human. It would be unfair to run your job (which was clearly submitted after the other users') before theirs, and because of that, no, it is not possible to change your job id.
You can work with your admin to move the job to a higher priority queue instead.
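A rough sketch of the commands involved (the queue name express is just a placeholder, and qmove typically requires operator rights, so your admin would run it):

# List the queues configured on the system (works as an ordinary user)
qstat -q

# Admin moves the job into a higher priority queue
qmove express 66005.pbs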
I have a cron job that runs every 30 minutes, starting 10 minutes past a whole hour:
0 10/30 * * * ?
Now, this needs to be changed, so that in a specific time interval, it runs every 15 minutes instead. E.g. at 7:50, 8:05, 8:20 and 8:35. Then every 30 minutes again.
Is this possible with a single cron job and if so, how? Or do I need multiple jobs to accomplish this?
Thank you in advance.
This is not easy in a single cron expression, and such an expression would also be hard to read.
Multiple jobs will work fine and are much clearer:
// Runs at :10 and :40 past the hour during odd hours (1:10, 1:40, 3:10, ...)
0 10/30 1-23/2 * * ?
// Runs every 15 minutes starting at :10 during even hours (0:10, 0:25, 0:40, 0:55, 2:10, ...)
0 10/15 0-23/2 * * ?
You may also want to avoid the two jobs running at the same time.
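If these end up as plain crontab entries on a Linux box rather than Quartz triggers, one common way to do that is a shared lock via flock (the lock file path is just an example):

# Whichever job fires second silently skips its run if the lock is still held
10,40 1-23/2 * * * flock -n /tmp/myjob.lock myjob
10,25,40,55 0-23/2 * * * flock -n /tmp/myjob.lock myjob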
As far as I've understood, this is not possible within a single cron job.
The answers to setup cron from morning to evening only point out that three different cron jobs are needed, so I am closing my question.
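For the record, one possible three-job split in the same Quartz-style syntax, assuming the 15-minute window is 7:50-8:35 as in my example:

// Regular pattern at :10 and :40, skipping the 8 o'clock hour
0 10/30 0-7,9-23 * * ?
// First run of the 15-minute window
0 50 7 * * ?
// Remaining runs of the 15-minute window
0 5,20,35 8 * * ?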
I have a website that is live, and a cron job that executes every 24 hours. The cron job fetches and analyzes data from a database table.
The problem is that the website gets very slow while the cron job is running, and returns to normal afterwards. It gives me the error "Too many connections" during this time.
I set the maximum allowed connections to 500 in MySQL. The number of active connections I checked in MySQL was below that limit during that time.
I am unable to find any relevant help or even a clue to think in a particular direction.
Update:
I noticed one thing: the number of MySQL connections continuously increases during this time, although it stays below the maximum limit.
The nice command can change the priority of a process. You want to lower the priority of the background job so it avoids competing with the website while the site is busy. E.g.
0 3 * * * nice -n 19 myjob arg arg
to execute myjob arg arg with lowered priority every day at 3am.
EDIT: Although, if the job spends most of its time in database queries, this will not affect it much. MySQL has a LOW_PRIORITY flag for INSERT and UPDATE statements that does much the same thing for those queries.
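For example, sketched against made-up table names (note that LOW_PRIORITY only has an effect on storage engines with table-level locking, such as MyISAM):

-- Let waiting readers go first while the nightly job writes its results
INSERT LOW_PRIORITY INTO daily_stats (day, total)
SELECT DATE(created_at), SUM(amount) FROM orders GROUP BY DATE(created_at);

UPDATE LOW_PRIORITY daily_stats SET processed = 1 WHERE day < CURDATE();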
As a user (not an admin), is there any way that I can look up jobs which were preempted at some point, then requeued? I tried:
sacct --allusers --state=PR --starttime=2016-01-01
And didn't get anything, but I don't think this command should actually work, because a job which got preempted and then requeued would not ultimately end up in the preempted state.
You need to use the --duplicate option of sacct; that will show you all the "intermediate states".
From the manpage:
-D, --duplicates
If Slurm job ids are reset, some job numbers will probably appear more than once in the accounting log file but refer to different jobs. Such
jobs can be distinguished by the "submit" time stamp in the data records.
When data for specific jobs are requested with the --jobs option, sacct returns the most recent job with that number. This behavior can be
overridden by specifying --duplicates, in which case all records that match the selection criteria will be returned.
When jobs are preempted, or requeued, you end up with several records in the database for the job, and this option allows you to see all of them.
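For example, something like this should list every record, including the intermediate requeued/preempted ones (adjust the format fields to taste):

sacct --allusers --duplicates --starttime=2016-01-01 \
      --format=JobID,State,Submit,Start,End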
In my webapp, users can create recurring invoices that need to be generated and sent out at certain dates each month. For example, an invoice might need to be sent out on the 5th of every month.
I am using Kue to process all my background jobs so I want to do it in this case as well.
My current solution is to use setInterval() to create a processRecurringInvoices job every hour. This job then finds all recurring invoices in the database and creates a separate generateInvoice job for each one.
The generateInvoice job will then actually generate the invoice, and if needed, will also in turn create a sendInvoiceToEmail job that will email the invoice.
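In code, the flow looks roughly like this (findDueRecurringInvoices stands in for my actual database lookup, and the job data fields are placeholders):

var kue = require('kue');
var queue = kue.createQueue();

// The hourly job only enqueues the real work, one job per invoice
queue.process('processRecurringInvoices', function (job, done) {
  findDueRecurringInvoices(function (err, invoices) { // hypothetical DB lookup
    if (err) return done(err);
    invoices.forEach(function (inv) {
      queue.create('generateInvoice', { invoiceId: inv.id }).save();
    });
    done();
  });
});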
At the moment this solution looks good to me, because it has a nice separation of concerns, however, I have the following questions:
I am not sure if I should wait for all the 'child' jobs to complete before I call done() on the main processRecurringInvoices job.
Where should I handle errors? Should I pass them back to the processRecurringInvoices job, or should I handle them separately for each job?
How can I make sure that if processing takes extra long (more than an hour) and either processRecurringInvoices or any of the child jobs are still running, the processRecurringInvoices job is not created again? Kind of like a unique job, or mutual exclusion?
Instead of "processRecurringInvoices" it might be easier to think of it as a job that initiates other, separate invoice-processing jobs. Thinking of it this way, once the invoice processing jobs have been enqueued, you can safely call done() on the job that kicks them all off.
Thinking of the problem in the way described in question 1, errors should be handled within each of the individual invoice-processing jobs. If an error occurs while finding potential invoice jobs, that would be handled in the processRecurringInvoices job.
You can use kue.Job.rangeByType() to search for currently active jobs. If a job is active, you can skip kicking it off again.
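A rough sketch of that check, using the job type name from the question:

// Only enqueue a new run if no processRecurringInvoices job is currently active
kue.Job.rangeByType('processRecurringInvoices', 'active', 0, 1, 'asc',
  function (err, jobs) {
    if (err) return console.error(err);
    if (jobs.length === 0) {
      queue.create('processRecurringInvoices', {}).save();
    }
  });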