Let's suppose I want to delete the record with id = 50 from my database with the help of a cron job in OctoberCMS. What are the steps to do this?
I did the following things to achieve this:
1) In my plugin I put the following code:
public function registerSchedule($schedule)
{
    $schedule->call(function() {
        // '==' is not a valid query builder operator; use '=' or the two-argument form
        DB::table('fsz_posting_tblposting')->where('id', 50)->delete();
    }); // Defaults to every minute (every execution)
}
2) In the cron job area of my server I put the following cron job command:
/usr/local/bin/php -q /home3/user/public_html/artisan scheduled:run
After doing these two steps my record is not deleted, and I got the following email in my inbox:
There are no commands defined in the "scheduled" namespace.
Did you mean this?
schedule
What should I do now?
It seems there is a typo in your crontab entry:
it's schedule:run, not scheduled:run.
Corrected entry:
/usr/local/bin/php -q /home3/user/public_html/artisan schedule:run
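With that fixed, a hedged sketch of the full registerSchedule() in the plugin's Plugin.php could look like this; the ->daily() frequency is my assumption (leave it off to keep the every-minute default from the question):

public function registerSchedule($schedule)
{
    $schedule->call(function() {
        // delete the record with id 50
        DB::table('fsz_posting_tblposting')->where('id', 50)->delete();
    })->daily(); // assumption: run once per day instead of every minute
}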
For more information you can refer here:
Setup: https://octobercms.com/docs/setup/installation#crontab-setup
How to add a task: https://octobercms.com/docs/plugin/scheduling
If you have any doubts, please comment.
Related
I have the following cronjob set to execute daily at a specific time:
MAILTO="email address"
16 * * * python3 /home/cladkins/NBA.py && python3 /home/cladkins/NCAABB.py && python3 /home/cladkins/NCAAFB.py
I added this via crontab -e (using sudo, editing with nano).
It is executing almost too well; this is a case of it not knowing what to do because I haven't told it what to do. My intention is to have these scripts run once every day at 16:00 UTC. The current behavior is that it just keeps looping forever. How can I get it to execute each script only once?
I suggest you use a library like Celery to manage and track the execution of your scheduled jobs more easily.
After setting up Celery, you will end up with something like this:
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    # Executes every day at 16:00
    'my-sample-task': {
        'task': 'tasks.task_name',
        'schedule': crontab(hour=16, minute=0),
        'args': (16, 16),
    },
}
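The 'tasks.task_name' entry above must point at a registered Celery task. A minimal sketch of such a task module, assuming a tasks.py and a Redis broker (both the module name and the broker URL are placeholders, not part of the original answer):

# tasks.py
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # broker URL is an assumption

@app.task
def task_name(a, b):
    # replace this with the real work, e.g. invoking the NBA/NCAA scripts
    return a + b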
Check this link for more details.
Good day everyone.
I have an issue, and Googling it has not helped me. Basically I have the following requirements:
a cron job runs the 1st script, and its output is written to a file
the file that is created should have a date stamp
a 2nd script executes and mails the generated file as an attachment
The issue is with adding the timestamp: if I set the cron to run and just create a file with a generic filename, the cron job runs fine.
I have tried the following:
0 8-17/1 * * * python /usr/local/bin/script1.py >> /usr/local/bin/file_`date +\%Y-%m-%d`.txt 2>&1 && python /usr/local/bin/email_script.py
0 8-17/1 * * * python /usr/local/bin/acme_transcoding_check.py >> /usr/local/bin/file_$(date +"%Y-%m-%d").txt 2>&1 && python /usr/local/bin/email_script.py
The server is running Ubuntu 16.04.
You need to escape the percent sign (%) with a backslash, as explained in this answer (not mine).
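Cron treats an unescaped % as the end of the command (everything after it becomes standard input), which is why both attempts above fail. For example, the first entry from the question with every % escaped looks like this:

0 8-17/1 * * * python /usr/local/bin/script1.py >> /usr/local/bin/file_`date +\%Y-\%m-\%d`.txt 2>&1 && python /usr/local/bin/email_script.py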
I am an intern at a company in Malta. The company has just made a big change from SugarCRM to vTiger CRM. Now we have a problem with the scheduler. What we want is: when a mail is entered, it should automatically get synced with the organisations and contacts (I can link them when I click on the "SCAN NOW" button of the Mail Converter), but I want it to happen automatically.
But my cron files are not getting updated.
I installed a cron job on the Linux server with the entry below:
*/15 * * * * sh /vtiger_root/cron/vtigercron.sh >/dev/null 2>&1
I adapted the PHP_SAPI check and I added the permissions on the proper files, but still nothing. (As we speak, my scheduled task for the mail is still at 1.)
So every 15 minutes vtigercron.sh is supposed to run vtigercron.php, but it doesn't happen. When I run vtigercron manually everything works fine (the scheduler cron states get updated), but not with the cron entry on the server.
Can somebody please be my hero?
In our crontab -
(sudo) vim /etc/crontab
I scheduled the job like this:
*/15 * * * * webdaemon /bin/bash /var/www/vtigercrm6/cron/vtigercron.sh
Are you getting any errors in /var/log/syslog or /var/log/messages or whichever system log your OS uses?
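If you are not sure where to look, something along these lines (assuming the default rsyslog layout on Ubuntu/Debian) will show whether cron attempted to run the job at all:

grep CRON /var/log/syslog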
This tutorial actually works: https://www.easycron.com/cron-job-tutorials/how-to-set-up-cron-job-for-vtiger-crm
A simpler way:
In the file vtigercron.php, change the line
if(vtigercron_detect_run_in_cli() || (isset($_SESSION["authenticated_user_id"]) && isset($_SESSION["app_unique_key"]) && $_SESSION["app_unique_key"] == $application_unique_key)){
to
if(vtigercron_detect_run_in_cli() || ($_REQUEST["app_unique_key"] == $application_unique_key) || (isset($_SESSION["authenticated_user_id"]) && isset($_SESSION["app_unique_key"]) && $_SESSION["app_unique_key"] == $application_unique_key)){
and then use
http://www.example.com/vtigercron.php?app_unique_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
as cron job URL.
IMPORTANT NOTICE: You may find your app_unique_key in config.inc.php (look for $application_unique_key in it).
Please replace xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx in the URL above with the 32-character $application_unique_key you find in config.inc.php, and www.example.com with your vtiger install location.
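If you would rather trigger that URL from the server's own crontab than from an external cron service, an entry along these lines should work (same placeholder key and hostname as above):

*/15 * * * * curl -s "http://www.example.com/vtigercron.php?app_unique_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" >/dev/null 2>&1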
I have developed a script that my company uses to back up our clients' DBs on our server by creating a dump file of each DB. Here is my code:
$result = mysql_query("SELECT * FROM backup_sites");
while ($row = mysql_fetch_array($result)) {
    $backup_directory = $row["backup_directory"];
    $thishost = $row["host"];
    $thisusername = $row["username"];
    $thispassword = $row["password"];
    $thisdb_name = $row["db_name"];
    $link = mysql_connect("$thishost", "$thisusername", "$thispassword");
    if ($link) {
        mysql_select_db("$thisdb_name");
        $backupFile = $thisdb_name . "-" . date("Y-m-d-H-i-s", strtotime("+1 hour")) . '.sql';
        $command = "/usr/bin/mysqldump -h $thishost -u $thisusername -p$thispassword $thisdb_name > site-backups/$backup_directory/$backupFile";
        system($command);
    }
}
The script gets all of the clients' DB information from a table (backup_sites) and loops through each one to create the backup file in that client's directory. The script works great when it is executed manually. The problem I am having is that it does not work when I set it to run as a cron job. The email from the cron job contains this error for each DB backup attempt:
sh: site-backups/CLIENT_DIRECTORY/DB_DUMP_FILE.sql: No such file or directory
So for whatever reason, the cron job is unable to create/write the DB dump file. I can't figure this one out; it just seems strange that the script works perfectly when executed manually. Does anyone have any ideas? Thanks in advance!
Adam
Consider using absolute paths; cron may run your script from a different working directory.
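For example, the mysqldump output path in the script above could be made absolute like this; the /var/www prefix is only a placeholder for wherever the site-backups directory actually lives:

$backupDir = "/var/www/site-backups/$backup_directory"; // absolute path; the prefix is a placeholder
$command = "/usr/bin/mysqldump -h $thishost -u $thisusername -p$thispassword $thisdb_name > $backupDir/$backupFile";
system($command);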
How do you create a cron job from the command line, so that it shows up with a name in gnome-schedule?
I know how to create a cron job using crontab. However, all my jobs show up with a blank name. I'd like to better document my jobs so I can easily identify them in gnome-schedule, or similar cron wrapper.
Well, I just made a cron job in Scheduler, took a look at my crontab file, and it looked like this:
0 0 * * * ls >/dev/null 2>&1 # JOB_ID_1
Notice the JOB_ID_1 at the end.
I went into ~/.gnome/gnome-scheduler/, looked at the files there, and there was one named just 1 (as in the number "one") which had a bit of info, including the name:
ver=3
title=Hello
desc=
nooutput=1
So, I made a second cronjob:
0 0 * * * ls -al >/dev/null 2>&1 # JOB_ID_2
I copied the file 1 to 2 to match JOB_ID_2 and changed the title, making the file:
ver=3
title=This is a test
desc=
nooutput=1
Then I switched over to Gnome-Schedule, and it had picked up the cron job with the name updated.
Follow the same steps, and you should be able to manually name any cron job you want.
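Putting those steps together, a rough shell sketch would look something like this; the job command, the title, and job ID 3 are all placeholders, and the exact metadata directory may differ between gnome-schedule versions (I am using the path observed above):

# append a job with a JOB_ID comment to the current user's crontab
(crontab -l; echo '0 0 * * * /usr/local/bin/backup.sh >/dev/null 2>&1 # JOB_ID_3') | crontab -
# create the matching metadata file that gnome-schedule reads the title from
cat > ~/.gnome/gnome-scheduler/3 <<'EOF'
ver=3
title=Nightly backup
desc=
nooutput=1
EOF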