In Pimcore, when I schedule an object to be published, it doesn't work. I've seen that a cron job needs to be enabled in some file, but I haven't found complete details on this anywhere. What are the steps to enable this process?
Can we use scheduling as part of a workflow?
Yes, that is correct: you need to add the cron job to the crontab as described in step 5 here:
https://www.pimcore.org/docs/latest/Getting_Started/Installation.html
You need either shell access or a control panel like cPanel or Plesk to set this up. The process differs between operating systems, but on most Linux distributions it means executing this command as your web server user (for Debian/Ubuntu that is www-data; check what the user is on other distributions):
sudo -u www-data crontab -e
In there you have to add this line (modify the path to your console.php):
*/5 * * * * php /path/to/pimcore/cli/console.php maintenance
As said, this differs between Linux distributions, but it should be simple enough once you figure out how to set up a cron job.
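If you prefer a non-interactive install, the same entry can be added with an append-if-missing check. Below is a sketch run against a plain file standing in for the crontab (on a real system you would pipe `crontab -l` through the same check into `crontab -`, as the www-data user); the Pimcore path is just the placeholder from above:

```shell
# Demo of installing the entry idempotently; /tmp/demo-crontab stands
# in for the real crontab, and the Pimcore path is a placeholder.
CRONTAB=/tmp/demo-crontab
ENTRY='*/5 * * * * php /path/to/pimcore/cli/console.php maintenance'
rm -f "$CRONTAB" && touch "$CRONTAB"
# Append only if the entry isn't already there (safe to re-run):
grep -qF "$ENTRY" "$CRONTAB" || echo "$ENTRY" >> "$CRONTAB"
grep -qF "$ENTRY" "$CRONTAB" || echo "$ENTRY" >> "$CRONTAB"
grep -c 'console.php maintenance' "$CRONTAB"   # prints 1, not 2
```

Running the snippet twice still leaves a single entry, which makes it safe to bake into a provisioning script.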
It worked. I am using an AWS instance and followed the steps below:
1) pbrun beroot (PowerBroker root privilege)
2) crontab -e
3) enter insert mode (i)
4) */5 * * * * php /...path../pimcore/cli/console.php maintenance
5) :wq
Can we use scheduling as part of a workflow? I have an object that, after a reviewer approves it, should get published on a schedule. Is there any way to schedule this in a workflow?
I want to run rsync (to a remote Linux system) automatically every minute, so that whatever changes are made to test.txt (mentioned below) on one system are reflected on the other system within the same minute.
For this purpose, I ran sudo crontab -e and added:
*/1 * * * * /home/john/rsync.sh
rsync.sh contains two commands:
sudo rsync -av /home/john1/test.txt remote@remote_ip:
sudo rsync -av --update /home/john1/test.txt remote@remote_ip:
When I run rsync.sh manually, all the changes are applied successfully, but nothing happens when cron runs it.
If you added this in the root crontab, you don't need to start the rsync commands with sudo.
Things that run from crontab will probably not have the same environment variables. Add the absolute path to rsync if you're unsure, for example /usr/bin/rsync, and check the other environment variables, for example by running set.
When you run the script manually, you're already in a specific shell which is probably able to run it, but when cron runs it, it doesn't know which interpreter to use. Always start your scripts with #!/usr/bin/bash (or whatever your favorite shell is), and/or call your cron job specifying which shell to use, for example:
*/1 * * * * /bin/bash /home/john/rsync.sh
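Putting those points together, a corrected rsync.sh might look like the sketch below (written to /tmp purely for illustration; the host and file names are the placeholders from the question, so adjust them for your setup):

```shell
# Write the corrected script; note the shebang, the absolute path to
# rsync, and the absence of sudo (the root crontab already runs as root).
cat > /tmp/rsync.sh <<'EOF'
#!/bin/bash
/usr/bin/rsync -av --update /home/john1/test.txt remote@remote_ip:
EOF
chmod +x /tmp/rsync.sh
bash -n /tmp/rsync.sh && echo "syntax OK"   # prints "syntax OK"
```

The `bash -n` check only verifies syntax; it does not contact the remote host.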
I hope this helps. ;)
I am running SUSE Linux as a non-root user (and getting root access will not be a possibility). I would like to have a .sh script I created run daily.
My crontab looks like:
0 0 * * * * /path/to/file.sh
I also have a trailing newline after this line, as per many troubleshooting suggestions. My script deletes files older than 14 days, and I added logging so I can check whether the script runs.
However, the job does not run automatically, and I am not able to check /var/log/messages for any indication of whether cron ran.
What am I doing wrong? How can I check if cron itself is running/can run for my user? Do I have to supply cron with any paths or environment variables?
Your crontab line has six time fields, but cron expects exactly five (minute, hour, day of month, month, day of week) before the command. The correct entry to run your script every midnight is:
00 00 * * * /bin/bash /path/to/your/script.sh >> /path/to/log/file.log 2>&1
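For completeness, the deletion script itself can be sketched as follows; the directory, the 14-day retention, and the log path are assumptions based on the question, pointed at /tmp here so the sketch is safe to run:

```shell
#!/bin/bash
# cleanup.sh: delete files older than 14 days and log what was removed.
TARGET=/tmp/cleanup-demo          # assumption: your data directory
LOG=/tmp/cleanup-demo.log         # assumption: your log file
mkdir -p "$TARGET"
touch -d '20 days ago' "$TARGET/old.txt"   # demo file that should go
touch "$TARGET/new.txt"                    # demo file that should stay
# -print before -delete logs each file as it is removed:
find "$TARGET" -type f -mtime +14 -print -delete >> "$LOG" 2>&1
echo "$(date): cleanup ran" >> "$LOG"
ls "$TARGET"    # only new.txt remains
```

`touch -d` and `find -delete` are GNU extensions, which is fine on SUSE.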
I can install a new cron job using crontab command.
For example,
crontab -e
0 0 * * * root rdate -s time.bora.net && clock -w > /dev/null 2>&1
Now I have 100 machines in my cluster, and I want to install the above cron task on all of them.
How do I distribute cron jobs to the cluster machines?
Thanks,
Method 1: Ansible
Ansible can distribute machine configuration to remote machines. Please refer to:
https://www.ansible.com/
Method 2: Distributed Cron
You can use a distributed cron to assign cron jobs. It has a master node, lets you configure your jobs easily, and monitors the results:
https://github.com/shunfei/cronsun
Crontabs are stored in /var/spool/cron/(username), so you can write your own cron jobs and distribute them to that location after acquiring root permission.
However, if another user edits their crontab at the same time, you can never be sure when your file will get overwritten.
These links may help:
https://ubuntuforums.org/showthread.php?t=2173811
https://forums.linuxmint.com/viewtopic.php?t=144841
https://serverfault.com/questions/347318/is-it-bad-to-edit-cron-file-manually
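A minimal sketch of this file-based approach: build the crontab file locally, then copy it to /var/spool/cron/&lt;username&gt; on each host (demonstrated here into a temp directory; the actual copy step, e.g. scp in a loop, is up to you):

```shell
SPOOL=/tmp/spool-demo     # stands in for /var/spool/cron on a target host
mkdir -p "$SPOOL"
# Note: per-user crontab lines have no user column; that column only
# exists in /etc/crontab.
printf '%s\n' '0 0 * * * rdate -s time.bora.net && clock -w > /dev/null 2>&1' \
  > "$SPOOL/root"
cat "$SPOOL/root"
```

After copying, most cron daemons pick up spool changes automatically, but restarting crond on each host is the safe bet.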
Ansible already has a cron module:
https://docs.ansible.com/ansible/2.7/modules/cron_module.html
I am quite new to Nitrous and to programming in general, and I want to find out why my crontab job isn't doing anything on Nitrous.io.
I am using a virtualenv, and my understanding is that you can run one from crontab. This is my cron line:
10 6,19 * * * /home/action/susteq/bin/activate /home/action/susteq/start.py 2>&1 >> /home/action/susteq/log/start.log
Crontab should work on Nitrous.IO as long as you are actively logged into the box (or using tmux) and the box doesn't shut down from inactivity. Paid boxes stay running indefinitely.
Looking at this command you may want to ensure it runs as expected outside of the crontab. Try running the process first:
$ /home/action/susteq/bin/activate /home/action/susteq/start.py 2>&1 >> /home/action/susteq/log/start.log
If not, then you may actually want to place 2>&1 at the end of the line (as explained in this redirection tutorial). The following command may be what you are looking for:
$ /home/action/susteq/bin/activate /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1
If that works, try adding it to your crontab:
10 6,19 * * * /home/action/susteq/bin/activate /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1
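For what it's worth, the reason the order matters is that redirections are processed left to right: `2>&1 >> file` points stderr at wherever stdout was pointing (the terminal) before stdout is moved to the file, while `>> file 2>&1` moves stdout first and then sends stderr after it. A quick demonstration:

```shell
rm -f /tmp/redir-a.log /tmp/redir-b.log
demo() { echo out; echo err >&2; }
demo 2>&1 >> /tmp/redir-a.log   # "err" still escapes to the terminal
demo >> /tmp/redir-b.log 2>&1   # both lines captured
wc -l < /tmp/redir-a.log        # prints 1
wc -l < /tmp/redir-b.log        # prints 2
```

Under cron there is no terminal, so with the first form the stderr output is simply lost (or mailed, if MAILTO is configured).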
Update: To run cron on Nitrous Pro you need to enable privileged mode on your container, which requires that you enable advanced container management. More details can be found here:
https://community.nitrous.io/docs/running-cron-on-nitrous
I'm trying to get cron jobs (or the task scheduler) working, but I cannot figure out why my script is not taken into consideration.
I'm simply trying to archive a folder with:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/
Each time, the script starts working but cannot complete the job. With either the task scheduler or crontab, a file Monday.tgz is created in /volume1/NetBackup/Backups/, but this file is only 1024 bytes.
Synology Cron is really fussy.
Here are my own personal notes for Synology DS413j, DSM 5.2:
Hand-edit /etc/crontab as root; crontab -e isn't available.
Ensure you use tabs, not spaces, to separate the columns.
Your crontab changes may not survive a reboot if there are syntax problems.
The who column in crontab may not be reliable. Use root in the who column and /bin/su -c '<command>' <username> to run as another user.
Remember that it uses ash, not bash, so check for bashisms; e.g. use >> /path/to/logfile 2>&1, not &>> /path/to/logfile.
It doesn't support MAILTO=.
You need to restart crond (synoservicectl --reload crond) for the new crontab to take effect.
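Following those notes, an /etc/crontab line might look like the example below; the schedule, script path, and username (admin) are placeholders, and the separators must be real tabs (printed here with \t):

```shell
# Write an example Synology /etc/crontab line (tab-separated columns:
# min hour mday month wday who command) to a demo file:
printf '0\t3\t*\t*\t*\troot\t/bin/su -c "/volume1/scripts/backup.sh >> /volume1/scripts/backup.log 2>&1" admin\n' \
  > /tmp/synology-cron-demo
cat /tmp/synology-cron-demo
```

The who column says root, and /bin/su drops to the real user, per the note above about that column being unreliable.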
You may try adding some diagnostics to it. For instance:
Add MAILTO at the top of the crontab file (edited via crontab -e) to receive cron errors by email:
MAILTO=username@domain.com
Redirect output of your tar command to the file:
your command > ~/log.txt 2>&1
Check cron log and look for anomalies. For instance (it may depend on your configuration):
/var/log/cron.log
You may also try searching through /var/log/messages at the time of your cron job.
Is volume1 a resource on a remote host? If so, it is worth checking that part of the system.
I agree about the really fussy nature of cron on Synology's Linux OS.
I would certainly suggest creating the desired job as a .sh shell script and calling it via a cron task created in the GUI, as suggested here.
As of today (March 2017), this is the best method I have found, since working with crontab via the CLI is nearly a pain.