I have installed the Teradici PCoIP Solutions for Linux on CentOS 6.4. I want to get the MAC address of the host card.
When I run pcoip_agent -info it gives the correct output on the command line. I want to make this a cron job, and for that purpose I wrote this bash script:
#!/bin/bash
hostcardMacAddress=$(pcoip_agent --info|grep -F 'PCoIP Host card MAC:'|awk '{print $5}' > hostcardMacAddress.txt)
cat hostcardMacAddress.txt
After that I made the script a cron job, but it gives an empty MAC address.
Kindly help me. Thanks.
Check the mail of the user that owns the cron job. There will be an error message indicating what the issue is.
Log in as the cron job's user.
Run mail and check for messages from around the time the cron job was invoked.
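A minimal sketch of that check (jobuser is a placeholder for whichever account owns the crontab):
su - jobuser
mail
Cron mails each job's output and error messages to its owner, so a message stamped at the cron run time usually shows exactly what went wrong (for example, a command not found because cron's PATH differs from your login shell's).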
When setting up AcySMS, there are a few options for the cron job. The "Web cron" option runs at most every 15 minutes, which is way too slow for me.
I have opted for "manual cron" and am given the following "cron URL": https://www.followmetrading.com/index.php?option=com_acysms&ctrl=cron
Putting that into the cPanel cron job manager just leaves me with an error every time the cron attempts to run:
/usr/local/cpanel/bin/jailshell: http://www.followmetrading.com/index.php?option=com_acysms: No such file or directory
I have discovered that the following command executes the script properly.
curl --request GET 'https://www.followmetrading.com/index.php?option=com_acysms&ctrl=cron&no_html=1' >/dev/null 2>&1
Note: ensure the URL is surrounded by single quotes, otherwise the shell treats the & as a control operator and everything from the first & onward is lost.
Note: >/dev/null 2>&1 discards the output so cron doesn't send an email for every run.
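For reference, the equivalent raw crontab line would look something like this (cPanel builds the schedule fields itself; the 5-minute interval here is just an example):
*/5 * * * * curl --request GET 'https://www.followmetrading.com/index.php?option=com_acysms&ctrl=cron&no_html=1' >/dev/null 2>&1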
I'm new to cron and started using the Whenever gem to perform scheduled tasks. However, sometimes they stall and I have to restart the app for them to pick up again. So I wanted to inspect the logs to see if there are any exceptions that might be causing it.
According to this wiki, I put a line in my schedule.rb setting the output location:
set :output, "/var/log/cron.log"
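For reference, that setting makes Whenever append an output redirect to every job it writes to the crontab, so the generated entries look roughly like this (the schedule and rake task are illustrative):
0 * * * * /bin/bash -l -c 'cd /path/to/app && RAILS_ENV=production bundle exec rake my:task --silent >> /var/log/cron.log 2>&1'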
But this file is always empty.
I tried doing it manually from the terminal with /bin/bash -l -c 'echo "hello" >> /var/log/cron.log 2>&1' and it saved hello to the log.
Any thoughts? Thank you.
SOLVED
Scenario: I am a beginner with bash scripts, Windows Task Scheduler and such. I am able to run a local bash script from Windows Task Scheduler successfully.
Problem: I need to do this on many computers, so I think storing just one copy of the bash script on a remote server may help. All my Task Scheduler needs to do is run the script and write a log. However, I couldn't get the correct syntax for the argument.
Below is what I have currently:
Program/Script: C:\cygwin64\bin\bash.exe
Argument (works successfully):
-l -c "ssh -p 222 ME@ME.com "httpdocs/bashscript.sh" >> /cygdrive/c/Users/ME/Desktop/`date +%Y%m%d`.log 2>&1"
Start in: C:\cygwin64\bin
I also had to make sure that the user account under Properties in Task Scheduler was correct, as mine was wrong before. Key authentication for ME@ME.com is needed too.
For the password issue, you really should use ssh keys. I think your command would simply be ssh -p 222 ME@ME.com:.... I.e., just get rid of the --rsh stuff. – chrisaycock
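A minimal sketch of setting up that key authentication from the Cygwin side (the port and account come from the question; the key type is just a common default, and ssh-copy-id is assumed to be installed with Cygwin's OpenSSH package):
ssh-keygen -t ed25519
ssh-copy-id -p 222 ME@ME.com
Once the public key is installed, the ssh call in the Task Scheduler argument runs without prompting for a password.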
I'm trying to get cron jobs or the task scheduler working, but I cannot figure out why my script is not being taken into account.
I'm trying to simply archive a folder with:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/
Each time the script starts but never finishes its work. Whether via the task scheduler or crontab, a file Monday.tgz is created in the folder /volume1/NetBackup/Backups/, but this file is only 1024 bytes.
Synology Cron is really fussy.
Here are my own personal notes for a Synology DS413j, DSM 5.2 (a sample entry putting them together follows the list):
Hand edit /etc/crontab as root, crontab -e isn't available
Ensure you use tabs not spaces to separate the columns
Your crontab changes may not survive a reboot if there are syntax problems
The who column in crontab may not be reliable. Use root in the who column and /bin/su -c '<command>' <username> to run as another user
Remember that it uses ash not bash, so check for bashisms: e.g. use >> /path/to/logfile 2>&1, not &>> /path/to/logfile
It doesn't support 'MAILTO='
You need to restart crond (synoservicectl --reload crond) for the new crontab to take effect
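Putting those notes together, a sample /etc/crontab line might look like this (the schedule, script path, log path and username are illustrative, and the columns must be separated by tabs):
0    3    *    *    1    root    /bin/su -c '/volume1/scripts/backup.sh >> /volume1/scripts/backup.log 2>&1' admin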
You may try adding some diagnostics to it. For instance:
Add MAILTO at the top of the crontab file (edit it with crontab -e) to receive cron errors by email:
MAILTO=username@domain.com
Redirect output of your tar command to the file:
your command > ~/log.txt 2>&1
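Applied to the tar command from the question, the job's command part would become something like:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/ > ~/log.txt 2>&1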
Check cron log and look for anomalies. For instance (it may depend on your configuration):
/var/log/cron.log
You may also try searching through /var/log/messages at the time of your cron job.
Is volume1 a resource on a remote host? If so, it is worth checking that part of the system.
I agree about the really fussy nature of crontab on Synology's Linux OS.
I would certainly suggest creating the desired job as a .sh shell script and calling it via a cron task added through the GUI, as suggested here.
As of today (March 2017) this is the best method I have found, since working with crontab via the CLI is nearly a pain.
I want to get the details of the last run cron job. If the job is interrupted due to some internal problems, I want to re-run the cron job.
Note: I don't have superuser privilege.
You can see the date, time, user and command of previously executed cron jobs using:
grep CRON /var/log/syslog
This will show all cron jobs. If you only wanted to see jobs run by a certain user, you would use something like this:
grep 'CRON.*(root)' /var/log/syslog
Note that cron logs at the start of a job so you may want to have lengthy jobs keep their own completion logs; if the system went down halfway through a job, it would still be in the log!
Edit: If you don't have root access, you will have to keep your own job logs. This can be done simply by tacking the following onto the end of your job command:
&& date > /home/user/last_completed
The file /home/user/last_completed would always contain the last date and time the job completed. You would use >> instead of > if you wanted to append completion dates to the file.
You could also achieve the same by putting your command in a small bash or sh script and having cron execute that file.
#!/bin/bash
[command]
date > /home/user/last_completed
The crontab for this would be:
* * * * * bash /path/to/script.bash
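To cover the re-run requirement from the question, here is a hedged sketch of a small watchdog script (run from its own cron entry) that re-runs the job when the timestamp file is missing or older than expected; the 120-minute threshold and the paths are assumptions to adjust to the real schedule:
#!/bin/bash
# Re-run the job if it has not completed within the last 2 hours.
last=/home/user/last_completed
if [ ! -f "$last" ] || [ -n "$(find "$last" -mmin +120)" ]; then
    bash /path/to/script.bash
fi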
/var/log/cron contains the cron job logs, but you need root privileges to read it.
On CentOS:
sudo grep CRON /var/log/cron