This is my cron command:
/usr/bin/wget -O /dev/null -o /dev/null https://example.com/file1.php; wget -q -O - https://example.com/file2.php
The first file runs for 4 minutes updating the database (I have logs); the second file for some reason starts after 1:45. How does it run before the first one has finished?
Thanks!
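A note on semantics: with ;, the shell waits for the first wget to exit before starting the second, so if the second request fires after only 1:45, the first wget must itself have exited early (for example because the web server timed out the HTTP response while file1.php kept running). A sketch of the same line with && instead, which additionally skips the second request when the first one fails:
/usr/bin/wget -O /dev/null -o /dev/null https://example.com/file1.php && wget -q -O - https://example.com/file2.php > /dev/null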
I have 50+ cron jobs like the one given below running on my CentOS 7 server.
curl -s https://url.com/file.php
This runs every 10 minutes. When run manually from the shell it only takes 1-2 minutes, and it also works fine as a cron job. The problem is that it does not exit after execution: when I check my processes with ps, it shows many cron jobs from previous dates (even 10 days old), which piles up the total number of processes on my server.
Line in crontab:
*/10 * * * * user curl -s https://url.com/file.php > /dev/null 2>&1
Is there any reason for this? If I remember correctly, this started happening after the latest patch update.
Please help.
Modify your command to store the logs in log files instead of dumping everything to /dev/null. The curl options
--max-time (cap on the total time a transfer may take)
--connect-timeout (cap on the time spent establishing the connection)
--retry (number of retries after a transient error)
--retry-max-time (cap on the total time spent retrying)
can be used to control the command's behaviour, for example:
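A sketch of the crontab line with such limits applied and the output kept in a log file (the 10-second connect timeout, the 5-minute cap, the retry count, and the log path are placeholder values to adjust):
*/10 * * * * user curl -s --connect-timeout 10 --max-time 300 --retry 2 https://url.com/file.php >> /var/log/file-cron.log 2>&1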
I have a serverlist file; I read it to get the server names and perform some operations on those servers in a for loop. The command used is /usr/local/bin/sshcmd -q -u $userName -s $serverName, and one execution takes 5-7 minutes per server.
I don't want to run the command on each server one by one; I need to run it on at least 15 servers in parallel to save time.
You can run commands in the background by adding '&' at the end of the command.
For example:
/usr/local/bin/sshcmd -q -u $userName1 -s $serverName1 &
/usr/local/bin/sshcmd -q -u $userName2 -s $serverName2 &
This runs two copies of sshcmd in parallel.
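To cap the number of parallel copies at the 15 you mentioned, you can count the running background jobs and wait whenever the limit is reached. A minimal sketch, assuming a file named serverlist with one server name per line and $userName already set (both names are placeholders):
#!/bin/bash
# run sshcmd against every server in serverlist, at most 15 at a time
while read -r serverName; do
    /usr/local/bin/sshcmd -q -u "$userName" -s "$serverName" &
    # if 15 jobs are already running, pause until one of them exits
    while [ "$(jobs -rp | wc -l)" -ge 15 ]; do
        sleep 1
    done
done < serverlist
wait    # block until the last batch finishes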
I only have access to cPanel, and I have two different types of cron jobs that I would like to understand better.
This one seems to suffer timeout constraints like a normal webpage does:
wget --secure-protocol=TLSv1 -O /dev/null "https://www.website.com/code.php" >/dev/null 2>&1
But what about this one?
php -q /home/website/public_html/code.php >/dev/null 2>&1
How do I make sure a cron job completes?
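One simple way to confirm completion, whichever variant you use, is to keep the output and append a marker only when the script exits successfully. A sketch with hypothetical log paths:
php -q /home/website/public_html/code.php >> /home/website/cron.log 2>&1 && echo "completed at $(date)" >> /home/website/cron.log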
So I have a script for waking up my computer using RTC... The script works fine when run manually, but when I try to run it through crontab -e it doesn't work. I am not very familiar with cron, so maybe I am doing something wrong.
At the moment I use this crontab line:
@reboot /nikos/script/auto.sh
just to try and see if it is working... I tried some other ways (using the full path and a few others), but nothing worked.
The code of the script:
#!/bin/bash
# clear any previously set RTC wake alarm, then arm it for 420 minutes from now
sh -c "echo 0 > /sys/class/rtc/rtc0/wakealarm"
sh -c "echo `date '+%s' -d '+ 420 minutes'` > /sys/class/rtc/rtc0/wakealarm"
Any help is appreciated.
EDIT:
To check whether it worked, I run:
cat /proc/driver/rtc
and I see that the RTC alarm is not enabled.
I see one thing right off: you need to provide absolute paths to /bin/sh, /bin/echo, and /bin/date. Putting absolute paths in scripts executed by cron solves most problems; if your scripts run fine on the command line but not under cron, that is usually the culprit.
/bin/sh -c "/bin/echo 0 > /sys/class/rtc/rtc0/wakealarm"
/bin/sh -c "/bin/echo /bin/date '+%s' -d '+ 420 minutes' > /sys/class/rtc/rtc0/wakealarm"
Answers to another post on Stack Exchange also mention that cron has issues with @reboot scripts for many reasons. Check here: https://unix.stackexchange.com/questions/109804/crontabs-reboot-only-works-for-root
It may be worth your while to run it from a boot-up script such as /etc/rc.d/rc.local instead of using cron. You will still get the best results by providing absolute paths to commands, to make sure they are accessible.
Also: test whether @reboot works properly on your system; not all versions of cron execute it correctly. Add this line to your crontab to test:
@reboot /bin/echo "@reboot works" > /tmp/reboot_test
If the target directory is not yet available when your @reboot script starts at boot time, that will also cause a problem.
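A sketch of the rc.local alternative, reusing the commands from the question (whether /etc/rc.d/rc.local exists, runs at boot, and needs to be marked executable varies by distribution):
#!/bin/sh
# clear any existing RTC wake alarm, then arm it for 420 minutes from now
/bin/sh -c "/bin/echo 0 > /sys/class/rtc/rtc0/wakealarm"
/bin/sh -c "/bin/echo `/bin/date '+%s' -d '+ 420 minutes'` > /sys/class/rtc/rtc0/wakealarm"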
I set up a cron job on my Ubuntu server. Basically, I just want this job to call a PHP page on another server; that page then cleans up some stuff in a database. So I thought it was a good idea to call the page with wget and send the result to /dev/null, because I don't care about the output of this page at all; I just want it to do its database-cleaning job.
So here is my crontab:
0 0 * * * /usr/bin/wget -q --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php > /dev/null 2>&1
(I post a password to make sure no one but me can run the script.) It works like a charm, except that wget writes an empty page to my user directory each time: the result of downloading the PHP page.
I don't understand why the result isn't sent to /dev/null. Any idea what the problem is here?
Thank you very much!
What wget prints to STDOUT is status information: making the connection, showing progress, and so on. The downloaded document itself is saved to a file, which is why redirecting STDOUT does not stop the empty page from appearing in your directory.
If you don't want wget to store the downloaded file, use the -O parameter to write the document to /dev/null:
/usr/bin/wget -q --post-data 'pass=mypassword' -O /dev/null http://www.mywebsite.com/myscript.php > /dev/null 2>&1
Check out the wget man page. The -q option (which you already pass) completely disables output to STDOUT, though of course redirecting the output as you do works too.
wget -O /dev/null ....
should do the trick
You can mute wget's output with the --quiet option:
wget --quiet http://example.com