Using wget to run a PHP cron job - disable notification email (Linux)

I'm using GoDaddy as a web host and I'd like to disable the email notification that is sent after a cron job runs. Lucky for me, they have been no help, but the cron job area says:
You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job, you can redirect the command's output to /dev/null like this: mycommand >/dev/null 2>&1
I've tried several variations of this and nothing seems to fix it.
My command:
wget http://example.com/wp-admin/tools.php?page=post-by-email&tab=log&check_mail=1
Any advice is greatly appreciated.

As the cronjob area says, you need to redirect the command’s output to /dev/null.
Your command should look like this:
wget -O /dev/null -o /dev/null "http://example.com/wp-admin/wp-mail.php" &> /dev/null
The -O option makes sure that the fetched content is sent to /dev/null.
If you want the fetched content to be downloaded in the server filesystem, you can use this option to specify the path to the desired file.
The -o option sends wget's log messages to /dev/null instead of stderr.
&> /dev/null is another way to redirect both stdout and stderr to /dev/null.
NOTES
For more information on wget, check the man pages: you can type man wget on the console, or use the online man pages: http://man.he.net/?topic=wget&section=all
With both -O and -o pointing to /dev/null, the output redirection ( &> ... ) should not be needed.
If you don't need to download the contents, and only need the server to process the request, you can simply use the --spider argument.
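For example, a minimal sketch of the asker's command using --spider (this assumes the URL from the question; note that it must be quoted, otherwise the shell treats each & as a command separator):
wget -q --spider "http://example.com/wp-admin/tools.php?page=post-by-email&tab=log&check_mail=1" >/dev/null 2>&1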

Related

bash redirect output to file but result is incomplete

The question of redirecting the output of a command has been asked many times already; however, I am seeing some strange behavior. I am using a bash shell (Debian), version 4.3.30(1)-release, and tried to redirect output to a file, but not everything gets logged in the file.
The binary I am trying to run is sauce-connect v4.4.1 for Linux (the Sauce Labs client that is publicly available on the internet).
If I run
#sudo ./bin/sc --doctor
it shows me the complete output; it prints:
INFO: resolved to '23.42.27.27'
INFO: resolving 'g2.symcb.com' using
DNS server '10.0.0.5'...
(followed by other lines)
INFO: 'google.com' is not in hosts file
INFO: URL https://google.com can be reached
However, if I redirect the same command to a file with the following command
#sudo ./bin/sc --doctor > alloutput.txt 2>&1
and do
#cat alloutput.txt
the same command output is logged, but truncated, as follows:
INFO: resolved to '23.42.2me#mymachine:/opt/$
The line is incomplete, and the lines that follow are missing entirely.
I have tried >> for appending, but it has the same problem. Using command &> alloutput.txt does not capture everything either. Can anyone point out how to get all lines of the above command logged completely to the text file?
UPDATE
In the end I managed to use the binary's native logging with --log alloutput.txt, which gives me the complete, correct output.
However, I am leaving this question open, as I am still wondering why some information/lines go missing when using plain output redirection.
You should try stdbuf -o0 to disable output buffering, like this:
stdbuf -o0 ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
That is a funny problem, I've never seen that happening before. I am going to go out on a limb here and suggest this, see how it works:
sudo ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
#commandtorun &> alloutput.txt
This redirects both stderr and stdout to the same file.

Cron output to nothing

I've noticed that my cron outputs are creating index.html files on my server. The command I'm using is wget http://www.example.com 2>&1. I've also tried including --reject "index.html*"
How can I prevent the output from creating index.html files?
--2013-07-21 16:03:01-- http://www.example.com
Resolving example.com... 192.0.43.10
Connecting to www.example.com|192.0.43.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: `index.html.9'
0K 0.00 =0s
2013-07-21 16:03:03 (0.00 B/s) - `index.html.9' saved [0/0]
Normally, the whole point of running wget is to create an output file. A URL like http://www.example.com typically resolves to http://www.example.com/index.html, so by creating index.html, the wget command is just doing its job.
If you want to run wget and discard the downloaded file, you can use:
wget -q -O /dev/null http://www.example.com
The -q option suppresses wget's progress and log messages (you could also send them to a file with -o logfile); -O /dev/null discards the downloaded file.
If you want to be sure that anything wget writes to stdout or stderr is discarded:
wget -q -O /dev/null http://www.example.com >/dev/null 2>&1
In a comment, you say that you're using the wget command to "trigger items on your cron controller" using CodeIgniter. I'm not familiar with CodeIgniter, but downloading and discarding an HTML file seems inefficient. I suspect (and hope) that there's a cleaner way to do whatever you're trying to do.
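I'm not familiar enough with CodeIgniter to say for certain, but as a sketch of what that cleaner way might look like: CodeIgniter controllers can usually be invoked from the command line through index.php, so cron could run the task directly instead of fetching and discarding a page over HTTP. The controller name cron, the method run, and the install path below are hypothetical placeholders:
0 * * * * /usr/bin/php /var/www/example.com/index.php cron run >/dev/null 2>&1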

Adding a cronjob into cPanel

I just need to run the following url using cron jobs in my cPanel.
When I try to execute the link
http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron
it runs fine in the browser, but when I add the same URL as-is to the cron jobs I get the following error:
bash/sh/ file not found
When I edit the cron job to
/usr/bin/php /home/staging/public_html/index.php?option=com_acymailing&ctrl=cron
I get a 404 error instead.
My cPanel username is staging
Can anybody tell me the correct syntax for a cron job in cPanel?
The cron job runs every minute and the email report shows these errors.
Use wget with the full URL.
@yannick-blondeau As suggested, you can use wget or curl to make a simple request to your website.
Usually wget will attempt to save the response to a file; the -O /dev/null flag discards the downloaded content, and -q silences wget's own messages. Also note that the URL has to be quoted, otherwise the shell interprets the & characters in it. An example will look like
wget -O /dev/null "http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron"
wget -q -O /dev/null "http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron"
You can also use curl for the same effect:
curl --silent "http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron"
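Putting that into a crontab line, a sketch might look like this (the five-minute schedule is just an assumption; in cPanel you normally set the schedule in its own fields and paste only the command part):
*/5 * * * * /usr/bin/wget -q -O /dev/null "http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron" >/dev/null 2>&1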

Check Whether a Web Application is Up or Not

I would like to write a script to check whether the application is up or not using a Unix shell script.
From googling I found the command wget -O /dev/null -q http://mysite.com, but I am not sure how this works. Can someone please explain? It would be helpful for me.
Run the wget command
the -O option tells where to put the data that is retrieved
/dev/null is a special UNIX file that is always empty. In other words the data is discarded.
-q means quiet. Normally wget prints lots of info telling its progress in downloading the data so we turn that bit off.
http://mysite.com is the URL of the exact web page that you want to retrieve.
Many programmers create a special page for this purpose that is short and contains status data. In that case, do not discard it; point -O at a real file (for example -O mysite.log) instead of /dev/null so the page is saved.
Check whether you can connect to your web server: connect to the port your web server listens on.
If it connects properly, your web server is up; otherwise it is down.
You can check further (e.g. whether the index page is served properly).
See this shell script:
if wget -O /dev/null -q http://shiplu.mokadd.im;
then
echo Site is up
else
echo Site is down
fi
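An equivalent sketch using curl, in case wget is not available: -s keeps curl quiet, -f makes it exit non-zero on HTTP errors (404, 500, ...), and -o /dev/null discards the body.
if curl -sf -o /dev/null http://shiplu.mokadd.im;
then
echo Site is up
else
echo Site is down
fi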

Using wget in a crontab to run a PHP script

I set up a cron job on my Ubuntu server. Basically, I just want this job to call a PHP page on another server. This PHP page will then clean up some stuff in a database. So I thought it was a good idea to call this page with wget and then send the result to /dev/null, because I don't care about the output of this page at all; I just want it to do its database cleaning job.
So here is my crontab:
0 0 * * * /usr/bin/wget -q --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php > /dev/null 2>&1
(I post a password to make sure no one can run the script but me). It works like a charm, except that each time wget writes an empty page into my user directory: the result of downloading the PHP page.
I don't understand why the result isn't sent to /dev/null. Any idea what the problem is here?
Thank you very much!
wget's output to STDOUT is its progress information: trying to make a connection, showing download progress, etc. The shell redirect only covers that output, not the file wget saves.
If you don't want it to store the saved file, use the -O file parameter:
/usr/bin/wget -q --post-data 'pass=mypassword' -O /dev/null http://www.mywebsite.com/myscript.php > /dev/null 2>&1
Check out the wget man page. You'll also find the -q option for completely disabling output to STDOUT (but of course, redirecting the output as you do works too).
wget -O /dev/null ....
should do the trick
you can mute wget output with the --quiet option
wget --quiet http://example.com
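Combining the suggestions above, a sketch of the full crontab entry (based on the line from the question) would be:
0 0 * * * /usr/bin/wget --quiet -O /dev/null --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php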
