Cron output to nothing - cron

I've noticed that my cron outputs are creating index.html files on my server. The command I'm using is wget http://www.example.com 2>&1. I've also tried including --reject "index.html*"
How can I prevent the output from creating index.html files?
--2013-07-21 16:03:01-- http://www.example.com
Resolving example.com... 192.0.43.10
Connecting to www.example.com|192.0.43.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: `index.html.9'
0K 0.00 =0s
2013-07-21 16:03:03 (0.00 B/s) - `index.html.9' saved [0/0]

Normally, the whole point of running wget is to create an output file. A URL like http://www.example.com typically resolves to http://www.example.com/index.html, so by creating index.html, the wget command is just doing its job.
If you want to run wget and discard the downloaded file, you can use:
wget -q -O /dev/null http://www.example.com
Here -q suppresses wget's log messages (-o /dev/null would send the log to /dev/null instead); -O /dev/null discards the downloaded file.
If you want to be sure that anything wget writes to stdout or stderr is discarded:
wget -q -O /dev/null http://www.example.com >/dev/null 2>&1
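If this runs from a crontab, the full entry might look like this (the schedule shown here, daily at 06:00, is only a placeholder):

```shell
# m h dom mon dow  command
0 6 * * * wget -q -O /dev/null http://www.example.com >/dev/null 2>&1
```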
In a comment, you say that you're using the wget command to "trigger items on your cron controller" using CodeIgniter. I'm not familiar with CodeIgniter, but downloading and discarding an HTML file seems inefficient. I suspect (and hope) that there's a cleaner way to do whatever you're trying to do.

Related

Using WGET to run a cronjob PHP disable notification email

I'm using GoDaddy as a web host and I'd like to disable the email notification that is sent after a cron job is done. Lucky for me, they have been no help, but the cron job area says:
You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job you can redirect the command's output to /dev/null like this: mycommand >/dev/null 2>&1
I've tried several variations of this and nothing seems to fix it.
My command:
wget http://example.com/wp-admin/tools.php?page=post-by-email&tab=log&check_mail=1
Any advice is greatly appreciated.
As the cronjob area says, you need to redirect the command’s output to /dev/null.
Your command should look like this:
wget -O /dev/null -o /dev/null "http://example.com/wp-admin/wp-mail.php" &> /dev/null
The -O option makes sure that the fetched content is sent to /dev/null.
If you instead want the fetched content saved to the server's filesystem, pass the path of the desired file to -O.
The -o option sends wget's log to /dev/null instead of stderr.
&> /dev/null is a bash shorthand that redirects both stdout and stderr to /dev/null (in plain sh, write >/dev/null 2>&1).
NOTES
For more information on wget, check the man pages: you can type man wget on the console, or use the online man pages: http://man.he.net/?topic=wget&section=all
With both -O and -o pointing to /dev/null, the output redirection ( &> ... ) should not be needed.
If you don't need to download the contents and only need the server to process the request, you can simply use the --spider option.
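For example, a minimal health-check helper along these lines (check_url is an illustrative name) relies on --spider so nothing is saved and only the exit status matters:

```shell
# --spider asks wget to probe the URL without saving the response body;
# the function's exit status (wget's own exit status) is all that remains
check_url() {
  wget -q --spider "$1"
}

# usage: if check_url http://www.example.com; then echo "up"; fi
```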

analyse return value of wget command

I would like to analyse the value returned by the wget command.
I tried these:
GET=$(wget ftp://user:user#192.168.1.110/conf.txt
echo $GET
GET=`wget ftp://user:user#192.168.1.110/conf.txt`
echo $GET
but I don't get the returned value when I display the GET variable.
How do I get the returned value of wget?
Your question is a little ambiguous. If you're asking "What is the exit code of the wget process?", that is accessible in the $? special variable.
[~/tmp]$ wget www.google.foo
--2013-11-01 08:33:52-- http://www.google.foo/
Resolving www.google.foo... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ‘www.google.foo’
[~/tmp]$ echo $?
4
If you're asking for the standard output of the wget command, then what you're doing is going to give you that, although you have a typo in your first line (add a closing parenthesis after "conf.txt"). The problem is that wget doesn't put anything on stdout by default. The progress bars and messages you see when you run wget interactively actually go to stderr, which you can see by redirecting stderr to stdout using the shell redirection 2>&1:
[~/tmp]$ GET=`wget www.google.com 2>&1`
[~/tmp]$ echo $GET
--2013-11-01 08:36:23-- http://www.google.com/ Resolving www.google.com... 74.125.28.104, 74.125.28.99, 74.125.28.103, ... Connecting to www.google.com|74.125.28.104|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 18637 (18K) [text/html] Saving to: ‘index.html’ 0K .......... ........ 100% 2.72M=0.007s 2013-11-01 08:36:23 (2.72 MB/s) - ‘index.html’ saved [18637/18637]
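The stderr-versus-stdout behaviour can also be demonstrated offline with any command that writes to stderr; a minimal sketch using ls:

```shell
# stdout only: ls's error message goes to stderr, so the substitution
# captures nothing (stderr is silenced here just to keep the demo quiet)
out_plain=$(ls /nonexistent-path-for-demo 2>/dev/null) || true

# with 2>&1, stderr is merged into stdout and gets captured
out_merged=$(ls /nonexistent-path-for-demo 2>&1) || true

echo "plain: [$out_plain]"
echo "merged: [$out_merged]"
```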
If you're asking for the contents of the resource that wget received, then you need to instruct wget to send its output to stdout instead of a file. Depending on your flavor of wget, it's likely an option like -O or --output-document, and you can construct your command line as: wget -O - <url>. By convention the single dash (-) represents stdin and stdout in command line options, so you're telling wget to send its file to stdout.
[~/tmp]$ GET=`wget -O - www.google.com`
--2013-11-01 08:37:31-- http://www.google.com/
Resolving www.google.com... 74.125.28.104, 74.125.28.99, 74.125.28.103, ...
Connecting to www.google.com|74.125.28.104|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18621 (18K) [text/html]
Saving to: ‘STDOUT’
100%[=======================================>] 18,621 98.5KB/s in 0.2s
2013-11-01 08:37:32 (98.5 KB/s) - written to stdout [18621/18621]
[~/tmp]$ echo $GET
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage"><head>
<snip lots of content>
You can get the exit code with
echo $?
after executing the command. But if you want to react to a successful or failed download, you can use if:
if wget -q www.google.com
then
echo "works"
else
echo "doesn't work"
fi
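The numeric exit codes are documented in the wget manual; a small helper (describe_wget_status is an illustrative name) can turn them into messages:

```shell
# wget's documented exit codes (see its man page): 0 success,
# 1 generic error, 2 parse error, 3 file I/O error, 4 network failure,
# 5 SSL verification failure, 6 authentication failure, 7 protocol
# error, 8 server issued an error response (e.g. 404)
describe_wget_status() {
  case "$1" in
    0) echo "success" ;;
    4) echo "network failure" ;;
    8) echo "server error response" ;;
    *) echo "other error ($1)" ;;
  esac
}

# usage: wget -q www.google.com; describe_wget_status $?
```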

Adding a cronjob into cPanel

I just need to run the following url using cron jobs in my cPanel.
When I am trying to execute the link
http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron
the link works in a browser, but when I tried to add the same URL as-is to the cron jobs, I got the following error:
bash/sh/ file not found
so I edited the cron job to
/usr/bin/php /home/staging/public_html/index.php?option=com_acymailing&ctrl=cron
but now I get a 404 error.
My cPanel username is staging
Can anybody tell me the syntax of a cron job in cPanel?
The cron job runs every minute, and the email report shows these errors.
Use wget with the full URL.
As @yannick-blondeau suggested, you can use wget or curl to make a simple request to your website.
Usually wget will save the response to a file; the -O /dev/null flag discards the fetched content, and -q additionally silences wget's log output (note that -q by itself does not stop wget from saving the file). Also quote the URL, since the shell would otherwise treat the & as a background operator. An example will look like
wget -q -O /dev/null "http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron"
You can also use curl for the same effect
curl --silent "http://www.insidewealth.com.au/index.php?option=com_acymailing&ctrl=cron"
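One shell detail matters here: these URLs contain &, which an unquoted command line treats as a background operator, so everything after it never reaches wget or curl. A quick offline demonstration of why the quotes are needed:

```shell
# quoted, the full query string survives; unquoted, the shell would cut
# the command at the & and run the first part in the background
url='http://example.com/index.php?option=com_acymailing&ctrl=cron'
echo "$url"
```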

Check Whether a Web Application is Up or Not

I would like to write a script to check whether the application is up, using a Unix shell script.
From googling I found the command wget -O /dev/null -q http://mysite.com, but I'm not sure how it works. Can someone please explain? It will be helpful for me.
Run the wget command
the -O option tells where to put the data that is retrieved
/dev/null is a special UNIX file that is always empty. In other words the data is discarded.
-q means quiet. Normally wget prints lots of info telling its progress in downloading the data so we turn that bit off.
http://mysite.com is the URL of the exact web page that you want to retrieve.
Many programmers create a special page for this purpose that is short and contains status data. In that case, do not discard it; save it by replacing -O /dev/null with -O mysite.log, or append each check to the log with -O - >> mysite.log.
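That logging idea could be sketched like this (check_site, mysite.log, and the URL are placeholder names):

```shell
# append a timestamp plus the fetched status page to a log file
check_site() {
  { date; wget -q -O - "$1"; } >> mysite.log 2>&1
}

# usage: check_site http://mysite.com
```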
Check whether you can connect to your web server: connect to the port your web server listens on.
If the connection succeeds, your web server is up; otherwise it's down.
You can check further (e.g. whether the index page is correct).
See this shell script.
if wget -O /dev/null -q http://shiplu.mokadd.im;
then
echo Site is up
else
echo Site is down
fi

Using wget in a crontab to run a PHP script

I set up a cron job on my Ubuntu server. Basically, I just want this job to call a PHP page on another server. This PHP page will then clean up some stuff in a database. So I thought it was a good idea to call this page with wget and then send the result to /dev/null, because I don't care about the output of this page at all; I just want it to do its database cleaning job.
So here is my crontab:
0 0 * * * /usr/bin/wget -q --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php > /dev/null 2>&1
(I post a password to make sure no one can run the script but me). It works like a charm, except that each time wget writes an empty page into my user directory: the result of downloading the PHP page.
I don't understand why the result isn't sent to /dev/null. Any idea what the problem is here?
Thanks you very much!
The messages wget prints (connecting, progress, etc.) go to stderr, which your 2>&1 already discards; the file appearing in your directory is the downloaded page itself.
If you don't want wget to store the downloaded file, use the -O option:
/usr/bin/wget -q --post-data 'pass=mypassword' -O /dev/null http://www.mywebsite.com/myscript.php > /dev/null 2>&1
Check out the wget manpage. You'll also find the -q option for suppressing wget's log output (but of course, redirecting the output as you do works too).
wget -O /dev/null ....
should do the trick
you can mute wget output with the --quiet option
wget --quiet http://example.com
