I have found many questions and articles about this, but I still have some difficulties.
I'm using the following command:
/usr/bin/php home/domain.com/public_html/cron/script.php
I receive the following error:
Status: 404 Not Found
X-Powered-By: PHP/5.2.8
Content-type: text/html
No input file specified.
I'm using cPanel, and the file is hosted at domain.com/cron/script.php.
Any ideas? Thanks.
Put a leading slash on the script name, i.e.
/usr/bin/php /home/domain.com/public_html/cron/script.php
Unless you actually intend to run the script through the web, as in lacqui's answer, and you don't mind random third parties being able to run it any time they like, there's no reason to put it inside your public_html directory; quite the opposite.
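If the script does have to stay under public_html, one option is to refuse web requests at the top of the file. A minimal sketch (the check and message are illustrative, not from the original post):

<?php
// Abort unless the script is being run from the command line (e.g. by cron)
if (php_sapi_name() !== 'cli') {
    header('HTTP/1.0 403 Forbidden');
    exit('This script can only be run from the command line.');
}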
Try:
wget -O - http://domain.com/cron/script.php
and see if you get a better result.
Edit: added "-O -" so the output goes to stdout instead of being written to the home folder.
You might need to use the binary known as php-cli instead of just php.
I realise this is an old question and you may have found a solution by now, but none of the answers above helped me, and I was getting the same 404 error when running a cron script.
The problem was the way the path to the PHP script was written. The path must start from public_html, like this: /usr/bin/php public_html/public/index.php
On several shared hosts, the wget and curl commands are not available from cron. If you want to execute a web (HTTP) request from cron, you can do it by calling the desired URL with PHP's cURL functions inside the cron PHP script.
Below is example code to put inside the cron PHP file:
<?php
function callRemoteHttp($url)
{
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    // Return the response body as a string instead of printing it
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($curl);
    $ret_arr = array(
        'data'        => $result,
        'status_code' => curl_getinfo($curl, CURLINFO_HTTP_CODE),
    );
    curl_close($curl);
    return $ret_arr;
}
$ret = callRemoteHttp('http://example.com?param1=value1&param2=value2');
?>
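You can then act on the returned status code, for example to log failed runs. A short sketch using the callRemoteHttp() function above (the URL and log message are only illustrative):

<?php
$ret = callRemoteHttp('http://example.com/cron-endpoint');
// Treat anything other than HTTP 200 as a failed cron run
if ($ret['status_code'] != 200) {
    error_log('Cron HTTP call failed with status ' . $ret['status_code']);
}
?>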
I tried to get started with Gearman. After downloading and setting it up, gearman_version() works. But when I start the server and try to init the worker like so:
php myFileName.php &
I see the source code of the file printed instead.
And when I init the client, I see the source code too. What am I doing wrong?
I don't know why, but the examples from YouTube were not correct in my case. The first scripts that worked for me came from http://php.net/manual/ru/gearman.examples-reverse-bg.php
You likely have short open tags disabled on your install. Note that running a PHP file that doesn't actually contain any PHP tags will just echo the contents of the file:
>$ echo 'hello' > text.php
>$ php text.php
hello
>$
You can verify the setting for your install with the following:
>$ php -i | grep "short_open_tag"
short_open_tag => On => On
If short_open_tag is On, you're all set.
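To see the effect yourself, compare the two opening tags; a minimal sketch, assuming short_open_tag is Off (the file names are just for illustration):

<?php
// full.php: the full opening tag is always recognised, so this executes
echo "runs everywhere";

<?
// short.php: with short_open_tag=Off this tag is not recognised,
// so running the file just prints its contents verbatim
echo "only runs with short_open_tag=On";

Running php full.php prints the message, while php short.php echoes the file's own source, which is exactly the symptom described in the question.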
I want to execute a PHP script on my Ubuntu virtual machine, but I only have access to the command line. I've thought about using cURL, but I have a problem with it:
When I use the following command: "curl http://localhost/myscript.php"
the response is the plain file ("<?php echo "<p>Hello world</p>"; ?>") instead of the HTML response ("<p>Hello world</p>").
How can I solve this problem?
Thank you
You may have to set the content-type using header().
Before the echo statement, insert this:
header('Content-Type: text/html');
See: http://php.net/manual/en/function.header.php
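Following that suggestion, a minimal version of myscript.php with the header sent before any output (a sketch based on the code in the question):

<?php
// Send the content-type header before producing any output
header('Content-Type: text/html');
echo "<p>Hello world</p>";
?>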
Team, kindly help with the error "Alert!: Unsupported URL scheme!" when sending bulk SMS in a Linux bash script. The lynx command works fine for a static URL. This is what I have got below:
#!/bin/bash
a="lynx -dump 'http://localhost:13013/cgi-bin/sendsms?from=8005&to="
b="&username=tester&password=foobar&smsc=smsc1&text=Test+mt+update'"
for i in `cat numbers.txt`;do $a$i$b;echo sent $i; done;
numbers.txt:
258909908780
256789123456
676675234789
The problem is the single quote before http:. Quotes are not processed after expanding variables, so it's being sent literally to lynx. There's no 'http URL scheme, hence the error message.
Remove the quotes before http: and after +update.
#!/bin/bash
a="lynx -dump http://localhost:13013/cgi-bin/sendsms?from=8005&to="
b="&username=tester&password=foobar&smsc=smsc1&text=Test+mt+update"
for i in $(cat numbers.txt);do $a$i$b;echo sent $i; done;
For more information about this, see
Setting an argument with bash
I want to set up Nagios (on my Debian box) to verify that a SharePoint server is up. I already tried to use cURL, but it didn't work for some reason I couldn't identify, so I decided to change the way I verify that service.
It's simple in theory: I just have to make a script that sends a request (HTTP or HTTPS, doesn't matter) and checks the response, 200 for success or 40x if not (that part is fine).
Do I have to use telnet or some FTP service to do that, or can I use another feature/tool?
With telnet I'm having a problem with a 400 error. SharePoint returns this error whether the server is up or down, so it doesn't work for me.
Any ideas?
You can use the check_http plugin of Nagios. For example:
check_http -H SharepointHostname/IP -p port
You can use the -S flag for secure http connections
You can use the -u flag for going to specific URL
You can use the -s flag to search for a specific string in the HTML page returned from the url specified with the -u flag.
So basically you can request a specific page and scan it for a known string; if the string is found, you can be sure that page is up (which means the server is up, etc.).
Example:
check_http -H my.sharepoint.com -u /start/page/sharepoint.aspx -s "test string"
Commonly this is done on login pages etc. Don't forget to escape special chars in your URL, if it contains any (like ? and &).
There's also a perl script available for checking sharepoint servers.
Does this not do what you want:
http://exchange.nagios.org/directory/Plugins/Email-and-Groupware/Microsoft-Sharepoint/check_sharepoint-2Epl/details
Most likely you're going to need a login/password for Sharepoint in order to monitor much more than the basic IIS / website is working.
I did it my own way to check whether SharePoint is UP or DOWN. Please note that this script just checks the service status, nothing more (no user permissions or anything like that).
Perl script:
#!/usr/bin/env perl
use strict;
use warnings;
use LWP::UserAgent;
use Getopt::Long qw(:config no_ignore_case_always auto_version);

# -h <host>: SharePoint host name or IP to check
GetOptions ('h=s' => \my $h);

my $ua = LWP::UserAgent->new;
$ua->agent('Mozilla/4.0 (compatible; MSIE 5.0; Windows 95)');

my $req = $ua->get('http://' . $h);

# Keep the body on success, or the status line (e.g. "401 Unauthorized") on failure
my $retorno = '';
if ($req->is_success)
{
    $retorno = $req->content;
}
else
{
    $retorno = $req->status_line;
}

# An unauthenticated request to SharePoint is expected to be rejected with
# 401 Unauthorized; getting that answer proves the service is responding.
if ($retorno eq "401 Unauthorized")
{
    print "OK: SharePoint service at " . $h . " server is UP.";
    exit 0;
}
else
{
    print "CRITICAL: SharePoint service at " . $h . " server is DOWN.";
    exit 2;
}
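You can test it from the shell before wiring it into Nagios (the host name here is illustrative):

perl check_sharepoint.pl -h my.sharepoint.com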
In case you get this exception when you run the script:
Can't locate LWP/UserAgent.pm in @INC
this article may help you, as it helped me:
http://help.directadmin.com/item.php?id=274
So in Nagios commands.cfg file you'll declare the command this way:
command_line /usr/local/nagios/libexec/check_sharepoint.pl -h $HOSTADDRESS$
Where $HOSTADDRESS$ is the host IP variable in Nagios scope.
Remember to chmod +x on the file. I know you will...
I've spent days and still can't figure this out.
I have following file structure under public_html:
cron_jobs/file.php contains - > include('../base/basefile.php')
base/basefile.php contains - > include('baseSubFile.php')
when I run
/pathtophp/php -f ~/public_html/cron_jobs/file.php
it works OK, but when I copy the same command to cron in cPanel, I get an error saying
'basesubfile.php' can't be found
Please help.
Cron won't run from the same directory as your PHP file, so you'll need to change to it first:
cd /home/user/public_html/cron_jobs/ && /pathtophp/php -f file.php
I recommend the full path versus ~ when dealing with cron scripts to avoid confusion
You should use
include dirname( __FILE__ ) . '/../base/basefile.php';
and
include dirname( __FILE__ ) . '/baseSubFile.php';
The function dirname returns the parent directory's path, so the includes resolve relative to the file itself instead of the current working directory.
Simply put this at the top of your PHP script:
chdir(dirname(__FILE__));
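With that in place, the relative includes from the question resolve no matter where cron starts the process. A minimal sketch of cron_jobs/file.php (paths taken from the question):

<?php
// Make this file's directory the working directory, so relative
// includes behave the same under cron as when run by hand
chdir(dirname(__FILE__));
include('../base/basefile.php');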