PHP cron job can't create new files

I have developed a script that my company uses to back up our clients' DBs on our server by creating a dump file of each DB. Here is my code:
$result = mysql_query("SELECT * FROM backup_sites");
while ($row = mysql_fetch_array($result)) {
    $backup_directory = $row["backup_directory"];
    $thishost         = $row["host"];
    $thisusername     = $row["username"];
    $thispassword     = $row["password"];
    $thisdb_name      = $row["db_name"];
    $link = mysql_connect($thishost, $thisusername, $thispassword);
    if ($link) {
        mysql_select_db($thisdb_name);
        // Timestamped dump file name for this client's database
        $backupFile = $thisdb_name . "-" . date("Y-m-d-H-i-s", strtotime("+1 hour")) . '.sql';
        // Note the relative output path "site-backups/..."
        $command = "/usr/bin/mysqldump -h $thishost -u $thisusername -p$thispassword $thisdb_name > site-backups/$backup_directory/$backupFile";
        system($command);
    }
}
The script gets all the clients' DB information from a table (backup_sites) and loops through each one to create the backup file in that client's directory. This script works great when it is executed manually. The problem I am having is that it does not work when I set it to run as a cron job. The email from the cron job contains this error for each DB backup attempt:
sh: site-backups/CLIENT_DIRECTORY/DB_DUMP_FILE.sql: No such file or directory
So for whatever reason, the Cron job is unable to create/write the DB dump file. I can't figure this one out. It just seems strange that the script works perfectly when executed manually. Anyone have any ideas? Thanks in advance!
Adam

Consider using absolute paths. Cron may run with a different working directory.
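For instance, the crontab entry could change into the script's directory before running it, so the relative site-backups/... path resolves the same way as in a manual run. A minimal sketch, with hypothetical paths for the script and its log:

# Hypothetical crontab entry: cd into the directory that contains site-backups/
# before invoking the backup script, so relative paths behave as they do manually.
0 2 * * * cd /var/www/backup && /usr/bin/php backup_dbs.php >> /var/log/db-backup.log 2>&1

Hard-coding an absolute output path in the $command string (e.g. > /var/www/backup/site-backups/...) achieves the same thing without touching the crontab.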

Related

How do I use Nagios to monitor a log file that generates a random ID

This is the log file that I want to monitor:
/test/James-2018-11-16_15215125111115-16.15.41.111-appserver0.log
I want Nagios to read this log file so I can monitor it for a specific string.
The issue is with 15215125111115: this is the random ID that gets generated.
Here is the part of my script where Nagios checks for the log file path:
Variables:
HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
..
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
--tag='failorder' --logfile=/test/james-$(date +"%F")_-${HOSTNAMEIP}-appserver0.log \
....
I am getting the following output in Nagios:
could not find logfile /test/James-2018-11-16_-16.15.41.111-appserver0.log
The number 15215125111115 is always generated randomly, and I don't know how to get Nagios to match it. Is there a way to add a variable for this or something? I tried adding an asterisk "*" but that didn't work.
Any ideas would be much appreciated.
--tag failorder --type rotating::uniform --logfile /test/dummy \
--rotation "james-$(date +"%F")_\d+-${HOSTNAMEIP}-appserver0.log"
If you add a -v you can see what happens inside. The type rotating::uniform tells check_logfiles that the rotation scheme makes no difference between the current log and rotated archives as far as the filename is concerned (you frequently find something like xyz..log). What check_logfiles does is look into the directory where the logfiles are supposed to be; from /test/dummy it only uses the directory part. Then it takes all the files inside /test and compares the filenames with the --rotation argument. The files which match are sorted by modification time, so check_logfiles knows which of the files in question was updated most recently, and the newest one is considered to be the current logfile. Inside this file check_logfiles searches for the criticalpattern.
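Assembled into one invocation, the check might look like the sketch below; the paths come from the question, while --criticalpattern and its 'ERROR' value are assumptions standing in for whatever string you want to alert on:

/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
  --tag failorder --type rotating::uniform --logfile /test/dummy \
  --rotation "james-$(date +"%F")_\d+-${HOSTNAMEIP}-appserver0.log" \
  --criticalpattern 'ERROR'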
Gerhard

How can I get cover to create cover_db?

I'm trying to set up Jenkins to run tests and coverage on my Perl project. In Jenkins I have a shell script that looks like this:
perl --version
perl Build.PL
prove -v test.pl --timer -l t > jenkins-${JOB_NAME}-${BUILD_NUMBER}-junit.TAP
/usr/local/bin/cover -test -report clover
When the shell executes the call "/usr/local/bin/cover -test -report clover", it produces the following output:
Deleting database /var/lib/jenkins/workspace/Banking/cover_db
cover: running ./Build test "--extra_compiler_flags=-O0 -fprofile-arcs -ftest-coverage" "--extra_linker_flags=-fprofile-arcs -ftest-coverage"
test.pl .. ok
All tests successful.
Files=1, Tests=120, 7 wallclock secs ( 0.04 usr 0.01 sys + 0.91 cusr 0.04 csys = 1.00 CPU)
Result: PASS
Reading database from /var/lib/jenkins/workspace/Banking/cover_db
Writing clover output file to '/var/lib/jenkins/workspace/Banking/cover_db/clover.xml'...
No such file or directory at /usr/local/share/perl5/Devel/Cover/Report/Clover/Builder.pm line 40.
Build step 'Execute shell' marked build as failure
It seems to me like it deletes the cover_db directory if it exists but can't recreate it. Does anyone know what I'm doing wrong? As the Jenkins user I can both create and delete the cover_db directory, so I guess it should not be a permissions problem.
Thank you in advance
Jan Eskilsson
Line 40 mentioned in Devel/Cover/Report/Clover/Builder.pm is:
open( my $fh, '>', $outfile ) or die($!);
with $outfile being 'clover.xml' in your example.
Edit: To clarify, it seems to me to die at
open( my $fh, '>', '/var/lib/jenkins/workspace/Banking/cover_db/clover.xml' )
That is why I suspect one of the following problems, since open itself has never failed me in any unpredictable manner. I cannot, unfortunately, say exactly what the problem is; hopefully someone can.
It is therefore very likely a problem with Clover not having the access rights to write this file in that directory, as this line simply tries to open the file for writing and dies if open() fails.
Double-check the permissions for Clover; that should fix it.
Edit: Since it is passing the whole path (i.e. $outfile seems to contain /var/lib/jenkins/workspace/Banking/cover_db/clover.xml, not just clover.xml), it might also be that the directory has not been created at that point; I did not dig deeply enough into the code to verify when that is done.
Thus, you may also want to check whether the directory exists at all after the run, and whether Clover has the rights to create it.
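One quick way to rule out both suspects is to test them directly as the Jenkins user, using the paths from the error message; a rough sketch:

# Can the jenkins user create the directory and write the report file?
sudo -u jenkins mkdir -p /var/lib/jenkins/workspace/Banking/cover_db
sudo -u jenkins touch /var/lib/jenkins/workspace/Banking/cover_db/clover.xml
ls -ld /var/lib/jenkins/workspace/Banking/cover_db

If both commands succeed, permissions are unlikely to be the culprit and the missing directory becomes the prime suspect.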

Scheduler doesn't work in vTiger

I am an intern at a company in Malta. The company has just made a big change from SugarCRM to vTiger CRM. Now we have a problem with the scheduler. What we want is that when a mail comes in, it automatically gets synced with the organisations and contacts (I can link them manually by clicking the "SCAN NOW" button of the mail converter), but I want it to happen automatically.
However, my cron files are not getting updated.
I installed a cron job on the Linux server with the line below:
*/15 * * * * sh /vtiger_root/cron/vtigercron.sh >/dev/null 2>&1
I adapted the PHP_SAPI check and added the permissions on the proper files, but still nothing (as we speak, my scheduled task for the mail is still at 1).
So every 15 minutes vtigercron.sh is supposed to run vtigercron.php, but that doesn't happen. When I run vtigercron manually, everything works fine (the scheduler cron states get updated), but not via the cron entry on the server.
Can somebody please be my hero?
In our crontab -
(sudo) vim /etc/crontab
I scheduled the job like this:
*/15 * * * * webdaemon /bin/bash /var/www/vtigercrm6/cron/vtigercron.sh
Are you getting any errors in /var/log/syslog or /var/log/messages, or whichever system log your OS uses?
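A quick way to see whether cron is even firing the job (the log file name varies by distribution):

# Look for cron's own log lines about the vtiger job
grep -i 'vtigercron' /var/log/syslog /var/log/cron 2>/dev/null | tail -n 20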
This tutorial actually works: https://www.easycron.com/cron-job-tutorials/how-to-set-up-cron-job-for-vtiger-crm
A simpler way:
In the file vtigercron.php, change the line
if(vtigercron_detect_run_in_cli() || (isset($_SESSION["authenticated_user_id"]) && isset($_SESSION["app_unique_key"]) && $_SESSION["app_unique_key"] == $application_unique_key)){
to
if(vtigercron_detect_run_in_cli() || ($_REQUEST["app_unique_key"] == $application_unique_key) || (isset($_SESSION["authenticated_user_id"]) && isset($_SESSION["app_unique_key"]) && $_SESSION["app_unique_key"] == $application_unique_key)){
and then use
http://www.example.com/vtigercron.php?app_unique_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
as cron job URL.
IMPORTANT NOTICE: You may find your app_unique_key in config.inc.php (look for $application_unique_key in it).
Please replace xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx in the URL above with the 32-chars $application_unique_key you find in config.inc.php, and www.example.com with your vtiger install location.
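If you prefer the server's own crontab over an external cron service, a hedged sketch of the entry might look like this, reusing the placeholder host and key from above:

# Call vtigercron.php over HTTP every 15 minutes
*/15 * * * * curl -s "http://www.example.com/vtigercron.php?app_unique_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" >/dev/null 2>&1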

fwrite, fopen, and cronjob - not saving to proper location

I wrote a PHP script to get my latest Twitter tweet and save it to a file. Here is the part that does so:
// No errors exist. Write tweets to json/txt file.
$file = $twitteruser."-tweets.txt";
$fh = fopen($file, 'w') or die("can't open file");
fwrite($fh, json_encode($tweets));
fclose($fh);
This works fine when I run my PHP script directly in the browser; however, when I run the file from a cron job it creates the file in my user's root directory (obviously not the correct place).
If I change the above line to :
$file = "public_html/get-tweets/".$twitteruser."-tweets.txt";
the cron job now works and saves to the correct location, but then manually running the file in my browser gives an fopen error that the file does not exist.
What the heck is the problem? I need this to work both from the cron job and manually.
Use a full path from the root of the filesystem, then both should be fine.
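Alternatively, the crontab entry can change into the script's directory first, so the relative filename resolves the same way in both cases; a sketch with hypothetical paths and script name:

# Run the script from its own directory so "$twitteruser-tweets.txt" lands next to it
*/10 * * * * cd /home/USER/public_html/get-tweets && /usr/bin/php get-tweets.php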

Need help - Getting an error: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)

Checking if anybody else has had a similar issue.
Code in the shell script:
## Convert file into Unix format first.
## THIS is IMPORTANT.
#####################
dos2unix "${file}" "${file}";
#####################
## Actual DB Change
db_change_run_op="$(ssh -qn ${db_ssh_user}@${dbserver} "sqlplus $dbuser/${pswd}@${dbname} <<ENDSQL
@${file}
ENDSQL
")";
Summary:
1. From a shell script (on a SunOS source server) I'm running a sqlplus session via ssh on a target machine to run a .sql script.
2. The output of this target ssh session (running sqlplus) is stored in a variable within the shell script. Variable name: db_change_run_op (as shown above in the code snippet).
3. For most of the .sql scripts (whose path the variable "${file}" holds), the shell script runs fine and returns the output of the .sql file (run on the target server via ssh from the source server), provided the .sql file contains something that doesn't take much time to complete and generates only a reasonable amount of output lines.
For example, if the .sql I want to run does something like the following, it runs fine:
select * from database123;
update table....
alter table..
insert ....
...some procedure .... which doesn't take much time to create....
...some more sql commands which complete..within few minutes to an hour....
4. Now, the issue I'm facing is:
Let's assume I have a .sql file where a single select command on a table returns a couple of hundred thousand up to 1-5 million lines, i.e.
select * from database321;
and assume the above triggers the condition described in bullet 4.
In this case, I'm getting the following error message thrown by the shell script (running on the source server).
Error:
*./db_change_load.sh: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)*
My questions:
1. Did the .sql script complete? I assume yes. But how can I get the output LOG file of the .sql script generated directly on the target server? If this can be done, I won't need the variable to hold the output of the whole ssh sqlplus session and then create a log file on the source server via [ echo "${db_change_run_op}" > sql.${file}.log ].
2. I assume the error occurs because the output (number of lines) generated by the ssh session, i.e. by sqlplus, is so big that it exceeds the Unix/Linux Bash variable's limit, hence the xrealloc error.
Please advise on the above two questions if you have any experience, or suggest how I can solve this.
I assume I'll try using " | tee /path/on.target.ssh.server/sql.${file}.log " right after << ENDSQL or after the final closing ENDSQL (heredoc keyword); I'm wondering whether that would work or not.
OK, got it working. No more storing stuff in a variable and then echoing $var to a file.
Luckily, I had the same mount point on both the source and target servers, i.e. if I go to /scm on the source and on the target, the mount (df -kvh .) shows the same Share/NAS mount value.
Filesystem size used avail capacity Mounted on
ServerNAS02:/vol/vol1/scm 700G 560G 140G 81% /scm
Now, instead of using the variable to store the whole output of the ssh session calling sqlplus, all I did was create a file on the remote server using the following code.
## Actual DB Change
#db_change_run_op="$(ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
#set echo off
#set echo on
#set timing on
#set time on
#set serveroutput on size unlimited
#@${file}
#ENDSQL
#")";
ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
It seems like "unlimited" doesn't work in 11g, so I had to use the 1000000 value (these small SQL commands help to show each command with its output, show the clock time for each output line, etc.).
But basically, in the above code, I'm calling the ssh command directly instead of the variable="$(.....)" way, and right after the <<ENDSQL heredoc the sqlplus output is piped through tee into a log file.
Even if I didn't have the same mount, I could have tee'd the output to a file on a remote server path (not reachable from the source server), but at least I could see up to what point the .sql command completed or how much output it generated, since the output now goes directly to a file on the remote server, and Unix/Linux doesn't care much about the file size until there's no space left.
