can't run wkhtmltopdf from a cronjob - linux

I have a command line. When I run it from PuTTY it works, but when the same command runs from a cron job (Webmin, running as root) it hangs and never finishes executing:
/usr/bin/xvfb-run -a -s "-screen 0 640x480x16" /usr/bin/wkhtmltopdf /root/input.html /root/output.pdf
update
Command line in cronjob.php
echo shell_exec('/usr/bin/xvfb-run -a -s "-screen 0 640x480x16" /usr/bin/wkhtmltopdf /root/input.html /root/output.pdf');
Command for the cron job (running as root)
php -f /var/cronjob.php
When the cron job runs from Webmin the execution never completes, but running the exact same command from PuTTY works! This is the output:
Loading page (1/2)
Printing pages (2/2)
Done
Exit with code 1 due to network error: ProtocolUnknownError
Running the command without wkhtmltopdf from both PuTTY and Webmin works:
echo shell_exec('/usr/bin/xvfb-run -a -s "-screen 0 640x480x16"');
This is the output:
xvfb-run: usage error: need a command to run
Usage: xvfb-run [OPTION ...] COMMAND
Run COMMAND (usually an X client) in a virtual X server environment.
...
As soon as wkhtmltopdf is added, the cron job never completes.
update II
This command line doesn't work from a cron job either:
xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf -h
@Grokify
echo shell_exec('0 0 * * * * xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf /var/www/tmp/test.html /var/www/tmp/output.pdf >> /var/www/tmp/pdf.log 2>> /var/www/tmp/pdf.err');
pdf.err
sh: 1: 0: not found

The cron job may not be pulling in the environment of the user and therefore doesn't know what $PATH actually contains. I've found I need to use the full path to binaries in my cron jobs:
0 2 * * * /usr/bin/php -f /var/cronjob.php
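To make this robust, a crontab can set PATH explicitly at the top in addition to using absolute paths. The sketch below is illustrative: the schedule and the exact PATH value are assumptions, not taken from the question.

```shell
# Crontab sketch: declare PATH once, then use absolute paths anyway.
# (The "0 2 * * *" schedule, i.e. daily at 02:00, is an assumption.)
PATH=/usr/local/bin:/usr/bin:/bin

0 2 * * * /usr/bin/php -f /var/cronjob.php

# To see exactly which environment cron provides, log it once:
# * * * * * env > /tmp/cron-env.txt
```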

What user is the cron job running as? It may be a permissions issue. Try using sudo to give it the rights to create the full file.
So in the cron job have:
sudo xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf input.html output.pdf
You should also consider specifying xvfb-run by its full path, such as /usr/bin/xvfb-run or /bin/xvfb-run, and when you specify input.html and output.pdf, give their full paths too, e.g. /home/user/Documents/input.html and /home/user/Documents/output.pdf.

I'd assume problems with the (reduced) environment of cron and, subsequently, with xauth.
The way I would try to make progress is:
(a) use the --error-file=/tmp/xvfb.log option of xvfb-run to see what it says, and
(b) use the --auth-file=/path/to/root_s_home/.Xauthority option of xvfb-run.
Another way around may be to install wkhtmltopdf from source to get a truly "headless" program without the need for xvfb-run.
(These are just suggestions; I don't have a chance to build a test scenario.)
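A debug invocation along those lines might look like the sketch below. The /tmp/xvfb.log and /root/.Xauthority paths are assumptions for illustration; --error-file (-e) and --auth-file (-f) are real xvfb-run options.

```shell
# Sketch: capture Xvfb's stderr and supply an explicit auth file,
# then inspect both the exit code and the X server log.
/usr/bin/xvfb-run -a -s "-screen 0 640x480x16" \
    --error-file=/tmp/xvfb.log \
    --auth-file=/root/.Xauthority \
    /usr/bin/wkhtmltopdf /root/input.html /root/output.pdf
echo "exit code: $?"
cat /tmp/xvfb.log
```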

The following PHP and crontab examples work for me as both unprivileged and privileged users. The examples redirect STDOUT to /var/www/tmp/pdf.log and STDERR to /var/www/tmp/pdf.err to capture and examine logging and error messages. They assume xvfb-run and wkhtmltopdf are in your PATH; the full paths can be hard-coded as well.
PHP
echo shell_exec('xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf /var/www/tmp/test.html /var/www/tmp/output.pdf >> /var/www/tmp/pdf.log 2>> /var/www/tmp/pdf.err');
crontab
0 0 * * * xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf /var/www/test.html /var/www/output.pdf >> /var/www/tmp/pdf.log 2>> /var/www/tmp/pdf.err
STDOUT: wkhtmltopdf Success
When run successfully, the following appears in /var/www/tmp/pdf.log, while the error file remains empty:
$ cat /var/www/tmp/pdf.log
Loading page (1/2)
Printing pages (2/2)
Done
STDOUT: wkhtmltopdf Error
If there's a wkhtmltopdf error, it appears in STDOUT as well. The following is an example of a file-not-found error when the input file (/var/www/tmp/test.html) doesn't exist:
$ cat /var/www/tmp/pdf.log
Loading page (1/2)
Error: Failed loading page http:///var/www/tmp/test.html (sometimes it will work just to ignore this error with --ignore-load-errors)
[============================================================] 100%
Try adding the STDOUT and STDERR redirects to capture and check for error messages.
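As a quick illustration of how those redirects separate the two streams, the sketch below uses a stand-in command in place of the real xvfb-run/wkhtmltopdf invocation:

```shell
#!/bin/sh
# Append stdout to one file and stderr to another, mirroring the crontab line.
log=$(mktemp); err=$(mktemp)
# Stand-in for the real command: writes to both streams.
{ echo "Done"; echo "simulated warning" >&2; } >> "$log" 2>> "$err"
echo "pdf.log: $(cat "$log")"
echo "pdf.err: $(cat "$err")"
rm -f "$log" "$err"
```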

Related

What is the problem with cron privileges?

When I run the BASH script from the command line, it is executed.
When I try to run it as a cron task, it fails. By process of elimination I found the problem: the "which iptables" command returns an empty string. The same happens with every program I try to find in the "/sbin" directory.
Example:
# crontab -e
* * * * * /root/test.sh >> /root/test.log 2>&1
test.sh
#!/bin/bash
IPT=$(which iptables);
echo ${IPT} >> /root/test.log
But test.log contains an empty string.
Tested on Ubuntu 16.04 and Debian 8.
It's not related to privileges.
which looks for the command in $PATH. cron scripts have a limited path, that doesn't include iptables, so it's not found.
$ /usr/bin/which iptables
/sbin/iptables
$ PATH=/bin:/usr/bin /usr/bin/which iptables
$ echo $?
1
When you have a limited path, which returns an empty string (on my other machine it reports "no iptables in (/usr/bin:/bin)", so YMMV) and exits with a non-zero code.
If you do something like echo $PATH >> /root/test.log you will see that cron's PATH contains just /usr/bin and /bin.
You have to either set $PATH to contain the iptables location, or use the full path when calling iptables.
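The effect is easy to reproduce without cron by shrinking PATH for a single command (using ls here as a stand-in for iptables):

```shell
#!/bin/sh
# With the normal PATH the binary resolves; with a bare PATH it does not,
# which is exactly what happens to iptables under cron's minimal PATH.
normal=$(command -v ls)
restricted=$(PATH=/usr/nonexistent command -v ls || true)
echo "normal PATH:     '$normal'"
echo "restricted PATH: '$restricted'"
```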

Crontab not recognising command

I have a bash script which I want to run as a cron job.
It works fine except one command.
I redirected its stderr to get the error and found out that the error it shows was the command not recognized.
It is a root crontab.
Both the current user and root execute the command successfully when I type it in the terminal.
Even the script executes the command when I run it through the terminal.
Startup script:
#!/bin/bash
sudo macchanger -r enp2s0 > /dev/null
sudo /home/deadpool/.logkeys/logger.sh > /dev/null
logger.sh:
#!/bin/bash
dat="$(date)"
echo " " >> /home/deadpool/.logkeys/globallog.log
echo $dat >> /home/deadpool/.logkeys/globallog.log
echo " " >> /home/deadpool/.logkeys/globallog.log
cat /home/deadpool/.logkeys/logfile.log >> /home/deadpool/.logkeys/globallog.log
cat /dev/null > /home/deadpool/.logkeys/logfile.log
cat /dev/null > /home/deadpool/.logkeys/error.log
logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log
error.log
/home/deadpool/.logkeys/logger.sh: line 10: logkeys: command not found
Remember cron runs with a different environment than your user account or root does, and its PATH might not include the path to logkeys. You should try the absolute path for logkeys in your script (find it with which logkeys from your user). Additionally, I recommend looking at this answer on Server Fault about running scripts as if they were run from cron, for when you need to find out why something works for you interactively but not in a job.
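A defensive version of the last line of logger.sh might resolve the binary first and fall back to a hard-coded location. The /usr/local/bin/logkeys fallback path is an assumption; check the real one with which logkeys from an interactive shell.

```shell
#!/bin/sh
# Resolve logkeys to an absolute path; fall back to an assumed install
# location if cron's minimal PATH cannot find it.
LOGKEYS=$(command -v logkeys || echo /usr/local/bin/logkeys)
echo "resolved to: $LOGKEYS"
# "$LOGKEYS" --start --output /home/deadpool/.logkeys/logfile.log \
#     2> /home/deadpool/.logkeys/error.log
```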

Missing File Output When Script Command Runs as su -c command -m user

I have a script that needs to check if a lockfile exists that only root can access and then the script runs the actual test script that needs to be run as a different user.
The problem is the test script is supposed to generate xml files and those files do not exist. (aka I can't find them)
Relevant part of the script
if (mkdir ${lockdir} ) 2> /dev/null; then
echo $$ > $pidfile
trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT
if [ -f "$puppetlock" ]
then
su -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log" -m "qaUser"
lockdir is what gets created when the test is run to signify that the test process has begun.
puppetlock checks if puppet is running by looking for the lock file puppet creates.
qaUser does not have the rights to check if puppetlock exists.
start-qa-test.sh ends up calling java to execute an automated test. My test-date.log file displays what console would see if the test was run.
However the test is supposed to produce some xml files in a directory called target. Those files are missing.
In case it's relevant start-qa-test.sh is trying to run something like this
nohup=true
/usr/bin/java -cp .:/folderStuff/$jarFile:/opt/folderResources org.junit.runner.JUnitCore org.some.other.stuff.Here
Running start-qa-test.sh produces the xml output in the target folder. But running it through su -c it does not.
Edit
I figured out the answer to this issue.
I changed the line to
su - qaUser -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log"
That allowed the output to show up under /home/qaUser.
Try redirecting stdout and stderr in the line:
su -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log" -m "qaUser" 2>&1
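For reference, the timestamped log name can be built the way the fixed command does it; note the leading % in the date format and the closing parenthesis (the user and paths are from the question):

```shell
#!/bin/sh
# Build the timestamped log name used by the corrected su invocation.
stamp=$(date +%m-%d-%T)
logfile="/var/log/qaTests/test-${stamp}.log"
echo "$logfile"
# The full invocation would then be:
# su - qaUser -c "/opt/qa-scripts/start-qa-test.sh > $logfile" 2>&1
```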

Condor job - running shell script as executable

I’m trying to run a Condor job where the executable is a shell script which invokes certain Java classes.
Universe = vanilla
Executable = /script/testingNew.sh
requirements = (OpSys == "LINUX")
Output = /locfiles/myfile.out
Log = /locfiles/myfile.log
Error = /locfiles/myfile.err
when_to_transfer_output = ON_EXIT
Notification = Error
Queue
Here's the content of the /script/testingNew.sh file
(because I'm getting the error, I have removed the Java commands for now):
#!/bin/sh
inputfolder=/n/test_avp/test-modules/data/json
srcFolder=/n/test_avp/test-modules
logsFolder=/n/test_avp/test-modules/log
libFolder=/n/test_avp/test-modules/lib
confFolder=/n/test_avp/test-modules/conf
twpath=/n/test_avp/test-modules/normsrc
dataFolder=/n/test_avp/test-modules/data
scriptFolder=/n/test_avp/test-modules/script
locFolder=/n/test_avp/test-modules/locfiles
bakUpFldr=/n/test_avp/test-modules/backupCurrent
cd $inputfolder
filename=`date -u +"%Y%m%d%H%M"`.txt
echo $filename $(date -u)
mkdir $bakUpFldr/`date -u +"%Y%m%d"`
dirname=`date -u +"%Y%m%d"`
flnme=current_json_`date -u +"%Y%m%d%H%M%S"`.txt
echo DIRNameis $dirname Filenameis $flnme
cp $dataFolder/current_json.txt $bakUpFldr/`date -u +"%Y%m%d"`/current_json_$filename
cp $dataFolder/current_json.txt $filename
mkdir $inputfolder/`date -u +"%Y%m%d"`
echo Creating Directory $(date -u)
mv $filename $filename.inprocess
echo Created Inprocess file $(date -u)
Also, here’s the error log from Condor –
000 (424639.000.000) 09/09 16:08:18 Job submitted from host: <135.207.178.237:9582>
...
001 (424639.000.000) 09/09 16:08:35 Job executing on host: <135.207.179.68:9314>
...
007 (424639.000.000) 09/09 16:08:35 Shadow exception!
Error from slot1#marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
...
012 (424639.000.000) 09/09 16:08:35 Job was held.
Error from slot1#marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
Code 6 Subcode 8
...
Can anyone explain what's causing this error, and how to resolve it?
The testingNew.sh script runs fine on the Linux box if executed separately on a network machine.
Thanks a lot! - GR
The cause, in our case, was the shell script using DOS line endings instead of Unix ones.
The Linux kernel will happily try to feed the script not to /bin/sh (as you intend) but to "/bin/sh\r". (Do you see that trailing carriage-return character? Neither do I, but the Linux kernel does.) No such interpreter exists, so as a last resort the kernel tries to execute the script as a binary executable, which fails with the given error.
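You can reproduce and fix this without Condor. The sketch below writes a script with DOS line endings, then strips the carriage returns with tr (dos2unix does the same job where it is installed):

```shell
#!/bin/sh
# Create a script with DOS (CRLF) line endings, then convert it to Unix.
bad=$(mktemp); good=$(mktemp)
printf '#!/bin/sh\r\necho hello\r\n' > "$bad"
tr -d '\r' < "$bad" > "$good"    # strip every carriage return
chmod +x "$good"
"$good"                          # now runs and prints: hello
rm -f "$bad" "$good"
```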
You need to specify input as:
input = /dev/null
Source: Submitting a job to Condor

Run "screen -S name ./script" command on #reboot using crontab

I've tried adding this to my crontab:
@reboot /root/startup
The "startup" file:
#!/bin/sh
svnserve -d -r /root/svnrepos/mainres
svnserve -d -r /root/svnrepos/mapres --listen-port=3691
screen -S mta ./mtaserver/mta-server > log1
screen -S mapmta ./mapserver/mta-server > log2
exit 0
The svnserve commands run fine. The problem is with the screen commands:
both log1 and log2 contain the same message: Must be connected to a terminal.
What I'm trying to do is start the 2 executables on startup, and then later have a way to access them.
Is there a way to do this?
You want to add the following options to the 'screen' commands (e.g. before -S): -d -m
From the manpage:
-d -m   Start screen in "detached" mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts.
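With those options, the startup script would look roughly like the sketch below (paths kept from the question). One assumption to note: with -d -m, screen itself no longer writes to stdout, so the redirects are best moved inside the session, e.g. via sh -c, if you still want the servers' output in log1 and log2.

```shell
#!/bin/sh
# Start both servers in detached screen sessions at boot.
svnserve -d -r /root/svnrepos/mainres
svnserve -d -r /root/svnrepos/mapres --listen-port=3691
screen -d -m -S mta    sh -c './mtaserver/mta-server > log1 2>&1'
screen -d -m -S mapmta sh -c './mapserver/mta-server > log2 2>&1'
exit 0
```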
