Snakemake Error submitting jobscript (exit code 127): - slurm

I am trying to setup an automation system with crontab to process files using Snakemake. Here is the bash script that I used to send to the slurm.
#!/bin/bash
#SBATCH --job-name=nextstrain
snakemake --configfile config.yaml --jobs 100 --keep-going --rerun-incomplete --latency-wait 360 --cluster 'sbatch' -r -p --use-conda
This script runs as intended. However, when I run the script through crontab like so:
0 8 * * 1 /bin/bash /home/user/snakemake_automate.sh
I get the error
Error submitting jobscript (exit code 127):
I am not sure what I should do to fix this error.

Exit code 127 means "command not found". I suspect you need to load a module or activate a conda environment before invoking snakemake. When you run the script interactively it uses your current environment, but under cron your .bashrc (or similar startup file) is not sourced.
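One way to make the script self-sufficient under cron is to set up the environment explicitly at the top of snakemake_automate.sh. A sketch of such a job script, assuming snakemake lives in a conda environment under ~/miniconda3 (both the install location and the environment name are assumptions, not from the question):

```shell
#!/bin/bash
#SBATCH --job-name=nextstrain
# cron starts with a minimal environment (no ~/.bashrc), so activate conda explicitly
source "$HOME/miniconda3/etc/profile.d/conda.sh"   # assumed conda install location
conda activate snakemake-env                        # hypothetical environment name
cd /home/user/nextstrain || exit 1                  # assumed workflow directory
snakemake --configfile config.yaml --jobs 100 --keep-going \
    --rerun-incomplete --latency-wait 360 --cluster 'sbatch' -r -p --use-conda
```

Alternatively, setting a full PATH at the top of the crontab, or running the script with bash -l in the cron line, achieves the same effect.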

Related

How to automate Python Script in Cron Job without .py file extension

I want to run the command webscreenshot automatically (project found at https://pypi.org/project/webscreenshot/#description).
This command should run via cron task or systemd automatically, every 15 minutes.
Running a Linux server with python3.6, I've tried to incorporate this as a cron task but it is failing. Should I create my own Python script to automate this?
*/1 * * * * /usr/bin/python3.6 /home/user/.local/bin/webscreenshot -i /home/user/projects/webscreenshot/data.txt -o /home/user/projects/webscreenshot/screenshots > /home/user/projects/log.txt 2>&1
I expect this to run the python script webscreenshot but this is not the case, screenshots are not produced.
#Ari - the .py extension is left out by the default installation.
I was able to get this to work by adding
xvfb-run to the beginning, so the command reads:
xvfb-run /dir/to/webscreenshot -i file.txt -o /output/dir/screenshots
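Combining the fix with the schedule the question asked for (every 15 minutes is */15, not the */1 shown), the crontab entry could read as follows (paths taken from the question; the xvfb-run location may differ on your system):

```shell
# m   h  dom mon dow  command
*/15  *  *   *   *    /usr/bin/xvfb-run /home/user/.local/bin/webscreenshot -i /home/user/projects/webscreenshot/data.txt -o /home/user/projects/webscreenshot/screenshots > /home/user/projects/log.txt 2>&1
```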

Run a cron job now

I have a shell script called import.sh. This script will be used only once and will run for at least 2 days.
I am able to schedule a cronjob like below.
02 10 25 7 * while IFS=',' read a;do /home/$USER/import.sh $a;done < /home/$USER/input/xaa
import.sh is the shell script;
xaa is the file that contains the arguments.
Now I want to run this script immediately.
I have tried ./import.sh xaa and sh -x import.sh xaa, but if I run them from a terminal I have to leave the terminal open for as long as the script runs, which might be more than 2 days.
How can I schedule the job to run now and terminate as soon as it finishes?
On the Linux command line, prefixing a command with nohup prevents it from being aborted when you log out or close the terminal.
So you can do something like:
nohup ./import.sh xaa
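To combine this with the while-loop from the crontab entry and fully detach it, the whole loop can be wrapped in one nohup'd background command. A self-contained sketch, where echo stands in for the real import.sh and /tmp/xaa_demo stands in for the question's input file:

```shell
# Create a stand-in arguments file (the question's real file is /home/$USER/input/xaa)
printf 'arg1\narg2\n' > /tmp/xaa_demo

# nohup ignores the hangup signal sent on logout, the trailing & backgrounds
# the job, and stdout/stderr are captured in a log file
nohup bash -c 'while IFS="," read a; do echo "import.sh would run with: $a"; done < /tmp/xaa_demo' \
    > /tmp/import_demo.log 2>&1 &
wait    # only for this demo; in real use you would simply log out
```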

Redirect output of my java program under qsub

I am currently running multiple Java executable program using qsub.
I wrote two scripts: 1) qsub.sh, 2) run.sh
qsub.sh
#! /bin/bash
echo cd `pwd` \; "$@" | qsub
run.sh
#! /bin/bash
for param in 1 2 3
do
./qsub.sh java -jar myProgram.jar -param ${param}
done
Given the two scripts above, I submit jobs by
sh run.sh
I want to redirect the messages generated by myProgram.jar -param ${param}
So in run.sh, I replaced the 4th line with the following
./qsub.sh java -jar myProgram.jar -param ${param} > output-${param}.txt
but the message stored in output-${param}.txt is "Your job 730 ("STDIN") has been submitted", which is not what I intended.
I know that qsub has an option -o for specifying the location of output, but I cannot figure out how to use this option for my case.
Can anyone help me?
Thanks in advance.
The issue is that qsub doesn't return the output of your job, it returns the output of the qsub command itself, which is simply informing your resource manager / scheduler that you want that job to run.
You want to use the qsub -o option, but you need to remember that the output won't appear there until the job has run to completion. For Torque, you'd use qstat to check the status of your job, and all other resource managers / schedulers have similar commands.
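One way to thread the -o flag through the helper is to pass the output file as the first argument of qsub.sh. A sketch under that assumption (the -j syntax shown is Grid Engine's; Torque uses -j oe instead):

```shell
#! /bin/bash
# qsub.sh - first argument names the job's output file, the rest is the command
outfile="$1"; shift
echo cd "$(pwd)" \; "$@" | qsub -o "$outfile" -j y   # -j y merges stderr into stdout
```

and in run.sh: ./qsub.sh output-${param}.txt java -jar myProgram.jar -param ${param}. As noted above, output-${param}.txt only appears once the job has run to completion.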

Execute shell script within another script prompts: No such file or directory

(I'm new in shell script.)
I've been stuck with this issue for a while. I've tried different methods but without luck.
Description:
When my script attempts to run another script (SiebelMessageCreator.sh, which I don't own), it prompts:
-bash: ./SiebelMessageCreator.sh: No such file or directory
But the file exists and has execute permissions:
-rwxr-xr-x 1 owner ownergrp 322 Jun 11 2015 SiebelMessageCreator.sh
The code that is performing the script execution is:
(cd $ScriptPath; su -c './SiebelMessageCreator.sh' - owner; su -c 'nohup sh SiebelMessageSender.sh &' - owner;)
It's within a subshell because I first thought it was throwing that message because my script was running in my home directory. (When I run the script I'm root, and I've moved to my non-root home directory to run it, because I can't move my script [ policies ] to the directory where the other script resides.)
I've also tried with sh SCRIPT.sh and ./SCRIPT.sh, and changing the shebang from bash to ksh, because SiebelMessageCreator.sh uses that shell.
The su -c 'sh SCRIPT.sh' - owner is necessary: if the script runs as root instead of as owner it breaks something (that's what my partners told me from their experience executing it as root), so I execute it as the owner.
Another thing I found in my research is that it can throw that message if it's a symbolic link. I'm really not sure whether the script is a symbolic link. Here is its content:
#!/bin/ksh
BASEDIRROOT=/path/to/file/cpp-plwsutil-core-runtime.jar (path changed on purpose for this question)
java -classpath $BASEDIRROOT com.hp.cpp.plwsutil.SiebelMessageCreator
exitCode=$?
echo "`date -u '+%Y-%m-%d %H:%M:%S %Z'` - Script execution finished with exit code $exitCode."
exit $exitCode
As you can see, it's a very simple script that just calls a .jar. But I also can't add it to my script [ policies ].
If I run ./SiebelMessageCreator.sh manually it works just fine, but not from my script. I suppose that rules out the 32- vs 64-bit issue that I also found when I googled?
By the way, I'm automating some tasks, the ./SiebelMessageCreator.sh and nohup sh SiebelMessageSender.sh & are just the last steps.
Any ideas :( ?
Did you try:
. ./SiebelMessageCreator.sh
You can also run which sh or which ksh, then point the first line (#!/bin/ksh) at the path they report.
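A quick, self-contained way to check along those lines: verify what the shebang points at and whether that interpreter actually exists (the file names here are illustrative, not from the question):

```shell
# Write a throwaway script with the same shebang as SiebelMessageCreator.sh
printf '#!/bin/ksh\necho hello\n' > /tmp/demo.sh
chmod +x /tmp/demo.sh

head -1 /tmp/demo.sh                        # show the interpreter the kernel will exec
command -v ksh || echo "ksh not found"      # where (and whether) ksh is installed
# If the interpreter is missing or lives elsewhere, point the shebang at the real path:
# sed -i '1s|.*|#!/usr/bin/ksh|' /tmp/demo.sh
```

If the interpreter path in the shebang doesn't exist, executing the script produces exactly the misleading "No such file or directory" error from the question.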

can't run wkhtmltopdf from a cronjob

I have a command that works when run from PuTTY, but when run from a cron job (webmin) as root, the command hangs and never completes:
/usr/bin/xvfb-run -a -s "-screen 0 640x480x16" /usr/bin/wkhtmltopdf /root/input.html /root/output.pdf
update
Command line in cronjob.php
echo shell_exec('/usr/bin/xvfb-run -a -s "-screen 0 640x480x16" /usr/bin/wkhtmltopdf /root/input.html /root/output.pdf');
Command for the cron job (running as root)
php -f /var/cronjob.php
When the cron job runs from webmin the execution never completes, but when I run the exact same command from PuTTY it works! This is the output:
Loading page (1/2)
Printing pages (2/2)
Done
Exit with code 1 due to network error: ProtocolUnknownError
Running the command (without wkhtmltopdf) from both PuTTY and webmin works:
echo shell_exec('/usr/bin/xvfb-run -a -s "-screen 0 640x480x16"');
this is the output
xvfb-run: usage error: need a command to run
Usage: xvfb-run [OPTION ...] COMMAND
Run COMMAND (usually an X client) in a virtual X server environment.
...
When adding wkhtmltopdf the cronjob never completes
update II
This command line doesn't work from a cron job either:
xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf -h
# Grokify
echo shell_exec('0 0 * * * * xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf /var/www/tmp/test.html /var/www/tmp/output.pdf >> /var/www/tmp/pdf.log 2>> /var/www/tmp/pdf.err');
pdf.err
sh: 1: 0: not found
The cronjob may not be pulling in the user's environment and therefore doesn't know what $PATH actually contains. I've found I need to use the full path to binaries in my crons:
0 2 * * * /usr/bin/php -f /var/cronjob.php
What user is the cron running as? It may be a permission issue; try using sudo so it has permission to create the output file.
So in the cron have:
sudo xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf input.html output.pdf
You should consider calling xvfb-run by its full path, such as /usr/bin/xvfb-run or /bin/xvfb-run, and specifying input.html and output.pdf by their full paths too, e.g. /home/user/Documents/input.html and /home/user/Documents/output.pdf.
I'd assume problems with the (reduced) environment of cron and, subsequently, with xauth.
The way I would try to make progress is:
(a) use the --error-file=/tmp/xvfb.log option of xvfb-run to see what it says, and
(b) use the --auth-file=/path/to/root_s_home/.Xauthority option of xvfb-run.
Another way around may be to install wkhtmltopdf from source to get a truly "headless" program without the need for xvfb-run.
(These are just suggestions - I don't have chances to build a test scenario...)
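Putting (a) and (b) together on the command line from the question could look like this (both flags are standard xvfb-run options; the .Xauthority path is an assumption about root's home directory):

```shell
/usr/bin/xvfb-run -a \
    --error-file=/tmp/xvfb.log \
    --auth-file=/root/.Xauthority \
    -s "-screen 0 640x480x16" \
    /usr/bin/wkhtmltopdf /root/input.html /root/output.pdf
cat /tmp/xvfb.log   # inspect Xvfb's own diagnostics afterwards
```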
The following PHP and crontab examples work for me both as an unprivileged and a privileged user. They redirect STDOUT to /var/www/tmp/pdf.log and STDERR to /var/www/tmp/pdf.err so logging and error messages can be captured and examined. The examples assume xvfb-run and wkhtmltopdf are in your PATH; the full paths can be hard-coded as well.
PHP
echo shell_exec('xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf /var/www/tmp/test.html /var/www/tmp/output.pdf >> /var/www/tmp/pdf.log 2>> /var/www/tmp/pdf.err');
crontab
0 0 * * * xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf /var/www/test.html /var/www/output.pdf >> /var/www/tmp/pdf.log 2>> /var/www/tmp/pdf.err
STDOUT: wkhtmltopdf Success
When run, the following appears in /var/www/tmp/pdf.log, indicating success, while the error file remains empty:
$ cat /var/www/tmp/pdf.log
Loading page (1/2)
Printing pages (2/2)
Done
STDOUT: wkhtmltopdf Error
If there's a wkhtmltopdf error, it appears in STDOUT as well. The following is an example file not found error for test.html if the input file (/var/www/tmp/test.html) doesn't exist:
$ cat /var/www/tmp/pdf.log
Loading page (1/2)
Error: Failed loading page http:///var/www/tmp/test.html (sometimes it will work just to ignore this error with --ignore-load-errors)
[============================================================] 100%
Try adding the STDOUT and STDERR redirects to capture and check for error messages.
