Pausing a shell script till the previous command finishes - Linux

I have a shell script with a qsub command followed by other commands that depend on the output of the qsub job. The script looks like:
qsub <my_jobs>
cat <some_files>
grep <some_pattern>
I want qsub to finish running before the script moves on to the cat command. I put "wait" after the qsub, but it did not work.

Add "-sync y" to the qsub command line.
From the manual at http://gridscheduler.sourceforge.net/htmlman/htmlman1/qsub.html:
-sync y[es]|n[o]
-sync y causes qsub to wait for the job to complete
before exiting. If the job completes successfully,
qsub's exit code will be that of the completed job. If
the job fails to complete successfully, qsub will print
out an error message indicating why the job failed and
will have an exit code of 1. If qsub is interrupted,
e.g. with CTRL-C, before the job completes, the job
will be canceled.
With the -sync n option, qsub will exit with an exit
code of 0 as soon as the job is submitted successfully.
-sync n is default for qsub.
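Applied to the script in the question, with the same placeholders, that gives:
qsub -sync y <my_jobs>
cat <some_files>
grep <some_pattern>
qsub now blocks until the job completes (and its exit code reflects the job's), so the cat and grep lines only run afterwards.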

Related

Check sbatch script of running job

When running a Slurm job from an sbatch script, is there a command that lets me see what was in the sbatch script I used to start this job?
For example sacct tells me I'm on SLURM_JOB_ID.3 and I would like to see how many job steps there will be in total.
I'm looking for a command that takes the job id and prints the sbatch script it is running.
You can use
scontrol write batch_script 12345
The above will write the submission script for the job with id 12345 to a file.
More info: https://slurm.schedmd.com/scontrol.html#OPT_write-batch_script
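For example, with 12345 standing in for an actual job id (per the linked scontrol documentation, the script goes to slurm-12345.sh by default, and a filename of - writes it to stdout instead):
scontrol write batch_script 12345
scontrol write batch_script 12345 -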

Add squeue to a Slurm job script

Normally I can check the job status by executing
squeue -u [userid]
at the command line after submitting the job. Is there a way to add this line in the job submission script so that the job status automatically shows up once the job is submitted?
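One caveat, sketched here under the assumption that job.sh is your submission script: a squeue call placed inside the batch script only runs once the job starts executing on a compute node, not at submission time. A minimal way to show the status right after submission is to wrap the sbatch call instead:
sbatch job.sh
squeue -u "$USER"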

Run a cron job now

I have a shell script called import.sh. This script will be used only once and will run for at least 2 days.
I am able to schedule a cronjob like below.
02 10 25 7 * while IFS=',' read a;do /home/$USER/import.sh $a;done < /home/$USER/input/xaa
import.sh is the shell script.
xaa is the file that contains arguments.
Now I want to run this script right away.
I have tried ./import.sh xaa and sh -x import.sh xaa, but if I run them in a terminal I have to leave the terminal open for as long as the script runs, which might be more than 2 days.
How can I run the job now and have it terminate as soon as it finishes?
On the Linux command line, prefixing a command with nohup prevents the command from being aborted if you log out or close the terminal.
So you can do something like below.
nohup ./import.sh xaa
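nohup by itself still runs the command in the foreground; adding & puts it in the background so the terminal can be closed right away. Output that would otherwise land in nohup.out can be redirected to a file of your choice (import.log below is just an example name):
nohup ./import.sh xaa > import.log 2>&1 &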

Disabling Hanging Script

When launching a bash script in Linux, the script completes successfully, yet the terminal hangs. I always have to press CTRL+C to end the program. I can type in the terminal and press enter, but the script does not respond.
I cannot change the script files, but can I launch the script so that it does not wait for user input? Any troubleshooting tips to disable this behaviour?
You can execute the script with & at the end; this gives control back to the shell by running the script as a background process:
./script.sh &
If you want to stop the script, you need to get its process id and then kill it. To get the process id, either run ps aux | grep script (where script is your script's name), or run echo $! right after launching the script. Once you have the process id, you can kill the process with kill 1234, where 1234 is the process id.
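For example, capturing the process id with $! at launch time avoids having to grep for it later:
./script.sh &    # run in the background
pid=$!           # $! holds the PID of the last background job
kill "$pid"      # stop it later if needed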
If the execution time of the script can be estimated, you can kill it automatically after a certain amount of time:
bash -c '(sleep 5m; kill $$ 2> /dev/null) & exec script' &
In this command, sleep 5m is the delay after which the process is killed, and script is the name of your script (or command).
For example, if the script takes 30 seconds on average, you can set the timeout to a minute or two to give it some headroom in case a run is slower than usual. Note that this command doesn't guarantee that the script finished its execution, so use it with care.
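Where GNU coreutils is available, the timeout utility expresses the same idea more directly (an alternative to the command above, not part of the original answer):
timeout 5m ./script.sh &    # SIGTERM is sent to script.sh after 5 minutes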

Can I delete a shell script after it has been submitted using qsub without affecting the job?

I want to submit a bunch of jobs using qsub; the jobs are all very similar. I have a script with a loop that, on each iteration, rewrites a file tmpjob.sh and then runs qsub tmpjob.sh. Before a job has had a chance to run, tmpjob.sh may already have been overwritten by the next iteration of the loop. Is another copy of tmpjob.sh stored while the job is waiting to run, or do I need to be careful not to change tmpjob.sh before the job has begun?
Assuming you're talking about Torque, then yes: Torque reads in the script at submission time. In fact, the submission script need never exist as a file at all; as the Torque documentation's own example shows, you can pipe commands into qsub (from the docs: cat pbs.cmd | qsub).
But several other batch systems (SGE/OGE, PBS Pro) also use qsub as the queue submission command, so you'll have to tell us which queuing system you're using to be sure.
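For instance, either of these submits the contents of pbs.cmd without a script argument ever being passed to qsub (the second form is equivalent and skips the extra cat):
cat pbs.cmd | qsub
qsub < pbs.cmd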
Yes. You can even create jobs and sub-jobs with here documents. Below is an example from a test I was doing with a script initiated by a cron job:
#!/usr/bin/env bash
printenv
# submit the outer job; \$PBS_O_WORKDIR is escaped so it expands
# inside the job rather than at submission time
qsub -N testCron -l nodes=1:vortex:compute -l walltime=1:00:00 <<QSUB
cd \$PBS_O_WORKDIR
printenv
# the job submits a nested sub-job via a second here document
qsub -N testsubCron -l nodes=1:vortex:compute -l walltime=1:00:00 <<QSUBNEST
cd \$PBS_O_WORKDIR
pwd
date -Isec
QSUBNEST
QSUB