I am trying to write a script so I can use the 'qsub' command to submit a job to the cluster.
Essentially, once I log into the cluster, I go to the directory with my files and do these steps:
export PATH=$PATH:$HOME/program/bin
Then,
program > run.log&
Is there any way to make this into a script so I am able to submit the job to the queue?
Thanks!
Putting the lines into a bash script and then running qsub myscript.sh should do it.
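For example, a minimal myscript.sh might look like this (a sketch assuming a PBS/Torque-style scheduler; the #PBS directives are optional and the job name is made up):

#!/bin/bash
#PBS -N myrun                      # job name (example value)
cd "$PBS_O_WORKDIR"                # start in the directory qsub was called from
export PATH=$PATH:$HOME/program/bin
program > run.log

Note that under qsub you would normally drop the trailing &: the scheduler already runs the script as a batch job, and backgrounding the last command can make the job appear to finish immediately.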
Related: I have seen similar questions, but not exactly the same as mine (e.g. "Use Bash variable within SLURM sbatch script"), because I am not asking about Slurm parameters.
I want to launch a Slurm job for each of my sample files; imagine I have 3 VCFs and I want to run a job for each of them.
I created a script that loops over a file of sample IDs and calls another script for each sample, and it would work perfectly if I ran it directly with bash:
while read -r line
do
    sampleID="$line"
    myscript.sh "$sampleID"
done < sampleIDs.txt   # one sample ID per line (file name is illustrative)
The problem is that I need to run the script with Slurm, so is there any way to tell Slurm which bash variable it should pass along?
I was trying this, but it is not working:
sbatch myscript.sh --export=$sampleID
Okay, I've solved it:
sbatch --export=sampleID=$sampleID myscript.sh
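Putting it together, the loop would look roughly like this (the file name sampleIDs.txt is just an example):

while read -r sampleID
do
    sbatch --export=sampleID="$sampleID" myscript.sh
done < sampleIDs.txt

Inside myscript.sh the value is then read from the environment as $sampleID rather than as a positional argument ($1). One caveat: listing variables explicitly overrides sbatch's default --export=ALL, so if the job also needs the rest of your environment you can write --export=ALL,sampleID=$sampleID.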
How do I execute a shell script on top of Spark?
The script is below.
#!/bin/bash
# load data into local path: $source -> $dest
# (pseudocode in the original; a concrete equivalent might be copying from HDFS)
hdfs dfs -get "$source" "$dest"
If you are using Oozie, then build a two-step workflow:
a shell action
the Spark job.
In the shell action, execute the expected command.
Otherwise you can do the same thing with the HDFS client API in your Java code before submitting the Spark job (i.e. before creating the SparkContext).
See this post for a detailed walkthrough: http://bytepadding.com/big-data/spark/how-to-submit-spark-job-through-oozie/
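Outside Oozie, the same two steps can also be chained in a plain wrapper script; here is a rough sketch (the jar name, class name, and paths are placeholders):

#!/bin/bash
set -e
# step 1: the shell part - stage the data (same placeholder command as above)
hdfs dfs -get "$source" "$dest"
# step 2: submit the Spark job
spark-submit --master yarn --class com.example.MyJob myjob.jar "$dest"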
I am trying to run a shell script from cron at reboot. The script, named _startTest.sh, is located in /var/gee. This is the command I have used in cron:
cd /var/gee && ./_startTest.sh
The script doesn't run. Any ideas what I am doing wrong here?
Thanks for your responses. I found the answer to the problem. The job was set to run at Reboot. I was testing it by executing it via the scheduler. I erroneously thought this would start the job at that moment.
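For reference, a crontab entry that runs the script once at boot looks like this (the log redirection is optional and its path is just an example):

@reboot cd /var/gee && ./_startTest.sh >> /tmp/startTest.log 2>&1

To test it without rebooting, run the same cd /var/gee && ./_startTest.sh line from a shell.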
I am using Linux CentOS to schedule a job.
I have created a shell script file called Im_daily_loads.sh to run the job at 12:42 PM every day, with the following commands:
#!/bin/sh
42 12 * * * cd $pdi; ./kitchen.sh -file="/opt/kff/software/pdi/5.0.1.A/data-integration/projects/IML/code/stg/IML_Load_Frm_SRC_To_PSA.kjb" -level=Basic -logfile="/opt/kff/software/pdi/5.0.1.A/data-integration/projects/IML/log/iml_daily_loads.err.log"
Then I loaded the file into crontab by issuing the command crontab Im_daily_loads.sh, but my job is not running.
What could be the problem?
Why not just use
crontab -e
as the user you plan to execute the job as, enter the job, save and exit the editor?
Also, it looks like you need to define $pdi in your script. How is crontab supposed to know where your script is located?
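For example, the entry in crontab -e could spell the path out instead of relying on the undefined $pdi (this assumes $pdi was meant to point at the data-integration directory shown in the question):

42 12 * * * cd /opt/kff/software/pdi/5.0.1.A/data-integration && ./kitchen.sh -file="/opt/kff/software/pdi/5.0.1.A/data-integration/projects/IML/code/stg/IML_Load_Frm_SRC_To_PSA.kjb" -level=Basic -logfile="/opt/kff/software/pdi/5.0.1.A/data-integration/projects/IML/log/iml_daily_loads.err.log"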
First, run a very simple job to be sure crontab works at all.
For example:
set > /tmp/crontab_works.log 2>&1
It will dump all environment variables, so you will see that not all of the variables from your login shell are available under cron.
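A throwaway entry like this (added with crontab -e) runs the check every minute so you can compare cron's environment with your login shell:

* * * * * set > /tmp/crontab_works.log 2>&1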
I'm struggling to debug a cron job which isn't working correctly. The cron job calls a shell script which should unrar a RAR file; this works correctly when I run the script manually, but for some reason it's not working via cron. I am using the absolute file path and have verified that the path is correct. Has anyone got any ideas why this could be happening?
Well, you already said that you have used absolute paths, so the number one problem is dealt with.
Next to check are permissions. Which user is the cron job run as? Does it have all the permissions necessary?
Then, a little trick: when a shell script fails and it isn't run in a terminal, I like to redirect its output to a file. Right at the start of the script, add:
exec &>/tmp/my.log
This will redirect STDOUT and STDERR to /tmp/my.log. Then it might be a good idea to also add the line:
set -x
This will make bash print which command it's about to execute, and at what nesting level.
Happy debugging!
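Put together, the top of the failing script might look like this (the log path and archive paths are placeholders):

#!/bin/bash
exec &>/tmp/my.log      # send all stdout/stderr to a log file
set -x                  # trace each command as it runs
unrar x /absolute/path/to/archive.rar /absolute/path/to/destination/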
The first thing to check when a cron job fails is whether the full environment is available to the script you are trying to execute. A job executed via cron runs as a detached process, meaning it is not associated with a login environment. So whenever you debug a cron job that works when you run it manually, make sure the same environment is available to the cron job as is available to you interactively. This includes PATH and any other environment variables the script may depend on.
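A common fix is to set the environment explicitly at the top of the script instead of relying on a login environment, for example:

#!/bin/bash
# cron provides only a minimal environment, so define PATH (and anything
# else the script needs) explicitly; adjust the directories to your system
export PATH=/usr/local/bin:/usr/bin:/bin
# or, if the script depends on your profile, source it:
# . "$HOME/.profile"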
For me, the problem was a different shell interpreter in crontab.
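Cron typically runs jobs with /bin/sh; if the script relies on bash features, you can set the interpreter at the top of the crontab (the schedule and script path below are placeholders):

SHELL=/bin/bash
0 3 * * * /path/to/myscript.sh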