I have a shell script (named a.sh) that I submit with the sbatch command for my project, so I type a lot of:
sbatch a.sh
There are two critical lines in a.sh (the rest are irrelevant, I guess). They are
source /okyanus/progs/gaussian/g16/bsd/g16.sariyer.profile
and
g16 dtn-b3-0-0.gjf
The second one is at the end of the file and is what needs to be changed; its input file (from here on I'll call it aaa.com) must be in the same directory as a.sh for the job to be submitted.
So, in the a.sh file there is the name of an input file (let's say aaa.com) from which the data for the sbatch job is taken, and my standard routine for, say, the next 4 jobs is:
modify the a.sh file to change the name of the file addressed (say change aaa.com to aab.com), write and quit
type: sbatch a.sh (to start operation)
modify the a.sh file to change the name of the file addressed (say change aab.com to aac.com), write and quit
type: sbatch a.sh (to start operation)
modify the a.sh file to change the name of the file addressed (say change aac.com to aad.com), write and quit
type: sbatch a.sh (to start operation)
modify the a.sh file to change the name of the file addressed (say change aad.com to aae.com), write and quit
type: sbatch a.sh (to start operation)
However, I used to have a command template with sed -i that could do these 8 operations in one go. From my old notes I could recover some parts of that template.
The short version I recovered still works, executing the first two steps in one command:
sed -i 's/aaa.com/aab.com/g' a.sh ; sbatch a.sh
The above command does the first and second steps at once. I know I used a command that could execute all 8 steps at once. It was something like:
sed -i 's/aaa.com/aab.com???aab.com/aac.com???aac.com/aad.com???aad.com/aae.com/g' a.sh ; sbatch a.sh
That command could do all 8 steps at once, submitting the next 4 jobs. However, I cannot remember what should be written in the ??? parts for the command to work.
I am sure the command worked in its correct form. Any other ideas and help will be appreciated.
P.S.: the a.sh file is generated by the system. It sources a chemistry program's profile, then submits the contents of the .com (or .gjf) file to the chemistry program it runs.
Thanks!
what should be written in the ??? parts for the command to work
s, or s/<regex>/<replacement>/<flags>, is the substitute command.
The one command separator that always works in sed is the newline. Optionally, where supported, you can separate commands with a ;. So it looks like:
sed 's/regex/replacement/; s/regex2/replacement2/; : another_command ; b another_command'
It was something like:
sed -i 's/aaa.com/aab.com???aab.com/aac.com???aac.com/aad.com???aad.com/aae.com/g'
Doing:
sed 's/aaa.com/aab.com/; s/aab.com/aac.com/'
makes no sense: first aaa is replaced by aab, then aab by aac, so it's the same as:
sed 's/aaa.com/aac.com/'
You can do it from the back:
sed 's/aab.com/aac.com/; s/aaa.com/aab.com/;'
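Putting the back-to-front ordering to work on the question's four jobs: each pass over a.sh advances the filename by exactly one step (the aad-to-aae substitution is tried first, so a freshly produced name never cascades further), and a small loop submits after each rename. A sketch, with the dots escaped because an unescaped . in a regex matches any character:
for step in 1 2 3 4; do
    # one rename per pass: aaa->aab on the first pass, aab->aac on the second, ...
    sed -i 's/aad\.com/aae.com/g; s/aac\.com/aad.com/g; s/aab\.com/aac.com/g; s/aaa\.com/aab.com/g' a.sh
    # submit the job for the input file now named in a.sh
    sbatch a.sh
done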
But really, save yourself the trouble of dealing with uncertain state and create a template file:
# template.a.sh
work on UNIQUE_STRING
then iterate over the values and recreate the whole file each time. That way you do not have to care what "was" in the file and what "will be" in it; just create the file from the template:
for i in aaa.com aab.com aac.com; do
    # render the template for this input file
    sed 's/UNIQUE_STRING/'"$i"'/' template.a.sh > a.sh
    # submit the freshly generated script
    sbatch a.sh
done
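Concretely, for the question's a.sh the template is just the generated script with the input filename swapped for a placeholder (a sketch; keep any other generated lines unchanged):
# template.a.sh
source /okyanus/progs/gaussian/g16/bsd/g16.sariyer.profile
g16 UNIQUE_STRING
The loop above then regenerates a.sh and submits it once per input file, no matter what the previous contents were.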
I'm trying to store the result of this command that is written in a script
ls -l /etc|wc -l
in a variable in another file.
To summarize, I have a script with that command and when I execute it, I want the result to be stored in a variable in another file.
Can someone help me with this please?
You may try to use a temporary file (if possible).
In the first script:
ls -l /etc|wc -l > /tmp/myvar.txt
In the other file:
myvar="$(cat /tmp/myvar.txt)"
You just need to add > path/to/file at the end of your command to redirect the output to a file (this will overwrite the file's contents).
If you need different behavior, such as appending to the file, use >> instead of >.
I'm not sure I understand what you're trying to do so I'll give you two solutions.
If the command you mention is in some file script_A.sh and you want the results of that script stored in some variable $var when running some other script script_B.sh, randomir's solution is good. In script_B:
var=$(bash path/to/script_A.sh)
If what you're asking is to run script_A.sh and have it write a new line to a file, storing the result in a variable for when you run script_B.sh, I suppose you could run something like:
result=$(ls -l /etc|wc -l)
echo "var=\"$result\"" > path/to/script_B.sh
or even replace a line in a script_B.sh that already exists:
result=$(ls -l /etc|wc -l)
sed -i "s|var=SOMEPLACEHOLDER|var='$result'|" path/to/script_B.sh
If the latter is what you want, though, can you tell us more about what you're trying to accomplish? There's probably a better way than what you propose.
I'm struggling with passing a shell command. Specifically, I have written a shell file that the user will run. In it, another shell file is written based on the user inputs. This file will then be passed to a command that submits the job.
Now in this internal shell file I have a variable that gets its value from a command substitution. However, when I run the user shell script I can't get this command to be passed through in a way that the internal shell file can use it.
I can't share my work but I'll try to make an example
#User shell script
cat >test.txt <<EOF
#a bunch of lines that are not relevant
var=`grep examples input.txt`
/bin/localfoo -${var}
EOF
# pass test.txt to localfoo2
/bin/localfoo2 /test.txt
When I run the 'User Shell Script' it prints that grep can't find the file, but I don't want grep to be evaluated yet. I need it to be written, as is, so that grep is evaluated later, when test.txt is passed to /bin/localfoo2.
I've tried a number of things. I've tried double back ticks, i've tried using 'echo $(eval $var)'. But none of the methods I've found through googling have managed to pass this var in a way that will accomplish what I want.
Your help is much appreciated.
Single quotes will not help here: with an unquoted here-document delimiter (<<EOF) the shell still performs command substitution in the body, so the backticks run grep immediately and any quotes are just written out as literal characters. What works is quoting the delimiter, which turns off all expansion in the body:
#User shell script
cat >test.txt <<'EOF'
#a bunch of lines that are not relevant
var=`grep examples input.txt`
/bin/localfoo -${var}
EOF
# pass test.txt to localfoo2
/bin/localfoo2 /test.txt
With <<'EOF', the body is written to test.txt exactly as typed, so grep (and ${var}) are evaluated only later, when test.txt itself is executed, which seems to be where you want that grep to run.
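If you instead want the rest of the here-document expanded at write time and only the command substitution deferred, a variant escaping just those characters (note that ${var} must be escaped too, or it would expand, still empty, when test.txt is written):
cat >test.txt <<EOF
#a bunch of lines that are not relevant
var=\`grep examples input.txt\`
/bin/localfoo -\${var}
EOF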
I use the -N option to specify the name for the job when submitting through qsub. qsub, however, adds a numeric string after the job name, as described in the man page:
By default the file name for standard output has the form job_name.ojob_id and job_name.ojob_id.task_id for array job tasks (see -t option below).
Therefore, whenever I submit a new job with the same job name, a new suffix .ojob_id is appended to the job name and a new output file is created.
What I am trying to achieve is to have the same output file each time a job is submitted through qsub. I have to run a job several times, and I want the output of each run to overwrite the output file generated by the previous run. How can I achieve that?
See the example below:
First time command is given to run script hello_world to output in log_hello_world:
qsub -cwd -N log_hello_world hello_world.sh
It creates two output files:
log_hello_world.e7584345
log_hello_world.o7584345
Second time the same command is given: It creates two more output files
log_hello_world.e7584366
log_hello_world.o7584366
What can I do to get the output in just one file, log_hello_world?
I was able to resolve this issue by using the options -o and -e to name the log and the error files respectively. With these options, the log and the errors from the job are written to the same files every time by this command:
qsub -cwd -N job_hello_world -o log.hello_world -e error.hello_world hello_world.sh
You should append to a file with a fixed name. This is done in the code which you run: you create a file in your directory and then append the new results to it each time you run your code. So in your code (not in the qsub line), explicitly add lines which write the results to a file in your directory; in Mathematica this would be
str = OpenAppend["/home/my_name/Scratch/results.dat"];
Write[str,results];
Close[str];
Where results is a variable which contains the results of your computation. Then just run the job using qsub -N log_hello_world hello_world.sh. This will write the results to the same file every time you run the job without changing the name of the file.
(If you're looking to write both -o and -e files to the same file, you can just add -j y to the qsub after having specified the file path for the error file)
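For instance, merging both streams into the log file from the command above (a sketch; -j y tells qsub to join stderr into stdout):
qsub -cwd -N job_hello_world -o log.hello_world -j y hello_world.sh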
I need to run (in bash) a .txt file containing a bunch of commands written to it by another program, at a specific time using at. Normally I would run this with bash myfile.txt, but if I try at bash myfile.txt midnight it doesn't like it, saying:
syntax error. Last token seen: b
Garbled time
How can I sort this out?
Try this instead:
echo 'bash myfile.txt' | at midnight
at reads commands from standard input or from a specified file (the -f filename parameter), not from the command line.
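So, as an alternative to the pipe, the file of commands can be handed to at directly with -f (assuming myfile.txt contains plain shell commands, since at runs the job with /bin/sh):
at -f myfile.txt midnight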
I want to write a very simple script which takes a process name and returns the tail of the last file whose name contains the process name.
I wrote something like this:
#!/bin/sh
tail $(ls -t *"$1"*| head -1) -f
My question:
Do I need the first line?
Why isn't ls -t *"$1"*| head -1 | tail -f working?
Is there a better way to do it?
1: The first line is a so-called shebang; here is the description:
In computing, a shebang (also called a hashbang, hashpling, pound bang, or crunchbang) refers to the characters "#!" when they are the first two characters in an interpreter directive as the first line of a text file. In a Unix-like operating system, the program loader takes the presence of these two characters as an indication that the file is a script, and tries to execute that script using the interpreter specified by the rest of the first line in the file.
2: tail can't take the filename from stdin: it can take either the text itself on stdin or a filename as a parameter. See the man page for the details.
3: No better solution comes to mind, but pay attention to filenames containing spaces: your current solution does not handle them; you need to add quotes around the $() block, as shown below.
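A sketch of the same one-liner with that quoting applied (and with -f moved before the filename, the conventional place for options):
#!/bin/sh
tail -f "$(ls -t *"$1"* | head -1)"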
$1 contains the first argument passed to the script; $0 holds the name of the script process itself. If it is the script's own name you want to match, note that $0 can contain the path, so you should use:
#!/bin/sh
tail -f "$(ls -rt *"$(basename "$0")"* | head -1)"
Use ls -rt instead of ls -t if you want the oldest file first rather than the newest.
You can omit the shebang if you run the script from a shell; in that case the contents will be executed by your current shell instance. In many cases this will cause no problems, but it is still bad practice.
Following on from @theomega's answer and @Idan's question in the comments, the shebang is needed, among other things, because some UNIX / Linux systems have more than one command shell.
Each command shell has a different syntax, so the shebang provides a way to specify which shell should be used to execute the script, even if you don't specify it in your run command by typing (for example)
./myscript.sh
instead of
/bin/sh ./myscript.sh
Note that the shebang can also be used in scripts written in non-shell languages such as Perl; in that case you'd put
#!/usr/bin/perl
at the top of your script.