Reordering items in a PBS queue

I've submitted several jobs to PBS, and now I want the job I submitted last to run first.
One option is to hold all the previous jobs (using qhold). The problem is that I used the -W depend=afterok: switch so that each follow-up job starts only after the previous job has finished.
As a result, my PBS queue looks something like this:
468743.server username queue_name job1 4828 6 36 46gb 24:00 R 16:12
468744.server username queue_name job1_cont -- 6 36 46gb 24:00 H --
468745.server username queue_name job1_cont -- 6 36 46gb 24:00 H --
468746.server username queue_name job1_cont -- 6 36 46gb 24:00 H --
468747.server username queue_name job1_cont -- 6 36 46gb 24:00 H --
468748.server username queue_name job1_cont -- 6 36 46gb 24:00 H --
468743.server username queue_name job2 4828 6 36 46gb 24:00 R 16:12
468744.server username queue_name job2_cont -- 6 36 46gb 24:00 H --
468745.server username queue_name job2_cont -- 6 36 46gb 24:00 H --
468746.server username queue_name job2_cont -- 6 36 46gb 24:00 H --
468747.server username queue_name job2_cont -- 6 36 46gb 24:00 H --
468748.server username queue_name job2_cont -- 6 36 46gb 24:00 H --
468753.server username queue_name NewJob -- 6 36 46gb 24:00 H --
468754.server username queue_name NewJob_cont -- 6 36 46gb 24:00 H --
468755.server username queue_name NewJob_cont -- 6 36 46gb 24:00 H --
Now I want NewJob, which is last in line, to run after the first of {job1, job2} finishes, and before any of the "_cont" jobs. I also want the NewJob_cont jobs to run after NewJob.
Can I change NewJob's position in the queue without destroying the rest of the hold/dependency hierarchy?

You can use qalter to change the dependencies of jobs that are already queued. You can execute:
qalter 468744 -W depend=after:468753
qalter 468753 -W depend=after:468743
This makes 468744 wait until after the new job, and the new job wait until after the first job. Just as you can add after dependencies to queued jobs, you can also add the other kinds of dependencies.
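A variation on the same idea, using afterok (which waits for the dependency to finish successfully, whereas after only waits for it to start) and then checking the result; this is a sketch that assumes a Torque/PBS Pro-style qalter and qstat, with the job IDs taken from the listing above:
# NewJob should wait for the first running job (468743) to finish successfully
qalter 468753 -W depend=afterok:468743
# the first held continuation job should wait for NewJob instead
qalter 468744 -W depend=afterok:468753
# inspect the resulting dependency attributes
qstat -f 468753 | grep -i depend
qstat -f 468744 | grep -i depend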

Related

Efficient Reading of Input File

For a current task I am working with input files that describe matrix test cases (matrix multiplication). An example of an input file:
N M
1 3 5 ... 6 (M columns)
....
5 4 2 ... 1 (N rows)
Until now I have been using a simple read() to access them, but this is not efficient for large files (size > 10^2).
So I wanted to know: is there some way to use processes to do this in parallel?
I was also thinking of using multiple IO readers split by lines, so that each process could read a different segment of the file, but I couldn't find any helpful resources.
Thank you.
PS: The current code uses:
io:fread(IoDev, "", "~d")
Did you consider using the re module? I have not done a performance test, but it may be efficient. In the following example I do not use the first "M N" line, so I did not put it in the matrix.txt file.
matrix.txt:
1 2 3 4 5 6 7 8 9
11 12 13 14 15 16 17 18 19
21 22 23 24 25 26 27 28 29
31 32 33 34 35 36 37 38 39
I did the conversion in the shell:
1> {ok,B} = file:read_file("matrix.txt"). % read the complete file and store it in a binary
{ok,<<"1 2 3 4 5 6 7 8 9\r\n11 12 13 14 15 16 17 18 19\r\n21 22 23 24 25 26 27 28 29\r\n31 32 33 34 35 36 37 38 39">>}
2> {ok,ML} = re:compile("[\r\n]+"). % to split the complete binary into a list of binaries, one per line
{ok,{re_pattern,0,0,0,
<<69,82,67,80,105,0,0,0,0,0,0,0,1,8,0,0,255,255,255,255,
255,255,...>>}}
3> {ok,MN} = re:compile("[ ]+"). % to split a line into binaries, one per integer
{ok,{re_pattern,0,0,0,
<<69,82,67,80,73,0,0,0,0,0,0,0,17,0,0,0,255,255,255,255,
255,255,...>>}}
4> % a function to split a line and convert each chunk into an integer
4> F = fun(Line) -> Nums = re:split(Line,MN), [binary_to_integer(N) || N <- Nums] end.
#Fun<erl_eval.7.126501267>
5> Lines = re:split(B,ML). % split the file into lines
[<<"1 2 3 4 5 6 7 8 9">>,<<"11 12 13 14 15 16 17 18 19">>,
<<"21 22 23 24 25 26 27 28 29">>,
<<"31 32 33 34 35 36 37 38 39">>]
6> lists:map(F,Lines). % map the function over the lines
[[1,2,3,4,5,6,7,8,9],
[11,12,13,14,15,16,17,18,19],
[21,22,23,24,25,26,27,28,29],
[31,32,33,34,35,36,37,38,39]]
7>
If you want to check the matrix size (assuming the first "N M" line is kept in the file), you can replace the last command with:
[[NbRows,NbCols]|Matrix] = lists:map(F,Lines),
case (length(Matrix) == NbRows) andalso
     lists:foldl(fun(X,Acc) -> Acc andalso (length(X) == NbCols) end, true, Matrix) of
    true -> {ok,Matrix};
    _ -> {error_size,Matrix}
end.
is there some way to use processes to do this in parallel.
Of course.
Also I was thinking of using multiple IO readers based on line, so
then each process could read different segments of the file but
couldn't find any helpful resources.
You don't seek to positions in a file by line; rather, you seek to byte positions. While a file may look like a bunch of lines, it is actually just one long sequence of bytes, so you will need to figure out which byte positions you want to seek to.
Check out file:position and file:pread.
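A minimal sketch of that idea (my own illustration; the module and function names are invented for the example): each worker process opens the file itself and preads one byte range. A real version would still need to realign the chunk boundaries to line breaks before parsing the numbers.
-module(par_read).
-export([read_chunks/2]).

%% Read Path in NWorkers roughly equal byte ranges, one process per range.
read_chunks(Path, NWorkers) ->
    Size = filelib:file_size(Path),
    ChunkSize = (Size + NWorkers - 1) div NWorkers,
    Parent = self(),
    Pids = [spawn_link(fun() ->
                {ok, Fd} = file:open(Path, [read, binary, raw]),
                Data = case file:pread(Fd, I * ChunkSize, ChunkSize) of
                           {ok, Bin} -> Bin;
                           eof -> <<>>
                       end,
                ok = file:close(Fd),
                Parent ! {self(), Data}
            end) || I <- lists:seq(0, NWorkers - 1)],
    %% collect the chunks back in submission order
    [receive {Pid, Chunk} -> Chunk end || Pid <- Pids].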

Using awk and sorting

I have a file that contains names and numbers like so:
students.txt:
Student A F 40 50 60
Student B F 50 60 70
Student C M 60 70 80
Student D M 100 90 90
Student E F 80 90 100
Student F M 20 30 40
Student G M 30 40 50
I want to sort these names using awk, and sort by the last number on a line.
When I try
sort -k6 students.txt | awk '{print}'
the output I get is:
... 100
... 40
... 50
... 60
... 70
... 80
... 90
So the output is mostly sorted, except for the first line. Why does 100 appear at the start of the output rather than at the end?
You need to use numeric sort, via the -n flag. From the sort(1) man page:
-n, --numeric-sort
compare according to string numerical value
Result:
$ sort -n -k6 students.txt
Student F M 20 30 40
Student G M 30 40 50
Student A F 40 50 60
Student B F 50 60 70
Student C M 60 70 80
Student D M 100 90 90
Student E F 80 90 100
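One refinement worth knowing: -k6 makes the sort key run from field 6 to the end of the line, while -k6,6 restricts it to field 6 alone, and the n modifier can be attached to the key itself. With a single trailing numeric field both forms give the same order here, but limiting the key is the safer habit:
$ sort -k6,6n students.txt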

How to set an arbitrary seed for the --random-sort option of Linux sort?

The sort man page says you can set a random source like this:
$ sort some.txt --random-sort --random-source=/dev/urandom
I want to use the standard output of a command as the source, like this:
$ sort some.txt --random-sort --random-source=`date +"%m%d%H%M"`
But this only says:
open failed: 11021103: No such file or directory
How can I do this?
Here's a simple python script that takes a seed and outputs random bytes:
> cat rand_bits.py
import random
import sys

# Seed the generator from the command-line argument, or use a fixed default seed.
if len(sys.argv) > 1:
    rng = random.Random(int(sys.argv[-1]))
else:
    rng = random.Random(0xBA5EBA11)

try:
    # Emit one pseudo-random byte at a time until the reader stops consuming.
    while True:
        sys.stdout.write(chr(rng.getrandbits(8)))
except (IOError, KeyboardInterrupt):
    pass
sys.stdout.close()
You can just feed those bytes straight into sort:
> sort <(seq 25) -R --random-source=<(python rand_bits.py 5)
8
2
4
7
10
19
17
11
3
20
14
18
1
16
25
12
5
21
24
23
22
9
15
13
6
By the way, the random source can be any file, but the file had better be long enough!
> sort <(seq 25) -R --random-source=<(date +"%m%d%H%M")
sort: /dev/fd/12: end of file
> sort <(seq 25) -R --random-source=/dev/sda1
3
13
24
5
10
16
4
17
12
18
14
2
6
15
23
21
19
11
9
1
20
25
22
8
7
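If you would rather not keep a helper script around, the GNU coreutils manual suggests deriving a repeatable byte stream from a seed string with openssl; here is a sketch adapted to this case (the get_seeded_random name is just for illustration):
get_seeded_random() {
  # Expand the seed string into an endless, repeatable stream of pseudo-random bytes.
  openssl enc -aes-256-ctr -pass pass:"$1" -nosalt </dev/zero 2>/dev/null
}
sort some.txt --random-sort --random-source=<(get_seeded_random "$(date +"%m%d%H%M")")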

Find a pattern and modify the next line without modifying the other contents in the file, preferably with Linux commands (sed, awk, etc.)

I have a file which looks like this:
# Hello, welcome to the world
# Trying to modify XXXXXX
# Some more random text
poly RANDOM LAYER{
    20 25
    18 2
    1 5
    1 2
    5 6
}
poly RANDOM LAYER{
    30 50
    14 25
    15 25
    15 26
    15 26
    15 27
}
I would like to increment the values on the line immediately after each poly RANDOM LAYER line, say add 10 to the first number (20+10=30) and 20 to the second number (25+20=45). The rest of the contents should stay the same.
This should be done for every line immediately after poly RANDOM LAYER.
The output should look like:
# Hello, welcome to the world
# Trying to modify XXXXXX
# Some more random text
poly RANDOM LAYER{
    *30 45*
    18 2
    1 5
    1 2
    5 6
}
poly RANDOM LAYER{
    *40 70*
    14 25
    15 25
    15 26
    15 26
    15 27
}
If the specific leading white space is always 4 chars:
$ awk 'f{$1="    "$1+10; $2+=20; f=0} /RANDOM/{f=1} 1' file
# Hello, welcome to the world
# Trying to modify XXXXXX
# Some more random text
poly RANDOM LAYER{
    30 45
    18 2
    1 5
    1 2
    5 6
}
poly RANDOM LAYER{
    40 70
    14 25
    15 25
    15 26
    15 26
    15 27
}
otherwise use:
$ awk 'f{fmt=$0; gsub(/[^[:space:]]+/,"%s",fmt); $0=sprintf(fmt,$1+10,$2+20); f=0} /RANDOM/{f=1} 1' file
as that will just reproduce in your output WHATEVER leading, trailing, or inter-field white space you have in your input.
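If the increments are likely to change, the same whitespace-preserving approach can take them as awk variables instead of hard-coded constants (a small variation, not part of the original answer):
$ awk -v a=10 -v b=20 'f{fmt=$0; gsub(/[^[:space:]]+/,"%s",fmt); $0=sprintf(fmt,$1+a,$2+b); f=0} /RANDOM/{f=1} 1' file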
You say (sed, awk, etc). Is perl part of etc?
perl -pe 's/(\d+)/$1+10/ge if($lastLineMatch); $lastLineMatch = m/poly RANDOM/; ' < file
Or if you want to add different values to the two numbers:
perl -pe 's/(\d+)(\D+)(\d+)/($1+10).$2.($3+20)/ge if($lastLineMatch); $lastLineMatch = m/poly RANDOM/; ' < file

Shell script cannot pass file data to shell input

cal April 2012 | cat > t | cat < t | more
Why does it show nothing? Why isn't it showing:
April 2012
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
| (an anonymous pipe) connects stdout (fd 1) of the first process to stdin (fd 0) of the second. After you redirect the output to a file, stdout points at the file, so there is nothing left to pipe. Also, cat | cat < file does not really make sense, since it connects two different sources to stdin; at least with bash, the redirection is applied later and "wins", so echo uiae | cat < somefile will output the content of somefile.
If you want to display the output of a command and, at the same time, write it to a file, use tee. It writes to the file but still writes to stdout:
cal April 2012 | tee t | more
cat t # content of the above `cal` command
Because the first cat > t sends all its output to the file t, leaving nothing for the rest of the pipeline.
If your intent is to send it to a file and through more to the terminal, just use:
cal April 2012 | tee t | more
The | cat < t construct is very strange and I'm not even sure it would work. It tries to connect two totally different things to cat's standard input, and it is certainly unnecessary.
This works for me if there's no existing file named t in the current directory, but the result is timing-dependent: all the commands in a pipeline start concurrently, so the cat reading t races with the cat writing it. I'm using bash on Ubuntu Oneiric.
$ cal April 2012 | cat > t | cat < t | more
April 2012
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
$ cal April 2012 | cat > t | cat < t | more
$ rm t
$ cal April 2012 | cat > t | cat < t | more
April 2012
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
