In-place string replacement of a file in Linux

I am trying to read a file in Linux line by line, containing data something like this:
522240227 B009CPMJ1M 20141003 20141103 1063278 1 1 6
604710621 B004NPI3OI 20141003 20141103 166431 1 1 6
1498812521 B00LFEHWJM 20141003 20141103 1044646 1 10 6
1498812521 B00D3IK0Y2 20141003 20141103 1044646 2 10 6
I then have to add 2000000000 to the first integer of each line and replace that integer. So the final file would be like:
2522240227 B009CPMJ1M 20141003 20141103 1063278 1 1 6
2604710621 B004NPI3OI 20141003 20141103 166431 1 1 6
3498812521 B00LFEHWJM 20141003 20141103 1044646 1 10 6
3498812521 B00D3IK0Y2 20141003 20141103 1044646 2 10 6
Is there a way I can do this operation on the same file, without creating any temporary file, using a shell script?

This is a job for awk:
awk '{$1+=2000000000}1' file
It returns:
2522240227 B009CPMJ1M 20141003 20141103 1063278 1 1 6
2604710621 B004NPI3OI 20141003 20141103 166431 1 1 6
3498812521 B00LFEHWJM 20141003 20141103 1044646 1 10 6
3498812521 B00D3IK0Y2 20141003 20141103 1044646 2 10 6
It is quite self-explanatory: $1+=2000000000 adds 2000000000 to the first field. Then, 1 is true, so awk performs its default action, {print $0}, that is, it prints the line.
To replace the file, redirect to a temp file and then move:
awk '{$1+=2000000000}1' file > new_file && mv new_file file
Since the result is a big number, some awks print it in e+NN scientific notation; let's fix the format with sprintf():
awk '{$1 = sprintf("%.0f", $1+2000000000)}1' file
This takes advantage of sprintf() to store a formatted value into the field, so that it is then printed properly.
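If you have GNU awk 4.1 or later, its inplace extension can rewrite the file for you (it still manages a temporary file internally, you just never see one). A minimal sketch, assuming gawk is available:
gawk -i inplace '{$1 = sprintf("%.0f", $1+2000000000)}1' file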

Related

How to extract the number after a specific word using awk?

I have several lines of text. I want to extract the number after a specific word using awk.
I tried the following code, but it does not work.
First, create the test file with vi test.text. There are 3 columns (the 3 fields are generated by some other pipeline commands using awk).
Index AllocTres CPUTotal
1 cpu=1,mem=256G 18
2 cpu=2,mem=1024M 16
3 4
4 cpu=12,gres/gpu=3 12
5 8
6 9
7 cpu=13,gres/gpu=4,gres/gpu:ret6000=2 20
8 mem=12G,gres/gpu=3,gres/gpu:1080ti=1 21
Please note there are several empty fields in this file.
What I want to achieve is to extract the number after the first gres/gpu= in each line (if no gres/gpu= occurs in the line, the default number is 0), using a pipeline like cat test.text | awk '{some_commands}', to output 4 columns:
Index AllocTres CPUTotal GPUAllocated
1 cpu=1,mem=256G 18 0
2 cpu=2,mem=1024M 16 0
3 4 0
4 cpu=12,gres/gpu=3 12 3
5 8 0
6 9 0
7 cpu=13,gres/gpu=4,gres/gpu:ret6000=2 20 4
8 mem=12G,gres/gpu=3,gres/gpu:1080ti=1 21 3
Firstly: awk does not need cat; it can read files on its own. Combining cat and awk is generally discouraged as a useless use of cat.
For this task I would use GNU AWK the following way. Let file.txt contain the AllocTres column, blank lines included for the rows where that field is empty:
cpu=1,mem=256G
cpu=2,mem=1024M

cpu=12,gres/gpu=3


cpu=13,gres/gpu=4,gres/gpu:ret6000=2
mem=12G,gres/gpu=3,gres/gpu:1080ti=1
then
awk 'BEGIN{FS="gres/gpu="}{print $2+0}' file.txt
output
0
0
0
3
0
0
4
3
Explanation: I inform GNU AWK that the field separator (FS) is gres/gpu=, then for each line I print the 2nd field increased by zero. For lines without gres/gpu=, $2 is an empty string; used in arithmetic context that is the same as zero, so zero plus zero gives zero. For lines with at least one gres/gpu=, adding zero makes GNU AWK take the longest prefix that is a legal number, thus 3 (4th line) becomes 3, 4,gres/gpu:ret6000=2 (7th line) becomes 4, and 3,gres/gpu:1080ti=1 (8th line) becomes 3.
(tested in GNU Awk 5.0.1)
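The same FS trick extends to the full 4-column output the question asks for; a sketch along those lines (my combination, assuming the header is line 1 of test.text):
awk 'BEGIN{FS="gres/gpu="} FNR==1{print $0, "GPUAllocated"; next} {print $0, $2+0}' test.text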
With your shown samples, in GNU awk you can try the following code (written and tested in GNU awk). A simple explanation: awk's match() function is used with the regex gres\/gpu=([0-9]+) (escaping the / here), with one and only one capturing group to capture all the digits coming after =. Once a match is found, the current line is printed, followed by the array arr's 1st element; the +0 prints zero in case no match is found on a line.
awk '
FNR==1{
  print $0,"GPUAllocated"
  next
}
{
  match($0,/gres\/gpu=([0-9]+)/,arr)
  print $0,arr[1]+0
}
' Input_file
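Note that the three-argument match() used above is a GNU awk extension. An equivalent for a POSIX awk (my own sketch, not from the original answer) locates the match with plain match() and extracts the digits with substr(), skipping the 9 characters of gres/gpu=:
awk 'FNR==1{print $0, "GPUAllocated"; next} {n=0; if(match($0, /gres\/gpu=[0-9]+/)) n=substr($0, RSTART+9, RLENGTH-9); print $0, n+0}' Input_file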
Using sed
$ sed '1s/$/\tGPUAllocated/;s~.*gres/gpu=\([0-9]\).*~& \t\1~;1!{\~gres/gpu=[0-9]~!s/$/ \t0/}' input_file
Index AllocTres CPUTotal GPUAllocated
1 cpu=1,mem=256G 18 0
2 cpu=2,mem=1024M 16 0
3 4 0
4 cpu=12,gres/gpu=3 12 3
5 8 0
6 9 0
7 cpu=13,gres/gpu=4,gres/gpu:ret6000=2 20 4
8 mem=12G,gres/gpu=3,gres/gpu:1080ti=1 21 3
awk '
BEGIN{FS="\t"}
NR==1{
  $(NF+1)="GPUAllocated"
}
NR>1{
  $(NF+1)=FS 0
}
/gres\/gpu=/{
  split($0, a, "=")
  gp=a[3]; gsub(/[ ,].*/, "", gp)
  $NF=FS gp
}1' test.text
Index AllocTres CPUTotal GPUAllocated
1 cpu=1,mem=256G 18 0
2 cpu=2,mem=1024M 16 0
3 4 0
4 cpu=12,gres/gpu=3 12 3
5 8 0
6 9 0
7 cpu=13,gres/gpu=4,gres/gpu:ret6000=2 20 4
8 mem=12G,gres/gpu=3,gres/gpu:1080ti=1 21 3

How to replace a number with another number in a specific column using awk

This is probably basic, but I am completely new to the command line and to awk.
I have a file like this:
1 RQ22067-0 -9
2 RQ34365-4 1
3 RQ34616-4 1
4 RQ34720-1 0
5 RQ14799-8 0
6 RQ14754-1 0
7 RQ22101-7 0
8 RQ22073-1 0
9 RQ30201-1 0
I want the 0s in column 3 to change to 1, and any occurrence of 1 or 2 to change to 2, so essentially only changing numbers in column 3. But I am not changing the -9.
1 RQ22067-0 -9
2 RQ34365-4 2
3 RQ34616-4 2
4 RQ34720-1 1
5 RQ14799-8 1
6 RQ14754-1 1
7 RQ22101-7 1
8 RQ22073-1 1
9 RQ30201-1 1
I have tried the following, but it has not worked:
awk '{gsub("0","1",$3)}1' PRS_with_minus9.pheno.txt > PRS_with_minus9_modified.pheno
awk '{gsub("1","2",$3)}1' PRS_with_minus9.pheno.txt > PRS_with_minus9_modified.pheno
Thank you.
With this code in your question:
awk '{gsub("0","1",$3)}1' PRS_with_minus9.pheno.txt > PRS_with_minus9_modified.pheno
awk '{gsub("1","2",$3)}1' PRS_with_minus9.pheno.txt > PRS_with_minus9_modified.pheno
there are two problems: you're running both commands on the same input file and writing their output to the same output file, so only the output of the 2nd script will be present in the output; and you're trying to change 0 to 1 first and THEN change 1 to 2, so the $3s that start out as 0 would end up as 2. You need to change the order of the operations.
This is what you should be doing, using your existing code:
awk '{gsub("1","2",$3); gsub("0","1",$3)}1' PRS_with_minus9.pheno.txt > PRS_with_minus9_modified.pheno
For example:
$ awk '{gsub("1","2",$3); gsub("0","1",$3)}1' file
1 RQ22067-0 -9
2 RQ34365-4 2
3 RQ34616-4 2
4 RQ34720-1 1
5 RQ14799-8 1
6 RQ14754-1 1
7 RQ22101-7 1
8 RQ22073-1 1
9 RQ30201-1 1
The gsub()s should also just be sub()s, as you only want to perform each substitution once, and you don't need to enclose the numbers in quotes, so you could just do:
awk '{sub(1,2,$3); sub(0,1,$3)}1' file
You can check the value of column 3 and then update the field value.
Check for 1 in the first rule, because if the first check were for 0, the value would be set to 1 and the next check would set it to 2, resulting in all 2s.
awk '
{
  if($3==1) $3 = 2
  if($3==0) $3 = 1
}
1' file
Output
1 RQ22067-0 -9
2 RQ34365-4 2
3 RQ34616-4 2
4 RQ34720-1 1
5 RQ14799-8 1
6 RQ14754-1 1
7 RQ22101-7 1
8 RQ22073-1 1
9 RQ30201-1 1
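One caveat that applies to the field-assignment answers here (my note, not from the original answers): assigning to $3 makes awk rebuild $0 with OFS, which is a single space by default. With this single-space-separated sample the layout is unchanged, but tab-separated input would come out space-separated unless you also set OFS="\t".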
With your shown samples, try the following code using ternary operators. A simple explanation: check if the 3rd field is 1, and if so set it to 2; else check if it is 0, and if so set it to 1; else keep it as it is; finally, print the line.
awk '{$3=$3==1?2:($3==0?1:$3)} 1' Input_file
Generic solution: adding a generic solution here, with 3 awk variables: fieldNumber, in which you mention all the field numbers you want to check; existValue, the values to match (in a condition); and newValue, the new values that should appear after replacement.
awk -v fieldNumber="3" -v existValue="1,0" -v newValue="2,1" '
BEGIN{
  num=split(fieldNumber,arr1,",")
  num1=split(existValue,arr2,",")
  num2=split(newValue,arr3,",")
  for(i=1;i<=num1;i++){
    value[arr2[i]]=arr3[i]
  }
}
{
  for(i=1;i<=num;i++){
    if($arr1[i] in value){
      $arr1[i]=value[$arr1[i]]
    }
  }
}
1
' Input_file
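One observation on this generic script (not part of the original answer): existValue and newValue are paired element by element into a single lookup map, which is then applied to every field listed in fieldNumber. So with fieldNumber="2,3" the same 1-to-2 and 0-to-1 mapping would be applied to both columns.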
This might work for you (GNU sed):
sed -E 's/\S+/\n&\n/3;h;y/01/12/;G;s/.*\n(.*)\n.*\n(.*)\n.*\n.*/\2\1/' file
Surround 3rd column by newlines.
Make a copy.
Replace all 0's by 1's and all 1's by 2's.
Append the original.
Pattern match on newlines and replace the 3rd column in the original by the 3rd column in the amended line.
Also with awk:
awk 'NR > 1 {s=$3;sub(/1/,"2",s);sub(/0/,"1",s);$3=s} 1' file
1 RQ22067-0 -9
2 RQ34365-4 2
3 RQ34616-4 2
4 RQ34720-1 1
5 RQ14799-8 1
6 RQ14754-1 1
7 RQ22101-7 1
8 RQ22073-1 1
9 RQ30201-1 1
the substitutions are made with sub() on a copy of $3 and then the copy with the changes is assigned to $3.
If you don't like the simple
sed 's/1$/2/; s/0$/1/' file
you might want to play with
sed -E 's/(.*)([01])$/echo "\1$((\2+1))"/e' file
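The e flag on the s command is a GNU sed extension: after the substitution it executes the pattern space as a shell command and replaces it with that command's output, so here each line becomes an echo whose $((...)) arithmetic increments the final digit. Neat, but it spawns a shell per line and will misbehave on lines containing shell metacharacters.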

bash cut columns to one file and save onto the end of another file

I would like to cut two columns from one file and stick them on the end of a second file. The two files have exactly the same number of lines.
file1.txt
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
file2.txt
a b c d e f g h i j
a b c d e f g h i j
a b c d e f g h i j
a b c d e f g h i j
So far I have been using
cut -f9-10 file2.txt | paste file1.txt - > file3.txt
which outputs exactly what I want
1 2 3 4 5 6 7 8 9 10 i j
1 2 3 4 5 6 7 8 9 10 i j
1 2 3 4 5 6 7 8 9 10 i j
However, I don't want to have to make a new file; I would prefer to alter file1.txt to the above. I've tried
cut -f9-10 file2.txt | paste file1.txt -
but it simply prints everything on screen. Is there a way of just adding columns 9 and 10 to the end of file1.txt?
Use sponge from moreutils! It allows you to soak up standard input and write to a file. That is, to replace a file in-place after a pipe.
cut -f9-10 file2.txt | paste file1.txt - | sponge file1.txt
Note you can also do what you are doing by using paste with a process substitution.
$ paste -d' ' file1.txt <(awk '{print $(NF-1), $NF}' file2.txt) | sponge file1.txt
$ cat file1.txt
1 2 3 4 5 6 7 8 9 10 i j
1 2 3 4 5 6 7 8 9 10 i j
1 2 3 4 5 6 7 8 9 10 i j
This joins file1.txt with the last two columns from file2.txt, using ' ' as the delimiter.
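If moreutils is not available, the usual fallback is an explicit temporary file renamed over the original; a minimal sketch:
cut -f9-10 file2.txt | paste file1.txt - > file1.tmp && mv file1.tmp file1.txt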

Extract n-th line from file in bash loop

I would like to extract the n-th lines from a file and save them to new files. For example I have index.txt:
cat index.txt
1 AAAGCGT
2 ACGAAGT
3 ACCTTGT
4 ATAATGT
5 AGGGTGT
6 AGCCAGT
7 AGTTCGT
8 AATGCAG
9 AAAGCGT
10 ACGAAGT
and output should be
cat index.1.txt:
1 AAAGCGT
2 ACGAAGT
cat index.2.txt:
3 ACCTTGT
4 ATAATGT
cat index.3.txt:
5 AGGGTGT
6 AGCCAGT
And so on. So I would like to extract the first 2 rows from the input file in a cycle and save each pair to a new file.
It doesn't give you exactly the names you want, but:
split -l 2 index.txt index.
seems like the easiest solution. It will create files with names beginning with the final argument, so you will get names like 'index.aa' and 'index.ab'.
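With GNU coreutils split you can get much closer to the requested names using numeric suffixes and an added extension; a sketch, assuming a reasonably recent GNU split:
split -l 2 -d --additional-suffix=.txt index.txt index.
This produces index.00.txt, index.01.txt, and so on (numbering starts at 00 rather than 1).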
This will work for any number of grouped lines just by changing the 2 to a 3 or whatever number you like:
$ awk 'NR%2==1{++i} {print > ("index." i ".txt")}' index.txt
$ ls index.?.txt
index.1.txt index.2.txt index.3.txt index.4.txt index.5.txt
$ tail index.?.txt
==> index.1.txt <==
1 AAAGCGT
2 ACGAAGT
==> index.2.txt <==
3 ACCTTGT
4 ATAATGT
==> index.3.txt <==
5 AGGGTGT
6 AGCCAGT
==> index.4.txt <==
7 AGTTCGT
8 AATGCAG
==> index.5.txt <==
9 AAAGCGT
10 ACGAAGT
awk '{print >"index."(x+=NR%2)".txt"}' file
This increments x every two lines, starting from 1, and then prints the line into a file with that name.
cat index.1.txt:
1 AAAGCGT
2 ACGAAGT
cat index.2.txt:
3 ACCTTGT
4 ATAATGT
cat index.3.txt:
5 AGGGTGT
6 AGCCAGT
In some awks, extra parens may be required, as shown below (as commented by Ed Morton):
awk '{print >("index."(x+=NR%2)".txt")}' file
I would say:
awk '{file=int((NR+1)/2)".txt"; print > file}' file
int((NR+1)/2) maps every line number:
1 --> 1
2 --> 1
3 --> 2
x --> int((x+1)/2)
So you get these files:
$ cat 1.txt
1 AAAGCGT
2 ACGAAGT
or
$ cat 3.txt
5 AGGGTGT
6 AGCCAGT

Records filtering

I have this kind of file file-1:
1 1 1.1552422143268792
1 2 1.1552422143268792
1 3 1.1552422143268792
1 4 1.1552422143268792
2 1 2.1906014042706916
2 2 2.1906014042706916
2 3 2.1906014042706916
2 4 2.1906014042706916
2 1 4.1906014042706916
2 2 4.1906014042706916
2 3 4.1906014042706916
2 4 4.1906014042706916
3 1 3.1876823799523781
3 2 3.1876823799523781
3 3 3.1876823799523781
3 4 3.1876823799523781
4 1 0.6213184222668061
4 2 0.6213184222668061
4 3 0.6213184222668061
4 4 0.6213184222668061
and I also have another file, file-2:
1
2
4
I would like to filter those records from file-1 in which the values of the first column are the same as in file-2, so I would like to get this output:
1 1 1.1552422143268792
1 2 1.1552422143268792
1 3 1.1552422143268792
1 4 1.1552422143268792
2 1 2.1906014042706916
2 2 2.1906014042706916
2 3 2.1906014042706916
2 4 2.1906014042706916
2 1 4.1906014042706916
2 2 4.1906014042706916
2 3 4.1906014042706916
2 4 4.1906014042706916
4 1 0.6213184222668061
4 2 0.6213184222668061
4 3 0.6213184222668061
4 4 0.6213184222668061
Can anybody help a little?
awk 'NR==FNR{f2[$1];next}$1 in f2' file-2 file-1
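Explanation (my note, not from the original answer): NR==FNR is true only while the first file argument, file-2, is being read; f2[$1] records each key in the array and next skips to the next line. Once awk moves on to file-1, the bare condition $1 in f2 prints exactly those lines whose first column was seen in file-2.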
Very simple using join:
join file-1 file-2
The files must be sorted for join to work. The sort is based on text, not numeric values, so you may need to sort into a temp file first. Something like:
sort file-2 > sorted.tmp
sort file-1 | join - sorted.tmp
You can use the -f option in grep to read patterns from a file. But first you must change the patterns so that they match the first field only. You can do this by using sed to add a ^ to the beginning and a space to the end of each pattern in file-2, and using process substitution in your command.
The complete command is:
grep -f <(sed -e "s/^/^/g" -e "s/$/ /g" file-2) file-1
This might work for you:
sed 's/.*/\/^& \/p/' file-2 | sed -nf - file-1
Here is another way to do it in awk:
awk 'NR==FNR{a[$1];next} !($1 in a){next}1' file-2 file-1
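This is the same idiom as the first awk answer, just written with a double negative: skip lines whose first column is not in the array, and print everything else via the trailing 1.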
