I want to replace specific lines in one file (File 1) with data contained in another file (File 2). For example:
File 1 (Input code):
other lines...
9 !!! Regular Expression
10 0.685682*100
11 0.004910*100
12 0.007012*100
13 0.146041*100
14 0.067827*100
15 0.019460*100
16 0.019277*100
17 0.001841*100
18 0.047950*100
other lines...
File 2 (to add new data):
1 0.36600*100
2 0.44466*100
3 0.0.046*100
4 0.15544*100
5 0.16600*100
6 0.14477*100
7 0.01927*100
8 0.00188*100
9 0.05566*100
How could I replace lines 10 to 18 of File 1 with the data contained in File 2? I tried using sed as follows:
sed '/!!! Regular Expression/r File2' File1
and I get the following:
1 !!! Regular Expression
2 0.36600*100
3 0.44466*100
4 0.0.046*100
5 0.15544*100
6 0.16600*100
7 0.14477*100
8 0.01927*100
9 0.00188*100
10 0.05566*100
11 0.685682*100
12 0.004910*100
13 0.007012*100
14 0.146041*100
15 0.067827*100
16 0.019460*100
17 0.019277*100
18 0.001841*100
19 0.047950*100
My problem is that this command inserts the lines from File 2 but does not replace the old ones. How can I replace just those lines (10 to 18) with the new data?
Thanks in advance.
replace specific lines in one file from 10 to 18 with the data contained in File 2
Let's divide and conquer: split "replacing" into "deleting" and "inserting".
[Delete] specific lines in one file from 10 to 18
That's easy:
sed '10,18d'
would delete lines from 10 to 18.
[Insert] the data contained in File 2 [to line 10]
That's also easy:
sed '9r file2'
It appends the content of file2 after line 9, so the first line of file2 becomes the new line 10.
All together:
sed '10,18d; 9r file2'
Example:
# seq 8 | sed '3,6d; 2r '<(seq -f 2%.0f 5)
1
2
21
22
23
24
25
7
8
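Applied to the question's own setup, the same idea looks like this (a sketch assuming GNU sed; numbered stand-in data replaces the question's files):

```shell
# Build stand-in files: File1 has 20 numbered lines, File2 has 3 new lines.
seq 20 > File1
printf 'A\nB\nC\n' > File2

# Delete old lines 10-18, then read File2 in after line 9 so its
# contents become the new lines starting at line 10.
sed '10,18d; 9r File2' File1
```

Note that the two commands are independent: File2 may contain more or fewer lines than the range it replaces, since `d` removes the old range regardless of how many lines `r` brings in.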
Related
I have, let's say, two files Input1.txt and Input2.txt. Each of them is a text file containing a single line of 5 numbers separated by tabs.
For instance Input1.txt is
1 2 3 4 5
and Input2.txt is
6 7 8 9 10
The output that I desire is Output.txt :
Input1 1 2 3 4 5
Input2 6 7 8 9 10
So I want to merge the files into a table with an extra first column containing the names of the original files. Obviously I have more than 2 files (actually 1000) and I would like to do it with a for loop. You can assume that all my files are named Input*.txt, with * between 1 and 1000, and that they are all in the same directory.
I know how to do it with R, but I would like to do it with a basic line of commands in the Ubuntu shell. Is it feasible? Thanks for any help.
Assuming the line in Input1.txt, Input2.txt, etc. is terminated with a newline character, you can use
for i in Input*.txt
do
printf "%s " "$i"
cat "$i"
done > Output.txt
The result is
Input1.txt 1 2 3 4 5
Input2.txt 6 7 8 9 10
If you want to get Input1 etc. without .txt you can use
printf "%s " "${i%.txt}"
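If some of the files might lack that trailing newline (which would glue two inputs onto one output line with the cat approach), an awk sketch sidesteps the issue, since awk treats a final unterminated line like any other:

```shell
# Stand-in files; Input2.txt deliberately lacks a trailing newline.
printf '1\t2\t3\t4\t5\n' > Input1.txt
printf '6\t7\t8\t9\t10'  > Input2.txt

# FILENAME is awk's built-in holding the name of the current input file.
awk '{ print FILENAME "\t" $0 }' Input*.txt > Output.txt
cat Output.txt
```

To drop the .txt suffix here as well, substitute on a copy of FILENAME first, e.g. `awk '{ f = FILENAME; sub(/\.txt$/, "", f); print f "\t" $0 }'`.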
I have a file which I want to process in bash or Python.
The file has 4 columns but only 3 column titles:
input.txt
1STCOLUMN 2NDCOLUMN THIRDCOLUMN
input1 12 33 45
input22 10 13 9
input4 2 23 11
input4534 3 1 1
I am trying to shift the column titles to the right and add the title "INPUTS" for the first column (the input column).
Desired output (adding the column title):
Desired-output-step1.csv
INPUTS 1STCOLUMN 2NDCOLUMN THIRDCOLUMN
input1 12 33 45
input22 10 13 9
input4 2 23 11
input4534 3 1 1
I tried with sed:
sed -i '1iINPUTS, 1STCOLUMN, 2NDCOLUMN, THIRDCOLUMN' input.txt
But I would prefer not to type out the column names.
How do I just insert the new title to first column and the other column titles shift to right?
You can specify which line to modify using line numbers:
$ sed '1s/^/INPUTS /' ip.txt
INPUTS 1STCOLUMN 2NDCOLUMN THIRDCOLUMN
input1 12 33 45
input22 10 13 9
input4 2 23 11
input4534 3 1 1
Here, 1 indicates that the s command should be applied only to the 1st line.
s/^/INPUTS / inserts text at the start of that line; you'll have to adjust the spacing as needed.
Instead of counting and tweaking spaces by hand, you can let column -t do the padding and formatting job:
sed '1s/^/INPUTS /' ip.txt|column -t
This will give you:
INPUTS 1STCOLUMN 2NDCOLUMN THIRDCOLUMN
input1 12 33 45
input22 10 13 9
input4 2 23 11
input4534 3 1 1
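The same header fix can be written in awk if you prefer; NR == 1 restricts the change to the first line (a sketch reusing the answer's file name ip.txt):

```shell
# Recreate the sample input (values from the question).
printf '%s\n' '1STCOLUMN 2NDCOLUMN THIRDCOLUMN' \
              'input1 12 33 45' \
              'input22 10 13 9' > ip.txt

# Prepend "INPUTS " on line 1 only, then align the columns.
awk 'NR == 1 { $0 = "INPUTS " $0 } { print }' ip.txt | column -t
```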
I am looking for a Linux command that checks whether, for a single column, the line below has the same value as the line currently being checked, and if so outputs both lines. My file is tab-separated.
Input example:
line 1 1 var281 7
line 2 1 var100 80
line 3 1 var99 85
line 4 2 var281 90
line 5 2 var281 91
line 6 2 var300 61
line 7 3 var50 45
line 8 3 var99 14
line 9 3 var99 19
line 10 3 var670 80
Desired Output:
line 4 2 var281 90
line 5 2 var281 91
line 8 3 var99 14
line 9 3 var99 19
You could use:
sed '/^\s*$/d;s/\s[0-9][^ ]*$//g' inputfile | uniq -D -f3
Here sed deletes empty lines (^\s*$) as well as the last field of inputfile; uniq then prints the duplicated lines, ignoring the first three fields (-f3). The output is:
line 4 2 var281
line 5 2 var281
line 8 3 var99
line 9 3 var99
Note that the last field is not printed. To have it printed you could use grep:
grep "$(sed '/^\s*$/d;s/\s[0-9][^ ]*$//g' inputfile | uniq -D -f3)" inputfile
output:
line 4 2 var281 90
line 5 2 var281 91
line 8 3 var99 14
line 9 3 var99 19
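An alternative that keeps the last field in one pass is an awk sketch: it remembers the previous line, compares the variable column (field 4 in the sample) against it, and prints both lines of each adjacent run (the shown flag avoids reprinting the middle line of a run longer than two):

```shell
# Recreate the sample (space- or tab-separated both work with awk's default FS).
printf '%s\n' 'line 1 1 var281 7'  'line 2 1 var100 80' 'line 3 1 var99 85' \
              'line 4 2 var281 90' 'line 5 2 var281 91' 'line 6 2 var300 61' \
              'line 7 3 var50 45'  'line 8 3 var99 14'  'line 9 3 var99 19' \
              'line 10 3 var670 80' > inputfile

# Compare field 4 of each line with the line above; print both on a match.
awk '{
    if (NR > 1 && $4 == prev) {
        if (!shown) print prevline   # first line of the run, printed once
        print                        # current line
        shown = 1
    } else shown = 0
    prev = $4; prevline = $0
}' inputfile
```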
I have a very large text file that is difficult to open in text editors.
Lines 12 - 15 are:
1 15.9994
2 24.305
Atoms
I would like to add:
3 196 to line 14 and then have a blank line between 3 196 and Atoms like it is currently. I tried:
sed '14 a <3 196>' file.data
But it did not seem to change anything. Does anyone know how I can do this?
Normally, sed writes the result to standard output; it does not modify the file.
If you want the input file to be modified, you can use GNU sed -i:
sed -i '14 a <3 196>' file.data
Before:
[...]
9
10
11
1 15.9994
2 24.305
Atoms
16
17
[...]
After:
[...]
9
10
11
1 15.9994
2 24.305
<3 196>
Atoms
16
17
[...]
Note: If you want it after line 13 instead of 14, change 14 to 13 in your code. Similarly, if you wanted 3 196 instead of <3 196>, change <3 196> to 3 196 in your code.
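One caveat worth hedging: -i behaves differently across sed variants. A sketch of both forms (the BSD one is an assumption; test it on your platform):

```shell
# Make a small numbered stand-in for file.data.
seq 20 > file.data

# GNU sed: edit in place, appending "3 196" after line 14.
sed -i '14 a 3 196' file.data

# BSD/macOS sed would need an explicit (possibly empty) backup suffix
# and a backslash-newline after the a command:
#   sed -i '' '14a\
#   3 196' file.data

sed -n '15p' file.data   # the appended text is now line 15
```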
I have the following file
ENST001 ENST002 4 4 4 88 9 9
ENST004 3 3 3 99 8 8
ENST009 ENST010 ENST006 8 8 8 77 8 8
Basically I want to count how many times ENST* appears in each line, so the expected result is
2
1
3
Any suggestions, please?
Try this:
awk '{print gsub("ENST[0-9]+","")}' INPUTFILE
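gsub returns the number of substitutions it performed, which is why substituting with an empty string works as a per-line counter. Checked against the sample:

```shell
# Recreate the question's input.
printf '%s\n' 'ENST001 ENST002 4 4 4 88 9 9' \
              'ENST004 3 3 3 99 8 8' \
              'ENST009 ENST010 ENST006 8 8 8 77 8 8' > INPUTFILE

# gsub deletes every ENST match and returns how many it found per line.
awk '{print gsub("ENST[0-9]+","")}' INPUTFILE
# prints 2, 1, 3 on separate lines
```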