I have a very large tab-delimited file and I would like to replace a single line in it with another. As the line has >100 columns, a simple sed 's/find/replace/' is not practical. The new line is stored in the file newline.txt.
How do I achieve:
sed 's/find/newline.txt/' infile
With GNU sed:
Find the line in file.csv which contains find, append the content of newline.txt after it (r), and delete (d) the line which contains find:
sed -e '/find/{r newline.txt' -e 'd}' file.csv
Based on GNU sed 4.2.2; this also covers the approaches from Cyrus's and Aaron's answers.
$ cat foo.txt
1 abc
2 ijk!
3 pqr
4 xyz
$ cat f1.txt
a/b/c
$ cat f2.txt
line
$ cat f3.txt
line a
line b
1) Pattern and replacement contain no characters that affect the sed command or trigger bash expansion inside double quotes
$ sed "/3/c $(< f2.txt)" foo.txt
1 abc
2 ijk!
line
4 xyz
$ sed "s/.*3.*/$(< f2.txt)/" foo.txt
1 abc
2 ijk!
line
4 xyz
$ sed -e '/3/{r f2.txt' -e 'd}' foo.txt
1 abc
2 ijk!
line
4 xyz
2) Pattern affected by bash history expansion
$ sed "/!/c $(< f2.txt)" foo.txt
bash: !/c: event not found
$ sed '/!/c '"$(< f2.txt)" foo.txt
1 abc
line
3 pqr
4 xyz
$ sed "s/.*!.*/$(< f2.txt)/" foo.txt
bash: !.*/$: event not found
$ sed 's/.*!.*/'"$(< f2.txt)/" foo.txt
1 abc
line
3 pqr
4 xyz
$ sed -e '/!/{r f2.txt' -e 'd}' foo.txt
1 abc
line
3 pqr
4 xyz
3) Replacement line (single line only) containing characters that affect sed
$ sed "/3/c $(< f1.txt)" foo.txt
1 abc
2 ijk!
a/b/c
4 xyz
$ sed "s/.*3.*/$(< f1.txt)/" foo.txt
sed: -e expression #1, char 11: unknown option to `s'
$ sed "s|.*3.*|$(< f1.txt)|" foo.txt
1 abc
2 ijk!
a/b/c
4 xyz
$ sed -e '/3/{r f1.txt' -e 'd}' foo.txt
1 abc
2 ijk!
a/b/c
4 xyz
4) Replacement with multiple lines
$ sed "/3/c $(< f3.txt)" foo.txt
sed: -e expression #1, char 14: extra characters after command
$ sed "s/.*3.*/$(< f3.txt)/" foo.txt
sed: -e expression #1, char 14: unterminated `s' command
$ sed -e '/3/{r f3.txt' -e 'd}' foo.txt
1 abc
2 ijk!
line a
line b
4 xyz
From Aaron's answer:
sed "s/^.*find.*$/$(cat newline.txt)/" infile.txt
where find is a string unique to a single line of infile.txt; that line is then replaced by the content of newline.txt.
Try this:
sed "s/find/$(< newline.txt)/" infile
(The double quotes are required; without them the shell word-splits the substituted text and sed sees a broken expression. This form still fails if the replacement contains /, & or newline characters.)
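A minimal runnable sketch of the whole-line variant, with invented sample data (only the file names come from the question):

```shell
# Invented stand-ins for the question's files.
printf 'new1\tnew2\tnew3\n' > newline.txt
printf 'a\tb\tc\nfind\tx\ty\nd\te\tf\n' > infile

# Double quotes let the shell expand the command substitution before sed
# runs; anchoring with ^.*find.*$ replaces the entire row, not just the
# word "find". This still breaks if newline.txt contains /, & or \.
sed "s/^.*find.*$/$(cat newline.txt)/" infile
```

The middle row comes out as the content of newline.txt while the other rows pass through untouched.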
Related
I want to replace the first two lines with a blank line as below.
Input:
sample
sample
123
234
235
456
Output:
(blank line)
123
234
235
456
Delete the first line, remove all the content from the second line but don't delete it completely:
$ sed -e '1d' -e '2s/.*//' input.txt
123
234
235
456
Or insert a blank line before the first, and delete the first two lines:
$ sed -e '1i\
' -e '1,2d' input.txt
123
234
235
456
Or use tail instead of sed to print all lines starting with the third, with an echo first to get the blank line:
(echo ""; tail -n +3 input.txt)
Or if you're trying to modify a file in place, use ed instead:
ed -s input.txt <<EOF
1,2c

.
w
EOF
(The c command changes the given range of lines to the text entered before the lone "."; here that text is a single blank line.)
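A runnable sketch of the ed approach on a throwaway copy of the input (note the blank line between 1,2c and the terminating dot: that blank line is the replacement text):

```shell
# Build the sample input from the question.
printf 'sample\nsample\n123\n234\n235\n456\n' > input.txt

# c replaces lines 1-2 with the text entered before the lone ".":
# here a single empty line. w writes the file back in place.
ed -s input.txt <<'EOF'
1,2c

.
w
EOF
cat input.txt
```

After the edit, input.txt starts with one blank line followed by 123 through 456.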
Here, we have two files. We need to copy a key from file 1 and use it to replace the specific string "key" in file 2 with a sed command. We tried the commands below:
sed -e '3 /key/{r file1' -e 'd}' file2
sed -n "3 s/key/$(cat file1 | grep ^Key | cut -d ' ' -f2)/" file2
File 1
ABCD
EFGH
Key: qvUkD6QaFBA1jYEpynivMoQx+9V71F4+fdn1TIUKPBNny/3zCnjihd1mwxZg==
File 2
IJKL
MNOP
secret key;
MNOP
Expected result:
IJKL
MNOP
secret qvUkD6QaFBA1jYEpynivMoQx+9V71F4+fdn1TIUKPBNny/3zCnjihd1mwxZg==;
MNOP
awk
I am not sure how efficient my code will be for your usage.
$ awk ' /^Key/{q=$2;next} /A|E/{$0=""; next}/^secret/{$2="\""q"\";"}1' $file1 $file2
IJKL
MNOP
secret "qvUkD6QaFBA1jYEpynivMoQx+9V71F4+fdn1TIUKPBNny/3zCnjihd1mwxZg==";
MNOP
Here, I capture the value from the line starting with Key, drop the other file 1 lines (those matching A or E), and substitute the value into the line starting with secret.
sed
You will need to create a variable to fetch the key first.
key=$(sed '1,2d;s/Key: //' $file1) or key=$(awk 'NR==3{print $2}' $file1)
$ echo $key
qvUkD6QaFBA1jYEpynivMoQx+9V71F4+fdn1TIUKPBNny/3zCnjihd1mwxZg==
The following code will generate your expected result, but once again, I am not sure how efficient it will be for your usage.
$ sed "/^secret/s|key|$key|" $file2
IJKL
MNOP
secret qvUkD6QaFBA1jYEpynivMoQx+9V71F4+fdn1TIUKPBNny/3zCnjihd1mwxZg==;
MNOP
This might work for you (GNU sed):
sed -nE '/Key: /{s///;s/\W/\\&/g;s#.*#s/key/&/#p}' file1 | sed -Ef - file2
Craft a substitution command from file1, taking care to backslash-escape non-word characters.
Pass the generated substitution command on stdin to a second invocation of sed via the -f - option, and use it to edit file2.
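A runnable sketch of the two-stage approach (GNU sed assumed; the key is shortened, and since file2 holds the bare word key, the generated command uses s/key/.../ without quotes):

```shell
# Invented, shortened stand-ins for the question's files.
printf 'ABCD\nEFGH\nKey: abc+d/e==\n' > file1
printf 'IJKL\nMNOP\nsecret key;\nMNOP\n' > file2

# Stage 1: turn the key line into a sed substitution command,
# backslash-escaping every non-word character in the key.
sed -nE '/Key: /{s///;s/\W/\\&/g;s#.*#s/key/&/#p}' file1
# prints: s/key/abc\+d\/e\=\=/

# Stage 2: feed that generated script to a second sed via -f -.
sed -nE '/Key: /{s///;s/\W/\\&/g;s#.*#s/key/&/#p}' file1 | sed -Ef - file2
```

The second pipeline prints file2 with `secret key;` rewritten to `secret abc+d/e==;`.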
I have the following lines in file1:
line 1text
line 2text
line 3text
line 4text
line 5text
line 6text
line 7text
With the command cat file1 | sort -R | head -4 I get the following in file2:
line 5text
line 1text
line 7text
line 2text
I would like to order the lines (not numerically, just the same order as file1) into the following file3:
line 1text
line 2text
line 5text
line 7text
The actual data doesn't contain digits. Is there an easy way to do this? I was thinking of doing a grep to find the first instance in a loop, but I'm sure there is an easier solution. Any input is highly appreciated.
You can decorate with line numbers, select four random lines, sort by line number and remove the line numbers:
$ nl -b a file1 | shuf -n 4 | sort -n -k 1,1 | cut -f 2-
line 2text
line 5text
line 6text
line 7text
The -b a option to nl makes sure that empty lines are numbered as well.
Notice that this loads all of file1 into memory, as pointed out by ghoti. To avoid that (and as a generally smarter solution), we can use a different feature of (GNU) shuf: its -i option takes a number range and treats each number as a line. To get four random line numbers from an input file file1, we can use
shuf -n 4 -i 1-$(wc -l < file1)
Now, we have to print exactly these lines. Sed can do that; we just turn the output of the previous command into a sed script and run sed with sed -n -f -. All together:
shuf -n 4 -i 1-$(wc -l < file1) | sort -n | sed 's/$/p/;$s/p/{&;q}/' |
sed -n -f - file1
sort -n sorts the line numbers numerically. This isn't strictly needed, but if we know that the highest line number comes last, we can quit sed afterwards instead of reading the rest of the file for nothing.
sed 's/$/p/;$s/p/{&;q}/' appends p to each line. For the last line, it appends {p;q} instead, to stop processing the file.
If the output from sort looks like
27
774
670
541
then the sed command turns it into
27p
774p
670p
541{p;q}
sed -n -f - file1 processes file1, using the output of above sed command as the instructions for sed. -n suppresses output for the lines we don't want.
The command can be parametrized and put into a shell function, taking the file name and the number of lines to print as arguments:
randlines () {
fname=$1
nlines=$2
shuf -n "$nlines" -i 1-$(wc -l < "$fname") | sort -n |
sed 's/$/p/;$s/p/{&;q}/' | sed -n -f - "$fname"
}
to be used like
randlines file1 4
cat can add line numbers:
$ cat -n file
1 line one
2 line two
3 line three
4 line four
5 line five
6 line six
7 line seven
8 line eight
9 line nine
So you can use that to decorate, sort, undecorate:
$ cat -n file | sort -R | head -4 | sort -n | cut -f2-
You can also use awk to decorate with a random number and line index (if your sort lacks -R like on OS X):
$ awk '{print rand() "\t" FNR "\t" $0}' file | sort -n | head -4
0.152208 4 line four
0.173531 8 line eight
0.193475 6 line six
0.237788 1 line one
Then sort with the line numbers and remove the decoration (one or two columns depending if you use cat or awk to decorate):
$ awk '{print rand() "\t" FNR "\t" $0}' file | sort -n | head -4 | cut -f2- | sort -n | cut -f2-
line one
line four
line six
line eight
Another solution could be to sort the whole file
sort file1 -o file2
and then pick random lines from file2
shuf -n 4 file2 -o file3
(Note that shuf emits the selected lines in random order, so file3 still needs a final sort to restore file1's order.)
I need to count all lines of a Unix file. The file has 3 lines, but wc -l reports only 2.
I understand that it is not counting the last line because it does not end with a newline character.
Could anyone please tell me how to count that line as well?
grep -c returns the number of matching lines. Just use an empty string "" as your matching expression:
$ echo -n $'a\nb\nc' > 2or3.txt
$ cat 2or3.txt | wc -l
2
$ grep -c "" 2or3.txt
3
It is better to have all lines ending with EOL \n in Unix files. You can do:
{ cat file; echo ''; } | wc -l
(though note this overcounts by one when the file does end with a newline). Or this awk:
awk 'END{print NR}' file
The awk approach gives the correct line count regardless of whether the last line in the file ends with a newline or not.
awk will make sure that, in its output, each line it prints ends with a new line character. Thus, to be sure each line ends in a newline before sending the line to wc, use:
awk '1' file | wc -l
Here, we use the trivial awk program that consists solely of the number 1. awk interprets this cryptic statement to mean "print the line" which it does, being assured that a trailing newline is present.
Examples
Let us create a file with three lines, each ending with a newline, and count the lines:
$ echo -n $'a\nb\nc\n' >file
$ awk '1' file | wc -l
3
The correct number is found.
Now, let's try again with the last new line missing:
$ echo -n $'a\nb\nc' >file
$ awk '1' file | wc -l
3
This still provides the right number. awk automatically corrects for a missing newline but leaves the file alone if the last newline is present.
Respect
I respect the answer from John1024 and would like to expand upon it.
Line Count function
I find myself comparing line counts a lot, especially from the clipboard, so I have defined a bash function. I'd like to modify it to show the filenames and, when passed more than one file, a total; however, that hasn't been important enough to do so far.
# semicolons used because this is condensed to one line in my ~/.bash_profile
function wcl(){
    if [[ -z "${1:-}" ]]; then
        set -- /dev/stdin "$@";
    fi;
    for f in "$@"; do
        awk 1 "$f" | wc -l;
    done;
}
Counting lines without the function
# Line count of the file
$ cat file_with_newline | wc -l
3
# Line count of the file
$ cat file_without_newline | wc -l
2
# Line count of the file unchanged by cat
$ cat file_without_newline | cat | wc -l
2
# Line count of the file changed by awk
$ cat file_without_newline | awk 1 | wc -l
3
# Line count of the file changed by only the first call to awk
$ cat file_without_newline | awk 1 | awk 1 | awk 1 | wc -l
3
# Line count of the file unchanged by awk because it ends with a newline character
$ cat file_with_newline | awk 1 | awk 1 | awk 1 | wc -l
3
Counting characters (why you don't want to put a wrapper around wc)
# Character count of the file
$ cat file_with_newline | wc -c
6
# Character count of the file unchanged by awk because it ends with a newline character
$ cat file_with_newline | awk 1 | awk 1 | awk 1 | wc -c
6
# Character count of the file
$ cat file_without_newline | wc -c
5
# Character count of the file changed by awk
$ cat file_without_newline | awk 1 | wc -c
6
Counting lines with the function
# Line count function used on stdin
$ cat file_with_newline | wcl
3
# Line count function used on stdin
$ cat file_without_newline | wcl
3
# Line count function used on filenames passed as arguments
$ wcl file_without_newline file_with_newline
3
3
I have a file containing just 2 numbers, one number on each line.
4.1865E+02
4.1766E+02
I know it's something like BHF = ($1 from line 1 - $1 from line 2), but I can't find the exact command.
How can I do a mathematical operation on them and save the result to a variable?
PS: This was obtained using
sed -i -e '/^$/d' nodout15
sed -i -e 's/^[ \t]*//;s/[ \t]*$//' nodout15
awk ' {print $13} ' nodout15 > 15
mv 15 nodout15
sed -i -e '/^$/d' nodout15
sed -i -e 's/^[ \t]*//;s/[ \t]*$//' nodout15
sed -n '/^[0-9]\{1\}/p' nodout15 > 15
mv 15 nodout15
tail -2 nodout15 > 15
mv 15 nodout15
After all this I have these two numbers, but now I am not able to do the arithmetic. If possible, please tell me a short way to do it on the spot rather than all this juggling. nodout15 is a file with lines of varying column counts, and I am only interested in the 13th column. Empty lines are deleted since not all lines end up in the daughter file; then only the lines starting with a number are kept; then the last two lines, as they show the final state. The difference between them will feed a conditional statement, so I need to save it in a variable.
Regards.
awk
With RS='' (paragraph mode) the whole input is read as one record, so the two numbers become fields $1 and $2:
$ BHF=`awk -v RS='' '{print $1-$2}' input.txt`
$ echo $BHF
0.99
bc
printf converts the E-notation numbers to plain decimals first, since bc does not understand E notation:
$ BHF=`cat input.txt | xargs printf '%f-%f\n' | bc`
$ echo $BHF
.990000
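For the requested short form, the whole pipeline can be collapsed into one awk pass. This is a sketch that assumes the original nodout15 layout (13 whitespace-separated columns on data lines); a tiny stand-in file is built first so the example runs:

```shell
# Build a two-line stand-in for nodout15 with 13 columns per line
# (the real file's layout is assumed here).
awk 'BEGIN{
    for(i=1;i<=12;i++) printf "x "; print "4.1865E+02"
    for(i=1;i<=12;i++) printf "x "; print "4.1766E+02"
}' > nodout15

# Keep field 13 of lines whose 13th field is numeric, remember the last
# two values seen, and print (second-to-last minus last) at the end.
BHF=$(awk '$13 ~ /^[0-9]/ {prev=cur; cur=$13} END {printf "%.2f\n", prev-cur}' nodout15)
echo "$BHF"   # 0.99
```

This skips the intermediate files entirely: blank lines and non-numeric lines fail the $13 test, and only the last two surviving values are kept.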