Is there an equivalent of grep -nP "\t" some_file using sed or awk? - linux

I am trying to find the occurrences of tabs in a file some_file and print those lines with a leading line number.
grep -nP "\t" some_file works well for me, but I want an equivalent sed or awk command.

To emulate: grep -nP "\t" file.txt
Here's one way using GNU awk:
awk '/\t/ { print NR ":" $0 }' file.txt
Here's one way using GNU sed:
< file.txt sed -n '/\t/{ =;p }' | sed '{ N;s/\n/:/ }'
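A quick sanity check, using a made-up two-line file.txt whose second line contains a tab (both the awk command and the sed pipeline print the same thing):
$ printf 'no tab here\ncol1\tcol2\n' > file.txt
$ awk '/\t/ { print NR ":" $0 }' file.txt
2:col1	col2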

Well, you can always do it in sed:
cat -n test.txt | sed -n "/\t/p"
Unfortunately, sed can only print line numbers to stdout on a line of their own, so in any case more than one command is necessary. A lengthier (unnecessarily so) version of the above, but one using only sed, would be:
sed = test.txt | sed -n "N;s/\n/ /;/\t/p"
but I like the one with cat more. CATS ARE NICE.
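For illustration, here is the cat version on the same kind of made-up test.txt; note that cat -n pads the line number and separates it with a tab, so the format differs slightly from grep -n's 2:col1...:
$ printf 'no tab here\ncol1\tcol2\n' > test.txt
$ cat -n test.txt | sed -n "/\t/p"
     2	col1	col2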

Related

How do you change column names to lowercase with Linux and save the file in place?

I am trying to change the column names to lowercase in a CSV file. I found the code to do that online, but I don't know how to replace the old column names (uppercase) with the new column names (lowercase) in the original file. I did something like this:
$ head -n1 xxx.csv | tr "[A-Z]" "[a-z]"
But it simply just prints out the column names in lowercase, which is not enough for me.
I tried to add sed -i but it did not do any good. Thanks!!
Using awk (readability winner):
concise way:
awk 'NR==1{print tolower($0);next}1' file.csv
or using ternary operator:
awk '{print (NR==1) ? tolower($0): $0}' file.csv
or using if/else statements:
awk '{if (NR==1) {print tolower($0)} else {print $0}}' file.csv
To change the file for real:
awk 'NR==1{print tolower($0);next}1' file.csv | tee /tmp/temp
mv /tmp/temp file.csv
For your information, sed's in-place edit switch -i does the same: it uses a temporary file under the hood.
You can check this by using:
strace -f -s 800 sed -i'' '...' file
Using perl:
perl -i -pe '$_=lc() if $.==1' file.csv
It replaces the file in place via the -i switch.
You can use sed to tell it to replace the first line with all lower-case and then print the rest as-is:
sed '1s/.*/\L&/' ./xxx.csv
Redirect the output or use -i to do an in-place edit.
Proof of Concept
$ echo -e "COL1,COL2,COL3\nFoO,bAr,baZ" | sed '1s/.*/\L&/'
col1,col2,col3
FoO,bAr,baZ
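For the in-place edit mentioned above (note that both \L and this form of -i are GNU sed extensions):
sed -i '1s/.*/\L&/' ./xxx.csv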

Extracting a version from a file

I have a file file1 which looks as below
version=7.2.3.cdead_rcd345
I am using the following command but it is not working
cat file1 | awk -F'=' '{print $2}
It is not giving the version number
Awk solution to extract version number in format <number>.<number>[.number]:
awk -F'[=_]' '{ sub(/\.[^.]*$/, "", $2); print $2 }' file1
7.2.3
If you need only 7.2.3 as the answer, try sed:
sed -r 's/.*=(.*)\..*/\1/g' file1
Output:
7.2.3
You can use one of the following commands:
awk without the cat, since it is pointless here
command:
awk -F'=' '{print $2}' file1
output:
7.2.3.cdead_rcd345
or even better use grep directly:
command:
grep -Po '(?<=version=).*' file1
output:
7.2.3.cdead_rcd345
Last but not least, if you need only the version number (7.2.3), then you can use the following command:
$ grep -Po '(?<=version=)\d\.\d\.\d' file1
7.2.3
Maybe you are missing a ' at the end of your command:
awk -F"=" '{print $2}' file1
Output:
7.2.3.cdead_rcd345
You can grep "version" in the file, split the line by "=", and get the second word. This method is very easy to understand.
grep "version" file1 | cut -d "=" -f 2

How to run grep inside awk?

Suppose I have a file input.txt with a few columns and rows, where the first column is the key, and a directory dir with files that contain some of these keys. I want to find all lines in the files in dir that contain these keywords. At first I tried to run the command
cat input.txt | awk '{print $1}' | xargs grep dir
This doesn't work because it thinks the keys are paths on my file system. Next I tried something like
cat input.txt | awk '{system("grep -rn dir $1")}'
But this didn't work either; eventually I had to admit that even this doesn't work:
cat input.txt | awk '{system("echo $1")}'
After trying to use \ to escape the whitespace and the $ sign, I came here to ask for your advice. Any ideas?
Of course I can do something like
for x in `cat input.txt` ; do grep -rn $x dir ; done
This is not good enough, because it takes two commands, but I want only one. This also shows why xargs doesn't work: the parameter is not the last argument.
You don't need grep with awk, and you don't need cat to open files:
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
Nor do you need xargs, or shell loops or anything else - just one simple awk command does it all.
If input.txt is not a file, then tweak the above to:
real_input_generating_command |
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' - dir/*
All it's doing is creating an array of keys from the first file (or input stream) and then looking for each key from that array in every file in the dir directory.
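A minimal worked run, using made-up sample data to show the FILENAME output:
$ cat input.txt
foo 1
bar 2
$ cat dir/a.txt
has foo here
nothing relevant
$ awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
dir/a.txt has foo here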
Try the following:
awk '{print $1}' input.txt | xargs -n 1 -I pattern grep -rn pattern dir
First thing you should do is research this.
Next ... you don't need to grep inside awk. That's completely redundant. It's like ... stuffing your turkey with ... a turkey.
Awk can process input and do "grep" like things itself, without the need to launch the grep command. But you don't even need to do this. Adapting your first example:
awk '{print $1}' input.txt | xargs -n 1 -I % grep % dir
This uses xargs' -I option to put xargs' input into a different place on the command line it runs. In FreeBSD or OSX, you would use a -J option instead.
But I prefer your for loop idea, converted into a while loop:
while read key junk; do grep -rn "$key" dir ; done < input.txt
Use process substitution to create a keyword "file" that you can pass to grep via the -f option:
grep -f <(awk '{print $1}' input.txt) dir/*
This will search each file in dir for lines containing keywords printed by the awk command. It's equivalent to
awk '{print $1}' input.txt > tmp.txt
grep -f tmp.txt dir/*
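One caveat: grep -f treats each line of the pattern file as a regex. If your keys are literal strings, add -F to match them as fixed strings:
grep -Ff <(awk '{print $1}' input.txt) dir/*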
grep requires parameters in order: [what to search] [where to search]. You need to merge keys received from awk and pass them to grep using the \| regexp operator.
For example:
arturcz@szczaw:/tmp/s$ cat words.txt
foo
bar
fubar
foobaz
arturcz@szczaw:/tmp/s$ grep 'foo\|baz' words.txt
foo
foobaz
Finally, you will finish with:
grep `commands|to|prepare|a|keywords|list` directory
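As a concrete sketch of that idea (assuming the keys contain no regex metacharacters; this uses grep -E with a plain | instead of the BRE \|):
grep -rnE "$(awk '{print $1}' input.txt | paste -sd'|' -)" dir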
In case you still want to use grep inside awk, make sure $1, $2, etc. are outside the quotes.
E.g., this works perfectly:
cat file_having_query | awk '{system("grep " $1 " file_to_be_greped")}'
# notice the space after grep and before the file name

Linux cut string

In Linux (CentOS) I have a file that contains some additional information that I want removed. I want to generate a new file with all characters up to the first |.
The file has the following information:
ALFA12345|7890
Beta0-XPTO-2|30452|90 385|29
ZETA2334423 435; 2|2|90dd5|dddd29|dqe3
The output expected will be:
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2
That is, remove the first | character and everything after it.
Any suggestion for a script that reads File1 and generates File2 with this specific requirement?
Try
cut -d'|' -f1 oldfile > newfile
And, to round out the "big 3", here's the awk version:
awk -F\| '{print $1}' in.dat
You can use a simple sed script.
sed 's/^\([^|]*\).*/\1/g' in.dat
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2
Redirect to a file to capture the output.
sed 's/^\([^|]*\).*/\1/g' in.dat > out.dat
And with grep:
$ grep -o '^[^|]*' file1
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2
$ grep -o '^[^|]*' file1 > file2

sed: extract text between two patterns where the second pattern may be either of two

I am trying to extract text between pattern1 (fixed) and pattern2 (which can be either p2-1 or p2-2).
Can you please tell me how to achieve this in a single command?
The block starts with start and ends with either end or close.
File1:
======
junktest
data
start
stackoverflow
sed
close
File2:
======
data2
start
stackoverflow
end
I can extract text from File1 with
sed -n "/start/,/close/p"
And from File2 with
sed -n "/start/,/end/p"
I need a single sed command to achieve both, something like:
sed -n "/start/, /close or end /p"
Both GNU sed and BSD sed:
sed -nE '/start/,/close|end/p' file
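Run against the two sample files above:
$ sed -nE '/start/,/close|end/p' File1
start
stackoverflow
sed
close
$ sed -nE '/start/,/close|end/p' File2
start
stackoverflow
end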
This awk looks better:
awk '/start/,/end|close/' file
You can also chain two range filters, with Word1, Word2-1 and Word2-2 as placeholders for your patterns:
sed -n -E "/Word1/,/Word2-1/p" file | sed -n -E "/Word1/,/Word2-2/p"
Easy with awk:
$ awk '/start/{p=1}p{print}/end|close/{p=0}' file
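Here the flag p is raised on start, each line is printed while it is set, and it is cleared again on end or close, so both delimiter lines are included. On the sample File2 above:
$ awk '/start/{p=1}p{print}/end|close/{p=0}' File2
start
stackoverflow
end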
