Find lines containing ' \N abcd ' - linux

How can I find lines that contain a double tab and then \N
It should match, for example, \N abcd
I've tried
grep $'\t'$'\t''\N' file1.txt
grep $'\t\t''\N' file1.txt
grep $'\t\t\N' file1.txt

The following works for me. The key is doubling the backslash so grep matches a literal \ followed by N (otherwise \N is taken as a regex escape); $'\t\t' lets the shell expand the tabs.
RHEL:
$ grep $'\t\t''\\N' file1.txt
OSX:
$ grep '\t\t\\N' file1.txt
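A quick way to verify the escaping (a minimal sketch; file1.txt here is a fabricated two-line sample, with the tabs written via printf):

```shell
# Build a test file: the first line contains <tab><tab>\N, the second does not
printf 'foo\t\t\\N abcd\nbar baz\n' > file1.txt
# $'\t\t' expands to two literal tabs; '\\N' matches a literal backslash then N
grep $'\t\t''\\N' file1.txt
```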

Print text after last slash

I have data in File1.txt:
#demo/file/wk/Fil0.fk
#demo/file/wk/Fil1.fk
#demo/file/wk/Fil2.fk
#demo/file/wk/Fil3.fk
#demo/file/wk/Fil4.fk
I want to print the data to another file, File2.txt, in the format below:
Fil0.fk
Fil1.fk
Fil2.fk
Fil3.fk
Fil4.fk
Try the code below:
cut -d'/' -f4 File1.txt > File2.txt
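Note that cut -d'/' -f4 only works while every path has exactly the same depth. To print the text after the last slash regardless of depth, awk's $NF (the last field) is a common alternative; a sketch using two of the sample lines:

```shell
# Recreate a slice of the sample data
printf '#demo/file/wk/Fil0.fk\n#demo/file/wk/Fil1.fk\n' > File1.txt
# -F/ splits on '/'; $NF is the last field, i.e. everything after the last slash
awk -F/ '{print $NF}' File1.txt > File2.txt
cat File2.txt
```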

How to create a CSV file based on row in shell script?

I have a text file /tmp/some.txt with below values
JOHN YES 6 6 2345762
SHAUN NO 6 6 2345748
I want to create a CSV file in the format below (i.e. based on rows, NOT on columns).
JOHN,YES,6,6,2345762
SHAUN,NO,6,6,2345748
I tried the code below:
for i in `wc -l /tmp/some.txt | awk '{print $1}'`
do
awk 'NR==$i' /tmp/some.txt | awk '{print $1","$2","$3","$4","$5}' >> /tmp/some.csv
done
Here wc -l /tmp/some.txt | awk '{print $1}' returns 2 (i.e. 2 rows in the text file),
and for each row awk 'NR==$i' /tmp/some.txt | awk '{print $1","$2","$3","$4","$5}' should print the 5 comma-separated fields into the some.csv file.
When I execute each command separately it works, but when I run it as a shell script I get an empty some.csv file.
@Kart: Could you please try the following. $1=$1 forces awk to rebuild the record using the new output field separator, and the trailing 1 prints every line.
awk '{$1=$1;} 1' OFS=, Input_file > output.csv
I hope this helps you.
I suggest:
sed 's/[[:space:]]\+/,/g' /tmp/some.txt
You almost got it. awk already processes the file row by row, so you don't need to iterate with the for loop.
So you just need to run:
awk '{print $1","$2","$3","$4","$5}' /tmp/some.txt >> /tmp/some.csv
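For completeness, the original loop printed nothing for two reasons: inside single quotes the shell never expands $i, so awk evaluates NR==$i with its own (empty) variable i, which amounts to NR==$0 and is almost never true; and the for loop runs once with i=2 rather than over 1..N. If you did want a per-row loop, passing the shell variable with awk -v fixes it (a sketch, recreating the sample file):

```shell
# Recreate the sample input
printf 'JOHN YES 6 6 2345762\nSHAUN NO 6 6 2345748\n' > /tmp/some.txt
n=$(wc -l < /tmp/some.txt)
# -v row="$i" passes the shell variable into awk, where NR==row selects that line
for i in $(seq 1 $n); do
    awk -v row="$i" 'NR==row {print $1","$2","$3","$4","$5}' /tmp/some.txt
done > /tmp/some.csv
cat /tmp/some.csv
```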
With tr, squeezing (-s), and then transliterating space/tab ([:blank:]):
tr -s '[:blank:]' ',' <file.txt
With sed, substituting one or more space/tab with ,:
sed 's/[[:blank:]]\+/,/g' file.txt
With awk, replacing one or more spaces/tabs with , using the gsub() function:
awk 'gsub("[[:blank:]]+", ",", $0)' file.txt
Example
% cat foo.txt
JOHN YES 6 6 2345762
SHAUN NO 6 6 2345748
% tr -s '[:blank:]' ',' <foo.txt
JOHN,YES,6,6,2345762
SHAUN,NO,6,6,2345748
% sed 's/[[:blank:]]\+/,/g' foo.txt
JOHN,YES,6,6,2345762
SHAUN,NO,6,6,2345748
% awk 'gsub("[[:blank:]]+", ",", $0)' foo.txt
JOHN,YES,6,6,2345762
SHAUN,NO,6,6,2345748

How to find uncommon lines between two text files using shell script?

I have two text files file1.txt & file2.txt
file1.txt Contains :
a
b
c
file2.txt Contains :
a
b
c
d
e
f
The Output Should be :
d
e
f
The command I'm trying to use is diff file2.txt file1.txt.
It gives the common lines only.
Assuming that the input files are sorted:
join -v 2 file1.txt file2.txt
Check man join for details on all the other things join can do for you.
Please try the ones below:
grep -vf file1.txt file2.txt
comm -13 file1.txt file2.txt
For diff you have to do something extra:
diff file1.txt file2.txt | grep '>' | cut -f2 -d' '
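Like join, comm expects its inputs to be sorted. A quick demonstration, recreating the sample files: -1 suppresses lines unique to file1.txt and -3 suppresses lines common to both, leaving only the lines unique to file2.txt.

```shell
# Recreate the (already sorted) sample files
printf 'a\nb\nc\n' > file1.txt
printf 'a\nb\nc\nd\ne\nf\n' > file2.txt
# -13: drop lines only in file1.txt and lines in both; keep lines only in file2.txt
comm -13 file1.txt file2.txt
```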

How to pass variables in an awk

I need to pass a variable into the following awk command:
awk -F, 'NR==FNR{a[\$0]++;next}!(a[\$6])' "file1" "file2" > output
Instead of hard-coding the value 6, how do I pass it as a variable?
Here's an example using -v to pass a variable to awk. Also note that if you use single quotes around the awk program, you don't want to escape the $s.
$ cat file1.txt
1
4
$ cat file2.txt
1,2
3,4
$ awk -F, -v i=1 'NR==FNR{a[$0]++;next} !(a[$i])' file1.txt file2.txt
3,4
$ awk -F, -v i=2 'NR==FNR{a[$0]++;next} !(a[$i])' file1.txt file2.txt
1,2

grep after match

I have the following lines in a text file:
myname aaa age 22
age 23 myname bbb
How can I find the word after myname using the Linux grep command?
I want the output to be the word after myname (aaa and bbb).
$ grep -Po '(?<=myname\s)\w+' inputFile
$ grep -o "myname [[:alnum:]]\+" /tmp/sample | cut -f2 -d' '
aaa
bbb
A solution with sed:
sed -n '/myname/{s/.*myname \([^ ]*\).*/\1/;p}'
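If grep -P (PCRE lookbehind) isn't available on your system, an awk sketch that scans each line for the field after myname (sample.txt recreated from the question's two lines):

```shell
# Recreate the sample input
printf 'myname aaa age 22\nage 23 myname bbb\n' > sample.txt
# Walk the fields of each line; when a field equals "myname", print the next one
awk '{for (i = 1; i < NF; i++) if ($i == "myname") print $(i + 1)}' sample.txt
```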
