If my f06 file (trial.f06) looks like
RESULTS
TRIAL VALUES
ABCD
ABCD
ABCD
XX 1234
YY 1234
RESULTS
TRIAL VALUES
ABCD
ABCD
ABCD
ABCD
PP 1234
QQ 1234
RR 1234
And I just want to copy all the lines containing numbers into a text file. How should I do that?
I tried this :
checkMessage =['grep -A 6 "TRIAL VALUES" trial.f06 > results.txt'];
status4 = system(checkMessage);
But that gave me a results.txt identical to the file above.
This is for a LINUX machine
Use grep like this:
grep '[0-9]' trial.f06 > newfile.txt
Output:
XX 1234
YY 1234
PP 1234
QQ 1234
If you mean you want all lines containing numbers in the 6 lines following the words "TRIAL VALUES", you can do this:
grep -A6 "TRIAL VALUES" trial.f06 | grep '[0-9]' > newfile.txt
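Putting it together as a self-contained check (the heredoc rebuilds a shortened version of the sample file, so don't run this over your real trial.f06):

```shell
# Rebuild a shortened sample like the one in the question
cat > trial.f06 <<'EOF'
RESULTS
TRIAL VALUES
ABCD
XX 1234
YY 1234
EOF

# Keep only the numeric lines following "TRIAL VALUES"
grep -A6 "TRIAL VALUES" trial.f06 | grep '[0-9]' > newfile.txt
cat newfile.txt
```

Quoting '[0-9]' keeps the shell from treating the brackets as a filename glob.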
Related
I need to read a text file and make it fit a given number of columns.
For example, if my text.txt contains:
ABCDEFGHIJK
the resulting file for a width of 4 should be:
ABCD
EFGH
IJK
Like this?
$ echo ABCDEFGHIJK | sed -r 's/(....)/\1\n/g'
ABCD
EFGH
IJK
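An alternative worth knowing: the coreutils fold command does fixed-width wrapping directly, without a regex:

```shell
# Wrap input to a maximum of 4 characters per line
echo ABCDEFGHIJK | fold -w 4
```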
How can I write a linux command to delete the last column of tab-delimited csv?
Example input
aaa bbb ccc ddd
111 222 333 444
Expected output
aaa bbb ccc
111 222 333
It is easy to remove the first field instead of the last, so we reverse the content, remove the first field, and then reverse it again.
Here is an example for a "CSV"
rev file1 | cut -d "," -f 2- | rev
Replace the "file1" and the "," with your file name and the delimiter accordingly.
You can use cut for this. You specify a delimiter with option -d and then give the field numbers (option -f) you want to have in the output. Each line of the input gets treated individually:
cut -d$'\t' -f 1-6 < my.csv > new.csv
That is what your description says; your example, however, looks more like you want to strip a column in the middle:
cut -d$'\t' -f 1-3,5-7 < my.csv > new.csv
The $'\t' is a bash notation for the string containing the single tab character.
You can use the command below, which deletes the last column of a tab-delimited file irrespective of the number of fields:
sed -r 's/(.*)\s+\S+$/\1/'
(\S is GNU sed's shorthand for a non-whitespace character; note that [^\s] inside a bracket expression does not mean "non-whitespace", it matches anything except a backslash or the letter s.)
for example:
echo "aaa bbb ccc ddd 111 222 333 444" | sed -r 's/(.*)\s+\S+$/\1/'
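To sanity-check this on genuinely tab-delimited input (the printf lines just rebuild the sample rows; \s and \S are GNU sed extensions):

```shell
# Two tab-delimited rows; sed strips the trailing whitespace run plus the last field
printf 'aaa\tbbb\tccc\tddd\n111\t222\t333\t444\n' |
  sed -r 's/(.*)\s+\S+$/\1/'
```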
I am trying to join 2 simple sorted files, but for some strange reason it's not working.
f1.txt:
f1 abc
f2 mno
f3 pqr
f2.txt
abc a1
mno a2
pqr a3
Command:
join -t '\t' f1.txt f2.txt -1 2 -2 1 > f3.txt
FYI in f1, f2 the space is actually a tab.
I don't know why this is not working; f3.txt comes out empty.
Please provide any valuable insights.
Using join on the 2nd column of the 1st file and the 1st column of the 2nd file:
$ join -1 2 -2 1 file1 file2 > file3
$ cat file3
abc f1 a1
mno f2 a2
pqr f3 a3
Also, join by default delimits on whitespace. Note that in your command '\t' reaches join as the two characters \ and t, not as a tab; in bash you would write -t $'\t'. The man page of join says the following about the -t flag:
-t CHAR
use CHAR as input and output field separator.
Unless -t CHAR is given, leading blanks separate fields and are ignored.
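If you do want an explicit tab separator, bash's $'\t' quoting is the way to pass it, since '\t' is just a two-character string. A minimal sketch that rebuilds the sample files with printf:

```shell
# Rebuild the tab-separated sample files from the question
printf 'f1\tabc\nf2\tmno\nf3\tpqr\n' > f1.txt
printf 'abc\ta1\nmno\ta2\npqr\ta3\n' > f2.txt

# $'\t' expands to a real tab character in bash
join -t $'\t' -1 2 -2 1 f1.txt f2.txt
```

Both files are already sorted on their join fields, which join requires.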
I would like to write a script to check a log: if any line in the log contains a string from include_list.txt but no string from exclude_list.txt, send an alert mail to the administrator. In the example below, the second line contains "string 4" (which is in include_list.txt) but nothing from exclude_list.txt, so only that line should appear in the alert mail.
How should I write this script? Many thanks.
vi exclude_list.txt
string 1
string 2
string 3
vi include_list.txt
string 4
string 5
string 6
For example
xxx string 4 xxxstring 2
xxx string 4 xxxxxxxxxx
xxx xxxxxxx xxxstring 3
You can use grep piped with another grep for this:
grep -iFf includes file.log | grep -iFf excludes
xxx string 4 xxxstring 2
If you want to match 2nd line that doesn't have corresponding entry in excludes then use grep -v after pipe:
grep -iFf includes file.log | grep -ivFf excludes
xxx string 4 xxxxxxxxxx
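A minimal script around that pipeline might look like the sketch below. It rebuilds the sample files for demonstration, and the mail invocation in the comment is an assumption; swap in whatever mailer and administrator address your system uses:

```shell
#!/bin/sh
# Rebuild the sample files from the question (demo only)
printf 'string 1\nstring 2\nstring 3\n' > exclude_list.txt
printf 'string 4\nstring 5\nstring 6\n' > include_list.txt
printf 'xxx string 4 xxxstring 2\nxxx string 4 xxxxxxxxxx\nxxx xxxxxxx xxxstring 3\n' > file.log

# Lines matching an include string but no exclude string
matches=$(grep -iFf include_list.txt file.log | grep -ivFf exclude_list.txt)

if [ -n "$matches" ]; then
    # Replace this printf with your mailer, e.g.:
    #   printf '%s\n' "$matches" | mail -s "Log alert" admin@example.com
    printf '%s\n' "$matches"
fi
```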
I want to ask how to compare two files when I have two lists of names, like
cat /data/file1/ab.txt
aa
bb
cc
dd
ee
cat /data/file2/cd.txt
cc
dd
ee
aa
zz
xx
yy
and I want the output to be something like:
zz
xx
yy
sort ab.txt > /tmp/file1
sort cd.txt > /tmp/file2
comm -13 /tmp/file1 /tmp/file2
The comm program compares two sorted files and reports lines unique to file 1, lines unique to file 2, and lines common to both. -13 suppresses the first and third of those, leaving only the lines unique to file 2.
You can also use grep:
$ grep -vf ab.txt cd.txt
zz
xx
yy
-f tells grep to read patterns from ab.txt and -v inverts the matches. (Each line of ab.txt is treated as a regex that can match anywhere in a line; add -Fx if you want exact whole-line matches.)
You can also use awk:
awk 'NR==FNR{a[$1];next}!($1 in a)' ab.txt cd.txt
NR==FNR is true only while reading the first file, so a[$1] records its keys; for the second file, !($1 in a) prints the lines whose first field was not seen in ab.txt.
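The approaches agree on the sample data; a quick way to verify (the printf lines rebuild ab.txt and cd.txt, and -Fx is added to the grep variant for exact whole-line matching):

```shell
# Rebuild the sample files from the question
printf 'aa\nbb\ncc\ndd\nee\n' > ab.txt
printf 'cc\ndd\nee\naa\nzz\nxx\nyy\n' > cd.txt

# Fixed-string, whole-line variant of the grep answer
grep -vxFf ab.txt cd.txt

# The awk answer: print cd.txt lines whose key is not in ab.txt
awk 'NR==FNR{a[$1];next}!($1 in a)' ab.txt cd.txt
```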