which command is fast to search consecutive patterns in a line - linux

Which command is fastest for searching consecutive patterns in a line in unix?
The word "=" follows the word "Model".
Input File
Model = It supports 10 Modular Controllers
Support Config Page Model = Yes
Model files are here
Output:
Extract the lines where the word "=" comes after the word "Model" and "Model" appears as the first word.
Here the first line of the input file satisfies the criteria: "Model = It supports 10 Modular Controllers"
I have used sed and awk commands but want to know which one is better.
sed -n '/^Model/ s/=/&/p'
sed -n 's/^Model.*=/&/p'
sed -n '/^Model/ {/=/p ;}'
awk '/^Model.*=/'
Can someone please tell me which one is faster and better?

As Ed Morton says, avoiding regex is faster. My proposed solution is:
awk '{a=index($0, "Model");b=index($0, "=")}a==1 && a<b'
First I get the positions of the substrings, then I compare them to avoid searching for "Model" twice. Another solution could be:
awk 'index($0, "Model")!=1{next}index($0, "=")>1'
In both scripts I'm assuming that "Model" must be the first word (you are using "^" in your regexps). In this second script I check that "Model" is present and is the first word of the string; once this is validated, I only check that the "=" comes after the "Model" (its position is greater than 1).
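If you want to measure this yourself rather than guess, here is a minimal benchmark sketch (input.txt is a placeholder name; repeat the sample lines enough times that each run takes a measurable amount of time):
# Compare the index() approach against the regex variants on the same file.
time awk '{a=index($0, "Model"); b=index($0, "=")} a==1 && a<b' input.txt > /dev/null
time sed -n '/^Model/ {/=/p ;}' input.txt > /dev/null
time awk '/^Model.*=/' input.txt > /dev/null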

Related

How to use the invert flag "-v" in grep when I do not have a file but a long string that is just one line?

Supposed I have
echo "The first part. The second part. The third part."
and want to remove The first part and The third part to get:
The second part.
I tried:
echo "The first part. The second part. The third part." | grep -v -e "The first part." -e "The third part."
but the inverting flag appears to work only for files with multiple lines. How can I do it for a single string?
Use sed instead:
echo "The first part. The second part. The third part." \
| sed -e 's/[[:space:]]*The first part\.[[:space:]]*//g' \
-e 's/[[:space:]]*The third part\.[[:space:]]*//g'
grep is a line-based tool; it is essentially a select-lines-which-satisfy-a-condition tool. The task you want to implement is more of a remove-substrings-from-a-line task. This is in the area of substitutions and not in the area of selection, so the best tool for the job is sed:
sed 's/string_to_get_rid_of//g' file
Of course it is possible that your file is structured in records and you want to remove all records which contain a particular word; then there is another option. Assume that your file is split into various records which are delimited by a unique character (e.g. the <full-stop> character (.)). Then it is better to use awk. Awk allows you to redefine its record separator from a newline (the default) to anything you want by defining RS and ORS (the latter for the output):
awk 'BEGIN{RS=ORS="."}/string_that_should_not_appear/{next}1' file
Assume you have a file with the content:
foo.bar.baz.qux
quux.quuz.corge
If we want to remove all the records which contain qux, we do:
awk 'BEGIN{RS=ORS="."}/qux/{next}1' file
which returns
foo.bar.baz.quuz.corge.
Notice that the record containing "qux" also contained a newline, and that an extra ORS is added at the end. You might instead get
foo.bar.baz.quuz.corge
.
which is due to the POSIX convention that text files should end with a newline.
In case of the OP, it would read:
awk 'BEGIN{RS=ORS="."}/The first part/{next}/The third part/{next}1' file
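A quick check against the OP's single-line string (a sketch run; as noted above, the output keeps the leading space of the surviving record and appends an extra record separator for the trailing newline):
echo "The first part. The second part. The third part." \
| awk 'BEGIN{RS=ORS="."}/The first part/{next}/The third part/{next}1'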

Prefix search names to output in bash

I have a simple egrep command searching for multiple strings in a text file which outputs either null or a value. Below is the command and the output.
cat Output.txt|egrep -i "abc|def|efg"|cut -d ':' -f 2
Output is:
xxx
(null)
yyy
Now, I am trying to prefix my search strings to the output, like below.
abc:xxx
def:
efg:yyy
Any help on the code to achieve this or where to start would be appreciated.
-Abhi
Since I do not know exactly what your input file contains (it is not specified properly in the question), I will make some hypotheses in order to answer your question.
Case 1: the patterns you are looking for are always located in the same column
If it is the case, the answer is quite straightforward:
$ cat grep_file.in
abc:xxx:uvw
def:::
efg:yyy:toto
xyz:lol:hey
$ egrep -i "abc|def|efg" grep_file.in | cut -d':' -f1,2
abc:xxx
def:
efg:yyy
After the grep, just use cut with the 2 columns that you are looking for (here columns 1 and 2).
REMARK:
Do not cat the file and pipe it into grep; grep will already read the file itself, so the cat only adds an extra process and an extra pass over the data. It might not matter much on small files, but you will feel the difference on 10GB files, for example!
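In other words, for the command in the question that would simply be:
egrep -i "abc|def|efg" Output.txt | cut -d ':' -f 2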
Case 2: the patterns you are looking for are NOT located in the same column
In this case it is a bit more tricky, but not impossible. There are many ways of doing, here I will detail the awk way:
$ cat grep_file2.in
abc:xxx:uvw
::def:
efg:yyy:toto
xyz:lol:hey
If your input file is in this format; with your pattern that could be located anywhere:
$ awk 'BEGIN{FS=":";ORS=FS}{tmp=0;for(i=1;i<=NF;i++){tmp=match($i,/abc|def|efg/);if(tmp){print $i;break}}if(tmp){printf "%s\n", $2}}' grep_file2.in
abc:xxx
def:
efg:yyy
Explanations:
FS=":";ORS=FS define your input/output field separator at : Then on each line you define a test variable that will become true when you reach your pattern, you loop on all the fields of the line until you reach it if it is the case you print it, break the loop and print the second field + an EOL char.
If you do not meet your pattern you do nothing.
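For readability, here is the same awk program expanded with comments (the logic is unchanged from the one-liner above):
awk '
BEGIN { FS = ":"; ORS = FS }           # read and write ":"-separated fields
{
    tmp = 0
    for (i = 1; i <= NF; i++) {        # scan every field for one of the patterns
        tmp = match($i, /abc|def|efg/)
        if (tmp) { print $i; break }   # print the matching field followed by ":"
    }
    if (tmp) printf "%s\n", $2         # then print the 2nd field and a newline
}' grep_file2.in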
If you prefer the sed way, you can use the following command:
$ sed -n '/abc\|def\|efg/{h;s/.*\(abc\|def\|efg\).*/\1:/;x;s/^[^:]*:\([^:]*\):.*/\1/;H;x;s/\n//p}' grep_file2.in
abc:xxx
def:
efg:yyy
Explanations:
/abc\|def\|efg/{} is used to filter the lines that contain one of the patterns provided; the instructions in the block are then executed. h;s/.*\(abc\|def\|efg\).*/\1:/; saves the line in the hold space and replaces the line with the matching pattern followed by ":". x;s/^[^:]*:\([^:]*\):.*/\1/; exchanges the pattern and hold space and extracts the 2nd column element. Last but not least, H;x;s/\n//p regroups both extracted elements on one line and prints it.
Try this:
$ egrep -io "(abc|def|efg):[^:]*" file
This will print the match and the next token after the delimiter.
If we can assume that there are only two fields, that abc etc will always match in the first field, and that getting the last match on a line which contains multiple matches is acceptable, a very simple sed script could work.
sed -n 's/^[^:]*\(abc\|def\|efg\)[^:]*:\([^:]*\)/\1:\2/p' file
If other but similar conditions apply (e.g. there are three fields or more but we don't care about matches in the first two) the required modifications are trivial. If not, you really need to clarify your question.

How can I get substring from a string in linux?

I am trying to extract a specific string from a string in linux.
For example, I want to extract 'android.content.pm.PackageParser.parseBaseApplication' from the below string.
The String has a regular format and only the string within parenthesis is changeable.
Join point 'method-execution(boolean android.content.pm.PackageParser.parseBaseApplication(android.content.pm.PackageParser$Package, android.content.res.Resources, org.xmlpull.v1.XmlPullParser, android.util.AttributeSet, int, java.lang.String[]))' in Type
However, I am having trouble finding a proper approach to do this.
At first I tried the sed command, but it was too complicated and I couldn't complete the work.
Could you recommend any other approach?
Thanks a lot.
If the string of interest is always the second string after the first "(", then:
echo "..." | awk -F '[()]' '{split($2,a," "); printf a[2]}'
will extract it.
It splits the line using the delimiters "(" and ")", so $2 will be the data between "(" and ")". split then splits $2 and you get the second string, which is
android.content.pm.PackageParser.parseBaseApplication
for your example.
This looks like AOP syntax. So, with certain assumptions, this can be done as:
echo "Join point...." | cut -d'(' -f2 | cut -d' ' -f2
Explanation: cut on "(" and take the second field, which is the method signature minus the parameters. Since we are not interested in the return type either, split the signature on the blank space and take the second field, which is the method name.
This is based on your stated invariant that the substring you're capturing is the only part that varies from file to file. Here is a perl solution:
Extract=$(perl -ne 'print $1 if /\s*Join point \x27method-execution\(boolean\s+([^(]*)/' file_to_search)
echo $Extract
android.content.pm.PackageParser.parseBaseApplication
I used the full lead-in because it reduces the chance of false positives, but if you find other things change and want to use just a substring of it (e.g., "method-execution(boolean "), that's your choice to make.
This matches up to where the variant substring starts; the capture then runs until the next invariant, the open parenthesis, so we can just capture while not seeing an open parenthesis. Since it's probably some human interaction changing the variant, I allowed for extra spaces with the \s+ (one or more whitespace characters).
You could use almost the same regex with sed, but would need to consume the entire string to avoid it becoming part of the output. e.g., in shorthand:
sed -r 's/.*LEAD_IN(CAPTURE_TEXT).*/\1/'
Where LEAD_IN is the constant leader ("Join point...") and CAPTURE_TEXT is the same capture group as in the perl solution. The main difference is the leading and trailing ".*" needed to consume the entire subject.
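Spelled out against the example line (a sketch assuming GNU sed for -r; the inner argument list is abbreviated here):
echo "Join point 'method-execution(boolean android.content.pm.PackageParser.parseBaseApplication(...))' in Type" \
| sed -r "s/.*Join point 'method-execution\(boolean +([^(]*).*/\1/"
which prints android.content.pm.PackageParser.parseBaseApplication.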

Grep (a.txt - English word list, b.txt - one string in each line) Q: is a string from b.txt built only from words or not?

I have a list of English words (one per line, around 100,000) in a.txt, and b.txt contains strings (around 50,000 lines, one string per line; a string can be a pure word, word+something, or garbage). I would like to know which strings from b.txt contain English words only (without any additional chars).
Can I do this with grep?
Example:
a.txt:
apple
pie
b.txt:
applepie
applebs
bspie
bsabcbs
Output:
c.txt:
applepie
Since your question is underspecified, maybe this answer can help as a shot in the dark to clarify your question:
c='cat b.txt'
while IFS='' read -e line
do
c="$c | grep '$line'"
done < a.txt
eval "$c" > c.txt
But this would also match a line like this is my apple on a pie. I don't know if that's what you want.
This is another try:
re=''
while IFS='' read -e line
do
re="$re${re:+|}$line"
done < a.txt
grep -E "^($re)*$" b.txt > c.txt
This will let pass only the lines which have nothing but a concatenation of these words. But it will also let pass things like 'appleapplepieapplepiepieapple'. Again, I don't know if this is what you want.
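With the example files above, re ends up holding apple|pie, so the final command is effectively:
grep -E '^(apple|pie)*$' b.txt > c.txt
which leaves only applepie in c.txt.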
Given your latest explanation in the question I would propose another approach (because building such a list out of 100000+ words is not going to work).
A working approach for this amount of words could be to remove all recognized words from the text and see which lines get emptied in the process. This can easily be done iteratively without exploding the memory usage or other resources. It will take time, though.
cp b.txt inprogress.txt
while IFS='' read -e line
do
sed -i "s/$line//g" inprogress.txt
done < a.txt
for lineNumber in $(grep -n '^$' inprogress.txt | sed 's/://')
do
sed -n "${lineNumber}p" b.txt
done
rm inprogress.txt
But this still would not really solve your issue: consider having the words to and potato in your list. If removing to happens first, this leaves pota in your text file, and pota is not a word that would then be removed.
You could address that issue by sorting your word file by word length (longest words first), but that would still be problematic for some compound words, e.g. redart (being red + art): dart would be removed first, so re would remain, and if re is not in your word list, you would not recognize the word.
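If you want to try the length-sorted variant anyway, a small sketch (writing the sorted list to a hypothetical a_sorted.txt) could be:
# Sort the word list longest-first so longer words are removed before their substrings.
awk '{ print length($0), $0 }' a.txt | sort -rn -k1,1 | cut -d' ' -f2- > a_sorted.txt
Then use a_sorted.txt instead of a.txt in the removal loop above.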
Actually, your problem is one of logical programming and natural language processing and probably does not really fit SO. You should have a look at the language Prolog, which is designed around problems like yours.
I will post this as an answer as well since I feel this is the correct answer to your specific question.
Your requirement is to find non-English words in a file (b.txt) based on a word list ( a.txt) which contains a list of English words. Based on the example in your question said word list does not contain compound words (e.g. applepie) but you would still like to match the file against compound words based on words in your word list (e.g. apple and pie).
There are two problems you are facing:
Not every permutation of words in a.txt will be a valid English compound word, so just based on this your problem is already impossible to solve.
If you nonetheless were to attempt building a list of compound words yourself by compiling a list of all possible permutations, you could not easily do this because of the size of your word list (and the resulting memory problems). You would most probably have to store your words in a more complex data structure, e.g. a tree, and build permutations on the fly by traversing the tree, which is not doable in shell scripting.
Because of these points and your actual question being "can this be done with grep?" the answer is no, this is not possible.

How can I remove a doubled section of a string?

I'm having trouble with data manipulation in a txt file. My file currently looks like this:
HG02239 -23.42333333
NA06985NA06985 -20.125
NA06991NA06991 -20.92
This shows some of my tab-delimited data. Half the entries are in the correct seven-character (letterletternumbernumbernumbernumbernumber) format, but some are doubled up. I want to go into the second column (the first column is empty for a reason!) and remove the repeats in the string so it would read
HG02239 -23.42333333
NA06985 -20.125
NA06991 -20.92
I can't work out how to do this with sed/awk on a per column basis. I feel like I should be able to write a regex, but because the data is a repeat, I don't want to lose the first half of the string; and I can't work out how to cut on a specific column, or I would just delete the 7th character. Any help much appreciated!
Solution
You can solve this with a backreference. For example, using GNU sed:
$ cat << EOF | sed --regexp-extended 's/(.{7})\1/\1/'
HG02239 -23.42333333
NA06985NA06985 -20.125
NA06991NA06991 -20.92
EOF
HG02239 -23.42333333
NA06985 -20.125
NA06991 -20.92
If you aren't using GNU sed, you may need to escape the capture groups. In addition, you can tune the regular expression if you need a more accurate character match.
Explanation
The cat pipeline is just a here-document to make it easy to display and test the code. You can call sed directly on your file, or use the -i flag to perform an in-place edit when you're comfortable with the results.
The sed script does the following:
It stores any group of 7 consecutive characters in a capture group using an "interval expression" (the number in the curly braces).
The \1 is a backreference that matches the first capture group.
The match looks for "a capture group followed by a copy of the capture group."
The substitution replaces the match with a single copy of the capture group.
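For a sed without --regexp-extended, the same substitution in basic regex syntax should look roughly like this (treat it as a sketch; escaping details vary a little between sed flavors):
sed 's/\(.\{7\}\)\1/\1/' file.txt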
One way, using awk:
awk '{ print substr($1, 1, 7), $2 }' file.txt
Output:
HG02239 -23.42333333
NA06985 -20.125
NA06991 -20.92
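If the output must stay tab-delimited, a variant that sets the output field separator explicitly (assuming a single tab is what you want) would be:
awk 'BEGIN { OFS = "\t" } { print substr($1, 1, 7), $2 }' file.txt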
You could use something like this:
sed -i 's|\([A-Z]\{2\}[0-9]\{5\}\)[A-Z0-9]*\s*\(.*\)|\1 \2|g' <your-file>
