How to count exact matches of a certain pattern in a text file using a Linux shell command?

I want to find the count of a certain pattern in a text file which also contains a lot of mixed patterns, using a Linux shell command.
I have a text file which contains below patterns,
[--------------]
[+--------------+]
[+----------+------------+--------------------+]
[+---------------------+---------------------+]
How do I find the exact count of only the first pattern, [--------------]?
Note: Don't count the square brackets as part of the pattern; only the special characters inside the square brackets form the pattern.

cat ./file | sed -e 's/\]/\]\n/g' | grep -c "\[--------------\]"
cat reads the file
sed replaces every ] with ]\n (the g flag makes the substitution global), so each bracketed pattern ends up on its own line
grep searches every line for your expression, and -c prints the number of matching lines
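If your grep supports -o (GNU grep does), you can skip the sed step and count every match directly, even when several patterns share a line; a minimal sketch, assuming the same file:
grep -o -- '\[--------------\]' ./file | wc -l
-o prints each match on its own line, and wc -l counts those lines.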

Related

Bash script - Get part of a line of text from another file

I'm quite new to bash scripting. I have a script where I want to extract part of the value of a particular line in a separate config file and then use that value as a variable in the script.
For example:
Line 75 in a file named config.cfg
"ssl_cert_location=/etc/ssl/certs/thecert.cer"
I want just the value at the end, "thecert.cer", to use in the script. I've tried awk and various uses of grep, but I can't quite get just the name of the certificate.
Any help would be appreciated. Thanks
These are some examples of the commands I ran:
awk -F "/" '{print $4}' config.cfg
grep -o *.cer config.cfg
Is it possible to extract the value on that line and then edit the output so it just contains the name of the certificate file?
This is a pure Bash version of the basic functionality of basename:
cert=${line##*/}
which removes everything up to and including the final slash. It presupposes that you've already read the line.
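For instance (a minimal sketch, assuming the value sits on line 75 of config.cfg as in the question):
line=$(sed -n '75p' config.cfg)
cert=${line##*/}   # cert now holds thecert.cer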
Or, using sed:
cert=$(sed -n '75s/^.*\///p' filename)
or
cert=$(sed -n '/^ssl_cert_location=/s/^.*\///p' filename)
This gets the specified line, selected either by line number or by the setting name, and replaces everything up to and including the final slash with nothing. It ignores all other lines in the file (unless the setting is repeated, in the case of the text-match version). The text-match version is better because it works no matter which line the setting is on.
grep uses regular expressions (as does sed). The grep command in your question uses a glob expression, which won't work. One way to use grep (GNU grep) is the PCRE feature (Perl Compatible Regular Expressions):
cert=$(grep -Po '^ssl_cert_location=.*/\K.*' filename)
This works similarly to the sed command.
I have anchored the regular expressions to the beginning of the line. If there may be leading whitespace (the line may be indented), change the regex so it looks something like this:
^[[:space:]]*ssl_cert_location=
which works for both indented and unindented lines.
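For example, the PCRE variant above with the indentation-tolerant anchor would become (a sketch, again assuming GNU grep):
cert=$(grep -Po '^[[:space:]]*ssl_cert_location=.*/\K.*' filename)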
There are many variants, but a simple one that comes to mind with grep is first getting the line, then matching only non-slashes at the end of the line:
<config.cfg grep '^ssl_cert_location=' | grep -o '[^/]*$'
Why didn't your grep command (grep -o *.cer config.cfg) work? Because *.cer is a shell glob pattern and is expanded by the shell to matching file names before the grep process is even started. If there are no matching files, it is passed verbatim, but * in regular expressions is a quantifier which needs a preceding expression. . in regex means "match any single character". So what you wanted is probably grep -o '.*\.cer', but .* matches anything, including slashes.
An awk solution would look like the following:
awk -F/ '/^ssl_cert_location=/{print $NF}' config.cfg
It uses "/" as separator, finds only lines starting with "ssl_cert_location" and then prints the last (NF) field in from this line.
Or an equivalent sed solution, which matches the same line and then deletes everything including the last slash:
sed -n '/^ssl_cert_location=/s#^.*/##p' config.cfg
To store the output of any command in a variable, use command substitution:
var="$(command with arguments)"

Linux counting words in random characters

I have generated a file of random characters from A-Z and a-z; the file comes in different sizes, for example 10000 characters or 1000000. I would like to search it for how many times the word 'cat' or 'dog' appears. Would someone be able to provide the command (grep ... | wc ... or any other command) that can handle this task?
grep has a -c flag that will count the number of lines containing a match.
So
grep -c "cat\|dog" <file name>
add -i if you want a case-insensitive count
You can use grep with the flag -o. For example:
grep -o "dog\|cat" <filename> | wc -l
About the flag -o, according to man grep: «Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.»
This solution will work in several situations: multiple lines, a single line, the word surrounded by whitespace or other characters, etc.
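A quick sketch of the difference between the two approaches, using a hypothetical one-line test file sample.txt:
printf 'catdogcat' > sample.txt
grep -c 'cat\|dog' sample.txt          # prints 1: one matching line
grep -o 'cat\|dog' sample.txt | wc -l  # prints 3: three matches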

How to extract a specific text from gz file?

I need to extract characters 5 to 11 from my fastq.gz data; this data is just too large for running in R. So I was wondering if I can do it directly on the Linux command line?
The fastq file looks like this:
@NB501399:67:HFKTCBGX5:1:11101:13202:1044 1:N:0:CTTGTA
GAGGTNACGGAGTGGGTGTGTGCAGGGCCTGGTGGGAATGGGGAGACCCGTGGACAGAGCTTGTTAGAGTGTCCTAGAGCCAGGGGGAACTCCAGGCAGGGCAAATTGGGCCCTGGATGTTGAGAAGCTGGGTAACAAGTACTGAGAGAAC
+
AAAAA#EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAAAEEEEEEEEEEEEEEEEAEEEEEEEEEEEEEEAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAE6
@NB501399:67:HFKTCBGX5:1:11101:1109:1044 1:N:0:CTTGTA
TAGGCNACCTGGTGGTCCCCCGCTCCCGGGAGGTCACCATATTGATGCCGAACTTAGTGCGGACACCCGATCGGCATAGCGCACTACAGCCCAGAACTCCTGGACTCAAGCGATCCTCCAGCCTCAGCCTCCCGAGTAGCTGGGACTACAG
+
And I only want to extract characters 5 to 11, which are located in the sequence part (for the first read it is TNACGG, for the second CNACCT), and make a new txt file from them. Can I do that?
You can use GNU sed with zcat:
zcat fastq.gz | sed -n '2~4{s/.\{4\}\(.\{6\}\).*/\1/;p}'
-n means lines are not printed by default
2~4 means start with line 2, then match every fourth line (fastq records are four lines long)
when the "address" matches, the substitution remembers the fifth to tenth characters in \1 and replaces the whole line with them; p prints the result
Another using zgrep and positive lookbehind:
$ zgrep -oP "(?<=^[ACTGN]{4})[ACTGN]{6}" foo.gz
TNACGG
CNACCT
Explained:
zgrep : man zgrep: search possibly compressed files for a regular expression
-o Print only the matched (non-empty) parts of a matching line
-P Interpret the pattern as a Perl-compatible regular expression (PCRE).
(?<=^[ACTGN]{4}) positive lookbehind
[ACTGN]{6} match 6 characters from the class that are preceded by the above
foo.gz my test file
$ zcat fastq.gz | awk '(NR%4)==2{print substr($0,5,6)}'
TNACGG
CNACCT
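To make it a new txt file, as asked, redirect the output (extracted.txt is a name chosen here for illustration):
zcat fastq.gz | awk '(NR%4)==2{print substr($0,5,6)}' > extracted.txt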

Identify a text in a file which contains paths (including * asterisk) in Shell

Input:
Text file: backup_list.txt
/home/common/xyz_V*.txt
/home/common/hello.txt
/home/mutaq/xya_*_logs.txt
/home/mutaq/xygs.txt
Text: /home/mutaq/xya_Juvi_V1.01_logs.txt
Now I want to match this text in the file.
The file has a line /home/mutaq/xya_*_logs.txt which is similar to the text /home/mutaq/xya_Juvi_V1.01_logs.txt, treating the asterisk (*) as a character that represents multiple characters in between.
Now I want to know whether the text exists in the file or not.
Using grep, I cannot differentiate with the asterisk.
One way I found is to first iterate through backup_list.txt, invoke the ls command for each of the lines and store the results somewhere; then I can directly match the text with the stored values.
But is there any better way of doing this, such that I can directly search for the text within that file?
A first answer could be grep's -f option, which allows grep to take the patterns to search for from the file passed as a parameter. You could try:
ls /path/to/files/ | grep -f backup_list.txt
But to give the content of backup_list.txt to grep as patterns, '*' has to be replaced by '.*' (any character, 0 or n times) and '.' has to be replaced by '\.' (to match the '.' character and not any character).
You can make the replacements with sed.
Hope this helps.
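A minimal sketch of that conversion with sed, assuming bash (backup_list.regex is a hypothetical intermediate file; the dots are escaped first so that the '.*' introduced afterwards is left alone):
sed -e 's/\./\\./g' -e 's/\*/.*/g' backup_list.txt > backup_list.regex
echo '/home/mutaq/xya_Juvi_V1.01_logs.txt' | grep -xf backup_list.regex
grep's -x makes each pattern match a whole line, so a pattern cannot accidentally match just a substring of a longer path.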

How to print the longest word in a file by using combination of grep and wc

I am trying to find the longest word in a text file.
I tried and found the number of characters in the longest word in the file
by using the command
wc -L
I need to print the longest word by using this number and the grep command.
If you must use the two commands given, I'd suggest:
grep -E ".{$(wc -L < test.txt)}" test.txt
The command substitution is used to build the correct brace expression to match the line(s) with exactly the given number of characters (that number is the maximum, so only the longest line(s) qualify). -E is needed to enable extended regular expression support; otherwise, the braces need to be escaped: grep ".\{...\}" test.txt. Note that wc -L measures the longest line, so this assumes one word per line.
Using an awk command that makes a single pass through the file may be faster.
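For comparison, a sketch of such a single-pass awk command (no grep or wc needed):
awk '{for (i = 1; i <= NF; i++) if (length($i) > length(longest)) longest = $i} END {print longest}' test.txt
It keeps the longest word seen so far in the variable longest and prints it after the last line.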
