How to use grep/egrep to count files in subdirectories containing a "String" [closed] - linux

What I want to achieve is a count of all files in a directory that contain a pattern string, without counting errors.
I have tried a few commands, but nothing seems to work. This is what I have tried so far:
ls -l grep -cri "string" | wc -l
ls /path/ 2> /dev/null | grep -ci 'string' | wc -l
ls -l | grep -v ^l "string" | wc -l

Use the -l option to list just the filename when the contents match the pattern. Use the -r option to recurse into subdirectories. Use the -F option to match the string literally, rather than as a regular expression.
You need to tell it the name of the directory to recurse into; you can use . for the current directory.
Then pipe this to wc -l:
grep -rlF "string" . 2>/dev/null | wc -l
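If filenames may contain newlines, a more robust variant (a sketch, assuming GNU grep, whose -Z option terminates each printed filename with a NUL byte instead of a newline) counts NUL bytes instead of lines:
grep -rlZF "string" . 2>/dev/null | tr -cd '\0' | wc -c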

If you want to count the files in a directory with "string" in the name, you can do it like this:
ls | grep -c "string"
or, if you want it to be case-insensitive, use -i:
ls | grep -ci "string"
-c prints the count of matching lines. Note that -c already outputs a single number, so piping it to wc -l (as in the question) would always print 1.

Related

Combine number of lines of multiple files with filename [closed]

Specify a command (or set of commands) that displays the number of lines of code in the .c and .h files in the current directory, printing each file in alphabetical order followed by ":" and its number of lines, and finally the total number of lines.
An example that might be displayed would be :
test.c: 202
example.c: 124
example.h: 43
Total: 369
I'd like to find the shortest solution possible. I've experimented with many commands, like:
find . -name '*.c' -o -name '*.h' | xargs wc -l
== it shows 0 ./path/test.c and the total, but isn't close enough
stat -c "%n:%s" *
== it shows test.c:0, but it shows all file types and doesn't show the number of lines or the total
wc -l *.c *.h | tr ' ' ':'
== it shows 0:test.c and the total, but it doesn't search sub-directories, and the order is reversed compared to the problem (number_of_lines:filename instead of filename: number_of_lines).
This one is closer to the answer, but I'm out of ideas after trying most of the commands I saw in similar problems.
This should do it:
wc -l *.c *.h | awk '{print $2 ": " $1}'
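If you also want the final line to read Total: rather than wc's lowercase total, a small extension of the same awk approach (a sketch) handles that line specially:
wc -l *.c *.h | awk '$2 == "total" {print "Total: " $1; next} {print $2 ": " $1}'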
Run a subshell in xargs, piping it the list of filenames (for example, from find):
xargs -n1 sh -c 'printf "%s: %s\n" "$1" "$(wc -l <"$1")"' --
xargs -n1 sh -c 'echo "$1 $(wc -l <"$1")"' --
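Neither variant sorts the names or prints a grand total; a minimal sketch (assuming filenames contain no whitespace) that produces exactly the requested output:
total=0
for f in $(printf '%s\n' *.c *.h | sort); do
  n=$(wc -l < "$f")              # line count of this file
  printf '%s: %s\n' "$f" "$n"
  total=$((total + n))
done
printf 'Total: %s\n' "$total"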

Bash Console putting some invisible chars into string var [closed]

Below I have shared my console session. I want to cut some strings from the output of some commands.
But there are 17 extra characters, and I have no idea where they come from.
Can someone please explain this to me?
$ ls -al | grep total | sed 's/[[:blank:]].*$//' | wc -m
23
$ ns="total"
$ echo $ns | sed 's/[[:blank:]].*$//' | wc -c
6
But there are 17 extra characters, and I have no idea where they come from.
Those are ANSI escape codes that grep uses for coloring matching substrings. You probably have an alias (run alias | grep grep to examine) like
alias grep='grep --color=always'
somewhere that causes grep to color matches even if output is not a tty, or something similar.
Try
ls -al | grep --color=never total | sed 's/[[:blank:]].*$//' | wc -m
and you'll get six.
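You can inspect the extra bytes directly by piping the output through od -c (or cat -v). With GNU grep's default color settings (an assumption; the exact escape sequences are configurable via GREP_COLORS) the arithmetic works out:
ls -al | grep --color=always total | sed 's/[[:blank:]].*$//' | od -c
# \033[01;31m\033[K before the match: 11 bytes
# \033[m\033[K after the match: 6 bytes
# 11 + 6 = 17 extra bytes, plus "total" (5) and the newline (1) = 23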

Compare ZIP file with dir with shell command [closed]

I have compressed a lot of files with zip from infozip.org.
How do I make sure that the zip file contains all of the original files? Or is there a GUI tool to do it?
You can install a command-line tool called unzip and run
$ unzip -l yourzipfile.zip
The files contained in yourzipfile.zip will be listed.
========
To verify the files automatically, you can follow these steps.
If the files compressed into yourzipfile.zip are in dir1, first unzip yourzipfile.zip into dir2; then you can compare the files in dir1 and dir2 by running
$ diff --brief -r dir1/ dir2/
I tried to do this myself; you can string together a few things to do it without unzipping to a directory:
diff <(unzip -l foo.zip | cut -d':' -f2 | cut -d' ' -f4-100 | sed 's/\/$//' | sort) <(find somedir/ | sort)
Basic breakdown is:
Use diff to compare output streams of 2 commands
diff <(command1) <(command2)
Use unzip -l, and process the output. I used 2 cuts to get just the filenames, remove trailing / on directories, and finally sort:
unzip -l foo.zip | cut -d':' -f2 | cut -d' ' -f4-100 | sed 's/\/$//' | sort
For the directory listing, a simple find and sort
find somedir/ | sort
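A possibly simpler variant, sketched under the assumption that zipinfo (shipped with Info-ZIP's unzip) is available, that GNU find is used (for -mindepth), and that the archive members are stored relative to somedir/:
diff <(zipinfo -1 foo.zip | sed 's|/$||' | sort) <(cd somedir && find . -mindepth 1 | sed 's|^\./||' | sort)
zipinfo -1 prints just the member names, one per line, so the cut parsing of the unzip -l output can be skipped.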

awk: Iterate through content of a large list of files [closed]

So, I have about 60k-70k vCard files and want to check (or, at this point, count) which vCards contain a mail address (EMAIL;INTERNET:me#my-domain.com).
I tried to pass the output of find to awk, but I only get awk to work on the list of files, not on each file's content. How can I get awk to do that? I have tried several combinations of find, xargs, and awk, but I can't get it to work properly.
Thanks for your help,
Wolle
I'd probably use grep for this.
If you want to extract the addresses from the files:
grep -rio "EMAIL;INTERNET:.*#[a-z0-9-]*\.[a-z]*" *
Use cut, sed, or awk to remove the leading EMAIL;INTERNET: prefix (add -h to the grep above so it doesn't also prefix each match with the filename):
... | cut -d: -f2
... | sed "s/.*://"
... | awk -F: '{print $2}'
If you want the names of the files containing a particular address:
grep -ril "EMAIL;INTERNET:me#my-domain\.com" *
If grep can't process that many files at once, drop the -r option and try with find and xargs:
find /start/dir -name "*.vcf" -print0 | xargs -0 -I {} grep -io "..." {}
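Since the goal is to count the matching files, a short sketch (reusing the address pattern from above) lists each matching file once with -l and counts the output lines:
find /start/dir -name "*.vcf" -print0 | xargs -0 grep -li "EMAIL;INTERNET:.*#[a-z0-9-]*\.[a-z]*" | wc -l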
grep's recursive mode can do this too; note that the + operator needs -E, and -l lists each matching file once:
grep -rlE 'EMAIL.+#' .

How to sort multiple files? Unix [closed]

Usually I can do this to sort a text file:
cat infile.txt | sort > outfile.out
mv outfile.out infile.txt
I can also do it in a loop:
for inp in ./*; do
fname=${inp##*/}
cat "$inp" | sort > ./"$fname".out
done
Other than writing a loop, is there a one liner to do the above for all files in the terminal?
This strikes me as an absurd exercise, since there's nothing wrong with a loop, but you can do:
ls | xargs -n 1 sh -c 'sort "$1" > "$1".tmp; mv "$1".tmp "$1"' sh
With GNU sort you can do:
$ sort file -o file
You could use xargs instead of looping like:
$ ls | xargs -i% -n1 sort % -o %
If you don't have the -o option:
$ sort file > tmp && mv tmp file
$ ls | xargs -n1 sh -c 'sort "$1" > tmp && mv tmp "$1"' sh
The redirection and the mv have to run inside the subshell; in the form ls | xargs -i% -n1 sort % > tmp && mv tmp %, the > tmp applies to the output of xargs itself and the final % is never substituted.
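Another loop-free one-liner, a sketch assuming GNU find (for -maxdepth) and a sort that supports -o (sort reads all of its input before writing the output file, so sorting a file onto itself is safe):
find . -maxdepth 1 -type f -exec sort -o {} {} \;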
