Combine number of lines of more files with filename [closed] - linux

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Specify a command or pipeline that displays the number of lines of code in the .c and .h files in the current directory: each file in alphabetical order, followed by ":" and its line count, and finally the total number of lines of code.
An example of the expected output:
test.c: 202
example.c: 124
example.h: 43
Total: 369
I'd like to find a solution in the shortest form possible. I've experimented with many commands, like:
find . -name '*.c' -o -name '*.h' | xargs wc -l
== it shows 0 ./path/test.c and the total, but isn't close enough
stat -c "%n:%s" *
== it shows test.c:0, but it shows all file types and doesn't show the number of lines or the total
wc -l *.c *.h | tr ' ' ':'
== it shows 0:test.c and the total, but doesn't search in sub-directories and the order is reversed compared to the problem (filename: number_of_lines).
This one is closer to the answer but I'm out of ideas after searching most commands I saw in similar problems.

This should do it:
wc -l *.c *.h | awk '{print $2 ": " $1}'
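As a sanity check, here's a self-contained sketch of that answer extended to the exact requested format (alphabetical "name: lines" rows followed by a Total line). The directory and file names are invented for the demo, and filenames are assumed to contain no whitespace.

```shell
# Build a throwaway directory with known line counts.
dir=$(mktemp -d)
printf '1\n2\n' > "$dir/example.c"    # 2 lines
printf '1\n' > "$dir/example.h"       # 1 line
printf '1\n2\n3\n' > "$dir/test.c"    # 3 lines
# Sort by filename, hold wc's own "total" row, and reprint it last.
result=$(cd "$dir" && wc -l *.c *.h | sort -k2 \
    | awk '$2 == "total" { t = $1; next } { print $2 ": " $1 }
           END { print "Total: " t }')
echo "$result"
rm -r "$dir"
```

wc already emits a trailing total row, so the awk script only has to capture it and reprint it last with the requested capitalization.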

Run a subshell in xargs, feeding it the filenames from find:
find . -name '*.c' -o -name '*.h' | xargs -n1 sh -c 'printf "%s: %s\n" "$1" "$(wc -l <"$1")"' --
find . -name '*.c' -o -name '*.h' | xargs -n1 sh -c 'echo "$1: $(wc -l <"$1")"' --
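A runnable sketch of the printf variant, fed by find over a throwaway directory (the file names are invented for the demo):

```shell
dir=$(mktemp -d)
printf '1\n2\n' > "$dir/a.c"    # 2 lines
printf '1\n' > "$dir/b.h"       # 1 line
# Each filename becomes $1 in the subshell; "--" fills the $0 slot.
result=$(find "$dir" -name '*.c' -o -name '*.h' | sort \
    | xargs -n1 sh -c 'printf "%s: %s\n" "$1" "$(wc -l <"$1")"' --)
echo "$result"
rm -r "$dir"
```

Redirecting the file into wc (`wc -l <"$1"`) keeps the filename out of wc's own output, so only the count is substituted.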

How to use grep/egrep to count files in subdirectories containing a "String" [closed]

Closed 2 years ago.
What I want to achieve is to count all the files in a directory that contain a pattern string, without counting errors.
I have tried a few commands, but nothing seems to work. This is what I have tried so far:
ls -l grep -cri "string" | wc -l
ls /path/ 2> /dev/null | grep -ci 'string' | wc -l
ls -l | grep -v ^l "string" | wc -l
Use the -l option to list just the filename when the contents match the pattern. Use the -r option to recurse into subdirectories. Use the -F option to match the string exactly, rather than as a regular expression.
You need to tell it the name of the directory to recurse into; you can use . for the current directory.
Then pipe this to wc -l:
grep -rlF "string" . 2>/dev/null | wc -l
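A quick self-contained check of that pipeline (the file names and the "needle" string below are invented for the demo):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/sub"
echo 'needle here'  > "$dir/one.txt"
echo 'nothing'      > "$dir/two.txt"
echo 'needle again' > "$dir/sub/three.txt"
# -r recurse, -l names only, -F literal match; wc -l counts the names.
count=$(grep -rlF "needle" "$dir" 2>/dev/null | wc -l)
echo "$count"
rm -r "$dir"
```

Two of the three files contain the string, so the pipeline prints 2.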
If you want to count the files in a directory with "string" in the name you can do it like this:
ls -l | grep -c "string"
or if you want it to be case insensitive use -i
ls -l | grep -ci "string"
-c will print the count of matching lines.

Bash Console putting some invisible chars into string var [closed]

Closed 3 years ago.
Below I share my console session. I want to cut a string out of the output of some commands,
but there are 17 extra chars and I have no idea where they come from.
Can someone please explain?
$ ls -al | grep total | sed 's/[[:blank:]].*$//' | wc -m
23
$ ns="total"
$ echo $ns | sed 's/[[:blank:]].*$//' | wc -c
6
But there are 17 extra chars which I have no idea where they come from.
Those are ANSI escape codes that grep uses for coloring matching substrings. You probably have an alias (run alias | grep grep to examine) like
alias grep='grep --color=always'
somewhere that causes grep to color matches even if output is not a tty, or something similar.
Try
ls -al | grep --color=never total | sed 's/[[:blank:]].*$//' | wc -m
and you'll get six.
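You can see the difference directly by forcing the coloring on and off. The exact byte count of the colored form depends on the grep version, so this sketch only assumes it is larger than the plain one:

```shell
plain=$(printf 'total 42\n' | grep --color=never total \
    | sed 's/[[:blank:]].*$//' | wc -m)
colored=$(printf 'total 42\n' | grep --color=always total \
    | sed 's/[[:blank:]].*$//' | wc -m)
# "total" plus the trailing newline is 6 characters; the colored variant
# carries the ANSI escape sequences on top of that.
echo "plain=$plain colored=$colored"
```

The escape sequences contain no blanks, so the sed truncation leaves them all in place, which is exactly why wc -m counts them.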

A bash script to count the number of all files [closed]

Closed 6 years ago.
I just started to learn Linux.
What I want to do is write a bash script that prints the file name, the number of lines, and the number of words to stdout, for all files in the directory,
for example: Apple.txt 15 155
I don't know how to write a command that works for all the files in a directory.
Based on your most recent comment, I would say you want something like:
wc -lw ./* | awk '{print $3 "\t" $1 "\t" $2}'
Note that you will get a line in the output (from stderr) for each directory that looks something like:
wc: ./this-is-a-directory: Is a directory
If the message about directories is undesirable, you can suppress stderr messages by adding 2>/dev/null to the wc command, like this:
wc -lw ./* 2>/dev/null | awk '{print $3 "\t" $1 "\t" $2}'
Try this:
wc -lw ./*
It will be in the format of <lines> <words> <filename>.
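A self-contained sketch of the reordered output (the file name and contents are invented for the demo):

```shell
dir=$(mktemp -d)
printf 'one two\nthree\n' > "$dir/Apple.txt"   # 2 lines, 3 words
# wc prints "lines words name"; awk reorders that to "name lines words"
# and drops the trailing total row when there is one.
result=$(cd "$dir" && wc -lw ./* 2>/dev/null \
    | awk '$3 != "total" { print $3 "\t" $1 "\t" $2 }')
echo "$result"
rm -r "$dir"
```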

Compare ZIP file with dir with shell command [closed]

Closed 2 years ago.
I have compressed a lot of files with zip from infozip.org.
How do I make sure that the zip file contains all of the original files? Or is there a GUI tool to do it?
You can install a command line tool called unzip, and run
$ unzip -l yourzipfile.zip
Files contained in yourzipfile.zip will be listed.
========
To verify files automatically, you can follow these steps.
If the files compressed into yourzipfile.zip are in dir1, you can first unzip yourzipfile.zip into dir2, then compare the files in dir1 and dir2 by running
$ diff --brief -r dir1/ dir2/
I tried this myself; you can string together a few commands to do it without unzipping to a directory:
diff <(unzip -l foo.zip | cut -d':' -f2 | cut -d' ' -f4-100 | sed 's/\/$//' | sort) <(find somedir/ | sort)
Basic breakdown is:
Use diff to compare output streams of 2 commands
diff <(command1) <(command2)
Use unzip -l, and process the output. I used 2 cuts to get just the filenames, remove trailing / on directories, and finally sort:
unzip -l foo.zip | cut -d':' -f2 | cut -d' ' -f4-100 | sed 's/\/$//' | sort
For the directory listing, a simple find and sort
find somedir/ | sort
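The diff <(command1) <(command2) pattern is bash process substitution, so it needs bash rather than plain sh. Here is a self-contained sketch of the same comparison, with a hand-written manifest standing in for the unzip -l output (directory and file names invented for the demo):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)
mkdir -p "$dir/tree"
touch "$dir/tree/a.txt" "$dir/tree/b.txt"
# Stand-in for the processed unzip -l output: the names the archive
# is supposed to contain, one per line.
printf 'tree\ntree/a.txt\ntree/b.txt\n' > "$dir/manifest"
if diff <(sort "$dir/manifest") <(cd "$dir" && find tree | sort) >/dev/null
then result=match
else result=differs
fi
echo "$result"
rm -r "$dir"
```

diff exits non-zero when the streams differ, so the branch doubles as the pass/fail signal.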

How to sort multiple files? Unix [closed]

Closed 9 years ago.
Usually I do this to sort a text file:
cat infile.txt | sort > outfile.out
mv outfile.out infile.txt
I can also do it in a loop:
for inp in ./*; do
fname=${inp##*/}
cat "$inp" | sort > ./"$fname".out
done
Other than writing a loop, is there a one-liner to do the above for all files from the terminal?
This strikes me as an absurd exercise since there's nothing wrong with a loop, but you can do:
ls | xargs -n 1 sh -c 'sort "$1" > "$1.tmp"; mv "$1.tmp" "$1"' sh
With GNU sort you can do:
$ sort file -o file
You could use xargs instead of a loop, like:
$ ls | xargs -I% sort % -o %
If you don't have the -o option:
$ sort file > tmp && mv tmp file
$ ls | xargs -n1 sh -c 'sort "$1" > "$1.tmp" && mv "$1.tmp" "$1"' sh
(the redirection has to happen inside the subshell; written after the xargs command it would be performed once by the parent shell instead of per file.)
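As a loop-free variant, find can invoke sort once per file directly; this sketch relies on GNU sort's -o flag, which safely writes back to its own input (the temporary directory and file contents are invented for the demo):

```shell
dir=$(mktemp -d)
printf 'b\na\n' > "$dir/one.txt"
printf 'z\ny\n' > "$dir/two.txt"
# -o lets sort read and write the same file without a temp file.
find "$dir" -maxdepth 1 -type f -exec sort -o {} {} \;
result=$(cat "$dir/one.txt")
echo "$result"
rm -r "$dir"
```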