This command will count the number of files in the sub-directories.
find . -maxdepth 1 -type d | while read dir; do echo "$dir"; find "$dir" -type f | wc -l; done
The output looks like this:
./lib64
327
./bin
118
Would it be possible to have it look like
327 ./lib64
118 ./bin
instead?
There are a number of ways to do this... Here's something that doesn't change your code very much. (I've put it in multiple lines for readability.)
find . -maxdepth 1 -type d | while read dir; do
    echo `find "$dir" -type f | wc -l` "$dir"
done
Pipe into tr to remove or replace newlines. I expect you want the newline turned into a tab character, like this:
find . -maxdepth 1 -type d | while read dir; do
    find "$dir" -type f | wc -l | tr '\n' '\t'
    echo "$dir"
done
(Edit: I had them the wrong way around)
do echo -n "$dir "
The -n prevents echo from ending the line afterwards.
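Putting that suggestion into the original loop gives a sketch like this (note it prints the directory name first and the count second; the earlier answers show how to put the count first):

```shell
# -n keeps the cursor on the same line, so the count produced by
# wc -l lands right after the directory name.
find . -maxdepth 1 -type d | while read -r dir; do
    echo -n "$dir "
    find "$dir" -type f | wc -l
done
```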
Ubuntu 18.04 LTS with bash 4.4.20
I am trying to count the number of files in each directory, starting in the directory where I executed the script. Borrowing from other coders, I found and modified the script below. I am trying to make it print a total at the end, but I can't seem to get it. Also, the script runs the same count twice each loop, which is inefficient; I inserted that extra find command because I could not get the result of the nested 'find | wc -l' to be stored in a variable, and it still didn't work.
Thanks!
#!/bin/bash
count=0
find . -maxdepth 1 -mindepth 1 -type d | sort -n | while read dir; do
    printf "%-25.25s : " "$dir"
    find "$dir" -type f | wc -l
    filesthisdir=$(find "$dir" -type f | wc -l)
    count=$count+$filesthisdir
done
echo "Total files : $count"
Here are the results. It should total them up at the end; otherwise, this works well.
./1800wls1 : 1086
./1800wls2 : 1154
./1900wls-in1 : 780
./1900wls-in2 : 395
./1900wls-in3 : 0
./1900wls-out1 : 8
./1900wls-out2 : 304
./1900wls-out3 : 160
./test : 0
Total files : 0
This doesn't work because the while loop is executed in a subshell. By using <<< you make sure it's executed in the current shell.
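A minimal, self-contained demonstration of that difference, as a sketch you can paste into bash:

```shell
# The pipe creates a subshell: increments there never reach the parent.
count=0
printf 'a\nb\n' | while read -r line; do
    ((count++))
done
echo "$count"   # prints 0

# A here-string keeps the loop in the current shell.
count=0
while read -r line; do
    ((count++))
done <<< "$(printf 'a\nb')"
echo "$count"   # prints 2
```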
#!/bin/bash
count=0
while read dir; do
    printf "%-25.25s : " "$dir"
    find "$dir" -type f | wc -l
    filesthisdir=$(find "$dir" -type f | wc -l)
    ((count+=filesthisdir))
done <<< "$(find . -maxdepth 1 -mindepth 1 -type d | sort -n)"
echo "Total files : $count"
Of course you also can make use of a for loop:
for i in $(find . -maxdepth 1 -mindepth 1 -type d | sort -n); do
    # do something (beware: this splits on whitespace in directory names;
    # quoting the substitution would make the loop run only once)
done
Use (( count += filesthisdir )) and think about counting files whose names contain newlines.
You should change your find command:
filesthisdir=$(find "$dir" -type f -exec echo . \;| wc -l)
I have a folder and I want to count all regular files in it, and for this I use this bash command:
find pathfolder -type f 2> err.txt | wc -l
In the folder there are 3 empty text files and a subfolder that contains other text files.
For this reason I should get 3 as a result, but I get 6 and I don't understand why. Maybe there is some option that I did not set.
If I remove the subfolder I get 4 as a result.
To grab all the files and directories in current directory without dot files:
shopt -u dotglob
all=(*)
To grab only directories:
dirs=(*/)
To count only non-dot files in current directory:
echo $(( ${#all[@]} - ${#dirs[@]} ))
To do this with find use:
find . -maxdepth 1 -type f ! -name '.*' -exec printf '%.0s.\n' {} + | wc -l
The solutions below ignore filenames starting with a dot.
To count the files in pathfolder only:
find pathfolder -maxdepth 1 -type f -not -path '*/\.*' | wc -l
To count the files in ALL child directories of pathfolder:
find pathfolder -mindepth 2 -maxdepth 2 -type f -not -path '*/\.*' | wc -l
UPDATE: Converting comments into an answer
Based on the suggestions received from anubhava: if you create a dummy file with touch $'foo\nbar' (a filename containing a newline), wc -l counts that one filename twice, as in this example:
$> touch $'foo\nbar'
$> find . -type f
./foo?bar
$> find . -type f | wc -l
2
To avoid this, get rid of the newlines before calling wc (anubhava's solution):
$> find . -type f -exec printf '%.0sbla\n' {} +
bla
$> find . -type f -exec printf '%.0sbla\n' {} + | wc -l
1
or avoid calling wc at all:
$> find . -type f -exec sh -c 'i=0; for f; do i=$((i+1)); done; echo "$i"' sh {} +
1
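A related trick (assuming GNU find's -print0): filenames may contain newlines but never NUL bytes, so each file contributes exactly one NUL to the output, and counting the NULs counts the files:

```shell
# Each -print0 record ends in exactly one NUL byte; delete everything
# else with tr and count the remaining bytes.
find . -type f -print0 | tr -dc '\0' | wc -c
```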
I am looking to combine the output of the Linux find and head commands (to derive a list of filenames) with the output of another Linux/bash command, and save the result in a file such that each filename from the "find" appears on a separate line together with the other command's output.
So for example,
- if a dir testdir contains files a.txt, b.txt and c.txt,
- and the output of the other command is some number say 10, the desired output I'm looking for is
10 a.txt
10 b.txt
10 c.txt
On searching here, I saw folks recommending paste for similar merging, but I couldn't figure out how to do it in this scenario, as paste seems to be expecting files. I tried
paste $(find testdir -maxdepth 1 -type f -name "*.text" | head -2) $(echo "10") > output.txt
paste: 10: No such file or directory
Would appreciate any pointers as to what I'm doing wrong. Any other ways of achieving the same thing are also welcome.
Note that if I wanted to make everything appear on the same line, I could use xargs and that does the job.
$ find testdir -maxdepth 1 -type f -name "*.text" | head -2 | xargs echo "10" > output.txt
$ cat output.txt
10 a.txt b.txt
But my requirement is to merge the two command outputs as shown earlier.
Thanks in advance for any help!
find can handle both the -exec and -print actions; you just need to merge the output:
$ find . -maxdepth 1 -type f -name \*.txt -exec echo hello \; -print | paste - -
hello ./b.txt
hello ./a.txt
hello ./all.txt
Assuming your "command" requires the filename (here's a very contrived example):
$ find . -maxdepth 1 -type f -name \*.txt -exec sh -c 'wc -l <"$1"' _ {} \; -print | paste - -
4 ./b.txt
4 ./a.txt
7 ./all.txt
Of course, that's executing the command for each file. To restrict myself to your question:
cmd_out=$(echo 10)
for file in *.txt; do
    echo "$cmd_out $file"
done
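Sticking to the question's single-value case, another sketch prefixes every filename with awk (the pattern is changed to *.txt to match the example filenames, and num stands in for whatever the other command prints):

```shell
num=$(echo 10)   # stand-in for "the other command"
find testdir -maxdepth 1 -type f -name "*.txt" | head -2 |
    awk -v n="$num" '{print n, $0}' > output.txt
```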
Try this:
$ find testdir -maxdepth 1 -type f -name "*.text" | head -2 | sed 's/^/10 /' > output.txt
(sed -i edits files in place, so it can't be used at the end of a pipeline, and the tr step isn't needed: find already prints one filename per line.)
You can make xargs operate on one line at a time using -L1:
find testdir -maxdepth 1 -type f -name "*.text" | xargs -L1 echo "10" > output.txt
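For example (with the pattern switched to *.txt to match the sample filenames, which is an assumption on my part):

```shell
# xargs -L1 invokes echo once per input line, so each filename
# gets its own "10 <name>" output line.
mkdir -p testdir && touch testdir/a.txt testdir/b.txt
find testdir -maxdepth 1 -type f -name "*.txt" | xargs -L1 echo "10"
```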
Is there some way to make this work?
pFile=find ${destpath} (( -iname "${mFile##*/}" )) -o (( -iname "${mFile##*/}" -a -name "*[],&<>*?|\":'()[]*" )) -exec printf '.' \;| wc -c
I need pFile to return the number of files with the same filename, or 0 if there are none.
I have to do this because if I only use:
pFile=$(find ${destpath} -iname "${mFile##*/}" -exec printf '.' \; | wc -c)
it doesn't match when filenames contain metacharacters.
Thanks
EDIT:
"${mFile##*/}" expands to the file name from the start folder, without its path:
echo "${mFile##*/}" -> goofy.mp3
Example:
In the start folder I have:
goofy.mp3 - mickey[1].avi - donald(2).mkv - scrooge.3gp
In the destination folder I have:
goofy.mp3 - mickey[1].avi - donald(2).mkv - donald(1).mkv - donald(3).mkv - minnie.iso
I want this:
echo $pFile -> 3
With:
pFile=$(find ${destpath} -iname "${mFile##*/}" -exec printf '.' \; | wc -c)
echo $pFile -> 2
With:
pFile=$(find ${destpath} -name "*[],&<>*?|\":'()[]*" -exec printf '.' \; | wc -c)
echo $pFile -> 4
By "same file name" I mean:
/path1/mickey[1].avi = /path2/mickey[1].avi
I am not sure I understood your intended semantics of ${mFile##*/}; however, looking at your start/destination folder example, I created the following use-case directory structure and the script below to solve your issue:
$ find root -type f | sort -t'/' -k3
root/dir2/donald(1).mkv
root/dir1/donald(2).mkv
root/dir2/donald(2).mkv
root/dir2/donald(3).mkv
root/dir1/goofy.mp3
root/dir2/goofy.mp3
root/dir1/mickey[1].avi
root/dir2/mickey[1].avi
root/dir2/minnie.iso
root/dir1/scrooge.3gp
Now, the following script (I've used gfind to indicate that you need GNU find for this to work; if you're on Linux, just use find):
$ pFile=$(($(gfind root -type f -printf "%f\n" | wc -l) - $(gfind root -type f -printf "%f\n" | sort -u | wc -l)))
$ echo $pFile
3
I'm not sure this solves your issue; however, it does print the number you expected in your provided example.
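To illustrate the subtraction, here is a minimal reproduction on a throwaway tree (GNU find assumed for -printf):

```shell
# Two copies of goofy.mp3 share a basename; everything else is unique.
mkdir -p root/dir1 root/dir2
touch root/dir1/goofy.mp3 root/dir2/goofy.mp3 root/dir2/minnie.iso

all=$(find root -type f -printf "%f\n" | wc -l)                # 3 files in total
unique=$(find root -type f -printf "%f\n" | sort -u | wc -l)   # 2 distinct basenames
echo $((all - unique))   # prints 1: one duplicated copy
```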
I am trying to list all directories and place its number of files next to it.
I can find the total number of files with ls -lR | grep .*.mp3 | wc -l, but how can I get an output like this:
dir1 34
dir2 15
dir3 2
...
I don't mind writing to a text file or CSV to get this information if it's not possible to get it on screen.
Thank you all for any help on this.
This seems to work, assuming you are in a directory where some subdirectories may contain mp3 files. It omits the top-level directory and lists the directories in order by largest number of contained mp3 files.
find . -mindepth 2 -name \*.mp3 -print0| xargs -0 -n 1 dirname | sort | uniq -c | sort -r | awk '{print $2 "," $1}'
I updated this with print0 to handle filenames with spaces and other tricky characters and to print output suitable for CSV.
find . -type f -iname '*.mp3' -printf "%h\n" | uniq -c
Or, if order (dir-> count instead of count-> dir) is really important to you:
find . -type f -iname '*.mp3' -printf "%h\n" | uniq -c | awk '{print $2" "$1}'
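One caveat worth knowing: uniq only merges adjacent lines, and find can interleave a subdirectory's files between its parent's entries, so a directory may show up more than once in the counts. Sorting first avoids that (GNU find assumed for -printf):

```shell
# Sort the parent-directory lines so uniq -c sees each directory contiguously.
find . -type f -iname '*.mp3' -printf "%h\n" | sort | uniq -c
```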
There's probably much better ways, but this seems to work.
Put this in a shell script:
#!/bin/sh
for f in *
do
    if [ -d "$f" ]
    then
        cd "$f"
        c=`ls -l *.mp3 2>/dev/null | wc -l`
        if test $c -gt 0
        then
            echo "$f $c"
        fi
        cd ..
    fi
done
With Perl:
perl -MFile::Find -le'
    find {
        wanted => sub {
            return unless /\.mp3$/i;
            ++$_{$File::Find::dir};
        }
    }, ".";
    print "$_,$_{$_}" for
        sort { $_{$b} <=> $_{$a} } keys %_;
'
Here's yet another way to even handle file names containing unusual (but legal) characters, such as newlines, ...:
# count .mp3 files (using GNU find)
find . -xdev -type f -iname "*.mp3" -print0 | tr -dc '\0' | wc -c
# list directories with number of .mp3 files
find "$(pwd -P)" -xdev -depth -type d -exec bash -c '
    for (( i=1; i<=$#; i++ )); do
        d="${@:i:1}"
        mp3s="$(find "${d}" -xdev -type f -iname "*.mp3" -print0 | tr -dc "${0}" | wc -c)"
        [[ $mp3s -gt 0 ]] && printf "%s\n" "${d}, ${mp3s// /}"
    done
' "'\\0'" '{}' +