I have a directory with thousands of files (about 100K for now). When I use wc -l ./*, I get:
c1 ./test1.txt
c2 ./test2.txt
...
cn ./testn.txt
c1+c2+...+cn total
Because there are a lot of files in the directory, I just want to see the total count and not the details. Is there any way to do so?
I tried several ways and got the following error:
Argument list too long
If what you want is the total number of lines and nothing else, then I would suggest the following command:
cat * | wc -l
This concatenates the contents of all of the files in the current working directory and pipes the resulting blob of text through wc -l.
I find this to be quite elegant. Note that the command produces no extraneous output.
UPDATE:
I didn't realize your directory contained so many files. In light of this information, you should try this command:
for file in *; do cat "$file"; done | wc -l
Most people don't know that you can pipe the output of a for loop directly into another command.
Beware that this could be very slow. If you have 100,000 or so files, my guess would be around 10 minutes. This is a wild guess because it depends on several parameters that I'm not able to check.
If you need something faster, you should write your own utility in C. You could make it surprisingly fast if you use pthreads.
Hope that helps.
LAST NOTE:
If you're interested in building a custom utility, I could help you code one up. It would be a good exercise, and others might find it useful.
Credit: this builds on @lifecrisis's answer, and extends it to handle large numbers of files:
find . -maxdepth 1 -type f -exec cat {} + | wc -l
find will find all of the files in the current directory, break them into groups as large as can be passed as arguments, and run cat on the groups.
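If your find does not support -exec … + (or you simply prefer xargs), the same batching behaviour is available with -print0 and xargs -0; a sketch:
find . -maxdepth 1 -type f -print0 | xargs -0 cat | wc -l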
awk 'END {print NR" total"}' ./*
Comparing this with wc -l would be interesting, as a way to find out how many lines don't end with a newline: awk counts a final unterminated line as a record, while wc -l only counts newline characters.
Combining the awk approach with Gordon's find solution, and avoiding the "." (hidden) files.
find ./* -maxdepth 0 -type f -exec awk 'END {print NR}' {} +
No idea if this is better or worse, but it does give a more accurate count (for me) and does not count lines in "." files. Using ./* is just a guess that appears to work.
A depth is still needed, and ./* requires a -maxdepth of 0.
I did get the same result with the "cat" and "awk" solutions (using the same find), since cat * takes care of the missing-newline issue. I don't have a directory with enough files to measure time. Interesting; I'm liking the "cat" solution.
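To see that difference concretely, here is a minimal demonstration of the two counts on a file whose last line has no trailing newline:
$ printf 'one\ntwo' > unterminated.txt
$ wc -l < unterminated.txt
1
$ awk 'END {print NR}' unterminated.txt
2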
This will give you the total count for all the files (including hidden files) in your current directory:
$ find . -maxdepth 1 -type f | xargs wc -l | grep total
1052 total
To count lines for files excluding hidden files, use:
find . -maxdepth 1 -type f -not -path "*/\.*" | xargs wc -l | grep total
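One caveat: with enough files, xargs splits the list across several wc invocations, each of which prints its own "total" line; summing those lines guards against that (a sketch, also using -print0 to be safe with odd filenames):
find . -maxdepth 1 -type f -print0 | xargs -0 wc -l | awk '/ total$/ {sum += $1} END {print sum}'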
(Apologies for adding this as an answer, but I do not have enough reputation to comment.)
A comment on @lifecrisis's answer: perhaps cat is slowing things down a bit. We could replace cat with wc -l and then use awk to add the numbers. (This could be faster, since much less data needs to go through the pipe.)
That is
for file in *; do wc -l "$file"; done | awk '{sum += $1} END {print sum}'
instead of
for file in *; do cat "$file"; done | wc -l
(Disclaimer: I am not incorporating many of the improvements in other answers, but I thought the point was valid enough to write down.)
Here are my results for comparison (I ran the newer version first so that any cache effects would go against the newer candidate).
$ time for f in `seq 1 1500`; do head -c 5M </dev/urandom >myfile-$f |sed -e 's/\(................\)/\1\n/g'; done
real 0m50.360s
user 0m4.040s
sys 0m49.489s
$ time for file in myfile-*; do wc -l "$file"; done | awk '{sum += $1} END {print sum}'
30714902
real 0m3.455s
user 0m2.093s
sys 0m1.515s
$ time for file in myfile-*; do cat "$file"; done | wc -l
30714902
real 0m4.481s
user 0m2.544s
sys 0m4.312s
If you want to know only the number of entries in the directory, excluding ls's "total" line:
ls -ltr | sed -n '/total/!p' | awk 'END {print NR}'
The command above counts entries in the listing, not lines inside the files.
The command below will give the total count of lines from all files in the path:
for i in `ls -ltr | awk '$1~"^-rw"{print $9}'`; do wc -l $i | awk '{print $1}'; done >> /var/tmp/filelinescount.txt
cat /var/tmp/filelinescount.txt | sed -r "s/\s+//g" | tr "\n" "+" | sed "s:+$::g" | sed 's/^/"/g' | sed 's/$/"/g' | awk '{print "echo " $0 " | bc"}' | sh
Related
I would like to do a find in some dir, run awk on the files in this directory, and then replace the original files with each result.
find dir | xargs cat | awk ... | mv ... > filename
So I need the filename (of each of the files found by find) in the last command. How can I do that?
I would use a loop, like:
for filename in `find . -name "*test_file*" -print0 | xargs -0`
do
# some processing, then
echo "what you like" > "$filename"
done
EDIT: as noted in the comments, the benefits of -print0 | xargs -0 are lost because of the for loop, and filenames containing whitespace are still not handled correctly.
The following while loop would not handle unusual filenames either (good to know, though that was not in the question), but it does at least handle filenames containing ordinary spaces, so it works better:
find . -name "*test*file*" -print > files_list
while IFS= read -r filename
do
# some process
echo "what you like" > "$filename"
done < files_list
You could do something like this (but I wouldn't recommend it at all).
find dir -print0 |
xargs -0 -n 2 awk -v OFS='\0' '<process the input and write to temporary file>
END {print "temporaryfile", FILENAME}' |
xargs -0 -n 2 mv
This passes the files to awk directly, two at a time. That avoids the problem with your original, where cat gets hundreds (perhaps more) of files as arguments all at once and spits all of their content at awk via standard input, losing the individual contents and filenames entirely.
It then has awk write the processed output to a temporary file and then outputs the temporary filename and the original filename where xargs picks them up (again two at a time) and runs mv on the pairs of temporary file/original file names.
As I said at the beginning however this is a terrible way to do this.
If you have a new enough version of GNU awk (version 4.1.0 or newer), then you could just use its -i inplace (in-place editing) option:
find dir | xargs gawk -i inplace '......'
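For instance, a sketch that rewrites every file under dir in place, with -print0 handling odd filenames (the uppercasing program is only a placeholder for whatever processing you need):
find dir -type f -print0 | xargs -0 gawk -i inplace '{ print toupper($0) }'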
Without that I would use a while loop of the form in Bash FAQ 001 to read the find output line-by-line and operate on it in the loop.
Hey, so I'm writing a Linux script, and I came across an interesting finding.
I've got a command that sorts the files inside a directory by their size and prints the largest one. The command is as follows:
find . -type f -ls | sort -r -n -k7 | head -n 1
This will print something like:
895918591 8 -r-w-x 1 user01 xdf 1931 28 march 23:21 ./myscript.sh
Now I want to get the largest file's size alone and print it. To separate it out I used cut -d' ' -f2; the issue is, this leaves only empty output. That is because the number of spaces between the fields is inconsistent.
So I tried doing something like this
find . -type f -ls | sort -r -n -k7 | head -n 1 | tr -d [:blank:] | cut -d' ' -f2
The issue is, this removes all the blanks, so now I can't split the fields on a common separator. So I'm asking: is there a way to collapse every run of blanks into a single space?
If not, is there at least some other way to get at that number of bytes?
Sed and Awk are great tools for this kind of thing. Sed is a regex-based language that modifies the contents of each line the Sed program receives, and Awk is also a line-oriented tool that automatically splits its input into fields.
To turn sequences of blanks into one blank (substitute all matches of \s\+ with a single space) in Sed:
$ find ... | sed 's/\s\+/ /g'
To just print the first "word" (sequence of nonspaces) of each line in Awk:
$ find ... | awk '{print $1}'
http://tldp.org/LDP/abs/html/sedawk.html can get you started with these languages.
Instead of cut you can use awk:
find . -type f -ls | sort -r -n -k7 | head -n 1 | awk '{print $2}'
However, you can avoid head as well by using awk:
find . -type f -ls | sort -r -n -k7 | awk '{print $2; exit}'
The tool to convert multiple spaces to just one is called tr -s:
tr translates
s squeezes
Sample:
$ cat a
hello   this is a    sample text    with  multiple      spaces
$ tr -s " " < a
hello this is a sample text with multiple spaces
If you then want to convert every space into X, just pipe to sed 's/ /X/g'.
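In fact, tr can translate and squeeze in one step; a sketch that turns every run of spaces into a single X:
$ tr -s ' ' 'X' < a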
I think you're overthinking the issue at hand:
find -type f -printf "%s\n" | sort -n | tail -n 1
Instead of using cut, you can use find's -printf action, which gives you control over the output format; printing only the size in bytes makes the rest of the pipeline trivial:
find . -type f -printf '%s\n' | sort -r -n | head -n 1
You're doing it wrong.
Parsing ls in any form (like find's -ls option) is a bad approach.
Do not use ls output for anything. ls is a tool for interactively looking at directory metadata. Any attempt at parsing ls output with code is broken.
I strongly suggest you read further about this subject; see Parsing ls.
Instead, use the following function:
# Usage: largest [dir]
largest() {
local f size largest
while read -rd '' f; do
size=$(wc -c < "$f")
if (( size > largest[0] )); then
largest=("$size" "$f")
fi
done < <(find "${1-.}" -type f -print0)
printf '%s is the largest file in %s\n' "${largest[1]}" "${1-.}"
}
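A hypothetical invocation (the filename shown is made up):
$ largest /var/log
/var/log/syslog is the largest file in /var/log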
So, I have 1000 files in folderA.
Let's say:
File_0001, File_0002, File_0003, File_0004, File_0005, . . . , File_1000
Question: how do I select every second file (each even-numbered one) and copy those files into another folder (folderB), so that the files in folderB will be like this:
File_0002, File_0004, File_0006,File_0008, File_0010, . . . , File_1000
Any suggestions will be really appreciated.
Thank you
You can also use a simple cp command:
cp File_*[02468] folderB
ls | sort | xargs -n2 echo | awk '{print $2}' | xargs -I '{}' echo mv '{}' /folderB
the trick is to use | xargs -n2 echo | awk '{print $2}' to get every even-numbered line.
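Note that the echo before mv makes this a dry run that only prints the commands; once the output looks right, drop it, and use cp to copy rather than move, as the question asks (a sketch):
ls | sort | xargs -n2 echo | awk '{print $2}' | xargs -I '{}' cp '{}' /folderB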
Depending on what's actually wanted, I'd say @demostene's answer is probably right. If the OP actually wants alternate files from the list, regardless of possibly skipped numbers, then
cp $(ls | awk 'NR%2 == 0 {print $0}') folderB
would seem to do the trick. Note the obvious extensions for every third, fourth, or Nth file.
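For instance, every third file would be (same ls-parsing and whitespace caveats apply):
cp $(ls | awk 'NR%3 == 0') folderB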
I'm having some rather unusual problems using grep in a bash script. Below is an example of the bash script code that I'm using that exhibits the behaviour:
UNIQ_SCAN_INIT_POINT=1
cat "$FILE_BASENAME_LIST" | uniq -d >> $UNIQ_LIST
sed '/^$/d' $UNIQ_LIST >> $UNIQ_LIST_FINAL
UNIQ_LINE_COUNT=`wc -l $UNIQ_LIST_FINAL | cut -d \ -f 1`
while [ -n "`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`" ]; do
CURRENT_LINE=`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`
CURRENT_DUPECHK_FILE=$FILE_DUPEMATCH-$CURRENT_LINE
grep $CURRENT_LINE $FILE_LOCTN_LIST >> $CURRENT_DUPECHK_FILE
MATCH=`grep -c $CURRENT_LINE $FILE_BASENAME_LIST`
CMD_ECHO="$CURRENT_LINE matched $MATCH times," cmd_line_echo
echo "$CURRENT_DUPECHK_FILE" >> $FILE_DUPEMATCH_FILELIST
let UNIQ_SCAN_INIT_POINT=UNIQ_SCAN_INIT_POINT+1
done
On numerous occasions, when grepping for the current line in the file location list, it has put no output into the current dupechk file, even though there have definitely been matches to the current line in the file location list (running the command in a terminal works with no issues).
I've rummaged around the internet to see if anyone else has had similar behaviour, and thus far all I have found is that it is something to do with buffered and unbuffered outputs from commands operating before the grep command in the Bash script.
However, no one seems to have found a solution, so basically I'm asking whether you have ever come across this, and for any ideas/tips/solutions to this problem.
The 'problem' is the standard I/O library. When it is writing to a terminal it is line-buffered, but if it is writing to a pipe then it sets up block buffering.
try changing
CURRENT_LINE=`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`
to
CURRENT_LINE=`sed "$UNIQ_SCAN_INIT_POINT"'q;d' $UNIQ_LIST_FINAL`
Are there any directories with spaces in their names in $FILE_LOCTN_LIST? Because if there are, those spaces will need to be escaped somehow. Some combination of find and xargs can usually deal with that for you, especially xargs -0.
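Another likely culprit in the script above is the unquoted $CURRENT_LINE: if it contains whitespace, grep receives it as several arguments, and any regex metacharacters in it change the match. Quoting it, and using -F for a literal fixed-string match, avoids both (a sketch):
grep -F -- "$CURRENT_LINE" "$FILE_LOCTN_LIST" >> "$CURRENT_DUPECHK_FILE"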
A small bash script using md5sum and sort that detects duplicate files in the current directory:
CURRENT="" md5sum * |
sort |
while read md5sum filename;
do
[[ $CURRENT == $md5sum ]] && echo $filename is duplicate;
CURRENT=$md5sum;
done
You tagged linux, so I assume you have tools like GNU find, md5sum, uniq, sort, etc. Here's a simple example to find duplicate files:
$ echo "hello world">file
$ md5sum file
6f5902ac237024bdd0c176cb93063dc4 file
$ cp file file1
$ md5sum file1
6f5902ac237024bdd0c176cb93063dc4 file1
$ echo "blah" > file2
$ md5sum file2
0d599f0ec05c3bda8c3b8a68c32a1b47 file2
$ find . -type f -exec md5sum "{}" \; | sort -n | uniq -w32 -D
6f5902ac237024bdd0c176cb93063dc4 ./file
6f5902ac237024bdd0c176cb93063dc4 ./file1
When I use ls or du, I get the amount of disk space each file is occupying.
I need the sum total of all the data in files and subdirectories I would get if I opened each file and counted the bytes. Bonus points if I can get this without opening each file and counting.
If you want the 'apparent size' (that is, the number of bytes in each file), not the size taken up by files on the disk, use the -b or --bytes option (if you have a Linux system with GNU coreutils):
% du -sbh <directory>
Use du -sb:
du -sb DIR
Optionally, add the h option for more user-friendly output:
du -sbh DIR
cd to the directory, then:
du -sh
ftw!
I originally wrote about it here:
https://ao.ms/get-the-total-size-of-all-the-files-in-a-directory/
Just an alternative:
ls -lAR | grep -v '^d' | awk '{total += $5} END {print "Total:", total}'
grep -v '^d' will exclude the directories.
stat's "%s" format gives you the actual number of bytes in a file.
find . -type f |
xargs stat --format=%s |
awk '{s+=$1} END {print s}'
Feel free to substitute your favourite method for summing numbers.
If you use BusyBox's du on an embedded system, you cannot get exact byte counts with du, only kilobytes.
BusyBox v1.4.1 (2007-11-30 20:37:49 EST) multi-call binary
Usage: du [-aHLdclsxhmk] [FILE]...
Summarize disk space used for each FILE and/or directory.
Disk space is printed in units of 1024 bytes.
Options:
-a Show sizes of files in addition to directories
-H Follow symbolic links that are FILE command line args
-L Follow all symbolic links encountered
-d N Limit output to directories (and files with -a) of depth < N
-c Output a grand total
-l Count sizes many times if hard linked
-s Display only a total for each argument
-x Skip directories on different filesystems
-h Print sizes in human readable format (e.g., 1K 243M 2G )
-m Print sizes in megabytes
-k Print sizes in kilobytes(default)
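If kilobytes are all you can get, a rough byte figure (disk usage rounded to the 1024-byte units du reports, not the apparent size) can be derived in the shell; a sketch:
echo $(( $(du -sk . | awk '{print $1}') * 1024 ))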
For Win32 DOS, you can:
C:\> dir /s c:\directory\you\want
and the penultimate line will tell you how many bytes the files take up.
I know this reads all files and directories, but it works faster in some situations.
When a folder is created, many Linux filesystems allocate 4096 bytes to store some metadata about the directory itself.
This space is increased by a multiple of 4096 bytes as the directory grows.
The du command (with or without the -b option) takes this space into account, as you can see by typing:
mkdir test && du -b test
You will get a result of 4096 bytes for an empty dir.
So, if you put 2 files of 10000 bytes each inside the dir, the total amount given by du -sb would be 24096 bytes.
If you read the question carefully, this is not what was asked. The questioner asked for:
the sum total of all the data in files and subdirectories I would get if I opened each file and counted the bytes
which in the example above should be 20000 bytes, not 24096.
So, the correct answer IMHO could be a blend of Nelson's answer and hlovdal's suggestion to handle filenames containing spaces:
find . -type f -print0 | xargs -0 stat --format=%s | awk '{s+=$1} END {print s}'
There are at least three ways to get the "sum total of all the data in files and subdirectories" in bytes that work in both Linux/Unix and Git Bash for Windows, listed below in order from fastest to slowest on average. For your reference, they were executed at the root of a fairly deep file system (docroot in a Magento 2 Enterprise installation comprising 71,158 files in 30,027 directories).
1.
$ time find -type f -printf '%s\n' | awk '{ total += $1 }; END { print total" bytes" }'
748660546 bytes
real 0m0.221s
user 0m0.068s
sys 0m0.160s
2.
$ time echo `find -type f -print0 | xargs -0 stat --format=%s | awk '{total+=$1} END {print total}'` bytes
748660546 bytes
real 0m0.256s
user 0m0.164s
sys 0m0.196s
3.
$ time echo `find -type f -exec du -bc {} + | grep -P "\ttotal$" | cut -f1 | awk '{ total += $1 }; END { print total }'` bytes
748660546 bytes
real 0m0.553s
user 0m0.308s
sys 0m0.416s
These two also work, but they rely on commands that don't exist on Git Bash for Windows:
1.
$ time echo `find -type f -printf "%s + " | dc -e0 -f- -ep` bytes
748660546 bytes
real 0m0.233s
user 0m0.116s
sys 0m0.176s
2.
$ time echo `find -type f -printf '%s\n' | paste -sd+ | bc` bytes
748660546 bytes
real 0m0.242s
user 0m0.104s
sys 0m0.152s
If you only want the total for the current directory, then add -maxdepth 1 to find.
Note that some of the suggested solutions don't return accurate results, so I would stick with the solutions above instead.
$ du -sbh
832M .
$ ls -lR | grep -v '^d' | awk '{total += $5} END {print "Total:", total}'
Total: 583772525
$ find . -type f | xargs stat --format=%s | awk '{s+=$1} END {print s}'
xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
4390471
$ ls -l| grep -v '^d'| awk '{total = total + $5} END {print "Total" , total}'
Total 968133
du is handy, but find is useful in case you want to calculate the size of only some files (for example, filtering by extension). Also note that find itself can print the size of each file in bytes. To calculate a total size we can chain in the dc command in the following manner:
find . -type f -printf "%s + " | dc -e0 -f- -ep
Here find generates a sequence of commands for dc, like 123 + 456 + 11 +.
However, the complete program should look like 0 123 + 456 + 11 + p (remember postfix notation).
So, to get the completed program we need to put 0 on the stack before executing the sequence from stdin, and print the top number after executing (the p command at the end).
We achieve this via dc options:
-e0 is just shorthand for -e '0', which puts 0 on the stack,
-f- reads and executes the commands from stdin (generated here by find),
-ep prints the result (-e 'p').
To print the size in MiB, like 284.06 MiB, we can use -e '2 k 1024 / 1024 / n [ MiB] p' in the last step instead of -ep (most spaces are optional).
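Putting that together, the whole pipeline printing MiB would read:
find . -type f -printf "%s + " | dc -e0 -f- -e '2 k 1024 / 1024 / n [ MiB] p'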
Use:
$ du -ckx <DIR> | grep total | awk '{print $1}'
Where <DIR> is the directory you want to inspect.
The '-c' gives you a grand total, which is extracted by the 'grep total' portion of the command, and the count in kilobytes is extracted with the awk command.
The only caveat here is that if you have a subdirectory whose path contains the text "total", it will get spit out as well.
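To sidestep that caveat, take the last line of the output, which du -c always makes the grand total, instead of grepping for it (a sketch):
du -ckx <DIR> | tail -n 1 | awk '{print $1}'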
This may help:
ls -l| grep -v '^d'| awk '{total = total + $5} END {print "Total" , total}'
The above command will sum the sizes of all the files, leaving out the directories.