When we list files in Unix using the ls -l command, the output is a table with spaces as separators, for example:
(jupyter-lab) ➜ mylab ls -l
total 2
drwxr-sr-x. 2 hs0424 ragr 0 Feb 1 12:17 A bad directory
drwxr-sr-x. 2 hs0424 ragr 0 Feb 1 12:18 A very bad directory
I want to convert this to a tab-separated file (.tsv). Just changing spaces to \t, as in ls -l | sed -E 's/ +/\t/g', would not work, since filenames contain spaces. Is there a better solution?
It's hard to show the expected output with literal tabs, but writing \t in place of each tab, I want something like the following:
(jupyter-lab) ➜ mylab ls -l
total 2
drwxr-sr-x.\t2\ths0424\tragr\t0\tFeb 1\t12:17\tA bad directory
drwxr-sr-x.\t2\ths0424\tragr\t0\tFeb 1\t12:18\tA very bad directory
(Edit 1)
We can assume access to GNU tools
Instead of ls, use GNU find -printf or stat, either of which lets you provide an arbitrary format string.
find . -mindepth 1 -maxdepth 1 -printf '%M\t%y\t%g\t%G\t%u\t%U\t%f\t%l\n'
or
# for normal cases
stat --printf='%A\t%G\t%g\t%U\t%u\t%n\n' *
# for directories where filenames could exceed command line length limit
printf '%s\0' * | xargs -0 stat --printf='%A\t%G\t%g\t%U\t%u\t%n\n'
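If you also want the size and timestamp columns that ls -l shows, both tools have fields for them. A sketch (the field order and time format here are my own choices, not anything the question requires):
# size (%s) and mtime (%T+ / %y) added to mirror ls -l more closely
find . -mindepth 1 -maxdepth 1 -printf '%M\t%n\t%u\t%g\t%s\t%T+\t%f\n'
stat --printf='%A\t%U\t%G\t%s\t%y\t%n\n' *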
Related
I have a directory with thousands of files (100K for now). When I use wc -l ./*, I'll get:
c1 ./test1.txt
c2 ./test2.txt
...
cn ./testn.txt
c1+c2+...+cn total
Because there are a lot of files in the directory, I just want to see the total count and not the details. Is there any way to do so?
I tried several ways and I got the following error:
Argument list too long
If what you want is the total number of lines and nothing else, then I would suggest the following command:
cat * | wc -l
This catenates the contents of all of the files in the current working directory and pipes the resulting blob of text through wc -l.
I find this to be quite elegant. Note that the command produces no extraneous output.
UPDATE:
I didn't realize your directory contained so many files. In light of this information, you should try this command:
for file in *; do cat "$file"; done | wc -l
Most people don't know that you can pipe the output of a for loop directly into another command.
Beware that this could be very slow. If you have 100,000 or so files, my guess would be around 10 minutes. This is a wild guess because it depends on several parameters that I'm not able to check.
If you need something faster, you should write your own utility in C. You could make it surprisingly fast if you use pthreads.
Hope that helps.
LAST NOTE:
If you're interested in building a custom utility, I could help you code one up. It would be a good exercise, and others might find it useful.
Credit: this builds on #lifecrisis's answer, and extends it to handle large numbers of files:
find . -maxdepth 1 -type f -exec cat {} + | wc -l
find will find all of the files in the current directory, break them into groups as large as can be passed as arguments, and run cat on the groups.
awk 'END {print NR" total"}' ./*
It would be an interesting comparison to find out how many lines don't end with a newline.
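A rough sketch of that comparison, assuming ./* matches only regular files and fits within the argument-list limit: awk counts an unterminated final line as a record, while wc -l only counts newlines, so the difference is the number of files whose last line lacks a trailing newline.
# difference = number of files whose final line has no trailing newline
echo $(( $(awk 'END {print NR}' ./*) - $(cat ./* | wc -l) ))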
Combining the awk and Gordon’s find solutions and avoiding the "." files.
find ./* -maxdepth 0 -type f -exec awk 'END {print NR}' {} +
No idea if this is better or worse but it does give a more accurate count (for me) and does not count lines in "." files. Using ./* is just a guess that appears to work.
A depth is still needed, and ./* requires a depth of 0.
I did get the same result with the "cat" and "awk" solutions (using the same find), since "cat *" takes care of the newline issue. I don't have a directory with enough files to measure time. Interesting, I'm liking the "cat" solution.
This will give you the total count for all the files (including hidden files) in your current directory:
$ find . -maxdepth 1 -type f | xargs wc -l | grep total
1052 total
To count lines excluding hidden files, use:
find . -maxdepth 1 -type f -not -path "*/\.*" | xargs wc -l | grep total
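If filenames may contain spaces, or there are enough files that xargs has to split them over several wc runs (each run prints its own total line), a hedged variant that NUL-separates the names and sums the per-batch totals:
# NUL separators survive spaces in names; awk sums every per-batch "total" line
# (a filename that itself ends in " total" would still confuse this)
find . -maxdepth 1 -type f -print0 | xargs -0 wc -l | awk '$NF == "total" {sum += $1} END {print sum}'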
(Apologies for adding this as an answer—but I do not have enough reputation for commenting.)
A comment on #lifecrisis's answer. Perhaps cat is slowing things down a bit. We could replace cat with wc -l and then use awk to add the numbers. (This could be faster since much less data needs to go through the pipe.)
That is
for file in *; do wc -l "$file"; done | awk '{sum += $1} END {print sum}'
instead of
for file in *; do cat "$file"; done | wc -l
(Disclaimer: I am not incorporating many of the improvements in other answers, but I thought the point was valid enough to write down.)
Here are my results for comparison (I ran the newer version first so that any cache effects would go against the newer candidate).
$ time for f in `seq 1 1500`; do head -c 5M </dev/urandom >myfile-$f |sed -e 's/\(................\)/\1\n/g'; done
real 0m50.360s
user 0m4.040s
sys 0m49.489s
$ time for file in myfile-*; do wc -l "$file"; done | awk '{sum += $1} END {print sum}'
30714902
real 0m3.455s
user 0m2.093s
sys 0m1.515s
$ time for file in myfile-*; do cat "$file"; done | wc -l
30714902
real 0m4.481s
user 0m2.544s
sys 0m4.312s
If you only want the number of entries in the directory (excluding the "total" line that ls prints):
ls -ltr | sed -n '/total/!p' | awk 'END {print NR}'
The command above counts files, not the lines inside them.
The command below will provide the total count of lines from all files in the path:
for i in `ls -ltr | awk '$1~"^-rw"{print $9}'`; do wc -l "$i" | awk '{print $1}'; done >> /var/tmp/filelinescount.txt
cat /var/tmp/filelinescount.txt | sed -r "s/\s+//g" | tr "\n" "+" | sed "s:+$::g" | sed 's/^/"/g' | sed 's/$/"/g' | awk '{print "echo " $0 " | bc"}' | sh
I have a directory which contains a large number of subdirectories. Each subdirectory is named something like "treedir_xxx" where xxx is a number. I would like to run a command (preferably from the command line as I have no experience with batch scripts) that will count the number of files in each subdirectory named 'treedir_xxx' and write these numbers to a text file. I feel this should not be very difficult but so far I have been unsuccessful.
I have tried things like find *treedir* -maxdepth 1 -type f | wc -l; however, this just returns the total number of files rather than the number of files in each individual folder.
Instead of using find, use a for loop. I am assuming that you are using bash or similar since that is the most common shell on most of the modern Linux distros:
for i in treedir_*; do ls "$i" | wc -l; done
Given the following structure:
treedir_001
|__ a
|__ b
|__ c
treedir_002
|__ d
|__ e
treedir_003
|__ f
The result is:
3
2
1
You can get fancy and print whatever you want around the numbers:
for i in treedir_*; do echo $i: $(ls "$i" | wc -l); done
gives
treedir_001: 3
treedir_002: 2
treedir_003: 1
This uses $(...) to get the output of a command as a string and pass it to echo, which can then print everything on one line.
for i in treedir_*; do echo $i; ls "$i" | wc -l; done
gives
treedir_001
3
treedir_002
2
treedir_003
1
This one illustrates the use of multiple commands in a single loop.
for can be redirected to a file or piped just like any other command, so you can do
for i in treedir_*; do ls "$i" | wc -l; done > list.txt
or better yet
for i in treedir_*; do ls "$i" | wc -l; done | tee list.txt
The second version sends the output to the program tee, which prints it to standard output and also redirects it to a file. This is sometimes nicer for debugging than a simple redirect with >.
find is a powerful hammer, but not everything is a nail...
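That said, if the counts must be robust against unusual filenames (an embedded newline would inflate ls | wc -l), here is a sketch that counts NUL-separated find output per directory instead:
# count regular files (not subdirectories) in each treedir_* folder;
# tr keeps only the NUL separators and wc -c counts them
for d in treedir_*/; do
    n=$(find "$d" -maxdepth 1 -type f -print0 | tr -dc '\0' | wc -c)
    printf '%s: %s\n' "${d%/}" "$n"
done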
I have a bunch of log files in a folder. When I cd into the folder and look at the files it looks something like this.
$ ls -lhat
-rw-r--r-- 1 root root 5.3K Sep 10 12:22 some_log_c48b72e8.log
-rw-r--r-- 1 root root 5.1M Sep 10 02:51 some_log_cebb6a28.log
-rw-r--r-- 1 root root 1.1K Aug 25 14:21 some_log_edc96130.log
-rw-r--r-- 1 root root 406K Aug 25 14:18 some_log_595c9c50.log
-rw-r--r-- 1 root root 65K Aug 24 16:00 some_log_36d179b3.log
-rw-r--r-- 1 root root 87K Aug 24 13:48 some_log_b29eb255.log
-rw-r--r-- 1 root root 13M Aug 22 11:55 some_log_eae54d84.log
-rw-r--r-- 1 root root 1.8M Aug 12 12:21 some_log_1aef4137.log
I want to look at the most recent messages in the most recent log file. I can now manually copy the name of the most recent log and then perform a tail on it and that will work.
$ tail -n 100 some_log_c48b72e8.log
This does involve manual labor so instead I would like to use bash-fu to do this.
I have currently found this way to do it:
filename="$(ls -lat | sed -n 2p | tail -c 30)"; tail -n 100 $filename
It works, but I am bummed out that I need to save data into a variable to do it. Is it possible to do this in bash without saving intermediate results into a variable?
tail -n 100 "$(ls -at | head -n 1)"
You do not need ls to actually print timestamps; you just need to sort by them (ls -t). I added the -a option because it was in your original code, but note that this is not necessary unless your logfiles are "dot files", i.e. starting with a . (which they shouldn't).
Using ls this way saves you from parsing the output with sed and tail -c. (And you should not try to parse the output of ls.) Just pick the first file in the list (head -n 1), which is the newest. Putting it in quotation marks should save you from the more common "problems" like spaces in the filename. (If you have newlines or similar in your filenames, fix your filenames. :-D )
Instead of saving into a variable, you can use command substitution in-place.
A truly ls-free solution:
tail -n 100 < <(
    for f in *; do
        [[ $f -nt $newest ]] && newest=$f
    done
    cat "$newest"
)
There's no need to initialize newest, since any file will be newer than the null file named by the empty string.
It's a bit verbose, but it's guaranteed to work with any legal file name. Save it to a shell function for easier use:
tail_latest () {
    dir=${1:-.}
    size=${2:-100}
    for f in "$dir"/*; do
        [[ $f -nt $newest ]] && newest=$f
    done
    tail -n "$size" "$newest"
}
Some examples:
# Default of 100 lines from newest file in the current directory
tail_latest
# 200 lines from the newest file in another directory
tail_latest /some/log/dir 200
A plug for zsh: glob qualifiers let you sort the results of a glob directly, making it much easier to get the newest file.
tail -n 100 *(om[1,1])
om sorts the results by modification time (newest first). [1,1] limits the range of files matched to the first. (I think Y1 should do the same, but it kept giving me an "unknown file attribute" error.)
Without parsing ls, you'd use stat
tail -n 100 "$(stat -c "%Y %n" * | sort -nk1,1 | tail -1 | cut -d" " -f 2-)"
Will break if your filenames contain newlines.
version 2: newlines are OK
tail -n 100 "$(
stat --printf "%Y:%n\0" * |
sort -z -t: -k1,1nr |
{ IFS=: read -d '' time filename; echo "$filename"; }
)"
You can also try this way:
ls -1t | head -n 1 | xargs tail -c 50
Explanation:
ls -1t -- list the files one per line, sorted by modification time, newest first.
head -n 1 -- take the first entry, i.e. the most recently modified file.
xargs tail -c 50 -- show the last 50 characters of that file.
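One caveat: plain xargs splits its input on whitespace, so a log name containing spaces would break this. A sketch assuming GNU xargs, whose -d '\n' option keeps each input line as a single argument:
ls -1t | head -n 1 | xargs -d '\n' tail -c 50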
Why does ls not work when the -l flag is passed in combination with xargs and grep?
$ ls -rt | xargs grep xyz
works, but:
$ ls -lrt | xargs grep xyz
grep: invalid option -- '-'
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
Because the output for ls -l is similar to this:
-rw-r--r-- 1 root root 1491872 2012-11-22 03:07 Xvfb_screen0
Piping this to xargs (ls -l | xargs grep xyz) turns your grep command into
grep xyz -rw-r--r-- 1 root root 1491872 2012-11-22 03:07 Xvfb_screen0
And that does not make any sense.
edit
To answer #vladr's comment here, because this has better formatting than the comments box: each whitespace-separated token from the input of xargs is passed as a separate argument to the executed command, as you can see:
$ ls -l
total 4
-rwxrwxr-x 1 carlos carlos 18 2012-11-22 15:17 foo
$ cat foo
#!/bin/sh
echo $#
$ ls -l | xargs ./foo
10
It's possible to behave the way you say by setting the delimiter in xargs to \n:
$ ls -l | xargs -d '\n' ./foo
2
To answer your question, specifically, use ls -lrt | xargs grep xyz -- (notice the --, which signifies that any dashes - that appear afterwards are to be taken literally, not as option flags -- see #CarlosCampderrós's answer for what your command expands to), but I strongly doubt that you have the correct setup to begin with, as it is unlikely to achieve anything useful, as used.
More likely you are trying to grep for xyz in all files in reverse chronological order. If so then use ls -1rt | xargs grep xyz -- (notice dash-one -1, not dash-ell -l.) The -- is truly optional in this case (unless you expect one or more files' name(s) to begin with dash -.)
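If the filenames might contain spaces, here is a sketch of a spaces-safe equivalent, assuming GNU find, sort, and cut (which all support NUL-separated records):
# oldest-first by mtime, like ls -1rt, without parsing ls
find . -maxdepth 1 -type f -printf '%T@\t%p\0' | sort -zn | cut -z -f2- | xargs -0 grep xyz --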
When I use ls or du, I get the amount of disk space each file is occupying.
I need the sum total of all the data in files and subdirectories I would get if I opened each file and counted the bytes. Bonus points if I can get this without opening each file and counting.
If you want the 'apparent size' (that is, the number of bytes in each file), not the size taken up by files on the disk, use the -b or --bytes option (if you have a Linux system with GNU coreutils):
% du -sbh <directory>
Use du -sb:
du -sb DIR
Optionally, add the h option for more user-friendly output:
du -sbh DIR
cd to directory, then:
du -sh
ftw!
Originally wrote about it here:
https://ao.ms/get-the-total-size-of-all-the-files-in-a-directory/
Just an alternative:
ls -lAR | grep -v '^d' | awk '{total += $5} END {print "Total:", total}'
grep -v '^d' will exclude the directories.
stat's "%s" format gives you the actual number of bytes in a file.
find . -type f |
xargs stat --format=%s |
awk '{s+=$1} END {print s}'
Feel free to substitute your favourite method for summing numbers.
If you use BusyBox's du on an embedded system, you cannot get exact byte counts with du, only kilobytes.
BusyBox v1.4.1 (2007-11-30 20:37:49 EST) multi-call binary
Usage: du [-aHLdclsxhmk] [FILE]...
Summarize disk space used for each FILE and/or directory.
Disk space is printed in units of 1024 bytes.
Options:
-a Show sizes of files in addition to directories
-H Follow symbolic links that are FILE command line args
-L Follow all symbolic links encountered
-d N Limit output to directories (and files with -a) of depth < N
-c Output a grand total
-l Count sizes many times if hard linked
-s Display only a total for each argument
-x Skip directories on different filesystems
-h Print sizes in human readable format (e.g., 1K 243M 2G )
-m Print sizes in megabytes
-k Print sizes in kilobytes(default)
For Win32 DOS, you can:
c:> dir /s c:\directory\you\want
and the penultimate line will tell you how many bytes the files take up.
I know this reads all files and directories, but it works faster in some situations.
When a folder is created, many Linux filesystems allocate 4096 bytes to store some metadata about the directory itself.
This space is increased by a multiple of 4096 bytes as the directory grows.
The du command (with or without the -b option) takes this space into account, as you can see by typing:
mkdir test && du -b test
you will have a result of 4096 bytes for an empty dir.
So, if you put 2 files of 10000 bytes inside the dir, the total amount given by du -sb would be 24096 bytes.
If you read the question carefully, this is not what was asked. The questioner asked for:
the sum total of all the data in files and subdirectories I would get if I opened each file and counted the bytes
which in the example above should be 20000 bytes, not 24096.
So the correct answer, IMHO, could be a blend of Nelson's answer and hlovdal's suggestion to handle filenames containing spaces:
find . -type f -print0 | xargs -0 stat --format=%s | awk '{s+=$1} END {print s}'
There are at least three ways to get the "sum total of all the data in files and subdirectories" in bytes that work in both Linux/Unix and Git Bash for Windows, listed below in order from fastest to slowest on average. For your reference, they were executed at the root of a fairly deep file system (docroot in a Magento 2 Enterprise installation comprising 71,158 files in 30,027 directories).
1.
$ time find -type f -printf '%s\n' | awk '{ total += $1 }; END { print total" bytes" }'
748660546 bytes
real 0m0.221s
user 0m0.068s
sys 0m0.160s
2.
$ time echo `find -type f -print0 | xargs -0 stat --format=%s | awk '{total+=$1} END {print total}'` bytes
748660546 bytes
real 0m0.256s
user 0m0.164s
sys 0m0.196s
3.
$ time echo `find -type f -exec du -bc {} + | grep -P "\ttotal$" | cut -f1 | awk '{ total += $1 }; END { print total }'` bytes
748660546 bytes
real 0m0.553s
user 0m0.308s
sys 0m0.416s
These two also work, but they rely on commands that don't exist on Git Bash for Windows:
1.
$ time echo `find -type f -printf "%s + " | dc -e0 -f- -ep` bytes
748660546 bytes
real 0m0.233s
user 0m0.116s
sys 0m0.176s
2.
$ time echo `find -type f -printf '%s\n' | paste -sd+ | bc` bytes
748660546 bytes
real 0m0.242s
user 0m0.104s
sys 0m0.152s
If you only want the total for the current directory, then add -maxdepth 1 to find.
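For example, restricting the fastest variant above to the current directory only:
find -maxdepth 1 -type f -printf '%s\n' | awk '{ total += $1 }; END { print total" bytes" }'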
Note that some of the suggested solutions don't return accurate results, so I would stick with the solutions above instead.
$ du -sbh
832M .
$ ls -lR | grep -v '^d' | awk '{total += $5} END {print "Total:", total}'
Total: 583772525
$ find . -type f | xargs stat --format=%s | awk '{s+=$1} END {print s}'
xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
4390471
$ ls -l| grep -v '^d'| awk '{total = total + $5} END {print "Total" , total}'
Total 968133
du is handy, but find is useful in case you want to calculate the size of some files only (for example, filtering by extension). Also note that find itself can print the size of each file in bytes. To calculate a total size, we can connect the dc command in the following manner:
find . -type f -printf "%s + " | dc -e0 -f- -ep
Here find generates a sequence of commands for dc, like 123 + 456 + 11 +.
However, the complete program should look like 0 123 + 456 + 11 + p (remember, dc uses postfix notation).
So, to get the complete program, we need to put 0 on the stack before executing the sequence from stdin, and print the top number after executing it (the p command at the end).
We achieve this via dc options:
-e0 is just a shortcut for -e '0', which puts 0 on the stack,
-f- reads and executes the commands from stdin (generated by find here),
-ep prints the result (-e 'p').
To print the size in MiB, like 284.06 MiB, we can use -e '2 k 1024 / 1024 / n [ MiB] p' instead of -ep in the third option (most spaces are optional).
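Put together, the MiB variant looks like this:
find . -type f -printf "%s + " | dc -e0 -f- -e '2 k 1024 / 1024 / n [ MiB] p'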
Use:
$ du -ckx <DIR> | grep total | awk '{print $1}'
Where <DIR> is the directory you want to inspect.
The '-c' gives you the grand total, which is extracted using the 'grep total' portion of the command, and the count in kilobytes is extracted with the awk command.
The only caveat here is that if you have a subdirectory whose name contains the text "total", it will get spit out as well.
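A way to sidestep that caveat, as a sketch: with -c, the grand total is always the last line, so tail can select it instead of grep:
du -ckx <DIR> | tail -n 1 | awk '{print $1}'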
This may help:
ls -l| grep -v '^d'| awk '{total = total + $5} END {print "Total" , total}'
The above command will sum the sizes of all the files, leaving out the directories.