Number of files and directories in a Linux directory, only at level 2

The command
$ ls | wc -l
gives us the number of directories and files in a given directory, without counting those inside its subdirectories; that is, it counts non-recursively.
How could you get the number of files and directories that are only at the second level of that same directory? That is, the number of subdirectories and files inside the main directory's subdirectories, again non-recursively, only at level 2.
The command:
$ shuf -ezn 7 directory/*/*/* | xargs -0 -n1 echo
gives us 7 files or subdirectories chosen at random from the second level of the main one. It works perfectly, but I can't work out a similar command for what I want to achieve.
I hope I have explained myself. Thank you

There are probably better options, but I think I've found it:
$ find directory/ -mindepth 2 -maxdepth 2 | wc -l
gives me the expected result.
Hope it helps someone
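As a quick sanity check, here is a throwaway tree (hypothetical names, purely for illustration) and the counts it produces:
mkdir -p demo/sub1 demo/sub2
touch demo/top.txt demo/sub1/a demo/sub1/b demo/sub2/c
find demo/ -mindepth 1 -maxdepth 1 | wc -l    # 3: sub1, sub2, top.txt (level 1)
find demo/ -mindepth 2 -maxdepth 2 | wc -l    # 3: a, b, c (level 2)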

This command may also work, though note that when the directory holds several entries, ls * prints a header and a blank line for each subdirectory, which inflates the count:
$ ls * | wc -l

Related

Counting number of files in a directory with an OSX terminal command

I'm looking for a command that returns a specific directory's file count as a number. I would type it into the terminal and it would give me the specified directory's file count.
I've already tried echo find "'directory' | wc -l", but that didn't work. Any ideas?
You seem to have the right idea. I'd use -type f to find only files:
$ find some_directory -type f | wc -l
If you only want files directly under this directory and not to search recursively through subdirectories, you could add the -maxdepth flag:
$ find some_directory -maxdepth 1 -type f | wc -l
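Note that, unlike a bare ls, find includes hidden files by default. A quick check with a hypothetical directory:
mkdir some_directory
touch some_directory/.hidden some_directory/visible
find some_directory -maxdepth 1 -type f | wc -l    # 2: the dotfile is counted too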
Open the terminal and switch to the location of the directory.
Type in:
find . -type f | wc -l
This searches inside the current directory (that's what the . stands for) for all files, and counts them.
The fastest way to obtain the number of files within a directory is by obtaining the value of that directory's kMDItemFSNodeCount metadata attribute.
mdls -name kMDItemFSNodeCount directory_name -raw | xargs
The above command has a major advantage over find . -type f | wc -l in that it returns the count almost instantly, even for directories which contain millions of files.
Please note that the command returns the number of all nodes (subdirectories included), not just regular files.
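As a sketch (macOS only, assuming Spotlight has indexed the volume; directory_name is a placeholder), you can compare the metadata count against an enumerated count; the two may disagree if the metadata is stale:
mdls -name kMDItemFSNodeCount -raw directory_name; echo    # near-instant metadata count
find directory_name -mindepth 1 -maxdepth 1 | wc -l        # enumerated count, for comparison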
I don't understand why folks are using find, because for me it's a lot easier to just pipe ls, like so:
ls *.png | wc -l
to find the number of png images in the current directory.
I'm using tree; this is the way:
tree ph

Counting Amount of Files in Directory Including Hidden Files with BASH

I want to count the amount of files in the directory I am currently in (including hidden files). So far I have this:
ls -1a | wc -l
but I believe this returns 2 more than what I want because it also counts "." (current directory) and ".." (directory above this one) as files. How would I go about returning the correct amount of files?
I believe that to count all files / directories / hidden files you can also use a Bash array, like this:
shopt -s nullglob dotglob
cd /whatever/path
arr=( * )
count="${#arr[@]}"
This also works with filenames that contain space or newlines.
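A quick way to convince yourself, using throwaway names in a temporary directory (illustrative only):
cd "$(mktemp -d)"
touch .hidden $'bad\nname' normal
shopt -s nullglob dotglob
arr=( * )
echo "${#arr[@]}"    # 3 -- the array sees exactly three entries
ls -1a | wc -l       # 6 -- ., .., plus the newline name counted twice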
Edit:
ls piped to wc is not the right tool for this job, because filenames in UNIX can contain newlines, and such names would be counted multiple times.
Following @gniourf_gniourf's comment (thanks!), the following command handles newlines in file names correctly and should be used:
find -mindepth 1 -maxdepth 1 -printf x | wc -c
The find command lists the entries of the current directory, including hidden files; -mindepth 1 excludes the starting directory . itself (find never emits ..), and -maxdepth 1 keeps it non-recursive.
The -printf x action simply prints an x for each file in the directory which leads to an output like this:
xxxxxxxx
Piped to wc -c (-c means counting characters) you get your final result.
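Note that -printf is a GNU find extension. On systems without it (e.g. BSD/macOS find), a hedged portable rewrite of the same idea is to let find run printf itself:
find . ! -name . -prune -exec printf x \; | wc -c
Here ! -name . -prune restricts find to the immediate entries of the current directory (the same idiom appears in a later answer below), and each entry still contributes exactly one x.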
Former Answer:
Use the following command:
ls -1A | wc -l
-a will include all files or directories starting with a dot, but -A will exclude the current folder . and the parent folder ..
I suggest consulting man ls.
You almost got it right:
ls -1A | wc -l
If your filenames contain newlines or other funny characters, do:
find -type f -ls | wc -l

get folder with newest date name

I have these folders:
2014-09-01-00:00:01
2014-09-01-01:00:01
2014-09-01-02:00:01
2014-09-01-03:00:01
2014-09-01-04:00:01
(There are many more folders)
I write these names to an array: folders=("2014-09-01-00:00:01" "2014-09-01-01:00:01" ...).
How can I get the folder with the newest date? (Not based on the creation/modified date)
ls is the first thing to think about, but parsing ls is evil. Hence, I would use find for this:
find /your/path -mindepth 1 -maxdepth 1 -type d | sort -rn
It looks for all the directories in /your/path (not sub-directories) and sorts them in reverse. Because the names are zero-padded timestamps, this order matches chronological order.
The first one will be the newest. Adding | head -1 we get just that one.
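A minimal end-to-end sketch, run in an empty scratch directory with hypothetical folder names matching the question (plain sort -r suffices, since the zero-padded names sort chronologically):
mkdir -p 2014-09-01-00:00:01 2014-09-01-02:00:01 2014-09-01-04:00:01
find . -mindepth 1 -maxdepth 1 -type d | sort -r | head -1    # prints ./2014-09-01-04:00:01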
Given the convenient date/time format you're using, you can simply sort lexicographically, using sort:
ls -d PARENT_DIR/*/ | sort | tail -1
You can use the command below:
ll -rt | tail -n 1
Strictly speaking, it gives you the most recently modified file or directory in your parent directory.
But since you said in your question that your parent directory contains only directories, you can simply use the command above.
bash filename pattern expansion gives you alphabetically sorted results, so:
folders=(*/)
echo "${folders[-1]}"

Finding executable files using ls and grep

I have to write a script that finds all executable files in a directory. So I tried several ways to implement it and they actually work. But I wonder if there is a nicer way to do so.
So this was my first approach:
ls -Fla | grep \*$
This works fine because the -F flag does the work for me and appends an asterisk to each executable file, but let's say I don't like the asterisk.
So this was the second approach:
ls -la | grep -E ^-.{2}x
This works fine too: I want a dash as the first character, I'm not interested in the next two characters, and the fourth character must be an x.
But there's a bit of ambiguity in the requirements, because I don't know whether I have to check for user, group or other executable permission. So this would work:
ls -la | grep -E ^-.{2}x\|^-.{5}x\|^-.{8}x
So I'm testing whether the fourth, seventh, or tenth character is an x.
Now my real question, is there a better solution using ls and grep with regex to say:
I want to grep only those files having at least one x in the first ten characters of a line produced by ls -la.
Do you need to use ls? You can use find to do the same:
find . -maxdepth 1 -perm -111 -type f
will return all executable files in the current directory. Remove the -maxdepth flag to traverse all child directories.
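One caveat worth knowing: -perm -111 matches only files with all three execute bits set, while GNU find's -perm /111 matches files with any execute bit. A hypothetical illustration in a scratch directory:
touch owner-only all-exec
chmod 700 owner-only
chmod 755 all-exec
find . -maxdepth 1 -perm -111 -type f    # matches only ./all-exec
find . -maxdepth 1 -perm /111 -type f    # matches both files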
You could try this terribleness, but it might match lines for files whose names contain strings that look like permissions.
ls -lsa | grep -E "[d\-](([rw\-]{2})x){1,3}"
If you absolutely must use ls and grep, this works:
ls -Fla | grep '^\S*x\S*'
It matches lines where the first word (non-whitespace) contains at least one 'x'.
Find is the perfect tool for this. This finds all files (-type f) that are executable:
find . -type f -executable
If you don't want it to recursively list all executables, use -maxdepth:
find . -maxdepth 1 -type f -executable
Perhaps with test -x?
for f in $(\ls) ; do test -x $f && echo $f ; done
The \ on ls will bypass shell aliases.
for i in `ls -l | awk '{ if ( $1 ~ /x/ ) {print $NF}}'`; do echo `pwd`/$i; done
This gives absolute paths to the executables.
While the question is very old and was answered a long time ago, I want to add a version for anyone who is using the fd utility (which I personally highly recommend; see https://github.com/sharkdp/fd if you want to try it). You get the same result as find . -type f -executable by running:
fd -tx
or
fd --type executable
One can also add -d or --max-depth argument, same as for the original find.
Maybe someone will find this useful.
file * | grep "ELF 32-bit LSB executable" | awk '{print $1}'

Find the number of files in a directory

Is there any method in Linux to calculate the number of files in a directory (that is, immediate children) in O(1) (independently of the number of files) without having to list the directory first? If not O(1), is there a reasonably efficient way?
I'm searching for an alternative to ls | wc -l.
readdir is not as expensive as you may think. The knack is to avoid stat'ing each file and (optionally) to skip sorting the output of ls.
/bin/ls -1U | wc -l
avoids aliases in your shell, doesn't sort the output, and lists 1 file-per-line (not strictly necessary when piping the output into wc).
The original question can be rephrased as "does the data structure of a directory store a count of the number of entries?", to which the answer is no. There isn't a more efficient way of counting files than readdir(2)/getdents(2).
One can get the number of subdirectories of a given directory without traversing the whole list by stat'ing (stat(1) or stat(2)) the given directory and observing its link count. A directory with N child directories has a link count of N+2: one link for the ".." entry of each subdirectory, one for its own "." entry, and one for its name entry in the parent directory.
However one cannot get the number of all files (whether regular files or subdirectories) without traversing the whole list -- that is correct.
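As a sketch of the link-count trick (GNU coreutils stat syntax; %h prints the hard-link count, and stat -f %l is the BSD/macOS equivalent):
mkdir -p parent/a parent/b parent/c
touch parent/file.txt
echo $(( $(stat -c %h parent) - 2 ))    # prints 3: the subdirectories, not the regular file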
The "/bin/ls -1U" command will not get all entries however. It will get only those directory entries that do not start with the dot (.) character. For example, it would not count the ".profile" file found in many login $HOME directories.
One can use either the "/bin/ls -f" command or the "/bin/ls -Ua" command to avoid the sort and get all entries.
Perhaps unfortunately for your purposes, either the "/bin/ls -f" command or the "/bin/ls -Ua" command will also count the "." and ".." entries that are in each directory. You will have to subtract 2 from the count to avoid counting these two entries, such as in the following:
expr `/bin/ls -f | wc -l` - 2 # Those are back ticks, not single quotes.
The --format=single-column (-1) option is not necessary on the "/bin/ls -Ua" command when piping the "ls" output, as in to "wc" in this case. The "ls" command will automatically write its output in a single column if the output is not a terminal.
The -U option for ls is not in POSIX, and in OS X's ls it has a different meaning than in GNU ls: it makes -t and -l use creation times instead of modification times. -f is in POSIX as an XSI extension. The GNU ls manual describes -f as "do not sort, enable -aU, disable -ls --color", and -U as "do not sort; list entries in directory order".
POSIX describes -f like this:
Force each argument to be interpreted as a directory and list the name found in each slot. This option shall turn off -l, -t, -s, and -r, and shall turn on -a; the order is the order in which entries appear in the directory.
Commands like ls|wc -l give the wrong result when filenames contain newlines.
In zsh you can do something like this:
a=(*(DN));echo ${#a}
D (glob_dots) includes files whose name starts with a period and N (null_glob) causes the command to not result in an error in an empty directory.
Or the same in bash:
shopt -s dotglob nullglob;a=(*);echo ${#a[@]}
If IFS contains ASCII digits, add double quotes around ${#a[@]}. Add shopt -u failglob to ensure that failglob is unset.
A portable option is to use find:
find . ! -name . -prune|grep -c /
grep -c / can be replaced with wc -l if filenames do not contain newlines. ! -name . -prune is a portable alternative to -mindepth 1 -maxdepth 1.
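The reason grep -c / is newline-safe: a slash can never occur inside a filename, so each entry contributes exactly one line containing /, no matter how many newlines its name holds. A quick illustration with a throwaway name:
cd "$(mktemp -d)"
touch $'two\nlines' single
find . ! -name . -prune | wc -l        # 3: the newline name is split across lines
find . ! -name . -prune | grep -c /    # 2: one per entry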
Or here's another alternative that does not usually include files whose name starts with a period:
set -- *;[ -e "$1" ]&&echo "$#"
The command above does however include files whose name starts with a period when an option like dotglob in bash or glob_dots in zsh is set. When * matches no file, the command results in an error in zsh with the default settings.
I used this command; it works like a charm. You only need to change the maxdepth (that is, the subdirectory depth):
find * -maxdepth 0 -type d -exec sh -c "echo -n {} ' ' ; ls -lR {} | wc -l" \;
I think you can have more control on this using find:
find <path> -maxdepth 1 -type f -printf "." | wc -c
find -maxdepth 1 will not go deeper into the hierarchy of files.
-type f allows filtering to just files. Similarly, you can use -type d for directories.
-printf "." prints a dot for every match.
wc -c counts the characters, so it counts the dots created by the print... which means counting how many files exist in the given path.
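One caveat if you switch to -type d as suggested: find also matches the starting directory itself, so add -mindepth 1 to exclude it (a small addition to the answer above):
find . -maxdepth 1 -type d -printf "." | wc -c               # includes . itself in the count
find . -mindepth 1 -maxdepth 1 -type d -printf "." | wc -c   # immediate subdirectories only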
For the number of all files under the current directory, including subdirectories, try this (note that ls -lR also prints header, total, and blank lines, so the count is only approximate):
ls -lR * | wc -l
As far as I know, there is no better alternative. This information might be off-topic for this question, and you may already know it, but under Linux (and in general under Unix) directories are just special files which contain the list of other files (the exact details depend on the specific file system, but this is the general idea). And there is no call to find the total number of entries without traversing the whole list. Please correct me if I'm wrong.
Use ls -1 | wc -l.
