Linux - list all subdirectories and get ACL

I am trying to:
List all the subdirectories under a top level directory (/shared) and output this to a file called directory_list
Afterwards I want to run getfacl using the file as input, and output the results to a file called acl_list
So ultimately, getting the ACLs for all subdirectories in /shared.
The steps I'm trying to use are:
ls -d -- /shared/*/*/*/ > directory_list
getfacl < directory_list > acl_list
Is there a different command I can use for the first step that will list the subdirectories recursively? The -dr options (list directories recursively) do not seem to work, and I do not know how many levels of subdirectories there are.
When I run the 2nd command, there seems to be some syntax error preventing it from running, but I've checked the syntax against the Linux man pages and it should be right.
Could anyone point me in the right direction?
Thanks

Use find to do a recursive list:
find /shared -type d -print > directory_list
getfacl expects the filenames as command-line arguments; it doesn't read them from standard input. Use xargs to perform this translation:
xargs -d '\n' getfacl < directory_list > acl_list
The -d option specifies the delimiter, overriding the default splitting on whitespace. However, I believe this is a GNU extension; it may not be in all versions of xargs. If you can't use it, you'll have to write a loop:
while read -r filename; do
getfacl "$filename"
done < directory_list > acl_list
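Alternatively, if your find and xargs both support NUL delimiters (a GNU extension), a sketch that also survives newlines in directory names:
find /shared -type d -print0 > directory_list
xargs -0 getfacl < directory_list > acl_list
The intermediate file is then NUL-delimited rather than a plain newline-separated list.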

I know you actually asked for this, but technically speaking, do you NEED the intermediate directory_list?
find /shared -type d -exec getfacl {} + > acl_list

The find command should get you what you need:
find /shared -type d
-type d restricts the output to directories only.


How to find/list the directories where a particular sub-directory is not present

I am writing a shell script that checks whether a bin directory is present under each user's directory under the /home directory. The bin directory can be present directly under the user directory or under a child directory of the user directory.
For example, say I have a user amit under /home. The bin directory could be present directly as /home/amit/bin, or as /home/amit/jash/bin.
My requirement is a list of user directories where the bin directory is not present, either directly under the user directory or under a child directory of it. I tried the command:
find /home -type d ! -exec test -e '{}/bin' \; -print
but it is not working. However, when I replace the bin directory with some file, the command works fine, so it looks like this approach is particular to files. Is there a similar command for directories? Any help on this will be greatly appreciated.
You're on the right track. The catch is that your test of "does the following directory NOT exist in this target" can't be expressed within find's conditions in such a way as to return only the top-level directory. So you need to nest, one way or another.
One strategy would be to use a for loop in bash:
$ mkdir foo bar baz one two
$ mkdir bar/bin baz/bin
$ for d in */; do find "$d" -type d -name bin | grep -q . || echo "$d"; done
foo/
one/
two/
This uses pathname expansion (globbing) to generate the list of directories to test, then checks each one for the existence of a bin directory somewhere inside it. If that check fails (i.e. find outputs nothing), the directory is printed. Note the trailing slash on */ (or /home/*/ in your case), which ensures that you only search within directories, rather than any stray files that might exist in /home.
Another possibility might be to use nested finds, if you don't want to depend on bash:
$ find /home/ -mindepth 1 -maxdepth 1 -type d -not -exec sh -c 'find "$1" -type d -name bin | grep -q .' sh {} \; -print
/home/foo
/home/one
/home/two
This roughly duplicates the effect of the bash for loop above, but by nesting find within find -exec. It uses grep -q . to convert the output of find into an exit status that can be used as a condition for the outer find.
Note that since you're looking for a bin directory, you want test -d rather than test -e (test -e would also match a plain file named bin, which is probably not what you want).
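For reference, here is a sketch of your original one-level check with test -d applied (assuming GNU find, which substitutes {} even inside a larger argument); note that it only detects bin directly under each user directory, not nested deeper:
find /home -mindepth 1 -maxdepth 1 -type d ! -exec test -d '{}/bin' \; -print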
Another option is to use bash process redirection. On multiple lines for easier reading:
cd /home/
comm -3 \
<(printf '%s\n' */ | sed 's|/.*||' | sort) \
<(find */ -type d -name bin | cut -d/ -f1 | uniq)
This unfortunately requires you to change to the /home directory before running, because of the way it strips off subdirectories. You can of course collapse this into a big long one-liner if you feel so inclined.
This comm solution also has the risk of failing on directories with special characters in their names, like newlines.
One last option is bash-only but more than a one-liner. It involves subtracting the directories containing "bin" from the full list. It uses an associative array and globstar, so it depends on bash version 4.
#!/usr/bin/env bash
shopt -s globstar
# Go to our root
cd /home
# Declare an associative array
declare -A dirs=()
# Populate the array with our "full" list of home directories
for d in */; do dirs[${d%/}]=""; done
# Remove directories that contain a "bin" somewhere inside 'em
for d in **/bin; do unset "dirs[${d%%/*}]"; done
# Print the result in reproducible form
declare -p dirs
# Or print the result just as a list of words.
printf '%s\n' "${!dirs[@]}"
Note that we're storing directories in the array index, which (1) makes it easy for us to find and delete items, and (2) ensures unique entries, even if one user has multiple "bin" directories under their home.
cd /home
find . -maxdepth 1 -type d ! -name . | sort > a
find . -type d -name bin | cut -d/ -f1,2 | sort > b
comm -23 a b
Here, I'm making two sorted lists. The first contains all the home directories, and the second contains the top parent of any bin subdirectory. Finally I output any items from the first list not present in the second.

Shell Script to Recursively Loop Through Directory and print location of important files

So I am trying to write a one-liner or a shell script that can recursively loop through a directory, all its files, and its subdirectories looking for certain files, and then print the locations of those files to a text file.
I know that this is possible using Bash commands such as find, locate, exec, and >.
This is what I have so far: find <top-directory> -name '*.class' -exec locate {} > location.txt \;
This does not work, though. Can any Bash/shell scripting experts help me out, please?
Thank you for reading this.
The default behavior of find (if you don't specify any other action) is to print the filename. So you can simply do:
find <top-directory> -name '*.class' > location.txt
Or if you want to be explicit about it:
find <top-directory> -name '*.class' -print > location.txt
You can skip the redirection by using find's -fprint option:
find <top-directory> -name '*.class' -fprint location.txt
From the man page:
-fprint file
[...] print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated.
A less preferred way to do it is to use ls:
ls -d $PWD/**/* | grep class
let's break it down:
ls -d # lists the directory (returns `.`)
ls -d $PWD # lists the directory - but this time $PWD will provide full path
ls -d $PWD/** # lists the directory with the full path and everything directly under it (** behaves like * here unless globstar is enabled)
ls -d $PWD/**/* # like the previous one, but going one level further down (with shopt -s globstar this recurses through all levels)
A better way of doing it:
After reading this (on Charles Duffy's recommendation), it appears to be a bad idea to use ls for this; the article even says "find is just as bad as ls in this context." The reason it's a bad idea is that you can't control the output of ls: for example, you can't configure ls to terminate filenames with NUL. That matters because Unix allows all kinds of weird characters in a filename (newlines, pipes, etc.) that will break any parsing of ls output in ways you can't anticipate.
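That said, if you do need output that is safe for arbitrary filenames, GNU find's -print0 emits NUL-terminated names that xargs -0 can consume later; a minimal sketch (locations.nul is a made-up filename):
find . -name '*.class' -print0 > locations.nul
xargs -0 ls -ld < locations.nul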
For this task, though, a shell script is simple enough.
Create a file my_script.sh and edit it to contain:
#!/bin/bash
shopt -s globstar # needed for ** to match recursively
for i in **/*; do
echo "$PWD/$i"
done
Give it execute permissions (by running: chmod +x my_script.sh).
Run it from the same directory with:
./my_script.sh
and you're good to go!

Unix: traverse a directory

I need to traverse a directory, starting in one directory and descending into its subdirectories. However, I also need access to each individual file in order to modify it. Is there already a command to do this, or will I have to write a script? Could someone provide some code to help me with this task? Thanks.
The find command is just the tool for that. Its -exec flag or -print0 in combination with xargs -0 allows fine-grained control over what to do with each file.
Example: Replace all foo's by bar's in all files in /tmp and subdirectories.
find /tmp -type f -exec sed -i -e 's/foo/bar/' '{}' ';'
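The -print0/xargs -0 pairing mentioned above does the same job; a sketch, assuming GNU find, xargs, and sed:
find /tmp -type f -print0 | xargs -0 -r sed -i -e 's/foo/bar/'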
for i in $(find .) ; do
if [ -d "$i" ] ; then do something with a directory ; fi
if [ -f "$i" ] ; then do something with a file etc. ; fi
done
This will return the whole tree (recursively) under the current directory as a list for the loop to go through. (Note that the unquoted command substitution splits on whitespace, so this loop misbehaves on filenames containing spaces or newlines.)
This can easily be achieved by combining find, xargs, and sed (or another file-modification command).
For example:
$ find /path/to/base/dir -type f -name '*.properties' | xargs sed -i -e '/^#/d'
This filters for all files with the .properties file extension.
The xargs command feeds the file paths generated by the find command into the sed command.
The sed command deletes all lines starting with # in those files (fed in by xargs).
Combining commands this way is very flexible.
For example, the find command has many options, so you can filter by user name, file size, file path (e.g. only under a /test/ subfolder), or file modification time.
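For instance, a sketch combining several such filters (the user name and size threshold here are made-up examples, and -size +1M is a GNU extension):
find /path/to/base/dir -type f -user alice -size +1M -mtime -7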
Another dimension of flexibility is how and what to change in your files. For example, sed lets you make changes to files by applying substitutions (specified via regular expressions). Similarly, you could use gzip to compress the files. And so on...
You would usually use the find command. On Linux, you have the GNU version, of course, which has many extra (and useful) options. Either version will allow you to execute a command (e.g. a shell script) on the files as they are found.
The exact details of how to make changes to each file depend on the change you want to make. That is probably best scripted, with find running the script:
POSIX or GNU:
find . -type f -exec your_script '{}' +
This will run your script once for a group of files with those names provided as arguments. If you want to do it one file at a time, replace the + with ';' (or \;).
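Here your_script stands in for whatever change you want to make; a trivial, hypothetical sketch that just reports each file it was handed:
#!/bin/sh
# your_script - hypothetical placeholder; replace the body with real edits
for f in "$@"; do
    printf 'would process: %s\n' "$f"
done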
I am assuming SearchMe is the example name of the directory you need to traverse completely.
I am also assuming, since it was not specified, that the files you want to modify are all text files. Is this correct?
In such scenario I would suggest using the command:
find SearchMe -type f -exec vi {} \;
If you are not familiar with vi editor, just use another one (nano, emacs, kate, kwrite, gedit, etc.) and it should work as well.
Bash 4+
shopt -s globstar
for file in **
do
    if [ -f "$file" ]; then
        : # do some processing to your file here -
          # things the find command can't do conveniently
    fi
done

Find the number of files in a directory

Is there any method in Linux to calculate the number of files in a directory (that is, immediate children) in O(1) (independently of the number of files) without having to list the directory first? If not O(1), is there a reasonably efficient way?
I'm searching for an alternative to ls | wc -l.
readdir is not as expensive as you may think. The knack is to avoid stat'ing each file, and (optionally) to skip sorting the output of ls.
/bin/ls -1U | wc -l
avoids aliases in your shell, doesn't sort the output, and lists 1 file-per-line (not strictly necessary when piping the output into wc).
The original question can be rephrased as "does the data structure of a directory store a count of the number of entries?", to which the answer is no. There isn't a more efficient way of counting files than readdir(2)/getdents(2).
One can get the number of subdirectories of a given directory without traversing the whole list by stat'ing (stat(1) or stat(2)) the given directory and observing its link count. A directory with N child directories will have a link count of N+2: one link from each subdirectory's ".." entry, one from the directory's own "." entry, and one from its name entry in its parent directory.
However one cannot get the number of all files (whether regular files or subdirectories) without traversing the whole list -- that is correct.
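To illustrate the link-count trick for subdirectories, a sketch assuming GNU stat (/some/dir is a placeholder):
# link count minus 2 (the "." entry and the name entry in the parent) = number of subdirectories
echo $(( $(stat -c %h /some/dir) - 2 ))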
The "/bin/ls -1U" command will not get all entries however. It will get only those directory entries that do not start with the dot (.) character. For example, it would not count the ".profile" file found in many login $HOME directories.
One can use either the "/bin/ls -f" command or the "/bin/ls -Ua" command to avoid the sort and get all entries.
Perhaps unfortunately for your purposes, either the "/bin/ls -f" command or the "/bin/ls -Ua" command will also count the "." and ".." entries that are in each directory. You will have to subtract 2 from the count to avoid counting these two entries, such as in the following:
expr `/bin/ls -f | wc -l` - 2 # Those are back ticks, not single quotes.
The --format=single-column (-1) option is not necessary on the "/bin/ls -Ua" command when piping the "ls" output, as in to "wc" in this case. The "ls" command will automatically write its output in a single column if the output is not a terminal.
The -U option for ls is not in POSIX, and in OS X's ls it has a different meaning than in GNU ls: it makes -t and -l use creation times instead of modification times. -f is in POSIX as an XSI extension. The manual of GNU ls describes -f as "do not sort, enable -aU, disable -ls --color" and -U as "do not sort; list entries in directory order".
POSIX describes -f like this:
Force each argument to be interpreted as a directory and list the name found in each slot. This option shall turn off -l, -t, -s, and -r, and shall turn on -a; the order is the order in which entries appear in the directory.
Commands like ls|wc -l give the wrong result when filenames contain newlines.
In zsh you can do something like this:
a=(*(DN));echo ${#a}
D (glob_dots) includes files whose name starts with a period and N (null_glob) causes the command to not result in an error in an empty directory.
Or the same in bash:
shopt -s dotglob nullglob;a=(*);echo ${#a[@]}
If IFS contains ASCII digits, add double quotes around ${#a[@]}. Add shopt -u failglob to ensure that failglob is unset.
A portable option is to use find:
find . ! -name . -prune|grep -c /
grep -c / can be replaced with wc -l if filenames do not contain newlines. ! -name . -prune is a portable alternative to -mindepth 1 -maxdepth 1.
Or here's another alternative that does not usually include files whose name starts with a period:
set -- *;[ -e "$1" ]&&echo "$#"
The command above does however include files whose name starts with a period when an option like dotglob in bash or glob_dots in zsh is set. When * matches no file, the command results in an error in zsh with the default settings.
I used this command and it works like a charm; just change the maxdepth to descend into subdirectories:
find * -maxdepth 0 -type d -exec sh -c "echo -n {} ' ' ; ls -lR {} | wc -l" \;
I think you can have more control on this using find:
find <path> -maxdepth 1 -type f -printf "." | wc -c
find -maxdepth 1 will not go deeper into the hierarchy of files.
-type f allows filtering to just files. Similarly, you can use -type d for directories.
-printf "." prints a dot for every match.
wc -c counts the characters, so it counts the dots created by the print... which means counting how many files exist in the given path.
For the number of all files under the current directory (recursively), try this:
ls -lR * | wc -l
Note that this count is only approximate: the recursive listing includes per-directory header and blank lines, which get counted too.
As far as I know, there is no better alternative. This information might be off-topic to the question, and you may already know it, but under Linux (and Unix in general) directories are just special files containing the list of other files (the exact details depend on the specific file system, but this is the general idea). There is no call to find the total number of entries without traversing the whole list. Please correct me if I'm wrong.
Use ls -1 | wc -l

How can I generate a list of files with their absolute path in Linux?

I am writing a shell script that takes file paths as input.
For this reason, I need to generate recursive file listings with full paths. For example, the file bar has the path:
/home/ken/foo/bar
but, as far as I can see, both ls and find only give relative path listings:
./foo/bar (from the folder ken)
It seems like an obvious requirement, but I can't see anything in the find or ls man pages.
How can I generate a list of files in the shell including their absolute paths?
If you give find an absolute path to start with, it will print absolute paths. For instance, to find all .htaccess files in the current directory:
find "$(pwd)" -name .htaccess
or if your shell expands $PWD to the current directory:
find "$PWD" -name .htaccess
find simply prepends the path it was given to a relative path to the file from that path.
Greg Hewgill also suggested using pwd -P if you want to resolve symlinks in your current directory.
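For instance, a sketch that resolves any symlinks in the current directory first:
find "$(pwd -P)" -name .htaccess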
readlink -f filename
gives the full absolute path. However, if the file is a symlink, you'll get the final resolved name.
Use this for dirs (the / after ** is needed in bash to limit it to directories):
ls -d -1 "$PWD/"**/
this for files and directories directly under the current directory, whose names contain a .:
ls -d -1 "$PWD/"*.*
this for everything:
ls -d -1 "$PWD/"**/*
Taken from here
http://www.zsh.org/mla/users/2002/msg00033.html
In bash, ** is recursive if you enable shopt -s globstar.
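A sketch of the bash equivalent for the recursive case:
shopt -s globstar
ls -d -1 "$PWD/"**/*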
You can use
find $PWD
in bash
ls -d "$PWD/"*
This looks only in the current directory. It quotes "$PWD" in case it contains spaces.
Command: ls -1 -d "$PWD/"*
This will give the absolute paths of the files, like below:
[root@kubenode1 ssl]# ls -1 -d "$PWD/"*
/etc/kubernetes/folder/file-test-config.txt
/etc/kubernetes/folder/file-test.txt
/etc/kubernetes/folder/file-client.txt
Try this:
find "$PWD"/
You get a list of the absolute paths of everything in the working directory.
You can do
ls -1 | xargs realpath
If you need to specify an absolute or relative path, you can do that as well:
ls -1 $FILEPATH | xargs realpath
The $PWD approach (as Matthew showed above) is a good option. If you want find to print only files, you can add the -type f option to search only normal files; other options are "d" for directories only, etc. So in your case it would be (if I want to search only for files with the .c extension):
find $PWD -type f -name "*.c"
or if you want all files:
find $PWD -type f
Note: You can't make a double-quoted alias for the above command, because $PWD gets expanded to the current directory at the moment bash defines the alias.
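If you do want an alias, single quotes defer the expansion until the alias is actually used; a sketch (findabs is a made-up name):
alias findabs='find "$PWD" -type f'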
If you give the find command an absolute path, it will spit the results out with absolute paths. So, from the ken directory, if you were to type:
find /home/ken/foo/ -name bar -print
(instead of the relative path find . -name bar -print)
You should get:
/home/ken/foo/bar
Therefore, if you want an ls -l and have it return the absolute path, you can just tell the find command to execute an ls -l on whatever it finds.
find /home/ken/foo -name bar -exec ls -l {} \;
NOTE: There is a space between {} and \;
You'll get something like this:
-rw-r--r-- 1 ken admin 181 Jan 27 15:49 /home/ken/foo/bar
If you aren't sure where the file is, you can always change the search location. As long as the search path starts with "/", you will get an absolute path in return. If you are searching a location (like /) where you are going to get a lot of permission denied errors, then I would recommend redirecting standard error so you can actually see the find results:
find / -name bar -exec ls -l {} \; 2> /dev/null
(2> is the syntax for the Bourne and Bash shells; it will not work with the C shell. It may work in other shells too, but I only know for sure that it works in Bourne and Bash.)
Just an alternative to
ls -d "$PWD/"*
to point out that * is plain shell expansion, so
echo "$PWD/"*
does the same thing (with the drawback that you cannot use -1 to separate entries with newlines instead of spaces).
fd
Using fd (alternative to find), use the following syntax:
fd . foo -a
Where . is the search pattern and foo is the root directory.
E.g. to list all files in /etc recursively, run: fd . /etc -a.
-a, --absolute-path Show absolute instead of relative paths
If you need a list of all files in the current directory as well as its subdirectories:
find $PWD -type f
If you need a list of all files in the current directory only:
find $PWD -maxdepth 1 -type f
You might want to try this.
for name in /home/ken/foo/bar/*
do
echo "$name"
done
You can get the absolute paths with a simple for loop and echo, without find.
Find jar files recursively and print their absolute paths:
ls -R | grep "\.jar$" | xargs readlink -f
/opt/tool/dev/maven_repo/com/oracle/ojdbc/ojdbc8-19.3.0.0.jar
/opt/tool/dev/maven_repo/com/oracle/ojdbc/ons-19.3.0.0.jar
/opt/tool/dev/maven_repo/com/oracle/ojdbc/oraclepki-19.3.0.0.jar
/opt/tool/dev/maven_repo/com/oracle/ojdbc/osdt_cert-19.3.0.0.jar
/opt/tool/dev/maven_repo/com/oracle/ojdbc/osdt_core-19.3.0.0.jar
/opt/tool/dev/maven_repo/com/oracle/ojdbc/simplefan-19.3.0.0.jar
/opt/tool/dev/maven_repo/com/oracle/ojdbc/ucp-19.3.0.0.jar
This is a dynamic solution that works well as a function:
lfp ()
{
ls -1 "$1" | xargs -I{} echo "$(realpath "$1")/{}"
}
lspwd() { for i in "$@"; do ls -d -1 "$PWD/$i"; done; }
Here's an example that prints out a list without an extra period and that also demonstrates how to search for a file match. Hope this helps:
find . -type f -name "extr*" -exec echo `pwd`/{} \; | sed "s|\./||"
This worked for me. But it didn't list in alphabetical order.
find "$(pwd)" -maxdepth 1
This command lists alphabetically as well as lists hidden files too.
ls -d -1 "$PWD/".*; ls -d -1 "$PWD/"*;
stat
Absolute path of a single file:
stat -c %n "$PWD"/foo/bar
This will give the canonical path (will resolve symlinks): realpath FILENAME
If you want canonical path to the symlink itself, then: realpath -s FILENAME
Most if not all of the suggested methods result in paths that cannot be used directly in another terminal command if the path contains spaces. Ideally the results will have those spaces escaped with backslashes.
This works for me on macOS:
find / -iname "*SEARCH TERM spaces are okay*" -print 2>&1 | grep -v denied |grep -v permitted |sed -E 's/\ /\\ /g'
for p in <either relative or absolute path of the directory>/*; do
echo "$(realpath -s "$p")"
done
Files can be listed recursively in many ways in Linux. Here I am sharing a one-liner to empty all log files (files only) under the /var/log/ directory, plus a way to check which log files have been written to recently.
First, list the files recursively:
find /var/log/ -type f
Second, empty the files recursively:
for i in $(find /var/log/ -type f) ; do cat /dev/null > "$i" ; done
Third, list the files by most recent use:
ls -ltr $(find /var/log/ -type f)
Note: for the directory location you can also pass $PWD instead of /var/log.
If you don't have symbolic links, you could try
tree -ifL 1 [DIR]
-i makes tree print filenames on each line, without the tree structure.
-f makes tree print the full path of each file.
-L 1 prevents tree from recursing below one level.
Write one small function
lsf() {
ls "$(pwd)/$1"
}
Then you can use like
lsf test.sh
it gives full path like
/home/testuser/Downloads/test.sh
I used the following to list the absolute paths of files in a directory to a text file:
find "$PWD" -wholename '*.JPG' >test.txt
find / -print will do this
ls -1 | awk -v path="$PWD/" '{print path $0}'
