Bash - listing programs in all subdirectories with directory name before file - linux

I don't need to do this in one line, but I've only got 1 line so far.
find . -perm -111 -type f | sort -r
What I'm trying to do is write a bash script that displays the list of all files in the current directory that are executable, sorted in reverse (z to a). I want the script to do the same for all subdirectories. What I'm having difficulty with is displaying the name of each subdirectory before the list of executable files in that directory/subdirectory.
So, to clarify, desirable output might look like this:
program1
program2
SubDir1
program3
SubDirSubDir2
program4
SubDir2
program5
What I have right now (the above code) does this instead. It's not removing the path prefix, and it isn't printing the directory name when the directory changes.
./exfile
./test/exfile1
./test1/program2
./test1/program
./first
Hopefully that was clear.

This will work.
I changed the permission test to -100 because some programs may be executable only by their owner.
for d in $(find . -type d); do
    echo "in $d:"
    find "$d" -maxdepth 1 -perm -100 -type f | sed 's#.*/##'
done
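Note that the $(find . -type d) loop word-splits directory names containing whitespace. A whitespace-safe sketch of the same idea (assumes bash and a find that supports -print0):
find . -type d -print0 | while IFS= read -r -d '' d; do
    echo "in $d:"
    find "$d" -maxdepth 1 -perm -100 -type f | sed 's#.*/##'
done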

This will do the trick for you.
find . -type d | sort | xargs -n1 -I{} bash -c "find {} -maxdepth 1 -type f -executable | sort -r"
The first find command lists all directories and subdirectories and sorts them in ascending order.
The sorted directories/subdirectories are then passed to xargs, which calls bash to find the files within each directory/subdirectory and sort them in descending order.
If you prefer to also print the directory, you may run it without -type f.
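If you would rather label each group with its directory name explicitly, here is a small variant (a sketch; it passes each path as a positional argument instead of splicing {} into the shell string, and -executable is GNU find syntax):
find . -type d | sort | xargs -I{} bash -c 'echo "$1:"; find "$1" -maxdepth 1 -type f -executable | sort -r' _ {}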

You can use find on all directories and combine it with -print (to print the directory name) and -exec (to execute a find for files in that directory):
find . -type d -print -exec bash -c 'find {} -type f -depth 1 -perm +0111 | sort -r' \;
Let's break this down. First, you have the directory search:
find . -type d -print
Then the command to execute for each directory:
find {} -type f -depth 1 -perm +0111 | sort -r
The -exec switch will expand the path wherever it sees {}. Because this uses a pipe operator that is shell syntax, the whole thing is wrapped in bash -c.
You can expand on this further. If you want to strip the directory name off the files and space out your results more nicely, something like this might suffice:
find {} -type f -depth 1 -perm +0111 -print0 | xargs -0 -n1 basename | sort -r && echo
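Note that -depth 1 and -perm +0111 are BSD find syntax. On GNU find, a rough equivalent of the first command might look like this (a sketch; it also passes the path as a positional argument rather than splicing {} into the shell string):
find . -type d -print -exec bash -c 'find "$1" -mindepth 1 -maxdepth 1 -type f -perm /111 | sort -r' _ {} \;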

Hmm, the sorting requirement makes this tricky - the "for d in $(find...)" approach is clever, but it's hard to control the sorting with it. How about this? Everything is z->a, including the directories, but the awk statement is a bit of a monster ;-)
find `pwd` -perm -111 -type f |
sort -r |
xargs -n1 -I{} sh -c "dirname {};basename {}" |
awk '/^\// {dir=$0 ; if (dir != lastdir) {print;lastdir=dir}} !/^\// {print}'
Produces
/home/imcgowan/t/t3
jjj
iii
hhh
/home/imcgowan/t/t2
ggg
fff
eee
/home/imcgowan/t/t1
ddd
ccc
bbb
/home/imcgowan/t
aaa
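For readability, the awk monster can be written out with comments (same logic, just expanded):
awk '
  # dirname output starts with "/": remember it and print it only when it changes
  /^\//  { dir = $0; if (dir != lastdir) { print; lastdir = dir } }
  # everything else is basename output: always print it
  !/^\// { print }
'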

Related

Count number of files in several folders with Unix command

I'd like to count the number of files in each folder. For one folder, I can do it with:
find ./folder1/ -type f | wc -l
I can repeat this command for every folder (folder2, folder3, ...), but I'd like to know if it is possible to get the information with one command. The output should look like this:
folder1 13
folder2 4
folder3 1254
folder4 327
folder5 2145
I can get the list of my folders with:
find . -maxdepth 1 -type d
which returns:
./folder1
./folder2
./folder3
./folder4
./folder5
Then, I thought about combining this command with the first one, but I don't know exactly how. Maybe with "-exec" or "xargs"?
Many thanks in advance.
A possible solution using xargs is to use the -I option, which replaces occurrences of replace-str (% in the code sample below) in the initial-arguments with names read from standard input:
find . -maxdepth 1 -type d -print0 | xargs -0 -I% sh -c 'echo -n "%: "; find "%" -type f | wc -l'
You also need to pass the find command to sh if you want to pipe it with wc, otherwise wc will count files in all directories.
Another solution (maybe less cryptic) is to use a one-liner for loop:
for d in */; do echo -n "$d: "; find "$d" -type f | wc -l; done
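Note that the for d in */ loop skips hidden directories. A sketch of a variant that includes them by letting find drive the outer loop as well (the _ fills $0, so the directory name lands in $1):
find . -mindepth 1 -maxdepth 1 -type d -exec sh -c 'printf "%s: " "$1"; find "$1" -type f | wc -l' _ {} \;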

How to pipe a list of files returned from find to cat and sort them

I'm trying to find all the files in a folder and then print them, but sorted.
I have this so far
find . -type f -exec cat {} \;
and it prints all the files, but I need to sort them too. When I do
find . -type f -exec sort cat {};
I get the following error:
sort: cannot read: cat: No such file or directory
and if I switch sort and cat like this
find . -type f -exec cat sort {} \;
I get the same error, and then it prints the file (I have only one file to print).
It's not clear to me if you want to display the contents of the files unchanged, sorting the files by name, or if you want to sort the contents of each file. If the latter:
find . -type f -exec sort {} \;
If the former, use BSD find's -s option:
find -s . -type f -exec cat {} \;
If you don't have BSD find, use:
find . -type f -print0 | sort -z | xargs -0 cat
Composing commands using pipes is often the simplest solution.
find . -type f -print0 | sort -z | xargs -0 cat
Explanation: you can sort the filenames after the fact using ... | sort, then pass the output (the list of files) to cat using xargs, i.e. ... | xargs cat.
As @arkaduisz points out, when using pipes you should handle filenames containing whitespace carefully (hence -print0, sort -z, and -0).

shell script to delete all files except the last updated file in different folders

My application logs are created in the folders below on a Linux system:
Folder 1: 100001_1001
Folder 2: 200001_1002
Folder 3: 300061_1003
Folder 4: 300001_1004
Folder 5: 400011_1008
I want to delete all files except the latest file in the above folders, and I want to add this to a cron job.
I tried the lines below, but they are not working. I need help:
30 1 * * * ls -lt /abc/cde/etc/100* | awk '{if(NR!=1) print $9}' | xargs -i rm -rf {} \;
30 1 * * * ls -lt /abc/cde/etc/200* | awk '{if(NR!=1) print $9}' | xargs -i rm -rf {} \;
30 1 * * * ls -lt /abc/cde/etc/300* | awk '{if(NR!=1) print $9}' | xargs -i rm -rf {} \;
30 1 * * * ls -lt /abc/cde/etc/400* | awk '{if(NR!=1) print $9}' | xargs -i rm -rf {} \;
You can use this pipeline consisting of GNU utilities (so that we can also handle file paths with special characters, whitespace and glob characters):
find /parent/log/dir -type f -name '*.zip' -printf '%T@\t%p\0' |
sort -zk1,1rn |
cut -zf2 |
tail -z -n +2 |
xargs -0 rm -f
Using a slightly modified approach to your own:
find /abc/cde/etc/100* -printf "%T+\t%p\n" | sort -k1,1r | awk 'NR!=1{print $2}' | xargs -i rm "{}"
The find version doesn't suffer from the missing paths, so this MIGHT work (I don't know anything about the directory structure, or whether 100* points at a directory, a file, or a group of files...).
You should use find, instead. It has a -delete action that deletes the files it finds that match your specification. Warning: it is very easy to go wrong with -delete. Test your command first. For example, to find all files named *.zip under a/b/c (and only files):
find a/b/c -depth -name '*.zip' -type f -print
This is the test; it prints all the files that the final command will delete (do not forget -depth, it is important). Once you are sure, the command that does the deletion is:
find a/b/c -depth -name '*.zip' -type f -delete
find also has options to select files by last modification date, by size... You could, for instance, find all files that were modified at least 24 hours ago:
find a/b/c -depth -type f -mtime +0 -print
and, after careful check, delete them:
find a/b/c -depth -type f -mtime +0 -delete
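To tie this back to the cron job in the question, here is a sketch of a cleanup script built from the GNU pipeline above (the folder globs mirror the question and the script name is a placeholder; dry-run it by replacing rm -f with echo before scheduling):
#!/bin/bash
# keep-newest.sh - in each log folder, delete every regular file
# except the most recently modified one
for dir in /abc/cde/etc/100* /abc/cde/etc/200* /abc/cde/etc/300* /abc/cde/etc/400*; do
    [ -d "$dir" ] || continue
    find "$dir" -maxdepth 1 -type f -printf '%T@\t%p\0' |
        sort -zk1,1rn |   # newest first
        cut -zf2- |       # drop the timestamp field
        tail -z -n +2 |   # skip the newest file
        xargs -0 -r rm -f
done
The cron entry then reduces to a single line, e.g. 30 1 * * * /path/to/keep-newest.sh (the script path is a placeholder).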

Bash script that writes subdirectories who has more than 5 files

While I was trying to practice my Linux skills, I came across this question but could not solve it.
So its basically saying "Write a bash script that takes a name of
directory as a command argument and printf the name of subdirectories
that has more than 5 files in it."
I thought we would use the find command, but I still could not figure it out. My code is:
find directory -type d -mindepth 5
but it's not working.
You can use find twice:
First you can use find and wc to count the number of files in a given directory:
nb=$(find directory -maxdepth 1 -type f -printf "x\n" | wc -l)
This just asks find to output an x on a line for each file in the directory directory, proceeding non-recursively, then wc -l counts the number of lines, so, really, nb is the number of files in directory.
If you want to know whether a directory contains more than 5 files, it's a good idea to stop find as soon as 6 files are found:
nb=$(find directory -maxdepth 1 -type f -printf "x\n" | head -6 | wc -l)
Here nb has an upper threshold of 6.
Now if, for each subdirectory of directory, you want to output the number of files (with the threshold at 6), you can do this:
find directory -type d -exec bash -c 'nb=$(find "$0" -maxdepth 1 -type f -printf "x\n" | head -6 | wc -l); echo "$nb"' {} \;
where $0 is the 0th argument, namely the {} that find replaces with each subdirectory of directory.
Finally, you only want to display the subdirectory name if the number of files is more than 5:
find directory -type d -exec bash -c 'nb=$(find "$0" -maxdepth 1 -type f -printf "x\n" | head -6 | wc -l); ((nb>5))' {} \; -print
The final test ((nb>5)) succeeds exactly when nb is greater than 5, and in case of success find goes on to -print the subdirectory name.
This should do the trick:
find directory/ -type f | sed 's/\(.*\)\/.*/\1/g' | sort | uniq -c | sort -n | awk '{if($1>5) print($2)}'
Using -mindepth is useless here, since it only lists directories at depth 5 or more, whereas you need subdirectories with more than 5 files in them.
find directory -type f prints all files in the subdirectories
sed 's/\(.*\)\/.*/\1/g' strips the file names, leaving only the list of subdirectories
sort sorts that list so we can use uniq
uniq -c merges duplicate lines and writes how many times each occurred
sort -n sorts it by number of occurrences (so you end up with a list of (count, subdirectory) pairs)
awk '{if($1>5) print($2)}' prints only the lines whose first column is > 5 (and it prints only the second column)
So you end up with a list of subdirectories with more than 5 files inside.
EDIT:
A fix for paths with spaces was proposed:
Instead of awk '{if($1>5) print($2)}' there should be awk '{if($1>5){ $1=""; print(substr($0,2)) }}', which sets the first field to "" and then prints the whole line without the leading space (which was the delimiter). Put together, we get this:
find directory/ -type f | sed 's/\(.*\)\/.*/\1/g' | sort | uniq -c | sort -n | awk '{if($1>5){ $1=""; print(substr($0,2)) }}'
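With GNU find, the sed step can be skipped entirely by printing each file's parent directory with -printf '%h\n' (a sketch; it still assumes no newlines in path names):
find directory/ -type f -printf '%h\n' | sort | uniq -c | awk '{if($1>5){ $1=""; print(substr($0,2)) }}'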

grep command to find files

I'm looking for a command that uses grep to search /usr/bin for all the files that have 2 links, and sorts them in ascending order.
The second command I'm looking for must use the first one and display just the files that contain the "x".
Thank you.
You can do this direct from grep, eg:
grep -r --include=*.py "HOSTS" .
will search recursively ('-r') under the current directory ('.') in all python files ('*.py') for the string "HOSTS".
This would do
find /usr/bin -links 2 -print0 | xargs -0 ls -adltr
modify the ls to do the sorting you require
find /usr/bin -links 2 -print0 | xargs -0 grep -l "x"
Files containing the "x" :)
If you meant 'contain the x' as 'are executable' (an x appears in the ls -l output), use:
find /usr/bin -links 2 -executable -print0 | xargs -0 ls -adltr
To see only dirs:
find /usr/bin -links 2 -type d -executable -print0 | xargs -0 ls -adltr
To see only files:
find /usr/bin -links 2 -type f -executable -print0 | xargs -0 ls -adltr
Note: directories get 2 links by default (the . entry is a link), so when looking at directories you might want to search for -links 3 instead.
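Putting the two requested commands together (a sketch; it assumes GNU sort for -z and reads 'contain the "x"' as the literal character x in the file contents):
# 1) files in /usr/bin with exactly 2 hard links, sorted ascending
find /usr/bin -type f -links 2 | sort
# 2) of those, just the files whose contents contain "x"
find /usr/bin -type f -links 2 -print0 | sort -z | xargs -0 grep -l "x"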
