Find files older than X and Count them - linux

Using Linux. What I need to do is determine the number of files in a directory (recursively) that are older than DATE and echo that number.
I have:
find /u1/database/prod/arch -type f -mtime +10 -exec ls -la {} \;
That lists the files fine.
And then I have:
ls -laR | wc -l
Which lets me count the files recursively.
But I can't seem to put them together. I think I need a script to do this but don't know how to do that.
Would love some help

find /u1/database/prod/arch -type f -mtime +10 | wc -l
works here.

You don't need the -exec. Use -print (or nothing) and find will print a line per file (and handle the recursion):
find /u1/database/prod/arch -type f -mtime +10 -print | wc -l
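If you want to do this from a script and echo the number, a minimal sketch (assuming the same path and age as above) would be:
#!/bin/bash
# count files older than 10 days, recursively, then echo the count
count=$(find /u1/database/prod/arch -type f -mtime +10 | wc -l)
echo "$count"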

Related

How do I find the number of all .txt files in a directory and all subdirectories using specifically the find command and the wc command?

So far I have this:
find -name ".txt"
I'm not quite sure how to use wc to find out the exact number of files. When using the command above, all the .txt files show up, but I need the exact number of files with the .txt extension. Please don't suggest using other commands as I'd like to specifically use find and wc. Thanks
Try:
find . -name '*.txt' | wc -l
The -l option to wc tells it to return just the number of lines.
Improvement (requires GNU find)
The above will give the wrong number if any .txt file name contains a newline character. This will work correctly with any file names:
find . -iname '*.txt' -printf '1\n' | wc -l
-printf '1\n' tells find to print just a line containing 1 for each file found. This avoids problems with file names containing difficult characters.
Example
Let's create two .txt files, one with a newline in its name:
$ mkdir -p dir1/dir2
$ touch dir1/dir2/a.txt $'dir1/dir2/b\nc.txt'
Now, let's run the find command:
$ find . -name '*.txt'
./dir1/dir2/b?c.txt
./dir1/dir2/a.txt
To count the files:
$ find . -name '*.txt' | wc -l
3
As you can see, the answer is off by one. The improved version, however, works correctly:
$ find . -iname '*.txt' -printf '1\n' | wc -l
2
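If your find lacks -printf (e.g. BSD/macOS find), a newline-safe alternative (just a sketch) is to print one character per file and count bytes instead of lines:
# one dot per file, counted with wc -c, so newlines in names don't matter
find . -name '*.txt' -type f -exec printf '.' \; | wc -c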
find -type f -name "*.h" -mtime +10 -print | wc -l
This worked out.

Bash - listing programs in all subdirectories with directory name before file

I don't need to do this in one line, but I've only got 1 line so far.
find . -perm -111 -type f | sort -r
What I'm trying to do is write a bash script that will display the list of all files in the current directory that are executable (z to a). I want the script to do the same for all subdirectories. What I'm having difficulty doing is displaying the name of the subdirectory before the list of executable files in that directory / subdirectory.
So, to clarify, desirable output might look like this:
program1
program2
SubDir1
program3
SubDirSubDir2
program4
SubDir2
program5
What I have right now (the above code) does this: it's not removing the path, and it isn't listing the name of the new directory when directories are changed.
./exfile
./test/exfile1
./test1/program2
./test1/program
./first
Hopefully that was clear.
This will work.
I changed the permission to -100 because some programs may be executable only by their owner.
for d in $(find . -type d); do
    echo "in $d:"
    find "$d" -maxdepth 1 -perm -100 -type f | sed 's#.*/##'
done
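If any directory names contain spaces or newlines, $(find . -type d) will split them apart; a safer variation on the same idea (a sketch) uses -print0 with a while loop:
# read NUL-delimited directory names so odd characters survive
find . -type d -print0 | while IFS= read -r -d '' d; do
    echo "in $d:"
    find "$d" -maxdepth 1 -perm -100 -type f | sed 's#.*/##'
done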
This will do the trick for you.
find . -type d | sort | xargs -n1 -I{} bash -c "find {} -type f -maxdepth 1 -executable | sort -r"
The first find command lists all directories and subdirectories and sorts them in ascending order.
The sorted directories/subdirectories are then passed to xargs, which calls bash to find the files within each directory/subdirectory and sort them in descending order.
If you prefer to also print the directory, you may run it without -type f.
You can use find on all directories and combine it with -print (to print the directory name) and -exec (to execute a find for files in that directory):
find . -type d -print -exec bash -c 'find {} -type f -depth 1 -perm +0111 | sort -r' \;
Let's break this down. First, you have the directory search:
find . -type d -print
Then the command to execute for each directory:
find {} -type f -depth 1 -perm +0111 | sort -r
The -exec switch will expand the path wherever it sees {}. Because the pipe is shell syntax, the whole thing is wrapped in bash -c.
You can expand on this further. If you want to strip the directory name off the files and space out your results more nicely, something like this might suffice:
find {} -type f -depth 1 -print0 -perm +0111 | xargs -n1 -0 basename | sort -r && echo
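Note that -depth 1 and -perm +0111 above are BSD find syntax. On GNU find, a roughly equivalent sketch would be:
# -maxdepth 1 limits the search to the directory's own entries; -perm /111 matches any execute bit
find . -type d -print -exec bash -c 'find "$1" -maxdepth 1 -type f -perm /111 | sort -r' _ {} \;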
Hmm, the sorting requirement makes this tricky - the "for d in $(find...)" approach is clever, but it's hard to control the sorting. How about this? Everything is z->a, including the directories, but the awk statement is a bit of a monster ;-)
find `pwd` -perm 111 -type f |
sort -r |
xargs -n1 -I{} sh -c "dirname {};basename {}" |
awk '/^\// {dir=$0 ; if (dir != lastdir) {print;lastdir=dir}} !/^\// {print}'
Produces
/home/imcgowan/t/t3
jjj
iii
hhh
/home/imcgowan/t/t2
ggg
fff
eee
/home/imcgowan/t/t1
ddd
ccc
bbb
/home/imcgowan/t
aaa

linux find command operation

My shell script finds all files older than 90 days:
find /var/www/html/zip/data/*/*/*/*/* -type f -mtime +90
which returns output like:
/var/www/html/zip/data/2011/jan/11/333333/Photos/a.jpeg
/var/www/html/zip/data/2011/jan/11/333333/Photos/b.jpeg
/var/www/html/zip/data/2011/jan/11/333333/Photos/c.jpeg
/var/www/html/zip/data/2011/feb/11/333333/Photos/a.jpeg
/var/www/html/zip/data/2011/feb/11/333333/Photos/b.jpeg
What would I need to do to fetch just the unique folder paths from the above output, using the same find command, so the output would be:
/var/www/html/zip/data/2011/jan/11/333333/Photos
/var/www/html/zip/data/2011/feb/11/333333/Photos
So I believe I would need to append something to the above find command, but I don't know what.
Note: I would like to save the unique path in a variable
Try
find /var/www/html/zip/data/*/*/*/*/* -type f -mtime +90 -printf "%h\n" | sort | uniq
I am not sure if find can do this directly, but you could always use sed to post-process the results:
find /var/www/html/zip/data/*/*/*/*/* -type f -mtime +90 | sed 's|/[^/]*$||'
Piping the results further through uniq should remove duplicates (you might need to first do sort, but I doubt it).
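Since you mentioned wanting the unique paths in a variable, a minimal sketch building on the above (same path and options assumed):
uniq_dirs=$(find /var/www/html/zip/data/*/*/*/*/* -type f -mtime +90 | sed 's|/[^/]*$||' | sort -u)
echo "$uniq_dirs"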
You can find the solution this way
find /var/www/html/zip/data -type d -mtime +90 | uniq
The idea behind this is that a folder's modification time changes when files inside it are added, removed, or renamed. So in this case you will get all the folders whose contents have not changed in the last 90 days...
Adding to jonathanasdf's answer, you could maybe add a for loop:
i=1
for uniq_dir in `find /var/www/html/zip/data/*/*/*/*/* -type f -mtime +90 -printf "%h\n" | sort | uniq`; do
    a[$i]=$uniq_dir
    let "i = $i + 1"
done
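With bash 4 or later, mapfile can capture the same list into an array without the explicit counter (a sketch reusing the same find command; note indices start at 0 here):
mapfile -t a < <(find /var/www/html/zip/data/*/*/*/*/* -type f -mtime +90 -printf "%h\n" | sort | uniq)
echo "${#a[@]} unique paths, first one: ${a[0]}"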

pipe a command after splitting the returned value

I'm using a find command which results in multiple lines of results; I then want to pipe each of those lines into an ls command with the -l option specified.
find . -maxdepth 2 -type f |<some splitting method> | ls -l
I want to do this in one "command" and avoid writing to a file.
I believe this is what you are looking for:
find . -maxdepth 2 -type f -exec ls -l {} \;
Explanation:
find . -maxdepth 2 -type f: find regular files, descending at most 2 directory levels.
-exec ls -l {} \;: for each result found, run ls -l on it; {} marks where each result from find is substituted.
The typical approach is to use -exec:
find . -maxdepth 2 -type f -exec ls -l {} \;
Sounds like you are looking for xargs. For example, on a typical Linux system:
find . -maxdepth 2 -type f -print0 | xargs -0 -n1 ls -l
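If you don't need one ls invocation per file, the POSIX -exec ... {} + form batches many files into each ls call, much like xargs (a sketch):
find . -maxdepth 2 -type f -exec ls -l {} +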

How do I find all the files that were created today in Unix/Linux?

How do I find all the files that were created only today, and not in a 24-hour period, in Unix/Linux?
On my Fedora 10 system, with findutils-4.4.0-1.fc10.i386:
find <path> -daystart -ctime 0 -print
The -daystart flag tells it to calculate from the start of today instead of from 24 hours ago.
Note however that this will actually list files created or modified in the last day. find has no options that look at the true creation date of the file.
find . -mtime -1 -type f -print
To find all files that are modified today only (since start of day only, i.e. 12 am), in current directory and its sub-directories:
touch -t `date +%m%d0000` /tmp/$$
find . -type f -newer /tmp/$$
rm /tmp/$$
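On GNU find you can avoid the temporary file by using -newermt with today's date (a sketch; -newermt is GNU-specific):
find . -type f -newermt "$(date +%Y-%m-%d)"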
I use this with some frequency:
$ ls -altrh --time-style=+%D | grep $(date +%D)
After going through many posts, I found the one that really works:
find $file_path -type f -name "*.txt" -mtime -1 -printf "%f\n"
This prints only the file name, like
abc.txt, not /path/to/folder/abc.txt.
You can also play around with or customize the -mtime -1 value.
This worked for me. Lists the files created on May 30 in the current directory.
ls -lt | grep 'May 30'
Use ls or find to get all the files that were created today.
Using ls : ls -ltr | grep "$(date '+%b %e')"
Using find : cd $YOUR_DIRECTORY; find . -ls 2>/dev/null| grep "$(date '+%b %e')"
find ./ -maxdepth 1 -type f -execdir basename '{}' ';' | grep `date +'%Y%m%d'`
You can use find and ls to accomplish this:
find . -type f -exec ls -l {} \; | egrep "Aug 26";
It will find all files in this directory, display useful information (-l), and filter the lines with the date you want. It may be a little bit slow, but still useful in some cases.
Just keep in mind that ls pads single-digit days, so there are two spaces between the month and the day (e.g. "Aug  6"); otherwise your grep pattern will not match.
If you did something like accidentally rsync to the wrong directory, the above suggestions work for finding new files, but for me the easiest was connecting with an SFTP client like Transmit, then ordering by date and deleting.
To get files modified more than 24 hours ago (-mtime 1 matches files between 24 and 48 hours old), execute the command below:
find . -type f -mtime 1 -exec ls -l {} \;
To get files modified within the last 24 hours, execute the command below:
find . -type f -mtime -1 -exec ls -l {} \;
To get files modified more than n days ago, where +2 in the command below means more than 2 days:
find . -type f -mtime +2 -exec ls -l {} \;
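If you want strictly calendar "today" rather than a 24-hour window, GNU find's -daystart (mentioned in the first answer above) combines with these, and wc -l gives the count (a sketch):
find . -daystart -type f -mtime 0 -exec ls -l {} \;
find . -daystart -type f -mtime 0 | wc -l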
