Get the date of the most recent entry (file or directory) within a directory - linux

I have a bunch of directories containing different revisions of the same C++ project. I'd like to sort things out by moving each of these directories into a parent directory named with the pattern YYYY.MM.DD, where YYYY.MM.DD is the date of the most recent entry (file or directory) in that directory.
How can I recursively find the date of the most recent entry in a particular directory?
Update
Below is one of the ways to do it:
find . -not -type d -printf "%T+ %p\n" | sort -n | tail -1
Or even:
find . -not -type d -printf "%TY.%Tm.%Td\n" | sort -n | tail -1

Try using ls -t | head -n 1 to list files sorted by modification date and show only the first. The date will be in the format defined by your locale (e.g. YYYY-MM-DD).
For example,
ls -tl --time-style=long-iso | awk 'NR>1 {date=$6; file=$8; system("mkdir -p " date); system("mv " file " " date "/")}'
will go through all files, create a directory for every modification date, and move each file there (the --time-style=long-iso option pins the date to the YYYY-MM-DD column the script expects; beware that filenames containing whitespace need extra care). Now use find -type d in the root directory of the source tree to recursively list all the directories. Combined with the above, you get the following (sadly with some overhead):
for dir in $(find -type d) ; do
export dir
ls -tl --time-style=long-iso "$dir" | awk 'NR>1 {dir=ENVIRON["dir"]; date=$6; file=$8; system("mkdir -p " dir "/" date); system("mv " dir "/" file " " dir "/" date "/")}'
done
This does not descend the tree recursively itself, but lets find list all directories of the complete tree and then iterates over them. If you need the date directories outside of the source tree (presumably you do), just edit the two system() calls in the awk script accordingly.
Edited: fix script, add more description

Another option, mixing your solution with the previous answer:
find -print0 | xargs --null ls -dtl
It shows directories as well.
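If all you need from that listing is the single most recent entry, you could take just the first line; note this is only reliable while the file list is small enough that xargs does not split it across several ls invocations (a rough sketch):
find . -print0 | xargs -0 ls -dt | head -n 1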

Related

Bash How to find directories of given files

I have a folder with several subfolders, and in some of those subfolders I have text files example_1.txt, example_2.txt and so on. example_1.txt may be found in subfolder1; some subfolders do not contain text files.
How can I list all directories that contain a text file starting with example?
I can find all those files by running this command
find . -name "example*"
But what I need is to find the directories these files are located in. I would need a list like subfolder1, subfolder4, subfolder8 and so on. I'm not sure how to do that.
You can use the following command:
find . -name "example*" | awk -F/ '{print $2}' | sort -u
find all files in subfolders with mindepth 2 to avoid this folder (.)
get dirname with xargs dirname
sort output-list and make folders unique with sort -u
print only basenames with awk (delimiter is / and last string is $NF). add "," after every subfolder
translate newlines in blanks with tr
remove last ", " with sed
list=$(find ./ -mindepth 2 -type f -name "example*"|xargs dirname|sort -u|awk -F/ '{print $NF","}'|tr '\n' ' '|sed 's/, $//')
echo $list
flow, over, stack
Suggesting a find command that prints only the directories' paths.
Then sort the paths.
Then remove duplicates.
find . -type f -name "example*" -printf "%h\n"|sort|uniq

Remove all but newest file from all sub directories

I have found the following which will list the files in all subdirectories, hide the last 5, and then delete the rest:
find -type f -printf '%T@ %P\n' | sort -n | cut -d' ' -f2- | head -n -5 | xargs rm
Unfortunately, if I don't know how many subdirectories there are, it won't delete the correct number of files. Does anyone have a way to traverse each directory, and then delete all but the newest file in each subdirectory?
Directory structure would be the following:
-> Base Directory -> Parent Directory -> Child directory
I'd write a script.
It would be a recursive function:
call function: rm_files(base_dir)
list all directories
if there are directories, go through the list and call rm_files(act_dir) for each item
else (if there are no directories):
list all files
delete all files but the newest
return from function
With a lot of nested subdirectories, the recursion depth may become a problem. A rough bash version of this is sketched below.
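A minimal sketch, assuming GNU find, bash, and filenames without newlines (the deletion is left as an echo so you can dry-run it first):
rm_files() {
  local dir="$1" sub
  if find "$dir" -mindepth 1 -maxdepth 1 -type d | read; then
    # there are subdirectories: recurse into each of them
    for sub in "$dir"/*/ ; do
      rm_files "$sub"
    done
  else
    # leaf directory: remove everything except the newest file
    find "$dir" -maxdepth 1 -type f -printf '%T@ %p\n' |
      sort -n | head -n -1 | cut -d' ' -f2- |
      while read -r f; do echo rm "$f"; done   # drop the echo once verified
  fi
}
rm_files /path/to/base_dir   # hypothetical base directory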
I found I was able to do what I needed to do with the following one liner:
find . -name '*.*' -mmin +59 -delete > /dev/null

Moving folders with the same name to new directory Linux, Ubuntu

I have a folder with 100,000 sub-folders.
Because of the size I cannot open the folder.
Now I am looking for a shell script to help me move or split the folders.
Current: a folder Research with 100,000 sub-folders (sorted A, B, C, D).
Needed: all folders whose names start with A-science should be moved to a new folder AScience.
All folders whose names start with B-science should be moved to a new folder BScience.
I found this script below. But don't know how to make it work.
find /home/spenx/src -name "a1a2*txt" | xargs -n 1 dirname | xargs -I list mv list /home/spenx/dst/
find ~ -type d -name "*99966*" -print
I had a look at the command you supplied to see what it did. Here's what each command does (correct me if I'm wrong):
| = pipes output of command to the left of pipe to the input of the command on the right
find /home/spenx/src -name "a1a2*txt" = finds all files within the given directory whose names match the pattern between the quotes, and pipes the output
xargs -n 1 dirname = takes each file piped in by the find command, gets the name of the directory containing it, and pipes that to output
xargs -I list mv list /home/spenx/dst = takes each piped-in folder, substitutes it for "list", and moves it to the given folder
find ~ -type d -name "*99966*" -print = checks whether any directories matching the given name exist and prints any that are found (this line is only a test command, it's not necessary for the actual move)
/home/spenx/src = folder to look in (absolute file path, or just the folder name without '/')
/home/spenx/dst = folder to move all files to (absolute file path, or just the folder name without '/')
"a1a2*txt" = files to look for (since you only care about folders, just use *.* to catch all files)
"*99966*" = files to test for, but I'm not sure what you would put here
I took a look at the command and decided to modify it a little, but it still won't move each folder category (i.e. A-science, B-science) into separate dirs; it will just get all folders in a given directory and move them to a given destination, or at least as far as I can tell.
You might want to try finding all folders of each category (A-science) and moving them to a destination folder such as Ascience one by one, like so:
find /home/spenx/src -name "A-science/*.*" | xargs -n 1 dirname | sort -u | xargs - I list mv list /home/spenx/dst/Ascience
find /home/spenx/src -name "B-science/*.*" | xargs -n 1 dirname | sort -u | xargs - I list mv list /home/spenx/dst/Bscience
Again, test the command out before using it on your actual files.
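If there are many category prefixes, a loop along these lines may be easier to maintain (a rough sketch; the prefix list and the Ascience-style destination naming are assumptions you'd adjust):
for prefix in A-science B-science C-science; do
  dest=/home/spenx/dst/$(echo "$prefix" | tr -d '-')   # e.g. A-science -> Ascience
  mkdir -p "$dest"
  # move every top-level folder starting with this prefix into its destination
  find /home/spenx/src -maxdepth 1 -type d -name "${prefix}*" -exec mv -t "$dest" {} +
done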
You might want to take a look at this question, specifically:
list.txt
1abc
2def
3xyz
script to run:
while read pattern; do
mv "${pattern}"* ../folder_b/"$pattern"
done < list.txt

Create a bash script to delete folders which do not contain a certain filetype

I have recently run into a problem.
I used a utility to move all my music files into directories based on tags. This left a LOT of almost empty folders. The folders, in general, contain a thumbs.db file or some sort of image for album art. The mp3s have the correct album art in their new directories, so the old ones are okay to delete.
Basically, I need to find any directories within D:/Music/ that:
-Do not have any subdirectories
-Do not contain any mp3 files
And then delete them.
I figured this would be easier to do with a shell script or bash script or whatever else the linux/unix world offers than in Windows 8.1 (HAHA).
Any suggestions? I'm not very experienced writing scripts like this.
This should get you started
find /music -mindepth 1 -type d |
while read -r dt
do
    # skip this directory if it still contains subdirectories or any mp3 files
    find "$dt" -mindepth 1 -type d | read && continue
    find "$dt" -iname '*.mp3' -type f | read && continue
    echo DELETE "$dt"
done
Here's the short story...
find . \( -name '*.mp3' -o -type d \) -printf '%h\n' | sort | uniq > non-empty-dirs.tmp
find . -type d -print | sort | uniq > all-dirs.tmp
comm -23 all-dirs.tmp non-empty-dirs.tmp > dirs-to-be-deleted.tmp
less dirs-to-be-deleted.tmp
cat dirs-to-be-deleted.tmp | xargs rm -rf
Note that you might have to run all the commands a few times (depending on your repository's directory depth) before you're done deleting all recursive empty directories...
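If you'd rather not re-run the commands by hand, a rough wrapper loop (the same commands, simply repeated until nothing is left to delete) could look like this:
while true; do
  find . \( -name '*.mp3' -o -type d \) -printf '%h\n' | sort | uniq > non-empty-dirs.tmp
  find . -type d -print | sort | uniq > all-dirs.tmp
  comm -23 all-dirs.tmp non-empty-dirs.tmp > dirs-to-be-deleted.tmp
  [ -s dirs-to-be-deleted.tmp ] || break   # stop once no deletable directories remain
  xargs rm -rf < dirs-to-be-deleted.tmp
done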
And the long story goes...
You can approach this problem from two basic perspectives: either you find all directories, then iterate over each of them, checking whether it contains any mp3 file or any subdirectory, and if not, mark that directory for deletion. That will work, but on very large repositories you can expect a significant run time.
Another approach, which is to my mind much more interesting, is to build a list of directories NOT to be deleted, and subtract that list from the list of all directories. Let's work through the second strategy, one step at a time...
First of all, to find the path of all directories that contains mp3 files, you can simply do:
find . -name '*.mp3' -printf '%h\n' | sort | uniq
This means "find any file ending with .mp3, then print the path to its parent directory".
Now, I could certainly name at least ten different approaches to find directories that contain at least one subdirectory, but keeping the same strategy as above, we can easily get...
find . -type d -printf '%h\n' | sort | uniq
What this means is: "Find any directory, then print the path to its parent."
Both of these queries can be combined in a single invocation, producing a single list containing the paths of all directories NOT to be deleted. Let's redirect that list to a temporary file.
find . \( -name '*.mp3' -o -type d \) -printf '%h\n' | sort | uniq > non-empty-dirs.tmp
Let's similarly produce a file containing the paths of all directories, no matter if they are empty or not.
find . -type d -print | sort | uniq > all-dirs.tmp
So there, we have, on one side, the complete list of all directories, and on the other, the list of directories not to be deleted. What now? There are tons of strategies, but here's a very simple one:
comm -23 all-dirs.tmp non-empty-dirs.tmp > dirs-to-be-deleted.tmp
Once you have that, well, review it, and if you are satisfied, then pipe it through xargs to rm to actually delete the directories.
cat dirs-to-be-deleted.tmp | xargs rm -rf

Copy the three newest files under one directory (recursively) to another specified directory

I'm using bash.
Suppose I have a log file directory /var/myprogram/logs/.
Under this directory I have many sub-directories and sub-sub-directories that include different types of log files from my program.
I'd like to find the three newest files (modified most recently), whose name starts with 2010, under /var/myprogram/logs/, regardless of sub-directory and copy them to my home directory.
Here's what I would do manually
1. Go through each directory and do ls -lt 2010*
to see which files starting with 2010 are modified most recently.
2. Once I go through all directories, I'd know which three files are the newest. So I copy them manually to my home directory.
This is pretty tedious, so I wondered if maybe I could somehow pipe some commands together to do this in one step, preferably without using shell scripts?
I've been looking into find, ls, head, and awk that I might be able to use but haven't figured the right way to glue them together.
Let me know if I need to clarify. Thanks.
Here's how you can do it:
find -type f -name '2010*' -printf "%C@\t%P\n" |sort -r -k1,1 |head -3 |cut -f 2-
This outputs a list of files prefixed by their last change time, sorts them based on that value, takes the top 3 and removes the timestamp.
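To actually copy those three files into your home directory, a small extension of the same idea (using %T@ for the modification time the question asks about, and assuming plain file names without newlines or leading spaces):
find /var/myprogram/logs/ -type f -name '2010*' -printf "%T@\t%P\n" |
  sort -rn -k1,1 | head -3 | cut -f 2- |
  while read -r f; do cp "/var/myprogram/logs/$f" ~/; done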
Your answers feel very complicated, how about
for FILE in $(find . -type d); do ls -t -1 -F "$FILE" | grep -v "/" | head -n3 | xargs -I{} mv {} ..; done;
or laid out nicely
for FILE in `find . -type d`;
do
ls -t -1 -F $FILE | grep -v "/" | grep "^2010" | head -n3 | xargs -I{} mv {} ~;
done;
My "shortest" answer after quickly hacking it up.
for file in $(find . -iname '*.php' -mtime -1 | xargs ls -l | awk '{ print $6" "$7" "$8" "$9 }' | sort | sed -n '1,3p' | awk '{ print $4 }'); do cp "$file" ../; done
The main command stored in $() does the following:
Find all files recursively in current directory matching (case insensitive) the name *.php and having been modified in the last 24 hours.
Pipe to ls -l, required to be able to sort by modification date, so we can have the first three
Extract the modification date and file name/path with awk
Sort these files based on datetime
With sed print only the first 3 files
With awk print only their name/path
Used in a for loop and as action copy them to the desired location.
Or use @Hasturkun's variant, which popped up as a response while I was editing this post :)
