Finding the oldest folder in a directory in Linux even when files inside are modified - linux

I have two folders, A and B, each containing two files, created in the order below:
mkdir A
cd A
touch a_1
touch a_2
cd ..
mkdir B
cd B
touch b_1
touch b_2
cd ..
From the above, I need to find which folder was created first (not modified). I used:
ls -c <path_to_root_before_A_and_B> | tail -1
This outputs "A" (no issues here).
Now I delete the file a_1 inside directory A.
Then I execute the command again:
ls -c <path_to_root_before_A_and_B> | tail -1
This time it shows "B".
But directory A still contains the file a_2, yet the ls command shows "B". How can I overcome this?

How To Get File Creation Date Time In Bash-Debian
You'll want to read the link above for the details: files and directories store the same set of timestamps, which means directories do not record their creation date. Methods like the ls -c one mentioned earlier may work sometimes, but when I ran it just now it got really old files mixed up with really new files, so I don't think it works exactly how you think it might.
Instead, try touching a file immediately after creating a directory; save it as something like .DIRBIRTH so it is hidden. Then, when trying to find the order the directories were made in, just check which .DIRBIRTH has the oldest modification date.
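A minimal sketch of that idea, assuming GNU stat and coreutils, and using the A/B layout and .DIRBIRTH name from above:
# Drop a hidden marker right after creating each directory:
mkdir A && touch A/.DIRBIRTH
mkdir B && touch B/.DIRBIRTH
# Later, print the directory whose marker has the oldest mtime (%Y = seconds since the epoch):
stat -c '%Y %n' */.DIRBIRTH | sort -n | head -n 1 | cut -d' ' -f2- | xargs dirname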

Assuming that all the stars align (you're using a version of GNU stat(1) that supports the file birth time format, your filesystem records birth times, and your Linux kernel is new enough to support the statx(2) syscall), this script should print out all immediate subdirectories of the directory passed as its argument, sorted by creation time:
#!/bin/sh
rootdir=$1
find "$rootdir" -maxdepth 1 -type d -exec stat -c "%W %n" {} + | tail -n +2 \
| sort -k1,1n | cut --complement -d' ' -f1
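A possible invocation (the script name and path are placeholders); note that %W prints 0 for directories whose birth time the filesystem did not record, so those will sort first:
sh list-subdirs-by-birth.sh /path/to/root_before_A_and_B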

Related

Quickly list random set of files in directory in Linux

Question:
I am looking for a performant, concise way to list N randomly selected files in a Linux directory using only Bash. The files must be randomly selected from different subdirectories.
Why I'm asking:
In Linux, I often want to test a random selection of files in a directory for some property. The directories contain 1000's of files, so I only want to test a small number of them, but I want to take them from different subdirectories in the directory of interest.
The following returns the paths of 50 "randomly"-selected files:
find /dir/of/interest/ -type f | sort -R | head -n 50
The directory contains many files and resides on a mounted file system with slow read times (accessed over ssh), so the command can take many minutes. I believe the issue is that find first enumerates every file (slow), and only then is a random selection printed.
If you are using locate and updatedb updates regularly (daily is probably the default), you could:
$ locate /home/james/test | sort -R | head -5
/home/james/test/10kfiles/out_708.txt
/home/james/test/10kfiles/out_9637.txt
/home/james/test/compr/bar
/home/james/test/10kfiles/out_3788.txt
/home/james/test/test
How often do you need it? Do the work periodically in advance to have it quickly available when you need it.
Create a refreshList script.
#!/usr/bin/env bash
find /dir/of/interest/ -type f | sort -R | head -n 50 >/tmp/rand.list
mv -f /tmp/rand.list ~
Put it in your crontab.
0 7-20 * * 1-5 nice -25 ~/refreshList
Then you will always have a ~/rand.list that's under an hour old.
If you don't want to use cron and aren't too picky about how old it is, just write a function that refreshes the file after you use it every time.
randFiles() {
    cat ~/rand.list
    {
        find /dir/of/interest/ -type f | sort -R | head -n 50 >/tmp/rand.list
        mv -f /tmp/rand.list ~
    } &
}
If you can't run locate and the find command is too slow, is there any reason this has to be done in real time?
Would it be possible to use cron to dump the output of the find command into a file and then do the random pick from there?
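A minimal sketch of that idea, assuming GNU shuf is available and using /dir/of/interest/ and /tmp/all_files.list as placeholder paths:
# In cron (e.g. hourly): cache the full, slow file listing once.
find /dir/of/interest/ -type f > /tmp/all_files.list
# Whenever a sample is needed: pick 50 lines at random from the cached list (fast).
shuf -n 50 /tmp/all_files.list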

Clearing archive files with linux bash script

Here is my problem:
I have a folder that stores multiple files with a specific format:
Name_of_file.TypeMM-DD-YYYY-HH:MM
where MM-DD-YYYY-HH:MM is the time of its creation. There can be multiple files with the same name, but not with the same time, of course.
What I want is a script that keeps the 3 newest versions of each file.
So, I found an example here:
Deleting oldest files with shell
But I don't want to delete a fixed number of files; I want to keep a certain number of the newest ones. Is there a way to make that find command parse out the Name_of_file and keep the 3 newest?
Here is the code I've tried so far, but it's not exactly what I need:
find /the/folder -type f -name 'Name_of_file.Type*' -mtime +3 -delete
Thanks for the help!
So I decided to add my final solution, in case anyone would like to use it. It's a combination of the 2 solutions given:
ls -r | grep -P "(.+)\d{4}-\d{2}-\d{2}-\d{2}:\d{2}" | awk 'NR > 3' | xargs rm
One line, super efficient. If anything changes in the date or name pattern, just change the grep -P pattern to match it. This way you are sure that only the files fitting this pattern will get deleted.
Can you be extra, extra sure that the timestamp on the file is the exact same timestamp in the file name? If they're off a bit, do you care?
The ls command can sort files by timestamp order. You could do something like this:
$ ls -t | awk 'NR > 3' | xargs rm
The ls -t lists the files by modification time, newest first.
The awk 'NR > 3' prints out the list of files except for the first three lines, which are the three newest.
The xargs rm removes the files that are older than the first three.
Now, this isn't the exact solution. There are possible problems with xargs because file names might contain weird characters or whitespace. If you can guarantee that's not the case, this should be okay.
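If filenames might contain spaces or other odd characters, one NUL-delimited sketch of the same idea (assuming GNU find, sort, tail, cut and xargs) would be:
# Sort by mtime (newest first), skip the 3 newest, delete the rest.
# Everything is NUL-delimited so unusual filenames survive intact.
find . -maxdepth 1 -type f -printf '%T@\t%p\0' \
| sort -z -rn \
| tail -z -n +4 \
| cut -z -f 2- \
| xargs -0 -r rm --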
Also, you probably want to group the files by name, and keep the last three. Hmm...
ls | sed 's/MM-DD-YYYY-HH:MM*$//' | sort -u | while read file
do
    ls -t $file* | awk 'NR > 3' | xargs rm
done
The ls will list all of the files in the directory. The sed 's/MM-DD-YYYY-HH:MM*$//' will remove the date-time stamp from the file names. The sort -u will make sure you only have the unique file names. Thus
file1.txt-01-12-1950
file2.txt-02-12-1978
file2.txt-03-12-1991
Will be reduced to just:
file1.txt
file2.txt
These are passed through the loop, and the ls -t $file* will list all of the files that start with that file name and suffix, newest first; that is piped to awk, which strips out the newest three, and then to xargs rm, which deletes all but the newest three.
Assuming we're using the date in the filename to date the archive file, and that it is possible to change the date format to YYYY-MM-DD-HH:MM (as established in the comments above), here's a quick and dirty shell script to keep the newest 3 versions of each file within the present working directory:
#!/bin/bash
KEEP=3 # number of versions to keep
while read FNAME; do
    NODATE=${FNAME:0:-16}                  # get filename without the date (remove last 16 chars)
    if [ "$NODATE" != "$LASTSEEN" ]; then  # new file found
        FOUND=1; LASTSEEN="$NODATE"
    else                                   # same file, different date
        let FOUND="FOUND + 1"
        if [ $FOUND -gt $KEEP ]; then
            echo "- Deleting older file: $FNAME"
            rm "$FNAME"
        fi
    fi
done < <(\ls -r | grep -P "(.+)\d{4}-\d{2}-\d{2}-\d{2}:\d{2}")
Example run:
[me#home]$ ls
another_file.txt2011-02-11-08:05
another_file.txt2012-12-09-23:13
delete_old.sh
not_an_archive.jpg
some_file.exe2011-12-12-12:11
some_file.exe2012-01-11-23:11
some_file.exe2012-12-10-00:11
some_file.exe2013-03-01-23:11
some_file.exe2013-03-01-23:12
[me#home]$ ./delete_old.sh
- Deleting older file: some_file.exe2012-01-11-23:11
- Deleting older file: some_file.exe2011-12-12-12:11
[me#home]$ ls
another_file.txt2011-02-11-08:05
another_file.txt2012-12-09-23:13
delete_old.sh
not_an_archive.jpg
some_file.exe2012-12-10-00:11
some_file.exe2013-03-01-23:11
some_file.exe2013-03-01-23:12
Essentially, by changing the date portion of the file names to the form YYYY-MM-DD-HH:MM, a normal string sort (such as that done by ls) will automatically group versions of the same file together, sorted by date-time.
The ls -r on the last line simply lists all files within the current working directory and prints the results in reverse order, so newer archive files appear first.
We pass the output through grep to extract only files that are in the correct format.
The output of that command combination is then looped through (see the while loop) and we can simply start deleting after 3 occurrences of the same filename (minus the date portion).
This pipeline will get you the 3 newest files (by modification time) in the current directory:
stat -c $'%Y\t%n' file* | sort -n | tail -3 | cut -f 2-
To get all but the 3 newest:
stat -c $'%Y\t%n' file* | sort -rn | tail -n +4 | cut -f 2-
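If you wanted to feed that second list straight into rm, a sketch (assuming GNU xargs and file names without embedded newlines) would be:
# Delete everything except the 3 newest (by mtime) files matching file*
stat -c $'%Y\t%n' file* | sort -rn | tail -n +4 | cut -f 2- | xargs -d '\n' -r rm --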

rsync to backup one file generated in dynamic folders

I'm trying to back up just one file that is generated by another application in dynamically named folders.
For example:
parent_folder/
back_01 -> file_blabla.zip (timestamp 2013.05.12)
back_02 -> file_blabla01.zip (timestamp 2013.05.14)
back_03 -> file_blabla02.zip (timestamp 2013.05.22)
I need to get the latest generated zip, just that one. It doesn't matter what the file is named, as long as it is the latest zip inside parent_folder.
Also, when I do the rsync, the folder structure plus the file name is reproduced at the destination, and I want to omit that: I want to back that file up into one folder, under one fixed name, so I always know where the latest version is and it is always named the same.
Right now I'm doing this with a Perl script that gets the latest generated folder with
"ls -tAF | grep '/$' | head -1"
and then performs the rsync. It does bring the latest zip, but with the folder structure that I don't want, so it doesn't overwrite my latest zip file.
rsync -rvtW --prune-empty-dirs --delay-updates --no-implied-dirs --modify-window=1 --include='*.zip' --exclude='*.*' --progress /source/ /myBackup/
It would also be great if I could do the rsync without needing to use Perl or any other script.
Thanks
The file names will differ each time?
That makes it hard for any type of syncing to work.
What you could do is:
Create a new staging folder outside of where the file is found, then:
Before you start, remove the previously symlinked file in that folder.
When the latest file is found (i.e. via ls -tAF | grep '/$' | head -1 ....), symlink it into this folder.
Then rsync, ssh or unison the file across to the new node.
If the symlink name is file-latest.zip then it will always be this one file that is sent across. A sketch of these steps follows.
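A minimal sketch of those steps, assuming the newest dated folder holds the zip you want and treating parent_folder, the staging folder and the remote path as placeholders:
#!/bin/bash
parent=/path/to/parent_folder   # where the application drops its dated folders
staging=/path/to/staging        # folder that only ever holds the latest symlink

# 1. Drop the previous symlink.
rm -f "$staging/file-latest.zip"

# 2. Find the newest subfolder, then the newest zip inside it.
newest_dir=$(cd "$parent" && ls -tAF | grep '/$' | head -1)
newest_zip=$(ls -t "$parent/$newest_dir"*.zip | head -1)

# 3. Symlink it under a fixed name and copy only that one file across.
ln -s "$newest_zip" "$staging/file-latest.zip"
rsync -vtL "$staging/file-latest.zip" remote_server:/myBackup/file-latest.zip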
But why do all that when you can just scp? You can take a look here:
https://github.com/vahidhedayati/definedscp
for a more long-winded approach. It's not for this exact situation, but it uses the file's real date/time stamp and converts it to seconds, which might be useful if you wish to do the stat in a different way.
Using stat to work out which file is the latest and then simply scp it across, here is something to get you started:
One liner:
scp $(find /path/to/parent_folder -name \*.zip -exec stat -t {} \;|awk '{print $1" "$13}'|sort -k2nr|head -n1|awk '{print $1}') remote_server:/path/to/name.zip
A more long-winded way, maybe of use for understanding what the above is doing:
#!/bin/bash
FOUND_ARRAY=()
cd parent_folder
for file in $(find . -name \*.zip); do
    ptime=$(stat -t $file | awk '{print $13}')
    FOUND_ARRAY+=($file" "$ptime)
done
IFS=$'\n'
FOUND_FILE=$(echo "${FOUND_ARRAY[*]}" | sort -k2nr | head -n1 | awk '{print $1}')
scp $FOUND_FILE remote_host:/backup/new_name.zip

Launching program several times

I am using macOS. This is the command-line code to launch my program (two parts):
nucmer --mum file1.txt file2.txt
show-snps -Clr -x 2 out.delta > out_file1.snps
The first part of the program creates the file out.delta. My file2.txt is always the same, but I want to launch both parts 35000 times with a different file1.txt each time. All the file1s are located in the same directory.
Is it possible to do this using Bash?
Keep all the input files in one directory. Create a wrapper script that invokes nucmer and then show-snps. Your wrapper script will accept the path to the file directory as input, iterate over all files in the directory, and call your two programs, as in the sketch below.
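A minimal sketch of such a wrapper, assuming the inputs are named file*.txt and that file2.txt sits in the current directory (both assumptions, adjust to your layout):
#!/bin/bash
# Usage (hypothetical): ./run_all.sh /path/to/inputs
indir=$1
for f in "$indir"/file*.txt; do
    b=$(basename "$f" .txt)
    nucmer --mum "$f" file2.txt             # writes out.delta
    show-snps -Clr -x 2 out.delta > "out_${b}.snps"
done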
You could do something along these lines:
find . -maxdepth 1 -type f -print | grep -v './out_' | while read f
do
    b=$(basename ${f} .txt)
    nucmer --mum ${f} file2.txt
    show-snps -Clr -x 2 out.delta > out_${b}.snps
done
The find bit finds all files in the current directory. The grep filters out any previous output files, in case you've run this before. The basename line strips off the leading ./ and the trailing .txt extension, and then your two programs are run with the input file name and an output filename based on the basename output.
If you don't get an "argument list too long" error, you could just use a for loop:
for f in file*.txt; do nucmer --mum $f file2.txt; show-snps -Clr -x 2 out.delta > out_${f%.txt}.snps; done

How to view last created file?

I have uploaded a file to a Linux computer, but I do not know its name. How can I view files sorted by their creation date?
ls -lat
will show a list of all files sorted by date. When listing with the -l flag, adding the -t flag sorts by modification time. If you only need the filename (for a script, maybe) then try something like:
ls -lat | head -2 | tail -1 | awk '{print $9}'
This will list all files as before, take the first 2 rows (the first one will be something like 'total 260'), then take the last of those (the one which shows the details of the newest file), and then print the 9th column, which contains the filename.
find / -cmin -5
will print the files created (strictly, whose status changed) in the last five minutes. Increase the period one minute at a time to find your file.
Assuming you know the folder where you'll be searching, the easiest solution is:
ls -t | head -1
# use -A in case the file can start with a dot
ls -tA | head -1
ls -t will sort by time, newest first (from ls --help itself)
head -1 will only keep 1 line at the top of anything
Use ls -lUt or ls -lUtr, as you wish. You can take a look at the ls command documentation typing man ls on a terminal.
