I found several posts, like this one, explaining how to find the latest file inside a folder.
My question goes one step further: how do I find the second-latest file in the same folder? The purpose is to diff the latest log against the previous one to see what has changed. The logs are generated on a daily basis.
Building on the linked solutions, you can just make tail keep the last two files, and then pass the result through head to keep the first one of those:
ls -Art | tail -n 2 | head -n 1
To diff the two most recently modified files:
ls -t | head -n 2 | xargs diff
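If the log names may contain spaces, a minimal space-tolerant sketch of the same idea (bash 4+ for mapfile; it still assumes the folder holds only the log files and that names contain no newlines):
mapfile -t newest < <(ls -t)            # newest file first
diff -- "${newest[1]}" "${newest[0]}"   # previous log vs. latest log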
Here's a stat-based solution (tested on Linux):
for x in ./*; do
    if [[ -f "$x" ]]; then
        stat --printf="%n %Y\n" "$x"   # print "name mtime" for every regular file
    fi
done |
    sort -k2,2 -n -r |   # newest first, by the mtime column
    sed -n '2{p;q}'      # print the second entry, then quit
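If you have GNU find, roughly the same thing can be done in one pipeline; a sketch, again assuming file names without embedded newlines:
find . -maxdepth 1 -type f -printf '%T@ %p\n' |
    sort -k1,1 -rn |    # newest first, by modification time
    sed -n '2{p;q}' |   # keep only the second-newest entry
    cut -d' ' -f2-      # drop the timestamp, leave the file name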
ls -dt {{your file pattern}} | head -n 2 | tail -n 1
This will print the second-latest file matching the pattern you search for.
Here's a command that returns the second-latest file in the folder:
ls -lt | head -n 3 | tail -n 1
enjoy...!
Related
I am new to Linux. I have a folder with many files in it and I need to get the latest file based on the file name. Example: I have 3 files, RAT_20190111.txt, RAT_20190212.txt and RAT_20190321.txt. I need a Linux command to move the latest file, RAT_20190321.txt, to a specific directory.
If the file pattern remains the same, then you can try the command below:
mv $(ls RAT*|sort -r|head -1) /path/to/directory/
As pointed out by @wwn, there is no need to use sort: since the file names are lexicographically sortable, ls already sorts them, so the command becomes:
mv $(ls RAT*|tail -1) /path/to/directory
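If you would rather avoid parsing ls altogether, a minimal glob-based sketch (assuming the RAT_YYYYMMDD.txt naming, so lexical order matches date order):
files=(RAT_*.txt)                                       # the shell expands globs in sorted order
mv -- "${files[${#files[@]}-1]}" /path/to/directory/    # last element = newest date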
The following command works.
ls -p | grep -v '/$' | sort | tail -n 1 | xargs -d '\n' -r -I{} mv -- {} /path/to/directory
ls -p marks directories with a trailing slash so grep -v '/$' can drop them; the remaining names are sorted, the last one is taken, and mv moves that file to the required directory.
Hope it helps.
Use the command below:
cp "$(ls | tail -n 1)" /data...
I'm trying to write a script which helps me follow the logs of my application.
The logs of my application are written to "/var/log/MyLogs/" with the following pattern:
runningNumber_XXX.txt, for example:
0_XXX.txt
37_xxx.txt
99_xxx.txt
101_xxx.txt
103_xxx.txt
I'm trying to write a bash script (without success so far) which will print the last 20 rows of the last log file (the last log file is the one with the biggest prefix number).
I know I need to go over the files in the folder (for file in /var/log/MyLogs/*) and check which file name has the biggest prefix, and then print the last 20 rows of the selected file.
please help me....
Thanks...
find /var/log/MyLogs -iname '*_xxx.txt' | sort -V | tail -1 | xargs tail -20
Get the matching files
Sort by the embedded number (version sort)
Get the last (highest-numbered) log file
Get its last 20 rows
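Since the question already sketches a for loop over /var/log/MyLogs/*, here is a hedged version of that loop (it assumes the prefixes are plain decimal numbers and that at least one file matches):
best=-1
for file in /var/log/MyLogs/*_*.txt; do
    num=${file##*/}   # strip the directory part
    num=${num%%_*}    # keep only the leading number
    if (( num > best )); then
        best=$num
        newest=$file
    fi
done
tail -n 20 "$newest"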
tail -20 "$(ls -1 /var/log/MyLogs/*_*.txt | sort -rV | head -1)"
ls -1 [0-9]*_XXX.txt | sort -rn | head -1 | xargs tail -20
Using ls in shell scripts is usually bad practice, but if you can ensure that the logfiles don't contain spaces or other strange characters, you can use a simple:
tail -20 "$(ls -t1 /var/log/MyLogs/[0-9]*_XXX.txt | head -1)"
Here:
ls -t sorts the files by modification time, newest first
head takes the first one
tail prints its last 20 lines
AGAIN, this is usually bad practice; use it only when you know what you're doing.
I am writing a bash script that will run a couple of times a minute. What I would like it to do is find all files in a specified directory that contain a specified string, and search that list of files and delete any line beginning with a different specific string (in this case "<meta").
Here's what I've tried so far, but neither attempt is working:
ls -1t /the/directory | head -10 | grep -l "qualifying string" * | sed -i '/^<meta/d' *'
ls -1t /the/directory | head -10 | grep -l "qualifying string" * | sed -i '/^<meta/d' /the/directory'
The only reason I added in the head -10 is so that every time the script runs, it will start by only looking at the 10 most recent files. I don't want it to spend a lot of time searching needlessly through the entire directory since it will be going through and removing the line many times a minute.
The script has to be run out of a different directory than the files are in. Also, would the modified date on the files change if the "<meta" string doesn't exist in the file?
There are a variety of problems with this part of the command...
ls -1t /the/directory | head -10 | grep -l "qualifying string" * ...
First, you appear to be trying to pipe the output of ls ... | head -10 into grep, which would cause grep to search for "qualifying string" in the output of ls. Except then you turn around and provide * as a command line argument to grep, causing it to search in all the files, and completely ignoring the ls and head commands.
You probably want to read about the xargs command, which reads a list of files on stdin and then runs a given command against that list. For example, you ought to be able to generate your file list like this:
ls -1t /the/directory | head -10 |
xargs grep -l "qualifying string"
And to apply sed to those files:
ls -1t /the/directory | head -10 |
xargs grep -l "qualifying string" |
xargs sed -i 's/something/else/g'
Modifying the files with sed will update the modification time on the files.
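Putting those pieces together, a hedged end-to-end sketch (GNU find/xargs assumed; /the/directory, the qualifying string and the ^<meta pattern come from the question; file names must not contain newlines):
find /the/directory -maxdepth 1 -type f -printf '%T@ %p\n' |
    sort -rn | head -10 | cut -d' ' -f2- |          # the 10 most recently modified files
    xargs -d '\n' -r grep -l "qualifying string" |  # keep only the files containing the string
    xargs -d '\n' -r sed -i '/^<meta/d'             # delete lines starting with <meta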
You can use globbing with the * character to expand file names and loop through the directory.
n=0
for file in /the/directory/*; do
    if [ -f "$file" ]; then
        # delete the <meta lines only in files that contain the qualifying string
        grep -q "qualifying string" "$file" && sed -i '/^<meta/d' "$file"
        n=$((n+1))
    fi
    [ $n -eq 10 ] && break   # stop after the first 10 regular files
done
I am using ls -l -t to get a list of files in a directory ordered by time.
I would like to limit the search result to the top 2 files in the list.
Is this possible?
I've tried with grep and I struggled.
You can pipe it into head:
ls -l -t | head -3
Will give you top 3 lines (2 files and the total).
This will just give you the first 2 lines of files, skipping the size line:
ls -l -t | tail -n +2 | head -2
tail strips the first line, then head outputs the next 2 lines.
To avoid dealing with the top output line, you can reverse the sort and get the last two lines:
ls -ltr | tail -2
This is pretty safe, but depending what you'll do with those two file entries after you find them, you should read Parsing ls on the problems with using ls to get files and file information.
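If you only need the two names and want to sidestep the ls-parsing issues entirely, one hedged alternative (GNU stat assumed; file names without newlines) is:
stat -c '%Y %n' -- * | sort -rn | head -2 | cut -d' ' -f2-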
Or you could try just this
ls -1 -t | head -2
The -1 switch lists one file per line; since it drops the long format, there is no total line to deal with.
You can use the head command to grab only the first two lines of output:
ls -l -t | head -2
You have to pipe through head.
ls -l -t | head -n 3
will output the first two files (head -n 3 because the first line of ls -l is the total line).
Try this:
ls -td -- * | head -n 2
I'm having some rather unusual problems using grep in a bash script. Below is an example of the bash script code that I'm using that exhibits the behaviour:
UNIQ_SCAN_INIT_POINT=1
cat "$FILE_BASENAME_LIST" | uniq -d >> $UNIQ_LIST
sed '/^$/d' $UNIQ_LIST >> $UNIQ_LIST_FINAL
UNIQ_LINE_COUNT=`wc -l $UNIQ_LIST_FINAL | cut -d \ -f 1`
while [ -n "`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`" ]; do
    CURRENT_LINE=`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`
    CURRENT_DUPECHK_FILE=$FILE_DUPEMATCH-$CURRENT_LINE
    grep $CURRENT_LINE $FILE_LOCTN_LIST >> $CURRENT_DUPECHK_FILE
    MATCH=`grep -c $CURRENT_LINE $FILE_BASENAME_LIST`
    CMD_ECHO="$CURRENT_LINE matched $MATCH times," cmd_line_echo
    echo "$CURRENT_DUPECHK_FILE" >> $FILE_DUPEMATCH_FILELIST
    let UNIQ_SCAN_INIT_POINT=UNIQ_SCAN_INIT_POINT+1
done
On numerous occasions, when grepping for the current line in the file location list, it has put no output to the current dupechk file even though there have definitely been matches to the current line in the file location list (I ran the command in terminal with no issues).
I've rummaged around the internet to see if anyone else has had similar behaviour, and thus far all I have found is that it is something to do with buffered and unbuffered outputs from other commands operating before the grep command in the Bash script....
However no one seems to have found a solution, so basically I'm asking you guys if you have ever come across this, and any idea/tips/solutions to this problem...
Regards
Paul
The 'problem' is the standard I/O library. When it is writing to a terminal it is unbuffered, but if it is writing to a pipe then it sets up buffering.
Try changing
CURRENT_LINE=`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`
to
CURRENT_LINE=`sed "$UNIQ_SCAN_INIT_POINT"'q;d' $UNIQ_LIST_FINAL`
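If pipe buffering really is the culprit, a hedged workaround is to force line buffering, either with GNU coreutils' stdbuf or with grep's own flag (reusing the variables from the script above):
stdbuf -oL grep "$CURRENT_LINE" "$FILE_LOCTN_LIST" >> "$CURRENT_DUPECHK_FILE"      # via stdbuf
grep --line-buffered "$CURRENT_LINE" "$FILE_LOCTN_LIST" >> "$CURRENT_DUPECHK_FILE" # via grep itself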
Are there any directories with spaces in their names in $FILE_LOCTN_LIST? If there are, those spaces will need to be escaped somehow. Some combination of find and xargs can usually deal with that for you, especially xargs -0.
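A hedged sketch of that idea (GNU find/xargs assumed; /some/search/dir is only a placeholder): -print0 and -0 keep spaces intact across the pipe, and quoting $CURRENT_LINE with grep -F stops it being reinterpreted as a pattern:
find /some/search/dir -type f -print0 |
    xargs -0 grep -lF -- "$CURRENT_LINE"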
A small bash script using md5sum and sort that detects duplicate files in the current directory:
CURRENT=""
md5sum * |
    sort |
    while read -r sum filename; do
        # identical adjacent checksums mean identical content
        [[ $CURRENT == "$sum" ]] && echo "$filename is a duplicate"
        CURRENT=$sum
    done
You tagged linux, so I assume you have tools like GNU find, md5sum, uniq, sort etc. Here's a simple example to find duplicate files:
$ echo "hello world">file
$ md5sum file
6f5902ac237024bdd0c176cb93063dc4 file
$ cp file file1
$ md5sum file1
6f5902ac237024bdd0c176cb93063dc4 file1
$ echo "blah" > file2
$ md5sum file2
0d599f0ec05c3bda8c3b8a68c32a1b47 file2
$ find . -type f -exec md5sum "{}" \; | sort -n | uniq -w32 -D
6f5902ac237024bdd0c176cb93063dc4 ./file
6f5902ac237024bdd0c176cb93063dc4 ./file1
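If you only want to see the extra copies (say, to review or delete them by hand), a hedged extension of the same pipeline lets awk skip the first file of every hash group; it assumes md5sum's usual "hash␣␣name" output format:
$ find . -type f -exec md5sum "{}" \; | sort | awk 'seen[$1]++ { sub(/^[^ ]+  /, ""); print }'
./file1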