Find out if file has been modified within the last 2 minutes - linux

In a bash script I want to check if a file has been changed within the last 2 minutes.
I already found out that I can access the date of the last modification with stat file.ext -c %y. How can I check if this date is older than two minutes?

I think this would be helpful:
find . -mmin -2 -type f -print
and also:
find / -fstype local -mmin -2
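Either of those can drive an if directly, since non-empty output means a match was found. A minimal sketch (the file path is just an illustration):

```shell
#!/bin/sh
# /tmp/recent-check.txt is a hypothetical example file.
f=/tmp/recent-check.txt
touch "$f"   # make it "just modified" for the demonstration

# find prints the path only if the file was modified within the last 2 minutes
if [ -n "$(find "$f" -mmin -2 -print)" ]; then
  echo "modified within the last 2 minutes"
fi
```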

Complete script to do what you're after:
#!/bin/sh
# Input file
FILE=/tmp/test.txt
# How many seconds before the file is deemed "older"
OLDTIME=120
# Get the current time and the file's modification time (seconds since Epoch)
CURTIME=$(date +%s)
FILETIME=$(stat "$FILE" -c %Y)
TIMEDIFF=$(expr "$CURTIME" - "$FILETIME")
# Check whether the file is older
if [ "$TIMEDIFF" -gt "$OLDTIME" ]; then
    echo "File is older, do stuff here"
fi
If you're on macOS, use FILETIME=$(stat -t %s -f %m "$FILE") instead, as noted in a comment by Alcanzar.

Here's an even simpler version that uses shell arithmetic instead of expr.
Seconds (as in the question):
echo $(( $(date +%s) - $(stat file.txt -c %Y) ))
Minutes (what the question asks for):
echo $(( ($(date +%s) - $(stat file.txt -c %Y)) / 60 ))
Hours:
echo $(( ($(date +%s) - $(stat file.txt -c %Y)) / 3600 ))

I solved the problem this way: get the current date and last modified date of the file (both in unix timestamp format). Subtract the modified date from the current date and divide the result by 60 (to convert it to minutes).
expr $(expr $(date +%s) - $(stat mail1.txt -c %Y)) / 60
Maybe this is not the cleanest solution, but it works great.

Here is how I would do it (in a real script I would use a proper temp file, e.g. from mktemp):
touch -d '-2min' .tmp
[ "$file" -nt .tmp ] && echo "file is less than 2 minutes old"
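Spelled out as a self-contained sketch, using mktemp for the reference file and GNU touch -d to date it; the target file name is illustrative:

```shell
#!/bin/sh
# Compare the target's mtime against a reference file dated 2 minutes ago.
file=/tmp/nt-demo.txt
touch "$file"                 # freshly modified, for the demonstration
ref=$(mktemp)
touch -d '-2min' "$ref"       # reference mtime: 2 minutes in the past (GNU touch)
if [ "$file" -nt "$ref" ]; then
  echo "file is less than 2 minutes old"
fi
rm -f "$ref" "$file"
```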

Here is a solution that will test whether a file is older than X seconds. It doesn't use stat, whose syntax is platform-specific, or find, whose -mmin test has no granularity finer than 1 minute.
interval_in_seconds=10
filetime=$(date -r "$filepath" +"%s")
now=$(date +"%s")
timediff=$(expr "$now" - "$filetime")
if [ "$timediff" -ge "$interval_in_seconds" ]; then
    echo "file is at least $interval_in_seconds seconds old"
fi

For those who like 1-liners once in a while:
test $(stat -c %Y -- "$FILE") -gt $(($EPOCHSECONDS - 120))
This solution is also safe with any kind of file name, including if it contains %!"`' ()
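Note that $EPOCHSECONDS needs bash 5.0 or newer; on older shells you can fall back to $(date +%s). A sketch with an illustrative file name:

```shell
#!/bin/bash
# Use EPOCHSECONDS when available, otherwise fall back to date +%s.
FILE=/tmp/epoch-demo.txt      # hypothetical file
touch "$FILE"                 # freshly modified, for the demonstration
now=${EPOCHSECONDS:-$(date +%s)}
if test "$(stat -c %Y -- "$FILE")" -gt $(( now - 120 )); then
  echo "modified within the last 2 minutes"
fi
rm -f "$FILE"
```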

Related

how to set timer for each script that runs

Dear friends and colleagues,
it's lovely to be here on Stack Overflow, the best cool site.
Under /tmp/scripts we have around 128 scripts that perform many tests,
such as:
verify_dns.sh
verify_ip.sh
verify_HW.sh
and so on.
We decided to run all the scripts under that folder - /tmp/scripts -
with the following code:
script_name=` find /tmp/scripts -maxdepth 1 -type f -name "verify_*" -exec basename {} \; `
for i in $script_name
do
echo running the script - $i
/tmp/scripts/$i
done
So output is like this
running the script - verify_dns.sh
running the script - verify_ip.sh
.
.
What we want to add - is the ability to print also the time that script runs
As the following example
running the script - verify_dns.sh - 16.3 Sec
running the script - verify_ip.sh - 2.5 Sec
.
.
My question: how can we add this ability to the code?
Note - the OS version is Red Hat 7.2.
For calculating seconds you can use:
SECONDS=0
your_bash_script
echo $SECONDS
For a more precise measurement:
start=$(date +'%s%N')
your_shell_script.sh
echo "It took $((($(date +'%s%N') - $start)/1000000)) milliseconds"
For the shell's built-in time function:
time your_shell_script.sh
Edit: example provided for the OP
for i in $script_name
do
    echo "running the script - $i"
    start=$(date +'%s%N')
    /tmp/scripts/$i
    echo "It took $((($(date +'%s%N') - $start)/1000000)) milliseconds"
done
or, with the built-in time:
for i in $script_name
do
    echo "running the script - $i"
    time /tmp/scripts/$i
done
You can use the time command to tell you how long each one took:
TIMEFORMAT="%E"
for i in $script_name
do
    echo -en "running the script - $i\t - "
    exec 3>&1 4>&2
    var=$( { time /tmp/scripts/$i 1>&3 2>&4; } 2>&1) # Captures time only
    exec 3>&- 4>&-
    echo "$var Sec"
done
This works regardless of whether your scripts produce any output or stderr. See this link for capturing only the output of time: get values from 'time' command via bash script
While it doesn't put the output on the same line, this might suit your needs:
for i in $script_name
do { set -x;
    time "/tmp/scripts/$i";
} 2>&1 | grep -Ev '^(user|sys|$)'
done

Check last modified date is within n seconds

I need to check the last modified date of all files within a directory to see if the latest file has been modified within some arbitrary period. Eg. 200 seconds.
Here's the twist: it also has to be a one-liner (it's a Marathon health check, and I can't rely on a file system being there for a script file).
Here's what I have so far:
ls -v | tail -n 1 | expr $(date +%s) - $(xargs date +%s -r) | if [ $PREV -gt 100 ]; then echo 1; else echo 0; fi
The ls -v sorts directory contents in "natural order" (the file names are monotonically increasing), so the latest file will always be the last.
tail -n 1 gets the last value.
Then expr $(date +%s) - $(xargs date +%s -r) subtracts the file's last modified date from now as a unix timestamp.
Next I want to pass the result forward to an if statement and return 0 or 1 depending on comparison with a constant. But I can't work out how to get the pipe output into the if statement.
Note: I'm aware I could have the if check in the previous pipe wrapping the subtraction, but I think that the one-liner is already confusing enough as it is.
Any help appreciated.
Host OS is Linux. Bash shell.
Assuming GNU find (a fair assumption, given other GNU tools used in the question):
if [[ $(find . -maxdepth 1 -type f -newermt '-200 seconds' -print -quit) ]]; then
echo "The newest file is less than 200 seconds old"
else
echo "The newest file is more than 200 seconds old"
fi
On a system with BSD tools (such as MacOS), this might instead be:
if [[ $(find . -maxdepth 1 -type f -mtime -200s -print -quit) ]]; then
echo "The newest file is less than 200 seconds old"
else
echo "The newest file is more than 200 seconds old"
fi
Either of the above (as appropriate for the current platform) will have find scan the directory only until it finds a single file less than 200 seconds old; it will stop at that point and print the name of that file. This makes the search considerably more efficient for large directories than having ls sort the entire list (or continuing to scan for more files after one has already been found).
Note also the use of [[ ]], which suppresses string-splitting -- with [ ], quotes around $( ) would be needed to ensure correct behavior with filenames containing spaces, or names that could be expanded as glob expressions.
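A small self-contained demonstration of the GNU variant, in a throwaway directory (file names are illustrative):

```shell
#!/bin/bash
# One fresh file and one 300-second-old file; only the fresh one matches.
dir=$(mktemp -d)
touch "$dir/new.txt"
touch -d '-300 seconds' "$dir/old.txt"    # GNU touch
if [[ $(find "$dir" -maxdepth 1 -type f -newermt '-200 seconds' -print -quit) ]]; then
  echo "The newest file is less than 200 seconds old"
fi
rm -rf "$dir"
```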
I don't think I fully understand what you are looking for, but you may try the below:
PREV=$(ls -v | tail -n 1 | expr $(date +%s) - $(xargs date +%s -r)) ; if [ $PREV -gt 100 ]; then echo 1; else echo 0; fi
If you're looking for something else, then please give a few more hints.
And what's $PREV here? You're comparing it with 100 inside the if, but where does the PREV value come from?
Answering my own question, I came up with:
ls -v | tail -n 1 | expr $(date +%s) - $(xargs date +%s -r) | if [ $(xargs) -gt 150 ]; then echo 1; else echo 0; fi
The modification I needed was using command substitution: $(xargs) to get the previous pipe output.
But I like the answer from bigbounty better than my solution, as it's cleaner. Marking as solved.
This code works.
if [ `expr $(date +%s) - $(stat -c %Y $(ls -t | head -n 1))` -gt 100 ];then echo 1;else echo 0;fi
How does it work?
ls -t - lists the files sorted by modification time, newest first.
head -n 1 - gives the most recent file.
date +%s - gives the current time in seconds since the Epoch.
stat -c %Y - gives the file's modification time in seconds since the Epoch.
expr - subtracts the modification time from the current time.
EDIT 1
As per Charles's advice:
echo $(( $(date +%s) - $(printf '%s\0' * | xargs -0 stat -c '%Y' | sort -g | tail -n 1) > 100 ))
This is a more elegant way of doing it.

How to get time since file was last modified in seconds with bash?

I need to get the time in seconds since a file was last modified. ls -l doesn't show it.
There is no simple command to get the time in seconds since a file was modified, but you can compute it from two pieces:
date +%s: the current time in seconds since the Epoch
date -r path/to/file +%s: the last modification time of the specified file in seconds since the Epoch
Using these values, you can apply simple Bash arithmetic:
lastModificationSeconds=$(date -r path/to/file +%s)
currentSeconds=$(date +%s)
((elapsedSeconds = currentSeconds - lastModificationSeconds))
You could also compute and print the elapsed seconds directly without temporary variables:
echo $(($(date +%s) - $(date -r path/to/file +%s)))
In Bash, use this for seconds since last modified:
expr `date +%s` - `stat -c %Y /home/user/my_file`
I know the tag is Linux, but the stat -c syntax doesn't work for me on OSX. This does work (note that on BSD stat, %m is the modification time; %c is the inode change time):
echo $(( $(date +%s) - $(stat -f%m myfile.txt) ))
And as a function to be called with the file name:
lastmod(){
    echo "Last modified" $(( $(date +%s) - $(stat -f%m "$1") )) "seconds ago"
}

How to move/copy a lot of files (not all files) in a directory?

I got a directory which contains approx 9000 files, the file names are in ascending number (however not necessarily consecutive).
Now I need to copy/move ~3000 files, from number xxxx to number yyyy, to another directory. How can I use the cp or mv command for that purpose?
find . -type f | while read file; do if [ "${file#./}" -ge xxxx ] && [ "${file#./}" -le yyyy ]; then echo "$file"; fi; done | xargs cp -t /destination/
(Note the stripped ./ prefix, so the numeric comparison sees a bare number, and the -a/&& conjunction: both bounds must hold.) If you want to limit to 3000 files, do:
i=0; find . -type f | while read file; do if [ "${file#./}" -ge xxxx ] && [ "${file#./}" -le yyyy ]; then echo "$file"; i=$((i+1)); fi; if [ "$i" -gt 3000 ]; then break; fi; done | xargs cp -t /destination/
If the files have a common suffix after the number, use ${file%%suffix} inside the if (you can use globs in the suffix).
You can use the seq utility to generate numbers for this kind of operation:
for i in `seq 4073 7843` ; do cp file_${i}_name.png /destination/folder ; done
On the downside, this will execute cp a lot more often than QuantumMechanic's solution; but QuantumMechanic's solution may not execute if the total length of all the filenames is greater than the kernel's argv size limitation (which could be between 128K and 2048K, depending upon your kernel version and stack-size rlimits; see execve(2) for details).
If the range you want spans orders of magnitude (e.g., between 900 and 1010), then the seq -w option may be useful: it zero-pads the output numbers.
This isn't the most elegant, but how about something like:
cp 462[5-9] 46[3-9]? 4[7-9]?? 5??? 6[0-2]?? 63[0-4]? 635[0-3] otherDirectory
which would copy files named 4625 to 6353 inclusive to otherDirectory. (You wouldn't want to use something like 4* since that would copy the file 4, 42, 483, etc.)
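Since not every number in a range necessarily exists, a seq loop with an existence check avoids cp complaining about missing names. A sketch (the directories and sample file names are made up):

```shell
#!/bin/sh
# Copy plain numeric file names in the range 4625..6353, skipping gaps.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/4700" "$src/5000" "$src/9999"    # sample files for the demo
for i in $(seq 4625 6353); do
  [ -e "$src/$i" ] && cp -- "$src/$i" "$dst/"
done
ls "$dst"    # 4700 and 5000 were copied; 9999 is out of range
```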

How can I tell if a file is older than 30 minutes from /bin/sh?

How do I write a script to determine if a file is older than 30 minutes in /bin/sh?
Unfortunately, the stat command does not exist on the system. It is an old Unix system, Interactive Unix (http://en.wikipedia.org/wiki/Interactive_Unix).
Perl is unfortunately not installed on the system, and the customer does not want to install it, nor anything else.
Here's one way using find.
if test "`find file -mmin +30`"
The find command must be quoted in case the file in question contains spaces or special characters.
The following gives you the file age in seconds:
echo $(( `date +%s` - `stat -L --format %Y $filename` ))
which means this should give a true/false value (1/0) for files older than 30 minutes:
echo $(( (`date +%s` - `stat -L --format %Y $filename`) > (30*60) ))
30*60 -- 60 seconds in a minute, don't precalculate, let the CPU do the work for you!
If you're writing a sh script, the most useful way is to use test with the already mentioned stat trick:
if [ `stat --format=%Y $file` -le $(( `date +%s` - 1800 )) ]; then
    do stuff with your 30-minutes-old $file
fi
Note that [ is a symbolic link (or otherwise equivalent) to test; see man test, but keep in mind that test and [ are also bash builtins and thus can have slightly different behavior. (Also note the [[ bash compound command).
OK, no stat and a crippled find. Here are your alternatives:
Compile GNU findutils to get a decent find (and the GNU coreutils for a lot of other handy commands). You might already have it as gfind.
Maybe you can use date to get the file modification time, if -r works:
(`date +%s` - `date -r $file +%s`) > (30*60)
Alternatively, use the -nt comparison to choose which file is newer; the trouble is making a file with a mod time 30 minutes in the past. touch can usually do that, but all bets are off as to what's available.
touch -d '30 minutes ago' 30_minutes_ago
if [ your_file -ot 30_minutes_ago ]; then
...do stuff...
fi
And finally, see if Perl is available rather than struggling with who knows what versions of shell utilities.
use File::stat;
print "Yes" if (time - stat("yourfile")->mtime) > 60*30;
For those like myself who don't like backticks, based on the answer by @slebetman:
echo $(( $(date +%s) - $(stat -L --format %Y $filename) > (30*60) ))
You can do this by comparing to a reference file that you've created with a timestamp of thirty minutes ago.
First create your comparison file by entering
touch -t YYYYMMDDhhmm.ss /tmp/thirty_minutes_ago
replacing the timestamp with the value thirty minutes ago. You could automate this step with a trivial one liner in Perl.
Then use find's newer operator to match files that are older by negating the search operator
find . \! -newer /tmp/thirty_minutes_ago -print
Here's my variation on find:
if [ `find cache/nodes.csv -mmin +10 | egrep '.*'` ]
find always returns status code 0 unless it fails; however, egrep returns 1 if no match is found. So this combination passes if that file is older than 10 minutes.
Try it:
touch /tmp/foo; sleep 61;
find /tmp/foo -mmin +1 | egrep '.*'; echo $?
find /tmp/foo -mmin +10 | egrep '.*'; echo $?
It should print the file's path followed by 0, then 1.
My function using this:
## Usage: if isFileOlderThanMinutes "$NODES_FILE_RAW" $NODES_INFO_EXPIRY; then ...
function isFileOlderThanMinutes {
    if [ "" == "$1" ]; then serr "isFileOlderThanMinutes() usage: isFileOlderThanMinutes <file> <minutes>"; exit; fi
    if [ "" == "$2" ]; then serr "isFileOlderThanMinutes() usage: isFileOlderThanMinutes <file> <minutes>"; exit; fi
    ## Does not exist -> "older"
    if [ ! -f "$1" ]; then return 0; fi
    ## A file older than $2 minutes is found...
    find "$1" -mmin "+$2" | egrep '.*' > /dev/null 2>&1
    if [ $? -eq 0 ]; then return 0; fi ## So it is older.
    return 1 ## Else it is not older.
}
Difference in seconds between current time and last modification time of myfile.txt:
echo $(($(date +%s)-$(stat -c "%Y" myfile.txt)))
you can also use %X or %Z with the command stat -c to get the difference between last access or last status change, check for 0 return!
%X time of last access, seconds since Epoch
%Y time of last data modification, seconds since Epoch
%Z time of last status change, seconds since Epoch
The test:
if [ $(($(date +%s)-$(stat -c "%Y" myfile.txt))) -lt 600 ] ; then echo younger than 600 sec ; else echo older than 600 sec ; fi
What do you mean by older than 30 minutes: modified more than 30 minutes ago, or created more than 30 minutes ago? Hopefully it's the former, as the answers so far are correct for that interpretation. In the latter case, you have problems, since Unix file systems do not track the creation time of a file. (The ctime attribute records when the inode contents last changed, i.e., when something like chmod or chown happened.)
If you really need to know if file was created more than 30 minutes ago, you'll either have to scan the relevant part of the file system repeatedly with something like find or use something platform-dependent like linux's inotify.
#!/usr/bin/ksh
## This script creates a new timer file every minute, renames all previously
## created timer files, and then executes whatever script you need, which can
## now compare against the timer files with a find. The script is designed to
## always be running on the server. The first time it is executed it will
## remove the timer files, and it will take an hour to rebuild them (assuming
## you want 60 minutes of timer files).
set -x
# If the server is rebooted for any reason, or this script stops, we must
# rebuild the timer files from scratch.
find /yourpath/timer -type f -exec rm {} \;
while [ 1 ]
do
    COUNTER=60
    COUNTER2=60
    cd /yourpath/timer
    while [ "$COUNTER" -gt 1 ]
    do
        COUNTER2=`expr $COUNTER - 1`
        echo COUNTER=$COUNTER
        echo COUNTER2=$COUNTER2
        if [ -f timer-minutes-$COUNTER2 ]
        then
            mv timer-minutes-$COUNTER2 timer-minutes-$COUNTER
            COUNTER=`expr $COUNTER - 1`
        else
            touch timer-minutes-$COUNTER2
        fi
    done
    touch timer-minutes-1
    sleep 60
    # This will check whether the files have been fully rebuilt after a server restart.
    COUNT=`find . ! -newer timer-minutes-30 -type f | wc -l | awk '{print $1}'`
    if [ $COUNT -eq 1 ]
    then
        : # execute whatever scripts at this point
    fi
done
You can use the find command.
For example, to search for files in the current dir that are older than 30 min:
find . -type f -mmin +30
You can read more about the find command in its man page (man find).
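Wrapped in a test, this becomes the following sketch (the directory is a throwaway, and GNU touch -d is used only to stage the demonstration):

```shell
#!/bin/sh
# Report whether the directory contains any file older than 30 minutes.
dir=$(mktemp -d)
touch -d '-45 minutes' "$dir/stale.txt"   # GNU touch, for the demonstration
if [ -n "$(find "$dir" -type f -mmin +30 -print)" ]; then
  echo "found files older than 30 minutes"
fi
rm -rf "$dir"
```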
if [[ "$(date --rfc-3339=ns -r /tmp/targetFile)" < "$(date --rfc-3339=ns --date '90 minutes ago')" ]] ; then echo "older"; fi
