Shell script: increment seconds in a date printed in logs (Linux)

I have date and time printed in log files as "14:09:49.922 UTC 12.08.2015"
To analyze the logs, in a few instances I have to grep for the next X seconds from this date and time in the logs.
Note: the time zone might vary.
I have used grep along with a for loop to iterate over the seconds, minutes, or hours, depending upon the given time.
The help I am looking for is an option in the date command to increment seconds, so that if I add X seconds to the given timestamp, the date, month, year, hour, minute, and second are updated accordingly.
E.g.: "23:59:59 UTC 31.12.2015" + 1 second should return "00:00:00 UTC 01.01.2016".
Basically, I am looking for options in the date command instead of manually checking whether the seconds crossed 59 and then incrementing the minute, and so on.
How to achieve this in a shell script using date utility?

The date command doesn't support the "14:09:49.922 UTC 12.08.2015" format,
so I converted it to "14:09:49.922 UTC 08/12/2015" and then used the date utility as below:
DATE="14:09:49 UTC 12.08.2015"
# Rearrange DD.MM.YYYY into MM/DD/YYYY so date can parse it
NEXT_DATE=$(echo "$DATE" | awk '{ split($3,a,"."); print $1" "$2" "a[2]"/"a[1]"/"a[3] }')
TIME_ZONE=$(echo "$NEXT_DATE" | awk '{print $2}')
NEXT_DATE=$(TZ="$TIME_ZONE" date +"%H:%M:%S %Z %m/%d/%Y" -d "$NEXT_DATE + 1 second")
# Convert back to the log's DD.MM.YYYY format for grepping
GREP_DATE=$(echo "$NEXT_DATE" | awk '{ split($3,a,"/"); print $1" "$2" "a[2]"."a[1]"."a[3] }')
grep "$GREP_DATE" logfile
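If GNU date is available, going through the epoch avoids any ambiguity in relative-time parsing; a sketch (the helper name add_seconds and the millisecond-free stamp are my own simplifications):

```shell
# Sketch, assuming GNU date: add N seconds to a "HH:MM:SS TZ DD.MM.YYYY" stamp.
add_seconds() {
    stamp=$1 secs=$2
    # Rearrange DD.MM.YYYY into MM/DD/YYYY, which date can parse.
    parseable=$(echo "$stamp" | awk '{ split($3,a,"."); print $1" "$2" "a[2]"/"a[1]"/"a[3] }')
    tz=$(echo "$stamp" | awk '{print $2}')
    # Convert to seconds-since-epoch, add, and format back, so the
    # arithmetic cannot be misread as a time-zone offset.
    epoch=$(date -d "$parseable" +%s)
    TZ=$tz date -d "@$((epoch + secs))" +"%H:%M:%S %Z %d.%m.%Y"
}

add_seconds "23:59:59 UTC 31.12.2015" 1   # 00:00:00 UTC 01.01.2016
```

The rollover across minute, hour, day, month, and year all happens inside date itself.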

Related

extract date, hours, minutes, and seconds from a date string format (2021-09-04T20:02:33,315Z) in a shell script

I have a log file, and each row contains data with a timestamp in the format 2021-09-04T20:02:33,315Z. I want to filter only the last 30 seconds of logs from the log file.
I found that awk can be used to extract the dates in the range:
sudo awk -vDate=$(date -d '30 seconds ago' +%Y-%m-%dT%H:%M:%S,000Z) '{ if ($4 > Date) print Date FS $4}'
But I am stuck on the if condition in the command: extracting the date, hours, minutes, and seconds to check the condition.
You can use a range in awk with a comma (,) between two tests,
like this:
awk '$1$2$3 > "Sep3008:47:46", $1$2$3 < "Sep3008:54:04" {print $0}' /var/log/messages
You can compute the timestamps beforehand in your format, as in your example:
awk -v TS1=$(date ...) -v TS2=$(date ...) '$4 > TS1, $4 < TS2 { print ...}'
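Putting this together for the 30-second case, a self-contained sketch (the file name app.log and the sample lines are hypothetical); because ISO-8601 timestamps sort lexicographically, a single string comparison is enough:

```shell
# Hypothetical sample log: one stale entry, one written just now.
printf '%s old-entry\n' "2020-01-01T00:00:00,000Z" >  app.log
printf '%s new-entry\n' "$(date -u +%Y-%m-%dT%H:%M:%S,000Z)" >> app.log

# ISO-8601 timestamps compare correctly as plain strings, so no
# extraction of hours/minutes/seconds is needed.
cutoff=$(date -u -d '30 seconds ago' +%Y-%m-%dT%H:%M:%S,000Z)
recent=$(awk -v cutoff="$cutoff" '$1 >= cutoff' app.log)
echo "$recent"   # only the new-entry line survives the cutoff
```

This assumes the timestamp is the first field; adjust `$1` to match your log layout.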

Number of Mondays Falls on the First of the month

I want a command line that displays the number of Mondays which fall on the first of the month in a given year, without using the sed or awk commands.
I have this command that displays the first date of the current month:
date -d "-0 month -$(($(date +%d)-1)) days"
With GNU date, you can read input from a file (or standard input):
printf '%s\n' 2021-{01..12}-01 | date -f- +%u | grep -c 1
This prints dates for the first of each month in a year, then formats them as "weekday" (where 1 is "Monday"), then counts the number of Mondays.
To parametrize the year, replace 2021 with a variable containing the year; wrapped in a function:
mondays() {
local year=$1
printf '%s\n' "$year"-{01..12}-01 | date -f- +%u | grep -c 1
}
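As a quick sanity check: in 2021 the first of the month falls on a Monday in February, March, and November, so the function should report 3 (the function is repeated here so the snippet runs standalone):

```shell
# Count first-of-month Mondays in a year (needs bash brace expansion
# and GNU date's -f for reading dates from stdin).
mondays() {
    local year=$1
    printf '%s\n' "$year"-{01..12}-01 | date -f- +%u | grep -c 1
}

mondays 2021   # prints 3 (Feb 1, Mar 1, and Nov 1 were Mondays)
```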
Using a for loop, this can be accomplished as follows:
for mon in {01..12}; do date -d "2021-$mon-01" +%u; done | grep -c 1
Breakdown
We iterate through the numbers 01 to 12 representing the months.
We call date passing in the custom date value with the first date of each month in the year. We use +%u to return the day of week where 1 represents Monday.
Lastly, we count the number of 1s using grep -c (or grep --count).
Note, the desired year has been hard coded as 2021. The current year can be used as:
for mon in {01..12}; do date -d "$(date +%Y)-$mon-01" +%u; done | grep -c 1
This can also all be put into a function and the desired year passed in as an argument:
getMondays() {
for mon in {01..12}; do date -d "$1-$mon-01" +%u; done | grep -c 1
}
I implemented it as:
for ((i=1, year=2021, mondays=0; i<=12; i++)); do
if [ "$(date -d "$i/1/$year" +%u)" -eq 1 ]
then
let "mondays++"
fi
done
echo "There are $mondays first-of-the-month Mondays in $year."
That said, I like Mushfiq's answer. Quite elegant.

How to grep the logs between two date range in Unix

I have a log file abc.log in which each line is a date in date +%m%d%y format:
061019:12
062219:34
062319:56
062719:78
I want to see all the logs in this date range (from 7 days before to the current date), i.e., from 062019 to 062719 in this case. The result should be:
062219:34
062319:56
062719:78
I have tried a few things to achieve this:
awk '/062019/,/062719/' abc.log
This gives me the correct answer, but I don't want to hard-code the date values; trying to achieve the same thing dynamically does not give the correct result:
awk '/date --date "7 days ago" +%m%d%y/,/date +%m%d%y/' abc.log
Note:
date --date "7 days ago" +%m%d%y → 062019 (7 days back date)
date +%m%d%y → 062719 (Current date)
Any suggestions how this can be achieved?
Your middle-endian date format is unfortunate for sorting and comparison purposes. Y-m-d would have been much easier.
Your approach using , ranges in awk requires exactly one log entry per day (and that the log entries are sorted chronologically).
I would use perl, e.g. something like:
perl -MPOSIX=strftime -ne '
BEGIN { ($start, $end) = map strftime("%y%m%d", localtime $_), time - 60 * 60 * 24 * 7, time }
print if /^(\d\d)(\d\d)(\d\d):/ && "$3$1$2" ge $start && "$3$1$2" le $end' abc.log
Use strftime "%y%m%d" to get the ends of the date range in big-endian format (which allows for lexicographical comparisons).
Use a regex to extract day/month/year from each line into separate variables.
Compare the rearranged date fields of the current line to the ends of the range to determine whether to print the line.
To get around the issue of looking for dates that may not be there, you could generate a pattern that matches any of the dates. Since there are only 8 of them it doesn't get too big; if you wanted to look for the last year it might not work as well:
for d in 7 6 5 4 3 2 1 0
do
pattern="${pattern:+${pattern}\\|}$(date --date "${d} days ago" +%m%d%y)"
done
grep "^\\(${pattern}\\)" abc.log
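The same loop works with an extended-regex alternation, which avoids the backslash-escaped \| of basic grep. A runnable sketch (the sample abc.log contents are hypothetical):

```shell
# Hypothetical sample log in the question's %m%d%y: format:
# one very old entry plus one from today.
printf '%s:99\n' 010119 "$(date +%m%d%y)" > abc.log

# Build an alternation of the last 8 days in %m%d%y form.
pattern=
for d in 7 6 5 4 3 2 1 0; do
    pattern="${pattern:+${pattern}|}$(date --date "${d} days ago" +%m%d%y)"
done
grep -E "^(${pattern})" abc.log   # prints only today's line
```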

BASH: date expects 12 hour by default, input is in 24 hour

I have a simple question here.
I have an input date with a time of 05:21, but it could be any arbitrary 24-hour time.
I add 1 minute to it using the date command; date then outputs 17:22!
I need the output format to be 24-hour, because that's what the input format is and what is required.
How do I tell date to stop messing with the times?
I'm very close to just using substr to extract the minutes, adding 1, checking whether the result is >59 (and if so resetting it to 0, extracting the hour with substr, and adding 1 to that), because at this point it seems simpler.
end='12-02-2018 17:01'
test=$(echo "$end" | sed -re 's#(..)-(..)-(....) (..:..)#\3-\2-\1 \4#')
add=$(date +"%d-%m-%Y %H:%M" --date="$test + 1 minute")
echo "$test --- $add"
2018-02-12 17:01 --- 13-02-2018 05:02
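The jump in the output appears to come from GNU date's parser reading the bare "+ 1" after the time as a UTC zone offset (+01) rather than as addition, so the stamp gets shifted into the local zone before the minute is applied. Writing the relative part without the plus sign sidesteps the misparse; a sketch:

```shell
end='12-02-2018 17:01'
# Rearrange DD-MM-YYYY into YYYY-MM-DD so GNU date can parse it.
test_date=$(echo "$end" | sed -re 's#(..)-(..)-(....) (..:..)#\3-\2-\1 \4#')
# A relative item needs no "+": "1 minute" cannot be mistaken for a zone offset.
add=$(date +"%d-%m-%Y %H:%M" --date="$test_date 1 minute")
echo "$add"   # 12-02-2018 17:02
```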

Change the call in a bash script from a month to a week

I have 2 scripts in bash, and I have some files:
transaction-2012-01-01.csv.bz2
transaction-2012-01-02.csv.bz2
transaction-2012-01-03.csv.bz2
transaction-2012-01-04.csv.bz2
.
.
transaction-2012-01-31.csv.bz2
transaction-2012-02-01.csv.bz2
.
.
transaction-2012-02-28.csv.bz2
I have a script called script.sh
cat script.sh
YEAR_MONTH=$1
FILEPATH="transaction-$YEAR_MONTH*.csv.bz2"
bzcat $FILEPATH|strings|grep -v "code" >> output
And if you need to call the script, you can use the other script:
cat script2.sh
LAST_MONTH=$(date -d -1month +%Y"-"%m)
if [ $# -eq 1 ]; then
DATE=$1
else
DATE=$LAST_MONTH
fi
script.sh $DATE 1>output$DATE.csv 2>> log.txt
It cats the files for a month, but now I need to call the script with a specific week in a year:
bash script2.sh 2012-01
where 2012 is the year and 01 is the month
Now i need call the script with:
bash script2.sh 2012 13
where 2012 is the year and 13 is the week of the year.
Now I need to cat only the files in the year and week that the user specified, per week instead of per month.
But the format of the file names does not help me, because the name is transaction-year-month-day.csv.bz2, not transaction-year-week.csv.bz2.
Take a look at the manpage for strftime; these are the date format codes. For example:
$ date +"%A, %B %e, %Y at %I:%M:%S %p"
will print out a date like:
Thursday, May 30, 2013 at 02:05:31 PM
Match each format code against the fields in the output to see how this works.
On some systems, the date command will have a -j switch. This means, don't set the date, but reformat the given date. This allows you to convert one date to another:
$ date -f"$input_format" "$string_date" +"$output_format"
The $input_format is the format of your input date. $string_date is the string representation of the date in your $input_format. And, $output_format is the format you want your date in.
The first two fields are easy; your date is in YYYY-MM-DD format:
$ date -j -f"%Y-%m-%d" "$my_date_string"
The question is what you can do for the final format. Fortunately, there are formats for the week of the year: %V, which numbers the weeks 01-53, and %W, which numbers them 00-53.
What you need to do is find the date string on your file name, then convert that to the year and week number. If that's the same as the input, you need to concatenate this file.
find "$dir" -type f | while read -r transaction_file
do
file_date=${transaction_file#transaction-} # Removes the prefix
file_date=${file_date%.csv.bz2} # Removes the suffix
weekdate=$(date -j -f"%Y-%m-%d" "$file_date" +"%Y %W")
[ "$weekdate" = "$desired_date" ] || continue
...
done
For example, if someone puts in 2013 05 as the desired date, you will go through all of your files and find the ones with dates in the range you want. NOTE: the week of the year is zero-filled; you may need to zero-fill the input week number to match.
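On systems with GNU date (which lacks the -j switch), the same filter can be written with -d. A sketch, using hypothetical file names and a desired date of week 02 of 2012:

```shell
# GNU-date variant of the filter above; the two file names are hypothetical.
desired_date="2012 02"     # year plus zero-filled %W week number
touch transaction-2012-01-09.csv.bz2 transaction-2012-03-05.csv.bz2

matches=
for transaction_file in transaction-*.csv.bz2; do
    file_date=${transaction_file#transaction-}   # removes the prefix
    file_date=${file_date%.csv.bz2}              # removes the suffix
    weekdate=$(date -d "$file_date" +"%Y %W")    # e.g. "2012 02"
    [ "$weekdate" = "$desired_date" ] || continue
    matches="$matches$transaction_file "
done
echo "matched: $matches"   # only the 2012-01-09 file falls in week 02
```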
