Find the next nearest value (bash) - linux

Let's say I have some holiday data (holiday_master.csv) in columns, something like
...
20200320 Vernal Equinox Day
20200429 Showa Day
20200503 Constitution Day
20200505 Green Day
20200720 Children's Day
20200811 Sea Day
...
Given this set of data, I want to find the next closest holiday from the given date.
For example if the input is 20200420, 20200429 Showa Day is expected.
If the input is 20200620, 20200720 Children's Day is expected.
I have a feeling that awk has the necessary functionality to do this, but any solution that works in a bash script is welcome.

Would you please try the following bash script:
#!/bin/bash
input="20200428" # or assign to whatever date you need
< "holiday_master.csv" sort -nk1,1 | # sort the csv file by date and pipe it into the while loop
while read -r date desc; do
    if (( date >= input )); then # if the date is greater than or equal to the input
        echo "$date" "$desc"     # then print the line
        break                    # and exit the loop
    fi
done
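With the sample holiday_master.csv above and input="20200428", running the script (saved here as next_holiday.sh, a name used only for illustration) prints the first holiday on or after that date:
$ ./next_holiday.sh
20200429 Showa Day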

Assuming no two days will ever have the same date...
DATE=<some desired input date>
awk "{print (\$1 - $DATE"' "\t" $0)}' calendar.txt | sed '/^-/d' | sort | head -n 1 | awk '{$1=""; print $0}'
Explanation
awk "{print (\$1 - $DATE"' "\t" $0)}' calendar.txt: Prepend a column to the input.txt file describing the difference between the desired input date and the date column
sed '/^-/d': Remove all lines beginning with -. Dates with negative differences have already passed.
sort: Sort the remaining entries from least to greatest (based upon the difference column)
head -n 1: Select only the first row (The lowest difference)
awk '{$1=""; print $0}': Print all but the first column
Prettier script version
#!/bin/bash
# Usage: script <Date> <Calendar file>
DATE=${1:--1}
CAL=${2:-calendar.txt}
# Arg check and execute
if [ ! -f "$CAL" ]
then
    echo "File not found: $CAL"
    echo "Usage: script <Date> <Calendar file>"
elif [ "$DATE" -le 0 ]
then
    echo "Invalid date: $DATE"
    echo "Usage: script <Date> <Calendar file>"
elif [ $(echo "$DATE" | grep -Ewo -- '-?[0-9]+' | wc -l) -eq 0 ]
then
    echo "Invalid date: $DATE"
    echo "Usage: script <Date> <Calendar file>"
else
    awk '{print ($1 - '"$DATE"' "\t" $0)}' "$CAL" | sed '/^-/d' | sort -n | head -n 1 | awk '{$1=""; print $0}'
fi
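As a quick sanity check, assuming the question's data is saved space-separated as calendar.txt and the script as script.sh (both names are assumptions), the run for 20200420 picks Showa Day; the leading blank is where the stripped difference column used to be:
$ ./script.sh 20200420 calendar.txt
 20200429 Showa Day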

As you use YYYYMMDD format we might compare it just like numbers (the year is the most significant part, then the month, then the day). So you can use AWK the following way; let:
20200320 Vernal Equinox Day
20200429 Showa Day
20200503 Constitution Day
20200505 Green Day
20200720 Children's Day
20200811 Sea Day
be a file named holidays.txt, then:
awk 'BEGIN{inputdate=20200420}{if($1>inputdate){print $2;exit}}' holidays.txt
output:
Showa
Explanation: in BEGIN I set inputdate to 20200420; then, when a line with a greater number in the 1st column is found, I print the content of the 2nd column and exit (otherwise later dates would be printed too). Note that AWK automatically parses numbers when asked to do a comparison (> in this case), so you do not have to take care of the conversion yourself - you could even write inputdate="20200420" and it would work too.
This solution assumes that all dates in the file are already sorted.
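If you would rather pass the date in from the shell than hard-code it in BEGIN, the same comparison works with awk's -v option; a minimal sketch (printing the whole matching line instead of just the name is my own choice here):
input=20200420
awk -v inputdate="$input" '$1 > inputdate { print; exit }' holidays.txt
# prints: 20200429 Showa Day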

Using awk and assuming the source data is comma separated:
awk -F, -v dayte="20200420" '
BEGIN {
    "date -d "dayte" +%s" | getline dat1
}
{
    "date -d "$1" +%s" | getline dat2
    dat3 = dat2 - dat1
    if (dat3 > 0) {
        hols[dat3] = $2
    }
}
END {
    asorti(hols, hols1, "@ind_num_asc")
    print hols[hols1[1]]
}
' holiday_master.csv
One liner:
awk -F, -v dayte="20200420" 'BEGIN { "date -d "dayte" +%s" | getline dat1 } { "date -d "$1" +%s" | getline dat2;dat3=dat2-dat1;if (dat3 > 0 ) { hols[dat3]=$2 } } END { asorti(hols,hols1,"@ind_num_asc");print hols[hols1[1]] }' holiday_master.csv
Set the field separator to , and set a variable dayte to the date we wish to check. In the BEGIN block, we pass the dayte variable through to the date command via an awk pipe/getline and read the epoch result into the variable dat1. We do the same with the first column of the master file ($1), reading it into dat2. We take the difference between the epoch dates and store the result in dat3. Only if the result is positive (i.e. the holiday is in the future) do we use dat3 as an index in a "hols" array, with the holiday description as the value. In the END block, we sort the indexes of hols into a new hols1 array, basing the sort on ascending numeric indexes. We then take the first index of the new hols1 array to obtain the holiday that is closest to the dayte variable.
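One detail worth adding for big files: each "command" | getline opens a pipe that awk keeps registered until you close() it, so it is good practice to close each date command after reading from it. A variant of the same script with that added (the logic is otherwise unchanged):
awk -F, -v dayte="20200420" '
BEGIN {
    cmd = "date -d " dayte " +%s"
    cmd | getline dat1
    close(cmd)          # release the pipe once the epoch value is read
}
{
    cmd = "date -d " $1 " +%s"
    cmd | getline dat2
    close(cmd)          # one date command is spawned per line, so close each one
    dat3 = dat2 - dat1
    if (dat3 > 0) {
        hols[dat3] = $2
    }
}
END {
    asorti(hols, hols1, "@ind_num_asc")
    print hols[hols1[1]]
}
' holiday_master.csv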

Assuming the holiday list file is sorted by date as you have given, the below would work
$ awk -v dt="20200420" ' (dt-$1)<0 { print;exit } ' holiday.txt
20200429 Showa Day
$ awk -v dt="20200620" ' (dt-$1)<0 { print;exit } ' holiday.txt
20200720 Children's Day
$
If the holiday file is not sorted, then you can use the below (shuf is used here only to simulate an unsorted file):
$ shuf holiday.txt | awk -v dt="20200420" ' dt-$1<0 { a[(dt-$1)*-1]=$0 } END { asort(a); print a[1] } '
20200429 Showa Day
$ shuf holiday.txt | awk -v dt="20200620" ' dt-$1<0 { a[(dt-$1)*-1]=$0 } END { asort(a); print a[1] } '
20200720 Children's Day

Related

Number of Mondays Falling on the First of the Month

I want a command line that can display the number of Mondays which fall on the first of the month in a given year, without using sed or awk commands.
I have this command that displays the first date of the current month:
date -d "-0 month -$(($(date +%d)-1)) days"
With GNU date, you can read input from a file (or standard input):
printf '%s\n' 2021-{01..12}-01 | date -f- +%u | grep -c 1
This prints dates for the first of each month in a year, then formats them as "weekday" (where 1 is "Monday"), then counts the number of Mondays.
To parametrize the year, replace 2021 with a variable containing the year; wrapped in a function:
mondays() {
    local year=$1
    printf '%s\n' "$year"-{01..12}-01 | date -f- +%u | grep -c 1
}
Using a for loop, this can be accomplished as follows.
for mon in {01..12}; do date -d "2021-$mon-01" +%u; done | grep -c 1
Breakdown
We iterate through the numbers 01 to 12 representing the months.
We call date, passing in a custom date value for the first day of each month of the year. We use +%u to return the day of the week, where 1 represents Monday.
Lastly we count the number of 1s using grep -c or grep --count
Note, the desired year has been hard coded as 2021. The current year can be used as:
for mon in {01..12}; do date -d "$(date +%Y)-$mon-01" +%u; done | grep -c 1
This can also all be put into a function and the desired year passed in as an argument:
getMondays() {
    for mon in {01..12}; do date -d "$1-$mon-01" +%u; done | grep -c 1
}
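For example (in 2021 the first of February, March and November fell on a Monday):
$ getMondays 2021
3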
I implemented it as:
for ((i=1, year=2021, mondays=0; i <= 12; i++)) {
    if [ $(date -d "$i/1/$year" +%u) -eq 1 ]
    then
        let "mondays++"
    fi
}
echo "There are $mondays Mondays in $year."
That said, I like Mushfiq's answer. Quite elegant.

Select rows with min value based on fourth column and group by first column in linux

Can you please tell me how to select rows with the minimum value in the fourth column, grouped by the first column, in Linux?
Original file
x,y,z,w
1,a,b,0.22
1,a,b,0.35
1,a,b,0.45
2,c,d,0.06
2,c,d,0.20
2,c,d,0.46
3,e,f,0.002
3,e,f,0.98
3,e,f,1.0
The file I want is as below.
x,y,z,w
1,a,b,0.22
2,c,d,0.06
3,e,f,0.002
I tried the following, but it does not work.
sort -k1,4 -u original_file.txt | awk '!a[$1] {a[$1] = $4} $4 == a[$1]' >> out.txt
You should just sort by column 4. You need to store the entire line in the array, not just $4. And then print the entire array at the end.
To keep the heading from getting mixed in, I print that separately and then process the rest of the file.
head -n 1 original_file > out
tail -n +2 original_file | sort -t, -k 4n -u | awk -F, '
    !a[$1] { a[$1] = $0 }
    END { for (k in a) print a[k] }' | sort -t, -k 1,1n >> out
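If you would rather avoid the pre-sort, a single awk pass that keeps a running minimum per group works as well. This is a sketch along the same lines, not the answer above; $4+0 forces a numeric comparison, and the final sort only restores the group order:
head -n 1 original_file > out
tail -n +2 original_file | awk -F, '
    !($1 in min) || $4+0 < min[$1] { min[$1] = $4+0; row[$1] = $0 }
    END { for (k in row) print row[k] }' | sort -t, -k 1,1n >> out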

Filter Linux logs based on Unix Timestamp

I have a log on a linux server. The entries are in the format:
[timestamp (seconds since jan 1 1970)] log data entry
I need a bash script that will take the name of the log file and output only yesterday's entries (from 00:00:00 to 23:59:59 of the previous day) to a new file.
I've seen various scripts that filter logs based on dates but all of them so far deal with date stamps in more human readable formats, or are not dynamic. They rely on hard coded dates. I want a script that is going to run in a cron job daily so it has to be aware of what the current date is each time it runs.
Thanks.
Update: This is what I have so far. It just never seems to do the evaluation of the date. It prints 00 for the date so everything gets through.
head -5 logfile.log | awk '{
if($1 >= (date -d "today 00:00:00" +"%s"))
print $1 (date -d "today 00:00:00" +"%s");
}'
I'm confused though, even if the date evaluates properly, $1 is going to have numbers inside square brackets, and my date will be just numbers. Will it do the comparison properly if the strings are formatted differently like that? I haven't figured out how to shove the date number returned by date into a string with brackets yet.
Well, maybe using the dates as Dale said, but with a little trick to extract the "[" and "]" and then compare the dates. Something like this:
YESTERDAY=$(date -d "yesterday 00:00:00" +"%s")
TODAY=$(date -d "today 00:00:00" +"%s")
# Combine the processing in awk
awk -v MIN=${YESTERDAY} -v MAX=${TODAY} -F["]""["] '{ if ( $2 >= MIN && $2 <= MAX) print $0}' logfile.log
Combining tips and tricks from Glenn, Dale, and Davison:
awk -v today=$(date -d "today 00:00:00" +"%s") -v yesterday=$(date -d "yesterday 00:00:00" +"%s") -F'[\\[\\] ]' '{ if($2 >= yesterday && $2 < today) print }' logfile.log
Uses the shell's $() command substitution to feed variables to awk's -v argument parser
-F'[\\[\\] ]' sets the field separator to be [, ], or a space
input data:
[1300000000 log1 data1 entry1]
[1444370000 log2 data2 entry2]
[1444374000 log3 data3 entry3]
[1444460399 log4 data4 entry4]
[1500000000 log5 data5 entry5]
output:
[1444370000 log2 data2 entry2]
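Since the question mentions running this daily from cron and writing the matching lines to a new file, the combined command can be wrapped in a small script. A sketch (the script name and the dated output-file naming are my assumptions):
#!/bin/bash
# Usage: yesterday_entries.sh <logfile>
log="$1"
min=$(date -d "yesterday 00:00:00" +%s)
max=$(date -d "today 00:00:00" +%s)
out="$log.$(date -d yesterday +%Y-%m-%d)"   # e.g. logfile.log.<yesterday's date>
awk -v min="$min" -v max="$max" -F'[\\[\\] ]' '$2 >= min && $2 < max' "$log" > "$out"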
You might try something like this:
YESTERDAY=$(date -d "yesterday 00:00:00" +"%s")
TODAY=$(date -d "today 00:00:00" +"%s")
cat your_log.log | \
awk -v MIN=${YESTERDAY} -v MAX=${TODAY} \
'{if($1 >= MIN && $1 < MAX) print}'
:)
Dale

How to search a log file for two different dates in Linux

I'm using an RPM-based distro and I want to dynamically search a log file for today's date and yesterday's date to output a report. The string has to be dynamic ( no egrep "\b2012-10-[20-30]\b" ) meaning that I can take the same one-liner or script and search a file for today's date and yesterday's date and print some output. Basically searching log files for specific entries.
Here's what I got, but I want to replace the egrep with something dynamic:
grep "No Such User Here" /var/log/maillog | egrep "\b2012-10-2[3-4]\b" | cut -d "<" -f 3 | egrep -o '\b[a-zA-Z0-9._%-]+#[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}\b' | cut -d "#" -f 2 | sort -d |uniq -ci | awk -F" " '{ print "Domain: " $2 " has been sent " $1 " messages that got a No Such User Here error." }'
Any help is appreciated. I'm looking for something that very likely uses the date command
date "+%Y-%m-%d"
but I need to take the %d and search for both the current day, and yesterday. Can this be done?
Any insight is much appreciated.
If you have GNU date:
$ x=$(date "+%Y-%m-%d")
$ y=$(date "+%Y-%m-%d" -d "-1 day")
$ egrep "($x|$y)" file
x contains the current date and y contains yesterday's date.
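Dropped into the pipeline from the question, only the date-matching part changes; everything after the egrep stays exactly as it was:
x=$(date "+%Y-%m-%d")
y=$(date "+%Y-%m-%d" -d "-1 day")
grep "No Such User Here" /var/log/maillog | egrep "($x|$y)" | cut -d "<" -f 3 | egrep -o '\b[a-zA-Z0-9._%-]+#[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}\b' | cut -d "#" -f 2 | sort -d | uniq -ci | awk -F" " '{ print "Domain: " $2 " has been sent " $1 " messages that got a No Such User Here error." }'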
With GNU awk's time functions:
gawk 'BEGIN{
    today = strftime("%Y-%m-%d")
    yesterday = strftime("%Y-%m-%d", systime()-24*60*60)
}
$0 ~ "(" today "|" yesterday ")"
' file

How to count number of unique values of a field in a tab-delimited text file?

I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, it's like the first column has colors, so I want to know how many different unique values there are in that column, and I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I wanted was a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green etc. coloured objects there are.
You can make use of cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
This gets the unique values in field 1; replacing 1 by 2 will give you the unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique values, you can add the wc command to the chain:
cut -f 1 input_file | sort | uniq | wc -l
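With just the two sample rows shown in the question, for example, that reports 2 (Red and Blue):
$ cut -f 1 input_file | sort | uniq | wc -l
2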
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this, for example to list all the unique values in the first column
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides the synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed tr wc cut sort uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)
cols=$((cols + 2))
for ((i=1; i < cols; i++))
do
    echo Column $i ::
    cut -f $i < "$FILE" | sort | uniq -c
    echo
done
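Run against the question's two sample rows saved as a tab-separated file (sample.tsv is just an illustrative name), the synopsis of the first column looks like this, and the remaining columns follow in the same way:
$ ./script.sh sample.tsv
Column 1 ::
      1 Blue
      1 Red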
This script outputs the number of unique values in each column of a given file. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. It uses gawk-style arrays of arrays, so it needs gawk 4 or later. Simply save the script in a bash file (.sh) and pass the tab-delimited file as a parameter.
Code
#!/bin/bash
awk -F '\t' '
    (NR==1){
        for(fi=1; fi<=NF; fi++)
            fname[fi]=$fi;
    }
    (NR!=1){
        for(fi=1; fi<=NF; fi++)
            arr[fname[fi]][$fi]++;
    }
    END{
        for(fi=1; fi<=NF; fi++){
            out=fname[fi];
            for (item in arr[fname[fi]])
                out=out"\t"item"_"arr[fname[fi]][item];
            print(out);
        }
    }
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66
