print time in double decimal shell script - linux

How can I print the hour zero-padded to two digits? The command below prints the hour in GMT, but I want single-digit hours printed as 06, while double-digit hours stay 10, 11, 12.
date -u --date="today" +"%I" | awk -F' ' '{print $1-1}'
6

You may use printf "%02d" in awk to achieve it:
$ date -u --date="today" +"%I" | awk -F' ' '{printf "%02d\n",$1-1}'
06

Perhaps you want:
date -u --date="- 1 hour" +"%I"
If the time adjustment is part of your date string, the format will not be munged.
Alternately, if what you really want is a way to zero-pad a number in bash or awk, you have a variety of alternatives:
date -u --date="- 1 hour" +"%I" | awk '{printf "%02d\n",$1-1}'
Or in bash alone:
read hour < <( date -u --date="- 1 hour" +"%I" )
printf '%02d\n' "$hour"
Get the idea? Output format happens when you print your output, and printf in whatever language formats your output.
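As a minimal sketch of that point, printf's %02d conversion pads to two digits regardless of which language you call it from:

```shell
# %02d zero-pads to a width of two; values already two digits wide pass through unchanged
printf '%02d\n' 6 10 11
```

This prints 06, 10, 11, one per line.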

awk is superfluous here. You can use a relative time format with date:
date -u --date="-1 hour" +"%I"
06

Related

Extract password expire date

I want to get the password expire date from this output:
>chage -l dsi
Last password change : Feb 05, 2020
Password expires : May 05, 2020
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 90
Number of days of warning before password expires : 7
I have already gotten as far as seeing both dates for "Last password change" and "Password expires", but I can't get it to print only the "Password expires" date.
>chage -l dsi | cut -d ':' -f2 | head -n 2
Feb 05, 2020
May 05, 2020
How can I only get "May 05, 2020" to store it in a variable for further processing?
Thanks.
There are several options. The one that follows your schema could be:
chage -l dsi | cut -d ':' -f2 | head -n 2 | tail -1
So you get the first two lines and then the last line (which is the second in the whole text).
I don't like this approach, as it is completely dependent on the position of the information. If the order changes, you will get a wrong answer. I would go for searching the piece of information you need and then extracting the date:
chage -l dsi | grep "Password expires" | cut -d ':' -f2
With awk:
chage -l dsi | awk -F':' '$1 ~ /^Password expires/{ print $2 }'
or if you want to get rid of the space character after the colon:
chage -l dsi | awk -F':' '$1 ~ /^Password expires/{ sub(/^[[:blank:]]/, "", $2); print $2 }'
And a variant with sed
chage -l dsi | sed -n '/Password expires/s/^.*: //p'
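To store the result in a variable, as the question asks, command substitution works with any of the pipelines above. A sketch, run here against a copy of the sample output rather than a live chage call (so it can be tried without the dsi account):

```shell
# stand-in for the `chage -l dsi` output shown in the question
chage_output='Last password change : Feb 05, 2020
Password expires : May 05, 2020
Password inactive : never'

# capture just the "Password expires" date into a variable
expires=$(printf '%s\n' "$chage_output" | sed -n '/Password expires/s/^.*: //p')
echo "$expires"    # May 05, 2020
```

With the real command you would simply write expires=$(chage -l dsi | sed -n '/Password expires/s/^.*: //p').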

Transform an entire column using "date" command

Here is a dummy CSV file with 3 rows. The actual file has 7 million rows.
testdates.csv:
y_m_d
1997-01-01
1985-06-09
1943-07-14
The date tool can format a given date like this, to get the day of the week:
date -d "25 JUN 2011" +%A
=> output: Saturday
Query: How to provide an entire column as input for the date +%A transformation?
The resulting output should be appended to the end of the input file.
Intended output:
y_m_d, Day
1997-01-01, Wednesday
1985-06-09, Sunday
1943-07-14, Wednesday
To read multiple dates from a file using GNU date, you can use the -f/--file option:
$ date -f testdates.csv '+%F, %A'
1997-01-01, Wednesday
1985-06-09, Sunday
1943-07-14, Wednesday
Since your file has a header row, we have to skip that, for example using process substitution and sed:
date -f <(sed '1d' testdates.csv) '+%F, %A'
To get your desired output, combine like this:
echo 'y_m_d, Day'
date -f <(sed '1d' testdates.csv) '+%F, %A'
or write to a new file:
{
echo 'y_m_d, Day'
date -f <(sed '1d' testdates.csv) '+%F, %A'
} > testdates.csv.tmp
and after inspection, you can rename with
mv testdates.csv.tmp testdates.csv
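The same pipeline can be sketched without process substitution, since GNU date accepts - as the --file argument to read standard input (the filenames below are only illustrative, and LC_ALL=C pins English day names):

```shell
# build the sample input from the question
printf 'y_m_d\n1997-01-01\n1985-06-09\n1943-07-14\n' > testdates.csv

{
  echo 'y_m_d, Day'
  # skip the header, then let date format every remaining line at once
  sed '1d' testdates.csv | LC_ALL=C date -f - '+%F, %A'
} > testdates_out.csv

cat testdates_out.csv
```

Because date is invoked once for the whole column instead of once per row, this scales to the 7-million-row file far better than a shell loop would.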
Hard to beat that date answer.
GNU awk would be OK too:
gawk -v OFS=', ' '
NR == 1 {$2 = "Day"}
NR > 1 {$2 = strftime("%A", mktime(gensub(/-/, " ", "g", $1) " 0 0 0"))}
1
' testdates.csv
y_m_d, Day
1997-01-01, Wednesday
1985-06-09, Sunday
1943-07-14, Wednesday
Or perl:
perl -MTime::Piece -lne '
print "$_, ", $. == 1
? "Day"
: Time::Piece->strptime($_, "%Y-%m-%d")->strftime("%A")
' testdates.csv
#!/bin/bash
# plain bash loop: skip the header line, then run date once per row
tail -n +2 testdates.csv | while read -r datespec; do
echo "$datespec, $(date -d "$datespec" +%A)"
done
Output:
1997-01-01, Wednesday
1985-06-09, Sunday
1943-07-14, Wednesday

bash is eating spaces from date format in linux

The date format is correct when I run date directly, but when I store it in a variable, it loses a space if the day is a single digit (I need that extra space to grep /var/log/messages). Please suggest how to get the exact format as-is. Thanks!
$ date -d '-1 day' '+%b %e'
Aug  1
$ echo $(date -d '-1 day' '+%b %e')
Aug 1
$ var=`date -d '-1 day' '+%b %e'`
$ echo $var
Aug 1
Use double-quotes like this:
$ echo $(date -d '+1 day' '+%b %e')
Aug 2
$ echo "$(date -d '+1 day' '+%b %e')"
Aug  2
Or:
$ var="$(date -d '+1 day' '+%b %e')"
$ echo $var
Aug 2
$ echo "$var"
Aug  2
Without double-quotes, the shell, among other things, applies word splitting to the output and that collapses multiple spaces to one blank.
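A minimal sketch of that behavior, independent of date:

```shell
var='Aug  1'        # two spaces, as produced by the %e padding
echo $var           # unquoted: word splitting collapses runs of whitespace
echo "$var"         # quoted: the string is passed through intact
```

The first echo prints "Aug 1" (one space); the second prints "Aug  1" exactly as stored.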

how to use shell to split string into correct format?

I have a file with time durations. Some have a day component, but most are in hh:mm form; the full form is dd+hh:mm.
I was trying to tr -s '+:' ':' them into dd:hh:mm form and then split($1,tm,":") to calculate the total in seconds.
However, the problem I am facing is that after this operation, the hh:mm form puts hh in tm[1], while the dd:hh:mm form puts dd in tm[1].
Is there a way to always put the hh of an hh:mm value into tm[2] and set tm[1] to 0, please?
4+11:26
10+06:54
20:27
is the input
the output I wanted would be(in form of tm[1], tm[2], tm[3]):
4 11 26
10 06 54
0 20 27
I would first preprocess it with sed (to add missing 0+ in lines that don't have a plus sign) and then tr +: to spaces:
sed 's/^\([^+]\+\)$/0+\1/g' a.txt | tr '+:' ' '
Or as suggested by Lars, shorter sed version:
sed '/+/! s/^/0+/' a.txt | tr '+:' ' '
awk to the rescue!
You can do the conversion and computation in awk; using your input file, the values are converted to minutes:
$ awk -F: '{if($1~/\+/){split($1,f,"+");h=f[1]*24+f[2]}
else h=$1; m=h*60+$2; print $0 " --> " m}' file
4+11:26 --> 6446
10+06:54 --> 14814
20:27 --> 1227
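Another way to get exactly the tm[1] tm[2] tm[3] layout the question asks for is to split on both delimiters at once and test the field count (a sketch, not from the original answers):

```shell
# split on either '+' or ':'; a line without a day component yields only two fields
printf '4+11:26\n10+06:54\n20:27\n' |
awk -F'[+:]' '{ if (NF == 2) print 0, $1, $2; else print $1, $2, $3 }'
```

This prints "4 11 26", "10 06 54", "0 20 27", matching the requested output.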

Configuring date command to meet my format

I have a date in YYYY.MM.DD HH:MM format (e.g. 2014.02.14 13:30). I'd like to convert it to seconds since the epoch using the date command.
The command
date -d"2014.02.14 13:30" +%s
won't work, because of the dots separation.
Any ideas?
Why don't you make the date format acceptable? Just replace dots with dashes:
$ date --date="`echo '2014.02.14 13:30' | sed 's/\./-/g'`" +%s
1392370200
Here I first change the format:
$ echo '2014.02.14 13:30' | sed 's/\./-/g'
2014-02-14 13:30
and then use the result as a parameter for date.
Note that the result depends on your timezone.
You can use:
s='2014.02.14 13:30'
date -d "${s//./}"
Fri Feb 14 13:30:00 EST 2014
To get EPOCH value:
date -d "${s//./}" '+%s'
1392402600
using awk :
s=`echo "2014.02.14 13:30" | awk '{gsub(/\./,"-",$0);print $0}'`
date -d "$s"
date -d "$s" +%s
output:
Fri Feb 14 13:30:00 IST 2014
1392364800
Perl: does not require you to munge the string
d="2014.02.14 13:30"
epoch_time=$(perl -MTime::Piece -E 'say Time::Piece->strptime(shift, "%Y.%m.%d %H:%M")->epoch' "$d")
echo $epoch_time
1392384600
Timezone: Canada/Eastern
I finally solved it using:
awk 'BEGIN{FS=","}{ gsub(/\./," ",$1);gsub(/:/," ",$2); var=sprintf("%s %s 00",$1,$2); print mktime(var), $3,$4,$5,$6,$7 }' myfile | less
so myfile:
2014.09.24,15:15,1.27921,1.27933,1.279,1.27924,234
became
1411582500 1.27921 1.27933 1.279 1.27924 234
:)
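One note on the differing epoch values in the answers above: date -d interprets the string in the local timezone, so each answerer got a different number. Pinning TZ makes the result reproducible; a sketch using the bash substitution already shown above:

```shell
s='2014.02.14 13:30'
# ${s//./-} replaces every dot with a dash, giving a format GNU date understands
TZ=UTC date -d "${s//./-}" +%s    # 1392384600 when interpreted as UTC
```

Any other timezone name from the tz database can be substituted for UTC to match a specific locale.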
