Calendar calculations in bash - linux

I want to do some calendar manipulations in bash - specifically, I want to figure out the last date of a given month (leap years included; preparing a lookup table is not a valid solution for me).
Suppose I have the following code:
year=2009
start_month=2
end_month=10
for month in $(seq $start_month $end_month); do
    echo "Last date of $(date +"%B" -d "${year}-${month}-01") is: " ???
done
I can't figure out how to do something like this. I thought date -d would work like POSIX mktime and fold invalid dates into their valid equivalents, so I could say something like date -d "2009-03-00" and get '2009-02-28', but no such luck.
Is there any way to do it using only what is available in bash in a standard GNU environment?

Try: date -d 'yesterday 2009-03-01'
Intuitive, I know. Earlier versions of date used to work the POSIX way.
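As a sketch (assuming GNU date, and that the 'yesterday' trick works as described above), this plugs into the loop from the question by asking for the day before the 1st of the following month:
year=2009
for month in $(seq 2 10); do
    this=$(printf '%d-%02d-01' "$year" "$month")
    # first day of the following month (December wraps to January of the next year)
    if [ "$month" -eq 12 ]; then
        next=$(printf '%d-01-01' $((year + 1)))
    else
        next=$(printf '%d-%02d-01' "$year" $((month + 1)))
    fi
    echo "Last date of $(date +%B -d "$this") is: $(date -d "yesterday $next" +%d)"
done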

date(1)'s -d option is GNU-specific, so it will only work on GNU/Linux.
A more portable solution (this should even work in sh, AFAIK) is this:
: $(cal 4 2009); echo $_
(The : no-op command receives cal's output as its arguments, and $_ expands to the last of them, which is the last day of the month.)

If you don't mind playing with grep/awk/perl, you can take a look at cal.
$ cal 4 2009
     April 2009
Su Mo Tu We Th Fr Sa
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30
Edit (MarkusQ): To atone for my joke solution below I'll contribute to yours:
cal 4 2009 | tr ' ' '\n' | grep -v ^$ | tail -n 1
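An equivalent awk one-liner, if you'd rather not chain tr/grep/tail (just a sketch: it remembers the last field of each non-empty line and prints the final one):
cal 4 2009 | awk 'NF { last = $NF } END { print last }'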

Well, one way would be to watch the current date in a loop until the month component changes, saving the day component from the previous iteration. That would give you both the first and last day of the month, but it might be too slow.
Posted 1 April 2009

Related

Converting the repeated output of a script into a rate

While doing system administration I often write one-liners to get the status of a process, or some value like used disk-space, number of files processed or seconds left (for example replication). I use tools like watch or echo in a loop with date to assess the status in real-time.
Often I know where the value I'm watching will end up: it will go up to a defined number, or down to zero. To estimate when it will be done (some processes take hours), I would put some timestamps and values in a spreadsheet, calculate the rate of increase or decrease of the value between the timestamps, then average those rates and extrapolate to get the expected end time of the process.
I'm looking for a way to automate this, the way pv does for a pipe. I would expect it to work something like this:
$ rate --expected-value=0 --interval=10 "mysql -e 'show slave status' -E | grep Seconds_Behind_Master | awk '{print $2}'"
Fri May 31 10:31:48 CEST 2019 | value: 52952
Fri May 31 10:31:58 CEST 2019 | value: 52918 | rate: 3.4/s | ETA: 10:57:27
Fri May 31 10:32:08 CEST 2019 | value: 52886 | rate: 3.2/s | ETA: 10:58:29
or for another example:
$ rate --unit byte --interval=1 "stat / -f -t | awk '{print $9}'"
Fri May 31 10:58:03 CEST 2019 | value: 11908091
Fri May 31 10:58:04 CEST 2019 | value: 11829190 | rate: 78900 bytes/s
These are just examples, of course, and the fictional rate utility does not exist. I could build it myself, but I wonder if there is an existing utility (which I have not found yet) that can do this, maybe a library, or a simple one-liner that would do something similar.
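For what it's worth, a rough sketch of the kind of loop I imagine (the script name, the awk arithmetic and the ETA logic below are purely illustrative, not an existing tool):
#!/bin/bash
# rate-sketch.sh "command that prints a number" [interval_seconds]
cmd=$1
interval=${2:-10}
prev=
while :; do
    value=$(eval "$cmd")
    line="$(date) | value: $value"
    if [ -n "$prev" ]; then
        # average rate over the last interval; the ETA assumes a linear decline to zero
        rate=$(awk -v a="$prev" -v b="$value" -v t="$interval" 'BEGIN { printf "%.1f", (a - b) / t }')
        secs_left=$(awk -v v="$value" -v r="$rate" 'BEGIN { if (r > 0) printf "%d", v / r; else print -1 }')
        if [ "$secs_left" -ge 0 ]; then
            line="$line | rate: ${rate}/s | ETA: $(date -d "+${secs_left} seconds" +%H:%M:%S)"
        fi
    fi
    echo "$line"
    prev=$value
    sleep "$interval"
done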

Set a cron every 10 days starting from 16th January

How to set a cron to execute every 10 days starting from 16th January? Would this suffice?
30 7 16-15/10 * * command >/dev/null
The above starts at 7:30 AM on the 16th of every month, ends on the 15th of the next month, and repeats every 10 days. I don't think what I have above is correct. Can anyone tell me how to set up the cron so that month ends are taken into account and the command is executed every 10 days, starting from 16th January of this year, 2016?
As William suggested, cron can't handle this complexity by itself. However, you can run a cron job more frequently, and use something else for the logic. For example:
30 7 16-31 1 * date '+\%j' | grep -q '0$' && yourcommand
30 7 * 2-12 * date '+\%j' | grep -q '0$' && yourcommand
This date format string prints the day of the year, from 001 to 366. The grep -q does a pattern match but does NOT print the results; it just returns success or failure based on what it finds. Every 10 days, the day of the year ends in a zero. On those days, yourcommand gets run.
This has a problem with the year roll-over. A more complex alternative might be to do a similar grep on the output of date '+%s' (the epoch second), but you'll need to do math to turn seconds into days for analysis by grep. This might work (you should test):
SHELL=/bin/bash
30 7 * * * echo $(( $(date '+\%s') / 86400 )) | grep -q '0$' && yourcommand
(Add your Jan 16th logic too, of course.)
This relies on the fact that shell arithmetic can only handle integers. The shell simply truncates rather than rounding.
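For example, a quick check of the truncation (plain shell arithmetic, nothing cron-specific):
echo $(( 1000000087 / 86400 ))   # prints 11574; the leftover 6487 seconds are simply dropped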
UPDATE
In a comment on another answer, you clarified your requirements:
The command should start executing on January 16th, and continue like on January 26th, February 5th, February 15th and so on – jai
For this, the epoch-second approach is probably the right direction.
% date -v1m -v16d -v7H -v30M -v0S '+%s'
1452947400
(I'm in FreeBSD, hence these arguments to date.)
SHELL=/bin/bash
30 7 * * * [[ $(( ($(date '+\%s') - 1452947400) \% 864000 )) == 0 ]] && yourcommand
This expression subtracts the epoch second of 7:30 AM Jan 16 (my timezone) from the current time, and tests whether the resulting difference is divisible by 10 days (864000 seconds). If it is, the expression evaluates true and yourcommand is run. Note that $(( 0 % $x )) evaluates to 0 for any non-zero $x, so the job also fires on Jan 16 itself.
This may be prone to error if cron is particularly busy and can't get to your job in the one second where the math works out.
If you want to make this any more complex (and perhaps even if it's this complex), I recommend you move the logic into a separate shell script to handle the date comparison math, especially if you plan to add a fudge factor to allow for jobs that miss their 1-second window; that would likely be multiple lines of script, which is awkward to maintain in a single cronjob entry.
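A rough sketch of what such a script could look like (the anchor epoch is the 1452947400 computed above; the script name and the 60-second window are arbitrary choices):
#!/bin/bash
# every-10-days.sh, run from cron at 7:30: 30 7 * * * /path/to/every-10-days.sh
anchor=1452947400            # epoch second of 7:30 AM Jan 16 2016 (local time)
period=$(( 10 * 86400 ))     # ten days in seconds
offset=$(( ( $(date +%s) - anchor ) % period ))
# accept jobs that start up to 60 seconds late; note that a DST change shifts
# 7:30 local time by an hour relative to epoch multiples, so widen the window
# or compare calendar dates instead if that matters to you
if [ "$offset" -ge 0 ] && [ "$offset" -lt 60 ]; then
    yourcommand
fi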
Observation: the math capabilities of cron are next to non-existent. The math capabilities of the Unix tools are endless.
Conclusion: move the problem from the cron domain to the shell domain.
Solution: run this each day with 30 7 * * * /path/to/script in the crontab:
#!/bin/sh
PATH=$(/usr/bin/getconf PATH)
# strip leading zeros so a day of year like "016" is not treated as an octal constant
doy=$(date +%j)
doy=${doy#0}; doy=${doy#0}
if test $(( $doy % 10 )) = 6; then
    your_command
fi
This tests whether the day-of-year modulo 10 is 6, like it is for January 16 (and January 6th is already in the past...).
Thinking outside the box:
Fix your requirement. Convince whoever came up with that funny 10-day cycle to accept a 7-day cycle. So much easier for cron. This is following the KISS principle.
0 30 7 1/10 * ? * command >/dev/null
Output for the above expression (note that this is seven-field Quartz scheduler syntax, not a standard crontab entry) is:
Saturday, January 16, 2016 7:30 AM
1. Thursday, January 21, 2016 7:30 AM
2. Sunday, January 31, 2016 7:30 AM
3. Monday, February 1, 2016 7:30 AM
4. Thursday, February 11, 2016 7:30 AM
5. Sunday, February 21, 2016 7:30 AM
Output for your expression
i.e. 30 7 16-15/10 * * command >/dev/null
2016-01-15 07:30:00
2016-02-15 07:30:00
2016-03-15 07:30:00
2016-04-15 07:30:00
2016-05-15 07:30:00
2016-06-15 07:30:00
2016-07-15 07:30:00
2016-08-15 07:30:00
2016-09-15 07:30:00
2016-10-15 07:30:00
The closest syntax would look like this (pick the range end that matches the month length):
30 7 1-30/10 * *
30 7 1-31/10 * *
30 7 1-28/10 * *
30 7 1-29/10 * *
You can test cron expressions here: http://cron.schlitt.info/

Running a command over several files and keeping the same name

How can I run a shell command on several files in Linux/macOS while keeping the same name (excluding the extension)?
e.g. let's assume that I want to compile a list of files with a command into other files with the same name:
{command} [name].less [same-name].css
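For the simple case where both files sit in the same directory, a plain bash loop with parameter expansion is a reasonable sketch (yourcompiler below is a placeholder for whatever command you actually run, e.g. lessc):
for f in *.less; do
    yourcompiler "$f" "${f%.less}.css"    # ${f%.less} strips the .less extension
done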
EDIT: Supposing, more generally, that the two targets are located in two different paths, say, "path/to/folder2" and "path/to/folder3", and keeping in mind that you can always change the list used in the for loop, you can try:
for i in path/to/folder3/*.less; do . path/to/folder1/script.sh "$i" "path/to/folder2/$(basename "$i" .less).css"; done
Still, sorry for the brutal and perhaps inelegant solution.
You can do something like this:
ls sameName.*
or simply
ls same* > list_of_filenames_starting_with_SAME.txt
IMHO, the most concise, performant and intuitive solution is to use GNU Parallel. Your command becomes:
parallel command {} {.}.css ::: *.less
So, for example, let's say your "command" is ls -l, and you have these files in your directory:
Freddy Frog.css
Freddy Frog.less
a.css
a.less
then your command would be
parallel ls -l {} {.}.css ::: *.less
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 Freddy Frog.css
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 Freddy Frog.less
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 a.css
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 a.less
The benefits are, firstly, that it is a nice, concise syntax and a one-liner. Secondly, it'll run the commands in parallel using as many cores as your CPU(s) have, so it will be faster. If you do that, you may want the -k option to keep the outputs from the different commands in order.
If you need it to run across many folders in a hierarchy, you can pipe the filenames in like this:
find <somepleace> -name \*.less | parallel <command> {} {.}.css
To understand these last two points (piping in and order), look at this example:
seq 1 10 | parallel echo
6
7
8
5
4
9
3
2
1
10
And now with -k to keep the order:
seq 1 10 | parallel -k echo
1
2
3
4
5
6
7
8
9
10
If, for some reason, you want to run the jobs sequentially one after the other, just add the switch -j 1 to the parallel command to set the number of parallel jobs to 1.
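For example, to run the earlier ls -l example one file at a time:
parallel -j 1 ls -l {} {.}.css ::: *.less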
Try this out on your Linux machine; GNU Parallel is generally available from your distribution's package manager. On the Mac under OS X, the easiest way to install GNU Parallel is with homebrew; please ask before trying if you are not familiar with it.

Getting specific part of output in Linux

I have an output from a shell script like this:
aaa.sh output
Tue Mar 04 01:00:53 2014
Time drift detected. Please check VKTM trace file for more details.
Tue Mar 04 07:21:52 2014
Time drift detected. Please check VKTM trace file for more details.
Tue Mar 04 13:17:16 2014
Time drift detected. Please check VKTM trace file for more details.
Tue Mar 04 16:56:01 2014
SQL> ALTER DISKGROUP fra ADD DISK '/dev/rhdisk20'
Wed Mar 05 00:03:42 2014
Time drift detected. Please check VKTM trace file for more details.
Wed Mar 05 04:13:39 2014
Time drift detected. Please check VKTM trace file for more details.
Tue Mar 05 05:56:07 2014
GMON querying group 3 at 10 for pid 18, osid 27590856
GMON querying group 3 at 11 for pid 18, osid 27590856
I need to get the part beginning from today's date:
Wed Mar 05 00:03:42 2014
Time drift detected. Please check VKTM trace file for more details.
Wed Mar 05 04:13:39 2014
Time drift detected. Please check VKTM trace file for more details.
Tue Mar 05 05:56:07 2014
GMON querying group 3 at 10 for pid 18, osid 27590856
GMON querying group 3 at 11 for pid 18, osid 27590856
You can get the date in the correct format like this:
today=$(date +'%a %b %d')
and then search for it like this:
grep "$today" aaa.sh
If there are lines from today without a date, such as your GMON lines, you could add -A to say how many lines after the match you want and use a big number:
grep -A 999999 "$today" aaa.sh
If you are on AIX and there is no -A option, use sed like this:
today=$(date +'%a %b %d')
sed -n "/${today}/,$ p" aaa.sh
Explanation:
That stores today's date in the variable today in the format "Wed Mar 05". Then sed searches, without printing anything (-n), until it finds that date; from that point on, until the end of the file ($), it prints all lines (p).
I think I have an easy solution:
Get date to output the date in a format that matches the date in the file (check man date for formatting options). Since we don't want to match the hours/minutes/seconds, we have to call date twice: once for the weekday/month/day half and once for the year at the end of the full date. Between these two halves we match the hours/minutes/seconds with a .* regex.
Then do:
aaa.sh | grep -E "$(date +'%a %b %d').*$(date +%Y)" -A 999999
Though I am using the answer by NewWorld, it can be modified as follows:
Convert the output of date to match your file's date format,
and suppose you store that output in the variable D:
sed "1,/${D}/d" aaa.sh
That will output all lines after the matching date.
Example: suppose you get D="Wed Mar 05 00:03:42 2014";
the output will be as expected.
You can use
tail -n 7 filename
for getting the desired output. It will basically give you the last seven lines of the text file named filename.
To get the output starting from today's date, you can use:
k=$(date +"%a %b %d")
g=$(grep -nr "$k" in|cut -f1 -d:|head -1)
total=$(wc -l<in)
l=`expr $total - $g + 1`
tail -n$l in
Try
sed -n '/Wed Mar 05/,$p' aaa.sh
Here -n means "don't print anything unless specified to".
From the first line that matches the expression /Wed Mar 05/ until the end of the file ($), everything will be printed (p).

Unix date command converts wrongly in specific years

The command
mydate=$(date -d "90 days 19850101" +%Y%m%d%H%M%S)
yields 19850401000000. But:
mydate=$(date -d "90 days 19830101" +%Y%m%d%H%M%S)
yields 19830401010000.
How is it possible that in year 1983 one hour is added on 1 April (which is a result I don't want), while for the year 1985 the answer is correct?
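This looks like a daylight-saving artifact rather than a date bug: if midnight on 1983-04-01 does not exist in your local timezone (the clocks jump forward exactly then, which was the case in some zones in 1983 but not in 1985), GNU date normalizes the nonexistent time by adding an hour. A quick way to check, assuming GNU date, is to redo the calculation with a fixed UTC timezone:
TZ=UTC0 date -d "90 days 19830101" +%Y%m%d%H%M%S   # if this prints 19830401000000, local DST rules caused the extra hour
TZ=UTC0 date -d "90 days 19850101" +%Y%m%d%H%M%S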
