Problem: while loop comparing the current date to dates in a file - linux

I want to compare the current date to the dates present in a file [every 30 seconds and in the background].
I don't know if I need an infinite loop or a while read line loop...
Every time the current date matches a date in the file, I want to print the matched line in xmessage.
(I add a date using ./script.sh 0138 test)
The date, e.g. 0138, is in the form +%H%M.
I already tried this, but my loop works only once, for the first date in the file. Only one xmessage window is opened... How can I make this work for each date added?
while true
do
    currentDate="$(date +%H%M)"
    futureDate="$(cat test.txt | cut -d ' ' -f 1 | grep "${currentDate}" | head -n 1)"
    printLine="$(cat test.txt | grep "${currentDate}")"
    if test "${currentDate}" = "${futureDate}"
    then
        xmessage "${printLine}" -buttons Ok -geometry 400x100 -center
    fi
    sleep 30
done &
Example file: test.txt
0142 test xmessage
0143 test other xmessage
0144 other test
0145 other xmessage !
Thanks for helping me!

Your main problem is that execution halts until you "confirm" the message by clicking OK.
You can work around that by detaching xmessage (put an ampersand at the end of the line):
xmessage -buttons Ok -geometry 400x100 -center "${printLine}" &
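A minimal sketch of the whole loop with xmessage detached (same test.txt format as above; the -m 1 flag and the leading ^ anchor are my additions):
while true
do
    currentDate="$(date +%H%M)"
    # match the whole line once; -m 1 stops after the first hit
    printLine="$(grep -m 1 "^${currentDate} " test.txt)"
    if [ -n "${printLine}" ]
    then
        # the & detaches xmessage so the loop keeps running
        xmessage -buttons Ok -geometry 400x100 -center "${printLine}" &
    fi
    sleep 30
done &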


How to update text of a specific line on terminal?

I'm writing a bash script, but I have a problem: how do I update the text of a specific line?
I've tried the clear command, but clear refreshes all lines on the terminal and I want to refresh only a specific line. Like below:
===============
TIME: 20:35
===============
I want to refresh only the "20:35" part, without the "=====" and "TIME:" parts.
1)
while true
do
    clear
    echo "
===============
TIME: $(date +%H:%M)
==============="
done
2)
function TIME_RE(){
    while true
    do
        printf "TIME: $(date +%Y.%m.%d) ($(date +%H:%M:%S)) \r"
    done
}
echo "
===============
TIME: $(TIME_RE)
==============="
I expected the second one to refresh only the "$(TIME_RE)" part, but it displayed nothing.
You can use ANSI escape codes to move the cursor position, or to save and restore the cursor position. (As for your second attempt: the command substitution $(TIME_RE) has to finish before the outer echo can print anything, and TIME_RE never returns, so nothing is displayed.) For example, using the cursor-up sequence:
while true; do
    echo -e "
===============
TIME: $(date +%Y.%m.%d) ($(date +%H:%M:%S))
===============
\e[5A"
    sleep 1
done
Notes:
you need echo's -e option to print escape sequences.
"\e[5A" is the sequence to move the cursor up 5 lines.
add a delay like "sleep 1" to avoid burdening the system.
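An alternative sketch using the save/restore cursor escapes mentioned above (\e7 saves the cursor position, \e8 restores it; assumes a VT100-compatible terminal):
echo "==============="
echo "TIME:"
echo "==============="
while true; do
    # save cursor, jump 2 lines up to the TIME row, overwrite it, restore
    printf '\e7\e[2A\rTIME: %s\e8' "$(date +%H:%M:%S)"
    sleep 1
done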

Assign variable inside for loop

I'm trying to do a simple for loop and I ran into a problem assigning a variable inside the loop. I'll try to explain with an example:
DIFF=(`date +%s -d 20120203`-`date +%s -d 20120126`)/86400 # Difference between 2 dates.
echo $DIFF
end_date_change='20120126'
for i in {1..$DIFF} # Difference between 2 dates inside loop
do
    end_date_change=$(end_date_change -d "-1 day") # Now, after every loop I would like to decrease date
    echo $end_date_change
done
So the output should look like:
20120126
20120127
20120128
20120201
20120202
20120203
Anyone? Thanks in advance
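A minimal working sketch (assuming GNU date; note that brace expansion such as {1..$DIFF} happens before variable expansion, so a C-style loop is used instead):
#!/bin/bash
start_date='20120126'
end_date='20120203'

# difference between the two dates in whole days
diff=$(( ($(date +%s -d "$end_date") - $(date +%s -d "$start_date")) / 86400 ))
echo "$diff"

d="$start_date"
echo "$d"
for (( i = 0; i < diff; i++ ))
do
    # let GNU date do the calendar arithmetic
    d="$(date +%Y%m%d -d "$d + 1 day")"
    echo "$d"
done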

How can you calculate the time span between two time entries in a file using a shell script?

In a Linux script: I have a file that has two time entries for each message within the file, a 'received time' and a 'source time'. There are hundreds of messages within the file.
I want to calculate the elapsed time between the two times.
2014-07-16T18:40:48Z (received time)
2014-07-16T18:38:27Z (source time)
The source time is 3 lines after the received time, not that it matters.
Info on the input data:
The input lines are as follows:
TimeStamp: 2014-07-16T18:40:48Z
Two lines later comes a line containing a bunch of messages; within that line, appearing multiple times, is:
sourceTimeStamp="2014-07-16T18:38:27Z"
If you have GNU's date (not busybox's), you can get the difference in seconds with:
#!/bin/bash
A=$(date -d '2014-07-16T18:40:48Z' '+%s')
B=$(date -d '2014-07-16T18:38:27Z' '+%s')
echo "$(( A - B )) seconds"
For busybox's date and ash (probably any modern version; BusyBox v1.21.0 here):
#!/bin/ash
A=$(busybox date -d '2014-07-16 18:40:48' '+%s')
B=$(busybox date -d '2014-07-16 18:38:27' '+%s')
echo "$(( A - B )) seconds"
You should be able to use date like this (e.g.)
date +%s --date="2014-07-16T18:40:48Z"
to convert both timestamps into Unix timestamps. Getting the time difference between them is then reduced to a simple subtraction.
Does this help?
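For example, with the two timestamps from the question:
A=$(date +%s --date="2014-07-16T18:40:48Z")
B=$(date +%s --date="2014-07-16T18:38:27Z")
echo $(( A - B ))   # prints 141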
I would use awk. The following script searches for the lines of interest, converts the time values into UNIX timestamps, and saves them in the start and end variables. At the end of the script the difference is calculated and printed:
timediff.awk:
/received time/ {
    "date -d " $1 " +%s" | getline end
}
/source time/ {
    "date -d " $1 " +%s" | getline start
    exit
}
END {
    printf "%s seconds in between", end - start
}
Execute it like this:
awk -f timediff.awk log.file
Output:
141 seconds in between
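If the real file uses the TimeStamp:/sourceTimeStamp= lines described in the question rather than the annotated sample, a variant sketch along the same lines (field positions are assumptions):
# timediff2.awk - hypothetical variant for lines like:
#   TimeStamp: 2014-07-16T18:40:48Z
#   ... sourceTimeStamp="2014-07-16T18:38:27Z" ...
/^TimeStamp:/ {
    "date -d '" $2 "' +%s" | getline end
}
/sourceTimeStamp=/ {
    if (match($0, /sourceTimeStamp="[^"]*"/)) {
        # strip the 17-character prefix sourceTimeStamp=" and the closing quote
        ts = substr($0, RSTART + 17, RLENGTH - 18)
        "date -d '" ts "' +%s" | getline start
        exit
    }
}
END {
    printf "%s seconds in between", end - start
}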

Bash: Split stdout from multiple concurrent commands into columns

I am running multiple commands in a bash script using single ampersands like so:
commandA & commandB & commandC
They each have their own stdout output, but it all gets mixed together and floods the console in an incoherent mess.
I'm wondering if there is an easy way to pipe their outputs into their own columns... using the column command or something similar, i.e. something like:
commandA | column -1 & commandB | column -2 & commandC | column -3
New to this kind of thing, but from initial digging it seems something like pr might be the ticket? Or the column command...?
Regrettably answering my own question. None of the supplied solutions were exactly what I was looking for, so I developed my own command-line utility: multiview. Maybe others will benefit?
It works by piping processes' stdout/stderr to a command interface and then launching a "viewer" to see their outputs in columns:
fooProcess | multiview -s & \
barProcess | multiview -s & \
bazProcess | multiview -s & \
multiview
This will display a neatly organized column view of their outputs. You can name each process as well by adding a string after the -s flag:
fooProcess | multiview -s "foo" & \
barProcess | multiview -s "bar" & \
bazProcess | multiview -s "baz" & \
multiview
There are a few other options, but that's the gist of it.
Hope this helps!
pr is a solution, but not a perfect one. Consider this, which uses process substitution (<(command) syntax):
pr -m -t <(while true; do echo 12; sleep 1; done) \
<(while true; do echo 34; sleep 2; done)
This produces a marching column of the following:
12 34
12 34
12 34
12 34
Though this trivially provides the output you want, the columns do not advance individually—they advance together when all files have provided the same output. This is tricky, because in theory the first column should produce twice as much output as the second one.
You may want to investigate invoking tmux or screen in a tiled mode to allow the columns to scroll separately. A terminal multiplexer will provide the necessary machinery to buffer output and scroll it independently, which is important when showing output side-by-side without allowing excessive output from commandB to scroll commandA and commandC off-screen. Remember that scrolling each column separately will require a lot of screen redrawing, and the only way to avoid screen redraws is to have all three columns produce output simultaneously.
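For instance, a single tmux invocation along these lines (commandA/B/C stand for your real commands):
tmux new-session 'commandA' \; \
     split-window -h 'commandB' \; \
     split-window -h 'commandC' \; \
     select-layout even-horizontal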
As a last-ditch solution, consider piping each output to a command that indents each column by a different number of characters:
this is something that commandA outputs and is
                    and here is something that commandB outputs
interleaved with the other output, but visually
you might have an easier time distinguishing one
                                        here is something that commandC outputs
                                        which is also interleaved with the others
from the other
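A sketch of that idea using sed to shift each command's lines (indent widths chosen arbitrarily):
# run the three commands concurrently, shifting B and C to the right
commandA &
commandB | sed 's/^/                    /' &
commandC | sed 's/^/                                        /' &
wait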
This script prints three vertical sections and a timer, each section containing the output from a single script.
Comment on anything you don't understand and I'll add explanations to my answer as needed.
Hope this helps :)
#!/bin/bash
# Script by jidder
count=0
Elapsed=0

control_c()
{
    tput rmcup
    rm tail.tmp
    rm tail2.tmp
    rm tail3.tmp
    stty sane
    exit 1
}

Draw()
{
    tput clear
    echo "SCRIPT 1 Elapsed time =$Elapsed seconds"
    echo "------------------------------------------------------------------------------------------------------------------------------------------------------"
    tail -n10 tail.tmp
    tput cup 25 0
    echo "Script 2 "
    echo "------------------------------------------------------------------------------------------------------------------------------------------------------"
    tail -n10 tail2.tmp
    tput cup 50 0
    echo "Script 3 "
    echo "------------------------------------------------------------------------------------------------------------------------------------------------------"
    tail -n10 tail3.tmp
}

Timer()
{
    if [[ $count -eq 10 ]]; then
        Draw
        ((Elapsed = Elapsed + 1))
        count=0
    fi
}

main()
{
    stty -icanon time 0 min 0
    tput smcup
    Draw
    count=0
    keypress=''
    MYSCRIPT1.sh > tail.tmp &
    MYSCRIPT2.sh > tail2.tmp &
    MYSCRIPT3.sh > tail3.tmp &
    while [ "$keypress" != "q" ]; do
        sleep 0.1
        read keypress
        (( count = count + 2 ))
        Timer
    done
    stty sane
    tput rmcup
    rm tail.tmp
    rm tail2.tmp
    rm tail3.tmp
    echo "Thanks for using this script."
    exit 0
}

# set the trap before starting, so Ctrl-C actually triggers the cleanup
trap control_c SIGINT
main

How to delete older contents of file that is being continuously written to?

I have a simulation running and expect it to go on for at least 10 more hours. I have directed the console output to a .txt file using
(binary) > out.txt
This out.txt is becoming too huge. I do not need most of the contents of this file. How can I delete the older parts of this file without harming the writing process? The contents that will be written towards the end of the simulation are important to me.
As Carl mentioned in the comments, you cannot really do this on an actively written log file. However, if the initial data is not relevant to you, you can do the following (though beware that you will lose all data written so far):
> out.txt
For the future, you can use a utility called logrotate(8).
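A hedged sketch of a logrotate config for this case (the path and size limit are assumptions; copytruncate matters because the simulation keeps its file descriptor open):
# /etc/logrotate.d/simulation  (hypothetical file)
/path/to/out.txt {
    size 100M
    rotate 1
    copytruncate
    missingok
}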
You could use tail to only store the end of the file:
# Say you want to save the last 100 lines
your_binary | tail -n 100 > out.txt
This assumes that the output ends at some point.
Saw your comments - the file is 10 GB now... Try using sed -i to reduce the size so that it will work with the other tools; if you want to completely erase it, use :> logfile.
Tools can only cope with a file as big as their buffer; otherwise the data has to be streamed. Something like split used to fail on files over 4 GB; I don't know if that has since been fixed - it's been a long time since I had to work with a file that big.
Two suggestions:
1
There were a few methods I could think of, like using split, but almost all involved creating a separate file from the log (a reduced version) and renaming that, or redirecting to that.
Use split to break the log into smaller logs (split -l 100 ...) and just redirect the program output to the most recent log found using ls -1.
This seems to work fine.
2
I also tried a second method: editing/truncating the top 10 lines in the same file.
Kaizen ~/shell_prac
$ cat zcntr.sh
## test truncate a log file
##set -xv
:> zcntr.log ;
## fxn
cntr_log()
{
    limit=$1 ;
    start=0 ;
    while [ $start -lt $limit ]
    do
        echo "count is $start" >> zcntr.log ; ## generate a continuous log
        start=$(($start + 1));
        sleep 1;
        cnt=$(($start % 10)) ;
        if [ $cnt -eq 0 ] ## check to truncate the top 10 lines using sed
        then
            echo "truncate at $start " >> zcntr.log ;
            sed -i "1,10d" zcntr.log ;
        fi
    done ;
}
## main cntrlr
echo "enter a limit" ;
read lmt ;
cntr_log $lmt ;
This seems to work; I tested it with a counter printing up to 25.
Output:
Kaizen ~/shell_prac
$ cat zcntr.log
count is 19
truncate at 20
count is 20
count is 21
count is 22
count is 23
count is 24
I think either of the two will help.
Let me know if there is something else on your mind!
Truncate the file with cat:
cat /dev/null > out.txt
