bash: syntax error: operand expected (error token is "-")

I am trying to watch a log for certain messages within the last hour. The logs are formatted in this manner:
[1/18/19 9:59:13:791 CST] <Extra text here...>
I was having trouble with just doing a date comparison with awk, so my thought was to convert to epoch and compare. I am taking field 1 and cutting off the milliseconds from field 2, and removing [ and ] (though I guess I could get away with just removing [ for my purposes).
while read -r line || [[ -n "$line" ]]
do
log_date_str="$(awk '{gsub("\\[|\\]", "");print $1" "substr($2,1,length($2)-4)}' <<< "$line")"
log_date="$(date -d "$log_date_str" +%s)"
[[ $(($(date +%s)-$log_date)) -le 3600 ]] && echo "$line"
done < /path/to/file
When I try to run this against the log file though, I get this error:
date: invalid date `************ S'
-bash: 1547832909-: syntax error: operand expected (error token is "-")
Taking a single date, e.g. "1/18/19 9:59:13" works with the date conversion to epoch, but I'm not sure where to go with that error.
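For what it's worth, the arithmetic error can be reproduced on its own: when date -d fails, $log_date ends up empty, leaving a dangling - inside the $((...)) expansion:
log_date=''                        # what you get when date -d fails on non-date input
echo $(($(date +%s)-$log_date))    # e.g. -bash: 1547832909-: syntax error: operand expected (error token is "-")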

As the comments pointed out, I was getting data that was not a date since it was parsing the entire log. Grepping the input file for my desired text solved my problems, and I've changed to Charles' suggestion.
while read -r line || [[ -n "$line" ]]
do
log_date_str="$(awk '{gsub("\\[|\\]", "");print $1" "substr($2,1,length($2)-4)}' <<< "$line")"
log_date="$(date -d "$log_date_str" +%s)"
(( ( $(date +%s) - log_date ) <= 3600 )) && echo "$line"
done < <(grep ERROR_STRING /path/to/file.log)

How to convert a number read from a file from str to int?

I use bash, and I have to read a file that contains many lines.
Each line is composed of a single number. Then, for each line (number) I have to check if its value > 80.
And here's my problem: no matter what I try, I always get
integer expression expected
after the if condition. That's because the variable I use to hold the value of the line is a string.
Here's my original file
[root@michelep_centos2 ~]# cat temp1
0
0
0
98
0
0
79
0
81
In the bash script I have
#!/bin/bash
file=/root/temp1
while IFS= read -r line
do
if [ `expr $line /1` -gt 80 ]
then echo "hit>80"
fi
done <"$file"
And here expr returns an error, as $line is a string. I have tried using another variable
val=$(($line + 0))
if [ $val -gt 80 ]
Here the if condition returns "integer expression expected"
I have also used echo
val=$(echo "$((line /1))")
if [ $val -gt 80 ]
I get
syntax error: invalid arithmetic operator (error token is ...
from the echo command, and of course the if condition again returns
integer expression expected
First action: dos2unix -f inputfile
Second action:
From the input file it can be observed that it contains blank lines, and these will cause the if comparison to fail. You can add a check before your -gt 80 test to ensure that $line is not empty, using if [ ! -z $line ]; or better, you can check that the line is an integer, or do both combined with AND.
Example:
while read line;
do
if [[ $line =~ ^[0-9]+$ ]];then
if [ "$line" -gt 80 ];then
echo "$line is greater than 80"
fi
fi
done <input_file
(The regex test above already rejects empty lines, so a separate empty check such as [ ! -z $line ] is not needed.)
Alternatively, this can be done in one line using other tools like awk, though that may or may not fit your requirement.
awk 'NF && $0>80 {print $0, "is greater than 80"}' inputfile
Try this Shellcheck-clean pure Bash code:
#! /bin/bash -p
file=/root/temp1
while read -r line ; do
(( line > 80 )) && echo "$line > 80"
done <"$file"
The problem with the code in the question is that it doesn't handle empty lines. This code does handle empty lines because variables containing the empty string are treated as if they contained zero in arithmetic expressions (((...))).
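A quick demonstration of that behaviour:
line=''
(( line > 80 )) && echo 'greater' || echo 'not greater (an empty value is treated as 0)'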

Unix - Replace column value inside while loop

I have a comma-separated (sometimes tab-separated) text file as below:
parameters.txt:
STD,ORDER,ORDER_START.xml,/DML/SOL,Y
STD,INSTALL_BASE,INSTALL_START.xml,/DML/IB,Y
With the below code I try to loop through the file and do something:
while read line; do
if [[ $1 = "$(echo "$line" | cut -f 1)" ]] && [[ "$(echo "$line" | cut -f 5)" = "Y" ]] ; then
//do something...
if [[ $? -eq 0 ]] ; then
// code to replace the final flag
fi
fi
done < <text_file_path>
I want to update the last column of the file to N if the above operation is successful; however, the approaches below are not working for me:
sed 's/$f5/N/'
'$5=="Y",$5=N;{print}'
$(echo "$line" | awk '$5=N')
Update: a few considerations which I missed at first and which should give more clarity, apologies!
The parameters file may also contain lines whose last flag field is "N".
The final flag needs to be updated only if the "//do something" code has executed successfully.
After looping through all lines, i.e. after exiting the while loop, the flags for all rows are to be set to "Y".
Perhaps invert the operations and do the processing in awk.
$ awk -v f1="$1" 'BEGIN {FS=OFS=","}
f1==$1 && $5=="Y" { # do something
$5="N"}1' file
Not sure what the "do something" operation is; if you need to call another command/script from awk, that's possible as well.
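For instance, a hedged sketch using awk's system() call; do_something.sh is a made-up placeholder (pass whatever fields it needs, $3 here is just the xml file), and the flag is flipped only when it exits with status 0:
awk -v f1="$1" 'BEGIN {FS=OFS=","}
f1==$1 && $5=="Y" {
# do_something.sh is hypothetical; its exit status decides whether to flip the flag
if (system("./do_something.sh " $3) == 0) $5="N"
} 1' file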
with bash:
(
IFS=,
while read -ra fields; do
if [[ ${fields[0]} == "$1" ]] && [[ ${fields[4]} == "Y" ]]; then
# do something
fields[4]="N"
fi
echo "${fields[*]}"
done < file | sponge file
)
I run that in a subshell so the effects of altering IFS are localized.
This uses sponge to write back to the same file. You need the moreutils package to use it, otherwise use
done < file > tmp && mv tmp file
Perhaps a bit simpler and less bash-specific:
while IFS= read -r line; do
case $line in
"$1",*,Y)
# do something
line="${line%Y}N"
;;
esac
echo "$line"
done < file
To replace ,N at the end of the line($) with ,Y:
sed 's/,N$/,Y/' file
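And, per the updated requirements, a small hedged sketch of running that reset once the loop has processed every line (the in-place -i flag assumes GNU sed):
# after the while loop finishes, flip every trailing N back to Y in place
sed -i 's/,N$/,Y/' parameters.txt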

Read from two files simultaneously in bash when they may be missing trailing newlines

I have two text files that I'm trying to read through line by line at the same time. The files do not necessarily have the same number of lines, and the script should stop reading when it reaches the end of either file. I'd like to keep this as "pure" bash as possible. Most of the solutions I've found for doing this suggest something of the form:
while read -r f1 && read -r f2 <&3; do
echo "$f1"
echo "$f2"
done < file1 3<file2
However this fails if the last line of a file does not have a newline.
If I were only reading one file I would do something like:
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "$line"
done < file1
but when I try to extend this to reading multiple files by doing things like:
while IFS='' read -r f1 || [[ -n "$f1" ]] && read -r f2 <&3 || [[ -n "$f2" ]]; do
echo "$f1"
echo "$f2"
done < file1 3<file2
or
while IFS='' read -r f1 && read -r f2 <&3 || [[ -n "$f1" || -n "$f2" ]]; do
echo "$f1"
echo "$f2"
done < file1 3<file2
I can't seem to get the logic right and the loop either doesn't terminate or finishes without reading the last line.
I can get the desired behavior using:
while true; do
read -r f1 <&3 || if [[ -z "$f1" ]]; then break; fi;
read -r f2 <&4 || if [[ -z "$f2" ]]; then break; fi;
echo "$f1"
echo "$f2"
done 3<file1 4<file2
However this doesn't seem to match the normal (?)
while ... read ...; do
...
done
idiom that I see for reading files.
Is there a better way to simultaneously read from two files that might have differing numbers of lines and last lines that are not newline terminated?
What are the potential drawbacks of my way of reading the files?
You can override the precedence of the && and || operators by grouping them with { }:
while { IFS= read -r f1 || [[ -n "$f1" ]]; } && { IFS= read -r f2 <&3 || [[ -n "$f2" ]]; }; do
Some notes: You can't use ( ) for grouping in this case because that forces its contents to run in a subshell, and variables read in subshells aren't available in the parent shell. Also, you need a ; before each }, so the shell can tell it isn't just a parameter to the last command. Finally, you need IFS= (or equivalently IFS='') before each read command, since assignments given as a prefix to a command apply only to that one command.
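Spelled out with the question's file names, the whole loop would then look like this:
while { IFS= read -r f1 || [[ -n "$f1" ]]; } && { IFS= read -r f2 <&3 || [[ -n "$f2" ]]; }; do
echo "$f1"
echo "$f2"
done < file1 3<file2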

How do I read all dates between a start date and an end date in Linux

I want to read all dates between two given dates, and this range includes both the start date and the end date.
input_start_date="2013-09-05"
input_end_date="2013-09-10"
START_DATE=$(date -I -d "$input_start_date") || exit -1
END_DATE=$(date -I -d "$input_end_date") || exit -1
d="$START_DATE"
while [ "$d" <= "$END_DATE" ]; do
echo $d
d=$(date -I -d "$d + 1 day")
done
but when I run the above code I get the error below:
bash: = 2013-09-10: No such file or directory
Could someone help me to fix this issue?
Expected output
2013-09-05
2013-09-06
2013-09-07
2013-09-08
2013-09-09
2013-09-10
start=2013-09-05
end=2013-09-10
while [[ $start < $end || $start == $end ]]
do
printf '%s\n' "$start"; start=$(date -d "$start + 1 day" +"%Y-%m-%d")
done
or you can try this one
END=$(date -d "2013-09-10" +%s);
DATE=$(date -d "2013-09-05" +%s);
while [[ "$DATE" -le "$END" ]]; do date -d "#$DATE" +%F; let DATE+=86400; done
The idea is right, but you just got the operator wrong: <= does not work with date strings in bash; you needed an inequality operator, !=, in the condition.
while [ "$d" != "$enddate" ]; do
The <= operator works when used in arithmetic context in bash with the ((..)) operator.
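For example, a quick sketch that compares the dates as epoch seconds, where <= is valid inside ((...)) (this assumes GNU date and ignores DST by stepping in 86400-second days):
d=$(date -d "$input_start_date" +%s)
end=$(date -d "$input_end_date" +%s)
while (( d <= end )); do
date -d "@$d" +%F
(( d += 86400 ))
done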
A little something in awk (the range is changed a bit since there was no test data, just the expected output):
$ awk '$0>="2013-09-06" && $0<="2013-09-09"' file
2013-09-06
2013-09-07
2013-09-08
2013-09-09
You kind of need a do-while loop here, which bash does not provide. How about
date="$start_date"
while true; do
echo "$date"
[[ $date = "$end_date" ]] && break
date=$(date -d "$date + 1 day" "+%F")
done
Don't use ALL_CAPS_VAR_NAMES -- too easy to mistakenly overwrite shell/system vars.
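A tiny illustration of the risk (in a throwaway script, not your login shell):
#!/bin/bash
PATH="2013-09-10"   # oops: an all-caps loop variable just clobbered the real PATH
date -I             # now fails with "date: command not found"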

Bash scripting: why is the last line missing from this file append?

I'm writing a bash script to read a set of files line by line and perform some edits. To begin with, I'm simply trying to move the files to backup locations and write them out as-is, to test the script is working. However, it is failing to copy the last line of each file. Here is the snippet:
while IFS= read -r line
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
I obviously want to preserve whitespace when I copy the files, which is why I have set the IFS to null. I can see from the output that the last line of each file is being read, but it never appears in the output.
I've also tried an alternative variation, which does print the last line, but adds a newline to it:
while IFS= read -r line || [ -n "$line" ]
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
What is the best way to do this read-write operation, to write the files exactly as they are, with the correct whitespace and no newlines added?
The command that is adding the line feed (LF) is not the read command, but the echo command. read does not return the line with the delimiter still attached to it; rather, it strips the delimiter off (that is, it strips it off if it was present in the line; in other words, if it just read a complete line).
So, to solve the problem, you have to use echo -n to avoid adding back the delimiter, but only when you have an incomplete line.
Secondly, I've found that when providing read with a NAME (in your case line), it trims leading and trailing whitespace, which I don't think you want. But this can be solved by not providing a NAME at all, and using the default return variable REPLY, which will preserve all whitespace.
So, this should work:
#!/bin/bash
inFile=in;
outFile=out;
rm -f "$outFile";
rc=0;
while [[ $rc -eq 0 ]]; do
read -r;
rc=$?;
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
echo "$REPLY" >>"$outFile";
elif [[ -n "$REPLY" ]]; then ## incomplete line
echo "incomplete=\"$REPLY\"";
echo -n "$REPLY" >>"$outFile";
fi;
done <"$inFile";
exit 0;
Edit: Wow! Three excellent suggestions from Charles Duffy, here's an updated script:
#!/bin/bash
inFile=in;
outFile=out;
while { read -r; rc=$?; [[ $rc -eq 0 || -n "$REPLY" ]]; }; do
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
printf '%s\n' "$REPLY" >&3;
else ## incomplete line
echo "incomplete=\"$REPLY\"";
printf '%s' "$REPLY" >&3;
fi;
done <"$inFile" 3>"$outFile";
exit 0;
After review, I wonder if:
{
line=
while IFS= read -r line
do
echo "$line"
line=
done
echo -n "$line"
} <$INFILE >$OUTFILE
is not just enough...
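A quick way to sanity-check that idea (the file names in and out are just placeholders):
printf 'a\nb\nc' > in   # last line deliberately has no trailing newline
{ line=; while IFS= read -r line; do echo "$line"; line=; done; echo -n "$line"; } < in > out
cmp in out && echo "in and out are byte-identical"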
Here is my initial proposal:
#!/bin/bash
INFILE=$1
if [[ -z $INFILE ]]
then
echo "[ERROR] missing input file" >&2
exit 2
fi
OUTFILE=$INFILE.processed
# a way to know if last line is complete or not :
lastline=$(tail -n 1 "$INFILE" | wc -l)
if [[ $lastline == 0 ]]
then
echo "[WARNING] last line is incomplete -" >&2
fi
# we add a newline ANYWAY; if the file was already complete, the end of file will just be seen as an extra empty line.
echo | cat "$INFILE" - | {
first=1
while IFS= read -r line
do
if [[ $first == 1 ]]
then
echo "First Line is ***$line***" >&2
first=0
else
echo "Next Line is ***$line***" >&2
echo
fi
echo -n "$line"
done
} > $OUTFILE
if diff $OUTFILE $INFILE
then
echo "[OK]"
exit 0
else
echo "[KO] processed file differs from input"
exit 1
fi
Idea is to always add a newline at the end of file and to print newlines only BETWEEN lines that are read.
This should work for nearly all text files, provided they do not contain a 0 byte, i.e. the \0 character, in which case that byte will be lost.
The initial test can be used to decide whether an incomplete text file is acceptable or not.
Add a newline only when the line actually ended with one. Since read strips the delimiter, you cannot test the line's content for it; check read's exit status instead (it is non-zero when the final line has no trailing newline). Like this:
while IFS= read -r line; rc=$?; [[ $rc -eq 0 || -n $line ]]
do
echo "Line is ***$line***";
printf '%s' "$line" >&3;
if [[ $rc -eq 0 ]]   ## read saw a trailing newline
then
printf '\n' >&3;
fi
done < "$POM.backup" 3>"$POM"
