I'm trying to subtract 1 from a dotted version number using bash.
For example:
If I have the number 1.0.0.55, I would like to subtract 1 to get 1.0.0.54.
Here is what I currently have:
#!/bin/bash
LATEST_RELEASE="myproduct_1.0.0.55"
RELEASE_NUMBER=`echo $LATEST_RELEASE | sed 's/[^0-9.]//g'`
echo $RELEASE_NUMBER
#this only works with whole numbers (i.e. 10055)
PREVIOUS_RELEASE=$(($RELEASE_NUMBER - 1))
echo $PREVIOUS_RELEASE
#EOF
Any help would be appreciated!
Thanks!
With bash and its Parameter Expansion:
latest_release="myproduct_1.0.0.55"
first="${latest_release%.*}"
declare -i last # set integer flag
last="${latest_release##*.}"-1
previous_release="$first.$last"
echo "$previous_release"
Output:
myproduct_1.0.0.54
You'll need to isolate the part of that string that you want to subtract 1 from, since you can't subtract 1 from the string 1.0.0.55 as a whole.
Consider using awk here:
echo '1.0.0.55' | awk 'BEGIN{FS=OFS="."}{$4=$4-1}1'
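The same one-liner also copes with the prefixed string from the question, since the dot-separated field layout is unchanged:
echo 'myproduct_1.0.0.55' | awk 'BEGIN{FS=OFS="."}{$4=$4-1}1'
myproduct_1.0.0.54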
"myproduct_1.0.0.54" isn't a number, so we can't easily subtract 1 from it.
I'd use Parameter Expansion (${parameter##word}) to get the part after the last ., and use that as the number so we can subtract 1.
Then get everything before the last dot (${parameter%word}) to put the string back together:
#!/bin/bash
input="myproduct_1.0.0.55"
minus=$((${input##*.} - 1))
echo "${input%.*}.${minus}"
Will produce:
myproduct_1.0.0.54
How would you go about removing everything after x number of characters? For example, cut everything after 15 characters and add ... to it.
This is an example sentence should turn into This is an exam...
GNU coreutils' head can cut by character count rather than by lines:
head -c 15 <<<'This is an example sentence'
Consider, though, that head -c only deals with bytes, so this is incompatible with multi-byte characters like the UTF-8 umlaut ü.
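A quick way to see the problem, assuming a UTF-8 locale:
str='ääääääääää'       # ten characters, twenty bytes in UTF-8
head -c 15 <<<"$str"    # stops after 15 bytes, cutting the 8th ä in half and leaving a stray byte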
Bash built-in string indexing works:
str='This is an example sentence'
echo "${str:0:15}"
Output:
This is an exam
And finally something that works with ksh, dash, zsh…:
printf '%.15s\n' 'This is an example sentence'
Even programmatically:
n=15
printf '%.*s\n' $n 'This is an example sentence'
If you are using Bash, you can directly assign the output of printf to a variable and save a sub-shell call with:
trim_length=15
full_string='This is an example sentence'
printf -v trimmed_string '%.*s' $trim_length "$full_string"
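From there you can append the ellipsis yourself, for instance only when something was actually trimmed (a small sketch reusing the variables above):
if (( ${#full_string} > trim_length )); then
    trimmed_string+='...'
fi
echo "$trimmed_string"   # This is an exam...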
Use sed:
echo 'some long string value' | sed 's/\(.\{15\}\).*/\1.../'
Output:
some long strin...
This solution has the advantage that short strings do not get the ... tail added:
echo 'short string' | sed 's/\(.\{15\}\).*/\1.../'
Output:
short string
So it's a single solution that works for inputs of any length.
Use cut:
echo "This is an example sentence" | cut -c1-15
This is an exam
This selects characters 1-15; -c counts characters (to handle multi-byte input) rather than bytes, cf. cut(1):
-b, --bytes=LIST
select only these bytes
-c, --characters=LIST
select only these characters
Awk can also accomplish this:
$ echo 'some long string value' | awk '{print substr($0, 1, 15) "..."}'
some long strin...
In awk, $0 is the current line. substr($0, 1, 15) extracts characters 1 through 15 from $0. The trailing "..." appends three dots.
Todd actually has a good answer; however, I chose to change it up a little to make the function better and remove unnecessary parts :p
trim() {
    if (( "${#1}" > "$2" )); then
        echo "${1:0:$2}$3"
    else
        echo "$1"
    fi
}
In this version the text itself is the first argument, the maximum length is the second argument, and the suffix appended to longer strings is the third argument.
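For example, calling it with the text, the maximum length and the suffix should reproduce the earlier outputs:
trim 'This is an example sentence' 15 '...'   # This is an exam...
trim 'short string' 15 '...'                  # short string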
No need for variables :)
Using Bash Shell Expansions (No External Commands)
If you don't care about shell portability, you can do this entirely within Bash using a number of different shell expansions in the printf builtin. This avoids shelling out to external commands. For example:
trim () {
    local str ellipsis_utf8
    local -i maxlen
    # use explaining variables; avoid magic numbers
    str="$*"
    maxlen="15"
    ellipsis_utf8=$'\u2026'
    # only truncate $str when longer than $maxlen
    if (( "${#str}" > "$maxlen" )); then
        printf "%s%s\n" "${str:0:$maxlen}" "${ellipsis_utf8}"
    else
        printf "%s\n" "$str"
    fi
}
trim "This is an example sentence." # This is an exam…
trim "Short sentence." # Short sentence.
trim "-n Flag-like strings." # Flag-like strin…
trim "With interstitial -E flag." # With interstiti…
You can also loop through an entire file this way. Given a file containing the same sentences above (one per line), you can use the read builtin's default REPLY variable as follows:
while read; do
    trim "$REPLY"
done < example.txt
Whether or not this approach is faster or easier to read is debatable, but it's 100% Bash and executes without forks or subshells.
I'm currently trying to write a function in my bash script that does the following: Take in a file as an argument and calculate the sum of the numbers within that file. I must make use of a for loop and the bc command.
Example of values in the file (each value on their own line):
12
4
53
19
6
So here's what I have so far:
function sum_values() {
    for line in $1; do
        # not sure how to sum the values using bc
    done
}
Am I on the right track? I'm not sure how to implement the bc command in this situation.
You can do it easily without the need of a for loop.
paste -s -d+ numbers.txt | bc
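With the example file from the question saved as numbers.txt, paste first joins the lines with + and bc then evaluates the sum:
$ paste -s -d+ numbers.txt
12+4+53+19+6
$ paste -s -d+ numbers.txt | bc
94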
You are not on track. Why?
You are passing the whole file content in as a variable, which requires storing the whole file in memory. Not a problem with a toy example of a few numbers, but a big no-go in real life.
You are iterating over the content of a file using a for-in loop, assuming that you are iterating over the lines of that file. That is not what happens: word splitting is performed, so the for-in loop iterates over words, not lines.
As others mentioned, the shell is not the right tool for this. Such processing is very slow in the shell compared to, for example, awk. Furthermore, shell arithmetic cannot handle floating point, meaning you can only process integers. Use awk.
Correct would be (with bash, for educational purposes):
# Expects a filename
sum() {
    filename=${1}
    s=0
    while read -r line ; do
        # Arithmetic expansion
        s=$((s+line))
        # Or with bc
        # s=$(bc <<< "${s}+${line}")
        # With floating point support
        # s=$(bc -l <<< "${s}+${line}")
    done < "${filename}"
    echo "${s}"
}

sum filename
With awk:
awk '{s+=$0}END{print s}' filename
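Fed the example values from the question, that prints 94:
$ printf '%s\n' 12 4 53 19 6 | awk '{s+=$0}END{print s}'
94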
While awk (or other higher level language: perl, python, etc) would be better suited for this task, you are on the right track for doing it the naive way. Tip:
$ x=1
$ y=$(bc <<<"$x + 1")
$ echo $y
2
To do math in bash we surround an operation in $(( ... ))
Here are some examples:
echo $(( 5 + 5 ))        # prints 10
my_var=$((5 + 5))        # my_var is now 10
my_var=$(($my_var + 5))  # my_var is now 15
Solution to your problem:
function sum_values() {
    sum=0
    for i in $(<$1); do
        sum=$(($sum + $i))
    done
    echo $sum
}
Note that you could have also done $(cat $1) instead of $(<$1) in the solution above.
Edit: Replaced return $sum with echo $sum
I am trying to grep all the lines between 2 dates, where the dates are formatted like this:
date_time.strftime("%Y%m%d%H%M")
so say between [201211150821 - 201211150824]
I am trying to write a script which involves looking for lines between these dates:
cat <somepattern>*.log | grep [201211150821 - 201211150824]
I am trying to find out if something exists in Unix where I can look for a date range.
I can convert the dates in the logs to seconds since the epoch and then check them against [time1 - time2], but that means reading each line, extracting the time value, converting it, and so on.
Maybe something simple already exists, so that I can specify date/timestamp ranges the way I can provide a numeric range to grep?
Thanks!
P.S:
Also, I can pass in a pattern like 2012111511(27|28|29|[3-5][0-9]), but that's specific to the range I want, it's tedious to work out for different dates each time, and it gets trickier when building it at runtime.
Use awk. Assuming the first token in the line is the timestamp:
cat <somepattern>*.log | awk '
    # take the bounds from ARGV, then blank them out so awk reads stdin
    # instead of treating them as input files
    BEGIN { first=ARGV[1]; last=ARGV[2]; ARGV[1]=ARGV[2]=""; }
    $1 >= first && $1 <= last { print; }
' 201211150821 201211150824
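A more common way to pass the bounds is awk's -v option; a sketch along the same lines (file.log stands for whatever log files you are searching):
awk -v first=201211150821 -v last=201211150824 '$1 >= first && $1 <= last' file.log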
A Perl solution:
perl -wne 'print if m/(?<!\d)(20\d{10})(?!\d)/
    && $1 >= 201211150821 && $1 <= 201211150824'
(It finds the first twelve-digit integer that starts with 20, and prints the line if that integer is within your range of interest. If it doesn't find any such integer, it skips the line. You can tweak the regex to be more restrictive about valid months and hours and so on.)
You are looking for the somewhat obscure 'csplit' (context split) command:
csplit file '%201211150821%' '/201211150824/'
will split out all the lines from the first regexp up to (but not including) the second regexp into their own file. It is likely to be the fastest and shortest option if your files are sorted on the dates (you said you were grepping logs).
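With GNU csplit's default naming, the wanted range ends up in the first piece it writes (a rough sketch, assuming both timestamps actually occur in the file):
csplit file '%201211150821%' '/201211150824/'
cat xx00   # lines from the 201211150821 match up to, but not including, the 201211150824 match
rm xx01    # the remainder of the file, if you don't need it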
Bash + coreutils' expr only:
export cmp=201211150823
cat file.txt | while read line; do
    range=$(expr match "$line" '.*\[\(.*\)\].*')
    [ "x$range" = "x" ] && continue
    start=${range:0:12}
    end=${range:15:12}
    [ $start -le $cmp -a $end -ge $cmp ] && echo "match: $line"
done
cmp is your comparison value.
I wrote a specific tool for similar searches - http://code.google.com/p/bsearch/
In your example, the usage will be:
$ bsearch -p '$[YYYYMMDDhhmm]' -t 201211150821 -t 201211150824 logfile
I'm trying to retrieve a memory value from file, and compare it to reference value. But one thing at a time....
I've attempted using set/source/grep/substring-to-variable, but none of them actually worked. Then I found a way to do it using a for loop (see code).
The issue: I'm receiving the entire string from the file, but I can't manage to get rid of the last character in it.
#!/bin/bash
#source start_params.properties
#mem_val= "$default.default.minmaxmemory.main"
#mem_val= grep "default.default.minmaxmemory.main" start_params.properties
for mLine in $(grep 'default.default.minmaxmemory.main' start_params.properties)
do
    echo "$mLine"
done
echo "${mLine:4:5}" # didn't get rid of the last `m` in `-max4095m`
v1="max"
v2="m"
echo "$mLine" | sed -e "s/.*${v1}//;s/${v2}.*//" # this echoes the right value
The loop iterates twice:
First output: default.default.minmaxmemory.main=-min512m
Second output: -max4096m
Then the sed command output is 4096, but how can I change the last line in the code so that it'll store the value in a variable?
Thank you for your suggestions,
You could use grep to filter the max part and then another grep with -o to extract the numbers:
echo "$mLine" | grep 'max' | grep -o '[[:digit:]]*'
$ sed '/max[0-9]/!d; s/.*max//; s/m//' start_params.properties
4096
remove lines not matching max[0-9]
remove first part of line until max
remove final m
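To store that in a variable, as the question asks, wrap the command in command substitution (reusing the mem_val name from the question's commented-out attempt):
mem_val=$(sed '/max[0-9]/!d; s/.*max//; s/m//' start_params.properties)
echo "$mem_val"   # 4096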
Noob here, sorry if this is a repost. I am extracting a string from a file, and end up with a line, something like:
abcdefg:12345:67890:abcde:12345:abcde
Let's say it's in a variable named testString
The length of the values between the colons is not constant, but I want to save the number (a string is fine) between the 2nd and 3rd colons to a variable. So in this case I'd end up with my new variable, let's call it extractedNum, being 67890. I assume I have to use sed but have never used it and am trying to get my head around it...
Can anyone help? Cheers
On a side-note, I am using find to extract the entire line from a string, by searching for the 1st string of characters, in this case the abcdefg part.
Pure Bash using an array:
testString="abcdefg:12345:67890:abcde:12345:abcde"
IFS=':'
array=( $testString )
echo "value = ${array[2]}"
The output:
value = 67890
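A variant of the same idea that avoids changing IFS for the rest of the script is to scope it to a single read call:
testString="abcdefg:12345:67890:abcde:12345:abcde"
IFS=':' read -ra array <<< "$testString"
echo "value = ${array[2]}"   # value = 67890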
Here's another pure bash way. Works fine when your input is reasonably consistent and you don't need much flexibility in which section you pick out.
extractedNum="${testString#*:}" # Remove through first :
extractedNum="${extractedNum#*:}" # Remove through second :
extractedNum="${extractedNum%%:*}" # Remove from next : to end of string
You could also filter the file while reading it, in a while loop for example:
while IFS=' ' read -r col line ; do
    # col has the column you wanted, line has the whole line
    echo "$col"   # e.g. print just that column
done < <(sed -e 's/\([^:]*:\)\{2\}\([^:]*\).*/\2 &/' "yourfile")
The sed command is picking out the 2nd column and delimiting that value from the entire line with a space. If you don't need the entire line, just remove the space+& from the replacement and drop the line variable from the read. You can pick any column by changing the number in the \{2\} bit. (Put the command in double quotes if you want to use a variable there.)
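For example, to grab the field after the 4th colon instead, with the skip count in a variable (names here are only illustrative):
skip=4
while IFS=' ' read -r col line; do
    echo "$col"   # prints 12345 for the example string
done < <(sed -e "s/\([^:]*:\)\{$skip\}\([^:]*\).*/\2 &/" "yourfile")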
You can use cut for this kind of stuff. Here you go:
VAR=$(echo abcdefg:12345:67890:abcde:12345:abcde |cut -d":" -f3); echo $VAR
For the fun of it, this is how I would (not) do this with sed, but I'm sure there's easier ways. I guess that'd be a question of my own to future readers ;)
echo abcdefg:12345:67890:abcde:12345:abcde |sed -e "s/[^:]*:[^:]*:\([^:]*\):.*/\1/"
This should work for you; the key part is awk -F: '$0=$3'
NewVar=$(getTheLineSomehow...|awk -F: '$0=$3')
example:
kent$ newVar=$(echo "abcdefg:12345:67890:abcde:12345:abcde"|awk -F: '$0=$3')
kent$ echo $newVar
67890
If your text is stored in the variable testString, you could do:
kent$ echo $testString
abcdefg:12345:67890:abcde:12345:abcde
kent$ newVar=$(awk -F: '$0=$3' <<<"$testString")
kent$ echo $newVar
67890