I want to add a specific value to the value of a variable. This is my script:
x 55;
y 106;
Now I want to change the value of x from 55 to 60.
In general, how can we apply a math expression to the values of variables in a script?
Others might come up with something simpler (e.g. sed, awk, ...), but this quick and dirty script works. It assumes your input file is exactly like you posted:
this is my script.
x 55;
y 106;
And the code:
#!/bin/bash
#
if [ $# -ne 1 ]
then
    echo "ERROR: usage $0 <file>"
    exit 1
else
    inputfile=$1
    if [ ! -f "$inputfile" ]
    then
        echo "ERROR: could not find $inputfile"
        exit 1
    fi
fi

tempfile="/tmp/tempfile.$$"
>"$tempfile"

while IFS= read -r line
do
    firstelement=$(echo "$line" | awk '{print $1}')
    if [ "$firstelement" == 'x' ]
    then
        secondelement=$(echo "$line" | awk '{print $2}' | cut -d';' -f1)
        (( secondelement = secondelement + 5 ))
        echo "$firstelement $secondelement;" >>"$tempfile"
    else
        echo "$line" >>"$tempfile"
    fi
done <"$inputfile"

mv "$tempfile" "$inputfile"
So it reads the input file line by line. If the line starts with the variable x, it takes the number that follows, adds 5 to it, and writes the result to a temp file. If the line does not start with x, it writes the line, unchanged, to the temp file. Lastly, the temp file overwrites the input file.
Copy this code into a file, make it executable, and run it with the input file as an argument.
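For comparison, here is a minimal awk sketch of the simpler route mentioned above. It assumes the file looks exactly like the sample and that only the line starting with x should change:

# a sketch, assuming lines look like "x 55;" -- adds 5 to the value on the x line
awk '$1 == "x" { sub(/;$/, "", $2); $2 = ($2 + 5) ";" } { print }' inputfile > tempfile && mv tempfile inputfile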
Is it possible to write a script that reads a file containing numbers (one per line) and writes their maximum, minimum, and sum? If the file is empty, it should print an appropriate message. The name of the file is given as the parameter of the script. I managed to create the script below, but there are 2 errors:
./4.3: line 20: syntax error near unexpected token `done'
./4.3: line 20: `done echo "Max: $max" '
Is it possible to add multiple files as parameters?
lines=`cat "$1" | wc -l`
if [ $lines -eq 0 ];
then echo "File $1 is empty!"
exit fi min=`cat "$1" | head -n 1`
max=$min sum=0
while [ $lines -gt 0 ];
do num=`cat "$1" |
tail -n $lines`
if [ $num -gt $max ];
then max=$num
elif [ $num -lt $min ];
then min=$num fi
sum=$[ $sum + $num] lines=$[ $lines - 1 ]
done echo "Max: $max"
echo "Min: number $min"
echo "Sum: $sum"
Pretty compelling use of GNU datamash here:
read sum min max < <( datamash sum 1 min 1 max 1 < "$1" )
[[ -z $sum ]] && echo "file is empty"
echo "sum=$sum; min=$min; max=$max"
Or, sort and awk:
sort -n "$1" | awk '
NR == 1 { min = $1 }
{ sum += $1 }
END {
if (NR == 0) {
print "file is empty"
} else {
print "min=" min
print "max=" $1
print "sum=" sum
}
}
'
Here's how I'd fix your original attempt, preserving as much of the intent as possible:
#!/usr/bin/env bash
lines=$(wc -l < "$1")
if [ "$lines" -eq 0 ]; then
    echo "File $1 is empty!"
    exit
fi
min=$(head -n 1 "$1")
max=$min
sum=0
while [ "$lines" -gt 0 ]; do
    num=$(tail -n "$lines" "$1" | head -n 1)
    if [ "$num" -gt "$max" ]; then
        max=$num
    elif [ "$num" -lt "$min" ]; then
        min=$num
    fi
    sum=$(( sum + num ))
    lines=$(( lines - 1 ))
done
echo "Max: $max"
echo "Min: number $min"
echo "Sum: $sum"
The dealbreakers were the missing linebreaks (you can't put exit and fi on a single line without a ;). Other changes are good practice (quoting expansions, avoiding the useless use of cat) but wouldn't have prevented your script from working, and others are purely cosmetic (indentation, no backticks).
The overall approach is a massive antipattern, though: you read the whole file for each line being processed.
Here's how I would do it instead:
#!/usr/bin/env bash
for fname in "$@"; do
    [[ -s $fname ]] || { echo "file $fname is empty" >&2; continue; }
    IFS= read -r min < "$fname"
    max=$min
    sum=0
    while IFS= read -r num; do
        (( sum += num ))
        (( max = num > max ? num : max ))
        (( min = num < min ? num : min ))
    done < "$fname"
    printf '%s\n' "$fname:" " min: $min" " max: $max" " sum: $sum"
done
This uses the proper way to loop over an input file and utilizes the ternary operator in the arithmetic context.
The outermost for loop loops over all arguments.
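For example, assuming the script is saved as minmaxsum (a name made up here for illustration) and numbers.txt contains the lines 3, 1 and 2, a run would look like:

$ ./minmaxsum numbers.txt
numbers.txt:
 min: 1
 max: 3
 sum: 6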
You can do the whole thing in one while loop inside a shell script. Here's the bash version:
s=0
while read x; do
    if [ ! $mi ]; then
        mi=$x
    elif [ $mi -gt $x ]; then
        mi=$x
    fi
    if [ ! $ma ]; then
        ma=$x
    elif [ $ma -lt $x ]; then
        ma=$x
    fi
    s=$((s+x))
done
if [ ! $ma ]; then
    echo "File is empty."
else
    echo "s=$s, mi=$mi, ma=$ma"
fi
Save that script into a file, and then you can use pipes to send as many input files into it as you wish, like so (assuming the script is called "mysum"):
cat file1 file2 file3 | mysum
or for a single file
mysum < file1
(Make sure the script is executable and on your $PATH; otherwise use "./mysum" for a script in the current directory, or "bash mysum" if it isn't executable.)
The script assumes that the numbers are one per line and that there's nothing else on the line. It gives a message if the input is empty.
How does it work? The "read x" will take input from stdin line-by-line. If the file is empty, the while loop will never be run, and thus variables mi and ma won't be set. So we use this at the end to trigger the appropriate message. Otherwise the loop checks first if the mi and ma variables exist. If they don't, they are initialised with the first x. Otherwise it is checked if the next x requires updating the mi and ma found thus far.
Note that this trick ensures that you can feed in any sequence of numbers. Otherwise you would have to initialise mi with something that's definitely too large and ma with something that's definitely too small, which works until you encounter a strange number list.
Note further, that this works for integers only. If you need to work with floats, then you need to use some other tool than the shell, e.g. awk.
Just for fun, here's the awk version, a one-liner, use as-is or in a script, and it will work with floats, too:
cat file1 file2 file3 | awk 'BEGIN{s=0}; {s+=$1; if(length(mi)==0)mi=$1; if(length(ma)==0)ma=$1; if(mi>$1)mi=$1; if(ma<$1)ma=$1} END{print s, mi, ma}'
or for one file:
awk 'BEGIN{s=0}; {s+=$1; if(length(mi)==0)mi=$1; if(length(ma)==0)ma=$1; if(mi>$1)mi=$1; if(ma<$1)ma=$1} END{print s, mi, ma}' < file1
Downside: it doesn't give a decent error message for an empty file.
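If that matters, a small tweak in the END block handles the empty case (a sketch; NR is still 0 in END when no input was read):

awk 'BEGIN{s=0}; {s+=$1; if(length(mi)==0)mi=$1; if(length(ma)==0)ma=$1; if(mi>$1)mi=$1; if(ma<$1)ma=$1} END{if(NR==0) print "file is empty"; else print s, mi, ma}' < file1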
a script that reads the file containing numbers (one per line) and writes their maximum, minimum and sum
Bash solution using sort:
<file sort -n | {
    read -r sum
    echo "Min is $sum"
    max=$sum
    while read -r num; do
        max=$num
        sum=$((sum+num))
    done
    echo "Max is $max"
    echo "Sum is $sum"
}
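Since the input arrives sorted numerically, the first line read is the minimum and the value held by max after the loop is the maximum, so no comparisons are needed; the max variable is tracked explicitly because a failing read at end-of-file leaves its variable empty.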
Let's speed things up with some smart parsing using tee and tr, calculating the sum with bc, if we don't mind using stderr for output. (We could also set up a little FIFO and synchronize the tee output.) Anyway:
{
    <file sort -n |
        tee >(echo "Min is $(head -n1)" >&2) >(echo "Max is $(tail -n1)" >&2) |
        tr '\n' '+'
    echo 0
} | bc | sed 's/^/Sum is /'
And there is always datamash. The following will output 3 numbers: the sum, min, and max:
<file datamash sum 1 min 1 max 1
You can try a shell loop and dc:
while [ $# -gt 0 ] ; do
dc -f - -e '
['"$1"' is empty]sa
[la p q ]sZ
z 0 =Z
# if file is empty
dd sb sc
# populate max and min with the first value
[d sb]sY
[d lb <Y ]sM
# if max keep it
[d sc]sX
[d lc >X ]sN
# if min keep it
[lM x lN x ld + sd z 0 <B]sB
lB x
# on each line look for max, min and keep the sum
[max for '"$1"' = ] n lb p
[min for '"$1"' = ] n lc p
[sum for '"$1"' = ] n ld p
# print summary at end of each file
' <"$1"
shift
done
I have a file in Linux. The file has ranges of numbers, like this:
100,500
501,1000
1001,2000
And I have another file with a word and a number on each line:
a,105
b,110
c,550
d,670
e,900
f,80
h,1500
Then I need to filter the second file and generate files according to the ranges in the first file. I need 3 files:
<<100,500>>
a,105
b,110
<<501,1000>>
c,550
d,670
e,900
<<1001,2000>>
h,1500
With a bash script I can read the first file like this:
while read line
do
    init=`echo $line | awk 'BEGIN {FS=","}{print $1}'`
    end=`echo $line | awk 'BEGIN {FS=","}{print $2}'`
done <rangos.txt
And I have the ranges, but I don't know how I can divide the second file according to the ranges of the first file.
Who can help me?
Thanks
Here is a sample parser in bash:
#!/bin/bash
declare file1=file1
declare file2=file2

while read line; do
    if [ -z "${line}" ]; then continue; fi   # skip empty lines
    declare -i left=${line%%,*}
    declare -i right=${line##*,}
    echo "<<$left,$right>>"
    OIFS=$IFS
    IFS=$'\n'                                # split the word list on newlines
    for word in $(<$file2); do
        declare letter=${word%%,*}
        declare -i value=${word##*,}
        if [[ $left -le $value && $value -le $right ]]; then
            echo "$letter,$value"
        fi
    done
    IFS=$OIFS
done < "${file1}"
Tested under Debian Wheezy with bash 4, it prints:
$ ./parser.sh
<<100,500>>
a,105
b,110
<<501,1000>>
c,550
d,670
e,900
<<1001,2000>>
h,1500
However, in light of your comment about Perl or other languages, you should do it in the language you or your team are most familiar with.
I assume that the two files are not sorted and that the second file has a word and a number per line.
In this case you can do something like this:
> out_file.txt
while read line; do
    init=${line%,*}
    end=${line#*,}
    echo "<<$init,$end>>" >> out_file.txt
    while read wnum; do
        theNum=${wnum#*,}
        if [ $theNum -le $end ] && [ $theNum -ge $init ]; then
            echo "$wnum" >> out_file.txt
        fi
    done < word_and_num.txt
done <rangos.txt
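Note that the inner loop re-reads word_and_num.txt from disk once for every range in rangos.txt; that's fine for small files, and the awk version below avoids it by holding the data in memory.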
It will be a lot easier with awk:
BEGIN { FS = "," }
NR==FNR {
    map[$0];                       # load data in hash
    next
}
{
    ++count;
    file = "file" count ".txt";    # create your filename
    print "<<" $0 ">>" > file;     # add the header to filename
    for (data in map) {
        split (data, fld, /,/);
        if ( $1 <= fld[2] && fld[2] <= $2 ) {   # add entries if in range
            print (data) > file
        }
    }
    close(file)                    # close your file
}
Save the above script in, say, script.awk. Run it like:
awk -f script.awk datafile rangefile
This will create three files:
$ head file*
==> file1.txt <==
<<100,500>>
a,105
b,110
==> file2.txt <==
<<501,1000>>
c,550
d,670
e,900
==> file3.txt <==
<<1001,2000>>
h,1500
I am a newbie to shell scripting. I have a requirement to read a file line by line and match each line against a specific string. If it matches, print x; if it doesn't match, print y.
Here is what I am trying, but I am getting unexpected results: I get 700 lines of output where my /tmp/l1.txt has only 10 lines. Somewhere I am going wrong in the loop. I appreciate your help.
for line in `cat /tmp/l3.txt`
do
    if echo $line | grep "abc.log" ; then
        echo "X" >>/tmp/l4.txt
    else
        echo "Y" >>/tmp/l4.txt
    fi
done
I don't understand the urge to do looping ...
awk '{if($0 ~ /abc\.log/){print "x"}else{print "y"}}' /tmp/13.txt > /tmp/14.txt
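The same thing reads a little tighter with awk's ternary operator (just a stylistic alternative):

awk '{print ($0 ~ /abc\.log/ ? "x" : "y")}' /tmp/13.txt > /tmp/14.txt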
EDIT after inquiry ...
Of course, your spec wasn't overly precise, and I'm jumping to conclusions regarding your line format ... we basically take the whole line that matched abc.log and replace everything up to the directory abc, and everything from /logs to the end of the line, with nothing, which leaves us with clusterX/xyz.
awk '{if($0 ~ /abc\.log/){print gensub(/.+\/abc\/(.+)\/logs/, "\\1", 1)}else{print "y"}}' /tmp/13.txt > /tmp/14.txt
cat /tmp/l3.txt | while read line          # read each line into the variable "line"
do
    if [ -n "$(echo "$line" | grep "abc.log")" ]   # -n: true if grep produced output
    then
        echo "X" >> /tmp/l4.txt            # matched: echo "X" into l4.txt
    else
        echo "Y" >> /tmp/l4.txt            # no match: echo "Y" into l4.txt
    fi
done
The while read statement will read the entire line if only one variable is given (in this case "line"), or, if you have a fixed number of fields, you can specify a variable for each field, e.g. "| while read field1 field2" etc. The -n tests whether there is a value; -z tests whether it is empty.
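A minimal sketch of that multi-field form (the input format and field names here are made up for illustration):

# each input line has three whitespace-separated fields
printf 'alice 42 admin\nbob 7 user\n' | while read -r name count role
do
    echo "name=$name count=$count role=$role"
done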
Why worry about cat and the rest before grep? You can simply test the return of grep and append all matching lines to /tmp/14.txt, or append "Y":
[ -f "/tmpfile.tmp" ] && :> /tmpfile.tmp # test for existing tmpfile & truncate
if grep "abc.log" /tmp/13.txt >>tmpfile.tmp ; then # write all matching lines to tmpfile
cat tmpfile.tmp /tmp/14.txt # if grep matched append to /tmp/14.txt
else
echo "Y" >> /tmp/14.txt # write "Y" to /tmp/14.txt
fi
rm tmpfile.tmp # cleanup
Note: if you don't want the result of the grep appended to /tmp/14.txt, then just replace cat tmpfile.tmp >> /tmp/14.txt with echo "X" >> /tmp/14.txt, and you can remove the first and last lines.
I think the "awk" answer above is better. However, if you really need to iterate using a bash loop, you can use:
PATTERN="abc.log"
OUTPUTFILE=/tmp/14.txt
INPUTFILE=/tmp/13.txt
while read -r line
do
    grep -q "$PATTERN" <<< "$line" && echo X || echo Y
done < "$INPUTFILE" >> "$OUTPUTFILE"
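Note that the redirections are attached to the loop as a whole: done < "$INPUTFILE" >> "$OUTPUTFILE" opens each file just once for the whole loop, rather than re-opening the output file on every iteration.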
I need to find strings matching some regexp pattern and represent the search result as an array for iterating through it with a loop. Do I need to use sed? In general, I want to replace some strings, but analyse them before replacing.
Using sed and diff:
sed -i.bak 's/this/that/' input
diff input input.bak
GNU sed will create a backup file before substitutions, and diff will show you those changes. However, if you are not using GNU sed:
mv input input.bak
sed 's/this/that/' input.bak > input
diff input input.bak
Another method, using bash's own pattern matching in a read loop:
pattern="/X"
subst=that
while IFS='' read -r line; do
if [[ $line = *"$pattern"* ]]; then
echo "changing line: $line" 1>&2
echo "${line//$pattern/$subst}"
else
echo "$line"
fi
done < input > output
The best way to do this would be to use grep to get the lines, and populate an array with the result using newline as the internal field separator:
#!/bin/bash
# get just the desired lines
results=$(grep "mypattern" mysourcefile.txt)
# change the internal field separator to be a newline
IFS=$'\n'
# populate an array from the result lines
lines=($results)
# return the third result
echo "${lines[2]}"
You could build a loop to iterate through the results of the array, but a more traditional and simple solution would just be to use bash's iteration:
for line in "${lines[@]}"; do
    echo "$line"
done
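If your bash is 4.0 or newer, mapfile is a more robust way to build the array; this sketch avoids the IFS juggling and any glob expansion of the results:

# read matching lines into an array, one element per line, trailing newlines trimmed
mapfile -t lines < <(grep "mypattern" mysourcefile.txt)
echo "${lines[2]}"   # third result, as above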
FYI: here is a similar concept I created for fun. I thought it would be good to show how to loop over a file with this. It is a script that inspects a Linux sudoers file and checks that each line contains one of the valid words in my valid_words array. It ignores comment ("#") and blank lines with sed. In this example we would probably want to print just the invalid lines, but this script prints both.
#!/bin/bash
# -- Inspect a sudoers file, look for valid and invalid lines.
file="${1}"
declare -a valid_words=( _Alias = Defaults includedir )
actual_lines=$(wc -l < "${file}")
functional_lines=$(sed '/^\s*#/d;/^\s*$/d' "${file}" | wc -l)

while read line ;do
    # -- set the line to nothing "" if it has a comment or is an empty line.
    line="$(echo "${line}" | sed '/^\s*#/d;/^\s*$/d')"
    # -- if not set to nothing "", check if the line is valid from our list of valid words.
    if ! [[ -z "$line" ]] ;then
        unset found
        for each in "${valid_words[@]}" ;do
            found="$(echo "$line" | egrep -i "$each")"
            [[ -z "$found" ]] || break
        done
        [[ -z "$found" ]] && { echo "Invalid=$line"; sleep 3; } || echo "Valid=$found"
    fi
done < "${file}"

echo "actual lines: $actual_lines functional lines: $functional_lines"
So I keep messing this up, and I think where I was going wrong is that the code I'm writing needs to return only the file name and the number of lines from an argument.
So, using wc, I need something that accepts either 0 or 1 arguments and prints out something like "The file findlines.sh has 4 lines", or, if they give it ./findlines.sh Desktop/testfile, they'll get "the file testfile has 5 lines".
I have made a few attempts and all of them have failed. I can't seem to figure out how to approach it at all.
Should I echo "The file", then toss the argument name in, and then add another echo for "has the number of lines [lines]"?
Sample input would be, from the terminal, something like:
>findlines.sh
Output: the file findlines.sh has 18 lines
Or maybe:
>findlines.sh /home/directory/user/grocerylist
Output: the file grocerylist has 16 lines
#! /bin/sh -
file=${1-findlines.sh}
lines=$(wc -l < "$file") &&
printf 'The file "%s" has %d lines\n' "$file" "$lines"
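The ${1-...} expansion supplies a default: it yields $1 when an argument was passed, and the literal fallback name otherwise, which covers the "0 or 1 arguments" requirement in one step.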
This should work:
#!/bin/bash
file="findlines.sh"
if [ $# -ge 1 ]
then
    file=$1
fi

if [ -f "$file" ]
then
    lines=$(wc -l "$file" | awk '{print $1}')
    echo "The file $file has $lines lines"
else
    echo "File not found"
fi
See sch's answer for a shorter example that doesn't use awk.