I want to sum all the numbers in a file (columns and lines) given by the first parameter, but my program shows sum=sum+$i instead of the numeric sum:
sum=0;
file=$1
for i in $file
do
sum=sum+$i;
done;
echo "The sum is: " $sum
Input file:
$cat file.txt
10 20 10
40
50
Expected output :
The sum is: 130
Is there perhaps an awk method to solve this?
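For reference, the loop above iterates over the file name rather than the file contents, and sum=sum+$i is a plain string assignment, which is why a literal string rather than a number is printed. A minimal corrected sketch in plain bash (before reaching for awk) might be:
#!/bin/bash
sum=0
file=$1
for i in $(cat "$file"); do    # word-split the file contents, not the file name
    sum=$((sum + i))           # arithmetic expansion instead of string concatenation
done
echo "The sum is: $sum"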
Try this -
$cat file1.txt
10 20 10
40
50
$awk '{for(i=1;i<=NF;i++) {sum+=$i}} END {print sum}' file1.txt
130
OR
$xargs < file1.txt | tr ' ' + | bc
130
cat file.txt | xargs | sed -e 's/\ /+/g' | bc
You can also use a simple read and an array to sum the values, relying on word splitting via the default IFS (Internal Field Separator) to separate the values into an array, e.g.
#!/bin/bash

declare -i sum=0
fn="${1:-/dev/stdin}"            ## read from file as 1st argument (default stdin)

while read -r line; do           ## read each line
    a=( $line )                  ## separate values into array
    for i in "${a[@]}"; do       ## for each value in array
        ((sum += i))             ## add to sum
    done
done <"$fn"

echo "sum: $sum"
Example Input File
$ cat dat/numfile.txt
10 20 10
40
50
Example Use/Output
$ bash sumnumfile.sh dat/numfile.txt
sum: 130
Another for some awks (at least mawk and gawk):
$ awk -v RS="[^0-9]" '{s+=$1}END{print s}' file
130
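Roughly speaking, setting RS to the regex [^0-9] makes every non-digit character a record separator, so each run of digits becomes its own record (note that a decimal point would split a number, too). A quick way to see the records it produces, assuming gawk or mawk, is:
$ awk -v RS="[^0-9]" 'NF {print NR": "$0}' file
1: 10
2: 20
3: 10
4: 40
5: 50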
Related
I have file.txt include:
2
10
60
90
Now, how can I check whether any number in that file is equal to or greater than 50, and then do something? The "something" in my case is sending an email; that part I already have.
I have tried to do this with awk, but it does not work in my script.
The following command will output the greatest value of your file:
sort -nr file.txt | head -1
Then just compare it to the value of your choice and voilà. Something like:
if [ `sort -nr file.txt | head -1` -ge 50 ]
then
<do something>
fi
Explanation:
sort -n sorts the file as numbers (otherwise 12 would be considered greater than 100).
sort -r reverses the sort (by default it displays lower numbers first; with -r it displays higher numbers first).
head -1 displays only the first line of output.
This will do the job:
$ awk '$1 >= 50 { print $1 }' <file>
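If the goal is only to trigger the follow-up action (the email) when at least one value reaches 50, one possible sketch is to let awk set its exit status and branch on that; send_my_email below is just a placeholder for the command you already have:
if awk '$1 >= 50 {found = 1; exit} END {exit !found}' file.txt
then
    send_my_email    # placeholder for your existing email step
fi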
I have a .txt file with the following content:
a 3
a 4
a 5
a 6
b 1
b 3
b 5
c 9
c 10
I am wondering if there is any command (no awk if possible) that can read the .txt file and give the following output (Sorted by the second column):
c 19
a 18
b 9
You can use awk piped to sort:
awk '{sums[$1] += $2} END {for (i in sums) print i, sums[i]}' file | sort -rnk2
c 19
a 18
b 9
sums[$1] += $2 adds the value of the second field ($2) into an array sums that is indexed by the first field ($1).
sort -rnk2 reverse-sorts the awk output numerically on field 2.
You can use this code:
cat 1.txt | awk '{arr[$1]+=$2}END{for (var in arr) print var," ",arr[var]}' | sort -rnk 2
Explanation:
cat 1.txt - reads the content of the 1.txt file
awk - a language very useful for data manipulation
{arr[$1]+=$2} - for each line in the file, increases the array item keyed by the first field by the value of the second field. The field separator is a space by default.
END{for (var in arr) print var," ",arr[var]} - after all lines are processed, prints the array content
sort -rnk 2 - reverse numeric sort on field 2
Non-awk solutions.
perl
perl -lane '
$sum{$F[0]} += $F[1]
} END {
$, = " ";
print $_, $sum{$_} for reverse sort {$sum{$a} <=> $sum{$b}} keys %sum
' file.txt
(The seemingly unbalanced } works because -n wraps the code in an implicit while (<>) { ... } loop; the brace closes that loop so the END block sits outside it.)
bash version 4
declare -A sum
while read key val; do (( sum[$key] += $val )); done < file.txt
for key in "${!sum[@]}"; do echo "$key ${sum[$key]}"; done | sort -rn -k2
non-awk challenge accepted
vars=$(cut -d" " -f1 nums | uniq); paste <(echo "$vars") <(cat <(sed -e 's/ /+=/' nums) <(echo "$vars" | sed 's/$/;/') | bc) | sort -k2,2nr
c 19
a 18
b 9
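To see roughly what that pipeline hands to bc (with vars already set as above), the sed step rewrites each line of nums into a bc compound assignment and the second input appends one bare variable per key so bc prints the totals, which paste then pairs back up with the key names:
$ cat <(sed -e 's/ /+=/' nums) <(echo "$vars" | sed 's/$/;/')
a+=3
a+=4
a+=5
a+=6
b+=1
b+=3
b+=5
c+=9
c+=10
a;
b;
c;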
I would like to check whether two columns (mid) in a file are identical to each other. I am not sure how to do it, since the original file that I am working on is rather huge (in GB).
File 1 (column 1 and column 4 - to check if they are identical):
mid A1 A2 mid A3 A4 A5 A6
18 we gf 18 32 23 45 89
19 ew fg 19 33 24 46 90
21 ew fg 21 35 26 48 92
If you just need to find the differing rows, awk will do:
awk '$1!=$4{print $1,$4}' data
You can check using diff and awk for a more advanced comparison.
diff <(awk '{print $1}' data) <(awk '{print $4}' data)
The status code ($?) of this command will tell you whether they are the same (zero) or different (non-zero).
You can use that in a bash expression like this too:
if diff <(awk '{print $1}' data) <(awk '{print $4}' data) >& /dev/null;
then
echo same;
else
echo different;
fi
Something like this:
awk '{ if ($1 == $4) { print "same"; } else { print "different"; } }' < foo.txt
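Given that the real file is several GB, a variant sketch that stops reading at the first mismatch and prints a single verdict could look like this:
awk '$1 != $4 {diff = 1; exit} END {print (diff ? "different" : "same")}' foo.txt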
Completing Shiplu Mokaddim's answer a little bit: if you have another delimiter, for example in a CSV file, you can use:
awk -F';' '$1!=$4{print $1,$4}' data.csv | sed -r 's/ /;/g'
In this sample, the delimiter is a ";". The sed command at the end replaces the delimiter back to the original one. Be sure that you don't have other spaces in your fields, e.g. a date and time.
Question: Compare two columns' values in the same file.
Answer:
cut -d, -f1 a.txt > b.txt ; cut -d, -f3 a.txt > c.txt ; cmp b.txt c.txt && echo "Column values are same"; rm -rf b.txt c.txt
Given input such as:
1
1a
1.1b
2.0c
How can I extract the integer/decimal number at the beginning of each input line, using only Linux/Unix command-line utilities?
Using awk, you could say:
awk '{print $0+0}'
Awk is available in Linux, BSD, and many other Unix-like operating systems. It helps in this way:
echo "1" | awk '{a+=$0; print a}' # output 1
echo "1a" | awk '{a+=$0; print a}' # output 1
echo "1.1b" | awk '{a+=$0; print a}' # output 1.1
echo "2.0c" | awk '{a+=$0; print a}' # output 2
Some more awk
For extracting only the numeric part
$ awk 'gsub(/[[:alpha:]].*/,x,$1) + 1' << EOF
1
1a
1.1b
2.0c
EOF
1
1
1.1
2.0
For the integer part
$ awk '{print int($0)}' << EOF
1
1a
1.1b
2.0c
EOF
1
1
1
2
---edit---
If there are any blank lines in the file, you can avoid printing a zero for them as follows:
$ awk 'NF{$0+=0}1' << EOF
1
1a
1.1b
2foot4c
2
EOF
1
1
1.1
2
2
Here is a way to do this with sed:
echo "12.3abc" | sed -n 's/^\([0-9.][0-9.]*\).*/\1/p'
Output:
12.3
The group in parentheses matches all digits or periods '.' that occur at the beginning of the line. Everything after that is matched by the '.*'.
The \1 says to replace the entire line with just the portion that was matched in the parentheses.
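For what it's worth, with a sed that supports extended regular expressions (-E in GNU and BSD sed), the same command can be written without the backslash-escaped group; this is just an equivalent sketch:
echo "12.3abc" | sed -E -n 's/^([0-9.]+).*/\1/p'
12.3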
Assuming your version of grep supports -o:
grep -o '^[0-9.]\+' data.in
NB: This will match any sequence of digits and decimal points at the start of the line.
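As an illustration of that caveat, a line with several embedded dots is still matched as a single token, because the character class treats the period like any other allowed character:
$ echo "1.2.3abc" | grep -o '^[0-9.]\+'
1.2.3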
I have a file test.txt looking like this:
2092 Mary
103 Tom
1239 Mary
204 Mark
1294 Tom
1092 Mary
I am trying to create a shell script that will
Read each line and put the data in the two columns into variables var1 and var2.
If var2 is the same across lines, then add up the var1 values in those lines.
Output the result into a text file.
The result should be unique values in the var2 column. Here's what I have so far:
#!/bin/sh
#!/usr/bin/sh
cat test.txt| while read line;
do
$var1=$(echo $line| awk -F\; '{print $1}')
$var2=$(echo $line| awk -F\; '{print $2}')
How can I reference the variables in each line and then compare them?
The expected output would be:
4423 Mary
1397 Tom
204 Mark
Using awk it is easy:
awk '{sum[$2] += $1} END {for (i in sum) printf "%4d %s\n", sum[i], i; }'
If you want to do it with bash 4.x (not 3.x), then:
declare -A sum
while read number name
do
((sum[$name] += $number))
done
for name in "${!sum[@]}"
do
echo ${sum[$name]} $name
done
The structure here is essentially isomorphic with the awk script, but a little less notationally convenient. It will read from standard input, using the names as indexes into the associative array sum. The ${!sum[@]} notation is described in the Shell Parameter Expansion section of the manual, and not even hinted at in the section on Arrays. The information is there if you know where to look.
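As a quick sanity check against the test.txt from the question, saving that loop in a script (called sumbyname.sh here purely for illustration) and feeding it the file should print the per-name totals; the order may differ, since associative arrays are unordered:
$ bash sumbyname.sh < test.txt
4423 Mary
1397 Tom
204 Mark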
If you want to process an arbitrary number of input files (like the awk script would) then you need to use cat to collect the data:
cat "$#" |
{
declare -A sum
while read number name
do
((sum[$name] += $number))
done
for name in "${!sum[@]}"
do
echo ${sum[$name]} $name
done
}
This is not UUOC because it handles no arguments (reading standard input), one argument, or many arguments.
For all the scripts, if you want to sort the output in number or name order, apply an appropriate sort to the output of the script:
script file1 file2 file3 | sort -k 1,1n # By sum increasing order
script file1 file2 file3 | sort -k 1,1nr # By sum decreasing order
script file1 file2 file3 | sort -k 2,2 # By name increasing order
script file1 file2 file3 | sort -k 2,2r # By name decreasing order
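For example, with the sample data from the question, sorting in increasing numeric order of the sums (where script stands for whichever of the solutions above you saved) would give:
script test.txt | sort -k 1,1n
204 Mark
1397 Tom
4423 Mary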