Can't add more than 2 variables [duplicate] - linux

This question already has an answer here:
Sorting 3 columns and getting the average
(1 answer)
Closed 6 years ago.
I have a problem whenever I add more than 2 numbers with multiple operators. I tried expr, bc,
SUM=$(( $S1 + $S2 + $S3 ))
and many other forms, but whenever I have 3 variables I get this error.
expr: non-integer argument
expr: syntax error
This is when I do it with 2 variables (works fine)
#!/bin/sh
FILE=$1
while read -r SID FIRST LAST S1 S2 S3
do
SUM=$(expr $S1 + $S2)
AVG=$(expr $SUM / 3)
printf '%d [%d] %s, %s\n' "$AVG" "$SID" "$LAST" "$FIRST"
done < "$FILE" | sort
and when I do 3 variables (doesn't work)
#!/bin/sh
FILE=$1
while read -r SID FIRST LAST S1 S2 S3
do
SUM=$(expr $S1 + $S2 + $S3)
AVG=$(expr $SUM / 3)
printf '%d [%d] %s, %s\n' "$AVG" "$SID" "$LAST" "$FIRST"
done < "$FILE" | sort
expr: non-integer argument
expr: syntax error
txt file
123456789 Lee Johnson 72 85 90
999999999 Jaime Smith 90 92 91
888111818 JC Forney 100 81 97
290010111 Terry Lee 100 99 100
199144454 Tracey Camp 77 84 84
299226663 Laney Camp 70 74 71
434401929 Skyler Camp 78 81 82
928441032 Jess Forester 85 80 82
928441032 Chris Forester 97 94 89

The shell absolutely supports this; thus, the problem is with your data. Try the following:
s1=1
s2=2
s3=3
echo $(( s1 + s2 + s3 ))
...run, and showing output 6, here.
Likewise:
s1=1
s2=2
s3=3
expr "$s1" + "$s2" + "$s3"
...run, and showing output 6, here.
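Since the arithmetic itself is fine, the usual culprit is the data: an empty field or a stray character such as a DOS carriage return makes expr complain about a non-integer argument. A minimal debugging sketch, assuming the same file layout as above, that exposes what each field really contains:
#!/bin/sh
FILE=$1
cat -A "$FILE" | head        # with GNU cat, a ^M before the line-ending $ means DOS line endings
while read -r SID FIRST LAST S1 S2 S3
do
printf 'S1=[%s] S2=[%s] S3=[%s]\n' "$S1" "$S2" "$S3"   # brackets expose blanks and CRs
done < "$FILE"
If ^M shows up, running the file through tr -d '\r' (or dos2unix) removes it.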

Related

Linux execute php file with arguments

I have .php takes three parameters. For example: ./execute.php 11 111 111
I have like list of data in text file with spacing. For example:
22 222 222
33 333 333
44 444 444
I was thinking of using xargs to pass in the arguments, but it's not working.
Here is my try:
cat raw.txt | xargs -I % ./execute.php %0 %1 %2
It doesn't work. Any ideas?
Thanks for the help.
As per the following transcript, you are not handling the data correctly:
pax> printf '2 22 222\n3 33 333\n4 44 444\n' | xargs -I % echo %0 %1 %2
2 22 2220 2 22 2221 2 22 2222
3 33 3330 3 33 3331 3 33 3332
4 44 4440 4 44 4441 4 44 4442
Each % is giving you the entire line, and the digit following the % is just tacked on to the end.
To investigate, let's first create a fake processing file proc.sh (and chmod 700 it so we can run it easily):
#!/usr/bin/env bash
echo "$# '$1' '$2' '$3'"
Even if you switch to xargs -I % ./proc.sh %, you'll find you get one argument with embedded spaces, not three individual arguments:
pax> vi proc.sh ; printf '2 22 222\n3 33 333\n4 44 444\n' | xargs -I % ./proc.sh %
1 '2 22 222' '' ''
1 '3 33 333' '' ''
1 '4 44 444' '' ''
The easiest solution is probably to switch to a while read loop, something like:
pax:~> printf '2 22 222\n3 33 333\n4 44 444\n' | while read p1 p2 p3 ; do ./proc.sh ${p1} ${p2} ${p3} ; done
3 '2' '22' '222'
3 '3' '33' '333'
3 '4' '44' '444'
You can see there that the program is called with three arguments; you just have to adapt it to your own program:
while read p1 p2 p3 ; do ./proc.sh ${p1} ${p2} ${p3} ; done < raw.txt
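A side note not taken from the answer above: if every line of raw.txt holds exactly three whitespace-separated values, xargs -n 3 hands them over as three separate arguments per invocation, with no -I substitution at all:
printf '2 22 222\n3 33 333\n4 44 444\n' | xargs -n 3 ./proc.sh
which should call proc.sh once per line and report 3 '2' '22' '222', and so on. For the original program that would be xargs -n 3 ./execute.php < raw.txt.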

How to sort or rearrange numbers from multiple column into multiple row [fixed into 4 columns]?

I have 1 text file, which is test1.txt.
text1.txt contain as following:
Input:
##[A1] [B1] [T1] [V1] [T2] [V2] [T3] [V3] [T4] [V4]## --> headers
1 1000 0 100 10 200 20 300 30 400
40 500 50 600 60 700 70 800
1010 0 101 10 201 20 301 30 401
40 501 50 601
2 1000 0 110 15 210 25 310 35 410
45 510 55 610 65 710
1010 0 150 10 250 20 350 30 450
40 550
Condition:
A1 and B1 -> for each A1 + (B1 + [Tn + Vn])
A1 should be in 1 column.
B1 should be in 1 column.
T1,T2,T3 and T4 should be in 1 column.
V1,V2,V3 and V4 should be in 1 column.
How do I rearrange it to become like below?
Desired Output:
## A1 B1 Tn Vn ## --> headers
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
Here is my current code:
First Attempt:
Input
cat test1.txt | awk ' {
a=$1
b=$2
} {
for(i=1; i<=5; i=i+1) {
t=substr($0,11+i*10,5)
v=substr($0,16+i*10,5)
if( t ~ /^\ +[0-9]+$/ || t ~ /^[0-9]+$/ || t ~ /^\ +[0-9]+\ +$/ ){
printf "%7s %7d %8d %8d \n",a,b,t,v
}
}
}' | less
Output:
1 1000 400 0
40 500 800 0
1010 0 401 0
2 1000 410 0
1010 0 450 0
I'm trying to use a simple awk command, but still can't get the result.
Can anyone help me on this?
Thanks,
Am
Unlike what is stated elsewhere, there's nothing tricky about this at all; you're just using fixed-width fields in your input instead of character/string-separated fields.
With GNU awk, using FIELDWIDTHS to handle the fixed-width fields, it really couldn't be much simpler:
$ cat tst.awk
BEGIN {
# define the width of the input and output fields
FIELDWIDTHS = "2 4 5 5 6 5 6 5 6 5 6 99"
ofmt = "%2s%5s%6s%5s%6s%s\n"
}
{
# strip leading/trailing blanks and square brackets from every field
for (i=1; i<=NF; i++) {
gsub(/^[[\s]+|[]\s]+$/,"",$i)
}
}
NR==1 {
# print the header line
printf ofmt, $1, $2, $3, "Tn", "Vn", " "$NF
next
}
{
# print every other line
for (i=4; i<NF; i+=2) {
printf ofmt, $1, $2, $3, $i, $(i+1), ""
$1 = $2 = $3 = ""
}
}
$ awk -f tst.awk file
## A1 B1 Tn Vn ## --> headers
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
With other awks you'd use a while() { substr() } loop instead of FIELDWIDTHS so it'd be a couple more lines of code but still trivial.
The above will be orders of magnitude faster than an equivalent shell script. See https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice.
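For reference, here is a minimal sketch of emulating FIELDWIDTHS with substr() in a POSIX awk; the widths string simply mirrors the value used above, and the body is left as an outline:
awk '
BEGIN { nw = split("2 4 5 5 6 5 6 5 6 5 6 99", w, " ") }
{
    pos = 1
    for (i = 1; i <= nw; i++) {     # carve $0 into fixed-width pieces
        f[i] = substr($0, pos, w[i])
        pos += w[i]
    }
    # ...then use f[1], f[2], ... the way the gawk script uses $1, $2, ...
}' file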
This isn't easy because it is hard to identify when you have the different styles of row: those with values in both column 1 and column 2, those with no value in column 1 but a value in column 2, and those with no value in column 1 or 2. A first step is to make this easier; sed to the rescue:
$ sed 's/[[:space:]]\{1,\}$//
s/^....../&|/
s/|....../&|/
:a
s/|\( *[0-9][0-9]* \)\( *[^|]\)/|\1|\2/
t a' data
1 | 1000 | 0 | 100 | 10 | 200 | 20 | 300 | 30 | 400
| | 40 | 500 | 50 | 600 | 60 | 700 | 70 | 800
| 1010 | 0 | 101 | 10 | 201 | 20 | 301 | 30 | 401
| | 40 | 501 | 50 | 601
2 | 1000 | 0 | 110 | 15 | 210 | 25 | 310 | 35 | 410
| | 45 | 510 | 55 | 610 | 65 | 710
| 1010 | 0 | 150 | 10 | 250 | 20 | 350 | 30 | 450
| | 40 | 550
$
The first line removes any trailing white space, to avoid confusion. The next two expressions handle the fixed-width columns 1 and 2 (6 characters each). The next line creates a label a; the substitute finds a pipe |, some spaces, some digits, a space, and some trailing material which doesn't include a pipe; and inserts a pipe in the middle. The t a jumps back to the label if a substitution was done.
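The :a / t a pair is a general "repeat until no substitution was made" idiom. A tiny standalone illustration with made-up input (spelled with separate -e expressions so it also works outside GNU sed):
printf 'a    b  c\n' | sed -e :a -e 's/  / /' -e 'ta'
This keeps collapsing double spaces until only single spaces remain, printing a b c.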
With that in place, it becomes easy to manage awk with a field separator of |.
This is verbose, but seems to do the trick:
awk -F '|' '
$1 > 0 { printf "%5d %4d %3d %3d\n", $1, $2, $3, $4
for (i = 5; i <= NF; i += 2) { printf "%5s %4s %3d %3d\n", "", "", $i, $(i+1) }
next
}
$2 > 0 { printf "%5s %4d %3d %3d\n", "", $2, $3, $4
for (i = 5; i <= NF; i += 2) { printf "%5s %4s %3d %3d\n", "", "", $i, $(i+1) }
next
}
{ for (i = 3; i <= NF; i += 2) { printf "%5s %4s %3d %3d\n", "", "", $i, $(i+1) }
next
}'
Output:
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
If you need to remove the headings, add 1d; to the start of the sed script.
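To run the two stages end to end, the sed output can simply be piped into the awk program (a sketch, assuming the sed commands are saved as fix.sed and the awk program as rearrange.awk):
sed -f fix.sed data | awk -F '|' -f rearrange.awk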
This might work for you (GNU sed):
sed -r '1d;s/^(.{11}).{11}/&\n\1/;s/^((.{5}).*\n)\2/\1 /;s/^(.{5}(.{6}).*\n.{5})\2/\1 /;/\S/P;D' file
Delete the first line (if the header is needed see below). The key fields occupy the first 11 (the first key is 5 characters and the second 6) characters and the data fields occupy the next 11. Insert a newline and the key fields before each pair of data fields. Compare the keys on adjacent lines and replace by spaces if they are duplicated. Do not print any blank lines.
If the header is needed, use the following:
sed -r '1{s/\[[^]]+\]\s*//5g;y/[]/ /;s/1/n/3g;s/B/ B/;G;b};s/^(.{11}).{11}/&\n\1/;s/^((.{5}).*\n)\2/\1 /;s/^(.{5}(.{6}).*\n.{5})\2/\1 /;/\S/P;D' file
This does additional formatting on the first line: it removes superfluous headings and []'s, replaces the 1's with n, and adds an extra space for alignment plus a following empty line.
Furthermore, by utilising the second line of the input file as a template for the data, a sed script can be created that does not have any hard-coded values:
sed -r '2!d;s/\s*\S*//3g;s/.\>/&\n/;h;s/[^\n]/./g;G;s/[^\n.]/ /g;s#(.*)\n(.*)\n(.*)\n(.*)#1d;s/^(\1\2)\1\2/\&\\n\\1/;s/^((\1).*\\n)\\2/\\1\3/;s/^(\1(\2).*\\n\1)\\2/\\1\4/;/\\S/P;D#' file |
sed -r -f - file
The script created from the template is piped into a second invocation of the sed as a file and run against the original file to produce the required output.
Likewise the headers may be formatted if need be as so:
sed -r '2!d;s/\s*\S*//3g;s/.\>/&\n/;h;s/[^\n]/./g;G;s/[^\n.]/ /g;s#(.*)\n(.*)\n(.*)\n(.*)#s/^(\1\2)\1\2/\&\\n\\1/;s/^((\1).*\\n)\\2/\\1\3/;s/^(\1(\2).*\\n\1)\\2/\\1\4/;/\\S/P;D#' file |
sed -r -e '1{s/\[[^]]+\]\s*//5g;y/[]/ /;s/1/n/3g;s/B/ B/;G;b}' -f - file
By extracting the first four fields from the second line of the input file, four variables can be made: two regexps and two values. These variables can be used to build the sed script.
N.B. The sed script is created from strings extracted from the template, and the variables produced are also strings, so they can be concatenated to produce further new regexps and new values, etc.
This is a rather tricky problem that can be handled a number of ways. Whether you use bash, perl or awk, you will need to handle the number of fields in some semi-generic way instead of just hardcoding values for your example.
Using bash, so long as you can rely on an even number of fields in all lines (except for the lines with a sole initial value, e.g. 1010), you can accommodate the number of fields in a reasonably generic way. For the lines beginning with 1, 2, etc. you know your initial output will contain 4 fields. For lines beginning with 1010, etc. you know the output will contain an initial 3 fields. For the remaining values you are simply outputting pairs.
The tricky part is handling the alignment. This is where printf helps: it lets you set the field width with a parameter using the form "%*s", where the conversion expects the next argument to be an integer giving the field width, followed by the argument for the string conversion itself.
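A quick illustration of that parameterized width in isolation (the values here are made up):
wd=6
printf '[%*s]\n' "$wd" 42     # prints [    42]; the width is taken from the first argument
It takes a little gymnastics, but you could do something like the following in bash itself: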
(note: edit to match your output header format)
#!/bin/bash
declare -i nfields wd=6 ## total no. fields, printf field-width modifier
while read -r line; do ## read each line (preserve for header line)
arr=($line) ## separate into array
first=${arr[0]} ## check for '#' in first line for header
if [ "${first:0:1}" = '#' ]; then
nfields=$((${#arr[@]} - 2)) ## no. fields in header
printf "## A1 B1 Tn Vn ## --> headers\n" ## new header
continue
fi
fields=${#arr[@]} ## fields in line
case "$fields" in
$nfields ) ## fields -eq nfiles?
cnt=4 ## handle 1st 4 values in line
printf " "
for ((i=0; i < cnt; i++)); do
if [ "$i" -eq '2' ]; then
printf "%*s" "5" "${arr[i]}"
else
printf "%*s" "$wd" "${arr[i]}"
fi
done
echo
for ((i = cnt; i < $fields; i += 2)); do ## handle rest
printf "%*s%*s%*s\n" "$((2*wd))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
done
;;
$((nfields - 1)) ) ## one less than nfields
cnt=3 ## handle 1st 3 values
printf " %*s%*s" "$wd" " "
for ((i=0; i < cnt; i++)); do
if [ "$i" -eq '1' ]; then
printf "%*s" "5" "${arr[i]}"
else
printf "%*s" "$wd" "${arr[i]}"
fi
done
echo
for ((i = cnt; i < $fields; i += 2)); do ## handle rest
if [ "$i" -eq '0' ]; then
printf "%*s%*s%*s\n" "$((wd+1))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
else
printf "%*s%*s%*s\n" "$((2*wd))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
fi
done
;;
* ) ## all other lines format as pairs
for ((i = 0; i < $fields; i += 2)); do
printf "%*s%*s%*s\n" "$((2*wd))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
done
;;
esac
done
Rather than reading from a file, just use redirection to feed the input file to your script (if you want to provide a filename instead, then redirect the file into the outer while read ... loop).
Example Use/Output
$ bash text1format.sh <dat/text1.txt
## A1 B1 Tn Vn ## --> headers
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
As between awk and bash, awk will generally be faster, but here with formatted output, it may be closer than usual. Look things over and let me know if you have questions.

How do I generate random numbers in 3 lines - Linux Shell Script

I would like to write code that generates 3 rows of 6 spaced-out random numbers, which reshuffle after a given time (0.5 seconds) without creating new rows; basically, 6 random numbers keep regenerating in each of the 3 rows.
The code I have so far is:
echo " "
echo " "
echo " "
for i in {1..5};
do
for i in {1..1};
do
echo -ne " $(($RANDOM % 100)) $(($RANDOM % 100)) $(($RANDOM % 100)) $(($RANDOM % 100)) $(($RANDOM % 100)) $(($RANDOM % 100))\r"
done
sleep 0.5
done
However, when I try to add the second and third row to this, it doesn't seem to work the way I want. A sample output could look like:
45 88 85 90 44 22
90 56 34 55 32 45
58 99 42 10 48 98
and in place of these numbers, new ones will be generated, keeping only 6 columns and 3 rows. I have tried making a matrix too, but it didn't work for me.
I don't know if you have it finished, but continuing on from the comment, I would fill an indexed array with random values between 1 and 100, e.g.
#!/bin/bash
for ((i = 0; i < 18; i++)); do ## fill array with random values
a[i]=$(($RANDOM % 100 + 1))
done
What you would then want is a function you could call, passing the number of values in each row (so you can output a '\n' after those digits print) and then the array values as arguments for the function to read into a local array. (Of course, you can just use the original array without worrying about passing elements as arguments, but I prefer using local values within a function to preserve values in other scopes unchanged.) For that, your print function could be something like:
## function to print array in shuffled order, using tput for cursor control
prnarray() {
local n=$1
local al=( ${@:2} )
local c=0
for i in $(shuf -i 0-$((${#al[@]} - 1))); do
[ "$c" -gt '0' -a $((c % n)) -eq '0' ] && printf "\n"
printf " %3d" "${al[i]}"
((c++))
done
printf "\n"
tput cuu 6 ## tput is used for cursor control to move back to top
}
Then you really don't need much else but a loop to print the array, sleep for some period of time and then call prnarray again to overlay the output with a new shuffle, e.g.
tput sc ## save cursor position
## call prnarray 3 times with 5 sec. delay between displays
declare -i c=0
while (( c < 3 )); do
prnarray 3 ${a[@]}
((c++))
sleep 5
done
tput rc ## restore cursor position
Example Use/Output
The array will print in the same spot every 5 seconds with the same elements shuffled to different positions within the array.
$ sh randomshuf.sh
33 30 34
86 98 48
94 89 80
50 57 34
11 45 57
80 42 22
Give it a shot and let me know if you have any questions.
Note: to make it 3x6, change the corresponding lines to:
tput cuu 3
and
prnarray 6 ${a[@]}
With those changed your output would resemble:
$ sh randomshuf.sh
85 9 45 14 18 16
6 59 43 19 29 58
7 89 18 72 29 29
I would recommend that you avoid using the shell for this. The shell is great for automating system administration tasks - e.g. interacting with files, directories and shell commands - but it is also great at keeping people away from learning a 'real' and more powerful programming language.
python, ruby or perl can help you out.
if all you have is a hammer, everything looks like a nail.
e.g ruby
def print_random_numbers(num)
random_numbers = []
num.times do |n|
random_numbers << rand(100)
end
puts random_numbers.join(' ')
puts
end
while true
3.times do
print_random_numbers(6)
end
sleep 0.5
end
I'm not sure whether I really understand your problem, but:
This gets you 6 numbers taken at random from the range 1-100
numbers=$(shuf -i 1-100 -n 6)
echo $numbers
The numbers are selected without repetition. If you want repetition, use -r.
This gives you a permutation of the numbers drawn before:
echo $numbers | tr ' ' '\n' | shuf | xargs echo
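A minimal sketch of how shuf could drive the 3x6 refreshing display asked about (this is not from the answer above; it assumes GNU shuf and a terminal that honours tput cuu):
while true
do
for row in 1 2 3
do
# the external printf recycles the format for all six numbers; -r allows repeats
shuf -i 1-100 -n 6 -r | xargs printf ' %3d'
echo
done
sleep 0.5
tput cuu 3   # move back up 3 lines so the next pass overwrites in place
done
Drop the -r if each draw of 6 must be distinct.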

Print statement within a loop

I have a few text files named file1.txt, file2.txt, and so on.
I would like to print the mean of each file after giving some weightage to it. My script is:
#!/bin/sh
m1=3.2; m2=1.2; m3=0.2 #mean of file1.txt, file2.txt ...
for i in {1..100} #files
do for j in 20 30 35 45 #weightages
do
k=m$i*$j #This is an example, calculated as mean of file$i.txt * j
printf "%5s %8.3f\n" "$i" "$k" >> ofile.txt
done
done
The above is printing as:
ofile.txt
1 64
1 96
1 112
1 144
2 24
2 36
. .
Desired output format:
ofile.txt
1 64 96 112 144
2 24 36 42 54
3 4 6 7 9
. . . . .
where the 1st column is the file number and the 2nd, 3rd, and 4th columns are m*j.
Off the top of my head, so you might need to correct some stuff:
#!/bin/sh
m1=3.2; m2=1.2; m3=0.2 #mean of file1.txt, file2.txt ...
for i in {1..100} #files
do
printf "%5s" "$i" >> ofile.txt
for j in 20 30 35 45 #weightages
do
k=m$i*$j #This is an example, calculated as mean of file$i.txt * j
printf "\t%8.3f" "$k" >> ofile.txt
done
printf "\n" >> ofile.txt
done
#!/bin/sh
m1=3.2; m2=1.2; m3=0.2 #mean of file1.txt, file2.txt ...
for i in {1..100} #files
do
ofile_line="$i "
for j in 20 30 35 45 #weightages
do
k=m$i*$j #This is an example, calculated as mean of file$i.txt * j
support=$(printf "%8.3f" "$k")
ofile_line="${ofile_line}${support} "
done
echo "${ofile_line}" >> ofile.txt
done
You don't need a \n in echo "${ofile_line}" >> ofile.txt because echo adds the newline for you.
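One detail both answers inherit from the question: k=m$i*$j stores the literal string m1*20 rather than a number, which printf "%8.3f" cannot format. A minimal sketch of how the value could actually be computed under /bin/sh, assuming bc is available and the means live in m1, m2, ... as in the script above:
eval "mean=\$m$i"                              # indirect lookup of m1, m2, ...
k=$(printf '%s * %s\n' "$mean" "$j" | bc -l)   # e.g. 3.2 * 20 -> 64.0
With that in place, the %8.3f conversions in either answer receive real numbers.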

Compare two files having different column numbers and print the requirement to a new file if condition satisfies

I have two files with more than 10000 rows:
File1 has 1 column; File2 has 4 columns:
File1    File2
23       23 88 90 0
34       43 74 58 5
43       54 87 52 3
54       73 52 35 4
.        .
.        .
I want to compare each value in file-1 with the first column of file-2. If it exists, then print that value along with the other three values from file-2. In this example the output will be:
23 88 90 0
43 74 58 5
54 87 52 3
.
.
I have written the following script, but it is taking too much time to execute.
s1=1; s2=$(wc -l < File1.txt)
while [ $s1 -le $s2 ]
do n=$(awk 'NR=="$s1" {print $1}' File1.txt)
p1=1; p2=$(wc -l < File2.txt)
while [ $p1 -le $p2 ]
do awk '{if ($1==$n) printf ("%s %s %s %s\n", $1, $2, $3, $4);}'> ofile.txt
(( p1++ ))
done
(( s1++ ))
done
Is there any short/easy way to do it?
You can do it very concisely using awk:
awk 'FNR==NR{found[$1]++; next} $1 in found'
Test
>>> cat file1
23
34
43
54
>>> cat file2
23 88 90 0
43 74 58 5
54 87 52 3
73 52 35 4
>>> awk 'FNR==NR{found[$1]++; next} $1 in found' file1 file2
23 88 90 0
43 74 58 5
54 87 52 3
What it does:
FNR==NR checks whether FNR, the record number within the current file, is equal to NR, the total number of records read so far. This is true only for the first file, file1, because FNR is reset to 1 when awk starts reading a new file.
{found[$1]++; next} If the check is true, this creates an entry in the associative array found, indexed by $1 (the first column of file1), and skips to the next record.
$1 in found This check is only performed for the second file, file2. If the column 1 value $1 is an index in the associative array found, then awk prints the entire line (the print action is not written because printing the record is awk's default action).
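For completeness, and not part of the answer above: when both files are sorted on the key column, the classic join utility produces the same result:
sort file1 > file1.sorted
sort file2 > file2.sorted
join file1.sorted file2.sorted
join matches on the first field of each file by default and prints the key followed by the remaining fields of file2.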
