Use mapfile and awk in the same command - linux

I'm trying to map a file to a variable and take from it only the part after a '/'
For now I have this:
mapfile VAR < path_to_file
echo $VAR | awk -F '/' '{ print $2 }'
How to combine those two commands into one? I can't find any examples.

You can run awk directly on the file:
awk -F '/' '{ print $2 }' path_to_file
What you are doing now is effectively the same as:
mapfile VAR < path_to_file && echo $VAR | awk -F '/' '{ print $2 }'
(Note that mapfile reads the whole file into an array, and the unquoted $VAR expands only its first element, so letting awk read the file itself is both shorter and safer.)
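For completeness, a quick self-contained check (the file contents here are a made-up sample, since the real path_to_file isn't shown) that the single awk call prints the part after the first '/':

```shell
# Hypothetical sample file; the real path_to_file contents are an assumption.
printf 'foo/bar/baz\n' > path_to_file

# One command: awk reads the file itself, no mapfile needed.
awk -F '/' '{ print $2 }' path_to_file    # prints: bar

rm -f path_to_file
```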


Difficulty creating a .txt file from a loop in bash

I have this data:
cat >data1.txt <<'EOF'
2020-01-27-06-00;/dev/hd1;100;/
2020-01-27-12-00;/dev/hd1;100;/
2020-01-27-18-00;/dev/hd1;100;/
2020-01-27-06-00;/dev/hd2;200;/usr
2020-01-27-12-00;/dev/hd2;200;/usr
2020-01-27-18-00;/dev/hd2;200;/usr
EOF
cat >data2.txt <<'EOF'
2020-02-27-06-00;/dev/hd1;120;/
2020-02-27-12-00;/dev/hd1;120;/
2020-02-27-18-00;/dev/hd1;120;/
2020-02-27-06-00;/dev/hd2;230;/usr
2020-02-27-12-00;/dev/hd2;230;/usr
2020-02-27-18-00;/dev/hd2;230;/usr
EOF
cat >data3.txt <<'EOF'
2020-03-27-06-00;/dev/hd1;130;/
2020-03-27-12-00;/dev/hd1;130;/
2020-03-27-18-00;/dev/hd1;130;/
2020-03-27-06-00;/dev/hd2;240;/usr
2020-03-27-12-00;/dev/hd2;240;/usr
2020-03-27-18-00;/dev/hd2;240;/usr
EOF
I would like to create a .txt file for each filesystem (so hd1.txt, hd2.txt, hd3.txt, hd4.txt, and so on) and write into each one the sum of the values for that filesystem from each dataX.txt. It is hard for me to explain in English exactly what I want, so here is an example of the desired result.
Expected content for the output file hd1.txt:
2020-01;/dev/hd1;300;/
2020-02;/dev/hd1;360;/
2020-03;/dev/hd1;390;/
Expected content for the file hd2.txt:
2020-01;/dev/hd2;600;/usr
2020-02;/dev/hd2;690;/usr
2020-03;/dev/hd2;720;/usr
The implementation I've currently tried:
for i in $(cat *.txt | awk -F';' '{print $2}' | cut -d '/' -f3| uniq)
do
cat *.txt | grep -w $i | awk -F';' -v date="$(cat *.txt | awk -F';' '{print $1}' | cut -d'-' -f-2 | uniq )" '{sum+=$3} END {print date";"$2";"sum}' >> $i
done
But it doesn't work...
Can you show me how to do that?
Because the format appears to be fixed, you can split the input on several separator characters at once and parse it easily in awk (the - is listed first in the bracket expression so it is not taken as a range):
awk -v FS='[-;/]' '
prev != $9 {
    if (length(output)) {
        print output >> fileoutput
    }
    prev = $9
    sum = 0
}
{
    sum += $9
    output = sprintf("%s-%s;/%s/%s;%d;/%s", $1, $2, $7, $8, sum, $11)
    fileoutput = $8 ".txt"
}
END {
    print output >> fileoutput
}
' *.txt
Tested on repl.it; it generates:
+ cat hd1.txt
2020-01;/dev/hd1;300;/
2020-02;/dev/hd1;360;/
2020-03;/dev/hd1;390;/
+ cat hd2.txt
2020-01;/dev/hd2;600;/usr
2020-02;/dev/hd2;690;/usr
2020-03;/dev/hd2;720;/usr
Alternatively, you could use -v FS=';' and call split() on the first and second columns to extract the year and month and the hdX name.
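A sketch of that split() variant, under the assumption that aggregating per month and device is what is wanted. The output order of awk's for (k in sum) is unspecified, so the result is piped through sort here; in the real task you would redirect each line to p[n]".txt" instead of stdout:

```shell
# Hypothetical cut-down input file for the demo.
cat >data1.txt <<'EOF'
2020-01-27-06-00;/dev/hd1;100;/
2020-01-27-12-00;/dev/hd1;100;/
2020-01-27-06-00;/dev/hd2;200;/usr
EOF

awk -F';' '{
    split($1, d, "-")                   # d[1]=year, d[2]=month
    n = split($2, p, "/")               # p[n] = hd1, hd2, ...
    key = d[1] "-" d[2] ";" $2 ";" $4   # e.g. 2020-01;/dev/hd1;/
    sum[key] += $3                      # accumulate per month+device
}
END {
    for (k in sum) {
        split(k, f, ";")
        print f[1] ";" f[2] ";" sum[k] ";" f[3]
    }
}' data1.txt | sort

rm -f data1.txt
```

which prints 2020-01;/dev/hd1;200;/ and 2020-01;/dev/hd2;200;/usr.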
If you want a bash solution, I suggest you invert the loops: first iterate over the files, then over the identifiers in the second column.
for file in *.txt; do
prev=
output=
while IFS=';' read -r date dev num path; do
hd=$(basename "$dev")
if [[ "$hd" != "${prev:-}" ]]; then
if ((${#output})); then
printf "%s\n" "$output" >> "$fileoutput"
fi
sum=0
prev="$hd"
fi
sum=$((sum + num))
output=$(
printf "%s;%s;%d;%s" \
"$(cut -d'-' -f1-2 <<<"$date")" \
"$dev" "$sum" "$path"
)
fileoutput="${hd}.txt"
done < "$file"
printf "%s\n" "$output" >> "$fileoutput"
done
You could also translate the awk almost 1:1 into bash by using IFS='-;/' in the while read loop.
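A minimal sketch of that translation, assuming the same field layout as the awk version; the empty fields produced by adjacent separators are absorbed by the _ placeholder variables (summing is omitted, this only shows the splitting):

```shell
# IFS contains ';', '-' and '/', so read splits on all three, just like the
# awk bracket-expression FS; the _ variables soak up the empty fields.
while IFS=';-/' read -r y m d H M _ dev hd val _ path; do
    printf '%s-%s;/%s/%s;%s;/%s\n' "$y" "$m" "$dev" "$hd" "$val" "$path"
done <<'EOF'
2020-01-27-06-00;/dev/hd1;100;/usr
EOF
```

This prints 2020-01;/dev/hd1;100;/usr for the sample line.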

Get Values out of String in bash

I hope you can help me.
I am trying to split a string:
#!/bin/bash
file=$(<sample.txt)
echo "$file"
The file itself contains values like this:
(;FF[4]GM[1]SZ[19]CA[UTF-8]SO[sometext]BC[cn]WC[ja]
What I need is a way to extract the values between the [ ] and set them as variables, for example:
$FF=4
$GM=1
$SZ=19
and so on
However, some files do not contain all the values, so in some cases there is no FF[*]. In that case the program should use the value 99.
How can I do this?
Thank you so much for your help.
Greetings
Chris
It may be a bit overcomplicated, but here is another way:
grep -Po '[-a-zA-Z0-9]*' file | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a,$0; next} {a=$0}' | bash
Step by step:
Filter the file, printing only the needed tokens:
$ grep -Po '[-a-zA-Z0-9]*' file
FF
4
GM
1
SZ
19
CA
UTF-8
SO
sometext
BC
cn
WC
ja
Reformat so that each pair becomes a declare statement:
$ grep -Po '[-a-zA-Z0-9]*' file | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a,$0; next} {a=$0}'
declare FF="4";
declare GM="1";
declare SZ="19";
declare CA="UTF-8";
declare SO="sometext";
declare BC="cn";
declare WC="ja";
And finally pipe it to bash so that it is executed.
Note that the 2nd step could also be rewritten as
xargs -n2 | awk '{print "declare "$1"=\""$2"\";"}'
I'd write this, using ;, [ or ] as awk's field separators:
$ line='(;FF[4]GM[1]SZ[19]CA[UTF-8]SO[sometext]BC[cn]WC[ja]'
$ awk -F '[][;]' '{for (i=2; i<NF; i+=2) {printf "%s=\"%s\" ", $i, $(i+1)}; print ""}' <<<"$line"
FF="4" GM="1" SZ="19" CA="UTF-8" SO="sometext" BC="cn" WC="ja"
Then, to evaluate the output in your current shell:
$ source <(!!)
source <(awk -F '[][;]' '{for (i=2; i<NF; i+=2) {printf "%s=\"%s\" ", $i, $(i+1)}; print ""}' <<<"$line")
$ echo $SO
sometext
To handle the default FF value:
$ source <(awk -F '[][;]' '{
print "FF=99"
for (i=2; i<NF; i+=2) printf "%s=\"%s\" ", $i, $(i+1)
print ""
}' <<< "(;A[1]B[2]")
$ echo $FF
99
$ source <(awk -F '[][;]' '{
print "FF=99"
for (i=2; i<NF; i+=2) printf "%s=\"%s\" ", $i, $(i+1)
print ""
}' <<< "(;A[1]B[2]FF[3]")
$ echo $FF
3
Per your request:
while IFS=\[ read -r A B; do
[[ -z $B ]] && B=99
eval "$A=\$B"
done < <(exec grep -oE '[[:alpha:]]+\[[^]]*' sample.txt)
Although using an associative array would be better:
declare -A VALUES
while IFS=\[ read -r A B; do
[[ -z $B ]] && B=99
VALUES[$A]=$B
done < <(exec grep -oE '[[:alpha:]]+\[[^]]*' sample.txt)
That way you can access both the keys ("${!VALUES[@]}") and the values ("${VALUES[FF]}").
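A self-contained run of the associative-array version against the sample line from the question (a here-string stands in for sample.txt here):

```shell
line='(;FF[4]GM[1]SZ[19]'
declare -A VALUES
while IFS='[' read -r A B; do
    [[ -z $B ]] && B=99
    VALUES[$A]=$B
done < <(grep -oE '[[:alpha:]]+\[[^]]*' <<<"$line")

echo "${!VALUES[@]}"    # the keys, e.g. FF GM SZ (order is unspecified)
echo "${VALUES[FF]}"    # 4
# Apply the FF=99 default only when the key is absent:
: "${VALUES[FF]:=99}"
```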
I would probably do something like this:
set -- $(sed -e 's/^(;//' sample.txt | tr '[][]' ' ')
while (( $# >= 2 ))
do
varname=${1}
varvalue=${2}
# do something to test varname and varvalue to make sure they're sane/safe
declare "${varname}=${varvalue}"
shift 2
done
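A quick run of this approach on a hypothetical sample.txt (the unquoted $(...) relies on word splitting deliberately, and the >= 2 test keeps the final pair from being skipped):

```shell
printf '(;FF[4]GM[1]\n' > sample.txt    # made-up input for the demo

# Strip the leading "(;", turn [ and ] into spaces, load as positional params.
set -- $(sed -e 's/^(;//' sample.txt | tr '[][]' ' ')
while (( $# >= 2 )); do
    declare "${1}=${2}"
    shift 2
done

echo "$FF $GM"    # 4 1
rm -f sample.txt
```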

How to split on "1$$$$" with awk

If I have a string like "sn":"1$$$$12056597.3,2595585.69$$", how can I use awk to split on "1$$$$"?
I tried
cat $filename | awk -F "\"1\$\$\$\$" '{ print $2 }'
cat $filename | awk -F "\"1$$$$" '{ print $2 }'
but both failed.
For any number of $, use
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }'
For exactly 4, use
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]{4}' '{ print $2 }'
To help debug problems with escape characters in the shell, you can use the built-in shell option set -x, which prints the arguments passed to awk after the shell has interpreted any escape characters and expanded shell variables.
In this case the shell first interprets \$ as an escape for a plain $:
set -x
echo '"1$$$$12056597.3,2595585.69$$"'|awk -F "\"1\$\$\$\$" '{ print $2 }'
+ echo '"1$$$$12056597.3,2595585.69$$"'
+ awk -F '"1$$$$' '{ print $2 }'
You can use \\\$ so that \$ gets through to awk, but in an awk regular expression \$ is treated as a plain $ anyway. At least awk is nice enough to warn you:
echo '"1$$$$12056597.3,2595585.69$$"'|awk -F "\"1\\$\\$\\$\\$" '{ print $2 }'
+ echo '"1$$$$12056597.3,2595585.69$$"'
+ awk -F '"1\$\$\$\$' '{ print $2 }'
awk: warning: escape sequence `\$' treated as plain `$'
Turn off debugging with
set +x
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }' |sed 's/.\{3\}$//'
Or, if you want the two float values separately:
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }' |sed 's/.\{3\}$//' |awk 'BEGIN {FS=","};{print $1}'
And
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }' |sed 's/.\{3\}$//' |awk 'BEGIN {FS=","};{print $2}'

Error in bash when extracting the second column of a matched pattern

I am trying to search for a pattern and extract just the second column from the matching lines. The command works well on the command line but not inside a bash script.
#!/bin/bash
set a = grep 'NM_033356' test.txt | awk '{ print $2 }'
echo $a
It doesn't print any output at all.
Input
NM_033356 2
NM_033356 5
NM_033356 7
Your code:
#!/bin/bash
set a = grep 'NM_033356' test.txt | awk '{ print $2 }'
echo $a
Change it to:
#!/bin/bash
a="$(awk '$1=="NM_033356"{ print $2 }' test.txt)"
echo "$a"
Code changes are based on your sample input.
Or:
a="$(awk '/NM_033356/ { print $2 }' test.txt)"
Try this:
a=`grep 'NM_033356' test.txt | awk '{ print $2 }'`
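To sanity-check either version against the sample input from the question (test.txt is recreated here just for the demo):

```shell
# Recreate the question's sample input.
printf 'NM_033356 2\nNM_033356 5\nNM_033356 7\n' > test.txt

a="$(awk '$1=="NM_033356"{ print $2 }' test.txt)"
echo "$a"    # prints 2, 5 and 7 on separate lines

rm -f test.txt
```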

Using awk in a shell script with parameters?

Because I haven't found a solution on Google or with the search function, I will ask here.
Here is my code :
Send="last -n 1 $1 | awk '{ print $1 " " $2 }'"
My problem is that my shell script takes parameters.
When I call my script:
myScript hello world
my awk command ends up as:
awk '{ print hello " " world }'
How can I avoid this? Is there a way?
Because this is part of a project, I can't post more code. ;/
First, change the outer "s to $() so that the single quotes actually protect the awk program from the shell (inside double quotes the shell expands $1 and $2 before awk ever sees them): send=$(last -n 1 $1 | awk '{print $1 " " $2}')
You can also use awk's FS (field separator) variable, which defaults to a space, instead of " ": send=$(last -n 1 $1 | awk '{print $1 FS $2}')
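A quick demonstration of the FS variant, with a printf standing in for the last output (the user/tty/host values are made up):

```shell
# Fake one line of 'last' output so the example is self-contained.
send=$(printf 'alice pts/0 192.0.2.1\n' | awk '{print $1 FS $2}')
echo "$send"    # alice pts/0
```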
