I have a variable as below, and I perform the following operation to print the output line by line:
a="My name is A. Her Name is B. His Name is C"
echo "$a" | awk -F '[nN]ame |\\.' '{for (i=2; i<=NF; i+=2) print $i}'
The output is
is A
is B
is C
When I store the results into an array, it treats whitespace as the separator, so every word ends up in its own element. I want each line of the output stored in its own array index, as below:
x=($(awk -F '[nN]ame |\\.' '{for (i=2; i<=NF; i+=2) print $i}' <<< "$a"))
outputs:
${x[0]} = is
${x[1]} = A
...and so on.
What I expect is:
${x[0]} = is A
${x[1]} = is B
${x[2]} = is C
Also, echo ${#x[@]} gives 6; it should be 3.
OK, try this:
i=0
while read -r v; do
    x[i]="$v"
    (( i++ ))
done < <(awk -F '[nN]ame |\\.' '{for (i=2; i<=NF; i+=2) print $i}' <<< "$a")
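After the loop, you can verify the result with declare -p (the exact formatting of its output can differ slightly between bash versions):
$ declare -p x
declare -a x=([0]="is A" [1]="is B" [2]="is C")
$ echo "${#x[@]}"
3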
You can also use the mapfile command (bash version 4 or higher):
tempX=$(awk -F '[nN]ame |\\.' '{for (i=2; i<=NF; i+=2) print $i}' <<< "$a")
mapfile -t x <<< "$tempX"
~$ echo "${x[0]}"
is A
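If you want to skip the temporary variable, mapfile can read straight from a process substitution (still bash 4 or higher):
mapfile -t x < <(awk -F '[nN]ame |\\.' '{for (i=2; i<=NF; i+=2) print $i}' <<< "$a")
echo "${#x[@]}"    # 3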
I have a folder that contains text files. I need to extract the 20 lines right after the LAST occurrence of the word 'Input' and send the results to files of the same name in a different folder.
I use the following:
for i in error/*.log; do awk '/Input/ {n=NR} {a[NR]=$0} END {for (i=n;i<=n+20;i++) print a[i]}' $i > exceeded/`basename $i` done
What am I doing wrong?
Thanks for the help in advance
If you have spaces in the file names (the * part of *.log) then try this:
for i in error/*.log; do awk '/Input/ {n=NR} {a[NR]=$0} END {for (i=n;i<=n+20;i++) print a[i]}' "$i" > "exceeded/`basename \"$i\"`" ; done
Also, I am assuming that both the "error" and "exceeded" directories are in the same directory (and that "exceeded" already exists.)
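If the exceeded directory might not already exist, a slightly more defensive version of the same loop (same awk program, $(...) instead of backticks) could look like this. Note the n..n+20 range also prints the matching 'Input' line itself; start from n+1 if you want only the 20 lines after it.
mkdir -p exceeded
for i in error/*.log; do
    awk '/Input/ {n=NR} {a[NR]=$0} END {for (i=n;i<=n+20;i++) print a[i]}' "$i" > "exceeded/$(basename "$i")"
done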
I expect no upvotes for this:
When I hear "do something after the last ...", I think "reverse the input and do something after the *first* ..."
This awk only has to remember 20 lines: helpful if you have large files.
for f in error/*.log; do
    tac "$f" |
    awk -v n=20 -v pattern="Input" '
        BEGIN { for (i=1; i<=n; i++) line[i] = "" }
        function keep_line(l) {
            for (i=2; i<=n; i++) line[i-1] = line[i]
            line[n] = l
        }
        $0 ~ pattern {
            for (i=1; i<=n; i++) print line[i]
            exit
        }
        { keep_line($0) }
    ' |
    tac > "exceeded/$(basename "$f")"
done
How's this?
for i in error/*.log; do
    awk '/Input/ { i=21; delete a; next }
         --i > 0 { a[21-i] = $0 }
         END { for (i=1; i<=20; ++i) print a[i] }' "$i" > exceeded/"${i#error/}"
done
for i in error/*.log; do
    awk 'NR==FNR{if(/Input/)n=NR;next} FNR>n && FNR<(n+21)' "$i" "$i" > "exceeded/$(basename "$i")"
done
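For readability, the same two-pass idea written out with comments (a sketch):
for i in error/*.log; do
    awk '
        # First pass (NR==FNR): remember the line number of the last "Input".
        NR==FNR { if (/Input/) n = NR; next }
        # Second pass: print the 20 lines following that line number.
        FNR > n && FNR < (n+21)
    ' "$i" "$i" > "exceeded/$(basename "$i")"
done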
I have the below text in a file:
1|2|SID1=/some/path|SID2=/some/path|4|5
1|2|SID1=/some/path|tel|path|SID2=/some/path|6|5|ord|til
1|2|SID1=/some/path|id1|id2|id3|SID2=/some/path|4|8|dea
In Linux, how do I search for SID1 and SID2 in each line and print only up to the next delimiter? The output should be:
SID1=/some/path SID2=/some/path
SID1=/some/path SID2=/some/path
SID1=/some/path SID2=/some/path
Perl to the rescue:
perl -lne 'print join " ", /SID[12]=[^|]*/g' file.txt
Explanation: Perl reads the file line by line (-n); the -l switch appends a newline to each print. All parts of the line matching SID, followed by 1 or 2, then =, then anything but |, are collected by the /g match and printed joined with spaces.
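The same extraction can also be sketched with grep and paste, assuming every line contains exactly two SID fields (paste joins each pair of consecutive matches with a space):
grep -oE 'SID[12]=[^|]*' file.txt | paste -d' ' - -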
I feel like I'm missing a better solution but this works
One-liner:
awk -F'|' '{a=0; for (i=1; i<=NF; i++) {if ($i ~ /^SID[[:digit:]]*=/) { printf "%s%s", a?OFS:(NR>1)?ORS:"", $i; a++ }}} END {print ""}' file
Explained:
awk -F'|' '{
# Reset our field tracking.
a=0
# Loop over all the fields in the line.
for (i=1; i<=NF; i++) {
# If the current field starts with 'SID#=' then
if ($i ~ /^SID[[:digit:]]*=/) {
# Print out the field with the appropriate separator.
# When we have 'a' set we are in a line and want to print out a
# leading OFS. Otherwise if this is not the first line we want to
# print out a leading ORS. Otherwise do nothing.
printf "%s%s", a?OFS:(NR>1)?ORS:"", $i
# Set our field tracking.
a=1
}
}
}
END {
# Print out the final newline.
print ""
}' file
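Running the one-liner above against the sample file should produce:
SID1=/some/path SID2=/some/path
SID1=/some/path SID2=/some/path
SID1=/some/path SID2=/some/path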
I hope you can help me.
I am trying to parse a string:
#!/bin/bash
file=$(<sample.txt)
echo "$file"
The file itself contains values like this:
(;FF[4]GM[1]SZ[19]CA[UTF-8]SO[sometext]BC[cn]WC[ja]
What I need is a way to extract the values between the [ ] and set them as variables, for example:
$FF=4
$GM=1
$SZ=19
and so on
However, some files do not contain all values, so in some cases there is no FF[*]. In that case the program should use the value 99.
How can I do this?
Thank you so much for your help.
Greetings
Chris
It may be a bit overcomplicated, but here is another way:
grep -Po '[-a-zA-Z0-9]*' file | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a,$0; next} {a=$0}' | bash
Step by step:
Filter the file by printing only the needed blocks:
$ grep -Po '[-a-zA-Z0-9]*' a
FF
4
GM
1
SZ
19
CA
UTF-8
SO
sometext
BC
cn
WC
ja
Reformat so that it specifies the declaration:
$ grep -Po '[-a-zA-Z0-9]*' a | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a,$0; next} {a=$0}'
declare FF="4";
declare GM="1";
declare SZ="19";
declare CA="UTF-8";
declare SO="sometext";
declare BC="cn";
declare WC="ja";
And finally pipe to bash so that it is executed.
Note that the 2nd step could also be rewritten as
xargs -n2 | awk '{print "declare "$1"=\""$2"\";"}'
I'd write this, using ;, [, or ] as awk's field separators:
$ line='(;FF[4]GM[1]SZ[19]CA[UTF-8]SO[sometext]BC[cn]WC[ja]'
$ awk -F '[][;]' '{for (i=2; i<NF; i+=2) {printf "%s=\"%s\" ", $i, $(i+1)}; print ""}' <<<"$line"
FF="4" GM="1" SZ="19" CA="UTF-8" SO="sometext" BC="cn" WC="ja"
Then, to evaluate the output in your current shell:
$ source <(!!)
source <(awk -F '[][;]' '{for (i=2; i<NF; i+=2) {printf "%s=\"%s\" ", $i, $(i+1)}; print ""}' <<<"$line")
$ echo $SO
sometext
To handle the default FF value:
$ source <(awk -F '[][;]' '{
print "FF=99"
for (i=2; i<NF; i+=2) printf "%s=\"%s\" ", $i, $(i+1)
print ""
}' <<< "(;A[1]B[2]")
$ echo $FF
99
$ source <(awk -F '[][;]' '{
print "FF=99"
for (i=2; i<NF; i+=2) printf "%s=\"%s\" ", $i, $(i+1)
print ""
}' <<< "(;A[1]B[2]FF[3]")
$ echo $FF
3
Per your request:
while IFS=\[ read -r A B; do
[[ -z $B ]] && B=99
eval "$A=\$B"
done < <(exec grep -oE '[[:alpha:]]+\[[^]]*' sample.txt)
Although using an associative array would be better:
declare -A VALUES
while IFS=\[ read -r A B; do
[[ -z $B ]] && B=99
VALUES[$A]=$B
done < <(exec grep -oE '[[:alpha:]]+\[[^]]*' sample.txt)
That way you have access both to the keys ("${!VALUES[@]}") and to the values ("${VALUES['FF']}").
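With the associative array you can also handle a missing FF directly, since a parameter-expansion default kicks in when the key was never set (bash 4+):
FF=${VALUES[FF]:-99}
echo "$FF"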
I would probably do something like this:
set $(sed -e 's/^(;//' sample.txt | tr '[][]' ' ')
while (( $# >= 2 ))
do
    varname=${1}
    varvalue=${2}
    # do something to test varname and varvalue to make sure they're sane/safe
    declare "${varname}=${varvalue}"
    shift 2
done
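The sanity check mentioned in the comment could be sketched like this, dropped in where that comment sits (the identifier rule is only an example; tighten it to whatever you consider safe):
# Hypothetical rule: accept only plain identifier names and non-empty values.
[[ $varname =~ ^[A-Za-z_][A-Za-z0-9_]*$ && -n $varvalue ]] || { shift 2; continue; }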
Suppose I have 3 records:
P1||1234|
P1|56001||
P1|||NJ
I want to merge these 3 records into one with all the attributes. Final record:
P1|56001|1234|NJ
Is there any way to achieve this in Unix/Linux?
I assume you're asking for a solution using bash, awk, sed, etc.
You could try something like
$ cat test.txt
P1||1234|
P1|56001||
P1|||NJ
$ cat test.txt | awk -F'|' '{ for (i = 1; i <= NF; i++) print $i }' | egrep '.+' | sort | uniq | awk 'BEGIN{ c = "" } { printf c $0; c = "|" } END{ printf "\n" }'
1234|56001|NJ|P1
Briefly, awk splits each line on the '|' separator and prints every field on its own line. egrep removes the empty lines. After that, sort and uniq remove duplicate attributes. Finally, awk merges the lines back together with '|' as the separator.
Update:
If I understand correctly, here's what you're looking for:
$ cat test.txt | awk -F'|' '{ for (i = 1; i <= NF; i++) if($i) col[i]=$i } END{ for (i = 1; i <= length(col); i++) printf col[i] (i == length(col) ? "\n" : "|")}'
P1|56001|1234|NJ
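Minor portability note: length() on an array is a gawk extension. If that matters, you could track the highest field count yourself, something like:
awk -F'|' '{ for (i = 1; i <= NF; i++) if ($i) col[i] = $i; if (NF > n) n = NF }
           END { for (i = 1; i <= n; i++) printf "%s%s", col[i], (i == n ? "\n" : "|") }' test.txt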
In your example, the 1st row has 1234 and the 2nd row has 56001.
I don't get why 56001 comes before 1234 in your final result; I assume it is a typo/mistake.
An awk one-liner could do the job:
awk -F'|' '{for(i=2;i<=NF;i++)if($i)a[$1]=(a[$1]?a[$1]"|":"")$i}END{print $1"|"a[$1]}'
with your data:
kent$ echo "P1||1234|
P1|56001||
P1|||NJ"|awk -F'|' '{for(i=2;i<=NF;i++)if($i)a[$1]=(a[$1]?a[$1]"|":"")$i}END{print $1"|"a[$1]}'
P1|1234|56001|NJ
I want to extract some data from a file and save it in an array, but I don't know how to do it.
In the following, I extract some data from /etc/group and save it in another file; after that I print every single item:
awk -F: '/^'$GROUP'/ { gsub(/,/,"\n",$4) ; print $4 }' /etc/group > $FILE
for i in `awk '{ print $0 }' $FILE`
do
echo "member: "$i" "
done
However, I don't want to extract the data into a file, but into an array.
members=( $(awk -F: '/^'$GROUP':/ { gsub(/,/,"\n",$4) ; print $4 }' /etc/group) )
The assignment with the parentheses indicates that $members is an array. The original awk command has been enclosed in $(...), and the colon added so that if you have group and group1 in the file, and you look for group, you don't get the data for group1 too. Of course, if you wanted both entries, then you drop the colon I added.
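The temp-file loop can then be replaced by iterating over the array directly, for example:
for m in "${members[@]}"; do
    echo "member: $m"
done
echo "total members: ${#members[@]}"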
j=0
for i in `awk '{ print $0 }' $FILE`
do
    arr[$j]=$i
    j=`expr $j + 1`
done
arr=($(awk -F: -v g="$GROUP" '$1 == g { gsub(/,/,"\n",$4) ; print $4 }' /etc/group))