Read 3 output lines into 3 variables in bash - linux

I have a command that generates 3 lines of output such as
$ ./mycommand
1
asdf
qwer zxcv
I'd like to assign those 3 lines to 3 different variables ($a, $b, $c) such that
$ echo $a
1
$ echo $b
asdf
$ echo $c
qwer zxcv
I'm familiar with the while loop method that would normally be used for reading 3 lines at a time from output that contains sets of 3 lines. But that seems less than elegant considering my command will only ever output 3 lines.
I tried playing around with various combinations of values for IFS= and options for read -r a b c, sending the command output as stdin, but I could only ever get it to set the first line to the first variable. Some examples:
IFS= read -r a b c < <(./mycommand)
IFS=$'\n' read -r a b c < <(./mycommand)
IFS= read -r -d $'\n' < <(./mycommand)
If I modify my command so that the 3 lines are separated by spaces instead of newlines, I can successfully just use this variation as long as each former line is properly quoted:
read -r a b c < <(./mycommand)
And while that is working for the purposes of my current project, it's still bugging me that I couldn't get it to work the other way. So I'm wondering if anyone can see and explain what I was missing in my original attempt with the 3 lines of output.

If you want to read data from three lines, use three reads:
{ read -r a; read -r b; read -r c; } < <(./mycommand)
read reads a chunk of data and then splits it up. You couldn't get it to work because your chunks were always single lines.
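For example, with output like the question's, this assigns one line per variable (a sketch, using a stand-in function in place of ./mycommand):
mycommand() { printf '1\nasdf\nqwer zxcv\n'; }
{ read -r a; read -r b; read -r c; } < <(mycommand)
printf '%s\n' "$a" "$b" "$c"    # prints: 1, asdf, qwer zxcv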

Newer Bash versions support the mapfile builtin. Using that, you can read all the lines into an array:
mapfile -t ary < <(./mycommand)
Check the array content:
declare -p ary
declare -a ary='([0]="1" [1]="asdf" [2]="qwer zxcv")'
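If you still want the three named variables from the question, one way (assuming exactly three lines of output) is to assign them from the array:
a=${ary[0]} b=${ary[1]} c=${ary[2]}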

Perhaps this explanation will be useful to you.
... it's still bugging me that I couldn't get it to work the other way. So I'm wondering if anyone can see and explain what I was missing in my original attempt with the 3 lines of output.
Simple: read works only with one line (by default). This:
#!/bin/bash
mycommand(){ echo -e "1\nasdf\nqwer zxcv"; }
read a b c < <(mycommand)
printf 'first : %s\nsecond : %s\nthird : %s\n' "$a" "$b" "$c"
Will print:
first : 1
second :
third :
However, using a null delimiter makes read capture the whole output (replace the read line in the script above with this):
read -d '' a b c < <(mycommand)
Will print:
first : 1
second : asdf
third : qwer zxcv
The read command absorbed the whole output of the command, which was then split into parts using the default value of IFS: space, tab, and newline.
In this specific example, that worked correctly because the last value is the one with more than one "part", and read assigns everything left over to the last variable.
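As an aside, combining the two attempts from the question fixes them: keep -d '' so that read consumes the whole output, and set IFS to a newline so it splits only at line boundaries. A sketch:
IFS=$'\n' read -r -d '' a b c < <(mycommand)
read returns non-zero here (no null byte is found before end of input), but the variables are still assigned: a=1, b=asdf, c=qwer zxcv.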
But this default-IFS kind of processing is very brittle. For example, with this other possible output of the command, the assignment to variables breaks:
mycommand(){ echo -e "1 and 2\nasdf and dfgh\nqwer zxcv"; }
Will output (incorrectly):
first : 1
second : and
third : 2
asdf and dfgh
qwer zxcv
To make the processing robust we need to use a loop. But you say that is something you already know:
#!/bin/bash
mycommand(){ echo -e "1 and 2\nasdf and dfgh\nqwer zxcv"; }
i=0; while read -r "arr[i]"; do ((i++)); done < <(mycommand)
printf 'first : %s\nsecond : %s\nthird : %s\n' "${arr[0]}" "${arr[1]}" "${arr[2]}"
Which will (correctly) print:
first : 1 and 2
second : asdf and dfgh
third : qwer zxcv
However, the loop can be made simpler with the bash builtin readarray:
#!/bin/bash
mycommand(){ echo -e "1 and 2\nasdf and dfgh\nqwer zxcv"; }
readarray -t arr < <(mycommand)
printf 'first : %s\nsecond : %s\nthird : %s\n' "${arr[0]}" "${arr[1]}" "${arr[2]}"
And using a printf "loop" makes the structure work for any number of input lines:
#!/bin/bash
mycommand(){ echo -e "1 and 2\nasdf and dfgh\n*\nqwer zxcv"; }
readarray -t arr < <(mycommand)
printf 'value : %s\n' "${arr[@]}"
Hope that this helped.
EDIT
About nulls (in simple read):
In bash, the use of nulls is almost never practical. Specifically, nulls are silently erased under most conditions. This solution suffers from that limitation.
Including a null in the input:
mycommand(){ echo -e "1 and 2\nasdf and dfgh\n\000\n*\nqwer zxcv"; }
will make a simple read -r -d '' capture the input only up to the first null (null meaning the character with octal value 000).
echo "test one:"; echo
echo "input"; echo
mycommand | od -tcx1
echo "output"; echo
read -r -d '' arr < <(mycommand)
echo "$arr" | od -tcx1
Gives this as output:
test one:
input
0000000 1 a n d 2 \n a s d f a n d
31 20 61 6e 64 20 32 0a 61 73 64 66 20 61 6e 64
0000020 d f g h \n \0 \n * \n q w e r z
20 64 66 67 68 0a 00 0a 2a 0a 71 77 65 72 20 7a
0000040 x c v \n
78 63 76 0a
0000044
output
0000000 1 a n d 2 \n a s d f a n d
31 20 61 6e 64 20 32 0a 61 73 64 66 20 61 6e 64
0000020 d f g h \n
20 64 66 67 68 0a
0000026
It is clear that the value captured by read stops at the first octal 000.
Which, frankly, is to be expected.
About nulls (in readarray):
I have to report, however, that readarray does not stop at the octal 000 but just silently removes it (a usual shell trait).
Running this code:
#!/bin/bash
mycommand(){ echo -e "1 and 2\nasdf and dfgh\n\000\n*\nqwer zxcv"; }
echo "test two:"; echo
echo "input"; echo
mycommand | od -tcx1
echo "output"; echo
readarray -t arr < <(mycommand)
printf 'value : %s\n' "${arr[@]}"
echo
printf 'value : %s\n' "${arr[@]}"|od -tcx1
Renders this output:
test two:
input
0000000 1 a n d 2 \n a s d f a n d
31 20 61 6e 64 20 32 0a 61 73 64 66 20 61 6e 64
0000020 d f g h \n \0 \n * \n q w e r z
20 64 66 67 68 0a 00 0a 2a 0a 71 77 65 72 20 7a
0000040 x c v \n
78 63 76 0a
0000044
output
value : 1 and 2
value : asdf and dfgh
value :
value : *
value : qwer zxcv
0000000 v a l u e : 1 a n d 2 \n
76 61 6c 75 65 20 3a 20 31 20 61 6e 64 20 32 0a
0000020 v a l u e : a s d f a n d
76 61 6c 75 65 20 3a 20 61 73 64 66 20 61 6e 64
0000040 d f g h \n v a l u e : \n v
20 64 66 67 68 0a 76 61 6c 75 65 20 3a 20 0a 76
0000060 a l u e : * \n v a l u e :
61 6c 75 65 20 3a 20 2a 0a 76 61 6c 75 65 20 3a
0000100 q w e r z x c v \n
20 71 77 65 72 20 7a 78 63 76 0a
0000113
That is, the null 000 or just \0 gets silently removed.
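If relying on that silent removal feels risky, you can strip the nulls explicitly before readarray ever sees them - a sketch:
readarray -t arr < <(mycommand | tr -d '\0')
tr -d '\0' deletes the null bytes up front, so the array contents are the same, but the removal is now deliberate rather than implicit.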

Related

Remove \r character from string pattern matched in AWK

I'm quite new to AWK so apologies for the basic question. I've found many references for removing Windows line-ending characters from files, but none that first match a regular expression and then remove the Windows line endings.
I have a file named infile.txt that contains a line like so:
...
DATAFILE data5v.dat
...
Within a shell script I want to capture the filename argument data5v.dat from this infile.txt and remove any carriage return character, \r, IF present. The carriage return may not always be present. So I have to match a word and then remove the \r subsequently.
I have tried the following but it is not working how I expect:
FILENAME=$(awk '/DATAFILE/ { print gsub("\r", "", $2) }' $INFILE)
Can I store the string returned from matching my regex /DATAFILE/ in a variable within my AWK statement to subsequently apply gsub?
File names can contain whitespace, including \rs, blanks and tabs, so to do this robustly you can't remove all \rs with gsub() and you can't rely on there being any one field, e.g. $2, that contains the whole file name.
If your input fields are tab-separated you need:
awk '/DATAFILE/ { sub(/[^\t]+\t/,""); sub(/\r$/,""); print }' file
or this otherwise:
awk '/DATAFILE/ { sub(/[^[:space:]]+[[:space:]]+/,""); sub(/\r$/,""); print }' file
The above assumes your file names don't start with spaces and don't contain newlines.
To test any solution for robustness try:
printf 'DATAFILE\tfoo \r\tbar\r\n' | awk '...' | cat -TEv
and make sure that the output looks like it does below:
$ printf 'DATAFILE\tfoo \r\tbar\r\n' | awk '/DATAFILE/ { sub(/[^\t]+\t/,""); sub(/\r$/,""); print }' | cat -TEv
foo ^M^Ibar$
$ printf 'DATAFILE\tfoo \r\tbar\r\n' | awk '/DATAFILE/ { sub(/[^[:space:]]+[[:space:]]+/,""); sub(/\r$/,""); print }' | cat -TEv
foo ^M^Ibar$
Note the blank, ^M (CR), and ^I (tab) in the middle of the file name as they should be but no ^M at the end of the line.
If your version of cat doesn't support -T or -E then do whatever you normally do to look for non-printing chars, e.g. od -c or vi the output.
With GNU awk, would you please try the following:
FILENAME=$(awk -v RS='\r?\n' '/DATAFILE/ {print $2}' "$INFILE")
echo "$FILENAME"
It assigns the record separator RS to a sequence of zero or one \r followed by \n.
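A quick way to check the behavior - a sketch that simulates a line with a trailing CR (this needs GNU awk, since a regular-expression RS is a gawk feature):
printf 'DATAFILE\tdata5v.dat\r\n' | awk -v RS='\r?\n' '/DATAFILE/ {print $2}' | cat -v
This prints data5v.dat with no trailing ^M, showing the CR was consumed as part of the record separator.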
As a side note, it is not recommended to use all-uppercase names for user variables, because they may conflict with reserved system variable names.
Awk simply applies every rule in the script to each input line. You can easily remove the carriage return first and then apply some other logic to the line. For example,
FILENAME=$(awk '/\r/ { sub(/\r/, "") }
/DATAFILE/ { print $2 }' "$INFILE")
Notice also When to wrap quotes around a shell variable.
who says you need gnu-awk :
gecho -ne "test\r\nabc\n\rdef\n" \
| mawk NF=NF FS='\r' OFS='' | odview
0000000 1953719668 1667391754 1717920778 10
t e s t \n a b c \n d e f \n
164 145 163 164 012 141 142 143 012 144 145 146 012
t e s t nl a b c nl d e f nl
116 101 115 116 10 97 98 99 10 100 101 102 10
74 65 73 74 0a 61 62 63 0a 64 65 66 0a
0000015
gawk -P posix mode is also fine with it :
gecho -ne "test\r\nabc\n\rdef\n" \
| gawk -Pe NF=NF FS='\r' OFS='' | odview
0000000 1953719668 1667391754 1717920778 10
t e s t \n a b c \n d e f \n
164 145 163 164 012 141 142 143 012 144 145 146 012
t e s t nl a b c nl d e f nl
116 101 115 116 10 97 98 99 10 100 101 102 10
74 65 73 74 0a 61 62 63 0a 64 65 66 0a
0000015

Even after `sort`, `uniq` is still repeating some values

Reference file: http://snap.stanford.edu/data/wiki-Vote.txt.gz
(It is a gzip archive that contains a file called Wiki-Vote.txt)
The first few lines in the file that contains the following, head -n 10 Wiki-Vote.txt
# Directed graph (each unordered pair of nodes is saved once): Wiki-Vote.txt
# Wikipedia voting on promotion to administratorship (till January 2008).
# Directed edge A->B means user A voted on B becoming Wikipedia administrator.
# Nodes: 7115 Edges: 103689
# FromNodeId ToNodeId
30 1412
30 3352
30 5254
30 5543
30 7478
3 28
I want to find the number of nodes in the graph (although it's already given in the header). I ran the following command,
awk '!/^#/ { print $1; print $2; }' Wiki-Vote.txt | sort | uniq | wc -l
Explanation:
/^#/ matches all the lines that start with #, and !/^#/ matches those that don't.
awk '!/^#/ { print $1; print $2; }' Wiki-Vote.txt prints the first and second column of all those matched lines in new lines.
| sort pipes the output to sort them.
| uniq should display all those unique values, but it doesn't.
| wc -l counts the resulting lines, and the count is wrong.
The result of the above command is 8491, which is not 7115 (the figure given in the header). I don't know why uniq repeats the values. I can tell that it does, since awk '!/^#/ { print $1; print $2; }' Wiki-Vote.txt | sort -i | uniq | tail returns,
992
993
993
994
994
995
996
998
999
999
Which contains the repeated values. Someone please run the code and tell me that I am not the only one getting the wrong answer and please help me figure out why I'm getting what I am getting.
The file has DOS line endings - each line ends with the CR character \r before the newline.
You can inspect your tail output with hexdump -C, for example (lines starting with # added by me):
$ awk '!/^#/ { print $1; print $2; }' ./wiki-Vote.txt | sort | uniq | tail | hexdump -C
00000000 39 39 32 0a 39 39 33 0a 39 39 33 0d 0a 39 39 34 |992.993.993..994|
# ^^ HERE
00000010 0a 39 39 34 0d 0a 39 39 35 0d 0a 39 39 36 0a 39 |.994..995..996.9|
# ^^ ^^
00000020 39 38 0a 39 39 39 0a 39 39 39 0d 0a |98.999.999..|
# ^^
0000002c
Because uniq sees distinct lines - one with the CR and one without - the apparent duplicates are not removed. Remove the CR character before piping. Note also that sort | uniq is better written as sort -u.
$ awk '!/^#/ { print $1; print $2; }' ./wiki-Vote.txt | tr -d '\r' | sort -u | wc -l
7115
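Alternatively, the CR can be stripped inside the awk script itself, saving a process - a sketch:
awk '!/^#/ { sub(/\r$/, ""); print $1; print $2 }' ./wiki-Vote.txt | sort -u | wc -l
sub(/\r$/, "") removes a trailing CR from the record (if present) before the fields are printed.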

How to delimit file with "\t\n" on a Mac

I have a document whose lines are separated by "\t\n". Records are separated either by "\t", OR by "\n".
Normally, this should be a straightforward awk query:
BEGIN {
RS='\t\n';
}
{
print;
print "Next entry:";
}
However, on a Mac, regular expressions in RS do not seem to be supported (maybe I'm not doing something right?). So I tried RS="\t\n"; however, this is interpreted as RS='\t | \n'. Similar problems occur running awk from the command line:
awk 1 RS='\t\n' ORS='abc' input > output
replaces the \t's, but leaves the \n's be.
Next try: using tr. This obviously fails for sequence of more than one character-- since \t and \n are both used individually in the rows.
Next:
sed -e '/\t\n/s//NextEntry:/g' input > output
However, doesn't work. Entering any ASCII character sequence instead of \t\n works.
Read the manual. It says that \t is not supported in sed strings. Fair enough:
sed -e '/\x9\xa/s//abc/' input > output
Still doesn't work. Idea: use tr to replace \t and \n by characters unused in the input file, use sed to change them to what I want, and then tr to change the remaining characters back to what they should be.
tr: Illegal byte sequence
Turns out, that f6 character makes tr just totally fail.
Went through the suggestions in Sed not recognizing \t instead it is treating it as 't' why? . That might work for replacing output strings (except the "Pasting tab into command prompt via CTRL+V" suggestion-- the shell just rejected that paste.), but did not seem to help in my case.
Maybe it's because it's a Mac? Maybe it's because that's the text I'm looking for, not replacing with? Maybe it's the combination with \n?
Any other suggestions?
UPDATE:
I found thread How can I replace a newline (\n) using sed? . Apparently, I am unable even to replace a \n by the string "abc" using the suggestions in that thread.
EDIT: Hex head of source file:
5a 20 4e 4f 09 0a 41 53 20 4f 46 20 30 31 2d 30
34 2d 30 35 20 45 4d 50 4c 4f 59 45 45 0a 47 52
4f 55 50 09 48 49 52 45 20 44 41 54 45 09 53 41
4c 41 52 59 09 4a 4f 42 20 54 49 54 4c 45 09 0a
4a 4f 42 20 4c 45 56 45 4c 0a 53 45 52 49 45 53
09 41 50 50 54 20 54 59 50 45 09 0a 50 41 59 20
53 54 41 54 55 53 0a f6
Unfortunately, BSD awk, as also used on macOS, doesn't support multi-character record separators (RS) at all (in line with POSIX) - only a single, literal character is supported.
BSD sed, as also used on macOS, supports only \n in regexes - any other escapes, including hex ones (e.g., \x09) are not supported.
See this answer of mine for a comprehensive comparison of GNU and BSD sed.
Assuming that your sed command works in principle, you can use an ANSI C-quoted string ($'\t') to splice a literal tab char. into your sed script (this assumes bash (the macOS default shell), ksh, or zsh):
sed -e ':a' -e '$!{N;ba' -e '}' -e '/'$'\t''\n/s//NextEntry:/g'
Note that, in order to replace newlines, you must instruct sed to read the entire file into memory first, which is what -e ':a' -e '$!{N;ba' -e '}' does (the BSD Sed-compatible form of the common GNU sed idiom :a;$!{N;ba}).
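Alternatively, if installing GNU awk (e.g., via Homebrew) is an option, it supports a multi-character, regex-capable RS, so the approach from the question works as intended - a sketch:
gawk 'BEGIN { RS="\t\n" } { print; print "Next entry:" }' input > output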

Compare two files having different column numbers and print the requirement to a new file if condition satisfies

I have two files with more than 10000 rows:
File1 has 1 column; File2 has 4 columns:
File1    File2
23       23 88 90 0
34       43 74 58 5
43       54 87 52 3
54       73 52 35 4
.        .
.        .
I want to compare each value in file-1 with the first column of file-2. If it exists there, print the matching row of file-2 (the value along with the other three values). In this example the output will be:
23 88 90 0
43 74 58 5
54 87 52 3
.
.
I have written the following script, but it takes too much time to execute.
s1=1; s2=$(wc -l < File1.txt)
while [ $s1 -le $s2 ]
do n=$(awk 'NR=="$s1" {print $1}' File1.txt)
p1=1; p2=$(wc -l < File2.txt)
while [ $p1 -le $p2 ]
do awk '{if ($1==$n) printf ("%s %s %s %s\n", $1, $2, $3, $4);}'> ofile.txt
(( p1++ ))
done
(( s1++ ))
done
Is there any short/ easy way to do it?
You can do it very concisely using awk:
awk 'FNR==NR{found[$1]++; next} $1 in found'
Test
>>> cat file1
23
34
43
54
>>> cat file2
23 88 90 0
43 74 58 5
54 87 52 3
73 52 35 4
>>> awk 'FNR==NR{found[$1]++; next} $1 in found' file1 file2
23 88 90 0
43 74 58 5
54 87 52 3
What does it do?
FNR==NR checks whether FNR, the record number within the current file, equals NR, the total record number. This is true only for the first file, file1, because FNR is reset to 1 when awk starts reading a new file.
{found[$1]++; next} If the check is true, this records $1, the first column of file1, as an index in the associative array found, then skips to the next record.
$1 in found This check is only reached for the second file, file2. If the column 1 value $1 is an index in the associative array found, the entire line is printed (the print is not written out because printing is the default action).
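To produce the new file the question asks for, redirect the output, using the question's file names:
awk 'FNR==NR{found[$1]++; next} $1 in found' File1.txt File2.txt > ofile.txt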

How to search, replace specific hex code in automated way

I have a 100M row file that has some encoding problems -- was "originally" EBCDIC, saved as US-ASCII, now UTF-8. I don't know much more about its heritage, sorry -- I've just been asked to analyze the content.
The "cents" character from EBCDIC is "hidden" in this file in random places, causing all sorts of errors. Here is more on this bugger: cents character in hex
Converting this file using iconv -f foo -t UTF-8 -c is not working -- the cents character prevails.
When I use a hex editor, I can find occurrences of 0xC2 0xA2 (c2a2). But in a BIG file, this isn't ideal. sed doesn't work at the hex level, so... Not sure about tr -- I only really use it for carriage return / newline.
What linux utility / command can I use to find and delete this character reasonably quickly on very big files?
2 parts:
1 -- utility / command to find / count the number of these occurrences (octal \242)
2 -- command to replace (this works tr '\242' ' ' < source > output )
How the text appears on my ubuntu terminal:
1019EQ?IT DEPT GENERATED
With xxd, how it looks at hex level (ascii to the side looks the same as above):
0000000: 3130 3139 4551 a249 5420 4445 5054 2047 454e 4552 4154 4544 0d0a
With xxd, how it looks with "show ebcdic" -- here, just showing the ebcdic from side:
......s.....&....+........
So hex "a2" is the culprit. I'm now trying xxd -E foo | grep a2 to count the instances up.
Adding output from od -ctxl, rather than xxd, for those interested:
0000000 1 0 1 9 E Q 242 I T D E P T G
31 30 31 39 45 51 a2 49 54 20 44 45 50 54 20 47
0000020 E N E R A T E D \r \n
45 4e 45 52 41 54 45 44 0d 0a
When you say the file was converted, what do you mean? Do you mean the binary file was simply dumped from an IBM 360 to another ASCII-based computer, or was the file itself converted over to ASCII when it was transferred?
The question is whether the file is actually in a well-encoded state or not. The other question is: how do you want the file encoded?
On my Mac (which uses UTF-8 by default, just like Linux systems), I have no problem using sed to get rid of the ¢ character:
Here's my file:
$ cat test.txt
This is a test --¢-- TEST TEST
$ od -ctx1 test.txt
0000000 T h i s i s a t e s t -
54 68 69 73 20 69 73 20 61 20 74 65 73 74 20 2d
0000020 - ¢ ** - - T E S T T E S T \n
2d c2 a2 2d 2d 20 54 45 53 54 20 54 45 53 54 0a
0000040
You can see that cat has no problems printing out that ¢ character. And, you can see in the od dump the c2a2 encoding of the ¢ character.
$ sed 's/¢/$/g' test.txt > new_test.txt
$ cat new_test.txt
This is a test --$-- TEST TEST
$ od -ctx1 new_test.txt
0000000 T h i s i s a t e s t -
54 68 69 73 20 69 73 20 61 20 74 65 73 74 20 2d
0000020 - $ - - T E S T T E S T \n
2d 24 2d 2d 20 54 45 53 54 20 54 45 53 54 0a
0000037
Here, sed has no problems changing that ¢ into a $ sign. The dump now shows that this test file is equivalent to a strictly ASCII-encoded file. The ¢ that took two bytes to encode is now a nice clean single-byte $.
It looks like sed can handle your issue.
If you want to use this file on a Windows system, you can convert the file to the standard Windows Code Page 1252:
$ iconv -f utf8 -t cp1252 test.txt > new_test.txt
$ cat new_test.txt
This is a test --?-- TEST TEST
$ od -ctx1 new_test.txt
0000000 T h i s i s a t e s t -
54 68 69 73 20 69 73 20 61 20 74 65 73 74 20 2d
0000020 - 242 - - T E S T T E S T \n
2d a2 2d 2d 20 54 45 53 54 20 54 45 53 54 0a
0000037
Here's the file, now in Code Page 1252, just the way Windows likes it! Note that the ¢ is now a single byte, shown by od as octal 242 (hex a2).
So, what exactly is the issue? Do you need the file in pure ASCII (the 127 characters ASCII defines)? Do you need the file encoded so Windows machines can work on it? Are you having problems entering the ¢ character?
Let me know. I'm not from the government, and yet I'm here to help you.
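For the counting half of your question (part 1), tr can also produce a byte-level count to pair with the tr replacement you already have - a sketch, assuming the character shows up as the single byte octal 242, as in your od output:
LC_ALL=C tr -dc '\242' < source | wc -c
tr -dc '\242' deletes every byte except octal 242, and wc -c counts the bytes that remain, i.e. the number of occurrences.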
