Joining consecutive lines using awk

How can I join consecutive lines into a single line using awk? Currently I have this awk command:
awk -F "\"*;\"*" '{if (NR!=1) {print $2}}' file.csv
It removes the first line and prints the second field of each remaining line:
44895436200043
38401951900014
72204547300054
38929771400013
32116464200027
50744963500014
I want to have this instead:
44895436200043 38401951900014 72204547300054 38929771400013 32116464200027 50744963500014

That's a job for tr:
# tail -n +2 prints the whole file from line 2 on
# tr '\n' ' ' translates newlines to spaces
tail -n +2 file | tr '\n' ' '
With awk, you can achieve this by changing the output record separator to " ":
# BEGIN{ORS= " "} sets the internal output record separator to a single space
# NR!=1 adds a condition to the default action (print)
awk 'BEGIN{ORS=" "} NR!=1' file
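Note that both versions end with a trailing space and no final newline. If whatever consumes the output expects a newline-terminated line, a small variation does it (a sketch):
# print a space before every record except the first, then finish with a newline
awk 'NR!=1{printf "%s%s", (NR>2 ? " " : ""), $0} END{print ""}' file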

I assume you want to modify your existing awk so that it prints a horizontal, space-separated list instead of one word per row. To do that, you can replace the print $2 action:
awk -F "\"*;\"*" 'NR!=1{u=u s $2; s=" "} END {print u}' file.csv
or replace the ORS (output record separator)
awk -F "\"*;\"*" -v ORS=" " 'NR!=1{print $2}' file.csv
or pipe output to xargs:
awk -F "\"*;\"*" 'NR!=1{print $2}' file.csv | xargs

How to join newline-separated strings within single or double quotes

How can I join newline-separated strings within single or double quotes, separated by commas?
Example:
I have the names below:
$ cat file
James kurt
Suji sane
Bhujji La
Loki Hapa
Desired:
"James kurt", "Suji sane", "Bhujji La", "Loki Hapa"
EDIT:
My efforts so far:
Below is what I have done, but it completes in two steps; I'm just curious whether it can be combined into one.
$ awk '{print "\x22" $1" "$2 "\x22"}' file | tr '\n' ','
First print all lines wrapped in ", then join the lines with a comma:
< file xargs -d '\n' printf '"%s"\n' | paste -sd,
Instead of joining on newlines, you could print the comma directly and then remove the trailing (or leading) comma:
< file xargs -d '\n' printf '"%s",' | sed 's/,$//'
< file xargs -d '\n' printf ',"%s"' | cut -c2-
< file xargs -d '\n' printf ', "%s"' | cut -c3- # with space after comma
With sed, add the " pair and accumulate the lines in the hold space; then on the last line replace each newline with a comma and space, remove the leading comma, and print:
sed -n 's/^/"/;s/$/"/;H;${x;s/\n/, /g;s/^, //;p}' file
You were close! $1" "$2 merely rebuilds the whole line from its two fields, so you can print $0 directly; what remains is the trailing comma. You could:
awk '{print "\x22" $0 "\x22"}' file | tr '\n' ',' |
# and then remove trailing comma:
sed 's/,$//'
But joining the lines with paste is just simpler than replacing newlines with commas and removing the last one:
awk '{print "\x22" $0 "\x22"}' file | paste -sd,
Could you please try the following:
awk -v lines="$(wc -l < Input_file)" -v s1="\"" '
BEGIN{
OFS=", "
}
{
printf("%s%s",s1 $0 s1,lines==FNR?ORS:OFS)
}
' Input_file
Explanation: Adding detailed explanation for above.
awk -v lines="$(wc -l < Input_file)" -v s1="\"" ' ##Starting awk program, creating variable lines which has total number of lines in Input_file and creating s1 variable with " in it.
BEGIN{ ##Starting BEGIN section of this program from here.
OFS=", " ##Setting OFS value as comma space here.
}
{
printf("%s%s",s1 $0 s1,lines==FNR?ORS:OFS) ##Printing current line and either printing space or new line as per condition.
}
' Input_file ##Mentioning Input_file name here.
awk '{printf "%s",(NR==1?"":",")"\042"$0"\042"}END{print ""}'
Note that the END statement is only used to add the final newline to the output; the trailing newline makes the output a POSIX-compliant text file.
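If you want a space after each comma, as in the desired output, it is a one-character change (a sketch):
awk '{printf "%s",(NR==1?"":", ")"\042"$0"\042"}END{print ""}' file
# -> "James kurt", "Suji sane", "Bhujji La", "Loki Hapa"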
This might work for you (GNU sed):
sed ':a;N;$!ba;s/.*/"&"/mg;s/\n/, /g' file
Slurp file into the pattern space, surround lines by double quotes and replace newlines by a comma and a space.
Alternative:
sed -z 's/\n$//;s/.*/"&"/mg;s/\n/, /g;s/$/\n/' file

Replace sed command text inline

I have this file
file.txt
unknown#mail.com||unknown#mail.com||
unknown#mail2.com||unknown#mail2.com||
unknown#mail3.com||unknown#mail3.com||
unknown#mail4.com||unknown#mail4.com||
unknownpass
unknownpass2
unknownpass3
unknownpass4
How can I use the sed command to obtain this:
unknown#mail.com|unknownpass|unknown#mail.com|unknownpass|
unknown#mail2.com|unknownpass2|unknown#mail2.com|unknownpass2|
unknown#mail3.com|unknownpass3|unknown#mail3.com|unknownpass3|
unknown#mail4.com|unknownpass4|unknown#mail4.com|unknownpass4|
This might work for you (GNU sed):
sed ':a;N;/\n[^|\n]*$/!ba;s/||\([^|]*\)||\(\n.*\)*\n\(.*\)$/|\3|\1|\3|\2/;P;D' file
Gather lines into the pattern space until the appended line contains no | (the first password). The substitution captures the first line's email (\1), the intervening lines (\2) and that password (\3), rewrites the first line as email|password|email|password| and reinstates \2; P prints the finished first line, D deletes it, and the cycle repeats with the next password.
Well, this does use sed anyway:
{ sed -n 5,\$p file.txt; sed 4q file.txt; } | awk 'NR<5{a[NR]=$0; next}
{$2=a[NR-4]; $4=a[NR-4]} 1' FS=\| OFS=\|
The first sed emits lines 5 onward (the passwords), the second the first four lines; awk therefore sees the passwords first, caches them, and fills them into the email lines that follow.
awk to the rescue!
awk 'BEGIN {FS=OFS="|"}
NR==FNR {if(NF==1) a[++c]=$1; next}
NF>4 {$2=a[FNR]; $4=$2; print}' file{,}
A two-pass algorithm: the first pass caches the password entries, the second inserts them into the empty fields; it assumes the item counts match. (file{,} is shell brace expansion for file file, which makes awk read the input twice; NR==FNR is only true during the first pass.)
Here is another, single-pass approach: awk wrapped in tac, so the passwords reach awk first, and the trailing tac restores the original order:
tac file |
awk 'BEGIN {FS=OFS="|"}
NF==1 {a[++c]=$1}
NF>4 {$2=a[c--]; $4=$2; print}' |
tac
I would combine the related lines with paste and reshuffle the elements with awk (I assume the related lines are exactly half a file away):
n=$(wc -l < file.txt)
paste -d'|' <(head -n $((n/2)) file.txt) <(tail -n $((n/2)) file.txt) |
awk '{ print $1, $6, $3, $6, "" }' FS='|' OFS='|'
Output:
unknown#mail.com|unknownpass|unknown#mail.com|unknownpass|
unknown#mail2.com|unknownpass2|unknown#mail2.com|unknownpass2|
unknown#mail3.com|unknownpass3|unknown#mail3.com|unknownpass3|
unknown#mail4.com|unknownpass4|unknown#mail4.com|unknownpass4|

Bash: extract string from text file with space delimiter

I have a text files with a line like this in them:
MC exp. sig-250-0 events & $0.98 \pm 0.15$ & $3.57 \pm 0.23$ \\
sig-250-0 is something that can change from file to file (but I always know what it is for each file). There are lines before and after this one, but the string "MC exp. sig-250-0 events" is unique in the file.
For a particular file, is there a good way to extract the second number 3.57 in the above example using bash?
Use awk for this:
awk '/MC exp. sig-250-0/ {print $10}' your.txt
Note that this will print $3.57 with the leading $; if you don't want that, pipe the output to tr:
awk '/MC exp. sig-250-0/ {print $10}' your.txt | tr -d '$'
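A quick check with the sample line (a sketch; printf '%s' keeps the backslashes literal):
printf '%s\n' 'MC exp. sig-250-0 events & $0.98 \pm 0.15$ & $3.57 \pm 0.23$ \\' > your.txt
awk '/MC exp. sig-250-0/ {print $10}' your.txt | tr -d '$'
# -> 3.57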
In comments you wrote that you need to call it in a script like this:
while read p ; do
echo $p,awk '/MC exp. sig-$p/ {print $10}' filename | tr -d '$'
done < grid.txt
Note that you need command substitution $( ) for the awk pipeline, like this:
echo "$p",$(awk '/MC exp. sig-$p/ {print $10}' filename | tr -d '$')
If you want to pass a shell variable to the awk pattern, use -v and match with ~ (a pattern written as /p/ would match the literal letter p, not the variable):
awk -v p="MC exp. sig-$p" '$0 ~ p {print $10}' a.txt | tr -d '$'
More sample lines would have been nice, but I guess you would like a simple awk approach.
awk '{print $N}' file # N = the number of the field you want
If you don't tell awk which field separator to use, it splits on runs of whitespace. Now you just have to count the fields to find the one you want; in your case that is field 10.
awk '{print $10}' file.txt
$3.57
Don't want the $?
Pipe your awk result to cut:
awk '{print $10}' foo | cut -d'$' -f2
-d uses the $ as the field separator and -f selects the second field.
If you know you always have the same number of fields, then
#!/bin/bash
file=$1
key=$2
while read -ra f; do
if [[ "${f[0]} ${f[1]} ${f[2]} ${f[3]}" == "MC exp. $key events" ]]; then
echo "${f[9]}"
fi
done < "$file"

How to remove padding from awk command?

I have a 10000-line file that contains on each line a string of the form "data:key", right-padded by 8 characters, where ':' is the delimiter. I am attempting to use awk from within Linux to print these pairs on their own lines, so that line #1 = data and line #2 = key, and I have achieved this using the command:
awk -F: '{print $1; print $2}' < ~/prices.txt
My problem occurs on the second line of each set. For some reason, it is padded with as much whitespace as there was from removing the data from the line. So, if my line was "26900:9976", the first line would be '26900' and the second line would be ' 9976', whitespace included.
If curious, I want to do it this way because I am piping the results to db_load to use within a B+-tree.
Not exactly your answer but you can use tr for this:
tr ':' '\n' < input
Also, I don't see the behaviour you are describing with your awk command; however, you can always add a sed to the pipeline to remove leading whitespace:
tr ':' '\n' < ~/prices.txt | sed 's/^[ \t]*//'
awk -F: '{print $1; print$2}' < ~/prices.txt | sed 's/^[ \t]*//'
You can use a regular expression as the field separator: a colon followed by zero or more whitespace chars will separate the fields.
awk -F ':[[:space:]]*' '{print $1; print $2}' < ~/prices.txt
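A quick check with a padded pair like the one in the question (a sketch):
printf '26900:     9976\n' | awk -F ':[[:space:]]*' '{print $1; print $2}'
# 26900
# 9976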

How to get a required field from a file on Linux?

I have a file which contains three fields separated by two spaces. I need to get only the third field from the file. The file content looks like the following example:
kuldeep  Mirat  Shakti
balaji  salunke  pune
.
.
.
How can I get the third field?
To get the 3rd field, assuming you don't have any "embedded spaces", just use:
awk '{print $3}' file
awk by default uses runs of whitespace as field delimiters, so even if you have 2 spaces or more, the 3rd field is always $3.
However, if you want to be specific, then specify a field delimiter (note that POSIX awk treats a single-space FS as this same default whitespace splitting):
awk -F" " '{print $3}' file
If you are open to other tools, here is a Ruby one:
ruby -F" " -ane 'print $F[2]' file
ruby -ane 'print $F[2]' file
Update: If you need to get all fields after 3rd,
awk -F" " '{$1=$2=$3=""}1' OFS=" " file # add a pipe to `sed 's/^[ \t]*//'` if desired
ruby -F" " -ane 'puts $F[3..-1].join(" ")' file
Use awk with the two-space separator:
awk -F'  ' '{print $3}' file
Because the separator is two spaces, this also works if fields contain single embedded spaces.
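A quick check (a sketch; two-space separators, with an embedded single space in the middle field):
printf 'kuldeep  Mirat S  Shakti\n' | awk -F'  ' '{print $3}'
# -> Shakti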
To get the third field of each line, pipe through awk, e.g.:
cat filename | awk '{print $3}'
If you just want to get the third field of the first line, use head, too:
cat filename | head -n 1 | awk '{print $3}'
Given @balaji's comment on @kurani's answer:
perl -pe 's/^.*? .*? //' filename
awk -F' ' '{for(i=3; i<NF; i++) {printf("%s%s",$i,FS)}; print $NF}' filename
cut -d' ' -f5 filename # with two-space separators every other field is empty, so the third field is field 5
