filter out unrecognised fields using awk - linux

I have a CSV file where I expect some values such as Y or N. Folks are adding comments or arbitrary entries such as NA? that I want to remove:
Create,20055776,Y,,Y,Y,,Y,,NA?,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,NA ?,,,Y,,,,,,TBD,,,,,,,,,
I can use gsub to remove things that I am anticipating such as:
$ cat test.csv | awk '{gsub("NA\\?", ""); gsub("NA \\?",""); gsub("TBD", ""); print}'
Create,20055776,Y,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,,,,Y,,,,,,,,,,,,,,,
Yet that will break if someone adds a new comment. I am looking for a regex to generalise the match as "not Y".
I tried some negative lookarounds but couldn't get them to work on the awk that I have, which is GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.1, GNU MP 6.1.2). Thanks in advance!

awk 'BEGIN{FS=OFS=","}{for (i=3;i<=NF;i++) if ($i !~ /^(y|Y|n|N)$/) $i="";print}' test.CSV
Create,20055776,Y,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,,,,Y,,,,,,,,,,,,,,,
Accepting only Y/N (case-insensitive).

awk 'BEGIN{OFS=FS=","}{for(i=3;i<=NF;i++){if($i!~/^[Y]$/){$i=""}}; print;}'
This seems to do the trick. Loops through the 3rd through the last field, and if the field isn't Y, it's replaced with nothing. Since we're modifying fields we need to set OFS as well.
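To see why OFS matters: assigning to any field makes awk rebuild $0 joined with OFS, which defaults to a single space (a quick illustration, separate from the fix):
$ echo 'a,b,c' | awk -F, '{$2=""; print}'
a  c
$ echo 'a,b,c' | awk 'BEGIN{FS=OFS=","}{$2=""; print}'
a,,c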
$ cat file.txt
Create,20055776,Y,,Y,Y,,Y,,NA?,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,NA ?,,,Y,,,,,,TBD,,,,,,,,,
$ awk 'BEGIN{OFS=FS=","}{for(i=3;i<=NF;i++){if($i!~/^[Y]$/){$i=""}}; print;}'
Create,20055776,Y,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,,,,Y,,,,,,,,,,,,,,,
If you wanted to accept "N" too, /^[YN]$/ would work.
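And if stray lowercase entries like y or n should survive as well, a small variant (a sketch, assuming lowercase values are valid) normalises each field with toupper() before the test:
awk 'BEGIN{OFS=FS=","}{for(i=3;i<=NF;i++){if(toupper($i)!~/^[YN]$/){$i=""}}; print;}' file.txt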

cat test.CSV | awk 'BEGIN{FS=OFS=","}{for (i=3;i<=NF;i++) if($i != "Y") $i=""; print}'
Output:
Create,20055776,Y,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,,,,Y,,,,,,,,,,,,,,,
Update: So there's no need to use regex if you simply want to determine whether a field is "Y" or not.
However, if you do want to use regex, zzevannn's answer and tink's answer already give great regex conditions, so I'll give a batch replace by regex instead:
To be exact, and to increase the challenge, I created some boundary conditions:
$ cat test.CSV
Create,20055776,Y,,Y,Y,,Y,,YNA?,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,YN.Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,NANN,,,,,Y,,,NA ?Y,,,Y,,,,,,TYBD,,,,,,,,,
And the batch replace is:
$ awk 'BEGIN{FS=OFS=","}{fst=$1;sub($1 FS,"");print fst,gensub("(,)[^,]*[^Y,]+[^,]*","\\1","g",$0);}' test.CSV
Create,20055776,Y,,Y,Y,,Y,,,,Y,,Y,Y,,Y,,,Y,,Y,,,Y,,,,,,,,
Create,20055777,,,,Y,Y,,Y,,,,Y,,Y,Y,,,,,Y,,Y,,,Y,,,,,,,,
Create,20055779,,Y,,,,,,,,Y,,,,,,Y,,,,,,,,,,,,,,,
"(,)[^,]*[^Y,]+[^,]*" is to match anything between two commas that other than single Y.
Note I saved $1 and deleted $1 and the comma after it first, and later print it back.
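Note that gensub() is gawk-specific: unlike sub() and gsub() it returns the modified string instead of editing $0 in place, which is what lets it sit inside print. A minimal illustration of the same pattern on a toy line:
$ echo 'a,xx,Y,zz' | gawk '{print gensub(/(,)[^,]*[^Y,]+[^,]*/, "\\1", "g")}'
a,,Y,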

sed solution
# POSIX
sed -e ':a' -e 's/\(^Create,[0-9]*\(,Y\{0,1\}\)*\),[^Y,][^,]*/\1,/;t a' test.csv
# GNU
sed ':a;s/\(^Create,[0-9]*\(,Y\{0,1\}\)*\),[^Y,][^,]*/\1,/;ta' test.csv
awk on the same concept (sidestepping sed's BRE, which lacks the OR regex); it blanks each offending field in place and then restores the second column:
awk -F ',' '{ Idx=$2; gsub(/,[^,]*[^YN,][^,]*/, ","); sub(/,/, "," Idx); print }'
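A quick sanity check on a shortened sample line:
$ echo 'Create,20055776,Y,,NA?,,Y' | awk -F ',' '{ Idx=$2; gsub(/,[^,]*[^YN,][^,]*/, ","); sub(/,/, "," Idx); print }'
Create,20055776,Y,,,,Y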

Related

linux command to fetch path of S3

I have an S3 bucket path like
s3://my-dev-s3/apyong/output/public/file3.gz.tar
I want to fetch all characters after the first "/" (not "//") and before the last "/".
So here the output should be
apyong/output/public
I tried awk -F'/' '{print $NF}', but it's not producing the correct result.
Here is sed solution:
s='s3://my-dev-s3/apyong/output/public/file3.gz.tar'
sed -E 's~.*//[^/]*/|/[^/]*$~~g' <<< "$s"
apyong/output/public
Pure bash solution:
s='s3://my-dev-s3/apyong/output/public/file3.gz.tar'
r="${s#*//*/}"
echo "${r%/*}"
apyong/output/public
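For readers less used to parameter expansion, here are the same two steps annotated (identical logic, just commented):
s='s3://my-dev-s3/apyong/output/public/file3.gz.tar'
r="${s#*//*/}"   # strip the shortest prefix matching *//*/ , i.e. "s3://my-dev-s3/"
echo "${r%/*}"   # strip the shortest suffix matching /* , i.e. "/file3.gz.tar"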
1st solution: With GNU grep you could use the following. The -o and -P options make grep print only the matched text and enable PCRE regex, respectively. The regex ^.*?\/\/.*?\/\K(.*)(?=/) (explained below) then fetches the desired output.
var="s3://my-dev-s3/apyong/output/public/file3.gz.tar"
grep -oP '^.*?\/\/.*?\/\K(.*)(?=/)' <<<"$var"
apyong/output/public
Explanation: a detailed breakdown of the regex used.
^.*?\/\/ ##From the start of the value, lazily match up to // here.
.*?\/ ##Again lazily match up to the next single / here.
\K ##\K forgets everything matched so far, so only what follows gets printed.
(.*)(?=/) ##Greedily match everything up to the last /, which the lookahead requires but does not consume.
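Since -P is only available when grep was built with PCRE support, a quick feature test before relying on it (a defensive sketch):
if echo test | grep -qP 'te\Kst' 2>/dev/null; then echo "PCRE available"; else echo "no -P support"; fi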
2nd solution: Using GNU awk, please try the following code. It uses GNU awk's match() with a third array argument, so the values captured by the regex are stored in array arr for later use in the program.
awk 'match($0,/^[^:]*:\/\/[^/]*\/(.*)\//,arr){print arr[1]}' <<<"$var"
3rd solution: chaining two match() calls in awk to get the required output.
awk 'match($0,/^[^:]*:\/\/[^/]*\//) && match(val=substr($0,RSTART+RLENGTH),/^.*\//){
print substr(val,RSTART,RLENGTH-1)
}
' <<<"$var"
Using awk:
awk -F "/" '{ for(i=4;i<=NF;i++) { if (i==4) { printf "%s",$i } else { printf "/%s",$i } }}' <<< "s3://my-dev-s3/apyong/output/public/file3.gz.tar"
Set the field delimiter to "/" and the look through the 4th to the last field. For the fourth field print the field and for all other field print "/" then the field.
Use coreutils cut:
<<<"s3://my-dev-s3/apyong/output/public/file3.gz.tar" \
cut -d/ -f4-6
Output:
apyong/output/public

Select subdomains using print command

cat a.txt
a.b.c.d.e.google.com
x.y.z.google.com
rev a.txt | awk -F. '{print $2,$3}' | rev
This is showing:
e google
z google
But I want this output
a.b.c.d.e.google
b.c.d.e.google
c.d.e.google
e.google
x.y.z.google
y.z.google
z.google
With your shown samples, please try the following awk code. Written and tested in GNU awk; it should work in any awk.
awk '
BEGIN{
  FS=OFS="."
}
{
  nf=NF
  for(i=1;i<(nf-1);i++){
    print
    $1=""
    sub(/^[[:space:]]*\./,"")
  }
}
' Input_file
Here is one more awk solution:
awk -F. '{while (!/^[^.]+\.[^.]+$/) {print; sub(/^[^.]+\./, "")}}' file
a.b.c.d.e.google.com
b.c.d.e.google.com
c.d.e.google.com
d.e.google.com
e.google.com
x.y.z.google.com
y.z.google.com
z.google.com
Using sed
$ sed -En 'p;:a;s/[^.]+\.(.*([^.]+\.){2}[[:alpha:]]+$)/\1/p;ta' input_file
a.b.c.d.e.google.com
b.c.d.e.google.com
c.d.e.google.com
d.e.google.com
e.google.com
x.y.z.google.com
y.z.google.com
z.google.com
Using bash:
IFS=.
while read -ra a; do
for ((i=${#a[@]}; i>2; i--)); do
echo "${a[*]: -i}"
done
done < a.txt
Gives:
a.b.c.d.e.google.com
b.c.d.e.google.com
c.d.e.google.com
d.e.google.com
e.google.com
x.y.z.google.com
y.z.google.com
z.google.com
(I assume the lack of d.e.google.com in your expected output is a typo?)
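A note on the expansion: "${a[*]: -i}" takes the last i elements of the array and joins them with the first character of IFS, which is why IFS is set to "." above (a small standalone illustration; the space before the minus is required):
$ a=(x y z google com); IFS=.; echo "${a[*]: -3}"
z.google.com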
For a shorter and arguably simpler solution, you could use Perl.
To auto-split the line on the dot character into the @F array, and then print the range you want:
perl -F'\.' -le 'print join(".", @F[0..$#F-1])' a.txt
-F'\.' will auto-split each input line into the @F array. It splits on the given regular expression, so the dot needs to be escaped to be taken literally.
$#F is the index of the last element of the array, so @F[0..$#F-1] is the range of elements from the first one ($F[0]) to the penultimate one. If you wanted to leave out both "google" and "com", you would use @F[0..$#F-2], etc.
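Applied to the sample a.txt, this prints one shortened line per input line:
a.b.c.d.e.google
x.y.z.google
To emit every suffix as in the other answers, you would still need an outer loop around the slice.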

Replacing multiple command calls

I'm able to trim and transpose the data below with sed, but it takes considerable time. I hope it would be faster with awk. Any suggestions are welcome.
Input Sample Data:
[INX_8_60L ] :9:Y
[INX_8_60L ] :9:N
[INX_8_60L ] :9:Y
[INX_8_60Z ] :9:Y
[INX_8_60Z ] :9:Y
Required Output:
INX?_8_60L¦INX?_8_60L¦INX?_8_60L¦INX?_8_60Z¦INX?_8_60Z
Just use awk, e.g.
awk -v n=0 '{printf (n?"!%s":"%s", substr ($0,2,match($0,/[ \t]+/)-2)); n=1} END {print ""}' file
Which will be orders of magnitude faster. It just picks out the substring (e.g. "INX_8_60L") using substr() and match(). n is simply used as a false/true (0/1) flag to prevent outputting a "!" before the first string.
Example Use/Output
With your data in file you would get:
$ awk -v n=0 '{printf (n?"!%s":"%s", substr ($0,2,match($0,/[ \t]+/)-2)); n=1} END {print ""}' file
INX_8_60L!INX_8_60L!INX_8_60L!INX_8_60Z!INX_8_60Z
Which appears to be what you are after. (Note: I'm not sure what your separator character is, so just change the above as needed.) If not, let me know and I'm happy to help further.
Edit Per-Changes
Including the '?' isn't difficult, and I just copied the character, so you would now have:
awk -v n=0 '{s=substr($0,2,match($0,/[ \t]+/)-2); sub(/_/,"?_",s); printf n?"¦%s":"%s", s; n=1}
END {print ""}' file
Example Output
INX?_8_60L¦INX?_8_60L¦INX?_8_60L¦INX?_8_60Z¦INX?_8_60Z
And to simplify, just operating on the first field as in @JamesBrown's answer, that would reduce to:
awk -v n=0 '{s=substr($1,2); sub(/_/,"?_",s); printf n?"¦%s":"%s", s; n=1} END {print ""}' file
Let me know if that needs more changes.
Don't start so many sed processes; separate the sed operations with semicolons instead.
Better yet, process the data in a single job and avoid regex where possible. The version below reads the fixed-size first block with substr() and inserts the ? while building the output.
$ awk '{
b=b (b==""?"":";") substr($1,2,3) "?" substr($1,5)
}
END {
print b
}' file
Output:
INX?_8_60L;INX?_8_60L;INX?_8_60L;INX?_8_60Z;INX?_8_60Z
If the fields are not that static in size:
$ awk '
BEGIN {
FS="[[_ ]" # split field with regex
}
{
printf "%s%s?_%s_%s",(i++?";":""), $2,$3,$4 # output semicolons and fields
}
END {
print ""
}' file
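With the sample data this should print:
INX?_8_60L;INX?_8_60L;INX?_8_60L;INX?_8_60Z;INX?_8_60Z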
Performance of solutions for 20 M records:
Former:
real 0m8.017s
user 0m7.856s
sys 0m0.160s
Latter:
real 0m24.731s
user 0m24.620s
sys 0m0.112s
sed can be very fast when used gingerly, so for simplicity and speed you might wish to consider:
sed -e 's/ .*//' -e 's/\[INX/INX?/' | tr '\n' '|' | sed -e '$s/|$//'
The second call to sed is there to satisfy the requirement that there is no trailing |.
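If the literal ¦ from the required output is needed, note that tr is byte-oriented and cannot emit the multibyte UTF-8 character ¦; one workaround (a sketch using the classic GNU sed join idiom) does the joining in sed as well, which also avoids the trailing-separator cleanup:
sed -e 's/ .*//' -e 's/\[INX/INX?/' | sed ':a;N;$!ba;s/\n/¦/g'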
Another solution using GNU awk:
awk -F'[[ ]+' '
{printf "%s%s",(o?"¦":""),gensub(/INX/,"INX?",1,$2);o=1}
END{print ""}
' file
The field separator is set (with the -F option) such that it isolates the wanted parameter.
The main statement prints the modified parameter with the ? character.
The variable o keeps track of when to emit the delimiter ¦.

shorten awk command pipes

Fairly new to using the Linux shell.
I want to reduce the amount of pipes I used to extract the following data.
V 190917135635Z 1005 unknown /C=DE/ST=City/L=City/O=something/OU=Somewhat/CN=someserver.com/emailAddress=test@toast.com
My goal is to put the following values into a separate file
190917135635 someserver.com
The command I use right now is fairly long, piped and looks like this
grep -v '^R' $file | awk '{print $2, $6}' | awk -F'[=|/]' '{print $1, $3}' | awk '{print $1, $3}' | awk -F 'Z ' '{print $1, $2}' > sdata.txt
(The file contains other lines starting with 'R' so I exclude those in my grep)
Is this a legit way of doing it?
Is there a way to get this in a shorter command?
Thanks a lot!
Another awk. It uses match() to find the CN entry and substr() to extract it for printing, if it exists.
$ awk '!/^R/{
print $2,
(match($0,/CN=[^/]+/)?substr($0,RSTART+3,RLENGTH-3):"") # 3==length("CN=")
}' file
Output:
190917135635Z someserver.com
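If the trailing Z is unwanted (the stated goal shows 190917135635 without it), a small tweak on the same idea lets awk coerce the field to a number, which drops the Z (a sketch):
$ awk '!/^R/{
print $2+0,
(match($0,/CN=[^/]+/)?substr($0,RSTART+3,RLENGTH-3):"")
}' file
190917135635 someserver.com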
It looks like your data fields come from SSL certificate records, so many of them (City, Organization Name, etc.) might contain spaces; that's presumably why you needed so many awk calls. Here is one way around those issues: instead of transforming your existing logic, find the domain name by searching for the substring CN= and fetching its corresponding value.
awk '
!/^R/{
start = index($0, "CN=")+3
end = index(substr($0, start), "/")
domain = end ? substr($0, start, end-1) : substr($0, start)
print $2, domain
}
' file.txt
Where:
we use index() to find the start position of the substring CN=; +3 moves past it to the first character of the domain name
then we search for the next / to get the end position of the domain; if the domain sits at the end of the line there is no /, so end will be 0
then we take the domain name between CN= and the next / with substr($0, start, end-1), or up to the end of the line with substr($0, start).
A short version:
awk '!/^R/{s=index($0, "CN=")+3; e=index(substr($0, s), "/"); print $2, substr($0, s, e ? e-1 : 253)}' file.txt
where 253 is the longest possible domain name which might be enough to fit your needs.
Update:
Actually, it's much easier to just use match(), but the point is the same:
awk '!/^R/{if(match($0, "/CN=([^/]*)")) print $2, substr($0, RSTART+4, RLENGTH-4)}' file.txt
If this:
$ awk -F'[[:space:]/=]+' '!/^R/{print $2+0, $16}' file
190917135635 someserver.com
isn't all you need then update your question to clarify your requirements and provide more truly representative sample input/output.
Using GNU sed:
sed -E -n '/^R/d; s/^[A-Za-z]\s+([0-9]+)Z?\s+[0-9]+\s+.*\/CN=(.*)\/.*/\1 \2/p' input_file > new_file
EDIT: This strictly assumes that the OP's Input_file is exactly like the shown samples. After seeing the OP's samples, one could try the following.
awk -F"[ =/Z]" '!/^R/{print $8,$37}' Input_file
For FUN :) in case one wants to stay close to the OP's approach, we could try the following.
awk '
!/^R/{
val=$2 OFS $5
split(val,array,"[ /Z]")
val1=array[1] OFS array[9] OFS array[10]
split(val1,array1,"[ =]")
print array1[1],array1[3]
}
' Input_file
You are using $6 in the second awk command, which means your 5th column potentially has spaces inside, unlike the sample data you showed; it is also what extracts the CN= part (CNAME?).
So here's a more compatible and more exact sed way which does not require GNU sed:
sed -n -e '/^R/!{' -e 's|^[^[:space:]]*[[:space:]]*\([^[:space:]Z][^[:space:]Z]*\).*/CN=\([^/][^/]*\).*|\1 \2|p;}'
If you just want digits in the second column and it begins with digit, then you can change to use this:
sed -n -e '/^R/!{' -e 's|^[^[:space:]]*[[:space:]]*\([0-9][0-9]*\).*/CN=\([^/][^/]*\).*|\1 \2|p;}'

awk or sed to change column value in a file

I have a csv file with data as follows
16:47:07,3,r-4-VM,230000000.,0.466028518635,131072,0,0,0,60,0
16:47:11,3,r-4-VM,250000000.,0.50822578824,131072,0,0,0,0,0
16:47:14,3,r-4-VM,240000000.,0.488406067907,131072,0,0,32768,0,0
16:47:17,3,r-4-VM,230000000.,0.467893525702,131072,0,0,0,0,0
I would like to shorten the value in the 5th column.
Desired output
16:47:07,3,r-4-VM,230000000.,0.46,131072,0,0,0,60,0
16:47:11,3,r-4-VM,250000000.,0.50,131072,0,0,0,0,0
16:47:14,3,r-4-VM,240000000.,0.48,131072,0,0,32768,0,0
16:47:17,3,r-4-VM,230000000.,0.46,131072,0,0,0,0,0
Your help is highly appreciated
awk '{$5=sprintf( "%.2g", $5)} 1' OFS=, FS=, input
This will round and print .47 instead of .46 on the first line, but perhaps that is desirable.
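One more caveat with %g: it trims trailing zeros, so the second line's 0.50822578824 comes out as 0.51 rather than the 0.50 shown in the desired output (a quick check):
$ echo '0.50822578824' | awk '{printf "%.2g\n", $1}'
0.51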
Try with this:
cat filename | sed 's/\(^.*\)\(0\.[0-9][0-9]\)[0-9]*\(,.*\)/\1\2\3/g'
So far, the output goes to standard output, so
cat filename | sed 's/\(^.*\)\(0\.[0-9][0-9]\)[0-9]*\(,.*\)/\1\2\3/g' > out_filename
will send the desired result to out_filename
If rounding is not desired, i.e. 0.466028518635 needs to be printed as 0.46, use:
cat <input> | awk -F, '{$5=sprintf( "%.4s", $5)} 1' OFS=,
(This can be another example of a Useless Use of Cat.)
If you want it in Perl, this is it:
perl -F, -lane '$F[4]=~s/^(\d+\...).*/$1/g;print join ",",@F' your_file
tested below:
> cat temp
16:47:07,3,r-4-VM,230000000.,0.466028518635,131072,0,0,0,60,0
16:47:11,3,r-4-VM,250000000.,10.50822578824,131072,0,0,0,0,0
16:47:14,3,r-4-VM,240000000.,0.488406067907,131072,0,0,32768,0,0
16:47:17,3,r-4-VM,230000000.,0.467893525702,131072,0,0,0,0,0
> perl -F, -lane '$F[4]=~s/^(\d+\...).*/$1/g;print join ",",@F' temp
16:47:07,3,r-4-VM,230000000.,0.46,131072,0,0,0,60,0
16:47:11,3,r-4-VM,250000000.,10.50,131072,0,0,0,0,0
16:47:14,3,r-4-VM,240000000.,0.48,131072,0,0,32768,0,0
16:47:17,3,r-4-VM,230000000.,0.46,131072,0,0,0,0,0
sed -r 's/^(([^,]+,){4}[^,]{4})[^,]*/\1/' file.csv
This might work for you (GNU sed):
sed -r 's/([^,]{,4})[^,]*/\1/5' file
This truncates the 5th run of non-comma characters, keeping at most its first 4 characters.
