I am using the variable below in a shell script:
list="kmakalas#gmail.com;kmakalas#gmail.com;kmakalas#gmail.com;"
From the above, I want to extract
r=kmakalas,r=kmakalas,r=kmakalas
For that I used the below shell manipulation:
rev_list="r=${list//#gmail.com;/,r=}"
The above gave me
r=kmakalas,r=kmakalas,r=kmakalas,r=
and to get r=kmakalas,r=kmakalas,r=kmakalas I then used rev2="${rev_list%,r=}".
Is there any way to do this in a single-line command?
Using GNU awk:
awk -F ';' '{ for(i=1;i<NF;i++) { split($i,map,"#"); printf "r=%s%s", map[1], (i==NF-1 ? "" : ",") } printf "\n" }' <<< "$list"
Explanation:
awk -F ';' '{                        # Set the field delimiter to ";"
  for(i=1;i<NF;i++) {                # Loop over every field except the empty one after the trailing ";"
    split($i,map,"#")                # Split the address on "#"; map[1] holds the part before it
    printf "r=%s%s", map[1], (i==NF-1 ? "" : ",")   # Print "r=" plus that part, appending a "," except after the last entry
  }
  printf "\n"
}' <<< "$list"
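If you prefer to stay with the parameter expansions from the question, the two steps can also be collapsed onto one line (a minimal sketch, assuming every entry ends in #gmail.com; exactly as in the sample):
list='kmakalas#gmail.com;kmakalas#gmail.com;kmakalas#gmail.com;'
rev2="r=${list//#gmail.com;/,r=}"; rev2="${rev2%,r=}"
echo "$rev2"    # r=kmakalas,r=kmakalas,r=kmakalas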
Related
How can I join consecutive lines into a single line using awk? Currently I have this awk command:
awk -F "\"*;\"*" '{if (NR!=1) {print $2}}' file.csv
It removes the first line and gives me:
44895436200043
38401951900014
72204547300054
38929771400013
32116464200027
50744963500014
I want to have this:
44895436200043 38401951900014 72204547300054 38929771400013 32116464200027 50744963500014
That's a job for tr:
# tail -n +2 prints the whole file from line 2 on
# tr '\n' ' ' translates newlines to spaces
tail -n +2 file | tr '\n' ' '
With awk, you can achieve this by changing the output record separator to " ":
# BEGIN{ORS= " "} sets the internal output record separator to a single space
# NR!=1 adds a condition to the default action (print)
awk 'BEGIN{ORS=" "} NR!=1' file
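Note that both the tr and the ORS variants leave a trailing space and no final newline. If that matters, a small variation in the same spirit as the accumulator answer further down builds the line first and prints it once:
awk 'NR!=1{out=out sep $0; sep=" "} END{print out}' file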
I assume you want to modify your existing awk so that it prints a horizontal, space-separated list instead of one word per row.
Replacing the print $2 action in your command, you can do this:
awk -F "\"*;\"*" 'NR!=1{u=u s $2; s=" "} END {print u}' file.csv
or replace the ORS (output record separator)
awk -F "\"*;\"*" -v ORS=" " 'NR!=1{print $2}' file.csv
or pipe output to xargs:
awk -F "\"*;\"*" 'NR!=1{print $2}' file.csv | xargs
/home/user/views/links/user1/gitsrc/database/src/
This is my string. I want to cut it into 2 strings, such as
"/home/user/views/links/user1/"
"/database/src/"
So the delimiter is not actually a single character but a group of characters, i.e. "gitsrc".
You can only define a single character as the delimiter in cut.
You could use awk, where the field separator can be a single character, a null string, or a regular expression, e.g.
$ echo '/home/user/views/links/user1/gitsrc/database/src/' |
awk -F'gitsrc' '{ print $1 " " $2 }'
/home/user/views/links/user1/ /database/src/
or
$ echo '/home/user/views/links/user1/gitsrc/database/src/' |
awk -F'gitsrc' '{ print $1 ORS $2 }'
/home/user/views/links/user1/
/database/src/
In your shell you could also use parameter expansion to get the first and second part:
$ str=/home/user/views/links/user1/gitsrc/database/src/
$ echo "${str%%gitsrc*}" # remove longest suffix `gitsrc*`
/home/user/views/links/user1/
$ echo "${str#*gitsrc}" # remove shortest prefix `*gitsrc`
/database/src/
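If you would rather not use awk at all, GNU sed can also split on the multi-character delimiter by replacing it with a newline (a sketch assuming GNU sed, which understands \n in the replacement, and that "gitsrc" occurs only once):
$ echo '/home/user/views/links/user1/gitsrc/database/src/' | sed 's/gitsrc/\n/'
/home/user/views/links/user1/
/database/src/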
#!/bin/bash
awk '$1 == "abc" {print}' file # print lines first column matching "abc"
How to print lines when the first column matching members of array("12" or "34" or "56")?
#!/bin/bash
ARR=("12" "34" "56")
Also, how do I print lines when the first column exactly matches a member of the array ("12" or "34" or "56")?
You could use bash to interpolate the array into a regex pattern for Awk, by changing the IFS value to a | character and doing an array expansion, as below:
ARR=("12" "34" "56")
regex=$( IFS='|'; echo "${ARR[*]}" )
awk -v str="$regex" '$1 ~ str' file
The array expansion converts the list elements to a string delimited with |, e.g. 12|34|56 in this case.
The $( ) runs in a sub-shell, so the changed value of IFS is not reflected in the parent shell. You can do it all in one line as
awk -v str="$( IFS='|'; echo "${ARR[*]}" )" '$1 ~ str' file
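You can check the intermediate pattern the sub-shell builds before it is handed to awk:
$ ( IFS='|'; echo "${ARR[*]}" )
12|34|56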
The OP also asked for an exact match of the strings from the array; in that case grep with its ERE support can do the job (note that -w matches the words anywhere on the line, not only in the first column):
regex=$( IFS='|'; echo "${ARR[*]}" )
egrep -w "$regex" file
(or)
grep -Ew "$regex" file
An awk one-liner for an exact match restricted to the first column:
awk -v var="${ARR[*]}" 'BEGIN{split(var,array," "); for(i in array) a[array[i]] } ($1 in a){print $0}' file
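The same one-liner spread out with comments (functionally identical):
awk -v var="${ARR[*]}" '
BEGIN {
    split(var, array, " ")          # split the space-joined shell string on spaces
    for (i in array) a[array[i]]    # use the values as keys of the lookup table a
}
$1 in a                             # print lines whose first column is an exact member
' file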
The following code does the trick:
awk 'BEGIN{ myarray[0]="aaa"; myarray[1]="bbb" }
{
    test=0
    for (x in myarray) {
        if ($1 == myarray[x]) {
            test=1
            break
        }
    }
    if (test==1) print    # print the line when the first column matched an array member
}' file
If you need to pass a variable to awk, use the -v option; for an array it is a bit trickier, but the following syntax should work.
A=( $( ls -1p ) )   # example of a list to be passed to awk (adapt to your needs)
awk -v var="${A[*]}" 'BEGIN{ n=split(var,list," "); for (i=1; i<=n; i++) print list[i] }'   # elements are joined on spaces, so this breaks on names containing spaces
Nearly the same as Inian's answer, building a BRE for grep from the array:
ARR=("34" "56" "12");regex=" ${ARR[*]} ";regex="'^${regex// /\\|^}'";grep -w $regex infile
Replace comma with space using a shell script
Given the following input:
Test,10.10.10.10,"80,22,3306",connect
I need to get the below output using a bash script:
Test 10.10.10.10 "80,22,3306" connect
If you have gawk, you can use FPAT (field pattern), setting it to a regular expression.
awk -v FPAT='([^,]+)|(\"[^"]+\")' '{ for(i=1;i<=NF;i++) { printf "%s ",$i } }' <<< "Test,10.10.10.10,\"80,22,3306\",connect"
We set FPAT so that a field is either a run of characters that are not commas, or anything enclosed in quotation marks (including the quotes). We then print all the fields with a space in between.
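With the sample line it prints the fields space-separated, leaving a trailing space and no final newline:
Test 10.10.10.10 "80,22,3306" connect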
If your Input_file is the same as the shown sample, then the following sed may help you too.
sed 's/\(.[^,]*\),\([^,]*\),\(".*"\),\(.*\)/\1 \2 \3 \4/g' Input_file
Assuming you read your input from a file, this works:
#!/usr/bin/bash
while read -r line; do
    declare -a begin=$(echo "$line" | awk -F'"' '{print $1}' | tr "," " ")
    declare -a end=$(echo "$line" | awk -F'"' '{print $3}' | tr "," " ")
    declare -a middle=$(echo "$line" | awk -F'"' '{print $2}')
    echo "${begin[@]} \"${middle[@]}\" ${end[@]}"
done < connect_file
Edit: I see that you want to keep the commas between the port numbers. I have edited the script.
echo 'Test,10.10.10.10,"80,22,3306",connect' | awk '{sub(/,/," "); gsub(/,"80,22,3306",/," \4280,22,3306\42 ")}1'
Test 10.10.10.10 "80,22,3306" connect
I want to extract the values of pt and userId into variables in a shell script.
I have the below value set in a variable, which comes in dynamically, and I need to extract pt and userId:
{"pt":"PT-24fesxPGJIHOe714iaMV-13dd3872781-sin_pos","userId":"66254363666003"}
Can anyone tell me how to extract these values in a shell script?
Note: I don't want to use a JSON parser just to parse 2 strings.
Thanks!
This string appears to be JSON, and it's better to use a dedicated JSON parser like underscore for parsing this text. Once underscore-cli is installed you can do:
# extract pt
echo $jsonStr | underscore select '.pt'
# extract userId
echo $jsonStr | underscore select '.userId'
Though not recommended, if you really want to parse it in the shell you can use awk like this:
awk -F, '$1 ~ "pt" {gsub(/[^:]+:"|"/, "", $1); print $1}
$2 ~ "userId" {gsub(/[^:]+:"|"}/, "", $2); print $2}'
Or even simpler (with " as the field separator, the pt value lands in field 4 and the userId in field 8):
awk -F'"' '{print $4 "\n" $8}'
Output:
PT-24fesxPGJIHOe714iaMV-13dd3872781-sin_pos
66254363666003
You can use the following script:
var={"pt":"PT-24fesxPGJIHOe714iaMV-13dd3872781-sin_pos","userId":"66254363666003"}
echo $var
pt=`echo $var|cut -d, -f1|awk -F':' '{ print $2 }'`
echo $pt
userId=`echo $var|cut -d, -f2|awk -F':' '{ print $2 }'|tr -d '}'`
echo $userId
This script stores the values in the two variables pt and userId, which you can then use.
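If you really want to avoid external tools altogether, a minimal pure-bash sketch using the =~ operator and BASH_REMATCH can pull out both values (assuming the flat, single-line layout shown in the question):
jsonStr='{"pt":"PT-24fesxPGJIHOe714iaMV-13dd3872781-sin_pos","userId":"66254363666003"}'
re_pt='"pt":"([^"]+)"'
re_id='"userId":"([^"]+)"'
[[ $jsonStr =~ $re_pt ]] && pt=${BASH_REMATCH[1]}
[[ $jsonStr =~ $re_id ]] && userId=${BASH_REMATCH[1]}
echo "$pt"        # PT-24fesxPGJIHOe714iaMV-13dd3872781-sin_pos
echo "$userId"    # 66254363666003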