So I have code that looks like this:
else if(between(pay,1260,1280))
{
return 159;
}
else if(between(pay,1280,1300))
{
return 162;
}
else if(between(pay,1300,1320))
{
return 165;
}
But I want it to look like this:
else if(between(pay,1260,1280)){return 159;}
else if(between(pay,1280,1300)){return 162;}
else if(between(pay,1300,1320)){return 165;}
Can I do this in bash? If not, which language can I use?
The full code is over 30,000 lines, so I could do it manually, but I know there's a better way. My guess is that the sed command, combined with some regex, could help me, but that's as far as my knowledge takes me.
P.S. Please overlook how unoptimized the code is, just this once.
The following awk may also help you here.
awk -v RS="" '{
$1=$1;
gsub(/ { /,"{");
gsub(/ }/,"}");
gsub(/}/,"&\n");
gsub(/ else/,"else");
sub(/\n$/,"")
}
1
' Input_file
Output will be as follows.
else if(between(pay,1260,1280)){return 159;}
else if(between(pay,1280,1300)){return 162;}
else if(between(pay,1300,1320)){return 165;}
EDIT: Adding an explanation of the solution as well.
awk -v RS="" '{       ##Set RS (record separator) to null, i.e. paragraph mode, so the whole input is read as one record.
$1=$1;                ##Reassign $1 to itself to force awk to rebuild $0, collapsing newlines and extra spaces into single spaces.
gsub(/ { /,"{");      ##Globally replace " { " with "{".
gsub(/ }/,"}");       ##Globally replace " }" with "}".
gsub(/}/,"&\n");      ##Globally replace each "}" with "}" followed by a newline.
gsub(/ else/,"else"); ##Globally replace " else" with "else".
sub(/\n$/,"")         ##Remove the trailing newline at the end of the record.
}
1                     ##Mention 1 here since awk works on a condition/action model:
                      ##the condition is TRUE and no action is given, so the default action (printing the current record) runs.
' Input_file
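If the RS="" part is unfamiliar: an empty RS puts awk in paragraph mode, where records are separated by blank lines, so a snippet with no blank lines (like the one in the question) arrives as a single record and can be re-joined freely. A tiny sketch with made-up input:
printf 'a\nb\n\nc\nd\n' | awk -v RS= '{print "record " NR ": [" $0 "]"}'
This prints two records (the first containing a and b, the second c and d), showing that only the blank line splits the input.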
This might work for you (GNU sed):
sed '/^else/{:a;N;/^}/M!ba;s/\n\s*//g}' file
Gather up the required lines in the pattern space and, on encountering the end marker (a line beginning with }), remove all newlines and the whitespace that follows them.
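For readability, here is the same script laid out over several lines with comments (a sketch; GNU sed assumed):
sed '
  # start collecting when a line begins with "else"
  /^else/{
    # append the next line and keep looping until some line in the
    # pattern space starts with "}" (the M flag lets ^ match after
    # embedded newlines)
    :a
    N
    /^}/M!ba
    # join the collected lines: delete each newline and the whitespace after it
    s/\n\s*//g
  }' file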
Related
I have a log file similar to this format:
test {
seq-cont {
0,
67,
266
},
grp-id 505
}
}
test{
test1{
val
}
}
Here is the echo command to produce that output
$ echo -e "test {\nseq-cont {\n\t\t\t0,\n\t\t\t67,\n\t\t\t266\n\t\t\t},\n\t\tgrp-id 505\n\t}\n}\ntest{\n\ttest1{\n\t\tval\n\t}\n}\n"
The question is how to remove all whitespace between seq-cont { and the next }, a pattern that may occur multiple times in the file.
I want the output to be like this. Preferably use sed to produce the output.
test{seq-cont{0,67,266},
grp-id 505
}
}
test{
test1{
val
}
}
Efforts by OP: Here is one that somewhat worked, but it is not exactly what I wanted:
sed ':a;N;/{/s/[[:space:]]\+//;/}/s/}/}/;ta;P;D' logfile
It can be done using gnu-awk with a custom RS regex that matches from a { to the closing }:
awk -v RS='{[^}]+}' 'NR==1 {gsub(/[[:space:]]+/, "", RT)} {ORS=RT} 1' file
test {seq-cont{0,67,266},
grp-id 505
}
}
test{
test1{
val
}
}
Here:
NR==1 {gsub(/[[:space:]]+/, "", RT)}: For the first record, replace all whitespace (including line breaks) in RT with the empty string.
{ORS=RT}: Set ORS to whatever text the RS regex matched for this record (available in RT).
PS: Remove NR==1 if you want to do this for the entire file.
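If RT is new to you: it is a GNU-awk-only variable holding the exact text that matched the RS regex for the current record. A minimal sketch with made-up input:
printf 'a {1 2} b {3 4} c' | gawk -v RS='{[^}]+}' '{printf "rec %d: [%s]  RT=[%s]\n", NR, $0, RT}'
This prints three records, with RT set to {1 2}, {3 4} and, for the final record, the empty string.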
With your shown samples, please try the following awk program, written and tested in GNU awk.
awk -v RS= '
match($0,/{\nseq-cont {\n[^}]*/){
val=substr($0,RSTART,RLENGTH)
gsub(/[[:space:]]+/,"",val)
print substr($0,1,RSTART-1) val substr($0,RSTART+RLENGTH)
}
' Input_file
Explanation: A short explanation would be: set RS to null (paragraph mode), then use awk's match function to match everything from seq-cont { up to the next occurrence of }. All spaces and newlines are removed from the matched value, and finally everything is printed, including the newly edited value, to get the expected output mentioned by the OP.
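For anyone new to match(): it sets RSTART (where the match begins) and RLENGTH (how long it is), which is what lets the record be rebuilt around the edited piece. A tiny sketch with a simplified regex and made-up input:
echo 'foo seq-cont {x} bar' | awk '
match($0,/seq-cont {[^}]*}/){
  print "RSTART=" RSTART, "RLENGTH=" RLENGTH
  print "matched=[" substr($0,RSTART,RLENGTH) "]"
}'
which prints RSTART=5 RLENGTH=12 and matched=[seq-cont {x}].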
You can do that much easier with perl:
perl -0777 -i -pe 's/\s+(seq-cont\s*\{[^}]*\})/$1=~s|\s+||gr/ge' logfilepath
The -0777 option tells perl to slurp the file into a single string, and -i saves the changes in place. The \s+(seq-cont\s*\{[^}]*\}) regex matches one or more whitespace characters and then captures into Group 1 ($1): seq-cont, zero or more whitespace characters, and a substring between the leftmost { and the next } character ([^}]* matches zero or more characters other than }). The inner replacement, enabled by the e flag, then strips every chunk of one or more whitespace characters (matched with \s+) from the whole Group 1 value ($1). All occurrences are handled thanks to the g flag (next to e).
See the online demo:
#!/bin/bash
s=$(echo -e "test {\nseq-cont {\n\t\t\t0,\n\t\t\t67,\n\t\t\t266\n\t\t\t},\n\t\tgrp-id 505\n\t}\n}\ntest{\n\ttest1{\n\t\tval\n\t}\n}\n")
perl -0777 -pe 's/\s+(seq-cont\s*\{[^}]*\})/$1=~s|\s+||gr/ge' <<< "$s"
Output:
test {seq-cont{0,67,266},
grp-id 505
}
}
test{
test1{
val
}
}
I am trying to extract certain strings from the output below; however, I have no experience with sed/awk and I need some advice on how to proceed.
Input:
name Cleartext-Password := "password", Service-Type := Framed-User
Framed-IP-Address := 127.0.0.1,
MS-Primary-DNS-Server := 8.8.8.8,
Fall-Through = Yes,
Mikrotik-Rate-Limit = 20M/30M
The output should be:
name;password;127.0.0.1;20M;30M;
I am not sure if this is the correct way to do it, but I have tried to remove everything around my required strings, for example:
sed 's/ Cleartext-Password := "/;/'
However, I think this is the dirty way rather than the clever one.
Could you please let me know what I need to look into in order to create a working sed/awk solution for this?
Could you please try the following, based on your shown samples. It is written and tested at https://ideone.com/eWXv3w.
Since the OP's Input_file has control-M (carriage return) characters, gsub(/\r/,"") is added to the code here.
awk '
BEGIN{ OFS=";" }
{ gsub(/\r/,"") }
match($0,/Cleartext-Password[^,]*/){
val=substr($0,RSTART,RLENGTH)
gsub(/Cleartext-Password[^"]*|"/,"",val)
val=$1 OFS val
next
}
/Framed-IP-Address/{
sub(/,$/,"")
val=val OFS $NF
next
}
/Mikrotik-Rate-Limit/{
sub(/\//,OFS,$NF)
print val, $NF OFS
val=""
}' Input_file
Explanation: In the BEGIN section of the program, OFS is set to a semicolon, as required by the question. The match function of awk is then used to match the regex Cleartext-Password[^,]*, i.e. everything from Cleartext-Password up to (but not including) the first comma. When the regex matches, that sub-string is captured in the variable val, and gsub globally removes the Cleartext-Password label, the := and the quotes, leaving only the value needed for the required output; the line's first field is then prefixed to it.
Then, when a line contains Framed-IP-Address, the trailing , is removed from the end of the line and that line's last field is appended to val.
Finally, when a line contains Mikrotik-Rate-Limit, the / in its last field is converted to the output separator, val is printed together with that field (plus a trailing separator), and val is emptied again.
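To see the match/gsub step in isolation, here is a small sketch run on just the first sample line (GNU awk assumed):
echo 'name Cleartext-Password := "password", Service-Type := Framed-User' | awk '{
  match($0,/Cleartext-Password[^,]*/)         # Cleartext-Password := "password"
  val=substr($0,RSTART,RLENGTH)
  gsub(/Cleartext-Password[^"]*|"/,"",val)    # strip the label, := and the quotes
  print $1 ";" val                            # -> name;password
}'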
There are a number of ways to approach this with awk; the key is to match part of the record with a regular expression to identify the record you are operating on, then isolate the wanted text and output it in the desired format.
One approach would be:
awk '
/Cleartext-Password/ { printf "%s;%s;", $1, substr($4,2,length($4)-3) }
/Framed-IP-Address/ { printf "%s;", substr($NF,1,length($NF)-1) }
/Mikrotik-Rate-Limit/{ sub(/\//,";",$NF); printf "%s;\n", $NF }
' config
Example Use/Output
With your sample input in the file named config, you would receive:
name;password;127.0.0.1;20M;30M;
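The substr() arithmetic in the first rule strips the surrounding quotes and the trailing comma from the fourth field; in isolation (a small sketch):
echo '"password",' | awk '{print substr($1,2,length($1)-3)}'
which prints password.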
Look things over and let me know if I misunderstood anywhere.
This might work for you (GNU sed):
sed -nE -e '/Cleartext-Password/{s/ .*:=\s"(.*)",.*/;\1/;h}' \
-e '/Framed-IP-Address/{s/.*:= (.*),/\1/;H}' \
-e '/Mikrotik-Rate-Limit/{s#.*= (.*)/(.*)#\1;\2#;H;g;y/\n/;/;p}' file
Turn off implicit printing by invoking the -n option.
Reduce backslashes by invoking the -E option (extended regular expressions).
Stash the fields of the record in the hold space and, when all fields have been collected, copy the hold space to the pattern space, replace the newlines with field separators and print the result.
You may prefer:
sed -nE '/Cleartext-Password/{s/ .*:=\s"(.*)",.*/;\1/;h};
/Framed-IP-Address/{s/.*:= (.*),/\1/;H};
/Mikrotik-Rate-Limit/{s#.*= (.*)/(.*)#\1;\2#;H;g;y/\n/;/;p}' file
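A rough annotated layout of the same script, for anyone following the hold-space mechanics (a sketch; GNU sed assumed):
sed -nE '
  # reduce the line to name;password and overwrite the hold space with it
  /Cleartext-Password/{s/ .*:=\s"(.*)",.*/;\1/;h}
  # isolate the IP address and append it to the hold space
  /Framed-IP-Address/{s/.*:= (.*),/\1/;H}
  # turn 20M/30M into 20M;30M, append it, fetch the hold space back,
  # change the embedded newlines into ";" and print the assembled record
  /Mikrotik-Rate-Limit/{s#.*= (.*)/(.*)#\1;\2#;H;g;y/\n/;/;p}
' file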
New to AWK. I have a file with the following content:
FirstName,LastName,Email,ID,Number,IDToBeMatched
John,Smith,js#.com,js30,4,kt78
George,Haynes,gh#.com,gh67,3,re201
Mary,Dewar,md#.com,md009,4,js30
Kevin,Pan,kp#.com,kp41,2,md009
,,,,,ti10
,,,,,qwe909
,,,,,md009
,,,,,kor28
,,,,,gh67
The idea is to check whether any of the fields below the ID header match any of the fields below IDToBeMatched and, if there is a match, to print the whole record except for the last field (i.e. IDToBeMatched). So my final output should look like:
FirstName,LastName,Email,ID,Number
John,Smith,js#.com,js30,4
George,Haynes,gh#.com,gh67,3
Mary,Dewar,md#.com,md009,4
My code so far
awk 'BEGIN{
FS=OFS=",";SUBSEP=",";
}
{
# all[$1,$2,$3,$4,$5]
a[$4]++;
b[$6]++;
}
END{ #for(k in all){
for(i in a){
for(j in b){
if(i==j){
print i #k
}
}
}
#}
}' inputfile
This prints only the matches. If, however, I try to introduce another loop by uncommenting the lines in the above script in order to get the whole line for the matching field, things get messy. I understand why, but I cannot find the solution. I thought of introducing a next statement, but it's not allowed in END. My awk defaults to gawk, and I would prefer a (g)awk-only solution.
Thank you in advance.
The last field has more records because it was copied/pasted from an ID "pool", which does not necessarily have the same number of records as the file it was pasted into.
$ awk -F, 'NR==FNR{a[$6];next} (FNR==1)||($4 in a){sub(/,[^,]+$/,"");print}' file file
FirstName,LastName,Email,ID,Number
John,Smith,js#.com,js30,4
George,Haynes,gh#.com,gh67,3
Mary,Dewar,md#.com,md009,4
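In case the two-pass idiom (passing the same file twice) is unfamiliar, here is the same one-liner laid out with comments (a sketch):
awk -F, '
NR==FNR { a[$6]; next }       # pass 1: remember every IDToBeMatched value
(FNR==1) || ($4 in a) {       # pass 2: keep the header line, or any row whose ID is in the set
  sub(/,[^,]+$/,"")           # drop the last field (IDToBeMatched)
  print
}' file file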
Can someone help me figure out how to replace the empty columns with the last known value? Here is a line in which I would like the number "0.7588044" to replace the null values:
0.7723808|0.767398|0.7645381|0.7605125|0.759718|0.7588044|0.7588044|0.7588044|0.7588044|0.7588044|0.7588044||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
In other words, I would like "0.7588044" to appear between the empty/null "|" delimiters at the end of the line.
I can't figure out how to do this with something like sed. Any help would be greatly appreciated.
Here are the first 3 lines of my file:
66943|0.9939215|0.9873032|0.9791299|0.9708792|0.9623731|0.9535987|0.945847|0.9379317|0.9286675|0.9203091|0.9127985|0.9041528|0.8966769|0.8902251|0.8832675|0.8778407|0.8734665|0.8679647|0.8616999|0.8560756|0.8518617|0.8463235|0.8410841|0.8342401|0.8311638|0.8261909|0.8252836|0.8218218|0.8177906|0.815474|0.8122096|0.8115648|0.8108233|0.8108233|0.8108233|0.8108233|0.8108233|0.8108233||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
69550|0.9946427|0.9888051|0.9815896|0.9742986|0.966774|0.9590039|0.9521323|0.9451087|0.9368793|0.9294462|0.9227601|0.9150554|0.9083862|0.9026252|0.896407|0.8915528|0.8876377|0.8827099|0.8770942|0.8720485|0.8682655|0.8632902|0.8585799|0.8524216|0.8496516|0.8451712|0.8443534|0.8412323|0.8375956|0.8355048|0.8325575|0.8319751|0.8313053|0.8313053|0.8313053|0.8313053|0.8313053|0.8313053||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
380713|0.9942899|0.9880703|0.9803859|0.9726248|0.9646193|0.9563567|0.9490533|0.941592|0.9328543|0.9249665|0.917875|0.9097072|0.9026409|0.8965395|0.8899569|0.8848204|0.8806788|0.8754678|0.8695317|0.8642001|0.8602043|0.8549507|0.8499787|0.8434811|0.8405594|0.8358352|0.834973|0.8316831|0.8278509|0.8256481|0.8225436|0.8219303|0.8212249|0.8212249|0.8212249|0.8212249|0.8212249|0.8212249||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
The awk code works, but only on the first line:
You can use the following awk script:
awk -F'|' 'BEGIN{OFS="|"}{for(i=1;i<NF;i++){if($i==""){$i=l}else{l=$i}}print}'
It is more readable in this form:
BEGIN {
OFS="|" # set output field separator to |
}
{
for(i=1;i<NF;i++) { # iterate through columns
if($i=="") { # if current column is empty
$i=l # use the last value
} else {
l=$i # else store the value
}
}
print # print the line
}
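A quick sanity check of the fill-forward logic on a made-up line:
echo '1|2||3|||' | awk -F'|' 'BEGIN{OFS="|"}{for(i=1;i<NF;i++){if($i==""){$i=l}else{l=$i}}print}'
This prints 1|2|2|3|3|3|: each empty field is replaced by the last non-empty value seen to its left (the field after the final | stays empty because the loop stops before NF).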
This might work for you (GNU sed):
sed -r ':a;s/^(.*\|([^|]+)\|)\|/\1\2|/;ta' file
A somewhat shorter version of hek2mgl's solution:
awk '{for(i=1;i<NF;i++) $i=($i=="")?l:l=$i}1' FS=\| OFS=\| file
The /./ removes blank lines for the first condition { print "a"$0 } only; how would I ensure the script removes blank lines for every condition?
awk -F, '/./ { print "a"$0 } NR!=1 { print "b"$0 } { print "c"$0 } END { print "d"$0 }' MyFile
A shorter form of the already proposed answer could be the following:
awk NF file
Every awk script follows the syntax condition {statement}. If the statement block is not present, awk prints the whole record (line) whenever the condition is non-zero.
The NF variable in awk holds the number of fields in the line. So when the line is non-empty, NF holds a positive value, which triggers the default awk action (print the whole line). For an empty line, NF is zero, the condition is not met, and awk does nothing.
Note that you don't even need quotes, because this two-letter awk script doesn't contain any spaces or characters that would be interpreted by the shell.
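A quick demonstration on made-up input:
printf 'one\n\ntwo\n' | awk NF
This prints one and two; the blank line is dropped.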
or
awk '!/^$/' file
^$ is the regex for an empty line. The two / characters are needed to let awk know the string is a regex, and ! is the standard negation.
Awk command to remove blank lines from a file:
awk 'NF > 0' filename
If you want to ignore all blank lines, put this at the beginning of the script:
/^$/ {next}
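Applied to the command from the question, that might look like this (a sketch; the original print statements are kept as-is):
awk -F, '
/^$/  { next }             # skip blank lines before any other rule runs
/./   { print "a"$0 }
NR!=1 { print "b"$0 }
      { print "c"$0 }
END   { print "d"$0 }
' MyFile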
Put the following conditions inside the first one and check them with if statements, like this:
awk -F, '
/./ {
print "a"$0;
if (NR!=1) { print "b"$0 }
print "c"$0
}
END { print "d"$0 }
' MyFile