The following awk syntax cuts the lines from the file, from the line that has port XNT1
until the END OF COMMAND line:
# awk '/\/stats\/port XNT1\/if/,/END OF COMMAND/' /var/tmp/test
>> SW_02_03 - Main# /stats/port XNT1/if
------------------------------------------------------------------
Interface statistics for port XNT1:
IBP/CBP Discards: 0
L3 Discards: 0
>> SW_02_03 - Port Statistics# END OF COMMAND
#
Now I set an external variable as XNTF=XNT1 in the awk command,
but for some reason XNTF in the awk does not get the "XNT1" value, and awk does not display the lines:
# awk -v XNTF=XNT1 '/\/stats\/port XNTF\/if/,/END OF COMMAND/' /var/tmp/test
Please advise why awk does not work when I set an external variable, and how to fix it.
I normally try to avoid the range operator in awk, since it is not very flexible. This should do:
awk -v XNTF=XNT1 '$0~"/stats/port " XNTF "/if" {f=1} f; /END OF COMMAND/ {f=0}' file
>> SW_02_03 - Main# /stats/port XNT1/if
------------------------------------------------------------------
Interface statistics for port XNT1:
IBP/CBP Discards: 0
L3 Discards: 0
>> SW_02_03 - Port Statistics# END OF COMMAND
Inside //, variables are not expanded. You'll have to use the ~ operator to match against an assembled regex:
awk -v XNTF=XNT1 '$0 ~ "/stats/port " XNTF "/if",/END OF COMMAND/' /var/tmp/test
Generally, $0 ~ some_string matches $0 (the line) against some_string interpreted as a regex.
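As a minimal illustration of the same idea (a sketch; the file name data and the port variable are just placeholders), the regex is assembled by ordinary string concatenation before the match is attempted:
awk -v port=XNT1 '$0 ~ ("port " port) { print "matched: " $0 }' data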
I'm a beginner at bash scripting and have been writing a script to check different log files, and I'm a bit stuck here.
clientlist=/path/to/logfile/which/consists/of/client/names
# I will grep only the client names from the file, which has multiple log lines
clients=$(grep --color -i 'list of client assets:' $clientlist | cut -d":" -f1 )
echo "Clients : $clients"
#For example "Clients: Apple
# Samsung
# Nokia"
#number of clients may vary from time to time
assets=("$clients".log)
echo assets: "$assets"
The code above greps the client names from the log file, and I'm trying to use each grepped client name to construct a logfile name for that client.
The number of clients is indefinite and may vary from time to time.
The code I have returns the client names as one whole string:
assets: Apple
Samsung
Nokia.log
and I'm a bit unsure how to split the string and process the names one by one to produce the assets with .log appended for each client name. How can I do this?
Apple.log
Samsung.log
Nokia.log
(Apologies if I have misunderstood the task)
Using awk
If your input file (I'll call it clients.txt) is:
Clients: Apple
Samsung
Nokia
The following awk step:
awk '{print $NF".log"}' clients.txt
outputs:
Apple.log
Samsung.log
Nokia.log
(You can also pipe straight into awk and omit the file name, as long as the piped stream has the same contents as the file in the example above.)
It is highly likely that a simple awk procedure can perform the entire task, beginning with the 'clientlist' you process with grep (awk has all the functionality of grep built in), but I'd need to know the structure of the original file to extract the client names.
One awk idea:
assets=( $(awk -F: '/list of client assets:/ {print $2".log"}' "${clientlist}") )
# or
mapfile -t assets < <(awk -F: '/list of client assets:/ {print $2".log"}' "${clientlist}")
Where:
-F: - define input field delimiter as :
/list of client assets:/ - for lines that contain the string list of client assets:, print the 2nd :-delimited field and append the string .log to the end
One sed idea:
assets=( $(sed 's/.*://; s/$/.log/' "${clientlist}") )
# or
mapfile -t assets < <(sed 's/.*://; s/$/.log/' "${clientlist}")
Where:
s/.*:// - strip off everything up to the :
s/$/.log/ - replace end of line with .log
Both generate:
$ typeset -p assets
declare -a assets=([0]="Apple.log" [1]="Samsung.log" [2]="Nokia.log")
$ echo "${assets[@]}"
Apple.log Samsung.log Nokia.log
$ printf "%s\n" "${assets[@]}"
Apple.log
Samsung.log
Nokia.log
$ for i in "${!assets[@]}"; do echo "assets[$i] = ${assets[$i]}"; done
assets[0] = Apple.log
assets[1] = Samsung.log
assets[2] = Nokia.log
NOTE: the alternative answers using mapfile address the issue referenced in Charles Duffy's comment (see bash pitfall #50); readarray is a synonym for mapfile.
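As a quick usage sketch (the existence test and the echo are purely illustrative), the populated array can then be looped over safely:
for logfile in "${assets[@]}"; do
    [[ -f "$logfile" ]] && echo "found $logfile"
done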
I have the following file and need to grep for a value, in this case "225". This value is actually a variable, $pd, so it could change depending on the user's input. It could be an integer or an alphanumeric string, and the match should be a case-insensitive exact match. For example, if the value of the variable is "225", then lines with "0225" or "11225" are not valid output from the file I'm reading.
Input File:
10.20.223.10|2000-H1|1/1/2|DeviceX_4021|LG
10.20.223.10|2000-H1|1/1/3|Undiscoverable|Unkwn
10.20.225.10|2000-H1|1/1/5|DeviceZ_2050|LG
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
10.20.223.10|2000-H1|1/1/8|DeviceY_01225_|Kenmore
10.20.225.10|2000-H1|1/1/8|DeviceY_2250_|Kenmore
Desired Output File:
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
If the user input is "lg", then it should still output the matching lines rather than ignore them, because the input file has "LG" in uppercase. (This part is already fixed in the script.)
Desired Output:
10.20.223.10|2000-H1|1/1/2|DeviceX_4021|LG
10.20.225.10|2000-H1|1/1/5|DeviceZ_2050|LG
$ awk -F'|' -v n='225' '$4 ~ n' file
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
or if you don't want a partial match (e.g. against 1225) then one way is:
$ awk -F'|' -v n='225' '$4 ~ ("(^|[^0-9])" n "([^0-9]|$)")' file
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
or:
$ awk -F'|' -v n='225' '$4 ~ ("(^|_)" n "(_|$)")' file
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
There are other possibilities too. The right solution depends on requirements you haven't told us about, and each will pass or fail on input other than what you've shown us so far.
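For the case-insensitive requirement mentioned in the question, one possible sketch (assuming the brand sits in the 5th |-delimited field, as in the sample) is to lowercase both sides before comparing:
$ awk -F'|' -v n='lg' 'tolower($5) == tolower(n)' file
10.20.223.10|2000-H1|1/1/2|DeviceX_4021|LG
10.20.225.10|2000-H1|1/1/5|DeviceZ_2050|LG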
awk
awk -F"|" -v var="[A-Za-z].225_" '$4 ~ var{print}'
sed
sed -n '/[A-Za-z].225./p'
grep
grep '[A-Za-z].225.'
Output
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
Using sed:
sed -n '/^\([^|]*\|\)\{3\}[^|]*225/p' < input
Explanation:
the -n option disables automatic output at the end of each sed cycle
the pattern matches arbitrary contents of the first three (\{3\}) columns of data via the \(parenthesized\) pattern [^|]*\| -- any number of non-delimiter characters followed by the column delimiter
it matches additional input at the beginning of the fourth column, but not spanning columns, with a similar subexpression: [^|]*
then comes the literal text you want to match
the p command after the pattern causes the line to be printed to sed's output in the event that it matches the pattern
There's almost certainly an awk solution too, but in Perl it's this:
$ perl -aF'\|' -ne '$F[3] =~ 225 and print' < input
10.20.223.10|2000-H1|1/1/8|DeviceY_225_|Kenmore
-a: Autosplit the input into array @F
-F'\|': Set the autosplit delimiter to |
-n: Run code for each line in the input file
-e: Here's the code to run
$F[3]: The 4th element of the autosplit array @F
=~: Regex match
and print: Print the input line if the regex matches
Update: You can get the string you're interested in from a command line parameter by assigning it in a BEGIN block.
$ perl -aF'\|' -ne 'BEGIN { $x = shift } $F[3] =~ $x and print' 225 < input
I'm trying to create a little script that basically uses dig +short to find the IP of a website, and then pipes that to sed/awk/grep to replace a line. This is what the current file looks like:
#Server
123.455.1.456
246.523.56.235
So, basically, I want to search for the '#Server' line in a text file, and then replace the two lines underneath it with an IP address acquired from dig.
I understand some of the syntax of sed, but I'm really having trouble figuring out how to replace two lines underneath a match. Any help is much appreciated.
Based on the OP, it's not 100% clear exactly what needs to be replaced where, but here's a one-liner for the general case, using GNU sed and bash. Replace the two lines after "3" with standard input:
echo Hoot Gibson | sed -e '/3/{r /dev/stdin' -e ';p;N;N;d;}' <(seq 7)
Outputs:
1
2
3
Hoot Gibson
6
7
Note: sed's r command is opaquely documented (in Linux anyway). For more about r, see:
"5.9. The 'r' command isn't inserting the file into the text" in this sed FAQ.
Here's how in awk:
newip=12.34.56.78
awk -v newip="$newip" '{
if($1 == "#Server"){
l = NR;
print $0
}
else if(l>0 && NR == l+1){
print newip
}
else if(l==0 || NR != l+2){
print $0
}
}' file > file.tmp
mv -f file.tmp file
explanation:
pass $newip to awk
if the first field of the current line is #Server, let l = current line number.
else if the current line is one past #Server, print the new ip.
else if the current row is not two past #Server, print the line.
overwrite original file with modified version.
I have this file:
$ cat file
1515523 A45678BF141 A11269151
2234545 A45678BE145 A87979746
5432568 A45678B2123 A40629187
7234573 A45678B4154 A98879129
8889568 A45678B5123 A13409137
9234511 A45678B9176 A23589941
3904568 A45678B7123 A52329165
3234555 A45678B1169 A23589497
9643568 A45678B6123 A39969112
1234547 A45678B2132 A40579243
and this script:
cat file | awk '{FS = " "} {print $1" "$3" "$5}'| awk '{
n = split($3, a, "");
s = "";
for (i = 1; i <= n; i += 2) s = s a[i+1] a[i];
print $1, substr($2, length($2)-3, 4), s
}'| cut -d" " -f3,1 > output
And when I open the output with vi, I have:
1515523 F141 11621915^M
2234545 E145 78797964^M
5432568 2123 04261978^M
7234573 4154 89781992^M
8889568 5123 31041973^M
9234511 9176 32859914^M
3904568 7123 25231956^M
3234555 1169 32854979^M
9643568 6123 93691921^M
1234547 2132 04752934^M
I don't know why I get the ^M. Also, when I run the awk snippet:
cat imei | awk '{FS=" "} {print $2","$1}'
the output is wrong, i.e., it does not swap the columns, because it does not print the second column. Any ideas on what may be happening?
There are carriage returns (^M or Control-M) in the data file. It probably came from a Windows machine at some point.
When you print $2","$1 (which concatenates $2 with a string containing a comma and then $1 — it took me a couple of looks to see what it was really doing), the carriage return makes the second column overwrite the first.
Look at the data file with od -c or similar tools to see the carriage returns in it.
You can use dos2unix or tr or various other techniques to convert the file from DOS/Windows format to Unix format.
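For example, the tr route (a sketch; file.unix is just a temporary name) deletes every carriage return and then puts the cleaned copy back:
tr -d '\r' < file > file.unix && mv file.unix file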
Also, given the data format shown, I'd expect not to use -F " " (or the FS = " ", which is equivalent), so that you have columns $1, $2, and $3, which is more obvious than working with columns 1, 3, 5 as shown. You could set OFS to double-blank if you wanted the output with two blanks between columns.
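A minimal sketch of that suggestion (assuming the three-column layout shown and that the carriage returns have already been removed):
awk 'BEGIN { OFS = "  " } { print $1, $2, $3 }' file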
$ dos2unix file
$ awk '{split($3,a,""); print $1, substr($2,8), a[3]a[2]a[5]a[4]a[7]a[6]a[9]a[8]}' file
1515523 F141 11621915
2234545 E145 78797964
5432568 2123 04261978
7234573 4154 89781992
8889568 5123 31041973
9234511 9176 32859914
3904568 7123 25231956
3234555 1169 32854979
9643568 6123 93691921
1234547 2132 04752934
Since you are using awk, you do not need dos2unix.
Simply insert
gsub(/\r/,"");
as the first statement in your awk script.
It cleans up each line as it is read in, so subsequent matching or processing never sees any carriage-return characters.
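For example, applied to the column-swap snippet from the question (a sketch; imei is the file name the question used):
awk '{ gsub(/\r/, ""); print $2 "," $1 }' imei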
How about a perl 'one-liner' (with continuation lines):
$ dos2unix file
$ perl -lane \
'$xxxx = substr($F[1],-4);
@c = split(//,$F[2]);
print "$F[0] $xxxx $c[2]$c[1]$c[4]$c[3]$c[6]$c[5]$c[8]$c[7]"' file
I have a file file1 with the following content
{"name":"clio5", "value":"13"}
{"name":"citroen_c4", "value":"23"}
{"name":"citroen_c3", "value":"12"}
{"name":"golf4", "value":"16"}
{"name":"golf3", "value":"8"}
I want to look for the line which contains the word clio5 and then replace that line with the following string:
string='{"name":"clio5", "value":"1568688554"}'
$ string='{"name":"clio5", "value":"1568688554"}'
$ awk -F'"(:|, *)"' -v string="$string" 'BEGIN{split(string,s)} {print ($2==s[2]?string:$0)}' file
{"name":"clio5", "value":"1568688554"}
{"name":"citroen_c4", "value":"23"}
{"name":"citroen_c3", "value":"12"}
{"name":"golf4", "value":"16"}
{"name":"golf3", "value":"8"}
$ string='{"name":"citroen_c3", "value":"1568688554"}'
$ awk -F'"(:|, *)"' -v string="$string" 'BEGIN{split(string,s)} {print ($2==s[2]?string:$0)}' file
{"name":"clio5", "value":"13"}
{"name":"citroen_c4", "value":"23"}
{"name":"citroen_c3", "value":"1568688554"}
{"name":"golf4", "value":"16"}
{"name":"golf3", "value":"8"}
Updated the above based on @dogbane's comment so it will work even if the text contains " characters. It will still fail if the text can contain ":" (with appropriate escapes), but that seems highly unlikely and the OP can tell us if it's a valid concern.
First, you extract the name part from your $string as follows:
NAME=`echo $string | sed 's/[^:]*:"\([^"]*\).*/\1/'`
Then, use $NAME to replace the matching line with $string:
sed -i "/\<$NAME\>/s/.*/$string/" file1
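Putting the two steps together (a sketch; the in-place edit assumes GNU sed's -i and the \< \> word-boundary escapes):
string='{"name":"clio5", "value":"1568688554"}'
NAME=$(echo "$string" | sed 's/[^:]*:"\([^"]*\).*/\1/')
sed -i "/\<$NAME\>/s/.*/$string/" file1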
Use awk like this:
awk -v str="$string" -F '[,{}:]+' '{
split(str, a);
if (a[3] ~ $3)
print str;
else print
}' file.json