I want to extract the lines between forwarders { and }; those lines are IP addresses. Below is a sample file that mimics my data:
// Red Hat BIND Configuration Tool
//
// THIS IS THE SLAVE DDNS SERVER -
//
// Currently running in chroot environment
// Prefix all file names below with /var/named/chroot
options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
recursion yes;
check-names master ignore;
check-names slave ignore;
check-names response ignore;
max-journal-size 2M;
allow-query { any; };
allow-update {
key copdcop1.example.com.;
key copdcop2.example.com.;
key copdcop3.example.com.;
key copdcop4.example.com.;
};
forward only;
forwarders {
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
};
Desired Result:
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
I'm okay with any solution either shell or python or awk.
I tried with sed, but had no luck:
sed -n '"/forwarders {"/,/"};"' dns.txt
However, the awk code below works:
awk '/forwarders {/{flag=1;next}/};/{flag=0}flag' dns.txt
sed -n '/forwarders {/,/};/{//!p}' file
Given your sample, its output is:
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
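That sed works because the empty regex // reuses the most recently matched pattern, so the delimiter lines themselves are suppressed. A self-contained run (GNU sed assumed; the sample file here is a trimmed stand-in):

```shell
# Trimmed stand-in for the real config file.
cat > dns_sample.txt <<'EOF'
forwarders {
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
};
EOF

# Inside the range, //!p prints every line that does not match
# the delimiter pattern, i.e. only the IP lines.
sed -n '/forwarders {/,/};/{//!p}' dns_sample.txt
```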
It really depends on how much the file can change.
But this would work for your example:
awk '/forwarders {/{flag=1;next}/};/{flag=0}flag' /path/to/file
For your example:
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
EDIT: Since the OP asked to have the output on a single line, adding the following solution now.
awk 'BEGIN{OFS=","} /}/{found=""} /forwarders {/{found=1} found && match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){gsub(/ +/," ");val=(val?val OFS:"")$0}END{print val}' Input_file
Or, the non-one-liner form of the solution:
awk '
BEGIN{
OFS=","
}
/}/{
found=""
}
/forwarders {/{
found=1
}
found && match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){
gsub(/ +/," ")
val=(val?val OFS:"")$0
}
END{
print val
}' Input_file
Or, as mentioned before, to collect everything inside the forwarders block onto a single, comma-separated line, try:
awk '/}/{found=""} /forwarders {/{found=1;next} found{gsub(/ +/," ");val=(val?val OFS:"")$0} END{print val}' Input_file
Could you please try the following (considering that you only need to print the IP addresses inside the block):
awk '/}/{found=""} /forwarders {/{found=1} found && match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/)' Input_file
In case you want to print everything inside the forwarders block as-is, then try the following:
awk '/}/{found=""} /forwarders {/{found=1;next} found' Input_file
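As a self-contained sanity check of the flag-based approach, the run below inlines a trimmed version of the config (the file name is illustrative):

```shell
# Trimmed stand-in for the real named.conf.
cat > dns_sample.txt <<'EOF'
options {
forward only;
forwarders {
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
};
};
EOF

# Print only the lines between "forwarders {" and "};".
awk '/forwarders {/{flag=1;next} /};/{flag=0} flag' dns_sample.txt
```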
I have a bunch of files which I concatenate into one large file. The single large file then looks like this:
function foo() {
// ... implementation
}
function bar() {
// ... implementation
}
function baz() {
// ... implementation
}
function foo_bar() {
// ... implementation
}
...
A bunch of functions. I want to create a new file with all this content, PLUS prefixing it with this:
module.exports = {
foo,
bar,
baz,
foo_bar,
...
}
Basically exporting every function. What is the simplest, cleanest way I can do this in bash?
This is as far as I got, haha - it is really confusing to try to come up with a solution:
A := out/a.js
B := out/b.js
all: $(A) $(B)
$(A):
	@find src -name '*.js' -exec cat {} + > $@
$(B):
	@cat out/a.js | grep -oP '(?<=function )[a-zA-Z0-9_]+(?= \{)'
.PHONY: all
Store the list of functions declared before and after sourcing the file. Compute the difference. You can get the list of currently declared functions with declare -F.
A() { :; }
pre=$(declare -F | sed 's/^declare -f //')
function foo() {
// ... implementation
}
function bar() {
// ... implementation
}
function baz() {
// ... implementation
}
function foo_bar() {
// ... implementation
}
post=$(declare -F | sed 's/^declare -f //')
diff=$(comm -13 <(sort <<<"$pre") <(sort <<<"$post"))
echo "module.exports = {
$(<<<"$diff" paste -sd, | sed 's/,/,\n\t/g')
}"
I think with bash --norc you should get a clean environment, so with bash --norc -c 'source yourfile.txt; declare -F' you can get away without computing a difference against pre-existing functions:
cat <<EOF >yourfile.txt
function foo() {
// ... implementation
}
function bar() {
// ... implementation
}
function baz() {
// ... implementation
}
function foo_bar() {
// ... implementation
}
EOF
diff=$(bash --norc -c 'source yourfile.txt; declare -F' | cut -d' ' -f3-)
echo "module.exports = {
$(<<<"$diff" paste -sd, | sed 's/,/,\n\t/g')
}"
Both code snippets should output:
module.exports = {
bar,
baz,
foo,
foo_bar
}
Note: the function name() {} form is a mix of the ksh and POSIX styles of function definition - ksh uses function name {} while POSIX uses name() {}. Bash supports both forms, and even the strange mix of the two. To be portable, just use the POSIX version name() {}. More info is on the bash-hackers wiki page about obsolete and deprecated syntax.
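A quick illustration of the two forms (run under bash; the function names are made up):

```shell
# POSIX form: works in sh, dash, ksh, and bash.
posix_fn() { echo "posix"; }

# ksh form: works in bash and ksh, but not in plain POSIX sh.
function ksh_fn { echo "ksh"; }

posix_fn
ksh_fn
```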
This simple awk script will do it
awk -F '( |\\()' 'BEGIN {print "module.exports = {"} /function/ {print "\t" $2 ","} END {print "}"}' largefile.js
You could use echo and sed:
echo 'module.exports = {'; sed -n 's/^function \([^(]*\)(.*/ \1,/p' input.txt; echo '}'
result:
module.exports = {
foo,
bar,
baz,
foo_bar,
}
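Note that as written the three commands send three separate streams to the terminal; to capture the whole result in one file, they can be grouped so a single redirection covers all of them (file names here are illustrative):

```shell
# Sample input standing in for the concatenated bundle.
cat > input.txt <<'EOF'
function foo() {
}
function bar() {
}
EOF

# Group the commands so a single > covers all three.
{
  echo 'module.exports = {'
  sed -n 's/^function \([^(]*\)(.*/  \1,/p' input.txt
  echo '}'
} > exports.js
cat exports.js
```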
I have four files and want to run an awk script on each file, but this code is not working.
My code:
for i in "udp-250b.tr" "udp-50b.tr"
do
awk '
BEGIN {
# some code
}
{
# some code
}
END {
# some code
} ' i
done
Awk can work with multiple files - there is no need for the for loop.
The syntax is like this:
awk '{ }' file1 file2 file3
or
awk '{ }' file*
In your case
awk 'BEGIN{ } { } END{ }' udp-*.tr
To correct your existing code, change
} ' i
To
} ' "$i"
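When a single awk invocation reads several files, FILENAME and FNR identify the current file and its per-file line number; a minimal sketch (file names and contents are made up):

```shell
# Two tiny stand-in trace files.
printf 'a\nb\n' > udp-250b.tr
printf 'c\n' > udp-50b.tr

# FNR restarts at 1 for each input file; FILENAME names the current one.
awk '{print FILENAME, FNR, $0}' udp-250b.tr udp-50b.tr
```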
I want to declare a variable called variableToUse which holds the file name path.
I want to append file name with today's date.
Below code is in myAWK.awk
$ cat myAWK.awk
BEGIN{
today="date +%Y%m%d";
variableToUse=/MainDir/MainDir1/MainDir2/OutputFile_today.xml
}
/<record / { i=1 }
i { a[i++]=$0 }
/<\/record>/ {
if (found) {
print a[i] >> variableToUse
}
}
I am getting a syntax error at OutputFile_today.xml.
How do I use the variable value?
You should quote the variable properly.
Example:
$ awk 'BEGIN{variableToUse="/MainDir/MainDir1/MainDir2/OutputFile_today.xml"; print variableToUse}'
/MainDir/MainDir1/MainDir2/OutputFile_today.xml
To get the current date, you can use strftime (a GNU awk extension).
Example:
$ awk 'BEGIN{variableToUse="/MainDir/MainDir1/MainDir2/OutputFile_"strftime("%Y%m%d")".xml"; print variableToUse}'
/MainDir/MainDir1/MainDir2/OutputFile_20160205.xml
Have your awk script like this:
BEGIN {
today="date +%Y%m%d";
variableToUse="/MainDir/MainDir1/MainDir2/OutputFile_" today ".xml"
}
/<record / { i=1 }
i { a[i++]=$0 }
/<\/record>/ {
if (found) {
print a[i] >> variableToUse
}
}
By the way, there are a couple of other issues:
- I don't see found getting set anywhere in this script.
- today="date +%Y%m%d" will not execute the date command; it just assigns the literal string date +%Y%m%d to the today variable. If you want to execute the date command, then use:
awk -v today="$(date '+%Y%m%d')" -f myAWK.awk
and remove the today= line from the BEGIN block.
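Putting that together, a minimal sketch of passing the date in via -v (the path is taken from the question; here the BEGIN block just prints the resulting name):

```shell
# Run date in the shell and hand the result to awk as a variable.
variableToUse=$(awk -v today="$(date '+%Y%m%d')" \
  'BEGIN{print "/MainDir/MainDir1/MainDir2/OutputFile_" today ".xml"}')
echo "$variableToUse"
```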
In Perl 6 the Str type is immutable, so it seems reasonable to use a mutable buffer instead of concatenating a lot of strings. Also, I like being able to use the same API regardless of whether my function is writing to stdout, a file, or an in-memory buffer.
In Perl, I can create an in-memory file like so
my $var = "";
open my $fh, '>', \$var;
print $fh "asdf";
close $fh;
print $var; # asdf
How do I achieve the same thing in Perl 6?
There's a minimal IO::String in the ecosystem backed by an array.
For a one-off solution, you could also do something like
my $string;
my $handle = IO::Handle.new but role {
method print(*@stuff) { $string ~= @stuff.join };
method print-nl { $string ~= "\n" }
};
$handle.say("The answer you're looking for is 42.");
dd $string;
What I currently do is wrap string concatenation in a class as a temporary solution.
class Buffer {
has $!buf = "";
multi method print($string) {
$!buf ~= $string;
}
multi method say($string) {
$!buf ~= $string ~ "\n";
}
multi method Str() {
return $!buf;
}
}
With that, I can do
my $buf = Buffer.new();
say $buf: "asdf";
print $buf.Str;
I have a BIND config file with the following content:
zone "domain1.com" {
type master;
file "masters/domain1.com";
allow-transfer {
dnscontroller_acl;
};
};
zone "domain2.com" {
type master;
file "masters/domain2.com";
allow-transfer {
dnscontroller_acl;
};
};
zone "domain3.com" {
type master;
file "masters/domain3.com";
allow-transfer {
dnscontroller_acl;
};
};
zone "domain4.com" {
type master;
file "masters/domain4.com";
allow-transfer {
dnscontroller_acl;
};
};
How can I remove a zone's config (starting from the zone line and ending at the closing };) from the file with the help of bash?
You can use sed to remove the config for a given zone:
sed '/^zone "domain4.com" {$/,/^};/d' file
If you want a script that can take a zone as an argument, just add the shebang and the argument:
#!/bin/bash
sed '/^zone "'"$1"'" {$/,/^};/d' file
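Saved as, say, delzone.sh, the script would be invoked as ./delzone.sh domain4.com (the script name is illustrative). Inlining the parameter for a quick demo on a two-zone stand-in:

```shell
# Two-zone stand-in for the real config.
cat > zones.conf <<'EOF'
zone "domain1.com" {
type master;
};
zone "domain2.com" {
type master;
};
EOF

# Delete the block for the zone that would arrive as $1.
zone="domain1.com"
sed '/^zone "'"$zone"'" {$/,/^};/d' zones.conf
```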
If the file is well ordered (blocks separated by blank lines), you could use awk with automatic record and field separation:
awk '
BEGIN { RS = ORS = "\n\n"; FS="\n" }
$1 !~ /domain3/
' file
This removes the zone whose first line contains "domain3".
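Note that RS = "\n\n" treats blank-line-separated blocks as records (a multi-character RS is a GNU awk/mawk extension, not POSIX), so this only works when the zones are laid out with blank lines between them, e.g.:

```shell
# Zones separated by blank lines, as the record separator requires.
cat > zones.conf <<'EOF'
zone "domain1.com" {
};

zone "domain3.com" {
};
EOF

# Each blank-line-separated block is one record; drop the domain3 one.
awk 'BEGIN { RS = ORS = "\n\n"; FS = "\n" } $1 !~ /domain3/' zones.conf
```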