How to remove a text block from a file with bash - linux

I have a BIND config file with the following contents:
zone "domain1.com" {
    type master;
    file "masters/domain1.com";
    allow-transfer {
        dnscontroller_acl;
    };
};
zone "domain2.com" {
    type master;
    file "masters/domain2.com";
    allow-transfer {
        dnscontroller_acl;
    };
};
zone "domain3.com" {
    type master;
    file "masters/domain3.com";
    allow-transfer {
        dnscontroller_acl;
    };
};
zone "domain4.com" {
    type master;
    file "masters/domain4.com";
    allow-transfer {
        dnscontroller_acl;
    };
};
How can I remove one zone's config (from the line naming the zone down to its closing };) from the file with bash?

You can use sed to remove the config for a given zone:
sed '/^zone "domain4.com" {$/,/^};/d' file
If you want a script that can take a zone as an argument, just add the shebang and the argument:
#!/bin/bash
sed '/^zone "'"$1"'" {$/,/^};/d' file
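A quick way to sanity-check the sed range on a throwaway copy of the config. The file and zone names here are placeholders, not from a real setup; note the pattern relies on the zone's own closing brace being the only "};" at column 0:

```shell
# Sketch: build a two-zone sample, then delete one zone by name.
cat > zones.conf <<'EOF'
zone "domain1.com" {
    type master;
    file "masters/domain1.com";
    allow-transfer {
        dnscontroller_acl;
    };
};
zone "domain2.com" {
    type master;
    file "masters/domain2.com";
    allow-transfer {
        dnscontroller_acl;
    };
};
EOF

zone="domain2.com"
# Delete from the line that opens the zone to the next line that is just "};"
# at column 0 (inner closing braces are indented, so they don't end the range).
sed '/^zone "'"$zone"'" {$/,/^};/d' zones.conf
```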

If the file is well ordered, with the zone blocks separated by blank lines, you could use awk with a blank line as the record separator and automatic field splitting:
awk '
BEGIN { RS = ORS = "\n\n"; FS="\n" }
$1 !~ /domain3/
' file
This removes the zone whose first line contains "domain3".
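For completeness, a runnable sketch of this approach. It assumes the zone blocks are separated by blank lines, and that your awk supports a multi-character RS (GNU awk and mawk do; POSIX awk only honors the first character of RS):

```shell
# Sample with blank lines between zones, as this answer presumes.
cat > zones_blank.conf <<'EOF'
zone "domain1.com" {
    type master;
};

zone "domain3.com" {
    type master;
};
EOF

# Each blank-line-separated block is one record; $1 is its first line.
awk '
BEGIN { RS = ORS = "\n\n"; FS = "\n" }
$1 !~ /domain3/
' zones_blank.conf
```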

Related

Changing contents of a tsx file through shell script

I have a requirement to change the contents of a config.tsx file that contains values like:
const authData = {
  base_uri: 'https://development-api.com.au',
  customLib: {
    redirect_uri: 'https://another-development-api.com.au'
  }
}
export default authData;
I want to change this content using a shell script and save the file. Changed content can look like:
const authData = {
  base_uri: 'https://production-api.com.au',
  customLib: {
    redirect_uri: 'https://another-production-api.com.au'
  }
}
export default authData;
How can I do this?
This should work:
sed -i 's/development/production/g' config.tsx
The -i option will edit the file in place. If you first want to try the command to see if it works the way you want, use it without the -i. The output will be printed to stdout.
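One portability caveat: GNU sed accepts -i with no argument, but BSD/macOS sed requires a backup suffix after -i. Writing a backup explicitly works with both and gives you an undo path. A small sketch, using a one-line stand-in for config.tsx:

```shell
# Minimal reproduction with abbreviated file contents.
printf "base_uri: 'https://development-api.com.au'\n" > config.tsx

# Preview the substitution on stdout first, without touching the file:
sed 's/development/production/g' config.tsx

# Then edit in place, keeping the original as config.tsx.bak:
sed -i.bak 's/development/production/g' config.tsx
```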

How to take the data between two strings in a file Linux

I want to get the lines between forwarders { and };, which are IP addresses. Below is a sample file that mimics my data:
// Red Hat BIND Configuration Tool
//
// THIS IS THE SLAVE DDNS SERVER -
//
// Currently running in chroot environment
// Prefix all file names below with /var/named/chroot
options {
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    recursion yes;
    check-names master ignore;
    check-names slave ignore;
    check-names respocope ignore;
    max-journal-size 2M;
    allow-query { any; };
    allow-update {
        key copdcop1.example.com.;
        key copdcop2.example.com.;
        key copdcop3.example.com.;
        key copdcop4.example.com.;
    };
    forward only;
    forwarders {
        192.168.174.131; // cop-no1
        192.155.98.74; // cop-jn1
        192.168.2.40; // cop-sad1
        192.168.2.56; // cop-s1
        192.43.4.70; // cop-che1
        192.20.28.8; // copdcop1
    };
Desired Result:
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
I'm okay with any solution: shell, Python, or awk.
I tried with sed but had no luck:
sed -n '"/forwarders {"/,/"};"' dns.txt
However, the awk code below works:
awk '/forwarders {/{flag=1;next}/};/{flag=0}flag' dns.txt
sed -n '/forwarders {/,/};/{//!p}' file
Within the range, the empty pattern // reuses the last matched regex, so the two boundary lines themselves are not printed. Given your sample, its output is:
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
It really depends on how much the file can change, but this would work for your example:
awk '/forwarders {/{flag=1;next}/};/{flag=0}flag' /path/to/file
For your example:
192.168.174.131; // cop-no1
192.155.98.74; // cop-jn1
192.168.2.40; // cop-sad1
192.168.2.56; // cop-s1
192.43.4.70; // cop-che1
192.20.28.8; // copdcop1
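A self-contained run of the flag technique, using a trimmed copy of the question's sample:

```shell
cat > dns.txt <<'EOF'
options {
    forward only;
    forwarders {
        192.168.174.131; // cop-no1
        192.155.98.74; // cop-jn1
    };
};
EOF

# Set the flag after the opening line, clear it on the closing line,
# and print only while the flag is set.
awk '/forwarders {/{flag=1;next} /};/{flag=0} flag' dns.txt
```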
EDIT: Since the OP asked for the output on a single line, here is a solution for that.
awk 'BEGIN{OFS=","} /}/{found=""} /forwarders {/{found=1} found && match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){gsub(/ +/," ");val=(val?val OFS:"")$0}END{print val}' Input_file
Or, the same solution in non-one-liner form:
awk '
BEGIN{
    OFS=","
}
/}/{
    found=""
}
/forwarders {/{
    found=1
}
found && match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){
    gsub(/ +/," ")
    val=(val?val OFS:"")$0
}
END{
    print val
}' Input_file
Or, as mentioned before, to print everything inside the forwarders block, try:
awk '/}/{found=""} /forwarders {/{found=1;next} found{gsub(/ +/," ");val=(val?val OFS:"")$0} END{print val}' Input_file
You could try the following (assuming you only need to print the IP addresses inside the block):
awk '/}/{found=""} /forwarders {/{found=1} found && match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/)' Input_file
In case you want to print everything inside the forwarders block, try the following:
awk '/}/{found=""} /forwarders {/{found=1;next} found' Input_file

awk : how to use variable value

I want to declare a variable called variableToUse which holds the file path.
I want to append today's date to the file name.
The code below is in myAWK.awk:
$ cat myAWK.awk
BEGIN{
    today="date +%Y%m%d";
    variableToUse=/MainDir/MainDir1/MainDir2/OutputFile_today.xml
}
/<record / { i=1 }
i { a[i++]=$0 }
/<\/record>/ {
    if (found) {
        print a[i] >> variableToUse
    }
}
I am getting a syntax error at OutputFile_today.xml.
How do I use the variable's value?
You should quote string values properly.
Example
$ awk 'BEGIN{variableToUse="/MainDir/MainDir1/MainDir2/OutputFile_today.xml"; print variableToUse}'
/MainDir/MainDir1/MainDir2/OutputFile_today.xml
To get the current date you can use strftime (a GNU awk extension):
Example
$ awk 'BEGIN{today="date +%Y%m%d";variableToUse="/MainDir/MainDir1/MainDir2/OutputFile_"strftime("%Y%m%d")".xml"; print variableToUse}'
/MainDir/MainDir1/MainDir2/OutputFile_20160205.xml
Have your awk script like this:
BEGIN {
    today="date +%Y%m%d";
    variableToUse="/MainDir/MainDir1/MainDir2/OutputFile_" today ".xml"
}
/<record / { i=1 }
i { a[i++]=$0 }
/<\/record>/ {
    if (found) {
        print a[i] >> variableToUse
    }
}
By the way, there are a couple of other issues:
- I don't see found getting set anywhere in this script.
- today="date +%Y%m%d" will not execute the date command. It just assigns the literal text date +%Y%m%d to the today variable. If you want to execute the date command, then use:
awk -v today="$(date '+%Y%m%d')" -f myAWK.awk
and remove the today= line from the BEGIN block.
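A runnable sketch of that suggestion, with the directory path shortened to keep it self-contained (the path is illustrative):

```shell
# Compute the date in the shell and hand it to awk with -v, so awk never
# has to run an external command itself.
today=$(date '+%Y%m%d')
awk -v today="$today" 'BEGIN { print "/MainDir/OutputFile_" today ".xml" }'
```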

String read in from file not responding to string manipulation

I have a Perl subroutine that creates a file, like so:
sub createFile {
    if (open (OUTFILEHANDLE, ">$fileName")) {
        print OUTFILEHANDLE "$desiredVariable\n";
    }
    close(OUTFILEHANDLE);
}
where $fileName and $desiredVariable have been previously defined. I call that, and then call the following subroutine, which reads from the file, takes the first (only) line, and saves it into the variable $desiredVariable:
sub getInfoFromFile {
    if (existFile($fileName)) {
        if (open (READFILEHANDLE, "<$fileName")) {
            my @entire_file = <READFILEHANDLE>; # Slurp
            $desiredVariable = $entire_file[0];
            chop $desiredVariable;
            close(READFILEHANDLE);
        }
    }
}
If I leave out the "chop" line, $desiredVariable is what I want, but with a trailing newline. If I include the "chop" line, $desiredVariable is an empty string. For some reason, "chop" is killing the whole string. I've tried it with $desiredVariable =~ s/\s*$//; and several other string manipulation tricks.
What am I doing wrong?
The code you included does not reproduce the problem. I'm guessing it was lost in translation somehow while you were anonymizing it. I ran the script as follows; the only adjustment I made was using -f instead of existFile().
#!/usr/bin/perl
sub createFile {
    if (open (OUTFILEHANDLE, ">$fileName")) {
        print OUTFILEHANDLE "$desiredVariable\n";
    }
    close(OUTFILEHANDLE);
}
sub getInfoFromFile {
    if (-f $fileName) {
        if (open (READFILEHANDLE, "<$fileName")) {
            my @entire_file = <READFILEHANDLE>; # Slurp
            $desiredVariable = $entire_file[0];
            chop $desiredVariable;
            close(READFILEHANDLE);
        }
    }
}
$fileName = "test.txt";
$desiredVariable = "Hello World!";
createFile();
$desiredVariable = "";
getInfoFromFile();
print "Got '$desiredVariable'\n"; # Got 'Hello World!'

Concatenation of the contents of the files with the partial names in the array

I have many files (on the master) in a directory, with the names file1.domain, file2.domain, file3.domain, someanothername.domain.
The domain is always the same.
I need a defined type that I can use like this:
mydefinedtype { "title":
    filenames => ["file1", "file2", "file3"],
}
It should create a file on the node with the concatenated contents of file1.domain, file2.domain, and file3.domain.
I guess you mean that you want to deploy some files to your nodes.
If so, you can do as follows; this will copy file1, file2 and file3 from your_module/files to the target directory:
$filenames = ["file1", "file2", "file3"]
define copy_file {
    file { "/targetdir/$name":
        source => "puppet:///modules/your_module/$name",
    }
}
copy_file { $filenames }
see: http://docs.puppetlabs.com/guides/file_serving.html
