Update conf file using Perl - Linux

I am using a Perl script to update sections in a conf file. The script works, but when I execute it the order of the sections changes automatically, and I don't want the order to change. The script is the following:
#!/usr/bin/perl
use Config::Tiny;
$file = 'sss.conf';
my $Config = Config::Tiny->new;
$Config = Config::Tiny->read( $file );
my $rootproperty = $Config->{_}->{rootproperty};
$Config->{amrit}->{host} = 'Not Bar!';
$Config->write( $file );
and the file contains the following lines:
[amrit]
type=friend
host=111.118.253.145
port=2776
username=amrit
secret=password
disallow=all
allow=gsm
context=sip-calling
qualify=yes
call-limit=22
Please help me here; I don't want the order of the fields to change.

From the documentation for Config::Tiny:
...Config::Tiny does not preserve your comments, whitespace, or the order of your config file.
See Config::Tiny::Ordered (and possibly others) for the preservation of the order of the entries in the file.
So, do as the documentation suggests, and see Config::Tiny::Ordered.
Update: Based on the comments I'll try to provide additional help here.
First, your existing script is mostly just a copy-paste from the Config::Tiny synopsis, without any deep understanding of what most of it does. The relevant parts... or at least those parts you should keep are:
use Config::Tiny;
$file = 'sss.conf';
$Config = Config::Tiny->read( $file );
$Config->{amrit}->{host} = 'Not Bar!';
$Config->write( $file );
If you add use Data::Dumper; at the top of the script and print Dumper $Config; immediately after reading the config file, the structure will look like this:
{
  'amrit' => {
    'call-limit' => '22',
    'host' => '111.118.253.145',
    'secret' => 'password',
    'context' => 'sip-calling',
    'port' => '2776',
    'username' => 'amrit',
    'allow' => 'gsm',
    'qualify' => 'yes',
    'type' => 'friend',
    'disallow' => 'all'
  }
}
Modifying the structure with the script I've posted above would work if you don't mind the key/value pairs having their order rearranged. But you need to preserve order, so the suggestion was to switch to Config::Tiny::Ordered. That module preserves order by arranging the structure differently. If you change the script to look like this:
use Data::Dumper;
use Config::Tiny::Ordered;
$file = 'sss.conf';
$Config = Config::Tiny::Ordered->read( $file );
print Dumper $Config;
You will see that the structure now looks like this:
{
  'amrit' => [
    {
      'value' => 'friend',
      'key' => 'type'
    },
    {
      'key' => 'host',
      'value' => '111.118.253.145'
    },
    {
      'value' => '2776',
      'key' => 'port'
    },
    {
      'value' => 'amrit',
      'key' => 'username'
    },
    {
      'value' => 'password',
      'key' => 'secret'
    },
    {
      'value' => 'all',
      'key' => 'disallow'
    },
    {
      'key' => 'allow',
      'value' => 'gsm'
    },
    {
      'key' => 'context',
      'value' => 'sip-calling'
    },
    {
      'key' => 'qualify',
      'value' => 'yes'
    },
    {
      'value' => '22',
      'key' => 'call-limit'
    }
  ]
}
Or in other words, the internal structure of the Config::Tiny::* object has changed from a hash of hashes to a hash of arrays of hashes. ("All problems in computer science can be solved by another level of indirection" -- David Wheeler.) This change in the shape of the data structure exists to get around the problem of hashes being unordered containers.
So now, instead of a convenient hash lookup for the key named "host", you have to iterate through the structure to find the array element whose anonymous hash has a 'key' field equal to 'host'. More work:
use List::Util qw(first);
use Config::Tiny::Ordered;
$file = 'sss.conf';
$Config = Config::Tiny::Ordered->read( $file );
my $want_ix = first {
$Config->{amrit}[$_]{key} eq 'host'
} 0 .. $#{$Config->{amrit}};
$Config->{amrit}[$want_ix]{value} = 'Not Bar!';
$Config->write( $file );
This works by running through the amrit section of the config structure, looking for the element in the section's anonymous array whose anonymous hash has a 'key' field equal to 'host'; once that element is found, its 'value' entry is changed to 'Not Bar!'.
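If List::Util isn't available to you, the same lookup can be written as a plain loop. Here is a minimal sketch of the same idea, reusing the section and key names from the example above:
use Config::Tiny::Ordered;

my $file   = 'sss.conf';
my $Config = Config::Tiny::Ordered->read( $file );

# Walk the 'amrit' section until we find the entry whose 'key' is 'host',
# then overwrite its 'value'. $entry is a reference, so the change sticks.
for my $entry ( @{ $Config->{amrit} } ) {
    if ( $entry->{key} eq 'host' ) {
        $entry->{value} = 'Not Bar!';
        last;
    }
}

$Config->write( $file );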
If this is a one time thing for you, you're done. If you want to be able to do it yourself next time, read perldoc perlreftut, perldoc perldsc, as well as documentation for any of the functions used herein that aren't immediately clear.

From the CPAN page:
Lastly, Config::Tiny does not preserve your comments, whitespace, or the order of your config file.
Use Config::Tiny::Ordered instead.
See Config::Tiny::Ordered (and possibly others) for the preservation of the order of the entries in the file.

Related

Puppet: set default values for nested hashes

I am creating a class which creates an SSH key pair for users.
In Hiera it looks like this:
ssh::default_user_keys::entries:
  root:
    privkey: ENC[PKCS7,MIIH/QYJKoZIhvcNAQcDoIIH7jCCB+oCAQAx...]
    pubkey: BiQYJKoZIhvcNAQcDoIIBejCCAXYC...
  user1:
    privkey: ENC[PKCS7,MIIH/Bf0mv0aa7JRbEWWONfVMJBOX2I7x...]
    keydir: /srv/data/user1/.ssh
  user2:
    privkey: ENC[PKCS7,MIIBiQYJKoZIhvcNAQcDoIIBejCCAXYCAQAx...]
    keytype: id_ecdsa
If some values are not specified in Hiera, they must be picked up from default values in the manifest.
So my question is how to write the manifest correctly?
This is the pseudo code I am trying to make work:
class ssh::default_user_keys_refact (
  Hash $entries = lookup('ssh::default_user_keys_refact::entries', "merge" => 'hash'),
) {
  $entries.each |String $user, Hash $item = {} | {
    set_default { $item:
      privkey => undef,
      pubkey  => undef,
      keydir  => "/home/${user}/.ssh",
      keytype => id_rsa,
      comment => undef,
    }
    $sshpriv = $item[privkey]
    file { "${keydir}/${keytype}":
      ensure  => file,
      owner   => $user,
      mode    => '0600',
      content => $sshpriv,
    }
  }
}
Do you have ideas how to make this pseudo code working? Or in other words, how to set up default values for nested hiera hash in puppet manifest?
The Puppet version is 6.
Thank you in advance.
Or in other words, how to set up default values for nested hiera hash in puppet manifest?
The fact that the hashes for which you want to provide defaults are nested inside others in the Hiera data isn't very relevant, because at the point of use they have already been plucked out. With that being the case, you can accomplish this fairly easily by leveraging Puppet (not Hiera) hash merging. It might look something like this:
$entries.each |String $user, Hash $props| {
  # Merge with default values
  $full_props = {
    'keydir'  => "/home/${user}/.ssh",
    'keytype' => 'id_rsa'
  } + $props
  file { "${full_props['keydir']}/${full_props['keytype']}":
    ensure  => 'file',
    owner   => $user,
    mode    => '0600',
    content => $full_props['privkey'],
  }
}
That's by no means the only way, but it's clean and clear, and it scales reasonably well.
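To make the merge concrete: with the Hiera data from the question, user1 supplies privkey and keydir, so those entries win over the defaults on the left of +, and only keytype falls back to the default. The merged hash would look roughly like this (illustrative only, values abbreviated):
$full_props = {
  'privkey' => 'ENC[PKCS7,MIIH/Bf0mv0aa7JRbEWWONfVMJBOX2I7x...]',
  'keydir'  => '/srv/data/user1/.ssh',  # from Hiera, overrides the default
  'keytype' => 'id_rsa',                # default, not set in Hiera
}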

Injecting facter into file content - no implicit conversion of Hash into String

I want to inject some values from facter <prop> into a file's content.
It works with $fqdn, since facter fqdn returns a string.
node default {
  file { '/tmp/README.md':
    ensure  => file,
    content => $fqdn, # $(facter fqdn)
    owner   => 'root',
  }
}
However, it does not work with a hash object (facter os):
node default {
  file { '/tmp/README.md':
    ensure  => file,
    content => $os, # $(facter os) !! DOES NOT WORK
    owner   => 'root',
  }
}
And I get this error message when running puppet agent -t:
Error: Failed to apply catalog: Parameter content failed on
File[/tmp/README.md]: Munging failed for value
{"architecture"=>"x86_64", "family"=>"RedHat", "hardware"=>"x86_64",
"name"=>"CentOS", "release"=>{"full"=>"7.4.1708", "major"=>"7",
"minor"=>"4"}, "selinux"=>{"config_mode"=>"enforcing",
"config_policy"=>"targeted", "current_mode"=>"enforcing",
"enabled"=>true, "enforced"=>true, "policy_version"=>"28"}} in class
content: no implicit conversion of Hash into String (file:
/etc/puppetlabs/code/environments/production/manifests/site.pp, line:
2)
How to convert the hash to string inside the pp file?
If you have Puppet >= 4.5.0, it is now possible to natively convert various data types to strings in the manifests (i.e. in the pp files). The conversion functions are documented here.
This would do what you want:
file { '/tmp/README.md':
  ensure  => file,
  content => String($os),
}
or better:
file { '/tmp/README.md':
  ensure  => file,
  content => String($facts['os']),
}
On my Mac OS X, that leads to a file with:
{'name' => 'Darwin', 'family' => 'Darwin', 'release' => {'major' => '14', 'minor' => '5', 'full' => '14.5.0'}}
Have a look at all that documentation, because there are quite a lot of options that might be useful to you.
Of course, if you want one of the keys inside the $os fact:
file { '/tmp/README.md':
  ensure  => file,
  content => $facts['os']['family'],
}
Now, if you don't have the latest Puppet, and you don't have the string conversion functions, the old way of doing this would be via templates and embedded Ruby (ERB), e.g.
$os_str = inline_template("<%= @os.to_s %>")
file { '/tmp/README.md':
  ensure  => file,
  content => $os_str,
}
This actually leads to a slightly differently formatted hash, since Ruby, not Puppet, does the formatting:
{"name"=>"Darwin", "family"=>"Darwin", "release"=>{"major"=>"14", "minor"=>"5", "full"=>"14.5.0"}}

puppet creating relationship between file, class and define

I want to create a relationship between a file, a class and a define. Please check the code below.
The problem I am facing is that even if there is no change in the deploy.cfg file, the class and nexus::artifact always run.
The class and nexus::artifact should execute only if a change in the file is detected.
I know that we need to make use of subscribe and refreshonly => true, but I have no idea where to put this.
file { 'deploy.cfg':
  ensure    => file,
  path      => '/home/deploy/deploy.cfg',
  mode      => '0644',
  owner     => 'root',
  group     => 'root',
  content   => "test",
  notify    => [Class['nexus'], Nexus::Artifact['nexus-artifact']],
  subscribe => File['/home/deploy/dir'],
}
class { 'nexus':
  url      => $url,
  username => $user,
  password => $password,
}
nexus::artifact { "nexus-artifact":
  gav        => $gav,
  packaging  => $packaging,
  output     => $targetfilepath,
  repository => $repository,
  owner      => $owner,
  mode       => $mode,
}
artifact.pp
define nexus::artifact (
  $gav,
  $repository,
  $output,
  $packaging  = 'jar',
  $classifier = undef,
  $ensure     = update,
  $timeout    = undef,
  $owner      = undef,
  $group      = undef,
  $mode       = undef
) {
  include nexus
}
init.pp
class nexus (
  $url,
  $username = undef,
  $password = undef,
  $netrc    = undef,
) {
}
even if there is no change in deploy.cfg file, class and nexus::artifact always runs
Well yes, every class and resource in your node's catalog is applied on every catalog run, unless a resource that is required to be applied before it fails. That is normal. The key thing to understand in this regard is that the first part of applying a catalog resource is determining whether the corresponding physical resource is already in sync; if it is, then applying the catalog resource has no further effect.
class and nexus::artifact should execute only if it detects a change in file
I know that we need to make use of subscribe and refreshonly=true.
Well, no. You may be able to modulate the effect of applying that class and resource, but you cannot use the result of syncing another resource to modulate whether they are applied at all. In any event, refreshonly is specific to Exec resources, and you don't have any of those in your code.
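For illustration only (this is a generic sketch, not taken from the question's code): refreshonly belongs on an Exec resource, where it is normally paired with subscribe so the command runs only when the watched resource changes. The command here is hypothetical:
exec { 'redeploy-artifact':
  command     => '/usr/local/bin/redeploy.sh',  # hypothetical command
  path        => ['/bin', '/usr/bin'],
  refreshonly => true,
  subscribe   => File['deploy.cfg'],
}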

How to iterate on puppet? Or how to avoid it?

I have a global string variable that's actually an array of names:
"mongo1,mongo2,mongo3"
What I'm doing here is splitting them into an array using the "," as a delimiter and then feeding that array into a define to create all instances I need.
Problem is, every instance has a different port. I made a new stdlib function to get the index of a name in an array, and am feeding that to the port parameter.
This seems bad and I don't like having to alter stdlib.
So I'm wondering how I could do this using something like an n×2 array?
"mongo1,port1;mongo2,port2;mongo3,port3"
or two arrays
"mongo1,mongo2,mongo3" and "port1,port2,port3"
class site::mongomodule {
  class { 'mongodb':
    package_ensure => '2.4.12',
    logdir         => '/var/log/mongodb/'
  }

  define mongoconf () {
    $index = array_index($::site::mongomodule::mongoReplSetName_array, $name)
    mongodb::mongod { "mongod_${name}":
      mongod_instance => $name,
      mongod_port     => 27017 + $index,
      mongod_replSet  => 'Shard1',
      mongod_shardsvr => 'true',
    }
  }

  $mongoReplSetName_array = split(hiera('site::mongomodule::instances', undef), ',')
  mongoconf { $mongoReplSetName_array: }
}
the module I'm using is this one:
https://github.com/echocat/puppet-mongodb
using puppet 3.8.0
Hiera can give you a hash when you look up a key, so you can have something like this in Hiera:
mongoinstances:
  mongo1:
    port: 1000
  mongo2:
    port: 1234
Then you look up the key in Hiera to get the hash and pass it to the create_resources function, which will create one instance of the resource per entry in the hash.
$mongoinstances = hiera('mongoinstances')
create_resources('mongoconf', $mongoinstances)
You will need to change mongoconf for this to work by adding a $port parameter. Each time you want to pass an additional value from hiera, just add it as a parameter to your defined type.
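A sketch of what the adjusted defined type might look like, assuming the same mongodb::mongod parameters used in the question (the hard-coded replica-set values are carried over from there; adapt as needed):
define mongoconf (
  $port,
) {
  mongodb::mongod { "mongod_${name}":
    mongod_instance => $name,
    mongod_port     => $port,
    mongod_replSet  => 'Shard1',
    mongod_shardsvr => 'true',
  }
}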
If you are using Puppet >= 4.0, use Puppet hashes with the each function.
Define a hash, e.g.:
$my_hash = {
  mongo1 => port1,
  mongo2 => port2,
}
Next, use the each function on it, e.g.:
$my_hash.each |$key, $val| { some code }.
More about iteration in puppet here.
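Putting the two answers together, an each-based version might look like this (a sketch assuming the mongoinstances hash shown above and a mongoconf define that takes a $port parameter, as sketched earlier):
$mongoinstances = hiera('mongoinstances')

# One mongoconf resource per hash entry; the hash key becomes the resource title.
$mongoinstances.each |String $instance, Hash $settings| {
  mongoconf { $instance:
    port => $settings['port'],
  }
}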

Logstash grok filter - name fields dynamically

I've got log lines in the following format and want to extract fields:
[field1: content1] [field2: content2] [field3: content3] ...
I neither know the field names, nor the number of fields.
I tried it with backreferences and the sprintf format but got no results:
match => [ "message", "(?:\[(\w+): %{DATA:\k<-1>}\])+" ] # not working
match => [ "message", "(?:\[%{WORD:fieldname}: %{DATA:%{fieldname}}\])+" ] # not working
This seems to work for only one field but not more:
match => [ "message", "(?:\[%{WORD:field}: %{DATA:content}\] ?)+" ]
add_field => { "%{field}" => "%{content}" }
The kv filter is also not appropriate, because the content of the fields may contain whitespace.
Is there any plugin / strategy to fix this problem?
Logstash Ruby Plugin can help you. :)
Here is the configuration:
input {
  stdin {}
}
filter {
  ruby {
    code => "
      fieldArray = event['message'].split('] [')
      for field in fieldArray
        field = field.delete '['
        field = field.delete ']'
        result = field.split(': ')
        event[result[0]] = result[1]
      end
    "
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
With your logs:
[field1: content1] [field2: content2] [field3: content3]
This is the output:
{
"message" => "[field1: content1] [field2: content2] [field3: content3]",
"#version" => "1",
"#timestamp" => "2014-07-07T08:49:28.543Z",
"host" => "abc",
"field1" => "content1",
"field2" => "content2",
"field3" => "content3"
}
I have tried it with 4 fields, and it also works.
Please note that the event in the Ruby code is the Logstash event. You can use it to get any of your event fields, such as message, @timestamp, etc.
Enjoy it!!!
I found another way using regex:
ruby {
  code => "
    fields = event['message'].scan(/(?<=\[)\w+: .*?(?=\](?: |$))/)
    for field in fields
      field = field.split(': ')
      event[field[0]] = field[1]
    end
  "
}
I know that this is an old post, but I just came across it today, so I thought I'd offer an alternate method. Please note that, as a rule, I would almost always use a ruby filter, as suggested in either of the two previous answers. However, I thought I would offer this as an alternative.
If there is a fixed number of fields or a maximum number of fields (i.e., there may be fewer than three fields, but there will never be more than three fields), this can be done with a combination of grok and mutate filters, as well.
# Test message is: `[fieldname: value]`
# Store values in [@metadata] so we don't have to explicitly delete them.
grok {
  match => {
    "[message]" => [
      "\[%{DATA:[@metadata][_field_name_01]}:\s+%{DATA:[@metadata][_field_value_01]}\]( \[%{DATA:[@metadata][_field_name_02]}:\s+%{DATA:[@metadata][_field_value_02]}\])?( \[%{DATA:[@metadata][_field_name_03]}:\s+%{DATA:[@metadata][_field_value_03]}\])?"
    ]
  }
}
# Rename the fieldname, value combinations. I.e., if the following data is in the message:
#
# [foo: bar]
#
# It will be saved in the elasticsearch output as:
#
# {"foo":"bar"}
#
mutate {
  rename => {
    "[@metadata][_field_value_01]" => "[%{[@metadata][_field_name_01]}]"
    "[@metadata][_field_value_02]" => "[%{[@metadata][_field_name_02]}]"
    "[@metadata][_field_value_03]" => "[%{[@metadata][_field_name_03]}]"
  }
  tag_on_failure => []
}
For those who may not be as familiar with regex, the captures in ()? are optional regex matches, meaning that if there is no match, the expression won't fail. The tag_on_failure => [] option in the mutate filter ensures that no error will be appended to tags if one of the renames fails because there was no data to capture and, as a result, there is no field to rename.
