Chef: Modify existing resource from another cookbook

I have two cookbooks: elasticsearch and curator.
The elasticsearch cookbook installs and configures Elasticsearch. The following resource (from the elasticsearch cookbook) has to be modified from the curator cookbook:
elasticsearch_configure 'elasticsearch' do
  configuration({
    'http.port' => port,
    'cluster.name' => cluster_name,
    'node.name' => node_name,
    'bootstrap.memory_lock' => false,
    'discovery.zen.minimum_master_nodes' => 1,
    'xpack.monitoring.enabled' => true,
    'xpack.graph.enabled' => false,
    'xpack.watcher.enabled' => true
  })
end
I need to modify it in the curator cookbook to add a single line:
'path.repo' => ["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"]
How can I do that?

I was initially going to point you to the chef-rewind gem, but that gem's functionality now exists as the edit_resource provider built into Chef. A basic example of this:
# cookbook_a/recipes/default.rb
file 'example.txt' do
  content 'this is the initial content'
end

# cookbook_b/recipes/default.rb
edit_resource! :file, 'example.txt' do
  content 'modified content!'
end
If both of these are in the Chef run_list, the actual content within example.txt is that of the edited resource, modified content!.
Without fully testing your case, I'm assuming the provider can be utilized the same way, like so:
edit_resource! :elasticsearch_configure, 'elasticsearch' do
  configuration({
    'http.port' => port,
    'cluster.name' => cluster_name,
    'node.name' => node_name,
    'bootstrap.memory_lock' => false,
    'discovery.zen.minimum_master_nodes' => 1,
    'xpack.monitoring.enabled' => true,
    'xpack.graph.enabled' => false,
    'xpack.watcher.enabled' => true,
    'path.repo' => ["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"]
  })
end
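If you'd rather not repeat the entire configuration hash in the curator cookbook, merging into the existing value may also work. This is an untested sketch that assumes the configuration property returns its current value when called without arguments inside the edit block:

edit_resource! :elasticsearch_configure, 'elasticsearch' do
  # assumption: the bare getter returns the hash set by the elasticsearch recipe
  configuration(configuration.merge(
    'path.repo' => ["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"]
  ))
end

Note that edit_resource! raises if the resource doesn't exist yet, so the elasticsearch recipe must come before the curator recipe in the run_list.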

Using facts inside a module to push data to file

I'm trying to create a simple module that will use facts from the agent to push the relevant output to a file.
I've already managed to do it in one module, but for an unknown reason it doesn't work as expected here.
This is what I did:
class testrepo {
  case $facts['os']['family'] {
    'RedHat': {
      file_line { 'dscrp to local repo file':
        path   => '/etc/yum.repos.d/test.repo',
        line   => "name=${::description}",
        ensure => present,
      }
      file_line { 'repo from agent':
        path   => '/etc/yum.repos.d/test.repo',
        line   => "baseurl=file:///usr/local/src/RHEL/RHEL-${::full}-${::architecture}",
        ensure => present,
      }
In the first file_line the output in the file is just "name=", and in the second ${::full} doesn't resolve, although ${::architecture} does.
      file_line { 'Add fqdn to /etc/hosts':
        path   => '/etc/hosts',
        line   => "${::ipaddress} ${::fqdn} ${::hostname}",
        ensure => present,
      }
The above works as expected.
Right now I'm not sure which direction I should check. I've tried $facts['os']['familiy']['full'] and it also doesn't work.
Could anyone give me some advice here? Thank you.
Architecture, fqdn and ipaddress are all facts available at the top level; if you jump onto the target node and run facter architecture you'll get an answer:
[root@example ~]# facter ipaddress
10.10.10.110
[root@example ~]# facter architecture
x86_64
"full" is part of the nested os fact, so querying it at the top level returns nothing:
[root@example ~]# facter full
[root@r2h-bg5ore5nix0 ~]# facter os
{
  architecture => "x86_64",
  family => "RedHat",
  hardware => "x86_64",
  name => "CentOS",
  release => {
    full => "7.7.1908",
    major => "7",
    minor => "7"
  },
  selinux => {
    config_mode => "enforcing",
    config_policy => "targeted",
    current_mode => "enforcing",
    enabled => true,
    enforced => true,
    policy_version => "31"
  }
}
So you'll have to drill down through the os fact hash to get it; on the command line that's:
[root@example ~]# facter os.release.full
7.7.1908
In code you can experiment with:
notify { 'message':
  message => "message is ${::os['release']['full']}",
}
or:
notify { 'message':
  message => "message is ${::facts['os']['release']['full']}",
}
So what you're going to need to do in your code is use:
line => "baseurl=file:///usr/local/src/RHEL/RHEL-${::os['release']['full']}-${::architecture}",
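Putting that back into the original resource, the corrected file_line would look something like this (a sketch, assuming the same repo file path as in the question):

file_line { 'repo from agent':
  path   => '/etc/yum.repos.d/test.repo',
  # os.release.full is a nested fact; architecture is available at the top level
  line   => "baseurl=file:///usr/local/src/RHEL/RHEL-${::os['release']['full']}-${::architecture}",
  ensure => present,
}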

Logstash Plug-in Development

I am trying to develop a new logstash plugin in my development Elasticsearch stack environment.
I started by generating the skeleton for my new plug-in using the logstash-plugin generate utility. Once I had this in place I was able to build my plug-in using the gem build utility, which ran without error and created a gem for me. I then ran the logstash-plugin install utility, which successfully installed my plug-in.
The entry in my logstash Gemfile looks like this:
gem "logstash-output-s3-4.3.3", "0.1.0", :path => "./vendor/bundle/jruby/2.5.0/gems/logstash-output-s3-4.3.3"
I then took the code from the actual logstash plug-in that I'm hoping to modify and copied it into my new plug-in directory, created a new logstash.conf to test with, and started the logstash service. It ran as expected, moving data from the watched servers into the appropriate s3 buckets.
logstash.conf output section:
output {
  s3 {
    region => "us-east-1"
    bucket => "xxxx"
    prefix => "Dev/xxxx/%{+YYYY}/%{+MM}/%{+dd}/"
    server_side_encryption => true
    server_side_encryption_algorithm => "aws:kms"
    ssekms_key_id => "xxxx"
    validate_credentials_on_root_bucket => false
    codec => "json_lines"
    size_file => 1024000
    time_file => 5
    rotation_strategy => "size_and_time"
    temporary_directory => "../../LogstashS3OutputData/temp/"
    canned_acl => "private"
  }
  s3 {
    region => "us-east-1"
    bucket => "xxxx"
    prefix => "Dev/xxxx/%{+YYYY}/%{+MM}/%{+dd}/"
    server_side_encryption => true
    server_side_encryption_algorithm => "aws:kms"
    ssekms_key_id => "xxxx"
    validate_credentials_on_root_bucket => false
    codec => "json_lines"
    size_file => 1024000
    time_file => 5
    rotation_strategy => "size_and_time"
    temporary_directory => "../../LogstashS3OutputData/temp/"
    canned_acl => "private"
  }
}
My next step, and this is where I am having issues, was to try to create a new property for the plug-in called file_prefix:
s3 {
  region => "us-east-1"
  bucket => "xxxx"
  prefix => "Dev/xxxx/xxxx/%{+YYYY}/%{+MM}/%{+dd}/"
  file_prefix => "test"
  server_side_encryption => true
  server_side_encryption_algorithm => "aws:kms"
  ssekms_key_id => "arn:aws:kms:xxxx"
  validate_credentials_on_root_bucket => false
  codec => "json_lines"
  size_file => 1024000
  time_file => 5
  rotation_strategy => "size_and_time"
  temporary_directory => "../../LogstashS3OutputData/temp/"
  canned_acl => "private"
}
I modified the following files in my solution:
\vendor\bundle\jruby\2.5.0\gems\logstash-output-s3-4.3.3\spec\outputs\s3_spec.rb
  added "file_prefix" => file_prefix at line 15
\vendor\bundle\jruby\2.5.0\gems\logstash-output-s3-4.3.3\spec\supports\helpers.rb
  added let(:file_prefix) {} at line 12
  added "file_prefix" => file_prefix, at line 19
\vendor\bundle\jruby\2.5.0\gems\logstash-output-s3-4.3.3\lib\logstash\outputs\s3.rb
  added config :file_prefix, :validate => :string right after the statement config :prefix, :validate => :string, :default => '' at line 154
  added @file_prefix = file_prefix at line 226
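In other words, the declaration side of the change looks roughly like this (an abbreviated sketch of s3.rb, not the full file; only file_prefix is my addition):

require "logstash/outputs/base"

class LogStash::Outputs::S3 < LogStash::Outputs::Base
  config_name "s3"

  # existing option
  config :prefix, :validate => :string, :default => ''
  # my new option; once declared, Logstash exposes it as @file_prefix
  config :file_prefix, :validate => :string

  def register
    # @file_prefix should be available here without an explicit assignment
  end
end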
Now when I start the logstash service I get this error message:
Unknown setting 'file_prefix' for s3
I've searched and tried everything I can think of, but I can't get past this hurdle.
TIA,
Bill Youngman
This can be considered closed as I am using a similar feature request in the s3 output plugin developer branch as a starting point for my project.
Thanks,
-Bill

looping in Puppet 5 results in duplicate declaration error

Requesting some help, please.
The requirement is to create a custom firewall service and then allow this custom firewall service only for selected IPs (trying to use firewalld_rich_rules here).
Here is the sample code:
class foo::fwall (
  $sourceip = undef,
) {
  include firewalld

  if $sourceip {
    $sourceip.each |String $ipaddr| {
      firewalld_rich_rule { "rich_rule_${ipaddr}":
        ensure      => enabled,
        permanent   => true,
        zone        => 'public',
        family      => ipv4,
        source      => $ipaddr,
        element     => service,
        servicename => 'bar',
        action      => accept,
      }
    }
  }

  # this is defined in the firewalld class and works fine
  firewalld::custom_service { 'bar':
    short       => 'bar custom service',
    description => 'custom service ports',
    ports       => [
      {
        port     => '7771',
        protocol => 'tcp',
      },
      {
        port     => '8282',
        protocol => 'tcp',
      },
      {
        port     => '8539',
        protocol => 'tcp',
      },
    ],
  }
}
While running it on a node with a couple of IP addresses (provided as an array for $sourceip), it results in a duplicate declaration error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Firewalld_rich_rule[rich_rule_2] is already declared at (file: .../dev/modules/test/manifests/fwall.pp, line: 11); cannot redeclare (file: .../dev/modules/test/manifests/fwall.pp, line: 11) (file: .../dev/modules/test/manifests/fwall.pp, line: 11, column: 7) on node server.domain
Trying it with Puppet v5.5 (from Puppet Labs) on Red Hat Enterprise Linux 7 servers.
Note: I tried defining a resource following this example from the Puppet documentation, but I get an invalid address error.
define puppet::binary::symlink ($binary = $title) {
  file { "/usr/bin/${binary}":
    ensure => link,
    target => "/opt/puppetlabs/bin/${binary}",
  }
}
Use the defined type for the iteration somewhere else in your manifest file:
$binaries = ['facter', 'hiera', 'mco', 'puppet', 'puppetserver']
puppet::binary::symlink { $binaries: }
I had to change the datatype of $sourceip to Array in RH Satellite's smart class parameters; it was String by default. Everything works fine now.
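One way to catch that kind of mismatch at catalog compile time is to type the class parameter so a plain String is rejected outright (a sketch; Optional keeps the undef default valid):

class foo::fwall (
  # accepting only an array of strings (or undef) makes a String value from
  # Satellite fail fast with a clear type mismatch error
  Optional[Array[String]] $sourceip = undef,
) {
  # ... body as above ...
}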

Creating master and slaves DNS with puppet module camptocamp bind

Trying to create a master and a slave (redundancy) DNS with the puppet module camptocamp bind. In the slave profile, I've set transfer_source => '192.168.1.20', the master's IP. It should then synchronize and copy DNS records from the master to the slave.
But I get an error complaining that it can only be set for slave zones. I've followed the README for the module on Puppet Forge: https://forge.puppet.com/camptocamp/bind/readme
dnsmaster.pp
class profile::dnsbind::server {
  include 'bind'
  bind::zone { 'example.com':
    ensure       => 'present',
    zone_contact => 'contact.example.com',
    zone_ns      => ['ns0.example.com'],
    zone_serial  => '2012112901',
    zone_ttl     => '604800',
    zone_origin  => 'example.com',
  }
  bind::a { 'example.com':
    ensure    => 'present',
    zone      => 'example.com',
    ptr       => false,
    hash_data => {
      'host1' => { owner => '192.168.0.1', },
      'host2' => { owner => '192.168.0.2', },
    },
  }
}
dnsslave.pp
class profile::dnsbind::server_slave {
  include 'bind'
  bind::zone { 'example.com':
    ensure          => 'present',
    zone_contact    => 'contact.example.com',
    zone_ns         => ['ns0.example.com'],
    zone_serial     => '2012112901',
    zone_ttl        => '604800',
    zone_origin     => 'example.com',
    transfer_source => '192.168.1.20',
  }
}
The error message:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, Zone 'example.com': transfer_source can be set only for slave zones! at /etc/puppetlabs/code/environments/production/modules/bind/manifests/zone.pp:80:5 at /etc/puppetlabs/code/environments/production/manifests/profile_dns2.pp:5 on node centos7-3
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
It should then synchronize and copy DNS records from the master to the slave.
But I get an error complaining that it can only be set for slave zones.
Evidently, the module does not recognize that you're trying to configure a slave zone. How do you suppose it would know? Well, apparently not from the presence of a transfer_source property.
I've followed the README from puppet forge for the module:
https://forge.puppet.com/camptocamp/bind/readme
I'll believe that you started by pulling the example zone definition (for a master zone) from the readme, and I grant you that this module's docs are kinda shoddy. But do nevertheless consider actually reading the docs thoroughly, not just skimming them. If you had done so, you would have found documentation for the zone_type parameter immediately following the documentation for the transfer_source parameter:
$zone_type = master
Specify if the zone is master/slave/forward.
Use this to specify that you're configuring a slave zone.
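In other words, something like this for the slave profile (a sketch; per the readme excerpt above, zone_type names the zone's role, so 'slave' here):

class profile::dnsbind::server_slave {
  include 'bind'
  bind::zone { 'example.com':
    ensure          => 'present',
    zone_type       => 'slave',        # tells the module this is a slave zone
    transfer_source => '192.168.1.20', # now legal, since the zone is a slave
    zone_contact    => 'contact.example.com',
    zone_ns         => ['ns0.example.com'],
    zone_serial     => '2012112901',
    zone_ttl        => '604800',
    zone_origin     => 'example.com',
  }
}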

puppet couldn't retrieve information from source

My Puppet manifest looks like this:
$abrt_config = [ 'abrt.conf', 'abrt-action-save-package-data.conf' ]
file { $abrt_config:
  ensure => present,
  path   => "/etc/abrt/${abrt_config}",
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => "puppet:///modules/abrt/${abrt_config}",
}
My config files are located at the following paths:
/abrt/files/abrt.conf
/abrt/files/abrt-action-save-package-data.conf
I'm getting the following error when executing puppet on the client nodes:
Error: /Stage[main]/Abrt/File[/etc/abrt/abrt-action-save-package-data.conf]: Could not evaluate: Could not retrieve information from environment development source(s) puppet:///modules/abrt//etc/abrt/abrt.conf/etc/abrt/abrt-action-save-package-data.conf
Error: /Stage[main]/Abrt/File[/etc/abrt/abrt.conf]: Could not evaluate: Could not retrieve information from environment development source(s) puppet:///modules/abrt//etc/abrt/abrt.conf/etc/abrt/abrt-action-save-package-data.conf
You cannot implicitly convert an array to a string in the source attribute like that and expect desired behavior.
If you are using a non-obsolete version of Puppet, then you can use a lambda iterator to solve this problem in the following way:
['abrt.conf', 'abrt-action-save-package-data.conf'].each |$abrt_config| {
  file { $abrt_config:
    ensure => present,
    path   => "/etc/abrt/${abrt_config}",
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => "puppet:///modules/abrt/${abrt_config}",
  }
}
Check the documentation here for more details: https://docs.puppet.com/puppet/4.8/function.html#each
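If you prefer the defined-type style used elsewhere on this page, an equivalent approach iterates a defined type over the array of titles (a sketch; abrt::config_file is a made-up name):

define abrt::config_file () {
  # $title is the bare file name, e.g. 'abrt.conf'
  file { "/etc/abrt/${title}":
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => "puppet:///modules/abrt/${title}",
  }
}

abrt::config_file { ['abrt.conf', 'abrt-action-save-package-data.conf']: }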
