Puppet: iterate over a hash which contains arrays

I am trying to generate an nginx config via Puppet. I have Hiera data like this:
profile::candy::virtualhosts:
  abc.example.com:
    listen: '80'
    location: [v1, v2]
    backend_port: [3000, 3001]
I can generate the server config using this code:
$virtualhosts.each |$virtualhost, $opts| {
  nginx::resource::server { "${virtualhost}":
    listen_port  => Stdlib::Port($opts['listen']),
    ssl          => $ssl,
    ssl_cert     => if $ssl { "/etc/nginx/certs/abc.pem" },
    ssl_key      => if $ssl { "/etc/nginx/certs/abc.key" },
    ssl_redirect => $ssl,
  }
}
but I also want to create an nginx location for that virtual host. This is the code that works when written manually:
nginx::resource::location { '/v1':
  ensure             => present,
  ssl                => true,
  ssl_only           => true,
  location           => '/v1/',
  server             => 'abc.example.com',
  index_files        => [],
  proxy              => "http://127.0.0.1:4000/",
  proxy_set_header   => [ 'Upgrade $http_upgrade', "Connection 'upgrade'", 'Host $host' ],
  proxy_http_version => '1.1',
}
I am not able to work out how to iterate over the Hiera values to create these locations.

Fixed my problem: I used two loops. I passed the hash to the outer loop, and in the inner loop I used the each function to scan all locations (see the sketch after the Hiera data below).
I also changed my Hiera data to:
profile::candy::virtualhosts:
  abc.example.com:
    v1: '3000'
    v2: '3001'
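A minimal sketch of that nested iteration, using the nginx module's resource names from the question; the listen port and the 127.0.0.1 proxy address are illustrative assumptions, not taken from the original:

$virtualhosts.each |$virtualhost, $locations| {
  nginx::resource::server { $virtualhost:
    listen_port => 80,  # assumed; pull from Hiera if it varies per vhost
  }
  # inner loop: one location resource per location => port pair
  $locations.each |$location, $port| {
    nginx::resource::location { "${virtualhost} ${location}":
      ensure   => present,
      location => "/${location}/",
      server   => $virtualhost,
      proxy    => "http://127.0.0.1:${port}/",  # backend address assumed
    }
  }
}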

Related

looping in Puppet 5 results in duplicate declaration error

I could use some help, please.
The requirement is to create a custom firewall service and then allow this custom service only from selected IPs (I am trying to use firewalld_rich_rule here).
Here is the sample code:
class foo::fwall (
  $sourceip = undef,
) {
  include firewalld

  if $sourceip {
    $sourceip.each |String $ipaddr| {
      firewalld_rich_rule { "rich_rule_${ipaddr}":
        ensure      => enabled,
        permanent   => true,
        zone        => 'public',
        family      => ipv4,
        source      => $ipaddr,
        element     => service,
        servicename => 'bar',
        action      => accept,
      }
    }
  }

  # this is defined in the firewalld class and works well
  firewalld::custom_service { 'bar':
    short       => 'bar custom service',
    description => 'custom service ports',
    ports       => [
      {
        port     => '7771',
        protocol => 'tcp',
      },
      {
        port     => '8282',
        protocol => 'tcp',
      },
      {
        port     => '8539',
        protocol => 'tcp',
      },
    ],
  }
}
While running it on a node with a couple of IP addresses (meant to be provided as an array for $sourceip), it results in a duplicate declaration error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Firewalld_rich_rule[rich_rule_2] is already declared at (file: .../dev/modules/test/manifests/fwall.pp, line: 11); cannot redeclare (file: .../dev/modules/test/manifests/fwall.pp, line: 11) (file: .../dev/modules/test/manifests/fwall.pp, line: 11, column: 7) on node server.domain
I am trying this with Puppet 5.5 (from Puppet Labs) on Red Hat Enterprise Linux 7 servers.
Note: I tried defining a resource following this example from the Puppet documentation, but I am getting an invalid address error.
define puppet::binary::symlink ($binary = $title) {
  file { "/usr/bin/${binary}":
    ensure => link,
    target => "/opt/puppetlabs/bin/${binary}",
  }
}
Use the defined type for the iteration somewhere else in your manifest file:
$binaries = ['facter', 'hiera', 'mco', 'puppet', 'puppetserver']
puppet::binary::symlink { $binaries: }
I had to change the data type of $sourceip to array in RH Satellite's smart class parameters; it was String by default. Everything works well now.
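A plausible explanation for the error above: when $sourceip arrives as a String, each iterates it one character at a time, so an address containing the character '2' more than once tries to declare Firewalld_rich_rule[rich_rule_2] twice. A hedged sketch of guarding against this by typing the parameter (Stdlib::IP::Address requires a reasonably recent puppetlabs-stdlib; the class body is unchanged and elided):

class foo::fwall (
  # a plain String now fails at catalog compile time
  # instead of silently iterating per character
  Optional[Array[Stdlib::IP::Address]] $sourceip = undef,
) {
  # ... same body as above ...
}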

How to use the index specified in Filebeat in logstash.yml?

I am using Filebeat to send log files over to my Logstash with the following configurations:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - ${PWD}/filebeat-volume/data/*.txt

output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "custom-index"

setup.kibana:
  host: "localhost:5601"
and
input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "<WHAT SHOULD GO HERE???>"
  }
}
In filebeat.yml, I am specifying an index ("custom-index"). How can I set the same index in my Logstash pipeline configuration so it is sent to Elasticsearch?
I see what you want now. Configure Logstash with the output below; this way it passes the index set in Filebeat through to Elasticsearch.
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
See point 2 in this example.
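If you also want date-suffixed daily indices, a common Elasticsearch convention (the naming pattern below is an assumption, not from the question), Logstash's sprintf date syntax works in the same field:

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}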

Chef: Modify existing resource from another cookbook

I have two cookbooks: elasticsearch and curator.
The elasticsearch cookbook installs and configures Elasticsearch. The following resource (from the elasticsearch cookbook) has to be modified from the curator cookbook:
elasticsearch_configure 'elasticsearch' do
  configuration({
    'http.port' => port,
    'cluster.name' => cluster_name,
    'node.name' => node_name,
    'bootstrap.memory_lock' => false,
    'discovery.zen.minimum_master_nodes' => 1,
    'xpack.monitoring.enabled' => true,
    'xpack.graph.enabled' => false,
    'xpack.watcher.enabled' => true
  })
end
I need to modify it in the curator cookbook and add a single line:
'path.repo' => (["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"])
How can I do that?
I initially was going to point you to the chef-rewind gem, but that actually points to the edit_resource provider that is now built into Chef. A basic example of this:
# cookbook_a/recipes/default.rb
file 'example.txt' do
  content 'this is the initial content'
end

# cookbook_b/recipes/default.rb
edit_resource! :file, 'example.txt' do
  content 'modified content!'
end
If both of these are in the Chef run_list, the actual content within example.txt is that of the edited resource, modified content!.
Without fully testing your case, I'm assuming the provider can be utilized the same way, like so:
edit_resource! :elasticsearch_configure, 'elasticsearch' do
  configuration({
    'http.port' => port,
    'cluster.name' => cluster_name,
    'node.name' => node_name,
    'bootstrap.memory_lock' => false,
    'discovery.zen.minimum_master_nodes' => 1,
    'xpack.monitoring.enabled' => true,
    'xpack.graph.enabled' => false,
    'xpack.watcher.enabled' => true,
    'path.repo' => ["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"]
  })
end
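One caveat, from memory of the Chef recipe DSL rather than from this answer: edit_resource! fails if the named resource has not been declared yet, while plain edit_resource creates it when missing. With the bang variant, the elasticsearch recipe therefore has to come before the curator recipe in the run_list.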

Put an Include directive inside Directory in a vhost with puppet

Is there any way to create a "Directory" block in a vhost and put an "Include" directive inside it with Puppet?
Like this:
<Directory "/var/www">
  Options Indexes FollowSymLinks MultiViews
  AllowOverride None
  Require all granted
  Include /etc/apache2/myconf.d/htpasswd.conf
</Directory>
I did it with "custom_fragment", but I would like to do it with "additional_includes"; however, "additional_includes" cannot be used inside the "directories" variable.
Is there another way?
Thanks.
I assume you are using Puppet Enterprise or the puppetlabs-apache module.
It indeed has no native support for what you are trying; custom_fragment is actually a very good choice here.
If you really want to add the include through a dedicated hash key, you can modify the module and open a pull request. You will basically have to add a section like the existing ones to the template, plus some brief documentation. The maintainers love pull requests ;-)
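For reference, a minimal sketch of that custom_fragment route; the vhost name and docroot are illustrative assumptions, and the Directory block is taken from the question:

apache::vhost { 'myvhost.example.com':
  port            => 80,
  docroot         => '/var/www',
  # a Puppet heredoc keeps the raw Apache directives readable
  custom_fragment => @(FRAGMENT),
    <Directory "/var/www">
      Options Indexes FollowSymLinks MultiViews
      AllowOverride None
      Require all granted
      Include /etc/apache2/myconf.d/htpasswd.conf
    </Directory>
    | FRAGMENT
}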
Looks like you're looking for an array?
If you are using the puppetlabs module, you can use "additional_includes":
additional_includes
Specifies paths to additional static, vhost-specific Apache configuration files. Useful for implementing a unique, custom configuration not supported by this module. Can be an array. Defaults to '[]'.
https://forge.puppetlabs.com/puppetlabs/apache#parameter-directories-for-apachevhost
apache::vhost { 'myvhost.whaterver.com':
  port        => 8080,
  docroot     => '/var/www/folder',
  directories => [
    {
      'path'                => '/var/www/folder',
      'options'             => 'None',
      'allow_override'      => 'None',
      'order'               => 'Allow,Deny',
      'allow'               => 'from All',
      'additional_includes' => ['/etc/apache2/myconf.d/htpasswd.conf', 'other settings'],
    },
  ],
}
Here is a snippet that works for me:
class { 'apache':
  default_vhost => false,
}

apache::vhost { 'mydefault':
  port        => 80,
  docroot     => '/var/www/html',
  directories => [
    {
      'path'     => '/var/www/html',
      'provider' => 'files',
    },
    {
      'path'                => '/media/my_builds',
      'options'             => 'Indexes FollowSymLinks MultiViews',
      'allowoverride'       => 'None',
      'require'             => 'all granted',
      'additional_includes' => ['what Randy Black said'],
    },
  ],
  aliases     => [
    {
      alias => '/my_builds',
      path  => '/media/my_builds',
    },
  ],
}

enabling fastcgi mod in lighttpd through puppet

Hi, I am new to Puppet and I want to execute the following command on the client using Puppet, so that the FastCGI mod is enabled there:
lighttpd-enable-mod fastcgi
Both the Puppet server and client are Ubuntu machines, and my lighttpd module's init.pp file is as follows:
class lighttpd::install {
  package { 'lighttpd':
    ensure => present,
  }
}

class lighttpd::conf {
  file { '/etc/lighttpd/lighttpd.conf':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0600',
    source  => 'puppet:///modules/lighttpd/lighttpd.conf',
    require => Class['lighttpd::install'],
  }
}

class lighttpd::fastcgi {
  file { '/etc/lighttpd/conf-available/10-fastcgi.conf':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0600',
    source  => 'puppet:///modules/lighttpd/10-fastcgi.conf',
    require => Class['lighttpd::install'],
  }
}

class lighttpd {
  include lighttpd::install, lighttpd::conf, lighttpd::fastcgi
}
Please help me execute this command on the Puppet client.
Thanks.
So if you modify your lighttpd::fastcgi class to be something like:
class lighttpd::fastcgi {
  file { '/etc/lighttpd/conf-available/10-fastcgi.conf':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0600',
    source  => 'puppet:///modules/lighttpd/10-fastcgi.conf',
    require => Class['lighttpd::install'],
    notify  => Exec['enable-mod-fastcgi'],
  }

  exec { 'enable-mod-fastcgi':
    command     => '/usr/bin/lighttpd-enable-mod fastcgi',
    refreshonly => true,
  }
}
(Sorry, the path to lighttpd-enable-mod may be wrong; I don't have lighttpd here.)
This should notify the exec correctly. The exec only gets run when notified, because the refreshonly parameter is set to true.
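Equivalently (a sketch, not from the original answer), you can declare the relationship from the exec side with subscribe, which keeps the refresh wiring on the resource that reacts; on Debian/Ubuntu the script usually lives in /usr/sbin, but verify the path on your machines:

exec { 'enable-mod-fastcgi':
  # path assumed; check with `which lighttpd-enable-mod`
  command     => '/usr/sbin/lighttpd-enable-mod fastcgi',
  refreshonly => true,
  subscribe   => File['/etc/lighttpd/conf-available/10-fastcgi.conf'],
}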
