I have a basic Puppet install set up from this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-puppet-4-on-ubuntu-16-04
When I run /opt/puppetlabs/bin/puppet agent --test on my node, I get:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Error while evaluating a Resource Statement. Could not find declared class firewall at /etc/puppetlabs/code/environments/production/manifests/site.pp:7:1 on node mark-inspiron.
On my node:
/opt/puppetlabs/bin/puppet module list
returns
/etc/puppetlabs/code/environments/production/modules
└── puppetlabs-firewall (v1.9.0)
On my puppet master at /etc/puppetlabs/code/environments/production/manifests/site.pp:
file { '/tmp/it_works.txt':                       # resource type file and filename
  ensure  => present,                             # make sure it exists
  mode    => '0644',                              # file permissions
  content => "It works on ${ipaddress_eth0}!\n",  # print the eth0 IP fact
}

class { 'firewall': }

resources { 'firewall':
  purge => true,
}

firewall { "051 asterisk-set-rate-limit-register":
  string      => "REGISTER sip:",
  string_algo => "bm",
  dport       => '5060',
  proto       => 'udp',
  recent      => 'set',
  rname       => 'VOIPREGISTER',
  rsource     => 'true',
}

firewall { "052 asterisk-drop-rate-limit-register":
  string      => "REGISTER sip:",
  string_algo => "bm",
  dport       => '5060',
  proto       => 'udp',
  action      => 'drop',
  recent      => 'update',
  rseconds    => '600',
  rhitcount   => '5',
  rname       => 'VOIPREGISTER',
  rsource     => true,
  rttl        => true,
}
The file part works, but not the firewall part.
In a master setup, the modules need to be installed on your master, somewhere in its modulepath. You can either place them in the modules directory within your $codedir (normally /etc/puppetlabs/code/modules) or in your directory environment's modules directory (likely /etc/puppetlabs/code/environments/production/modules in your case, since that is where your cited site.pp lives). If you have defined additional module paths in your environment.conf, you can also place modules there.
You can install/deploy them with a variety of methods, such as librarian-puppet, r10k, or Code Manager (in Puppet Enterprise). The easiest method for you, though, is to run puppet module install puppetlabs-firewall on the master. Catalog compilation will then be able to find the firewall class.
On a side note, that:
resources { 'firewall':
purge => true,
}
will purge any firewall rules on the system that are not managed by Puppet (as defined by Puppet's knowledge of the system's firewall configuration, according to what the module's resources manage). This is nice for eliminating local changes that people make by hand, but it can also have interesting side effects, so be careful.
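If you keep the purge, one way to avoid locking yourself out is to make sure the rules you rely on are themselves declared in Puppet, since only unmanaged rules get purged. A rough sketch (the rule titles and parameters below are illustrative, not taken from your setup):
firewall { '000 accept loopback':
  proto   => 'all',
  iniface => 'lo',
  action  => 'accept',
}
firewall { '001 accept established and related':
  proto  => 'all',
  state  => ['RELATED', 'ESTABLISHED'],
  action => 'accept',
}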
I'm trying to create a simple module that uses facts from the agent to push the relevant output to a file. I've already managed to do it in one module, but for an unknown reason this one doesn't work as expected.
This is what I did:
class testrepo {
  case $facts['os']['family'] {
    'RedHat': {
      file_line { 'dscrp to local repo file':
        path   => '/etc/yum.repos.d/test.repo',
        line   => "name=${::description}",
        ensure => present,
      }
      file_line { 'repo from agent':
        path   => '/etc/yum.repos.d/test.repo',
        line   => "baseurl=file:///usr/local/src/RHEL/RHEL-${::full}-${::architecture}",
        ensure => present,
      }
In the first file_line the output in the file is "name=", and in the second file_line ${::full} is not interpolated at all, although I do get ${::architecture}.
      file_line { 'Add fqdn to /etc/hosts':
        path   => '/etc/hosts',
        line   => "${::ipaddress} ${::fqdn} ${::hostname}",
        ensure => present,
      }
The above works as expected.
Right now I'm not sure which direction I should check. I've tried $facts['os']['familiy']['full'], and it also doesn't work.
Could anyone give me some advice here? Thank you.
Architecture, fqdn and ipaddress are all facts available at the top level; if you jump onto the target node and run facter architecture you'll get an answer:
[root@example ~]# facter ipaddress
10.10.10.110
[root@example ~]# facter architecture
x86_64
"full" is part of the nested os fact:
[root@example ~]# facter full
[root@r2h-bg5ore5nix0 ~]# facter os
{
  architecture => "x86_64",
  family => "RedHat",
  hardware => "x86_64",
  name => "CentOS",
  release => {
    full => "7.7.1908",
    major => "7",
    minor => "7"
  },
  selinux => {
    config_mode => "enforcing",
    config_policy => "targeted",
    current_mode => "enforcing",
    enabled => true,
    enforced => true,
    policy_version => "31"
  }
}
So you'll have to drill down through the os fact hash to get at it; on the command line that's:
[root@example ~]# facter os.release.full
7.7.1908
In code you can experiment with:
notify { 'message':
  message => "message is ${::os['release']['full']}",
}
or
notify { 'message':
  message => "message is ${::facts['os']['release']['full']}",
}
So what you're going to need to use in your code is:
line => "baseurl=file:///usr/local/src/RHEL/RHEL-${::os['release']['full']}-${::architecture}",
Requesting some help, please.
The requirement is to create a custom firewall service and then allow this custom service only from selected IPs (trying to use firewalld_rich_rule here).
Here is the sample code:
class foo::fwall (
  $sourceip = undef,
) {
  include firewalld

  if $sourceip {
    $sourceip.each |String $ipaddr| {
      firewalld_rich_rule { "rich_rule_${ipaddr}":
        ensure      => enabled,
        permanent   => true,
        zone        => 'public',
        family      => ipv4,
        source      => $ipaddr,
        element     => service,
        servicename => 'bar',
        action      => accept,
      }
    }
  }

  # this is defined in firewalld class and works good
  firewalld::custom_service { 'bar':
    short       => 'bar custom service',
    description => 'custom service ports',
    ports       => [
      {
        port     => '7771',
        protocol => 'tcp',
      },
      {
        port     => '8282',
        protocol => 'tcp',
      },
      {
        port     => '8539',
        protocol => 'tcp',
      },
    ],
  }
}
And while running it on a node with a couple of IP addresses (provided as an array for $sourceip), it results in a duplicate declaration error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Firewalld_rich_rule[rich_rule_2] is already declared at (file: .../dev/modules/test/manifests/fwall.pp, line: 11); cannot redeclare (file: .../dev/modules/test/manifests/fwall.pp, line: 11) (file: .../dev/modules/test/manifests/fwall.pp, line: 11, column: 7) on node server.domain
Trying this with Puppet v5.5 (from Puppet Labs) on Red Hat Enterprise Linux 7 servers.
Note: I tried defining a resource following this example from the Puppet documentation, but I get an invalid address error.
define puppet::binary::symlink ($binary = $title) {
  file { "/usr/bin/${binary}":
    ensure => link,
    target => "/opt/puppetlabs/bin/${binary}",
  }
}
Use the defined type for the iteration somewhere else in your manifest file:
$binaries = ['facter', 'hiera', 'mco', 'puppet', 'puppetserver']
puppet::binary::symlink { $binaries: }
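If you prefer the defined-type route from your note, the same pattern can be applied to the rich rules. A sketch, with foo::fwall::rich_rule as an illustrative name living in its own manifest file (e.g. modules/foo/manifests/fwall/rich_rule.pp):
define foo::fwall::rich_rule {
  # One rich rule per IP address; the resource title carries the address.
  firewalld_rich_rule { "rich_rule_${title}":
    ensure      => enabled,
    permanent   => true,
    zone        => 'public',
    family      => ipv4,
    source      => $title,
    element     => service,
    servicename => 'bar',
    action      => accept,
  }
}
Then, inside foo::fwall, replace the each loop with:
if $sourceip {
  foo::fwall::rich_rule { $sourceip: }
}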
I had to change the data type for $sourceip to array in RH Satellite's smart class parameters; it was String by default. Everything works well now.
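For what it's worth, the same constraint can also be enforced in the manifest by giving the parameter an explicit data type, so anything that is not an array of addresses fails catalog compilation with a clear type mismatch rather than an odd duplicate declaration. A minimal sketch against the class from the question:
class foo::fwall (
  Optional[Array[String[1]]] $sourceip = undef,
) {
  include firewalld
  # ... rich rules and custom service as above ...
}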
Trying to create master and slave (redundant) DNS servers with the camptocamp bind Puppet module. In the slave profile, I've set transfer_source => '192.168.1.20', the master's IP. It should then synchronize and copy DNS records from the master to the slave.
But I get complaints that it can only be set for slave zones. I've followed the README for the module on Puppet Forge: https://forge.puppet.com/camptocamp/bind/readme
dnsmaster.pp
class profile::dnsbind::server {
  include 'bind'

  bind::zone { 'example.com':
    ensure       => 'present',
    zone_contact => 'contact.example.com',
    zone_ns      => ['ns0.example.com'],
    zone_serial  => '2012112901',
    zone_ttl     => '604800',
    zone_origin  => 'example.com',
  }

  bind::a { 'example.com':
    ensure    => 'present',
    zone      => 'example.com',
    ptr       => false,
    hash_data => {
      'host1' => { owner => '192.168.0.1', },
      'host2' => { owner => '192.168.0.2', },
    },
  }
}
dnsslave.pp
class profile::dnsbind::server_slave {
  include 'bind'

  bind::zone { 'example.com':
    ensure          => 'present',
    zone_contact    => 'contact.example.com',
    zone_ns         => ['ns0.example.com'],
    zone_serial     => '2012112901',
    zone_ttl        => '604800',
    zone_origin     => 'example.com',
    transfer_source => '192.168.1.20',
  }
}
The error message:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, Zone 'example.com': transfer_source can be set only for slave zones! at /etc/puppetlabs/code/environments/production/modules/bind/manifests/zone.pp:80:5 at /etc/puppetlabs/code/environments/production/manifests/profile_dns2.pp:5 on node centos7-3
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
It should then synchronize and copy dns records from master to the slave.
But I got complaints about that it could only be set to slave zones.
Evidently, the module does not recognize that you're trying to configure a slave zone. How do you suppose it would know? Well, apparently not from the presence of a transfer_source property.
I've followed the README from puppet forge for the module:
https://forge.puppet.com/camptocamp/bind/readme
I'll believe that you started by pulling the example zone definition (for a master zone) from the readme, and I grant you that this module's docs are kinda shoddy. But do nevertheless consider actually reading the docs thoroughly, not just skimming them. If you had done so, you would have found documentation for the zone_type parameter immediately following the documentation for the transfer_source parameter:
$zone_type = master
Specify if the zone is master/slave/forward.
Use this to specify that you're configuring a slave zone.
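So a slave zone definition along these lines should get past that error (a sketch based only on the readme excerpt above; check the module docs for whatever else a slave zone needs, for example the masters to transfer from):
class profile::dnsbind::server_slave {
  include 'bind'

  bind::zone { 'example.com':
    ensure          => 'present',
    zone_type       => 'slave',
    zone_origin     => 'example.com',
    transfer_source => '192.168.1.20',
  }
}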
My Puppet manifest looks like this:
$abrt_config = [ 'abrt.conf', 'abrt-action-save-package-data.conf' ]

file { $abrt_config:
  ensure => present,
  path   => "/etc/abrt/${abrt_config}",
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => "puppet:///modules/abrt/${abrt_config}",
}
My config files are located at the following paths:
/abrt/files/abrt.conf
/abrt/files/abrt-action-save-package-data.conf
I'm getting the following errors when executing Puppet on client nodes:
Error: /Stage[main]/Abrt/File[/etc/abrt/abrt-action-save-package-data.conf]: Could not evaluate: Could not retrieve information from environment development source(s) puppet:///modules/abrt//etc/abrt/abrt.conf/etc/abrt/abrt-action-save-package-data.conf
Error: /Stage[main]/Abrt/File[/etc/abrt/abrt.conf]: Could not evaluate: Could not retrieve information from environment development source(s) puppet:///modules/abrt//etc/abrt/abrt.conf/etc/abrt/abrt-action-save-package-data.conf
You cannot interpolate an array into a string in the source attribute like that and expect the desired behavior.
If you are using a non-obsolete version of Puppet, you can use a lambda iterator to solve this problem in the following way:
['abrt.conf', 'abrt-action-save-package-data.conf'].each |$abrt_config| {
  file { $abrt_config:
    ensure => present,
    path   => "/etc/abrt/${abrt_config}",
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => "puppet:///modules/abrt/${abrt_config}",
  }
}
Check the documentation here for more details: https://docs.puppet.com/puppet/4.8/function.html#each
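If you are stuck on an older Puppet release without lambda support, a defined type achieves the same result. A sketch, with abrt::config_file as an illustrative name in its own manifest file:
define abrt::config_file {
  # The resource title is the file name under /etc/abrt.
  file { "/etc/abrt/${title}":
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => "puppet:///modules/abrt/${title}",
  }
}

abrt::config_file { ['abrt.conf', 'abrt-action-save-package-data.conf']: }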
I'm trying to create an instance of a defined resource type (::apt::ppa) that comes before other resources. I am using the PuppetLabs Apt Module.
When adding a new repository via the module, the defined type contains an exec statement that notifies apt::update so that any packages that might be required can be installed correctly. However, when I run the code below, the notify gets scheduled after I attempt to install Java, causing the Java install to fail. I've tried putting anchors around the apt::ppa declaration, but that doesn't help. What else can I do?
class rap::java (
  $version = '7',
) {
  $package = "oracle-java${version}-installer"

  apt::ppa { 'ppa:webupd8team/java': } ->
  exec { 'accept-java-license':
    command => "/bin/echo ${package} shared/accepted-oracle-license-v1-1 select true | /usr/bin/sudo /usr/bin/debconf-set-selections",
    unless  => "/usr/bin/debconf-show ${package} | grep 'shared/accepted-oracle-license-v1-1: true'",
  } ->
  class { '::java':
    package      => $package,
    distribution => 'oracle-jre',
  }

  file_line { 'java_environment':
    path => '/etc/environment',
    line => "JAVA_HOME=\"/usr/lib/jvm/java-${version}-oracle\"",
  }
}
I believe the issue is that you need to include the apt class within the class you've made to get the ordering right.
This works for me on a new Precise box:
class rap::java (
  $version = '7',
) {
  $package = "oracle-java${version}-installer"

  include apt

  apt::ppa { 'ppa:webupd8team/java':
    package_manage => true,
  }

  exec { 'accept-java-license':
    command => "/bin/echo ${package} shared/accepted-oracle-license-v1-1 select true | /usr/bin/sudo /usr/bin/debconf-set-selections",
    unless  => "/usr/bin/debconf-show ${package} | grep 'shared/accepted-oracle-license-v1-1: true'",
  }

  class { '::java':
    package      => $package,
    distribution => 'oracle-jre',
    require      => [
      Apt::Ppa['ppa:webupd8team/java'],
      Exec['accept-java-license'],
    ],
  }

  file_line { 'java_environment':
    path => '/etc/environment',
    line => "JAVA_HOME=\"/usr/lib/jvm/java-${version}-oracle\"",
  }
}
Log of run:
Notice: Compiled catalog for precise64 in environment production in 0.78 seconds
Notice: /Stage[main]/Apt/File[preferences]/ensure: created
Notice: /Stage[main]/Rap::Java/Exec[accept-java-license]/returns: executed successfully
Notice: /Stage[main]/Rap::Java/File_line[java_environment]/ensure: created
Notice: /Stage[main]/Apt/Apt::Setting[conf-update-stamp]/File[/etc/apt/apt.conf.d/15update-stamp]/ensure: defined content as '{md5}0962d70c4ec78bbfa6f3544ae0c41974'
Notice: /Stage[main]/Rap::Java/Apt::Ppa[ppa:webupd8team/java]/Package[python-software-properties]/ensure: created
Notice: /Stage[main]/Rap::Java/Apt::Ppa[ppa:webupd8team/java]/Exec[add-apt-repository-ppa:webupd8team/java]/returns: executed successfully
Notice: /Stage[main]/Apt::Update/Exec[apt_update]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Java/Package[java-common]/ensure: created
Notice: /Stage[main]/Java/Package[java]/ensure: created
Notice: Applied catalog in 39.58 seconds
To extend this further: things that are blockers for a standard setup to run are generally moved into a run stage (documented here).
So I would probably move all of the various repo setup Puppet code into a pre run stage along with other prerequisites (repo setup is the usual thing you put there). The pre stage always runs before the main stage, so you don't have to explicitly declare on each package that its repo must be set up first. This also makes changes to repos and prerequisites a lot easier.
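A minimal sketch of that run-stage approach (the stage name and the rap::repos wrapper class are illustrative, not from the apt module):
# Declare an extra stage that runs before the default 'main' stage.
stage { 'pre':
  before => Stage['main'],
}

# Assign a (hypothetical) class holding the ppa/repo declarations to that stage.
class { 'rap::repos':
  stage => 'pre',
}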