No value from Hiera in Puppet manifests when Foreman is installed - puppet

If I try to get data for a module via calling_class, the data doesn't reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"           # node settings
  - "%{::environment}/profile/%{calling_class}" # profile settings
  - "%{::environment}/%{::environment}"         # environment settings
  - "%{::environment}/%{::osfamily}"            # osfamily settings
  - common                                      # common settings
:yaml:
  :datadir: '/etc/puppet/hiera'
/etc/puppet/hiera/production/profile/common.yaml
profile::common::directory_hierarchy:
  - "C:\\SiteName"
  - "C:\\SiteName\\Config"
profile::common::system: "common"
And the profile module manifest /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
  $directory_hierarchy = undef,
  $system              = undef
) {
  notify { "Dir is - ${directory_hierarchy}, my fqdn is ${fqdn}, system = ${system}": }
}
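A minimal debugging sketch, assuming Puppet 3.x's hiera() function; the 'NOT FOUND' defaults are placeholders. The lookups can be dropped into the class body to compare explicit Hiera resolution with automatic parameter lookup:
  # Explicit lookups, for comparison with the class parameters above
  $h_dir = hiera('profile::common::directory_hierarchy', 'NOT FOUND')
  $h_sys = hiera('profile::common::system', 'NOT FOUND')
  notify { "explicit hiera lookup: dir=${h_dir}, system=${h_sys}": }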
Puppet config /etc/puppet/puppet.conf:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
privatekeydir = $ssldir/private_keys { group = service }
hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
autosign = $confdir/autosign.conf { mode = 664 }
show_diff = false
hiera_config = $confdir/hiera.yaml
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
default_schedules = false
report = true
pluginsync = true
masterport = 8140
environment = production
certname = puppet024.novalocal
server = puppet024.novalocal
listen = false
splay = false
splaylimit = 1800
runinterval = 1800
noop = false
configtimeout = 120
usecacheonfailure = true
[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = puppet024.novalocal
strict_variables = false
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugins?

You need to have an environment (production in your sample) folder structure like the one below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the environment path you pasted, /etc/puppet/hiera/production/profile/common.yaml, is also wrong.
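Under that reading, the :yaml: section of hiera.yaml would have to point one level deeper; this :datadir: is an assumption inferred from the paths above, not something taken from the original config:
:yaml:
  :datadir: '/etc/puppet/hiera/environments'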
Side notes
At first glance, you shouldn't mix hieradata with the module path, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is in one of these three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile

Related

When sending the configuration to the puppet agent, I get an error

When sending the configuration to the agent, I get an error:
Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, no implicit conversion of String into Integer (file: /etc/puppetlabs/code/environments/production/modules/accounts/manifests/init.pp, line: 24, column: 24) on node
init.pp
# See README.md for details.
class accounts(
  $groups                   = {},
  $groups_membership        = undef,
  $ssh_keys                 = {},
  $users                    = {},
  $usergroups               = {},
  $accounts                 = {},
  $start_uid                = undef,
  $start_gid                = undef,
  $purge_ssh_keys           = false,
  $ssh_authorized_key_title = '%{ssh_key}-on-%{account}',
  $shell                    = undef,
  $managehome               = true,
  $forcelocal               = true,
) {
  include ::accounts::config

  create_resources(group, $groups)
  create_resources(accounts::account, $accounts)

  # Remove users marked as absent
  $absent_users = keys(absents($users))

  user { $absent_users:
    ensure     => absent,
    managehome => $managehome,
    forcelocal => $forcelocal,
  }
}
_______________________________________________________________________________________________
I'm new to Puppet and am using it together with Foreman.
Puppet master version 7.2.0
Puppet agent version 6.27.0
Foreman version 3.4.0
All settings were made in Foreman; manifests, like any other changes, were not made from the console.
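For context, create_resources() and keys() in the class above both expect Hash values, so one possible cause of this error is that one of the hash parameters reaches the class as a String (for example, a Foreman smart class parameter whose type is left as string). A hedged sketch of hash-shaped YAML for one such parameter; the group name and gid are hypothetical:
accounts::groups:
  developers:
    ensure: present
    gid: 3000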

rsync not working in a two-way lsyncd configuration

I have configured lsyncd on two servers. Below are the lsyncd config file settings for both servers.
For server 1, the rsync setting is:
source = server1
destination = server2
and for server 2 the setting is vice versa:
source = server2
destination = server1
rsync works perfectly on server 1 and immediately reflects new changes on server 2, even though it has a different user and permissions on the same folder.
On the second server, rsync only works when I run the lsyncd config file (command: sudo service lsyncd status -l).
rsync does not work for server 2; changes are not reflected immediately on server 1.
Here is my lsyncd config:
server1:
-- lsyncd config file for 2-way sync
settings {
  logfile        = "/var/log/lsyncd/lsyncd.log",
  statusFile     = "/var/log/lsyncd/lsyncd.status",
  statusInterval = 1,
  nodaemon       = true,
  insist         = true
}
sync {
  default.rsync,
  source = "path/uploads",
  target = "bitnami@<IP_server2>:/path/uploads",
  delete = 'running',
  -- delay = 5,
  rsync  = {
    -- timeout = 3000,
    update   = true,
    times    = true,
    archive  = true,
    compress = true,
    perms    = true,
    acls     = true,
    owner    = true,
    rsh      = "<ssh_key>"
  }
}
server 2:
settings {
  logfile        = "/var/log/lsyncd/lsyncd.log",
  statusFile     = "/var/log/lsyncd/lsyncd.status",
  statusInterval = 1,
  nodaemon       = true,
  insist         = true
}
sync {
  default.rsync,
  source = "path/uploads",
  target = "bitnami@<IP_server1>:/path/uploads",
  delete = 'running',
  -- delay = 5,
  rsync  = {
    -- timeout = 3000,
    update   = true,
    times    = true,
    archive  = true,
    compress = true,
    perms    = true,
    acls     = true,
    owner    = true,
    rsh      = "<ssh_key>"
  }
}
Can anyone help me out with this?
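One note on the rsync table above: in lsyncd's default.rsync, rsh is the remote-shell command that rsync should invoke, not a key file. A minimal sketch of that entry with key-based SSH; the key path and options are assumptions, not taken from this setup:
  rsync = {
    archive  = true,
    compress = true,
    -- rsh is handed to rsync as its remote shell; point it at ssh plus the identity file
    rsh = "/usr/bin/ssh -i /home/bitnami/.ssh/id_rsa -o StrictHostKeyChecking=no"
  }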

KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host. I'm using Terraform to create some virtual servers using KVM provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server gets created successfully and I can ping it from the host on which I ran the Terraform script. I cannot seem to log in through SSH, though, despite the fact that I pass my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reason why I cannot log in? Any way to debug? I cannot log in to the server now to debug. I tried setting the password in the cloud-config file, but that also does not work.
Additional information:
1) The rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
I also faced the same problem, because I was missing the first line
#cloud-config
in the cloudinit.cfg file.
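That is, the user data needs to start with the cloud-init header before any keys; a minimal sketch of the first lines, reusing the file from the question:
#cloud-config
manage_etc_hosts: true
users:
  - name: ubuntu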
You need to add a libvirt_cloudinit_disk resource to add the ssh-key to the VM. Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  count = length(var.hostname)
  name  = "${var.hostname[count.index]}-commoninit.iso"
  #name = "${var.hostname}-commoninit.iso"
  # pool = "default"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config.rendered
}
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without double quotes!
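Applied to the resource from the question, that looks roughly like this; the resource and data source names are reused from the question, everything else is unchanged:
resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "default"
  # unquoted references, so Terraform uses the rendered templates
  # instead of the literal strings
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
}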

Problem with the puppetlabs tagmail module

I'm using Puppet 6.14.0 and the tagmail module 3.2.0 on CentOS 7.
Below is my config on the master:
[master]
dns_alt_names=*******
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puippetlabs/code
confdir = /etc/puppetlabs/puppet
reports = puppetdb,console,tagmail
tagmap = $confdir/tagmail.conf
tagmail.conf (using a local SMTP server; I'm able to telnet to it):
[transport]
reportfrom = **********
smtpserver = localhost
smtpport = 25
smtphelo = localhost
[tagmap]
all: my_email_address
And below is my config on one managed node:
[main]
certname = *********
server = *********
environment =uat
runinterval = 120
[agent]
report = true
pluginsync = true
But I'm not receiving any report from tagmail.
Is someone having the same problem, or am I missing something in my config?
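For reference, a hedged sketch of a more specific [tagmap] section; the tags and addresses are placeholders. Each line maps a comma-separated list of tags (with ! to exclude a tag) to a list of recipients:
[tagmap]
all: ops@example.com
webserver, !nginx: web-team@example.com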

Vagrant : An AMI must be configured via "ami" (region: #{region})

I got the error below when running the vagrant command:
# vagrant up --provider=aws
There are errors in the configuration of this machine. Please fix
the following errors and try again:
AWS Provider:
* An AMI must be configured via "ami" (region: #{region})
I'm using Vagrant 2.0.1 with vagrant-aws 0.7.2
Vagrantfile:
Vagrant.configure("2") do |config|
require 'vagrant-aws'
Vagrant.configure('2') do |config|
config.vm.box = 'Vagarent'
config.vm.provider 'aws' do |aws, override|
aws.access_key_id = "xxxxxxxxxxxxxxxxxx"
aws.secret_access_key = "xxxxxxxxxxxxxxxxxxxxxxxx"
aws.keypair_name = 'ssh-keypair-name'
aws.instance_type = "t2.micro"
aws.region = 'us-west-2a'
aws.ami = 'ami-1122298f0'
aws.security_groups = ['default']
override.ssh.username = 'ubuntu'
override.ssh.private_key_path = '~/.ssh/ssh-keypair-file'
end
end
How to solve it?
us-west-2a is not a valid region name; see https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region for the full list of available regions and endpoints.
If your AMI is located in US West (Oregon), then you need to replace the value with us-west-2 in your Vagrantfile.
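For example, only the region line needs to change; the availability-zone line is optional and only an assumption about where the instance was meant to run:
aws.region            = "us-west-2"
# optionally pin the zone separately instead of folding it into the region
aws.availability_zone = "us-west-2a"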
Going through "vagrant-aws" documentation, following worked for me.
Installed "vagrant-aws" plugin with shell:
vagrant plugin install vagrant-aws
Added AWS compatible 'dummy-box' named "aws" added in config.vm.box = "aws":
vagrant box add aws https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
Created the following Vagrantfile:
# Require the AWS provider plugin
require 'vagrant-aws'

Vagrant.configure(2) do |config|
  config.vm.box = "aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
    aws.region            = "us-east-1"
    #aws.availability_zone = "us-east-1c"

    # EC2 Instance AMI
    aws.ami           = "ami-aa2ea6d0" # Ubuntu 16.04 in US-EAST
    aws.keypair_name  = "awswindows"   # change as per your key
    aws.instance_type = "t2.micro"
    aws.block_device_mapping = [{ 'DeviceName' => '/dev/sda1', 'Ebs.VolumeSize' => 10 }]
    aws.security_groups = ["YOUR_SG"]
    aws.tags = {
      'Name' => 'Vagrant EC2 Instance'
    }

    # Credentials to login to EC2 Instance
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY']
  end
end
Fired vagrant up --provider=aws.
Check once and let me know if you face any issue.
