Problem with puppet tagmail puppetlabs module - puppet

I'm using Puppet 6.14.0 and the tagmail module 3.2.0 on CentOS 7.
Below is my config on the master:
[master]
dns_alt_names=*******
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code
confdir = /etc/puppetlabs/puppet
reports = puppetdb,console,tagmail
tagmap = $confdir/tagmail.conf
tagmail.conf (using a local SMTP server; I'm able to telnet to it):
[transport]
reportfrom = **********
smtpserver = localhost
smtpport = 25
smtphelo = localhost
[tagmap]
all: my_email_address
And below is my config on one managed node:
[main]
certname = *********
server = *********
environment = uat
runinterval = 120
[agent]
report = true
pluginsync = true
But I'm not receiving any reports from tagmail.
Is anyone else having the same problem, or am I missing something in my config?
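One way to narrow this down, assuming a stock CentOS 7 install (the service name and log path below match the logdir in the [master] section above; adjust if yours differ):
# on the master: confirm the module is installed and reload the changed report settings
puppet module list | grep tagmail
systemctl restart puppetserver
# on the managed node: force a run so a report is submitted
puppet agent -t
# back on the master: the tagmail report processor runs server-side, so errors land here,
# and the outgoing message should show up in the local MTA log (e.g. /var/log/maillog)
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log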

Related

KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host. I'm using Terraform to create some virtual servers using KVM provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server gets created successfully, and I can ping it from the host on which I ran the Terraform script. However, I can't log in through SSH, despite passing my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reasons why I cannot log in? Any way to debug? I cannot log in to the server now to debug. I tried setting the password in the cloud-config file, but that also does not work.
*** Additional information
1) the rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
I also faced the same problem, because I was missing the first line
#cloud-config
in the cloudinit.cfg file.
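For example, the cloud_config.yaml from the question would then start like this (a minimal sketch; the only point here is that #cloud-config must be the very first line, and the key value is a placeholder):
#cloud-config
manage_etc_hosts: true
users:
  - name: ubuntu
    ssh-authorized-keys:
      - ssh-rsa AAAA... # your public key goes here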
You need to add a libvirt_cloudinit_disk resource to add the ssh-key to the VM. Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
count = length(var.hostname)
name = "${var.hostname[count.index]}-commoninit.iso"
#name = "${var.hostname}-commoninit.iso"
# pool = "default"
user_data = data.template_file.user_data[count.index].rendered
network_config = data.template_file.network_config.rendered
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without the double quotes!
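In other words, the commoninit resource from the question would end up looking like this (a sketch reusing the question's names; the only change that matters is the unquoted references):
resource "libvirt_cloudinit_disk" "commoninit" {
  name = "commoninit.iso"
  pool = "default"
  # references, not strings: no surrounding double quotes
  user_data = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
}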

Vagrant-Azure: Guest machine can't connect to host machine (Unable to copy SMB files)

I've been working with Vagrant only locally until now, and now I want to create a VM with Azure as the provider, but unfortunately I get the error that can be seen in the image accessible through the link. I understand what it says, but I have absolutely no idea how to fix it.
Error
I am also appending my Vagrantfile:
require 'vagrant-azure'
Vagrant.configure("2") do |config|
  config.vm.box = 'azure'
  config.vm.box_url = 'https://github.com/azure/vagrant-azure/raw/master/dummy.box'
  config.vm.network "private_network", guest: 80, host: 80
  config.ssh.username = 'vagrant'
  config.ssh.private_key_path = '~/.ssh/id_rsa'
  config.vm.synced_folder '.', '/vagrant', :disabled => true

  config.vm.provider :azure do |azure, override|
    azure.tenant_id = ****
    azure.client_id = ****
    azure.client_secret = ****
    azure.subscription_id = ****
    azure.tcp_endpoints = '80'
    azure.vm_name = 'grafmuvivm'
    azure.vm_size = 'Standard_B1s'
    azure.vm_image_urn = 'Canonical:UbuntuServer:18.04-LTS:latest'
    azure.resource_group_name = 'grafmuvirg'
    azure.location = 'westeurope'
    virtual_network_name = 'grafmuvivm-vagrantPublicIP'
  end

  # Declare where chef repository path
  chef_repo_path = "./chef"

  # Provisioning Chef-Zero
  config.vm.provision :chef_zero do |chef|
    # Added necessary chef attributes
    chef.cookbooks_path = 'chef/cookbooks'
    chef.nodes_path = 'chef/cookbooks'
    #### Adding recipes ####
    chef.add_recipe "api::ssh_user"
    chef.add_recipe "api::grafmuvi"
    # Running recipes
    chef.run_list = [
      'recipe[api::ssh_user]',
      'recipe[api::grafmuvi]'
    ]
    # Accept chef license
    chef.arguments = "--chef-license accept"
  end
end
If I run 'vagrant up --debug', it can be seen that the guest machine cannot ping any of the host machine's IPs.
Could someone please tell me how to properly set up networking on Vagrant? I've checked the GitHub issues related to this topic, but I didn't find anything useful...
EDIT:
I have worked with Vagrant but not with Vagrant-Azure. But can you change the configuration in the following way and show the output:
azure.vm.network "private_network", ip: "192.168.50.10"
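For orientation, in a plain Vagrantfile such a network line hangs off the top-level config object rather than the provider block; a sketch of where the suggestion would sit in the file above (whether the Azure provider actually honours a private_network IP is an assumption):
Vagrant.configure("2") do |config|
  config.vm.box = 'azure'
  # generic Vagrant form of the suggested setting
  config.vm.network "private_network", ip: "192.168.50.10"
  # ... provider and provisioning blocks unchanged ...
end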

Vagrant: An AMI must be configured via "ami" (region: #{region})

I got the error below when running the vagrant command:
# vagrant up --provider=aws
There are errors in the configuration of this machine. Please fix
the following errors and try again:
AWS Provider:
* An AMI must be configured via "ami" (region: #{region})
I'm using Vagrant 2.0.1 with vagrant-aws 0.7.2.
Vagrantfile:
Vagrant.configure("2") do |config|
require 'vagrant-aws'
Vagrant.configure('2') do |config|
config.vm.box = 'Vagarent'
config.vm.provider 'aws' do |aws, override|
aws.access_key_id = "xxxxxxxxxxxxxxxxxx"
aws.secret_access_key = "xxxxxxxxxxxxxxxxxxxxxxxx"
aws.keypair_name = 'ssh-keypair-name'
aws.instance_type = "t2.micro"
aws.region = 'us-west-2a'
aws.ami = 'ami-1122298f0'
aws.security_groups = ['default']
override.ssh.username = 'ubuntu'
override.ssh.private_key_path = '~/.ssh/ssh-keypair-file'
end
end
How to solve it?
us-west-2a is not a valid region name; see https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region for the full list of available regions and endpoints.
If your AMI is located in US West (Oregon), then you need to replace it with us-west-2 in your Vagrantfile.
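In the Vagrantfile from the question that amounts to something like this (us-west-2a would only be valid as an availability zone, not as the region):
aws.region = "us-west-2"                 # the region itself
# aws.availability_zone = "us-west-2a"   # optionally pin a specific AZ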
Going through "vagrant-aws" documentation, following worked for me.
Installed "vagrant-aws" plugin with shell:
vagrant plugin install vagrant-aws
Added AWS compatible 'dummy-box' named "aws" added in config.vm.box = "aws":
vagrant box add aws https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
Created the following Vagrantfile:
# Require the AWS provider plugin
require 'vagrant-aws'

Vagrant.configure(2) do |config|
  config.vm.box = "aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
    aws.region = "us-east-1"
    #aws.availability_zone = "us-east-1c"

    # EC2 Instance AMI
    aws.ami = "ami-aa2ea6d0" # Ubuntu 16.04 in US-EAST
    aws.keypair_name = "awswindows" # change as per your key
    aws.instance_type = "t2.micro"
    aws.block_device_mapping = [{ 'DeviceName' => '/dev/sda1', 'Ebs.VolumeSize' => 10 }]
    aws.security_groups = ["YOUR_SG"]
    aws.tags = {
      'Name' => 'Vagrant EC2 Instance'
    }

    # Credentials to login to EC2 Instance
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY']
  end
end
Fired vagrant up --provider=aws.
Check it once and let me know if you face any issues.
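Since the Vagrantfile above pulls the credentials from the environment, those variables have to be exported in the shell before bringing the machine up, along these lines (the variable names come from the Vagrantfile; the values shown are placeholders):
export AWS_ACCESS_KEY="your-access-key-id"
export AWS_SECRET_KEY="your-secret-access-key"
export AWS_PRIVATE_KEY="/path/to/awswindows-private-key.pem"
vagrant up --provider=aws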

'Failed to open TCP connection to 127.0.0.1:7058' when running selenium/capybara tests in parallel mode (using 8 threads)

We ran our Cucumber tests in parallel mode using 8 threads on Jenkins, but most of them failed with the following error:
Error: (Connection refused - connect(2) for "127.0.0.1" port 7058) (Errno::ECONNREFUSED)
./features/support/hooks.rb:3:in `Before'
Parameters:
OS: Linux
Selenium: gem 'selenium-webdriver', '~> 2.53.4'
Capybara: gem 'capybara', '>= 2.10.0'
Browser: Firefox version 45.5.0
hooks.rb
Before do |scenario|
  Capybara.reset_sessions!
  page.driver.browser.manage.window.maximize   # line 3, referenced in the error trace
  page.driver.browser.manage.delete_all_cookies
end
env.rb
browser = ENV['BROWSER'] || 'ff'
case browser
when 'ff', 'firefox'
  Capybara.register_driver :selenium do |app|
    Selenium::WebDriver::Firefox::Binary.path = "/usr/bin/firefox" if REGISTRY[:local_path_for_selenium]
    profile = Selenium::WebDriver::Firefox::Profile.new
    profile.assume_untrusted_certificate_issuer = false
    profile.secure_ssl = false
    profile['browser.manage.timeouts.implicit_wait'] = 100
    profile['browser.manage.timeouts.script_timeout'] = 100
    profile['browser.manage.timeouts.read_timeout'] = 500
    profile['browser.manage.timeouts.page_load'] = 120
    profile["browser.download.folderList"] = 2
    profile['browser.download.dir'] = "#{Rails.root}/downloads"
    profile['browser.helperApps.neverAsk.saveToDisk'] = "application/xlsx"
    profile['browser.helperApps.neverAsk.openFile'] = "application/xlsx"
    http_client = Selenium::WebDriver::Remote::Http::Default.new
    http_client.timeout = 410
    Capybara::Selenium::Driver.new(app, :profile => profile, :http_client => http_client)
  end
end
Capybara.default_driver = :selenium
Any suggestions would be welcome. Thanks in advance for your time.
Regards,
Ajay

No value from Hiera in Puppet manifests when Foreman is installed

If I try to get data for a module using calling_class, the data doesn't reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"            # node settings
  - "%{::environment}/profile/%{calling_class}"  # profile settings
  - "%{::environment}/%{::environment}"          # environment settings
  - "%{::environment}/%{::osfamily}"             # osfamily settings
  - common                                       # common settings
:yaml:
  :datadir: '/etc/puppet/hiera'
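For reference, this is how those hierarchy entries interpolate: with that :datadir:, the node-level entry for the agent configured further down (certname puppet024.novalocal, environment production) would resolve to a file such as:
/etc/puppet/hiera/production/node/puppet024.novalocal.yaml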
/etc/puppet/hiera/production/profile/common.yaml
profile::common::directory_hierarchy:
- "C:\\SiteName"
- "C:\\SiteName\\Config"
profile::common::system: "common"
And the profile module manifest /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
  $directory_hierarchy = undef,
  $system              = undef
) {
  notify { "Dir is- $directory_hierarchy my fqdn is $fqdn, system = $system": }
}
Puppet config /etc/puppet/puppet.conf:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
privatekeydir = $ssldir/private_keys { group = service }
hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
autosign = $confdir/autosign.conf { mode = 664 }
show_diff = false
hiera_config = $confdir/hiera.yaml
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
default_schedules = false
report = true
pluginsync = true
masterport = 8140
environment = production
certname = puppet024.novalocal
server = puppet024.novalocal
listen = false
splay = false
splaylimit = 1800
runinterval = 1800
noop = false
configtimeout = 120
usecacheonfailure = true
[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = puppet024.novalocal
strict_variables = false
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugins?
You need to have an environment (production in your sample) folder structure as below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production/*.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the environment path you pasted is wrong as well:
/etc/puppet/hiera/production/profile/common.yaml
Side notes
At first view, you shouldn't mix hieradata with the module path, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is in one of three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile
