Puppet with multiple NFS mounts on the same server - Linux

I have a few NFS mount points on the same server, but in different directories.
For example:
x.x.x.x:/stats /data/stats
x.x.x.x:/scratch /data/scratch
x.x.x.x:/ops /data/ops
But when I run Puppet it adds the following to my fstab (wrong mount assignment):
x.x.x.x:/scratch /data/stats nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/ops nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/scratch nfs defaults,nodev,nosharecache 0 0
It is using the last mount's device for all of the mounted partitions. So I did a bit of research and found the following bug report:
https://tickets.puppetlabs.com/browse/DOCUMENT-242
I then added the nosharecache option, but still no luck.
This is my Puppet code:
class profile::mounts::stats {
  # Hiera lookups
  $location  = hiera('profile::mounts::stats::location')
  $location2 = hiera('profile::mounts::stats::location2')

  tag 'new_mount'

  file { '/data/stats':
    ensure  => directory,
    owner   => 'root',
    group   => 'root',
    mode    => '0755',
    require => File['/data'],
    tag     => 'new_mount',
  }

  mount { '/data/stats':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/stats'],
    tag     => 'new_mount',
  }

  file { '/data/ops':
    ensure  => directory,
    owner   => 'root',
    group   => 'mail',
    mode    => '0775',
    require => File['/data'],
    tag     => 'new_mount',
  }

  mount { '/data/ops':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location2,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/ops'],
    tag     => 'new_mount',
  }

  file { '/data/scratch':
    ensure  => directory,
    owner   => 'root',
    group   => 'mail',
    mode    => '0775',
    require => File['/data'],
    tag     => 'new_mount',
  }

  mount { '/data/scratch':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location2,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/scratch'],
    tag     => 'new_mount',
  }
}
My Hiera lookup is as follows:
profile::mounts::stats::location: x.x.x.x:/stats
profile::mounts::stats::location2: x.x.x.x:/scratch
Why is it causing this unexpected behavior?

I compiled that code and I see a few issues:
You did not include the File['/data'] resource, but I assume you have that somewhere else?
After compiling I see this in the catalog:
$ cat myclass.json | jq '.resources | .[] | select(.type == "Mount") | [.title, .parameters]'
[
  "/data/stats",
  {
    "ensure": "mounted",
    "fstype": "nfs",
    "device": "x.x.x.x:/stats",
    "options": "defaults,nodev,nosharecache",
    "require": "File[/data/stats]",
    "tag": "new_mount"
  }
]
[
  "/data/ops",
  {
    "ensure": "mounted",
    "fstype": "nfs",
    "device": "x.x.x.x:/scratch",
    "options": "defaults,nodev,nosharecache",
    "require": "File[/data/ops]",
    "tag": "new_mount"
  }
]
[
  "/data/scratch",
  {
    "ensure": "mounted",
    "fstype": "nfs",
    "device": "x.x.x.x:/scratch",
    "options": "defaults,nodev,nosharecache",
    "require": "File[/data/scratch]",
    "tag": "new_mount"
  }
]
So you are mounting both /data/ops and /data/scratch on $location2. Is that an oversight? It does not match what you said you were trying to achieve.
Otherwise I can't reproduce what you said you are observing.
Is anything other than Puppet editing the fstab file? Did you try this code on a fresh box?
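That said, if the intended layout is the one at the top of the question (each directory mounted from its own export), the fix is to give every mount resource its own device. A minimal sketch, assuming a hypothetical third Hiera key profile::mounts::stats::location3 set to x.x.x.x:/ops:

# Hypothetical extra lookup so /data/ops gets its own export
$location3 = hiera('profile::mounts::stats::location3')   # e.g. x.x.x.x:/ops

mount { '/data/ops':
  ensure  => mounted,
  fstype  => 'nfs',
  device  => $location3,   # was $location2, which points at x.x.x.x:/scratch
  options => 'defaults,nodev,nosharecache',
  require => File['/data/ops'],
  tag     => 'new_mount',
}

The /data/scratch mount can keep device => $location2, since that key already points at x.x.x.x:/scratch.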

Related

Logs are ignoring the input section in config files

I have a simple setup for capturing logs through HTTP and TCP.
I've created 2 conf files at /etc/logstash/conf.d/ (see below), but logs sent through HTTP are also being passed through the TCP pipeline and vice versa. For example, when I send a log through TCP it ends up both in the http-logger-* index and in the tcp-logger-* index. It makes no sense to me :(
http_logger.conf
input {
  http {
    port => 9884
  }
}
filter {
  grok {
    match => ["[headers][request_path]", "\/(?<component>[\w-]*)(?:\/)?(?<env>[\w-]*)(?:\/)?"]
  }
}
output {
  amazon_es {
    hosts                 => ['XXXXX']
    region                => 'us-west-2'
    aws_access_key_id     => 'XXXXX'
    aws_secret_access_key => 'XXXXX'
    index                 => 'http-logger-%{+YYYY.MM.dd}'
  }
  stdout { codec => rubydebug }
}
tcp_logger.conf
input {
  tcp {
    port  => 9885
    codec => json
  }
}
filter {
}
output {
  amazon_es {
    hosts                 => ['XXXXX']
    region                => 'us-west-2'
    aws_access_key_id     => 'XXXXX'
    aws_secret_access_key => 'XXXXX'
    index                 => 'tcp-logger-%{+YYYY.MM.dd}'
  }
  stdout { codec => rubydebug }
}
Any ideas on what I am missing?
Thank you
Even when the input, filter, and output configuration is split across different files, Logstash processes it as one big configuration, as if all of the inputs, filters, and outputs were specified in a single file.
That means every event coming into Logstash passes through every configured filter and output plugin. In your case, each event picked up by the TCP or HTTP input plugin passes through the filter and output plugins configured in both http_logger.conf and tcp_logger.conf, which is why you are seeing events stashed in both the http-logger-* and tcp-logger-* indices.
To fix this, set a unique type field on the events picked up by the TCP and HTTP input plugins, and then apply the filter and output plugins selectively based on that type, as shown below:
http_logger.conf
input {
  http {
    port => 9884
    type => "http_log"
  }
}
filter {
  if [type] == "http_log" {
    grok {
      match => ["[headers][request_path]", "\/(?<component>[\w-]*)(?:\/)?(?<env>[\w-]*)(?:\/)?"]
    }
  }
}
output {
  if [type] == "http_log" {
    amazon_es {
      hosts                 => ['XXXXX']
      region                => 'us-west-2'
      aws_access_key_id     => 'XXXXX'
      aws_secret_access_key => 'XXXXX'
      index                 => 'http-logger-%{+YYYY.MM.dd}'
    }
  }
  stdout { codec => rubydebug }
}
tcp_logger.conf
input {
  tcp {
    port  => 9885
    codec => json
    type  => "tcp_log"
  }
}
output {
  if [type] == "tcp_log" {
    amazon_es {
      hosts                 => ['XXXXX']
      region                => 'us-west-2'
      aws_access_key_id     => 'XXXXX'
      aws_secret_access_key => 'XXXXX'
      index                 => 'tcp-logger-%{+YYYY.MM.dd}'
    }
  }
  stdout { codec => rubydebug }
}
The explanation provided by @Ram is spot on; however, there is a cleaner way of solving the issue: enter pipelines.yml.
By default it looks like this:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
Basically, it loads and combines all *.conf files - in my case I had two.
To solve the issue just separate the pipelines like so:
- pipeline.id: httplogger
  path.config: "/etc/logstash/conf.d/http_logger.conf"
- pipeline.id: tcplogger
  path.config: "/etc/logstash/conf.d/tcp_logger.conf"
The pipelines are now running separately :)
P.S. Don't forget to reload Logstash after any changes here.

Puppet Exec always runs even if the subscribed create_resources resource doesn't run

I have attached a report screenshot from Foreman, and pasted below is the class that I am having an issue with.
If it's hard to go through the entire code, I am highlighting the Exec section that is not working as expected:
exec { $service:
  path      => ["/usr/bin/", "/usr/sbin/", "/bin"],
  subscribe => Domain_ip_map[$domain_ip_map_titles],
  command   => "sudo service nagios restart",
}
The above Exec[$service] is subscribed to Domain_ip_map[...], which in turn is notified by Exec['purge-config-files'], which requires File['deployconfig.cfg'].
Since there is no change to the deployconfig.cfg file, File['deployconfig.cfg'] doesn't send a notify, so Exec['purge-config-files'] and the custom Domain_ip_map resources don't run. Up to this point everything works as expected. But the last part: Exec[$service] is subscribed to Domain_ip_map.
When Domain_ip_map is not running, how can Exec[$service] execute successfully?
class testclass (
  $data = {
    item1 => {
      domain    => 'testdomain.com',
      ipaddress => '1.1.1.1',
    },
  },
  $baseconfigdir       = '/usr/local/servers',
  $config_file_host    = '/usr/local/test.cfg',
  $config_file_service = '/usr/local/test_service.cfg'
) {
  validate_hash($data)
  $domain_ip_map_titles = keys($data)

  file { "${baseconfigdir}":
    ensure => directory,
  }

  exec { 'purge-config-files':
    command     => "/bin/rm -f ${baseconfigdir}/*",
    notify      => Domain_ip_map[$domain_ip_map_titles],
    require     => File['deployconfig.cfg'],
    refreshonly => true,
  }

  file { 'deployconfig.cfg':
    ensure  => file,
    path    => '/home/deployconfig.cfg',
    mode    => '0644',
    owner   => 'root',
    group   => 'root',
    content => "test",
    notify  => Exec['purge-config-files'],
  }

  # Problem here: it's subscribed to Domain_ip_map, but even if Domain_ip_map
  # doesn't run, Exec[$service] always executes. How?
  exec { $service:
    path      => ["/usr/bin/", "/usr/sbin/", "/bin"],
    subscribe => Domain_ip_map[$domain_ip_map_titles],
    command   => "sudo service nagios restart",
  }

  create_resources(domain_ip_map, $data)
}

define domain_ip_map($domain, $ipaddress) {
  nagios_host { $domain:
    ....
  }
  nagios_service { "check_ping_${domain}":
    ....
  }
}
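A note on what is happening there: an exec that only sets subscribe is not gated by refresh events. subscribe adds ordering and sends refresh events, but the exec still runs on every agent run unless it is constrained by refreshonly, onlyif, unless, or creates. A minimal sketch of the restart exec with refreshonly added, so that it fires only when one of the Domain_ip_map resources actually changes:

exec { $service:
  path        => ["/usr/bin/", "/usr/sbin/", "/bin"],
  command     => "sudo service nagios restart",
  subscribe   => Domain_ip_map[$domain_ip_map_titles],
  refreshonly => true,   # run only in response to a refresh event from a subscribed resource
}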

Logstash if statement within output

This simple config file has an error. I've tried everything I can think of to fix it. I've simplified this example; ultimately I'd like to use multiple input files and send them to different ports in the output.
input {
  file {
    path           => "/home/user/log/*"
    type           => "test1"
    start_position => "beginning"
  }
}
output {
  syslog {
    host => "server.com"
    if [type] == "test1" {
      port => 5555
    }
    severity => "informational"
    facility => "syslogd"
  }
}
The error is:
Error: Expected one of #, => at line 14, column 8 (byte 168) after output {
syslog {
host => "server.com"
if
Logstash is running on Ubuntu:
apt --installed list|grep logstash
logstash/stable,now 1.4.3-1-7e387fb all [installed]
I found the problem. I needed to move the syslog block inside the conditional, like this:
output {
  if [type] == "test1" {
    syslog {
      host     => "server.com"
      port     => 5555
      severity => "informational"
      facility => "syslogd"
    }
  }
}
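For the stated end goal (multiple input files sent to different ports), the same pattern extends naturally: give each input its own type and branch on it in the output. A rough sketch, where the second path, type, and port are hypothetical placeholders:

input {
  file {
    path           => "/home/user/log/*"
    type           => "test1"
    start_position => "beginning"
  }
  file {
    path           => "/home/user/other/*"   # hypothetical second log source
    type           => "test2"
    start_position => "beginning"
  }
}
output {
  if [type] == "test1" {
    syslog {
      host     => "server.com"
      port     => 5555
      severity => "informational"
      facility => "syslogd"
    }
  }
  if [type] == "test2" {
    syslog {
      host     => "server.com"
      port     => 5556                        # hypothetical second port
      severity => "informational"
      facility => "syslogd"
    }
  }
}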

Using Puppet to create an LVM volume with a dynamic size

I use the command below manually to create a logical volume (appslv) with the remaining 100% of the free space:
lvcreate -l +100%FREE -n appslv appsvg
But with the puppet-lvm module I create the volume with the code below:
class { 'lvm':
  volume_groups => {
    'appsvg' => {
      physical_volumes => [ '/dev/xvda5' ],
      logical_volumes  => {
        'appslv' => {
          'size'              => '500G',
          'mountpath'         => '/u01',
          'mountpath_require' => true,
        },
      },
    },
  },
}
But as the size of the attached /dev/xvda5 is unknown, I don't want to specify an exact size, as it varies from instance to instance.
So how can I specify in the .pp file to use the remaining 100%?
If you don't set any size parameter it will, by default, use all the available space.
Source code:
if !@resource[:extents] and !@resource[:size] and !@resource[:initial_size]
  args.push('--extents', '100%FREE')
end
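Based on that default, a sketch of the same class with the size key simply omitted should give appslv all of the remaining space:

class { 'lvm':
  volume_groups => {
    'appsvg' => {
      physical_volumes => [ '/dev/xvda5' ],
      logical_volumes  => {
        'appslv' => {
          # no 'size' (or 'extents') key: per the provider code above,
          # lvcreate is invoked with --extents 100%FREE
          'mountpath'         => '/u01',
          'mountpath_require' => true,
        },
      },
    },
  },
}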

Simplifying Puppet Manifest

I want to provision multiple sets of things on a server using existing Puppet modules; the simplest example would be:
file { "/var/www/MYVARIABLEHERE":
  ensure => "directory",
}
mysql::db { MYVARIABLEHERE:
  user     => MYVARIABLEHERE,
  password => MYVARIABLEHERE,
  host     => 'localhost',
  grant    => ['all'],
}
Is there a way to abstract this out so that I can have, say, an array of predefined options and then pass them into existing Puppet modules, so I don't end up with a manifest file that's thousands of lines long?
As per the answer below I have set up:
define mySites {
  mysql::db { $name:
    user     => $name,
    password => $name,
    host     => 'localhost',
    grant    => ['all'],
  }
  file { "/var/www/${name}.drupal.dev":
    ensure => "directory",
  }
}
I then call:
mySites {"site": $name => "test", }
and get the following error:
Could not parse for environment production: Syntax error at 'name'; expected '}'
You could use a defined type to simplify this:
define mydef($usern, $passn) {
  file { "/var/www/${usern}":
    ensure => "directory",
  }
  mysql::db { $usern:
    user     => $usern,
    password => $passn,
    host     => "localhost",
    grant    => ['all'],
  }
}

# You have to call the defined type for each case.
mydef { "u1": usern => "john", passn => "pass", }

# It might be possible to provide multiple arrays to a defined
# type if you use Puppet's future parser
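On the follow-up error: inside a defined type, $name is automatically set to the resource title, so it is not a parameter you can pass, and the $ in "$name => ..." is what trips the parser (note also that defined type names must start with a lowercase letter, so mysites rather than mySites). A small sketch of how the define above could then be invoked, including driving it from an array of titles; the site names are hypothetical:

# A single site: the title becomes $name inside the define
mysites { 'test': }

# An array of titles declares one instance of the define per element,
# which keeps the manifest short even with many sites
$sites = ['site1', 'site2', 'site3']
mysites { $sites: }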
