Using Puppet to create an LVM logical volume with dynamic size - Linux

I use the command below manually to create a logical volume (appslv) that takes up 100% of the remaining free space:
lvcreate -l +100%FREE -n appslv appsvg
With the puppet-lvm module, however, I create the volume with the following code:
class { 'lvm':
  volume_groups => {
    'appsvg' => {
      physical_volumes => [ '/dev/xvda5' ],
      logical_volumes  => {
        'appslv' => {
          'size'              => '500G',
          'mountpath'         => '/u01',
          'mountpath_require' => true,
        },
      },
    },
  },
}
But since the size of the attached /dev/xvda5 is unknown and varies from instance to instance, I don't want to specify an exact size.
So how can I specify in the .pp file that it should use 100% of the remaining space?

If you don't set any size parameter it will, by default, use all the available space.
From the module's source code:
if !@resource[:extents] and !@resource[:size] and !@resource[:initial_size]
  args.push('--extents', '100%FREE')
end
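So, applied to the question's class, a minimal sketch is simply the same hash with the 'size' key left out (all other parameter names as in the puppet-lvm module):
class { 'lvm':
  volume_groups => {
    'appsvg' => {
      physical_volumes => [ '/dev/xvda5' ],
      logical_volumes  => {
        'appslv' => {
          # no 'size' key: the provider falls back to --extents 100%FREE
          'mountpath'         => '/u01',
          'mountpath_require' => true,
        },
      },
    },
  },
}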

Related

logstash - Conditionally convert field types

I inherited a Logstash config as follows. I do not want to make major changes to it because I do not want to break anything that is working. The metrics are sent as logs with JSON in the format "metric": "metricname", "value": "int". This has been working great. However, there is a requirement to allow a string in value for a new metric. It is not really a metric but rather indicates the state of the processing as a string. With the following filter, everything is converted to an integer, and any string in value gets converted to 0. The requirement is that if the value is a string, it shouldn't attempt to convert it. Thank you!
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} - M_%{DATA:task}_%{NUMBER:thread} - INFO - %{GREEDYDATA:jmetric}" }
    remove_field => [ "message", "ecs", "original", "agent", "log", "host", "path" ]
    break_on_match => false
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
  date {
    match => ["ts", "ISO8601"]
    target => "@timestamp"
  }
  json {
    source => "jmetric"
    remove_field => "jmetric"
  }
  split {
    field => "points"
    add_field => {
      "metric" => "%{[points][metric]}"
      "value" => "%{[points][value]}"
    }
    remove_field => [ "points", "event", "tags", "ts", "stream", "input" ]
  }
  mutate {
    convert => { "value" => "integer" }
    convert => { "thread" => "integer" }
  }
}
You should mainly use index mappings for this.
Even if you handle things in Logstash, Elasticsearch will - if configured with the defaults - do dynamic mapping, which may work against any configuration you do in Logstash.
See Elasticsearch index templates
An index template is a way to tell Elasticsearch how to configure an index when it is created.
...
Index templates can contain a collection of component templates, as well as directly specify settings, mappings, and aliases.
Mappings are per index! This means that when you apply a new mapping, you will have to create a new index. You can "rollover" to a new index, or delete and import your data again. What you do depends on your data, how you receive it, etc. YMMV...
No matter what, if your index has the wrong mapping you will need to create a new index to get the new mapping.
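For example, a minimal composable index template (a sketch: the template name and index pattern are assumptions, and mapping value as keyword is one way to accept both numeric strings and state strings):
PUT _index_template/metrics-template
{
  "index_patterns": ["metrics-*"],
  "template": {
    "mappings": {
      "properties": {
        "metric": { "type": "keyword" },
        "value":  { "type": "keyword" }
      }
    }
  }
}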
PS: If you have a lot of legacy data, take a look at the Elasticsearch reindex API.
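If you also want to guard the conversion on the Logstash side, a minimal sketch, assuming numeric values contain only digits, is to replace the final mutate with a conditional convert:
mutate {
  convert => { "thread" => "integer" }
}
# only convert "value" when it looks like an integer
if [value] =~ /^-?[0-9]+$/ {
  mutate {
    convert => { "value" => "integer" }
  }
}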

Puppet with multiple NFS mounts on the same server

I have a few NFS mount points on the same server but in different directories, e.g.:
x.x.x.x:/stats /data/stats
x.x.x.x:/scratch /data/scratch
x.x.x.x:/ops /data/ops
But when I run Puppet it adds the following to my fstab (wrong mount assignments):
x.x.x.x:/scratch /data/stats nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/ops nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/scratch nfs defaults,nodev,nosharecache 0 0
It is using the last mount device for all mounted partitions, so I did a bit of research and found the following bug:
https://tickets.puppetlabs.com/browse/DOCUMENT-242
I then added the nosharecache option, but still no luck.
This is my Puppet code:
class profile::mounts::stats {
  # Hiera lookups
  $location  = hiera('profile::mounts::stats::location')
  $location2 = hiera('profile::mounts::stats::location2')

  tag 'new_mount'

  file { '/data/stats':
    ensure  => directory,
    owner   => 'root',
    group   => 'root',
    mode    => '0755',
    require => File['/data'],
    tag     => 'new_mount',
  }
  mount { '/data/stats':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/stats'],
    tag     => 'new_mount',
  }
  file { '/data/ops':
    ensure  => directory,
    owner   => 'root',
    group   => 'mail',
    mode    => '0775',
    require => File['/data'],
    tag     => 'new_mount',
  }
  mount { '/data/ops':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location2,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/ops'],
    tag     => 'new_mount',
  }
  file { '/data/scratch':
    ensure  => directory,
    owner   => 'root',
    group   => 'mail',
    mode    => '0775',
    require => File['/data'],
    tag     => 'new_mount',
  }
  mount { '/data/scratch':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location2,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/scratch'],
    tag     => 'new_mount',
  }
}
My Hiera data is as follows:
profile::mounts::stats::location: x.x.x.x:/stats
profile::mounts::stats::location2: x.x.x.x:/scratch
Why is it causing this unexpected behavior?
I compiled that code and I see a few issues:
You did not include the File['/data'] resource, but I assume you have that somewhere else?
After compiling I see this in the catalog:
$ cat myclass.json | jq '.resources | .[] | select(.type == "Mount") | [.title, .parameters]'
[
"/data/stats",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/stats",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/stats]",
"tag": "new_mount"
}
]
[
"/data/ops",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/scratch",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/ops]",
"tag": "new_mount"
}
]
[
"/data/scratch",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/scratch",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/scratch]",
"tag": "new_mount"
}
]
So you are mounting both /data/ops and /data/scratch on $location2. Is that an oversight? It does not match what you said you were trying to achieve.
Otherwise I can't reproduce what you said you are observing.
Is anything other than Puppet editing the fstab file? Did you try this code on a fresh box?
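If /data/ops was indeed meant to point at its own export (the desired fstab in the question maps x.x.x.x:/ops to /data/ops), a sketch of the fix, assuming a third Hiera key (the name location3 is hypothetical):
# Hiera
profile::mounts::stats::location3: x.x.x.x:/ops

# In the class
$location3 = hiera('profile::mounts::stats::location3')

mount { '/data/ops':
  ensure  => mounted,
  fstype  => 'nfs',
  device  => $location3, # was $location2, i.e. x.x.x.x:/scratch
  options => 'defaults,nodev,nosharecache',
  require => File['/data/ops'],
  tag     => 'new_mount',
}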

Puppet Exec always runs even if the subscribed create_resources resource doesn't run

I have attached a report screenshot from Foreman, and pasted below is the class that I am having an issue with.
If it's hard to go through the entire code, here is the Exec section that is not working as expected:
exec { $service:
  path      => [ '/usr/bin/', '/usr/sbin/', '/bin' ],
  subscribe => Domain_ip_map[$domain_ip_map_titles],
  command   => 'sudo service nagios restart',
}
The above Exec[$service] is subscribed to Domain_ip_map[...], which is in turn notified by Exec['purge-config-files'], which requires File['deployconfig.cfg'].
Since there is no change in the deployconfig.cfg file, File['deployconfig.cfg'] sends no notify, so Exec['purge-config-files'] and the custom Domain_ip_map resources don't run. Up to this point everything works as expected. But the last part, Exec[$service], is subscribed to Domain_ip_map.
When Domain_ip_map is not run, how can Exec[$service] execute successfully?
class testclass (
  $data = {
    item1 => {
      domain    => 'testdomain.com',
      ipaddress => '1.1.1.1',
    },
  },
  $baseconfigdir       = '/usr/local/servers',
  $config_file_host    = '/usr/local/test.cfg',
  $config_file_service = '/usr/local/test_service.cfg'
) {
  validate_hash($data)
  $domain_ip_map_titles = keys($data)

  file { $baseconfigdir:
    ensure => directory,
  }
  exec { 'purge-config-files':
    command     => "/bin/rm -f ${baseconfigdir}/*",
    notify      => Domain_ip_map[$domain_ip_map_titles],
    require     => File['deployconfig.cfg'],
    refreshonly => true,
  }
  file { 'deployconfig.cfg':
    ensure  => file,
    path    => '/home/deployconfig.cfg',
    mode    => '0644',
    owner   => 'root',
    group   => 'root',
    content => 'test',
    notify  => Exec['purge-config-files'],
  }
  # Problem here: it's subscribed to Domain_ip_map, but even if
  # Domain_ip_map doesn't run, Exec[$service] always executes. How???
  exec { $service:
    path      => [ '/usr/bin/', '/usr/sbin/', '/bin' ],
    subscribe => Domain_ip_map[$domain_ip_map_titles],
    command   => 'sudo service nagios restart',
  }
  create_resources(domain_ip_map, $data)
}
define domain_ip_map($domain, $ipaddress) {
  nagios_host { $domain:
    ....
  }
  nagios_service { "check_ping_${domain}":
    ....
  }
}
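A note on the likely cause, based on standard Puppet semantics: an exec runs on every agent run unless it is constrained; subscribe only adds ordering plus extra refresh-triggered runs, it does not make the exec conditional. To have Exec[$service] run only when one of the Domain_ip_map resources actually changes, a minimal sketch is to add refreshonly:
exec { $service:
  path        => [ '/usr/bin/', '/usr/sbin/', '/bin' ],
  command     => 'sudo service nagios restart',
  subscribe   => Domain_ip_map[$domain_ip_map_titles],
  refreshonly => true, # run only on a refresh event from the subscribed resources
}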

Can I write duplicate node blocks in the Puppet site.pp file?

I am trying to write duplicate node blocks in the site.pp file, which I am generating from Java code. When I test with 'puppetd --test', I do not get the changes from the other node blocks on the client.
site.pp
node "puppetclient1.domain.com" {
file { "twc-bind-9.9.4-0.noarch.rpm" :
source => "puppet:///files/modules/BIND/twc-bind-9.9.4-0.noarch.rpm",
}
}
node "puppetclient1.domain.com" {
package { "twc-bind" :
source => "/opt/test/files/twc-bind-9.9.4-0.noarch.rpm",
provider => "rpm",
ensure => "latest",
}
}
node "puppetclient1.domain.com" {
service { "named" :
subscribe => File["/opt/test/files/twc-bind-9.9.4-0.noarch.rpm"],
ensure => "running",
}
}
I'm pretty sure that Puppet will only match the first node definition it finds.
You need to make your Java code a little bit smarter and put all of the resources into a single node, i.e.
node "puppetclient1.domain.com" {
file { "twc-bind-9.9.4-0.noarch.rpm" :
source => "puppet:///files/modules/BIND/twc-bind-9.9.4-0.noarch.rpm",
}
package { "twc-bind" :
source => "/opt/test/files/twc-bind-9.9.4-0.noarch.rpm",
provider => "rpm",
ensure => "latest",
}
service { "named" :
subscribe => File["/opt/test/files/twc-bind-9.9.4-0.noarch.rpm"],
ensure => "running",
}
}
Another option would be to use node inheritance (note that node inheritance was deprecated and removed in Puppet 4, so this only applies to older versions).
If you have to deal with hundreds of resources and thousands of boxes, you should take care to produce a good design and model. Put your resources into classes, group classes into more general classes, and then assign classes to boxes. Use Hiera, parameterized classes, or both to vary the resources:
class twc-bind {
  file { "/opt/test/files/twc-bind-9.9.4-0.noarch.rpm":
    source => "puppet:///files/modules/BIND/twc-bind-9.9.4-0.noarch.rpm",
  }
  package { "twc-bind":
    source   => "/opt/test/files/twc-bind-9.9.4-0.noarch.rpm",
    provider => "rpm",
    ensure   => "latest",
  }
  service { "named":
    ensure => "running",
  }
  File["/opt/test/files/twc-bind-9.9.4-0.noarch.rpm"] -> Package["twc-bind"] -> Service["named"]
}
node "puppetclient1.domain.com" {
class { "twc-bind" :
}
}
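For example, a parameterized variant of the same class (a sketch: the version parameter is hypothetical, and current Puppet does not allow hyphens in class names, hence twc_bind):
class twc_bind ( $version = '9.9.4-0' ) {
  package { 'twc-bind':
    ensure   => $version,
    provider => 'rpm',
    source   => "/opt/test/files/twc-bind-${version}.noarch.rpm",
  }
  service { 'named':
    ensure    => 'running',
    subscribe => Package['twc-bind'],
  }
}

node 'puppetclient1.domain.com' {
  class { 'twc_bind':
    version => '9.9.4-0',
  }
}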
If you're using Java to generate manifests, you should model your Java classes too.

Simplifying Puppet Manifest

I want to provision multiple sets of things on a server using existing Puppet modules. The simplest example would be:
file { "/var/www/MYVARIABLEHERE":
ensure => "directory",
}
mysql::db { MYVARIABLEHERE:
user => MYVARIABLEHERE,
password => MYVARIABLEHERE,
host => 'localhost',
grant => ['all'],
}
Is there a way to abstract this out so that I can have, say, an array of predefined options and then pass them into existing Puppet modules, so I don't end up with a manifest file that's thousands of lines long?
As per the answer below I have set up:
define mySites {
  mysql::db { $name:
    user     => $name,
    password => $name,
    host     => 'localhost',
    grant    => ['all'],
  }
  file { "/var/www/${name}.drupal.dev":
    ensure => "directory",
  }
}
I then call:
mySites {"site": $name => "test", }
and get the following error:
Could not parse for environment production: Syntax error at 'name'; expected '}'
You could use a defined type to simplify this:
define mydef($usern, $passn) {
  file { "/var/www/${usern}":
    ensure => "directory",
  }
  mysql::db { $usern:
    user     => $usern,
    password => $passn,
    host     => "localhost",
    grant    => ['all'],
  }
}
# You have to call the defined type for each case.
mydef { "u1": usern => "john", passn => "pass", }
# It might be possible to provide multiple arrays to a defined
# type if you use Puppet's future parser.
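As for the syntax error in the question's follow-up: $name is the built-in resource title, not a parameter you can pass with $name =>, and type names must be lowercase. A sketch of a working invocation of the question's define (renamed to lowercase):
define mysites {
  mysql::db { $name:
    user     => $name,
    password => $name,
    host     => 'localhost',
    grant    => ['all'],
  }
  file { "/var/www/${name}.drupal.dev":
    ensure => directory,
  }
}

# the title becomes $name inside the define
mysites { 'test': }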
