Puppet Exec always runs even if the subscribed create_resources resource doesn't run

I have attached a report screenshot from Foreman, and pasted below is the class that I am having an issue with.
If it's hard to go through the entire code, here is the Exec section that is not working as expected:
exec { $service:
  path      => ["/usr/bin/", "/usr/sbin/", "/bin"],
  subscribe => Domain_ip_map[$domain_ip_map_titles],
  command   => "sudo service nagios restart",
}
The above Exec[$service] subscribes to Domain_ip_map[...], which in turn is notified by Exec['purge-config-files'], which requires File['deployconfig.cfg'].
Since there is no change to the deployconfig.cfg file, File['deployconfig.cfg'] does not change and therefore sends no notify, so Exec['purge-config-files'] and the custom Domain_ip_map resources don't run. Up to this point everything works as expected. But the last part: Exec[$service] is subscribed to Domain_ip_map.
When Domain_ip_map does not run, how can Exec[$service] execute successfully? (A note on this follows the full class below.)
class testclass (
  $data = {
    item1 => {
      domain    => 'testdomain.com',
      ipaddress => '1.1.1.1',
    },
  },
  $baseconfigdir       = '/usr/local/servers',
  $config_file_host    = '/usr/local/test.cfg',
  $config_file_service = '/usr/local/test_service.cfg'
) {
  validate_hash($data)
  $domain_ip_map_titles = keys($data)

  file { "${baseconfigdir}":
    ensure => directory,
  }

  exec { 'purge-config-files':
    command     => "/bin/rm -f ${baseconfigdir}/*",
    notify      => Domain_ip_map[$domain_ip_map_titles],
    require     => File['deployconfig.cfg'],
    refreshonly => true,
  }

  file { 'deployconfig.cfg':
    ensure  => file,
    path    => '/home/deployconfig.cfg',
    mode    => '0644',
    owner   => 'root',
    group   => 'root',
    content => "test",
    notify  => Exec['purge-config-files'],
  }

  # Problem here: it's subscribed to Domain_ip_map, but even if Domain_ip_map
  # doesn't run, Exec[$service] always executes. How?
  exec { $service:
    path      => ["/usr/bin/", "/usr/sbin/", "/bin"],
    subscribe => Domain_ip_map[$domain_ip_map_titles],
    command   => "sudo service nagios restart",
  }

  create_resources(domain_ip_map, $data)
}

define domain_ip_map($domain, $ipaddress) {
  nagios_host { $domain:
    ....
  }

  nagios_service { "check_ping_${domain}":
    ....
  }
}
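A note on the behaviour being asked about: an exec resource runs its command on every catalog application unless refreshonly => true is set; subscribe on its own only adds ordering and a refresh trigger, it does not make the exec conditional. A minimal sketch of the same exec with refreshonly added, keeping the original title and command, would be:

exec { $service:
  path        => ['/usr/bin/', '/usr/sbin/', '/bin'],
  command     => 'sudo service nagios restart',
  subscribe   => Domain_ip_map[$domain_ip_map_titles],
  refreshonly => true,  # run the command only when a subscribed resource sends a refresh event
}

Without refreshonly, the command is executed on every agent run regardless of whether the subscribed resources changed.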

Related

Process never ends when I run a command inside a container using Python

With a Python script I'm running Logstash via a command inside a Docker container. The normal behavior (with Logstash installed on the server) is that the pipeline shuts down after it has fetched the data, but here the process never ends.
logstash = subprocess.call([
    "docker", "exec", "-it", "logstash-docker_logstash_1",
    "/usr/share/logstash/bin/logstash",
    "-f", "/usr/share/logstash/pipeline/site-canvas.conf",
    "--path.data", "/usr/share/logstash/config/min-data/",
])
I'm using docker top to see the running processes inside the container.
What can I do to ensure that the process ends when it has finished getting the data?
This is my pipeline:
input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver:/db-ip:1433;databasename=omi"
    jdbc_user => "my-user"
    jdbc_password => "my-pass"
    statement => "SELECT
      TIME_CREATED, DESCRIPTION as problem, SEVERITY as severity_mame, NODEHINTS_DNSNAME as source, CATEGORY
      FROM [omi1062event].[dbo].[ALL_EVENTS]
      WHERE STATE = 'OPEN'
      AND NODEHINTS_DNSNAME LIKE 'mju%'
      AND TIME_CREATED >= DATEADD(day, -1, GETDATE())
      ORDER BY TIME_CREATED ASC
    "
    jdbc_default_timezone => "UTC"
  }
}
filter {
  date {
    match => [ "time_created", "ISO8601", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
    timezone => "Chile/Continental"
  }
}
output {
  elasticsearch {
    hosts => "my-ip:9200"
    index => "canvas"
    user => "my-user"
    password => "my-pass"
  }
}

Logstash - Split escape character " \ " is not working

I have Logstash checking logs from files on Windows. There are many apps running on Windows, so I want to use the folder path to determine which app a log comes from, but it is not working and I get this exception:
Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError", :message=>"Expected one of
\', ', any character at line 21, column 1 (byte 237)
My config:
input {
  beats {
    port => 5044
  }
}
filter {
  mutate {
    split => { "source" => '\\' }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => mt4log
  }
}
Can someone help me find out what the problem is here? Thanks.

Logs are ignoring input section in config files

I have a simple setup for capturing logs through HTTP and TCP.
I've created two conf files at /etc/logstash/conf.d/ (see below), but logs sent through HTTP are also being passed through the TCP pipeline and vice versa. For example, when I send a log through TCP it ends up both in the http-logger-* index and in tcp-logger-*. It makes no sense to me :(
http_logger.conf
input {
  http {
    port => 9884
  }
}
filter {
  grok {
    match => ["[headers][request_path]", "\/(?<component>[\w-]*)(?:\/)?(?<env>[\w-]*)(?:\/)?"]
  }
}
output {
  amazon_es {
    hosts => ['XXXXX']
    region => 'us-west-2'
    aws_access_key_id => 'XXXXX'
    aws_secret_access_key => 'XXXXX'
    index => 'http-logger-%{+YYYY.MM.dd}'
  }
  stdout { codec => rubydebug }
}
tcp_logger.conf
input {
  tcp {
    port => 9885
    codec => json
  }
}
filter {
}
output {
  amazon_es {
    hosts => ['XXXXX']
    region => 'us-west-2'
    aws_access_key_id => 'XXXXX'
    aws_secret_access_key => 'XXXXX'
    index => 'tcp-logger-%{+YYYY.MM.dd}'
  }
  stdout { codec => rubydebug }
}
Any ideas on what I am missing?
Thank you.
Even when the input, filter, and output configuration is split across different files, Logstash processes it as one single big configuration, as if all of the input, filter, and output sections were specified in a single file.
That said, every event coming into Logstash passes through all of the configured filter and output plugins. In your case, each event picked up by the TCP or HTTP input plugin passes through the filter and output plugins configured in both http_logger.conf and tcp_logger.conf, which is why you are seeing events stashed in both the http-logger-* and tcp-logger-* indices.
To fix this, we can set a unique type field for events picked up by the tcp and http input plugins, and then apply the filter and output plugins selectively using the type set in the input plugin, as shown below:
http_logger.conf
input {
  http {
    port => 9884
    type => "http_log"
  }
}
filter {
  if [type] == "http_log" {
    grok {
      match => ["[headers][request_path]", "\/(?<component>[\w-]*)(?:\/)?(?<env>[\w-]*)(?:\/)?"]
    }
  }
}
output {
  if [type] == "http_log" {
    amazon_es {
      hosts => ['XXXXX']
      region => 'us-west-2'
      aws_access_key_id => 'XXXXX'
      aws_secret_access_key => 'XXXXX'
      index => 'http-logger-%{+YYYY.MM.dd}'
    }
  }
  stdout { codec => rubydebug }
}
tcp_logger.conf
input {
  tcp {
    port => 9885
    codec => json
    type => "tcp_log"
  }
}
output {
  if [type] == "tcp_log" {
    amazon_es {
      hosts => ['XXXXX']
      region => 'us-west-2'
      aws_access_key_id => 'XXXXX'
      aws_secret_access_key => 'XXXXX'
      index => 'tcp-logger-%{+YYYY.MM.dd}'
    }
  }
  stdout { codec => rubydebug }
}
The explanation provided by #Ram is spot on; however, there is a cleaner way of solving the issue: enter pipelines.yml.
By default it looks like this:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
Basically, it loads and combines all *.conf files; in my case I had two.
To solve the issue, just separate the pipelines like so:
- pipeline.id: httplogger
  path.config: "/etc/logstash/conf.d/http_logger.conf"
- pipeline.id: tcplogger
  path.config: "/etc/logstash/conf.d/tcp_logger.conf"
The pipelines now run separately :)
P.S. Don't forget to reload Logstash after any changes here.

Puppet with multiple NFS mounts on the same server

I have a few NFS mount points on the same server, but in different directories.
For example:
x.x.x.x:/stats   /data/stats
x.x.x.x:/scratch /data/scratch
x.x.x.x:/ops     /data/ops
But when I try to run Puppet it adds the following to my fstab (wrong mount assignment):
x.x.x.x:/scratch /data/stats nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/ops nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/scratch nfs defaults,nodev,nosharecache 0 0
It is using the last mount's device for all mounted partitions, so I did a little bit of research and found the following bug:
https://tickets.puppetlabs.com/browse/DOCUMENT-242
I then added the nosharecache option, but still no luck.
This is my Puppet code:
class profile::mounts::stats {
  # Hiera lookups
  $location  = hiera('profile::mounts::stats::location')
  $location2 = hiera('profile::mounts::stats::location2')

  tag 'new_mount'

  file { '/data/stats':
    ensure  => directory,
    owner   => 'root',
    group   => 'root',
    mode    => '0755',
    require => File['/data'],
    tag     => 'new_mount',
  }

  mount { '/data/stats':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/stats'],
    tag     => 'new_mount',
  }

  file { '/data/ops':
    ensure  => directory,
    owner   => 'root',
    group   => 'mail',
    mode    => '0775',
    require => File['/data'],
    tag     => 'new_mount',
  }

  mount { '/data/ops':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location2,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/ops'],
    tag     => 'new_mount',
  }

  file { '/data/scratch':
    ensure  => directory,
    owner   => 'root',
    group   => 'mail',
    mode    => '0775',
    require => File['/data'],
    tag     => 'new_mount',
  }

  mount { '/data/scratch':
    ensure  => mounted,
    fstype  => 'nfs',
    device  => $location2,
    options => 'defaults,nodev,nosharecache',
    require => File['/data/scratch'],
    tag     => 'new_mount',
  }
}
My Hiera lookup is as follows:
profile::mounts::stats::location: x.x.x.x:/stats
profile::mounts::stats::location2: x.x.x.x:/scratch
Why is it causing this unexpected behavior?
I compiled that code and I see a few issues:
You did not include the File['/data'] resource, but I assume you have that somewhere else?
After compiling I see this in the catalog:
$ cat myclass.json | jq '.resources | .[] | select(.type == "Mount") | [.title, .parameters]'
[
  "/data/stats",
  {
    "ensure": "mounted",
    "fstype": "nfs",
    "device": "x.x.x.x:/stats",
    "options": "defaults,nodev,nosharecache",
    "require": "File[/data/stats]",
    "tag": "new_mount"
  }
]
[
  "/data/ops",
  {
    "ensure": "mounted",
    "fstype": "nfs",
    "device": "x.x.x.x:/scratch",
    "options": "defaults,nodev,nosharecache",
    "require": "File[/data/ops]",
    "tag": "new_mount"
  }
]
[
  "/data/scratch",
  {
    "ensure": "mounted",
    "fstype": "nfs",
    "device": "x.x.x.x:/scratch",
    "options": "defaults,nodev,nosharecache",
    "require": "File[/data/scratch]",
    "tag": "new_mount"
  }
]
So you are mounting both /data/ops and /data/scratch on $location2. Is that an oversight? It does not match what you said you were trying to achieve.
Otherwise I can't reproduce what you said you are observing.
Is anything other than Puppet editing the fstab file? Did you try this code on a fresh box?
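If the intent is for /data/ops to come from x.x.x.x:/ops, a minimal sketch of the fix follows. It assumes an additional Hiera key, profile::mounts::stats::location3 (a name not in the original), holding x.x.x.x:/ops, and leaves the other mounts as they are:

# Hypothetical extra Hiera key (not in the original data):
# profile::mounts::stats::location3: x.x.x.x:/ops
$location3 = hiera('profile::mounts::stats::location3')

mount { '/data/ops':
  ensure  => mounted,
  fstype  => 'nfs',
  device  => $location3,   # previously $location2, which resolves to x.x.x.x:/scratch
  options => 'defaults,nodev,nosharecache',
  require => File['/data/ops'],
  tag     => 'new_mount',
}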

Simplifying Puppet Manifest

I want to provision multiple sets of things on a server using existing Puppet modules. The simplest example would be:
file { "/var/www/MYVARIABLEHERE":
ensure => "directory",
}
mysql::db { MYVARIABLEHERE:
user => MYVARIABLEHERE,
password => MYVARIABLEHERE,
host => 'localhost',
grant => ['all'],
}
Is there a way to abstract this out so that I can have, say, an array of predefined options and then pass them into existing Puppet modules, so I don't end up with a manifest file that's thousands of lines long?
As per the answer below, I have set up:
define mySites {
  mysql::db { $name:
    user     => $name,
    password => $name,
    host     => 'localhost',
    grant    => ['all'],
  }
  file { "/var/www/${name}.drupal.dev":
    ensure => "directory",
  }
}
I then call:
mySites {"site": $name => "test", }
and get the following error:
Could not parse for environment production: Syntax error at 'name'; expected '}'
You could use a defined type to simplify this as much as possible:
define mydef($usern, $passn) {
  file { "/var/www/${usern}":
    ensure => "directory",
  }
  mysql::db { $usern:
    user     => $usern,
    password => $passn,
    host     => "localhost",
    grant    => ['all'],
  }
}

# You have to call the defined type for each case.
mydef { "u1": usern => "john", passn => "pass", }

# It might be possible to provide multiple arrays to a defined
# type if you use Puppet's future parser (see the sketch below).
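For completeness, here is a minimal sketch of that last idea, assuming the future parser (or Puppet 4 and later) and a hypothetical $sites hash mapping usernames to passwords; each() iterates over the hash and declares one mydef resource per entry:

$sites = {
  'john' => 'pass1',
  'jane' => 'pass2',
}

# One mydef resource per hash entry (requires the future parser / Puppet 4+).
$sites.each |$user, $pass| {
  mydef { $user:
    usern => $user,
    passn => $pass,
  }
}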
