I am getting the following error in the event viewer on my node during a Puppet run. I suspect the issue is with an incorrect lookup function call in my profile.
Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Scconfig::Coserveradmin[SomeSettings]:
parameter 'parameterA' expects a String value, got Undef
parameter 'parameterB' expects a String value, got Undef
parameter 'parameterC' expects a String value, got Undef
parameter 'parameterD' expects a String value, got Undef
coserveradmin is a defined resource type whose parameters are all strings. I would like to look up the values from a JSON file:
{
  "SASettings" : {
    "Watchdog" : {
      "ParameterA" : "somevalue",
      "ParameterB" : "somevalue"
    },
    "Serversettings" : {
      "ParameterC" : "somevalue",
      "ParameterD" : "somevalue"
    },
    "GeneralSettings" : {
      "ParameterE" : "somevalue",
      "ParameterF" : "somevalue"
    },
    "customsettings_prod" : {
      "ParameterG" : "somevalue",
      "ParameterH" : "%{facts.hostname}.example-cloud.com"
    },
    "customsettings_dev" : {
      "ParameterI" : "",
      "ParameterK" : "%{facts.hostname}.example.net"
    }
  }
}
In my hiera.yaml file I have defined the name of and the path to the JSON file:
- name: "Desired Some Settings"
  path: "default/serveradmin.json"
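For context, such an entry normally sits inside a Hiera 5 hierarchy roughly like the sketch below; the version/defaults lines and the datadir are assumptions, only the JSON entry itself comes from the question. The data_hash: json_data setting is what tells Hiera to parse that file as JSON rather than YAML.
# hiera.yaml (Hiera 5); layout assumed, only the JSON entry is from the question
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Desired Some Settings"
    data_hash: json_data
    path: "default/serveradmin.json"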
In my profile I have the following code:
class profile::scconfig_someprofile_a {
  .
  .
  .
  $hname = $::facts['hostname']
  $mac   = "${facts['macaddress'].delete(':')}"
  $adminpropeties = lookup('SASettings')
  if $hname =~ someregex {
    scconfig::coserveradmin { 'SomeSettings':
      property1 => $adminpropeties['customsettings_prod.ParameterG'],
      property2 => $adminproperties['Watchdog.ParameterA'],
      property3 => $adminproperties['Watchdog.ParameterB'],
      property4 => $adminproperties['Serversettings.ParameterC'],
      .
      .
      .
      .
      and so on
      .
      macaddress => $mac,
    }
  } elsif $hname =~ someregex {
    scconfig::coserveradmin { 'SomeSettings':
      property1 => $adminpropeties['customsettings_dev.ParameterI'],
      property2 => $adminproperties['Watchdog.ParameterA'],
      property3 => $adminproperties['Watchdog.ParameterB'],
      property4 => $adminproperties['Serversettings.ParameterC'],
      .
      .
      .
      .
      and so on
      .
      macaddress => $mac,
    }
  }
}
Also adding the code for the defined resource type, as requested.
define scconfig::coserveradmin (
  String $Property1,
  String $Property2,
  String $Property3,
  String $Property4,
  .
  .
  .
  String $macaddress,
) {
  $dscmoduleversion = lookup('requires.modules.codsc.version')
  if $dscmoduleversion != '' {
    $module = {
      'name'    => 'codsc',
      'version' => $dscmoduleversion,
    }
  } else {
    $module = 'codsc'
  }
  $configname1 = 'someconfig1'
  $configname2 = 'someconfig2'
  $configname3 = 'someconfig3'
  dsc { 'someconfig1':
    require       => lookup('requires.cloudopssoftware'),
    resource_name => 'Someresourcename',
    module        => $module,
    properties    => {
      configname => $configname1,
      Prop1      => $Property1,
      Prop2      => $Property2,
      Prop3      => $Property3,
    },
  }
  dsc { 'someconfig2':
    require       => lookup('requires.cloudopssoftware'),
    resource_name => 'someresourcename2',
    module        => $module,
    properties    => {
      configname => $configname2,
      Prop1      => $Property4,
      Prop2      => $Property5,
      Prop3      => $Property6,
    },
  }
  dsc { 'someconfig3':
    require       => lookup('requires.cloudopssoftware'),
    resource_name => 'someresourcename3',
    module        => $module,
    properties    => {
      configname => $configname3,
      Prop1      => $Property6,
      Prop2      => $Property7,
      Prop3      => $Property8,
      .
      .
      .
      Propn      => $macaddress,
    },
  }
}
Please note that the last property, the macaddress, is evaluated within the profile class, so I don't see any error for it.
Any ideas what the issue could be?
I suspect the issue is with an incorrect lookup function call in my profile.
That does not appear to be the case. If your lookup() call were not successfully looking up and returning a hash then you would get a different error when you tried to extract values.
I guess it's possible that you're retrieving the wrong hash, which would be a matter of your Hiera configuration and/or data rather than the lookup() call itself. But whether it's the right hash or the wrong one, the syntax you are trying to use to extract the data from it does not match the hash structure presented in the question. For example, this expression
$adminpropeties['customsettings_prod.ParameterG']
attempts to retrieve the value whose key is 'customsettings_prod.ParameterG', but the data presented contain no such key.
What you seem to want is
$adminpropeties['customsettings_prod']['ParameterG']
That extracts the value having key 'customsettings_prod', and, that value being a hash itself, extracts its value associated with key 'ParameterG'.
Alternatively, you may find the dig() function convenient for extracting data from nested data structures such as yours:
dig($adminpropeties, 'customsettings_prod', 'ParameterG')
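Putting that together, a minimal sketch of what the prod branch of the resource declaration could look like (the variable, parameter, and regex names are kept as the placeholders from the question):
$adminpropeties = lookup('SASettings')

if $hname =~ /someregex/ {
  scconfig::coserveradmin { 'SomeSettings':
    property1  => $adminpropeties['customsettings_prod']['ParameterG'],
    property2  => $adminpropeties['Watchdog']['ParameterA'],
    property3  => $adminpropeties['Watchdog']['ParameterB'],
    property4  => $adminpropeties['Serversettings']['ParameterC'],
    macaddress => $mac,
  }
}
Two further points worth checking: the question's code mixes the spellings $adminpropeties and $adminproperties, and referencing a variable that was never assigned also evaluates to Undef; and lookup() itself accepts dotted keys (for example lookup('SASettings.Watchdog.ParameterA')) if you would rather look each value up individually.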
I need to collect metrics from a URL. The format of the metrics is like this:
# HELP base:classloader_total_loaded_class_count Displays the total number of classes that have been loaded since the Java virtual machine has started execution.
# TYPE base:classloader_total_loaded_class_count counter
base:classloader_total_loaded_class_count 23003.0
I need to exclude from the collected events all lines which begin with a '#' character.
So I have arranged for the following configuration file:
input {
  http_poller {
    urls => {
      pool_metrics => {
        method => "get"
        url => "http://localhost:10090/metrics"
        headers => {
          "Content-Type" => "text/plain"
        }
      }
    }
    request_timeout => 30
    schedule => { cron => "* * * * * UTC" }
    codec => multiline {
      pattern => "^#"
      negate => "true"
      what => previous
    }
    type => "server_metrics"
  }
}
output {
  elasticsearch {
    # An index is created for each type of metrics input
    index => "logstash-%{type}"
  }
}
Unfortunately, when I check the collected data through Elasticsearch, I see it's not really what I was expecting. For example:
{
  "_index" : "logstash-server_metrics",
  "_type" : "doc",
  "_id" : "2egAvWcBwbQ9kTetvX2o",
  "_score" : 1.0,
  "_source" : {
    "type" : "server_metrics",
    "tags" : [
      "multiline"
    ],
    "message" : "# TYPE base:gc_ps_scavenge_count counter\nbase:gc_ps_scavenge_count 24.0",
    "@version" : "1",
    "@timestamp" : "2018-12-17T16:30:01.009Z"
  }
},
So it seems that the lines with '#' aren't skipped but appended to the next line from the metrics.
Can you recommend any way to fix it?
The multiline codec doesn't work this way. It merges the events into a single event, appending the lines that don't match ^# as you have observed.
I don't think it's possible to drop messages with a codec; you'll have to use the drop filter instead.
First remove the codec from your input configuration, then add this filter part to your configuration:
filter {
  if [message] =~ "^#" {
    drop {}
  }
}
Using conditionals, if the message matches ^#, the event will be dropped by the drop filter, as you wanted.
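For reference, a sketch of the whole pipeline with the codec removed and the drop filter added, as described above. It assumes each metrics line arrives as its own event; if the whole HTTP response body comes through as a single event, you would also need to split it into lines (for example with the split filter) before dropping the comments.
input {
  http_poller {
    urls => {
      pool_metrics => {
        method => "get"
        url => "http://localhost:10090/metrics"
      }
    }
    request_timeout => 30
    schedule => { cron => "* * * * * UTC" }
    type => "server_metrics"
  }
}

filter {
  # Discard the "# HELP" / "# TYPE" comment lines entirely.
  if [message] =~ "^#" {
    drop {}
  }
}

output {
  elasticsearch {
    # One index per type of metrics input
    index => "logstash-%{type}"
  }
}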
I have a logstash filter that extracts an api token string from an XML payload. I don't want to store the actual API token in elasticsearch, I want to store a hashed version. My filter file is as follows:
filter {
  xml {
    source => "xml_request"
    store_xml => "false"
    force_array => "false"
    xpath => [ "//authentication/apiKey/text()", "api_key" ]
  }
  if [api_key] =~ /.+/ {
    fingerprint {
      method => "SHA256"
      key => "some_random_string"
      source => "api_key"
      target => "api_key"
    }
  }
}
Unfortunately the fingerprint filter does not seem to be working because the api_key value is always the value from the XML input and not SHA256 hashed. I have tried setting the target field to a new field (e.g. api_key_hashed) to test, but the new field does not show up. Can anyone shed some light please?
I do not know if this helps, but you can try:
fingerprint {
  add_field => {
    "apikey"  => "%{api_key}"
    "api_key" => "%{apikey}"
  }
  remove_field => [ "api_key", "apikey" ]
}
Otherwise, you can try:
GROK_OVERRIDE or GROK ADD FIELD or DROP ADD FIELD
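For comparison, here is a minimal sketch of the fingerprint filter using its documented method/key/source/target options to write the hash into a separate field. The api_key_hashed field name is an assumption (the question mentions trying something similar), and this only illustrates the option layout, not a confirmed fix for the question's setup:
filter {
  if [api_key] =~ /.+/ {
    fingerprint {
      method => "SHA256"
      key    => "some_random_string"
      source => "api_key"
      target => "api_key_hashed"      # keep the hash in its own field
      remove_field => [ "api_key" ]   # common filter option; removes the clear-text key on success
    }
  }
}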
I am using the following Puppet class:
class myclass {
  $foo = [{"id" => "bar", "ip" => "1.1.1.1"}, {"id" => "baz", "ip" => "2.2.2.2"}]
  map { $foo: }
  define map () { notify { $name['id']: } }
}
But this gives me
err: Could not retrieve catalog from remote server: Could not intern from pson: Could not convert from pson: Could not find relationship target "Change_config::Map[ip1.1.1.1idbar]"
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
What is the reason for this?
Regards,
Malintha Adikari
Your array contains hashes. The resource declaration syntax works only for arrays of strings.
$foo = ["bar", "baz"]
map {$foo:}
define map () { notify {$name: } }
If you want to pass data with each resource title, you need to build a hash of your data (not an array of hashes) and use the create_resources function.
Untested example code:
$foo = {
  "bar" => { "ip" => "1.1.1.1" },
  "baz" => { "ip" => "2.2.2.2" },
}
create_resources('map', $foo)
define map ($ip = "") { notify { "$name has ip $ip": } }
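If you are on a release with the modern parser (Puppet 4+, or the future parser on late 3.x), a hedged sketch of converting the original array of hashes into the hash shape that create_resources expects could look like this; the variable names are illustrative:
# Data in the shape the question starts from
$foo_array = [
  { 'id' => 'bar', 'ip' => '1.1.1.1' },
  { 'id' => 'baz', 'ip' => '2.2.2.2' },
]

# Re-key the array by 'id' so each entry becomes <title> => <parameter hash>
$foo_hash = $foo_array.reduce({}) |$memo, $entry| {
  $memo + { $entry['id'] => { 'ip' => $entry['ip'] } }
}

create_resources('map', $foo_hash)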
I cannot get negative regexp expressions working within Logstash (as described in the docs).
Consider the following positive regex which works correctly to detect fields that have been assigned a value:
if [remote_ip] =~ /(.+)/ {
mutate { add_tag => ["ip"] }
}
However, the negative expression seems to return false even when the field is blank:
if [remote_ip] !~ /(.+)/ {
mutate { add_tag => ["no_ip"] }
}
Am I misunderstanding the usage?
Update - this was fuzzy thinking on my part. There were issues with my config file. If the rest of your config file is sane, the above should work.
This was fuzzy thinking on my part - there were issues with the rest of my config file.
Based on Ben Lim's example, I came up with an input that is easier to test:
input {
  stdin { }
}
filter {
  if [message] !~ /(.+)/ {
    mutate { add_tag => ["blank_message"] }
  }
  if [noexist] !~ /(.+)/ {
    mutate { add_tag => ["tag_does_not_exist"] }
  }
}
output {
  stdout { debug => true }
}
The output for a blank message is:
{
  "message" => "",
  "@version" => "1",
  "@timestamp" => "2014-02-27T01:33:19.285Z",
  "host" => "benchmark.example.com",
  "tags" => [
    [0] "blank_message",
    [1] "tag_does_not_exist"
  ]
}
The output for a message with the content "test message" is:
test message
{
  "message" => "test message",
  "@version" => "1",
  "@timestamp" => "2014-02-27T01:33:25.059Z",
  "host" => "benchmark.example.com",
  "tags" => [
    [0] "tag_does_not_exist"
  ]
}
Thus, the negative match !~ /(.+)/ returns true only when the field is empty or the field does not exist.
The negative match !~ /(.*)/ will only return true when the field does not exist. If the field exists (whether empty or with a value), the return value will be false.
Below is my configuration. The type field does not exist, therefore the negative expression returns true.
input {
  stdin {
  }
}
filter {
  if [type] !~ /(.+)/ {
    mutate { add_tag => ["aa"] }
  }
}
output {
  stdout { debug => true }
}
The regexp /(.+)/ accepts everything, including blank. So when the "type" field exists, even if the field value is blank, it still matches the regexp. Therefore, in your example, if the remote_ip field exists, your "negative expression" will always return false.
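Putting the two answers together, a sketch (using the field name from the original question) that tags the three cases separately, assuming the match behaviour described in this thread:
filter {
  if [remote_ip] !~ /(.*)/ {
    # /(.*)/ also matches an empty value, so this branch only fires
    # when the remote_ip field does not exist at all.
    mutate { add_tag => ["no_ip_field"] }
  }
  else if [remote_ip] !~ /(.+)/ {
    # The field exists but is empty.
    mutate { add_tag => ["empty_ip"] }
  }
  else {
    # The field exists and has a value.
    mutate { add_tag => ["ip"] }
  }
}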
I want to provision multiple sets of things on a server using existing Puppet modules; the simplest example would be:
file { "/var/www/MYVARIABLEHERE":
  ensure => "directory",
}
mysql::db { MYVARIABLEHERE:
  user     => MYVARIABLEHERE,
  password => MYVARIABLEHERE,
  host     => 'localhost',
  grant    => ['all'],
}
Is there a way to abstract this out so that I can have, say, an array of predefined options and then pass them into existing Puppet modules, so I don't end up with a manifest file that's thousands of lines long?
As per the answer below I have set up:
define mySites {
  mysql::db { $name:
    user     => $name,
    password => $name,
    host     => 'localhost',
    grant    => ['all'],
  }
  file { "/var/www/${name}.drupal.dev":
    ensure => "directory",
  }
}
I then call:
mySites {"site": $name => "test", }
and get the following error:
Could not parse for environment production: Syntax error at 'name'; expected '}'
You could use a defined type to simplify this as much as possible:
define mydef ($usern, $passn) {
  file { "/var/www/${usern}":
    ensure => "directory",
  }
  mysql::db { $usern:
    user     => $usern,
    password => $passn,
    host     => "localhost",
    grant    => ['all'],
  }
}
# You have to call the defined type once for each case.
mydef { "u1": usern => "john", passn => "pass", }
# It might be possible to provide multiple arrays to a defined
# type if you use Puppet's future parser.
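With the future parser (or any Puppet 4+ release) you can also drive the defined type from a data structure with an each loop; a hedged sketch using the names from this thread (the $sites hash and its entries are illustrative):
$sites = {
  'u1' => { 'usern' => 'john',  'passn' => 'pass'   },
  'u2' => { 'usern' => 'alice', 'passn' => 'secret' },
}

# Declare one mydef resource per hash entry
$sites.each |$title, $params| {
  mydef { $title:
    usern => $params['usern'],
    passn => $params['passn'],
  }
}
On releases where iteration is not available, create_resources('mydef', $sites) would achieve the same thing.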