How to use a list in an if statement in Puppet?

class am_rgw_profile::am_rgw_ports {
  firewalld::custom_service { 'am_rgw_ports':
    short       => 'am_rgw_ports',
    description => 'IIQ IaaS gateway wg6561',
    port        => [
      if ('xarsiiq1xd' in $hostname) { [
        { 'port' => '2470', 'protocol' => 'tcp' },
        { 'port' => '2472', 'protocol' => 'tcp' },
        { 'port' => '2474', 'protocol' => 'tcp' },
        { 'port' => '2476', 'protocol' => 'tcp' },
        { 'port' => '2478', 'protocol' => 'tcp' },
        { 'port' => '2480', 'protocol' => 'tcp' },
        { 'port' => '2482', 'protocol' => 'tcp' },
        { 'port' => '2484', 'protocol' => 'tcp' },
        { 'port' => '2486', 'protocol' => 'tcp' },
        { 'port' => '2490', 'protocol' => 'tcp' },
        { 'port' => '2492', 'protocol' => 'tcp' },
        { 'port' => '2494', 'protocol' => 'tcp' },
        { 'port' => '2496', 'protocol' => 'tcp' },
        { 'port' => '2498', 'protocol' => 'tcp' },
        { 'port' => '2500', 'protocol' => 'tcp' },
        { 'port' => '2502', 'protocol' => 'tcp' },
        { 'port' => '2504', 'protocol' => 'tcp' },
        { 'port' => '2506', 'protocol' => 'tcp' },
        { 'port' => '2508', 'protocol' => 'tcp' },
        { 'port' => '2510', 'protocol' => 'tcp' },
        { 'port' => '2512', 'protocol' => 'tcp' },
        { 'port' => '2514', 'protocol' => 'tcp' },
      ] } elsif ('xarsiiq1xe' in $hostname) { [
        { 'port' => '2492', 'protocol' => 'tcp' },
        { 'port' => '2516', 'protocol' => 'tcp' },
        { 'port' => '2518', 'protocol' => 'tcp' },
        { 'port' => '2520', 'protocol' => 'tcp' },
        { 'port' => '2522', 'protocol' => 'tcp' },
        { 'port' => '2524', 'protocol' => 'tcp' },
        { 'port' => '2526', 'protocol' => 'tcp' },
        { 'port' => '2528', 'protocol' => 'tcp' },
        { 'port' => '2530', 'protocol' => 'tcp' },
        { 'port' => '2532', 'protocol' => 'tcp' },
        { 'port' => '2534', 'protocol' => 'tcp' },
        { 'port' => '2536', 'protocol' => 'tcp' },
        { 'port' => '2538', 'protocol' => 'tcp' },
        { 'port' => '2540', 'protocol' => 'tcp' },
        { 'port' => '2542', 'protocol' => 'tcp' },
        { 'port' => '2544', 'protocol' => 'tcp' },
        { 'port' => '2546', 'protocol' => 'tcp' },
        { 'port' => '2548', 'protocol' => 'tcp' },
        { 'port' => '2550', 'protocol' => 'tcp' },
        { 'port' => '2552', 'protocol' => 'tcp' },
        { 'port' => '2554', 'protocol' => 'tcp' },
        { 'port' => '2556', 'protocol' => 'tcp' },
        { 'port' => '2558', 'protocol' => 'tcp' },
      ] } else {
        { 'port' => '2492', 'protocol' => 'tcp' }
      }
    ],
  }
  firewalld_service { 'Allow am_racf_ports services for IIQ gateway servers':
    ensure  => 'present',
    service => 'am_rgw_ports',
    zone    => 'Cali',
  }
}
When my code reaches the if condition, this is the error I get:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, Failed to parse template firewalld/service.xml.erb:
Filepath: org/jruby/RubyArray.java
Line: 1489
Detail: no implicit conversion of String into Integer
(file: /etc/puppetlabs/code/common_modules/modules/firewalld/manifests/custom_service.pp, line: 66, column: 16) (file: /etc/puppetlabs/code/environments/community_am_racf_gateway/profiles/am_rgw_profile/manifests/am_rgw_ports.pp, line: 2) on node xarsiiq1xd.opr.test.zone.org
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What am I doing wrong?

The error message may be a bit confusing, but the syntax you are trying to use is simply incorrect: the if expression evaluates to the array from whichever branch matches, and because that expression already sits inside [ ... ], the port parameter ends up as an array nested inside another array, which the firewalld template cannot handle.
What you appear to be trying to do is something like this:
class am_rgw_profile::am_rgw_ports {
  if ('xarsiiq1xd' in $hostname) {
    $port = [
      {
        'port'     => '2470',
        'protocol' => 'tcp',
      },
      {
        'port'     => '2472',
        'protocol' => 'tcp',
      }, # etc
    ]
  } elsif ('xarsiiq1xe' in $hostname) {
    $port = [
      {
        'port'     => '2492',
        'protocol' => 'tcp',
      },
      {
        'port'     => '2516',
        'protocol' => 'tcp',
      }, # etc
    ]
  } else {
    # Default case from the original code, so $port is never undef
    $port = [
      {
        'port'     => '2492',
        'protocol' => 'tcp',
      },
    ]
  }
  firewalld::custom_service { 'am_rgw_ports':
    short       => 'am_rgw_ports',
    description => 'IIQ IaaS gateway wg6561',
    port        => $port,
  }
  firewalld_service { 'Allow am_racf_ports services for IIQ gateway servers':
    ensure  => 'present',
    service => 'am_rgw_ports',
    zone    => 'Cali',
  }
}

A small optimisation on Alex Harvey's answer:
$port = $facts['hostname'] ? {
  /xarsiiq1xd/ => [
    {'port' => '2470', 'protocol' => 'tcp'},
    {'port' => '2472', 'protocol' => 'tcp'},
  ],
  /xarsiiq1xe/ => [
    {'port' => '2492', 'protocol' => 'tcp'},
    {'port' => '2516', 'protocol' => 'tcp'},
  ],
  default      => [
    {'port' => '2492', 'protocol' => 'tcp'},
  ],
}
firewalld::custom_service { 'am_rgw_ports':
  short       => 'am_rgw_ports',
  description => 'IIQ IaaS gateway wg6561',
  port        => $port,
}
firewalld_service { 'Allow am_racf_ports services for IIQ gateway servers':
  ensure  => 'present',
  service => 'am_rgw_ports',
  zone    => 'Cali',
}
Note, you can put the selector expression inline instead of first assigning it to the $port variable, e.g.
firewalld::custom_service { 'am_rgw_ports':
  short       => 'am_rgw_ports',
  description => 'IIQ IaaS gateway wg6561',
  port        => $facts['hostname'] ? { /xarsiiq1xd/ => [{...}], /xarsiiq1xe/ => [{...}], },
}
but it looks ugly.
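If the port lists keep growing, a further option is to move the data out of the manifest entirely and let a class parameter pick it up from Hiera via automatic parameter lookup. This is only a sketch under assumed names: it presumes a Hiera 5 hierarchy with per-node data files, and the ports key shown here is hypothetical.

# Hypothetical per-node Hiera data, e.g. data/nodes/xarsiiq1xd.yaml:
#
#   am_rgw_profile::am_rgw_ports::ports:
#     - port: '2470'
#       protocol: 'tcp'
#     - port: '2472'
#       protocol: 'tcp'

class am_rgw_profile::am_rgw_ports (
  # Default mirrors the original else branch; Hiera overrides it per node.
  Array[Hash] $ports = [{ 'port' => '2492', 'protocol' => 'tcp' }],
) {
  firewalld::custom_service { 'am_rgw_ports':
    short       => 'am_rgw_ports',
    description => 'IIQ IaaS gateway wg6561',
    port        => $ports,
  }
}

With that in place, no conditional logic is needed in the manifest at all.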

Related

Logstash update nested data

I have a nested field mapping as below:
"appdata": {
"type":"nested",
"include_in_parent":true,
"properties": {
"accessType": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"appname": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"eventtime": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
I'm updating it using the Logstash elasticsearch output plugin as below:
elasticsearch {
  hosts => ["localhost:9200"]
  document_id => "%{sid}"
  index => "dashboard_write"
  timeout => 30
  script => "if (ctx._source.appdata == null) { ctx._source.appdata = params.event.get('appdata') } else { ctx._source.appdata = ctx._source.appdata + params.event.get('appdata') }"
  doc_as_upsert => true
  action => "update"
}
The first time, appdata will be null and the script should assign the value; for each subsequent event, it should append the data to the existing appdata.
But I see that ctx._source.appdata is empty even though the data is there.
Am I doing anything wrong here?
My input is syslog, and my filter is as below:
filter {
  if [I] == "525" {
    useragent {
      source => "T"
    }
    kv {
      source => "S"
      field_split => ","
      value_split => "="
    }
    mutate {
      split => ["Y", " "]
      split => ["B", "-"]
      split => ["T", "?"]
      add_field => { "sessionid" => "%{[T][0]}" }
      add_field => { "appdata" => null }
    }
  }
  if [I] == "514" {
    mutate {
      split => ["T", "?"]
      rename => { "I" => "eventid" }
      add_field => { "sid" => "%{[T][0]}" }
      add_field => { "[appdata][accessType]" => "app1" }
      add_field => { "[appdata][appname]" => "%{F}" }
      add_field => { "[appdata][eventtime]" => "%{timestamp}" }
    }
  }
}
The output section looks like below:
if[I]=="525"
{
elasticsearch
{
hosts => ["localhost:9200"]
document_id => "%{sessionid}"
index => "dashboard_write"
}
stdout { codec => rubydebug }
}
else if[I]=="514"
{
elasticsearch
{
hosts => ["localhost:9200"]
document_id => "%{sid}"
index => "dashboard_write"
timeout => 30
script => "if (ctx._source.appdata == null) { ctx._source.appdata =
params.event.get('appdata') } else { ctx._source.appdata =
ctx._source.appdata + params.event.get('appdata') }"
doc_as_upsert => true
action => "update"
}
}
A 525 event comes first and is stored with a sessionid and appdata as null. Next, a 514 event comes with the same session id, and I am supposed to update the existing 525 record with the 514 event's appdata. Subsequently, if multiple 514 events come, each should be appended to the nested object for the same session id.
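Two things stand out here, offered only as hedged suggestions since I have not reproduced this setup. First, add_field values are strings, so add_field => { "appdata" => null } most likely stores the string "null" rather than a real null, which defeats the ctx._source.appdata == null check. Second, Painless does not define + for appending objects; if appdata is meant to accumulate entries, it should be a list that the script add()s to, roughly like this:

elasticsearch {
  hosts => ["localhost:9200"]
  document_id => "%{sid}"
  index => "dashboard_write"
  timeout => 30
  # Sketch: initialise appdata as a one-element list on first sight,
  # then append to that list on every later event.
  script => "if (ctx._source.appdata == null) { ctx._source.appdata = [params.event.get('appdata')] } else { ctx._source.appdata.add(params.event.get('appdata')) }"
  doc_as_upsert => true
  action => "update"
}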

Trailing slash issue with grok filters in Logstash

The messages should be parsed with "repo" as "test" and "test-group" respectively, but the third message hits a grok parse error: it is missing the trailing slash, so the filter fails to extract "resource_path" for it. I had to use a custom regex because I wanted to stop the "api" segment being parsed as the repo.
I would like to know if there is a workaround so that messages that don't end with a trailing slash still get parsed and don't throw errors.
Test messages used:
20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test/Lighter-test-group|HTTP/1.1|200|452
20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group/|HTTP/1.1|200|452
20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group|HTTP/1.1|200|452
Grok filter:
filter {
  grok {
    break_on_match => false
    match => { "message" => "%{DATA:timestamp_local}\|%{NUMBER:duration}\|%{WORD:requesttype}\|%{IP:clientip}\|%{DATA:username}\|%{WORD:method}\|%{DATA:resource}\|%{DATA:protocol}\|%{NUMBER:statuscode}\|%{NUMBER:bytes}" }
  }
  grok {
    break_on_match => false
    match => { "resource" => "^(\/)+[^\/]+/%{DATA:repo}/%{GREEDYDATA:resource_path}" }
  }
}
Expected result:
{
    "@timestamp" => 2019-08-20T19:09:48.008Z,
    "path" => "/Users/hack/test-status.log",
    "timestamp_local" => "20190815175019",
    "username" => "anonymous",
    "method" => "POST",
    "repo" => "test-group",
    "bytes" => "452",
    "requesttype" => "REQUEST",
    "protocol" => "HTTP/1.1",
    "duration" => "9599",
    "clientip" => "14.56.55.120",
    "resource" => "/api/test-group/",
    "statuscode" => "200",
    "message" => "20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group/|HTTP/1.1|200|452",
    "host" => "empty",
    "@version" => "1"
}
Actual output:
{
    "@timestamp" => 2019-08-20T19:09:48.009Z,
    "path" => "/Users/hack/test-status.log",
    "timestamp_local" => "20190815175019",
    "username" => "anonymous",
    "method" => "POST",
    "bytes" => "452",
    "requesttype" => "REQUEST",
    "protocol" => "HTTP/1.1",
    "duration" => "9599",
    "clientip" => "14.56.55.120",
    "resource" => "/api/test-group",
    "statuscode" => "200",
    "message" => "20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group|HTTP/1.1|200|452",
    "host" => "empty",
    "@version" => "1",
    "tags" => [
        [0] "_grokparsefailure"
    ]
}

Logstash is sending a log twice (repeating logs issue)

I am parsing the logs of a file on my server and sending only info-, warning- and error-level logs to my API, but I am receiving each log twice. In the output I map the parsed log values onto my JSON fields and send that JSON to my API, and I receive that JSON mapping twice.
I have analyzed my Logstash log file, but each log entry appears only once in the file.
{
    "log_EventMessage" => "Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read timed ",
    "message" => "TID: [-1234] [] [2017-08-11 12:03:11,545] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read time",
    "type" => "carbon",
    "TimeStamp" => "2017-08-11T12:03:11.545",
    "tags" => [
        [0] "grokked",
        [1] "loglevelinfo",
        [2] "_grokparsefailure"
    ],
    "log_EventTitle" => "org.apache.axis2.transport.http.HTTPSender",
    "path" => "/home/waqas/Documents/repository/logs/carbon.log",
    "@timestamp" => 2017-08-11T07:03:13.668Z,
    "@version" => "1",
    "host" => "ubuntu",
    "log_SourceSystemId" => "-1234",
    "EventId" => "b81a054e-babb-426c-b0a0-268494d14a0e",
    "log_EventType" => "INFO"
}
My configuration is below. I need help: I am unable to figure out why this is happening.
input {
  file {
    path => "LOG_FILE_PATH"
    type => "carbon"
    start_position => "end"
    codec => multiline {
      pattern => "(^\s*at .+)|^(?!TID).*$"
      negate => false
      what => "previous"
      auto_flush_interval => 1
    }
  }
}
filter {
  #***********************************************************
  # Grok Pattern to parse Single Line Log Entries
  #***********************************************************
  if [type] == "carbon" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_EventType}%{SPACE}{%{JAVACLASS:log_EventTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_EventMessage}" ]
      add_tag => [ "grokked" ]
    }
    mutate {
      gsub => [
        "TimeStamp", "\s", "T",
        "TimeStamp", ",", "."
      ]
    }
    if "grokked" in [tags] {
      grok {
        match => ["log_EventType", "INFO"]
        add_tag => [ "loglevelinfo" ]
      }
      grok {
        match => ["log_EventType", "ERROR"]
        add_tag => [ "loglevelerror" ]
      }
      grok {
        match => ["log_EventType", "WARN"]
        add_tag => [ "loglevelwarn" ]
      }
    }
    #*****************************************************
    # Grok Pattern in Case of Failure
    #*****************************************************
    if !( "_grokparsefailure" in [tags] ) {
      grok {
        match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
        add_tag => [ "grokked" ]
      }
      date {
        match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
        target => "TimeStamp"
        timezone => "UTC"
      }
    }
  }
  #*******************************************************************
  # Grok Pattern to handle MultiLines Exceptions and StackTraces
  #*******************************************************************
  if ( "multiline" in [tags] ) {
    grok {
      match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
      add_tag => [ "multiline" ]
      tag_on_failure => [ "multiline" ]
    }
    date {
      match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
      target => "TimeStamp"
    }
  }
}
filter {
  uuid {
    target => "EventId"
  }
}
output {
  if [type] == "carbon" {
    if "loglevelerror" in [tags] {
      stdout { codec => rubydebug }
      #*******************************************************************
      # Sending Error Messages to API
      #*******************************************************************
      http {
        url => "https://localhost:8000/logs"
        headers => {
          "Accept" => "application/json"
        }
        connect_timeout => 60
        socket_timeout => 60
        http_method => "post"
        format => "json"
        mapping => ["EventId","%{EventId}","EventSeverity","High","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
      }
    }
  }
  if [type] == "carbon" {
    if "loglevelinfo" in [tags] {
      stdout { codec => rubydebug }
      #*******************************************************************
      # Sending Info Messages to API
      #*******************************************************************
      http {
        url => "https://localhost:8000/logs"
        headers => {
          "Accept" => "application/json"
        }
        connect_timeout => 60
        socket_timeout => 60
        http_method => "post"
        format => "json"
        mapping => ["EventId","%{EventId}","EventSeverity","Low","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
      }
    }
  }
  if [type] == "carbon" {
    if "loglevelwarn" in [tags] {
      stdout { codec => rubydebug }
      #*******************************************************************
      # Sending Warn Messages to API
      #*******************************************************************
      http {
        url => "https://localhost:8000/logs"
        headers => {
          "Accept" => "application/json"
        }
        connect_timeout => 60
        socket_timeout => 60
        http_method => "post"
        format => "json"
        mapping => ["EventId","%{EventId}","EventSeverity","Medium","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
      }
    }
  }
}
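Nothing in this configuration obviously emits each event twice, so here are two hedged things to check. First, if Logstash is started against a config directory (the common /etc/logstash/conf.d layout), it concatenates every file found there into one pipeline; a leftover copy of this config, even a .bak file, would duplicate every output. Second, a randomly generated EventId hides duplicates from the receiving API; replacing the uuid filter with a content-based fingerprint makes a re-sent event carry the same id, so the API can detect duplicates. A sketch using the fingerprint filter (assuming a reasonably recent logstash-filter-fingerprint):

filter {
  # Derive EventId from the raw message instead of a random UUID, so the
  # same log line always produces the same id and duplicates become visible.
  fingerprint {
    source => "message"
    target => "EventId"
    method => "SHA256"
  }
}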

logstash json post output

I am currently trying to POST from JavaScript to Logstash using a tcp input.
JavaScript Post
xhr = new XMLHttpRequest();
var url = "http://localhost:5043";
xhr.open("POST", url, true);
xhr.setRequestHeader("Content-type", "application/json");
var data = JSON.stringify({"test" : hello});
xhr.send(data);
Logstash config file
input {
  tcp {
    port => 5043
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}
Output in console
{
    "message" => "OPTIONS / HTTP/1.1\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.611Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Host: localhost:5043\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.620Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Connection: keep-alive\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.621Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Access-Control-Request-Method: POST\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.622Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Origin: http://atgdev11\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.623Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.626Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Access-Control-Request-Headers: content-type\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.634Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Accept: */*\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.651Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Referer: http://test/Welcome.jsp\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.653Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Accept-Encoding: gzip, deflate, sdch, br\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.719Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
{
    "message" => "Accept-Language: en-US,en;q=0.8\r",
    "@version" => "1",
    "@timestamp" => "2016-12-15T09:58:54.720Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 55867
}
I can't seem to see my JSON data {"test" : hello} passing into Logstash. Could there be something wrong with my logstash.config file? Please help.
Actually, there is a JavaScript error in the JavaScript Post section. Replace
var data = JSON.stringify({"test" : hello});
with
var data = JSON.stringify({"test" : "hello"});
i.e. the double quotes around hello were missing. Note also that the working setup below uses the http input instead of tcp: a plain tcp input does not parse HTTP, which is why the request headers showed up line by line as separate events in your output. These changes will give you the required result.
I have tested it in the following way.
JavaScript Post
<!DOCTYPE html>
<html>
<title>Web Page Design</title>
<script>
  function sayHello() {
    var xhr = new XMLHttpRequest();
    var url = "http://localhost:5043";
    xhr.open("POST", url, true);
    xhr.setRequestHeader("Content-type", "application/json");
    var data = JSON.stringify({"test" : "hello"});
    xhr.send(data);
  }
  sayHello();
</script>
<body>
</body>
</html>
Logstash config file
input {
  http {
    port => 5043
    response_headers => {
      "Access-Control-Allow-Origin" => "*"
      "Content-Type" => "text/plain"
      "Access-Control-Allow-Headers" => "Origin, X-Requested-With, Content-Type, Accept"
    }
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}
Output in console
{
    "host" => "0:0:0:0:0:0:0:1",
    "@timestamp" => 2018-10-08T11:01:34.395Z,
    "headers" => {
        "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36",
        "content_length" => "17",
        "request_path" => "/",
        "request_method" => "POST",
        "http_origin" => "null",
        "content_type" => "application/json",
        "http_accept_encoding" => "gzip, deflate, br",
        "http_host" => "localhost:5043",
        "request_uri" => "/",
        "http_accept_language" => "en-US,en;q=0.9",
        "http_accept" => "*/*",
        "http_connection" => "keep-alive",
        "http_version" => "HTTP/1.1"
    },
    "test" => "hello",    # <-- here is your input data, in JSON format
    "@version" => "1"
}
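For a quick check without the browser, the same payload can be posted directly from the command line (hypothetical example; adjust host and port to your setup):

curl -H "Content-Type: application/json" -d '{"test" : "hello"}' http://localhost:5043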

Logstash: TestResult comes out as an array

The generated results of running the config below show the TestResult section as an array. I am trying to get rid of that array so the data is in the correct shape for sending to Elasticsearch.
I have the following XML file:
<tem:SubmitTestResult xmlns:tem="http://www.example.com" xmlns:acs="http://www.example.com" xmlns:acs1="http://www.example.com">
  <tem:LabId>123</tem:LabId>
  <tem:userId>123</tem:userId>
  <tem:TestResult>
    <acs:CreatedBy>123</acs:CreatedBy>
    <acs:CreatedDate>123</acs:CreatedDate>
    <acs:LastUpdatedBy>123</acs:LastUpdatedBy>
    <acs:LastUpdatedDate>123</acs:LastUpdatedDate>
    <acs1:Capacity95FHigh>123</acs1:Capacity95FHigh>
    <acs1:Capacity95FHigh_AHRI>123</acs1:Capacity95FHigh_AHRI>
    <acs1:CondensateDisposal_AHRI>123</acs1:CondensateDisposal_AHRI>
    <acs1:DegradationCoeffCool>123</acs1:DegradationCoeffCool>
  </tem:TestResult>
</tem:SubmitTestResult>
And I am using this config:
input {
  file {
    path => "/var/log/logstash/test3.xml"
  }
}
filter {
  multiline {
    pattern => "<tem:SubmitTestResult>"
    negate => "true"
    what => "previous"
  }
  if "multiline" in [tags] {
    mutate {
      gsub => ["message", "\n", ""]
    }
    mutate {
      replace => ["message", '<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>%{message}']
    }
    xml {
      source => "message"
      target => "SubmitTestResult"
    }
    mutate {
      remove_field => ["message", "@version", "host", "@timestamp", "path", "tags", "type"]
      remove_field => ["[SubmitTestResult][xmlns:tem]", "[SubmitTestResult][xmlns:acs]", "[SubmitTestResult][xmlns:acs1]"]
    }
    mutate {
      replace => [ "[SubmitTestResult][LabId]", "%{[SubmitTestResult][LabId]}" ]
      replace => [ "[SubmitTestResult][userId]", "%{[SubmitTestResult][userId]}" ]
    }
    mutate {
      replace => [ "[SubmitTestResult][TestResult][0][CreatedBy]", "%{[SubmitTestResult][TestResult][0][CreatedBy]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][CreatedDate]", "%{[SubmitTestResult][TestResult][0][CreatedDate]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedBy]", "%{[SubmitTestResult][TestResult][0][LastUpdatedBy]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedDate]", "%{[SubmitTestResult][TestResult][0][LastUpdatedDate]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]", "%{[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][DegradationCoeffCool]", "%{[SubmitTestResult][TestResult][0][DegradationCoeffCool]}" ]
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
The result is:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => [
[0] {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
]
}
As you can see, TestResult comes out as an array (note the "[0]"). Is there some config change I can make so that it doesn't come out as an array? I want to send this to Elasticsearch and want the data correct.
I figured this out. After the last mutate block, I added one more mutate block. All I had to do was rename the field and that did the trick.
mutate {
  rename => { "[SubmitTestResult][TestResult][0]" => "[SubmitTestResult][TestResult]" }
}
The result now looks proper:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
}
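An alternative worth knowing about (hedged: check the option against your installed plugin version): the xml filter has a force_array setting, true by default, which is what wraps single child elements in arrays. Disabling it avoids the rename step entirely:

xml {
  source => "message"
  target => "SubmitTestResult"
  # With force_array disabled, a single <TestResult> element is parsed
  # as an object instead of a one-element array.
  force_array => false
}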
