I am using the ELK stack with Filebeat to handle my access log. One day I noticed that Kibana was missing a log entry.
When I grep the Filebeat log, I can find the missing entry:
2017/03/01 10:19:20.096711 client.go:184: DBG Publish: {
  "@timestamp": "2017-03-01T10:19:16.327Z",
  "beat": {
    "hostname": "kvm980156.jx.diditaxi.com",
    "name": "kvm980156.jx.diditaxi.com",
    "version": "5.0.0"
  },
  "input_type": "log",
  "message": "2017-03-01 18:19:11.699|10.94.104.169|17714317657896955-151|1|wangziyi|Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36|POST|/api/v1/answer/|com.didi.km.api.controller.api.v1.question.AnswerController#post[2 args]|{\"questionId\":[\"145\"],\"content\":[\"\u003cp\u003e123123123123123\u003c/p\u003e\"]}|200|220",
  "offset": 1723505,
  "source": "/home/km/didi-km-api/logs/km-access.2017-03-01.log",
  "type": "log"
}
When I grep the Logstash log, I can find it there too:
{
  "@timestamp" => 2017-03-01T10:19:16.327Z,
  "offset" => 1723505,
  "@version" => "1",
  "input_type" => "log",
  "beat" => {
    "hostname" => "kvm980156.jx.diditaxi.com",
    "name" => "kvm980156.jx.diditaxi.com",
    "version" => "5.0.0"
  },
  "host" => "kvm980156.jx.diditaxi.com",
  "source" => "/home/km/didi-km-api/logs/km-access.2017-03-01.log",
  "message" => "2017-03-01 18:19:11.699|10.94.104.169|17714317657896955-151|1|wangziyi|Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36|POST|/api/v1/answer/|com.didi.km.api.controller.api.v1.question.AnswerController#post[2 args]|{\"questionId\":[\"145\"],\"content\":[\"<p>123123123123123</p>\"]}|200|220",
  "type" => "log",
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}
BUT there is a difference between this entry and the others: this one was not split into fields as my config says it should be, while the others were.
Here is the next entry after the missing one:
{
  "controllerMethod" => "com.didi.km.api.controller.api.v1.question.AnswerController#answersOrderByHot[2 args]",
  "offset" => 1723849,
  "method" => "GET",
  "input_type" => "log",
  "source" => "/home/km/didi-km-api/logs/km-access.2017-03-01.log",
  "message" => "2017-03-01 18:19:11.855|10.94.104.169|17714317657896955-152|1|wangziyi|Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36|GET|/api/v1/answer/145|com.didi.km.api.controller.api.v1.question.AnswerController#answersOrderByHot[2 args]|{\"order\":[\"hot\"],\"pager\":[\"1,100\"]}|200|60",
  "type" => "log",
  "ua" => "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
  "uri" => "/api/v1/answer/145",
  "tags" => [
    [0] "beats_input_codec_plain_applied"
  ],
  "uid" => 1,
  "@timestamp" => 2017-03-01T10:19:11.855Z,
  "param" => "{\"order\":[\"hot\"],\"pager\":[\"1,100\"]}",
  "costTime" => 60,
  "requestID" => "17714317657896955-152",
  "host-ip" => "10.94.104.169",
  "@version" => "1",
  "beat" => {
    "hostname" => "kvm980156.jx.diditaxi.com",
    "name" => "kvm980156.jx.diditaxi.com",
    "version" => "5.0.0"
  },
  "host" => "kvm980156.jx.diditaxi.com",
  "time" => "2017-03-01 18:19:11.855",
  "username" => "wangziyi",
  "statusCode" => 200
}
And this is my Logstash config, which uses grok to split the log into fields:
input {
  beats {
    port => "5043"
  }
}
filter {
  # TIME||HOST-IP||REQUEST-ID||UID||USERNAME||METHOD||URI||CONTROLLER-METHOD||PARAMS-MAP
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:time}\|%{IP:host-ip}\|(?<requestID>\d+-\d+)\|%{INT:uid:int}\|%{WORD:username}\|(?<ua>(\w|\/|\.|\s|\(|;|\)|,)+)\|%{WORD:method}\|(?<uri>(\w|\/)+)\|(?<controllerMethod>(\w|\d|\s|\.|#|\[|\])+)\|(?<param>(\w|{|}|\"|\:|\[|\]|\,)+)\|%{NUMBER:statusCode:int}\|%{NUMBER:costTime:int}"
    }
  }
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss.SSS"]
    target => "@timestamp"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => [ "10.94.66.193:9200" ]
    index => "km-access-%{+YYYY.MM.dd}"
  }
}
Because of this bug, I can't count some of the log entries in Kibana.
Here is my original log:
2017-03-01 18:19:11.699|10.94.104.169|17714317657896955-151|1|wangziyi|Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36|POST|/api/v1/answer/|com.didi.km.api.controller.api.v1.question.AnswerController#post[2 args]|{"questionId":["145"],"content":["<p>123123123123123</p>"]}|200|220
From what I can see, you're only trying to extract the timestamp part of your log line and match on it. If that's the case, what if you keep your grok match this simple instead of making it more complicated:
grok {
  match => {
    "message" => "%{TIMESTAMP_ISO8601:time}%{GREEDYDATA}"
  }
}
date {
  match => ["time", "yyyy-MM-dd HH:mm:ss.SSS"]
  target => "@timestamp"
}
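For what it's worth, the reason the original pattern fails on this particular line seems to be the param group: the character class (\w|{|}|\"|\:|\[|\]|\,)+ does not allow <, > or /, and this entry's param field contains HTML (<p>123123123123123</p>). If you want to keep all the extracted fields, a sketch that matches every free-form field as "anything except the pipe delimiter" (this assumes the field values never contain a literal |) could look like:
grok {
  match => {
    "message" => "%{TIMESTAMP_ISO8601:time}\|%{IP:host-ip}\|(?<requestID>\d+-\d+)\|%{INT:uid:int}\|%{WORD:username}\|(?<ua>[^|]+)\|%{WORD:method}\|(?<uri>[^|]+)\|(?<controllerMethod>[^|]+)\|(?<param>[^|]+)\|%{NUMBER:statusCode:int}\|%{NUMBER:costTime:int}"
  }
}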
Related
I have log messages like the ones below:
2021-03-26 11:49:25.575: 2021-03-26 11:49:25.575 [INFO] 10.0.3.12 - "POST https://api.kr-seo.assistant.watson.cloud.ibm.com/instances/a33da834-a7a7-48c2-9bf6-d3207849ad71/v1/workspaces/c6e3035b-411a-468d-adac-1ae608f7bf68/message?version=2018-07-10" 200 462 ms
2021-03-26 11:49:26.514: 2021-03-26 11:49:26.514 [INFO] 10.0.3.12 + "POST http://test-bff.lotteon.com/order/v1/mylotte/getOrderList"
I want to transform them with Logstash into something like:
"timestamp" : "2021-03-26 11:49:26.514",
"logLevel" : "INFO",
"IP" : "10.0.3.12",
"inout" : "-",
"Method" : "POST",
"url" : "https://api.kr-seo.assistant.watson.cloud.ibm.com/instances/a33da834-a7a7-48c2-9bf6-d3207849ad71/v1/workspaces/c6e3035b-411a-468d-adac-1ae608f7bf68/message?version=2018-07-10",
"status" : "200",
"duration" : "462 ms"
If the inout field is '+', then the status and duration fields should be null ('').
How can I write the Logstash grok filter for this? (grok, mutate, or any other filter is fine, etc.)
Help me..!
filter {
  grok { match => [ "message", "%{GREEDYDATA:predata} (?<inout>[-+]) \"%{GREEDYDATA:postdata}\"" ] }
  if [inout] == "+" {
    grok { match => [ "message", "%{DATESTAMP:timestamp}: %{GREEDYDATA:data} \[%{LOGLEVEL:loglevel}\] %{IP:IP} (?<inout>[-+]) \"%{WORD:method} %{URI:url}\"" ] }
  }
  else {
    grok { match => [ "message", "%{DATESTAMP:timestamp}: %{GREEDYDATA:data} \[%{LOGLEVEL:loglevel}\] %{IP:IP} (?<inout>[-+]) \"%{WORD:method} %{URI:url}\" %{POSINT:statuscode} %{POSINT:duration}" ] }
  }
}
Now, you can remove the unnecessary fields:
filter {
  mutate {
    remove_field => [
      "message",
      "predata",
      "postdata",
      "DATE_US",
      "IPV6",
      "USER",
      "USERNAME",
      "URIHOST",
      "IPORHOST",
      "HOSTNAME",
      "URIPATHPARAM",
      "port",
      "URIPATH",
      "URIPARAM"
    ]
    remove_tag => [
      "multiline",
      "_grokparsefailure"
    ]
  }
}
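Alternatively, a single grok with an optional trailing group can cover both cases in one pattern. This is only a sketch (it assumes the duration is always followed by "ms", as in your samples); when inout is '+', statuscode and duration will simply be absent rather than empty strings:
filter {
  grok {
    match => [ "message", "%{DATESTAMP:timestamp}: %{GREEDYDATA:data} \[%{LOGLEVEL:loglevel}\] %{IP:IP} (?<inout>[-+]) \"%{WORD:method} %{URI:url}\"( %{POSINT:statuscode} %{POSINT:duration} ms)?" ]
  }
}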
Trailing slash issue while using grok filters in Logstash
The messages should be parsed with the "repo" value as "test" and "test-group" respectively, but the third message gets a grok parse error because it is missing the trailing slash, so the grok filter fails to parse "resource_path" for it. I wanted to stop the "api" segment from being parsed as a repo, which is why I had to implement a regex for it.
I wanted to know if there is any workaround so that messages that don't end with a trailing slash still get parsed and don't throw errors.
Test messages used:
20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test/Lighter-test-group|HTTP/1.1|200|452
20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group/|HTTP/1.1|200|452
20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group|HTTP/1.1|200|452
Grok filter:
filter {
  grok {
    break_on_match => false
    match => { "message" => "%{DATA:timestamp_local}\|%{NUMBER:duration}\|%{WORD:requesttype}\|%{IP:clientip}\|%{DATA:username}\|%{WORD:method}\|%{DATA:resource}\|%{DATA:protocol}\|%{NUMBER:statuscode}\|%{NUMBER:bytes}" }
  }
  grok {
    break_on_match => false
    match => { "resource" => "^(\/)+[^\/]+/%{DATA:repo}/%{GREEDYDATA:resource_path}" }
  }
}
Expected result:
{
  "@timestamp" => 2019-08-20T19:09:48.008Z,
  "path" => "/Users/hack/test-status.log",
  "timestamp_local" => "20190815175019",
  "username" => "anonymous",
  "method" => "POST",
  "repo" => "test-group",
  "bytes" => "452",
  "requesttype" => "REQUEST",
  "protocol" => "HTTP/1.1",
  "duration" => "9599",
  "clientip" => "14.56.55.120",
  "resource" => "/api/test-group/",
  "statuscode" => "200",
  "message" => "20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group/|HTTP/1.1|200|452",
  "host" => "empty",
  "@version" => "1"
}
Actual output:
{
  "@timestamp" => 2019-08-20T19:09:48.009Z,
  "path" => "/Users//hack/test-status.log",
  "timestamp_local" => "20190815175019",
  "username" => "anonymous",
  "method" => "POST",
  "bytes" => "452",
  "requesttype" => "REQUEST",
  "protocol" => "HTTP/1.1",
  "duration" => "9599",
  "clientip" => "14.56.55.120",
  "resource" => "/api/test-group",
  "statuscode" => "200",
  "message" => "20190815175019|9599|REQUEST|14.56.55.120|anonymous|POST|/api/test-group|HTTP/1.1|200|452",
  "host" => "empty",
  "@version" => "1",
  "tags" => [
    [0] "_grokparsefailure"
  ]
}
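One possible workaround (just a sketch, not a verified fix): make the trailing part of the second pattern optional and anchor the end, so that a resource without a trailing slash still yields a repo instead of a _grokparsefailure:
grok {
  match => { "resource" => "^(\/)+[^\/]+\/%{DATA:repo}(\/%{GREEDYDATA:resource_path})?$" }
}
With this, "/api/test-group" parses with repo => "test-group" and no resource_path, while the other two test messages keep their current behaviour.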
I am parsing logs from a file on my server and sending only info, warning, and error level logs to my API, but the problem is that I am receiving each log twice. In the output I map the parsed log values onto my JSON fields and send that JSON to my API, but I receive that JSON mapping twice.
When I analyze my Logstash log file, each log entry appears only once in the file.
{
  "log_EventMessage" => "Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read timed ",
  "message" => "TID: [-1234] [] [2017-08-11 12:03:11,545] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read time",
  "type" => "carbon",
  "TimeStamp" => "2017-08-11T12:03:11.545",
  "tags" => [
    [0] "grokked",
    [1] "loglevelinfo",
    [2] "_grokparsefailure"
  ],
  "log_EventTitle" => "org.apache.axis2.transport.http.HTTPSender",
  "path" => "/home/waqas/Documents/repository/logs/carbon.log",
  "@timestamp" => 2017-08-11T07:03:13.668Z,
  "@version" => "1",
  "host" => "ubuntu",
  "log_SourceSystemId" => "-1234",
  "EventId" => "b81a054e-babb-426c-b0a0-268494d14a0e",
  "log_EventType" => "INFO"
}
Following is my configuration.
I need help; I am unable to figure out why this is happening.
input {
  file {
    path => "LOG_FILE_PATH"
    type => "carbon"
    start_position => "end"
    codec => multiline {
      pattern => "(^\s*at .+)|^(?!TID).*$"
      negate => false
      what => "previous"
      auto_flush_interval => 1
    }
  }
}
filter {
  #***********************************************************
  # Grok Pattern to parse Single Line Log Entries
  #**********************************************************
  if [type] == "carbon" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_EventType}%{SPACE}{%{JAVACLASS:log_EventTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_EventMessage}" ]
      add_tag => [ "grokked" ]
    }
    mutate {
      gsub => [
        "TimeStamp", "\s", "T",
        "TimeStamp", ",", "."
      ]
    }
    if "grokked" in [tags] {
      grok {
        match => ["log_EventType", "INFO"]
        add_tag => [ "loglevelinfo" ]
      }
      grok {
        match => ["log_EventType", "ERROR"]
        add_tag => [ "loglevelerror" ]
      }
      grok {
        match => ["log_EventType", "WARN"]
        add_tag => [ "loglevelwarn" ]
      }
    }
    #*****************************************************
    # Grok Pattern in Case of Failure
    #*****************************************************
    if !( "_grokparsefailure" in [tags] ) {
      grok {
        match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
        add_tag => [ "grokked" ]
      }
      date {
        match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
        target => "TimeStamp"
        timezone => "UTC"
      }
    }
  }
  #*******************************************************************
  # Grok Pattern to handle MultiLines Exceptions and StackTraces
  #*******************************************************************
  if ( "multiline" in [tags] ) {
    grok {
      match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
      add_tag => [ "multiline" ]
      tag_on_failure => [ "multiline" ]
    }
    date {
      match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
      target => "TimeStamp"
    }
  }
}
filter {
  uuid {
    target => "EventId"
  }
}
output {
  if [type] == "carbon" {
    if "loglevelerror" in [tags] {
      stdout { codec => rubydebug }
      #*******************************************************************
      # Sending Error Messages to API
      #*******************************************************************
      http {
        url => "https://localhost:8000/logs"
        headers => {
          "Accept" => "application/json"
        }
        connect_timeout => 60
        socket_timeout => 60
        http_method => "post"
        format => "json"
        mapping => ["EventId","%{EventId}","EventSeverity","High","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
      }
    }
  }
  if [type] == "carbon" {
    if "loglevelinfo" in [tags] {
      stdout { codec => rubydebug }
      #*******************************************************************
      # Sending Info Messages to API
      #*******************************************************************
      http {
        url => "https://localhost:8000/logs"
        headers => {
          "Accept" => "application/json"
        }
        connect_timeout => 60
        socket_timeout => 60
        http_method => "post"
        format => "json"
        mapping => ["EventId","%{EventId}","EventSeverity","Low","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
      }
    }
  }
  if [type] == "carbon" {
    if "loglevelwarn" in [tags] {
      stdout { codec => rubydebug }
      #*******************************************************************
      # Sending Warn Messages to API
      #*******************************************************************
      http {
        url => "https://localhost:8000/logs"
        headers => {
          "Accept" => "application/json"
        }
        connect_timeout => 60
        socket_timeout => 60
        http_method => "post"
        format => "json"
        mapping => ["EventId","%{EventId}","EventSeverity","Medium","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
      }
    }
  }
}
I am currently trying to do a JavaScript POST to Logstash using a tcp input.
JavaScript Post
xhr = new XMLHttpRequest();
var url = "http://localhost:5043";
xhr.open("POST", url, true);
xhr.setRequestHeader("Content-type", "application/json");
var data = JSON.stringify({"test" : hello});
xhr.send(data);
Logstash config file
input {
  tcp {
    port => 5043
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}
Output in console
{
  "message" => "OPTIONS / HTTP/1.1\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.611Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Host: localhost:5043\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.620Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Connection: keep-alive\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.621Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Access-Control-Request-Method: POST\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.622Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Origin: http://atgdev11\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.623Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.626Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Access-Control-Request-Headers: content-type\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.634Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Accept: */*\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.651Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Referer: http://test/Welcome.jsp\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.653Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Accept-Encoding: gzip, deflate, sdch, br\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.719Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
{
  "message" => "Accept-Language: en-US,en;q=0.8\r",
  "@version" => "1",
  "@timestamp" => "2016-12-15T09:58:54.720Z",
  "host" => "0:0:0:0:0:0:0:1",
  "port" => 55867,
}
I can't seem to see my JSON data {"test" : hello} passing into Logstash. Could there be something wrong with my logstash.config file? Please help.
Actually, there is a JavaScript error in the following line of the JavaScript Post section:
var data = JSON.stringify({"test" : hello});
Replace it with:
var data = JSON.stringify({"test" : "hello"});
i.e. the double quotes around hello were missing.
That change, together with using Logstash's http input instead of the tcp input (the tcp input does not speak HTTP, which is why the raw request headers show up as separate events above), will give you the required result.
I have tested it in the following way.
JavaScript Post
<!DOCTYPE html>
<html>
<title>Web Page Design</title>
<script>
  function sayHello() {
    var xhr = new XMLHttpRequest();
    var url = "http://localhost:5043";
    xhr.open("POST", url, true);
    xhr.setRequestHeader("Content-type", "application/json");
    var data = JSON.stringify({"test" : "hello"});
    xhr.send(data);
  }
  sayHello();
</script>
<body>
</body>
</html>
Logstash config file
input {
  http {
    port => 5043
    response_headers => {
      "Access-Control-Allow-Origin" => "*"
      "Content-Type" => "text/plain"
      "Access-Control-Allow-Headers" => "Origin, X-Requested-With, Content-Type, Accept"
    }
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}
Output in console
{
  "host" => "0:0:0:0:0:0:0:1",
  "@timestamp" => 2018-10-08T11:01:34.395Z,
  "headers" => {
    "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36",
    "content_length" => "17",
    "request_path" => "/",
    "request_method" => "POST",
    "http_origin" => "null",
    "content_type" => "application/json",
    "http_accept_encoding" => "gzip, deflate, br",
    "http_host" => "localhost:5043",
    "request_uri" => "/",
    "http_accept_language" => "en-US,en;q=0.9",
    "http_accept" => "*/*",
    "http_connection" => "keep-alive",
    "http_version" => "HTTP/1.1"
  },
  "test" => "hello",    #### here is your input data in JSON format ####
  "@version" => "1"
}
The generated results of running the config below show the TestResult section as an array. I am trying to get rid of that array before sending the data to Elasticsearch.
I have the following XML file:
<tem:SubmitTestResult xmlns:tem="http://www.example.com" xmlns:acs="http://www.example.com" xmlns:acs1="http://www.example.com">
  <tem:LabId>123</tem:LabId>
  <tem:userId>123</tem:userId>
  <tem:TestResult>
    <acs:CreatedBy>123</acs:CreatedBy>
    <acs:CreatedDate>123</acs:CreatedDate>
    <acs:LastUpdatedBy>123</acs:LastUpdatedBy>
    <acs:LastUpdatedDate>123</acs:LastUpdatedDate>
    <acs1:Capacity95FHigh>123</acs1:Capacity95FHigh>
    <acs1:Capacity95FHigh_AHRI>123</acs1:Capacity95FHigh_AHRI>
    <acs1:CondensateDisposal_AHRI>123</acs1:CondensateDisposal_AHRI>
    <acs1:DegradationCoeffCool>123</acs1:DegradationCoeffCool>
  </tem:TestResult>
</tem:SubmitTestResult>
And I am using this config:
input {
  file {
    path => "/var/log/logstash/test3.xml"
  }
}
filter {
  multiline {
    pattern => "<tem:SubmitTestResult>"
    negate => "true"
    what => "previous"
  }
  if "multiline" in [tags] {
    mutate {
      gsub => ["message", "\n", ""]
    }
    mutate {
      replace => ["message", '<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>%{message}']
    }
    xml {
      source => "message"
      target => "SubmitTestResult"
    }
    mutate {
      remove_field => ["message", "@version", "host", "@timestamp", "path", "tags", "type"]
      remove_field => ["[SubmitTestResult][xmlns:tem]","[SubmitTestResult][xmlns:acs]","[SubmitTestResult][xmlns:acs1]"]
    }
    mutate {
      replace => [ "[SubmitTestResult][LabId]", "%{[SubmitTestResult][LabId]}" ]
      replace => [ "[SubmitTestResult][userId]", "%{[SubmitTestResult][userId]}" ]
    }
    mutate {
      replace => [ "[SubmitTestResult][TestResult][0][CreatedBy]", "%{[SubmitTestResult][TestResult][0][CreatedBy]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][CreatedDate]", "%{[SubmitTestResult][TestResult][0][CreatedDate]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedBy]", "%{[SubmitTestResult][TestResult][0][LastUpdatedBy]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedDate]", "%{[SubmitTestResult][TestResult][0][LastUpdatedDate]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]", "%{[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]}" ]
      replace => [ "[SubmitTestResult][TestResult][0][DegradationCoeffCool]", "%{[SubmitTestResult][TestResult][0][DegradationCoeffCool]}" ]
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
The result is:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => [
[0] {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
]
}
As you can see, TestResult is wrapped in a "[0]" array entry. Is there some config change I can make so that it doesn't come out as an array? I want to send this to Elasticsearch and want the data to be correct.
I figured this out. After the last mutate block, I added one more mutate block. All I had to do was rename the field and that did the trick.
mutate {
  rename => { "[SubmitTestResult][TestResult][0]" => "[SubmitTestResult][TestResult]" }
}
The result now looks proper:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
}
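As an alternative (a sketch, assuming the version of the logstash-filter-xml plugin you are running supports the force_array option), you can tell the xml filter not to wrap single elements in arrays in the first place, which makes the rename step unnecessary:
xml {
  source => "message"
  target => "SubmitTestResult"
  force_array => false
}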