Parsing a multiline bracket-wrapped JSON file in Logstash

I'm trying to parse JSON logs being generated from a system in the form of:
[{"name" : "test1", "state" : "success"},
{"name" : "test2", "state" : "fail"},
{"name" : "test3", "state" : "success"}]
This is a valid JSON file, but I am not able to parse it using the json filter.
My current logstash.config file looks like this:
input {
  file {
    path => "C:/elastic_stack/json_stream/*.json"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  mutate {
    gsub => [
      "message", "\]", "",
      "message", "\[", "",
      "message", "\},", "}"
    ]
  }
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstashindex"
  }
  stdout { codec => rubydebug }
}
I have tried adding a bunch of regex filters with gsub, but since I'm pretty new, I'm not sure that's the cleanest or most efficient parsing solution. Any suggestions?
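For what it's worth, a cleaner route than stripping the brackets with gsub is usually to read the whole array as a single event with a multiline codec, parse it with the json filter, and fan it out with the split filter. A minimal sketch, assuming the standard multiline codec and split filter (the codec pattern and the records field name are my choices, not anything from your setup):
input {
  file {
    path => "C:/elastic_stack/json_stream/*.json"
    start_position => "beginning"
    sincedb_path => "NUL"
    codec => multiline {
      pattern => "^\["            # a new event begins at the opening bracket
      negate => true
      what => "previous"          # every other line is glued onto it
      auto_flush_interval => 2    # emit the pending event once input goes quiet
    }
  }
}
filter {
  json {
    source => "message"
    target => "records"           # the parsed array lands here...
  }
  split {
    field => "records"            # ...and is split into one event per element
  }
}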

Related

Logstash Filter JSON syslog message field is not getting parsed

I cannot parse the incoming syslog by JSON. The message field is not getting parsed. I tried the JSON filter with add_field and also with mutate, but no luck. I used grok to parse specific fields, but the message field has keys and values. How can I parse the message field below into JSON?
conf file
input {
  file {
    path => "/opt/log/sample/*.txt"
    codec => "plain" # { format => "%{message}" }
  }
}
filter {
  # mutate { gsub => [ "message","(\")", "" ] }
  mutate { gsub => [ "message","(\\")", "" ] }
  json {
    source => "message"
  }
}
output {
  file {
    path => "/opt/log/out/out.txt"
    codec => json_lines
  }
  stdout {}
}
GROK
%{TIME:timestamp} %{HOST:host} %{GREEDYDATA:message}
GROK output
"timestamp": [
[
"18:11:58"
]
],
"host": [
[
"myhost.aco.mydomain.net"
]
],
"message": [
[
"{destinationPort:90,exception:-,totalByteUsage:0,sourcePort:160,extension:.com\\\\/,contentTypeHeader:-,callout:0,scheme:http,reportingGroup:0,requestMethod:GET,privateIp:-,sAction:Allowed,sourceIpAddress:10.10.10.10,description:-,categoryName:News,sandBoxDecoded:-,urlLogId:0,responseCode:0,sandboxResult:-,computerName:-,totalByteCount:0,audit:0,host:www.local.com,action:Allowed,useTime:0,upstreamByteUsage:0,uriPath:\\\\/,computerMacAddress:00:00:00:00:00:00,direction:0,myboss:myhost,malware:0,ipAddress:10.10.10.10,userAgent:-,publicIp:-,url:http:\\\\/\\\\/www.local.com\\\\/,logTime:2022-07-12,referrerUrl:-,mde:-,sha256Sum:-,macAddress:00:00:00:00:00:00,filename:-,uriQuery:-,filteringGroupName:Default Catch All,downstreamByteUsage:0,cncFlag:0,location:-,time:18:11:57,username:*10.10.10.10}""
]
]
}
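Note that the payload between the braces is not valid JSON to begin with: the keys and most values are unquoted, so the json filter will always reject it. One alternative is to treat it as delimited key/value text. A rough sketch using the kv filter (the payload and log field names are mine; values that themselves contain colons, such as the MAC address and time fields, will confuse a simple value_split and need extra handling):
filter {
  grok {
    match => { "message" => "%{TIME:timestamp} %{HOSTNAME:host} %{GREEDYDATA:payload}" }
  }
  mutate {
    gsub => [ "payload", "[{}]", "" ]   # drop the surrounding braces
  }
  kv {
    source => "payload"
    field_split => ","                  # pairs are comma-separated
    value_split => ":"                  # keys and values are colon-separated
    target => "log"                     # parsed pairs land under [log]
  }
}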

Can't grok multiline logs

I have logs where each event is:
ExitNode FF33F91CC06B6CC5C3EE804E7D8DBE42CB5707F9
Published 2017-11-05 02:55:09
LastStatus 2017-11-05 04:02:27
ExitAddress 66.42.224.235 2017-11-05 04:06:26
I tried to use multiline:
input {
  file {
    path => "/path/input"
  }
}
filter {
  multiline {
    pattern => "^\b[A-Za-z]{8}\b"
    what => "next"
  }
}
filter {
  multiline {
    pattern => "^\b[A-Za-z]{11}\b"
    what => "previous"
  }
}
output {
  file {
    codec => rubydebug
    path => "/path/output"
  }
}
And I get something like this:
{
  "path" => "/path/input",
  "@timestamp" => 2017-11-05T10:25:34.112Z,
  "@version" => "1",
  "host" => "HOST",
  "message" => "ExitNode FE3CB742E73674F1BC2382723209ECEE44AD4AEC\nPublished 2017-11-04 20:34:55\nLastStatus 2017-11-04 21:03:26\nExitAddress 77.250.227.12 2017-11-04 21:06:45",
  "tags" => [
    [0] "multiline"
  ]
}
And I can't grok this message field, because I don't know how to remove or replace the \n characters; gsub => ["message", "\n", "Line_Break"] doesn't work properly.
Thanks
From @baudsp's comment:
mutate {
  gsub => [ "message", "[\r\n]", "_" ]
}
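Building on that comment: once the line breaks are normalized, the whole event can be grokked in a single pass. A sketch along those lines (using spaces instead of underscores; the field names are my own invention):
filter {
  mutate {
    gsub => [ "message", "[\r\n]+", " " ]   # flatten the multiline event
  }
  grok {
    match => { "message" => "ExitNode %{BASE16NUM:fingerprint} Published %{TIMESTAMP_ISO8601:published} LastStatus %{TIMESTAMP_ISO8601:last_status} ExitAddress %{IP:exit_address} %{TIMESTAMP_ISO8601:last_seen}" }
  }
}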

Logstash issue with json_formater [LogStash::Json::ParserError: Unexpected character ('-' (code 45)): was expecting comma to separate ARRAY entries]

I have an issue with converting a value through Logstash and I can't find a solution for it. It seems to be linked to the date.
Log line:
[2017-08-15 12:30:17] api.INFO: {"sessionId":"a216925---ff5992be7520924ff25992be75209c7","action":"processed","time":1502789417,"type":"bookingProcess","page":"order"} [] []
Logstash configuration
filter {
  if [type] == "api-prod-log" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:module}.%{WORD:level}: (?<log_message>.*) \[\] \[\]" }
      add_field => [ "received_from", "%{host}" ]
    }
    json {
      source => "log_message"
      target => "flightSearchRequest"
      remove_field => ["log_message"]
    }
    date {
      match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
      timezone => "Asia/Jerusalem"
    }
  }
}
Any idea?
Thanks
What version of Logstash are you using?
On Logstash 5.2.2 with the following Logstash config:
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => '\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:module}.%{WORD:level}: (?<log_message>.*) \[\] \[\]' }
  }
  json {
    source => "log_message"
    target => "flightSearchRequest"
    remove_field => ["log_message"]
  }
  date {
    match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Asia/Jerusalem"
  }
}
output {
  stdout { codec => "rubydebug" }
}
I get a perfectly correct result and no errors when I pass your log line as input:
{
  "@timestamp" => 2017-08-15T09:30:17.000Z,
  "flightSearchRequest" => {
    "action" => "processed",
    "sessionId" => "a216925---ff5992be7520924ff25992be75209c7",
    "time" => 1502789417,
    "page" => "order",
    "type" => "bookingProcess"
  },
  "level" => "INFO",
  "module" => "api",
  "@version" => "1",
  "message" => "[2017-08-15 12:30:17] api.INFO: {\"sessionId\":\"a216925---ff5992be7520924ff25992be75209c7\",\"action\":\"processed\",\"time\":1502789417,\"type\":\"bookingProcess\",\"page\":\"order\"} [] []",
  "timestamp" => "2017-08-15 12:30:17"
}
I've removed the check for "type" at the beginning; can you test whether that affects the result?
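One plausible reading of the original error: a JSON parser that is handed the raw line sees the leading "[" as the start of an array, reads 2017 as a number, and then trips over the "-" in the date, which is exactly the "expecting comma to separate ARRAY entries" complaint. In other words, somewhere the unparsed message, not log_message, is reaching a JSON parser, most likely because grok did not match. Guarding the json filter so it only runs when grok succeeded makes that failure mode visible (a sketch):
filter {
  grok {
    match => { "message" => '\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:module}.%{WORD:level}: (?<log_message>.*) \[\] \[\]' }
  }
  if "_grokparsefailure" not in [tags] {
    json {
      source => "log_message"
      target => "flightSearchRequest"
      remove_field => ["log_message"]
    }
  }
}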

Logstash: make a copy of a nested field with mutate.add_field

I wanted to make a copy of a nested field in a Logstash filter but I can't figure out the correct syntax.
Here is what I tried:
Incorrect syntax:
mutate {
  add_field => { "received_from" => %{beat.hostname} }
}
beat.hostname is not replaced.
mutate {
  add_field => { "received_from" => "%{beat.hostname}" }
}
beat.hostname is not replaced.
mutate {
  add_field => { "received_from" => "%{[beat][hostname]}" }
}
beat.hostname is not replaced.
mutate {
  add_field => { "received_from" => "%[beat][hostname]" }
}
No luck. If I use a non-nested field, it works as expected.
The data structure received by Logstash is the following:
{
  "@timestamp" => "2016-08-24T13:01:28.369Z",
  "beat" => {
    "hostname" => "etg-dbs-master-tmp",
    "name" => "etg-dbs-master-tmp"
  },
  "count" => 1,
  "fs" => {
    "device_name" => "/dev/vdb",
    "total" => 5150212096,
    "used" => 99287040,
    "used_p" => 0.02,
    "free" => 5050925056,
    "avail" => 4765712384,
    "files" => 327680,
    "free_files" => 326476,
    "mount_point" => "/opt/ws-etg/datas"
  },
  "type" => "filesystem",
  "@version" => "1",
  "tags" => [
    [0] "topbeat"
  ],
  "received_at" => "2016-08-24T13:01:28.369Z",
  "received_from" => "%[beat][hostname]"
}
EDIT:
Since you didn't show your input message, I worked off your output. In your output, the field you are trying to copy into already exists, which is why you need to use replace. If it does not exist, you do indeed need to use add_field. I updated my answer for both cases.
EDIT 2: I realised that your problem might be accessing the nested value, so I added that as well :)
You are using the mutate filter wrong/backwards.
First mistake:
You want to replace a field, not add one. In the docs, it gives you the "replace" option. See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-replace
Second mistake: you are using the syntax in reverse. It appears that you believe this is true:
"text I want to write" => "Field I want to write it in"
While this is true:
"myDestinationFieldName" => "My Value to be in the field"
With this knowledge, we can now do this:
mutate {
  replace => { "[test][a]" => "%{s}" }
}
or if you want to actually add a NEW NOT EXISTING FIELD:
mutate {
  add_field => { "[test][myNewField]" => "%{s}" }
}
Or add a new existing field with the value of a nested field:
mutate {
  add_field => { "some" => "%{[test][a]}" }
}
Or more details, in my example:
input {
  stdin {}
}
filter {
  json {
    source => "message"
  }
  mutate {
    replace => { "[test][a]" => "%{s}" }
    add_field => { "[test][myNewField]" => "%{s}" }
    add_field => { "some" => "%{[test][a]}" }
  }
}
output {
  stdout { codec => rubydebug }
}
This example takes stdin and outputs to stdout. It uses a json filter to parse the message, and then the mutate filter to replace the nested field. I also add a completely new field in the nested test object.
And finally it creates a new field "some" that has the value of test.a.
So for this message:
{"test" : { "a": "hello"}, "s" : "to_Repalce"}
We want to replace test.a (value: "hello") with s (value: "to_Repalce"), and add a field test.myNewField with the value of s.
On my terminal:
artur@pandaadb:~/dev/logstash$ ./logstash-2.3.2/bin/logstash -f conf2/
Settings: Default pipeline workers: 8
Pipeline main started
{"test" : { "a": "hello"}, "s" : "to_Repalce"}
{
  "message" => "{\"test\" : { \"a\": \"hello\"}, \"s\" : \"to_Repalce\"}",
  "@version" => "1",
  "@timestamp" => "2016-08-24T14:39:52.002Z",
  "host" => "pandaadb",
  "test" => {
    "a" => "to_Repalce",
    "myNewField" => "to_Repalce"
  },
  "s" => "to_Repalce",
  "some" => "to_Repalce"
}
The value has successfully been replaced.
A field "some" with the replaced value has been added.
A new field in the nested test object has been added.
If you use add_field on an existing field, it will convert it into an array and append your value there.
Hope this solves your issue,
Artur
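As an aside, newer releases of the mutate filter also provide a copy option, which turns this kind of duplication into a one-liner (a sketch, assuming a mutate version that supports copy):
mutate {
  copy => { "[beat][hostname]" => "received_from" }   # copies the nested value as-is
}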

Use Logstash with HTML log

I'm new to Logstash and am trying to use it to parse an HTML log file.
I need to output only the log lines, i.e. ignore the preceding JS, CSS and HTML that are also included in the file.
A log line in the file looks like this:
<tr bgcolor="tomato"><td>Jan 28<br>13:52:25.692</td><td>Jan 28<br>13:52:23.950</td><td>qtp114615276-1648 [POST] [call_id:-8009072655119858507]</td><td>REST</td><td>sa</td><td>0.0.0.0</td><td>ERR</td><td>ProjectValidator.validate(36)</td><td>Project does not exist</td></tr>
I have no problem getting all the lines, but I would like an output that contains only the relevant ones, stripped of HTML tags, looking something like this:
{
  "db_timestamp": "2015-01-28 13:52:25.692",
  "server_timestamp": "2015-01-28 13:52:25.950",
  "node": "qtp114615276-1648 [POST] [call_id:-8009072655119858507]",
  "thread": "REST",
  "user": "sa",
  "ip": "0.0.0.0",
  "level": "ERR",
  "method": "ProjectValidator.validate(36)",
  "message": "Project does not exist"
}
My Logstash configuration is:
input {
  file {
    type => "request"
    path => "<some path>/*.log"
    start_position => "beginning"
  }
  file {
    type => "log"
    path => "<some path>/*.html"
    start_position => "beginning"
  }
}
filter {
  if [type] == "log" {
    grok {
      match => [ WHAT SHOULD I PUT HERE??? ]
    }
  }
}
output {
  stdout {}
  if [type] == "request" {
    http {
      http_method => "post"
      url => "http://<some url>"
      mapping => ["type", "request", "host", "%{host}", "timestamp", "%{@timestamp}", "message", "%{message}"]
    }
  }
  if [type] == "log" {
    http {
      http_method => "post"
      url => "http://<some url>"
      mapping => [ ALSO WHAT SHOULD I PUT HERE??? ]
    }
  }
}
Is there a way to do that? So far I haven't found any relevant documentation or samples.
Thanks!
Finally figured out the answer.
Not sure this is the best or most elegant solution, but it works.
I changed the http output format to "message", which enabled me to override and format the whole message as JSON instead of using mapping. I also found out how to name parameters in the grok filter and use them in the output.
This is the new Logstash configuration file:
input {
  file {
    type => "request"
    path => "<some path>/*.log"
    start_position => "beginning"
  }
  file {
    type => "log"
    path => "<some path>/*.html"
    start_position => "beginning"
  }
}
filter {
  if [type] == "log" {
    grok {
      match => { "message" => "<tr bgcolor=.*><td>%{MONTH:db_date}%{SPACE}%{MONTHDAY:db_date}<br>%{TIME:db_date}</td><td>%{MONTH:alm_date}%{SPACE}%{MONTHDAY:alm_date}<br>%{TIME:alm_date}</td><td>%{DATA:thread}</td><td>%{DATA:req_type}</td><td>%{DATA:username}</td><td>%{IP:ip}</td><td>%{DATA:level}</td><td>%{DATA:method}</td><td>%{DATA:err_message}</td></tr>" }
    }
  }
}
output {
  stdout { codec => rubydebug }
  if [type] == "request" {
    http {
      http_method => "post"
      url => "http://<some URL>"
      mapping => ["type", "request", "host", "%{host}", "timestamp", "%{@timestamp}", "message", "%{message}"]
    }
  }
  if [type] == "log" {
    http {
      format => "message"
      content_type => "application/json"
      http_method => "post"
      url => "http://<some URL>"
      message => '{
        "db_date": "%{db_date}",
        "alm_date": "%{alm_date}",
        "thread": "%{thread}",
        "req_type": "%{req_type}",
        "username": "%{username}",
        "ip": "%{ip}",
        "level": "%{level}",
        "method": "%{method}",
        "message": "%{err_message}"
      }'
    }
  }
}
Note the single quotes around the http message block and the double quotes around the parameters inside it.
For anyone parsing HP ALM logs, the following Logstash filter will do the job:
grok {
  break_on_match => true
  match => [ "message", "<tr bgcolor=.*><td>%{MONTH:db_date_mon}%{SPACE}%{MONTHDAY:db_date_day}<br>%{TIME:db_date_time}<\/td><td>%{MONTH:alm_date_mon}%{SPACE}%{MONTHDAY:alm_date_day}<br>%{TIME:alm_date_time}<\/td><td>(?<thread_col1>.*?)<\/td><td>(?<request_type>.*?)<\/td><td>(?<login>.*?)<\/td><td>(?<ip>.*?)<\/td><td>(?<level>.*?)<\/td><td>(?<method>.*?)<\/td><td>(?m:(?<log_message>.*?))</td></tr>" ]
}
mutate {
  add_field => ["db_date", "%{db_date_mon} %{db_date_day}"]
  add_field => ["alm_date", "%{alm_date_mon} %{alm_date_day}"]
  remove_field => [ "db_date_mon", "db_date_day", "alm_date_mon", "alm_date_day" ]
  # the replacement below is a literal newline inside the quotes;
  # classic Logstash configs do not interpret "\n" escapes in strings
  gsub => [
    "log_message", "<br>", "
"
  ]
  gsub => [
    "log_message", "<p>", " "
  ]
}
Tested and working fine with Logstash 2.4.0.
