Move JSON fields to root - Logstash [duplicate]

I have logstash input that looks like this
{
  "@timestamp": "2016-12-20T18:55:11.699Z",
  "id": 1234,
  "detail": {
    "foo": 1,
    "bar": "two"
  }
}
I would like to merge the content of "detail" with the root object so that the final event looks like this:
{
  "@timestamp": "2016-12-20T18:55:11.699Z",
  "id": 1234,
  "foo": 1,
  "bar": "two"
}
Is there a way to accomplish this without writing my own filter plugin?

You can do this with a ruby filter.
filter {
  ruby {
    code => "
      event['detail'].each { |k, v|
        event[k] = v
      }
      event.remove('detail')
    "
  }
}
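Note that the code above uses the pre-5.0 direct-hash event syntax. Since Logstash 5.0 the event must be accessed through the Event API (event.get / event.set), so a version for current releases would look roughly like this:
filter {
  ruby {
    code => "
      # Logstash >= 5.0: use the Event API (get/set) instead of hash access
      event.get('detail').each { |k, v|
        event.set(k, v)
      }
      event.remove('detail')
    "
  }
}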

There is a simple way to do that using the json_encode plugin (not included by default).
The json filter adds fields to the root of the event when no target is set; it's one of the very few filters that can add things to the root. The idea is to re-encode the already-parsed detail hash back into a JSON string, then let the json filter parse it into the top level:
filter {
  # Re-serialize the parsed "detail" hash back into a JSON string, in place
  json_encode {
    source => "detail"
    target => "detail"
  }
  # With no target set, the json filter writes the parsed fields to the event root
  json {
    source => "detail"
    remove_field => [ "detail" ]
  }
}

Related

Logstash mutate copy field not available in filter scope?

I'd like to access a field that was copied in my filter block; however, it appears the value isn't set at that point, or I can't access it.
When the same conditional logic is in my output block it works as expected.
Here is a sample of the "json" field after the json filter block. The original input contains a "message" field that is parsed correctly, as shown below.
{
  "json": {
    "groups": [
      "vdos.all.hosts.virtualmachine",
      "vdos.all.compute.all"
    ],
    "itemid": 1632807,
    "name": "Memory Guest Usage Percentage[\"X001\"]",
    "clock": 1642625307,
    "ns": 723739588,
    "value": 4.992676,
    "type": 0
  }
}
Logstash config
filter {
  json {
    source => "message"
    target => "json"
  }
  mutate {
    copy => { "[json][groups]" => "host_groups" }
  }
  if "vdos.all.compute.all" not in "%{[host_groups]}" {
    drop {}
  }
}
I've tried
if "vdos.all.compute.all" not in "[host_groups]" {
  drop {}
}
as well as trying to access the json field directly:
if "vdos.all.compute.all" not in "[json][groups]" {
  drop {}
}
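A likely fix, based on how Logstash conditionals work: %{...} sprintf references are not evaluated inside conditionals, and quoting the field name turns it into a literal string. Using a bare field reference should work here, since the in operator checks array membership directly:
filter {
  json {
    source => "message"
    target => "json"
  }
  mutate {
    copy => { "[json][groups]" => "host_groups" }
  }
  # Bare field reference: no quotes, no %{...} sprintf
  if "vdos.all.compute.all" not in [host_groups] {
    drop {}
  }
}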

Unable to rename/copy the root field name in logstash

Here is my sample config (Logstash 7.9).
input {
  jdbc { ... }
}
filter {
  json {
    # It's a JSON field from the DB; including only two fields for reference.
    source => "tags_json"
    # Need them as sub-fields like tags.companies, tags.geographies in ES
    add_field => {
      "[tags][companies]" => "%{companies}"
      "[tags][geographies]" => "%{geographies}"
    }
  }
}
output {
  elasticsearch { ... }
}
JSON structure in the DB field tags_json:
{
  "companies": ["ABC", "XYZ"],
  "geographies": [{"Market": "Group Market", "Region": "Group Region", "Country": "my_country"}],
  "xyz": [] ...
}
Logstash prints the root geographies field correctly; this is what I need as a sub-field under tags:
"geographies" => [
  [0] {
    "Market" => "Group Market",
    "Region" => "Group Region"
  }
],
## But as a sub-field under tags, geographies is nil
"tags" => {
  "companies" => [
    [0] "ABC",
    [1] "XYZ"
  ],
  "geographies" => nil
}
I tried the mutate copy below, but it doesn't seem to fix it:
mutate { copy => { "%{geographies}" => "[tags][geographies]" } }
I also tried ruby:
ruby { code => " event.set('[tags][geographies]', event.get('%{geographies}')) " }
Any help please. Thanks.
Resolved it with a ruby filter:
ruby {
  code => 'event.set("[tags][geographies]", event.get("geographies"))'
}
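For what it's worth, the mutate attempt above likely failed only because of the %{...} sprintf syntax; copy expects plain field references, so a sketch like this should be equivalent to the ruby solution:
mutate {
  # Field references, not %{...} sprintf strings
  copy => { "geographies" => "[tags][geographies]" }
}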

Logstash - remove deep field from json file

I have a JSON file that I'm sending to ES through Logstash. I would like to remove one field (a deep field) from the JSON, but ONLY if its value is "null".
Part of the JSON is:
"input": {
"startDate": "2015-05-27",
"numberOfGuests": 1,
"fileName": "null",
"existingSessionId": "XXXXXXXXXXXXX",
**"radius": "null",**
"nextItemReference": "51",
"longitude": -99.12,
"endDate": "2015-05-29",
"thumbnailHeight": 200,
"thumbnailWidth": 300,
"latitude": 19.42,
"numOfRooms": "1"
},
The relevant part of the logstash.conf file is:
if [input.radius] == "null" {
  mutate {
    remove_field => [ "input.radius" ]
  }
}
This is inside the filter of course.
How can I remove this field if the value is null?
Nested fields aren't referenced with [name.subfield] but with [field][subfield]. This should work for you:
if [input][radius] == "null" {
  mutate {
    remove_field => [ "[input][radius]" ]
  }
}
Note that if there is no "input" field, the [input][radius] reference will create an empty "input" dictionary. To avoid that you can do this:
if [input] and [input][radius] == "null" {
  mutate {
    remove_field => [ "[input][radius]" ]
  }
}
See the Logstash documentation for details and more examples.

Logstash: replace field values matching pattern

I'm using Elasticsearch + Logstash + Kibana for Windows event log analysis, and I get the following log:
{
  "_index": "logstash-2015.04.16",
  "_type": "logs",
  "_id": "Ov498b0cTqK8W4_IPzZKbg",
  "_score": null,
  "_source": {
    "EventTime": "2015-04-16 14:12:45",
    "EventType": "AUDIT_FAILURE",
    "EventID": "4656",
    "Message": "A handle to an object was requested.\r\n\r\nSubject:\r\n\tSecurity ID:\t\tS-1-5-21-2832557239-2908104349-351431359-3166\r\n\tAccount Name:\t\ts.tekotin\r\n\tAccount Domain:\t\tIAS\r\n\tLogon ID:\t\t0x88991C8\r\n\r\nObject:\r\n\tObject Server:\t\tSecurity\r\n\tObject Type:\t\tFile\r\n\tObject Name:\t\tC:\\Folders\\Общая (HotSMS)\\Test_folder\\3\r\n\tHandle ID:\t\t0x0\r\n\tResource Attributes:\t-\r\n\r\nProcess Information:\r\n\tProcess ID:\t\t0x4\r\n\tProcess Name:\t\t\r\n\r\nAccess Request Information:\r\n\tTransaction ID:\t\t{00000000-0000-0000-0000-000000000000}\r\n\tAccesses:\t\tReadData (or ListDirectory)\r\n\t\t\t\tReadAttributes\r\n\t\t\t\t\r\n\tAccess Reasons:\t\tReadData (or ListDirectory):\tDenied by\tD:(D;OICI;CCDCLCSWRPWPLOCRSDRC;;;S-1-5-21-2832557239-2908104349-351431359-3166)\r\n\t\t\t\tReadAttributes:\tGranted by ACE on parent folder\tD:(A;OICI;0x1200a9;;;S-1-5-21-2832557239-2908104349-351431359-3166)\r\n\t\t\t\t\r\n\tAccess Mask:\t\t0x81\r\n\tPrivileges Used for Access Check:\t-\r\n\tRestricted SID Count:\t0",
    "ObjectServer": "Security",
    "ObjectName": "C:\\Folders\\Общая (HotSMS)\\Test_folder\\3",
    "HandleId": "0x0",
    "PrivilegeList": "-",
    "RestrictedSidCount": "0",
    "ResourceAttributes": "-",
    "@timestamp": "2015-04-16T11:12:45.802Z"
  },
  "sort": [
    1429182765802,
    1429182765802
  ]
}
I get many log messages with different EventIDs, and when I receive a log entry with EventID 4656, I want to replace the value "4656" with the string "Access Failure". Is there a way to do so?
You can do it when you are loading with logstash -- just do something like this:
filter {
  if [EventID] == "4656" {
    mutate {
      replace => [ "EventID", "Access Failure" ]
    }
  }
}
If you have a lot of values, look at translate{}:
translate {
  dictionary => [
    "4656", "Access Failure",
    "1234", "Another Value"
  ]
  field => "EventID"
  destination => "EventName"
}
I don't think translate{} will let you replace the original field. You could remove it, though, in favor of the new field.
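For example, a sketch of that approach, assuming the standard remove_field common option (available on every filter and applied only when the filter succeeds):
translate {
  dictionary => [
    "4656", "Access Failure",
    "1234", "Another Value"
  ]
  field => "EventID"
  destination => "EventName"
  # Common filter option: drops the original field only if the translation matched
  remove_field => [ "EventID" ]
}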
Use the replace option of the mutate filter. From the docs:
Replace a field with a new value. The new value can include %{foo} strings to help you build a new value from other parts of the event.
Example:
filter {
  if [EventID] == "4656" {
    mutate {
      replace => { "message" => "%{source_host}: My new message" }
    }
  }
}

Logstash: how to add file name as a field?

I'm using Logstash + Elasticsearch + Kibana to have an overview of my Tomcat log files.
For each log entry I need to know the name of the file from which it came. I'd like to add it as a field. Is there a way to do it?
I've googled a little and I've only found this SO question, but the answer is no longer up-to-date.
So far the only solution I see is to specify separate configuration for each possible file name with different "add_field" like so:
input {
  file {
    type => "catalinalog"
    path => [ "/path/to/my/files/catalina**" ]
    add_field => { "server" => "prod1" }
  }
}
But then I need to reconfigure logstash each time there is a new possible file name.
Any better ideas?
I added a grok filter to do just this, parsing the path field that the file input adds to each event. I only wanted the filename, not the full path, but you can change this to suit your needs.
filter {
  grok {
    match => [ "path", "%{GREEDYDATA}/%{GREEDYDATA:filename}\.log" ]
  }
}
In case you would like to combine the message and file name in one event:
filter {
  grok {
    match => {
      "message" => "ERROR (?<function>[\S]*)"
    }
  }
  grok {
    match => {
      "path" => "%{GREEDYDATA}/%{GREEDYDATA:filename}\.log"
    }
  }
}
The result in ElasticSearch (focus on 'filename' and 'function' fields):
"_index": "logstash-2016.08.03",
"_type": "logs",
"_id": "AVZRyEI49-A6kyBCq6Yt",
"_score": 1,
"_source": {
"message": "27/07/16 12:16:18,321 ERROR blaaaaaaaaa.internal.com",
"#version": "1",
"#timestamp": "2016-08-03T19:01:33.083Z",
"path": "/home/admin/mylog.log",
"host": "my-virtual-machine",
"function": "blaaaaaaaaa.internal.com",
"filename": "mylog"
}
