I have a JSON file that I'm sending to Elasticsearch through Logstash. I would like to remove one field (it's a deeply nested field) in the JSON, but only if its value is null.
Part of the JSON is:
"input": {
"startDate": "2015-05-27",
"numberOfGuests": 1,
"fileName": "null",
"existingSessionId": "XXXXXXXXXXXXX",
**"radius": "null",**
"nextItemReference": "51",
"longitude": -99.12,
"endDate": "2015-05-29",
"thumbnailHeight": 200,
"thumbnailWidth": 300,
"latitude": 19.42,
"numOfRooms": "1"
},
The relevant part of the logstash.conf file is:
if [input.radius] == "null" {
  mutate {
    remove_field => [ "input.radius" ]
  }
}
This is inside the filter block, of course.
How can I remove this field if the value is null?
Nested fields aren't referenced with [name.subfield] but with [field][subfield]. This should work for you:
if [input][radius] == "null" {
  mutate {
    remove_field => [ "[input][radius]" ]
  }
}
Note that if there is no "input" field, the [input][radius] reference will create an empty "input" dictionary. To avoid that you can do this:
if [input] and [input][radius] == "null" {
  mutate {
    remove_field => [ "[input][radius]" ]
  }
}
See the Logstash documentation for details and more examples.
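If several nested keys can carry the string "null", a ruby filter can sweep them all in one pass. A minimal sketch, assuming the Logstash 5+ event API (this generalization is not part of the original answer):

filter {
  ruby {
    code => '
      # Collect keys under [input] whose value is the string "null", then remove them
      input = event.get("input")
      if input
        to_remove = []
        input.each { |k, v| to_remove << k if v == "null" }
        to_remove.each { |k| event.remove("[input][#{k}]") }
      end
    '
  }
}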
I'd like to access a field that was copied in my filter block; however, it appears the value isn't set at that point, or that I can't access it.
When the same conditional logic is in my output block, it works as expected.
Here's a sample of the "json" field after the json filter block. The original input message contains a "message" field that is correctly parsed, as shown below.
{
  "json": {
    "groups": [
      "vdos.all.hosts.virtualmachine",
      "vdos.all.compute.all"
    ],
    "itemid": 1632807,
    "name": "Memory Guest Usage Percentage[\"X001\"]",
    "clock": 1642625307,
    "ns": 723739588,
    "value": 4.992676,
    "type": 0
  }
}
Logstash config
filter {
  json {
    source => "message"
    target => "json"
  }
  mutate {
    copy => { "[json][groups]" => "host_groups" }
  }
  if "vdos.all.compute.all" not in "%{[host_groups]}" {
    drop {}
  }
}
I've tried
if "vdos.all.compute.all" not in "[host_groups]" {
drop {}
}
as well as trying to access the json field directly.
if "vdos.all.compute.all" not in "[json][groups]" {
drop {}
}
This is my input log4j line, being shipped by Filebeat:
2017-07-02 08:46:28,702 INFO com.company.service.EventService - Consumed event: {
"details": {
"A": 10,
"B": "EUR"
},
"eventId": "45YHJAIBpPeExHtskhqRbTDI9oEk2wPl",
"eventArrivalTime": "2017-07-02T08:46:28.700Z"
}
I managed to strip the 2017-07-02 08:46:28,702 INFO part (the remainder is mapped into the field msgbody), and now I'm trying to parse the JSON part of it into fields in Kibana.
I want to index the fields inside the event {..} into Kibana, e.g. eventId and details.
This is what I have done so far, and I have no idea how to extract this JSON:
filter {
  if [type] == "log" {
    grok {
      match => {
        "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{GREEDYDATA:msgbody}"
      }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    }
  }
}
Thanks
msgbody is mapped using GREEDYDATA in your filter, which doesn't match newlines. This means your msgbody will only match com.company.service.EventService - Consumed event: {.
You need to map everything after INFO up to the },  (the end of the details object) into a separate field, which can be matched using:
(?m)%{DATA:msgbody}\},
It will match,
"msgbody": [
[
" com.company.service.EventService - Consumed event: {\n "details": {\n "A": 10,\n "B": "EUR"\n "
]
]
The rest of the data, i.e.,
"eventId": "45YHJAIBpPeExHtskhqRbTDI9oEk2wPl",
"eventArrivalTime": "2017-07-02T08:46:28.700Z"
needs to be captured in its own field so that it can then be parsed with a json filter:
%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel}(?m)%{DATA:msgbody}\},%{GREEDYDATA:json}
This will produce,
"msgbody": [
[
" com.company.service.EventService - Consumed event: {\n "details": {\n "A": 10,\n "B": "EUR"\n "
]
],
"json": [
[
"\n "eventId": "45YHJAIBpPeExHtskhqRbTDI9oEk2wPl",\n "eventArrivalTime": "2017-07-02T08:46:28.700Z"\n}"
]
]
Now we have the JSON content in a new field called json.
The json filter can then be applied to it as follows:
json {
  source => "json"
  target => "parsed_json"
}
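One caveat (my addition, not part of the original answer): as captured above, the json field is missing its opening brace, so the json filter will likely fail to parse it. Restoring the brace first with a mutate should fix that:

mutate {
  # Prepend the opening brace that the grok capture dropped (assumption)
  replace => { "json" => "{%{json}" }
}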
Hope this helps.
I have a Logstash input that looks like this:
{
  "@timestamp": "2016-12-20T18:55:11.699Z",
  "id": 1234,
  "detail": {
    "foo": 1,
    "bar": "two"
  }
}
I would like to merge the content of "detail" with the root object so that the final event looks like this:
{
  "@timestamp": "2016-12-20T18:55:11.699Z",
  "id": 1234,
  "foo": 1,
  "bar": "two"
}
Is there a way to accomplish this without writing my own filter plugin?
You can do this with a ruby filter.
filter {
  ruby {
    # event.get/event.set is the API required since Logstash 5;
    # on older versions this was event['detail'] and event[k] = v.
    code => "
      event.get('detail').each { |k, v|
        event.set(k, v)
      }
      event.remove('detail')
    "
  }
}
There is a simple way to do that using the json_encode plugin (not included by default).
The json filter adds fields to the root of the event when no target is set; it's one of the very few filters that can add things to the root.
filter {
  json_encode {
    source => "detail"
    target => "detail"
  }
  json {
    source => "detail"
    remove_field => [ "detail" ]
  }
}
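Since json_encode isn't bundled with Logstash, install it first with the standard plugin command:

bin/logstash-plugin install logstash-filter-json_encode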
I'm parsing this log line:

2016-11-30 15:43:09.3060 DEBUG 20 Company.Product.LoggerDataFilter [UOW:583ee57782fe0140c6dfbfd8] [DP:0] Creating DeviceDataTransformationRequest for logger [D:4E3239200C5032593D004100].

with this grok pattern:

%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel}\s+ %{INT:threadId} %{DATA:loggerName} %{UOW} %{DATAPACKET} %{GREEDYDATA:message} %{DEVICEID}
The output of that is
{
  "timestamp": [
    "2016-11-30 15:43:09.3060"
  ],
  "loglevel": [
    "DEBUG"
  ],
  "threadId": [
    "20"
  ],
  "loggerName": [
    "Tts.IoT.DataLogger.Etl.Core.Filters.LoggerDataFilter"
  ],
  "correlationId": [
    "583ee57782fe0140c6dfbfd8"
  ],
  "datapacket": [
    "0"
  ],
  "message": [
    "Creating DeviceDataTransformationRequest for logger"
  ],
  "deviceId": [
    "4E3239200C5032593D004100"
  ]
}
Which is good, EXCEPT that the message field is now missing the DEVICEID value I extracted. I want both: the device ID as a separate field, while still keeping it in the message.
Can you do that?
(On a side note... how does structured logging like Serilog help in this regard?)
How about trying to change
%{GREEDYDATA:message} %{DEVICEID}
to
%{GREEDYDATA:testmessage} %{DEVICEID}
then add the field back:
mutate {
  # %{deviceId} assumes the custom DEVICEID pattern captures into a field
  # named deviceId, as shown in the output above
  add_field => {
    "message" => "%{testmessage} %{deviceId}"
  }
  remove_field => ["testmessage"]
}
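One caution (an assumption on my part, not from the original answer): outside the grok debugger the event usually already has a message field, and add_field on an existing field turns it into an array. Using replace instead keeps it a single string:

mutate {
  # replace overwrites the existing message instead of appending to it
  replace => { "message" => "%{testmessage} %{deviceId}" }
  remove_field => ["testmessage"]
}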
I'm using Elasticsearch + Logstash + Kibana for Windows event log analysis, and I get the following log:
{
"_index": "logstash-2015.04.16",
"_type": "logs",
"_id": "Ov498b0cTqK8W4_IPzZKbg",
"_score": null,
"_source": {
"EventTime": "2015-04-16 14:12:45",
"EventType": "AUDIT_FAILURE",
"EventID": "4656",
"Message": "A handle to an object was requested.\r\n\r\nSubject:\r\n\tSecurity ID:\t\tS-1-5-21-2832557239-2908104349-351431359-3166\r\n\tAccount Name:\t\ts.tekotin\r\n\tAccount Domain:\t\tIAS\r\n\tLogon ID:\t\t0x88991C8\r\n\r\nObject:\r\n\tObject Server:\t\tSecurity\r\n\tObject Type:\t\tFile\r\n\tObject Name:\t\tC:\\Folders\\Общая (HotSMS)\\Test_folder\\3\r\n\tHandle ID:\t\t0x0\r\n\tResource Attributes:\t-\r\n\r\nProcess Information:\r\n\tProcess ID:\t\t0x4\r\n\tProcess Name:\t\t\r\n\r\nAccess Request Information:\r\n\tTransaction ID:\t\t{00000000-0000-0000-0000-000000000000}\r\n\tAccesses:\t\tReadData (or ListDirectory)\r\n\t\t\t\tReadAttributes\r\n\t\t\t\t\r\n\tAccess Reasons:\t\tReadData (or ListDirectory):\tDenied by\tD:(D;OICI;CCDCLCSWRPWPLOCRSDRC;;;S-1-5-21-2832557239-2908104349-351431359-3166)\r\n\t\t\t\tReadAttributes:\tGranted by ACE on parent folder\tD:(A;OICI;0x1200a9;;;S-1-5-21-2832557239-2908104349-351431359-3166)\r\n\t\t\t\t\r\n\tAccess Mask:\t\t0x81\r\n\tPrivileges Used for Access Check:\t-\r\n\tRestricted SID Count:\t0",
"ObjectServer": "Security",
"ObjectName": "C:\\Folders\\Общая (HotSMS)\\Test_folder\\3",
"HandleId": "0x0",
"PrivilegeList": "-",
"RestrictedSidCount": "0",
"ResourceAttributes": "-",
"#timestamp": "2015-04-16T11:12:45.802Z"
},
"sort": [
1429182765802,
1429182765802
]
}
I get many log messages with different EventIDs, and when I receive a log entry with EventID 4656, I want to replace the value "4656" with the string "Access Failure". Is there a way to do so?
You can do it when you are loading with Logstash; just do something like this:
filter {
  if [EventID] == "4656" {
    mutate {
      replace => { "EventID" => "Access Failure" }
    }
  }
}
If you have a lot of values, look at translate{}:
translate {
  field       => "EventID"
  destination => "EventName"
  dictionary  => {
    "4656" => "Access Failure"
    "1234" => "Another Value"
  }
}
By itself translate{} won't replace the original field, but you can point destination back at the same field with override enabled, or simply remove the original in favor of the new field.
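A sketch of the in-place variant (hedged; option names may differ across translate plugin versions):

translate {
  field       => "EventID"
  destination => "EventID"
  # override lets translate overwrite the already-existing destination field
  override    => true
  dictionary  => { "4656" => "Access Failure" }
}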
Use the mutate filter's replace option:
Replace a field with a new value. The new value can include %{foo} strings to help you build a new value from other parts of the event.
Example:
filter {
  if [source] == "your code like 4656" {
    mutate {
      replace => { "message" => "%{source_host}: My new message" }
    }
  }
}