Replace @timestamp in logstash

I'm going crazy with my Logstash configuration.
I can't find a way to replace the @timestamp field with another one.
Here is what Logstash receives:
{
"offset" => 6718968,
"Varnish_txid" => "639657758",
"plateform" => "cdnfronts",
"Referer" => "-",
"input_type" => "log",
"respsize" => "281",
"source" => "/var/log/varnish/varnish4xx-5xx.log",
"UA" => "Microsoft-WebDAV-MiniRedir/5.1.2600",
"type" => "varnish-logs",
"tags" => [
[0] "json",
[1] "varnish",
[2] "beats_input_codec_json_applied",
[3] "_dateparsefailure"
],
"st_snt2c_or_sntfromb" => "405",
"RemoteHost" => "32.26.21.21",
"#timestamp" => 2017-02-14T13:38:47.808Z,
"Varnish.Handling" => "pass",
"tot_bytes_rcvby_c_or_sntby_b" => "-",
"time_req_rcv4c_or_snt4b" => "[14/Feb/2017:14:38:44 +0100]",
"#version" => "1",
"beat" => {
"hostname" => "cdn1",
"name" => "cdn1",
"version" => "5.1.2"
},
"host" => "cdn1",
"time_1st_byte" => "0.010954",
"Varnish_side" => "c",
"reqfirstline" => "OPTIONS http://a.toto.com/ HTTP/1.1"
}
Here is my Logstash conf:
input {
  beats {
    port => 5000
    codec => "json"
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}
filter {
  if "json" in [tags] {
    json {
      source => "message"
    }
    if "varnish" in [tags] {
      date {
        locale => "en"
        match => [ "[time_req_rcv4c_or_snt4b]", "dd/MMM/yyyy:HH:mm:ss Z" ]
        remove_field => "[time_req_rcv4c_or_snt4b]"
      }
    }
  }
}
output {
  if "varnish" in [tags] {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "logstash-varnish-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
    }
  }
  stdout {
    codec => rubydebug
  }
}
I tried:
match => [ "time_req_rcv4c_or_snt4b", "dd/MMM/yyyy:HH:mm:ss Z" ]
remove_field => "time_req_rcv4c_or_snt4b"
and
match => [ "[time_req_rcv4c_or_snt4b]", "dd/MMM/yyyy:HH:mm:ss Z" ]
remove_field => "[time_req_rcv4c_or_snt4b]"
Can anybody explain what I missed? I haven't found anything relevant on Google so far.

From your output:
"time_req_rcv4c_or_snt4b" => "[14/Feb/2017:14:38:44 +0100]",
Your date field has [] around it, so you either need to match those brackets in your date pattern or strip them off before the date filter runs.
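For example, a minimal sketch of the second approach, using mutate's gsub to strip the brackets before the date filter (field name taken from the event above; adapt as needed):
filter {
  mutate {
    # strip the surrounding [ ] so the date pattern matches
    gsub => [ "time_req_rcv4c_or_snt4b", "[\[\]]", "" ]
  }
  date {
    locale => "en"
    match => [ "time_req_rcv4c_or_snt4b", "dd/MMM/yyyy:HH:mm:ss Z" ]
    remove_field => [ "time_req_rcv4c_or_snt4b" ]
  }
}
Alternatively, keeping the brackets in the date pattern itself ("[dd/MMM/yyyy:HH:mm:ss Z]") should also work, since non-letter characters in Joda patterns are treated as literals.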

Related

Logstash - Drop logs containing kv value

I am unsuccessfully trying to drop logs based on the value of a field parsed out by the kv filter.
filter {
  if [type] == "cef" {
    mutate {
      add_field => { "tmp_message" => "%{message}" }
      split => ["message", "|"]
      add_field => { "version" => "%{message[0]}" }
      add_field => { "device_vendor" => "%{message[1]}" }
      add_field => { "device_product" => "%{message[2]}" }
      add_field => { "device_version" => "%{message[3]}" }
      add_field => { "sig_id" => "%{message[4]}" }
      add_field => { "sig_name" => "%{message[5]}" }
      add_field => { "sig_severity" => "%{message[6]}" }
    }
    kv {
      field_split => " "
      trim_value => "<>\[\],"
    }
    mutate {
      replace => { "message" => "%{tmp_message}" }
      remove_field => [ "tmp_message" ]
    }
  }
  if [FTNTFGTsrcintfrole_s] == "wan" {
    drop { }
  }
}
[FTNTFGTsrcintfrole_s] is one of the keys that are parsed out by kv. If the value of the key is "wan", it should drop the log. That's not happening.
How can I filter out those logs?
Edit: Here is an example of the parsed data
{
"dst" => "xxx.xxx.xxx.xxx",
"FTNTFGTtz" => "+0000",
"FTNTFGTsubtype" => "forward",
"message" => "%{tmp_message}",
"host" => "xxx.xxx.xxx.xxx",
"spt" => "59975",
"type" => "cef",
"deviceInboundInterface" => "ssl.root",
"FTNTFGTdstintfrole" => "wan",
"FTNTFGTduration" => "180",
"FTNTFGTdstcountry" => "United",
"FTNTFGTpolicyid" => "47",
"FTNTFGTpolicytype" => "policy",
"FTNTFGTpoluuid" => "801d40c2-3b60-51ea-d66a-293bf886d27e",
"FTNTFGTeventtime" => "1633506791693710149",
"sourceTranslatedAddress" => "xxx.xxx.xxx.xxx",
"dpt" => "8253",
"app" => "udp/8253",
"FTNTFGTpolicyname" => "xxxxxxxx",
"tags" => [
[0] "fortigate",
[1] "_mutate_error"
],
"act" => "accept",
"FTNTFGTlogid" => "0000000013",
"in" => "64",
"sourceTranslatedPort" => "59975",
"FTNTFGTsentpkt" => "1",
"FTNTFGTtrandisp" => "snat",
"FTNTFGTsrcintfrole" => "wan",
"#version" => "1",
"FTNTFGTrcvdpkt" => "1",
"deviceExternalId" => "xxxxx",
"FTNTFGTauthserver" => "xxxxx",
"#timestamp" => 2021-10-06T07:53:11.729Z,
"FTNTFGTsrccountry" => "Reserved",
"deviceOutboundInterface" => "wan1",
"proto" => "17",
"out" => "48",
"src" => "xxx.xxx.xxx.xxx",
"externalId" => "870512",
"FTNTFGTlevel" => "notice",
"FTNTFGTvd" => "root",
"duser" => "xxxxx",
"cat" => "traffic:forward",
"FTNTFGTappcat" => "unscanned"
}
I found the answer thanks to @YLR and @Filip. The SIEM was adding "_s" to the key name when creating the field, leading me to believe that was the original key name and, in turn, what I should be filtering on. After seeing the log output and realizing that wasn't the case, I corrected the filter and it worked.
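Based on the parsed output above, the corrected conditional presumably looks like this (a sketch using the key name as it actually appears in the event, not the poster's exact config):
if [FTNTFGTsrcintfrole] == "wan" {
  drop { }
}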

Add log4net Level field to logstash.conf file

I'm trying to add a LEVEL field (so it shows up in Kibana). My logstash.conf:
Input:
2018-03-18 15:43:40.7914 - INFO: Tick
2018-03-18 15:43:40.7914 - ERROR: Tock
file:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => {
      "message" => "(?m)^%{TIMESTAMP_ISO8601:timestamp}~~\[%{DATA:thread}\]~~\[%{DATA:user}\]~~\[%{DATA:requestId}\]~~\[%{DATA:userHost}\]~~\[%{DATA:requestUrl}\]~~%{DATA:level}~~%{DATA:logger}~~%{DATA:logmessage}~~%{DATA:exception}\|\|"
    }
    match => {
      "levell" => "(?m)^%{DATA:level}"
    }
    add_field => {
      "received_at" => "%{@timestamp}"
      "received_from" => "%{host}"
      "level" => "levell"
    }
    remove_field => ["message"]
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss:SSS" ]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    sniffing => true
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}
This prints out "levell" instead of "INFO"/"ERROR", etc.
EDIT:
Input:
2018-03-18 15:43:40.7914 - INFO: Tick
configuration:
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:timestamp}~~\[%{DATA:thread}\]~~\[%{DATA:user}\]~~\[%{DATA:requestId}\]~~\[%{DATA:userHost}\]~~\[%{DATA:requestUrl}\]~~%{DATA:level}~~%{DATA:logger}~~%{DATA:logmessage}~~%{DATA:exception}\|\|" }
    add_field => {
      "received_at" => "%{@timestamp}"
      "received_from" => "%{host}"
    }
  }
  grok {
    match => { "message" => "- %{LOGLEVEL:level}" }
    remove_field => ["message"]
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss:SSS" ]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    sniffing => true
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}
The output I'm getting is still missing received_at and level.
In that part of the configuration:
add_field => {
  "received_at" => "%{@timestamp}"
  "received_from" => "%{host}"
  "level" => "levell"
}
When using "level" => "levell", you just put the literal string levell in the field level. To put in the value of the field named levell, you have to use %{levell}. So in your case, it would look like:
add_field => {
  "received_at" => "%{@timestamp}"
  "received_from" => "%{host}"
  "level" => "%{levell}"
}
Also, about grok's match option, according to the documentation:
A hash that defines the mapping of where to look, and with which patterns.
So trying to match on the levell field won't work, since it looks like that field doesn't exist yet. And the grok pattern you're using on the message field doesn't match the example you provided.
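For reference, a minimal sketch of a grok/date pair that does match the sample line "2018-03-18 15:43:40.7914 - INFO: Tick" (the field names and the fractional-seconds format are assumptions to adapt):
filter {
  grok {
    # matches "<ISO8601 timestamp> - <LEVEL>: <message>"
    match => { "message" => "^%{TIMESTAMP_ISO8601:timestamp} - %{LOGLEVEL:level}: %{GREEDYDATA:logmessage}" }
    add_field => {
      "received_at" => "%{@timestamp}"
      "received_from" => "%{host}"
    }
  }
  date {
    # assumed Joda pattern for the 4-digit fractional seconds in the sample
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSS" ]
  }
}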

Can't grok multiline logs

I have logs where each event is:
ExitNode FF33F91CC06B6CC5C3EE804E7D8DBE42CB5707F9
Published 2017-11-05 02:55:09
LastStatus 2017-11-05 04:02:27
ExitAddress 66.42.224.235 2017-11-05 04:06:26
I tried to use multiline:
input {
  file {
    path => "/path/input"
  }
}
filter {
  multiline {
    pattern => "^\b[A-Za-z]{8}\b"
    what => "next"
  }
}
filter {
  multiline {
    pattern => "^\b[A-Za-z]{8}\b"
    what => "next"
  }
}
filter {
  multiline {
    pattern => "^\b[A-Za-z]{11}\b"
    what => "previous"
  }
}
output {
  file {
    codec => rubydebug
    path => "/path/output"
  }
}
And I get something like this:
{
"path" => "/path/input",
"#timestamp" => 2017-11-05T10:25:34.112Z,
"#version" => "1",
"host" => "HOST",
"message" => "ExitNode FE3CB742E73674F1BC2382723209ECEE44AD4AEC\nPublished 2017-11-04 20:34:55\nLastStatus 2017-11-04 21:03:26\nExitAddress 77.250.227.12 2017-11-04 21:06:45",
"tags" => [
[0] "multiline"
]
}
And I can't grok this message field, because I don't know how to remove or replace the \n characters; gsub => ["message", "\n", "Line_Break"] doesn't work properly.
Thanks
From the comment of @baudsp:
mutate {
  gsub => [ "message", "[\r\n]", "_" ]
}
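Once the newlines are replaced, the joined message can be grokked in one pass. A minimal sketch, assuming the four-line event shown above and hypothetical field names:
filter {
  mutate {
    gsub => [ "message", "[\r\n]", "_" ]
  }
  grok {
    # exit_node, published, last_status, exit_address and exit_time are assumed names
    match => { "message" => "ExitNode %{BASE16NUM:exit_node}_Published %{TIMESTAMP_ISO8601:published}_LastStatus %{TIMESTAMP_ISO8601:last_status}_ExitAddress %{IP:exit_address} %{TIMESTAMP_ISO8601:exit_time}" }
  }
}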

Logstash issue with json_formater [LogStash::Json::ParserError: Unexpected character ('-' (code 45)): was expecting comma to separate ARRAY entries]

I have an issue converting a value through Logstash and I can't find a solution for it. It seems to be linked to the date.
#Log line
[2017-08-15 12:30:17] api.INFO: {"sessionId":"a216925---ff5992be7520924ff25992be75209c7","action":"processed","time":1502789417,"type":"bookingProcess","page":"order"} [] []
Logstash configuration
filter {
  if [type] == "api-prod-log" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:module}.%{WORD:level}: (?<log_message>.*) \[\] \[\]" }
      add_field => [ "received_from", "%{host}" ]
    }
    json {
      source => "log_message"
      target => "flightSearchRequest"
      remove_field => ["log_message"]
    }
    date {
      match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
      timezone => "Asia/Jerusalem"
    }
  }
}
Any idea?
Thanks
What version of Logstash are you using?
On Logstash 5.2.2 with the following Logstash config:
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => '\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:module}.%{WORD:level}: (?<log_message>.*) \[\] \[\]' }
  }
  json {
    source => "log_message"
    target => "flightSearchRequest"
    remove_field => ["log_message"]
  }
  date {
    match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Asia/Jerusalem"
  }
}
output {
  stdout { codec => "rubydebug" }
}
I get a perfectly correct result and no errors when I pass your log line as input:
{
"#timestamp" => 2017-08-15T09:30:17.000Z,
"flightSearchRequest" => {
"action" => "processed",
"sessionId" => "a216925---ff5992be7520924ff25992be75209c7",
"time" => 1502789417,
"page" => "order",
"type" => "bookingProcess"
},
"level" => "INFO",
"module" => "api",
"#version" => "1",
"message" => "[2017-08-15 12:30:17] api.INFO: {\"sessionId\":\"a216925---ff5992be7520924ff25992be75209c7\",\"action\":\"processed\",\"time\":1502789417,\"type\":\"bookingProcess\",\"page\":\"order\"} [] []",
"timestamp" => "2017-08-15 12:30:17"
}
I've removed the check for "type" at the beginning; can you test whether that affects the result?

Logstash parse field issue

I have a log printed as follows:
"message" => "....",
"host" => "10.10.12.13",
"#version" => "1",
"#timestamp" => "2016-04-13T01:52:43.535Z",
"DISMAN-EVENT-MIB::sysUpTimeInstance" => "22 days, 16:33:23.24",
"SNMP-MIB::OID_0" => "example::bgpPeerState",
"source_ip" => "10.10.12.13"
I want to parse the string based on its specific prefix, add a field for it, and remove the original:
"SNMP-MIB::OID_0" => "example::bgpPeerState"
It should look like below:
"message" => "....",
"host" => "10.10.12.13",
"#version" => "1",
"#timestamp" => "2016-04-13T01:52:43.535Z",
"type" => "snmptrap",
"DISMAN-EVENT-MIB::sysUpTimeInstance" => "22 days, 16:33:23.24",
"example" => "bgpPeerState",
"source_ip" => "10.10.12.13"
My conf:
filter {
  if "example" in [SNMP-MIB::OID_0] {
    # I don't know how to parse it and add a field ???
  }
  else {
    .......
  }
}
As always, many thanks for your help!
Use the kv filter:
filter {
  if "example" in [SNMP-MIB::OID_0] {
    kv {
      source => "SNMP-MIB::OID_0"
      value_split => ":"
      trim => ":"
      remove_field => "SNMP-MIB::OID_0"
    }
  }
}
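As an alternative sketch (not from the original answer), the same split can be done with grok plus mutate, using a sprintf key so the new field is named after the prefix; oid_prefix and oid_value are hypothetical helper fields:
filter {
  if "example" in [SNMP-MIB::OID_0] {
    grok {
      # splits "example::bgpPeerState" into oid_prefix / oid_value
      match => { "SNMP-MIB::OID_0" => "^%{WORD:oid_prefix}::%{GREEDYDATA:oid_value}" }
    }
    mutate {
      # creates "example" => "bgpPeerState"
      add_field => { "%{oid_prefix}" => "%{oid_value}" }
    }
    mutate {
      remove_field => [ "SNMP-MIB::OID_0", "oid_prefix", "oid_value" ]
    }
  }
}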
