Logstash: remove all non-digit characters from a field

I have log files that I am passing into logstash to be modified before pushing to elasticsearch.
One of the fields I have sometimes appears as a series of digits:
foobar = 42
Sometimes it is prefixed with letters:
foobar = ws-42
I want to make sure the field is always an integer, and that any non-digit characters are removed if present.
Here is the part of the Logstash config that makes sure the field is an integer:
filter {
  mutate {
    convert => [ "foobar", "integer" ]
  }
}
How can I strip out the characters if present?
Update
Using the mutate filter I can either strip out the non-numeric characters or convert the field to an integer. However, if I try to do both in the same mutate block, it returns 0.
Example
input {
  stdin {}
}
filter {
  kv { }
  mutate {
    gsub => [ "foobar", "\D", "" ]
    convert => [ "foobar", "integer" ]
  }
}
Here is the output. Notice that when '42' is provided, foobar comes back as the integer 42, but when 'sw-42' is provided, foobar comes back as 0.
foobar="42"
{
"message" => "foobar=\"42\"",
"#version" => "1",
"#timestamp" => "2015-03-31T22:32:11.718Z",
"host" => "swat-logstash02",
"foobar" => 42
}
foobar="sw-42"
{
"message" => "foobar=\"sw-42\"",
"#version" => "1",
"#timestamp" => "2015-03-31T22:32:23.822Z",
"host" => "swat-logstash02",
"foobar" => 0
}

It's an ordering issue: within a single mutate filter the operations run in a fixed internal order, and convert is applied before gsub, so "sw-42" is converted to an integer (0) before the non-digit characters are stripped.
If you do just the gsub (without the convert), it shows that the regexp is working:
{
    "message" => "foobar=\"sw-42\"",
    "@version" => "1",
    "@timestamp" => "2015-03-31T22:42:40.097Z",
    "host" => "0.0.0.0",
    "foobar" => "42"
}
so you should run it as two stanzas:
filter {
  kv { }
  mutate {
    gsub => [ "foobar", "\D", "" ]
  }
  mutate {
    convert => [ "foobar", "integer" ]
  }
}
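One caveat worth noting (not from the original answer): if foobar contains no digits at all, the gsub leaves an empty string, which convert also turns into 0. A minimal sketch of guarding against that, assuming the same kv input as above (the tag name foobar_not_numeric is made up for illustration):

filter {
  kv { }
  mutate {
    gsub => [ "foobar", "\D", "" ]
  }
  if [foobar] == "" {
    # nothing numeric was supplied; tag the event instead of converting to 0
    mutate {
      add_tag => [ "foobar_not_numeric" ]
    }
  } else {
    mutate {
      convert => [ "foobar", "integer" ]
    }
  }
}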

Related

LogStash Conf | Drop Empty Lines

The contents of the Logstash conf file look like this:
input {
  beats {
    port => 5044
  }
  file {
    path => "/usr/share/logstash/iway_logs/*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    #ignore_older => 0
    codec => multiline {
      pattern => "^\[%{NOTSPACE:timestamp}\]"
      negate => true
      what => "previous"
      max_lines => 2500
    }
  }
}
filter {
  grok {
    match => { "message" =>
      ['(?m)\[%{NOTSPACE:timestamp}\]%{SPACE}%{WORD:level}%{SPACE}\(%{NOTSPACE:entity}\)%{SPACE}%{GREEDYDATA:rawlog}']
    }
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd'T'HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
  grok {
    match => { "entity" => ['(?:W.%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}\.%{GREEDYDATA:workerid}|W.%{GREEDYDATA:channel}\.%{GREEDYDATA:workerid}|%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}\.%{GREEDYDATA:workerid}|%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}|%{GREEDYDATA:channel})'] }
  }
  dissect {
    mapping => {
      "[log][file][path]" => "/usr/share/logstash/iway_logs/%{serverName}#%{configName}#%{?ignore}.log"
    }
  }
}
output {
  elasticsearch {
    hosts => "${ELASTICSEARCH_HOST_PORT}"
    index => "iway_"
    user => "${ELASTIC_USERNAME}"
    password => "${ELASTIC_PASSWORD}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/certs/ca.crt"
  }
}
As one can make out, the idea is to parse a custom log employing multiline extraction. The extraction does its job. The log occasionally contains an empty first line. So:
[2022-11-29T12:23:15.073] DEBUG (manager) Generic XPath iFL functions use full XPath 1.0 syntax
[2022-11-29T12:23:15.074] DEBUG (manager) XPath 1.0 iFL functions use iWay's full syntax implementation
which naturally causes Kibana to report an empty line.
In an attempt to suppress this line from being sent to ES, I added the following as the last filter item:
if ![message] {
  drop { }
}
if [message] =~ /^\s*$/ {
  drop { }
}
The resulting JSON payload to ES:
{
  "@timestamp": [
    "2022-12-09T14:09:35.616Z"
  ],
  "@version": [
    "1"
  ],
  "@version.keyword": [
    "1"
  ],
  "event.original": [
    "\r"
  ],
  "event.original.keyword": [
    "\r"
  ],
  "host.name": [
    "xxx"
  ],
  "host.name.keyword": [
    "xxx"
  ],
  "log.file.path": [
    "/usr/share/logstash/iway_logs/localhost#iCLP#iway_2022-11-29T12_23_33.log"
  ],
  "log.file.path.keyword": [
    "/usr/share/logstash/iway_logs/localhost#iCLP#iway_2022-11-29T12_23_33.log"
  ],
  "message": [
    "\r"
  ],
  "message.keyword": [
    "\r"
  ],
  "tags": [
    "_grokparsefailure"
  ],
  "tags.keyword": [
    "_grokparsefailure"
  ],
  "_id": "oRc494QBirnaojU7W0Uf",
  "_index": "iway_",
  "_score": null
}
While this does drop the empty first line, it also unfortunately interferes with the multiline operation on other lines. In other words, the multiline operation does not work anymore. What am I doing incorrectly?
Use of the following variation resolved the issue:
if [message] =~ /\A\s*\Z/ {
  drop { }
}
The difference matters because in Ruby regular expressions ^ and $ match at every line boundary inside the string, so /^\s*$/ matched any multi-line event that merely contained a blank line, which is what broke the multiline handling. \A and \Z anchor to the start and end of the whole event, so only events that are entirely whitespace get dropped.
This solution is based on Badger's answer provided on the Logstash forums, where this question was raised as well.
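As a side note (not part of the fix above): the stray events contain only a carriage return, which suggests the log files have Windows (CRLF) line endings. If trailing \r characters also interfere with the grok pattern, a hedged sketch of stripping them before the rest of the filters run:

filter {
  # remove carriage returns left over from CRLF line endings before grok runs
  mutate {
    gsub => [ "message", "\r", "" ]
  }
}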

Using if-else with Logstash split

I have a string field called description delimited with _.
I split it as follows:
filter {
  mutate {
    split => ["description", "_"]
    add_field => {"location" => "%{[description][3]}"}
  }
}
How can I check if the split values are empty or not?
I have attempted:
if !["%{[description][3]}"] {
# do something
}
if ![[description][3]] {
# do something
}
if ![description][3] {
# do something
}
None of them work.
The goal is for the new field location to hold either the actual split value or a generic value such as NA.
You made a really simple mistake with your mutate split.
This:
mutate {
  split => ["description", "_"]
  add_field => {"location" => "%{[description][3]}"}
}
should have been (note the hash with => instead of the comma-separated array):
mutate {
  split => {"description" => "_"}
  add_field => {"location" => "%{[description][3]}"}
}
Here is a sample I tested with:
filter {
  mutate {
    remove_field => ["headers", "@version"]
    add_field => { "description" => "Python_Java_ruby_perl " }
  }
  mutate {
    split => {"description" => "_"}
  }
  if [description][4] {
    mutate {
      add_field => {"result" => "The 4 th field exists"}
    }
  } else {
    mutate {
      add_field => {"result" => "The 4 th field DOES NOT exists"}
    }
  }
}
And the result on the console (since there is no 4th element, it went to the else block):
{
    "host" => "0:0:0:0:0:0:0:1",
    "result" => "The 4 th field DOES NOT exists",    <==== from else block
    "@timestamp" => 2020-01-14T19:35:41.013Z,
    "message" => "hello",
    "description" => [
        [0] "Python",
        [1] "Java",
        [2] "ruby",
        [3] "perl "
    ]
}
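To get all the way to the goal stated in the question (location holding either the split value or a generic NA), here is a sketch building on the same split, with the else branch supplying the fallback:

filter {
  mutate {
    split => {"description" => "_"}
  }
  if [description][3] {
    mutate {
      add_field => {"location" => "%{[description][3]}"}
    }
  } else {
    # fall back to a generic value when the 4th element is missing
    mutate {
      add_field => {"location" => "NA"}
    }
  }
}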

Logstash: make a copy of a nested field with mutate.add_field

I want to make a copy of a nested field in a Logstash filter, but I can't figure out the correct syntax.
Here is what I tried:
Incorrect syntax:
mutate {
  add_field => { "received_from" => %{beat.hostname} }
}
beat.hostname is not replaced.
mutate {
  add_field => { "received_from" => "%{beat.hostname}" }
}
beat.hostname is not replaced.
mutate {
  add_field => { "received_from" => "%{[beat][hostname]}" }
}
beat.hostname is not replaced.
mutate {
  add_field => { "received_from" => "%[beat][hostname]" }
}
No way. If I give a non-nested field it works as expected.
The data structure received by logstash is the following:
{
    "@timestamp" => "2016-08-24T13:01:28.369Z",
    "beat" => {
        "hostname" => "etg-dbs-master-tmp",
        "name" => "etg-dbs-master-tmp"
    },
    "count" => 1,
    "fs" => {
        "device_name" => "/dev/vdb",
        "total" => 5150212096,
        "used" => 99287040,
        "used_p" => 0.02,
        "free" => 5050925056,
        "avail" => 4765712384,
        "files" => 327680,
        "free_files" => 326476,
        "mount_point" => "/opt/ws-etg/datas"
    },
    "type" => "filesystem",
    "@version" => "1",
    "tags" => [
        [0] "topbeat"
    ],
    "received_at" => "2016-08-24T13:01:28.369Z",
    "received_from" => "%[beat][hostname]"
}
EDIT:
Since you didn't show your input message, I worked off your output. In your output the field you are trying to copy into already exists, which is why you need to use replace. If it does not exist, you do indeed need to use add_field. I updated my answer for both cases.
EDIT 2: I realised that your problem might be accessing the nested value, so I added that as well :)
You are using the mutate filter wrong/backwards.
First mistake:
You want to replace a field, not add one. In the docs, it gives you the "replace" option. See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-replace
Second mistake: you are using the syntax in reverse. It appears that you believe this is true:
"text I want to write" => "Field I want to write it in"
While this is true:
"myDestinationFieldName" => "My Value to be in the field"
With this knowledge, we can now do this:
mutate {
  replace => { "[test][a]" => "%{s}" }
}
Or, if you want to actually add a NEW NOT EXISTING FIELD:
mutate {
  add_field => { "[test][myNewField]" => "%{s}" }
}
Or add a new field that takes the value of an existing nested field:
mutate {
  add_field => { "some" => "%{[test][a]}" }
}
Or in more detail, here is my example:
input {
  stdin {
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    replace => { "[test][a]" => "%{s}" }
    add_field => { "[test][myNewField]" => "%{s}" }
    add_field => { "some" => "%{[test][a]}" }
  }
}
output {
  stdout { codec => rubydebug }
}
This example takes stdin and outputs to stdout. It uses a json filter to parse the message and then the mutate filter to replace the nested field. I also add a completely new field in the nested test object, and finally create a new field "some" that holds the value of test.a.
So for this message:
{"test" : { "a": "hello"}, "s" : "to_Repalce"}
We want to replace test.a (value: "hello") with s (value: "to_Repalce"), and add a field test.myNewField with the value of s.
On my terminal:
artur@pandaadb:~/dev/logstash$ ./logstash-2.3.2/bin/logstash -f conf2/
Settings: Default pipeline workers: 8
Pipeline main started
{"test" : { "a": "hello"}, "s" : "to_Repalce"}
{
    "message" => "{\"test\" : { \"a\": \"hello\"}, \"s\" : \"to_Repalce\"}",
    "@version" => "1",
    "@timestamp" => "2016-08-24T14:39:52.002Z",
    "host" => "pandaadb",
    "test" => {
        "a" => "to_Repalce",
        "myNewField" => "to_Repalce"
    },
    "s" => "to_Repalce",
    "some" => "to_Repalce"
}
The value has successfully been replaced.
A field "some" with the replaced value has been added.
A new field in the nested test object has been added.
If you use add_field on the existing field instead, it will convert a into an array and append your value there.
Hope this solves your issue,
Artur
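As an aside (not part of Artur's answer): more recent versions of the mutate filter also ship a copy option, which duplicates a field, including nested ones, in a single step. A sketch, assuming a Logstash version new enough to have it:

filter {
  # copy the nested beat.hostname value into a new top-level field
  mutate {
    copy => { "[beat][hostname]" => "received_from" }
  }
}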

How to generate new fields in Logstash

I need to generate a new field (loglevel) using Logstash and finally display it in Kibana.
How do I extract this log and build the grok pattern for it?
How do I create the loglevel field in the Logstash configuration?
I found this page helpful when I was setting up my log4net filter. Based on what your logs look like, you'll end up with something like this (copied from that page):
filter {
  if [type] == "log4net" {
    grok {
      match => [ "message", "(?m)%{LOGLEVEL:level} %{TIMESTAMP_ISO8601:sourceTimestamp} %{DATA:logger} \[%{NUMBER:threadId}\] \[%{IPORHOST:tempHost}\] %{GREEDYDATA:tempMessage}" ]
    }
    mutate {
      replace => [ "message" , "%{tempMessage}" ]
      replace => [ "host" , "%{tempHost}" ]
      remove_field => [ "tempMessage" ]
      remove_field => [ "tempHost" ]
    }
  }
}
Yes, now I have an answer. Please find below the configuration for creating the new fields and filtering some fields.
filter {
  multiline {
    pattern => "^%{TIMESTAMP_ISO8601}"
    what => "previous"
    negate => true
  }
  # Delete trailing whitespaces
  mutate {
    strip => "message"
  }
  # Delete \n from messages
  mutate {
    gsub => ['message', "\n", " "]
  }
  # Delete \r from messages
  mutate {
    gsub => ['message', "\r", " "]
  }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:time} \[%{NUMBER:thread}\] %{LOGLEVEL:loglevel} %{JAVACLASS:class} - %{GREEDYDATA:msg}" }
  }
  if "Exception" in [msg] {
    mutate {
      add_field => { "msg_error" => "%{msg}" }
    }
  }
}
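A convenient way to iterate on a grok pattern like this is a throwaway stdin-to-stdout pipeline; here is a sketch reusing the grok expression from the configuration above, so sample lines can be pasted in and the extracted loglevel field inspected immediately:

input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:time} \[%{NUMBER:thread}\] %{LOGLEVEL:loglevel} %{JAVACLASS:class} - %{GREEDYDATA:msg}" }
  }
}
output {
  stdout { codec => rubydebug }
}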

File input add_field not adding field to every row

I am parsing several log files from different load-balanced server clusters with my Logstash config and would like to add a field "log_origin" to each file's entries for easy filtering later.
Here's my input->file config in a simple example:
input {
  file {
    type => "node1"
    path => "C:/Development/node1/log/*"
    add_field => [ "log_origin", "live_logs" ]
  }
  file {
    type => "node2"
    path => "C:/Development/node2/log/*"
    add_field => [ "log_origin", "live_logs" ]
  }
  file {
    type => "node3"
    path => "C:/Development/node1/log/*"
    add_field => [ "log_origin", "live_logs" ]
  }
  file {
    type => "node4"
    path => "C:/Development/node1/log/*"
    add_field => [ "log_origin", "live_logs" ]
  }
}
filter {
  grok {
    match => [
      "message", "%{DATESTAMP:log_timestamp}%{SPACE}\[%{DATA:class}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{GREEDYDATA:log_message}"
    ]
  }
  date {
    match => [ "log_timestamp", "dd.MM.YY HH:mm:ss", "ISO8601" ]
    target => "@timestamp"
  }
  mutate {
    lowercase => ["loglevel"]
    strip => ["loglevel"]
  }
  if "_grokparsefailure" in [tags] {
    multiline {
      pattern => ".*"
      what => "previous"
    }
  }
  if [fields.log_origin] == "live_logs" {
    if [type] == "node1" {
      mutate {
        add_tag => "realsServerName1"
      }
    }
    if [type] == "node2" {
      mutate {
        add_tag => "realsServerName2"
      }
    }
    if [type] == "node3" {
      mutate {
        add_tag => "realsServerName3"
      }
    }
    if [type] == "node4" {
      mutate {
        add_tag => "realsServerName4"
      }
    }
  }
}
output {
  stdout { }
  elasticsearch { embedded => true }
}
I would have expected Logstash to add this field with the given value to every log entry it finds, but it doesn't. Maybe I am taking completely the wrong approach here?
Edit: I am not able to retrieve the logs directly from the nodes, but have to copy them over to my "server". Otherwise I would be able to just use the file path for distinguishing the different clusters...
Edit: It's working. I should have cleaned my data in between. Old entries without the field added cluttered up my results.
The add_field option expects a hash. It should be:
add_field => {
  "log_origin" => "live_logs"
}
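A related detail, not covered by the answer above: fields added by the file input's add_field end up at the top level of the event, so the later conditional should reference the field with bracket syntax rather than fields.log_origin. A sketch of the corrected conditional, reusing the names from the question:

filter {
  # reference the top-level field added by the file input with bracket syntax
  if [log_origin] == "live_logs" {
    if [type] == "node1" {
      mutate {
        add_tag => "realsServerName1"
      }
    }
  }
}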
