I am using Logstash to parse a JSON input message and then add another field from one of the parsed values:
filter {
  json {
    source => "message"
    target => "data"
  }
  mutate {
    add_field => {
      "index_date" => "%{[data][@timestamp]}"
    }
  }
}
This works fine, but now I need index_date to be only the date.
How can I format the [data][@timestamp] field to only return the date?
You will need to install the date_formatter plugin with
bin/logstash-plugin install logstash-filter-date_formatter
Then you can use something like this in your Logstash filter block:
date_formatter {
  source => "index_date"
  target => "[@metadata][indexDateOnlyDate]"
  pattern => "YYYY.MM.dd"
}
This should work :)
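If you would rather not install an extra plugin, a mutate/gsub filter can also truncate the value, since @timestamp renders as an ISO8601 string (e.g. 2016-07-13T22:07:04.036Z). This is only a sketch, untested:
filter {
  mutate {
    # keep only the date part of the ISO8601 string, e.g. "2016-07-13"
    gsub => ["index_date", "T.*$", ""]
  }
}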
Related
I have a JSON event coming into Logstash that looks like this (the values differ between events):
{
  "metadata":{
    "app_type":"Foo",
    "app_namespace":"System.Bar",
    "json_string":"{\"test\":true}"
  }
}
I would like my Logstash filter to output the following to Elasticsearch (I'm not worried about trimming the existing fields):
{
  "app_event":{
    "foo":{
      "system_bar":{
        "test":true
      }
    }
  }
}
I cannot figure out the syntax for how to use field values as the names of new fields.
I tried a few different ways, all of which resulted in a config error when starting my Logstash instance. I'm only including the filter block of logstash.conf in my examples; the rest of the config works as expected.
1.
# note this one does not have the replacement and lowercasing I want
filter {
  json {
    source => "[metadata][json_string]"
    target => "[app_event][%{[metadata][app_type]}][%{[metadata][app_namespace]}]"
  }
}
2. I tried to create variables with the values I need and then use those in the target:
filter {
  appType = "%{[metadata][app_type]}".downcase
  appNamespace = "%{[metadata][app_namespace]}".downcase
  appNamespace.gsub(".", "_")
  json {
    source => "[metadata][json_string]"
    target => "[app_event][#{appType}][#{appNamespace}]"
  }
}
3. Just to confirm that everything else is set up correctly, this does work, but it does not produce the dynamically generated field structure I'm looking for:
filter {
  json {
    source => "[metadata][json_string]"
    target => "[app_event]"
  }
}
The json filter does not sprintf the value of its target option, so you cannot use the json filter in those ways. To make the field name variable you must use a ruby filter. Try:
json { source => "message" target => "[@metadata][json1]" remove_field => [ "message" ] }
json { source => "[@metadata][json1][metadata][json_string]" target => "[@metadata][json2]" }
ruby {
  code => '
    namespace = event.get("[@metadata][json1][metadata][app_namespace]")
    if namespace
      namespace = namespace.downcase.gsub(".", "_")
    end
    type = event.get("[@metadata][json1][metadata][app_type]")
    if type
      type = type.downcase
    end
    value = event.get("[@metadata][json2]")
    if type and namespace and value
      event.set("app_event", { type => { namespace => value } })
    end
  '
}
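With the sample event above, this should produce roughly the following structure under app_event (a sketch of the rubydebug output, untested):
"app_event" => {
    "foo" => {
        "system_bar" => {
            "test" => true
        }
    }
}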
This is the filter section of my Logstash config file:
filter {
  mutate {
    split => ["message", "|"]
    add_field => {
      "start_time" => "%{[message][1]}"
      "end_time" => "%{[message][2]}"
      "channel" => "%{[message][5]}"
      "[range_time][gte]" => "%{[message][1]}"
      "[range_time][lte]" => "%{[message][2]}"
      # "duration" => "%{[end_time]-[start_time]}"
    }
    # remove_field => ["message"]
  }
  date {
    match => ["start_time", "yyyyMMddHHmmss"]
    target => "start_time"
  }
  date {
    match => ["end_time", "yyyyMMddHHmmss"]
    target => "end_time"
  }
  ruby {
    code => "
      event.set('start_time', event.get('start_time').to_i)
      event.set('end_time', event.get('end_time').to_i)
    "
  }
  mutate {
    remove_field => ["message", "@timestamp"]
  }
  ruby {
    init => "require 'time'"
    code => "event['duration'] = event['end_time'] - event['start_time'];"
  }
}
In the end, I want to create a new field named duration to represent the difference between end_time and start_time.
Obviously, the last ruby part is wrong. How should I write this part?
To start, make sure the field you want to put the duration in exists before you set its value, i.e. add the field up front. As it will be numeric, you could do it like this:
mutate {
  add_field => {
    "duration" => 0
  }
}
After this you can calculate the value and set it using ruby
ruby {
  code => "event.set('duration', event.get('end_time').to_i - event.get('start_time').to_i)"
}
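Since the earlier ruby filter already converted both fields to epoch seconds with to_i, the resulting duration is in seconds. If either field can be missing, a guarded variant avoids a ruby exception. A sketch, untested:
ruby {
  code => "
    # only compute the duration when both timestamps are present
    if event.get('start_time') && event.get('end_time')
      event.set('duration', event.get('end_time').to_i - event.get('start_time').to_i)
    end
  "
}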
I have a filename in the format <key>:<value>-<key>:<value>.log, e.g. pr:64-author:mxinden-platform:aws.log, containing the logs of a test run.
I want to stream each line of the file to Elasticsearch via Logstash. Each line should be treated as a separate document, and each document should get fields according to the filename. For the above example, the log line 17-12-07 foo something happened bar would get the fields pr with value 64, author with value mxinden, and platform with value aws.
At the time I write the Logstash configuration, I do not know the names of the fields.
How do I dynamically add fields to each line based on the fields contained in the filename?
The static approach so far is:
filter {
  mutate { add_field => { "file" => "%{[@metadata][s3][key]}" } }
  grok { match => { "file" => "pr:%{NUMBER:pr}-" } }
  grok { match => { "file" => "author:%{USERNAME:author}-" } }
  grok { match => { "file" => "platform:%{USERNAME:platform}-" } }
}
Changes to the filename structure are fine.
Answering my own question, based on @dan-griffiths' comment:
The solution for a file like pr=64,author=mxinden,platform=aws.log is to use the Logstash kv filter, e.g.:
filter {
  kv {
    source => "file"
    field_split => ","
  }
}
where file is a field extracted from the filename via the AWS S3 input plugin.
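If renaming the files is not an option, the original key:value-key:value format could likely be parsed directly, since kv also accepts a custom value_split. A sketch, untested:
filter {
  mutate { add_field => { "file" => "%{[@metadata][s3][key]}" } }
  # strip the ".log" suffix so it does not end up in the last value
  mutate { gsub => ["file", "\\.log$", ""] }
  kv {
    source => "file"
    field_split => "-"
    value_split => ":"
  }
}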
I'm building out logstash and would like to build functionality to anonymize fields as specified in the message.
Given the message below, the field fta is a list of fields to anonymize. I would like to just use %{fta} and pass it through to the anonymize filter, but that doesn't seem to work.
{ "containsPII":"True", "fta":["f1","f2"], "f1":"test", "f2":"5551212" }
My config is as follows
input {
  stdin { codec => json }
}
filter {
  if [containsPII] {
    anonymize {
      algorithm => "SHA1"
      key => "123456789"
      fields => %{fta}
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
The output is
{
  "containsPII" => "True",
  "fta" => [
    [0] "f1",
    [1] "f2"
  ],
  "f1" => "test",
  "f2" => "5551212",
  "@version" => "1",
  "@timestamp" => "2016-07-13T22:07:04.036Z",
  "host" => "..."
}
Does anyone have any thoughts? I have tried several permutations at this point with no luck.
Thanks,
-D
EDIT:
After posting in the Elastic forums, I found out that this is not possible using base Logstash functionality. I will try using the ruby filter instead. So, to amend my question: how do I call another filter from within the ruby filter? I tried the following with no luck and honestly can't even figure out where to look. I'm very new to Ruby.
filter {
  if [containsPII] {
    ruby {
      code => "event['fta'].each { |item| event[item] = LogStash::Filters::Anonymize.execute(event[item],'12345','SHA1') }"
      add_tag => ["Rubyrun"]
    }
  }
}
You can execute filters from a ruby script. The steps are:
1. Create the required filter instance in the init block of the inline ruby script.
2. For every event, call the filter method of that filter instance.
The following sample config applies this to the problem statement above: it replaces the my_ip field in the event with its SHA1 hash. The same can be achieved using a ruby script file.
input { stdin { codec => json_lines } }
filter {
  ruby {
    init => "
      require 'logstash/filters/anonymize'
      # create an instance of the filter with the applicable parameters
      @anonymize = LogStash::Filters::Anonymize.new({'algorithm' => 'SHA1',
                                                     'key' => '123456789',
                                                     'fields' => ['my_ip']})
      # make sure to call register
      @anonymize.register
    "
    code => "
      # invoke the filter
      @anonymize.filter(event)
    "
  }
}
output { stdout { codec => rubydebug { metadata => true } } }
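To try it out, something like this should do (the config file name is just an example); my_ip should come back as a hex digest in the rubydebug output:
echo '{"my_ip":"1.2.3.4"}' | bin/logstash -f anonymize.conf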
Well, I wasn't able to figure out how to call another filter from within a ruby filter, but I did reach the functional goal:
filter {
  if [fta] {
    ruby {
      init => "require 'openssl'"
      code => "event['fta'].each { |item| event[item] = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, '123456789', event[item]) }"
    }
  }
}
If the field fta exists, this hashes (HMAC-SHA256) each of the fields listed in that array.
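Note that the event['field'] hash syntax was removed in Logstash 5.x; on newer versions the same filter needs the event get/set API. A sketch, untested:
filter {
  if [fta] {
    ruby {
      init => "require 'openssl'"
      code => "
        event.get('fta').each do |item|
          value = event.get(item)
          # hash each listed field in place, skipping absent fields
          event.set(item, OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, '123456789', value)) if value
        end
      "
    }
  }
}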
I am trying to write a pipeline to load a file into Logstash. My setup requires specifying the type field in the input section in order to run multiple independent Logstash config files, each with its own input, filter, and output. Unfortunately, the source data already contains a field named type, and its value conflicts with the value provided by the input configuration.
The source data contains a JSON array like the following:
[
  {"key1":"obj1", "type":"looks like a bad choice for a key name"},
  {"key1":"obj2", "type":"you can say that again"}
]
My pipeline looks like the following
input {
  exec {
    command => "cat /path/file_containing_above_json_array.txt"
    codec => "json"
    type => "typeSpecifiedInInput"
    interval => 3600
  }
}
output {
  if [type] == "typeSpecifiedInInput" {
    stdout {
      codec => rubydebug
    }
  }
}
The output block is never entered because type has been set to the value from the source data instead of the value provided in the input section.
How can I set up the input pipeline to avoid this conflict?
Nathan
Create a new field in your input instead of reusing 'type'. The exec{} input has add_field available.
Below is the final pipeline, which uses add_field instead of type. A filter phase was added to clean up the document so that the type field contains the value needed for writing into Elasticsearch (the class of similar documents). The type value from the original JSON document is preserved in the key typeSpecifiedFromDoc. The mutate step had to be broken into separate phases so that the replace would not affect type before its original value had been copied into the new field typeSpecifiedFromDoc.
input {
  exec {
    command => "cat /path/file_containing_above_json_array.txt"
    codec => "json"
    add_field => ["msgType", "typeSpecifiedInInput"]
    interval => 3600
  }
}
filter {
  if [msgType] == "typeSpecifiedInInput" {
    mutate {
      add_field => ["typeSpecifiedFromDoc", "%{type}"]
    }
    mutate {
      replace => ["type", "%{msgType}"]
      remove_field => ["msgType"]
    }
  }
}
output {
  if [type] == "typeSpecifiedInInput" {
    stdout {
      codec => rubydebug
    }
  }
}
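For the first array element, the rubydebug output should then look roughly like this (a sketch; the exec input also adds fields such as command and host):
{
  "key1" => "obj1",
  "type" => "typeSpecifiedInInput",
  "typeSpecifiedFromDoc" => "looks like a bad choice for a key name",
  ...
}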