Logstash pipeline output does not create requested index - logstash

My output plugin is configured as follows:
output {
  stdout { codec => rubydebug { metadata => true } }
  elasticsearch {
    hosts => ["elasticsearch:443"]
    ssl => true
    cacert => 'cacert.crt'
    user => "logstash_internal"
    password => "x-pack-test-password"
    index => "logstash-ddo-%{+xxxx.ww}"
  }
}
When I query elastic with:
GET /_cat/indices/logstash-*?v&s=index
I get the following:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open logstash-ddo- oyMaZ1jeQPCOhI8YDXnbxw 1 1 78687 0 64.8mb 32.2mb
I would expect it to be, for example:
logstash-ddo-2020-44
I have checked and re-checked the pipeline configuration file and I am out of ideas. Do you guys see anything?
Logstash does not report any errors. I'm on version 7.9.2 (dockerized).

To create an index based on the event's date, Logstash uses the date from the @timestamp field.
Since your index name is not getting the year and the week number, you are either removing or renaming the @timestamp field.
You need to keep the @timestamp field in your document; if you want the date under a different name, copy it into a new field instead, because Logstash needs @timestamp to build time-based index names.
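As an illustration, a minimal filter sketch that keeps @timestamp intact and only copies its value into an extra field (the field name event_time is just an assumption for this example):
filter {
  mutate {
    # Copy the event time into a new field; @timestamp itself stays untouched,
    # so the elasticsearch output can still expand %{+xxxx.ww} from it.
    add_field => { "event_time" => "%{@timestamp}" }
  }
}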

Related

LogStash dissect with key=value, comma

I have a pattern of logs that contain performance & statistical data. I have configured LogStash to dissect this data as CSV in order to save the values to ES.
<1>,www1,3,BISTATS,SCAN,330,712.6,2035,17.3,221.4,656.3
I am using the following LogStash filter and getting the desired results:
grok {
  match => { "Message" => "\A<%{POSINT:priority}>,%{DATA:pan_host},%{DATA:pan_serial_number},%{DATA:pan_type},%{GREEDYDATA:message}\z" }
  overwrite => [ "Message" ]
}
csv {
  separator => ","
  columns => ["pan_scan","pf01","pf02","pf03","kk04","uy05","xd06"]
}
This is currently working well for me as long as the order of the columns doesn't get messed up.
However, I want to make this logfile more meaningful and have each column name in the original log, for example:
<1>,www1,30000,BISTATS,SCAN,pf01=330,pf02=712.6,pf03=2035,kk04=17.3,uy05=221.4,xd06=656.3
This way I can keep inserting or appending key/value pairs in the middle of the process without corrupting the data. (Using LogStash 5.3)
By using @baudsp's recommendations, I was able to formulate the following. I deleted the csv{} block completely and replaced it with the kv{} block. The kv{} filter automatically created all the key/value pairs, leaving me to only mutate{} the fields into floats and integers.
json {
  source => "message"
  remove_field => [ "message", "headers" ]
}
date {
  match => [ "timestamp", "YYYY-MM-dd'T'HH:mm:ss.SSS'Z'" ]
  target => "timestamp"
}
grok {
  match => { "Message" => "\A<%{POSINT:priority}>,%{DATA:pan_host},%{DATA:pan_serial_number},%{DATA:pan_type},%{GREEDYDATA:message}\z" }
  overwrite => [ "Message" ]
}
kv {
  allow_duplicate_values => false
  field_split_pattern => ","
}
Using the above block, I was able to insert the K=V pairs anywhere in the message. Thanks again for all the help. I have added a sample code block for anyone trying to accomplish this task.
Note: I am using NLog for logging, which produces JSON outputs. From the C# code, the format looks like this.
var logger = NLog.LogManager.GetCurrentClassLogger();
logger.ExtendedInfo("<1>,www1,30000,BISTATS,SCAN,pf01=330,pf02=712.6,pf03=2035,kk04=17.3,uy05=221.4,xd06=656.3");
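For the float/integer conversion mentioned above, a minimal mutate{} sketch; the field names come from the sample message and the exact type assignments are assumptions:
mutate {
  # Cast the kv-extracted string values to numeric types
  # (field names and types assumed from the sample log line)
  convert => {
    "pf01" => "integer"
    "pf02" => "float"
    "pf03" => "integer"
    "kk04" => "float"
    "uy05" => "float"
    "xd06" => "float"
  }
}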

Logstash: Unrecognized @timestamp value, setting current time to @timestamp, original in _@timestamp field

I have JSON log messages sent to Logstash which look like:
{"@timestamp":"2017-08-10 11:32:14.619","level":"DEBUG","logger_name":"application","message":"Request processed in 1 ms"}
And Logstash is configured with:
json {
  source => "message"
}
date {
  match => ["@timestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
  timezone => "Europe/Paris"
}
But I have this warning in the logs :
[2017-08-10T11:21:16,739][WARN ][logstash.filters.json ] Unrecognized @timestamp value, setting current time to @timestamp, original in _@timestamp field {:value=>"\"2017-08-10 11:20:34.527\""}
I tried different configurations, like adding quotes around the space, or renaming the field with a mutate before the date filter (which results in the same warning, plus an error saying that the timestamp is missing), etc.
In the values stored in Elasticsearch, the timestamp is the time the log was parsed and not the original one (2-3 seconds later).
What am I missing?
I think the problem is that the field in the source message is named @timestamp, just like the default.
We solved it by renaming the field in the source and changing the config to:
date {
  match => ["apptimestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
  timezone => "Europe/Paris"
}
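For context, a minimal sketch of the resulting filter block, assuming the application now emits the field as apptimestamp instead of @timestamp:
filter {
  json {
    source => "message"
  }
  date {
    # Parse the renamed application field into @timestamp
    match => ["apptimestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
    timezone => "Europe/Paris"
  }
}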

Logstash to convert epoch timestamp

I'm trying to parse some epoch timestamps to be something more readable.
I looked around for how to parse them into a normal time, and from what I understand all I should have to do is something like this:
mutate {
  remove_field => [ "..." ]
}
grok {
  match => { 'message' => '%{NUMBER:time}%{SPACE}%{NUMBER:time2}...' }
}
date {
  match => [ "time","UNIX" ]
}
An example of a message is: 1410811884.84 1406931111.00 ....
The first two values should be UNIX time values.
My grok works, because all of the fields show in Kibana with the expected values, and the fields I've removed aren't there, so the mutate works too. The date section seems to do nothing.
From what I understand, match => [ "time","UNIX" ] should do what I want (change the value of time to a proper date format and have it show in Kibana as a field), so apparently I'm not understanding it.
The date{} filter replaces the value of @timestamp with the data provided, so you should see @timestamp with the same value as the [time] field. This is typically useful since there's some delay in the propagation, processing, and storing of the logs, so using the event's own time is preferred.
Since you have more than one date field, you'll want to use the 'target' parameter of the date filter to specify the destination of the parsed date, e.g.:
date {
  match => [ "time","UNIX" ]
  target => "myTime"
}
This would convert the string field named [time] into a date field named [myTime]. Kibana knows how to display date fields, and you can customize that in the Kibana settings.
Since you probably don't need both a string and a date version of the same data, you can remove the string version as part of the conversion:
date {
  match => [ "time","UNIX" ]
  target => "myTime"
  remove_field => [ "time" ]
}
Consider also trying with UNIX_MS for milliseconds.
date {
  timezone => "UTC"
  match => ["timestamp", "UNIX_MS"]
  target => "@timestamp"
}
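If the second epoch value ([time2]) also needs to be parsed, a second date filter with its own target is one option; a sketch, assuming [time2] is likewise a UNIX epoch in seconds:
date {
  # Parse the second epoch value into its own date field instead of @timestamp
  match => [ "time2", "UNIX" ]
  target => "myTime2"
  remove_field => [ "time2" ]
}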

Logstash add date field to logs

My application produces logs without a timestamp.
Is there a way in Logstash to append a timestamp to the logs during processing?
Something like:
mutate {
  add_field => { "timestamp" => "%{date}" }
}
Logstash adds a @timestamp field by default. You don't need to set anything additional. Logstash will take the time an event is received and add the field for you.
For example if you try this command:
LS_HOME/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
You will see an automatically created @timestamp field in your result:
"@timestamp": "2015-07-13T17:41:13.174Z"
You can change the format and timezone using the date filter or you can match a timestamp of your event (e.g. a syslog timestamp) using other filters like grok or json.
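If a separate, explicitly named field is still wanted (as in the question), a minimal sketch that copies the automatic @timestamp into a plain field called timestamp (the target field name is just an assumption):
filter {
  mutate {
    # @timestamp already exists by default; this only copies its value into another field
    add_field => { "timestamp" => "%{@timestamp}" }
  }
}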

Failed to query elasticsearch for previous event in logstash

I'm very new to Logstash and created a config file to process two different kinds of files. I created common fields for the fields that are the same in both input files, and I correlate some of the values from the request to the response. In this scenario I'm facing the following error, even though the data I'm parsing in the query section is the same, so I'm clueless about why the error is shown. I'm storing both files in the same index. If I run the files without deleting the index, the correlation works.
My correlation:
elasticsearch {
  sort => [
    {
      "partOrder.OrderRefNo" => {"order" => "asc" "ignore_unmapped" => true}
      "dlr.dlrCode" => {"order" => "asc" "ignore_unmapped" => true}
    }
  ]
  query => "type:transmit_req AND partOrder.OrderRefNo : %{partOrder.OrderRefNo} AND dlr.dlrCode : %{dlr.dlrCode}"
  fields => [
    "partOrder.totalLineNo", "partOrder.totalLineNo",
    "partOrder.totalOrderQty", "partOrder.totalOrderQty",
    "partOrder.transportMethodType", "partOrder.transportMethodType",
    "dlr.brand", "dlr.brand",
    "partOrder.orderType", "partOrder.orderType",
    "partOrder.bodId", "partOrder.bodId"
  ]
  fail_on_error => "false"
}
"
Error:
Failed to query elasticsearch for previous event {:query=>"type:transmit_req AND partOrder.OrderRefNo : BC728010 AND dlr.dlrCode : 28012", :event=>#, @metrics={}, @channel=#>, @subscriber_lock=#, @level=:warn, @subscribers={2002=>#
