How to use the elapsed filter - Logstash

I am working with the Elapsed filter. I read the guide for the Elapsed filter in Logstash, then made a sample config file and a csv to test how it works, but it does not seem to be working: nothing changes in the data uploaded to ES. I have attached the csv file and the config code. Can you give some examples of how to use the elapsed filter?
Here's my csv data:
Here's my config file:
input {
file {
path => "/home/paulsteven/log_cars/aggreagate.csv"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
csv {
separator => ","
quote_char => "%"
columns => ["state","city","haps","ads","num_id","serial"]
}
elapsed {
start_tag => "taskStarted"
end_tag => "taskEnded"
unique_id_field => "num_id"
}
}
output {
elasticsearch {
hosts => "localhost:9200"
index => "el03"
document_type => "details"
}
stdout{}
}
Output in ES:
{
"city" => "tirunelveli",
"path" => "/home/paulsteven/log_cars/aggreagate.csv",
"num_id" => "2345-1002-4501",
"message" => "tamil nadu,tirunelveli,hap0,ad1,2345-1002-4501,1",
"#version" => "1",
"serial" => "1",
"haps" => "hap0",
"state" => "tamil nadu",
"host" => "smackcoders",
"ads" => "ad1",
"#timestamp" => 2019-05-06T10:03:51.443Z
}
{
"city" => "chennai",
"path" => "/home/paulsteven/log_cars/aggreagate.csv",
"num_id" => "2345-1002-4501",
"message" => "tamil nadu,chennai,hap0,ad1,2345-1002-4501,5",
"#version" => "1",
"serial" => "5",
"haps" => "hap0",
"state" => "tamil nadu",
"host" => "smackcoders",
"ads" => "ad1",
"#timestamp" => 2019-05-06T10:03:51.447Z
}
{
"city" => "kottayam",
"path" => "/home/paulsteven/log_cars/aggreagate.csv",
"num_id" => "2345-1002-4501",
"message" => "kerala,kottayam,hap1,ad2,2345-1002-4501,9",
"#version" => "1",
"serial" => "9",
"haps" => "hap1",
"state" => "kerala",
"host" => "smackcoders",
"ads" => "ad2",
"#timestamp" => 2019-05-06T10:03:51.449Z
}
{
"city" => "Jalna",
"path" => "/home/paulsteven/log_cars/aggreagate.csv",
"num_id" => "2345-1002-4501",
"message" => "mumbai,Jalna,hap2,ad3,2345-1002-4501,13",
"#version" => "1",
"serial" => "13",
"haps" => "hap2",
"state" => "mumbai",
"host" => "smackcoders",
"ads" => "ad3",
"#timestamp" => 2019-05-06T10:03:51.452Z
}

You have to tag your events so that Logstash can find the start / end tags.
Basically, you have to know when an event is considered a start event and when it is an end event.
The elapsed filter plugin works only on pairs of events (for example a request event and a response event, in order to get the latency between them).
Both kinds of event need to carry an ID field which uniquely identifies that particular task; the name of this field is stored in unique_id_field.
For your example you have to identify a pattern for the start and end events. Let's say your csv has a type column (see the code below): when type contains "START" the line is considered a start event, and when it contains "END" it is an end event, pretty straightforward. You also need an id column that stores the unique identifier.
filter {
csv {
separator => ","
quote_char => "%"
columns => ["state","city","haps","ads","num_id","serial", "type", "id"]
}
grok {
match => { "type" => ".*START.*" }
add_tag => [ "taskStarted" ]
}
grok {
match => { "type" => ".*END.*" }
add_tag => [ "taskTerminated" ]
}
elapsed {
start_tag => "taskStarted"
end_tag => "taskTerminated"
unique_id_field => "id"
}
}
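For illustration only (your actual csv is not shown here), two rows with the extra type and id columns could look like this; both lines share the same id so the filter can pair them:
tamil nadu,tirunelveli,hap0,ad1,2345-1002-4501,1,START,task-001
tamil nadu,chennai,hap0,ad1,2345-1002-4501,5,END,task-001
When the END line arrives, the elapsed filter should enrich that event with an elapsed_time field (the difference in seconds between the two events' @timestamp values) and an "elapsed" tag.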
I feel like your need is different.
If you want to aggregate more than two events, for example all the events with the same value for the state column, please check out the aggregate filter plugin.

Related

I am trying to pick a field from one event to another using the aggregate filter

I have log files from which I am able to get fields based on two different if/grok statements and patterns. The output from the two looks like below:
{
timestamp" => 2021-06-09T03:08:30.943Z,
"Loc" => "91340",
"#version" => "1",
"#timestamp" => 2021-07-17T04:09:36.438Z,
"location" => 274.05292,
"speed" => 2.6279999999999997,
"target_location" => 261.11999999999995,
"host" => "AUDPRWL00192",
"path" => "C:/ELK/LOGS/91340____________090621_021536_2653_ATO_B.txt",
}
{
"ID" => "066",
"host" => "AUDPRWL00192",
"MESSAGE" => "0560BFC0BC00C8005023AE00164260BFC0BC6B5DDC5B",
"timestamp" => 2021-06-09T03:08:27.540Z,
"path" => "C:/ELK/LOGS/91340____________090621_021536_2653_ATO_B.txt",
"Loc" => "91340",
"#version" => "1",
"#timestamp" => 2021-07-17T04:09:36.428Z
I am trying to aggregate so that I can pick values from the previous event (speed and location), so that the output I can send to Elastic is:
{
"ID" => "066",
"host" => "AUDPRWL00192",
"MESSAGE" => "0560BFC0BC00C8005023AE00164260BFC0BC6B5DDC5B",
"timestamp" => 2021-06-09T03:08:27.540Z,
"path" => "C:/ELK/LOGS/91340____________090621_021536_2653_ATO_B.txt",
"Loc" => "91340",
"speed" => 2.6279999999999997,
"location" => 274.05292,
"#version" => "1",
"#timestamp" => 2021-07-17T04:09:36.428Z
}
The aggregate filter I am trying is:
aggregate {
task_id => "%{host}%{path}"
code => "map['location'] = event.get('[location]')"
map_action => "create"
}
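A hedged sketch of how the stored map values could be read back onto the later event (assuming the speed/location event always arrives before the matching ID event, and that Logstash runs with a single pipeline worker, which the aggregate filter requires):
filter {
  # first event type: stash speed and location in the map keyed by host + path
  if [speed] {
    aggregate {
      task_id => "%{host}%{path}"
      code => "map['speed'] = event.get('speed'); map['location'] = event.get('location')"
      map_action => "create_or_update"
    }
  }
  # second event type: copy the stashed values onto this event
  if [ID] {
    aggregate {
      task_id => "%{host}%{path}"
      code => "event.set('speed', map['speed']); event.set('location', map['location'])"
      map_action => "update"
    }
  }
}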

Replace @timestamp in Logstash

I'm going crazy with my Logstash configuration.
I can't find a way to replace the @timestamp field with another one.
Here is what Logstash receives:
{
"offset" => 6718968,
"Varnish_txid" => "639657758",
"plateform" => "cdnfronts",
"Referer" => "-",
"input_type" => "log",
"respsize" => "281",
"source" => "/var/log/varnish/varnish4xx-5xx.log",
"UA" => "Microsoft-WebDAV-MiniRedir/5.1.2600",
"type" => "varnish-logs",
"tags" => [
[0] "json",
[1] "varnish",
[2] "beats_input_codec_json_applied",
[3] "_dateparsefailure"
],
"st_snt2c_or_sntfromb" => "405",
"RemoteHost" => "32.26.21.21",
"#timestamp" => 2017-02-14T13:38:47.808Z,
"Varnish.Handling" => "pass",
"tot_bytes_rcvby_c_or_sntby_b" => "-",
"time_req_rcv4c_or_snt4b" => "[14/Feb/2017:14:38:44 +0100]",
"#version" => "1",
"beat" => {
"hostname" => "cdn1",
"name" => "cdn1",
"version" => "5.1.2"
},
"host" => "cdn1",
"time_1st_byte" => "0.010954",
"Varnish_side" => "c",
"reqfirstline" => "OPTIONS http://a.toto.com/ HTTP/1.1"
}
Here is my Logstash conf:
input {
beats {
port => 5000
codec => "json"
ssl => true
ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
}
}
filter {
if "json" in [tags] {
json {
source => "message"
}
if "varnish" in [tags] {
date {
locale => "en"
match => [ "[time_req_rcv4c_or_snt4b]","dd/MMM/yyyy:HH:mm:ss Z" ]
remove_field => "[time_req_rcv4c_or_snt4b]"
}
}
}
}
output {
if "varnish" in [tags] {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "logstash-varnish-%{+YYYY.MM.dd}"
}
} else {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
}
stdout {
codec => rubydebug
}
}
I tried:
match => [ "time_req_rcv4c_or_snt4b","dd/MMM/yyyy:HH:mm:ss Z" ]
remove_field => "time_req_rcv4c_or_snt4b"
and
match => [ "[time_req_rcv4c_or_snt4b]","dd/MMM/yyyy:HH:mm:ss Z" ]
remove_field => "[time_req_rcv4c_or_snt4b]"
Can anybody explain what I missed? I haven't found anything relevant on Google so far.
From your output:
"time_req_rcv4c_or_snt4b" => "[14/Feb/2017:14:38:44 +0100]",
Your date field has [] around it, so you need to match those in your date pattern or strip them off when you first match the date.
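As a hedged sketch (not verbatim from the answer above), either option should work: match the brackets as literal characters in the date pattern, or strip them with mutate gsub before the date filter runs.
filter {
  # option 1: include the surrounding brackets as literal characters in the pattern
  date {
    locale => "en"
    match => [ "time_req_rcv4c_or_snt4b", "[dd/MMM/yyyy:HH:mm:ss Z]" ]
    remove_field => [ "time_req_rcv4c_or_snt4b" ]
  }
  # option 2 (alternative): strip the brackets first, then keep the original pattern
  # mutate {
  #   gsub => [ "time_req_rcv4c_or_snt4b", "[\\[\\]]", "" ]
  # }
}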

How to process http posted files in logstash - line by line?

I successfully configured logstash to process csv files from the file system and put them into Elastic for further analysis.
But our ELK is heavily separated from the original source of the csv files, so I thought about sending the csv files via http to logstash instead of using a file system.
The issue is that if I use the "http" input, the whole file is taken and processed as one big chunk, and the csv filter only recognizes the first line. As mentioned, the same file works via the "file" input.
My Logstash config is like this:
input {
# http {
# host => "localhost"
# port => 8080
# }
file {
path => "/media/sample_files/debit_201606.csv"
type => "items"
start_position => "beginning"
}
}
filter {
csv {
columns => ["Created", "Direction", "Member", "Point Value", "Type", "Sub Type"]
separator => " "
convert => { "Point Value" => "integer" }
}
date {
match => [ "Created", "YYYY-MM-dd HH:mm:ss" ]
timezone => "UTC"
}
}
output {
# elasticsearch {
# action => "index"
# hosts => ["localhost"]
# index => "logstash-%{+YYYY.MM.dd}"
# workers => 1
# }
stdout {
codec => rubydebug
}
}
My goal is to pass the csv via curl, so I switch to the commented-out part of the input section above and then use curl to pass the file:
curl http://localhost:8080/ -T /media/samples/debit_201606.csv
What do I need to do to get Logstash to process the csv line by line?
I tried this and I think what you need to do is to split your input. Here's how you do that:
My configuration:
input {
http {
port => 8787
}
}
filter {
split {}
csv {}
}
output {
stdout { codec => rubydebug }
}
And for my test I created a csv file looking like this:
artur@pandaadb:~/tmp/logstash$ cat test.csv
a,b,c
d,e,f
g,h,i
And now for the test:
artur@pandaadb:~/dev/logstash/conf3$ curl localhost:8787 -T ~/tmp/logstash/test.csv
Outputs:
{
"message" => "a,b,c",
"#version" => "1",
"#timestamp" => "2016-08-01T15:27:17.477Z",
"host" => "127.0.0.1",
"headers" => {
"request_method" => "PUT",
"request_path" => "/test.csv",
"request_uri" => "/test.csv",
"http_version" => "HTTP/1.1",
"http_host" => "localhost:8787",
"http_user_agent" => "curl/7.47.0",
"http_accept" => "*/*",
"content_length" => "18",
"http_expect" => "100-continue"
},
"column1" => "a",
"column2" => "b",
"column3" => "c"
}
{
"message" => "d,e,f",
"#version" => "1",
"#timestamp" => "2016-08-01T15:27:17.477Z",
"host" => "127.0.0.1",
"headers" => {
"request_method" => "PUT",
"request_path" => "/test.csv",
"request_uri" => "/test.csv",
"http_version" => "HTTP/1.1",
"http_host" => "localhost:8787",
"http_user_agent" => "curl/7.47.0",
"http_accept" => "*/*",
"content_length" => "18",
"http_expect" => "100-continue"
},
"column1" => "d",
"column2" => "e",
"column3" => "f"
}
{
"message" => "g,h,i",
"#version" => "1",
"#timestamp" => "2016-08-01T15:27:17.477Z",
"host" => "127.0.0.1",
"headers" => {
"request_method" => "PUT",
"request_path" => "/test.csv",
"request_uri" => "/test.csv",
"http_version" => "HTTP/1.1",
"http_host" => "localhost:8787",
"http_user_agent" => "curl/7.47.0",
"http_accept" => "*/*",
"content_length" => "18",
"http_expect" => "100-continue"
},
"column1" => "g",
"column2" => "h",
"column3" => "i"
}
What the split filter does is:
It takes your input message (which is one String including the new-lines) and splits it by the configured value (which by default is a new-line). Then it cancels the original event and re-submits the split events to logstash. It is important that you execute the split before you execute the csv filter.
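For reference, a slightly more explicit version of the same filter with the defaults spelled out (a sketch; nothing here beyond the plugins' standard options):
filter {
  split {
    field => "message"   # default: split the message field
    terminator => "\n"   # default: one event per line
  }
  csv {
    separator => ","
    # without a columns list, the csv filter auto-names fields column1, column2, ...
  }
}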
I hope that answers your question!
Artur

logstash - dynamic field names

I have a problem with dynamic field names in my Logstash configuration.
This is my test config:
input {
generator {
lines => [ "May 15 13:42:55 logstash puppet-agent[3551]: Finished catalog run in 43",
"May 16 14:57:07 logstash puppet-agent[3551]: Starting Puppet client version" ]
count => 7
}
}
filter {
grok {
match => [ "message", "%{SYSLOGBASE} %{WORD:log}.*" ]
}
if "Starting" in [log] {
metrics {
meter => [ "%{logsource}.%{log}" ]
add_tag => [ "metric" ]
add_field => { "server" => "%{logsource}"
"bad" => "true" }
clear_interval => 5
}
}
}
output {
stdout { codec => rubydebug }
}
And here is my output (just the end of the output):
{
"message" => "May 15 13:42:55 logstash puppet-agent[3551]: Finished catalog run in 43",
"#version" => "1",
"#timestamp" => "2016-06-07T07:37:50.138Z",
"host" => "logstash.test.lan",
"sequence" => 6,
"timestamp" => "May 15 13:42:55",
"logsource" => "test",
"program" => "puppet-agent",
"pid" => "3551",
"log" => "Finished"
}
{
"message" => "May 16 14:57:07 logstash puppet-agent[3551]: Starting Puppet client version",
"#version" => "1",
"#timestamp" => "2016-06-07T07:37:50.138Z",
"host" => "logstash.test.lan",
"sequence" => 6,
"timestamp" => "May 16 14:57:07",
"logsource" => "test",
"program" => "puppet-agent",
"pid" => "3551",
"log" => "Starting"
}
{
"#version" => "1",
"#timestamp" => "2016-06-07T07:37:50.288Z",
"message" => "Counting: 7",
"logstash.Starting" => {
"count" => 7,
"rate_1m" => 0.0,
"rate_5m" => 0.0,
"rate_15m" => 0.0
},
"server" => "%{logsource}",
"bad" => "true",
"tags" => [
[0] "metric"
]
}
Why doesn't the field server have logstash as its value from the input logs? %{logsource} works for the meter option, so why not for add_field?
Thanks for the help.
When a log event is received, the SYSLOGBASE information is extracted from the content. This is where the %{logsource} value is defined. If the event isn't coming from a log entry that contains SYSLOGBASE information, then logsource will be undefined.
When you receive a start message, logsource is defined in scope and is added to your message.
The metrics plugin is generating a new message per interval. This message is generated from scratch so it does not have the value of logsource or anything else that would normally be obtained from an individual log entry.
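To make the scoping concrete, a minimal illustration (not a full workaround): a sprintf reference like %{logsource} only resolves against fields present on the event currently being processed, so it works inside the grok that parsed the log line but stays literal on the flushed metric event.
filter {
  grok {
    match => [ "message", "%{SYSLOGBASE} %{WORD:log}.*" ]
    # this event carries logsource, so the reference resolves here
    add_field => { "server" => "%{logsource}" }
  }
}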

Logstash parse field issue

I have a log printed as follows:
"message" => "....",
"host" => "10.10.12.13",
"#version" => "1",
"#timestamp" => "2016-04-13T01:52:43.535Z",
"DISMAN-EVENT-MIB::sysUpTimeInstance" => "22 days, 16:33:23.24",
"SNMP-MIB::OID_0" => "example::bgpPeerState",
"source_ip" => "10.10.12.13"
I want to parse the string based on the specific prefix, add a field for it, and remove the original:
"SNMP-MIB::OID_0" => "example::bgpPeerState"
It should look like below:
"message" => "....",
"host" => "10.10.12.13",
"#version" => "1",
"#timestamp" => "2016-04-13T01:52:43.535Z",
"type" => "snmptrap",
"DISMAN-EVENT-MIB::sysUpTimeInstance" => "22 days, 16:33:23.24",
"example" => "bgpPeerState",
"source_ip" => "10.10.12.13"
My conf:
filter
{
if "example" in [SNMP-MIB::OID_0] {
# I don't know how to parse it and add a field ???
}
else
{
.......
}
}
As always, many thanks for your help!
Use the kv filter:
filter {
if "example" in [SNMP-MIB::OID_0] {
kv {
source => "SNMP-MIB::OID_0"
value_split => ":"
trim => ":"
remove_field => "SNMP-MIB::OID_0"
}
}
}
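If the kv trim option is not available on your Logstash version (newer releases split it into trim_key and trim_value), a hedged alternative is a ruby filter that splits on the literal "::" (a sketch, assuming the Logstash 5.x+ event API):
filter {
  if "example" in [SNMP-MIB::OID_0] {
    ruby {
      code => "
        v = event.get('SNMP-MIB::OID_0')
        if v && v.include?('::')
          key, value = v.split('::', 2)
          event.set(key, value)            # adds e.g. example => bgpPeerState
          event.remove('SNMP-MIB::OID_0')
        end
      "
    }
  }
}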
