Change the seconds to milliseconds in Logstash

I'm trying to compare two log files, haproxy and nginx, using the ELK stack, especially the response time. In Logstash I have created two separate conf files for haproxy and nginx. In haproxy I get the response time in milliseconds (e.g. 2334), and in nginx I get it in seconds (e.g. 1.23).
I want to convert the nginx response time to milliseconds. I tried to convert it using the ruby filter, but I'm not getting proper results, and I also think it conflicts with my current Elasticsearch index created for haproxy.
Below are my two config files:
HAProxy Logstash conf:
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{MONTH:month} %{MONTHDAY:date} %{TIME:time} %{WORD:[source]} %{WORD:[app]}\[%{DATA:[class]}\]: %{IPORHOST:[UE_IP]}:%{NUMBER:[UE_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Source_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Destination_Port]} %{IPORHOST:[WAN_IP]}:%{NUMBER:[WAN_Port]} \[%{HAPROXYDATE:[accept_date]}\] %{NOTSPACE:[frontend_name]}~ %{NOTSPACE:[backend_name]} %{NOTSPACE:[ty_name]}/%{NUMBER:[response_time]:int} %{NUMBER:[http_status_code]} %{NUMBER:[response_bytes]:int} - - ---- %{NOTSPACE:[df]} %{NOTSPACE:[df]} %{DATA:[domain_name]} %{DATA:[cache_status]} %{DATA:[domain_name]} %{URIPATHPARAM:[content]} HTTP/%{NUMBER:[http_version]}" }
add_tag => [ "response_time", "response_time" ]
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
# stdout {
# codec => rubydebug
# }
}
Nginx Logstash conf file:
input {
beats {
port => 5045
}
}
filter {
grok {
match => { "message" => "%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:content} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:response_bytes:int} \"-\" \"%{GREEDYDATA:junk}\" %{NUMBER:response_time}"}
}
ruby {
code => "event.set('response_time', event.get('response_time').to_i * 1000)"
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
}
stdout { codec => rubydebug }
}

grok {
match => { "message" => "%{IPV4:clientip} - - \[%{HTTPDATE:requesttimestamp}\] \"%{WORD:httpmethod} /\" %{NUMBER:responsecode:int} %{NUMBER:responsesize:int} \"-\" \"-\" \"-\" \"%{NUMBER:responsetimems:float}\"" }
}
ruby {
# use to_f, not to_i, so fractional seconds such as 1.23 become 1230.0 instead of 1000
code => "event.set('responsetimems', event.get('responsetimems').to_f * 1000)"
}
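Since the question also mentions a possible conflict with the existing haproxy index, one additional option (a sketch, not part of the original answer; the index names are placeholders) is to point each pipeline at its own index in its elasticsearch output, so the haproxy and nginx fields never end up in the same mapping:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # hypothetical index name; the haproxy pipeline would use its own, e.g. "haproxy-%{+YYYY.MM.dd}"
    index => "nginx-%{+YYYY.MM.dd}"
  }
}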

Related

Logstash config with filebeat issue when using both beats and file input

I am trying to configure Filebeat with Logstash. At the moment I have managed to successfully configure Filebeat with Logstash, but I am running into some issues when creating multiple conf files in Logstash.
So currently I have one Filebeat input, which is something like:
input {
beats {
port => 5044
}
}
filter {
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
    }
  }
}
And a file-input Logstash config which looks like:
input {
file {
path => "/var/log/foldername/number.log"
start_position => "beginning"
}
}
filter {
grok {
match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
}
}
output {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "numberlogtest"
}
}
The grok filter is working: I successfully managed to create two index patterns in Kibana and view the data correctly.
The problem is that when I run Logstash with both configs applied, Logstash fetches the data from number.log multiple times, the Logstash plain logs fill up with warnings, and a lot of computing resources are used; CPU goes over 80% (this is an Oracle instance). If I remove the file config from Logstash, the system runs properly.
I managed to run Logstash with each one of these config files applied individually, but not both at once.
I already added an exception in the Filebeat config:
exclude_files:
- /var/log/foldername/*.log
Logstash plain logs when running both config files:
[2023-02-15T12:42:41,077][WARN ][logstash.outputs.elasticsearch][main][39aca10fa204f31879ff2b20d5b917784a083f91c2eda205baefa6e05c748820] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"numberlogtest", :routing=>nil}, {"service"=>{"type"=>"system"}
"caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:607"}}}}}
Fixed by creating a single Logstash config with both inputs:
input {
beats {
port => 5044
}
file {
path => "**path**"
start_position => "beginning"
}
}
filter {
if [path] == "**path**" {
grok {
match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
}
}
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    if [path] == "**path**" {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index2"
      }
    } else {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index1"
      }
    }
  }
}
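This works because Logstash concatenates every config file it loads into a single pipeline, so without conditionals each event from every input passes through every filter and output, which is what caused number.log to be indexed several times. A slightly different sketch of the same idea (not from the original answer; the type value "numberlog" is a placeholder) sets a type on the file input and branches on that instead of on the path:

input {
  beats {
    port => 5044
  }
  file {
    path => "**path**"
    start_position => "beginning"
    type => "numberlog"    # hypothetical type, used only for routing
  }
}
filter {
  if [type] == "numberlog" {
    grok {
      match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
}
output {
  if [type] == "numberlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "numberlogtest"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "systemsyslogs"
    }
  }
}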

Logstash grok pattern error with custom log

I'm new to the ELK stack. I need to parse my custom logs using grok and then analyze them. Here is a sample log line:
[11/Oct/2018 09:51:47] INFO [serviceManager.management.commands.serviceManagerMonitor:serviceManagerMonitor.py:114] [2018-10-11 07:51:47.527997+00:00] SmMonitoring Module : Launching action info over the service sysstat is delayed
with this grok pattern:
\[(?<timestamp>%{MONTHDAY}\/%{MONTH}\/%{YEAR} %{TIME})\] %{LOGLEVEL:loglevel} \[%{GREEDYDATA:messagel}\] \[%{GREEDYDATA:message2}\] %{GREEDYDATA:message3}
I tried a grok debugger and it matches.
Here is the Logstash input configuration:
input {
beats {
port => 5044
}
}
and here is the output configuration:
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
and here is the whole filter:
filter {
grok {
match => { "message" => "\[(?<timestamp>%{MONTHDAY}\/%{MONTH}\/%{YEAR} %{TIME})\] %{LOGLEVEL:loglevel} \[%{GREEDYDATA:messagel}\] \[%{GREEDYDATA:message2}\] %{GREEDYDATA:message3}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
But I don't get any results in Elasticsearch.
Thank you for helping me.
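One detail worth double-checking (an observation, not an accepted answer): the date filter's pattern dd/MMM/yyyy:HH:mm:ss Z does not match the timestamp in the sample line (11/Oct/2018 09:51:47), which uses a space before the time and has no timezone offset, so those events will carry a _dateparsefailure tag. A pattern closer to the sample, assuming the timestamps are in the server's local time, would be:

date {
  match => [ "timestamp", "dd/MMM/yyyy HH:mm:ss" ]
}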

Logstash Multiline Logfile XML Parsing Filter

I am absolutely new to Logstash and I am trying to parse my multiline log entries, which are in the following format:
<log level="INFO" time="Wed May 03 08:25:03 CEST 2017" timel="1493792703368" host="host">
<msg><![CDATA[Method=GET URL=http://localhost (Vers=[Version], Param1=[param1], Param2=[param1]) Result(Content-Length=[22222], Content-Type=[text/xml; charset=utf-8]) Status=200 Times=TISP:1098/CSI:-/Me:1/Total:1099]]>
</msg>
</log>
Do you know how to implement the filter in the Logstash config so I can index the following fields in Elasticsearch:
time, host, Vers, Param1, Param2, TISP
Thank you very much.
OK, I found out how to do it. This is my pipeline.conf file, and it works:
input {
beats {
port => 5044
}
}
filter {
xml {
store_xml => false
source => "message"
xpath => [
"/log/#level", "level",
"/log/#time", "time",
"/log/#timel", "unixtime",
"/log/#host", "host_org",
"/log/#msg", "msg",
"/log/msg/text()","msg_txt"
]
}
grok {
break_on_match => false
match => ["msg_txt", "Param1=\[(?<param1>-?\w+)\]"]
match => ["msg_txt", "Param2=\[(?<param2>-?\w+)\]"]
match => ["msg_txt", "Vers=\[(?<vers>-?\d+\.\d+)\]"]
match => ["msg_txt", "TISP:(?<tisp>-?\d+)"]
match => ["unixtime", "(?<customTime>-?\d+)"]
}
if "_grokparsefailure" in [tags] {
drop { }
}
mutate {
convert => { "tisp" => "integer" }
}
date {
match => [ "customTime", "UNIX_MS"]
target => "@timestamp"
}
if "_dateparsefailure" in [tags] {
drop { }
}
}
output {
elasticsearch {
hosts => "elasticsearch:9200"
user => "user"
password => "passwd"
}
}
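The xml and grok filters above assume that each <log>...</log> entry arrives as a single event. If the lines are still shipped one by one, they have to be joined before the filter runs: with the beats input shown here that is normally done by Filebeat's multiline settings, while with a plain file input a multiline codec can do the joining in Logstash itself. A minimal sketch of the latter (the path is a placeholder):

input {
  file {
    path => "/path/to/multiline.log"    # hypothetical path
    start_position => "beginning"
    codec => multiline {
      pattern => "^<log "               # a new event starts at each opening <log ...> tag
      negate => true
      what => "previous"
    }
  }
}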

Logstash keep syslog host

I have a syslog server and the ELK stack on the same server. I have a directory for each syslog source.
I'm trying to parse syslog files with Logstash, and I'd like to keep the IP address or the hostname of the syslog source in the "host" field. At the moment I get 0.0.0.0 as the source after Logstash parsing.
My logstash.conf:
input {
file {
path => ["path/to/file.log"]
start_position => "beginning"
type => "linux-syslog"
ignore_older => 0
}
}
filter {
if [type] == "linux-syslog" {
grok {
match => {"message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:[%{POSINT:syslog_pid}])?: %{GREEDYDATA:syslog_message}" }
}
}
}
output {
elasticsearch {
hosts => ["#IP_Elastic:Port_Elastic"]
}
stdout { codec => rubydebug }
}
You can overwrite your host with your IP variable once you have parsed it. Consider this example:
Pipeline main started
{"ip":"1.2.3.4"}
{
"message" => "{\"ip\":\"1.2.3.4\"}",
"#version" => "1",
"#timestamp" => "2016-08-10T13:36:18.875Z",
"host" => "pandaadb",
"ip" => "1.2.3.4",
"#host" => "1.2.3.4"
}
I am parsing the JSON to get the IP. Then I write the IP field into the host.
The filter:
filter {
# this parses the ip json
json {
source => "message"
}
mutate {
add_field => { "@host" => "%{ip}" }
}
}
Replace %{ip} with whatever field contains your IP address.
Cheers,
Artur
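Applied to the question's config, where the grok pattern already captures the source as syslog_hostname, the same idea could look like this (a sketch, assuming the grok match succeeds and that host is a plain string field, as in older Logstash versions):

filter {
  if [type] == "linux-syslog" {
    # ... the grok filter from the question ...
    mutate {
      # overwrite the host field with the parsed syslog source
      replace => { "host" => "%{syslog_hostname}" }
    }
  }
}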

logstash as service - no logs reaching elasticsearch

I have the Kibana web app running, but it shows an index from last week.
How can I get a new index loaded for today's date? Is it created as a result of logging?
If so, then I must not be reaching the Logstash server.
I followed this article: https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04
Is there any way to run diagnostics when it is running as a service?
Any command lines I can issue to the service?
Here is my logstash config:
input {
tcp {
type => "apache"
port => 3333
}
}
filter {
if [type] == "apache" {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
}
output {
elasticsearch {
embedded => true
}
}
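Logstash's default Elasticsearch index is date-based (logstash-YYYY.MM.dd), so a new index for today only appears once events for today are actually indexed; if Kibana still shows last week's index, no new events are arriving. For diagnosing this while Logstash runs as a service, one low-tech sketch (not an accepted answer) is to add a stdout output next to the elasticsearch one and watch the Logstash console/log output; if nothing is printed, events are not reaching the TCP input at all, and the problem is on the shipper or firewall side rather than in the filter:

output {
  elasticsearch {
    embedded => true
  }
  # temporary: print every event Logstash receives, so you can tell
  # whether anything is arriving on port 3333 at all
  stdout { codec => rubydebug }
}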
