Logstash email alert

I'm trying to configure Logstash to send an email when someone logs into my server, but it doesn't seem to work. This is my config file, /etc/logstash/conf.d/email.conf:
input {
  file {
    type => "syslog"
    path => "/var/log/auth.log"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      pattern => [ "%{SYSLOGBASE} Failed password for %{USERNAME:user} from %{IPORHOST:host} port %{POSINT:port} %{WORD:protocol}" ]
      add_tag => [ "auth_failure" ]
    }
  }
}
output {
  email {
    tags => [ "auth_failure" ]
    to => "admin@gmail.com"
    from => "alert@abc.com"
    options => [ "smtpIporHost", "smtp.abc.com",
                 "port", "25",
                 "domail", "abc.com",
                 "userName", "alert@abc.com",
                 "password", "mypassword",
                 "authenticationType", "plain",
                 "debug", "true"
               ]
    subject => "Error"
    via => "smtp"
    body => "Here is the event line %{@message}"
    htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
  }
}
My Logstash log file, /var/log/logstash/logstash.log:
{:timestamp=>"2015-03-10T11:46:41.152000+0700", :message=>"Using milestone 1 output plugin 'email'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones", :level=>:warn}
Can anybody please help?

You're not using the correct syntax in your grok filter. It should look like this:
grok {
  match => ["message", "..."]
}
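Applied to your configuration, the whole filter might look like this (a sketch that reuses your original pattern and tag unchanged):
filter {
  if [type] == "syslog" {
    grok {
      # match takes the source field followed by the pattern
      match => [ "message", "%{SYSLOGBASE} Failed password for %{USERNAME:user} from %{IPORHOST:host} port %{POSINT:port} %{WORD:protocol}" ]
      add_tag => [ "auth_failure" ]
    }
  }
}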
Other minor comments:
Using tags => ["auth_failure"] on the output for conditional filtering is deprecated. Prefer if "auth_failure" in [tags].
In the email body you're referring to the message with @message. That's deprecated too; the field is named plain message.
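Putting both of those together, the output could be wrapped in a conditional like this (a sketch; your email settings stay as they are):
output {
  if "auth_failure" in [tags] {
    email {
      # ... same email options as in your config ...
      body => "Here is the event line %{message}"
    }
  }
}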

Related

logstash GROK filter along with KV plugin couldn't process the events

I am new to ELK. When I onboarded the log file below, the events went to the dead letter queue in Logstash because Logstash couldn't process them. I have written a grok filter to parse the events, but Logstash still can't process them. Any help would be appreciated.
Below is the sample log format.
25193662345 [http-nio-8080-exec-44] DEBUG c.s.b.a.m.PerformanceMetricsFilter - method=PUT status=201 appLogicTime=1, streamInTime=0, blobStorageTime=31, totalTime=33 tenantId=b9sdfs-1033-4444-aba5-csdfsdfsf, immutableBlobId=bss_c_586331/Sample_app12-sdas-157123148464.txt, blobSize=2862, domain=abc
2519366789 [http-nio-8080-exec-47] DEBUG q.s.b.y.m.PerformanceMetricsFilter - method=PUT status=201 appLogicTime=1, streamInTime=0, blobStorageTime=32, totalTime=33 tenantId=b0csdfsd-1066-4444-adf4-ce7bsdfssdf, immutableBlobId=bss_c_586334/Sample_app15-615223-157sadas6648465.txt, blobSize=2862, domain=cde
GROK filter:
dissect { mapping => { "message" => "%{NUMBER:number} [%{thread}] %{level} %{class} - %{[@metadata][msg]}" } }
kv { source => "[@metadata][msg]" field_split => "," }
Thanks
You have basically two problems in your configuration.
1.) You are using the dissect filter, not grok. Both are used to parse messages, but grok uses regular expressions to validate the value of each field, while dissect is purely positional and performs no validation: if you have a WORD value in the position of a field that expects a NUMBER, grok will fail, but dissect will not.
If your log lines always have the same pattern, you should keep using dissect, since it is faster and uses less CPU.
Your correct dissect mapping should be:
dissect {
  mapping => { "message" => "%{number} [%{thread}] %{level} %{class} - %{[@metadata][msg]}" }
}
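For comparison, a roughly equivalent grok filter might look like this (a sketch using standard grok patterns; adjust them to your data):
grok {
  # Unlike dissect, each position is validated: a non-numeric first token fails the match
  match => { "message" => "%{NUMBER:number} \[%{DATA:thread}\] %{LOGLEVEL:level} %{NOTSPACE:class} - %{GREEDYDATA:[@metadata][msg]}" }
}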
2.) The field that contains the kv message is wrong for kv as-is: its pairs are separated both by spaces and by commas, and kv won't work that way.
After your dissect filter, this is the content of [@metadata][msg]:
method=PUT status=201 appLogicTime=1, streamInTime=0, blobStorageTime=32, totalTime=33 tenantId=b0csdfsd-1066-4444-adf4-ce7bsdfssdf, immutableBlobId=bss_c_586334/Sample_app15-615223-157sadas6648465.txt, blobSize=2862, domain=cde
To solve this you should use a mutate filter to remove the commas from [@metadata][msg], then use the kv filter with its default configuration.
This should be your filter configuration:
filter {
  dissect {
    mapping => { "message" => "%{number} [%{thread}] %{level} %{class} - %{[@metadata][msg]}" }
  }
  mutate {
    gsub => ["[@metadata][msg]", ",", ""]
  }
  kv {
    source => "[@metadata][msg]"
  }
}
Your output should be something like this:
{
  "number" => "2519366789",
  "@timestamp" => 2019-11-03T16:42:11.708Z,
  "thread" => "http-nio-8080-exec-47",
  "appLogicTime" => "1",
  "domain" => "cde",
  "method" => "PUT",
  "level" => "DEBUG",
  "blobSize" => "2862",
  "@version" => "1",
  "immutableBlobId" => "bss_c_586334/Sample_app15-615223-157sadas6648465.txt",
  "streamInTime" => "0",
  "status" => "201",
  "blobStorageTime" => "32",
  "message" => "2519366789 [http-nio-8080-exec-47] DEBUG q.s.b.y.m.PerformanceMetricsFilter - method=PUT status=201 appLogicTime=1, streamInTime=0, blobStorageTime=32, totalTime=33 tenantId=b0csdfsd-1066-4444-adf4-ce7bsdfssdf, immutableBlobId=bss_c_586334/Sample_app15-615223-157sadas6648465.txt, blobSize=2862, domain=cde",
  "totalTime" => "33",
  "tenantId" => "b0csdfsd-1066-4444-adf4-ce7bsdfssdf",
  "class" => "q.s.b.y.m.PerformanceMetricsFilter"
}
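As an alternative to the mutate/gsub step, newer versions of the kv filter have a field_split_pattern option that can handle the mixed separators directly (a sketch; check that your kv filter version supports it):
kv {
  source => "[@metadata][msg]"
  # Split on a comma followed by whitespace, or on plain whitespace
  field_split_pattern => ",?\s+"
}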

Retrieving RESTful GET parameters in logstash

I am trying to get Logstash to parse key-value pairs in an HTTP GET request from my ELB log files.
The request field looks like:
http://aaa.bbb/get?a=1&b=2
I'd like there to be a field for a and b in the log line above, and I am having trouble figuring it out.
My Logstash conf (formatted for clarity) is below; it does not load any additional key fields. I assume that I need to split off the address portion of the URI, but have not figured that out.
input {
  file {
    path => "/home/ubuntu/logs/**/*.log"
    type => "elb"
    start_position => "beginning"
    sincedb_path => "log_sincedb"
  }
}
filter {
  if [type] == "elb" {
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}
        %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int}
        %{IP:backend_ip}:%{NUMBER:backend_port:int}
        %{NUMBER:request_processing_time:float}
        %{NUMBER:backend_processing_time:float}
        %{NUMBER:response_processing_time:float}
        %{NUMBER:elb_status_code:int}
        %{NUMBER:backend_status_code:int}
        %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int}
        %{QS:request}" ]
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    kv {
      field_split => "&?"
      source => "request"
      exclude_keys => ["callback"]
    }
  }
}
output {
  elasticsearch { host => localhost }
}
kv will take a URL and split out the params. This config works:
input {
  stdin { }
}
filter {
  mutate {
    add_field => { "request" => "http://aaa.bbb/get?a=1&b=2" }
  }
  kv {
    field_split => "&?"
    source => "request"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
stdout shows:
{
  "request" => "http://aaa.bbb/get?a=1&b=2",
  "a" => "1",
  "b" => "2"
}
That said, I would encourage you to create your own versions of the default URI patterns so that they set fields. You can then pass the querystring field off to kv. It's cleaner that way.
UPDATE:
For "make your own patterns", I meant to take the existing ones and modify them as needed. In logstash 1.4, installing them was as easy as putting them in a new file the 'patterns' directory; I don't know about patterns for >1.4 yet.
MY_URIPATHPARAM %{URIPATH}(?:%{URIPARAM:myuriparams})?
MY_URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{MY_URIPATHPARAM})?
Then you could use MY_URI in your grok{} pattern and it would create a field called myuriparams that you could feed to kv{}.
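For example, the grok and kv stages might then look like this (a sketch, assuming the two patterns above are installed in your patterns directory):
grok {
  match => [ "request", "%{MY_URI}" ]
}
kv {
  # myuriparams holds just the querystring, e.g. "?a=1&b=2"
  field_split => "&?"
  source => "myuriparams"
}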

logstash: how to extract data from a log4j message?

I'm trying to extract data from my log4j message with Logstash.
The message looks like this:
Method findAll - Start by : bokc
I would like to extract the method name, "findAll", and the user, "bokc".
How can I do this?
I use Logstash 1.5.2 and my config is:
input {
  log4j {
    mode => "server"
    type => "log4j-artemis"
    port => 4560
  }
}
filter {
  multiline {
    type => "log4j-artemis"
    pattern => "^\\s"
    what => "previous"
  }
  mutate {
    add_field => [ "source_ip", "%{host}" ]
  }
}
Use a grok filter:
filter {
  grok {
    match => [
      "message",
      "^Method %{WORD:method} - Start by : %{USER:user}"
    ]
    tag_on_failure => []
  }
}
This extracts the two words into the fields "method" and "user". Setting tag_on_failure to an empty array makes sure that non-matching messages aren't tagged with _grokparsefailure; since most messages aren't supposed to match the pattern, it doesn't make sense to mark them as failures.
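With your sample line, the event would gain fields like this (a sketch of stdout { codec => rubydebug } output, other fields omitted):
{
  "message" => "Method findAll - Start by : bokc",
  "method" => "findAll",
  "user" => "bokc"
}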

Logstash. Get fields by position number

Background
I have this setup: logs from my app go through rsyslog to a central log server, then to Logstash and Elasticsearch.
Logs from the app are pure JSON, but rsyslog adds "timestamp", "app name", and "server name" fields to each log, so the log becomes:
timestamp app-name server-name [JSON]
Question
How can I remove the first three fields with Logstash filters?
Can I get fields by position number (like in awk) and do something like:
filter {
  somefilter_name {
    remove_field => $1, $2, $3
  }
}
Or maybe my approach is totally wrong and I should do this another way?
Thank you!
Use grok{} to match them (they may be useful on their own!) and put the remainder of the event back into the [message] field:
Given input like:
2015-06-16 13:37:30 myApp myServer { "jsonField": "jsonValue" }
And this config:
grok {
  match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{WORD:app} %{WORD:server} %{GREEDYDATA:message}" ]
  overwrite => [ "message" ]
}
json {
  source => "message"
}
Will produce this document:
{
  "message" => "{ \"jsonField\": \"jsonValue\" }",
  "@version" => "1",
  "@timestamp" => "2015-06-16T20:38:55.658Z",
  "host" => "0.0.0.0",
  "timestamp" => "2015-06-16 13:37:30",
  "app" => "myApp",
  "server" => "myServer",
  "jsonField" => "jsonValue"
}
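If you decide you don't want to keep the extracted fields after all, a mutate filter can drop them (a sketch; field names as in the grok above, and note that dropping timestamp means you can no longer feed it to a date filter):
mutate {
  # Drop the rsyslog-added fields once the JSON has been parsed
  remove_field => [ "timestamp", "app", "server" ]
}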

Logstash not adding fields

I am using Logstash 1.4.2 and I have the following conf file.
I would expect to see the options for "received_at", "received_from", and "description" in the "Fields" section on the left in Kibana, but I don't.
I see:
@timestamp
@version
_id
_index
_type
host
path
I do see the following in the _source section on the right side...
received_at:2015-05-11 14:19:40 UTC received_from:PGP02 description:Error1!
So how come these don't appear in the list of "Popular Fields"?
I'd like to filter the right side to not show EVERY field in the _source section on the right. Excuse the redaction blocks.
input {
  file {
    path => "C:/ServerErrlogs/office-log.txt"
    start_position => "beginning"
    sincedb_path => "c:/tools/logstash-1.4.2/office-log.sincedb"
    tags => ["product_qa", "office"]
  }
  file {
    path => "C:/ServerErrlogs/dis-log.txt"
    start_position => "beginning"
    sincedb_path => "c:/tools/logstash-1.4.2/dis-log.sincedb"
    tags => ["product_qa", "dist"]
  }
}
filter {
  grok {
    match => ["path", "%{GREEDYDATA}/%{GREEDYDATA:filename}\.log"]
    match => [ "message", "%{TIMESTAMP_ISO8601:logdate}: %{LOGLEVEL:loglevel} (?<logmessage>.*)" ]
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{host}" ]
  }
  date {
    match => [ "logdate", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSSSSSSSS" ]
  }
  # logdate is now parsed into timestamp, remove original log message too
  mutate {
    remove_field => ['message', 'logdate' ]
    add_field => [ "description", "Error1!" ]
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "0.0.0.x"
  }
}
Update:
I have tried searching with a query like:
tags: data AND loglevel : INFO
then saving this query and reloading the page.
But I still don't see loglevel appearing under "Popular Fields".
If the fields don't appear on the left side, it's probably a kibana caching problem. Go to Settings->Indices, select your index, and click the orange Refresh button.
I had the same issue with Logstash not adding fields, and after quite a lot of searching and testing other things, I suddenly had the solution (but I'm using the logstash-logback-encoder, so I already have JSON; if you don't, you need to transform things into JSON in the Logstash "input" phase).
I added a "json" filter plugin, and that did the magic for me:
filter {
  json {
    source => "message"
  }
}
