I am new to the whole Elasticsearch framework. I have downloaded and installed the logstash-input-jmx plugin and now need to test my configuration, but I can't find anywhere in the Logstash documentation exactly how to test the plugin. All the plugin documentation on GitHub has is a sentence at the bottom saying to start Logstash and test your plugin; it doesn't tell you how to accomplish that. In fact, that seems to be the standard blurb for all of the plugins, which isn't very helpful if you're coming in without any knowledge of the framework.
Here are some details for my configuration if that helps:
logstash.conf:
input {
  jmx {
    path => "file://*machinename*/D$/LS/logstash-5.1.1/config/jmx"
    polling_frequency => 15
    type => "jmx"
  }
}
filter {
  if [type] == "jmx" {
    if ("Memory.HeapMemoryUsage" in [metric_path] or "Memory.NonHeapMemoryUsage" in [metric_path]) {
      ruby {
        code => "event['memoryUsage'] = event['metric_value_number'] * 100"
        add_tag => [ "memoryUsage" ]
      }
    }
  }
}
jmx.conf:
{
  "host" : *ip address of machine*,
  "port" : *jmx listener port*,
  "queries" : [
    {
      "object_name" : "java.lang:type=Memory",
      "object_alias" : "Memory"
    }
  ]
}
TIA,
Bill
Figured it out by doing a complete uninstall/reinstall of the framework. I also found a very good tutorial on Ivan Krizsan's blog (https://www.ivankrizsan.se/2015/09/27/jmx-monitoring-with-the-elk-stack/) that was instrumental in getting the plug-in up and running.
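For anyone else stuck at the same point, here is a minimal sketch of a smoke-test pipeline: keep the jmx input, add a stdout output with the rubydebug codec, start Logstash in the foreground with bin/logstash -f <your config file>, and watch for metric events on the console before wiring up any filters. The path below is a placeholder, not my actual layout.
input {
  jmx {
    # Directory containing the JSON query files (the jmx.conf above); adjust to your install.
    path => "/path/to/logstash/config/jmx"
    polling_frequency => 15
    type => "jmx"
  }
}
output {
  # Print every polled metric so you can confirm the plugin connects and returns values.
  stdout { codec => rubydebug }
}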
I have a strange problem with a logstash filter, that was working up until yesterday.
This is my .conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  if "access.log" in [source] {
    grok {
      match => { "message" => "%{GREEDYDATA:messagebefore}\[%{HTTPDATE:real_date}\]\ %{GREEDYDATA:messageafter}" }
    }
    mutate {
      replace => { "[message]" => "%{messagebefore} %{messageafter}" }
      remove_field => [ "messagebefore" ]
      remove_field => [ "messageafter" ]
    }
    date {
      match => [ "real_date", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
The issue is that in the output, the derived fields %{messagebefore} and %{messageafter} are coming through as literal text rather than their contents.
Example:
source:/var/log/nginx/access.log message:%{messagebefore} %{messageafter}...
The strange thing is that this was working fine before yesterday afternoon. I also appreciate that this is probably not the best way to process nginx logs, but I'm using it only as an example, as the problem is affecting all of my other configuration files as well.
My environment:
ELK stack running as a Docker container on CentOS 7, derived from docker.io/sebp/elk.
Filebeat running on a CentOS 7 client.
Any ideas?
Thanks.
Solved this myself, and posting here in case anyone gets the same issue.
When building the Docker container, I inadvertently left behind another .conf file that also contained a reference to access.log. The two .conf files were clashing because Logstash concatenates every file in the config directory into a single pipeline and was processing both. I deleted the erroneous file and everything started working.
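A sketch of how to keep two overlapping files from clashing, in case deleting one isn't an option (the conditionals below are illustrative and keyed on the source field from the question):
# In the first file: only handle events from the nginx access log.
filter {
  if "access.log" in [source] {
    grok { match => { "message" => "..." } }
  }
}
# In the second file: guard its filters so they never touch those same events.
filter {
  if "access.log" not in [source] {
    # other processing here
  }
}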
With Logstash 2.3.3, the grok filter doesn't work for the last field.
To reproduce the problem, create test.conf as follows:
input {
  file {
    path => "/Users/izeye/Applications/logstash-2.3.3/test.log"
  }
}
filter {
  grok {
    match => { "message" => "%{DATA:id1},%{DATA:id2},%{DATA:id3},%{DATA:id4},%{DATA:id5}" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Run ./bin/logstash -f test.conf, and after it has started, run echo "1,2,3,4,5" >> test.log in another terminal.
I got the following output:
Johnnyui-MacBook-Pro:logstash-2.3.3 izeye$ ./bin/logstash -f test.conf
Settings: Default pipeline workers: 8
Pipeline main started
{
"message" => "1,2,3,4,5",
"#version" => "1",
"#timestamp" => "2016-07-07T07:57:42.830Z",
"path" => "/Users/izeye/Applications/logstash-2.3.3/test.log",
"host" => "Johnnyui-MacBook-Pro.local",
"id1" => "1",
"id2" => "2",
"id3" => "3",
"id4" => "4"
}
You can see the missing id5.
I'm not sure whether this is a bug or a misconfiguration.
Any hint will be appreciated.
I think it is because of how the DATA pattern is defined. Its regex is .*?, so it's a lazy match.
It's not a bug; it's how regex works (example).
But you might want to ask a regex question in order to get a more precise answer.
As a solution, you can replace the last DATA with NUMBER (or something else appropriate to your situation). GREEDYDATA would also work.
That said, for input like this the csv or dissect filters might be a better fit, as they are easier to configure and more performant.
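To make that concrete, here is a sketch of both options against the sample line from the question (only one of the two filters is needed; dissect may have to be installed as a separate plugin on 2.3.x):
filter {
  # Option 1: make the last capture non-lazy so it actually consumes the remaining text.
  grok {
    match => { "message" => "%{DATA:id1},%{DATA:id2},%{DATA:id3},%{DATA:id4},%{GREEDYDATA:id5}" }
  }
}
filter {
  # Option 2: for plain comma-separated input, csv skips regex entirely.
  csv {
    source => "message"
    columns => [ "id1", "id2", "id3", "id4", "id5" ]
  }
}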
I'm trying to configure Logstash to send mail when someone logs in to my server, but it doesn't seem to work. This is my config file in /etc/logstash/conf.d/email.conf.
My file:
input {
  file {
    type => "syslog"
    path => "/var/log/auth.log"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      pattern => [ "%{SYSLOGBASE} Failed password for %{USERNAME:user} from %{IPORHOST:host} port %{POSINT:port} %{WORD:protocol}" ]
      add_tag => [ "auth_failure" ]
    }
  }
}
output {
  email {
    tags => [ "auth_failure" ]
    to => "<admin@gmail.com>"
    from => "<alert@abc.com>"
    options => [ "smtpIporHost", "smtp.abc.com",
                 "port", "25",
                 "domail", "abc.com",
                 "userName", "alert@abc.com",
                 "password", "mypassword",
                 "authenticationType", "plain",
                 "debug", "true"
               ]
    subject => "Error"
    via => "smtp"
    body => "Here is the event line %{@message}"
    htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
  }
}
My Logstash log file /var/log/logstash/logstash.log:
{:timestamp=>"2015-03-10T11:46:41.152000+0700", :message=>"Using milestone 1 output plugin 'email'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones", :level=>:warn}
Anybody, please help!
You're not using the correct syntax in your grok filter. It should look like this:
grok {
  match => [ "message", "..." ]
}
Other minor comments:
Using tags => ["auth_failure"] for conditional filtering is deprecated. Prefer if "auth_failure" in [tags].
In the email body you're referring to the message with @message. That's deprecated too; the field is now named just message.
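Putting those pieces together, a sketch of what the corrected config could look like (SMTP options omitted since the exact option names depend on your plugin version; I've also renamed the captured host to src_ip so it doesn't collide with the event's own host field, which is just a suggestion):
filter {
  if [type] == "syslog" {
    grok {
      match => [ "message", "%{SYSLOGBASE} Failed password for %{USERNAME:user} from %{IPORHOST:src_ip} port %{POSINT:port} %{WORD:protocol}" ]
      add_tag => [ "auth_failure" ]
    }
  }
}
output {
  # Only mail out events that the grok above actually tagged.
  if "auth_failure" in [tags] {
    email {
      to => "admin@gmail.com"
      from => "alert@abc.com"
      subject => "Error"
      body => "Here is the event line %{message}"
      # ... SMTP settings as in the question ...
    }
  }
}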
I am running the following filter in a logstash config file:
filter {
  if [type] == "logstash" {
    grok {
      match => {
        "message" => [
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{DATA:mymessage}, reason:%{GREEDYDATA:reason}",
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{GREEDYDATA:mymessage}"
        ]
      }
    }
  }
}
It kind of works:
it does identify and carve out variables "timestamp", "severity", "instance", "mymessage", and "reason"
Really, what I wanted was for the text that is now in %{mymessage} to become %{message}, but when I add any sort of mutate command to this grok it stops working. (By the way, should there be a log that tells me what is breaking? I didn't see one... ironic for a logging solution not to have verbose logging.)
Here's what I tried:
filter {
  if [type] == "logstash" {
    grok {
      match => {
        "message" => [
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{DATA:mymessage}, reason:%{GREEDYDATA:reason}",
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{GREEDYDATA:mymessage}"
        ]
      }
      mutate => {
        replace => [ "message", "%{mymessage}" ]
        remove => [ "mymessage" ]
      }
    }
  }
}
So in summary I'd like to understand:
Are there log files I can look at to see why/where a failure is happening?
Why would my mutate commands illustrated above not work?
I also thought that if I never used the mymessage variable but instead just referred to message, it might automatically truncate message to just the matched pattern, but that appeared to append the results instead... what is the correct behaviour?
Using the overwrite option is the best solution, but I thought I'd address a couple of your questions directly anyway.
It depends on how Logstash is started. Normally you'd run it via an init script that passes the -l or --log option. /var/log/logstash would be typical.
mutate is a filter of its own, not a part of grok. You could have done it like this (or used rename instead of replace + remove):
grok {
  ...
}
mutate {
  replace => [ "message", "%{mymessage}" ]
  remove => [ "mymessage" ]
}
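Or, as a sketch of the rename variant mentioned above, which collapses the two steps into one (newer versions also accept the hash form rename => { "mymessage" => "message" }):
mutate {
  # Renames mymessage to message, overwriting the original message field.
  rename => [ "mymessage", "message" ]
}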
I'd do it a different way. For what you're trying to do, the overwrite option might be more apt.
Something like this:
grok {
  overwrite => "message"
  match => {
    "message" => [
      "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{DATA:message}, reason:%{GREEDYDATA:reason}",
      "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{GREEDYDATA:message}"
    ]
  }
}
This'll replace 'message' with the 'grokked' bit.
I know that doesn't directly answer your question. About all I can say is that when you start Logstash it writes to STDOUT, at least on the version I'm using, which I'm capturing and writing to a file; some of the errors get reported there.
There's a -l option to logstash that lets you specify a log file to use. This will usually show you what's going on in the parser, but bear in mind that if something doesn't match a rule, it won't necessarily tell you why it didn't.
I'm trying to use Logstash to parse out and geolocate IP addresses from a NetFlow source. It works in that the data gets into Elasticsearch, but it's not adding the geoip info. Here's the config file I'm using with Logstash:
input {
  udp {
    host => "localhost"
    port => 5555
    codec => netflow
  }
}
filter {
  geoip {
    target => "geoip"
    source => "ipv4_dst_addr"
    add_tag => ["geoip"]
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
}
output {
  stdout { }
  elasticsearch { host => "127.0.0.1" }
}
More info that might help: I'm using Logstash 1.4.2 and Elasticsearch 1.3.4.
Any luck in figuring this one out?
If not, please note that you need to use a mutate filter to convert the coordinates to float.
However, the geoip filter in Logstash 1.3 and up adds a location field directly, so you won't have to use add_field and you won't even need the conversion. If you try these two solutions, please tell me how it goes. Thank you.
A side note: the recommended Elasticsearch version to use with Logstash 1.4.2 is 1.1.1.
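A sketch of both suggestions, using the field names from your config (the convert is only needed if you keep building [geoip][coordinates] yourself; mutate's convert applies to each element of the array):
filter {
  geoip {
    source => "ipv4_dst_addr"
    target => "geoip"
    # [geoip][location] is added automatically; if you map it as geo_point in
    # Elasticsearch you can drop the add_field lines entirely.
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
  }
}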
I just spent some time digging into this, and it ends up being something of a bug in the Netflow codec code (specifically, in the IP4Addr class in netflow/util.rb).
You should be able to work around this with a mutate filter, like this:
filter {
  mutate {
    convert => {
      "[netflow][ipv4_src_addr]" => "string"
      "[netflow][ipv4_dst_addr]" => "string"
    }
  }
  geoip {
    source => "[netflow][ipv4_src_addr]"
    target => "src_geoip"
  }
  geoip {
    source => "[netflow][ipv4_dst_addr]"
    target => "dst_geoip"
  }
}
I've submitted a pull request to fix this properly, but in the meantime, try that config.