Can't get logstash to geoip locate from Netflow input - logstash

I'm trying to use Logstash to parse out and geolocate IP addresses from a Netflow source. The data makes it into Elasticsearch, but the geoip info is never added. Here's the config file I'm using in Logstash:
input {
  udp {
    host => "localhost"
    port => 5555
    codec => netflow
  }
}
filter {
  geoip {
    target => "geoip"
    source => "ipv4_dst_addr"
    add_tag => ["geoip"]
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
}
output {
  stdout { }
  elasticsearch { host => "127.0.0.1" }
}
More info that might help: I'm using Logstash 1.4.2 and Elasticsearch 1.3.4.

Any luck figuring this one out?
If not, note that you need a mutate filter to convert the coordinates to float.
However, the geoip filter in Logstash 1.3 and up adds a location field directly, so you won't need the add_field entries or the conversion at all. If you try these two solutions, please let me know how it goes. Thank you.
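For reference, a minimal sketch of that mutate approach, assuming the [geoip][coordinates] field is the one built by the add_field entries in your config:
filter {
  geoip {
    source => "ipv4_dst_addr"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    # Elasticsearch needs numeric (float) values to treat this as a geo_point
    convert => [ "[geoip][coordinates]", "float" ]
  }
}
On Logstash 1.3+ the geoip filter also populates [geoip][location] on its own, so that field can be used directly instead.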
A side note: the recommended Elasticsearch version to use with Logstash 1.4.2 is 1.1.1.

I just spent some time digging into this, and it turns out to be a bug in the Netflow codec code (specifically, in the IP4Addr class in netflow/util.rb).
You should be able to work around this with a mutate filter, like this:
filter {
  mutate {
    convert => {
      "[netflow][ipv4_src_addr]" => "string"
      "[netflow][ipv4_dst_addr]" => "string"
    }
  }
  geoip {
    source => "[netflow][ipv4_src_addr]"
    target => "src_geoip"
  }
  geoip {
    source => "[netflow][ipv4_dst_addr]"
    target => "dst_geoip"
  }
}
I've submitted a pull request to fix this properly, but in the meantime, try that config.

Related

Can we configure Logstash input to listen only to a particular set of hosts

Currently my Logstash input is listening to Filebeat on port XXXX. My requirement is to collect log data only from particular hosts (let's say only from web servers). I don't want to modify the Filebeat configuration directly on the servers, but I want to allow only the web servers' logs in.
Could anyone suggest how to configure Logstash in this scenario? The following is my Logstash input configuration.
input {
  beats {
    port => 50XX
  }
}
In a word, "no": you cannot configure the input to restrict which hosts it will accept input from. What you can do is drop events from hosts you are not interested in. If the set of hosts you want to accept input from is small, then you could do this using a conditional:
if [beat][hostname] not in [ "hosta", "hostb", "hostc" ] { drop {} }
Similarly, if your hostnames follow a fixed pattern, you might be able to do it using a regexp:
if [beat][hostname] !~ /web\d+$/ { drop {} }
would drop events from any host whose name did not end in web followed by a number.
If you have a large set of hosts, you could use a translate filter to determine if they are in the set. For example, if you create a CSV file with a list of hosts
hosta,1
hostb,1
hostc,1
then do a lookup using
translate {
  field => "[beat][hostname]"
  dictionary_path => "/some/path/foo.csv"
  destination => "[@metadata][field]"
  fallback => "dropMe"
}
if [@metadata][field] == "dropMe" { drop {} }
@Badger - Thank you for your response!
As you rightly mentioned, I have a large number of hosts, and all my web servers follow a naming convention (for example, xxxwebxxx). Could you please explain the following?
translate {
  field => "[beat][hostname]"
  dictionary_path => "/some/path/foo.csv"
  destination => "[@metadata][field]"
  fallback => "dropMe"
}
if [@metadata][field] == "dropMe" { drop {} }
Also, please suggest how to add the above to my logstash.conf. Please find below how my logstash.conf currently looks:
input {
  beats {
    port => 5xxxx
  }
}
filter {
  if [type] == "XXX" {
    grok {
      match => [ "message", '"%{TIMESTAMP_ISO8601:logdate}"\t%{GREEDYDATA}' ]
    }
    grok {
      match => [ "message", 'AUTHENTICATION-(?<xxx_status_code>[0-9]{3})' ]
    }
    grok {
      match => [ "message", 'id=(?<user_id>%{DATA}),' ]
    }
    if ([user_id] =~ "_agent") {
      drop {}
    }
    grok {
      match => [ "message", '%{IP:clientip}' ]
    }
    date {
      match => [ "logdate", "ISO8601", "YYYY-MM-dd HH:mm:ss" ]
      locale => "en"
    }
    geoip {
      source => "clientip"
    }
  }
}
output {
  elasticsearch {
    hosts => ["hostname:port"]
  }
  stdout { }
}
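For illustration only, a minimal sketch of how the hostname check suggested above could sit at the top of that filter block, assuming all web server names contain "web" as in the xxxwebxxx convention:
filter {
  # sketch: drop anything not coming from a web server before the other filters run
  if [beat][hostname] !~ /web/ {
    drop {}
  }
  if [type] == "XXX" {
    # ... existing grok / date / geoip filters from the config above ...
  }
}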

Logstash configuration variable expansion

I have a strange problem with a logstash filter, that was working up until yesterday.
This is my .conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  if "access.log" in [source] {
    grok {
      match => { "message" => "%{GREEDYDATA:messagebefore}\[%{HTTPDATE:real_date}\]\ %{GREEDYDATA:messageafter}" }
    }
    mutate {
      replace => { "[message]" => "%{messagebefore} %{messageafter}" }
      remove_field => [ "messagebefore" ]
      remove_field => [ "messageafter" ]
    }
    date {
      match => [ "real_date", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
The issue is that in the output, the derived variables %{messagebefore} and %{messageafter} are coming through as literal text rather than their contents.
Example:
source:/var/log/nginx/access.log message:%{messagebefore} %{messageafter}...
The strange thing is that this was working fine before yesterday afternoon. I also appreciate that this is probably not the best way to process nginx logs, but I'm using this one as an example only as it's affecting all of my other configuration files as well.
My environment:
ELK stack running as a docker container on Centos 7 derived from docker.io/sebp/elk.
Filebeat running on Centos 7 client.
Any ideas?
Thanks.
Solved this myself, and posting here in case anyone gets the same issue.
When building the docker container, I inadvertently left behind another .conf file that also contained a reference to access.log. The two .conf files were clashing, as Logstash was processing both. I deleted the erroneous file and everything started working.
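For anyone hitting the same symptom: Logstash concatenates every .conf file in its configuration directory into a single pipeline, so a leftover file's filters run on the same events. A hypothetical sketch (the file name and contents are assumed, not from the thread) of how a stale duplicate can produce the literal %{messagebefore} %{messageafter} text:
# leftover.conf (hypothetical stale file, merged into the same pipeline)
filter {
  if "access.log" in [source] {
    # if the other file's mutate has already rewritten [message],
    # this grok no longer matches, so messagebefore/messageafter are never set
    grok {
      match => { "message" => "%{GREEDYDATA:messagebefore}\[%{HTTPDATE:real_date}\]\ %{GREEDYDATA:messageafter}" }
    }
    # with the grok failed, these sprintf references stay as literal text
    mutate {
      replace => { "[message]" => "%{messagebefore} %{messageafter}" }
    }
  }
}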

Data missed in Logstash?

A lot of data is being missed in Logstash version 5.0.
Is it a serious bug? I have changed the config file so many times and it's useless; data loss happens again and again. How can I use Logstash to collect log events properly?
Any reply will be appreciated.
Logstash is all about reading logs from a specific location and, based on the information you are interested in, creating an index in Elasticsearch; other outputs are also possible.
Example of a Logstash conf:
input {
  file {
    # PLEASE SET APPROPRIATE PATH WHERE LOG FILE AVAILABLE
    #type => "java"
    type => "json-log"
    path => "d:/vox/logs/logs/vox.json"
    start_position => "beginning"
    codec => json
  }
}
filter {
  if [type] == "json-log" {
    grok {
      match => { "message" => "UserName:%{JAVALOGMESSAGE:UserName} -DL_JobID:%{JAVALOGMESSAGE:DL_JobID} -DL_EntityID:%{JAVALOGMESSAGE:DL_EntityID} -BatchesPerJob:%{JAVALOGMESSAGE:BatchesPerJob} -RecordsInInputFile:%{JAVALOGMESSAGE:RecordsInInputFile} -TimeTakenToProcess:%{JAVALOGMESSAGE:TimeTakenToProcess} -DocsUpdatedInSOLR:%{JAVALOGMESSAGE:DocsUpdatedInSOLR} -Failed:%{JAVALOGMESSAGE:Failed} -RecordsSavedInDSE:%{JAVALOGMESSAGE:RecordsSavedInDSE} -FileLoadStartTime:%{JAVALOGMESSAGE:FileLoadStartTime} -FileLoadEndTime:%{JAVALOGMESSAGE:FileLoadEndTime}" }
      add_field => ["STATS_TYPE", "FILE_LOADED"]
    }
  }
}
filter {
  mutate {
    # here converting data type
    convert => { "FileLoadStartTime" => "integer" }
    convert => { "RecordsInInputFile" => "integer" }
  }
}
output {
  elasticsearch {
    # PLEASE CONFIGURE ES IP AND PORT WHERE LOG DOCs HAS TO PUSH
    document_type => "json-log"
    hosts => ["localhost:9200"]
    # action => "index"
    # host => "localhost"
    index => "locallogstashdx_new"
    # workers => 1
  }
  stdout { codec => rubydebug }
  #stdout { debug => true }
}
To know more you can go through the many available resources, such as
https://www.elastic.co/guide/en/logstash/current/first-event.html

Logstash override host with filebeat name

I have set up the Filebeat -> Logstash -> Elasticsearch -> Kibana pipeline successfully. Now in Logstash I want to override the host field with beat.name. However, when I try to refer to the beat metadata, the variable is not resolved.
mutate {
  add_field => {
    "timestamp" => "%{year}-%{month}-%{day} %{time}"
  }
  replace_field => {
    "host" => "%{[@metadata][beat][name]}"
  }
}
I think I am missing some major configuration. Even when Logstash forwards the events to Elasticsearch, this variable resolution is not done.
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
How do we correctly refer to Filebeat meta information in the Logstash config file?
The beat.name field is not carried in the @metadata object. beat is a top-level field in the event, so to refer to the value use [beat][name], or in a string use "%{[beat][name]}".
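A minimal sketch of the mutate with that reference (note also that the mutate option is replace, not replace_field):
mutate {
  add_field => {
    # assumes the year/month/day/time fields from the original config exist on the event
    "timestamp" => "%{year}-%{month}-%{day} %{time}"
  }
  replace => {
    # beat.name is a top-level field, so no @metadata prefix here
    "host" => "%{[beat][name]}"
  }
}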

Logstash eventlog filter

I need to filter error messages from Microsoft event logs in Logstash. ELK is running on an Ubuntu 14.04 machine.
Logstash configuration:
input {
  tcp {
    port => 5045
    type => 'eventlog'
  }
}
filter {
  if [type] == 'eventlog' {
    if [Severity] == "ERROR" {
      mutate {
        add_tag => "error"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["IP_ADDRSS:9200"]
  }
  if "error" in [tags] {
    stdout { codec => 'rubydebug' }
  }
}
But I am still getting thousands of event logs from which I can't filter out the error logs.
How can I effectively filter error logs from all types of event logs?
How are you ingesting the data? It's not clear from the input configuration. If you use Winlogbeat, filtering should work fine.
There was no "Severity" field set to "ERROR" on the events, so the conditional never matched. I added codec => "json" to the tcp input; now the "error" tag shows up in the events and I can filter on it.
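For reference, a minimal sketch of the adjusted input with the codec added (port and type as in the original config):
input {
  tcp {
    port => 5045
    type => 'eventlog'
    # parse incoming events as JSON so fields like Severity are available to the filter
    codec => 'json'
  }
}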
