How to remove unwanted fields in Logstash (ELK)

How can I remove unnecessary fields such as:
agent.ephemeral_id, agent.id, winlog.provider_guid
I tried the following, but then Kibana stops showing logs at all:
- drop_fields:
    fields: ["date_created", "ecs.version", "agent.version", "agent.type", "agent.id"]
In logstash I have these configs: filter.conf, input.conf, output.conf
Filter:
filter {
  if "winsrvad" in [tags] {
    if [winlog][event_id] != "5136" and [winlog][event_id] != "4729" and [winlog][event_id] != "4734" {
      drop { }
    }
  }
}

I would suggest using prune to blacklist the 'unnecessary fields' when the condition is met.
See documentation: https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html
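For example, a minimal sketch of a prune-based filter, reusing the "winsrvad" tag from the question. Note that prune matches top-level field names against regexes and does not descend into nested objects, so this removes the whole [agent] and [ecs] objects rather than individual subfields:
filter {
  if "winsrvad" in [tags] {
    prune {
      # each entry is a regex tested against top-level field names
      blacklist_names => [ "^agent$", "^ecs$" ]
    }
  }
}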

You can use mutate, like below. Note that nested fields such as [agent][id] need bracket notation in remove_field; a dotted name like "agent.id" refers to a top-level field literally named "agent.id":
mutate {
  remove_field => [ "date_created", "[ecs][version]", "[agent][version]", "[agent][type]", "[agent][id]" ]
}

Related

Can we configure logstash input to listen only to a particular set of hosts

Currently my Logstash input is listening to Filebeat on port XXXX. My requirement is to collect log data only from particular hosts (let's say only from the web servers). I don't want to modify the Filebeat configuration directly on the servers; I want to allow in only the web servers' logs.
Could anyone suggest how to configure Logstash in this scenario? Following is my Logstash input configuration.
input {
  beats {
    port => 50XX
  }
}
In a word, "no", you cannot configure the input to restrict which hosts it will accept input from. What you can do is drop events from hosts you are not interested in. If the set of hosts you want to accept input from is small then you could do this using a conditional
if [beat][hostname] not in [ "hosta", "hostb", "hostc" ] { drop {} }
Similarly, if your hostnames follow a fixed pattern you might be able to do it using a regexp
if [beat][hostname] !~ /web\d+$/ { drop {} }
would drop events from any host whose name did not end in web followed by a number.
If you have a large set of hosts you could use a translate filter to determine whether they are in the set. For example, if you create a csv file with a list of hosts
hosta,1
hostb,1
hostc,1
then do a lookup using
translate {
  field => "[beat][hostname]"
  dictionary_path => "/some/path/foo.csv"
  destination => "[@metadata][field]"
  fallback => "dropMe"
}
if [@metadata][field] == "dropMe" { drop {} }
Fields under [@metadata] are never sent to the outputs, so the lookup result does not end up in the stored event.
@Badger - Thank you for your response!
As you rightly mentioned, I have a large number of hosts, and all my web servers follow a naming convention (for example, xxxwebxxx). Could you please explain the following
translate {
  field => "[beat][hostname]"
  dictionary_path => "/some/path/foo.csv"
  destination => "[@metadata][field]"
  fallback => "dropMe"
}
if [@metadata][field] == "dropMe" { drop {} }
Also, please suggest how to add the above to my logstash.conf. Please find below how my logstash.conf currently looks.
input {
  beats {
    port => 5xxxx
  }
}
filter {
  if [type] == "XXX" {
    grok {
      match => [ "message", '"%{TIMESTAMP_ISO8601:logdate}"\t%{GREEDYDATA}' ]
    }
    grok {
      match => [ "message", 'AUTHENTICATION-(?<xxx_status_code>[0-9]{3})' ]
    }
    grok {
      match => [ "message", 'id=(?<user_id>%{DATA}),' ]
    }
    if ([user_id] =~ "_agent") {
      drop {}
    }
    grok {
      match => [ "message", '%{IP:clientip}' ]
    }
    date {
      match => [ "logdate", "ISO8601", "YYYY-MM-dd HH:mm:ss" ]
      locale => "en"
    }
    geoip {
      source => "clientip"
    }
  }
}
output {
  elasticsearch {
    hosts => ["hostname:port"]
  }
  stdout { }
}
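Since the hostnames follow a fixed pattern, the regexp approach from the answer above may be simpler than translate here. A minimal sketch (the /web/ pattern is an assumption based on the xxxwebxxx convention) puts the drop at the top of the filter block, before the grok stages:
filter {
  # drop anything that is not a web server before doing any other work;
  # the pattern is an assumption based on the xxxwebxxx naming convention
  if [beat][hostname] !~ /web/ {
    drop {}
  }
  if [type] == "XXX" {
    # ... existing grok/date/geoip filters ...
  }
}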

Filebeat Input Fields are not sent to Logstash

StackOverflow community!
I am trying to collect some system logs using Filebeat and then further process them with Logstash before viewing them in Kibana.
Now, as I have different log locations, I am trying to add a specific identification field for each of them within filebeat.yml.
- type: log
  enabled: true
  paths:
    - C:\Users\theod\Desktop\Logs\Test2\*
  processors:
    - add_fields:
        target: ''
        fields:
          name: "drs"
- type: log
  enabled: true
  paths:
    - C:\Users\theod\Desktop\Logs\Test\*
  processors:
    - add_fields:
        target: ''
        fields:
          name: "pos"
Depending on that, I am trying to apply some Grok filters in the Logstash conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  if "pos" in [fields][name] {
    grok {
      match => { "message" => "\[%{LOGLEVEL:LogLevel}(?: ?)] %{TIMESTAMP_ISO8601:TimeStamp} \[%{GREEDYDATA:IP_Address}] \[%{GREEDYDATA:Username}] %{GREEDYDATA:Operation}] \[%{GREEDYDATA:API_RequestLink}] \[%{GREEDYDATA:Account_name_and_Code}] \[%{GREEDYDATA:Additional_Info1}] \[%{GREEDYDATA:Additional_Info2}] \[%{GREEDYDATA:Store}] \[%{GREEDYDATA:Additional_Info3}](?: ?)%{GREEDYDATA:Error}" }
    }
  }
  if "drs" in [fields][name] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:TimeStamp} \[%{DATA:Thread}] %{LOGLEVEL:LogLevel} (?: ?)%{INT:Sequence} %{DATA:Request_Header}] %{GREEDYDATA:Request}" }
    }
  }
}
output {
  if "pos" in [fields][name] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "[fields][name]logs-%{+YYYY.MM.dd}"
    }
  }
  else if "drs" in [fields][name] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "[fields][name]logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logs-%{+YYYY.MM.dd}"
    }
  }
}
Now, every time I run this, the conditionals in the Logstash config are ignored. Checking the Filebeat logs, I notice that no fields are sent to Logstash.
Can someone offer some guidance and perhaps point out what I am doing wrong?
Thank you!
Your Filebeat config is not adding the field [fields][name]; because of your target setting, it is adding the field [name] at the top level of your document.
processors:
  - add_fields:
      target: ''
      fields:
        name: "pos"
All your conditionals test the field [fields][name], which does not exist.
Change your conditionals to test the [name] field instead.
if "pos" in [name] {
... your filters ...
}
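Alternatively, if you want to keep the [fields][name] conditionals as they are, a sketch of the same processor with a non-empty target nests the field under fields:
processors:
  - add_fields:
      # with target 'fields', the field arrives as [fields][name] in Logstash
      target: 'fields'
      fields:
        name: "pos"
Note also that interpolating the field into the index option requires sprintf syntax, e.g. %{[name]}logs-%{+YYYY.MM.dd} or %{[fields][name]}logs-%{+YYYY.MM.dd}; the bare "[fields][name]logs-..." string is used literally.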

how to remove version, hostname, name, tags, @version, prospector, source, host, type, offset

How to remove version, hostname, name, tags, @version, prospector, source, host, type, offset, etc. (all the default fields) in the Logstash output after filtering?
You can use a ruby filter in the config file in order to remove the unwanted field(s):
filter {
  ruby {
    # event.remove must run inside a code block;
    # the field list here is just an example
    code => "
      ['prospector', 'source', 'host', 'offset'].each { |f| event.remove(f) }
    "
  }
}
Here is the reference: link to rubydoc.info
You didn't mention your filter in the question, so here is one as an example:
filter {
  json {
    source => "message"
    remove_field => [ "message", "host", "path", "@version", "type" ]
  }
}
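If there is no existing filter to attach remove_field to, a plain mutate works as well; this sketch just reuses the field names from the question:
filter {
  mutate {
    # field names taken from the question; adjust to what your events actually contain
    remove_field => [ "prospector", "source", "offset", "host", "type", "@version" ]
  }
}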

Logstash config: conditional with list not working if [field] in ["list item 1"]

I am using Logstash to process some flow data. Now I came across a problem while tagging the data using a conditional.
If I write the following in the logstash config
if [myfield] == "abc"{ mutate { add_tag => ["mytag"] } }
else { mutate { add_tag => ["not_working"] } }
everything works just fine, but now I want to use a list like
if [myfield] in ["abc"] { mutate { add_tag => ["mytag"] } }
else { mutate { add_tag => ["not_working"] } }
and I only get the not_working tag.
Any suggestions? Thanks in advance!
It seems as if there has to be more than one value in the array/list. You could just duplicate the only value, like this:
if [myfield] in ["abc", "abc"] { mutate { add_tag => ["mytag"] } }
else { mutate { add_tag => ["not_working"] } }
and it works fine.
This is indeed a bug; here is a link to GitHub: https://github.com/elastic/logstash/issues/9932

How can I have logstash drop all events that do not match a group of regular expressions?

I'm trying to match event messages with several regular expressions. I was going to use the grep filter, but it's deprecated, so I'm trying drop with negation.
The functionality I'm looking for is to have all events dropped unless the message matches several regular expressions.
The filter below does not work, but tested individually both expressions work fine.
What am I missing?
filter {
  if ([message] !~ ' \[critical\]: ' or [message] !~ '\[crit\]: ') {
    drop { }
  }
}
I read a bit more and went with painting the events with grok by adding a tag, then dropping them at the end if the tag was not there:
filter {
  grok {
    add_tag => [ "valid" ]
    match => [
      "message", ".+ \[critical\]: ?(.+)",
      "message", ".+ \[crit\]: ?(.+) ",
      "message", '.+ (Deadlock found.+) ',
      "message", "(.+: Could not record email: .+) "
    ]
  }
  if "valid" not in [tags] {
    drop { }
  }
  mutate {
    remove_tag => [ "valid" ]
  }
}
Equivalently, you can drop on the tag grok adds when none of the patterns match:
if "_grokparsefailure" in [tags] {
  drop {}
}
You're using a regexp in your conditional, but not passing in the argument in the correct format. The doc shows this:
if [status] =~ /^5\d\d/ {
  nagios { ... }
}
Note the regexp is unquoted and surrounded with slashes.
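Applied to the filter from the question, a corrected sketch looks like this; note that the or also needs to become and, otherwise every message fails at least one of the negated tests and everything is dropped:
filter {
  # keep only messages containing [critical]: or [crit]:
  if [message] !~ /\[critical\]: / and [message] !~ /\[crit\]: / {
    drop { }
  }
}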
