logstash dns filter miss - dns

I am using Logstash 1.5 in my ELK stack environment with the following filter configuration:
filter {
  mutate {
    add_filed => { "src_ip" => "%{src}" }
    add_filed => { "dst_ip" => "%{dst}" }
  }
  dns {
    reverse => [ "src", "dst" ]
    action => "replace"
  }
}
I have 2 problems:
1. The filter misses or skips the DNS reverse process on many logs. For each log that goes through the filter, either both the dst and src fields get reversed or neither does, and they remain as IPs (when I test with nslookup, all of the IP fields do have names in DNS).
2. I don't know how or why, but some of my logs have multiple values, and I get the following error:
DNS: skipping reverse, can't deal with multiple values, :field=>"src", :value=>["10.0.0.1","20.0.0.2"], :level=>warn
It looks like my (ELK) Logstash can't handle a lot of logs and resolve them fast enough. It also looks like it creates array fields with multiple values taken from different logs.
Any idea? Have you guys encountered this problem?

I noticed a typo in your configuration - add_filed should be add_field
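For reference, here is the corrected mutate block with that fix applied (the dns part of the filter stays the same):
mutate {
  add_field => { "src_ip" => "%{src}" }
  add_field => { "dst_ip" => "%{dst}" }
}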

Related

Logstash - add new field by multi condition

Good afternoon. Logs from network devices are transferred to Logstash using syslog.
input {
  beats {
    type => "filebeat_nginx_proxy_connect"
    port => 5044
  }
  syslog {
    type => "syslog"
    port => 5145
    host => "0.0.0.0"
  }
}
Then they get into Elasticsearch and are displayed in Kibana.
As you can see from the picture, the IP address of the device is displayed. I have a mapping of the IP addresses of all devices to their sysname. How can I add a new field (for example, sysname) to the document that will display the device name? I tried using mutate (add_field) and if conditions, but that results in a lot of conditions since the number of devices is about 2,500. Maybe I need to write my own filter for Logstash, but I don't know how or where to look for information. Please help.
I tried using mutate (add_field) and if conditions, but that results in a lot of conditions since the number of devices is about 2,500.
If you have a mapping of the IP addresses of all devices and their sysname, you could simply leverage the translate filter plugin which does exactly that, i.e. map the value of one field into another value stored in another field:
translate {
  source => "[ip_field]"
  target => "[sysname_field]"
  override => true
  dictionary_path => "/path/to/ip/file/mapping.csv"
  fallback => "N/A"
}
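For reference, the dictionary file referenced by dictionary_path is just one key,value pair per line, with the IP address as the key and the sysname as the value. A minimal sketch with made-up entries:
10.10.1.1,core-switch-01
10.10.1.2,edge-router-02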

How to remove fields from filebeat or logstash

I'm very new to the ELK stack and I'm trying to process this log from a Spring application.
{
  "#timestamp": "2021-02-17T18:25:47.646+01:00",
  "#version": "1",
  "message": "The app is running!",
  "logger_name": "it.company.demo.TestController",
  "thread_name": "http-nio-8085-exec-2",
  "level": "INFO",
  "level_value": 20000,
  "application_name": "myAppName"
}
On the machine where the Spring application is running I set up Filebeat, which is connected to Logstash.
Right now, the Logstash configuration is this (very simple, very basic):
input {
  beats {
    port => 5044
    ssl => false
    client_inactivity_timeout => 3000
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => localhost
  }
}
I added the json { source => "message" } filter to extract the message field from the log (I don't know if this is correct).
Anyway, Filebeat is sending a lot of fields that are not included in the log, for example:
agent.hostname
agent.id
agent.type
and many other agent fields (version etc)
host.hostname
host.ip
and many other host fields (os.build, os.family etc)
For my purpose I don't need all these fields, maybe I need some of them, but for sure not all.
I'm asking how I can remove all these fields and keep only the fields I want. How can I do that?
And, to do this, I think the right solution is to add a filter to Logstash, so that all the applications (Filebeat) always send the entire payload and a single instance of Logstash parses the message, right?
Doing this in Filebeat means that I would need to reproduce this configuration for every application, and it's not centralized. But, because I started this new adventure yesterday, I don't know if it's right.
Many thanks
Those fields under [agent] and [host] are being added by filebeat. The filebeat documentation explains how to configure them.
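If you'd rather strip them centrally in Logstash instead (as you suggest), a minimal sketch would be a mutate filter with remove_field; the field names below are just examples of what you might drop:
filter {
  mutate {
    # remove whole sub-trees or individual nested fields you don't need
    remove_field => [ "agent", "[host][os]", "[host][mac]", "[host][id]" ]
  }
}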

logstash - add only first time value

Here's what I want; it's a bit the opposite of incremental data.
Some of the data are logs with a specific token, and I want to keep (or show in Elasticsearch) only the first submitted data, the oldest information for each token.
I want to ignore any new log with the same token.
How can I do that? Is it done in Logstash or Elasticsearch?
Thanks
Update 2016-05-31
I think we can look at this from a different perspective, but globally what I want is the table like in the picture, but without the red lines; I want them to be ignored by Logstash, or not displayed in ES queries.
I know it could be done if I were able to add a flag to the lines I want to delete, but that's not possible; the only thing that tells us they can be removed is that we already have a key first-AAA that has been logged before.
At logging time, we don't have this information.
You can achieve this using the elasticsearch filter. The filter checks in ES whether the record already exists, and if it does, we ask Logstash to just drop the line.
Note that I'm making the assumption that the Id field (AAA) is used as the document _id and is also present in the document as the Id field. Feel free to change whatever needs to change, but this will work.
input {
  ...
}
filter {
  elasticsearch {
    hosts => ["localhost:9200"]
    query => "_type:your_type AND _id:%{[Id]}"
    fields => {"Id" => "found"}
  }
  if [found] {
    drop {}
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    ...
  }
}
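For the assumption above to hold (the Id field doubling as the document _id), the elasticsearch output also needs to set the document id explicitly. A minimal sketch, with the remaining output options elided:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # index the Id field as the document _id so the lookup in the filter above can find it
    document_id => "%{Id}"
  }
}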

Logstash Dynamically assign template

I have read that it is possible to assign dynamic names to the indexes like this:
elasticsearch {
  cluster => "logstash"
  index => "logstash-%{clientid}-%{+YYYY.MM.dd}"
}
What I am wondering is if it is possible to assign the template dynamically as well:
elasticsearch {
  cluster => "logstash"
  template => "/etc/logstash/conf.d/%{clientid}-template.json"
}
Also where does the variable %{clientid} come from?
Thanks!
After some testing and feedback from other users (thanks Ben Lim), it seems this is not possible so far.
The closest thing would be to do something like this:
if [type] == "redis-input" {
  elasticsearch {
    cluster => "logstash"
    index => "%{type}-logstash-%{+YYYY.MM.dd}"
    template => "/etc/logstash/conf.d/elasticsearch-template.json"
    template_name => "redis"
  }
} else if [type] == "syslog" {
  elasticsearch {
    cluster => "logstash"
    index => "%{type}-logstash-%{+YYYY.MM.dd}"
    template => "/etc/logstash/conf.d/syslog-template.json"
    template_name => "syslog"
  }
}
Full disclosure: I am a Logstash developer at Elastic
You cannot dynamically assign a template because templates are uploaded only once, at Logstash initialization. Variable substitution like %{clientid} only happens when events are flowing through the pipeline, and since no events flow during initialization, there is nothing there which can "fill in the blank" for %{clientid}.
It is also important to remember that Elasticsearch index templates are only used when a new index is created, so templates are not uploaded every time a document reaches the Elasticsearch output block in Logstash--can you imagine how much slower it would be if Logstash had to do that? If you intend to have multiple templates, they need to be uploaded to Elasticsearch before any data gets sent there. You can do this with a script of your own making using curl and Elasticsearch API calls. This also permits you to update templates without having to restart Logstash. You could run the script any time before index rollover, and when the new indices get created, they'll have the new template settings.
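As a sketch of what such a script could do (the template name, file path, and host here are illustrative, using the legacy _template API of that era):
curl -XPUT 'http://localhost:9200/_template/clientA-template' \
     -H 'Content-Type: application/json' \
     -d @/etc/logstash/conf.d/clientA-template.json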
Logstash can send data to a dynamically configured index name, just as you have above. If there is no template present, Elasticsearch will create a best-guess mapping rather than the one you wanted. Templates can and ought to be completely independent of Logstash. The template functionality in the Logstash output was added to improve the out-of-the-box experience for brand new users. The default template is less than ideal for advanced use cases, and Logstash is not a good tool for template management if you have more than one index template.

Using Log4J with LogStash

I'm new to LogStash. I have some logs written by a Java application with Log4J, and I'm in the process of trying to get those logs into ElasticSearch. For the life of me, I can't seem to get it to work consistently. Currently, I'm using the following Logstash configuration:
input {
  file {
    type => "log4j"
    path => "/home/ubuntu/logs/application.log"
  }
}
filter {
  grok {
    type => "log4j"
    add_tag => [ "ApplicationName" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}" ]
  }
}
output {
  elasticsearch {
    protocol => "http"
    codec => "plain"
    host => "[myIpAddress]"
    port => "[myPort]"
  }
}
This configuration seems to be hit or miss, and I'm not sure why. For instance, I have two messages: one works, and the other throws a parse failure. Here are the messages and their respective results:
Tags: ["_grokparsefailure"]
Message: 2014-04-04 20:14:11,613 TRACE c.g.w.MyJavaClass [pool-2-thread-6] message was null from https://domain.com/id-1/env-MethodName

Tags: ["ApplicationName"]
Message: 2014-04-04 20:14:11,960 TRACE c.g.w.MyJavaClass [pool-2-thread-4] message was null from https://domain.com/id-1/stable-MethodName
The one with ["ApplicationName"] has my custom fields of timestamp and level. However, the entry with ["_grokparsefailure"] does NOT have my custom fields. The strange piece is that the logs are nearly identical, as shown in the messages above. This is really confusing me, yet I don't know how to figure out what the problem is or how to get beyond it. Does anyone know how I can import Log4J logs into Logstash and get the following fields consistently:
Log Level
Timestamp
Log message
Machine Name
Thread
Thank you for any help you can provide. Even if I can just get the log level, timestamp, and log message, that would be a HUGE help. I sincerely appreciate it!
I'd recommend using the log4j socket listener for logstash and the log4j socket appender.
Logstash conf:
input {
  log4j {
    mode => server
    host => "0.0.0.0"
    port => [logstash_port]
    type => "log4j"
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "[myIpAddress]"
    port => "[myPort]"
  }
}
log4j.properties:
log4j.rootLogger=[myAppender]
log4j.appender.[myAppender]=org.apache.log4j.net.SocketAppender
log4j.appender.[myAppender].port=[log4j_port]
log4j.appender.[myAppender].remoteHost=[logstash_host]
There's more info in the logstash docs for their log4j input: http://logstash.net/docs/1.4.2/inputs/log4j
It looks like the SocketAppender solution that was used before is deprecated because of a security issue.
Currently the recommended solution is to use the Log4J FileAppender, ship the file to Logstash with Filebeat, and then filter it there (a minimal sketch of the appender side is shown after the links below).
For more information you can refer the below links:
https://www.elastic.co/blog/log4j-input-logstash
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
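A minimal sketch of the FileAppender side of that setup (the appender name, log path, and layout are illustrative; Filebeat would then tail the same file and forward it to Logstash):
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/home/ubuntu/logs/application.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} [%t] %m%n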
On my blog (edit: removed dead link) I described how to send JSON messages to Elasticsearch and then parse them with grok.
In the post you'll find a description as well as a simple Maven project with an example (the complete project is on GitHub).
Hope it helps you.
