Filebeat: send messages to a certain index - Linux

I have an installed Elasticsearch - Logstash - Kibana stack and 2 clients: ELKclient1 and ELKclient2. Filebeat is installed on the clients. I need both clients to write logs to separate indexes: ELKclient1 to index test-%{+YYYY.MM.dd}, ELKclient2 to index test2-%{+YYYY.MM.dd} (sending nginx access logs). For some reason logs from the clients are written to both indexes, e.g., logs from client ELKclient2 are written to both test-%{+YYYY.MM.dd} and test2-%{+YYYY.MM.dd} (attachment 1 and attachment 2). Do you have any clue why it's happening?
#config filebeat on client2
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx_access
  fields_under_root: true
  scan_frequency: 5s

registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["ip-address_logstash:5044"]
    index: "test2-%{+YYYY.MM.dd}"
    bulk_max_size: 1024

shipper:

logging:
  to_syslog: false
  to_files: true
  level: info
  files:
    path: /var/log/filebeat
    name: filebeat.log
#config logstash output
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
    index => "test2-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}

Logstash sends every event to every output block unless the outputs are wrapped in conditionals, which is why logs from each client end up in both indexes. To make each client write to a separate index, you need something that differentiates the logs coming from the different servers, for example the Beat's hostname or a tag.
Given the requirement in your question, one way is to put the following in the output section of your Logstash config file:
output {
  if [beat][hostname] == "ELKclient1" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test-%{+YYYY.MM.dd}"
    }
  }
  else if [beat][hostname] == "ELKclient2" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test2-%{+YYYY.MM.dd}"
    }
  }
  else {
    stdout {
      codec => rubydebug
    }
  }
}
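Alternatively, if you prefer not to branch on hostnames, here is a minimal sketch of the tag-based approach mentioned above (the tag name "client1" is just an assumption for illustration): set a tag in each client's Filebeat input and branch on it in the Logstash output.

#filebeat.yml on ELKclient1 (hypothetical tag name)
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["client1"]

#logstash output, branching on the tag
output {
  if "client1" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test-%{+YYYY.MM.dd}"
    }
  }
}

Repeat with a second tag ("client2") and a second conditional for the other client.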

Related

Filebeat: How to remove log if some key or value exist?

filebeat version 7.17.3
I have 3 different log lines, for example:
{"level":"debug","message":"Start proxy checking","module":"proxy","timestamp":"2022-05-18 23:22:15 +0200"}
{"level":"info","message":"Attempt to get proxy","module":"proxy","timestamp":"2022-05-18 23:22:17 +0200"}
{"campaign":"18","level":"warn","message":"Missed or empty list","module":"loader","session":"pYpifim","timestamp":"2022-05-18 23:27:46 +0200"}
How is it possible to not forward/filter out the log to Logstash or Elasticsearch if level is equal to "info"?
How is it possible to not forward/filter out the log if the key campaign does not exist?
In Filebeat I have the following:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: true
      add_error_key: true
  - drop_fields:
      fields: ["agent", "host", "log", "ecs", "input", "location"]
But with drop_fields I can only remove some fields; I need to not save the log at all if a certain key or value exists!
In Logstash, deleting those events is no problem (see below), but how do I do this in Filebeat?
/etc/logstash/conf.d/40-filebeat-to-logstash.conf
input {
  beats {
    port => 5044
    include_codec_tag => false
  }
}
filter {
  if "Start proxy checking" in [message] {
    drop { }
  }
  if "Attempt to get proxy" in [message] {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200"]
    # index => "myindex"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
  }
}
Thank you in advance.
In Filebeat there is a drop_event processor:
processors:
  - drop_event:
      when:
        condition
https://www.elastic.co/guide/en/beats/filebeat/7.17/drop-event.html
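For the two cases in this question, a minimal sketch (assuming the decode_json_fields processor above has already lifted level and campaign to the top level of the event) combines the equals, not, and has_fields conditions:

processors:
  - drop_event:
      when:
        or:
          - equals:
              level: "info"             # drop all info-level events
          - not:
              has_fields: ["campaign"]  # drop events missing the campaign key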

Tags in logstash

How to direct postfix logs to the index postfix?
In the Logstash config:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
  }
}
output {
  if "postfix" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "postfix-%{+YYYY.MM.dd}"
    }
  }
}
In filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/maillog*
  exclude_files: [".gz$"]
  tags: ["postfix"]

output.logstash:
  hosts: ["10.50.11.8:5044"]
In the Logstash log there is a lot of this:
[WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"newrelicdata", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x4ceb504a>], :response=>{"index"=>{"_index"=>"newrelicdata", "_type"=>"_doc", "_id"=>"V7x2z20Bp3jq-MOqpNbt", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id 'V7x2z20Bp3jq-MOqpNbt'. Preview of field's value: '{name=mail.domain.com}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:521"}}}}}
Why does data from mail.domain.com end up outside the postfix index? And why is the data trying to get into all the indexes? Any help?
I think the logs contain a field named "host", so it's throwing the error "failed to parse field [host] of type [text]". Try renaming it:
filter {
  mutate {
    rename => ["host", "server"]
    convert => {"server" => "string"}
  }
}
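Alternatively, a sketch for the Filebeat side (assuming you do not need the host metadata at all): drop the conflicting field before it ever reaches Logstash, using the same drop_fields processor shown in the previous question.

processors:
  - drop_fields:
      fields: ["host"]  # remove the host object that conflicts with the text mapping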
I tried sending the logs with the filebeat.yml configuration below:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/varsha/ELK7.4/logs/*.log
  tags: ["postfix"]

output.logstash:
  hosts: ["localhost:5044"]
With the above configuration and the files you have shared with me, everything works fine and I got the expected result; please check it at https://pastebin.com/f6e0E52S

How to use the index specified in Filebeat in logstash.yml?

I am using Filebeat to send log files over to my Logstash with the following configurations:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${PWD}/filebeat-volume/data/*.txt

output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "custom-index"

setup.kibana:
  host: "localhost:5601"
and
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "<WHAT SHOULD GO HERE???>"
  }
}
In filebeat.yml, I am specifying an index ("custom-index"). How can I set the same index in my logstash.yml to be sent to Elasticsearch?
I see what you want now: set Logstash with the output configuration below; this way it will pass the index set in Filebeat on to Elasticsearch.
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
Point 2 in this example
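For context: when Filebeat's Logstash output sets index: "custom-index", the beats input exposes that value to Logstash as [@metadata][beat]. A common variant (a sketch; the date suffix is optional) appends the date to get one index per day:

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"  # e.g. custom-index-2019.01.01
  }
}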

What is the pattern to match complete input in Logstash?

I am using ELK stack with filebeat.
filebeat.conf
filebeat:
  prospectors:
    -
      paths:
        - /home/ubuntu/logs_*
      input_type: log

output:
  logstash:
    hosts: [${LOGSTASH_PORT_5044_TCP_ADDR}]
    index: filebeat
  console:
    pretty: true
This is passing logs from a file named logs_test.
A sample log:
{"name":"test","statusCode":0,"deployment":"production","hostname":"ip-random-address","level":30,"jobName":"testJob","date":"2016-07-18T03:15:02.075Z","jobType":"script","msg":"","time":"2016-07-18T03:15:02.076Z","v":0}
I want to make an HTTP call to an external URL when the field statusCode is 1.
The entire log object is being passed to logstash.
My logstash config
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      format => "message"
      http_method => "post"
      url => "http://www.example.com"
      message => '{"text": "%{some_pattern_matcher}"}'
    }
  }
}
[Question] What should "some_pattern_matcher" be to send all fields in the HTTP request?
PS: %{mesage} does not work.
input {
  beats {
    port => 5044
    codec => "json"
  }
}
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:data}" }
  }
}
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      format => "message"
      http_method => "post"
      url => "http://www.example.com"
      message => "%{data}"
    }
  }
}
I haven't tried it out, so try this one and let me know if it works. If not, please post the error(s) you get.
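As an alternative sketch (not from the answer above): the http output plugin can serialize the whole event by itself, which avoids the grok step entirely. With format => "json" the complete event is sent as the request body:

output {
  if [statusCode] == 1 {
    http {
      url => "http://www.example.com"
      http_method => "post"
      format => "json"   # send the entire event as a JSON body
    }
  }
}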

ELK not passing metadata from filebeat into logstash

Installed an ELK server via: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
It seems to work except for the filebeat connection; filebeat does not appear to be forwarding anything or at least I can't find anything in the logs to indicate anything is happening.
My filebeat configuration is as follows:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/secure
      encoding: utf-8
      input_type: log
      timeout: 30s
      idle_timeout: 30s
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["my_elk_fqdn:5044"]
    bulk_max_size: 1024
    compression_level: 3
    worker: 1
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: debug
The log file output I keep getting from filebeat is just not very helpful:
2016-07-14T17:32:21-04:00 DBG Start next scan
2016-07-14T17:32:31-04:00 DBG Start next scan
2016-07-14T17:32:41-04:00 DBG Start next scan
2016-07-14T17:32:46-04:00 DBG Flushing spooler because of timeout. Events flushed: 0
2016-07-14T17:32:51-04:00 DBG Start next scan
Is there anything wrong with my configuration file?
When I test on the ELK server to see if I am getting anything:
[root@my_elk_server ~]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
Oh and my logstash configuration for filebeats:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
UPDATE: It is not filebeat. Somewhat relieved that messages are indeed being passed, but I still have an issue I can't track down:
I discovered it wasn't filebeat that was causing the issue. It appears that the Logstash configuration that sends to Elasticsearch is not properly labeling the index (and the type) to make it searchable, as shown in the question. Instead of putting filebeat in the index name, it gives a result like this:
"_index" : "%{[#metadata][beat]}-2016.07.14",
The elasticsearch output in the config file turned out to be incorrect:
output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Apparently this @metadata is not being passed in correctly. Has anyone been able to get the _index and _type fields to populate correctly?
This might be a bug with filebeat??
https://github.com/logstash-plugins/logstash-input-beats/issues/6
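If [@metadata] really is not being populated (the issue linked above suggests older logstash-input-beats versions had problems here), one workaround sketch is to hardcode the index name instead of deriving it from metadata:

output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"   # fixed name instead of %{[@metadata][beat]}
  }
}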
