I installed an ELK server by following this guide: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
It seems to work except for the Filebeat connection; Filebeat does not appear to be forwarding anything, or at least I can't find anything in the logs to indicate that anything is happening.
My filebeat configuration is as follows:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/secure
      encoding: utf-8
      input_type: log
      timeout: 30s
      idle_timeout: 30s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["my_elk_fqdn:5044"]
    bulk_max_size: 1024
    compression_level: 3
    worker: 1
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: debug
The log file output I keep getting from filebeat is just not very helpful:
2016-07-14T17:32:21-04:00 DBG Start next scan
2016-07-14T17:32:31-04:00 DBG Start next scan
2016-07-14T17:32:41-04:00 DBG Start next scan
2016-07-14T17:32:46-04:00 DBG Flushing spooler because of timeout. Events flushed: 0
2016-07-14T17:32:51-04:00 DBG Start next scan
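(Those DBG lines only say that the prospector keeps scanning and that, at that moment, there is nothing new to ship: the registry file records how far each file has been read, and the spooler flushes zero events when those offsets are already at the end of the files. To force Filebeat to re-read and re-ship everything from the beginning, you can stop it, remove the registry file named in the config above, and start it again:)
sudo service filebeat stop
sudo rm /var/lib/filebeat/registry    # registry_file path from the config above
sudo service filebeat start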
Is there anything wrong with my configuration file?
When I test on the ELK server to see if I am getting anything:
[root@my_elk_server ~]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
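That result actually says more than "no hits": "total" : 0 under _shards means the search matched no index at all, i.e. no filebeat-* index has ever been created. Listing all indices is an easy way to confirm that:
curl -XGET 'http://localhost:9200/_cat/indices?v'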
Oh, and my Logstash input configuration for Filebeat:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
UPDATE: It is not Filebeat. I'm somewhat relieved that messages are indeed being passed, but I still have an issue I can't track down:
I discovered it wasn't Filebeat causing the issue. It appears the Logstash configuration that sends events to Elasticsearch is not labeling the index (and the type) properly, so nothing matches the search shown above. Instead of putting filebeat in the index name, it gives a result like this:
"_index" : "%{[@metadata][beat]}-2016.07.14",
The elasticsearch output in the Logstash config file turned out to be where things go wrong:
output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Apparently this @metadata is not being passed in correctly. Has anyone been able to get the _index and _type fields to populate correctly? This might be a bug in Filebeat:
https://github.com/logstash-plugins/logstash-input-beats/issues/6
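For what it's worth, that index sprintf is the stock one from the Beats documentation, and when [@metadata][beat] is populated it expands to filebeat-YYYY.MM.dd. Until the metadata passing works (see the issue linked above), a blunt workaround is to hardcode the beat name, at the cost of needing one output per beat type (a sketch; the document_type line has the same unresolved-metadata problem, and dropping it falls back to the plugin's default):
output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}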
Related
Filebeat version 7.17.3. I have 3 different log lines, for example:
{"level":"debug","message":"Start proxy checking","module":"proxy","timestamp":"2022-05-18 23:22:15 +0200"}
{"level":"info","message":"Attempt to get proxy","module":"proxy","timestamp":"2022-05-18 23:22:17 +0200"}
{"campaign":"18","level":"warn","message":"Missed or empty list","module":"loader","session":"pYpifim","timestamp":"2022-05-18 23:27:46 +0200"}
How is it possible to not forward (i.e. filter out) a log line to Logstash or Elasticsearch if level equals "info"?
How is it possible to filter out a log line if the key campaign does not exist?
In Filebeat I have the following:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: true
      add_error_key: true
  - drop_fields:
      fields: ["agent", "host", "log", "ecs", "input", "location"]
But with drop_fields I can only remove individual fields; I need to not save the log line at all when a given key or value is present!
In Logstash deleting those events is no problem (see below), but how do I do this in Filebeat?
/etc/logstash/conf.d/40-filebeat-to-logstash.conf
input {
  beats {
    port => 5044
    include_codec_tag => false
  }
}
filter {
  if "Start proxy checking" in [message] {
    drop { }
  }
  if "Attempt to get proxy" in [message] {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200"]
    # index => "myindex"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
  }
}
Thank you in advance.
In Filebeat there is the drop_event processor:
processors:
  - drop_event:
      when:
        condition
https://www.elastic.co/guide/en/beats/filebeat/7.17/drop-event.html
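Applied to the two cases in the question, a minimal sketch (assuming decode_json_fields has already run with target: "", so level and campaign exist as top-level fields; equals, not and has_fields are all standard Filebeat conditions):
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  - drop_event:
      when:
        equals:
          level: "info"
  - drop_event:
      when:
        not:
          has_fields: ["campaign"]
Order matters here: decode_json_fields must come before the drop_event processors, otherwise the conditions would only see the raw JSON string in message.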
I have an installed Elasticsearch - Logstash - Kibana stack and 2 clients, ELKclient1 and ELKclient2, with Filebeat installed on both (sending nginx access logs). I need each client to write to its own index: ELKclient1 to test-%{+YYYY.MM.dd} and ELKclient2 to test2-%{+YYYY.MM.dd}. For some reason logs from each client are written to both indexes, e.g. logs from ELKclient2 end up in both test-%{+YYYY.MM.dd} and test2-%{+YYYY.MM.dd}. Do you have any clue why this is happening?
#config filebeat on client2
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx_access
  fields_under_root: true
  scan_frequency: 5s
registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["ip-address_logstash:5044"]
    index: "test2-%{+YYYY.MM.dd}"
    bulk_max_size: 1024
shipper:
logging:
  to_syslog: false
  to_files: true
  level: info
  files:
    path: /var/log/filebeat
    name: filebeat.log
#config logstash output
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
    index => "test2-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}
In order to make each client write to its own index, you need to differentiate the events coming from the two servers. Without a conditional, every event reaches every output block in the Logstash config, which is exactly why both indexes receive logs from both clients. Given the requirement in your question, one way is to put the following in the output section of your Logstash config file:
output {
  if [beat][hostname] == "ELKclient1" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test-%{+YYYY.MM.dd}"
    }
  } else if [beat][hostname] == "ELKclient2" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test2-%{+YYYY.MM.dd}"
    }
  } else {
    stdout {
      codec => rubydebug
    }
  }
}
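Two caveats on this. First, the field holding the client name depends on the Filebeat version: older Beats send [beat][hostname], while 7.x sends [agent][hostname] and [host][name] instead. Second, an alternative that avoids hardcoding hostnames in Logstash at all: the index value set under output.logstash in filebeat.yml is carried over as [@metadata][beat], so if each client sets a plain index root there (e.g. index: "test2" rather than a value containing a date pattern, which Filebeat would not expand), a single output can route both clients (a sketch under that assumption):
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}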
How do I direct postfix logs to the index postfix?
In the Logstash config:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
  }
}
output {
  if "postfix" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "postfix-%{+YYYY.MM.dd}"
    }
  }
}
In filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/maillog*
  exclude_files: [".gz$"]
  tags: ["postfix"]
output.logstash:
  hosts: ["10.50.11.8:5044"]
In the Logstash log I see a lot of:
[WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"newrelicdata", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x4ceb504a>], :response=>{"index"=>{"_index"=>"newrelicdata", "_type"=>"_doc", "_id"=>"V7x2z20Bp3jq-MOqpNbt", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id 'V7x2z20Bp3jq-MOqpNbt'. Preview of field's value: '{name=mail.domain.com}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:521"}}}}}
Why does data from mail.domain.com try to go somewhere other than the postfix index? And why is the data trying to get into all the indexes? Any help is appreciated.
I think the logs contain a field named host whose value is an object, so Elasticsearch throws "failed to parse field [host] of type [text]". Renaming and converting it in the filter section should help:
filter {
  mutate {
    rename => ["host", "server"]
    convert => {"server" => "string"}
  }
}
I tried sending the logs with the filebeat.yml configuration below:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/varsha/ELK7.4/logs/*.log
  tags: ["postfix"]
output.logstash:
  hosts: ["localhost:5044"]
With the above configuration and the files you shared with me, everything works fine and I got the expected result; please check it at https://pastebin.com/f6e0E52S
I am using Filebeat to send log files over to my Logstash with the following configurations:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${PWD}/filebeat-volume/data/*.txt
output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "custom-index"
setup.kibana:
  host: "localhost:5601"
and
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "<WHAT SHOULD GO HERE???>"
  }
}
In filebeat.yml, I am specifying an index ("custom-index"). How can I reference the same index in my Logstash configuration so it is sent on to Elasticsearch?
I see what you want now. Set Logstash up with the output configuration below. The index you set under output.logstash in Filebeat travels with the events as [@metadata][beat], so this passes the index set in Filebeat through to Elasticsearch:
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
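If you also want one index per day rather than a single ever-growing index, append the same date math used elsewhere in this thread: index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}".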
Point 2 in this example
I am using ELK stack version 5.1.2 and I have a problem sending logs from one worker (node) to the central server. I configured everything on localhost and it worked perfectly, but in the development environment it does not. On localhost I used SSL, but I have turned it off for now. So my Filebeat conf file is:
filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log
output.logstash:
  hosts: ["xxxx:5043"]
logging.level: error
logging.to_syslog: true
logging.files:
  rotateeverybytes: 10485760 # = 10MB
Logstash configuration:
input {
  beats {
    port => "5043"
  }
}
filter {
  if [type] == "xxx_log" {
    multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
    grok {
      break_on_match => false
      match => [ "message", "TID: \[%{TIMESTAMP_ISO8601:timestamp}\] %{LOGLEVEL:level} \[%{JAVACLASS:java_class}\] \(%{GREEDYDATA:thread}\) - (?<log_message>(.|\r|\n)*)"]
    }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
OK, when I add a line to the log file, for example:
TID: [2017-01-19 13:37:18] INFO [App.java] (main) - Info test...
Filebeat starts to collect data; after a successful harvest I get:
ERR Failed to publish events caused by: write tcp yyyy:51992->xxxx:5043: wsasend: An existing connection was forcibly closed by the remote host.
Nothing shows up in the Logstash log.
The firewall is turned off. When I open a telnet session from the worker node to port 5043, the message does reach the central server: Logstash notes in its log file that I sent an invalid frame type (I sent just a plain POST to test whether port 5043 is open). So the port is open, but Elasticsearch stays empty. Sometimes, I do not know why, I get this error in the Filebeat log:
wsarecv: An existing connection was forcibly closed by the remote host.
That corresponds to these lines in the Logstash log:
11:45:31.094 [nioEventLoopGroup-4-2] ERROR org.logstash.beats.BeatsHandler - Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 83
13:31:43.139 [nioEventLoopGroup-4-4] ERROR org.logstash.beats.BeatsHandler - Exception: An existing connection was forcibly closed by the remote host
Thank you for any advice.
Jaroslav
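(Aside: since Filebeat 5.x the multiline joining can be done by Filebeat itself, before shipping, which takes the multiline filter and its event-ordering pitfalls out of the Logstash pipeline. A sketch reusing the same ^TID pattern; multiline.negate: true with multiline.match: after is the Filebeat equivalent of negate => true / what => "previous":)
filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log
  multiline.pattern: '^TID'
  multiline.negate: true
  multiline.match: after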