I am trying to set up Filebeat to send logs to Logstash and get the errors below on both the Filebeat and Logstash ends:
Filebeat version: 7.7.0
Logstash version: 7.8.0
Modified /etc/filebeat/filebeat.yml:
- set enabled: true and the log paths for the input
- commented out output.elasticsearch
- uncommented output.logstash and added hosts: ["hostname:5044"]
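For reference, a sketch of what the relevant filebeat.yml sections would look like after those changes (hostname is the placeholder from above; the log path is illustrative):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

# Elasticsearch output commented out:
#output.elasticsearch:
#  hosts: ["hostname:9200"]

# Logstash output uncommented:
output.logstash:
  hosts: ["hostname:5044"]
```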
Modified /etc/logstash/conf.d/beats_elasticsearch.conf:
input {
  beats {
    port => 5044
  }
}
#filter {
#}
output {
  elasticsearch {
    hosts => ["hostname:9200"]
  }
}
I started Filebeat and got the error below:
2020-07-06T08:51:23.912-0700 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(http://hostname:5044)): Get http://hostname:5044: dial tcp ip_address:5044: connect: connection refused
Started Logstash; its log is below:
[INFO ] 2020-07-06 09:00:20.562 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2020-07-06 09:00:20.835 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-07-06 09:00:45.266 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: x.x.x.x:5044, remote: y.y.y.y:53628] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
[WARN ] 2020-07-06 09:00:45.267 [nioEventLoopGroup-2-2] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
Please explain what else I should do.
I started Filebeat and Logstash as follows:
sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats_elasticsearch.conf
Thanks
The Filebeat and Logstash versions were different. Upgrading Filebeat fixed the issue. (For what it's worth, "Invalid version of beats protocol: 71" means the first byte Logstash received was 71 — ASCII 'G' — i.e. an HTTP GET hit the Beats port, which matches the Filebeat error above showing an elasticsearch output pointed at port 5044.) Thanks
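To confirm what each side is actually running, something like this should work (Logstash path as in the start commands above):

```shell
filebeat version
sudo /usr/share/logstash/bin/logstash --version
```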
Related
I use Filebeat on many servers to ship nginx logs to Logstash. My ELK server had been working well for months, but after I added one grok-pattern line to syslog-filter.conf and restarted Logstash, the stack stopped working, even though the Elasticsearch, Logstash, and Kibana services are all up, enabled, and active. From my nginx servers, telnet to both port 5044 and port 5443 gets connection refused.
Here are the Filebeat logs from one of the servers:
> 2018-01-23T10:21:21+03:30 ERR Failed to connect: dial tcp 172.17.11.202:5443: getsockopt: connection refused
> 2018-01-23T10:21:28+03:30 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=11769216 beat.memstats.memory_alloc=5935656 beat.memstats.memory_total=73881024 filebeat.harvester.open_files=5 filebeat.harvester.running=6 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=35
> 2018-01-23T10:47:16+03:30 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=11297792 beat.memstats.memory_alloc=5872352 beat.memstats.memory_total=26557112 filebeat.harvester.open_files=5 filebeat.harvester.running=6 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 registrar.states.current=37
> 2018-01-23T10:47:22+03:30 ERR Failed to connect: dial tcp 172.17.11.202:5443: getsockopt: connection refused
> 2018-01-23T10:47:46+03:30 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=11297792 beat.memstats.memory_alloc=6012704 beat.memstats.memory_total=26697464 filebeat.harvester.open_files=5 filebeat.harvester.running=6 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=37
> 2018-01-23T14:22:45+03:30 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=11490800 beat.memstats.memory_alloc=5802160 beat.memstats.memory_total=153496216 filebeat.harvester.open_files=3 filebeat.harvester.running=2 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 registrar.states.current=3
I have a log-forwarding pipeline consisting of Filebeat and Logstash. They recently stopped working together. How can I check whether Filebeat is correctly connected to Logstash?
Check your Filebeat log file (default location: /var/log/filebeat/filebeat).
Example errors:
ERR Failed to publish events
Connecting error publishing events (retrying): dial tcp x.x.x.x:5044: getsockopt: connection refused
Extra:
Troubleshooting Filebeat and Logstash
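Beyond reading the log, Filebeat can check its own output connection (the test output subcommand exists in Filebeat 6.x and later; logstash-host is a placeholder):

```shell
# Ask Filebeat to verify its configured output is reachable
filebeat test output -c /etc/filebeat/filebeat.yml

# Or probe the Beats port directly
nc -vz logstash-host 5044
```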
I have Elasticsearch, Logstash, and Filebeat on the same machine.
Filebeat is configured to send data to localhost:5043.
Logstash has a pipeline configuration listening on port 5043.
If I run netstat -tuplen, I see:
[root@elk bin]# netstat -tuplen | grep 5043
tcp6 0 0 :::5043 :::* LISTEN 994 147016 31435/java
which means Logstash loaded the pipeline and is listening on the expected port.
If I telnet to localhost on port 5043:
[root@elk bin]# telnet localhost 5043
Trying ::1...
Connected to localhost.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@elk bin]#
which means the port is open.
However when I read the filebeat's log, I see:
2017-02-15T17:35:32+01:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-02-15T17:35:32+01:00 INFO Setup Beat: filebeat; Version: 5.2.1
2017-02-15T17:35:32+01:00 INFO Loading template enabled. Reading template file: /etc/filebeat/filebeat.template.json
2017-02-15T17:35:32+01:00 INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /etc/filebeat/filebeat.template-es2x.json
2017-02-15T17:35:32+01:00 INFO Elasticsearch url: http://localhost:5043
2017-02-15T17:35:32+01:00 INFO Activated elasticsearch as output plugin.
2017-02-15T17:35:32+01:00 INFO Publisher name: elk.corp.ncr
2017-02-15T17:35:32+01:00 INFO Flush Interval set to: 1s
2017-02-15T17:35:32+01:00 INFO Max Bulk Size set to: 50
2017-02-15T17:35:32+01:00 INFO filebeat start running.
2017-02-15T17:35:32+01:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-02-15T17:35:32+01:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-02-15T17:35:32+01:00 INFO States Loaded from registrar: 0
2017-02-15T17:35:32+01:00 INFO Loading Prospectors: 1
2017-02-15T17:35:32+01:00 INFO Starting Registrar
2017-02-15T17:35:32+01:00 INFO Start sending events to output
2017-02-15T17:35:32+01:00 INFO Prospector with previous states loaded: 0
2017-02-15T17:35:32+01:00 INFO Loading Prospectors completed. Number of prospectors: 1
2017-02-15T17:35:32+01:00 INFO All prospectors are initialised and running with 0 states to persist
2017-02-15T17:35:32+01:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-02-15T17:35:32+01:00 INFO Starting prospector of type: log
2017-02-15T17:35:32+01:00 INFO Harvester started for file: /tmp/logstash-tutorial.log
2017-02-15T17:35:32+01:00 INFO Harvester started for file: /tmp/yum.log
2017-02-15T17:35:38+01:00 ERR Connecting error publishing events (retrying): Get http://localhost:5043: read tcp [::1]:40240->[::1]:5043: read: connection reset by peer
And the message 2017-02-15T17:35:41+01:00 ERR Connecting error publishing events (retrying): Get http://localhost:5043: read tcp 127.0.0.1:39214->127.0.0.1:5043: read: connection reset by peer is repeated ad nauseam.
Am I missing an elephant in the room? Why is the connection "reset by peer"?
pipeline.conf
input {
  beats {
    port => "5043"
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
  stdout { codec => rubydebug }
}
filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /tmp/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java stack traces or C line continuations.
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
#multiline.match: after
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
hosts: ["localhost:5043"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
I found out that I had:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
...
...
#----------------------------- Logstash output --------------------------------
#output.logstash:
...
...
where I should have had:
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
...
...
#----------------------------- Logstash output --------------------------------
output.logstash:
...
...
Get http://localhost:5043
This suggests that your filebeat configuration and what Logstash is configured to listen for are not in sync. Logstash has a beats {} input specifically designed to be a server for beats connections. The default port is 5044. On the beats side, the Logstash Output needs to be used to connect to that server. Doing it this way ensures both sides are speaking the same language, which that error suggests is not the case.
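A minimal matched pair, for reference — the Beats input on the Logstash side paired with output.logstash (not output.elasticsearch) on the Filebeat side, both on the default port 5044 (logstash-host is a placeholder):

```
# Logstash pipeline: beats input
input {
  beats {
    port => 5044
  }
}
```

```yaml
# filebeat.yml: Logstash output
output.logstash:
  hosts: ["logstash-host:5044"]
```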
In your Filebeat configuration, try changing tls to ssl; see the list of breaking changes.
In my case, I was missing the Logstash template options:
output.logstash:
hosts: ["localhost:5044"]
template.enabled: true
template.path: "/etc/filebeat/filebeat.template.json"
index: "filebeat"
My logstash version is:
# /opt/logstash/bin/logstash --version
logstash 2.2.4
It is configured to receive Beats input on port 5044 according to:
/etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => false
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
I have set ssl to false as I am not using it.
When I start the Logstash service normally with systemctl, it starts, and checking the status confirms it is running:
systemctl status logstash
● logstash.service - LSB: Starts Logstash as a daemon.
Loaded: loaded (/etc/rc.d/init.d/logstash)
Active: active (exited) since Mon 2016-07-18 19:14:51 BST; 15h ago
Docs: man:systemd-sysv-generator(8)
Process: 19965 ExecStop=/etc/rc.d/init.d/logstash stop (code=exited, status=0/SUCCESS)
Process: 19970 ExecStart=/etc/rc.d/init.d/logstash start (code=exited, status=0/SUCCESS)
...
logstash started
The problem is that Logstash does not seem to be receiving input on port 5044; hosts sending Filebeat events encounter:
single.go:126: INFO Connecting error publishing events (retrying): dial tcp 192.72.0.92:5044: getsockopt: connection refused
When I check the port with
# netstat -an | grep 5044
I get nothing. So even though Logstash is running, I can't tell which port it is bound to and listening on.
The firewall has also been stopped temporarily while investigating this.
The strange thing is that if I run Logstash in debug mode like so:
# ./logstash --debug -f /etc/logstash/conf.d/02-beats-input.conf
I can see
# netstat -an | grep 5044
tcp6 0 0 :::5044 :::* LISTEN
tcp6 0 0 192.72.0.92:5044 192.168.36.70:53720 ESTABLISHED
tcp6 0 0 192.72.0.92:5044 192.72.0.90:45980 ESTABLISHED
tcp6 0 0 192.72.0.92:5044 192.72.0.90:45975 ESTABLISHED
tcp6 0 0 192.72.0.92:5044 192.72.0.90:45976 ESTABLISHED
or
# lsof -i :5044
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 15136 root 7u IPv6 7191510 0t0 TCP *:lxi-evntsvc (LISTEN)
java 15136 root 33u IPv6 7192379 0t0 TCP hostname:lxi-evntsvc->192.72.0.90:45975 (ESTABLISHED)
and the host sending Filebeat events can connect:
output.go:87: DBG output worker: publish 7 events
2016/07/19 10:02:08.017890 client.go:146: DBG Try to publish 7 events to logstash with window size 10
2016/07/19 10:02:08.038579 client.go:124: DBG 7 events out of 7 events sent to logstash. Continue sending ...
2016/07/19 10:02:08.038615 single.go:135: DBG send completed
Please help point out what I may be doing wrong with this configuration. Thanks
Based on the hint provided by @LiGhTx117, I think the startup script used by Logstash,
/etc/init.d/logstash
has the following variables, among others:
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/var/lib/logstash
LS_LOG_DIR=/var/log/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_DIR=/etc/logstash/conf.d
The ownership of and permissions on these seem to be the issue.
I made sure the directories were recursively accessible to the logstash user and group, ensured that the log file logstash.log was writable by that user/group, and then restarted Logstash.
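A sketch of that fix, assuming the paths from the init script above:

```shell
# Make the Logstash home and log directories accessible to the logstash user/group
sudo chown -R logstash:logstash /var/lib/logstash /var/log/logstash

# Ensure the log file itself is writable
sudo chown logstash:logstash /var/log/logstash/logstash.log

sudo systemctl restart logstash
```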
My collectd is sending data to Logstash on port 25826, but I see this error when running Logstash:
UDP listener died {:exception=>#<SocketError: bind: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:67:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:342:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:336:in `start_input'"], :level=>:warn}
Does anyone know the solution?
Got a fix.
There was no error on the Logstash side; the collector, collectd, was not sending data to the Logstash UDP port. I corrected it by adding the network plugin configuration to collectd.conf, enabling that plugin, and replacing the hostname with the Logstash host and UDP port.
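For reference, the collectd side of that fix would look roughly like this in collectd.conf (logstash-host and 25826 are from the question; this is a sketch, not the poster's exact config):

```
LoadPlugin network
<Plugin network>
  Server "logstash-host" "25826"
</Plugin>
```

and a matching Logstash input using the collectd codec might look like:

```
input {
  udp {
    port => 25826
    codec => collectd { }
  }
}
```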