Logstash check connected clients - logstash

I have a log forwarding pipeline consisting of Filebeat and Logstash. Somehow, they stopped working together recently. How can I check whether Filebeat is correctly connected to Logstash?

Check your Filebeat log file (default location: /var/log/filebeat/filebeat).
Examples of errors:
ERR Failed to publish events
Connecting error publishing events (retrying): dial tcp x.x.x.x:5044: getsockopt: connection refused
See also:
Troubleshooting Filebeat and Logstash
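Beyond reading the log, a plain TCP probe distinguishes the failure modes Filebeat reports. This is a standard-library-only sketch, using a throwaway local listener instead of a real Logstash host so it is self-contained:

```python
import socket

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> str:
    """Return 'open' if a TCP connection succeeds, else the failure reason."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"   # nothing listening: the "connection refused" case above
    except socket.timeout:
        return "timeout"   # host reachable but packets dropped (firewall?)
    except OSError as exc:
        return f"error: {exc}"

if __name__ == "__main__":
    # Demo against a local listener so the sketch runs anywhere.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))      # pick a free ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    print(check_endpoint("127.0.0.1", port))   # open
    srv.close()
    print(check_endpoint("127.0.0.1", port))   # refused
```

Pointing `check_endpoint` at your Logstash host and port 5044 gives the same three outcomes Filebeat distinguishes: open, refused (Logstash down or not listening on that port), or timeout (a firewall silently dropping packets).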

Related

rafthttp: dial tcp timeout on etcd 3-node cluster creation

I don't have access to the etcd part of the project's source code, but I do have access to /var/log/syslog.
The goal is to set up a 3-node cluster.
(1)The very first etcd error that comes up is:
rafthttp: failed to dial 76e7ffhh20007a98 on stream MsgApp v2 (dial tcp 10.0.0.134:2380: i/o timeout)
Before continuing: I can ping all three nodes from each node, and I have also tried opening TCP port 2380, still with no success - same error.
(2) Before that error I had the following messages from etcd, which in my opinion confirm that the cluster is set up correctly:
etcdserver/membership: added member 76e7ffhh20007a98 [https://server2:2380]
etcdserver/membership: added member 222e88db3803e816 [https://server1:2380]
etcdserver/membership: added member 999115e00e17123d [https://server3:2380]
In /etc/hosts file these DNS names are resolved as:
server2 10.0.0.135
server1 10.0.0.134
server3 10.0.0.136
(3) The initial setup on each node, however, looks like this:
embed: listening for peers on https://127.0.0.1:2380
embed: listening for client requests on 127.0.0.1:2379
To sum up, each node logs this initial setup (3), then adds members (2), and once these steps are done it fails with (1). As far as I know, etcd cluster creation follows this pattern: https://etcd.io/docs/v3.5/tutorials/how-to-setup-cluster/
Without access to the source code this is really hard to debug, but does anyone have ideas on the error and what could cause it?
UPD: etcdctl cluster-health output (ETCDCTL_ENDPOINT is exported):
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint http://127.0.0.1:2379 exceeded header timeout; error #1: dial tcp 127.0.0.1:4001: connect: connection refused
error #0: client: endpoint http://127.0.0.1:2379 exceeded header timeout
error #1: dial tcp 127.0.0.1:4001: connect: connection refused
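For what it's worth, the (3) lines show each node listening for peers on https://127.0.0.1:2380 only, which would explain why peer dials to 10.0.0.134:2380 time out: nothing is bound on the external interface. A hypothetical sketch of what the per-node settings might need to look like for server1, using the names and addresses from the question (the exact keys depend on how the project generates its etcd config):

```yaml
# Hypothetical etcd config for server1 (10.0.0.134); server2/server3 analogous.
name: server1
# Bind the peer listener to the node's address, not 127.0.0.1,
# so the other nodes can actually reach port 2380.
listen-peer-urls: https://10.0.0.134:2380
initial-advertise-peer-urls: https://server1:2380
listen-client-urls: https://10.0.0.134:2379,https://127.0.0.1:2379
advertise-client-urls: https://server1:2379
initial-cluster: server1=https://server1:2380,server2=https://server2:2380,server3=https://server3:2380
initial-cluster-state: new
```

The 127.0.0.1:4001 in the UPD output is also suggestive: 4001 was the legacy etcd v2 client port, so etcdctl appears to be falling back to default endpoints rather than the configured ones.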

Fabric: Issue connection refused 7050

I am trying to create a network from the Hyperledger Fabric tutorial. I get the following error:
Error: failed to create deliver client for orderer: orderer client failed to connect to localhost:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp [::1]:7050: connect: connection refused"
I opened up the port on the CentOS 7 virtual machine and still no luck. The Docker container is exposing the port to the host.
I removed all docker containers, images and volumes. I even rebuilt the VM from scratch.
Any help would be great.
Thanks,
This happens when a gRPC call to the orderer fails to reach the server. There can be many reasons, but in most cases either the orderer is down (it exited or failed to start due to misconfiguration), or the call cannot reach it because of misconfiguration on the client side.
I encountered this problem before even though the port was open. In my case the mistake was that I forgot to pass '-a' in the command (to launch the certificate authorities). Hope it helps.
You might also refer to this: https://hyperledger-fabric.readthedocs.io/en/release-2.0/build_network.html

connect filebeat to logstash

I am trying to set up Filebeat with Logstash and get the errors below at the Filebeat and Logstash ends:
Filebeat version: 7.7.0
Logstash version: 7.8.0
Modified /etc/filebeat/filebeat.yml:
set enabled: true and configured paths
commented out output.elasticsearch
uncommented output.logstash and added hosts: ["hostname:5044"]
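For reference, the filebeat.yml edits described above correspond to roughly this (7.x layout; `hostname` is the placeholder from the question, and the paths are illustrative):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log          # illustrative; use your real paths

# output.elasticsearch:       # must be fully commented out,
#   hosts: ["hostname:9200"]  # including the hosts line

output.logstash:
  hosts: ["hostname:5044"]
```

Note that the Filebeat error below still says `elasticsearch(http://hostname:5044)`, so it may also be worth double-checking that the Elasticsearch output really is commented out and that the config file being loaded is the one that was edited.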
Modified /etc/logstash/conf.d/beats_elasticsearch.conf:
input {
beats {
port => 5044
}
}
#filter {
#}
output {
elasticsearch {
hosts => ["hostname:9200"]
}
}
I started Filebeat and got the error below:
2020-07-06T08:51:23.912-0700 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(http://hostname:5044)): Get http://hostname:5044: dial tcp ip_address:5044: connect: connection refused
Started Logstash; its log is below:
[INFO ] 2020-07-06 09:00:20.562 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2020-07-06 09:00:20.835 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-07-06 09:00:45.266 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: x.x.x.x:5044, remote: y.y.y.y:53628] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
[WARN ] 2020-07-06 09:00:45.267 [nioEventLoopGroup-2-2] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
Please explain what else I should do.
Started filebeat and logstash as:
sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats_elasticsearch.conf
Thanks
The Filebeat and Logstash versions were different. Upgrading Filebeat fixed the issue. Thanks
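As a side note, the number in `Invalid version of beats protocol: 71` is just the first raw byte the beats input received, so it can hint at what actually hit the port:

```python
# The beats input treats the first byte of a frame as the protocol version.
# Byte 71 is ASCII "G" - the first byte of an HTTP "GET" request - i.e. the
# kind of traffic an HTTP client (such as an Elasticsearch output) would
# send if it were pointed at the beats port 5044.
first_byte = 71
print(chr(first_byte))                               # G
print("GET / HTTP/1.1".startswith(chr(first_byte)))  # True
```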

Filebeat sending logs, but Logstash not running, connection refused! - logstash

I use Filebeat on many servers to ship nginx logs to Logstash.
For months my ELK server worked very well. But after I added one grok pattern line to syslog-filter.conf and restarted Logstash, my ELK stack, and Logstash in particular, stopped working.
The elasticsearch, logstash and kibana services are all up, enabled and active, but from my nginx servers, telnet to port 5044 and telnet to port 5443 both return connection refused.
Here is the Filebeat log from one of the servers:
> 2018-01-23T10:21:21+03:30 ERR Failed to connect: dial tcp 172.17.11.202:5443: getsockopt: connection refused
> 2018-01-23T10:21:28+03:30 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=11769216 beat.memstats.memory_alloc=5935656 beat.memstats.memory_total=73881024 filebeat.harvester.open_files=5 filebeat.harvester.running=6 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=35
> 2018-01-23T10:47:16+03:30 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=11297792 beat.memstats.memory_alloc=5872352 beat.memstats.memory_total=26557112 filebeat.harvester.open_files=5 filebeat.harvester.running=6 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 registrar.states.current=37
> 2018-01-23T10:47:22+03:30 ERR Failed to connect: dial tcp 172.17.11.202:5443: getsockopt: connection refused
> 2018-01-23T10:47:46+03:30 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=11297792 beat.memstats.memory_alloc=6012704 beat.memstats.memory_total=26697464 filebeat.harvester.open_files=5 filebeat.harvester.running=6 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=37
> 2018-01-23T14:22:45+03:30 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=11490800 beat.memstats.memory_alloc=5802160 beat.memstats.memory_total=153496216 filebeat.harvester.open_files=3 filebeat.harvester.running=2 libbeat.config.module.running=0 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=4117 registrar.states.current=3

Logstash Failing to get data from collectd

My collectd is sending data to Logstash on port 25826, but I am seeing this error when running Logstash:
UDP listener died {:exception=>#<SocketError: bind: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:67:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:342:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:336:in `start_input'"], :level=>:warn}
Does anyone know the solution?
Got a fix: there was no error at Logstash; the collector, collectd, was simply not sending the data to the Logstash UDP port. I corrected it by adding the network plugin configuration to collectd.conf: enable the plugin, and set the server to the Logstash host and UDP port.
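The fix described above corresponds to something like this collectd.conf fragment (`logstash-host` is a placeholder for the actual Logstash hostname):

```
# Enable collectd's network plugin and point it at the Logstash UDP input
LoadPlugin network
<Plugin network>
  Server "logstash-host" "25826"
</Plugin>
```

On the Logstash side this pairs with a udp input listening on 25826, typically with the collectd codec so the binary protocol is decoded into events.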
