Logstash failed to send join request to Elasticsearch master - Azure

I have a problem with my Logstash instance: it can't send logs to Elasticsearch. Here are the details:
Logstash version : 1.5.1
Elasticsearch version : 1.6.0
jvm on both servers version : 1.8.0
Linux 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Azure Openlogic 7.1
Here is my logstash.err file:
INFO: [ls1] failed to send join request to master
[[es1][e8A0li5pRfeMklozmDXgkQ][elastic][inet[/x.x.x.x:9300]]], reason
[RemoteTransportException[[es1][inet[/x.x.x.x:9300]]
[internal:discovery/zen/join]]; nested:
ConnectTransportException[[ls1][inet[/x.x.x.x:9300]]
connect_timeout[30s]]; nested: ConnectTimeoutException[connection
timed out: /x.x.x.x:9300]; ]
My Logstash output configuration:
output {
  elasticsearch {
    host => "x.x.x.x"
    bind_port => 9300
    index => "syslog"
    cluster => "test-cluster"
    node_name => 'ls1'
  }
  stdout {
    codec => rubydebug
  }
}
Here is my elasticsearch.yml configuration file on the Elasticsearch server:
cluster.name: test-cluster
node.name: "es1"
network.bind_host: 0.0.0.0
network.publish_host: <my_elasticsearch_public_ip>
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["my_logstash_public_ip:9300"]
Here is my elasticsearch.yml file on the Logstash server (/var/lib/logstash):
network.publish_host: my_logstash_public_ip
discovery.zen.ping.multicast.enabled: false
I've allowed port 9300 on both servers.
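As a quick sanity check (a sketch; x.x.x.x stands for the Elasticsearch host's address, as elsewhere in this question), you can confirm from the Logstash server that both Elasticsearch ports actually answer:

curl http://x.x.x.x:9200    # REST API port
nc -zv x.x.x.x 9300         # transport port used for the join request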

You need to include the protocol attribute in your Logstash configuration. Find the updated code below:
output {
  elasticsearch {
    host => "x.x.x.x"
    protocol => "http"
    bind_port => 9300
    index => "syslog"
    cluster => "test-cluster"
    node_name => 'ls1'
  }
}
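Note that with protocol => "http" the output talks to Elasticsearch's REST port (9200 by default) rather than the transport port, and bind_port, cluster, and node_name only matter for the node/transport protocols. A minimal HTTP-only variant would look roughly like this (a sketch, reusing the host and index from the question):

output {
  elasticsearch {
    host => "x.x.x.x"
    protocol => "http"
    port => 9200
    index => "syslog"
  }
}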

I use Microsoft Azure VMs. I was able to solve this problem by creating a VPN connection between the Azure virtual machines.
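For reference, the rough shape of that fix with today's Azure CLI would be to place both VMs on the same virtual network, so they reach each other over private addresses (a sketch; the resource group, names, and image URN are made up):

az network vnet create --resource-group elk-rg --name elk-vnet --subnet-name default
az vm create --resource-group elk-rg --name es1 --image OpenLogic:CentOS:7.1:latest --vnet-name elk-vnet --subnet default
az vm create --resource-group elk-rg --name ls1 --image OpenLogic:CentOS:7.1:latest --vnet-name elk-vnet --subnet default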

Related

Why can't I receive CPU data when using SNMP and Logstash?

I monitor a remote Linux host with Logstash and SNMP. When I try to get interfaces or ifSpeed, everything is OK. But when I try to get sysDescr, CPU, and memory storage data, I cannot get anything back!
I don't know why. The Logstash log seems normal, too.
The logstash.conf:
input {
  snmp {
    tables => [
      {
        "name" => "sysDescr"
        "columns" => ["1.3.6.1.2.1.1.1.0"]
      }
    ]
    hosts => [
      {
        host => "udp:192.168.131.125/161"
        community => "laundry"
        version => "2c"
      }
    ]
    interval => 5
    type => "snmp"
  }
  beats {
    port => 5044
    add_field => {"type" => "beat"}
  }
  tcp {
    port => 50000
  }
}
## Add your filters / logstash plugins configuration here
output {
  if [type] == "beat" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "beat-logs"
    }
  }
  if [type] == "snmp" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "snmp-logs"
    }
  }
}
The Logstash log is:
root@laundry:/opt/ground/management# docker logs -f -t -n=5 5ae67e146ab0
2023-02-03T02:35:04.639861138Z [2023-02-03T10:35:04,639][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
2023-02-03T02:35:04.873655686Z [2023-02-03T10:35:04,873][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
2023-02-03T02:35:04.885933029Z [2023-02-03T10:35:04,884][INFO ][logstash.inputs.tcp ][main][06f1d7ee5445cc0e11cda56012ef6767600f21acd6133e02e957f761d26bac84] Starting tcp input listener {:address=>"0.0.0.0:50000", :ssl_enable=>false}
2023-02-03T02:35:04.934224084Z [2023-02-03T10:35:04,933][INFO ][org.logstash.beats.Server][main][4b91981ecb09a5d2
The output of snmpwalk and snmpget:
root@laundry:/opt/ground/management# snmpwalk -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
root@laundry:/opt/ground/management# snmpget -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
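One thing worth checking: sysDescr (1.3.6.1.2.1.1.1.0) is a scalar OID rather than a table, and the snmp input plugin provides a separate get option for scalar OIDs. A minimal sketch of that variant, reusing the host settings from the question above:

input {
  snmp {
    get => ["1.3.6.1.2.1.1.1.0"]
    hosts => [{host => "udp:192.168.131.125/161" community => "laundry" version => "2c"}]
    interval => 5
    type => "snmp"
  }
}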

logstash Error: com.mariadb.jdbc.Driver not loaded

I'm trying to sync some tables from MariaDB to Elasticsearch using Logstash.
I'm on a Debian Buster (10) server.
$ java -version
openjdk version "11.0.4" 2019-07-16
OpenJDK Runtime Environment (build 11.0.4+11-post-Debian-1deb10u1)
OpenJDK 64-Bit Server VM (build 11.0.4+11-post-Debian-1deb10u1, mixed mode, sharing)
$ mariadb --version
mariadb Ver 15.1 Distrib 10.3.15-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
$ /usr/share/logstash/bin/logstash --version
logstash 7.2.0
I tried different connectors:
$ ls -l /usr/share/java/
mariadb-java-client.jar
$ ls -l /etc/logstash/connectors/
mariadb-java-client-2.1.2.jar
mariadb-java-client-2.2.6.jar
mariadb-java-client-2.3.0.jar
mariadb-java-client-2.4.2.jar
mysql-connector-java-8.0.17.jar
Using "org.mariadb.jdbc.Driver" for mariadb connectors and "com.mysql.cj.jdbc.Driver" for mysql connector
$ cat /etc/logstash/conf.d/db-fr-bank.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mariadb://localhost:3306/db_fr"
    jdbc_user => "logstash"
    jdbc_password => "<password>"
    jdbc_driver_library => "/usr/share/java/mariadb-java-client.jar"
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    statement => "SELECT * FROM bank"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "fr-bank"
  }
}
But, instead of syncing, I keep getting:
$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/db-fr-bank.conf
...
[ERROR] 2019-07-29 02:08:17.563 [[main]<jdbc] jdbc - Failed to load /usr/share/java/mariadb-java-client.jar {:exception=>#<TypeError: failed to coerce jdk.internal.loader.ClassLoaders$AppClassLoader to java.net.URLClassLoader>}
[ERROR] 2019-07-29 02:08:17.598 [[main]<jdbc] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"logstash", jdbc_password=><password>, statement=>"SELECT * FROM bank", jdbc_driver_library=>"/usr/share/java/mariadb-java-client.jar", jdbc_connection_string=>"jdbc:mariadb://localhost:3306/db_fr", id=>"38a6d112755a5e87278761cf5f41b7e509212d1d02837a03672df2face00943a", jdbc_driver_class=>"org.mariadb.jdbc.Driver", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_f3094292-7482-4b73-95c4-7f78da4da911", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 UTC}, last_run_metadata_path=>"/root/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: org.mariadb.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::ConfigurationError
Stack: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:163:in `open_jdbc_connection'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:221:in `execute_statement'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:277:in `execute_query'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:263:in `run'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'
...
I had the same issue.
I used the workaround from here with this, and it works, i.e.:
Copy the driver file to the {logstash install dir}/logstash-core/lib/jars/ directory. These jars get added to the correct JDK classpath as Logstash is started via Java.
And:
Change the jdbc_driver_library value in the Logstash conf to the empty string, i.e. jdbc_driver_library => ""; otherwise the code still tries to load the jar separately.
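Putting both steps together, a sketch of the workaround with the paths from this question (the jar version is whichever one you settled on):

# 1. Copy the driver into Logstash's own jar directory.
sudo cp /etc/logstash/connectors/mariadb-java-client-2.4.2.jar /usr/share/logstash/logstash-core/lib/jars/

# 2. Leave jdbc_driver_library empty so the jar is only loaded from the classpath.
input {
  jdbc {
    jdbc_connection_string => "jdbc:mariadb://localhost:3306/db_fr"
    jdbc_user => "logstash"
    jdbc_password => "<password>"
    jdbc_driver_library => ""
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    statement => "SELECT * FROM bank"
  }
}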

Filebeat fails to connect to Logstash

I'm using two servers in the cloud: on one server (A) I installed Filebeat, and on the second server (B) I installed Logstash, Elasticsearch, and Kibana. I'm facing a problem sending logs from server A to Logstash on server B.
My Filebeat configuration is:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/vinit/demo/*.log
  fields:
    log_type: apache
  fields_under_root: true
#output.elasticsearch:
#  hosts: ["localhost:9200"]
#  protocol: "https"
#  username: "elastic"
#  password: "changeme"
output.logstash:
  hosts: ["XXX.XX.X.XXX:5044"]
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"
In Logstash, I have enabled the system, filebeat, and logstash modules.
The Logstash configuration is:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "^%{IP:CLIENT_IP} (?:-|%{USER:IDEN}) (?:-|%{USER:AUTH}) \[%{HTTPDATE:CREATED_ON}\] \"(?:%{WORD:REQUEST_METHOD} (?:/|%{NOTSPACE:REQUEST})(?: HTT$
    add_field => {
      "LOG_TYPES" => "apache-log"
    }
    overwrite => [ "message" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "apache-info-log"
  }
  stdout { codec => rubydebug }
}
In Elasticsearch I set:
network.host: localhost
The errors I'm getting are below:
2019-01-18T15:05:47.738Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-01-18T15:05:47.739Z INFO log/input.go:138 Configured paths: [/home/vinit/demo/*.log]
2019-01-18T15:05:47.739Z INFO input/input.go:114 Starting input of type: log; ID: 10340820847180584185
2019-01-18T15:05:47.740Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-plain*.log]
2019-01-18T15:05:47.740Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
2019-01-18T15:05:47.742Z INFO log/harvester.go:254 Harvester started for file: /home/vinit/demo/info-log.log
2019-01-18T15:05:47.749Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2019-01-18T15:05:47.763Z INFO log/input.go:138 Configured paths: [/var/log/messages* /var/log/syslog*]
2019-01-18T15:05:47.763Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-01-18T15:05:47.763Z INFO cfgfile/reload.go:150 Config reloader started
2019-01-18T15:05:47.777Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2019-01-18T15:05:47.790Z INFO log/input.go:138 Configured paths: [/var/log/messages* /var/log/syslog*]
2019-01-18T15:05:47.790Z INFO input/input.go:114 Starting input of type: log; ID: 15514736912311113705
2019-01-18T15:05:47.790Z INFO input/input.go:114 Starting input of type: log; ID: 4004097261679848995
2019-01-18T15:05:47.791Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-plain*.log]
2019-01-18T15:05:47.791Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
2019-01-18T15:05:47.791Z INFO input/input.go:114 Starting input of type: log; ID: 2251543969305657601
2019-01-18T15:05:47.791Z INFO input/input.go:114 Starting input of type: log; ID: 9013300092125558684
2019-01-18T15:05:47.791Z INFO cfgfile/reload.go:205 Loading of config files completed.
2019-01-18T15:05:47.792Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20181223
2019-01-18T15:05:47.794Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20181223
2019-01-18T15:05:47.797Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20181230
2019-01-18T15:05:47.800Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20181230
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20190106
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20190113
2019-01-18T15:05:47.816Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20190106
2019-01-18T15:05:47.817Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages
2019-01-18T15:05:47.818Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20190113
2019-01-18T15:05:47.855Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://XXX.XX.X.XXX:5044))
2019-01-18T15:06:18.855Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://XXX.XX.X.XXX:5044)): dial tcp XXX.XX.X.XXX:5044: i/o timeout
2019-01-18T15:06:18.855Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://XXX.XX.X.XXX:5044)) with 1 reconnect attempt(s)
Does anyone have any idea how to resolve this and make it work properly?
A related question is Failed to connect to backoff(async(tcp://ip:5044)): dial tcp ip:5044: i/o timeout. The answer there proposed allowing outgoing TCP connections on port 5044 directly in your cloud provider's settings page, since they may be blocked by default.
In addition to the present comments by @Vinit Jordan, who whitelisted port 5044 on EC2 with these steps, I propose a possible solution for the general case.
Please check the default firewall on the Logstash server. You probably have the ufw simple firewall, perhaps preconfigured during an initial Nginx setup. I ran into this problem right after installing ELK on machine B and Filebeat on machine A.
I just added a new firewall rule allowing the Filebeat server, and the error disappeared:
sudo ufw allow from <IP_address_of_machine_A> to any port 5044
Then the Filebeat log on machine A showed me:
"message":"Connection to backoff(async(tcp://<IP_address_of_machine_B>:5044)) established"
It is probably also reasonable to add a more general rule for your trusted servers:
sudo ufw allow from <IP_ADDRESS>
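To double-check the fix, two quick probes could be run (reusing the placeholders above):

# on machine B: confirm the new rule is active
sudo ufw status numbered
# on machine A: confirm the Beats port is now reachable
nc -zv <IP_address_of_machine_B> 5044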

Filebeat to Logstash ERR wsarecv, wsasend

I am using ELK stack version 5.1.2 and I have a problem sending logs from one worker (node) to the central server. I configured everything on localhost and it worked perfectly, but in the development environment it does not. On localhost I used SSL, but now I have turned it off. So my Filebeat conf file is:
filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log
output.logstash:
  hosts: ["xxxx:5043"]
logging.level: error
logging.to_syslog: true
logging.files:
  rotateeverybytes: 10485760 # = 10MB
Logstash configuration:
input {
  beats {
    port => "5043"
  }
}
filter {
  if [type] == "xxx_log" {
    multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
    grok {
      break_on_match => false
      match => [ "message", "TID: \[%{TIMESTAMP_ISO8601:timestamp}\] %{LOGLEVEL:level} \[%{JAVACLASS:java_class}\] \(%{GREEDYDATA:thread}\) - (?<log_message>(.|\r|\n)*)"]
    }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
OK, when I add a line to the log file, for example:
TID: [2017-01-19 13:37:18] INFO [App.java] (main) - Info test...
Filebeat starts to collect data; after a successful harvest I am getting:
ERR Failed to publish events caused by: write tcp yyyy:51992->xxxx:5043: wsasend: An existing connection was forcibly closed by the remote host.
There is nothing in the Logstash log.
The firewall is turned off. When I open telnet from the worker node on port 5043, the message reaches the central server, because Logstash notes in its log file that I sent an invalid frame type (for example, I sent just some POST to test whether port 5043 is open). So the port is open, but Elasticsearch stays empty. Sometimes, I do not know why, I get this error in the Filebeat log:
wsarecv: An existing connection was forcibly closed by the remote host.
This line shows up in the Logstash log:
11:45:31.094 [nioEventLoopGroup-4-2] ERROR org.logstash.beats.BeatsHandler - Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 83
13:31:43.139 [nioEventLoopGroup-4-4] ERROR org.logstash.beats.BeatsHandler - Exception: An existing connection was forcibly closed by the remote host
Thank you for any advice.
Jaroslav

rsyslog forwarder seems to not work

I would like to send rsyslog messages to my ELK stack, but it does not work.
My rsyslog conf:
*.* @@127.0.0.1:10514
local6.* /tmp/grenard.log
&~
My Logstash conf:
input {
  syslog {
    port => 10514
    type => "syslog"
  }
  stdin {}
}
output {
  stdout { codec => rubydebug }
}
Logstash really is listening on 10514 (tested with telnet localhost 10514), and I can see the message in my stdout:
root@VM-GUILLAUME /etc/logstash/conf.d # /opt/logstash/bin/logstash -f /etc/logstash/conf.d
Settings: Default filter workers: 4
Logstash startup completed
{
    "message" => "bonjour\r\n",
    "@version" => "1",
    "@timestamp" => "2016-03-01T10:55:41.488Z",
    "type" => "syslog",
    "host" => "0:0:0:0:0:0:0:1",
    "tags" => [
        [0] "_grokparsefailure_sysloginput"
Moreover, the log file is being written to, so I know my rsyslog conf is OK:
logger -t apache -i -p local6.info $(date)
The log file:
Mar 1 12:06:04 localhost apache[13700]: mar. mars 1 12:06:04 CET 2016
The problem was due to TCP (@@). Using UDP (@), the problem is solved. Here is my rsyslog.d/grenard.conf:
*.* @127.0.0.1:10514
local6.* /tmp/grenard.log
&~
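For what it's worth, the same UDP forward in rsyslog's newer action syntax would look roughly like this (a sketch):

*.* action(type="omfwd" target="127.0.0.1" port="10514" protocol="udp")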
