unable to find valid certification path to requested target - Logstash

I am enabling SSL on our ELK stack and I am encountering an error when connecting Logstash to Elasticsearch.
[2019-08-29T13:38:43,822][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://logstash_internal:xxxxxx@elastic.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://logstash_internal:xxxxxx@elastic.local:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
The self-signed certificate is installed in the trusted certificate store on the local machine. If you know what is causing the issue and how to resolve it, please let me know.
My Logstash config is:
output {
  elasticsearch {
    hosts => ["https://elastic.local:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    ssl => true
    keystore => '\config\logstash.keystore'
    keystore_password => "keystore.pass"
    cacert => '\config\certs\instance.crt'
    ssl_certificate_verification => true
    user => "${ES_USER}"
    password => "${ES_PWD}"
  }
  stdout { codec => rubydebug }
}
Any assistance in getting this error resolved would be much appreciated.
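Two things worth checking from the Logstash host (a hedged sketch; the alias, ca.crt path, and truststore password below are placeholders, not from the post): the cacert option must point at the certificate that actually signed the node certificate Elasticsearch presents on port 9200, and the JVM running Logstash consults its own truststore rather than the operating system's trusted store.
# Inspect the certificate chain Elasticsearch actually presents on 9200
openssl s_client -connect elastic.local:9200 -showcerts
# Alternatively, import the self-signed/CA certificate into the JVM truststore
# used by Logstash (alias, file path, and password are examples only;
# the cacerts location varies by JDK version)
keytool -importcert -trustcacerts -alias elastic-ca -file /path/to/ca.crt -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit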

Related

how to enable authentication with logstash email output

I want to learn how to use Logstash with the Mailtrap SMTP service for development purposes. I installed Logstash and then ran this command:
/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { email {
  to => "user@example.com"
  from => "user@example.com"
  subject => "Alert - %{title}"
  body => "content here"
  authentication => "plain"
  domain => "smtp.mailtrap.io:2525"
  username => "20ff3475e0c350"
  password => "594980b5a1be46"
  }
}'
Once logstash is up and running, I type something and press enter. This causes the error below:
[ERROR] 2022-11-08 19:34:59.861 [[main]>worker1] email - Something happen while delivering an email {:exception=>#<Net::SMTPAuthenticationError: 503 5.5.1 Error: authentication not enabled
How do I enable authentication? Or what am I doing wrong? I also can't find any documentation on what the acceptable values for the authentication option are.
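One thing that stands out, assuming the stock logstash-output-email plugin: domain only sets the HELO/EHLO name, it does not select the SMTP server, so the pipeline above most likely still connects to localhost:25 (whose local MTA may indeed have authentication disabled, which would explain the 503). The server and port are set with the address and port options; a hedged sketch reusing the placeholders from the question:
output {
  email {
    to => "user@example.com"
    from => "user@example.com"
    subject => "Alert - %{title}"
    body => "content here"
    # address/port choose the SMTP server; domain is only the HELO name
    address => "smtp.mailtrap.io"
    port => 2525
    username => "20ff3475e0c350"
    password => "594980b5a1be46"
    authentication => "plain"
  }
}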

Creating master and slave DNS with puppet module camptocamp bind

I am trying to create a master and slave (redundant) DNS setup with the Puppet module camptocamp bind. In the slave profile, I've set transfer_source => '192.168.1.20' to the master's IP (192.168.1.20). It should then synchronize and copy DNS records from the master to the slave.
But I get a complaint that it can only be set for slave zones. I've followed the README from Puppet Forge for the module: https://forge.puppet.com/camptocamp/bind/readme
dnsmaster.pp
class profile::dnsbind::server {
  include 'bind'

  bind::zone { 'example.com':
    ensure       => 'present',
    zone_contact => 'contact.example.com',
    zone_ns      => ['ns0.example.com'],
    zone_serial  => '2012112901',
    zone_ttl     => '604800',
    zone_origin  => 'example.com',
  }

  bind::a { 'example.com':
    ensure    => 'present',
    zone      => 'example.com',
    ptr       => false,
    hash_data => {
      'host1' => { owner => '192.168.0.1', },
      'host2' => { owner => '192.168.0.2', },
    },
  }
}
dnsslave.pp
class profile::dnsbind::server_slave {
  include 'bind'

  bind::zone { 'example.com':
    ensure          => 'present',
    zone_contact    => 'contact.example.com',
    zone_ns         => ['ns0.example.com'],
    zone_serial     => '2012112901',
    zone_ttl        => '604800',
    zone_origin     => 'example.com',
    transfer_source => '192.168.1.20',
  }
}
The error message:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, Zone 'example.com': transfer_source can be set only for slave zones! at /etc/puppetlabs/code/environments/production/modules/bind/manifests/zone.pp:80:5 at /etc/puppetlabs/code/environments/production/manifests/profile_dns2.pp:5 on node centos7-3
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
It should then synchronize and copy DNS records from the master to the slave.
But I get a complaint that it can only be set for slave zones.
Evidently, the module does not recognize that you're trying to configure a slave zone. How do you suppose it would know? Well, apparently not from the presence of a transfer_source property.
I've followed the README from puppet forge for the module:
https://forge.puppet.com/camptocamp/bind/readme
I'll believe that you started by pulling the example zone definition (for a master zone) from the readme, and I grant you that this module's docs are kinda shoddy. But do nevertheless consider actually reading the docs thoroughly, not just skimming them. If you had done so, you would have found documentation for the zone_type parameter immediately following the documentation for the transfer_source parameter:
$zone_type = master
Specify if the zone is master/slave/forward.
Use this to specify that you're configuring a slave zone.
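Concretely, a hedged sketch based on the doc excerpt above (not tested against the module; the remaining parameters are carried over from the question and may not all apply to a slave zone), the slave profile would declare the zone type explicitly:
class profile::dnsbind::server_slave {
  include 'bind'

  bind::zone { 'example.com':
    ensure          => 'present',
    # mark the zone as a slave so transfer_source is accepted
    zone_type       => 'slave',
    zone_contact    => 'contact.example.com',
    zone_ns         => ['ns0.example.com'],
    zone_serial     => '2012112901',
    zone_ttl        => '604800',
    zone_origin     => 'example.com',
    transfer_source => '192.168.1.20',
  }
}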

Filebeat to Logstash ERR wsarecv, wsasend

I am using ELK stack version 5.1.2 and I have a problem sending logs from one worker node to the central server. I configured everything on localhost and it worked perfectly, but it does not work in the development environment. On localhost I used SSL, but now I have turned it off. My Filebeat config file is:
filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log

output.logstash:
  hosts: ["xxxx:5043"]

logging.level: error
logging.to_syslog: true
logging.files:
  rotateeverybytes: 10485760 # = 10MB
Logstash configuration:
input {
  beats {
    port => "5043"
  }
}
filter {
  if [type] == "xxx_log" {
    multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
    grok {
      break_on_match => false
      match => [ "message", "TID: \[%{TIMESTAMP_ISO8601:timestamp}\] %{LOGLEVEL:level} \[%{JAVACLASS:java_class}\] \(%{GREEDYDATA:thread}\) - (?<log_message>(.|\r|\n)*)"]
    }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
OK, when I add a line to the log file, for example:
TID: [2017-01-19 13:37:18] INFO [App.java] (main) - Info test...
Filebeat starts to collect data; after a successful harvest I get:
ERR Failed to publish events caused by: write tcp yyyy:51992->xxxx:5043: wsasend: An existing connection was forcibly closed by the remote host.
Nothing appears in the Logstash log.
The firewall is turned off. When I telnet from the worker node to port 5043, the message reaches the central server, because Logstash reports in its log that I sent an invalid frame type (for example, I sent just a POST to test whether port 5043 is open). So the port is open, but Elasticsearch stays empty. Sometimes, I do not know why, I get this error in the Filebeat log:
wsarecv: An existing connection was forcibly closed by the remote host.
That error corresponds to these lines in the Logstash log:
11:45:31.094 [nioEventLoopGroup-4-2] ERROR org.logstash.beats.BeatsHandler - Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 83
13:31:43.139 [nioEventLoopGroup-4-4] ERROR org.logstash.beats.BeatsHandler - Exception: An existing connection was forcibly closed by the remote host
Thank you for any advice.
Jaroslav
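(Side note, unrelated to the wsasend error: with the beats input, multiline joining is normally done in Filebeat itself rather than with Logstash's multiline filter. A hedged sketch of the equivalent prospector settings, assuming Filebeat 5.x:)
filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log
  # join every line that does not start with "TID" onto the previous event
  multiline.pattern: '^TID'
  multiline.negate: true
  multiline.match: after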

Logstash in EC2 can't send log data to AWS Elasticsearch service

In EC2 I have configured Logstash as below:
input {
  # beats {
  #   port => 5044
  # }
  file {
    type => "adjustlog"
    path => "/etc/logstash/conf.d/sample.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == 'adjustlog' {
    grok {
      match => {
        "message" => [
          "%{TIMESTAMP_ISO8601:timestamp},(%{USERNAME:userId})?,%{USERNAME:setlkey},%{USERNAME:uniqueId},%{NUMBER:providerId},%{USERNAME:itemCode},%{USERNAME:voucherCode},%{USERNAME:samsCode},(%{USERNAME:serviceType})?"
        ]
      }
    }
  } else {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => ["search-*.es.amazonaws.com:80"]
    index => "test"
  }
  stdout { codec => rubydebug }
}
but Logstash can't create the index in AWS Elasticsearch or send log data.
(However, curl and wget commands work fine; I can create an index using curl.)
The error logs are:
Attempted to send a bulk request to Elasticsearch configured at '["http://search-*.es.amazonaws.com/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"search*.es.amazonaws.com:80 failed to respond", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/client.rb:128:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.17/lib/elasticsearch/api/actions/bulk.rb:88:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:172:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:301:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:301:in `output_batch'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:232:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :client_config=>{:hosts=>["http://search*.es.amazonaws.com/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"http", :user=>nil, :password=>nil, :port=>80}}, :level=>:error}
What should I check to debug this?
I found this when trying to fix a similar issue. AWS has changed how it implements Elasticsearch node discovery. It will work fine until Logstash tries to discover more hosts, at which point it breaks. Restarting Logstash fixes the issue temporarily but inconsistently. curl and wget work fine too.
:message=>"Cannot get new connection from pool.", :class=>"Elasticsearch::Transport::Transport::Error", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:193:in `perform_request'",
Elasticsearch would work for a bit but then stop ingesting data.
The old config, which failed:
output {
  elasticsearch {
    hosts => ["https://search-*.us-east-1.es.amazonaws.com"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Logstash tries to get a list of hosts from Elasticsearch, but AWS's implementation has changed the format of the data returned. For more details on the specifics, see:
https://forums.aws.amazon.com/thread.jspa?threadID=222600
https://discuss.elastic.co/t/elasitcsearch-ruby-raises-cannot-get-new-connection-from-pool-error/36252/11
The working config (sniffing => true removed, so Logstash no longer attempts host discovery):
output {
  elasticsearch {
    hosts => ["https://search-*.us-east-1.es.amazonaws.com"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
tomwj

Error observed while configuring logstash forwarder

While trying to configure the logstash-forwarder on the central server, the error below is observed in logstash.log:
{:timestamp=>"2015-07-07T09:05:14.742000-0500", :message=>"Unknown setting 'timestamp' for date", :level=>:error}
{:timestamp=>"2015-07-07T09:05:14.744000-0500", :message=>"Error: Something is wrong with your configuration."}
Could someone please help to resolve this issue?
Here is the configuration file /etc/logstash/conf.d/central.conf:
input {
  lumberjack {
    port => 6782
    ssl_certificate => "/etc/logstash/server.crt"
    ssl_key => "/etc/logstash/server.key"
    type => "lumberjack"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "logstash"
  }
}
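A note on the error itself (an assumption, since the file above contains no date filter): "Unknown setting 'timestamp' for date" means some other config file loaded from /etc/logstash/conf.d uses timestamp => ... as a setting of the date filter, which is not a valid option; the source field and its format go into match. A hedged sketch of the corrected form (the field name and format are placeholders):
filter {
  date {
    # the field to parse and its format are given via "match";
    # "timestamp" is not a valid setting of the date filter
    match => [ "timestamp", "ISO8601" ]
  }
}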
