Logstash attempting to connect to Elasticsearch instead of Loki as output

My Logstash runs on Kubernetes, but it keeps attempting to connect to Elasticsearch instead of Loki as the output, even though I set
XPACK_MONITORING_ENABLED: false in the env.
logstash.yml: |
  http.host: "0.0.0.0"
  log.level: debug
  xpack.monitoring.enabled: false
  path.config: /usr/share/logstash/pipeline
logstash.conf: |
  input {
    file {
      path => "/var/log/containers/*.log"
    }
  }
  filter {
    kubernetes {
      source => "path"
      target => "loki"
    }
  }
  output {
    stdout { codec => rubydebug }
    loki {
      url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
    }
  }
Console output:
[2022-12-13T06:31:40,411][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:31:50,775][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:10,414][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:20,910][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:40,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:50,998][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:33:10,410][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:33:21,328][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
Kubernetes config:
image: "grafana/logstash-output-loki:1.0.1"
imagePullPolicy: "IfNotPresent"
command:
- '/bin/sh'
- '-c'
- 'logstash-plugin install --no-verify logstash-filter-kubernetes && logstash -f /usr/share/logstash/pipeline/logstash.conf'
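The console output above shows the license checker still polling the default http://elasticsearch:9200/, which suggests the xpack.monitoring.enabled: false setting is never being read. One thing worth checking is whether logstash.yml is actually mounted into the container's config directory and not only the pipeline directory. A minimal sketch of the volume mounts, assuming hypothetical ConfigMap names logstash-config and logstash-pipeline:

volumeMounts:
  # Assumption: the settings file must land in /usr/share/logstash/config/ to take effect
  - name: logstash-config
    mountPath: /usr/share/logstash/config/logstash.yml
    subPath: logstash.yml
  # Assumption: pipeline config, matching path.config in logstash.yml above
  - name: logstash-pipeline
    mountPath: /usr/share/logstash/pipeline/logstash.conf
    subPath: logstash.conf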

Related

Error in connecting python client "Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record"

I am trying to get a Python Elasticsearch client to connect to a local OpenSearch server that is running via Docker. I am unable to connect and keep getting:
opensearch-node1 | [2021-09-25T00:09:09,526][ERROR][o.o.s.s.h.n.SecuritySSLNettyHttpServerTransport] [opensearch-node1] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:
I installed OpenSearch locally using Docker, following these instructions: https://opensearch.org/downloads.html
I have tried various ways to get the Python client to connect, but all of them fail.
# es = Elasticsearch(timeout=600, hosts="http://localhost:9200", http_auth=('admin', 'admin'), use_ssl=False, verify_certs=False)
es = Elasticsearch(timeout=600, hosts=["http://localhost:9200"])
es.ping()
python version: 3.9.7
ElasticSearch python lib: 7.15.0
Has anybody experienced this?
**** Figured it out ****
You need a client version of at most 7.13.4, as per the docs: opensearch.org/docs/clients/index
and, as per this issue: github.com/opendistro-for-elasticsearch/sample-code/issues/242, you have to initialize the client like this for local dev:
es = Elasticsearch(['https://admin:admin@localhost:9200/'],
                   use_ssl=False,
                   verify_certs=False,
                   ssl_show_warn=False)
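For reference, a quick connectivity check with the downgraded client (assuming the default admin/admin credentials of a local dev cluster):

# Should print True once the client version and connection settings are compatible
print(es.ping())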

Sending log from filebeat to logstash error: Failed to publish events caused by: lumberjack protocol error

Hello guys,
I'm new to the ELK Stack.
I'm trying to send IIS logs from Filebeat to Logstash and onward, but it doesn't work.
I get the error Failed to publish events caused by: lumberjack protocol error when starting Filebeat (Logstash is running).
Here is all my config.
filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - e:\\elk\\iislog\\*
    exclude_lines: ['#']
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["localhost:5044"]
logstash.yml
node.name: main
pipeline.id: main
pipeline.workers: 2
http.host: "localhost"
http.port: 5044
logstash.iis.conf
input {
  beats {
    port => "5044"
  }
}
output {
}
iis.yml
- module: iis
  # Access logs
  access:
    enabled: true
    var.paths:
      - e:\elk\iislog\*.log
  error:
    enabled: true
The Logstash console stops at the line Successfully started Logstash API endpoint {:port=>5044}
All components are version 7.4.0.
Can you show me what I am doing wrong?
Thanks
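For what it's worth, that last log line shows the Logstash HTTP API endpoint bound to port 5044, the same port the beats input declares, so Filebeat may be speaking the lumberjack protocol to the HTTP API rather than to the beats input. A minimal sketch of a logstash.yml that keeps the two apart (9600 is Logstash's default API port):

node.name: main
pipeline.id: main
pipeline.workers: 2
http.host: "localhost"
http.port: 9600   # leave 5044 free for the beats input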

logstash is not fetching data from log file

Logstash is configured with Elasticsearch, which should store the data coming from Logstash. The configuration has been done properly, but it is still not fetching anything.
input {
  file {
    path => "C:\Users\vishadub\Documents\elkstackTools\logs\error_log.log"
    type => "error_logs"
    start_position => beginning
    sincedb_path => "C:\Users\vishadub\Documents\elkstackTools\sincedb-access"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "error_log"
  }
}
This is written in my config file. The output is below:
C:\Users\vishadub\Documents\elkstackTools\logstash-6.4.2\bin>logstash -f logstash.conf
Sending Logstash logs to C:/Users/vishadub/Documents/elkstackTools/logstash-6.4.2/logs which is now configured via log4j2.properties
[2018-10-30T11:35:39,167][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-30T11:35:39,667][INFO ][logstash.runner] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-10-30T11:35:41,645][INFO ][logstash.pipeline] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-10-30T11:35:42,020][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-10-30T11:35:42,036][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-10-30T11:35:42,208][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-10-30T11:35:42,286][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-30T11:35:42,301][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-10-30T11:35:42,348][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-10-30T11:35:42,380][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-30T11:35:42,426][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-30T11:35:42,861][INFO ][logstash.pipeline] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x45c02cea run>"}
[2018-10-30T11:35:42,908][INFO ][logstash.agent] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-30T11:35:42,940][INFO ][filewatch.observingtail] START, creating Discoverer, Watch with file and sincedb collections
[2018-10-30T11:35:43,221][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
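Not an answer as such, but a common gotcha here: the Logstash file input expects forward slashes in paths on Windows, so backslash paths like the ones above are often never matched. A sketch of the same input with forward-slash paths:

input {
  file {
    path => "C:/Users/vishadub/Documents/elkstackTools/logs/error_log.log"
    type => "error_logs"
    start_position => beginning
    # sincedb path rewritten the same way
    sincedb_path => "C:/Users/vishadub/Documents/elkstackTools/sincedb-access"
  }
}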

Can connect to Elasticsearch 5 remotely but not locally

I recently upgraded from Elasticsearch 1.5 to Elasticsearch 5.5, and I can connect fine remotely, but not from localhost.
I updated elasticsearch.yml to be able to connect remotely:
network.host: 0.0.0.0
And the elasticsearch logs look fine to me:
[2017-11-05T22:44:23,441][INFO ][o.e.n.Node ] [node1] starting ...
[2017-11-05T22:44:23,655][INFO ][o.e.t.TransportService ] [node1] publish_address {[ip address]:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}, {[ip address]:9300}
[2017-11-05T22:44:23,666][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-05T22:44:26,712][INFO ][o.e.c.s.ClusterService ] [node1] new_master {node1}{s-J7aStjQFuwor-WY6bSCQ}{nv4GVIQ6SwScPiebRHBQBQ}{localhost}{[ip address]:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-05T22:44:26,733][INFO ][o.e.h.n.Netty4HttpServerTransport] [node1] publish_address {[ip address]:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}, {[ip address]:9200}
[2017-11-05T22:44:26,734][INFO ][o.e.n.Node ] [node1] started
I am wondering if it is related to the proxy, since when I try to run curl localhost:9200, I get the following:
...
<p>The following error was encountered while trying to retrieve the URL: http://localhost:9200/</p>
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
<p>The remote host or network may be down. Please try the request again.</p>
...
Any ideas or tips on how to narrow down the issue would be helpful.
It turned out an http_proxy bash variable was set, which was proxying all outbound traffic. Once this variable was unset, the curl command worked.
It also turned out that even before this fix was made, our Java applications were still able to connect to Elasticsearch locally; it was only the curl command that wasn't working.
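For reference, two quick ways to confirm a proxy variable is the culprit, using standard shell and curl features:

unset http_proxy                                  # drop the proxy for the current shell
curl --noproxy localhost http://localhost:9200    # or bypass the proxy for this one host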

Logstash - GELF output error

I've installed Graylog v2.1.1 as a virtual appliance inside VirtualBox on a Windows 7 PC.
I'm trying to read a simple log file and forward it to Graylog by using logstash v5.0.0 with the logstash-output-gelf-3.1.1 plugin, as described here: https://stackoverflow.com/a/31054064/4863804.
I've set up the following logstash.conf output:
input {
  file {...}
}
output {
  gelf {
    host => "199.99.99.179"
    port => 12203
  }
}
But after running logstash -f logstash.conf I get the following error:
[2016-10-28T14:52:17,756][INFO ][logstash.pipeline ] Pipeline main started
[2016-10-28T14:52:17,817][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-10-28T14:52:18,594][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NameError: no method 'debug' for arguments (org.jruby.RubyArray,org.jruby.RubyHash) on Java::OrgApacheLoggingLog4jCore::Logger
available overloads:
(org.apache.logging.log4j.Marker,java.lang.String,java.lang.Object[])
(org.apache.logging.log4j.Marker,java.lang.String,org.apache.logging.log4j.util.Supplier[])
(java.lang.String,org.apache.logging.log4j.util.Supplier[])
(java.lang.String,java.lang.Object[])>, :backtrace=>["C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/logging/logger.rb:41:in `debug'", "C:/SDKs/logstash-5.0.0/vendor/bundle/jruby/1.9/gems/logstash-output-gelf-3.1.1/lib/logstash/outputs/gelf.rb
:190:in `receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "C:/S
DKs/logstash-5.0.0/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logst
ash/pipeline.rb:297:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:296:in `output_batch'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:252:in `worker_loo
p'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:225:in `start_workers'"]}
Update:
It seems to be caused by a version mismatch between Logstash and the logstash-output-gelf plugin, as the same configuration works fine with logstash-2.4.0.
Perhaps the output plugin needs to be updated for 5.0.0.
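For reference, once a compatible plugin release is available, updating it in place is a single command (a standard logstash-plugin subcommand):

bin/logstash-plugin update logstash-output-gelf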
