I'm trying to install Graylog2 on my Ubuntu server to manage logs from multiple locations.
I have installed MongoDB, Java, and Elasticsearch, but Graylog fails to detect Elasticsearch when I run
sudo java -jar /opt/graylog2-server/graylog2-server.jar --debug
Error from Graylog:
2014-12-11 03:29:23,700 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/0:0:0:0:0:0:0:0:9350]}, publish_address {inet[/10.175.112.147:9350]}
2014-12-11 03:29:26,739 WARN : org.elasticsearch.discovery - [graylog2-server] waited for 3s and no initial state was set by the discovery
2014-12-11 03:29:26,739 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/MNpZ3HLXRbaKkpw--872Mw
2014-12-11 03:29:26,739 DEBUG: org.elasticsearch.gateway - [graylog2-server] can't wait on start for (possibly) reading state from gateway, will do it asynchronously
2014-12-11 03:29:26,740 INFO : org.elasticsearch.node - [graylog2-server] started
2014-12-11 03:29:26,886 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false]) {none}
2014-12-11 03:29:29,897 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false]) {none}
2014-12-11 03:29:31,783 ERROR: org.graylog2.Main -
ERROR: Could not successfully connect to ElasticSearch. Check that your cluster state is not RED and that ElasticSearch is running properly.
More information
Elasticsearch health status:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "graylog2",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
Graylog2 Server version: 0.20.2
Elasticsearch version: 0.90.0
Thanks
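For context, the "filtered ping responses ... {none}" lines in the debug output mean Graylog's embedded Elasticsearch client is multicast-pinging for the cluster and getting no replies, even though the cluster itself reports green. A common fix on the 0.20.x line is to disable multicast and point Graylog at the ES node directly in graylog2.conf. The sketch below uses keys from the 0.20.x config template, with the host and port assumed; it is also worth double-checking the ES version, since Graylog2 0.20.x was built against a specific 0.90.x release and even a minor protocol mismatch can make discovery fail silently.

# graylog2.conf (sketch; assumes ES transport listens on 127.0.0.1:9300)
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1:9300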
Related
My Logstash is running on Kubernetes, and Logstash attempts to connect to Elasticsearch instead of Loki as its output, even though I set
XPACK_MONITORING_ENABLED: false in the env.
logstash.yml: |
  http.host: "0.0.0.0"
  log.level: debug
  xpack.monitoring.enabled: false
  path.config: /usr/share/logstash/pipeline
logstash.conf: |
  input {
    file {
      path => "/var/log/containers/*.log"
    }
  }
  filter {
    kubernetes {
      source => "path"
      target => "loki"
    }
  }
  output {
    stdout { codec => rubydebug }
    loki {
      url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
    }
  }
Console output:
[2022-12-13T06:31:40,411][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:31:50,775][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:10,414][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:20,910][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:40,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:50,998][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:33:10,410][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:33:21,328][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
Kubernetes config:
image: "grafana/logstash-output-loki:1.0.1"
imagePullPolicy: "IfNotPresent"
command:
  - '/bin/sh'
  - '-c'
  - 'logstash-plugin install --no-verify logstash-filter-kubernetes && logstash -f /usr/share/logstash/pipeline/logstash.conf'
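One thing worth verifying is that the logstash.yml above is actually mounted into the container. If only the pipeline file is mounted, Logstash falls back to its default settings, ignores xpack.monitoring.enabled: false, and the license checker keeps probing http://elasticsearch:9200 exactly as in the output above. A sketch of the volume wiring, where the ConfigMap name "logstash-config" is an assumption:

volumeMounts:
  - name: logstash-config
    mountPath: /usr/share/logstash/config/logstash.yml
    subPath: logstash.yml
  - name: logstash-config
    mountPath: /usr/share/logstash/pipeline/logstash.conf
    subPath: logstash.conf
volumes:
  - name: logstash-config
    configMap:
      name: logstash-config   # assumed ConfigMap holding both files above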
I'm installing ThingsBoard on Ubuntu 20.04 LTS. I followed the installation guides on the ThingsBoard website. After everything was done, I entered the address "http://localhost:8081" (I changed http_bind_port to 8081), but my browser cannot reach it. I checked for errors and got this report:
~$ cat /var/log/thingsboard/thingsboard.log | grep ERROR
2022-03-22 09:58:19,873 [main] ERROR o.s.boot.SpringApplication - Application run failed
2022-03-22 10:05:01,668 [main] ERROR o.s.boot.SpringApplication - Application run failed
Binary file (standard input) matches
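For reference, grep on its own only shows the matching line; the root cause is usually in the stack trace that follows it. Printing context around each match, and forcing text mode since grep reported a binary match, narrows it down. These are standard GNU grep flags:

# -a treats the log as text, -A30 prints 30 lines after each match
grep -a -A30 ERROR /var/log/thingsboard/thingsboard.log | less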
I hope somebody can help me. Thank you all.
Starting up JHipster Registry throws an exception, and it seems to be some sort of nasty catch-22 that I cannot resolve.
I imagine that the registry should not register itself, nor try to pull its own configuration from the registry (rather, that should come from its own environment). To this end, it comes pre-configured like this...
application.yml:
eureka:
  client:
    fetch-registry: false
    register-with-eureka: false
However, the 'peer' configurations both ship with both of those values set to 'true', and if you turn on the 'uaa' configuration, you turn on eureka.client.fetchRegistry=true.
Turning off the 'uaa' profile makes the exception go away; turning it on (and I need it) throws the exception:
2020-05-09 03:30:07.759 INFO 7 --- [ main] c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration
2020-05-09 03:30:07.814 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2020-05-09 03:30:07.814 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2020-05-09 03:30:07.815 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2020-05-09 03:30:07.815 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Application is null : false
2020-05-09 03:30:07.816 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2020-05-09 03:30:07.817 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Application version is -1: true
2020-05-09 03:30:07.818 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2020-05-09 03:30:08.016 ERROR 7 --- [ main] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://admin:admin#registry.lo
com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused (Connection refused)
at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.handle(ApacheHttpClient4Handler.java:187)
at com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
at com.netflix.discovery.EurekaIdentityHeaderFilter.handle(EurekaIdentityHeaderFilter.java:27)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
Is there a way out of this?
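One hedged idea (a sketch only, not verified against the registry's property merge order): re-assert the two client flags in a profile file that is loaded alongside or after 'uaa', so the registry never tries to fetch from itself. The file name application-uaa.yml here is an assumption:

# application-uaa.yml for the registry itself (hypothetical override)
eureka:
  client:
    fetch-registry: false        # re-asserted so the uaa profile's 'true' does not win
    register-with-eureka: false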
This issue should have been fixed long ago, according to this
GitHub ticket.
Why am I still seeing this in the logs?
Config Server: Not found or not setup for this application
2017-10-30 15:03:53.740 INFO 24700 --- [ main] foo.bar.id.gateway.GatewayApp :
----------------------------------------------------------
Application 'Gateway' is running! Access URLs:
Local: http://localhost:8180
External: http://ipadddress:8180
Profile(s): [swagger, dev]
----------------------------------------------------------
2017-10-30 15:03:53.740 INFO 24700 --- [ main] foo.bar.id.gateway.GatewayApp :
----------------------------------------------------------
Config Server: Not found or not setup for this application
----------------------------------------------------------
2017-10-30 15:03:55.310 INFO 24700 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_AppGATEWAY/Gateway:2c4703cd1cbf3617def055e786113743: registering service...
2017-10-30 15:03:55.328 INFO 24700 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_AppGATEWAY/Gateway:2c4703cd1cbf3617def055e786113743 - registration status: 204
2017-10-30 15:08:20.870 INFO 24700 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration
[1]: https://github.com/jhipster/generator-jhipster/issues/3166
If you expect that the config server may occasionally be unavailable when your application starts, you can make it keep trying after a failure. First, you need to set spring.cloud.config.fail-fast=true. Then you need to add spring-retry and spring-boot-starter-aop to your classpath. The default behavior is to retry six times with an initial backoff interval of 1000ms and an exponential multiplier of 1.1 for subsequent backoffs. You can configure these properties (and others) by setting the spring.cloud.config.retry.* configuration properties.
Docs:
https://cloud.spring.io/spring-cloud-config/reference/html/#config-client-retry
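A minimal sketch of what that looks like on the client, using the documented property names with the default retry values made explicit:

# bootstrap.yml of the config client
# (spring-retry and spring-boot-starter-aop must be on the classpath)
spring:
  cloud:
    config:
      fail-fast: true
      retry:
        initial-interval: 1000   # ms before the first retry
        multiplier: 1.1          # exponential backoff factor
        max-attempts: 6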
You need to set the following properties in your GitHub configuration repo:
configserver:
  name: JHipster Registry config server
  status: Connected to the JHipster Registry config server, using https://github.com/jhipster/jhipster-registry-sample-config !
I recently upgraded from Elasticsearch 1.5 to Elasticsearch 5.5, and I can connect fine remotely, but not from localhost.
I updated elasticsearch.yml to be able to connect remotely:
network.host: 0.0.0.0
And the Elasticsearch logs look fine to me:
[2017-11-05T22:44:23,441][INFO ][o.e.n.Node ] [node1] starting ...
[2017-11-05T22:44:23,655][INFO ][o.e.t.TransportService ] [node1] publish_address {[ip address]:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}, {[ip address]:9300}
[2017-11-05T22:44:23,666][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-05T22:44:26,712][INFO ][o.e.c.s.ClusterService ] [node1] new_master {node1}{s-J7aStjQFuwor-WY6bSCQ}{nv4GVIQ6SwScPiebRHBQBQ}{localhost}{[ip address]:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-05T22:44:26,733][INFO ][o.e.h.n.Netty4HttpServerTransport] [node1] publish_address {[ip address]:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}, {[ip address]:9200}
[2017-11-05T22:44:26,734][INFO ][o.e.n.Node ] [node1] started
I am wondering if it is related to a proxy, since when I try to run curl localhost:9200, I get the following:
...
<p>The following error was encountered while trying to retrieve the URL: http://localhost:9200/</p>
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
<p>The remote host or network may be down. Please try the request again.</p>
...
Any ideas or tips on how to narrow down the issue would be helpful.
It turned out an http_proxy bash variable was set, which was proxying all outbound traffic. Once this variable was unset, the curl command worked.
It also turned out that, even before this fix, our Java applications could still connect to Elasticsearch locally; only the curl command was affected, which makes sense since the JVM does not honor the http_proxy environment variable by default.
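For anyone hitting the same symptom, a quick way to confirm and work around a shell proxy (standard env and curl usage):

# see whether a proxy variable is set in the current shell
env | grep -i proxy

# bypass the proxy for a single request
curl --noproxy localhost http://localhost:9200

# or clear it for the session
unset http_proxy HTTP_PROXY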