Can connect to Elasticsearch 5 remotely but not locally - Linux

I recently upgraded from Elasticsearch 1.5 to Elasticsearch 5.5, and I can connect fine remotely, but not from localhost.
I updated elasticsearch.yml to be able to connect remotely:
network.host: 0.0.0.0
And the elasticsearch logs look fine to me:
[2017-11-05T22:44:23,441][INFO ][o.e.n.Node ] [node1] starting ...
[2017-11-05T22:44:23,655][INFO ][o.e.t.TransportService ] [node1] publish_address {[ip address]:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}, {[ip address]:9300}
[2017-11-05T22:44:23,666][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-05T22:44:26,712][INFO ][o.e.c.s.ClusterService ] [node1] new_master {node1}{s-J7aStjQFuwor-WY6bSCQ}{nv4GVIQ6SwScPiebRHBQBQ}{localhost}{[ip address]:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-05T22:44:26,733][INFO ][o.e.h.n.Netty4HttpServerTransport] [node1] publish_address {[ip address]:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}, {[ip address]:9200}
[2017-11-05T22:44:26,734][INFO ][o.e.n.Node ] [node1] started
I am wondering if it is related to the proxy, since when I try to run curl localhost:9200, I get the following:
...
<p>The following error was encountered while trying to retrieve the URL: http://localhost:9200/</p>
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
<p>The remote host or network may be down. Please try the request again.</p>
...
Any ideas or tips on how to narrow down the issue would be helpful.

It turned out an http_proxy bash variable was set, which was proxying all outbound traffic. Once this variable was unset, the curl command worked.
It also turned out that, before this fix was made, our Java applications were still able to connect to Elasticsearch locally; it was only the curl command that wasn't working (curl honors the http_proxy environment variable, whereas the JVM does not read it by default).
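For anyone hitting the same thing, here is a quick way to check for and bypass a proxy (a sketch assuming a standard bash environment):
env | grep -i proxy                               # list any proxy variables currently set
curl --noproxy localhost http://localhost:9200    # bypass the proxy for this one request
unset http_proxy HTTP_PROXY                       # or unset it for the whole shell session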

Related

Cannot start Keycloak 19.0.2 on Azure

I have a Linux VM hosted on Azure. On this VM I "installed" the standalone version of Keycloak 19.0.2. When I connect to the VM with SSH, I can simply start the server with bin/kc.sh start-dev. This works without any problems.
Now I want to start the Keycloak server automatically on the VM startup. I tried this with Crontab. This did not work because the VM resets the Crontab on each restart.
Then I tried to directly start it from Azure with a Custom Script for Linux. Since I had trouble setting it up, I started playing around with the Run command feature in the Azure portal. When I run the command sudo /home/my_user/keycloak/bin/kc.sh start-dev, I get the following output:
2022-09-14 14:19:32,839 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: false
2022-09-14 14:19:34,670 INFO [org.keycloak.common.crypto.CryptoIntegration] (main) Detected crypto provider: org.keycloak.crypto.def.DefaultCryptoProvider
2022-09-14 14:19:36,475 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-09-14 14:19:36,483 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-09-14 14:19:36,553 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-09-14 14:19:37,251 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-09-14 14:19:37,886 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_884187, Site name: null
2022-09-14 14:19:41,657 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-09-14 14:19:41,657 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Unable to start HTTP server
2022-09-14 14:19:41,658 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: io.quarkus.runtime.QuarkusBindException
2022-09-14 14:19:41,658 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
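For what it's worth, a QuarkusBindException generally means the HTTP listener could not bind its port, most often because another process (for example, a Keycloak instance left running from an earlier SSH session) already holds it. A quick check, assuming the dev-mode default port 8080:
sudo ss -ltnp | grep ':8080'    # prints whatever is already listening on port 8080, if anything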
TL;DR: The server starts perfectly fine when I start it manually via SSH. When I try to start the server from the Azure portal (Run command), it does not work.
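For reference, the conventional way to auto-start something like this on Linux would be a systemd unit rather than crontab: a minimal sketch using the paths from above (not yet tested on this VM):
# /etc/systemd/system/keycloak.service (path and unit name illustrative)
[Unit]
Description=Keycloak (dev mode)
After=network.target

[Service]
User=my_user
ExecStart=/home/my_user/keycloak/bin/kc.sh start-dev
Restart=on-failure

[Install]
WantedBy=multi-user.target
It would then be enabled with sudo systemctl enable --now keycloak.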

SPARK YARN: cannot send job from client (org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032)

I'm trying to submit a Spark job to YARN (without HDFS) in HA mode.
For submitting I'm using org.apache.spark.deploy.SparkSubmit.
When I send the request from the machine with the active Resource Manager, it works well. But if I try to send it from the machine with the standby Resource Manager, the job fails with this error:
DEBUG org.apache.hadoop.ipc.Client - Connecting to spark2-node-dev/10.10.10.167:8032
DEBUG org.apache.hadoop.ipc.Client - Connecting to /0.0.0.0:8032
org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep
However, when I send the request via the command line (spark-submit), it works well from both the active and standby machines.
What can cause the problem?
P.S. I use the same parameters for both ways of submitting the job: org.apache.spark.deploy.SparkSubmit and the spark-submit command line. And the yarn.resourcemanager.hostname.rm_id properties are defined for all RM hosts.
The problem was the absence of yarn-site.xml from the classpath of the jar that invokes SparkSubmit. The submitter jar does not take the YARN_CONF_DIR or HADOOP_CONF_DIR environment variables into account, so it cannot see yarn-site.xml; without it, the Hadoop client falls back to the default Resource Manager address 0.0.0.0:8032, which is exactly the address in the error above.
One solution I found was to put yarn-site.xml on the classpath of the jar.
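For example (the paths and the main class below are illustrative, not from the original setup), you can either put the Hadoop conf directory on the classpath or bundle the file into the jar:
# option 1: prepend the Hadoop conf dir (containing yarn-site.xml) to the classpath
java -cp "/etc/hadoop/conf:my-submitter.jar" com.example.MySubmitter
# option 2: add yarn-site.xml to the root of the submitter jar itself
jar uf my-submitter.jar yarn-site.xml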

Fail to connect to master with Spark on Google Compute Engine

I am trying out a Hadoop/Spark cluster on Google Compute Engine through the "Launch click-to-deploy software" feature.
I have created 1 master and 2 slave nodes, and I can launch spark-shell on the cluster, but when I try to launch spark-shell from my own computer, it fails.
I launch:
./bin/spark-shell --master spark://IP or Hostname:7077
And I get this stack trace:
15/04/09 10:58:06 INFO AppClient$ClientActor: Connecting to master
akka.tcp://sparkMaster@IP or Hostname:7077/user/Master...
15/04/09 10:58:06 WARN AppClient$ClientActor: Could not connect to
akka.tcp://sparkMaster@IP or Hostname:7077: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@IP or Hostname:7077
15/04/09 10:58:06 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@IP or Hostname:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: IP or Hostname: unknown error
Please let me know how to overcome this problem.
See the comment from Daniel Darabos. By default, all incoming connections are blocked except for SSH, RDP and ICMP. To be able to connect from the Internet to the hadoop master instance, you must first open port 7077 for instances with the 'hadoop-master' tag in your project:
gcloud compute --project PROJECT firewall-rules create allow-spark \
--allow TCP:7077 \
--target-tags hadoop-master
See Firewalls, Adding a firewall, and gcloud compute firewall-rules create in the GCE public documentation for further details and all the possibilities.
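To double-check the rule and the port afterwards (nc is just one illustrative way to probe the port):
gcloud compute --project PROJECT firewall-rules list
nc -vz MASTER_IP_OR_HOSTNAME 7077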

DataStax Agent error

While adding existing cluster in OpsCenter I receive an error:
ERROR: Agent for XXX.XXX.XXX.XXX was unable to complete operation (http://XXX.XXX.XXX.XXX:61621/snapshots/pit/properties?): java.lang.IllegalArgumentException: No implementation of method: :make-reader of protocol: #'clojure.java.io/IOFactory found for class: nil
On agent there is an error:
java.lang.IllegalArgumentException: No implementation of method: :make-reader of protocol: #'clojure.java.io/IOFactory found for class: nil
at clojure.core$_cache_protocol_fn.invoke(core_deftype.clj:541)
at clojure.java.io$fn__8551$G__8546__8558.invoke(io.clj:73)
at clojure.java.io$reader.doInvoke(io.clj:106)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$slurp.doInvoke(core.clj:6278)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.backups.pit$read_properties.invoke(pit.clj:68)
at opsagent.backups.pit$enabled_QMARK_.invoke(pit.clj:106)
at clojure.core$eval37.invoke(NO_SOURCE_FILE:107)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6609)
at clojure.lang.Compiler.eval(Compiler.java:6582)
at clojure.core$eval.invoke(core.clj:2852)
at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:102)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at opsagent.conf$handle_new_conf.invoke(conf.clj:198)
at opsagent.messaging$message_callback$fn__12316.invoke(messaging.clj:52)
at opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown Source)
at org.jgroups.client.StompConnection.notifyListeners(StompConnection.java:324)
at org.jgroups.client.StompConnection.run(StompConnection.java:274)
at java.lang.Thread.run(Thread.java:745)
And cluster creation fails. I also get this error during startup. I tried reinstalling the agent, but it didn't help.
DataStax Agent version: 5.1.0
OpsCenter version 5.1.0
root@node1:~# java -version
java version "1.7.0_75"
OpenJDK Runtime Environment (IcedTea 2.5.4) (7u75-2.5.4-1~deb7u1)
OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)
root@node1:~#
Content of address.yaml
stomp_interface: "YYY.YYY.YYY.YYY"
Content of opscenterd.conf
[webserver]
port = 8888
interface = 0.0.0.0
use_ssl = false
[logging]
level = INFO
<cluster name>.conf is absent, because the cluster has not been added
The problem is that the agent can't find your DSE installation on that node. When it can't find DSE, it can't get the archiving properties file it needs to update, and it errors out.
This error message is unfortunately terribly unhelpful. I've created a ticket to fix the error message (it's unfortunately private, but you can use this ticket number when discussing the issue with DataStax: OPSC-4826)
As a workaround, try setting cassandra_install_location in your address.yaml file on that node. After adjusting address.yaml, bounce the agent and retry the operation.
You can find a document listing this and more address.yaml config items here: http://www.datastax.com/documentation/opscenter/5.1/opsc/configure/agentAddressConfiguration.html
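For example, address.yaml on that node might end up looking like this (the install path below is an illustrative guess; point it at wherever DSE actually lives):
stomp_interface: "YYY.YYY.YYY.YYY"
cassandra_install_location: /usr/share/dse    # illustrative path
Then bounce the agent, e.g. sudo service datastax-agent restart.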
I think the issue will be with your Java installation. I believe you'll need Oracle Java, not OpenJDK.
This worked for me:
ubuntu:~$ sudo add-apt-repository ppa:webupd8team/java
ubuntu:~$ sudo apt-get update && sudo apt-get install oracle-java7-installer oracle-java7-set-default
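Afterwards you can confirm the Oracle JVM became the default (the exact version string will vary):
ubuntu:~$ java -version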

Cassandra Cluster configuration

I am trying to configure two Windows servers on my network as a Cassandra cluster.
I did some reading on various sites and changed the settings below in cassandra.yaml.
After changing the default value of 127.0.0.1 to the actual IP, the Cassandra service does not start.
I also added a mapping from the actual IP to localhost in the (Windows) hosts file.
After making the above change, the service comes up when I start it, but then stops immediately.
The reason I am changing this IP is to make this a two-node cluster setup.
Please let me know if I am missing something.
Version: Datastax community version of Cassandra
Server : windows.
Thx
Muthu
Message from Cassandra.txt in logs dir:
ERROR [main] 2014-09-18 11:43:12,155 DatabaseDescriptor.java (line 116) Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml Caused by: Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config@34e5190a; No suitable constructor with 2 arguments found for class org.apache.cassandra.config.SeedProviderDef in 'reader', line 8, column 1: cluster_name: 'Test Cluster'
If you want to create a Cassandra cluster, you must have at least two nodes and configure /etc/cassandra/cassandra.yaml:
cassandra.yaml
cluster_name: 'Some Cluster Name'
listen_address: [Current IP]
rpc_address: [Current IP]
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "[Current IP], [Remote IP]"
Note: seeds must contain at least two IPs, which must be reachable from each other
Clean and start the Cassandra instance:
sudo rm -rf /var/lib/cassandra/* /var/log/cassandra/*
Note: the Cassandra instance must be killed before cleaning those folders.
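Once both nodes are back up, you can verify that they joined the same ring (assuming nodetool is on the PATH); each node should show state UN (Up/Normal):
nodetool status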
