OpsCenter 5.0.1 agent connection failure - Cassandra

The OpsCenter agent can't connect to OpsCenter.
The agent.log shows errors like this (real IPs replaced with X.X.X.X):
INFO [pdp-loader] 2014-11-28 12:03:53,517 Attempting to load stored metric values.
ERROR [StompConnection receiver] 2014-11-28 12:03:54,814 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
INFO [StompConnection receiver] 2014-11-28 12:03:54,814 Reconnecting in 6s.
ERROR [StompConnection receiver] 2014-11-28 12:04:00,814 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
INFO [StompConnection receiver] 2014-11-28 12:04:00,814 Reconnecting in 14s.
ERROR [StompConnection receiver] 2014-11-28 12:04:14,818 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
INFO [StompConnection receiver] 2014-11-28 12:04:14,818 Reconnecting in 30s.
ERROR [StompConnection receiver] 2014-11-28 12:04:44,822 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
INFO [StompConnection receiver] 2014-11-28 12:04:44,822 Reconnecting in 62s.
ERROR [StompConnection receiver] 2014-11-28 12:05:46,826 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
INFO [StompConnection receiver] 2014-11-28 12:05:46,826 Reconnecting in 60s.
ERROR [StompConnection receiver] 2014-11-28 12:06:46,830 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
opscenterd.log shows nothing unusual.
My configuration is below.
OpsCenter config:
$cat opscenter-5.0.1/conf/opscenterd.conf
[webserver]
port = 8888
interface = 0.0.0.0
[logging]
[authentication]
enabled = False
[stat_reporter]
[agents]
use_ssl = False
Agent config:
$ cat datastax-agent-5.0.1/conf/address.yaml
stomp_interface: <X.X.X.X>
use_ssl: 0
So I checked the port:
$netstat -an | grep 61620
tcp 0 0 0.0.0.0:61620 0.0.0.0:* LISTEN
$ telnet X.X.X.X 61620
Trying X.X.X.X...
Connected to X.X.X.X.
Escape character is '^]'.
It seems OK, but the OpsCenter agent keeps showing me the same error again and again:
INFO [StompConnection receiver] 2014-11-28 12:05:46,826 Reconnecting in 60s.
ERROR [StompConnection receiver] 2014-11-28 12:06:46,830 failed connecting to <X.X.X.X>:61620:java.net.UnknownHostException: <X.X.X.X>
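Side note: since the exception is java.net.UnknownHostException rather than a refused connection, the agent appears to fail at name resolution before any port check matters. A quick way to test, from the agent host, whether the exact value configured as stomp_interface resolves (X.X.X.X is just the placeholder standing in for whatever is really in address.yaml):
$ getent hosts X.X.X.X
$ ping -c 1 X.X.X.X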
In addition, I'm running Cassandra 2.1.2 (3 replicas) on CentOS release 6.5 (Final),
kernel 2.6.32-431.23.3.el6.x86_64.
Can anyone help me?

So I had a slightly different message in the OpsCenter diagnostic output:
ProcessingError while calling CreateClusterConfController: Unable to connect to cluster
However, I'm running Cassandra 2.1.2 and OpsCenter 5.0.1 as well, and I had trouble connecting to an existing cluster until I saw this in cassandra.yaml:
listen_address
# You _must_ change this if you want multiple nodes to be able to communicate!
# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
# will always do the Right Thing _if_ the node is properly configured
# (hostname, name resolution, etc), and the Right Thing is to use the
# address associated with the hostname (it might not be).
It seems this applies to OpsCenter as well, so I commented it out. I also commented out rpc_address and let it auto-configure itself as well.
Please verify these settings, restart Cassandra, and try connecting again through OpsCenter.
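For reference, the relevant cassandra.yaml fragment ends up looking roughly like this (a sketch; the commented-out value is a placeholder, not a real address):
# Leaving listen_address blank/commented makes Cassandra fall back to
# InetAddress.getLocalHost(), i.e. the address the node's hostname resolves to.
# listen_address: 192.0.2.10
# Same idea for the client-facing address:
# rpc_address: 192.0.2.10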

Related

Cassandra says it's listening on port 9042 but I couldn't connect to it

I'm running Cassandra on my local machine.
I started it with sudo service cassandra start, then checked the logs under /var/log/cassandra/system.log, and they say:
INFO [main] 2019-07-28 13:13:17,226 Server.java:162 - Starting listening for CQL clients on localhost/127.0.0.1:9042 (unencrypted)...
INFO [main] 2019-07-28 13:13:17,270 CassandraDaemon.java:501 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO [SharedPool-Worker-1] 2019-07-28 13:13:27,133 ApproximateTime.java:44 - Scheduling approximate time-check task with a precision of 10 milliseconds
INFO [OptionalTasks:1] 2019-07-28 13:13:27,298 CassandraRoleManager.java:339 - Created default superuser role 'cassandra'
Then I try to connect with cqlsh in the terminal and it says:
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
What is wrong? Also, I couldn't see port 9042 with the netstat -tulpn command.
Go to /etc/cassandra/cassandra-env.sh
Uncomment
# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"
and change it to
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=localhost"
Set listen_address and broadcast_rpc_address to the local IP (get the IP address from ifconfig).
Restart Cassandra.
cqlsh localhost 9042
This should work if you didn't change the cassandra.yaml file.
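A quick way to confirm the change took effect after the restart (a sketch; the paths assume a standard package install):
# Confirm the native transport started and see which address it bound to:
$ grep "Starting listening for CQL clients" /var/log/cassandra/system.log
# Confirm something is listening on 9042:
$ sudo netstat -tulpn | grep 9042
# Then connect:
$ cqlsh localhost 9042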

Facing a connection error when trying to open the cqlsh prompt

Can someone help me understand why I'm facing the issue below, and how to fix it, when I try to start cqlsh (Cassandra)?
Connection error: ('Unable to connect to any servers',
{'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)].
Last error: Connection refused")})
When I type the command below:
sudo service cassandra status
cassandra (pid 1xxxx) is running...
This indicates that Cassandra is running properly.
But I'm unable to run cqlsh, even though it worked yesterday without any issues.
Coming to my cassandra.yaml file:
my seeds, listen_address, and rpc_address are all set to my public IP address, 10.x.xx.xxx.
native_transport_port: 9042
I'm using a single-node cluster.
How are you starting cqlsh? If you want it to connect to an address other than 127.0.0.1, you need to specify it. Specifically, you should try the 10.x.xx.xxx address that you set in your yaml.
$ cqlsh 10.x.xx.xxx
Are you specifying anything for listen_interface or rpc_interface? Remember that you can set either the address or the interface, but not both.
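For example, the client-facing settings in cassandra.yaml should define one or the other, never both at once (a sketch; the values are placeholders):
rpc_address: 10.x.xx.xxx
# rpc_interface: eth0    <- must stay unset if rpc_address is set (and vice versa)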
To figure out for sure which address Cassandra is listening on, check your system.log file:
$ grep listening /var/log/cassandra/system.log
INFO [main] 2015-12-03 21:06:27,581 Server.java:182 - Starting listening for CQL clients on /192.168.0.100:9042...
Assuming that everything is configured properly, and you do not have any errors during startup, the address returned is the one you should be providing when you start cqlsh.
Also, are you trying to connect from the same machine? Or are you trying to remotely connect to your single node? Or is your Cassandra node running on a VM on your machine? Double-check your firewall rules, and ensure that traffic on 9042 can get from your client to your node.
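One quick way to check whether traffic on 9042 can actually reach the node from the client side (a sketch; the address is the placeholder used above):
# From the client machine, test TCP reachability of the native transport port:
$ nc -zv 10.x.xx.xxx 9042
# On the node itself, look for firewall rules that might be dropping the traffic:
$ sudo iptables -L -n | grep 9042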
I got the output below when I ran $ grep listening /var/log/cassandra/system.log:
INFO [main] 2015-12-02 12:49:20,334 Server.java:182 - Starting listening for CQL clients on localhost/127.0.0.1:9042...
INFO [StorageServiceShutdownHook] 2015-12-02 15:59:11,730 ThriftServer.java:142 - Stop listening to thrift clients
INFO [StorageServiceShutdownHook] 2015-12-02 15:59:11,771 Server.java:213 - Stop listening for CQL clients
INFO [main] 2015-12-02 17:21:28,775 Server.java:182 - Starting listening for CQL clients on /10.x.x.xxx:9042...
INFO [StorageServiceShutdownHook] 2015-12-03 17:12:12,840 ThriftServer.java:142 - Stop listening to thrift clients
INFO [StorageServiceShutdownHook] 2015-12-03 17:12:12,882 Server.java:213 - Stop listening for CQL clients
INFO [main] 2015-12-03 17:12:41,337 Server.java:182 - Starting listening for CQL clients on /10.x.x.xxx:9042...
INFO [StorageServiceShutdownHook] 2015-12-03 17:33:35,996 ThriftServer.java:142 - Stop listening to thrift clients
INFO [StorageServiceShutdownHook] 2015-12-03 17:33:36,100 Server.java:213 - Stop listening for CQL clients
INFO [main] 2015-12-03 17:34:00,741 Server.java:182 - Starting listening for CQL clients on /10.x.x.xxx:9042...
Also, I'm trying to connect remotely through a VPN. I'm using OpenStack. How do I check for firewall issues?
Edit:
Finally, I was able to fix this issue. I ran the netstat -tuplen command and found the address to be ::ffff:10.x.x.xxx:9042.
So I ran cqlsh ::ffff:10.x.x.xxx:9042 and it started working.
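Note that cqlsh normally takes the host and the port as two separate arguments, so the equivalent invocation would presumably be (same IPv6-mapped address, shown only as an illustration):
$ cqlsh ::ffff:10.x.x.xxx 9042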

cqlsh not connecting when configuring a two-node cluster on Windows Server

I am trying to configure a two-node cluster with Cassandra on Windows Server 2008 R2. I installed the Cassandra Community edition on each server (10.xxx.0.1, 10.xxx.0.2), then stopped the service and edited the cassandra.yaml file in the conf folder.
The changes are:
cluster_name set
num_tokens commented out
tokens given in initial_token
seeds set to 10.xxx.0.1,10.xxx.0.2
listen_address set to each node's own IP address (10.xxx.0.1 or 10.xxx.0.2)
rpc_address set the same as listen_address
endpoint_snitch set to the gossiping snitch
I also changed the cassandra-rackdc.properties file to dc=DC1, rack=RAC1,
and changed the cassandra-env file, uncommenting: JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.xxx.0.1" (the cassandra.yaml part of these changes is sketched just below).
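For reference, the described edits would look roughly like this in each node's cassandra.yaml (a sketch; the cluster name and token are placeholders, and each node uses its own address and its own initial_token):
cluster_name: 'TwoNodeCluster'       # placeholder name, same on both nodes
# num_tokens: 256                    # commented out because initial_token is used
initial_token: -9223372036854775808  # this node's token; the other node gets a different one
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.xxx.0.1,10.xxx.0.2"
listen_address: 10.xxx.0.1           # this node's own IP (10.xxx.0.2 on the other node)
rpc_address: 10.xxx.0.1              # same as listen_address
endpoint_snitch: GossipingPropertyFileSnitch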
I then saved the file, started the service back up, ran nodetool status, and got:
Datacenter: DC1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Owns (effective)  Host ID                               Token                 Rack
UN  10.104.0.15  46.65 KB  100.0%            bc41a884-baaf-4a52-85f3-f3270c2ec957  -9223372036854775808  RAC1
and opened cqlsh, but it is not connecting. Below is the error:
ERROR [Initialization] 2015-10-12 17:10:21,353 Error connecting via JMX: java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused: connect]
INFO [main] 2015-10-12 17:10:24,612 Reconnecting to a backup OpsCenter instance
INFO [main] 2015-10-12 17:10:24,615 SSL communication is disabled
INFO [main] 2015-10-12 17:10:24,615 Creating stomp connection to 127.0.0.1:61620
INFO [main] 2015-10-12 17:10:24,628 Starting Jetty server: {:join? false, :ssl? false, :host nil, :port 61621}
INFO [Initialization] 2015-10-12 17:10:24,632 Sleeping for 2s before trying to determine IP over JMX again
INFO [StompConnection receiver] 2015-10-12 17:10:24,640 Reconnecting in 0s.
INFO [main] 2015-10-12 18:25:56,347 Waiting for the config from OpsCenter
INFO [main] 2015-10-12 18:25:56,349 Attempting to determine Cassandra's broadcast address through JMX
INFO [main] 2015-10-12 18:25:56,350 Starting Stomp
INFO [main] 2015-10-12 18:25:56,350 Starting up agent communcation with OpsCenter.
INFO [Initialization] 2015-10-12 18:25:56,356 New JMX connection (127.0.0.1:7199)
ERROR [Initialization] 2015-10-12 18:25:57,652 Error connecting via JMX: java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused: connect]
INFO [main] 2015-10-12 18:26:00,768 Reconnecting to a backup OpsCenter instance
INFO [main] 2015-10-12 18:26:00,770 SSL communication is disabled
INFO [main] 2015-10-12 18:26:00,770 Creating stomp connection to 127.0.0.1:61620
INFO [main] 2015-10-12 18:26:00,779 Starting Jetty server: {:join? false, :ssl? false, :host nil, :port 61621}
INFO [Initialization] 2015-10-12 18:26:00,782 Sleeping for 2s before trying to determine IP over JMX again
INFO [StompConnection receiver] 2015-10-12 18:26:00,945 Reconnecting in 0s.
INFO [StompConnection receiver] 2015-10-12 18:26:01,136 Connected to 127.0.0.1:61620

NodeManager not able to start in Hadoop 2.6.0 (Connection refused)

I have installed a Hadoop 2.6.0 multi-node cluster on EC2 instances (Ubuntu 14.04, 64-bit). All daemons on the master (NameNode, SecondaryNameNode, ResourceManager) are up, but on the slave machine only the DataNode is up; the NodeManager shuts down due to a connection refused error.
Kindly help me in this regard. Thanks in advance.
The log file of my NodeManager is below:
2015-09-08 07:59:36,606 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: NodeManager configured with 8 G physical memory allocated to containers, which is more than 80% of the total physical memory available (992.5 M). Thrashing might happen.
2015-09-08 07:59:36,613 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager for null: physical-memory=8192 virtual-memory=17204 virtual-cores=8
2015-09-08 07:59:36,646 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-09-08 07:59:36,666 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 53949
2015-09-08 07:59:36,688 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ContainerManagementProtocolPB to the server
2015-09-08 07:59:36,688 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Blocking new container-requests as container manager rpc server is still starting.
2015-09-08 07:59:36,691 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-09-08 07:59:36,692 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 53949: starting
2015-09-08 07:59:36,707 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Updating node address : ec2-52-88-167-9.us-west-2.compute.amazonaws.com:53949
2015-09-08 07:59:36,713 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-09-08 07:59:36,713 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8040
2015-09-08 07:59:36,716 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server
2015-09-08 07:59:36,717 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-09-08 07:59:36,717 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: starting
2015-09-08 07:59:36,717 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer started on port 8040
2015-09-08 07:59:36,719 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager started at ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154:53949
2015-09-08 07:59:36,719 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager bound to 0.0.0.0/0.0.0.0:0
2015-09-08 07:59:36,719 INFO org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
2015-09-08 07:59:36,790 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-09-08 07:59:36,793 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.nodemanager is not defined
2015-09-08 07:59:36,805 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-09-08 07:59:36,806 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node
2015-09-08 07:59:36,806 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-09-08 07:59:36,807 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-09-08 07:59:36,812 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /node/*
2015-09-08 07:59:36,812 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-09-08 07:59:36,820 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8042
2015-09-08 07:59:36,820 INFO org.mortbay.log: jetty-6.1.26
2015-09-08 07:59:36,863 INFO org.mortbay.log: Extract jar:file:/home/ubuntu/hadoop/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-common-2.6.0.jar!/webapps/node to /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp
2015-09-08 07:59:37,358 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:8042
2015-09-08 07:59:37,359 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app /node started at 8042
2015-09-08 07:59:37,879 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2015-09-08 07:59:37,885 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8031
2015-09-08 07:59:37,913 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
2015-09-08 07:59:37,917 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
**2015-09-08 07:59:38,951 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-09-08 07:59:39,956 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-09-08 07:59:40,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-09-08 07:59:41,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-09-08 07:59:42,958 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)**
2015-09-08 08:19:48,256 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: **org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused**
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:264)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:463)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:509)
Caused by: java.net.ConnectException: Call From ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor8.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy27.registerNodeManager(Unknown Source)
at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy28.registerNodeManager(Unknown Source)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:257)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:191)
... 6 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 18 more
2015-09-08 08:19:48,257 INFO org.apache.hadoop.service.AbstractService: Service NodeManager failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:264)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:463)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:509)
Caused by: java.net.ConnectException: Call From ec2-52-88-167-9.us-west-2.compute.amazonaws.com/172.31.29.154 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor8.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy27.registerNodeManager(Unknown Source)
at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy28.registerNodeManager(Unknown Source)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:257)
at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:191)
... 6 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 18 more
2015-09-08 08:19:48,263 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:8042
2015-09-08 08:19:48,264 INFO org.apache.hadoop.ipc.Server: Stopping server on 53949
2015-09-08 08:19:48,266 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 53949
2015-09-08 08:19:48,267 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-09-08 08:19:48,267 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting.
2015-09-08 08:19:48,267 INFO org.apache.hadoop.ipc.Server: Stopping server on 8040
2015-09-08 08:19:48,268 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8040
2015-09-08 08:19:48,268 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-09-08 08:19:48,269 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Public cache exiting
2015-09-08 08:19:48,269 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system...
2015-09-08 08:19:48,270 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped.
2015-09-08 08:19:48,270 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2015-09-08 08:19:48,270 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://ec2-52-26-161-203.us-west-2.compute.amazonaws.com:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/ubuntu/hdfstmp</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://ec2-52-26-161-203.us-west-2.compute.amazonaws.com:8021</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
Master machine:
ubuntu@ec2-52-26-161-203:~$ vim /etc/hosts
172.31.23.167 ec2-52-26-161-203.us-west-2.compute.amazonaws.com
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
ubuntu@ec2-52-26-161-203:~$ vim /etc/hadoop/masters
ec2-52-26-161-203.us-west-2.compute.amazonaws.com
ubuntu@ec2-52-26-161-203:~$ vim /etc/hadoop/slaves
ec2-52-88-167-9.us-west-2.compute.amazonaws.com
Slave machine:
ubuntu@ec2-52-88-167-9:~$ vim /etc/hosts
172.31.29.154 ec2-52-88-167-9.us-west-2.compute.amazonaws.com
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
ubuntu@ec2-52-88-167-9:~$ vim /etc/hadoop/slaves
ec2-52-88-167-9.us-west-2.compute.amazonaws.com
ubuntu@ec2-52-26-161-203:~$ sudo netstat -lpten | grep java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 1000 569904 19910/java
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 1000 570916 20136/java
tcp 0 0 172.31.23.167:8020 0.0.0.0:* LISTEN 1000 569911 19910/java
tcp6 0 0 :::8088 :::* LISTEN 1000 571699 20278/java
tcp6 0 0 :::8030 :::* LISTEN 1000 571690 20278/java
tcp6 0 0 :::8031 :::* LISTEN 1000 571683 20278/java
tcp6 0 0 :::8032 :::* LISTEN 1000 571695 20278/java
tcp6 0 0 :::8033 :::* LISTEN 1000 571702 20278/java
Telnet command:
ubuntu@ec2-52-26-161-203:~$ telnet localhost 8031
Trying ::1...
Connected to localhost.
Escape character is '^]'.
How is it using port 8031 for the ResourceManager? I haven't specified it in my Hadoop configuration files (core-site.xml, mapred-site.xml, hdfs-site.xml), which are above.
I made modifications to mapred-site.xml and yarn-site.xml that solved my issue. Since I hadn't set the hostname property for the ResourceManager in yarn-site.xml, the NodeManager was trying to connect to the address 0.0.0.0, which was the cause of the connection refused exception.
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ec2-52-26-161-203.us-west-2.compute.amazonaws.com</value>
</property>
The Hadoop documentation (http://wiki.apache.org/hadoop/ConnectionRefused)
clearly says:
Make sure the destination address in the exception isn't 0.0.0.0 - this
means that you haven't actually configured the client with the real
address for that
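A quick sanity check after putting the updated yarn-site.xml on the slave and restarting the daemons (a sketch; the hostname is the ResourceManager host from the configuration above):
# From the slave, the ResourceManager's resource-tracker port should now be reachable
# at the configured hostname rather than at 0.0.0.0:
$ telnet ec2-52-26-161-203.us-west-2.compute.amazonaws.com 8031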
Could you please try adding the master's IP entry to the slave machine's hosts file, and the slave's IP entry to the master's as well? Also comment out any entries in the hosts file that are not needed.

DataStax OpsCenter can't add nodes: "Error provisioning cluster: Request ID is invalid"

Update 2
There was a bug in OpsCenter not matching the dsc22 configuration with the Cassandra Community version; sorting that out solved one problem.
Update
After reading the opscenter log again, I think there is actually something wrong with the four authentication fields or some SSH configuration, but I still don't know exactly what should be done. The field says "Local node credentials (sudo) private key (optional)".
The scenario is as follows:
I installed 4 nodes with Vagrant and Ansible, where each has dsc22, opscenter (redundant, I know), datastax-agent, cassandra-tool, and Oracle Java 8.
Configuration below.
nodetool status looks good; they all see each other.
I create a keyspace, and it replicates to all the nodes just fine.
On my host machine I open the OpsCenter UI using the forwarded port from node02, for example.
The first time I see two choices: add an existing cluster or manage an existing one. When I try to manage the existing cluster and add 192.168.50.3,
I get the following:
So I try using 127.0.0.1; that works just fine, but then I only see this machine's local Cassandra node.
So I try to add nodes from inside. I get a dialog (I think this is important) that has credential fields; I enter admin/admin for the repository, and also admin/admin for the local username. I actually don't know what to put in these 4 fields, or whether these accounts get created here or are preconfigured somewhere else.
After I add some nodes to the data center with rack info etc., I get "Error provisioning cluster: Request ID is invalid".
I have no clue where the problem is; the only unknown step was the credentials (repository username/password, local username/password) when adding nodes from inside. But why can't I manage the existing cluster from the beginning, when I can only get in using 127.0.0.1 as the IP?
So here's the datastax-agent log:
Starting DataStax agent monitor datastax_agent_monitor.
INFO [main] 2015-08-24 22:39:59,506 Loading conf files: /var/lib/datastax-agent/conf/address.yaml
INFO [main] 2015-08-24 22:39:59,657 Java vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.8.0_60
INFO [main] 2015-08-24 22:39:59,657 DataStax Agent version: 5.2.0
INFO [main] 2015-08-24 22:39:59,732 Default config values: {:cassandra_port 9042, :rollups300_ttl 2419200, :settings_cf "settings", :restore_req_update_period 60, :my_$
INFO [main] 2015-08-24 22:39:59,740 Waiting for the config from OpsCenter
INFO [main] 2015-08-24 22:39:59,752 Starting Stomp
INFO [main] 2015-08-24 22:39:59,752 Starting up agent communcation with OpsCenter.
INFO [main] 2015-08-24 22:39:59,753 Reconnecting to a backup OpsCenter instance
INFO [main] 2015-08-24 22:39:59,756 SSL communication is disabled
INFO [main] 2015-08-24 22:39:59,757 Creating stomp connection to 192.168.50.3:61620
INFO [async-dispatch-1] 2015-08-24 22:39:59,756 Using 127.0.0.1 as the cassandra broadcast address
INFO [async-dispatch-1] 2015-08-24 22:39:59,762 New JMX connection (127.0.0.1:7199)
INFO [StompConnection receiver] 2015-08-24 22:39:59,787 Reconnecting in 0s.
INFO [main] 2015-08-24 22:39:59,791 Starting Jetty server: {:join? false, :ssl? false, :host nil, :port 61621}
INFO [StompConnection receiver] 2015-08-24 22:39:59,872 Connected to 192.168.50.3:61620
INFO [StompConnection receiver] 2015-08-24 22:40:00,200 Got new config from OpsCenter [note values in address.yaml override those from OpsCenter]: {:cassandra_port 904$
INFO [StompConnection receiver] 2015-08-24 22:40:00,224 Starting up agent collection.
INFO [StompConnection receiver] 2015-08-24 22:40:00,225 New JMX connection (127.0.0.1:7199)
INFO [Jetty] 2015-08-24 22:40:00,347 Jetty server started
INFO [StompConnection receiver] 2015-08-24 22:40:00,452 agent RPC address is 127.0.0.1
INFO [async-dispatch-1] 2015-08-24 22:40:00,454 cassandra RPC address is nil
INFO [StompConnection receiver] 2015-08-24 22:40:00,471 Starting OS metric collectors (Linux)
INFO [StompConnection receiver] 2015-08-24 22:40:00,516 Starting Cassandra JMX metric collectors
INFO [install-location-finder] 2015-08-24 22:40:00,614 New JMX connection (127.0.0.1:7199)
INFO [StompConnection receiver] 2015-08-24 22:40:00,639 New JMX connection (127.0.0.1:7199)
INFO [StompConnection receiver] 2015-08-24 22:40:00,793 New JMX connection (127.0.0.1:7199)
INFO [clojure-agent-send-off-pool-0] 2015-08-24 22:40:02,094 Attempting to load stored metric values.
Here's the full opscenter log: http://pastebin.com/fXT2vkFR
The following is a section from it:
2015-08-24 23:13:38+0000 [Test_Cluster] WARN: Ignoring scheduled job with type=best-practice, which is only supported with DataStax Enterprise.
2015-08-24 23:13:38+0000 [Test_Cluster] INFO: Done loading persisted scheduled job descriptions
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Using 192.168.50.4 as the RPC address for node 127.0.0.1
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Node <Node 127.0.0.1='-6574032654670847999'> changed version to {'search': None, 'jobtracker': None, 'tasktracker': None, 'spark': {u'master': None, u'version': None, u'worker': None}, 'dse': None, 'cassandra': u'2.2.0'}
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Processing spark version {u'master': None, u'version': None, u'worker': None}
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Node <Node 127.0.0.1='-6574032654670847999'> changed version to {u'search': None, u'jobtracker': None, u'tasktracker': None, u'spark': {u'master': None, u'version': None, u'worker': None}, u'dse': None, u'cassandra': u'2.2.0'}
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Processing spark version {u'master': None, u'version': None, u'worker': None}
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Node 127.0.0.1 changed its mode to normal
2015-08-24 23:13:40+0000 [Test_Cluster] INFO: Done loading persisted alert rules
2015-08-24 23:13:41+0000 [Test_Cluster] INFO: OpsCenter starting up.
2015-08-24 23:13:42+0000 [Test_Cluster] INFO: Using 192.168.50.2 as the RPC address for node 127.0.0.1
2015-08-24 23:13:42+0000 [Test_Cluster] INFO: Node <Node 127.0.0.1='-6574032654670847999'> changed version to {'search': None, 'jobtracker': None, 'tasktracker': None, 'spark': {u'master': None, u'version': None, u'worker': None}, 'dse': None, 'cassandra': u'2.2.0'}
2015-08-24 23:13:42+0000 [Test_Cluster] INFO: Processing spark version {u'master': None, u'version': None, u'worker': None}
2015-08-24 23:13:42+0000 [Test_Cluster] INFO: Node <Node 127.0.0.1='-6574032654670847999'> changed version to {u'search': None, u'jobtracker': None, u'tasktracker': None, u'spark': {u'master': None, u'version': None, u'worker': None}, u'dse': None, u'cassandra': u'2.2.0'}
2015-08-24 23:13:42+0000 [Test_Cluster] INFO: Processing spark version {u'master': None, u'version': None, u'worker': None}
2015-08-24 23:13:42+0000 [Test_Cluster] INFO: Node 127.0.0.1 changed its mode to normal
2015-08-24 23:13:42+0000 [] INFO: Starting to update agents' configuration
2015-08-24 23:13:47+0000 [Test_Cluster] INFO: Using 192.168.50.5 as the RPC address for node 127.0.0.1
2015-08-24 23:13:48+0000 [Test_Cluster] INFO: Using 192.168.50.4 as the RPC address for node 127.0.0.1
2015-08-24 23:13:49+0000 [Test_Cluster] INFO: Using 192.168.50.3 as the RPC address for node 127.0.0.1
2015-08-24 23:13:49+0000 [Test_Cluster] INFO: Node <Node 127.0.0.1='-6574032654670847999'> changed version to {'search': None, 'jobtracker': None, 'tasktracker': None, 'spark': {u'master': None, u'version': None, u'worker': None}, 'dse': None, 'cassandra': u'2.2.0'}
2015-08-24 23:13:49+0000 [Test_Cluster] INFO: Processing spark version {u'master': None, u'version': None, u'worker': None}
2015-08-24 23:13:49+0000 [Test_Cluster] INFO: Node <Node 127.0.0.1='-6574032654670847999'> changed version to {u'search': None, u'jobtracker': None, u'tasktracker': None, u'spark': {u'master': None, u'version': None, u'worker': None}, u'dse': None, u'cassandra': u'2.2.0'}
2015-08-24 23:13:49+0000 [Test_Cluster] INFO: Processing spark version {u'master': None, u'version': None, u'worker': None}
2015-08-24 23:13:49+0000 [Test_Cluster] INFO: Node 127.0.0.1 changed its mode to normal
2015-08-24 23:13:58+0000 [Test_Cluster] INFO: Using 192.168.50.3 as the RPC address for node 127.0.0.1
2015-08-24 23:13:58+0000 [Test_Cluster] INFO: Using 192.168.50.2 as the RPC address for node 127.0.0.1
2015-08-24 23:14:22+0000 [] INFO: Testing SSH connectivity to 192.168.50.4
2015-08-24 23:14:23+0000 [] INFO: Testing SSH login to 192.168.50.4
2015-08-24 23:14:29+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:14:29+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:14:41+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:14:41+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:14:52+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:14:52+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:15:03+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:15:03+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:15:14+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:15:14+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:15:26+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:15:26+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:15:38+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:15:38+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:15:50+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:15:50+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:16:01+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
2015-08-24 23:16:01+0000 [] INFO: Sleeping before retrying ssh login.
2015-08-24 23:16:13+0000 [] There was a problem verifying an ssh login on 192.168.50.4
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'192.168.50.4' failed
Configuration
I'm using Vagrant to create my VMs.
... a section from the Vagrantfile:
config.vm.define "node02" do |node|
node.vm.host_name = "node02"
node.vm.network :forwarded_port, guest: 8888, host: 3023
node.vm.network "private_network", ip: "192.168.50.2", virtualbox__intnet: "intnet"
end
...
A section from the cassandra.yaml on each node:
- seeds: "192.168.50.xx, 192.168.50.xx, ... rest of nodes"
The address.yaml on each node:
# couple of nodes that have opscenter
# The following hosts line is commented out because when I use it the datastax-agent doesn't connect to any nodes, so I guess the default is 127.0.0.1 which works fine
# hosts: ["192.168.50.xx","192.168.50.xx"]
local_interface: 127.0.0.1
# opscenter ip
stomp_interface: 192.168.50.xx
# this nodeXX ip
agent_rpc_broadcast_address: 192.168.50.xx
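For concreteness, on one of the nodes this would look roughly like the following (illustrative values only: node02's 192.168.50.2 is taken from the Vagrantfile section above, and 192.168.50.3 is the OpsCenter address the agent log shows it connecting to; these are not copied from my real files):
local_interface: 127.0.0.1
# opscenter ip
stomp_interface: 192.168.50.3
# this node's ip (node02)
agent_rpc_broadcast_address: 192.168.50.2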
A section from cassandra-env.sh; note I don't use any authentication for JMX:
LOCAL_JMX=NO
if [ "$LOCAL_JMX" = "yes" ]; then
JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT -XX:+DisableExplicitGC"
else
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
nodetool status:
-- Address Load Tokens Owns Host ID Rack
UN 192.168.50.2 982.74 KB 256 ? a35.. RAC1
UN 192.168.50.3 679.05 KB 256 ? e6c.. RAC1
UN 192.168.50.4 912.1 KB 256 ? 634.. RAC1
UN 192.168.50.5 939.55 KB 256 ? 0a... RAC1
