datastax-agent and OpsCenter not communicating on a fresh AWS EC2 DataStax instance

I've got a two-node Amazon Web Services Cassandra cluster created using the "DataStax Auto-Clustering AMI 2.5.1-pv". OpsCenter is running on node 0, as is the datastax-agent, but they don't seem to be fully connected: OpsCenter says 0 of 0 agents connected, and the connection icon next to "New Cluster" is blinking red.
OpsCenter screenshot: http://i.stack.imgur.com/Z6Tnx.png

A nodetool status would really help.
Check the agent logs on both nodes when the agent starts. If you don't see any errors, try "Add a new cluster", then "Manage existing cluster", and add the seed IPs of your two nodes; OpsCenter will try to update the agents if needed.
BTW: upgrade to OpsCenter 5.1, a lot of bugs have already been fixed there.
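A minimal diagnostic sketch along those lines, assuming a package install with default paths (log location and service name are the usual datastax-agent defaults, but verify on your AMI):

    # Confirm the cluster itself is healthy
    nodetool status

    # Check the agent log for connection errors (default package path)
    tail -n 100 /var/log/datastax-agent/agent.log

    # Restart the agent after any config change
    sudo service datastax-agent restart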

Related

OpsCenter nodes list shows all node names as localhost

I have a brand new cluster running DSE 5.0.3 with OpsCenter 6.0.3. I used Lifecycle Manager to create a 6-node cluster, adding their IPs to the Nodes list and installing DSE on each node that way. The cluster seems fine, healthy, etc., but the Nodes section, under the LIST tab, shows all the nodes' names as localhost. If I click on each node it shows "localhost - x.x.x.x" (x.x.x.x being the actual node IP). How do I make them show their actual hostnames in OpsCenter? Where does this name come from?
Thanks!
The hostnames in OpsCenter are reported by the agent running on each node in the cluster. In this case each individual node is reporting its hostname as localhost. Fixing that configuration on the nodes and restarting the agents should resolve the issue.
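A sketch of what that fix might look like on one node, assuming a systemd-based host (hostnamectl) and the package service name; the hostname below is a placeholder:

    # Give the node a real hostname instead of localhost
    sudo hostnamectl set-hostname node1.example.com

    # Also check that /etc/hosts doesn't map the node's own IP to localhost,
    # then restart the agent so it re-reports the hostname
    sudo service datastax-agent restart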

How to set up stomp_interface for a failover node for Cassandra OpsCenter

How do I set up a failover node for Cassandra OpsCenter? OpsCenter data is stored on the OpsCenter node itself, so to set up a failover node I need to stand up a second OpsCenter instance and sync OpsCenter data and config files between the two.
The stomp_interface on the nodes in the cluster points towards OpsCenter_1; how will it change automatically to OpsCenter_2 when failover occurs?
There are steps in the DataStax documentation that cover this. At a minimum:
Mirror the configuration directories stored on the OpsCenter primary to the OpsCenter backup using whatever method you prefer.
On the backup OpsCenter, in the failover directory, create a primary_opscenter_location configuration file that contains the IP address of the primary OpsCenter daemon to monitor (see the sketch below).
The stomp_interface setting on the agents gets changed (the address.yaml file is updated as well) when failover occurs. This is why the documentation recommends making sure no third-party configuration management is controlling that file.
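A minimal sketch of those two steps, assuming a package install where OpsCenter config lives under /etc/opscenter (the IP address and the rsync source host are placeholders for your environment):

    # On the backup OpsCenter: mirror the primary's config directories
    sudo rsync -az opscenter-primary:/etc/opscenter/ /etc/opscenter/

    # Tell the backup which primary to monitor for failover
    sudo mkdir -p /etc/opscenter/failover
    echo "10.0.0.10" | sudo tee /etc/opscenter/failover/primary_opscenter_location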
Three things:
If you have a firewall on, allow the corresponding ports to communicate (61620, 61621, 9160, 9042, 7199).
Always verify that Cassandra is up and running, so the agent can actually connect to something.
Stop the agent, check address.yaml again, then restart the agent (see the sketch below).
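A quick way to run through those three checks, assuming nc is available and default package paths (host names are placeholders):

    # 1. Firewall: can the agent reach OpsCenter, and vice versa?
    nc -zv opscenter-host 61620   # agents -> OpsCenter (STOMP)
    nc -zv node-host 61621        # OpsCenter -> agent

    # 2. Is Cassandra actually up?
    nodetool status

    # 3. Bounce the agent after reviewing its config
    sudo service datastax-agent stop
    cat /var/lib/datastax-agent/conf/address.yaml
    sudo service datastax-agent start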

Cassandra opscenter "agents failed to connect" on removed nodes

I have some EC2 instances where Cassandra provisioning failed. I terminated the instances and the machines no longer exist.
OpsCenter keeps nagging me about "agents failed to connect" for these machines.
The machines do not show up in nodetool status nor in the system.peers table.
Where does OpsCenter store the list of nodes it connects to, so I can delete these zombie nodes?
This is actually a bug in OpsCenter that will be addressed in a future release. To mitigate the issue for now, restart OpsCenter and those messages should cease.
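For reference, a sketch of that restart, plus a double-check that the dead nodes really are gone from the cluster's own view (the opscenterd service name assumes a package install):

    # Restart the OpsCenter daemon so it drops the stale agent entries
    sudo service opscenterd restart

    # Confirm the terminated machines are absent from the cluster itself
    nodetool status
    cqlsh -e "SELECT peer FROM system.peers;"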

Cassandra: where to modify the OpsCenter agent for a newly added node in an existing cluster

I have a single-node Cassandra cluster on EC2 (launched from a DataStax AMI) and I manually added a new node, also backed by the same DataStax AMI, after deleting its data directory and modifying cassandra.yaml. I can see two nodes in the Nodes section of OpsCenter, but the OpsCenter agent is not connected on the new node (1 of 2 agents are connected). It looks like the new node has its own OpsCenter installation, and that somehow conflicts with the OpsCenter installation on the first node? I guess I have to fix some configuration file of the OpsCenter agent on the new node so that it points to the OpsCenter installation on the first node, but I can't find what to modify.
Thanks!
It is the stomp_interface setting in /var/lib/datastax-agent/conf/address.yaml.
I had to manually put stomp_interface into the configuration file. Also, I noticed that the process was looking for /etc/datastax-agent/address.yaml and never looked at /var/lib/datastax-agent/conf/address.yaml.
Also, local_interface was not necessary to get things working for me. YMMV.
I'm not sure where this gets set, or whether it changed between agent versions at some point. FWIW, I installed both OpsCenter and the agents via packages.
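A minimal sketch of that fix, using the /var/lib path from the answer above (try /etc/datastax-agent/address.yaml if your agent reads from there instead; 10.0.0.10 is a placeholder for the first node's OpsCenter host):

    # Point the new node's agent at the existing OpsCenter host
    echo "stomp_interface: 10.0.0.10" | sudo tee -a /var/lib/datastax-agent/conf/address.yaml

    # Restart the agent so it picks up the change
    sudo service datastax-agent restart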

Why does the DataStax Cassandra community AMI have OpsCenter?

I just launched a DataStax Cassandra AMI.
I noticed that OpsCenter is running by default.
Do I have to disable OpsCenter on each of the AMIs I launch?
What Linux command ensures OpsCenter won't run as a service?
Ultimately I want to run OpsCenter on my local server, remotely connecting to the cluster in AWS.
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installAMILaunch.html
Per those docs, the instance can be configured at launch:
--opscenter [no] Optional. By default, DataStax OpsCenter is installed on the first instance. Specify no to disable.
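A sketch of both approaches: passing the flag in the AMI's user data at launch, or disabling the already-running service afterwards (the other user-data flags and the update-rc.d call are assumptions based on the AMI docs and its Ubuntu base):

    # At launch: include --opscenter no in the AMI user data, e.g.
    # --clustername mycluster --totalnodes 2 --version community --opscenter no

    # On an already-running instance: stop the daemon and keep it from starting
    sudo service opscenterd stop
    sudo update-rc.d opscenterd disable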
Sorry guys, I should have read the documents more carefully.
