Cassandra: where to modify the OpsCenter agent configuration for a node newly added to an existing cluster

I have a single-node Cassandra cluster on EC2 (launched from a DataStax AMI), and I manually added a new node, also backed by the same DataStax AMI, after deleting its data directory and modifying cassandra.yaml. I can see two nodes in the Nodes section of OpsCenter, but the OpsCenter agent is not connected on the new node (1 of 2 agents are connected). It looks like the new node has its own OpsCenter installation that somehow conflicts with the OpsCenter installation on the first node. I guess I have to fix some configuration file for the OpsCenter agent on the new node so that it points to the OpsCenter installation on the first node, but I can't find what to modify.
Thanks!

It is the stomp_interface setting in /var/lib/datastax-agent/conf/address.yaml.

I had to manually put stomp_interface into the configuration file. I also noticed that the process was looking for /etc/datastax-agent/address.yaml and never looked at /var/lib/datastax-agent/conf/address.yaml.
Also, local_interface was not necessary to get things working for me. YMMV.
I'm not sure where this gets set, or whether it changed between agent versions at some point. FWIW, I installed both OpsCenter and the agents via packages.
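For reference, here is a minimal sketch of the agent-side settings involved; the IP addresses and the service name are placeholders, and the exact file location depends on how the agent was installed (package installs typically use /var/lib/datastax-agent/conf/address.yaml):

    # /var/lib/datastax-agent/conf/address.yaml (values are placeholders):
    #   stomp_interface: 203.0.113.10   # IP of the machine running opscenterd
    #   local_interface: 203.0.113.20   # optional; this node's own address
    # After editing, restart the agent so it picks up the change
    # (service name may differ between versions/packagings):
    sudo service datastax-agent restart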

Related

How to update configuration of a Cassandra cluster

I have a 3-node Cassandra cluster and I want to make some adjustments to cassandra.yaml.
My question is, how should I perform this? One node at a time, or is there a way to make it happen without shutting down nodes?
Btw, I am using Cassandra 2.2 and this is a production cluster.
There are multiple approaches here:
If you edit the cassandra.yaml file, you need to restart Cassandra for it to re-read the contents of that file. If you restart all nodes at once, your cluster will be unavailable. Restarting one node at a time is almost always safe (provided you have sane replication factors and consistency levels). If your cluster is configured to survive a rack or datacenter outage, then you can safely restart more nodes concurrently.
Many settings can be changed without a restart via JMX, though I don't have a documentation link handy. Changing a value via JMX WON'T change cassandra.yaml, though, so you'll need to update the file as well or your config will revert to what's in the file when the node restarts.
If you're using DSE, OpsCenter's Lifecycle Manager feature makes updating configs a simple point-and-click affair (disclaimer, I'm biased as I'm an LCM dev).
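As a rough illustration of the first two points, the sketch below assumes a package install where Cassandra runs as the cassandra service; nodetool setcompactionthroughput is just one example of a value that can be adjusted at runtime over JMX:

    # Runtime change via a JMX-backed nodetool command (compaction throughput in MB/s).
    # This does not persist across restarts, so mirror the value in cassandra.yaml too.
    nodetool setcompactionthroughput 64

    # Rolling restart after editing cassandra.yaml -- one node at a time,
    # waiting until the node shows as UN in nodetool status before moving on.
    nodetool drain                    # flush memtables and stop accepting new writes
    sudo service cassandra restart    # service name is an assumption for package installs
    nodetool status                   # confirm the node is back before touching the next one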

How to set up stomp_interface for a failover node for Cassandra OpsCenter

How do I set up a failover node for Cassandra OpsCenter? The OpsCenter data is stored on the OpsCenter node itself, so to set up a failover node I need to set up a second OpsCenter, separate from the current one, and sync the OpsCenter data and config files between the two.
The stomp_interface on the nodes in the cluster points at OpsCenter_1; how will it change automatically to OpsCenter_2 when failover occurs?
There are steps in the DataStax documentation that cover this in detail. At a minimum:
Mirror the configuration directories stored on the OpsCenter primary to the OpsCenter backup, using whatever method you prefer.
On the backup OpsCenter, in the failover directory, create a primary_opscenter_location configuration file that contains the IP address of the primary OpsCenter daemon to monitor.
The stomp_interface setting on the agents gets changed (the address.yaml file is updated as well) when failover occurs. This is why the documentation recommends making sure there is no third-party configuration management running on it.
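A minimal sketch of the backup-side setup, assuming a package install where the failover directory is /var/lib/opscenter/failover (check your own installation; the IP is a placeholder):

    # On the backup OpsCenter machine: tell it which primary opscenterd to monitor.
    sudo mkdir -p /var/lib/opscenter/failover
    echo "203.0.113.10" | sudo tee /var/lib/opscenter/failover/primary_opscenter_location
    # 203.0.113.10 stands in for the primary OpsCenter daemon's IP address.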
Three things:
If you have a firewall enabled, allow the corresponding ports to communicate (61620, 61621, 9160, 9042, 7199).
Always verify that Cassandra is up and running, so the agent actually has something to connect to.
Stop the agent, check address.yaml again, then restart the agent (see the sketch below).
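A quick sketch of that last step, assuming a package install where the agent runs as the datastax-agent service:

    sudo service datastax-agent stop
    cat /var/lib/datastax-agent/conf/address.yaml   # stomp_interface should point at the right OpsCenter
    sudo service datastax-agent start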

Cassandra 2+ HPC Deployment

I am trying to deploy Cassandra on a Linux-based HPC cluster and I need some guidelines if possible. Specifically, what is the difference between running Cassandra locally and in a cluster?
When managing it locally (in which case it runs smoothly), we duplicate the original files for every node inside our Cassandra directory and apply the appropriate changes for IP address, RPC, JMX, etc. However, when managing a network, which files do we need to install on each node: the whole package with all the files, or just some of the required ones,
like bin/cassandra.in.sh, conf/cassandra.yaml, and bin/cassandra?
I am a little bit confused about what to store on each node separately in order to start working on the cluster.
You need to install Cassandra on each node (VM), i.e. the whole package, and then update the config files as necessary. As described here, to configure a cluster in a single data center you need to (a sketch of the relevant cassandra.yaml settings follows this list):
Install Cassandra on each node
Configure cluster name
Configure seeds
Configure snitch, if needed
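A minimal sketch of the per-node settings those steps touch; every name and address below is a placeholder, and on package installs the file usually lives under /etc/cassandra/ or /etc/dse/cassandra/:

    # Excerpt of cassandra.yaml, edited on every node (placeholder values):
    #   cluster_name: 'MyCluster'            # must be identical on all nodes
    #   seed_provider:
    #       - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    #         parameters:
    #             - seeds: "10.0.0.1,10.0.0.2"   # same short list of stable nodes everywhere
    #   listen_address: 10.0.0.3             # this node's own IP
    #   rpc_address: 10.0.0.3
    #   endpoint_snitch: GossipingPropertyFileSnitch
    # Apply by restarting Cassandra on that node:
    sudo service cassandra restart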

Unable to add node via OpsCenter

When trying to add a node via OpsCenter 5.0.1, I get the following:
The Ec2Snitch is being used by this cluster. Provisioning nodes using this endpoint_snitch is not supported at this time.
Which seems contrary to the instructions given here.
I had the same problem. Just add the new node using the DseDelegateSnitch; after the provisioning is done, change the snitch back to Ec2Snitch, restart the node, and that's it.
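A rough sketch of switching back after provisioning; the file path and service names are assumptions for a typical package install:

    # On the newly provisioned node, edit cassandra.yaml and set the snitch back:
    #   endpoint_snitch: Ec2Snitch
    # Then restart so the change takes effect:
    sudo service dse restart     # or `service cassandra restart` on non-DSE installs
    nodetool status              # confirm the node rejoins the ring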

Cassandra migration

I have Cassandra 0.8.0 running with data on server 1, and a clean install of Cassandra 1.0.3 on server 2.
Is it possible to just copy some files from server 1 to server 2, or do I have to write my own import/export code?
Both servers can be taken down, restarted, etc.
Why would you not upgrade server1? Upgrade details here (either way read this first):
http://svn.apache.org/viewvc/cassandra/branches/cassandra-1.0/NEWS.txt?view=markup
But if you do want to change machines, follow the procedures for 'nodetool snapshot' as detailed here:
http://wiki.apache.org/cassandra/Operations#Backing_up_data
Re-create the schema on the new node, then add the snapshots to the data directory (as described above), restart Cassandra, and then issue a nodetool scrub.
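A rough outline of that procedure in commands; the keyspace name and paths are placeholders, and the exact snapshot layout under the data directory differs between these Cassandra versions:

    # On server 1 (Cassandra 0.8.0): snapshot the keyspace to move.
    nodetool -h localhost snapshot MyKeyspace
    # The snapshot files appear in a snapshots/ directory under the keyspace's
    # data directory (e.g. somewhere below /var/lib/cassandra/data/MyKeyspace/).
    # Copy those SSTable files into the matching data directory on server 2.

    # On server 2 (Cassandra 1.0.3): recreate the schema first, put the copied
    # files in place, then restart Cassandra and rewrite the old-format SSTables.
    sudo service cassandra restart
    nodetool -h localhost scrub MyKeyspace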
Thanks zznate, it had to do with hardware.
Here are some links I found useful:
http://jonathanhui.com/cassandra-data-maintenance-backup-and-system-recovery
http://wiki.apache.org/cassandra/StorageConfiguration
http://www.memonic.com/user/pneff/folder/database/id/1bZvk
If it looks like nothing happened after migrating, make sure you create the column families on the new node using cassandra-cli.
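For example, something along these lines with cassandra-cli (names are placeholders; match the options to your original schema):

    # schema.txt, with placeholder names:
    #   create keyspace MyKeyspace;
    #   use MyKeyspace;
    #   create column family MyColumnFamily;
    cassandra-cli -h localhost -p 9160 -f schema.txt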

Resources