DataStax community AMI installation doesn't join other nodes - cassandra

I kicked off a 6 node cluster as per the documentation at http://docs.datastax.com/en/cassandra/2.1/cassandra/install/installAMILaunch.html. All worked OK. It's meant to be a 6 node cluster, and I can see all 6 nodes running on the EC2 dashboard. I can see OpsCenter working on node 0. But the nodes are not seeing each other... I don't have access to OpsCenter via a browser, but I can ssh to each node and verify Cassandra is working.
What do I need to do so that they join the cluster? Note they are all in the same VPC and the same subnet, in the same IP range, with the same cluster name. All were launched using the AMI specified in the document.
Any help will be much appreciated.

Make sure your listen_address is configured. Add auto_bootstrap: false to cassandra.yaml on each node, and restart each node. Check the logs too; they would be of great help.
In my situation, setting the broadcast_address to the public IP caused a similar issue. Set the broadcast_address to your private IP, or just leave it untouched. If a public broadcast_address is a must-have, have your architect modify the firewall rules.
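As a sketch, the relevant per-node settings in cassandra.yaml might look like this (the cluster name, IPs, and seed address are hypothetical examples; substitute each node's own private address):

```yaml
# cassandra.yaml -- per-node settings (all values below are examples)
cluster_name: 'MyCluster'           # must be identical on every node
listen_address: 10.0.0.11           # this node's private IP
broadcast_address: 10.0.0.11        # keep on the private IP, or omit entirely
auto_bootstrap: false               # as suggested above; restart after changing
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.10"          # the same seed list on every node
```

After editing, restart Cassandra on each node and check the system log to confirm gossip sees the other members.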

Related

OpsCenter nodes list shows all nodes names as localhost

I have a brand new cluster running DSE 5.0.3 with OpsCenter 6.0.3. I used Lifecycle Manager to create a 6 node cluster, adding their IPs to the Nodes list and installing DSE on each node that way. The cluster seems fine, healthy, etc., but the Nodes section, under the LIST tab, shows all nodes' names as localhost. If I click on each node it shows "localhost - x.x.x.x" (x.x.x.x being the actual node IP). How do I make them show their actual hostnames in OpsCenter? Where does this name come from?
Thanks!
The hostnames in OpsCenter are reported by the agent running on each node in the cluster. In this case, each individual agent is reporting its hostname as localhost. Fixing that configuration and restarting the agents should resolve the issue.
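A quick way to confirm this, assuming shell access to each node: check what hostname the machine itself reports, since that is what the agent picks up.

```shell
#!/bin/sh
# If this prints "localhost", that is the name the agent will report
# and what OpsCenter will display for the node.
name=$(hostname)
echo "reported hostname: $name"
```

After correcting the hostname (typically via /etc/hostname and /etc/hosts), restart the agent on each node; on most installs that is something like `sudo service datastax-agent restart`.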

How to use Zookeeper with Azure HDInsight Linux cluster?

Obviously I need to start a zookeeper server on one of the cluster machines, then I need other client machines to connect to this server.
The way I did it: I used ssh to connect to the headnode and found a ZK server running on port 2181. So I used ifconfig to get the machine's IP address (for example 10.0.0.8), and I then had my worker nodes connect to:
10.0.0.8:2181.
However, my MR job now completes, but it runs slowly and the output is not correct. I suspect that I'm doing something wrong with Zookeeper, especially since I didn't follow a tutorial and improvised my steps.
HDInsight has multiple Zookeeper servers; specifying only one might be the cause of the problem you are seeing.
I wrote an example a while back that uses Storm to write to HBase (both servers on the same Azure Virtual Network), and as part of the configuration I had to specify the three Zookeeper servers for the component that writes to HBase. (The article is https://azure.microsoft.com/en-us/documentation/articles/hdinsight-storm-sensor-data-analysis/.)
From the cluster head node, you can probably ping zookeeper0, zookeeper1, and zookeeper2 to find the IP address of each.
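Zookeeper clients normally take a comma-separated quorum string rather than a single server. Assuming the three hosts resolve as zookeeper0 through zookeeper2 (hostnames taken from the answer above, not verified against your cluster), you could build the string like this:

```shell
#!/bin/sh
# Build a Zookeeper quorum connection string for the three servers.
zk_quorum=$(printf 'zookeeper%d:2181,' 0 1 2)
zk_quorum=${zk_quorum%,}   # strip the trailing comma
echo "$zk_quorum"
```

Passing all three servers lets the client fail over if one Zookeeper node is down or is not the one your session lands on.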

OpsCenter is having trouble connecting with the agents

I am trying to set up a two node cluster in Cassandra. I am able to get my nodes to connect fine as far as I can tell. When I run nodetool status it shows both my nodes in the same data center and same rack. I can also run cqlsh on either node and query data. The second node can see data from the first node, etc.
I have my first node as the seed node, both in cassandra.yaml and in the cluster config file.
To avoid any potential security issues, I flushed my iptables rules and allowed all traffic on all ports for both nodes. They are also on the same virtual network.
iptables -F               # flush existing rules
iptables -P INPUT ACCEPT  # default-accept inbound traffic
When I start OpsCenter on either machine, it sees both nodes but only has information on the node I am viewing OpsCenter from. It can tell whether the other node is up or down, but I am not able to view any detailed information. It sometimes initially says 2 Agents Connected, but after a while it says 1 agent failed to connect. It keeps prompting me to install OpsCenter on the other node although it's already there.
The OpsCenterd.log doesn't reveal much. There don't appear to be any errors, but I do see INFO: Nodes with agents that appear to no longer be running.
I am not sure what else to check as everything but OpsCenter seems to be working fine.
You should install OpsCenter on a single node rather than on all nodes. The OpsCenter GUI will then prompt you to install the agent on each of the nodes in the cluster. Use nodetool status or nodetool ring to make sure that the cluster is functioning properly and all nodes are Up and in the Normal state (status = UN).
In the agent's address.yaml file, you can set stomp_interface to the IP address of the OpsCenter server to force the agents to connect to the correct address.
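A minimal sketch of that file, assuming the OpsCenter server lives at 10.0.0.10 (a hypothetical IP) and the key name used by current agent versions:

```yaml
# address.yaml on each node (path varies by install,
# e.g. /var/lib/datastax-agent/conf/address.yaml)
stomp_interface: 10.0.0.10   # IP of the OpsCenter server the agent should contact
```

Restart the agent on each node after editing so it reconnects using the new address.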

Icinga2 cluster node local checks not executing

I am using an Icinga2 2.3.2 cluster HA setup with three nodes in the same zone, and the IDO database on a separate server. All are CentOS 6.5. Icinga Web 2 is installed on the active master.
I configured four local checks for each node, including the cluster health check, as described in the documentation. I installed the Icinga Classic UI on all three nodes because I am not able to see the local checks configured for the nodes in Icinga Web 2.
Configs are syncing, checks are executing, and all three nodes are connected to each other. But the configured local checks specific to each node are not running properly, which I verified in the Classic UI.
a. All local checks are executed only once, whenever
- one of the nodes is disconnected or reconnected
- configuration changes are made on the master and icinga2 is reloaded
b. After that, only one check runs properly on one node; the remaining checks do not.
I have attached screenshots of the Classic UI on all three nodes.
Please help me fix this. Thanks in advance.

Install multi-node Cassandra on Windows

Is there any detailed step-by-step document covering multi-node Cassandra installation on Windows? I read some documents/blogs and tried on Windows 7 workstations and Windows Server 2008 servers, but I was not able to establish a connection from the 2nd node to the 1st node.
When I was setting up my first cluster on Windows, I found this blog post to be excellent. It covers many aspects of the setup, including:
Firewall / Networking issues.
Running Cassandra as a service.
Monitoring and maintenance.
If you want to create a complete setup using just Cassandra, have a look at this blog.
But to set up a multi-node cluster, you basically need to have the correct ports open on your servers. When it comes to configuration, you will have essentially identical cassandra.yaml configs across all your nodes, with the same seeds list; the only two fields that need to change are the listen_address and possibly the rpc_address (although you could listen on all interfaces by setting it to 0.0.0.0):
rpc_address: 0.0.0.0
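Putting that together, a sketch of the settings that matter per node (cluster name, IPs, and seed address are examples; everything else in cassandra.yaml, including the seeds list, stays identical across nodes):

```yaml
# cassandra.yaml -- example values only
cluster_name: 'WinCluster'        # same on every node
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.1.10"     # same seeds list on every node
listen_address: 192.168.1.11      # unique: this node's own IP
rpc_address: 0.0.0.0              # or the node's own IP
```

With identical configs otherwise, bringing up the seed node first and then the remaining nodes should let them gossip and join the ring.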
