The servers we use have both public and private internal IPs. When using the MemSQL Ops GUI to add MemSQL hosts and nodes, the installation defaults to using the public IP even when provided with the private IP.
How can we have the private IP used during installation? Or how can this IP be changed after the installation?
I tried using memsql-ops memsql-update-config to update the reported_hostname setting, which confirms a successful change and says to restart. Running memsql-ops cluster-restart doesn't show any changes, though.
Process that works:
Install the agents from the command line with memsql-ops agent-deploy (after the initial Ops install), then use memsql-ops restart on each node so it comes back up with the new interface and host bindings. Once the agent is restarted and showing the private IP, installing the MemSQL node through the Ops UI works fine.
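For reference, a minimal sketch of that process, assuming SSH key access, an "admin" login user, and 10.0.1.2 standing in for a private IP (check memsql-ops agent-deploy --help for the exact flags on your Ops version):
# deploy the agent to a host over its private address
memsql-ops agent-deploy --host 10.0.1.2 --user admin --identity-file ~/.ssh/id_rsa
# then, on each node, restart Ops so the agent binds to the private interface
memsql-ops restart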
The IP shown in Ops is not necessarily the IP used by the MemSQL database itself.
The source of truth for the addresses used in the cluster is the output of the SHOW LEAVES command.
http://docs.memsql.com/latest/ref/SHOW_LEAVES/
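For example, running it against the master aggregator with the MySQL client (the host name and credentials below are placeholders) lists the host and port each leaf is registered under:
mysql -h master-aggregator.example.com -P 3306 -u root -e "SHOW LEAVES"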
We are looking at a way of performing a rolling upgrade of a Cassandra cluster within our CI environment.
We have Cassandra running across a number of VMs. When a VM is spun up it is allocated a random IP address from a pool; we cannot control this to get a static IP address. We are also "not allowed" to log in to a VM to run a manual upgrade of Cassandra. So to upgrade, we need to spin up a new VM and install the later version of Cassandra on it.
Ideally, we'd like to:
Shut down Cassandra on the existing node;
Spin up a new node with a new version of Cassandra and a new IP address;
Copy the data from the old node to the new node;
Start up the new node as if it were the old node (auto_bootstrap=false);
(some Cassandra setting would likely be needed here to tell the cluster that this is an existing node, along the lines of cassandra.replace_address, but one that doesn't require auto_bootstrap=true);
Run "nodetool upgradesstables" (a rough command-line sketch of these steps follows below).
We've thought about using cassandra.replace_address or cassandra.replace_address_first_boot, but the documentation on these strongly implies (or states) that the node has to be bootstrapped when using them, meaning we can't copy the data from the old node (as it would be ignored/overwritten/duplicated).
Is it possible to do what we want to do without having to bootstrap the node?
(We are looking at the possibility of static IP addresses: if we can reuse the IP address, then the node would appear as a node being upgraded and no bootstrap would be necessary. However, it's not looking likely that we can have static IPs.)
Looks like someone else has already done this:
http://engineering.mydrivesolutions.com/posts/cassandra_nodes_replacement/
We'll try this out and see how well it works for us.
I have a brand new cluster running DSE 5.0.3 with OpsCenter 6.0.3. I used Lifecycle Manager to create a 6-node cluster, adding their IPs to the Nodes list and installing DSE on each node that way. The cluster seems fine, healthy, etc., but the Nodes section, under the LIST tab, shows every node's name as localhost. If I click on a node it shows "localhost - x.x.x.x" (x.x.x.x being the actual node IP). How do I make them show their actual hostnames in OpsCenter? Where does this name come from?
Thanks!
The hostnames in OpsCenter are reported by the agent running on each node in the cluster. In this case each individual node is reporting its hostname as localhost. Fixing that configuration and restarting the agents should resolve the issue.
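As a minimal sketch of that fix on one node, assuming a systemd-based host and node1.example.com as a placeholder name:
sudo hostnamectl set-hostname node1.example.com   # set the OS hostname the agent reports
sudo service datastax-agent restart               # restart the agent so it re-reports the name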
I am trying to set up a two-node cluster in Cassandra. I am able to get my nodes to connect fine as far as I can tell. When I run nodetool status it shows both of my nodes in the same data center and the same rack. I can also run cqlsh on either node and query data. The second node can see data from the first node, etc.
I have my first node set as the seed node both in cassandra.yaml and in the cluster config file.
To avoid any potential security issues, I flushed my iptables rules and allowed everything on all ports for both nodes. They are also on the same virtual network.
iptables -P INPUT ACCEPT
When I start OpsCenter on either machine it sees both nodes, but it only has information on the node I am viewing OpsCenter on. It can tell whether the other node is up or down, but I am not able to view any detailed information. It sometimes initially says 2 Agents Connected, but after a while it says 1 agent failed to connect. It keeps prompting me to install OpsCenter on the other node although it's already there.
The opscenterd.log doesn't reveal much. There don't appear to be any errors, but I do see INFO: Nodes with agents that appear to no longer be running.
I am not sure what else to check as everything but OpsCenter seems to be working fine.
You should install OpsCenter on a single node rather than on all nodes. The OpsCenter GUI will then prompt you to install the agent on each of the nodes in the cluster. Use nodetool status or nodetool ring to make sure that the cluster is functioning properly and that all nodes are Up and Normal (status = UN).
In the agent's address.yaml file you can set stomp_interface to the IP address of the OpsCenter server to force the agents to connect to the correct address.
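For example (the IP below is a placeholder for the OpsCenter server's address, and the path is the usual package-install location):
# /var/lib/datastax-agent/conf/address.yaml
stomp_interface: 10.0.0.10
# then restart the agent so it reconnects
sudo service datastax-agent restart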
I have been trying to get Spark yarn-client mode working through a VPN. More specifically, the Spark driver is launched locally from my laptop, while the YARN cluster is on its own private network, reachable through a non-bridged VPN.
The first challenge was to make the Spark driver service reachable from the YARN cluster: since the VPN is one-way, my laptop is not routable from the cluster.
I managed to get this working by adding an entry in /etc/hosts to point a public domain name to my local network IP, something like
192.168.0.6 spark.driver.mydomain
Then I set spark.driver.host=spark.driver.mydomain.
Now the Spark driver can successfully bind to spark.driver.mydomain and tell the YARN application master to connect to spark.driver.mydomain. I also need to configure spark.driver.mydomain to point to my public IP by modifying my domain's DNS, and to configure my firewall to make the service publicly available.
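For reference, a minimal sketch of how these settings can be passed to spark-submit (the port numbers and application name are illustrative placeholders, not necessarily the values I used):
spark-submit --master yarn-client \
  --conf spark.driver.host=spark.driver.mydomain \
  --conf spark.driver.port=51000 \
  --conf spark.ui.port=4040 \
  my_app.py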
Now I can run Spark from my laptop to drive the cluster, so I am almost there. However, the Spark UI doesn't work: there is no way to connect to it even though the log says it started successfully at spark.driver.mydomain:4040. I opened all the ports through my local network's firewall using DMZ, and I also tried using the local network IP address. I can see that it is being redirected to the YARN resource manager's link, http://resourcemanager/proxy/application_id, but it just times out eventually, and I haven't figured out how the proxy works.
The Spark session also occasionally spits out warning messages like
WARN ReliableDeliverySupervisor: Association with remote system
[akka.tcp://sparkExecutor#executor:port] has failed, address is
now gated for [5000] ms. Reason is: [Disassociated].
The basic Spark actions all work despite the warning messages.
There are still quite a few concerns and questions:
Does the communication between the Spark driver and the YARN cluster contain unencrypted data in this scenario? Are there any data security concerns (assuming the VPN itself is secure)?
SparkUI is not accessible, which is intolerable.
Warning messages
Is it really good practice to run the driver from a remote network in yarn-client mode? There are certainly benefits to doing so, but is the framework designed for this?
Finally, here is a JIRA issue that may lead to more general solutions. https://issues.apache.org/jira/browse/SPARK-5113
I just installed a 3-node Cassandra (2.0.11) community cluster with a single seed node. I installed OpsCenter (5.0.2) on the seed node and everything is working fairly well. The only issue I am having is that any node actions I perform (stop, start, compact, etc.) apply only to the seed node. Even if I choose a different node on the Ring or List view, the action always happens on the seed node.
I watched the OpsCenter logs and can see requests for /ops/compact/ip_address, where the IP address is the correct node that I chose, but the action always runs on the seed instance.
All agents have been installed on all the nodes and the cluster is fully operational. I can run nodetool compact on each node and see the compaction progress in OpsCenter.
I have each node configured to listen on an internal address and have verified that the RPC server is reachable on the network. I have also tried adding the cluster using a non-seed node, but all actions continue to run on the seed node.
I posted the answer above, but I'll explain in more detail for anyone else with this issue.
I changed rpc_address and listen_address in cassandra.yaml in order to listen on a private IP address. I restarted Cassandra and the cluster nodes could communicate without problems. However, the DataStax agent was still reporting 127.0.0.1 to OpsCenter as the rpc address. I found this out by enabling trace logging in OpsCenter.
If you modify anything in cassandra.yaml, make sure you also restart the datastax-agent, as it apparently caches the data.
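A minimal sketch of the full sequence on one node (10.0.0.11 is a placeholder for the private IP):
# in cassandra.yaml, bind both addresses to the private interface:
#   listen_address: 10.0.0.11
#   rpc_address: 10.0.0.11
sudo service cassandra restart
# restart the agent as well, so it stops reporting the cached 127.0.0.1 address
sudo service datastax-agent restart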