GridGain In-Memory Streaming deployment

I'm trying to use GridGain In-Memory Streaming and I can't understand how to deploy it to a cluster.
For example, I have a cluster of two nodes:
1) the first node starts with streamer "TestStreamer"
2) the second node has an identical streamer "TestStreamer"
3) I change some code in "TestStreamer" and redeploy it on the first node
Is there a way to auto-deploy it on the second node as well?
I've tried http://atlassian.gridgain.com/wiki/display/GG60/GridUriDeploymentSpi, but it seems to work only for tasks, not for streamers.

You are right. If you need to change the streamer code in GridGain, then you need to restart the node with new code in the classpath.
Tasks, computations, and cache objects, on the other hand, can be auto-deployed.
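For what it's worth, the restart-based redeploy is just an ordinary file copy plus a node restart. A minimal sketch follows, assuming a stock GridGain 6.x layout; the GRIDGAIN_HOME location, the jar name, the libs/ directory, and the Spring config path are all assumptions about your install, so adjust them to however your nodes are actually packaged and launched:

    # Repeat on every node that runs "TestStreamer" (all paths are placeholders).
    # 1) Stop the node (kill its JVM or use whatever service manager launches it).
    # 2) Ship the rebuilt jar so the new streamer code is on the node's classpath:
    cp target/test-streamer.jar "$GRIDGAIN_HOME/libs/"
    # 3) Start the node again with its usual Spring configuration:
    "$GRIDGAIN_HOME/bin/ggstart.sh" config/test-grid.xml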

Related

Can I upgrade a Cassandra cluster swapping in new nodes running the updated version?

I am relatively new to Cassandra... both as a User and as an Operator. Not what I was hired for, but it's now on my plate. If there's an obvious answer or detail I'm missing, I'll be more than happy to provide it... just let me know!
I am unable to find any recent or concrete documentation that explicitly spells out how tolerant Cassandra nodes will be when a node with a higher Cassandra version is introduced to an existing cluster.
Hypothetically, let's say I have 4 nodes in a cluster running 3.0.16 and I want to upgrade the cluster to 3.0.24 (the latest version as of posting, 2021-04-19). For reasons that are not important here, running an 'in-place' upgrade on each existing node is not possible. That is, I cannot simply stop Cassandra on the existing nodes and do a nodetool drain; service cassandra stop; apt upgrade cassandra; service cassandra start.
I've looked at the change log between 3.0.17 and 3.0.24 (inclusive) and don't see anything that looks like a major breaking change w/r/t the transport protocol.
So my question is: Can I introduce new nodes (running 3.0.24) to the c* cluster (comprised of 3.0.16 nodes) and then run nodetool decommission on each of the 3.0.16 nodes to perform a "one for one" replacement to upgrade the cluster?
Do I risk any data integrity issues with this procedure? Is there a specific reason why the procedure outlined above wouldn't work? What about if the number of tokens each node is responsible for is increased with the new nodes? E.g.: the 3.0.16 nodes equally split the keyspace over 128 tokens, but the new 3.0.24 nodes will split everything across 256 tokens.
EDIT: After some back and forth on the #cassandra channel on the Apache Slack, it appears there's no issue with the procedure. There were, however, some other comorbid issues caused by other bits of automation that did threaten the data integrity of the cluster. In short, each new node was adding itself to the list of seed nodes as well. This can be seen in the logs: This node will not auto bootstrap because it is configured to be a seed node.
Each new node failed to bootstrap, but did not fail to take new writes.
EDIT2: I am not on a k8s environment; this is 'basic' EC2. Likewise, the volume of data / node size is quite small; ranging from tens of megabytes to a few hundred gigs in production. In all cases, the cluster is fewer than 10 nodes. The case I outlined above was for a test/dev cluster which is normally 2 nodes in two distinct rack/AZs for a total of 4 nodes in the cluster.
Running bootstrap & decommission will take quite a long time, especially if you have a lot of data: you will stream all data twice, and this will increase the load on the cluster. The simpler solution would be to replace the old nodes by copying their data onto new nodes that have the same configuration as the old nodes, but with a different IP and with 3.0.24 installed (don't start that node!). Step-by-step instructions are in this answer; when it's done correctly you will have minimal downtime and won't need to wait for bootstrap or decommission.
Another possibility, if you can't stop the running nodes, is to add all the new nodes as a new data center, adjust the replication factor to include it, use nodetool rebuild to force copying of the data to the new DC, switch the application to the new data center, and then decommission the whole old data center without streaming the data. In this scenario you stream the data only once. This approach also works better if the new nodes will have a different num_tokens value, since it's not recommended to have different num_tokens on nodes of the same DC.
P.S. It's usually not recommended to change cluster topology when you have nodes of different versions, but it may be OK for 3.0.16 -> 3.0.24.
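A rough command sketch of that second route, assuming the old nodes sit in a data center named DC_old, the new 3.0.24 nodes in DC_new, and a single application keyspace my_ks with RF=3 on NetworkTopologyStrategy; every one of those names is a placeholder, and this is an outline rather than a tested runbook:

    # 1) With the DC_new nodes up, replicate the keyspace(s) into the new DC:
    cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC_old': 3, 'DC_new': 3};"

    # 2) On every node in DC_new, stream the data in from the old DC (data moves once):
    nodetool rebuild -- DC_old

    # 3) Switch the application to DC_new (LOCAL_* consistency, local DC in the driver),
    #    then drop DC_old from replication and retire its nodes; with no replicas
    #    left there, decommission won't stream:
    cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC_new': 3};"
    nodetool decommission    # run on each DC_old node, one at a time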
To echo Alex's answer, 3.0.16 and 3.0.24 still use the same SSTable file format, so the complexity of the upgrade decreases dramatically. They'll still be able to stream data between the different versions, so your idea should work. If you're in a K8s-like environment, it might just be easier to redeploy with the new version and attach the old volumes to the replacement instances.
"What about if the number of tokens each node was responsible for was increased with the new nodes? E.G.: 0.16 nodes equally split the keyspace over 128 tokens but the new nodes 0.24 will split everything across 256 tokens."
A couple of points jump out at me about this one.
First of all, it is widely recognized by the community that the default num_tokens value of 256 is way too high. Even 128 is too high. I would recommend something along the lines of 12 to 24 (we use 16).
I would definitely not increase it.
Secondly, changing num_tokens requires a data reload. The reason is that the token ranges change, and thus each node's responsibility for specific data changes. I have changed this before by standing up a new, logical data center and then switching over to it. But I would recommend not changing it if at all possible.
"In short, each new node was adding ITSSELF to list list of seed nodes as well."
So, while that's not recommended (every node a seed node), it's not a show-stopper. You can certainly run a nodetool repair/rebuild afterward to stream data to them. But yes, if you can get to the bottom of why each node is adding itself to the seed list, that would be ideal.
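To make that last point concrete (a hedged illustration; the IPs are placeholders): every node, including new ones, should point at the same small, fixed set of existing nodes in cassandra.yaml, and never at itself, so that it actually bootstraps. If a node has already joined empty, you can stream the missing data to it afterwards, as mentioned above:

    # cassandra.yaml on every node -- reference two or three stable, existing nodes:
    #   seed_provider:
    #     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    #       parameters:
    #         - seeds: "10.0.0.10,10.0.0.11"
    # For a node that already joined without bootstrapping, bring it up to date:
    nodetool repair -pr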

Inconsistent Elassandra cluster state after node restart - less data on one node

I have migrated my existing data from a 4-node Cassandra cluster (with RF=3) to Elassandra, and after putting in my mappings the whole data set got indexed into Elassandra. After indexing completed, all nodes showed a consistent result in the /_cat/indices?v API. But as soon as I restart any node, the data on that node is reduced substantially, both the index size and the number of records. If I restart another node of the cluster, the problem shifts to that node and the previous node recovers automatically. For more details and a detailed use case, please refer to the issue I have created with Elassandra.
Upgrading to Elassandra v6.8.4.3 resolved the problem. Thanks!

What is the difference between replace a node and remove/add a new one?

Is there any difference between replacing a node in the cluster and removing/adding a new node?
I think removing a node and then adding a new one will work fine, just like replacing it, without the following argument:
-Dcassandra.replace_address=[old_address]
When you remove a node using nodetool removenode, the data it handled is redistributed to the other nodes, and when you then add a new node the data is streamed back to it, so all the data is moved twice.
By using -Dcassandra.replace_address=[old_address] you avoid streaming data off the node that is removed, so streaming happens only once.
P.S. See the corresponding part of the DSE documentation.
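For illustration, a minimal sketch of the replacement flow, assuming the dead node was 10.0.0.5 and a packaged install that reads JVM options from /etc/cassandra/cassandra-env.sh; the IP and file locations are placeholders:

    # On the brand-new node, before its very first start, add the replace flag:
    echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"' \
      >> /etc/cassandra/cassandra-env.sh
    service cassandra start   # streams the old node's ranges exactly once
    # After the node has joined and owns the old token ranges, remove that line
    # again so later restarts don't attempt another replacement.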

Best way to add multiple nodes to existing cassandra cluster

We have a 12-node cluster with 2 datacenters (6 nodes in each DC) and RF=3 in each DC.
We are planning to increase cluster capacity by adding 3 nodes to each DC (6 nodes total).
What is the best way to add multiple nodes at once (OK, maybe with a 2-minute gap between them)?
auto_bootstrap: false - Use auto_bootstrap: false on all new nodes (as this is the quicker way to start them), start all the nodes, and then run 'nodetool rebuild' to get data streamed to the new nodes from the existing nodes.
If I go this way, where do read requests go right after these new nodes start? At that point they only have token ranges assigned to them, but NO data has been streamed to them yet. Will this cause read request failures, CL issues, or any other problems?
OR
auto_bootstrap: true - Use auto_bootstrap: true and start one node at a time, waiting until the streaming process finishes (this might take a while, I guess, as we have a lot of data, approx. 600 GB+ on each node) before starting the next node.
If I go this way, I have to wait until the whole streaming process is done on a node before proceeding to add the next new node.
Kindly suggest the best way to add multiple nodes at once.
PS: We are using c*-2.0.3.
Thanks in advance.
As each approach depends on streaming data over the network, the choice largely depends on how distributed your cluster is and where your data currently is.
If you have a single-DC cluster and latency is minimal between all nodes, then bringing up a new node with auto_bootstrap: true should be fine for you. Also, if at least one copy of your data has been replicated to your local datacenter (the one you are joining the new node to) then this is also the preferred method.
On the other hand, for multiple DCs I have found more success with setting auto_bootstrap: false and using nodetool rebuild. The reason is that nodetool rebuild allows you to specify a data center as the source of the data. This gives you the control to contain streaming to a specific DC (and more importantly, keep it away from the other DCs). And similar to the above, if you are building a new data center and your data has not yet been fully replicated to it, then you will need to use nodetool rebuild to stream data from a different DC.
How would read requests be handled?
In both of these scenarios, the token ranges would be computed for your new node when it joins the cluster, regardless of whether or not the data is actually there. So if a read request were sent to the new node at CL ONE, it should be routed to a node containing a secondary replica (assuming RF>1). If you queried at CL QUORUM (with RF=3), it should find the other two. That is, of course, assuming that the nodes picking up the slack are not so overwhelmed by their streaming activities that they cannot also serve requests. This is a big reason why the "2 minute rule" exists.
The bottom line is that your queries do have a higher chance of failing before the new node is fully streamed. Your chances of query success increase with the size of your cluster (more nodes = more scalability, and each bears that much less responsibility for streaming). Basically, if you are going from 3 nodes to 4 nodes, you might get failures. If you are going from 30 nodes to 31, your app probably won't notice a thing.
Will the new node try to pull data from nodes in the other data centers too?
Only if your query isn't using a LOCAL consistency level.
I'm not sure this was ever answered:
If I go this way, where do read requests go right after these new nodes start? At that point they only have token ranges assigned to them, but NO data has been streamed to them yet. Will this cause read request failures, CL issues, or any other problems?
And the answer is yes. The new node will join the cluster and receive its token assignments, but since auto_bootstrap is set to false, the node will not receive any streamed data. Thus, it will be a member of the cluster, but will not have any of the old data. New writes will be received and processed, but data that existed before the node joined will not be available on this node.
With that said, with the correct CL levels your new node will still do background and foreground read repair, so it shouldn't respond any differently to requests. However, I would not go this route. With 2 DCs, I would divert traffic to DCA, add all of the new nodes with auto_bootstrap: false to DCB, and then rebuild those nodes from DCA. The rebuild needs to be from DCA because the tokens have changed in DCB and, with auto_bootstrap: false, the data may no longer exist where the new token assignments expect it. You could also run repair, and that should resolve any discrepancies as well. Lastly, after all of the nodes have been bootstrapped, run cleanup on all of the nodes in DCB.
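A condensed sketch of that recommended sequence, assuming the existing data center is named DCA, the one receiving the new nodes is DCB, and packaged installs with cassandra.yaml under /etc/cassandra; the DC names and paths are placeholders for your own topology:

    # On each new DCB node, before its first start, set in /etc/cassandra/cassandra.yaml:
    #   auto_bootstrap: false
    #   (plus the usual cluster_name, seeds, and snitch/DC settings)
    service cassandra start      # node joins and takes tokens, but streams nothing yet

    # Once all new nodes are up, stream the data in from the existing DC:
    nodetool rebuild -- DCA      # run on every new DCB node

    # Optionally reconcile any remaining differences:
    nodetool repair

    # Finally, reclaim ranges the pre-existing DCB nodes no longer own:
    nodetool cleanup             # run on every original node in DCB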

Duplicate Cassandra node from one node to two nodes without repair?

Is there an easy way to move from having a single node to two nodes that both have a full copy of the data? In the past, when I added a node, I had to wait for the second node to take half of the data and then muck around to make them full replicas again.
Yes, you can copy over the sstables directly, and that way when the node starts up it can access its data directly without needing streaming. This is how it should work: go to the data folder on your old node and copy over the directories for the keyspaces you want to migrate, as well as the schema directories inside system. Copy these over to your new node's data directory (create it if necessary). Then configure the new node and have it join the cluster as usual. I believe you'll have access to your data at this point without further migrations.
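A hedged sketch of that copy, assuming default data directories under /var/lib/cassandra/data on both machines, a keyspace called my_ks, and that the new node is reachable over SSH as newnode; all of those, and the exact schema directory names (which vary by Cassandra version), are placeholders:

    # On the old node: flush memtables so the on-disk sstables are complete.
    nodetool drain

    # Copy the keyspace's sstable directories, plus the schema directories under
    # the system keyspace, into the new node's (empty) data directory:
    rsync -av /var/lib/cassandra/data/my_ks/ newnode:/var/lib/cassandra/data/my_ks/
    rsync -av /var/lib/cassandra/data/system/schema_* newnode:/var/lib/cassandra/data/system/

    # Then configure the new node (cluster_name, seeds, etc.) and start it as usual.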
See a blog post on this topic for more details, especially on a more refined approach using the sstableloader tool.
