Cassandra cluster not restarting after improper shutdown

I have a test cluster of 3 machines, 2 of which are seeds, all running CentOS 7 and Cassandra 3.4.
Yesterday everything was fine, the nodes were chatting away, and I had the "brilliant" idea to power all the machines off to simulate a power failure.
Newbie that I am, I simply powered the machines back on and expected some kind of magic, but my cluster is not up again; each individual node refuses to connect.
And yes, my firewalld is disabled.
My question: what damage was done, and how can I recover the cluster to its previous running state?

Since you shut your cluster down abruptly, the nodes were not able to drain themselves.
Don't worry, it is unlikely that any data was lost, because Cassandra maintains commit logs and will replay them when it is restarted.
First, find your seed node IPs in cassandra.yaml.
Start your seed node first.
Check the startup logs in cassandra.log and system.log and wait for it to start up completely; it will take some time, because it has to read pending writes from the commit log and replay them.
Once it has finished starting up, start the other nodes and tail their log files.
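As a rough sketch, that recovery sequence might look like the following, assuming a systemd-managed service named cassandra, a config at /etc/cassandra/conf/cassandra.yaml, and logs under /var/log/cassandra (adjust paths and names for your install):
# find the seed IPs
grep -A 5 seed_provider /etc/cassandra/conf/cassandra.yaml
# on the seed node: start Cassandra and watch the commit log replay in the logs
sudo systemctl start cassandra
tail -f /var/log/cassandra/system.log
# once the seed's system.log shows it is listening for CQL clients,
# repeat the start + tail on each remaining node, one at a time
sudo systemctl start cassandra
tail -f /var/log/cassandra/system.log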

Related

Hardware maintenance activity on One of the Cassandra Node

I need some help. I have a 4-node Cassandra cluster with RF 2, and a hardware maintenance activity (total activity time 30-40 minutes) is scheduled on one of the nodes.
Please let me know how we can safely do this activity without impacting live traffic.
Can I use the steps below on the node where the hardware maintenance will take place?
nodetool -h <node IP / hostname> drain
Kill the Cassandra service.
Once the activity is completed, start the Cassandra service again.
Kindly let me know if anything else needs to be done.
Thanks in advance.
That's a good start, Dinesh. The shutdown scripts which I write look like this:
nodetool disablegossip
nodetool disablebinary
nodetool drain
The disable commands first take the node out of gossip and then stop any native binary connections. Once those complete, I drain the node, and only then do I stop the service.
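As a sketch, the whole maintenance window could then look like this (the systemctl service name and the final status check are my own assumptions, not part of the original steps):
nodetool disablegossip
nodetool disablebinary
nodetool drain
sudo systemctl stop cassandra
# ... perform the hardware maintenance ...
sudo systemctl start cassandra
nodetool status        # wait until this node shows up as UN again before moving on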

Cassandra node self-removal

I run Cassandra 3.1 in an autoscaling group. Recently one of the machines failed and got replaced. I did not lose any data, but client applications were still trying to connect to a node that was marked down.
I am looking for a way to gracefully remove a node from the cluster with a quick command that I could invoke via systemd right before it shuts down Cassandra during the shutdown process.
nodetool decommission involves data streaming and takes a long time.
nodetool removenode and nodetool assassinate can't remove the node they are running on.
Losing data is not my concern. My goal is fully automated node replacement.
Fixing the client libraries is out of the scope of this question.
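Purely to illustrate the systemd hook the question is describing (not a recommendation; the drop-in path, unit name, and nodetool location are assumptions, and this by itself does not remove the node from the ring), such a hook could look like this:
# /etc/systemd/system/cassandra.service.d/shutdown-hook.conf
[Service]
# these run when the unit is stopped, before the cassandra process itself is killed
ExecStop=/usr/bin/nodetool disablegossip
ExecStop=/usr/bin/nodetool drain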

why does my cassandra cluster experience latency when restarting a single node?

I am running a 29-node cluster spread across 4 DCs in EC2, on C* 3.11.1 on Ubuntu, with RF 3. Occasionally I need to restart nodes in the cluster, but every time I do, I see errors and application (nodejs) timeouts.
I restart a node like this:
nodetool disablebinary && nodetool disablethrift && nodetool disablegossip && nodetool drain
sudo service cassandra restart
When I do that, I very often get timeouts and errors like this in my nodejs app:
Error: Cannot achieve consistency level LOCAL_ONE
My queries are all pretty much the same, things like: select * from history where ts > {current_time} (along with the partition key in the where clause)
The errors and timeouts seem to go away on their own after a while, but it is frustrating because I can't track down what I am doing wrong!
I've tried waiting between the steps of shutting down Cassandra, and I've tried stopping, waiting, then starting the node. One thing I've noticed is that even after nodetool drain, the node still has open connections to other nodes in the cluster (looking at the output of netstat) until I stop Cassandra. I don't see any errors or warnings in the logs.
Another thing I've noticed is that after restarting a node and seeing application latency, the node I just restarted sees many other nodes in the same DC as down (status 'DN'). However, checking nodetool status on those other nodes shows all nodes as up/normal. To me this could partly explain the problem: the node comes back online and thinks it is healthy, but it believes many others are not, so it receives traffic from the client application; when it then gets requests for ranges that belong to a node it thinks is down, it responds with an error. The latency issue seems to start roughly when the node goes down, but persists long (15-20 minutes) after it is back online and accepting connections. It seems to go away once the bounced node shows the other nodes in the same DC as up again.
I have not been able to reproduce this locally using ccm.
What can I do to prevent this? Is there something else I should be doing to gracefully restart the cluster? It could be something to do with the nodejs driver, but I can't find anything there to try.
I seem to have resolved the issue by issuing nodetool disablegossip as the last step in shutting down. Using the sequence below instead of my initial approach seems to work (note that only the order of drain and disablegossip has changed):
nodetool disablebinary
nodetool disablethrift
nodetool drain
nodetool disablegossip
sudo service cassandra restart
While this seems to work, I have no explanation as to why. On the mailing list, someone helpfully pointed out that drain should take care of everything that disablegossip does, so my hypothesis is that running disablegossip first causes the drain to have problems that only show up after startup.
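As a sketch, the revised sequence could be wrapped in a small restart script that also waits until the bounced node sees its peers as UN again before traffic is routed back to it (the sleep intervals and the service command are assumptions):
#!/bin/sh
nodetool disablebinary
nodetool disablethrift
nodetool drain
nodetool disablegossip
sudo service cassandra restart
# wait for the node to accept nodetool connections again
until nodetool status >/dev/null 2>&1; do sleep 5; done
# then wait until no peer is reported as DN
while nodetool status | grep -q '^DN'; do sleep 10; done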

Google Dataproc Jobs Never Cancel, Stop, or Terminate

I have been using Google Dataproc for a few weeks now, and since I started I have had problems canceling and stopping jobs.
It seems like there must be some server, other than those created on cluster setup, that keeps track of and supervises jobs.
I have never had a job that runs without error actually stop when I hit "Stop" in the dev console. The spinner just keeps spinning and spinning.
Cluster restart or stop does nothing, even if stopped for hours.
Only when the cluster is entirely deleted do the jobs disappear... (But wait, there's more!) If you create a new cluster with the same settings before the previous cluster's jobs have been deleted, the old jobs will start on the new cluster!
I have seen jobs that terminated on their own due to OOM errors restart themselves after a cluster restart! (with no coding for this sort of fault tolerance on my side)
How can I forcefully stop Dataproc jobs? (gcloud beta dataproc jobs kill does not work)
Does anyone know what is going on with these seemingly related issues?
Is there a special way to shutdown a Spark job to avoid these issues?
Jobs keep running
In some cases, errors are not successfully reported to the Cloud Dataproc service. Thus, if a job fails, it appears to run forever even though it has (probably) failed on the back end. This should be fixed by a soon-to-be-released version of Dataproc in the next 1-2 weeks.
Job starts after restart
This would be unintended and undesirable. We have tried to replicate this issue and cannot. If anyone can replicate this reliably, we'd like to know so we can fix it! This may be (and probably is) related to the issue above, where the job has failed but appears to be running, even after a cluster restart.
Best way to shutdown
Ideally, the best way to shut down a Cloud Dataproc cluster is to terminate the cluster and start a new one. If that is problematic, you can try a bulk restart of the Compute Engine VMs; it will be much easier to create a new cluster, however.
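For reference, a minimal sketch of that advice with the gcloud CLI (cluster and job names are placeholders, and the exact flags may vary between gcloud releases):
# try to kill the stuck job first
gcloud dataproc jobs kill my-job-id
# if jobs stay stuck, delete the cluster and recreate it
gcloud dataproc clusters delete my-cluster
gcloud dataproc clusters create my-cluster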

Cassandra hangs on arbitrary commands

We're hosting a Cassandra 2.0.2 cluster on AWS. We recently started upgrading from regular to SSD drives by bootstrapping new nodes and decommissioning the old ones. It went fairly well, aside from two nodes hanging forever on decommission.
Now, after the 6 new nodes are operational, we have noticed that some of our old tools, which use phpcassa, stopped working. Nothing has changed with the security groups, all TCP/UDP ports are open, telnet can connect on 9160, and cqlsh can 'use' a keyspace and select data. However, 'describe cluster' fails, and in cassandra-cli 'show keyspaces' also fails - and by fail, I mean it never exits back to the prompt, nor returns any results. The queries work perfectly from the new nodes, but even the old nodes waiting to be decommissioned cannot perform them. The production system, also using phpcassa, does normal data requests and works fine.
All Cassandra nodes have the same config, the same versions, and were installed from the same package. All nodes were recently restarted due to a seed node change.
Versions:
Connected to ### at ####.compute-1.amazonaws.com:9160.
[cqlsh 4.1.0 | Cassandra 2.0.2 | CQL spec 3.1.1 | Thrift protocol 19.38.0]
I've run out of ideas. Any hints would be greatly appreciated.
Update:
After a bit of random investigating, here's a bit more detailed description.
If I cassandra-cli to any machine and do "show keyspaces", it works.
If I cassandra-cli to a remote machine and do "show keyspaces", it hangs indefinitely.
If I cqlsh to a remote Cassandra and do a describe keyspaces, it hangs. If I Ctrl+C and repeat the same query, it instantly responds.
If I cqlsh to a local Cassandra and do a describe keyspaces, it works.
If I cqlsh to a local Cassandra and do a select * from Keyspace limit x, it returns data up to a certain limit. I was able to return data with limit 760; 761 would fail.
If I do a consistency all and select the same, it hangs.
If I do a trace, different machines return the data, though sometimes source_elapsed is "null".
Not to forget, applications querying the cluster sometimes do get results, after several attempts.
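For anyone retracing these checks, this is roughly what the cqlsh side of the investigation looks like (the keyspace and table names are placeholders):
cqlsh <remote_node_ip>
-- then, inside the cqlsh session:
DESCRIBE KEYSPACES;
CONSISTENCY ALL;
TRACING ON;
SELECT * FROM my_keyspace.my_table LIMIT 760;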
Update 2
Further experimentation introduced failed bootstrapping of two nodes: one hung on bootstrap for 4 days and eventually failed, possibly due to a rolling restart, and the other plainly failed after 2 days. Repairs wouldn't function and introduced "Stream failed" errors, as well as "Exception in thread Thread[StorageServiceShutdownHook,5,main] java.lang.NullPointerException". Also, after executing a repair, I started getting "Read an invalid frame size of 0. Are you using TFramedTransport on the client side?", so..
Solution
Switch rpc_server_type from hsha to sync. All problems gone. We worked with hsha for months without issues.
If someone also stumbles here:
http://planetcassandra.org/blog/post/hsha-thrift-server-corruption-cassandra-2-0-2-5/
In cassandra.yaml:
Switch rpc_server_type from hsha to sync.
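That is, the relevant line in cassandra.yaml ends up as follows (hsha is the value it was changed from):
# cassandra.yaml
rpc_server_type: sync    # was: hsha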
