RPC Timeout in Cassandra

I get the following error:
cqlsh:dev> SELECT DISTINCT id FROM raw_data;
Request did not complete within rpc_timeout.
This is a special query that I'll never make again, I don't care how long it takes, and I don't want to change my schema (since I'll never make the query again...).
How can I increase rpc_timeout for this one query?
I have tried adding LIMIT 9999 and ALLOW FILTERING, and it doesn't help. I expect less than 1000 rows in the result. The query works on another Cassandra cluster with half as much data.
Edit: as it turns out, this particular command succeeded after I ran nodetool compact, but what I'm more interested in is the general case of temporarily increasing rpc_timeout for a single query.

Increase the read request timeout in the cassandra.yaml file under /cassandra/conf:
read_request_timeout_in_ms: 30000
Change this, restart the server, and run your query again; that may resolve your problem.
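If you only need the longer timeout for a single ad-hoc query, there may also be a client-side knob; the exact option name varies by cqlsh release, so treat the line below as a sketch rather than an exact recipe:
cqlsh --request-timeout=3600
The same value can usually be set per user in ~/.cassandra/cqlshrc under the [connection] section (request_timeout in newer releases, client_timeout in some older ones). Note that the coordinator still enforces the server-side limits from cassandra.yaml (rpc_timeout_in_ms in older versions, the *_request_timeout_in_ms settings in newer ones), so for a really long-running query you may still have to raise those temporarily and roll the change back afterwards.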

Related

Cassandra - Truncate a table while inserts in progress

I want to understand how the truncate command works in Cassandra (version 3.9) to be able to know what would happen in the following scenario:
I have about 100GB of data in a production table that needs to be truncated.
I want to truncate this table while a few hundred insert requests per second keep arriving at the same time.
I am trying to understand, theoretically, how this would play out.
Would the truncate try to acquire some sort of lock on the table before it can proceed, and possibly block the insert requests or itself time out?
Or would the truncate go through in sequence as the requests came in, with the subsequent insert requests creating additional rows, so that I end up with a small number of rows remaining after the truncate?
I am just trying to reclaim space, so I am not particularly concerned if a small amount of data remains from the insert requests run after the truncate command.
I am just trying to understand if you'd expect this to complete successfully or it would fail / time-out.
I will try to run a similar scenario on a smaller cluster, but I'm not sure if that will be a good substitute to understand the actual behavior. Any inputs will be helpful.
Truncate sends a message to all the nodes with a request to delete all the SSTables at the moment of execution; afterwards you will only have the upserts received after the truncate was issued.
The Datastax documentation states that this is done over JMX, but judging from the comments on this answer, it is actually done with CQL and the messaging service.
If you are trying to reclaim disk space, note that a snapshot will be created by the truncate if auto_snapshot is set to true (true is the default value), so you will need to remove the snapshot after the command completes. Also note that truncate requires all nodes to be up and healthy in order to complete.
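If the goal is reclaiming disk space, the usual sequence looks roughly like this (keyspace and table names are placeholders, and the exact nodetool subcommands available depend on your Cassandra version):
-- in cqlsh
TRUNCATE my_keyspace.my_table;
# then on each node
nodetool listsnapshots
nodetool clearsnapshot my_keyspace
Disk space is only actually freed once the auto-snapshots left behind by the truncate have been cleared on every node.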
I tried this for myself. On a 2-node Cassandra cluster I made inserts at about 160 requests per second in the background and ran a truncate query on the same table, which had about 200,000 records.
The table got truncated and the inserts continued without an error.
The new rows inserted after the truncate showed on the DB.

Cassandra unpredictable failure depending on WHERE clause

I am attempting to execute a SELECT statement against a large Cassandra table (10m rows) with various WHERE clauses. I am issuing these from the Datastax DevCenter application. The columns I am using in the where clause have secondary indexes.
The where clause looks like WHERE fileid = 18000 or alternatively WHERE fileid < 18000. In this example, the second where clause results in the error Unable to execute CQL script on 'connection1': Cassandra failure during read query at consistency ONE (1 responses were required but only 0 replica responded, 1 failed)
I have no idea why it is failing in this unpredictable manner. Any ideas?
NOTE: I am aware that this is a terrible idea, and Cassandra is not meant to be used in this way. I am issuing these queries and timing them to prove to others how inefficient Cassandra is for our use case compared to other solutions.
Your query is probably failing because of a read timeout (the timeout on waiting to read data). You could try updating cassandra.yaml with a larger read timeout, e.g. read_request_timeout_in_ms: 200000 (200 s), to get a result rather than an error. However, if you're trying to prove the inefficiency of Cassandra for your use case, this error seems like a pretty good way to do it.
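For what it's worth, the two predicates exercise very different read paths. A rough CQL sketch with placeholder keyspace/table names (exact behaviour depends on the Cassandra version and index implementation):
-- equality on the secondary-indexed column: answered from the index
SELECT * FROM my_keyspace.my_table WHERE fileid = 18000;
-- inequality on the same column: Cassandra has to filter across partitions,
-- so it needs ALLOW FILTERING and can easily blow through the read timeout
SELECT * FROM my_keyspace.my_table WHERE fileid < 18000 ALLOW FILTERING;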

Fetching more than 100000 rows in Cassandra

I am currently using Cassandra 1.6.6, but I am having a big problem. I am trying to fetch more than 100000 rows using the LIMIT clause, yet I always get the error below and then the database just shuts down.
TSocket read 0 bytes.
Secondly, does anyone know how to update all the rows in a Cassandra database?
Thanks, awaiting your reply. I just can't find anything online, which is very distressing.
TSocket read 0 bytes means you lost the connection to Cassandra, possibly because a timeout cut off a query heavy enough to cause system instability. I don't think you can run a single query that updates all rows, because you need to specify the unique key to update a row.
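If driver-side paging is not available on your setup, the classic workaround is to page manually on the partition-key token instead of issuing one huge LIMIT. A sketch with placeholder table and column names, assuming id is the partition key:
SELECT id, value FROM my_table LIMIT 1000;
-- repeat, starting just past the last partition key returned by the previous page:
SELECT id, value FROM my_table WHERE token(id) > token('last-id-seen') LIMIT 1000;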

cassandra is giving timeout exception in pentaho kettle

I am using the Cassandra NoSQL database for a transformation in Pentaho Data Integration.
When I check the connection manually it connects, but while executing the transformation it gives me a timeout exception.
I increased the "request_timeout" but the problem is still there. As far as I can tell, the problems only appear as the data in the Cassandra database grows.
So is it a problem with the PDI tool, or with the Cassandra database itself?
And how can I resolve this problem?
In cassandra.yaml, you need to increase the parameter read_request_timeout_in_ms to a higher number like 20000. The default is 5000 ms, and for selects with a huge limit and 10 or 20 columns, you may expect a TimeoutException.
Increasing this value makes Cassandra wait longer for the query to complete.
I tried this in my database and it worked.
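For reference, newer cassandra.yaml files split the old rpc_timeout into several settings; the values below are roughly what stock 2.x/3.x configurations ship with, but check your own file before changing anything:
read_request_timeout_in_ms: 5000      # single-partition reads
range_request_timeout_in_ms: 10000    # range scans / selects without a key
write_request_timeout_in_ms: 2000     # single-partition writes
request_timeout_in_ms: 10000          # catch-all for other requests
A restart (or a rolling restart across the cluster) is required for the new values to take effect.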

Data in Cassandra not consistent even with Quorum configuration

I encountered a consistency problem using Hector and Cassandra when using QUORUM for both reads and writes.
I use MultigetSubSliceQuery to query rows from a super column with a limit of 100, then read them, then delete them, and start another round.
I found that a row which should have been deleted by my prior query was still returned by the next query.
Also, in a normal column family, I updated the value of one column from status='FALSE' to status='TRUE', and the next time I queried it, the status was still 'FALSE'.
More detail:
It does not happen every time (roughly 1 in 10,000).
The time between the two queries is around 500 ms (but we found one pair of queries in which 2 seconds had elapsed between them, still indicating a consistency problem)
We use ntp as our cluster time synchronization solution.
We have 6 nodes, and replication factor is 3
I understand that Cassandra is supposed to be "eventually consistent", and that a read immediately after a write may not see the new data. But for two seconds?! And if so, isn't it then meaningless to have QUORUM or other consistency level configurations?
So first of all, is this the correct behavior of Cassandra, and if not, what data do we need to analyze for further investigation?
After checking the source code against the system log, I found the root cause of the inconsistency.
Three factors cause the problem:
Create and update same record from different nodes
Local system time is not synchronized accurately enough (although we use NTP)
Consistency level is QUORUM
Here is the problem; take the following as the event sequence:
seqID  NodeA          NodeB          NodeC
1.     New(.050)      New(.050)      New(.050)
2.     Delete(.030)   Delete(.030)
First, the Create request comes from Node C with local timestamp 00:00:00.050; assume it is recorded first on Node A and Node B, and later synchronized to Node C.
Then the Delete request comes from Node A with local timestamp 00:00:00.030, and is recorded on Node A and Node B.
When a read request comes in, Cassandra performs a version-conflict merge, but the merge depends only on the timestamps. So although the Delete happened after the Create, the final merged result is "New", which carries the later timestamp because of the local time synchronization issue.
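The merge rule is easy to reproduce directly in CQL by forcing the write timestamps by hand (placeholder keyspace/table, timestamps chosen to mirror the sequence above):
-- the insert carries the later timestamp
INSERT INTO my_ks.my_table (id, status) VALUES (1, 'NEW') USING TIMESTAMP 1000050;
-- the delete is issued afterwards but carries the earlier timestamp
DELETE FROM my_ks.my_table USING TIMESTAMP 1000030 WHERE id = 1;
-- the read still returns the row, because the higher write timestamp wins the merge
SELECT * FROM my_ks.my_table WHERE id = 1;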
I also faced a similar issue. It occurred because the Cassandra driver uses the server timestamp by default to decide which write is latest. However, in recent versions of the driver this has changed, and client-side timestamps are now used by default.
I have described the details of the issue here.
The deleted rows may be showing up as "range ghosts" because of the way that distributed deletes work: see http://wiki.apache.org/cassandra/FAQ#range_ghosts
If you are reading and writing individual columns both at CL_QUORUM, then you should always get full consistency, regardless of the time interval (provided strict ordering is still observed, i.e. you are certain that the read is always after the write). If you are not seeing this, then something, somewhere, is wrong.
To start with, I'd suggest a) verifying that the clients are syncing to NTP properly, and/or reproduce the problem with times cross-checked between clients somehow, and b) maybe try to reproduce the problem with CL_ALL.
Another thought - are your clients synced with NTP, or just the Cassandra server nodes? Remember that Cassandra uses the client timestamps to determine which value is the most recent.
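One quick way to cross-check a suspect row at a stronger level, even if the application itself uses Hector, is from cqlsh (CONSISTENCY is a cqlsh command, not CQL; the names below are placeholders):
CONSISTENCY ALL;
SELECT status FROM my_ks.my_cf WHERE key = 'suspect-row';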
I'm running into this problem with one of my clients/nodes. The other 2 clients I'm testing with (and 2 other nodes) run smoothly. I have a test that uses QUORUM for all reads and all writes, and it fails very quickly. Some processes do not see anything from the others, and others may still see data even after I remove it at QUORUM.
In my case I turned on the logs and intended to watch the behavior with the tail -F command:
tail -F /var/lib/cassandra/log/system.log
to see whether I was getting some errors as presented here. To my surprise the tail process itself returned an error:
tail: inotify cannot be used, reverting to polling: Too many open files
and according to another thread, this means that some processes will fail to open files. In other words, the Cassandra node is likely not responding as expected because it cannot properly access data on disk.
I'm not sure whether this is related to the problem of the user who posted the question, but tail -F is certainly a good way to determine whether the limit on open files has been reached.
(FYI, I have 5 relatively heavy servers running on the same machine, so I'm not too surprised by that. I'll have to look into increasing the ulimit. I'll report here again if I get it fixed this way.)
More info about the file limit and the ulimit command line option: https://askubuntu.com/questions/181215/too-many-open-files-how-to-find-the-culprit
--------- Update 1
Just in case: I first tested using Java 1.7.0-11 from Oracle (as mentioned below, I first used a limit of 3,000 without success!). The same error would pop up at about the same time when running my Cassandra test (and even with a ulimit of 3,000 the tail -F error would still appear...).
--------- Update 2
Okay! That worked. I changed the ulimit to 32,768 and the problems are gone. Note that I had to enlarge the per user limit in /etc/security/limits.conf and run sudo sysctl -p before I could bump the maximum to such a high number. Somehow the default upper limit of 3000 was not enough even though the old limit was only 1024.
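For anyone checking the same thing, the basic steps are as follows; the exact limits.conf line depends on which user runs Cassandra (assumed here to be a user called cassandra):
ulimit -n                          # show the current per-process open-file limit
# /etc/security/limits.conf
cassandra  -  nofile  32768
The new limit only applies to sessions started after the change (or after the Cassandra service is restarted), and on some distributions the kernel-wide fs.file-max (set in /etc/sysctl.conf and reloaded with sudo sysctl -p, as above) may also need raising.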
