Kundera Cassandra: Delete a row based on an indexed column

How do I delete rows in Cassandra based on an indexed column?
Tried:
upload_id is added as an index on the table, and I ran:
"Delete from table where upload_id = '" + uploadId + "'"
But this gives me the error "NON PRIMARY KEY found in where clause".
As a workaround I select the row keys first:
String selectQuery = "Select hashkey from table where upload_id = '" + uploadId + "'";
entityManager.createNativeQuery(selectQuery).getResultList();
and then delete all the elements in the list using a for loop.
However, Kundera rewrites this query to append LIMIT 100 ALLOW FILTERING.
I found a similar question, "Kundera for Cassandra - Deleting record by row key", but that was asked in 2012, and there have been a lot of changes to Cassandra and Kundera since then.

Kundera uses LIMIT 100 by default. You can use query.setMaxResults(<integer>) to raise the limit accordingly and then run the delete loop.
Example:
Query findQuery = entityManager.createQuery("Select p from PersonCassandra p where p.age = 10", PersonCassandra.class);
findQuery.setMaxResults(5000);
List<PersonCassandra> allPersons = findQuery.getResultList();
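Applied to the original problem, the whole workaround might look like the sketch below. It assumes a JPA entity Upload whose primary key is hashkey and whose uploadId field maps to the indexed upload_id column; those names come from the question, the rest is illustrative:
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public void deleteByUploadId(EntityManager entityManager, String uploadId) {
    TypedQuery<Upload> findQuery = entityManager.createQuery(
            "Select u from Upload u where u.uploadId = :uploadId", Upload.class);
    findQuery.setParameter("uploadId", uploadId);
    findQuery.setMaxResults(10000); // override Kundera's default LIMIT 100

    List<Upload> matches = findQuery.getResultList();
    for (Upload u : matches) {
        entityManager.remove(u); // deletes each row by its primary key
    }
}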

Related

Reading guarantees for full table scan while updating the table?

Given schema:
CREATE TABLE keyspace.table (
    key text,
    ckey text,
    value text,
    PRIMARY KEY (key, ckey)
)
...and Spark pseudocode:
import com.datastax.spark.connector._
import scala.util.Random

val sc: SparkContext = ...
val connector: CassandraConnector = ...

sc.cassandraTable("keyspace", "table")
  .foreachPartition { partition =>
    connector.withSessionDo { session =>
      partition.foreach { row =>
        val key = row.getString("key")
        val ckey = Random.nextString(42)
        val value = row.getString("value")
        // Both literals need the s-interpolator, and the text values need quotes
        session.execute(s"INSERT INTO keyspace.table (key, ckey, value)" +
          s" VALUES ('$key', '$ckey', '$value')")
      }
    }
  }
Is it possible for code like this to read a value it has just inserted within a single application (Spark job) run? A more general version of my question: can a token-range scan CQL query read newly inserted values while iterating over rows?
Yes, it is possible, exactly as Alex wrote, but I don't think it can happen with the above code.
Per the data model, the table is ordered by ckey in ascending order.
The funny part, however, is the page size and how many pages are prefetched. Since the default is 1000 rows (spark.cassandra.input.fetch.sizeInRows), the only way a problem could occur is if you used not 42 but something bigger, and/or the executor hadn't paged yet.
Also, I think you use unnecessary nesting, so the code to achieve what you want can be simplified (after all, cassandraTable already gives you an RDD of rows); see the sketch below.
(I hope I understand correctly that you want to read per partition (note that a partition in your case is all rows under one partition key, "key") and, for every row in that partition (distinguished by ckey), generate a new row that duplicates value under a new ckey. The use case for such code is a mystery to me, but I hope it makes some sense. :-))
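For illustration, here is a rough sketch of that simplification using the connector's Java API (the Entry bean is hypothetical, and a random UUID stands in for Random.nextString(42)): read everything, re-key it, write it back, with no manual session handling or nested closures.
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

import java.io.Serializable;
import java.util.UUID;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class DuplicateRows {

    // Hypothetical bean mirroring keyspace.table
    public static class Entry implements Serializable {
        private String key, ckey, value;
        public Entry() {}
        public Entry(String key, String ckey, String value) {
            this.key = key; this.ckey = ckey; this.value = value;
        }
        public String getKey() { return key; }
        public void setKey(String key) { this.key = key; }
        public String getCkey() { return ckey; }
        public void setCkey(String ckey) { this.ckey = ckey; }
        public String getValue() { return value; }
        public void setValue(String value) { this.value = value; }
    }

    public static void duplicate(JavaSparkContext sc) {
        // Read every row, give it a fresh clustering key...
        JavaRDD<Entry> copies = javaFunctions(sc)
                .cassandraTable("keyspace", "table")
                .map(row -> new Entry(row.getString("key"),
                                      UUID.randomUUID().toString(),
                                      row.getString("value")));

        // ...and write it back to the same table
        javaFunctions(copies)
                .writerBuilder("keyspace", "table", mapToRow(Entry.class))
                .saveToCassandra();
    }
}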

Spark SQL get max & min dynamically from datasource

I am using Spark SQL and I want to fetch all the data every day from an Oracle table (consisting of more than 1800k records). The application was hanging up when reading from Oracle, so I used the partitionColumn, lowerBound & upperBound options. The problem is: how can I get the lowerBound & upperBound values of the primary-key column dynamically? The values of lowerBound & upperBound change every day. Can anyone show me a sample solution for my problem?
Just fetch required values from the database:
url = ...
properties = ...
partition_column = ...
table = ...
# Push aggregation to the database
query = "(SELECT min({0}), max({0}) FROM {1}) AS tmp".format(
partition_column, table
)
(lower_bound, upper_bound) = (spark.read
.jdbc(url=url, table=query. properties=properties)
.first())
and pass to the main query:
num_partitions = ...
spark.read.jdbc(
    url, table,
    column=partition_column,
    # Make the upper bound inclusive
    lowerBound=lower_bound, upperBound=upper_bound + 1,
    numPartitions=num_partitions, properties=properties
)
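If you are on the Java API, the same two-step approach might look like this (a sketch only; the URL, table and column names are placeholders):
import java.util.Properties;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JdbcBounds {
    public static Dataset<Row> readPartitioned(SparkSession spark, String url,
            String table, String partitionColumn, int numPartitions, Properties props) {
        // Push the min/max aggregation down to the database
        String boundsQuery = "(SELECT min(" + partitionColumn + ") AS lo, "
                + "max(" + partitionColumn + ") AS hi FROM " + table + ") tmp";
        Row bounds = spark.read().jdbc(url, boundsQuery, props).first();
        long lower = ((Number) bounds.get(0)).longValue();
        long upper = ((Number) bounds.get(1)).longValue();

        // As in the answer above: +1 so the row holding the maximum stays in range
        return spark.read().jdbc(url, table, partitionColumn,
                lower, upper + 1, numPartitions, props);
    }
}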

How to reset record with counter column after delete it in Cassandra?

I created a table with a counter column using the com.datastax.driver.core packages, via a function in a class:
public void doStartingJob() {
    session.execute("CREATE KEYSPACE myks WITH replication "
        + "= {'class':'SimpleStrategy', 'replication_factor':1};");
    session.execute("CREATE TABLE myks.clients_count (ip_address text PRIMARY KEY, "
        + "request_count counter);");
}
After this I deleted the table entry from cqlsh like so:
jay#jayesh-380:~$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh:myks> DELETE FROM clients_count WHERE ip_address='127.0.0.1';
Then, to insert a row with the same primary key, I used the following statement (via cqlsh):
UPDATE myks.clients_count SET request_count = 1 WHERE ip_address ='127.0.0.1';
But it is not allowed:
InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot set the value of counter column request_count (counters can only be incremented/decremented, not set)"
But I want the record's counter column to be set to 1, with the same primary key. (Functional requirement)
How can I do this?
The usage of counters is a bit strange, but you'll get used to it. The main thing, however, is that counters cannot be reused. Once you delete the counter value for a particular primary key, that counter is lost forever. This is by design, and I don't think it is going to change.
Back to your question: the first of your problems is the initial DELETE. Don't.
Second, if the counter value for a primary key doesn't exist, C* treats it as zero by default. Following the documentation, to load data into the counter for the first time you have to issue:
UPDATE myks.clients_count SET request_count = request_count + 1 WHERE ip_address = '127.0.0.1';
And a SELECT will return the correct answer: 1.
Again, beware of deletes! Don't! If you do, any subsequent query:
UPDATE myks.clients_count SET request_count = request_count + 1 WHERE ip_address = '127.0.0.1';
will NOT fail, but the counter will NOT be updated.
Another thing to note is that C* doesn't support atomic read-and-update (or update-and-read) on counter columns. That is, you cannot issue an update and, within the same query, get the new (or the old) value of the counter. You need two distinct queries, one SELECT and one UPDATE, but in a multi-client environment the value you SELECT may not reflect the counter's value at the time of the UPDATE.
Your app will definitely fail if you underestimate this.
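For completeness, here is a minimal Java-driver sketch of this increment-then-read pattern against the question's schema (the two statements are separate round trips, so the caveat above applies):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class CounterExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Counters are never INSERTed; the first UPDATE starts from an implicit 0
            session.execute("UPDATE myks.clients_count SET request_count = request_count + 1 "
                + "WHERE ip_address = '127.0.0.1'");

            // A separate query: with concurrent clients the value read here
            // may no longer match the counter at the time of the UPDATE
            ResultSet rs = session.execute("SELECT request_count FROM myks.clients_count "
                + "WHERE ip_address = '127.0.0.1'");
            System.out.println("request_count = " + rs.one().getLong("request_count"));
        }
    }
}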

Cassandra SELECT on secondary index doesn't return row

I am dealing with puzzling behaviour when doing SELECTs on Cassandra 2.2.3. I have 4 nodes in the ring, and I create the following keyspace, table and index.
CREATE KEYSPACE IF NOT EXISTS my_keyspace
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE my_keyspace.my_table (
    id text,
    some_text text,
    code text,
    some_set set<int>,
    a_float float,
    name text,
    type int,
    a_double double,
    another_set set<int>,
    another_float float,
    yet_another_set set<text>,
    PRIMARY KEY (id, some_text, code)
) WITH read_repair_chance = 0.0
    AND dclocal_read_repair_chance = 0.1
    AND gc_grace_seconds = 864000
    AND bloom_filter_fp_chance = 0.01
    AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' }
    AND comment = ''
    AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' }
    AND compression = { 'sstable_compression' : 'org.apache.cassandra.io.compress.LZ4Compressor' }
    AND default_time_to_live = 0
    AND speculative_retry = '99.0PERCENTILE'
    AND min_index_interval = 128
    AND max_index_interval = 2048;
CREATE INDEX idx_my_table_code ON my_keyspace.my_table (code);
Then I insert some rows on the table. Some of them have empty sets. I perform this query through the default CQL client and get the row I am expecting:
SELECT * FROM my_table WHERE code = 'test';
Then I run some tests which are outside my control. I don't know what they do but I expect they read and possibly insert/update/delete some rows. I'm sure they don't delete or change any of the settings in the index, table or keyspace.
After the tests, I log in again through the default CQL client and run the following queries.
SELECT * FROM my_table WHERE code = 'test';
SELECT * FROM my_table;
SELECT * FROM my_table WHERE id = 'my_id' AND some_text = 'whatever' AND code = 'test';
The first one doesn't return anything.
The second one returns all the rows, including the one with code = 'test'.
The third one returns the expected row that the first query couldn't retrieve.
The only difference that I can see between this row and others is that it is one of the rows which contains some empty sets, as explained earlier. If I query for another of the rows that also contain some empty sets, I get the same behavior.
I would say the problem is related to the secondary index. Somehow, the operations performed during the tests leave the index in a state where it cannot see certain rows.
I'm obviously missing something. Do you have any ideas about what could cause this behavior?
Thanks in advance.
UPDATE:
I worked around the issue, but now I found the same problem somewhere else. Since the issue first happened, I found out more about the operations performed before the error: updates on specific columns that set a TTL for said columns. After some investigation I found some Jira issues which could be related to this problem:
https://issues.apache.org/jira/browse/CASSANDRA-6782
https://issues.apache.org/jira/browse/CASSANDRA-8206
However, those issues seem to have been fixed in 2.0 and 2.1, and I'm using 2.2. I think those changes are included in 2.2, but I could be mistaken.
The main problem is the type of query you are running on Cassandra.
The Cassandra data model is query-driven: tables are designed to serve specific queries.
Tables are created with a well-defined primary key (partition key & clustering key). Cassandra is not good for full-table-scan types of queries.
Now, coming to your queries.
SELECT * FROM my_table WHERE code = 'test';
Here the column used is a clustering column, and as the equality-search column it should be part of the partition key. The clustering key will be present in different partitions, so if the read consistency level is ONE, it may give an empty result.
SELECT * FROM my_table;
Cassandra is not good for this kind of table-scan query. Here it will search the whole table and return all the rows (poor querying).
SELECT * FROM my_table
WHERE id = 'my_id' AND some_text = 'whatever' AND code = 'test';
Here you specified everything, so the correct results were returned.
I opened a Jira issue, and the problem was fixed in 2.1.18 and 2.2.10:
https://issues.apache.org/jira/browse/CASSANDRA-13412
I speak just from what I read in the Jira issue; I didn't test the above scenario again after the fix was implemented, because by then I had moved to version 3.0.
In the end, though, I removed almost every use of secondary indices from my application, as I learned that they lead to bad performance.
The reason is that in most cases they result in fan-out queries that contact every node of the cluster, with the corresponding costs.
There are still some cases where they can perform well, e.g. when you also query by partition key, since then no other nodes are involved.
But for anything else, my advice is: consider whether you can remove your secondary indices and do lookups in auxiliary tables instead. You'll have the burden of keeping the tables in sync, but performance should be better. A sketch of this pattern follows.
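As an illustration of the auxiliary-table pattern (the lookup table my_table_by_code and the reduced column set are assumptions for this sketch, loosely based on the schema in the question; a logged batch keeps the two tables in sync):
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Assumed lookup table, partitioned by the would-be indexed column:
// CREATE TABLE my_keyspace.my_table_by_code (
//     code text, id text, some_text text,
//     PRIMARY KEY (code, id, some_text));
public class CodeLookup {
    private final Session session;
    private final PreparedStatement insertMain;
    private final PreparedStatement insertLookup;

    public CodeLookup(Session session) {
        this.session = session;
        this.insertMain = session.prepare(
            "INSERT INTO my_keyspace.my_table (id, some_text, code) VALUES (?, ?, ?)");
        this.insertLookup = session.prepare(
            "INSERT INTO my_keyspace.my_table_by_code (code, id, some_text) VALUES (?, ?, ?)");
    }

    public void insert(String id, String someText, String code) {
        // Logged batch: if either write is applied, both eventually are
        BatchStatement batch = new BatchStatement();
        batch.add(insertMain.bind(id, someText, code));
        batch.add(insertLookup.bind(code, id, someText));
        session.execute(batch);
    }
}
Queries by code then become ordinary partition-key lookups on my_table_by_code instead of index fan-outs.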

How to search a cassandra collection map using QueryBuilder

In my Cassandra table I have a map collection, and I have indexed the map keys.
CREATE TABLE IF NOT EXISTS test.collection_test (
    name text,
    year text,
    attributeMap map<text,text>,
    PRIMARY KEY ((name, year))
);
CREATE INDEX ON collection_test (attributeMap);
The QueryBuilder syntax is as below:
select().all().from("test", "collection_test")
.where(eq("name", name)).and(eq("year", year));
How should I put a where condition on attributeMap?
First of all, you will need to create an index on the keys in your map. By default, an index created on a map indexes the values of the map, not the keys. There is special syntax to index the keys:
CREATE INDEX attributeKeyIndex ON collection_test (KEYS(attributeMap));
Next, to SELECT from a map with indexed keys, you'll need the CONTAINS KEY keyword. Currently there is no definition for this functionality in the query builder API, but there is an open ticket to support it: JAVA-677.
Currently, to accomplish this with the Java Driver, you'll need to build your own query or use a prepared statement:
PreparedStatement statement = _session.prepare("SELECT * " +
"FROM test.collection_test " +
"WHERE attributeMap CONTAINS KEY ?");
BoundStatement boundStatement = statement.bind(yourKeyValue);
ResultSet results = _session.execute(boundStatement);
Finally, you should read through the DataStax doc on When To Use An Index. Secondary indexes are known not to perform well, and I can't imagine that a secondary index on a collection would be any different.
