Is there an easy way to clear the prepared statement cache in AWS Keyspaces?
Traditional tricks used on self-managed Cassandra nodes cannot be used in the AWS cloud.
In Cassandra, this cache seems to be cleared when the table definition changes, but I have found cases where AWS Keyspaces does not clear it when you modify columns that were used in an earlier prepared statement.
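To make the scenario concrete, here is a minimal sketch with the Python driver, using a hypothetical schema `my_ks.users(id text PRIMARY KEY, name text)`; `127.0.0.1` stands in for your actual contact point or Keyspaces endpoint:

```python
from cassandra.cluster import Cluster

# Hypothetical contact point and keyspace.
session = Cluster(["127.0.0.1"]).connect("my_ks")

prepared = session.prepare("SELECT id, name FROM users WHERE id = ?")
print(session.execute(prepared, ["42"]).one())

# Modify a column that the prepared statement selects.
session.execute("ALTER TABLE users DROP name")
session.execute("ALTER TABLE users ADD name text")

# Against Apache Cassandra the server invalidates the cached statement and
# the driver re-prepares transparently; the question reports that against
# AWS Keyspaces the stale cache entry can survive this kind of change.
print(session.execute(prepared, ["42"]).one())
```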
As there is no support for custom indexes in AWS Keyspaces, what would be the best solution or pattern for running LIKE or ILIKE queries on specific columns of a Cassandra table?
In vanilla Cassandra you can use an SSTable-attached secondary index (SASI) to run LIKE queries, but we can't in AWS...
Related: Is there any query for Cassandra similar to SQL's LIKE condition?
Feeding an OpenSearch service, or even a good old Postgres, at the same time as updating Keyspaces seems a bit overkill to me.
Fetching all the columns into memory somewhere to run the query seems slow as well.
What would be the lightest infrastructure and architecture to implement LIKE query support with AWS Keyspaces as the source of truth?
You can use a lexicographic range SELECT statement to narrow your query down the same way a LIKE 'prefix%' statement would. If you need to narrow it down further, you can do that filtering client-side. I would love to learn more about your use case so I can better assist you.
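A sketch of that trick with the Python driver, assuming a hypothetical table `my_ks.products(category text, name text, PRIMARY KEY (category, name))`: a prefix search `name LIKE 'cas%'` becomes a range scan on the clustering column.

```python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")

prefix = "cas"
# Smallest string lexicographically greater than every "cas..." value: "cat".
upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)

rows = session.execute(
    "SELECT name FROM products WHERE category = %s AND name >= %s AND name < %s",
    ("books", prefix, upper),
)
for row in rows:
    print(row.name)
```

This only covers prefix matching; arbitrary infix patterns (LIKE '%cas%') still need client-side filtering, as the answer notes.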
I need a big data storage solution for batch inserts of denormalized data, which happen infrequently, and queries on the inserted data, which happen frequently.
I've gone through Cassandra and feel that it's not that good for batch inserts, but an OK solution for querying. Also, it would be good if there were a mechanism to segregate data based on a data attribute.
Since you mentioned Cassandra, I will talk about it.
Can you insert in an unbatched way, or is batching imposed by the system? If you can insert unbatched, Cassandra will probably be able to handle it easily; see the sketch below.
Batched inserts should also be manageable for Cassandra nodes, but a batch won't distribute the load properly among all the nodes (note: I'm talking about load balancing, not data balance, which depends only on your partition key setup). If you are not very familiar with Cassandra, you could tell us your data structure and your query types, and we could suggest how to fit them to Cassandra's data model.
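As a concrete sketch of the unbatched path, assuming a hypothetical table `my_ks.events(id uuid PRIMARY KEY, payload text)`:

```python
import uuid

from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

session = Cluster(["127.0.0.1"]).connect("my_ks")
insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

rows = [(uuid.uuid4(), "payload-%d" % i) for i in range(10_000)]

# Each insert is an independent request, so the token-aware driver spreads
# them across all nodes, unlike one big batch sent to a single coordinator.
execute_concurrent_with_args(session, insert, rows, concurrency=100)
```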
For the filtering part of the question, Cassandra has clustering keys and secondary indexes; a secondary index is basically like adding another queryable column alongside the clustering key, so you have both available for querying.
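A sketch of that combination, with made-up names (sensor readings partitioned by `sensor_id` and clustered by `ts`):

```python
from datetime import datetime

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")

# sensor_id is the partition key, ts the clustering key.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        sensor_id text,
        ts        timestamp,
        status    text,
        value     double,
        PRIMARY KEY (sensor_id, ts)
    )
""")
# A secondary index makes status queryable as well.
session.execute("CREATE INDEX IF NOT EXISTS ON readings (status)")

# Range filter on the clustering key within one partition:
session.execute(
    "SELECT * FROM readings WHERE sensor_id = %s AND ts >= %s",
    ("sensor-1", datetime(2024, 1, 1)),
)
# Equality filter through the secondary index:
session.execute("SELECT * FROM readings WHERE status = %s", ("ERROR",))
```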
I have to change the schema of one of my tables in Cassandra. It cannot be done simply with the ALTER TABLE command, because there are changes to the primary key.
So the question is: how can such a migration be done in the best way?
Using the COPY command in cqlsh is not an option here because the dump file can be really huge.
Can I solve this problem without writing a custom application?
As Guillaume suggested in the comment, you can't do this directly in Cassandra; schema-altering operations are very limited here. You have to perform such a migration manually using one of the tools suggested there, or, if you have very large tables, you can leverage Spark.
Spark can efficiently read the data from your nodes, transform it locally, and save it back to the database. Remember that such a migration requires reading the whole table's contents, so it might take a while. It is probably the most performant solution, but it needs some bigger preparation, namely a Spark cluster setup.
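A hedged sketch of what that migration can look like with PySpark, assuming the spark-cassandra-connector is on the classpath and using made-up keyspace/table names; the new table with the new primary key is created beforehand:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cassandra-pk-migration")
    .config("spark.cassandra.connection.host", "10.0.0.1")  # your contact point
    .getOrCreate()
)

# Read the whole old table (the full-scan part that "takes a while").
old = (
    spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="my_ks", table="old_table")
    .load()
)

# Transform as needed for the new primary key, then append into the
# pre-created target table.
(
    old.write
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="my_ks", table="new_table")
    .mode("append")
    .save()
)
```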
I want to test my cluster a little: how data replicates, etc.
I have a Cassandra cluster formed by 5 machines (CentOS 7 and Cassandra 3.4 on them).
Are there ready-made tables for testing anywhere that I can import into my database in some keyspace?
If yes, please be kind enough to explain how to import them into a keyspace and where to get them.
You can use cassandra-stress. It is great for generating data matching your own table design and also comes with some default tables.
http://docs.datastax.com/en/cassandra_win/3.0/cassandra/tools/toolsCStress.html
I highly recommend it.
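For example (exact paths and numbers are up to you; the tool ships in Cassandra's tools directory):

```
# Write one million rows into the auto-created keyspace1.standard1 table,
# then read a sample back:
cassandra-stress write n=1000000 -rate threads=50
cassandra-stress read n=200000 -rate threads=50
```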
Actually, there is a lot of data on the internet that can be used for testing, e.g.:
https://www.quora.com/Where-can-I-find-large-datasets-open-to-the-public
http://bigdata-madesimple.com/70-websites-to-get-large-data-repositories-for-free/
Cassandra provides the cqlsh tool for executing CQL commands, such as COPY for importing CSV data into the database.
P.S. Pay attention to the fact that cqlsh has some restrictions related to timeouts. That is why it would be better to use a Cassandra driver to make this process more effective; a sketch follows below.
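A minimal sketch of loading a CSV through the Python driver instead of cqlsh COPY, assuming a hypothetical file `data.csv` with columns `id,name` and a matching table `my_ks.users`:

```python
import csv

from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

session = Cluster(["127.0.0.1"]).connect("my_ks")
insert = session.prepare("INSERT INTO users (id, name) VALUES (?, ?)")

with open("data.csv", newline="") as f:
    rows = [(r["id"], r["name"]) for r in csv.DictReader(f)]

# Many small concurrent requests, each with its own timeout, avoid the
# single long-running COPY operation that tends to hit cqlsh's limits.
execute_concurrent_with_args(session, insert, rows, concurrency=50)
```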
In case we need to add new columns to an existing Cassandra (version 1.2) static column family in production, can we do it without downtime, given that we have hundreds of nodes and multiple data centers?
It would be disappointing if that were not possible.
In the case of adding columns, all that is really going on with an ALTER statement in CQL is a metadata entry in the system tables. No data files are rewritten.
This metadata is then used for validation by both the API transports and compaction.
If you really have that big a cluster, you will need to wait a short while for the change to propagate; cqlsh blocks until this happens, IIRC.
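A minimal illustration with made-up names: since adding a column is a pure schema-metadata change, it can be run against a live cluster.

```python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")

# The driver sends the DDL and, by default, waits for cluster-wide schema
# agreement before returning; no SSTables are rewritten by this change.
session.execute("ALTER TABLE users ADD last_login timestamp")
```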