Cassandra materialized view on collection truncate

If I truncate the entire source table in Cassandra, how will the materialized views (MVs) built on it reflect that?
Could this process create a mismatch between the source partitions and the MVs if data is continuously being inserted into the source partitions?
Can you please explain the flow of a table truncate when materialized views are involved?
FYI, my use case is this: there will be a steady stream of CRUD operations on the source partitions, and in between there will be a truncate request. Will Cassandra ensure that there is no mismatch between the MVs and the source partitions?

Related

How to flush data in all tables of a keyspace in Cassandra?

I am currently writing tests in Golang and I want to get rid of all the data in the tables after the tests finish. I was wondering if it is possible to flush the data of all tables in Cassandra.
FYI: I am using Cassandra 3.11.
The term "flush" is ambiguous in this case.
In Cassandra, "flush" is an operation where data is "flushed" from memory and written to disk as SSTables. Flushing can happen automatically based on certain triggers or can be done manually with the nodetool flush command.
However, based on your description, what you want is to "truncate" the contents of the tables. You can do this with the following CQL command:
cqlsh> TRUNCATE ks_name.table_name
You will need to iterate over each table in the keyspace. For more info, see the CQL TRUNCATE command. Cheers!
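If you would rather script this than type each TRUNCATE by hand, here is a minimal Scala sketch using the DataStax Java driver 3.x (the contact point and keyspace name are placeholders, not from the question); the same loop is easy to reproduce in Go with gocql by listing the tables from system_schema.tables:

import com.datastax.driver.core.Cluster
import scala.collection.JavaConverters._

object TruncateAllTables {
  def main(args: Array[String]): Unit = {
    // Placeholder contact point -- adjust for your cluster.
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect()
    try {
      // Enumerate every table in the keyspace from the driver's schema metadata.
      val tables = cluster.getMetadata.getKeyspace("ks_name").getTables.asScala
      // TRUNCATE each one; note that TRUNCATE needs all nodes to be reachable.
      tables.foreach(t => session.execute(s"TRUNCATE ks_name.${t.getName}"))
    } finally {
      cluster.close()
    }
  }
}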

Performance considerations when reading from a Hive view vs. a Hive table via DataFrames

We have a view that unions multiple Hive tables. If I use Spark SQL in PySpark and read that view, will there be any performance issue compared with reading directly from the table?
In Hive we had something called a full table scan if we didn't limit the WHERE clause to an exact table partition. Is Spark intelligent enough to read only the table that has the data we are looking for, rather than searching through the entire view?
Please advise.
You are talking about partition pruning.
Yes, Spark supports it: Spark automatically skips reading large amounts of data when partition filters are specified.
Partition pruning is possible when data within a table is split across multiple logical partitions. Each partition corresponds to a particular value of a partition column and is stored as a subdirectory within the table root directory on HDFS. Where applicable, only the required partitions (subdirectories) of a table are queried, thereby avoiding unnecessary I/O.
After partitioning the data, subsequent queries can skip large amounts of I/O when the partition column is referenced in predicates. For example, the following query automatically locates and loads the file under peoplePartitioned/age=20/ and omits all others:
val peoplePartitioned = spark.read.format("orc").load("peoplePartitioned")
peoplePartitioned.createOrReplaceTempView("peoplePartitioned")
spark.sql("SELECT * FROM peoplePartitioned WHERE age = 20")
More detailed info is provided here.
You can also see this in the logical plan if you run explain(true) on your query:
spark.sql("SELECT * FROM peoplePartitioned WHERE age = 20").explain(true)
It will show which partitions are read by Spark.
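For completeness, the peoplePartitioned layout that makes this pruning possible comes from writing with partitionBy. A minimal sketch (the column names and values are just illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pruning-demo").getOrCreate()
import spark.implicits._

// Write the data split into one subdirectory per age value,
// e.g. peoplePartitioned/age=20/, peoplePartitioned/age=21/, ...
Seq(("Alice", 20), ("Bob", 21), ("Carol", 20))
  .toDF("name", "age")
  .write
  .partitionBy("age")
  .format("orc")
  .save("peoplePartitioned")

// Reading back with a predicate on the partition column only touches age=20/.
spark.read.format("orc").load("peoplePartitioned")
  .where("age = 20")
  .explain(true)   // look for the PartitionFilters entry in the physical plan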

Efficient Filtering on a huge data frame in Spark

I have a Cassandra table with 500 million rows. I would like to filter based on a field which is a partition key in Cassandra using spark.
Can you suggest the best/most efficient approach to filter in Spark/Spark SQL based on a list of keys, which is also pretty large?
Basically, I need only those rows from the Cassandra table whose keys are present in the list.
We are using DSE and its features.
The approach I am using is taking a lot of time, roughly around an hour.
Have you checked repartitionByCassandraReplica and joinWithCassandraTable?
https://github.com/datastax/spark-cassandra-connector/blob/75719dfe0e175b3e0bb1c06127ad4e6930c73ece/doc/2_loading.md#performing-efficient-joins-with-cassandra-tables-since-12
joinWithCassandraTable utilizes the Java driver to execute a single query for every partition required by the source RDD, so no unneeded data will be requested or serialized. This means a join between any RDD and a Cassandra table can be performed without doing a full table scan. When performed between two Cassandra tables which share the same partition key, this will not require movement of data between machines. In all cases this method will use the source RDD's partitioning and placement for data locality.
The method repartitionByCassandraReplica can be used to relocate data in an RDD to match the replication strategy of a given table and keyspace. The method will look for partition key information in the given RDD and then use those values to determine which nodes in the cluster would be responsible for that data.
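As a rough illustration, here is a minimal Scala sketch of that pattern, assuming a hypothetical keyspace ks and table events with a single partition key column; it pushes the key list through repartitionByCassandraReplica and joinWithCassandraTable instead of doing a full scan plus filter:

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("filter-by-partition-keys")
  .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder contact point

val sc = new SparkContext(conf)

// The (large) list of partition key values; in practice load it from wherever it lives.
val keys: Seq[Long] = Seq(1L, 2L, 3L)

val rows = sc.parallelize(keys)
  .map(Tuple1(_))                                  // one Tuple1 per partition key value
  .repartitionByCassandraReplica("ks", "events")   // move each key to a node that owns it
  .joinWithCassandraTable("ks", "events")          // one targeted query per key, no full table scan

rows.take(10).foreach(println)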

Reading from Cassandra while simultaneously writing

I am trying to read a Cassandra table while data is being inserted into it. The table has a timestamp as one of the primary key columns (not the partition key). We have a Spark job that reads from Kafka and writes to Cassandra every 15 seconds. The server component reads from Cassandra almost immediately after the Spark job starts inserting the data. Since the data being inserted into Cassandra is huge, we read it in pages. While reading in pages, we observed that a few records are skipped before the read reaches the last record.
But when we run the same page-by-page read logic on already-inserted data, it works fine (no records are skipped). Is there any way to read the data in pages while data is being inserted into Cassandra?
What you observe might be a result of the current Cassandra consistency level. To make sure all written data is available to read, you could use the ALL consistency level, but this means waiting for all replicas to acknowledge each change.
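For example, with the Java driver 3.x the consistency level can be raised per statement (the keyspace, table, and contact point below are placeholders). QUORUM on both writes and reads is a common alternative to ALL: as long as R + W > RF, a read still sees every acknowledged write without requiring every replica to be up.

import com.datastax.driver.core.{Cluster, ConsistencyLevel, SimpleStatement}
import scala.collection.JavaConverters._

val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect()

// Read at a stronger level than the driver's default (LOCAL_ONE) so that
// recently written rows are visible to the pager.
val stmt = new SimpleStatement("SELECT * FROM ks.events WHERE id = ?", Long.box(42L))
  .setConsistencyLevel(ConsistencyLevel.ALL) // or QUORUM on both reads and writes

session.execute(stmt).asScala.foreach(println)

cluster.close()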

What is the metastore for in Spark?

I am using SparkSQL in Python. I have created a partitioned table (a few hundred partitions) and stored it as a Hive internal table using the hiveContext. The Hive warehouse is located in S3.
When I simply do df = hiveContext.table("mytable"), it takes over a minute to go through all the partitions the first time. I thought the metastore stored all the metadata. Why would Spark still need to go through each partition? Is it possible to avoid this step so my startup can be faster?
The key here is that it only takes this long to load the file metadata on the first query. The reason is that SparkSQL doesn't store the partition metadata in the Hive metastore, whereas for Hive partitioned tables the partition information is kept in the metastore. How the table was created dictates which behaviour you get; from the information provided, it sounds like you created a SparkSQL table.
SparkSQL stores the table schema (which includes the partitioning columns) and the root directory of your table, but still discovers each partition directory on S3 dynamically when the query is run. My understanding is that this is a tradeoff so you don't need to manually add new partitions whenever the table is updated.
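To make that concrete, here is a small sketch (Spark 2.x Scala API for brevity; the table name and partition column are hypothetical) of the kind of table the question describes; the first read in a session is where the per-partition directory listing happens:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// A "SparkSQL" (datasource) table: as described above, the metastore keeps the
// schema and the table's root location, with partitions living as
// subdirectories under that location.
spark.range(1000)
  .withColumn("part", (col("id") % 10).cast("string"))
  .write
  .partitionBy("part")
  .format("parquet")
  .saveAsTable("mytable")

// First access in a session: in the behaviour described above, Spark lists
// every partition directory under the root location (slow when the warehouse
// is on S3). Later queries in the same session reuse the cached file listing.
spark.table("mytable").where("part = '3'").count()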
