How to update a Cassandra table with the latest row when a Spark DataFrame has multiple rows with the same primary key?

We have a Cassandra table person:
CREATE TABLE test.person (
name text PRIMARY KEY,
score bigint
)
and the DataFrame is:
val caseClassDF = Seq(Person("Andy1", 32), Person("Mark1", 27), Person("Ron", 27),Person("Andy1", 20),Person("Ron", 270),Person("Ron", 2700),Person("Mark1", 37),Person("Andy1", 200),Person("Andy1", 2000)).toDF()
In Spark we want to save this DataFrame to the table, where the DataFrame has multiple records for the same primary key.
Q1: How does the Cassandra Connector internally handle the ordering of the rows?
Q2: We are reading data from Kafka and saving to Cassandra, and our batch will always have multiple events like the above. We want to save the latest score to Cassandra. Any suggestion on how we can achieve this?
The connector version we use is spark-cassandra-connector_2.12:3.2.1.
Here are some observations from our side:
val spark = SparkSession.builder()
.master("local[1]")
.appName("CassandraConnector")
.config("spark.cassandra.connection.host", "")
.config("spark.cassandra.connection.port", "")
.config("spark.sql.extensions", "com.datastax.spark.connector.CassandraSparkExtensions")
.getOrCreate()
val caseClassDF = Seq(Person("Andy1", 32), Person("Mark1", 27), Person("Ron", 27),Person("Andy1", 20),Person("Ron", 270),Person("Ron", 2700),Person("Mark1", 37),Person("Andy1", 200),Person("Andy1", 2000)).toDF()
caseClassDF.write
.format("org.apache.spark.sql.cassandra")
.option("keyspace", "test")
.option("table", "person")
.mode("APPEND")
.save()
When we have
.master("local[1]")
then in the Cassandra table we always see score 2000 for "Andy1" and 2700 for "Ron", which are the latest values in the Seq.
Now when we change to
.master("local[*]") or .master("local[2]")
we see a seemingly random score in the Cassandra table, either 200 or 32 for "Andy1".
Note: We did each run on a fresh table, so it is always insert and update within one batch.
We want to save the latest score to Cassandra. Any suggestion on how we can achieve this?

Data in a DataFrame is by definition unordered, and the write into Cassandra will reflect this (inserts and updates are the same thing in Cassandra): the data will be written in a random order and the last write will win.
If you want to write only the latest value (the one with the max score?), you will need to perform an aggregation over your data and use the update output mode to write data to Cassandra (to write intermediate results of your streaming aggregation). Something like this:
caseClassDF.groupBy("name").agg(max("score")).write....
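A minimal batch sketch of that aggregation, assuming the latest score is also the maximum one and reusing the keyspace and table from the question:
import org.apache.spark.sql.functions.max
caseClassDF
  .groupBy("name")
  .agg(max("score").as("score"))
  .write
  .format("org.apache.spark.sql.cassandra")
  .option("keyspace", "test")
  .option("table", "person")
  .mode("append")
  .save()
For the streaming case from Q2, the same groupBy/agg runs as a streaming aggregation and is written out with outputMode("update"), as described above.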

Related

How to get new/updated records from Delta table after upsert using merge?

Is there any way to get the updated/inserted rows after an upsert using merge to a Delta table in a Spark streaming job?
val df = spark.readStream(...)
val deltaTable = DeltaTable.forName("...")
def upsertToDelta(events: DataFrame, batchId: Long): Unit = {
deltaTable.as("table")
.merge(
events.as("event"),
"event.entityId == table.entityId")
.whenMatched()
.updateExpr(...)
.whenNotMatched()
.insertAll()
.execute()
}
df
.writeStream
.format("delta")
.foreachBatch(upsertToDelta _)
.outputMode("update")
.start()
I know I can create another job to read updates from the Delta table, but is it possible to do it in the same job? From what I can see, execute() returns Unit.
You can enable Change Data Feed on the table, and then have another stream or batch job fetch the changes, so you'll be able to receive information on which rows were changed/deleted/inserted. It can be enabled with:
ALTER TABLE table_name SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
If the table isn't registered, you can use the path instead of the table name:
ALTER TABLE delta.`path` SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
The changes will be available if you add the .option("readChangeFeed", "true") option when reading the stream from the table:
spark.readStream.format("delta") \
.option("readChangeFeed", "true") \
.table("table_name")
and it will add three columns to the result describing the change; the most important is _change_type (note that there are two different types for the update operation).
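For example, a sketch in Scala (update_preimage and update_postimage are the two update types referred to above; table_name is the placeholder from the previous snippet):
import org.apache.spark.sql.functions.col
val changes = spark.readStream
  .format("delta")
  .option("readChangeFeed", "true")
  .table("table_name")
  // keep inserts, deletes and the post-update image; drop the pre-update image
  .filter(col("_change_type") =!= "update_preimage")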
If you're worried about having another stream, it's not a problem, as you can run multiple streams inside the same job; you just shouldn't use .awaitTermination on a single query, but something like spark.streams.awaitAnyTermination() to wait on multiple streams.
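A sketch of that setup, reusing the query from the question plus a second stream over the Change Data Feed (sink and names are illustrative):
// stream 1: the existing foreachBatch upsert into the Delta table
val upsertQuery = df.writeStream
  .format("delta")
  .foreachBatch(upsertToDelta _)
  .outputMode("update")
  .start()
// stream 2: consume the Change Data Feed of the same table
val cdfQuery = spark.readStream
  .format("delta")
  .option("readChangeFeed", "true")
  .table("table_name")
  .writeStream
  .format("console")
  .start()
// wait on all active queries instead of calling awaitTermination on a single one
spark.streams.awaitAnyTermination()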
P.S. This answer might change if you explain why you need to get the changes inside the same job.

Change filter/where condition when restarting a Structured Streaming query reading data from Delta Table

In Structured Streaming, will the checkpoints keep track of which data has already been processed from a Delta Table?
def fetch_data_streaming(source_table: str):
print("Fetching now")
streamingInputDF = (
spark
.readStream
.format("delta")
.option("maxBytesPerTrigger",1024)
.table(source_table)
.where("measurementId IN (1351,1350)")
.where("year >= '2021'")
)
query = (
streamingInputDF
.writeStream
.outputMode("append")
.option("checkpointLocation", "/streaming_checkpoints/5")
.foreachBatch(customWriter)
.start()
.awaitTermination()
)
return query
def customWriter(batchDF,batchId):
print(batchId)
print(batchDF.count())
batchDF.show(10)
length = batchDF.count()
print("batchId,batch size:",batchId,length)
If I change the where clause in the streamingInputDF to add more measurementId values, the Structured Streaming job doesn't always acknowledge the change and fetch the new data values. It continues to run as if nothing has changed, whereas at times it starts fetching new values.
Isn't the checkpoint supposed to identify the change?
Edit: schema of the Delta table:
col_name      | data_type
measurementId | int
year          | int
time          | timestamp
q             | smallint
v             | string
"In structured streaming, will the checkpoints will keep track of which data has already been processed?"
Yes, the Structured Streaming job will store the read version of the Delta table in its checkpoint files to avoid producing duplicates.
Within the checkpoint directory, in the folder "offsets", you will see that Spark stores the progress per batchId. For example, it will look like this:
v1
{"batchWatermarkMs":0,"batchTimestampMs":1619695775288,"conf":[...]}
{"sourceVersion":1,"reservoirId":"d910a260-6aa2-4a7c-9f5c-1be3164127c0","reservoirVersion":2,"index":2,"isStartingVersion":true}
Here, the important part is the "reservoirVersion":2 which tells you that the streaming job has consumed all data from the Delta Table as of version 2.
If you restart your Structured Streaming query with an additional filter condition, that filter will therefore not be applied to historic records but only to those that were added to the Delta table after version 2.
To see this behavior in action, you can use the code below and analyse the content of the checkpoint files.
import org.apache.spark.sql.functions.col
import spark.implicits._ // for toDF on the Seq
val deltaPath = "file:///tmp/delta/table"
val checkpointLocation = "file:///tmp/checkpoint/"
// run the following two lines once
val deltaDf = Seq(("1", "foo1"), ("2", "foo2"), ("3", "foo2")).toDF("id", "value")
deltaDf.write.format("delta").mode("append").save(deltaPath)
// run this code for the first time, then add filter condition, then run again
val query = spark.readStream
.format("delta")
.load(deltaPath)
.filter(col("id").isin("1")) // in the second run add "2"
.writeStream
.format("console")
.outputMode("append")
.option("checkpointLocation", checkpointLocation)
.start()
query.awaitTermination()
Now, if you append some more data to the Delta table while the streaming query is shut down and then restart it with the new filter condition, the filter will be applied to the new data.

Create index in Ignite table when saving dataframe from pyspark

I save a Spark DataFrame to an Apache Ignite table with this code:
df.write\
.format("ignite")\
.option("table","REPORT")\
.option("primaryKeyFields", ', '.join(map(str, df.schema.names[:-1])))\
.option("config",configFile)\
.option("compression", "gzip")\
.mode("overwrite")\
.save()
But I cannot find how to create an index on a field with this overwrite save.
I need this, but as part of the .save() operation:
CREATE INDEX REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)
It's pretty simple to do using syntax like the following:
CREATE INDEX IF NOT EXISTS AGE_IDX ON "PUBLIC".Person (AGE)
If the table wasn't recreated (so the index still exists), IF NOT EXISTS kicks in and nothing will be done. Otherwise, the index will be created.
It can be run using any SQL tool that works with Ignite (Web Console, visor, sqlline, JDBC, ODBC, etc.), but I guess that you are going to do it from a Spark job. So you can try to use IgniteSparkSession or IgniteRDD to run SQL over Ignite:
IgniteSparkSession igniteSession = IgniteSparkSession.builder()
.appName("Spark Ignite example")
.igniteConfig(configPath)
.getOrCreate();
igniteSession.sqlContext().sql("CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)");
or
val cacheRdd = igniteContext.fromCache("partitioned")
val result = cacheRdd.sql(
"CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)")
No, you can't do that when saving a DataFrame with Spark. Creating a table and creating an index are two different operations.
Here are all the options for saving a DataFrame into Ignite, and as you can see, there is no option for index creation.
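So in practice the index is created as a separate SQL statement after the save, e.g. reusing the IgniteSparkSession approach from the other answer (a sketch; configPath is the same config file used for the write, and the index/column names come from the question):
import org.apache.ignite.spark.IgniteSparkSession
val igniteSession = IgniteSparkSession.builder()
  .appName("Create index after DataFrame save")
  .igniteConfig(configPath)
  .getOrCreate()
// runs after df.write ... .mode("overwrite").save() has recreated the table
igniteSession.sqlContext.sql(
  "CREATE INDEX IF NOT EXISTS REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)")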

Migrate huge Cassandra table to another cluster using Spark

I want to migrate our old Cassandra cluster to a new one.
Requirements:
I have a Cassandra cluster of 10 nodes and the table I want to migrate is ~100 GB. I am using Spark to migrate the data. My Spark cluster has 10 nodes and each node has around 16 GB of memory.
In the table we have some junk data which I don't want to migrate to the new table. E.g., let's say I don't want to transfer the rows that have cid = 1234. So, what is the best way to migrate this using a Spark job? I can't put a where filter on the cassandraRdd directly, as cid is not the only column included in the partition key.
Cassandra table:
test_table (
cid text,
uid text,
key text,
value map<text, timestamp>,
PRIMARY KEY ((cid, uid), key)
)
Sample data:
cid | uid | key | value
------+--------------------+-----------+-------------------------------------------------------------------------
1234 | 899800070709709707 | testkey1 | {'8888': '2017-10-22 03:26:09+0000'}
6543 | 097079707970709770 | testkey2 | {'9999': '2017-10-20 11:08:45+0000', '1111': '2017-10-20 15:31:46+0000'}
I am thinking of something like the below, but I guess this is not the most efficient approach.
val filteredRdd = rdd.filter { row => row.getString("cid") != "1234" }
filteredRdd.saveToCassandra(KEYSPACE_NAME,NEW_TABLE_NAME)
What would be the best possible approach here?
That method is pretty good. You may want to write it with DataFrames to take advantage of the row encoding, but this may only have a slight benefit. The key bottleneck in this operation will be writing to and reading from Cassandra.
DF Example
spark
.read
.format("org.apache.spark.sql.cassandra")
.option("keyspace", ks)
.option("table", table)
.load
.filter( 'cid !== "1234" )
.write
.format("org.apache.spark.sql.cassandra")
.option("keyspace", ks2)
.option("table", table2)
.save

Overwrite specific partitions in spark dataframe write method

I want to overwrite specific partitions instead of all of them in Spark. I am trying the following command:
df.write.orc('maprfs:///hdfs-base-path','overwrite',partitionBy='col4')
where df is a DataFrame holding the incremental data to be overwritten.
hdfs-base-path contains the master data.
When I try the above command, it deletes all the partitions and inserts those present in df at the HDFS path.
My requirement is to overwrite only those partitions present in df at the specified HDFS path. Can someone please help me with this?
Finally! This is now a feature in Spark 2.3.0:
SPARK-20236
To use it, you need to set the spark.sql.sources.partitionOverwriteMode setting to dynamic, the dataset needs to be partitioned, and the write mode overwrite. Example:
spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
data.write.mode("overwrite").insertInto("partitioned_table")
I recommend doing a repartition based on your partition column before writing, so you won't end up with 400 files per folder.
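For example, a sketch (col4 is the partition column from the question; data and partitioned_table are the placeholders from the snippet above):
import org.apache.spark.sql.functions.col
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
// shuffling by the partition column keeps the number of files per partition folder small
data
  .repartition(col("col4"))
  .write
  .mode("overwrite")
  .insertInto("partitioned_table")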
Before Spark 2.3.0, the best solution would be to launch SQL statements to delete those partitions and then write them with mode append.
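A sketch of that pre-2.3.0 pattern, assuming the table is registered in the metastore and col4 is a single string partition column (names reused from the question):
// drop the partitions present in df, then append the new data
df.select("col4").distinct().collect().foreach { row =>
  val value = row.getString(0)
  spark.sql(s"ALTER TABLE partitioned_table DROP IF EXISTS PARTITION (col4='$value')")
}
df.write.mode("append").insertInto("partitioned_table")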
This is a common problem. The only solution with Spark up to 2.0 is to write directly into the partition directory, e.g.,
df.write.mode(SaveMode.Overwrite).save("/root/path/to/data/partition_col=value")
If you are using Spark prior to 2.0, you'll need to stop Spark from emitting metadata files (because they will break automatic partition discovery) using:
sc.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")
If you are using Spark prior to 1.6.2, you will also need to delete the _SUCCESS file in /root/path/to/data/partition_col=value or its presence will break automatic partition discovery. (I strongly recommend using 1.6.2 or later.)
You can get a few more details about how to manage large partitioned tables from my Spark Summit talk on Bulletproof Jobs.
spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
data.toDF().write.mode("overwrite").format("parquet").partitionBy("date", "name").save("s3://path/to/somewhere")
This works for me on AWS Glue ETL jobs (Glue 1.0 - Spark 2.4 - Python 2)
Adding 'overwrite=True' parameter in the insertInto statement solves this:
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
df.write.mode("overwrite").insertInto("database_name.partioned_table", overwrite=True)
By default overwrite=False. Changing it to True allows us to overwrite specific partitions contained in df and in the partioned_table. This helps us avoid overwriting the entire contents of the partioned_table with df.
Using Spark 1.6...
The HiveContext can simplify this process greatly. The key is that you must create the table in Hive first using a CREATE EXTERNAL TABLE statement with partitioning defined. For example:
# Hive SQL
CREATE EXTERNAL TABLE test
(name STRING)
PARTITIONED BY
(age INT)
STORED AS PARQUET
LOCATION 'hdfs:///tmp/tables/test'
From here, let's say you have a Dataframe with new records in it for a specific partition (or multiple partitions). You can use a HiveContext SQL statement to perform an INSERT OVERWRITE using this Dataframe, which will overwrite the table for only the partitions contained in the Dataframe:
# PySpark
hiveContext = HiveContext(sc)
update_dataframe.registerTempTable('update_dataframe')
hiveContext.sql("""INSERT OVERWRITE TABLE test PARTITION (age)
SELECT name, age
FROM update_dataframe""")
Note: update_dataframe in this example has a schema that matches that of the target test table.
One easy mistake to make with this approach is to skip the CREATE EXTERNAL TABLE step in Hive and just make the table using the Dataframe API's write methods. For Parquet-based tables in particular, the table will not be defined appropriately to support Hive's INSERT OVERWRITE... PARTITION function.
Hope this helps.
Tested this on Spark 2.3.1 with Scala.
Most of the answers above are writing to a Hive table. However, I wanted to write directly to disk, which has an external hive table on top of this folder.
First the required configuration
val sparkSession: SparkSession = SparkSession
.builder
.enableHiveSupport()
.config("spark.sql.sources.partitionOverwriteMode", "dynamic") // Required for overwriting ONLY the required partitioned folders, and not the entire root folder
.appName("spark_write_to_dynamic_partition_folders")
Usage here:
DataFrame
.write
.format("<required file format>")
.partitionBy("<partitioned column name>")
.mode(SaveMode.Overwrite) // This is required.
.save(s"<path_to_root_folder>")
I tried the below approach to overwrite a particular partition in a Hive table.
### load Data and check records
raw_df = spark.table("test.original")
raw_df.count()
Let's say this table is partitioned on the column c_birth_year and we would like to update the partitions for years up to 1925.
### Check data in few partitions.
sample = raw_df.filter(col("c_birth_year") <= 1925).select("c_customer_sk", "c_preferred_cust_flag")
print "Number of records: ", sample.count()
sample.show()
### Back-up the partitions before deletion
raw_df.filter(col("c_birth_year") <= 1925).write.saveAsTable("test.original_bkp", mode = "overwrite")
### UDF : To delete particular partition.
def delete_part(table, part):
qry = "ALTER TABLE " + table + " DROP IF EXISTS PARTITION (c_birth_year = " + str(part) + ")"
spark.sql(qry)
### Delete partitions
part_df = raw_df.filter(col("c_birth_year") <= 1925).select("c_birth_year").distinct()
part_list = part_df.rdd.map(lambda x : x[0]).collect()
table = "test.original"
for p in part_list:
delete_part(table, p)
### Do the required Changes to the columns in partitions
df = spark.table("test.original_bkp")
newdf = df.withColumn("c_preferred_cust_flag", lit("Y"))
newdf.select("c_customer_sk", "c_preferred_cust_flag").show()
### Write the Partitions back to Original table
newdf.write.insertInto("test.original")
### Verify data in Original table
spark.table("test.original").filter(col("c_birth_year") <= 1925).select("c_customer_sk", "c_preferred_cust_flag").show()
Hope it helps.
Regards,
Neeraj
As Jatin wrote, you can delete partitions from Hive and from the path and then append the data.
Since I was wasting too much time with it, I added the following example for other Spark users.
I used Scala with Spark 2.2.1.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.spark.SparkConf
import org.apache.spark.sql.{Column, DataFrame, SaveMode, SparkSession}
case class DataExample(partition1: Int, partition2: String, someTest: String, id: Int)
object StackOverflowExample extends App {
//Prepare spark & Data
val sparkConf = new SparkConf()
sparkConf.setMaster(s"local[2]")
val spark = SparkSession.builder().config(sparkConf).getOrCreate()
val tableName = "my_table"
val partitions1 = List(1, 2)
val partitions2 = List("e1", "e2")
val partitionColumns = List("partition1", "partition2")
val myTablePath = "/tmp/some_example"
val someText = List("text1", "text2")
val ids = (0 until 5).toList
val listData = partitions1.flatMap(p1 => {
partitions2.flatMap(p2 => {
someText.flatMap(
text => {
ids.map(
id => DataExample(p1, p2, text, id)
)
}
)
}
)
})
val asDataFrame = spark.createDataFrame(listData)
//Delete path function
def deletePath(path: String, recursive: Boolean): Unit = {
val p = new Path(path)
val fs = p.getFileSystem(new Configuration())
fs.delete(p, recursive)
}
def tableOverwrite(df: DataFrame, partitions: List[String], path: String): Unit = {
if (spark.catalog.tableExists(tableName)) {
//clean partitions
val asColumns = partitions.map(c => new Column(c))
val relevantPartitions = df.select(asColumns: _*).distinct().collect()
val partitionToRemove = relevantPartitions.map(row => {
val fields = row.schema.fields
s"ALTER TABLE ${tableName} DROP IF EXISTS PARTITION " +
s"${fields.map(field => s"${field.name}='${row.getAs(field.name)}'").mkString("(", ",", ")")} PURGE"
})
val cleanFolders = relevantPartitions.map(partition => {
val fields = partition.schema.fields
path + "/" + fields.map(f => s"${f.name}=${partition.getAs(f.name)}").mkString("/")
})
println(s"Going to clean ${partitionToRemove.size} partitions")
partitionToRemove.foreach(partition => spark.sqlContext.sql(partition))
cleanFolders.foreach(partition => deletePath(partition, true))
}
df.write
.options(Map("path" -> myTablePath))
.mode(SaveMode.Append)
.partitionBy(partitionColumns: _*)
.saveAsTable(tableName)
}
//Now test
tableOverwrite(asDataFrame, partitionColumns, myTablePath)
spark.sqlContext.sql(s"select * from $tableName").show(1000)
tableOverwrite(asDataFrame, partitionColumns, myTablePath)
import spark.implicits._
val asLocalSet = spark.sqlContext.sql(s"select * from $tableName").as[DataExample].collect().toSet
if (asLocalSet == listData.toSet) {
println("Overwrite is working !!!")
}
}
If you use a DataFrame, you possibly want to use a Hive table over the data.
In this case you just need to call the method
df.write.mode(SaveMode.Overwrite).partitionBy("partition_col").insertInto(table_name)
It'll overwrite the partitions that the DataFrame contains.
There's no need to specify the format (orc), because Spark will use the Hive table format.
It works fine in Spark version 1.6.
Instead of writing to the target table directly, I would suggest you create a temporary table like the target table and insert your data there.
CREATE TABLE tmpTbl LIKE trgtTbl LOCATION '<tmpLocation>';
Once the table is created, you would write your data to the tmpLocation
df.write.mode("overwrite").partitionBy("p_col").orc(tmpLocation)
Then you would recover the table partition paths by executing:
MSCK REPAIR TABLE tmpTbl;
Get the partition paths by querying the Hive metadata like:
SHOW PARTITIONS tmpTbl;
Delete these partitions from trgtTbl and move the directories from tmpTbl to trgtTbl, as sketched below.
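A sketch of those last two steps, assuming a single partition column p_col (as in the write above), that both table locations are on the same filesystem, and illustrative paths:
import org.apache.hadoop.fs.{FileSystem, Path}
// assumed locations; in practice these come from your environment
val tmpLocation = "/tmp/tables/tmpTbl"
val trgtLocation = "/tmp/tables/trgtTbl"
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
// partition directories of tmpTbl, e.g. "p_col=2021-01-01"
val partitions = spark.sql("SHOW PARTITIONS tmpTbl").collect().map(_.getString(0))
partitions.foreach { p =>
  val spec = p.replace("=", "='") + "'" // single partition column -> p_col='2021-01-01'
  spark.sql(s"ALTER TABLE trgtTbl DROP IF EXISTS PARTITION ($spec)")
  fs.rename(new Path(s"$tmpLocation/$p"), new Path(s"$trgtLocation/$p"))
}
// register the moved partitions in the target table
spark.sql("MSCK REPAIR TABLE trgtTbl")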
I would suggest doing a clean-up and then writing the new partitions with Append mode:
import scala.sys.process._
def deletePath(path: String): Unit = {
s"hdfs dfs -rm -r -skipTrash $path".!
}
df.select(partitionColumn).distinct.collect().foreach(p => {
val partition = p.getAs[String](partitionColumn)
deletePath(s"$path/$partitionColumn=$partition")
})
df.write.partitionBy(partitionColumn).mode(SaveMode.Append).orc(path)
This will delete only the partitions present in the new data. After writing the data, run this command if you need to update the metastore:
sparkSession.sql(s"MSCK REPAIR TABLE $db.$table")
Note: deletePath assumes that the hdfs command is available on your system.
My solution involves overwriting each specific partition starting from a Spark DataFrame. It skips the partition-dropping part. I'm using pyspark>=3 and writing to AWS S3:
def write_df_on_s3(df, s3_path, field, mode):
# get the list of unique field values
list_partitions = [x.asDict()[field] for x in df.select(field).distinct().collect()]
df_repartitioned = df.repartition(1,field)
for p in list_partitions:
# create dataframes by partition and send it to s3
df_to_send = df_repartitioned.where("{}='{}'".format(field,p))
df_to_send.write.mode(mode).parquet(s3_path+"/"+field+"={}/".format(p))
The arguments of this simple function are the df, the s3_path, the partition field, and the mode (overwrite or append). The first part gets the unique field values: it means that if I'm partitioning the df by day, I get a list of all the days in the df. Then I repartition the df. Finally, I select the repartitioned df for each day and write it to its specific partition path.
You can change the repartition integer to suit your needs.
You could do something like this to make the job reentrant (idempotent):
(tried this on spark 2.2)
# drop the partition
drop_query = "ALTER TABLE table_name DROP IF EXISTS PARTITION (partition_col='{val}')".format(val=target_partition)
print drop_query
spark.sql(drop_query)
# delete directory
dbutils.fs.rm(<partition_directory>,recurse=True)
# Load the partition
df.write\
.partitionBy("partition_col")\
.saveAsTable(table_name, format = "parquet", mode = "append", path = <path to parquet>)
For >= Spark 2.3.0 :
spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
data.write.insertInto("partitioned_table", overwrite=True)
