Spark connector: Partition usage and performance issue - cassandra

I am trying to run a Spark job (which talks to Cassandra) to read data, do some aggregation, and then write the aggregates back to Cassandra.
I have 2 tables: monthly_active_users (MAU) and daily_user_metric_aggregates (DUMA).
For every record in MAU there will be one or more records in DUMA.
The algorithm: get every record in MAU, take its user_id, and find the records in DUMA for that user (with server-side filters applied, like metric_name in ('ms', 'md')).
If there is one or more DUMA records for the specified where clause, I increment the count in the appMauAggregate map (app-wise MAU counts).
I tested this algorithm and it works as expected, but I wanted to find out:
1) Is it an optimized algorithm, or is there a better way to do it? I have a sense that something is not right, and I am not seeing speedups. It looks like a Cassandra client is being created and shut down for each Spark action (collect), and it takes a long time to process a small dataset.
2) The Spark workers are not co-located with Cassandra, meaning a Spark worker runs in a different node (container) than the C* node (we may move the Spark workers onto the C* nodes for data locality).
3) I see that a Spark job is created/submitted for every Spark action (collect), and I believe that is expected behavior from Spark. Is there any way to cut down the reads from C* and use joins so that data retrieval is fast?
4) What are the downsides of this algorithm? Can you recommend a better design approach with respect to partitioning strategy, loading C* partitions onto Spark partitions, and executor/driver memory requirements?
5) As long as the algorithm and design approach are fine, I can play around with Spark tuning. I am using 5 workers (each with 16 CPUs and 64 GB RAM).
C* Schema :
MAU:
CREATE TABLE analytics.monthly_active_users (
month text,
app_id uuid,
user_id uuid,
PRIMARY KEY (month, app_id, user_id)
) WITH CLUSTERING ORDER BY (app_id ASC, user_id ASC)
data:
cqlsh:analytics> select * from monthly_active_users limit 2;
month | app_id | user_id
--------+--------------------------------------+--------------------------------------
2015-2 | 108eeeb3-7ff1-492c-9dcd-491b68492bf2 | 199c0a31-8e74-46d9-9b3c-04f67d58b4d1
2015-2 | 108eeeb3-7ff1-492c-9dcd-491b68492bf2 | 2c70a31a-031c-4dbf-8dbd-e2ce7bdc2bc7
DUMA:
CREATE TABLE analytics.daily_user_metric_aggregates (
metric_date timestamp,
user_id uuid,
metric_name text,
"count" counter,
PRIMARY KEY (metric_date, user_id, metric_name)
) WITH CLUSTERING ORDER BY (user_id ASC, metric_name ASC)
data:
cqlsh:analytics> select * from daily_user_metric_aggregates where metric_date='2015-02-08' and user_id=199c0a31-8e74-46d9-9b3c-04f67d58b4d1;
metric_date | user_id | metric_name | count
--------------------------+--------------------------------------+-------------------+-------
2015-02-08 | 199c0a31-8e74-46d9-9b3c-04f67d58b4d1 | md | 1
2015-02-08 | 199c0a31-8e74-46d9-9b3c-04f67d58b4d1 | ms | 1
Spark Job :
import java.net.InetAddress
import java.util.concurrent.atomic.AtomicLong
import java.util.{Date, UUID}
import com.datastax.spark.connector.util.Logging
import org.apache.spark.{SparkConf, SparkContext}
import org.joda.time.{DateTime, DateTimeZone}
import scala.collection.mutable.ListBuffer
object MonthlyActiveUserAggregate extends App with Logging {
val KeySpace: String = "analytics"
val MauTable: String = "mau"
val CassandraHostProperty = "CASSANDRA_HOST"
val CassandraDefaultHost = "127.0.0.1"
val CassandraHost = InetAddress.getByName(sys.env.getOrElse(CassandraHostProperty, CassandraDefaultHost))
val conf = new SparkConf().setAppName(getClass.getSimpleName)
.set("spark.cassandra.connection.host", CassandraHost.getHostAddress)
lazy val sc = new SparkContext(conf)
import com.datastax.spark.connector._
def now = new DateTime(DateTimeZone.UTC)
val metricMonth = now.getYear + "-" + now.getMonthOfYear
private val mauMonthSB: StringBuilder = new StringBuilder
mauMonthSB.append(now.getYear).append("-")
if (now.getMonthOfYear < 10) mauMonthSB.append("0")
mauMonthSB.append(now.getMonthOfYear).append("-")
if (now.getDayOfMonth < 10) mauMonthSB.append("0")
mauMonthSB.append(now.getDayOfMonth)
private val mauMonth: String = mauMonthSB.toString()
val dates = ListBuffer[String]()
for (day <- 1 to now.dayOfMonth().getMaximumValue) {
val metricDate: StringBuilder = new StringBuilder
metricDate.append(now.getYear).append("-")
if (now.getMonthOfYear < 10) metricDate.append("0")
metricDate.append(now.getMonthOfYear).append("-")
if (day < 10) metricDate.append("0")
metricDate.append(day)
dates += metricDate.toString()
}
private val metricName: List[String] = List("ms", "md")
val appMauAggregate = scala.collection.mutable.Map[String, scala.collection.mutable.Map[UUID, AtomicLong]]()
case class MAURecord(month: String, appId: UUID, userId: UUID) extends Serializable
case class DUMARecord(metricDate: Date, userId: UUID, metricName: String) extends Serializable
case class MAUAggregate(month: String, appId: UUID, total: Long) extends Serializable
private val mau = sc.cassandraTable[MAURecord]("analytics", "monthly_active_users")
.where("month = ?", metricMonth)
.collect()
mau.foreach { monthlyActiveUser =>
val duma = sc.cassandraTable[DUMARecord]("analytics", "daily_user_metric_aggregates")
.where("metric_date in ? and user_id = ? and metric_name in ?", dates, monthlyActiveUser.userId, metricName)
//.map(_.userId).distinct().collect()
.collect()
if (duma.length > 0) { // if user has `ms` for the given month
if (!appMauAggregate.isDefinedAt(mauMonth)) {
appMauAggregate += (mauMonth -> scala.collection.mutable.Map[UUID, AtomicLong]())
}
val monthMap: scala.collection.mutable.Map[UUID, AtomicLong] = appMauAggregate(mauMonth)
if (!monthMap.isDefinedAt(monthlyActiveUser.appId)) {
monthMap += (monthlyActiveUser.appId -> new AtomicLong(0))
}
monthMap(monthlyActiveUser.appId).incrementAndGet()
} else {
println(s"No message_sent in daily_user_metric_aggregates for user: $monthlyActiveUser")
}
}
for ((metricMonth: String, appMauCounts: scala.collection.mutable.Map[UUID, AtomicLong]) <- appMauAggregate) {
for ((appId: UUID, total: AtomicLong) <- appMauCounts) {
println(s"month: $metricMonth, app_id: $appId, total: $total");
val collection = sc.parallelize(Seq(MAUAggregate(metricMonth.substring(0, 7), appId, total.get())))
collection.saveToCassandra(KeySpace, MauTable, SomeColumns("month", "app_id", "total"))
}
}
sc.stop()
}
Thanks.

Your solution is the least efficient possible. You are performing a join by looking up each key one-by-one, avoiding any possible parallelization.
I've never used the Cassandra connector, but I understand it returns RDDs. So you could do this:
val mau: RDD[(UUID, MAURecord)] = sc
.cassandraTable[MAURecord]("analytics", "monthly_active_users")
.where("month = ?", metricMonth)
.map(u => u.userId -> u) // Key by user ID.
val duma: RDD[(UUID, DUMARecord)] = sc
.cassandraTable[DUMARecord]("analytics", "daily_user_metric_aggregates")
.where("metric_date in ? and metric_name in ?", dates, metricName)
.map(a => a.userId -> a) // Key by user ID.
// Count "duma" records per user. (countByKey is an action that returns a Map
// on the driver, so use reduceByKey to keep the result as a distributed RDD.)
val dumaCounts: RDD[(UUID, Long)] = duma.mapValues(_ => 1L).reduceByKey(_ + _)
// Join to "mau". This drops "mau" entries that have no count
// and "duma" entries that are not present in "mau".
val joined: RDD[(UUID, (MAURecord, Long))] = mau.join(dumaCounts)
// Get per-application counts.
val appCounts: RDD[(UUID, Long)] = joined
.map { case (userId, (mauRecord, count)) => mauRecord.appId -> 1L }
.reduceByKey(_ + _)
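From there, rather than collecting the results and calling sc.parallelize once per row as in the original loop, the per-app totals can be written back to Cassandra in a single distributed save. A minimal sketch, reusing the MAUAggregate case class, the metricMonth value, and the analytics.mau table from the question:
import com.datastax.spark.connector._

// Write all per-app totals in one pass; no driver-side loop is needed.
appCounts
  .map { case (appId, total) => MAUAggregate(metricMonth, appId, total) }
  .saveToCassandra("analytics", "mau", SomeColumns("month", "app_id", "total"))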

There is a parameter, spark.cassandra.connection.keep_alive_ms, which controls how long the connection is kept open after it is last used. Take a look at the documentation page.
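For example (a sketch; the value is in milliseconds and should be long enough to span the gap between successive actions):
val conf = new SparkConf()
  .setAppName("MonthlyActiveUserAggregate")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  // Reuse the Cassandra connection across Spark actions instead of
  // opening and closing it around every collect().
  .set("spark.cassandra.connection.keep_alive_ms", "60000")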
If you colocate the Spark workers with the Cassandra nodes, the connector will take advantage of this and create Spark partitions appropriately so that each executor always fetches data from its local node.
I can see some design improvements you can make in the DUMA table: metric_date does not seem to be the best choice for the partition key - consider making (user_id, metric_name) the partition key, because in that case you will not have to generate the list of dates for the query; you will just put user_id and metric_name in the where clause. Moreover, you can add a month identifier to the primary key - then each partition will include only the information related to what you want to fetch with each query (see the sketch below).
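A sketch of what the read could look like against such a redesigned table (assuming the (user_id, metric_name) partition key and an added month column as described above):
// One targeted read per user for the month, instead of an IN over every date.
sc.cassandraTable("analytics", "daily_user_metric_aggregates")
  .where("user_id = ? and metric_name in ? and month = ?",
    monthlyActiveUser.userId, metricName, metricMonth)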
Anyway, join functionality in the Spark-Cassandra-Connector is currently being implemented (see this ticket).

Related

Reading guarantees for full table scan while updating the table?

Given schema:
CREATE TABLE keyspace.table (
key text,
ckey text,
value text,
PRIMARY KEY (key, ckey)
)
...and Spark pseudocode:
val sc: SparkContext = ...
val connector: CassandraConnector = ...
sc.cassandraTable("keyspace", "table")
.mapPartitions { partition =>
connector.withSessionDo { session =>
partition.foreach { row =>
val key = row.getString("key")
val ckey = Random.nextString(42)
val value = row.getString("value")
session.execute(s"INSERT INTO keyspace.table (key, ckey, value)" +
" VALUES ($key, $ckey, $value)")
}
}
}
Is it possible for code like this to read a value it has just inserted within a single application (Spark job) run? A more general version of my question: can a token range scan CQL query read newly inserted values while iterating over rows?
Yes, it is possible, exactly as Alex wrote,
but I don't think it is possible with the above code.
Per the data model, the table is ordered by ckey in ascending order.
The funny part, however, is the page size and how many pages are prefetched; since this is 1000 rows by default (spark.cassandra.input.fetch.sizeInRows), the only way a problem could occur is if you used something bigger than 42 and/or the executor had not paged yet.
Also, I think you are using unnecessary nesting, so the code to achieve what you want can be simplified (after all, cassandraTable already gives you the rows).
(I hope I understand correctly that you want to read per partition (note that a partition in your case is all rows under one partition key, "key") and, for every row (distinguished by ckey) in that partition, generate a new one (with a new ckey) that just duplicates the value. The use case for such code is a mystery to me, but I hope it makes sense. :-))
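For example, one way to simplify and tighten the write path in the snippet above is to use foreachPartition (an action, so the writes actually run) and to prepare the INSERT once per partition, binding values instead of interpolating strings. A sketch, keeping the hypothetical keyspace.table and the sc/connector values from the question:
import scala.util.Random

sc.cassandraTable("keyspace", "table").foreachPartition { partition =>
  connector.withSessionDo { session =>
    // Prepare once per partition; bind per row.
    val insert = session.prepare(
      "INSERT INTO keyspace.table (key, ckey, value) VALUES (?, ?, ?)")
    partition.foreach { row =>
      session.execute(insert.bind(
        row.getString("key"), Random.nextString(42), row.getString("value")))
    }
  }
}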

How to implement Slowly Changing Dimensions (SCD2) Type 2 in Spark

We want to implement SCD2 in Spark using a SQL join. I got a reference from GitHub:
https://gist.github.com/rampage644/cc4659edd11d9a288c1b
but it's not very clear.
Can anybody provide an example or reference for implementing SCD2 in Spark?
Regards,
Manish
A little outdated in terms of newer Spark SQL, but here is an example I trialed a la Ralph Kimball using Spark SQL; it worked and is thus reliable. You can run it and it works, but file logic and such needs to be added - this is the body of the ETL SCD2 logic, based on 1.6 syntax but run in 2.x. It is not that hard, but you will need to generate test data and trace through each step:
Some pre-processing is required before the script runs: save a copy of the existing data and copy it to DIM_CUSTOMER_EXISTING.
Write the new output to DIM_CUSTOMER_NEW and then copy this to the target, DIM_CUSTOMER_1 or DIM_CUSTOMER_2.
The feed can also be re-created or LOAD OVERWRITE.
^^^ NEED SOME BETTER SCRIPTING AROUND THIS. ^^^ The Type 2 dimension holds only Type 2 values, not a mix of Type 1 & Type 2.
DUMPs that are accumulative can in fact be pre-processed to get the delta.
The use case assumes we can have N inputs for a person, each with a supplied date validity / extract date.
Originally SPARK 1.6 SQL based; not yet updated to SPARK 2.x SQL with nested correlated subquery support.
CUST_CODE cannot change; it is assumed to be a stable primary key.
This approach handles no input, delta input, same input, and all input, can catch up, and need not be run-date based.
^^^ Works best with deltas: if all data is passed and there is no change, you still have to make a dummy entry with all the same values, else there will be gaps in the key range,
which means you will not be able to link facts to dimensions in all cases. I.e. the discard logic works only in terms of a pure delta feed. All data can be passed, but only
the current delta is used. The problem becomes difficult if we must then look for changes over different rows and expand the date range, a little too complicated imho.
The dummy entries in the dimensions are not a huge issue. The problem is a little more difficult in such a less mutable environment; in KUDU it is easier to solve.
Ideally there would be some sort of preprocessor that checks which fields have changed and only then passes records on, but that may be a bridge too far.
HENCE IT IS NECESSARILY A COMPROMISE ALGORITHM. ^^^
No deletions are processed.
Multi-step processing is required for SQL in some cases. Gaps in key ranges are difficult to avoid with set processing.
No out-of-order processing; that would mean re-processing everything. This works on a whole date/day basis; if run more than once per day in batch, a timestamp would be needed instead.
0.1 Any difference analysis on existing dumps is only possible if the dumps are accumulative. If they are transactional deltas only, this is not required.
Care to be taken here.
0.2 If we want only the last update for a given date, do that here by partitioning, ranking, and filtering out.
These are all pre-processing steps, as is getting the dimension data from whichever table.
0.3 There is an issue with small files, but that is not an issue here at xxx. RAW usage only, as written to KUDU in a final step.
Actual coding:
import org.apache.spark.sql.SparkSession
val spark = SparkSession
.builder
.master("local") // Not a good idea
.appName("Type 2 dimension update")
.config("spark.sql.crossJoin.enabled", "true") // Needed to add this
.getOrCreate()
spark.sql("drop table if exists DIM_CUSTOMER_EXISTING")
spark.sql("drop table if exists DIM_CUSTOMER_NEW")
spark.sql("drop table if exists FEED_CUSTOMER")
spark.sql("drop table if exists DIM_CUSTOMER_TEMP")
spark.sql("drop table if exists DIM_CUSTOMER_WORK")
spark.sql("drop table if exists DIM_CUSTOMER_WORK_2")
spark.sql("drop table if exists DIM_CUSTOMER_WORK_3")
spark.sql("drop table if exists DIM_CUSTOMER_WORK_4")
spark.sql("create table DIM_CUSTOMER_EXISTING (DWH_KEY int, CUST_CODE String, CUST_NAME String, ADDRESS_CITY String, SALARY int, VALID_FROM_DT String, VALID_TO_DT String) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' STORED AS TEXTFILE LOCATION '/FileStore/tables/alhwkf661500326287094' ")
spark.sql("create table DIM_CUSTOMER_NEW (DWH_KEY int, CUST_CODE String, CUST_NAME String, ADDRESS_CITY String, SALARY int, VALID_FROM_DT String, VALID_TO_DT String) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' STORED AS TEXTFILE LOCATION '/FileStore/tables/DIM_CUSTOMER_NEW_3' ")
spark.sql("CREATE TABLE FEED_CUSTOMER (CUST_CODE String, CUST_NAME String, ADDRESS_CITY String, SALARY int, VALID_DT String) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' STORED AS TEXTFILE LOCATION '/FileStore/tables/mhiscfsv1500226290781' ")
// 1. Get the maximum key value in the dimension. This differs from the other RDD approach; issues in parallel? Maybe another way to do it! Check: we get a DF here, and this is the interchangeability point.
val max_val = spark.sql("select max(dwh_key) from DIM_CUSTOMER_EXISTING")
//max_val.show()
val null_count = max_val.filter("max(DWH_KEY) is null").count()
var max_Dim_Key = 0;
if ( null_count == 1 ) {
max_Dim_Key = 0
} else {
max_Dim_Key = max_val.head().getInt(0)
}
//2. Cannot do simple difference processing. The values of certain fields could be flip-flopping over time. A too simple MINUS will not work well. Need to process relative to
// youngest existing record etc. and roll the transactions forward. Hence we will not do any sort of difference analysis between new dimension data and existing dimension
// data in any way.
// DO NOTHING.
//3. Capture new stuff to be inserted.
// Some records for a given business key can be linea recta inserted as there have been no mutations to consider at all as there is nothing in current Staging. Does not mean
// delete.
// Also, the older mutations need not be re-processed, only the youngest! The younger one may need closing off or not, need to decide if it is now
// copied across or subject to updating in this cycle, depends on the requirements.
// Older mutations copied across immediately.
// DELTA not always strictly speaking needed, but common definitions. Some ranking required.
spark.sql("""insert into DIM_CUSTOMER_NEW select *
from DIM_CUSTOMER_EXISTING
where CUST_CODE not in (select distinct CUST_CODE FROM FEED_CUSTOMER) """) // This does not need RANKing, DWH Key retained.
spark.sql("""create table DIM_CUSTOMER_TEMP as select *, dense_rank() over (partition by CUST_CODE order by VALID_FROM_DT desc) as RANK
from DIM_CUSTOMER_EXISTING """)
spark.sql("""insert into DIM_CUSTOMER_NEW select DWH_KEY, CUST_CODE, CUST_NAME, ADDRESS_CITY, SALARY, VALID_FROM_DT, VALID_TO_DT
from DIM_CUSTOMER_TEMP
where CUST_CODE in (select distinct CUST_CODE from FEED_CUSTOMER)
and RANK <> 1 """)
// For updating the youngest record in terms of SCD, we use AND RANK <> 1 to filter these out here, as we want to close off the period in this record, but other younger
// records can be passed through immediately with their retained DWH Key.
//4. Combine Staging and those existing facts required. The result of this eventually will be stored in DIM_CUSTOMER_NEW which can be used for updating a final target.
// Issue here is that DWH Key not yet set and different columns. DWH key can be set last.
//4.1 Get records to process; they will have the status NEW.
spark.sql("""create table DIM_CUSTOMER_WORK (DWH_KEY int, CUST_CODE String, CUST_NAME String, ADDRESS_CITY String, SALARY int, VALID_FROM_DT String, VALID_TO_DT String, RECSTAT String) """)
spark.sql("""insert into DIM_CUSTOMER_WORK select 0, CUST_CODE, CUST_NAME, ADDRESS_CITY, SALARY, VALID_DT, '2099-12-31', "NEW"
from FEED_CUSTOMER """)
//4.2 Get youngest already existing dimension record to process in conjunction with newer values.
spark.sql("""insert into DIM_CUSTOMER_WORK select DWH_KEY, CUST_CODE, CUST_NAME, ADDRESS_CITY, SALARY, VALID_FROM_DT, VALID_TO_DT, "OLD"
from DIM_CUSTOMER_TEMP
where CUST_CODE in (select distinct CUST_CODE from FEED_CUSTOMER)
and RANK = 1 """)
// 5. ISSUE with first record in a set. It is not a delta or is used for making a delta, need to know what to do or bypass, depends on case.
// Here we are doing deltas, so first rec is a complete delta
// RECSTAT to be filtered out at end
// NEW, 1 = INSERT --> checked, is correct way, can do in others. No delta computation required
// OLD, 1 = DO NOTHING
// else do delta and INSERT
//5.1 RANK and JOIN to get before and after images in CDC format so that we can decide what needs to be closed off.
// Get the new DWH key values + offset, there may exist gaps eventually.
spark.sql(""" create table DIM_CUSTOMER_WORK_2 as select *, rank() over (partition by CUST_CODE order by VALID_FROM_DT asc) as rank FROM DIM_CUSTOMER_WORK """)
//DWH_KEY, CUST_CODE, CUST_NAME, BIRTH_CITY, SALARY,VALID_FROM_DT, VALID_TO_DT, "OLD"
spark.sql(""" create table DIM_CUSTOMER_WORK_3 as
select T1.DWH_KEY as T1_DWH_KEY, T1.CUST_CODE as T1_CUST_CODE, T1.rank as CURR_RANK, T2.rank as NEXT_RANK,
T1.VALID_FROM_DT as CURR_VALID_FROM_DT, T2.VALID_FROM_DT as NEXT_VALID_FROM_DT,
T1.VALID_TO_DT as CURR_VALID_TO_DT, T2.VALID_TO_DT as NEXT_VALID_TO_DT,
T1.CUST_NAME as CURR_CUST_NAME, T2.CUST_NAME as NEXT_CUST_NAME,
T1.SALARY as CURR_SALARY, T2.SALARY as NEXT_SALARY,
T1.ADDRESS_CITY as CURR_ADDRESS_CITY, T2.ADDRESS_CITY as NEXT_ADDRESS_CITY,
T1.RECSTAT as CURR_RECSTAT, T2.RECSTAT as NEXT_RECSTAT
from DIM_CUSTOMER_WORK_2 T1 LEFT OUTER JOIN DIM_CUSTOMER_WORK_2 T2
on T1.CUST_CODE = T2.CUST_CODE AND T2.rank = T1.rank + 1 """)
//5.2 Get the data for computing the new dimension surrogate DWH keys. Must execute a new query, or could use DFs and RDDs, but SPARK SQL was chosen as it is easier to follow.
spark.sql(s""" create table DIM_CUSTOMER_WORK_4 as
select *, row_number() OVER( ORDER BY T1_CUST_CODE) as ROW_NUMBER, '$max_Dim_Key' as DIM_OFFSET
from DIM_CUSTOMER_WORK_3 """)
//spark.sql("""SELECT * FROM DIM_CUSTOMER_WORK_4 """).show()
//Execute the above to see results, could not format here.
//5.3 Process accordingly and check if no change at all, if no change can get holes in the sequence numbers, that is not an issue. NB: NOT DOING THIS DUE TO COMPLICATIONS !!!
// See sample data above for decision-making on what to do. NOTE THE FACT THAT WE WOULD NEED A PRE-PROCESSOR TO CHECK IF A FIELD OF INTEREST ACTUALLY CHANGED
// to get the best result.
// We could elaborate and record via an extra step if there were only two records per business key and if all the current and only next record fields were all the same,
// we could disregard the first and the second record. Will attempt that later as an extra optimization. As soon as there are more than two here, then this scheme packs up
// Some effort still needed.
//5.3.1 Records that just need to be closed off. The previous version gets an appropriate DATE - 1. Dates must not overlap.
// No check on whether data changed or not due to issues above.
spark.sql("""insert into DIM_CUSTOMER_NEW select T1_DWH_KEY, T1_CUST_CODE, CURR_CUST_NAME, CURR_ADDRESS_CITY, CURR_SALARY,
CURR_VALID_FROM_DT, cast(date_sub(cast(NEXT_VALID_FROM_DT as DATE), 1) as STRING)
from DIM_CUSTOMER_WORK_4
where CURR_RECSTAT = 'OLD' """)
//5.3.2 Records that are the last in the sequence must have high end 2099-12-31 set, which has already been done.
// No check on whether data changed or not due to issues above.
spark.sql("""insert into DIM_CUSTOMER_NEW select ROW_NUMBER + DIM_OFFSET, T1_CUST_CODE, CURR_CUST_NAME, CURR_ADDRESS_CITY, CURR_SALARY,
CURR_VALID_FROM_DT, CURR_VALID_TO_DT
from DIM_CUSTOMER_WORK_4
where NEXT_RANK is null """)
//5.3.3
spark.sql("""insert into DIM_CUSTOMER_NEW select ROW_NUMBER + DIM_OFFSET, T1_CUST_CODE, CURR_CUST_NAME, CURR_ADDRESS_CITY, CURR_SALARY,
CURR_VALID_FROM_DT, cast(date_sub(cast(NEXT_VALID_FROM_DT as DATE), 1) as STRING)
from DIM_CUSTOMER_WORK_4
where CURR_RECSTAT = 'NEW'
and NEXT_RANK is not null""")
spark.sql("""SELECT * FROM DIM_CUSTOMER_NEW """).show()
// So, the question is if we could have done without JOINing and just sorted due to gap processing. This was derived off the delta processing but it turned out a little
// different.
// Well we did need the JOIN for next date at least, so if we add some optimization it still holds.
// My logic applied here per different steps, may well be less steps, left as is.
//6. The copy / insert to get a new big target table version and re-compile views. Outside of this actual processing. Logic performed elsewhere.
// NOTE: now that 2.x supports nested correlated sub-queries, this would need to be re-visited at a later point, but can leave as is.
// KUDU means no more restating.
Sample data so you know what to generate for the examples:
+-------+---------+----------------+------------+------+-------------+-----------+
|DWH_KEY|CUST_CODE| CUST_NAME|ADDRESS_CITY|SALARY|VALID_FROM_DT|VALID_TO_DT|
+-------+---------+----------------+------------+------+-------------+-----------+
| 230| E222222| Pete Saunders| Leeds| 75000| 2013-03-09| 2099-12-31|
| 400| A048901| John Alexander| Calgary| 22000| 2015-03-24| 2017-10-22|
| 402| A048901| John Alexander| Wellington| 47000| 2017-10-23| 2099-12-31|
| 403| B787555| Mark de Wit|Johannesburg| 49500| 2017-10-02| 2099-12-31|
| 406| C999666| Daya Dumar| Mumbai| 50000| 2016-12-16| 2099-12-31|
| 404| C999666| Daya Dumar| Mumbai| 49000| 2016-11-11| 2016-12-14|
| 405| C999666| Daya Dumar| Mumbai| 50000| 2016-12-15| 2016-12-15|
| 300| A048901| John Alexander| Calgary| 15000| 2014-03-24| 2015-03-23|
+-------+---------+----------------+------------+------+-------------+-----------+
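If you want to load a couple of the sample rows above for a dry run, a quick way is an INSERT ... VALUES against the table created in the script (a sketch, using the first two rows):
// Load two of the sample rows shown above into the existing-dimension table.
spark.sql("""INSERT INTO DIM_CUSTOMER_EXISTING VALUES
  (230, 'E222222', 'Pete Saunders', 'Leeds', 75000, '2013-03-09', '2099-12-31'),
  (400, 'A048901', 'John Alexander', 'Calgary', 22000, '2015-03-24', '2017-10-22')""")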
Here's a detailed implementation of slowly changing dimension type 2 in Spark (DataFrame and SQL) using an exclusive-join approach.
It assumes that the source is sending a complete data file, i.e. old, updated, and new records.
Steps:
Load the recent file data to STG table
Select all the expired records from HIST table
1. select * from HIST_TAB where exp_dt != '2099-12-31'
Select all the records which are not changed, from STG and HIST, using an inner join and a filter on HIST.column = STG.column, as below:
2. select hist.* from HIST_TAB hist inner join STG_TAB stg on hist.key = stg.key where hist.column = stg.column
Select all the new and updated (i.e. changed) records from STG_TAB using an exclusive left join with HIST_TAB, and set the effective and expiry dates as below:
3. select stg.*, eff_dt (yyyy-MM-dd), exp_dt (2099-12-31) from STG_TAB stg left join (select * from HIST_TAB where exp_dt = '2099-12-31') hist
on hist.key = stg.key where hist.key is null or hist.column != stg.column
Select all updated old records from the HIST table using an exclusive left join with the STG table and set their expiry date as shown below:
4. select hist.*, exp_dt(yyyy-MM-dd) from (select * from HIST_TAB where exp_dt = '2099-12-31') hist left join STG_TAB stg
on hist.key= stg.key where hist.key is null or hist.column!= stg.column
Union all the queries from 1-4 and insert-overwrite the result into the HIST table (see the sketch below).
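A minimal Spark SQL sketch of that final union-and-overwrite step, assuming queries 1-4 have been registered as temporary views q1..q4 and that HIST_TAB is the history table named above:
// Union the four result sets and rebuild the history table in one shot.
val scd2 = spark.sql("""
  SELECT * FROM q1
  UNION ALL SELECT * FROM q2
  UNION ALL SELECT * FROM q3
  UNION ALL SELECT * FROM q4
""")
scd2.createOrReplaceTempView("scd2_result")
spark.sql("INSERT OVERWRITE TABLE HIST_TAB SELECT * FROM scd2_result")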
A more detailed implementation of SCD Type 2 in Scala and PySpark can be found here:
https://github.com/sahilbhange/spark-slowly-changing-dimension
Hope this helps!
Scala Spark: https://georgheiler.com/2020/11/19/sparkling-scd2/
NOTICE: this is not a full SCD2 - it assumes one table of events and it determines/deduplicates valid_from/valid_to from them, i.e. no merge/upsert is implemented.
val df = Seq(("k1","foo", "2020-01-01"), ("k1","foo", "2020-02-01"), ("k1","baz", "2020-02-01"),
("k2","foo", "2019-01-01"), ("k2","foo", "2019-02-01"), ("k2","baz", "2019-02-01")).toDF("key", "value_1", "date").withColumn("date", to_date(col("date")))
df.show
+---+-------+----------+
|key|value_1| date|
+---+-------+----------+
| k1| foo|2020-01-01|
| k1| foo|2020-02-01|
| k1| baz|2020-02-01|
| k2| foo|2019-01-01|
| k2| foo|2019-02-01|
| k2| baz|2019-02-01|
+---+-------+----------+
df.printSchema
root
|-- key: string (nullable = true)
|-- value_1: string (nullable = true)
|-- date: date (nullable = true)
df.transform(deduplicateScd2(Seq("key"), Seq("date"), "date", Seq())).show
+---+-------+----------+----------+
|key|value_1|valid_from| valid_to|
+---+-------+----------+----------+
| k1| foo|2020-01-01|2020-02-01|
| k1| baz|2020-02-01|2020-11-18|
| k2| foo|2019-01-01|2019-02-01|
| k2| baz|2019-02-01|2020-11-18|
+---+-------+----------+----------+
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.functions.lag
import org.apache.spark.sql.functions.lead
import org.apache.spark.sql.functions.when
import org.apache.spark.sql.functions.current_date
def deduplicateScd2(
key: Seq[String],
sortChangingIgnored: Seq[String],
timeColumn: String,
columnsToIgnore: Seq[String]
)(df: DataFrame): DataFrame = {
val windowPrimaryKey = Window
.partitionBy(key.map(col): _*)
.orderBy(sortChangingIgnored.map(col): _*)
val columnsToCompare =
df.drop(key ++ sortChangingIgnored: _*).drop(columnsToIgnore: _*).columns
val nextDataChange = lead(timeColumn, 1).over(windowPrimaryKey)
val deduplicated = df
.withColumn(
"data_changes_start",
columnsToCompare
.map(e => {
val previous = lag(col(e), 1).over(windowPrimaryKey)
val self = col(e)
// 3 cases: 1.: start (previous is NULL), 2: in between, try to collapse 3: end (= next is null)
// first, filter to only start & end events (= updates/invalidations of records)
//self =!= previous or self =!= next or previous.isNull or next.isNull
self =!= previous or previous.isNull
})
.reduce(_ or _)
)
.withColumn(
"data_changes_end",
columnsToCompare
.map(e => {
val next = lead(col(e), 1).over(windowPrimaryKey)
val self = col(e)
// 3 cases: 1.: start (previous is NULL), 2: in between, try to collapse 3: end (= next is null)
// first, filter to only start & end events (= updates/invalidations of records)
self =!= next or next.isNull
})
.reduce(_ or _)
)
.filter(col("data_changes_start") or col("data_changes_end"))
.drop("data_changes")
deduplicated //.withColumn("valid_to", nextDataChange)
.withColumn(
"valid_to",
when(col("data_changes_end") === true, col(timeColumn))
.otherwise(nextDataChange)
)
.filter(col("data_changes_start") === true)
.withColumn(
"valid_to",
when(nextDataChange.isNull, current_date()).otherwise(col("valid_to"))
)
.withColumnRenamed(timeColumn, "valid_from")
.drop("data_changes_end", "data_changes_start")
}
Here is an updated answer with MERGE.
Note it will not work with Spark Structured Streaming, but can be used with Spark Kafka Batch Integration.
// 0. Standard, start of program.
// Handles multiple business keys in a single run. DELTA tables.
// Schema evolution also handled.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
val spark = SparkSession.builder
.master("local") // Not realistic
.appName("REF Zone History stuff and processing")
.enableHiveSupport() // Standard in Databricks.
.getOrCreate()
// 1. Read newer data to process in some way. Create tempView.
// In general we should have few rows to process, i.e. not at scale.
val dfA = spark.read.option("multiLine",false).json("/FileStore/tables/new_customers_json_multiple_alt3.txt") // New feed.
dfA.createOrReplaceTempView("newFeed")
// 2. First create the target for data at rest if it does not exist. Add an ASC col_key. Should only occur once.
val save_path = "/some_loc_fix/ref/atRest/data" // Make dynamic.
val table_name = "CUSTOMERS_AT_REST"
spark.sql("CREATE TABLE IF NOT EXISTS " + table_name + " LOCATION '" + save_path + "'" + " AS SELECT * from newFeed WHERE 1 = 0 " ) // Can also use limit 0 instead of WHERE 1 = 0.
// Add an ASC col_key column if it does not exist.
// I have in input valid_from_dt, but it could be different so we would need to add in reality as well. Mark to decide.
try {
spark.sql("ALTER TABLE " + table_name + " ADD COLUMNS (col_key BIGINT FIRST, valid_to_dt STRING) ")
} catch {
case unknown: Exception => {
None
}
}
// 3. Get maximum value for target. This is a necessity.
val max_val = spark.sql("select max(col_key) from " + table_name)
//max_val.show()
val null_count = max_val.filter("max(col_key) is null").count()
var max_Col_Key: BigInt = 0;
if ( null_count == 1 ) {
max_Col_Key = 0
} else {
max_Col_Key = max_val.head().getLong(0) // Long and BIGINT interoperable.
}
// 4.1 Create a temporary table for getting the youngest records from the existing data. table_name as variable, newFeed tempView as string. Then apply processing.
val dfB = spark.sql(" select O.* from (select A.cust_code, max(A.col_key) as max_col_key from " + table_name + " A where A.cust_code in (select B.cust_code from newFeed B ) group by A.cust_code ) Z, " + table_name + " O where O.col_key = Z.max_col_key ") // Most recent records.
// No tempView required.
// 4.2 Get the set of data to actually process. New feed + youngest records in feed.
val dfC =dfA.unionByName(dfB, true)
dfC.createOrReplaceTempView("cusToProcess")
// 4.3 RANK
val df1 = spark.sql("""select *, dense_rank() over (partition by CUST_CODE order by VALID_FROM_DT desc) as RANK from CusToProcess """)
df1.createOrReplaceTempView("CusToProcess2")
// 4.4 JOIN adjacent records & process closing off dates etc.
val df2 = spark.sql("""select A.*, B.rank as B_rank, cast(date_sub(cast(B.valid_from_dt as DATE), 1) as STRING) as untilMinus1
from CusToProcess2 A LEFT OUTER JOIN CusToProcess2 B
on A.cust_code = B.cust_code and A.RANK = B.RANK + 1 """)
val df3 = df2.drop("valid_to_dt").withColumn("valid_to_dt", $"untilMinus1").drop("untilMinus1").drop("B_rank")
val df4 = df3.withColumn("valid_to_dt", when($"valid_to_dt".isNull, lit("2099-12-31")).otherwise($"valid_to_dt")).drop("RANK")
df4.createOrReplaceTempView("CusToProcess3")
val df5 = spark.sql(s""" select *, row_number() OVER( ORDER BY cust_code ASC, valid_from_dt ASC) as ROW_NUMBER, '$max_Col_Key' as col_OFFSET
from CusToProcess3 """)
// Add new ASC col_key, gaps can result, not an issue must always be ascending.
val df6 = df5.withColumn("col_key", when($"col_key".isNull, ($"ROW_NUMBER" + $"col_OFFSET")).otherwise($"col_key"))
val df7 = df6.withColumn("col_key", col("col_key").cast(LongType)).drop("ROW_NUMBER").drop("col_OFFSET")
// 5. ACTUAL MERGE, is very simple.
// More than one Merge key possible? Need then to have a col_key if only one such possible.
df7.createOrReplaceTempView("CUST_DELTA")
spark.sql("SET spark.databricks.delta.schema.autoMerge.enabled = true")
spark.sql(""" MERGE INTO CUSTOMERS_AT_REST
USING CUST_DELTA
ON CUSTOMERS_AT_REST.col_key = CUST_DELTA.col_key
WHEN MATCHED THEN
UPDATE SET *
WHEN NOT MATCHED THEN
INSERT *
""")

How to get a Cassandra CQL string given an Apache Spark Dataframe in 2.2.0?

I am trying to get a CQL string given a DataFrame. I came across this function,
where I can do something like this:
TableDef.fromDataFrame(df, "test", "hello", ProtocolVersion.NEWEST_SUPPORTED).cql()
It looks to me like the library uses the first column as the partition key and does not care about the clustering key, so how do I specify a particular set of DataFrame columns as the partition key and a particular set of columns as the clustering key?
It looks like I can create a new TableDef; however, I have to do the entire mapping by myself, and in some cases the necessary types like ColumnType are not accessible in Java. For example, I tried to create a new ColumnDef like below:
new ColumnDef("col5", new PartitionKeyColumn(), ColumnType is not accessible in Java)
Objective: to get a CQL CREATE statement from a Spark DataFrame.
Input: my DataFrame can have any number of columns with their respective Spark types. Say I have a Spark DataFrame with 100 columns, where col8 and col9 correspond to the Cassandra partition key columns and col10 corresponds to the Cassandra clustering key column:
col1| col2| ...|col100
Now I want to use the spark-cassandra-connector library to give me a CQL create table statement given the info above.
Desired output:
create table if not exists test.hello (
col1 bigint, (whatever column1 type is from my dataframe I just picked bigint randomly)
col2 varchar,
col3 double,
...
...
col100 bigint,
primary key(col8,col9)
) WITH CLUSTERING ORDER BY (col10 DESC);
Because the required components (PartitionKeyColumn & the instances of ColumnType) are objects in Scala, you need to use the following syntax to access their instances:
// imports
import com.datastax.spark.connector.cql.ColumnDef;
import com.datastax.spark.connector.cql.PartitionKeyColumn$;
import com.datastax.spark.connector.types.TextType$;
// actual code
ColumnDef a = new ColumnDef("col5",
PartitionKeyColumn$.MODULE$, TextType$.MODULE$);
See the code for ColumnRole & PrimitiveTypes to find the full list of names of objects/classes.
Update after additional requirements: the code is lengthy, but should work...
SparkSession spark = SparkSession.builder()
.appName("Java Spark SQL example").getOrCreate();
Set<String> partitionKeys = new TreeSet<String>() {{
add("col1");
add("col2");
}};
Map<String, Integer> clusteringKeys = new TreeMap<String, Integer>() {{
put("col8", 0);
put("col9", 1);
}};
Dataset<Row> df = spark.read().json("my-test-file.json");
TableDef td = TableDef.fromDataFrame(df, "test", "hello",
ProtocolVersion.NEWEST_SUPPORTED);
List<ColumnDef> partKeyList = new ArrayList<ColumnDef>();
List<ColumnDef> clusterColumnList = new ArrayList<ColumnDef>();
List<ColumnDef> regColumnList = new ArrayList<ColumnDef>();
scala.collection.Iterator<ColumnDef> iter = td.allColumns().iterator();
while (iter.hasNext()) {
ColumnDef col = iter.next();
String colName = col.columnName();
if (partitionKeys.contains(colName)) {
partKeyList.add(new ColumnDef(colName,
PartitionKeyColumn$.MODULE$, col.columnType()));
} else if (clusteringKeys.containsKey(colName)) {
int idx = clusteringKeys.get(colName);
clusterColumnList.add(new ColumnDef(colName,
new ClusteringColumn(idx), col.columnType()));
} else {
regColumnList.add(new ColumnDef(colName,
RegularColumn$.MODULE$, col.columnType()));
}
}
// Convert the Java lists to Scala Seqs (a plain cast would fail at runtime).
TableDef newTd = new TableDef(td.keyspaceName(), td.tableName(),
scala.collection.JavaConverters.asScalaBufferConverter(partKeyList).asScala(),
scala.collection.JavaConverters.asScalaBufferConverter(clusterColumnList).asScala(),
scala.collection.JavaConverters.asScalaBufferConverter(regColumnList).asScala(),
td.indexes(), td.isView());
String cql = newTd.cql();
System.out.println(cql);

C* may lose an update after INSERT IF NOT EXISTS

I have the following code:
def main(args: Array[String]): Unit = {
val cluster = Cluster.builder()
.addContactPoint("localhost")
.withPort(9042)
.build()
val session = cluster.connect()
try {
session.execute(s"CREATE KEYSPACE demoks WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1}")
} catch {
case _: AlreadyExistsException =>
}
session.execute(s"USE demoks")
session.execute("DROP table IF EXISTS demo")
session.execute( """
| CREATE TABLE IF NOT EXISTS demo (
| id text,
| data1 map<text, text>,
| data2 map<text, text>,
| PRIMARY KEY (id)
| ) WITH
| compaction = {'class': 'LeveledCompactionStrategy'}
| AND
| compression = { 'sstable_compression' : 'SnappyCompressor' };
""".stripMargin).one()
val p1 = session.prepare("UPDATE demo SET data1[?]=?, data2[?] = ? WHERE id=?")
val p3 = session.prepare("INSERT INTO demo (id,data1) VALUES (?,?) IF NOT EXISTS")
import scala.collection.JavaConverters._
val id2 = "id2-"+System.nanoTime()
session.execute(p3.bind(id2, Map("key" -> "value1-q1").asJava))
session.execute(p1.bind("key", "value1-q2", "key", "value2-q2", id2))
System.exit(0)
}
After execution of this snippet I'm just doing select * from demo; in cqlsh:
Usually the result is correct and expected:
cqlsh:demoks> select * from demo;
id | data1 | data2
--------------------+----------------------+----------------------
id2-61510117409472 | {'key': 'value1-q2'} | {'key': 'value2-q2'}
(1 rows)
But sometimes it may be different. It looks like the queries have been reordered and IF NOT EXISTS was not triggered:
cqlsh:demoks> select * from demo;
id | data1 | data2
--------------------+----------------------+----------------------
id2-61522373234949 | {'key': 'value1-q1'} | {'key': 'value2-q2'}
(1 rows)
Could anybody explain this behavior to me?
It's Cassandra 3.7 running in Docker on a Windows machine. I cannot reproduce this behavior under Linux or Mac, either on the same machine or on any other machine; I tried both Docker and bare installations. Moreover, I cannot reproduce it even with a bare installation on the same (Windows) machine.
Basically, there are no guarantees about the order in which the insert/upsert statements you provide will end up in the cluster. Most of the time you will get the behaviour you expect, but not always. This depends on several factors: which coordinator you are hitting and what the time is there.
You have two options: set the timestamp yourself on the statements you produce by using "USING TIMESTAMP", or use batches so that you guarantee the order of execution of your statements:
// your code as before (plus import com.datastax.driver.core.BatchStatement), then:
val batch = new BatchStatement(BatchStatement.Type.UNLOGGED)
batch.add(p3.bind(id2, Map("key" -> "value1-q1").asJava))
batch.add(p1.bind("key", "value1-q2", "key", "value2-q2", id2))
// now execute the batch instead of the two separate executes
session.execute(batch)

Having trouble querying by dates using the Java Cassandra Spark SQL Connector

I'm trying to use Spark SQL to query a table by a date range. For example, I'm trying to run an SQL statement like: SELECT * FROM trip WHERE utc_startdate >= '2015-01-01' AND utc_startdate <= '2015-12-31' AND deployment_id = 1 AND device_id = 1. When I run the query, no error is thrown, but I'm not receiving any results back when I would expect some. When running the query without the date range I do get results back.
SparkConf sparkConf = new SparkConf().setMaster("local").setAppName("SparkTest")
.set("spark.executor.memory", "1g")
.set("spark.cassandra.connection.host", "localhost")
.set("spark.cassandra.connection.native.port", "9042")
.set("spark.cassandra.connection.rpc.port", "9160");
JavaSparkContext context = new JavaSparkContext(sparkConf);
JavaCassandraSQLContext sqlContext = new JavaCassandraSQLContext(context);
sqlContext.sqlContext().setKeyspace("mykeyspace");
String sql = "SELECT * FROM trip WHERE utc_startdate >= '2015-01-01' AND utc_startdate < '2015-12-31' AND deployment_id = 1 AND device_id = 1";
JavaSchemaRDD rdd = sqlContext.sql(sql);
List<Row> rows = rdd.collect(); // rows.size() is zero when I would expect it to contain numerous rows.
Schema:
CREATE TABLE trip (
device_id bigint,
deployment_id bigint,
utc_startdate timestamp,
other columns....
PRIMARY KEY ((device_id, deployment_id), utc_startdate)
) WITH CLUSTERING ORDER BY (utc_startdate ASC);
Any help would be greatly appreciated.
What does your table schema (in particular, your PRIMARY KEY definition) look like? Even without seeing it, I am fairly certain that you are seeing this behavior because you are not qualifying your query with a partition key. Using the ALLOW FILTERING directive will filter the rows by date (assuming that is your clustering key), but that is not a good solution for a big cluster or large dataset.
Let's say that you are querying users in a certain geographic region. If you used region as a partition key, you could run this query, and it would work:
SELECT * FROM users
WHERE region='California'
AND date >= '2015-01-01' AND date <= '2015-12-31';
Give Patrick McFadin's article on Getting Started with Timeseries Data a read. That has some good examples that should help you.
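Alternatively, with the spark-cassandra-connector RDD API (a Scala sketch, assuming a Scala SparkContext sc; this is a different route than CassandraSQLContext), both partition key columns can be restricted with equality and the date range pushed down on the clustering column:
import java.text.SimpleDateFormat
import com.datastax.spark.connector._

// Equality on the full partition key plus a range on the clustering column;
// these predicates are pushed down to Cassandra.
val fmt = new SimpleDateFormat("yyyy-MM-dd")
val trips = sc.cassandraTable("mykeyspace", "trip")
  .where("device_id = ? and deployment_id = ? and utc_startdate >= ? and utc_startdate < ?",
    1L, 1L, fmt.parse("2015-01-01"), fmt.parse("2015-12-31"))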
