How does a Spark worker distribute load in a Cassandra cluster? - cassandra

I am trying to understand how Cassandra and Spark work together, especially when
the data is distributed across nodes.
I have a Cassandra+Spark setup on a two-node cluster using DSE.
The schema is
CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE bar (
    customer text,
    start timestamp,
    offset bigint,
    data blob,
    PRIMARY KEY ((customer, start), offset)
);
I populated the table with a huge set of test data and later figured out which keys
lie on which nodes with the help of the "nodetool getendpoints" command.
For example, in my case a particular customer's data with date '2014-05-25' is on
node1 and '2014-05-26' is on node2.
When I run the following query from the Spark shell, I see that the Spark worker on
node1 runs the task during the mapPartitions stage.
csc.setKeyspace("foo")
val query = "SELECT cl_ap_mac_address FROM bar WHERE customer='test' AND start IN ('2014-05-25')"
val srdd = csc.sql(query)
srdd.count()
and for the following query, the Spark worker on node2 runs the task.
csc.setKeyspace("foo")
val query = "SELECT cl_ap_mac_address FROM bar WHERE customer='test' AND start IN ('2014-05-26')"
val srdd = csc.sql(query)
srdd.count()
But when I pass both dates, only one node's worker is utilized.
csc.setKeyspace("foo")
val query = "SELECT cl_ap_mac_address FROM bar WHERE customer='test' AND start IN ('2014-05-25', '2014-05-26')"
val srdd = csc.sql(query)
srdd.count()
I was expecting this to use both nodes in parallel during the
mapPartitions stage. Am I missing something?

I think you are trying to understand the interaction between Spark and Cassandra as well as the data distribution in Cassandra.
Basically, the Spark application sends its request to one of the Cassandra nodes, which acts as the coordinator for that particular client request. More details can be found here.
Data partitioning and replication are then taken care of by Cassandra itself.
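As an aside (not part of the original answer): one way to get both nodes working on the combined read is to skip the single IN query and instead push the partition keys through the connector's RDD join. A rough Scala sketch under the question's schema, as an assumption about how the read could be restructured:
import com.datastax.spark.connector._
import java.sql.Timestamp

// One entry per (customer, start) partition that the job needs to read.
val keys = sc.parallelize(Seq(
  ("test", Timestamp.valueOf("2014-05-25 00:00:00")),
  ("test", Timestamp.valueOf("2014-05-26 00:00:00"))
))

// Issues one targeted query per partition key instead of a single
// coordinator-driven IN, so the reads can land on different replicas.
val rows = keys.joinWithCassandraTable("foo", "bar")
rows.count()

// Optionally co-locate the key RDD with the owning replicas first:
// keys.repartitionByCassandraReplica("foo", "bar").joinWithCassandraTable("foo", "bar")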

Related

SQL query taking too long in Azure Databricks

I want to execute a SQL query on a database in an Azure SQL Managed Instance using Azure Databricks. I have connected to the DB using the Spark connector.
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
val config = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "queryCustom" -> "SELECT TOP 100 * FROM dbo.Clients WHERE PostalCode = 98074", // SQL query
  "user" -> "username",
  "password" -> "*********"
))
//Read all data in table dbo.Clients
val collection = sqlContext.read.sqlDB(config)
collection.show()
I am using the above method to fetch the data (example from the MSFT docs). Table sizes are over 10M rows in my case. My question is: how does Databricks process the query here?
Below is the documentation:
The Spark master node connects to databases in SQL Database or SQL Server and loads data from a specific table or using a specific SQL query.
The Spark master node distributes data to worker nodes for transformation.
The Worker node connects to databases that connect to SQL Database and SQL Server and writes data to the database. User can choose to use row-by-row insertion or bulk insert.
It says the master node fetches the data and later distributes the work to the worker nodes. In the above code, what if the query itself is complex and takes a long time to fetch? Does Spark spread that work across the worker nodes, or do I have to load the table data into Spark first and then run the SQL query to get the result? Which method do you suggest?
The above method uses a single JDBC connection to pull the table into the Spark environment.
If you want to push a predicate down to the database, you can wrap the query as a derived-table alias and pass it as the table, like this:
val pushdown_query = "(select * from employees where emp_no < 10008) emp_alias"
val df = spark.read.jdbc(url = jdbcUrl, table = pushdown_query, properties = connectionProperties)
display(df)
If you want to improve performance, you need to manage parallelism while reading.
You can provide split boundaries based on the values of one of the dataset's columns.
These options specify the parallelism on read. These options must all be specified if any of them is specified. lowerBound and upperBound decide the partition stride, but do not filter the rows in table. Therefore, Spark partitions and returns all rows in the table.
The following example splits the table read across executors on the emp_no column using the columnName, lowerBound, upperBound, and numPartitions parameters.
val df = spark.read.jdbc(
  url = jdbcUrl,
  table = "employees",
  columnName = "emp_no",
  lowerBound = 1L,
  upperBound = 100000L,
  numPartitions = 100,
  connectionProperties = connectionProperties)
display(df)
For more details, use this link.

What is the best approach to join data in a Spark streaming application?

Question: essentially, rather than running a join against the C* table for each streaming record, is there any way to run a join per micro-batch of records in Spark Streaming?
We have almost finalized on spark-sql 2.4.x and the datastax-spark-cassandra-connector for Cassandra 3.x,
but have one fundamental question regarding efficiency in the scenario below.
For the streaming data records (i.e. streamingDataSet), I need to look up existing records (i.e. cassandraDataset) from the Cassandra (C*) table.
i.e.
Dataset<Row> streamingDataSet = //kafka read dataset
Dataset<Row> cassandraDataset= //loaded from C* table those records loaded earlier from above.
To look up data I need to join the above datasets,
i.e.
Dataset<Row> joinDataSet = streamingDataSet.join(cassandraDataset).where(/* some logic */);
and then process joinDataSet further to implement the business logic...
In the above scenario, my understanding is that for each record received
from the Kafka stream it would query the C* table, i.e. a database call.
Doesn't that take a huge amount of time and network bandwidth if the C*
table contains billions of records? What approach should be followed
to make the C* table lookup more efficient?
What is the best solution in this scenario? I CANNOT load from the
C* table once and reuse it for lookups, as data keeps being added to the C* table, i.e.
new lookups might need newly persisted data.
How should this kind of scenario be handled? Any advice please.
If you're using Apache Cassandra, then you have only one possibility for an effective join with data in Cassandra: the RDD API's joinWithCassandraTable. The open source version of the Spark Cassandra Connector (SCC) supports only that, while the version for DSE contains code that allows an effective join against Cassandra from Spark SQL as well, the so-called DSE Direct Join. If you use a Spark SQL join against a Cassandra table, Spark will need to read all the data from Cassandra and then perform the join, which is very slow.
I don't have an example of doing the join with the OSS SCC from Spark Structured Streaming, but I have some examples of a "normal" join, like this:
// Static helpers (someColumns, mapRowToTuple, mapTupleToRow) come from the connector's Java API.
import static com.datastax.spark.connector.japi.CassandraJavaUtil.*;

// trdd holds the Tuple1<Integer> keys to look up in the test.jtest table.
CassandraJavaPairRDD<Tuple1<Integer>, Tuple2<Integer, String>> joinedRDD =
    trdd.joinWithCassandraTable("test", "jtest",
        someColumns("id", "v"), someColumns("id"),
        mapRowToTuple(Integer.class, String.class), mapTupleToRow(Integer.class));
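For the Structured Streaming case, a rough Scala sketch of one way to do the lookup per micro-batch with the OSS connector is below; the keyspace/table ("test"/"jtest") and the "id" column are placeholders, and this only illustrates the foreachBatch + joinWithCassandraTable combination rather than code from the original answer:
import org.apache.spark.sql.DataFrame
import com.datastax.spark.connector._   // adds joinWithCassandraTable to RDDs

streamingDataSet.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    // Join only the keys present in this micro-batch against C*,
    // instead of scanning the whole Cassandra table.
    val matched = batch
      .select("id")
      .rdd
      .map(r => Tuple1(r.getInt(0)))
      .joinWithCassandraTable("test", "jtest")
    matched.foreach { case (Tuple1(id), cassandraRow) =>
      // ...business logic per matched record...
    }
  }
  .start()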

How to parallelize RDD work when using the Cassandra Spark connector for data aggregation?

Here is the sample scenario: we have real-time data records in Cassandra, and we want to aggregate the data over different time ranges. The code I wrote is like below:
val timeRanges = getTimeRanges(report)
timeRanges.foreach { timeRange =>
  val (timestampStart, timestampEnd) = timeRange
  val query = _sc.get.cassandraTable(report.keyspace, utilities.Helper.makeStringValid(report.scope))
    .where("TIMESTAMP > ?", timestampStart)
    .where("VALID_TIMESTAMP <= ?", timestampEnd)
  // ...do the aggregation work...
}
The issue with this code is that the aggregation work for the time ranges does not run in parallel. My question is: how can I parallelize the aggregation work, given that an RDD can't be used inside another RDD or a Future? Is there any way to parallelize the work, or can't we use the Spark connector here?
Use the joinWithCassandraTable function. This allows you to use the data from one RDD to access C* and pull records just like in your example.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md#performing-efficient-joins-with-cassandra-tables-since-12
joinWithCassandraTable utilizes the Java driver to execute a single
query for every partition required by the source RDD so no unneeded
data will be requested or serialized. This means a join between any
RDD and a Cassandra Table can be performed without doing a full table
scan. When performed between two Cassandra Tables which share the
same partition key this will not require movement of data between
machines. In all cases this method will use the source RDD's
partitioning and placement for data locality.
Finally, we used union to combine the per-range RDDs so that they run in parallel.
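For illustration, a minimal sketch of that union approach, reusing the names from the question (how getTimeRanges, _sc and report are defined is assumed from the question's code, not restated here):
import com.datastax.spark.connector._

// Build one (lazy) table-scan RDD per time range, then union them so Spark
// can schedule the scans and the aggregation across all ranges in parallel.
val perRangeRdds = timeRanges.map { case (timestampStart, timestampEnd) =>
  _sc.get.cassandraTable(report.keyspace, utilities.Helper.makeStringValid(report.scope))
    .where("TIMESTAMP > ?", timestampStart)
    .where("VALID_TIMESTAMP <= ?", timestampEnd)
}
val allRanges = _sc.get.union(perRangeRdds)
// ...do the aggregation work on allRanges...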

Spark Cassandra Iterative Query

I am applying the following through the Spark Cassandra Connector:
val links = sc.textFile("linksIDs.txt")
links.map { link_id =>
  val link_speed_records = sc.cassandraTable[Double]("freeway", "records").select("speed").where("link_id = ?", link_id)
  val average = link_speed_records.mean()
}
I would like to ask if there is a way to apply the above sequence of queries more efficiently, given that the only parameter I ever change is 'link_id'.
The 'link_id' value is the only partition key of my Cassandra 'records' table.
I am using Cassandra v2.0.13, Spark v1.2.1 and Spark-Cassandra Connector v1.2.1.
I was wondering whether it is possible to open a Cassandra session in order to apply those queries and still get the 'link_speed_records' back as a Spark RDD.
Use the joinWithCassandraTable method to use an RDD of keys to pull data out of a Cassandra table. The method given in the question will be extremely expensive by comparison and also will not function well as a parallelizable request.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md#performing-efficient-joins-with-cassandra-tables-since-12
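For illustration, a rough sketch of that approach using the column names from the question; the aggregation at the end is just one way to compute the per-link average and is an assumption, not part of the original answer:
import org.apache.spark.SparkContext._   // pair-RDD implicits on older Spark versions
import com.datastax.spark.connector._

// One Tuple1 per partition key to look up.
val links = sc.textFile("linksIDs.txt").map(link_id => Tuple1(link_id))

// Pulls only the partitions named in `links`, in parallel, without a full table scan.
val linkSpeeds = links.joinWithCassandraTable(
  "freeway", "records", selectedColumns = SomeColumns("speed"))

// Average speed per link_id.
val avgByLink = linkSpeeds
  .map { case (Tuple1(linkId), row) => (linkId, (row.getDouble("speed"), 1L)) }
  .reduceByKey { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }
  .mapValues { case (sum, count) => sum / count }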

Does Spark from DSE load all data into an RDD before running a SQL query?

Running DSE 4.7.
Say I have a 4-node DSE Cassandra/Spark cluster...
I have a Cassandra table with, say, 4,000,000 records in it.
On Spark I run the following Spark SQL: "select * from table where email = ? or mobile = ?"
Will Spark load all the data into an RDD and then filter based on the where clause? Will each Spark node have 1,000,000 records loaded into memory?
Will Spark load all the data into an RDD and then filter based on the where clause?
It depends on your database schema. If your query explicitly restricts the scan to a single C* partition (and yours, where email = ? or mobile = ?, definitely does not), Spark will load only part of the data.
In your case it will have to scan all the data.
Will each Spark node have 1,000,000 records loaded into memory?
Again, it depends on your dataset size and the amount of RAM on the worker nodes. Spark RDDs are not always fully loaded into RAM; in your case the data can be split into smaller parts (e.g. 100k rows), loaded into RAM, filtered according to your query, and saved after that, one by one.
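For illustration, a rough sketch of the difference using the connector's RDD API; the keyspace/table names, the user_id partition key, and the lookup values are hypothetical, and only the email/mobile columns come from the question:
import com.datastax.spark.connector._

val someEmail  = "a@example.com"  // hypothetical lookup values
val someMobile = "555-0100"
val someUserId = "user-42"        // hypothetical partition key value

// Full scan + filter: every node reads its token ranges, then Spark filters locally.
val scanned = sc.cassandraTable("ks", "users")
  .filter(r => r.getString("email") == someEmail || r.getString("mobile") == someMobile)

// Partition-key lookup: only the matching partition is fetched,
// by joining an RDD of keys against the table.
val matched = sc.parallelize(Seq(Tuple1(someUserId)))
  .joinWithCassandraTable("ks", "users")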
