Spark SQL get max & min dynamically from datasource - apache-spark

I am using Spark SQL and want to fetch the whole data every day from an Oracle table (consisting of more than 1,800k records). The application was hanging when I read from Oracle, so I used the partitionColumn, lowerBound & upperBound options. The problem is: how can I get the lowerBound & upperBound values of the primary key column dynamically? The values of lowerBound & upperBound change every day, so how can I get the boundary values of the primary key column dynamically? Can anyone guide me with a sample example for my problem?

Just fetch required values from the database:
url = ...
properties = ...
partition_column = ...
table = ...
# Push aggregation to the database
query = "(SELECT min({0}), max({0}) FROM {1}) AS tmp".format(
partition_column, table
)
(lower_bound, upper_bound) = (spark.read
    .jdbc(url=url, table=query, properties=properties)
    .first())
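The two aggregates come back as whatever Python types the JDBC driver maps them to (and as None if the table is empty), so a small guard before using them as bounds doesn't hurt. A minimal sketch, assuming the partition column is numeric:
# Sketch only: coerce the fetched bounds to plain ints and fail fast on an empty table
if lower_bound is None or upper_bound is None:
    raise ValueError("table {0} is empty, nothing to read".format(table))
lower_bound, upper_bound = int(lower_bound), int(upper_bound)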
and pass to the main query:
num_partitions = ...
spark.read.jdbc(
    url, table,
    column=partition_column,
    # Make upper bound inclusive
    lowerBound=lower_bound, upperBound=upper_bound + 1,
    numPartitions=num_partitions, properties=properties
)
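Note that lowerBound and upperBound only decide the partition stride; they do not filter rows, so every row of the table is still read (rows outside the bounds end up in the first or last partition). A rough sketch of how the range is split, using hypothetical values:
# Sketch only (hypothetical values): how the bounds translate into per-partition ranges.
# Spark roughly computes a stride and builds one WHERE clause per partition:
#   first partition:   partition_column < boundaries[0] OR partition_column IS NULL
#   middle partitions: previous boundary <= partition_column < next boundary
#   last partition:    partition_column >= boundaries[-1]
lower_bound, upper_bound, num_partitions = 1, 1001, 4
stride = (upper_bound - lower_bound) // num_partitions
boundaries = [lower_bound + i * stride for i in range(1, num_partitions)]
print(boundaries)  # [251, 501, 751]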

Related

How to make my identity column consecutive on delta table in Azure Databricks?

I am trying to create a delta table with a consecutive identity column. The goal is for our clients to see if there is some data they did not receive from us.
It looks like the generated identity column is not consecutive, which makes the "INCREMENT BY 1" quite misleading.
store_visitor_type_name = ["apple","peach","banana","mango","ananas"]
card_type_name = ["door","desk","light","coach","sink"]
store_visitor_type_desc = ["monday","tuesday","wednesday","thursday","friday"]
colnames = ["column2","column3","column4"]
data_frame = spark.createDataFrame(zip(store_visitor_type_name,card_type_name,store_visitor_type_desc),colnames)
data_frame.createOrReplaceTempView('vw_increment')
data_frame.display()
%sql
CREATE or REPLACE TABLE TEST(
`column1SK` BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1)
,`column2` STRING
,`column3` STRING
,`column4` STRING
,`inserted_timestamp` TIMESTAMP
,`modified_timestamp` TIMESTAMP
)
USING delta
LOCATION '/mnt/Marketing/Sales';
MERGE INTO TEST as target
USING vw_increment as source
ON target.`column2` = source.`column2`
WHEN MATCHED
AND (target.`column3` <> source.`column3`
OR target.`column4` <> source.`column4`)
THEN
UPDATE SET
`column2` = source.`column2`
,`modified_timestamp` = current_timestamp()
WHEN NOT MATCHED THEN
INSERT (
`column2`
,`column3`
,`column4`
,`modified_timestamp`
,`inserted_timestamp`
) VALUES (
source.`column2`
,source.`column3`
,source.`column4`
,current_timestamp()
,current_timestamp()
)
I'm getting the following results. You can see this is not sequential. What is also very confusing is that it is not starting at 1, while that is explicitly mentioned in the query.
I can see in the documentation (https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-create-table-using.html#parameters) :
The automatically assigned values start with start and increment by step. Assigned values are unique but are not guaranteed to be contiguous. Both parameters are optional, and the default value is 1. step cannot be 0.
Is there a workaround to make this identity column consecutive?
I guess I could have another column and do a ROW_NUMBER operation after the MERGE, but it looks expensive.
You can use PySpark to achieve the requirement instead of using the row_number() function.
I read the TEST table as a Spark dataframe and converted it to a pandas-on-Spark dataframe. In the pandas dataframe, using reset_index(), I created a new index column.
Then I converted it back to a Spark dataframe and added 1 to the index column values, since the index starts at 0.
df = spark.sql("select * from test")
pdf = df.to_pandas_on_spark()
#to create new index column.
pdf.reset_index(inplace=True)
final_df = pdf.to_spark()
#Since index starts from 0, I have added 1 to it.
final_df.withColumn('index',final_df['index']+1).show()
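If you need to persist the result, keep in mind that column1SK is GENERATED ALWAYS, so you cannot write the computed index into that column directly. A minimal sketch (the target path below is hypothetical) writes the re-indexed dataframe out as its own Delta table:
# Sketch only: persist the re-indexed dataframe as a separate Delta table,
# since values cannot be supplied explicitly for a GENERATED ALWAYS identity column.
result_df = final_df.withColumn('index', final_df['index'] + 1)
(result_df.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/Marketing/Sales_with_index"))  # hypothetical location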

Spark 3.0 JdbcRDD Java - Issue in specifying lowerBound and upperBound for views with no ID column

I am using Spark 3.0. In my Java program I am querying data from views in an Oracle DB, using the JdbcRDD Java API.
The problem I have is that the views don't contain any ID or timestamp columns, so I am unable to construct my SQL query with lowerBound and upperBound values.
Please find below the example query I need to run in Spark. Here usr_stg.usr and usr_stg.prtcpnt are the two views exposed to me.
"SELECT a.participant,
a.desc,
b.firstname,
b.lastname,
b.dept,
b.telno,
b.emailaddr
FROM usr_stg.prtcpnt a
LEFT OUTER JOIN usr_stg.usr b
ON a.participant = b.participant
WHERE a.class = 'SpSession' "
I tried using temp tables in Spark and joining them, but the query performance is bad as there are around 13,000,000 rows in each view. Hence I tried to run the join query in the Oracle DB itself.
I was able to overcome the constraint using ROWNUM in the query. Using ROWNUM as the lowerBound and upperBound I am now able to get the data using JdbcRDD.
SELECT ROWNUM as id, a.participant,
       a.desc,
       b.firstname,
       b.lastname,
       b.dept,
       b.telno,
       b.emailaddr
FROM usr_stg.prtcpnt a
LEFT OUTER JOIN usr_stg.usr b
  ON a.participant = b.participant
WHERE a.class = 'SpSession' AND ? <= ROWNUM AND ROWNUM <= ?

PySpark Pushing down timestamp filter

I'm using PySpark version 2.4 to read some tables using jdbc with a Postgres driver.
df = spark.read.jdbc(url=data_base_url, table="tablename", properties=properties)
One column is a timestamp column and I want to filter it like this:
df_new_data = df.where(df.ts > last_datetime )
This way the filter is pushed down as a SQL query, but the datetime format is not right. So I tried this approach:
df_new_data = df.where(df.ts > F.date_format( F.lit(last_datetime), "y-MM-dd'T'hh:mm:ss.SSS") )
but then the filter is not pushed down anymore.
Can someone clarify why this is the case?
While loading data from a database table, if you want to push the query down to the database and get back only a few result rows, you can provide a query instead of the 'table' and get just the result as a DataFrame. This way we can leverage the database engine to process the query and return only the results to Spark.
The table parameter identifies the JDBC table to read. You can use anything that is valid in the FROM clause of a SQL query. Note that an alias is mandatory in the query.
pushdown_query = "(select * from employees where emp_no < 10008) emp_alias"
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
df.show()
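Applied to the timestamp filter from the original question, a minimal sketch (reusing the question's names; the timestamp literal format is an assumption for Postgres) looks like this:
# Sketch only: embed the timestamp condition in the query so Postgres evaluates it
last_datetime = "2019-01-01 00:00:00"  # hypothetical value
pushdown_query = ("(select * from tablename "
                  "where ts > timestamp '{}') t_alias".format(last_datetime))
df_new_data = spark.read.jdbc(url=data_base_url, table=pushdown_query,
                              properties=properties)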

Kundera Cassandra Delete a row based on Indexed column

How do I delete rows in Cassandra based on an indexed column?
upload_id is added as an index on the table. I tried:
"Delete from table where upload_id = '" + uploadId + "'"
but this gives me the error "NON PRIMARY KEY found in where clause".
As a workaround I run
String selectQuery = "Select hashkey from table where upload_id='" + uploadId + "'";
entityManager.createNativeQuery(selectQuery).getResultList();
and delete all the elements in the list using a for loop. However, this query is changed by Kundera to append LIMIT 100 ALLOW FILTERING.
I found a similar question at Kundera for Cassandra - Deleting record by row key, but that was asked in 2012, and since then there have been a lot of changes to Cassandra and Kundera.
Kundera by default uses LIMIT 100. You can use query.setMaxResults(<integer>) to modify the limit accordingly and then run the loop.
Example:
Query findQuery = entityManager.createQuery("Select p from PersonCassandra p where p.age = 10", PersonCassandra.class);
findQuery.setMaxResults(5000);
List<PersonCassandra> allPersons = findQuery.getResultList();

Having trouble querying by dates using the Java Cassandra Spark SQL Connector

I'm trying to use Spark SQL to query a table by a date range. For example, I'm trying to run an SQL statement like: SELECT * FROM trip WHERE utc_startdate >= '2015-01-01' AND utc_startdate <= '2015-12-31' AND deployment_id = 1 AND device_id = 1. When I run the query, no error is thrown, but I'm not receiving any results back when I would expect some. When running the query without the date range I do get results back.
SparkConf sparkConf = new SparkConf().setMaster("local").setAppName("SparkTest")
.set("spark.executor.memory", "1g")
.set("spark.cassandra.connection.host", "localhost")
.set("spark.cassandra.connection.native.port", "9042")
.set("spark.cassandra.connection.rpc.port", "9160");
JavaSparkContext context = new JavaSparkContext(sparkConf);
JavaCassandraSQLContext sqlContext = new JavaCassandraSQLContext(context);
sqlContext.sqlContext().setKeyspace("mykeyspace");
String sql = "SELECT * FROM trip WHERE utc_startdate >= '2015-01-01' AND utc_startdate < '2015-12-31' AND deployment_id = 1 AND device_id = 1";
JavaSchemaRDD rdd = sqlContext.sql(sql);
List<Row> rows = rdd.collect(); // rows.size() is zero when I would expect it to contain numerous rows.
Schema:
CREATE TABLE trip (
device_id bigint,
deployment_id bigint,
utc_startdate timestamp,
other columns....
PRIMARY KEY ((device_id, deployment_id), utc_startdate)
) WITH CLUSTERING ORDER BY (utc_startdate ASC);
Any help would be greatly appreciated.
What does your table schema (in particular, your PRIMARY KEY definition) look like? Even without seeing it, I am fairly certain that you are seeing this behavior because you are not qualifying your query with a partition key. Using the ALLOW FILTERING directive will filter the rows by date (assuming that is your clustering key), but that is not a good solution for a big cluster or large dataset.
Let's say that you are querying users in a certain geographic region. If you used region as a partition key, you could run this query, and it would work:
SELECT * FROM users
WHERE region='California'
AND date >= '2015-01-01' AND date <= '2015-12-31';
Give Patrick McFadin's article on Getting Started with Timeseries Data a read. That has some good examples that should help you.
