Timestamp values differ between Hive tables and Databricks Delta tables (Azure)

We did a binary copy of data from Hive to ADLS and validated the checksums. Values match for every datatype, except that timestamp columns show different values between the Hive and Delta (Azure Databricks) tables.
select abcdtstmp from xyz.abc where mn_ID = "sdsdsd-7878-0016"
2018-01-16 00:00:00.0 (on-prem Hive)
select abcdtstmp from xyz.abc where mn_ID = "sdsdsd-7878-0016"
2018-01-16T05:00:00.000+0000 (Azure Databricks)
While the checksum and all other validation do match, the offset that gets added after the 'T' is causing concern. Any suggestions would be helpful.

This seems to be related to timezones and Hive.
Hive assumes that timestamps in Parquet files are stored in UTC and converts them to the local system time (cluster host time) when reading them out. So even if you are transferring data from EST to EST, it is Hive that is the culprit.
If your Hive version is higher than 1.2, you can follow https://issues.apache.org/jira/browse/HIVE-9482 and use:
set hive.parquet.timestamp.skip.conversion=true
Otherwise, you need to manually convert the data back to EST (or whichever timezone you want) using SQL like the one below.
from_utc_timestamp(to_utc_timestamp(my_dt_tm,'America/New_York'),'America/Denver') AS local_time
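For example, a minimal PySpark sketch of the same idea (the timezone "America/New_York" is an assumption based on the EST mentioned above, and the table/column names are taken from the question):

from pyspark.sql import functions as F

df = spark.table("xyz.abc")
# Interpret the stored instant as UTC and render it in the on-prem cluster's timezone
df = df.withColumn("abcdtstmp_local",
                   F.from_utc_timestamp(F.col("abcdtstmp"), "America/New_York"))
df.select("abcdtstmp", "abcdtstmp_local") \
  .where(F.col("mn_ID") == "sdsdsd-7878-0016") \
  .show(truncate=False)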

Related

Spark streaming and Delta tables: java.lang.UnsupportedOperationException: Detected a data update

The setup:
Azure Event Hub -> raw delta table -> agg1 delta table -> agg2 delta table
The data is processed by Spark Structured Streaming.
Updates on the target Delta tables are done via foreachBatch using merge.
As a result I'm getting this error:
java.lang.UnsupportedOperationException: Detected a data update (for
example
partKey=ap-2/part-00000-2ddcc5bf-a475-4606-82fc-e37019793b5a.c000.snappy.parquet)
in the source table at version 2217. This is currently not supported.
If you'd like to ignore updates, set the option 'ignoreChanges' to
'true'. If you would like the data update to be reflected, please
restart this query with a fresh checkpoint directory.
Basically I'm not able to read the agg1 delta table via any kind of streaming. If I switch the last stream's sink from delta to memory I get the same error message. The first stream works without any problems.
Notes:
Between aggregations I'm changing the granularity: agg1 delta table (date truncated to minutes), agg2 delta table (date truncated to days).
If I turn off all other streams, the last one still doesn't work.
The agg2 delta table is a brand-new table with no data.
How the streaming works on the source table:
The stream reads the files that belong to your source table. It is not able to handle changes in those files (updates, deletes); if anything like that happens you will get the error above. In other words, DML operations rewrite the underlying files. The only exception is INSERTs: new data arrives in new files unless configured otherwise.
To fix that you need to set the option ignoreChanges to true.
With this option you will get all the records from a rewritten file, i.e. the same records as before plus the modified ones.
The problem: we have aggregations, and the aggregated values are stored in the checkpoint. If we receive the same (unmodified) record again, it is treated as new input and the aggregation for its grouping key is increased again.
Solution: we can't read the agg table to build further aggregations; we need to read the raw table instead.
reference: https://docs.databricks.com/structured-streaming/delta-lake.html#ignore-updates-and-deletes
Note: I'm working on Databricks Runtime 10.4, so the new low shuffle merge is used by default.
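As a rough sketch of what this answer suggests (the table names, the event_time column, and the checkpoint path below are hypothetical, not from the question): stream from the raw table with ignoreChanges enabled, aggregate to minute granularity, and upsert each micro-batch into agg1 with a merge inside foreachBatch. Per the answer, the day-level agg2 would likewise be computed from the raw table rather than by streaming from agg1.

from delta.tables import DeltaTable
from pyspark.sql import functions as F

raw_stream = (
    spark.readStream
         .format("delta")
         .option("ignoreChanges", "true")   # tolerate rewritten files; unchanged rows may be re-delivered
         .table("raw_events")               # hypothetical raw table
)

agg = (
    raw_stream
        .withColumn("event_minute", F.date_trunc("minute", F.col("event_time")))
        .groupBy("event_minute")
        .count()
)

def upsert_to_agg1(batch_df, batch_id):
    # Upsert this micro-batch's per-minute counts into the target table
    target = DeltaTable.forName(batch_df.sparkSession, "agg1")   # hypothetical target table
    (target.alias("t")
           .merge(batch_df.alias("s"), "t.event_minute = s.event_minute")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

(agg.writeStream
    .foreachBatch(upsert_to_agg1)
    .outputMode("update")
    .option("checkpointLocation", "/tmp/checkpoints/agg1")       # hypothetical path
    .start())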

Write a Spark DataFrame to a table

I am trying to understand the Spark DataFrame API method called saveAsTable.
I have the following questions:
If I simply write a DataFrame using the saveAsTable API,
df7.write.saveAsTable("t1")
(assuming t1 did not exist earlier), will the newly created table be a Hive table which can be read outside Spark using HiveQL?
Does Spark also create some non-Hive tables (created using the saveAsTable API but not readable outside Spark using HiveQL)?
How can I check whether a table is a Hive table or a non-Hive table?
(I am new to big data processing, so pardon me if the question is not phrased properly.)
Yes. The newly created table will be a Hive table and can be queried from the Hive CLI (but only if the DataFrame is created from a single, non-partitioned input HDFS path).
Below is the documentation comment from the DataFrameWriter.scala class:
When the DataFrame is created from a non-partitioned
HadoopFsRelation with a single input path, and the data source
provider can be mapped to an existing Hive builtin SerDe (i.e. ORC and
Parquet), the table is persisted in a Hive compatible format, which
means other systems like Hive will be able to read this table.
Otherwise, the table is persisted in a Spark SQL specific format.
Yes, you can. Your table can be partitioned by a column, but you cannot use bucketing (it's an incompatibility between Spark and Hive).
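As a quick way to check the last question, a hedged sketch (df7 and the table name t1 come from the question; the exact metadata rows shown vary by Spark version): write the table with a format that maps to a Hive builtin SerDe, then inspect how it was persisted.

# Parquet (and ORC) map to Hive builtin SerDes, so this should be Hive-compatible
df7.write.format("parquet").saveAsTable("t1")

# Inspect the table metadata: a Hive-compatible table shows Serde Library /
# InputFormat entries, a Spark-only table shows a Spark SQL provider instead.
spark.sql("DESCRIBE FORMATTED t1").show(100, truncate=False)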

Spark and JDBC: iterating through a large table and writing to HDFS

What would be the most memory-efficient way to copy the contents of a large relational table using Spark and then write it to a partitioned Hive table in Parquet format (without Sqoop)? I have a basic Spark app and have done some tuning of Spark's JDBC options, but the relational table is still 0.5 TB and 2 billion records. Although I can lazily load the full table, I'm trying to figure out how to efficiently partition by date and save to HDFS without running into memory issues. Since Spark's JDBC load() will load everything into memory, I was thinking of looping through the dates in the database query, but I'm still not sure how to make sure I don't run out of memory.
If you need to use Spark, you can add a date parameter to your application for filtering the table by date and run your Spark application in a loop, once per date. You can use bash or another scripting language for this loop.
This can look like:
foreach date in dates
spark-submit your application with date parameter
read DB table with spark.read.jdbc
filter by date using filter method
write result to HDFS with df.write.parquet("hdfs://path")
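A minimal PySpark sketch of one iteration of that loop (the JDBC URL, credentials, table name, and HDFS path below are placeholders, not details from the question); a bash loop over dates would spark-submit this script once per date:

import sys
from pyspark.sql import SparkSession

run_date = sys.argv[1]                                   # e.g. 2018-01-16, passed in by the outer loop

spark = SparkSession.builder.appName("copy-" + run_date).getOrCreate()

jdbc_url = "jdbc:postgresql://dbhost:5432/mydb"          # placeholder connection string
props = {"user": "etl", "password": "secret", "driver": "org.postgresql.Driver"}

# Read a subquery instead of the whole table so the date filter is executed by
# the database and only one day's rows are pulled into Spark.
query = "(SELECT * FROM big_table WHERE event_date = '{d}') AS one_day".format(d=run_date)
df = spark.read.jdbc(url=jdbc_url, table=query, properties=props)

# Write that day's slice into its own partition directory on HDFS
df.write.mode("overwrite").parquet("hdfs:///warehouse/big_table/event_date={d}".format(d=run_date))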
Another option is to use a different technology, for example implementing a Scala application that uses JDBC and a DB cursor to iterate through the rows and save the results to HDFS. This is more complex, because you need to solve the problems of writing to the Parquet format and saving to HDFS yourself in Scala. If you want, I can provide Scala code responsible for writing to the Parquet format.

Spark SQL queries on Partitioned Data

I have set up a Spark 1.3.1 application that collects event data. One of the attributes is a timestamp called 'occurredAt'. I'm intending to partition the event data into Parquet files on a filestore, and since the documentation (https://spark.apache.org/docs/1.3.1/sql-programming-guide.html#partition-discovery) indicates that time-based values are not supported (only string and int), I've split the date into year, month, and day values and partitioned as follows:
events
|---occurredAtYear=2015
| |---occurredAtMonth=07
| | |---occurredAtDay=16
| | | |---<parquet-files>
...
I then load the Parquet files from the root path /events:
sqlContext.parquetFile('/var/tmp/events')
Documentation says:
'Spark SQL will automatically extract the partitioning information
from the paths'
However my query
SELECT * FROM events where occurredAtYear=2015
fails miserably, saying Spark cannot resolve 'occurredAtYear'.
I can see the schema for all other attributes of the event and can run queries on those attributes, but printSchema does not list occurredAtYear/Month/Day in the schema at all. What am I missing to get partitioning working properly?
Cheers
It turns out I was following the instructions too literally; I was actually writing the Parquet files out to
/var/tmp/occurredAtYear=2015/occurredAtMonth=07/occurredAtDay=16/data.parquet
The 'data.parquet' part was additionally creating a further directory with Parquet files underneath it; I should have been saving the Parquet files to
/var/tmp/occurredAtYear=2015/occurredAtMonth=07/occurredAtDay=16
All works now with the schema being discovered correctly.
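For reference, a small sketch of the writer-side counterpart (using the DataFrame writer from Spark versions later than 1.3.1; events_df is a hypothetical DataFrame that already has the three derived columns): partitionBy produces exactly this directory layout, and the partition columns are then discovered on read.

events_df.write \
    .partitionBy("occurredAtYear", "occurredAtMonth", "occurredAtDay") \
    .parquet("/var/tmp/events")

events = spark.read.parquet("/var/tmp/events")
events.printSchema()                 # occurredAtYear/Month/Day now appear as partition columns
events.createOrReplaceTempView("events")
spark.sql("SELECT * FROM events WHERE occurredAtYear = 2015").show()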

What is the metastore for in Spark?

I am using Spark SQL in Python. I have created a partitioned table (a few hundred partitions) and stored it as a Hive internal table using the hiveContext. The Hive warehouse is located in S3.
When I simply do df = hiveContext.table("mytable"), it takes over a minute to go through all the partitions the first time. I thought the metastore stored all the metadata. Why does Spark still need to go through each partition? Is it possible to avoid this step so my startup can be faster?
The key here is that it only takes this long to load the file metadata on the first query. The reason is that Spark SQL doesn't store the partition metadata in the Hive metastore; for Hive partitioned tables, the partition information would need to live in the metastore. How the table was created dictates this behavior, and from the information provided, it sounds like you created a Spark SQL table.
Spark SQL stores the table schema (which includes partition information) and the root directory of your table, but still discovers each partition directory on S3 dynamically when the query is run. My understanding is that this is a tradeoff so you don't need to manually add new partitions whenever the table is updated.
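If the goal is to keep the partition information in the metastore itself, one commonly suggested alternative is to define the table with Hive DDL and register its partitions explicitly. A minimal sketch (the table name, columns, and S3 location are made up, and whether Spark then uses the metastore partitions instead of rescanning depends on the Spark version):

hiveContext.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS mytable_hive (id STRING, value DOUBLE)
    PARTITIONED BY (dt STRING)
    STORED AS PARQUET
    LOCATION 's3a://my-bucket/warehouse/mytable'
""")

# Register the partition directories that already exist under the table location
hiveContext.sql("MSCK REPAIR TABLE mytable_hive")

df = hiveContext.table("mytable_hive")   # partition list now comes from the metastore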
