Writing DataFrame as parquet creates empty files - apache-spark

I am trying to do some performance optimization for a Spark job using the bucketing technique. I read .parquet and .csv files, do some transformations, then bucket and join the two DataFrames. When I write the joined DataFrame to parquet, I get an empty file of ~500 B instead of ~500 MB.
Cloudera (cdh5.15.1)
Spark 2.3.0
val readParquet = spark.read.parquet(inputP)

readParquet
  .write
  .format("parquet")
  .bucketBy(23, "column")
  .sortBy("column")
  .mode(SaveMode.Overwrite)
  .saveAsTable("bucketedTable1")
val firstTableDF = spark.table("bucketedTable1")

val readCSV = spark.read.csv(inputCSV)

readCSV
  .filter(..)
  .orderBy(someColumn)
  .write
  .format("parquet")
  .bucketBy(23, "column")
  .sortBy("column")
  .mode(SaveMode.Overwrite)
  .saveAsTable("bucketedTable2")

val secondTableDF = spark.table("bucketedTable2")
val resultDF = secondTableDF
  .join(firstTableDF, Seq("column"), "fullouter")
  .
  .

resultDF
  .coalesce(1)
  .write
  .mode(SaveMode.Overwrite)
  .parquet(output)
When I launch the Spark job from the command line over ssh I get the correct result, a ~500 MB parquet file which I can see using Hive. If I run the same job through an Oozie workflow I get an empty file (~500 bytes).
When I call .show() on resultDF I can see the data, but the parquet file is still empty:
+-----------+---------------+----------+
| col1| col2 | col3|
+-----------+---------------+----------+
|33601234567|208012345678910| LOL|
|33601234567|208012345678910| LOL|
|33601234567|208012345678910| LOL|
There is no problem writing to parquet when I am not saving the data as a table; it occurs only with DataFrames created from a table.
Any suggestions?
Thanks in advance for any thoughts!

I figured it out for my case: I just added the option .option("path", "/sources/tmp_files_path"). Now I can use bucketing and I have data in my output files.
readParquet
  .write
  .option("path", "/sources/tmp_files_path")
  .mode(SaveMode.Overwrite)
  .bucketBy(23, "column")
  .sortBy("column")
  .saveAsTable("bucketedTable1")

Related

Spark always broadcasts tables greater than spark.sql.autoBroadcastJoinThreshold when performing a streaming merge on a DeltaTable sink

I am trying to do a streaming merge between delta tables using this guide - https://docs.delta.io/latest/delta-update.html#upsert-from-streaming-queries-using-foreachbatch
Our Code Sample (Java):
Dataset<Row> sourceDf = sparkSession
    .readStream()
    .format("delta")
    .option("inferSchema", "true")
    .load(sourcePath);

DeltaTable deltaTable = DeltaTable.forPath(sparkSession, targetPath);
sourceDf.createOrReplaceTempView("vTempView");

StreamingQuery sq = sparkSession.sql("select * from vTempView").writeStream()
    .format("delta")
    .foreachBatch((microDf, id) -> {
        deltaTable.alias("e").merge(microDf.alias("d"), "e.SALE_ID = d.SALE_ID")
            .whenMatched().updateAll()
            .whenNotMatched().insertAll()
            .execute();
    })
    .outputMode("update")
    .option("checkpointLocation", util.getFullS3Path(target) + "/_checkpoint")
    .trigger(Trigger.Once())
    .start();
Problem:
Here the source path and target path are already in sync via the checkpoint folder. The target has around 8 million rows of data, amounting to around 450 MB of parquet files.
When new data arrives in the source path (let's say 987 rows), the code above picks it up and performs a merge with the target table. During this operation Spark performs a BroadcastHashJoin and broadcasts the target table, which has 8M rows.
Here is a DAG snippet for the merge operation (with a table of 1M rows):
Expectation:
I expect the smaller dataset (i.e. 987 rows) to be broadcast. If not, Spark should at least not broadcast the target table, since it is larger than the configured spark.sql.autoBroadcastJoinThreshold and we are not providing any broadcast hint anywhere.
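As a sanity check (this snippet is an illustration, not from the original post), the threshold in effect can be inspected, and automatic broadcast joins can be disabled for the session by setting it to -1; shown here in Scala:

// Illustration only: read the current auto-broadcast threshold and, if needed,
// turn automatic broadcast joins off for this session.
println(spark.conf.get("spark.sql.autoBroadcastJoinThreshold"))
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

The same calls are available from Java through sparkSession.conf().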
Things I have tried:
I searched around and found this article - https://learn.microsoft.com/en-us/azure/databricks/kb/sql/bchashjoin-exceeds-bcjointhreshold-oom.
It provides 2 solutions:
1. Run "ANALYZE TABLE ..." (but since we are reading the target table from a path and not from a table, this is not possible).
2. Cache the table you are broadcasting; DeltaTable does not have any provision to cache a table, so we can't do this.
I thought this was because we are using the DeltaTable.forPath() method to read the target table, so Spark is unable to calculate target table metrics. So I also tried a different approach:
Dataset<Row> sourceDf = sparkSession
    .readStream()
    .format("delta")
    .option("inferSchema", "true")
    .load(sourcePath);

Dataset<Row> targetDf = sparkSession
    .read()
    .format("delta")
    .option("inferSchema", "true")
    .load(targetPath);

sourceDf.createOrReplaceTempView("vtempview");
targetDf.createOrReplaceTempView("vtemptarget");
targetDf.cache();

StreamingQuery sq = sparkSession.sql("select * from vtempview").writeStream()
    .format("delta")
    .foreachBatch((microDf, id) -> {
        microDf.createOrReplaceTempView("vtempmicrodf");
        microDf.sparkSession().sql(
            "MERGE INTO vtemptarget as t USING vtempmicrodf as s ON t.SALE_ID = s.SALE_ID WHEN MATCHED THEN UPDATE SET * WHEN NOT MATCHED THEN INSERT * "
        );
    })
    .outputMode("update")
    .option("checkpointLocation", util.getFullS3Path(target) + "/_checkpoint")
    .trigger(Trigger.Once())
    .start();
In the above snippet I am also caching targetDf so that Spark can compute metrics and avoid broadcasting the target table. But it didn't help; Spark still broadcasts it.
Now I am out of options. Can anyone give me some guidance on this?

PySpark dataframe not getting saved in Hive due to file format mismatch

I want to write streaming data from a Kafka topic to a Hive table.
I am able to create dataframes by reading the Kafka topic, but the data is not getting written to the Hive table due to a file-format mismatch. I have specified dataframe.format("parquet") and the Hive table is created with STORED AS PARQUET.
Below is the code snippet:
def hive_write_batch_data(data, batchId):
    data.write.format("parquet").mode("append").saveAsTable(table)

def write_to_hive(data, kafka_sink_name):
    global table
    table = kafka_sink_name
    data.select(col("key"), col("value"), col("offset")) \
        .writeStream.foreachBatch(hive_write_batch_data) \
        .start().awaitTermination()

if __name__ == '__main__':
    kafka_sink_name = sys.argv[1]
    kafka_config = {
        ....
        ..
    }
    spark = SparkSession.builder.appName("Test Streaming").enableHiveSupport().getOrCreate()
    df = spark.readStream \
        .format("kafka") \
        .options(**kafka_config) \
        .load()
    df1 = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "offset", "timestamp", "partition")
    write_to_hive(df1, kafka_sink_name)
Hive table is created as Parquet:
CREATE TABLE test.kafka_test(
  key string,
  value string,
  offset bigint)
STORED AS PARQUET;
It gives me the error:
pyspark.sql.utils.AnalysisException: "The format of the existing table test.kafka_test is `HiveFileFormat`. It doesn\'t match the specified format `ParquetFileFormat`.;"
How do I write the dataframe to the Hive table?
I dropped the Hive table and ran the Spark streaming job again; the table got created with the correct format.
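For context, the mismatch comes from how the table was declared: STORED AS PARQUET registers a Hive SerDe table (HiveFileFormat), while format("parquet").saveAsTable expects a Spark datasource table (ParquetFileFormat). A commonly suggested alternative to dropping the table is to pre-create it with the datasource syntax; a sketch (shown in Scala, the spark.sql call is identical in PySpark, and not part of the original answer):

// Sketch: declare the table with Spark's datasource syntax ("USING PARQUET")
// instead of the Hive SerDe ("STORED AS PARQUET"), so that
// format("parquet").mode("append").saveAsTable(...) passes the format check.
spark.sql("""
  CREATE TABLE IF NOT EXISTS test.kafka_test (
    key STRING,
    value STRING,
    `offset` BIGINT)
  USING PARQUET
""")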

Spark Streaming reading the entire table instead of file by file

I have ~3 PB of parquet on S3. I want to read it file-by-file with Spark streaming and join some metadata to it before writing it out. The metadata is small enough to be broadcast. Files in the source data are ~60 MB; none are huge.
val r = spark.readStream
  .option("maxFilesPerTrigger", "100")
  .schema(pschema)
  .parquet("s3://mybigdata/sourcedata/")
  .withColumn("id", regexp_extract(col("mycol"), "someregex", 1).cast(IntegerType))
  .alias("p")
  .join(broadcast(idmap.alias("i")), $"p.id" === $"i.id", "inner") // idmap is a small dataframe
  .drop($"i.id")
  .withColumn("date", regexp_extract($"filename", "someregex", 1))

val w = r.writeStream.format("delta")
  .partitionBy("date", "some_id")
  .option("checkpointLocation", "s3://mybigdata/checkpoint/")
  .option("path", "s3://mybigdata/destination/")
  .start()
When I do this, I get massive spills to memory and disk, which of course is a disaster. How am I getting these massive spills when I am rate limiting via maxFilesPerTrigger to 100 x ~60 MB files at a time? It seems to be trying to read the entire S3 dataset rather than streaming it at all.
What is going wrong here?
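One low-risk way to sanity-check the streaming behaviour (an illustration, not from the original post) is to look at the query's progress metrics, which report how many rows each micro-batch actually pulled in:

// Illustration only: w is the StreamingQuery returned by start() above.
// recentProgress / lastProgress show per-batch row counts and source metrics,
// which reveal whether maxFilesPerTrigger is limiting each micro-batch.
w.recentProgress.foreach { p =>
  println(s"batch=${p.batchId} rows=${p.numInputRows}")
}
println(w.lastProgress) // full JSON for the most recent micro-batch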

How to write data to Apache Iceberg tables using Spark SQL?

I am trying to familiarize myself with Apache Iceberg and I'm having some trouble understanding how to write some external data to a table using Spark SQL.
I have a file, one.csv, sitting in a directory, /data
my Iceberg catalog is configured to point to this directory, /warehouse
I want to write this one.csv to an Apache Iceberg table (preferably using Spark SQL)
Is it even possible to read external data using Spark SQL, and then write it to Iceberg tables? Do I have to use Scala or Python to do this? I've been through the Iceberg and Spark 3.0.1 documentation a bunch, but maybe I'm missing something.
Code Update
Here is some code that I hope will help
spark.conf.set("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkSessionCatalog")
spark.conf.set("spark.sql.catalog.spark_catalog.type", "hive")
spark.conf.set("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.local.type", "hadoop")
spark.conf.set("spark.sql.catalog.local.warehouse", "data/warehouse")
I have the data I need to use sitting in a directory /one/one.csv
How do I get it into an Iceberg table using Spark? Can all of this be done purely using SparkSQL?
spark.sql(
"""
CREATE or REPLACE TABLE local.db.one
USING iceberg
AS SELECT * FROM `/one/one.csv`
"""
)
Then the goal is that I can work with this Iceberg table directly, for example:
select * from local.db.one
and this would give me all the content from the /one/one.csv file.
To use Spark SQL, read the file into a dataframe, then register it as a temp view. This temp view can now be referred to in the SQL as:
var df = spark.read.format("csv").load("/data/one.csv")
df.createOrReplaceTempView("tempview");
spark.sql("CREATE or REPLACE TABLE local.db.one USING iceberg AS SELECT * FROM tempview");
To answer your other question, Scala or Python is not required; the same can be done from Java as well.
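As a quick check (a small illustration, not part of the original answer), the resulting Iceberg table can then be queried exactly as the question describes:

// Illustration: query the table created from the temp view above.
spark.sql("SELECT * FROM local.db.one").show()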
import java.util.concurrent.TimeUnit

import com.alibaba.fastjson.{JSON, JSONObject} // JSON / JSONObject used in the parsing step below (fastjson)
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.streaming.Trigger

val sparkConf = new SparkConf()
sparkConf.set("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
sparkConf.set("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkSessionCatalog")
sparkConf.set("spark.sql.catalog.spark_catalog.type", "hive")
sparkConf.set("spark.sql.catalog.hive_catalog", "org.apache.iceberg.spark.SparkCatalog")
sparkConf.set("spark.sql.catalog.hive_catalog.type", "hadoop")
sparkConf.set("spark.sql.catalog.hive_catalog.warehouse", "hdfs://host:port/user/hive/warehouse")
sparkConf.set("hive.metastore.uris", "thrift://host:19083")
sparkConf.set("spark.sql.catalog.hive_prod", "org.apache.iceberg.spark.SparkCatalog")
sparkConf.set("spark.sql.catalog.hive_prod.type", "hive")
sparkConf.set("spark.sql.catalog.hive_prod.uri", "thrift://host:19083")
sparkConf.set("hive.metastore.warehouse.dir", "hdfs://host:port/user/hive/warehouse")

val spark: SparkSession = SparkSession.builder()
  .enableHiveSupport()
  .config(sparkConf)
  .master("yarn")
  .appName("kafkaTableTest")
  .getOrCreate()
spark.sql(
"""
|
|create table if not exists hive_catalog.icebergdb.kafkatest1(
| company_id int,
| event string,
| event_time timestamp,
| position_id int,
| user_id int
|)using iceberg
|PARTITIONED BY (days(event_time))
|""".stripMargin)
import spark.implicits._
val df: DataFrame = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka_server")
  .option("subscribe", "topic")
  .option("startingOffsets", "latest")
  .load()
//.selectExpr("cast (value as string)")

val value: DataFrame = df.selectExpr("CAST(value AS STRING)")
  .as[String]
  .map(data => {
    val json_str: JSONObject = JSON.parseObject(data)
    val company_id: Integer = json_str.getInteger("company_id")
    val event: String = json_str.getString("event")
    val event_time: String = json_str.getString("event_time")
    val position_id: Integer = json_str.getInteger("position_id")
    val user_id: Integer = json_str.getInteger("user_id")
    (company_id, event, event_time, position_id, user_id)
  })
  .toDF("company_id", "event", "event_time", "position_id", "user_id")

value.createOrReplaceTempView("table")

spark.sql(
  """
    |select
    |  company_id,
    |  event,
    |  to_timestamp(event_time, 'yyyy-MM-dd HH:mm:ss') as event_time,
    |  position_id,
    |  user_id
    |from table
    |""".stripMargin)
  .writeStream
  .format("iceberg")
  .outputMode("append")
  .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
  .option("path", "hive_catalog.icebergdb.kafkatest1") // tablePath: catalog.db.tableName
  .option("checkpointLocation", "hdfspath")
  .start()
  .awaitTermination()
This example reads data from Kafka and writes it to an Iceberg table.
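As a small follow-up illustration (not part of the original answer), once the stream has committed data the table can be queried through the same catalog:

// Illustration: check how many rows have landed in the Iceberg table
// (run from a separate session, since awaitTermination() above blocks).
spark.sql("SELECT count(*) FROM hive_catalog.icebergdb.kafkatest1").show()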

Is it possible to parse a JSON string from a Kafka topic in real time using Spark Streaming SQL?

I have a PySpark notebook that connects to a Kafka broker and creates a Spark writeStream called temp. The data values in the Kafka topic are in JSON format, but I'm not sure how to create a Spark SQL table that can parse this data in real time. The only way I know is to create a copy of the table, convert it into an RDD or DataFrame, and parse the value into another RDD and DataFrame. Is it possible to do this in real time as the stream is being written?
Code:
df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "hoteth") \
    .option("startingOffsets", "earliest") \
    .load()

ds = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
ds.writeStream.queryName("temp").format("memory").start()
spark.sql("select * from temp limit 5").show()
Output:
+----+--------------------+--------------------+
| key| value| timestamp|
+----+--------------------+--------------------+
|null|{"e":"trade","E":...|2018-09-18 15:41:...|
|null|{"e":"trade","E":...|2018-09-18 15:41:...|
|null|{"e":"trade","E":...|2018-09-18 15:41:...|
|null|{"e":"trade","E":...|2018-09-18 15:41:...|
|null|{"e":"trade","E":...|2018-09-18 15:41:...|
+----+--------------------+--------------------+
One way I could solve this is to just use lateral view json_tuple, the way it is done in Hive HQL. I'm still looking for a solution that can parse the data directly from the stream, so that it doesn't take extra processing time to parse it with a query.
spark.sql("""
select value, v1.transaction,ticker,price
from temp
lateral view json_tuple(value,"e","s","p") v1 as transaction, ticker,price
limit 5
""").show()
