Saving DataFrame as ORC loses StructField metadata - apache-spark

Spark version: 2.4.4
When writing a DataFrame, I want to attach some metadata to certain fields. It's important for this metadata to be persisted when I write out the DataFrame and read it again later. If I save the DataFrame as Parquet and then read it back, the metadata is preserved. But if I save it as ORC, the metadata is lost when I read the files back. Here is a bit of code showing how I'm doing this (in Java):
// set up schema and dataframe
Metadata myMeta = new MetadataBuilder().putString("myMetaData", "foo").build();
StructField field = DataTypes.createStructField("x", DataTypes.IntegerType, true, myMeta);
Dataset<Row> df = sparkSession.createDataFrame(rdd, /* a schema using this field */);
// write it
df.write().format("parquet").save("test");
// read it again
Dataset<Row> df2 = sparkSession.read().format("parquet").load("test");
// check the schema and field metadata after reading the files back
System.out.println(df2.schema().prettyJson());
System.out.println(df2.schema().fields()[0].metadata());
Using the Parquet format, the metadata is deserialized as I expect. However, if I change the format to ORC, the metadata comes back as an empty map.
Is this a known bug in the Spark ORC implementation or am I missing something? Thanks.

Related

When to use Spark Encoder bean vs Schema

I have a long XML string that I am converting to JSON for easier processing in Spark, but I am having some issues with automatic schema inference. What is an efficient way to convert a Dataset<String> xmlStringData into a Dataset with structure? Should I generate a schema using StructType and read it back into Rows, as shown below:
Dataset<Row> jsonDataset = sparkSession.read().schema(schema).json(xmlStringData);
OR
Dataset<myClass> jsonDataset = xmlStringData.map(
    (MapFunction<Row, myClass>) xmlRow -> new myClass(xmlRow),
    myClassEncode);
What is the difference for later processing between the two routes?
All I need to do afterwards is process the data and save it to CSV.
Thank you.

Create a parquet file with custom schema

I have a requirement like this:
In Databricks, we are reading a CSV file. This file has multiple columns like emp_name, emp_salary, joining_date, etc. When we read this file into a DataFrame, all of the columns come back as string.
We have an API which gives us the schema of the columns: emp_name is string(50), emp_salary is decimal(7,4), joining_date is timestamp, etc.
I have to create a parquet file with the schema that is coming from the API.
How can we do this in Databricks using PySpark?
You can always pass in the schema when reading:
schema = 'emp_name string, emp_salary decimal(7,4), joining_date timestamp'
df = spark.read.csv('input.csv', schema=schema)
df.printSchema()
df.show()
The only thing to be careful about is that some type strings from the API cannot be used directly, e.g., "string(50)" needs to be converted to "string"; see the sketch after the sample input below.
input.csv:
"name","123.1234","2022-01-01 10:10:00"

How to query data from the PySpark SQL context if a key is not present in the JSON file, and how to catch the SQL analysis exception

I am using PySpark to transform JSON into a DataFrame, and the transformation itself works. The problem I am facing is that there is a key which is present in some JSON files but not in others. When I flatten the JSON with the PySpark SQL context and the key is not present in a file, creating my DataFrame fails with an SQL analysis exception.
For example, here is my sample JSON:
{
"_id" : ObjectId("5eba227a0bce34b401e7899a"),
"origin" : "inbound",
"converse" : "72412952",
"Start" : "2020-04-20T06:12:20.89Z",
"End" : "2020-04-20T06:12:53.919Z",
"ConversationMos" : 4.88228940963745,
"ConversationRFactor" : 92.4383773803711,
"participantId" : "bbe4de4c-7b3e-49f1-8",
}
The participantId above will be present in some JSON files and not in others.
My PySpark code snippet:
fetchFile = spark.read.format(file_type)\
    .option("inferSchema", "true")\
    .option("header", "true")\
    .load(generated_FileLocation)
fetchFile.registerTempTable("CreateDataFrame")
tempData = sqlContext.sql("select origin, converse, Start, End, participantId from CreateDataFrame")
When participantId is not present in a JSON file, an exception is thrown. How can I handle this so that, if the key is not present, the column just contains null (or is there another way to handle it)?
You can simply check whether the column is there; if it is not, add it with empty values.
The code for that goes like this:
from pyspark.sql import functions as f

fetchFile = spark.read.format(file_type)\
    .option("inferSchema", "true")\
    .option("header", "true")\
    .load(generated_FileLocation)
if 'participantId' not in fetchFile.columns:
    fetchFile = fetchFile.withColumn('participantId', f.lit(''))
fetchFile.registerTempTable("CreateDataFrame")
tempData = sqlContext.sql("select origin, converse, Start, End, participantId from CreateDataFrame")
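If several optional keys can be missing, the same idea extends to a small loop over the expected columns. This is a sketch; the column list comes from the sample JSON above, and the null-string default is just one reasonable choice:
from pyspark.sql import functions as f

# Columns the downstream SQL expects; add any the current file is missing
expected_cols = ["origin", "converse", "Start", "End", "participantId"]
for c in expected_cols:
    if c not in fetchFile.columns:
        fetchFile = fetchFile.withColumn(c, f.lit(None).cast("string"))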
I think you're calling Spark to read one file at a time and inferring the schema at the same time.
What Spark is telling you with the SQL analysis exception is that your file and your inferred schema don't have the key you're looking for. What you have to do is get a good schema and apply it to all of the files you want to process, ideally processing all of your files at once.
There are three strategies:
1. Infer your schema from lots of files. You should get the aggregate of all of the keys. Spark will run two passes over the data.
df = spark.read.json('/path/to/your/directory/full/of/json/files')
schema = df.schema
print(schema)
2. Create a schema object by hand.
I find this tedious to do, but it will speed up your code; see the sketch after these three options. Here is a reference: https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.types.StructType
3. Read the schema from a well-formed file, then use that to read your whole directory. Also, by printing the schema object, you can copy-paste it back into your code for option #2.
schema = spark.read.json('path/to/well/formed/file.json').schema
print(schema)
my_df = spark.read.schema(schema).json('path/to/entire/folder/full/of/json')
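For option #2, here is a minimal sketch of a hand-written schema. The field names and types are assumptions based on the sample JSON above; adjust them to your real data:
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hand-written schema; files without participantId will simply yield nulls for it
schema = StructType([
    StructField("origin", StringType(), True),
    StructField("converse", StringType(), True),
    StructField("Start", StringType(), True),
    StructField("End", StringType(), True),
    StructField("ConversationMos", DoubleType(), True),
    StructField("ConversationRFactor", DoubleType(), True),
    StructField("participantId", StringType(), True),
])

df = spark.read.schema(schema).json('/path/to/your/directory/full/of/json/files')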

Expressing spark `StructType` in avro schema

How would you describe the Spark StructType data type in an Avro schema? I am generating a Parquet file whose format is described by an Avro schema. This file is then loaded from S3 into Spark. Avro has array and map data types, but these do not correspond to StructType.
Using the package org.apache.spark.sql.avro (Spark 2.4) you can convert Spark SQL schemas to Avro schemas and vice versa.
You can try it this way:
import org.apache.spark.sql.avro.SchemaConverters

val sqlType = SchemaConverters.toSqlType(avroSchema)
// genericRecordToRow is your own helper that maps an Avro GenericRecord to a Row
val rowRDD = yourGenericRecordRDD.map(record => genericRecordToRow(record, sqlType))
val df = sqlContext.createDataFrame(rowRDD, sqlType.dataType.asInstanceOf[StructType])

RDD String to Spark csv Reader

I want to read an RDD[String] using the Spark CSV reader. The reason I am doing this is that I need to filter some records before using the CSV reader.
val fileRDD: RDD[String] = spark.sparkContext.textFile("file")
I need to read the fileRDD using the Spark CSV reader. I don't want to write the file back out, since that increases the IO on HDFS. I have looked into the options available in the Spark CSV reader, but didn't find one for this.
spark.read.csv(file)
Sample Data
PHM|MERC|PHARMA|BLUEDRUG|50
CLM|BSH|CLAIM|VISIT|HSA|EMPLOYER|PAID|250
PHM|GSK|PHARMA|PARAC|70
CLM|UHC|CLAIM|VISIT|HSA|PERSONAL|PAID|72
As you can see, the records starting with PHM have a different number of columns than the records starting with CLM. That is the reason I am filtering first and then applying a schema; PHM and CLM records have different schemas.
val fileRDD: RDD[String] = spark.sparkContext.textFile("file").filter(_.startsWith("PHM"))
spark.read.schema(phmSchema).csv(fileRDD.toDS()) // phmSchema: the StructType for PHM records
Since Spark 2.2, the .csv method can read a Dataset of strings. It can be implemented this way:
import spark.implicits._ // needed for .toDS()

val rdd: RDD[String] = spark.sparkContext.textFile("csv.txt")
// ... do filtering
spark.read.csv(rdd.toDS())
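For reference, the same idea works in PySpark, whose csv() reader also accepts an RDD of strings. This is a sketch; the file path, the filter, and the schema string for the pipe-delimited PHM records are illustrative assumptions:
# Filter the raw lines first, then hand them straight to the CSV reader
phm_schema = "rec_type string, vendor string, category string, drug string, amount int"

rdd = spark.sparkContext.textFile("file").filter(lambda line: line.startswith("PHM"))
df = spark.read.csv(rdd, schema=phm_schema, sep="|")
df.show()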
