How does Spark 2.0 handle column nullability? - apache-spark

In the recently released The Data Engineer's Guide to Apache Spark, the authors stated (page 74):
"...when you define a schema where all columns are declared to not
have null values - Spark will not enforce that and will happily let
null values into that column. The nullable signal is simply to help
Spark SQL optimize for handling that column. If you have null values
in columns that should not have null values, you can get an incorrect
result or see strange exceptions that can be hard to debug."
While going over release notes and previous JIRAs, I got the impression that the statement above may no longer be accurate.
According to SPARK-13740 and SPARK-15192, it looks like nullability is now enforced when a schema is defined at DataFrame creation.
Could I get some clarification? I'm no longer certain what the behavior is.

Different DataFrame creation paths handle nulls differently, and it's not straightforward: there are at least three distinct areas where nulls are treated in completely different ways.
First, SPARK-15192 is about RowEncoders. In the RowEncoder case, no nulls are allowed, and the error messages have been improved. Among the two dozen or so overloads of SparkSession.createDataFrame(), quite a few implementations are essentially converting an RDD to a DataFrame.
In my example below no nulls were accepted. Try something similar to converting an RDD to a DataFrame with the createDataFrame() method like below and you will get the same results...
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

val nschema = StructType(Seq(
  StructField("colA", IntegerType, nullable = false),
  StructField("colB", IntegerType, nullable = true),
  StructField("colC", IntegerType, nullable = false),
  StructField("colD", IntegerType, nullable = true)))

val intNullsRDD = sc.parallelize(List(
  Row(null, null, null, null),
  Row(2, null, null, null),
  Row(null, 3, null, null),
  Row(null, null, null, 4)))

spark.createDataFrame(intNullsRDD, nschema).show()
In Spark 2.1.1, the error message is pretty nice.
17/11/23 21:30:37 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 6)
java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: The 0th field 'colA' of input row cannot be null.
validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, colA), IntegerType) AS colA#73
+- validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, colA), IntegerType)
   +- getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, colA)
      +- assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object)
         +- input[0, org.apache.spark.sql.Row, true]
Stepping through the code, you can see where this happens. The validation itself sits further down, in the doGenCode() method; the logic kicks in as soon as the RowEncoder object is created with val encoder = RowEncoder(schema) in the createDataFrame() implementation below.
@DeveloperApi
@InterfaceStability.Evolving
def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame = {
  createDataFrame(rowRDD, schema, needsConversion = true)
}

private[sql] def createDataFrame(
    rowRDD: RDD[Row],
    schema: StructType,
    needsConversion: Boolean) = {
  // TODO: use MutableProjection when rowRDD is another DataFrame and the applied
  // schema differs from the existing schema on any field data type.
  val catalystRows = if (needsConversion) {
    val encoder = RowEncoder(schema)
    rowRDD.map(encoder.toRow)
  } else {
    rowRDD.map { r: Row => InternalRow.fromSeq(r.toSeq) }
  }
  val logicalPlan = LogicalRDD(schema.toAttributes, catalystRows)(self)
  Dataset.ofRows(self, logicalPlan)
}
Stepping through this logic further, here is where that improved message lives, in objects.scala, and where the code handles null values. (Strictly speaking, the error message is passed through ctx.addReferenceObj(errMsg), but you get the idea.)
case class GetExternalRowField(
    child: Expression,
    index: Int,
    fieldName: String) extends UnaryExpression with NonSQLExpression {

  override def nullable: Boolean = false

  override def dataType: DataType = ObjectType(classOf[Object])

  override def eval(input: InternalRow): Any =
    throw new UnsupportedOperationException("Only code-generated evaluation is supported")

  private val errMsg = s"The ${index}th field '$fieldName' of input row cannot be null."

  override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode = {
    // Use unnamed reference that doesn't create a local field here to reduce the number of fields
    // because errMsgField is used only when the field is null.
    val errMsgField = ctx.addReferenceObj(errMsg)
    val row = child.genCode(ctx)
    val code = s"""
      ${row.code}

      if (${row.isNull}) {
        throw new RuntimeException("The input external row cannot be null.");
      }

      if (${row.value}.isNullAt($index)) {
        throw new RuntimeException($errMsgField);
      }

      final Object ${ev.value} = ${row.value}.get($index);
    """
    ev.copy(code = code, isNull = "false")
  }
}
Something completely different happens when pulling from an HDFS data source. In that case there is no error message when a column is non-nullable and a null comes in; the column still accepts null values. Here is a quick test file, testFile.csv, that I created and then put into HDFS with hdfs dfs -put testFile.csv /data/nullTest:
|colA|colB|colC|colD|
| | | | |
| | 2| 2| 2|
| | 3| | |
| 4| | | |
When I wrote that data out and read it back, all of the blank values became null, even for fields declared non-nullable. There are ways to handle blanks differently, but this is the default behavior. Both csv and parquet gave the same results. (Note: the data is first created with an all-nullable version of nschema so the nulls survive the write; it is then read back with schema, the non-nullable StructType defined in the third case further down.)
val nschema = StructType(Seq(
  StructField("colA", IntegerType, nullable = true),
  StructField("colB", IntegerType, nullable = true),
  StructField("colC", IntegerType, nullable = true),
  StructField("colD", IntegerType, nullable = true)))

import scala.collection.JavaConverters._

val jListNullsADF = spark.createDataFrame(List(
  Row(null, null, null, null),
  Row(2, null, null, null),
  Row(null, 3, null, null),
  Row(null, null, null, 4)).asJava, nschema)

jListNullsADF.write.format("parquet").save("/data/parquetnulltest")

// 'schema' is the non-nullable StructType from the third case below.
spark.read.format("parquet").schema(schema).load("/data/parquetnulltest").show()
+----+----+----+----+
|colA|colB|colC|colD|
+----+----+----+----+
|null|null|null|null|
|null| 2| 2| 2|
|null|null| 3|null|
|null| 4|null| 4|
+----+----+----+----+
The cause of the nulls being allowed starts with the DataFrameReader creation, where a call is made to baseRelationToDataFrame() in DataFrameReader.scala. baseRelationToDataFrame() in SparkSession.scala uses a QueryPlan, and the QueryPlan recreates the StructType. Its fromAttributes() method produces essentially the same schema as the original one, but it forces nullability. So by the time it gets back to RowEncoder(), it is a nullable version of the original schema.
Immediately below in DataFrameReader.scala you can see the baseRelationToDataFrame() call...
@scala.annotation.varargs
def load(paths: String*): DataFrame = {
  sparkSession.baseRelationToDataFrame(
    DataSource.apply(
      sparkSession,
      paths = paths,
      userSpecifiedSchema = userSpecifiedSchema,
      className = source,
      options = extraOptions.toMap).resolveRelation())
}
Immediately below, in SparkSession.scala, you can see the Dataset.ofRows(self: SparkSession, lr: LogicalRelation) method being called; pay close attention to the LogicalRelation plan constructor.
def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame = {
  Dataset.ofRows(self, LogicalRelation(baseRelation))
}
In Dataset.scala, the analyzed QueryPlan object's schema property is being passed as the third argument to create the Dataset in new Dataset[Row](sparkSession, qe, RowEncoder(qe.analyzed.schema)).
def ofRows(sparkSession: SparkSession, logicalPlan: LogicalPlan): DataFrame = {
  val qe = sparkSession.sessionState.executePlan(logicalPlan)
  qe.assertAnalyzed()
  new Dataset[Row](sparkSession, qe, RowEncoder(qe.analyzed.schema))
}
In QueryPlan.scala the StructType.fromAttributes() method is being used
lazy val schema: StructType = StructType.fromAttributes(output)
And finally, in StructType.scala, fromAttributes() takes its nullability from the plan's attributes, which by this point are nullable:
private[sql] def fromAttributes(attributes: Seq[Attribute]): StructType =
  StructType(attributes.map(a => StructField(a.name, a.dataType, a.nullable, a.metadata)))
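To see the end result of that chain for yourself, here is a small sketch (it reuses the /data/parquetnulltest path written above; exactly what gets printed depends on your Spark version, so treat it as a check rather than a guarantee). It compares the nullability you declare with the nullability the loaded DataFrame reports:
// Declared schema, with colA and colC marked non-nullable.
val declared = StructType(Seq(
  StructField("colA", IntegerType, nullable = false),
  StructField("colB", IntegerType, nullable = true),
  StructField("colC", IntegerType, nullable = false),
  StructField("colD", IntegerType, nullable = true)))

val loaded = spark.read.format("parquet").schema(declared).load("/data/parquetnulltest")

// Compare what was declared with what survives the QueryPlan/fromAttributes round trip.
declared.fields.zip(loaded.schema.fields).foreach { case (d, l) =>
  println(s"${d.name}: declared nullable=${d.nullable}, loaded nullable=${l.nullable}")
}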
About the query plan being different based on nullability: I think it is entirely possible that the LogicalPlan differs depending on whether a column is nullable or not. A lot of information is passed into that object, and there is a lot of subsequent logic to create the plan. But the non-nullable constraint is not preserved by the time the DataFrame is actually written, as we saw a moment ago.
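If you want to check on your own version whether nullability actually changes the plan, a minimal sketch is to compare the optimized plans of the same IsNotNull filter over a nullable and a non-nullable column (the column name x below is just illustrative); on many releases Catalyst simplifies the check away for the non-nullable column:
import org.apache.spark.sql.functions.col

// x is non-nullable here (derived directly from range's id).
val nonNullableDF = spark.range(5).selectExpr("CAST(id AS INT) AS x")
// x is nullable here because one branch of the IF produces NULL.
val nullableDF = spark.range(5).selectExpr("IF(id > 2, CAST(id AS INT), NULL) AS x")

// Compare the plans; the IsNotNull filter may disappear entirely in the first case.
nonNullableDF.filter(col("x").isNotNull).explain(true)
nullableDF.filter(col("x").isNotNull).explain(true)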
The third case depends on the DataType. When you create a DataFrame using the createDataFrame(rows: java.util.List[Row], schema: StructType) method, it will actually substitute zeros where a null is passed into a non-nullable IntegerType field. You can see the example below...
val schema = StructType(Seq(
  StructField("colA", IntegerType, nullable = false),
  StructField("colB", IntegerType, nullable = true),
  StructField("colC", IntegerType, nullable = false),
  StructField("colD", IntegerType, nullable = true)))

val jListNullsDF = spark.createDataFrame(List(
  Row(null, null, null, null),
  Row(2, null, null, null),
  Row(null, 3, null, null),
  Row(null, null, null, 4)).asJava, schema)

jListNullsDF.show()
+----+----+----+----+
|colA|colB|colC|colD|
+----+----+----+----+
| 0|null| 0|null|
| 2|null| 0|null|
| 0| 3| 0|null|
| 0|null| 0| 4|
+----+----+----+----+
It looks like there is logic in org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt() that substitutes zeros for nulls. However, with non-nullable StringType fields, nulls are not handled as gracefully.
val strschema = StructType(Seq(
  StructField("colA", StringType, nullable = false),
  StructField("colB", StringType, nullable = true),
  StructField("colC", StringType, nullable = false),
  StructField("colD", StringType, nullable = true)))

val strNullsRDD = sc.parallelize(List(
  Row(null, null, null, null),
  Row("r2colA", null, null, null),
  Row(null, "r3colC", null, null),
  Row(null, null, null, "r4colD")))

spark.createDataFrame(List(
  Row(null, null, null, null),
  Row("r2cA", null, null, null),
  Row(null, "row3cB", null, null),
  Row(null, null, null, "row4ColD")).asJava, strschema).show()
But below is the not-very-helpful error message, which doesn't specify the ordinal position of the field...
java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter.write(UnsafeRowWriter.java:210)

Long story short, we don't know. It is true that Spark has become much stricter about enforcing nullability constraints.
However, considering the complexity of Spark (the number of guest languages, the size of the library, the number of low-level mechanisms used for optimizations, pluggable data sources, and a relatively large pool of legacy code), there is really no guarantee that the fairly limited safety checks included in recent versions cover all possible scenarios.
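Given that, if correctness matters, a pragmatic approach is to validate the data yourself rather than rely on the nullable flag. A minimal sketch (the helper name assertNoNulls is mine, not a Spark API):
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Fail fast if any column that the schema claims is non-nullable actually contains nulls.
def assertNoNulls(df: DataFrame, columns: Seq[String]): Unit =
  columns.foreach { c =>
    val bad = df.filter(col(c).isNull).count()
    require(bad == 0, s"Column '$c' is declared non-nullable but contains $bad null value(s)")
  }

// Example usage: check every column whose StructField says nullable = false.
// assertNoNulls(someDF, someDF.schema.fields.filterNot(_.nullable).map(_.name))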

Related

Validating Schema of Column with StructType in Pyspark 2.4

I have a dataframe that has a column that is a JSON string
from pyspark.sql import SparkSession
from pyspark.sql.types import *
import pyspark.sql.functions as F
sc = SparkSession.builder.getOrCreate()
l = [
    (1, """{"key1": true, "nested_key": {"mylist": ["foo", "bar"], "mybool": true}})"""),
    (2, """{"key1": true, "nested_key": {"mylist": "", "mybool": true}})"""),
]
df = sc.createDataFrame(l, ["id", "json_str"])
and want to parse the json_str column with from_json using a schema
schema = StructType([
    StructField("key1", BooleanType(), False),
    StructField("nested_key", StructType([
        StructField("mylist", ArrayType(StringType()), False),
        StructField("mybool", BooleanType(), False)
    ]))
])
df = df.withColumn("data", F.from_json(F.col("json_str"), schema))
df.show(truncate=False)
+---+--------------------------+
|id |data |
+---+--------------------------+
|1 |[true, [[foo, bar], true]]|
|2 |[true, [, true]] |
+---+--------------------------+
As one can see, the second row didn't conform to the schema in schema, so it's null even though I passed False to nullable in the StructField. It's important to my pipeline that if there's data that doesn't conform to the defined schema, an alert gets raised somehow, but I'm not sure about the best way to do this in PySpark. The real data has many, many keys, some of them nested, so checking each one with some form of isNaN isn't feasible, and since we already defined the schema it feels like there should be a way to leverage that.
If it matters, I don't necessarily need to check the schema of the whole dataframe, I'm really after checking the schema of the StructType column
Check out the options parameter:
https://spark.apache.org/docs/2.3.1/api/python/pyspark.sql.html?highlight=from_json#pyspark.sql.functions.from_json
It's a little vague, but it allows you to pass a dict to the underlying method here:
https://spark.apache.org/docs/2.3.1/api/python/pyspark.sql.html?highlight=from_json#pyspark.sql.DataFrameReader.json
You might have success passing something like options={'mode' : 'FAILFAST'}.
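For reference, a hedged sketch of the same idea in the Scala API (mirroring the rest of this page; in PySpark you would pass options={'mode': 'FAILFAST'} to from_json). Whether FAILFAST actually raises on non-conforming records depends on your Spark version, so verify it against your data:
import org.apache.spark.sql.functions.{col, from_json}

// 'schema' is the StructType for the JSON column, as built above; json_str is the raw column.
val parsed = df.withColumn("data", from_json(col("json_str"), schema, Map("mode" -> "FAILFAST")))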

Deserializing Spark structured stream data from Kafka topic

I am working off Kafka 2.3.0 and Spark 2.3.4. I have already built a Kafka Connector which reads off a CSV file and posts a line from the CSV to the relevant Kafka topic. The line is like so:
"201310,XYZ001,Sup,XYZ,A,0,Presales,6,Callout,0,0,1,N,Prospect".
The CSV contains 1000s of such lines. The Connector is able to successfully post them on the topic, and I am also able to get the messages in Spark. I am not sure how I can deserialize that message to my schema. Note that the messages are headerless, so the key part of the Kafka message is null. The value part includes the complete CSV string as above. My code is below.
I looked at this - How to deserialize records from Kafka using Structured Streaming in Java? but was unable to port it to my csv case. In addition I've tried other spark sql mechanisms to try and retrieve the individual row from the 'value' column but to no avail. If I do manage to get a compiling version (e.g. a map over the indivValues Dataset or dsRawData) I get errors similar to: "org.apache.spark.sql.AnalysisException: cannot resolve 'IC' given input columns: [value];". If I understand correctly, it is because value is a comma separated string and spark isn't really going to magically map it for me without me doing 'something'.
//build the spark session
SparkSession sparkSession = SparkSession.builder()
        .appName(seCfg.arg0AppName)
        .config("spark.cassandra.connection.host", config.arg2CassandraIp)
        .getOrCreate();
...
//my target schema is this:
StructType schema = DataTypes.createStructType(new StructField[] {
        DataTypes.createStructField("timeOfOrigin", DataTypes.TimestampType, true),
        DataTypes.createStructField("cName", DataTypes.StringType, true),
        DataTypes.createStructField("cRole", DataTypes.StringType, true),
        DataTypes.createStructField("bName", DataTypes.StringType, true),
        DataTypes.createStructField("stage", DataTypes.StringType, true),
        DataTypes.createStructField("intId", DataTypes.IntegerType, true),
        DataTypes.createStructField("intName", DataTypes.StringType, true),
        DataTypes.createStructField("intCatId", DataTypes.IntegerType, true),
        DataTypes.createStructField("catName", DataTypes.StringType, true),
        DataTypes.createStructField("are_vval", DataTypes.IntegerType, true),
        DataTypes.createStructField("isee_vval", DataTypes.IntegerType, true),
        DataTypes.createStructField("opCode", DataTypes.IntegerType, true),
        DataTypes.createStructField("opType", DataTypes.StringType, true),
        DataTypes.createStructField("opName", DataTypes.StringType, true)
});
...
Dataset<Row> dsRawData = sparkSession
        .readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", config.arg3Kafkabootstrapurl)
        .option("subscribe", config.arg1TopicName)
        .option("failOnDataLoss", "false")
        .load();

//getting individual terms like '201310', 'XYZ001'.. from "value"
Dataset<String> indivValues = dsRawData
        .selectExpr("CAST(value AS STRING)")
        .as(Encoders.STRING())
        .flatMap((FlatMapFunction<String, String>) x -> Arrays.asList(x.split(",")).iterator(), Encoders.STRING());

//indivValues when printed to console looks like below, which confirms that
//I receive the data correctly and completely
/*
When printed on console, looks like this:
+--------------------+
| value|
+--------------------+
| 201310|
| XYZ001|
| Sup|
| XYZ|
| A|
| 0|
| Presales|
| 6|
| Callout|
| 0|
| 0|
| 1|
| N|
| Prospect|
+--------------------+
*/
StreamingQuery sq = indivValues.writeStream()
        .outputMode("append")
        .format("console")
        .start();

//await termination
sq.awaitTermination();
I require the data to be typed as my custom schema shown above since I would be running mathematical calculations over it (for every new row combined with some older rows).
Is it better to synthesize headers in the Kafka Connector source task before pushing them onto the topic? Will having headers make this issue resolution simpler?
Thanks!
Given your existing code, the easiest way to parse your input from your dsRawData is to convert it to a Dataset<String> and then use the native csv reader api
//dsRawData has raw incoming data from Kafka...
Dataset<String> indivValues = dsRawData
        .selectExpr("CAST(value AS STRING)")
        .as(Encoders.STRING());

Dataset<Row> finalValues = sparkSession.read()
        .schema(schema)
        .option("delimiter", ",")
        .csv(indivValues);
With such a construct you can use exactly the same CSV parsing options that are available when directly reading a CSV file from Spark.
I have been able to resolve this now, via use of Spark SQL. The code for the solution is below.
//dsRawData has raw incoming data from Kafka...
Dataset<String> indivValues = dsRawData
        .selectExpr("CAST(value AS STRING)")
        .as(Encoders.STRING());

//create new columns, parse out the orig message and fill column with the values
Dataset<Row> dataAsSchema2 = indivValues
        .selectExpr("value",
                "split(value,',')[0] as time",
                "split(value,',')[1] as cname",
                "split(value,',')[2] as crole",
                "split(value,',')[3] as bname",
                "split(value,',')[4] as stage",
                "split(value,',')[5] as intid",
                "split(value,',')[6] as intname",
                "split(value,',')[7] as intcatid",
                "split(value,',')[8] as catname",
                "split(value,',')[9] as are_vval",
                "split(value,',')[10] as isee_vval",
                "split(value,',')[11] as opcode",
                "split(value,',')[12] as optype",
                "split(value,',')[13] as opname")
        .drop("value");
//remove any whitespaces as they interfere with data type conversions
dataAsSchema2 = dataAsSchema2
        .withColumn("intid", functions.regexp_replace(functions.col("intid"), " ", ""))
        .withColumn("intcatid", functions.regexp_replace(functions.col("intcatid"), " ", ""))
        .withColumn("are_vval", functions.regexp_replace(functions.col("are_vval"), " ", ""))
        .withColumn("isee_vval", functions.regexp_replace(functions.col("isee_vval"), " ", ""))
        .withColumn("opcode", functions.regexp_replace(functions.col("opcode"), " ", ""));
//change types to ready for calc
dataAsSchema2 = dataAsSchema2
        .withColumn("intcatid", functions.col("intcatid").cast(DataTypes.IntegerType))
        .withColumn("intid", functions.col("intid").cast(DataTypes.IntegerType))
        .withColumn("are_vval", functions.col("are_vval").cast(DataTypes.IntegerType))
        .withColumn("isee_vval", functions.col("isee_vval").cast(DataTypes.IntegerType))
        .withColumn("opcode", functions.col("opcode").cast(DataTypes.IntegerType));

//build a POJO dataset
Encoder<Pojoclass2> encoder = Encoders.bean(Pojoclass2.class);
Dataset<Pojoclass2> pjClass = new Dataset<Pojoclass2>(sparkSession, dataAsSchema2.logicalPlan(), encoder);

Why can't Spark properly load columns from HDFS? [duplicate]

This question already has answers here:
What is going wrong with `unionAll` of Spark `DataFrame`?
(5 answers)
Closed 4 years ago.
Below I provide my schema and the code that I use to read from partitions in hdfs.
An example of a partition could be this path: /home/maria_dev/data/key=key/date=19 jan (and of course inside this folder there's a csv file that contains cnt)
So, the data I have is partitioned by key and date columns.
When I read it like below the columns are not properly read, so cnt gets read into date and vice versa.
How can I resolve this?
private val tweetSchema = new StructType(Array(
  StructField("date", StringType, nullable = true),
  StructField("key", StringType, nullable = true),
  StructField("cnt", IntegerType, nullable = true)
))

// basePath example: /home/maria_dev/data
// path example: /home/maria_dev/data/key=key/date=19 jan
private def loadDF(basePath: String, path: String, format: String): DataFrame = {
  val df = spark.read
    .schema(tweetSchema)
    .format(format)
    .option("basePath", basePath)
    .load(path)
  df
}
I tried changing their order in the schema from (date, key, cnt) to (cnt, key, date) but it does not help.
My problem is that when I call union, it appends 2 dataframes:
df1: {(key: 1, date: 2)}
df2: {(date: 3, key: 4)}
into the final dataframe like this: {(key: 1, date: 2), (date: 3, key: 4)}. As you can see, the columns are messed up.
The schema should be in the following order:
Columns present in the data files as such - in case of CSV in the natural order from left to right.
Columns used with partitioning in the same order as defined by the directory structure.
So in your case the correct order will be:
new StructType(Array(
  StructField("cnt", IntegerType, nullable = true),
  StructField("key", StringType, nullable = true),
  StructField("date", StringType, nullable = true)
))
It turns out that everything was read properly.
So, now, instead of doing df1.union(df2), I do df1.select("key", "date").union(df2.select("key", "date")) and it works.
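If you are on Spark 2.3 or later, another option (not part of the original answer, so treat it as a sketch) is unionByName, which resolves columns by name rather than by position, so the explicit select is no longer needed:
// Both frames must have the same set of column names; their order no longer matters.
val combined = df1.unionByName(df2)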

How do I apply schema with nullable = false to json reading

I'm trying to write some test cases using json files for dataframes (whereas production would be parquet). I'm using spark-testing-base framework and I'm running into a snag when asserting data frames equal each other due to schema mismatches where the json schema always has nullable = true.
I'd like to be able to apply a schema with nullable = false to the json read.
I've written a small test case:
import com.holdenkarau.spark.testing.DataFrameSuiteBase
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
import org.scalatest.FunSuite

class TestJSON extends FunSuite with DataFrameSuiteBase {

  val expectedSchema = StructType(
    List(StructField("a", IntegerType, nullable = false),
      StructField("b", IntegerType, nullable = true))
  )

  test("testJSON") {
    val readJson =
      spark.read.schema(expectedSchema).json("src/test/resources/test.json")
    assert(readJson.schema == expectedSchema)
  }
}
And have a small test.json file of:
{"a": 1, "b": 2}
{"a": 1}
This returns an assertion failure of
StructType(StructField(a,IntegerType,true),
StructField(b,IntegerType,true)) did not equal
StructType(StructField(a,IntegerType,false),
StructField(b,IntegerType,true)) ScalaTestFailureLocation:
TestJSON$$anonfun$1 at (TestJSON.scala:15) Expected
:StructType(StructField(a,IntegerType,false),
StructField(b,IntegerType,true)) Actual
:StructType(StructField(a,IntegerType,true),
StructField(b,IntegerType,true))
Am I applying the schema the correct way?
I'm using spark 2.2, scala 2.11.8
There is a workaround: rather than reading the JSON directly from the file, read it into an RDD first and then apply the schema. Below is the code:
val expectedSchema = StructType(
  List(StructField("a", IntegerType, nullable = false),
    StructField("b", IntegerType, nullable = true))
)

test("testJSON") {
  val jsonRdd = spark.sparkContext.textFile("src/test/resources/test.json")
  //val readJson = sparksession.read.schema(expectedSchema).json("src/test/resources/test.json")
  val readJson = spark.read.schema(expectedSchema).json(jsonRdd)
  readJson.printSchema()
  assert(readJson.schema == expectedSchema)
}
The test case passes and the print schema result is :
root
|-- a: integer (nullable = false)
|-- b: integer (nullable = true)
There is a JIRA with Apache Spark for this issue, https://issues.apache.org/jira/browse/SPARK-10848, where they say it is not a problem, stating:
This should be resolved in the latest file format refactoring in Spark 2.0. Please reopen it if you still hit the problem. Thanks!
If you are getting the error you can open the JIRA again.
I tested in spark 2.1.0, and still see the same issue
The workaround above ensures there is a correct schema, but null values are set to default ones. In my case, when an Int does not exist in the JSON string it is set to 0.
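Another workaround sometimes used (a sketch, not from the answer above): read the file with the expected types, then rebuild the DataFrame from its RDD with the exact schema, which keeps the values intact while forcing the declared nullability. The usual caveat applies, as described earlier on this page: Spark still won't verify that the supposedly non-nullable columns are really null-free, and the encoder will throw at runtime if they aren't.
val readJson = spark.read.schema(expectedSchema).json("src/test/resources/test.json")
// Re-apply the schema, nullability flags included, by rebuilding the DataFrame.
val reTyped = spark.createDataFrame(readJson.rdd, expectedSchema)
reTyped.printSchema()
assert(reTyped.schema == expectedSchema)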

Why specifying schema to be DateType / TimestampType will make querying extremely slow?

I'm using spark-csv 1.1.0 and Spark 1.5. I make the schema as follows:
private def makeSchema(tableColumns: List[SparkSQLFieldConfig]): StructType = {
  new StructType(
    tableColumns.map(p => p.ColumnDataType match {
      case FieldDataType.Integer => StructField(p.ColumnName, IntegerType, nullable = true)
      case FieldDataType.Decimal => StructField(p.ColumnName, FloatType, nullable = true)
      case FieldDataType.String => StructField(p.ColumnName, StringType, nullable = true)
      case FieldDataType.DateTime => StructField(p.ColumnName, TimestampType, nullable = true)
      case FieldDataType.Date => StructField(p.ColumnName, DateType, nullable = true)
      case FieldDataType.Boolean => StructField(p.ColumnName, BooleanType, nullable = false)
      case _ => StructField(p.ColumnName, StringType, nullable = true)
    }).toArray
  )
}
But when there are DateType columns, my query with Dataframes will be very slow. (The queries are just simple groupby(), sum() and so on)
With the same dataset, after I commented out the two lines that map Date to DateType and DateTime to TimestampType (that is, mapped them to StringType instead), the queries became much faster.
What is the possible reason for this? Thank you very much!
We have found a possible answer for this problem.
When you simply specify a column to be DateType or TimestampType, spark-csv will try to parse the dates with all of its internal formats for each line of every row, which makes the parsing process much slower.
From its official documentation, it seems that we can specify the date format as an option. I suppose that should make the parsing process much faster.
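A hedged sketch of what that could look like with the spark-csv reader, assuming your spark-csv version documents the dateFormat option (check the README for the release you use); the file path and the pattern below are placeholders, and makeSchema/tableColumns come from the question above:
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  // One explicit pattern, so the parser does not have to probe all of its
  // internal date formats for every row.
  .option("dateFormat", "yyyy-MM-dd HH:mm:ss")
  .schema(makeSchema(tableColumns))
  .load("path/to/data.csv")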
