I am trying to write a PySpark dataframe to a Hive table, which is also created by the line below:
parks_df.write.mode("overwrite").saveAsTable("fs.PARKS_TNTO")
When I print the count of the dataframe with parks_df.count(), I get 1000 records.
But in the final table fs.PARKS_TNTO, I get only 980 records, so 20 records are getting dropped. How can I resolve this issue? Also, how can I capture the records that are getting dropped? There are no partitions on the final table fs.PARKS_TNTO.
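Not part of the original question, but one hedged way to capture the dropped rows: read the written table back and diff it against the source dataframe. This sketch assumes exact row-level comparison is meaningful for the data; exceptAll needs Spark 2.4+, otherwise subtract can be used instead.
written_df = spark.table("fs.PARKS_TNTO")    # read the table back after the write
dropped = parks_df.exceptAll(written_df)     # rows present in the source but missing from the table
print(dropped.count())
dropped.show(20, truncate=False)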
I am using Hive LLAP (https://github.com/hortonworks-spark/spark-llap) in PySpark to read a Hive internal table like this:
df = hive.table(<tableName>)
The issue is that my table has 18 million records, but when I do
df.count()
I get only 7.5 million as the count, which is wrong.
You might have to refresh Spark's table metadata; Spark caches metadata separately and does not automatically pick up changes made through the Hive metastore, so the statistics it sees may be stale.
You can refresh it from PySpark like this:
spark.sql("REFRESH TABLE <TABLE_NAME>")
I get output from a Spark SQL query even though the actual Hive table doesn't contain those values in the queried column. The Hive table is partitioned by an integer column date_nbr, which contained values like 20181125 and 20181005. For some reason I had to truncate the table (note: I did not delete the partition directories in HDFS) and reload it for the week date_nbr=20181202.
After the data load I ran the query below in Hive and got the expected result:
SELECT DISTINCT date_nbr FROM transdb.temp
date_nbr
20181202
But Spark SQL doesn't give the same output as Hive:
scala> spark.sql("SELECT DISTINCT date_nbr FROM transdb.temp").map(_.getAs[Int](0)).collect.toList
res9: List[Int] = List(20181125, 20181005, 20181202)
I'm a bit confused by the Spark SQL result.
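Not an answer, but as a first check along the lines of the REFRESH TABLE suggestion earlier in this thread, it may be worth clearing Spark's cached metadata for the table before re-running the query (the spark.sql call looks the same in PySpark and in the Scala shell):
spark.sql("REFRESH TABLE transdb.temp")   # discard Spark's cached metadata/file listing for this table
spark.sql("SELECT DISTINCT date_nbr FROM transdb.temp").show()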
After reading up on this answer, I know the number of partitions when reading data from Hive is decided by the HDFS block size.
But I've hit a problem: I use Spark SQL to read a Hive table and save the data to a new Hive table, but the two tables have different partition numbers when loaded by Spark SQL.
val data = spark.sql("select * from src_table")
val partitionsNum = data.rdd.getNumPartitions
println(partitionsNum)
val newData = data
newData.write.mode("overwrite").format("parquet").saveAsTable("new_table")
I don't understand why the same data ends up with different partition numbers.
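Not an explanation of the difference, but if the goal is simply to control how many partitions (and therefore files) the new table is written with, one hedged option is to repartition before writing (PySpark sketch using the same table names as the Scala snippet above; the number 10 is arbitrary):
data = spark.sql("select * from src_table")
data.repartition(10).write.mode("overwrite").format("parquet").saveAsTable("new_table")
print(spark.table("new_table").rdd.getNumPartitions())   # count when reading back still depends on file/block sizes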
I'm having an issue writing a Hive table from Spark. The following code works just fine; I can write the table (which defaults to the Parquet format) and read it back in Hive:
df.write.mode('overwrite').saveAsTable("db.table")
hive> describe table;
OK
val string
Time taken: 0.021 seconds, Fetched: 1 row(s)
However, if I specify the format should be csv:
df.write.mode('overwrite').format('csv').saveAsTable("db.table")
then I can save the table, but Hive doesn't recognize the schema:
hive> describe table;
OK
col array<string> from deserializer
Time taken: 0.02 seconds, Fetched: 1 row(s)
It's also worth noting that I can create a Hive table manually and then insertInto it:
spark.sql("create table db.table(val string)")
df.select('val').write.mode("overwrite").insertInto("db.table")
Doing it this way, Hive recognizes the schema. But that's clunky, and I can't figure out a way to automate generating the schema string anyway.
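Not part of the original question, but for the "automate the schema string" part, a hedged sketch: the column list for the CREATE TABLE statement can be generated from the dataframe's own schema (this assumes the Spark type names map cleanly onto Hive types, which holds for common primitives like string and int):
# build "col1 type1, col2 type2, ..." from the dataframe schema
cols = ", ".join("{} {}".format(f.name, f.dataType.simpleString()) for f in df.schema.fields)
spark.sql("create table db.table ({})".format(cols))
df.write.mode("overwrite").insertInto("db.table")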
That is because the Hive SerDe does not support csv by default.
If you insist on using the csv format, create the table as below:
CREATE TABLE my_table(a string, b string, ...)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "\t",
"quoteChar" = "'",
"escapeChar" = "\\"
)
STORED AS TEXTFILE;
Then insert the data through df.write.insertInto.
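A minimal PySpark sketch of the two steps together (the table and column names are placeholders, and the Spark session needs Hive support enabled):
spark.sql("""
    CREATE TABLE my_table (a string, b string)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    WITH SERDEPROPERTIES ("separatorChar" = ",")
    STORED AS TEXTFILE
""")
df.select("a", "b").write.insertInto("my_table")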
For more info:
https://cwiki.apache.org/confluence/display/Hive/CSV+Serde
You are creating a table with text format and trying to insert CSV data into it, which may run into problems. So, as suggested in the answer by Zhang Tong, create the Hive table using the Hive OpenCSVSerde.
After that, if you are more comfortable with the Hive query language than with dataframes, you can try this:
df.createOrReplaceTempView("temp")   # registerTempTable is the older, deprecated name for this
spark.sql("insert overwrite table db.table select * from temp")
This happens because the Hive SerDe used for csv is different from what Spark uses. Hive by default uses the TEXTFILE format, and the delimiter has to be specified while creating the table.
One option is to use the insertInto API instead of saveAsTable when writing from Spark. With insertInto, Spark writes the contents of the dataframe to the specified table, but it requires the schema of the dataframe to be the same as the schema of the table. The position of the columns is important here, as insertInto ignores the column names; for example:
Seq((5, 6)).toDF("a", "b").write.insertInto("t1")