Unable to directly load hive parquet table using spark dataframe - apache-spark

I have gone through the related posts on SO and couldn't find this specific issue anywhere else on the internet.
I am trying to load a Hive table (an external table pointing to Parquet files), but the Spark DataFrame can only read the schema, not the data. The same table queries fine from the Hive shell; loading it into a DataFrame returns no rows. Below are my script and the DDL. I am using Spark 2.1 (MapR distribution).
Unable to read data from a Hive table with underlying Parquet files from Spark
val df4 = spark.sql("select * from default.Tablename")
scala> df4.show()
+----------------------+------------------------+----------+---+-------------+-------------+---------+
|col1 |col2 |col3 |key |col4| record_status|source_cd|
+----------------------+------------------------+----------+---+-------------+-------------+---------+
+----------------------+------------------------+----------+---+-------------+-------------+---------+
Hive DDL
CREATE EXTERNAL TABLE `Tablename`(
`col1` string,
`col2` string,
`col3` decimal(19,0),
`key` string,
`col6` string,
`record_status` string,
`source_cd` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'path'='maprfs:abc/bds/dbname.db/Tablename')
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'maprfs:/Datalocation/Tablename'
TBLPROPERTIES (
'numFiles'='2',
'spark.sql.sources.provider'='parquet',
'spark.sql.sources.schema.numParts'='1',
'spark.sql.sources.schema.part.0'='{\"type\":\"struct\",\"fields\":[{\"name\":\"col1\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"col2\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"col3\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"key\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"col6\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"record_status\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"source_cd\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}]}',
'totalSize'='68216',
'transient_lastDdlTime'='1502904476')

Remove
'spark.sql.sources.provider'='parquet'
from the table properties and it will work.
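A minimal sketch of that fix, assuming your Hive/Spark setup supports ALTER TABLE ... UNSET TBLPROPERTIES (the same statement can be issued from the Hive shell if Spark refuses to alter its own datasource properties):
// Sketch: drop the Spark datasource marker so the table is read as a plain
// Hive Parquet table from its LOCATION rather than through the stale
// 'path' serde property. Can be run from spark-shell or the Hive shell.
spark.sql("ALTER TABLE default.Tablename UNSET TBLPROPERTIES ('spark.sql.sources.provider')")

// Verify: the query from the question should now return rows.
spark.sql("select * from default.Tablename").show()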

Related

Spark write to Hive mistaken table_name as Partition spec and throws "Partition spec contains non-partition columns" error

My Hive table was defined with PARTITIONED BY (ds STRING, model STRING)
And when writing to the table in PySpark, I did
output_df \
    .repartition(250) \
    .write \
    .mode('overwrite') \
    .format('parquet') \
    .partitionBy('ds', 'model') \
    .saveAsTable('{table_schema}.{table_name}'.format(table_schema=table_schema,
                                                      table_name=table_name))
However I encountered the following error:
org.apache.hadoop.hive.ql.metadata.Table.ValidationFailureSemanticException: Partition spec {ds=2019-10-06, model=p1kr, table_name=drv_projection_table} contains non-partition columns
It seems Spark or Hive mistook table_name for a partition. My S3 path for the table is s3://some_path/qubole/table_name=drv_projection_table, but table_name wasn't specified as part of the partitioning.
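This is consistent with Spark's partition discovery, which treats any key=value directory under a read path as a partition column. A minimal sketch of that mechanism (paths are illustrative, not the actual bucket layout):
// Illustrative only: reading the parent prefix makes partition discovery turn the
// key=value directory names below it into columns, so `table_name` would surface as
// a partition column even though it is not declared in PARTITIONED BY (ds, model).
val discovered = spark.read.parquet("s3://some_path/qubole/")
discovered.printSchema()
// ... data columns ...
// |-- table_name: string   <- inferred from the table_name=drv_projection_table directory
// |-- ds: string
// |-- model: string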

Error while exchanging partition in hive tables

I am trying to merge the incremental data with an existing hive table.
For testing I created a dummy table from the base table as below:
create table base.dummytable like base.fact_table
The table base.fact_table is partitioned on dbsource (string).
When I checked the dummy table's DDL, I could see that the partition column is correctly defined.
PARTITIONED BY (
  `dbsource` string)
Then I tried to exchange one of the partitions of the dummy table, dropping it first.
spark.sql("alter table base.dummy drop partition(dbsource='NEO4J')")
The NEO4J partition dropped successfully, and I ran the exchange statement as below:
spark.sql("ALTER TABLE base.dummy EXCHANGE PARTITION (dbsource = 'NEO4J') WITH TABLE stg.inc_labels_neo4jdata")
The exchange statement is giving an error:
Error: Error while compiling statement: FAILED: ValidationFailureSemanticException table is not partitioned but partition spec exists: {dbsource=NEO4J}
The table I am trying to push the incremental data into is partitioned by dbsource, and I have dropped that partition successfully.
I am running this from spark code and the config is given below:
val conf = new SparkConf().setAppName("MERGER").set("spark.executor.heartbeatInterval", "120s")
.set("spark.network.timeout", "12000s")
.set("spark.sql.inMemoryColumnarStorage.compressed", "true")
.set("spark.shuffle.compress", "true")
.set("spark.shuffle.spill.compress", "true")
.set("spark.sql.orc.filterPushdown", "true")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.set("spark.kryoserializer.buffer.max", "512m")
.set("spark.serializer", classOf[org.apache.spark.serializer.KryoSerializer].getName)
.set("spark.streaming.stopGracefullyOnShutdown", "true")
.set("spark.dynamicAllocation.enabled", "false")
.set("spark.shuffle.service.enabled", "true")
.set("spark.executor.instances", "4")
.set("spark.executor.memory", "4g")
.set("spark.executor.cores", "5")
.set("hive.merge.sparkfiles","true")
.set("hive.merge.mapfiles","true")
.set("hive.merge.mapredfiles","true")
show create table base.dummy:
CREATE TABLE `base`.`dummy`(
`dff_id` bigint,
`dff_context_id` bigint,
`descriptive_flexfield_name` string,
`model_table_name` string)
PARTITIONED BY (`dbsource` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'/apps/hive/warehouse/base.db/dummy'
TBLPROPERTIES (
'orc.compress'='ZLIB')
show create table stg.inc_labels_neo4jdata:
CREATE TABLE `stg`.`inc_labels_neo4jdata`(
`dff_id` bigint,
`dff_context_id` bigint,
`descriptive_flexfield_name` string,
`model_table_name` string,
`dbsource` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'/apps/hive/warehouse/stg.db/inc_labels_neo4jdata'
TBLPROPERTIES (
'orc.compress'='ZLIB')
Could anyone let me know what mistake I am making here and what I should change in order to successfully exchange the partition?
My take on this error is that the table stg.inc_labels_neo4jdata is not partitioned the way base.dummy is, and therefore there's no partition to move.
From Hive documentation:
This statement lets you move the data in a partition from a table to
another table that has the same schema and does not already have that
partition.
You can check the Hive DDL manual for EXCHANGE PARTITION, and the JIRA where this feature was added to Hive, which reads:
This only works if the destination table and the source table have the
same field schemas and the same partition-by parameters. If they do not,
the command will throw an exception.
You basically need to have exactly the same schema on both source_table and destination_table.
Per your last edit, this is not the case.
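A minimal sketch of what fixing that could look like against the tables in the question (the partitioned staging table name is illustrative): the source side of the exchange has to carry the same columns and the same PARTITIONED BY (dbsource) clause, and actually hold the NEO4J partition, before the exchange can move it into base.dummy.
// Recreate the staging table with the same columns AND the same partitioning
// as base.dummy (sketch; adjust storage format and location to your environment).
spark.sql("""
  CREATE TABLE stg.inc_labels_neo4jdata_part (
    dff_id bigint,
    dff_context_id bigint,
    descriptive_flexfield_name string,
    model_table_name string)
  PARTITIONED BY (dbsource string)
  STORED AS ORC
""")

// Load the increment into the NEO4J partition of the staging table
// (assumes the unpartitioned stg.inc_labels_neo4jdata holds only NEO4J rows).
spark.sql("""
  INSERT OVERWRITE TABLE stg.inc_labels_neo4jdata_part PARTITION (dbsource='NEO4J')
  SELECT dff_id, dff_context_id, descriptive_flexfield_name, model_table_name
  FROM stg.inc_labels_neo4jdata
""")

// Both tables now have identical schemas and partition columns, so the
// exchange from the question should compile.
spark.sql("""
  ALTER TABLE base.dummy EXCHANGE PARTITION (dbsource='NEO4J')
  WITH TABLE stg.inc_labels_neo4jdata_part
""")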

hive external table on parquet not fetching data

I am trying to create a data pipeline where the incoming data is stored as Parquet, and I create an external Hive table so users can query it and retrieve the data. I can save the Parquet data and read it back directly, but when I query the Hive table it returns no rows. I did the following test setup.
--CREATE EXTERNAL HIVE TABLE
create external table emp (
id double,
hire_dt timestamp,
user string
)
stored as parquet
location '/test/emp';
Then I created a dataframe over some data and saved it to Parquet.
---Create dataframe and insert DATA
import org.apache.spark.sql.functions.col

val employeeDf = Seq(("1", "2018-01-01","John"),("2","2018-12-01", "Adam")).toDF("id","hire_dt","user")
val schema = List(("id", "double"), ("hire_dt", "date"), ("user", "string"))
val newCols= schema.map ( x => col(x._1).cast(x._2))
val newDf = employeeDf.select(newCols:_*)
newDf.write.mode("append").parquet("/test/emp")
newDf.show
--read the contents directly from parquet
val sqlcontext=new org.apache.spark.sql.SQLContext(sc)
sqlcontext.read.parquet("/test/emp").show
+---+----------+----+
| id| hire_dt|user|
+---+----------+----+
|1.0|2018-01-01|John|
|2.0|2018-12-01|Adam|
+---+----------+----+
--read from the external hive table
spark.sql("select id,hire_dt,user from emp").show(false)
+---+-------+----+
|id |hire_dt|user|
+---+-------+----+
+---+-------+----+
As shown above, I can see the data if I read the Parquet directly, but not through Hive. What am I doing wrong here? Why isn't Hive picking up the data? I thought msck repair might be needed, but when I try msck repair table I get an error saying the table is not partitioned.
Based on your create table statement, you have used /test/emp as the location, but while writing data you are writing to /tenants/gwm/idr/emp. So you will not have data at /test/emp.
create external table emp ( id double, hire_dt timestamp, user string ) stored as parquet location '/test/emp';
Please re-create the external table as:
create external table emp ( id double, hire_dt timestamp, user string ) stored as parquet location '/tenants/gwm/idr/emp';
Apart from the answer given by Ramdev below, you also need to be careful to use the correct datatype around date/timestamp; the 'date' type is not supported by Parquet when creating a Hive table.
For that, you can change the 'date' type of the column 'hire_dt' to 'timestamp'.
Otherwise there will be a mismatch between the data you persist through Spark and what you try to read back in Hive (or Hive SQL). Keeping it as 'timestamp' in both places will resolve the issue. I hope it helps.
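For reference, a minimal sketch of that change against the question's cast list (same names as in the question): hire_dt is cast to timestamp instead of date so it matches the Hive column type.
import org.apache.spark.sql.functions.col

// Same cast list as in the question, but hire_dt is cast to timestamp
// (matching the Hive DDL) rather than date.
val schema = List(("id", "double"), ("hire_dt", "timestamp"), ("user", "string"))
val newCols = schema.map(x => col(x._1).cast(x._2))
val newDf = employeeDf.select(newCols: _*)
newDf.write.mode("append").parquet("/test/emp")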
Do you have enableHiveSupport() in your SparkSession builder() statement? Are you able to connect to the Hive metastore? Try running show tables / show databases in your code to see whether you can list the tables present in your Hive database.
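For example, a minimal sketch of that check (the app name is illustrative):
import org.apache.spark.sql.SparkSession

// Build the session with Hive support so spark.sql() talks to the Hive metastore.
val spark = SparkSession.builder()
  .appName("hive-connectivity-check")   // illustrative name
  .enableHiveSupport()
  .getOrCreate()

// Sanity checks: can we list databases and see the emp table at all?
spark.sql("show databases").show(false)
spark.sql("show tables").show(false)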
I got this working with the change below.
val dfTransformed = employeeDf.withColumn("id", employeeDf.col("id").cast(DoubleType))
  .withColumn("hire_dt", employeeDf.col("hire_dt").cast(TimestampType))
So basically the issue was a datatype mismatch, and somehow the cast in the original code didn't seem to work. I did an explicit cast before the write, and then it goes through fine and can be queried back as well. Logically both are doing the same thing; I'm not sure why the original code didn't work.
import org.apache.spark.sql.types.{DoubleType, TimestampType}

val employeeDf = Seq(("1", "2018-01-01","John"),("2","2018-12-01", "Adam")).toDF("id","hire_dt","user")
val dfTransformed = employeeDf.withColumn("id", employeeDf.col("id").cast(DoubleType))
  .withColumn("hire_dt", employeeDf.col("hire_dt").cast(TimestampType))
dfTransformed.write.mode("append").parquet("/test/emp")
dfTransformed.show
--read the contents directly from parquet
val sqlcontext=new org.apache.spark.sql.SQLContext(sc)
sqlcontext.read.parquet("/test/emp").show
+---+----------+----+
| id| hire_dt|user|
+---+----------+----+
|1.0|2018-01-01|John|
|2.0|2018-12-01|Adam|
+---+----------+----+
--read from the external hive table
spark.sql("select id,hire_dt,user from emp").show(false)
+---+----------+----+
| id| hire_dt|user|
+---+----------+----+
|1.0|2018-01-01|John|
|2.0|2018-12-01|Adam|
+---+----------+----+

.csv not a SequenceFile Failed with exception java.io.IOException:java.io.IOException

Creating an external table with partitions in Hive using Spark in CSV format (com.databricks.spark.csv) works fine, but I am not able to read the created table, which is in .csv format, from the Hive shell.
ERROR
hive> select * from output.candidatelist;
Failed with exception java.io.IOException:java.io.IOException: hdfs://10.19.2.190:8020/biometric/event=ABCD/LabName=500098A/part-00000-de39bb3d-0548-4db6-b8b7-bb57739327b4.c000.csv not a SequenceFile
Code:
val sparkDf = spark.read.format("com.databricks.spark.csv").option("header", "true").option("nullValue", "null").schema(StructType(Array(StructField("RollNo/SeatNo", StringType, true), StructField("LabName", StringType, true)))).option("multiLine", "true").option("mode", "DROPMALFORMED").load("hdfs://10.19.2.190:8020/biometric/SheduleData_3007_2018.csv")
sparkDf.write.mode(SaveMode.Overwrite).option("path", "hdfs://10.19.2.190:8020/biometric/event=ABCD/").partitionBy("LabName").format("com.databricks.spark.csv").saveAsTable("output.candidateList")
How can I access the table from the Hive shell when the format of the table is CSV?
SHOW CREATE TABLE candidatelist;
CREATE EXTERNAL TABLE `candidatelist`(
`col` array<string> COMMENT 'from deserializer')
PARTITIONED BY (
`centercode` string,
`examdate` date)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('path'='hdfs://10.19.2.190:8020/biometric/output')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.SequenceFileInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
LOCATION
'hdfs://nnuat.iot.com:8020/apps/hive/warehouse/sify_cvs_output.db/candidatelist-__PLACEHOLDER__'
TBLPROPERTIES (
'spark.sql.create.version'='2.3.0.2.6.5.0-292',
'spark.sql.partitionProvider'='catalog',
'spark.sql.sources.provider'='com.databricks.spark.csv',
'spark.sql.sources.schema.numPartCols'='2',
'spark.sql.sources.schema.numParts'='1',
'spark.sql.sources.schema.part.0'='{\"type\":\"struct\",\"fields\":[{\"name\":\"RollNo/SeatNo\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"LabName\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Student_Name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"ExamName\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"ExamTime\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Center\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"CenterCode\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"ExamDate\",\"type\":\"date\",\"nullable\":true,\"metadata\":{}}]}',
'spark.sql.sources.schema.partCol.0'='CenterCode',
'spark.sql.sources.schema.partCol.1'='ExamDate',
'transient_lastDdlTime'='1535692379')

Spark timestamp type is not getting accepted with hive timestamp

I have a Spark DataFrame that contains a timestamp field. I am storing the DataFrame to an HDFS location over which a Hive external table is created. The Hive table declares the field with timestamp type, but when reading the data from the external location, Hive populates the timestamp field with blank values.
my spark dataframe query:
df.select($"ipAddress", $"clientIdentd", $"userId", to_timestamp(unix_timestamp($"dateTime", "dd/MMM/yyyy:HH:mm:ss Z").cast("timestamp")).as("dateTime"), $"method", $"endpoint", $"protocol", $"responseCode", $"contentSize", $"referrerURL", $"browserInfo")
Hive create table statement:
CREATE EXTERNAL TABLE `finalweblogs3`(
`ipAddress` string,
`clientIdentd` string,
`userId` string,
`dateTime` timestamp,
`method` string,
`endpoint` string,
`protocol` string,
`responseCode` string,
`contentSize` string,
`referrerURL` string,
`browserInfo` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'=',',
'serialization.format'=',')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://localhost:9000/streaming/spark/finalweblogs3'
I am not able to figure out why this is happening.
I resolved it by changing the storage format to Parquet.
I still don't know why it is not working with the CSV (delimited text) format.
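A minimal sketch of that switch, assuming the same dataframe and HDFS location as in the question (the new table name is illustrative): write the dataframe as Parquet and declare the external table STORED AS PARQUET instead of the delimited-text serde.
import org.apache.spark.sql.functions.{to_timestamp, unix_timestamp}
import spark.implicits._

// `out` (illustrative name) is the result of the question's df.select(...) projection.
val out = df.select($"ipAddress", $"clientIdentd", $"userId",
  to_timestamp(unix_timestamp($"dateTime", "dd/MMM/yyyy:HH:mm:ss Z").cast("timestamp")).as("dateTime"),
  $"method", $"endpoint", $"protocol", $"responseCode", $"contentSize",
  $"referrerURL", $"browserInfo")

// Write as Parquet to the external table's location instead of CSV/text.
out.write.mode("append").parquet("hdfs://localhost:9000/streaming/spark/finalweblogs3")

// Matching table definition (sketch; finalweblogs3_parquet is an illustrative name):
// same columns, but STORED AS PARQUET so Hive reads dateTime as a real timestamp.
spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS finalweblogs3_parquet (
    `ipAddress` string, `clientIdentd` string, `userId` string,
    `dateTime` timestamp, `method` string, `endpoint` string,
    `protocol` string, `responseCode` string, `contentSize` string,
    `referrerURL` string, `browserInfo` string)
  STORED AS PARQUET
  LOCATION 'hdfs://localhost:9000/streaming/spark/finalweblogs3'
""")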
