This is a follow-up to Save Spark dataframe as dynamic partitioned table in Hive. I tried to use the suggestions in the answers but couldn't make them work in Spark 1.6.1.
I am trying to create partitions programmatically from a `DataFrame`. Here is the relevant code (adapted from a Spark test):
hc.setConf("hive.metastore.warehouse.dir", "tmp/tests")
// hc.setConf("hive.exec.dynamic.partition", "true")
// hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hc.sql("create database if not exists tmp")
hc.sql("drop table if exists tmp.partitiontest1")
Seq(2012 -> "a").toDF("year", "val")
.write
.partitionBy("year")
.mode(SaveMode.Append)
.saveAsTable("tmp.partitiontest1")
hc.sql("show partitions tmp.partitiontest1").show
Full file is here: https://gist.github.com/SashaOv/7c65f03a51c7e8f9c9e018cd42aa4c4a
Partitioned files are created fine on the file system but Hive complains that the table is not partitioned:
======================
HIVE FAILURE OUTPUT
======================
SET hive.support.sql11.reserved.keywords=false
SET hive.metastore.warehouse.dir=tmp/tests
OK
OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Table tmp.partitiontest1 is not a partitioned table
======================
It looks like the root cause is that org.apache.spark.sql.hive.HiveMetastoreCatalog.newSparkSQLSpecificMetastoreTable always creates a table with empty partitions.
Any help to move this forward is appreciated.
EDIT: also created SPARK-14927
I found a workaround: if you pre-create the table then saveAsTable() won't mess with it. So the following works:
hc.setConf("hive.metastore.warehouse.dir", "tmp/tests")
// hc.setConf("hive.exec.dynamic.partition", "true")
// hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hc.sql("create database if not exists tmp")
hc.sql("drop table if exists tmp.partitiontest1")
// Added line:
hc.sql("create table tmp.partitiontest1(val string) partitioned by (year int)")
Seq(2012 -> "a").toDF("year", "val")
.write
.partitionBy("year")
.mode(SaveMode.Append)
.saveAsTable("tmp.partitiontest1")
hc.sql("show partitions tmp.partitiontest1").show
This workaround works in 1.6.1 but not in 1.5.1.
Related
Let's assume I have a streaming dataframe, and I'm writing it to Databricks Delta Lake:
someStreamingDf.writeStream
.format("delta")
.outputMode("append")
.start("targetPath")
and then creating a delta table out of it:
spark.sql("CREATE TABLE <TBL_NAME> USING DELTA LOCATION '<targetPath>'
TBLPROPERTIES ('delta.autoOptimize.optimizeWrite'=true)")
which fails with AnalysisException: The specified properties do not match the existing properties at <targetPath>.
I know I can create a table beforehand:
CREATE TABLE <TBL_NAME> (
  -- columns
)
USING DELTA LOCATION '<targetPath>'
TBLPROPERTIES (
  'delta.autoOptimize.optimizeWrite' = true,
  ....
)
and then just write to it, but writing this SQL with all the columns and their types seems like a bit of extra/unnecessary work. So is there a way to specify these TBLPROPERTIES while writing to a delta table (for the first time) and not beforehand?
If you look into the documentation, you can see that you can set the following property:
spark.conf.set(
"spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite", "true")
and then all newly created tables will have delta.autoOptimize.optimizeWrite set to true.
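For the scenario in the question, a minimal Scala sketch might look like the following (the table name is a placeholder, and this assumes the default is set before the streaming write first creates the Delta table at targetPath):
// Set the session default before the Delta table at targetPath is first created
spark.conf.set(
  "spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite", "true")
someStreamingDf.writeStream
  .format("delta")
  .outputMode("append")
  .start("targetPath")
// Registering the table afterwards should then need no TBLPROPERTIES clause
spark.sql("CREATE TABLE my_delta_table USING DELTA LOCATION 'targetPath'")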
Another approach: create the table without the option, and then try to do ALTER TABLE ... SET TBLPROPERTIES (not tested, though).
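A rough Scala sketch of that second approach, with placeholder names and untested as noted above:
// Create the table over the existing Delta location without TBLPROPERTIES...
spark.sql("CREATE TABLE my_delta_table USING DELTA LOCATION '<targetPath>'")
// ...and then try to set the property afterwards
spark.sql("ALTER TABLE my_delta_table SET TBLPROPERTIES ('delta.autoOptimize.optimizeWrite' = 'true')")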
Scenario:
Store a Hudi Spark dataframe using the saveAsTable (DataFrameWriter) method, such that a Hudi-supported table with the org.apache.hudi.hadoop.HoodieParquetInputFormat input format and schema is automatically generated.
Currently, saveAsTable works fine with a normal (non-Hudi) table, which generates the default input format.
I want to automate Hudi table creation with the supported input file format, either with some overridden version of saveAsTable or some other way, staying within Spark.
Hudi DOES NOT support saveAsTable yet.
You have two options to sync hudi tables with a hive metastore:
Sync inside spark
val hudiOptions = Map[String,String](
...
DataSourceWriteOptions.HIVE_URL_OPT_KEY -> "jdbc:hive2://<thrift server host>:<port>",
DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true",
DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY -> "<the database>",
DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> "<the table>",
DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "<the partition field>",
DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY -> classOf[MultiPartKeysValueExtractor].getName
...
)
// Write the DataFrame as a Hudi dataset
// it will appear in hive (similar to saveAsTable..)
test_parquet_partition.write
.format("org.apache.hudi")
.option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL)
.options(hudiOptions)
.mode(SaveMode.Overwrite)
.save(hudiTablePath)
Sync outside spark
Use the bash script after running your Hudi Spark transformations (see the Hudi documentation):
cd hudi-hive
./run_sync_tool.sh --jdbc-url jdbc:hive2://hiveserver:10000 --user hive --pass hive --partitioned-by partition --base-path <basePath> --database default --table <tableName>
Conf
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "true")
Hive table
spark.sql("create table table_name (ip string, user string) PARTITIONED BY (date date) STORED AS PARQUET")
InsertInto
df.write.insertInto("table_name", overwrite=True)
Error
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1
Btw, inserting into an ORC table works fine. Running on a cluster in client mode.
Is your hive-site.xml file present in the Spark config folder?
Edit:
Can you try with:
df.write.mode("overwrite").partitionBy("date").saveAsTable("db.table_name")
It should not be necessary to set any configuration beforehand or to run the SQL create statement.
I am trying to insert data into a Hive External table from Spark Sql.
I created the Hive external table through the following command:
CREATE EXTERNAL TABLE tab1 ( col1 type,col2 type ,col3 type) CLUSTERED BY (col1,col2) SORTED BY (col1) INTO 8 BUCKETS STORED AS PARQUET
In my Spark job, I have written the following code:
Dataset<Row> df = session.read().option("header", "true").csv(csvInput);
df.repartition(numBuckets, somecol)
.write()
.format("parquet")
.bucketBy(numBuckets,col1,col2)
.sortBy(col1)
.saveAsTable(hiveTableName);
Each time I run this code I get the following exception:
org.apache.spark.sql.AnalysisException: Table `tab1` already exists.;
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:408)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
at somepackage.Parquet_Read_WriteNew.writeToParquetHiveMetastore(Parquet_Read_WriteNew.java:100)
You should specify a save mode while saving the data into Hive.
df.write.mode(SaveMode.Append)
.format("parquet")
.bucketBy(numBuckets,col1,col2)
.sortBy(col1)
.insertInto(hiveTableName);
Spark provides the following save modes:
ErrorIfExists: Throws an exception if the target already exists. If the target doesn't exist, write the data out.
Append: If the target already exists, append the data to it. If the target doesn't exist, write the data out.
Overwrite: If the target already exists, delete the target, then write the data out.
Ignore: If the target already exists, silently skip the write. Otherwise write the data out.
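For reference, a minimal Scala sketch of how each mode is selected on the writer (df and the output path are placeholders; each line is an alternative, not a sequence):
import org.apache.spark.sql.SaveMode
df.write.mode(SaveMode.ErrorIfExists).parquet("/tmp/out") // default: fail if the target exists
df.write.mode(SaveMode.Append).parquet("/tmp/out")        // add the new data to the existing target
df.write.mode(SaveMode.Overwrite).parquet("/tmp/out")     // replace the existing target
df.write.mode(SaveMode.Ignore).parquet("/tmp/out")        // silently skip the write if the target exists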
You are using the saveAsTable API, which creates the table in Hive. Since you have already created the Hive table through the command above, the table tab1 already exists, so when the Spark API tries to create it, it throws an error saying the table already exists: org.apache.spark.sql.AnalysisException: Table `tab1` already exists.
Either drop the table and let the Spark saveAsTable API create the table itself (a sketch of this is shown after the insertInto example below).
Or use the insertInto API to insert into an existing Hive table:
df.repartition(numBuckets, somecol)
.write()
.format("parquet")
.bucketBy(numBuckets,col1,col2)
.sortBy(col1)
.insertInto(hiveTableName);
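And a rough Scala sketch of the first option, dropping the pre-created table and letting saveAsTable define it (names are taken from the question and are placeholders):
spark.sql("DROP TABLE IF EXISTS tab1")
df.repartition(numBuckets, somecol)
  .write
  .format("parquet")
  .bucketBy(numBuckets, col1, col2)
  .sortBy(col1)
  .saveAsTable("tab1")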
I'd like to save data in a Spark (v 1.3.0) dataframe to a Hive table using PySpark.
The documentation states:
"spark.sql.hive.convertMetastoreParquet: When set to false, Spark SQL will use the Hive SerDe for parquet tables instead of the built in support."
Looking at the Spark tutorial, it seems that this property can be set:
from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)
sqlContext.sql("SET spark.sql.hive.convertMetastoreParquet=false")
# code to create dataframe
my_dataframe.saveAsTable("my_dataframe")
However, when I try to query the saved table in Hive it returns:
hive> select * from my_dataframe;
OK
Failed with exception java.io.IOException:java.io.IOException:
hdfs://hadoop01.woolford.io:8020/user/hive/warehouse/my_dataframe/part-r-00001.parquet
not a SequenceFile
How do I save the table so that it's immediately readable in Hive?
I've been there...
The API is kinda misleading on this one.
DataFrame.saveAsTable does not create a Hive table, but an internal Spark table source.
It also stores something into Hive metastore, but not what you intend.
This remark was made on the spark-user mailing list regarding Spark 1.3.
If you wish to create a Hive table from Spark, you can use this approach:
1. Use Create Table ... via SparkSQL for Hive metastore.
2. Use DataFrame.insertInto(tableName, overwriteMode) for the actual data (Spark 1.3)
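A minimal Scala sketch of those two steps against a Spark 1.3 HiveContext (column names are placeholders):
// 1. Create the Hive table through SparkSQL so the metastore gets a real Hive definition
sqlContext.sql("CREATE TABLE my_dataframe (id INT, value STRING) STORED AS PARQUET")
// 2. Load the data; the second argument is the overwrite flag (Spark 1.3 API)
my_dataframe.insertInto("my_dataframe", false)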
I hit this issue last week and was able to find a workaround
Here's the story:
I can see the table in Hive if I created the table without partitionBy:
spark-shell>someDF.write.mode(SaveMode.Overwrite)
.format("parquet")
.saveAsTable("TBL_HIVE_IS_HAPPY")
hive> desc TBL_HIVE_IS_HAPPY;
OK
user_id string
email string
ts string
But Hive can't understand the table schema (the schema is empty...) if I do this:
spark-shell>someDF.write.mode(SaveMode.Overwrite)
.format("parquet")
.saveAsTable("TBL_HIVE_IS_NOT_HAPPY")
hive> desc TBL_HIVE_IS_NOT_HAPPY;
# col_name data_type from_deserializer
[Solution]:
spark-shell>sqlContext.sql("SET spark.sql.hive.convertMetastoreParquet=false")
spark-shell>df.write
.partitionBy("ts")
.mode(SaveMode.Overwrite)
.saveAsTable("Happy_HIVE")//Suppose this table is saved at /apps/hive/warehouse/Happy_HIVE
hive> DROP TABLE IF EXISTS Happy_HIVE;
hive> CREATE EXTERNAL TABLE Happy_HIVE (user_id string,email string,ts string)
PARTITIONED BY(day STRING)
STORED AS PARQUET
LOCATION '/apps/hive/warehouse/Happy_HIVE';
hive> MSCK REPAIR TABLE Happy_HIVE;
The problem is that the datasource table created through the DataFrame API (partitionBy + saveAsTable) is not compatible with Hive (see this link). By setting spark.sql.hive.convertMetastoreParquet to false as suggested in the doc, Spark only puts data onto HDFS but won't create the table in Hive. You can then manually go into the hive shell and create an external table with the proper schema and partition definition pointing to the data location.
I've tested this in Spark 1.6.1 and it worked for me. I hope this helps!
I have done this in PySpark, Spark version 2.3.0:
Create an empty table where we need to save/overwrite data, like:
create table databaseName.NewTableName like databaseName.OldTableName;
Then run the below command:
df1.write.mode("overwrite").partitionBy("year","month","day").format("parquet").saveAsTable("databaseName.NewTableName");
The issue is that you can't read this table with Hive, but you can read it with Spark.
MSCK REPAIR TABLE registers partitions whose metadata doesn't already exist. In other words, it will add any partitions that exist on HDFS, but not in the metastore, to the Hive metastore.
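As a small Scala sketch (the table name follows the example above), the repair can be issued from Spark as well:
// Register any partition directories that exist on HDFS but are missing from the metastore
spark.sql("MSCK REPAIR TABLE databaseName.NewTableName")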