I have a folder whose subfolders were previously organized by ingestiontime, which is also the original PARTITION column used in its Hive table.
So the folder looks like -
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200709230000/....
........
Inside each ingestiontime folder, data is present in PARQUET format.
Now, in the same myStreamingData folder, I am adding data that holds a similar structure but is nested under an additional folder named businessname.
So my folder structure now looks like -
s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200709230000/....
........
So I need to add the data in the businessname partition to my current Hive table too.
To achieve this, I was running the following ALTER query (on Databricks) -
%sql
alter table gp_hive_table add partition (businessname=007,ingestiontime=20200712230000) location "s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200712230000"
But I am getting this error -
Error in SQL statement: AnalysisException: businessname is not a valid partition column in table `default`.`gp_hive_table`.;
What am I doing incorrectly here?
Thanks in advance.
Since you're already using Databricks and this is a streaming use case, you should definitely take a serious look at using Delta Lake tables.
You won't have to mess with explicit ... ADD PARTITION and MSCK statements.
Delta Lake's ACID properties will ensure your data is committed properly; if your job fails you won't end up with partial results. As soon as the data is committed, it is available to users (again, without the MSCK and ADD PARTITION statements).
Just change 'USING PARQUET' to 'USING DELTA' in your DDL.
You can also CONVERT your existing parquet table to a Delta Lake table and then start using INSERT, UPDATE, DELETE, MERGE INTO, COPY INTO from Spark batch and Structured Streaming jobs. OPTIMIZE will clean up the small-file problem.
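For example, a rough sketch of converting the existing parquet data in place (the path and partition column come from the question; whether your layout is consistent enough to convert directly is an assumption):
# Sketch only: convert the existing parquet directory to a Delta table.
# Assumes all data under the path follows the ingestiontime=<value> layout.
spark.sql("""
  CONVERT TO DELTA parquet.`s3://MyDevBucket/dev/myStreamingData`
  PARTITIONED BY (ingestiontime BIGINT)
""")
# After this, batch and streaming writes commit atomically and new data is
# visible without MSCK REPAIR or ALTER TABLE ... ADD PARTITION.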
ALTER TABLE gp_hive_table ADD PARTITION adds a partition (a data location, not a new column) to a table with an already defined partitioning scheme. It does not change the current partitioning scheme; it just adds partition metadata saying that in some location there is a partition corresponding to some partitioning column value.
If you want to change the partition columns, you need to recreate the table:
Drop the table (check that it is EXTERNAL): DROP TABLE gp_hive_table;
Create the table with the new partitioning columns. Partitions WILL NOT be created automatically.
Now you can add partitions using ALTER TABLE ADD PARTITION or use MSCK REPAIR TABLE to create them automatically based on the directory structure. The directory structure should already match the partitioning scheme before you execute these commands. A sketch of the full sequence follows.
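A rough sketch of that sequence for the table from the question, run from a Databricks notebook (the non-partition column is a placeholder you would replace with your real schema):
# Sketch of the drop / recreate / repair flow. The data column is a placeholder.
spark.sql("DROP TABLE IF EXISTS gp_hive_table")  # EXTERNAL table, so the S3 data stays

spark.sql("""
  CREATE EXTERNAL TABLE gp_hive_table (
    payload STRING              -- placeholder: your actual data columns go here
  )
  PARTITIONED BY (businessname BIGINT, ingestiontime BIGINT)
  STORED AS PARQUET
  LOCATION 's3://MyDevBucket/dev/myStreamingData'
""")

# Pick up partitions that already exist in the directory structure
spark.sql("MSCK REPAIR TABLE gp_hive_table")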
So,
building upon the suggestion from @leftjoin,
instead of having a Hive table without businessname as one of the partition columns,
what I did is -
Step 1 -> Create the Hive table with - PARTITIONED BY (businessname BIGINT, ingestiontime BIGINT)
Step 2 -> Executed the query - MSCK REPAIR TABLE <Hive_Table_name> - to auto-add partitions.
Step 3 ->
Now, there are ingestiontime folders which are not nested inside a businessname folder, i.e.
folders like -
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200709230000/....
I wrote a small piece of code to fetch all such partitions and then ran the following query for all of them -
ALTER TABLE <hive_table_name> ADD PARTITION (businessname=<some_value>,ingestiontime=<ingestion_time_partition_name>) LOCATION "<s3_location_of_all_partitions_not_belonging_to_a_specific_businesskey>"
This solved my issue.
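For reference, a sketch along these lines (the dbutils-based listing and the placeholder businessname value 0 are illustrative, not the precise code):
# Sketch: find the ingestiontime=... folders that sit directly under the base
# path (i.e. not under any businessname=... folder) and register each one as a
# partition. The businessname value 0 is an illustrative placeholder.
base_path = "s3://MyDevBucket/dev/myStreamingData/"
table_name = "gp_hive_table"

top_level = dbutils.fs.ls(base_path)
orphan_dirs = [f for f in top_level if f.name.startswith("ingestiontime=")]

for f in orphan_dirs:
    ingestiontime = f.name.rstrip("/").split("=")[1]
    spark.sql(f"""
        ALTER TABLE {table_name} ADD IF NOT EXISTS
        PARTITION (businessname=0, ingestiontime={ingestiontime})
        LOCATION '{f.path}'
    """)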
Related
I create a dataframe with Spark daily and save it to an HDFS location.
Before saving, I partition the data by some fields, so the path to the data looks like this:
/warehouse/tablespace/external/hive/table_name/...
table_name directory has partitions like:
table_name/field=value1
table_name/field=value2
I create an external table to work with the data in Hive and set its location to the data path.
Each day I want to change the location to the new data path. But if I use
ALTER TABLE table
SET LOCATION 'new location'
querying still returns old data because the partitions' locations don't change.
Is there any way to tell Hive to look for partitions in the new location, without changing them one by one?
If you don't want to keep the old partitions, you can drop them using the command below with a clause that matches all partitions:
ALTER TABLE table_name DROP IF EXISTS PARTITION (field != 'non_exist_value');
Then you can check the remaining partitions again using the command below:
SHOW PARTITIONS table_name;
After that, you can change the table to the new location and repair the Hive table to create the partitions under the new location (the partition column name in the new location must be the same as the old one):
ALTER TABLE table_name SET LOCATION '/new_location';
MSCK REPAIR TABLE table_name;
I created a Hive table on top of a parquet folder written via Spark. On one test server it runs fine and returns results (Hive version 2.6.5.196), but in production it gives no records (Hive 2.6.5.179). Could someone please point out what the exact issue could be?
If you created the table on top of an existing partition structure, you have to make the table aware that there are partitions at this location.
MSCK REPAIR TABLE table_name; -- adds missing partitions
SELECT * FROM table_name; -- should return records now
This problem shouldn't happen if there are only files in that location, and if they are in the expected format.
You can verify with:
SHOW CREATE TABLE table_name; -- to see the expected format
created hive table on top of a parquet folder written via spark.
Check whether the database you are using is available, using:
show databases;
Check the DDL of the table you created on your test server against the one on production:
show create table table_name;
Make sure that both DDLs match exactly.
Run msck repair table table_name to load the incremental data or the data from all the partitions.
Run select * from table_name to view the records.
I use Databricks. I am trying to create a table as below:
target_table_name = 'test_table_1'
spark.sql("""
drop table if exists %s
""" % target_table_name)
spark.sql("""
create table if not exists {0}
USING org.apache.spark.sql.parquet
OPTIONS (
path ("/mnt/sparktables/ds=*/name=xyz/")
)
""".format(target_table_name))
Even though using "*" gives me flexibility in loading different files (pattern matching) and eventually creating a table, I wish to create a table based on two completely different paths (no pattern matching).
path1 = /mnt/sparktables/ds=*/name=xyz/
path2 = /mnt/sparktables/new_path/name=123fo/
Spark uses the Hive metastore to create these permanent tables. These tables are essentially external tables in Hive.
Generally, what you are trying is not possible, because a Hive external table's location needs to be unique at the time of creation.
However, you can still achieve a Hive table with different locations if you incorporate a partitioning strategy in your Hive metastore.
In the Hive metastore you can have partitions which point to different locations.
However, there is no off-the-shelf way to achieve this. First you would need to specify a partition key for your dataset and create a table from the first location, where the entire data belongs to one partition. Then alter the table to add a new partition.
Sample:
create external table tableName(<schema>) partitioned by (name string) location '/mnt/sparktables/ds=*/name=xyz/'
Then you can add partitions
alter table tableName add partition(name='123fo') location '/mnt/sparktables/new_path/name=123fo/'
The alternative to this process is to create two DataFrames from the two locations, combine them, and then use saveAsTable, for example:
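A minimal sketch of that alternative (the paths are the ones from the question; the target table name test_table_1 is reused from the question's snippet):
# Read both locations, union them, and persist as a single table.
# Assumes the two datasets have compatible schemas.
df1 = spark.read.parquet("/mnt/sparktables/ds=*/name=xyz/")
df2 = spark.read.parquet("/mnt/sparktables/new_path/name=123fo/")

combined = df1.unionByName(df2)   # unionByName matches columns by name, not position
combined.write.mode("overwrite").saveAsTable("test_table_1")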
I would do something like this:
create or replace view mytable as
select * from parquet.`path1`
union all
select * from parquet.`path2`
The view understands how to query from both locations. I assume you will not append/overwrite the table as it would lead to more ambiguity.
You can create DataFrames separately for two or more parquet paths and then union them (assuming they have identical schemas):
df1.union(df2)
I have an external table that has a partition column called rundate. I can load data into the table using
DataFrame.write.mode(SaveMode.Overwrite).orc("s3://test/table")
I then create a partition using
spark.sql("ALTER TABLE table ADD IF NOT EXISTS PARTITION(rundate = '2017-12-19')")
The code works fine and I can see the partitions. But I cannot see data in the Hive table.
You have not saved the partition data in the correct folder structure, and you have also manually added a partition where the data does not exist.
Two things:
1. First make sure you are saving the data at the location where the external table is created, and that the folder structure is the same as Hive expects. E.g. assume your external table name is table, the partition column is rundate, the partition value is 2017-12-19, and the external table points to location s3://test/table. Then save the data for partition 2017-12-19 as below:
DataFrame.write.mode(SaveMode.Overwrite).orc("s3://test/table/rundate=2017-12-19/")
2. Once the save is successful, run the below command to update the Hive metastore with the latest added partition.
syntax: msck repair table <tablename>
msck repair table table
I have data saved as parquet files in Azure blob storage. Data is partitioned by year, month, day and hour like:
cont/data/year=2017/month=02/day=01/
I want to create an external table in Hive using the following create statement, which I wrote using this reference.
CREATE EXTERNAL TABLE table_name (uid string, title string, value string)
PARTITIONED BY (year int, month int, day int) STORED AS PARQUET
LOCATION 'wasb://cont#storage_name.blob.core.windows.net/data';
This creates the table, but it has no rows when querying. I tried the same create statement without the PARTITIONED BY clause and that seems to work, so it looks like the issue is with partitioning.
What am I missing?
After you create the partitioned table, run the following in order to add the directories as partitions
MSCK REPAIR TABLE table_name;
If you have a large number of partitions you might need to set hive.msck.repair.batch.size
When there is a large number of untracked partitions, there is a provision to run MSCK REPAIR TABLE batch wise to avoid OOME (Out of Memory Error). By giving the configured batch size for the property hive.msck.repair.batch.size it can run in the batches internally. The default value of the property is zero, it means it will execute all the partitions at once.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)
Written by the OP:
This will probably fix your issue; however, if the data is very large, it won't work. See the relevant issue here.
As a workaround, there is another way to add partitions to the Hive metastore one by one, like:
alter table table_name add partition(year=2016, month=10, day=11, hour=11)
We wrote a simple script to automate this alter statement and it seems to work for now.
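A rough sketch of how such an automation could look (using PySpark against the same metastore; the table name and date range are illustrative placeholders, not the original script):
from datetime import datetime, timedelta

# Sketch: generate one ALTER TABLE ... ADD PARTITION statement per hour in a
# date range and run it. Table name and range are illustrative placeholders.
table_name = "table_name"
start = datetime(2016, 10, 11, 0)
end = datetime(2016, 10, 11, 23)

current = start
while current <= end:
    spark.sql(
        f"ALTER TABLE {table_name} ADD IF NOT EXISTS PARTITION ("
        f"year={current.year}, month={current.month}, "
        f"day={current.day}, hour={current.hour})"
    )
    current += timedelta(hours=1)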