I am trying to create a Hive table with this syntax:
create table table_name as orc as select * from table1 partitioned by (Acc_date date).
I am getting an error. My requirement is to create the table using a select statement and to append to the table when the next load happens.
I am trying to replicate this Spark command:
df1.distinct().repartition("acc_date").write.mode("append").partitionBy("acc_date").format("parquet").saveAsTable("schema.table_name")
Make it a two-step process:
Create the partitioned table the way you want.
Insert data into it.
Details
1. The create statement may look like this:
create table if not exists table_name
(col1 int, col2 ...)
partitioned by (acc_date date)
stored as orc;
2. The insert will look like below. Make sure the partition column is the last column in the select clause.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
Insert into table_name partition (acc_date)
Select col1, col2, ..., acc_date from table1;
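If you want to drive the same two-step flow from PySpark (as in the Spark command you are replicating), here is a minimal sketch, assuming the partitioned table above has already been created and that col1, col2 are placeholder column names:
>>> spark.sql("set hive.exec.dynamic.partition=true")
>>> spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
>>> df1.distinct().select("col1", "col2", "acc_date").write.mode("append").insertInto("schema.table_name")
Because insertInto resolves columns by position, acc_date has to be the last column selected, just like in the SQL insert above.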
According to the Athena docs, I cannot add a date column to an existing table, so I am trying to use the workaround they propose with the timestamp data type.
But when I run the ALTER TABLE my_table ADD COLUMNS (date_column TIMESTAMP) query, I still get the following error:
Parquet does not support date. See HIVE-6384
Is there any option to add date or timestamp columns to an existing table?
Thanks
UPD: I found out that I can still add timestamp columns through the Glue UI/API.
UPD 2: The issue occurs only with one specific table; it works for other tables.
You can use the following query to add a timestamp column to an existing table:
ALTER TABLE my_table ADD COLUMNS (date_column TIMESTAMP);
This should work for both Parquet and ORC tables.
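Regarding the Glue API route mentioned in the update, here is a rough boto3 sketch of what that can look like; the database name my_database is a placeholder, while my_table and date_column are taken from the query above:
import boto3
glue = boto3.client("glue")
# fetch the current table definition (database name is a placeholder)
table = glue.get_table(DatabaseName="my_database", Name="my_table")["Table"]
# append the new timestamp column to the storage descriptor
table["StorageDescriptor"]["Columns"].append({"Name": "date_column", "Type": "timestamp"})
# update_table only accepts TableInput fields, so keep just those
allowed = ("Name", "Description", "Owner", "Retention", "StorageDescriptor", "PartitionKeys", "TableType", "Parameters")
table_input = {k: v for k, v in table.items() if k in allowed}
glue.update_table(DatabaseName="my_database", TableInput=table_input)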
We are using Trino (Presto) and SparkSQL to query Hive tables on S3, but they give different results for the same query on the same tables. We found the main problem: there are rows in a problematic Hive table that can be found with a simple WHERE filter on a specific column in Trino but cannot be found with SparkSQL. The SQL statements are identical in both.
On the other hand, SparkSQL can find these rows in the source table of that problematic table, filtering on the same column.
The CREATE statement:
CREATE TABLE problematic_hive_table AS SELECT c1, c2, c3 FROM source_table
The SELECT that finds the missing rows in Trino but not in SparkSQL:
SELECT * FROM problematic_hive_table WHERE c1='missing_rows_value_in_column'
And this is the SELECT that finds these missing rows in SparkSQL, against the source table:
SELECT * FROM source_table WHERE c1='missing_rows_value_in_column'
We execute the CTAS in Trino (Presto). If we use ... WHERE trim(c1) = 'missing_key', then Spark can also find the missing rows, but the fields do not contain trailing spaces (the length of these fields is the same in the source table as in the problematic table). In the source table Spark can find these missing rows without trim.
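One way to dig into such rows from the Spark side is to look at the raw bytes of c1, since characters that trim() strips are not always visible in the output; a minimal PySpark sketch, reusing the table and value names from the queries above:
>>> from pyspark.sql import functions as F
>>> spark.table("problematic_hive_table") \
...     .where(F.trim(F.col("c1")) == "missing_rows_value_in_column") \
...     .select("c1", F.length("c1").alias("len"), F.hex(F.encode("c1", "UTF-8")).alias("c1_bytes")) \
...     .show(truncate=False)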
I have an external Hive table which is filled by a Spark job and partitioned by (event_date date). Now I have modified the Spark code and added one extra column, 'country'. In the earlier written data the country column will have null values, as it is newly added. Now I want to alter the PARTITIONED BY clause to partitioned by (event_date date, country string). How can I achieve this? Thank you!!
Please try to alter the partition using the command below:
ALTER TABLE table_name PARTITION part_spec SET LOCATION path
part_spec:
: (part_col_name1=val1, part_col_name2=val2, ...)
See the Databricks Spark SQL language manual for the ALTER TABLE command.
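For illustration, a filled-in version of that command run through spark.sql, using the event_date partition column from the question; the partition value and the S3 path are placeholders:
>>> spark.sql("ALTER TABLE table_name PARTITION (event_date='2021-01-01') SET LOCATION 's3://my-bucket/new/path/event_date=2021-01-01'")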
We are trying to write into a Hive table from Spark and we are using the saveAsTable function. I want to know whether saveAsTable drops and recreates the Hive table every time. If it does, is there any other Spark function that will just truncate and load the table instead of dropping and recreating it?
It depends on which .mode value you are specifying:
overwrite --> Spark drops the table first, then recreates it
append --> inserts the new data into the table
1. Drop if exists / create if not exists the default.spark1 table in parquet format:
>>> df.write.mode("overwrite").saveAsTable("default.spark1")
2. Drop if exists / create if not exists the default.spark1 table in orc format:
>>> df.write.format("orc").mode("overwrite").saveAsTable("default.spark1")
3. Append the new data to the existing data in the table (doesn't drop/recreate the table):
>>> df.write.format("orc").mode("append").saveAsTable("default.spark1")
Achieve Truncate and Load using Spark:
Method 1:
You can register your dataframe as a temp table, then execute an INSERT OVERWRITE statement to overwrite the target table:
>>> df.registerTempTable("temp") --registering df as temptable
>>> spark.sql("insert overwrite table default.spark1 select * from temp") --overwriting the target table.
This method works for both internal and external tables.
Method 2:
In the case of internal tables we can truncate the table first and then append the data to it; this way we are not recreating the table, just appending data to it:
>>> spark.sql("truncate table default.spark1")
>>> df.write.format("orc").mode("append").saveAsTable("default.spark1")
This method only works for internal tables.
Even in the case of external tables we can work around this and truncate the table by changing the table properties.
Let's assume the default.spark1 table is an external table:
--change external table to internal table
>>> spark.sql("alter table default.spark1 set tblproperties('EXTERNAL'='FALSE')")
--once the table is internal then we can run truncate table statement
>>> spark.sql("truncate table default.spark1")
--change back the table as External table again
>>> spark.sql("alter table default.spark1 set tblproperties('EXTERNAL'='TRUE')")
--then append data to the table
>>> df.write.format("orc").mode("append").saveAsTable("default.spark1")
You can also use insertInto("table"), which doesn't recreate the table.
The main difference from saveAsTable is that insertInto expects the table to already exist and resolves columns by position instead of by name.
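For example, a minimal sketch with the default.spark1 table from above; with overwrite=True the existing data is replaced without dropping the table, which is effectively the truncate-and-load behaviour asked about:
>>> df.write.insertInto("default.spark1")                  # appends; the table must already exist
>>> df.write.insertInto("default.spark1", overwrite=True)  # replaces the data but keeps the table definition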
I am using Brisk. The Cassandra column families automatically map to Hive tables.
However, if a column's data type is timeuuid in the column family, it is unreadable in the Hive tables.
For example, I used the following command to create an external table in Hive to map the column family:
Hive > create external table A (rowkey string, column_name string, value string)
> STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
> WITH SERDEPROPERTIES (
> "cassandra.columns.mapping" = ":key,:column,:value");
If a column name is of type TimeUUIDType in Cassandra, it becomes unreadable in the Hive table.
For example, a row in the Cassandra column family looks like:
RowKey: 2d36a254bb04272b120aaf79d70a3578
=> (column=29139210-b6dc-11df-8c64-f315e3a329d6, value={"event_id":101},timestamp=1283464254261)
where the column name is a TimeUUIDType.
In the Hive table, the row looks like this:
2d36a254bb04272b120aaf79d70a3578 t��ߒ4��!�� {"event_id":101}
So the column name is unreadable in the Hive table.
This is a known issue with the automatic table mapping. For best results with a TimeUUIDType, turn the auto-mapping feature off in $brisk_home/resources/hive/hive-site.xml:
"cassandra.autoCreateHiveSchema"
and create the table in Hive manually.