How can I drop a Delta Table in Databricks? I can't find any information in the docs... maybe the only solution is to delete the files inside the folder 'delta' with the magic command or dbutils:
%fs rm -r delta/mytable?
EDIT:
For clarification, I put here a very basic example.
Example:
# create a DataFrame...
from pyspark.sql.types import *
cSchema = StructType([StructField("items", StringType()),
                      StructField("number", IntegerType())])
test_list = [['furniture', 1], ['games', 3]]
df = spark.createDataFrame(test_list,schema=cSchema)
and save it in a Delta table
df.write.format("delta").mode("overwrite").save("/delta/test_table")
Then, if I try to delete it, it's not possible with DROP TABLE or a similar action:
%sql
DROP TABLE 'delta.test_table'
nor do other options work, like drop table 'delta/test_table', etc.
If you want to completely remove the table then a dbutils command is the way to go:
dbutils.fs.rm('/delta/test_table',recurse=True)
From my understanding, the Delta table you've saved is sitting in blob storage. Dropping the connected database table will drop it from the database, but not from storage.
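For example (assuming the table was also registered in the metastore under a name like test_table, which is a guess here), you would drop the metastore entry and remove the files separately:
# hypothetical table name: drop the metastore entry if one exists, then remove the files from storage
spark.sql("DROP TABLE IF EXISTS test_table")
dbutils.fs.rm("/delta/test_table", recurse=True)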
You can do that using a SQL command:
%sql
DROP TABLE IF EXISTS <database>.<table>
Basically, in Databricks, tables are of 2 types - Managed and Unmanaged.
1. Managed - tables for which Spark manages both the data and the metadata; Databricks stores the metadata and data in DBFS in your account.
2. Unmanaged - Databricks manages only the metadata; the data itself is not managed by Databricks.
So if you write a drop query for a managed table, it will drop the table and delete the data as well. In the case of an unmanaged table, a drop query will simply delete the pointer (the table's meta-information) to the table location, but your data is not deleted, so you need to delete the data externally using rm commands.
for more info:
https://docs.databricks.com/data/tables.html
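If you're unsure which kind of table you have, one way to check (my_table is a placeholder name here) is to look at the Type row of the extended description:
# prints MANAGED or EXTERNAL for the table
spark.sql("DESCRIBE TABLE EXTENDED my_table").filter("col_name = 'Type'").show()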
Databricks has unmanaged tables and managed tables, but your code snippet just writes out a Delta Lake to a path. It doesn't create a managed or unmanaged table. The DROP TABLE syntax doesn't work because you haven't created a table.
Remove files
As @Papa_Helix mentioned, here's the syntax to remove files:
dbutils.fs.rm('/delta/test_table',recurse=True)
Drop managed table
Here's how you could have written your data as a managed table.
df.write.saveAsTable("your_managed_table")
Check to make sure the data table exists:
spark.sql("show tables").show()
+---------+------------------+-----------+
|namespace| tableName|isTemporary|
+---------+------------------+-----------+
| default|your_managed_table| false|
+---------+------------------+-----------+
When the data is a managed table, you can drop the table and it'll delete both the table metadata and the underlying data files:
spark.sql("drop table if exists your_managed_table")
Drop unmanaged table
When the data is saved as an unmanaged table, then you can drop the table, but it'll only delete the table metadata and won't delete the underlying data files. Create the unmanaged table and then drop it.
df.write.option("path", "tmp/unmanaged_data").saveAsTable("your_unmanaged_table")
spark.sql("drop table if exists your_unmanaged_table")
The tmp/unmanaged_data folder will still contain the data files, even though the table has been dropped.
Check to make sure the table has been dropped:
spark.sql("show tables").show()
+---------+---------+-----------+
|namespace|tableName|isTemporary|
+---------+---------+-----------+
+---------+---------+-----------+
So the table isn't there, but you'd still need to run an rm command to delete the underlying data files.
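For example, for the path used above:
# remove the leftover data files of the dropped unmanaged table
dbutils.fs.rm("tmp/unmanaged_data", recurse=True)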
Delete from the GUI:
Data -> Database Tables -> pick your database -> select the drop-down next to your table and delete.
I don't know the consequences of this type of delete, so caveat emptor.
I found that to fully delete a Delta table and be able to create a new one under the same name with, say, a different schema, you also have to delete temp files (otherwise you get an error saying that an old file no longer exists).
dbutils.fs.rm('/delta/<my_schema>/<my_table>', recurse=True)
dbutils.fs.rm('/tmp/delta/<my_schema>/<my_table>', recurse=True)
Related
I am new to ADF. I have a pipeline which should delete all rows where any of the attributes are null. Schema: { Name, Value, Key }
I tried using a data flow with an Alter Row transformation and set both source and sink to be the same table, but it always appends to the table instead of overwriting it, which creates duplicate rows, and the rows I want to delete still remain. Is there a way to overwrite the table?
Assuming that your table is a SQL table, I tried to overwrite the source table after deleting the rows with null values. It successfully deleted the records, but I still got duplicate records even after exploring various methods.
So, as an alternative, you can try the below methods to achieve your requirement:
By creating a new table and deleting the old table:
This is my sample source table, named mytable.
Alter Row transformation
Give a new table in the sink and, in Settings -> Post SQL scripts, give the drop command to delete the source table. Now your sink table is your required table: drop table [dbo].[mytable]
Result table (named newtable) and the old table.
Source table deleted.
Deleting null values from the source table using a Script activity
Use a Script activity to delete the rows with null values from the source table.
Source table after execution.
I have multiple Delta Lake tables storing image data. Now I want to take specific rows via a filter from those tables and put them in another Delta table. I do not want to copy the original data, just a reference or shallow copy. I am using PySpark and Databricks. Can someone please help me find the correct approach for this?
What you actually need is a view over the original table. Use CREATE VIEW to create it with the necessary filter expression, like this:
CREATE VIEW <name> AS
SELECT * from <source_table> WHERE <your filter condition>
Then this view could be queried like a normal table, but data will be filtered according to your condition.
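A minimal PySpark sketch of this, where the table, view and filter names are placeholders:
# create a filtered view over the source Delta table; no data is copied
spark.sql("""
    CREATE OR REPLACE VIEW filtered_images AS
    SELECT * FROM images_table
    WHERE label = 'cat'
""")
spark.table("filtered_images").show()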
I currently have a PySpark dataframe from which I initially created a Delta table using the below code -
df.write.format("delta").saveAsTable("events")
Now, since the above dataframe populates data on a daily basis in my requirement, for appending new records into the Delta table I used the below syntax -
df.write.format("delta").mode("append").saveAsTable("events")
Now, this whole thing I did in Databricks on my cluster. I want to know how I can write generic PySpark code in Python that will create the Delta table if it does not exist and append records if it does exist. I want to do this because if I give my Python package to someone, they will not have the same Delta table in their environment, so it should get created dynamically from code.
If you don't have the Delta table yet, then it will be created when you use the append mode. So you don't need to write any special code to handle the cases when the table doesn't exist yet and when it exists.
P.S. You'll need such code only if you're performing a merge into the table, not an append. In that case the code will look like this:
if table_exists:
    do_merge
else:
    df.write....
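For illustration, a minimal runnable sketch of that branch, assuming a recent Spark where spark.catalog.tableExists is available and an id key column (both assumptions):
from delta.tables import DeltaTable

if spark.catalog.tableExists("events"):
    # table exists: merge the new batch in on the (assumed) id key column
    (DeltaTable.forName(spark, "events").alias("t")
        .merge(df.alias("s"), "t.id = s.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # first run: create the Delta table from the DataFrame
    df.write.format("delta").saveAsTable("events")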
P.S. here is a generic implementation of that pattern
There are essentially two relevant operations available in Spark:
saveAsTable - creates or replaces the table with the current DataFrame, whether it is present or not.
insertInto - succeeds if the table is present and performs the operation based on the mode ('overwrite' or 'append'). It requires the table to already exist in the database.
The .saveAsTable("events") call basically rewrites the table every time you call it, which means that, whether a table was present earlier or not, it will replace the table with the current DataFrame's contents. Instead, you can perform the below operations to be on the safer side:
Step 1: Create the table whether or not it is already present; if it does not exist, it is created empty (the where 1=2 clause), and then the new DataFrame records are appended.
df.createOrReplaceTempView('df_table')
spark.sql("CREATE TABLE IF NOT EXISTS events USING delta AS SELECT * FROM df_table WHERE 1=2")
df.write.format("delta").mode("append").insertInto("events")
So every time it will check whether the table is available; if not, it will create the table and move to the next step, and if it is available, it will append the data into the table.
I have a folder which previously had subfolders based on ingestiontime, which is also the original PARTITION used in its Hive table.
So the folder looks like -
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200709230000/....
........
Inside each ingestiontime folder, data is present in PARQUET format.
Now, in the same myStreamingData folder, I am adding another folder named businessname that holds similar data.
So my folder structure now looks like -
s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200709230000/....
........
So I need to add the data in the businessname partition to my current hive table too.
To achieve this, I was running the ALTER query (on Databricks) -
%sql
alter table gp_hive_table add partition (businessname=007,ingestiontime=20200712230000) location "s3://MyDevBucket/dev/myStreamingData/businessname=007/ingestiontime=20200712230000"
But I am getting this error -
Error in SQL statement: AnalysisException: businessname is not a valid partition column in table `default`.`gp_hive_table`.;
What am I doing incorrectly here?
Thanks in Advance.
Since you're already using Databricks and this is a streaming use case, you should definitely take a serious look at using Delta Lake tables.
You won't have to mess with explicit ... ADD PARTITION and MSCK statements.
Delta Lake with its ACID properties will ensure your data is committed properly; if your job fails you won't end up with partial results. As soon as the data is committed, it is available to users (again, without the MSCK and ADD PARTITION statements).
Just change 'USING PARQUET' to 'USING DELTA' in your DDL.
You can also (CONVERT) your existing parquet table to a Delta Lake table and then start using INSERT, UPDATE, DELETE, MERGE INTO, COPY INTO, from Spark batch and structured streaming jobs. OPTIMIZE will clean up the small file problem.
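As a rough sketch, assuming the directory has a consistent businessname=/ingestiontime= layout and that the partition column types are long (both assumptions):
# convert the existing partitioned parquet directory into a Delta table (Databricks)
spark.sql("""
    CONVERT TO DELTA parquet.`s3://MyDevBucket/dev/myStreamingData`
    PARTITIONED BY (businessname BIGINT, ingestiontime BIGINT)
""")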
alter table gp_hive_table add partition is for adding a partition (a data location, not a new column) to a table with an already defined partitioning scheme. It does not change the current partitioning scheme; it just adds partition metadata saying that in some location there is a partition corresponding to some partitioning column value.
If you want to change the partition columns, you need to recreate the table:
Drop the table (check that it is EXTERNAL): DROP TABLE gp_hive_table;
Create the table with the new partitioning columns. Partitions WILL NOT be created automatically.
Now you can add partitions using ALTER TABLE ADD PARTITION or use MSCK REPAIR TABLE to create them automatically based on the directory structure. The directory structure should already match the partitioning scheme before you execute these commands; a sketch of the full sequence follows below.
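A rough sketch of that sequence from a Databricks notebook; the column list and partition types below are placeholders:
# 1. drop the existing definition (data stays in S3 if the table is EXTERNAL)
spark.sql("DROP TABLE IF EXISTS gp_hive_table")

# 2. recreate it with the new partitioning scheme over the same location
spark.sql("""
    CREATE EXTERNAL TABLE gp_hive_table (value STRING)
    PARTITIONED BY (businessname BIGINT, ingestiontime BIGINT)
    STORED AS PARQUET
    LOCATION 's3://MyDevBucket/dev/myStreamingData'
""")

# 3. let the metastore discover partitions that match the directory layout
spark.sql("MSCK REPAIR TABLE gp_hive_table")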
So, building upon the suggestion from @leftjoin, instead of having a hive table without businessname as one of the partitions, what I did is -
Step 1 -> Create the hive table with - PARTITIONED BY (businessname long, ingestiontime long)
Step 2 -> Executed the query - MSCK REPAIR TABLE <Hive_Table_name> - to auto-add partitions.
Step 3 ->
Now, there are ingestiontime folders which are not inside a businessname folder, i.e.
folders like -
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200712230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200711230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200710230000/....
s3://MyDevBucket/dev/myStreamingData/ingestiontime=20200709230000/....
I wrote a small piece of code to fetch all such partitions and then ran the following query for all of them (a sketch of that loop is shown after the query) -
ALTER TABLE <hive_table_name> ADD PARTITION (businessname=<some_value>,ingestiontime=<ingestion_time_partition_name>) LOCATION "<s3_location_of_all_partitions_not_belonging_to_a_specific_businesskey>"
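A rough sketch of that loop (the default businessname value 0 and the use of dbutils.fs.ls are assumptions):
# list the ingestiontime=... folders that sit directly under myStreamingData
base = "s3://MyDevBucket/dev/myStreamingData/"
for f in dbutils.fs.ls(base):
    if f.name.startswith("ingestiontime="):
        ingestiontime = f.name.rstrip("/").split("=")[1]
        # businessname=0 is a placeholder default for folders with no businessname
        spark.sql(f"""
            ALTER TABLE gp_hive_table ADD IF NOT EXISTS
            PARTITION (businessname=0, ingestiontime={ingestiontime})
            LOCATION '{f.path}'
        """)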
This solved my issue.
I am trying to drop an (internal) table that was created in Spark-SQL. Somehow the table is getting dropped, but the location of the table still exists. Can someone let me know how to do this?
I tried both Beeline and Spark-SQL:
create table something(hello string)
PARTITIONED BY(date_d string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "^"
LOCATION "hdfs://path";
Drop table something;
No rows affected (0.945 seconds)
Thanks
Spark internally uses the Hive metastore to create tables. If the table is created as an external Hive table from Spark, i.e. the data is present in HDFS and Hive provides a table view on it, the drop table command will only delete the metastore information and will not delete the data from HDFS.
So there are some alternative strategies which you could take:
Manually delete the data from HDFS using the hadoop fs -rm -r -f command
Do an alter table on the table you want to delete, change the external table to an internal table, then drop the table.
ALTER TABLE <table-name> SET TBLPROPERTIES('external'='false');
drop table <table-name>;
The first statement will convert the external table to an internal table, and the second statement will delete the table along with the data.