An error occurs when I call ALTER TABLE REBUILD in one stored procedure and simultaneously try to SELECT data from the same table in another.
CREATE PROCEDURE IF NOT EXISTS RebuildContent()
AS
BEGIN
INSERT INTO dbo.Log (Date, Message)
VALUES ( DateTime.UtcNow, "Starting Content table rebuilding ..." );
ALTER TABLE dbo.Content REBUILD;
ALTER TABLE dbo.ContentCrc REBUILD;
INSERT INTO dbo.Log (Date, Message)
VALUES ( DateTime.UtcNow, "Completed Content table rebuilding ..." );
END;
Are there any solutions to avoid it?
Thank you in advance!
You are running into a race condition between rebuilding and reading from the same table.
Rebuilding a table creates a new file by compacting the files that were produced by earlier insertions. Unfortunately, right now, once the rebuild deletes the old file, queries that are still reading the old version lose access to it and fail with an error message.
We are aware of this issue and have created a work item to preserve access to the old file for queries that have already started (providing snapshot semantics). However, I do not have an ETA at the moment.
Until then, please schedule your rebuild and read jobs so they do not overlap.
Note: You can still concurrently rebuild and insert or insert and read.
Related
I am trying to drop a table before reading in a new set of values for testing purposes. When I run the command
DROP TABLE [dbo].[Table1]
I get the following error after about 3-5 minutes. It is a large table (~50 million rows).
Failed to execute query. Error: A severe error occurred on the current command. The results, if any, should be discarded.
Operation cancelled by user.
What is the cause of this error?
This error can occur for many different reasons, and the message itself does not point to the exact cause. To narrow it down, you can check the following:
It might be an indexing or corruption issue. First, check the consistency of the database:
DBCC CHECKDB('database_name');
If you have narrowed it down to a specific table, check that table's consistency:
DBCC CHECKTABLE('table_name');
Around the time the problem occurred, search for any files named SQLDump* in the LOG folder (the one that contains ERRORLOG), or view the log from SSMS:
Object Explorer >> Management Node >> SQL Server Logs >> View the current log
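If you prefer to query the error log rather than browse it in SSMS, a minimal sketch using the system procedure sp_readerrorlog (the search strings here are just examples) is:

-- Read the current SQL Server error log (log 0, type 1 = error log)
-- and filter for dump-related entries.
EXEC sp_readerrorlog 0, 1, N'SQLDump';

-- Scan the same log for severe-error messages around the time the query failed.
EXEC sp_readerrorlog 0, 1, N'severe';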
We tried adding a new column to an existing table in Cassandra. It ended up giving an exception "org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch".
When we execute a DESCRIBE command, the new column shows up as added.
When we try to insert data, it throws an exception saying that the newly added column does NOT exist.
We tried to recreate the table by dropping it: the table gets dropped, but recreating it fails saying the table already exists.
It seems like a schema synchronization issue across the Cassandra nodes.
I want this issue to be resolved without any need to restart the Cassandra Nodes.
Can someone suggest the right approach to resolve this?
Thanks.
A rolling restart of the cluster resolved this issue. Thanks.
Flushing memtables (nodetool flush) should resolve the issue.
Flushing does not require restarting Cassandra, whereas draining does.
See:
Column family ID mismatch during ALTER TABLE
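As a rough sketch of those commands (the keyspace and table names here are placeholders), run on each node:

# Flush the memtables of the affected table to SSTables on disk.
nodetool flush my_keyspace my_table

# Check that every node reports the same schema version.
nodetool describecluster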
I have the following problem in Azure Databricks. Sometimes when I try to save a DataFrame as a managed table:
SomeData_df.write.mode('overwrite').saveAsTable("SomeData")
I get the following error:
"Can not create the managed table('SomeData'). The associated
location('dbfs:/user/hive/warehouse/somedata') already exists.;"
I used to fix this problem by running a %fs rm command to remove that location but now I'm using a cluster that is managed by a different user and I can no longer run rm on that location.
For now the only fix I can think of is using a different table name.
What makes things even more peculiar is the fact that the table does not exist. When I run:
%sql
SELECT * FROM SomeData
I get the error:
Error in SQL statement: AnalysisException: Table or view not found:
SomeData;
How can I fix it?
Seems there are a few others with the same issue.
A temporary workaround is to use
dbutils.fs.rm("dbfs:/user/hive/warehouse/SomeData/", true)
to remove the table before re-creating it.
This generally happens when a cluster is shut down while a table is being written. The recommended solution from the Databricks documentation:
This flag deletes the _STARTED directory and returns the process to the original state. For example, you can set it in the notebook:
%py
spark.conf.set("spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation","true")
All of the other recommended solutions here are either workarounds or do not work. The mode is specified as overwrite, meaning you should not need to delete or remove the db or use legacy options.
Instead, try specifying the fully qualified path in the options when writing the table:
df.write \
    .option("path", "hdfs://cluster_name/path/to/my_db") \
    .mode("overwrite") \
    .saveAsTable("my_db.my_table")
For a more context-free answer, run this in your notebook:
dbutils.fs.rm("dbfs:/user/hive/warehouse/SomeData", recurse=True)
Per the Databricks documentation, this will work in a Python or Scala notebook, but you'll have to use the magic command %python at the beginning of the cell if you're using an R or SQL notebook.
I have the same issue, I am using
create table if not exists USING delta
If I first delete the files like suggested, it creates the table once, but the second time the problem repeats. It seems CREATE TABLE IF NOT EXISTS does not recognize the existing table and tries to create it anyway.
I don't want to delete the table every time; I'm actually trying to use MERGE and keep the table.
Well, this happens because you're trying to write data to the default location (without specifying the 'path' option) with the mode 'overwrite'.
As Mike said, you can set "spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation" to "true", but this option was removed in Spark 3.0.0.
If you try to set this option in Spark 3.0.0 you will get the following exception:
Caused by: org.apache.spark.sql.AnalysisException: The SQL config 'spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation' was removed in the version 3.0.0. It was removed to prevent loosing of users data for non-default value.;
To avoid this problem, explicitly specify the path you want to save to when using 'overwrite' mode.
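A minimal sketch of that (the path is a placeholder; any writable location works):

# Supplying an explicit path means the write no longer targets the default
# managed-table location, so the "location already exists" check is not hit.
SomeData_df.write \
    .option("path", "dbfs:/tmp/somedata_external") \
    .mode("overwrite") \
    .saveAsTable("SomeData")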
There are many queue_promotion_n tables where n is from 1 to 100.
There is an error on table 73 with a fairly simple query:
SELECT count(DISTINCT queue_id)
FROM "queue_promotion_73"
WHERE status_new > NOW() - interval '3 days';
ERROR: could not open file "base/16387/357386324.1" (target block
200005): No such file or directory
The DB uptime is 23 days. How can I fix this?
Check that you have up-to-date backups (or verify that your DB replica is in sync)
The PostgreSQL wiki recommends stopping the database and rsyncing all of the PostgreSQL files to a safe location.
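A minimal sketch of that backup step (the data directory and destination paths are assumptions; adjust them to your installation):

# Stop PostgreSQL so the copy is consistent.
pg_ctl -D /var/lib/postgresql/data stop

# Copy the entire data directory to a safe location before touching anything.
rsync -a /var/lib/postgresql/data/ /backup/pgdata/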
The file where the table is physically stored seems to be missing. You can check where PostgreSQL stores the table on disk using:
SELECT pg_relation_filepath('queue_promotion_73');
pg_relation_filepath
----------------------
base/16387/357386324
(1 row)
If you are sure that your hard drives/RAID controller works fine, you can try rebuilding the table. It is a good idea to try this on a replica or backup snapshot of the database first.
VACUUM FULL queue_promotion_73;
Check the relation path again:
SELECT pg_relation_filepath('queue_promotion_73');
It should be different, and hopefully all the required files will be there.
The cause could be a hardware issue, so make sure to check DB consistency.
Hi, I have two stored procedures, a and b, where b is called by a. In a, a temp table is created, and in b I try to access the temp table created by a. At compile time I get a missing-object error.
The compiler checks all objects mentioned in your SQL without taking into account that some of them are created during execution.
You could create the temp table first and then, in the same session, create the procs; that should work.
You can then drop the temp table.
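A minimal T-SQL-style sketch of that approach (all names are illustrative, and the batch separator may differ on your platform):

-- Create the temp table in this session so proc b can compile against it.
CREATE TABLE #results (id INT, note VARCHAR(100));
GO

-- Proc b only reads the temp table; it now resolves at creation time.
CREATE PROCEDURE b AS
BEGIN
    SELECT id, note FROM #results;
END;
GO

-- Proc a creates and fills the temp table at run time, then calls b.
CREATE PROCEDURE a AS
BEGIN
    CREATE TABLE #results (id INT, note VARCHAR(100));
    INSERT INTO #results (id, note) VALUES (1, 'example');
    EXEC b;
END;
GO

-- The session-level copy was only needed for compilation.
DROP TABLE #results;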
Seriously? Version 11? Wow.
Anyway, try adding a CREATE TABLE for the temp table before creating sproc b.