Can't Drop Azure SQL Table due to severe error, what is the cause?

I am trying to drop a table before reading in a new set of values for testing purposes. When I run the command
DROP TABLE [dbo].[Table1]
I get the following error after about 3-5 minutes. It is a large table (~50 million rows).
Failed to execute query. Error: A severe error occurred on the current command. The results, if any, should be discarded.
Operation cancelled by user.

This error can occur for many different reasons, and the message itself does not point to the exact cause. To narrow it down, check the following:
It might be a corruption or indexing issue. To rule that out, first check the consistency of the whole database:
DBCC CHECKDB('database_name');
If you have narrowed it down to a single table, check that table's consistency:
DBCC CHECKTABLE('table_name');
When the problem occurs, search the LOG folder (the one that contains ERRORLOG) for any files named SQLDump*. Alternatively, you can view the log in SSMS:
Object Explorer >> Management Node >> SQL Server Logs >> View the current log
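If these checks come back clean and the DROP still times out on the 50-million-row table, one workaround sometimes suggested (not from the original answer, and it assumes no foreign keys or other objects reference the table) is to empty the table with a minimally logged TRUNCATE before dropping it:

-- Check the specific table for corruption first
DBCC CHECKTABLE('dbo.Table1');
-- Assuming CHECKTABLE is clean and nothing references the table:
-- TRUNCATE is minimally logged, so the subsequent DROP is cheap
TRUNCATE TABLE [dbo].[Table1];
DROP TABLE [dbo].[Table1];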

Related

Push into existing local table failure (Windows): InvalidRegionNumberException then IllegalArgumentException

I want to push data into an already existing table with a single column family and no records.
I am using shc-core:1.1.1-2.1-s_2.11 on a Windows machine. I have HBase 1.2.6 installed and use Scala 2.11.8.
When I try to push data, I first get the following error: org.apache.spark.sql.execution.datasources.hbase.InvalidRegionNumberException: Number of regions specified for new table must be greater than 3.
After following the advice of this link https://github.com/hortonworks-spark/shc/issues/249#issue-318285217, I added: HBaseTableCatalog.newTable -> "5" to my options.
It still failed but with: java.lang.IllegalArgumentException: Can not create a Path from a null string.
Following this link (https://github.com/hortonworks-spark/shc/issues/151#issuecomment-313800739), I added "tableCoder":"PrimitiveType" to my catalog.
Still facing the same error.
I saw people asking for clarification about this issue (https://github.com/hortonworks-spark/shc/issues/249#issuecomment-463528032).
It is a known issue, and it apparently has been fixed (https://github.com/hortonworks-spark/shc/issues/155#issuecomment-315236736).
I do not know what to do next.
Is there a solution for this?
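For reference, this is roughly how the two suggested settings fit together. This is only a sketch: the catalog's table and column names are placeholders, and an existing DataFrame df is assumed.

import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

// Hypothetical catalog; "tableCoder":"PrimitiveType" is the workaround from shc issue #151
val catalog = """{
  "table":{"namespace":"default", "name":"my_table", "tableCoder":"PrimitiveType"},
  "rowkey":"key",
  "columns":{
    "col0":{"cf":"rowkey", "col":"key", "type":"string"},
    "col1":{"cf":"cf1", "col":"value", "type":"string"}
  }
}"""

df.write
  .options(Map(
    HBaseTableCatalog.tableCatalog -> catalog,
    HBaseTableCatalog.newTable -> "5"  // workaround from shc issue #249
  ))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()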

PostgreSQL ERROR: could not open file "base/.../..."

There are many queue_promotion_n tables, where n runs from 1 to 100.
Table 73 raises an error on a fairly simple query:
SELECT count(DISTINCT queue_id)
FROM "queue_promotion_73"
WHERE status_new > NOW() - interval '3 days';
ERROR: could not open file "base/16387/357386324.1" (target block 200005): No such file or directory
The database has been up for 23 days. How can I fix this?
Check that you have up-to-date backups (or verify that your DB replica is in sync).
The PostgreSQL wiki recommends stopping the database and rsyncing all PostgreSQL files to a safe location, as sketched below.
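A sketch of that precaution (the service name and data directory are assumptions; they vary by installation):

# Stop PostgreSQL, copy the entire data directory somewhere safe, then restart
sudo systemctl stop postgresql
sudo rsync -a /var/lib/postgresql/ /safe/location/pgdata-backup/
sudo systemctl start postgresql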
The file where the table is physically stored seems to be missing. You can check where PostgreSQL stores the data on disk with:
SELECT pg_relation_filepath('queue_promotion_73');
pg_relation_filepath
----------------------
base/16387/357386324
(1 row)
If you are sure that your hard drives/RAID controller works fine, you can try rebuilding the table. It is a good idea to try this on a replica or backup snapshot of the database first.
VACUUM FULL queue_promotion_73;
Check the relation path again:
SELECT pg_relation_filepath('queue_promotion_73');
It should now be different, and hopefully all required files will be present.
The cause could be a hardware issue, so make sure to check database consistency.
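Since there are 100 queue_promotion_n tables, it may also be worth checking whether any others are damaged. A sketch that forces a full scan of each table and reports failures (assuming the naming pattern from the question):

DO $$
DECLARE
  i int;
  n bigint;
BEGIN
  FOR i IN 1..100 LOOP
    BEGIN
      -- A full-table count touches every block and surfaces missing files
      EXECUTE format('SELECT count(*) FROM queue_promotion_%s', i) INTO n;
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'queue_promotion_% appears damaged: %', i, SQLERRM;
    END;
  END LOOP;
END $$;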

How do I export the runtime DataTable to Excel if an error occurs due to data?

I want to know whether I can export a DataTable to Excel when I get an error caused by data while running my scripts.
Say I have 5 records in a sheet and 2 records have been processed fine, but while running the third record my script encounters an error. Can I export to Excel at that moment?
Errors may occur anywhere because of the data.
Your question doesn't explicitly say QTP, but I'm assuming QTP because you used the tag HP-UFT.
I'm not sure what you mean by "when we get error", so I'll explore two possibilities.
1) You're getting an error in the application you are testing; QTP itself is still executing the script.
In this situation, your script should have validation checks (If statements that verify that what you expected to happen did indeed just happen). If one of those checks fails, you can immediately call DataTable.Export(filename) to save the data to disk before QTP ends; then the script could continue, or you could add an ExitTest to fail out and stop the test. A minimal sketch follows.
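(The object names, the checked property, and the file path below are examples, not from the question.)

' Validation check after an application step; names and path are examples
expected = DataTable("ExpectedValue", dtLocalSheet)
actual = Browser("app").Page("result").WebElement("message").GetROProperty("innertext")
If actual <> expected Then
    DataTable.Export "C:\temp\rundata.xls"  ' save runtime data before stopping
    ExitTest
End If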
Based on your question, I think it's more likely that:
2) You're getting an error in QTP itself. When QTP crashes, it drops any dynamic changes to the DataTable (i.e. if you had done a DataTable.Import(filename) or updated any fields, it would lose that and go back to its design-time DataTable instead).
In this situation, your script is encountering something that causes QTP itself to stop the script. Perhaps it's hitting an error where an object cannot be found, or some kind of syntax error. Consider adding defensive statements to check on things before your code reaches the point where this kind of error would occur. For example, perhaps add:
If Not Browser("ie").Page("page").WebTable("table").Exist Then
    FailTestBecause "Can't find table"
End If
...
Function FailTestBecause(reason)
    Print "Test Failed Because: " & reason
    Reporter.ReportEvent micFail, Environment("ActionName"), reason
    DataTable.Export filename  ' filename: full path of the Excel file to write
    ExitTest
End Function
Or you could just use On Error Resume Next and put a DataTable.Export(filename) call immediately after the point where it is failing, as sketched below.
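(A minimal sketch of that second approach; the file path is an example.)

On Error Resume Next
' ... the step that normally throws the error goes here ...
If Err.Number <> 0 Then
    DataTable.Export "C:\temp\rundata.xls"  ' save runtime data before exiting
    ExitTest
End If
On Error GoTo 0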

Getting an error while creating a workflow in KNIME for text analytics

I have a set of URLs from which I have to read data and execute a particular workflow in KNIME for determining word frequency. However, I am getting the error "No column with DocumentCells found!". I have attached a reference image. Can someone please help me with this?
I am also getting the following error from the HttpRetriever node:
WARN HttpRetriever (deprecated) 0:2 Error retrieving https://www.bosch-do-it.com/gb/en/diy/knowledge/project-guides/valentine-s-day-601921.jsp: Exception java.net.UnknownHostException: www.bosch-do-it.com for URL "https://www.bosch-do-it.com/gb/en/diy/knowledge/project-guides/valentine-s-day-601921.jsp": www.bosch-do-it.com
You need the "Strings to Document" node to use the "POS tagger" node.
The "POS tagger" node needs a DocumentCell to work, and the "Strings to Document" node does the job.
Updated Workflow

Cassandra "Unexpected error deserializing mutation" error

Cassandra stopped.
When I restart Cassandra using "service cassandra start" or "service cassandra restart", I get the following error (from /var/log/cassandra/system.log):
ERROR [main] 2014-11-14 02:08:52,379 CommitLogReplayer.java (line 304) Unexpected error deserializing mutation; saved to /tmp/mutation3145492124947244713dat and ignored. This may be caused by replaying a mutation against a table with the same name but incompatible schema. Exception follows:
org.apache.cassandra.serializers.MarshalException: Expected 8 or 0 byte long for date (7)
at org.apache.cassandra.serializers.TimestampSerializer.validate(TimestampSerializer.java:118)
at org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:171)
at org.apache.cassandra.db.marshal.AbstractType.validateCollectionMember(AbstractType.java:289)
at org.apache.cassandra.db.marshal.AbstractCompositeType.validate(AbstractCompositeType.java:282)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:274)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:95)
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:151)
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:131)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Now I cannot start Cassandra.
cqlsh is also not available.
I also encountered this problem. I'd like to share how I resolved it.
Change the log level
The default log level is INFO, which produces too little output to track down the error. You should change the level from INFO to DEBUG. It is determined by the following line in the log4j-server.properties file:
log4j.rootLogger=INFO,stdout,R
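With the level raised to DEBUG, that line would read:
log4j.rootLogger=DEBUG,stdout,R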
Rerun Cassandra
From the DEBUG output, I found that the error appears while replaying a commit log file, so I suspected something was wrong in that file. But commit logs are binary, and I didn't know how to read them. So I deleted the commit log file that caused the error and restarted Cassandra, and it worked!
Your root problem may be different from mine, but you can try to track it down this way, as sketched below. Hope this helps.
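(A sketch of that recovery. The directory is Cassandra's default commit log location, and the segment file name here is hypothetical; take the real one from the DEBUG output. Note that any writes contained only in that segment are lost.)

sudo service cassandra stop
# Move the offending segment aside rather than deleting it outright
sudo mv /var/lib/cassandra/commitlog/CommitLog-3-1415934809524.log /tmp/
sudo service cassandra start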
