Error sending a single delete in Movilizer

I'm sending a delete operation to Movilizer with only a key and the pool but it gives me this error:
Cannot delete primary group (Do not set group in delete command to delete entire entry)
Why?

It was because I wasn't updating my ACK. This error occurred before, and back then it was also caused by the ACK. Thanks.

Related

Is there any way to identify the error returned by a SQL Server stored procedure in NodeJS

Scenario: executing a stored procedure to insert a row into a table
Output: normal, the record should be inserted as set in the SQL statement
Failure case: if the unique key is violated, it should not insert and should throw an error
All of the above steps work when executed manually in Azure Studio. When integrated with NodeJS using an async call, it works only for the positive test case, i.e. a fresh new record being inserted; when a duplicate is inserted, recordset.length is seen as undefined.
This undefined is visible in 6.3.1 but not in the earlier version, 6.2.3.
Now in 6.3.1 I could only find the option of using returnValue. Does anyone know of other features available to get notified of the error? Below is the output.
If it's successful, I get the result as
{
recordsets: [],
recordset: undefined,
output: {},
rowsAffected: [],
returnValue: 0
}
You can try putting your INSERT/UPDATE inside a TRY-CATCH block, then return a specific integer or RAISERROR. Also, wrap it in a transaction so you can roll back in the CATCH block.
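A minimal T-SQL sketch of that pattern (the procedure, table, and column names are purely illustrative, not taken from the question): return 0 on success, a distinct integer when the unique key is violated, and re-raise everything else so the driver rejects the call.

CREATE PROCEDURE dbo.usp_InsertRecord
    @Name NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        INSERT INTO dbo.MyTable (Name) VALUES (@Name);
        COMMIT TRANSACTION;
        RETURN 0;                          -- success
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() IN (2601, 2627)  -- unique index / unique constraint violation
            RETURN 1;                      -- signal the duplicate via the return value
        THROW;                             -- any other error: re-raise it to the caller
    END CATCH
END

With the mssql package, result.returnValue would then be 1 for the duplicate case; if you raise the error instead (THROW or RAISERROR), the promise from request.execute() is rejected and you can catch it in the Node code rather than inspecting returnValue.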

How to clean up the JdbcMetadataStore?

Initially our flow of communicating with Google Pub/Sub was like this:
1. The application accepts a message
2. It checks that the message doesn't exist in the idempotencyStore
3.1 If it doesn't exist - put it into the idempotency store (the key is the value of a unique header, the value is the current timestamp)
3.2 If it exists - just ignore the message
4. When processing is finished - send an acknowledge
5. In the acknowledge success callback - remove this message from the metadata store
Point 5 is wrong, because theoretically we can get a duplicated message even after the message has been processed. Moreover, we found out that sometimes a message might not be removed even though the success callback was invoked (Message is received from Google Pub/Sub subscription again and again after acknowledge [Heisenbug]). So we decided to update the value after the message is processed and replace the timestamp with a "FINISHED" string.
But sooner or later this table will become overcrowded, so we have to clean up the messages in the MetadataStore. We can remove messages which have been processed more than one day ago.
As was mentioned in the comments of https://stackoverflow.com/a/51845202/2674303, I can add an additional column to the metadataStore table where I could mark whether a message is processed. That is not a problem at all. But how can I use this flag in my cleaner? The MetadataStore has only a key and a value.
In the acknowledge success callback - remove this message from the metadata store
I don't see a reason for this step at all.
Since you say that you store a timestamp in the value, you can analyze this table from time to time and remove entries that are definitely old.
In one of my projects we have a daily job in the DB to archive a table for better main-process performance - simply because we don't need the old data any more. For that we check a timestamp in the row to determine whether it should go into the archive or not. I wouldn't remove data immediately after processing, just because there is a chance of redelivery from the external system.
On the other hand, for better performance I would add an extra indexed column of timestamp type to that metadata table and populate it via a trigger on each update or insert. Well, the MetadataStore just inserts an entry from the MetadataStoreSelector:
return this.metadataStore.putIfAbsent(key, value) == null;
So, you need an insert trigger to populate that date column. This way you will know at the end of the day whether you need to remove an entry or not.
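A rough T-SQL sketch of that idea, assuming the default Spring Integration INT_METADATA_STORE table on SQL Server (the column, index, and trigger names are made up for illustration):

-- extra indexed timestamp column
ALTER TABLE INT_METADATA_STORE ADD CREATED_DATE DATETIME2 NULL;
GO
CREATE INDEX IX_METADATA_STORE_CREATED_DATE ON INT_METADATA_STORE (CREATED_DATE);
GO

-- populate the column whenever putIfAbsent() inserts a new entry
CREATE TRIGGER TRG_METADATA_STORE_INSERT ON INT_METADATA_STORE
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE m
    SET CREATED_DATE = SYSDATETIME()
    FROM INT_METADATA_STORE m
    INNER JOIN inserted i
        ON i.METADATA_KEY = m.METADATA_KEY AND i.REGION = m.REGION;
END
GO

-- daily cleanup: drop entries marked FINISHED that were created more than one day ago
DELETE FROM INT_METADATA_STORE
WHERE METADATA_VALUE = 'FINISHED'
  AND CREATED_DATE < DATEADD(DAY, -1, SYSDATETIME());

The DELETE at the end is what the daily cleaner would run; the 'FINISHED' literal matches the value the question stores once a message has been processed.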

Got an error while trying to remove a query from Query Store

I was trying to remove a slow query from Query Store, but I got an error message saying:
query can't be deleted as there is an active forcing policy on this queryid while executing "sp_query_store_remove_query #queryid"
Can't we remove the query in this case?
I tried exec sp_query_store_remove_query #queryid.
Try this.
ALTER DATABASE WideWorldImporters SET QUERY_STORE CLEAR;
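If clearing the entire Query Store is too drastic, a hedged alternative sketch (assuming the "active forcing policy" comes from a plan forced for that query, and with @queryid / @planid as placeholders): unforce the plan first, then retry the removal.

-- find the forced plan for the query
SELECT plan_id
FROM sys.query_store_plan
WHERE query_id = @queryid AND is_forced_plan = 1;

-- unforce it, then the remove should go through
EXEC sp_query_store_unforce_plan @query_id = @queryid, @plan_id = @planid;
EXEC sp_query_store_remove_query @query_id = @queryid;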

IMAP (gmail?) returning incorrect UIDs to FETCH request

I'm fiddling with IMAP in Python currently and have noticed the following (tested using single and multiple messages in a folder):
1. Select folder "A" and fetch the UID of the message within
2. Move the message to another folder, "B", select that and fetch the new UID
3. Using the "B" UID, move the message back to folder "A"
4. Re-select folder "A" and fetch the new UID - as expected this gives a new UID for the message
5. Finally, issue a FETCH using the new UID from step 4
The issue is that in the 5th step the command executes OK - but the server returns a different UID to the one specified in the request (normally, but not always, a difference of 1)! For example:
LEFC12 UID FETCH 65 (FLAGS...)])
DEBUG:imapclient.imaplib:< * 3 FETCH (UID 64 ... {47}
The same happens with multiple messages - all of them are offset by the same amount.
If I have the process sleep for 20s (as in totally idle for 20s, if it retries every second it never comes back OK) - the fetch returns the correct UID fine. I'm not sure if this is a gmail or IMAP thing, any pointers/help would be greatly appreciated!
EDIT: here's all of the imapclient logs for the sequence above: https://gist.github.com/Fizzadar/37cb1fa808ffb6594326bba293f6daab. I have noticed that this isn't consistent - if you repeat the above steps twice over it always fails, but just the once it fails randomly (~50%, leading me to believe this is a gmail-specific issue).

SonarQube : cannot insert duplicate key row in object 'dbo.file_sources' with unique index 'file_sources_file_uuid_uniq'. The duplicate key value is

During the migration of Sonar (from 4.5.7 to 5.4), we faced an issue. The migration failed with this message:
Cannot insert duplicate key row in object 'dbo.file_sources' with unique index 'file_sources_file_uuid_uniq'. The duplicate key value is...
My database is MS SQL Server, configured with the FRENCH_CI_AS collation. I tried changing it to FRENCH_CS_AS, but it didn't solve the problem.
I've observed that each time we restart the migration, the number of processed files is different, BUT it always fails while processing the same file.
Any idea?
