Error in trigger seems to roll back entire transaction - sap-ase

I have a delete trigger on table A that deletes from table B. I am testing what happens when the trigger fails, and for that purpose I renamed table B so that the trigger cannot find it.
These are my normal steps:
begin transaction
delete from C
delete from A -- this errors for reason mentioned
-- At this point the transaction is automatically rolled back.
However, if I do these steps:
begin transaction
delete from C
delete from B -- this errors for reason mentioned
-- At this point transaction is not rolled back, and I can still commit it.
How come the transaction rolls back in the first scenario? Shouldn't it be up to the application to call either rollback or commit?
The only difference is a trigger failing versus a plain statement failing for the same reason, so I would expect the behavior to be identical.
Edit: adding an example:
create table A (a int primary key)
create table B (a int primary key)
create table C (a int primary key)
create trigger Atrig on A for delete as delete B from B, deleted where B.a=deleted.a
insert into A values(1)
insert into A values(2)
insert into B values(2)
insert into B values(3)
insert into C values(1)
insert into C values(2)
insert into C values(3)
Now rename table B to B2 (I used the UI to rename it, so I don't have the SQL command for it).
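(For reference, ASE can also do the rename in SQL with the sp_rename system procedure; a one-liner, assuming no other objects depend on the name:)
sp_rename 'B', 'B2'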
begin transaction
delete C where a=3
delete A where a = 2
The above returns this error and rolls back the transaction:
System.Data.OleDb.OleDbException (0x80040E37): [42000]
[Message Class: 16]
[Message State: 1]
[Transaction State: 0]
[Server Name: sybase_15_5_devrs1]
[Procedure Name: Atrig]
[Line Number: 1]
[Native Code: 208]
[ASEOLEDB]B not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
However, if I do this:
begin transaction
delete C where a=3
delete B where a = 2
The above returns an error, but the transaction is not rolled back, and I can issue 'commit transaction':
System.Data.OleDb.OleDbException (0x80040E37): [42000]
[Message Class: 16]
[Message State: 1]
[Transaction State: 0]
[Server Name: sybase_15_5_devrs1]
[Native Code: 208]
[ASEOLEDB]B not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
I'm thinking the behavior has something to do with this topic.
In the table "Rollbacks caused by errors in data modification", it says:

Context: Transaction only
Behavior: Current command is aborted. Previous commands are not rolled back, and subsequent commands are executed.

Context: Trigger in a transaction
Behavior: Trigger completes, but trigger effects are rolled back. All data modifications since the start of the transaction are rolled back. If a transaction spans multiple batches, the rollback affects all of those batches. Any remaining commands in the batch are not executed. Processing resumes at the next batch.

After reading the same manual that you cited in your answer this morning, I tested this in my Sybase environment, and that is exactly what happened.
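A quick way to see the documented difference for yourself is to check @@trancount after each failure; a sketch against the tables above (each go ends a batch):
begin transaction
delete C where a = 3
delete A where a = 2   -- trigger fails: whole transaction rolled back, batch aborted
go
select @@trancount     -- returns 0: no transaction is open any more
go
begin transaction
delete C where a = 3
go
delete B where a = 2   -- statement fails: only this batch is aborted
go
select @@trancount     -- returns 1: the transaction is still open
commit transaction
go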

Related

atomically replace content of SQLite database, preferably in SQL or python3

I have a python script that supports an --edit-the-database option to invoke the user's preferred editor on a dump of the script's SQLite database. This option is intended to facilitate quick access to parts of the database that the script's other options don't provide access to, particularly during the development of this script.
Once the script has dumped the database's content, launched the editor and verified that the modified content is still valid, it needs to replace the existing database content.
First it removes all existing content by executing this SQL (using python's sqlite module):
PRAGMA writable_schema = 1;
DELETE FROM sqlite_master WHERE type IN ('table', 'index', 'trigger');
PRAGMA writable_schema = 0;
VACUUM;
and then it loads the new content using the sqlite module's executescript() method:
cursor.executescript(sql_slurped_from_user_modified_dump)
The problem is that these two operations (deleting existing content, loading new content) are not executed atomically: press CTRL-C at the wrong moment and the database content has been lost.
If I try to execute those two blocks of code inside a transaction then I get the error:
Error: cannot VACUUM from within a transaction
And if I keep the transaction but remove the VACUUM then I get the error:
Error: table first_table already exists
I have an ugly workaround in place: prior to calling the editor, the script copies the dump file to a safe location and writes a warning message to the user:
WARNING: if anything goes wrong then a backup of the database
can be found in /some/path
and, if the script continues and completes loading the new content, then it deletes the copy of the dump. But this is pretty ugly!
I could use DROP TABLE instead of the DELETE FROM sqlite_master ..., but if I am trying to allow the database to be modified in this way then I have to accept that the list of tables itself may change. I.e. if the user adds this to the dump:
CREATE TABLE t3 (n INT);
then a hard-coded list of DROPs like this:
BEGIN TRANSACTION;
DROP TABLE t1;
DROP TABLE t2;
DROP INDEX ...
...
cursor.executescript(sql_slurped_from_user_modified_dump)
...
END TRANSACTION;
isn't going to work the second time round (because it doesn't delete table t3).
I could use filesystem-atomic operations (i.e. something like: load the modified dump into a new database file; hardlink new file to old file), but that would require the script to close its database connection and reopen it afterwards, which, for reasons beyond the scope of this question, I would prefer not to do.
Does anybody have any better ideas for atomically replacing the entire content of a database whose list of tables is not predictable?
In case Google leads you here ...
I managed to do the first half of the task (delete existing content inside a single transaction) with something like this pseudocode:
-- Make the order in which tables are dropped irrelevant. Unfortunately, this
-- cannot be done just around the table dropping because it doesn't work inside
-- transactions.
PRAGMA foreign_keys = 0;
BEGIN TRANSACTION;
indexes  = (SELECT name FROM sqlite_master
            WHERE type = 'index' AND name NOT LIKE 'sqlite_autoindex_%';)
triggers = (SELECT name FROM sqlite_master WHERE type = 'trigger';)
tables   = (SELECT name FROM sqlite_master WHERE type = 'table';)
for thing in indexes + triggers + tables:
    DROP thing;
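In concrete SQL, the three lookups can be collapsed into one query that generates the DROP statements in a safe order; a sketch (the host language still has to execute each generated statement, and the sqlite_% filter also skips internal tables such as sqlite_sequence):
SELECT 'DROP ' || upper(type) || ' "' || name || '";'
FROM sqlite_master
WHERE type IN ('index', 'trigger', 'table')
  AND name NOT LIKE 'sqlite_%'
ORDER BY CASE type WHEN 'index' THEN 0 WHEN 'trigger' THEN 1 ELSE 2 END;  -- drop indexes and triggers before their tables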
At which point I thought the second half (loading new content in the same transaction) would just be this:
cursor.executescript(sql_slurped_from_user_modified_dump)
END TRANSACTION;
-- Reinstate foreign key constraints.
PRAGMA foreign_keys = 1;
Unfortunately, pressing CTRL-C in the middle of the two blocks resulted in an empty database. The cause? cursor.executescript() does an immediate COMMIT before running the provided SQL. That turns the above code into two transactions!
This isn't the first time I've been caught out by this module's hidden transaction
management, but this time I was motivated to try the apsw module instead. This switch was remarkably easy. The latter half of the code now looks like this:
cursor.execute(sql_slurped_from_user_modified_dump)
END TRANSACTION;
-- Reinstate foreign key constraints.
PRAGMA foreign_keys = 1;
and it works perfectly!

How to clean up the JdbcMetadataStore?

Initially our flow of communicating with Google Pub/Sub was this:
1. Application accepts the message
2. Checks that it doesn't exist in the idempotency store
3.1. If it doesn't exist, put it into the idempotency store (the key is the value of a unique header, the value is the current timestamp)
3.2. If it exists, just ignore this message
4. When processing is finished, send an acknowledgement
5. In the acknowledgement success callback, remove this message from the metadata store
Point 5 is wrong because theoretically we can get a duplicated message even after the message has been processed. Moreover, we found out that sometimes a message might not be removed even though the successful callback was invoked (Message is received from Google Pub/Sub subscription again and again after acknowledge [Heisenbug]). So we decided to update the value after the message is processed and replace the timestamp with a "FINISHED" string.
But sooner or later this table will become overcrowded, so we have to clean up messages in the MetadataStore. We can remove messages which are processed and were processed more than 1 day ago.
As mentioned in the comments of https://stackoverflow.com/a/51845202/2674303, I can add an additional column to the metadataStore table where I can mark whether a message is processed. That is not a problem at all. But how can I use this flag in my cleaner? MetadataStore has only key and value.
In the acknowledgement success callback, remove this message from the metadata store
I don't see a reason for this step at all.
Since you say that you store a timestamp in the value, you can analyze this table from time to time and remove decidedly old entries.
In one of my projects we have a daily job in the DB to archive a table for better main-process performance; right, just because we don't need the old data any more. For this reason we check a timestamp in the row to determine whether it should go into the archive or not. I wouldn't remove data immediately after processing, just because there is a chance of redelivery from the external system.
On the other hand, for better performance, I would add an extra indexed column of timestamp type to that metadata table and populate its value via a trigger on each update or insert. Well, the MetadataStore just inserts an entry from the MetadataStoreSelector:
return this.metadataStore.putIfAbsent(key, value) == null;
So you need an on-insert trigger to populate that date column. This way you will know at the end of the day whether you need to remove an entry or not.
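A sketch of that idea in SQL, assuming Spring Integration's default INT_METADATA_STORE table and MySQL syntax (where DEFAULT ... ON UPDATE CURRENT_TIMESTAMP plays the role of the trigger); names are illustrative:
-- Extra indexed timestamp column, maintained on every insert and update
ALTER TABLE INT_METADATA_STORE
    ADD COLUMN LAST_TOUCHED TIMESTAMP NOT NULL
        DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
CREATE INDEX IDX_METADATA_LAST_TOUCHED ON INT_METADATA_STORE (LAST_TOUCHED);

-- Daily cleaner: remove entries marked FINISHED more than a day ago
DELETE FROM INT_METADATA_STORE
WHERE METADATA_VALUE = 'FINISHED'
  AND LAST_TOUCHED < NOW() - INTERVAL 1 DAY;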

Insert bulk data into BigQuery without keeping it in the streaming buffer

My motive here is as follows:
Insert bulk records into BigQuery every half an hour
Delete the record if it already exists
These records are transactions, which change their status from: pending, success, fail and expire.
BigQuery does not allow me to delete rows that were inserted just half an hour ago, as they are still in the streaming buffer.
Can anyone suggest a workaround, as I am getting duplicate rows in my table?
A better course of action would be to:
Perform periodic loads into a staging table (loading is a free operation)
After the load completes, execute a MERGE statement.
You would want something like this:
MERGE dataset.TransactionTable dt
USING dataset.StagingTransactionTable st
ON dt.tx_id = st.tx_id
WHEN MATCHED THEN
  UPDATE SET status = st.status
WHEN NOT MATCHED THEN
  INSERT (tx_id, status) VALUES (st.tx_id, st.status)
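After the MERGE completes you would typically empty the staging table so the next half-hourly load starts clean. A one-line sketch against the same staging table (BigQuery requires a WHERE clause on DELETE, hence WHERE true):
DELETE FROM dataset.StagingTransactionTable WHERE true;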

Oracle update skip locked with dependency on other rows

For a multithreaded environment in our application we implemented Oracle's SKIP LOCKED so that no two threads pick up the same record in the database for processing (we update the flag from 'WAITING' to 'WORKING').
Now we have a modification: two records in the database which are queued for processing and have the same ID (workid) should not be picked up at the same time, i.e. the other record's status should not be updated to 'WORKING' if one record with that workid already has the 'WORKING' flag.
Can someone help as to how this can be achieved?
Below is the current procedure, where a single record is locked without that comparison.
create or replace PROCEDURE DEQUEUE_MANAGER(
    v_managerName    IN  Queue.manager%TYPE,
    v_workid         IN  VARCHAR2,
    v_key            OUT NUMBER,
    v_datatablekey   OUT Queue.DATA_TABLE_KEY%TYPE,
    v_tasktype       OUT Queue.TASK_TYPE%TYPE,
    v_no_of_attempts OUT Queue.ATTEMPT%TYPE,
    v_result         OUT NUMBER,
    v_error_desc     OUT VARCHAR2)
IS
    v_recordrow    Queue%ROWTYPE;
    v_qeuuestatus  VARCHAR2(255);
    v_updatedstaus VARCHAR2(255);
    CURSOR c IS
        SELECT *
        FROM QUEUE
        WHERE MANAGER = v_managerName
          AND STATUS = v_qeuuestatus
          AND workid = v_workid
          AND DATE_RELEASE <= SYSDATE
        FOR UPDATE SKIP LOCKED;
BEGIN
    v_result := -1;
    v_qeuuestatus := 'WAITING';
    v_updatedstaus := 'WORKING';
    v_tasktype := '';
    v_datatablekey := -1;
    v_key := -1;
    v_error_desc := 'No Data Found';
    v_no_of_attempts := 0;
    OPEN c;
    FOR i IN 1..1 LOOP
        FETCH c INTO v_recordrow;
        EXIT WHEN c%NOTFOUND;
        SELECT v_recordrow.key INTO v_key
        FROM QUEUE
        WHERE key = v_recordrow.key
        FOR UPDATE;
        UPDATE Queue
        SET STATUS = v_updatedstaus
        WHERE KEY = v_recordrow.key;
        COMMIT;
        v_datatablekey := v_recordrow.data_table_key;
        v_tasktype := v_recordrow.task_type;
        v_no_of_attempts := v_recordrow.attempt;
        v_result := 0;
        IF (v_no_of_attempts IS NULL) THEN
            v_no_of_attempts := 0;
        END IF;
    END LOOP;
    CLOSE c;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        v_datatablekey := -1;
        v_tasktype := '';
        v_key := -1;
        v_no_of_attempts := 0;
        v_result := -1;
        v_error_desc := 'No Rows Found';
    WHEN OTHERS THEN
        DBMS_OUTPUT.put_line('Exception Occurred');
        v_datatablekey := 0;
        v_tasktype := '';
        v_key := 0;
        v_no_of_attempts := 0;
        v_result := -2;
        v_error_desc := SQLERRM;
        ROLLBACK;
END;
The purpose of the FOR UPDATE syntax is to lock records we want to update at some point in the future. We want to get the records now so that we can be sure our subsequent update won't fail because another session has locked the records.
That's not what your code does. Instead it selects the record, updates it and then issues a commit. The commit ends the transaction, which releases the lock. Your process would work the same without the FOR UPDATE.
Now we have your additional requirement: if a queued record for a given workid is being processed, no other record for the same workid can be processed by another session. You say that all instances of a workid have the same values of queue status and manager. So that means the initial SELECT ... FOR UPDATE grabs all the records you want to lock. The snag is that SKIP LOCKED allows other sessions to update any of the other records for that workid (only the first record is actually locked, because that's the only one you've updated). Not that it matters, as the commit releases those locks.
The simplest solution would be to remove the SKIP LOCKED and the COMMIT. That would keep all the related records locked up until your processing transaction commits. But this may create further problems elsewhere.
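A sketch of that direction, shown as modified excerpts of the procedure above (illustrative, not a drop-in replacement):
-- The cursor keeps FOR UPDATE but drops SKIP LOCKED, so a competing
-- session blocks on rows for the same workid instead of skipping them
CURSOR c IS
    SELECT *
    FROM QUEUE
    WHERE MANAGER = v_managerName
      AND STATUS = 'WAITING'
      AND workid = v_workid
      AND DATE_RELEASE <= SYSDATE
    FOR UPDATE;
...
-- Claim the row, but do NOT commit here: the locks must be held until
-- the processing transaction commits (or rolls back) in the caller
UPDATE Queue
SET STATUS = 'WORKING'
WHERE KEY = v_recordrow.key;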
Concurrent programming is really hard to get right. It's an architectural problem; you can't solve it at the level of an individual program unit.

Atomic Sequence using Hazelcast

USE CASE: Create an AtomicSequence using the last saved id in the DB (not starting from zero) and generate ids following the last saved id in the DB.
First we check whether the AtomicSequence instance exists; if not, we create the AtomicSequence from the last saved id (if the entry is in the DB).
In HazelcastAtomicSequenceManager the getSequenceGenerator method is a two-step process.
Step 1: getHzInstance().getAtomicLong(key) // gets the IAtomicLong if present, otherwise creates a new one with initial value 0.
Step 2: this.sequence.compareAndSet(0, startVal); // set the value if the initial value is zero.
Now consider: Thread 1 comes along, sees that the AtomicSequence for the given key is not present, executes step 1, but has not yet executed step 2.
Thread 2 comes along, sees the AtomicSequence has been created (as step 1 was executed by Thread 1), and goes ahead and increments it to 1, since the value is still zero because Thread 1 has not executed step 2 yet.
Now Thread 1 tries to execute step 2 but is unable to, as the value has become 1 (or something else non-zero). So the AtomicSequence will generate ids from 1 onwards instead of starting from the last saved id, due to which our test case is failing.
Is there any way to fix this issue?
You need to get and try compareAndSet in a loop, until it succeeds:
long current;
do {
    // Re-read the latest value; another thread may have changed it since.
    current = atomicLong.get();
    // Retry until the swap from the freshly read value to startVal succeeds.
} while (!atomicLong.compareAndSet(current, startVal));
