Check for Insert Failure Due to Page Lock - sap-ase

I freely admit that I know nothing about Sybase in terms of its return codes; my experience has primarily been with Oracle and SQL Server. This particular project requires an insert into a binary field of a table, and the insert periodically fails because the entry is locked. Looking at the code, it doesn't appear that I am able to reliably detect the lock condition.

My current strategy is to insert the data, then select to determine whether the insert succeeded, and retry if it did not, using threads that sleep for several seconds between retry attempts. This fails to account for other data that may have altered the entry before my original insert and may be more current than the data I am attempting to insert.

Is there a simple way to determine whether the row is locked before attempting an insert, wait for the lock to clear, and then lock the row myself before inserting? Alternatively, if I can detect that the entry is locked, I can fail the transaction and alert the user so the failure can be inspected manually.

Before anyone asks: I am unable to change how the RDBMS is set up to lock entries. This has to be handled by the code that performs the insert.

Locking the entire table will work, but it's pretty crude if you're only after smaller granularity such as a page (as per the title of your question).
You can get that by issuing SET LOCK NOWAIT before the INSERT and then checking @@error for status code 12205, which indicates that a lock was held on something needed to perform the insert. Don't forget to run SET LOCK WAIT afterwards to restore the default, or NOWAIT will apply to the rest of your session.

Try:
BEGIN TRANSACTION
LOCK TABLE <<table_name>> IN EXCLUSIVE MODE NOWAIT
IF @@error != 0
BEGIN
    ROLLBACK TRANSACTION
    PRINT 'COULD NOT ACQUIRE LOCK. EXITING ...'
    RETURN 0
END
<< your code here if it was able to lock >>
COMMIT TRANSACTION
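If you do stay with the retry strategy described in the question, the detect-and-retry logic can live in the application code rather than in SQL. A minimal sketch in Python; the error text, the `fake_insert` stand-in, and the helper name are all assumptions, and in real code the operation would call your Sybase client library:

```python
import time

def retry_on_lock(operation, is_lock_error, attempts=5, delay=0.5):
    """Run `operation`, retrying only when `is_lock_error` says the
    failure was a lock conflict. Backs off linearly between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if not is_lock_error(exc) or attempt == attempts:
                raise  # not a lock problem, or out of retries
            time.sleep(delay * attempt)

# Stand-in operation that fails twice with a lock-style error before
# succeeding (real code would perform the INSERT via the DB driver).
calls = {"n": 0}
def fake_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("error 12205: lock timeout")
    return "inserted"

result = retry_on_lock(fake_insert, lambda e: "12205" in str(e), delay=0.01)
print(result)  # inserted
```

The key point is that only failures recognized as lock conflicts are retried; any other error propagates immediately so it can be surfaced to the user.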

Related

READ for UPDATE in CICS

If I use READ for UPDATE to read a record from a file without a subsequent DELETE, REWRITE, UNLOCK, or SYNCPOINT command, will something happen to the record being read?
Nothing happens to the record itself. A lock will be held on the record (and possibly on the control interval) until the DELETE, REWRITE, UNLOCK, or SYNCPOINT is issued. See https://www.ibm.com/docs/en/cics-ts/5.6?topic=summary-read for the various locks that will be held depending on the type of file and the access mode. Note that a SYNCPOINT is issued automatically at end of task, so while it's poor programming practice to fail to issue a command that releases the lock, CICS will take care of things at end of task.

Sqlite : Modifying locking criteria inside begin - commit

As per the SQLite documentation, when using a deferred transaction with BEGIN - COMMIT, the database is locked from the first write.
Most probably this lock is held until the transaction is committed. So if I do BEGIN, perform the first write, and the COMMIT comes 180 seconds later, my database is locked for that entire time, and I cannot perform write operations from another thread.
Is there any way to tell SQLite not to hold locks until the commit, and instead to acquire locks only while it is actually writing within the transaction? That would give me some chance of concurrent writes from another thread during that transaction. Or is there any other solution?
I am using C Sqlite library in an embedded environment.
Allowing others to write data that you are reading would result in inconsistent data.
To allow a writer and readers at the same time, enable WAL mode.
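Enabling WAL is a one-line pragma, and it persists in the database file for future connections. A small sketch using Python's built-in sqlite3 module (the file path and table name are just examples) showing a reader proceeding while a write transaction is still open:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts the connection in autocommit mode, so we
# control transactions explicitly with BEGIN/COMMIT.
writer = sqlite3.connect(path, isolation_level=None)
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")      # take the write lock
writer.execute("INSERT INTO t VALUES (1)")

# While the write transaction is open, a second connection can still
# read the last committed snapshot without blocking.
reader = sqlite3.connect(path, isolation_level=None)
count_before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count_before)  # 0

writer.execute("COMMIT")
count_after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count_after)  # 1
```

Note that the reader sees the pre-commit snapshot (0 rows) while the write is in flight, and the committed row only afterwards; in rollback-journal mode the same SELECT would instead fail with "database is locked".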

a synchronization issue between requests in express/node.js

I've come up against a tricky synchronization issue in express/node.js for which I've not been able to find an elegant solution:
I set up an express/node.js web app for retrieving statistics data from a one-row database table.
If the table is empty, populate it by a long calculation task
If the record in table is older than 15 minutes from now, update it by a long calculation task
Otherwise, respond with a web page showing the record in DB.
The problem is that when multiple users issue requests simultaneously and the record is old, the long calculation task is executed once per request instead of just once.
Is there any elegant way that only one request triggers the calculation task, and all others wait for the updated DB record?
Yes, it is called locks.
Put an additional column in your table, say lock, of timestamp type. Once a process starts working with the record, put now + timeout into it (as a rule of thumb, I choose the timeout to be 2x the average processing time). When the process finishes, set that column back to NULL.
At the beginning of processing, check that column. If the value > now condition is satisfied, return a status code to the client such as 409 Conflict (don't force the client to wait; it's a bad user experience not knowing what's going on, unless processing time is really short). Otherwise start processing (ideally in a separate thread/process so the user won't have to wait, responding with an appropriate status code such as 202 Accepted).
The now + timeout value is needed in case your processing process crashes, so you avoid deadlocks. Also remember that you have to "check and set" this lock column in a transaction because of race conditions (which might be quite difficult if you are working with MongoDB-like databases).
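The "check and set" can be done atomically with a single conditional UPDATE, since the database only reports a row as updated if the WHERE clause matched. A sketch against SQLite via Python's sqlite3 for brevity (the table and column names are made up; the same pattern works in most SQL databases):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute(
    "CREATE TABLE stats (id INTEGER PRIMARY KEY, value TEXT, lock_until REAL)")
conn.execute("INSERT INTO stats VALUES (1, 'stale', NULL)")

TIMEOUT = 120.0  # ~2x the average processing time, per the rule of thumb

def try_acquire(conn):
    """Atomically claim the row: succeeds only if it is unlocked or
    the previous holder's lock has expired (crash recovery)."""
    now = time.time()
    conn.execute("BEGIN IMMEDIATE")  # 'check and set' in one transaction
    cur = conn.execute(
        "UPDATE stats SET lock_until = ? "
        "WHERE id = 1 AND (lock_until IS NULL OR lock_until < ?)",
        (now + TIMEOUT, now))
    conn.execute("COMMIT")
    return cur.rowcount == 1  # 1 row changed => we hold the lock

first = try_acquire(conn)   # True: this request starts the calculation
second = try_acquire(conn)  # False: another request holds it -> 409
print(first, second)

# When processing finishes (or fails cleanly), release the lock:
conn.execute("UPDATE stats SET lock_until = NULL WHERE id = 1")
```

Only the request that sees rowcount == 1 runs the long calculation; everyone else gets the 409 (or could poll until the row is fresh).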

Multi Threading in a Tree like structure

Below is a question I was asked in an interview. I believe there are many solutions to it, but I want to know what the best one would be (and Stack Overflow is perfect for this :) ).
Q: We have a tree like structure and have three threads. Now we have to perform three operations: Insert, Delete and lookup. How will you design this?
My approach: I will take a mutex for the insert and delete operations, as I want only one thread at a time to perform an insert or delete. For lookup I will allow all three threads to enter the function, but keep a count (counting semaphore) so that insert and delete operations can't be performed during that time.
Similarly, while an insert or delete operation is in progress, no thread is allowed to do a lookup, and the same applies between insert and delete.
He then cross-questioned me: since I allow only one thread at a time to insert, if two nodes under different leaves need to be inserted, my approach still handles them one at a time. This got me stuck.
Is my approach fine ?
What can be other approaches ?
How about something like this? It's similar to road blocks on a highway (broken paths).
Each node will have two flags, say leftClear_f and rightClear_f, indicating a clear path ahead
There will be only one mutex for the tree
Lookup Operation:
If the flags are set, indicating the path ahead is under modification, go into a conditional wait and wait for the signal.
After getting the signal, check the flag and continue.
Insert Operation
Follow the lookup procedure until you reach the location of insertion.
Acquire the mutex and set the relevant flags of the parent_node and both child_nodes after checking their state.
Release the mutex so that parallel delete/insert operations can happen on other valid, unbroken paths
Re-acquire the mutex after the insert operation and clear the relevant flags in the parent_node and child_nodes.
Delete Operation
Same as the insert operation, except that it deletes nodes.
PS: You can also maintain the details of the nodes under insert or delete someplace else; other operations can then skip the broken paths if needed. It sounds complicated, yet doable.
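A common textbook alternative to the flag scheme above is hand-over-hand locking (lock coupling): one lock per node, and a walker holds at most two locks at a time, so inserts into different branches proceed in parallel. A minimal insert-only BST sketch in Python (an illustration of the technique, not necessarily the interviewer's expected answer; delete needs more care):

```python
import threading

class Node:
    """BST node carrying its own lock for hand-over-hand traversal."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.lock = threading.Lock()

class Tree:
    def __init__(self, root_key):
        self.root = Node(root_key)  # fixed root, never removed here

    def insert(self, key):
        node = self.root
        node.lock.acquire()
        while True:
            side = "left" if key < node.key else "right"
            child = getattr(node, side)
            if child is None:
                setattr(node, side, Node(key))  # link while parent is locked
                node.lock.release()
                return
            child.lock.acquire()  # lock the child first...
            node.lock.release()   # ...then release the parent (coupling)
            node = child

    def inorder(self):
        out = []
        def walk(n):
            if n:
                walk(n.left)
                out.append(n.key)
                walk(n.right)
        walk(self.root)
        return out

# Four threads insert disjoint key ranges concurrently.
t = Tree(50)
threads = []
for i in range(4):
    chunk = list(range(i * 25, (i + 1) * 25))
    th = threading.Thread(target=lambda ks=chunk: [t.insert(k) for k in ks])
    threads.append(th)
    th.start()
for th in threads:
    th.join()

result = t.inorder()
print(result == sorted(result), len(result))  # True 101
```

Because a thread linking a new node always holds the parent's lock, two inserts under the same parent serialize, but inserts in separate subtrees never contend after they diverge, which answers the interviewer's objection.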

SQLite issue - DB is locked workaround

I have 2 processes that connect to the same DB.
The first one is used to read from the DB and the second is used to write to the DB.
The first process sends write procedures to the second process for execution, via a message queue on Linux.
Every SQL statement goes through the prepare, step, finalize routine, where the prepare and step are run in a loop of up to 10000 attempts until they succeed (I did this to overcome 'DB is locked' issues).
To add a table I do the following:
The first process sends a request via the message queue to the second process to add a table and insert garbage into its rows, with journal_mode=OFF.
Then the first process checks for the existence of the table so it can continue with its algorithm. (It checks in a loop, with a usleep between iterations.)
The problem is that the second process gets stuck in the step execution of 'PRAGMA journal_mode=OFF;' because it says the DB is locked (here too, I use a loop of 10000 iterations with usleep to wait for the DB to become free, as mentioned before).
When I add closing the connection to the first process's 'check for existing table' loop, the second process is fine. But now, when I add tables and values, I sometimes get 'callback requested query abort' in the step statement.
Any idea what is happening here?
Use WAL mode. It allows one writer and any number of readers without any problems. You don't need to check for the locked state and do retries, etc.
WAL limitation: The DB has to be on the local drive.
Performance: Large transactions (1000s of inserts or similar) are slower than classic rollback journal, but apart of that the speed is very similar, sometimes even better. Perceived performance (UI waiting for DB write to finish) improves dramatically.
WAL is a new technology, but already used in Firefox, Android/iOS phones, etc. I did tests with two threads running at full speed - one writing and the other one reading - and did not encounter a single problem.
You may be able to simplify your app when adopting the WAL mode.
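Even in WAL mode, two writers can still collide briefly. Rather than the hand-rolled 10000-iteration retry loop, you can let SQLite do the waiting with sqlite3_busy_timeout(); in the C API you call it once after sqlite3_open(), and it is sketched here via the equivalent `timeout` argument in Python's sqlite3 module (file path and table name are examples):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

# timeout=5 corresponds to sqlite3_busy_timeout(db, 5000): on SQLITE_BUSY,
# SQLite itself retries internally for up to 5 s before returning an error.
conn = sqlite3.connect(path, timeout=5, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
conn.execute("INSERT INTO log VALUES ('hello')")
count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 1
```

With the busy timeout set on both processes' connections, the prepare/step retry loops and the usleep polling can be removed entirely.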
