READ for UPDATE in CICS

If I use READ for UPDATE to read a record from a file without issuing a subsequent DELETE, REWRITE, UNLOCK, or SYNCPOINT command, will something happen to the record being read?

Nothing happens to the record itself. A lock will be held on the record (and possibly on the control interval) until a DELETE, REWRITE, UNLOCK, or SYNCPOINT is issued. See https://www.ibm.com/docs/en/cics-ts/5.6?topic=summary-read for the various locks that are held depending on the type of file and the access mode. Note that a SYNCPOINT is issued automatically at end of task, so while it is poor programming practice to fail to issue a command that releases the lock, CICS will take care of things at end of task.

Related

EventSourcing race condition

Here is a nice article that describes what ES is and how to deal with it.
Everything is fine there, but one image bothers me.
I understand that in distributed event-based systems we can only achieve eventual consistency. Anyway... how do we ensure that we don't book more seats than are available? This is especially a problem when there are many concurrent requests.
It may happen that n aggregates are populated with the same number of reserved seats, and all of these aggregate instances allow reservations.
All events are private to the command running them until the book of record acknowledges a successful write. So we don't share the events at all, and we don't report back to the caller, until we know that our version of "what happened next" has been accepted by the book of record.
The write of events is analogous to a compare-and-swap of the tail pointer in the aggregate history. If another command has changed the tail pointer while we were running, our swap fails, and we have to mitigate/retry/fail.
In practice, this is usually implemented by having the write command to the book of record include an expected position for the write. (Example: ES-ExpectedVersion in GES).
The book of record is expected to reject the write if the expected position is in the wrong place. Think of the position as a unique key in a table in an RDBMS, and you have the right idea.
This means, effectively, that the writes to the event stream are actually consistent -- the book of record only permits the write if the position you write to is correct, which means that the position hasn't changed since the copy of the history you loaded was written.
It's typical for commands to read event streams directly from the book of record, rather than the eventually consistent read models.
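Here is a minimal sketch of that compare-and-swap in plain Perl, with an in-memory hash standing in for the book of record. All the names are illustrative, not any particular event store's API, and a real store would make the version check and the append a single atomic operation:

    use strict;
    use warnings;

    # One history per stream id; each history is an array of events.
    my %book_of_record;

    # Append @events to $stream only if the stream is still at
    # $expected_version; returns false on a lost race.
    sub try_append {
        my ($stream, $expected_version, @events) = @_;
        my $history = $book_of_record{$stream} //= [];
        return 0 if @$history != $expected_version;   # tail pointer moved
        push @$history, @events;
        return 1;
    }

    # A command handler loads the history, decides, then writes back
    # against the version it loaded.
    my $stream  = 'screening-42';
    my $version = @{ $book_of_record{$stream} // [] };
    # ... replay the history, check remaining seats, build new events ...
    if (try_append($stream, $version, { type => 'SeatsReserved', seats => 2 })) {
        print "reservation accepted\n";
    } else {
        print "stream moved on; reload and retry, or reject\n";
    }

The shape is the point: the writer names the version it read, and the store refuses the write if that version is no longer the tail.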
It may happen that n aggregate roots are populated with the same number of reserved seats, which means that having validation in the reserve method won't help. All n aggregate roots would then emit an event for a successful reservation.
Every bit of state needs to be supervised by a single aggregate root. You can have n different copies of that root running, all competing to write to the same history, but the compare and swap operation will only permit one winner, which ensures that "the" aggregate has a single internally consistent history.
There are going to be a couple of ways to deal with such a scenario.
First off, an event stream has a current version, which is the version of the last event added. This means that you would not, or should not, be able to persist the event stream if it is no longer at the version it had when it was loaded. Since the very first write increases the version of the event stream, the second write would not be permitted. And since events are not emitted, per se, but are rather a result of the event sourcing, we would not have the type of race condition in your example.
Well, if your commands are processed behind a queue, any failures should be retried. Should it not be possible to process the request, you would enter the normal "I'm sorry, Dave. I'm afraid I can't do that" scenario by letting the user know that they should try something else.
Another option is to start the processing by issuing an update against some table row to serialize any calls to the aggregate. It is probably not the most elegant option, but it does cause a system-wide block on the processing.
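That serializing row update might look like the following with DBI; the DSN, table, and column are made up for illustration, and it assumes a database with row-level locking such as PostgreSQL:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=booking', '', '',
                           { RaiseError => 1, AutoCommit => 0 });

    # Lock the aggregate's row; concurrent commands for the same
    # aggregate block here until we commit or roll back.
    $dbh->selectrow_array(
        'SELECT 1 FROM aggregate_lock WHERE aggregate_id = ? FOR UPDATE',
        undef, 'screening-42');

    # ... load the event stream, validate, append new events ...

    $dbh->commit;   # releases the row lock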
I guess, to a large extent, one cannot really trust the read store when it comes to transactional processing.
Hope that helps :)

Avoiding a race condition when inserting a model into the DB on complex conditions

We are trying to create an algorithm/heuristic that will schedule a delivery at a certain time period, but there is definitely a race condition here, whereby two conflicting scheduled items could be written to the DB, because the write is not really atomic.
The only way to truly prevent race conditions is to create some atomic insert operation, TMK.
The server receives a request to schedule something for a certain time period, and the server has to check if that time period is still available before it writes the data to the DB. But in that time the server could get a similar request and end up writing conflicting data.
How do I circumvent this? Is there some way to create a script in the DB itself that hooks into the write operation to make the whole thing atomic, say by putting a locking mechanism in that script? What makes the whole thing non-atomic is the read and the wire time between the server and the DB.
Whenever I run into a race condition, one immediate solution comes to mind: a QUEUE.
Step 1) Instead of adding the data to the database directly, add it to a queue without checking anything.
Step 2) A separate reader will read from the queue, check the DB for any conflict, and take the necessary action.
This is one of the ways to solve this; if you implement a better solution, please do share it.
Hope that helps
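For what it's worth, the check-and-insert from the question can also be made atomic inside the database itself by doing both steps in one transaction that takes the write lock up front. A sketch using SQLite through DBI; the schema and column names are illustrative:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=schedule.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    sub try_schedule {
        my ($start, $end) = @_;
        # BEGIN IMMEDIATE takes SQLite's write lock before we read, so no
        # other writer can sneak in between the check and the insert.
        $dbh->do('BEGIN IMMEDIATE TRANSACTION');
        my ($conflicts) = $dbh->selectrow_array(
            'SELECT COUNT(*) FROM deliveries WHERE start_t < ? AND end_t > ?',
            undef, $end, $start);       # standard interval-overlap test
        if ($conflicts) {
            $dbh->do('ROLLBACK');
            return 0;                   # slot already taken
        }
        $dbh->do('INSERT INTO deliveries (start_t, end_t) VALUES (?, ?)',
                 undef, $start, $end);
        $dbh->do('COMMIT');
        return 1;
    }

Other databases offer equivalent tools: SELECT ... FOR UPDATE, a unique constraint when the condition can be expressed as a key, or an exclusion constraint on the time range in PostgreSQL.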

Is it required to lock shared variables in Perl for read access?

I am using shared variables in Perl with use threads::shared.
These variables are modified from only a single thread; all the other threads only 'read' them.
Is it required that the 'reading' threads lock, as in
{
    lock($shared_var);
    if ($shared_var > 0) .... ;
}
?
Or is a simple test without locking safe (in the 'reading' thread!), like
if ($shared_var > 0) ....
?
Locking is not required to maintain internal integrity when setting or fetching a scalar.
Whether it's needed in your particular case depends on the needs of the reader, the other readers, and the writers. It rarely makes sense not to lock, but you haven't provided enough details for us to determine what your needs are.
For example, it might not be acceptable to use an old value after the writer has updated the shared variable. For starters, this can lead to a situation where one thread is still using the old value while another thread is using the new value, which can be undesirable if those two threads interact.
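To illustrate that interaction, here is a hedged sketch in which a reader that skips the lock could observe two related shared variables mid-update; the invariant between them is made up for the example:

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $count :shared = 0;
    my $cents :shared = 0;   # invariant: $cents == $count * 100

    my $writer = threads->create(sub {
        for (1 .. 100_000) {
            lock($count);              # one lock guards both, by convention
            $count++;
            $cents += 100;
        }
    });

    my $reader = threads->create(sub {
        for (1 .. 100_000) {
            lock($count);              # drop this lock and the check below
            warn "torn read: $count/$cents\n"   # can catch a half-applied update
                if $cents != $count * 100;
        }
    });

    $_->join for $writer, $reader;

Each individual fetch is safe without the lock, as said above; it is the relationship between the two values that needs it.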
It depends on whether it is meaningful to test the condition at just some point in time or another. The problem, however, is that in the vast majority of cases that Boolean test stands for other things, which might already have changed by the time you're done reading it; the condition then only represents a previous state.
Think about it. If it's an insignificant test, then it means little, and you have to question why you are making it. If it's a significant test, then it is telltale of a coherent state that may or may not exist anymore; you won't know for sure unless you lock it.
A lot of times, say in real-time reporting, you don't really care which snapshot the database hands you, you just want a relatively current one. But, as part of its transaction logic, it keeps a complete picture of how things are prior to a commit. I don't think you're likely to find this in code, where the current state is the current state--and even a state of being in a provisional state is a definite state.
I guess one of the times this can be different is cyclical access of a queue. If one consumer doesn't get the head record this time around, then one of them will the next time around. You can probably save some processing time by accessing the queue counter asynchronously. But here's a case where it means little in the context of just one iteration.
In the case above, you would just want to put some locked instructions afterward that expect that the queue might actually be empty even if your test suggested it had data. So, if it is just a preliminary test, you have to have logic that treats the test as being as unreliable as it actually is.
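The queue case in the last paragraph might look like this with Thread::Queue; the unlocked test is only a hint, and the step that follows tolerates it being wrong:

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $q = Thread::Queue->new(1 .. 10);

    # Cheap preliminary test: the queue *seemed* non-empty...
    if ($q->pending) {
        # ...but another consumer may have drained it by now, so use the
        # non-blocking dequeue, which returns undef if we lost the race.
        my $item = $q->dequeue_nb;
        if (defined $item) {
            print "got $item\n";
        } else {
            print "queue emptied under us; try again next cycle\n";
        }
    }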

Multi-threading in a tree-like structure

Below is a question I was asked in an interview. I believe there are many solutions to it, but I want to know what the best solution can be (and Stack Overflow is perfect for this :) ).
Q: We have a tree-like structure and three threads. Now we have to perform three operations: insert, delete, and lookup. How will you design this?
My approach: I will take a mutex for the insert and delete operations, as I want only one thread at a time to perform an insert or a delete. In the case of a lookup I will allow all three threads to enter the function, but keep a count (a counting semaphore) so that no insert or delete operation can be performed in the meantime.
Similarly, when an insert or delete operation is in progress, no thread is allowed to do a lookup, and the same holds between insert and delete.
Then he cross-questioned me: since I am allowing only one thread at a time to insert, if two nodes under different leaves need to be inserted, my approach will still allow only one at a time. This got me stuck.
Is my approach fine?
What can other approaches be?
How about something like this? It is similar to a road block in traffic (broken paths).
Each node will have two flags, say leftClear_f and rightClear_f, indicating a clear path ahead.
There will be only one mutex for the whole tree.
Lookup operation:
If the flags are set, indicating that the path ahead is under modification, go into a conditional wait and wait for the signal.
After getting the signal, check the flags and continue.
Insert operation:
Follow the lookup until you get to the location of insertion.
Acquire the mutex and set the relevant flag of the parent_node and of both child_nodes, after checking their state.
Release the mutex so that parallel delete/insert operations can happen on other valid, unbroken paths.
Acquire the mutex after the insert operation and update the relevant flags in the parent_node and child_nodes.
Delete operation:
Same as the insert operation, except that it deletes nodes.
PS: You can also maintain the details of the nodes under an insert or delete process someplace else. Other operations can then jump over the broken paths if necessary! It sounds complicated, yet it is doable.
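For comparison, the questioner's baseline (many concurrent lookups, one insert or delete at a time) is essentially a readers-writer lock, and it can be written with threads::shared condition variables. A minimal sketch, with no writer priority, so writers can starve under heavy read load:

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $gate    :shared;       # guards the two counters below
    my $readers :shared = 0;
    my $writing :shared = 0;

    sub read_lock   {          # lookup takes this
        lock($gate);
        cond_wait($gate) while $writing;
        $readers++;
    }
    sub read_unlock {
        lock($gate);
        cond_broadcast($gate) if --$readers == 0;
    }
    sub write_lock  {          # insert and delete take this
        lock($gate);
        cond_wait($gate) while $writing || $readers;
        $writing = 1;
    }
    sub write_unlock {
        lock($gate);
        $writing = 0;
        cond_broadcast($gate);
    }

The per-node flags in the answer buy exactly what this sketch cannot: two inserts on disjoint subtrees proceeding in parallel.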

Cache Locking for lots of processes?

I've got a script that 1) runs often, 2) is run by lots of different processes, and 3) takes a long time.
Update: the stuff that takes a long time is tests whose results will be the same for every process. Totally redundant.
I think it's time to do some caching, but I'm worried about the potential for races, conflicts, corruption, temporal-vortex-instability and chickens.
The complexity comes in because any of the processes could update the cache as well as read the cache, so I have to know how to handle all those combinations.
This smells to me like something that someone smarter and more educated than myself has probably already figured out.
Anyway, to make this question more concrete, here's what I've thought of so far. I'm using flock in my head; I'm not sure if that's a good idea.
if the cache is fresh, read it and go away
if the cache is stale
try to get a write lock
if I get the lock, do the tests and update the cache
If I don't get the lock, does someone else have a write or a read lock?
If it's shared, why are they reading a stale cache? Do I ignore them, do the tests, and update the cache (or maybe that causes them to read a half-written cache... er...)?
If it's exclusive, give them a short time to complete the tests and update the cache.
Hope that makes sense...
Here is a scheme which uses flock(2) for file locking in concurrent environments.
It explains how a "safe cache" works.
Every cache file has two companion lock files (WLock and RLock).
All flock requests are blocking except the first one (the non-blocking request on WLock).
Holding the WLock secures the opportunity to generate a fresh cache;
holding a shared RLock ensures safe reading from the cache file;
and holding an exclusive RLock ensures safe writing to the cache file.
There are two companion files for only one reason: while a new cache is being generated, and the old cache is not too old (cache time + N has not expired), clients can still use the old cache instead of waiting for the new one to be generated.
Please comment on this scheme and make it simpler if possible.
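One way to simplify it: if the refresher writes to a temporary file and renames it into place, readers can never see a torn cache and need no read lock at all; a single lock file then only serves to stop a stampede of redundant refreshes. A hedged sketch; the paths, TTL, and run_expensive_tests are stand-ins:

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    my $cache = '/tmp/results.cache';
    my $lockf = "$cache.lock";
    my $ttl   = 300;                         # freshness window, in seconds

    sub run_expensive_tests { "results\n" }  # stand-in for the slow part

    sub fresh { -e $cache && time() - (stat $cache)[9] < $ttl }

    sub read_cache {
        open my $in, '<', $cache or return undef;
        local $/;                            # slurp the whole file
        return <$in>;
    }

    sub try_refresh {
        open my $lk, '>>', $lockf or die "lock: $!";
        # Non-blocking: if another process holds the lock, it is refreshing
        # right now, so give up and let the caller serve the stale copy.
        return 0 unless flock $lk, LOCK_EX | LOCK_NB;
        return 1 if fresh();                 # a racer finished before us
        my $result = run_expensive_tests();
        # Write then rename: the rename is atomic, so readers only ever
        # see the complete old file or the complete new one.
        open my $out, '>', "$cache.tmp" or die "tmp: $!";
        print {$out} $result;
        close $out or die "close: $!";
        rename "$cache.tmp", $cache or die "rename: $!";
        return 1;                            # lock drops when $lk closes
    }

    my $data = fresh()       ? read_cache()
             : try_refresh() ? read_cache()
             : read_cache();                 # stale copy beats blocking
    print defined $data ? $data : '';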
