How to do concurrent insertion in a linear probing hash table? - multithreading

I'm trying to write a multithreaded linear probing hash table. There's a deadlock problem that I don't know how to resolve.
Suppose the hash table has two pages, each with a number of buckets. Two threads want to insert simultaneously. The first thread found a tombstone in page 1, and the second thread found a tombstone in page 2. But before the threads make the insertion, they both need to check whether the key already exists in the hash table. If the key exists, the insertion is supposed to fail immediately.
In my current implementation, each thread holds the write lock on the page it is going to write to, then tries to acquire read locks on the other pages. So thread 1 tries to acquire the read lock on page 2 (whose write lock is held by thread 2), and thread 2 tries to acquire the read lock on page 1 (whose write lock is held by thread 1). This results in a deadlock.
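For illustration, the acquisition order described above boils down to something like this sketch (a simplified two-page stand-in in C++, not my real table):

#include <shared_mutex>
#include <thread>

// Simplified stand-in: one reader/writer lock per page.
std::shared_mutex page_locks[2];

void insert_starting_at(int home_page, int other_page) {
    // Hold the write lock on the page we intend to modify...
    std::unique_lock write_guard(page_locks[home_page]);

    // ...then try to read-lock the other page to check for duplicates.
    // If another thread did the same thing starting from the other page,
    // both threads now wait forever: a classic circular wait.
    std::shared_lock read_guard(page_locks[other_page]);

    // duplicate check + insertion would go here
}

int main() {
    std::thread t1(insert_starting_at, 0, 1);
    std::thread t2(insert_starting_at, 1, 0);
    t1.join();
    t2.join();   // with unlucky timing this never returns
}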
What's the proper way to implement concurrent insertion in a linear probing hash table?

Related

how to emulate a reverse/inverse counting mutex using only binary, counting, and recursive mutexes?

I'm not sure if reverse/inverse counting mutex is the name of the synchronization primitive I'm looking for, and search results are vague. What I need is the ability for a thread that wants to write to an object to "lock" said object such that it waits for all existing readers of that object to finish, but no new thread/task can attempt to acquire any access to that object (neither read nor write) until the thread that wants to write finishes and "unlocks" that object.
The question is how to design a class that behaves as such a synchronization primitive using only preexisting binary/counting/recursive semaphores.
Using a standard counting semaphore isn't suitable, since that only limits the maximum number of tasks that can access the object simultaneously. It doesn't enforce that they may only read, it doesn't notify the thread that wants to write when they have finished, and it doesn't prevent other threads from starting to read in the meanwhile.
I need some kind of "counting" semaphore that is not bounded from above, but on which "register_read" or "lock_for_read" can be called (which keeps count how many simultaneous readers there are), but on which a task can call "lock_for_write", and then blocks until the count reaches 0, and after "lock_for_write" is called, any new calls to "lock_for_read" would have to block until the writing thread calls "unlock_from_write".
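One classical pattern that matches this description is the writers-preference readers-writer lock from the "second readers-writers problem", which can be built from nothing but binary semaphores. A minimal C++20 sketch (the ReadWriteGate name and its methods mirror the question's lock_for_read/lock_for_write wording; std::binary_semaphore stands in for whatever semaphore type is available):

#include <semaphore>

// Sketch of a writers-preference readers-writer lock built only from
// binary semaphores. Once lock_for_write() is called, new readers block
// at 'read_gate' until unlock_from_write() runs.
class ReadWriteGate {
    std::binary_semaphore resource{1};   // held by the writer, or by the reader group
    std::binary_semaphore read_gate{1};  // closed while a writer is waiting/writing
    std::binary_semaphore rmutex{1};     // protects reader_count
    std::binary_semaphore wmutex{1};     // protects writer_count
    int reader_count = 0;
    int writer_count = 0;

public:
    void lock_for_read() {
        read_gate.acquire();             // blocks if a writer is waiting or active
        rmutex.acquire();
        if (++reader_count == 1) resource.acquire();  // first reader locks out writers
        rmutex.release();
        read_gate.release();
    }
    void unlock_from_read() {
        rmutex.acquire();
        if (--reader_count == 0) resource.release();  // last reader lets a writer in
        rmutex.release();
    }
    void lock_for_write() {
        wmutex.acquire();
        if (++writer_count == 1) read_gate.acquire(); // first writer closes the gate
        wmutex.release();
        resource.acquire();              // waits for the existing readers to drain
    }
    void unlock_from_write() {
        resource.release();
        wmutex.acquire();
        if (--writer_count == 0) read_gate.release(); // reopen the gate to readers
        wmutex.release();
    }
};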

Is what I am doing preventing Deadlock?

I have 12 resources (r_1, r_2, ..., r_12), and 12 corresponding locks (l_1, l_2, ..., l_12) that my threads try to access. Each thread needs a specific sequence of resources to operate on. For example, thread 1 needs r_1, r_3, and r_5. Thread 2 needs r_1, r_7, r_8, r_10.
Now what I've basically done is order the resources from 1 to 12 and make each thread lock its required resources in that order (ascending). When a thread is done, it unlocks them in the reverse order (descending).
So my question is, am I preventing a deadlock in this case? Or can there happen a deadlock?
TL;DR: Yes, this system is totally immune to deadlock. At any point in time, the thread holding the highest-numbered lock must be able to make progress, since it cannot be waiting to acquire any locks held by other processes. More formally, your conditions ensure a total ordering on lock acquisition by all processes, which in turn ensures that circular wait can never occur. Circular wait is a necessary precondition for deadlock.
Detail: In order for deadlock to take place, all four of the following conditions must apply (see relevant Wikipedia):
Mutual exclusion - i.e. concurrent processes are accessing unsharable resources. Locks are unsharable by definition (they are also called mutexes for this reason).
Hold and wait - at least one process is attempting to access multiple resources, and it does so by holding some of them and then waiting for the others. This condition probably applies in your case, depending on the exact semantics of your program.
No preemption - it is not possible for processes to have their resources taken from them by other processes. Once again, this is a property of the locks we're using.
Circular wait - there is a cycle of processes, each waiting on a resource held by the next. This condition doesn't apply here. Consider a thread A waiting to acquire a lock L_i. That lock must be held by a thread B which has already obtained all the locks it requires from indices 1 to i. As a result, B cannot be waiting on A. Similarly, any thread that B is waiting on in order to acquire its next lock L_j (where j > i by the order in which locks are acquired) cannot be waiting on any locks with indices 1 to j. By induction, there can be no cycles of dependency in this system.
In concurrent programming, it is typical for the first three cases to be set by the context in which you are developing (which concurrency primitives are being used etc.), whereas the last can occasionally™ be avoided by cleverness.
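To make the total-ordering argument concrete, here is a minimal sketch (C++, illustrative names) of the scheme described in the question: sort the indices of the locks a thread needs and acquire them in ascending order. The release order, incidentally, has no effect on deadlock freedom; only the acquisition order matters.

#include <algorithm>
#include <mutex>
#include <vector>

std::mutex locks[12];                                    // l_1 .. l_12 (0-based here)

// Acquire every needed lock in ascending index order, run the work,
// then release in descending order. Because every thread obeys the same
// global order, a circular wait can never form.
void with_resources(std::vector<int> indices, void (*work)()) {
    std::sort(indices.begin(), indices.end());           // ascending acquisition order
    for (int i : indices) locks[i].lock();
    work();
    for (auto it = indices.rbegin(); it != indices.rend(); ++it)
        locks[*it].unlock();                              // descending release order
}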

Vulkan Queue Synchronization in Multithreading

In my application it is imperative that "state" and "graphics" are processed in separate threads. So for example, the "state" thread is only concerned with updating object positions, and the "graphics" thread is only concerned with graphically outputting the current state.
For simplicity, let's say that the entirety of the state data is contained within a single VkBuffer. The "state" thread creates a Compute Pipeline with a Storage Buffer backed by the VkBuffer, and periodically vkCmdDispatchs to update the VkBuffer.
Concurrently, the "graphics" thread creates a Graphics Pipeline with a Uniform Buffer backed by the same VkBuffer, and periodically draws/vkQueuePresentKHRs.
Obviously there must be some sort of synchronization mechanism to prevent the "graphics" thread from reading from the VkBuffer whilst the "state" thread is writing to it.
The only idea I have is to employ the usage of a host mutex from vkQueueSubmit to vkWaitForFences in both threads.
I want to know, is there perhaps some other method that is more efficient or is this considered to be OK?
Try using semaphores. They are used to synchronize operations solely on the GPU, which is much more optimal than waiting in the app and submitting work after previous work is fully processed.
When You submit work You can provide a semaphore which gets signaled when this work is finished. When You submit another work You can provide the same semaphore on which the second batch should wait. Processing of the second batch will start automatically when the semaphore gets signaled (this semaphore is also automatically unsignaled and can be reused).
(I think there are some constraints on using semaphores, associated with queues. I will update the answer later when I confirm this but they should be sufficient for Your purposes.
[EDIT] There are constraints on using semaphores but it shouldn't affect You - when You use a semaphore as a wait semaphore during submission, no other queue can wait on the same semaphore.)
There are also events in Vulkan which can be used for similar purposes but their use is a little bit more complicated.
If You really need to synchronize GPU and Your application, use fences. They are signaled in a similar way as semaphores. But You can check their state on the app side and You need to manually unsignal them before You can use them again.
[EDIT]
I've added an image that more or less shows what I think You should do. One thread calculates state and with each submission adds a semaphore to the top of the list (or a ring buffer as #NicolasBolas wrote). This semaphore gets signaled when the submission is finished (it is provided in pSignalSemaphores during "compute" batch submission).
Second thread renders Your scene. It manages its own list of semaphores similarly to the compute thread. But when You want to render things, You need to be sure that the compute thread finished calculations. That's why You need to take the latest "compute" semaphore and wait on it (provide it in pWaitSemaphores during "render" batch submission). When You submit rendering commands, the compute thread can't start and modify the data because it may influence the results of a rendering. So the compute thread also needs to wait until the most recent rendering is done. That's why the compute thread also needs to provide a wait semaphore (the most recent "rendering" semaphore).
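As a rough illustration of that signal/wait pairing (heavily simplified core Vulkan; it assumes the queues, command buffers, semaphores and fences already exist, and it omits the lists/rings discussed above):

// Compute thread: submit the "state update" batch and have it signal computeDone.
VkSubmitInfo computeSubmit = {};
computeSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
computeSubmit.commandBufferCount = 1;
computeSubmit.pCommandBuffers = &computeCmdBuf;
computeSubmit.signalSemaphoreCount = 1;
computeSubmit.pSignalSemaphores = &computeDone;        // signaled when the compute batch finishes
vkQueueSubmit(computeQueue, 1, &computeSubmit, computeFence);

// Graphics thread: wait on computeDone before reading the data, signal renderDone
// so the compute thread can in turn wait for the rendering to finish.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_VERTEX_SHADER_BIT;
VkSubmitInfo renderSubmit = {};
renderSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
renderSubmit.waitSemaphoreCount = 1;
renderSubmit.pWaitSemaphores = &computeDone;           // wait for the latest compute batch
renderSubmit.pWaitDstStageMask = &waitStage;
renderSubmit.commandBufferCount = 1;
renderSubmit.pCommandBuffers = &renderCmdBuf;
renderSubmit.signalSemaphoreCount = 1;
renderSubmit.pSignalSemaphores = &renderDone;
vkQueueSubmit(graphicsQueue, 1, &renderSubmit, renderFence);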
You just need to synchronize the submissions. The rendering thread cannot start submitting while the compute thread submits commands and vice versa. That's why adding semaphores to the lists (and taking semaphores from the lists) should be synchronized. But this has nothing to do with Vulkan. Probably some mutex will be helpful (for example a C++-ish std::lock_guard<std::mutex>). But this synchronization is a problem only when You have a single buffer.
Another thing is what to do with old semaphores from both lists. You cannot directly check what is their state and You cannot directly unsignal them. The state of semaphores can be checked by using additional fences provided with each submission. You don't wait on them but from time to time check if a given fence is signaled and, if it is, You can destroy old semaphore (as You cannot unsignal it from the application) or You can make an empty submission, with no command buffers, and use that semaphore as a wait semaphore. This way the semaphore will be unsignaled and You can reuse it. But I don't know which solution is more optimal: destroying old and creating new semaphores, or unsignaling them with empty submissions.
When You have a single buffer, a one-element list/ring is probably enough. But a more optimal solution would be some kind of ping-pong set of buffers - You read data from one buffer, but store results in another buffer. And in the next step You swap them. That's why in the image above, the lists of semaphores (rings) may have more elements depending on Your setup. The more independent buffers and semaphores in the lists (up to some reasonable count, of course), the better performance You will get, as You reduce time wasted on waiting. But this complicates Your code and it may also increase lag (the rendering thread gets data that is a bit older than the data currently processed by the compute thread). So You may need to balance performance, code complexity and rendering lag.
How you do this depends on two factors:
1. Whether you want to dispatch the compute operation on the same queue as its corresponding graphics operation.
2. The ratio of compute operations to their corresponding graphics operations.
#2 is the most important part.
Even though they are generated in separate threads, there must be at least some idea that the graphics operation is being fed by a particular compute operation (otherwise, how would the graphics thread know where the data is to read from?). So, how do you do that?
At the end of the day, that part has nothing to do with Vulkan. You need to use some inter-thread communication mechanism to allow the graphics thread to ask, "which compute task's data should I be using?"
Typically, this would be done by having the compute thread add every compute operation it does to some kind of circular buffer (thread-safe of course. And non-locking). When the graphics thread goes to decide where to read its data from, it asks the circular buffer for the most recently added compute operation.
In addition to the "where to read its data from" information, this would also provide the graphics thread with an appropriate Vulkan synchronization primitive to use to synchronize its command buffer(s) with the compute operation's CB.
If the compute and graphics operations are being dispatched on the same queue, then this is pretty simple. There doesn't have to actually be a synchronization primitive. So long as the graphics CBs are issued after the compute CBs in the batch, all the graphics CBs need is to have a vkCmdPipelineBarrier at the front which waits on all memory operations from the compute stage.
srcStageMask would be STAGE_COMPUTE_SHADER_BIT, with dstStageMask being, well, pretty much everything (you could narrow it down, but it won't matter, since at the very least your vertex shader stage will need to be there).
You would need a single VkMemoryBarrier in the pipeline barrier. Its srcAccessMask would be SHADER_WRITE_BIT, while the dstAccessMask would be however you intend to read it. If the compute operations wrote some vertex data, you need VERTEX_ATTRIBUTE_READ_BIT. If they wrote some uniform buffer data, you need UNIFORM_READ_BIT. And so on.
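A sketch of that barrier for the uniform-buffer case (assuming it is recorded at the front of a graphics command buffer named graphicsCmdBuf here):

// Makes the compute shader's writes visible to the graphics pipeline,
// for work submitted earlier on the same queue.
VkMemoryBarrier barrier = {};
barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;        // what the compute shader did
barrier.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT;        // how the graphics pipeline reads it

vkCmdPipelineBarrier(graphicsCmdBuf,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, // srcStageMask
                     VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT,   // dstStageMask ("pretty much everything")
                     0,                                     // dependency flags
                     1, &barrier,                           // memory barriers
                     0, nullptr,                            // buffer memory barriers
                     0, nullptr);                           // image memory barriers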
If you're dispatching these operations on separate queues, that's where you need an actual synchronization object.
There are several problems:
You cannot detect if a Vulkan semaphore has been signaled by user code. Nor can you set a semaphore to the unsignaled state by user code. Nor can you reasonably submit a batch that has a semaphore in it that is currently signaled and nobody's waiting on it. You can do the latter, but it won't do the right thing.
In short, you can never submit a batch that signals a semaphore unless you are certain that some process is going to wait for it.
You cannot issue a batch that waits on a semaphore, unless a batch that signals it is "pending execution". That is, your graphics thread cannot vkQueueSubmit its batch until it is certain that the compute queue has submitted its signaling batch.
So what you have to do is this. When the graphics thread goes to get its compute data, it must signal the compute thread to add a semaphore to its next submit call. When the graphics thread submits its graphics operation, it then waits on that semaphore.
But to ensure proper ordering, the graphics thread cannot submit its operation until the compute thread has submitted the semaphore signaling operation. That requires a CPU-synchronization operation of some form. It could be as simple as the graphics thread polling an atomic variable set by the compute thread.
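A minimal sketch of that CPU-side handshake with an atomic counter (names are illustrative):

#include <atomic>
#include <cstdint>
#include <thread>

std::atomic<uint64_t> submitted_batches{0};   // bumped by the compute thread after its vkQueueSubmit

// Compute thread, right after submitting the batch that signals the semaphore:
//   submitted_batches.fetch_add(1, std::memory_order_release);

// Graphics thread, before submitting the batch that waits on that semaphore:
void wait_until_compute_submitted(uint64_t needed) {
    while (submitted_batches.load(std::memory_order_acquire) < needed)
        std::this_thread::yield();            // or sleep, or use a condition variable instead
}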

When should I use critical sections?

Here's the deal. My app has a lot of threads that do the same thing - read specific data from huge files (>2 GB), parse the data and eventually write back to that file.
The problem is that sometimes one thread may read X from file A while a second thread writes to X of that same file A. Would a problem occur?
The I/O code uses TFileStream for every file. I split the I/O code out into a local (static) class because I'm afraid there will be a problem. Since it's split out like that, there should be critical sections.
Every case below is local (static) code that is not instantiated.
Case 1:
procedure Foo(obj:TObject);
begin ... end;
Case 2:
procedure Bar(obj:TObject);
var i: integer;
begin
for i:=0 to X do ...{something}
end;
Case 3:
function Foo(obj:TObject; j:Integer):TSomeObject;
var i:integer;
begin
for i:=0 to X do
for j:=0 to Y do
Result:={something}
end;
Question 1: In which case do I need critical sections so there are no problems if >1 threads call it at same time?
Question 2: Will there be a problem if Thread 1 reads X(entry) from file A while Thread 2 writes to X(entry) to file A?
When should I use critical sections? I try to imagine it in my head, but it's hard - only one thread :))
EDIT
Is this going to suit it?
{a class for every 2 GB file}
TSpecificFile = class
  cs: TCriticalSection;
  ...
end;

TFileParser = class
  SpecificFile: TSpecificFile;
  procedure ParseThis; procedure ParseThat; ...
end;

function Read(AFile: TSpecificFile): TSomeObject;
begin
  AFile.cs.Enter;
  try
    ... // read
  finally
    AFile.cs.Leave;
  end;
end;

function Write(AFile: TSpecificFile): TSomeObject;
begin
  AFile.cs.Enter;
  try
    ... // write
  finally
    AFile.cs.Leave;
  end;
end;
Now will there be a problem if two threads call Read with:
case 1: same TSpecificFile
case 2: different TSpecificFile?
Do I need another critical section?
In general, you need a locking mechanism (critical sections are a locking mechanism) whenever multiple threads may access a shared resource at the same time, and at least one of the threads will be writing to / modifying the shared resource.
This is true whether the resource is an object in memory or a file on disk.
And the reason that the locking is necessary is that if a read operation happens concurrently with a write operation, the read operation is likely to obtain inconsistent data, leading to unpredictable behaviour.
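A tiny illustration (in C++ here, but the same idea applies to a Delphi TCriticalSection; the Record type is hypothetical): without the lock, a reader can observe a half-updated entry.

#include <mutex>

struct Record { int offset; int length; };    // imagine this describes an entry in the file

Record shared_entry;
std::mutex entry_lock;                         // one lock per shared resource

void writer(int off, int len) {
    std::lock_guard<std::mutex> g(entry_lock);
    shared_entry.offset = off;                 // without the lock, a reader could see the new
    shared_entry.length = len;                 // offset paired with the old length
}

Record reader() {
    std::lock_guard<std::mutex> g(entry_lock);
    return shared_entry;                       // consistent snapshot
}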
Stephen Cheung has mentioned the platform specific considerations with regards file handling, and I'll not repeat them here.
As a side note, I'd like to highlight another concurrency concern that may be applicable in your case.
Suppose one thread reads some data and starts processing.
Then another thread does the same.
Both threads determine that they must write a result to position X of File A.
At best the values to be written are the same, and one of the threads effectively did nothing but waste time.
At worst, the calculation of one of the threads is overwritten, and the result is lost.
You need to determine whether this would be a problem for your application. And I must point out that if it is, just locking the read and write operations will not solve it. Furthermore, trying to extend the duration of the locks leads to other problems.
Options
Critical Sections
Yes, you can use critical sections.
You will need to choose the best granularity of the critical sections: One per whole file, or perhaps use them to designate specific blocks within a file.
The decision would require a better understanding of what your application does, so I'm not going to answer for you.
Just be aware of the possibility of deadlocks:
Thread 1 acquires lock A
Thread 2 acquires lock B
Thread 1 desires lock B, but has to wait
Thread 2 desires lock A - causing a deadlock because neither thread is able to release its acquired lock.
I'm also going to suggest 2 other tools for you to consider in your solution.
Single-Threaded
What a shocking thing to say! But seriously, if your reason to go multi-threaded was "to make the application faster", then you went multi-threaded for the wrong reason. Most people who do that actually end up making their applications more difficult to write, less reliable, and slower!
It is a far too common misconception that multiple threads speed up applications. If a task requires X clock-cycles to perform - it will take X clock-cycles! Multiple threads don't speed up tasks; they permit multiple tasks to be done in parallel. But this can be a bad thing! ...
You've described your application as being highly dependent on reading from disk, parsing what's read and writing to disk. Depending on how CPU intensive the parsing step is, you may find that all your threads are spending the majority of their time waiting for disk IO operations. In which case, the multiple threads generally only serve to shunt the disk heads to the far 'corners' of your (ummm round) disk platters. Disk IO is still the bottle-neck, and the threads make it behave as if the files are maximally fragmented.
Queueing Operations
Let's suppose your reasons for going multi-threaded are valid, and you do still have threads operating on shared resources. Instead of using locks to avoid concurrency issues, you could queue your shared resource operations onto specific threads.
So instead of Thread 1:
Reading position X from File A
Parsing the data
Writing to position Y in file A
Create another thread; the FileA thread:
the FileA thread has a queue of instructions
When it gets to the instruction to read position X, it does so.
It sends the data to Thread 1
Thread 1 parses its data --- while FileA thread continues processing instructions
Thread 1 places an instruction to write its result to position Y at the back of FileA thread's queue --- while FileA thread continues to process other instructions.
Eventually the FileA thread will write the data as required by Thread 1.
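Not Delphi, but the shape of such a per-file owner thread looks roughly like this C++ sketch (the FileThread name and the queue-of-closures design are illustrative):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// One of these per file: the only thread that ever touches the file.
class FileThread {
    std::queue<std::function<void()>> instructions;   // read/write requests from worker threads
    std::mutex m;
    std::condition_variable cv;
    bool stopping = false;
    std::thread worker{[this] { run(); }};

    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return stopping || !instructions.empty(); });
                if (stopping && instructions.empty()) return;
                job = std::move(instructions.front());
                instructions.pop();
            }
            job();                                     // the actual file read/write happens here
        }
    }

public:
    void post(std::function<void()> job) {             // called by the parser threads
        { std::lock_guard<std::mutex> lk(m); instructions.push(std::move(job)); }
        cv.notify_one();
    }
    ~FileThread() {
        { std::lock_guard<std::mutex> lk(m); stopping = true; }
        cv.notify_one();
        worker.join();                                 // drains remaining instructions first
    }
};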
Synchronization is only needed for shared data that can cause a problem (or an error) if more than one agent is doing something with it.
Obviously the file writing operation should be wrapped in a critical section for that file only if you don't want other writer processes to trample on the new data before the write is completed -- the file may no longer be consistent if you have half of the new data modified by another process that does not see the other half of the new data (that hasn't been written out by the original writer process yet). Therefore you'll have a collection of CS's, one for each file. That CS should be released asap when you're done writing.
In certain cases, e.g. memory-mapped files or sparse files, the O/S may allow you to write to different portions of the file at the same time. Therefore, in such cases, your CS will have to be on a particular segment of the file. Thus you'll have a collection of CS's (one for each segment) for each file.
If you write to a file and read it at the same time, the reader may get inconsistent data. In some O/S's, reading is allowed to happen simultaneously with a write (perhaps the read comes from cached buffers). However, if you are writing to a file and reading it at the same time, what you read may not be correct. If you need consistent data on reads, then the reader should also be subject to the critical section.
In certain cases, if you are writing to a segment and reading from another segment, the O/S may allow it. However, whether this will return correct data usually cannot be guaranteed, because you can't always tell whether two segments of the file may be residing in one disk sector, or other low-level O/S things.
So, in general, the advice is to wrap any file operation in a CS, per file.
Theoretically, you should be able to read simultaneously from the same file, but locking it in a CS will only allow one reader. In that case, you'll need to separate your implementation into "read locks" and "write locks" (similar to a database system). This is highly non-trivial though as you'll then have to deal with promoting different levels of locks.
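For reference, that read-lock/write-lock split is exactly what a shared (reader/writer) mutex gives you; a minimal C++ sketch (Delphi's TMultiReadExclusiveWriteSynchronizer plays a similar role):

#include <shared_mutex>

std::shared_mutex file_lock;            // one per file (or per file segment)

void read_segment() {
    std::shared_lock lock(file_lock);   // many readers may hold this at once
    // ... read from the file ...
}

void write_segment() {
    std::unique_lock lock(file_lock);   // excludes readers and other writers
    // ... write to the file ...
}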
After note: The kind of thing you're trying to do (reading and writing huge data sets that are GBs in size, simultaneously and in segments) is what is typically done in a database. You should be looking into breaking your data files into database records. Otherwise, you either suffer from non-optimized read/write performance due to locking, or you end up re-inventing the relational database.
Conclusion first
You don't need TCriticalSection. You should implement a Queue-based algorithm that guarantees no two threads are working on the same piece of data, without blocking.
How I got to that conclusion
First of all Windows (Win 7?) will allow you to simultaneously write to a file as many times as you see fit. I have no idea what it does with the writes, and I'm clearly not saying it's a good idea, but I've just done the following test to prove Windows allows simultaneous multiple writes to the same file:
I made a thread that opens a file for writing (with "share deny none") and keeps writing random stuff to a random offset for 30 seconds. Here's a pastebin with the code.
Why a TCriticalSection would be bad
A critical section only allows one thread to access the protected resource at any given time. You have two options: only hold the lock for the duration of the read/write operation, or hold the lock for the entire time required to process the given resource. Both have serious problems.
Here's what might happen if a thread holds the lock only for the duration of the read/write operations:
Thread 1 acquires the lock, reads the data, releases the lock
Thread 2 acquires the lock, reads the same data, releases the lock
Thread 1 finishes processing, acquires the lock, writes the data, releases the lock
Thread 2 acquires the lock, writes the data, and here's the oops: Thread 2 has been working on old data, since Thread 1 made changes in the background!
Here's what might happen if a thread holds the lock for the entire round-trip read & write operation:
Thread 1 acquires the lock, starts reading data
Thread 2 tries to acquire the same lock, gets blocked...
Thread 1 finishes reading the data, processes the data, writes the data back to file, releases the lock
Thread 2 acquires the lock and starts processing the same data again!
The Queue solution
Since you're multi-threading, and you can have multiple threads simultaneously processing data from the same file, I assume data is somehow "context free": You can process the 3rd part of a file before processing the 1st. This must be true, because if it's not, you can't multi-thread (or are limited to 1 thread per file).
Before you start processing you can prepare a number of "Jobs", that look like this:
File 'file1.raw', offset 0, 1024 KB
File 'file1.raw', offset 1024 KB, 1024 KB
...
File 'fileN.raw', offset 99999999, 1024 KB
Put all those "jobs" in a queue. Have your threads dequeue one Job from the queue and process it. Since no two jobs overlap, threads don't need to synchronize with each other, so you don't need the critical section. You only need the critical section to protect access to the Queue itself. Windows makes sure threads can read and write to/from the files just fine, as long as they stick to the allocated "Job".
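A minimal sketch of that scheme in C++ (illustrative names; the queue's own lock is the only critical section left):

#include <mutex>
#include <optional>
#include <string>
#include <vector>

struct Job { std::string file; long long offset; long long size; };  // e.g. 1024 KB chunks

class JobQueue {
    std::vector<Job> jobs;          // prepared once, before the worker threads start
    std::size_t next = 0;
    std::mutex m;                   // the only critical section in the whole scheme
public:
    explicit JobQueue(std::vector<Job> prepared) : jobs(std::move(prepared)) {}

    std::optional<Job> take() {     // each worker calls this until it returns nothing
        std::lock_guard<std::mutex> g(m);
        if (next == jobs.size()) return std::nullopt;
        return jobs[next++];
    }
};

// Each worker thread:
//   while (auto job = queue.take()) { read job->file at job->offset, parse, write back; }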

Real World Examples of read-write in concurrent software

I'm looking for real world examples of needing read and write access to the same value in concurrent systems.
In my opinion, many semaphores or locks are present because there's no known alternative (to the implementer), but do you know of any patterns where mutexes seem to be a requirement?
In a way I'm asking for candidates for the standard set of HARD problems for concurrent software in the real world.
What kind of locks are used depends on how the data is being accessed by multiple threads. If you can fine tune the use case, you can sometimes eliminate the need for exclusive locks completely.
An exclusive lock is needed only if your use case requires that the shared data must be 100% exact all the time. This is the default that most developers start with because that's how we think about data normally.
However, if what you are using the data for can tolerate some "looseness", there are several techniques to share data between threads without the use of exclusive locks on every access.
For example, if you have a linked list of data and if your use of that linked list would not be upset by seeing the same node multiple times in a list traversal and would not be upset if it did not see an insert immediately after the insert (or similar artifacts), you can perform list inserts and deletes using atomic pointer exchange without the need for a full-stop mutex lock around the insert or delete operation.
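As a concrete sketch of the insert side of that idea, here is the classic push-to-head of a singly linked list done with a compare-and-swap loop instead of a mutex (C++; lock-free deletion is considerably harder because of the ABA problem and safe memory reclamation, so treat this as the easy half):

#include <atomic>

struct Node { int value; Node* next; };

std::atomic<Node*> head{nullptr};

// Lock-free insert at the head of the list: publish the new node with a
// compare-and-swap loop rather than taking a full-stop lock.
void push_front(int value) {
    Node* n = new Node{value, head.load(std::memory_order_relaxed)};
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
        // n->next has been refreshed with the current head; just retry
    }
}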
Another example: if you have an array or list object that is mostly read from by threads and only occasionally updated by a master thread, you could implement lock-free updates by maintaining two copies of the list: one that is "live" that other threads can read from and another that is "offline" that you can write to in the privacy of your own thread. To perform an update, you copy the contents of the "live" list into the "offline" list, perform the update to the offline list, and then swap the offline list pointer into the live list pointer using an atomic pointer exchange. You will then need some mechanism to let the readers "drain" from the now offline list. In a garbage collected system, you can just release the reference to the offline list - when the last consumer is finished with it, it will be GC'd. In a non-GC system, you could use reference counting to keep track of how many readers are still using the list. For this example, having only one thread designated as the list updater would be ideal. If multiple updaters are needed, you will need to put a lock around the update operation, but only to serialize updaters - no lock and no performance impact on readers of the list.
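A sketch of that read-mostly "swap the pointer" scheme using C++20's std::atomic specialization for std::shared_ptr, which gives you the reference counting / "drain the old readers" part for free (this assumes a single designated updater thread, as suggested above; older toolchains may not ship this specialization):

#include <atomic>
#include <memory>
#include <vector>

std::atomic<std::shared_ptr<const std::vector<int>>> live_list{
    std::make_shared<const std::vector<int>>()};

// Readers: grab a snapshot; the old list stays alive until the last reader drops it.
std::shared_ptr<const std::vector<int>> read_snapshot() {
    return live_list.load();
}

// Single updater thread: copy the live list, modify the offline copy, then swap it in.
void update(int new_value) {
    auto offline = std::make_shared<std::vector<int>>(*live_list.load()); // copy the live list
    offline->push_back(new_value);                                        // edit in private
    live_list.store(std::move(offline));                                  // publish atomically
}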
All the lock-free resource sharing techniques I'm aware of require the use of atomic swaps (aka InterlockedExchange). This usually translates into a specific instruction in the CPU and/or a hardware bus lock (lock prefix on a read or write opcode in x86 assembler) for a very brief period of time. On multiproc systems, atomic swaps may force a cache invalidation on the other processors (this was the case on dual proc Pentium II) but I don't think this is as much of a problem on current multicore chips. Even with these performance caveats, lock-free runs much faster than taking a full-stop kernel event object. Just making a call into a kernel API function takes several hundred clock cycles (to switch to kernel mode).
Examples of real-world scenarios:
producer/consumer workflows. Web service receives http requests for data, places the request into an internal queue, worker thread pulls the work item from the queue and performs the work. The queue is read/write and has to be thread safe.
Data shared between threads with change of ownership. Thread 1 allocates an object, tosses it to thread 2 for processing, and never wants to see it again. Thread 2 is responsible for disposing the object. The memory management system (malloc/free) must be thread safe.
File system. This is almost always an OS service and already fully thread safe, but it's worth including in the list.
Reference counting. Releases the resource when the number of references drops to zero. The increment/decrement/test operations must be thread safe. These can usually be implemented using atomic primitives instead of full-stop kernel mutex locks (see the sketch below).
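For instance, the reference-counting case from the list above needs nothing heavier than atomic increments and decrements (a minimal C++ sketch):

#include <atomic>

class SharedResource {
    std::atomic<int> refs{1};          // starts owned by its creator
public:
    void add_ref() { refs.fetch_add(1, std::memory_order_relaxed); }
    void release() {
        // The last decrement must "see" all prior work on the object, hence acq_rel.
        if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;               // count dropped to zero: free the resource
    }
private:
    ~SharedResource() = default;       // only release() may destroy it
};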
Most real world, concurrent software, has some form of requirement for synchronization at some level. Often, better written software will take great pains to reduce the amount of locking required, but it is still required at some point.
For example, I often do simulations where we have some form of aggregation operation occurring. Typically, there are ways to prevent locking during the simulation phase itself (ie: use of thread local state data, etc), but the actual aggregation portion typically requires some form of lock at the end.
Luckily, this becomes a lock per thread, not per unit of work. In my case, this is significant, since I'm typically doing operations on hundreds of thousands or millions of units of work, but most of the time it's occurring on systems with 4-16 PEs, which means I'm usually restricted to a similar number of units of execution. By using this type of mechanism, you're still locking, but you're locking between tens of elements instead of potentially millions.
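A sketch of that shape (C++, illustrative): each thread accumulates into thread-local state and takes the lock exactly once, at the final aggregation.

#include <mutex>
#include <vector>

double global_total = 0.0;
std::mutex merge_lock;                       // taken once per thread, not once per unit of work

void simulate_chunk(const std::vector<double>& units) {
    double local_total = 0.0;                // thread-local accumulation, no locking
    for (double u : units)
        local_total += u * u;                // stand-in for the real per-unit simulation work
    std::lock_guard<std::mutex> g(merge_lock);
    global_total += local_total;             // one short critical section per thread
}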
