Does _ReadWriteBarrier Ensure Visibility of Dynamically Allocated Buffers Across Threads?

I am using Visual C++ on Windows 7 and XP. I have two threads in a program where, after both are created, one thread dynamically allocates a buffer, assigns its address to a global pointer, and then writes data into that buffer. Will a call to _ReadWriteBarrier guarantee the visibility of that data to the second thread?
For example:
char *buff;
HANDLE EventObject;
DWORD WINAPI ThreadProc(LPVOID p);
int main()
{
    // Create an event object so we can signal the second thread
    // when it is time to read the buffer.
    EventObject = CreateEvent(NULL, TRUE, FALSE, NULL);
    // Create the second thread.
    CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
    // Allocate the buffer.
    buff = (char*)malloc(100);
    // Write some data to the buffer.
    buff[50] = rand() % 256;
    // Set the fence here.
    _ReadWriteBarrier();
    // Signal the other thread that data is ready.
    SetEvent(EventObject);
    // Go on our merry way in this thread.
    ...
}
DWORD WINAPI ThreadProc(LPVOID p)
{
    // Wait for the signal that data is ready.
    WaitForSingleObject(EventObject, INFINITE);
    // Read the data written to the buffer.
    printf("%d\n", buff[50]);
    return 0;
}
I believe from the documentation that _ReadWriteBarrier guarantees the visibility of the address in buff, since buff is a global variable. But does it also guarantee the visibility of the buffer itself, which was allocated in main? Is the barrier even necessary?

If you use SetEvent, you don't need the barrier at all; the event takes care of that. (Note that _ReadWriteBarrier is only a compiler fence anyway: it prevents the compiler from reordering memory accesses, but it emits no hardware fence, so on its own it would not guarantee cross-CPU visibility.)
Usually, for barriers to have a visible effect, you need them on both sides (the writing and the reading side). Since SetEvent/WaitForSingleObject both act as memory barriers, you're fine.
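For comparison, here is a minimal sketch (my illustration, not the poster's code) of what "barriers on both sides" looks like when spelled out with C++11 atomics: the writer publishes the pointer with a release store, the reader picks it up with an acquire load, and no event or explicit fence is needed:
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <thread>

std::atomic<char*> buff{nullptr};

void reader()
{
    char* p;
    while ((p = buff.load(std::memory_order_acquire)) == nullptr)
        ; // spin until the writer publishes the pointer
    std::printf("%d\n", p[50]); // the acquire load makes this write visible
}

int main()
{
    std::thread t(reader);
    char* p = (char*)std::malloc(100);
    p[50] = std::rand() % 256;
    // Release store: everything written before this line becomes visible
    // to any thread that observes the pointer with an acquire load.
    buff.store(p, std::memory_order_release);
    t.join();
    std::free(p);
}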

If each thread needs to resume work whenever any thread finds some new information, how do I wait until all threads have finished?

A Seemingly Simple Synchronization Problem
TL;DR
Several threads depend on each other. Whenever one of them finds some new information, all of them need to process that information. How can I determine that all threads are ready?
Background
I have (almost) parallelized a function Foo(input) that solves a problem which is known to be P-complete and may be thought of as some type of search. Unsurprisingly, so far nobody has managed to exploit parallelism beyond two threads for solving that problem. However, I had a promising idea and managed to implement it fully, except for this seemingly simple problem.
Details
Information is exchanged between the threads implicitly, using a shared graph-like database g of type G, so the threads have all information immediately and do not really need to notify each other explicitly. More precisely, each time a piece of information i is found by some thread, that thread calls a thread-safe function g.addInformation(i), which among other things basically places i at the end of some array. One aspect of my new implementation is that threads can use an information i during their search even before i has been enqueued at the end of that array. Nevertheless, each thread needs to additionally process i separately once it has been enqueued in that array. Enqueueing i may happen after the thread that added i has returned from g.addInformation(i), because some other thread may take over the responsibility to enqueue it.
Each thread s calls a function s.processAllInformation() in order to process all information in that array in g, in order. A call to s.processAllInformation() by some thread is a no-op, i.e. does nothing, if that thread has already processed all information or there is no (new) information.
As soon as a thread has finished processing all information, it should wait for all other threads to finish, and it should resume work if any of the other threads finds some new information i. That is, each time some thread calls g.addInformation(i), all threads that had finished processing all previously known information need to resume their work and process that (and any other) newly added information.
My Problem
Every solution I could think of fails to a variation of the same problem: one thread finishes processing all information, sees that all other threads are ready too, and hence leaves. But then another thread notices that some new information has been added, resumes work, and finds a further new piece of information. That new information is then never processed by the thread that has already left.
A solution to this problem may be straightforward, but I cannot think of one. Ideally, a solution should not depend on time-consuming operations inside a call to g.addInformation(i) whenever a new information is found, because of how many times per second this situation is predicted to occur (1 or 2 million times per second, see below).
Even more background
In my initially sequential application, the function Foo(input) is called roughly 100k times a second on modern hardware, and my application spends 80% to 90% of its time executing Foo(input). All the calls to Foo(input) depend on each other; we essentially search for something in a very large space in an iterative manner. Solving a reasonably sized problem typically takes about one or two hours with the sequential version of the application.
Each time Foo(input) is called, between zero and several hundred new pieces of information are found. On average, 1 or 2 million pieces of information are found per second during the execution of my application, i.e. we find 10 to 20 new pieces of information on each call to Foo(input). All of these statistics probably have a very high standard deviation (which I have not measured yet, though).
Currently I am writing a prototype of the parallel version of Foo(input) in Go, so I prefer answers in Go. The sequential application is written in C (actually it's C++, but it's written like a C program), so answers in C or C++ (or pseudo-code) are no problem, too. I haven't benchmarked my prototype yet, since wrong code is infinitely slower than slow code.
Code
These code examples are meant to clarify the situation. Since I haven't solved the problem, feel free to make any changes to the code. (I appreciate unrelated helpful remarks, too.)
Global situation
We have some type G, and Foo() is a method of G. If g is an object of type G and g.Foo(input) is called, g creates some workers s[1], ..., s[g.numThreads] that obtain a pointer to g, so that they have access to the member variables of g and are able to call g.addInformation(i) whenever they find a new information. Then the method FooInParallel() is called for each worker s[j] in parallel.
type G struct {
    s []worker
    numThreads int
    // some data that the workers need access to
}
func (g *G) initializeWith(input InputType) {
    // Some code...
}
func (g *G) Foo(input InputType) int {
    // Initialize data structures:
    g.initializeWith(input)
    // Initialize workers:
    g.s = make([]worker, g.numThreads)
    for j := range g.s {
        g.s[j] = newWorker(g) // workers get a pointer to g
    }
    // Note: This wait group doesn't solve the problem. See remark below.
    wg := new(sync.WaitGroup)
    wg.Add(g.numThreads)
    // Actual computation in parallel:
    for j := 0; j < g.numThreads-1; j++ {
        // Start g.numThreads - 1 goroutines in parallel.
        go g.s[j].FooInParallel(wg)
    }
    // The last worker runs in this goroutine, so that we have
    // g.numThreads goroutines in total.
    g.s[g.numThreads-1].FooInParallel(wg)
    wg.Wait()
}
// This function is thread-safe insofar as several
// workers can concurrently add information.
//
// The function is optimized for heavy contention; most
// threads can leave almost immediately. One thread
// cleans up any mess the others leave behind (and even
// in bad cases that is not too much).
func (g *G) addInformation(i infoType) {
    // Step 1: Make the information available to all threads.
    // Step 2: Enqueue the information at the end of some array.
    // Step 3: Possibly, call g.notifyAll().
}
// If a new information has been added, we must ensure
// that every thread that had finished resumes work
// and processes any newly added information.
func (g *G) notifyAll() {
    // TODO:
    // This is what I fail to accomplish. I include
    // my most successful attempt in the corresponding
    // section. It doesn't work, though.
}
// If a thread has finished processing all information,
// it must ensure that all threads have finished and
// that no new information has been added since.
func (g *G) allThreadsReady() bool {
    // TODO:
    // This is what I fail to accomplish. I include
    // my most successful attempt in the corresponding
    // section. It doesn't work, though.
}
Remark: The only purpose of the wait group is to ensure Foo(input) is not called again before the last worker has returned. However, you can completely ignore this.
Local Situation
Each worker contains a pointer to the global data structure and searches for either a treasure or new information until it has processed all information that has been enqueued by this or other threads. If it finds a new information i, it calls the function g.addInformation(i) and continues its search. If it finds a treasure, it sends the treasure through a channel it has obtained as an argument and returns. If all threads have finished processing all information, each of them can send a dummy treasure to the channel and return. However, determining whether all threads are ready is exactly my problem.
type worker struct {
    // Each worker contains a pointer to g
    // such that it has access to its member
    // variables and is able to call the
    // function g.addInformation(i) as soon
    // as it finds some information i.
    g *G
    // Also contains some other stuff.
}
func (s *worker) FooInParallel(wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        a := s.processAllInformation()
        _ = a // treasure handling elided
        // The following is the problem. Feel free to make any
        // changes to the following block.
        s.notifyAll()
        for !s.needsToResumeWork() {
            if s.allThreadsReady() {
                return
            }
        }
    }
}
func (s *worker) notifyAll() {
    // TODO:
    // This is what I fail to accomplish. I include
    // my most successful attempt in the corresponding
    // section. It doesn't work, though.
    // An example:
    // Step 1: Possibly, do something else first.
    // Step 2: Call g.notifyAll().
}
func (s *worker) needsToResumeWork() bool {
    // TODO:
    // This is what I fail to accomplish. I include
    // my most successful attempt in the corresponding
    // section. It doesn't work, though.
}
func (s *worker) allThreadsReady() bool {
    // TODO:
    // This is what I fail to accomplish. I include
    // my most successful attempt in the corresponding
    // section. It doesn't work, though.
    // If all threads are ready, return true.
    // Otherwise, return false.
    // Alternatively, spin as long as no new information
    // has been added, and return false as soon as some
    // new information has been added, or true if no new
    // information has been added and all other threads
    // are ready.
    //
    // However, this doesn't really matter, because a
    // function call to processAllInformation is cheap
    // if no new information is available.
}
// A call to this function is cheap if no new work has
// been added since the last function call.
func (s *worker) processAllInformation() treasureType {
    // Access member variables of g and search
    // for information or treasures.
    // If a new information i is found, call the
    // function g.addInformation(i).
    // Once all information that has been enqueued to
    // g has been processed by this thread, return.
}
My best attempt to solve the problem
Well, by now I am rather tired, so I might need to double-check my solution later. However, even my current attempt doesn't work. So, in order to give you an idea of what I have been trying so far (among many other things), I share it immediately.
I tried the following. Each worker contains a member variable needsToResumeWorkFlag that is atomically set to one whenever a new information has been added. Setting this member variable to one several times does no harm; it is only important that the thread resumes work after the last information has been added.
In order to reduce the workload for a thread calling g.addInformation(i) whenever an information i is found, the thread that enqueues the information (which is not necessarily the thread that called g.addInformation(i)) does not notify all threads individually. Instead, it afterwards sets a member variable notifyAllFlag of g to one, which indicates that all threads need to be notified about the latest information.
Whenever a thread that has finished processing all enqueued information calls the function g.notifyAll(), it checks whether the member variable notifyAllFlag is set to one. If so, it tries to atomically compare g.allInformedFlag with 1 and swap it with 0. If it could not write g.allInformedFlag, it assumes some other thread has taken the responsibility to inform all threads. If this operation is successful, this thread has taken over the responsibility to notify all threads and proceeds to do so by setting the member variable needsToResumeWorkFlag to one for every thread. Afterwards it atomically sets g.numThreadsReady and g.notifyAllFlag to zero, and g.allInformedFlag to one.
type G struct {
    s []worker
    numThreads int
    numThreadsReady *uint32 // initialize to 0 somewhere appropriate
    notifyAllFlag *uint32 // initialize to 0 somewhere appropriate
    allInformedFlag *uint32 // initialize to 1 somewhere appropriate (1 is not a typo)
    // some data that the workers need access to
}
// This function is thread-safe insofar as several
// workers can concurrently add information.
//
// The function is optimized for heavy contention; most
// threads can leave almost immediately. One thread
// cleans up any mess the others leave behind (and even
// in bad cases that is not too much).
func (g *G) addInformation(i infoType) {
    // Step 1: Make the information available to all threads.
    // Step 2: Enqueue the information at the end of some array.
    // Since the responsibility to enqueue an information may
    // be passed to another thread, it is important that the
    // last step is executed by the thread which enqueues the
    // information(s), in order to ensure that the information
    // has successfully been enqueued.
    // Step 3:
    atomic.StoreUint32(g.notifyAllFlag, 1) // all threads need to be notified
}
// If a new information has been added, we must ensure
// that every thread that had finished resumes work
// and processes any newly added information.
func (g *G) notifyAll() {
    if atomic.LoadUint32(g.notifyAllFlag) == 1 {
        // Somebody needs to notify all threads.
        if atomic.CompareAndSwapUint32(g.allInformedFlag, 1, 0) {
            // This thread has taken over the responsibility to inform
            // all other threads. All threads are prevented from accessing
            // their member variable s.needsToResumeWorkFlag.
            for j := range g.s {
                atomic.StoreUint32(g.s[j].needsToResumeWorkFlag, 1)
            }
            atomic.StoreUint32(g.notifyAllFlag, 0)
            atomic.StoreUint32(g.numThreadsReady, 0)
            atomic.StoreUint32(g.allInformedFlag, 1)
        } else {
            // Some other thread has taken responsibility to inform
            // all threads.
        }
    }
}
Whenever a thread finishes processing all enqueued information, it checks whether it needs to resume work by atomically comparing its member variable needsToResumeWorkFlag with 1 and swapping it with 0. However, since one of the threads is responsible for notifying all others, it cannot do so immediately.
First, it must call the function g.notifyAll(), and then it must check whether the latest thread to call g.notifyAll() has finished notifying all threads. Hence, after calling g.notifyAll(), it must spin until g.allInformedFlag is one before it checks whether its member variable s.needsToResumeWorkFlag is one, in which case it atomically sets it to zero and resumes work. (I guess there is a mistake here, but I also tried several other things without success.) If s.needsToResumeWorkFlag is already zero, it atomically increments g.numThreadsReady by one, if it hasn't done so before. (Recall that g.numThreadsReady is reset during a function call to g.notifyAll().) Then it atomically checks whether g.numThreadsReady is equal to g.numThreads, in which case it can leave (after sending a dummy treasure to the channel). Otherwise, we start all over again until either this thread has been notified (possibly by itself) or all threads are ready.
type worker struct {
    // Each worker contains a pointer to g
    // such that it has access to its member
    // variables and is able to call the
    // function g.addInformation(i) as soon
    // as it finds some information i.
    g *G
    // If new work has been added, the thread
    // is notified by setting the uint32
    // at which needsToResumeWorkFlag points to 1.
    needsToResumeWorkFlag *uint32 // initialize to 0 somewhere appropriate
    // Also contains some other stuff.
}
func (s *worker) FooInParallel(wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        a := s.processAllInformation()
        _ = a // treasure handling elided
        numReadyIncremented := false
        for !s.needsToResumeWork() {
            if !numReadyIncremented {
                atomic.AddUint32(s.g.numThreadsReady, 1)
                numReadyIncremented = true
            }
            if s.allThreadsReady() {
                return
            }
        }
    }
}
func (s *worker) needsToResumeWork() bool {
    s.notifyAll()
    for {
        if atomic.LoadUint32(s.g.allInformedFlag) == 1 {
            return atomic.CompareAndSwapUint32(s.needsToResumeWorkFlag, 1, 0)
        }
    }
}
func (s *worker) notifyAll() {
    s.g.notifyAll()
}
func (s *worker) allThreadsReady() bool {
    return atomic.LoadUint32(s.g.numThreadsReady) == uint32(s.g.numThreads)
}
As mentioned, my solution doesn't work.
I found a solution myself. We exploit the fact that a call to s.processAllInformation() is cheap and does nothing if no new information has been added. The trick is to use one atomic variable as a lock that guards both notifying all threads (if necessary) and checking whether the current thread has been notified. If the lock cannot be acquired, the thread simply calls s.processAllInformation() again. A thread then uses the notifications to decide whether it has to increment the counter of ready threads, rather than to decide whether it needs to resume work.
Global situation
type G struct {
    s []worker
    numThreads int
    numThreadsReady *uint32 // initialize to 0 somewhere appropriate
    notifyAllFlag *uint32 // initialize to 0 somewhere appropriate
    allCanGoFlag *uint32 // initialize to 0 somewhere appropriate
    lock *uint32 // initialize to 0 somewhere appropriate
    // some data that the workers need access to
}
// This function is thread-safe insofar as several
// workers can concurrently add information.
//
// The function is optimized for heavy contention; most
// threads can leave almost immediately. One thread
// cleans up any mess the others leave behind (and even
// in bad cases that is not too much).
func (g *G) addInformation(i infoType) {
    // Step 1: Make the information available to all threads.
    // Step 2: Enqueue the information at the end of some array.
    // Since the responsibility to enqueue an information may
    // be passed to another thread, it is important that the
    // last step is executed by the thread which enqueues the
    // information(s), in order to ensure that the information
    // has successfully been enqueued.
    // Step 3:
    atomic.StoreUint32(g.notifyAllFlag, 1) // all threads need to be notified
}
// If a new information has been added, we must ensure
// that every thread that had finished resumes work
// and processes any newly added information.
//
// This function is not thread-safe. Make sure not to
// have several threads call this function concurrently
// if those calls are not guarded by some lock.
func (g *G) notifyAll() {
    if atomic.LoadUint32(g.notifyAllFlag) == 1 {
        for j := range g.s {
            atomic.StoreUint32(g.s[j].needsToResumeWorkFlag, 1)
        }
        atomic.StoreUint32(g.notifyAllFlag, 0)
        atomic.StoreUint32(g.numThreadsReady, 0)
    }
}
Local situation
type worker struct {
    // Each worker contains a pointer to g
    // such that it has access to its member
    // variables and is able to call the
    // function g.addInformation(i) as soon
    // as it finds some information i.
    g *G
    // If new work has been added, the thread
    // is notified by setting the uint32
    // at which needsToResumeWorkFlag points to 1.
    needsToResumeWorkFlag *uint32 // initialize to 0 somewhere appropriate
    incrementedNumReadyFlag *uint32 // initialize to 0 somewhere appropriate
    // Also contains some other stuff.
}
func (s *worker) FooInParallel(wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        a := s.processAllInformation()
        _ = a // treasure handling elided
        if atomic.LoadUint32(s.g.allCanGoFlag) == 1 {
            return
        }
        if atomic.CompareAndSwapUint32(s.g.lock, 0, 1) { // If possible, lock.
            s.g.notifyAll() // It is essential that this is also guarded by the lock.
            if atomic.LoadUint32(s.needsToResumeWorkFlag) == 1 {
                atomic.StoreUint32(s.needsToResumeWorkFlag, 0)
                // Some new information was found, and this thread can't be sure
                // whether it has already processed it. Since the counter of
                // ready threads has been reset, we must increment that counter
                // after the next call to processAllInformation() in the
                // following iteration.
                atomic.StoreUint32(s.incrementedNumReadyFlag, 0)
            } else {
                // Increment the number of ready threads by one, if this thread
                // has not done so before (since the last newly found information).
                if atomic.CompareAndSwapUint32(s.incrementedNumReadyFlag, 0, 1) {
                    atomic.AddUint32(s.g.numThreadsReady, 1)
                }
                // If all threads are ready, give them all a signal.
                if atomic.LoadUint32(s.g.numThreadsReady) == uint32(s.g.numThreads) {
                    atomic.StoreUint32(s.g.allCanGoFlag, 1)
                }
            }
            atomic.StoreUint32(s.g.lock, 0) // Unlock.
        }
    }
}
Later I may add some ordering for the threads' access to the lock under heavy contention, but for now this will do. A ticket lock, as sketched below, would be one way to make the lock fair.
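Since the question welcomes C or C++ answers, here is a minimal sketch of one such ordering scheme, a ticket lock, in C++ (my illustration, not part of the solution above): each thread draws a ticket and spins until its number is served, so lock acquisition becomes FIFO under contention.
#include <atomic>

class TicketLock
{
    std::atomic<unsigned> next{0};    // next ticket to hand out
    std::atomic<unsigned> serving{0}; // ticket currently allowed to enter
public:
    void lock()
    {
        unsigned ticket = next.fetch_add(1, std::memory_order_relaxed);
        while (serving.load(std::memory_order_acquire) != ticket)
            ; // spin: threads enter in the order they drew their tickets
    }
    void unlock()
    {
        serving.fetch_add(1, std::memory_order_release); // admit the next ticket
    }
};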

How to create monitoring thread?

I have a question that came up while doing some ray-tracing work.
I have created multiple threads to split up the image to be processed, and each thread processes its allocated part. The threads work as intended. I would like to monitor the work progress in real time.
To solve this, I have created one more thread to monitor the current state.
Here is the monitoring pseudo-code:
/* Global var */
int cnt = 0; // counts the number of rows processed
void* render_disp(void* arg){ // thread for monitoring the current rendering progress
    /* read the global variable and calculate the percentage to display */
    double result = 100.*cnt/(h-1);
    fprintf(stderr, "\r%3.2f%% of image is processed!", result);
}
void* process(void* arg){ // multiple threads work here
    // Rendering process
    for(........)
        pthread_mutex_lock(&lock);
        cnt++;
        pthread_mutex_unlock(&lock);
    for(........)
}
I wrote the initialization code for the pthreads and the mutex in the main() function.
Basically, I think this monitoring thread should display the current state, but it seems to run only once and then quit.
How do I change this code so that the thread function keeps running until the whole rendering is finished?
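A minimal sketch of one way to do that (assuming h is the total row count and lock is the same mutex the workers use): let the monitor loop, sleep between updates, and exit once every row is done.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

extern int cnt;               // rows processed so far (shared with the workers)
extern int h;                 // total number of rows (assumed from the question)
extern pthread_mutex_t lock;  // same mutex the worker threads use

void* render_disp(void* arg){ // monitor thread: runs until rendering finishes
    for (;;) {
        pthread_mutex_lock(&lock);
        int done = cnt;       // take a consistent snapshot of the counter
        pthread_mutex_unlock(&lock);
        fprintf(stderr, "\r%3.2f%% of image is processed!", 100.*done/(h-1));
        if (done >= h - 1)    // every row rendered: stop monitoring
            break;
        usleep(100000);       // poll roughly ten times per second
    }
    return NULL;
}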

IOCP multithreaded server and reference counted class

I am working on an IOCP server (overlapped I/O, 4 threads, CreateIoCompletionPort, GetQueuedCompletionStatus, WSASend, etc.). My goal is to send a single reference-counted buffer to all connected sockets. (I followed Len Holgate's suggestion from the post "WSAsend to all connected socket in multithreaded iocp server".) After the buffer has been sent to all connected clients, it should be deleted.
This is the class with the buffer to be sent:
class refbuf
{
private:
    int m_nLength;
    int m_wsk;
    char *m_pnData; // buffer to send
    mutable int mRefCount;
public:
    ...
    void grab() const
    {
        ++mRefCount;
    }
    void release() const
    {
        if(mRefCount > 0)
            --mRefCount;
        if(mRefCount == 0) {delete (refbuf *)this;}
    }
    ...
    char* bufadr() { return m_pnData;}
};
Sending the buffer to all sockets:
refbuf *refb = new refbuf(4);
...
EnterCriticalSection(&g_CriticalSection);
pTmp1 = g_pCtxtList; // start of the linked list of sockets
while( pTmp1 )
{
    pTmp2 = pTmp1->pCtxtBack;
    ovl = TakeOvl(); // ovl - struct containing WSAOVERLAPPED
    ovl->wsabuf.buf = refb->bufadr(); // address of m_pnData from refbuf
    ovl->rcb = refb; // when GQCS gets a notification, rcb is used to decrease mRefCount
    ovl->wsabuf.len = 4;
    refb->grab(); // mRefCount++
    WSASend(pTmp1->Socket, &(ovl->wsabuf), 1, &dwSendNumBytes, 0, &(ovl->Overlapped), NULL);
    pTmp1 = pTmp2;
}
LeaveCriticalSection(&g_CriticalSection);
And one of the four worker threads:
GetQueuedCompletionStatus(hIOCP, &dwIoSize, (PDWORD_PTR)&lpPerSocketContext, (LPOVERLAPPED *)&lpOverlapped, INFINITE);
...
lpIOContext = (PPER_IO_CONTEXT)lpOverlapped;
lpIOContext->rcb->release(); // mRefCount--; if mRefCount reaches 0, the object is deleted
I checked this with 5 connected clients and it seems to work. When GQCS has received all the notifications, mRefCount reaches 0 and the delete is executed.
My questions: Is this approach appropriate? What if there are, for example, 100 or more clients? Is the situation avoided where one thread deletes the object while another is still using it? How do I implement an atomic reference count in this scenario? Thanks in advance.
Obvious issues, in order of importance:
Your refbuf class doesn't use thread-safe ref count manipulation. Use InterlockedIncrement() etc. (see the sketch after this list).
I assume that TakeOvl() obtains a new OVERLAPPED and WSABUF structure per operation.
Your naming could be better: why grab() rather than AddRef()? What does TakeOvl() take from? Those Tmp variables are something, and the least important something about them is that they're 'temporary', so name them after a more important something. Go read Code Complete.
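To make the first point concrete, here is a minimal sketch (my illustration, keeping the question's names) of the ref count done with the Interlocked API:
#include <windows.h>

class refbuf
{
private:
    char *m_pnData;                 // buffer to send, as in the question
    mutable volatile LONG mRefCount;
public:
    explicit refbuf(int n) : m_pnData(new char[n]), mRefCount(1) {}
    ~refbuf() { delete[] m_pnData; }
    void grab() const
    {
        InterlockedIncrement(&mRefCount);
    }
    void release() const
    {
        // InterlockedDecrement returns the new value atomically, so exactly
        // one thread observes 0 and performs the delete; no other thread can
        // still be using the object, because it held a reference until now.
        if (InterlockedDecrement(&mRefCount) == 0)
            delete this;
    }
    char* bufadr() { return m_pnData; }
};
With this, it no longer matters whether there are 5 clients or 100: the count is simply higher while more sends are in flight, and the last release() performs the delete.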

lio_listio: How to wait until all requests complete?

In my C++ program, I use the lio_listio call to submit many (up to a few hundred) write requests at once. After that, I do some calculations, and when I'm done I need to wait for all outstanding requests to finish before I can submit the next batch of requests. How can I do this?
Right now I am just calling aio_suspend in a loop, with one request per call, but this seems ugly. It looks like I should use the struct sigevent *sevp argument of lio_listio. My current guess is that I should do something like this:
In the main thread, create a mutex and lock it just before the call to lio_listio.
In the call to lio_listio, specify a notification function / signal handler that unlocks this mutex.
This should give me the desired behavior, but will it work reliably? Is it allowed to manipulate mutexes from the signal handler context? I read that pthread mutexes can provide error detection and fail with an error if you try to lock them again from the same thread or unlock them from a different thread, yet this solution relies on deadlocking.
Example code, using a signal handler:
void notify(int, siginfo_t *info, void *) {
    pthread_mutex_unlock((pthread_mutex_t *) info->si_value.sival_ptr);
}
void output() {
    pthread_mutex_t iomutex = PTHREAD_MUTEX_INITIALIZER;
    struct sigaction act;
    memset(&act, 0, sizeof(struct sigaction));
    act.sa_sigaction = &notify;
    act.sa_flags = SA_SIGINFO;
    sigaction(SIGUSR1, &act, NULL);
    for (...) {
        pthread_mutex_lock(&iomutex);
        // do some calculations here...
        struct aiocb *cblist[];
        int cbno;
        // set up the aio request list - omitted
        struct sigevent sev;
        memset(&sev, 0, sizeof(struct sigevent));
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo = SIGUSR1;
        sev.sigev_value.sival_ptr = &iomutex;
        lio_listio(LIO_NOWAIT, cblist, cbno, &sev);
    }
    // ensure that the last queued operation completes
    // before this function returns
    pthread_mutex_lock(&iomutex);
    pthread_mutex_unlock(&iomutex);
}
Example code, using a notification function - possibly less efficient, since an extra thread is created:
void output() {
    pthread_mutex_t iomutex = PTHREAD_MUTEX_INITIALIZER;
    for (...) {
        pthread_mutex_lock(&iomutex);
        // do some calculations here...
        struct aiocb *cblist[];
        int cbno;
        // set up the aio request list - omitted
        struct sigevent sev;
        memset(&sev, 0, sizeof(struct sigevent));
        sev.sigev_notify = SIGEV_THREAD;
        sev.sigev_notify_function = (void (*)(union sigval)) &pthread_mutex_unlock;
        sev.sigev_value.sival_ptr = &iomutex;
        lio_listio(LIO_NOWAIT, cblist, cbno, &sev);
    }
    // ensure that the last queued operation completes
    // before this function returns
    pthread_mutex_lock(&iomutex);
    pthread_mutex_unlock(&iomutex);
}
If you set the sigevent argument in the lio_listio() call, you will be notified with a signal (or function call) when all the jobs in that one particular call complete. You would still need to:
wait until you receive as many notifications as you have made lio_listio() calls, to know when they're all done;
use some safe mechanism to communicate from your signal handler to your main thread, probably via a global variable (to be portable).
If you're on Linux, I would recommend tying an eventfd to your sigevent instead and waiting on that. That's a lot more flexible, since you don't need to involve signal handlers. On BSD (but not Mac OS) you can wait on aiocbs using kqueue, and on Solaris/illumos you can use a port to get notified of aiocb completions.
Here's an example of how to use eventfds on Linux:
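A minimal sketch of the idea, assuming a SIGEV_THREAD callback that posts to the eventfd (function and variable names are invented for illustration):
#include <aio.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>

static int efd; // shared with the notification callback

// Runs in a helper thread when one lio_listio() batch completes.
static void batch_done(union sigval) {
    uint64_t one = 1;
    write(efd, &one, sizeof one); // bump the eventfd counter by one batch
}

void submit_batches(struct aiocb *cblist[], int cbno, int nbatches) {
    efd = eventfd(0, 0);
    for (int b = 0; b < nbatches; ++b) {
        struct sigevent sev;
        memset(&sev, 0, sizeof sev);
        sev.sigev_notify = SIGEV_THREAD;
        sev.sigev_notify_function = batch_done;
        lio_listio(LIO_NOWAIT, cblist, cbno, &sev);
        // do some calculations here...
    }
    // Wait until every submitted batch has signalled completion.
    for (uint64_t done = 0; done < (uint64_t)nbatches; ) {
        uint64_t n;
        read(efd, &n, sizeof n); // blocks until the counter is non-zero
        done += n;               // read drains the whole counter at once
    }
    close(efd);
}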
As a side note, I would use caution when issuing jobs with lio_listio. You're not guaranteed that it supports taking more than 2 jobs, and some systems have very low limits on how many you can issue at a time. The default on Mac OS, for instance, is 16. This limit may be defined as the AIO_LISTIO_MAX macro, but it isn't necessarily; in that case you need to call sysconf(_SC_AIO_LISTIO_MAX) (see the docs). For details, see the lio_listio documentation.
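For illustration, a small helper to discover the usable batch size at runtime (my sketch; the fallback to 2, the POSIX minimum, is an assumption about how you want to handle an indeterminate limit):
#include <unistd.h>
#include <limits.h>

// Largest number of jobs one lio_listio() call is known to accept.
long lio_batch_limit() {
    long n = sysconf(_SC_AIO_LISTIO_MAX); // -1 if the limit is indeterminate
#ifdef AIO_LISTIO_MAX
    if (n < 0) n = AIO_LISTIO_MAX;        // compile-time value, if provided
#endif
    if (n < 0) n = 2;                     // _POSIX_AIO_LISTIO_MAX: the POSIX floor
    return n;
}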
You should at least check the error conditions from your lio_listio() call.
Also, your solution of using a mutex is sub-optimal, since you will synchronize on each iteration of the for loop and just run one batch at a time (unless it's a recursive mutex, but in that case its state could be corrupted if your signal handler happens to run on a different thread).
A more appropriate primitive may be a semaphore, which is released in the handler and then (after your for loop) acquired the same number of times as you called lio_listio(), as sketched below. But I would still recommend an eventfd if it's OK to be Linux-specific.
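A minimal sketch of that semaphore variant (sem_post() is async-signal-safe, which is what makes calling it from the handler legal; the batch setup and loop bounds are placeholders):
#include <aio.h>
#include <semaphore.h>
#include <signal.h>
#include <cstring>

static sem_t iosem;

static void on_batch_done(int, siginfo_t *, void *) {
    sem_post(&iosem); // async-signal-safe: one post per completed batch
}

void output() {
    sem_init(&iosem, 0, 0);
    struct sigaction act;
    memset(&act, 0, sizeof act);
    act.sa_sigaction = &on_batch_done;
    act.sa_flags = SA_SIGINFO;
    sigaction(SIGUSR1, &act, NULL);
    int batches = 0;
    for (int round = 0; round < 8; ++round) { // hypothetical number of batches
        // do some calculations here...
        struct aiocb *cblist[16]; int cbno = 0;
        // set up the aio request list - omitted
        struct sigevent sev;
        memset(&sev, 0, sizeof sev);
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo = SIGUSR1;
        if (lio_listio(LIO_NOWAIT, cblist, cbno, &sev) == 0)
            ++batches; // only wait for batches that were actually queued
    }
    // Acquire once per successfully submitted batch.
    for (int i = 0; i < batches; ++i)
        sem_wait(&iosem);
    sem_destroy(&iosem);
}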

pthread_cond_wait never unblocking - thread pools

I'm trying to implement a sort of thread pool whereby I keep threads in a FIFO and process a bunch of images. Unfortunately, for some reason my cond_wait doesn't always wake up, even though it has been signaled.
// Initialize the thread pool
for(i = 0; i < numThreads; i++)
{
    pthread_t *tmpthread = (pthread_t *) malloc(sizeof(pthread_t));
    struct Node* newNode;
    newNode = (struct Node *) malloc(sizeof(struct Node));
    newNode->Thread = tmpthread;
    newNode->Id = i;
    newNode->threadParams = 0;
    pthread_cond_init(&(newNode->cond), NULL);
    pthread_mutex_init(&(newNode->mutx), NULL);
    pthread_create(tmpthread, NULL, someprocess, (void*) newNode);
    push_back(newNode, &threadPool);
}
for() //stuff here
{
    //...stuff
    pthread_mutex_lock(&queueMutex);
    struct Node *tmpNode = pop_front(&threadPool);
    pthread_mutex_unlock(&queueMutex);
    if(tmpNode != 0)
    {
        pthread_mutex_lock(&(tmpNode->mutx));
        pthread_cond_signal(&(tmpNode->cond)); // Not starting mutex sometimes?
        pthread_mutex_unlock(&(tmpNode->mutx));
    }
    //...stuff
}
destroy_threads = 1;
//loop through and signal all the threads again so they can exit.
//pthread_join here
}
void *someprocess(void* threadarg)
{
    do
    {
        //...stuff
        pthread_mutex_lock(&(threadNode->mutx));
        pthread_cond_wait(&(threadNode->cond), &(threadNode->mutx));
        // Doesn't always seem to resume here after being signalled.
        pthread_mutex_unlock(&(threadNode->mutx));
    } while(!destroy_threads);
    pthread_exit(NULL);
}
Am I missing something? It works about half of the time, so I would assume that I have a race somewhere, but the only thing I can think of is that I'm screwing up the mutexes. I read something about not signalling before locking, but I don't really understand what's going on.
Any suggestions?
Thanks!
Firstly, your example shows you locking queueMutex around the call to pop_front, but not around push_back. Typically you would need to lock around both, unless you can guarantee that all the pushes happen-before all the pops.
Secondly, your call to pthread_cond_wait doesn't have an associated predicate. Typical usage of condition variables is:
pthread_mutex_lock(&mtx);
while(!ready)
{
    pthread_cond_wait(&cond, &mtx);
}
do_stuff();
pthread_mutex_unlock(&mtx);
In this example, ready is some variable that is set by another thread whilst that thread holds a lock on mtx.
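For completeness, the setting side of this pattern (a generic sketch, not the poster's code) is:
pthread_mutex_lock(&mtx);
ready = 1;                  // update the predicate while holding the mutex
pthread_cond_signal(&cond); // wake the waiter; signalling under the lock is legal
pthread_mutex_unlock(&mtx);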
If the waiting thread is not blocked in pthread_cond_wait when pthread_cond_signal is called, then the signal is lost. The associated ready variable allows you to handle this scenario, and it also lets you handle so-called spurious wake-ups, where the call to pthread_cond_wait returns without a corresponding call to pthread_cond_signal from another thread.
I'm not sure about the performance implications, but you don't have to hold the mutex in the thread pool while calling pthread_cond_signal(&(tmpNode->cond)). Signalling while holding the mutex is legal; the woken thread will simply block inside pthread_cond_wait(&(threadNode->cond), &(threadNode->mutx)) until the signaller releases the mutex, so at worst it wakes up only to wait again briefly.
