Semaphore execution - semaphore

P1 and P2 are two concurrent processes that interact over shared data, as shown below. Tell me whether the property of mutual exclusion is satisfied for the given code; if not, provide a corrected version that satisfies the mutual exclusion property.
Code for P1:
do {
signal(mutex);
// critical section
wait(mutex);
// remainder section
} while (TRUE);
Code for P2:
do {
wait(mutex);
// critical section
signal(mutex);
// remainder section
} while (TRUE);
Note:
The definition of wait() is as follows:
wait(S) {
while S <= 0
; // no-op
S-- ;
}
The definition of signal() is as follows:
signal(S) {
S++;
}

Related

Producer Consumer synchronization with only 1 semaphore

In the Producer-Consumer problem, the standard solution uses 3 semaphores.
However, I was wondering if we could just use 1 semaphore:
semaphore mutex = 1
procedure producer() {
while (true) {
down(mutex)
if (bufferNotFull()) {
item = produceItem()
putItemIntoBuffer(item)
}
up(mutex)
}
}
procedure consumer() {
while (true) {
down(mutex)
if (bufferNotEmpty()) {
item = removeItemFromBuffer()
consumeItem(item)
}
up(mutex)
}
}
Is this solution as good as the standard one?
The standard solution for reference:
semaphore mutex = 1
semaphore fillCount = 0
semaphore emptyCount = BUFFER_SIZE
procedure producer() {
while (true) {
item = produceItem()
down(emptyCount)
down(mutex)
putItemIntoBuffer(item)
up(mutex)
up(fillCount)
}
}
procedure consumer() {
while (true) {
down(fillCount)
down(mutex)
item = removeItemFromBuffer()
consumeItem(item)
up(mutex)
up(emptyCount)
}
}
The "standard solution" doesn't have to use 3 semaphores. Even the wikipedia article you linked has a two-semaphore solution for cases in which there's a single producer and a single consumer:
semaphore fillCount = 0; // items produced
semaphore emptyCount = BUFFER_SIZE; // remaining space
procedure producer()
{
while (true)
{
item = produceItem();
down(emptyCount);
putItemIntoBuffer(item);
up(fillCount);
}
}
procedure consumer()
{
while (true)
{
down(fillCount);
item = removeItemFromBuffer();
up(emptyCount);
consumeItem(item);
}
}
Solutions with a single semaphore aren't great, since they give you just a single wait condition when you actually want two: "wait for free space" and "wait for an item". You can't express both with a single semaphore.
Anyway, your solution isn't wrong, it's just very inefficient, since at any given moment only a single producer or a single consumer can run. This is essentially single-threaded code, only with locks and context switches between threads, so it's even less efficient than actual single-threaded code.
And one more thing: item production and consumption aren't usually part of the critical section. While an item is being produced or consumed, the other thread can run. We need mutual exclusion only when accessing the buffer.
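For what it's worth, the two-semaphore single-producer/single-consumer pattern shown above maps directly onto C++20's std::counting_semaphore. A minimal sketch (my own illustration; the buffer size, item type, and the produce/consume bodies are placeholders):
#include <cstdio>
#include <semaphore>
#include <thread>

// Single producer / single consumer ring buffer guarded by the two
// counting semaphores from the pseudocode above.
constexpr int BUFFER_SIZE = 8;
int buffer[BUFFER_SIZE];
std::counting_semaphore<BUFFER_SIZE> emptyCount(BUFFER_SIZE); // remaining space
std::counting_semaphore<BUFFER_SIZE> fillCount(0);            // items produced

void producer() {
    for (int item = 0; item < 100; ++item) {
        emptyCount.acquire();                  // down(emptyCount)
        buffer[item % BUFFER_SIZE] = item;     // putItemIntoBuffer(item)
        fillCount.release();                   // up(fillCount)
    }
}

void consumer() {
    for (int i = 0; i < 100; ++i) {
        fillCount.acquire();                   // down(fillCount)
        int item = buffer[i % BUFFER_SIZE];    // removeItemFromBuffer()
        emptyCount.release();                  // up(emptyCount)
        std::printf("%d\n", item);             // consumeItem(item)
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}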

Worker thread suspend / resume implementation

In my attempt to add suspend / resume functionality to my Worker [thread] class, I've happened upon an issue that I cannot explain. (C++1y / VS2015)
The issue looks like a deadlock; however, I cannot seem to reproduce it once a debugger is attached and a breakpoint is set before a certain point (see #1), so it looks like a timing issue.
The fix that I could find (#2) doesn't make a lot of sense to me, because it requires holding on to a mutex for longer, during which client code might attempt to acquire other mutexes, which I understand actually increases the chance of a deadlock.
But it does fix the issue.
The Worker loop:
Job* job;
while (true)
{
{
std::unique_lock<std::mutex> lock(m_jobsMutex);
m_workSemaphore.Wait(lock);
if (m_jobs.empty() && m_finishing)
{
break;
}
// Take the next job
ASSERT(!m_jobs.empty());
job = m_jobs.front();
m_jobs.pop_front();
}
bool done = false;
bool wasSuspended = false;
do
{
// #2
{ // Removing this extra scoping seemingly fixes the issue BUT
// incurs us holding on to m_suspendMutex while the job is Process()ing,
// which might 1, be lengthy, 2, acquire other locks.
std::unique_lock<std::mutex> lock(m_suspendMutex);
if (m_isSuspended && !wasSuspended)
{
job->Suspend();
}
wasSuspended = m_isSuspended;
m_suspendCv.wait(lock, [this] {
return !m_isSuspended;
});
if (wasSuspended && !m_isSuspended)
{
job->Resume();
}
wasSuspended = m_isSuspended;
}
done = job->Process();
}
while (!done);
}
Suspend / Resume is just:
void Worker::Suspend()
{
std::unique_lock<std::mutex> lock(m_suspendMutex);
ASSERT(!m_isSuspended);
m_isSuspended = true;
}
void Worker::Resume()
{
{
std::unique_lock<std::mutex> lock(m_suspendMutex);
ASSERT(m_isSuspended);
m_isSuspended = false;
}
m_suspendCv.notify_one(); // notify_all() doesn't work either.
}
The (Visual Studio) test:
struct Job: Worker::Job
{
int durationMs = 25;
int chunks = 40;
int executed = 0;
bool Process()
{
auto now = std::chrono::system_clock::now();
auto until = now + std::chrono::milliseconds(durationMs);
while (std::chrono::system_clock::now() < until)
{ /* busy, busy */
}
++executed;
return executed < chunks;
}
void Suspend() { /* nothing here */ }
void Resume() { /* nothing here */ }
};
auto worker = std::make_unique<Worker>();
Job j;
worker->Enqueue(j);
std::this_thread::sleep_for(std::chrono::milliseconds(j.durationMs)); // Wait at least one chunk.
worker->Suspend();
Assert::IsTrue(j.executed < j.chunks); // We've suspended before we finished.
const int testExec = j.executed;
std::this_thread::sleep_for(std::chrono::milliseconds(j.durationMs * 4));
Assert::IsTrue(j.executed == testExec); // We haven't moved on.
// #1
worker->Resume(); // Breaking before this call means that I won't see the issue.
worker->Finalize();
Assert::IsTrue(j.executed == j.chunks); // Now we've finished.
What am I missing / doing wrong? Why does the Process()ing of the job have to be guarded by the suspend mutex?
EDIT: Resume() should not have been holding on to the mutex at the time of notification; that's fixed -- the issue persists.
Of course the Process()ing of the job does not have to be guarded by the suspend mutex.
The access of j.executed - for the asserts as well as for the incrementing - however does need to be synchronized (either by making it an std::atomic<int> or by guarding it with a mutex etc.).
It's still not clear why the issue manifested the way it did (since I'm not writing to the variable on the main thread) -- might be a case of undefined behaviour propagating backwards in time.
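In other words, the minimal fix is to make the shared counter atomic. A stand-alone sketch of that point (my own illustration, not the poster's Worker class; the counts and sleeps are arbitrary):
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// One thread increments a counter while another thread reads it.
// With a plain int this is a data race (undefined behaviour); with
// std::atomic<int> both the increment and the reads are well-defined.
int main()
{
    std::atomic<int> executed{0};

    std::thread worker([&executed] {
        for (int i = 0; i < 40; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            ++executed;                      // atomic read-modify-write
        }
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    int snapshot = executed.load();          // safe concurrent read
    assert(snapshot <= 40);

    worker.join();
    assert(executed.load() == 40);
    return 0;
}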

C++11 Thread Safety of Atomic Containers

I am trying to implement a thread-safe STL vector without mutexes, so I followed this post and implemented a wrapper for the atomic primitives.
However, when I ran the code below, it printed Failed! twice (only two instances of the race condition), so it doesn't seem to be thread safe. I'm wondering how I can fix that?
Wrapper Class
template<typename T>
struct AtomicVariable
{
std::atomic<T> atomic;
AtomicVariable() : atomic(T()) {}
explicit AtomicVariable(T const& v) : atomic(v) {}
explicit AtomicVariable(std::atomic<T> const& a) : atomic(a.load()) {}
AtomicVariable(AtomicVariable const&other) :
atomic(other.atomic.load()) {}
inline AtomicVariable& operator=(AtomicVariable const &rhs) {
atomic.store(rhs.atomic.load());
return *this;
}
inline AtomicVariable& operator+=(AtomicVariable const &rhs) {
atomic.store(rhs.atomic.load() + atomic.load());
return *this;
}
inline bool operator!=(AtomicVariable const &rhs) {
return !(atomic.load() == rhs.atomic.load());
}
};
typedef AtomicVariable<int> AtomicInt;
Functions and Testing
// Vector of 100 elements.
vector<AtomicInt> common(100, AtomicInt(0));
void add10(vector<AtomicInt> &param){
for (vector<AtomicInt>::iterator it = param.begin();
it != param.end(); ++it){
*it += AtomicInt(10);
}
}
void add100(vector<AtomicInt> &param){
for (vector<AtomicInt>::iterator it = param.begin();
it != param.end(); ++it){
*it += AtomicInt(100);
}
}
void doParallelProcessing(){
// Create threads
std::thread t1(add10, std::ref(common));
std::thread t2(add100, std::ref(common));
// Join 'em
t1.join();
t2.join();
// Print vector again
for (vector<AtomicInt>::iterator it = common.begin();
it != common.end(); ++it){
if (*it != AtomicInt(110)){
cout << "Failed!" << endl;
}
}
}
int main(int argc, char *argv[]) {
// Just for testing purposes
for (int i = 0; i < 100000; i++){
// Reset vector
common.clear();
common.resize(100, AtomicInt(0));
doParallelProcessing();
}
}
Is there such a thing as an atomic container? I've also tested this with a regular vector<int>; it didn't have any Failed output, but that might just be a coincidence.
Just write operator += as:
inline AtomicVariable& operator+=(AtomicVariable const &rhs) {
atomic += rhs.atomic;
return *this;
}
Per the documentation (http://en.cppreference.com/w/cpp/atomic/atomic), operator += is atomic.
Your example fails because the following interleaving is possible:
Thread1 - rhs.atomic.load() - returns 10 ; Thread2 - rhs.atomic.load() - returns 100
Thread1 - atomic.load() - returns 0 ; Thread2 - atomic.load() - returns 0
Thread1 - adds values (0 + 10 = 10) ; Thread2 - adds values (0 + 100 = 100)
Thread1 - atomic.store(10) ; Thread2 - atomic.store(100)
In this case the final atomic value might be 10 or 100, depending on which thread executes atomic.store last.
Please note that
atomic.store(rhs.atomic.load() + atomic.load());
is not atomic.
You have two options to solve it:
1) Use a mutex.
2) Use memory ordering (http://bartoszmilewski.com/2008/12/01/c-atomics-and-memory-ordering/):
memory_order_acquire: guarantees that subsequent loads are not moved before the current load or any preceding loads.
memory_order_release: guarantees that preceding stores are not moved past the current store or any subsequent stores.
I'm still not sure about 2, but I think it will work if the stores do not happen in parallel.
EDIT: as T.C. mentioned in the comments, option 2 is irrelevant here, since the operation is load(), then load(), then store() (not in relaxed mode), so memory ordering is not the issue.
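For reference, the question's test can also be reproduced without the wrapper class at all. A minimal sketch (my own illustration, not part of the original answer) that uses std::atomic<int> and fetch_add directly and never prints Failed!:
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // std::atomic is not copyable, so the vector is sized first and
    // each element is initialized explicitly (no wrapper class needed).
    std::vector<std::atomic<int>> common(100);
    for (auto& a : common) a.store(0);

    auto add = [&common](int delta) {
        for (auto& a : common) a.fetch_add(delta);   // single atomic RMW
    };

    std::thread t1(add, 10);
    std::thread t2(add, 100);
    t1.join();
    t2.join();

    for (auto& a : common) {
        if (a.load() != 110) std::cout << "Failed!" << std::endl;
    }
}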

Producer Consumer - using semaphores in child Processes

I had implemented the bounded-buffer problem (buffer size 5) using three semaphores: two counting semaphores (with a maximum count of 5) and one binary semaphore for the critical section.
The producer and consumer processes were separate and shared the buffer.
Then I moved on to try the same problem, this time with one parent process that sets up the shared memory (the buffer) and two child processes that act as producer and consumer.
I copied almost everything from the earlier code into the new one (the producer goes into the ret == 0 and i == 0 block and the consumer goes into the ret == 0 and i == 1 block; here i is the count of the child processes).
However, my processes block. The pseudo implementation of the code is as follows:
Please suggest whether the steps are correct. I think I may be going wrong with the sharing of the semaphores and their values. The shared memory, in any case, gets shared implicitly between the parent and both child processes.
struct shma{readindex,writeindex,buf_max,char buf[5],used_count};
main()
{
struct shma* shm;
shmid = shmget();
shm = shmat(shmid);
init_shma(0,0,5,0);
while(i++<2)
{
ret = fork();
if(ret > 0 )
continue;
if(ret ==0 )
{
if(i==0)
{
char value;
sembuf[3]; semun u1; values[] = {5,1,0}; // Sem Numbers 1,2,3
semid = semget(3 semaphores);
semctl(SETALL,values);
while(1)
{
getValuefromuser(&value);
decrement(1);
decrement(2); // Critical section
*copy value to shared memory*
increment(2);
increment(3); // used count
}
}
if(i==1)
{
char value;
sembuf[3]; semun u1; values[] = {5,1,0}; // Sem Numbers 1,2,3
semid = semget(3 semaphores);
semctl(SETALL,values);
while(1)
{
decrement(3); // Used Count
decrement(2); // Critical Section
read and print(&value); // From Shared Memory
increment(2);
increment(1); // free slots
}
}
}
}//while
Cleanup Code.
}//main
Should I get the semaphore IDs in both child processes, or is there something else missing?
The pseudocode implementation would be something like this: create the semaphores in the parent, then get the semaphore ID in each child process using the same key (either via ftok or a hardcoded key), obtain the current value of the semaphore, and perform the appropriate operations.
struct shma{readindex,writeindex,buf_max,char buf[5],used_count};
main()
{
struct shma* shm;
shmid = shmget();
shm = shmat(shmid);
init_shma(0,0,5,0);
sembuf[3]; semun u1; values[] = {5,1,0}; // Sem Numbers 1,2,3
semid = semget(3 semaphores);
semctl(SETALL,values);
while(i++<2)
{
ret = fork();
if(ret > 0 )
continue;
if(ret ==0 )
{
if(i==0)
{
char value;
sembuf[3]; semun u1; values[];
semid = semget(3 semaphores);
while(1)
{
getValuefromuser(&value);
decrement(1);
decrement(2); // Critical section
*copy value to shared memory*
increment(2);
increment(3); // used count
}
}
if(i==1)
{
char value;
sembuf[3]; semun u1; values[];
semid = semget(3 semaphores);
while(1)
{
decrement(3); // Used Count
decrement(2); // Critical Section
read and print(&value); // From Shared Memory
increment(2);
increment(1); // free slots
}
}
}
}//while
Cleanup Code.
}//main
I ultimately found the problem with my understanding.
I have to create and set up the three semaphores in the parent process, and then obtain them in the respective producer and consumer child processes and use them accordingly.
Earlier, I was creating the semaphores in both the producer and the consumer.
Silly me!
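That conclusion maps naturally onto POSIX unnamed semaphores placed in the shared region itself. A minimal Linux-flavoured sketch (my own, using mmap and sem_init with pshared instead of the SysV shmget/semget calls in the post, shown only to illustrate "create in the parent, inherit in the children"):
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

struct Shared {
    sem_t empty;          // free slots, starts at 5
    sem_t full;           // used slots, starts at 0
    sem_t mutex;          // binary semaphore for the buffer
    int read_i, write_i;
    char buf[5];
};

int main() {
    // Anonymous shared mapping: visible to both children after fork().
    Shared* shm = static_cast<Shared*>(mmap(nullptr, sizeof(Shared),
        PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    sem_init(&shm->empty, /*pshared=*/1, 5);   // created ONCE, in the parent
    sem_init(&shm->full,  /*pshared=*/1, 0);
    sem_init(&shm->mutex, /*pshared=*/1, 1);
    shm->read_i = shm->write_i = 0;

    if (fork() == 0) {                         // producer child
        for (char c = 'a'; c <= 'z'; ++c) {
            sem_wait(&shm->empty);
            sem_wait(&shm->mutex);
            shm->buf[shm->write_i] = c;
            shm->write_i = (shm->write_i + 1) % 5;
            sem_post(&shm->mutex);
            sem_post(&shm->full);
        }
        _exit(0);
    }
    if (fork() == 0) {                         // consumer child
        for (int n = 0; n < 26; ++n) {
            sem_wait(&shm->full);
            sem_wait(&shm->mutex);
            char c = shm->buf[shm->read_i];
            shm->read_i = (shm->read_i + 1) % 5;
            sem_post(&shm->mutex);
            sem_post(&shm->empty);
            std::printf("%c", c);
        }
        std::printf("\n");
        _exit(0);
    }
    wait(nullptr);
    wait(nullptr);
    return 0;
}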

need assistance with race conditions

Below is an example of the code
#define MAX_PROCESSES 255
int number_of_processes = 0;
/* the implementation of fork() calls this function */
int allocate_process()
{
int new_pid;
if (number_of_processes == MAX_PROCESSES)
return -1;
else {
/* allocate necessary process resources */
++number_of_processes;
return new_pid;
}
}
/* the implementation of exit() calls this function */
void release_process()
{
/* release process resources */
--number_of_processes;
}
I know the race condition is on number_of_processes. I am trying to understand race conditions. My question is: if I had a mutex lock with acquire() and release() operations, where could I place the lock to avoid race conditions? I just started reading about process synchronization and it's pretty interesting. Also, could an atomic integer be used, such as atomic_t number_of_processes instead of int number_of_processes? I know an atomic integer is used to avoid context switching, but I'm not sure -- is it possible?
Consider changing the code in the following way, using a mutex lock:
#define MAX_PROCESSES 255
int number_of_processes = 0;
/* the implementation of fork() calls this function */
int allocate_process()
{
try {
lock.acquire();
int new_pid;
if (number_of_processes == MAX_PROCESSES)
return -1;
else {
/* allocate necessary process resources */
++number_of_processes;
return new_pid;
}
} finally {
lock.release();
}
}
/* the implementation of exit() calls this function */
void release_process()
{
try {
lock.acquire();
/* release process resources */
--number_of_processes;
} finally {
lock.release();
}
}
I used try/finally syntax from Java; C++ has no finally, so there you would typically use an RAII guard such as std::lock_guard instead (see the sketch below).
Regarding the atomic integer: simply replacing int with atomic_t won't help, because allocate_process performs two operations on number_of_processes (the comparison and the increment), so the whole check-and-increment is not atomic.
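A minimal C++ sketch along those lines (my own illustration; table_lock, allocate_process_atomic, and the placeholder pid are not from the post), first with an RAII mutex guard, then with a lock-free compare-exchange variant that turns the check-and-increment into a single atomic step:
#include <atomic>
#include <mutex>

#define MAX_PROCESSES 255

std::mutex table_lock;               // protects number_of_processes
int number_of_processes = 0;

int allocate_process()
{
    std::lock_guard<std::mutex> guard(table_lock); // released on every return path
    if (number_of_processes == MAX_PROCESSES)
        return -1;
    /* allocate necessary process resources */
    ++number_of_processes;
    return 0;                        // placeholder pid
}

void release_process()
{
    std::lock_guard<std::mutex> guard(table_lock);
    /* release process resources */
    --number_of_processes;
}

// Lock-free variant: the compare-exchange loop retries until the
// counter is incremented without racing with another thread.
std::atomic<int> process_count{0};

int allocate_process_atomic()
{
    int current = process_count.load();
    do {
        if (current == MAX_PROCESSES)
            return -1;               // table full, nothing reserved
    } while (!process_count.compare_exchange_weak(current, current + 1));
    /* allocate necessary process resources */
    return 0;                        // placeholder pid
}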
