How do I draw a state diagram for a suspension-queue semaphore?

Here is the question:
Each process may be in different states and different events cause a process to transfer from one state to another; this can be represented using a state diagram. Use a state diagram to explain how a suspension-queue semaphore may be implemented. [10 marks]
Is my diagram correct, or have I misunderstood the question?
http://i.imgur.com/dC5RG6o.jpg
It is my understanding that a suspension-queue semaphore maintains a list of blocked processes from which to (perhaps randomly) select a process to unblock when the current process has finished its critical section. Hence the waiting state in the state diagram.
Pseudocode of suspended_queue_semaphore:
struct suspended_queue_semaphore
{
    int count;
    queueType queue;
};

void up(suspended_queue_semaphore s)
{
    if (s.count == 0)
    {
        /* place this process in s.queue */
        /* block this process */
    }
    else
    {
        s.count = s.count - 1;
    }
}

void down(suspended_queue_semaphore s)
{
    if (s.queue is not empty)
    {
        /* remove a process from s.queue using FIFO */
        /* unblock the process */
    }
    else
    {
        s.count = s.count + 1;
    }
}

Is the state diagram for the process or for the semaphore, and which semaphore are you talking about?
In the simplest case there is a binary semaphore (i.e. only one process can run at a time), with the operations wait(), i.e. a request to access the shared resource, and signal(), i.e. notification that the process has finished accessing the resource.
A state diagram for the process has only two states, Queued (Q) and Running (R), in addition to the Start and Terminate states.
The state diagram would be:
START = wait.CAN_RUN
CAN_RUN = suspend.QUEUED + run.RUNNING
QUEUED = run.RUNNING
RUNNING = signal.END
The semaphore has two states Empty and Full
A state diagram for the semaphore would be:
START = EMPTY
EMPTY = wait.RUN_PROCESS + RUN_PROCESS
RUN_PROCESS = run.FULL
FULL = signal.EMPTY + wait.SUSPEND_PROCESS
SUSPEND_PROCESS = suspend.FULL
Edit: Fixed the notation of the state diagrams (it was backwards; sorry, my process calculus is rusty) and added the internal states CAN_RUN, SUSPEND_PROCESS and RUN_PROCESS, and the internal messages run and suspend.
Explanation:
The process calls the 'wait' method (up in your pseudocode) and goes to the CAN_RUN state; from there it can either start RUNNING or become QUEUED, depending on whether it gets a 'run' or a 'suspend' message. If QUEUED, it can start RUNNING when it receives a 'run' message. If RUNNING, it uses 'signal' (down in your pseudocode) before finishing.
The semaphore starts EMPTY; if it gets a 'wait' it goes into RUN_PROCESS, issues a 'run' message and becomes FULL. Once FULL, any further 'wait' sends it to the SUSPEND_PROCESS state, where it issues a 'suspend' to the process. When a 'signal' is received it goes back to EMPTY, and it can remain there or go to RUN_PROCESS again depending on whether the queue is empty or not (I did not model these internal states, nor did I model the queue as a system).
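To connect this to real primitives: below is a minimal sketch of how such a suspension-queue semaphore might be implemented on top of POSIX threads, where the condition variable's internal wait queue plays the role of the suspension queue and the scheduler picks which waiter to unblock (matching the "perhaps randomly" above). The sq_* names are illustrative, not a standard API.

#include <pthread.h>

typedef struct {
    int             count;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;   /* threads suspend here when count == 0 */
} sq_semaphore;

void sq_init(sq_semaphore *s, int initial)
{
    s->count = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* wait(): suspend the caller if no permits are available. */
void sq_wait(sq_semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);  /* join the suspension queue */
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

/* signal(): release a permit and wake one queued thread, if any. */
void sq_signal(sq_semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);  /* unblock one waiter */
    pthread_mutex_unlock(&s->lock);
}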

Related

Using fetch-and-add as lock

I am trying to understand how fetch-and-add can be used as a lock. Here is what the book (Operating Systems: Three Easy Pieces) says:
The basic operation is pretty simple: when
a thread wishes to acquire a lock, it first does an atomic fetch-and-add
on the ticket value; that value is now considered this thread’s “turn”
(myturn). The globally shared lock->turn is then used to determine
which thread’s turn it is; when (myturn == turn) for a given thread,
it is that thread’s turn to enter the critical section.
What I do not understand is how the thread checks whether the lock is held by another process before entering the critical section. All I can read is that the value will be incremented; there is no mention of checks!
Another part says:
Unlock is accomplished
simply by incrementing the turn such that the next waiting thread (if
there is one) can now enter the critical section.
I cannot interpret this in a way where checks will not be performed, which cannot be true, because that would compromise the whole purpose of locking critical sections. What am I missing here? Thanks.
What I do not understand is how the thread checks whether the lock is held by another process before entering the critical section.
You need an "atomic fetch" for this, maybe something like "while( atomic_fetch(currently_serving) != my_ticket) { /* wait */ }".
If you have "atomic fetch and add", then you can implement "atomic fetch" by doing "atomic fetch and add the value zero", maybe something like "while( atomic_fetch_and_add(currently_serving, 0) != my_ticket) { /* wait */ }".
For reference; the full sequence could be something like:
my_ticket = atomic_fetch_and_add(ticket_counter, 1);
while ( atomic_fetch_and_add(currently_serving, 0) != my_ticket) {
    /* wait */
}

/* Critical section (lock successfully acquired). */

atomic_fetch_and_add(currently_serving, 1); /* Release the lock */
Of course you might have a better atomic fetch you can use instead (e.g. for some CPUs any normal aligned load is atomic).
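For reference, here is a runnable C11 rendering of that ticket lock using <stdatomic.h>; atomic_fetch_add and atomic_load are the standard library calls (atomic_load plays the role of the "atomic fetch", so no add-zero trick is needed), while the ticket_lock structure and function names are just this example's invention.

#include <stdatomic.h>

typedef struct {
    atomic_uint ticket_counter;    /* next ticket to hand out */
    atomic_uint currently_serving; /* ticket allowed into the critical section */
} ticket_lock;

void ticket_lock_acquire(ticket_lock *l)
{
    /* Take a ticket: atomic fetch-and-add returns the previous value. */
    unsigned my_ticket = atomic_fetch_add(&l->ticket_counter, 1);

    /* Spin until it is our turn. */
    while (atomic_load(&l->currently_serving) != my_ticket) {
        /* wait (optionally yield or pause here) */
    }
}

void ticket_lock_release(ticket_lock *l)
{
    /* Let the next waiting ticket holder in. */
    atomic_fetch_add(&l->currently_serving, 1);
}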

Is there a timed signal similar to pthread_cond_timedwait?

I have created many threads, each waiting on its own condition. Each thread, when it runs, signals the next condition and then goes back into the wait state.
However, I want the currently running thread to signal its next condition after some specified period of time (a very short period). How can I achieve that?
void *threadA(void *t)
{
    while (i < 100)
    {
        pthread_mutex_lock(&mutex1);
        while (state != my_id)
        {
            pthread_cond_wait(&cond[my_id], &mutex1);
        }
        // processing + busy wait
        /* Set state to i+1 and wake up thread i+1 (mutex1 is already held here) */
        state = (my_id + 1) % NTHREADS; //usleep(1);
        // (Here I don't want this sleep. I want this thread to complete its
        // processing and signal the next thread a bit later.)
        /*nanosleep(&zero, NULL);*/
        pthread_cond_signal(&cond[(my_id + 1) % NTHREADS]); // Send signal to thread (i+1) to awake
        pthread_mutex_unlock(&mutex1);
        i++;
    }
    return NULL;
}
Signalling a condition does nothing if nothing is waiting on the condition. So, if pthread 'x' signals condition 'cx' and then waits on it, it will wait for a very long time... unless some other thread also signals 'cx'!
I'm not really sure I understand what you mean by the pthread signalling its "next condition", but it occurs to me that there is not much difference between waiting before signalling a waiting thread and the signalled thread sleeping after it is woken.
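If the aim is just to delay the hand-off, one approach is to drop the mutex after the processing, sleep outside the lock, and only then re-acquire it to update state and signal; sleeping while holding the mutex would stall every other thread. A hedged sketch of the end of the loop body (names follow the question's code; the delay value is arbitrary):

/* ... processing finishes here; mutex1 is still held ... */
pthread_mutex_unlock(&mutex1);

/* Delay outside the lock so other threads are not blocked meanwhile. */
struct timespec delay = { 0, 100000 };   /* 100 microseconds, for example */
nanosleep(&delay, NULL);

pthread_mutex_lock(&mutex1);
state = (my_id + 1) % NTHREADS;
pthread_cond_signal(&cond[state]);       /* now wake thread (my_id + 1) */
pthread_mutex_unlock(&mutex1);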

Does the new thread exist, when pthread_create() returns?

My application creates several threads with pthread_create() and then tries to verify their presence with pthread_kill(threadId, 0). Every once in a while the pthread_kill fails with "No such process"...
Could it be that I'm calling pthread_kill too early after pthread_create? I thought the threadId returned by pthread_create() was valid right away, but it seems that is not always the case...
I do check the return value of pthread_create() itself -- it is not failing... Here is the code-snippet:
if (pthread_create(&title->thread, NULL,
                   process_title, title)) {
    ERR("Could not spawn thread for `%s': %m",
        title->name);
    continue;
}

if (pthread_kill(title->thread, 0)) {
    ERR("Thread of %s could not be signaled.",
        title->name);
    continue;
}
And once in a while I get the message about a thread, that could not be signaled...
That's really an implementation issue. The thread may exist or it may still be in a state of initialisation where pthread_kill won't be valid yet.
If you really want to verify that the thread is up and running, put some form of inter-thread communication in the thread function itself, rather than relying on the underlying details.
This could be as simple as an array which the main thread initialises to something and the thread function sets it to something else as its first action. Something like (pseudo-code obviously):
array running[10]

def threadFn(index):
    running[index] = stateBorn
    while running[index] != stateDying:
        weaveYourMagic()
    running[index] = stateDead
    exitThread()

def main():
    for i = 1 to 10:
        running[i] = statePrenatal
        startThread(threadFn, i)
    for i = 1 to 10:
        while running[i] != stateBorn:
            sleepABit()

    // All threads are now running, do stuff until shutdown required.

    for i = 1 to 10:
        running[i] = stateDying
    for i = 1 to 10:
        while running[i] != stateDead:
            sleepABit()

    // All threads have now exited, though you could have
    // also used pthread_join for that final loop if you
    // had the thread IDs.
In the code above, you use the running state both to let the main thread know when all other threads are doing something, and to shut down threads as necessary.
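In real pthreads code you would normally replace that polling loop with a condition variable. Here is a minimal sketch of the same "wait until the thread reports it is born" handshake for a single thread (variable names are illustrative); compile with -pthread:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  born = PTHREAD_COND_INITIALIZER;
static int started = 0;  /* set by the new thread as its first action */

static void *thread_fn(void *arg)
{
    pthread_mutex_lock(&lock);
    started = 1;                    /* "I exist and am running" */
    pthread_cond_signal(&born);
    pthread_mutex_unlock(&lock);

    /* ... real work ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, thread_fn, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }

    /* Block until the thread has confirmed it is running. */
    pthread_mutex_lock(&lock);
    while (!started)
        pthread_cond_wait(&born, &lock);
    pthread_mutex_unlock(&lock);

    printf("thread is up\n");
    pthread_join(tid, NULL);
    return 0;
}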

Multithreading: classical Producer Consumer algorithm

Something I don't get about the classical algorithm for the Producer-Consumer problem (from Wikipedia:)
semaphore mutex = 1
semaphore fillCount = 0
semaphore emptyCount = BUFFER_SIZE

procedure producer() {
    while (true) {
        item = produceItem()
        down(emptyCount)
        down(mutex)
        putItemIntoBuffer(item)
        up(mutex)
        up(fillCount)
    }
    up(fillCount) //the consumer may not finish before the producer.
}

procedure consumer() {
    while (true) {
        down(fillCount)
        down(mutex)
        item = removeItemFromBuffer()
        up(mutex)
        up(emptyCount)
        consumeItem(item)
    }
}
I note that both producers and consumers lock 'mutex' before touching the buffer and unlock it afterwards. If that is the case, i.e. only a single thread is accessing the buffer at any given moment, I don't really see how the above algorithm differs from a simple solution that entails only putting a guarding mutex over the buffer:
semaphore mutex = 1

procedure producer() {
    while (true) {
        item = produceItem()
        flag = true
        while (flag) {
            down(mutex)
            if (bufferNotFull()) {
                putItemIntoBuffer(item)
                flag = false
            }
            up(mutex)
        }
    }
}

procedure consumer() {
    while (true) {
        flag = true
        while (flag) {
            down(mutex)
            if (bufferNotEmpty()) {
                item = removeItemFromBuffer()
                flag = false
            }
            up(mutex)
        }
        consumeItem(item)
    }
}
The only thing I can think of that necessitates the 'fillCount' and 'emptyCount' semaphores is scheduling.
Maybe the first algo is for making sure that in a state where 5 consumers are waiting on an empty buffer (zero 'fillCount'), it is assured that when a new producer comes along, it will go past its "down(emptyCount)" statement quickly and get the 'mutex' quickly.
(whereas in the other solution the consumers will needlessly get the 'mutex' only to relinquish it until the new producer gets it and inserts an item).
Am I right? Am I missing something?
If there are no messages in the buffer, the consumer will down the mutex, check the buffer, find that it's empty, up the mutex, loop back around and immediately repeat the process. In simple terms, consumers and producers are stuck in busy loops that chew up 100% of a CPU core. This is not just a theoretical problem, either. You may well find that your computer's fan starts to spin every time you run your program.
The whole concept of the producer/consumer pattern is that the shared resource (the buffer) is only accessed when certain criteria are met, and that no CPU cycles are wasted making sure they are met.
Consumer:
Wait if there is nothing to consume (=> empty buffer).
Producer:
Wait if there is not enough space in the buffer.
And for both it is very important to note:
Wait != Spin wait
There's more than just the locking of the buffer going on.
The fillCount semaphore in the first program is to pause the consumer(s) when there's nothing left to consume. Without it, you're constantly polling the buffer to see if there's anything to get, and that's quite wasteful.
Likewise, the emptyCount semaphore pauses the producer when the buffer is full.
There is also a starvation issue if more than one consumer is involved. In the latter example, a consumer checks for an item in a loop, but by the time it gets to try again, some other consumer may have "stolen" its item; and the next time again, and again. That is not nice.
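To make the blocking behaviour concrete, here is a compilable C sketch of the classic algorithm using POSIX semaphores, where sem_wait corresponds to down and sem_post to up; the buffer size, item count and ring-buffer layout are arbitrary choices for the example (build with -pthread on Linux):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 8

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;        /* ring buffer indices */

static sem_t mutex;       /* binary semaphore guarding the buffer */
static sem_t fill_count;  /* number of items available to consume */
static sem_t empty_count; /* number of free slots */

static void *producer(void *arg)
{
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty_count);          /* block while the buffer is full */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&fill_count);           /* wake a sleeping consumer, if any */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int n = 0; n < 100; n++) {
        sem_wait(&fill_count);           /* block while the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty_count);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&fill_count, 0, 0);
    sem_init(&empty_count, 0, BUFFER_SIZE);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}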

What is a race condition?

When writing multithreaded applications, one of the most common problems experienced is race conditions.
My questions to the community are:
What is the race condition?
How do you detect them?
How do you handle them?
Finally, how do you prevent them from occurring?
A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data.
Problems often occur when one thread does a "check-then-act" (e.g. "check" if the value is X, then "act" to do something that depends on the value being X) and another thread does something to the value in between the "check" and the "act". E.g:
if (x == 5) // The "Check"
{
y = x * 2; // The "Act"
// If another thread changed x in between "if (x == 5)" and "y = x * 2" above,
// y will not be equal to 10.
}
The point being, y could be 10, or it could be anything, depending on whether another thread changed x in between the check and act. You have no real way of knowing.
In order to prevent race conditions from occurring, you would typically put a lock around the shared data to ensure only one thread can access the data at a time. This would mean something like this:
// Obtain lock for x
if (x == 5)
{
y = x * 2; // Now, nothing can change x until the lock is released.
// Therefore y = 10
}
// release lock for x
A "race condition" exists when multithreaded (or otherwise parallel) code that would access a shared resource could do so in such a way as to cause unexpected results.
Take this example:
for (int i = 0; i < 10000000; i++)
{
    x = x + 1;
}
If you had 5 threads executing this code at once, the value of x WOULD NOT end up being 50,000,000. It would in fact vary with each run.
This is because, in order for each thread to increment the value of x, they have to do the following: (simplified, obviously)
Retrieve the value of x
Add 1 to this value
Store this value to x
Any thread can be at any step in this process at any time, and they can step on each other when a shared resource is involved. The state of x can be changed by another thread during the time between x being read and being written back.
Let's say a thread retrieves the value of x, but hasn't stored it yet. Another thread can also retrieve the same value of x (because no thread has changed it yet) and then they would both be storing the same value (x+1) back in x!
Example:
Thread 1: reads x, value is 7
Thread 1: add 1 to x, value is now 8
Thread 2: reads x, value is 7
Thread 1: stores 8 in x
Thread 2: adds 1 to x, value is now 8
Thread 2: stores 8 in x
Race conditions can be avoided by employing some sort of locking mechanism before the code that accesses the shared resource:
for (int i = 0; i < 10000000; i++)
{
    // lock x
    x = x + 1;
    // unlock x
}
Here, the answer comes out as 50,000,000 every time.
For more on locking, search for: mutex, semaphore, critical section, shared resource.
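As an aside, this particular counter can also be repaired without an explicit lock by making the increment itself atomic. A minimal C11 sketch (the thread count and iteration count mirror the example above; build with -pthread):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_long x = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 10000000; i++)
        atomic_fetch_add(&x, 1);   /* read-modify-write as one indivisible step */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    for (int i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);

    printf("%ld\n", atomic_load(&x));  /* always 50000000 */
    return 0;
}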
What is a Race Condition?
You are planning to go to a movie at 5 pm. You inquire about the availability of the tickets at 4 pm. The representative says that they are available. You relax and reach the ticket window 5 minutes before the show. I'm sure you can guess what happens: it's a full house. The problem here was in the duration between the check and the action. You inquired at 4 and acted at 5. In the meantime, someone else grabbed the tickets. That's a race condition - specifically a "check-then-act" scenario of race conditions.
How do you detect them?
Religious code review, multi-threaded unit tests. There is no shortcut. There are a few Eclipse plugins emerging for this, but nothing stable yet.
How do you handle and prevent them?
The best thing would be to create side-effect-free and stateless functions, and to use immutables as much as possible. But that is not always possible. So java.util.concurrent.atomic, concurrent data structures, proper synchronization, and actor-based concurrency will help.
The best resource for concurrency is JCIP. You can also get some more details on the above explanation here.
There is an important technical difference between race conditions and data races. Most answers seem to make the assumption that these terms are equivalent, but they are not.
A data race occurs when two instructions access the same memory location, at least one of these accesses is a write, and there is no happens-before ordering among these accesses. Now, what constitutes a happens-before ordering is subject to a lot of debate, but in general unlock-lock pairs on the same lock variable and wait-signal pairs on the same condition variable induce a happens-before order.
A race condition is a semantic error. It is a flaw that occurs in the timing or the ordering of events that leads to erroneous program behavior.
Many race conditions can be (and in fact are) caused by data races, but this is not necessary. As a matter of fact, data races and race conditions are neither necessary nor sufficient for one another. This blog post also explains the difference very well, with a simple bank transaction example. Here is another simple example that explains the difference.
Now that we nailed down the terminology, let us try to answer the original question.
Given that race conditions are semantic bugs, there is no general way of detecting them. This is because there is no way of having an automated oracle that can distinguish correct vs. incorrect program behavior in the general case. Race detection is an undecidable problem.
On the other hand, data races have a precise definition that does not necessarily relate to correctness, and therefore one can detect them. There are many flavors of data race detectors (static/dynamic data race detection, lockset-based data race detection, happens-before based data race detection, hybrid data race detection). A state of the art dynamic data race detector is ThreadSanitizer which works very well in practice.
Handling data races in general requires some programming discipline to induce happens-before edges between accesses to shared data (either during development, or once they are detected using the above-mentioned tools). This can be done through locks, condition variables, semaphores, etc. However, one can also employ different programming paradigms, like message passing (instead of shared memory), that avoid data races by construction.
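For a concrete picture of what such detectors flag, here is roughly the smallest C program containing a data race: two threads, the same location, at least one write, and no happens-before edge between the accesses. Building it with gcc or clang using -fsanitize=thread and running it should produce a ThreadSanitizer report:

#include <pthread.h>
#include <stdio.h>

static int shared = 0;  /* accessed by both threads with no synchronization */

static void *writer(void *arg)
{
    shared = 42;        /* unsynchronized write: one half of the data race */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    printf("%d\n", shared);  /* unsynchronized read: the other half */
    pthread_join(t, NULL);
    return 0;
}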
A sort-of-canonical definition is "when two threads access the same location in memory at the same time, and at least one of the accesses is a write." In this situation the "reader" thread may get the old value or the new value, depending on which thread "wins the race." This is not always a bug (in fact, some really hairy low-level algorithms do this on purpose), but it should generally be avoided. @Steve Gury gives a good example of when it might be a problem.
A race condition is a situation on concurrent programming where two concurrent threads or processes compete for a resource and the resulting final state depends on who gets the resource first.
A race condition is a kind of bug that happens only under certain temporal conditions.
Example:
Imagine you have two threads, A and B.
In Thread A:
if( object.a != 0 )
object.avg = total / object.a
In Thread B:
object.a = 0
If thread A is preempted just after having checked that object.a is not zero, B will set a to 0, and when thread A gains the processor again, it will attempt a "divide by zero".
This bug only happens when thread A is preempted just after the if statement; it's very rare, but it can happen.
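The usual fix is to make the check and the use a single atomic unit by holding one lock across both, and to take the same lock around the conflicting write. A sketch in C (the struct, field and function names are invented for the example):

#include <pthread.h>

struct stats {
    pthread_mutex_t lock;
    int a;          /* divisor */
    double total;
    double avg;
};

/* Thread A: check and divide under the same lock, so 'a' cannot
 * change between the test and the division. */
void update_avg(struct stats *s)
{
    pthread_mutex_lock(&s->lock);
    if (s->a != 0)
        s->avg = s->total / s->a;
    pthread_mutex_unlock(&s->lock);
}

/* Thread B: the write to 'a' takes the same lock. */
void reset_count(struct stats *s)
{
    pthread_mutex_lock(&s->lock);
    s->a = 0;
    pthread_mutex_unlock(&s->lock);
}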
Many answers in this discussion explain what a race condition is. I will try to provide an explanation of why this term is called a race condition in the software industry.
Why is it called race condition?
A race condition is not only related to software but also to hardware. Actually, the term was initially coined by the hardware industry.
According to wikipedia:
The term originates with the idea of two signals racing each other to
influence the output first.
Race condition in a logic circuit:
The software industry took this term without modification, which makes it a little bit difficult to understand.
You need to do some replacement to map it to the software world:
"two signals" ==> "two threads"/"two processes"
"influence the output" ==> "influence some shared state"
So race condition in software industry means "two threads"/"two processes" racing each other to "influence some shared state", and the final result of the shared state will depend on some subtle timing difference, which could be caused by some specific thread/process launching order, thread/process scheduling, etc.
Race conditions occur in multi-threaded applications or multi-process systems. A race condition, at its most basic, is anything that makes the assumption that two things not in the same thread or process will happen in a particular order, without taking steps to ensure that they do. This happens commonly when two threads are passing messages by setting and checking member variables of a class both can access. There's almost always a race condition when one thread calls sleep to give another thread time to finish a task (unless that sleep is in a loop, with some checking mechanism).
Tools for preventing race conditions depend on the language and OS, but some common ones are mutexes, critical sections, and signals. Mutexes are good when you want to make sure you're the only one doing something. Signals are good when you want to make sure someone else has finished doing something. Minimizing shared resources can also help prevent unexpected behaviors.
Detecting race conditions can be difficult, but there are a couple of signs. Code which relies heavily on sleeps is prone to race conditions, so first check for calls to sleep in the affected code. Adding particularly long sleeps can also be used for debugging to try to force a particular order of events. This can be useful for reproducing the behavior, seeing if you can make it disappear by changing the timing of things, and for testing solutions put in place. The sleeps should be removed after debugging.
The telltale sign that one has a race condition, though, is an issue that only occurs intermittently on some machines. Common bugs would be crashes and deadlocks. With logging, you should be able to find the affected area and work back from there.
Microsoft has actually published a really detailed article on race conditions and deadlocks. The most summarized abstract from it would be the title paragraph:
A race condition occurs when two threads access a shared variable at
the same time. The first thread reads the variable, and the second
thread reads the same value from the variable. Then the first thread
and second thread perform their operations on the value, and they race
to see which thread can write the value last to the shared variable.
The value of the thread that writes its value last is preserved,
because the thread is writing over the value that the previous thread
wrote.
What is a race condition?
The situation in which a process is critically dependent on the sequence or timing of other events.
For example,
Processor A and processor B both need the same resource for their execution.
How do you detect them?
There are tools to detect race conditions automatically:
Lockset-Based Race Checker
Happens-Before Race Detection
Hybrid Race Detection
How do you handle them?
Race conditions can be handled by mutexes or semaphores, which act as locks that allow a process to acquire a resource based on certain requirements, thereby preventing race conditions.
How do you prevent them from occurring?
There are various ways to prevent race conditions, such as managing critical sections so that:
No two processes simultaneously inside their critical regions. (Mutual Exclusion)
No assumptions are made about speeds or the number of CPUs.
No process running outside its critical region which blocks other processes.
No process has to wait forever to enter its critical region. (A waits for B resources, B waits for C resources, C waits for A resources)
You can prevent race conditions by using the "atomic" classes. The reason is that the thread doesn't separate the get and set operations; an example is below:
AtomicInteger ai = new AtomicInteger(2);
ai.getAndAdd(5);
As a result, you will have 7 in the variable "ai".
Although you did two actions (get and add), they form one atomic operation that no other thread can interleave with; that means no race conditions!
I made a video that explains this.
Essentially, it is when you have state shared across multiple threads, and before the first execution on a given state completes, another execution starts; the new thread's initial state for a given operation is then wrong, because the previous execution has not completed.
Because the initial state of the second execution is wrong, its resulting computation is also wrong, and eventually the second execution will update the final state with the wrong result.
You can view it here.
https://youtu.be/RWRicNoWKOY
Here is the classical Bank Account Balance example which will help newbies to understand Threads in Java easily w.r.t. race conditions:
public class BankAccount {

    int accountNumber;
    double accountBalance;

    public synchronized boolean Deposit(double amount) {
        if (amount <= 0) {
            return false;
        } else {
            double newAccountBalance = accountBalance + amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }

    public synchronized boolean Withdraw(double amount) {
        if (amount > accountBalance) {
            return false;
        } else {
            double newAccountBalance = accountBalance - amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }

    public static void main(String[] args) {
        BankAccount b = new BankAccount();
        b.accountBalance = 2000;
        System.out.println(b.Withdraw(3000));
    }
}
Try this basic example for a better understanding of race conditions:
public class ThreadRaceCondition {

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        Account myAccount = new Account(22222222);

        // Expected deposit: 250
        for (int i = 0; i < 50; i++) {
            Transaction t = new Transaction(myAccount,
                    Transaction.TransactionType.DEPOSIT, 5.00);
            t.start();
        }

        // Expected withdrawal: 50
        for (int i = 0; i < 50; i++) {
            Transaction t = new Transaction(myAccount,
                    Transaction.TransactionType.WITHDRAW, 1.00);
            t.start();
        }

        // Temporary sleep to ensure all threads are completed. Don't use in
        // the real world :-)
        Thread.sleep(1000);

        // Expected account balance is 200
        System.out.println("Final Account Balance: "
                + myAccount.getAccountBalance());
    }
}

class Transaction extends Thread {

    public static enum TransactionType {
        DEPOSIT(1), WITHDRAW(2);

        private int value;

        private TransactionType(int value) {
            this.value = value;
        }

        public int getValue() {
            return value;
        }
    }

    private TransactionType transactionType;
    private Account account;
    private double amount;

    /*
     * If transactionType == 1, deposit; else if transactionType == 2, withdraw
     */
    public Transaction(Account account, TransactionType transactionType,
            double amount) {
        this.transactionType = transactionType;
        this.account = account;
        this.amount = amount;
    }

    public void run() {
        switch (this.transactionType) {
        case DEPOSIT:
            deposit();
            printBalance();
            break;
        case WITHDRAW:
            withdraw();
            printBalance();
            break;
        default:
            System.out.println("NOT A VALID TRANSACTION");
        }
    }

    public void deposit() {
        this.account.deposit(this.amount);
    }

    public void withdraw() {
        this.account.withdraw(amount);
    }

    public void printBalance() {
        System.out.println(Thread.currentThread().getName()
                + " : TransactionType: " + this.transactionType + ", Amount: "
                + this.amount);
        System.out.println("Account Balance: "
                + this.account.getAccountBalance());
    }
}

class Account {
    private int accountNumber;
    private double accountBalance;

    public int getAccountNumber() {
        return accountNumber;
    }

    public double getAccountBalance() {
        return accountBalance;
    }

    public Account(int accountNumber) {
        this.accountNumber = accountNumber;
    }

    // Remove the synchronized keyword to see the race condition.
    public synchronized boolean deposit(double amount) {
        if (amount < 0) {
            return false;
        } else {
            accountBalance = accountBalance + amount;
            return true;
        }
    }

    // Remove the synchronized keyword to see the race condition.
    public synchronized boolean withdraw(double amount) {
        if (amount > accountBalance) {
            return false;
        } else {
            accountBalance = accountBalance - amount;
            return true;
        }
    }
}
You don't always want to eliminate a race condition. If you have a flag which can be read and written by multiple threads, and this flag is set to 'done' by one thread so that other threads stop processing when the flag is 'done', you don't want that "race condition" to be eliminated. In fact, this one can be referred to as a benign race condition.
However, a tool for race condition detection will spot it as a harmful race condition.
More details on race condition here, http://msdn.microsoft.com/en-us/magazine/cc546569.aspx.
Consider an operation which has to display the count as soon as the count gets incremented, i.e. as soon as CounterThread increments the value, DisplayThread needs to display the recently updated value.
int i = 0;
Output
CounterThread -> i = 1
DisplayThread -> i = 1
CounterThread -> i = 2
CounterThread -> i = 3
CounterThread -> i = 4
DisplayThread -> i = 4
Here CounterThread gets the lock frequently and updates the value before DisplayThread displays it. A race condition exists here. The race condition can be solved by using synchronization.
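One way to enforce that strict alternation is a condition variable that makes the counter wait until the previous value has been displayed. A sketch in C (a Java wait/notify version would be analogous; the loop bound of 5 is arbitrary):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;
static int count = 0;
static int displayed = 1;   /* has the current value been displayed yet? */

static void *counter_thread(void *arg)
{
    for (int n = 0; n < 5; n++) {
        pthread_mutex_lock(&lock);
        while (!displayed)                  /* wait for the display to catch up */
            pthread_cond_wait(&changed, &lock);
        count++;
        displayed = 0;
        pthread_cond_signal(&changed);      /* tell the display a new value exists */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *display_thread(void *arg)
{
    for (int n = 0; n < 5; n++) {
        pthread_mutex_lock(&lock);
        while (displayed)                   /* wait for a fresh value */
            pthread_cond_wait(&changed, &lock);
        printf("i = %d\n", count);
        displayed = 1;
        pthread_cond_signal(&changed);      /* let the counter increment again */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t c, d;
    pthread_create(&c, NULL, counter_thread, NULL);
    pthread_create(&d, NULL, display_thread, NULL);
    pthread_join(c, NULL);
    pthread_join(d, NULL);
    return 0;
}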
A race condition is an undesirable situation that occurs when two or more processes can access and change shared data at the same time. It occurs because of conflicting accesses to a resource. The critical section problem may cause a race condition; to solve it, we must ensure that only one process at a time executes the critical section.
