Thread scheduling/Data race conditions - multithreading

In class, we are studying threads and race conditions. By my understanding, it should be possible for the code below to output the value 8 or 9, since thread 1 could be interrupted by thread 2 after the counter value has been loaded and decremented in the eax register, but before the result has been stored back to memory.
#include <stdio.h>
#include <pthread.h>

int counter = 10;

void *worker(void *arg) {
    counter--;   /* not atomic: a load, a decrement, and a store */
    return NULL;
}

int main(int argc, char *argv[]) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("%d\n", counter);
    return 0;
}
However, when I run the code, I always receive the output 8. Is it a mechanism of the compiler that normalizes the output, or is it only possible for the code to output 8 (no race condition is created)?

There's no way for us to tell without knowing lots of complicated details about your platform, compiler, maybe even CPU. The code has a race condition in theory but it may be exceptionally difficult, maybe even impossible, to trigger.
Of course, if you upgrade your compiler or CPU, change compilation options, upgrade your OS, or do any number of other things, it may start behaving differently.
This is one of the reasons race conditions can be so insidious. They can be impossible to trigger under some conditions and then suddenly start happening all the time when some change is made elsewhere.

The code definitely has a race condition.
I don't find it surprising that you're seeing consistent results--starting a thread takes a little while, so there's a good chance that in your case, the first thread finishes before the second gets started.
Nonetheless, the code clearly has undefined behavior, because there's no question it has a race condition.

There definitely is a race condition. The reason you're not seeing it is that the decrement happens so fast compared to the time it takes to start a thread that the first thread is likely to be done before the second thread even starts. You'll see the race condition if you make the amount of work large enough that the first thread is still running when the second one starts.
Example: modify the worker function to decrement in a loop:
int counter = 1000000000;

void *worker(void *arg)
{
    for (int i = 0; i < 500000000; ++i)
        --counter;
    return NULL;
}
Since counter starts at 1 billion, and you're running 2 threads that each decrement counter by 500 million, you would expect counter to be 0 when you are done if race conditions didn't exist.
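For completeness, here is one way the whole experiment might look as a compilable program (a sketch: the build command is one common choice, and note that at higher optimization levels the compiler may fold the loop into a single subtraction, which can make the race much harder to observe):

/* race.c -- build with something like: gcc -pthread -O0 race.c */
#include <stdio.h>
#include <pthread.h>

int counter = 1000000000;

void *worker(void *arg)
{
    for (int i = 0; i < 500000000; ++i)
        --counter;              /* unsynchronized read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("%d\n", counter);    /* 0 only if no decrements were lost */
    return 0;
}

On typical hardware the printed value is almost never 0, because the two threads' decrements interleave and overwrite each other.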

Related

Threads issue in c language using pthread library

I declare a global variable and initialize it to 0.
In the main() function I create two threads. The first thread function increments the global variable up to the received argument (a function parameter) using a for loop, while the second function decrements the global variable the same number of times using a for loop.
When I pass 1000 as the argument the program works fine, but when I pass 100000 the global variable's value should be zero at the end, yet I find that it is not zero.
I also call the join function for both threads, but that doesn't help.
#include "stdio.h"
#include "stdlib.h"
#include "pthread.h"
int globVar =0;
void *incFunct(void* val){
for (int i=0; i<val; i++)
globVar++;
pthread_exit(NULL);
}
void *decFunct(void* val){
for (int i=0; i<val; i++)
globVar--;
pthread_exit(NULL);
}
int main()
{
pthread_t tid[2];
int val = 1000000;
printf("Initial value of Global variable : %d \n", globVar);
pthread_create(&tid[0], NULL, &incFunct, (void*)val);
pthread_create(&tid[1], NULL, &decFunct, (void*)val);
pthread_join(tid[0], NULL);
pthread_join(tid[1], NULL);
printf("Final Value of Global Var : %d \n", globVar);
return 0;
}
Yeah, you can't do that. Reasonably, you could end up with globVar having any value between -1000000 and +1000000; unreasonably, you might have invited the compiler to burn down your home (ask Google about undefined behaviour).
You need to synchronize the operations of the two threads. One such synchronization is with a pthread_mutex_t; and you would acquire the lock (pthread_mutex_lock()) before operating on globVar, and release the lock (pthread_mutex_unlock()) after updating globVar.
For this particularly silly case, atomics might be more appropriate if your compiler happens to support them (/usr/include/stdatomic.h).
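A minimal sketch of the mutex approach (assuming the loop bound bug above has been fixed; locking inside the loop is the simplest form, though taking the lock once around the whole loop would be faster at the cost of serializing the threads):

#include <stdint.h>
#include <pthread.h>

int globVar = 0;
pthread_mutex_t globVar_lock = PTHREAD_MUTEX_INITIALIZER;

void *incFunct(void *val)
{
    intptr_t n = (intptr_t)val;
    for (intptr_t i = 0; i < n; i++) {
        pthread_mutex_lock(&globVar_lock);    /* acquire before touching globVar */
        globVar++;
        pthread_mutex_unlock(&globVar_lock);  /* release after the update */
    }
    pthread_exit(NULL);
}

/* Alternatively, with C11 atomics: declare '_Atomic int globVar;' and a plain
   globVar++ becomes an atomic read-modify-write, with no explicit lock. */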
One thing that might happen is that the inc thread and the dec thread don't see consistent values for globVar. If you increment a variable you think has a value of 592, and, at the same time, I decrement what I think is the same variable but with a value of 311 — who wins? What happens when it's all over?
Without memory synchronization, you can't predict what will happen when multiple threads update the same memory location. You might have problems with cache coherency, variable tearing, and even reordered operations. Mutexes or C11 atomic variables are two ways to avoid these problems.
(As an aside, I suspect you don't see this problem with one thousand iterations because the first thread finishes well before the second even looks at globVar, so its updates have already made it back to memory.)

using atomic c++11 to implement a thread safe down counter to zero

I'm new to atomic techniques and try to implement a safe thread version for the follow code:
// say m_cnt is unsigned
void Counter::dec_counter()
{
    if (0 == m_cnt)
        return;
    --m_cnt;
    if (0 == m_cnt)
    {
        // Do something
    }
}
Every thread that calls dec_counter must decrement it by one, and "Do something" should be done only once, at the moment the counter reaches 0.
After fighting with it, I came up with the following code, which I think does the job, but I wonder whether this is the way to do it or whether there is a better way. Thanks.
// m_cnt is std::atomic<unsigned>
void Counter::dec_counter()
{
    // loop until the decrement is done
    unsigned uiExpectedValue;
    unsigned uiNewValue;
    do
    {
        uiExpectedValue = m_cnt.load();
        // if another thread already decremented it to 0, do nothing
        if (0 == uiExpectedValue)
            return;
        uiNewValue = uiExpectedValue - 1;
        // between the load() above and the compare-exchange below, another
        // thread may have decremented m_cnt, in which case it no longer
        // equals uiExpectedValue; hence the loop, to be sure we perform
        // exactly one decrement
    } while (!m_cnt.compare_exchange_weak(uiExpectedValue, uiNewValue));
    // if we get here, we performed the decrement; if it took the counter
    // to 0, do something
    if (0 == uiNewValue)
    {
        // do something
    }
}
The thing with atomic is that only that one statement is atomic.
If you write
std::atomic<int> i {20};
...
if (!--i)
    ...
Then just 1 thread will enter the if.
However, if you split up the change and the test, then other threads can get into the gap, and you may get strange results:
std::atomic<int> i {20};
...
--i;
// other thread(s) can modify i just here
if (!i)
    ...
Of course you can separate the condition test from the decrement by using a local variable:
std::atomic<int> i {20};
...
int j = --i;
// other thread(s) can modify i just here, but j is unaffected
if (!j)
    ...
All the simple math operations are generally supported efficiently for small atomic types in C++.
For more complex types and expressions, you need to use the read/modify/write member functions.
These let you read the current value, calculate the new value, and then call compare_exchange_strong or compare_exchange_weak, which say "if the value has not changed, then store my new value, otherwise give me the new current value", as a single atomic operation. You can stick this in a loop and keep recalculating the new value until you are lucky enough that your thread is the only writer. If there are not too many threads trying to change the value too often, this is reasonably efficient as well.
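For comparison, here is a hedged sketch of the same compare-exchange loop written with C11 atomics (the free function and file-scope counter are illustrative stand-ins for the class member above):

#include <stdatomic.h>
#include <stdbool.h>

atomic_uint m_cnt;   /* stand-in for the atomic member variable */

/* Returns true if this call performed the decrement that reached zero. */
bool dec_counter_hits_zero(void)
{
    unsigned expected = atomic_load(&m_cnt);
    while (expected != 0) {
        /* Try to swap in expected - 1. On failure, 'expected' is reloaded
           with the current value and the loop retries. */
        if (atomic_compare_exchange_weak(&m_cnt, &expected, expected - 1))
            return expected == 1;   /* we decremented; did we take it to 0? */
    }
    return false;   /* counter was already 0: nothing to do */
}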

Design pattern for asynchronous while loop

I have a function that boils down to:
while(doWork)
{
config = generateConfigurationForTesting();
result = executeWork(config);
doWork = isDone(result);
}
How can I rewrite this for efficient asynchronous execution, assuming all functions are thread-safe and independent of previous iterations, and that the work will probably require more iterations than the maximum number of allowable threads?
The problem here is we don't know how many iterations are required in advance so we can't make a dispatch_group or use dispatch_apply.
This is my first attempt, but it looks a bit ugly to me because of the arbitrarily chosen values and the sleeping:
int thread_count = 0;
bool doWork = true;
int max_threads = 20; // arbitrarily chosen number

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

while (doWork)
{
    if (thread_count < max_threads)
    {
        dispatch_async(queue, ^{
            Config myconfig = generateConfigurationForTesting();
            Result myresult = executeWork(myconfig);
            dispatch_async(queue, ^{ checkResult(myresult); });
        });
        thread_count++;
    }
    else
        usleep(100); // don't consume too much CPU
}

void checkResult(Result value)
{
    if (value == good) doWork = false;
    thread_count--;
}
Based on your description, it sounds like generateConfigurationForTesting is some kind of randomization technique or other generator that can produce a near-infinite number of configurations (hence your comment that you don't know ahead of time how many iterations you will need). With that as an assumption, you are basically stuck with the model you've created, since your executor needs to be limited by some reasonable assumptions about the queue and you don't want to over-generate, as that would just extend the length of the run after you have succeeded in finding value == good measurements.
I would suggest you consider using a queue (or OSAtomicIncrement* and OSAtomicDecrement*) to protect access to thread_count and doWork. As it stands, the thread_count increment and decrement happen on two different queues (the main queue for the main thread and the default queue for the background task) and thus could simultaneously increment and decrement the thread count. This could lead to an undercount (which would cause more threads to be created than you expect) or an overcount (which would cause you to never complete your task).
Another option for making this look a little nicer would be to have checkResult add new elements to the queue if value != good. This way, you load up the initial elements of the queue using dispatch_apply(20, queue, ^{ ... }) and you don't need thread_count at all. The first 20 will be added using dispatch_apply (or whatever count dispatch_apply feels is appropriate for your configuration), and then each time checkResult is called you can either set doWork = false or add another operation to the queue.
dispatch_apply() works for this: just pass ncpu as the number of iterations (apply never uses more than ncpu worker threads) and keep each instance of your worker block running for as long as there is more work to do (i.e. loop back to generateConfigurationForTesting() unless !doWork).
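A rough sketch of that dispatch_apply approach (hedged: Config, Result, and the worker functions are the question's hypothetical API, obtaining ncpu via sysconf is one plausible choice, and the volatile flag is a simplification; a production version would use an atomic flag):

#include <dispatch/dispatch.h>
#include <stdbool.h>
#include <unistd.h>

typedef int Config;   /* placeholders for the question's opaque types */
typedef int Result;
Config generateConfigurationForTesting(void);
Result executeWork(Config config);
bool isDone(Result result);

void run_until_done(void)
{
    size_t ncpu = (size_t)sysconf(_SC_NPROCESSORS_ONLN);
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    __block volatile bool doWork = true;   /* shared stop flag */

    /* dispatch_apply runs ncpu worker blocks and waits for all of them. */
    dispatch_apply(ncpu, queue, ^(size_t i) {
        while (doWork) {
            Config config = generateConfigurationForTesting();
            Result result = executeWork(config);
            if (isDone(result))
                doWork = false;   /* the other workers drain shortly after */
        }
    });
}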

Strange behavior of printk in linux kernel module

I am writing a code for linux kernel module and experiencing a strange behavior in it.
Here is my code:
#include <linux/module.h>
#include <linux/kthread.h>

int data = 0;

int threadfn1(void *arg)
{
    int j;
    for (j = 0; j < 10; j++)
        printk(KERN_INFO "I AM THREAD 1 %d\n", j);
    data++;
    return 0;
}

int threadfn2(void *arg)
{
    int j;
    for (j = 0; j < 10; j++)
        printk(KERN_INFO "I AM THREAD 2 %d\n", j);
    data++;
    return 0;
}

static int __init abc_init(void)
{
    struct task_struct *t1 = kthread_run(threadfn1, NULL, "thread1");
    struct task_struct *t2 = kthread_run(threadfn2, NULL, "thread2");
    while (1)
    {
        printk("debug\n"); // runs ok
        if (data >= 2)
        {
            kthread_stop(t1);
            kthread_stop(t2);
            break;
        }
    }
    printk(KERN_INFO "HELLO WORLD\n");
    return 0;
}
Basically I am trying to wait for the threads to finish and then print something afterwards.
The above code does achieve that goal, but only with the printk("debug\n") line left in. As soon as I comment out printk("debug\n"); to run the code without debugging and load the module through the insmod command, the module hangs, as if it were stuck in an infinite loop. I don't understand why printk affects my code in such a big way.
Any help would be appreciated.
You're not synchronizing access to the data variable. What happens is that the compiler generates an infinite loop. Here is why:
while (1)
{
    if (data >= 2)
    {
        kthread_stop(t1);
        kthread_stop(t2);
        break;
    }
}
The compiler can detect that the value of data never changes within the while loop. Therefore it can move the check out of the loop completely, and you end up with a simple
while (1) {}
If you insert printk, the compiler has to assume that the global variable data may change (after all, the compiler has no idea what printk does in detail), so your code starts to work again (in an undefined-behaviour kind of way...).
How to fix this:
Use proper thread synchronization primitives. If you wrap the accesses to data in a code section protected by a mutex, the code will work. You could also replace the variable data with a counting semaphore.
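A minimal sketch of that mutex fix (hedged: the lock name is illustrative; the API is from <linux/mutex.h>, and the snippets replace the corresponding lines in the module above):

#include <linux/mutex.h>

static DEFINE_MUTEX(data_lock);

/* in threadfn1()/threadfn2(), instead of a bare data++: */
mutex_lock(&data_lock);
data++;
mutex_unlock(&data_lock);

/* in abc_init()'s loop, instead of reading data directly: */
mutex_lock(&data_lock);
finished = (data >= 2);   /* 'finished' would be a local in abc_init() */
mutex_unlock(&data_lock);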
Edit:
This link explains how locking in the linux-kernel works:
http://www.linuxgrill.com/anonymous/fire/netfilter/kernel-hacking-HOWTO-5.html
With the call to printk() removed, the compiler optimises the loop into while (1);. When you add the call to printk(), the compiler cannot be sure that data is unchanged, and so it checks the value each time through the loop.
You can insert a barrier into the loop, which forces the compiler to re-evaluate data on each iteration, e.g.:
while (1) {
    if (data >= 2) {
        kthread_stop(t1);
        kthread_stop(t2);
        break;
    }
    barrier();
}
Maybe data should be declared volatile? It could be that the compiler is not going to memory to get data in the loop.
Nils Pipenbrinck's answer is spot on. I'll just add some pointers.
Rusty's Unreliable Guide to Kernel Locking (every kernel hacker should read this one).
Goodbye semaphores?, The mutex API (lwn.net articles on the new mutex API introduced in early 2006, before that the Linux kernel used semaphores as mutexes).
Also, since your shared data is a simple counter, you can just use the atomic API (basically, declare your counter as atomic_t and access it using atomic_* functions).
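A hedged sketch of that atomic_t variant (snippets again replace the corresponding lines in the question's module):

#include <linux/atomic.h>

static atomic_t data = ATOMIC_INIT(0);

/* in the thread functions, instead of data++: */
atomic_inc(&data);

/* in the waiting loop, instead of if (data >= 2): */
if (atomic_read(&data) >= 2) {
    /* stop the threads and break */
}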
Volatile might not always be a bad idea. One needs to separate the case where volatile is needed from the case where a mutual exclusion mechanism is needed; it is suboptimal when one mechanism is used or misused in place of the other. In the above case, I would suggest that the optimal solution needs both mechanisms: a mutex to provide mutual exclusion, and volatile to indicate to the compiler that data must be read fresh from memory. Otherwise, at higher optimization levels (-O2, -O3), the compiler might inadvertently leave out the needed code.

What is a race condition?

When writing multithreaded applications, one of the most common problems experienced is race conditions.
My questions to the community are:
What is the race condition?
How do you detect them?
How do you handle them?
Finally, how do you prevent them from occurring?
A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data.
Problems often occur when one thread does a "check-then-act" (e.g. "check" if the value is X, then "act" to do something that depends on the value being X) and another thread does something to the value in between the "check" and the "act". E.g:
if (x == 5) // The "Check"
{
    y = x * 2; // The "Act"
    // If another thread changed x in between "if (x == 5)" and "y = x * 2" above,
    // y will not be equal to 10.
}
The point being, y could be 10, or it could be anything, depending on whether another thread changed x in between the check and act. You have no real way of knowing.
In order to prevent race conditions from occurring, you would typically put a lock around the shared data to ensure only one thread can access the data at a time. This would mean something like this:
// Obtain lock for x
if (x == 5)
{
    y = x * 2; // Now, nothing can change x until the lock is released.
               // Therefore y = 10
}
// release lock for x
A "race condition" exists when multithreaded (or otherwise parallel) code that would access a shared resource could do so in such a way as to cause unexpected results.
Take this example:
for ( int i = 0; i < 10000000; i++ )
{
    x = x + 1;
}
If you had 5 threads executing this code at once, the value of x WOULD NOT end up being 50,000,000. It would in fact vary with each run.
This is because, in order for each thread to increment the value of x, they have to do the following: (simplified, obviously)
Retrieve the value of x
Add 1 to this value
Store this value to x
Any thread can be at any step in this process at any time, and they can step on each other when a shared resource is involved. The state of x can be changed by another thread during the time between when x is read and when it is written back.
Let's say a thread retrieves the value of x, but hasn't stored it yet. Another thread can also retrieve the same value of x (because no thread has changed it yet) and then they would both be storing the same value (x+1) back in x!
Example:
Thread 1: reads x, value is 7
Thread 1: adds 1 to x, value is now 8
Thread 2: reads x, value is 7
Thread 1: stores 8 in x
Thread 2: adds 1 to x, value is now 8
Thread 2: stores 8 in x
Race conditions can be avoided by employing some sort of locking mechanism before the code that accesses the shared resource:
for ( int i = 0; i < 10000000; i++ )
{
    // lock x
    x = x + 1;
    // unlock x
}
Here, the answer comes out as 50,000,000 every time.
For more on locking, search for: mutex, semaphore, critical section, shared resource.
What is a Race Condition?
You are planning to go to a movie at 5 pm. You inquire about the availability of the tickets at 4 pm. The representative says that they are available. You relax and reach the ticket window 5 minutes before the show. I'm sure you can guess what happens: it's a full house. The problem here was in the duration between the check and the action. You inquired at 4 and acted at 5. In the meantime, someone else grabbed the tickets. That's a race condition - specifically a "check-then-act" scenario of race conditions.
How do you detect them?
Religious code review, multi-threaded unit tests. There is no shortcut. There are a few Eclipse plugins emerging for this, but nothing stable yet.
How do you handle and prevent them?
The best thing would be to create side-effect-free and stateless functions, and to use immutables as much as possible. But that is not always possible. So using java.util.concurrent.atomic, concurrent data structures, proper synchronization, and actor-based concurrency will help.
The best resource for concurrency is JCIP. You can also get some more details on above explanation here.
There is an important technical difference between race conditions and data races. Most answers seem to make the assumption that these terms are equivalent, but they are not.
A data race occurs when two instructions access the same memory location, at least one of these accesses is a write, and there is no happens-before ordering among these accesses. Now what constitutes a happens-before ordering is subject to a lot of debate, but in general unlock-lock pairs on the same lock variable and wait-signal pairs on the same condition variable induce a happens-before order.
A race condition is a semantic error. It is a flaw that occurs in the timing or the ordering of events that leads to erroneous program behavior.
Many race conditions can be (and in fact are) caused by data races, but this is not necessary. As a matter of fact, data races and race conditions are neither the necessary, nor the sufficient condition for one another. This blog post also explains the difference very well, with a simple bank transaction example. Here is another simple example that explains the difference.
Now that we nailed down the terminology, let us try to answer the original question.
Given that race conditions are semantic bugs, there is no general way of detecting them. This is because there is no way of having an automated oracle that can distinguish correct vs. incorrect program behavior in the general case. Race detection is an undecidable problem.
On the other hand, data races have a precise definition that does not necessarily relate to correctness, and therefore one can detect them. There are many flavors of data race detectors (static/dynamic data race detection, lockset-based data race detection, happens-before based data race detection, hybrid data race detection). A state of the art dynamic data race detector is ThreadSanitizer which works very well in practice.
Handling data races in general requires some programming discipline to induce happens-before edges between accesses to shared data (either during development, or once they are detected using the above-mentioned tools). This can be done through locks, condition variables, semaphores, etc. However, one can also employ different programming paradigms like message passing (instead of shared memory) that avoid data races by construction.
A sort-of-canonical definition is "when two threads access the same location in memory at the same time, and at least one of the accesses is a write." In this situation the "reader" thread may get the old value or the new value, depending on which thread "wins the race." This is not always a bug; in fact, some really hairy low-level algorithms do this on purpose, but it should generally be avoided. @Steve Gury gives a good example of when it might be a problem.
A race condition is a situation in concurrent programming where two concurrent threads or processes compete for a resource, and the resulting final state depends on who gets the resource first.
A race condition is a kind of bug that happens only under certain timing conditions.
Example:
Imagine you have two threads, A and B.
In Thread A:
if( object.a != 0 )
    object.avg = total / object.a
In Thread B:
object.a = 0
If thread A is preempted just after having checked that object.a is not zero, B will set a to 0, and when thread A regains the processor, it will perform a divide by zero.
This bug only happens when thread A is preempted just after the if statement; it's very rare, but it can happen.
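A hedged sketch of one fix, assuming pthreads and that object and total are the shared state (the point is simply that the check and the use must happen under the same lock that guards the write in thread B):

#include <pthread.h>

pthread_mutex_t obj_lock = PTHREAD_MUTEX_INITIALIZER;

/* Thread A: check and use under one critical section */
pthread_mutex_lock(&obj_lock);
if (object.a != 0)
    object.avg = total / object.a;
pthread_mutex_unlock(&obj_lock);

/* Thread B: the write takes the same lock */
pthread_mutex_lock(&obj_lock);
object.a = 0;
pthread_mutex_unlock(&obj_lock);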
Many answers in this discussion explain what a race condition is. I'll try to explain why the term is called a race condition in the software industry.
Why is it called a race condition?
A race condition is not only related to software; it is related to hardware too. Actually, the term was initially coined by the hardware industry.
According to Wikipedia:
The term originates with the idea of two signals racing each other to influence the output first.
Race condition in a logic circuit (diagram omitted):
The software industry took this term without modification, which makes it a little bit difficult to understand.
You need to do some replacement to map it to the software world:
"two signals" ==> "two threads"/"two processes"
"influence the output" ==> "influence some shared state"
So a race condition in the software industry means "two threads"/"two processes" racing each other to "influence some shared state", where the final result of the shared state depends on some subtle timing difference, which could be caused by the thread/process launch order, thread/process scheduling, etc.
Race conditions occur in multi-threaded applications or multi-process systems. A race condition, at its most basic, is anything that makes the assumption that two things not in the same thread or process will happen in a particular order, without taking steps to ensure that they do. This happens commonly when two threads are passing messages by setting and checking member variables of a class both can access. There's almost always a race condition when one thread calls sleep to give another thread time to finish a task (unless that sleep is in a loop, with some checking mechanism).
Tools for preventing race conditions depend on the language and OS, but some common ones are mutexes, critical sections, and signals. Mutexes are good when you want to make sure you're the only one doing something. Signals are good when you want to make sure someone else has finished doing something. Minimizing shared resources can also help prevent unexpected behaviors.
Detecting race conditions can be difficult, but there are a couple of signs. Code which relies heavily on sleeps is prone to race conditions, so first check for calls to sleep in the affected code. Adding particularly long sleeps can also be used for debugging to try to force a particular order of events. This can be useful for reproducing the behavior, seeing if you can make it disappear by changing the timing of things, and for testing solutions put in place. The sleeps should be removed after debugging.
The telltale sign of a race condition, though, is an issue that occurs only intermittently on some machines. Common bugs are crashes and deadlocks. With logging, you should be able to find the affected area and work back from there.
Microsoft has actually published a really detailed article on race conditions and deadlocks. The best summary of it is the opening paragraph:
A race condition occurs when two threads access a shared variable at the same time. The first thread reads the variable, and the second thread reads the same value from the variable. Then the first thread and second thread perform their operations on the value, and they race to see which thread can write the value last to the shared variable. The value of the thread that writes its value last is preserved, because the thread is writing over the value that the previous thread wrote.
What is a race condition?
A situation in which a process is critically dependent on the sequence or timing of other events.
For example,
Processor A and processor B both need the same resource for their execution.
How do you detect them?
There are tools to detect race conditions automatically:
Lockset-Based Race Checker
Happens-Before Race Detection
Hybrid Race Detection
How do you handle them?
Race conditions can be handled with mutexes or semaphores. These act as locks that allow a process to acquire a resource based on certain requirements, thereby preventing the race condition.
How do you prevent them from occurring?
There are various ways to prevent race conditions, such as critical-section avoidance. The classic requirements are:
No two processes may be simultaneously inside their critical regions. (Mutual Exclusion)
No assumptions may be made about speeds or the number of CPUs.
No process running outside its critical region may block other processes.
No process should have to wait forever to enter its critical region. (This rules out circular waits: A waiting for B's resources, B waiting for C's, C waiting for A's.)
You can prevent race conditions by using "atomic" classes. The reason is that the thread does not separate the get and set operations; an example is below:
AtomicInteger ai = new AtomicInteger(2);
ai.getAndAdd(5);
As a result, ai will hold 7.
Although two actions were involved (a read and an add), they executed as a single atomic operation that no other thread could interfere with; that means no race conditions!
I made a video that explains this.
Essentially, it is when you have state which is shared across multiple threads, and before the first execution on a given state has completed, another execution starts; the new thread's initial state for the operation is then wrong, because the previous execution has not completed.
Because the initial state of the second execution is wrong, the resulting computation is also wrong, and eventually the second execution will update the final state with the wrong result.
You can view it here.
https://youtu.be/RWRicNoWKOY
Here is the classical bank account balance example, which will help newbies understand threads in Java easily w.r.t. race conditions:
public class BankAccount {
    /**
     * @param args
     */
    int accountNumber;
    double accountBalance;

    public synchronized boolean Deposit(double amount) {
        double newAccountBalance = 0;
        if (amount <= 0) {
            return false;
        } else {
            newAccountBalance = accountBalance + amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }

    public synchronized boolean Withdraw(double amount) {
        double newAccountBalance = 0;
        if (amount > accountBalance) {
            return false;
        } else {
            newAccountBalance = accountBalance - amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }

    public static void main(String[] args) {
        BankAccount b = new BankAccount();
        b.accountBalance = 2000;
        System.out.println(b.Withdraw(3000));
    }
}
Try this basic example for a better understanding of race conditions:
public class ThreadRaceCondition {
    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        Account myAccount = new Account(22222222);

        // Expected deposit: 250
        for (int i = 0; i < 50; i++) {
            Transaction t = new Transaction(myAccount,
                    Transaction.TransactionType.DEPOSIT, 5.00);
            t.start();
        }

        // Expected withdrawal: 50
        for (int i = 0; i < 50; i++) {
            Transaction t = new Transaction(myAccount,
                    Transaction.TransactionType.WITHDRAW, 1.00);
            t.start();
        }

        // Temporary sleep to ensure all threads are completed. Don't use in
        // the real world :-)
        Thread.sleep(1000);

        // Expected account balance is 200
        System.out.println("Final Account Balance: "
                + myAccount.getAccountBalance());
    }
}

class Transaction extends Thread {
    public static enum TransactionType {
        DEPOSIT(1), WITHDRAW(2);

        private int value;

        private TransactionType(int value) {
            this.value = value;
        }

        public int getValue() {
            return value;
        }
    }

    private TransactionType transactionType;
    private Account account;
    private double amount;

    /*
     * If transactionType == 1, deposit; else if transactionType == 2, withdraw
     */
    public Transaction(Account account, TransactionType transactionType,
            double amount) {
        this.transactionType = transactionType;
        this.account = account;
        this.amount = amount;
    }

    public void run() {
        switch (this.transactionType) {
        case DEPOSIT:
            deposit();
            printBalance();
            break;
        case WITHDRAW:
            withdraw();
            printBalance();
            break;
        default:
            System.out.println("NOT A VALID TRANSACTION");
        }
    }

    public void deposit() {
        this.account.deposit(this.amount);
    }

    public void withdraw() {
        this.account.withdraw(amount);
    }

    public void printBalance() {
        System.out.println(Thread.currentThread().getName()
                + " : TransactionType: " + this.transactionType + ", Amount: "
                + this.amount);
        System.out.println("Account Balance: "
                + this.account.getAccountBalance());
    }
}

class Account {
    private int accountNumber;
    private double accountBalance;

    public int getAccountNumber() {
        return accountNumber;
    }

    public double getAccountBalance() {
        return accountBalance;
    }

    public Account(int accountNumber) {
        this.accountNumber = accountNumber;
    }

    // If this method is not synchronized, you will see the race condition.
    // Remove the synchronized keyword to see it.
    public synchronized boolean deposit(double amount) {
        if (amount < 0) {
            return false;
        } else {
            accountBalance = accountBalance + amount;
            return true;
        }
    }

    // If this method is not synchronized, you will see the race condition.
    // Remove the synchronized keyword to see it.
    public synchronized boolean withdraw(double amount) {
        if (amount > accountBalance) {
            return false;
        } else {
            accountBalance = accountBalance - amount;
            return true;
        }
    }
}
You don't always want to get rid of a race condition. If you have a flag which can be read and written by multiple threads, and this flag is set to 'done' by one thread so that the other threads stop processing when the flag is set to 'done', you don't want that "race condition" to be eliminated. In fact, this one can be referred to as a benign race condition.
However, a tool for detecting race conditions will still spot it as a harmful race condition.
More details on race conditions here: http://msdn.microsoft.com/en-us/magazine/cc546569.aspx.
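A hedged sketch of that benign-flag pattern using C11 atomics (declaring the flag _Atomic keeps the behavior well-defined and keeps race detectors quiet; all names here are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool done = false;   /* shared stop flag */

/* worker threads: keep processing until someone sets the flag */
void worker_loop(void)
{
    while (!atomic_load_explicit(&done, memory_order_relaxed)) {
        /* ... do a unit of work ... */
    }
}

/* controlling thread: tell everyone to stop */
void signal_done(void)
{
    atomic_store_explicit(&done, true, memory_order_relaxed);
}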
Consider an operation which has to display the count as soon as the count gets incremented, i.e., as soon as CounterThread increments the value, DisplayThread needs to display the recently updated value.
int i = 0;
Output:
CounterThread -> i = 1
DisplayThread -> i = 1
CounterThread -> i = 2
CounterThread -> i = 3
CounterThread -> i = 4
DisplayThread -> i = 4
Here CounterThread gets the lock frequently and updates the value before DisplayThread displays it, so some updates are missed. A race condition exists here, and it can be solved by using synchronization.
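A hedged pthreads sketch of one way to implement that strict increment-then-display handoff (names and structure are illustrative, not the answer's original code; a condition variable makes each thread wait for the other):

#include <stdio.h>
#include <pthread.h>

int i = 0;
int displayed = 1;   /* has the current value of i been displayed yet? */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t changed = PTHREAD_COND_INITIALIZER;

void *counter_thread(void *arg)
{
    for (int n = 0; n < 4; n++) {
        pthread_mutex_lock(&lock);
        while (!displayed)                /* wait until the last value was shown */
            pthread_cond_wait(&changed, &lock);
        i++;
        displayed = 0;
        pthread_cond_signal(&changed);    /* wake the display thread */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

void *display_thread(void *arg)
{
    for (int n = 0; n < 4; n++) {
        pthread_mutex_lock(&lock);
        while (displayed)                 /* wait for a fresh value */
            pthread_cond_wait(&changed, &lock);
        printf("DisplayThread -> i = %d\n", i);
        displayed = 1;
        pthread_cond_signal(&changed);    /* let the counter advance */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t c, d;
    pthread_create(&c, NULL, counter_thread, NULL);
    pthread_create(&d, NULL, display_thread, NULL);
    pthread_join(c, NULL);
    pthread_join(d, NULL);
    return 0;
}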
A race condition is an undesirable situation that occurs when two or more processes can access and change shared data at the same time. It occurs because there are conflicting accesses to a resource. The critical-section problem may cause a race condition. To solve it, we must ensure that only one process at a time executes the critical section.
