public class SetGetFail implements Runnable {
    // Every thread is assigned a number and has a reference to a SharedObject.
    // In main(), a single SharedObject is passed to all threads.
    int number;
    SharedObject shared;

    public SetGetFail(int no, SharedObject so) {
        number = no;
        shared = so;
    }

    public static void main(String[] args) {
        SharedObject shared = new SharedObject(0);
        new Thread(new SetGetFail(1, shared)).start();
        new Thread(new SetGetFail(2, shared)).start();
    }

    synchronized public void run() {
        setGet();
    }

    synchronized void setGet() {
        // Repeatedly assign this thread's own number to the shared
        // object and race to read that number again.
        // Exit if some other thread modified the number in between.
        while (true) {
            shared.setNo(number);
            int no = shared.getNo();
            if (no != number) {
                System.out.println("Thread " + number + " sees " + no);
                System.exit(no);
            }
        }
    }
}
So my question about this code is: why doesn't "synchronized" prevent races between these threads?
Thread 2 should be blocked while Thread 1 is setting and getting the value in shared, but the result is still "Thread 2 sees 1".
Change the code as shown below. I have added an extra log statement just to show that it is actually running. Now let me explain the issue. You have declared the method that modifies the shared state like this:
synchronized void setGet() {
    // ...
}
A synchronized instance method locks on the instance it is called on, and each thread here has its own SetGetFail instance. So each thread acquires its own lock, and both can modify the shared data at the same time. That's why your thread 2 sees the value 1 set by the other thread. To guard against this you need a lock that is common to both the thread-1 and thread-2 instances: an explicit lock object shared by both threads, which you then synchronize on. That is what I have done to solve the issue.
private static final Object lock = new Object();

synchronized (lock) {
    // ...
}
public class SetGetFail implements Runnable {
    // Every thread is assigned a number and has a reference to a SharedObject.
    // In main(), a single SharedObject is passed to all threads.
    int number;
    SharedObject shared;
    private static final Object lock = new Object();

    public SetGetFail(int no, SharedObject so) {
        number = no;
        shared = so;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedObject shared = new SharedObject(0);
        new Thread(new SetGetFail(1, shared), "One").start();
        new Thread(new SetGetFail(2, shared), "Two").start();
    }

    synchronized public void run() {
        setGet();
    }

    void setGet() {
        // Repeatedly assign this thread's own number to the shared
        // object and race to read that number again.
        // Exit if some other thread modified the number in between.
        while (true) {
            synchronized (lock) {
                shared.setNo(number);
                int no = shared.getNo();
                if (no != number) {
                    System.out.println("Thread " + number + " sees " + no);
                    System.exit(no);
                }
                System.out.println("Thread " + number + " sees " + no);
            }
        }
    }
}
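An equivalent fix, shown as a minimal drop-in replacement for setGet (a sketch under the same assumptions about SharedObject, i.e. only the setNo/getNo methods used above), is to synchronize on the shared object itself, since it is the one instance both threads already hold:

void setGet() {
    while (true) {
        synchronized (shared) {
            // Both threads lock the same SharedObject instance, so the
            // set-then-get pair below is atomic with respect to the other thread.
            shared.setNo(number);
            int no = shared.getNo();
            if (no != number) {
                System.out.println("Thread " + number + " sees " + no);
                System.exit(no);
            }
        }
    }
}

The effect is the same as the static lock: both threads now contend for a single monitor, so the set-then-get pair cannot be interleaved by the other thread.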
I came across the following excerpt while reading about the visibility guarantees the JVM provides when reading volatile variables:
"When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of ALL variables that were visible to A prior to writing to the volatile variable become visible to B AFTER reading the volatile variable."
I have a question about this JVM guarantee. Consider the set of classes below:
public class Test {
    public static void main(String[] args) throws InterruptedException {
        POJO p = new POJO();
        new Th1(p).start();
        new Th2(p).start();
    }
}

public class Th1 extends Thread {
    private POJO p1 = null;

    public Th1(POJO obj) {
        p1 = obj;
    }

    @Override
    public void run() {
        p1.a = 10;                                   // t = 1
        p1.b = 10;                                   // t = 2
        p1.c = 10;                                   // t = 5
        System.out.println("p1.b val: " + p1.b);     // t = 8
        System.out.println("Thread Th1 finished");   // t = 9
    }
}

public class Th2 extends Thread {
    private POJO p2 = null;

    public Th2(POJO obj) {
        p2 = obj;
    }

    @Override
    public void run() {
        p2.a = 30;                                   // t = 3
        p2.b = 30;                                   // t = 4
        int x = p2.c;                                // t = 6
        System.out.println("p2.b value: " + p2.b);   // t = 7
    }
}

public class POJO {
    int a = 1;
    int b = 1;
    volatile int c = 1;
}
Imagine the two threads Th1 and Th2 run on separate CPUs, and the order in which their instructions execute is indicated by the comment on each line (in their run methods). My question is this:
When "int x = p2.c;" executes at t = 6, the variables visible to thread Th2 should be refreshed from main memory, as per the paragraph above. As I understand it, main memory would hold all of Th1's writes at this point. What value will p2.b show when it is printed at t = 7?
Will p2.b show the value 10, because its value was refreshed by the read of the volatile variable p2.c?
Or will it somehow retain the value 30?
For your code, p2.b is not guaranteed to be 10, nor is it guaranteed to be 30; the two writes race with each other.
"When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of ALL variables that were visible to A prior to writing to the volatile variable become visible to B AFTER reading the volatile variable."
Your Th2 read of p2.c is not guaranteed to be done after the write of p1.c in Th1.
For the specific order you discussed, the read of p2.c in Th2 will not revert the value of p2.b to 10.
There is no happens-before edge between the write of a and the read of a. Since they are conflicting actions (at least one of them is a write) on the same variable, there is a data race and, as a consequence, the program's behavior is not well defined.
I think the following example better explains the behavior you are asking about:
public class Test {
    public static void main(String[] args) throws InterruptedException {
        POJO p = new POJO();
        new Th1(p).start();
        new Th2(p).start();
    }
}

public class Th1 extends Thread {
    private POJO p1 = null;

    public Th1(POJO obj) {
        p1 = obj;
    }

    @Override
    public void run() {
        p1.a = 1;
        p1.b = 1;
    }
}

public class Th2 extends Thread {
    private POJO p2 = null;

    public Th2(POJO obj) {
        p2 = obj;
    }

    @Override
    public void run() {
        if (p2.b == 1)
            System.out.println("a must be 1, a=" + p2.a);
    }
}

public class POJO {
    int a = 0;
    volatile int b = 0;
}
There is a happens-before edge between the write of a and the write of b (program order rule).
There is a happens-before edge between the write of b and a subsequent read of b that sees that write (volatile variable rule).
There is a happens-before edge between the read of b and the read of a (program order rule).
Since the happens-before relation is transitive, there is a happens-before edge between the write of a and the read of a. So if the second thread sees b == 1, it is guaranteed to also see the a = 1 written by the first thread.
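For completeness, here is a small runnable sketch along the same lines (the class and field names are my own, not from the code above): the writer publishes a plain field and then writes a volatile flag, and the reader checks the flag before reading the plain field, which gives exactly the write-of-a happens-before read-of-a chain described above:

public class VolatilePublish {
    static int a = 0;                    // plain field, published via the volatile write
    static volatile boolean ready = false;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            a = 42;                      // happens-before the volatile write (program order)
            ready = true;                // volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }           // spin until the volatile read sees the write
            // The volatile read of 'ready' happens-before this read of 'a',
            // so by transitivity the reader is guaranteed to see a == 42.
            System.out.println("a = " + a);
        });
        writer.start();
        reader.start();
    }
}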
I've been looking at a solution to the dining philosophers problem on Wikipedia.
The resource hierarchy solution
I understand how it works and how breaking the circular structure prevents deadlock, but how does the solution prevent starvation? Couldn't one or a few threads keep making progress while a few others never get to?
If not, what prevents this from happening?
The implementation:
public class DinningphilMain {
    public static void main(String[] args) throws InterruptedException {
        int numPhil = 3;
        Philosopher[] phil = new Philosopher[numPhil];
        Fork[] forkArr = new Fork[numPhil];
        for (int i = 0; i < numPhil; i++) {
            forkArr[i] = new Fork(i);
        }
        for (int i = 0; i < numPhil - 1; i++) {
            phil[i] = new Philosopher(i, forkArr[i], forkArr[i + 1]);
        }
        phil[numPhil - 1] = new Philosopher(numPhil - 1, forkArr[0], forkArr[numPhil - 1]);
        for (Philosopher p : phil)
            new Thread(p).start();
    }
}
This is the Philosopher class:
import java.util.Random;

public class Philosopher implements Runnable {
    int sleep = 1000;
    int id;
    int eatTime = 500;
    Random rand = new Random();
    Fork left;
    Fork right;

    public Philosopher(int id, Fork left, Fork right) {
        this.id = id;
        this.left = left;
        this.right = right;
    }

    private void think() {
        System.out.println("Philosopher " + id + " is thinking");
        try {
            int thinkingTime = rand.nextInt(sleep);
            Thread.sleep(thinkingTime);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private void getForks() {
        System.out.println("Philosopher " + id + " is picking up forks");
        try {
            left.get();
            right.get();
            System.out.println("Philosopher " + id + " has both forks");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private void releaseForks() {
        System.out.println("Philosopher " + id + " is putting down forks");
        left.release();
        right.release();
    }

    private void eat() {
        System.out.println("Philosopher " + id + " is eating");
        try {
            Thread.sleep(eatTime);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
        while (true) {
            getForks();
            eat();
            releaseForks();
            think();
        }
    }
}
This is the Fork class:
public class Fork {
    private int id;
    private Thread thread;

    public Fork(int id) {
        this.id = id;
        thread = null;
    }

    public int getId() {
        return id;
    }

    public synchronized void get() throws InterruptedException {
        if (thread != null)
            this.wait();
        thread = Thread.currentThread();
    }

    public synchronized void release() {
        if (thread == Thread.currentThread())
            thread = null;
        this.notify();
    }
}
The resource hierarchy solution solves deadlock, but it does not solve starvation.
In order to prevent starvation you need either:
A guarantee from the thread system that threads blocked on monitors and condition variables are unblocked in the same order in which they blocked, or
To enforce it yourself. In other words, you must guarantee that no philosopher can starve. For example, suppose you maintain a queue of philosophers. When a philosopher is hungry, he/she is put onto the tail of the queue. A philosopher may eat only if he/she is at the head of the queue and the chopsticks are free.
This is taken from the C560 lecture notes -- Dining Philosophers.
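To make the queue idea concrete, here is a minimal sketch (the Waiter class and its method names are made up for illustration, not part of the code above): a single monitor tracks which forks are taken and keeps a FIFO queue of hungry philosophers, and a philosopher may pick up its forks only when it is at the head of the queue and both forks are free:

import java.util.ArrayDeque;
import java.util.Queue;

class Waiter {
    private final boolean[] forkTaken;
    private final Queue<Integer> hungry = new ArrayDeque<>();

    Waiter(int numForks) {
        forkTaken = new boolean[numForks];
    }

    synchronized void takeForks(int id, int left, int right) throws InterruptedException {
        hungry.add(id);
        // Wait until this philosopher is first in line and both forks are free.
        while (hungry.peek() != id || forkTaken[left] || forkTaken[right]) {
            wait();
        }
        hungry.remove();
        forkTaken[left] = true;
        forkTaken[right] = true;
        notifyAll(); // the next hungry philosopher may now be eligible
    }

    synchronized void putForks(int left, int right) {
        forkTaken[left] = false;
        forkTaken[right] = false;
        notifyAll();
    }
}

Because hungry philosophers are served strictly in arrival order, none of them can be overtaken forever, at the cost of some concurrency. Alternatively, java.util.concurrent.locks.ReentrantLock can be created with fairness enabled (new ReentrantLock(true)), which grants the lock to waiting threads in roughly arrival order.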
The short answer is that it doesn't. The dining philosophers problem is used to discuss the difficulties of concurrency; it is not in itself a solution to anything (hence why it's called a problem).
The Wikipedia page for the dining philosophers problem itself shows a few implementations. The first one shows how a poor implementation of a solution will cause starvation.
https://en.wikipedia.org/wiki/Dining_philosophers_problem
How can I switch between threads created from a thread pool? I have many threads created, but I want only one thread to print something while the others are in a wait state. After printing, I want this thread to go into a wait state and some other thread to acquire the lock and print just like the previous thread did, and then go into a wait state itself. This keeps happening again and again until some condition is satisfied. Threads acquire the lock in a random order; it does not need to be in any particular order. If possible, you can later explain how I could achieve it in order, maybe using a queue.
I am new to threads, so what I was trying to achieve is below. I know it's wrong, but I wanted you to give a solution and a little explanation in terms of what I want to achieve.
public class Processor implements Runnable {
    private int id;

    public Processor(int id) {
        this.id = id;
    }

    @Override
    public void run() {
        int count = 0;
        System.out.println("Starting process id: " + id);
        while (count < 100) {
            System.out.println("Pausing process id: " + id);
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            notifyAll();
            System.out.println("Resuming process id: " + id);
            count++;
        }
        System.out.println("Completed process id: " + id);
    }
}
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Test {
    @SuppressWarnings("resource")
    public static void main(String[] args) {
        Scanner reader = new Scanner(System.in);
        System.out.print("Enter number of processes you want to create: ");
        int n = reader.nextInt();
        ExecutorService executor = Executors.newFixedThreadPool(n);
        for (int i = 1; i <= n; i++) {
            executor.submit(new Processor(i));
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.MINUTES);
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        }
    }
}
It is not possible to programmatically control the order in which threads are scheduled to run. Thread priorities and the order of execution are determined by the operating system's thread-scheduling implementation.
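That said, the hand-off described in the question (one thread prints, then yields to some other waiting thread, in no particular order) can be sketched with a single shared monitor. The class below is only an illustration with made-up names, not a definitive solution:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TurnTaker implements Runnable {
    private static final Object turnLock = new Object();
    private final int id;

    public TurnTaker(int id) {
        this.id = id;
    }

    @Override
    public void run() {
        synchronized (turnLock) {
            for (int turn = 0; turn < 5; turn++) {
                // Only the worker holding turnLock can print.
                System.out.println("Worker " + id + " takes turn " + turn);
                turnLock.notifyAll();       // wake the other waiting workers
                try {
                    // Release the lock and wait; the timeout lets the last
                    // remaining worker finish once the others have exited.
                    turnLock.wait(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        for (int i = 1; i <= 3; i++) {
            executor.submit(new TurnTaker(i));
        }
        executor.shutdown();
    }
}

Which waiting thread reacquires the lock next is still up to the scheduler, so the order is effectively random. Enforcing a strict order would require an explicit turn variable or queue that each thread checks inside its wait loop.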
class Program
{
    static void Main(string[] args)
    {
        Thread thread1 = new Thread((ThreadStart)DLockSample.FunctionA);
        Thread thread2 = new Thread((ThreadStart)DLockSample.FunctionB);
        thread1.Start();
        thread2.Start();
    }
}

public class DLockSample
{
    static object object1 = new object();
    static object object2 = new object();

    public static void FunctionA()
    {
        lock (object1)
        {
            Thread.Sleep(1000);
            lock (object2)
            {
                Thread.Sleep(1000);
                Console.WriteLine("heart beat - object2");
            }
        }
    }

    public static void FunctionB()
    {
        lock (object2)
        {
            lock (object1)
            {
                Thread.Sleep(1000);
                Console.WriteLine("heart beat - object1");
            }
        }
    }
}
Always acquire the locks in the same order in all threads (see also "hierarchy of critical sections"). I.e., FunctionB needs to be:
public static void FunctionB()
{
    lock (object1)
    {
        lock (object2)
            ...
That's a pretty abstract problem to fix. Just a few tips:
Always lock on objects in the same order.
If it's impossible to lock in the same order, use the objects' fields to define an order (for example, if A.Id > B.Id, then always lock on A before B).
Below is my simple code to start 5 threads; each one calls a WCF service which returns the value sent in. My problem is that this:
public void clien_GetDataCompleted(object sender, GetDataCompletedEventArgs e)
{
    lock (sync)
    {
        count += e.Result;
    }
}
works OK and increments the count, but how do I capture when all the threads have completed? Does anybody have simple example code on how to call multiple WCF services that use async methods?
public partial class Threading : Form
{
    public int count;
    ServiceReference1.Service1Client clien = new ServiceReference1.Service1Client();

    public Threading()
    {
        InitializeComponent();
    }

    private void GetData()
    {
        clien.GetDataAsync(1);
    }

    public void DisplayResults()
    {
        MessageBox.Show(count.ToString());
    }

    private object sync = new object();

    public void clien_GetDataCompleted(object sender, GetDataCompletedEventArgs e)
    {
        lock (sync)
        {
            count += e.Result;
        }
    }

    public List<Thread> RunThreads(int count, ThreadStart start)
    {
        List<Thread> list = new List<Thread>();
        for (int i = 0; i <= count - 1; i++)
        {
            Thread thread = new Thread(start);
            thread.Start();
            list.Add(thread);
        }
        return list;
    }

    private void button1_Click_1(object sender, EventArgs e)
    {
        clien.GetDataCompleted += new EventHandler<GetDataCompletedEventArgs>(clien_GetDataCompleted);
        ThreadStart WcfCall = new ThreadStart(GetData);
        IList<Thread> threads = RunThreads(5, WcfCall);
    }
}
many thanks
If you are using .NET 4.0 you can use the Task Parallel Library (TPL) and use Tasks instead of Threads. Tasks give you more control over the flow. With tasks you can do something like:
// Wait for all the tasks to finish.
Task.WaitAll(tasks);
There is an example of how to use Tasks and wait for all tasks to finish here.
I have implemented the solution using Tasks; the code is below and it works well. Let me know if there is any improvement I could make.
public partial class Tasks : Form
{
    static ServiceReference1.Service1Client clien = new ServiceReference1.Service1Client();
    int count = 0;

    public Tasks()
    {
        InitializeComponent();
    }

    // Delegate that starts the async WCF call for the given index and returns the index
    Func<object, int> action = (object obj) =>
    {
        int i = (int)obj;
        clien.GetDataAsync(i);
        Console.WriteLine("Task={0}, i={1}, Thread={2}", Task.CurrentId, i, Thread.CurrentThread.ManagedThreadId);
        return i;
    };

    public void clien_GetDataCompleted(object sender, GetDataCompletedEventArgs e)
    {
        count += e.Result;
    }

    private void button1_Click(object sender, EventArgs e)
    {
        const int n = 5;
        // Hook up the async callback delegate from the WCF client.
        clien.GetDataCompleted += new EventHandler<GetDataCompletedEventArgs>(clien_GetDataCompleted);
        // Construct started tasks.
        Task<int>[] tasks = new Task<int>[n];
        for (int i = 0; i < n; i++)
        {
            tasks[i] = Task<int>.Factory.StartNew(action, i);
        }
        try
        {
            // Wait for all the tasks to finish.
            Task.WaitAll(tasks);
            MessageBox.Show(count.ToString());
        }
        catch
        {
        }
    }
}
cheers