Does this barbershop semaphore solution have a bug? - multithreading

I have come up with a pseudocode solution to the barbershop problem.
Requirements: a customer can call getHairCut() and a barber can call cutHair(), and there are n seats available.
The barber should sleep until a customer calls getHairCut().
If all n seats are taken, the customer should call balk(), which simply exits that customer's thread.
getHairCut() and cutHair() may run concurrently, but only with one thread of each: at most one customer thread should be in getHairCut() while the (single) barber thread is in cutHair().
There is only one barber.
Here is the solution I came up with. I really cannot format this correctly on my phone, so please bear with me.
// Sem = semaphore (sorry, I'm on my phone)
Int n = maxSeats
Sem mutex = new Sem(1)
Sem hairCutMutex = new Sem(1)
Sem canCutHair = new Sem(0)
Sem finishedHairCut = new Sem(0)

Customer() {
    P(mutex)
    if (n == 0) {
        V(mutex)
        balk()
    }
    V(mutex)
    P(hairCutMutex)
    getHairCut()
    V(canCutHair)
    P(finishedHairCut)
    V(hairCutMutex)
    P(mutex)
    n++
    V(mutex)
}
Barber() {
    while (true) {
        P(canCutHair)
        cutHair()
        V(finishedHairCut)
    }
}

Related

Implementation of Queue is not thread safe

I am trying to implement a thread-safe queue of integers using a Semaphore. It is not thread-safe at the moment. What would I have to add in terms of synchronization to make the queue thread-safe?
I've tried using synchronized blocks on the queue, so that only one thread is allowed in the queue at a time, but this does not seem to work, or I am misusing them. What should I be synchronizing on? I have a separate class with a maintainer thread that is constantly appending and removing.
import java.util.concurrent.Semaphore

class ThreadSafeQueue {
  var queue = List[Int]()
  val semaphore = new Semaphore(0)

  def append(num: Int): Unit = {
    queue = queue ::: List(num)
    semaphore.release()
  }

  def dequeue(): Int = {
    semaphore.acquire()
    val n = queue.head
    queue = queue.tail
    n
  }
}
To be thread-safe, you should place code that accesses the queue in synchronized blocks, as shown below.
import java.util.concurrent.Semaphore

class ThreadSafeQueue {
  var queue = List[Int]()
  val semaphore = new Semaphore(0)

  def append(num: Int): Unit = {
    synchronized {
      queue = queue ::: List(num)
    }
    semaphore.release()
  }

  def dequeue(): Int = {
    semaphore.acquire()
    synchronized {
      val n = queue.head
      queue = queue.tail
      n
    }
  }
}
A few notes:
With the Semaphore permits value set to 0, all acquire() calls will block until there is a release().
If the Semaphore is created with a permits value > 0, dequeue would be better revised to return an Option[Int] to cover dequeueing from an empty queue.
If there is only a single queue in your application, consider defining ThreadSafeQueue as an object ThreadSafeQueue.
There is an arguably more efficient approach of atomic update using AtomicReference for thread-safety. See this SO link for differences between the two approaches.

Non-Blocking Thread-Safe Counter for JavaFX

I am trying to implement a thread-safe solution to keep a count of successfully completed tasks, which will ultimately get bound to a label displayed on the UI. However, when I use the AtomicInteger below, the UI locks up when the tasks start running; if I remove all AtomicInteger references, everything works fine. Is there a non-blocking, thread-safe way to accomplish this?
public void handleSomeButtonClick() {
    if (!dataModel.getSomeList().isEmpty()) {
        boolean unlimited = false;
        int count = 0;
        AtomicInteger successCount = new AtomicInteger(0);
        if (countSelector.getValue().equalsIgnoreCase("Unlimited"))
            unlimited = true;
        else
            count = Integer.parseInt(countSelector.getValue());
        while (unlimited || successCount.get() < count) {
            Task task = getSomeTask();
            taskExecutor.submit(task);
            task.setOnSucceeded(event -> {
                if (task.getValue())
                    log.info("Successfully Completed Task | Total Count: " + successCount.incrementAndGet());
                else
                    log.error("Failed task");
            });
        }
    }
}
Your loop waits for a certain number of tasks to be completed. It may even be an infinite loop.
This is not a good idea:
You block the calling thread, which seems to be the JavaFX application thread.
You don't have any control over how many tasks are submitted. count could be 3, but since you only schedule the tasks in the loop, 1000 or more tasks could be created and scheduled before the first one completes.
Furthermore if you use onSucceeded/onFailed, you don't need to use AtomicInteger or any similar kind of synchronisation, since those handlers all run on the JavaFX application thread.
Your code could be rewritten like this:
private int successCount;

private void scheduleTask(final boolean unlimited) {
    Task task = getSomeTask();
    task.setOnSucceeded(event -> {
        // cannot get a Boolean from a raw Task, so I assume the task is successful iff no exception happens
        successCount++;
        log.info("Successfully Completed Task | Total Count: " + successCount);
        if (unlimited) {
            // submit a new task, if the number of tasks is unlimited
            scheduleTask(true);
        }
    });
    // submit a new task on failure
    task.setOnFailed(evt -> scheduleTask(unlimited));
    taskExecutor.submit(task);
}

public void handleSomeButtonClick() {
    if (!dataModel.getSomeList().isEmpty()) {
        successCount = 0;
        final boolean unlimited;
        final int count;
        if (countSelector.getValue().equalsIgnoreCase("Unlimited")) {
            unlimited = true;
            count = 4; // limit on the number of tasks submitted to the executor at the same time
        } else {
            count = Integer.parseInt(countSelector.getValue());
            unlimited = false;
        }
        for (int i = 0; i < count; i++) {
            scheduleTask(unlimited);
        }
    }
}
Note: This code runs the risk of handleSomeButtonClick being triggered again before the previous tasks have completed. You should either prevent scheduling new tasks before the old ones are done, or use some reference type containing an int for the count, create that object in handleSomeButtonClick, and pass it to scheduleTask.
Your UI locks up because you do the counting (successCount.get() < count) on the FX application thread. I am not sure why you keep submitting tasks in the while loop; which do you want to do? (1) Start X (e.g. 10) tasks and count how many succeed, or (2) just keep starting new tasks and watch the count go up.
If (2), run the whole while loop in a background thread and update the UI in Platform.runLater().
If (1), use Future / CompletableFuture, or a more powerful Future from a third-party package like Vavr.
Your problem is that future.get() blocks and waits for the result. This is simple if you use the Vavr library, because you can attach code to its Future that runs automatically on success or failure, so you don't have to wait.
Here is an example using Vavr's Future.
CheckedFunction0<String> thisIsATask = () -> {
    if ( /* do something */ ) {
        throw new Exception("Hey");
    }
    return "ABC";
};

List<Future<String>> futureList = new ArrayList<>();
for (int x = 0; x < 10; x++) {
    futureList.add(Future.of(getExecutorService(), thisIsATask));
}

futureList.forEach((task) -> {
    // This will run if the task succeeds
    task.onSuccess(s -> {
        if (s.equals("ABC")) {
            Platform.runLater(() -> UpdateCounter());
        } else {
            wtf();
        }
    });
    // You get the exception if it fails
    task.onFailure(e -> e.printStackTrace());
    // task.onComplete() will run in any case when the task completes
});
This is not blocking; the code in onSuccess, onFailure, or onComplete runs when the task finishes or an exception is caught.
Note: Future.of will use the ExecutorService you pass in to run each task on a new thread, and the code you provide in onSuccess will continue to run on that thread once the task is done, so if you are calling into JavaFX, remember Platform.runLater().
Also, if you want to run something when all tasks are finished:
// the code in onComplete will run when all tasks are done
Future<Seq<String>> all = Future.sequence(futureList);
all.onComplete((i) -> this.btnXYZ.setDisable(false));

Thread for every queue in dictionary c#

I have a dictionary with a custom class object as the key and a queue as the value:
ConcurrentDictionary<clsEFTOPConnection, Queue<clsEFTOPS>>();
Here clsEFTOPConnection and clsEFTOPS are custom classes. I add a new queue if one is not already in the dictionary. Now I want to work on each queue in parallel: each queue should be watched by its own thread, and whenever an enqueue operation is performed, that thread should wake up if it is sleeping and start dequeuing items. So the number of threads equals the number of queues, and each thread acts as a watcher of its queue.
I have written the code below to achieve this, but it is not functioning as I need it to.
public Queue<clsEFTOPS> CheckAlreadyExistingQueue(clsEFTOPConnection objEFTOPConnection, clsEFTOPS objJobQ)
{
    Queue<clsEFTOPS> queue;
    // See if we have a queue
    if (!mHostQueues.TryGetValue(objEFTOPConnection, out queue))
    {
        // No queue for this Host, so create and cache
        queue = new Queue<clsEFTOPS>();
        objJobQ.Host = objEFTOPConnection.Host;
        objJobQ.PortNo = objEFTOPConnection.PortNo;
        objEFTOPConnection.socket_ = objJobQ.ReconnectSocket();
        queue.Enqueue(objJobQ);
        mHostQueues.TryAdd(objEFTOPConnection, queue);
        JobThrd = new Thread(() => DequeueJobs(queue, objEFTOPConnection));
        if (JobThrd.ThreadState != ThreadState.Running)
            JobThrd.Start();
    }
    else
    {
        queue.Enqueue(objJobQ);
        if (queue.Count > 1)
            JobThrd = new Thread(() => DequeueJobs(queue, objEFTOPConnection));
        else
            DequeueJobs(queue, objEFTOPConnection);
        if (JobThrd.ThreadState != ThreadState.Running)
            JobThrd.Start();
    }
    return queue;
}
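As a point of comparison, the "one watcher thread per queue that wakes up on enqueue" behaviour described above is usually simplest to get from a blocking queue rather than a plain Queue<T> plus manual thread management. Below is a rough sketch of that pattern using BlockingCollection<T>; the class name and the plain string keys/jobs are stand-ins for clsEFTOPConnection/clsEFTOPS (whose definitions are not shown in the question), so treat it as an illustration, not a drop-in fix.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PerQueueWatcher
{
    // One blocking queue per key; each queue gets exactly one consumer task that
    // blocks in GetConsumingEnumerable() until something is enqueued.
    private readonly ConcurrentDictionary<string, BlockingCollection<string>> _queues =
        new ConcurrentDictionary<string, BlockingCollection<string>>();

    public void Enqueue(string key, string job)
    {
        var queue = _queues.GetOrAdd(key, k =>
        {
            var q = new BlockingCollection<string>();
            // Start one long-running watcher per queue. (Under a rare race, GetOrAdd can
            // invoke this factory twice; wrap the value in Lazy<> if that matters.)
            Task.Factory.StartNew(() => Watch(k, q), TaskCreationOptions.LongRunning);
            return q;
        });
        queue.Add(job); // wakes the watcher if it is blocked waiting for work
    }

    private static void Watch(string key, BlockingCollection<string> queue)
    {
        // Blocks while the queue is empty; resumes as soon as an item is added.
        foreach (var job in queue.GetConsumingEnumerable())
        {
            Console.WriteLine($"[{key}] dequeued {job}");
        }
    }
}

Each watcher simply sleeps inside GetConsumingEnumerable until its own queue receives work, which is the wake-on-enqueue behaviour the question asks for, without any explicit sleep/awake bookkeeping.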

Multiple threads modifying a collection

I'm trying to learn about threads for an assignment for school, and I'm trying to get two threads to empty a collection. The code I came up with so far throws an exception, saying that the collection got modified.
First I had a while loop in the locked code part, but then (of course ;-)) only one thread empties the collection.
My question is, how can I have a loop in which the threads both take turns in emptying the collection?
class Program
{
    private static List<int> containers = new List<int>();

    static void Main(string[] args)
    {
        for (int i = 0; i < 100; i++)
        {
            containers.Add(i);
        }
        Thread t1 = new Thread(() => { foreach (int container in containers) { GeefContainer(); } });
        t1.Name = "Kraan 1";
        t1.Start();
        Thread t2 = new Thread(() => { foreach (int container in containers) { GeefContainer(); } });
        t2.Name = "Kraan 2";
        t2.Start();
        Console.Write("Press any key to continue...");
        Console.Read();
    }

    static void GeefContainer()
    {
        lock (containers)
        {
            int containerNummer = containers.Count - 1;
            //Container container = containers[containerNummer];
            //Console.Write("Container {0} opgehaald... Overladen", containerNummer);
            Console.WriteLine("Schip: Container {0} gegeven aan {1}", containerNummer, Thread.CurrentThread.Name);
            // Dangerous, because calling the method can blow up
            containers.RemoveAt(containerNummer);
        }
    }
}
I assume you are not allowed to use any of the thread-safe collections found in the System.Collections.Concurrent namespace.
You need to gain exclusive access to the containers collection when checking whether there are still entries left. Yet you don't want one thread to take exclusive control and remove all entries before releasing its lock. Monitor.Pulse can be used to allow other threads waiting to lock the collection to 'go first'. Try the following implementation of GeefContainer:
static void GeefContainer()
{
    lock (containers)
    {
        while (containers.Any()) // using LINQ, similar to: while (containers.Count > 0)
        {
            containers.RemoveAt(0); // remove the first element
            // allow other threads to take control
            Monitor.Pulse(containers); // http://msdn.microsoft.com/en-us/library/system.threading.monitor.pulse.aspx
            // Wait for a pulse from the other thread
            Monitor.Wait(containers);
        }
        // Wake any thread still waiting so it can see the empty collection and exit too
        Monitor.Pulse(containers);
    }
}
Oh, and remove your looping logic from:
Thread t2 = new Thread(() => { foreach (int container in containers) { GeefContainer(); } });
Simply invoking GeefContainer is enough.
This can be visualized in the following way:
Thread 1 gains a lock on 'containers'
Thread 2 is blocked since it's waiting for an exclusive lock on 'containers'
Thread 1 removes an entry from 'containers'
Thread 1 releases its lock on 'containers' and tries to gain a new exclusive lock
Thread 2 gains a lock on 'containers'
Thread 2 removes an entry from 'containers'
Thread 2 releases its lock on 'containers' and tries to gain a new exclusive lock
Thread 1 gains a lock on 'containers'
etc.
The exception you are seeing is being thrown by the enumerator. Enumerators on standard collections have checks to make sure the collection was not modified in the middle of an enumeration operation (via foreach in your case).
Since you want your threads to alternate removing from the collection, you will need some kind of mechanism that allows the threads to signal each other. We also have to be careful not to access the collection from multiple threads at the same time. Not even the Count property is safe to use without synchronization. The Barrier class makes the signaling really easy, and a simple lock will suffice for the synchronization. Here is how I would do this.
public class Program
{
    public static void Main(string[] args)
    {
        var containers = new List<int>();
        for (int i = 0; i < 100; i++)
        {
            containers.Add(i);
        }
        var barrier = new Barrier(0);
        var t1 = new Thread(() => GeefContainers(containers, barrier));
        t1.Name = "Thread 1";
        t1.Start();
        var t2 = new Thread(() => GeefContainers(containers, barrier));
        t2.Name = "Thread 2";
        t2.Start();
        Console.Write("Press any key to continue...");
        Console.Read();
    }

    private static void GeefContainers(List<int> list, Barrier barrier)
    {
        barrier.AddParticipant();
        while (true)
        {
            lock (list)
            {
                if (list.Count > 0)
                {
                    list.RemoveAt(0);
                    Console.WriteLine(Thread.CurrentThread.Name + ": Count = " + list.Count.ToString());
                }
                else
                {
                    break;
                }
            }
            barrier.SignalAndWait();
        }
        barrier.RemoveParticipant();
    }
}
The Barrier class basically causes this to happen over and over again.
|----| |----| |----|
| T1 |-->| |-->| T1 |-->| |-->| T1 |
|----| | | |----| | | |----|
|-->(B)-->| |-->(B)-->|
|----| | | |----| | | |----|
| T2 |-->| |-->| T2 |-->| |-->| T2 |
|----| |----| |----|
In the above diagram T1 and T2 represent the remove operations on threads 1 and 2 respectively. (B) represents a call to Barrier.SignalAndWait.
First, change your thread definition as follows:
new Thread(() => { while (containers.Count > 0) { GeefContainer(); } });
Then, rewrite GeefContainer() as follows to avoid exceptions:
static void GeefContainer()
{
    lock (containers)
    {
        int containerNummer = containers.Count - 1;
        if (containerNummer >= 0)
        {
            //Container container = containers[containerNummer];
            //Console.Write("Container {0} opgehaald... Overladen", containerNummer);
            Console.WriteLine("Schip: Container {0} gegeven aan {1}", containerNummer, Thread.CurrentThread.Name);
            // Dangerous, because calling the method can blow up
            containers.RemoveAt(containerNummer);
        }
    }
}
What if you modify your threads as follows? That way, both threads should get some time to perform actions on the collection.
Thread t1 = new Thread(() =>
{
    while (containers.Count > 0)
    {
        GeefContainer();
        Thread.Sleep(150);
    }
});
t1.Name = "Kraan 1";
t1.Start();

Thread t2 = new Thread(() =>
{
    while (containers.Count > 0)
    {
        GeefContainer();
        Thread.Sleep(130);
    }
});
t2.Name = "Kraan 2";
t2.Start();

Reading output from console application and WPF plotting async

I have a console application which outputs about 160 lines of info every 1 second.
The data output is points that can be used to plot on a graph.
In my WPF application, I've successfully hooked this up, and the data output by the console application is being plotted; however, after about 500 or so data points, I see significant slowdown in the application and UI thread lockups.
I assume this is due to the async operations I'm using:
BackgroundWorker worker = new BackgroundWorker();
worker.DoWork += delegate(object s, DoWorkEventArgs args)
{
    _process = new Process();
    _process.StartInfo.FileName = "consoleApp.exe";
    _process.StartInfo.UseShellExecute = false;
    _process.StartInfo.RedirectStandardOutput = true;
    _process.StartInfo.CreateNoWindow = true;
    _process.EnableRaisingEvents = true;
    _process.OutputDataReceived += new DataReceivedEventHandler(SortOutputHandler);
    _process.Start();
    _process.BeginOutputReadLine();
    _watch.Start();
};
worker.RunWorkerAsync();
And the handler that is taking care of parsing and plotting the data:
private void SortOutputHandler(object sendingProcess, DataReceivedEventArgs outLine)
{
    if (!String.IsNullOrEmpty(outLine.Data))
    {
        var xGroup = Regex.Match(outLine.Data, "x: ?([-0-9]*)").Groups[1];
        int x = int.Parse(xGroup.Value);
        var yGroup = Regex.Match(outLine.Data, "y: ?([-0-9]*)").Groups[1];
        int y = int.Parse(yGroup.Value);
        var zGroup = Regex.Match(outLine.Data, "z: ?([-0-9]*)").Groups[1];
        int z = int.Parse(zGroup.Value);

        Reading reading = new Reading()
        {
            Time = _watch.Elapsed.TotalMilliseconds,
            X = x,
            Y = y,
            Z = z
        };

        Dispatcher.Invoke(new Action(() =>
        {
            _readings.Enqueue(reading);
            _dataPointsCount++;
        }), System.Windows.Threading.DispatcherPriority.Normal);
    }
}
_readings is a custom ObservableQueue<Queue> as defined in this answer. I've modified it so that only 50 items can be in the queue at a time. So if a new item is being added and the queue count >= 50, a Dequeue() is called before an Enqueue().
Is there any way I can improve the performance or am I doomed because of how much the console app outputs?
From what I can tell, here is what's going on:
The UI thread spins up a background worker to launch the console app.
It redirects the output of the console and handles it with a handler on the UI thread.
The handler on the UI thread then calls Dispatcher.Invoke 160 times a second to update a queue object on the same thread.
After 50 calls the queue starts blocking while items are dequeued by the UI.
The trouble would seem to be:
Having the UI thread handle the raw output from the console, the queue, and the updates to the graph.
There is also a potential problem with blocking between enqueue and dequeue once the UI is more than 50 data items behind, which might be leading to a cascading failure. (I can't see enough of the code to be sure of that.)
Resolution:
Start another background thread to manage the data from the console app
The new thread should: Create the Queue; handle the OutputDataReceived event; and launch the console app process.
The Event Handler should not use Dispatcher.Invoke to update the Queue. A direct threadsafe call should be used.
The Queue really needs to be non blocking when updating the UI, but I don't really have enough information about how that's being implemented to comment.
Hope this helps
-Chris
I suspect that there's a thread starvation issue happening on the UI thread as your background thread is marshaling calls to an observable collection that is possibly forcing the underlying CollectionView to be recreated each time. This can be a pretty expensive operation.
How you've got your XAML configured is also a concern. The measure/layout passes alone could be killing you. I would imagine that at the rate the data is coming in, the UI doesn't get a chance to properly evaluate what's happening to the underlying data.
I would suggest not binding the View to the Queue directly. Instead of using an Observable Queue as you've suggested, consider:
Use a regular queue that caps content at 50 items. Don't worry about the NotifyCollectionChanged event happening on the UI thread. You won't have to marshal each item to the UI thread either.
Expose a CollectionViewSource object in your ViewModel that takes the Queue as its collection.
Use a timer thread on the UI to manually force a refresh of the CollectionViewSource. Start with once a second and decrease the interval to see what your XAML and machine can handle. In this fashion, you control when the CollectionView is created and destroyed.
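A minimal sketch of that timer-driven refresh might look like the following. The class and property names, the one-second interval, and the plain Queue<Reading> backing store are assumptions made for illustration, not code from the question.

using System;
using System.Collections.Generic;
using System.Windows.Data;
using System.Windows.Threading;

public class Reading
{
    // Same shape as the Reading type in the question.
    public double Time { get; set; }
    public int X { get; set; }
    public int Y { get; set; }
    public int Z { get; set; }
}

public class ReadingsViewModel
{
    // Plain queue, capped by the producer at ~50 items; no per-item change notifications.
    public Queue<Reading> ReadingQueue { get; } = new Queue<Reading>();

    // XAML binds to Readings.View, e.g. ItemsSource="{Binding Readings.View}".
    public CollectionViewSource Readings { get; } = new CollectionViewSource();

    private readonly DispatcherTimer _refreshTimer = new DispatcherTimer();

    // Construct on the UI thread so the DispatcherTimer ticks there.
    public ReadingsViewModel()
    {
        Readings.Source = ReadingQueue;

        // Rebuild the view once per second instead of once per data point;
        // tune the interval to whatever your XAML and machine can handle.
        _refreshTimer.Interval = TimeSpan.FromSeconds(1);
        _refreshTimer.Tick += (s, e) => Readings.View.Refresh();
        _refreshTimer.Start();
    }
}

Access to ReadingQueue still needs to be synchronized between the producer and the refresh, but the UI now pays the CollectionView cost once per tick rather than 160 times per second.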
You could try passing the processed data onto the UI Thread from the BackgroundWorker ProgressChanged event.
Something like....
// Standard warnings apply: not tested, no exception handling, etc.
var locker = new object();
var que = new ConcurrentQueue<string>();
var worker = new BackgroundWorker();
worker.WorkerReportsProgress = true;      // required for ReportProgress/ProgressChanged
worker.WorkerSupportsCancellation = true;

var proc = new Process();
proc.StartInfo.FileName = "consoleApp.exe";
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.RedirectStandardOutput = true;
proc.StartInfo.CreateNoWindow = true;
proc.EnableRaisingEvents = true;
proc.OutputDataReceived +=
    (p, a) =>
    {
        que.Enqueue(a.Data);
        lock (locker) // Monitor.Pulse requires the lock to be held
        {
            Monitor.Pulse(locker);
        }
    };

worker.DoWork +=
    (s, e) =>
    {
        var watch = Stopwatch.StartNew();
        while (!worker.CancellationPending)
        {
            while (que.Count > 0)
            {
                string data;
                if (que.TryDequeue(out data))
                {
                    if (!String.IsNullOrEmpty(data))
                    {
                        var xGroup = Regex.Match(data, "x: ?([-0-9]*)").Groups[1];
                        int x = int.Parse(xGroup.Value);
                        var yGroup = Regex.Match(data, "y: ?([-0-9]*)").Groups[1];
                        int y = int.Parse(yGroup.Value);
                        var zGroup = Regex.Match(data, "z: ?([-0-9]*)").Groups[1];
                        int z = int.Parse(zGroup.Value);
                        var reading = new Reading()
                        {
                            Time = watch.Elapsed.TotalMilliseconds,
                            X = x,
                            Y = y,
                            Z = z
                        };
                        worker.ReportProgress(0, reading);
                    }
                }
                else break;
            }
            // wait for data (or time out), then check whether the worker is cancelled
            lock (locker) // Monitor.Wait also requires the lock to be held
            {
                Monitor.Wait(locker, 50);
            }
        }
    };

worker.ProgressChanged +=
    (s, e) =>
    {
        var reading = (Reading)e.UserState;
        // We are on the UI thread....do something with the new reading...
    };

// start everybody.....
worker.RunWorkerAsync();
proc.Start();
proc.BeginOutputReadLine();
You can simply store the points in a list and only call the dispatcher when you have collected e.g. 160 points, so you do not create too many update messages. Currently you are causing a window message every 6 ms, which is far too much. If you update the UI e.g. every second or every 160 points, things will be much smoother. If the notifications are still too much, you need to look at how you can suspend redrawing your control while you update the UI with the 160 data points and resume drawing afterwards, so you do not get heavy flickering.
List<Reading> _Readings = new List<Reading>();
DateTime _LastUpdateTime = DateTime.Now;
TimeSpan _UpdateInterval = new TimeSpan(0, 0, 0, 0, 1 * 1000); // Update every 1 second

private void SortOutputHandler(object sendingProcess, DataReceivedEventArgs outLine)
{
    if (!String.IsNullOrEmpty(outLine.Data))
    {
        var xGroup = Regex.Match(outLine.Data, "x: ?([-0-9]*)").Groups[1];
        int x = int.Parse(xGroup.Value);
        var yGroup = Regex.Match(outLine.Data, "y: ?([-0-9]*)").Groups[1];
        int y = int.Parse(yGroup.Value);
        var zGroup = Regex.Match(outLine.Data, "z: ?([-0-9]*)").Groups[1];
        int z = int.Parse(zGroup.Value);
        Reading reading = new Reading()
        {
            Time = _watch.Elapsed.TotalMilliseconds,
            X = x,
            Y = y,
            Z = z
        };

        // Create a batch of readings until it is time to send it to the UI
        // via ONE window message instead of hundreds per second.
        _Readings.Add(reading);
        DateTime current = DateTime.Now;
        if (current - _LastUpdateTime > _UpdateInterval) // update the UI every second
        {
            _LastUpdateTime = current;
            List<Reading> copy = _Readings; // Grab the current buffer and make it invisible to other threads by creating a new list.
                                            // Since this is the only thread that writes to it, this is a safe operation.
            _Readings = new List<Reading>(); // publish a new empty list
            Dispatcher.Invoke(new Action(() =>
            {
                // This is called as part of a window message on the main UI thread,
                // once per second now and not every 6 ms. Now we can update the UI
                // with a batch of 160 points at once.
                // A further optimization would be to disable drawing events
                // while we add the points to the control and enable them after
                // the loop.
                foreach (Reading r in copy)
                {
                    _readings.Enqueue(r);
                    _dataPointsCount++;
                }
            }),
            System.Windows.Threading.DispatcherPriority.Normal);
        }
    }
}
