Multiprocess Synchronization with a Single Semaphore

We're covering multithreaded programming in a class I'm taking. The professor offered a bonus question that I have been trying, to no avail, to figure out:
Each of the processes P0, P1, P2, and P3 has to wait for the other three to reach a particular synchronization point in their code, and only then may that process cross its own synchronization point.
I already know how to answer the question with four semaphores; the hard part is doing it with only one semaphore.
Any suggestions or hints as to how to proceed?

Just initialize your semaphore at -4 if it's not a binary one.

You are a little light on the constraints imposed on your solution, but see The Little Book of Semaphores and read through the sections on barriers. That should give you some ideas.

Turns out the professor had meant to say that you could use two semaphores instead of one. He believes, as I do after having thought about the problem for a while, that it is impossible to do with a single semaphore.
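For reference, here is a minimal sketch of the classic two-semaphore barrier from The Little Book of Semaphores, written in Java with threads standing in for the processes (the class name and the single-use design are illustrative choices, not anything from the original question): one semaphore serves as a mutex around an arrival counter, and the other is a turnstile that releases the waiters once the last arrival checks in.

    import java.util.concurrent.Semaphore;

    // Single-use barrier built from exactly two semaphores, in the style of
    // The Little Book of Semaphores: "mutex" guards the arrival count and
    // "turnstile" holds arrivals until the last one releases everybody.
    class TwoSemaphoreBarrier {
        private final int n;                                  // e.g. 4 participants
        private int count = 0;
        private final Semaphore mutex = new Semaphore(1);     // protects count
        private final Semaphore turnstile = new Semaphore(0); // blocks arrivals

        TwoSemaphoreBarrier(int n) { this.n = n; }

        void await() throws InterruptedException {
            mutex.acquire();
            count++;
            boolean last = (count == n);
            mutex.release();
            if (last) {
                turnstile.release(n); // last arrival frees all n waiters, itself included
            }
            turnstile.acquire();      // cross only after everyone has arrived
        }
    }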

Related

Doing multithreading implicitly?

I have a question from one of my IT subjects. I am actually trying to understand multithreading, and the one question that I need an answer to is:
What can be done if we want to activate multiple threads when our hardware system doesn't support explicit multithreading solutions? (I also don't know what solutions fall into that category.)
Any help on understanding multithreading as a whole is welcome, and particularly an answer to this question :)
Thank you!
I don't believe it makes any sense to talk about implicit multi-threading.
Multi-threading is a way to structure computer software such that a single program can have several different, independent activities going on at the same time. There are several different reasons why you would want to do that, but none of them happens by accident. Multi-threaded programs only exist because somebody intentionally wrote them that way.
One of the reasons for writing a multi-threaded program is to perform parallel computation on a multi-CPU host. Other technologies that you mentioned, "superscalar, SMT, VLIW," are all different approaches to parallelism.
My guess is that when you said "multithreading" in your question, you were actually asking about parallelism.

Matlab parallel programming

First of all, sorry for the general title and, probably, for the general question.
I'm facing a dilemma. I have always worked in C++, and right now I'm trying to do something very similar to one of my previous projects: parallelize a single-target object tracker written in Matlab, in order to assign an object to each concurrent thread and then gather the results at each frame. In C++ I used the Boost thread API to do so, with good results. Is that possible in Matlab? Reading around, I'm finding it rather unclear; I'm reading a lot about the parfor loop, but is that pretty much it? Can I impose synchronization barriers similar to boost::barrier in order to stop each thread after each frame and have it wait for the others before going on to the next frame?
Basically, I wish to initialize some common data structures and then launch a few parallel instances of the tracker, which share some data and take different objects to track as input. Any suggestion will be greatly appreciated!
parfor is only one piece of functionality provided by Parallel Computing Toolbox. It's the simplest, and most people find it the most immediately useful, which is probably why most of the resources your research has found discuss only that.
parfor gives you a way to very simply parallelize "embarrassingly parallel" tasks, in other words tasks that are independent and do not require any communication between them (such as, for example, parameter sweeps or Monte Carlo analyses).
It sounds like that's not what you need. From your question, I'm not entirely sure exactly what you do need; but since you mention synchronization, barriers, and waiting for one task to finish before another moves forward, I would suggest you take a look at features of Parallel Computing Toolbox such as labSend, labReceive, labBarrier, and spmd, which allow you to implement a more message-passing style of parallelization. There is plenty more functionality in the toolbox than just parfor.
Also - don't be afraid to ask MathWorks for advice on this, there are several (free) recorded webinars and tutorials on this sort of parallelization that they can point you towards.
Hope that helps!

What are the things that thread doesn’t share with process?

I have a couple of doubts regarding processes and threads, given below:
1. What are the things that a thread doesn't share with its process?
2. Why is there a separate stack for each thread?
3. How do two threads from different processes communicate?
1) This is a definition. You don't need "help" with this one, you need a "book."
2) I'm very willing to help with this one. It isn't a simple definition question, so let's start by answering your question with a question... In a single-process, single-thread system, what is the purpose of the stack? Once you can answer that, you are an inch from answering this question.
3) On what system?
http://en.wikipedia.org/wiki/Thread_(computing)
Wikipedia is down for the moment, but after that, you can check it :)
Your second question actually answers your first. Threads work at different rates from one another. Imagine a program being one line of commands, all following each other and waiting for one another's completion. Now add a second line, so you have two bits of processing done at the same time (quite possibly at different rates of speed). That's a thread; a minimal sketch of that "second line" follows below.
In essence, a thread is a separate flow of execution spawned from a mutual application. Usability varies greatly according to which system you use and what you wish to accomplish.
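Here is a minimal sketch of that "second line" in Java (the class name and loop counts are just for illustration): two threads running at their own rates within the same program.

    // Two "lines of commands" running concurrently in one process.
    class TwoLines {
        public static void main(String[] args) throws InterruptedException {
            Thread second = new Thread(() -> {
                for (int i = 0; i < 3; i++) System.out.println("second line: " + i);
            });
            second.start();                       // the "second line" begins
            for (int i = 0; i < 3; i++) System.out.println("first line: " + i);
            second.join();                        // wait for the other line to finish
        }
    }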
These are the types of things you're better off asking Google than Stack Overflow.

Graceful exit for multithreaded Haskell

This is entirely theoretical at this point, but I've been trying to wrap my head around this problem. Let's take a client as an example. There are forkIO'd threads for every connection, and one of them wants to quit the entire program (i.e. /exit). How would this information be propagated to the other threads?
This is not a precondition, but I assume that the threads are doing blocking reads on their respective connections. Since they're idling away until something is written to them, they can't poll any kind of "done" variable, so my first thought is bunked.
I don't have a solution in mind for any particular program, so answers for any language are appreciated, but the real question is how to do it in Haskell.
The best way I know of is poison, which is implemented by the CHP library.
See the excellent explanation here: http://chplib.wordpress.com/2009/09/30/poison-concurrent-termination/
The above article incidentally goes through other solutions and explains why they're generally somewhat fragile.
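For readers who want the flavor of the idea without CHP, here is a hedged sketch in Java (not CHP's actual API; the sentinel value and class names are invented for illustration): a "poison" value injected into a thread's inbox wakes a blocked reader and tells it to shut down, which is how termination propagates from thread to thread.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Poison-style termination: a sentinel message wakes a blocked reader
    // and tells it to exit cleanly, instead of polling a "done" flag.
    class PoisonDemo {
        private static final String POISON = "\u0000POISON"; // hypothetical sentinel

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String msg = inbox.take();      // blocks, like a blocking read
                        if (msg.equals(POISON)) break;  // poisoned: clean up and exit
                        System.out.println("got: " + msg);
                    }
                } catch (InterruptedException ignored) { }
            });
            worker.start();

            inbox.put("hello");
            inbox.put(POISON);  // the thread that wants to quit poisons the inbox
            worker.join();
        }
    }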

Threading Best Practices

Many projects I work on have poor threading implementations, and I am the sucker who has to track these down. Is there an accepted best way to handle threading? My code is always waiting for an event that never fires.
I'm kinda thinking of a design pattern or something.
(Assuming .NET; similar things would apply for other platforms.)
Well, there are lots of things to consider. I'd advise:
Immutability is great for multi-threading. Functional programming works well concurrently partly due to the emphasis on immutability.
Use locks when you access mutable shared data, both for reads and writes.
Don't try to go lock-free unless you really have to. Locks are expensive, but rarely the bottleneck.
Monitor.Wait should almost always be part of a condition loop: wait for a condition to become true, and wait again if it's not (see the sketch below, after this list).
Try to avoid holding locks for longer than you need to.
If you ever need to acquire two locks at once, document the ordering thoroughly and make sure you always use the same order.
Document the thread-safety of your types. Most types don't need to be thread-safe; they just need to not be thread-hostile (i.e. "you can use them from multiple threads, but it's your responsibility to take out locks if you want to share them").
Don't access the UI (except in documented thread-safe ways) from a non-UI thread. In Windows Forms, use Control.Invoke/BeginInvoke
That's off the top of my head - I can probably think of more if this is useful to you, but I'll stop there in case it's not.
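To illustrate the condition-loop point from the list above, here is a minimal sketch in Java terms, where wait/notifyAll plays the role of .NET's Monitor.Wait/Monitor.PulseAll (the bounded counter itself is just an invented example):

    // Guarded waits belong in a loop: re-check the condition after every
    // wakeup, because spurious wakeups and stolen signals are possible.
    class BoundedCounter {
        private int value = 0;
        private final int max;

        BoundedCounter(int max) { this.max = max; }

        synchronized void increment() throws InterruptedException {
            while (value == max) {   // condition loop, not a bare wait
                wait();
            }
            value++;
            notifyAll();             // wake anyone waiting on the inverse condition
        }

        synchronized void decrement() throws InterruptedException {
            while (value == 0) {
                wait();
            }
            value--;
            notifyAll();
        }
    }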
Learning to write multi-threaded programs correctly is extremely difficult and time consuming.
So the first step is: replace the implementation with one that doesn't use multiple threads at all.
Then carefully put threading back in if, and only if, you discover a genuine need for it, when you've figured out some very simple safe ways to do so. A non-threaded implementation that works reliably is far better than a broken threaded implementation.
When you're ready to start, favour designs that use thread-safe queues to transfer work items between threads and take care to ensure that those work items are accessed only by one thread at a time.
Try to avoid just spraying lock blocks around your code in the hope that it will become thread-safe. It doesn't work. Eventually, two code paths will acquire the same locks in a different order, and everything will grind to a halt (once every two weeks, on a customer's server). This is especially likely if you combine threads with firing events, and you hold the lock while you fire the event - the handler may take out another lock, and now you have a pair of locks held in a particular order. What if they're taken out in the opposite order in some other situation?
In short, this is such a big and difficult subject that I think it is potentially misleading to give a few pointers in a short answer and say "Off you go!" - I'm sure that's not the intention of the many learned people giving answers here, but that is the impression many get from summarised advice.
Instead, buy this book.
Here is a very nicely worded summary from this site:
Multithreading also comes with disadvantages. The biggest is that it can lead to vastly more complex programs. Having multiple threads does not in itself create complexity; it's the interaction between the threads that creates complexity. This applies whether or not the interaction is intentional, and can result in long development cycles, as well as an ongoing susceptibility to intermittent and non-reproducible bugs. For this reason, it pays to keep such interaction in a multi-threaded design simple – or not use multithreading at all – unless you have a peculiar penchant for re-writing and debugging!
Perfect summary from Stroustrup:
The traditional way of dealing with concurrency by letting a bunch of threads loose in a single address space and then using locks to try to cope with the resulting data races and coordination problems is probably the worst possible in terms of correctness and comprehensibility.
(Like Jon Skeet, much of this assumes .NET)
At the risk of seeming argumentative, comments like these just bother me:
Learning to write multi-threaded programs correctly is extremely difficult and time consuming.
Threads should be avoided when possible...
It is practically impossible to write software that does anything significant without leveraging threads in some capacity. If you are on Windows, open your Task Manager, enable the Thread Count column, and you can probably count on one hand the number of processes that are using a single thread. Yes, one should not simply use threads for the sake of using threads nor should it be done cavalierly, but frankly, I believe these cliches are used too often.
If I had to boil multithreaded programming down for the true novice, I would say this:
Before jumping into it, first understand that the class boundary is not the same as a thread boundary. For example, if a callback method on your class is called by another thread (e.g., the AsyncCallback delegate to the TcpListener.BeginAcceptTcpClient() method), understand that the callback executes on that other thread. So even though the callback occurs on the same object, you still have to synchronize access to the members of the object within the callback method. Threads and classes are orthogonal; it is important to understand this point.
Identify what data needs to be shared between threads. Once you have defined the shared data, try to consolidate it into a single class if possible.
Limit the places where the shared data can be written and read. If you can get this down to one place for writing and one place for reading, you will be doing yourself a tremendous favor. This is not always possible, but it is a nice goal to shoot for.
Obviously make sure you synchronize access to the shared data using the Monitor class or the lock keyword.
If possible, use a single object to synchronize your shared data regardless of how many different shared fields there are. This will simplify things. However, it may also overly constrain things, in which case you may need a synchronization object for each shared field; at that point, using immutable classes becomes very handy.
If you have one thread that needs to signal another thread(s), I would strongly recommend using the ManualResetEvent class to do this instead of using events/delegates.
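In Java terms, a hedged sketch of that last point might use CountDownLatch(1), which behaves like a one-shot ManualResetEvent (the class and variable names here are illustrative):

    import java.util.concurrent.CountDownLatch;

    // One thread signals another: waiters block in await() until some
    // thread calls countDown() once, analogous to "setting" the event.
    class SignalDemo {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch ready = new CountDownLatch(1);

            Thread waiter = new Thread(() -> {
                try {
                    ready.await();                       // block until signaled
                    System.out.println("signal received");
                } catch (InterruptedException ignored) { }
            });
            waiter.start();

            Thread.sleep(100);   // simulate work before signaling
            ready.countDown();   // "set" the event; all waiters wake up
            waiter.join();
        }
    }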
To sum up, I would say that threading is not difficult, but it can be tedious. Still, a properly threaded application will be more responsive, and your users will be most appreciative.
EDIT:
There is nothing "extremely difficult" about ThreadPool.QueueUserWorkItem(), asynchronous delegates, the various BeginXXX/EndXXX method pairs, etc. in C#. If anything, these techniques make it much easier to accomplish various tasks in a threaded fashion. If you have a GUI application that does any heavy database, socket, or I/O interaction, it is practically impossible to make the front end responsive to the user without leveraging threads behind the scenes. The techniques I mentioned above make this possible and are a breeze to use. It is important to understand the pitfalls, to be sure. I simply believe we do programmers, especially younger ones, a disservice when we talk about how "extremely difficult" multithreaded programming is or how threads "should be avoided." Comments like these oversimplify the problem and perpetuate the myth, when the truth is that threading has never been easier. There are legitimate reasons to use threads, and clichés like this just seem counterproductive to me.
You may be interested in something like CSP, or one of the other theoretical algebras for dealing with concurrency. There are CSP libraries for most languages, but if the language wasn't designed for it, it requires a bit of discipline to use correctly. But ultimately, every kind of concurrency/threading boils down to some fairly simple basics: Avoid shared mutable data, and understand exactly when and why each thread may have to block while waiting for another thread. (In CSP, shared data simply doesn't exist. Each thread (or process in CSP terminology) is only allowed to communicate with others through blocking message-passing channels. Since there is no shared data, race conditions go away. Since message passing is blocking, it becomes easy to reason about synchronization, and literally prove that no deadlocks can occur.)
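As a hedged illustration of that blocking message-passing style in Java (not a real CSP library; SynchronousQueue just happens to behave like an unbuffered CSP channel), consider:

    import java.util.concurrent.SynchronousQueue;

    // CSP-flavored communication: SynchronousQueue is a zero-capacity
    // rendezvous channel, so put() blocks until another thread take()s.
    // No data is shared; values are handed off by communication alone.
    class CspDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 3; i++) {
                        channel.put(i);   // blocks until the consumer is ready
                    }
                } catch (InterruptedException ignored) { }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 3; i++) {
                        System.out.println("received " + channel.take());
                    }
                } catch (InterruptedException ignored) { }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }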
Another good practice, which is easier to retrofit into existing code is to assign a priority or level to every lock in your system, and make sure that the following rules are followed consistently:
While holding a lock at level N, you may only acquire new locks at lower levels.
Multiple locks at the same level must be acquired at the same time, as a single operation that always tries to acquire all the requested locks in the same global order. (Note that any consistent order will do, but any thread that tries to acquire one or more locks at level N must acquire them in the same order as any other thread would anywhere else in the code.)
Following these rules means that it is simply impossible for a deadlock to occur. Then you just have to worry about mutable shared data.
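A minimal sketch of the same-global-order rule in Java, using the classic two-account transfer example (the numeric id serving as the ordering key is an assumption made for illustration):

    // When two locks at the same level are needed, always take them in a
    // fixed total order so every thread agrees and deadlock is impossible.
    class Account {
        final long id;        // assumed unique; defines the global lock order
        long balance;

        Account(long id, long balance) { this.id = id; this.balance = balance; }
    }

    class Transfers {
        static void transfer(Account from, Account to, long amount) {
            // Order the two same-level locks by id before acquiring either.
            Account first  = from.id < to.id ? from : to;
            Account second = from.id < to.id ? to : from;
            synchronized (first) {
                synchronized (second) {
                    from.balance -= amount;
                    to.balance += amount;
                }
            }
        }
    }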
BIG emphasis on the first point that Jon posted. The more immutable state you have (i.e. globals that are const, etc.), the easier your life is going to be (i.e. the fewer locks you'll have to deal with, and the less reasoning you'll have to do about interleaving order).
Also, oftentimes if you have small objects to which multiple threads need access, you're better off copying them between threads rather than having a shared, mutable global that you have to hold a lock on to read or mutate. It's a tradeoff between your sanity and memory efficiency.
Looking for a design pattern when dealing with threads really is the best approach to start with. It's too bad that many people don't try it, instead attempting to implement more or less complex multithreaded constructs on their own.
I would probably agree with all the opinions posted so far. In addition, I'd recommend using existing, more coarse-grained frameworks that provide building blocks rather than simple facilities like locks or wait/notify operations. For Java, that would simply be the built-in java.util.concurrent package, which gives you ready-to-use classes you can easily combine into a multithreaded app. The big advantage of this is that you avoid writing low-level operations, which result in hard-to-read and error-prone code, in favor of a much clearer solution.
In my experience, most concurrency problems in Java can be solved by using this package. But, of course, you should always be careful with multithreading; it's challenging no matter what.
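As a hedged example of the kind of ready-made building block meant here, an ExecutorService from java.util.concurrent runs a batch of tasks on a pool with no hand-written locks or wait/notify (the tasks themselves are toy examples):

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // A thread pool as a coarse-grained building block: submit tasks,
    // collect results, and never touch a lock yourself.
    class PoolDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Callable<Integer>> tasks = List.of(
                    () -> 1 + 1,
                    () -> 2 * 21,
                    () -> "abc".length());
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                System.out.println(f.get());  // blocks until each task completes
            }
            pool.shutdown();
        }
    }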
Adding to the points that other folks have already made here:
Some developers seem to think that "almost enough" locking is good enough. It's been my experience that the opposite can be true -- "almost enough" locking can be worse than enough locking.
Imagine thread A locking resource R, using it, and then unlocking it. A then uses resource R' without a lock.
Meanwhile, thread B tries to access R while A has it locked. Thread B is blocked until thread A unlocks R. Then the CPU context switches to thread B, which accesses R, and then updates R' during its time slice. That update renders R' inconsistent with R, causing a failure when A tries to access it.
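A hedged sketch of that hazard in Java (names invented for illustration): R is guarded by a lock, but the related resource R' is not, so the unlocked updates can race and leave the two resources inconsistent.

    // "Almost enough" locking: r is guarded, but the related rPrime is not,
    // so updates to rPrime can interleave and fall out of sync with r.
    class AlmostEnoughLocking {
        static final Object lockR = new Object();
        static int r = 0;        // resource R, guarded by lockR
        static int rPrime = 0;   // resource R', NOT guarded: the bug

        static void threadA() {
            synchronized (lockR) {
                r++;             // R is safely updated under the lock
            }
            rPrime = r * 2;      // unlocked use of R': may interleave with B
        }

        static void threadB() {
            synchronized (lockR) {
                r += 10;
            }
            rPrime = r * 2;      // B's unlocked update can leave R' stale
                                 // relative to R from A's point of view
        }
    }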
Test on as many different hardware and OS architectures as possible. Different CPU types, different numbers of cores and chips, Windows/Linux/Unix, etc.
The first developer who worked with multi-threaded programs was a guy named Murphy.
Well, everyone thus far has been Windows / .NET centric, so I'll chime in with some Linux / C.
Avoid futexes at all costs (PDF), unless you really, really need to recover some of the time spent on mutex locks. I am currently pulling my hair out over Linux futexes.
I don't yet have the nerve to go with practical lock-free solutions, but I'm rapidly approaching that point out of pure frustration. If I could find a good, well-documented, and portable implementation of the above that I could really study and grasp, I'd probably ditch threads completely.
I have come across so much code lately that uses threads when it really should not; it's obvious that someone just wanted to profess their undying love of POSIX threads when a single (yes, just one) fork would have done the job.
I wish that I could give you some code that 'just works', 'all the time'. I could, but it would be so silly to serve as a demonstration (servers and such that start threads for each connection). In more complex event driven applications, I have yet (after some years) to write anything that doesn't suffer from mysterious concurrency issues that are nearly impossible to reproduce. So I'm the first to admit, in that kind of application, threads are just a little too much rope for me. They are so tempting and I always end up hanging myself.
I'd like to follow up with Jon Skeet's advice with a couple more tips:
If you are writing a "server" and are likely to have a high amount of insert parallelism, don't use Microsoft's SQL Compact. Its lock manager is stupid. If you do use SQL Compact, DON'T use serializable transactions (which happen to be the default for the TransactionScope class). Things will fall apart on you rapidly. SQL Compact doesn't support temporary tables, and when you try to simulate them inside serialized transactions, it does ridiculously stupid things like taking x-locks on the index pages of the _sysobjects table. It also gets really eager about lock promotion, even if you don't use temp tables. If you need serial access to multiple tables, your best bet is to use repeatable-read transactions (to give atomicity and integrity) and then implement your own hierarchical lock manager based on domain objects (accounts, customers, transactions, etc.), rather than using the database's page-row-table based scheme.
When you do this, however, you need to be careful (as Jon Skeet said) to create a well-defined lock hierarchy.
If you do create your own lock manager, use [ThreadStatic] fields to store information about the locks you take, and then add asserts everywhere inside the lock manager that enforce your lock-hierarchy rules. This will help root out potential issues up front.
In any code that runs on a UI thread, add asserts on !InvokeRequired (for WinForms) or Dispatcher.CheckAccess() (for WPF). You should similarly add the inverse assert to code that runs on background threads. That way, people looking at a method will know, just by looking at it, what its threading requirements are. The asserts will also help catch bugs.
Assert like crazy, even in retail builds (that means throwing, but you can make your throws look like asserts). A crash dump with an exception that says "you violated threading rules by doing this", along with stack traces, is much easier to debug than a report from a customer on the other side of the world that says "every now and then the app just freezes on me, or it spits out gobbledygook".
It's the mutable state, stupid
That is a direct quote from Java Concurrency in Practice by Brian Goetz. Even though the book is Java-centric, the "Summary of Part I" gives some other helpful hints that will apply in many threaded programming contexts. Here are a few more from that same summary:
Immutable objects are automatically thread-safe.
Guard each mutable variable with a lock.
A program that accesses a mutable variable from multiple threads without synchronization is a broken program.
I would recommend getting a copy of the book for an in-depth treatment of this difficult topic.
Instead of locking on containers, you should use ReaderWriterLockSlim. This gives you database-like locking: any number of readers, one writer, and the possibility of upgrading.
As for design patterns, pub/sub is pretty well established and very easy to write in .NET (using ReaderWriterLockSlim). In our code, we have a MessageDispatcher object that everyone gets. You subscribe to it, or you send a message out, in a completely asynchronous manner. All you have to lock on is the registered functions and any resources that they work on. It makes multithreading much easier.
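For comparison, a hedged Java sketch of the reader/writer idea, with ReentrantReadWriteLock standing in for .NET's ReaderWriterLockSlim (the map wrapper is an invented example):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Database-like locking: many concurrent readers, but writers get
    // exclusive access to the shared map.
    class SharedMap {
        private final Map<String, String> map = new HashMap<>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        String get(String key) {
            lock.readLock().lock();      // many readers may hold this at once
            try {
                return map.get(key);
            } finally {
                lock.readLock().unlock();
            }
        }

        void put(String key, String value) {
            lock.writeLock().lock();     // exclusive: blocks readers and writers
            try {
                map.put(key, value);
            } finally {
                lock.writeLock().unlock();
            }
        }
    }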
