Why do I get different results using Parallel.For? - c#-4.0

using System;
using System.Threading.Tasks;

class Program
{
    const int _Total = 1000000;

    [ThreadStatic]
    static long count = 0;

    static void Main(string[] args)
    {
        Parallel.For(0, _Total, (i) =>
        {
            count++;
        });
        Console.WriteLine(count);
    }
}
I get a different result every time. Can anybody tell me why?

Most likely your "count" variable isn't atomic in any form, so you are getting concurrent modifications that aren't synchronized. Thus, the following sequence of events is possible:
Thread 1 reads "count"
Thread 2 reads "count"
Thread 1 stores value+1
Thread 2 stores value+1
Thus, the "for" loop has done 2 iterations, but the value has only increased by 1. As thread ordering is "random", so will be the result.
Things can get a lot worse, of course:
Thread 1 reads count
Thread 2 increases count 100 times
Thread 1 stores value+1
In that case, all 100 of those increments done by thread 2 are undone. That said, this can really only happen if the "++" is split into at least 2 machine instructions, so that it can be interrupted in the middle of the operation. If "++" compiles to a single instruction, you are only dealing with interleaving between hardware threads.
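To make the lost update concrete, here is a minimal C++ sketch (not the poster's C#; the names and counts are illustrative) contrasting a plain counter with an atomic one:

#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    long plain = 0;                // ordinary counter: increments can be lost
    std::atomic<long> safe{0};     // atomic counter: every increment survives

    auto work = [&]
    {
        for (int i = 0; i < 1000000; ++i)
        {
            ++plain;                                       // load, add, store: racy
            safe.fetch_add(1, std::memory_order_relaxed);  // one indivisible step
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << "plain: " << plain << "\n";   // usually less than 2000000
    std::cout << "safe:  " << safe  << "\n";   // always 2000000
}

std::atomic's fetch_add plays the same role that Interlocked.Increment plays in the C# answer below: the read-modify-write becomes one indivisible step.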

It's a typical race condition scenario.
So, most likely, ThreadStatic is not doing what you expect here. In this concrete sample, use System.Threading.Interlocked:
static void Main()
{
    const int total = 1000000;
    int count = 0;
    System.Threading.Tasks.Parallel.For(0, total, (i) =>
    {
        System.Threading.Interlocked.Increment(ref count);
    });
    Console.WriteLine(count);
}
Similar question:
C# ThreadStatic + volatile members not working as expected

Give me a scenario where such an output could happen with multi-threading

Just trying to understand threads, race conditions, and how they affect the expected output. In the code below, I once got an output that began with
"2 Thread-1" then "1 Thread-0" .... How could such an output happen? What I understand is as follows:
Step 1: Assuming Thread 0 started first, it incremented counter to 1.
Step 2: Before Thread 0 could print it, Thread 1 incremented counter to 2 and printed it.
Step 3: Thread 0 prints counter, which should be 2, but it prints 1.
How could Thread 0 print counter as 1 when Thread 1 already incremented it to 2?
P.S.: I know that the synchronized keyword can deal with such race conditions, but I just want to get some concepts straight first.
public class Counter {
    static int count = 0;

    public void add(int value) {
        count = count + value;
        System.out.println(count + " " + Thread.currentThread().getName());
    }
}

public class CounterThread extends Thread {
    Counter counter;

    public CounterThread(Counter c) {
        counter = c;
    }

    public void run() {
        for (int i = 0; i < 5; i++) {
            counter.add(1);
        }
    }
}

public class Main {
    public static void main(String args[]) {
        Counter counter = new Counter();
        Thread t1 = new CounterThread(counter);
        Thread t2 = new CounterThread(counter);
        t1.start();
        t2.start();
    }
}
How could Thread 0 print counter as 1 when Thread 1 already incremented it to 2?
There's a lot more going on in these two lines than meets the eye:
count=count+value;
System.out.println(count+" "+ Thread.currentThread().getName());
First of all, the compiler doesn't know anything about threads. Its job is to emit code that will achieve the same end result when executed in a single thread. That is, when all is said and done, the count must be incremented, and the message must be printed.
The compiler has a lot of freedom to re-order operations and to store values in temporary registers, as long as the correct single-threaded end result is achieved in the most efficient way possible. So, for example, the count in the expression count + " " + ... will not necessarily cause the compiler to fetch the latest value of the global count variable. In fact, it probably will not fetch from the global variable at all, because it knows that the result of the + operation is still sitting in a CPU register. And since it does not have to acknowledge that other threads could exist, it knows that the value in the register cannot differ from what it just stored into the global variable.
Second of all, the hardware itself is allowed to stash values in temporary places and re-order operations for efficiency, and it too is allowed to assume that there are no other threads. So, even when the compiler emits code that says to actually fetch from or store to the global variable instead of to or from a register, the hardware does not necessarily store to or fetch from the actual address in memory.
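For concreteness, here is a hypothetical C++ rendering of the kind of code the compiler is allowed to emit for those two lines when it assumes a single thread (tmp stands in for a CPU register):

#include <cstdio>

int count = 0;  // shared global, as in the question

// What the compiler may effectively generate for
//     count = count + value;
//     System.out.println(count + ...);
// when it assumes no other thread exists:
void add(int value)
{
    int tmp = count;           // one load of the global into a "register"
    tmp = tmp + value;
    count = tmp;               // one store back
    std::printf("%d\n", tmp);  // prints the cached register value; the global
                               // is never re-read, even if another thread
                               // has already changed it
}

int main()
{
    add(1);
}

This is exactly how Thread 0 can print 1 after Thread 1 has already driven the global to 2: Thread 0 prints its private register copy, not the current contents of memory.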
Assuming your code example is Java, all of that changes when you make appropriate use of synchronized blocks. For example, if you add synchronized to the declaration of your add method:
public synchronized void add(int value) {
    count = count + value;
    System.out.println(count + " " + Thread.currentThread().getName());
}
That forces the compiler to acknowledge the existence of other threads, and the compiler will emit instructions that force the hardware to acknowledge other threads as well.
By adding synchronized to the add method, you force the hardware to deliver the actual value of the global variable on entry to the method, you force it to actually write the global by the time the method returns, and you prevent more than one thread from being in the method at the same time.

Thread scheduling/Data race conditions

In class, we are studying threads and race conditions. By my estimates, it should be possible for the code below to output the value 8 or 9, since thread 1 could be interrupted by thread 2 after the counter has been decremented in the eax register, but before the updated value is stored back to memory.
#include <pthread.h>
#include <stdio.h>

int counter = 10;

void *worker(void *arg) {
    counter--;
    return NULL;
}

int main(int argc, char *argv[]) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("%d\n", counter);
    return 0;
}
However, when I run the code, I always get the output 8. Is some mechanism of the compiler normalizing the output, or is it only possible for this code to output 8 (i.e., no race condition is ever triggered)?
There's no way for us to tell without knowing lots of complicated details about your platform, compiler, maybe even CPU. The code has a race condition in theory but it may be exceptionally difficult, maybe even impossible, to trigger.
Of course, if you upgrade your compiler or CPU, change compilation options, upgrade your OS, or do any number of other things, it may start behaving differently.
This is one of the reasons race conditions can be so insidious. They can be impossible to trigger under some conditions and then suddenly start happening all the time when some change is made elsewhere.
The code definitely has a race condition.
I don't find it surprising that you're seeing consistent results: starting a thread takes a little while, so there's a good chance that in your case the first thread finishes before the second gets started.
Nonetheless, the code clearly has undefined behavior, because there's no question it has a race condition.
There definitely is a race condition. The reason you're not seeing it is because the increment happens so fast compared to the time it takes to start a thread that it's likely for the first thread to be done before the second thread even starts. You'll see the race condition if you make the amount of work sufficiently large that the first thread will still be running when the second one starts.
For example, modify the worker function to decrement in a loop:

int counter = 1000000000;

void *worker(void *arg)
{
    for (int i = 0; i < 500000000; ++i)
        --counter;
    return NULL;
}
Since counter starts at 1 billion and you're running 2 threads that each decrement it by 500 million, you would expect counter to be 0 at the end if there were no race condition.
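For reference, a self-contained rendering of that experiment, sketched here with C++ std::thread rather than pthreads (the constants are the ones above):

#include <iostream>
#include <thread>

long counter = 1000000000;  // deliberately not atomic and not protected

void worker()
{
    for (long i = 0; i < 500000000; ++i)
        --counter;          // racy read-modify-write on the shared counter
}

int main()
{
    std::thread t1(worker);
    std::thread t2(worker);
    t1.join();
    t2.join();
    // Without the race this would print 0; with two threads overlapping,
    // it almost always prints some other (nondeterministic) value.
    std::cout << counter << "\n";
}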

How to join all threads before deleting the ThreadPool

I am using a MultiThreading class which creates the required number of threads in its own threadpool and deletes itself after use.
std::thread *m_pool; // number of threads according to available cores
std::mutex m_locker;
std::condition_variable m_condition;
std::atomic<bool> m_exit;
int m_processors;

m_pool = new std::thread[m_processors + 1];

void func()
{
    // code
}

for (int i = 0; i < m_processors; i++)
{
    m_pool[i] = std::thread(func);
}

void reset(void)
{
    {
        std::lock_guard<std::mutex> lock(m_locker);
        m_exit = true;
    }
    m_condition.notify_all();
    for (int i = 0; i <= m_processors; i++)
        m_pool[i].join();
    delete[] m_pool;
}
After running through all tasks, the for loop is supposed to join all running threads before delete[] executes.
But there seems to be one last thread still running while m_pool no longer exists.
This leads to the problem that I can't close my program anymore.
Is there any way to check if all threads are joined or wait for all threads to be joined before deleting the threadpool?
Simple typo bug, I think.
The loop condition i <= m_processors is an off-by-one bug: it touches one more entry than you ever started. You allocated m_processors + 1 elements but only launched m_processors threads, so the last element, m_pool[m_processors], is a default-constructed std::thread that was never started. Calling join() on a thread that isn't joinable throws std::system_error, which is exactly the kind of failure that keeps a program from shutting down cleanly. (Had the array been sized m_processors, the same loop would instead read past the end of the array, which is undefined behavior.)
You likely intended i < m_processors (and to allocate only m_processors threads).
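A minimal self-contained sketch of the corrected shutdown, with the worker body reduced to just waiting for m_exit (member names taken from the question, everything else assumed):

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

struct Workers
{
    std::mutex m_locker;
    std::condition_variable m_condition;
    std::atomic<bool> m_exit{false};
    int m_processors = 4;
    std::thread *m_pool = nullptr;

    void start()
    {
        m_pool = new std::thread[m_processors];  // allocate exactly what we start
        for (int i = 0; i < m_processors; i++)
            m_pool[i] = std::thread([this]
            {
                std::unique_lock<std::mutex> lock(m_locker);
                m_condition.wait(lock, [this] { return m_exit.load(); });
            });
    }

    void reset()
    {
        {
            std::lock_guard<std::mutex> lock(m_locker);
            m_exit = true;
        }
        m_condition.notify_all();
        for (int i = 0; i < m_processors; i++)   // i <, not i <=
            m_pool[i].join();
        delete[] m_pool;
        m_pool = nullptr;
    }
};

int main()
{
    Workers w;
    w.start();
    w.reset();
}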
The real source of the problem is addressed by Wick's answer. I will extend it with some tips that also solve your problem while improving other aspects of your code.
If you use C++11 for std::thread, then you shouldn't create your thread handles using operator new[]. There are better ways of doing that with other C++ constructs, which will make everything simpler and exception safe (you don't leak memory if an unexpected exception is thrown).
Store your thread objects in a std::vector. It will manage the memory allocation and deallocation for you (no more new and delete). You can use other more flexible containers such as std::list if you insert/delete threads dynamically.
Fill the vector in place with std::generate_n or similar:
std::vector<std::thread> m_pool;
m_pool.reserve(m_processors);
// Fill the vector
std::generate_n(std::back_inserter(m_pool), m_processors,
                [] { return std::thread(func); });
Join all the elements using a range-for loop, and let the container's own functions release the handles:
for (std::thread &t : m_pool) {
    t.join();
}
m_pool.clear();

Design pattern for asynchronous while loop

I have a function that boils down to:
while (doWork)
{
    config = generateConfigurationForTesting();
    result = executeWork(config);
    doWork = isDone(result);
}
How can I rewrite this for efficient asynchronous execution, assuming all functions are thread safe and independent of previous iterations, and that the work will probably require more iterations than the maximum allowable number of threads?
The problem here is we don't know how many iterations are required in advance so we can't make a dispatch_group or use dispatch_apply.
This is my first attempt, but it looks a bit ugly to me because of the arbitrarily chosen values and the sleeping:
int thread_count = 0;
bool doWork = true;
int max_threads = 20; // arbitrarily chosen number
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

while (doWork)
{
    if (thread_count < max_threads)
    {
        dispatch_async(queue, ^{
            Config myconfig = generateConfigurationForTesting();
            Result myresult = executeWork(myconfig);
            dispatch_async(queue, ^{ checkResult(myresult); });
        });
        thread_count++;
    }
    else
        usleep(100); // don't consume too much CPU
}

void checkResult(Result value)
{
    if (value == good) doWork = false;
    thread_count--;
}
Based on your description, it looks like generateConfigurationForTesting is some kind of randomization technique or other generator that can produce a near-infinite number of configurations (hence your comment that you don't know ahead of time how many iterations you will need). With that assumption, you are basically stuck with the model you've created, since your executor needs to be limited by some reasonable assumptions about the queue and you don't want to over-generate, as that would just extend the length of the run after you have succeeded in finding a value == good measurement.
I would suggest you consider using a queue (or OSAtomicIncrement* and OSAtomicDecrement*) to protect access to thread_count and doWork. As it stands, the thread_count increment and decrement will happen in two different queues (main_queue for the main thread and the default queue for the background task) and thus could simultaneously increment and decrement the thread count. This could lead to an undercount (which would cause more threads to be created than you expect) or an overcount (which would cause you to never complete your task).
Another option for making this look a little nicer would be to have checkResult add new elements to the queue if value != good. That way, you load up the initial elements of the queue using dispatch_apply(20, queue, ^{ ... }) and you don't need thread_count at all. The first 20 will be added using dispatch_apply (or however many dispatch_apply decides to run concurrently for your configuration), and then each time checkResult is called it can either set doWork = false or add another operation to the queue.
dispatch_apply() works for this: just pass ncpu as the number of iterations (dispatch_apply never uses more than ncpu worker threads) and keep each instance of your worker block running for as long as there is more work to do (i.e., loop back to generateConfigurationForTesting() while doWork is still true).
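For readers without GCD, here is an analogous sketch in C++ with std::thread; Config, Result, and the functions are hypothetical stand-ins for the question's, and executeWork is a dummy that succeeds on an arbitrary iteration:

#include <atomic>
#include <thread>
#include <vector>

// Hypothetical stand-ins for the question's types and functions:
struct Config {};
struct Result { bool good; };
Config generateConfigurationForTesting() { return Config{}; }
Result executeWork(const Config &)
{
    static std::atomic<long> tries{0};           // dummy: "succeed" on one try
    return Result{tries.fetch_add(1) == 250000};
}

int main()
{
    std::atomic<bool> done{false};
    unsigned n = std::thread::hardware_concurrency();  // the answer's "ncpu"
    if (n == 0) n = 2;                                 // fallback if unknown

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back([&done]
        {
            // Each worker loops back for a fresh configuration until some
            // worker finds a good result; no thread_count, no usleep().
            while (!done.load(std::memory_order_relaxed))
            {
                Config c = generateConfigurationForTesting();
                Result r = executeWork(c);
                if (r.good)
                    done = true;
            }
        });

    for (std::thread &t : pool)
        t.join();
}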

Can I assign a per-thread index, using pthreads?

I'm optimizing some instrumentation for my project (Linux, ICC, pthreads), and would like some feedback on this technique for assigning a unique index to a thread, so I can use it to index into an array of per-thread data.
The old technique uses a std::map based on pthread id, but I'd like to avoid locks and a map lookup if possible (it is creating a significant amount of overhead).
Here is my new technique:
static PerThreadInfo info[MAX_THREADS]; // shared, each index is per thread
// Allow each thread a unique sequential index, used for indexing into per
// thread data.
1: static size_t GetThreadIndex()
2: {
3:     static size_t threadCount = 0;
4:     __thread static size_t myThreadIndex = threadCount++;
5:     return myThreadIndex;
6: }
later in the code:
// add some info per thread, so it can be aggregated globally
info[ GetThreadIndex() ] = MyNewInfo();
So:
1) It looks like line 4 could be a race condition if two threads were created at exactly the same time. If so, how can I avoid this (preferably without locks)? I can't see how an atomic increment would help here.
2) Is there a better way to create a per-thread index? Maybe by pre-generating the TLS index on thread creation?
1) An atomic increment would actually help here, as the possible race is two threads reading and assigning the same ID to themselves; making sure the increment (read number, add 1, store number) happens atomically fixes that race condition. On Intel a "lock; inc" would do the trick, or whatever your platform offers (such as InterlockedIncrement() on Windows). A C++11 sketch of this appears after point 2 below.
2) Well, you could actually make the whole info thread-local ("__thread static PerThreadInfo info;"), provided your only aim is to be able to access the data per-thread easily and under a common name. If you actually want it to be a globally accessible array, then saving the index as you do using TLS is a very straightforward and efficient way to do this. You could also pre-compute the indexes and pass them along as arguments at thread creation, as Kromey noted in his post.
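Tying point 1) together, a C++11 sketch of GetThreadIndex with the race removed (thread_local, unlike the non-standard __thread, permits a dynamic initializer; the two-thread main is just a demo):

#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>

static std::size_t GetThreadIndex()
{
    static std::atomic<std::size_t> threadCount{0};
    // fetch_add is a single indivisible read-modify-write, so no two
    // threads can claim the same index; the thread_local initializer
    // runs once per thread, on its first call to GetThreadIndex().
    thread_local std::size_t myThreadIndex = threadCount.fetch_add(1);
    return myThreadIndex;
}

int main()
{
    std::thread a([] { std::cout << GetThreadIndex() << "\n"; });
    std::thread b([] { std::cout << GetThreadIndex() << "\n"; });
    a.join();
    b.join();
}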
Why so averse to using locks? Solving race conditions is exactly what they're designed for...
At any rate, you can use the 4th argument to pthread_create() to pass an argument to your threads' start routine; that way, your master thread can run an incrementing counter as it launches the threads and pass the current count to each thread as it is created, giving each thread its unique index.
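A sketch of that approach, with the index smuggled through the void* argument (a common pthreads idiom; written as C++ here for consistency with the other examples):

#include <pthread.h>
#include <cstdio>

void *worker(void *arg)
{
    // Recover the index the main thread handed us at creation time.
    size_t index = (size_t)arg;
    std::printf("thread index %zu\n", index);
    return NULL;
}

int main()
{
    pthread_t threads[4];
    for (size_t i = 0; i < 4; ++i)
        pthread_create(&threads[i], NULL, worker, (void *)i);  // 4th argument
    for (size_t i = 0; i < 4; ++i)
        pthread_join(threads[i], NULL);
    return 0;
}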
I know you tagged this [pthreads], but you also mentioned the "old technique" of using std::map, which leads me to believe that you're programming in C++. In C++11 you have std::thread, and you can pass unique indexes (ids) to your threads at thread creation time through an ordinary function parameter.
Below is an example HelloWorld that creates N threads, assigning each an index of 0 through N-1. Each thread does nothing but say "hi" and give its index:
#include <iostream>
#include <thread>
#include <mutex>
#include <vector>

inline void sub_print() {}

template <class A0, class ...Args>
void
sub_print(const A0& a0, const Args& ...args)
{
    std::cout << a0;
    sub_print(args...);
}

std::mutex&
cout_mut()
{
    static std::mutex m;
    return m;
}

template <class ...Args>
void
print(const Args& ...args)
{
    std::lock_guard<std::mutex> _(cout_mut());
    sub_print(args...);
}

void f(int id)
{
    print("This is thread ", id, "\n");
}

int main()
{
    const int N = 10;
    std::vector<std::thread> threads;
    for (int i = 0; i < N; ++i)
        threads.push_back(std::thread(f, i));
    for (auto i = threads.begin(), e = threads.end(); i != e; ++i)
        i->join();
}
My output:
This is thread 0
This is thread 1
This is thread 4
This is thread 3
This is thread 5
This is thread 7
This is thread 6
This is thread 2
This is thread 9
This is thread 8
