I want to bind the threads in my code to each physical core.
With GCC I have successfully done this using sched_setaffinity, so I no longer have to set export OMP_PROC_BIND=true. I want to do the same thing on Windows with MSVC. Windows and Linux use different thread topologies: Linux scatters the threads while Windows uses a compact layout. In other words, on Linux with four cores and eight hyper-threads I only need to bind the threads to the first four processing units; on Windows I have to set them to every other processing unit.
I have successfully done this using SetProcessAffinityMask. I can see from Windows Task Manager, when I right-click on the process and click "Set Affinity", that every other CPU is set (0, 2, 4, 6 on my eight-hyper-thread system). The problem is that the efficiency of my code is unstable when I run it. Sometimes it's nearly constant, but most of the time it fluctuates a lot. I changed the priority to high but it makes no difference. On Linux the efficiency is stable. Maybe Windows is still migrating the threads? Is there something else I need to do to bind the threads in Windows?
Here is the code I'm using
#ifdef _WIN32
HANDLE process;
DWORD_PTR processAffinityMask = 0;
//Windows uses a compact thread topology. Set mask to every other thread
for(int i=0; i<ncores; i++) processAffinityMask |= (DWORD_PTR)1 << (2*i);
//processAffinityMask = 0x55;
process = GetCurrentProcess();
SetProcessAffinityMask(process, processAffinityMask);
#else
cpu_set_t mask;
CPU_ZERO(&mask);
for(int i=0; i<ncores; i++) CPU_SET(i, &mask);
sched_setaffinity(0, sizeof(mask), &mask);
#endif
Edit: here is the code I use now, which seems to be stable on Linux and Windows
#ifdef _WIN32
HANDLE process;
DWORD_PTR processAffinityMask = 0;
//Windows uses a compact thread topology. Set mask to every other thread
for(int i=0; i<ncores; i++) processAffinityMask |= (DWORD_PTR)1 << (2*i);
process = GetCurrentProcess();
SetProcessAffinityMask(process, processAffinityMask);
#pragma omp parallel
{
HANDLE thread = GetCurrentThread();
DWORD_PTR threadAffinityMask = (DWORD_PTR)1 << (2*omp_get_thread_num());
SetThreadAffinityMask(thread, threadAffinityMask);
}
#else
cpu_set_t mask;
CPU_ZERO(&mask);
for(int i=0; i<ncores; i++) CPU_SET(i, &mask);
sched_setaffinity(0, sizeof(mask), &mask);
#pragma omp parallel
{
cpu_set_t mask;
CPU_ZERO(&mask);
CPU_SET(omp_get_thread_num(),&mask);
pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
}
#endif
You are setting the process's affinity mask; to pin individual threads you should use the SetThreadAffinityMask function (see the MSDN reference).
You can obtain a thread ID in OpenMP with this code:
int tid = omp_get_thread_num();
However, the code above provides OpenMP's internal thread ID, not the system thread ID. This article explains more on the subject:
http://msdn.microsoft.com/en-us/magazine/cc163717.aspx
If you need to explicitly work with those threads, use the explicit affinity type as explained in this Intel documentation:
https://software.intel.com/sites/products/documentation/studio/composer/en-us/2011Update/compiler_c/optaps/common/optaps_openmp_thread_affinity.htm
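For example, with Intel's OpenMP runtime (the one the linked documentation describes, not MSVC's vcomp), the explicit affinity type can do the pinning for you. This is only a hedged sketch: it assumes the 8-logical-CPU layout from the question, the Windows-only branch of your code (windows.h available), and that the variable is visible before the runtime initializes at the first parallel region.
// equivalent to launching the program with
//   KMP_AFFINITY=granularity=fine,proclist=[0,2,4,6],explicit
// already set in the environment
SetEnvironmentVariableA("KMP_AFFINITY",
                        "granularity=fine,proclist=[0,2,4,6],explicit");
#pragma omp parallel
{
    // threads are pinned by the runtime; no SetThreadAffinityMask calls needed
}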
I have two functions which I want to run using threads.
1) CPU function, which I can join to thread using:
thread t1(vector_add, p->iNum1, p->iNum2, p->iNumAns, p->flag);
t1.join();
2) and a GPU kernel
vectorAdd_gpu <<<blocksPerGrid, threadsPerBlock >>>(s.a1, s.a2, s.a2, s.flag);
But my problem is how to launch the GPU kernel from a thread and join it, so that it can run simultaneously with the CPU function.
vectorAdd_gpu <<<blocksPerGrid, threadsPerBlock >>>(s.a1, s.a2, s.a2, s.flag);
thread t2(vectorAdd_gpu);
t2.join();
Is there any other way to run a CPU and a GPU function simultaneously using threads?
As talonmies said,
Put its call into a lambda function
auto myFunc = [&](){
cudaStream_t stream2;
cudaSetDevice(device2);
cudaStreamCreate (&stream2);
vectorAdd_gpu <<<blocksPerGrid, threadsPerBlock,0,stream2 >>>(s.a1, s.a2, s.a2, s.flag);
cudaStreamSynchronize(stream2);
cudaStreamDestroy(stream2);
};
then give it to thread.
thread t2(myFunc);
t2.join();
But instead of this, you can still use the same main thread of your application, together with streams, to overlap GPU work with CPU work asynchronously. I just showed what you wanted to see. Using the same thread asynchronously could be more efficient than re-creating streams and re-joining threads, depending on the size of the work. Maybe re-joining has more overhead than synchronizing and launching the kernel here. How many kernel calls do you make per second?
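A hedged sketch of that single-thread alternative, reusing the identifiers from the question (vectorAdd_gpu, vector_add, s, p): the kernel launch returns to the host immediately, so the main thread can launch first and then do the CPU part itself.
cudaStream_t stream2;
cudaStreamCreate(&stream2);

// asynchronous: returns to the host immediately, the GPU works on stream2
vectorAdd_gpu<<<blocksPerGrid, threadsPerBlock, 0, stream2>>>(s.a1, s.a2, s.a2, s.flag);

// the CPU part now runs concurrently with the kernel, on the same host thread
vector_add(p->iNum1, p->iNum2, p->iNumAns, p->flag);

cudaStreamSynchronize(stream2);   // wait for the GPU part to finish
cudaStreamDestroy(stream2);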
In the following blog from Nvidia, https://devblogs.nvidia.com/how-overlap-data-transfers-cuda-cc/ there is a nice example about single-thread asynchronous CUDA:
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
cudaMemcpyAsync(&d_a[offset], &a[offset],
streamBytes, cudaMemcpyHostToDevice, stream[i]);
}
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
}
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
cudaMemcpyAsync(&a[offset], &d_a[offset],
streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}
This is only one of several ways to do asynchronous stream overlapping.
I've reduced a crash to the following toy code:
// DLLwithOMP.cpp : build into a dll *with* /openmp
#include <tchar.h>
extern "C"
{
__declspec(dllexport) void funcOMP()
{
#pragma omp parallel for
for (int i = 0; i < 100; i++)
_tprintf(_T("Please fondle my buttocks\n"));
}
}
// ConsoleApplication1.cpp : build into an executable *without* /openmp
#include <windows.h>
#include <stdio.h>
#include <tchar.h>
typedef void(*tDllFunc) ();
int main()
{
HMODULE hDLL = LoadLibrary(_T("DLLwithOMP.dll"));
tDllFunc pDllFunc = (tDllFunc)GetProcAddress(hDLL, "funcOMP");
pDllFunc();
FreeLibrary(hDLL);
// At this point the omp runtime vcomp140[d].dll refcount is zero
// and windows unloads it, but the omp thread team remains active.
// A crash usually ensues.
return 0;
}
Is this an MS bug? Is there some OMP thread-cleanup API I missed (probably not, but maybe)? I don't have other compilers at hand. Do they treat this scenario differently? (again, probably not) Does the OMP standard have anything to say about such a scenario?
I got an answer from Eric Brumer @ MS Connect. Re-posting it here in case it is of interest to anyone in the future:
for optimal performance, the openmp threadpool spin waits for about a
second prior to shutting down in case more work becomes available. If
you unload a DLL that's in the process of spin-waiting, it will crash
in the manner you see (most of the time).
You can tell openmp not to spin-wait and the threads will immediately
block after the loop finishes. Just set OMP_WAIT_POLICY=passive in
your environment, or call SetEnvironmentVariable(L"OMP_WAIT_POLICY",
L"passive"); in your function before loading the dll. The default is
"active" which tells the threadpool to spin wait. Use the environment
variable, or just wait a few seconds before calling FreeLibrary.
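A hedged sketch of the second workaround applied to the loader from the question; the only change is the SetEnvironmentVariable call before LoadLibrary:
#include <windows.h>
#include <tchar.h>

typedef void(*tDllFunc) ();

int main()
{
    // make the OpenMP worker threads block after the loop instead of spin-waiting
    SetEnvironmentVariableW(L"OMP_WAIT_POLICY", L"passive");
    HMODULE hDLL = LoadLibrary(_T("DLLwithOMP.dll"));
    tDllFunc pDllFunc = (tDllFunc)GetProcAddress(hDLL, "funcOMP");
    pDllFunc();
    FreeLibrary(hDLL);   // per the answer above, no spin-waiting workers remain when the runtime unloads
    return 0;
}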
I want to count the (more or less) exact number of instructions executed by some piece of code. Additionally, I want to receive a signal after a specific number of instructions has been executed.
For this purpose, I use the overflow signal behaviour provided by
perf_event_open.
I'm using the second way the manpage proposes to achieve overflow signals:
Signal overflow
Events can be set to deliver a signal when a threshold
is crossed. The signal handler is set up using the poll(2), select(2),
epoll(2) and fcntl(2), system calls.
[...]
The other way is by use of the PERF_EVENT_IOC_REFRESH ioctl. This
ioctl adds to a counter that decrements each time the event overflows.
When nonzero, a POLL_IN signal is sent on overflow, but once the value
reaches 0, a signal is sent of type POLL_HUP and the underlying event
is disabled.
Further explanation of PERF_EVENT_IOC_REFRESH ioctl:
PERF_EVENT_IOC_REFRESH
Non-inherited overflow counters can use this to enable a
counter for a number of overflows specified by the argument,
after which it is disabled. Subsequent calls of this ioctl
add the argument value to the current count. A signal with
POLL_IN set will happen on each overflow until the count
reaches 0; when that happens a signal with POLL_HUP set is
sent and the event is disabled. Using an argument of 0 is
considered undefined behavior.
A very minimal example would look like this:
#define _GNU_SOURCE 1
#include <asm/unistd.h>
#include <fcntl.h>
#include <linux/perf_event.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
long perf_event_open(struct perf_event_attr* event_attr, pid_t pid, int cpu, int group_fd, unsigned long flags)
{
return syscall(__NR_perf_event_open, event_attr, pid, cpu, group_fd, flags);
}
static void perf_event_handler(int signum, siginfo_t* info, void* ucontext) {
if(info->si_code != POLL_HUP) {
// Only POLL_HUP should happen.
exit(EXIT_FAILURE);
}
ioctl(info->si_fd, PERF_EVENT_IOC_REFRESH, 1);
}
int main(int argc, char** argv)
{
// Configure signal handler
struct sigaction sa;
memset(&sa, 0, sizeof(struct sigaction));
sa.sa_sigaction = perf_event_handler;
sa.sa_flags = SA_SIGINFO;
// Setup signal handler
if (sigaction(SIGIO, &sa, NULL) < 0) {
fprintf(stderr,"Error setting up signal handler\n");
perror("sigaction");
exit(EXIT_FAILURE);
}
// Configure perf_event_attr struct
struct perf_event_attr pe;
memset(&pe, 0, sizeof(struct perf_event_attr));
pe.type = PERF_TYPE_HARDWARE;
pe.size = sizeof(struct perf_event_attr);
pe.config = PERF_COUNT_HW_INSTRUCTIONS; // Count retired hardware instructions
pe.disabled = 1; // Event is initially disabled
pe.sample_type = PERF_SAMPLE_IP;
pe.sample_period = 1000;
pe.exclude_kernel = 1; // excluding events that happen in the kernel-space
pe.exclude_hv = 1; // excluding events that happen in the hypervisor
pid_t pid = 0; // measure the current process/thread
int cpu = -1; // measure on any cpu
int group_fd = -1;
unsigned long flags = 0;
int fd = perf_event_open(&pe, pid, cpu, group_fd, flags);
if (fd == -1) {
fprintf(stderr, "Error opening leader %llx\n", pe.config);
perror("perf_event_open");
exit(EXIT_FAILURE);
}
// Setup event handler for overflow signals
fcntl(fd, F_SETFL, O_NONBLOCK|O_ASYNC);
fcntl(fd, F_SETSIG, SIGIO);
fcntl(fd, F_SETOWN, getpid());
ioctl(fd, PERF_EVENT_IOC_RESET, 0); // Reset event counter to 0
ioctl(fd, PERF_EVENT_IOC_REFRESH, 1); // Enable the event; after one overflow a POLL_HUP signal is sent and the event is disabled
// Start monitoring
long loopCount = 1000000;
long c = 0;
long i = 0;
// Some sample payload.
for(i = 0; i < loopCount; i++) {
c += 1;
}
// End monitoring
ioctl(fd, PERF_EVENT_IOC_DISABLE, 0); // Disable event
long long counter;
read(fd, &counter, sizeof(long long)); // Read event counter value
printf("Used %lld instructions\n", counter);
close(fd);
}
So basically I'm doing the following:
Set up a signal handler for SIGIO signals
Create a new performance counter with perf_event_open (returns a file descriptor)
Use fcntl to add signal sending behavior to the file descriptor.
Run a payload loop to execute many instructions.
When executing the payload loop, at some point 1000 instructions (the sample_period) will have been executed. According to the perf_event_open manpage this triggers an overflow, which will then decrement an internal counter.
Once this counter reaches zero, "a signal is sent of type POLL_HUP and the underlying event is disabled."
When a signal is sent, the control flow of the current process/thread is stopped, and the signal handler is executed. Scenario:
1000 instructions have been executed.
Event is automatically disabled and a signal is sent.
Signal is immediately delivered, control flow of the process is stopped and the signal handler is executed.
This scenario would mean two things:
The final count of instructions would always be equal to that of an example which does not use signals at all.
The instruction pointer which has been saved for the signal handler (and can be accessed through ucontext) would directly point to the instruction which caused the overflow.
Basically you could say, the signal behavior can be seen as synchronous.
This is the perfect semantic for what I want to achieve.
However, as far as I understand, the signal I configured is generally asynchronous, and some time may pass until it is eventually delivered and the signal handler is executed. This may pose a problem for me.
For example, consider the following scenario:
1000 instructions have been executed.
Event is automatically disabled and a signal is sent.
Some more instructions pass
Signal is delivered, control flow of the process is stopped and the signal handler is executed.
This scenario would mean two things:
The final count of instructions would be less than that of an example which does not use signals at all.
The instruction pointer which has been saved for the signal handler would point either to the instruction which caused the overflow or to some instruction after it.
So far, I've tested the above example a lot and have not observed any missing instructions, which supports the first scenario.
However, I'd really like to know, whether I can rely on this assumption or not.
What happens in the kernel?
I want to count the (more or less) exact number of instructions executed by some piece of code. Additionally, I want to receive a signal after a specific number of instructions has been executed.
You have two tasks which may conflict with each other. When you want counting (an exact number of some hardware event), just use the performance monitoring unit (PMU) of your CPU in counting mode (don't set sample_period/sample_freq in the perf_event_attr structure) and place the measurement code in your target program (as was done in your example). In this mode, according to the man page of perf_event_open, no overflows will be generated (a CPU's PMU counters are usually 64 bits wide and don't overflow unless they are preloaded with a small negative value, which is what sampling mode does):
Overflows are generated only by sampling events (sample_period must have a nonzero value).
To count only part of a program, use the ioctls on the fd returned by perf_event_open, as described in the man page:
perf_event ioctl calls - Various ioctls act on perf_event_open() file descriptors: PERF_EVENT_IOC_ENABLE ... PERF_EVENT_IOC_DISABLE ... PERF_EVENT_IOC_RESET
You can read the current value with rdpmc (on x86) or with the read syscall on the fd, as in the short example from the man page:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <asm/unistd.h>
static long
perf_event_open(struct perf_event_attr *hw_event, pid_t pid,
int cpu, int group_fd, unsigned long flags)
{
int ret;
ret = syscall(__NR_perf_event_open, hw_event, pid, cpu,
group_fd, flags);
return ret;
}
int
main(int argc, char **argv)
{
struct perf_event_attr pe;
long long count;
int fd;
memset(&pe, 0, sizeof(struct perf_event_attr));
pe.type = PERF_TYPE_HARDWARE;
pe.size = sizeof(struct perf_event_attr);
pe.config = PERF_COUNT_HW_INSTRUCTIONS;
pe.disabled = 1;
pe.exclude_kernel = 1;
pe.exclude_hv = 1;
fd = perf_event_open(&pe, 0, -1, -1, 0);
if (fd == -1) {
fprintf(stderr, "Error opening leader %llx\n", pe.config);
exit(EXIT_FAILURE);
}
ioctl(fd, PERF_EVENT_IOC_RESET, 0);
ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
printf("Measuring instruction count for this printf\n");
/* Place target code here instead of printf */
ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
read(fd, &count, sizeof(long long));
printf("Used %lld instructions\n", count);
close(fd);
}
Additionally, I want to receive a signal after a specific number of instructions has been executed.
Do you really want to get a signal, or do you just need the instruction pointer at every 1000th executed instruction? If you want to collect pointers, use perf_event_open in sampling mode, but do it from another program so that the event-collection code is not part of the measurement. It will also have less negative effect on your target program if you do not use signals for every overflow (with a huge number of kernel/tracer interactions and switches from/to the kernel), but instead use the capability of perf_events to collect several overflow events into a single mmap buffer and poll on that buffer. On an overflow interrupt from the PMU, the perf interrupt handler is called to save the instruction pointer into the buffer, the counter is reset, and the program returns to execution. In your example, the perf interrupt handler wakes your program, which does several syscalls and returns to the kernel, and only then does the kernel restart the target code (so the overhead per sample is greater than with mmap and parsing the buffer). With the precise_ip flag you may activate advanced sampling in your PMU (if it has such a mode, like PEBS and PREC_DIST on Intel x86/em64t for some counters such as INST_RETIRED, UOPS_RETIRED, BR_INST_RETIRED, BR_MISP_RETIRED, MEM_UOPS_RETIRED, MEM_LOAD_UOPS_RETIRED, MEM_LOAD_UOPS_LLC_HIT_RETIRED, and with a simple hack for cycles too; or like IBS on AMD x86/amd64; see the paper about PEBS and IBS), where the instruction address is saved directly by hardware with low skid. Some very advanced PMUs have the ability to do sampling in hardware, storing overflow information for several events in a row with automatic reset of the counter and without software interrupts (some descriptions of precise_ip are in the same paper).
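The mmap-and-poll approach mentioned above would look roughly like the following. This is a hedged sketch, not code from the answer: it samples the current process (pid = 0) for simplicity, drains the ring only once at the end, ignores records that wrap around the end of the ring, and the buffer size and payload loop are arbitrary choices; a real tool would attach to another pid and poll(2) on the fd while the target runs.
#define _GNU_SOURCE 1
#include <asm/unistd.h>
#include <inttypes.h>
#include <linux/perf_event.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                            int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr pe;
    memset(&pe, 0, sizeof(pe));
    pe.type = PERF_TYPE_HARDWARE;
    pe.size = sizeof(pe);
    pe.config = PERF_COUNT_HW_INSTRUCTIONS;
    pe.disabled = 1;
    pe.exclude_kernel = 1;
    pe.exclude_hv = 1;
    pe.sample_type = PERF_SAMPLE_IP;   // each record carries one u64 instruction pointer
    pe.sample_period = 1000;

    int fd = perf_event_open(&pe, 0, -1, -1, 0);
    if (fd == -1) { perror("perf_event_open"); exit(EXIT_FAILURE); }

    long page = sysconf(_SC_PAGESIZE);
    // one metadata page plus a power-of-two data area (8 pages here)
    char *base = mmap(NULL, (1 + 8) * page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }
    struct perf_event_mmap_page *meta = (struct perf_event_mmap_page *)base;
    char *data = base + page;
    uint64_t data_size = 8 * page;

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile long c = 0;                       // sample payload
    for (long i = 0; i < 1000000; i++) c += 1;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    // drain the ring: each record is a perf_event_header followed by one u64 ip
    uint64_t head = meta->data_head;
    __sync_synchronize();                      // read barrier, as the man page advises
    uint64_t tail = meta->data_tail;
    while (tail < head) {
        struct perf_event_header *hdr =
            (struct perf_event_header *)(data + (tail % data_size));
        if (hdr->type == PERF_RECORD_SAMPLE) {
            uint64_t ip;
            memcpy(&ip, (char *)hdr + sizeof(*hdr), sizeof(ip));
            printf("sample ip: %#" PRIx64 "\n", ip);
        }
        tail += hdr->size;
    }
    meta->data_tail = tail;
    close(fd);
    return 0;
}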
I don't know if it is possible in the perf_events subsystem and with your CPU to have two perf_event tasks active at the same time: counting events in the target process while simultaneously sampling it from another process. With an advanced PMU this may be possible in hardware, and perf_events in modern kernels may allow it. But you give no details about your kernel version or your CPU vendor and family, so we can't answer this part.
You may also try other APIs to access the PMU, such as PAPI or likwid (https://github.com/RRZE-HPC/likwid). Some of them read the PMU registers directly (sometimes via MSRs) and may allow sampling at the same time as counting is enabled.
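For example, a hedged sketch of counting the same event through PAPI's low-level API (error handling omitted; PAPI_TOT_INS is PAPI's preset event for retired instructions):
#include <papi.h>
#include <stdio.h>

int main(void)
{
    int eventset = PAPI_NULL;
    long long count = 0;

    PAPI_library_init(PAPI_VER_CURRENT);     // must be the first PAPI call
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_TOT_INS);  // retired instructions

    PAPI_start(eventset);
    /* place target code here */
    PAPI_stop(eventset, &count);             // stop and read the counter

    printf("Used %lld instructions\n", count);
    return 0;
}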
I wrote a simple program to calculate the maximum number of threads that a process can have in Linux (CentOS 5). Here is the code:
#include <pthread.h>
#include <unistd.h>
#include <iostream>

void *thread(void *i);

int main()
{
    pthread_t thrd[400];
    for (int i = 0; i < 400; i++)
    {
        int err = pthread_create(&thrd[i], NULL, thread, (void*)(long)i);
        if (err != 0)
            std::cout << "thread creation failed: " << i << " error code: " << err << std::endl;
    }
    return 0;
}

void *thread(void *i)
{
    sleep(100); // keep the thread alive
    return 0;
}
I found that the maximum number of threads is only 300!? What if I need more than that?
I have to mention that pthread_create returns 12 (ENOMEM) as the error code.
Thanks in advance.
There is a thread limit in Linux, and it can be modified at runtime by writing the desired limit to /proc/sys/kernel/threads-max. The default value is computed from the available system memory. In addition to that limit, there's also another one: /proc/sys/vm/max_map_count, which limits the maximum number of mmapped segments, and at least recent kernels will mmap memory per thread. It should be safe to increase that limit a lot if you hit it.
However, the limit you're hitting is the lack of virtual memory in a 32-bit operating system. Install a 64-bit Linux if your hardware supports it and you'll be fine. I can easily start 30000 threads with a stack size of 8 MB. The system has a single Core 2 Duo + 8 GB of system memory (I'm using 5 GB for other stuff at the same time) and it's running 64-bit Ubuntu with kernel 2.6.32. Note that memory overcommit (/proc/sys/vm/overcommit_memory) must be allowed, because otherwise the system would need at least 240 GB of committable memory (the sum of real memory and swap space).
If you need lots of threads and cannot use a 64-bit system, your only choice is to minimize the memory usage per thread to conserve virtual memory. Start by requesting as little stack as you can live with, as in the sketch below.
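For example, a hedged sketch of requesting a smaller stack in the question's creation loop (thrd, thread, and i are the question's identifiers; the 64 KiB figure is only an illustration and must be at least PTHREAD_STACK_MIN and large enough for whatever the thread actually does):
#include <limits.h>
#include <pthread.h>

pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setstacksize(&attr, 64 * 1024);            // instead of the 8-10 MB default
int err = pthread_create(&thrd[i], &attr, thread, (void*)(long)i);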
Your system limits may not be allowing you to map the stacks of all the threads you require. Look at /proc/sys/vm/max_map_count, and see this answer. I'm not 100% sure this is your problem, because most people run into problems at much larger thread counts.
I had also encountered the same problem when my number of threads crossed some threshold.
It was because of the user-level limit (the number of processes a user can run at a time), set to 1024 in /etc/security/limits.conf.
So check your /etc/security/limits.conf and look for an entry like:
username    -    nproc    1024
(the second field can be soft, hard, or - for both). Change it to some larger value, e.g. 100k (requires sudo/root privileges), and it should work for you.
To learn more about the security policy, see http://linux.die.net/man/5/limits.conf.
Check the stack size per thread with ulimit; in my case (Red Hat Linux 2.6):
ulimit -a
...
stack size (kbytes, -s) 10240
Each of your threads will get this amount of memory (10 MB) assigned for its stack. With a 32-bit program and a maximum address space of 4 GB, that is a maximum of only 4096 MB / 10 MB = 409 threads!!! Subtract program code and heap space and you probably end up at your observed maximum of 300 threads.
You should be able to raise this by compiling a 64-bit application or by setting ulimit -s 8192 or even ulimit -s 4096. Whether this is advisable is another discussion...
You will run out of memory too, unless you shrink the default thread stack size. It's 10 MB on our version of Linux.
EDIT:
Error code 12 = out of memory, so I think the 1 MB stack is still too big for you. Compiled for 32-bit, I can get a 100 KB stack to give me 30k threads. Beyond 30k threads I get error code 11, which means no more threads allowed. A 1 MB stack gives me about 4k threads before error code 12. 10 MB gives me 427 threads. 100 MB gives me 42 threads. 1 GB gives me 4... We have a 64-bit OS with 64 GB of RAM. Is your OS 32-bit? When I compile for 64-bit, I can use any stack size I want and get the full limit of threads.
Also I noticed that if I turn the profiling stuff (Tools|Profiling) on for NetBeans and run from the IDE... I can only get 400 threads. Weird. NetBeans also dies if you use up all the threads.
Here is a test app you can run:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <signal.h>
#include <sched.h>
// this prevents the compiler from reordering code over this COMPILER_BARRIER;
// it emits no instructions at runtime
#define COMPILER_BARRIER() __asm__ __volatile__ ("" ::: "memory")
sigset_t _fSigSet;
volatile int _cActive = 0;
pthread_t thrd[1000000];
void * thread(void *i)
{
int nSig, cActive;
cActive = __sync_fetch_and_add(&_cActive, 1);
COMPILER_BARRIER(); // make sure the active count is incremented before sigwait
// sigwait is a handy way to sleep a thread and wake it on command
sigwait(&_fSigSet, &nSig); //make the thread still alive
COMPILER_BARRIER(); // make sure the active count is decremented after sigwait
cActive = __sync_fetch_and_add(&_cActive, -1);
//printf("%d(%d) ", i, cActive);
return 0;
}
int main(int argc, char** argv)
{
pthread_attr_t attr;
int cThreadRequest, cThreads, i, err, cActive, cbStack;
cbStack = (argc > 1) ? atoi(argv[1]) : 0x100000;
cThreadRequest = (argc > 2) ? atoi(argv[2]) : 30000;
sigemptyset(&_fSigSet);
sigaddset(&_fSigSet, SIGUSR1);
sigaddset(&_fSigSet, SIGSEGV);
printf("Start\n");
pthread_attr_init(&attr);
if ((err = pthread_attr_setstacksize(&attr, cbStack)) != 0)
printf("pthread_attr_setstacksize failed: err: %d %s\n", err, strerror(err));
for (i = 0; i < cThreadRequest; i++)
{
if ((err = pthread_create(&thrd[i], &attr, thread, (void*)i)) != 0)
{
printf("pthread_create failed on thread %d, error code: %d %s\n",
i, err, strerror(err));
break;
}
}
cThreads = i;
printf("\n");
// wait for threads to all be created, although we might not wait for
// all threads to make it through sigwait
while (1)
{
cActive = _cActive;
if (cActive == cThreads)
break;
printf("Waiting A %d/%d,", cActive, cThreads);
sched_yield();
}
// wake em all up so they exit
for (i = 0; i < cThreads; i++)
pthread_kill(thrd[i], SIGUSR1);
// wait for them all to exit, although we might be able to exit before
// the last thread returns
while (1)
{
cActive = _cActive;
if (!cActive)
break;
printf("Waiting B %d/%d,", cActive, cThreads);
sched_yield();
}
printf("\nDone. Threads requested: %d. Threads created: %d. StackSize=%lfmb\n",
cThreadRequest, cThreads, (double)cbStack/0x100000);
return 0;
}
So in my illumination days, I started to think about how on earth Windows/Linux implement the mutex. I've implemented this synchronizer in a hundred different ways, on many different architectures, but I never thought about how it is really implemented in a big OS; for example, in the ARM world I made some of my synchronizers by disabling interrupts, but I always thought that it wasn't a really good way to do it.
I tried to "swim" through the Linux kernel, but I couldn't find anything that satisfies my curiosity. I'm not an expert in threading, but I have a solid grasp of all the basic and intermediate concepts.
So does anyone know how a mutex is implemented?
A quick look at code apparently from one Linux distribution seems to indicate that it is implemented using an interlocked compare and exchange. So, in some sense, the OS isn't really implementing it since the interlocked operation is probably handled at the hardware level.
Edit: As Hans points out, the interlocked exchange does the compare and the exchange in an atomic manner. Here is the documentation for the Windows version. For fun, I just wrote a small test to show a really simple example of creating a mutex like that. This is a simple acquire-and-release test.
#include <windows.h>
#include <assert.h>
#include <stdio.h>
struct homebrew {
LONG *mutex;
int *shared;
int mine;
};
#define NUM_THREADS 10
#define NUM_ACQUIRES 100000
DWORD WINAPI SomeThread( LPVOID lpParam )
{
struct homebrew *test = (struct homebrew*)lpParam;
while ( test->mine < NUM_ACQUIRES ) {
// Test and set the mutex. If it currently has value 0, then it
// is free. Setting 1 means it is owned. This interlocked function does
// the test and set as an atomic operation
if ( 0 == InterlockedCompareExchange( test->mutex, 1, 0 )) {
// this thread now owns the mutex. Increment the shared variable
// without an atomic increment (relying on mutex ownership to protect it)
(*test->shared)++;
test->mine++;
// Release the mutex (4 byte aligned assignment is atomic)
*test->mutex = 0;
}
}
return 0;
}
int main( int argc, char* argv[] )
{
LONG mymutex = 0; // zero means the mutex is free (unowned)
int shared = 0;
HANDLE threads[NUM_THREADS];
struct homebrew test[NUM_THREADS];
int i;
// Initialize each thread's structure. All share the same mutex and a shared
// counter
for ( i = 0; i < NUM_THREADS; i++ ) {
test[i].mine = 0; test[i].shared = &shared; test[i].mutex = &mymutex;
}
// create the threads and then wait for all to finish
for ( i = 0; i < NUM_THREADS; i++ )
threads[i] = CreateThread(NULL, 0, SomeThread, &test[i], 0, NULL);
for ( i = 0; i < NUM_THREADS; i++ )
WaitForSingleObject( threads[i], INFINITE );
// Verify all increments occurred atomically
printf( "shared = %d (%s)\n", shared,
shared == NUM_THREADS * NUM_ACQUIRES ? "correct" : "wrong" );
for ( i = 0; i < NUM_THREADS; i++ ) {
if ( test[i].mine != NUM_ACQUIRES ) {
printf( "Thread %d cheated. Only %d acquires.\n", i, test[i].mine );
}
}
}
If I comment out the InterlockedCompareExchange call and just let all threads run the increments in a free-for-all fashion, then the results do show failures. Running it 10 times, for example, without the interlocked compare call:
shared = 748694 (wrong)
shared = 811522 (wrong)
shared = 796155 (wrong)
shared = 825947 (wrong)
shared = 1000000 (correct)
shared = 795036 (wrong)
shared = 801810 (wrong)
shared = 790812 (wrong)
shared = 724753 (wrong)
shared = 849444 (wrong)
The curious thing is that one time the results showed no incorrect contention. That might be because there is no "everyone start now" synchronization; maybe all threads started and finished in order in that case. But when I have the InterlockedCompareExchange call in place, it runs without failure (or at least it ran 100 times without failure... that doesn't prove I didn't write a subtle bug into the example).
Here is the discussion from the people who implemented it ... very interesting as it shows the tradeoffs ..
Several posts from Linus T ... of course
In earlier, pre-POSIX days I used to implement synchronization using a native machine word (e.g. a 16- or 32-bit word) and the Test And Set instruction lurking on every serious processor. This instruction guarantees to test the value of a word and set it in one atomic operation. This provides the basis for a spinlock, and from that a hierarchy of synchronization functions can be built. The simplest is of course just a spinlock which performs a busy wait, not an option for more than transitory syncing; then there is a spinlock which drops the process's time slice at each iteration, for a lower system impact. Notional concepts like semaphores, mutexes, monitors etc. can be built by getting into the kernel scheduling code.
As I recall, the prime usage was to implement message queues to permit multiple clients to access a database server. Another was a very early real-time car race result and timing system on a quite primitive 16-bit machine and OS.
These days I use Pthreads and semaphores and Windows Events/Mutexes (mutices?) etc. and don't give a thought as to how they work, although I must admit that having been down in the engine room does give one an intuitive feel for better and more efficient multiprocessing.
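A hedged sketch of the test-and-set spinlock described above, written with GCC/Clang atomic builtins rather than the raw instruction; the sched_yield() call corresponds to the variant that drops the time slice on each iteration:
#include <sched.h>

typedef struct { volatile int locked; } spinlock_t;   /* 0 = free, 1 = owned */

static void spin_lock(spinlock_t *l)
{
    /* atomically store 1 and return the previous value: a test-and-set */
    while (__sync_lock_test_and_set(&l->locked, 1))
        sched_yield();                 /* give up the time slice and retry */
}

static void spin_unlock(spinlock_t *l)
{
    __sync_lock_release(&l->locked);   /* store 0 with release semantics */
}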
In the Windows world:
Before Windows Vista, the mutex was implemented with a compare-exchange to change the state of the mutex from Empty to BeingUsed. For the other threads that then waited on the mutex, the CAS would obviously fail, and they had to be added to the mutex's wait queue for later notification. Those queue operations (add/remove/check) were protected by a common lock in the Windows kernel.
After Windows XP, the mutex started to use a spin lock for performance reasons, making it a self-sufficient object.
In the Unix world I didn't dig much further, but it is probably very similar to Windows 7.
Finally, for kernels that run on a single processor, the best approach is to disable interrupts when entering the critical section and re-enable them when exiting.