I'm new to concurrent programming. I implemented a CPU-intensive task and measured how much speedup I could gain, but I cannot get any speedup as I increase the number of threads.
The program does the following task:
There's a shared counter to count from 1 to 1000001.
Each thread does the following until the counter reaches 1000001:
increments the counter atomically, then
runs a loop 10000 times.
There are about 1000001*10000 ≈ 10^10 operations to be performed in total, so I should be able to get good speedup as I increase the number of threads.
Here's how I implemented it:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <stdatomic.h>
pthread_t workers[8];
atomic_int counter; // a shared counter
void *runner(void *param);
int main(int argc, char *argv[]) {
if(argc != 2) {
printf("Usage: ./thread thread_num\n");
return 1;
}
int NUM_THREADS = atoi(argv[1]);
pthread_attr_t attr;
counter = 1; // initialize shared counter
pthread_attr_init(&attr);
const clock_t begin_time = clock(); // begin timer
for(int i=0;i<NUM_THREADS;i++)
pthread_create(&workers[i], &attr, runner, NULL);
for(int i=0;i<NUM_THREADS;i++)
pthread_join(workers[i], NULL);
const clock_t end_time = clock(); // end timer
printf("Thread number = %d, execution time = %lf s\n", NUM_THREADS, (double)(end_time - begin_time)/CLOCKS_PER_SEC);
return 0;
}
void *runner(void *param) {
int temp = 0;
while(temp < 1000001) {
temp = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
for(int i=1;i<10000;i++)
temp%i; // do some CPU intensive work
}
pthread_exit(0);
}
However, when I run my program, I cannot get better performance than sequential execution!
gcc-4.9 -std=c11 -pthread -o my_program my_program.c
for i in 1 2 3 4 5 6 7 8; do \
./my_program $i; \
done
Thread number = 1, execution time = 19.235998 s
Thread number = 2, execution time = 20.575237 s
Thread number = 3, execution time = 25.161116 s
Thread number = 4, execution time = 28.278671 s
Thread number = 5, execution time = 28.185605 s
Thread number = 6, execution time = 28.050380 s
Thread number = 7, execution time = 28.286925 s
Thread number = 8, execution time = 28.227132 s
I run the program on a 4-core machine.
Does anyone have suggestions to improve the program? Or any clue why I cannot get speedup?
The only work here that can be done in parallel is the loop:
for(int i=1;i<10000;i++)
temp%i; // do some CPU intensive work
gcc, even at the minimal optimisation level, will not emit any code for the temp%i; void expression (disassemble it and see), so this essentially becomes an empty loop, which executes very fast. With multiple threads running on different cores, the execution time is then dominated by the cache line containing your atomic variable ping-ponging between the cores.
You need to make this loop actually do a significant amount of work before you'll see a speed-up.
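Not from the original answer, but a minimal sketch of one way to do that: accumulate the result of the modulo into a local sum and publish it through a volatile variable so the compiler cannot prove the loop is dead. The sink variable is an assumption added purely for illustration.

// Sketch: keep the inner loop alive by actually using its result.
// `sink` is a hypothetical global added only so the optimiser
// cannot discard the work.
volatile long sink;

void *runner(void *param) {
    long local = 0;
    int temp = 0;
    while(temp < 1000001) {
        temp = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        for(int i=1;i<10000;i++)
            local += temp % i;   // real arithmetic the compiler must keep
    }
    sink = local;                // publish the result once, outside the loop
    pthread_exit(0);
}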
Related
I am running a multi-threaded program on my computer, which has 4 cores. I am creating threads that run with the SCHED_FIFO, SCHED_OTHER, and SCHED_RR scheduling policies. What is the maximum number of each type of thread that can run simultaneously?
For example,
I'm pretty sure only four SCHED_FIFO threads can run at a time (one per core)
but I'm not sure about the other two.
Edit: my code, as asked (it's long, but most of it is for testing how long each thread takes to complete a delay task)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/time.h>
#include <time.h>
#include <string.h>
void *ThreadRunner(void *vargp);
void DisplayThreadSchdStats(void);
void delayTask(void);
int threadNumber = 0;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
#define NUM_THREADS 9
//used to store the information of each thread
typedef struct{
pthread_t threadID;
int policy;
struct sched_param param;
long startTime;
long taskStartTime;
long endTime1;
long endTime2;
long endTime3;
long runTime;
char startDate[30];
char endDate[30];
}ThreadInfo;
ThreadInfo myThreadInfo[NUM_THREADS];
//main function
int main(void){
printf("running...\n");
int fifoPri = 60;
int rrPri = 30;
//create the 9 threads and assign their scheduling policies
for(int i=0; i<NUM_THREADS; i++){
if(i%3 == SCHED_OTHER){
myThreadInfo[i].policy = SCHED_OTHER;
myThreadInfo[i].param.sched_priority = 0;
}
else if (i%3 == SCHED_FIFO){
myThreadInfo[i].policy = SCHED_RR;
myThreadInfo[i].param.sched_priority = rrPri++;
}
else{
myThreadInfo[i].policy = SCHED_FIFO;
myThreadInfo[i].param.sched_priority = fifoPri++;
}
pthread_create( &myThreadInfo[i].threadID, NULL, ThreadRunner, &myThreadInfo[i]);
pthread_cond_wait(&cond, &mutex);
}
printf("\n\n");
//join each thread
for(int g = 0; g < NUM_THREADS; g++){
pthread_join(myThreadInfo[g].threadID, NULL);
}
//print out the stats for each thread
DisplayThreadSchdStats();
return 0;
}
//used to print out all of the threads, along with their stats
void DisplayThreadSchdStats(void){
int otherNum = 0;
long task1RR = 0;
long task2RR = 0;
long task3RR = 0;
long task1FIFO = 0;
long task2FIFO = 0;
long task3FIFO = 0;
long task1OTHER = 0;
long task2OTHER = 0;
long task3OTHER = 0;
for(int g = 0; g < threadNumber; g++){
printf("\nThread# [%d] id [0x%x] exiting...\n", g + 1, (int) myThreadInfo[g].threadID);
printf("DisplayThreadSchdStats:\n");
printf(" threadID = 0x%x \n", (int) myThreadInfo[g].threadID);
if(myThreadInfo[g].policy == 0){
printf(" policy = SCHED_OTHER\n");
task1OTHER += (myThreadInfo[g].endTime1 - myThreadInfo[g].taskStartTime);
task2OTHER += (myThreadInfo[g].endTime2 - myThreadInfo[g].endTime1);
task3OTHER += (myThreadInfo[g].endTime3 - myThreadInfo[g].endTime2);
otherNum++;
}
if(myThreadInfo[g].policy == 1){
printf(" policy = SCHED_FIFO\n");
task1FIFO += (myThreadInfo[g].endTime1 - myThreadInfo[g].taskStartTime);
task2FIFO += (myThreadInfo[g].endTime2 - myThreadInfo[g].endTime1);
task3FIFO += (myThreadInfo[g].endTime3 - myThreadInfo[g].endTime2);
}
if(myThreadInfo[g].policy == 2){
printf(" policy = SCHED_RR\n");
task1RR+= (myThreadInfo[g].endTime1 - myThreadInfo[g].taskStartTime);
task2RR += (myThreadInfo[g].endTime2 - myThreadInfo[g].endTime1);
task3RR += (myThreadInfo[g].endTime3 - myThreadInfo[g].endTime2);
}
printf(" priority = %d \n", myThreadInfo[g].param.sched_priority);
printf(" startTime = %s\n", myThreadInfo[g].startDate);
printf(" endTime = %s\n", myThreadInfo[g].endDate);
printf(" Task start TimeStamp in micro seconds [%ld]\n", myThreadInfo[g].taskStartTime);
printf(" Task end TimeStamp in micro seconds [%ld] Delta [%lu]us\n", myThreadInfo[g].endTime1 , (myThreadInfo[g].endTime1 - myThreadInfo[g].taskStartTime));
printf(" Task end Timestamp in micro seconds [%ld] Delta [%lu]us\n", myThreadInfo[g].endTime2, (myThreadInfo[g].endTime2 - myThreadInfo[g].endTime1));
printf(" Task end Timestamp in micro seconds [%ld] Delta [%lu]us\n\n\n", myThreadInfo[g].endTime3, (myThreadInfo[g].endTime3 - myThreadInfo[g].endTime2));
printf("\n\n");
}
printf("Analysis: \n");
printf(" for SCHED_OTHER, task 1 took %lu, task2 took %lu, and task 3 took %lu. (average = %lu)\n", (task1OTHER/otherNum), (task2OTHER/otherNum), (task3OTHER/otherNum), (task1OTHER/otherNum + task2OTHER/otherNum + task3OTHER/otherNum)/3 );
printf(" for SCHED_RR, task 1 took %lu, task2 took %lu, and task 3 took %lu. (average = %lu)\n", (task1RR/otherNum), (task2RR/otherNum), (task3RR/otherNum), (task1RR/otherNum + task2RR/otherNum + task3RR/otherNum)/3 );
printf(" for SCHED_FIFO, task 1 took %lu, task2 took %lu, and task 3 took %lu. (average = %lu)\n", (task1FIFO/otherNum), (task2FIFO/otherNum), (task3FIFO/otherNum) , (task1FIFO/otherNum + task2FIFO/otherNum + task3FIFO/otherNum)/3);
}
//the function that runs the threads
void *ThreadRunner(void *vargp){
pthread_mutex_lock(&mutex);
char date[30];
struct tm *ts;
size_t last;
time_t timestamp = time(NULL);
ts = localtime(&timestamp);
last = strftime(date, 30, "%c", ts);
threadNumber++;
ThreadInfo* currentThread;
currentThread = (ThreadInfo*)vargp;
//set the start time
struct timeval tv;
gettimeofday(&tv, NULL);
long milltime0 = (tv.tv_sec) * 1000 + (tv.tv_usec) / 1000;
currentThread->startTime = milltime0;
//set the start date
strcpy(currentThread->startDate, date);
if(pthread_setschedparam(pthread_self(), currentThread->policy,(const struct sched_param *) &(currentThread->param))){
perror("pthread_setschedparam failed");
pthread_exit(NULL);
}
if(pthread_getschedparam(pthread_self(), &currentThread->policy,(struct sched_param *) &currentThread->param)){
perror("pthread_getschedparam failed");
pthread_exit(NULL);
}
gettimeofday(&tv, NULL);
long startTime = (tv.tv_sec) * 1000 + (tv.tv_usec) / 1000;
currentThread->taskStartTime = startTime;
//delay task #1
delayTask();
//set the end time of task 1
gettimeofday(&tv, NULL);
long milltime1 = (tv.tv_sec) * 1000 + (tv.tv_usec) / 1000;
currentThread->endTime1 = milltime1;
//delay task #2
delayTask();
//set the end time of task 2
gettimeofday(&tv, NULL);
long milltime2 = (tv.tv_sec) * 1000 + (tv.tv_usec) / 1000;
currentThread->endTime2 = milltime2;
//delay task #3
delayTask();
//set the end time of task 3
gettimeofday(&tv, NULL);
long milltime3 = (tv.tv_sec) * 1000 + (tv.tv_usec) / 1000;
currentThread->endTime3 = milltime3;
//set the end date
timestamp = time(NULL);
ts = localtime(&timestamp);
last = strftime(date, 30, "%c", ts);
strcpy(currentThread->endDate, date);
//set the total run time of the thread
long runTime = milltime3 - milltime0;
currentThread->runTime = runTime;
//unlock mutex
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&cond);
pthread_exit(NULL);
}
//used to delay each thread
void delayTask(void){
for(int i = 0; i < 5000000; i++){
printf("%d", i % 2);
}
}
In short: there are no guarantees about how many threads will run in parallel, but all of them will run concurrently.
No matter how many threads you start in an application controlled by a general-purpose operating system, they will all run concurrently. That is, each thread will be given some non-zero amount of time to run, and no particular execution order of the threads' code sections is guaranteed outside OS-defined synchronization primitives (waiting on mutexes, locks, etc.). The only limit on the number of threads may be imposed by the OS's policies.
How many of your threads will be chosen to run in parallel at any given moment is not defined. The number obviously cannot exceed the number of logical processors visible to the OS (remember that the OS itself may be running inside a virtual machine, and there are hardware tricks like SMT), and your threads will be competing with the other threads present on the same system. OSes do offer APIs to query which threads/processes are currently running and which are blocked or ready but not scheduled; otherwise, writing programs like top would be problematic.
Explicitly setting thread priorities may affect the operating system's choices and increase the average number of your threads executing in parallel. Note that it can either help or hurt if used without thinking. Still, that number will never be strictly equal to four inside a multitasking OS as long as there are other processes. The only way to make sure 100% of the CPU's hardware is dedicated to your threads 100% of the time is to run a bare-metal application, outside of any OS and any hypervisor (and even then there are peculiarities; see "Intel System Management Mode").
Inside a mostly idle general-purpose OS, if your threads are compute-intensive, I would guess the average parallel utilization ratio would be 3.9 to 4.0. But with the slightest perturbation, all bets are off.
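For reference (not part of the original answer), here is a minimal C sketch of how to query the number of logical processors the OS exposes, which is the hard upper bound on how many of your threads can run in parallel at once. It assumes a system where sysconf(_SC_NPROCESSORS_ONLN) is available (glibc, Solaris, macOS, and most other POSIX-like systems).

#include <stdio.h>
#include <unistd.h>

int main(void) {
    // Number of logical processors currently online, as seen by the OS.
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpus < 1) {
        perror("sysconf");
        return 1;
    }
    printf("logical processors online: %ld\n", ncpus);
    return 0;
}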
I'm running a simple threaded test program on both a Windows machine (compiled using MSVS2015) and a server running Solaris 10 (compiled using GCC 4.9.3). On Windows I'm getting significant performance increases from increasing the threads from 1 to the amount of cores available; however, the very same code does not see any performance gains at all on Solaris 10.
The Windows machine has 4 cores (8 logical) and the Unix machine has 8 cores (16 logical).
What could be the cause for this? I'm compiling with -pthread, and it is creating threads since it prints all the "S"es before the first "F". I don't have root access on the Solaris machine, and from what I can see there's no installed tool which I can use to view a process' affinity.
Example code:
#include <iostream>
#include <vector>
#include <future>
#include <random>
#include <chrono>
std::default_random_engine gen(std::chrono::system_clock::now().time_since_epoch().count());
std::normal_distribution<double> randn(0.0, 1.0);
double generate_randn(uint64_t iterations)
{
// Print "S" when a thread starts
std::cout << "S";
std::cout.flush();
double rvalue = 0;
for (int i = 0; i < iterations; i++)
{
rvalue += randn(gen);
}
// Print "F" when a thread finishes
std::cout << "F";
std::cout.flush();
return rvalue/iterations;
}
int main(int argc, char *argv[])
{
if (argc < 2)
return 0;
uint64_t count = 100000000;
uint32_t threads = std::atoi(argv[1]);
double total = 0;
std::vector<std::future<double>> futures;
std::chrono::high_resolution_clock::time_point t1;
std::chrono::high_resolution_clock::time_point t2;
// Start timing
t1 = std::chrono::high_resolution_clock::now();
for (int i = 0; i < threads; i++)
{
// Start async tasks
futures.push_back(std::async(std::launch::async, generate_randn, count/threads));
}
for (auto &future : futures)
{
// Wait for tasks to finish
future.wait();
total += future.get();
}
// End timing
t2 = std::chrono::high_resolution_clock::now();
// Take the average of the threads' results
total /= threads;
std::cout << std::endl;
std::cout << total << std::endl;
std::cout << "Finished in " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms" << std::endl;
}
As a general rule, classes defined by the C++ standard library do not have any internal locking. Modifying an instance of a standard library class from more than one thread, or reading it from one thread while writing it from another, is undefined behavior, unless "objects of that type are explicitly specified as being sharable without data races". (N3337, sections 17.6.4.10 and 17.6.5.9.) The RNG classes are not "explicitly specified as being sharable without data races". (cout is an example of a stdlib object that is "sharable with data races" — as long as you haven't done ios::sync_with_stdio(false).)
As such, your program is incorrect because it accesses a global RNG object from more than one thread simultaneously; every time you request another random number, the internal state of the generator is modified. On Solaris, this seems to result in serialization of accesses, whereas on Windows it is probably instead causing you not to get properly "random" numbers.
The cure is to create separate RNGs for each thread. Then each thread will operate independently, and they will neither slow each other down nor step on each other's toes. This is a special case of a very general principle: multithreading always works better the less shared data there is.
There's an additional wrinkle to worry about: each thread will call system_clock::now at very nearly the same time, so you may end up with some of the per-thread RNGs seeded with the same value. It would be better to seed them all from a random_device object. random_device requests random numbers from the operating system, and does not need to be seeded; but it can be very slow. The random_device should be created and used inside main, and seeds passed to each worker function, because a global random_device accessed from multiple threads (as in the previous edition of this answer) is just as undefined as a global default_random_engine.
All told, your program should look something like this:
#include <iostream>
#include <vector>
#include <future>
#include <random>
#include <chrono>
static double generate_randn(uint64_t iterations, unsigned int seed)
{
// Print "S" when a thread starts
std::cout << "S";
std::cout.flush();
std::default_random_engine gen(seed);
std::normal_distribution<double> randn(0.0, 1.0);
double rvalue = 0;
for (int i = 0; i < iterations; i++)
{
rvalue += randn(gen);
}
// Print "F" when a thread finishes
std::cout << "F";
std::cout.flush();
return rvalue/iterations;
}
int main(int argc, char *argv[])
{
if (argc < 2)
return 0;
uint64_t count = 100000000;
uint32_t threads = std::atoi(argv[1]);
double total = 0;
std::vector<std::future<double>> futures;
std::chrono::high_resolution_clock::time_point t1;
std::chrono::high_resolution_clock::time_point t2;
std::random_device make_seed;
// Start timing
t1 = std::chrono::high_resolution_clock::now();
for (int i = 0; i < threads; i++)
{
// Start async tasks
futures.push_back(std::async(std::launch::async,
generate_randn,
count/threads,
make_seed()));
}
for (auto &future : futures)
{
// Wait for tasks to finish
future.wait();
total += future.get();
}
// End timing
t2 = std::chrono::high_resolution_clock::now();
// Take the average of the threads' results
total /= threads;
std::cout << '\n' << total
<< "\nFinished in "
<< std::chrono::duration_cast<
std::chrono::milliseconds>(t2 - t1).count()
<< " ms\n";
}
(This isn't really an answer, but it won't fit into a comment, especially with the command formatting and links.)
You can profile your executable on Solaris using Solaris Studio's collect utility. On Solaris, that will be able to show you where your threads are contending.
collect -d /tmp -p high -s all app [app args]
Then view the results using the analyzer utility:
analyzer /tmp/test.1.er &
Replace /tmp/test.1.er with the path to the output generated by a collect profile run.
If your threads are contending over some resource(s) as #zwol posted in his answer, you will see it.
Oracle marketing brief for the toolset can be found here: http://www.oracle.com/technetwork/server-storage/solarisstudio/documentation/o11-151-perf-analyzer-brief-1405338.pdf
You can also try compiling your code with Solaris Studio for more data.
I am measuring the running time of kernels, as seen from a CPU thread, by measuring the interval from before launching a kernel to after a cudaDeviceSynchronize (using gettimeofday). I have a cudaDeviceSynchronize before I start recording the interval. I also instrument the kernels to record the timestamp on the GPU (using clock64) at the start of the kernel, by thread(0,0,0) of each block from block(0,0,0) to block(occupancy-1,0,0), into an array whose size equals the number of SMs. At the end of the kernel code, every thread updates the timestamp in another array (of the same size) at the index of the SM it runs on.
The intervals calculated from the two arrays are 60-70% of the interval measured from the CPU thread.
For example, on a K40, while gettimeofday gives an interval of 140ms, the avg of intervals calculated from GPU timestamps is only 100ms. I have experimented with many grid sizes (15 blocks to 6K blocks) but have found similar behavior so far.
__global__ void some_kernel(long long *d_start, long long *d_end){
if(threadIdx.x==0){
d_start[blockIdx.x] = clock64();
}
//some_kernel code
d_end[blockIdx.x] = clock64();
}
Does this seem possible to the experts?
Does this seem possible to the experts?
I suppose anything is possible for code you haven't shown. After all, you may just have a silly bug in any of your computation arithmetic. But if the question is "is it sensible that there should be 40ms of unaccounted-for time overhead on a kernel launch, for a kernel that takes ~140ms to execute?" I would say no.
I believe the method I outlined in the comments is reasonably accurate. Take the minimum clock64() timestamp from any thread in the grid (but see note below regarding SM restriction). Compare it to the maximum time stamp of any thread in the grid. The difference will be comparable to the reported execution time of gettimeofday() to within 2 percent, according to my testing.
Here is my test case:
$ cat t1040.cu
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#define LS_MAX 2000000000U
#define MAX_SM 64
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
#include <time.h>
#include <sys/time.h>
#define USECPSEC 1000000ULL
__device__ int result;
__device__ unsigned long long t_start[MAX_SM];
__device__ unsigned long long t_end[MAX_SM];
unsigned long long dtime_usec(unsigned long long start){
timeval tv;
gettimeofday(&tv, 0);
return ((tv.tv_sec*USECPSEC)+tv.tv_usec)-start;
}
__device__ __inline__ uint32_t __mysmid(){
uint32_t smid;
asm volatile("mov.u32 %0, %%smid;" : "=r"(smid));
return smid;}
__global__ void kernel(unsigned ls){
unsigned long long int ts = clock64();
unsigned my_sm = __mysmid();
atomicMin(t_start+my_sm, ts);
// junk code to waste time
int tv = ts&0x1F;
for (unsigned i = 0; i < ls; i++){
tv &= (ts+i);}
result = tv;
// end of junk code
ts = clock64();
atomicMax(t_end+my_sm, ts);
}
// optional command line parameter 1 = kernel duration, parameter 2 = number of blocks, parameter 3 = number of threads per block
int main(int argc, char *argv[]){
unsigned ls;
if (argc > 1) ls = atoi(argv[1]);
else ls = 1000000;
if (ls > LS_MAX) ls = LS_MAX;
int num_sms = 0;
cudaDeviceGetAttribute(&num_sms, cudaDevAttrMultiProcessorCount, 0);
cudaCheckErrors("cuda get attribute fail");
int gpu_clk = 0;
cudaDeviceGetAttribute(&gpu_clk, cudaDevAttrClockRate, 0);
if ((num_sms < 1) || (num_sms > MAX_SM)) {printf("invalid sm count: %d\n", num_sms); return 1;}
unsigned blks;
if (argc > 2) blks = atoi(argv[2]);
else blks = num_sms;
if ((blks < 1) || (blks > 0x3FFFFFFF)) {printf("invalid blocks: %d\n", blks); return 1;}
unsigned ntpb;
if (argc > 3) ntpb = atoi(argv[3]);
else ntpb = 256;
if ((ntpb < 1) || (ntpb > 1024)) {printf("invalid threads: %d\n", ntpb); return 1;}
kernel<<<1,1>>>(100); // warm up
cudaDeviceSynchronize();
cudaCheckErrors("kernel fail");
unsigned long long *h_start, *h_end;
h_start = new unsigned long long[num_sms];
h_end = new unsigned long long[num_sms];
for (int i = 0; i < num_sms; i++){
h_start[i] = 0xFFFFFFFFFFFFFFFFULL;
h_end[i] = 0;}
cudaMemcpyToSymbol(t_start, h_start, num_sms*sizeof(unsigned long long));
cudaMemcpyToSymbol(t_end, h_end, num_sms*sizeof(unsigned long long));
unsigned long long htime = dtime_usec(0);
kernel<<<blks,ntpb>>>(ls);
cudaDeviceSynchronize();
htime = dtime_usec(htime);
cudaMemcpyFromSymbol(h_start, t_start, num_sms*sizeof(unsigned long long));
cudaMemcpyFromSymbol(h_end, t_end, num_sms*sizeof(unsigned long long));
cudaCheckErrors("some error");
printf("host elapsed time (ms): %f \n device sm clocks:\n start:", htime/1000.0f);
unsigned long long max_diff = 0;
for (int i = 0; i < num_sms; i++) {printf(" %12lu ", h_start[i]);}
printf("\n end: ");
for (int i = 0; i < num_sms; i++) {printf(" %12lu ", h_end[i]);}
for (int i = 0; i < num_sms; i++) if ((h_start[i] != 0xFFFFFFFFFFFFFFFFULL) && (h_end[i] != 0) && ((h_end[i]-h_start[i]) > max_diff)) max_diff=(h_end[i]-h_start[i]);
printf("\n max diff clks: %lu\nmax diff kernel time (ms): %f\n", max_diff, max_diff/(float)(gpu_clk));
return 0;
}
$ nvcc -o t1040 t1040.cu -arch=sm_35
$ ./t1040 1000000 1000 128
host elapsed time (ms): 2128.818115
device sm clocks:
start: 3484744 3484724
end: 2219687393 2228431323
max diff clks: 2224946599
max diff kernel time (ms): 2128.117432
$
Notes:
This code can only be run on a cc3.5 or higher GPU due to the use of 64-bit atomicMin and atomicMax.
I've run it on a variety of grid configurations, on both a GT640 (a very low-end cc3.5 device) and a K40c (high end), and the timing results between host and device agree to within 2% for reasonably long kernel execution times. If you pass 1 as the first command line parameter, with very small grid sizes, the kernel execution time will be very short (nanoseconds), whereas the host will see about 10-20us; that is kernel launch overhead being measured. So the 2% figure is for kernels that take much longer than 20us to execute.
It accepts 3 (optional) command line parameters, the first of which varies the amount of time the kernel will execute.
My timestamping is done on a per-SM basis, because the clock64() resource is indicated to be a per-SM resource. The sm clocks are not guaranteed to be synchronized between SMs.
You can modify the grid dimensions. The second optional command line parameter specifies the number of blocks to launch. The third optional command line parameter specifies the number of threads per block. The timing methodology I have shown here should not be dependent on number of blocks launched or number of threads per block. If you specify fewer blocks than SMs, the code should ignore "unused" SM data.
I wrote the following very simple pthread code to test how it scales up. I am running the code on a machine with 8 logical processors and at no time do I create more than 8 threads (to avoid context switching).
As the number of threads increases, each thread has to do a smaller amount of work. Also, it is evident from the code that there are no shared data structures between the threads that might be a bottleneck. But still, performance degrades as I increase the number of threads.
Can somebody tell me what I am doing wrong here?
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int NUM_THREADS = 3;
unsigned long int COUNTER = 10000000000000;
unsigned long int LOOP_INDEX;
void* addNum(void *data)
{
unsigned long int sum = 0;
for(unsigned long int i = 0; i < LOOP_INDEX; i++) {
sum += 100;
}
return NULL;
}
int main(int argc, char** argv)
{
NUM_THREADS = atoi(argv[1]);
pthread_t *threads = (pthread_t*)malloc(sizeof(pthread_t) * NUM_THREADS);
int rc;
clock_t start, diff;
LOOP_INDEX = COUNTER/NUM_THREADS;
start = clock();
for (int t = 0; t < NUM_THREADS; t++) {
rc = pthread_create((threads + t), NULL, addNum, NULL);
if (rc) {
printf("ERROR; return code from pthread_create() is %d", rc);
exit(-1);
}
}
void *status;
for (int t = 0; t < NUM_THREADS; t++) {
rc = pthread_join(threads[t], &status);
}
diff = clock() - start;
int sec = diff / CLOCKS_PER_SEC;
printf("%d",sec);
}
Note: All the answers I found online said that the overhead of creating the threads exceeds the work they are doing. To test this, I commented out everything in the addNum() function. After doing that, no matter how many threads I create, the time taken by the code is 0 seconds. So there is no such overhead, I think.
clock() counts CPU time used, across all threads. So all that's telling you is that you're using a little bit more total CPU time, which is exactly what you would expect.
It's the total wall clock elapsed time which should be going down if your parallelisation is effective. Measure that with clock_gettime() specifying the CLOCK_MONOTONIC clock instead of clock().
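A minimal sketch of that measurement (not from the original answer), assuming a POSIX system; the comment marks where the question's thread creation and joining would go:

#include <stdio.h>
#include <time.h>

// Elapsed wall-clock seconds between two monotonic timestamps.
static double elapsed_seconds(struct timespec start, struct timespec end) {
    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... create and join the worker threads here ...
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("wall clock time: %f s\n", elapsed_seconds(start, end));
    return 0;
}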
Ok so here's what the problem says.
Implement a simple loop that calls a function containing a delay. Partition this loop across four threads using static, dynamic and guided scheduling. Measure execution times for each type of scheduling with respect to both the size of the loop and the size of the delay.
This is what I've done so far; I have no idea if I'm on the right track.
#include <omp.h>
#include <stdio.h>
int main() {
double start_time, run_time;
omp_set_num_threads(4);
start_time = omp_get_wtime();
#pragma omp parallel
#pragma omp for schedule(static)
for (int n = 0; n < 100; n++){
printf("square of %d=%d\n", n, n*n);
printf("cube of %d=%d\n", n, n*n*n);
int ID = omp_get_thread_num();
printf("Thread(%d) \n", ID);
}
run_time = omp_get_wtime() - start_time;
printf("Time Elapsed (%f)", run_time);
getchar();
}
First you need a loop where the distribution makes a difference. Your loop has only 100 iterations, so the OpenMP scheduler only has to decide 100 times which iteration a thread gets next, which takes no measurable time. The printf output, on the other hand, takes very long, so in your code it makes no difference which schedule is used. It's better to use a loop without console output and with a very high iteration count, like
#pragma omp parallel
{
#pragma omp for schedule(static) private(value)
for (int i = 0; i < 100000000; i++) {
value = ...
}
}
Finally, the loop has to compute a result that is used after the loop, with a printf for example. Otherwise the body could be deleted by the compiler as an optimization (the result is not used later, so the work is not needed). That way you can concentrate the time measurement on the parallel region without the output of the results.
If your iterations take nearly the same time, a static distribution should be faster. If they differ very much, the dynamic and guided schedules should come out ahead in your measurements.
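Building on that, here is a minimal sketch of the assignment itself (not part of the original answer): the same delayed loop is timed under static, dynamic and guided scheduling by selecting the schedule at run time. delay_task(), n and delay are placeholder assumptions you would vary to measure the effect of loop size and delay size.

#include <omp.h>
#include <stdio.h>

/* Hypothetical delay: spin for roughly `delay` iterations of dummy work. */
static double delay_task(int delay) {
    double x = 0.0;
    for (int i = 0; i < delay; i++)
        x += i * 0.5;
    return x;
}

int main(void) {
    const int n = 10000;       /* loop size (assumption, vary it) */
    const int delay = 100000;  /* delay size (assumption, vary it) */
    omp_sched_t kinds[] = { omp_sched_static, omp_sched_dynamic, omp_sched_guided };
    const char *names[] = { "static", "dynamic", "guided" };

    omp_set_num_threads(4);
    for (int k = 0; k < 3; k++) {
        omp_set_schedule(kinds[k], 0);       /* 0 = default chunk size */
        double sum = 0.0;
        double t0 = omp_get_wtime();
        #pragma omp parallel for schedule(runtime) reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += delay_task(delay);
        double t1 = omp_get_wtime();
        /* Print `sum` so the compiler cannot discard the work. */
        printf("%-8s %.6f s (sum = %g)\n", names[k], t1 - t0, sum);
    }
    return 0;
}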