64-bit compiler error on va_list variable

I have compiled the code below in both 32-bit and 64-bit mode.
There are no issues in 32-bit, but I get a compiler error in 64-bit mode.
Please help me remove the error without using a standard function like va_arg.
#include <stdarg.h>
#include <stdio.h>

int sum(int, ...);

int main(void)
{
    printf("Sum of 10, 20 and 30 = %d\n", sum(8, 10, 20, 30, 40, 50, 60, 70, 80));
    printf("Sum of 4, 20, 25 and 30 = %d\n", sum(4, 4, 20, 25, 30));
    return 0;
}

int sum(int num_args, ...)
{
    int val = 0;
    va_list ap;
    int i;

    va_start(ap, num_args);
    for (i = 0; i < num_args; i++)
    {
        val += *(int *)((ap += sizeof(int)) - sizeof(int));
    }
    va_end(ap);
    return val;
}
The error is below.
[avinta#la-lnx61dev01 ~]$ gcc -m64 var_list1.c
var_list1.c: In function 'sum':
var_list1.c:28: error: invalid operands to binary + (have 'va_list' and 'long unsigned int')
[avinta#la-lnx61dev01 ~]$

What you are doing is not portable. Using va_arg is the only way allowed by the standard: you are relying on the details of how va_list is implemented on one architecture, and that implementation is not the same on a different architecture.
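For reference, here is a minimal sketch (mine, not the original answer's code) of the portable version using va_arg; the pointer arithmetic in the question only happens to compile where va_list is a plain pointer type:

#include <stdarg.h>
#include <stdio.h>

/* Portable variadic sum: let va_arg walk the argument list instead of
 * doing arithmetic on the va_list object itself. */
int sum(int num_args, ...)
{
    int val = 0;
    int i;
    va_list ap;

    va_start(ap, num_args);
    for (i = 0; i < num_args; i++)
        val += va_arg(ap, int);   /* fetch the next int argument */
    va_end(ap);
    return val;
}

int main(void)
{
    printf("%d\n", sum(4, 4, 20, 25, 30));   /* prints 79 */
    return 0;
}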

Related

CUDA Object copy from device to host

I'm trying to copy an object back from the device to the host. It works, but if the object contains a pointer I can't find the right way to call cudaMemcpy.
This is simplified code showing what I'm trying to do. The cudaMemcpy calls return cudaSuccess, but the temp variable stays "empty".
class A {
public:
    int *s;
};

__global__ void MethodA(A *a) {
    printf("%d\n", a->s[2]);
}

int main() {
    A *a = new A();
    int asd[] = { 0, 1, 2, 3, 4 };
    a->s = asd;

    A *d_a;
    cudaMalloc((void**)&d_a, sizeof(A));
    cudaMemcpy(d_a, a, sizeof(A), cudaMemcpyHostToDevice);

    int *temp;
    cudaError e;
    e = cudaMalloc((void**)&temp, sizeof(int) * 5);
    e = cudaMemcpy(temp, a->s, sizeof(int) * 5, cudaMemcpyHostToDevice);
    e = cudaMemcpy(&(d_a->s), &temp, sizeof(int*), cudaMemcpyHostToDevice);

    MethodA<<<1, 1>>>(d_a);

    cudaMemcpy(a, d_a, sizeof(A), cudaMemcpyDeviceToHost);
    e = cudaMemcpy(&temp, a->s, sizeof(int) * 5, cudaMemcpyDeviceToHost);
    a->s = temp;

    cudaFree(d_a);
    delete(a);
    return 0;
}
The problem is here:
e = cudaMemcpy(&(d_a->s), &temp, sizeof(int*), cudaMemcpyHostToDevice);
d_a is a pointer to a device object; you cannot dereference it on the host.
You have to first copy s to the device, then create an object of type A on the host whose pointer points to the device copy of s, and then copy that object to the device.
This is a known issue with CUDA, and it happens often with structures like linked lists or trees; that's one of the reasons why Nvidia is investing a lot of effort in improving unified memory. If you can use it, and it doesn't decrease the performance of your application, it could save you a lot of trouble with problems like this.
Here is your example with the problems fixed:
class A {
public:
    int *s;
};

__global__ void MethodA(A *a) {
    printf("%d\n", a->s[2]);
    a->s[2] = 6;
}

int main() {
    A *a = new A();
    int asd[] = { 0, 1, 2, 3, 4 };
    a->s = asd;

    A *a_with_d_s = new A();
    cudaMalloc(&(a_with_d_s->s), sizeof(int) * 5);
    cudaMemcpy(a_with_d_s->s, a->s, sizeof(int) * 5, cudaMemcpyHostToDevice);

    A *d_a;
    cudaMalloc(&d_a, sizeof(A));
    cudaMemcpy(d_a, a_with_d_s, sizeof(A), cudaMemcpyHostToDevice);

    MethodA<<<1, 1>>>(d_a);

    // note that if we call the following line, a->s will point to device
    // memory!
    //cudaMemcpy(a, d_a, sizeof(A), cudaMemcpyDeviceToHost);

    cudaMemcpy(a->s, a_with_d_s->s, sizeof(int) * 5, cudaMemcpyDeviceToHost);
    printf("%d\n", a->s[2]);

    cudaFree(d_a);
    cudaFree(a_with_d_s->s);
    delete(a);
    delete(a_with_d_s);
    return 0;
}
Prints out:
2
6
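As an aside, here is a hedged sketch (not part of the original answer) of the unified-memory alternative mentioned above. cudaMallocManaged returns a pointer that is valid on both host and device, so no manual deep copy of A::s is needed; whether the performance is acceptable depends on the application and GPU:

#include <cstdio>

class A {
public:
    int *s;
};

__global__ void MethodA(A *a) {
    printf("%d\n", a->s[2]);
}

int main() {
    A *a;
    // allocate both the object and the array it points to in managed memory
    cudaMallocManaged(&a, sizeof(A));
    cudaMallocManaged(&a->s, sizeof(int) * 5);
    for (int i = 0; i < 5; i++) a->s[i] = i;

    MethodA<<<1, 1>>>(a);
    cudaDeviceSynchronize();   // wait for the kernel and flush device printf

    printf("%d\n", a->s[2]);   // the same pointer is usable on the host
    cudaFree(a->s);
    cudaFree(a);
    return 0;
}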

Decrease in random-read IOPS on NVMe SSD if requests are issued over a small region

(TL;DR) On NVMe SSDs (Intel P3600 as well as Avant), I am seeing a decrease in IOPS if I issue random reads over a small subset of the disk instead of the entire disk.
While reading the same offset over and over, the IOPS are about 36-40K for a 4k block size. The IOPS gradually increase as I grow the region over which random reads are issued. The program (below) uses asynchronous I/O on Linux to submit the read requests.
Disk range (in 4k blocks), IOPS
0, 38833
1, 68596
10, 76100
30, 80381
40, 113647
50, 148205
100, 170374
200, 239798
400, 270197
800, 334767
OS: Linux 4.2.0-35-generic
SSD: Intel P3600 NVMe Flash
What could be causing this problem?
The program can be run as follows
$ for i in 0 1 10 30 40 50 100 200 400 800
do
<program_name> /dev/nvme0n1 10 $i
done
and check whether you also see the increasing IOPS pattern shown above.
/**
* $ g++ <progname.cpp> -o progname -std=c++11 -lpthread -laio -O3
* $ progname /dev/nvme0n1 10 100
*/
#include <random>
#include <libaio.h>
#include <stdlib.h>//malloc, exit
#include <future> //async
#include <unistd.h> //usleep
#include <iostream>
#include <sys/time.h> // gettimeofday
#include <vector>
#include <fcntl.h> // open
#include <errno.h>
#include <sys/types.h> // open
#include <sys/stat.h> // open
#include <cassert>
#include <semaphore.h>
io_context_t ioctx;
std::vector<char*> buffers;
int fd = -1;
sem_t sem;
constexpr int numPerRound = 20;
constexpr int numRounds = 100000;
constexpr int MAXEVENT = 10;
constexpr size_t BLKSIZE = 4096;
constexpr int QDEPTH = 200;
off_t startBlock = 0;
off_t numBlocks = 100;
const int numSubmitted = numRounds * numPerRound;
void DoGet()
{
io_event eventsArray[MAXEVENT];
int numCompleted = 0;
while (numCompleted != numSubmitted)
{
bzero(eventsArray, MAXEVENT * sizeof(io_event));
int numEvents;
do {
numEvents = io_getevents(ioctx, 1, MAXEVENT, eventsArray, nullptr);
} while (numEvents == -EINTR);
for (int i = 0; i < numEvents; i++)
{
io_event* ev = &eventsArray[i];
iocb* cb = (iocb*)(ev->data);
assert(ev->res2 == 0);
assert(ev->res == BLKSIZE);
sem_post(&sem); // free ioctx
}
numCompleted += numEvents;
}
std::cout << "completed=" << numCompleted << std::endl;
}
int main(int argc, char* argv[])
{
if (argc < 4) {
std::cout << "usage: <program_name> <nvme_device_name> <start_4k_block> <num_4k_blocks>" << std::endl;
exit(1);
}
char* deviceName = argv[1];
startBlock = atoll(argv[2]);
numBlocks = atoll(argv[3]);
int ret = 0;
ret = io_queue_init(QDEPTH, &ioctx);
assert(ret == 0);
ret = sem_init(&sem, 0, QDEPTH);
assert(ret == 0);
auto DoGetFut = std::async(std::launch::async, DoGet);
// preallocate buffers
for (int i = 0; i < QDEPTH; i++)
{
char* buf ;
ret = posix_memalign((void**)&buf, 4096, BLKSIZE);
assert(ret == 0);
buffers.push_back(buf);
}
fd = open(deviceName, O_DIRECT | O_RDONLY); // open the device passed on the command line
assert(fd >= 0);
off_t offset = 0;
struct timeval start;
gettimeofday(&start, 0);
std::mt19937 generator (getpid());
// generate random offsets within [startBlock, startBlock + numBlocks]
std::uniform_int_distribution<off_t> offsetgen(startBlock, startBlock + numBlocks);
for (int j = 0; j < numRounds; j++)
{
iocb mycb[numPerRound];
iocb* posted[numPerRound];
bzero(mycb, sizeof(iocb) * numPerRound);
for (int i = 0; i < numPerRound; i++)
{
// same buffer may get used in 2 different async read
// thats ok - not validating content in this program
char* iobuf = buffers[i];
iocb* cb = &mycb[i];
offset = offsetgen(generator) * BLKSIZE;
io_prep_pread(cb, fd, iobuf, BLKSIZE, offset);
cb->data = iobuf;
posted[i] = cb;
sem_wait(&sem); // wait for ioctx to be free
}
int ret = 0;
do {
ret = io_submit(ioctx, numPerRound, posted);
} while (ret == -EINTR);
assert(ret == numPerRound);
}
DoGetFut.wait();
struct timeval end;
gettimeofday(&end, 0);
uint64_t diff = ((end.tv_sec - start.tv_sec) * 1000000) + (end.tv_usec - start.tv_usec);
io_queue_release(ioctx);
std::cout
<< "ops=" << numRounds * numPerRound
<< " iops=" << (numRounds * numPerRound *(uint64_t)1000000)/diff
<< " region-size=" << (numBlocks * BLKSIZE)
<< std::endl;
}
This is almost certainly down to the internal structure of the drive. Internally it is built from many flash chips and may have multiple internal memory buses. If you issue requests across a small range, all of them resolve to a single chip (or a few chips) and have to be queued. If you access the whole device, the requests are spread across many internal chips and buses and can run in parallel, so you get more throughput.
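As a hedged cross-check (assuming fio with the libaio engine is available), the same sweep can be reproduced without custom code by restricting the random-read region with fio's --size/--offset options, e.g. for a 100-block (400 KiB) region:

$ fio --name=randread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=200 --size=400k --offset=0 \
      --time_based --runtime=30

Increasing --size should show the same upward IOPS trend if the effect really comes from the drive's internal parallelism rather than from the benchmark program.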

Zero copy using vmsplice()/splice() in Linux

I am trying to get zero-copy semantics working in Linux using vmsplice()/splice(), but I don't see any performance improvement. This is on Linux 3.10; I also tried 3.0.0 and 2.6.32. The following code does file writes; I have tried network socket write()s as well and couldn't see any improvement.
Can somebody tell me what I am doing wrong?
Has anyone gotten an improvement using vmsplice()/splice() in production?
#include <assert.h>
#include <fcntl.h>
#include <iostream>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>
#include <vector>
const char *filename = "Test-File";
const int block_size = 4 * 1024;
const int file_size = 4 * 1024 * 1024;
using namespace std;
int pipes[2];
vector<char *> file_data;
static int NowUsecs() {
struct timeval tv;
const int err = gettimeofday(&tv, NULL);
assert(err >= 0);
return tv.tv_sec * 1000000LL + tv.tv_usec;
}
void CreateData() {
for (int xx = 0; xx < file_size / block_size; ++xx) {
// The data buffer to fill.
char *data = NULL;
assert(posix_memalign(reinterpret_cast<void **>(&data), 4096, block_size) == 0);
file_data.emplace_back(data);
}
}
int SpliceWrite(int fd, char *buf, int buf_len) {
int len = buf_len;
struct iovec iov;
iov.iov_base = buf;
iov.iov_len = len;
while (len) {
int ret = vmsplice(pipes[1], &iov, 1, SPLICE_F_GIFT);
assert(ret >= 0);
if (!ret)
break;
len -= ret;
if (len) {
auto ptr = static_cast<char *>(iov.iov_base);
ptr += ret;
iov.iov_base = ptr;
iov.iov_len -= ret;
}
}
len = buf_len;
while (len) {
int ret = splice(pipes[0], NULL, fd, NULL, len, SPLICE_F_MOVE);
assert(ret >= 0);
if (!ret)
break;
len -= ret;
}
return 1;
}
int WriteToFile(const char *filename, bool use_splice) {
// Open and write to the file.
mode_t mode = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH;
int fd = open(filename, O_CREAT | O_RDWR, mode);
assert(fd >= 0);
const int start = NowUsecs();
for (int xx = 0; xx < file_size / block_size; ++xx) {
if (use_splice) {
SpliceWrite(fd, file_data[xx], block_size);
} else {
assert(write(fd, file_data[xx], block_size) == block_size);
}
}
const int time = NowUsecs() - start;
// Close file.
assert(close(fd) == 0);
return time;
}
void ValidateData() {
// Open and read from file.
const int fd = open(filename, O_RDWR);
assert(fd >= 0);
char *read_buf = (char *)malloc(block_size);
for (int xx = 0; xx < file_size / block_size; ++xx) {
assert(read(fd, read_buf, block_size) == block_size);
assert(memcmp(read_buf, file_data[xx], block_size) == 0);
}
// Close file.
assert(close(fd) == 0);
assert(unlink(filename) == 0);
}
int main(int argc, char **argv) {
auto res = pipe(pipes);
assert(res == 0);
CreateData();
const int without_splice = WriteToFile(filename, false /* use splice */);
ValidateData();
const int with_splice = WriteToFile(filename, true /* use splice */);
ValidateData();
cout << "TIME WITH SPLICE: " << with_splice << endl;
cout << "TIME WITHOUT SPLICE: " << without_splice << endl;
return 0;
}
I did a proof-of-concept some years ago where I got a 4x speedup using optimized, specially tailored vmsplice() code. This was measured against a generic socket/write()-based solution. This blog post from natsys-lab echoes my findings. But I believe you need exactly the right use case to get near that number.
So what are you doing wrong? Primarily, I think you are measuring the wrong thing. When writing directly to a file you have one system call, write(), and you are not actually copying data (except into the kernel). When you have a buffer with data that you want to write to disk, it's not going to get faster than that.
In your vmsplice()/splice() setup you are still copying your data into the kernel, but you now have a total of two system calls, vmsplice()+splice(), to get it to disk. The speed being identical to write() is probably just a testament to Linux's system call speed :-)
A more "fair" setup would be to write one program that read()s from stdin and write()s the same data to stdout, and an otherwise identical program that simply splice()s stdin into a file (or points stdout at a file when you run it); a sketch of the latter follows below. Although this setup might be too simple to really show anything.
Aside: an (undocumented?) feature of vmsplice() is that you can also use it to read data from a pipe. I used this in my old POC; it was basically just an IPC layer based on the idea of passing memory pages around using vmsplice().
Note: NowUsecs() probably overflows its int return value.
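For illustration, here is a minimal sketch (my own, not from the original answer) of the second program suggested above: it splices whatever arrives on stdin, which must be a pipe (e.g. cat Test-File | ./splice_to_file out.bin), into a file without passing the data through user space:

#define _GNU_SOURCE        /* for splice() */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <output-file>\n", argv[0]);
        return 1;
    }
    int out = open(argv[1], O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (out < 0) { perror("open"); return 1; }
    for (;;) {
        /* splice() requires at least one end to be a pipe, hence the piped stdin */
        ssize_t n = splice(STDIN_FILENO, NULL, out, NULL, 64 * 1024, SPLICE_F_MOVE);
        if (n < 0) { perror("splice"); return 1; }
        if (n == 0) break;   /* EOF on stdin */
    }
    close(out);
    return 0;
}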

MPI I/O output file is missing random lines

I am working on an MPI I/O problem. Rank 0 reads positions from a parameter file and sends them to ranks 1, 2 and 3. Each of these processes (1, 2, 3) reads text from the input file at the position rank 0 gave it and writes that text to a different line of the output file. When I run the program on a single computer, everything is fine. But when I use two computers (still 4 processes: ranks 0 and 1 on the server, ranks 2 and 3 on the client), some random lines of the output file go missing! Here is my code
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
//define the message
#define MSG_MISSION_COMPLETE 78
#define MSG_EXIT 79
//define a structural message of MPI
int array_of_blocklengths[3] = { 1, 1, 1 };
MPI_Aint array_of_displacements[3] = { 0, sizeof(float), sizeof(float) + sizeof(int) };
MPI_Datatype array_of_types[3] = {MPI_FLOAT, MPI_FLOAT, MPI_INT};
MPI_Datatype location;
int master();
int slave(MPI_File fhr, MPI_File fhw);
int main(int argc, char* argv[])
{
int rank;
MPI_File fhr, fhw;
char read[] = "./sharedReadSample1.txt";
char write[] = "./sharedWriteSample1.txt";
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
printf("%d is speaking\n", rank);
MPI_File_open(MPI_COMM_WORLD, read, MPI_MODE_RDONLY, MPI_INFO_NULL, &fhr);
MPI_File_open(MPI_COMM_WORLD, write, MPI_MODE_CREATE|MPI_MODE_WRONLY, MPI_INFO_NULL, &fhw);
if (rank == 0)//rank 0, dispatch the tasks
master();
else//other processes
slave(fhr, fhw);
MPI_Finalize();
printf("%d said byebye\n", rank);
MPI_File_close(&fhr);
MPI_File_close(&fhw);
return 0;
}
int master()//master, read the parameters, send them to other slave processes, get the message of task finishing, arrange next task to the slave who completed the task
{
int i, size, firstmsg, nslave;
int buf[256];
struct{
float pause;//pause time
int stand;//starting position in the file
int offset;//offset
}buf_str[10000] = { {0.0,0,0} };
MPI_Comm_size(MPI_COMM_WORLD, &size);
nslave = size - 1;//the number of slaves
FILE* fp;
FILE* fpm;//for log
fp = fopen("sharedAttributeSample1.txt", "rb");
if (fp == NULL)
{
printf("The file was not opened\n");
getchar();
//send a quit message to slaves, use the tag to tell them(>10000)
for (i = 10000; i < 10000 + nslave; i++)
{
buf[0] = MSG_EXIT;
MPI_Send(&buf[0], 1, MPI_INT, i - 10000 + 1, i, MPI_COMM_WORLD);
}
return 0;
}
else
printf("The file was opened\n");
fpm = fopen("./logs/log_master.txt","wb");
if (fpm == NULL)
printf("master log system failed to load!\n");
for (i = 0; i < 10000;i++)
{
fscanf(fp,"%f,%d,%d", &buf_str[i].pause, &buf_str[i].stand, &buf_str[i].offset);
}
MPI_Status status;
MPI_Type_struct(3, array_of_blocklengths, array_of_displacements, array_of_types, &location);
MPI_Type_commit(&location);
for (i = 0; i < nslave; i++)
{
MPI_Send(&buf_str[i], 1, location, i+1, i, MPI_COMM_WORLD);
fprintf(fpm, "initial message %d sent\n",i);
}
for (i = nslave; i < 10000; i++)
{
MPI_Recv(buf, 256, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);//receive messages from slaves
fprintf(fpm, "task %d complete massage received\n",status.MPI_TAG);
if (buf[0] == MSG_MISSION_COMPLETE)//send next task
{
firstmsg = status.MPI_SOURCE;
fprintf(fpm, "task %d is sent to %d \n", i, firstmsg);
MPI_Send(&buf_str[i], 1, location, firstmsg, i, MPI_COMM_WORLD);
}
}
for (i = 10000; i < 10000+nslave; i++)//send quitting message
{
buf[0] = MSG_EXIT;
MPI_Send(&buf_str[0], 1, location, i-10000+1, i, MPI_COMM_WORLD);
}
fclose(fp);
fclose(fpm);
return 0;
}
int slave(MPI_File fhr, MPI_File fhw)
{
struct{
float pause;
int stand;
int offset;
}buf_str;
char buf[256];
int buf_s[256];
int rank, size, nslave, i=0;
char name[30];
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
nslave = size - 1;
FILE* fps[nslave];
//open their own logging pointers
for(i=0;i<nslave;i++)
{
if(i == rank-1)
{
sprintf(name,"./logs/logfile_slave%d",i+1);
fps[i] = fopen(name, "w");
if(fps[i] == NULL)
printf("failed to open logfile of slave %d\n", i+1);
break;
}
}
MPI_Status status;
MPI_Status status_read;
MPI_Status status_write;
MPI_Type_struct(3, array_of_blocklengths, array_of_displacements, array_of_types, &location);
MPI_Type_commit(&location);
while (1)
{
//receive the message from master
MPI_Recv(&buf_str, 1, location, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
fprintf(fps[i], "process %d message %d received\n",rank,status.MPI_TAG);
if (status.MPI_TAG < 10000){//if it is a task
sleep(buf_str.pause);//sleep, to simulate a computing process
fprintf(fps[i], "process %d sleep for %f seconds\n", rank, buf_str.pause);
//read from the position given
MPI_File_read_at(fhr, buf_str.stand, buf, buf_str.offset, MPI_CHAR, &status_read);
buf[buf_str.offset] = '\n';//need a \n
MPI_File_write_at(fhw, status.MPI_TAG*(buf_str.offset+1), buf, buf_str.offset+1, MPI_CHAR, &status_write);
fprintf(fps[i], "%d has done task %d\n", rank, status.MPI_TAG);
//send task complete message to master
buf_s[0] = MSG_MISSION_COMPLETE;
MPI_Send(&buf_s, 1, MPI_INT, 0, status.MPI_TAG, MPI_COMM_WORLD);
}
else
break;
}
fclose(fps[i]);
return 0;
}

OpenMPI multiple MPI_Send and MPI_Recv not working

When I try to make multiple MPI_Send or MPI_Recv calls in the program, the executable hangs on the nodes and on the root: when it tries to execute the second MPI_Send or MPI_Recv, the communication blocks. At the same time the binaries are running at 100% CPU on the machines.
When I ran this code on Windows 7 64-bit with OpenMPI 1.6.3 64-bit, it ran successfully. But the same code does not work on Linux, i.e. CentOS 6.3 x86_64 with OpenMPI 1.6.3 64-bit. What have I done wrong?
The code is posted below.
#include <mpi.h>
#include <cstdio> // for printf
int main(int argc, char** argv) {
MPI::Init();
int rank = MPI::COMM_WORLD.Get_rank();
int size = MPI::COMM_WORLD.Get_size();
char name[256] = { };
int len = 0;
MPI::Get_processor_name(name, len);
printf("Hi I'm %s:%d\n", name, rank);
if (rank == 0)
{
while (size >= 1)
{
int val, stat = 1;
MPI::Status status;
MPI::COMM_WORLD.Recv(&val, 1, MPI::INT, 1, 0, status);
int source = status.Get_source();
printf("%s:%d received %d from %d\n", name, rank, val, source);
MPI::COMM_WORLD.Send(&stat, 1, MPI::INT, 1, 2);
printf("%s:%d sent status %d\n", name, rank, stat);
size--;
}
} else
{
int val = rank + 10;
int stat = 0;
printf("%s:%d sending %d...\n", name, rank, val);
MPI::COMM_WORLD.Send(&val, 1, MPI::INT, 0, 0);
printf("%s:%d sent %d\n", name, rank, val);
MPI::Status status;
MPI::COMM_WORLD.Recv(&stat, 1, MPI::INT, 0, 2, status);
int source = status.Get_source();
printf("%s:%d received status %d from %d\n", name, rank, stat, source);
}
size = MPI::COMM_WORLD.Get_size();
if (rank == 0)
{
while (size >= 1)
{
int val, stat = 1;
MPI::Status status;
MPI::COMM_WORLD.Recv(&val, 1, MPI::INT, 1, 1, status);
int source = status.Get_source();
printf("%s:0 received %d from %d\n", name, val, source);
size--;
}
printf("all workers checked in!\n");
}
else
{
int val = rank + 10 + 5;
printf("%s:%d sending %d...\n", name, rank, val);
MPI::COMM_WORLD.Send(&val, 1, MPI::INT, 0, 1);
printf("%s:%d sent %d\n", name, rank, val);
}
MPI::Finalize();
return 0;
}
Hi Hristo, I have changed the source as you said; the updated code is posted below.
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv)
{
int iNumProcess = 0, iRank = 0, iNameLen = 0, n;
char szNodeName[MPI_MAX_PROCESSOR_NAME] = {};
MPI_Status stMPIStatus;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &iNumProcess);
MPI_Comm_rank(MPI_COMM_WORLD, &iRank);
MPI_Get_processor_name(szNodeName, &iNameLen);
printf("Hi I'm %s:%d\n", szNodeName, iRank);
if (iRank == 0)
{
int iNode = 1;
while (iNumProcess > 1)
{
int iVal = 0, iStat = 1;
MPI_Recv(&iVal, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &stMPIStatus);
printf("%s:%d received %d\n", szNodeName, iRank, iVal);
MPI_Send(&iStat, 1, MPI_INT, iNode, 1, MPI_COMM_WORLD);
printf("%s:%d sent Status %d\n", szNodeName, iRank, iStat);
MPI_Recv(&iVal, 1, MPI_INT, MPI_ANY_SOURCE, 2, MPI_COMM_WORLD, &stMPIStatus);
printf("%s:%d received %d\n", szNodeName, iRank, iVal);
iNumProcess--;
iNode++;
}
printf("all workers checked in!\n");
}
else
{
int iVal = iRank + 10;
int iStat = 0;
printf("%s:%d sending %d...\n", szNodeName, iRank, iVal);
MPI_Send(&iVal, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
printf("%s:%d sent %d\n", szNodeName, iRank, iVal);
MPI_Recv(&iStat, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &stMPIStatus);
printf("%s:%d received status %d\n", szNodeName, iRank, iVal);
iVal = 20;
printf("%s:%d sending %d...\n", szNodeName, iRank, iVal);
MPI_Send(&iVal, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);
printf("%s:%d sent %d\n", szNodeName, iRank, iVal);
}
MPI_Finalize();
return 0;
}
I got the output as follows: after the second send/receive, the root waits forever and the nodes keep running at 100% CPU utilisation. The output is given below.
Hi I'm N1433:1
N1433:1 sending 11...
Hi I'm N1425:0
N1425:0 received 11
N1425:0 sent Status 1
N1433:1 sent 11
N1433:1 received status 11
N1433:1 sending 20...
Here N1433 and N1425 are machine names. Please help
The code for the master is wrong: it always sends to and awaits messages from the same rank, rank 1. Thus the program would only function correctly if run as mpiexec -np 2 .... What you probably wanted to do is use MPI_ANY_SOURCE as the source rank and then use the actual source rank from the status as the destination of the send operation. You also shouldn't use while (size >= 1), since rank 0 is not talking to itself and the number of communications is expected to be one less than size.
if (rank == 0)
{
while (size > 1)
// ^^^^^^^^
{
int val, stat = 1;
MPI::Status status;
MPI::COMM_WORLD.Recv(&val, 1, MPI::INT, MPI_ANY_SOURCE, 0, status);
// Use wildcard source here ------------^^^^^^^^^^^^^^
int source = status.Get_source();
printf("%s:%d received %d from %d\n", name, rank, val, source);
MPI::COMM_WORLD.Send(&stat, 1, MPI::INT, source, 2);
// Send back to the same process --------^^^^^^
printf("%s:%d sent status %d\n", name, rank, stat);
size--;
}
} else
Doing something like this in the worker is pointless:
MPI::Status status;
MPI::COMM_WORLD.Recv(&stat, 1, MPI::INT, 0, 2, status);
// Source rank is fixed here ------------^
int source = status.Get_source();
printf("%s:%d received status %d from %d\n", name, rank, stat, source);
You have already specified rank 0 as the source in the receive operation so it would only be able to receive messages from rank 0. There is no way that status.Get_source() would return any value other than 0, unless some communication error had occurred, in which case an exception would get thrown by MPI::COMM_WORLD.Recv().
The same is also true for the second loop in your code.
By the way, you are using what used to be the official standard C++ bindings. They were deprecated in MPI-2.2, and the latest version of the standard (MPI-3.0) removed them completely, as they are no longer supported by the MPI Forum. You should use the C bindings instead, or rely on third-party C++ interfaces like Boost.MPI.
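For reference, a minimal sketch (untested on the poster's cluster) of the same master/worker exchange written with the plain C bindings, replying to whichever rank actually sent the message:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        int n;
        for (n = 1; n < size; n++) {   /* one exchange per worker, i.e. size-1 of them */
            int val, stat = 1;
            MPI_Status status;
            MPI_Recv(&val, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
            printf("0 received %d from %d\n", val, status.MPI_SOURCE);
            /* reply to the rank that actually sent the message */
            MPI_Send(&stat, 1, MPI_INT, status.MPI_SOURCE, 2, MPI_COMM_WORLD);
        }
    } else {
        int val = rank + 10, stat;
        MPI_Send(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Recv(&stat, 1, MPI_INT, 0, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%d received status %d\n", rank, stat);
    }
    MPI_Finalize();
    return 0;
}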
After installing MPICH2 instead of OpenMPI, it worked successfully. I think there is some problem with using OpenMPI 1.6.3 on my cluster machines.
