Hybrid MPI+OpenMP vs. pure MPI performance

I am converting a 3-D Jacobi solver from pure MPI to hybrid MPI+OpenMP. I have a 192x192x192 array that, in the pure MPI version, is divided among 24 processes with a 1-D decomposition, i.e. each process owns a 192/24 x 192 x 192 = 8 x 192 x 192 slab of data. The update is:
for (i = 0; i <= 7; i++)
    for (j = 0; j <= 191; j++)
        for (k = 0; k <= 191; k++)
        {
            unew[i][j][k] = 1/6.0 * (u[i+1][j][k] + u[i-1][j][k] +
                                     u[i][j+1][k] + u[i][j-1][k] +
                                     u[i][j][k+1] + u[i][j][k-1]);
        }
This update takes around 60 seconds for each process.
Now, for the hybrid version, I run two MPI processes (one process per socket, launched with --bind-to socket --map-by socket, and with OMP_PROC_PLACES=cores and OMP_PROC_BIND=close). I create 12 threads per MPI process (i.e. 12 threads per socket). Each MPI process now owns an array of 192/2 x 192 x 192 = 96 x 192 x 192 elements, and each thread works on an 8 x 192 x 192 (= 96/12 x 192 x 192) portion of the array owned by its process. I do the same triple-loop update using threads, but now the time is approximately 76 seconds per thread. The load balance is perfect in both cases. What could be causing the performance degradation? Is it false sharing, because threads could be invalidating cache lines close to each other's chunk of data? If yes, how do I reduce it? (I have purposefully not mentioned ghost data because initially I am NOT overlapping communication with computation.)
In response to the comments below, I am posting the code. Apologies for the long MWE, but you can safely ignore (1) the header file inclusions, (2) the variable declarations, (3) the memory allocation routine, (4) the creation of the Cartesian topology, (5) setting the boundary conditions in parallel using an OpenMP parallel region, (6) the MPI subarray datatype declarations, and (7) the MPI_Isend() and MPI_Irecv() calls, and just concentrate on (a) the INDEPENDENT UPDATE OpenMP parallel region and (b) the independent_update(...) routine called from there.
/* IGNORE THIS PORTION */
#include<mpi.h>
#include<omp.h>
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#define MIN(a,b) (a < b ? a : b)
#define Tol 0.00001
/* IGNORE THIS ROUTINE */
void input(int *X, int *Y, int *Z)
{
int a=193, b=193, c=193;
*X = a;
*Y = b;
*Z = c;
}
/* IGNORE THIS ROUTINE */
float*** allocate_mem(int X, int Y, int Z)
{
int i,j;
float ***matrix;
float *arr;
arr = (float*)calloc(X*Y*Z, sizeof(float));
matrix = (float***)calloc(X, sizeof(float**));
for(i = 0 ; i<= X-1; i++)
matrix[i] = (float**)calloc(Y, sizeof(float*));
for(i = 0 ; i <= X-1; i++)
for(j=0; j<= Y-1; j++)
matrix[i][j] = &(arr[i*Y*Z + j*Z]);
return matrix ;
}
/* THIS ROUTINE IS IMPORTANT */
float independent_update(float ***old, float ***new, int NX, int NY, int NZ, int tID, int chunk)
{
    int i, j, k, start, end;
    float error = 0.0;
    float diff;
    start = tID * chunk + 1;
    end = MIN((tID + 1) * chunk, NX - 2);
    for (i = start; i <= end; i++)
    {
        for (j = 1; j <= NY - 2; j++)
        {
            #pragma omp simd
            for (k = 1; k <= NZ - 2; k++)
            {
                new[i][j][k] = (1/6.0) * (old[i-1][j][k] + old[i+1][j][k] +
                                          old[i][j-1][k] + old[i][j+1][k] +
                                          old[i][j][k-1] + old[i][j][k+1]);
                diff = 1.0 - new[i][j][k];
                diff = (diff > 0 ? diff : -1.0 * diff);
                if (diff > error)
                    error = diff;
            }
        }
    }
    return error;
}
int main(int argc, char *argv[])
{
/* IGNORE VARIABLE DECLARATION */
int size, rank; //Size of old_comm and rank of process
int i, j, k,l; //General loop variables
MPI_Comm old_comm, new_comm; //MPI_COMM_WORLD handle and for MPI_Cart_create()
int N[3]; //For taking input of size of matrix from user
int P; //Represent number of processes i.e. same as size
int dims[3]; //For dimensions of Cartesian topology
int PX, PY, PZ; //X dim, Y dim, Z dim of each process
float ***old, ***new, ***temp; //Matrices for results dimensions is (Px+2)*(PY+2)*(PZ+2)
int period[3]; //Periodicity for each dimension
int reorder; //Whether processes should be reordered in new cartesian topology
int ndims; //Number of dimensions (which is 3)
int Z_TOWARDS_U, Z_AWAY_U; //Z neighbour towards you and away from you (Z const)
int X_DOWN, X_UP; //Below plane and above plane (X const)
int Y_LEFT, Y_RIGHT; //Left plane and right plane (Y const)
int coords[3]; //Finding coordinates of processes
int dimension; //Used in MPI_Cart_shift() , values = 0, 1,2
int displacement; //Used in MPI_Cart_shift(), values will be +1 to find immediate neighbours
float l_max_err; //Local maximum error on process
float l_max_err_new; //For dependent faces.
float G_max_err = 1.0; //Maximum error for stopping criterion
int iterations = 0 ; //Counting number of iterations
MPI_Request send[6], recv[6]; //For MPI_Isend and MPI_Irecv
int start[3]; //Start will be defined in MPI_Isend() and MPI_Irecv()
int gsize[3]; //Defining global size of subarray
MPI_Datatype x_subarray; //For sending X_UP and X_DOWN
int local_x[3]; //Defining local plane size for X_UP/X_DOWN
MPI_Datatype y_subarray; //For sending Y_LEFT and Y_RIGHT
int local_y[3]; //Defining local plane for Y_LEFT/Y_RIGHT
MPI_Datatype z_subarray; //For sending Z_TOWARDS_U and Z_AWAY_U
int local_z[3]; //Defining local plan size for XY plane i.e. where Z=0
double strt, end; //For measuring time
double strt1, end1, delta1; //For measuring trivial time 1
double strt2, end2, delta2; //For measuring trivial time 2
double t_i_strt, t_i_end, t_i_sum=0; //Time for independent computational kernel
double t_up_strt, t_up_end, t_up_sum=0; //Time for X_UP
double t_down_strt, t_down_end, t_down_sum=0; //Time for X_DOWN
double t_left_strt, t_left_end, t_left_sum=0; //Time for Y_LEFT
double t_right_strt, t_right_end, t_right_sum=0; //Time for Y_RIGHT
double t_towards_strt, t_towards_end, t_towards_sum=0; //For Z_TOWARDS_U
double t_away_strt, t_away_end, t_away_sum=0; //For Z_AWAY_U
double t_comm_strt, t_comm_end, t_comm_sum=0; //Time comm + independent update (need to subtract to get comm time)
double t_setup_strt,t_setup_end; //Set-up start and end time
double t_allred_strt,t_allred_end,t_allred_total=0.0; //Measuring Allreduce time separately.
int threadID; //ID of a thread
int nthreads; //Total threads in OpenMP region
int chunk; //chunk - used to calculate iterations of a thread
/* IGNORE MPI STARTUP ETC */
MPI_Init(&argc, &argv);
t_setup_strt = MPI_Wtime();
old_comm = MPI_COMM_WORLD;
MPI_Comm_size(old_comm, &size);
MPI_Comm_rank(old_comm, &rank);
P = size;
if(rank == 0)
{
input(&N[0], &N[1], &N[2]);
}
MPI_Bcast(N, 3, MPI_INT, 0, old_comm);
dims[0] = 0;
dims[1] = 0;
dims[2] = 0;
period[0] = period[1] = period[2] = 0; //All dimensions aperiodic
reorder = 0 ; //No reordering of ranks in new_comm
ndims = 3;
MPI_Dims_create(P,ndims,dims);
MPI_Cart_create(old_comm, ndims, dims, period, reorder, &new_comm);
if( (N[0]-1) % dims[0] == 0 && (N[1]-1) % dims[1] == 0 && (N[2]-1) % dims[2] == 0 )
{
PX = (N[0]-1)/dims[0]; //Rows of unknowns each process gets
PY = (N[1]-1)/dims[1]; //Columns of unknowns each process gets
PZ = (N[2]-1)/dims[2]; //Depth of unknowns each process gets
}
old = allocate_mem(PX+2, PY+2, PZ+2); //3D arrays with ghost points
new = allocate_mem(PX+2, PY+2, PZ+2); //3D arrays with ghost points
dimension = 0;
displacement = 1;
MPI_Cart_shift(new_comm, dimension, displacement, &X_UP, &X_DOWN); //Find UP and DOWN neighbours
dimension = 1;
MPI_Cart_shift(new_comm, dimension, displacement, &Y_LEFT, &Y_RIGHT); //Find UP and DOWN neighbours
dimension = 2;
MPI_Cart_shift(new_comm, dimension, displacement, &Z_TOWARDS_U, &Z_AWAY_U); //Find UP and DOWN neighbours
/* IGNORE BOUNDARY SETUPS FOR PDE */
#pragma omp parallel for default(none) shared(old,new,PX,PY,PZ) private(i,j,k) schedule(static)
for(i = 0; i <= PX+1; i++)
{
for(j = 0; j <= PY+1; j++)
{
for(k = 0; k <= PZ+1; k++)
{
old[i][j][k] = 0.0;
new[i][j][k] = 0.0;
}
}
}
#pragma omp parallel default(none) shared(X_DOWN,X_UP,Y_LEFT,Y_RIGHT,Z_TOWARDS_U,Z_AWAY_U,old,new,PX,PY,PZ) private(i,j,k,threadID,nthreads)
{
threadID = omp_get_thread_num();
nthreads = omp_get_num_threads();
if(threadID == 0)
{
if(X_DOWN == MPI_PROC_NULL) //X is constant here, this is YZ upper plane
{
for(j = 1 ; j<= PY ; j++)
for(k = 1 ; k<= PZ ; k++)
{
old[0][j][k] = 1;
new[0][j][k] = 1; //Set boundaries in new also
}
}
}
if(threadID == (nthreads-1))
{
if(X_UP == MPI_PROC_NULL) //YZ lower plane
{
for(j = 1 ; j<= PY ; j++)
for(k = 1; k<= PZ ; k++)
{
old[PX+1][j][k] = 1;
new[PX+1][j][k] = 1;
}
}
}
if(Y_LEFT == MPI_PROC_NULL) //Y is constant, this is left XZ plane, possibly can use collapse(2)
{
#pragma omp for schedule(static)
for(i = 1 ; i<= PX ; i++)
for(k = 1; k<= PZ; k++)
{
old[i][0][k] = 1;
new[i][0][k] = 1;
}
}
if(Y_RIGHT == MPI_PROC_NULL) //XZ right plane, again collapse(2) potential
{
#pragma omp for schedule(static)
for(i = 1 ; i<= PX; i++)
for(k = 1; k<= PZ ; k++)
{
old[i][PY+1][k] = 1;
new[i][PY+1][k] = 1;
}
}
if(Z_TOWARDS_U == MPI_PROC_NULL) //Z is constant here, towards you XY plane, collapse(2)
{
#pragma omp for schedule(static)
for(i = 1 ; i<= PX ; i++)
for(j = 1; j<= PY ; j++)
{
old[i][j][0] = 1;
new[i][j][0] = 1;
}
}
if(Z_AWAY_U == MPI_PROC_NULL) //Away from you XY plane, collapse(2)
{
#pragma omp for schedule(static)
for(i = 1 ; i<= PX; i++)
for(j = 1; j<= PY ; j++)
{
old[i][j][PZ+1] = 1;
new[i][j][PZ+1] = 1;
}
}
}
/* IGNORE SUBARRAY DECLARATION */
gsize[0] = PX+2; //Global sizes of 3-D cubes for each process
gsize[1] = PY+2;
gsize[2] = PZ+2;
start[0] = 0; //Will specify starting location while sending/receiving
start[1] = 0;
start[2] = 0;
local_x[0] = 1;
local_x[1] = PY;
local_x[2] = PZ;
MPI_Type_create_subarray(ndims, gsize, local_x, start, MPI_ORDER_C, MPI_FLOAT, &x_subarray);
MPI_Type_commit(&x_subarray);
local_y[0] = PX;
local_y[1] = 1;
local_y[2] = PZ;
MPI_Type_create_subarray(ndims, gsize, local_y, start, MPI_ORDER_C, MPI_FLOAT, &y_subarray);
MPI_Type_commit(&y_subarray);
local_z[0] = PX;
local_z[1] = PY;
local_z[2] = 1;
MPI_Type_create_subarray(ndims, gsize, local_z, start, MPI_ORDER_C, MPI_FLOAT, &z_subarray);
MPI_Type_commit(&z_subarray);
t_setup_end = MPI_Wtime();
strt = MPI_Wtime();
while(G_max_err > Tol) //iterations < ITERATIONS)
{
iterations++ ;
t_comm_strt = MPI_Wtime();
/* IGNORE MPI COMMUNICATION */
MPI_Irecv(&old[0][1][1], 1, x_subarray, X_DOWN, 10, new_comm, &recv[0]);
MPI_Irecv(&old[PX+1][1][1], 1, x_subarray, X_UP, 20, new_comm, &recv[1]);
MPI_Irecv(&old[1][PY+1][1], 1, y_subarray, Y_RIGHT, 30, new_comm, &recv[2]);
MPI_Irecv(&old[1][0][1], 1, y_subarray, Y_LEFT, 40, new_comm, &recv[3]);
MPI_Irecv(&old[1][1][PZ+1], 1, z_subarray, Z_AWAY_U, 50, new_comm, &recv[4]);
MPI_Irecv(&old[1][1][0], 1, z_subarray, Z_TOWARDS_U, 60, new_comm, &recv[5]);
MPI_Isend(&old[PX][1][1], 1, x_subarray, X_UP, 10, new_comm, &send[0]);
MPI_Isend(&old[1][1][1], 1, x_subarray, X_DOWN, 20, new_comm, &send[1]);
MPI_Isend(&old[1][1][1], 1, y_subarray, Y_LEFT, 30, new_comm, &send[2]);
MPI_Isend(&old[1][PY][1], 1, y_subarray, Y_RIGHT, 40, new_comm, &send[3]);
MPI_Isend(&old[1][1][1], 1, z_subarray, Z_TOWARDS_U, 50, new_comm, &send[4]);
MPI_Isend(&old[1][1][PZ], 1, z_subarray, Z_AWAY_U, 60, new_comm, &send[5]);
MPI_Waitall(6, send, MPI_STATUSES_IGNORE);
MPI_Waitall(6, recv, MPI_STATUSES_IGNORE);
t_comm_end = MPI_Wtime();
t_comm_sum = t_comm_sum + (t_comm_end - t_comm_strt);
/* Use threads in Independent update */
t_i_strt = MPI_Wtime();
l_max_err = 0.0; //Very important, Reduction result is combined with this !
/* THIS IS THE IMPORTANT REGION */
#pragma omp parallel default(none) shared(old,new,PX,PY,PZ,chunk) private(threadID,nthreads) reduction(max:l_max_err)
{
    nthreads = omp_get_num_threads();
    threadID = omp_get_thread_num();
    chunk = (PX - 1 + 1) / nthreads;
    l_max_err = independent_update(old, new, PX + 2, PY + 2, PZ + 2, threadID, chunk);
}
t_i_end = MPI_Wtime();
t_i_sum = t_i_sum + (t_i_end - t_i_strt) ;
/* IGNORE THE REMAINING CODE */
t_allred_strt = MPI_Wtime();
MPI_Allreduce(&l_max_err, &G_max_err, 1, MPI_FLOAT, MPI_MAX, new_comm);
t_allred_end = MPI_Wtime();
t_allred_total = t_allred_total + (t_allred_end - t_allred_strt);
temp = new ;
new = old;
old = temp;
}
MPI_Barrier(new_comm);
end = MPI_Wtime();
if( rank == 0)
{
printf("\nIterations = %d, G_max_err = %f", iterations, G_max_err);
printf("\nThe total SET-UP time for MPI and boundary conditions is %lf", (t_setup_end-t_setup_strt));
printf("\nThe total time for SOLVING is %lf", (end-strt));
printf("\nThe total time for INDEPENDENT COMPUTE %lf", t_i_sum);
printf("\nThe total time for COMMUNICATION OVERHEAD is %lf", t_comm_sum);
printf("\nThe total time for MPI_ALLREDUCE() is %lf", t_allred_total);
}
MPI_Type_free(&x_subarray);
MPI_Type_free(&y_subarray);
MPI_Type_free(&z_subarray);
free(&old[0][0][0]);
free(&new[0][0][0]);
MPI_Finalize();
return 0;
}
P.S.: I am almost sure that the cost of spawning/waking the threads is not the reason for such a huge difference in timing.
Please find attached the Scalasca snapshot for the INDEPENDENT COMPUTE region of the hybrid program.
Using the loop simd construct:
#pragma omp parallel default(none) shared(old,new,PX,PY,PZ,l_max_err) private(i,j,k,diff)
{
    #pragma omp for simd schedule(static) reduction(max:l_max_err)
    for (i = 1; i <= PX; i++)
    {
        for (j = 1; j <= PY; j++)
        {
            for (k = 1; k <= PZ; k++)
            {
                new[i][j][k] = (1/6.0) * (old[i-1][j][k] + old[i+1][j][k] +
                                          old[i][j-1][k] + old[i][j+1][k] +
                                          old[i][j][k-1] + old[i][j][k+1]);
                diff = 1.0 - new[i][j][k];
                diff = (diff > 0 ? diff : -1.0 * diff);
                if (diff > l_max_err)
                    l_max_err = diff;
            }
        }
    }
}

You frequently get memory-access and cache issues when you run just one MPI process per socket on a CPU with multiple memory controllers. The problem can be on either the read side or the write side, so you can't really say which. It is especially an issue for thread-parallel execution of lightweight compute tasks (e.g. math on arrays); one MPI process per socket in this situation tends to fare significantly worse than pure MPI. Things to try:
- In your BIOS, enable whatever the maximal NUMA-per-socket option is.
- Use one MPI process per NUMA node (see the example launch line after this list).
- Try different chunk sizes in schedule(static); I've rarely found the default to be best.
Essentially, this ensures that each bundle of threads only works on a single pool of memory.
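As a rough illustration (assuming Open MPI's mpirun on a node where each socket exposes two NUMA domains, giving 4 ranks with 6 threads each; jacobi_hybrid is a hypothetical binary name and the numbers must be adapted to your machine), a launch in that spirit might look like:
# Hypothetical launch: one MPI rank per NUMA domain, 6 OpenMP threads per rank.
export OMP_NUM_THREADS=6
export OMP_PLACES=cores
export OMP_PROC_BIND=close
mpirun -np 4 --map-by numa --bind-to numa ./jacobi_hybrid
For the schedule experiment, changing the worksharing clause on the outer i loop to, e.g., schedule(static,1) or schedule(static,2) and timing a few values is usually enough to see whether the default chunking is part of the problem.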

Related

realloc(): invalid next size in a pthread program

I am attempting to make a multithreaded program that computes the prime factorization of each input (each input getting its own thread). Using realloc or calloc on an int* buffer, entering large numbers or many threads {1..100} gives me realloc(): invalid next size or a failure inside malloc.c. I am not sure if it is a deeper pointer issue, but I need output like this:
./thread {1..10}
1:
2: 2
3: 3
4: 2 2
5: 5
etc....
10: 2 5
int size = 5;
int *primeNumbers = (int *)calloc(size, sizeof(int));
int n = atoi(param);
//sum = 0;
//Add original value to int* primeNumbers for return printing
primeNumbers[counter] = n;
counter++;
while (n % 2 == 0)
{
    printf("%d ", 2);
    //If max allocated memory is reached, double memory size
    if (counter == size)
    {
        size = size * 2;
        primeNumbers = realloc(primeNumbers, size * sizeof(int));
        primeNumbers[size] = 0;
    }
    primeNumbers[counter] = 2;
    counter++;
    n = n / 2;
}
for (int i = 3; i <= sqrt(n); i = i + 2)
{
    while (n % i == 0)
    {
        printf("%d ", i);
        //If max allocated memory is reached, double memory size
        if (counter == size)
        {
            size = size * 2;
            primeNumbers = realloc(primeNumbers, size * sizeof(int));
            primeNumbers[size] = 0;
        }
        primeNumbers[counter] = i;
        counter++;
        n = n / i;
    }
}
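One likely source of the heap corruption is the line primeNumbers[size] = 0; executed right after each realloc: the new buffer holds size ints, so its valid indices are 0 through size-1, and writing to index size stomps on the allocator's bookkeeping, which later surfaces as realloc(): invalid next size. Below is a minimal sketch of the doubling pattern with that write removed and the realloc return value checked (append_factor is a hypothetical helper, not part of the original program):
// Hypothetical helper (not from the original post): append one factor to a
// growable int buffer, doubling its capacity when it is full.
#include <stdlib.h>

static int append_factor(int **buf, int *size, int *counter, int value)
{
    if (*counter == *size)
    {
        int newSize = *size * 2;
        int *tmp = realloc(*buf, newSize * sizeof(int));
        if (tmp == NULL)      /* on failure, the old buffer is still valid */
            return -1;
        *buf = tmp;
        *size = newSize;      /* note: no write to (*buf)[newSize] here */
    }
    (*buf)[(*counter)++] = value;
    return 0;
}
In the posted loops, each realloc block plus primeNumbers[counter] = 2; counter++; would then collapse to a single call, append_factor(&primeNumbers, &size, &counter, 2); (and likewise with i in the odd-factor loop).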

Add anti-aliasing/bandlimit for looped wav sample (NOT Fourier transform)

How do I build anti-aliased interpolation in C++? I have a simple 4096- or 1024-sample wavetable buffer. Of course, when I play it back at high frequencies I get aliasing issues. To avoid this, the signal must be band-limited at high frequencies; roughly speaking, a sawtooth wave played at a high frequency should look like a plain sine. That is what I want, so that my sound doesn't get dirty, like turning the knobs on an old FM/AM car radio.
I know how to build band-limited square, triangle, and sawtooth waves with the Fourier transform, so my question is only about wavetables.
I found a solution in the AudioKit sources. The single buffer is split into 10 buffers, one per octave, so when you play a sound you don't play the original wave but a version that was prepared for that specific octave.
Add WaveStack.hpp to your project:
namespace AudioKitCore
{
// WaveStack represents a series of progressively lower-resolution sampled versions of a
// waveform. Client code supplies the initial waveform, at a resolution of 1024 samples,
// equivalent to 43.6 Hz at 44.1K samples/sec (about 23.44 cents below F1, midi note 29),
// and then calls initStack() to create the filtered higher-octave versions.
// This provides a basis for anti-aliased oscillators; see class WaveStackOscillator.
struct WaveStack
{
// Highest-resolution rep uses 2^maxBits samples
static constexpr int maxBits = 10; // 1024
// maxBits also defines the number of octave levels; highest level has just 2 samples
float *pData[maxBits];
WaveStack();
~WaveStack();
// Fill pWaveData with 1024 samples, then call this
void initStack(float *pWaveData, int maxHarmonic=512);
void init();
void deinit();
float interp(int octave, float phase);
};
}
WaveStack.cpp
#include "WaveStack.hpp"
#include "kiss_fftr.h"
namespace AudioKitCore
{
WaveStack::WaveStack()
{
int length = 1 << maxBits; // length of level-0 data
pData[0] = new float[2 * length]; // 2x is enough for all levels
for (int i=1; i<maxBits; i++)
{
pData[i] = pData[i - 1] + length;
length >>= 1;
}
}
WaveStack::~WaveStack()
{
delete[] pData[0];
}
void WaveStack::initStack(float *pWaveData, int maxHarmonic)
{
// setup
int fftLength = 1 << maxBits;
float *buf = new float[fftLength];
kiss_fftr_cfg fwd = kiss_fftr_alloc(fftLength, 0, 0, 0);
kiss_fftr_cfg inv = kiss_fftr_alloc(fftLength, 1, 0, 0);
// copy supplied wave data for octave 0
for (int i=0; i < fftLength; i++) pData[0][i] = pWaveData[i];
// perform initial forward FFT to get spectrum
kiss_fft_cpx spectrum[fftLength / 2 + 1];
kiss_fftr(fwd, pData[0], spectrum);
float scaleFactor = 1.0f / (fftLength / 2);
for (int octave = (maxHarmonic==512) ? 1 : 0; octave < maxBits; octave++)
{
// zero all harmonic coefficients above new Nyquist limit
int maxHarm = 1 << (maxBits - octave - 1);
if (maxHarm > maxHarmonic) maxHarm = maxHarmonic;
for (int h=maxHarm; h <= fftLength/2; h++)
{
spectrum[h].r = 0.0f;
spectrum[h].i = 0.0f;
}
// perform inverse FFT to get filtered waveform
kiss_fftri(inv, spectrum, buf);
// resample filtered waveform
int skip = 1 << octave;
float *pOut = pData[octave];
for (int i=0; i < fftLength; i += skip) *pOut++ = scaleFactor * buf[i];
}
// teardown
kiss_fftr_free(inv);
kiss_fftr_free(fwd);
delete[] buf;
}
void WaveStack::init()
{
}
void WaveStack::deinit()
{
}
float WaveStack::interp(int octave, float phase)
{
while (phase < 0) phase += 1.0;
while (phase >= 1.0) phase -= 1.0f;
int nTableSize = 1 << (maxBits - octave);
float readIndex = phase * nTableSize;
int ri = int(readIndex);
float f = readIndex - ri;
int rj = ri + 1; if (rj >= nTableSize) rj -= nTableSize;
float *pWaveTable = pData[octave];
float si = pWaveTable[ri];
float sj = pWaveTable[rj];
return (float)((1.0 - f) * si + f * sj);
}
}
Then use it in this way:
//wave and outputWave should be float[1024];
void getSample(int octave, float* wave, float* outputWave){
uint_fast32_t impulseCount = 1024;
if (octave == 0){
impulseCount = 737;
}else if (octave == 1){
impulseCount = 369;
}
else if (octave == 2){
impulseCount = 185;
}
else if (octave == 3){
impulseCount = 93;
}
else if (octave == 4){
impulseCount = 47;
}
else if (octave == 5){
impulseCount = 24;
}
else if (octave == 6){
impulseCount = 12;
}
else if (octave == 7){
impulseCount = 6;
}
else if (octave == 8){
impulseCount = 3;
}
else if (octave == 9){
impulseCount = 2;
}
//Get sample for octave
stack->initStack(wave, impulseCount);
for (int i = 0; i < 1024;i++){
float phase = (1.0/float(1024))*i;
//get interpolated wave and apply volume compensation
outputWave[i] = stack->interp(0, phase) / 2.0;
}
}
Then, when the 10 buffers are ready, you can use them when playing a sound. With the following code you can get the buffer/octave index for a given frequency:
uint_fast8_t getBufferIndex(const float& frequency){
if (frequency >= 0 && frequency < 40){
return 0;
}
else if (frequency >= 40 && frequency < 80){
return 1;
}else if (frequency >= 80 && frequency < 160){
return 2;
}else if (frequency >= 160 && frequency < 320){
return 3;
}else if (frequency >= 320 && frequency < 640){
return 4;
}else if (frequency >= 640 && frequency < 1280){
return 5;
}else if (frequency >= 1280 && frequency < 2560){
return 6;
}else if (frequency >= 2560 && frequency < 5120){
return 7;
}else if (frequency >= 5120 && frequency < 10240){
return 8;
}else if (frequency >= 10240){
return 9;
}
return 0;
}
So if I know that my note's frequency is 440 Hz, then I get the wave for that note like this:
float notInterpolatedSound[1024];
float interpolatedSound[1024];
uint_fast8_t octaveIndex = getBufferIndex(440.0);
getSample(octaveIndex, notInterpolatedSound, interpolatedSound);
//tada!
P.S. The code above is effectively a low-pass filter. I also tried sinc interpolation, but for me sinc was very expensive and not exact, although maybe I did it wrong.

Multiply matrices using dgemv and multiple threads in C

I have a problem in my code. I want to multiply two matrices using dgemv from CBLAS, but I want to share the work among the threads I have. I also used dgemv to multiply the matrices in a previous exercise where no parallelism was needed. Any idea what I should do?
The code:
for (it = 0; it < itime; it++) {
    cblas_dgemv(CblasColMajor, CblasNoTrans, n, n, 1, sigma, n, u, 1, 0.0, d, 1);
    #pragma omp parallel for private(i,j,sum) schedule(static)
    for (i = 0; i < n; i++) {
        sum = 0.0;
        uplus[i] = u[i] + dtmu - dt * u[i];
        #pragma omp simd reduction(+:sum)
        for (j = 0; j < n; j++) {
            sum += sigma[i*n+j] * u[j];
        }
        sum = sum - u[i] * m[i];
        uplus[i] += dtdiv * sum;
        if (uplus[i] > uth) {
            uplus[i] = 0.0;
            if (it >= ttransient) {
                omega1[i] += 1.0;
            }
        }
    }
    t = u;
    u = uplus;
    uplus = t;
}
I want to move the dgemv call into the parallel region and somehow share the multiplications among the threads I have.
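One common way to do this (a sketch under the assumption that sigma is an n x n column-major matrix and d, u are length-n vectors; parallel_dgemv is a hypothetical wrapper, not part of the original code) is to give every thread a contiguous block of rows and let each thread call cblas_dgemv on its own sub-matrix:
// Sketch: each thread computes its own block of rows of d = sigma * u.
// For a column-major, non-transposed matrix, the block of rows starting at
// row r0 begins at sigma + r0 and keeps the leading dimension n.
#include <cblas.h>
#include <omp.h>

void parallel_dgemv(int n, const double *sigma, const double *u, double *d)
{
    #pragma omp parallel
    {
        int nth  = omp_get_num_threads();
        int tid  = omp_get_thread_num();
        int rows = n / nth;              /* rows handled by this thread      */
        int r0   = tid * rows;           /* first row of this thread's block */
        if (tid == nth - 1)              /* last thread picks up the rest    */
            rows = n - r0;
        cblas_dgemv(CblasColMajor, CblasNoTrans, rows, n,
                    1.0, sigma + r0, n, u, 1, 0.0, d + r0, 1);
    }
}
If the BLAS library you link is itself multithreaded, its internal thread count should be set to 1 when doing this; otherwise the nested threading oversubscribes the cores.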

Modifying a matrix in Groovy with multiple threads

I have a multithreading exercise in which I should multiply two random matrices. The problem is that after execution finishes the result matrix is empty, although if I print an element right as it is inserted into the matrix, it is displayed correctly. The matrices to be multiplied are not empty.
import java.util.Random
p1 = 500
p2 = 500
threads = 4
def giveTasks(int workers, int tasks) {
int[] taskArray = new int[workers + 1]
taskArray[0] = 0
for (i = 1; i <= workers; i++) {
taskArray[i] = taskArray[i - 1] + tasks / workers + Math.max(tasks % workers - i + 1, 0)
}
return taskArray
}
class Matrix {
public int[][] table
public Matrix(int p1, int p2) {
table = new int[p1][p2]
}
public Matrix(int[][] matrix) {
table = matrix
}
}
def createMatrix(int lines, int columns) {
int[][] matrix = new int[lines][columns]
Random rn = new Random()
for (i = 0; i < lines; i++)
for (j = 0; j < columns; j++)
matrix[i][j] = rn.nextInt(100)
return matrix
}
Matrix matrix1 = new Matrix(createMatrix(p1, p2))
Matrix matrix2 = new Matrix(createMatrix(p2, p1))
Matrix matrix = new Matrix(p1, p2)
int[] taskArray = giveTasks(threads, p1)
def thread
int tn = 0
for (int i = 1; i < threads + 1; i++) {
start = taskArray[i - 1]
stop = taskArray[i]
thread = Thread.start {
for (int job = start; job < stop; job++) { //line for matrix1
int sum = 0
for (int j = 0; j < p1; j++) {
for (int k = 0; k < p1; k++)
sum += matrix1.table[job][k] + matrix2.table[k][j]
matrix.table[job][j] = sum
}
}
tn += 1
println "Thread " + tn + "finished"
}
}
thread.join()
print matrix.table
There is one major thing misunderstood in the code you have shown us: you overwrite the thread variable inside the for-loop, so after you spawn all 4 threads you wait only for the last one to finish execution.
Instead, you should store a list of all spawned threads and join them all at the end of the script. Something like:
def queue = []
int tn = 0
for (int i = 1; i < threads + 1; i++) {
start = taskArray[i - 1]
stop = taskArray[i]
def thread = Thread.start {
for (int job = start; job < stop; job++) { //line for matrix1
int sum = 0
for (int j = 0; j < p1; j++) {
for (int k = 0; k < p1; k++)
sum += matrix1.table[job][k] + matrix2.table[k][j]
matrix.table[job][j] = sum
}
}
tn += 1
println "Thread " + tn + "finished"
}
queue << thread
}
queue*.join()
matrix.table.each { println it }
You can see that at the end of the script it does:
queue*.join()
This uses Groovy's spread operator to call the join() method on every element collected in the list. Each spawned thread is added to the queue list using the left-shift operator:
queue << thread
which is equivalent to queue.add(thread).
I ran your program with p1 = 16 and p2 = 16 with those changes applied and got output like:
Thread 3finished
Thread 4finished
Thread 1finished
Thread 2finished
[1470, 2794, 4343, 5924, 7388, 9015, 10533, 12064, 13713, 15672, 17354, 18916, 20524, 22086, 23370, 24982]
[1464, 2782, 4325, 5900, 7358, 8979, 10491, 12016, 13659, 15612, 17288, 18844, 20446, 22002, 23280, 24886]
[1629, 3112, 4820, 6560, 8183, 9969, 11646, 13336, 15144, 17262, 19103, 20824, 22591, 24312, 25755, 27526]
[1466, 2786, 4331, 5908, 7368, 8991, 10505, 12032, 13677, 15632, 17310, 18868, 20472, 22030, 23310, 24918]
[1487, 2828, 4394, 5992, 7473, 9117, 10652, 12200, 13866, 15842, 17541, 19120, 20745, 22324, 23625, 25254]
[1570, 2994, 4643, 6324, 7888, 9615, 11233, 12864, 14613, 16672, 18454, 20116, 21824, 23486, 24870, 26582]
[1345, 2544, 3968, 5424, 6763, 8265, 9658, 11064, 12588, 14422, 15979, 17416, 18899, 20336, 21495, 22982]
[1622, 3098, 4799, 6532, 8148, 9927, 11597, 13280, 15081, 17192, 19026, 20740, 22500, 24214, 25650, 27414]
[1557, 2968, 4604, 6272, 7823, 9537, 11142, 12760, 14496, 16542, 18311, 19960, 21655, 23304, 24675, 26374]
[1477, 2808, 4364, 5952, 7423, 9057, 10582, 12120, 13776, 15742, 17431, 19000, 20615, 22184, 23475, 25094]
[1447, 2748, 4274, 5832, 7273, 8877, 10372, 11880, 13506, 15442, 17101, 18640, 20225, 21764, 23025, 24614]
[1473, 2800, 4352, 5936, 7403, 9033, 10554, 12088, 13740, 15702, 17387, 18952, 20563, 22128, 23415, 25030]
[1727, 3308, 5114, 6952, 8673, 10557, 12332, 14120, 16026, 18242, 20181, 22000, 23865, 25684, 27225, 29094]
[1483, 2820, 4382, 5976, 7453, 9093, 10624, 12168, 13830, 15802, 17497, 19072, 20693, 22268, 23565, 25190]
[1575, 3004, 4658, 6344, 7913, 9645, 11268, 12904, 14658, 16722, 18509, 20176, 21889, 23556, 24945, 26662]
[1474, 2802, 4355, 5940, 7408, 9039, 10561, 12096, 13749, 15712, 17398, 18964, 20576, 22142, 23430, 25046]
Hope it helps.

Errors with repeated FFTW calls

I'm having a strange issue that I can't resolve, so I made this simple example to demonstrate the problem. I have a sine wave defined on [0, 2*pi]. I take its Fourier transform using FFTW. Then I have a for loop where I repeatedly take the inverse Fourier transform; in each iteration I take the average of my solution and print the result. I expect the average to stay the same from iteration to iteration because nothing changes the solution y. However, when I pick N = 256 (or other even values of N), the average grows as if there were numerical errors. If I choose, say, N = 255 or N = 257, this does not happen and I get what I expect (avg = 0.0 in every iteration).
Code:
#include <stdio.h>
#include <stdlib.h>
#include <fftw3.h>
#include <math.h>

int main(void)
{
    int N = 256;
    double dx = 2.0 * M_PI / (double)N, dt = 1.0e-3;
    double *x, *y;
    x = (double *) malloc (sizeof (double) * N);
    y = (double *) malloc (sizeof (double) * N);

    // initial conditions
    for (int i = 0; i < N; i++) {
        x[i] = (double)i * dx;
        y[i] = sin(x[i]);
    }

    fftw_complex yhat[N/2 + 1];
    fftw_plan fftwplan, fftwplan2;

    // forward plan
    fftwplan = fftw_plan_dft_r2c_1d(N, y, yhat, FFTW_ESTIMATE);
    fftw_execute(fftwplan);

    // set N/2th mode to zero if N is even
    if (N % 2 < 1.0e-13) {
        yhat[N/2][0] = 0.0;
        yhat[N/2][1] = 0.0;
    }

    // backward plan
    fftwplan2 = fftw_plan_dft_c2r_1d(N, yhat, y, FFTW_ESTIMATE);

    for (int i = 0; i < 50; i++) {
        // yhat to y
        fftw_execute(fftwplan2);
        // rescale
        for (int j = 0; j < N; j++) {
            y[j] = y[j] / (double)N;
        }
        double avg = 0.0;
        for (int j = 0; j < N; j++) {
            avg += y[j];
        }
        printf("%.15f\n", avg/N);
    }

    fftw_destroy_plan(fftwplan);
    fftw_destroy_plan(fftwplan2);
    void fftw_cleanup(void);
    free(x);
    free(y);
    return 0;
}
Output for N = 256:
0.000000000000000
0.000000000000000
0.000000000000000
-0.000000000000000
0.000000000000000
0.000000000000022
-0.000000000000007
-0.000000000000039
0.000000000000161
-0.000000000000314
0.000000000000369
0.000000000004775
-0.000000000007390
-0.000000000079126
-0.000000000009457
-0.000000000462023
0.000000000900855
-0.000000000196451
0.000000000931323
-0.000000009895302
0.000000039348379
0.000000133179128
0.000000260770321
-0.000003233551979
0.000008285045624
-0.000016331672668
0.000067450106144
-0.000166893005371
0.001059055328369
-0.002521514892578
0.005493164062500
-0.029907226562500
0.093383789062500
-0.339111328125000
1.208251953125000
-3.937500000000000
13.654296875000000
-43.812500000000000
161.109375000000000
-479.250000000000000
1785.500000000000000
-5369.000000000000000
19376.000000000000000
-66372.000000000000000
221104.000000000000000
-753792.000000000000000
2387712.000000000000000
-8603776.000000000000000
29706240.000000000000000
-96833536.000000000000000
Any ideas?
libfftw has the odious habit of modifying its inputs: by default, a c2r transform destroys its input array. Back up yhat if you want to do repeated inverse transforms.
OTOH, it may be perverse to ask, but why are you repeating the same operation if you don't expect it to give different results? (Even though that is what happens here.)
As indicated in the comments: if you want to keep the input data unchanged, use the FFTW_PRESERVE_INPUT flag (see http://www.fftw.org/doc/Planner-Flags.html).
For example:
// backward plan
fftwplan2 = fftw_plan_dft_c2r_1d(N, yhat, y, FFTW_ESTIMATE | FFTW_PRESERVE_INPUT);
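Alternatively, following the "back up yhat" suggestion above, you can save the spectrum once after the forward transform and restore it before every inverse transform (a sketch; yhat_saved is a hypothetical name, and #include <string.h> is needed for memcpy):
// Save the spectrum once and restore it before each c2r execution,
// since the c2r plan overwrites its input array by default.
fftw_complex yhat_saved[N/2 + 1];
memcpy(yhat_saved, yhat, sizeof(fftw_complex) * (N/2 + 1));

for (int i = 0; i < 50; i++) {
    memcpy(yhat, yhat_saved, sizeof(fftw_complex) * (N/2 + 1));  /* restore */
    fftw_execute(fftwplan2);            /* yhat -> y (destroys yhat again) */
    /* ... rescale by 1/N and compute the average as before ... */
}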
