parallel write to different groups with h5py

I'm trying to use parallel h5py to create an independent group for each process and fill each group with some data. What happens is that only one group gets created and filled with data. This is the program:
from mpi4py import MPI
import h5py
rank = MPI.COMM_WORLD.Get_rank()
f = h5py.File('parallel_test.hdf5', 'w', driver='mpio', comm=MPI.COMM_WORLD)
data = range(1000)
dset = f.create_dataset(str(rank), data=data)
f.close()
Any thoughts on what is going wrong here?
Thanks a lot.

OK, so as mentioned in the comments, I had to create the datasets for every process and then fill them up. The following code writes data in parallel, as many times as the size of the communicator:
import random
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
data = [random.randint(1, 100) for x in range(4)]
f = h5py.File('parallel_test.hdf5', 'w', driver='mpio', comm=comm)
# dataset creation is collective: every rank must create every dataset
dset = []
for i in range(size):
    dset.append(f.create_dataset('test{0}'.format(i), (len(data),), dtype='i'))
# writing is independent: each rank fills only its own dataset
dset[rank][:] = data
f.close()
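The same pattern applies to the original question about groups: operations that modify file metadata (creating groups, datasets, attributes) are collective in parallel HDF5, so every rank has to create every group, and only the writes into the datasets are independent. Below is a minimal sketch of that, assuming h5py was built against parallel HDF5; the file and group names are just placeholders:

from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

f = h5py.File('parallel_groups.hdf5', 'w', driver='mpio', comm=comm)
dsets = []
for i in range(size):
    # collective: identical create_group/create_dataset calls on every rank
    grp = f.create_group('rank_{0}'.format(i))
    dsets.append(grp.create_dataset('data', (1000,), dtype='i'))
# independent: each rank writes only into its own group's dataset
dsets[rank][:] = rank
f.close()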

Related

Reading Files in Parallel mpi4py

I have a series of n files that I'd like to read in parallel using mpi4py. Every file contains a column vector, and as the final result I want to obtain a matrix containing all the single vectors, X = [x1 x2 ... xn].
In the first part of the code I create the list containing all the file names and distribute parts of the list to the different cores through the scatter method.
import numpy as np
import pandas as pd
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()
folder = "data/" # Input directory
files = [] # File List
# Create File List -----------------------------------------------------------
if rank == 0:
    for i in range(1, 2000):
        filename = "file_" + str(i) + ".csv"
        files = np.append(files, filename)
    print("filelist complete!")
    # Determine the size of each sub-task
    ave, res = divmod(files.size, nprocs)
    counts = [ave + 1 if p < res else ave for p in range(nprocs)]
    # Determine starting and ending indices of each sub-task
    starts = [sum(counts[:p]) for p in range(nprocs)]
    ends = [sum(counts[:p+1]) for p in range(nprocs)]
    # Convert data into list of arrays
    fileList = [files[starts[p]:ends[p]] for p in range(nprocs)]
else:
    fileList = None
fileList = comm.scatter(fileList, root=0)
Here I create a matrix X in which to store the vectors.
# Variables Initialization ---------------------------------------------------
# Creation Support Vector
vector = pd.read_csv(folder + fileList[0])
vector = vector.values
vectorLength = len(vector)
# Matrix
X = np.ones((vectorLength, len(fileList)))
# ----------------------------------------------------------------------------
Here, I read the different files and append each column vector to the matrix X. With the gather method I collect the X matrices computed by the single cores into one object on rank 0. The X resulting from the gather is a list of 2D numpy arrays, so as a final step I reorganize that list back into a single matrix.
# Reading Files -----------------------------------------------------------
for i in range(len(fileList)):
    data = pd.read_csv(folder + fileList[i])
    data = np.array(data.values)
    X[:, i] = data[:, 0]
X = comm.gather(X, root=0)
if rank == 0:
    X_tot = np.empty((vectorLength, 1))
    for i in range(nprocs):
        X_proc = np.array(X[i])
        X_tot = np.append(X_tot, X_proc, axis=1)
    X_tot = X_tot[:, 1:]
    X = X_tot
    del X_tot
    print("printing X", X)
The code works fine: I tested it on a small dataset and it did what it is meant to do. However, when I tried to run it on a large dataset I got the following error:
X = comm.gather(X[:,1:], root = 0)
File "mpi4py/MPI/Comm.pyx", line 1578, in mpi4py.MPI.Comm.gather
File "mpi4py/MPI/msgpickle.pxi", line 773, in mpi4py.MPI.PyMPI_gather
File "mpi4py/MPI/msgpickle.pxi", line 778, in mpi4py.MPI.PyMPI_gather
File "mpi4py/MPI/msgpickle.pxi", line 191, in mpi4py.MPI.pickle_allocv
File "mpi4py/MPI/msgpickle.pxi", line 182, in mpi4py.MPI.pickle_alloc
SystemError: Negative size passed to PyBytes_FromStringAndSize
It seems a very generic error; however, I could process the same data in serial mode without problems, or in parallel as long as I didn't use all n files. I also noticed that only the rank 0 core seems to work, while the others seem to do nothing.
This is my first project using mpi4py, so I'm sorry if the code is not perfect or if I have made any conceptual mistake.
This error typically occurs when the data passed between MPI processes exceeds a certain size (I think 2 GB). It's supposed to be fixed in future MPI versions, but for now you'll probably have to resort to a workaround, such as storing your data on disk and having each process read it separately...
See for example here: https://github.com/mpi4py/mpi4py/issues/23
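For instance, a minimal sketch of that workaround (my own, with hypothetical file names), which sidesteps the pickle-based gather by having every rank dump its partial matrix to disk and letting rank 0 assemble the pieces:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()

# X is the per-rank (vectorLength, len(fileList)) block built in the loop above
np.save("X_part_{0}.npy".format(rank), X)
comm.Barrier()  # make sure every partial file is on disk before rank 0 reads

if rank == 0:
    parts = [np.load("X_part_{0}.npy".format(p)) for p in range(nprocs)]
    X_tot = np.concatenate(parts, axis=1)
    print("assembled matrix shape:", X_tot.shape)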

How to scatter/send all possible column pairs to the child processes and find coherence between the columns using python mpi4py? Parallel computation

I have a big matrix/2D array for which I need to find the coherence of every possible column pair by parallel computation in Python (e.g. mpi4py). The coherence [a function] is computed in the various child processes, and each child process should send its coherence value to the parent process, which gathers the values as a list. To do this, I've created a small matrix and a list of all possible column pairs as follows:
import numpy as np
from scipy import signal
from itertools import combinations
from mpi4py import MPI
comm = MPI.COMM_WORLD
nproc = comm.Get_size()
rank = comm.Get_rank()
data=np.arange(20).reshape(5, 4)
#List of all possible column pairs
data_col = list(combinations(np.transpose(data), 2)) #list
# Function creation
def myFunc(X, Y):
    ..................
    ..................
    return Real_coh

if rank == 0:
    Data = comm.scatter(data_col, root=0)  # col_pair
Can anyone suggest how to proceed further? You are welcome to ask any questions or request clarifications. Thanks
Check out the following script [with comm.Barrier for synchronized communication]. In the script, I write and read the file in chunks of an h5py dataset, which is memory efficient.
import numpy as np
from scipy import signal
from mpi4py import MPI
import h5py as t
chunk_len = 5000 # No. of rows of a matrix
num_c = 34 # No. of column of the matrix
# Actual Dataset
data_mat = np.random.random((10000, num_c))
shape = (chunk_len, data_mat.shape[1])
chunk_size = (chunk_len, 1)
no_of_chunks = data_mat.shape[1]
with t.File('file_name.h5', 'w') as hf:
    hf.create_dataset("chunked_arr", data=data_mat, chunks=chunk_size, compression='lzf')
del data_mat

def myFunc(dset_X, dset_Y):
    ..............
    ............
    return Real_coh

res = np.zeros((num_c, num_c))
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

for i in range(num_c):
    with t.File('file_name.h5', 'r', libver='latest') as hf:
        dset_X = hf['chunked_arr'][:, i]  # chunked data reading
    if i % size == rank:  # each rank handles the columns assigned to it
        for j in range(num_c):
            with t.File('file_name.h5', 'r', libver='latest') as hf:
                dset_Y = hf['chunked_arr'][:, j]  # chunked data reading
            res[i][j] = myFunc(dset_X, dset_Y)
comm.Barrier()
print('Shape of final result :', res.shape)
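Note that each rank only fills the rows of res where i % size == rank, so the per-rank matrices are partial. One possible way to combine them (my own addition, not part of the script above) is an element-wise MPI reduction onto rank 0:

# sum the partial coherence matrices from all ranks into one full matrix on rank 0
total = np.zeros_like(res)
comm.Reduce(res, total, op=MPI.SUM, root=0)
if rank == 0:
    print('Combined coherence matrix shape:', total.shape)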

Improvement of a Python script | Performance

I wrote some code, but it's very slow. The goal is to look for matches; they don't have to be one-to-one matches.
I have a data frame with about 3,600,000 entries --> "SingleDff"
I have a data frame with about 110,000 entries --> "dfnumbers"
The code tries to find out which of these 110,000 entries can be found among the 3,600,000.
I added a counter to see how "fast" it is. After 24 h I had only got through 11,000 entries, 10% of the total.
I'm now looking for ways and/or ideas to improve the performance of the code.
The Code:
import os
import glob
import numpy as np
import pandas as pd
#Preparation
pathfiles = 'C:\\Python\\Data\\Input\\'
df_Files = glob.glob(pathfiles + "*.csv")
df_Files = [pd.read_csv(f, encoding='utf-8', sep=';', low_memory=False) for f in df_Files]
SingleDff = pd.concat(df_Files, ignore_index=True, sort=True)
dfnumbers = pd.read_excel('C:\\Python\\Data\\Input\\UniqueNumbers.xlsx')
#Output
outputDf = pd.DataFrame()
SingleDff['isRelevant'] = np.nan
count = 0
max = len(dfnumbers['Korrigierter Wert'])
arrayVal = dfnumbers['Korrigierter Wert']
for txt in arrayVal:
    outputDf = outputDf.append(SingleDff[SingleDff['name'].str.contains(txt)], ignore_index=True)
    outputDf['isRelevant'] = np.where(outputDf['isRelevant'].isnull(), txt, outputDf['isRelevant'])
    count += 1
outputDf.to_csv('output_match.csv')
Edit:
Example of Data
In the 110,000-entry data frame I have entries like this:
ABCD-12345-1245-T1
ACDB-98765-001 AHHX800.0-3
In the huge data frame I have entries like:
AHSG200-B0097小样图.dwg
MUDI-070097-0-05-00.dwg
ABCD-12345-1245.xlsx
ABCD-12345-1245.pdf
ABCD-12345.xlsx
Now I try to find matches: for which numbers can we find documents?
Thank you for your inputs
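One common speed-up for this kind of lookup (a sketch of my own, not from this thread, reusing the column names from the snippet above) is to join the numbers into a single alternation regex and run the string matching once over the whole column instead of looping per pattern:

import re
import pandas as pd

# build one regex that matches any of the 110,000 numbers
pattern = "(" + "|".join(re.escape(str(v)) for v in dfnumbers['Korrigierter Wert']) + ")"

# one vectorized pass over the big frame: which number (if any) appears in each name
matched = SingleDff['name'].str.extract(pattern, expand=False)

outputDf = SingleDff[matched.notna()].copy()
outputDf['isRelevant'] = matched[matched.notna()]
outputDf.to_csv('output_match.csv')

A regex with that many alternatives gets large, so depending on the data it may be worth batching the patterns into groups of a few thousand, but this still avoids appending to outputDf 110,000 times.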

how to make the following for loop use multiple cores in Python?

This is normal Python code which runs fine:
import pandas as pd
dataset=pd.read_csv(r'C:\Users\efthi\Desktop\machine_learning.csv')
registration = pd.read_csv(r'C:\Users\efthi\Desktop\studentVle.csv')
students = list()
result = list()
p=350299
i =749
interactions = 0
while i < 8659:
    student = dataset["id_student"][i]
    print(i)
    i += 1
    while p < 1917865:
        if student == registration['id_student'][p]:
            interactions += registration["sum_click"][p]
        p += 1
    students.insert(i, student)
    result.insert(i, interactions)
    p = 0
    interactions = 0

st = pd.DataFrame(students)  # create data frame
st.to_csv(r'C:\Users\efthi\Desktop\ttest.csv', index=False)  # write data frame to csv
st = pd.DataFrame(result)  # create data frame
st.to_csv(r'C:\Users\efthi\Desktop\results.csv', index=False)  # write data frame to csv
This is supposed to run on an even bigger dataset, so I think it would be more efficient to utilize the multiple cores of my PC.
How can I make it use all 4 cores?
For performing any function in parallel you can do something like:
import multiprocessing
import pandas as pd
def f(x):
    # Perform some function
    return y

# Load your data
data = pd.read_csv('file.csv')

# Look at docs to see why "if __name__ == '__main__'" is necessary
if __name__ == '__main__':
    # Create pool with 4 processors
    pool = multiprocessing.Pool(4)
    # Create jobs
    jobs = []
    for group in data['some_group'].unique():  # one job per group, not per row
        # Create asynchronous jobs that will be submitted once a processor is ready
        data_for_job = data[data.some_group == group]
        jobs.append(pool.apply_async(f, (data_for_job, )))
    # Collect results (each get() blocks until that job is done)
    results = [job.get() for job in jobs]
    # Combine results
    results_df = pd.concat(results)
Regardless of the function you're performing, for multiprocessing you:
Create a pool with your desired number of processors
Loop through your data in whatever way you want to chunk it
Create a job for each chunk with pool.apply_async() (read the docs about this if it's confusing)
Collect the results with job.get()
Combine your results
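Applied to the loop in the question, that pattern might look roughly like the sketch below (my own adaptation, not part of the original answer; the paths, column names, and index bounds come from the question's snippet):

import multiprocessing
import pandas as pd

def total_clicks(student_id, registration):
    # sum of "sum_click" over every row belonging to this student
    return student_id, registration.loc[registration['id_student'] == student_id, 'sum_click'].sum()

if __name__ == '__main__':
    dataset = pd.read_csv(r'C:\Users\efthi\Desktop\machine_learning.csv')
    registration = pd.read_csv(r'C:\Users\efthi\Desktop\studentVle.csv')
    students = dataset['id_student'][749:8659]

    with multiprocessing.Pool(4) as pool:
        jobs = [pool.apply_async(total_clicks, (s, registration)) for s in students]
        results = [job.get() for job in jobs]

    pd.DataFrame(results, columns=['id_student', 'interactions']).to_csv(
        r'C:\Users\efthi\Desktop\results.csv', index=False)

Note that each job ships a copy of registration to a worker process, so a pandas groupby or merge on id_student would likely beat multiprocessing here; the sketch just keeps the structure of the original loop.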

Submit looping calculation to dask and get back the result

My co-worker and I have been setting up, configuring, and testing Dask for a week or so now, and everything is working great (can't speak highly enough about how easy, straightforward, and powerful it is). Now we are trying to leverage it for more than just testing and are running into an issue. We believe it's a fairly simple one related to syntax and a gap in our understanding. Any help getting it running is greatly appreciated, as is any support in evolving our understanding of more optimal approaches.
We got fairly close with these two posts:
Dask: How would I parallelize my code with dask delayed?
Unpacking result of delayed function
High level flow:
Open data in pandas & clean it (we plan on moving this to a pipeline)
From there, convert the cleaned data set for regression into a dask data frame
Set the x & y variables and create all unique x combination sets
Create all unique formulas (y ~ x1 + x2 +0)
Run each individual formula set with the data through a linear lasso lars model to get the AIC for each formula for ranking
Current Issue:
Run each individual formula set (~1700 formulas) with the data (1 single data set which doesn’t vary with each run) on the dask cluster and get the results back
Optimize the calculation & return the final data
Code:
# In[]
# Imports:
import logging as log
import datetime as dat
from itertools import combinations
import numpy as np
import pandas as pd
from patsy import dmatrices
import sklearn as sk
from sklearn.linear_model import LogisticRegression, SGDClassifier, LinearRegression
import dask as dask
import dask.dataframe as dk
from dask.distributed import Client
# In[]
# logging, set the dask client, open & clean the data, pass into a dask dataframe
log.basicConfig(level=log.INFO,
                format='%(asctime)s %(message)s',
                datefmt="%m-%d %H:%M:%S")

c = Client('ip:port')
ST = dat.datetime.now()

data_pd = pd.read_csv('some.txt', sep="\t")

# fill some na/clean up the data a bit
data_pd['V9'] = data_pd.V9.fillna("Declined")
data_pd['y'] = data_pd.y.fillna(0)
data_pd['x1'] = data_pd.x1.fillna(0)

# output the clean data and re-import into dask; we could also use from_pandas to get to dask dataframes
data_pd.to_csv('clean_rr_cp.csv')
data = dk.read_csv(r'C:\path\*.csv', sep=",")

# set x & y variables - the below is truncated
y_var = "y"
x_var = ['x1',
         'x2',
         'x3',
         'x4', ......

# list of all variables
all_var = list(y_var) + x_var

# all unique combinations
x_var_combos = [combos for combos in combinations(x_var, 2)]

# add single variables for testing as well
for i in x_var:
    x_var_combos.append((i, ""))

# create formulas from our y, x variables
def formula(y_var, combo):
    combo_len = len(combo)
    if combo_len == 2:
        formula = y_var + "~" + combo[0] + "+" + combo[1] + "+0"
    else:
        formula = y_var + "~" + combo[0] + "+0"
    return formula

#dask.delayed
def model_aic(dt, formula):
    k = 2
    y_df, x_df = dmatrices(formula, dt, return_type='dataframe')
    y_df = np.ravel(y_df)
    log.info('dmatrices successful')
    LL_model = sk.linear_model.LassoLarsIC(max_iter=100)
    AIC_Value = min(LL_model.fit(x_df, y_df).criterion_) + ((2*(k**2) + 2*k) / (len(x_df) - k - 1))
    log.info('AIC_Value: %s', AIC_Value)
    oup = [formula, AIC_Value, len(dt) - AIC_Value]
    return oup

# ----------------- here's where we're stuck ---------------------
# ----------------- we think this is correct ----------------------
# ----------------- create a list of all formula to execute -------
# In[]
out = []
for i in x_var_combos:
    var = model_aic(data, formula(y_var, i))
    out.append(var)

# ----------------- but we're stuck figuring out how to -----------
# ------------------make it compute & return the result -----------
ans = c.compute(*out)
ans2 = c.compute(out[1])
print(ans2)
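One way to get the compute-and-return piece working (a sketch based on the usual dask.delayed plus distributed-client pattern, not this thread's accepted answer, and assuming model_aic itself runs on whatever dataframe you hand it) is to wrap model_aic with dask.delayed, build the list of lazy calls, and then compute and gather them through the client:

import dask

lazy_model_aic = dask.delayed(model_aic)   # equivalently, decorate model_aic with @dask.delayed

out = [lazy_model_aic(data, formula(y_var, combo)) for combo in x_var_combos]

futures = c.compute(out)       # submit the whole list of tasks to the cluster
results = c.gather(futures)    # list of [formula, AIC_Value, len(dt) - AIC_Value] triples

ranked = sorted(results, key=lambda r: r[1])   # rank formulas by AIC
print(ranked[:5])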
