The applyInPandas method can be used to apply a function in parallel to a PySpark GroupedData object, as in the minimal example below.
import pandas as pd
from time import sleep
from pyspark.sql import SparkSession
# spark session object
spark = SparkSession.builder.getOrCreate()
# test function
def func(x):
    sleep(1)
    return x
# run test function in parallel
pdf = pd.DataFrame({'x': range(8)})
sdf = spark.createDataFrame(pdf)
sdf = sdf.groupby('x').applyInPandas(func, schema=sdf.schema)
dx = sdf.toPandas()
The minimal example has been tested on an 8-CPU single-node system (e.g., an m5.4xlarge Amazon EC2 instance) and takes approximately 1 second to run, as the one-second sleep is executed on each of the 8 CPUs in parallel. The pdf and dx objects are shown in the screenshot below.
My issue is how to run the same minimal example on a cluster, e.g., an Amazon EMR cluster. So far, after setting up a cluster, the code executes on a single core, so it takes approximately 8 seconds to run (each function call executed in series).
UPDATE
Following @Douglas M's answer, the code below parallelizes on an EMR cluster:
import pandas as pd
from datetime import datetime
from time import sleep
# test function
def func(x):
    sleep(1)
    return x
# run and time test function
sdf = spark.range(start=0, end=8, step=1, numPartitions=8)
sdf = sdf.groupby('id').applyInPandas(func, schema=sdf.schema)
now = datetime.now()
dx = sdf.toPandas()
print((datetime.now() - now).total_seconds()) # 1.09 sec
However, using repartition does not parallelize (code below).
import pandas as pd
from datetime import datetime
from time import sleep
# test function
def func(x):
    sleep(1)
    return x
# run and time test function
pdf = pd.DataFrame({'x': range(8)})
sdf = spark.createDataFrame(pdf)
sdf = sdf.groupby('x').applyInPandas(func, schema=sdf.schema)
sdf = sdf.repartition(8)
now = datetime.now()
dx = sdf.toPandas()
print((datetime.now() - now).total_seconds()) # 8.33 sec
Running the above code, the Spark progress bar first indicates 8 tasks and then switches to 1 task.
Spark's parallelism is based on the number of partitions in the dataframe it is processing. Your sdf dataframe has only one partition, because it is very small.
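You can check the partition count yourself (a quick sketch, reusing the sdf object from the question):
# number of partitions backing the DataFrame
print(sdf.rdd.getNumPartitions())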
It would be better to first create your range with SparkSession.range:
SparkSession.range(start: int, end: Optional[int] = None, step: int = 1, numPartitions: Optional[int] = None) → pyspark.sql.dataframe.DataFrame
Create a DataFrame with single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.
New in version 2.0.0.
Parameters:
    start : int
        the start value
    end : int, optional
        the end value (exclusive)
    step : int, optional
        the incremental step (default: 1)
    numPartitions : int, optional
        the number of partitions of the DataFrame
Returns: DataFrame
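A minimal usage sketch (mirroring the 8-row example from the question):
# create 8 rows already spread across 8 partitions
sdf = spark.range(start=0, end=8, step=1, numPartitions=8)
print(sdf.rdd.getNumPartitions())  # 8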
For a quick fix, add repartition:
sdf = spark.createDataFrame(pdf).repartition(8)
This will put each of the 8 elements into its own partition. The partitions can then be processed by separate worker cores.
When I profile the following data using pandas-profiling==2.8.0, it does not return min, max, and mean values.
CSV data
a,b,c
12,2.5,0
12,4.7,5
33,5,4
44,44.21,67
Python code
import json
import pandas as pd
from pandas_profiling import ProfileReport
def profile_report(data):
    dataset = data.select_dtypes(include=['int64', 'float64'])
    profile = ProfileReport(dataset, minimal=True)
    json_data = profile.to_json()
    results = json.loads(json_data)
    print(json.dumps(results, indent=4))

if __name__ == "__main__":
    df = pd.read_csv('data.csv', index_col=None)
    profile_report(df)
In some cases it works properly and returns min, max, and mean values, but when I run it on the CSV data above it does not return those values.
For a column with fewer distinct values than a given threshold (say 5), pandas-profiling assumes that your variable is categorical instead of interval.
Use the vars.num.low_categorical_threshold parameter to change this (see the docs).
Example:
profile = ProfileReport(dataset, minimal=True, vars=dict(num={"low_categorical_threshold": 0}))
In Python 3.6, I am running multiple processes in parallel, where each process pings a URL and returns a Pandas dataframe. I want to keep running the (2+) processes continually, so I have created a minimal representative example below.
My questions are:
1) My understanding is that since I have different functions, I cannot use Pool.map_async() and its variants. Is that right? The only examples of these I have seen were repeating the same function, like in this answer.
2) What is the best practice to make this setup to run perpetually? In my code below, I use a while loop, which I suspect is not suited for this purpose.
3) Is the way I am using the Process and Manager optimal? I use multiprocessing.Manager.dict() as the shared dictionary to return the results from the processes. I saw in a comment on this answer that using a Queue here would make sense; however, the Queue object has no .dict() method, so I am not sure how that would work.
I would be grateful for any improvements and suggestions with example code.
import numpy as np
import pandas as pd
import multiprocessing
import time
def worker1(name, t, seed, return_dict):
    '''worker function'''
    print(str(name) + 'is here.')
    time.sleep(t)
    np.random.seed(seed)
    df = pd.DataFrame(np.random.randint(0, 1000, 8).reshape(2, 4), columns=list('ABCD'))
    return_dict[name] = [df.columns.tolist()] + df.values.tolist()

def worker2(name, t, seed, return_dict):
    '''worker function'''
    print(str(name) + 'is here.')
    np.random.seed(seed)
    time.sleep(t)
    df = pd.DataFrame(np.random.randint(0, 1000, 12).reshape(3, 4), columns=list('ABCD'))
    return_dict[name] = [df.columns.tolist()] + df.values.tolist()

if __name__ == '__main__':
    t = 1
    while True:
        start_time = time.time()
        manager = multiprocessing.Manager()
        parallel_dict = manager.dict()
        seed = np.random.randint(0, 1000, 1)  # send seed to worker to return a diff df
        jobs = []
        p1 = multiprocessing.Process(target=worker1, args=('name1', t, seed, parallel_dict))
        p2 = multiprocessing.Process(target=worker2, args=('name2', t, seed + 1, parallel_dict))
        jobs.append(p1)
        jobs.append(p2)
        p1.start()
        p2.start()
        for proc in jobs:
            proc.join()
        parallel_end_time = time.time() - start_time
        # print(parallel_dict)
        df1 = pd.DataFrame(parallel_dict['name1'][1:], columns=parallel_dict['name1'][0])
        df2 = pd.DataFrame(parallel_dict['name2'][1:], columns=parallel_dict['name2'][0])
        merged_df = pd.concat([df1, df2], axis=0)
        print(merged_df)
Answer 1 (map on multiple functions)
You're technically right.
With map, map_async and other variations, you should use a single function.
But this constraint can be bypassed by implementing an executor, and passing the function to execute as part of the parameters:
def dispatcher(args):
    return args[0](*args[1:])
So a minimal working example:
import multiprocessing as mp

def function_1(v):
    print("hi %s" % v)
    return 1

def function_2(v):
    print("by %s" % v)
    return 2

def dispatcher(args):
    return args[0](*args[1:])

with mp.Pool(2) as p:
    tasks = [
        (function_1, "A"),
        (function_2, "B")
    ]
    r = p.map_async(dispatcher, tasks)
    r.wait()
    results = r.get()
Answer 2 (Scheduling)
I would remove the while loop from the script and schedule it as a cron job (on GNU/Linux) or a scheduled task (on Windows), so that the OS is responsible for its execution.
On Linux you can run crontab -e and add the following line to make the script run every 5 minutes.
*/5 * * * * python /path/to/script.py
Answer 3 (Shared Dictionary)
Yes, but no.
To my knowledge, using the Manager for data such as collections is the best way.
For arrays or primitive types (int, float, etc.) there are Value and Array, which are faster.
As stated in the documentation:
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array.
Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
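As an illustration, here is a minimal sketch of Value and Array for primitive data (a hypothetical example, not tied to the question's code):
import multiprocessing

def fill(shared_counter, shared_array):
    # increment the shared counter under its lock and write into the shared array
    with shared_counter.get_lock():
        shared_counter.value += 1
    for i in range(len(shared_array)):
        shared_array[i] = i * 2.0

if __name__ == '__main__':
    counter = multiprocessing.Value('i', 0)   # shared int
    arr = multiprocessing.Array('d', 4)       # shared array of 4 doubles
    p = multiprocessing.Process(target=fill, args=(counter, arr))
    p.start()
    p.join()
    print(counter.value, list(arr))           # 1 [0.0, 2.0, 4.0, 6.0]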
But since you only need to return a DataFrame, the shared dictionary is not needed.
Cleaned Code
Using all the previous ideas, the code can be rewritten as:
map version
import numpy as np
import pandas as pd
from time import sleep
import multiprocessing as mp
def worker1(t, seed):
    print('worker1 is here.')
    sleep(t)
    np.random.seed(seed)
    return pd.DataFrame(np.random.randint(0, 1000, 8).reshape(2, 4), columns=list('ABCD'))

def worker2(t, seed):
    print('worker2 is here.')
    sleep(t)
    np.random.seed(seed)
    return pd.DataFrame(np.random.randint(0, 1000, 12).reshape(3, 4), columns=list('ABCD'))

def dispatcher(args):
    return args[0](*args[1:])

def task_generator(sleep_time=1):
    seed = np.random.randint(0, 1000, 1)
    yield worker1, sleep_time, seed
    yield worker2, sleep_time, seed + 1

with mp.Pool(2) as p:
    results = p.map(dispatcher, task_generator())
    merged = pd.concat(results, axis=0)
    print(merged)
If the concatenation of the DataFrames is the bottleneck, an approach with imap might be better.
imap version
with mp.Pool(2) as p:
    merged = pd.DataFrame()
    for result in p.imap_unordered(dispatcher, task_generator()):
        merged = pd.concat([merged, result], axis=0)
    print(merged)
The main difference is that in the map case, the program first waits for all the tasks to finish and then concatenates all the DataFrames.
In the imap_unordered case, as soon as a task has ended, its DataFrame is concatenated to the current results.
This is normal Python code which runs as expected:
import pandas as pd
dataset=pd.read_csv(r'C:\Users\efthi\Desktop\machine_learning.csv')
registration = pd.read_csv(r'C:\Users\efthi\Desktop\studentVle.csv')
students = list()
result = list()
p=350299
i =749
interactions = 0
while i < 8659:
    student = dataset["id_student"][i]
    print(i)
    i += 1
    while p < 1917865:
        if student == registration['id_student'][p]:
            interactions += registration["sum_click"][p]
        p += 1
    students.insert(i, student)
    result.insert(i, interactions)
    p = 0
    interactions = 0
st = pd.DataFrame(students)#create data frame
st.to_csv(r'C:\Users\efthi\Desktop\ttest.csv', index=False)#insert data frame to csv
st = pd.DataFrame(result)#create data frame
st.to_csv(r'C:\Users\efthi\Desktop\results.csv', index=False)#insert data frame to csv
This is supposed to run on an even bigger dataset, so I think it would be more efficient to utilize the multiple cores of my PC.
How can I implement it to use all 4 cores?
For performing any function in parallel you can do something like:
import multiprocessing
import pandas as pd
def f(x):
    # Perform some function
    return y

# Load your data
data = pd.read_csv('file.csv')

# Look at docs to see why "if __name__ == '__main__'" is necessary
if __name__ == '__main__':
    # Create pool with 4 processors
    pool = multiprocessing.Pool(4)
    # Create jobs
    jobs = []
    for group in data['some_group']:
        # Create asynchronous jobs that will be submitted once a processor is ready
        data_for_job = data[data.some_group == group]
        jobs.append(pool.apply_async(f, (data_for_job, )))
    # Wait for jobs to finish and collect results
    results = [job.get() for job in jobs]
    # Combine results
    results_df = pd.concat(results)
Regardless of the function you're performing, for multiprocessing you:
Create a pool with your desired number of processors
Loop through your data in whatever way you want to chunk it
Create a job with that chunk (using pool.apply_async() <- read the docs about this if it's confusing)
Wait for and collect your results with job.get()
Combine your results
From a simple, complete example of using window aggregation on Spark 2.3.1 (HDP 3.0), I can see that Spark creates intervals that are aligned to whole-number boundaries. For example, here I specify a windowDuration of 60 seconds, and Spark starts the first interval at the nearest whole minute:
>>> from pyspark.sql import functions as F
>>> df = spark.createDataFrame([("2016-03-11 09:00:07", 1),("2016-03-11 09:00:08", 1)]).toDF("date", "val")
>>> w = df.groupBy(F.window("date", "60 seconds")).agg(F.sum("val").alias("sum"))
>>> w.select(w.window.start.cast("string").alias("start"),w.window.end.cast("string").alias("end"), "sum").collect()
[Row(start='2016-03-11 09:00:00', end='2016-03-11 09:01:00', sum=2)]
Is there a way to start the interval at the first message, i.e., in my case I'd like to have:
[Row(start='2016-03-11 09:00:07', end='2016-03-11 09:01:07', sum=2)]
You can shift the window boundaries by passing a startTime offset (the fourth argument of F.window), computed here from the seconds value of the first row:
from pyspark.sql import functions as F
from datetime import datetime
df = spark.createDataFrame([("2016-03-11 09:00:07", 1),("2016-03-11 09:00:08", 1)]).toDF("date", "val")
startSecond = datetime.strptime(df.head()[0], '%Y-%m-%d %H:%M:%S').second
w = df.groupBy(F.window("date", "60 seconds", "60 seconds", str(startSecond) + " seconds")).agg(F.sum("val").alias("sum"))
w.select(w.window.start.cast("string").alias("start"),w.window.end.cast("string").alias("end"), "sum").collect()
My co-worker and I have been setting up, configuring, and testing Dask for a week or so now, and everything is working great (can't speak highly enough about how easy, straightforward, and powerful it is), but now we are trying to leverage it for more than just testing and are running into an issue. We believe it's a fairly simple one related to syntax and an understanding gap. Any help to get it running is greatly appreciated. Any support in evolving our understanding of more optimal paths is also greatly appreciated.
We got fairly close with these two posts:
Dask: How would I parallelize my code with dask delayed?
Unpacking result of delayed function
High level flow:
Open data in pandas & clean it (we plan on moving this to a pipeline)
From there, convert the cleaned data set for regression into a dask data frame
Set the x & y variables and create all unique x combination sets
Create all unique formulas (y ~ x1 + x2 +0)
Run each individual formula set with the data through a linear lasso lars model to get the AIC for each formula for ranking
Current Issue:
Run each individual formula set (~1700 formulas) with the data (1 single data set which doesn’t vary with each run) on the dask cluster and get the results back
Optimize the calculation & return the final data
Code:
# In[]
# Imports:
import logging as log
import datetime as dat
from itertools import combinations
import numpy as np
import pandas as pd
from patsy import dmatrices
import sklearn as sk
from sklearn.linear_model import LogisticRegression, SGDClassifier, LinearRegression
import dask as dask
import dask.dataframe as dk
from dask.distributed import Client
# In[]
# logging, set the dask client, open & clean the data, pass into a dask dataframe
log.basicConfig(level=log.INFO,
                format='%(asctime)s %(message)s',
                datefmt="%m-%d %H:%M:%S")
c = Client('ip:port')
ST = dat.datetime.now()
data_pd = pd.read_csv('some.txt', sep="\t")
#fill some na/clean up the data a bit
data_pd['V9'] = data_pd.V9.fillna("Declined")
data_pd['y'] = data_pd.y.fillna(0)
data_pd['x1'] = data_pd.x1.fillna(0)
#output the clean data and re-import into dask; we could also use from_pandas to get to dask dataframes
data_pd.to_csv('clean_rr_cp.csv')
data = dk.read_csv(r'C:\path\*.csv', sep=",")
# set x & y variables - the below is truncated
y_var = "y"
x_var = ['x1',
'x2',
'x3',
'x4',......
#list of all variables
all_var = list(y_var) + x_var
#all unique combinations
x_var_combos = [combos for combos in combinations(x_var,2)]
#add single variables for testing as well
for i in x_var:
    x_var_combos.append((i, ""))
# create formulas from our y, x variables
def formula(y_var, combo):
    combo_len = len(combo)
    if combo_len == 2:
        formula = y_var + "~" + combo[0] + "+" + combo[1] + "+0"
    else:
        formula = y_var + "~" + combo[0] + "+0"
    return formula
@dask.delayed
def model_aic(dt, formula):
    k = 2
    y_df, x_df = dmatrices(formula, dt, return_type='dataframe')
    y_df = np.ravel(y_df)
    log.info('dmatrices successful')
    LL_model = sk.linear_model.LassoLarsIC(max_iter=100)
    AIC_Value = min(LL_model.fit(x_df, y_df).criterion_) + ((2*(k**2) + 2*(k)) / (len(x_df) - k - 1))
    log.info('AIC_Value: %s', AIC_Value)
    oup = [formula, AIC_Value, len(dt) - AIC_Value]
    return oup
# ----------------- here's where we're stuck ---------------------
# ----------------- we think this is correct ----------------------
# ----------------- create a list of all formula to execute -------
# In[]
out = []
for i in x_var_combos:
    var = model_aic(data, formula(y_var, i))
    out.append(var)
# ----------------- but we're stuck figuring out how to -----------
# ------------------make it compute & return the result -----------
ans = c.compute(*out)
ans2 = c.compute(out[1])
print (ans2)