Generating a list of arrays using multiprocessing in Python

I am having difficulty implementing parallelisation for generating a list of arrays. Each array is generated independently and then appended to a list. Somehow multiprocessing.apply_async() outputs an empty list when I feed it complicated arguments.
More specifically, just to give some context, I am attempting to implement a machine learning algorithm using parallelisation. The idea is the following: I have a 'system' and an 'agent' which performs actions on the system. To teach the agent (in this case a neural net) how to behave optimally (with respect to a certain reward scheme that I have omitted here), the agent needs to generate trajectories of the system by applying actions to it. From the reward obtained upon performing the actions, the agent then learns what to do and what not to do. Note, importantly, that the possible actions in the code are referred to by integers:
possible_actions = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
So here I am attempting to generate many such trajectories using multiprocessing (sorry the code is not runnable here as it requires many other files, but I'm hoping somebody can spot the issue):
from quantum_simulator_EC import system
from reinforce_keras_EC import Agent
import multiprocessing as mp

s = system(1200, N=3)
s.set_initial_state([0,0,1])

agent = Agent(alpha=0.0003, gamma=0.95, n_actions=len(s.actions))

def get_result(result):
    global action_batch
    action_batch.append(result)

def generate_trajectory(s, agent):
    sequence_of_actions = []
    for k in range(5):
        net_input = s.generate_net_input_FULL(6)
        action = agent.choose_action(net_input)
        sequence_of_actions.append(action)
    return sequence_of_actions

action_batch = []

pool = mp.Pool(2)
for i in range(0, batch_size):
    pool.apply_async(generate_trajectory, args=(s, agent), callback=get_result)
pool.close()
pool.join()

print(action_batch)
The problem is that the code returns an empty list []. Can somebody explain to me what the issue is? Are there restrictions on the kind of arguments that I can pass to apply_async? In this example I am passing my system 's' and my 'agent', both complicated objects. I mention this because when I test the code with simple arguments like integers or matrices instead of agent and system, it works fine. If there is no obvious reason why it's not working, some tips for debugging the code would also be helpful.
Note that there is no problem if I do not use multiprocessing and replace the last part with:
action_batch = []
for i in range(0, batch_size):
    get_result(generate_trajectory(s, agent))
print(action_batch)
In this case the output is as expected, a list of sequences of 5 actions:
[[4, 2, 1, 1, 7], [8, 2, 2, 12, 1], [8, 1, 9, 11, 9], [7, 10, 6, 1, 0]]

The results can be appended directly to a list in the main process; there is no need for a callback function. You can then close and join the pool, and finally retrieve all the results with get.
See the following two examples using apply_async and starmap_async (see this post for the difference).
Solution apply
import multiprocessing as mp
import time

def func(s, agent):
    print(f"Working on task {agent}")
    time.sleep(0.1)  # some task
    return (s, s, s)

if __name__ == '__main__':
    agent = "My awesome agent"
    with mp.Pool(2) as pool:
        results = []
        for s in range(5):
            results.append(pool.apply_async(func, args=(s, agent)))
        pool.close()
        pool.join()
        print([result.get() for result in results])
Solution starmap
import multiprocessing as mp
import time

def func(s, agent):
    print(f"Working on task {agent}")
    time.sleep(0.1)  # some task
    return (s, s, s)

if __name__ == '__main__':
    agent = "My awesome agent"
    with mp.Pool(2) as pool:
        result = pool.starmap_async(func, [(s, agent) for s in range(5)])
        pool.close()
        pool.join()
        print(result.get())
Output
Working on task My awesome agent
Working on task My awesome agent
Working on task My awesome agent
Working on task My awesome agent
Working on task My awesome agent
[(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
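As a debugging tip for the original code (my own addition, not part of the answer above): apply_async with only a callback silently drops any exception raised while pickling the arguments or running the task, which is a likely reason for the empty list when 's' and 'agent' are complicated objects. Keeping the AsyncResult and calling get(), or passing error_callback, makes the failure visible. A minimal sketch, with a hypothetical work function standing in for generate_trajectory:
import multiprocessing as mp

def work(x):
    # stand-in for generate_trajectory; raises to simulate a failing task
    raise ValueError(f"task {x} failed")

def on_error(exc):
    # called in the main process with the exception raised in the worker
    print("worker raised:", exc)

if __name__ == '__main__':
    with mp.Pool(2) as pool:
        res = pool.apply_async(work, args=(1,), error_callback=on_error)
        pool.close()
        pool.join()
        # res.get() would re-raise the worker's exception here instead of failing silently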

Related

How to use Logging with multiprocessing in Python3

I am trying to use Python's built-in logging with multiprocessing.
Goal: have errors logged to a file called "error.log".
Issue: the errors are printed to the console instead of the log file. See the code below.
import concurrent.futures
from itertools import repeat
import logging

def data_logging():
    error_logger = logging.getLogger("error.log")
    error_logger.setLevel(logging.ERROR)
    formatter = logging.Formatter('%(asctime)-12s %(levelname)-8s %(message)s')
    file_handler = logging.FileHandler('error.log')
    file_handler.setLevel(logging.ERROR)
    file_handler.setFormatter(formatter)
    error_logger.addHandler(file_handler)
    return error_logger

def check_number(error_logger, key):
    if key == 1:
        print("yes")
    else:
        error_logger.error(f"{key} is not = 1")

def main():
    key_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 4, 5, 4, 3, 4, 5, 4, 3, 4, 5, 4, 3, 4, 3]
    error_logger = data_logging()
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
        executor.map(check_number, repeat(error_logger), key_list)

if __name__ == '__main__':
    main()
The function check_number checks whether each number in key_list is 1 or not. If key == 1 it prints yes to the console; if not, I would like the program to add "{key} is not = 1" to the log file. Instead, with the code above, it prints to the console. Please help if you can. This is a minimal example of my program, so don't change the logic.
To be able to pass a logger instance to the child processes, you must be using Python 3.7+. Here is a little about how things work.
The basics
Only serializable objects can be passed to a child process; in other words, picklable ones. This includes all primitive types, such as int, float and str. Why? Because Python knows how to reconstruct (or unpickle) them back into objects in the child process.
Any other complex class instance is unpicklable when the information about the class needed to reconstruct its instances from serialized bytes is missing.
So if we provide the class information, our instance can be unpickled, right?
To a certain degree, yes. By calling ClassName(*parameters) it can certainly reconstruct the instance from scratch. But what if you modified your instance before it was pickled, for example by adding attributes that are not set in the __init__ method, such as error_logger.addHandler(file_handler)? The pickle module is not smart enough to know about everything you added to your instance afterwards.
The why
Then how can Python 3.7+ pickle a Logger instance? It doesn't do much: it just saves the logger's name, which is a plain str. To unpickle, it simply calls getLogger(name) to reconstruct the instance. So now you can see your first complication: the logger that the child process reconstructs is a default logger, without any handler attached to it and with a default effective level of WARNING.
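To see this effect directly, here is a small sketch of my own (assuming Python 3.7+ and the spawn start method, which makes the behaviour visible on every platform): a logger configured in the parent arrives in the child with no handlers and the default WARNING effective level.
import logging
import multiprocessing as mp

def show_logger(lg):
    # the child reconstructs the logger via getLogger(name): bare and unconfigured
    print("child handlers:", lg.handlers)                    # expected: []
    print("child effective level:", lg.getEffectiveLevel())  # expected: 30 (WARNING)

if __name__ == '__main__':
    lg = logging.getLogger("error.log")
    lg.setLevel(logging.ERROR)
    lg.addHandler(logging.FileHandler("error.log"))
    print("parent handlers:", lg.handlers)                   # the FileHandler is attached here

    ctx = mp.get_context("spawn")
    p = ctx.Process(target=show_logger, args=(lg,))
    p.start()
    p.join()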
The how
Long story short: use logger-tt. It supports multiprocessing out of the box.
import concurrent.futures
from logger_tt import setup_logging, logger

setup_logging(use_multiprocessing=True)

def check_number(key):
    if key == 1:
        print("yes")
    else:
        logger.error(f"{key} is not = 1")

def main():
    key_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 4, 5, 4, 3, 4, 5, 4, 3, 4, 5, 4, 3, 4, 3]
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
        executor.map(check_number, key_list)

if __name__ == '__main__':
    main()
If you want to start from the beginning, there are some more problems that you need to solve:
interprocess communication: multiprocessing.Queue or socket
logging using QueueHandler and QueueListener
offloading the logging to a different thread or child process
These are needed to avoid duplicated log entries, a partially missing log, or no log at all.
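For reference (my own sketch, not part of the original answer), the standard-library version of that approach can look roughly like this: each worker process attaches a QueueHandler, and a QueueListener in the main process writes the records to the file. A Manager queue is used because it pickles cleanly into pool workers; names such as worker_init are mine.
import concurrent.futures
import logging
import logging.handlers
import multiprocessing as mp

def worker_init(log_queue):
    # each worker process ships its records to the queue instead of touching the file
    root = logging.getLogger()
    root.setLevel(logging.ERROR)
    root.addHandler(logging.handlers.QueueHandler(log_queue))

def check_number(key):
    if key == 1:
        print("yes")
    else:
        logging.getLogger().error(f"{key} is not = 1")

if __name__ == '__main__':
    manager = mp.Manager()
    log_queue = manager.Queue()
    file_handler = logging.FileHandler('error.log')
    file_handler.setFormatter(logging.Formatter('%(asctime)-12s %(levelname)-8s %(message)s'))
    # the listener thread in the main process is the only writer of error.log
    listener = logging.handlers.QueueListener(log_queue, file_handler)
    listener.start()
    with concurrent.futures.ProcessPoolExecutor(
            max_workers=2, initializer=worker_init, initargs=(log_queue,)) as executor:
        list(executor.map(check_number, [1, 2, 3, 4, 5]))
    listener.stop()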

Python Multiprocessing Scheduling

In Python 3.6, I am running multiple processes in parallel, where each process pings a URL and returns a pandas DataFrame. I want to keep the (2+) processes running continually, and I have created a minimal representative example below.
My questions are:
1) My understanding is that since I have different functions, I cannot use Pool.map_async() and its variants. Is that right? The only examples of these I have seen repeat the same function, as in this answer.
2) What is the best practice for making this setup run perpetually? In my code below I use a while loop, which I suspect is not suited for this purpose.
3) Is the way I am using Process and Manager optimal? I use multiprocessing.Manager.dict() as the shared dictionary to return the results from the processes. I saw in a comment on this answer that using a Queue here would make sense, but the Queue object has no .dict() method, so I am not sure how that would work.
I would be grateful for any improvements and suggestions with example code.
import numpy as np
import pandas as pd
import multiprocessing
import time

def worker1(name, t, seed, return_dict):
    '''worker function'''
    print(str(name) + 'is here.')
    time.sleep(t)
    np.random.seed(seed)
    df = pd.DataFrame(np.random.randint(0, 1000, 8).reshape(2, 4), columns=list('ABCD'))
    return_dict[name] = [df.columns.tolist()] + df.values.tolist()

def worker2(name, t, seed, return_dict):
    '''worker function'''
    print(str(name) + 'is here.')
    np.random.seed(seed)
    time.sleep(t)
    df = pd.DataFrame(np.random.randint(0, 1000, 12).reshape(3, 4), columns=list('ABCD'))
    return_dict[name] = [df.columns.tolist()] + df.values.tolist()

if __name__ == '__main__':
    t = 1
    while True:
        start_time = time.time()
        manager = multiprocessing.Manager()
        parallel_dict = manager.dict()
        seed = np.random.randint(0, 1000, 1)  # send seed to worker to return a diff df
        jobs = []
        p1 = multiprocessing.Process(target=worker1, args=('name1', t, seed, parallel_dict))
        p2 = multiprocessing.Process(target=worker2, args=('name2', t, seed+1, parallel_dict))
        jobs.append(p1)
        jobs.append(p2)
        p1.start()
        p2.start()
        for proc in jobs:
            proc.join()
        parallel_end_time = time.time() - start_time
        #print(parallel_dict)
        df1 = pd.DataFrame(parallel_dict['name1'][1:], columns=parallel_dict['name1'][0])
        df2 = pd.DataFrame(parallel_dict['name2'][1:], columns=parallel_dict['name2'][0])
        merged_df = pd.concat([df1, df2], axis=0)
        print(merged_df)
Answer 1 (map on multiple functions)
You're technically right.
With map, map_async and other variations, you should use a single function.
But this constraint can be bypassed by implementing a small dispatcher and passing the function to execute as part of the parameters:
def dispatcher(args):
    return args[0](*args[1:])
So a minimum working example:
import multiprocessing as mp

def function_1(v):
    print("hi %s" % v)
    return 1

def function_2(v):
    print("by %s" % v)
    return 2

def dispatcher(args):
    return args[0](*args[1:])

with mp.Pool(2) as p:
    tasks = [
        (function_1, "A"),
        (function_2, "B")
    ]
    r = p.map_async(dispatcher, tasks)
    r.wait()
    results = r.get()
Answer 2 (Scheduling)
I would remove the while loop from the script and schedule it as a cron job on GNU/Linux (or with the equivalent task scheduler on Windows), so that the OS is responsible for its execution.
On Linux you can run crontab -e and add the following line to make the script run every 5 minutes.
*/5 * * * * python /path/to/script.py
Answer 3 (Shared Dictionary)
Yes, but no.
To my knowledge, using a Manager is the best way for data such as collections.
For arrays and primitive types (int, float, etc.) there are Value and Array, which are faster.
As the documentation states:
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array.
Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
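As a quick illustration of the Value/Array alternative mentioned above (my own sketch, not from the original answer), both are shared C-typed objects that child processes can read and write directly:
import multiprocessing as mp

def fill(counter, samples):
    with counter.get_lock():        # Value comes with its own lock
        counter.value += 1
    for i in range(len(samples)):   # Array is a fixed-size shared buffer of a C type
        samples[i] = i * i

if __name__ == '__main__':
    counter = mp.Value('i', 0)      # shared C int
    samples = mp.Array('d', 4)      # shared array of 4 C doubles
    p = mp.Process(target=fill, args=(counter, samples))
    p.start()
    p.join()
    print(counter.value, list(samples))   # 1 [0.0, 1.0, 4.0, 9.0]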
But here you only need to return a DataFrame, so a shared dictionary is not needed.
Cleaned Code
Using all the previous ideas the code can be rewritten as:
map version
import numpy as np
import pandas as pd
from time import sleep
import multiprocessing as mp

def worker1(t, seed):
    print('worker1 is here.')
    sleep(t)
    np.random.seed(seed)
    return pd.DataFrame(np.random.randint(0, 1000, 8).reshape(2, 4), columns=list('ABCD'))

def worker2(t, seed):
    print('worker2 is here.')
    sleep(t)
    np.random.seed(seed)
    return pd.DataFrame(np.random.randint(0, 1000, 12).reshape(3, 4), columns=list('ABCD'))

def dispatcher(args):
    return args[0](*args[1:])

def task_generator(sleep_time=1):
    seed = np.random.randint(0, 1000, 1)
    yield worker1, sleep_time, seed
    yield worker2, sleep_time, seed + 1

with mp.Pool(2) as p:
    results = p.map(dispatcher, task_generator())

merged = pd.concat(results, axis=0)
print(merged)
If the concatenation of the DataFrames is the bottleneck, an approach with imap might be preferable.
imap version
with mp.Pool(2) as p:
    merged = pd.DataFrame()
    for result in p.imap_unordered(dispatcher, task_generator()):
        merged = pd.concat([merged, result], axis=0)
    print(merged)
The main difference is that in the map case the program first waits for all the tasks to finish and then concatenates all the DataFrames, while in the imap_unordered case each DataFrame is concatenated to the current result as soon as its task has ended.

How to share a variable among threads in joblib using external module

I am trying to modify the sklearn source code. In particular, I am modifying the GridSearch source code so that the separate processes/threads that evaluate the different model configurations share a variable among themselves. I need each thread/process to read/update that variable at run time in order to modify its execution according to what the other threads obtained. More specifically, the parameter that I would like to share is best, in the snippet below:
out = parallel(delayed(_fit_and_score)(clone(base_estimator), X, y, best, self.early,
                                       train=train, test=test, parameters=parameters,
                                       **fit_and_score_kwargs)
               for parameters, (train, test) in product(candidate_params, cv.split(X, y, groups)))
Note that the _fit_and_score function is in a separate module.
sklearn uses joblib for parallelization, but I am not able to understand how I can effectively do this from an external module. The joblib docs provide this code:
>>> shared_set = set()
>>> def collect(x):
... shared_set.add(x)
...
>>> Parallel(n_jobs=2, require='sharedmem')(
... delayed(collect)(i) for i in range(5))
[None, None, None, None, None]
>>> sorted(shared_set)
[0, 1, 2, 3, 4]
but I am not able to understand how to make it run in my context. You can find the source code here:
gridsearch: https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/model_selection/_search.py#L704
fit_and_score: https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/model_selection/_validation.py#L406
You can do it with Python's Manager (https://docs.python.org/3/library/multiprocessing.html#multiprocessing.sharedctypes.multiprocessing.Manager). Simple example code:
from joblib import Parallel, delayed
from multiprocessing import Manager

manager = Manager()
q = manager.Namespace()
q.flag = False

def test(i, q):
    # update the shared var in process 0
    if i == 0:
        q.flag = True
    # do nothing for a few seconds
    for n in range(100000000):
        if q.flag == True:
            return f'process {i} was updated'
    return f'process {i} was not updated'

out = Parallel(n_jobs=4)(delayed(test)(i, q) for i in range(4))
out:
['process 0 was updated',
'process 1 was updated',
'process 2 was updated',
'process 3 was updated']

How can I make my program to use multiple cores of my system in python?

I want to run my program on all the cores that I have. Here is the code I used in my program (it is a part of my full program; I have somehow managed to write out the working flow).
def ssmake(data):
    sslist = []
    for cols in data.columns:
        sslist.append(cols)
    return sslist

def scorecal(slisted):
    subspaceScoresList = []
    if __name__ == '__main__':
        pool = mp.Pool(4)
        feature, FinalsubSpaceScore = pool.map(performDBScan, ssList)
        subspaceScoresList.append([feature, FinalsubSpaceScore])
        #for feature in ssList:
        #    FinalsubSpaceScore = performDBScan(feature)
        #    subspaceScoresList.append([feature, FinalsubSpaceScore])
    return subspaceScoresList

def performDBScan(subspace):
    minpoi = 2
    Epsj = 2
    final_data = df[subspace]
    db = DBSCAN(eps=Epsj, min_samples=minpoi, metric='euclidean').fit(final_data)
    labels = db.labels_
    FScore = calculateSScore(labels)
    return subspace, FScore

def calculateSScore(cluresult):
    score = random.randint(1, 21) * 5
    return score

def StartingFunction(prvscore, curscore, fe_select, df):
    while prvscore <= curscore:
        featurelist = ssmake(df)
        scorelist = scorecal(featurelist)

a = {'a': [1, 2, 3, 1, 2, 3], 'b': [5, 6, 7, 4, 6, 5], 'c': ['dog', 'cat', 'tree', 'slow', 'fast', 'hurry']}
df2 = pd.DataFrame(a)
previous = 0
current = 0
dim = []
StartingFunction(previous, current, dim, df2)
I had a for loop in the scorecal(slisted) method (commented out above) which takes each column, performs DBSCAN on it, and calculates the score for that particular column based on the result (I used a random score in this example). This looping makes my code run for a long time. So I tried to parallelize over the columns of the DataFrame, performing DBSCAN on the cores that I have on my system, and wrote the code in the above fashion, but it is not giving the result that I need. I am new to this multiprocessing library and was not sure about the placement of '__main__' in my program. I would also like to know if there is any other way in Python to run in a parallel fashion. Any help is appreciated.
Your code has all that is needed to run on a multi-core processor using more than one core, but it is a mess. I don't know what problem you are trying to solve with the code, and I cannot run it since I don't know what DBSCAN is. To fix your code you should take several steps.
Function scorecal():
def scorecal(feature_list):
    pool = mp.Pool(4)
    result = pool.map(performDBScan, feature_list)
    return result
result is a list containing all the results returned by performDBScan(). You don't have to populate the list manually.
Main body of the program:
# imports
# functions

if __name__ == '__main__':
    # your code after the functions' definitions, where you call StartingFunction()
I created a very simplified version of your code (a pool with 4 processes handling 8 columns of my data) with dummy for loops (to get a CPU-bound operation) and tried it. I got 100% CPU load (I have a 4-core i5 processor), which naturally resulted in approximately 4x faster computation (20 seconds vs 74 seconds) compared with a single-process implementation using a for loop.
EDIT.
The complete code I used to try multiprocessing (I use Anaconda (Spyder) / Python 3.6.5 / Win10):
import multiprocessing as mp
import pandas as pd
import time

def ssmake():
    pass

def score_cal(data):
    if True:
        pool = mp.Pool(4)
        result = pool.map(
            perform_dbscan,
            (data.loc[:, col] for col in data.columns))
    else:
        result = list()
        for col in data.columns:
            result.append(perform_dbscan(data.loc[:, col]))
    return result

def perform_dbscan(data):
    assert isinstance(data, pd.Series)
    for dummy in range(5 * 10 ** 8):
        dummy += 0
    return data.name, 101

def calculate_score():
    pass

def starting_function(data):
    print(score_cal(data))

if __name__ == '__main__':
    data = {
        'a': [1, 2, 3, 1, 2, 3],
        'b': [5, 6, 7, 4, 6, 5],
        'c': ['dog', 'cat', 'tree', 'slow', 'fast', 'hurry'],
        'd': [1, 1, 1, 1, 1, 1]}
    data = pd.DataFrame(data)
    start = time.time()
    starting_function(data)
    print(
        'running time = {:.2f} s'
        .format(time.time() - start))

How to get different answers from different threads?

To get to know the threading concept better, I tried to use threads in a simple program. I want to call a function 3 times, where each call performs a random selection.
import queue
import random
import threading

def func(arg):
    lst = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
    num = random.choice(lst)
    arg.append(num)
    return arg

def search(arg):
    a = func(arg)
    a = func(a)
    threads_list = []
    que = queue.Queue()
    for i in range(3):
        t = threading.Thread(target=lambda q, arg1: q.put(func(arg1)), args=(que, a))
        t.start()
        threads_list.append(t)
    for t in threads_list:
        t.join()
    while not que.empty():
        result = que.get()
        print(result)

if __name__ == '__main__':
    lst = []
    search(lst)
As you can see, in the third part I used threads, and I expected to get different lists (different for the third part), but all the threads return the same answer.
Can anyone help me get different lists from different threads?
I think I have misunderstood the concepts of multiprocessing and multithreading.
Possibly the pseudo-random number generator that random.choice relies on ends up with three instances, one per thread, and in the absence of unique seeds they will produce the same pseudo-random sequence. Since no seed is provided, it may be using the system time, which, depending on the precision, may be the same for all three threads.
You might try seeding the PRNG with something that differs from thread to thread, inside the thread that invokes the PRNG. This should cause the three threads to use different seeds and give you different pseudo-random sequences.
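A minimal sketch of that suggestion (my own addition): each thread gets its own random.Random instance seeded with a per-thread value, and works on its own copy of the list. Note that copying the list also avoids every thread appending to, and returning, the very same list object, which can equally explain the identical output in the original code.
import queue
import random
import threading

def func(arg, rng):
    # use the thread-local generator instead of the module-level one
    arg.append(rng.choice(range(21)))
    return arg

def worker(q, base, seed):
    rng = random.Random(seed)      # one generator per thread, seeded differently
    q.put(func(list(base), rng))   # work on a copy so threads do not share the list

if __name__ == '__main__':
    que = queue.Queue()
    base = [3, 7]                  # stands in for the list built before the threads start
    threads = [threading.Thread(target=worker, args=(que, base, i)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    while not que.empty():
        print(que.get())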
