How does one lock specific array indices in multiprocessing package? - python-3.x

I have a multiprocessing.manager.Array object that will be shared by multiple workers to tally observed events: each element in the array holds the tally of a different event type. Incrementing a tally requires both read and write operations, so I believe that to avoid race conditions, each worker needs to request a lock that covers both stages, e.g.
with lock:
    my_array[event_type_index] += 1
My intuition is that it should be possible to place a lock on a specific array element. With that type of lock, worker #1 could increment element 1 at the same time that worker #2 is incrementing element 2. This would be especially helpful for my application (n-gram counting), where the array length is quite large and collisions would be rare.
However, I can't figure out how to request an element-wise lock for an array. Does such a thing exist in multiprocessing, or is there a workaround?
For more context, I've included my current implementation below:
import string
import multiprocessing as mp
from queue import Empty

def count_ngrams_in_sentence(n, ngram_counts, char_to_idx_dict, sentence_queue, lock):
    while True:
        try:
            my_sentence_str = sentence_queue.get_nowait()
            my_sentence_indices = [char_to_idx_dict[i] for i in my_sentence_str]
            my_n = n.value
            for i in range(len(my_sentence_indices) - my_n + 1):
                my_index = int(sum([my_sentence_indices[i+j] * (27**(my_n - j - 1))
                                    for j in range(my_n)]))
                with lock:  # lock the whole array?
                    ngram_counts[my_index] += 1
            sentence_queue.task_done()
        except Empty:
            break
    return

if __name__ == '__main__':
    n = 4
    num_ngrams = 27**n
    num_workers = 2
    sentences = [ ... list of sentences in lowercase ASCII + spaces ... ]

    manager = mp.Manager()
    sentence_queue = manager.JoinableQueue()
    for sentence in sentences:
        sentence_queue.put(sentence)

    n = manager.Value('i', value=n, lock=False)
    char_to_idx_dict = manager.dict([(i, ord(i) - 97) for i in string.ascii_lowercase] + [(' ', 26)],
                                    lock=False)
    lock = manager.Lock()
    ngram_counts = manager.Array('l', [0]*num_ngrams, lock=lock)

    workers = [mp.Process(target=count_ngrams_in_sentence,
                          args=[n,
                                ngram_counts,
                                char_to_idx_dict,
                                sentence_queue,
                                lock]) for i in range(num_workers)]

    for worker in workers:
        worker.start()

    sentence_queue.join()

multiprocessing's Manager().Array comes with a built-in lock, so you have to switch to a RawArray to manage locking yourself.
Keep a list of locks, one per index. Before modifying an element, acquire the lock for that index, then release it:
locks[i].acquire()
array[i,:]=0
locks[i].release()
As I said, if the array is a multiprocessing.RawArray or similar, multiple processes can read or write to it simultaneously. For some array types, reading or writing an element is inherently atomic; the lock is essentially built in. Carefully research this before proceeding.
As for performance, indexing into a list is on the order of nanoseconds in Python, and acquiring and releasing a lock on the order of microseconds. It's not a huge issue.
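To make the suggestion concrete, here is a minimal sketch (not from the original answer) of the list-of-locks idea: a multiprocessing.RawArray of counts plus one Lock per index, so two workers only block each other when they touch the same element.

import multiprocessing as mp

def increment(counts, locks, index):
    # acquire only the lock guarding this element, not the whole array
    with locks[index]:
        counts[index] += 1

if __name__ == '__main__':
    num_counts = 27**2                         # kept small for illustration
    counts = mp.RawArray('l', num_counts)      # zero-initialised, no built-in lock
    locks = [mp.Lock() for _ in range(num_counts)]

    workers = [mp.Process(target=increment, args=(counts, locks, 5))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    print(counts[5])  # 4

For a very large array (27**4 elements in the question), one OS-level lock per index may be too heavy; a common compromise is a fixed pool of locks and guarding element i with locks[i % len(locks)], which still keeps collisions rare.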

Related

list and dictionary: which one is faster

I have the following two pieces of code that sort a list by swapping pairs of elements:
# Complete the minimumSwaps function below.
def minimumSwaps(arr):
    counter = 0
    val_2_indx = {val: arr.index(val) for val in arr}
    for indx, x in enumerate(arr):
        if x != indx+1:
            arr[indx] = indx+1
            s_indx = val_2_indx[indx+1]
            arr[s_indx] = x
            val_2_indx[indx+1] = indx
            val_2_indx[x] = s_indx
            counter += 1
    return counter
def minimumSwaps(arr):
    temp = [0] * (len(arr) + 1)
    for pos, val in enumerate(arr):
        temp[val] = pos
    swaps = 0
    for i in range(len(arr)):
        if arr[i] != i+1:
            swaps += 1
            t = arr[i]
            arr[i] = i+1
            arr[temp[i+1]] = t
            temp[t] = temp[i+1]
            temp[i+1] = i
    return swaps
The second function works much faster than the first one. However, I was told that a dictionary is faster than a list.
What's the reason here?
A list is a data structure, and a dictionary is a data structure. It doesn't make sense to say one is "faster" than the other, any more than you can say that an apple is faster than an orange. One might grow faster, you might be able to eat the other one faster, and they might both fall to the ground at the same speed when you drop them. It's not the fruit that's faster, it's what you do with it.
If your problem is that you have a sequence of strings and you want to know the position of a given string in the sequence, then consider these options:
You can store the sequence as a list. Finding the position of a given string using the .index method requires a linear search, iterating through the list in O(n) time.
You can store a dictionary mapping strings to their positions. Finding the position of a given string requires looking it up in the dictionary, in O(1) time.
So it is faster to solve that problem using a dictionary.
But note also that in your first function, you are building the dictionary using the list's .index method - which means doing n linear searches each in O(n) time, building the dictionary in O(n^2) time because you are using a list for something lists are slow at. If you build the dictionary without doing linear searches, then it will take O(n) time instead:
val_2_indx = { val: i for i, val in enumerate(arr) }
But now consider a different problem. You have a sequence of numbers, and they happen to be the numbers from 1 to n in some order. You want to be able to look up the position of a number in the sequence:
You can store the sequence as a list. Finding the position of a given number requires linear search again, in O(n) time.
You can store them in a dictionary like before, and do lookups in O(1) time.
You can store the inverse sequence in a list, so that lst[i] holds the position of the value i in the original sequence. This works because every permutation is invertible. Now getting the position of i is a simple list access, in O(1) time.
This is a different problem, so it can take a different amount of time to solve. In this case, both the list and the dictionary allow a solution in O(1) time, but it turns out it's more efficient to use a list. Getting by key in a dictionary has a higher constant time than getting by index in a list, because getting by key in a dictionary requires computing a hash, and then probing an array to find the right index. (Getting from a list just requires accessing an array at an already-known index.)
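As a rough illustration of that constant factor (a micro-benchmark sketch, not from the original answer; absolute numbers will vary by machine), both lookups below are O(1), but the dict pays for hashing and probing:

from timeit import timeit

n = 10**6
positions_list = list(range(n))            # position lookup by index
positions_dict = {i: i for i in range(n)}  # position lookup by key

print(timeit("positions_list[123456]", globals=globals(), number=10**6))
print(timeit("positions_dict[123456]", globals=globals(), number=10**6))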
This second problem is the one in your second function. See this part:
temp = [0] * (len(arr) + 1)
for pos, val in enumerate(arr):
    temp[val] = pos
This creates a list temp, where temp[val] = pos whenever arr[pos] == val. This means the list temp is the inverse permutation of arr. Later in the code, temp is used only to get these positions by index, which is an O(1) operation and happens to be faster than looking up a key in a dictionary.
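For example (a small illustration added here, not part of the original code):

arr = [3, 1, 2]
temp = [0] * (len(arr) + 1)
for pos, val in enumerate(arr):
    temp[val] = pos
# temp == [0, 1, 2, 0]: temp[3] is 0 because the value 3 sits at position 0 in arr
print(temp[3])  # 0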

A strategy-proof method of finding the time complexity of complex algorithms?

I have a question about time complexity (big-O) in Python: I want to understand what general method to use when finding the big-O of a complex algorithm. I understand the reasoning for simple algorithms, such as a for loop over a list of n elements being O(n), or two nested for loops each iterating over a list of n elements being O(n**2). But for more complex algorithms that mix if-elif-else statements with for loops, I would like a strategy that determines the big-O iteratively, based only on the code, using simple heuristics (for example, ignoring constant-time if statements, multiplying by n for each enclosing for loop, or doing something specific when encountering an else branch).
I have created a battleship game for which I would like to find the time complexity using such a strategy.
from random import randint

class Battle:
    def __init__(self):
        self.my_grid = [[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False],[False,False,False,False,False,False,False,False,False,False]]

    def putting_ship(self, x, y):
        breaker = False
        while breaker == False:
            r1 = x
            r2 = y
            element = self.my_grid[r1][r2]
            if element == True:
                continue
            else:
                self.my_grid[r1][r2] = True
                break

    def printing_grid(self):
        return self.my_grid

    def striking(self, r1, r2):
        element = self.my_grid[r1][r2]
        if element == True:
            print("STRIKE!")
            self.my_grid[r1][r2] = False
            return True
        elif element == False:
            print("Miss")
            return False

def game():
    battle_1 = Battle()
    battle_2 = Battle()
    score_player1 = 0
    score_player2 = 0
    turns = 5
    counter_ships = 2
    while True:
        input_x_player_1 = input("give x coordinate for the ship, player 1\n")
        input_y_player_1 = input("give y coordinate for the ship, player 1\n")
        battle_1.putting_ship(int(input_x_player_1), int(input_y_player_1))
        input_x_player_2 = randint(0, 9)
        input_y_player_2 = randint(0, 9)
        battle_2.putting_ship(int(input_x_player_2), int(input_y_player_2))
        counter_ships -= 1
        if counter_ships == 0:
            break
    while True:
        input_x_player_1 = input("give x coordinate for the ship\n")
        input_y_player_1 = input("give y coordinate for the ship\n")
        my_var = battle_1.striking(int(input_x_player_1), int(input_y_player_1))
        if my_var == True:
            score_player1 += 1
            print(score_player1)
        input_x_player_2 = randint(0, 9)
        input_y_player_2 = randint(0, 9)
        my_var_2 = battle_2.striking(int(input_x_player_2), int(input_y_player_2))
        if my_var_2 == True:
            score_player2 += 1
            print(score_player2)
        counter_ships -= 1
        if counter_ships == 0:
            break
    print("the score for player 1 is", score_player1)
    print("the score for player 2 is", score_player2)

print(game())
If it's just nested for loops and if/else statements, you can take the approach ibonyun has suggested - assume all if/else cases are covered and look at the deepest loops (being aware that some operations like sorting, or copying an array, might hide loops of their own.)
However, your code also has while loops. In this particular example it's not too hard to replace them with fors, but for code containing nontrivial whiles there is no general strategy that will always give you the complexity - this is a consequence of the halting problem.
For example:
def collatz(n):
n = int(abs(n))
steps = 0
while n != 1:
if n%2 == 1:
n=3*n+1
else:
n=n//2
steps += 1
print(n)
print("Finished in",steps,"steps!")
So far nobody has been able to prove that this will even finish for all n, let alone shown an upper bound to the run-time.
Side note: instead of the screen-breaking
self.my_grid = [[False,False,...False],[False,False,...,False],...,[False,False,...False]]
consider something like:
grid_size = 10
self.my_grid = [[False for i in range(grid_size)] for j in range(grid_size)]
which is easier to read and check.
Empirical:
You could do some time trials while increasing n (so maybe increasing the board size?) and plot the resulting data. You could tell by the curve/slope of the line what the time complexity is.
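As a sketch of this empirical approach (using a hypothetical work(n) stand-in, since the battleship game above is driven by input() and is awkward to time directly), you can time the routine for increasing n and watch how the measurements grow:

import time

def work(n):
    # hypothetical stand-in for the routine being measured
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

for n in [100, 200, 400, 800, 1600]:
    start = time.perf_counter()
    work(n)
    print(n, time.perf_counter() - start)
    # for O(n^2) code, doubling n should roughly quadruple the elapsed time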
Theoretical:
Parse the script and keep track of the biggest O() you find for any given line or function call. Any sorting operations will give you n log n. A for loop inside a for loop will give you n^2 (assuming they're both iterating over the input data), etc. Time complexity is about the broad strokes: O(n) and O(3n) are both linear time, and that's what really matters. I don't think you need to worry about the minutiae of all your if-elif-else logic. Maybe just focus on the worst-case scenario?

How to accelerate the application of the following for loop and function?

I have the following for loop:
for j in range(len(list_list_int)):
    arr_1_, arr_2_, arr_3_ = foo(bar, list_of_ints[j])
    arr_1[j,:] = arr_1_.data.numpy()
    arr_2[j,:] = arr_2_.data.numpy()
    arr_3[j,:] = arr_3_.data.numpy()
I would like to apply foo with multiprocessing, mainly because it is taking a lot of time to finish. I tried to do it in batches with funcy's chunks method:
for j in chunks(1000, list_list_int):
    arr_1_, arr_2_, arr_3_ = foo(bar, list_of_ints[j])
    arr_1[j,:] = arr_1_.data.numpy()
    arr_2[j,:] = arr_2_.data.numpy()
    arr_3[j,:] = arr_3_.data.numpy()
However, I am getting "list object cannot be interpreted as an integer". What is the correct way of applying foo using multiprocessing?
list_list_int = [1,2,3,4,5,6]
for j in chunks(2, list_list_int):
    for i in j:
        avg_, max_, last_ = foo(bar, i)
I don't have funcy (which provides chunks) installed, but from the docs I suspect that, for size-2 chunks of
alist = [[1,2],[3,4],[5,6],[7,8]]
it produces
j = [[1,2],[3,4]]
j = [[5,6],[7,8]]
which would produce an error:
In [116]: alist[j]
TypeError: list indices must be integers or slices, not list
And if your foo can't work with the full list of lists, I don't see how it will work with that list split into chunks. Apparently it can only work with one sublist at a time.
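To answer the multiprocessing part directly, here is a minimal multiprocessing.Pool sketch. It assumes foo can be called on one element of list_of_ints at a time and that its inputs are picklable; the foo and bar defined below are hypothetical stand-ins (the question's real foo seems to return torch tensors, hence the .data.numpy() calls), so adapt the worker accordingly.

import multiprocessing as mp
import numpy as np

bar = 10.0  # hypothetical stand-in for the question's bar

def foo(bar, x):
    # hypothetical stand-in returning three 1-D arrays per input element
    v = np.arange(3, dtype=float) * x / bar
    return v, 2 * v, 3 * v

def worker(x):
    # one task = one element; returns the three row vectors for that element
    return foo(bar, x)

if __name__ == '__main__':
    list_of_ints = list(range(100))
    with mp.Pool() as pool:
        rows = pool.map(worker, list_of_ints, chunksize=10)
    # reassemble the per-element rows into the arr_1 / arr_2 / arr_3 matrices
    arr_1 = np.stack([r[0] for r in rows])
    arr_2 = np.stack([r[1] for r in rows])
    arr_3 = np.stack([r[2] for r in rows])
    print(arr_1.shape, arr_2.shape, arr_3.shape)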
If you are looking to perform parallel operations on a numpy array, then I would use Dask.
With just a few lines of code, your operation can easily be run across multiple processes, and the highly developed Dask scheduler will balance the load for you. A huge benefit of Dask compared to other parallel libraries like joblib is that it maintains the native numpy API.
import dask.array as da

# Set up a random array with 10,000 rows and 10 columns.
# The data is stored across 10 chunks of shape (1_000, 10), so columns are kept together.
x = da.random.random((10_000, 10), chunks=(1_000, 10))
x = x.persist()  # allow the entire array to persist in memory to speed up calculation

def foo(x):
    return x / 10

# Apply foo to each 1-D slice along axis 0, in parallel
# (da.apply_along_axis mirrors numpy's apply_along_axis)
result_foo = da.apply_along_axis(foo, 0, x)

# View original contents
x[0:10].compute()

# View a sample of the results
result_foo = result_foo.compute()
result_foo[0:10]

Python fails to parallelize buffer reads

I'm having performance issues with multi-threading.
I have a code snippet that reads 8MB buffers in parallel:
import copy
import itertools
import threading
import time

# Basic implementation of a thread pool.
# Based on multiprocessing.Pool
class ThreadPool:
    def __init__(self, nb_threads):
        self.nb_threads = nb_threads

    def map(self, fun, iter):
        if self.nb_threads <= 1:
            return map(fun, iter)

        nb_threads = min(self.nb_threads, len(iter))

        # ensure 'iter' does not evaluate lazily
        # (generator or xrange...)
        iter = list(iter)

        # map to results list
        results = [None] * nb_threads
        def wrapper(i):
            def f(args):
                results[i] = map(fun, args)
            return f

        # slice iter in chunks
        chunks = [iter[i::nb_threads] for i in range(nb_threads)]

        # create threads
        threads = [threading.Thread(target=wrapper(i), args=[chunk])
                   for i, chunk in enumerate(chunks)]

        # start and join threads
        [thread.start() for thread in threads]
        [thread.join() for thread in threads]

        # reorder results
        r = list(itertools.chain.from_iterable(map(None, *results)))
        return r

payload = [0] * (1000 * 1000)  # 8 MB
payloads = [copy.deepcopy(payload) for _ in range(40)]

def process(i):
    for i in payloads[i]:
        j = i + 1

if __name__ == '__main__':
    for nb_threads in [1, 2, 4, 8, 20]:
        t = time.time()
        c = time.clock()

        pool = ThreadPool(nb_threads)
        pool.map(process, xrange(40))

        t = time.time() - t
        c = time.clock() - c
        print nb_threads, t, c
Output:
1 1.04805707932 1.05
2 1.45473504066 2.23
4 2.01357698441 3.98
8 1.56527090073 3.66
20 1.9085559845 4.15
Why does the threading module miserably fail at parallelizing mere buffer reads?
Is it because of the GIL? Or because of some weird configuration on my machine where one process
is allowed only one access to the RAM at a time? (I get a decent speed-up if I swap ThreadPool for multiprocessing.Pool in the code above.)
I'm using CPython 2.7.8 on a Linux distro.
Yes, Python's GIL prevents Python code from running in parallel across multiple threads. You describe your code as doing "buffer reads", but it's really running arbitrary Python code (in this case, iterating over a list adding 1 to other integers). If your threads were making blocking system calls (like reading from a file, or from a network socket), then the GIL would usually be released while the thread blocked waiting on the external data. But since most operations on Python objects can have side effects, you can't do several of them in parallel.
One important reason for this is that CPython's garbage collector uses reference counting as its main way to know when an object can be cleaned up. If several threads try to update the reference count of the same object at the same time, they might end up in a race condition and leave the object with the wrong count. The GIL prevents that from happening, as only one thread can be making such internal changes at a time. Every time your process code does j = i + 1, it's going to be updating the reference counts of the integer objects 0 and 1 a couple of times each. That's exactly the kind of thing the GIL exists to guard.
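To make the contrast concrete, here is a small Python 3 sketch (not from the original post) that runs the same CPU-bound function on a thread pool and then a process pool; only the process pool escapes the GIL and shows a real speed-up for pure-Python work.

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # pure-Python arithmetic: the GIL is held the whole time
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == '__main__':
    work = [2_000_000] * 8
    for pool_cls in (ThreadPoolExecutor, ProcessPoolExecutor):
        start = time.perf_counter()
        with pool_cls(max_workers=4) as pool:
            list(pool.map(burn, work))
        print(pool_cls.__name__, time.perf_counter() - start)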

changing the value of iterator of for loop at certain condition in python

Hello friends, while learning Python it came to my mind: is there any way to make the loop variable jump directly to a particular value without iterating through the values in between? For example:
a = range(1, 10)   # i.e. (1, 2, 3, 4, 5, 6, 7, 8, 9)
for i in a:
    print("value of i:", i)
    if (certain condition):
        # this condition should make the loop variable jump directly to a
        # certain later value: say, if currently i == 2, it would jump
        # straight to i == 8, bypassing the iterations from 3 to 7 and
        # saving those CPU cycles
There is a solution, however it involves complicating your code somewhat.
It does not require an if statement as such, but it does use a while loop and a try/except block.
If you wish to change which numbers are skipped, you simply change the for _ in range() statement.
This is the code:
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
at = iter(a)
while True:
    try:
        a_next = next(at)
        print(a_next)
        if a_next == 3:
            for _ in range(4, 8):
                a_next = next(at)
                a_next = str(a_next)
                print(a_next)
    except StopIteration:
        break
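A more compact variant of the same consume-and-skip idea (not in the original answer) uses the itertools "consume" recipe, next(islice(it, n, n), None), to throw away the next n values of the iterator being looped over:

from itertools import islice

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
it = iter(a)
for x in it:
    print(x)
    if x == 3:
        # advance the underlying iterator past the next 4 values (4, 5, 6, 7)
        next(islice(it, 4, 4), None)

This prints 1, 2, 3 and then resumes at 8.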
The iterator interface is based on the next method. Multiple next calls are necessary to advance the iteration by more than one element. There is no shortcut.
If you iterate over sequences only, you may abandon the iterator and write old-fashioned C-like code that allows you to move the index:
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
a_len = len(a)
i = 0
while i < a_len:
    print(a[i])
    if i == 2:
        i = 8
        continue
    i += 1
