Why is drawing a large matrix of random numbers outside a threaded loop faster than drawing a column at a time inside the loop?

I have to run some operation do_stuff on a lot of random numbers. I initially wrote code that looks something like this:
X = randn( 3, 1000000 )
Y = Vector{Float64}( undef, 1000000 )
@threads for n in 1:size( X, 2 )
    Y[n] = do_stuff( X[:,n] )
end
I thought I'd make the random number draws parallel too like so:
Y = Vector{Float64}( undef, 1000000 )
@threads for n in 1:length( Y )
    x = randn( 3 )
    Y[n] = do_stuff( x )
end
I expected this to be faster because randn is being called in parallel but it's actually about 20% slower. And this is actually about 40-50% slower:
X = Matrix{Float64}( undef, 3, 1000000 )
Y = Vector{Float64}( undef, 1000000 )
@threads for n in 1:size( X, 2 )
    X[:,n] = randn( 3 )
    Y[n] = do_stuff( X[:,n] )
end
Why is this so?

Generating many random numbers at once can indeed be efficient, but there are bigger problems here. You are allocating an enormous number of temporary arrays:
X[:,n] = randn( 3 )
Y[n] = do_stuff( X[:,n] )
The call randn(3) in the first line allocates a new array on every iteration, and the X[:,n] in the second line creates a copy, too (the X[:,n] on the left-hand side of the first line is an indexed assignment, not a slice, so it does not allocate). Benchmark:
function foo!(X, Y)
    Threads.@threads for n in axes(X, 2)  # don't use 1:size, use axes
        X[:, n] = randn(3)
        Y[n] = maximum(X[:, n])
    end
    return Y
end

X = Matrix{Float64}( undef, 3, 10000 );
Y = Vector{Float64}( undef, 10000 );
julia> @btime foo!($X, $Y);
260.600 μs (20048 allocations: 1.53 MiB)
Let's fix the two issues I mentioned:
function bar!(X, Y)
    Threads.@threads for n in axes(X, 2)
        X[:, n] .= randn.()  # draws a separate number for each row of X
        Y[n] = maximum(@view X[:, n])
    end
    return Y
end
X[:, n] .= randn.() generates the random numbers in place, without any temporary array, and in Y[n] = maximum(@view X[:, n]) the @view macro creates a view instead of a copy.
New benchmark:
julia> @btime bar!($X, $Y);
61.700 μs (48 allocations: 5.28 KiB)
Dramatically fewer allocations, and they are all due to the multithreading; run single-threaded, there should be zero allocations.
Let's compare to generating all random numbers in a single go:
function foo!(Y)
    # All rands in one go, but still using slices
    X = randn(3, length(Y))
    Threads.@threads for n in axes(X, 2)
        Y[n] = maximum(X[:, n])
    end
    return Y
end

function bar!(Y)
    # All rands in one go, but using views
    X = randn(3, length(Y))
    Threads.@threads for n in axes(X, 2)
        Y[n] = maximum(@view X[:, n])
    end
    return Y
end
julia> @btime foo!($Y);
168.700 μs (10048 allocations: 1020.89 KiB)
julia> @btime bar!($Y);
98.600 μs (48 allocations: 239.64 KiB)
As you can see, foo!(Y) is faster than foo!(X, Y) but much slower than bar!(X, Y), and even with views, bar!(Y) is slower than bar!(X, Y).
In other words, it is not clear-cut that generating all the random numbers at once is always faster. Instead, inspect your code and look for needless creation of temporary arrays; that is the real performance red flag.

Even if the total amount of memory allocated were the same, doing it in one go would be expected to be faster, since one large allocation is much cheaper than a million small ones (and, as the benchmarks below show, the small draws actually allocate more in total):
julia> function f()
           randn(3, 1000000)
       end

julia> function g()
           for _ in 1:1000000
               randn(3)
           end
       end
julia> @benchmark f()
BenchmarkTools.Trial: 967 samples with 1 evaluation.
Range (min … max): 4.959 ms … 7.275 ms ┊ GC (min … max): 0.00% … 3.81%
Time (median): 5.205 ms ┊ GC (median): 0.00%
Time (mean ± σ): 5.160 ms ± 182.650 μs ┊ GC (mean ± σ): 2.23% ± 2.19%
▁▆█▇▄▂ ▂▄▆▅▅▁
▂▃▇██████▆▅▅▃▃▃▃▃▁▁▂▂▁▃▃▆▇██████▇▆▅▃▃▂▂▂▂▃▁▁▁▂▂▁▁▁▂▁▂▁▁▁▁▁▂ ▄
4.96 ms Histogram: frequency by time 5.57 ms <
Memory estimate: 22.89 MiB, allocs estimate: 2.
julia> @benchmark g()
BenchmarkTools.Trial: 117 samples with 1 evaluation.
Range (min … max): 41.446 ms … 47.920 ms ┊ GC (min … max): 3.53% … 2.96%
Time (median): 43.109 ms ┊ GC (median): 6.75%
Time (mean ± σ): 42.849 ms ± 871.271 μs ┊ GC (mean ± σ): 5.94% ± 1.44%
▁█▆
▃▄▇▆▂▁▁▁▂▁▁▁▁▁▁▁▂▂▅███▅▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂ ▂
41.4 ms Histogram: frequency by time 46.4 ms <
Memory estimate: 76.29 MiB, allocs estimate: 1000000.
I doubt parallelism would help you beat f() in general, because you are likely memory-bandwidth bound.

Related

How to efficiently calculate squared distance of each element to every other in a matrix?

I have a matrix of the below format:
matrix = array([[-0.2436986 , -0.25583658, -0.16579486, ..., -0.04291612,
-0.06026303, 0.08564489],
[-0.08684622, -0.21300158, -0.04034272, ..., -0.01995692,
-0.07747065, 0.06965207],
[-0.34814256, -0.20597479, 0.06931241, ..., -0.1236965 ,
-0.1300714 , -0.110122 ],
...,
[-0.04154776, -0.07538085, 0.01860147, ..., -0.01494173,
-0.08960884, -0.21338603],
[-0.34039265, -0.24616522, 0.10838407, ..., 0.22280858,
-0.03465452, 0.04178255],
[-0.30251586, -0.23072125, -0.01975435, ..., 0.34529492,
-0.03508861, 0.00699677]], dtype=float32)
Since I want to calculate the squared distance of each element to every other, I am using the below code:
def sq_dist(a, b):
    """
    Returns the squared distance between two vectors

    Args:
      a (ndarray (n,)): vector with n features
      b (ndarray (n,)): vector with n features

    Returns:
      d (float): distance
    """
    d = np.sum(np.square(a - b))
    return d

dim = len(matrix)
dist = np.zeros((dim, dim))
for i in range(dim):
    for j in range(dim):
        dist[i, j] = sq_dist(matrix[i, :], matrix[j, :])
I am getting the correct result, but it already takes 17 minutes for just 5000 elements (if I use 5000 elements instead of 100k).
Since I have a 100k x 100k matrix, the cluster fails in 5 hours.
How can I do this efficiently for a large matrix?
I am using Python 3.8 and PySpark.
Output matrix should be like:
dist = array([[0. , 0.57371938, 0.78593194, ..., 0.83454031, 0.58932155,
0.76440328],
[0.57371938, 0. , 0.66285896, ..., 0.89251578, 0.76511419,
0.59261483],
[0.78593194, 0.66285896, 0. , ..., 0.60711896, 0.80852598,
0.73895919],
...,
[0.83454031, 0.89251578, 0.60711896, ..., 0. , 1.01311994,
0.84679914],
[0.58932155, 0.76511419, 0.80852598, ..., 1.01311994, 0. ,
0.5392195 ],
[0.76440328, 0.59261483, 0.73895919, ..., 0.84679914, 0.5392195 ,
0. ]])
You can make it significantly faster by using numba:
import numpy as np
import numba as nb

@nb.njit(parallel=True)
def square_dist(matrix):
    dim = len(matrix)
    assert dim > 0
    dist = np.zeros((dim, dim))
    for i in nb.prange(dim):
        for j in nb.prange(dim):
            dist[i, j] = np.square(matrix[i, :] - matrix[j, :]).sum()
    return dist
Test and time:
rng = np.random.default_rng()
matrix = rng.random((200, 10))
assert np.allclose(op(matrix), square_dist(matrix))  # op = the double-loop implementation from the question
%timeit op(matrix)
%timeit square_dist(matrix)
Output:
181 ms ± 556 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
947 µs ± 43.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
First, let's do a reality check. Computing N² distances where each one takes 3N−1 operations (N subtractions, N multiplications and N−1 additions) means you have to perform about 3N³ arithmetic operations. When N is 100k, that totals 3×10¹⁵ operations. A modern CPU with AVX-512 running at 3 GHz (3×10⁹ Hz) can perform 3×10⁹ [cycles/sec] × (512 / 32) [float32 entries in a vector] × 2 [vector ALUs per core] = 10¹¹ float32 operations per second. Therefore, to compute all entries in your distance matrix it will take no less than 3×10¹⁵ / 10¹¹ = 30000 seconds, or 8 hrs and 20 mins. This is a hard lower limit, only achievable if all operations are perfectly vectorisable, which they are not, e.g. the horizontal sum after the squaring. If the CPU isn't AVX-512 capable but only supports AVX2, then the vector length is twice as small and the time goes up to about 17 hrs. All this assumes that the data fits in the CPU cache; it actually doesn't, and it needs proper prefetching.
First thing you can do is cut the compute time in half by noticing that dᵢⱼ = dⱼᵢ and also dᵢᵢ = 0:

for i in range(dim):
    dist[i, i] = 0
    for j in range(i + 1, dim):
        dist[i, j] = dist[j, i] = np.sum(np.square(matrix[i, :] - matrix[j, :]))

Notice the loop here runs only for i < j, and the call to sq_dist has been inlined to save you 5×10⁹ unnecessary function calls!
But even then, you still need more than 4 hrs on that AVX-512 CPU (more than 8 hrs with AVX2 only).
If you really must cut down that compute time, you need to run it in parallel. With PySpark that means you have to store the vectors in a dataset, perform a self-join, and write a UDF that uses the BLAS implementation that ships with Spark (or install a native one) to compute the distance metric. Unfortunately, this is a low-level interface of Spark and it's only exposed to UDFs written in JVM languages - check this question for a Scala-based solution.
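As an illustration of how far plain NumPy can be pushed on a single machine before reaching for Spark, here is a sketch that uses the identity ‖a−b‖² = ‖a‖² + ‖b‖² − 2a·b to turn the whole distance matrix into one BLAS matrix product. Note this trades a little numerical exactness for speed and still needs O(N²) memory for the result, so for N = 100k it would have to be applied tile by tile:

import numpy as np

def sq_dist_matrix(matrix):
    # row-wise squared norms, ||a||^2 for every row
    sq_norms = np.einsum("ij,ij->i", matrix, matrix)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, for all pairs at once
    dist = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (matrix @ matrix.T)
    # floating-point rounding can leave tiny negatives; clip them to 0
    np.maximum(dist, 0.0, out=dist)
    return dist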

Joint construction of a random permutation and its inverse using NumPy

I am looking to construct a random permutation of [1, 2, ..., n] along with its inverse using NumPy. In my application, n can be on the order of 100 million so I am looking for a solution that constructs both the permutation and its inverse in minimum time.
What I've tried:
1. Computing a random permutation and its inverse separately using built-in NumPy functions:

p = np.random.permutation(n)
pinv = np.argsort(p)

2. The same idea as approach 1, but using the solution provided here. I found that this solution can speed up the computation of pinv by an order of magnitude.

def invert_permutation_numpy2(permutation):
    inv = np.empty_like(permutation)
    inv[permutation] = np.arange(len(inv), dtype=inv.dtype)
    return inv

p = np.random.permutation(n)
pinv = invert_permutation_numpy2(p)
I'm hoping that there is a solution that computes p and pinv jointly and yields additional speedup.
The following is a straightforward implementation of the Fisher-Yates method (pseudocode from here). When compiled with numba it's faster than numpy:
import numpy as np
import numba

@numba.njit
def randperm(n):
    """Permutation of 1, 2, ..., n and its inverse"""
    p = np.arange(1, n + 1, dtype=np.int32)  # initialize with the identity permutation
    pinv = np.empty_like(p)
    for i in range(n - 1, 0, -1):  # loop over all items except the first one
        z = np.random.randint(0, i + 1)
        temp = p[z]  # swap the numbers at i and z
        p[z] = p[i]
        p[i] = temp
        pinv[temp - 1] = i  # temp is the new p[i], so this sets pinv[p[i] - 1] = i
    pinv[p[0] - 1] = 0
    return p, pinv
Comparison:
%timeit p, pinv = randperm(100_000_000)
# 12.3 s ± 212 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
p = np.random.permutation(np.arange(1, 100_000_000+1, dtype=np.int32))
pinv = np.argsort(p)
# 31.8 s ± 439 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(NB: np.random.permutation(n) gives the permutation of 0, 1, ..., n-1 instead of 1, 2, ..., n, which is why the timed snippet permutes np.arange(1, n+1) instead.)
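A quick sanity check that the two outputs are indeed inverse to each other: pinv satisfies pinv[p[i] - 1] == i, so indexing p by pinv must recover the identity 1, 2, ..., n.

import numpy as np

p, pinv = randperm(10)
assert np.array_equal(p[pinv], np.arange(1, 11))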

multiplying an integer numpy array with a float scalar without intermediary float array

I'm dealing with very large image arrays of uint16 data that I would like to downscale and convert to uint8.
My initial way of doing this caused a MemoryError because of an intermediary float64 array:
img = numpy.ones((29632, 60810, 3), dtype=numpy.uint16)
if img.dtype == numpy.uint16:
    multiplier = numpy.iinfo(numpy.uint8).max / numpy.iinfo(numpy.uint16).max
    img = (img * multiplier).astype(numpy.uint8, order="C")
I then tried to do the multiplication in place, the following way:
if img.dtype == numpy.uint16:
    multiplier = numpy.iinfo(numpy.uint8).max / numpy.iinfo(numpy.uint16).max
    img *= multiplier
    img = img.astype(numpy.uint8, order="C")
But I run into the following error:
TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('uint16') with casting rule 'same_kind'
Do you know of a way to perform this operation while minimizing the memory footprint?
Where can I change the casting rule mentioned in the error message?
Q : "Do you know of a way to perform this operation while minimizing the memory footprint?"
First, let's get the [SPACE]-domain sizing right. The base-array is 29k6 x 60k8 x RGB x 2B in-memory object:
>>> 29632 * 60810 * 3 * 2 / 1E9 ~ 10.81 [GB]
having eaten some 11 [GB] of RAM.
Any operation will need some space. Having a TB-class [SPACE]-Domain for purely in-memory numpy-vectorised tricks, we are done here.
Given the O/P task was to minimise the memory footpint, moving all the arrays and their operations into numpy.memmap()-objects will solve it.
I finally found a solution that works after some reading of the numpy ufunc documentation.
multiplier = numpy.iinfo(numpy.uint8).max / numpy.iinfo(numpy.uint16).max
numpy.multiply(img, multiplier, out=img, casting="unsafe")
img = img.astype(numpy.uint8, order="C")
I should have found this earlier, but it's not an easy read if you are not familiar with some of the technical vocabulary.
You could also use Numba or Cython in such cases. There you can explicitly avoid any temporary arrays. The code is a bit longer, but very easy to understand, and faster.
Example
import numpy as np
import numba as nb

@nb.njit(parallel=True)
def conv_numba(img):
    multiplier = np.iinfo(np.uint8).max / np.iinfo(np.uint16).max
    img_out = np.empty(img.shape, dtype=np.uint8)
    for i in nb.prange(img.shape[0]):
        for j in range(img.shape[1]):
            for k in range(img.shape[2]):
                img_out[i, j, k] = img[i, j, k] * multiplier
    return img_out

# img_in has to be contiguous, otherwise reshape will fail
@nb.njit(parallel=True)
def conv_numba_opt(img_in):
    multiplier = np.iinfo(np.uint8).max / np.iinfo(np.uint16).max
    shape = img_in.shape
    img = img_in.reshape(-1)
    img_out = np.empty(img.shape, dtype=np.uint8)
    for i in nb.prange(img.shape[0]):
        img_out[i] = img[i] * multiplier
    return img_out.reshape(shape)

def conv_numpy(img):
    multiplier = np.iinfo(np.uint8).max / np.iinfo(np.uint16).max  # as in the question
    np.multiply(img, multiplier, out=img, casting="unsafe")
    img = img.astype(np.uint8, order="C")
    return img
Timings
img = np.ones((29630, 6081, 3), dtype=np.uint16)
%timeit res_1=conv_numpy(img)
#990 ms ± 2.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=conv_numba(img)
#with parallel=True
#122 ms ± 17.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
#with parallel=False
#571 ms ± 2.99 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Using integer division (instead of multiplying by the inverse) should prevent the intermediate floating-point array (correct me if wrong) and allow you to do the operation in place.
divisor = numpy.iinfo(numpy.uint16).max // numpy.iinfo(numpy.uint8).max
img //= divisor
img = img.astype(numpy.uint8, order="C")
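As a quick illustration of the arithmetic: the divisor works out to 65535 // 255 == 257, and since the values are non-negative, the endpoints map to the full uint8 range:

import numpy as np

divisor = np.iinfo(np.uint16).max // np.iinfo(np.uint8).max  # 65535 // 255 == 257
v = np.array([0, 257, 65535], dtype=np.uint16)
print(v // divisor)  # [  0   1 255]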
How about getting a 4.3 ~ 4.7 x faster solution?
Polishing a bit PiRK's numpy.ufunc-based in-place processing solution, sketched above, there is an about 4.3 ~ 4.7 x faster modification thereof:
>>> from zmq import Stopwatch; aClk = Stopwatch()  # a trivial [us]-resolution clock
>>>
>>> ############################################### ORIGINAL ufunc()-code:
>>>
>>> fMUL = np.iinfo( np.uint8 ).max / np.iinfo( np.uint16 ).max  # the multiplier from the question
>>> I = np.ones( ( 29632, 608, 3 ), dtype = np.uint16 )          # SIZE >> CPU CACHE SIZE
>>> img = I.copy(); aClk.start(); _ = np.multiply( img,
...                                                fMUL,
...                                                out = img,
...                                                casting = 'unsafe'
...                                                ); img = img.astype( np.uint8,
...                                                                     order = 'C'
...                                                                     ); aClk.stop()
312802 [us]
320087 [us]
329401 [us]
317346 [us]
Using some more of the documented ufunc-smart kwargs right in the first call, the performance grows by about 4.3 ~ 4.7 x:
>>> img = I.copy(); aClk.start(); _ = np.multiply( img,
...                                                fMUL,
...                                                out = img,
...                                                casting = 'unsafe',
...                                                dtype = np.uint8,
...                                                order = 'C'
...                                                ); aClk.stop()
69812 [us]
71335 [us]
73112 [us]
70171 [us]
Q : "Where can I change the casting rule mentioned in the error message?"
The implicit (default) mode used for the casting argument changed, IIRC, somewhere between numpy versions 1.10 and 1.11, yet it is quite well documented in the published numpy.ufunc API documentation.
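For reference, numpy.can_cast shows what each casting level permits; a minimal check of why the default 'same_kind' rejects writing the float64 product back into a uint16 array while 'unsafe' allows it:

import numpy as np

print(np.can_cast(np.float64, np.uint16, casting="same_kind"))  # False -> the TypeError above
print(np.can_cast(np.float64, np.uint16, casting="unsafe"))     # True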
These are alternative solutions that work in the specific case of converting uint16 to uint8, suggested on https://github.com/silx-kit/silx/pull/2889. They use the fact that you can just keep the high byte of each uint16 value (the second byte, on a little-endian machine) and ignore the low one:
>>> import numpy
>>> img = numpy.ones((29632, 30000, 3), dtype=numpy.uint16)
>>> %timeit img2 = numpy.ascontiguousarray(img.astype(dtype=('u2', [('lo','u1'), ('hi', 'u1')])))["hi"]
2.43 s ± 34.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit img2 = numpy.ascontiguousarray(img.view(numpy.uint8)[..., 1::2])
2.58 s ± 165 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Combined with numba and parallel computation, it is slightly faster than the previous numba solution that uses an arithmetic multiplication (which took 750 ms on the same array):
In [11]: @numba.njit(parallel=True)
    ...: def uint16_to_uint8_shift(img):
    ...:     img_out = numpy.empty(img.shape, dtype=numpy.uint8)
    ...:     for i in numba.prange(img.shape[0]):
    ...:         for j in range(img.shape[1]):
    ...:             for k in range(img.shape[2]):
    ...:                 img_out[i, j, k] = img[i, j, k] >> 8
    ...:     return img_out
    ...:
In [12]: %timeit img2 = uint16_to_uint8_shift(img)
650 ms ± 43.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

How to fill out 2-d array with elements from 1-d arrays efficiently? [duplicate]

I have two numpy arrays that define the x and y axes of a grid. For example:
x = numpy.array([1,2,3])
y = numpy.array([4,5])
I'd like to generate the Cartesian product of these arrays to generate:
array([[1,4],[2,4],[3,4],[1,5],[2,5],[3,5]])
In a way that's not terribly inefficient since I need to do this many times in a loop. I'm assuming that converting them to a Python list and using itertools.product and back to a numpy array is not the most efficient form.
A canonical cartesian_product (almost)
There are many approaches to this problem with different properties. Some are faster than others, and some are more general-purpose. After a lot of testing and tweaking, I've found that the following function, which calculates an n-dimensional cartesian_product, is faster than most others for many inputs. For a pair of approaches that are slightly more complex, but are even a bit faster in many cases, see the answer by Paul Panzer.
Given that answer, this is no longer the fastest implementation of the cartesian product in numpy that I'm aware of. However, I think its simplicity will continue to make it a useful benchmark for future improvement:
def cartesian_product(*arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)
It's worth mentioning that this function uses ix_ in an unusual way; whereas the documented use of ix_ is to generate indices into an array, it just so happens that arrays with the same shape can be used for broadcasted assignment. Many thanks to mgilson, who inspired me to try using ix_ this way, and to unutbu, who provided some extremely helpful feedback on this answer, including the suggestion to use numpy.result_type.
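To make the trick concrete, here is a small illustration of the open-mesh index arrays that ix_ produces and how they broadcast:

import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])
gx, gy = np.ix_(x, y)       # open-mesh arrays of shapes (3, 1) and (1, 2)
print(gx.shape, gy.shape)   # (3, 1) (1, 2)
print((gx + gy).shape)      # (3, 2): broadcasting fills the full grid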
Notable alternatives
It's sometimes faster to write contiguous blocks of memory in Fortran order. That's the basis of this alternative, cartesian_product_transpose, which has proven faster on some hardware than cartesian_product (see below). However, Paul Panzer's answer, which uses the same principle, is even faster. Still, I include this here for interested readers:
def cartesian_product_transpose(*arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)
    dtype = numpy.result_type(*arrays)
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T
After coming to understand Panzer's approach, I wrote a new version that's almost as fast as his, and is almost as simple as cartesian_product:
def cartesian_product_simple_transpose(arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([la] + [len(a) for a in arrays], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[i, ...] = a
    return arr.reshape(la, -1).T
This appears to have some constant-time overhead that makes it run slower than Panzer's for small inputs. But for larger inputs, in all the tests I ran, it performs just as well as his fastest implementation (cartesian_product_transpose_pp).
In following sections, I include some tests of other alternatives. These are now somewhat out of date, but rather than duplicate effort, I've decided to leave them here out of historical interest. For up-to-date tests, see Panzer's answer, as well as Nico Schlömer's.
Tests against alternatives
Here is a battery of tests that show the performance boost that some of these functions provide relative to a number of alternatives. All the tests shown here were performed on a quad-core machine, running Mac OS 10.12.5, Python 3.6.1, and numpy 1.12.1. Variations on hardware and software are known to produce different results, so YMMV. Run these tests for yourself to be sure!
Definitions:
import numpy
import itertools
from functools import reduce
### Two-dimensional products ###
def repeat_product(x, y):
    return numpy.transpose([numpy.tile(x, len(y)),
                            numpy.repeat(y, len(x))])

def dstack_product(x, y):
    return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)
### Generalized N-dimensional products ###
def cartesian_product(*arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)

def cartesian_product_transpose(*arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)
    dtype = numpy.result_type(*arrays)
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T

# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(*arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype
    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)
    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j*m:(j+1)*m, 1:] = out[0:m, 1:]
    return out

def cartesian_product_itertools(*arrays):
    return numpy.array(list(itertools.product(*arrays)))
### Test code ###
name_func = [('repeat_product', repeat_product),
             ('dstack_product', dstack_product),
             ('cartesian_product', cartesian_product),
             ('cartesian_product_transpose', cartesian_product_transpose),
             ('cartesian_product_recursive', cartesian_product_recursive),
             ('cartesian_product_itertools', cartesian_product_itertools)]

def test(in_arrays, test_funcs):
    global func
    global arrays
    arrays = in_arrays
    for name, func in test_funcs:
        print('{}:'.format(name))
        %timeit func(*arrays)

def test_all(*in_arrays):
    test(in_arrays, name_func)

# `cartesian_product_recursive` throws an
# unexpected error when used on more than
# two input arrays, so for now I've removed
# it from these tests.
def test_cartesian(*in_arrays):
    test(in_arrays, name_func[2:4] + name_func[-1:])
x10 = [numpy.arange(10)]
x50 = [numpy.arange(50)]
x100 = [numpy.arange(100)]
x500 = [numpy.arange(500)]
x1000 = [numpy.arange(1000)]
Test results:
In [2]: test_all(*(x100 * 2))
repeat_product:
67.5 µs ± 633 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
dstack_product:
67.7 µs ± 1.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cartesian_product:
33.4 µs ± 558 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cartesian_product_transpose:
67.7 µs ± 932 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cartesian_product_recursive:
215 µs ± 6.01 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product_itertools:
3.65 ms ± 38.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [3]: test_all(*(x500 * 2))
repeat_product:
1.31 ms ± 9.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
dstack_product:
1.27 ms ± 7.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product:
375 µs ± 4.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product_transpose:
488 µs ± 8.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product_recursive:
2.21 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
105 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [4]: test_all(*(x1000 * 2))
repeat_product:
10.2 ms ± 132 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
dstack_product:
12 ms ± 120 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product:
4.75 ms ± 57.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_transpose:
7.76 ms ± 52.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_recursive:
13 ms ± 209 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
422 ms ± 7.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In all cases, cartesian_product as defined at the beginning of this answer is fastest.
For those functions that accept an arbitrary number of input arrays, it's worth checking performance when len(arrays) > 2 as well. (Until I can determine why cartesian_product_recursive throws an error in this case, I've removed it from these tests.)
In [5]: test_cartesian(*(x100 * 3))
cartesian_product:
8.8 ms ± 138 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_transpose:
7.87 ms ± 91.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
518 ms ± 5.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: test_cartesian(*(x50 * 4))
cartesian_product:
169 ms ± 5.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
cartesian_product_transpose:
184 ms ± 4.32 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
cartesian_product_itertools:
3.69 s ± 73.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [7]: test_cartesian(*(x10 * 6))
cartesian_product:
26.5 ms ± 449 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
cartesian_product_transpose:
16 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
728 ms ± 16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: test_cartesian(*(x10 * 7))
cartesian_product:
650 ms ± 8.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
cartesian_product_transpose:
518 ms ± 7.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
cartesian_product_itertools:
8.13 s ± 122 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
As these tests show, cartesian_product remains competitive until the number of input arrays rises above (roughly) four. After that, cartesian_product_transpose does have a slight edge.
It's worth reiterating that users with other hardware and operating systems may see different results. For example, unutbu reports seeing the following results for these tests using Ubuntu 14.04, Python 3.4.3, and numpy 1.14.0.dev0+b7050a9:
>>> %timeit cartesian_product_transpose(x500, y500)
1000 loops, best of 3: 682 µs per loop
>>> %timeit cartesian_product(x500, y500)
1000 loops, best of 3: 1.55 ms per loop
Below, I go into a few details about earlier tests I've run along these lines. The relative performance of these approaches has changed over time, for different hardware and different versions of Python and numpy. While it's not immediately useful for people using up-to-date versions of numpy, it illustrates how things have changed since the first version of this answer.
A simple alternative: meshgrid + dstack
The currently accepted answer uses tile and repeat to broadcast two arrays together. But the meshgrid function does practically the same thing. Here's the output of tile and repeat before being passed to transpose:
In [1]: import numpy
In [2]: x = numpy.array([1,2,3])
...: y = numpy.array([4,5])
...:
In [3]: [numpy.tile(x, len(y)), numpy.repeat(y, len(x))]
Out[3]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])]
And here's the output of meshgrid:
In [4]: numpy.meshgrid(x, y)
Out[4]:
[array([[1, 2, 3],
[1, 2, 3]]), array([[4, 4, 4],
[5, 5, 5]])]
As you can see, it's almost identical. We need only reshape it to get exactly the same result.
In [5]: xt, xr = numpy.meshgrid(x, y)
...: [xt.ravel(), xr.ravel()]
Out[5]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])]
Rather than reshaping at this point, though, we could pass the output of meshgrid to dstack and reshape afterwards, which saves some work:
In [6]: numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)
Out[6]:
array([[1, 4],
[2, 4],
[3, 4],
[1, 5],
[2, 5],
[3, 5]])
Contrary to the claim in this comment, I've seen no evidence that different inputs will produce differently shaped outputs, and as the above demonstrates, they do very similar things, so it would be quite strange if they did. Please let me know if you find a counterexample.
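As a quick spot-check (using the repeat_product and dstack_product defined above), the two agree even when the inputs have different lengths:

import numpy

x = numpy.array([1, 2, 3])
y = numpy.array([4, 5, 6, 7])   # deliberately different lengths
assert numpy.array_equal(repeat_product(x, y), dstack_product(x, y))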
Testing meshgrid + dstack vs. repeat + transpose
The relative performance of these two approaches has changed over time. In an earlier version of Python (2.7), the result using meshgrid + dstack was noticeably faster for small inputs. (Note that these tests are from an old version of this answer.) Definitions:
>>> def repeat_product(x, y):
...     return numpy.transpose([numpy.tile(x, len(y)),
...                             numpy.repeat(y, len(x))])
...
>>> def dstack_product(x, y):
...     return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)
...
For moderately-sized input, I saw a significant speedup. But I retried these tests with more recent versions of Python (3.6.1) and numpy (1.12.1), on a newer machine. The two approaches are almost identical now.
Old Test
>>> x, y = numpy.arange(500), numpy.arange(500)
>>> %timeit repeat_product(x, y)
10 loops, best of 3: 62 ms per loop
>>> %timeit dstack_product(x, y)
100 loops, best of 3: 12.2 ms per loop
New Test
In [7]: x, y = numpy.arange(500), numpy.arange(500)
In [8]: %timeit repeat_product(x, y)
1.32 ms ± 24.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [9]: %timeit dstack_product(x, y)
1.26 ms ± 8.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
As always, YMMV, but this suggests that in recent versions of Python and numpy, these are interchangeable.
Generalized product functions
In general, we might expect that using built-in functions will be faster for small inputs, while for large inputs, a purpose-built function might be faster. Furthermore for a generalized n-dimensional product, tile and repeat won't help, because they don't have clear higher-dimensional analogues. So it's worth investigating the behavior of purpose-built functions as well.
Most of the relevant tests appear at the beginning of this answer, but here are a few of the tests performed on earlier versions of Python and numpy for comparison.
The cartesian function defined in another answer used to perform pretty well for larger inputs. (It's the same as the function called cartesian_product_recursive above.) In order to compare cartesian to dstack_product, we use just two dimensions.
Here again, the old test showed a significant difference, while the new test shows almost none.
Old Test
>>> x, y = numpy.arange(1000), numpy.arange(1000)
>>> %timeit cartesian([x, y])
10 loops, best of 3: 25.4 ms per loop
>>> %timeit dstack_product(x, y)
10 loops, best of 3: 66.6 ms per loop
New Test
In [10]: x, y = numpy.arange(1000), numpy.arange(1000)
In [11]: %timeit cartesian([x, y])
12.1 ms ± 199 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [12]: %timeit dstack_product(x, y)
12.7 ms ± 334 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
As before, dstack_product still beats cartesian at smaller scales.
New Test (redundant old test not shown)
In [13]: x, y = numpy.arange(100), numpy.arange(100)
In [14]: %timeit cartesian([x, y])
215 µs ± 4.75 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [15]: %timeit dstack_product(x, y)
65.7 µs ± 1.15 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
These distinctions are, I think, interesting and worth recording; but they are academic in the end. As the tests at the beginning of this answer showed, all of these versions are almost always slower than cartesian_product, defined at the very beginning of this answer -- which is itself a bit slower than the fastest implementations among the answers to this question.
>>> numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))])
array([[1, 4],
[2, 4],
[3, 4],
[1, 5],
[2, 5],
[3, 5]])
See Using numpy to build an array of all combinations of two arrays for a general solution for computing the Cartesian product of N arrays.
You can just use a normal list comprehension in Python:
x = numpy.array([1,2,3])
y = numpy.array([4,5])
[[x0, y0] for x0 in x for y0 in y]
which should give you
[[1, 4], [1, 5], [2, 4], [2, 5], [3, 4], [3, 5]]
I was interested in this as well and did a little performance comparison, perhaps somewhat clearer than in @senderle's answer.
For two arrays (the classical case): [perfplot timing figure]
For four arrays: [perfplot timing figure]
(Note that the length of the arrays is only a few dozen entries here.)
Code to reproduce the plots:
from functools import reduce
import itertools
import numpy
import perfplot


def dstack_product(arrays):
    return numpy.dstack(numpy.meshgrid(*arrays, indexing="ij")).reshape(-1, len(arrays))


# Generalized N-dimensional products
def cartesian_product(arrays):
    la = len(arrays)
    dtype = numpy.find_common_type([a.dtype for a in arrays], [])
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)


def cartesian_product_transpose(arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = reduce(numpy.multiply, broadcasted[0].shape), len(broadcasted)
    dtype = numpy.find_common_type([a.dtype for a in arrays], [])
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T


# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype
    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)
    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j * m : (j + 1) * m, 1:] = out[0:m, 1:]
    return out


def cartesian_product_itertools(arrays):
    return numpy.array(list(itertools.product(*arrays)))


perfplot.show(
    setup=lambda n: 2 * (numpy.arange(n, dtype=float),),
    n_range=[2 ** k for k in range(13)],
    # setup=lambda n: 4 * (numpy.arange(n, dtype=float),),
    # n_range=[2 ** k for k in range(6)],
    kernels=[
        dstack_product,
        cartesian_product,
        cartesian_product_transpose,
        cartesian_product_recursive,
        cartesian_product_itertools,
    ],
    logx=True,
    logy=True,
    xlabel="len(a), len(b)",
    equality_check=None,
)
Building on @senderle's exemplary ground work I've come up with two versions - one for C and one for Fortran layouts - that are often a bit faster.
cartesian_product_transpose_pp is - unlike @senderle's cartesian_product_transpose, which uses a different strategy altogether - a version of cartesian_product that uses the more favorable transpose memory layout + some very minor optimizations.
cartesian_product_pp sticks with the original memory layout. What makes it fast is its use of contiguous copying. Contiguous copies turn out to be so much faster that copying a full block of memory even though only part of it contains valid data is preferable to copying only the valid bits.
Some perfplots. I made separate ones for C and Fortran layouts, because these are different tasks IMO.
Names ending in 'pp' are my approaches.
1) many tiny factors (2 elements each)
2) many small factors (4 elements each)
3) three factors of equal length
4) two factors of equal length
Code (a separate run is needed for each plot, because I couldn't figure out how to reset; also edit / comment in / out as appropriate):
import numpy
import numpy as np
from functools import reduce
import itertools
import timeit
import perfplot


def dstack_product(arrays):
    return numpy.dstack(
        numpy.meshgrid(*arrays, indexing='ij')
    ).reshape(-1, len(arrays))


def cartesian_product_transpose_pp(arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty((la, *map(len, arrays)), dtype=dtype)
    idx = slice(None), *itertools.repeat(None, la)
    for i, a in enumerate(arrays):
        arr[i, ...] = a[idx[:la-i]]
    return arr.reshape(la, -1).T


def cartesian_product(arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)


def cartesian_product_transpose(arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)
    dtype = numpy.result_type(*arrays)
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T


from itertools import accumulate, repeat, chain


def cartesian_product_pp(arrays, out=None):
    la = len(arrays)
    L = *map(len, arrays), la
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty(L, dtype=dtype)
    arrs = *accumulate(chain((arr,), repeat(0, la-1)), np.ndarray.__getitem__),
    idx = slice(None), *itertools.repeat(None, la-1)
    for i in range(la-1, 0, -1):
        arrs[i][..., i] = arrays[i][idx[:la-i]]
        arrs[i-1][1:] = arrs[i]
    arr[..., 0] = arrays[0][idx]
    return arr.reshape(-1, la)


def cartesian_product_itertools(arrays):
    return numpy.array(list(itertools.product(*arrays)))


# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype
    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)
    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j*m:(j+1)*m, 1:] = out[0:m, 1:]
    return out


### Test code ###
if False:
    perfplot.save('cp_4el_high.png',
                  setup=lambda n: n*(numpy.arange(4, dtype=float),),
                  n_range=list(range(6, 11)),
                  kernels=[
                      dstack_product,
                      cartesian_product_recursive,
                      cartesian_product,
                      # cartesian_product_transpose,
                      cartesian_product_pp,
                      # cartesian_product_transpose_pp,
                  ],
                  logx=False,
                  logy=True,
                  xlabel='#factors',
                  equality_check=None
                  )
else:
    perfplot.save('cp_2f_T.png',
                  setup=lambda n: 2*(numpy.arange(n, dtype=float),),
                  n_range=[2**k for k in range(5, 11)],
                  kernels=[
                      # dstack_product,
                      # cartesian_product_recursive,
                      # cartesian_product,
                      cartesian_product_transpose,
                      # cartesian_product_pp,
                      cartesian_product_transpose_pp,
                  ],
                  logx=True,
                  logy=True,
                  xlabel='length of each factor',
                  equality_check=None
                  )
As of Oct. 2017, numpy now has a generic np.stack function that takes an axis parameter. Using it, we can have a "generalized cartesian product" using the "dstack and meshgrid" technique:
import numpy as np

def cartesian_product(*arrays):
    ndim = len(arrays)
    return (np.stack(np.meshgrid(*arrays), axis=-1)
              .reshape(-1, ndim))
a = np.array([1,2])
b = np.array([10,20])
cartesian_product(a,b)
# output:
# array([[ 1, 10],
# [ 2, 10],
# [ 1, 20],
# [ 2, 20]])
Note on the axis=-1 parameter: this is the last (innermost) axis in the result; it is equivalent to using axis=ndim.
One other comment: since Cartesian products blow up very quickly, unless we need to realize the array in memory for some reason, if the product is very large we may want to make use of itertools and use the values on the fly, as sketched below.
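For instance, a minimal sketch of that lazy, chunked consumption:

import itertools

pairs = itertools.product(range(1000), range(1000))   # nothing materialized yet
while True:
    chunk = list(itertools.islice(pairs, 128))        # pull 128 pairs at a time
    if not chunk:
        break
    # ... process chunk here ...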
The Scikit-learn package has a fast implementation of exactly this:
from sklearn.utils.extmath import cartesian
product = cartesian((x,y))
Note that the convention of this implementation is different from what you want, if you care about the order of the output. For your exact ordering, you can do
product = cartesian((y,x))[:, ::-1]
I used @kennytm's answer for a while, but when trying to do the same in TensorFlow, I found that TensorFlow has no equivalent of numpy.repeat(). After a little experimentation, I think I found a more general solution for arbitrary vectors of points.
For numpy:
import numpy as np

def cartesian_product(*args: np.ndarray) -> np.ndarray:
    """
    Produce the cartesian product of arbitrary length vectors.

    Parameters
    ----------
    np.ndarray args
        vector of points of interest in each dimension

    Returns
    -------
    np.ndarray
        the cartesian product of size [m x n] wherein:
            m = prod([len(a) for a in args])
            n = len(args)
    """
    for i, a in enumerate(args):
        assert a.ndim == 1, "arg {:d} is not rank 1".format(i)
    return np.concatenate([np.reshape(xi, [-1, 1]) for xi in np.meshgrid(*args)], axis=1)
and for TensorFlow:
import tensorflow as tf

def cartesian_product(*args: tf.Tensor) -> tf.Tensor:
    """
    Produce the cartesian product of arbitrary length vectors.

    Parameters
    ----------
    tf.Tensor args
        vector of points of interest in each dimension

    Returns
    -------
    tf.Tensor
        the cartesian product of size [m x n] wherein:
            m = prod([len(a) for a in args])
            n = len(args)
    """
    for i, a in enumerate(args):
        tf.assert_rank(a, 1, message="arg {:d} is not rank 1".format(i))
    return tf.concat([tf.reshape(xi, [-1, 1]) for xi in tf.meshgrid(*args)], axis=1)
More generally, if you have two 2d numpy arrays a and b, and you want to concatenate every row of a to every row of b (a cartesian product of rows, kind of like a join in a database), you can use this method:
import numpy

def join_2d(a, b):
    assert a.dtype == b.dtype
    a_part = numpy.tile(a, (len(b), 1))
    b_part = numpy.repeat(b, len(a), axis=0)
    return numpy.hstack((a_part, b_part))
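A small usage example to show the resulting row ordering:

a = numpy.array([[1, 2], [3, 4]])
b = numpy.array([[5, 6], [7, 8]])
print(join_2d(a, b))
# [[1 2 5 6]
#  [3 4 5 6]
#  [1 2 7 8]
#  [3 4 7 8]]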
The fastest you can get is either by combining a generator expression with the map function:
import numpy as np
import datetime

a = np.arange(1000)
b = np.arange(200)

start = datetime.datetime.now()
foo = (item for sublist in [list(map(lambda x: (x, i), a)) for i in b] for item in sublist)
print(list(foo))
print('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))
Outputs (actually the whole resulting list is printed):
[(0, 0), (1, 0), ...,(998, 199), (999, 199)]
execution time: 1.253567 s
or by using a double generator expression:
a = np.arange(1000)
b = np.arange(200)
start = datetime.datetime.now()
foo = ((x,y) for x in a for y in b)
print (list(foo))
print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))
Outputs (whole list printed):
[(0, 0), (1, 0), ...,(998, 199), (999, 199)]
execution time: 1.187415 s
Take into account that most of the computation time goes into the printing command. The generator calculations are otherwise decently efficient. Without printing the calculation times are:
execution time: 0.079208 s
for generator expression + map function and:
execution time: 0.007093 s
for the double generator expression.
If what you actually want is to calculate the product of each of the coordinate pairs, the fastest is to solve it as a numpy matrix product:
a = np.arange(1000)
b = np.arange(200)
start = datetime.datetime.now()
foo = np.dot(np.asmatrix([[i,0] for i in a]), np.asmatrix([[i,0] for i in b]).T)
print (foo)
print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))
Outputs:
[[ 0 0 0 ..., 0 0 0]
[ 0 1 2 ..., 197 198 199]
[ 0 2 4 ..., 394 396 398]
...,
[ 0 997 1994 ..., 196409 197406 198403]
[ 0 998 1996 ..., 196606 197604 198602]
[ 0 999 1998 ..., 196803 197802 198801]]
execution time: 0.003869 s
and without printing (in this case it doesn't save much since only a tiny piece of the matrix is actually printed out):
execution time: 0.003083 s
This can also be done easily by using the itertools.product method:
from itertools import product
import numpy as np
x = np.array([1, 2, 3])
y = np.array([4, 5])
cart_prod = np.array(list(product(*[x, y])),dtype='int32')
Result:
array([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]], dtype=int32)
Execution time: 0.000155 s
In the specific case that you need to perform simple operations such as addition on each pair, you can introduce an extra dimension and let broadcasting do the job:
>>> a, b = np.array([1,2,3]), np.array([10,20,30])
>>> a[None,:] + b[:,None]
array([[11, 12, 13],
[21, 22, 23],
[31, 32, 33]])
I'm not sure if there is any similar way to actually get the pairs themselves.
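One possibility, as a sketch: broadcast both arrays to the common shape explicitly, then stack and flatten:

import numpy as np

a, b = np.array([1, 2, 3]), np.array([10, 20, 30])
pairs = np.stack(np.broadcast_arrays(a[None, :], b[:, None]), axis=-1).reshape(-1, 2)
# each row is one (a, b) pair, with a varying fastest, matching the grid above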
I'm a bit late to the party, but I encountered a tricky variant of this problem.
Let's say I want the cartesian product of several arrays, but that cartesian product ends up being much larger than the computer's memory (however, the computations done with that product are fast, or at least parallelizable).
The obvious solution is to divide this cartesian product into chunks, and treat these chunks one after the other (in a sort of "streaming" manner). You can do that easily with itertools.product, but it's horrendously slow. Also, none of the proposed solutions here (as fast as they are) give us this possibility. The solution I propose uses Numba, and is slightly faster than the "canonical" cartesian_product mentioned here. It's pretty long because I tried to optimize it everywhere I could.
import numba as nb
import numpy as np
from typing import List


@nb.njit(nb.types.Tuple((nb.int32[:, :], nb.int32[:]))(nb.int32[:], nb.int32[:], nb.int64, nb.int64))
def cproduct(sizes: np.ndarray, current_tuple: np.ndarray, start_idx: int, end_idx: int):
    """Generates ids tuples from start_id to end_id"""
    assert len(sizes) >= 2
    assert start_idx < end_idx

    tuples = np.zeros((end_idx - start_idx, len(sizes)), dtype=np.int32)
    tuple_idx = 0
    # stores the current combination
    current_tuple = current_tuple.copy()
    while tuple_idx < end_idx - start_idx:
        tuples[tuple_idx] = current_tuple
        current_tuple[0] += 1
        # using a condition here instead of including this in the inner loop
        # to gain a bit of speed: this is going to be tested each iteration,
        # and starting a loop to have it end right away is a bit silly
        if current_tuple[0] == sizes[0]:
            # the reset to 0 and subsequent increment amount to carrying
            # the number to the higher "power"
            current_tuple[0] = 0
            current_tuple[1] += 1
            for i in range(1, len(sizes) - 1):
                if current_tuple[i] == sizes[i]:
                    # same as before, but in a loop, since this is going
                    # to get called less often
                    current_tuple[i + 1] += 1
                    current_tuple[i] = 0
                else:
                    break
        tuple_idx += 1
    return tuples, current_tuple


def chunked_cartesian_product_ids(sizes: List[int], chunk_size: int):
    """Just generates chunks of the cartesian product of the ids of each
    input array (thus, we just need their sizes here, not the actual arrays)"""
    prod = np.prod(sizes)

    # putting the largest number at the front to more efficiently make use
    # of the cproduct numba function
    sizes = np.array(sizes, dtype=np.int32)
    sorted_idx = np.argsort(sizes)[::-1]
    sizes = sizes[sorted_idx]
    if chunk_size > prod:
        chunk_bounds = (np.array([0, prod])).astype(np.int64)
    else:
        num_chunks = np.maximum(np.ceil(prod / chunk_size), 2).astype(np.int32)
        chunk_bounds = (np.arange(num_chunks + 1) * chunk_size).astype(np.int64)
        chunk_bounds[-1] = prod
    current_tuple = np.zeros(len(sizes), dtype=np.int32)
    for start_idx, end_idx in zip(chunk_bounds[:-1], chunk_bounds[1:]):
        tuples, current_tuple = cproduct(sizes, current_tuple, start_idx, end_idx)
        # re-arrange columns to match the original order of the sizes list
        # before yielding
        yield tuples[:, np.argsort(sorted_idx)]


def chunked_cartesian_product(*arrays, chunk_size=2 ** 25):
    """Returns chunks of the full cartesian product, with arrays of shape
    (chunk_size, n_arrays). The last chunk will obviously have the size of the
    remainder"""
    array_lengths = [len(array) for array in arrays]
    for array_ids_chunk in chunked_cartesian_product_ids(array_lengths, chunk_size):
        slices_lists = [arrays[i][array_ids_chunk[:, i]] for i in range(len(arrays))]
        yield np.vstack(slices_lists).swapaxes(0, 1)


def cartesian_product(*arrays):
    """Actual cartesian product, not chunked, still fast"""
    total_prod = np.prod([len(array) for array in arrays])
    return next(chunked_cartesian_product(*arrays, chunk_size=total_prod))


a = np.arange(0, 3)
b = np.arange(8, 10)
c = np.arange(13, 16)

for cartesian_tuples in chunked_cartesian_product(*[a, b, c], chunk_size=5):
    print(cartesian_tuples)
This would output our cartesian product in chunks of 5 3-tuples:
[[ 0 8 13]
[ 0 8 14]
[ 0 8 15]
[ 1 8 13]
[ 1 8 14]]
[[ 1 8 15]
[ 2 8 13]
[ 2 8 14]
[ 2 8 15]
[ 0 9 13]]
[[ 0 9 14]
[ 0 9 15]
[ 1 9 13]
[ 1 9 14]
[ 1 9 15]]
[[ 2 9 13]
[ 2 9 14]
[ 2 9 15]]
If you're willing to understand what is being done here, the intuition behind the njitted function is to enumerate each "number" in a weird numerical base whose digits are bounded by the sizes of the input arrays (instead of the same number in regular binary, decimal or hexadecimal bases).
Obviously, this solution is interesting for large products. For small ones, the overhead might be a bit costly.
NOTE: since numba is still under heavy development, I'm using numba 0.50 to run this, with Python 3.6.
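To make the mixed-radix intuition concrete, here is a tiny pure-Python sketch that decodes a flat index into its digit tuple in the base defined by the array sizes, with the first size as the fastest-varying digit, just like cproduct above:

def index_to_tuple(idx, sizes):
    digits = []
    for size in sizes:        # least-significant (fastest-varying) digit first
        digits.append(idx % size)
        idx //= size
    return tuple(digits)

assert index_to_tuple(7, [3, 2, 3]) == (1, 0, 1)   # 7 == 1*1 + 0*3 + 1*(3*2)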
Yet another one:
>>> x1, y1 = np.meshgrid(x, y)
>>> np.c_[x1.ravel(), y1.ravel()]
array([[1, 4],
[2, 4],
[3, 4],
[1, 5],
[2, 5],
[3, 5]])
Inspired by Ashkan's answer, you can also try the following.
>>> x, y = np.meshgrid(x, y)
>>> np.concatenate([x.flatten().reshape(-1,1), y.flatten().reshape(-1,1)], axis=1)
This will give you the required cartesian product!
This is a generalized version of the accepted answer (Cartesian product of multiple arrays using numpy.tile and numpy.repeat functions).
import numpy as np
from functools import reduce
from operator import mul

def cartesian_product(arrays):
    return np.vstack(
        [np.tile(
            np.repeat(arrays[j], reduce(mul, map(len, arrays[j+1:]), 1)),
            reduce(mul, map(len, arrays[:j]), 1),
        )
        for j in range(len(arrays))]
    ).T
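A quick consistency check against itertools.product (same ordering, same content):

import itertools
import numpy as np

xs = [np.array([1, 2, 3]), np.array([4, 5])]
assert np.array_equal(cartesian_product(xs),
                      np.array(list(itertools.product(*xs))))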
If you are willing to use PyTorch, I should think it is highly efficient:
>>> import torch
>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y))
tensor([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]])
and you can easily get a numpy array:
>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y)).numpy()
array([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]])

Julia npzread(filename.npz) very slow

I want to load numpy arrays inside julia, but it is very slow to load *.npz files.
python3 code
import numpy as np
x = np.zeros([300, 400])
y = np.zeros([500, 600])
np.save('somepath/x.npy', x)
np.save('somepath/y.npy', y)
np.savez('somepath/xy.npz', x=x, y=y)
julia 0.6 code
using NPZ

function load_xy()
    xy = npzread("somepath/xy.npz")
end

function load_x_and_y()
    x = npzread("somepath/x.npy")
    y = npzread("somepath/y.npy")
end

@time(load_xy())
@time(load_x_and_y())
load_xy():
22.071646 seconds (24.99 M allocations: 820.903 MiB, 0.87% gc time)
load_x_and_y():
0.010131 seconds (2.88 k allocations: 9.782 MiB, 19.20% gc time)
QUESTIONS:
Why the large number of allocations for load_xy()?
How can I change my code so that load_xy() is also fast?
