How to efficiently calculate squared distance of each element to every other in a matrix? - python-3.x

I have a matrix of the below format:
matrix = array([[-0.2436986 , -0.25583658, -0.16579486, ..., -0.04291612,
-0.06026303, 0.08564489],
[-0.08684622, -0.21300158, -0.04034272, ..., -0.01995692,
-0.07747065, 0.06965207],
[-0.34814256, -0.20597479, 0.06931241, ..., -0.1236965 ,
-0.1300714 , -0.110122 ],
...,
[-0.04154776, -0.07538085, 0.01860147, ..., -0.01494173,
-0.08960884, -0.21338603],
[-0.34039265, -0.24616522, 0.10838407, ..., 0.22280858,
-0.03465452, 0.04178255],
[-0.30251586, -0.23072125, -0.01975435, ..., 0.34529492,
-0.03508861, 0.00699677]], dtype=float32)
Since I want to calculate the squared distance of each element to every other, I am using the code below:
import numpy as np

def sq_dist(a, b):
    """
    Returns the squared distance between two vectors
    Args:
      a (ndarray (n,)): vector with n features
      b (ndarray (n,)): vector with n features
    Returns:
      d (float): distance
    """
    d = np.sum(np.square(a - b))
    return d

dim = len(matrix)
dist = np.zeros((dim, dim))
for i in range(dim):
    for j in range(dim):
        dist[i, j] = sq_dist(matrix[i, :], matrix[j, :])
I am getting the correct result, but it takes 17 minutes for just 5,000 rows (i.e. if I use 5,000 rows instead of 100k).
Since the full output is a 100k x 100k matrix, the cluster job fails after 5 hours.
How can I do this efficiently for a large matrix?
I am using Python 3.8 and PySpark.
Output matrix should be like:
dist = array([[0. , 0.57371938, 0.78593194, ..., 0.83454031, 0.58932155,
0.76440328],
[0.57371938, 0. , 0.66285896, ..., 0.89251578, 0.76511419,
0.59261483],
[0.78593194, 0.66285896, 0. , ..., 0.60711896, 0.80852598,
0.73895919],
...,
[0.83454031, 0.89251578, 0.60711896, ..., 0. , 1.01311994,
0.84679914],
[0.58932155, 0.76511419, 0.80852598, ..., 1.01311994, 0. ,
0.5392195 ],
[0.76440328, 0.59261483, 0.73895919, ..., 0.84679914, 0.5392195 ,
0. ]])

You can make it significantly faster by using numba:
import numpy as np
import numba as nb

@nb.njit(parallel=True)
def square_dist(matrix):
    dim = len(matrix)
    assert dim > 0
    dist = np.zeros((dim, dim))
    for i in nb.prange(dim):
        for j in nb.prange(dim):
            dist[i][j] = np.square(matrix[i, :] - matrix[j, :]).sum()
    return dist
Test and time:
rng = np.random.default_rng()
matrix = rng.random((200, 10))

# `op` is the original double-loop implementation from the question
assert np.allclose(op(matrix), square_dist(matrix))

%timeit op(matrix)
%timeit square_dist(matrix)
Output:
181 ms ± 556 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
947 µs ± 43.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

First, let's do a reality check. Computing N^2 distances where each one takes 3N-1 operations (N subtractions, N multiplications and N-1 additions) means you have to perform about 3N^3 arithmetic operations. When N is 100k, that totals 3x10^15 operations. A modern CPU with AVX-512 running at 3 GHz (3x10^9 Hz) can perform 3x10^9 [cycles/sec] x (512 / 32) [float32 entries in a vector] x 2 [vector ALUs per core] = about 10^11 float32 operations/second. Therefore, to compute all entries in your distance matrix it will take no less than 3x10^15 / 10^11 = 30000 seconds, or 8 hrs and 20 mins. This is a hard lower limit, only achievable if all operations vectorise perfectly, which they do not, e.g. the horizontal sum after the squaring. If the CPU isn't AVX-512 capable but only supports AVX2, the vectors are half as long and the time goes up to about 17 hrs. All this assumes that the data fits in the CPU cache - it actually doesn't, and it needs proper prefetching.
First thing you can do is cut the compute time in half by noticing that dij = dji and also dii = 0:
for i in range(dim):
    dist[i, i] = 0
    for j in range(i+1, dim):
        dist[i, j] = dist[j, i] = np.sum(np.square(matrix[i, :] - matrix[j, :]))
Notice the inner loop here runs only for i < j, and the call to sq_dist has been inlined to save you 5x10^9 unnecessary function calls!
But even then, you still need more than 4 hrs on that AVX-512 CPU (more than 8 hrs with AVX2 only).
If you really must cut down that compute time, you need to run it in parallel. With PySpark that means you have to store the vectors in a dataset, perform a self-join, and write a UDF that uses the BLAS implementation that ships with Spark (or install a native one) to compute the distance metric. Unfortunately, this is a low-level interface of Spark and it's only exposed to UDFs written in JVM languages - check this question for a Scala-based solution.
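For completeness, here is a minimal NumPy-only sketch (my own illustration, not from the answers above) of the usual algebraic expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b, which replaces the Python double loop with one matrix multiplication per block of rows. Note that for 100k float32 rows the full output alone is ~40 GB, so it is computed in row blocks; the block size of 1024 is an arbitrary choice:
import numpy as np

def sq_dist_blocked(matrix, block=1024):
    # squared norm of every row, shape (n,)
    sq_norms = np.einsum('ij,ij->i', matrix, matrix)
    n = matrix.shape[0]
    dist = np.empty((n, n), dtype=matrix.dtype)
    for start in range(0, n, block):
        stop = min(start + block, n)
        # ||a||^2 + ||b||^2 - 2*a.b for one block of rows against all rows
        block_dist = (sq_norms[start:stop, None] + sq_norms[None, :]
                      - 2.0 * (matrix[start:stop] @ matrix.T))
        np.maximum(block_dist, 0, out=block_dist)  # clip tiny negatives caused by round-off
        dist[start:stop] = block_dist
    return dist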

Related

Joint construction of a random permutation and its inverse using NumPy

I am looking to construct a random permutation of [1, 2, ..., n] along with its inverse using NumPy. In my application, n can be on the order of 100 million so I am looking for a solution that constructs both the permutation and its inverse in minimum time.
What I've tried:
1. Computing a random permutation and its inverse separately using inbuilt NumPy functions:
p = np.random.permutation(n)
pinv = np.argsort(p)
2. The same idea as approach 1, but using the solution provided here. I found that this solution can speed up the computation of pinv by an order of magnitude:
def invert_permutation_numpy2(permutation):
    inv = np.empty_like(permutation)
    inv[permutation] = np.arange(len(inv), dtype=inv.dtype)
    return inv
p = np.random.permutation(n)
pinv = invert_permutation_numpy2(p)
I'm hoping that there is a solution that computes p and pinv jointly and yields additional speedup.
The following is a straightforward implementation of the Fisher-Yates method (pseudocode from here). When compiled with numba it's faster than numpy:
import numpy as np
import numba

@numba.njit
def randperm(n):
    """Permutation of 1, 2, ..., n and its inverse"""
    p = np.arange(1, n+1, dtype=np.int32)  # initialize with identity permutation
    pinv = np.empty_like(p)
    for i in range(n-1, 0, -1):  # loop over all items except the first one
        z = np.random.randint(0, i+1)
        temp = p[z]  # swap numbers at i and z
        p[z] = p[i]
        p[i] = temp
        pinv[temp-1] = i  # pinv[p[z]-1] = i
    pinv[p[0]-1] = 0
    return p, pinv
Comparison:
%timeit p, pinv = randperm(100_000_000)
# 12.3 s ± 212 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
p = np.random.permutation(np.arange(1, 100_000_000+1, dtype=np.int32))
pinv = np.argsort(p)
# 31.8 s ± 439 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(NB: np.random.permutation(n) gives a permutation of 0, 1, ..., n-1 instead of 1, 2, ..., n.)
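A quick sanity check of the returned pair, assuming the 1-based convention used above (p contains the values 1..n, pinv contains 0-based positions):
p, pinv = randperm(10)
# pinv[k-1] is the 0-based position of the value k in p
assert np.all(p[pinv] == np.arange(1, 11))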

Short-circuiting an np.all with a nested np.less for large array comparisons in numpy

In my current code (see MWE) I have a bottleneck where I perform np.all with a nested np.less on large 2D arrays. I know that as soon as np.less produces a single False value in a row, we can stop checking that row, because the overall result for it will evaluate to False (since I am AND-ing all values along the last axis together).
Is there a way with numba or numpy where I can exploit this "early exit/short-circuit" condition to get a meaningful speed-up in this calculation?
The second-to-last line in the MWE is what I'm trying to speed up. Please note N and M can be very large, but only very few comparisons will actually evaluate to True.
import numpy as np
N = 10000
M = 10 # Reduced to small value to show that sometimes the comparisons evaluate to 'True'
array = np.random.uniform(low=0.0, high=10.0, size=(N, M))
comparison_array = np.random.uniform(low=0.0, high=10.0, size=(M))
# Can we apply an early exit condition on this?
mask = np.all(np.less(array, comparison_array), axis=-1)
print(f"Number of 'True' comparisons: {np.sum(mask)}")
Here's a numba version, developed enough to work, not necessarily optimized:
import numba
import numpy as np

@numba.njit
def foo(arr, carr):
    N, M = arr.shape
    mask = np.ones(N, dtype=np.bool_)
    for i in range(N):
        for j in range(M):
            if arr[i, j] >= carr[j]:
                mask[i] = False
                break
    return mask
Testing:
In [178]: np.sum(foo(array, comparison_array))
Out[178]: 2
In [179]: np.sum(np.all(np.less(array, comparison_array), axis=1))
Out[179]: 2
timing:
In [180]: timeit np.sum(foo(array, comparison_array))
155 µs ± 6.36 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [181]: timeit np.sum(np.all(np.less(array, comparison_array), axis=1))
451 µs ± 5.19 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
That's a decent improvement.
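If the rows are large and independent, a parallel variant is a natural next step; here is a sketch (my addition, assuming numba's parallel mode is available on your machine; the gain depends on N and on where the early exits occur):
import numba
import numpy as np

@numba.njit(parallel=True)
def foo_parallel(arr, carr):
    N, M = arr.shape
    mask = np.ones(N, dtype=np.bool_)
    for i in numba.prange(N):  # rows are independent, so parallelise over them
        for j in range(M):
            if arr[i, j] >= carr[j]:
                mask[i] = False
                break
    return mask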

How to calculate the orthogonal vector of a unit vector with numpy?

I have a set of unit vectors in a numpy array u:
import numpy as np
a = np.arange(12).reshape(2,6) # generate some vectors
u = a/np.linalg.norm(a, axis=0) # turn them into unit vectors
print(u)
[[0. 0.5547002 0.62469505 0.65079137 0.66436384 0.67267279]
[1. 0.83205029 0.78086881 0.7592566 0.74740932 0.73994007]]
Now I want to generate the vectors orthogonal to each vector (just by flipping the components of the vectors like (x,y) -> (-y,x) ):
ortogonal_u = np.array(-u[1,:], u[0,:])
and get the error
TypeError: data type not understood
What am I doing wrong? How can I fix it?
Is there a better way to find the orthogonal vectors of such a set of vectors? I would like it to be performant.
If you want this to be fast for large arrays, it helps to do things in place. The following will do that:
a = np.arange(12).reshape(2,6)
a = a[::-1, :] # change the indexing to reverse the vector to swap x and y (note that this doesn't do any copying)
np.negative(a[0,:], out=a[0, :]) # negate one axis
# [[ -6 -7 -8 -9 -10 -11]
# [ 0 1 2 3 4 5]]
Speed testing this and some of the other posted answers:
N = 10000000
a0 = np.arange(2*N).reshape(2,N)
def f0(a):
    x = a[::-1, :]
    np.negative(x[0, :], out=x[0, :])
    return x

def f1(a):
    x = np.array([-a[1, :], a[0, :]])
    return x

def f2(a):
    x = np.flip(a, axis=0) * np.array([[1], [-1]])
    return x
%timeit f0(a0)
# 6.69 ms ± 81.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit f1(a0)
# 103 ms ± 1.41 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit f2(a0)
# 81.6 ms ± 1.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
So the in-place operation is more than 10x faster for a very large array. (This is particularly fast since all it does is change the indexing direction, which is a single operation on the array header and independent of array size, and then change the sign of one row, so it's an unusual speed gain.) Currently, I suspect the sign change requires a copy; there may be a way to do this without one, but I don't know it. Also, note that if you do the operation in place, the original array is overwritten, so this may not work for your use case.
You're passing two data arguments to the array constructor, but it only expects one. When a second argument is passed, array expects it to be a description of the datatype of the array, and u[0, :] is not a valid type descriptor.
The minimal change needed to get the expected result is to place the two slices in a list.
np.array([-u[1,:], u[0,:]])
You can use flip and broadcast operations:
import numpy as np
a = np.arange(12).reshape(2,6) # generate some vectors
u = a/np.linalg.norm(a, axis=0) # turn them into unit vectors
print(u)
print(np.flip(u, axis=0) * np.array([[1], [-1]])) # NEW LINE HERE
[[0. 0.14142136 0.24253563 0.31622777 0.37139068 0.41380294]
[1. 0.98994949 0.9701425 0.9486833 0.92847669 0.91036648]]
[[ 1. 0.98994949 0.9701425 0.9486833 0.92847669 0.91036648]
[-0. -0.14142136 -0.24253563 -0.31622777 -0.37139068 -0.41380294]]

How to fill out 2-d array with elements from 1-d arrays efficiently? [duplicate]

I have two numpy arrays that define the x and y axes of a grid. For example:
x = numpy.array([1,2,3])
y = numpy.array([4,5])
I'd like to generate the Cartesian product of these arrays to generate:
array([[1,4],[2,4],[3,4],[1,5],[2,5],[3,5]])
in a way that's not terribly inefficient, since I need to do this many times in a loop. I'm assuming that converting them to a Python list, using itertools.product, and then going back to a numpy array is not the most efficient approach.
A canonical cartesian_product (almost)
There are many approaches to this problem with different properties. Some are faster than others, and some are more general-purpose. After a lot of testing and tweaking, I've found that the following function, which calculates an n-dimensional cartesian_product, is faster than most others for many inputs. For a pair of approaches that are slightly more complex, but are even a bit faster in many cases, see the answer by Paul Panzer.
Given that answer, this is no longer the fastest implementation of the cartesian product in numpy that I'm aware of. However, I think its simplicity will continue to make it a useful benchmark for future improvement:
def cartesian_product(*arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)
It's worth mentioning that this function uses ix_ in an unusual way; whereas the documented use of ix_ is to generate indices into an array, it just so happens that arrays with the same shape can be used for broadcasted assignment. Many thanks to mgilson, who inspired me to try using ix_ this way, and to unutbu, who provided some extremely helpful feedback on this answer, including the suggestion to use numpy.result_type.
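To illustrate what ix_ contributes here (a small sketch of my own, not part of the original answer): it returns open-grid arrays whose shapes broadcast against each other, which is exactly what the assignment arr[..., i] = a relies on.
import numpy

open_grids = numpy.ix_(numpy.array([1, 2, 3]), numpy.array([4, 5]))
print([g.shape for g in open_grids])
# [(3, 1), (1, 2)] -- these broadcast to (3, 2), matching arr[..., i] above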
Notable alternatives
It's sometimes faster to write contiguous blocks of memory in Fortran order. That's the basis of this alternative, cartesian_product_transpose, which has proven faster on some hardware than cartesian_product (see below). However, Paul Panzer's answer, which uses the same principle, is even faster. Still, I include this here for interested readers:
def cartesian_product_transpose(*arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)
    dtype = numpy.result_type(*arrays)
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T
After coming to understand Panzer's approach, I wrote a new version that's almost as fast as his, and is almost as simple as cartesian_product:
def cartesian_product_simple_transpose(arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([la] + [len(a) for a in arrays], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[i, ...] = a
    return arr.reshape(la, -1).T
This appears to have some constant-time overhead that makes it run slower than Panzer's for small inputs. But for larger inputs, in all the tests I ran, it performs just as well as his fastest implementation (cartesian_product_transpose_pp).
In the following sections, I include some tests of other alternatives. These are now somewhat out of date, but rather than duplicate effort, I've decided to leave them here out of historical interest. For up-to-date tests, see Panzer's answer, as well as Nico Schlömer's.
Tests against alternatives
Here is a battery of tests that show the performance boost that some of these functions provide relative to a number of alternatives. All the tests shown here were performed on a quad-core machine, running Mac OS 10.12.5, Python 3.6.1, and numpy 1.12.1. Variations on hardware and software are known to produce different results, so YMMV. Run these tests for yourself to be sure!
Definitions:
import numpy
import itertools
from functools import reduce

### Two-dimensional products ###

def repeat_product(x, y):
    return numpy.transpose([numpy.tile(x, len(y)),
                            numpy.repeat(y, len(x))])

def dstack_product(x, y):
    return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)

### Generalized N-dimensional products ###

def cartesian_product(*arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)

def cartesian_product_transpose(*arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)
    dtype = numpy.result_type(*arrays)
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T

# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(*arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype
    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)
    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j*m:(j+1)*m, 1:] = out[0:m, 1:]
    return out

def cartesian_product_itertools(*arrays):
    return numpy.array(list(itertools.product(*arrays)))

### Test code ###

name_func = [('repeat_product', repeat_product),
             ('dstack_product', dstack_product),
             ('cartesian_product', cartesian_product),
             ('cartesian_product_transpose', cartesian_product_transpose),
             ('cartesian_product_recursive', cartesian_product_recursive),
             ('cartesian_product_itertools', cartesian_product_itertools)]

def test(in_arrays, test_funcs):
    global func
    global arrays
    arrays = in_arrays
    for name, func in test_funcs:
        print('{}:'.format(name))
        %timeit func(*arrays)

def test_all(*in_arrays):
    test(in_arrays, name_func)

# `cartesian_product_recursive` throws an
# unexpected error when used on more than
# two input arrays, so for now I've removed
# it from these tests.
def test_cartesian(*in_arrays):
    test(in_arrays, name_func[2:4] + name_func[-1:])

x10 = [numpy.arange(10)]
x50 = [numpy.arange(50)]
x100 = [numpy.arange(100)]
x500 = [numpy.arange(500)]
x1000 = [numpy.arange(1000)]
Test results:
In [2]: test_all(*(x100 * 2))
repeat_product:
67.5 µs ± 633 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
dstack_product:
67.7 µs ± 1.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cartesian_product:
33.4 µs ± 558 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cartesian_product_transpose:
67.7 µs ± 932 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cartesian_product_recursive:
215 µs ± 6.01 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product_itertools:
3.65 ms ± 38.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [3]: test_all(*(x500 * 2))
repeat_product:
1.31 ms ± 9.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
dstack_product:
1.27 ms ± 7.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product:
375 µs ± 4.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product_transpose:
488 µs ± 8.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cartesian_product_recursive:
2.21 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
105 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [4]: test_all(*(x1000 * 2))
repeat_product:
10.2 ms ± 132 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
dstack_product:
12 ms ± 120 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product:
4.75 ms ± 57.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_transpose:
7.76 ms ± 52.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_recursive:
13 ms ± 209 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
422 ms ± 7.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In all cases, cartesian_product as defined at the beginning of this answer is fastest.
For those functions that accept an arbitrary number of input arrays, it's worth checking performance when len(arrays) > 2 as well. (Until I can determine why cartesian_product_recursive throws an error in this case, I've removed it from these tests.)
In [5]: test_cartesian(*(x100 * 3))
cartesian_product:
8.8 ms ± 138 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_transpose:
7.87 ms ± 91.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
518 ms ± 5.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: test_cartesian(*(x50 * 4))
cartesian_product:
169 ms ± 5.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
cartesian_product_transpose:
184 ms ± 4.32 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
cartesian_product_itertools:
3.69 s ± 73.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [7]: test_cartesian(*(x10 * 6))
cartesian_product:
26.5 ms ± 449 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
cartesian_product_transpose:
16 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
cartesian_product_itertools:
728 ms ± 16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: test_cartesian(*(x10 * 7))
cartesian_product:
650 ms ± 8.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
cartesian_product_transpose:
518 ms ± 7.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
cartesian_product_itertools:
8.13 s ± 122 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
As these tests show, cartesian_product remains competitive until the number of input arrays rises above (roughly) four. After that, cartesian_product_transpose does have a slight edge.
It's worth reiterating that users with other hardware and operating systems may see different results. For example, unutbu reports seeing the following results for these tests using Ubuntu 14.04, Python 3.4.3, and numpy 1.14.0.dev0+b7050a9:
>>> %timeit cartesian_product_transpose(x500, y500)
1000 loops, best of 3: 682 µs per loop
>>> %timeit cartesian_product(x500, y500)
1000 loops, best of 3: 1.55 ms per loop
Below, I go into a few details about earlier tests I've run along these lines. The relative performance of these approaches has changed over time, for different hardware and different versions of Python and numpy. While it's not immediately useful for people using up-to-date versions of numpy, it illustrates how things have changed since the first version of this answer.
A simple alternative: meshgrid + dstack
The currently accepted answer uses tile and repeat to broadcast two arrays together. But the meshgrid function does practically the same thing. Here's the output of tile and repeat before being passed to transpose:
In [1]: import numpy
In [2]: x = numpy.array([1,2,3])
...: y = numpy.array([4,5])
...:
In [3]: [numpy.tile(x, len(y)), numpy.repeat(y, len(x))]
Out[3]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])]
And here's the output of meshgrid:
In [4]: numpy.meshgrid(x, y)
Out[4]:
[array([[1, 2, 3],
[1, 2, 3]]), array([[4, 4, 4],
[5, 5, 5]])]
As you can see, it's almost identical. We need only reshape the result to get exactly the same result.
In [5]: xt, xr = numpy.meshgrid(x, y)
...: [xt.ravel(), xr.ravel()]
Out[5]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])]
Rather than reshaping at this point, though, we could pass the output of meshgrid to dstack and reshape afterwards, which saves some work:
In [6]: numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)
Out[6]:
array([[1, 4],
[2, 4],
[3, 4],
[1, 5],
[2, 5],
[3, 5]])
Contrary to the claim in this comment, I've seen no evidence that different inputs will produce differently shaped outputs, and as the above demonstrates, they do very similar things, so it would be quite strange if they did. Please let me know if you find a counterexample.
Testing meshgrid + dstack vs. repeat + transpose
The relative performance of these two approaches has changed over time. In an earlier version of Python (2.7), the result using meshgrid + dstack was noticeably faster for small inputs. (Note that these tests are from an old version of this answer.) Definitions:
>>> def repeat_product(x, y):
...     return numpy.transpose([numpy.tile(x, len(y)),
...                             numpy.repeat(y, len(x))])
...
>>> def dstack_product(x, y):
...     return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)
...
For moderately-sized input, I saw a significant speedup. But I retried these tests with more recent versions of Python (3.6.1) and numpy (1.12.1), on a newer machine. The two approaches are almost identical now.
Old Test
>>> x, y = numpy.arange(500), numpy.arange(500)
>>> %timeit repeat_product(x, y)
10 loops, best of 3: 62 ms per loop
>>> %timeit dstack_product(x, y)
100 loops, best of 3: 12.2 ms per loop
New Test
In [7]: x, y = numpy.arange(500), numpy.arange(500)
In [8]: %timeit repeat_product(x, y)
1.32 ms ± 24.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [9]: %timeit dstack_product(x, y)
1.26 ms ± 8.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
As always, YMMV, but this suggests that in recent versions of Python and numpy, these are interchangeable.
Generalized product functions
In general, we might expect that using built-in functions will be faster for small inputs, while for large inputs, a purpose-built function might be faster. Furthermore for a generalized n-dimensional product, tile and repeat won't help, because they don't have clear higher-dimensional analogues. So it's worth investigating the behavior of purpose-built functions as well.
Most of the relevant tests appear at the beginning of this answer, but here are a few of the tests performed on earlier versions of Python and numpy for comparison.
The cartesian function defined in another answer used to perform pretty well for larger inputs. (It's the same as the function called cartesian_product_recursive above.) In order to compare cartesian to dstack_product, we use just two dimensions.
Here again, the old test showed a significant difference, while the new test shows almost none.
Old Test
>>> x, y = numpy.arange(1000), numpy.arange(1000)
>>> %timeit cartesian([x, y])
10 loops, best of 3: 25.4 ms per loop
>>> %timeit dstack_product(x, y)
10 loops, best of 3: 66.6 ms per loop
New Test
In [10]: x, y = numpy.arange(1000), numpy.arange(1000)
In [11]: %timeit cartesian([x, y])
12.1 ms ± 199 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [12]: %timeit dstack_product(x, y)
12.7 ms ± 334 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
As before, dstack_product still beats cartesian at smaller scales.
New Test (redundant old test not shown)
In [13]: x, y = numpy.arange(100), numpy.arange(100)
In [14]: %timeit cartesian([x, y])
215 µs ± 4.75 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [15]: %timeit dstack_product(x, y)
65.7 µs ± 1.15 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
These distinctions are, I think, interesting and worth recording; but they are academic in the end. As the tests at the beginning of this answer showed, all of these versions are almost always slower than cartesian_product, defined at the very beginning of this answer -- which is itself a bit slower than the fastest implementations among the answers to this question.
>>> numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))])
array([[1, 4],
[2, 4],
[3, 4],
[1, 5],
[2, 5],
[3, 5]])
See Using numpy to build an array of all combinations of two arrays for a general solution for computing the Cartesian product of N arrays.
You can just use a normal list comprehension in Python:
x = numpy.array([1,2,3])
y = numpy.array([4,5])
[[x0, y0] for x0 in x for y0 in y]
which should give you
[[1, 4], [1, 5], [2, 4], [2, 5], [3, 4], [3, 5]]
I was interested in this as well and did a little performance comparison, perhaps somewhat clearer than in @senderle's answer.
For two arrays (the classical case):
For four arrays:
(Note that the length of the arrays is only a few dozen entries here.)
Code to reproduce the plots:
from functools import reduce
import itertools
import numpy
import perfplot
def dstack_product(arrays):
    return numpy.dstack(numpy.meshgrid(*arrays, indexing="ij")).reshape(-1, len(arrays))

# Generalized N-dimensional products
def cartesian_product(arrays):
    la = len(arrays)
    dtype = numpy.find_common_type([a.dtype for a in arrays], [])
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)

def cartesian_product_transpose(arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = reduce(numpy.multiply, broadcasted[0].shape), len(broadcasted)
    dtype = numpy.find_common_type([a.dtype for a in arrays], [])
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T

# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype
    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)
    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j * m : (j + 1) * m, 1:] = out[0:m, 1:]
    return out

def cartesian_product_itertools(arrays):
    return numpy.array(list(itertools.product(*arrays)))

perfplot.show(
    setup=lambda n: 2 * (numpy.arange(n, dtype=float),),
    n_range=[2 ** k for k in range(13)],
    # setup=lambda n: 4 * (numpy.arange(n, dtype=float),),
    # n_range=[2 ** k for k in range(6)],
    kernels=[
        dstack_product,
        cartesian_product,
        cartesian_product_transpose,
        cartesian_product_recursive,
        cartesian_product_itertools,
    ],
    logx=True,
    logy=True,
    xlabel="len(a), len(b)",
    equality_check=None,
)
Building on @senderle's exemplary groundwork I've come up with two versions - one for C and one for Fortran layouts - that are often a bit faster.
cartesian_product_transpose_pp is - unlike @senderle's cartesian_product_transpose, which uses a different strategy altogether - a version of cartesian_product that uses the more favorable transposed memory layout + some very minor optimizations.
cartesian_product_pp sticks with the original memory layout. What makes it fast is its use of contiguous copying. Contiguous copies turn out to be so much faster that copying a full block of memory, even though only part of it contains valid data, is preferable to copying only the valid bits.
Some perfplots. I made separate ones for C and Fortran layouts, because these are different tasks IMO.
Names ending in 'pp' are my approaches.
1) many tiny factors (2 elements each)
2) many small factors (4 elements each)
3) three factors of equal length
4) two factors of equal length
Code (need to do separate runs for each plot b/c I couldn't figure out how to reset; also need to edit / comment in / out appropriately):
import numpy
import numpy as np
from functools import reduce
import itertools
import timeit
import perfplot
def dstack_product(arrays):
    return numpy.dstack(
        numpy.meshgrid(*arrays, indexing='ij')
    ).reshape(-1, len(arrays))

def cartesian_product_transpose_pp(arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty((la, *map(len, arrays)), dtype=dtype)
    idx = slice(None), *itertools.repeat(None, la)
    for i, a in enumerate(arrays):
        arr[i, ...] = a[idx[:la-i]]
    return arr.reshape(la, -1).T

def cartesian_product(arrays):
    la = len(arrays)
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)

def cartesian_product_transpose(arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)
    dtype = numpy.result_type(*arrays)
    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T

from itertools import accumulate, repeat, chain

def cartesian_product_pp(arrays, out=None):
    la = len(arrays)
    L = *map(len, arrays), la
    dtype = numpy.result_type(*arrays)
    arr = numpy.empty(L, dtype=dtype)
    arrs = *accumulate(chain((arr,), repeat(0, la-1)), np.ndarray.__getitem__),
    idx = slice(None), *itertools.repeat(None, la-1)
    for i in range(la-1, 0, -1):
        arrs[i][..., i] = arrays[i][idx[:la-i]]
        arrs[i-1][1:] = arrs[i]
    arr[..., 0] = arrays[0][idx]
    return arr.reshape(-1, la)

def cartesian_product_itertools(arrays):
    return numpy.array(list(itertools.product(*arrays)))

# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype
    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)
    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j*m:(j+1)*m, 1:] = out[0:m, 1:]
    return out
### Test code ###
if False:
    perfplot.save('cp_4el_high.png',
                  setup=lambda n: n*(numpy.arange(4, dtype=float),),
                  n_range=list(range(6, 11)),
                  kernels=[
                      dstack_product,
                      cartesian_product_recursive,
                      cartesian_product,
                      # cartesian_product_transpose,
                      cartesian_product_pp,
                      # cartesian_product_transpose_pp,
                  ],
                  logx=False,
                  logy=True,
                  xlabel='#factors',
                  equality_check=None
                  )
else:
    perfplot.save('cp_2f_T.png',
                  setup=lambda n: 2*(numpy.arange(n, dtype=float),),
                  n_range=[2**k for k in range(5, 11)],
                  kernels=[
                      # dstack_product,
                      # cartesian_product_recursive,
                      # cartesian_product,
                      cartesian_product_transpose,
                      # cartesian_product_pp,
                      cartesian_product_transpose_pp,
                  ],
                  logx=True,
                  logy=True,
                  xlabel='length of each factor',
                  equality_check=None
                  )
As of Oct. 2017, numpy now has a generic np.stack function that takes an axis parameter. Using it, we can have a "generalized cartesian product" using the "dstack and meshgrid" technique:
import numpy as np
def cartesian_product(*arrays):
    ndim = len(arrays)
    return (np.stack(np.meshgrid(*arrays), axis=-1)
              .reshape(-1, ndim))

a = np.array([1, 2])
b = np.array([10, 20])
cartesian_product(a, b)
# output:
# array([[ 1, 10],
#        [ 2, 10],
#        [ 1, 20],
#        [ 2, 20]])
Note on the axis=-1 parameter. This is the last (inner-most) axis in the result. It is equivalent to using axis=ndim.
One other comment, since Cartesian products blow up very quickly, unless we need to realize the array in memory for some reason, if the product is very large, we may want to make use of itertools and use the values on-the-fly.
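A minimal sketch of that streaming idea (an illustration, not from the original answer): iterate over itertools.product and only materialize a fixed-size batch of pairs at a time.
import itertools
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])
pairs = itertools.product(x, y)
while True:
    batch = list(itertools.islice(pairs, 4))  # arbitrary batch size
    if not batch:
        break
    chunk = np.array(batch)  # shape (<=4, 2); process each chunk here
    print(chunk)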
The Scikit-learn package has a fast implementation of exactly this:
from sklearn.utils.extmath import cartesian
product = cartesian((x,y))
Note that the convention of this implementation is different from what you want, if you care about the order of the output. For your exact ordering, you can do
product = cartesian((y,x))[:, ::-1]
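If I remember the convention correctly, cartesian varies the last input fastest, so for the toy x and y from the question the two orderings look like this (a quick check, not from the original answer):
from sklearn.utils.extmath import cartesian
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])
print(cartesian((x, y)))           # [[1 4] [1 5] [2 4] [2 5] [3 4] [3 5]]
print(cartesian((y, x))[:, ::-1])  # [[1 4] [2 4] [3 4] [1 5] [2 5] [3 5]] -- the order asked for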
I used @kennytm's answer for a while, but when trying to do the same in TensorFlow, I found that TensorFlow has no equivalent of numpy.repeat(). After a little experimentation, I think I found a more general solution for arbitrary vectors of points.
For numpy:
import numpy as np
def cartesian_product(*args: np.ndarray) -> np.ndarray:
    """
    Produce the cartesian product of arbitrary length vectors.

    Parameters
    ----------
    np.ndarray args
        vector of points of interest in each dimension

    Returns
    -------
    np.ndarray
        the cartesian product of size [m x n] wherein:
            m = prod([len(a) for a in args])
            n = len(args)
    """
    for i, a in enumerate(args):
        assert a.ndim == 1, "arg {:d} is not rank 1".format(i)
    return np.concatenate([np.reshape(xi, [-1, 1]) for xi in np.meshgrid(*args)], axis=1)
and for TensorFlow:
import tensorflow as tf
def cartesian_product(*args: tf.Tensor) -> tf.Tensor:
    """
    Produce the cartesian product of arbitrary length vectors.

    Parameters
    ----------
    tf.Tensor args
        vector of points of interest in each dimension

    Returns
    -------
    tf.Tensor
        the cartesian product of size [m x n] wherein:
            m = prod([len(a) for a in args])
            n = len(args)
    """
    for i, a in enumerate(args):
        tf.assert_rank(a, 1, message="arg {:d} is not rank 1".format(i))
    return tf.concat([tf.reshape(xi, [-1, 1]) for xi in tf.meshgrid(*args)], axis=1)
More generally, if you have two 2D numpy arrays a and b, and you want to concatenate every row of a to every row of b (a cartesian product of rows, kind of like a join in a database), you can use this method:
import numpy
def join_2d(a, b):
    assert a.dtype == b.dtype
    a_part = numpy.tile(a, (len(b), 1))
    b_part = numpy.repeat(b, len(a), axis=0)
    return numpy.hstack((a_part, b_part))
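A small usage sketch (hypothetical data, just to show the shape and ordering of the result):
a = numpy.array([[1, 2], [3, 4]])
b = numpy.array([[10, 20], [30, 40]])
print(join_2d(a, b))
# [[ 1  2 10 20]
#  [ 3  4 10 20]
#  [ 1  2 30 40]
#  [ 3  4 30 40]]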
The fastest you can get is either by combining a generator expression with the map function:
import numpy as np
import datetime

a = np.arange(1000)
b = np.arange(200)
start = datetime.datetime.now()
foo = (item for sublist in [list(map(lambda x: (x, i), a)) for i in b] for item in sublist)
print(list(foo))
print('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))
Outputs (actually the whole resulting list is printed):
[(0, 0), (1, 0), ...,(998, 199), (999, 199)]
execution time: 1.253567 s
or by using a double generator expression:
a = np.arange(1000)
b = np.arange(200)
start = datetime.datetime.now()
foo = ((x,y) for x in a for y in b)
print (list(foo))
print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))
Outputs (whole list printed):
[(0, 0), (1, 0), ...,(998, 199), (999, 199)]
execution time: 1.187415 s
Take into account that most of the computation time goes into the printing command. The generator calculations are otherwise decently efficient. Without printing the calculation times are:
execution time: 0.079208 s
for generator expression + map function and:
execution time: 0.007093 s
for the double generator expression.
If what you actually want is to calculate the actual product of each of the coordinate pairs, the fastest is to solve it as a numpy matrix product:
a = np.arange(1000)
b = np.arange(200)
start = datetime.datetime.now()
foo = np.dot(np.asmatrix([[i,0] for i in a]), np.asmatrix([[i,0] for i in b]).T)
print (foo)
print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))
Outputs:
[[ 0 0 0 ..., 0 0 0]
[ 0 1 2 ..., 197 198 199]
[ 0 2 4 ..., 394 396 398]
...,
[ 0 997 1994 ..., 196409 197406 198403]
[ 0 998 1996 ..., 196606 197604 198602]
[ 0 999 1998 ..., 196803 197802 198801]]
execution time: 0.003869 s
and without printing (in this case it doesn't save much since only a tiny piece of the matrix is actually printed out):
execution time: 0.003083 s
This can also easily be done using the itertools.product method:
from itertools import product
import numpy as np
x = np.array([1, 2, 3])
y = np.array([4, 5])
cart_prod = np.array(list(product(*[x, y])),dtype='int32')
Result:
array([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]], dtype=int32)
Execution time: 0.000155 s
In the specific case that you need to perform simple operations such as addition on each pair, you can introduce an extra dimension and let broadcasting do the job:
>>> a, b = np.array([1,2,3]), np.array([10,20,30])
>>> a[None,:] + b[:,None]
array([[11, 12, 13],
[21, 22, 23],
[31, 32, 33]])
I'm not sure if there is any similar way to actually get the pairs themselves.
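For reference, a small broadcasting-based sketch (my addition, not from the original answer) that does materialize the pairs themselves:
pairs = np.stack(np.broadcast_arrays(a[:, None], b[None, :]), axis=-1).reshape(-1, 2)
# array([[ 1, 10], [ 1, 20], [ 1, 30],
#        [ 2, 10], [ 2, 20], [ 2, 30],
#        [ 3, 10], [ 3, 20], [ 3, 30]])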
I'm a bit late to the party, but I encountered a tricky variant of that problem.
Let's say I want the cartesian product of several arrays, but the cartesian product ends up being much larger than the computer's memory (however, the computations done with that product are fast, or at least parallelizable).
The obvious solution is to divide this cartesian product into chunks and process these chunks one after the other (in a sort of "streaming" manner). You can do that easily with itertools.product, but it's horrendously slow. Also, none of the solutions proposed here (as fast as they are) give us this possibility. The solution I propose uses Numba, and is slightly faster than the "canonical" cartesian_product mentioned here. It's pretty long because I tried to optimize it everywhere I could.
import numba as nb
import numpy as np
from typing import List


@nb.njit(nb.types.Tuple((nb.int32[:, :],
                         nb.int32[:]))(nb.int32[:],
                                       nb.int32[:],
                                       nb.int64, nb.int64))
def cproduct(sizes: np.ndarray, current_tuple: np.ndarray, start_idx: int, end_idx: int):
    """Generates ids tuples from start_id to end_id"""
    assert len(sizes) >= 2
    assert start_idx < end_idx

    tuples = np.zeros((end_idx - start_idx, len(sizes)), dtype=np.int32)
    tuple_idx = 0
    # stores the current combination
    current_tuple = current_tuple.copy()
    while tuple_idx < end_idx - start_idx:
        tuples[tuple_idx] = current_tuple
        current_tuple[0] += 1
        # using a condition here instead of including this in the inner loop
        # to gain a bit of speed: this is going to be tested each iteration,
        # and starting a loop to have it end right away is a bit silly
        if current_tuple[0] == sizes[0]:
            # the reset to 0 and subsequent increment amount to carrying
            # the number to the higher "power"
            current_tuple[0] = 0
            current_tuple[1] += 1
            for i in range(1, len(sizes) - 1):
                if current_tuple[i] == sizes[i]:
                    # same as before, but in a loop, since this is going
                    # to get called less often
                    current_tuple[i + 1] += 1
                    current_tuple[i] = 0
                else:
                    break
        tuple_idx += 1
    return tuples, current_tuple


def chunked_cartesian_product_ids(sizes: List[int], chunk_size: int):
    """Just generates chunks of the cartesian product of the ids of each
    input arrays (thus, we just need their sizes here, not the actual arrays)"""
    prod = np.prod(sizes)

    # putting the largest number at the front to more efficiently make use
    # of the cproduct numba function
    sizes = np.array(sizes, dtype=np.int32)
    sorted_idx = np.argsort(sizes)[::-1]
    sizes = sizes[sorted_idx]
    if chunk_size > prod:
        chunk_bounds = (np.array([0, prod])).astype(np.int64)
    else:
        num_chunks = np.maximum(np.ceil(prod / chunk_size), 2).astype(np.int32)
        chunk_bounds = (np.arange(num_chunks + 1) * chunk_size).astype(np.int64)
        chunk_bounds[-1] = prod
    current_tuple = np.zeros(len(sizes), dtype=np.int32)
    for start_idx, end_idx in zip(chunk_bounds[:-1], chunk_bounds[1:]):
        tuples, current_tuple = cproduct(sizes, current_tuple, start_idx, end_idx)
        # re-arrange columns to match the original order of the sizes list
        # before yielding
        yield tuples[:, np.argsort(sorted_idx)]


def chunked_cartesian_product(*arrays, chunk_size=2 ** 25):
    """Returns chunks of the full cartesian product, with arrays of shape
    (chunk_size, n_arrays). The last chunk will obviously have the size of the
    remainder"""
    array_lengths = [len(array) for array in arrays]
    for array_ids_chunk in chunked_cartesian_product_ids(array_lengths, chunk_size):
        slices_lists = [arrays[i][array_ids_chunk[:, i]] for i in range(len(arrays))]
        yield np.vstack(slices_lists).swapaxes(0, 1)


def cartesian_product(*arrays):
    """Actual cartesian product, not chunked, still fast"""
    total_prod = np.prod([len(array) for array in arrays])
    return next(chunked_cartesian_product(*arrays, chunk_size=total_prod))


a = np.arange(0, 3)
b = np.arange(8, 10)
c = np.arange(13, 16)
for cartesian_tuples in chunked_cartesian_product(*[a, b, c], chunk_size=5):
    print(cartesian_tuples)
This would output our cartesian product in chunks of 5 3-uples:
[[ 0 8 13]
[ 0 8 14]
[ 0 8 15]
[ 1 8 13]
[ 1 8 14]]
[[ 1 8 15]
[ 2 8 13]
[ 2 8 14]
[ 2 8 15]
[ 0 9 13]]
[[ 0 9 14]
[ 0 9 15]
[ 1 9 13]
[ 1 9 14]
[ 1 9 15]]
[[ 2 9 13]
[ 2 9 14]
[ 2 9 15]]
If you're willing to understand what is being done here, the intuition behind the njitted function is to enumerate each "number" in a weird numerical base whose elements would be composed of the sizes of the input arrays (instead of the same number in regular binary, decimal or hexadecimal bases).
Obviously, this solution is interesting for large products. For small ones, the overhead might be a bit costly.
NOTE: since numba is still under heavy development, I'm using numba 0.50 to run this, with Python 3.6.
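To make the mixed-radix intuition concrete, here is a tiny pure-Python sketch (illustration only) that maps a flat index to a tuple of per-array indices; this is essentially what cproduct does incrementally, with the carry handled in its while loop:
def index_to_tuple(flat_idx, sizes):
    # read off the "digits" of flat_idx in a base given by sizes, least significant first
    digits = []
    for size in sizes:
        digits.append(flat_idx % size)
        flat_idx //= size
    return digits

print(index_to_tuple(7, [3, 2, 3]))  # [1, 0, 1] because 7 = 1 + 0*3 + 1*(3*2)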
Yet another one:
>>>x1, y1 = np.meshgrid(x, y)
>>>np.c_[x1.ravel(), y1.ravel()]
array([[1, 4],
[2, 4],
[3, 4],
[1, 5],
[2, 5],
[3, 5]])
Inspired by Ashkan's answer, you can also try the following.
>>> x, y = np.meshgrid(x, y)
>>> np.concatenate([x.flatten().reshape(-1,1), y.flatten().reshape(-1,1)], axis=1)
This will give you the required cartesian product!
This is a generalized version of the accepted answer (Cartesian product of multiple arrays using numpy.tile and numpy.repeat functions).
from functools import reduce
from operator import mul
import numpy as np

def cartesian_product(arrays):
    # build each column with repeat/tile, then stack; the list is needed
    # because np.vstack no longer accepts a bare generator
    return np.vstack([
        np.tile(
            np.repeat(arrays[j], reduce(mul, map(len, arrays[j+1:]), 1)),
            reduce(mul, map(len, arrays[:j]), 1),
        )
        for j in range(len(arrays))
    ]).T
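A quick usage sketch with the toy x and y from the question; note that the row order differs from the accepted answer (the last array varies fastest here):
x = np.array([1, 2, 3])
y = np.array([4, 5])
print(cartesian_product([x, y]))
# [[1 4]
#  [1 5]
#  [2 4]
#  [2 5]
#  [3 4]
#  [3 5]]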
If you are willing to use PyTorch, I should think it is highly efficient:
>>> import torch
>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y))
tensor([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]])
and you can easily get a numpy array:
>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y)).numpy()
array([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]])

How to go through an array apply a threshold to each pixel

I have a 3 channel numpy array and I would like to apply a function to each pixel. Specifically I want to process an image and return a greyscale image highlighting where specific colours appear in the image. If the red, green, blue channels are within 10 in L2 distance from the colour: (30,70,130) then set that pixel's value on the greyscale image to be 255, otherwise 0.
My current process for doing it is:
def L2_dist(p1, p2):
    dist = ((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2 + (p1[2]-p2[2])**2) ** 0.5
    if dist < 10:
        return 255
    return 0

def colour_img(image):
    colour = my_colour
    img_dim = image.shape
    new_img = np.zeros((img_dim[0], img_dim[1]))  # no alpha channel replica
    for c in range(img_dim[0]):
        for r in range(img_dim[1]):
            pixel = image[r, c, :3]
            new_img[r, c] = L2_dist(colour, pixel)
    return new_img
But it's very slow. How can I do this faster instead of using loops?
Simple one line solution
You can do what you want in a single line like this:
new_img = (((image - color)**2).sum(axis=2)**.5 <= 10) * 255
Optimized two line solution
The above line isn't the most efficient way possible to perform all of the operations that the OP wants. Here's a significantly faster way (credit to Paul Panzer for suggesting the optimizations in the comments, readability not guaranteed):
d = image - color
new_img = (np.einsum('...i, ...i', d, d) <= 100) * 255
Timings:
Given some test data with 100x100 pixels:
import numpy as np
color = np.array([30, 70, 130])
# random data within [20,60,120]-[40,80,140] for demo purposes
image = np.random.randint(10*2 + 1, size=[100,100,3]) + color - 10
Here's a comparison of the timings of the OP's method and the solutions from this answer. The one-line solution is about 100x faster than the OP's, whereas the fully optimized version is about 300x faster:
%%timeit
# OP's code
img_dim = image.shape
new_img = np.zeros((img_dim[0], img_dim[1]))  # no alpha channel replica
for c in range(img_dim[0]):
    for r in range(img_dim[1]):
        pixel = image[r, c, :3]
        new_img[r, c] = L2_dist(color, pixel)
43.8 ms ± 502 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
# one line solution
new_img = (((image - color)**2).sum(axis=2)**.5 <= 10) * 255
439 µs ± 13.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
# fully optimized solution
d = image - color
new_img = (np.einsum('...i, ...i', d, d) <= 100) * 255
145 µs ± 2.29 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Explanation of simple one line solution
The simple one-liner given as the first solution will:
Find the Euclidean distance between every pixel in image (which will be an array of shape (m, n, 3)) and color (which will be an array of shape (3)).
Check if any of those distances is within 10, and return a boolean array that is True wherever the condition is met and False otherwise.
A boolean array is really just an array of 0s and 1s, so we then multiply the boolean array by 255 to get the final result you wanted.
Explanation of optimized solution
Here's the list of optimizations used:
Uses einsum to calculate the sum of the squares required for the distance calculation. Under the hood, einsum makes use of the BLAS library that Numpy wraps to calculate the needed sum-product, so it should be faster.
Skips taking the square root by comparing the square of the distance to the square of the threshold.
I tried to find a way to minimize allocation/copying of arrays, but this actually made things slower. Here's a version of the optimized solution that allocates exactly two arrays (one for intermediate results and one for final result) and makes no other copies:
%%timeit
# fully optimized solution, makes as few array copies as possible
scr = image.astype(np.double)
new_img = np.zeros(image.shape[:2], dtype=np.uint8)
np.multiply(np.less_equal(np.einsum('...i,...i', np.subtract(image, color, out=scr), scr, out=scr[:,:,0]), 100, out=new_img), 255, out=new_img)
232 µs ± 7.72 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
You can do something like this
color = np.array([30, 70, 130])
L2 = np.sqrt(np.sum((image - color) ** 2, axis=2)) # L2 distance of each pixel from color
img_dim = image.shape
new_img = np.zeros((img_dim[0], img_dim[1]))
new_img[L2 < 10] = 255
But as you can see, we are iterating through the array twice: first to calculate L2 and then to do the thresholding in L2 < 10. We can avoid that, as is done in your code, with nested loops. But loops in Python are slow. So, JIT-compile the function to get the fastest version. Below I use numba:
import numba as nb
import numpy as np

@nb.njit(cache=True)
def L2_dist(p1, p2):
    dist = (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2 + (p1[2]-p2[2])**2
    if dist < 100:
        return 255
    return 0

@nb.njit(cache=True)
def color_img(image):
    n_rows, n_cols, _ = image.shape
    new_img = np.zeros((n_rows, n_cols), dtype=np.int32)
    for c in range(n_rows):
        for r in range(n_cols):
            pixel = image[r, c, :3]
            new_img[r, c] = L2_dist(color, pixel)
    return new_img
Timings:
# @tel's fully optimised solution (using einsum to short circuit np to get to BLAS directly, plus the no-sqrt trick)
128 µs ± 6.94 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
# JITed version without the sqrt trick
30.8 µs ± 10.2 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
# JITed version with the sqrt trick
24.8 µs ± 11.9 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
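For reference, a quick consistency check between the JITed version and a plain NumPy computation (a sketch; it assumes image and color are defined as in the timing setup above, and uses the same strict inequality as L2_dist):
expected = (((image - color) ** 2).sum(axis=2) < 100) * 255
assert np.array_equal(color_img(image), expected)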
HTH.
