should I use basinhopping instead of minimize to find colors based on a small set of transforms? - scipy

question up front
def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])
given an objective over a collection of colors that returns the least distance to a target color, how can I be sure that basinhopping isn't better than minimize in scipy?
I was thinking that, for any one color, there will be up to 4 moments in a v-shaped curve, and so only one minimum. if the value at offset zero is itself a minimum, maybe it could be 5. Am I wrong? In any case each color yields a single optimum, so if we are only searching one color at a time, there is no reason to use basinhopping.
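To sanity-check that, I've been scanning the offset for a single color on a dense grid and looking at the resulting best-of-three distance curve (plotting it shows how many basins there are). A minimal sketch, using the transform functions above and a plain squared-distance stand-in for CIEDE 2000, so it only illustrates the shape of the curve, not the exact distances:

import numpy as np

def squared_distance(left, right):
    # stand-in metric; swap in CIEDE 2000 for the real thing
    return sum((l - r) ** 2 for l, r in zip(left, right))

def scan_offsets(color, target, steps=2001):
    # evaluate the best-of-three transform distance over a grid of offsets in [-1, 1]
    offsets = np.linspace(-1, 1, steps)
    curve = []
    for offset in offsets:
        bound_offset = abs(offset)
        candidates = [shade_func(color, bound_offset),
                      tint_func(color, bound_offset),
                      tone_func(color, offset)]
        curve.append(min(squared_distance(target, c) for c in candidates))
    return offsets, np.array(curve)

offsets, curve = scan_offsets((224, 176, 255), (200, 158, 229))
best = offsets[np.argmin(curve)]
print(best, curve.min())   # inspect (or plot) curve vs offsets to see the basins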
If we instead use basinhopping to scan all colors at once (we can scale the two different dimensions, in fact this is where the idea of a preprocessor function first came from), it scans them, but does not do such a compelling job of scanning all colors. Some colors it only tries once, and I think it might completely skip some colors with large enough sets.
details
I was inspired by the way artyclick shows colors and allows searching for them. If you look at an individual color, for example mauve, you'll notice that it prominently displays the shades, tints, and tones of the color, rather like an artist might like. If you ask it for the name of a color, it will use a hidden unordered list of about a thousand color names, and some javascript to find the nearest color name to the color you chose. In fact it will also show alternatives.
I noticed that quite often a shade, tint or tone of an alternative (or even of the best match) was a better match than the color it provided. For those who don't know about shade, tint and tone, there's a nice write up at Dunn-Edwards Paints. It looks like shade and tint are the same operation with the sign reversed, if you do this on tuples representing colors. Tone is different; a negative value would, I think, saturate the result.
I felt like there must be authoritative (or at least well sourced) colorname sources it could be using.
In terms of the results, since I want any color or its shade/tint/tone, I want a result like this:
{'color': '#aabbcc',
'offset': {'type': 'tint', 'value': 0.31060384614807254}}
So I can return the actual color name from the color, plus the type of color transform to get there and how far you have to go.
For color distance, I am using an algorithm that is meant to model human perception, called CIEDE 2000. Frankly, I'm just using a snippet I found that implements it, so it could be wrong.
So now I want to take in two colors, compare their shade, tint, and tone to a target color, and return the one with the least distance. After I am done, I can reconstruct if it was a shade, tint or tone transform from the result just by running all three once and choosing the best fit. With that structure, I can iterate over every color, and that should do it. I use optimization because I don't want to hard code what offsets it should consider (though I am reconsidering this choice now!).
because I want to consider negatives for tone but not for shade/tint, my objective will have to transform that. I have to include two values to optimize, since the objective function will need to know what color to transform (or else the result will give me no way of knowing which color to use the offset with).
so my call should look something like the following:
result = min(minimize(objective, (i,0), bounds=[(i, i), (-1, 1)]) for i in range(len(colors)))
offset_type = resolve_offset_type(result)
with that in mind, I implemented this solution, over the past couple of days.
current solution
from scipy.optimize import minimize
import numpy as np
import math

def clamp(low, x, high):
    return max(low, min(x, high))

def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return '#{:02x}{:02x}{:02x}'.format(*rgb)

def rgb_to_lab(color):
    # Convert RGB to XYZ color space
    R = color[0] / 255.0
    G = color[1] / 255.0
    B = color[2] / 255.0

    R = ((R + 0.055) / 1.055) ** 2.4 if R > 0.04045 else R / 12.92
    G = ((G + 0.055) / 1.055) ** 2.4 if G > 0.04045 else G / 12.92
    B = ((B + 0.055) / 1.055) ** 2.4 if B > 0.04045 else B / 12.92

    X = R * 0.4124 + G * 0.3576 + B * 0.1805
    Y = R * 0.2126 + G * 0.7152 + B * 0.0722
    Z = R * 0.0193 + G * 0.1192 + B * 0.9505
    return (X, Y, Z)

def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])
class ColorNameFinder:
    def __init__(self, colors, distance=None):
        if distance is None:
            distance = ColorNameFinder.ciede2000
        self.distance = distance
        self.colors = [hex_to_rgb(color) for color in colors]

    @classmethod
    def euclidean(self, left, right):
        return (left[0] - right[0]) ** 2 + (left[1] - right[1]) ** 2 + (left[2] - right[2]) ** 2

    @classmethod
    def ciede2000(self, color1, color2):
        # Convert color to LAB color space
        lab1 = rgb_to_lab(color1)
        lab2 = rgb_to_lab(color2)

        # Compute CIE 2000 color difference
        C1 = math.sqrt(lab1[1] ** 2 + lab1[2] ** 2)
        C2 = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        a1 = math.atan2(lab1[2], lab1[1])
        a2 = math.atan2(lab2[2], lab2[1])
        dL = lab2[0] - lab1[0]
        dC = C2 - C1
        dA = a2 - a1
        dH = 2 * math.sqrt(C1 * C2) * math.sin(dA / 2)
        L = 1
        C = 1
        H = 1
        LK = 1
        LC = math.sqrt(math.pow(C1, 7) / (math.pow(C1, 7) + math.pow(25, 7)))
        LH = math.sqrt(lab1[0] ** 2 + lab1[1] ** 2)
        CB = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        CH = math.sqrt(C2 ** 2 + dH ** 2)
        SH = 1 + 0.015 * CH * LC
        SL = 1 + 0.015 * LH * LC
        SC = 1 + 0.015 * CB * LC
        T = 0.0
        if (a1 >= a2 and a1 - a2 <= math.pi) or (a2 >= a1 and a2 - a1 > math.pi):
            T = 1
        else:
            T = 0
        dE = math.sqrt((dL / L) ** 2 + (dC / C) ** 2 + (dH / H) ** 2 + T * (dC / SC) ** 2)
        return dE

    def __factory_objective(self, target, preprocessor=lambda x: x):
        def fn(x):
            print(x, preprocessor(x))
            x = preprocessor(x)
            color = self.colors[x[0]]
            offset = x[1]
            bound_offset = abs(offset)
            offsets = [
                shade_func(color, bound_offset),
                tint_func(color, bound_offset),
                tone_func(color, offset)]
            least_error = min([(right, self.distance(target, right))
                               for right in offsets], key=lambda x: x[1])[1]
            return least_error
        return fn

    def __resolve_offset_type(self, sample, target, offset):
        bound_offset = abs(offset)
        shade = shade_func(sample, bound_offset)
        tint = tint_func(sample, bound_offset)
        tone = tone_func(sample, offset)

        lookup = {}
        lookup[shade] = "shade"
        lookup[tint] = "tint"
        lookup[tone] = "tone"

        offsets = [shade, tint, tone]
        least_error = min([(right, self.distance(target, right)) for right in offsets], key=lambda x: x[1])[0]
        return lookup[least_error]

    def nearest_color(self, target):
        target = hex_to_rgb(target)
        preprocessor = lambda x: (int(x[0]), x[1])
        result = min(
            [minimize(self.__factory_objective(target, preprocessor=preprocessor),
                      (i, 0),
                      bounds=[(i, i), (-1, 1)],
                      method='Powell')
             for i, color in enumerate(self.colors)], key=lambda x: x.fun)
        color_index = int(result.x[0])
        nearest_color = self.colors[color_index]
        offset = preprocessor(result.x)[1]
        offset_type = self.__resolve_offset_type(nearest_color, target, offset)
        return {
            "color": rgb_to_hex(nearest_color),
            "offset": {
                "type": offset_type,
                "value": offset if offset_type == 'tone' else abs(offset)
            }
        }
let's demonstrate this with mauve. We'll define a target that is similar to a shade of mauve, include mauve in a list of colors, and ideally we'll get mauve back from our test.
colors = ['#E0B0FF', '#FF0000', '#000000', '#0000FF']
target = '#DFAEFE'
agent = ColorNameFinder(colors)
agent.nearest_color(target)
we do get mauve back:
{'color': '#e0b0ff',
'offset': {'type': 'shade', 'value': 0.0031060384614807254}}
the distance is 0.004991238317138219
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.0031060384614807254))
why use Powell's method?
in this arrangement, it is simply the best. No other method that uses bounds did a good job of scanning positives and negatives, and I had mixed results using the preprocessor to scale the values back to negative with bounds of (0,2).
I do notice that in the sample test, a range of offsets between about 0.0008 and 0.003 seems to produce the same distance, and that the values my approach considers include a large number of these. is there a more efficient solution?
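for example, one thing I'm considering is dropping the integer color index from the optimization entirely and running a bounded scalar search per color instead; a rough, untested sketch of what I mean (nearest_color_scalar is hypothetical, not part of the class above):

from scipy.optimize import minimize_scalar

def nearest_color_scalar(self, target):
    # hypothetical variant: one bounded scalar search per color,
    # so no integer color index ever enters the optimizer
    target = hex_to_rgb(target)
    best = None
    for index, color in enumerate(self.colors):
        def objective(offset, color=color):
            bound_offset = abs(offset)
            candidates = [shade_func(color, bound_offset),
                          tint_func(color, bound_offset),
                          tone_func(color, offset)]
            return min(self.distance(target, c) for c in candidates)
        res = minimize_scalar(objective, bounds=(-1, 1), method='bounded')
        if best is None or res.fun < best["fun"]:
            best = {"index": index, "offset": res.x, "fun": res.fun}
    return best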
If I'm wrong, please let me know.
correctness of the color transformations
what does adding a negative amount of white mean (in the case of a tint)? I was thinking it is like adding a positive amount of black -- ie a shade, with the sign reversed.
if that's right, my implementation is not correct:
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.1)) - agent.distance(hex_to_rgb(target), tint_func(hex_to_rgb(colors[0]), -0.1))
produces 0.3239904390784106 instead of 0.
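I think the asymmetry is in what gets scaled: shade moves each channel by offset * c, while a tint with a negative offset moves it by offset * (255 - c), so the two only coincide where c == 255 - c. A tiny check with the functions above:

color = hex_to_rgb('#E0B0FF')
print(shade_func(color, 0.1))    # each channel loses 10% of itself
print(tint_func(color, -0.1))    # each channel loses 10% of its distance to 255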
I'll probably be fixing that soon

Related

Linearly evolving color map

I am trying to create a colormap that should linearly vary according to a "w" value, from white-red to white-purple.
So...
For w = 1, the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be red.
For w = 10 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be orange.
For w = 30 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be yellow.
and so on, until...
For w = 100 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be purple.
I used this website to generate the image : https://g.co/kgs/utJPmw
I can get the first (w = 1) colormap by using this code, but I have no idea how to make it vary the way I would like:
import numpy as np
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

color_map_1 = cm.get_cmap('Reds', 256)
newcolors_1 = color_map_1(np.linspace(0, 1, 256))
color_map_1 = ListedColormap(newcolors_1)
Any idea on how to do such a thing in Python would be very welcome.
Thank you guys
I finally found the solution. Maybe this is not the cleanest way, but it works very well for what I want to do. The colormaps I create can vary from white-red to white-purple (color spectrum). 765 variations are possible here, but by adding some small changes to the code, it could vary much more or less, depending on what you want.
In the following code, using the create_custom_colormap function, you get cmap and color_map as outputs. cmap is the matrix containing the (r,g,b) values. color_map is the object that can be used in matplotlib (imshow) as an actual colormap, on any image.
Using the following code, define the functions we will need for this job:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

def create_image():
    '''
    Create some random image on which we will apply the colormap. Any other image could replace this one, with or without extent.
    '''
    dx, dy = 0.015, 0.05
    x = np.arange(-4.0, 4.0, dx)
    y = np.arange(-4.0, 4.0, dy)
    X, Y = np.meshgrid(x, y)
    extent = np.min(x), np.max(x), np.min(y), np.max(y)

    def z_fun(x, y):
        return (1 - x / 2 + x**5 + y**6) * np.exp(-(x**2 + y**2))

    Z2 = z_fun(X, Y)
    return(extent, Z2)

def create_cmap(**kwargs):
    '''
    Create a color matrix and a color map using 3 lists of r (red), g (green) and b (blue) values.
    Parameters:
        - r (list of floats): red value, between 0 and 1
        - g (list of floats): green value, between 0 and 1
        - b (list of floats): blue value, between 0 and 1
    Returns:
        - color_matrix (numpy 2D array): contains all the rgb values for a given colormap
        - color_map (matplotlib object): the color_matrix transformed into an object that matplotlib can use on figures
    '''
    color_matrix = np.empty([256, 3])
    color_matrix.fill(0)
    color_matrix[:, 0] = kwargs["r"]
    color_matrix[:, 1] = kwargs["g"]
    color_matrix[:, 2] = kwargs["b"]
    color_map = ListedColormap(color_matrix)
    return(color_matrix, color_map)

def standardize_timeseries_between(timeseries, borne_inf=0, borne_sup=1):
    '''
    For readability reasons, I defined r,g,b values between 0 and 255. But the matplotlib ListedColormap function expects values between 0 and 1.
    Parameters:
        timeseries (list of floats): can be one color vector in our case (either r, g or b)
        borne_inf (int): The minimum value in our timeseries will be replaced by this value
        borne_sup (int): The maximum value in our timeseries will be replaced by this value
    '''
    timeseries_standardized = []
    for i in range(len(timeseries)):
        a = (borne_sup - borne_inf) / (max(timeseries) - min(timeseries))
        b = borne_inf - a * min(timeseries)
        timeseries_standardized.append(a * timeseries[i] + b)
    timeseries_standardized = np.array(timeseries_standardized)
    return(timeseries_standardized)

def create_custom_colormap(weight):
    '''
    This function is at the heart of the process. It takes only one < weight > parameter, that you can choose.
    - For weight between 0 and 255, the colormaps that are created will vary between white-red (min-max) to white-yellow (min-max).
    - For weight between 256 and 510, the colormaps that are created will vary between white-green (min-max) to white-cyan (min-max).
    - For weight between 511 and 765, the colormaps that are created will vary between white-blue (min-max) to white-purple (min-max).
    '''
    if weight <= 255:
        ### 0 < w <= 255
        r = np.repeat(1, 256)
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, weight/256, 1)
        g = g[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, 1/256, 1)
        b = b[::-1]
    if weight > 255 and weight <= 255*2:
        weight = weight - 255
        ### 255 < w <= 510
        g = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, 1/256, 1)
        r = r[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, weight/256, 1)
        b = b[::-1]
    if weight > 255*2 and weight <= 255*3:
        weight = weight - 255*2
        ### 510 < w <= 765
        b = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, weight/256, 1)
        r = r[::-1]
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, 1/256, 1)
        g = g[::-1]
    cmap, color_map = create_cmap(r=r, g=g, b=b)
    return(cmap, color_map)
Use the function create_custom_colormap to get the colormap you want, by giving as argument to the function a value between 0 and 765 (see 5 examples in the figure below):
### Let us create some image (any other could be used).
extent, Z2 = create_image()

### Now create a color map, using the w value you want: 0 = white-red, 765 = white-purple.
cmap, color_map = create_custom_colormap(weight=750)

### Plot the result
plt.imshow(Z2, cmap=color_map, alpha=0.7,
           interpolation='bilinear', extent=extent)
plt.colorbar()
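If you want to see the white-red to white-purple progression at a glance, you could, for example, draw the same image with a few different weights (the weight values below are arbitrary):

fig, axes = plt.subplots(1, 5, figsize=(15, 3))
for ax, w in zip(axes, [0, 190, 380, 570, 760]):
    cmap, color_map = create_custom_colormap(weight=w)
    ax.imshow(Z2, cmap=color_map, alpha=0.7, interpolation='bilinear', extent=extent)
    ax.set_title('w = {}'.format(w))
plt.show()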

Optimizing asymmetrically reweighted penalized least squares smoothing (from matlab to python)

I'm trying to apply the method for baselining vibrational spectra, which is presented as an improvement over asymmetric and iterative re-weighted least-squares algorithms in the 2015 paper (doi:10.1039/c4an01061b), where the following matlab code was provided:
function z = baseline(y, lambda, ratio)
% Estimate baseline with arPLS in Matlab
N = length(y);
D = diff(speye(N), 2);
H = lambda*D'*D;
w = ones(N, 1);
while true
    W = spdiags(w, 0, N, N);
    % Cholesky decomposition
    C = chol(W + H);
    z = C \ (C' \ (w.*y) );
    d = y - z;
    % make d-, and get w^t with m and s
    dn = d(d<0);
    m = mean(d);
    s = std(d);
    wt = 1./ (1 + exp( 2* (d-(2*s-m))/s ) );
    % check exit condition and backup
    if norm(w-wt)/norm(w) < ratio, break; end
    w = wt;
end
that I rewrote into python:
def baseline_arPLS(y, lam, ratio):
    # Estimate baseline with arPLS
    N = len(y)
    k = [numpy.ones(N), -2*numpy.ones(N-1), numpy.ones(N-2)]
    offset = [0, 1, 2]
    D = diags(k, offset).toarray()
    H = lam * numpy.matmul(D.T, D)
    w_ = numpy.ones(N)

    while True:
        W = spdiags(w_, 0, N, N, format='csr')
        # Cholesky decomposition
        C = cholesky(W + H)
        z_ = spsolve(C.T, w_ * y)
        z = spsolve(C, z_)
        d = y - z
        # make d- and get w^t with m and s
        dn = d[d<0]
        m = numpy.mean(dn)
        s = numpy.std(dn)
        wt = 1. / (1 + numpy.exp(2 * (d - (2*s-m)) / s))
        # check exit condition and backup
        norm_wt, norm_w = norm(w_-wt), norm(w_)
        if (norm_wt / norm_w) < ratio:
            break
        w_ = wt
    return(z)
Except for the input vector y, the method requires parameters lam and ratio, and it runs ok for values lam < 1.e+07 and ratio > 1.e-01, but outputs poor results. When the values are changed outside this range, for example lam=1e+07, ratio=1e-02, the CPU starts heating up and the job never finishes (I interrupted it after 1 min). Also in both cases the following warning shows up:
/usr/local/lib/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:144: SparseEfficiencyWarning: spsolve requires A to be CSC or CSR matrix format
  warn('spsolve requires A to be CSC or CSR format',
although I added the recommended format='csr' option to the spdiags call.
And here's some synthetic data (similar to one in the paper) for testing purposes. The noise was added along with a 3rd degree polynomial baseline. The method works well for parameters bl_1 and fails to converge for bl_2:
import numpy
from matplotlib import pyplot
from scipy.sparse import spdiags, diags, identity
from scipy.sparse.linalg import spsolve
from numpy.linalg import cholesky, norm
import sys
x = numpy.arange(0, 1000)
noise = numpy.random.uniform(low=0, high = 10, size=len(x))
poly_3rd_degree = numpy.poly1d([1.2e-06, -1.23e-03, .36, -4.e-04])
poly_baseline = poly_3rd_degree(x)
y = 100 * numpy.exp(-((x-300)/15)**2)+\
200 * numpy.exp(-((x-750)/30)**2)+ \
100 * numpy.exp(-((x-800)/15)**2) + noise + poly_baseline
bl_1 = baseline_arPLS(y, 1e+07, 1e-01)
bl_2 = baseline_arPLS(y, 1e+07, 1e-02)
pyplot.figure(1)
pyplot.plot(x, y, 'C0')
pyplot.plot(x, poly_baseline, 'C1')
pyplot.plot(x, bl_1, 'k')
pyplot.show()
sys.exit(0)
All this is telling me that I'm doing something very non-optimal in my python implementation. Since I'm not knowledgeable enough about the intricacies of scipy computations, I'm kindly asking for suggestions on how to achieve convergence in these calculations.
(I encountered an issue in running the "straight" matlab version of the code because the line D = diff(speye(N), 2); truncates the last two rows of the matrix, creating a dimension mismatch later in the function. Following the description of matrix D's appearance I substituted this line by directly creating a tridiagonal matrix using the diags function.)
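(For reference, here is a minimal sketch of how I understand the (N-2) x N second-difference matrix produced by diff(speye(N), 2) could be built directly with scipy.sparse; treat the details as my assumption rather than the paper's code:)

import numpy
from scipy.sparse import diags

N = 10
# every row of D looks like [..., 1, -2, 1, ...], shifted one column per row
D = diags([numpy.ones(N-2), -2*numpy.ones(N-2), numpy.ones(N-2)],
          offsets=[0, 1, 2], shape=(N-2, N))
H = 1e+07 * (D.T @ D)   # lambda * D' * D, stays sparse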
Guided by the comment hpaulj made, and suspecting that the loop exit wasn't coded properly, I re-visited the paper and found out that the authors actually implemented an exit condition that was not featured in their matlab script. Changing the while loop condition provides an exit for any set of parameters; my understanding is that the algorithm is not guaranteed to converge in all cases, which is why this condition is necessary but was omitted by error. Here's the edited version of my python code:
def baseline_arPLS(y, lam, ratio):
    # Estimate baseline with arPLS
    N = len(y)
    k = [numpy.ones(N), -2*numpy.ones(N-1), numpy.ones(N-2)]
    offset = [0, 1, 2]
    D = diags(k, offset).toarray()
    H = lam * numpy.matmul(D.T, D)
    w_ = numpy.ones(N)

    i = 0
    N_iterations = 100
    while i < N_iterations:
        W = spdiags(w_, 0, N, N, format='csr')
        # Cholesky decomposition
        C = cholesky(W + H)
        z_ = spsolve(C.T, w_ * y)
        z = spsolve(C, z_)
        d = y - z
        # make d- and get w^t with m and s
        dn = d[d<0]
        m = numpy.mean(dn)
        s = numpy.std(dn)
        wt = 1. / (1 + numpy.exp(2 * (d - (2*s-m)) / s))
        # check exit condition and backup
        norm_wt, norm_w = norm(w_-wt), norm(w_)
        if (norm_wt / norm_w) < ratio:
            break
        w_ = wt
        i += 1
    return(z)
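With the iteration cap in place, the parameter set that previously ran forever now returns, e.g. (re-using the synthetic data from the question):

bl_2 = baseline_arPLS(y, 1e+07, 1e-02)   # returns after at most 100 iterations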

Speed Up a for Loop - Python

I have code that works perfectly well, but I wish to speed up the time it takes to converge. A snippet of the code is shown below:
def myfunction(x, i):
    y = x + (min(0, target[i] - data[i, :] @ x)) * data[i] / (norm(data[i])**2)
    return y
rows, columns = data.shape

start = time.time()
iterate = 0
iterate_count = []
norm_count = []
res = 5
x_not = np.ones(columns)
norm_count.append(norm(x_not))
iterate_count.append(0)
while res > 1e-8:
    for row in range(rows):
        y = myfunction(x_not, row)
        x_not = y
    iterate += 1
    iterate_count.append(iterate)
    norm_count.append(norm(x_not))
    res = abs(norm_count[-1] - norm_count[-2])

print('Converge at {} iterations'.format(iterate))
print('Duration: {:.4f} seconds'.format(time.time() - start))
I am relatively new to Python. I will appreciate any hint/assistance.
Ax=b is the problem we wish to solve. Here, 'A' is the 'data' and 'b' is the 'target'.
Ugh! After spending a while on this I don't think it can be done the way you've set up your problem. In each iteration over the row, you modify x_not and then pass the updated result to get the solution for the next row. This kind of setup can't be vectorized easily. You can learn the thought process of vectorization from the failed attempt, so I'm including it in the answer. I'm also including a different iterative method to solve linear systems of equations. I've included a vectorized version -- where the solution is updated using matrix multiplication and vector addition, and a loopy version -- where the solution is updated using a for loop to demonstrate what you can expect to gain.
1. The failed attempt
Let's take a look at what you're doing here.
def myfunction(x, i):
    y = x + (min(0, target[i] - data[i, :] @ x)) * (data[i] / (norm(data[i])**2))
    return y
You subtract the dot product of (the ith row of data and x_not) from the ith element of target, limited at zero. Let's call this part1.
You multiply this result by the ith row of data divided by the norm of that row squared. Let's call this part2.
Then you add this to x_not.
Now let's look at the shapes of the matrices.
data is (M, N).
target is (M, ).
x_not is (N, )
Instead of doing these operations rowwise, you can operate on the entire matrix!
1.1. Simplifying the dot product.
Instead of doing data[i, :] @ x, you can do data @ x_not and this gives an array with the ith element giving the dot product of the ith row with x_not. So now we have data @ x_not with shape (M, ).
Then, you can subtract this from the entire target array, so target - (data @ x_not) has shape (M, ).
So far, we have
part1 = target - (data @ x_not)
Next, if anything is greater than zero, set it to zero.
part1[part1 > 0] = 0
1.2. Finding rowwise norms.
Finally, you want to multiply this by the row of data, and divide by the square of the L2-norm of that row. To get the norm of each row of a matrix, you do
rownorms = np.linalg.norm(data, axis=1)
This is a (M, ) array, so we need to convert it to a (M, 1) array so we can divide each row. rownorms[:, None] does this. Then divide data by this.
part2 = data / (rownorms[:, None]**2)
1.3. Add to x_not
Finally, we're adding each row of part1 * part2 to the original x_not and returning the result
result = x_not + (part1[:, None] * part2).sum(axis=0)
Here's where we get stuck. In your approach, each call to myfunction() gives a value of part1 that depends on x_not, which was changed in the last call to myfunction().
2. Why vectorize?
Using numpy's inbuilt methods instead of looping allows it to offload the calculation to its C backend, so it runs faster. If your numpy is linked to a BLAS backend, you can extract even more speed by using your processor's SIMD registers
The conjugate gradient method is a simple iterative method to solve certain systems of equations. There are other more complex algorithms that can solve general systems well, but this should do for the purposes of our demo. Again, the purpose is not to have an iterative algorithm that will perfectly solve any linear system of equations, but to show what kind of speedup you can expect if you vectorize your code.
Given your system
data @ x_not = target
Let's define some variables:
A = data.T @ data
b = data.T @ target
And we'll solve the system A @ x = b
x = np.zeros((columns,))  # Initial guess. Can be anything
resid = b - A @ x
p = resid
while (np.abs(resid) > tolerance).any():
    Ap = A @ p
    alpha = (resid.T @ resid) / (p.T @ Ap)
    x = x + alpha * p
    resid_new = resid - alpha * Ap
    beta = (resid_new.T @ resid_new) / (resid.T @ resid)
    p = resid_new + beta * p
    resid = resid_new + 0
To contrast the fully vectorized approach with one that uses iterations to update the rows of x and resid_new, let's define another implementation of the CG solver that does this.
def solve_loopy(data, target, itermax=100, tolerance=1e-8):
    A = data.T @ data
    b = data.T @ target
    rows, columns = data.shape

    x = np.zeros((columns,))  # Initial guess. Can be anything
    resid = b - A @ x
    resid_new = b - A @ x
    p = resid
    niter = 0
    while (np.abs(resid) > tolerance).any() and niter < itermax:
        Ap = A @ p
        alpha = (resid.T @ resid) / (p.T @ Ap)
        for i in range(len(x)):
            x[i] = x[i] + alpha * p[i]
            resid_new[i] = resid[i] - alpha * Ap[i]
        # resid_new = resid - alpha * A @ p
        beta = (resid_new.T @ resid_new) / (resid.T @ resid)
        p = resid_new + beta * p
        resid = resid_new + 0
        niter += 1
    return x
And our original vector method:
def solve_vect(data, target, itermax=100, tolerance=1e-8):
    A = data.T @ data
    b = data.T @ target
    rows, columns = data.shape

    x = np.zeros((columns,))  # Initial guess. Can be anything
    resid = b - A @ x
    resid_new = b - A @ x
    p = resid
    niter = 0
    while (np.abs(resid) > tolerance).any() and niter < itermax:
        Ap = A @ p
        alpha = (resid.T @ resid) / (p.T @ Ap)
        x = x + alpha * p
        resid_new = resid - alpha * Ap
        beta = (resid_new.T @ resid_new) / (resid.T @ resid)
        p = resid_new + beta * p
        resid = resid_new + 0
        niter += 1
    return x
Let's solve a simple system to see if this works first:
2x1 + x2 = -5
−x1 + x2 = -2
should give a solution of [-1, -3]
data = np.array([[ 2, 1],
[-1, 1]])
target = np.array([-5, -2])
print(solve_loopy(data, target))
print(solve_vect(data, target))
Both give the correct solution [-1, -3], yay! Now on to bigger things:
data = np.random.random((100, 100))
target = np.random.random((100, ))
Let's ensure the solution is still correct:
sol1 = solve_loopy(data, target)
np.allclose(data @ sol1, target)
# Output: False
sol2 = solve_vect(data, target)
np.allclose(data @ sol2, target)
# Output: False
Hmm, looks like the CG method doesn't work for the badly conditioned random matrices we created. Well, at least both give the same result.
np.allclose(sol1, sol2)
# Output: True
But let's not get discouraged! We don't really care if it works perfectly, the point of this is to demonstrate how amazing vectorization is. So let's time this:
import timeit
timeit.timeit('solve_loopy(data, target)', number=10, setup='from __main__ import solve_loopy, data, target')
# Output: 0.25586539999994784
timeit.timeit('solve_vect(data, target)', number=10, setup='from __main__ import solve_vect, data, target')
# Output: 0.12008900000000722
Nice! A ~2x speedup simply by avoiding a loop while updating our solution!
For larger systems, this will be even better.
for N in [10, 50, 100, 500, 1000]:
    data = np.random.random((N, N))
    target = np.random.random((N, ))
    t_loopy = timeit.timeit('solve_loopy(data, target)', number=10, setup='from __main__ import solve_loopy, data, target')
    t_vect = timeit.timeit('solve_vect(data, target)', number=10, setup='from __main__ import solve_vect, data, target')
    print(N, t_loopy, t_vect, t_loopy/t_vect)
This gives us:
N t_loopy t_vect speedup
00010 0.002823 0.002099 1.345390
00050 0.051209 0.014486 3.535048
00100 0.260348 0.114601 2.271773
00500 0.980453 0.240151 4.082644
01000 1.769959 0.508197 3.482822

Different shape arrays operations

A bit of background:
I want to calculate the array factor of a MxN antenna array, which is given by the following equation:
AF(theta, phi) = sum_i w_i * exp(-j * k * (x_i * sin(theta) * cos(phi) + y_i * sin(theta) * sin(phi) + z_i * cos(theta)))
where w_i is the complex weight of the i-th element, (x_i, y_i, z_i) is the position of the i-th element, k is the wave number, theta and phi are the elevation and azimuth respectively, and i ranges from 0 to MxN-1.
In the code I have:
- theta and phi are np.mgrid with shape (200,200) each,
- w_i and (x,y,z)_i are np.array with shape (NxM,) each,
so AF is a np.array with shape (200,200) (sum over i). There is no problem so far, and I can get AF easily doing:
af = zeros([theta.shape[0],phi.shape[0]])
for i in range(self.size[0]*self.size[1]):
    af = af + ( w[i]*e**(-1j*(k * x_pos[i]*sin(theta)*cos(phi) + k * y_pos[i]* sin(theta)*sin(phi)+ k * z_pos[i] * cos(theta))) )
Now, each w_i depends on frequency, so AF does too, and now I have w_i with shape (NxM, 1000) (I have 1000 samples of each w_i in frequency). I tried to use the above code, changing
af = zeros([1000,theta.shape[0],phi.shape[0]])
but I get 'operands could not be broadcast together'. I can solve this by using a for loop through the 1000 values, but it is slow and is a bit ugly. So, what is the correct way to do the summation, or the correct way to properly define w_i and AF ?
Any help would be appreciated. Thanks.
edit
The code with the new dimension I'm trying to add is the next:
from numpy import *

class AntennaArray:
    def __init__(self, f, asize=None, tipo=None, dx=None, dy=None):
        self.Lambda = 299792458 / f
        self.k = 2*pi/self.Lambda
        self.size = asize
        self.type = tipo
        self._AF_DATA_SIZE = 200
        self.theta, self.phi = mgrid[0 : pi : self._AF_DATA_SIZE*1j, 0 : 2*pi : self._AF_DATA_SIZE*1j]
        self.element_pos = None
        self.element_amp = None
        self.element_pha = None
        if dx == None:
            self.dx = self.Lambda/2
        else:
            self.dx = dx
        if dy == None:
            self.dy = self.Lambda/2
        else:
            self.dy = dy
        self.generate_array()

    def generate_array(self):
        M = self.size[0]
        N = self.size[1]
        dx = self.dx
        dy = self.dy
        x_pos = arange(0, dx*N, dx)
        y_pos = arange(0, dy*M, dy)
        z_pos = 0
        ele = zeros([N*M, 3])
        for i in range(M):
            ele[i*N:(i+1)*N, 0] = x_pos[:]
        for i in range(M):
            ele[i*N:(i+1)*N, 1] = y_pos[i]
        self.element_pos = ele
        #self.array_factor = self.calculate_array_factor()

    def calculate_array_factor(self):
        theta, phi = self.theta, self.phi
        k = self.k
        x_pos = self.element_pos[:, 0]
        y_pos = self.element_pos[:, 1]
        z_pos = self.element_pos[:, 2]
        w = self.element_amp*exp(1j*self.element_pha)
        if len(self.element_pha.shape) > 1:
            #I have f_size samples of w_i(f)
            f_size = self.element_pha.shape[1]
            af = zeros([f_size, theta.shape[0], phi.shape[0]])
        else:
            #I only have w_i
            af = zeros([theta.shape[0], phi.shape[0]])
        for i in range(self.size[0]*self.size[1]):
            #This for loop does the summation over i
            af = af + ( w[i]*e**(-1j*(k * x_pos[i]*sin(theta)*cos(phi) + k * y_pos[i]* sin(theta)*sin(phi)+ k * z_pos[i] * cos(theta))) )
        return af
I tried to test it with the next main
from numpy import *
f_points = 10
M = 2
N = 2
a = AntennaArray(5.8e9,[M,N])
a.element_amp = ones([M*N,f_points])
a.element_pha = zeros([M*N,f_points])
af = a.calculate_array_factor()
But I get
ValueError: 'operands could not be broadcast together with shapes (10,) (200,200) '
Note that if I set
a.element_amp = ones([M*N])
a.element_pha = zeros([M*N])
This works well.
Thanks.
I had a look at the code, and I think this for loop:
af = zeros([theta.shape[0],phi.shape[0]])
for i in range(self.size[0]*self.size[1]):
    af = af + ( w[i]*e**(-1j*(k * x_pos[i]*sin(theta)*cos(phi) + k * y_pos[i]* sin(theta)*sin(phi)+ k * z_pos[i] * cos(theta))) )
is wrong in many ways. You are mixing dimensions, you cannot loop that way.
And by the way, to make full use of numpy efficiency, never loop over the arrays. It slows down the execution significantly.
I tried to rework that part.
First, I advise you not to use from numpy import *; it's bad practice (see here). Use import numpy as np. I reintroduced the np abbreviation, so you can understand what comes from numpy.
Frequency independent case
This first snippet assumes that w is a 1D array of length 4: I am neglecting the frequency dependency of w, to show you how you can get what you already obtained without the for loop and using instead the power of numpy.
af_points = w[:,np.newaxis,np.newaxis]*np.e**(-1j*
    (k * x_pos[:,np.newaxis,np.newaxis]*np.sin(theta)*np.cos(phi) +
     k * y_pos[:,np.newaxis,np.newaxis]*np.sin(theta)*np.sin(phi) +
     k * z_pos[:,np.newaxis,np.newaxis]*np.cos(theta)
    ))
af = np.sum(af_points, axis=0)
I am using numpy broadcasting to obtain a 3D array named af_points, whose shape is (4, 200, 200). To do it, I use np.newaxis to extend the number of axis of an array in order to use broadcasting correctly. More here on np.newaxis.
So, w[:,np.newaxis,np.newaxis] is an array of shape (4, 1, 1). Similarly for x_pos[:,np.newaxis,np.newaxis], y_pos[:,np.newaxis,np.newaxis] and z_pos[:,np.newaxis,np.newaxis]. Since the angles have shape (200, 200), broadcasting can be done, and af_points has shape (4, 200, 200).
Finally the sum is done by np.sum, summing over the first axis to obtain a (200, 200) array.
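If it helps, a tiny sketch with dummy arrays shows the shapes involved (the sizes here are just placeholders):

import numpy as np

w = np.ones(4)                 # 4 elements
x_pos = np.zeros(4)
theta = np.zeros((200, 200))

print(w[:, np.newaxis, np.newaxis].shape)                        # (4, 1, 1)
print((x_pos[:, np.newaxis, np.newaxis] * np.sin(theta)).shape)  # (4, 200, 200)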
Frequency dependent case
Now w has shape (4, 10), where 10 are the frequency points. The idea is the same, just consider that the frequency is an additional dimension in your numpy arrays: now af_points will be an array of shape (4, 10, 200, 200) where 10 are the f_points you have defined.
To keep it understandable, I've split the calculation:
# exp_points is only the exponent, frequency independent. Will be a (4, 200, 200) array.
exp_points = np.e**(-1j*
    (k * x_pos[:,np.newaxis,np.newaxis]*np.sin(theta)*np.cos(phi) +
     k * y_pos[:,np.newaxis,np.newaxis]*np.sin(theta)*np.sin(phi) +
     k * z_pos[:,np.newaxis,np.newaxis]*np.cos(theta)
    ))
af_points = w[:,:,np.newaxis,np.newaxis] * exp_points[:,np.newaxis,:,:]
af = np.sum(af_points, axis=0)
And now af has shape (10, 200, 200).
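If you prefer to avoid the extra newaxis bookkeeping for the weights, I believe the same frequency-dependent sum can also be written with np.einsum, summing over the element axis directly:

# w has shape (n_elements, f_points); exp_points has shape (n_elements, 200, 200)
af = np.einsum('if,ijk->fjk', w, exp_points)   # shape (f_points, 200, 200)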

Numpy tensor implementation slower than loop

I have two functions that compute the same metric. One ends up using a list comprehension to cycle through a calculation, the other uses only numpy tensor operations. The functions take in a (N, 3) array, where N is the number of points in 3D space. When N <~ 3000 the tensor function is faster, when N >~ 3000 the list comprehension is faster. Both seem to have linear time complexity in terms of N, i.e. the two time-vs-N lines cross at N ≈ 3000.
def approximate_area_loop(section, num_area_divisions):
    n_a_d = num_area_divisions
    interp_vectors = get_section_interp_(section)

    a1 = section[:-1]
    b1 = section[1:]
    a2 = interp_vectors[:-1]
    b2 = interp_vectors[1:]

    c = lambda u: (1 - u) * a1 + u * a2
    d = lambda u: (1 - u) * b1 + u * b2
    x = lambda u, v: (1 - v) * c(u) + v * d(u)

    area = np.sum([np.linalg.norm(np.cross((x((i + 1)/n_a_d, j/n_a_d) - x(i/n_a_d, j/n_a_d)),
                                           (x(i/n_a_d, (j + 1)/n_a_d) - x(i/n_a_d, j/n_a_d))), axis = 1)
                   for i in range(n_a_d) for j in range(n_a_d)])

    Dt = section[-1, 0] - section[0, 0]
    return area, Dt
def approximate_area_tensor(section, num_area_divisions):
    divisors = np.linspace(0, 1, num_area_divisions + 1)
    interp_vectors = get_section_interp_(section)

    a1 = section[:-1]
    b1 = section[1:]
    a2 = interp_vectors[:-1]
    b2 = interp_vectors[1:]

    c = np.multiply.outer(a1, (1 - divisors)) + np.multiply.outer(a2, divisors) # c_areas_vecs_divs
    d = np.multiply.outer(b1, (1 - divisors)) + np.multiply.outer(b2, divisors) # d_areas_vecs_divs
    x = np.multiply.outer(c, (1 - divisors)) + np.multiply.outer(d, divisors)   # x_areas_vecs_Divs_divs

    u = x[:, :, 1:, :-1] - x[:, :, :-1, :-1] # u_areas_vecs_Divs_divs
    v = x[:, :, :-1, 1:] - x[:, :, :-1, :-1] # v_areas_vecs_Divs_divs
    sub_area_norm_vecs = np.cross(u, v, axis = 1) # areas_crosses_Divs_divs
    sub_areas = np.linalg.norm(sub_area_norm_vecs, axis = 1) # areas_Divs_divs (values are now sub areas)
    area = np.sum(sub_areas)

    Dt = section[-1, 0] - section[0, 0]
    return area, Dt
Why does the list comprehension version run faster at large N? Surely the tensor version should be faster? I'm wondering if it's something to do with the size of the calculations, meaning they're too big to fit in cache? Please ask if I haven't included enough information, I'd really like to get to the bottom of this.
The bottleneck in the fully vectorized function was indeed in np.linalg.norm, as hpaulj's comment suggested.
Norm was used only to get the magnitude of all the vectors contained in axis 1. A much simpler and faster method was to just:
sub_areas = np.sqrt((sub_area_norm_vecs*sub_area_norm_vecs).sum(axis = 1))
This gives exactly the same results and made the code up to 25 times faster than the loop implementation (even when the loop doesn't use linalg.norm either).
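For what it's worth, a quick check on a random array (with the vector components on axis 1, as above) confirms the two expressions agree:

import numpy as np

v = np.random.random((5, 3, 7, 7))        # axis 1 holds the 3 vector components
a = np.linalg.norm(v, axis=1)
b = np.sqrt((v * v).sum(axis=1))
print(np.allclose(a, b))                  # True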
