So, I am working in Python with SymPy, and I need to evaluate the arc cosine, but I can only get the symbolic expression, not the numerical value that I need.
line0.angle_between(ppline0)
line01.angle_between(ppline01)
line02.angle_between(ppline02)
def ang_lines(line0, line01, line02):
    if line0.angle_between(line01) == line0.angle_between(line02):
        return True
    else:
        return nsimplify(line0.angle_between(line01) - line0.angle_between(line02), tolerance=0.02)
line1.angle_between(ppline1)
line11.angle_between(ppline11)
line12.angle_between(ppline12)
ang_lines(line0,line01,line02)
I created a function that gives me the difference between the angles that two lines each make with a third one, but when I call the function, I only get the symbolic expression for it.
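Not part of the original question, but for context: a symbolic SymPy result can usually be forced to a floating-point value with .evalf() or sympy.N. A minimal sketch with two made-up lines (not the ones from the question):

from sympy import Line, Point, N

# Two hypothetical lines, purely for illustration
line_a = Line(Point(0, 0), Point(1, 0))
line_b = Line(Point(0, 0), Point(1, 1))

angle = line_a.angle_between(line_b)   # symbolic result, e.g. pi/4
print(angle.evalf())                   # numeric value, approximately 0.785398
print(N(angle))                        # same thing via sympy.N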
Consider the function:
def f(x, y):
    return x + 3*exp(y**2)
I was wondering, is it possible to use scipy.optimize.minimize to find the minimum value on, say, [0,1] (the unit interval), for both x and y?
Here is my attempt:
bound = (0,1)
bds = [bound,bound]
x_0 = [0,0]  # initial guess
And thus,
scipy.optimize.minimize(f,x_0,method='SLSQP', \ bounds = bds)
But this isn't working.
I keep getting:
"unexpected character after line continuation character" At \ bounds = bnds
Note that I want my x and y to vary over the real numbers on [0,1]
Edit:
def f(x):
    return x[0] + 3*exp(x[1]**2)
bound = (0,1)
bds = [bound,bound]
x_0 = [0,0]  # initial guess
scipy.optimize.minimize(f,x_0,method='SLSQP', bounds = bds)
Is this minimize function looking at only the integer values 0 and 1, or is it looking at all real numbers in [0,1] (the unit interval)? If it's the first, I'm not sure how to make it do the second; how do I do so?
Your original code wouldn't work because
"unexpected character after line continuation character" At \ bounds = bnds
is telling you that the "line continuation character" (the backslash) is causing a problem: you can't have anything after that character on the same line. Insert a line break after the backslash, or remove the backslash altogether.
Once you fix that, you'll get an error saying
TypeError: f() missing 1 required positional argument: 'y'
This is because minimize wants a function that takes a single input (see the "Parameters: fun" part of the documentation). That input can be an array of shape (n,). When you want a multivariate minimization, all n variables go into that single argument to your function.
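To make that concrete, here is a minimal sketch of the corrected call (essentially the version in the question's Edit), with both variables packed into one array argument; the expected result follows from f increasing in both variables on [0, 1]:

from math import exp
from scipy.optimize import minimize

# Single argument: x is array-like of shape (2,), holding both variables
def f(x):
    return x[0] + 3*exp(x[1]**2)

res = minimize(f, [0, 0], method='SLSQP', bounds=[(0, 1), (0, 1)])
print(res.x, res.fun)   # expected: [0. 0.] and 3.0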
Re: "Is this minimise function looking at only integer values 0 and 1? or is it looking at all real numbers in [0,1] (the unit interval?). If its the first, im not sure how to make it to the second."
It would be a pretty useless optimizer if it only checked the values at the bounds, don't you think?
This is easy enough to check, though! Your current function has its minimum at [0, 0], so it's not a great way to test what minimize does. Let's instead define a function whose minimum is at a different point, say [0.5, 0.5]:
def f(X):
    return abs(X[0] - 0.5) * abs(X[1] - 0.5)
Running your code gives the result:
fun: 0.0
jac: array([0., 0.])
message: 'Optimization terminated successfully.'
nfev: 8
nit: 2
njev: 2
status: 0
success: True
x: array([0.5, 0.5])
which makes it pretty clear that minimize() looks in the entire interval.
It doesn't really look at all real numbers in the interval, though (that would be impossible, given that there are infinitely many real numbers in any interval). Instead, it uses the optimization algorithm that you specify in the method argument.
From the minimize documentation: "The optimization result represented as a OptimizeResult object. Important attributes are: x the solution array, success a Boolean flag indicating if the optimizer exited successfully and message which describes the cause of the termination."
For reference, I'm using this page. I understand the original PageRank equation,
but I'm failing to understand why the sparse-matrix implementation is correct. Below is their code, reproduced:
def compute_PageRank(G, beta=0.85, epsilon=10**-4):
    '''
    Efficient computation of the PageRank values using a sparse adjacency
    matrix and the iterative power method.

    Parameters
    ----------
    G : boolean adjacency matrix. np.bool8
        If the element j,i is True, means that there is a link from i to j.
    beta: 1-teleportation probability.
    epsilon: stop condition. Minimum allowed amount of change in the PageRanks
        between iterations.

    Returns
    -------
    output : tuple
        PageRank array normalized top one.
        Number of iterations.
    '''
    # Test adjacency matrix is OK
    n, _ = G.shape
    assert(G.shape == (n, n))
    # Constants Speed-UP
    deg_out_beta = G.sum(axis=0).T / beta  # vector
    # Initialize
    ranks = np.ones((n, 1)) / n  # vector
    time = 0
    flag = True
    while flag:
        time += 1
        with np.errstate(divide='ignore'):  # Ignore division by 0 on ranks/deg_out_beta
            new_ranks = G.dot((ranks / deg_out_beta))  # vector
        # Leaked PageRank
        new_ranks += (1 - new_ranks.sum()) / n
        # Stop condition
        if np.linalg.norm(ranks - new_ranks, ord=1) <= epsilon:
            flag = False
        ranks = new_ranks
    return (ranks, time)
To start, I'm trying to trace the code and understand how it relates to the PageRank equation. The line under the with statement (new_ranks = G.dot((ranks/deg_out_beta))) looks like the first part of the equation (the beta times M), BUT it seems to be ignoring all the divide-by-zeros. I'm confused by this because the PageRank algorithm requires us to replace zero columns with ones (except along the diagonal), and I'm not sure how this is accounted for here.
The next line, new_ranks += (1-new_ranks.sum())/n, is what I presume to be the second part of the equation. I can understand what this does, but I can't see how it translates to the original equation. I would've thought we would do something like new_ranks += (1-beta)*ranks.sum()/n.
This happens because of the row sums:
e.T * M * r = e.T * r
by the column-sum construction of M. The convex combination with coefficient beta then has the effect that the sum over the new r vector is again 1. What the algorithm does is take the first matrix-vector product b = beta*M*r and then find a constant c so that r_new = b + c*e has row sum one. In theory this is the same as what the formula says, but in floating-point practice this approach corrects for and prevents the accumulation of floating-point error in the sum of r.
Computing it this way also allows one to ignore zero columns, as the compensation for them is computed automatically.
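To see the equivalence concretely, here is a small sketch with a made-up 3-node column-stochastic matrix (not taken from the question) comparing the textbook update beta*M*r + (1-beta)/n with the code's compensation step:

import numpy as np

# Toy column-stochastic matrix (every column sums to 1, no dangling nodes)
M = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
beta = 0.85
n = M.shape[0]
r = np.ones(n) / n

textbook = beta * M.dot(r) + (1 - beta) / n   # original PageRank formula
b = beta * M.dot(r)
compensated = b + (1 - b.sum()) / n           # what new_ranks += (1 - new_ranks.sum())/n does
print(np.allclose(textbook, compensated))     # True: with column sums of 1, 1 - b.sum() equals 1 - beta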
I want to evaluate the exponential integral function numerically using the trapezoidal rule. This function is defined as Ei(x) = -PV integral from -x to infinity of exp(-t)/t dt (a principal-value integral).
The reference is available here.
This function is already available in some libraries, for example scipy.special. For my own reasons I do not want to use these libraries; instead, I need to evaluate this function directly with the trapezoidal rule. I wrote the trapezoidal rule and checked it to make sure it works fine.
Then I used it for the numerical evaluation of the Ei function. Unfortunately the result is not correct: for example, I want to evaluate Ei(1), which equals 1.89511, but the code I have written returns infinity, which is wrong. Here is the code:
import numpy as np
# Integration using Trapezoidal rule
def trapezoidal(f, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += f(a)/2.0
    for i in range(1, n):
        s += f(a + i*h)
    s += f(b)/2.0
    return s * h

# Define integrand
def Ei(t):
    return - np.exp(-t) / t
# Define Ei(1)
A = trapezoidal(Ei, -1, np.inf, 20)
print (A)
# Checking Ei(1)
from scipy.special import expi
print (expi(1))
Do you know how I can modify the above code so I can get correct results?
Thanks!
1) You cannot define a range ending with +inf and divide it into 20 parts.
Instead, you can choose an arbitrary right limit and increase it until the difference abs(integral(limit(i+1)) - integral(limit(i))) becomes negligible.
2) Consider the function evaluation at the zero point (if it occurs in your grid); it causes a division by zero.
If the argument is too close to zero, try shifting it a bit.
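Putting the two suggestions together, here is one possible sketch. It reuses the trapezoidal routine from the question, truncates the infinite upper limit at a finite R that is doubled until the result stops changing, and (going slightly beyond the "shift it a bit" suggestion) handles the singularity at zero by pairing t with -t, which is the principal-value reading of the integral and leaves a smooth integrand:

import numpy as np

def trapezoidal(f, a, b, n):
    h = float(b - a) / n
    s = f(a)/2.0 + f(b)/2.0
    for i in range(1, n):
        s += f(a + i*h)
    return s * h

def Ei(x, points_per_unit=2000, tol=1e-8):
    # Folding t and -t over [-x, x] turns -exp(-t)/t into (exp(t) - exp(-t))/t, which is smooth (limit 2 at t = 0)
    def folded(t):
        return 2.0 if t == 0 else (np.exp(t) - np.exp(-t)) / t
    # The remaining tail -exp(-t)/t has no singularity for t >= x > 0
    def tail(t):
        return -np.exp(-t) / t
    def tail_integral(R):
        return trapezoidal(tail, x, R, int(points_per_unit * (R - x)))

    head = trapezoidal(folded, 0.0, x, int(points_per_unit * x))
    R = 2.0 * x
    total = head + tail_integral(R)
    while True:
        R *= 2.0   # enlarge the right limit until it no longer changes the result
        new_total = head + tail_integral(R)
        if abs(new_total - total) < tol:
            return new_total
        total = new_total

print(Ei(1.0))   # approximately 1.89512, matching scipy.special.expi(1)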
I am trying to find the output of a Lagrange interpolation function and predict the interpolated values from the equation after fitting the curve.
I got the code for the function from a website. However, I presume it just stores the equation as a function, whereas I expect a result, i.e. the values for the list supplied.
def langrange_polynomial(X, Y):
    def L(i):
        return lambda x: np.prod([(x-X[j])/(X[i]-X[j]) for j in range(len(X)) if i != j]) * Y[i]
    Sx = [L(i) for i in range(len(X))]  # summands
    return lambda x: np.sum([s(x) for s in Sx])
My expectation is for the given function to evaluate, or predict, the function at a certain value or list, i.e. if I pass a list of numbers [2,3,4,5], I should get the corresponding output values f(x), where f(x) is my Lagrange equation.
As you said, the langrange_polynomial function returns a function in Python (built here out of lambda expressions). You can read about lambdas here: https://www.w3schools.com/python/python_lambda.asp
To use your code, just assign the output of langrange_polynomial to a variable (which will be of type function) and then use that variable as a usual function, like so:
f = langrange_polynomial(np.arange(10),np.arange(10))
f([3,60,30])
# output
3.115663712769497e+21
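One caveat to add (my note, not part of the original answer): the lambda returned by langrange_polynomial is written for a scalar x, so passing a whole list collapses everything into a single meaningless number rather than giving one value per point. To evaluate a list of inputs, call it once per point:

import numpy as np

f = langrange_polynomial(np.arange(10), np.arange(10))
print([float(f(x)) for x in [2, 3, 4, 5]])
# [2.0, 3.0, 4.0, 5.0], since interpolating the points (k, k) reproduces y = x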
My problem consists of the following: I am given two pairs of angles (in spherical coordinates), each consisting of two parts, an azimuth and a colatitude angle. If we extend each pair infinitely (thereby increasing its radius) to make a long line pointing in the direction given by that pair of angles, then my goal is to determine
whether they intersect or come extremely close to one another, and
where exactly they intersect.
Currently, I have tried several methods:
The most obvious one is to iteratively compare each radius until there is either a match or a small enough distance between the two. (When I say compare each radius, I am referring to converting each spherical coordinate into Cartesian, via a spher2cart helper sketched just before the code below, and then finding the Euclidean distance between the two.) However, this is $O(n^{2})$, which is extremely slow if I am trying to scale this program.
The second most obvious method is to use the optimization package to find this distance. Unfortunately, I cannot run the optimization package iteratively, and after one instance the optimization algorithm repeats the same answer, which is not useful.
The least obvious method is to directly calculate (using calculus) the exact radii from the angles. While this is a fast method, it is not very accurate.
Note: while it might seem simple that the intersection is always at the zero-origin (0,0,0), this is not ALWAYS the case. Some points never intersect.
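The helper spher2cart used in both code blocks below is not shown in the post; a plausible reconstruction (assuming the physics convention, with the colatitude measured from the +z axis, which matches the conversion used in the answer further down) would be:

import numpy as np

def spher2cart(r, azimuth, colatitude):
    # Hypothetical reconstruction of the helper used below, not from the original post
    x = r * np.cos(azimuth) * np.sin(colatitude)
    y = r * np.sin(azimuth) * np.sin(colatitude)
    z = r * np.cos(colatitude)
    return x, y, z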
Code for Method (1)
def match1(azimuth_recon_1, colatitude_recon_1, azimuth_recon_2, colatitude_recon_2, centroid_1, centroid_2):
    # Constants: tolerance factor and extremely large distance
    tol = 3e-2
    prevDist = 99999999
    # Initialize a list of radii to loop through
    # Checking iteratively for a solution
    for r1 in list(np.arange(0, 5, tol)):
        for r2 in list(np.arange(0, 5, tol)):
            # Get the estimates
            estimate_1 = np.array(spher2cart(r1, azimuth_recon_1, colatitude_recon_1)) + np.array(centroid_1)
            estimate_2 = np.array(spher2cart(r2, azimuth_recon_2, colatitude_recon_2)) + np.array(centroid_2)
            # Calculate the euclidean distance between them
            dist = np.array(np.sqrt(np.einsum('i...,i...', (estimate_1 - estimate_2), (estimate_1 - estimate_2)))[:, np.newaxis])
            # Compare the distance to this tolerance
            if dist < tol:
                if dist == 0:
                    return estimate_1, [], True
                else:
                    return estimate_1, estimate_2, False
            ## If the distance is too big break out of the loop
            if dist > prevDist:
                prevDist = 9999999
                break
            prevDist = dist
    return [], [], False
Code for Method (3)
def match2(azimuth_recon_1, colatitude_recon_1, azimuth_recon_2, colatitude_recon_2, centroid_1, centroid_2):
    # Set a Tolerance factor
    tol = 3e-2

    def calculate_radius_2(azimuth_1, colatitude_1, azimuth_2, colatitude_2):
        """Return radius 2 using both pairs of angles (azimuth and colatitude). Equation is provided in the document"""
        return 1/((1-(math.sin(azimuth_1)*math.sin(azimuth_2)*math.cos(colatitude_1-colatitude_2))
                   + math.cos(azimuth_1)*math.cos(azimuth_2))**2)

    def calculate_radius_1(radius_2, azimuth_1, colatitude_1, azimuth_2, colatitude_2):
        """Return radius 1 using both pairs of angles (azimuth and colatitude) and radius 2.
        Equation provided in document"""
        return (radius_2)*((math.sin(azimuth_1)*math.sin(azimuth_2)*math.cos(colatitude_1-colatitude_2))
                           + math.cos(azimuth_1)*math.cos(azimuth_2))

    # Compute radius 2
    radius_2 = calculate_radius_2(azimuth_recon_1, colatitude_recon_1, azimuth_recon_2, colatitude_recon_2)
    # Compute radius 1
    radius_1 = calculate_radius_1(radius_2, azimuth_recon_1, colatitude_recon_1, azimuth_recon_2, colatitude_recon_2)
    # Get the estimates
    estimate_1 = np.array(spher2cart(radius_1, azimuth_recon_1, colatitude_recon_1)) + np.array(centroid_1)
    estimate_2 = np.array(spher2cart(radius_2, azimuth_recon_2, colatitude_recon_2)) + np.array(centroid_2)
    # Calculate the euclidean distance between them
    dist = np.array(np.sqrt(np.einsum('i...,i...', (estimate_1 - estimate_2), (estimate_1 - estimate_2)))[:, np.newaxis])
    # Compare the distance to this tolerance
    if dist < tol:
        if dist == 0:
            return estimate_1, [], True
        else:
            return estimate_1, estimate_2, False
    else:
        return [], [], False
My question is two-fold:
Is there a faster and more accurate way to find the radii for both points?
If so, how do I do it?
EDIT: I am thinking about just creating two numpy arrays of the two radii and then comparing them via numpy boolean logic. However, I would still be comparing them iteratively. Is there a faster way to perform this comparison?
Use a kd-tree for such situations. It will easily look up the minimal distance:
import numpy as np
from scipy import spatial

def match(azimuth_recon_1, colatitude_recon_1, azimuth_recon_2, colatitude_recon_2, centroid_1, centroid_2):
    # Unit direction vectors for each pair of angles
    cartesian_1 = np.array([np.cos(azimuth_recon_1)*np.sin(colatitude_recon_1),
                            np.sin(azimuth_recon_1)*np.sin(colatitude_recon_1),
                            np.cos(colatitude_recon_1)])
    cartesian_2 = np.array([np.cos(azimuth_recon_2)*np.sin(colatitude_recon_2),
                            np.sin(azimuth_recon_2)*np.sin(colatitude_recon_2),
                            np.cos(colatitude_recon_2)])
    # r1 and r2 are your arrays of candidate radii (e.g. np.arange(0, 5, tol)) from your own setup
    # Re-center the candidate points by adding the centroid
    estimate_1 = r1*cartesian_1.T + np.array(centroid_1)[np.newaxis, :]
    estimate_2 = r2*cartesian_2.T + np.array(centroid_2)[np.newaxis, :]
    # Collect them in the output lists (outputs_list_1, outputs_list_2 and two_pair_mic_list come from your pipeline)
    n = estimate_1.shape[0]
    outputs_list_1.append(estimate_1)
    outputs_list_2.append(estimate_2)
    # Reshape them into the proper (N, 3) format
    a = np.array(outputs_list_1).reshape(len(two_pair_mic_list)*n, 3)
    b = np.array(outputs_list_2).reshape(len(two_pair_mic_list)*n, 3)
    # Difference vectors between paired candidate points
    c = a - b
    # Put them into a KD-tree
    tree = spatial.KDTree(c)
    # A pair is a near-intersection when its difference vector lies within 3e-3 of the origin;
    # query_ball_point returns the indices of all rows of c within that radius of the given point
    indices = tree.query_ball_point(np.zeros(3), 3e-3)
This will output a list of the indices where the distance is 3e-3 or less. Now all you will have to do is use the list of indices with the estimate list to find the exact points. And there you have it, this will save you a lot of time and space!
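For example, continuing with the names above (a sketch under the same assumptions as the code block), the matching candidate points can be pulled out with fancy indexing:

matched_1 = a[indices]   # points on line 1 that come within 3e-3 of line 2
matched_2 = b[indices]   # the corresponding points on line 2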