I'm writing a program that randomly chooses two integers within a certain interval. I also wrote a class (not included below) that takes two numbers 'a' and 'b' and creates an elliptic curve of the form:
y^2 = x^3 + ax + b
I've written the following to create the two random numbers.
import random

def numbers():
    n = 1
    while n > 0:
        a = random.randint(-100, 100)
        b = random.randint(-100, 100)
        # Reject singular curves (discriminant equal to zero)
        if -16 * (4 * a ** 3 + 27 * b ** 2) != 0:
            result = [a, b]
            return result
        n = n + 1
Now I would like to generate a random point on this elliptic curve. How do I do that?
The curve has infinite length: for every y ∈ ℝ there is at least one x ∈ ℝ such that (x, y) is on the curve. So if we speak of a random point on the curve, we cannot hope for a uniform distribution of that point over the whole curve.
But if that is not important, you could take a random value for y within some range, and then calculate the roots of the following function:
f(x) = x³ + ax + b - y²
This will result in three roots, of which possibly two are complex (not real numbers). You can take a random real root from that. This will be the x coordinate for the random point.
With the help of numpy, getting the roots is easy, so this is the function for getting a random point on the curve, given values for a and b:
import random
import numpy

def randomPoint(a, b):
    y = random.randint(-100, 100)
    # Get roots of: f(x) = x^3 + ax + b - y^2
    roots = numpy.roots([1, 0, a, b - y**2])
    # 3 roots are returned, but ignore potential complex roots
    # At least one will be real
    roots = [val.real for val in roots if val.imag == 0]
    # Choose a random root among those real root(s)
    x = random.choice(roots)
    return [x, y]
Consider a piecewise cubic Bezier curve with N segments, defined in terms of 4N control points. How can I determine whether this curve passes the Vertical Line Test? That is: do there exist points x,y1,y2 such that y1!=y2 and both (x,y1) and (x,y2) lie on the curve? Additionally, it would be nice but not essential to return the values of the points x,y1,y2 if such points exist.
In principle, one could just evaluate the curve explicitly at a large number of points, but in my application the curves can have a large number of segments so this is prohibitively slow. Thus I am looking for an algorithm that operates on the control points only, and does not rely on explicitly evaluating the curves at a large number of points.
Closed curves will fail your test by definition, so let's look at open curves: for a curve to fail your vertical line test, at least one segment in your piecewise cubic polyBezier needs to "reverse direction" along the x-axis, which means its x component needs to have extrema in the inclusive interval [0,1].
For example, a segment whose x component never reverses doesn't cause a problem. (In the example figures, the right side shows the component function for x, with global extrema in red and inflections in purple. Also note that we're only drawing the [0,1] interval; if we were to extend it, those global extrema wouldn't actually be at t=0 and t=1. This is in fact critically important, and we'll see why below.) Other example curves do reverse direction, once or even twice, as does a curve "just past its cusp", which is easier to see when exaggerated.
This means we need to find the first derivative, find out where it's zero, and then make sure that (because Beziers can have cusps) the sign of that derivative flips at that point (or points, as there can be two).
As it turns out, the derivative is trivial to write down, so for each segment we have:
w = [x1, x2, x3, x4]
Bx(t) = w[0] * (1-t)³ + 3 * w[1] * t(1-t)² + 3 * w[2] * t²(1-t) + w[3] * t³
// bezier form of the derivative:
v = [
3 * (w[1] - w[0]),
3 * (w[2] - w[1]),
3 * (w[3] - w[2]),
]
Bx'(t) = v[0] * (1-t)² + 2 * v[1] * t(1-t) + v[2] * t²
Which we can trivially rewrite to polynomial form:
a = v[0] - 2*v[1] + v[2]
b = 2 * (v[1] - v[0])
c = v[0]
Bx'(t) = a * t² + b * t + c
So we find the roots for that, which is a matter of applying the quadratic formula, which gives us 0, 1, or 2 roots.
if (a == 0) the derivative is linear: its only root (when b ≠ 0) is t = -c/b; otherwise apply the quadratic formula:
denominator = 2 * a
discriminant = b * b - 2 * denominator * c
if (discriminant < 0) there are no (real) roots
d = sqrt(discriminant)
t1 = -(b + d) / denominator
if (0 ≤ t1 ≤ 1) then t1 is a valid root
t2 = (d - b) / denominator
if (0 ≤ t2 ≤ 1) then t2 is a valid root
If there are no (real) roots, then this segment does not cause the curve to fail your vertical line test, and we move on to the next segment and repeat until we've either found a failure, or we've run out of segments to test.
If there are 1 or 2 roots, we check whether the signs of Bx'(t-ε) and Bx'(t+ε), for some very small value of ε, differ (because we want to make sure we don't conclude that the cusp curve above fails the test: it has a zero derivative there, but it "keeps going in the same direction" across that root instead of flipping direction). If the signs do differ, this segment is (one of) the segment(s) that makes your curve fail your vertical line test.
Also, note that we're testing even if the root is at 0 or 1: the piecewise curve might inflect "across segments", and we can take advantage of the fact that we can evaluate Bx'(t) for t = -ε or t = 1+ε to see if we flip direction, even if we never draw the curve prior to t=0 or after t=1.
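Putting that together, here's a minimal Python sketch of the per-segment check (the function names are mine, and I'm assuming each segment is given as the x coordinates of its four control points):

import math

def segment_reverses_x(x1, x2, x3, x4):
    """True if the segment's x component reverses direction, i.e. Bx'(t) flips sign."""
    # Bezier form of the derivative of the x component
    v = [3 * (x2 - x1), 3 * (x3 - x2), 3 * (x4 - x3)]
    # Polynomial form: Bx'(t) = a*t^2 + b*t + c
    a = v[0] - 2 * v[1] + v[2]
    b = 2 * (v[1] - v[0])
    c = v[0]

    def bx_prime(t):
        return a * t * t + b * t + c

    # Real roots of Bx'(t) in the inclusive interval [0, 1]
    roots = []
    if a == 0:
        if b != 0:
            roots.append(-c / b)
    else:
        disc = b * b - 4 * a * c
        if disc >= 0:
            d = math.sqrt(disc)
            roots.extend([(-b - d) / (2 * a), (-b + d) / (2 * a)])
    roots = [t for t in roots if 0 <= t <= 1]

    # A root only counts if the derivative actually flips sign across it
    # (sampling just outside [0,1] also handles roots at t=0 or t=1)
    eps = 1e-6
    return any(bx_prime(t - eps) * bx_prime(t + eps) < 0 for t in roots)

def fails_vertical_line_test(segments):
    # segments: iterable of (x1, x2, x3, x4) tuples, one per cubic segment
    return any(segment_reverses_x(*seg) for seg in segments)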
Leaving solution here for reference. Not especially elegant but gets the job done. Assume that the curve is parametrized from left to right. The key observation is that the curve intersects a vertical at two points if and only if there is a point at which the derivative x'(t) of the x component is strictly negative. For a cubic Bezier curve the derivative is a quadratic polynomial x'(t)=at^2+bt+c. So we just need to check whether the minimum of this quadratic over the interval 0<=t<=1 is negative, which is straightforward (if a bit tedious) using basic algebra.
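In code, that minimum check looks something like this (a small sketch; a, b, c are the quadratic coefficients of x'(t) for one segment, computed from the control points as in the previous answer):

def x_prime_min_is_negative(a, b, c):
    """True if the minimum of a*t^2 + b*t + c over 0 <= t <= 1 is negative."""
    candidates = [0.0, 1.0]              # the minimum is at an endpoint...
    if a > 0:
        t_vertex = -b / (2 * a)          # ...or at the vertex, if the parabola opens upward
        if 0.0 <= t_vertex <= 1.0:
            candidates.append(t_vertex)
    return min(a * t * t + b * t + c for t in candidates) < 0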
I'm attempting to solve the differential equation:
m(t) = M(x)x'' + C(x, x') + B x'
where x and x' are vectors with 2 entries representing the angles and angular velocities in a dynamical system. M(x) is a 2x2 matrix that is a function of the components of x, C is a 2x1 vector that is a function of x and x', and B is a 2x2 matrix of constants. m(t) is a 2×1001 array containing the torques applied to each of the two joints at the 1001 time steps, and I would like to calculate the evolution of the angles over those 1001 time steps.
I've transformed it to standard form such that :
x'' = M(x)^-1 (m(t) - C(x, x') - B x')
Then substituting y_1 = x and y_2 = x' gives the first-order system of equations:
y_2 = y_1'
y_2' = M(y_1)^-1 (m(t) - C(y_1, y_2) - B y_2)
(I've used theta and phi in my code for x and y)
def joint_angles(theta_array, t, torques, B):
    phi_1 = np.array([theta_array[0], theta_array[1]])
    phi_2 = np.array([theta_array[2], theta_array[3]])

    def M_func(phi):
        M = np.array([[a_1 + 2.*a_2*np.cos(phi[1]), a_3 + a_2*np.cos(phi[1])],
                      [a_3 + a_2*np.cos(phi[1]), a_3]])
        return np.linalg.inv(M)

    def C_func(phi, phi_dot):
        return a_2 * np.sin(phi[1]) * np.array([-phi_dot[1] * (2. * phi_dot[0] + phi_dot[1]), phi_dot[0]**2])

    dphi_2dt = M_func(phi_1) @ (torques[:, t] - C_func(phi_1, phi_2) - B @ phi_2)
    return dphi_2dt, phi_2

t = np.linspace(0, 1, 1001)
initial = theta_init[0], theta_init[1], dtheta_init[0], dtheta_init[1]
x = odeint(joint_angles, initial, t, args=(torque_array, B))
I get the error that I cannot index into torques using the t array, which makes perfect sense, however I am not sure how to have it use the current value of the torques at each time step.
I also tried putting the odeint call in a for loop, evaluating only one time step at a time and using the solution as the initial conditions for the next loop, but the function simply returned the initial conditions, meaning every loop was identical. This leads me to suspect I've made a mistake in my implementation of the standard form, but I can't work out what it is. It would be preferable, however, not to have to call the odeint solver in a for loop for every time step, and rather do it all in one go.
If helpful, my initial conditions and constant values are:
theta_init = np.array([10*np.pi/180, 143.54*np.pi/180])
dtheta_init = np.array([0, 0])
L_1 = 0.3
L_2 = 0.33
I_1 = 0.025
I_2 = 0.045
M_1 = 1.4
M_2 = 1.0
D_2 = 0.16
a_1 = I_1+I_2+M_2*(L_1**2)
a_2 = M_2*L_1*D_2
a_3 = I_2
Thanks for helping!
The solver uses an internal step size that is adapted to the problem. The given time list is just a list of points at which the internal solution gets interpolated to produce output samples. The internal and external time lists are in no way related; the internal one depends only on the given tolerances.
There is no natural relation between array indices and sample times.
Translating a given time into an index and constructing a sample value from the surrounding table entries is called interpolation (by a piecewise polynomial function).
Torque, as a physical phenomenon, is at least continuous, and a piecewise linear interpolation is the easiest way to turn the given table of function values into an actual continuous function. Of course one also needs the time array.
So use numpy.interp or the more advanced routines of scipy.interpolate (such as interp1d) to define a torque function that can be evaluated at the arbitrary times demanded by the solver and its integration method.
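For example, a minimal sketch (assuming torque_array has shape (2, 1001) sampled on the same uniform grid as t, and that M_func, C_func, B, theta_init and dtheta_init are defined as in the question, with M_func and C_func moved to module level):

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t_samples = np.linspace(0, 1, 1001)
# torque_func(t) returns the linearly interpolated 2-component torque at time t
torque_func = interp1d(t_samples, torque_array, axis=1, fill_value="extrapolate")

def joint_angles(state, t, torque_func, B):
    phi_1 = state[:2]                 # angles
    phi_2 = state[2:]                 # angular velocities
    m_t = torque_func(t)              # torque evaluated at the solver's current time
    dphi_1dt = phi_2
    dphi_2dt = M_func(phi_1) @ (m_t - C_func(phi_1, phi_2) - B @ phi_2)
    # State ordering must match `initial`: angles first, then velocities
    return np.concatenate((dphi_1dt, dphi_2dt))

initial = np.concatenate((theta_init, dtheta_init))
sol = odeint(joint_angles, initial, t_samples, args=(torque_func, B))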
I want to create an array A = [1, 1, 2, 2, 2, 5, 5, 5, ...] with numbers from [a, b] such that:
A histogram where the y-axis is the frequency of each number in the array and the x-axis is [a, b] resembles a bell curve.
Bell Curve
The sum of frequency(i) * i for all i in [a, b] is approximately a large number K.
Many functions are available in Python, like numpy.random.normal or scipy.stats.truncnorm, but I am not able to fully understand their use and how they could help me create such an array.
The first point is easy. For the second point, I'm assuming you want the "integral" of freq * x to be close to K (making each x * freq(x) ~ K is mathematically impossible). You can do that by adjusting the sample size.
First step: bell curve shaped integer numbers between a and b, use scipy.stats.truncnorm. From the docs:
Notes
The standard form of this distribution is a standard normal truncated to the range [a, b]; notice that a and b are defined over the domain of the standard normal. To convert clip values for a specific mean and standard deviation, use:
a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std
Take a normal in the -3, 3 range, so the curve is nice. Adjust mean and standard deviation so -3, 3 becomes a, b:
import numpy as np
from scipy.stats import truncnorm

a, b = 10, 200
loc = (a + b) / 2      # mean: the centre of [a, b]
scale = (b - a) / 6    # standard deviation: so [a, b] maps onto [-3, 3]
f = truncnorm(-3, 3, loc=loc, scale=scale)
Now, since frequency is related to the probability density function: sum(freq(i) * i ) ~ n * sum(pdf(i) * i). Therefore, n = K / sum(pdf(i) * i). This can be obtained as:
K = 200000
i = np.arange(a, b +1)
n = int(K / i.dot(f.pdf(i)))
Now generate integer random samples and check the result:
samples = f.rvs(size=n).astype(int)
import matplotlib.pyplot as plt
plt.hist(samples, bins = 20)
print(np.histogram(samples, bins=b-a+1)[0].dot(np.arange(a,b+1)))
>> 200315
I've got a shape consisting of four points, A, B, C and D, of which only the positions are known. The goal is to transform these points so that they have specific angles and offsets relative to each other.
For example: A(-1,-1) B(2,-1) C(1,1) D(-2,1), which should be transformed to a perfect square (all angles 90) with offsets between AB, BC, CD and AD all being 2. The result should be a square slightly rotated counter-clockwise.
What would be the most efficient way to do this?
I'm using this for a simple block simulation program.
As Mark alluded, we can use constrained optimization to find the side 2 square that minimizes the square of the distance to the corners of the original.
We need to minimize f = (a-A)^2 + (b-B)^2 + (c-C)^2 + (d-D)^2 (where the square is actually a dot product of the vector argument with itself) subject to some constraints.
Following the method of Lagrange multipliers, I chose the following distance constraints:
g1 = (a-b)^2 - 4
g2 = (c-b)^2 - 4
g3 = (d-c)^2 - 4
and the following angle constraints:
g4 = (b-a).(c-b)
g5 = (c-b).(d-c)
A quick napkin sketch should convince you that these constraints are sufficient.
We then want to minimize f subject to the g's all being zero.
The Lagrange function is:
L = f + Sum(i = 1 to 5, l_i g_i)
where the l_i are the Lagrange multipliers.
The resulting gradient equations are non-linear, so we have to compute the Hessian and use multivariate Newton's method to iterate to a solution.
Here's the solution I got (red) for the data given (black):
This took 5 iterations, after which the L2 norm of the step was 6.5106e-9.
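If you'd rather not code the Newton iteration by hand, a library optimizer gives essentially the same result. Here is a minimal sketch using scipy.optimize.minimize with SLSQP and the same cost and equality constraints (the variable and helper names are mine):

import numpy as np
from scipy.optimize import minimize

# Original corners from the question, in order A, B, C, D
P = np.array([[-1.0, -1.0], [2.0, -1.0], [1.0, 1.0], [-2.0, 1.0]])

def cost(v):
    # f = sum of squared distances between fitted and original corners
    return np.sum((v.reshape(4, 2) - P) ** 2)

def side(i, j):
    # g = |corner_i - corner_j|^2 - 4  (side length 2)
    return {'type': 'eq',
            'fun': lambda v: np.sum((v.reshape(4, 2)[i] - v.reshape(4, 2)[j]) ** 2) - 4.0}

def angle(i, j, k):
    # g = (corner_j - corner_i) . (corner_k - corner_j)  (right angle at corner j)
    return {'type': 'eq',
            'fun': lambda v: np.dot(v.reshape(4, 2)[j] - v.reshape(4, 2)[i],
                                    v.reshape(4, 2)[k] - v.reshape(4, 2)[j])}

cons = [side(0, 1), side(1, 2), side(2, 3), angle(0, 1, 2), angle(1, 2, 3)]
res = minimize(cost, P.ravel(), method='SLSQP', constraints=cons)
square = res.x.reshape(4, 2)
print(square)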
While Codie CodeMonkey's solution is a perfectly valid one (and a great use case for the Lagrangian Multipliers at that), I believe that it's worth mentioning that if the side length is not given this particular problem actually has a closed form solution.
We would like to minimise the distance between the corners of our fitted square and the ones of the given quadrilateral. This is equivalent to minimising the cost function:
f(x1,...,y4) = (x1-ax)^2+(y1-ay)^2 + (x2-bx)^2+(y2-by)^2 +
(x3-cx)^2+(y3-cy)^2 + (x4-dx)^2+(y4-dy)^2
Where Pi = (xi,yi) are the corners of the fitted square and A = (ax,ay) through D = (dx,dy) represent the given corners of the quadrilateral in clockwise order. Since we are fitting a square, we have certain constraints regarding the positions of the four corners. Actually, if two opposite corners are given, they are enough to describe a unique square (save for the mirror image across the diagonal).
Parametrization of the points
This means that two opposite corners are enough to represent our target square. We can parametrise the two remaining corners using the components of the first two: below we express P2 and P4 in terms of P1 = (x1,y1) and P3 = (x3,y3). If you need a visualisation of the geometrical intuition behind the parametrisation of a square, you can play with the interactive version.
P2 = (x2,y2) = ( (x1+x3-y3+y1)/2 , (y1+y3-x1+x3)/2 )
P4 = (x4,y4) = ( (x1+x3+y3-y1)/2 , (y1+y3+x1-x3)/2 )
Substituting for x2,x4,y2,y4 means that f(x1,...,y4) can be rewritten to:
f(x1,x3,y1,y3) = (x1-ax)^2+(y1-ay)^2 + ((x1+x3-y3+y1)/2-bx)^2+((y1+y3-x1+x3)/2-by)^2 +
(x3-cx)^2+(y3-cy)^2 + ((x1+x3+y3-y1)/2-dx)^2+((y1+y3+x1-x3)/2-dy)^2
a function which only depends on x1,x3,y1,y3. To find the minimum of the resulting function we then set the partial derivatives of f(x1,x3,y1,y3) equal to zero. They are the following:
df/dx1 = 4x1-dy-dx+by-bx-2ax = 0 --> x1 = ( dy+dx-by+bx+2ax)/4
df/dx3 = 4x3+dy-dx-by-bx-2cx = 0 --> x3 = (-dy+dx+by+bx+2cx)/4
df/dy1 = 4y1-dy+dx-by-bx-2ay = 0 --> y1 = ( dy-dx+by+bx+2ay)/4
df/dy3 = 4y3-dy-dx-2cy-by+bx = 0 --> y3 = ( dy+dx+by-bx+2cy)/4
You may see where this is going, as a simple rearrangement of the terms leads to the final solution.
Final solution
x1 = ( dy+dx-by+bx+2ax)/4    y1 = ( dy-dx+by+bx+2ay)/4
x3 = (-dy+dx+by+bx+2cx)/4    y3 = ( dy+dx+by-bx+2cy)/4
with P2 and P4 following from the parametrisation above:
x2 = (x1+x3-y3+y1)/2         y2 = (y1+y3-x1+x3)/2
x4 = (x1+x3+y3-y1)/2         y4 = (y1+y3+x1-x3)/2
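As a quick check, here is a small Python sketch of this closed-form fit (the function name is mine; the corners are assumed to be given in the order used by the derivation above):

def fit_square(A, B, C, D):
    """Least-squares square fit; corners given in the order assumed by the derivation."""
    ax, ay = A
    bx, by = B
    cx, cy = C
    dx, dy = D
    # P1 and P3 from setting the partial derivatives to zero
    x1 = ( dy + dx - by + bx + 2 * ax) / 4
    x3 = (-dy + dx + by + bx + 2 * cx) / 4
    y1 = ( dy - dx + by + bx + 2 * ay) / 4
    y3 = ( dy + dx + by - bx + 2 * cy) / 4
    # P2 and P4 from the parametrisation of the square
    p2 = ((x1 + x3 - y3 + y1) / 2, (y1 + y3 - x1 + x3) / 2)
    p4 = ((x1 + x3 + y3 - y1) / 2, (y1 + y3 + x1 - x3) / 2)
    return (x1, y1), p2, (x3, y3), p4

# Example usage with the question's corners (relabel if your corner order differs):
print(fit_square((-1, -1), (2, -1), (1, 1), (-2, 1)))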
I need a function that returns points on a circle in three dimensions.
The circle should "cap" a line segment defined by points A and B and its radius. Each cap is perpendicular to the line segment and centered at one of the endpoints.
Let N be the unit vector in the direction from A to B, i.e., N = (B-A) / length(A-B). The first step is to find two more vectors X and Y such that {N, X, Y} form a basis. That means you want two more vectors so that all pairs of {N, X, Y} are perpendicular to each other and also so that they are all unit vectors. Another way to think about this is that you want to create a new coordinate system whose x-axis lines up with the line segment. You need to find vectors pointing in the direction of the y-axis and z-axis.
Note that there are infinitely many choices for X and Y. You just need to somehow find two that work.
One way to do this is to first find vectors {N, W, V} where N is from above and W and V are two of (1,0,0), (0,1,0), and (0,0,1). Pick the two vectors for W and V that correspond to the smallest coordinates of N. So if N = (.31, .95, 0) then you pick (1,0,0) and (0,0,1) for W and V. (Math geek note: This way of picking W and V ensures that {N,W,V} spans R^3). Then you apply the Gram-Schmidt process to {N, W, V} to get vectors {N, X, Y} as above. Note that you need the vector N to be the first vector so that it doesn't get changed by the process.
So now you have two vectors that are perpendicular to the line segment and perpendicular to each other. This means the points on the circle around A are X * cos t + Y * sin t + A where 0 <= t < 2 * pi. This is exactly like the usual description of a circle in two dimensions; it is just written in the new coordinate system described above.
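Here is a short Python sketch of that construction (the function name and the choice of 100 sample points are mine):

import numpy as np

def circle_points(A, B, radius, num=100):
    """Points on the circle of given radius centered at A, perpendicular to AB."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    N = (B - A) / np.linalg.norm(B - A)

    # Pick W and V as the two standard basis vectors corresponding to the
    # smallest components of N (so {N, W, V} spans R^3)
    idx = np.argsort(np.abs(N))[:2]
    W, V = np.eye(3)[idx]

    # Gram-Schmidt: orthogonalize W and V against N (and each other), then normalize
    X = W - np.dot(W, N) * N
    X /= np.linalg.norm(X)
    Y = V - np.dot(V, N) * N - np.dot(V, X) * X
    Y /= np.linalg.norm(Y)

    t = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    return A + radius * (np.outer(np.cos(t), X) + np.outer(np.sin(t), Y))

# Example: the cap of radius 2 at endpoint A of the segment from A to B
pts = circle_points([0, 0, 0], [0, 0, 5], 2.0)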
As David Norman noted, the crux is to find two orthogonal unit vectors X, Y that are orthogonal to N. However, I think a simpler way to compute these is to find the Householder reflection Q that maps N to a multiple of (1,0,0), and then take X as the image of (0,1,0) under Q and Y as the image of (0,0,1) under Q. While this might sound complicated, it comes down to:
s = (N[0] > 0.0) ? 1.0 : -1.0
t = N[0] + s; f = -1.0/(s*t);
X[0] = f*N[1]*t; X[1] = 1 + f*N[1]*N[1]; X[2] = f*N[1]*N[2];
Y[0] = f*N[2]*t; Y[1] = f*N[1]*N[2]; Y[2] = 1 + f*N[2]*N[2];
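For completeness, the same thing as a small Python/NumPy sketch (the function name is mine); the circle of radius r around A is then A + r*(cos(θ)*X + sin(θ)*Y):

import numpy as np

def householder_frame(N):
    """Given a unit vector N, return unit vectors X, Y so that {N, X, Y} is orthonormal."""
    s = 1.0 if N[0] > 0.0 else -1.0
    t = N[0] + s
    f = -1.0 / (s * t)
    # X and Y are the images of (0,1,0) and (0,0,1) under the Householder reflection
    X = np.array([f * N[1] * t, 1.0 + f * N[1] * N[1], f * N[1] * N[2]])
    Y = np.array([f * N[2] * t, f * N[1] * N[2], 1.0 + f * N[2] * N[2]])
    return X, Y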