Evaluating and graphing functions in MATLAB - graphics

I am trying to graph the following Gaussian function in MATLAB (it should produce a 3-D surface plot), but I am making a mistake somewhere. What is wrong?
sigma = 1
for i = 1:20
    for j = 1:20
        z(i,j) = (1/(2*pi*sigma^2))*exp(-(i^2+j^2)/(2*sigma^2));
    end
end
surf(z)

The problem you are likely having is that you are evaluating the Gaussian over the range of 1 to 20 for both i and j. Since sigma is 1, you are only going to see a segment of one side of the Gaussian (not including the center at [i,j] = [0,0]), and the values of z from 3 to 20 in each direction are very close to 0.
Instead of using for loops, you can do things "the MATLAB way" by creating matrices of x and y values using the function MESHGRID and performing element-wise operations on them to compute and plot z:
[x,y] = meshgrid(-4:0.1:4); % Use values from -4 to 4 in x and y directions
z = (1/(2*pi*sigma^2)).*exp(-(x.^2+y.^2)./(2*sigma^2)); % Compute z
surf(x,y,z); % Plot z
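For comparison, here is an illustrative translation of the same approach into Python with NumPy and Matplotlib (a sketch, not part of the original MATLAB answer):

import numpy as np
import matplotlib.pyplot as plt

sigma = 1
x, y = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))  # values from -4 to 4 in x and y
z = (1/(2*np.pi*sigma**2)) * np.exp(-(x**2 + y**2)/(2*sigma**2))    # compute z

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, z)                                            # plot z as a surface
plt.show()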

Related

Solving vector second order differential equation while indexing into an array

I'm attempting to solve the differential equation:
m(t) = M(x)x'' + C(x, x') + B x'
where x and x' are vectors with 2 entries representing the angles and angular velocities of a dynamical system. M(x) is a 2x2 matrix that is a function of the components of theta, C is a 2x1 vector that is a function of theta and theta', and B is a 2x2 matrix of constants. m(t) is a 2x1001 array containing the torques applied to each of the two joints at the 1001 time steps, and I would like to calculate the evolution of the angles over those 1001 time steps.
I've transformed it to standard form such that :
x'' = M(x)^-1 (m(t) - C(x, x') - B x')
Then substituting y_1 = x and y_2 = x' gives the first-order system of equations:
y_1' = y_2
y_2' = M(y_1)^-1 (m(t) - C(y_1, y_2) - B y_2)
(I've used theta and phi in my code for x and y)
def joint_angles(theta_array, t, torques, B):
    phi_1 = np.array([theta_array[0], theta_array[1]])
    phi_2 = np.array([theta_array[2], theta_array[3]])

    def M_func(phi):
        M = np.array([[a_1 + 2.*a_2*np.cos(phi[1]), a_3 + a_2*np.cos(phi[1])],
                      [a_3 + a_2*np.cos(phi[1]), a_3]])
        return np.linalg.inv(M)

    def C_func(phi, phi_dot):
        return a_2 * np.sin(phi[1]) * np.array([-phi_dot[1] * (2. * phi_dot[0] + phi_dot[1]), phi_dot[0]**2])

    dphi_2dt = M_func(phi_1) @ (torques[:, t] - C_func(phi_1, phi_2) - B @ phi_2)
    return dphi_2dt, phi_2

t = np.linspace(0, 1, 1001)
initial = theta_init[0], theta_init[1], dtheta_init[0], dtheta_init[1]
x = odeint(joint_angles, initial, t, args=(torque_array, B))
I get the error that I cannot index into torques using t, which makes perfect sense; however, I am not sure how to make it use the current value of the torques at each time step.
I also tried putting the odeint call in a for loop, evaluating only one time step at a time and using the solution of each call as the initial conditions for the next, but the function simply returned the initial conditions, meaning every loop was identical. This leads me to suspect I've made a mistake in my implementation of the standard form, but I can't work out what it is. It would be preferable, however, not to have to call the odeint solver in a for loop, and rather to do it all in one go.
If helpful, my initial conditions and constant values are:
theta_init = np.array([10*np.pi/180, 143.54*np.pi/180])
dtheta_init = np.array([0, 0])
L_1 = 0.3
L_2 = 0.33
I_1 = 0.025
I_2 = 0.045
M_1 = 1.4
M_2 = 1.0
D_2 = 0.16
a_1 = I_1+I_2+M_2*(L_1**2)
a_2 = M_2*L_1*D_2
a_3 = I_2
Thanks for helping!
The solver uses an internal step size that is adapted to the problem. The given time list is just a list of points at which the internal solution gets interpolated to produce the output samples. The internal and external time lists are in no way related; the internal one depends only on the given tolerances.
There is therefore no natural relation between array indices and sample times.
The translation of a given time into an index, and the construction of a sample value from the surrounding table entries, is called interpolation (by a piecewise polynomial function).
Torque, as a physical phenomenon, is at least continuous, so piecewise linear interpolation is the easiest way to transform the given table of function values into an actual continuous function. Of course you also need the corresponding time array.
So use numpy.interp or the more advanced routines of scipy.interpolate (for example interp1d) to define a torque function that can be evaluated at the arbitrary times demanded by the solver and its integration method.
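For illustration, a minimal sketch of that idea, assuming the torques in torque_array (shape 2x1001) were recorded at the same times t = np.linspace(0, 1, 1001), and with M_func and C_func factored out of the question's code as module-level helpers:

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import odeint

t_samples = np.linspace(0, 1, 1001)
# Piecewise-linear interpolant: callable at any time the solver asks for
torque_func = interp1d(t_samples, torque_array, axis=1, fill_value="extrapolate")

def joint_angles(theta_array, t, torque_func, B):
    phi_1 = theta_array[:2]   # y_1 = x  (angles)
    phi_2 = theta_array[2:]   # y_2 = x' (angular velocities)
    m_t = torque_func(t)      # torque at the solver's current (continuous) time
    dphi_2dt = M_func(phi_1) @ (m_t - C_func(phi_1, phi_2) - B @ phi_2)
    # odeint expects the derivative of the full state vector [y_1, y_2]
    return np.concatenate([phi_2, dphi_2dt])

x = odeint(joint_angles, initial, t_samples, args=(torque_func, B))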

Generate a random point on an elliptic curve

I'm writing a program which randomly chooses two integers within a certain interval. I also wrote a class (not shown below) which takes two numbers 'a' and 'b' and creates an elliptic curve of the form:
y^2 = x^3 + ax + b
I've written the following to create the two random numbers.
import random

def numbers():
    n = 1
    while n > 0:
        a = random.randint(-100, 100)
        b = random.randint(-100, 100)
        if -16 * (4 * a ** 3 + 27 * b ** 2) != 0:   # discriminant must be non-zero
            result = [a, b]
            return result
        n = n + 1
Now I would like to generate a random point on this elliptic curve. How do I do that?
The curve has infinite length, as for every y ∈ ℝ there is at least one x ∈ ℝ such that (x, y) is on the curve. So if we speak of a random point on the curve, we cannot hope for a uniform distribution of the random point over the whole curve.
But if that is not important, you could take a random value for y within some range, and then calculate the roots of the following function:
f(x) = x^3 + ax + b - y^2
This will result in three roots, of which possibly two are complex (not real numbers). You can take a random real root from that. This will be the x coordinate for the random point.
With the help of numpy, getting the roots is easy, so this is the function for getting a random point on the curve, given values for a and b:
import random
import numpy

def randomPoint(a, b):
    y = random.randint(-100, 100)
    # Get roots of: f(x) = x^3 + ax + b - y^2
    roots = numpy.roots([1, 0, a, b - y**2])
    # 3 roots are returned, but ignore potential complex roots
    # At least one will be real
    roots = [val.real for val in roots if val.imag == 0]
    # Choose a random root among those real root(s)
    x = random.choice(roots)
    return [x, y]
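A quick usage sketch combining the two functions (purely illustrative; the printed values vary from run to run):

a, b = numbers()
x, y = randomPoint(a, b)
print("curve parameters:", a, b)
print("random point on the curve:", (x, y))
# sanity check: y^2 should (approximately) equal x^3 + a*x + b
print(y**2, x**3 + a*x + b)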

Fitting exponent with gnuplot

I am trying to fit the data below to the following form. I am most interested in 'c' (I know that c ≈ 1/8 and b ≈ 3), but would like to extract all of these values from the data.
Formula:
y = a*(x-b)**c
Values.txt:
# "values.txt"
2.000000e+00 6.058411e-04
2.200000e+00 5.335520e-04
2.400000e+00 3.509583e-03
2.600000e+00 1.655943e-03
2.800000e+00 1.995418e-03
3.000000e+00 9.437851e-04
3.200000e+00 5.516159e-04
3.400000e+00 6.765981e-04
3.600000e+00 3.860859e-04
3.800000e+00 2.942881e-04
4.000000e+00 5.039975e-04
4.200000e+00 3.962199e-04
4.400000e+00 4.659717e-04
4.600000e+00 2.892683e-04
4.800000e+00 2.248839e-04
5.000000e+00 2.536980e-04
I have tried using the following commands in gnuplot, however I am not getting meaningful results:
f(x) = a*(x-b)**c
b = 3
c = 1/8
fit f(x) "values.txt" via a,b,c
Does anyone know the best way to extract these values? I would rather not provide initial guesses for 'b' & 'c' if possible.
Thanks,
J
The main problem with your fitting function is finding b. You can express your equation as a linear function in log(x-b), after which the fitting is trivial:
b = 3
f(x) = c0 + c1 * x
fit f(x) "values.txt" using (log($1-b)):(log($2)) via c0, c1
a = exp(c0)
c = c1
As you see, you need to provide b but do not need initial guesses for the other parameters because it's a trivial linear fit.
Now, I would suggest that you try a series of values of b and check how good the fit is for each one. gnuplot reports the error in each fitting parameter, so you can plot the overall error (error_c0 + error_c1) as a function of b and find the b for which this error is minimum. Near the optimum, the curve of error_c0 + error_c1 versus b should be roughly quadratic with its minimum at b_opt. Then run the fit as in the code above with b = b_opt to get a and c.
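For illustration only, here is a rough Python sketch of that scan (assuming values.txt is in the working directory). Instead of gnuplot's parameter errors it ranks candidate b values by the RMS residual of the linear fit in log-log space, which is a closely related measure of fit quality; points with x <= b are excluded because log(x - b) is undefined there:

import numpy as np

x, y = np.loadtxt("values.txt", unpack=True)

results = []
for b in np.arange(0.0, 3.5, 0.05):              # hypothetical scan range for b
    mask = x > b                                  # log(x - b) is only defined for x > b
    if mask.sum() < 3:
        continue
    coeffs, residuals, *_ = np.polyfit(np.log(x[mask] - b), np.log(y[mask]), 1, full=True)
    rms = np.sqrt(residuals[0] / mask.sum()) if residuals.size else np.inf
    results.append((rms, b, coeffs))

rms, b, coeffs = min(results, key=lambda r: r[0])
c, c0 = coeffs                                    # slope is c, intercept is log(a)
a = np.exp(c0)
print("b = %.3f, a = %.4g, c = %.4g (rms log residual %.3g)" % (b, a, c, rms))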

Transform curve into linear

How can I transform the blue curve values into linear (red curve)? I am doing some tests in excel, but basically I have those blue line values inside a 3D App that I want to manipulate with python so I can make those values linear. Is there any mathematical approach that I am missing?
The x axis goes from 0 to 90, and the y axis from 0 to 1.
For example: in the middle of the graph (x = 45) the blue line gives me a value of 0.70711, and I know that on the linear curve it should be 0.5. I was wondering if there's an easy formula to transform all the incoming non-linear values into linear ones.
I have no idea what "formula" is creating that non-linear blue line, also ignore the yellow line since I was just trying to "reverse engineer" to see if would lead me to any conclusion.
Thank you
Find a linear function y = ax + b that gives the value 1 for x = 0 and the value 0 for x = 90, just like the function represented by the blue curve.
In that case, your system of equations is the following:
1 = b // for x = 0
0 = a*90 + b // for x = 90
The solution provided by the solver is { a = -1/90, b = 1 }. The red linear function has the form y = ax + b; substituting the values of a and b obtained from the solver, the linear function you are looking for is y = -x/90 + 1.
The tool I used to solve the system of equations:
http://wims.unice.fr/wims/en_tool~linear~linsolver.en.html
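As a quick cross-check (not part of the original answer), the same 2x2 system can be solved in Python with NumPy:

import numpy as np

# Unknowns (a, b); equations: 0*a + 1*b = 1 (from x = 0) and 90*a + 1*b = 0 (from x = 90)
A = np.array([[0.0, 1.0],
              [90.0, 1.0]])
rhs = np.array([1.0, 0.0])
a, b = np.linalg.solve(A, rhs)
print(a, b)   # prints approximately -0.011111 (= -1/90) and 1.0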
What exactly do you mean? You can calculate points on the red line like this:
f(x) = 1-x/90
and the point then is (x,f(x)) = (x, 1-x/90). But to be honest, I think your question is still rather unclear.

How to find variability of a set of Cartesian Points (xyz) or fitting/distance to 3D line and/or plane?

So I was looking at this question:
Matlab - Standard Deviation of Cartesian Points
Which basically answers my question, except the problem is I have xyz, not xy. So I don't think Ax=b would work in this case.
I have, say, 10 Cartesian points, and I want to be able to find the standard deviation of these points. Now, I don't want the standard deviation of each of X, Y and Z separately (which would give 3 numbers); I just want to get one number.
This can be done using MATLAB or excel.
To better understand what I'm doing, I have this desired point (1,2,3) and I recorded (1.1,2.1,2.9), (1.2,1.9,3.1) and so on. I wanted to be able to find the variability of all the recorded points.
I'm open to any other suggestions.
If you do the same thing as in the other answer you linked, it should work.
x_vals = xyz(:,1);
y_vals = xyz(:,2);
z_vals = xyz(:,3);
then make A with 3 columns,
A = [x_vals y_vals ones(size(x_vals))];
and
b = z_vals;
Then
sol=A\b;
m = sol(1);
n = sol(2);
c = sol(3);
and then
errs = (m*x_vals + n*y_vals + c) - z_vals;
After that you can use errs just as in the linked question.
Randomly clustered data
If your data is not expected to be near a line or a plane, just compute the distance of each point to the centroid:
xyz_bar = mean(xyz);
M = bsxfun(@minus,xyz,xyz_bar);
d = sqrt(sum(M.^2,2)); % distances to centroid
Then you can compute the variability any way you like. For example, standard deviation and RMS error:
std(d)
sqrt(mean(d.^2))
Data about a 3D line
If the data points are expected to be roughly along the path of a line, with some deviation from it, you might look at the distance to a best fit line. First, fit a 3D line to your points. One way is using the following parametric form of a 3D line:
x = a*t + x0
y = b*t + y0
z = c*t + z0
Generate some test data, with noise:
abc = [2 3 1]; xyz0 = [6 12 3];
t = 0:0.1:10;
xyz = bsxfun(@plus,bsxfun(@times,abc,t.'),xyz0) + 0.5*randn(numel(t),3)
plot3(xyz(:,1),xyz(:,2),xyz(:,3),'*') % to visualize
Estimate the 3D line parameters:
xyz_bar = mean(xyz) % centroid is on the line
M = bsxfun(@minus,xyz,xyz_bar); % remove mean
[~,S,V] = svd(M,0)
abc_est = V(:,1).'
abc/norm(abc) % compare actual slope coefficients
Distance from points to a 3D line:
pointCentroidSeg = bsxfun(@minus,xyz_bar,xyz);
pointCross = cross(pointCentroidSeg, repmat(abc_est,size(xyz,1),1));
errs = sqrt(sum(pointCross.^2,2))
Now you have the distance from each point to the fit line ("error" of each point). You can compute the mean, RMS, standard deviation, etc.:
>> std(errs)
ans =
0.3232
>> sqrt(mean(errs.^2))
ans =
0.7017
Data about a 3D plane
See David's answer.
