I learned how to differentiate in Octave through this. But now I want to plot the graph of the same function in Octave, and I am not able to do so. I want to know the right command for plotting the 'ffd' expression in the code below.
f = @(x) x.^2 + 3*x - 1 + 5*x.*sin(x);
pkg load symbolic
syms x;
ff = f(x);
ffd = diff(ff, x)
What I have tried so far: I tried adding these lines of code at the end, but it didn't work; the graph was empty and I got an error:
real
ffd = (sym) 5⋅x⋅cos(x) + 2⋅x + 5⋅sin(x) + 3
error: __go_line__: invalid value for array property "ydata", unable to create graphics handle
error: called from
__plt__>__plt2vs__ at line 466 column 15
__plt__>__plt2__ at line 245 column 14
__plt__ at line 112 column 18
plot at line 229 column 10
real at line 10 column 1
I was expecting it to plot the graph of 5⋅x⋅cos(x) + 2⋅x + 5⋅sin(x) + 3, i.e. ffd, but it didn't work.
In order to draw graphs of the differentiated functions, we need code similar to the following; in this case I am using the symbolic package (pkg symbolic):
pkg load symbolic
syms x;
x = sym('x');
y = @(x) x.^3;
yy = y(x);
ffd = diff(diff(yy,x));
ffe = diff(yy,x);
ez1=ezplot(y,[-10,10])
hold on
ez2=ezplot(ffe,[-10,10])
hold on
ez3=ezplot(ffd,[-10,10])
Note: the hold on function is used to draw more than one graph on the same figure. However, if you modify the program and run it again, it does not clear the previous graphs from the figure; if anyone knows how to do this, please leave a comment.
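For reference, here is a minimal sketch of the same computation in Python with SymPy (the library the Octave symbolic package is built on); this is an illustration I am adding, not part of the original Octave answer:

# Hedged sketch: differentiate f(x) = x^2 + 3x - 1 + 5x*sin(x) with SymPy
# and plot the derivative numerically with matplotlib.
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp

x = sp.symbols('x')
ff = x**2 + 3*x - 1 + 5*x*sp.sin(x)
ffd = sp.diff(ff, x)                    # 5*x*cos(x) + 2*x + 5*sin(x) + 3
ffd_num = sp.lambdify(x, ffd, 'numpy')  # turn the symbolic expression into a numeric function

xs = np.linspace(-10, 10, 400)
plt.plot(xs, ffd_num(xs))
plt.title(str(ffd))
plt.grid(True)
plt.show()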
I'm trying to call a function from a different file in Python. I'm trying to process satellite images from GOES-16 in NetCDF format. I'm extracting different values from the file that are necessary for the functions saved in a .py file called "remap". A piece of my main code goes like this:
from remap import remap
# Calculate the image extent required for the reprojection
H = nc.variables['goes_imager_projection'].perspective_point_height
x1 = nc.variables['x_image_bounds'][0] * H
x2 = nc.variables['x_image_bounds'][1] * H
y1 = nc.variables['y_image_bounds'][1] * H
y2 = nc.variables['y_image_bounds'][0] * H
# Projection parameters
lat_0 = nc.variables['goes_imager_projection'].latitude_of_projection_origin
lon_0 = nc.variables['goes_imager_projection'].longitude_of_projection_origin
a = nc.variables['goes_imager_projection'].semi_major_axis
b = nc.variables['goes_imager_projection'].semi_minor_axis
f = 1/nc.variables['goes_imager_projection'].inverse_flattening
# Call the reprojection function
grid = remap(path, extent, resolution, x1, y1, x2, y2)
In the .py file that I called "remap", the function is defined as:
# Define KM_PER_DEGREE
KM_PER_DEGREE = 111.32
# GOES-16 Spatial Reference System
sourcePrj = osr.SpatialReference()
sourcePrj.ImportFromProj4('+proj=geos +h=' + H + ' +a=' + a + ' +b=' + b + ' +f=' + f + 'lat_0=' + lat_0 + ' +lon_0=' + lon_0 + ' +sweep=x +no_defs')
# Lat/lon WSG84 Spatial Reference System
targetPrj = osr.SpatialReference()
targetPrj.ImportFromProj4('+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs')
def remap(path, extent, resolution, x1, y1, x2, y2):
... (and so on)
Now I have two different problems:
(1) My first problem is that I'm getting an error from the system saying:
"remap() takes 4 positional arguments but 7 were given", which I don't understand, because I already defined those 7 arguments in the function in the second file called "remap".
(2) My second problem is that I don't know how to make the values extracted from the NetCDF file in my original code, such as lat_0, lon_0, a, b, f, and H, available in the second file, where they are needed from the start in order to use the function "remap".
Any suggestions?
To your first problem:
How do you define the path, extent and resolution needed in remap()?
And to your second problem:
You don't need to pass those arguments to the remap file, because from the main code you are calling remap and making the reprojection with those 7 arguments.
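As a side note (my illustration, not part of the original answer): building the Proj4 string by concatenating numeric values such as H or a, as in the question, will typically fail because they are floats, and the 'lat_0=' piece is missing its leading '+'. Here is a hedged sketch of a hypothetical helper that formats the string from numeric parameters:

# Hypothetical helper (not the original remap code): build the GOES-16
# Proj4 string from numeric projection parameters.
def build_goes_proj4(H, a, b, f, lat_0, lon_0):
    return ('+proj=geos +h={H} +a={a} +b={b} +f={f} '
            '+lat_0={lat_0} +lon_0={lon_0} +sweep=x +no_defs').format(
                H=H, a=a, b=b, f=f, lat_0=lat_0, lon_0=lon_0)

# Example usage with the values extracted from the NetCDF file in the question:
# sourcePrj.ImportFromProj4(build_goes_proj4(H, a, b, f, lat_0, lon_0))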
I am having a problem while importing data with pandas.
Here is the situation: I have a file for which I have to skip the initial 40 lines.
Afterwards, it needs to separate the data with ';' and use ',' as a decimal.
Then, the first and second columns should be assigned to the variable x and y, respectively.
Here is the code I am using:
data = pd.read_csv(path, sep=';', skiprows=40, header=None,engine='python', decimal=",")
# Separates the data into x vector and y vector
x=data.loc[:,0].values
y=data.loc[:,1].values
The data vector comes out as:
0 29.49486 0.10915 -0.30708
1 30.45667 0.17562 -0.30724
2 31.41848 0.23216 -0.30735
3 32.38029 0.27814 -0.30750
4 33.34211 0.31412 -0.30764
5 34.30390 0.34117 -0.30794
.
.
.
166 189.15537 0.41301 -0.16899
167 190.11718 0.41302 -7,7716e-002
168 191.07899 2,7883e-002
Everything except the last 2 lines is imported as expected.
When assigning the vectors to variables with:
x=data.loc[:,0].values
y=data.loc[:,1].values
I get the x vector as float64 and the y as an object.
What am I doing wrong while importing?
Kind Regards!
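(A hedged sketch of one possible cleanup, added for illustration and assuming the second column carries the integer label 1 as above: values such as '-7,7716e-002' use a comma decimal, which keeps the whole column as dtype object; replacing the comma and coercing gives floats.)

import pandas as pd

# Hedged sketch: coerce the mixed-format column to numeric.
data[1] = pd.to_numeric(data[1].astype(str).str.replace(',', '.', regex=False),
                        errors='coerce')
y = data.loc[:, 1].values   # now float64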
How can I transform the blue curve values into linear (red curve)? I am doing some tests in Excel, but basically I have those blue line values inside a 3D app that I want to manipulate with Python so I can make those values linear. Is there any mathematical approach that I am missing?
The x axis goes from 0 to 90, and the y axis from 0 to 1.
For example: in the middle of the graph the blue line gives me a value of 0.70711, and I know that in linear it would be 0.5. I was wondering if there's an easy formula to transform all the incoming non-linear values into linear ones.
I have no idea what "formula" is creating that non-linear blue line; also ignore the yellow line, since I was just trying to "reverse engineer" it to see if it would lead me to any conclusion.
Thank you
Find a linear function y = ax + b that for x = 0 gives the value 1 and for x = 90 gives 0, just like the function represented by the blue curve.
In that case, your system of equations is the following:
1 = b // for x = 0
0 = a*90 + b // for x = 90
The solution provided by the solver is { a = -1/90, b = 1 }. The red linear function will have the form y = ax + b; putting in the values of a and b found by the solver, we discover that the linear function you are looking for is y = -x/90 + 1.
The tool I used to solve the system of equations:
http://wims.unice.fr/wims/en_tool~linear~linsolver.en.html
What exactly do you mean? You can calculate points on the red line like this:
f(x) = 1-x/90
and the point then is (x,f(x)) = (x, 1-x/90). But to be honest, I think your question is still rather unclear.
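A small sketch of that mapping in Python, just evaluating y = 1 - x/90 over the 0 to 90 range described in the question (my illustration, not part of either answer):

import numpy as np

# Sketch: the linear ramp y = 1 - x/90 on [0, 90].
x = np.linspace(0, 90, 7)
y = 1 - x / 90
print(list(zip(x.round(1), y.round(4))))   # e.g. x = 45 gives y = 0.5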
So I was looking at this question:
Matlab - Standard Deviation of Cartesian Points
Which basically answers my question, except the problem is I have xyz, not xy. So I don't think Ax=b would work in this case.
I have, say, 10 Cartesian points, and I want to be able to find the standard deviation of these points. Now, I don't want standard deviation of each X, Y and Z (as a result of 3 sets) but I just want to get one number.
This can be done using MATLAB or excel.
To better understand what I'm doing, I have this desired point (1,2,3) and I recorded (1.1,2.1,2.9), (1.2,1.9,3.1) and so on. I wanted to be able to find the variability of all the recorded points.
I'm open to any other suggestions.
If you do the same thing as in the other answer you linked, it should work.
x_vals = xyz(:,1);
y_vals = xyz(:,2);
z_vals = xyz(:,3);
then make A with 3 columns,
A = [x_vals y_vals ones(size(x_vals))];
and
b = z_vals;
Then
sol=A\b;
m = sol(1);
n = sol(2);
c = sol(3);
and then
errs = (m*x_vals + n*y_vals + c) - z_vals;
After that you can use errs just as in the linked question.
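For reference, a hedged NumPy sketch of the same plane fit and residuals, assuming xyz is an N-by-3 array as in the question (this mirrors the MATLAB steps above and is not part of the original answer):

import numpy as np

# Hedged sketch: least-squares plane z = m*x + n*y + c, then the residuals.
x_vals, y_vals, z_vals = xyz[:, 0], xyz[:, 1], xyz[:, 2]
A = np.column_stack([x_vals, y_vals, np.ones_like(x_vals)])
sol, *_ = np.linalg.lstsq(A, z_vals, rcond=None)
m, n, c = sol
errs = (m * x_vals + n * y_vals + c) - z_vals   # use errs as in the linked question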
Randomly clustered data
If your data is not expected to be near a line or a plane, just compute the distance of each point to the centroid:
xyz_bar = mean(xyz);
M = bsxfun(@minus,xyz,xyz_bar);
d = sqrt(sum(M.^2,2)); % distances to centroid
Then you can compute variability anyway you like. For example, standard deviation and RMS error:
std(d)
sqrt(mean(d.^2))
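The same centroid-distance computation in NumPy, as a hedged cross-reference (again assuming xyz is an N-by-3 array; not part of the original answer):

import numpy as np

# Sketch: distance of each point to the centroid, then two spread measures.
xyz_bar = xyz.mean(axis=0)
d = np.linalg.norm(xyz - xyz_bar, axis=1)   # distances to centroid
print(d.std(), np.sqrt((d**2).mean()))      # standard deviation and RMS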
Data about a 3D line
If the data points are expected to be roughly along the path of a line, with some deviation from it, you might look at the distance to a best fit line. First, fit a 3D line to your points. One way is using the following parametric form of a 3D line:
x = a*t + x0
y = b*t + y0
z = c*t + z0
Generate some test data, with noise:
abc = [2 3 1]; xyz0 = [6 12 3];
t = 0:0.1:10;
xyz = bsxfun(@plus,bsxfun(@times,abc,t.'),xyz0) + 0.5*randn(numel(t),3)
plot3(xyz(:,1),xyz(:,2),xyz(:,3),'*') % to visualize
Estimate the 3D line parameters:
xyz_bar = mean(xyz) % centroid is on the line
M = bsxfun(@minus,xyz,xyz_bar); % remove mean
[~,S,V] = svd(M,0)
abc_est = V(:,1).'
abc/norm(abc) % compare actual slope coefficients
Distance from points to a 3D line:
pointCentroidSeg = bsxfun(@minus,xyz_bar,xyz);
pointCross = cross(pointCentroidSeg, repmat(abc_est,size(xyz,1),1));
errs = sqrt(sum(pointCross.^2,2))
Now you have the distance from each point to the fit line ("error" of each point). You can compute the mean, RMS, standard deviation, etc.:
>> std(errs)
ans =
0.3232
>> sqrt(mean(errs.^2))
ans =
0.7017
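A hedged NumPy version of the same line fit and point-to-line distances, again assuming xyz is an N-by-3 array (added for illustration, not part of the original answer):

import numpy as np

# Sketch: fit a 3D line by SVD of the centred points, then point-to-line distances.
xyz_bar = xyz.mean(axis=0)                  # the centroid lies on the fitted line
M = xyz - xyz_bar                           # remove the mean
_, _, Vt = np.linalg.svd(M, full_matrices=False)
abc_est = Vt[0]                             # unit direction of largest variance
cross = np.cross(xyz_bar - xyz, abc_est)    # |v x u| with |u| = 1 gives the distance
errs = np.linalg.norm(cross, axis=1)        # distance of each point to the fitted line
print(errs.std(), np.sqrt((errs**2).mean()))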
Data about a 3D plane
See David's answer.
I need to find the quadratic equation term of a graph I have plotted in R.
When I do this in excel, the term appears in a text box on the chart but I'm unsure how to move this to a cell for subsequent use (to apply to values requiring calibrating) or indeed how to ask for it in R. If it is summonable in R, is it saveable as an object to do future calculations with?
This seems like it should be a straightforward request in R, but I can't find any similar questions. Many thanks in advance for any help anyone can provide on this.
All the answers provide aspects of what you appear to want to do, but none thus far brings it all together. Let's consider Tom Liptrot's answer example:
fit <- lm(speed ~ dist + I(dist^2), cars)
This gives us a fitted linear model with a quadratic in the variable dist. We extract the model coefficients using the coef() extractor function:
> coef(fit)
(Intercept) dist I(dist^2)
5.143960960 0.327454437 -0.001528367
So your fitted equation (subject to rounding because of printing) is:
\hat{speed} = 5.143960960 + (0.327454437 * dist) + (-0.001528367 * dist^2)
(where \hat{speed} is the fitted values of the response, speed).
If you want to apply this fitted equation to some data, then we can write our own function to do it:
myfun <- function(newdist, model) {
coefs <- coef(model)
res <- coefs[1] + (coefs[2] * newdist) + (coefs[3] * newdist^2)
return(res)
}
We can apply this function like this:
> myfun(c(21,3,4,5,78,34,23,54), fit)
[1] 11.346494 6.112569 6.429325 6.743024 21.386822 14.510619 11.866907
[8] 18.369782
for some new values of distance (dist), which is what you appear to want to do from the question. However, in R we don't normally do things like this, because why should the user have to know how to form fitted or predicted values from all the different types of model that can be fitted in R?
In R, we use standard methods and extractor functions. In this case, if you want to apply the "equation", that Excel displays, to all your data to get the fitted values of this regression, in R we would use the fitted() function:
> fitted(fit)
1 2 3 4 5 6 7 8
5.792756 8.265669 6.429325 11.608229 9.991970 8.265669 10.542950 12.624600
9 10 11 12 13 14 15 16
14.510619 10.268988 13.114445 9.428763 11.081703 12.122528 13.114445 12.624600
17 18 19 20 21 22 23 24
14.510619 14.510619 16.972840 12.624600 14.951557 19.289106 21.558767 11.081703
25 26 27 28 29 30 31 32
12.624600 18.369782 14.057455 15.796751 14.057455 15.796751 17.695765 16.201008
33 34 35 36 37 38 39 40
18.688450 21.202650 21.865976 14.951557 16.972840 20.343693 14.057455 17.340416
41 42 43 44 45 46 47 48
18.038887 18.688450 19.840853 20.098387 18.369782 20.576773 22.333670 22.378377
49 50
22.430008 21.93513
If you want to apply your model equation to some new data values not used to fit the model, then we need to get predictions from the model. This is done using the predict() function. Using the distances I plugged into myfun above, this is how we'd do it in a more R-centric fashion:
> newDists <- data.frame(dist = c(21,3,4,5,78,34,23,54))
> newDists
dist
1 21
2 3
3 4
4 5
5 78
6 34
7 23
8 54
> predict(fit, newdata = newDists)
1 2 3 4 5 6 7 8
11.346494 6.112569 6.429325 6.743024 21.386822 14.510619 11.866907 18.369782
First up we create a new data frame with a component named "dist", containing the new distances we want to get predictions for from our model. It is important to note that we include in this data frame a variable that has the same name as the variable used when we created our fitted model. This new data frame must contain all the variables used to fit the model, but in this case we only have one variable, dist. Note also that we don't need to include anything about dist^2. R will handle that for us.
Then we use the predict() function, giving it our fitted model and providing the new data frame just created as argument 'newdata', giving us our new predicted values, which match the ones we did by hand earlier.
Something I glossed over is that predict() and fitted() are really a whole group of functions. There are versions for lm() models, for glm() models etc. They are known as generic functions, with methods (versions if you like) for several different types of object. You the user generally only need to remember to use fitted() or predict() etc whilst R takes care of using the correct method for the type of fitted model you provide it. Here are some of the methods available in base R for the fitted() generic function:
> methods(fitted)
[1] fitted.default* fitted.isoreg* fitted.nls*
[4] fitted.smooth.spline*
Non-visible functions are asterisked
You will possibly get more than this depending on what other packages you have loaded. The * just means you can't refer to those functions directly, you have to use fitted() and R works out which of those to use. Note there isn't a method for lm() objects. This type of object doesn't need a special method and thus the default method will get used and is suitable.
You can add a quadratic term in the formula in lm to get the fit you are after. You need to use I() around the term you want to square, as in the example below:
plot(speed ~ dist, cars)
fit1 = lm(speed ~ dist, cars) #fits a linear model
abline(fit1) #puts line on plot
fit2 = lm(speed ~ I(dist^2) + dist, cars) #fits a model with a quadratic term
fit2line = predict(fit2, data.frame(dist = -10:130))
lines(-10:130 ,fit2line, col=2) #puts line on plot
To get the coefficients from this use:
coef(fit2)
I don't think it is possible in Excel, as it only provides functions to get coefficients for a linear regression (SLOPE, INTERCEPT, LINEST) or for an exponential one (GROWTH, LOGEST), though you may have more luck using Visual Basic.
As for R you can extract model coefficients using the coef function:
mdl <- lm(y ~ poly(x,2,raw=T))
coef(mdl) # all coefficients
coef(mdl)[3] # only the 2nd order coefficient
I guess you mean that you plot X vs Y values in Excel or R, and in Excel use the "Add trendline" functionality. In R, you can use the lm function to fit a linear function to your data, and this also gives you the "r squared" term (see examples in the linked page).