I am new to OpenMDAO, so I assume this should be an easy question.
I have a complex group (with explicit components and cycles) that I can solve for a set of input variables:
y1, y2,..., yi = Group(x1, x2,..., xj)
What I am trying to do now is to match two outputs to targets (y1_target and y2_target) by changing two of the group's inputs (x1 and x2), i.e., adding two equations outside the group such as:
y1_target - y1 = 0
y2_target - y2 = 0
I understand that this should be done with two implicit components, but how do I force the solver to change only x1 and x2?
Thanks,
This can be done with a single BalanceComp in an outer group that contains the group you mention. A simple diagram of the system, as I understand it, is below.
Here a BalanceComp is added that handles your two residual equations. BalanceComp is an ImplicitComponent in OpenMDAO that handles simple "set X equal to Y" situations like the one in your case. The documentation for it is here.
In your case, the outer group (called g_outer in the code below) would have a balance comp that is set to satisfy two residual equations. Subsystem "group" refers to your existing group.
import openmdao.api as om

bal = g_outer.add_subsystem('balance', om.BalanceComp())
bal.add_balance('x1', lhs_name='y1_target', rhs_name='y1')
bal.add_balance('x2', lhs_name='y2_target', rhs_name='y2')
g_outer.connect('balance.x1', 'group.x1')
g_outer.connect('balance.x2', 'group.x2')
g_outer.connect('group.y1', 'balance.y1')
g_outer.connect('group.y2', 'balance.y2')
Another absolutely critical step is setting the nonlinear and linear solvers of g_outer. The default solvers only work for explicit systems. This implicit system requires a NewtonSolver as the nonlinear solver and a suitable linear solver; DirectSolver often works fine unless the system is very large.
g_outer.nonlinear_solver = om.NewtonSolver(solve_subsystems=True)
g_outer.linear_solver = om.DirectSolver()
Left out of the above snippet is the code that either connects a value to balance.y1_target and balance.y2_target, or sets them after setup.
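For instance, here is a minimal sketch of that last step, assuming g_outer is the model of a Problem and reusing the variable names from the snippet above (the target values are placeholders):

prob = om.Problem(model=g_outer)
prob.setup()

# placeholder targets -- replace with your actual y1_target and y2_target values
prob.set_val('balance.y1_target', 3.0)
prob.set_val('balance.y2_target', 5.0)

prob.run_model()

# balance.x1 and balance.x2 now hold the inputs that drive both residuals to zero
print(prob.get_val('balance.x1'), prob.get_val('balance.x2'))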
I am studying how torch.matmul runs on an NVIDIA GPU.
import torch

cuda = torch.device('cuda')

### variable creation
a = torch.randn(size=(1, 128, 3), dtype=torch.float32).to(cuda)
b = torch.randn(size=(1, 3, 32), dtype=torch.float32).to(cuda)

### execution
c = torch.matmul(a, b)
I profiled this code using PyProf, which gives me the result below.
There are many things in it that I cannot understand.
What does sgemm_128_32 mean?
I see that the 's' in sgemm stands for single precision and 'gemm' means general matrix multiplication. But I don't know what the 128_32 means. My output matrix dimension is 128 by 32, but I know that CUTLASS optimizes the sgemm using outer products (I will give you the link; see ref 1). Actually, I cannot understand the link.
(1) Does 128_32 simply mean the output matrix's dimensions?
(2) Is there any way to see how my output matrix (c, in my code) is actually calculated?
(For example, are there 128*32 threads in total, with each thread responsible for one output element computed as an inner product?)
Why do the grid and block each have 3 dimensions, and how are the grid and block used for sgemm_128_32?
A grid consists of x, y, z, and a block consists of x, y, z. (1) Why are 3 dimensions needed? I see that (in the picture above) block X has 256 threads. (2) Is this true? And grid Y is 4, so this means that there are 4 blocks along grid Y. (3) Is this true?
Using that PyProf result, can I figure out how many SMs are used, and how many warps are active on each SM?
Thank you.
ref 1 : https://developer.nvidia.com/blog/cutlass-linear-algebra-cuda/
I am trying to predict a variable using a number of explanatory variables, each of which has no visually detectable relationship with it; that is, the scatterplots between each regressor and the predicted variable are completely flat clouds.
I took 2 approaches:
1) Running individual regressions yields no significant relationship at all.
2) Once I play around with multiple combinations of multivariable regressions, I get significant relationships for some combinations (which are not robust, though; that is, a variable is significant in one setting and loses significance in a different setting).
I am wondering: based on 1), i.e., the fact that on an individual basis there seems to be no relationship at all, can I conclude that a multivariable approach is destined to fail as well?
The answer is most definitely no, it is not guaranteed to fail. In fact, you have already observed this in #2, where you get significant predictors in a multiple-predictor setting.
A regression between 1 predictor and 1 outcome amounts to the covariance or the correlation between the two variables. This is the relationship you observe in your scatterplots.
A regression where you have multiple predictors (multiple regression) has a rather different interpretation. Let's say you have a model like: Y = b0 + b1X1 + b2X2
b1 is interpreted as the relationship between X1 and Y holding X2 constant. That is, you are controlling for the effect of X2 in this model. This is a very important feature of multiple regression.
To see this, run the following models:
Y = b0 + b1X1
Y = b0 + b1X1 + b2X2
You will see that the values of b1 in the two cases are different. The degree of difference between the b1 values will depend on the magnitude of the covariance/correlation between X1 and X2.
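Here is a quick numeric illustration in Python (the data-generating process is invented purely to show the effect; numpy's lstsq stands in for a regression routine):

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# two strongly correlated predictors
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.5, size=n)

# Y depends on both with opposite signs, so the marginal effect of X1 vanishes
y = x1 - x2 + rng.normal(scale=0.1, size=n)

# model 1: Y ~ X1 alone
b_single = np.linalg.lstsq(np.column_stack([np.ones(n), x1]), y, rcond=None)[0]

# model 2: Y ~ X1 + X2
b_multi = np.linalg.lstsq(np.column_stack([np.ones(n), x1, x2]), y, rcond=None)[0]

print(b_single[1])  # near 0: X1 alone appears unrelated to Y
print(b_multi[1:])  # near [1, -1]: controlling for X2 reveals the relationship

The same coefficient on X1 is near zero in the single-predictor model and near 1 once X2 is controlled for.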
Just because a straight correlation between 2 variables is not significant does not mean that the relationship will remain non-significant once you control for the effect of other predictors.
This point is highlighted by your example of robustness in #2. Why would a predictor be significant in some models, and non-significant when you use another subset of predictors? It is precisely because you are controlling for the effect of different variables in your various models.
Which other variables you choose to control for, and ultimately which specific regression model you choose to use, depends on what your goals are.
I want to have a measure of similarity between two points in a cluster.
Would the similarity calculated this way be an acceptable measure of similarity between the two datapoint?
Say I have two vectors, A and B, that are in the same cluster. I have trained a clustering model, denoted by model, and model.computeCost() computes the squared distance between the input point and the corresponding cluster center.
(I am using Apache Spark MLlib)
val costA = model.computeCost(A)
val costB = model.computeCost(B)
val dissimilarity = math.abs(costA - costB)
This is a dissimilarity, i.e., the higher the value, the more unlike each other the two points are.
If you are just asking whether this is a valid metric, then the answer is almost: it is a valid pseudometric, provided .computeCost is deterministic.
For simplicity, I denote f(A) := model.computeCost(A) and d(A, B) := |f(A) - f(B)|.
Short proof: d is the L1 distance applied to the image of some function, and is thus a pseudometric itself; it is a metric if f is injective (in general, yours is not).
Long(er) proof:
d(A,B) >= 0: yes, since |f(A) - f(B)| >= 0.
d(A,B) = d(B,A): yes, since |f(A) - f(B)| = |f(B) - f(A)|.
d(A,B) = 0 iff A = B: no, and this is why it is a pseudometric, since you can have many A != B such that f(A) = f(B).
d(A,C) <= d(A,B) + d(B,C): yes, since |f(A) - f(C)| = |(f(A) - f(B)) + (f(B) - f(C))| <= |f(A) - f(B)| + |f(B) - f(C)|, directly from the triangle inequality for absolute values.
If you are asking whether it will work for your problem, then the answer is: it might, depending on the problem. There is no way to answer this without an analysis of your problem and data. As shown above, this is a valid pseudometric, so it measures something decently behaved from a mathematical perspective. Whether it will work for your particular case is a completely different story. The good news is that most algorithms which work with metrics will work with pseudometrics as well. The only difference is that you simply "glue together" points which have the same image (f(A) = f(B)); if this is not an issue for your problem, then you can apply this kind of pseudometric in any metric-based reasoning without problems. In practice, this means that if your f
computes the sum of squared distances between the input point and the corresponding cluster center
then, for a single point, it is actually the distance to the closest center (there is no summation involved when you consider a single point). This would mean that two points in two separate clusters are considered identical when they are equally far from their respective cluster centers. Consequently, your measure captures "how different are the relations of points to their respective clusters". This is a well-defined, indirect dissimilarity computation; however, you have to be fully aware of what is happening before applying it (since it will have specific consequences).
Your "cost" is actually the distance to the center.
Points that have the same distance to the center are considered to be identical (distance 0), which creates a really odd pseudometric, because it ignores where on the circle of that distance the points are.
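A tiny plain-Python illustration of that gluing effect (the center and points here are made up; f mimics a squared distance to a single cluster center):

center = (0.0, 0.0)  # assumed cluster center

def f(p):
    # squared distance from p to the cluster center, mimicking computeCost
    return (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2

def d(a, b):
    return abs(f(a) - f(b))

a = (1.0, 0.0)
b = (0.0, 1.0)  # a different point on the same circle around the center
print(d(a, b))  # 0.0 -- the pseudometric treats a and b as identical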
It's not very likely this will work on your problem.
I have a set of data I have acquired from simulations. There are 3 parameters that go into my simulations and I get one result out.
I can graph the data from the small subset I have and see the trends for each input, but I need to be able to extrapolate this and get some form of regression equation, seeing as the simulation takes a long time.
In MATLAB or Excel, is it possible to list the inputs and outputs to obtain a 4-parameter regression for a given set of information?
Before this gets flagged as a duplicate: I understand polyfit will give me an equation of best fit and will be as accurate as I want it, but I need the equation to correspond to the inputs, not just a regression line.
In other words, if I have 20 simulations of inputs a, b, c and output y, is there a way to obtain a "best fit":
y=B0+B1*a+B2*b+B3*c
using the data?
My usual recommendation for higher-dimensional curve fitting is to pose the problem as a minimization problem (that may be unneeded here with the nice linear model you've proposed, but I'm a hammer-nail guy sometimes).
It starts by creating a correlation function (the functional form you think maps your inputs to the output) given a vector of fit parameters p and input data xData:
correl = @(p,xData) p(1) + p(2)*xData(:,1) + p(3)*xData(:,2) + p(4)*xData(:,3);
Then you need to define a function to minimize given the parameter vector, which I call the objective; this is typically your correlation minus your output data.
The details of this function are determined by the solver you'll use (see below).
All of the methods need a starting vector pGuess, which depends on the trends you see.
For nonlinear correlation functions, finding a good pGuess can be a trial, but it is necessary for a good solution.
fminsearch
To use fminsearch, the data must be collapsed to a scalar value using some norm (2 here):
x = [a,b,c]; % your input data as columns of x
objective = @(p) norm(correl(p,x) - y,2);
p = fminsearch(objective,pGuess); % you need to define a good pGuess
lsqnonlin
To use lsqnonlin (which solves the same problem as above in different ways), the norm-ing of the objective is not needed:
objective = @(p) correl(p,x) - y;
p = lsqnonlin(objective,pGuess); % you need to define a good pGuess
(You can also specify lower and upper bounds on the parameter solution, which is nice.)
lsqcurvefit
To use lsqcurvefit (which is simply a wrapper for lsqnonlin), only the correlation function is needed along with the data:
p = lsqcurvefit(correl,pGuess,x,y); % you need to define a good pGuess
I'm an engineering student. Pretty much all the math I have to do is in R2 or R3 and concerns differential geometry. Naturally, I really like sympy because it makes my calculations reusable and presentable.
What I found:
The thing in sympy that comes closest to what I know a function as, namely a mapping of scalar or vector values to scalar or vector values, with a name and connected to an expression, seems to be something of the form
functionname = sympy.Lambda(variables_in_tuple, expression)
or as an example
f = sympy.Lambda(x, x + 1)
I also found that sympy has the diffgeom module that defines Manifolds and Patches and can then perform some operations on functions without explicit expressions or points, like translating a point in one coordinate system to the same point in a different, linked coordinate system.
I haven't found a way to perform those operations and transformations on functions like those above, or to define something in the diffgeom context that behaves like the Lambda function.
Examples of what I'd like to do:
scalarfield f (x,y,z) = expression
grad (f) = ( d expression / dx , d expression / dy , d expression / dz)^T
vectorfield v (x,y,z) = ( expression 1 , expression 2 , expression 3 )^T
I'd then like to be able to integrate the vectorfield over bodies or curves.
Do these things exist and I haven't found them?
Are they doable with diffgeom and I didn't understand it?
Would I have to write this myself with the backbones that sympy already provides?
There is a differential geometry module within sympy:
http://docs.sympy.org/latest/modules/diffgeom.html
For more examples you can see http://blog.krastanov.org/pages/diff-geometry-in-python.html
To do what you suggest with the diffgeom module, just define your expression using the base coordinates of your manifold:
from sympy.diffgeom.rn import R2
scalar = (R2.r**2 - R2.x**2 - R2.y**2) # you can mix coordinate systems
gradient = (R2.e_x + R2.e_y).rcall(scalar)
There are various functions for change of coordinates, etc. Probably many things are missing, but it would take usage and bug reports (and help) for all this to get implemented.
You can see some other examples in the test files:
tested examples from a textbook: https://github.com/sympy/sympy/blob/master/sympy/diffgeom/tests/test_function_diffgeom_book.py
more tests https://github.com/sympy/sympy/blob/master/sympy/diffgeom/tests/test_diffgeom.py
However, for doing what is suggested in your question, going through differential geometry (while possible) would be overkill. You can just use the matrices module:
from sympy import Matrix

def gradient(expr, vars):
    return Matrix([expr.diff(v) for v in vars])
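For example, a quick usage sketch (the symbols and expression here are arbitrary):

from sympy import symbols

x, y, z = symbols('x y z')
print(gradient(x**2 + y*z, [x, y, z]))  # Matrix([[2*x], [z], [y]])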
Fancier things like matrix Jacobians are also implemented.
A final remark: using expressions instead of functions and lambdas will probably result in more readable and idiomatic sympy code (it is often more natural to use subs to substitute a symbol instead of some kind of closure, lambda, or function call).
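For instance, a small sketch of that style (the names here are arbitrary):

from sympy import symbols

x = symbols('x')
expr = x**2 + 1

# substitute instead of wrapping the expression in a Lambda or Python function
print(expr.subs(x, 2))  # 5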