I am using nuXmv to develop a model in which I want to initialize three of the variables with a range of integers rather than a fixed integer value. The ranges are 1..100, 20..100 and 0..200.
Simulating the model with a fixed set of init values works as expected, and the properties can be verified.
However, when I supply ranges of init values from which a value is to be randomly chosen, the model hangs indefinitely.
Does anyone know what could be the reason and how to solve this?
In my OpenModelica model (OpenModelica 1.19.2 on Ubuntu 20.04) I'm using Modelica.Blocks.Math.Mean to compute the mean of the final period of some of my results, like:
when terminal() then
result = blockMean.y;
end when;
where result is a Real and blockMean is an instance of Modelica.Blocks.Math.Mean (with the variable of interest as input).
When I run this model in OpenModelica, this works fine, and result shows up in the resulting list of variables in the Plotting tab, with the correct value.
However, after exporting an FMU of this model and running it in Python using pyFMI, result is an array of zeros with the same length as all the other time-varying signals.
I'm fairly new to Modelica and pyFMI, so I'm not awfully familiar with all options and details. As a first attempt at solving this I tried playing with opts["CVode_options"]["store_event_points"], but that didn't make any difference.
Is there some option that I should set? Or is this a bug that I just should live with?
The output of my PyTorch neural network is of type float64. This variable has to be used as a pixel offset, and as such I need to convert it to type long.
However, I have just discovered that the conversion out = out.long() switches the variable's ".requires_grad" attribute to False.
How can I convert it to long while keeping ".requires_grad" set to True?
In general, you cannot convert a tensor to an integer-based type while maintaining its gradient properties, since conversion to an integer is a non-differentiable operation. Thus, you essentially have two options:
If the data is only required as type long for inference operations that need not maintain their gradients, you can back-propagate the loss first and only then convert to long. You could also make a copy, or use torch.detach().
Change the input-output structure of your model such that integer outputs are not needed. One way to do this might be to output a pixel-map with one value for each value in the original tensor which you are trying to index. This would be similar to NNs that output masks for segmentation.
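The first option above can be sketched as follows (a minimal PyTorch sketch with toy values; the tensor contents and the squared-sum loss are illustrative assumptions, not from the question):

```python
import torch

# toy stand-in for the network output that must become a pixel offset
pred = torch.tensor([2.7, 5.2], requires_grad=True)

loss = (pred ** 2).sum()   # compute the loss on the float output...
loss.backward()            # ...and back-propagate BEFORE the integer cast

# detach first, then cast: the long tensor lives outside the autograd graph
offsets = pred.detach().long()

print(offsets)             # tensor([2, 5])  (long() truncates toward zero)
print(pred.requires_grad)  # True - the original float tensor still tracks grads
```

The key point is the ordering: gradients flow through the float tensor, and the long copy is only used afterwards for indexing-style operations.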
Without more detail on what you're trying to accomplish, it's difficult to say what your best path forward is. Please add more code so the context of this operation is visible.
I am working on an assignment for a course. The code creates variables for use in a datadist() call from the rms package. We then fit a simple linear regression model using ols(). Printing/creating the first plot before the datadist() and ols() calls is simple. We use:
plot(x,y,pch='o')
lines(x,yTrue,lty=2,lwd=.5,col='red')
Then we create the data.dist() and the ols(), here named fit0.
mydat=data.frame(x=x,y=y)
dd=datadist(mydat)
options(datadist='dd')
fit0=ols(y~x,data=mydat)
fit0
anova(fit0)
This all works smoothly, printing the results of the linear regression and the ANOVA table. Then we want to predict from the model and plot the predictions. The plot itself prints out nicely; however, the lines and points won't show up. The code:
ff=Predict(fit0)
plot(ff)
lines(x,yTrue,lwd=2,lty=1,col='red')
points(x,y,pch='.')
Note: this works fine in R. I much prefer to use RStudio, though I can switch to R if there is no clear solution to this issue. I've tried calling dev.off() repeatedly, closing and re-opening RStudio, uninstalling and reinstalling R, RStudio and the rms package (which includes ggplot2), updating the packages, and making my RStudio graphics window larger. None of the solutions I've seen works. Help!
I'm running some classification experiments using sklearn. During the experiments, I build csr_matrix objects to store my data, use a LogisticRegression classifier on these objects, and get some results.
I dump the data using dump_svmlight_file and the model using joblib.
But when I then load the data using load_svmlight_file along with the model, I obtain (very) different results.
I realized that if I dump the data with the zero_based parameter set to False, I retrieve my original results. What exactly is the effect of this parameter? Is it usual to get different results when modifying the value of this parameter?
The docs are pretty explicit:
zero_based : boolean or “auto”, optional, default “auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe.
Your observation seems odd, though. If you dump with zero_based=False and load with zero_based='auto', the heuristic should be able to detect the right format.
Also, if the wrong format had been detected, the number of features would have changed, and your classifier would have raised an error.
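A minimal round-trip sketch of what the heuristic does (the toy matrix is invented, and an in-memory buffer stands in for your file):

```python
import io
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = csr_matrix(np.array([[0.0, 1.0, 2.0],
                         [3.0, 0.0, 4.0]]))
y = np.array([0, 1])

buf = io.BytesIO()
# zero_based=False writes column indices starting at 1 instead of 0
dump_svmlight_file(X, y, buf, zero_based=False)
buf.seek(0)

# 'auto' sees that the smallest index in the file is 1, infers a one-based
# file, and shifts indices back down, so the original matrix is recovered
X2, y2 = load_svmlight_file(buf, zero_based="auto")

print(X2.shape)                             # (2, 3)
print(np.allclose(X2.toarray(), X.toarray()))  # True
```

If the shift were missed, every column index would be off by one and the feature count would grow by one, which is why a silent mismatch would normally surface as a shape error in the classifier.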
I have a bunch of independent variables (height, weight, etc.) that I want to regress a dummy variable on. For instance, suppose I want to model diabetes (0 if the patient doesn't have diabetes, 1 if the patient does) and figure out the effect of a one-pound increase in weight on the probability of having diabetes. How would I do that? I'm sure there are multiple ways of doing it, but I have just never heard of a model that does this. I thought it was the probit model, but I'm not sure. Any thoughts?
The problem you are describing is known as logistic regression; a web search for that should turn up a lot of hits. Most commonly, the response is some function of a linear combination of inputs, but more generally, the response could be a nonlinear function of inputs.
The dependence of the response on an input (e.g. weight) is interesting, but not exactly well-posed, since the change in the probability of the response varies over the range of the input variable: the change is very small for very large or very small values of the input, and reaches a maximum in between. (For a logistic model, the marginal effect of an input with coefficient β is β·p·(1−p), which peaks where p = 0.5.)
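As a sketch, here is a logistic regression fit on synthetic data, along with the per-observation marginal effect of one extra pound. The weight/height distributions and the "true" coefficients are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# hypothetical cohort: 1000 patients with weight (lb) and height (in)
weight = rng.normal(180, 30, size=1000)
height = rng.normal(67, 4, size=1000)

# assumed true model used only to simulate the diabetes indicator
true_logit = -10 + 0.05 * weight
p_true = 1 / (1 + np.exp(-true_logit))
diabetes = rng.binomial(1, p_true)

X = np.column_stack([weight, height])
# a very large C means (almost) no regularization, i.e. close to plain MLE
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, diabetes)

beta_weight = model.coef_[0][0]        # log-odds change per pound
p_hat = model.predict_proba(X)[:, 1]   # fitted P(diabetes)

# marginal effect of one extra pound at each observed point: beta * p * (1 - p)
marginal_effect = beta_weight * p_hat * (1 - p_hat)
```

Note that marginal_effect differs across patients, which is exactly the point above: the same one-pound change shifts the probability most for patients whose fitted probability is near 0.5.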