In my OpenModelica model (OpenModelica 1.19.2 on Ubuntu 20.04) I'm using Modelica.Blocks.Math.Mean to compute the mean of the final period of some of my results, like:
when terminal() then
result = blockMean.y;
end when;
where result is a Real and blockMean is an instance of Modelica.Blocks.Math.Mean (with the variable of interest as input).
When I run this model in OpenModelica, this works fine, and result shows up in the resulting list of variables in the Plotting tab, with the correct value.
However, after exporting an FMU of this model and running it in Python using pyFMI, result is an array of zeros, with the same length as all the time-varying result signals.
I'm fairly new to Modelica and pyFMI, so I'm not very familiar with all the options and details. As a first attempt at solving this, I tried playing with opts["CVode_options"]["store_event_points"], but that made no difference.
Is there some option that I should set? Or is this a bug that I just should live with?
Does anyone have an idea what the input text size limit is for the
predict(passage, question) method of the AllenNLP Predictors?
I have tried passages of 30-40 sentences, which work fine. But it fails when I pass a significantly larger amount of text, around 5K sentences.
Which model are you using? Some models truncate the input, others try to handle arbitrary length input using a sliding window approach. With the latter, the limit will depend on the memory available on your system.
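The sliding-window approach mentioned above can be illustrated generically. This is not AllenNLP's actual implementation, just a sketch of the technique: split a long token sequence into fixed-size windows that overlap, so each window fits in memory while no token is seen only at a window boundary.

```python
def sliding_windows(tokens, window_size, stride):
    """Split a long token sequence into overlapping windows.

    Consecutive windows start `stride` tokens apart, so with
    stride < window_size they overlap by (window_size - stride) tokens.
    """
    windows = []
    for start in range(0, len(tokens), stride):
        windows.append(tokens[start:start + window_size])
        if start + window_size >= len(tokens):
            break  # this window already reaches the end of the input
    return windows

# 10 tokens, windows of 4 with stride 3 (overlap of 1 token)
print(sliding_windows(list(range(10)), window_size=4, stride=3))
# → [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

With this scheme, the per-window memory cost is bounded by window_size regardless of input length; the remaining cost is the number of windows, which grows linearly with the input, which is why available memory becomes the practical limit.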
I am using nuXmv to develop a model in which I want to initialize 3 of the variables to a range of integers rather than a fixed integer value. The ranges are 1..100, 20..100, and 0..200.
Simulating the model with a fixed set of init values works as expected, and the properties can be verified.
However, when I supply a range of init values to be chosen nondeterministically, the model hangs indefinitely.
Does anyone know what the reason could be and how to solve this?
I am trying to fit data using the standard model functions (Lorentzian and Gaussian) from the lmfit package. The program works quite well for some data sets, but for another it is not able to fit because the initial values don't seem right. Is there an algorithm that can extract the initial values from the data set and iterate to find the best fit?
I tried some common approaches like a brute-force search, but the results are not satisfactory and it takes a lot of time.
It is always recommended to provide a small, complete example script that shows the problem you are having. How could we know why it works in some cases and not in others?
lmfit.models.GaussianModel and lmfit.models.LorentzianModel both have guess methods. These should work reasonably well for data with an isolated peak, like:
import lmfit
model = lmfit.models.GaussianModel()
params = model.guess(ydata, x=xdata)
for p in params.values():
    print(p)
result = model.fit(ydata, params, x=xdata)
print(result.fit_report())
If the data doesn't have a clear isolated peak, that might not work so well.
If finding the peak(s) is the actual problem, try scipy.signal.find_peaks
(https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html) or peakutils (https://peakutils.readthedocs.io/en/latest/). Either of these should give you a good estimate of the center parameter, which is probably the one most likely to cause a bad fit if given a poor initial value.
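A minimal sketch of seeding the center from the data with scipy.signal.find_peaks (the synthetic Gaussian data here is just for illustration):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic example: a single Gaussian peak centered at x = 2.0
x = np.linspace(-10, 10, 201)
y = np.exp(-(x - 2.0) ** 2 / 2.0)

# find_peaks returns indices of local maxima; the 'height' threshold
# filters out small noise bumps
peaks, properties = find_peaks(y, height=0.5)
center_guess = x[peaks[0]]
print(center_guess)  # close to 2.0
```

You could then assign center_guess to the model's center parameter (e.g. params['center'].value = center_guess) before calling fit, instead of relying entirely on guess.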
I am working on an assignment for a course. The code creates variables for use in a datadist() call from the rms package. We then create a simple linear regression model using ols(). Printing/creating the first plot before the datadist() and ols() calls is simple. We use:
plot(x,y,pch='o')
lines(x,yTrue,lty=2,lwd=.5,col='red')
Then we create the datadist object and the ols() model, here named fit0.
mydat=data.frame(x=x,y=y)
dd=datadist(mydat)
options(datadist='dd')
fit0=ols(y~x,data=mydat)
fit0
anova(fit0)
This all works smoothly, printing the results of the linear regression and the ANOVA table. Then we want to predict based on the model and plot these predictions. The plot prints out nicely; however, the lines and points won't show up. The code:
ff=Predict(fit0)
plot(ff)
lines(x,yTrue,lwd=2,lty=1,col='red')
points(x,y,pch='.')
Note: this works fine in R. I much prefer to use RStudio, though I can switch to R if there's no clear solution to this issue. I've tried calling dev.off() repeatedly, closing and re-opening RStudio, uninstalling and reinstalling R, RStudio, and the rms package (which includes ggplot2), updating the packages, and making my RStudio graphics window larger. No solution I've found works. Help!
I'm running some classification experiments using sklearn. During the experiments, I build csr_matrix objects to store my data, run a LogisticRegression classifier on them, and get some results.
I dump the data using dump_svmlight_file and the model using joblib.
But when I then load the data using load_svmlight_file and reload the model, I obtain (very) different results.
I realized that if I dump the data with the zero_based parameter set to False, I recover my original results. What exactly is the effect of this parameter? Is it usual to get different results when changing its value?
The docs are pretty explicit:
zero_based : boolean or “auto”, optional, default “auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe.
Your observation seems odd, though. If you dump with zero_based=False and load with zero_based='auto', the heuristic should detect the right format.
Also, if the wrong format had been detected, the number of features would have changed, so your classifier would have raised an error.
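To see how a misdetected format changes the feature count, here is a small sketch with synthetic data and an in-memory file. The trick is that column 0 is all zeros, so index 0 never appears in the dumped file and the 'auto' heuristic has nothing to rule out one-based indexing:

```python
import io
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

# Column 0 is all zeros, so the smallest index written to the file is 1
X = csr_matrix(np.array([[0.0, 1.0, 2.0],
                         [0.0, 3.0, 4.0]]))
y = np.array([0, 1])

buf = io.BytesIO()
dump_svmlight_file(X, y, buf, zero_based=True)

# Loading with zero_based='auto': since index 0 never appears, the
# heuristic assumes one-based indices and shifts every column down by one.
buf.seek(0)
X_auto, _ = load_svmlight_file(buf, zero_based='auto')

# Loading with the correct setting preserves the original shape.
buf.seek(0)
X_true, _ = load_svmlight_file(buf, zero_based=True)

print(X.shape, X_auto.shape, X_true.shape)  # (2, 3) (2, 2) (2, 3)
```

So a mismatch between the dump and load settings silently shifts (or drops) feature columns, which would feed a trained LogisticRegression the wrong features and explain very different results.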