Multinomial HMM Fitting Issue - hmmlearn

I cannot figure out why fitting the HMM raises this error: "total count of sample should be equal to total number of trials".
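Since the question does not include the fitting code, here is a guess at the usual cause, as a minimal sketch: in recent versions of hmmlearn (0.3 and later), MultinomialHMM models count vectors, so each row of X must hold counts over the symbols and sum to exactly n_trials; a plain sequence of single symbols belongs to CategoricalHMM instead. The arrays below are made up for illustration.

import numpy as np
from hmmlearn import hmm

# MultinomialHMM now expects count vectors: each row of X holds counts over
# n_features symbols and must sum to exactly n_trials.
X = np.array([[2, 1, 0],
              [0, 2, 1],
              [1, 1, 1],
              [3, 0, 0],
              [0, 0, 3],
              [1, 2, 0]])  # every row sums to n_trials = 3

model = hmm.MultinomialHMM(n_components=2, n_trials=3)
model.fit(X)

# If the data is a sequence of single symbols (0, 1, 2, ...), the old
# MultinomialHMM behaviour now lives in CategoricalHMM instead.
seq = np.array([[0], [2], [1], [1], [0]])
cat = hmm.CategoricalHMM(n_components=2)
cat.fit(seq)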

Related

Confusion matrix 5x5 formula for finding accuracy, precision, recall, and f1-score

I'm trying to study confusion matrices. I know about the 2x2 confusion matrix, but I still don't understand how to compute accuracy, precision, recall, and F1-score from a 5x5 confusion matrix. Can anyone help me with this? I appreciate every bit of help.
See my answer here: Calculating Equal error rate(EER) for a multi class classification problem
In short, one strategy is to split the multiclass problem into a set of binary classification problems, one "one vs. all others" classification per class. Then for each binary problem you can calculate F1, precision, and recall, and if you want you can average (uniformly or weighted) the scores of the classes to get one F1 score that represents the multiclass problem.
As for confusion matrices larger than 2x2: the rows are the true labels and the columns are the predicted labels. The number in cell (i, j) is the number of samples from class i that were classified as class j (note that i = j corresponds to a correct prediction). The accuracy is the trace of the confusion matrix divided by the total number of samples.
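A minimal numpy sketch of these formulas, using a made-up 5x5 confusion matrix:

import numpy as np

# Hypothetical 5x5 confusion matrix: rows = true labels, columns = predicted.
cm = np.array([[50,  2,  1,  0,  0],
               [ 3, 40,  4,  1,  2],
               [ 0,  5, 45,  3,  0],
               [ 1,  0,  2, 48,  4],
               [ 0,  1,  0,  5, 44]])

tp = np.diag(cm)                      # correct predictions per class
precision = tp / cm.sum(axis=0)       # TP / (TP + FP), column sums
recall = tp / cm.sum(axis=1)          # TP / (TP + FN), row sums
f1 = 2 * precision * recall / (precision + recall)

accuracy = tp.sum() / cm.sum()        # trace divided by total samples
macro_f1 = f1.mean()                  # uniform average over classes
print(accuracy, macro_f1)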

Scoring Model giving reversed results using logistic regression

I am trying to implement a scoring model following the link https://rstudio-pubs-static.s3.amazonaws.com/376828_032c59adbc984b0ab892ce0026370352.html#1_introduction.
After completing the entire implementation, though, when I create a pivot with my generated scores and the original labels, the average score for "good" labels is significantly lower than the one for "bad" labels.
Hence, my problem can be oversimplified to: why would logistic regression give reversed probabilities for a 0-1 target variable? (In my model I am using 0 for bad and 1 for good.)
Any suggestions and solutions would be welcome.
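One common cause worth ruling out, shown here as a scikit-learn sketch even though the linked tutorial is in R: the columns of predict_proba follow the order of classes_, not your notion of "good", so reading the wrong column silently reverses every score. The toy data below is made up for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 0 = bad, 1 = good, as in the question.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# predict_proba columns are ordered by clf.classes_, not by "goodness";
# picking the wrong column reverses every score.
print(clf.classes_)  # e.g. [0 1]
p_good = clf.predict_proba(X)[:, list(clf.classes_).index(1)]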

Gaussian Mixture model log-likelihood to likelihood - sklearn

I want to calculate the likelihoods instead of the log-likelihoods. I know that score gives the per-sample average log-likelihood, and to get the total I need to multiply the score by the sample size, but the log-likelihoods are very large negative numbers, such as -38567258.1157, and when I take np.exp(scores) I get zero. Any help is appreciated.
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(covariance_type="diag", n_components=2)
y_pred = gmm.fit_predict(X_test)
scores = gmm.score(X_test)  # per-sample average log-likelihood
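For reference, np.exp in double precision underflows to zero for arguments below roughly -745, so the exponential of a total log-likelihood like -38567258 is exactly 0.0 in float64. Below is a sketch of one way around it, reusing gmm and X_test from the snippet above and assuming the third-party mpmath package for arbitrary precision; in practice it is usually better to keep working with log-likelihoods.

import numpy as np
import mpmath  # arbitrary-precision floats; assumed installed

total_loglik = gmm.score(X_test) * len(X_test)  # per-sample average * n

# float64 underflows to zero just below -745:
print(np.exp(-745.0), np.exp(-746.0))  # ~5e-324, then 0.0

# If the raw likelihood is really needed, evaluate exp at higher precision.
likelihood = mpmath.exp(total_loglik)
print(mpmath.nstr(likelihood, 5))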

StatsModels SARIMAX with exogenous variable and linear time trend

I am trying to forecast with a SARIMAX model that includes a linear time trend taking the value 1 for the first data point in the sample and increasing by 1 for each successive observation, up to N = sample size. The trend term is included because it improves the model's predictive power significantly, but we want to freeze it at the last observed value for out-of-sample forecasting. Namely, if the in-sample size is 100, we want to use this value for each step of the forecast instead of increasing it by 1 at each step.
The model has been fitted as follows:
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

model = SARIMAX(endog=Unemployment_series,
                exog=sm.add_constant(insample['GDP_yoy'].values),
                order=(1, 0, 0), trend='t').fit(disp=-1)
According to the statsmodels documentation at https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html, the trend parameter allows us to include the linear time trend.
The problem arises when I try to forecast using the get_forecast or get_prediction methods:
forecast = model.get_forecast(steps=len(outsample),
                              exog=sm.add_constant(outsample['GDP_yoy'].values,
                                                   has_constant='add'))
or
forecast = model.get_prediction(start=len(insample),
                                end=len(insample) + len(outsample) - 1,
                                exog=sm.add_constant(outsample['GDP_yoy'].values,
                                                     has_constant='add'))
I have not found any parameter that controls the behavior of the time trend during forecasting. Any advice?
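One possible workaround, sketched below under the assumption that Unemployment_series, insample and outsample are as in the question: drop trend='t' and instead supply the time trend as an explicit exogenous column, which can then be frozen at N for the out-of-sample steps.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

n, m = len(insample), len(outsample)

# Build the trend 1..N by hand and pass it as an exogenous column, so that
# its out-of-sample values are under our control.
exog_in = np.column_stack([np.ones(n),
                           insample['GDP_yoy'].values,
                           np.arange(1, n + 1)])

model = SARIMAX(endog=Unemployment_series, exog=exog_in,
                order=(1, 0, 0)).fit(disp=False)

exog_out = np.column_stack([np.ones(m),
                            outsample['GDP_yoy'].values,
                            np.full(m, n)])  # trend frozen at N

forecast = model.get_forecast(steps=m, exog=exog_out)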

Why does k=1 in KNN give the best accuracy?

I am using Weka IBk for text classification. Each document is basically a short sentence. The training dataset contains 15,000 documents. While testing, I see that k=1 gives the best accuracy. How can this be explained?
If you are querying your learner with the same dataset you trained on, then with k=1 the output values will be perfect, unless you have data points with the same features but different outcome values. Do some reading on overfitting as it applies to KNN learners.
When you query with the same dataset you trained on, each query arrives with some given feature values. Because that exact point exists in the training data, the learner will match it as the nearest neighbor and output whatever y value that training point had, which in this case is the same as the point you queried with.
The possibilities are:
The training data and the test data are the same data
The test data are highly similar to the training data
The boundaries between classes are very clear
The optimal value of k depends on the data. In general, a larger k reduces the effect of noise on the classification, but it makes the boundaries between classes more blurred.
If your result variable contains values of 0 or 1, make sure you are using as.factor; otherwise the data might be interpreted as continuous.
Accuracy should generally be calculated on points that are not in the training dataset, i.e. unseen data points, because only the accuracy computed on unseen values supports the claim that it reflects the model's true performance.
If you calculate accuracy on the training dataset with KNN and k=1, you get 100%, since every point has already been seen by the model and a rough decision boundary is formed around it for k=1. On unseen data it then performs really badly: the training error is very low, but the actual error is very high. So it is better to choose an optimal k. To do that, plot the error against the k value for unseen data (the test data), and choose the k where the error is lowest; a sketch of this procedure follows at the end of this answer.
To answer your question now:
1) you might have taken the entire dataset as the training set and chosen a subset of it as the test set,
or
2) you might have computed the accuracy on the training dataset.
If neither of these is the case, then check the accuracy values for higher k; you should get even better accuracy for k > 1 on the unseen (test) data.
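For the k-selection procedure mentioned above, here is an illustrative scikit-learn sketch (the question uses Weka, so this is only a stand-in, with made-up X and y): compute cross-validated accuracy over a range of k and pick the best one.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature matrix X and labels y standing in for the Weka data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + rng.normal(scale=2.0, size=500) > 0).astype(int)

# Cross-validated accuracy for a range of k; choose k by the held-out error,
# not the training error (which is trivially zero at k=1).
for k in (1, 3, 5, 9, 15, 25):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
    print(f"k={k:2d}  cv accuracy={acc:.3f}")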
