I'm trying to use the tune.svm function and, since I don't really know which parameters will produce a good model (the training data will be picked by a user), I need to cover a wide range of values. Currently I get this behavior:
tune(svm, value ~ . , data= data_l, ranges=list(cost = 10^(0:5), epsilon = 10^(-1:0)))
Parameter tuning of ‘svm’:
- sampling method: 10-fold cross validation
- best parameters:
cost epsilon
100 0.1
- best performance: 277.5491
and
tune(svm, value ~ . , data= data_l, ranges=list(cost = 10^(0:5), epsilon = 10^(-1:1)))
Error in predict.svm(ret, xhold, decision.values = TRUE) : Model is empty!
(The difference is the highest value of epsilon.)
I know that svm won't work with epsilon = 10, but my intuition about this tuning function is that it should be able to handle parameter values that do not produce a model.
Why doesn't it just pick the best among the models that could be generated?
Is there an "easy" way to avoid this error behavior? (I tried tryCatch(tune()) and a lot of other things I found, but I suspect I would have to dig deep into the tune/svm/predict code, which no longer sounds "easy".)
My understanding of the "Model is empty!" error is that it indicates a singular training matrix was fed into the SVM. See Salvy's answer to this related post and Oldrich Kruza's related message in the R mailing list.
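For what it's worth, scikit-learn's grid search handles this situation with an explicit option rather than a tryCatch-style wrapper: failing parameter combinations can be scored as NaN instead of aborting the whole search. A minimal sketch of that pattern (Python/scikit-learn, not e1071, purely to illustrate the idea; X and y are hypothetical training data):
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {'C': 10.0 ** np.arange(0, 6), 'epsilon': 10.0 ** np.arange(-1, 2)}
# error_score=np.nan: any parameter combination whose fit raises an error
# receives a NaN score instead of stopping the entire search
search = GridSearchCV(SVR(), param_grid, cv=10, error_score=np.nan)
# search.fit(X, y)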
I am aware of the standard process of finding the optimal value of alpha/lambda using cross-validation via the GridSearchCV class in the sklearn.model_selection library. Here's my code to find it:
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, RepeatedKFold

alphas = np.arange(0.0001, 0.01, 0.0005)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=100)
hyper_param = {'alpha': alphas}
model = Lasso()
model_cv = GridSearchCV(estimator=model,
                        param_grid=hyper_param,
                        scoring='r2',
                        cv=cv,
                        verbose=1,
                        return_train_score=True)
model_cv.fit(X_train, y_train)
# checking the best parameters
model_cv.best_params_
This gives me alpha=0.01
Now, looking at LassoCV: as per my understanding, this class builds the model by selecting the optimal alpha from the passed alphas list, and note that I have used the same cross-validation scheme for both. But this is what happens when I try sklearn.linear_model.LassoCV with the RepeatedKFold cross-validation scheme:
alphas=np.arange(0.0001,0.01,0.0005)
cv=RepeatedKFold(n_splits=10,n_repeats=3,random_state=100)
ls_cv_m=LassoCV(alphas,cv=cv,n_jobs=1,verbose=True,random_state=100)
ls_cv_m.fit(X_train_reduced,y_train)
print('Alpha Value %d'%ls_cv_m.alpha_)
print('The coefficients are {}',ls_cv_m.coef_)
I get alpha = 0 for the same data, and this alpha value is not present in the list of decimal values passed in the alphas argument.
This has confused me about the actual implementation of LassoCV.
My questions are:
Why do I get an optimal alpha of 0 in LassoCV when the list passed to the alphas argument does not contain zero?
What is the difference between LassoCV and Lasso then, if I have to find the most suitable alpha through GridSearchCV anyway?
First, you should pass your alphas as a keyword argument rather than a positional argument, since the first positional parameter of LassoCV is eps.
ls_cv_m=LassoCV(alphas=alphas,cv=cv,n_jobs=1,verbose=True,random_state=100)
Then, the model is in fact returning one of the alphas that you previously defined as the optimal parameter; however, you are printing it with an integer format specifier, which truncates the float. Replace %d with %f to print it in float format:
print('Alpha Value %f'%ls_cv_m.alpha_)
Have a look here for more details about Python printing formats and styles.
As for your second question, Lasso is the linear model itself, while LassoCV is an estimator that finds the optimal alpha for a Lasso model using cross-validation.
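To make the relationship concrete, here is a minimal sketch (assuming the same X_train, y_train and alphas as in your snippets): LassoCV runs the alpha search internally, and a plain Lasso is then just a single model with a fixed alpha.
from sklearn.linear_model import Lasso, LassoCV
from sklearn.model_selection import RepeatedKFold
import numpy as np

alphas = np.arange(0.0001, 0.01, 0.0005)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=100)

# LassoCV does the cross-validated alpha search itself
ls_cv = LassoCV(alphas=alphas, cv=cv, random_state=100).fit(X_train, y_train)

# Lasso is a single model with a fixed alpha; here it is refit
# with the alpha selected by LassoCV
best_lasso = Lasso(alpha=ls_cv.alpha_).fit(X_train, y_train)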
I have a Naive Bayes classifier that I wrote in Python using a Pandas data frame, and now I need it in PySpark. My problem is that I need the feature importance of each column, and when looking through the PySpark ML documentation I couldn't find any info on it.
Does anyone know if I can get the feature importances with Naive Bayes in Spark MLlib?
The Python code is the following; the feature importance is retrieved with .coef_:
from sklearn.naive_bayes import BernoulliNB

df = df.fillna(0).toPandas()
X_df = df.drop(['NOT_OPEN', 'unique_id'], axis=1)
X = X_df.values
Y = df['NOT_OPEN'].values.reshape(-1, 1)
mnb = BernoulliNB(fit_prior=True)
y_pred = mnb.fit(X, Y).predict(X)
estimator = mnb.fit(X, Y)
# coef_: for a binary classification problem this is the log of the estimated probability
# of a feature given the positive class, i.e. higher values mean more important features
# for the positive class.
feature_names = X_df.columns
coefs_with_fns = sorted(zip(estimator.coef_[0], feature_names))
If you're interested in an equivalent of coef_, the property you're looking for is NaiveBayesModel.theta:
log of class conditional probabilities.
New in version 2.0.0.
i.e.
model = ... # type: NaiveBayesModel
model.theta.toArray() # type: numpy.ndarray
The resulting array is of size (number-of-classes, number-of-features), and rows correspond to consecutive labels.
It is probably better to evaluate the difference
log(P(feature_X|positive)) - log(P(feature_X|negative))
as a feature importance,
because we are interested in the discriminative power of each feature_X (even though Naive Bayes is, of course, a generative model).
Extreme example: some feature_X1 has the same value across all + and - samples, so it has no discriminative power.
The probability of this feature value is then high for both + and - samples, but the difference of the log probabilities is 0.
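For a binary problem, a minimal sketch of that idea (assuming a fitted NaiveBayesModel called model, label 1 as the positive class, and a feature_names list matching the training columns):
theta = model.theta.toArray()        # shape (n_classes, n_features): log P(feature | class)
importance = theta[1] - theta[0]     # log(P(feature_X|positive)) - log(P(feature_X|negative))
ranked = sorted(zip(importance, feature_names), reverse=True)
# features at the top of `ranked` discriminate most strongly in favour of the positive class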
I am new to machine learning and Python. I am trying to apply a random forest to predict a binary target. My data has 24 predictors (1,000 observations), one of which is categorical (gender) and the rest numerical. Among the numerical ones there are two types of values: volumes of money in euros (very skewed and wide-ranging) and counts (number of transactions from an ATM). I have transformed the large-scale features and done the imputation. Finally, I checked correlation and collinearity and removed some features based on that (ending up with 24 features). Now, when I fit the random forest, it is always perfect on the training set, while the cross-validation scores are not nearly as good, and on the test set it gives very low recall values. How should I remedy this?
def classification_model(model, data, predictors, outcome):
    # Fit the model:
    model.fit(data[predictors], data[outcome])
    # Make predictions on the training set:
    predictions = model.predict(data[predictors])
    # Print accuracy
    accuracy = metrics.accuracy_score(predictions, data[outcome])
    print("Accuracy : %s" % "{0:.3%}".format(accuracy))
    # Perform k-fold cross-validation with 5 folds
    kf = KFold(data.shape[0], n_folds=5)
    error = []
    for train, test in kf:
        # Filter training data
        train_predictors = (data[predictors].iloc[train, :])
        # The target we're using to train the algorithm.
        train_target = data[outcome].iloc[train]
        # Training the algorithm using the predictors and target.
        model.fit(train_predictors, train_target)
        # Record error from each cross-validation run
        error.append(model.score(data[predictors].iloc[test, :], data[outcome].iloc[test]))
    print("Cross-Validation Score : %s" % "{0:.3%}".format(np.mean(error)))
    # Fit the model again so that it can be referred to outside the function:
    model.fit(data[predictors], data[outcome])
outcome_var = 'Sold'
model = RandomForestClassifier(n_estimators=20)
predictor_var = train.drop('Sold', axis=1).columns.values
classification_model(model,train,predictor_var,outcome_var)
#Create a series with feature importances:
featimp = pd.Series(model.feature_importances_, index=predictor_var).sort_values(ascending=False)
print(featimp)
outcome_var = 'Sold'
model = RandomForestClassifier(n_estimators=20, max_depth=20, oob_score = True)
predictor_var = ['fet1','fet2','fet3','fet4']
classification_model(model,train,predictor_var,outcome_var)
In Random Forests it is very easy to overfit. To resolve this you need to do the parameter search a little more rigorously to find the best parameters to use. [Here](http://scikit-learn.org/stable/auto_examples/model_selection/randomized_search.html) is a link on how to do this (from the scikit-learn docs).
Your model is overfitting, and you need to search for the parameters that work best for it. The linked example provides implementations of both grid search and randomized search for hyperparameter estimation.
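As a rough illustration (the parameter ranges below are just placeholders, not tuned for your data), a randomized search over a RandomForestClassifier using the variables from your snippet could look like this:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    'n_estimators': [100, 200, 500],
    'max_depth': [3, 5, 10, None],
    'min_samples_leaf': [1, 5, 10, 20],
    'max_features': ['sqrt', 'log2', None],
}
# scoring='recall' because low recall on the test set was the stated problem
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions=param_dist,
                            n_iter=20, cv=5, scoring='recall',
                            random_state=0, n_jobs=-1)
search.fit(train[predictor_var], train[outcome_var])
print(search.best_params_, search.best_score_)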
It will also be fun to go through this MIT Artificial Intelligence lecture to get a deeper theoretical orientation: https://www.youtube.com/watch?v=UHBmv7qCey4&t=318s.
Hope this helps!
To be clear, by weights I mean the entries in the matrices (Ws) of the affine transformation in a node of a neural net.
I start with categorical_crossentropy as my loss function, and I want to add an extra term to penalize negative weights.
To this end I want to introduce a term of the form
theano.tensor.sum(theano.tensor.exp(-10 * ws))
Where "ws" are the weights.
If I follow the source code of categorical_crossentropy:
if true_dist.ndim == coding_dist.ndim:
    return -tensor.sum(true_dist * tensor.log(coding_dist), axis=coding_dist.ndim - 1)
elif true_dist.ndim == coding_dist.ndim - 1:
    return crossentropy_categorical_1hot(coding_dist, true_dist)
else:
    raise TypeError('rank mismatch between coding and true distributions')
It seems like I should update the third line from the bottom to read
crossentropy_categorical_1hot(coding_dist, true_dist) + theano.tensor.sum(theano.tensor.exp(- 10 * ws))
And change the declaration of the function to be
my_categorical_crossentropy(coding_dist, true_dist, ws)
When calling my_categorical_crossentropy I write
loss = my_categorical_crossentropy(net_output, true_output, l_layers[1].W)
where, for a start, l_layers[1].W are the weights coming from the first layer of my neural net.
With those updates, I go on writing:
loss = aggregate(loss, mode = 'mean')
updates = sgd(loss, all_params, learning_rate = 0.005)
train = theano.function([l_input.input_var, true_output], loss, updates = updates)
[...]
This compiles and everything runs smoothly; the training of the network completes. However, for some reason the additional term theano.tensor.sum(theano.tensor.exp(-10 * ws)) is ignored; it seems not to affect the loss value.
I tried looking into the Theano documentation, but so far I could not figure out what might be wrong. The weights l_layers[1].W are shared variables, so I could not pass them as
train = theano.function([l_input.input_var, true_output, l_layers[1].W], loss, updates = updates)
Any comments are welcome. Thanks!
Solution
Though I didn't find out why my original approach didn't work, adding the penalty term outside categorical_crossentropy, as suggested in the comments, did solve the problem:
loss = aggregate(categorical_crossentropy(net_output, true_output) + theano.tensor.sum(theano.tensor.exp(-10 * l_layers[1].W)))
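Equivalently (adding a scalar before averaging gives the same result as adding it after), the penalty can be kept as a separate term, which makes the loss easier to read. A sketch assuming the same aggregate/categorical_crossentropy/sgd setup as above:
data_term = aggregate(categorical_crossentropy(net_output, true_output), mode='mean')
# exp(-10 * W) is large for negative weights, so this term penalizes them
penalty = theano.tensor.sum(theano.tensor.exp(-10 * l_layers[1].W))
loss = data_term + penalty
updates = sgd(loss, all_params, learning_rate=0.005)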
After fitting a GARCH model in R and obtaining the output, how do I know whether there is any evidence of an ARCH effect?
I am not too sure whether I have to look at the optimal parameters, the information criteria, the Q-statistics on standardized residuals, the ARCH LM tests, the Nyblom stability test, the Sign Bias Test, or the Adjusted Pearson goodness-of-fit test.
I assume I have to look at the ARCH LM tests, and if the p-value is rather high there is an ARCH effect. Am I right?
Thank you
You need to start by looking for second-order persistence in the return series itself before going on to fit a GARCH model. Let's work through a quick example of how this works.
Start by getting the data. Here I will use the quantmod library to load the SPDR S&P 500 ETF (SPY):
library(quantmod)
library(PerformanceAnalytics)
rtn<-getSymbols(c('SPY'),return.class='ts')
Next, calculate the return series, either yourself or with the Return.calculate function provided by the PerformanceAnalytics library:
Rtn <- diff(log(SPY[,"SPY.Close"])) * 100
#OR
Rtn <- Return.calculate(SPY[,"SPY.Close"], method = c("compound","simple")[2]) * 100
Now, let's have a look at the persistence of the first and second order moments of the series. For the second order moments, let's use the squared return series as a proxy.
Plotdata<-cbind(Rtn, Rtn^2)
plot.zoo(Plotdata)
There remains strong first-order persistence in the returns, and there are clearly periods of strong second-order persistence, as seen in the squared returns.
We can now formally test for ARCH effects. A formal test is the Ljung-Box Q (LBQ) statistic on the squared returns:
Box.test(coredata(Rtn^2), type = "Ljung-Box", lag = 12)
Box-Ljung test
data: coredata(Rtn^2)
X-squared = 2001.2, df = 12, p-value < 2.2e-16
We can clearly reject the null hypothesis of independence in the series, i.e. ARCH effects are present.
The FinTS package also provides the ARCH LM test for conditional heteroskedasticity in the returns:
library(FinTS)
ArchTest(Rtn)
ARCH LM-test; Null hypothesis: no ARCH effects
data: Rtn
Chi-squared = 722.19, df = 12, p-value < 2.2e-16
This supports the conclusion of the LBQ test that ARCH effects are present.