I have a multilabel prediction task with a scikit-learn pipeline. It works properly in terms of internal testing and getting metrics for each of the label predictions. However, I'm having trouble getting the right structure for the data output. When I run the code on unseen/external data, it apparently runs through the predictions for each of the labels but replaces the values in the same column, so I only get one column of predictions.
This data set involves more than 20 labels (categories), and it's part of an NLP model. Each of the labels is binarized (0 or 1). I am new to this and really appreciate the help. Thank you!
Here are the three parts of the code: (1) the pipeline, (2) the for loop over test/validation data with fit/predict, and (3) my attempts at coding the predict function for external data.
1) Pipeline:
SVC_pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(tokenizer=LemmaTokenizer(), min_df=8)),
    ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=6)),
])
2) For loop:
for category in categories:
    print('processing {}'.format(category))
    # train
    SVC_pipeline.fit(X_train, train[category])
    # test
    prediction = SVC_pipeline.predict(X_test)
    print('Test accuracy is {}'.format(accuracy_score(test[category], prediction)))
3) Predict external data:
doctext = sampdf['doc_text']
pred = SVC_pipeline.predict(doctext)
Also tried this:
for category in categories:
    print('... Processing {}'.format(category))
    svcpredict = SVC_pipeline.predict(testthis)
    np.savetxt("/Users/.../Dropbox/.../svcpredicts.csv", svcpredict)
I also tried a few other variations, but they all had the same result. The metrics ran through all labels and gave me varying metrics for each category, but the output only gave me one column of predictions.
Thanks!
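For reference, a minimal sketch of one way to get a separate prediction column per category on the external data, reusing the per-category loop from part (2); pred_df is a made-up name:

import pandas as pd

pred_df = pd.DataFrame(index=sampdf.index)
for category in categories:
    # refit the pipeline for this category, exactly as in the training loop
    SVC_pipeline.fit(X_train, train[category])
    # keep this category's predictions in their own column instead of overwriting
    pred_df[category] = SVC_pipeline.predict(sampdf['doc_text'])

pred_df.to_csv("svcpredicts.csv", index=False)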
Related
I'm really new at ML. I trained my model on my dataset and then saved it with pickle. My training dataset has text and a value. Now I'm trying to get an estimate for my new dataset, which has only text.
However, when I try to predict new values with my trained model, I get an error which says
ValueError: Number of features of the model must match the input. Model n_features is 17804 and input n_features is 24635
You can check my code below. What should I do at this point?
with open('trained.pickle', 'rb') as read_pickle:
    loaded = pickle.load(read_pickle)

dataset2 = pandas.read_csv('/root/Desktop/predict.csv', encoding='cp1252')
X2_train = dataset2['text']
train_tfIdf = vectorizer_tfidf.fit_transform(X2_train.values.astype('U'))
x = loaded.predict(train_tfIdf)
print(x)
fit_transform fits to the data and then transforms it, which you don't want to do while testing; it is like retraining the tf-idf. So, for the purpose of prediction, I would suggest simply using the transform method.
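A minimal sketch of what that looks like, assuming the TfidfVectorizer fitted at training time was also pickled (the 'vectorizer.pickle' file name here is made up):

import pickle
import pandas

# load the vectorizer and the classifier that were fitted at training time
with open('vectorizer.pickle', 'rb') as f:  # hypothetical file name
    vectorizer_tfidf = pickle.load(f)
with open('trained.pickle', 'rb') as f:
    loaded = pickle.load(f)

dataset2 = pandas.read_csv('/root/Desktop/predict.csv', encoding='cp1252')
X2 = dataset2['text'].values.astype('U')

# transform only: reuse the vocabulary learned during training,
# so the feature count matches what the model expects
X2_tfidf = vectorizer_tfidf.transform(X2)
print(loaded.predict(X2_tfidf))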
I am doing some text classification.
Let's say I have 10 categories and 100 "samples", where each sample is a sentence of text. I have split my samples into 80:20 (training, testing) and trained the SVM classifier:
text_clf_svm = Pipeline([
    ('vect', CountVectorizer(stop_words='english', ngram_range=(1, 2))),
    ('tfidf', TfidfTransformer()),
    ('clf-svm', SGDClassifier(loss='hinge', penalty='l2', random_state=42, learning_rate='adaptive', eta0=0.9)),
])
# Fit training data to SVM classifier, predict with testing data and print accuracy
text_clf_svm = text_clf_svm.fit(training_data, training_sub_categories)
Now when it comes to predicting, I do not want just a single category to be predicted. I want to see, for example, a list of the "top 5" categories for a given unseen sample as well as their associated probabilities:
top_5_category_predictions = text_clf_svm.predict(a_single_unseen_sample)
Since text_clf_svm.predict returns a value which represents the index of the categories available, I want to see something like this as output:
[(4,0.70),(1,0.20),(7,0.04),(9,0.06)]
Anyone know how to achieve this?
This is something I had used a while back for a similar problem:
probs = clf.predict_proba(X_test)
# Sort desc and only extract the top-n
top_n_category_predictions = np.argsort(probs)[:,:-n-1:-1]
This will give you the top n categories for each sample.
If you also want to see the probabilities corresponding to these categories, then you can do:
top_n_probs = np.sort(probs)[:,:-n-1:-1]
Note: Here X_test is of shape (n_samples, n_features). So make sure you use your single_unseen_sample in the same format.
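One caveat worth noting: predict_proba is only available when the final estimator can produce probabilities. SGDClassifier with loss='hinge' (as in the pipeline above) does not expose predict_proba, so the sketch below assumes a probabilistic loss such as loss='modified_huber' (or wrapping the classifier in CalibratedClassifierCV). It also assumes a_single_unseen_sample is a raw text string:

import numpy as np

n = 5  # how many top categories to report

# shape (1, n_classes); requires a probability-capable final estimator
probs = text_clf_svm.predict_proba([a_single_unseen_sample])

# indices of the top-n categories, highest probability first
top_n_idx = np.argsort(probs)[:, :-n - 1:-1]
top_n_probs = np.sort(probs)[:, :-n - 1:-1]

# pair each category label with its probability for this one sample
top_n = list(zip(text_clf_svm.classes_[top_n_idx[0]], top_n_probs[0]))
print(top_n)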
I am using sklearn's cross_val_predict for training like so:
myprobs_train = cross_val_predict(LogisticRegression(),X = x_old, y=y_old, method='predict_proba', cv=10)
I am happy with the returned probabilities, and would like now to score up a brand-new dataset. I tried:
myprobs_test = cross_val_predict(LogisticRegression(), X =x_new, y= None, method='predict_proba',cv=10)
but this did not work; it complains about y having zero shape. Does that mean there's no way to apply the trained and cross-validated model from cross_val_predict to new data? Or am I just using it wrong?
Thank you!
You are looking at the wrong method. Cross-validation methods do not return a trained model; they return values that evaluate the performance of a model (logistic regression in your case). Your goal is to fit some data and then generate predictions for new data. The relevant methods are fit and predict of the LogisticRegression class. Here is the basic structure:
from sklearn import linear_model

logreg = linear_model.LogisticRegression()
logreg.fit(x_old, y_old)
predictions = logreg.predict(x_new)
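A slightly fuller sketch of that evaluate-then-fit split, using predict_proba since the original cross_val_predict call asked for probabilities (cross_val_score here is just one convenient way to keep the evaluation step):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

logreg = LogisticRegression()

# cross-validation only evaluates the model on the old data ...
scores = cross_val_score(logreg, x_old, y_old, cv=10)
print("mean CV accuracy:", scores.mean())

# ... the model still has to be fitted before it can score new data
logreg.fit(x_old, y_old)
myprobs_test = logreg.predict_proba(x_new)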
I have the same concern as #user3490622. If we can only use cross_val_predict on training and testing sets, why is y (target) None as the default value? (sklearn page)
To partially achieve the desired result of multiple predicted probabilities, one could repeatedly use the fit-then-predict approach to mimic cross-validation.
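A minimal sketch of that repeated fit-then-predict idea, assuming x_old and y_old are NumPy arrays (indexing would differ slightly for pandas objects):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_probs = []

for train_idx, _ in kf.split(x_old):
    model = LogisticRegression()
    # fit on each fold's training portion, then score the new data
    model.fit(x_old[train_idx], y_old[train_idx])
    fold_probs.append(model.predict_proba(x_new))

# average the per-fold probabilities for the new data
myprobs_test = np.mean(fold_probs, axis=0)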
I am using scikit-learn for SVM classification.
I need a classifier that returns a default value when a given test item doesn't match any of the training-set items, i.e. when the distance is very high. Is that possible?
For example, let's say my training set is
X = [[0.5, 0.5, 2], [4, 4, 16], [16, 16, 64]]
and labels
y = [0, 1, 2]
then I run training
clf = svm.SVC()
clf.fit(X, y)
then I run prediction
clf.predict([[-100, -100, -200]])
Now, as we can see, the test item [-100, -100, -200] is very far away from all of the training items. In this case the prediction yields [2], which corresponds to the item [16, 16, 64]. Is there any way to make it return something else (not from the training set)?
I think you can create a label for those out-of-range values and add it to your training set,
X = [[0.5, 0.5, 2], [4, 4, 16], [16, 16, 64], [-100, -100, -200]]
y = [0, 1, 2, 100]
and give it a try.
SVM is supervised learning, which means the 'OUTPUT' has to be specified. If you are not certain about the 'OUTPUT', do some unsupervised clustering (k-means, for example) to get a rough idea of how many possible 'OUTPUT' values to expect.
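A minimal end-to-end version of that suggestion, reusing the numbers from the question (exactly which synthetic "far away" rows you add, and the catch-all label 100, are choices you would tune):

from sklearn import svm

# original training data plus one synthetic "far away" example with its own label
X = [[0.5, 0.5, 2], [4, 4, 16], [16, 16, 64], [-100, -100, -200]]
y = [0, 1, 2, 100]

clf = svm.SVC()
clf.fit(X, y)

# a distant test item should now map to the catch-all label 100
# instead of being forced onto one of the real classes
print(clf.predict([[-100, -100, -200]]))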
I am new to machine learning and Python. I am trying to apply random forest to predict a binary target. In my data I have 24 predictors (1000 observations), where one of them is categorical (gender) and all the others are numerical. Among the numerical ones there are two types of values: volumes of money in euros (very skewed and widely scaled) and counts (number of transactions from an ATM). I have transformed the large-scale features and done the imputation. Last, I checked correlation and collinearity and, based on that, removed some features (as a result I had 24 features). Now, when I implement RF, it is always perfect on the training set, while the cross-validation scores are not so good. And even applying it to the test set gives very, very low recall values. How should I remedy this?
def classification_model(model, data, predictors, outcome):
    # Fit the model:
    model.fit(data[predictors], data[outcome])

    # Make predictions on training set:
    predictions = model.predict(data[predictors])

    # Print accuracy
    accuracy = metrics.accuracy_score(predictions, data[outcome])
    print("Accuracy : %s" % "{0:.3%}".format(accuracy))

    # Perform k-fold cross-validation with 5 folds
    kf = KFold(data.shape[0], n_folds=5)
    error = []
    for train, test in kf:
        # Filter training data
        train_predictors = data[predictors].iloc[train, :]
        # The target we're using to train the algorithm.
        train_target = data[outcome].iloc[train]
        # Training the algorithm using the predictors and target.
        model.fit(train_predictors, train_target)
        # Record error from each cross-validation run
        error.append(model.score(data[predictors].iloc[test, :], data[outcome].iloc[test]))
    print("Cross-Validation Score : %s" % "{0:.3%}".format(np.mean(error)))

    # Fit the model again so that it can be referred to outside the function:
    model.fit(data[predictors], data[outcome])
outcome_var = 'Sold'
model = RandomForestClassifier(n_estimators=20)
predictor_var = train.drop('Sold', axis=1).columns.values
classification_model(model,train,predictor_var,outcome_var)
#Create a series with feature importances:
featimp = pd.Series(model.feature_importances_, index=predictor_var).sort_values(ascending=False)
print(featimp)
outcome_var = 'Sold'
model = RandomForestClassifier(n_estimators=20, max_depth=20, oob_score = True)
predictor_var = ['fet1','fet2','fet3','fet4']
classification_model(model,train,predictor_var,outcome_var)
In Random Forest it is very easy to overfit. To resolve this, you need to do the parameter search a little more rigorously to find the best parameters to use. [Here](http://scikit-learn.org/stable/auto_examples/model_selection/randomized_search.html) is how to do this (from the scikit-learn docs).
It is overfitting, and you need to search for the best parameters that will work for the model. The link provides implementations of grid and randomized search for hyperparameter estimation.
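A minimal sketch of what that randomized search could look like for the model above (the parameter ranges here are illustrative, not tuned for this dataset):

from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    "n_estimators": randint(50, 500),
    "max_depth": [3, 5, 10, 20, None],
    "max_features": ["sqrt", "log2", None],
    "min_samples_leaf": randint(1, 20),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions=param_dist,
    n_iter=50,
    cv=5,
    scoring="recall",  # the question cares about recall on the test set
    random_state=42,
)
search.fit(train[predictor_var], train[outcome_var])
print(search.best_params_)
print(search.best_score_)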
It will also be fun to go through this MIT Artificial Intelligence lecture to get a deeper theoretical orientation: https://www.youtube.com/watch?v=UHBmv7qCey4&t=318s.
Hope this helps!