How to add more features in multi-class text classification? - python-3.x

I have a retail dataset with product_description, price, supplier, and category as columns.
I used product_description as the only feature:
from sklearn import model_selection, preprocessing, naive_bayes, metrics
from sklearn.feature_extraction.text import TfidfVectorizer
# split the dataset into training and validation datasets
train_x, valid_x, train_y, valid_y = model_selection.train_test_split(df['product_description'], df['category'])
# label encode the target variable (fit on train, reuse the same encoding for validation)
encoder = preprocessing.LabelEncoder()
train_y = encoder.fit_transform(train_y)
valid_y = encoder.transform(valid_y)
# TF-IDF features from the product descriptions
tfidf_vect = TfidfVectorizer(analyzer='word', token_pattern=r'\w{1,}', max_features=5000)
tfidf_vect.fit(df['product_description'])
xtrain_tfidf = tfidf_vect.transform(train_x)
xvalid_tfidf = tfidf_vect.transform(valid_x)
classifier = naive_bayes.MultinomialNB().fit(xtrain_tfidf, train_y)
# predict the labels on the validation dataset
predictions = classifier.predict(xvalid_tfidf)
metrics.accuracy_score(valid_y, predictions) # ~20%, very low
Since the accuracy is very low, I want to add supplier and price as features too. How can I incorporate this in the code?
I have tried other classifiers like Logistic Regression, SVM, and Random Forest, but they gave (almost) the same result.

The TF-IDF vectorizer returns a matrix: one row per example with the scores. You can modify this matrix as you wish before feeding it into the classifier.
Prepare your additional features as a NumPy array of shape: number of examples × number of features.
Use np.concatenate with axis=1 (converting the sparse TF-IDF matrix to a dense array first), or scipy.sparse.hstack to keep everything sparse.
Fit the classifier as you did before.
It is usually a good idea to normalize real-valued features. Also, you can try different classifiers: Logistic Regression or SVM might do a better job for real-valued features than Naive Bayes.
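As a minimal sketch of those steps (assuming the df, the train_x/valid_x split, and the xtrain_tfidf/xvalid_tfidf matrices from the question; scipy.sparse.hstack is used instead of np.concatenate so the TF-IDF matrix can stay sparse, and the categorical supplier column is one-hot encoded):
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
# additional features aligned with the same train/validation split
train_extra = df.loc[train_x.index]
valid_extra = df.loc[valid_x.index]
# one-hot encode the categorical supplier column
ohe = OneHotEncoder(handle_unknown='ignore')
train_supplier = ohe.fit_transform(train_extra[['supplier']])
valid_supplier = ohe.transform(valid_extra[['supplier']])
# scale price to [0, 1]; MultinomialNB requires non-negative features
scaler = MinMaxScaler()
train_price = scaler.fit_transform(train_extra[['price']])
valid_price = scaler.transform(valid_extra[['price']])
# stack all feature blocks column-wise, keeping everything sparse
xtrain_combined = hstack([xtrain_tfidf, train_supplier, csr_matrix(train_price)])
xvalid_combined = hstack([xvalid_tfidf, valid_supplier, csr_matrix(valid_price)])
classifier = naive_bayes.MultinomialNB().fit(xtrain_combined, train_y)
predictions = classifier.predict(xvalid_combined)
metrics.accuracy_score(valid_y, predictions)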

Related

Viewing model coefficients for a single prediction

I have a logistic regression model housed in a scikit-learn pipeline using the following:
pipeline = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(
        solver='lbfgs',
        cv=10,
        scoring='roc_auc',
        class_weight='balanced'
    )
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
I can view the model's coefficients for predictions as a whole with this code ...
# Look at model's coefficients to see what features are most important
plt.rcParams['figure.dpi'] = 50
model = pipeline.named_steps['logisticregressioncv']
coefficients = pd.Series(model.coef_[0], X_train.columns)
plt.figure(figsize=(10,12))
coefficients.sort_values().plot.barh(color='grey');
Which returns a bar plot of the features and their coefficients.
What I'm trying to do is see how the different input values of a single observation impact its prediction. The idea is to run predictions on a sample population and examine the group with "low" predictions; for example, if I run predictions for 10 observations, I'd like to see how the input values impacted each of those 10 predictions, individually.
I recalled that I can achieve this via SHAP values, using something along the lines of the following (but using LinearExplainer instead of TreeExplainer):
# Instantiate model and encoder outside of pipeline for
# use with shap
model = RandomForestClassifier( random_state=25)
# Fit on train, score on val
model.fit(X_train_encoded, y_train2)
y_pred_shap = model.predict(X_val_encoded)
# Get an individual observation to explain.
row = X_test_encoded.iloc[[-3]]
# Why did the model predict this?
# Look at a Shapley Values Force Plot
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value[1],
    shap_values=shap_values[1],
    features=row
)
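For the logistic regression pipeline above, a minimal sketch of the LinearExplainer variant mentioned in the question could look like this (an assumption-laden sketch rather than the original code: it reuses the pipeline, X_train, and X_test from the question and applies the same scaling the pipeline does):
import shap
# fitted steps from the pipeline (step names as generated by make_pipeline)
scaler = pipeline.named_steps['standardscaler']
logreg = pipeline.named_steps['logisticregressioncv']
# LinearExplainer works in the model's input space, so apply the same scaling
X_train_scaled = scaler.transform(X_train)
row = X_test.iloc[[-3]]             # single observation to explain
row_scaled = scaler.transform(row)
explainer = shap.LinearExplainer(logreg, X_train_scaled)
shap_values = explainer.shap_values(row_scaled)
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value,
    shap_values=shap_values,
    features=row,                   # show the original (unscaled) values for readability
    feature_names=list(X_train.columns)
)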

Hyperparameter optimization in Gaussian Process in scikit-learn

Are the hyper-parameters in Gaussian Process Regressor optimized during fitting in scikit-learn?
In the page
https://scikit-learn.org/stable/modules/gaussian_process.html
it is said:
"The hyperparameters of the kernel are optimized during fitting of GaussianProcessRegressor by maximizing the log-marginal-likelihood (LML) based on the passed optimizer"
So it is not required, for instance, to optimize them by using grid search?
A hyperparameter is something that you need to specify yourself. Usually the best way to choose it is within a pipeline (a series of steps) in which you try many hyperparameter values and keep the best one. Here is an example of trying different hyperparameter values for a k-NN classifier, where you give a list of values for n_neighbors and see which one works best. Hope it helps you!
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
neighbors = np.arange(1, 9)
train_accuracy = np.empty(len(neighbors))
test_accuracy = np.empty(len(neighbors))
# Loop over different values of k
for i, k in enumerate(neighbors):
    # Set up a k-NN classifier with k neighbors
    knn = KNeighborsClassifier(n_neighbors=k)
    # Fit the classifier to the training data
    knn.fit(X_train, y_train)
    # Compute accuracy on the training set
    train_accuracy[i] = knn.score(X_train, y_train)
    # Compute accuracy on the testing set
    test_accuracy[i] = knn.score(X_test, y_test)
# Generate plot
plt.title('k-NN: Varying Number of Neighbors')
plt.plot(neighbors, test_accuracy, label='Testing Accuracy')
plt.plot(neighbors, train_accuracy, label='Training Accuracy')
plt.legend()
plt.xlabel('Number of Neighbors')
plt.show()
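On the Gaussian Process question itself: a quick way to see the behaviour quoted from the docs is to inspect the kernel before and after fitting, as in this small sketch (a toy dataset and an RBF kernel are assumed here):
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
rng = np.random.RandomState(0)
X = rng.uniform(0, 5, (30, 1))
y = np.sin(X).ravel()
gpr = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpr.fit(X, y)
print(gpr.kernel)   # kernel with the initial hyperparameters
print(gpr.kernel_)  # kernel with hyperparameters optimized (maximizing the LML) during fit
So no grid search is needed for the kernel hyperparameters themselves; parameters that are not part of the kernel (for example alpha) are not tuned this way and would still need something like a grid search.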

What is the accuracy of a clustering algorithm?

I have a set of points that I have clustered using a clustering algorithm (k-means in this case). I also know the ground-truth labels and I want to measure how accurate my clustering is. What I need is to find the actual accuracy. The problem, of course, is that the labels given by the clustering do not match the ordering of the original one.
Is there a way to measure this accuracy? The intuitive idea would be to compute the score of the confusion matrix of every combination of labels, and only keep the maximum. Is there a function that does this?
I have also evaluated my results using rand scores and adjusted rand score. How close are these two measures to actual accuracy?
Thanks!
First of all, what does "The problem, of course, is that the labels given by the clustering do not match the ordering of the original one" mean?
If you know the ground-truth labels, then you can re-arrange them to match the order of the X matrix; that way, the KMeans labels will be in accordance with the true labels after prediction.
In this situation, I suggest the following.
If you have the ground truth labels and you want to see how accurate your model is, then you need metrics such as the Rand index or mutual information between the predicted and true labels. You can do that in a cross-validation scheme and see how the model behaves i.e. if it can predict correctly the classes/labels under a cross-validation scheme. The assessment of prediction goodness can be calculated using metrics like the Rand index.
In summary:
Define a Kmeans model and use cross-validation and in each iteration estimate the Rand index (or mutual information) between the assignments and the true labels. Repeat that for all iterations and finally, take the mean of the Rand index scores. If this score is high, then the model is good.
Full example:
from sklearn.cluster import KMeans
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut
import numpy as np
# some data
data = load_iris()
X = data.data
y = data.target # ground truth labels
loo = LeaveOneOut()
rand_index_scores = []
for train_index, test_index in loo.split(X):  # LOOCV here
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    # the model
    kmeans = KMeans(n_clusters=3, random_state=0)
    kmeans.fit(X_train)  # fit using training data
    predicted_labels = kmeans.predict(X_test)  # predict using test data
    rand_index_scores.append(adjusted_rand_score(y_test, predicted_labels))  # calculate goodness of predicted labels
print(np.mean(rand_index_scores))
Since clustering is an unsupervised learning problem, you have specific metrics for it: https://scikit-learn.org/stable/modules/classes.html#clustering-metrics
You can refer to the discussion in the scikit-learn User Guide to have an idea of the differences between the different metrics for clustering: https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation
For instance, the adjusted Rand index compares pairs of points and checks whether points that share a label in the ground truth also share a label in the predictions. Unlike accuracy, it does not require the cluster labels themselves to match the class labels.
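A tiny illustration of that difference (hypothetical labels, just to show the permutation invariance):
from sklearn.metrics import accuracy_score, adjusted_rand_score
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 2, 2, 0, 0]   # same grouping, different label names
print(accuracy_score(y_true, y_pred))       # 0.0 -- the label names do not match
print(adjusted_rand_score(y_true, y_pred))  # 1.0 -- the grouping is identical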
You can use sklearn.metrics.accuracy_score, as documented in the link below:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html
An example can be seen in the link below:
sklearn: calculating accuracy score of k-means on the test data set
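Regarding the "keep the maximum over all label matchings" idea from the question: there is no single built-in function for it in scikit-learn, but a small sketch with scipy's Hungarian-algorithm solver (linear_sum_assignment) applied to the confusion matrix does exactly that (the helper name clustering_accuracy is made up here):
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix
def clustering_accuracy(y_true, y_pred):
    # accuracy under the best one-to-one matching of cluster labels to true labels
    cm = confusion_matrix(y_true, y_pred)
    row_ind, col_ind = linear_sum_assignment(-cm)  # maximize the matched counts
    return cm[row_ind, col_ind].sum() / cm.sum()
# e.g. clustering_accuracy(y, kmeans.labels_)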

How to use Sklearn linear regression with doc2vec input

I have 250k text documents (tweets and newspaper articles) represented as vectors obtained with a doc2vec model. Now, I want to use a regressor (multiple linear regression) to predict continuous value outputs - in my case the UK Consumer Confidence Index.
My code has been running forever. What am I doing wrong?
I imported my data from Excel and split it into x_train and x_dev. The data consist of preprocessed text and continuous CCI values.
# Import doc2vec model
dbow = Doc2Vec.load('dbow_extended.d2v')
dmm = Doc2Vec.load('dmm_extended.d2v')
concat = ConcatenatedDoc2Vec([dbow, dmm]) # model uses vector_size 400
def get_vectors(model, input_docs):
    vectors = [model.infer_vector(doc.words) for doc in input_docs]
    return vectors
# Prepare X_train and y_train
train_text = x_train["preprocessed_text"].tolist()
train_tagged = [TaggedDocument(words=str(_d).split(), tags=[str(i)]) for i, _d in list(enumerate(train_text))]
X_train = get_vectors(concat, train_tagged)
y_train=x_train['CCI_UK']
# Fit regressor
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)
# Predict and evaluate
prediction=reg.predict(X_dev)
print(classification_report(y_true=y_dev,y_pred=prediction),'\n')
Since the fitting never completes, I wonder whether I am using the wrong input. However, no error message is shown and the code simply runs forever. What am I doing wrong?
Thank you so much for your help!!
The variable X_train is a list (since the function get_vectors() returns a list), whereas the input to sklearn's LinearRegression should be a 2-D array.
Try converting X_train to an array like this:
X_train = np.array(X_train)
This should help!
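Putting that together, a minimal sketch (assuming X_dev has been built with get_vectors() the same way as X_train, and noting that classification_report is meant for classifiers, so regression metrics are used instead):
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
X_train = np.array(X_train)  # shape (n_documents, vector_size) for the doc2vec vectors
X_dev = np.array(X_dev)
reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)
prediction = reg.predict(X_dev)
print(mean_squared_error(y_dev, prediction))
print(r2_score(y_dev, prediction))
If things still appear to hang, it may be the infer_vector() loop over 250k documents that is slow rather than the regression fit itself.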

Scaling of stock data

I am trying to apply machine learning to stock prediction, and I ran into a problem with scaling when the future, unseen stock close values are much higher than anything seen in training.
Let's say I use random forest regression to predict the stock price. I split the data into a train set and a test set.
For the train set, I use StandardScaler and do fit and transform.
Then I use the regressor to fit.
For the test set, I use StandardScaler and do transform only.
Then I use the regressor to predict and compare against the test labels.
If I plot the predictions and the test labels on a graph, the predictions seem to max out, as if hitting a ceiling. The problem is that the scaler is fitted on the train set, while the test set (later in the timeline) has much higher values, and the algorithm does not know what to do with these extreme data.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
def test(X, y):
    # split the data (no shuffling, to keep the time order)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
    # preprocess the data
    pipeline = Pipeline([
        ('std_scaler', StandardScaler()),
    ])
    # model = LinearRegression()
    model = RandomForestRegressor(n_estimators=20, random_state=0)
    # preprocessing: fit and transform on train data
    X_train = pipeline.fit_transform(X_train)
    # fit model on train data with train labels
    model.fit(X_train, y_train)
    # transform the test data with the scaler fitted on train data
    X_test = pipeline.transform(X_test)
    # predict on test data
    y_pred = model.predict(X_test)
    # print(np.sqrt(mean_squared_error(y_test, y_pred)))
    d = {'actual': y_test, 'predict': y_pred}
    plot_data = pd.DataFrame.from_dict(d)
    sns.lineplot(data=plot_data)
    plt.show()
What should be done with the scaling?
This is what I get when plotting the predicted and actual close price over time:
The problem mainly comes from the model you are using. A RandomForest regressor is built from decision trees, which learn to map inputs to the outputs seen in the training set. Consequently, a RandomForest regressor works well for values inside the training range, but it cannot extrapolate to extreme values it has never seen during training, which is exactly what your picture shows.
What you want is to learn a function directly, using linear/polynomial regression or more advanced algorithms such as ARIMA.
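A small sketch on toy data of why the random forest plateaus while a linear model can extrapolate (a simple upward trend is assumed here):
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
# toy upward trend: train on the earlier (lower) part, test on the later (higher) part
X = np.arange(100, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel() + 5.0
X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]
rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X_train, y_train)
lr = LinearRegression().fit(X_train, y_train)
print(rf.predict(X_test)[:5])  # stuck near the maximum target seen in training
print(lr.predict(X_test)[:5])  # follows the trend into the unseen range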
