How to use Sklearn linear regression with doc2vec input - scikit-learn

I have 250k text documents (tweets and newspaper articles) represented as vectors obtained from a doc2vec model. Now I want to use a regressor (multiple linear regression) to predict continuous-valued outputs, in my case the UK Consumer Confidence Index.
My code has been running forever. What am I doing wrong?
I imported my data from Excel and split it into x_train and x_dev. The data consist of preprocessed text and continuous CCI values.
# Imports (as used below)
from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedDocument
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec
from sklearn.metrics import classification_report

# Import doc2vec models
dbow = Doc2Vec.load('dbow_extended.d2v')
dmm = Doc2Vec.load('dmm_extended.d2v')
concat = ConcatenatedDoc2Vec([dbow, dmm])  # model uses vector_size 400

def get_vectors(model, input_docs):
    vectors = [model.infer_vector(doc.words) for doc in input_docs]
    return vectors

# Prepare X_train and y_train
train_text = x_train["preprocessed_text"].tolist()
train_tagged = [TaggedDocument(words=str(_d).split(), tags=[str(i)])
                for i, _d in enumerate(train_text)]
X_train = get_vectors(concat, train_tagged)
y_train = x_train['CCI_UK']

# Fit regressor
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)

# Predict and evaluate
prediction = reg.predict(X_dev)
print(classification_report(y_true=y_dev, y_pred=prediction), '\n')
Since the fitting never completed, I wonder whether I am using the wrong input. However, no error message is shown; the code simply runs forever.
Thank you so much for your help!!

The variable X_train is a Python list (the function get_vectors() returns a list of vectors), whereas the input to sklearn's LinearRegression should be a 2-D array.
Try converting X_train to an array:
X_train = np.array(X_train)
This should help!
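As a minimal sketch of the fix (dev_tagged here is a hypothetical list built from x_dev exactly like train_tagged), and noting that classification_report expects discrete class labels, so regression metrics are the appropriate evaluation for a continuous target like the CCI:
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

# Stack the list of inferred vectors into one 2-D array of shape (n_samples, 400)
X_train = np.array(X_train)
# dev_tagged is hypothetical: built from x_dev exactly like train_tagged
X_dev = np.array(get_vectors(concat, dev_tagged))

reg.fit(X_train, y_train)
prediction = reg.predict(X_dev)

# classification_report expects discrete labels; the CCI target is continuous
print("R^2:", r2_score(y_dev, prediction))
print("MSE:", mean_squared_error(y_dev, prediction))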

Related

Viewing model coefficients for a single prediction

I have a logistic regression model housed in a scikit-learn pipeline using the following:
pipeline = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(
        solver='lbfgs',
        cv=10,
        scoring='roc_auc',
        class_weight='balanced'
    )
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
I can view the model's coefficients for predictions as a whole with this code ...
# Look at model's coefficients to see what features are most important
import matplotlib.pyplot as plt
import pandas as pd

plt.rcParams['figure.dpi'] = 50
model = pipeline.named_steps['logisticregressioncv']
coefficients = pd.Series(model.coef_[0], X_train.columns)
plt.figure(figsize=(10,12))
coefficients.sort_values().plot.barh(color='grey');
Which returns a bar plot of the features and their coefficients.
What I'm trying to do is see how different input values for a single observation impact its prediction. The idea is to run predictions on a sample population and examine the group with "low" predictions; for example, if I run predictions for 10 observations, I'd like to see how different input values impacted each of those 10 predictions individually.
I recall that I can achieve this via SHAP values using something along the following lines (but with LinearExplainer instead of TreeExplainer):
# Instantiate model and encoder outside of pipeline for use with shap
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(random_state=25)

# Fit on train, score on val
model.fit(X_train_encoded, y_train2)
y_pred_shap = model.predict(X_val_encoded)

# Get an individual observation to explain
row = X_test_encoded.iloc[[-3]]

# Why did the model predict this?
# Look at a Shapley values force plot
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value[1],
    shap_values=shap_values[1],
    features=row
)
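For reference, a sketch of what the LinearExplainer version might look like, assuming a shap release where LinearExplainer(model, data) and explainer.shap_values(...) are available; since shap explains the bare estimator rather than the pipeline, the scaler is applied manually first:
import shap

# Pull the fitted steps out of the pipeline (make_pipeline's default step names)
scaler = pipeline.named_steps['standardscaler']
logreg = pipeline.named_steps['logisticregressioncv']

# Scale the data the same way the pipeline would before explaining
X_train_scaled = scaler.transform(X_train)
row_scaled = scaler.transform(X_test.iloc[[-3]])

explainer = shap.LinearExplainer(logreg, X_train_scaled)
shap_values = explainer.shap_values(row_scaled)

shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value,
    shap_values=shap_values,
    features=row_scaled
)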

How to add more features in multi text classification?

I have a retail dataset with product_description, price, supplier, category as columns.
I used product_description as the feature:
from sklearn import model_selection, preprocessing, naive_bayes, metrics
from sklearn.feature_extraction.text import TfidfVectorizer
# split the dataset into training and validation datasets
train_x, valid_x, train_y, valid_y = model_selection.train_test_split(df['product_description'], df['category'])
# label encode the target variable
encoder = preprocessing.LabelEncoder()
train_y = encoder.fit_transform(train_y)
valid_y = encoder.fit_transform(valid_y)
tfidf_vect = TfidfVectorizer(analyzer='word', token_pattern=r'\w{1,}', max_features=5000)
tfidf_vect.fit(df['product_description'])
xtrain_tfidf = tfidf_vect.transform(train_x)
xvalid_tfidf = tfidf_vect.transform(valid_x)
classifier = naive_bayes.MultinomialNB().fit(xtrain_tfidf, train_y)
# predict the labels on validation dataset
predictions = classifier.predict(xvalid_tfidf)
metrics.accuracy_score(predictions, valid_y) # ~20%, very low
Since the accuracy is very low, I want to add the supplier and price as features too. How can I incorporate this in the code?
I have tried other classifiers like LR, SVM, and Random Forest, but they had (almost) the same outcome.
The TF-IDF vectorizer returns a matrix with one row per example containing the scores. You can modify this matrix as you wish before feeding it into the classifier:
1. Prepare your additional features as a NumPy array of shape (number of examples, number of features).
2. Concatenate it with the TF-IDF matrix using np.concatenate with axis=1 (see the sketch below).
3. Fit the classifier as you did before.
It is usually a good idea to normalize real-valued features. Also, you can try different classifiers: Logistic Regression or SVM might do a better job on real-valued features than Naive Bayes.
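A minimal sketch of that recipe, assuming price is numeric and supplier is categorical. The TF-IDF matrix is densified with .toarray() so that np.concatenate applies (for large corpora, scipy.sparse.hstack avoids the dense copy), and OneHotEncoder(sparse=False) assumes an sklearn version that still accepts that flag:
import numpy as np
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression

# Pull the matching rows of the extra columns via the split's index
extra_train = df.loc[train_x.index, ['price', 'supplier']]
extra_valid = df.loc[valid_x.index, ['price', 'supplier']]

# One-hot encode the categorical supplier; scale the real-valued price
enc = OneHotEncoder(sparse=False, handle_unknown='ignore')
sup_train = enc.fit_transform(extra_train[['supplier']])
sup_valid = enc.transform(extra_valid[['supplier']])
scaler = StandardScaler()
price_train = scaler.fit_transform(extra_train[['price']])
price_valid = scaler.transform(extra_valid[['price']])

# Densify the TF-IDF matrices and concatenate everything column-wise
X_train_full = np.concatenate([xtrain_tfidf.toarray(), sup_train, price_train], axis=1)
X_valid_full = np.concatenate([xvalid_tfidf.toarray(), sup_valid, price_valid], axis=1)

# Scaled features can be negative, which MultinomialNB rejects,
# so a linear model is used here, as suggested above
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_full, train_y)
print(metrics.accuracy_score(clf.predict(X_valid_full), valid_y))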

Error when classifying new Linear SVM dataframe

I created a multi-class classification model with Linear SVM, but I am not able to classify a newly loaded dataframe (the base that must be classified); I get the following error.
What should I do to convert my new text (df.reason_text) to TF-IDF and classify it (call model.predict()) with my model?
Training Model
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

tfidf = TfidfVectorizer(ngram_range=(1,2), stop_words=stopwords)
features = tfidf.fit_transform(training.Description).toarray()
labels = training.category_id
model = LinearSVC()
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(
    features, labels, training.index, test_size=0.33, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
Now I am not able to convert my new dataframe for classification.
Load new DataFrame for classification:
from pyathena import connect
import pandas as pd
conn = connect(s3_staging_dir='s3://athenaxxxxxxxx/result/',
               region_name='us-east-2')
df = pd.read_sql("select * from data.classification_text_reason", conn)
features2 = tfidf.fit_transform(df.reason_text).toarray()
features2.shape
After converting the new dataframe's text with TF-IDF and trying to classify it, I get the following message:
y_pred1 = model.predict(features2)
error
ValueError: X has 1272 features per sample; expecting 5319
When you load the new DF for classification, you are calling fit_transform() again, but you should be calling only transform().
fit_transform() description: learn vocabulary and idf, return the term-document matrix.
transform() description: transform documents to a document-term matrix.
You need to reuse the transformer fitted when training the algorithm, so the code would be:
tfidf.transform(df.reason_text).toarray()
If you still get the feature-shape error, there may be a problem with the shapes of the arrays. Solve the transform part first; if the error still occurs, post an example of the train and test data in array format and I will keep helping.
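Put together, the prediction path becomes a two-liner (a sketch assuming the tfidf vectorizer and model fitted during training are still in scope):
# Transform only: reuse the vocabulary fitted on the training data
features2 = tfidf.transform(df.reason_text).toarray()
y_pred1 = model.predict(features2)  # feature count now matches the expected 5319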

W2VTransformer: Only works with one word as input?

The following reproducible script computes the accuracy of a Word2Vec classifier built with gensim's W2VTransformer wrapper:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from gensim.sklearn_api import W2VTransformer
from gensim.utils import simple_preprocess
# Load synthetic data
data = pd.read_csv('https://pastebin.com/raw/EPCmabvN')
data = data.head(10)
# Set random seed
np.random.seed(0)
# Tokenize text
X_train = data.apply(lambda r: simple_preprocess(r['text'], min_len=2), axis=1)
# Get labels
y_train = data.label
train_input = [x[0] for x in X_train]
# Train W2V Model
model = W2VTransformer(size=10, min_count=1)
model.fit(X_train)
clf = LogisticRegression(penalty='l2', C=0.1)
clf.fit(model.transform(train_input), y_train)
text_w2v = Pipeline(
    [('features', model),
     ('classifier', clf)])
score = text_w2v.score(train_input, y_train)
score
0.80000000000000004
The problem with this script is that it only works with train_input = [x[0] for x in X_train], which is essentially always the first word only.
Once changed to train_input = X_train (or train_input simply substituted by X_train), the script returns:
ValueError: cannot reshape array of size 10 into shape (10,10)
How can I solve this issue, i.e. how can the classifier work with more than one word of input?
Edit:
Apparently, the W2V wrapper cannot work with variable-length training input, unlike D2V. Here is a working D2V version:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, classification_report
from sklearn.pipeline import Pipeline
from gensim.utils import simple_preprocess, lemmatize
from gensim.sklearn_api import D2VTransformer
data = pd.read_csv('https://pastebin.com/raw/bSGWiBfs')
np.random.seed(0)
X_train = data.apply(lambda r: simple_preprocess(r['text'], min_len=2), axis=1)
y_train = data.label
model = D2VTransformer(dm=1, size=50, min_count=2, iter=10, seed=0)
model.fit(X_train)
clf = LogisticRegression(penalty='l2', C=0.1, random_state=0)
clf.fit(model.transform(X_train), y_train)
pipeline = Pipeline([
    ('vec', model),
    ('clf', clf)
])
y_pred = pipeline.predict(X_train)
score = accuracy_score(y_train,y_pred)
print(score)
This is technically not an answer, but cannot be written in comments so here it is. There are multiple issues here:
The LogisticRegression class (and most other scikit-learn models) works with 2-d data of shape (n_samples, n_features).
That means it needs a collection of 1-d arrays (one for each row (sample), in which the elements of the array are the feature values).
In your data, a single word will be a 1-d array, which means that a single sentence (sample) will be a 2-d array, and the complete data (a collection of sentences) will be a collection of 2-d arrays. Moreover, since each sentence can have a different number of words, this collection cannot be combined into a single 3-d array.
Secondly, the W2VTransformer in gensim looks like a scikit-learn compatible class, but it is not. It tries to follow the "scikit-learn API conventions" for defining the methods fit(), fit_transform() and transform(), but they are not compatible with the scikit-learn Pipeline.
You can see that the input param requirements of fit() and fit_transform() are different.
fit(): X (iterable of iterables of str) – The input corpus. X can be simply a list of lists of tokens, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See BrownCorpus, Text8Corpus or LineSentence in the word2vec module for such examples.
fit_transform(): X (numpy array of shape [n_samples, n_features]) – Training set.
If you want to use scikit-learn, then you will need the 2-d shape. You will need to "somehow merge" the word-vectors of a single sentence to form a 1-d array for that sentence. That means you need to form a kind of sentence-vector, for example by one of the following (a short averaging sketch follows this list):
sum of individual words
average of individual words
weighted averaging of individual words based on frequency, tf-idf etc.
using other techniques like sent2vec, paragraph2vec, doc2vec etc.
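A minimal sketch of the averaging option, assuming the W2VTransformer fitted above (gensim's sklearn wrappers expose the trained model as gensim_model; out-of-vocabulary words are skipped):
import numpy as np

def average_word_vectors(sentences, transformer, size):
    # Average each sentence's word vectors into one fixed-length row;
    # an all-zero row stands in for sentences with no known words
    wv = transformer.gensim_model.wv
    rows = []
    for sent in sentences:
        vecs = [wv[w] for w in sent if w in wv]
        rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(size))
    return np.vstack(rows)  # shape: (n_samples, size)

X_2d = average_word_vectors(X_train, model, size=10)  # size matches W2VTransformer(size=10)
clf.fit(X_2d, y_train)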
Note: I see now that you were already doing this based on D2VTransformer. That should be the correct approach here if you want to use sklearn.
The issue in that question was this line (since that question is now deleted):
X_train = vectorizer.fit_transform(X_train)
Here, you overwrite your original X_train (a list of lists of words) with already-calculated word vectors, hence the error.
Alternatively, you can use other tools/libraries (Keras, TensorFlow) which allow sequential input of variable size. For example, LSTMs can be configured to take variable-length input together with an end token that marks the end of a sentence (a sample).
Update:
In the solution given above, you can replace the lines:
model = D2VTransformer(dm=1, size=50, min_count=2, iter=10, seed=0)
model.fit(X_train)
clf = LogisticRegression(penalty='l2', C=0.1, random_state=0)
clf.fit(model.transform(X_train), y_train)
pipeline = Pipeline([
    ('vec', model),
    ('clf', clf)
])
y_pred = pipeline.predict(X_train)
with
pipeline = Pipeline([
    ('vec', model),
    ('clf', clf)
])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_train)
No need to fit and transform separately, since pipeline.fit() will automatically do that.

Cross_val_predict return each test result independently

From my understanding, cross_val_predict with the cv parameter set to 10 will create 10 independent data sets, use one as the test set and the other 9 as training sets, and repeat until all 10 sets have been used as the test data. The return is one combined data set containing all the predictions, with confidence levels.
What I want is something like an array containing ten arrays of predictions, so I can calculate a mean score. I know cross_val_score returns this, but I'm trying to compare the ROC AUC metric, so I need to use the 'predict_proba' method. Below is the code that creates the current output I am receiving:
from sklearn.preprocessing import label_binarize
y_bin = label_binarize(y, classes=['label1','label2'])
# Import module to calculate the roc_auc score
from sklearn.metrics import roc_auc_score
# Import the cross-validation prediction module
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
# Using the loops to change parameters again
NNmodel = MLPClassifier(hidden_layer_sizes=(10,), activation='logistic')
proba = cross_val_predict(NNmodel, X, y, cv=10, method='predict_proba')
auc = roc_auc_score(y_bin, proba[:,1])
print("Hidden Neurons set to 10", "ROC AUC Score = %", auc*100)
I would much rather report the mean of the per-fold scores than the single result I can produce at the moment.
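A minimal sketch of that, assuming the same NNmodel, X and y: cross_val_score with scoring='roc_auc' evaluates each fold's ROC AUC and returns one score per fold, whose mean can then be reported:
from sklearn.model_selection import cross_val_score

# One ROC AUC score per fold; scoring='roc_auc' scores with
# probability/decision estimates rather than hard labels
fold_aucs = cross_val_score(NNmodel, X, y, cv=10, scoring='roc_auc')
print("Per-fold ROC AUC:", fold_aucs)
print("Mean ROC AUC = %", fold_aucs.mean() * 100)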
