I'm using Python 3.
I am computing TF-IDF similarities and keeping the pairs whose similarity is above 0.8.
But the for loop is too slow, because the similarity matrix has shape 51,336 x 51,336; it currently takes about 50 minutes.
How can I build the DataFrame faster without using a for loop?
I want to make a DataFrame like this:
[column_0],[column_1],[similarity]
index[0], column[0], value
index[0], column[1], value
index[0], column[2], value
....
index[100], column[51334], value
index[100], column[51335], value
index[100], column[51336], value
...
index[51336], column[51335], value
index[51336], column[51336], value
import pandas as pd
from sklearn.metrics.pairwise import linear_kernel

tfidf_matrix = tf.fit_transform(df['text'])
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
indices = pd.Series(df.index, index=df['index_name'])

similarity = pd.DataFrame(columns=['column_0', 'column_1', 'similarity'])
for n in range(len(cosine_sim)):
    for i in enumerate(cosine_sim[n]):
        if i[1] > 0.8 and i[1] < 0.99:
            similarity = similarity.append({'column_0': indices.index[n],
                                            'column_1': indices.index[i[0]],
                                            'similarity': i[1]},
                                           ignore_index=True)
If you are thinking of parallelizing the job: unfortunately, there is no way to parallelize/distribute access to the vocabulary that these vectorizers need.
Hence the alternative hack is to use the HashingVectorizer, which has no vocabulary to fit or share.
For this, the scikit-learn docs provide an example of using this vectorizer to train a classifier in batches:
https://scikit-learn.org/stable/auto_examples/applications/plot_out_of_core_classification.html
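For illustration, a minimal sketch of that idea, assuming the df['text'] column from the question (note that HashingVectorizer is stateless, so it does not apply IDF weighting unless you chain a TfidfTransformer after it):

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.metrics.pairwise import linear_kernel

# No vocabulary to fit, so batches of documents can be transformed
# independently (e.g. in parallel workers or out-of-core chunks).
hv = HashingVectorizer(n_features=2**18, alternate_sign=False, norm='l2')
hashed_matrix = hv.transform(df['text'])
cosine_sim = linear_kernel(hashed_matrix, hashed_matrix)

Separately from the vectorizer choice, the nested loops that build the DataFrame can be replaced by a boolean mask over the similarity matrix; a rough sketch, assuming cosine_sim and indices as defined in the question:

import numpy as np
import pandas as pd

# Keep only the pairs whose similarity falls in the (0.8, 0.99) band.
mask = (cosine_sim > 0.8) & (cosine_sim < 0.99)
rows, cols = np.where(mask)

similarity = pd.DataFrame({
    'column_0': indices.index[rows],
    'column_1': indices.index[cols],
    'similarity': cosine_sim[rows, cols],
})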
Hope this helps.
In scikit-learn, TfidfVectorizer allows us to fit on training data and later use the same vectorizer to transform our test data.
The output of the transformation over the train data is a matrix that represents a tf-idf score for each word for a given document.
However, how does the fitted vectorizer compute the score for new inputs? I have guessed that either:
The score of a word in a new document is computed by some aggregation of the scores of the same word over documents in the training set.
The new document is 'added' to the existing corpus and new scores are calculated.
I have tried deducing the operation from scikit-learn's source code but could not quite figure it out. Is it one of the options I've previously mentioned or something else entirely?
Please assist.
It is definitely the former: each word's idf (inverse document frequency) is calculated based on the training documents only. This makes sense because these values are precisely the ones that are calculated when you call fit on your vectorizer. If the second option you describe were true, we would essentially refit a vectorizer each time, and we would also cause information leakage, as idfs from the test set would be used during model evaluation.
Beyond these purely conceptual explanations, you can also run the following code to convince yourself:
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer()
x_train = ["We love apples", "We really love bananas"]
vect.fit(x_train)
print(vect.get_feature_names())
>>> ['apples', 'bananas', 'love', 'really', 'we']
x_test = ["We really love pears"]
vectorized = vect.transform(x_test)
print(vectorized.toarray())
>>> array([[0. , 0. , 0.50154891, 0.70490949, 0.50154891]])
Following the reasoning of how the fit methodology works, you can recalculate these tfidf values yourself:
"apples" and "bananas" obviously have a tfidf score of 0 because they do not appear in x_test. "pears", on the other hand, does not exist in x_train and so will not even appear in the vectorization. Hence, only "love", "really" and "we" will have a tfidf score.
Scikit-learn implements tfidf as (log((1+n)/(1+df)) + 1) * f, where n is the number of documents in the training set (2 for us), df the number of training documents in which the word appears, and f the frequency count of the word in the test document. Hence:
import numpy as np

tfidf_love = (np.log((1+2)/(1+2))+1)*1
tfidf_really = (np.log((1+2)/(1+1))+1)*1
tfidf_we = (np.log((1+2)/(1+2))+1)*1
You then need to scale these tfidf scores by the L2 norm of your document:
tfidf_non_scaled = np.array([tfidf_love,tfidf_really,tfidf_we])
tfidf_list = tfidf_non_scaled/sum(tfidf_non_scaled**2)**0.5
print(tfidf_list)
>>> [0.50154891 0.70490949 0.50154891]
You can see that indeed, we are getting the same values, which confirms the way scikit-learn implemented this methodology.
I have a Naive Bayes classifier that I wrote in Python using a Pandas DataFrame, and now I need it in PySpark. My problem here is that I need the feature importance of each column. When looking through the PySpark ML documentation I couldn't find any info on it.
Does anyone know if I can get the feature importance with the Naive Bayes in Spark MLlib?
The code using Python is the following; the feature importance is retrieved with .coef_:
from sklearn.naive_bayes import BernoulliNB

df = df.fillna(0).toPandas()
X_df = df.drop(['NOT_OPEN', 'unique_id'], axis=1)
X = X_df.values
Y = df['NOT_OPEN'].values.reshape(-1, 1)

mnb = BernoulliNB(fit_prior=True)
y_pred = mnb.fit(X, Y).predict(X)
estimator = mnb.fit(X, Y)

# coef_: for a binary classification problem this is the log of the estimated
# probability of a feature given the positive class, so higher values mean
# more important features for the positive class.
feature_names = X_df.columns
coefs_with_fns = sorted(zip(estimator.coef_[0], feature_names))
If you're interested in an equivalent of coef_, the property you're looking for is NaiveBayesModel.theta:
log of class conditional probabilities.
New in version 2.0.0.
i.e.
model = ... # type: NaiveBayesModel
model.theta.toArray() # type: numpy.ndarray
The resulting array is of size (number-of-classes, number-of-features), and rows correspond to consecutive labels.
It is probably better to evaluate the difference
log(P(feature_X|positive)) - log(P(feature_X|negative))
as a feature importance, because we are interested in the discriminative power of each feature_X (granted, NB is a generative model).
An extreme example: some feature_X1 has the same value across all + and - samples, so it has no discriminative power.
The probability of this feature value is high for both + and - samples, but the difference of the log probabilities is 0.
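A minimal sketch of that idea for a binary model, assuming model is a fitted NaiveBayesModel and feature_names holds the column names used to assemble the feature vector:

theta = model.theta.toArray()        # shape: (number-of-classes, number-of-features)

# Assuming label 1 is the positive class and label 0 the negative one:
log_prob_diff = theta[1] - theta[0]  # log(P(feature_X|positive)) - log(P(feature_X|negative))

# Pair each difference with its feature name, most discriminative first
importance = sorted(zip(log_prob_diff, feature_names), reverse=True)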
What I have understood from it is that if max_features = n, it selects the top n features on the basis of their Tf-Idf values. I went through the documentation of TfidfVectorizer on scikit-learn but didn't understand it properly.
If you want row-wise words which have the highest tfidf values, then you need to access the transformed tf-idf matrix from Vectorizer, access it row by row (doc by doc) and then sort the values to get those.
Something like this:
import numpy as np

# TfidfVectorizer will by default output a sparse matrix
tfidf_data = tfidf_vectorizer.fit_transform(text_data).tocsr()
vocab = np.array(tfidf_vectorizer.get_feature_names())

# Replace this with the number of top words you want to get in each row
top_n_words = 5

# Loop over all the docs present
for i in range(tfidf_data.shape[0]):
    doc = tfidf_data.getrow(i).toarray().ravel()
    sorted_index = np.argsort(doc)[::-1][:top_n_words]
    print(sorted_index)
    for word, tfidf in zip(vocab[sorted_index], doc[sorted_index]):
        print("%s - %f" % (word, tfidf))
If you can use pandas, then the logic becomes simpler:
for i in range(tfidf_data.shape[0]):
    doc_data = pd.DataFrame({'Tfidf': tfidf_data.getrow(i).toarray().ravel(),
                             'Word': vocab})
    doc_data.sort_values(by='Tfidf', ascending=False, inplace=True)
    print(doc_data.iloc[:top_n_words])
I am planning to use an SGDClassifier in production. The idea is to train the classifier on some training data, use cPickle to dump it to a .pkl file and reuse it later in a script. However, there are certain high-cardinality fields which are categorical in nature and are translated to a one-hot matrix representation, which creates around 5000 features. Now the input that I get for predict will only have one of these features set and the rest will all be zeroes. It will also, of course, include the other numerical features apart from this. From the docs, it appears that the predict function expects an array of arrays as input. Is there any way I can transform my input to the format expected by the predict function without having to store the fields every time I train the model?
Update
So, let us say my input contains 3 fields:
{
rate: 10, // numeric
flagged: 0, //binary
host: 'somehost.com' // keeping this categorical
}
host can have around 5000 different values. Now I loaded the file to a pandas dataframe, used the get_dummies function to transform the host field to around 5000 new fields which are binary fields.
Then I trained my model and stored it using cPickle.
Now, when I need to use the predict function, for the input, I only have 3 fields (shown above). However, as per my understanding the predict endpoint will expect an array of vectors and each vector is supposed to have those 5000 fields.
For the entry that I need to predict, I know only one field for that entry which will be the value of host itself.
For example, if my input is
{
rate: 5,
flagged: 1,
host: 'new_host.com'
}
I know that the fields expected by the predict should be:
{
rate: 5,
flagged: 1
new_host: 1
}
But if I translate it to vector format, I don't know at which index to place the new_host field. Also, I don't know in advance what the other hosts are (unless I store them somewhere during the training phase).
I hope I am making some sense. Let me know if I am doing it the wrong way.
I don't know which index to place the new_host field
A good approach that has worked for me is to build a pipeline which you then use for training and prediction. This way you do not have to concern yourself with the column index of whatever output is produced by your transformation:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
from sklearn.linear_model import SGDClassifier
import pickle

# in training
pipl = Pipeline(steps=[('binarizer', LabelBinarizer()),
                       ('clf', SGDClassifier())])
model = pipl.fit(X, Y)
pickle.dump(model, mf)

# in production
model = pickle.load(mf)
y = model.predict(X)
As X, Y inputs you need to pass an array-like object. Make sure the input is the same structure for both training and test, e.g.
X = [[data.get('rate'), data.get('flagged'), data.get('host')]]
Y = [[y-cols]] # your example doesn't specify what is Y in your data
More flexible: Pandas DataFrame + Pipeline
What also works nicely is to use a Pandas DataFrame in combination with sklearn-pandas as it allows you to use different transformations on different column names. E.g.
import pandas as pd
import sklearn.preprocessing
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
from sklearn_pandas import DataFrameMapper

df = pd.DataFrame.from_dict(data)
mapper = DataFrameMapper([
    ('host', sklearn.preprocessing.LabelBinarizer()),
    ('rate', sklearn.preprocessing.StandardScaler())
])
pipl = Pipeline(steps=[('mapper', mapper),
                       ('clf', SGDClassifier())])
X = df[x-cols]
y = df[y-col(s)]
pipl.fit(X, y)
Note that x-cols and y-col(s) are the list of the feature and target columns respectively.
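As a rough sketch of how such a pipeline might then be reused at prediction time (the file name and the incoming record below are placeholders following the question's example):

import pickle
import pandas as pd

# after training
with open('model.pkl', 'wb') as mf:
    pickle.dump(pipl, mf)

# in production: the mapper re-applies the binarization/scaling learned at fit time
with open('model.pkl', 'rb') as mf:
    model = pickle.load(mf)

new_df = pd.DataFrame([{'rate': 5, 'flagged': 1, 'host': 'somehost.com'}])
prediction = model.predict(new_df)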
You should use a scikit-learn transformer instead of get_dummies. In this case, LabelBinarizer makes sense. Seeing as LabelBinarizer doesn't work in a pipeline, this is one way to do what you want:
import pickle
import numpy as np
from sklearn.preprocessing import LabelBinarizer
from sklearn.linear_model import SGDClassifier

binarizer = LabelBinarizer()
# fitting the LabelBinarizer means it remembers all the categories it has seen
one_hot_data = binarizer.fit_transform(X_train[:, categorical_col])
# replace the string column with its one-hot representation
X_train = np.concatenate([np.delete(X_train, categorical_col, axis=1),
                          one_hot_data], axis=1)
clf = SGDClassifier()
clf.fit(X_train, y)
pickle.dump({'clf': clf, 'binarizer': binarizer}, f)
then at prediction time:
estimators = pickle.load(f)
clf = estimators['clf']
binarizer = estimators['binarizer']
one_hot_data = binarizer.transform(X_test[:, categorical_col])
X_test = np.concatenate([np.delete(X_test, categorical_col, axis=1),
                         one_hot_data], axis=1)
clf.predict(X_test)
I'm trying to perform feature selection by evaluating my regression's coefficient outputs and selecting the features with the highest-magnitude coefficients. The problem is, I don't know how to get the respective features, as only coefficients are returned from the coef_ attribute. The documentation says:
Estimated coefficients for the linear regression problem. If multiple
targets are passed during the fit (y 2D), this is a 2D array of
shape (n_targets, n_features), while if only one target is passed,
this is a 1D array of length n_features.
I am passing into my regression.fit(A,B), where A is a 2-D array, with tfidf value for each feature in a document. Example format:
"feature1" "feature2"
"Doc1" .44 .22
"Doc2" .11 .6
"Doc3" .22 .2
B are my target values for the data, which are just numbers 1-100 associated with each document:
"Doc1" 50
"Doc2" 11
"Doc3" 99
Using regression.coef_, I get a list of coefficients, but not their corresponding features! How can I get the features? I'm guessing I need to modify the structure of my B targets, but I don't know how.
What I found to work was:
X = your independent variables
coefficients = pd.concat([pd.DataFrame(X.columns),pd.DataFrame(np.transpose(logistic.coef_))], axis = 1)
The assumption you stated, that the order of regression.coef_ is the same as in the TRAIN set, holds true in my experience (it works with the underlying data and also checks out with correlations between X and y).
You can do that by creating a data frame:
cdf = pd.DataFrame(regression.coef_, X.columns, columns=['Coefficients'])
print(cdf)
coefficients = pd.DataFrame({"Feature": X.columns, "Coefficients": np.transpose(logistic.coef_).ravel()})
I suppose you are working on some feature selection task. Using regression.coef_ does give the coefficients in the same order as the features, i.e. regression.coef_[0] corresponds to "feature1" and regression.coef_[1] corresponds to "feature2". This should be what you desire.
For my part, I recommend the tree models from sklearn, which can also be used for feature selection. To be specific, check out here.
Coefficients and features in zip
print(list(zip(X_train.columns.tolist(),logreg.coef_[0])))
Coefficients and features in DataFrame
pd.DataFrame({"Feature":X_train.columns.tolist(),"Coefficients":logreg.coef_[0]})
This is the easiest and most intuitive way:
pd.DataFrame(logisticRegr.coef_, columns=x_train.columns)
or the same but transposing index and columns
pd.DataFrame(logisticRegr.coef_, columns=x_train.columns).T
Suppose your training data X variable is 'df_X'; then you can map it into a dictionary and feed it into a pandas DataFrame to get the mapping:
pd.DataFrame(dict(zip(df_X.columns,model.coef_[0])),index=[0]).T
Try putting them in a Series with the data column names as the index:
coeffs = pd.Series(model.coef_[0], index=X.columns.values)
coeffs.sort_values(ascending = False)