PySpark feature selection and interpretability - apache-spark

Is there a way in PySpark to perform feature selection, but preserve or obtain a mapping back to the original feature indices/descriptions?
For example:
1. I have a StringArray column of raw feature strings (col = "rawFeatures").
2. I've converted them to numerical counts using CountVectorizer (col = "features").
3. Then I've run the ChiSqSelector to select the top 1000 features (col = "selectedFeatures").

How do I get the raw feature strings that correspond to those top 1000 features (or even just the corresponding indices of these selected features in the original "features" col from step #2)?

This information can be traced back using the fitted Transformers. With a Pipeline like this one:
from pyspark.ml.feature import *
from pyspark.ml import Pipeline
import numpy as np
data = spark.createDataFrame(
[(1, ["spark", "foo", "bar"]), (0, ["kafka", "bar", "foo"])],
("label", "rawFeatures"))
model = Pipeline(stages = [
CountVectorizer(inputCol="rawFeatures", outputCol="features"),
ChiSqSelector(outputCol="selectedFeatures", numTopFeatures=2)
]).fit(data)
you can extract Transformers:
vectorizer, chisq = model.stages
and compare selectedFeatures with vocabulary:
np.array(vectorizer.vocabulary)[chisq.selectedFeatures]
array(['spark', 'kafka'], dtype='<U5')
Unfortunately this combination of Transformers doesn't preserve labels metadata:
features_meta, selected_features_meta = (f.metadata for f in model
.transform(data).select("features", "selectedFeatures")
.schema
.fields)
features_meta
{}
selected_features_meta
{'ml_attr': {'attrs': {'nominal': [{'idx': 0}, {'idx': 1}]}, 'num_attrs': 2}}
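Since the metadata comes out empty here, the vocabulary lookup shown above is the practical way back to the raw feature strings. A small helper, just a sketch reusing the fitted stages from the pipeline above:
def selected_feature_names(pipeline_model):
    # assumes stage 0 is a CountVectorizerModel and stage 1 a ChiSqSelectorModel
    vectorizer, chisq = pipeline_model.stages
    return [vectorizer.vocabulary[i] for i in chisq.selectedFeatures]

selected_feature_names(model)
# ['spark', 'kafka']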

Related

how to use ColumnTransformer() to return a dataframe?

I have a dataframe like this:
department review projects salary satisfaction bonus avg_hrs_month left
0 operations 0.577569 3 low 0.626759 0 180.866070 0
1 operations 0.751900 3 medium 0.443679 0 182.708149 0
2 support 0.722548 3 medium 0.446823 0 184.416084 0
3 logistics 0.675158 4 high 0.440139 0 188.707545 0
4 sales 0.676203 3 high 0.577607 1 179.821083 0
I want to try ColumnTransformer() and return a transformed dataframe.
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer

ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(
transformers=[
("ord", ordinal_transformer, ord_features),
("cat", categorical_transformer, cat_features ),
]
)
df_new = ct.fit_transform(df)
df_new
which gives me a sparse matrix of type '<class 'numpy.float64'>'.
if I use pd.DataFrame(ct.fit_transform(df)) then I'm getting a single column:
0
0 (0, 0)\t1.0\n (0, 7)\t1.0
1 (0, 0)\t2.0\n (0, 7)\t1.0
2 (0, 0)\t2.0\n (0, 10)\t1.0
3 (0, 5)\t1.0
4 (0, 9)\t1.0
however, I was expecting to see the transformed dataframe like this:
review projects salary satisfaction bonus avg_hrs_month operations support ...
0 0.577569 3 1 0.626759 0 180.866070 1 0
1 0.751900 3 2 0.443679 0 182.708149 1 0
2 0.722548 3 2 0.446823 0 184.416084 0 1
3 0.675158 4 3 0.440139 0 188.707545 0 0
4 0.676203 3 3 0.577607 1 179.821083 0 0
Is it possible with ColumnTransformer()?
As quickly sketched in the comments, there are a couple of considerations to make about your example:
The .fit_transform() method generally returns either a sparse matrix or a numpy array. Returning a sparse matrix serves the purpose of saving memory; consider the example where you one-hot-encode a categorical attribute with many categories. You'll end up with a matrix with many columns and a single non-zero entry per row; with a sparse matrix you can store the locations of the non-zero elements only. In these situations you can call .toarray() on the output of .fit_transform() to get a numpy array back to be passed to the pd.DataFrame constructor.
Actually, on a five-row dataset similar to the one you provided
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer

df = pd.DataFrame({
'department': ['operations', 'operations', 'support', 'logistics', 'sales'],
'review': [0.577569, 0.751900, 0.722548, 0.675158, 0.676203],
'projects': [3, 3, 3, 4, 3],
'salary': ['low', 'medium', 'medium', 'low', 'high'],
'satisfaction': [0.626759, 0.751900, 0.722548, 0.675158, 0.676203],
'bonus': [0, 0, 0, 0, 1],
'avg_hrs_month': [180.866070, 182.708149, 184.416084, 188.707545, 179.821083],
'left': [0, 0, 1, 0, 0]
})
ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(transformers=[
("ord", ordinal_transformer, ord_features),
("cat", categorical_transformer, cat_features),
])
I can't reproduce your issue (namely, I directly obtain a numpy array), but basically pd.DataFrame(ct.fit_transform(df).toarray()) should work for your case. This is the output you would get:
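(The original answer showed this result as an image. Assuming the default lexicographic category ordering of OrdinalEncoder and OneHotEncoder, the resulting frame would look roughly like this:)
     0    1    2    3    4
0  1.0  0.0  1.0  0.0  0.0
1  2.0  0.0  1.0  0.0  0.0
2  2.0  0.0  0.0  0.0  1.0
3  1.0  1.0  0.0  0.0  0.0
4  0.0  0.0  0.0  1.0  0.0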
As you can see, with respect to your expected output, this only contains the transformed (ordinally encoded) salary column as the first column and the transformed (one-hot-encoded) department columns from the second to the last column. That's because, as you can see in the docs, the parameter remainder is set to 'drop' by default, which implies that all columns which are not subject to transformation are dropped. To avoid this, you should set it to 'passthrough'; this will let you transform the columns you need and keep the others untouched.
ct = ColumnTransformer(transformers=[
("ord", ordinal_transformer, ord_features),
("cat", categorical_transformer, cat_features )],
remainder='passthrough'
)
This would be the output of your pd.DataFrame(ct.fit_transform(df).toarray()) in such a case:
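(This output too was shown as an image in the original answer; under the same assumptions it would have 11 unnamed columns: the encoded salary, the four department dummies, and then review, projects, satisfaction, bonus, avg_hrs_month and left, in that order.)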
Again, as you can see, the column order is also not the one you would expect after the transformation. Long story short, that's because in a ColumnTransformer:
The order of the columns in the transformed feature matrix follows the order of how the columns are specified in the transformers list. Columns of the original feature matrix that are not specified are dropped from the resulting transformed feature matrix, unless specified in the passthrough keyword. Those columns specified with passthrough are added at the right to the output of the transformers.
I would suggest reading Preserve column order after applying sklearn.compose.ColumnTransformer on this topic.
Finally, as far as column names are concerned, you should probably apply a custom solution, passing what you want directly to the columns parameter of the pd.DataFrame constructor. Indeed, OrdinalEncoder (unlike OneHotEncoder) does not provide a .get_feature_names_out() method, which otherwise would make it easy to pass columns=ct.get_feature_names_out() to the pd.DataFrame constructor. See ColumnTransformer & Pipeline with OHE - Is the OHE encoded field retained or removed after ct is performed? for an example of its usage.
Update 10/2022 - sklearn version 1.2.dev0
With sklearn version 1.2.0 it will be possible to solve the problem of returning a DataFrame from a ColumnTransformer much more easily. That version has not been released yet, but you can test the following in the dev version (1.2.dev0) by installing the nightly builds as follows:
pip install --pre --extra-index https://pypi.anaconda.org/scipy-wheels-nightly/simple scikit-learn -U
The ColumnTransformer (and other transformers as well) now exposes a .set_output() method, which makes it possible to configure a transformer to output pandas DataFrames by passing transform='pandas' to it.
Therefore, the example becomes:
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, OrdinalEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
df = pd.DataFrame({
'department': ['operations', 'operations', 'support', 'logistics', 'sales'],
'review': [0.577569, 0.751900, 0.722548, 0.675158, 0.676203],
'projects': [3, 3, 3, 4, 3],
'salary': ['low', 'medium', 'medium', 'low', 'high'],
'satisfaction': [0.626759, 0.751900, 0.722548, 0.675158, 0.676203],
'bonus': [0, 0, 0, 0, 1],
'avg_hrs_month': [180.866070, 182.708149, 184.416084, 188.707545, 179.821083],
'left': [0, 0, 1, 0, 0]
})
ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
ct = ColumnTransformer(transformers=[
("ord", ordinal_transformer, ord_features),
("cat", categorical_transformer, cat_features )],
remainder='passthrough'
)
ct.set_output(transform="pandas")
df_pandas = ct.fit_transform(df)
df_pandas
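(The original answer showed the resulting frame as an image. With the default verbose_feature_names_out=True, the column names would presumably look like this:)
ord__salary, cat__department_logistics, cat__department_operations,
cat__department_sales, cat__department_support, remainder__review,
remainder__projects, remainder__satisfaction, remainder__bonus,
remainder__avg_hrs_month, remainder__left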
The output also becomes much easier to read, as it has proper column names (indeed, at each step the transformers that make up the ColumnTransformer expose the attribute feature_names_in_, so you don't lose column names anymore while transforming the input).
One last note: observe that the example now requires sparse_output=False to be passed to the OneHotEncoder instance in order to work.
This answer skips the workaround and directly provides a solution for scikit-learn version 1.2+
From sklearn version 1.2 on, transformers can return a pandas DataFrame directly without further handling. It is done with set_output, which can be configured per estimator by calling the set_output method or globally by setting set_config(transform_output="pandas"). See Release Highlights for scikit-learn 1.2 - Pandas output with set_output API
In your case the solution would be:
ord_features = ["salary"]
ordinal_transformer = OrdinalEncoder()
cat_features = ["department"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(
transformers=[
("ord", ordinal_transformer, ord_features),
("cat", categorical_transformer, cat_features ),
]
)
# Add the following line to your code
ct.set_output(transform="pandas")
df_new = ct.fit_transform(df)
df_new
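As mentioned above, pandas output can also be requested globally rather than per estimator. A minimal sketch, assuming scikit-learn >= 1.2:
from sklearn import set_config

# every transformer's fit_transform/transform now returns a DataFrame
set_config(transform_output="pandas")
df_new = ct.fit_transform(df)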

How to encode multiple categorical columns for test data efficiently?

I have multiple categorical columns (nearly 50). I am using custom-made frequency encoding on the training data and saving the result as a nested dictionary. For the test data I am using the map function to encode, and unseen labels are replaced with 0. But I need a more efficient way.
I have already tried the pandas replace method, but it doesn't care about unseen labels and leaves them as they are. Further, I am much concerned about the time: I want, say, 80 columns and 1 row to be encoded within 60 ms. I just need the most efficient way to do it. I have taken my example from here.
import pandas
from sklearn import preprocessing
df = pandas.DataFrame({'pets': ['cat', 'dog', 'cat', 'monkey', 'dog', 'meo'],
'owner': ['Champ', 'Ron', 'Brick', 'Champ', 'Veronica', 'Ron'],
'location': ['San_Diego', 'New_York', 'New_York', 'San_Diego', 'San_Diego',
'New_York']})
My dict looks something like this :
enc = {'pets': {'cat': 0, 'dog': 1, 'monkey': 2},
'owner': {'Brick': 0, 'Champ': 1, 'Ron': 2, 'Veronica': 3},
'location': {'New_York': 0, 'San_Diego': 1}}
for col in enc:
    if col in input_df.columns:
        input_df[col] = input_df[col].map(dict_online['encoding'][col]).fillna(0)
Further, I want multiple columns to be encoded at once; I don't want a loop over every column. I guess we can't do that with map. Hence replace is a good choice, but as said it doesn't care about unseen labels.
EDIT:
This is the code I am using for now. Please note there is only 1 row in the test data frame (not very sure whether I should handle it as a numpy array to reduce time...), but I need to decrease this time to under 60 ms. Further, I have a dictionary only for mapping (can't use one-hot because of the use case). Currently time = 331.74 ms. Any idea how to do it more efficiently? Not sure that multiprocessing will work. Further, with the replace method I have got many issues, like: 1. It does not handle unseen labels and leaves them as they are (for strings this is an issue). 2. It has a problem with overlapping of keys and values.
from string import ascii_lowercase
import itertools
import pandas as pd
import numpy as np
import time
def iter_all_strings():
    for size in itertools.count(1):
        for s in itertools.product(ascii_lowercase, repeat=size):
            yield "".join(s)

l = []
for s in iter_all_strings():
    l.append(s)
    if s == 'gr':
        break
columns = l
df = pd.DataFrame(columns=columns)
for col in df.columns:
    df[col] = np.random.randint(1, 4000, 3000)

transform_dict = {}
for col in df.columns:
    cats = pd.Categorical(df[col]).categories
    d = {}
    for i, cat in enumerate(cats):
        d[cat] = i
    transform_dict[col] = d
print(f"The length of the dictionary is {len(transform_dict)}")

# Creating another test data frame
df2 = pd.DataFrame(columns=columns)
for col in df2.columns:
    df2[col] = np.random.randint(1, 4000, 1)
print(f"The shape of the 2nd data frame is {df2.shape}")

t1 = time.time()
for col in df2.columns:
    df2[col] = df2[col].map(transform_dict[col]).fillna(0)
print(f"Time taken is {time.time() - t1}")
# print(df)
Firstly, when you want to encode categorical variables which are not ordinal (meaning there is no inherent ordering between the values of the variable/column, e.g. cat, dog), you must use one-hot encoding.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
df = pd.DataFrame({'pets': ['cat', 'dog', 'cat', 'monkey', 'dog', 'meo'],
'owner': ['Champ', 'Ron', 'Brick', 'Champ', 'Veronica', 'Ron'],
'location': ['San_Diego', 'New_York', 'New_York', 'San_Diego', 'San_Diego',
'New_York']})
enc = [['cat','dog','monkey'],
['Brick', 'Champ', 'Ron', 'Veronica'],
['New_York', 'San_Diego']]
ohe = OneHotEncoder(categories=enc, handle_unknown='ignore', sparse=False)
Here, I have modified your enc in a way that can be fed into the OneHotEncoder.
Now comes the question: how are we going to handle the unseen labels?
When you set handle_unknown='ignore', the unseen values will have zeros in all the dummy variables, which in a way helps the model understand that it is an unknown value.
encoded = ohe.fit_transform(df)  # fit first so that ohe.categories_ is populated
colnames = ['{}_{}'.format(col, val) for col, unique_values in zip(df.columns, ohe.categories_)
            for val in unique_values]
pd.DataFrame(encoded, columns=colnames)
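As a quick check (just a sketch reusing df, encoded and colnames from above): the row where pets is 'meo', which is absent from enc, ends up with zeros in every pets_* dummy column:
out = pd.DataFrame(encoded, columns=colnames)
print(out.loc[5, ['pets_cat', 'pets_dog', 'pets_monkey']])  # all 0.0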
Update:
If you are fine with ordinal encoding, the following change could help:
df2.apply(lambda row: [transform_dict[col].get(val, 0)
                       for col, val in row.items()],
          axis=1,
          result_type='expand')
# 1000 loops, best of 3: 1.17 ms per loop

Pyspark random forest feature importance mapping after column transformations

I am trying to plot the feature importances of certain tree-based models with column names. I am using PySpark.
Since I had textual categorical variables and numeric ones too, I had to use a pipeline, which does something like this:
1. use string indexer to index string columns
2. use one hot encoder for all columns
3. use a vectorassembler to create the feature column containing the feature vector
Some sample code from the docs for steps 1-3:
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler
categoricalColumns = ["workclass", "education", "marital_status",
                      "occupation", "relationship", "race", "sex", "native_country"]
stages = []  # stages in our Pipeline
for categoricalCol in categoricalColumns:
    # Category Indexing with StringIndexer
    stringIndexer = StringIndexer(inputCol=categoricalCol,
                                  outputCol=categoricalCol + "Index")
    # Use OneHotEncoder to convert categorical variables into binary SparseVectors
    # encoder = OneHotEncoderEstimator(inputCol=categoricalCol + "Index", outputCol=categoricalCol + "classVec")
    encoder = OneHotEncoderEstimator(inputCols=[stringIndexer.getOutputCol()],
                                     outputCols=[categoricalCol + "classVec"])
    # Add stages. These are not run here, but will run all at once later on.
    stages += [stringIndexer, encoder]
numericCols = ["age", "fnlwgt", "education_num", "capital_gain",
               "capital_loss", "hours_per_week"]
assemblerInputs = [c + "classVec" for c in categoricalColumns] + numericCols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler]
# Create a Pipeline.
pipeline = Pipeline(stages=stages)
# Run the feature transformations.
# - fit() computes feature statistics as needed.
# - transform() actually transforms the features.
pipelineModel = pipeline.fit(dataset)
dataset = pipelineModel.transform(dataset)
Finally, train the model.
After training and evaluation, I can use "model.featureImportances" to get the feature rankings; however, I don't get the feature/column names, just the feature indices, something like this:
print dtModel_1.featureImportances
(38895,[38708,38714,38719,38720,38737,38870,38894],[0.0742343395738,0.169404823667,0.100485791055,0.0105823115814,0.0134236162982,0.194124862158,0.437744255667])
How do I map these back to the initial column names and values, so that I can plot them?
Extract metadata as shown here by user6910411:
from itertools import chain

attrs = sorted(
    (attr["idx"], attr["name"])
    for attr in (
        chain(*dataset.schema["features"].metadata["ml_attr"]["attrs"].values())
    )
)
and combine with feature importance:
[
(name, dtModel_1.featureImportances[idx])
for idx, name in attrs
if dtModel_1.featureImportances[idx]
]
The transformed dataset metadata has the required attributes. Here is an easy way to do it:
Create a pandas dataframe (generally the feature list will not be huge, so there are no memory issues in storing a pandas DF):
import pandas as pd

pandasDF = pd.DataFrame(
    dataset.schema["features"].metadata["ml_attr"]["attrs"]["binary"]
    + dataset.schema["features"].metadata["ml_attr"]["attrs"]["numeric"]
).sort_values("idx")
Then create a broadcast dictionary for the mapping; broadcasting is necessary in a distributed environment.
feature_dict = dict(zip(pandasDF["idx"],pandasDF["name"]))
feature_dict_broad = sc.broadcast(feature_dict)
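A minimal driver-side sketch of actually using this mapping (assuming the fitted model is called dtModel_1 as in the question; featureImportances is a SparseVector exposing .indices and .values):
importances = dtModel_1.featureImportances
named_importances = sorted(
    ((feature_dict[int(i)], float(v)) for i, v in zip(importances.indices, importances.values)),
    key=lambda kv: kv[1],
    reverse=True
)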
When creating your assembler, you used a list of variables (assemblerInputs). The order is preserved in the 'features' column, so you can just build a pandas DataFrame:
features_imp_pd = (
pd.DataFrame(
dtModel_1.featureImportances.toArray(),
index=assemblerInputs,
columns=['importance'])
)
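From there plotting is straightforward; a sketch with matplotlib (assumed to be available), showing the 20 most important features:
import matplotlib.pyplot as plt

(features_imp_pd
 .sort_values('importance', ascending=False)
 .head(20)
 .plot(kind='barh', legend=False))
plt.xlabel('importance')
plt.tight_layout()
plt.show()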

how to "normalize" vector values when using Spark CountVectorizer?

CountVectorizer and CountVectorizerModel often create a sparse feature vector that looks like this:
(10,[0,1,4,6,8],[2.0,1.0,1.0,1.0,1.0])
this basically says the total size of the vocabulary is 10, the current document has 5 unique elements, and in the feature vector these 5 unique elements take positions 0, 1, 4, 6 and 8. Also, one of the elements shows up twice, hence the 2.0 value.
Now, I would like to "normalize" the above feature vector and make it look like this,
(10,[0,1,4,6,8],[0.3333,0.1667,0.1667,0.1667,0.1667])
i.e., each value is divided by 6, the total number of all elements together. For example, 0.3333 = 2.0/6.
So is there a way to do this efficiently here?
Thanks!
You can use Normalizer
class pyspark.ml.feature.Normalizer(*args, **kwargs)
Normalize a vector to have unit norm using the given p-norm.
with 1-norm
from pyspark.ml.linalg import SparseVector
from pyspark.ml.feature import Normalizer
df = spark.createDataFrame([
(SparseVector(10,[0,1,4,6,8],[2.0,1.0,1.0,1.0,1.0]), )
], ["features"])
Normalizer(inputCol="features", outputCol="features_norm", p=1).transform(df).show(1, False)
# +--------------------------------------+---------------------------------------------------------------------------------------------------------------------+
# |features |features_norm |
# +--------------------------------------+---------------------------------------------------------------------------------------------------------------------+
# |(10,[0,1,4,6,8],[2.0,1.0,1.0,1.0,1.0])|(10,[0,1,4,6,8],[0.3333333333333333,0.16666666666666666,0.16666666666666666,0.16666666666666666,0.16666666666666666])|
# +--------------------------------------+---------------------------------------------------------------------------------------------------------------------+

Does TfidfVectorizer keep order of the features?

I wonder if TfidfVectorizer keeps the order of the features when transforming documents using scikit-learn. Here is what I am doing:
from sklearn.feature_extraction.text import TfidfVectorizer
import joblib
corpus = ['this movie is cool', 'I love this book']
vec = TfidfVectorizer()
X = vec.fit_transform(corpus)
joblib.dump(vec, './vec')
doc = 'What are the coolest movies in 2015'
vec = joblib.load('./vec')
X_test = vec.transform([doc])
Now, my question is that are the feature entries in X and X_test in the same order?
Yes. When you call fit(), it creates a vocabulary dictionary mapping terms to column indices, and it uses that to transform additional data sets. This mapping is preserved through any serialization and deserialization.
vec.vocabulary_
> {u'book': 0, u'cool': 1, u'is': 2, u'love': 3, u'movie': 4, u'this': 5}
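A quick way to convince yourself (just a sketch, assuming joblib is installed): the vocabulary_ mapping, and therefore the column order, survives the dump/load round trip unchanged:
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ['this movie is cool', 'I love this book']
vec = TfidfVectorizer().fit(corpus)
joblib.dump(vec, './vec')
loaded = joblib.load('./vec')
assert vec.vocabulary_ == loaded.vocabulary_  # same term -> column index mapping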
