I have train and test datasets created as follows:
dataset['train'], dataset['test'] = torch.utils.data.random_split(
    dataset_all, [num_train, num_test],
    generator=torch.Generator().manual_seed(random_seed))
Is there a good way to change the targets and retrieve a subset of the dataset by providing indices?
Right now, I load the entire split through a DataLoader just to get the samples with labels == 0:
dataloader['train'] = torch.utils.data.DataLoader(dataset['train'], batch_size=len(dataset['train']), num_workers=4)
inputs, labels = next(iter(dataloader['train']))
x_train = inputs[np.where(labels==0)]
y_train = labels[np.where(labels==0)]
data_train = My_Dataset(x_train, y_train, transform=None)
This approach takes a lot of time and memory when the dataset is large.
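A minimal sketch of one memory-friendlier alternative, assuming the underlying dataset exposes all labels up front (for example as dataset_all.targets, which many torchvision datasets do; a custom My_Dataset may need a different attribute): compute the matching indices once and wrap them in a torch.utils.data.Subset instead of materializing the full tensors.
import torch
from torch.utils.data import Subset, DataLoader

def subset_by_label(split, all_targets, target_label=0):
    """Keep only the samples of a random_split() split whose label equals target_label."""
    all_targets = torch.as_tensor(all_targets)
    # split.indices are the positions that random_split drew from the full dataset
    keep = [i for i in split.indices if all_targets[i] == target_label]
    return Subset(split.dataset, keep)

# assumes dataset_all.targets holds every label; adjust for your own dataset class
data_train = subset_by_label(dataset['train'], dataset_all.targets, target_label=0)
dataloader['train'] = DataLoader(data_train, batch_size=64, num_workers=4)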
I have a TensorFlow regression model that I have been working with. I have the model tuned well and I am getting good results while training. However, when I go to evaluate it, the results are horrible. I did some research and found that I am not normalizing my test features and labels as well, so I suspect that is where the problem is. My thought is to normalize the whole dataset before splitting it into train and test sets, but I am getting an attribute error that has me stumped.
Here is the code sample. Please help :)
#concatenate the surface data and single_downhole_col into a single dataframe
training_Data =[]
training_Data = pd.concat([surface_Data, single_downhole_col], axis=1)
#print('training data shape:',training_Data.shape)
#print(training_Data.head())
#normalize the data using keras
model_normalizer_layer = tf.keras.layers.Normalization(axis=-1)
model_normalizer_layer.adapt(training_Data)
normalized_training_Data = model_normalizer_layer(training_Data)
#convert the data frame to array
dataset = normalized_training_Data.copy()
dataset.tail()
#create a training and test set
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
#check the data
train_dataset.describe().transpose()
#split features from labels
train_features = train_dataset.copy()
test_features = test_dataset.copy()
And in case there is any interest in how the normalizer layer is used in the model, please see below:
def build_and_compile_model(data):
    model = keras.Sequential([
        model_normalizer_layer,
        layers.Dense(260, input_dim=401, activation='relu'),
        layers.Dense(80, activation='relu'),
        #layers.Dense(40, activation='relu'),
        layers.Dense(1)
    ])
I found that Quasimodo's suggestion of normalizing the dataset before processing it in my model was the ideal solution. It scaled the data from 0 to 1 for all columns as expected and allowed me to display the data prior to training to validate that it was correct.
For whatever reason, keras.layers.Normalization was not working in my case.
x = training_Data.values
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
training_Data = pd.DataFrame(x_scaled)
# normalize the data using keras
model_normalizer_layer = tf.keras.layers.Normalization(axis=-1)
model_normalizer_layer.adapt(training_Data)
normalized_training_Data = model_normalizer_layer(training_Data)
The only part that I have yet to figure out is how to scale the predicted data from the model back to the original ranges of the columns. I'm sure it's simple, but I'm stumped.
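For reference, a minimal sketch of undoing the scaling with MinMaxScaler.inverse_transform; it assumes a separate scaler is fitted on the label column only (single_downhole_col from the question), so that predictions can be inverted without touching the feature columns:
from sklearn import preprocessing

# Fit a scaler on the label column alone (assumes single_downhole_col is the label series)
label_scaler = preprocessing.MinMaxScaler()
y_scaled = label_scaler.fit_transform(single_downhole_col.values.reshape(-1, 1))

# ... train the model on the scaled data, then map predictions back to the original units
y_pred_scaled = model.predict(test_features)
y_pred = label_scaler.inverse_transform(y_pred_scaled.reshape(-1, 1))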
After MFCC feature extraction, I am attempting to use PCA for feature selection and then carry out classification using Random Forest.
Prior to applying StandardScaler() to the data, I have separated out the X_train, y_train, X_test and y_test data.
Step 1: I first scale the data as follows:
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.reshape(-1, X_train.shape[-1])).reshape(X_train.shape)
X_test_scaled = scaler.transform(X_test.reshape(-1, X_test.shape[-1])).reshape(X_test.shape)
# Flatten data for PCA
X_train_scaled = np.array([features_2d.flatten() for features_2d in X_train_scaled])
X_test_scaled = np.array([features_2d.flatten() for features_2d in X_test_scaled])
Step 2: Then I apply PCA and PCA.fit as follows:
pca_train = PCA().fit(X_train_scaled)
pca_train = PCA(n_components = index_95) # Transformation into 31 principal components
x_pca_train = pca_train.fit_transform(X_train_scaled)
x_pca_test = pca_test.fit_transform(X_test_scaled)
X_train = x_pca_train
X_test = x_pca_test
Step 3: Carry out Random Forest classification.
I wanted to know whether the procedure in Step 1 and Step 2 is correct for StandardScaler and PCA analysis on the X_train and X_test data.
Thanks for your time and help!
First of all, PCA is not guaranteed to be useful for classification tasks (see e.g. https://www.csd.uwo.ca/~oveksler/Courses/CS434a_541a/Lecture8.pdf).
I cannot say whether all the reshapes in the scaler step are needed without knowing your data; however, Step 2 certainly looks a bit off:
Your first call, pca_train = PCA().fit(X_train_scaled), is redundant since you immediately redefine pca_train afterwards.
x_pca_test = pca_test.fit_transform(X_test_scaled) looks like a mistake: you should fit only on the train data and apply transform() to the test set.
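A minimal sketch of how Step 2 could look with a single PCA fitted on the training data only (index_95 is taken from the question as the number of components covering ~95% of the variance):
from sklearn.decomposition import PCA

pca = PCA(n_components=index_95)                  # e.g. 31 principal components
x_pca_train = pca.fit_transform(X_train_scaled)   # fit on the training data only
x_pca_test = pca.transform(X_test_scaled)         # reuse the same fitted PCA on the test data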
I have a logistic regression model housed in a scikit-learn pipeline using the following:
pipeline = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(
        solver='lbfgs',
        cv=10,
        scoring='roc_auc',
        class_weight='balanced'
    )
)

pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
I can view the model's coefficients for predictions as a whole with this code ...
# Look at model's coefficients to see what features are most important
plt.rcParams['figure.dpi'] = 50
model = pipeline.named_steps['logisticregressioncv']
coefficients = pd.Series(model.coef_[0], X_train.columns)
plt.figure(figsize=(10,12))
coefficients.sort_values().plot.barh(color='grey');
Which returns a bar plot of the features and their coefficients.
What I'm trying to do is be able to see how different input values for a single observation impact its prediction. The idea is to be able to run predictions on a sample population and examine the group with "low" predictions ... for example if I run predictions for 10 observations, I'd like to see how different input values impacted each of those 10 predictions, individually.
I recalled that I can achieve this via SHAP values using something along the lines of the following (but using LinearExplainer instead of TreeExplainer):
# Instantiate model and encoder outside of pipeline for
# use with shap
model = RandomForestClassifier(random_state=25)

# Fit on train, score on val
model.fit(X_train_encoded, y_train2)
y_pred_shap = model.predict(X_val_encoded)

# Get an individual observation to explain.
row = X_test_encoded.iloc[[-3]]

# Why did the model predict this?
# Look at a Shapley Values Force Plot
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)

shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value[1],
    shap_values=shap_values[1],
    features=row
)
I want to do multioutput prediction of labels and continuous data. My data consists of time series: one series of 10 time points with 30 observables per sample. I want to predict 10 labels that are binary and 5 that are continuous, based on this data.
For the sake of simplicity I have flattened the time series data - ending up with one row per sample.
Since there are many labels to predict about the same system, and since there are relationships between them, I want to use multi-output prediction to do so. My idea is to divide the task into two parts: one for MultiOutputClassifier, another for MultiOutputRegressor.
I generally like XGBoost and wish to use it for this task, but of course I want to prevent overfitting when doing so. So I have a piece of code as follows, and I wish to pass early_stopping_rounds to the fit method of the XGBClassifier, but I don't know how.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)

pipeline = Pipeline([
    ('imputer', SimpleImputer()),  # XGBoost can deal with NaNs, but MultiOutputClassifier cannot
    ('classifier', MultiOutputClassifier(XGBClassifier()))
])

param_grid = dict(
    classifier__estimator__n_estimators=[100],  # this works
    # classifier__estimator__early_stopping_rounds=[30],  # needs to be passed to .fit
    # classifier__estimator__scale_pos_weight=[scale_pos_weight],  # XGBoostError: Invalid Parameter format for scale_pos_weight expect float
)

clf = GridSearchCV(estimator=pipeline, param_grid=param_grid, scoring='roc_auc', refit='roc_auc', cv=5, n_jobs=-1)
clf.fit(X_train, y_train[CLASSIFICATION_LABELS])

y_hat_proba = np.array(clf.predict_proba(X_test))
y_hat = pd.DataFrame(np.array([y_hat_proba[:, i, 0] for i in range(y_hat_proba.shape[1])]), columns=CLASSIFICATION_LABELS)
auc_roc_scores = np.array([roc_auc_score(y_test[label], (y_hat[label] > 0.5).astype(int)) for label in y_hat.columns])
print(f'average ROC AUC score: {np.mean(auc_roc_scores).round(3)}+/-{np.std(auc_roc_scores).round(3)}')

>>> average ROC AUC score: 0.499+/-0.002
I tried passing it to fit as follows:
classifier__estimator__early_stopping_rounds=30
classifier__early_stopping_rounds=30
I get ROC AUC scores of 0.5 on the labels, which means this clearly isn't working, hence why I want to pass the early_stopping_rounds parameter and the eval_set. I suppose that being able to pass scale_pos_weight could also be useful, but it probably doesn't work for multi-output prediction. At the moment I get the feeling that this is not the way to go to solve this, and in case you agree I would appreciate alternative suggestions.
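For what it's worth, a minimal sketch of how early stopping can be wired up for a single XGBClassifier outside the grid search: recent xgboost versions accept early_stopping_rounds in the constructor, while the eval_set still goes to fit. A manual validation split (X_val/y_val) is assumed here purely for illustration, since MultiOutputClassifier cannot forward a different eval_set to each per-label estimator.
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hold out a validation set purely for early stopping (hypothetical split for illustration)
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train[CLASSIFICATION_LABELS], test_size=0.2, random_state=0)

# One label at a time: demonstrate early stopping on a single output column
clf_single = XGBClassifier(n_estimators=1000, early_stopping_rounds=30)
clf_single.fit(X_tr, y_tr.iloc[:, 0],
               eval_set=[(X_val, y_val.iloc[:, 0])],
               verbose=False)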
I have a highly unbalanced dataset.
My dataset contains 1450 records and my outputs are binary, 0 and 1. Output 0 has 1200 records and output 1 has 250 records.
I am using this piece of code to build my testing and training data set for the model.
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer versions
X = Actual_DataFrame
y = Actual_DataFrame.pop('Attrition')
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.20, random_state=42, stratify=y)
But what I would like is a function in which I can specify the number of records for training, and what percentage of them needs to come from class '0' and what percentage from class '1'.
So, a function which takes 2 inputs is needed for creating the training data:
Total number of records for the training data,
Number of records that belong to class '1'.
This would be a huge help for solving biased-sampling dataset problems.
You can simply write a function that's very similar to train_test_split from sklearn. The idea is that, from the input parameters train_size and pos_class_size, you can calculate how many positive-class and negative-class samples you will need.
def custom_split(X, y, train_size, pos_class_size, random_state=42):
    # negative-class samples fill the rest of the training set
    neg_class_size = train_size - pos_class_size

    pos_df = X[y == 1]
    neg_df = X[y == 0]

    pos_train = pos_df.sample(pos_class_size, random_state=random_state)
    pos_test = pos_df[~pos_df.index.isin(pos_train.index)]
    neg_train = neg_df.sample(neg_class_size, random_state=random_state)
    neg_test = neg_df[~neg_df.index.isin(neg_train.index)]

    # stack the rows of the two classes back together
    X_train = pd.concat([pos_train, neg_train], axis=0)
    X_test = pd.concat([pos_test, neg_test], axis=0)

    y_train = y[X_train.index]
    y_test = y[X_test.index]

    return X_train, X_test, y_train, y_test
There are methods that are more memory-efficient or run quicker; I didn't do any testing with this code, but it should work.
At the very least, you should be able to get the idea behind it.
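A quick usage example against the variables from the question (numbers chosen purely for illustration: 500 training records, 200 of them from class '1'):
X_train, X_test, y_train, y_test = custom_split(X, y, train_size=500, pos_class_size=200)
print(y_train.value_counts())   # expect 200 records of class 1 and 300 of class 0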