I have a highly imbalanced dataset. It contains 1450 records with binary outputs, 0 and 1: class 0 has 1200 records and class 1 has 250 records.
I am using this piece of code to build my testing and training data set for the model.
from sklearn.model_selection import train_test_split
X = Actual_DataFrame
y = Actual_DataFrame.pop('Attrition')
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.20, random_state=42, stratify=y)
But what I would like is a function in which I can specify the total number of records for training, and what percentage of them should come from class '0' versus class '1'.
So, a function that takes two inputs to create the training data:
Total number of records for the training data,
Number of those records that belong to class '1'.
This would be a huge help for biased-sampling problems.
You can simply write a function that's very similar to train_test_split from sklearn. The idea is that, from the input parameters train_size and pos_class_size, you can calculate how many positive-class and negative-class samples you will need.
import pandas as pd

def custom_split(X, y, train_size, pos_class_size, random_state=42):
    # the negative class fills whatever remains of the requested train size
    neg_class_size = train_size - pos_class_size
    pos_df = X[y == 1]
    neg_df = X[y == 0]
    pos_train = pos_df.sample(pos_class_size, random_state=random_state)
    pos_test = pos_df[~pos_df.index.isin(pos_train.index)]
    neg_train = neg_df.sample(neg_class_size, random_state=random_state)
    neg_test = neg_df[~neg_df.index.isin(neg_train.index)]
    X_train = pd.concat([pos_train, neg_train])  # stack rows, not columns
    X_test = pd.concat([pos_test, neg_test])
    y_train = y[X_train.index]
    y_test = y[X_test.index]
    return X_train, X_test, y_train, y_test
There are approaches that are more memory-efficient or run quicker, and I haven't tested this code thoroughly, but it should work.
At the very least, it should convey the idea.
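For illustration, here is a self-contained sketch of the same idea on toy data (the feature values, sizes, and random seed below are made up for the example):

```python
import pandas as pd

# Toy data: 10 records of class 0 and 5 records of class 1
df = pd.DataFrame({"feature": range(15), "label": [0] * 10 + [1] * 5})
X, y = df[["feature"]], df["label"]

# Request 8 training records, 3 of which must be positive
train_size, pos_class_size = 8, 3
pos_train = X[y == 1].sample(pos_class_size, random_state=0)
neg_train = X[y == 0].sample(train_size - pos_class_size, random_state=0)
X_train = pd.concat([pos_train, neg_train])
y_train = y[X_train.index]

print(len(X_train), int(y_train.sum()))  # 8 training rows, 3 of them positive
```

The remaining rows (the ones whose index is not in X_train.index) would form the test set, exactly as in the function above.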
I have a dataframe with more than 1 million rows and I need to do a linear regression on it in Python 3, but my RAM is 8 GB and I can't load the dataframe completely and run linear regression on it.
My code is as follows:
from pymongo import MongoClient
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

def get_data():
    client = MongoClient(host='127.0.0.1', port=27017)
    database = client['database']
    collection = database['AI']
    query = {}
    return collection.find(query)

df = get_data()
xx = pd.DataFrame(df[0:100000])
xx = xx.iloc[:,2:]
xx.dropna(inplace = True)
X = np.array(xx.iloc[:,:-1])
y = np.array(xx['price']).reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
regr = LinearRegression()
regr.fit(X_train, y_train)
print(regr.score(X_test, y_test))
This may not be possible with LinearRegression, but SGDRegressor has a partial_fit method for handling large datasets.
To quote the documentation:
partial_fit(X, y, sample_weight=None)
Perform one epoch of stochastic gradient descent on given samples.
Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user.
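As a sketch of how that looks in practice (synthetic data here; the chunk size and epoch count are arbitrary choices), you can stream the data through partial_fit in chunks instead of fitting everything at once:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Synthetic stand-in for the real data: y is a noisy linear function of X
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 5))
true_coef = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
y = X @ true_coef + rng.normal(scale=0.1, size=100_000)

reg = SGDRegressor(random_state=0)
chunk = 10_000
for epoch in range(5):                      # a few passes over the data
    for start in range(0, len(X), chunk):   # only one chunk in memory at a time
        reg.partial_fit(X[start:start + chunk], y[start:start + chunk])

print(reg.score(X, y))  # R^2, close to 1 on this easy synthetic data
```

In the real setting, each chunk would be fetched from MongoDB (e.g., by paging through the cursor) rather than sliced from an in-memory array.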
I am using the PyTorch implementation of TabNet and cannot figure out why I'm still getting this error. I import the data into a dataframe, use this function to get my X and y, and then do my train-test split:
def get_X_y(df):
    '''This function takes in a dataframe and splits it into the X and y variables'''
    X = df.drop(['is_goal'], axis=1)
    y = df.is_goal
    return X, y
X,y = get_X_y(df)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)
Then I use this to reshape my y_train
y_train.values.reshape(-1,1)
Then create an instance of the model and try to fit it
reg = TabNetRegressor()
reg.fit(X_train, y_train)
and I get this error
ValueError: Targets should be 2D : (n_samples, n_regression) but y_train.shape=(639912,) given.
Use reshape(-1, 1) for single regression.
I understand why I need to reshape it, as this is pretty common, but I cannot understand why it's still giving me this error. I've restarted the kernel in my notebook, so I don't think it's a stale-state issue either.
You have to re-assign it:
y_train = y_train.values.reshape(-1,1)
Otherwise, it won't change.
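A quick demonstration of why (on a tiny made-up Series): reshape returns a new array rather than modifying anything in place, so without the re-assignment the original 1-D shape is what gets passed to fit.

```python
import pandas as pd

y_train = pd.Series([0, 1, 0, 1])
reshaped = y_train.values.reshape(-1, 1)     # new (4, 1) array; y_train itself is untouched
print(y_train.values.shape, reshaped.shape)  # (4,) (4, 1)
```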
I have a very imbalanced dataset. I used sklearn's train_test_split function to extract the train dataset. Now I want to oversample the train dataset, so I counted the number of type1 records (my dataset has two categories, type1 and type2), but approximately all of my train data are type1, so I can't oversample.
Previously I split the train and test datasets with my own code, so that 0.8 of all type1 data and 0.8 of all type2 data ended up in the train dataset.
How can I achieve this with the train_test_split function or other splitting methods in sklearn?
*I should only use sklearn or my own written methods.
You're looking for stratification, which preserves each class's proportion in both splits.
There's a stratify parameter in the train_test_split method to which you can pass the labels, e.g.:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    stratify=y,
                                                    test_size=0.2)
There's also StratifiedShuffleSplit.
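A small sketch on toy data (the 80:20 imbalance below is made up to mimic the type1/type2 situation) showing that stratify preserves the class ratio in both splits:

```python
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 80 + [1] * 20)    # 80:20 imbalance, like type1/type2
X = np.arange(100).reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          test_size=0.2, random_state=0)
print(y_tr.mean(), y_te.mean())      # both splits keep the 20% minority rate
```

Without stratify=y, a small test split can easily end up with almost no minority-class samples, which is exactly the problem described above.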
It seems like we both had similar issues here. Unfortunately, imbalanced-learn isn't always what you need, and scikit-learn does not offer the functionality you want, so you will want to implement your own code.
This is what I came up with for my application. Note that I have not had extensive time to debug it, but I believe it works from the testing I have done. Hope it helps:
import numpy as np

def equal_sampler(classes, data, target, test_frac):
    # Find the least frequent class and its fraction of the total
    _, count = np.unique(target, return_counts=True)
    fraction_of_total = min(count) / len(target)

    # split further into train and test
    train_frac = (1 - test_frac) * fraction_of_total
    test_frac = test_frac * fraction_of_total

    # initialize index arrays and find length of train and test
    train = []
    train_len = int(train_frac * data.shape[0])
    test = []
    test_len = int(test_frac * data.shape[0])

    # add values to train, drop them from the index and proceed to add to test
    for i in classes:
        indeces = list(target[target == i].index.copy())
        train_temp = np.random.choice(indeces, train_len, replace=False)
        for val in train_temp:
            train.append(val)
            indeces.remove(val)
        test_temp = np.random.choice(indeces, test_len, replace=False)
        for val in test_temp:
            test.append(val)

    # X_train, y_train, X_test, y_test
    return data.loc[train], target[train], data.loc[test], target[test]
For the input, classes expects a list of the possible class values, data the dataframe columns used for prediction, and target the target column.
Take care that the algorithm may not be extremely efficient due to the nested for-loops (list.remove takes linear time). Despite that, it should be reasonably fast.
You may also look into stratified shuffle split as follows:
# We use a utility to generate artificial classification data.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=100, n_informative=10, n_classes=2)
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0)
for train_index, test_index in sss.split(X, y):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    clf = make_pipeline(StandardScaler(), SVC(gamma='auto'))
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
Suppose I have a dataset with 1000 rows. I want to split it into train and test sets: the first 800 rows into the train set and the remaining 200 rows into the test set. Is it possible?
My Python code for the train/test split is:
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.20)
There are multiple ways to do this; I will run through a few of them.
Slicing is a powerful method in Python and accepts arguments as data[start:stop:step]. In your case, if you just want the first 800 rows and your dataframe of input features is named train and your output features Y, you can use:
X_train = train[0:800]
X_test = train[800:]
y_train = Y[0:800]
y_test = Y[800:]
The iloc function is associated with a DataFrame and works through its index; if your index is numeric, you can use:
X_train = train.iloc[0:800]
X_test = train.iloc[800:]
y_train = Y.iloc[0:800]
y_test = Y.iloc[800:]
If you just have to split the data into two parts, you can even use df.head() and df.tail() to do it:
X_train = train.head(800)
X_test = train.tail(200)
y_train = Y.head(800)
y_test = Y.tail(200)
There are other ways to do it too; I would recommend the first method, as it is common across multiple data types and will also work if you are working with a numpy array. To learn more about slicing, I suggest you check out "Understanding slice notation"; there it is explained for a list, but it works with almost all sequence types.
You want to set shuffle=False:
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.20, shuffle=False)
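A quick check on toy data (1000 sequential rows, matching the question's setup) that shuffle=False keeps the original order, so the first 800 rows land in the train set:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(1000).reshape(-1, 1)
y = np.arange(1000)
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.20, shuffle=False)
print(ytrain[0], ytrain[-1], ytest[0], ytest[-1])  # 0 799 800 999
```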
I have the following code to run a 10-fold cross-validation in sklearn:
cv = model_selection.KFold(n_splits=10, shuffle=True, random_state=0)
scores = model_selection.cross_val_score(MyEstimator(), x_data, y_data, cv=cv, scoring='neg_mean_squared_error') * -1
For debugging purposes, while I am trying to make MyEstimator work, I would like to run only one fold of this cross-validation, instead of all 10. Is there an easy way to keep this code but just say to run the first fold and then exit?
I would still like that data is split into 10 parts, but that only one combination of that 10 parts is fitted and scored, instead of 10 combinations.
No, not with cross_val_score, I think. You could set n_splits to its minimum value of 2, but that would still be a 50:50 train/test split, which you may not want.
If you want to maintain a 90:10 ratio and test other parts of your code, such as MyEstimator(), you can use a workaround.
You can use KFold.split() to get the first set of train and test indices and then break the loop after first iteration.
cv = model_selection.KFold(n_splits=10, shuffle=True, random_state=0)
for train_index, test_index in cv.split(x_data):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = x_data[train_index], x_data[test_index]
    y_train, y_test = y_data[train_index], y_data[test_index]
    break
Now use this X_train, y_train to train the estimator and X_test, y_test to score it.
Instead of:
scores = model_selection.cross_val_score(MyEstimator(),
x_data, y_data,
cv=cv,
scoring='mean_squared_error')
Your code becomes:
myEstimator_fitted = MyEstimator().fit(X_train, y_train)
y_pred = myEstimator_fitted.predict(X_test)
from sklearn.metrics import mean_squared_error
# I am appending to a scores list object, because that will be output of cross_val_score.
scores = []
scores.append(mean_squared_error(y_test, y_pred))
Rest assured, cross_val_score does just this internally, plus some enhancements such as parallel processing.
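As a compact variant of the same workaround (with a made-up, exactly linear toy dataset and LinearRegression standing in for MyEstimator), you can also pull just the first fold with next() instead of breaking out of the loop:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

x_data = np.arange(100, dtype=float).reshape(-1, 1)
y_data = 3.0 * x_data.ravel() + 1.0                # exactly linear toy target

cv = KFold(n_splits=10, shuffle=True, random_state=0)
train_index, test_index = next(cv.split(x_data))   # first of the 10 folds only
est = LinearRegression().fit(x_data[train_index], y_data[train_index])
y_pred = est.predict(x_data[test_index])
print(mean_squared_error(y_data[test_index], y_pred))  # ~0 on exact linear data
```

The data is still split into 10 parts; only the first train/test combination is ever fitted and scored.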