GridSearchCV best model CV history - scikit-learn

I am trying to use GridSearchCV along with KerasRegressor for a hyperparameter search. Keras' model.fit function on its own lets you look at the 'loss' and 'val_loss' values via the history object.
Is it possible to look at the 'loss' and 'val_loss' values when using GridSearchCV?
Here is the code I am using to do a grid search:
model = KerasRegressor(build_fn=create_model_gridsearch, verbose=0)
layers = [[16], [16,8]]
activations = ['relu' ]
optimizers = ['Adam']
param_grid = dict(layers=layers, activation=activations, input_dim=[X_train.shape[1]], output_dim=[Y_train.shape[1]], batch_size=specified_batch_size, epochs=num_of_epochs, optimizer=optimizers)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='neg_mean_squared_error', n_jobs=-1, verbose=1, cv=7)
grid_result = grid.fit(X_train, Y_train)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in sorted(zip(means, stds, params), key=lambda x: x[0]):
    print("%f (%f) with: %r" % (mean, stdev, param))
def create_model_gridsearch(input_dim, output_dim, layers, activation, optimizer):
    model = Sequential()
    for i, nodes in enumerate(layers):
        if i == 0:
            model.add(Dense(nodes, input_dim=input_dim))
            model.add(Activation(activation))
        else:
            model.add(Dense(nodes))
            model.add(Activation(activation))
    model.add(Dense(output_dim, activation='linear'))
    model.compile(optimizer=optimizer, loss='mean_squared_error')
    return model
How can I get the training and CV loss per epoch for the best model, grid_result.best_estimator_.model?
There is no attribute like grid_result.best_estimator_.model.history.keys().

The history is well hidden. I was able to find it in
grid_result.best_estimator_.model.model.history.history

There is a slight change from the above answer:
"grid_result.best_estimator_.model.history.history" will give the history object.

Related

GridsearchCV loss doesn't equal model.fit() loss values

I am confused as to which metric GridSearchCV is using in its parameter search. My understanding is that my model object feeds it a metric, and this is what is used to determine the "best_params". But this doesn't appear to be the case. I thought that scoring=None is the default and, as a result, the first metric given in the metrics option of model.compile() would be used. So in my case the scoring function used should be the mean_squared_error. My explanation for this issue is described next.
Here is what I am doing. I simulated some regression data using sklearn with 10 features on 10,000 observations. I am playing around with keras because I typically used pytorch in the past and never really dabbled with keras until now. I am noticing a discrepancy in the loss function output from my GridSearchCV call vs. the model.fit() call after I have my optimal set of parameters. Now I know I can just set refit=True and not re-fit the model again, but I am trying to get a feel for the output of the keras and sklearn GridSearchCV functions.
To be explicit about the discrepancy here is what I am seeing. I simulated some data using sklearn as follows:
# Setting some data basics
N = 10000
feats = 10
# generate regression dataset
X, y = make_regression(n_samples=N, n_features=feats, n_informative=2, noise=3)
# training data and testing data #
X_train = X[:int(N * 0.8)]
y_train = y[:int(N * 0.8)]
X_test = X[int(N * 0.8):]
y_test = y[int(N * 0.8):]
I have created a "create_model" function that is looking to tune which activation function I am using (again this is a simple example for a proof of concept).
def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn,
                    kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['mean_squared_error', 'mae'])
    return model
Performing the grid search, I get the following output:
model = KerasRegressor(build_fn=create_model, epochs=50, batch_size=200, verbose=0)
activations = ['linear','relu']
param_grid = dict(activation_fn = activations)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train, verbose=1)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
Best: -21.163454 using {'activation_fn': 'linear'}
OK, so the best metric is a mean squared error of 21.16 (I understand they flip the sign to create a maximization problem). But when I fit the model using activation_fn = 'linear', the MSE I get is totally different.
best_model = create_model('linear')
history = best_model.fit(X_train, y_train, epochs=50, batch_size=200, verbose=1)
.....
.....
Epoch 49/50
8000/8000 [==============================] - 0s 48us/step - loss: 344.1636 - mean_squared_error: 344.1636 - mean_absolute_error: 12.2109
Epoch 50/50
8000/8000 [==============================] - 0s 48us/step - loss: 326.4524 - mean_squared_error: 326.4524 - mean_absolute_error: 11.9250
history.history['mean_squared_error']
Out[723]:
[10053.778002929688,
9826.66806640625,
......
......
344.16363830566405,
326.45237121582034]
The difference is 326.45 vs. 21.16. Any insight as to what I am misunderstanding would be greatly appreciated. I would be more comfortable if they were within a reasonable neighborhood of each other, given one is the error from one fold vs. the entire training data set. But 21 is nowhere near 326. Thanks!
The entire code is shown here:
import pandas as pd
import numpy as np
from keras import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
from keras.constraints import maxnorm
from sklearn import preprocessing
from sklearn.preprocessing import scale
from sklearn.datasets import make_regression
from matplotlib import pyplot as plt
# Setting some data basics
N = 10000
feats = 10
# generate regression dataset
X, y = make_regression(n_samples=N, n_features=feats, n_informative=2, noise=3)
# training data and testing data #
X_train = X[:int(N * 0.8)]
y_train = y[:int(N * 0.8)]
X_test = X[int(N * 0.8):]
y_test = y[int(N * 0.8):]
def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn,
                    kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['mean_squared_error', 'mae'])
    return model
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# create model
model = KerasRegressor(build_fn=create_model, epochs=50, batch_size=200, verbose=0)
# define the grid search parameters
activations = ['linear','relu']
param_grid = dict(activation_fn = activations)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train, verbose=1)
best_model = create_model('linear')
history = best_model.fit(X_train, y_train, epochs=50, batch_size=200, verbose=1)
history.history.keys()
plt.plot(history.history['mean_absolute_error'])
# summarize results
grid_result.cv_results_
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
The large loss reported in your output (326.45237121582034) is the training loss. If you want a number from best_model.fit that can be compared with grid_result.best_score_ from GridSearchCV, you have to request the validation loss (cf. code below).
Now to the question: why is the validation loss lower than the training loss? In your case it is essentially because of dropout (which is applied during training but not during validation/test) - that is why the difference between training and validation losses disappears when you remove dropout. You can find a detailed explanation here of the possible reasons for a validation loss lower than the training loss.
In short, the performance (MSE) of your model is given by the grid_result.best_score_ (21.163454 in your example).
import numpy as np
from keras import Sequential
from keras.layers import Dense, Dropout
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.datasets import make_regression
import tensorflow as tf
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
tf.random.set_seed(42)
# Setting some data basics
N = 10000
feats = 10
# generate regression dataset
X, y = make_regression(n_samples=N, n_features=feats, n_informative=2, noise=3)
# training data and testing data #
X_train = X[:int(N * 0.8)]
y_train = y[:int(N * 0.8)]
X_test = X[int(N * 0.8):]
y_test = y[int(N * 0.8):]
def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn,
                    kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['mean_squared_error', 'mae'])
    return model
# create model
model = KerasRegressor(build_fn=create_model, epochs=50, batch_size=200, verbose=0)
# define the grid search parameters
activations = ['linear','relu']
param_grid = dict(activation_fn = activations)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train, verbose=1, validation_data=(X_test, y_test))
best_model = create_model('linear')
history = best_model.fit(X_train, y_train, epochs=50, batch_size=200, verbose=1, validation_data=(X_test, y_test))
history.history.keys()
# plt.plot(history.history['mae'])
# summarize results
print(grid_result.cv_results_)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))

Custom Metric Scoring for Multi Label KerasClassifier in GridSearchCV

I am trying to do a grid search using scikit-learn's GridSearchCV and KerasClassifier for a multi-label classification problem with 346 labels. I am trying to evaluate the models based on a custom metric, but I always run into the following error. My training data size is 5334.
ValueError: operands could not be broadcast together with shapes (5334,346) (5334,)
def my_custom_scorer(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall
custom_scorer = make_scorer(my_custom_scorer, greater_is_better=True)
def create_model(learn_rate=0.01):
    # create model
    model = Sequential()
    model.add(Dense(100, activation='sigmoid', input_dim=4))
    model.add(Dense(346, activation='sigmoid'))
    # Compile model
    optimizer = SGD(lr=learn_rate, momentum=0)
    model.compile(optimizer=optimizer,
                  loss=custom_loss(weights),
                  metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model,verbose=0)
learn_rate = [0.001, 0.01, 0.1]
batch_size = [10]
epochs = [1]
param_grid = dict(epochs=epochs,batch_size=batch_size,learn_rate=learn_rate)
grid = GridSearchCV(estimator=model,
param_grid=param_grid,n_jobs=1,scoring=custom_scorer)
grid_result = grid.fit(X_train, Y_train)
print("Best: %f using %s" % (grid_result.best_score_,
grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

Keras GridSearch scikit learn freezes

I am struggling to implement grid search in Keras using scikit-learn. Based on this tutorial, I wrote the following code:
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
def create_model():
    model = Sequential()
    model.add(Dense(100, input_shape=(max_len, len(alphabet)), kernel_regularizer=regularizers.l2(0.001)))
    model.add(Dropout(0.85))
    model.add(LSTM(100, input_shape=(100,)))
    model.add(Dropout(0.85))
    model.add(Dense(num_output_classes, activation='softmax'))
    adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, decay=1e-6)
    model.compile(loss='categorical_crossentropy',
                  optimizer=adam,
                  metrics=['accuracy'])
    return model
seed = 7
np.random.seed(seed)
model = KerasClassifier(build_fn=create_model, epochs=10, verbose=0)
batch_size = [10,20]
param_grid = dict(batch_size=batch_size)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(train_data_reduced, train_labels_reduced)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
It does not give me any error message, but it just runs on and on forever without printing out anything. I deliberately tried it with very few epochs, very few training examples and very few hyperparameters to search. Without grid search, one epoch runs through extremely fast, so I don’t think that I just need to give it more time. It simply doesn’t do anything.
Can anyone point out what I am missing?
Thanks a lot!
I had the same problem.
Removing n_jobs=-1 from your parameter list might help! Also try not doing the one-hot encoding.
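Concretely, the suggestion amounts to the following change (a minimal sketch; whether it resolves the hang depends on your platform's multiprocessing setup with Keras/TensorFlow):

# n_jobs left at its default of 1 so the Keras/TensorFlow session is not shared across
# forked worker processes, which is a common cause of silent hangs with n_jobs=-1.
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(train_data_reduced, train_labels_reduced)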

Optimizing two estimators (dependent on each other) using Sklearn Grid Search

The flow of my program is in two stages.
I am using sklearn's ExtraTreesClassifier along with the SelectFromModel method to select the most important features. It should be noted that ExtraTreesClassifier takes many parameters as input, like n_estimators, and yields a different set of important features for different values of n_estimators via SelectFromModel. This means that I can optimize n_estimators to get the best features.
In the second stage, I am training my Keras NN model on the features selected in the first stage. I am using AUROC as the score for the grid search, but this AUROC is calculated using the Keras-based neural network. I want to grid-search n_estimators in my ExtraTreesClassifier to optimize the AUROC of the Keras neural network. I know I have to use a Pipeline, but I am confused about implementing both together; I don't know where to put the Pipeline in my code. I am getting an error which says: TypeError: estimator should be an estimator implementing 'fit' method, <function fs at 0x0000023A12974598> was passed
#################################################################################
# I concatenate the CV set and the train set so that I may select the most
# important features in both CV and Train together.
##############################################################################
frames11 = [train_x_upsampled, cross_val_x_upsampled]
train_cv_x = pd.concat(frames11)
frames22 = [train_y_upsampled, cross_val_y_upsampled]
train_cv_y = pd.concat(frames22)
def fs(n_estimators):
    m = ExtraTreesClassifier(n_estimators=tree_number)
    m.fit(train_cv_x, train_cv_y)
    sel = SelectFromModel(m, prefit=True)
    ##################################################
    # The code below is to get the names of the selected important features
    ###################################################
    feature_idx = sel.get_support()
    feature_name = train_cv_x.columns[feature_idx]
    feature_name = pd.DataFrame(feature_name)
    X_new = sel.transform(train_cv_x)
    X_new = pd.DataFrame(X_new)
    ######################################################################
    # So now the important features selected are in the data-frame X_new. In the
    # code below, I am again dividing the data into train and CV but this time
    # only with the important features selected.
    ####################################################################
    train_selected_x = X_new.iloc[0:train_x_upsampled.shape[0], :]
    cv_selected_x = X_new.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0]+cross_val_x_upsampled.shape[0], :]
    train_selected_y = train_cv_y.iloc[0:train_x_upsampled.shape[0], :]
    cv_selected_y = train_cv_y.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0]+cross_val_x_upsampled.shape[0], :]
    train_selected_x = train_selected_x.values
    cv_selected_x = cv_selected_x.values
    train_selected_y = train_selected_y.values
    cv_selected_y = cv_selected_y.values
##############################################################
# Now with this new data, which only contains the important features,
# I am training a neural network as below.
#########################################################
def create_model():
    n_x_new = train_selected_x.shape[1]
    model = Sequential()
    model.add(Dense(n_x_new, input_dim=n_x_new, kernel_initializer='glorot_normal', activation='relu'))
    model.add(Dense(10, kernel_initializer='glorot_normal', activation='relu'))
    model.add(Dropout(0.8))
    model.add(Dense(1, kernel_initializer='glorot_normal', activation='sigmoid'))
    optimizer = keras.optimizers.Adam(lr=0.001)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model  # build_fn passed to KerasClassifier must return the compiled model
seed = 7
np.random.seed(seed)
model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=400, verbose=0)
n_estimators=[10,20,30]
param_grid = dict(n_estimators=n_estimators)
grid = GridSearchCV(estimator=fs, param_grid=param_grid,scoring='roc_auc',cv = PredefinedSplit(test_fold=my_test_fold), n_jobs=1)
grid_result = grid.fit(np.concatenate((train_selected_x, cv_selected_x), axis=0), np.concatenate((train_selected_y, cv_selected_y), axis=0))
I created a pipeline using keras classifier and a function. The function is not satisfying the conditions of sklearn custom estimator. Still , I am not getting it right.
def feature_selection(n_estimators=10):
    m = ExtraTreesClassifier(n_estimators)
    m.fit(train_cv_x, train_cv_y)
    sel = SelectFromModel(m, prefit=True)
    print(" Getting features names ")
    print(" ")
    feature_idx = sel.get_support()
    feature_name = train_cv_x.columns[feature_idx]
    feature_name = pd.DataFrame(feature_name)
    X_new = sel.transform(train_cv_x)
    X_new = pd.DataFrame(X_new)
    print(" adding names and important feature values ")
    print(" ")
    X_new.columns = feature_name
    print(" dividing the imporrtant features into train and test ")
    print(" ")
    # -----------ARE Data splitting Value-------------
    train_selected_x = X_new.iloc[0:train_x_upsampled.shape[0], :]
    cv_selected_x = X_new.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0]+cross_val_x_upsampled.shape[0], :]
    train_selected_y = train_cv_y.iloc[0:train_x_upsampled.shape[0], :]
    cv_selected_y = train_cv_y.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0]+cross_val_x_upsampled.shape[0], :]
    ##################################################
    print(" Converting the selected important festures on train and test into numpy array to be suitable for NN model ")
    print(" ")
    train_selected_x = train_selected_x.values
    cv_selected_x = cv_selected_x.values
    train_selected_y = train_selected_y.values
    cv_selected_y = cv_selected_y.values
    print(" Now test fold ")
    my_test_fold = []
    for i in range(len(train_selected_x)):
        my_test_fold.append(-1)
    for i in range(len(cv_selected_x)):
        my_test_fold.append(0)
    print(" Now after test fold ")
    return my_test_fold, train_selected_x, cv_selected_x, train_selected_y, cv_selected_y
def create_model():
    n_x_new = X_new.shape[1]
    np.random.seed(6000)
    model_new = Sequential()
    model_new.add(Dense(n_x_new, input_dim=n_x_new, kernel_initializer='he_normal', activation='sigmoid'))
    model_new.add(Dense(10, kernel_initializer='he_normal', activation='sigmoid'))
    model_new.add(Dropout(0.3))
    model_new.add(Dense(1, kernel_initializer='he_normal', activation='sigmoid'))
    model_new.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_crossentropy'])
    return model_new
pipeline = pipeline.Pipeline(steps=[('featureselection', custom_classifier()),
                                    ('nn', KerasClassifier(build_fn=model, nb_epoch=10, batch_size=1000, verbose=0))])
n_estimators=[10,20,30,40]
param_grid = dict(n_estimators=n_estimators)
grid = GridSearchCV(estimator=pipeline, param_grid=param_grid,scoring='roc_auc',cv = PredefinedSplit(test_fold=my_test_fold), n_jobs=1)
grid_result = grid.fit(np.concatenate((train_selected_x, cv_selected_x), axis=0), np.concatenate((train_selected_y, cv_selected_y), axis=0))
This is how I built my own custom transformer.
class fs(TransformerMixin, BaseEstimator):
    def __init__(self, n_estimators=10):
        self.ss = None
        self.n_estimators = n_estimators
        self.x_new = None

    def fit(self, X, y):
        m = ExtraTreesClassifier(10)
        m.fit(X, y)
        self.ss = SelectFromModel(m, prefit=True)
        return self

    def transform(self, X):
        self.x_new = self.ss.transform(X)
        print(np.shape(self.x_new))
        return self.x_new
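For reference, here is a minimal sketch of how such a transformer could be wired into a Pipeline and grid-searched (hypothetical wiring, not the asker's final code). Two details matter for it to behave as intended: fit should use self.n_estimators rather than the hard-coded 10, and pipeline parameters are addressed with the step-name prefix, e.g. featureselection__n_estimators:

from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, PredefinedSplit
from keras.wrappers.scikit_learn import KerasClassifier

# fs is the custom transformer above (with fit using self.n_estimators) and create_model
# is the Keras build function above; the build function would need to infer input_dim
# from the selected features, a detail not shown here.
pipe = Pipeline(steps=[
    ('featureselection', fs()),
    ('nn', KerasClassifier(build_fn=create_model, epochs=10, batch_size=1000, verbose=0)),
])

# Parameters of a pipeline step are addressed as <step_name>__<parameter_name>.
param_grid = {'featureselection__n_estimators': [10, 20, 30, 40]}

grid = GridSearchCV(estimator=pipe, param_grid=param_grid, scoring='roc_auc',
                    cv=PredefinedSplit(test_fold=my_test_fold), n_jobs=1)
# grid_result = grid.fit(X_all, y_all)   # X_all, y_all: hypothetical names for the
#                                        # full feature matrix / labels before selection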

Why is Random Search showing better results than Grid Search?

I'm playing with the RandomizedSearchCV function from scikit-learn. An academic paper claims that randomized search can provide 'good enough' results compared with a whole grid search while saving a lot of time.
Surprisingly, on one occasion, RandomizedSearchCV gave me better results than GridSearchCV. I think GridSearchCV is supposed to be exhaustive, so the result has to be better than RandomizedSearchCV's, supposing they search through the same grid.
For the same dataset and mostly the same settings, GridSearchCV returned the following result:
Best cv accuracy: 0.7642857142857142
Test set score: 0.725
Best parameters: 'C': 0.02
RandomizedSearchCV returned the following result:
Best cv accuracy: 0.7428571428571429
Test set score: 0.7333333333333333
Best parameters: 'C': 0.008
To me the test score of 0.733 is better than 0.725, and the difference between test score and training score for the RandomizedSearchCV is smaller, which to my knowledge means less overfitting.
So why did GridSearchCV return me worse results?
GridSearchCV code:
def linear_SVC(x, y, param, kfold):
    param_grid = {'C': param}
    k = KFold(n_splits=kfold, shuffle=True, random_state=0)
    grid = GridSearchCV(LinearSVC(), param_grid=param_grid, cv=k, n_jobs=4, verbose=1)
    return grid.fit(x, y)
#high C means more chance of overfitting
start = timer()
param = [i/1000 for i in range(1,1000)]
param1 = [i for i in range(1,101)]
param.extend(param1)
#progress = progressbar.bar.ProgressBar()
clf = linear_SVC(x=x_train, y=y_train, param=param, kfold=3)
print('LinearSVC:')
print('Best cv accuracy: {}' .format(clf.best_score_))
print('Test set score: {}' .format(clf.score(x_test, y_test)))
print('Best parameters: {}' .format(clf.best_params_))
print()
duration = timer() - start
print('time to run: {}' .format(duration))
RandomizedSearchCV code:
from sklearn.model_selection import RandomizedSearchCV
def Linear_SVC_Rand(x, y, param, kfold, n):
    param_grid = {'C': param}
    k = StratifiedKFold(n_splits=kfold, shuffle=True, random_state=0)
    randsearch = RandomizedSearchCV(LinearSVC(), param_distributions=param_grid, cv=k, n_jobs=4,
                                    verbose=1, n_iter=n)
    return randsearch.fit(x, y)
start = timer()
param = [i/1000 for i in range(1,1000)]
param1 = [i for i in range(1,101)]
param.extend(param1)
#progress = progressbar.bar.ProgressBar()
clf = Linear_SVC_Rand(x=x_train, y=y_train, param=param, kfold=3, n=100)
print('LinearSVC:')
print('Best cv accuracy: {}' .format(clf.best_score_))
print('Test set score: {}' .format(clf.score(x_test, y_test)))
print('Best parameters: {}' .format(clf.best_params_))
print()
duration = timer() - start
print('time to run: {}' .format(duration))
First, try to understand this:
https://stats.stackexchange.com/questions/49540/understanding-stratified-cross-validation
So you should know that StratifiedKFold is better than KFold.
Use StratifiedKFold in both GridSearchCV and RandomizedSearchCV. Make sure to set "shuffle = False" and not to use the "random_state" parameter. What this does: the dataset you are using will not be shuffled, so your results won't change each time you train. You might get what you expect.
GridSearchCV code:
def linear_SVC(x, y, param, kfold):
    param_grid = {'C': param}
    k = StratifiedKFold(n_splits=kfold)
    grid = GridSearchCV(LinearSVC(), param_grid=param_grid, cv=k, n_jobs=4, verbose=1)
    return grid.fit(x, y)
RandomizedSearchCV code:
def Linear_SVC_Rand(x, y, param, kfold, n):
    param_grid = {'C': param}
    k = StratifiedKFold(n_splits=kfold)
    randsearch = RandomizedSearchCV(LinearSVC(), param_distributions=param_grid, cv=k, n_jobs=4,
                                    verbose=1, n_iter=n)
    return randsearch.fit(x, y)
From the looks of it, it is performing properly. Your CV accuracy in grid search is higher than the CV accuracy in randomized search. Hyperparameters should not be tuned using the test set, so assuming you are doing that properly, it may just be a coincidence that the hyperparameters chosen by randomized search performed better on the test set.
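To see the point numerically, here is a small self-contained sketch (toy data and a reduced grid, assuming both searches use the exact same StratifiedKFold splits and the random search samples from the same candidate list): with identical splits, the exhaustive search's best cross-validation score can never be lower than the random search's, even if the test-set scores happen to go the other way.

from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, StratifiedKFold
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # toy data for illustration
param = {'C': [i / 100 for i in range(1, 100)]}   # a reduced version of the original grid
k = StratifiedKFold(n_splits=3)                   # identical, unshuffled splits for both searches

grid = GridSearchCV(LinearSVC(random_state=0), param_grid=param, cv=k).fit(X, y)
rand = RandomizedSearchCV(LinearSVC(random_state=0), param_distributions=param, cv=k,
                          n_iter=20, random_state=0).fit(X, y)

# Every C sampled by the random search is also evaluated by the grid search,
# so the grid search's best CV score is at least as high.
print(grid.best_score_, rand.best_score_)
assert grid.best_score_ >= rand.best_score_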
