How to predict multiple outputs using gradient boosting regression?

I have implemented simple code for gradient boosting regression (GBR) to predict one output, and it works well, but when I try to predict two outputs I get the error shown below:
ValueError Traceback (most recent call last)
<ipython-input-5-bb1f191ee195> in <module>()
4 }
5 gradient_boosting_regressor = ensemble.GradientBoostingRegressor(**params)
----> 6 gradient_boosting_regressor.fit(train_data,train_targets)
7 # 'learning_rate': 0.01
D:\Anoconda\lib\site-packages\sklearn\ensemble\gradient_boosting.py in fit(self, X, y, sample_weight, monitor)
977
978 # Check input
--> 979 X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
980 n_samples, self.n_features_ = X.shape
981 if sample_weight is None:
D:\Anoconda\lib\site-packages\sklearn\utils\validation.py in check_X_y(X, y, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator)
576 dtype=None)
577 else:
--> 578 y = column_or_1d(y, warn=True)
579 _assert_all_finite(y)
580 if y_numeric and y.dtype.kind == 'O':
D:\Anoconda\lib\site-packages\sklearn\utils\validation.py in column_or_1d(y, warn)
612 return np.ravel(y)
613
--> 614 raise ValueError("bad input shape {0}".format(shape))
615
616
ValueError: bad input shape (22, 2)
Could I get any assistance or ideas on how to predict two outputs using GBR?
My attempt is below:
import pandas as pd
from sklearn import ensemble

Data_ini = pd.read_excel('Data - 2output -Ra-in - angle.xlsx').iloc[:, :]
Data_ini_val = pd.read_excel('val - Ra-in -angle 12.xlsx').iloc[:, :]

train_data = Data_ini.iloc[:, :4]
train_targets = Data_ini.iloc[:, -2:]
val_data = Data_ini_val.iloc[:, :4]
val_targets = Data_ini_val.iloc[:, -2:]

params = {'n_estimators': 5000, 'max_depth': 4, 'min_samples_split': 2, 'min_samples_leaf': 2}

gradient_boosting_regressor = ensemble.GradientBoostingRegressor(**params)
gradient_boosting_regressor.fit(train_data, train_targets)

Use MultiOutputRegressor for that.
Multi target regression
This strategy consists of fitting one regressor per target. This is a
simple strategy for extending regressors that do not natively support
multi-target regression.
Example:
from sklearn.multioutput import MultiOutputRegressor
...
params = {'n_estimators': 5000, 'max_depth': 4, 'min_samples_split': 2, 'min_samples_leaf': 2}
estimator = MultiOutputRegressor(ensemble.GradientBoostingRegressor(**params))
estimator.fit(train_data,train_targets)
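Once the wrapper is fitted, prediction works just like a single-output regressor, except that the result has one column per target. As a quick sanity check, here is a minimal sketch reusing the val_data and val_targets variables defined in the question:
from sklearn.metrics import mean_absolute_error

val_pred = estimator.predict(val_data)  # shape (n_samples, 2): one column per target
mae_per_target = mean_absolute_error(val_targets, val_pred, multioutput='raw_values')
print(mae_per_target)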

Related

Multiple Keras GridSearchCV with multiprocessing: TypeError: cannot pickle '_thread.RLock' object

I am trying to run GridSearchCV in a Jupyter Notebook for multiple datasets, one after another, using multiprocessing for each grid search. This works fine for the first grid search, but after that I keep getting errors for every other grid search. I even get errors when I re-run the same cell that worked initially.
Edit: I found that leaving out the early stopping solves the problem, but that is not an option for me. Is there a way to include early stopping in a grid search together with multiprocessing?
import numpy as np
from math import ceil, floor
from sklearn.model_selection import GridSearchCV
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor

def create_model(n_weights=1000, hidden_layers=1, learning_rate=0.001):
    # formulas derived from nWeights = sum (d(l-1)+1)*d(l) for all layers l with output dim d(l)
    if hidden_layers == 1:    # 100% of neurons in first hidden layer
        neurons = [ceil((n_weights - 1) / 3)]
    elif hidden_layers == 2:  # 70% / 30% split of neurons
        x = 1/7 * (np.sqrt(21 * n_weights + 79) - 10)
        neurons = list(map(floor, [7/3 * x, x]))
    elif hidden_layers == 3:  # 50% / 30% / 20% split
        x = 1/21 * (np.sqrt(84 * n_weights + 205) - 17)
        neurons = list(map(floor, [5/2 * x, 3/2 * x, x]))
    else:
        raise Exception('Only 1, 2 or 3 layers allowed')

    model = Sequential([Dense(neurons[0], activation='relu', input_dim=1)])
    for n in neurons[1:]:
        model.add(Dense(n, activation='relu'))
    model.add(Dense(1))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss='mse')
    return model

batch_size = [64, 128]
learning_rate = [0.01, 0.001]
hidden_layers = [1, 2, 3]
n_weights = [10, 30, 60]
p_grid = dict(n_weights=n_weights, hidden_layers=hidden_layers, batch_size=batch_size, learning_rate=learning_rate)

earlyStop = keras.callbacks.EarlyStopping(monitor='loss', patience=3, restore_best_weights=True)
epochs = 5000

# first grid search
model1 = KerasRegressor(create_model, epochs=epochs, verbose=0)
grid1 = GridSearchCV(estimator=model, param_grid=p_grid, n_jobs=-1, cv=4, verbose=1)
result1 = grid1.fit(X_train1, y_train1, callbacks=[earlyStop])

# second grid search
model2 = KerasRegressor(create_model, epochs=epochs, verbose=0)
grid2 = GridSearchCV(estimator=model, param_grid=p_grid, n_jobs=-1, cv=4, verbose=1)
result2 = grid2.fit(X_train1, y_train1, callbacks=[earlyStop])
The above code runs two identical grid searches on two different data sets. The first one works fine, but the second fails in this line
grid2 = GridSearchCV(estimator=model, param_grid=p_grid, n_jobs=-1, cv=4, verbose=1)
with the error
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\oli-w\anaconda3\envs\tensorflow\lib\site-packages\joblib\externals\loky\backend\queues.py", line 153, in _feed
obj_ = dumps(obj, reducers=reducers)
File "C:\Users\oli-w\anaconda3\envs\tensorflow\lib\site-packages\joblib\externals\loky\backend\reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "C:\Users\oli-w\anaconda3\envs\tensorflow\lib\site-packages\joblib\externals\loky\backend\reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "C:\Users\oli-w\anaconda3\envs\tensorflow\lib\site-packages\joblib\externals\cloudpickle\cloudpickle_fast.py", line 602, in dump
return Pickler.dump(self, obj)
TypeError: cannot pickle '_thread.RLock' object
"""
The above exception was the direct cause of the following exception:
PicklingError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_9700/1799096090.py in <module>
1 # second grid search
2 model2 = KerasRegressor(create_model, epochs=epochs, verbose=0)
----> 3 grid2 = GridSearchCV(estimator=model, param_grid=p_grid, n_jobs=-1, cv=4, verbose=1)
4 result2 = grid2.fit(X_train1, y_train1, callbacks=[earlyStop])
~\anaconda3\envs\tensorflow\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params)
889 return results
890
--> 891 self._run_search(evaluate_candidates)
892
893 # multimetric is determined here because in the case of a callable
~\anaconda3\envs\tensorflow\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates)
1390 def _run_search(self, evaluate_candidates):
1391 """Search all candidates in param_grid"""
-> 1392 evaluate_candidates(ParameterGrid(self.param_grid))
1393
1394
~\anaconda3\envs\tensorflow\lib\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params, cv, more_results)
836 )
837
--> 838 out = parallel(
839 delayed(_fit_and_score)(
840 clone(base_estimator),
~\anaconda3\envs\tensorflow\lib\site-packages\joblib\parallel.py in __call__(self, iterable)
1054
1055 with self._backend.retrieval_context():
-> 1056 self.retrieve()
1057 # Make sure that we get a last message telling us we are done
1058 elapsed_time = time.time() - self._start_time
~\anaconda3\envs\tensorflow\lib\site-packages\joblib\parallel.py in retrieve(self)
933 try:
934 if getattr(self._backend, 'supports_timeout', False):
--> 935 self._output.extend(job.get(timeout=self.timeout))
936 else:
937 self._output.extend(job.get())
~\anaconda3\envs\tensorflow\lib\site-packages\joblib\_parallel_backends.py in wrap_future_result(future, timeout)
540 AsyncResults.get from multiprocessing."""
541 try:
--> 542 return future.result(timeout=timeout)
543 except CfTimeoutError as e:
544 raise TimeoutError from e
~\anaconda3\envs\tensorflow\lib\concurrent\futures\_base.py in result(self, timeout)
437 raise CancelledError()
438 elif self._state == FINISHED:
--> 439 return self.__get_result()
440 else:
441 raise TimeoutError()
~\anaconda3\envs\tensorflow\lib\concurrent\futures\_base.py in __get_result(self)
386 def __get_result(self):
387 if self._exception:
--> 388 raise self._exception
389 else:
390 return self._result
PicklingError: Could not pickle the task to send it to the workers.
I would understand if the grid search didn't work at all, but it confuses me that it works once and then starts throwing errors. I was only able to run another grid search after restarting the kernel. What can I do to run multiple grid searches in one session without restarting the kernel?
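Given the Edit above (dropping early stopping makes the error go away), one possible explanation is that the EarlyStopping instance keeps a reference to the trained Keras model, and its internal thread locks, after the first fit, so joblib can no longer pickle the task for the next search's workers. The following is only a sketch of how one might test that hypothesis by building a fresh callback for every grid search; it is an assumption, not a confirmed fix:
def make_early_stop():
    # a brand-new callback object for every search, so no state from a previous fit is carried over
    return keras.callbacks.EarlyStopping(monitor='loss', patience=3, restore_best_weights=True)

# passing the freshly created wrappers explicitly (the original snippet passes `model`)
model1 = KerasRegressor(create_model, epochs=epochs, verbose=0)
grid1 = GridSearchCV(estimator=model1, param_grid=p_grid, n_jobs=-1, cv=4, verbose=1)
result1 = grid1.fit(X_train1, y_train1, callbacks=[make_early_stop()])

model2 = KerasRegressor(create_model, epochs=epochs, verbose=0)
grid2 = GridSearchCV(estimator=model2, param_grid=p_grid, n_jobs=-1, cv=4, verbose=1)
result2 = grid2.fit(X_train1, y_train1, callbacks=[make_early_stop()])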

ValueError: at least one array or dtype is required HistGradientBoostingRegressor

I am using HistGradientBoostingRegressor.
My code:
from sklearn.model_selection import train_test_split
# on older scikit-learn releases the estimator is experimental and also needs:
# from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingRegressor

X_train, X_test, y_train, y_test = train_test_split(Train, target, test_size=0.2, random_state=16)

model = HistGradientBoostingRegressor(learning_rate=0.1,
                                      max_iter=800,
                                      random_state=16,
                                      validation_fraction=None,
                                      verbose=0,
                                      max_depth=12,
                                      min_samples_leaf=25,
                                      l2_regularization=0.05)
model.fit(X_train, y_train)
The .fit() call raises the error shown in the title, and I didn't understand much from the documentation, as I am new to ML.
Error:
ValueError Traceback (most recent call last)
in
7 min_samples_leaf=25,
8 l2_regularization=0.05)
----> 9 model.fit(X_train, y_train)
/opt/conda/lib/python3.6/site-packages/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py in fit(self, X, y)
102 # time spent predicting X for gradient and hessians update
103 acc_prediction_time = 0.
--> 104 X, y = check_X_y(X, y, dtype=[X_DTYPE], force_all_finite=False)
105 y = self._encode_y(y)
106
/opt/conda/lib/python3.6/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator)
753 ensure_min_features=ensure_min_features,
754 warn_on_dtype=warn_on_dtype,
--> 755 estimator=estimator)
756 if multi_output:
757 y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False,
/opt/conda/lib/python3.6/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
473
474 if all(isinstance(dtype, np.dtype) for dtype in dtypes_orig):
--> 475 dtype_orig = np.result_type(*dtypes_orig)
476
477 if dtype_numeric:
<array_function internals> in result_type(*args, **kwargs)
ValueError: at least one array or dtype is required
Thanks for any help
I had a similar issue when using StandardScaler's fit_transform().
I found that one column was non-numeric, which caused:
ValueError: at least one array or dtype is required
I dropped that column to resolve the issue.
Please look into the data types of your data.
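For example, here is a minimal sketch assuming the training data is a pandas DataFrame named X_train, as in the question; 'some_column' is a placeholder name, not a column from the original data:
import pandas as pd

print(X_train.dtypes)  # object-dtype columns are the usual culprits

# Option 1: keep only numeric columns
X_train_numeric = X_train.select_dtypes(include='number')

# Option 2: convert an offending column to numeric, turning bad values into NaN
# X_train['some_column'] = pd.to_numeric(X_train['some_column'], errors='coerce')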

Explaining LSTM keras with Eli5 library

I'm trying to use Eli5 for explaining an LSTM keras model for time series prediction. The keras model receives as input an array with shape (nsamples, timesteps, nfeatures).
This is my code:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

def baseline_model():
    model = Sequential()
    model.add(LSTM(32, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Dropout(0.2))
    model.add(Dense(1))
    model.compile(loss='logcosh', optimizer='adam')
    return model

from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
import eli5
from eli5.sklearn import PermutationImportance

my_model = KerasRegressor(build_fn=baseline_model, nb_epoch=30, batch_size=32, verbose=False)
history = my_model.fit(X_train, y_train)
So far, everything is OK. The problem appears when I execute the following line, which raises an error:
# X_train has a shape equal to (nsamples, timesteps, nfeatures) and y_train has a shape (nsamples)
perm = PermutationImportance(my_model, random_state=1).fit(X_train, y_train)
Error:
ValueError Traceback (most recent call last)
in ()
2 d2_train_dataset = X_train.reshape((nsamples, timesteps * features))
3
----> 4 perm = PermutationImportance(my_model, random_state=1).fit(X_train, y_train)
5 #eli5.show_weights(perm, feature_names = X.columns.tolist())
~/anaconda3/lib/python3.6/site-packages/eli5/sklearn/permutation_importance.py in fit(self, X, y, groups, **fit_params)
183 self.estimator_.fit(X, y, **fit_params)
184
--> 185 X = check_array(X)
186
187 if self.cv not in (None, "prefit"):
~/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
568 if not allow_nd and array.ndim >= 3:
569 raise ValueError("Found array with dim %d. %s expected <= 2."
--> 570 % (array.ndim, estimator_name))
571 if force_all_finite:
572 _assert_all_finite(array,
ValueError: Found array with dim 3. Estimator expected <= 2.
What can I do to fix this error? How can I use eli5 with my LSTM Keras Model?
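The error comes from eli5's PermutationImportance, which validates its input with scikit-learn's check_array and therefore only accepts 2D arrays, while the LSTM needs (nsamples, timesteps, nfeatures). One possible workaround, sketched here as a suggestion rather than a confirmed solution, is to feed the wrapper flattened 2D data and restore the 3D shape inside the model with a Reshape layer:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Reshape

timesteps, nfeatures = X_train.shape[1], X_train.shape[2]

def baseline_model_2d():
    # same LSTM model, but the first layer turns flat rows back into (timesteps, nfeatures)
    model = Sequential()
    model.add(Reshape((timesteps, nfeatures), input_shape=(timesteps * nfeatures,)))
    model.add(LSTM(32))
    model.add(Dropout(0.2))
    model.add(Dense(1))
    model.compile(loss='logcosh', optimizer='adam')
    return model

X_train_2d = X_train.reshape(X_train.shape[0], -1)  # (nsamples, timesteps * nfeatures)
my_model = KerasRegressor(build_fn=baseline_model_2d, nb_epoch=30, batch_size=32, verbose=False)
my_model.fit(X_train_2d, y_train)
perm = PermutationImportance(my_model, random_state=1).fit(X_train_2d, y_train)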

Found input variables with inconsistent numbers of samples

It looks like I have some inconsistency, but I am not able to troubleshoot.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def train(classifier, X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)
    print(X_train.shape)
    print(y_train.shape)
    classifier.fit(X_train, y_train)
    print("Accuracy: %s" % classifier.score(X_test, y_test))
    return classifier

def load_data_and_labels(filename):
    """Load sentences and labels"""
    df = pd.read_csv(filename, compression='zip')
    columns = df.columns.tolist()
    columns = [c for c in columns if c not in ["col1", "_id", "col2", "col3"]]
    target = "target_col"
    df = df.dropna(axis=0, how='any', subset=columns)  # Drop null rows
    # x_raw = df[columns]
    # y_raw = df[target]
    return df[columns], df[target], df

input_file = '.file.zip'
x_raw, y_raw, df = load_data_and_labels(input_file)

trial1 = Pipeline([
    ('vectorizer', TfidfVectorizer()),
    ('classifier', MultinomialNB()),
])

train(trial1, x_raw, y_raw)
I am getting the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-411-be9c83097d5b> in <module>()
5 ])
6
----> 7 train(trial1, x_raw, y_raw)
<ipython-input-407-33b2ec7fc112> in train(classifier, X, y)
3 print(X_train.shape)
4 print(y_train.shape)
----> 5 classifier.fit(X_train, y_train)
6
7
/home/manisha/anaconda3/lib/python3.5/site-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params)
268 Xt, fit_params = self._fit(X, y, **fit_params)
269 if self._final_estimator is not None:
--> 270 self._final_estimator.fit(Xt, y, **fit_params)
271 return self
272
/home/manisha/anaconda3/lib/python3.5/site-packages/sklearn/naive_bayes.py in fit(self, X, y, sample_weight)
560 Returns self.
561 """
--> 562 X, y = check_X_y(X, y, 'csr')
563 _, n_features = X.shape
564
/home/manisha/anaconda3/lib/python3.5/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator)
529 y = y.astype(np.float64)
530
--> 531 check_consistent_length(X, y)
532
533 return X, y
/home/manisha/anaconda3/lib/python3.5/site-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
179 if len(uniques) > 1:
180 raise ValueError("Found input variables with inconsistent numbers of"
--> 181 " samples: %r" % [int(l) for l in lengths])
182
183
ValueError: Found input variables with inconsistent numbers of samples: [3, 2997]
The shapes of my X_train and y_train are (2997, 3) and (2997,), respectively.
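A hedged illustration of where the mismatched 3 can come from (this is my own diagnosis, not from the original post): iterating over a DataFrame yields its column names, so passing the 3-column x_raw to TfidfVectorizer makes it see only 3 "documents", while y_train still has 2997 labels.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

demo = pd.DataFrame({'text_a': ['foo bar', 'bar baz', 'baz qux', 'qux foo'],
                     'text_b': ['spam ham', 'ham eggs', 'eggs spam', 'spam spam'],
                     'text_c': ['red blue', 'blue green', 'green red', 'red red']})

print(list(demo))                                               # ['text_a', 'text_b', 'text_c']
print(TfidfVectorizer().fit_transform(demo).shape[0])           # 3: one "document" per column name
print(TfidfVectorizer().fit_transform(demo['text_a']).shape[0]) # 4: one row per sample
Passing a single text column (a Series) to the vectorizer, or joining the text columns into one string per row, keeps the number of samples aligned with y.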

Keras error when finetuning InceptionV3

I am trying to follow the "Fine-tune InceptionV3 on a new set of classes" sample code to freeze the first 172 layers and re-train the last layers on cats/dogs dataset. I keep getting an error which I have noted at the bottom. Please help. I am using Ubuntu 16.04, keras 1.2.1, theano 0.9.0beta1.dev, numpy 1.12.0 and python 3.5.
from PIL import Image
import os
import matplotlib.pyplot as plt
import numpy as np

data_root_dir = "/home/ubuntu/ML/data/dogscats/"
train_dir = os.path.join(data_root_dir, "sample", "train")
valid_dir = os.path.join(data_root_dir, "valid")

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=True)

# add a global spatial average pooling layer
x = base_model.output
#x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(2, activation='softmax')(x)

# this is the model we will train
model = Model(input=base_model.input, output=predictions)

for layer in model.layers[:172]:
    layer.trainable = False
for layer in model.layers[172:]:
    layer.trainable = True

from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

from sklearn.preprocessing import OneHotEncoder

def get_data(path, target_size=(299, 299)):
    batches = get_batches(path, shuffle=False, batch_size=1, class_mode=None, target_size=target_size)
    return np.concatenate([batches.next() for i in range(batches.nb_sample)])

def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=2, class_mode='categorical',
                target_size=(299, 299)):
    return gen.flow_from_directory(dirname, target_size=target_size,
                                   class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)

def onehot(x):
    return np.array(OneHotEncoder().fit_transform(x.reshape(-1, 1)).todense())

# Use batch size of 1 since we're just doing preprocessing on the CPU
val_batches = get_batches(valid_dir, shuffle=False, batch_size=10)
train_batches = get_batches(train_dir, shuffle=False, batch_size=10)

val_classes = val_batches.classes
trn_classes = train_batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)

model.fit_generator(train_batches, samples_per_epoch=train_batches.n, nb_epoch=10,
                    validation_data=val_batches, nb_val_samples=val_batches.n)
The exception is: padding must be zero for average_exc_pad
Here is the full stack-trace:
ValueError Traceback (most recent call last)
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
883 outputs =\
--> 884 self.fn() if output_subset is None else\
885 self.fn(output_subset=output_subset)
ValueError: padding must be zero for average_exc_pad
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-4-369d7760ec6e> in <module>()
34
35 model.fit_generator(train_batches, samples_per_epoch=train_batches.n, nb_epoch=10,
---> 36 validation_data=val_batches, nb_val_samples=val_batches.n)
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/keras/engine/training.py in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch)
1551 outs = self.train_on_batch(x, y,
1552 sample_weight=sample_weight,
-> 1553 class_weight=class_weight)
1554
1555 if not isinstance(outs, list):
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
1314 ins = x + y + sample_weights
1315 self._make_train_function()
-> 1316 outputs = self.train_function(ins)
1317 if len(outputs) == 1:
1318 return outputs[0]
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/keras/backend/theano_backend.py in __call__(self, inputs)
957 def __call__(self, inputs):
958 assert isinstance(inputs, (list, tuple))
--> 959 return self.function(*inputs)
960
961
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
896 node=self.fn.nodes[self.fn.position_of_error],
897 thunk=thunk,
--> 898 storage_map=getattr(self.fn, 'storage_map', None))
899 else:
900 # old-style linkers raise their own exceptions
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/theano/gof/link.py in raise_with_op(node, thunk, exc_info, storage_map)
323 # extra long error message in that case.
324 pass
--> 325 reraise(exc_type, exc_value, exc_trace)
326
327
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/six.py in reraise(tp, value, tb)
683 value = tp()
684 if value.__traceback__ is not tb:
--> 685 raise value.with_traceback(tb)
686 raise value
687
/home/ubuntu/anaconda3/envs/tensorflow/lib/python3.5/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
882 try:
883 outputs =\
--> 884 self.fn() if output_subset is None else\
885 self.fn(output_subset=output_subset)
886 except Exception:
ValueError: padding must be zero for average_exc_pad
Apply node that caused the error: AveragePoolGrad{ignore_border=True, mode='average_exc_pad', ndim=2}(Join.0, IncSubtensor{InplaceInc;::, ::, :int64:, :int64:}.0, TensorConstant{(2,) of 3}, TensorConstant{(2,) of 1}, TensorConstant{(2,) of 1})
Toposort index: 5270
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D), TensorType(int64, vector), TensorType(int64, vector), TensorType(int64, vector)]
Inputs shapes: [(10, 2048, 8, 8), (10, 2048, 8, 8), (2,), (2,), (2,)]
Inputs strides: [(524288, 256, 32, 4), (524288, 256, 32, 4), (8,), (8,), (8,)]
Inputs values: ['not shown', 'not shown', array([3, 3]), array([1, 1]), array([1, 1])]
Outputs clients: [[Elemwise{add,no_inplace}(CorrMM_gradInputs{half, (1, 1), (1, 1)}.0, CorrMM_gradInputs{half, (1, 1), (1, 1)}.0, CorrMM_gradInputs{half, (1, 1), (1, 1)}.0, AveragePoolGrad{ignore_border=True, mode='average_exc_pad', ndim=2}.0)]]
Fine-tuning in that situation possibly means using the convolutional layers as pre-trained feature extractors. So you don't really want the top layers (densely connected layers) of the Inception network.
Changing
base_model = InceptionV3(weights='imagenet', include_top=True)
to
base_model = InceptionV3(weights='imagenet', include_top=False)
should work.
Also, if you have 200 classes you should change
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(2, activation='softmax')(x)
to
predictions = Dense(200, activation='softmax')(x)
So your last layer will have the desired 200 elements.
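Putting both suggestions together, here is a sketch of the adjusted model head; it also re-enables the GlobalAveragePooling2D line that was commented out in the question, since with include_top=False the base model's output is still a 4D convolutional feature map:
base_model = InceptionV3(weights='imagenet', include_top=False)  # drop Inception's own classifier

x = base_model.output
x = GlobalAveragePooling2D()(x)                   # flatten the 4D feature map to (n_samples, channels)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)   # 2 for cats/dogs; use 200 if you really have 200 classes

model = Model(input=base_model.input, output=predictions)  # Keras 1.x argument names, as in the question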
