ValueError: Graph disconnected in vgg16 - keras

Traceback:
model = Model(input_tensor,x,name = 'vgg16_trunk')
File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 93, in __init__
self._init_graph_network(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 231, in _init_graph_network
self.inputs, self.outputs)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 1443, in _map_graph_network
str(layers_with_complete_input))
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(?, 32, 32, 3), dtype=float32) at layer "input_2". The following previous layers were accessed without issue: []
How can I solve this problem with VGG16?
def create_model(input_shape):
    channel_axis = 1 if K.image_data_format() == "channels_first" else -1
    input_tensor = Input(shape=input_shape)
    base_model = VGG16(classes=10, input_tensor=None, input_shape=input_shape, include_top=False)
    x = base_model.output
    x = BatchNormalization(axis=channel_axis, momentum=mom,
                           epsilon=eps, gamma_initializer=gamma)(x)
    x = LeakyReLU(leakiness)(x)
    model = Model(input_tensor, x, name='vgg16_trunk')
    return model

Pass the input_tensor you created here:
input_tensor = Input(shape=input_shape)
to the call where base_model is created:
base_model = VGG16(classes=10, input_tensor=input_tensor, include_top=False)
Note also that the tensor already carries the input_shape, so it is not necessary to pass it again as a parameter when creating the base_model.
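Putting it together, the corrected function might look like the sketch below. The hyperparameters mom, eps, gamma and leakiness come from the original question; the placeholder values here are assumptions added only so the snippet runs on its own.
from keras import backend as K
from keras.layers import Input, BatchNormalization, LeakyReLU
from keras.models import Model
from keras.applications.vgg16 import VGG16

# placeholder hyperparameters; the original question defines these elsewhere
mom, eps, gamma, leakiness = 0.99, 1e-3, 'ones', 0.1

def create_model(input_shape):
    channel_axis = 1 if K.image_data_format() == "channels_first" else -1
    input_tensor = Input(shape=input_shape)
    # pass the Input tensor into VGG16 so the whole graph is connected
    base_model = VGG16(classes=10, input_tensor=input_tensor, include_top=False)
    x = base_model.output
    x = BatchNormalization(axis=channel_axis, momentum=mom,
                           epsilon=eps, gamma_initializer=gamma)(x)
    x = LeakyReLU(leakiness)(x)
    # input_tensor is now part of the VGG16 graph, so the model builds
    # without the "Graph disconnected" error
    model = Model(input_tensor, x, name='vgg16_trunk')
    return model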

Related

Problem using tf.keras.utils.timeseries_dataset_from_array in Functional Keras API

I am working on building an LSTM model for the M5 Forecasting Challenge (a Kaggle dataset).
I am using the functional Keras API to build my model; I have attached a picture of it. The input is generated using tf.keras.utils.timeseries_dataset_from_array, and the error I receive is:
ValueError: Layer "model_4" expects 18 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, 18) dtype=float32>]
This is the code I am using to generate a time series dataset.
dataset = tf.keras.utils.timeseries_dataset_from_array(data=array, targets=None,
                                                       sequence_length=window, sequence_stride=1, batch_size=32)
My NN model:
input_tensors = {}
for col in train_sel.columns:
    if col in cat_cols:
        input_tensors[col] = layers.Input(name=col, shape=(1,), dtype=tf.string)
    else:
        input_tensors[col] = layers.Input(name=col, shape=(1,), dtype=tf.float16)
embedding = []
for feature in input_tensors:
    if feature in cat_cols:
        embed = layers.Embedding(input_dim=train_sel[feature].nunique(),
                                 output_dim=int(math.sqrt(train_sel[feature].nunique())))
        embed = embed(input_tensors[feature])
    else:
        embed = layers.BatchNormalization()
        embed = embed(tf.expand_dims(input_tensors[feature], -1))
    embedding.append(embed)
temp = embedding
embedding = layers.concatenate(inputs=embedding)
nn_model = layers.LSTM(128)(embedding)
nn_model = layers.Dropout(0.1)(nn_model)
output = layers.Dense(1, activation='tanh')(nn_model)
model = tf.keras.Model(inputs=split_input, outputs=output)
Presently, I am fitting the model using
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.losses.MeanSquaredError()])
model.fit(dataset, epochs=5)
I am receiving this ValueError:
ValueError: in user code:
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 889, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.8/dist-packages/keras/engine/input_spec.py", line 200, in assert_input_compatibility
raise ValueError(f'Layer "{layer_name}" expects {len(input_spec)} input(s),'
ValueError: Layer "model_4" expects 18 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, 18) dtype=float32>]
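As a side note (not part of the original post), the mismatch reported in the error can be made visible with a quick check on the objects defined above: timeseries_dataset_from_array with targets=None yields one stacked tensor per batch, while the model declares 18 separate named Input layers and therefore expects 18 tensors per batch.
# quick diagnostic, assuming `dataset` and `model` from the question
print(dataset.element_spec)   # a single TensorSpec of shape (None, None, 18): one stacked tensor
print(len(model.inputs))      # 18: the model expects one tensor per input column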

ValueError: Shapes (None, 200, 3) and (1, 3) are incompatible

This is the model that I am trying to train to identify the possible tag (out of three tags) for each word. I have also added a layer from another model, whose output is a tensor of shape [1, 100], and concatenated it with the BiLSTM output:
input1_entity = Input(shape = (200,))
last_hidden_layer_output = last_hidden_layer(tensorflow.reshape(input1_entity, [1, 200]))
embedding_entity = Embedding((4817), 200, input_length = 200, weights = [embedding_matrix], trainable = False)(input1_entity)
bilstm1_entity = Bidirectional(LSTM(100, return_sequences = True, recurrent_dropout = 0.2), merge_mode = 'concat')(embedding_entity)
lstm1_entity = Bidirectional(LSTM(100, return_sequences = True, dropout = 0.5, recurrent_dropout = 0.2))(bilstm1_entity)
lstm2_entity = Bidirectional(LSTM(50))(lstm1_entity)
merge_layer = concatenate([lstm2_entity, last_hidden_layer_output])
dense1_entity = Dense(128, activation = 'relu')(merge_layer)
dense2_entity = Dense(128, activation = 'relu')(dense1_entity)
dropout1_entity = Dropout(0.5)(dense2_entity)
dense3_entity = Dense(64, activation = 'tanh')(dropout1_entity)
output1_entity = Dense(3, activation = 'softmax')(dense3_entity)
model_entity = Model(inputs = input1_entity, outputs = output1_entity)
model_entity.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=[tensorflow.keras.metrics.CategoricalAccuracy()],
    sample_weight_mode='temporal'
)
And this is how I am training the model:
history = model_entity.fit(pad_tokens_train,
                           np.array(pad_tags_train),
                           batch_size=250,
                           verbose=1,
                           epochs=50,
                           sample_weight=sample_weight,
                           validation_split=0.2)
But I keep on getting this error -
ValueError: in user code:
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/engine/training.py", line 878, in train_function *
return step_function(self, iterator)
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/engine/training.py", line 867, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/engine/training.py", line 860, in run_step **
outputs = model.train_step(data)
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/engine/training.py", line 809, in train_step
loss = self.compiled_loss(
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/losses.py", line 1664, in categorical_crossentropy
return backend.categorical_crossentropy(
File "/Users/kawaii/miniforge3/envs/tensor_no_gpu/lib/python3.8/site-packages/keras/backend.py", line 4994, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 200, 3) and (1, 3) are incompatible
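A quick check (not part of the original post) makes the mismatch concrete: the final Bidirectional(LSTM(50)) has return_sequences left at False, so the model emits one 3-way prediction per sequence, and the tf.reshape to [1, 200] pins the batch dimension to 1, while pad_tags_train supplies one one-hot tag per token.
# diagnostic sketch, assuming `model_entity` and `pad_tags_train` from the question
print(model_entity.output_shape)          # (1, 3): one prediction per sequence
print(np.array(pad_tags_train).shape)     # (num_samples, 200, 3): one tag per token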

Tensorflow map_fn Error PartialTensorShape: Incompatible ranks during merge

The following code is giving me an error which I cannot find the answer to. I am trying to apply a Python function to each element of a tensor, which transforms the element into a vector of shape 3, so I can calculate a custom evaluation metric. It needs to be a Python function because it is used in other places too.
The error (log below) is Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0, and I assume it has to do with the result of map_fn and its shape. However, it only happens at runtime; with any other shape it already throws an incompatible-shapes error when I call model.compile(). Have I misunderstood how to use map_fn? Any suggestions?
Thanks in advance!
2021-04-09 12:19:31.357542: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at list_kernels.h:101 : Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0
Traceback (most recent call last):
File "test.py", line 93, in <module>
validation_data=(val_input, val_output))
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 840, in _call
return self._stateless_fn(*args, **kwds)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
cancellation_manager=cancellation_manager)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 550, in call
ctx=ctx)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0
[[node map/TensorArrayV2Stack/TensorListStack (defined at test.py:27) ]]
[[map_1/while/LoopCond/_50/_64]]
(1) Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0
[[node map/TensorArrayV2Stack/TensorListStack (defined at test.py:27) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_823]
Function call stack:
train_function -> train_function
This is the code to reproduce the issue, using Tensorflow 2.3.1 and Python 3.6.
from typing import List
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Flatten

INPUT_SHAPE = (2, 10, 10)

class CustomMetric(tf.keras.metrics.Metric):
    def __init__(self, name='custom_metric', **kwargs):
        super().__init__(name=name, **kwargs)
        self.mean_custom_metric = self.add_weight(name='mean_custom_metric', initializer='zeros', dtype=float)

    def update_state(self, y_true, y_pred, sample_weight=None):
        # y_true is a probability distribution (batch, 2*10*10), so find index of most likely position
        y_pred = tf.argmax(y_pred, axis=1)
        # y_pred and y_true are both tensors with shape (batch, 1)
        print(f"y_pred: {y_pred}")
        # apply python func to convert each value to a 3D value (single scalar to vector with 3 scalars)
        # according to docs: map_fn(fn, elems).shape = [elems.shape[0]] + fn(elems[0]).shape.
        # So: elems.shape[0] == batch | fn(elems[0]).shape == 3,
        # error happens when trying to do anything with the result of map_fn below
        y_true_positions = tf.map_fn(self.wrapper, y_true, fn_output_signature=tf.float32)
        y_pred_positions = tf.map_fn(self.wrapper, y_pred, fn_output_signature=tf.float32)
        # y_true_positions, y_pred_positions: tensors with shape (batch, 3)
        print(f"y_true_positions: {y_true_positions}")
        # do something with y_true_positions and y_pred_positions
        y_final = y_true_positions
        mean = tf.reduce_sum(y_final)
        print('---')
        self.mean_custom_metric.assign(mean)

    def result(self):
        return self.mean_custom_metric

    def reset_states(self):
        self.mean_custom_metric.assign(0.0)

    def wrapper(self, x):
        # x: tensor with shape (1,)
        print(f"x: {x}")
        result = tf.py_function(python_function, [int(x)], tf.float32)
        # result is a tensor of unknown shape
        print(f"result: {result}")
        result.set_shape(tf.TensorShape(3))
        # result: tensor with shape (3,)
        print(f"result: {result}")
        return result

def python_function(index: int) -> List[float]:
    # dummy function
    return [0, 0, 0]

# dummy model
block_positions = Input(shape=(*INPUT_SHAPE, 1), dtype=tf.float32)
block_positions_layer = Flatten()(block_positions)
target_output_layer = Dense(128, activation='relu')(block_positions_layer)
target_output = Dense(np.prod(INPUT_SHAPE), activation='softmax', name='regions')(target_output_layer)
model = tf.keras.models.Model(
    inputs=[block_positions],
    outputs=(target_output))

custom_metric = CustomMetric()
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    optimizer=tf.optimizers.Adam(learning_rate=0.001),
    metrics=['accuracy', custom_metric])
print(model.summary())

# placeholder data
train_input = np.zeros(shape=(100, *INPUT_SHAPE), dtype=np.float32)
train_output = np.zeros(shape=(100, 1), dtype=np.int32)
val_input = np.zeros(shape=(100, *INPUT_SHAPE), dtype=np.float32)
val_output = np.zeros(shape=(100, 1), dtype=np.int32)

history = model.fit(
    train_input, train_output, epochs=10, verbose=1,
    validation_data=(val_input, val_output))
I found the solution after a while. The wrapper function was returning a tensor of shape (3,), whereas map_fn was applied over a tensor of shape (batch, 1). I don't fully understand why, but it seems that map_fn requires a return tensor of shape (batch, 1) and not fn(elems[0]).shape as the documentation suggests.
Changing the line:
result.set_shape(tf.TensorShape(3))
to
result = tf.reshape(tf.concat(result, 1), (1, 3))
in wrapper, so that the return value has shape (1, 3) instead of (3,), fixed the issue. After map_fn, you end up with a tensor of shape (batch, 1, 3), which I then reshaped to (batch, 3).
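For reference, a sketch of the wrapper method with that change applied (everything else in the question's CustomMetric class stays the same; tf and python_function are as defined in the question):
def wrapper(self, x):
    # x: tensor with shape (1,)
    result = tf.py_function(python_function, [int(x)], tf.float32)
    # reshape the unknown-shape result to an explicit (1, 3) per element,
    # so map_fn can stack the per-element outputs into (batch, 1, 3)
    result = tf.reshape(tf.concat(result, 1), (1, 3))
    return result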

Keras Flatten layer in Functional API?

model = Sequential()
model.add(Flatten(input_shape=(1,) + (52,)))
model.add(Dense(100))
model.add(Activation('relu'))
model.add(Dense(2))
model.add(Activation('linear'))
print(model.summary())
I want to convert this Keras code from the Sequential version to the equivalent functional API version, like the following:
input = Input(shape=(1,) + (52,))
i = Flatten()(input)
h = Dense(100, activation='relu')(i)
o = Dense(2, activation='linear')(h)
model = Model(inputs=i, outputs=o)
model.summary()
But it gives this error:
File "C:\Users\SDS\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\SDS\Anaconda3\lib\site-packages\keras\engine\network.py", line 93, in __init__
self._init_graph_network(*args, **kwargs)
File "C:\Users\SDS\Anaconda3\lib\site-packages\keras\engine\network.py", line 237, in _init_graph_network
self.inputs, self.outputs)
File "C:\Users\SDS\Anaconda3\lib\site-packages\keras\engine\network.py", line 1430, in _map_graph_network
str(layers_with_complete_input))
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 1, 52), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
Your model definition is incorrect; the inputs parameter of Model should be your Input layer, like this:
input = Input(shape=(1,) + (52,))
i = Flatten()(input)
h = Dense(100, activation='relu')(i)
o = Dense(2, activation='linear')(h)
model = Model(inputs=input, outputs=o)
I believe you cannot pass any tensor other than the Input layer as the input to a model.
The input of the model should be the Input layer itself (the very first layer), not the output of a downstream layer.
So it should be:
model = Model(inputs=input, outputs=o)
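Putting both corrections together, a complete functional version equivalent to the Sequential model might look like this (a minimal sketch with the same layer sizes as the question):
from keras.layers import Input, Flatten, Dense
from keras.models import Model

inputs = Input(shape=(1, 52))            # same as (1,) + (52,)
x = Flatten()(inputs)
x = Dense(100, activation='relu')(x)
outputs = Dense(2, activation='linear')(x)
# the Input layer itself is passed as `inputs`, not the Flatten output
model = Model(inputs=inputs, outputs=outputs)
model.summary()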

ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32

I am currently studying TensorFlow. I used a pre-trained model for prediction through a Django app, but during prediction I got the error below. Please help me resolve it.
def alpha_to_color(image, color=(255, 255, 255)):
    x = np.array(image)
    r, g, b, a = np.rollaxis(x, axis=-1)
    r[a == 0] = color[0]
    g[a == 0] = color[1]
    b[a == 0] = color[2]
    x = np.dstack([r, g, b, a])
    return Image.fromarray(x, 'RGBA')

def preprocess(data):
    # dimensions of our images.
    img_width, img_height = 250, 250
    dataUrlPattern = re.compile('data:image/(png|jpeg);base64,(.*)$')
    imgb64 = dataUrlPattern.match(data).group(2)
    if imgb64 is not None and len(imgb64) > 0:
        data = base64.b64decode(imgb64)
    im1 = Image.open(BytesIO(data))
    im1 = alpha_to_color(im1)
    im1 = im1.convert('RGB')
    im1 = im1.resize((250, 250))
    print("[INFO] loading and preprocessing image...")
    image = img_to_array(im1)
    image = image.reshape((1,) + image.shape)  # this is a Numpy array with shape (1, 3, 250, 250)
    test_ob = ImageDataGenerator(rescale=1./255)
    X = []
    for batch in test_ob.flow(image, batch_size=1):
        X = batch
        break
    return X
def build_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(250, 250, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(64))
    model.add(Activation('relu'))
    # model.add(Dropout(0.5))
    model.add(Dense(250))
    model.add(Activation('sigmoid'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    module_dir = os.path.dirname(__file__)  # get current directory
    file_path = os.path.join(module_dir, 'bestWeight.hdf5')
    model.load_weights(file_path)
    return model
def load_labels():
    module_dir = os.path.dirname(__file__)  # get current directory
    file_path = os.path.join(module_dir, 'labels.csv')
    df = pd.read_csv(file_path,
                     header=0)
    target_names = df['Category'].tolist()
    return target_names
def predict_labels(data):
    model = build_model()
    image = preprocess(data)
    target_names = load_labels()
    encoder = LabelEncoder()
    encoder.fit(target_names)
    pL = model.predict(image)
    prob = model.predict_proba(image)
    p = np.argsort(pL, axis=1)
    n1 = (p[:, -4:])  # gives top 5 labels
    pL_names = (encoder.inverse_transform(n1))
    pL_names = pL_names[0]
    p = np.sort(prob, axis=1)
    convertperc = [stats.percentileofscore(p[0], a, 'rank') for a in p[0]]
    n = (convertperc[-4:])  # gives top 5 probabilities perc
    prob_values = (p[:, -4:])
    prob_single_values = prob_values[0]
    return zip(pL_names, n, prob_single_values)
The code gives this error:
ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32. Shapes are [3,3,3,32] and [3,3,32,3]. for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
This error occurs when loading the saved weights (model.load_weights inside build_model, as shown in the traceback below). I don't understand why this is happening; if you need any more information I would be happy to provide it.
Here is the full traceback:
Traceback (most recent call last):
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1576, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 2 in both shapes must be equal, but are 3 and 32. Shapes are [3,3,3,32] and [3,3,32,3]. for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
response = get_response(request)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\django\core\handlers\base.py", line 126, in _get_response
response = self.process_exception_by_middleware(e, request)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\django\core\handlers\base.py", line 124, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\django\views\decorators\csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "C:\Users\RAHKARP\Desktop\webApplication\sketchPad\views.py", line 148, in recognizeSketch
result = predict_labels(data)
File "C:\Users\RAHKARP\Desktop\webApplication\sketchPad\views.py", line 113, in predict_labels
model = build_model()
File "C:\Users\RAHKARP\Desktop\webApplication\sketchPad\views.py", line 99, in build_model
model.load_weights(file_path)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\keras\engine\network.py", line 1161, in load_weights
f, self.layers, reshape=reshape)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\keras\engine\saving.py", line 928, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2435, in batch_set_value
assign_op = x.assign(assign_placeholder)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\ops\variables.py", line 645, in assign
return state_ops.assign(self._variable, value, use_locking=use_locking)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\ops\state_ops.py", line 216, in assign
validate_shape=validate_shape)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 63, in assign
use_locking=use_locking, name=name)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func
return func(*args, **kwargs)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3155, in create_op
op_def=op_def)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1731, in __init__
control_input_ops)
File "C:\Users\RAHKARP\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1579, in _create_c_op
raise ValueError(str(e))
ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32. Shapes are [3,3,3,32] and [3,3,32,3]. for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
Could you share a minimal example of your code that reproduces the error and can be executed on our side? That would be very helpful. I suspect the error is caused by the wrong channel order. You generate the batch with the following shape:
image = image.reshape((1,) + image.shape)  # shape = (1, 3, 250, 250)
In Keras, channels should be the last dimension: (1, 250, 250, 3).
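If the batch really is in channels-first order, a minimal sketch of the conversion (assuming image is the (1, 3, 250, 250) NumPy array produced by preprocess) would be:
import numpy as np

# move the channel axis to the end: (1, 3, 250, 250) -> (1, 250, 250, 3)
image = np.transpose(image, (0, 2, 3, 1))
print(image.shape)  # (1, 250, 250, 3), which matches input_shape=(250, 250, 3)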
