ValueError: Input 0 of layer "complex_conv2d" is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: (None, 2, 288) - conv-neural-network

I am working with a complex convolutional neural network and my dataset is complex IQ signal data. I am trying to run the following code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from cvnn import *
import cvnn.layers as complex_layers
import tensorflow as tf
def createSB_modrelu():
    inputs = complex_layers.complex_input((2, 288))
    c0 = complex_layers.ComplexConv2D(32, activation='cart_relu', kernel_size=3)(inputs)
    c1 = complex_layers.ComplexConv2D(32, activation='cart_relu', kernel_size=3)(c0)
    c2 = complex_layers.ComplexMaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(c1)
    t01 = complex_layers.ComplexConv2DTranspose(5, kernel_size=2, strides=(2, 2), activation='cart_relu')(c2)
    concat01 = tf.keras.layers.concatenate([t01, c1], axis=-1)
    c3 = complex_layers.ComplexConv2D(4, activation='cart_relu', kernel_size=3)(concat01)
    out = complex_layers.ComplexConv2D(4, activation='cart_relu', kernel_size=3)(c3)
    return tf.keras.Model(inputs, out)
I am getting this error. Can anyone help me?
ValueError: Input 0 of layer "complex_conv2d" is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: (None, 2, 288)
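A likely cause (my diagnosis, not from the original post): ComplexConv2D, like Keras' Conv2D, expects 4D input of shape (batch, height, width, channels), but complex_input((2, 288)) produces (None, 2, 288), which is only 3D. A minimal sketch of one possible fix, assuming the IQ data should be treated as a single-channel 2 x 288 "image"; the trailing channel axis and the padding='same' are my assumptions:
# Sketch: add an explicit channel axis so the convolutions see 4D tensors.
inputs = complex_layers.complex_input((2, 288, 1))   # shape (None, 2, 288, 1)
# With a height of only 2, a 3x3 'valid' convolution has no room to slide,
# so padding='same' (or 1D convolutions over the 288 axis) is likely needed:
c0 = complex_layers.ComplexConv2D(32, activation='cart_relu',
                                  kernel_size=3, padding='same')(inputs)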

Related

Keep getting errors with Tensorflow python

import tensorflow as tf
import numpy as np
import random
class ModelClass:
    """ used for getting all the juicy payloads and making them """
    def __init__(self, model=None):
        if not model:
            self.model = tf.keras.Sequential([
                tf.keras.layers.Embedding(input_dim=256, output_dim=128),
                tf.keras.layers.LSTM(128),
                tf.keras.layers.Dense(1, activation='sigmoid')
            ])
            self.model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
        else:
            self.model = model

    def train(self):
        payload_list = open('./sqli_tester/all.txt', 'r').readlines()
        seeds = np.random.randint(0, high=2**32-1, size=(500, 1), dtype=np.int64)
        labels = [random.choice(payload_list) for _ in range(500)]
        dataset = tf.data.Dataset.from_tensor_slices((seeds, labels))
        train_dataset = dataset.take(80)
        val_dataset = dataset.skip(80)
        print(train_dataset.as_numpy_iterator())
        #inputs = tf.Tensor(1,(), dtype=tf.int64)
        #train_dataset = tf.constant(train_dataset).reshape(-1,1)
        history = self.model.fit(train_dataset, epochs=10, validation_data=val_dataset, validation_split=0.1)
        # self.model.save('model.h5')
        return self.model

if __name__ == '__main__':
    modelc = ModelClass()
    modelc.train()
I keep getting the error: Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (1, 128). Can anyone help me, please? I use the model in other code, e.g. for predicting data. If possible, an explanation of why this happens and how to avoid it in the future would be amazing. Thank you. The warning I get from TensorFlow is: WARNING:tensorflow:Model was constructed with shape (None, None) for input KerasTensor(type_spec=TensorSpec(shape=(None, None), dtype=tf.float32, name='embedding_input'), name='embedding_input', description="created by layer 'embedding_input'"), but it was called on an input with incompatible shape (1,).
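A likely diagnosis (mine, not from the thread): the dataset is never batched, so Keras feeds single samples of shape (1,); the Embedding layer then outputs (1, 128), a 2D tensor, while LSTM expects a 3D (batch, timesteps, features) tensor. A minimal sketch of a fix, with the batch size of 16 an arbitrary choice; note also that validation_split cannot be combined with a tf.data.Dataset, and that the string payloads will not work as binary_crossentropy labels in any case:
# Batch the datasets so every element carries an explicit batch dimension.
train_dataset = dataset.take(80).batch(16)   # elements now have shape (batch, 1)
val_dataset = dataset.skip(80).batch(16)
# Drop validation_split: it is not supported together with a tf.data.Dataset.
history = self.model.fit(train_dataset, epochs=10, validation_data=val_dataset)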

Keras Input 0 of layer "conv2d_1" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 1)

I am trying to create a branched Keras model with multiple outputs (age and gender).
My inputs X_train and X_test have the shapes (4000, 128, 128, 3) and (1000, 128, 128, 3), respectively.
this is my code to create the layers:
from keras.models import Sequential,load_model,Model
from keras.layers import Conv2D,MaxPool2D,MaxPooling2D,Dense,Dropout,BatchNormalization,Flatten,Input
from keras.layers import *
#model creation
# input_shape = (128, 128, 3)
# inputs = Input((input_shape))
input = Input(shape=(128,128,3))
conv1 = Conv2D(32,(3,3),activation="relu")(input)
pool1 = MaxPool2D((2,2))(conv1)
conv2 = Conv2D(64,(3,3),activation="relu")(pool1)
pool2 = MaxPool2D((2,2))(conv2)
conv3 = Conv2D(128,(3,3),activation="relu")(pool2)
pool3 = MaxPool2D((2,2))(conv3)
flt = Flatten()(pool3)
#age
age_l = Dense(128,activation="relu")(flt)
age_l = Dense(64,activation="relu")(age_l)
age_l = Dense(32,activation="relu")(age_l)
age_l = Dense(1,activation="relu")(age_l)
#gender
gender_l = Dense(128,activation="relu")(flt)
gender_l = Dense(80,activation="relu")(gender_l)
gender_l = Dense(64,activation="relu")(gender_l)
gender_l = Dense(32,activation="relu")(gender_l)
gender_l = Dropout(0.5)(gender_l)
gender_l = Dense(2,activation="softmax")(gender_l)
modelA = Model(inputs=input,outputs=[age_l,gender_l])
modelA.compile(loss=['mse', 'sparse_categorical_crossentropy'], optimizer='adam', metrics=['accuracy', 'mae'])
modelA.summary()
However, I keep getting this error:
ValueError Traceback (most recent call last)
Cell In [27], line 1
----> 1 save = modelA.fit(X_train_arr, [y_train, y_train2],
2 validation_data = (X_test, [y_test, y_test2]),
3 epochs = 30)
ValueError: Exception encountered when calling layer "model" " f"(type Functional).
Input 0 of layer "conv2d_1" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 1)
Call arguments received by layer "model" " f"(type Functional):
• inputs=tf.Tensor(shape=(None, 1), dtype=string)
• training=False
• mask=None
I cannot see what the issue is, as the input dimensions seem to be correct.
Apologies, I have tried studying similar posts and the relevant docs but still do not understand what the issue is!
I have checked your code and run it with some dummy data; at my end it runs fine. Note that the traceback says the model was called on a tensor of shape (None, 1) with dtype=string, so the problem is with your dataset: X_train_arr apparently holds strings (e.g. file paths) rather than the (4000, 128, 128, 3) image array. Kindly check your dataset before passing it to the model.
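A minimal version of that dummy-data check, assuming the model definition above; the shapes and targets here are made up for illustration:
import numpy as np

# Random float images with the expected (N, 128, 128, 3) shape.
X_dummy = np.random.rand(16, 128, 128, 3).astype("float32")
y_age = np.random.randint(0, 100, size=(16, 1)).astype("float32")  # regression target for 'mse'
y_gender = np.random.randint(0, 2, size=(16, 1))                   # integer labels for 'sparse_categorical_crossentropy'

# If this runs, the architecture is fine and the real X_train_arr is to blame.
modelA.fit(X_dummy, [y_age, y_gender], epochs=1)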

Coremltools: errors to get simplest convolutional model working

Suppose I create simplest model in Keras:
from keras.layers import *
from keras import Input, Model
import coremltools
def MyModel(inputs_shape=(None,None,3), channels=64):
    inpt = Input(shape=inputs_shape)
    # channels
    skip = Conv2D(channels, (3, 3), strides=1, activation=None, padding='same', name='conv_in')(inpt)
    out = Conv2D(3, (3, 3), strides=1, padding='same', activation='tanh', name='out')(skip)
    return Model(inputs=inpt, outputs=out)
model = MyModel()
coreml_model = coremltools.converters.keras.convert(model,
input_names=["inp1"],
output_names=["out1"],
image_scale=1.0,
model_precision='float32',
use_float_arraytype=True,
input_name_shape_dict={'inp1': [None, 384, 384, 3]}
)
spec = coreml_model._spec
print(spec.description.input[0])
print(spec.description.input[0].type.multiArrayType.shape)
print(spec.description.output[0])
coremltools.utils.save_spec(spec, "test.mlmodel")
The output is:
2 : out, <keras.layers.convolutional.Conv2D object at 0x7f08ca491470>
3 : out__activation__, <keras.layers.core.Activation object at 0x7f08ca4b0b70>
name: "inp1"
type {
multiArrayType {
shape: 3
shape: 384
shape: 384
dataType: FLOAT32
}
}
[3, 384, 384]
name: "out1"
type {
multiArrayType {
shape: 3
dataType: FLOAT32
}
}
So the output shape is 3, which is incorrect. And when I try to get rid of input_name_shape_dict, I get:
Please provide a finite height (H), width (W) & channel value (C) using input_name_shape_dict arg with key = 'inp1' and value = [None, H, W, C]
Converted .mlmodel can be modified to have flexible input shape using coremltools.models.neural_network.flexible_shape_utils
So it wants NHWC.
An attempt at inference yields:
Layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4
When I attempt to add an extra dimension to the input:
spec.description.input[0].type.multiArrayType.shape.extend([1, 3, 384, 384])
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
[name: "inp1"
type {
multiArrayType {
shape: 1
shape: 3
shape: 384
shape: 384
dataType: FLOAT32
}
}
]
I get for inference:
Shape (1 x 384 x 384 x 3) was not in enumerated set of allowed shapes
Following this advice and making the input shape (1,1,384,384,3) does not help.
How can I make it work and produce the correct output?
Inference:
import numpy as np
from PIL import Image
model_cml = coremltools.models.MLModel('my.mlmodel')
# load image and scale to [-1, 1]
img = np.array(Image.open('patch4.png').convert('RGB'))[np.newaxis,...]/127.5 - 1
# Make predictions
predictions = model_cml.predict({'inp1':img})
# save result
res = predictions['out1']
res = np.clip((res[0]+1)*127.5, 0, 255).astype(np.uint8)
Image.fromarray(res).save('out32.png')
UPDATE:
I am able to run this model with inputs of shape (3,1,384,384); the result produced is (1,3,3,384,384), which does not make any sense to me.
UPDATE 2:
Setting a fixed shape in Keras,
def MyModel(inputs_shape=(384,384,3), channels=64):
    inpt = Input(shape=inputs_shape)
fixed the output-shape problem, but I still cannot run the model (Layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4).
UPDATE 3:
The following works to get rid of the input/conv_in shape mismatch:
1) Downgrade to coremltools==3.0. Version 3.3 (model version 4) seems broken.
2) Use a fixed shape in the Keras model, no input_name_shape_dict, and a variable shape for the coreml model.
from keras.layers import *
from keras import Input, Model
import coremltools
def MyModel(inputs_shape=(384,384,3), channels=64):
    inpt = Input(shape=inputs_shape)
    # channels
    skip = Conv2D(channels, (3, 3), strides=1, activation=None, padding='same', name='conv_in')(inpt)
    out = Conv2D(3, (3, 3), strides=1, padding='same', activation='tanh', name='out')(skip)
    return Model(inputs=inpt, outputs=out)
model = MyModel()
model.save('test.model')
print(model.summary())
'''
# v.3.3
coreml_model = coremltools.converters.keras.convert(model,
input_names=["image"],
output_names="out1",
image_scale=1.0,
model_precision='float32',
use_float_arraytype=True,
input_name_shape_dict={'inp1': [None, 384, 384, 3]}
)
'''
coreml_model = coremltools.converters.keras.convert(model,
input_names=["image"],
output_names="out1",
image_scale=1.0,
model_precision='float32',
)
spec = coreml_model._spec
from coremltools.models.neural_network import flexible_shape_utils
shape_range = flexible_shape_utils.NeuralNetworkMultiArrayShapeRange()
shape_range.add_channel_range((3,3))
shape_range.add_height_range((64, 384))
shape_range.add_width_range((64, 384))
flexible_shape_utils.update_multiarray_shape_range(spec, feature_name='image', shape_range=shape_range)
print(spec.description.input)
print(spec.description.input[0].type.multiArrayType.shape)
print(spec.description.output)
coremltools.utils.save_spec(spec, "my.mlmodel")
In the inference script, feed an array of shape (1,1,3,384,384). Note the input is now named 'image', matching input_names above:
img = np.zeros((1,1,3,384,384))
# Make predictions
predictions = model_cml.predict({'image':img})
res = predictions['out1']  # (3, 384, 384)
You can ignore what the mlmodel file records for the output shape if it is incorrect. This is more of a metadata issue, i.e. the model will still work fine and do the right thing. The converter isn't always able to figure out the correct output shape (I'm not sure why).
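If the stale metadata bothers you, a small untested sketch that overwrites the recorded output shape, mirroring the multiArrayType edits shown earlier in the question:
# Clear the bogus recorded output shape and write the real one.
del spec.description.output[0].type.multiArrayType.shape[:]
spec.description.output[0].type.multiArrayType.shape.extend([3, 384, 384])
coremltools.utils.save_spec(spec, "my.mlmodel")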

fchollet 5.4-visualizing-what-convnets-learn input_13:0 is both fed and fetched error

Using Keras 2.2.4, I'm working my way through the notebook 5.4-visualizing-what-convnets-learn, except I swapped the model for a unet one provided by Kaggle-Carvana-Image-Masking-Challenge. The first layer of the Kaggle model looks like this, followed by the rest of the example code.
def get_unet_512(input_shape=(512, 512, 3),
                 num_classes=1):
    inputs = Input(shape=input_shape)
    ...
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_13 (InputLayer) (None, 512, 512, 3) 0
...
from keras import models
layer_outputs = [layer.output for layer in model.layers[:8]]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(img_tensor)
Now the error I am getting is
InvalidArgumentError: input_13:0 is both fed and fetched.
Does anyone have any suggestions on how to work around this?
This error is caused by:
layer_outputs = [layer.output for layer in model.layers[:8]]
It says that the first layer (the input layer) is both fed and fetched.
Here's a workaround:
import keras.backend as K
layer_outputs = [K.identity(layer.output) for layer in model.layers[:8]]
EDIT:
Full example, code adapted from: Mask_RCNN - run_graph
import numpy as np
import keras.backend as K
from keras.models import Model
from keras.layers import Input, Dense, Flatten
ip = Input(shape=(512,512,3,))
fl = Flatten()(ip)
d1 = Dense(20, activation='relu')(fl)
d2 = Dense(3, activation='softmax')(d1)
model = Model(ip, d2)
model.compile('adam', 'categorical_crossentropy')
model.summary()
layer_outputs = [K.identity(layer.output) for layer in model.layers]
#layer_outputs = [layer.output for layer in model.layers]  # fails: input is both fed and fetched
kf = K.function([ip], layer_outputs)
activations = kf([np.random.random((1,512,512,3))])
print(activations)

CNN RNN integration for images

I'm trying to integrate a CNN and an LSTM for MNIST images with the following code:
from __future__ import division, print_function, absolute_import
import tensorflow as tf
import tflearn
import numpy as np
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
import tflearn.datasets.mnist as mnist
height = 128
width = 128
X, Y, testX, testY = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])
# Building convolutional network
network = tflearn.input_data(shape=[None, 28, 28,1], name='input')
network = tflearn.conv_2d(network, 32, 3, activation='relu',regularizer="L2")
network = tflearn.max_pool_2d(network, 2)
network = tflearn.local_response_normalization(network)
network = tflearn.conv_2d(network, 64, 3, activation='relu',regularizer="L2")
network = tflearn.max_pool_2d(network, 2)
network = tflearn.local_response_normalization(network)
network = fully_connected(network, 128, activation='tanh')
network = dropout(network, 0.8)
network = fully_connected(network, 256, activation='tanh')
network = dropout(network, 0.8)
network = tflearn.reshape(network, [-1, 1, 28*28])
#lstm
network = tflearn.lstm(network, 128, return_seq=True)
network = tflearn.lstm(network, 128)
network = tflearn.fully_connected(network, 10, activation='softmax')
network = tflearn.regression(network, optimizer='adam',
loss='categorical_crossentropy', name='target')
#train
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(X, Y, n_epoch=1, validation_set=0.1, show_metric=True,snapshot_step=100)
A CNN accepts a 4D tensor and an LSTM a 3D one, hence I reshaped the network with:
network = tflearn.reshape(network, [-1, 1, 28*28])
But on running the error comes:
InvalidArgumentError (see above for traceback): Input to reshape is a
tensor with 16384 values, but the requested shape requires a multiple
of 784 [[Node: Reshape/Reshape = Reshape[T=DT_FLOAT,
Tshape=DT_INT32,
_device="/job:localhost/replica:0/task:0/cpu:0"](Dropout_1/cond/Merge, Reshape/Reshape/shape)]]
I'm not clear on why it receives a tensor of 16384 values, and even if I hard-code 128*128 it still doesn't work. I cannot proceed at all.
The error is in this line:
network = tflearn.reshape(network, [-1, 1, 28*28])
The previous FC layer has n_units=256, so a batch of 64 samples (tflearn's default batch size) arrives as 64 * 256 = 16384 values, which is not a multiple of 28*28 = 784 and therefore cannot be reshaped that way. Change this line to:
network = tflearn.reshape(network, [-1, 1, 256])
Note that you're feeding the features produced by the CNN, not the input MNIST images, to the LSTM.
