I am trying to use NCHW, i.e. the channels-first data format, on my CPU. The layer is a max-pooling layer that is part of ResNet18:
MaxPooling2D(pool_size=[3, 3], strides=2, padding='same', data_format='channels_first')
The error I am getting is:
InvalidArgumentError (see above for traceback): Default MaxPoolingOp only supports NHWC on device type CPU
[[Node: max_pooling2d_3/MaxPool = MaxPool[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 3, 3], padding="SAME", strides=[1, 1, 2, 2], _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch_normalization_51/cond/Merge)]]
Is there a way to fix this? I have also tried data_format="NCHW", but it gave the same error.
Could you try a simple model to debug the issue? The following works on my system on CPU:
import numpy as np
from keras.models import Sequential
from keras.layers import MaxPooling2D

model = Sequential()
model.add(MaxPooling2D(pool_size=[3, 3], strides=2, padding='same',
                       data_format='channels_first', input_shape=(3, 224, 224)))
model.summary()

X = np.random.randn(1, 3, 224, 224)
Y = model.predict(X)
print(Y.shape)  # (1, 3, 112, 112)
pip install intel-tensorflow
solved the problem, but training seems to be slower than before.
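An alternative workaround I have seen (a sketch, not from the original answer; it assumes only the pooling op needs a channels-last layout on CPU) is to transpose around the unsupported op with Permute layers:

from keras.models import Sequential
from keras.layers import Permute, MaxPooling2D

# Sketch: run the pooling in channels_last on CPU, then transpose
# back to channels_first for the rest of the network.
model = Sequential()
model.add(Permute((2, 3, 1), input_shape=(3, 224, 224)))  # NCHW -> NHWC
model.add(MaxPooling2D(pool_size=[3, 3], strides=2, padding='same',
                       data_format='channels_last'))
model.add(Permute((3, 1, 2)))  # NHWC -> NCHW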
I'm trying to add an RNN layer between convolutional layers. Unfortunately, due to a difference in ndim, model creation fails.
Model:
model = keras.Sequential(
    [
        # layers.Rescaling(1.0/255),
        keras.Input(shape=(256, 256, 3)),
        layers.Conv2D(32, (3, 3), padding="valid", activation='swish'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, 3, activation="swish"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="swish"),
        layers.Flatten(),
        layers.Dense(64, activation='sigmoid'),
        layers.Dense(10),
        layers.Dense(2),
    ]
)
It'd be really helpful if someone could help me figure this out :)
EDIT
Code that gave the error
model = keras.Sequential(
    [
        # layers.Rescaling(1.0/255),
        keras.Input(shape=(256, 256, 3)),
        layers.Conv2D(32, (3, 3), padding="valid", activation='swish'),
        layers.SimpleRNN(512, activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, 3, activation="swish"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="swish"),
        layers.Flatten(),
        layers.Dense(64, activation='sigmoid'),
        layers.Dense(10),
        layers.Dense(2),
    ]
)
Error Message
Input 0 of layer "simple_rnn" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 254, 254, 32)
If you add a layers.Reshape layer before and after the RNN layer, the shape issue is resolved:
model = keras.Sequential(
    [
        keras.Input(shape=(256, 256, 3)),
        layers.Conv2D(32, (3, 3), padding="valid", activation='swish'),
        layers.Reshape((-1, 32)),     # flatten only the two spatial dimensions
        layers.SimpleRNN(512),
        layers.Reshape((16, 16, 2)),  # whatever shape you like (16 * 16 * 2 = 512)
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, 3, activation="swish"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="swish"),
        layers.Flatten(),
        layers.Dense(64, activation='sigmoid'),
        layers.Dense(10),
        layers.Dense(2),
    ]
)
I don't know whether it makes sense to apply 2D convolutions after the RNN layer; I think it destroys the spatial semantics of the input. The RNN layer will also have a huge number of weights. Both points speak for adding the RNN layer at the end instead. Reshaping works there as well, but you will have to add a dimension instead of flattening one, as in the sketch below.
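For completeness, a minimal sketch of that RNN-at-the-end variant, reusing the keras/layers imports from the question (the Reshape((-1, 1)) adds the time dimension; the SimpleRNN size of 32 is an illustrative assumption):

model = keras.Sequential(
    [
        keras.Input(shape=(256, 256, 3)),
        layers.Conv2D(32, (3, 3), padding="valid", activation='swish'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, 3, activation="swish"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="swish"),
        layers.Flatten(),
        layers.Dense(64, activation='sigmoid'),
        layers.Reshape((-1, 1)),  # add a time dimension: (64,) -> (64, 1)
        layers.SimpleRNN(32),     # illustrative size; sees the 64 features as a sequence
        layers.Dense(10),
        layers.Dense(2),
    ]
)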
How does output_padding work in ConvTranspose2d? Please help me understand this:
ConvTranspose2d(1024, 512, kernel_size=3, stride=2, padding=1, output_padding=1)
According to the documentation here: https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html, when applying a Conv2d operation with stride > 1, you can get the same output dimensions from different input sizes. For example, 7x7 and 8x8 inputs both return a 3x3 output with stride=2:
import torch
conv_inp1 = torch.rand(1,1,7,7)
conv_inp2 = torch.rand(1,1,8,8)
conv1 = torch.nn.Conv2d(1, 1, kernel_size = 3, stride = 2)
out1 = conv1(conv_inp1)
out2 = conv1(conv_inp2)
print(out1.shape) # torch.Size([1, 1, 3, 3])
print(out2.shape) # torch.Size([1, 1, 3, 3])
When applying the transpose convolution, it is therefore ambiguous which output shape to return, 7x7 or 8x8, for a stride=2 transpose convolution. The output_padding parameter tells PyTorch which one to produce. Note that it doesn't pad the output with zeros or anything like that; it is just a way to determine the output shape and apply the transpose convolution accordingly.
conv_t1 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2)
conv_t2 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, output_padding=1)
transposed1 = conv_t1(out1)
transposed2 = conv_t2(out2)
print(transposed1.shape) # torch.Size([1, 1, 7, 7])
print(transposed2.shape) # torch.Size([1, 1, 8, 8])
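For reference, the output size follows directly from the shape formula in the ConvTranspose2d docs (restated here with dilation=1):

H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding

Plugging in the example above with H_in=3, stride=2, padding=0, kernel_size=3: output_padding=0 gives (3 - 1) * 2 + 3 = 7, and output_padding=1 gives 8.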
For some reason, I cannot seem to assign all the weights of a Conv2d layer in PyTorch - I have to do it in two steps. Can anyone help me with what I am doing wrong?
layer = torch.nn.Conv2d(in_channels=1, out_channels=2, kernel_size=(2,2), stride=(2,2))
layer.state_dict()['weight']
gives me a tensor of shape (2, 1, 2, 2):
tensor([[[[ 0.4738, -0.2197],
          [-0.3436, -0.0754]]],

        [[[ 0.1662,  0.4098],
          [-0.4306, -0.4828]]]])
When I try to assign weights like so
layer.state_dict()['weight'] = torch.tensor([
[[[ 1, 2],
[3, 4]]],
[[[-1, -2],
[-3, -4]]]
])
the weights don't change. However, if I do something like this
layer.state_dict()['weight'][0] = torch.tensor([
[[[1, 2],
[3, 4]]],
])
layer.state_dict()['weight'][1] = torch.tensor([
[[[-1, -2],
[-3, -4]]],
])
The weights change. Why is this so?
The direct assignment does nothing because state_dict() builds and returns a new dict on every call: assigning a tensor to a key of that dict only rebinds the key in the temporary copy, while indexing into the tensor (state_dict()['weight'][0] = ...) mutates the underlying parameter tensor in place, which the module shares. The more proper way to achieve what you're trying to do would be
layer.load_state_dict({'weight': torch.tensor([[[[0.4738, -0.2197],
[-0.3436, -0.0754]]],
[[[0.1662, 0.4098],
[-0.4306, -0.4828]]]])}, strict=False)
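Alternatively (a common PyTorch idiom, not part of the original answer), you can write into the parameter tensor in place under torch.no_grad():

import torch

layer = torch.nn.Conv2d(in_channels=1, out_channels=2, kernel_size=(2, 2), stride=(2, 2))

with torch.no_grad():  # keep the manual assignment out of autograd
    layer.weight.copy_(torch.tensor([[[[ 1.,  2.],
                                       [ 3.,  4.]]],
                                     [[[-1., -2.],
                                       [-3., -4.]]]]))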
I am attempting to stride over the channel dimension, and the following code exhibits surprising behaviour. It is my expectation that tf.nn.max_pool and tf.nn.avg_pool should produce tensors of identical shape when fed the exact same arguments. This is not the case.
import tensorflow as tf
x = tf.get_variable('x', shape=(100, 32, 32, 64),
initializer=tf.constant_initializer(5), dtype=tf.float32)
ksize = (1, 2, 2, 2)
strides = (1, 2, 2, 2)
max_pool = tf.nn.max_pool(x, ksize, strides, padding='SAME')
avg_pool = tf.nn.avg_pool(x, ksize, strides, padding='SAME')
print(max_pool.shape)
print(avg_pool.shape)
This prints
$ python ex04/mini.py
(100, 16, 16, 32)
(100, 16, 16, 64)
Clearly, I am misunderstanding something.
The link https://github.com/Hvass-Labs/TensorFlow-Tutorials/issues/19 states:
The first and last stride must always be 1,
because the first is for the image-number and
the last is for the input-channel.
It turns out this really is a bug:
https://github.com/tensorflow/tensorflow/issues/14886#issuecomment-352934112
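As a sanity check (a sketch in the same TF1 API as the question), keeping the first and last entries of ksize and strides at 1, per the quoted rule, makes both ops agree on the output shape:

import tensorflow as tf

x = tf.get_variable('x', shape=(100, 32, 32, 64),
                    initializer=tf.constant_initializer(5), dtype=tf.float32)
ksize = (1, 2, 2, 1)    # pool only over the spatial dimensions
strides = (1, 2, 2, 1)  # batch and channel strides stay 1

max_pool = tf.nn.max_pool(x, ksize, strides, padding='SAME')
avg_pool = tf.nn.avg_pool(x, ksize, strides, padding='SAME')

print(max_pool.shape)  # (100, 16, 16, 64)
print(avg_pool.shape)  # (100, 16, 16, 64)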
I am trying to do a grid search for multiclass classification with Keras. Here is a section of the code:
Some properties of the data are below:
y_
array(['fast', 'immobile', 'immobile', ..., 'slow', 'immobile', 'slow'],
      dtype='<U17')
y_onehot = pd.get_dummies(y_).values
y_onehot
array([[1, 0, 0],
[0, 0, 1],
[0, 0, 1],
...
[0, 1, 0],
[0, 0, 1],
[0, 1, 0]], dtype=uint8)
#Do train-test split
y_train.shape
(1904,)
y_train_onehot.shape
(1904, 3)
And the model...
# Function to create model, required for KerasClassifier
def create_model(optimizer='rmsprop', init='glorot_uniform'):
    # create model
    model = Sequential()
    model.add(Dense(2048, input_dim=X_train.shape[1], kernel_initializer=init, activation='relu'))
    model.add(Dense(512, kernel_initializer=init, activation='relu'))
    model.add(Dense(y_train_onehot.shape[1], kernel_initializer=init, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
# create model
model = KerasClassifier(build_fn=create_model, verbose=0)
# grid search epochs, batch size and optimizer
optimizers = ['rmsprop', 'adam']
init = ['glorot_uniform', 'normal', 'uniform']
epochs = [50, 100, 150]
batches = [5, 10, 20]
param_grid = dict(optimizer=optimizers, epochs=epochs, batch_size=batches, init=init)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='accuracy')
grid_result = grid.fit(X_train, y_train_onehot)
And here is the error:
--> grid_result = grid.fit(X_train, y_train_onehot)
ValueError: Classification metrics can't handle a mix of multilabel-indicator and multiclass targets
The code was originally for a binary model, but I am hoping to modify it for a multiclass data set. Kindly assist. Thanks!
The error is in the softmax layer. I think you mean y_train_onehot.shape[1] instead of y_train_onehot[1].
Update 1: This is strange, but your second problem seems to be y_train_onehot. Would you mind trying two things (sketched below)?
Try the same model without the one-hot encoding, i.e. fit on y_train directly.
If that alone doesn't work, change the loss to sparse_categorical_crossentropy.
Also make sure the size of the softmax layer is the number of classes.
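Putting both suggestions together, a minimal sketch (the LabelEncoder step is my assumption for turning the string labels into integer class indices; it reuses X_train, y_train, param_grid, and the imports from the question):

from sklearn.preprocessing import LabelEncoder

# Integer-encode the string labels instead of one-hot encoding them,
# e.g. 'fast' -> 0, 'immobile' -> 1, 'slow' -> 2 (order chosen by LabelEncoder)
y_train_int = LabelEncoder().fit_transform(y_train)

def create_model(optimizer='rmsprop', init='glorot_uniform'):
    model = Sequential()
    model.add(Dense(2048, input_dim=X_train.shape[1], kernel_initializer=init, activation='relu'))
    model.add(Dense(512, kernel_initializer=init, activation='relu'))
    model.add(Dense(3, kernel_initializer=init, activation='softmax'))  # 3 = number of classes
    # sparse_categorical_crossentropy takes integer class indices directly
    model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, verbose=0)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='accuracy')
grid_result = grid.fit(X_train, y_train_int)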