Where are Keras 2's channels? - keras

I once used Keras 1 (maybe 1.0.5) for multi-class classification. My CNN input is (n, 1, 24, 113), where 113 is the number of channels, and the kernel size is (1, 5).
The code looked like this:
from keras.models import Model
from keras.layers import Input, Dense, Reshape, GRU, Convolution2D
from keras.layers.advanced_activations import ELU

# Hardcoded number of sensor channels employed in the OPPORTUNITY challenge
NUM_SENSOR_CHANNELS = 113
# Hardcoded number of classes in the gesture recognition problem
NUM_CLASSES = 18
# Hardcoded length of the sliding window mechanism employed to segment the data
SLIDING_WINDOW_LENGTH = 24
# Length of the input sequence after convolutional operations
FINAL_SEQUENCE_LENGTH = 8
# Hardcoded step of the sliding window mechanism employed to segment the data
SLIDING_WINDOW_STEP = 12
# Batch size
BATCH_SIZE = 100
# Number of filters in the convolutional layers
NUM_FILTERS = 64
# Size of the filters in the convolutional layers
FILTER_SIZE = 5
# Number of units in the recurrent layers
NUM_UNITS_LSTM = 128

X_train = X_train.reshape((-1, 1, SLIDING_WINDOW_LENGTH, NUM_SENSOR_CHANNELS))
X_test = X_test.reshape((-1, 1, SLIDING_WINDOW_LENGTH, NUM_SENSOR_CHANNELS))

# network
inputs = Input(shape=(1, SLIDING_WINDOW_LENGTH, NUM_SENSOR_CHANNELS))
conv1 = ELU()(Convolution2D(NUM_FILTERS, FILTER_SIZE, 1, border_mode='valid', init='normal', activation='relu')(inputs))
conv2 = ELU()(Convolution2D(NUM_FILTERS, FILTER_SIZE, 1, border_mode='valid', init='normal', activation='relu')(conv1))
conv3 = ELU()(Convolution2D(NUM_FILTERS, FILTER_SIZE, 1, border_mode='valid', init='normal', activation='relu')(conv2))
conv4 = ELU()(Convolution2D(NUM_FILTERS, FILTER_SIZE, 1, border_mode='valid', init='normal', activation='relu')(conv3))
reshape1 = Reshape((FINAL_SEQUENCE_LENGTH, NUM_FILTERS * NUM_SENSOR_CHANNELS))(conv4)
gru1 = GRU(NUM_UNITS_LSTM, return_sequences=True, consume_less='mem')(reshape1)
gru2 = GRU(NUM_UNITS_LSTM, return_sequences=False, consume_less='mem')(gru1)
outputs = Dense(NUM_CLASSES, activation='softmax')(gru2)
These days I switched to Keras 2, and the network did not change. My code now looks like this:
from keras.models import Model
from keras.layers import Input, Dense, Reshape, GRU, Conv2D, ELU

X_train = X_train.reshape((-1, 1, SLIDING_WINDOW_LENGTH, NUM_SENSOR_CHANNELS))
X_test = X_test.reshape((-1, 1, SLIDING_WINDOW_LENGTH, NUM_SENSOR_CHANNELS))

# network
inputs = Input(shape=(1, SLIDING_WINDOW_LENGTH, NUM_SENSOR_CHANNELS))
conv1 = ELU()(
    Conv2D(filters=NUM_FILTERS, kernel_size=(1, FILTER_SIZE), strides=(1, 1), padding='valid', activation='relu',
           kernel_initializer='normal', data_format='channels_last')(inputs))
conv2 = ELU()(
    Conv2D(filters=NUM_FILTERS, kernel_size=(1, FILTER_SIZE), strides=(1, 1), padding='valid', activation='relu',
           kernel_initializer='normal', data_format='channels_last')(conv1))
conv3 = ELU()(
    Conv2D(filters=NUM_FILTERS, kernel_size=(1, FILTER_SIZE), strides=(1, 1), padding='valid', activation='relu',
           kernel_initializer='normal', data_format='channels_last')(conv2))
conv4 = ELU()(
    Conv2D(filters=NUM_FILTERS, kernel_size=(1, FILTER_SIZE), strides=(1, 1), padding='valid', activation='relu',
           kernel_initializer='normal', data_format='channels_last')(conv3))
# permute1 = Permute((2, 1, 3))(conv4)
reshape1 = Reshape((SLIDING_WINDOW_LENGTH - (FILTER_SIZE - 1) * 4, NUM_FILTERS * 1))(conv4)  # 4 convs, each 'valid' conv trims FILTER_SIZE - 1 = 4 steps: 24 -> 8
gru1 = GRU(NUM_UNITS_LSTM, return_sequences=True, implementation=0)(reshape1)
gru2 = GRU(NUM_UNITS_LSTM, return_sequences=False, implementation=0)(gru1)  # implementation=2 for GPU
outputs = Dense(NUM_CLASSES, activation='softmax')(gru2)
Training seems faster, but the shapes look strange, since I don't know where my channels went.
Is there anything wrong with my code? Could someone help? Thanks.

It seems that Keras handles the channel dimension itself. In Keras 2, Conv2D's data_format defaults to 'channels_last', so with an input of shape (1, 24, 113) the last axis (113) is treated as the channel axis; in Keras 1 with the Theano backend, the default was channels-first, where axis 1 (the 1) was the channel axis. Passing data_format='channels_first' restores the old layout.
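A minimal sketch of the difference, assuming the shapes from the question (the shapes in the comments are what the two layouts produce):
from keras.layers import Input, Conv2D

# Keras 1 / Theano default was channels-first: (channels, rows, cols),
# so an input of (1, 24, 113) meant 1 channel and a 24x113 "image".
x = Input(shape=(1, 24, 113))
y = Conv2D(64, (5, 1), padding='valid', data_format='channels_first')(x)
print(y.shape)  # (None, 64, 20, 113)

# With data_format='channels_last' (the Keras 2 default), the last axis
# (113) is treated as the channel axis instead.
y = Conv2D(64, (1, 5), padding='valid', data_format='channels_last')(x)
print(y.shape)  # (None, 1, 20, 64)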

Related

Image data augmentation and training

I am using the Cats vs. Dogs dataset, which contains 2000 images in 2 categories and is divided into train and validation directories; it can be downloaded here.
I am trying to feed real-time image augmentation into a CNN model using train and validation generators. I am using Python 3.8 and TF 2.5. The code is as follows:
import os
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

path_to_imgs = "cats_and_dogs_filtered\\"

# Define the train and validation directories-
train_dir = os.path.join(path_to_imgs, 'train')
val_dir = os.path.join(path_to_imgs, 'validation')

batch_size = 64
IMG_HEIGHT, IMG_WIDTH = 150, 150

def plotImages(images_arr):
    # function to plot 5 images together-
    fig, axes = plt.subplots(1, 5, figsize=(20, 20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()
    return None

# Use image augmentation for the training dataset-
image_generator = ImageDataGenerator(rescale=1./255, rotation_range=135)
train_data_gen = image_generator.flow_from_directory(
    directory=train_dir, batch_size=batch_size,
    shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH),
    class_mode='sparse'
)
# Found 2000 images belonging to 2 classes.

# Validation images need no augmentation-
val_data_gen = tf.keras.preprocessing.image_dataset_from_directory(
    val_dir, image_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size)
# Found 1000 files belonging to 2 classes.

# Configure the dataset for performance-
# AUTOTUNE = tf.data.AUTOTUNE
# val_data_gen = val_data_gen.cache().prefetch(buffer_size = AUTOTUNE)
val_data_gen = val_data_gen.take(batch_size).cache().repeat()

augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
# Get a batch of training images and labels-
x, y = next(iter(train_data_gen))
# Get a batch of validation images and labels-
x_t, y_t = next(iter(val_data_gen))
x.shape, y.shape
# ((64, 150, 150, 3), (64,))
x_t.shape, y_t.shape
# (TensorShape([64, 150, 150, 3]), TensorShape([64]))
weight_decay = 0.0005

model = Sequential()
model.add(Conv2D(
    filters=64, kernel_size=(3, 3),
    activation='relu', kernel_initializer=tf.initializers.he_normal(),
    strides=(1, 1), padding='same', kernel_regularizer=regularizers.l2(weight_decay),
    input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)
))
model.add(Conv2D(
    filters=64, kernel_size=(3, 3),
    activation='relu', kernel_initializer=tf.initializers.he_normal(),
    strides=(1, 1), padding='same', kernel_regularizer=regularizers.l2(weight_decay)
))
# AveragePooling2D(
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(
    filters=128, kernel_size=(3, 3),
    activation='relu', kernel_initializer=tf.initializers.he_normal(),
    strides=(1, 1), padding='same', kernel_regularizer=regularizers.l2(weight_decay)
))
model.add(Conv2D(
    filters=128, kernel_size=(3, 3),
    activation='relu', kernel_initializer=tf.initializers.he_normal(),
    strides=(1, 1), padding='same', kernel_regularizer=regularizers.l2(weight_decay)
))
# AveragePooling2D(
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(
    filters=256, kernel_size=(3, 3),
    activation='relu', kernel_initializer=tf.initializers.he_normal(),
    strides=(1, 1), padding='same', kernel_regularizer=regularizers.l2(weight_decay)
))
model.add(Conv2D(
    filters=256, kernel_size=(3, 3),
    activation='relu', kernel_initializer=tf.initializers.he_normal(),
    strides=(1, 1), padding='same', kernel_regularizer=regularizers.l2(weight_decay)
))
# MaxPooling2D(
model.add(AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(units=2, activation='sigmoid'))

# Compile defined model-
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    # loss=tf.losses.SparseCategoricalCrossentropy(from_logits = True),
    # loss = tf.losses.SparseCategoricalCrossentropy(),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

model(x).shape
# TensorShape([64, 2])
model.predict(x).shape
# (64, 2)

'''
# This is deprecated in favor of model.fit()-
model.fit_generator(
    generator=train_data_gen, steps_per_epoch=len(train_data_gen),
    epochs=5
)
'''

model.fit(train_data_gen, val_data_gen, batch_size=batch_size, epochs=5)
Using "model.fit()" gives the error:
ValueError: y argument is not supported when using
keras.utils.Sequence as input.
What am I doing wrong?
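A plausible explanation, with a hedged sketch: Model.fit's second positional argument is y, so val_data_gen is being passed as labels, which fit rejects when x is a generator/Sequence. The validation set goes in the validation_data keyword instead, and because val_data_gen was given .repeat(), a validation_steps value is also needed:
# Hedged sketch: pass the validation dataset by keyword, not positionally.
# batch_size is dropped because both inputs already yield batches;
# validation_steps assumes the 1000 validation images reported above.
model.fit(
    train_data_gen,
    validation_data=val_data_gen,
    steps_per_epoch=len(train_data_gen),
    validation_steps=1000 // batch_size,
    epochs=5,
)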

Freeze certain weights - TensorFlow 2

I am using a Conv-6 CNN in TensorFlow 2.5 and Python 3. The objective is to selectively freeze certain weights within any trainable layer. The Conv-6 CNN model definition is as follows:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def conv6_cnn():
    """
    Function to define the architecture of a neural network model
    following the Conv-6 architecture for the CIFAR-10 dataset, using
    the provided parameters, which are used to prune the model.

    Conv-6 architecture-
    64, 64, pool   -- convolutional layers
    128, 128, pool -- convolutional layers
    256, 256, pool -- convolutional layers
    256, 256, 10   -- fully connected layers

    Output: Returns the designed and compiled neural network model
    """
    model = Sequential()
    model.add(Conv2D(
        filters=64, kernel_size=(3, 3),
        activation='relu', kernel_initializer=tf.initializers.GlorotNormal(),
        strides=(1, 1), padding='same',
        input_shape=(32, 32, 3)
    ))
    model.add(Conv2D(
        filters=64, kernel_size=(3, 3),
        activation='relu', kernel_initializer=tf.initializers.GlorotNormal(),
        strides=(1, 1), padding='same'
    ))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Conv2D(
        filters=128, kernel_size=(3, 3),
        activation='relu', kernel_initializer=tf.initializers.GlorotNormal(),
        strides=(1, 1), padding='same'
    ))
    model.add(Conv2D(
        filters=128, kernel_size=(3, 3),
        activation='relu', kernel_initializer=tf.initializers.GlorotNormal(),
        strides=(1, 1), padding='same'
    ))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Conv2D(
        filters=256, kernel_size=(3, 3),
        activation='relu', kernel_initializer=tf.initializers.GlorotNormal(),
        strides=(1, 1), padding='same'
    ))
    model.add(Conv2D(
        filters=256, kernel_size=(3, 3),
        activation='relu', kernel_initializer=tf.initializers.GlorotNormal(),
        strides=(1, 1), padding='same'
    ))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(units=256, activation='relu',
                    kernel_initializer=tf.initializers.GlorotNormal()))
    model.add(Dense(units=256, activation='relu',
                    kernel_initializer=tf.initializers.GlorotNormal()))
    model.add(Dense(units=10, activation='softmax'))

    '''
    # Compile CNN-
    model.compile(
        loss=tf.keras.losses.categorical_crossentropy,
        # optimizer='adam',
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0003),
        metrics=['accuracy']
    )
    '''
    return model
# Load trained model from before-
best_model = conv6_cnn()
best_model.load_weights("best_weights.h5")
I came across this GitHub answer about freezing certain weights during training. On its basis, I wrote the following to freeze weights in the first and sixth conv layers:
conv1 = pruned_model.trainable_weights[0]
# Find all weights less than a threshold (0.1) and set them to zero-
conv1 = tf.where(conv1 < 0.1, 0, conv1)
# For all weights set to zero, stop training them-
conv1 = tf.where(conv1 == 0, tf.stop_gradient(conv1), conv1)

# Sanity check: number of non-zero parameters remaining-
tf.math.count_nonzero(conv1, axis=None).numpy()
# 133
# Original number of non-zero parameters-
tf.math.count_nonzero(best_model.trainable_weights[0], axis=None).numpy()
# 1728

# Assign conv layer 1 back to the pruned model-
pruned_model.trainable_weights[0].assign(conv1)
# Sanity check-
tf.math.count_nonzero(pruned_model.trainable_weights[0], axis=None).numpy()
# 133

# conv layer 6-
conv6 = pruned_model.trainable_weights[10]
# Find all weights less than a threshold (0.1) and set them to zero-
conv6 = tf.where(conv6 < 0.1, 0, conv6)
# For all weights set to zero, stop training them-
conv6 = tf.where(conv6 == 0, tf.stop_gradient(conv6), conv6)

# Sanity check: number of non-zero parameters remaining-
tf.math.count_nonzero(conv6, axis=None).numpy()
# 5369
# Original number of non-zero parameters-
tf.math.count_nonzero(best_model.trainable_weights[10], axis=None).numpy()
# 589824

# Assign conv layer 6 back to the pruned model-
pruned_model.trainable_weights[10].assign(conv6)
# Sanity check-
tf.math.count_nonzero(pruned_model.trainable_weights[10], axis=None).numpy()
# 5369

# Train model for 10 epochs for testing:
# Compile CNN-
pruned_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    metrics=['accuracy']
)
history = pruned_model.fit(
    x=X_train, y=y_train,
    epochs=10, validation_data=(X_test, y_test)
)
However, after training, when I check the number of non-zero weights:
# first conv layer-
tf.math.count_nonzero(pruned_model.trainable_weights[0], axis=None).numpy()
# sixth conv layer-
tf.math.count_nonzero(pruned_model.trainable_weights[10], axis=None).numpy()
the counts have gone up again. They should still be 133 and 5369, but they are not.
Help?
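A possible explanation, with a hedged sketch: tf.stop_gradient only affects gradients flowing through a computation graph, so calling it on an eagerly evaluated weight tensor (as above) has no lasting effect, and the optimizer keeps updating the zeroed entries. One workaround under that assumption is to re-apply a fixed binary mask to the pruned weights after every batch, e.g. with a callback:
import tensorflow as tf

class WeightMaskCallback(tf.keras.callbacks.Callback):
    """Re-applies fixed binary masks to selected weights after each batch."""
    def __init__(self, masks):
        super().__init__()
        self.masks = masks  # {index into trainable_weights: mask tensor}

    def on_train_batch_end(self, batch, logs=None):
        for idx, mask in self.masks.items():
            w = self.model.trainable_weights[idx]
            w.assign(w * mask)  # zeroed entries stay zero

# Build the masks once from the already-pruned weights-
masks = {
    0: tf.cast(pruned_model.trainable_weights[0] != 0, tf.float32),
    10: tf.cast(pruned_model.trainable_weights[10] != 0, tf.float32),
}
history = pruned_model.fit(
    x=X_train, y=y_train, epochs=10,
    validation_data=(X_test, y_test),
    callbacks=[WeightMaskCallback(masks)]
)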

How should I fit the sample data in a model combining CNN and LSTM?

I have sample data of page visits of one page for 803 days. I have extracted features from the data, like the mean and median, and the final shape of the data is (803, 25). I took a train set of 640 and a test set of 160. I am trying to use a CNN+LSTM model with Keras, but I am getting an error in the model.fit method.
I have tried a Permute layer and changed input shapes, but still cannot fix it.
trainX.shape = (642, 1, 25)
trainY.shape = (642,)
testX.shape = (161, 1, 25)
testY.shape = (161,)
# Basic layer
model = Sequential()
model.add(TimeDistributed(Convolution2D(filters = 32, kernel_size = (3, 3), strides=1, padding='SAME', input_shape = (642, 25, 1), activation = 'relu')))
model.add(TimeDistributed(Convolution2D(filters = 32, kernel_size = (3, 3), activation = 'relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size = (2, 2))))
model.add(TimeDistributed(Convolution2D(32, 3, 3, activation = 'relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size = (2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(Permute((2, 3), input_shape=(1, 25)))
model.add(LSTM(units=54, return_sequences=True))
# To avoid overfitting
model.add(Dropout(0.2))
# Adding 6 more layers
model.add(LSTM(units=25, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=54))
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(units = 1, activation='relu', kernel_regularizer=regularizers.l1(0.0001))))
model.add(PReLU(weights=None, alpha_initializer="zero")) # add an advanced activation
model.compile(optimizer = 'adam', loss = customSmapeLoss, metrics=['mae'])
model.fit(trainX, trainY, epochs = 50, batch_size = 32)
predictions = model.predict(testX)
# Runtime error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-218-86932db86d0b> in <module>()
42
43 model.compile(optimizer = 'adam', loss = customSmapeLoss, metrics=['mae'])
---> 44 model.fit(trainX, trainY, epochs = 50, batch_size = 32)
Error - IndexError: list index out of range
The input_shape requires a tuple of length 4 when using TimeDistributed with Conv2D;
see https://keras.io/layers/wrappers/
input_shape=(10, 299, 299, 3)
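A minimal sketch of that layout (the frame count and image size are illustrative): TimeDistributed(Conv2D) expects 5-D input of shape (batch, time, height, width, channels), so input_shape excludes the batch axis and has length 4.
from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D

model = Sequential()
model.add(TimeDistributed(
    Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu'),
    input_shape=(10, 299, 299, 3)))  # 10 frames of 299x299 RGB images
print(model.output_shape)  # (None, 10, 299, 299, 32)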
Don't you think your dataset is too small? Usually, CNN+LSTM is meant for more complicated tasks with thousands of sequential images/videos.

The name "Generator" is used 2 times in the model. All layer names should be unique

I am trying to make a CycleGAN for unpaired image-to-image translation as per this reference. When trying to compile the combined model, the following error is encountered. I don't know why, as I have used the same configuration as the reference. Attached is my code; please have a look if any of you can solve my problem. Thanks in advance. Sorry for my bad English.
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras_contrib.layers.normalization.instancenormalization import InstanceNormalization
img_rows, img_columns, channels = 256, 256, 1
img_shape = (img_rows, img_columns, channels)
def Generator():
    inputs = Input(img_shape)
    conv1 = Conv2D(64, (4, 4), strides=2, padding='same')(inputs)  # 128
    conv1 = Activation(LeakyReLU(alpha=0.2))(conv1)
    conv1 = InstanceNormalization()(conv1)
    conv2 = Conv2D(128, (4, 4), strides=2, padding='same')(conv1)  # 64
    conv2 = Activation(LeakyReLU(alpha=0.2))(conv2)
    conv2 = InstanceNormalization()(conv2)
    conv3 = Conv2D(256, (4, 4), strides=2, padding='same')(conv2)  # 32
    conv3 = Activation(LeakyReLU(alpha=0.2))(conv3)
    conv3 = InstanceNormalization()(conv3)
    Deconv3 = concatenate([Conv2DTranspose(256, (4, 4), strides=2, padding='same')(conv3), conv2], axis=-1)  # 64
    Deconv3 = InstanceNormalization()(Deconv3)
    Deconv3 = Dropout(0.2)(Deconv3)
    Deconv3 = Activation('relu')(Deconv3)
    Deconv2 = concatenate([Conv2DTranspose(128, (4, 4), strides=2, padding='same')(Deconv3), conv1], axis=-1)  # 128
    Deconv2 = InstanceNormalization()(Deconv2)
    Deconv2 = Dropout(0.2)(Deconv2)
    Deconv2 = Activation('relu')(Deconv2)
    Deconv1 = UpSampling2D(size=(2, 2))(Deconv2)  # 256
    Deconv1 = Conv2D(1, (4, 4), strides=1, padding='same')(Deconv1)
    outputs = Activation('tanh')(Deconv1)
    return Model(inputs=inputs, outputs=outputs, name='Generator')

def Discriminator():
    inputs = Input(img_shape)
    conv1 = Conv2D(64, (4, 4), strides=2, padding='same')(inputs)  # 128
    conv1 = Activation(LeakyReLU(alpha=0.2))(conv1)
    conv1 = InstanceNormalization()(conv1)
    conv2 = Conv2D(128, (4, 4), strides=2, padding='same')(conv1)  # 64
    conv2 = Activation(LeakyReLU(alpha=0.2))(conv2)
    conv2 = InstanceNormalization()(conv2)
    conv3 = Conv2D(256, (4, 4), strides=2, padding='same')(conv2)  # 32
    conv3 = Activation(LeakyReLU(alpha=0.2))(conv3)
    conv3 = InstanceNormalization()(conv3)
    conv4 = Conv2D(256, (4, 4), strides=2, padding='same')(conv3)  # 16
    conv4 = Activation(LeakyReLU(alpha=0.2))(conv4)
    conv4 = InstanceNormalization()(conv4)
    conv5 = Conv2D(512, (4, 4), strides=2, padding='same')(conv4)  # 8
    conv5 = Activation(LeakyReLU(alpha=0.2))(conv5)
    conv5 = InstanceNormalization()(conv5)
    conv6 = Conv2D(512, (4, 4), strides=2, padding='same')(conv5)  # 4
    conv6 = Activation(LeakyReLU(alpha=0.2))(conv6)
    conv6 = InstanceNormalization()(conv6)
    outputs = Conv2D(1, (4, 4), strides=1, padding='same')(conv6)  # 4
    return Model(inputs=inputs, outputs=outputs, name='Discriminator')
# Calculate output shape of D (PatchGAN)
patch = int(img_rows / 2**6)  # note: 'height' was undefined here; img_rows is the image height
disc_patch = (patch, patch, 1)
# Loss weights
lambda_cycle = 10.0 # Cycle-consistency loss
lambda_id = 0.1 * lambda_cycle # Identity loss
optimizer = Adam(0.0002, 0.5)
# Build and compile the discriminators
d_A = Discriminator()
d_B = Discriminator()
d_A.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
d_B.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
# Build the generators
g_AB = Generator()
g_BA = Generator()
# Input images from both domains
img_A = Input(shape=img_shape)
img_B = Input(shape=img_shape)
# Translate images to the other domain
fake_B = g_AB(img_A)
fake_A = g_BA(img_B)
# Translate images back to original domain
reconstr_A = g_BA(fake_B)
reconstr_B = g_AB(fake_A)
# Identity mapping of images
img_A_id = g_BA(img_A)
img_B_id = g_AB(img_B)
# For the combined model we will only train the generators
d_A.trainable = False
d_B.trainable = False
# Discriminators determine the validity of translated images
valid_A = d_A(fake_A)
valid_B = d_B(fake_B)
# Combined model trains generators to fool discriminators
combined = Model(inputs=[img_A, img_B], outputs=[ valid_A, valid_B, reconstr_A, reconstr_B, img_A_id, img_B_id ])
combined.compile(loss=['mse', 'mse', 'mae', 'mae', 'mae', 'mae'],loss_weights=[ 1, 1, lambda_cycle, lambda_cycle, lambda_id, lambda_id ], optimizer=optimizer)
and the error is
The name "Generator" is used 2 times in the model. All layer names should be unique.
These lines in the Generator and Discriminator methods are the cause of the problem: each function is invoked twice, producing two models with the same name. Generate a unique name on every invocation, or don't provide the name argument.
return Model(inputs=inputs, outputs=outputs, name='Generator')
return Model(inputs=inputs, outputs=outputs, name='Discriminator')
one possible solution:
return Model(inputs=inputs, outputs=outputs)
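Alternatively, a minimal runnable sketch (with a trivial body standing in for the real one) that passes a distinct name on each call:
from keras.models import Model
from keras.layers import Input, Conv2D

def Generator(name=None):
    inputs = Input((256, 256, 1))
    outputs = Conv2D(1, (4, 4), padding='same', activation='tanh')(inputs)  # stand-in body
    return Model(inputs=inputs, outputs=outputs, name=name)

g_AB = Generator(name='Generator_AB')  # unique per instance, no collision
g_BA = Generator(name='Generator_BA')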

How do CNN layers connect in Keras?

Does anyone know how CNN layers connect in Keras?
## layer #1
inputs = Input((img_rows, img_cols, 5, 1), name='inputs') # shape (?, 192,192,5,1)
conv1_1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(inputs) # shape (?,192,192,5,32)
conv1_1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv1_1)
pool1_1 = MaxPooling3D(pool_size=(2, 2, 1))(conv1_1)
drop1_1 = Dropout(0.2)(pool1_1)
## layer #2
conv2_1 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(drop1_1) # shape(?,96,96,5,64)
conv2_1 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv2_1)
pool2_1 = MaxPooling3D(pool_size=(2, 2, 1))(conv2_1)
drop2_1 = Dropout(0.2)(pool2_1)
## layer #3
conv3_1 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(drop2_1) # shape(?, 48,48,5,128)
conv3_1 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(conv3_1)
pool3_1 = MaxPooling3D(pool_size=(2, 2, 1))(conv3_1)
drop3_1 = Dropout(0.2)(pool3_1)
For example, the shape of the conv1_1 layer is (?, 192, 192, 5, 32) and the shape of the conv2_1 layer is (?, 96, 96, 5, 64). In this case, 32 and 64 indicate the number of filters (or output channels) in each CNN layer. At this point, how can I estimate the number of features or the number of nodes from layer #1 to layer #2?
how can I estimate the number of features or number of nodes from layer #1 to layer #2?
If you mean the number of weights: if you define your layer as l (l = Conv3D(...) etc.), you can access its kernel and bias weights and get their shapes. For the number of features, you can do the same with the layer output (you'd need to define separate variables for that, rather than reusing one name as you did).
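A minimal sketch of both, using the first layer from the question; the parameter count works out to (3*3*3*in_channels + 1) * filters:
from keras.layers import Input, Conv3D

inputs = Input((192, 192, 5, 1))
l = Conv3D(32, (3, 3, 3), activation='relu', padding='same')
out = l(inputs)  # calling the layer builds its weights

print(l.kernel.shape)    # (3, 3, 3, 1, 32): depth, rows, cols, in_channels, filters
print(l.bias.shape)      # (32,)
print(out.shape)         # (None, 192, 192, 5, 32): the layer's "features"/nodes
print(l.count_params())  # 3*3*3*1*32 + 32 = 896 weights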
