Fully Connected layer incorrect feed - python-3.x

Drawing on some ideas for building a CNN from here, I want to build a convnet that comprises two convolution layers, one fully connected layer (FCL), and a softmax layer (SL). I can't work out how to define the operations to perform in the FCL and how it connects back to the SL.
In the FCL, is the operation performed in 1D on the flattened input? The weights for the FCL are generated in 2D, so how can I do the operation if so, given that the matrix dimensions don't match between the reshaped input and the generated weights (comparing with the detail column for VGGNet at the end)? Even if I can do a 1xM by MxN operation, the matrix sizes are mismatched. Where did I go wrong in the FCL?
Traceback (most recent call last):
File "D:/Lab_Project_Files/TF/Practice Files/basictest22.py", line 108, in <module>
y = conv_net( x )
File "D:/Lab_Project_Files/TF/Practice Files/basictest22.py", line 93, in conv_net
FClayer = tf.nn.relu(tf.add(tf.matmul(reshape,layer3_weights),layer3_biases))
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [15360], [2240,64]
How should I define the FCL?
I'm a bit confused about whether these operations are applied to each and every image in the batch.
My input parameters are:
INPUT_WIDTH = 16 # input image width
INPUT_HEIGHT = 12 # input image height
INPUT_DEPTH = 1 # input image depth = 1 for monochrome
NUM_CLASSES = 8 # output classes
BATCH_SIZE = 5 # grouping batch for training
# input output placeholders
x = tf.placeholder(tf.float32, [BATCH_SIZE, INPUT_WIDTH,INPUT_HEIGHT,INPUT_DEPTH ])
y_ = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_CLASSES])
My trial code:
def outputdetails(W1, H1, F, P, S):
    # W1, W2 - width of input and output
    # H1, H2 - height of input and output
    # F - size of the filter
    # P - padding
    # S - stride
    P = 0.00
    W2 = int((W1 - F + 2 * P) / S + 1)
    H2 = int((H1 - F + 2 * P) / S + 1)
    return W2, H2
# CNN trial
def conv_net(x):
    # CONV1 layer
    FILTER_SIZE = 3  # applying a 3x3 filter
    STRIDE = 1
    num_hidden = 64  # used for the FCL as the number of outputs
    NUM_CHANNELS = INPUT_DEPTH  # input channels
    DEPTH = 16  # output channels: apply 16 filters
    layer1_weights = tf.Variable(tf.random_normal([FILTER_SIZE, FILTER_SIZE, NUM_CHANNELS, DEPTH], stddev=0.1))
    layer1_biases = tf.Variable(tf.zeros([DEPTH]))
    # CONV2 layer
    NUM_CHANNELS = 16
    DEPTH = 16
    layer2_weights = tf.Variable(tf.random_normal([FILTER_SIZE, FILTER_SIZE, NUM_CHANNELS, DEPTH], stddev=0.1))
    layer2_biases = tf.Variable(tf.zeros([DEPTH]))
    # Fully connected layer
    # W1 - INPUT_WIDTH, H1 - INPUT_HEIGHT, F - FILTER_SIZE, S - STRIDE
    finalsize_width, finalsize_height = outputdetails(INPUT_WIDTH, INPUT_HEIGHT, FILTER_SIZE, 1, STRIDE)
    layer3_weights = tf.Variable(
        tf.truncated_normal([finalsize_width * finalsize_height * DEPTH, num_hidden], stddev=0.1))
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
    # softmax layer
    Outlayer_weights = tf.Variable(tf.random_normal([num_hidden, NUM_CLASSES], stddev=0.1))
    Outlayer_biases = tf.Variable(tf.constant(1.0, shape=[NUM_CLASSES]))
    conv1 = tf.nn.relu(tf.add(tf.nn.conv2d(x, layer1_weights, strides=[1, 1, 1, 1], padding='SAME'), layer1_biases))
    conv2 = tf.nn.relu(tf.add(tf.nn.conv2d(conv1, layer2_weights, strides=[1, 1, 1, 1], padding='SAME'), layer2_biases))
    shape = conv2.get_shape().as_list()
    reshape = tf.reshape(conv2, [shape[0] * shape[1] * shape[2] * shape[3]])
    FClayer = tf.nn.relu(tf.add(tf.matmul(reshape, layer3_weights), layer3_biases))
    out = tf.add(tf.matmul(FClayer, Outlayer_weights), Outlayer_biases)
    return out
Files, if required:
source file
classes
data

Change this
reshape = tf.reshape(conv2,[shape[0]*shape[1]*shape[2]*shape[3]])
to this
reshape = tf.reshape(conv2,[shape[0],shape[1]*shape[2]*shape[3]])
tf.matmul can work with a batch dimension, which your reshape is destroying.
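For completeness, a sketch of the surrounding lines after the fix. Note where the numbers in the error come from: [15360] is 5 x 16 x 12 x 16 (batch x width x height x depth, since padding='SAME' preserves the spatial size), while [2240, 64] comes from outputdetails, which computes a 'VALID'-style output size (14 x 10) and forces P to 0. So layer3_weights likely also needs to match the 'SAME'-padded size:

# sketch, assuming the shapes defined in the question
shape = conv2.get_shape().as_list()  # [5, 16, 12, 16] with padding='SAME'
reshape = tf.reshape(conv2, [shape[0], shape[1] * shape[2] * shape[3]])  # [5, 3072]
# size the FC weights to the flattened per-image features: 16 * 12 * 16 = 3072
layer3_weights = tf.Variable(
    tf.truncated_normal([shape[1] * shape[2] * shape[3], num_hidden], stddev=0.1))
FClayer = tf.nn.relu(tf.add(tf.matmul(reshape, layer3_weights), layer3_biases))

As for the other doubt: yes, these operations apply to every image in the batch at once. The first dimension of each tensor is the batch dimension, and matmul multiplies each row (one flattened image) by the same weight matrix.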

Related

Applying a 2D function to a 4D tensor PyTorch

I have a 2D function that takes a matrix, i.e. a 2D tensor with shape (28, 28),
and I have a tensor, let's say (64, 10, 28, 28): a batch of 64 images that passed through a conv2d layer with 10 kernels.
Now I want to apply the 2D function to the last two dimensions of the tensor, the (28, 28) part.
Currently I do that in a very inefficient way:
def activation_func(input):
    for batch_idx in range(input.shape[0]):
        for channel_inx in range(input.shape[1]):
            input[batch_idx][channel_inx] = 2D_function(input[batch_idx][channel_inx])
    return input
which, as I noticed, is highly inefficient.
Is there any way of doing this efficiently?
I can post the entire code if necessary.
EDIT:
def 2D_function(input):
    global indices  # yes I know, I will remove this global stuff later
    # indices = [(i, j) for i in range(1, 28, 4) for j in range(1, 28, 4)]
    for x, y in indices:
        relu_decision = relu(input[x, y])  # standard relu - relu(x)=(x>1)*x
        if not relu_decision:
            # zero out the patch
            input[x - 1: x + 3, y - 1: y + 3] = 0
    return input
In such cases, I use a Kronecker product trick:
import torch
torch.set_printoptions(linewidth=200) # you can better see how the mask is shaped
# simulating an input
input = torch.rand(1, 1, 28, 28) - 0.5
ids = torch.meshgrid((torch.arange(1, 28, 4), torch.arange(1, 28, 4)))
# note that relu(x) = (x > 0.) * x, so adjust it to your needs
relus = torch.nn.functional.relu(input[(slice(None), slice(None), *ids)]).to(bool)
A = torch.ones(4, 4)
# build a block mask: each 4x4 block is 1 where the sampled activation survives the relu, 0 where the patch should be zeroed out
mask = torch.kron(relus, A)
print(mask.shape)
output = input * mask
print(mask[0, 0])
print(output[0, 0])
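On PyTorch versions that lack torch.kron, the same block mask can be built with repeat_interleave; a sketch assuming the same 4x4 blocks:

# equivalent mask without torch.kron: repeat each relu decision
# over a 4x4 block along the two spatial dimensions
mask = relus.repeat_interleave(4, dim=2).repeat_interleave(4, dim=3)
output = input * mask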

MNIST data processing - PyTorch

I am trying to code a variational autoencoder for the MNIST dataset, and the data pre-processing is as follows:
# Create transformations to be applied to dataset-
transforms = torchvision.transforms.Compose(
    [
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize(
            (0.1307,), (0.3081,)
            # (0.5,), (0.5,)
        )
    ]
)
# Create training and validation datasets-
train_dataset = torchvision.datasets.MNIST(
    # root = 'data', train = True,
    root = path_to_data, train = True,
    download = True, transform = transforms
)
val_dataset = torchvision.datasets.MNIST(
    # root = 'data', train = False,
    root = path_to_data, train = False,
    download = True, transform = transforms
)
# Sanity check-
len(train_dataset), len(val_dataset)
# (60000, 10000)
# Create training and validation data loaders-
train_dataloader = torch.utils.data.DataLoader(
    dataset = train_dataset, batch_size = 32,
    shuffle = True,
    # num_workers = 2
)
val_dataloader = torch.utils.data.DataLoader(
    dataset = val_dataset, batch_size = 32,
    shuffle = True,
    # num_workers = 2
)
# Get a mini-batch of train data loaders-
imgs, labels = next(iter(train_dataloader))
imgs.shape, labels.shape
# (torch.Size([32, 1, 28, 28]), torch.Size([32]))
# Minimum & maximum pixel values-
imgs.min(), imgs.max()
# (tensor(-0.4242), tensor(2.8215))
# Compute min and max for train dataloader-
min_mnist, max_mnist = 0.0, 0.0
for img, _ in train_dataloader:
    if img.min() < min_mnist:
        min_mnist = img.min()
    if img.max() > max_mnist:
        max_mnist = img.max()
print(f"MNIST - train: min pixel value = {min_mnist:.4f} & max pixel value = {max_mnist:.4f}")
# MNIST - train: min pixel value = -0.4242 & max pixel value = 2.8215
min_mnist, max_mnist = 0.0, 0.0
for img, _ in val_dataloader:
    if img.min() < min_mnist:
        min_mnist = img.min()
    if img.max() > max_mnist:
        max_mnist = img.max()
print(f"MNIST - validation: min pixel value = {min_mnist:.4f} & max pixel value = {max_mnist:.4f}")
# MNIST - validation: min pixel value = -0.4242 & max pixel value = 2.8215
Using the 'ToTensor()' and 'Normalize()' transforms, the image pixels end up in the range [-0.4242, 2.8215]. The output layer of the decoder within the VAE uses either the sigmoid or the tanh activation function. Sigmoid outputs values in the range [0, 1], while tanh outputs values in the range [-1, 1].
This can be a problem, since the input is in the range [-0.4242, 2.8215] while the output is in the range [0, 1] or [-1, 1], depending on the activation being used (sigmoid or tanh).
The reconstruction loss being used is MSE. BCE could also be used, but it is suggested for Bernoulli distributions rather than for continuous data such as pixel values.
One simple fix is to use only the 'ToTensor()' transformation, which scales the input to the range [0, 1], and then use a sigmoid activation for the decoder's output layer. But what's a better approach to pre-processing images that need per-channel normalization with the 'Normalize()' transformation, such that the input and the output/reconstructions are in the same range?
The easiest way would be to remove the sigmoid or tanh activation in the last layer and just use a plain Linear layer as your output. In that case, the network can output any value and is not restricted to [0, 1] or [-1, 1].
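A minimal sketch of that idea, assuming a hypothetical latent size of 64 and flattened 28x28 reconstructions (both are placeholders, not from the original post):

import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical decoder head: no sigmoid/tanh, so outputs are unbounded
decoder_head = nn.Sequential(
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, 28 * 28),  # plain linear output
)

z = torch.randn(32, 64)           # fake latent batch for illustration
reconstruction = decoder_head(z)  # shape (32, 784), any real values
# imgs: a Normalize()-transformed batch, e.g. from train_dataloader
# loss = F.mse_loss(reconstruction, imgs.view(imgs.size(0), -1))

With an unbounded output, the MSE loss can be computed directly against the Normalize()-transformed targets, whatever range they fall in.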

PyTorch couldn't build multi-scale kernel nested model

I'm trying to create a modified MNIST model that takes 1x28x28 MNIST tensor images as input, branches into sub-models with different kernel sizes, and accumulates the results at the end, so as to give a multi-scale kernel response in the spatial domain of the images. I'm stuck, since I'm unable to construct it.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as Data
from torchvision import datasets, transforms
import torch.nn.functional as F
import timeit
import unittest
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)
# check availability of GPU and set the device accordingly
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# define a transforms for preparing the dataset
transform = transforms.Compose([
    transforms.ToTensor(),  # convert the image to a pytorch tensor
    transforms.Normalize((0.1307,), (0.3081,))  # normalise the images with the mean and std of the dataset
])
# Load the MNIST training, test datasets using `torchvision.datasets.MNIST` using the transform defined above
train_dataset = datasets.MNIST('./data',train=True,transform=transform,download=True)
test_dataset = datasets.MNIST('./data',train=False,transform=transform,download=True)
# create dataloaders for training and test datasets
# use a batch size of 32 and set shuffle=True for the training set
train_dataloader = Data.DataLoader(dataset=train_dataset, batch_size=32, shuffle=True)
test_dataloader = Data.DataLoader(dataset=test_dataset, batch_size=32, shuffle=True)
# My Net
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # define conv layers with output channels as 16, kernel sizes of 3, 5, 7 and stride of 1
        self.conv11 = nn.Conv2d(1, 16, 3, 1)  # Input = 1x28x28  Output = 16x26x26
        self.conv12 = nn.Conv2d(1, 16, 5, 1)  # Input = 1x28x28  Output = 16x24x24
        self.conv13 = nn.Conv2d(1, 16, 7, 1)  # Input = 1x28x28  Output = 16x22x22
        # define conv layers with output channels as 32, kernel sizes of 3, 5, 7 and stride of 1
        self.conv21 = nn.Conv2d(16, 32, 3, 1)  # Input = 16x26x26  Output = 32x24x24
        self.conv22 = nn.Conv2d(16, 32, 5, 1)  # Input = 16x24x24  Output = 32x20x20
        self.conv23 = nn.Conv2d(16, 32, 7, 1)  # Input = 16x22x22  Output = 32x16x16
        # define conv layers with output channels as 64, kernel sizes of 3, 5, 7 and stride of 1
        self.conv31 = nn.Conv2d(32, 64, 3, 1)  # Input = 32x24x24  Output = 64x22x22
        self.conv32 = nn.Conv2d(32, 64, 5, 1)  # Input = 32x20x20  Output = 64x16x16
        self.conv33 = nn.Conv2d(32, 64, 7, 1)  # Input = 32x16x16  Output = 64x10x10
        # define a max pooling layer with kernel size 2
        self.maxpool = nn.MaxPool2d(2),  # Output = 64x11x11
        # define a dropout layer with a probability of 0.25
        self.dropout1 = nn.Dropout(0.25)
        # define a dropout layer with a probability of 0.5
        self.dropout2 = nn.Dropout(0.5)
        # define linear (dense) layers with 128 output features
        self.fc11 = nn.Linear(64*11*11, 128)
        self.fc12 = nn.Linear(64*8*8, 128)  # after maxpooling 2x2
        self.fc13 = nn.Linear(64*5*5, 128)
        # define linear (dense) layers with output features corresponding to the number of classes in the dataset
        self.fc21 = nn.Linear(128, 10)
        self.fc22 = nn.Linear(128, 10)
        self.fc23 = nn.Linear(128, 10)
        self.fc33 = nn.Linear(30, 10)

    def forward(self, x1):
        # Use the layers defined above in a sequential way (follow the same order as the layer definitions above) and
        # write the forward pass; after each of conv1, conv2, conv3 and fc1, use a relu activation.
        x = F.relu(self.conv11(x1))
        x = F.relu(self.conv21(x))
        x = F.relu(self.maxpool(self.conv31(x)))
        # x = torch.flatten(x, 1)
        x = x.view(-1, 64*11*11)
        x = self.dropout1(x)
        x = F.relu(self.fc11(x))
        x = self.dropout2(x)
        x = self.fc21(x)
        y = F.relu(self.conv12(x1))
        y = F.relu(self.conv22(y))
        y = F.relu(self.maxpool(self.conv32(y)))
        # x = torch.flatten(x, 1)
        y = y.view(-1, 64*8*8)
        y = self.dropout1(y)
        y = F.relu(self.fc12(y))
        y = self.dropout2(y)
        y = self.fc22(y)
        z = F.relu(self.conv13(x1))
        z = F.relu(self.conv23(z))
        z = F.relu(self.maxpool(self.conv33(z)))
        # x = torch.flatten(x, 1)
        z = z.view(-1, 64*5*5)
        z = self.dropout1(z)
        z = F.relu(self.fc13(z))
        z = self.dropout2(z)
        z = self.fc23(z)
        out = self.fc33(torch.cat((x, y, z), 0))
        output = F.log_softmax(out, dim=1)
        return output
import unittest
class TestImplementations(unittest.TestCase):
    # Dataloading tests
    def test_dataset(self):
        self.dataset_classes = ['0 - zero',
                                '1 - one',
                                '2 - two',
                                '3 - three',
                                '4 - four',
                                '5 - five',
                                '6 - six',
                                '7 - seven',
                                '8 - eight',
                                '9 - nine']
        self.assertTrue(train_dataset.classes == self.dataset_classes)
        self.assertTrue(train_dataset.train == True)

    def test_dataloader(self):
        self.assertTrue(train_dataloader.batch_size == 32)
        self.assertTrue(test_dataloader.batch_size == 32)

    def test_total_parameters(self):
        model = Net().to(device)
        # self.assertTrue(sum(p.numel() for p in model.parameters()) == 1015946)

suite = unittest.TestLoader().loadTestsFromModule(TestImplementations())
unittest.TextTestRunner().run(suite)
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # send the image, target to the device
        data, target = data.to(device), target.to(device)
        # flush out the gradients stored in the optimizer
        optimizer.zero_grad()
        # pass the image to the model and assign the output to a variable named output
        output = model(data)
        # calculate the loss (use nll_loss in pytorch)
        loss = F.nll_loss(output, target)
        # do a backward pass
        loss.backward()
        # update the weights
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            # send the image, target to the device
            data, target = data.to(device), target.to(device)
            # pass the image to the model and assign the output to a variable named output
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
model = Net().to(device)
## Define Adam Optimiser with a learning rate of 0.01
optimizer = torch.optim.Adam(model.parameters(),lr=0.01)
start = timeit.default_timer()
for epoch in range(1, 11):
    train(model, device, train_dataloader, optimizer, epoch)
    test(model, device, test_dataloader)
stop = timeit.default_timer()
print('Total time taken: {} seconds'.format(int(stop - start)) )
Here is my full code. I can't understand what could possibly be going wrong.
It gives this error:
<ipython-input-72-194680537dcc> in forward(self, x1)
46 x = F.relu(self.conv11(x1))
47 x = F.relu(self.conv21(x))
---> 48 x = F.relu(self.maxpool(self.conv31(x)))
49 #x = torch.flatten(x, 1)
50 x = x.view(-1,64*11*11)
TypeError: 'tuple' object is not callable
P.S.: Pytorch Noob here.
You have mistakenly placed a comma at the end of the line where you define self.maxpool: self.maxpool = nn.MaxPool2d(2), # Output = 64x11x11 (see it?)
This comma makes self.maxpool a tuple instead of a torch.nn.modules.pooling.MaxPool2d. Drop the comma at the end and this error is fixed.
I see you haven't given a stride argument in your definition self.maxpool = nn.MaxPool2d(2). Note that nn.MaxPool2d defaults the stride to the kernel size, so this is equivalent to self.maxpool = nn.MaxPool2d(2, stride=2); pass stride explicitly if you want a different step.
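Putting both points together, a sketch of the corrected line:

# module, not a tuple: no trailing comma; stride defaults to the kernel size (2)
self.maxpool = nn.MaxPool2d(2)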

3D CNN parameter calculation

I have CNN code that was written using the TensorFlow library:
x_img = tf.placeholder(tf.float32)
y_label = tf.placeholder(tf.float32)

def convnet_3d(x_img, W):
    conv_3d_layer = tf.nn.conv3d(x_img, W, strides=[1,1,1,1,1], padding='VALID')
    return conv_3d_layer

def maxpool_3d(x_img):
    maxpool_3d_layer = tf.nn.max_pool3d(x_img, ksize=[1,2,2,2,1], strides=[1,2,2,2,1], padding='VALID')
    return maxpool_3d_layer

def convolutional_neural_network(x_img):
    weights = {'W_conv1_layer': tf.Variable(tf.random_normal([3,3,3,1,32])),
               'W_conv2_layer': tf.Variable(tf.random_normal([3,3,3,32,64])),
               'W_fc_layer': tf.Variable(tf.random_normal([409600,1024])),
               'W_out_layer': tf.Variable(tf.random_normal([1024, num_classes]))}
    biases = {'b_conv1_layer': tf.Variable(tf.random_normal([32])),
              'b_conv2_layer': tf.Variable(tf.random_normal([64])),
              'b_fc_layer': tf.Variable(tf.random_normal([1024])),
              'b_out_layer': tf.Variable(tf.random_normal([num_classes]))}
    x_img = tf.reshape(x_img, shape=[-1, img_x, img_y, img_z, 1])
    conv1_layer = tf.nn.relu(convnet_3d(x_img, weights['W_conv1_layer']) + biases['b_conv1_layer'])
    conv1_layer = maxpool_3d(conv1_layer)
    conv2_layer = tf.nn.relu(convnet_3d(conv1_layer, weights['W_conv2_layer']) + biases['b_conv2_layer'])
    conv2_layer = maxpool_3d(conv2_layer)
    fc_layer = tf.reshape(conv2_layer, [-1, 409600])
    fc_layer = tf.nn.relu(tf.matmul(fc_layer, weights['W_fc_layer']) + biases['b_fc_layer'])
    fc_layer = tf.nn.dropout(fc_layer, keep_rate)
    output_layer = tf.matmul(fc_layer, weights['W_out_layer']) + biases['b_out_layer']
    return output_layer
My input image x_img is 25x25x25 (a 3D image). I have some questions about the code:
1- Does [3,3,3,1,32] in 'W_conv1_layer' mean [width x height x depth x channels x number of filters]?
2- In 'W_conv2_layer' the weights are [3,3,3,32,64]; why is the output 64? I know that 3x3x3 is the filter size and 32 is the input coming from the first layer.
3- In 'W_fc_layer' the weights are [409600,1024]; 1024 is the number of nodes in the FC layer, but where does the magic number '409600' come from?
4- Before the image goes into the conv layers, why do we need to reshape it?
x_img = tf.reshape(x_img, shape=[-1, img_x, img_y, img_z, 1])
All the answers can be found in the official documentation of conv3d.
1- The weights should be [filter_depth, filter_height, filter_width, in_channels, out_channels].
2- The numbers 32 and 64 are chosen simply because they work; they are just hyperparameters.
3- 409600 comes from reshaping the output of maxpool3d (it is probably a mistake; the real size should be 4096, see the comments).
4- Because TensorFlow expects a certain layout for its input.
You should try implementing a simple convnet on images before moving on to more complicated stuff.
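Regarding point 3, a quick sketch to check the flattened size (assuming 'VALID' padding throughout, as in the code): each dimension shrinks as (size - k) // s + 1, giving 25 -> 23 -> 11 -> 9 -> 4, so the flattened size is 4 x 4 x 4 x 64 = 4096, not 409600:

def valid_out(size, k, s):
    # output length along one dimension for a 'VALID' conv/pool
    return (size - k) // s + 1

d = 25
d = valid_out(d, 3, 1)  # conv1:   25 -> 23
d = valid_out(d, 2, 2)  # maxpool: 23 -> 11
d = valid_out(d, 3, 1)  # conv2:   11 -> 9
d = valid_out(d, 2, 2)  # maxpool:  9 -> 4
print(d * d * d * 64)   # 4 * 4 * 4 * 64 = 4096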

Visualization of the filters of VGG16

I am learning CNNs and am currently working on deconvolution of the layers. I have begun learning about upsampling, and about observing how convolution layers see the world by generating feature maps from the filters, following the source Visualization of the filters of VGG16 and its source code. I have changed the input, and the code is as follows:
import imageio
import numpy as np
import time
from keras.applications import vgg16
from keras import backend as K
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# dimensions of the generated pictures for each filter.
img_width = 128
img_height = 128
# the name of the layer we want to visualize
# (see model definition at keras/applications/vgg16.py)
layer_name = 'block5_conv1'
# util function to convert a tensor into a valid image
def deprocess_image(x):
    # normalize tensor: center on 0., ensure std is 0.1
    x -= x.mean()
    x /= (x.std() + K.epsilon())
    x *= 0.1
    # clip to [0, 1]
    x += 0.5
    x = np.clip(x, 0, 1)
    # convert to RGB array
    x *= 255
    if K.image_data_format() == 'channels_first':
        x = x.transpose((1, 2, 0))
    x = np.clip(x, 0, 255).astype('uint8')
    return x
# build the VGG16 network with ImageNet weights
model = vgg16.VGG16(weights='imagenet', include_top=False)
print('Model loaded.')
model.summary()
# this is the placeholder for the input images
input_img = model.input
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
def normalize(x):
    # utility function to normalize a tensor by its L2 norm
    return x / (K.sqrt(K.mean(K.square(x))) + K.epsilon())
kept_filters = []
for filter_index in range(200):
    # we only scan through the first 200 filters,
    # but there are actually 512 of them
    print('Processing filter %d' % filter_index)
    start_time = time.time()
    # we build a loss function that maximizes the activation
    # of the nth filter of the layer considered
    layer_output = layer_dict[layer_name].output
    if K.image_data_format() == 'channels_first':
        loss = K.mean(layer_output[:, filter_index, :, :])
    else:
        loss = K.mean(layer_output[:, :, :, filter_index])
    # we compute the gradient of the input picture wrt this loss
    grads = K.gradients(loss, input_img)[0]
    # normalization trick: we normalize the gradient
    grads = normalize(grads)
    # this function returns the loss and grads given the input picture
    iterate = K.function([input_img], [loss, grads])
    # step size for gradient ascent
    step = 1.
    inpImgg = '/home/sanaalamgeer/Downloads/cat.jpeg'
    inpImg = mpimg.imread(inpImgg)
    inpImg = cv2.resize(inpImg, (img_width, img_height))
    # we start from a gray image with some random noise
    if K.image_data_format() == 'channels_first':
        input_img_data = inpImg.reshape((1, 3, img_width, img_height))
    else:
        input_img_data = inpImg.reshape((1, img_width, img_height, 3))
    input_img_data = (input_img_data - 0.5) * 20 + 128
    # we run gradient ascent for 20 steps
    for i in range(20):
        loss_value, grads_value = iterate([input_img_data])
        input_img_data += grads_value * step
        print('Current loss value:', loss_value)
        if loss_value <= 0.:
            # some filters get stuck to 0, we can skip them
            break
    # decode the resulting input image
    if loss_value > 0:
        img = deprocess_image(input_img_data[0])
        kept_filters.append((img, loss_value))
    end_time = time.time()
    print('Filter %d processed in %ds' % (filter_index, end_time - start_time))
# we will stich the best 64 filters on a 8 x 8 grid.
n = 8
# the filters that have the highest loss are assumed to be better-looking.
# we will only keep the top 64 filters.
kept_filters.sort(key=lambda x: x[1], reverse=True)
kept_filters = kept_filters[:n * n]
# build a black picture with enough space for
# our 8 x 8 filters of size 128 x 128, with a 5px margin in between
margin = 5
width = n * img_width + (n - 1) * margin
height = n * img_height + (n - 1) * margin
stitched_filters = np.zeros((width, height, 3))
# fill the picture with our saved filters
for i in range(n):
    for j in range(n):
        img, loss = kept_filters[i * n + j]
        stitched_filters[(img_width + margin) * i: (img_width + margin) * i + img_width,
                         (img_height + margin) * j: (img_height + margin) * j + img_height, :] = img
# save the result to disk
imageio.imwrite('stitched_filters_%dx%d.png' % (n, n), stitched_filters)
The input image I am using is a cat photo (image omitted).
The code is supposed to generate an output with 64 feature maps stitched into one image, as shown in Visualization of the filters of VGG16, but it generates the same input image for each filter (output image omitted).
I am confused about what's wrong and where I should make changes. Please help.
What complex code... I'd do this:
from keras.applications import vgg16
from keras.applications.vgg16 import preprocess_input
from keras.models import Model
import numpy as np

layer_name = 'block5_conv1'

# create a section of the model that outputs the layer we want
model = vgg16.VGG16(weights='imagenet', include_top=False)
model = Model(model.input, model.get_layer(layer_name).output)

# open and preprocess the cat image
catImage = openTheCatImage(catFile)
catImage = np.expand_dims(catImage, axis=0)
catImage = preprocess_input(catImage)

# get the layer outputs
features = model.predict(catImage)

# plot
for channel in range(features.shape[-1]):  # or .shape[1], or up to a limit you like
    featureMap = features[:, :, :, channel]  # or features[:, channel]
    featureMap = deprocess_image(featureMap)[0]
    saveOrPlot(featureMap)
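Here openTheCatImage and saveOrPlot are placeholders; purely hypothetical implementations (assuming a 128x128 input and matplotlib for display) might look like:

from keras.preprocessing import image as kimage
import matplotlib.pyplot as plt

def openTheCatImage(catFile):
    # load and resize the image, returning a float array of shape (128, 128, 3)
    img = kimage.load_img(catFile, target_size=(128, 128))
    return kimage.img_to_array(img)

def saveOrPlot(featureMap):
    # featureMap is a 2D uint8 array after deprocess_image(...)[0]
    plt.imshow(featureMap, cmap='viridis')
    plt.axis('off')
    plt.show()

Note the design difference: unlike the gradient-ascent script in the question, this approach simply runs the image through the truncated model and plots the actual activations (feature maps) of block5_conv1 for that image.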