Applying a 2D function to a 4D tensor in PyTorch

I have a 2D function that takes a matrix, a 2D tensor with shape (28, 28),
and I have a tensor, let's say of shape (64, 10, 28, 28): a batch of 64 images that passed through a conv2d layer with 10 kernels.
Now I want to apply the 2D function to the last two dimensions of the tensor, the (28, 28) part.
So far I did that in a very inefficient way:
def activation_func(input):
    for batch_idx in range(input.shape[0]):
        for channel_idx in range(input.shape[1]):
            input[batch_idx][channel_idx] = func_2d(input[batch_idx][channel_idx])
    return input
which is highly inefficient, as I noticed.
Is there any way of doing this efficiently?
I can post the entire code if necessary.
EDIT:
def func_2d(input):
    global indices  # yes I know, I will remove this global stuff later
    # indices = [(i, j) for i in range(1, 28, 4) for j in range(1, 28, 4)]
    for x, y in indices:
        relu_decision = relu(input[x, y])  # standard relu - relu(x) = (x > 0) * x
        if not relu_decision:
            # zero out the patch
            input[x - 1: x + 3, y - 1: y + 3] = 0
    return input

In such cases, I use a Kronecker product trick:
import torch
torch.set_printoptions(linewidth=200) # you can better see how the mask is shaped
# simulating an input
input = torch.rand(1, 1, 28, 28) - 0.5
ids = torch.meshgrid((torch.arange(1, 28, 4), torch.arange(1, 28, 4)))
# note that relu(x) = (x > 0.) * x, so adjust it to your needs
relus = torch.nn.functional.relu(input[(slice(None), slice(None), *ids)]).to(bool)
A = torch.ones(4, 4)
# build a block mask: each True in relus becomes a 4x4 block of ones, each False a 4x4 block of zeros
mask = torch.kron(relus, A)
print(mask.shape)
output = input * mask
print(mask[0, 0])
print(output[0, 0])
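For intuition, torch.kron replaces each element of its first argument with a copy of the second argument scaled by that element, so a 0/1 keep-mask expanded with a block of ones becomes exactly the patch-wise mask used above. A tiny standalone example (toy values, not the data above):
import torch
keep = torch.tensor([[1., 0.],
                     [0., 1.]])  # 1 = keep the 2x2 patch, 0 = zero it out
block = torch.ones(2, 2)
print(torch.kron(keep, block))   # 4x4 mask made of 2x2 blocks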

Related

keras input not matching output confusion

I am trying to implement a silly learn-to-rank example. Essentially, I have 2 descriptions of a location: size and number of bathrooms. I want to "combine" them to create a score. Then I wish to compare the scores to pick the "best". I will always be comparing 3 locations at a time.
The neural network I expect to do this:
# assuming the standard TF/Keras imports for this snippet
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Lambda, Dense, Concatenate
from tensorflow.keras.models import Model

# 3 locations with 2 descriptions.
rinputs = Input(shape=(3, 2), name='inputlayer')
# take my 3 expected inputs, split them
split = Lambda( lambda x: tf.split(x,num_or_size_splits=3,axis=1))(rinputs)
input_one_tensor = split[0]
input_two_tensor = split[1]
input_three_tensor = split[2]
# combine each set of location elements into 1 "score"
layer2 = Dense(1, name = 'Layer2', use_bias = True, activation = 'sigmoid') # 60 was better than 100
layer2a = layer2(input_one_tensor)
layer2b = layer2(input_two_tensor)
layer2c = layer2(input_three_tensor)
concatLayer = Concatenate(name = 'ConcatLayer2')([layer2a,layer2b, layer2c])
# softmax my score to get "best selection"
softmaxLayer = Dense(3, activation='softmax', name = 'softmax', use_bias = False)
softmaxLayer = softmaxLayer(concatLayer)
model = Model(inputs=rinputs, outputs=softmaxLayer)
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
I now create my test data:
loc1 = [1, 5]
loc2 = [4, 1]
loc3 = [6, 7]
# create two entries for my trial run
inputs = np.asarray([[loc1, loc2, loc3], [loc3,loc3,loc1]]).reshape(2,3,2)
ytrue = np.asarray([[1, 0, 0], [0, 0, 1]]).reshape(2,3)
model.fit(inputs, ytrue,verbose=True,)
But then I get the following error about my outputs, which I am not understanding:
File "/.virtualenvs/python310/lib/python3.10/site-packages/keras/losses.py", line 1990, in categorical_crossentropy
return backend.categorical_crossentropy(
File "/.virtualenvs/python310/lib/python3.10/site-packages/keras/backend.py", line 5529, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 3) and (None, 1, 3) are incompatible
I'm not entirely understanding why the shapes don't match. I expect my softmax layer to output 3 numbers that sum to 1 and can be compared to my ytrue.
Any insights appreciated.
Just from the model architecture itself, it seems like you need two-dimensional data to be fed into Layer2.
One may use a Reshape/Flatten layer to fix it.
By reshaping the output of the Lambda layer from (None, 1, 2) to (None, 2), the final output's shape becomes the compatible (None, 3) as well.
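A minimal sketch of that fix, assuming the rest of the model stays as posted (only the three split tensors change):
from tensorflow.keras.layers import Reshape
# drop the singleton axis produced by tf.split: (None, 1, 2) -> (None, 2)
input_one_tensor = Reshape((2,))(split[0])
input_two_tensor = Reshape((2,))(split[1])
input_three_tensor = Reshape((2,))(split[2])
# Layer2, the Concatenate and the softmax head stay unchanged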
Additional notes:
As an example borrowed (with some modifications) from the TensorFlow website, let's assume we want to split an input tensor of shape (3, 2) into 3 smaller tensors along axis=0 (the axis that corresponds to axis=1 once a batch dimension is prepended):
x = tf.Variable(tf.random.uniform([3, 2], -1, 1))
s0, s1, s2 = tf.split(x, num_or_size_splits=3, axis=0)
Each of the resulting splits s0, s1 and s2 has shape (1, 2), i.e. a 2D tensor consistent with the tensor it is derived from, and not a vector of shape (2,). In the context of your problem, for a batch, that would be (None, 1, 2).

How to increase batch size in GPT2 training for translation task?

I am developing code to use the pre-trained GPT2 model for a machine translation task. My data's word-to-id dictionary has length 91, and I developed the following code for my model:
import torch
import torch.nn as nn  # used below for nn.Embedding / nn.Linear / nn.CrossEntropyLoss
from torch.utils.data import DataLoader
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
# data preparation code
def batch_sequences(x, y, env):
    """
    Take as input a list of n sequences (torch.LongTensor vectors) and return
    a tensor of size (slen, n) where slen is the length of the longest
    sentence, and a vector lengths containing the length of each sentence.
    """
    lengths_x = torch.LongTensor([len(s) + 2 for s in x])
    lengths_y = torch.LongTensor([len(s) + 2 for s in y])
    max_length = max(lengths_x.max().item(), lengths_y.max().item())
    sent_x = torch.LongTensor(
        max_length, lengths_x.size(0)).fill_(env.pad_index)
    sent_y = torch.LongTensor(
        max_length, lengths_y.size(0)).fill_(env.pad_index)
    assert lengths_x.min().item() > 2
    assert lengths_y.min().item() > 2
    sent_x[0] = env.eos_index
    for i, s in enumerate(x):
        sent_x[1:lengths_x[i] - 1, i].copy_(s)
        sent_x[lengths_x[i] - 1, i] = env.eos_index
    sent_y[0] = env.eos_index
    for i, s in enumerate(y):
        sent_y[1:lengths_y[i] - 1, i].copy_(s)
        sent_y[lengths_y[i] - 1, i] = env.eos_index
    return sent_x, sent_y, max_length
def collate_fn(elements):
    """
    Collate samples into a batch.
    """
    x, y = zip(*elements)
    x = [torch.LongTensor([env.word2id[w]
                           for w in seq if w in env.word2id]) for seq in x]
    y = [torch.LongTensor([env.word2id[w]
                           for w in seq if w in env.word2id]) for seq in y]
    x, y, length = batch_sequences(x, y, env)
    return (x, length), (y, length), torch.LongTensor(nb_ops)

loader = DataLoader(data, batch_size=1, shuffle=False, collate_fn=collate_fn)
gpt2 = GPT2Model.from_pretrained('gpt2')
in_layer = nn.Embedding(len(env.word2id), 768)
out_layer = nn.Linear(768, len(env.word2id))
parameters = list(gpt2.parameters()) + list(in_layer.parameters()) + list(out_layer.parameters())
optimizer = torch.optim.Adam(parameters)
loss_fn = nn.CrossEntropyLoss()
for layer in (gpt2, in_layer, out_layer):
    layer.train()

accuracies = list()
n_epochs = 5
for i in range(n_epochs):
    for (x, x_len), (y, y_len) in loader:
        x = x.to(device=device)
        y = y.to(device=device)
        embeddings = in_layer(x.reshape(1, -1))
        hidden_state = gpt2(inputs_embeds=embeddings).last_hidden_state[:, :]
        logits = out_layer(hidden_state)[0]
        loss = loss_fn(logits, y.reshape(-1))
        accuracies.append(
            (logits.argmax(dim=-1) == y.reshape(-1)).float().mean().item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if len(accuracies) % 500 == 0:
            accuracy = sum(accuracies[-50:]) / len(accuracies[-50:])
            print(f'Samples: {len(accuracies)}, Accuracy: {accuracy}')
This code works pretty well when the batch size is 1, but it is very slow. I wanted to increase the batch size from 1 to 32, but I get some dimension compatibility problems. How can I increase the batch size without errors?
My data consists of pairs of sentences, where the first one is a sentence in the first language and the second one is its translation into the second language.
For example, assume that x.shape is (batch_size, 12) (meaning we have batch_size sentences of length 12 as input) and y.shape is also (batch_size, 12) (the translations). We also have a word-to-id dictionary of length 90 that matches each word in a sentence with its index.
This problem can be solved using padding. We need two special symbols:
code 0 in the inputs (x) will denote "blank" tokens that should not be translated.
code -100 in the outputs (y) will denote "blank" tokens that should not participate in the calculation of the loss. nn.CrossEntropyLoss() is programmed to ignore this value (via the ignore_index argument, which defaults to -100).
The batch of size 3 could look like this:
x:
[[1, 2, 3, 0, 0],
[ 4, 5, 6, 7, 8],
[ 9, 8, 0, 0, 0]]
y:
[[1, 2, 3, -100, -100],
[ 4, 5, 6, 7, 8],
[ 9, 8, -100, -100, -100]]
You could generate it with code such as:
def pad_sequences(batch, pad_value=0):
    n = max(len(v) for v in batch)
    return torch.tensor([v + [pad_value] * (n - len(v)) for v in batch])
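For example, with the toy batch above (the list names are illustrative):
x_batch = [[1, 2, 3], [4, 5, 6, 7, 8], [9, 8]]
y_batch = [[1, 2, 3], [4, 5, 6, 7, 8], [9, 8]]
x = pad_sequences(x_batch, pad_value=0)     # inputs padded with 0
y = pad_sequences(y_batch, pad_value=-100)  # targets padded with -100
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)  # -100 is also the default ignore_index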
However, I feel there is an issue with your problem statement. If you perform machine translation, then your inputs and outputs can have different lengths, but your architecture only allows x and y to have the same length. If you want to support x and y of different lengths, I would suggest using a seq2seq architecture such as T5 instead.
Another issue is that GPT is autoregressive, so if y is completely aligned with x, then we cannot use the suffix of x while generating the left part of y. So if you wish your x and y to be perfectly aligned, but still would like to use the full information about x when generating y, I would recommend using a bidirectional encoder such as BERT.
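Putting the pieces together, the batched training step could then look roughly like this (a sketch only; it assumes a batch-first layout with the padding above, unlike the original (seq_len, batch) layout, and x_ids / y_ids stand for the per-sentence id lists produced in the collate step):
x = pad_sequences(x_ids, pad_value=0).to(device)      # (batch, seq_len) input ids
y = pad_sequences(y_ids, pad_value=-100).to(device)   # (batch, seq_len) target ids
attention_mask = (x != 0).long()                      # let GPT2 ignore the padded positions
embeddings = in_layer(x)                              # (batch, seq_len, 768)
hidden_state = gpt2(inputs_embeds=embeddings,
                    attention_mask=attention_mask).last_hidden_state
logits = out_layer(hidden_state)                      # (batch, seq_len, vocab)
loss = loss_fn(logits.reshape(-1, logits.size(-1)), y.reshape(-1))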

How to properly implement data reorganization using PyTorch?

It's going to be a long post, sorry in advance...
I'm working on a denoising algorithm and my goal is to:
Use PyTorch to design / train the model
Convert the PyTorch model into a CoreML model
The denoising algorithm consists in the following 3 parts:
A "down-sampling" + noise level map
A regular convnet
An "up-sampling"
The first part is quite simple in its idea, but not so easy to explain. Take, for instance, an input color image and an input value "sigma" that represents the standard deviation of the image noise.
The "down-sampling" part is in fact a space-to-depth. In short, for a given channel and each 2x2 block of pixels, the space-to-depth creates a single pixel composed of 4 channels. The number of channels is multiplied by 4 while the height and width are divided by 2. The data is simply reorganized.
The noise level map consists of creating 3 channels containing the standard deviation value so that the convnet knows how to properly denoise the input image.
This will be maybe more clear with some code:
def downsample_and_noise_map(input, sigma):
    # Input tensor size (batch, channels, height, width)
    in_n, in_c, in_h, in_w = input.size()
    # Output tensor size
    out_h = in_h // 2
    out_w = in_w // 2
    sigma_c = in_c      # nb of channels of the standard deviation tensor
    image_c = in_c * 4  # nb of channels of the image tensor
    # Standard deviation tensor
    output_sigma = sigma.view(1, 1, 1, 1).repeat(in_n, sigma_c, out_h, out_w)
    # Image tensor
    output_image = torch.zeros((in_n, image_c, out_h, out_w))
    output_image[:, 0::4, :, :] = input[:, :, 0::2, 0::2]
    output_image[:, 1::4, :, :] = input[:, :, 0::2, 1::2]
    output_image[:, 2::4, :, :] = input[:, :, 1::2, 0::2]
    output_image[:, 3::4, :, :] = input[:, :, 1::2, 1::2]
    # Concatenate standard deviation and image tensors
    return torch.cat((output_sigma, output_image), dim=1)
This function is then called as the first step in the model's forward function:
def forward(self, x, sigma):
    x = downsample_and_noise_map(x, sigma)
    x = self.convnet(x)
    x = upsample(x)
    return x
Let's consider an input tensor of size 1x3x100x100 (PyTorch standard: batch, channels, height, width) and a sigma value of 0.1. The output tensor has the following properties:
Tensor's shape is 1x15x50x50
Tensor's values for channels 0, 1 and 2 are all equal to sigma = 0.1
Tensor's values for channels 3, 4, 5, 6 are composed of the input image values of channel 0
Tensor's values for channels 7, 8, 9, 10 are composed of the input image values of channel 1
Tensor's values for channels 11, 12, 13, 14 are composed of the input image values of channel 2
If this code is not clear enough, I can post an even more naive version.
The up-sampling part is the reciprocal function of the downsampling one.
I was able to use this function for training and testing in PyTorch.
Then, I tried to convert the model to CoreML with ONNX as an intermediate step.
The conversion to ONNX generated "TracerWarning". Conversion from ONNX to CoreML failed (TypeError: 1.0 has type numpy.float64, but expected one of: int, long). The problem came from the down-sampling + noise level map (and from up-sampling too).
When I removed the down-sampling + noise level map and up-sampling layers, I was able to convert to ONNX and to CoreML very easily since only a simple convnet remained. This means I have a solution to my problem: implement these 2 layers using 2 shaders on the mobile side. But I'm not satisfied with this solution as I want my model to contain all layers ^^
Before considering writing a post here, I crawled the Internet to find an answer and I was able to write a better version of the previous function using reshape and permute. This version removed all the ONNX warnings, but the CoreML conversion still failed...
def downsample_and_noise_map(input, sigma):
    # Input image size
    in_n, in_c, in_h, in_w = input.size()
    # Output tensor size
    out_n = in_n
    out_h = in_h // 2
    out_w = in_w // 2
    # Create standard deviation tensor
    output_sigma = sigma.view(out_n, 1, 1, 1).repeat(out_n, in_c, out_h, out_w)
    # Split RGB channels
    channels_rgb = torch.split(input, 1, dim=1)
    # Reshape (space-to-depth) each image channel
    channels_reshaped = []
    for channel in channels_rgb:
        channel = channel.reshape(1, out_h, 2, out_w, 2)
        channel = channel.permute(2, 4, 0, 1, 3)
        channel = channel.reshape(1, 4, out_h, out_w)
        channels_reshaped.append(channel)
    # Concatenate all reshaped image channels together
    output_image = torch.cat(channels_reshaped, dim=1)
    # Concatenate standard deviation and image tensors
    output = torch.cat([output_sigma, output_image], dim=1)
    return output
So here are (some of) my questions:
What is the preferred PyTorch way to implement a function such as downsample_and_noise_map within a model?
Same question, but when the conversion to ONNX and then to CoreML is part of the equation?
Is PyTorch -> ONNX -> CoreML still the best path to deploy the model for iOS production?
Thanks for your help (and your patience) ^^
Disclaimer: I'm not familiar with CoreML or deploying to iOS, but I do have experience deploying PyTorch models in TensorRT and OpenVINO via ONNX.
The main issues I've faced when deploying to other frameworks is that operations like slicing and repeating tensors tend to have limited support in other frameworks. Often we can construct equivalent conv or transpose-conv operations which achieve the desired behavior.
In order to ensure we don't export the logic used to construct the conv weights I've separated the weight initialization from the application of the weights. This makes the ONNX export much more straightforward since all it sees is some constant tensors being applied.
class DownsampleAndNoiseMap():
    def __init__(self):
        self.initialized = False
        self.weight = None
        self.zeros = None

    def init_weights(self, input):
        with torch.no_grad():
            in_n, in_c, in_h, in_w = input.size()
            out_h = int(in_h // 2)
            out_w = int(in_w // 2)
            sigma_c = in_c
            image_c = in_c * 4
            # conv weights used for downsampling
            self.weight = torch.zeros(image_c, in_c, 2, 2).to(input)
            for c in range(in_c):
                self.weight[4 * c, c, 0, 0] = 1
                self.weight[4 * c + 1, c, 0, 1] = 1
                self.weight[4 * c + 2, c, 1, 0] = 1
                self.weight[4 * c + 3, c, 1, 1] = 1
            # zeros used to replace repeat
            self.zeros = torch.zeros(in_n, sigma_c, out_h, out_w).to(input)
        self.initialized = True

    def __call__(self, input, sigma):
        assert self.initialized
        output_sigma = self.zeros + sigma
        output_image = torch.nn.functional.conv2d(input, self.weight, stride=2)
        return torch.cat((output_sigma, output_image), dim=1)

class Upsample():
    def __init__(self):
        self.initialized = False
        self.weight = None

    def init_weights(self, input):
        with torch.no_grad():
            in_n, in_c, in_h, in_w = input.size()
            image_c = in_c * 4
            self.weight = torch.zeros(in_c + image_c, in_c, 2, 2).to(input)
            for c in range(in_c):
                self.weight[in_c + 4 * c, c, 0, 0] = 1
                self.weight[in_c + 4 * c + 1, c, 0, 1] = 1
                self.weight[in_c + 4 * c + 2, c, 1, 0] = 1
                self.weight[in_c + 4 * c + 3, c, 1, 1] = 1
        self.initialized = True

    def __call__(self, input):
        assert self.initialized
        return torch.nn.functional.conv_transpose2d(input, self.weight, stride=2)
I made the assumption that upsample was the reciprocal of downsample in the sense that x == upsample(downsample_and_noise_map(x, sigma)) (correct me if I'm wrong in this assumption). I also verified that my version of downsample agrees with yours.
# consistency checking code
x = torch.randn(1, 3, 100, 100)
sigma = torch.randn(1)
# OP downsampling
y1 = downsample_and_noise_map(x, sigma)
ds = DownsampleAndNoiseMap()
ds.init_weights(x)
y2 = ds(x, sigma)
print('downsample diff:', torch.sum(torch.abs(y1 - y2)).item())
us = Upsample()
us.init_weights(x)
x_recov = us(ds(x, sigma))
print('recovery error:', torch.sum(torch.abs(x - x_recov)).item())
which results in
downsample diff: 0.0
recovery error: 0.0
Exporting to ONNX
When exporting we need to invoke init_weights for the new classes before using torch.onnx.export. For example
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.downsample = DownsampleAndNoiseMap()
        self.upsample = Upsample()
        self.convnet = lambda x: x  # placeholder

    def init_weights(self, x):
        self.downsample.init_weights(x)
        self.upsample.init_weights(x)

    def forward(self, x, sigma):
        x = self.downsample(x, sigma)
        x = self.convnet(x)
        x = self.upsample(x)
        return x
x = torch.randn(1, 3, 100, 100)
sigma = torch.randn(1)
model = Model()
# ... load state dict here
model.init_weights(x)
torch.onnx.export(model, (x, sigma), 'deploy.onnx', verbose=True, input_names=["input", "sigma"], output_names=["output"])
which gives the ONNX graph
graph(%input : Float(1, 3, 100, 100)
%sigma : Float(1)) {
%2 : Float(1, 3, 50, 50) = onnx::Constant[value=<Tensor>](), scope: Model
%3 : Float(1, 3, 50, 50) = onnx::Add(%2, %sigma), scope: Model
%4 : Float(12, 3, 2, 2) = onnx::Constant[value=<Tensor>](), scope: Model
%5 : Float(1, 12, 50, 50) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%input, %4), scope: Model
%6 : Float(1, 15, 50, 50) = onnx::Concat[axis=1](%3, %5), scope: Model
%7 : Float(15, 3, 2, 2) = onnx::Constant[value=<Tensor>](), scope: Model
%output : Float(1, 3, 100, 100) = onnx::ConvTranspose[dilations=[1, 1], group=1, kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%6, %7), scope: Model
return (%output);
}
As for the last question about the recommended way to deploy on iOS I can't answer that since I don't have experience in that area.

Numpy and tensorflow RNN shape representation mismatch

I'm building my first RNN in tensorflow. After understanding all the concepts regarding the 3D input shape, I came across this issue.
In my numpy version (1.15.4), the shape representation of 3D arrays is the following: (panel, row, column). I will make each dimension different so that it is clearer:
In [1]: import numpy as np
In [2]: arr = np.arange(30).reshape((2,3,5))
In [3]: arr
Out[3]:
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
In [4]: arr.shape
Out[4]: (2, 3, 5)
In [5]: np.__version__
Out[5]: '1.15.4'
My understanding here is: I have two timesteps, and each timestep has 3 observations with 5 features per observation.
However, in tensorflow "theory" (which I believe is strongly based on numpy), RNN cells expect tensors (i.e. just n-dimensional matrices) of shape [batch_size, timesteps, features], which could be translated to (row, panel, column) in numpy "jargon".
As can be seen, the representation doesn't match, leading to errors when feeding numpy data into a placeholder, which in most of the examples and theory is defined like:
x = tf.placeholder(tf.float32, shape=[None, N_TIMESTEPS_X, N_FEATURES], name='XPlaceholder')
np.reshape() doesn't solve the issue because it just rearranges the dimensions, but messes up the data.
I'm using the Dataset API for the first time, but I encounter the problems once inside the session, not in the Dataset API ops.
I'm using the static_rnn method, and everything works well until I have to feed the data into the placeholder, which obviously results in a shape error.
I have tried to change the placeholder shape to shape=[N_TIMESTEPS_X, None, N_FEATURES]. HOWEVER, I'm using the Dataset API, and I get errors when making the initializer if I change the X placeholder to shape=[N_TIMESTEPS_X, None, N_FEATURES].
So, to summarize:
First problem: Shape errors with different shape representations.
Second problem: Dataset error when equating the shape representations (I think that either static_rnn or dynamic_rnn would function if this is resolved).
My question is:
Is there anything I'm missing in regard to this different representation logic which makes the practice confusing?
Could the solution be attained by switching to dynamic_rnn? (Although the shape problems I encounter are related to the Dataset API initializer being fed with shape [N_TIMESTEPS_X, None, N_FEATURES], not to the RNN cell itself.)
Thank you very much for your time.
Full code:
'''The idea is to create xt, yt, xval and yval. My numpy arrays to
be fed are of the following shapes:
The 3D xt array has a shape of: (11, 69579, 74)
The 3D xval array has a shape of: (11, 7732, 74)
The yt array has a shape of: (69579, 3)
The yval array has a shape of: (7732, 3)
'''
N_TIMESTEPS_X = xt.shape[0] ## The stack number
BATCH_SIZE = 256
#N_OBSERVATIONS = xt.shape[1]
N_FEATURES = xt.shape[2]
N_OUTPUTS = yt.shape[1]
N_NEURONS_LSTM = 128 ## Number of units in the LSTMCell
N_NEURONS_DENSE = 64 ## Number of units in the Dense layer
N_EPOCHS = 600
LEARNING_RATE = 0.1
### Define the placeholders and gather the data.
train_data = (xt, yt)
validation_data = (xval, yval)
## We define the placeholders as a trick so that we do not break into memory problems, associated with feeding the data directly.
'''As an alternative, you can define the Dataset in terms of tf.placeholder() tensors, and feed the NumPy arrays when you initialize an Iterator over the dataset.'''
batch_size = tf.placeholder(tf.int64)
x = tf.placeholder(tf.float32, shape=[None, N_TIMESTEPS_X, N_FEATURES], name='XPlaceholder')
y = tf.placeholder(tf.float32, shape=[None, N_OUTPUTS], name='YPlaceholder')
# Creating the two different dataset objects.
train_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE)
# Creating the Iterator type that permits to switch between datasets.
itr = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
train_init_op = itr.make_initializer(train_dataset)
validation_init_op = itr.make_initializer(val_dataset)
next_features, next_labels = itr.get_next()
### Create the graph
cellType = tf.nn.rnn_cell.LSTMCell(num_units=N_NEURONS_LSTM, name='LSTMCell')
inputs = tf.unstack(next_features, N_TIMESTEPS_X, axis=0)
'''inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size]'''
RNNOutputs, _ = tf.nn.static_rnn(cell=cellType, inputs=inputs, dtype=tf.float32)
predictionsLayer = tf.layers.dense(inputs=tf.layers.batch_normalization(RNNOutputs[-1]), units=N_NEURONS_DENSE, activation=None, name='Dense_Layer')
### Define the cost function, that will be optimized by the optimizer.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=predictionsLayer, labels=next_labels, name='Softmax_plus_Cross_Entropy'))
optimizer_type = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE, name='AdamOptimizer')
optimizer = optimizer_type.minimize(cost)
### Model evaluation
correctPrediction = tf.equal(tf.argmax(predictionsLayer,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correctPrediction,tf.float32))
#confusionMatrix = tf.confusion_matrix(next_labels, predictionsLayer, num_classes=3, name='ConfMatrix')
N_BATCHES = train_data[0].shape[0] // BATCH_SIZE
## Saving variables so that we can restore them afterwards.
saver = tf.train.Saver()
save_dir = '/home/zmlaptop/Desktop/tfModels/{}_{}'.format(cellType.__class__.__name__, datetime.now().strftime("%Y%m%d%H%M%S"))
os.mkdir(save_dir)
varDict = {'nTimeSteps':N_TIMESTEPS_X, 'BatchSize': BATCH_SIZE, 'nFeatures':N_FEATURES,
'nNeuronsLSTM':N_NEURONS_LSTM, 'nNeuronsDense':N_NEURONS_DENSE, 'nEpochs':N_EPOCHS,
'learningRate':LEARNING_RATE, 'optimizerType': optimizer_type.__class__.__name__}
varDicSavingTxt = save_dir + '/varDict.txt'
modelFilesDir = save_dir + '/modelFiles'
os.mkdir(modelFilesDir)
logDir = save_dir + '/TBoardLogs'
os.mkdir(logDir)
acc_summary = tf.summary.scalar('Accuracy', accuracy)
loss_summary = tf.summary.scalar('Cost_CrossEntropy', cost)
summary_merged = tf.summary.merge_all()
with open(varDicSavingTxt, 'w') as outfile:
    outfile.write(repr(varDict))

with tf.Session() as sess:
    tf.set_random_seed(2)
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter(logDir + '/train', sess.graph)
    validation_writer = tf.summary.FileWriter(logDir + '/validation')
    # initialise iterator with train data
    sess.run(train_init_op, feed_dict={x: train_data[0], y: train_data[1], batch_size: BATCH_SIZE})
    print('¡Training starts!')
    for epoch in range(N_EPOCHS):
        batchAccList = []
        tot_loss = 0
        for batch in range(N_BATCHES):
            optimizer_output, loss_value, summary = sess.run([optimizer, cost, summary_merged])
            accBatch = sess.run(accuracy)
            tot_loss += loss_value
            batchAccList.append(accBatch)
            if batch % 10 == 0:
                train_writer.add_summary(summary, batch)
        epochAcc = tf.reduce_mean(batchAccList)
        if epoch % 10 == 0:
            print("Epoch: {}, Loss: {:.4f}, Accuracy: {}".format(epoch, tot_loss / N_BATCHES, epochAcc))
    #confM = sess.run(confusionMatrix)
    #confDic = {'confMatrix': confM}
    #confTxt = save_dir + '/confMDict.txt'
    #with open(confTxt, 'w') as outfile:
    #    outfile.write(repr(confDic))
    #print(confM)
    # initialise iterator with validation data
    sess.run(validation_init_op, feed_dict={x: validation_data[0], y: validation_data[1], batch_size: len(validation_data[0])})
    print('Validation Loss: {:4f}, Validation Accuracy: {}'.format(sess.run(cost), sess.run(accuracy)))
    summary_val = sess.run(summary_merged)
    validation_writer.add_summary(summary_val)
    saver.save(sess, modelFilesDir)
Is there anything I'm missing in regard to this different representation logic which makes the practice confusing?
In fact, you made a mistake about the input shapes of static_rnn and dynamic_rnn. The input of static_rnn has shape [timesteps, batch_size, features] (link), i.e. it is a list of 2D tensors of shape [batch_size, features]. The input shape of dynamic_rnn is either [timesteps, batch_size, features] or [batch_size, timesteps, features], depending on whether time_major is True or False (link).
Could the solution be attained by switching to dynamic_rnn?
The key is not whether you use static_rnn or dynamic_rnn, but that your data shape matches the required shape. The usual placeholder format is, as in your code, [None, N_TIMESTEPS_X, N_FEATURES]; it is also the convenient layout for the Dataset API.
You can use transpose() (link) instead of reshape(): transpose() will permute the dimensions of an array and won't mess up the data.
So your code needs to be modified.
# permute the dimensions
xt = xt.transpose([1,0,2])
xval = xval.transpose([1,0,2])
# adjust the unstack axis: axis=1 represents timesteps
inputs = tf.unstack(next_features, axis=1)
Other errors should have nothing to do with rnn shape.
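A quick toy illustration of the difference (not the OP's data): transpose reorders the axes while each value keeps its coordinates, whereas reshape only re-cuts the flat memory order.
import numpy as np
a = np.arange(6).reshape(2, 3)  # [[0 1 2], [3 4 5]]
print(a.transpose(1, 0))        # [[0 3], [1 4], [2 5]] -- axes swapped
print(a.reshape(3, 2))          # [[0 1], [2 3], [4 5]] -- same flat order, regrouped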

Fully Connected layer incorrect feed

Taking some ideas for building a CNN from here, I want to build a convnet that comprises two convolution layers, one fully connected layer (FCL) and a softmax layer (SL). I couldn't work out how to define the operations for the FCL and connect it back to the SL.
In the FCL, is the operation performed in 1D, with the input flattened? The weights for the FCL are generated in 2D, but how can I do the operation if so, because the matrix dimensions don't match between the reshaped input and the generated weights (comparing with the VGGNET detail column at the end)? Even if I can do a 1xM by MxN operation, the matrix sizes are mismatching; where did I go wrong in the FCL?
Traceback (most recent call last):
File "D:/Lab_Project_Files/TF/Practice Files/basictest22.py", line 108, in <module>
y = conv_net( x )
File "D:/Lab_Project_Files/TF/Practice Files/basictest22.py", line 93, in conv_net
FClayer = tf.nn.relu(tf.add(tf.matmul(reshape,layer3_weights),layer3_biases))
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [15360], [2240,64]
How should the FCL be defined?
I'm a bit confused about whether these operations apply to each and every image of the batch.
My input parameters are
INPUT_WIDTH = 16 # input image width
INPUT_HEIGHT = 12 # input image height
INPUT_DEPTH = 1 # input image depth = 1 for monochrome
NUM_CLASSES = 8 # output classes
BATCH_SIZE = 5 # grouping batch for training
# input output placeholders
x = tf.placeholder(tf.float32, [BATCH_SIZE, INPUT_WIDTH,INPUT_HEIGHT,INPUT_DEPTH ])
y_ = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_CLASSES])
My trial code:
def outputdetails(W1, H1, F, P, S):
    # W1,W2 - width of input and output
    # H1,H2 - height of input and output
    # F - size of the filter
    # P - padding
    # S - Stride
    P = 0.00
    W2 = int((W1 - F + 2 * P) / S + 1)
    H2 = int((H1 - F + 2 * P) / S + 1)
    return W2, H2
# CNN trial
def conv_net(x):
    # CONV1 layer
    FILTER_SIZE = 3  # applying 3x3 filter
    STRIDE = 1
    num_hidden = 64  # used for FCL as num of outputs
    NUM_CHANNELS = INPUT_DEPTH  # input channels
    DEPTH = 16  # Output channels: apply 16 filters
    layer1_weights = tf.Variable(tf.random_normal([FILTER_SIZE, FILTER_SIZE, NUM_CHANNELS, DEPTH], stddev=0.1))
    layer1_biases = tf.Variable(tf.zeros([DEPTH]))
    # CONV2 layer
    NUM_CHANNELS = 16
    DEPTH = 16
    layer2_weights = tf.Variable(tf.random_normal([FILTER_SIZE, FILTER_SIZE, NUM_CHANNELS, DEPTH], stddev=0.1))
    layer2_biases = tf.Variable(tf.zeros([DEPTH]))
    # Fully Connected layer
    # W1 - INPUT_WIDTH, H1 - INPUT_HEIGHT, F - FILTER_SIZE, S - STRIDE
    finalsize_width, finalsize_height = outputdetails(INPUT_WIDTH, INPUT_HEIGHT, FILTER_SIZE, 1, STRIDE)
    layer3_weights = tf.Variable(
        tf.truncated_normal([finalsize_width * finalsize_height * DEPTH, num_hidden], stddev=0.1))
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
    # softmax layer
    Outlayer_weights = tf.Variable(tf.random_normal([num_hidden, NUM_CLASSES], stddev=0.1))
    Outlayer_biases = tf.Variable(tf.constant(1.0, shape=[NUM_CLASSES]))
    conv1 = tf.nn.relu(tf.add(tf.nn.conv2d(x, layer1_weights, strides=[1, 1, 1, 1], padding='SAME'), layer1_biases))
    conv2 = tf.nn.relu(tf.add(tf.nn.conv2d(conv1, layer2_weights, strides=[1, 1, 1, 1], padding='SAME'), layer2_biases))
    shape = conv2.get_shape().as_list()
    reshape = tf.reshape(conv2, [shape[0] * shape[1] * shape[2] * shape[3]])
    FClayer = tf.nn.relu(tf.add(tf.matmul(reshape, layer3_weights), layer3_biases))
    out = tf.add(tf.matmul(FClayer, Outlayer_weights), Outlayer_biases)
    return out
Files if required
source file
classes
data
Change this
reshape = tf.reshape(conv2,[shape[0]*shape[1]*shape[2]*shape[3]])
to this
reshape = tf.reshape(conv2,[shape[0],shape[1]*shape[2]*shape[3]])
matmul can work with a batch dimension which you are destroying.
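Equivalently, -1 lets TensorFlow infer the flattened size so the product does not have to be hard-coded (a small variant, not from the original answer):
reshape = tf.reshape(conv2, [shape[0], -1])  # keep the batch dimension, flatten the rest
FClayer = tf.nn.relu(tf.add(tf.matmul(reshape, layer3_weights), layer3_biases))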
