Image Segmentation U-Net model Assignment - conv-neural-network

My U-Net model
def unet_model(input_size=(96, 128, 3), n_filters=32, n_classes=23):
    """
    Unet model

    Arguments:
        input_size -- Input shape
        n_filters -- Number of filters for the convolutional layers
        n_classes -- Number of output classes
    Returns:
        model -- tf.keras.Model
    """
    inputs = Input(input_size)
    # Contracting Path (encoding)
    # Add a conv_block with the inputs of the unet_model and n_filters
    ### START CODE HERE
    cblock1 = conv_block(inputs, n_filters)
    # Chain the first element of the output of each block to be the input of the next conv_block.
    # Double the number of filters at each new step
    cblock2 = conv_block(cblock1[0], n_filters*2)
    cblock3 = conv_block(cblock2[0], n_filters*4)
    cblock4 = conv_block(cblock3[0], n_filters*8, dropout_prob=0.3)  # Include a dropout_prob of 0.3 for this layer
    # Include a dropout_prob of 0.3 for this layer, and avoid the max_pooling layer
    cblock5 = conv_block(cblock4[0], n_filters*16, dropout_prob=0.3, max_pooling=False)
    ### END CODE HERE
    # Expanding Path (decoding)
    # Add the first upsampling_block.
    # Use cblock5[0] as expansive_input and cblock4[1] as contractive_input, with n_filters * 8
    ### START CODE HERE
    ublock6 = upsampling_block(cblock5[0], cblock4[1], n_filters*8)
    # Chain the output of the previous block as expansive_input and the corresponding contractive block output.
    # Note that you must use the second element of the contractive block, i.e. before the maxpooling layer.
    # At each step, use half the number of filters of the previous block
    ublock7 = upsampling_block(ublock6[0], cblock5[0], n_filters*4)
    ublock8 = upsampling_block(ublock7[0], ublock6[0], n_filters*2)
    ublock9 = upsampling_block(ublock8[0], ublock7[0], n_filters)
    ### END CODE HERE
    conv9 = Conv2D(n_filters,
                   3,
                   activation='relu',
                   padding='same',
                   kernel_initializer='he_normal')(ublock9)
    # Add a Conv2D layer with n_classes filters, a kernel size of 1 and 'same' padding
    ### START CODE HERE
    conv10 = Conv2D(n_filters, 1, padding='same')(conv9)
    ### END CODE HERE
    model = tf.keras.Model(inputs=inputs, outputs=conv10)
    return model
...
In the above U-Net model, the first half of the model is completed, i.e., up to cblock5,
but in the second half of the model, i.e., from ublock6 through ublock9, I got a bit confused at
...
# Chain the output of the previous block as expansive_input and the corresponding contractive block output.
# Note that you must use the second element of the contractive block i.e before the maxpooling layer.
# At each step, use half the number of filters of the previous block
...
Please help me understand what the above instructions mean.
...

The U-Net in the picture has 4 encoding blocks (the descending path) and 4 decoding blocks.
In a U-Net, the input of each decoding block (the ones where the tensor returns to its previous dimensions) is the concatenation of the encoder block "at the same level" and the previous decoder block; the assignment is asking you to do this concatenation (in the picture you can see two different arrows going into each decoding level - these are the two inputs).
"At each step, use half the number of filters": just halve the filters at each decoding level (in the picture there are 4 decoding levels, so if you use N filters on the first decoding layer, the lowest one, you then use N/2 on the second decoding layer, and so on).
"Note that you must use the second element of the contractive block, i.e. before the maxpooling layer": when you take the output of the encoder at level 3, at some point you will want to feed it to the decoder at level 3 (the horizontal grey arrows in the figure - the input you need to concatenate). You need to take this output BEFORE the max pooling, or it will not have the same spatial dimensions. Basically, an encoder block has 2 outputs: the red one (after max pooling) and the grey "copy" one (before max pooling); the sketch below illustrates this.
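For reference, here is a minimal sketch of what a conv_block helper with this interface might look like (the actual implementation belongs to the assignment, so treat the layer choices as assumptions; only the two-output structure matters here):

    # Hypothetical sketch of a conv_block with the interface the assignment uses.
    # Key point: it returns TWO tensors, and cblockN[1] is the pre-pooling
    # "copy" output that the decoder concatenates with.
    from tensorflow.keras.layers import Conv2D, Dropout, MaxPooling2D

    def conv_block(inputs, n_filters, dropout_prob=0.0, max_pooling=True):
        conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                      kernel_initializer='he_normal')(inputs)
        conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                      kernel_initializer='he_normal')(conv)
        if dropout_prob > 0:
            conv = Dropout(dropout_prob)(conv)
        skip_connection = conv                                 # the grey "copy" arrow
        if max_pooling:
            next_layer = MaxPooling2D(pool_size=(2, 2))(conv)  # the red arrow
        else:
            next_layer = conv
        return next_layer, skip_connection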

Here you go. The problem was tracing the cblocks in the second half:
# UNQ_C3
# GRADED FUNCTION: unet_model
def unet_model(input_size=(96, 128, 3), n_filters=32, n_classes=23):
    """
    Unet model

    Arguments:
        input_size -- Input shape
        n_filters -- Number of filters for the convolutional layers
        n_classes -- Number of output classes
    Returns:
        model -- tf.keras.Model
    """
    inputs = Input(input_size)
    # Contracting Path (encoding)
    # Add a conv_block with the inputs of the unet_model and n_filters
    ### START CODE HERE
    cblock1 = conv_block(inputs, n_filters)
    # Chain the first element of the output of each block to be the input of the next conv_block.
    # Double the number of filters at each new step
    cblock2 = conv_block(cblock1[0], 2*n_filters)
    cblock3 = conv_block(cblock2[0], 4*n_filters)
    cblock4 = conv_block(cblock3[0], 8*n_filters, dropout_prob=0.3)  # Include a dropout_prob of 0.3 for this layer
    # Include a dropout_prob of 0.3 for this layer, and avoid the max_pooling layer
    cblock5 = conv_block(cblock4[0], 16*n_filters, dropout_prob=0.3, max_pooling=False)
    ### END CODE HERE
    # Expanding Path (decoding)
    # Add the first upsampling_block.
    # Use cblock5[0] as expansive_input and cblock4[1] as contractive_input, with n_filters * 8
    ### START CODE HERE
    ublock6 = upsampling_block(cblock5[0], cblock4[1], n_filters * 8)
    # Chain the output of the previous block as expansive_input and the corresponding contractive block output.
    # Note that you must use the second element of the contractive block, i.e. before the maxpooling layer.
    # At each step, use half the number of filters of the previous block
    ublock7 = upsampling_block(ublock6, cblock3[1], n_filters * 4)
    ublock8 = upsampling_block(ublock7, cblock2[1], n_filters * 2)
    ublock9 = upsampling_block(ublock8, cblock1[1], n_filters)
    ### END CODE HERE
    conv9 = Conv2D(n_filters,
                   3,
                   activation='relu',
                   padding='same',
                   kernel_initializer='he_normal')(ublock9)
    # Add a Conv2D layer with n_classes filters, a kernel size of 1 and 'same' padding
    ### START CODE HERE
    conv10 = Conv2D(n_classes, 1, padding='same')(conv9)
    ### END CODE HERE
    model = tf.keras.Model(inputs=inputs, outputs=conv10)
    return model
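For completeness, the upsampling_block helper this relies on presumably looks something like the following (a hedged sketch; the exact kernel sizes and transpose-convolution settings are assumptions). Note that it returns a single tensor, which is why the decoder chains ublock6 directly rather than ublock6[0]:

    # Hypothetical sketch of the upsampling_block helper (assumed implementation):
    # upsample the expansive input, concatenate with the skip connection from the
    # matching encoder level, then convolve.
    from tensorflow.keras.layers import Conv2D, Conv2DTranspose, concatenate

    def upsampling_block(expansive_input, contractive_input, n_filters):
        # Double the spatial dimensions of the tensor coming up from the level below
        up = Conv2DTranspose(n_filters, 3, strides=2, padding='same')(expansive_input)
        # Concatenate with the pre-pooling output of the matching encoder level
        merge = concatenate([up, contractive_input], axis=3)
        conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                      kernel_initializer='he_normal')(merge)
        conv = Conv2D(n_filters, 3, activation='relu', padding='same',
                      kernel_initializer='he_normal')(conv)
        return conv  # a single tensor, not a (next_layer, skip) pair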

Related

TypeError: Inputs to a layer should be tensors. Got: None for U-net

Error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-41-da7e85621955> in <module>
      1 # Call the helper function for defining the layers for the model, given the input image size
----> 2 unet = UNetCompiled(input_size=(128,128,3), n_filters=32, n_classes=3)

/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    195       # have a `shape` attribute.
    196       if not hasattr(x, 'shape'):
--> 197         raise TypeError(f'Inputs to a layer should be tensors. Got: {x}')
    198
    199       if len(inputs) != len(input_spec):

TypeError: Inputs to a layer should be tensors. Got: None
Code:
def UNetCompiled(input_size=(128, 128, 3), n_filters=32, n_classes=3):
    """
    Combine both encoder and decoder blocks according to the U-Net research paper
    Return the model as output
    """
    # Input size represents the size of 1 image (the size used for pre-processing)
    inputs = Input(input_size)
    # Encoder includes multiple convolutional mini blocks with different maxpooling, dropout and filter parameters
    # Observe that the filters are increasing as we go deeper into the network, which will increase the # channels of the image
    cblock1 = EncoderMiniBlock(inputs, n_filters, dropout_prob=0, max_pooling=True)
    cblock2 = EncoderMiniBlock(cblock1[0], n_filters*2, dropout_prob=0, max_pooling=True)
    cblock3 = EncoderMiniBlock(cblock2[0], n_filters*4, dropout_prob=0, max_pooling=True)
    cblock4 = EncoderMiniBlock(cblock3[0], n_filters*8, dropout_prob=0.3, max_pooling=True)
    cblock5 = EncoderMiniBlock(cblock4[0], n_filters*16, dropout_prob=0.3, max_pooling=False)
    # Decoder includes multiple mini blocks with decreasing number of filters
    # Observe the skip connections from the encoder are given as input to the decoder
    # Recall the 2nd output of the encoder block was the skip connection, hence cblockn[1] is used
    ublock6 = DecoderMiniBlock(cblock5[0], cblock4[1], n_filters * 8)
    ublock7 = DecoderMiniBlock(ublock6, cblock3[1], n_filters * 4)
    ublock8 = DecoderMiniBlock(ublock7, cblock2[1], n_filters * 2)
    ublock9 = DecoderMiniBlock(ublock8, cblock1[1], n_filters)
    # Complete the model with one 3x3 convolution layer (same as the previous Conv layers)
    # Followed by a 1x1 Conv layer to get the image to the desired size.
    # Observe the number of channels will be equal to the number of output classes
    conv9 = Conv2D(n_filters,
                   3,
                   activation='relu',
                   padding='same',
                   kernel_initializer='he_normal')(ublock9)
    conv10 = Conv2D(n_classes, 1, padding='same')(conv9)
    # Define the model
    model = tf.keras.Model(inputs=inputs, outputs=conv10)
    return model
Function called:
# Call the helper function for defining the layers for the model, given the input image size
unet = UNetCompiled(input_size=(128,128,3), n_filters=32, n_classes=3)
I am trying to implement an image segmentation model using U-Net. The mask images are 4-channel PNGs and the original images are JPGs.
Define the desired shapes:
    target_shape_img = [128, 128, 3]
    target_shape_mask = [128, 128, 1]

Why does the output shape in a simple Elman RNN depend on the sequence length (while the hidden state shape doesn't)?

I am learning about RNNs, and am trying to code one up using PyTorch.
I have some trouble understanding the output dimensions.
Here is some code for a simple RNN architecture:
class RNN(nn.Module):
    def __init__(self, input_size, hidden_dim, n_layers):
        super(RNN, self).__init__()
        self.hidden_dim = hidden_dim
        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)

    def forward(self, x, hidden):
        r_out, hidden = self.rnn(x, hidden)
        return r_out, hidden
So, what I understand is that hidden_dim is the number of blocks I will have in my hidden layer, and essentially the number of features in the output and in the hidden state.
I create some dummy data to test it:
test_rnn = RNN(input_size=1, hidden_dim=4, n_layers=1)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, 6, 3)
data = np.sin(time_steps)
data.resize((3, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Hidden state size: ', test_h.size())
print('Output size: ', test_out.size())
What I get is
Input size: torch.Size([1, 3, 1])
Hidden state size: torch.Size([1, 1, 4])
Output size: torch.Size([1, 3, 4])
So I understand that the shape of x is determined like so:
x = (batch_size, seq_length, input_size) - so batch size 1, input of 1 feature, and 3 time steps (sequence length).
For the hidden state, it is hidden = (n_layers, batch_size, hidden_dim) - so I had 1 layer, batch size 1, and 4 blocks in my hidden layer.
What I don't get is the RNN output. From the documentation, r_out = (batch_size, time_step, hidden_size). Wasn't the output supposed to be the same as the hidden state that was output from the hidden units? That is, if I have 4 units in my hidden layer, I would expect it to output 4 numbers for the hidden state, and 4 numbers for the output. Why is the time step a dimension of the output? Because each hidden unit takes in some numbers and outputs a state S and an output Y, and both of these are equal, yes? I tried a diagram; this is what I came up with. Help me understand what part of it I'm doing wrong.
So TL;DR
Why does the output shape in a simple Elman RNN depend on the sequence length (while the hidden state shape doesn't)? For in the diagram I drew, I see both of them being the same.
In the PyTorch API, the output is the sequence of hidden states during the RNN computation, i.e., there is one hidden state vector per input vector. The hidden state is the last hidden state, the state the RNN ends with after processing the input, so test_out[:, -1, :] == test_h.
The vector y in your diagram is the same as a hidden state Ht; it indeed has 4 numbers, but the state is different for every time step, so you have 4 numbers for every time step.
The reason why PyTorch returns both the sequence of outputs (which equal the hidden states here; this is not the case in LSTMs, though) and the final state is that you can have a batch of sequences of different lengths. In that case, the final state is not simply test_out[:, -1, :], because you need to select the final states based on the lengths of the individual sequences.
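You can check this directly with a few lines (a minimal sketch using the same dimensions as the question):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=1, hidden_size=4, num_layers=1, batch_first=True)
    x = torch.randn(1, 3, 1)                    # (batch, seq_len, input_size)
    out, h = rnn(x)                             # hidden state defaults to zeros when omitted
    print(out.shape)                            # torch.Size([1, 3, 4]): one state per time step
    print(h.shape)                              # torch.Size([1, 1, 4]): final state only
    print(torch.allclose(out[:, -1, :], h[0]))  # True: last output == final hidden state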

RuntimeError: Given groups=3, weight of size 12 64 3 768, expected input[32, 12, 30, 768] to have 192 channels, but got 12 channels instead

I started working with PyTorch recently, so my understanding of it isn't quite strong. I previously had a 1-layer CNN but wanted to extend it to 2 layers, but the input and output channels have been throwing errors I can't seem to decipher. Why does it expect 192 channels? Can someone give me a pointer to help me understand this better? I have seen several related problems on here, but I don't understand those solutions either.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel, BertTokenizer
import math
from transformers import AdamW, get_linear_schedule_with_warmup

def pad_sents(sents, pad_token):  # Pad list of sentences according to the longest sentence in the batch.
    sents_padded = []
    max_len = max(len(s) for s in sents)
    for s in sents:
        padded = [pad_token] * max_len
        padded[:len(s)] = s
        sents_padded.append(padded)
    return sents_padded

def sents_to_tensor(tokenizer, sents, device):
    tokens_list = [tokenizer.tokenize(str(sent)) for sent in sents]
    sents_lengths = [len(tokens) for tokens in tokens_list]
    tokens_list_padded = pad_sents(tokens_list, '[PAD]')
    sents_lengths = torch.tensor(sents_lengths, device=device)
    masks = []
    for tokens in tokens_list_padded:
        mask = [0 if token == '[PAD]' else 1 for token in tokens]
        masks.append(mask)
    masks_tensor = torch.tensor(masks, dtype=torch.long, device=device)
    tokens_id_list = [tokenizer.convert_tokens_to_ids(tokens) for tokens in tokens_list_padded]
    sents_tensor = torch.tensor(tokens_id_list, dtype=torch.long, device=device)
    return sents_tensor, masks_tensor, sents_lengths

class ConvModel(nn.Module):
    def __init__(self, device, dropout_rate, n_class, out_channel=16):
        super(ConvModel, self).__init__()
        self.bert_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
        self.dropout_rate = dropout_rate
        self.n_class = n_class
        self.out_channel = out_channel
        self.bert = BertModel.from_pretrained('bert-base-uncased', config=self.bert_config)
        self.out_channels = self.bert.config.num_hidden_layers * self.out_channel
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', config=self.bert_config)
        self.conv = nn.Conv2d(in_channels=self.bert.config.num_hidden_layers,
                              out_channels=self.out_channels,
                              kernel_size=(3, self.bert.config.hidden_size),
                              groups=self.bert.config.num_hidden_layers)
        self.conv1 = nn.Conv2d(in_channels=self.out_channels,
                               out_channels=48,
                               kernel_size=(3, self.bert.config.hidden_size),
                               groups=self.bert.config.num_hidden_layers)
        self.hidden_to_softmax = nn.Linear(self.out_channels, self.n_class, bias=True)
        self.dropout = nn.Dropout(p=self.dropout_rate)
        self.device = device

    def forward(self, sents):
        sents_tensor, masks_tensor, sents_lengths = sents_to_tensor(self.tokenizer, sents, self.device)
        encoded_layers = self.bert(input_ids=sents_tensor, attention_mask=masks_tensor)
        hidden_encoded_layer = encoded_layers[2]
        hidden_encoded_layer = hidden_encoded_layer[0]
        hidden_encoded_layer = torch.unsqueeze(hidden_encoded_layer, dim=1)
        hidden_encoded_layer = hidden_encoded_layer.repeat(1, 12, 1, 1)
        conv_out = self.conv(hidden_encoded_layer)  # (batch_size, channel_out, some_length, 1)
        conv_out = self.conv1(conv_out)
        conv_out = torch.squeeze(conv_out, dim=3)   # (batch_size, channel_out, some_length)
        conv_out, _ = torch.max(conv_out, dim=2)    # (batch_size, channel_out)
        pre_softmax = self.hidden_to_softmax(conv_out)
        return pre_softmax

def batch_iter(data, batch_size, shuffle=False, bert=None):
    batch_num = math.ceil(data.shape[0] / batch_size)
    index_array = list(range(data.shape[0]))
    if shuffle:
        data = data.sample(frac=1)
    for i in range(batch_num):
        indices = index_array[i * batch_size: (i + 1) * batch_size]
        examples = data.iloc[indices]
        sents = list(examples.train_BERT_tweet)
        targets = list(examples.train_label.values)
        yield sents, targets  # list[list[str]] if not bert else list[str], list[int]

def train():
    label_name = ['Yes', 'Maybe', 'No']
    device = torch.device("cpu")
    df_train = pd.read_csv('trainn.csv')  # , index_col=0)
    train_label = dict(df_train.train_label.value_counts())
    label_max = float(max(train_label.values()))
    train_label_weight = torch.tensor([label_max / train_label[i] for i in range(len(train_label))], device=device)
    model = ConvModel(device=device, dropout_rate=0.2, n_class=len(label_name))
    optimizer = AdamW(model.parameters(), lr=1e-3, correct_bias=False)
    scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)  # changed the last 2 arguments to old ones
    model = model.to(device)
    model.train()
    cn_loss = torch.nn.CrossEntropyLoss(weight=train_label_weight, reduction='mean')
    train_batch_size = 16
    for epoch in range(1):
        for sents, targets in batch_iter(df_train, batch_size=train_batch_size, shuffle=True):  # for each epoch
            optimizer.zero_grad()
            pre_softmax = model(sents)
            loss = cn_loss(pre_softmax, torch.tensor(targets, dtype=torch.long, device=device))
            loss.backward()
            optimizer.step()
            scheduler.step()

TrainingModel = train()
Here's a snippet of data https://github.com/Kosisochi/DataSnippet
It seems that the original version of the code you had in this question behaved differently. The final version of the code you have here gives me a different error from what you posted, more specifically - this:
RuntimeError: Calculated padded input size per channel: (20 x 1). Kernel size: (3 x 768). Kernel size can't be greater than actual input size
I apologize if I misunderstood the situation, but it seems to me that your understanding of what exactly the nn.Conv2d layer does is not 100% clear, and that is the main source of your struggle. I interpret the part "detailed explanation on 2 layer CNN in Pytorch" you requested as a request to explain in detail how that layer works, and I hope that after this is done there will be no problem applying it 1 time, 2 times or more.
You can find all the documentation about the layer here, but let me give you a recap which hopefully will help to understand more the errors you're getting.
First of all, nn.Conv2d inputs are 4-d tensors of the shape (BatchSize, ChannelsIn, Height, Width) and outputs are 4-d tensors of the shape (BatchSize, ChannelsOut, HeightOut, WidthOut). The simplest way to think about nn.Conv2d is as something applied to 2d images with a pixel grid of size Height x Width, having ChannelsIn different colors or features per pixel. Even if your inputs have nothing to do with actual images, the behavior of the layer is still the same. The simplest situation is when nn.Conv2d is not using padding (as in your code). In that case the kernel_size=(kernel_height, kernel_width) argument specifies the rectangle which you can imagine sweeping through the Height x Width rectangle of your inputs, producing one pixel for each valid position. Without padding, the coordinates of the rectangle's corner can be any pair of indices (x, y) with x between 0 and Height - kernel_height and y between 0 and Width - kernel_width. Thus the output will look like a 2d image of size (Height - kernel_height + 1) x (Width - kernel_width + 1), and it will have as many output channels as specified in the nn.Conv2d constructor, so the output tensor will be of shape (BatchSize, ChannelsOut, Height - kernel_height + 1, Width - kernel_width + 1).
The parameter groups does not affect how shapes are changed by the layer - it only controls which input channels are used as inputs for the output channels (groups=1 means that every input channel is used as input for every output channel; otherwise input and output channels are divided into the corresponding number of groups, and only input channels from group i are used as inputs for the output channels from group i).
Now, in your current version of the code you have BatchSize = 16, and the output of the pre-trained model is (BatchSize, DynamicSize, 768), with DynamicSize depending on the input, e.g. 22. You then introduce an additional dimension as axis 1 with unsqueeze and repeat the values along that dimension, transforming the tensor of shape (16, 22, 768) into (16, 12, 22, 768). Effectively you are using the output of the pre-trained model as 12-channel (with each channel having the same values as the others) 2-d images of size (22, 768), where 22 is not fixed (it depends on the batch). Then you apply a nn.Conv2d with kernel size (3, 768) - which means that there is no "wiggle room" for the width, so the output 2-d images will be of size (20, 1), and since your layer has 192 channels, the output of the first convolution layer has shape (16, 192, 20, 1). Then you try to apply a second convolution layer on top of that, with kernel size (3, 768) again, but since your 2-d "image" is now just (20 x 1), there is no valid position to fit a (3, 768) kernel rectangle inside a (20 x 1) rectangle, which leads to the error message Kernel size can't be greater than actual input size.
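If it helps, you can reproduce this shape arithmetic in isolation (a minimal sketch, with random data standing in for the BERT output):

    import torch
    import torch.nn as nn

    x = torch.randn(16, 12, 22, 768)              # (batch, channels, "height", "width")
    conv = nn.Conv2d(in_channels=12, out_channels=192,
                     kernel_size=(3, 768), groups=12)
    out = conv(x)
    print(out.shape)                              # torch.Size([16, 192, 20, 1])

    conv1 = nn.Conv2d(in_channels=192, out_channels=48,
                      kernel_size=(3, 768), groups=12)
    conv1(out)  # RuntimeError: Kernel size can't be greater than actual input size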
Hope this explanation helps. Now to the choices you have to avoid the issue:
(a) Add padding in such a way that the size of the output does not change compared to the input (I won't go into details here, because I don't think this is what you need).
(b) Use a smaller kernel on the first and/or second convolution (e.g. if you don't change the first convolution, the only valid width for the second kernel would be 1).
(c) Looking at what you're trying to do, my guess is that you actually don't want a 2d convolution; you want a 1d convolution (on the sequence) with every position described by 768 values. When you're using one convolution layer with a 768-wide kernel (and the same 768-wide input), you're effectively doing exactly the same thing as a 1d convolution with 768 input channels, but then if you try to apply a second one you have a problem. You can specify a kernel width of 1 for the next layer(s) and that will work, but a more correct way would be to transpose the pre-trained model's output tensor by switching the last dimensions - getting shape (16, 768, DynamicSize) from (16, DynamicSize, 768) - and then apply a nn.Conv1d layer with 768 input channels and arbitrary ChannelsOut output channels, with 1d kernel_size=3 (meaning you look at 3 consecutive elements of the sequence for the convolution). If you do that, then without padding an input of shape (16, 768, DynamicSize) will become (16, ChannelsOut, DynamicSize-2), and after you apply a second Conv1d with, e.g., the same settings as the first one, you'll get a tensor of shape (16, ChannelsOut, DynamicSize-4), etc. (each time the 1d length will shrink by kernel_size-1). You can always change the number of channels/kernel_size for each subsequent convolution layer too; a minimal sketch of this approach is shown below.
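A minimal sketch of option (c), with made-up channel counts (128 is just an example):

    import torch
    import torch.nn as nn

    bert_out = torch.randn(16, 22, 768)   # (batch, DynamicSize, hidden_size)
    x = bert_out.transpose(1, 2)          # -> (16, 768, 22): hidden size becomes the channels
    conv1 = nn.Conv1d(in_channels=768, out_channels=128, kernel_size=3)
    conv2 = nn.Conv1d(in_channels=128, out_channels=128, kernel_size=3)
    h = conv1(x)                          # (16, 128, 20): length shrinks by kernel_size - 1
    h = conv2(h)                          # (16, 128, 18)
    pooled, _ = torch.max(h, dim=2)       # (16, 128): max-pool over the sequence
    print(pooled.shape)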

Restricting the output values of layers in Keras

I have defined my MLP in the code below. I want to extract the values of layer_2.
def gater(self):
    dim_inputs_data = Input(shape=(self.train_dim[1],))
    dim_svm_yhat = Input(shape=(3,))
    layer_1 = Dense(20, activation='sigmoid')(dim_inputs_data)
    layer_2 = Dense(3, name='layer_op_2', activation='sigmoid', use_bias=False)(layer_1)
    layer_3 = Dot(1)([layer_2, dim_svm_yhat])
    out_layer = Dense(1, activation='tanh')(layer_3)
    model = Model(input=[dim_inputs_data, dim_svm_yhat], output=out_layer)
    adam = optimizers.Adam(lr=0.01)
    model.compile(loss='mse', optimizer=adam, metrics=['accuracy'])
    return model
Suppose the output of layer_2 is below in matrix form
0.1 0.7 0.8
0.1 0.8 0.2
0.1 0.5 0.5
....
I would like the following to be fed into layer_3 instead:
0 0 1
0 1 0
0 1 0
Basically, I want the row-wise maximum values to be converted to 1 and the others to 0.
How can this be achieved in Keras?
Who decides the range of output values?
The output range of any layer in a neural network is decided by the activation function used for that layer. For example, if you use tanh as your activation function, your output values will be restricted to [-1, 1] (and the values are continuous; check how the values get mapped from [-inf, +inf] (input on the x-axis) to [-1, +1] (output on the y-axis) - understanding this step is very important).
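For example (plain NumPy, just to illustrate the squashing):

    import numpy as np
    x = np.array([-1e6, -2.0, 0.0, 2.0, 1e6])
    print(np.tanh(x))  # [-1., -0.964..., 0., 0.964..., 1.]: everything lands in [-1, 1]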
What you should do is add a custom activation function that restricts your values to a step function, i.e., either 1 or 0 over [-inf, +inf], and apply it to that layer.
How do I know which function to use?
You need to create y=some_function that satisfies all your needs (the input to output mapping) and convert that to Python code just like this one:
from keras import backend as K

def binaryActivationFromTanh(x, threshold=0.5):  # default threshold so it can be passed directly as activation=...
    # convert [-inf, +inf] to [-1, 1]
    # (you can skip this step if your threshold already lives in [-inf, +inf])
    activated_x = K.tanh(x)
    binary_activated_x = activated_x > threshold
    # cast the boolean tensor to float or int as necessary,
    # e.g. to the Keras default float type:
    # binary_activated_x = K.cast(binary_activated_x, K.floatx())
    return binary_activated_x
After making your custom activation function, you can use it like
x = Input(shape=(1000,))
y = Dense(10, activation=binaryActivationFromTanh)(x)
Now test the values and see if you are getting the values you expected. You can then throw this piece into a bigger neural network.
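For instance, a quick sanity check of the function above (a minimal sketch; K.eval materializes the tensor values so you can inspect them):

    import numpy as np
    from keras import backend as K

    x = K.constant(np.array([[-3.0, 0.2, 3.0]]))
    print(K.eval(binaryActivationFromTanh(x, threshold=0.0)))
    # [[False  True  True]]: tanh(-3) < 0, tanh(0.2) > 0, tanh(3) > 0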
I strongly discourage adding new layers to add restriction to your outputs, unless it is solely for activation (like keras.layers.LeakyReLU).
Use Numpy in between. Here is an example with a random matrix:
a = np.random.random((5, 5)) # simulate random value output of your layer
result = (a == a.max(axis=1)[:,None]).astype(int)
See also this thread: Numpy: change max in each row to 1, all other numbers to 0
You then feed in result as input to your next layer.
For wrapping the Numpy calculation you could use the Lambda layer. See examples here: https://keras.io/layers/core/#lambda
Edit:
Suggestion doesn't work. I keep the answer only to keep the related comments.

How to combine FCNN and RNN in Tensorflow?

I want to make a neural network which has recurrency (for example, an LSTM) at some layers and normal connections (FC) at others.
I cannot find a way to do it in Tensorflow.
It works if I have only FC layers, but I don't see how to add just one recurrent layer properly.
I create a network in the following way:
with tf.variable_scope("autoencoder_variables", reuse=None) as scope:
    for i in xrange(self.__num_hidden_layers + 1):
        # Train weights
        name_w = self._weights_str.format(i + 1)
        w_shape = (self.__shape[i], self.__shape[i + 1])
        a = tf.multiply(4.0, tf.sqrt(6.0 / (w_shape[0] + w_shape[1])))
        w_init = tf.random_uniform(w_shape, -1 * a, a)
        self[name_w] = tf.Variable(w_init,
                                   name=name_w,
                                   trainable=True)
        # Train biases
        name_b = self._biases_str.format(i + 1)
        b_shape = (self.__shape[i + 1],)
        b_init = tf.zeros(b_shape)
        self[name_b] = tf.Variable(b_init, trainable=True, name=name_b)
        if i + 1 == self.__recurrent_layer:
            # Create an LSTM cell
            lstm_size = self.__shape[self.__recurrent_layer]
            self['lstm'] = tf.contrib.rnn.BasicLSTMCell(lstm_size)
It should process the batches in sequential order. I have a function for processing just one time step, which will be called later by a function that processes the whole sequence:
def single_run(self, input_pl, state, just_middle=False):
    """Get the output of the autoencoder for a single batch

    Args:
        input_pl: tf placeholder for ae input data of size [batch_size, DoF]
        state: current state of LSTM memory units
        just_middle: will indicate if we want to extract only the middle layer of the network
    Returns:
        Tensor of output
    """
    last_output = input_pl
    # Pass through the network
    for i in xrange(self.num_hidden_layers + 1):
        if i != self.__recurrent_layer:
            w = self._w(i + 1)
            b = self._b(i + 1)
            last_output = self._activate(last_output, w, b)
        else:
            last_output, state = self['lstm'](last_output, state)
    return last_output
The following function should take sequence of batches as input and produce sequence of batches as an output:
def process_sequences(self, input_seq_pl, dropout, just_middle=False):
    """Get the output of the autoencoder

    Args:
        input_seq_pl: input data of size [batch_size, sequence_length, DoF]
        dropout: dropout rate
        just_middle: indicate if we want to extract only the middle layer of the network
    Returns:
        Tensor of output
    """
    if ~just_middle:  # if not middle layer
        numb_layers = self.__num_hidden_layers + 1
    else:
        numb_layers = FLAGS.middle_layer
    with tf.variable_scope("process_sequence", reuse=None) as scope:
        # Initial state of the LSTM memory.
        state = initial_state = self['lstm'].zero_state(FLAGS.batch_size, tf.float32)
        tf.get_variable_scope().reuse_variables()  # THIS IS IMPORTANT LINE
        # First - Apply Dropout
        the_whole_sequences = tf.nn.dropout(input_seq_pl, dropout)
        # Take batches for every time step and run them through the network
        # Stack all their outputs
        with tf.control_dependencies([tf.convert_to_tensor(state, name='state')]):  # do not let the loop parallelize
            stacked_outputs = tf.stack([self.single_run(the_whole_sequences[:, time_st, :], state, just_middle)
                                        for time_st in range(self.sequence_length)])
        # Transpose output from the shape [sequence_length, batch_size, DoF] into [batch_size, sequence_length, DoF]
        output = tf.transpose(stacked_outputs, perm=[1, 0, 2])
        return output
The issue is with the variable scopes and their "reuse" property.
If I run this code as it is, I get the following error:
'Variable Train/process_sequence/basic_lstm_cell/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?'
If I comment out the line which tells it to reuse variables (tf.get_variable_scope().reuse_variables()), I get the following error:
'Variable Train/process_sequence/basic_lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope?'
It seems that we need reuse=None for the weights of the LSTM cell to be initialized, and we need reuse=True in order to call the LSTM cell.
Please help me figure out the way to do this properly.
I think the problem is that you're creating variables with tf.Variable. Please use tf.get_variable instead -- does this solve your issue?
It seems that I have solved this issue using the hack from the official Tensorflow RNN example (https://www.tensorflow.org/tutorials/recurrent) with the following code:
with tf.variable_scope("RNN"):
    for time_step in range(num_steps):
        if time_step > 0:
            tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)
The hack is that when we run the LSTM the first time, tf.get_variable_scope().reuse is set to False, so that a new LSTM cell is created. On every following time step we set tf.get_variable_scope().reuse to True, so that we use the LSTM cell which was already created.
