Tensorflow: Setting an array element with a sequence - python-3.x

I'm trying to train a CNN on my own image dataset, but when passing the batch data and labels to the feed_dict I get the error ValueError: setting an array element with a sequence. From what I read here, this is a dimension issue, probably coming from my batch_label tensor, but I couldn't figure out how to make it a one-hot tensor (which is what my graph expects).
I uploaded the full code as a gist here: https://gist.github.com/guivn/f7f753547f77a3b12992

TL;DR: You can't feed a tf.Tensor object (viz. batch_data and batch_labels in your gist) as the value for another tensor. (I believe the error message should be clearer about this in more recent versions of TensorFlow.)
Unfortunately you can't currently use the feed/tf.placeholder() mechanism to pass the result of one TensorFlow graph to another. We are investigating ways to make this easier, since it is a common point of confusion and a frequent feature request. For your particular program, however, the fix is straightforward: simply move the lines that create the input pipeline into the graph and use them in place of the placeholders. Your program will then look something like:
with graph.as_default():
    # Input data.
    filename_and_label_tensor = tf.train.string_input_producer(['train.txt'], shuffle=True)
    data, label = parse_csv(filename_and_label_tensor)
    tf_train_dataset, tf_train_labels = tf.train.batch([data, label], batch_size, num_threads=4)
    # Rest of the model construction goes here....
Typically, if you want to pass another dataset through the same model—e.g. for evaluation—it's easiest to make another copy of the graph (perhaps sharing the same tf.Variable objects).
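For example, the variable-sharing version could look roughly like this (a sketch with TF1-era APIs; model_fn, the shapes, and the placeholder inputs are just illustrative stand-ins for your own model and input pipelines):
import tensorflow as tf

graph = tf.Graph()  # mirrors the graph object used in the gist
with graph.as_default():
    def model_fn(inputs):
        # weights must be created with tf.get_variable for sharing to work
        w = tf.get_variable("w", shape=[784, 10])
        b = tf.get_variable("b", shape=[10])
        return tf.matmul(inputs, w) + b

    train_inputs = tf.placeholder(tf.float32, shape=[None, 784])  # stand-in for the training pipeline
    eval_inputs = tf.placeholder(tf.float32, shape=[None, 784])   # stand-in for the evaluation pipeline

    with tf.variable_scope("model"):
        train_logits = model_fn(train_inputs)    # first call creates the variables
    with tf.variable_scope("model", reuse=True):
        eval_logits = model_fn(eval_inputs)      # second call reuses the same variables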

Related

Why is input data transferred to Variable type?

I'm reading code that implements YOLOv3 in PyTorch, and I came across a line like this:
for batch_i, (_, imgs, targets) in enumerate(dataloader):
    batches_done = len(dataloader) * epoch + batch_i
    imgs = Variable(imgs.to(device))  # ??
    targets = Variable(targets.to(device), requires_grad=False)
imgs is the input data, and I can't understand why the transform Variable(imgs.to(device)) exists.
Does this mean that the input data should be trained (since the default option is requires_grad=True), or is there another reason?
As Natthaphon pointed out in his comment, I don't really see how the calls to Variable make any sense in this scenario.
Technically the Variable automatically becomes part of the computational graph, so maybe it was written by someone coming over from TensorFlow, or with visualization of the complete computational graph in mind.
If you read the docs here, you'll see that the Variable API has been deprecated, so there is no need to wrap tensors in Variable anymore. The wrapper still works in the latest torch versions (it simply returns the tensor), but it adds nothing.
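A minimal sketch of what the loop looks like without the wrapper (assuming the dataloader, model, and device from the question's training loop):
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for batch_i, (_, imgs, targets) in enumerate(dataloader):
    # Since PyTorch 0.4, tensors carry autograd state themselves, so moving
    # them to the device is enough; Variable(...) would just return them unchanged.
    imgs = imgs.to(device)
    targets = targets.to(device)   # plain input tensors do not require grad by default
    outputs = model(imgs)          # forward pass is unchanged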

Custom binary cross-entropy loss with weight-map using Keras

I have a question regarding the implementation of a custom loss-function for my neural network.
I am currently trying to segment cells for a project, and I decided to use a U-Net as it seems to work quite well. In order to improve my current model, I decided to follow the idea of the original U-Net paper (https://arxiv.org/abs/1505.04597), where they use a weight map that assigns more weight to pixels lying between tightly packed cells, as you can see in this picture: Example of a weight map.
I am currently using Keras for my unet and my problem is that I do not know how to give my weights to my model without creating any problem. My idea was to create a generator with the images and a 2-channeled array containing the labels in the first channel and the weights in the second channel, that way I can extract my weights and my labels easily in my custom loss function.
My code looks like this:
train_generator = zip(image_generator, label_generator, weight_generator)
for (img, label, weight) in train_generator:
    img, label = adjustData(img, True, label)
    label_weights = np.concatenate((label, weight), axis=3)
    # This is the final generator
    yield (img, label_weights)
As you can see, I construct the train_generator with three previously constructed generators, I adjust some things and then I yield my images and combined labels and weights.
Then, when I try to fit my model with fit_generator, I get this error: ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays.
I really do not know how to correctly implement what I want to do.
Thank you in advance for your answers.
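For reference, the custom loss I have in mind would look roughly like this (just a sketch, assuming a single-channel sigmoid output and the 2-channel y_true produced by the generator above):
from keras import backend as K

def weighted_binary_crossentropy(y_true, y_pred):
    # channel 0 holds the binary labels, channel 1 holds the pixel weight map
    labels = y_true[..., 0:1]
    weights = y_true[..., 1:2]
    pixel_loss = K.binary_crossentropy(labels, y_pred)   # per-pixel cross-entropy
    return K.mean(pixel_loss * weights)                  # weight each pixel's contribution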

Understanding Input Sequences of Unlimited Length for RNNs in Keras

I have been looking into an implementation of a certain deep learning architecture in Keras when I came across a technicality that I could not grasp. In the code, the model is implemented with two inputs: the first is the normal input that goes through the graph (word_ids in the sample code below), while the second is the length of that input, which seems to be involved nowhere other than the inputs argument of the Keras Model instance (sequence_lengths in the sample code below).
word_ids = Input(batch_shape=(None, None), dtype='int32')
word_embeddings = Embedding(input_dim=embeddings.shape[0],
                            output_dim=embeddings.shape[1],
                            mask_zero=True,
                            weights=[embeddings])(word_ids)
x = Bidirectional(LSTM(units=64, return_sequences=True))(word_embeddings)
x = Dense(64, activation='tanh')(x)
x = Dense(10)(x)
sequence_lengths = Input(batch_shape=(None, 1), dtype='int32')
model = Model(inputs=[word_ids, sequence_lengths], outputs=[x])
I think this is done to make the network accept sequences of any length. My questions are as follows:
Is what I think correct?
If yes, then I feel like there is a bit of magic going on under the hood. Any suggestions on how to wrap one's head around this?
Does this mean that, using this method, one doesn't need to pad the sequences (neither in training nor in prediction), and that Keras will somehow know how to pad them automatically?
Do you need to pass sequence_lengths as an input?
No, it's absolutely not necessary to pass the sequence lengths as inputs, whether you're working with fixed-length or variable-length sequences.
I honestly don't understand why that model in the code uses this input if it's not sent to any of the model layers to be processed.
Is this really the complete model?
Why would one pass the sequence lengths as an input?
Well, maybe they want to perform some custom calculations with those. It might be an interesting option, but none of these calculations are present (or shown) in the code you posted. This model is doing absolutely nothing with this input.
How to work with variable sequence length?
For that, you've got two options:
Pad the sequences, as you mentioned, to a fixed size, and add Masking layers to the input (or use the mask_zero=True option in the Embedding layer).
Use the length dimension as None. This is done with one of these:
batch_shape=(batch_size, None)
input_shape=(None,)
PS: these shapes are for Embedding layers. An input that goes directly into recurrent networks would have an additional last dimension for input features
When using the second option (length = None), you should process each batch separately, because you are not able to put all sequences with different lengths in the same numpy array. But there is no limitation in the model itself, and no padding is necessary in this case.
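A minimal sketch of this second option (hypothetical vocabulary size, layer sizes, and random batches):
import numpy as np
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

word_ids = Input(shape=(None,), dtype='int32')              # time dimension left as None
x = Embedding(input_dim=10000, output_dim=64, mask_zero=True)(word_ids)
x = LSTM(32)(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(word_ids, out)
model.compile(optimizer='adam', loss='binary_crossentropy')

# Each batch is trained separately because sequences of different lengths
# cannot be stacked into the same numpy array without padding.
batch_a = np.random.randint(1, 10000, size=(8, 15))         # 8 sequences of length 15
batch_b = np.random.randint(1, 10000, size=(8, 42))         # 8 sequences of length 42
model.train_on_batch(batch_a, np.random.randint(0, 2, size=(8, 1)))
model.train_on_batch(batch_b, np.random.randint(0, 2, size=(8, 1)))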
How to work with "unlimited" length
The only way to work with unlimited length is using stateful=True.
In this case, every batch you pass will not be seen as "another group of sequences", but "additional steps of the previous batch".
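A rough sketch of that idea (hypothetical sizes): every train_on_batch call continues the sequences left open by the previous one.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential([
    LSTM(16, stateful=True, batch_input_shape=(4, None, 3)),   # fixed batch of 4 sequences
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')

chunk_1 = np.random.rand(4, 100, 3)     # first 100 steps of each sequence
chunk_2 = np.random.rand(4, 100, 3)     # the next 100 steps of the same sequences
y = np.random.rand(4, 1)
model.train_on_batch(chunk_1, y)
model.train_on_batch(chunk_2, y)        # seen as additional steps, not new sequences
model.reset_states()                    # reset once the sequences actually end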

when do you use Input shape vs batch_shape in keras?

I can't find API documentation that explains the Keras Input layer.
When should you use shape attribute vs batch_shape attribute?
From the Keras source code:
Arguments
    shape: A shape tuple (integer), not including the batch size.
        For instance, `shape=(32,)` indicates that the expected input
        will be batches of 32-dimensional vectors.
    batch_shape: A shape tuple (integer), including the batch size.
        For instance, `batch_shape=(10, 32)` indicates that
        the expected input will be batches of 10 32-dimensional vectors.
        `batch_shape=(None, 32)` indicates batches of an arbitrary number
        of 32-dimensional vectors.
The batch size is how many examples you process together in a single training step.
You can use either. Personally I have never used batch_shape. When you use shape, your batch can be any size; you don't have to care about it.
shape=(32,) means exactly the same as batch_shape=(None,32)
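A quick way to check the equivalence (plain Keras shown; tf.keras behaves the same):
from keras.layers import Input
from keras import backend as K

a = Input(shape=(32,))                 # batch dimension left implicit
b = Input(batch_shape=(None, 32))      # batch dimension given explicitly as None
print(K.int_shape(a), K.int_shape(b))  # both report (None, 32)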
To expand on Daniel's answer, one case I've found where it's necessary to specify batch_shape instead of shape for an Input layer is when using stateful LSTMs in the functional API. It's described well in Philippe Rémy's blog. In short, stateful mode lets an LSTM keep its hidden state values across batches (they are normally reset every batch with the default stateful=False). That means the model needs to know the batch size in order to shape everything properly. If you don't provide it, Keras yells at you:
ValueError: If a RNN is stateful, it needs to know its batch size. Specify the batch size of your input tensors:
- If using a Sequential model, specify the batch size by passing a `batch_input_shape` argument to your first layer.
- If using the functional API, specify the batch size by passing a `batch_shape` argument to your Input layer.
The second point is the relevant one here. If using LSTM with stateful=True in the functional API, you need to set batch_shape for your Input layers.
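For instance, a rough sketch with hypothetical sizes:
from keras.models import Model
from keras.layers import Input, LSTM, Dense

batch_size, timesteps, features = 10, 20, 8
inputs = Input(batch_shape=(batch_size, timesteps, features))   # batch size must be fixed
x = LSTM(32, stateful=True)(inputs)                             # state is kept across batches
outputs = Dense(1)(x)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
# call model.reset_states() between independent sequences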

How to get the batch size for `tf.PaddingFIFOQueue.dequeue_up_to` when it returns fewer elements?

I'm referring to the example codes in
http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/
and
https://indico.io/blog/tensorflow-data-inputs-part1-placeholders-protobufs-queues/
for standard approaches in feeding the data to the language models (rnn) using TensorFlow.
So before feeding the input, I make sure the sequences are padded to the maximum input length in the current batch and then randomly shuffled within that batch. So far so good.
However, I have difficulty in specifying the initial state for tf.nn.dynamic_rnn whose shape depends on the current batch_size.
If the batch size is fixed, there should not be any problem.
However, if I'm using tf.PaddingFIFOQueue.dequeue_up_to(batch_size), it may return fewer than batch_size elements when the queue does not contain enough (which can happen when dequeuing the last set of elements).
In this case, how do we specify the initial state with the exact batch size that was returned?
You can use a None dimension in your tensors to tell TensorFlow that the batch dimension can differ from one run to another.
You might want to read this faq on tensor shapes to get a better sense, and in particular this section (formatting is mine):
How do I build a graph that works with variable batch sizes?
It is often useful to build a graph that works with variable batch sizes, for example so that the same code can be used for (mini-)batch training, and single-instance inference. The resulting graph can be saved as a protocol buffer and imported into another program.
When building a variable-size graph, the most important thing to remember is not to encode the batch size as a Python constant, but instead to use a symbolic Tensor to represent it. The following tips may be useful:
Use batch_size = tf.shape(input)[0] to extract the batch dimension from a Tensor called input, and store it in a Tensor called batch_size.
Use tf.reduce_mean instead of tf.reduce_sum(...) / batch_size.
If you use placeholders for feeding input, you can specify a variable batch dimension by creating the placeholder with tf.placeholder(..., shape=[None, ...]). The None element of the shape corresponds to a variable-sized dimension.
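Applied to the question, a rough sketch (TF1-era API, hypothetical sizes; the placeholder stands in for the batch coming off the padding queue):
import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[None, None, 50])   # [batch, time, features]
batch_size = tf.shape(inputs)[0]                              # symbolic batch size, not a Python constant
cell = tf.nn.rnn_cell.LSTMCell(num_units=128)
initial_state = cell.zero_state(batch_size, dtype=tf.float32) # matches whatever dequeue_up_to returned
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)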
