add datatable in pytorch tensorboard - pytorch

I want to log training and validation loss to TensorBoard, as well as the results on the test set.
I can't find a way to add a data table in TensorBoard.
Is trying to add a data table to TensorBoard pointless?
If so, please let me know what the alternative is.
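For what it's worth, here is a minimal sketch of one common workaround, assuming a standard PyTorch setup: scalar losses go to the Scalars tab via add_scalar, and a table can be approximated by passing a Markdown table to add_text. The losses, tag names and log directory below are illustrative placeholders, not from the question.

# Minimal sketch; the losses below are dummy stand-ins for a real training loop.
import math
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/experiment_1")

num_epochs = 10
for epoch in range(num_epochs):
    train_loss = math.exp(-0.3 * epoch)          # placeholder training loss
    val_loss = math.exp(-0.25 * epoch) + 0.05    # placeholder validation loss
    writer.add_scalar("Loss/train", train_loss, epoch)
    writer.add_scalar("Loss/val", val_loss, epoch)

# TensorBoard has no dedicated table widget, but add_text renders Markdown,
# so test-set results can be shown as a Markdown table in the Text tab.
test_loss, test_acc = 0.21, 0.93                 # placeholder test results
test_table = (
    "| metric   | value |\n"
    "|----------|-------|\n"
    f"| loss     | {test_loss:.4f} |\n"
    f"| accuracy | {test_acc:.4f} |"
)
writer.add_text("Test results", test_table, global_step=num_epochs)
writer.close()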

Related

Can you control the outputs when converting pytorch to CoreML?

I'm trying to convert a pytorch model to coreml. The model was based on yolov5.
Here is a Netron view of our exported Core ML model.
Currently, the architecture has 3 outputs. You can see one of the outputs in the screenshot, number '740'.
However, we want a different output from coreml. We need to get the output before the reshapeStatic and transpose layers. So in this image you can see that we need the last convolution layer instead of 740.
Those reshapeStatic and transpose layers were added by the process that converts the net to Core ML; they are not organic layers of yolov5.
Is there any way we can do the conversion to coreml differently in order to have more control over which layers are output? For example, can we have control over the output layers in the sample code below:
model = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="input_1", shape=example_input.shape)],
    classifier_config=ct.ClassifierConfig(class_labels),
)
Alternatively, is there a way where we can choose at runtime which values to pull out of the coreml model? For example, is there a way to specify in the code below which layers we want to output?
img = load_image(img_path, resize_to=(img_size, img_size))
# can we specify here which layers to output?
coreml_out_dict = model.predict({'image': img})
Thanks!
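One possible direction, not from the original post and only a sketch: control the outputs on the PyTorch side by wrapping the network so that forward() returns exactly the tensor you want, then trace and convert that wrapper. The toy modules below are hypothetical stand-ins for the real yolov5 layers.

import torch
import coremltools as ct

# Toy stand-in for the real network: a conv "backbone" followed by the kind of
# reshape/transpose post-processing we do NOT want in the exported model.
class ToyDetector(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.last_conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, x):
        feat = self.last_conv(x)
        return feat.reshape(feat.shape[0], 16, -1).transpose(1, 2)

# Wrapper that exposes the intermediate feature map as the model output.
class HeadlessWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model.last_conv(x)

example_input = torch.rand(1, 3, 64, 64)
traced_model = torch.jit.trace(HeadlessWrapper(ToyDetector()).eval(), example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="input_1", shape=example_input.shape)],
)

The converter still assigns its own output names, but the exported graph should now end at the convolution, so model.predict() returns that tensor directly under whatever name it was given.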

Visualizing deep layer filters in Keras CNN

My question is simple. I want to visualize the filters a ConvNet learns in its deeper layers to extract the features that drive the final prediction. By visualize I mean saving them in .png format, like the final-layer filters shown in https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2018/03/cnn_filters.png , where you can actually see a car in the final-layer filters.
I can visualise the filters of the first convolutional layer with the help provided in my own question Visualising Keras CNN final trained filters at each layer, but that only shows how to visualise the first layer. The first-layer filters look like random coloured 3x3 pixel images, but I want to see the final-layer filters, like the car filter in the first link.
Even the article the car filter comes from, https://www.analyticsvidhya.com/blog/2018/03/essentials-of-deep-learning-visualizing-convolutional-neural-networks/, only has code for the first layer.
The Python library keras-vis is a great tool for visualizing CNNs. It can generate conv filter visualizations, dense layer visualizations, and attention maps. The latest release is quite old (and a little bit buggy), so I recommend installing from master:
pip install git+https://github.com/raghakot/keras-vis.git
You can address the weights of the different layers with:
w = model.layers[i].get_weights()[0][:,:,:,:]
where i is the number of your layer.
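As a small illustration (not part of the original answer), those weights can be dumped to a .png with matplotlib; the toy model and the layer index below are placeholders for your own trained network.

# Minimal sketch: save the filters of one conv layer as a .png grid.
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Conv2D

# Toy stand-in; in practice use your own trained model instead.
model = Sequential([Conv2D(16, 3, input_shape=(64, 64, 3))])

i = 0                                   # index of the conv layer to inspect
w = model.layers[i].get_weights()[0]    # shape: (kh, kw, in_channels, n_filters)
n_filters = w.shape[-1]
cols = 8
rows = int(np.ceil(n_filters / cols))

fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
for k, ax in enumerate(axes.flat):
    ax.axis("off")
    if k < n_filters:
        f = w[:, :, :, k]
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)   # rescale to [0, 1]
        ax.imshow(f[:, :, 0], cmap="gray")               # first input channel only
fig.savefig("layer_%d_filters.png" % i, dpi=150)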
In the case of the picture in the link, I am not sure whether it actually shows the weights or the activation map. You could get the activation map with:
from keras import backend as K
# build a backend function mapping the model input to the output of layer i
get_output = K.function([model.layers[0].input], [model.layers[i].output])
output_normal = get_output([X])[0][m]
where m is the index of a particular image in the input batch X.
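To actually see car-like patterns in deep layers, the usual trick is gradient ascent on the input ("activation maximization"), which is also what keras-vis does internally. Below is a rough sketch in the same Keras-2 backend style as above; it assumes `model` is a trained CNN running in graph (TF1) mode, and the layer and filter indices are arbitrary.

# Sketch of activation maximization: find an input image that maximises the
# mean activation of one filter in a deep conv layer.
import numpy as np
from keras import backend as K

i, filter_index = -3, 0                        # a deep conv layer and one filter
layer_output = model.layers[i].output
loss = K.mean(layer_output[:, :, :, filter_index])

# gradient of that activation w.r.t. the input image, normalised for stability
grads = K.gradients(loss, model.input)[0]
grads /= (K.sqrt(K.mean(K.square(grads))) + K.epsilon())
iterate = K.function([model.input], [loss, grads])

# start from a grey image with noise and run gradient ascent
img = np.random.random((1,) + model.input_shape[1:]) * 20 + 128.
for _ in range(50):
    loss_value, grads_value = iterate([img])
    img += grads_value * 1.0                   # step size

# img[0] can now be rescaled to [0, 255] and saved as a .png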

Custom binary cross-entropy loss with weight-map using Keras

I have a question regarding the implementation of a custom loss-function for my neural network.
I am currently trying to segment cells for a project and I decided to use a U-Net, as it seems to work quite well. In order to improve my current model, I decided to follow the idea of the original U-Net paper (https://arxiv.org/abs/1505.04597), where they use a weight map that assigns more weight to pixels located between tightly packed cells, as you can see in this picture: Example of a weight map.
I am currently using Keras for my U-Net, and my problem is that I do not know how to pass my weights to my model without creating any problems. My idea was to create a generator yielding the images together with a 2-channel array containing the labels in the first channel and the weights in the second channel; that way I can extract my weights and my labels easily in my custom loss function.
My code looks like this:
train_generator = zip(image_generator, label_generator, weight_generator)
for (img, label, weight) in train_generator:
    img, label = adjustData(img, True, label)
    label_weights = np.concatenate((label, weight), axis=3)
    # This is the final generator
    yield (img, label_weights)
As you can see, I construct the train_generator with three previously constructed generators, I adjust some things and then I yield my images and combined labels and weights.
Then, when I try to fit my model with fit_generator, I get this error: ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays.
I really do not know what to do and how to implement correctly what I want to do.
Thank you in advance for your answers.
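For reference, here is a rough sketch (not from the post) of how a custom loss could split such a stacked tensor back apart, assuming the label sits in channel 0 and the weight map in channel 1 of y_true, and that the model outputs a single-channel sigmoid mask.

# Sketch of a weighted binary cross-entropy; y_true is assumed to carry the
# ground-truth mask in channel 0 and the per-pixel weight map in channel 1.
from keras import backend as K

def weighted_bce(y_true, y_pred):
    labels = y_true[..., 0:1]      # ground-truth mask
    weights = y_true[..., 1:2]     # per-pixel weight map
    bce = K.binary_crossentropy(labels, y_pred)
    return K.mean(bce * weights)

# model.compile(optimizer='adam', loss=weighted_bce)

The quoted error, however, complains about the model inputs rather than the loss: it suggests the model expects two input arrays while the generator yields one, so it is worth double-checking that the model really has a single input and that the generator yields (inputs, targets) tuples.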

Visualize vector value at each step with Tensorflow

For debugging purposes I want to visualize the output vector of the NN at each step of the training process.
I tried to use TensorBoard with a tf.summary.tensor_summary:
available_outputs_summary = tf.summary.tensor_summary(name='Probability of move', tensor=available_outputs)
Which I use to write during each iteration step:
summary_str = available_outputs_summary.eval(feed_dict={X: obs})
file_writer.add_summary(summary_str, iteration)
But in TensorBoard when I click on the required tensor I won't see my data:
I know how to print every single value in the console with tf.Print, but it's not convenient...
Is there anything else I can do?
First, your picture is the graph visualization. I believe graph visualization is not supposed to have any summaries - it just shows you the graph.
TensorBoard has other tabs for summaries including "scalar", "histogram", "distribution". Normally, you would look in these tabs for visualizations. However, base release of TensorBoard does not yet have a tab to visualize tensor summaries (there might be third-party plugins though).
Depending on the kind of visualization you want for your tensor, you have the following options:
- Create scalar summaries for the statistics you care about, e.g. mean, std, etc. (see the sketch after this list).
- Use the "histogram" and/or "distribution" tabs (https://www.tensorflow.org/programmers_guide/tensorboard_histograms).
- If your tensor is not very large and has a fixed size, you can create scalar summaries for each of its fields. See the last answer in How to visualize a tensor summary in tensorboard.
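A rough sketch of the first two options, written in the same TF1 summary style as the question (available_outputs, X, obs, file_writer and iteration are the names from the question):

# Log the mean, standard deviation and a histogram of the output tensor.
import tensorflow as tf

mean = tf.reduce_mean(available_outputs)
std = tf.sqrt(tf.reduce_mean(tf.square(available_outputs - mean)))
merged = tf.summary.merge([
    tf.summary.scalar('available_outputs/mean', mean),
    tf.summary.scalar('available_outputs/std', std),
    tf.summary.histogram('available_outputs', available_outputs),
])

# inside the training loop, in place of the tensor_summary:
summary_str = merged.eval(feed_dict={X: obs})
file_writer.add_summary(summary_str, iteration)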

keras metric different during training

I have implemented a custom metric based on SIM, and when I try the code it works. I have implemented it both with tensors and with NumPy arrays, and both give the same results. However, when I start fitting the model, the values reported are a lot higher than the values I get when I load the weights generated by the training and apply the same function.
My function is:
def SIM(y_true, y_pred):
    n_y_true = y_true / (K.sum(y_true) + K.epsilon())
    n_y_pred = y_pred / (K.sum(y_pred) + K.epsilon())
    return K.mean(K.sum(K.minimum(n_y_true, n_y_pred)))
When I compile the Keras model I add this to the metrics, and during training it reports, for example, SIM: 0.7092.
When I load the weights and try it, the SIM score is around 0.3. The correct weights are loaded (when restarting training with these weights, the same values pop up). Does anybody know if I am doing anything wrong?
Why are the metrics given back during training so much higher compared to running the function over a batch?
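One thing worth checking (a sketch, not from the post): the SIM above normalises over everything it is given at once, so its value depends on how much data it sees per call, and Keras reports a running per-batch average during training. The arrays below are random placeholders just to show the comparison.

# Compare the per-batch average of SIM (roughly what Keras reports during
# training) with SIM computed over the full set in one call.
import numpy as np

def sim_np(y_true, y_pred, eps=1e-7):
    n_t = y_true / (y_true.sum() + eps)
    n_p = y_pred / (y_pred.sum() + eps)
    return np.minimum(n_t, n_p).sum()

rng = np.random.default_rng(0)
y_true_all = rng.random((320, 64, 64, 1))   # placeholder ground truth
y_pred_all = rng.random((320, 64, 64, 1))   # placeholder predictions

batch_size = 32
per_batch = [sim_np(y_true_all[i:i + batch_size], y_pred_all[i:i + batch_size])
             for i in range(0, len(y_true_all), batch_size)]
print("mean of per-batch SIM:", np.mean(per_batch))
print("SIM over the full set:", sim_np(y_true_all, y_pred_all))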
