SHAP (SHapley Additive exPlanations) for keypoint detection ML models - Keras

I have a Keras CNN model that is trained to predict keypoints on an image. How do I use the SHAP package in Python to explain and visualise the contribution of input features to the prediction? The image examples on the official site are for classification and captioning models.
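One option is SHAP's GradientExplainer, which supports TensorFlow/Keras models and treats every output neuron of a regression head as its own "class". Below is a minimal sketch, assuming model is your trained keypoint CNN mapping (N, H, W, 3) images to a flat (N, 2*K) coordinate vector, and X_train / X_test are image arrays (both names are placeholders); depending on your SHAP version, shap_values is a list with one array per output coordinate or a single array with a trailing output dimension.

import numpy as np
import shap

# Background sample used for the expectation over inputs (placeholder names).
background = X_train[np.random.choice(len(X_train), 100, replace=False)]

# GradientExplainer works with TF/Keras models; for a multi-output
# regression head it attributes each output coordinate separately.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X_test[:4])

# Visualize the pixel attributions for the first output coordinate
# (e.g., the x-coordinate of keypoint 0) over the four test images.
shap.image_plot([shap_values[0]], X_test[:4])

shap.DeepExplainer has the same interface and is worth trying as well.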

Related

Extracting gradients of the weights of an ONNX model

I am currently working on testing DL libraries. Specifically, I want to extract the gradients of the weights of an ONNX model in order to do differential testing.
However, I have read a lot of posts online and still could not find a clear way to extract the gradients of the weights of an ONNX model for a given input. Is there any API to access the gradients of an ONNX model?
Thank you so much!
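ONNX is primarily an inference format, and the standard onnxruntime API does not expose weight gradients. One workaround is to convert the graph to PyTorch and let autograd do the work. A minimal sketch, assuming the third-party onnx2torch package (its operator coverage is not universal) and a model that takes a single 1x3x224x224 float input (the shape is a placeholder):

import torch
from onnx2torch import convert

# Convert the ONNX graph into an equivalent torch.nn.Module.
model = convert("model.onnx")

# Dummy input matching the model's expected shape (placeholder shape).
x = torch.randn(1, 3, 224, 224)

# Forward pass, reduce the output to a scalar, and backpropagate.
out = model(x)
loss = out.sum()
loss.backward()

# The gradient of every weight now sits in the .grad of each parameter.
for name, param in model.named_parameters():
    print(name, None if param.grad is None else tuple(param.grad.shape))

onnxruntime also ships a separate training build (onnxruntime-training) that can compute gradients, but the conversion route above is usually the quicker path for one-off differential testing.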

PyTorch image segmentation transfer learning

I am new to PyTorch. My question is: how do I apply transfer learning to a custom dataset? I am doing image segmentation on brain tumors. I can find examples that use the U-Net structure, but I could not find examples that use the weights of pre-trained models for U-Net image segmentation.
You could obtain pre-trained models in two ways:
Model weights or complete models shared in formats such as .pt or .pth:
In this case, Saving and Loading Models is a good starting point. Copying from the tutorial there, you could load a model as
import torch

# Recreate the model architecture first, then load the saved weights into it
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
The other way is to load the model from torchvision. A list of the available models can be found at Torchvision Models. U-Net is not available yet. However, it is possible to load a pre-trained model as the encoder and write a separate decoder to form a U-Net with a pre-trained encoder.
In this case, the model object returned from the function calls shown in the API is already loaded with pretrained weights when pretrained=True.
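As an illustration of that second approach, here is a minimal sketch that reuses a torchvision ResNet-18 as the encoder and bolts on a very small decoder; the layer split, channel sizes, and decoder design are illustrative assumptions, not the only way to build such a U-Net:

import torch
import torch.nn as nn
import torchvision

class ResNetUNet(nn.Module):
    # U-Net-style model with a pretrained ResNet-18 encoder.
    def __init__(self, n_classes=1):
        super().__init__()
        # Newer torchvision versions use weights="IMAGENET1K_V1" instead.
        resnet = torchvision.models.resnet18(pretrained=True)
        # Encoder stages (each halves the spatial resolution).
        self.enc1 = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu)  # 64ch, /2
        self.enc2 = nn.Sequential(resnet.maxpool, resnet.layer1)          # 64ch, /4
        self.enc3 = resnet.layer2                                         # 128ch, /8
        # Tiny decoder: upsample and fuse the skip connections.
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = nn.Conv2d(128, 64, 3, padding=1)
        self.up2 = nn.ConvTranspose2d(64, 64, 2, stride=2)
        self.dec2 = nn.Conv2d(128, 64, 3, padding=1)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)   # (N, 64, H/2, W/2)
        e2 = self.enc2(e1)  # (N, 64, H/4, W/4)
        e3 = self.enc3(e2)  # (N, 128, H/8, W/8)
        d1 = torch.relu(self.dec1(torch.cat([self.up1(e3), e2], dim=1)))
        d2 = torch.relu(self.dec2(torch.cat([self.up2(d1), e1], dim=1)))
        return self.head(d2)  # logits at half the input resolution

model = ResNetUNet(n_classes=1)

Packages such as segmentation_models_pytorch wrap exactly this pattern (a pretrained encoder plus U-Net decoder) if you would rather not write the decoder yourself.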
For writing a custom dataloader, PyTorch data loaders may be a useful guide.

Is it possible to visualize intermediate layers in Keras?

I am using the DenseNet121 CNN from the Keras library, and I would like to visualize the feature maps when I predict images. I know this is possible with a CNN we have built on our own.
Is it the same for models shipped with Keras, like DenseNet?
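Yes, the same approach works for the bundled applications: build a second Model that shares the DenseNet input but outputs an intermediate layer's activations. A minimal sketch, assuming tensorflow.keras; the layer name below is only an example, pick a real one from base.summary():

import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.models import Model

base = DenseNet121(weights="imagenet")

# Pick any intermediate layer; list the available names with base.summary().
layer_name = "conv2_block1_concat"  # example name, check your own summary
feature_model = Model(inputs=base.input,
                      outputs=base.get_layer(layer_name).output)

# Feature maps for a batch of (already preprocessed) 224x224 RGB images.
images = np.random.rand(1, 224, 224, 3).astype("float32")
feature_maps = feature_model.predict(images)
print(feature_maps.shape)  # (1, H', W', channels) of the chosen layer

Each channel of feature_maps can then be plotted as a 2-D image, exactly as with a hand-built CNN.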

VGG16 trained on grayscale ImageNet

I have found the VGG16 network pre-trained on the (color) ImageNet database (as .npy). Is there a VGG16 network available that was pre-trained on a grayscale version of the ImageNet database?
(The usual 'tricks' for using the 3-channel filters of the conv1.1 layer on the single-channel gray input are not enough for me. I am looking at incremental improvements of the network performance, so I need to see how the transfer learning behaves when the pre-trained model was 'looking' at grayscale input.)
Thanks!
Yes, there's this one:
https://github.com/DaveRichmond-/grayscale-imagenet
It is a grayscale-ImageNet-trained model, plus a version of it fine-tuned on X-rays. The authors showed that ImageNet performance barely drops, by the way.
@GrimSqueaker gave you the code of this paper: https://openaccess.thecvf.com/content_eccv_2018_workshops/w33/html/Xie_Pre-training_on_Grayscale_ImageNet_Improves_Medical_Image_Classification_ECCVW_2018_paper.html
However, the model trained in it is Inception v3, not VGG16.
You have two options:
Use a color pre-trained VGG16 model and duplicate your single grayscale channel across the three input channels (a short sketch of this is shown after the link below).
Train your VGG16 model on a grayscaled version of the ImageNet dataset.
You may find this link useful:
https://github.com/zzangho/VGG16_grayscale
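For the first option, the channel duplication is essentially a one-liner; a minimal sketch, assuming grayscale images of shape (N, 224, 224, 1) in the 0-255 range and the stock Keras VGG16:

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Placeholder grayscale batch, replicated into the 3 channels VGG16 expects.
gray = np.random.rand(2, 224, 224, 1).astype("float32") * 255.0
rgb_like = np.repeat(gray, 3, axis=-1)

model = VGG16(weights="imagenet")
preds = model.predict(preprocess_input(rgb_like))
print(preds.shape)  # (2, 1000) ImageNet class probabilities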

Implement Gaussian Mixture Model using Keras

I am trying to implement a Gaussian Mixture Model using Keras with the TensorFlow backend. Is there any guide or example of how to implement it?
Are you sure that is what you want? Do you want to integrate a GMM into a neural network?
TensorFlow and Keras are libraries for creating, training, and using neural network models. A Gaussian Mixture Model is not a neural network.
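That said, if the goal is to fit a GMM by gradient descent inside the TensorFlow ecosystem, TensorFlow Probability makes it fairly direct. A minimal 1-D sketch, assuming the tensorflow-probability package; the toy data and the choice of K=2 components are made up for illustration:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Toy 1-D data drawn from two Gaussians.
data = tf.concat([tf.random.normal([500], -2.0, 0.5),
                  tf.random.normal([500], 3.0, 1.0)], axis=0)

# Trainable GMM parameters: mixture logits, means, and log-scales (K=2).
logits = tf.Variable(tf.zeros([2]))
means = tf.Variable(tf.random.normal([2]))
log_scales = tf.Variable(tf.zeros([2]))

optimizer = tf.keras.optimizers.Adam(0.05)

for step in range(500):
    with tf.GradientTape() as tape:
        gmm = tfd.MixtureSameFamily(
            mixture_distribution=tfd.Categorical(logits=logits),
            components_distribution=tfd.Normal(loc=means,
                                               scale=tf.exp(log_scales)))
        # Maximizing the likelihood = minimizing the negative log-likelihood.
        nll = -tf.reduce_mean(gmm.log_prob(data))
    grads = tape.gradient(nll, [logits, means, log_scales])
    optimizer.apply_gradients(zip(grads, [logits, means, log_scales]))

print(means.numpy(), tf.exp(log_scales).numpy())

For plain density estimation without TensorFlow, sklearn.mixture.GaussianMixture fits the same model with EM in a couple of lines.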
