Can I use Grad-CAM for a Clarifai model? - conv-neural-network

I want to use Grad-CAM to visualize a Clarifai model's predictions (I don't know whether that's feasible).
I am only trying to find a way to do it for my research proposal. Implementation is not my concern right now, but I'm still struggling to find a solution.
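For context, Grad-CAM needs the gradient of a class score with respect to an internal convolutional feature map, which a hosted prediction API typically does not expose; on a model you control locally, the computation is straightforward. A minimal sketch in PyTorch, using a hypothetical tiny CNN (a stand-in, since Clarifai's internal network is not accessible):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Illustrative stand-in for a real classification CNN."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        feats = self.conv(x)              # B x 16 x H x W feature maps
        pooled = feats.mean(dim=(2, 3))   # global average pooling
        return self.fc(pooled), feats

def grad_cam(model, x, target_class):
    model.eval()
    logits, feats = model(x)
    feats.retain_grad()                   # keep gradients of the non-leaf feature maps
    logits[0, target_class].backward()
    # Grad-CAM: weight each feature map by its globally averaged gradient,
    # sum, then ReLU to keep only positive evidence for the class.
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats).sum(dim=1))
    cam = cam / (cam.max() + 1e-8)        # normalize to [0, 1]
    return cam.detach()

x = torch.rand(1, 3, 32, 32)
cam = grad_cam(TinyCNN(), x, target_class=0)
```

The resulting `cam` is a heatmap at the feature-map resolution, usually upsampled onto the input image; the key requirement is white-box access to the network's layers and gradients.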

Related

How to perform Multi Task with Res2Net or other pre trained CNNs?

I'm currently trying to adapt multi-task learning to pretrained CNNs. From what I gathered, I have to change the last layer of the net and change the loss function so that all tasks benefit from it. I have 3 tasks, all of which are multiclass classifications. I code all of that in PyTorch. I want to try hard parameter sharing before I go for soft parameter sharing.
I tried to find concrete examples but did not find any. Do you have any guides or tips on how to solve this problem?
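Hard parameter sharing can be sketched as one shared trunk with one head per task, summing the per-task losses so every task's gradient reaches the shared weights. A minimal PyTorch sketch, with a toy backbone standing in for a pretrained CNN such as Res2Net (class counts and sizes are illustrative):

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes=(5, 3, 4)):
        super().__init__()
        # Replace this toy trunk with a pretrained backbone minus its
        # original classification layer.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One classification head per task (hard parameter sharing).
        self.heads = nn.ModuleList(nn.Linear(16, n) for n in n_classes)

    def forward(self, x):
        shared = self.backbone(x)
        return [head(shared) for head in self.heads]

model = MultiTaskNet()
criterion = nn.CrossEntropyLoss()
x = torch.rand(2, 3, 32, 32)
targets = [torch.randint(0, n, (2,)) for n in (5, 3, 4)]
outputs = model(x)
# Sum the per-task losses so all three tasks update the shared backbone.
loss = sum(criterion(out, t) for out, t in zip(outputs, targets))
loss.backward()
```

Weighting the individual losses (instead of a plain sum) is a common refinement when the tasks differ in difficulty or scale.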

How to get the inference compute graph of the pytorch model?

I want to hand-write a framework to perform inference of a given neural network. The network is quite complicated, so to make sure my implementation is correct, I need to know exactly how the inference process is done on device.
I tried to use torchviz to visualize the network, but what I got seems to be the back-propagation compute graph, which is really hard to understand.
Then I tried to convert the PyTorch model to ONNX format, following the instructions enter link description here, but when I visualized it, the original layers of the model had been separated into very small operators.
I just want to get a result like this.
How can I get this? Thanks!
Have you tried saving the model with torch.save (https://pytorch.org/tutorials/beginner/saving_loading_models.html) and opening it with Netron? The last view you showed is a view from the Netron app.
You can also try the package torchview, which provides several features (useful especially for large models). For instance, you can set the display depth (the depth in the nested hierarchy of modules).
It is also based on forward propagation.
github repo
Disclaimer: I am the author of the package.
Note: the accepted input format for the tool is a PyTorch model.
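The torch.save route mentioned above can be sketched in a few lines; the model here is a hypothetical placeholder for your own network:

```python
import torch
import torch.nn as nn

# A hypothetical small model standing in for the real network.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten())

# Save the whole model object; the resulting file can then be opened in
# the Netron app (https://netron.app) to browse the layer-level graph.
torch.save(model, "model.pt")

# For deployment, saving only the weights is the usual recommendation,
# but for visualization the full saved model is more informative.
torch.save(model.state_dict(), "model_weights.pt")
```

Unlike the ONNX export, this keeps the module hierarchy as defined in Python rather than decomposing layers into low-level operators.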

How to build Classifiers which is used in creating the models?

I was going through a tutorial on speech emotion recognition and saw an MLPClassifier (multilayer perceptron) imported from sklearn. There are lots of others, like random forest, linear regression, StandardScaler, GridSearchCV, etc. I was searching for tutorials or steps on how I can create these types of classifiers or modules on my own.
When I searched for this, I only found tutorials on the use cases of the predefined sklearn and third-party classifiers, like the ones specified above.
If you know any tutorials or steps to achieve this, please suggest them to me.
For MLP, the implementation is quite easy; there is a good explanation of how to implement it in Coursera's ML introduction (look at weeks 4 and 5). For linear and logistic regression, look at weeks 2 and 3. Look at this link for implementing CART; random forests are quite similar, and I think you can figure out how to implement them easily if you can implement CART. For SVMs and kernel methods, you can look at this repo.
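To make the MLP part concrete, here is a minimal from-scratch sketch in plain NumPy: a two-layer network trained on XOR with manual backpropagation, which is the core of what sklearn's MLPClassifier does internally (layer sizes, learning rate, and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 8 tanh units, one sigmoid output unit.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation of the mean squared error.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The same forward/backward/update loop, generalized to arbitrary layer counts, activations, and proper loss functions, is what the library implementations package up.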

Image Augmentation of Siamese CNN

I have a task to compare two images and check whether they are of the same class (using a Siamese CNN). Because I have a really small data set, I want to use Keras's ImageDataGenerator.
I have read through the documentation and understand the basic idea. However, I am not quite sure how to apply it to my use case, i.e. how to generate two images and a label indicating whether they are in the same class.
Any help would be greatly appreciated.
P.S. I can think of a much more convoluted process using sklearn's extract_patches_2d but I feel there is an elegant solution to this.
Edit: It looks like creating my own data generator may be the way to go. I will try this approach.
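The custom-generator idea can be sketched as follows: sample pairs of images with a binary "same class" label, balancing positive and negative pairs. This is a hypothetical NumPy sketch of the pairing logic; in Keras, it would typically live inside a custom keras.utils.Sequence (or a generator fed to model.fit), with ImageDataGenerator's augmentation applied to each image of the pair:

```python
import numpy as np

def make_pairs(images, labels, batch_size, rng):
    """Sample (image_a, image_b, same_class) triples for a Siamese net."""
    idx_by_class = {c: np.flatnonzero(labels == c) for c in np.unique(labels)}
    classes = list(idx_by_class)
    a, b, same = [], [], []
    for _ in range(batch_size):
        if rng.random() < 0.5:
            # Positive pair: two images drawn from the same class.
            c = rng.choice(classes)
            i, j = rng.choice(idx_by_class[c], size=2)
            same.append(1)
        else:
            # Negative pair: images drawn from two different classes.
            c1, c2 = rng.choice(classes, size=2, replace=False)
            i = rng.choice(idx_by_class[c1])
            j = rng.choice(idx_by_class[c2])
            same.append(0)
        a.append(images[i]); b.append(images[j])
    return np.stack(a), np.stack(b), np.array(same)

# Toy stand-in data: 20 random 8x8 grayscale "images" in 3 classes.
rng = np.random.default_rng(0)
images = rng.random((20, 8, 8, 1))
labels = rng.integers(0, 3, size=20)
x1, x2, y = make_pairs(images, labels, batch_size=16, rng=rng)
```

The two image batches then feed the twin branches of the Siamese network, and `y` is the target for a contrastive or binary cross-entropy loss.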

OpenCV 2.4.3 with Visual C++ express cascading classifiers images query

I am learning to implement a hand gesture recognition project. For this, I have gone through several tutorials that use color information, background subtraction, and various object segmentation techniques.
However, the one I would like to use is a method using cascading classifiers, though I don't have much understanding of this approach. I have read several texts and papers and understand the theory; however, I still don't understand what good images to train the cascading classifier on are. Is it better to train it on natural color images, or on images of hand gestures processed with Canny edge detection or some other way?
Also, is there any method that uses online training and testing similar to OpenTLD, but where the steps are explained? The OpenCV documentation for 2.3-2.4.3 is incomplete with respect to machine learning and object recognition and tracking, except for the code available at: http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html
I know this is a long question, but I wanted to explain my problem thoroughly. It would help me to understand the concept better than just using online code.
Sincere thanks in advance!
If you mean the Haar classifier, a good tutorial is here.
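For reference, the offline training workflow in OpenCV 2.4.x uses the bundled command-line tools. A hypothetical sketch (file names and sample counts are illustrative; positives.txt lists annotated object images, negatives.txt lists background image paths):

```shell
# Pack the annotated positive samples into the .vec format the trainer expects.
opencv_createsamples -info positives.txt -vec hands.vec -num 1000 -w 24 -h 24

# Train the cascade; -featureType HAAR selects Haar features, LBP is faster.
opencv_traincascade -data cascade_out -vec hands.vec -bg negatives.txt \
    -numPos 900 -numNeg 2000 -numStages 15 -w 24 -h 24 -featureType LBP
```

The resulting cascade_out/cascade.xml can then be loaded with cv::CascadeClassifier as in the tutorial linked above.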
