How to make attention maps for Swin Transformers? - pytorch

I have been working on an attention-map feature for one of my projects and have fine-tuned an image classification model for it. How can I generate attention maps for a Swin Transformer using PyTorch?
Thanks in advance!
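One common approach (a minimal sketch, not an official recipe) is to register forward hooks on the attention blocks and read out the post-softmax attention weights. The sketch below assumes a timm Swin model, whose WindowAttention blocks apply an attn_drop module directly to the attention matrix; newer timm releases with fused attention kernels may bypass this path, so treat it as a starting point.

    import torch
    import timm

    # Model name is one example; any timm Swin variant exposes the same blocks.
    model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True)
    model.eval()

    attention_maps = []  # one tensor per attention block per forward pass

    def save_attention(module, inputs, output):
        # Shape: (num_windows * batch, num_heads, tokens, tokens), tokens = window_size**2
        attention_maps.append(output.detach().cpu())

    hooks = []
    for module in model.modules():
        # WindowAttention applies attn_drop right after the softmax, so hooking
        # the dropout captures the attention weights (dropout is identity in eval).
        if module.__class__.__name__ == "WindowAttention" and hasattr(module, "attn_drop"):
            hooks.append(module.attn_drop.register_forward_hook(save_attention))

    with torch.no_grad():
        _ = model(torch.randn(1, 3, 224, 224))  # replace with your preprocessed image

    for h in hooks:
        h.remove()

    print(len(attention_maps), attention_maps[0].shape)

Note that Swin computes attention within local (shifted) windows, so each captured map covers only one window's tokens; projecting the maps back onto the full image requires undoing the window partitioning.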

Related

How to create a custom BERT language model for a different language?

I want to create a language translation model using transformers. However, TensorFlow seems to only have a BERT model for English: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4. If I want a BERT for another language, what is the best way to go about accomplishing this? Should I create a new BERT, or can I train TensorFlow's own BertTokenizer on another language?
The Hugging Face model hub contains a plethora of pre-trained monolingual and multilingual transformers (and relevant tokenizers) which can be fine-tuned for your downstream task.
However, if you are unable to locate a suitable model for your language, then yes, training from scratch is the only option. Beware, though, that training from scratch is a resource-intensive task that requires significant compute power. Here is an excellent blog post to get you started.
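For example, loading a multilingual checkpoint and its tokenizer from the hub takes only a couple of lines (the model ID below is just one of many candidates):

    from transformers import AutoModelForMaskedLM, AutoTokenizer

    # "bert-base-multilingual-cased" covers 104 languages; any hub model ID works here.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")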

TensorFlow and BERT: what are they exactly and what's the difference between them?

I'm interested in NLP and I came across TensorFlow and BERT. Both seem to be from Google, and both seem to be the best option for sentiment analysis as of today, but I don't understand what they are exactly or what the difference between them is... Can someone explain?
TensorFlow is an open-source machine learning library that lets you build a deep learning model/architecture. BERT is one such architecture. You can build many models using TensorFlow, including RNNs, LSTMs, and even BERT. Transformers like BERT are a good choice if you just want to deploy a model on your data and don't care about the deep learning field itself. For this purpose, I recommend the HuggingFace library, which provides a straightforward way to employ a transformer model in just a few lines of code. But if you want to take a deeper look at these models, I suggest you learn about the well-known deep learning architectures for text data, like RNNs, LSTMs, and CNNs, and try to implement them using an ML library like TensorFlow or PyTorch.
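To illustrate the "few lines of code" point, here is a minimal sentiment-analysis sketch using the HuggingFace pipeline API (it downloads a default pretrained English model on first use):

    from transformers import pipeline

    # Downloads a default pretrained sentiment model the first time it runs.
    classifier = pipeline("sentiment-analysis")
    print(classifier("I really enjoyed working with this library!"))
    # e.g. [{'label': 'POSITIVE', 'score': 0.999...}]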
BERT and TensorFlow are not competing things. There are not only two, but many implementations of BERT, and most are basically equivalent.
The implementations that you mentioned are:
The original code by Google, in TensorFlow: https://github.com/google-research/bert
The implementation by HuggingFace, in PyTorch and TensorFlow, which reproduces the same results as the original implementation and uses the same checkpoints as the original BERT paper: https://github.com/huggingface/transformers
These are the differences regarding different aspects:
In terms of results, there is no difference in using one or the other, as they both use the same checkpoints (same weights) and their results have been checked to be equal.
In terms of reusability, the HuggingFace library is probably more reusable, as it is designed specifically for that. It also gives you the freedom of choosing TensorFlow or PyTorch as the deep learning framework (see the sketch below).
In terms of performance, they should be the same.
In terms of community support (e.g. asking questions on GitHub or Stack Overflow about them), the HuggingFace library is better suited, as a lot of people are using it.
Apart from BERT, the transformers library by HuggingFace has implementations for lots of models: OpenAI GPT-2, RoBERTa, ELECTRA, ...
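As a concrete illustration of that framework choice (a sketch; the TF class additionally requires TensorFlow to be installed alongside transformers), the same checkpoint loads into either backend:

    from transformers import BertModel, TFBertModel

    # The same "bert-base-uncased" checkpoint, loaded into either backend.
    pt_model = BertModel.from_pretrained("bert-base-uncased")    # PyTorch
    tf_model = TFBertModel.from_pretrained("bert-base-uncased")  # TensorFlow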

Using AllenNLP Interpret with a HuggingFace model

I would like to use AllenNLP Interpret (code + demo) with a PyTorch classification model trained with HuggingFace (ELECTRA base discriminator). Yet it is not obvious to me how I can convert my model and use it in a local AllenNLP demo server.
How should I proceed?
Thanks in advance.
If your task is binary classification, you can look at the BoolQ example in https://github.com/allenai/allennlp-models/blob/main/training_config/classification/boolq_roberta.jsonnet. You can change that configuration to use a different model (such as Electra).
We also just put some new documentation out for the Interpret functionality: https://guide.allennlp.org/interpret
To give you a more specific answer, I'll need to know some more details, like what the task is you're trying to solve, how you trained the original model, etc.
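In the meantime, a hedged sketch of the programmatic route (the archive path and predictor name below are placeholders for your own trained model): wrap the trained archive in a Predictor and run one of the saliency interpreters over an input.

    from allennlp.predictors.predictor import Predictor
    from allennlp.interpret.saliency_interpreters import SimpleGradient

    # "model.tar.gz" and the predictor name are placeholders for your archive.
    predictor = Predictor.from_path("model.tar.gz", predictor_name="text_classifier")
    interpreter = SimpleGradient(predictor)

    # Returns token-level gradient saliency scores for the input.
    print(interpreter.saliency_interpret_from_json({"sentence": "This movie was great!"}))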

How can I know the architecture of a pre-trained model in PyTorch?

I have downloaded the pre-trained model "model_ir_se50.pth" for face recognition. It is giving very good results. How can I know its architecture?
A simple print(net) will work as per your need; replace net with your network variable. It prints the full module hierarchy.
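If the .pth file holds only a state_dict rather than a full module, print(net) is not directly available; you can still inspect the parameter names and shapes. A small sketch (the filename comes from the question):

    import torch

    # torch.save(model, ...) stores the full module (its class must be importable);
    # torch.save(model.state_dict(), ...) stores only an OrderedDict of tensors.
    obj = torch.load("model_ir_se50.pth", map_location="cpu")

    if isinstance(obj, torch.nn.Module):
        print(obj)  # prints the layer hierarchy
    else:  # assume a state_dict: list parameter names and shapes
        for name, tensor in obj.items():
            print(name, tuple(tensor.shape))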

Keras layers explanation

I want to get a deep understanding of how Keras layers work in a model: what each layer does in the model, and so on. I followed the Keras documentation, but the information isn't enough. If any of you know a place to get more knowledge, let me know. Thanks in advance.
Keras layers are the widely used CNN, DNN, and RNN layers. There is at least one research paper for each of them, and there is a lot of educational material out there. If you are really curious, you could look at the Keras source code. Some links for you, followed by a small example:
https://github.com/keras-team/keras/tree/master/keras/layers
http://cs231n.github.io/convolutional-networks/
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence
http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf
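As a hands-on starting point, here is a minimal sketch of a small CNN with a one-line note on what each common layer does (shapes and sizes are arbitrary):

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),  # learns 32 local 3x3 filters
        layers.MaxPooling2D(),                   # downsamples feature maps, adds translation tolerance
        layers.Flatten(),                        # reshapes 2D feature maps into one vector
        layers.Dense(64, activation="relu"),     # fully connected layer mixing all features
        layers.Dropout(0.5),                     # randomly zeroes units during training to curb overfitting
        layers.Dense(10, activation="softmax"),  # outputs class probabilities
    ])
    model.summary()  # prints each layer's output shape and parameter count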
