May I ask whether anyone else has also looked into mapping SGD onto the BSP model? I'm investigating whether the BSP model is a good candidate for implementing a distributed version of SVM trained with SGD.
Thanks!
Joe
BERT pre-training of the base model is done with a language-modeling approach: we mask a certain percentage of the tokens in a sentence and make the model predict those masked tokens. Then, to do downstream tasks, we add a newly initialized layer and fine-tune the model.
However, suppose we have a gigantic dataset for sentence classification. Theoretically, can we initialize the BERT base architecture from scratch, train both the additional downstream task-specific layer and the base model weights from scratch with this sentence classification dataset only, and still achieve a good result?
Thanks.
BERT can be viewed as a language encoder, which is trained on a humongous amount of data to learn the language well. As we know, the original BERT model was trained on the entire English Wikipedia and the BooksCorpus, which together amount to 3,300M words. BERT-base has 109M model parameters. So, if you think you have enough data to train BERT, then the answer to your question is yes.
However, when you said "still achieve a good result", I assume you are comparing against the original BERT model. In that case, the answer lies in the size of the training data.
I am wondering why you prefer to train BERT from scratch instead of fine-tuning it. Is it because you are worried about a domain-adaptation issue? If not, pre-trained BERT is probably a better starting point.
Please note that if you want to train BERT from scratch, you may want to consider a smaller architecture. You may find the following papers useful:
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
I can help.
First of all, MLM and NSP (the original pre-training objectives from the NAACL 2019 paper) are meant to train language encoders with prior language knowledge, like a primary-school student who has read many books in the general domain. Before BERT, many neural networks were trained from scratch, from a clean slate where the model doesn't know anything. That is like a newborn baby.
So my question is: is it a good idea to start teaching a newborn baby when you can begin with a primary-school student? My answer is no. This is supported by the numerous state-of-the-art results achieved by pre-trained models compared to the old approach of training a neural network from scratch.
As someone who works in the field, I can assure you that it is a much better idea to fine-tune a pre-trained model. It doesn't matter whether you have 200k or 1 million datapoints. In fact, more fine-tuning data will only make the downstream results better if you use the right hyperparameters.
I recommend a learning rate between 2e-6 and 5e-5 for sentence classification tasks, though you can explore. If your dataset is very, very domain-specific, it's up to you to fine-tune with a higher learning rate, which will move the model further away from its "pre-trained" knowledge.
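For concreteness, here is a minimal fine-tuning sketch, assuming the Hugging Face transformers library and PyTorch. The texts, labels, number of labels, and learning rate are placeholders chosen only to illustrate the setup, not a prescribed recipe.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Pre-trained encoder plus a newly initialized classification head.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Hypothetical toy batch; a real setup would iterate over a DataLoader.
texts = ["an example sentence", "another example sentence"]
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Learning rate picked inside the 2e-6 ~ 5e-5 range mentioned above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # forward pass returns the classification loss
outputs.loss.backward()                  # backpropagate through the head and the encoder
optimizer.step()
optimizer.zero_grad()
```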
Also, regarding your question:
"Can we initialize the BERT base architecture from scratch, train both the additional downstream task-specific layer and the base model weights from scratch with this sentence classification dataset only, and still achieve a good result?"
I'm negative about this idea. Even if you have a dataset with 200k instances, BERT is pre-trained on 3,300M words. BERT is too data-hungry (both size-wise and architecture-wise) to be trained from scratch with only 200k instances. If you want to train a neural network from scratch, I'd recommend you look into LSTMs or RNNs.
Not that I recommend LSTMs; just fine-tune BERT. 200k is not even that big anyway.
Best of luck with your NLP studies :)
In a CNN, the convolution operation 'convolves' a kernel matrix over an input matrix. Now, I know how a fully connected layer is trained using gradient descent and backpropagation, but how does the kernel matrix change over time?
There are multiple ways in which the kernel matrix can be initialized, as mentioned here in the Keras documentation. However, I am interested in how it is trained. If it also uses backpropagation, is there a paper that describes the training process in detail?
This post also raises a similar question, but it is unanswered.
Here is a well-explained post about backpropagation for a convolutional layer. In short, it is also gradient descent, just like with an FC layer. In fact, you can effectively turn a convolutional layer into a fully connected layer, as explained here.
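To make the rule concrete, here is a minimal NumPy sketch for a single-channel 'valid' convolution (really cross-correlation, as in most deep-learning libraries). The input, kernel, and upstream gradient are toy placeholders; the point is that the kernel's gradient is itself a cross-correlation of the input with the upstream gradient, and the kernel then changes by an ordinary gradient-descent step.

```python
import numpy as np

def cross_correlate_valid(x, k):
    """Plain 'valid' cross-correlation (what CNN libraries call convolution)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.randn(5, 5)            # toy input feature map
k = np.random.randn(3, 3)            # kernel being trained
y = cross_correlate_valid(x, k)      # forward pass, 3x3 output

d_y = np.ones_like(y)                # pretend upstream gradient dL/dy from backprop

# Backprop rule for the kernel: dL/dk[a, b] = sum_{i, j} dL/dy[i, j] * x[i + a, j + b],
# i.e. the valid cross-correlation of the input with the upstream gradient.
d_k = cross_correlate_valid(x, d_y)

k -= 0.01 * d_k                      # one gradient-descent step on the kernel
```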
I am studying SVMs and will implement an SVM using Python's sklearn.svm.SVC.
As far as I know, the SVM problem can be represented as a QP (Quadratic Programming) problem.
So I was wondering which QP solver is used to solve the SVM QP problem in sklearn's SVM.
I think it may be SMO or a coordinate descent algorithm.
Please let me know exactly which algorithm is used in sklearn's SVM.
Off-the-shelf QP solvers have been used in the past, but for many years now dedicated code has been used (much faster and more robust). Those solvers are not (general) QP solvers anymore and are built just for this one use case.
sklearn's SVC is a wrapper for libsvm (proof).
As the link says:
Since version 2.8, it implements an SMO-type algorithm proposed in this paper:
R.-E. Fan, P.-H. Chen, and C.-J. Lin. Working set selection using second order information for training SVM. Journal of Machine Learning Research 6, 1889-1918, 2005.
(link to paper)
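As a quick illustration (assuming scikit-learn; the dataset below is synthetic): SVC hands the problem to libsvm's SMO-type solver, while LinearSVC hands a linear SVM to liblinear, which uses coordinate descent; neither calls a general-purpose QP solver.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

# Synthetic toy data just to exercise the two backends.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

kernel_svm = SVC(kernel="rbf").fit(X, y)   # libsvm backend: SMO-type working-set selection
linear_svm = LinearSVC().fit(X, y)         # liblinear backend: coordinate descent

print(kernel_svm.score(X, y), linear_svm.score(X, y))
```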
I'd like to understand whether I can train an MNB model with SGD, and whether that is a valid approach. My application is text classification. In sklearn I've found that there is no MNB available and that by default it's SVM; however, NB is a linear model, isn't it?
So if my likelihood parameters (with Laplacian smoothing) can be estimated as

$$\hat{\theta}_{w \mid c} = \frac{N_{cw} + 1}{N_c + |V|},$$

where $N_{cw}$ is the count of word $w$ in class $c$, $N_c$ is the total word count in class $c$, and $|V|$ is the vocabulary size, can I update these parameters with SGD and minimize the cost function?
Please let me know if SGD is irrelevant here. Thanks in advance.
UPDATE:
So I got an answer, and I hope I understood it right: MNB's parameters are estimated from the word occurrences in the given input text (like tf-idf). But I still don't clearly understand why we can't use SGD for MNB training. I'd understand it better with an explicit explanation or some mathematical interpretation. Thanks.
"In sklearn I've found that there is no MNB available"
Multinomial naive Bayes is implemented in scikit-learn. There is no gradient descent to use: this implementation just uses relative frequency counts (with smoothing) to find the parameters of the model in a single pass, which is the standard and most efficient way to fit an MNB model:
http://scikit-learn.org/stable/modules/naive_bayes.html
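A minimal sketch of this, assuming scikit-learn (the tiny corpus and labels are made-up placeholders): fitting MultinomialNB is a single counting pass, and the learned parameters are just smoothed log relative frequencies.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["good movie", "bad movie", "great film"]   # hypothetical tiny corpus
labels = [1, 0, 1]

X = CountVectorizer().fit_transform(docs)          # word-count features
clf = MultinomialNB(alpha=1.0).fit(X, labels)      # alpha=1.0 is Laplace smoothing

# The fitted parameters are just (logs of) smoothed relative frequencies.
print(clf.feature_log_prob_)
```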
Reading the scikit-learn docs, I understood that the implementation behind the DPGMM class uses variational inference rather than the more traditional Gibbs sampling.
Nevertheless, while going through Edwin Chen's popular post ("Infinite Mixture Models with Nonparametric Bayes and the Dirichlet Process"), he says he uses scikit-learn to run Gibbs sampling inference for a DPGMM.
So, is there a Gibbs sampling implementation of the DP-GMM in scikit-learn, did Chen get it wrong, or was there a Gibbs version that was replaced by the variational one?
As far as I know, there never was a Gibbs sampling implementation. (And I have been with the project for a couple of years.)
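For reference, here is a minimal sketch of the variational route, assuming a recent scikit-learn release in which the old DPGMM class has been replaced by BayesianGaussianMixture with a Dirichlet-process prior (the data and truncation level below are placeholders); inference is still variational, not Gibbs sampling.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

X = np.random.randn(500, 2)  # toy data

# Variational DP mixture: n_components acts as a truncation level, and the
# weights of unused components are pushed toward zero during fitting.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

print(dpgmm.weights_)
```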