Are transformer-based language models overfitting on the paraphrase identification task? What tools overcome this? - nlp

I've been working on a sentence transformation task that involves paraphrase identification as a critical step: if we are confident enough that the state of the program (a sentence repeatedly modified) has become a paraphrase of a target sentence, stop transforming. The overall goal is actually to study potential reasoning in predictive models that can generate language prior to a target sentence. The approach is just one specific way of reaching that goal. Nevertheless, I've become interested in the paraphrase identification task itself, as it's received some boost from language models recently.
The problem I run into is when I manipulate sentences from examples or datasets. For example, in this HuggingFace example, if I negate either sequence or change the subject to Bloomberg, I still get a majority "is paraphrase" prediction. I started going through many examples in the MRPC training set, negating one sentence in a positive example or making one sentence in a negative example a paraphrase of the other, especially when doing so would be a few-word edit. I found, to my surprise, that various language models, like bert-base-cased-finetuned-mrpc and textattack/roberta-base-MRPC, don't change their confidences much on these sorts of changes. That's surprising, as these models claim an F1 score of 0.918+. The dataset is clearly missing a focus on negative examples and on minimally perturbed examples.
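Concretely, here's a minimal sketch of the kind of probe I mean (the sentence pair below is invented, not taken from MRPC; the model name is one of the two above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/roberta-base-MRPC"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Negating the second sentence should break the paraphrase relation.
inputs = tok("Bloomberg bought the company in 2001.",
             "Bloomberg did not buy the company in 2001.",
             return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # index 1 = "is paraphrase"; in my tests it stays high anyway
```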
My question is: are there datasets, techniques, or models that deal well with small edits? I know this is an extremely generic question, much more so than is typically asked on StackOverflow, but my concern is finding practical tools. If there is a theoretical technique, it might not be suitable, as I'm in the category of "available tools define your approach" rather than vice versa. So I hope the community has a recommendation on this.

Short answer to the question: yes, they are overfitting. Most of the important NLP data sets are not actually well-crafted enough to test what they claim to test, and instead test the ability of the model to find subtle (and not-so-subtle) patterns in the data.
The best tool I know for creating data sets that help deal with this is CheckList. The corresponding paper, "Beyond Accuracy: Behavioral Testing of NLP models with CheckList", is very readable and goes into depth on this type of issue. They have a very relevant table, but it needs some terms first:
We prompt users to evaluate each capability with three different test types (when possible): Minimum Functionality tests, Invariance, and Directional Expectation tests... A Minimum Functionality test (MFT) is a collection of simple examples (and labels) to check a behavior within a capability. MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability.

...An Invariance test (INV) is when we apply label-preserving perturbations to inputs and expect the model prediction to remain the same.

A Directional Expectation test (DIR) is similar, except that the label is expected to change in a certain way. For example, we expect that sentiment will not become more positive if we add "You are lame." to the end of tweets directed at an airline (Figure 1C).
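As a minimal sketch of what such a test looks like with the checklist package (pip install checklist): the sentence pairs here are invented, and predict_fn stands in for a hypothetical wrapper around your MRPC model that returns (predictions, confidences).

```python
from checklist.test_types import MFT

# Invented pairs where negating one side should flip the gold label
# to 0 ("not a paraphrase") -- exactly the perturbation from the question.
pairs = [
    ("The company reported record profits.",
     "The company did not report record profits."),
    ("Sales rose sharply last quarter.",
     "Sales did not rise sharply last quarter."),
]

test = MFT(pairs, labels=0, name="negation of one side",
           capability="Negation")
test.run(predict_fn)  # predict_fn: hypothetical (preds, confs) wrapper
test.summary()        # reports the failure rate on these examples
```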

I haven't been actively involved in NLG for long, so this answer will be a bit more anecdotal than SO's algorithms would like. Starting with the fact that in my corner of Europe, the general sentiment is that peer-review requirements for any kind of NLG project are stricter by several orders of magnitude than in other sciences - and likely not without reason, or a tensor thereof.
This makes funding a bigger challenge, so wherever you are, I wish you luck on that front. I'm not sure of how big of a deal this site is in the niche, but [Ehud Reiter's Blog][1] is where I would start looking into your tooling ideas.
Maybe even reach out to them/him personally, because I can't think of another source that has an academic background and a strong propensity for practical applications of NLG, at least based on the kind of content they've been putting out over the years.
Your background, environment/funding, and the seniority level/control you have over the project will eventually compose your decision vector for you. It's just how it goes on the bleeding edge of anything. What I will add, though, is not to limit yourself to a single language or technology in this phase, for precisely the reasons you've mentioned. I'd recommend the same in terms of potential open-source involvement, but if your profile information is accurate, that probably won't happen, no matter what you do and accomplish.
But yeah, in the grand scheme of things, your question is far from too broad, in my view. It identifies a rather unmistakable problem pattern that not all branches of science approach as lackadaisically as NLG-adjacent fields seem to right now. In that regard, I'd posit it is not broad enough: it will need to be promulgated far and wide before community-driven tooling gives you serious options on a micro level, so long as we collectively remain in an "oh, I was waiting for you to start doing something about it" phase.
P.S. I'd eliminate any Rust and ECMAScript alternatives prior to looking into Python, blasphemous as this might sound to a 2021 data scientist, due to performance reasons.
[1]: https://ehudreiter.com/2016/12/18/nlg-vs-templates/

Related

Finding probabilities of patterns in asset price movements based on multiple variables

I am seeking a method to allow me to analyse/search for patterns in asset price movements using 5 variables that move and change with price (from historical data).
I'd like to be able to assign a probability to a forecasted price move when for example, var1 and var2 do this and var3..5 do this, then price should do this with x amount of certainty.
Q1: Could someone point me in the right direction as to what framework / technique can help me achieve this?
Q2: Would this be a multivariate continuous random series analysis?
Q3: A Hidden Markov modelling?
Q4: Or perhaps is it a data-mining problem?
I'm looking for the what rather than the how.
One may opt to use Machine Learning tools to build a learner that can either
classify what kind the said "asset price movement" will be, and also serve statistical probability measures for such a Classifier prediction, or
regress a real target value, toward which the asset price will move, and also serve statistical probability measures for such a Regressor prediction.
A1: ( while StackOverflow strongly discourages users from asking for opinions about a tool or a particular framework ) there would not be much damage or extra time spent if one performs a search of academic papers, and there one would find quite a remarkable list of repeatedly used tools for ML in the context of academic R&D. For a reason, it would not be a surprise to meet scikit-learn ML classes a lot; some other papers may work with R-based quantitative finance / statistical libraries. The tools, however, with all due respect, are not the core of answering all the doubts and initial confusion present in the mix of your questions. The subject confusion is.
A2: No, it would not. Well, unless you beat all the advanced quantitative research and happen to prove that the Market exhibits random behaviour ( which it does not, and it would be a waste of time to re-cite the remarkable research published on why it is indeed not a random process ).
A3: Do not try to jump on any wagon just because of its attractive Tag or "contemporary popularity" in marketing-minded texts. With all due respect, understanding HMMs is beyond your sight while you still appear to be moving toward the nearest horizon: first understanding what to look for.
A4: This is a nice proof of a missed target. Your question shows, at this particular point better than at others, how little of your own research effort was put into covering the problem domain and acquiring at least some elementary knowledge before typing the last two questions.
StackOverflow encourages users to ask high quality questions, so do not hesitate to re-edit your post to add some polishing efforts to this subject.
If in need of inspiration, try to review a nice and powerful approach for a fast Machine Learning process, where both Classification and Regression tasks also obtain probability estimates for each predicted target value.
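For a concrete sense of what "probability estimates for each predicted target value" looks like in practice, here is a minimal sketch with scikit-learn (the five toy features and the up/down label are invented, not market data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # toy stand-ins for var1..var5
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy up/down movement label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:800], y[:800])

proba = clf.predict_proba(X[800:])        # per-class probability estimates
print(proba[:3])                          # e.g. [[0.12 0.88], ...]
```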
To have some idea about highly performant ML predictors: these typically operate on much more than a set of 5 variables ( called "features" in the ML domain ). ( Think rather of some large hundreds to small thousands of features, typically heavily non-linear transformations of the original TimeSeries data. )
There you go, if indeed willing to master ML for algorithmic trading.
You may like to read about state-of-the-art research in this direction:
[1] Mondrian Forests: Efficient Online Random Forests, arXiv:1406.2673 [stat.ML]
[2] Mondrian Forests for Large-Scale Regression when Uncertainty Matters, arXiv:1506.03805 [stat.ML]
You may also enjoy other posts on the subject under the StackOverflow algorithmic-trading tag.

How to compare complexities of corpora?

I would like to compare how complex (varied or predictable) my three corpora are. They are from different topics, so some vocabulary is different, some is the same. Looking at one of the data sets, it's clear that the syntax is more difficult than in the other two, sentences are longer, and so on.

I built word N-gram language models using the SRILM toolkit (I'm new to language modelling) with the idea that I can then compare these models. One measure mentioned in relation to language models is perplexity. I'm confused about the following question: can I just use the perplexities of the three LMs directly as a measure of how varied the corpora are? The vocabulary and the sizes of the corpora are different, so now I think that this won't be a good comparison. I also built LMs from POS tags, but the quality of the POS tagging is not good because the language is from forums, with spelling mistakes, ungrammatical sentences and so on.

What measures could be used to compare the complexity of corpora from different domains? I'd appreciate your advice.
[I'm new to Stack Exchange. I posted this on Cross Validated, but I think this may be a more appropriate forum.]
"I also built LMs from POS-Tags but the quality of the POS-Tagging result is not good because the language is from fora, has spelling mistakes, ungrammatical sentences and so on."
Aside from it being noisy, as you pointed out, you should think carefully about whether particular linguistic features are useful in your analysis. Does one corpus having proportionally more nouns move you toward what you want to learn about the corpora? Maybe in something like authorship attribution, but I can't really think of anywhere else it would be effective.
If data sparsity is an issue, LSI can help by collapsing related terms together. This could also help with the spelling issues, collapsing poorly spelt words with their correct counterparts if they appear in similar contexts.
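As a minimal sketch of what LSI looks like in practice (here via scikit-learn's TruncatedSVD over TF-IDF vectors; the toy corpus, including the misspelling "kat", is invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

# Toy corpus: "kat" is a misspelling of "cat" that shares its context.
docs = ["the cat sat on the mat",
        "a kat sat on a mat",
        "stocks fell sharply today"]

# LSI = truncated SVD of the term-document matrix; documents that use
# related terms land near each other in the low-rank latent space.
lsi = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
print(lsi.fit_transform(docs))  # 3 documents x 2 latent dimensions
```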
"The vocabulary and the sizes of the corpora are different, so now I think that this won't be a good comparison."
It's not the end of the world. Having more data is always better, but you can work with what you have.
If you haven't chosen a language model yet, there are a few decisions you have to make:
Are you going to smooth the data? How?
Are you going to use an advanced technique to better exploit the data, such as Latent Semantic Indexing (LSI)?
You mention that you have a language model; I'm assuming your language model is a probability distribution of the form P(N-gram|topic). If this is correct, you've already normalized the data, so the probability distributions should be readily comparable. Having more data would get you a more reliable result, but if your corpora are "big enough" to sample each topic reliably, you can move right into comparison.
As for comparison, try the KL-Divergence. KL-Divergence is "a measure of the information lost when Q is used to approximate P." Less loss means that the corpora are more similar. If you want a symmetric comparison, a "cheap" way to do it is to add D(P||Q) + D(Q||P). Note, though:
The KL divergence is only defined if Q(i)=0 ⇒ P(i)=0, for all i (absolute continuity).
So you'll have to smooth, in some manner.
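A minimal sketch of that recipe (add-alpha smoothing over a shared vocabulary, then the symmetrized KL via scipy; the two toy corpora are invented):

```python
from collections import Counter
from scipy.stats import entropy  # entropy(p, q) computes D(P||Q)

def unigram_dist(tokens, vocab, alpha=1.0):
    """Add-alpha (Laplace) smoothed unigram probabilities over a shared vocab."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return [(counts[w] + alpha) / total for w in vocab]

corpus_a = "the cat sat on the mat".split()
corpus_b = "the dog lay on the rug".split()
vocab = sorted(set(corpus_a) | set(corpus_b))  # shared support: KL is defined

p = unigram_dist(corpus_a, vocab)
q = unigram_dist(corpus_b, vocab)
print(entropy(p, q) + entropy(q, p))  # symmetric: D(P||Q) + D(Q||P)
```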

Reconstructing now-famous 17-year-old's Markov-chain-based information-retrieval algorithm "Apodora"

While we were all twiddling our thumbs, a 17-year-old Canadian boy has apparently found an information retrieval algorithm that:
a) performs with twice the precision of the current and widely-used vector space model
b) is 'fairly accurate' at identifying similar words.
c) makes microsearch more accurate
Here is a good interview.
Unfortunately, there's no published paper I can find yet, but from the snatches I remember from the graphical models and machine learning classes I took a few years ago, I think we should be able to reconstruct it from his submission abstract and what he says about it in interviews.
From interview:
Some searches find words that appear in similar contexts. That's pretty good, but that's following the relationships to the first degree. My algorithm tries to follow connections further. Connections that are close are deemed more valuable. In theory, it follows connections to an infinite degree.
And the abstract puts it in context:
A novel information retrieval algorithm called "Apodora" is introduced, using limiting powers of Markov chain-like matrices to determine models for the documents and making contextual statistical inferences about the semantics of words. The system is implemented and compared to the vector space model. Especially when the query is short, the novel algorithm gives results with approximately twice the precision and has interesting applications to microsearch.
I feel like someone who knows about Markov-chain-like matrices or information retrieval would immediately be able to realize what he's doing.
So: what is he doing?
From the use of words like 'context' and the fact that he's introduced a second order level of statistical dependency, I suspect he is doing something related to the LDA-HMM method outlined in the paper: Griffiths, T., Steyvers, M., Blei, D., & Tenenbaum, J. (2005). Integrating topics and syntax. Advances in Neural Information Processing Systems. There are some inherent limits to the resolution of the search due to model averaging. However, I'm envious of doing stuff like this at 17 and I hope to heck he's done something independent and at least incrementally better. Even a different direction on the same topic would be pretty cool.
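Purely as an illustration of what "limiting powers of Markov chain-like matrices" can mean (this is the PageRank-style intuition, not Apodora's actual construction, which isn't public): repeatedly applying a row-stochastic matrix propagates connection strength outward, in the limit "to an infinite degree".

```python
import numpy as np

# Toy word-to-word transition matrix (rows sum to 1), e.g. built from
# co-occurrence counts; the values here are invented.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2]])

v = np.array([1.0, 0.0, 0.0])  # all probability mass on one "query" word
for _ in range(50):            # v @ P^n converges to a stationary distribution
    v = v @ P
print(v)                       # influence spread over the whole vocabulary
```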

Document Analysis and Tagging

Let's say I have a bunch of essays (thousands) that I want to tag, categorize, etc. Ideally, I'd like to train something by manually categorizing/tagging a few hundred, and then let the thing loose.
What resources (books, blogs, languages) would you recommend for undertaking such a task? Part of me thinks this would be a good fit for a Bayesian Classifier or even Latent Semantic Analysis, but I'm not really familiar with either other than what I've found from a few ruby gems.
Can something like this be solved by a Bayesian classifier? Should I be looking more at semantic analysis/natural language processing? Or should I just be looking for keyword density and mapping from there?
Any suggestions are appreciated (I don't mind picking up a few books, if that's what's needed)!
Wow, that's a pretty huge topic you are venturing into :)
There are definitely a lot of books and articles you can read about it, but I will try to provide a short introduction. I am not a big expert, but I have worked on some of this stuff.
First you need to decide whether you want to classify essays into predefined topics/categories (a classification problem) or you want the algorithm to decide on different groups on its own (a clustering problem). From your description it appears you are interested in classification.
Now, when doing classification, you first need to create enough training data. You need to have a number of essays that are separated into different groups. For example 5 physics essays, 5 chemistry essays, 5 programming essays and so on. Generally you want as much training data as possible but how much is enough depends on specific algorithms. You also need verification data, which is basically similar to training data but completely separate. This data will be used to judge quality (or performance in math-speak) of your algorithm.
Finally, the algorithms themselves. The two I am familiar with are Bayes-based and TF-IDF based. For Bayes, I am currently developing something similar for myself in ruby, and I've documented my experiences in my blog. If you are interested, just read this - http://arubyguy.com/2011/03/03/bayes-classification-update/ and if you have any follow up questions I will try to answer.
TF-IDF is short for Term Frequency-Inverse Document Frequency. Basically, the idea is, for any given document, to find a number of documents in the training set that are most similar to it, and then figure out its category based on that. For example, if document D is similar to T1 (physics), T2 (physics) and T3 (chemistry), you guess that D is most likely about physics and a little chemistry.
The way it's done is you assign the most importance to rare words and little importance to common words. For instance, 'nuclei' is a rare physics word, but 'work' is a very common, uninteresting word. (That's why it's called inverse document frequency.) If you can work with Java, there is a very, very good Lucene library which provides most of this out of the box. Look in the API for 'similar documents' and look into how it is implemented. Or just google for 'TF-IDF' if you want to implement your own.
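A minimal sketch of that "similar documents" idea outside Java (scikit-learn in Python rather than Lucene; the essays and labels are toy stand-ins):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_essays = ["nuclei decay and conservation of energy",   # physics
                "acid base titration in the laboratory",     # chemistry
                "recursion and balanced binary trees"]        # programming
train_labels = ["physics", "chemistry", "programming"]

# TF-IDF vectors + nearest neighbours = "find the most similar training
# documents, then take their category".
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(train_essays, train_labels)
print(model.predict(["measuring the energy of decaying nuclei"]))  # physics
```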
I've done something similar in the past (though it was for short news articles) using a vector-clustering algorithm. I don't remember it exactly right now; it was what Google used in its infancy.
Using their paper I was able to have a prototype running in PHP in one or two days, then I ported it to Java for speed purposes.
http://en.wikipedia.org/wiki/Vector_space_model
http://www.la2600.org/talks/files/20040102/Vector_Space_Search_Engine_Theory.pdf

NLP classify sentences/paragraph as funny

Is there a way to classify a particular sentence/paragraph as funny? There are very few pointers as to where one should go with this.
There is research on this, it's called Computational Humor. It's an interdisciplinary area that takes elements from computational linguistics, psycholinguistics, artificial intelligence, machine learning etc. They are trying to find out what it is that makes stories or jokes funny (e.g. the unexpected connection, or using a taboo topic in a surprising way etc) and apply it to text (either to generate a funny story or to measure the 'funniness' of text).
There are books and articles about it (e.g. by Graeme Ritchie).
Yes, you should use a training corpus to build a predictive model able to detect funny sentences. Sometimes this is known as "Sentiment Analysis" in the literature. Take a look at this article about Sentiment Analysis with LingPipe.
If you can use Java, you can use their library (see the license matrix). I found it very useful, though not in exactly the same context as yours.
The only way to pull this off is to get a couple of thousand people (monkeys won't do, sorry) to look through thousands of funny sentences/stories, rate them, and then build some sort of expert system/neural network out of it. Given the scope of the problem and its subjectivity (a thing funny to one person might not be funny, and might even be offensive, to another), I'd say it's an impossible task.
You can use the same technique as spam filters. Instead of spam/non-spam you classify on funny/not-funny. Look into naive bayesian classifiers for more information.
http://en.wikipedia.org/wiki/Naive_Bayesian_classification
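A minimal sketch of that spam-filter analogy (naive Bayes over bag-of-words via scikit-learn; the four training texts and their labels are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["why did the chicken cross the road",
         "quarterly revenue grew by three percent",
         "i told my wife she was drawing her eyebrows too high",
         "the meeting has been rescheduled to thursday"]
labels = ["funny", "not-funny", "funny", "not-funny"]

# Same machinery as a spam filter, relabelled funny / not-funny.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["why did the meeting cross the road"]))
```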
Also, try Computational Humor on Google Scholar if you're serious about getting into the field. Sentiment Analysis has been mentioned too; see Wikipedia on that.
Of course, this all depends on what your scope and aims are...
