grade separation and shortest path on networks in spatstat

I have a question not about spatstat itself but about the use and limitations of spatstat.
When calculating metrics like the pcf and K-function equivalents on linear networks, a shortest-path distance is used instead of the Euclidean distance. I have the spatstat book from 2015, and I remember reading somewhere in the text that the shortest-path calculation on networks is not sensitive to grade separations like flyovers, bridges and underpasses, and that therefore caution should be exercised in selecting the study area, or one should be aware of this limitation when interpreting results.
Is there any publication that discusses this limitation of grade separation in detail and maybe suggests some workarounds? Or the limitations of network equivalents in general?
Thank you

The code for linear networks in spatstat can handle networks which contain flyovers, bridges, underpasses and so on.
Indeed the dataset dendrite, supplied with spatstat, includes some of these features.
The shortest-path calculation takes account of these features correctly.
The only challenge is that you can't build the network structure using the data conversion function as.linnet.psp, because it takes a list of line segments and tries to guess which segments are connected at a vertex. In this context it will guess wrongly.
The connectivity information has to be specified somehow! You can use the constructor function linnet to build the network object when you have this information. The connectivity can be edited interactively using clickjoin.
This is explained briefly on page 713 of the book (which also mentions dendrite).
The networks that can be handled by spatstat are slightly more general than the simple model described on page 711. Lines can cross over without intersecting.
I'm sorry the documentation is terse, but much of this information has been kept confidential until recently (while our PhD students were finishing).


Are transformer-based language models overfitting on the paraphrase identification task? What tools overcome this?

I've been working on a sentence transformation task that involves paraphrase identification as a critical step: if we are confident enough that the state of the program (a sentence repeatedly modified) has become a paraphrase of a target sentence, stop transforming. The overall goal is actually to study potential reasoning in predictive models that can generate language prior to a target sentence. The approach is just one specific way of reaching that goal. Nevertheless, I've become interested in the paraphrase identification task itself, as it's received some boost from language models recently.
The problem I run into is when I manipulate sentences from examples or datasets. For example, in this HuggingFace example, if I negate either sequence or change the subject to Bloomberg, I still get a majority "is paraphrase" prediction. I started going through many examples in the MRPC training set, negating one sentence in a positive example or making one sentence in a negative example a paraphrase of the other, especially when doing so would be a few-word edit. I found, to my surprise, that various language models, like bert-base-cased-finetuned-mrpc and textattack/roberta-base-MRPC, don't change their confidences much on these sorts of changes. That is surprising, as these models claim an F1 score of 0.918+. The dataset is clearly missing a focus on negative examples and small perturbative examples.
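For anyone who wants to reproduce this, the probe is roughly the following sketch (the sentence pair is made up; the model is the public MRPC fine-tune mentioned above, and treating index 1 as the "is paraphrase" class follows the usual MRPC label convention):

```python
# Rough sketch of the probe: score an original pair and a negated pair with a
# public MRPC fine-tune and compare the "is paraphrase" probabilities.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

sentence = "The company said its profits rose sharply in the quarter."
candidates = [
    "The company said its profits rose sharply in the quarter.",   # identical
    "The company said its profits did not rise in the quarter.",   # negated
]

for candidate in candidates:
    inputs = tokenizer(sentence, candidate, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    # Index 1 = "is paraphrase" under the usual MRPC label convention.
    print(f"P(paraphrase) = {probs[0, 1]:.3f}  for: {candidate}")
```

On pairs like these, the paraphrase probability barely moves after the negation, which is exactly the behaviour described above.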
My question is: are there datasets, techniques, or models that deal well with small edits? I know that this is an extremely generic question, much more so than is typically asked on StackOverflow, but my concern is finding practical tools. If there is a theoretical technique, it might not be suitable, as I'm in the category of "available tools define your approach" rather than vice versa. So I hope the community has a recommendation on this.
Short answer to the question: yes, they are overfitting. Most of the important NLP data sets are not actually well-crafted enough to test what they claim to test, and instead test the ability of the model to find subtle (and not-so-subtle) patterns in the data.
The best tool I know for creating data sets that help deal with this is CheckList. The corresponding paper, "Beyond Accuracy: Behavioral Testing of NLP models with CheckList", is very readable and goes into depth on this type of issue. They have a very relevant table... but first, some terms:
We prompt users to evaluate each capability with three different test types (when possible): Minimum Functionality tests, Invariance, and Directional Expectation tests... A Minimum Functionality test (MFT) is a collection of simple examples (and labels) to check a behavior within a capability. MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability.

...An Invariance test (INV) is when we apply label-preserving perturbations to inputs and expect the model prediction to remain the same.

A Directional Expectation test (DIR) is similar, except that the label is expected to change in a certain way. For example, we expect that sentiment will not become more positive if we add “You are lame.” to the end of tweets directed at an airline (Figure 1C).
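To make the INV idea concrete for paraphrase identification, here is a hand-rolled sketch of an invariance check (deliberately not the CheckList API; predict_paraphrase is a placeholder for whatever model you are evaluating):

```python
# Hand-rolled invariance (INV) test: apply a label-preserving perturbation and
# flag any pair whose paraphrase probability moves by more than `tol`.
def invariance_failures(pairs, perturb, predict_paraphrase, tol=0.1):
    failures = []
    for s1, s2 in pairs:
        before = predict_paraphrase(s1, s2)
        after = predict_paraphrase(s1, perturb(s2))
        if abs(before - after) > tol:
            failures.append((s1, s2, before, after))
    return failures

# Example of a perturbation that should NOT flip the label: a synonym swap.
def swap_synonym(sentence):
    return sentence.replace("profits", "earnings")
```

A directional (DIR) test is the same loop with the opposite assertion: after a label-changing edit such as negation, the probability should move, and in the expected direction.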
I haven't been actively involved in NLG for long, so this answer will be a bit more anecdotal than SO's algorithms would like. Starting with the fact that in my corner of Europe, peer-review expectations for any kind of NLG project are higher by several orders of magnitude than in other sciences - and likely not without reason, or tensor thereof.
This makes funding a bigger challenge, so wherever you are, I wish you luck on that front. I'm not sure of how big of a deal this site is in the niche, but [Ehud Reiter's Blog][1] is where I would start looking into your tooling ideas.
Maybe even reach out to them/him personally, because I can't think of another source that has an academic background and a strong propensity for practical applications of NLG, at least based on the kind of content they've been putting out over the years.
Your background, environment/funding, and the seniority level/control you have over the project will eventually compose that decision vector for you. It's just how it goes on the bleeding edge of anything. What I will add, though, is not to limit yourself to a single language or technology in this phase, for precisely the reasons you've mentioned. I'd recommend the same in terms of potential open-source involvement, but if your profile information is accurate, that probably won't happen no matter what you do and accomplish.
But yeah, in the grand scheme of things, your question is far from too broad, in my view. It identifies a rather unmistakable problem pattern that not all branches of science approach as lackadaisically as NLG-adjacent fields seem to right now. If anything, it is not broad enough, and it will need to be promulgated far and wide before community-driven tooling gives you serious options on a micro level - at least so long as we collectively remain in an "oh, I was waiting for you to start doing something about it" phase.
P.S. I'd eliminate any Rust and ECMAScript alternatives prior to looking into Python, blasphemous as this might sound to a 2021 data scientist, and accounting for the ridicule this would receive due to performance reasons.
[1]: https://ehudreiter.com/2016/12/18/nlg-vs-templates/

How does DenseLayout in qiskit's transpiler work?

I'm looking for an explanation of the DenseLayout algorithm used by Qiskit's transpiler.
I looked at the source code, but I still don't understand what "Choose a Layout by finding the most connected subset of qubits" means!
Is there a paper about this kind of mapping algorithm or other resource I can learn about it from?
It does a breadth-first search for a connected subset starting at each qubit. The subset with the most connectivity is selected. Due to symmetry there are many subsets with the same connectivity; however, it also looks at the noise in the device and picks the subset with the least amount of noise. Finally, that set is run through a reverse Cuthill-McKee traversal to reorder the qubits in the set for a lower degree.
There is no paper on it as I came up with it to solve a bug in earlier versions of the Qiskit swap mapper.
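If it helps, here is a rough Python sketch of the core idea (a simplification, not the actual Qiskit source; the real pass additionally weighs the device error rates and applies the reverse Cuthill-McKee reordering described above):

```python
# Simplified sketch of the dense-subset search: BFS from every physical qubit,
# collect a candidate subset of the required size, and keep the subset whose
# induced subgraph has the most couplings.
from collections import deque

def densest_subset(coupling_edges, n_physical, n_needed):
    # Build an adjacency list from the coupling map (list of (i, j) pairs).
    adj = {q: set() for q in range(n_physical)}
    for i, j in coupling_edges:
        adj[i].add(j)
        adj[j].add(i)

    best_subset, best_edges = None, -1
    for start in range(n_physical):
        # Breadth-first search until we have collected n_needed qubits.
        seen, queue = {start}, deque([start])
        while queue and len(seen) < n_needed:
            q = queue.popleft()
            for nb in adj[q]:
                if nb not in seen and len(seen) < n_needed:
                    seen.add(nb)
                    queue.append(nb)
        if len(seen) < n_needed:
            continue  # this start cannot reach enough qubits
        # Count couplings internal to the candidate subset.
        n_edges = sum(1 for i, j in coupling_edges if i in seen and j in seen)
        if n_edges > best_edges:
            best_subset, best_edges = seen, n_edges
    return best_subset

# Example: a 2x3 grid coupling map, placing a 4-qubit circuit.
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(densest_subset(edges, n_physical=6, n_needed=4))
```

In everyday use you don't call the pass directly; requesting transpile(circuit, backend, layout_method="dense") is the usual way to select it.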

What check should be used to confirm that a point pattern is random?

I have a very basic question. I am not a student of spatial statistics, but for an application I feel that a point pattern on a network is a good approximation for my case. I like the spatstat approach, and to limit myself to this package, I would like to ask:
Based on some observations, I have the rate (λ = points per km) of occurrence of a point event on a network. Which check (function/test) in spatstat should I perform to verify that my point pattern generated by rpoislpp is indeed random in nature?
I would be happy if someone could help me with this or direct me to some relevant beginner-level literature.
Thank you
A standard procedure would be to calculate the (network version of the) K-function of the point pattern dataset, and compare this with the envelopes of K-functions for simulated patterns which are completely random.
If X is your point pattern on a linear network (class lpp) then
plot(envelope(X, nsim=19))
will give a simple instance of these envelopes.
For more information see chapter 17 of the book cited.
Since you are using spatstat I would once again recommend our book Spatial Point Patterns: Methodology and Applications with R. Chapter 17 is about point patterns on a linear network. I can assure you that rpoislpp indeed generates points that are random in nature. You can just generate a bunch of samples and look at a plot of the patterns to see that they appear very random.

Reconstructing now-famous 17-year-old's Markov-chain-based information-retrieval algorithm "Apodora"

While we were all twiddling our thumbs, a 17-year-old Canadian boy has apparently found an information retrieval algorithm that:
a) performs with twice the precision of the current and widely used vector space model
b) is 'fairly accurate' at identifying similar words.
c) makes microsearch more accurate
Here is a good interview.
Unfortunately, there's no published paper I can find yet, but, from the snatches I remember from the graphical models and machine learning classes I took a few years ago, I think we should be able to reconstruct it from his submission abstract, and what he says about it in interviews.
From interview:
Some searches find words that appear in similar contexts. That’s pretty good, but that’s following the relationships to the first degree. My algorithm tries to follow connections further. Connections that are close are deemed more valuable. In theory, it follows connections to an infinite degree.
And the abstract puts it in context:
A novel information retrieval algorithm called "Apodora" is introduced, using limiting powers of Markov chain-like matrices to determine models for the documents and making contextual statistical inferences about the semantics of words. The system is implemented and compared to the vector space model. Especially when the query is short, the novel algorithm gives results with approximately twice the precision and has interesting applications to microsearch.
I feel like someone who knows about Markov-chain-like matrices or information retrieval would immediately be able to realize what he's doing.
So: what is he doing?
From the use of words like 'context' and the fact that he's introduced a second order level of statistical dependency, I suspect he is doing something related to the LDA-HMM method outlined in the paper: Griffiths, T., Steyvers, M., Blei, D., & Tenenbaum, J. (2005). Integrating topics and syntax. Advances in Neural Information Processing Systems. There are some inherent limits to the resolution of the search due to model averaging. However, I'm envious of doing stuff like this at 17 and I hope to heck he's done something independent and at least incrementally better. Even a different direction on the same topic would be pretty cool.
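As a side note on the abstract's wording: taken at face value, the "limiting power" of a Markov-chain-like matrix is just its stationary behaviour, i.e. where you end up after following transitions "to an infinite degree". A toy numpy illustration (the matrix is made up; this is not a reconstruction of Apodora):

```python
# Toy illustration of a "limiting power": repeatedly multiplying a
# row-stochastic matrix by itself converges to a matrix whose rows are the
# stationary distribution, the long-run weight of each state regardless of
# where you start.
import numpy as np

P = np.array([
    [0.1, 0.6, 0.3],   # transition probabilities out of state 0
    [0.4, 0.2, 0.4],
    [0.3, 0.3, 0.4],
])

limit = np.linalg.matrix_power(P, 50)   # P^50 ~ P^infinity for this chain
print(limit)      # every row is (approximately) the stationary distribution
print(limit[0])   # long-run weight of each state, independent of the start
```

In an IR setting the states would presumably be words or documents and the transition weights some co-occurrence statistic, which is roughly how I read the "follows connections to an infinite degree" remark.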

Word Map for Emotions

I am looking for a resource similar to WordNet. However, I want to be able to look up the positive/negative connotation of a word. For example:
bribe - negative
offer - positive
I'm curious as to whether anyone has run across any tool like this in AI/NLP research, or even in linguistics.
UPDATE:
For the curious, the accepted answer below put me on the right track towards what I needed. Wikipedia listed several different resources. The two I would recommend (because of ease of use/free use for a small number of API calls) are AlchemyAPI and Lymbix. I decided to go with AlchemyAPI, since people affiliated with academic institutions (like myself) and non-profits can get even more API calls per day if they just email the company.
Start looking up topics on 'sentiment analysis': http://en.wikipedia.org/wiki/Sentiment_analysis
There are some vocabulary compilations regarding affect, aka dictionaries of affect, such as the Affective Norms for English Words (ANEW) or the Dictionary of Affect in Language (DAL). They provide a dimensional representation of affect (valence, activation and control) that may be of use in a sentiment analysis scenario (detection of positive/negative connotation). In this sense, EmoLib works with the former by default, but may easily be extended with a more specific lexicon to tackle particular needs (for example, EmoLib provides an additional neutral label that is more appropriate than the positive/negative tag set alone in a Text-To-Speech synthesis setting).
There is also SentiWordNet, which gives you positive, negative and objective scores for each WordNet synset.
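For example, SentiWordNet can be queried through NLTK (assuming the wordnet and sentiwordnet corpora have been downloaded); something like this prints the scores for the two words above:

```python
# Query SentiWordNet scores via NLTK (run nltk.download('wordnet') and
# nltk.download('sentiwordnet') once beforehand).
from nltk.corpus import sentiwordnet as swn

for word in ["bribe", "offer"]:
    for s in swn.senti_synsets(word):
        print(word, s.synset.name(),
              "pos:", s.pos_score(),
              "neg:", s.neg_score(),
              "obj:", s.obj_score())
```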
However, you should be aware that the positive and negative connotation of a term often depends on the context in which it is used. A great introduction to this topic is the book Opinion mining and sentiment analysis by Bo Pang and Lillian Lee, which is available online for free.
