Linear temporal logic (LTL) and automata modelling

I'm having difficulty understanding how these automata are modelled using Linear Temporal Logic. Can someone please explain this to me, using the cases shown in the picture in this link, or point me to a source that explains this with examples?
Thank you in advance for your help.

LTL formulas are defined over an alphabet (usually referred to as the set of "atomic propositions"; in your examples the alphabet is the set {x,y}). An LTL formula splits the infinite sequences of subsets of the alphabet into the (infinite) words that satisfy the formula and those that don't. For example, the word {x}, {x,y}, {}, {}, ... satisfies the formula F(not x and not y), but does not satisfy the formula G y.
A Büchi automaton does the same: it reads an infinite word over some alphabet and either accepts or rejects it. Vardi and Wolper showed that, given an LTL formula, it is possible to construct a Büchi automaton that accepts exactly the infinite words that satisfy the formula. You can see the construction here.
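To make this concrete, here is a small hand-built example in Python (this is not the Vardi–Wolper construction itself, and the helper names are made up for illustration): a deterministic Büchi automaton for the formula G y over the alphabet {x, y}, together with an acceptance check for ultimately periodic words given as a finite prefix plus a finite loop that repeats forever.

# States: "ok" (y has held so far, accepting) and "dead" (y was violated once).
ACCEPTING = {"ok"}

def delta(state, letter):
    """Deterministic transition function; a letter is a frozenset of atomic propositions."""
    if state == "ok" and "y" in letter:
        return "ok"
    return "dead"

def accepts(prefix, loop):
    """Check whether the word prefix . loop^omega is accepted.

    For a deterministic Büchi automaton it is enough to run the prefix, then repeat
    the loop until a state recurs at the loop boundary, and test whether an
    accepting state was visited in the repeating part of the run."""
    state = "ok"
    for letter in prefix:
        state = delta(state, letter)

    passes = []   # (state at the start of a loop pass, accepting state seen during that pass)
    while state not in [s for s, _ in passes]:
        start, saw_accepting = state, False
        for letter in loop:
            state = delta(state, letter)
            saw_accepting = saw_accepting or state in ACCEPTING
        passes.append((start, saw_accepting))

    first = next(i for i, (s, _) in enumerate(passes) if s == state)
    return any(saw for _, saw in passes[first:])

# ({y} {x,y})^omega satisfies G y; {x} ({y})^omega does not, because y fails at the first letter.
print(accepts([], [frozenset({"y"}), frozenset({"x", "y"})]))   # True
print(accepts([frozenset({"x"})], [frozenset({"y"})]))          # False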

Related

WAG matrix implementation

I am working with some programs in Python 3.4. I want to use the WAG matrix for phylogeny inference, but I am confused about the formula it implements.
For example, in a phylogenetics study, when a sequence file is used to generate a distance-based matrix, a formula called "p-distance" is applied, and on the basis of this formula and some standard values for the sequence data a matrix is generated, which is later used to construct a tree. When a character-based method of tree construction is used, "WAG" is one of the matrices used for likelihood tree construction. What I want to ask is: if one wants to implement this matrix, what is its formula basis?
I want to write code for this implementation, but first I need to understand the logic used by the WAG matrix.
I have an aligned protein sequence file and I need to generate a "WAG" matrix from it. The thing is that I have been studying the literature on the WAG matrix, but I could not figure out how it performs its calculation. Does it have a specific formula? (For example, 'p-distance' is the formula used by the distance matrix.) I want to give an aligned protein sequence file as input and get a matrix generated as output.
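For comparison, here is a minimal sketch of the p-distance calculation mentioned above (not the WAG model itself, which is an empirical amino-acid substitution matrix): for two aligned sequences, p-distance is simply the proportion of sites at which they differ, and the pairwise values fill a distance matrix. The sequences below are made up for illustration.

def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned, equal-length sequences.
    Sites containing a gap in either sequence are skipped in this sketch."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    return sum(a != b for a, b in pairs) / len(pairs)

aligned = {"seqA": "MKV-LT", "seqB": "MKVILS", "seqC": "MRV-LT"}
names = sorted(aligned)
for x in names:
    print(x, [round(p_distance(aligned[x], aligned[y]), 2) for y in names])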

Special interleaving string coding

The interleaving rule is to form a new word by inserting one word into another, letter by letter, as shown below:
a p p l e
o l d
=
aoplpdle
It does not matter which word goes first. (oalpdple is also valid)
The problem: given a vector of strings {old, apple, talk, aoplpdle, otladlk}, find all the words that are valid interleavings of two words from the vector.
The simplest solution needs at least O(n^2) time: take every pair of words, form their interleaving, and check whether it is in the vector.
Is there a better solution?
Sort by length. You only need to check pairs of entries whose combined length equals the length of some existing entry.
This will reduce your average complexity. I didn't take the time to compute the worst-case complexity, but it's probably lower than O(n^2) as well.
You can also optimize the "inner loop" by rejecting matches early - you don't really need to construct the entire interleaved word to reject a match - iterate the candidate word alongside the 2 input words until you find a mismatch. This won't reduce your worst-case complexity, but it will have a positive effect on overall performance.
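Here is a rough Python sketch of the two ideas above (the function names are made up for this example): the length filter skips pairs whose combined length matches no word in the vector, and the interleaving check bails out at the first mismatching character instead of building the whole interleaved word.

def matches(candidate, a, b):
    """Early-rejecting check: does interleaving a with b (a's letters first) give candidate?"""
    if len(candidate) != len(a) + len(b):
        return False
    i = j = k = 0
    # Alternate letters while both words still have characters left.
    while i < len(a) and j < len(b):
        if candidate[k] != a[i] or candidate[k + 1] != b[j]:
            return False
        i, j, k = i + 1, j + 1, k + 2
    # Whatever remains of the longer word must match the tail of the candidate.
    return candidate[k:] == a[i:] + b[j:]

def find_interleavings(words):
    by_length = {}
    for w in words:
        by_length.setdefault(len(w), []).append(w)
    results = set()
    for idx, a in enumerate(words):
        for b in words[idx + 1:]:
            # Only candidates whose length equals len(a) + len(b) can possibly match.
            for cand in by_length.get(len(a) + len(b), []):
                if matches(cand, a, b) or matches(cand, b, a):
                    results.add(cand)
    return results

print(find_interleavings(["old", "apple", "talk", "aoplpdle", "otladlk"]))
# {'aoplpdle', 'otladlk'}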

Algorithm to find all substrings of a string within a given edit distance of another string

I know the title is a bit messy, so let me explain in detail:
I have two strings, T and P. T represents the text to be searched, and P represents the pattern to be searched for. I want to find ALL substrings of T which are within a given edit distance of P.
Example:
T = "cdddx"
P = "mdddt"
Say I want all substrings within edit distance 2; the answers would be:
cdddx //rewrite c and x
dddx // insert m, rewrite x
cddd //insert t, rewrite c
ddd // insert m and t
Don't know if that's all of them, but you get the point.
I know the Wagner–Fischer algorithm can be employed to solve this problem - I check the numbers in the last row of the Wagner–Fischer matrix, see whether they satisfy this condition, and find the substrings that way, then run the algorithm again for T', where T' is T with the first letter removed, and so on. The problem is that the time complexity of this shoots up to a staggering O(T^3*P). I'm looking for a solution close to the original time complexity of the Wagner–Fischer algorithm, i.e. O(T*P).
Is there a way to get this done in such time, or something better than what I have right now? Note that I am not necessarily looking for a Wagner–Fischer-based solution; anything is OK. Thanks!
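For what it's worth, here is a Python sketch of a standard variant of the Wagner–Fischer recurrence that is often used for approximate substring matching: the first row of the matrix is initialised to zero, so a match is allowed to start at any position of T, and every column whose last entry is at most k marks the end of some substring within edit distance k of P. Recovering the actual substrings still requires a traceback; the sketch below only reports end positions, and runs in O(T*P) time.

def approx_match_end_positions(T, P, k):
    """Return (end_position_in_T, distance) pairs where some substring of T
    ending at that position is within edit distance k of P."""
    m, n = len(P), len(T)
    prev = [0] * (n + 1)              # row 0 is all zeros: a match may start anywhere in T
    for i in range(1, m + 1):
        curr = [i] + [0] * n          # column 0: i characters of P against the empty string
        for j in range(1, n + 1):
            cost = 0 if P[i - 1] == T[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # skip a character of P
                          curr[j - 1] + 1,      # skip a character of T
                          prev[j - 1] + cost)   # match / substitute
        prev = curr
    return [(j, prev[j]) for j in range(1, n + 1) if prev[j] <= k]

print(approx_match_end_positions("cdddx", "mdddt", 2))   # [(4, 2), (5, 2)]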

Fast way to match strings with typos

I have a huge list of strings (city names) and I want to find the name of a city even if the user makes a typo.
Example
User types "chcago" and the system finds "Chicago"
Of course I could calculate the Levenshtein distance between the query and every string in the list, but that would be horribly slow.
Is there any efficient way to perform this kind of string-matching?
I think the basic idea is to use Levenshtein distance, but on a subset of the names. One approach that works if the names are long enough is to use n-grams. You can store n-grams and then use more efficient techniques to require that at least x n-grams match. Alas, your example misspelling has only 2 matching 3-grams with "Chicago" out of 5 (unless you count partials at the beginning and end).
For shorter names, another approach is to store the letters of each name. So "Chicago" would turn into 6 "tuples": "c", "h", "i", "a", "g", "o". You would do the same for the name entered and then require that 4 or 5 of them match. This is a fairly simple match operation, so it can go quite fast.
Then, on this reduced set, apply Levenshtein distance to determine what the closest match is.
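A minimal Python sketch of that idea, with made-up names and thresholds: shortlist the names that share at least a couple of 3-grams with the query, then rank only the shortlist by exact Levenshtein distance.

def ngrams(s, n=3):
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def best_match(query, names, min_shared=2):
    q = ngrams(query)
    shortlist = [name for name in names if len(q & ngrams(name)) >= min_shared]
    return min(shortlist, key=lambda name: levenshtein(query.lower(), name.lower()), default=None)

print(best_match("chcago", ["Chicago", "Boston", "Charleston"]))   # Chicago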
You're asking to determine Levenshtein without using Levenshtein.
You would have to determine how far the words can deviate before they can no longer be identified, and see whether it would be acceptable to apply such a less accurate algorithm. For instance, you could look up commonly transposed or mistyped letters and limit the search to those. Or apply the first/last letter rule from this paper. You could also assume the first few letters are correct, look the city up in a sorted list, and if you don't find it, apply Levenshtein to the n-1 and n+1 words, where n is the location of the last lookup (or some variant of it); see the sketch after this answer.
There are several ideas, but I don't think there is a single best solution for what you are asking, without more assumptions.
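As an illustration of the sorted-list idea, here is a rough Python sketch (the helper names and the window size are made up): assume the first few letters are typed correctly, binary-search the sorted city list for that prefix, and only run an exact Levenshtein comparison against the few neighbouring entries.

import bisect

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def candidates_near(query, sorted_names, prefix_len=3, window=5):
    """Entries around where the query's prefix would sort, assuming the prefix is right."""
    pos = bisect.bisect_left(sorted_names, query[:prefix_len])
    return sorted_names[max(0, pos - window):pos + window]

cities = sorted(["boston", "charleston", "chicago", "denver", "seattle"])
query = "chcago"
print(min(candidates_near(query, cities), key=lambda c: levenshtein(query, c)))   # chicago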
An efficient way to search for fuzzy matches on a text string based on Levenshtein distance (or any other metric that obeys the triangle inequality) is a Levenshtein automaton. It is implemented in the Lucene project (Java) and in particular in the Lucene.net project (C#). This method works fast, but is very complex to implement.

Best way to rank sentences based on similarity from a set of Documents

I want to know the best way to rank sentences based on similarity from a set of documents.
For example, let's say:
1. There are 5 documents.
2. Each document contains many sentences.
3. Let's take Document 1 as primary, i.e. the output will contain sentences from this document.
4. The output should be a list of sentences ranked so that the sentence with the FIRST rank is the most similar sentence across all 5 documents, then the 2nd, then the 3rd...
Thanks in advance.
I'll cover the basics of textual document matching...
Most document similarity measures work on a word basis, rather than sentence structure. The first step is usually stemming. Words are reduced to their root form, so that different forms of similar words, e.g. "swimming" and "swims" match.
Additionally, you may wish to filter the words you match to avoid noise. In particular, you may wish to ignore occurrences of "the" and "a". In fact, there are a lot of conjunctions and pronouns that you may wish to omit, so usually you will have a long list of such words - this is called a "stop list".
Furthermore, there may be bad words you wish to avoid matching, such as swear words or racial slurs. So you may have another exclusion list with such words in it, a "bad list".
So now you can count similar words in documents. The question becomes how to measure total document similarity. You need to create a score function that takes the shared words as input and returns a "similarity" value. Such a function should give a high value if the same word appears multiple times in both documents. Additionally, such matches should be weighted by overall word frequency, so that matches on uncommon words are given more statistical weight.
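To make the idea concrete, here is a toy Python sketch of that kind of score function (the suffix list, stop list, and corpus are deliberately tiny stand-ins for real stemming and stop-word handling): matched words are weighted by an inverse-document-frequency factor, so shared rare words count for more than shared common ones.

import math
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "in", "is", "to"}

def tokens(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    stemmed = []
    for w in words:
        for suffix in ("ming", "ing", "s"):   # crude stand-in for a real stemmer
            if w.endswith(suffix) and len(w) > len(suffix) + 2:
                w = w[:-len(suffix)]
                break
        if w and w not in STOP_WORDS:
            stemmed.append(w)
    return stemmed

def similarity(doc_a, doc_b, corpus):
    """Sum of idf-style weights over the words the two documents share."""
    df = Counter()
    for doc in corpus:
        df.update(set(tokens(doc)))
    shared = set(tokens(doc_a)) & set(tokens(doc_b))
    return sum(math.log((len(corpus) + 1) / (df[w] + 1)) for w in shared)

corpus = ["He swims in the lake", "Swimming in a lake", "The cat sleeps"]
print(similarity(corpus[0], corpus[1], corpus))   # "swim" and "lake" match, weighted by rarity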
Apache Lucene is an open-source search engine written in Java that provides practical detail about these steps. For example, here is the information about how they weight query similarity:
http://lucene.apache.org/java/2_9_0/api/all/org/apache/lucene/search/Similarity.html
Lucene combines Boolean model (BM) of Information Retrieval with
Vector Space Model (VSM) of Information Retrieval - documents
"approved" by BM are scored by VSM.
All of this is really just about matching words in documents. You did specify matching sentences. For most people's purposes, matching words is more useful as you can have a huge variety of sentence structures that really mean the same thing. The most useful information of similarity is just in the words. I've talked about document matching, but for your purposes, a sentence is just a very small document.
Now, as an aside, if you don't care about the actual nouns and verbs in the sentence and only care about grammar composition, you need a different approach...
First you need a link grammar parser to interpret the language and build a data structure (usually a tree) that represents the sentence. Then you have to perform inexact graph matching. This is a hard problem, but there are algorithms to do this on trees in polynomial time.
As a starting point you can compute the soundex of each word and then compare documents based on soundex frequencies.
Tim's overview is very nice. I'd just like to add that, for your specific use case, you might want to treat the sentences from Doc 1 as documents themselves and compare their similarity to each of the four remaining documents. This might give you a quick aggregate similarity measure per sentence without forcing you to go down the route of syntax parsing, etc.
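A compact sketch of that suggestion, assuming scikit-learn is available (the sample sentences are made up): each sentence of Document 1 is vectorised as its own "document" and ranked by its average cosine similarity to the other documents.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_sentences(primary_sentences, other_documents):
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit one vocabulary over everything, then split the matrix back apart.
    matrix = vectorizer.fit_transform(primary_sentences + other_documents)
    sent_vecs = matrix[:len(primary_sentences)]
    doc_vecs = matrix[len(primary_sentences):]
    scores = cosine_similarity(sent_vecs, doc_vecs).mean(axis=1)
    return sorted(zip(primary_sentences, scores), key=lambda p: p[1], reverse=True)

doc1 = ["The cat sat on the mat.", "Quantum computers are fast."]
others = ["A cat was sitting on a mat.", "Dogs chase cats.", "Mats are comfortable.",
          "Completely unrelated text."]
for sentence, score in rank_sentences(doc1, others):
    print(round(float(score), 3), sentence)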
