Relation of pumping lengths between related regular languages

How does the pumping length of a regular language relate to the pumping length of a related language? For example, if A :< B :< C are all regular languages and k is the pumping length of B, do we know anything about the pumping lengths of A and C?
One might naively be inclined to think that a sublanguage has a smaller (<=) pumping length when we look at finite languages: {a, ab, abc} :< {a, ab, abc, abcd} have respective pumping lengths 4 <= 5. Taking an element out of a set can't make its longest word any longer.
On the other hand, if you look at the state machine formed by the synchronized product of two languages, the intersection language and the union language have the same state-machine structure; they differ only in that the set of final states of the intersection is a subset of the set of final states of the union. Having more final states could make it more likely that a short path through the state machine exists. But, conversely, having fewer final states makes it more likely that the state machine has non-co-accessible states, and is thus reducible.

Note first that all languages over an alphabet are related to all other languages over that alphabet by some relation, however contrived. We really do need to limit discussion, as you have suggested, to something like subset in order to meaningfully scope the question.
Now, you've already correctly noted that in the general case, the subset relation doesn't have a clear bearing on the pumping lengths of related languages. You can easily take a* and compare it to {a^n}, and show that a minimal DFA for a* is almost always simpler than one for {a^n}.
Let us further restrict ourselves to languages that differ by a finite number of entries; that is, L \ R is finite and R \ L is finite. This is an indicator of similarity different from subset; but if we require, w.l.o.g., that R \ L be empty, then we recover a restricted version of subset. We might now ask the same question given this modified version: for languages that differ in a finite number of entries, does the subset relation tell us anything?
The answer is still no. Consider L = a* and R = a* \ A, where A is a finite non-empty subset of a*. Even so, L takes one state and R takes potentially many more.
Restricting ourselves to finite sets only, as you suggest, does let us deduce what you propose: that a minimal automaton for R will have no more states than the one for L. Why is this? We must have n+1 states to accept a string of length n, and we must have a dead state to reject the strings not in the language (of which there will be infinitely many). A minimal DFA for a finite language will have no loops at all (apart from the self-loop on the dead state), since otherwise the automaton would accept infinitely many strings.
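As a rough illustration (a Python sketch of my own; the trie_states helper and example languages are hypothetical, and the trie-shaped automaton only bounds the minimal DFA from above):

    # Count the states of the obvious trie-shaped DFA for a finite language.
    # A minimal DFA may merge suffix-equivalent states, so this is an upper bound.
    def trie_states(language):
        prefixes = {""}  # the start state corresponds to the empty prefix
        for word in language:
            for i in range(1, len(word) + 1):
                prefixes.add(word[:i])
        return prefixes

    L = {"a", "ab", "abc", "abcd"}
    R = {"a", "ab", "abc"}            # a sublanguage of L
    print(len(trie_states(L)) + 1)    # 6: five prefixes plus one dead state
    print(len(trie_states(R)) + 1)    # 5: four prefixes plus one dead state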
Your observation about taking the Cartesian product is correct, in that applying the construction gives a structurally identical DFA for any set operation (union, difference, intersection, etc.); however, these DFAs are not guaranteed to be minimal. The point for finite languages still holds, namely, the DFA for the intersection will have no more states than the one for the union, both before and after minimization of the two machines.
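Here is a sketch of that product construction in Python (the DFA encoding and all names are my own illustration, not a standard API). The union and intersection automata share one transition structure; only the accepting sets differ, and the intersection's is a subset of the union's:

    from itertools import product

    # A DFA is (transitions, start state, accepting states), with
    # transitions keyed by (state, symbol) pairs.
    def product_dfa(dfa1, dfa2, combine):
        (d1, s1, f1), (d2, s2, f2) = dfa1, dfa2
        states1 = {s for (s, _) in d1} | set(d1.values())
        states2 = {s for (s, _) in d2} | set(d2.values())
        alphabet = {a for (_, a) in d1}
        delta = {((p, q), a): (d1[p, a], d2[q, a])
                 for p, q in product(states1, states2) for a in alphabet}
        finals = {(p, q) for p, q in product(states1, states2)
                  if combine(p in f1, q in f2)}
        return delta, (s1, s2), finals

    # Over {a}: an even number of a's, and a length divisible by 3.
    dfa_even = ({(0, "a"): 1, (1, "a"): 0}, 0, {0})
    dfa_mod3 = ({(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}, 0, {0})

    _, _, f_union = product_dfa(dfa_even, dfa_mod3, lambda x, y: x or y)
    _, _, f_inter = product_dfa(dfa_even, dfa_mod3, lambda x, y: x and y)
    print(f_inter <= f_union)  # True: same structure, fewer final states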
You might be able to take the Myhill-Nerode theorem and use it to define a notion of "close-relatedness" that does allow you to compare languages and determine which will have the larger minimal DFA. I am not sure about that, however, since an easy way of doing so would let you compare a language against parameterized reference languages and thereby, for instance, easily prove any non-regular language non-regular; that would be a big deal in mathematics in its own right (either it's impossible in general, or being able to do it would have consequences on the order of settling P vs. NP).

Related

Using string of set length with pumping lemma to prove irregularity

There is this proof I thought of, and I am not quite sure whether it is valid.
Suppose you had to prove the nonregularity of the following language:
A = { 0^n 1^n 2^n | n>= 0 }
The proof I devised picks a string that belongs to the language, such as 012, and shows that no matter how it is divided, the pumping lemma is not wholly satisfied (I could post the entire proof, but the post is verbose as it is). According to my professor, however, this proof cannot be accepted.
He did not explain why, and I don't see how such a proof would be insufficient to demonstrate that a language is not regular. If a string clearly belonging to an assumed regular language does not satisfy the pumping lemma, the language clearly has strings that cannot be pumped in its set of strings, and therefore the language is not regular.
I believe the reason my professor rejected this proof is that in the majority of problems the pumping length p cannot be correctly guessed. At the same time, I do not see how my proof could be proven wrong with a counterexample.
You can only choose p (the pumping length) to be a specific number if the language is regular and p actually exists. The very fact that you pick an exact number presumes that p exists, which is the thing actually to be proven.
Suppose that p exists. Let's choose a word that is long enough: w = 0^{p}1^{p}2^{p}. According to the pumping lemma there must exist a decomposition w = xyz with |xy| ≤ p and |y| ≥ 1 such that xy^{i}z is in the language A for every i ≥ 0. Since |xy| ≤ p and w begins with 0^{p}, both x and y consist only of 0s; in particular y = 0^{k} for some k ≥ 1. But then for any i ≠ 1 the string xy^{i}z contains p + (i-1)k zeros yet only p ones and p twos, so it is not in the language A. No decomposition satisfies the lemma; the language is thus not regular, and p does not exist.
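As a concrete sanity check, this little Python script (my own addition; p is fixed to a sample value, whereas the proof quantifies over all p) enumerates every decomposition w = xyz with |xy| ≤ p and |y| ≥ 1 and confirms that pumping with i = 2 always leaves the language:

    # Verify, for one sample p, that no decomposition of w = 0^p 1^p 2^p pumps.
    def in_A(s):
        n = len(s) // 3
        return s == "0" * n + "1" * n + "2" * n

    p = 4
    w = "0" * p + "1" * p + "2" * p
    for j in range(1, p + 1):               # j = |xy|, at most p
        for k in range(1, j + 1):           # k = |y|, at least 1
            x, y, z = w[:j - k], w[j - k:j], w[j:]
            assert not in_A(x + 2 * y + z)  # i = 2 falls outside A
    print("no decomposition of w can be pumped")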
If p existed, then a finite state automaton could be constructed for this language. But no such automaton exists, because it would need memory to remember the number of 0s in order to later match the same number of 1s and 2s. If n were bounded, then you could construct a (probably large) automaton, but with n unbounded no finite automaton can be constructed.
This language is not even context-free, because no push-down automaton can be constructed for it. It is context-sensitive.

How many languages does a DFA recognize?

According to Sipser's "Introduction to the Theory of Computation": "If A is the set of all strings that machine M accepts, we say that A is the language of machine M and write L(M) = A. We say that M recognizes A ... A machine may accept several strings, but it always recognizes only one language." and also "We say that M recognizes language A if A = {w | M accepts w}."
I guess the question has already been answered, but I would like to know if anyone has any thoughts about it: is there anything interesting we can say about the subsets of a regular language, can we say that the original DFA recognizes them, and is there any interesting relationship between the original DFA and the ones that recognize the smaller languages?
If the language recognized by a DFA (of which there is always exactly one) is finite, then there are finitely many sublanguages of that language (indeed, if the language accepted consists of N strings, there are 2^N sublanguages).
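As a toy check of the count (my own snippet, using itertools to enumerate subsets of a 2-string language):

    from itertools import chain, combinations

    # The sublanguages of {a, ab} are exactly its 2**2 = 4 subsets.
    language = ["a", "ab"]
    subsets = list(chain.from_iterable(
        combinations(language, r) for r in range(len(language) + 1)))
    print(len(subsets))  # 4: {}, {a}, {ab}, {a, ab}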
There is no useful relationship which can be easily inferred from the sub/super language relationship w.r.t. where in the Chomsky hierarchy the language falls. That is: a sublanguage of a regular language may be undecidable, and a sublanguage of an undecidable language may be regular, with all possible variations in between.
Because of this, there is no particularly neat relationship to be worked out among DFAs of sub/super languages: not all of the sublanguages will even be regular; some sublanguages will have simpler DFAs than the DFA of the super language, and some will have more complicated DFAs than the DFA of the super language. Some will have the same DFA but a different set of accepting states.
Given a DFA, there is only one language corresponding to the machine. A language is a set, that is, the collection of all the strings accepted by the DFA.

Finiteness of Regular Language

We all know that (a + b)* is a regular language over the symbols a and b.
But (a + b)* is a string of infinite length, and it is regular as we can build a finite automaton, so it should be finite.
Can anyone please explain this?
A finite automaton can be constructed for any regular language, and a regular language can be a finite or an infinite set. Of course, there are infinite sets that are not regular sets.
Notes:
1. Every finite set is a regular set.
2. Any DFA for an infinite set must contain a loop (a DFA without a loop is not possible for an infinite set).
3. Every non-regular language is an infinite set.
The word "finite" in finite automata significance the presence of 'finite amount of memory' in automata for the class of regular languages, hence only 'finite' (or says bounded) amount of information can be stored at any instance of time while processing a string of language.
In finite automata, memory is present in the form of states only (whereas other classes of automata, like PDAs and Turing machines, use external memory to store unbounded information). You can think of a finite automaton as a CPU without explicit memory, one that can only store recent results in its registers.
So we can define a "regular language" as: a class of languages for which only a bounded (finite) amount of information needs to be stored at any instant while processing the language's strings.
Further reading (for infinite languages):
What a regular language is: "What is basically a regular language? And why is a*b* regular, but the language { a^n b^n | n > 0 } is not?"
To understand how states are used as memory elements, read this answer: "How to write regular expression for a DFA"
And on the difference between automata for finite and infinite regular languages: "To make sure: Pumping lemma for infinite regular languages only?"
Each word in the language (a + b)* is of finite length, in the same way as there are infinitely many integers, but each of them is finite.
Yes, the language itself is an infinite set; most languages are. But a finite automaton (NB: "automata" is the plural) works just fine for them, provided each word is of finite length.
As an aside: This type of question probably should go to cs.stackexchange.com.
But (a + b)* is a string of infinite length
No, (a + b)* is a finite way to express an infinite set (language) of finite strings.
1. A regular expression describes the strings of some language. Applying that regular expression gives you all the strings that the language contains.
2. When you convert that regular expression to a finite automaton (an automaton with finitely many states), it means that those same strings can also be generated by traversing from state to state on that automaton. Now, intuitively, each state here represents a group of strings belonging to that language. It says that, after having "absorbed" some input, the string is now in state X.
Example:
If you want an FA to accept strings with an even number of 0s, then you'll have one state (group) which indicates that an even number of 0s has been observed in the input so far, and another state (group) for odd numbers; this second state would be your non-accepting state in the FA.
As shown here, you needed just 2 (finitely many) states to accept an infinite number of strings, because of the grouping into odd and even that we did.
And that is why it is regular.
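A minimal sketch of that two-state machine in Python (the state names and the accepts helper are mine):

    # Two-state DFA over {0, 1} accepting strings with an even number of 0s.
    EVEN, ODD = "even", "odd"
    delta = {
        (EVEN, "0"): ODD, (EVEN, "1"): EVEN,
        (ODD, "0"): EVEN, (ODD, "1"): ODD,
    }

    def accepts(s):
        state = EVEN                  # zero 0s seen so far: an even count
        for ch in s:
            state = delta[state, ch]
        return state == EVEN          # accept iff the count of 0s is even

    print(accepts("0101"))  # True: two 0s
    print(accepts("000"))   # False: three 0s

Two states suffice, yet infinitely many strings are accepted.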
It just means there exists a finite regular expression for the specified language; it says nothing about the number of strings generated from the expression.
For many regular languages we can generate an infinite number of strings that belong to the language, but to prove that the language is regular we only need a regular expression, which must be finite.
So here the expression (a+b)* is a finite way of expressing any number of a's or b's or combinations of the two; the number of repetitions is unbounded, which results in an infinite number of strings.

Consequences of inability to add natural numbers in C

In System F I can define the genuine total addition function using Church numerals.
In Haskell I cannot define that function because of the bottom value. For example, in Haskell, if x + y = x, I cannot conclude that y is zero: if x is bottom, then x + y = x for any y. So addition is not true addition but an approximation to it.
In C I cannot define that function because the C specification requires everything to have finite size. So in C the possible approximations are even worse than in Haskell.
So we have:
In System F it's possible to define addition, but it's not possible to have a complete implementation (because there is no infinite hardware).
In Haskell it's not possible to define addition (because of bottom), and it's not possible to have a complete implementation.
In C it's not possible to define the total addition function (because the semantics of everything is bounded), but compliant implementations are possible.
So all 3 formal systems (Haskell, System F and C) seem to have different design tradeoffs.
So what are the consequences of choosing one over another?
Haskell
This is a strange problem because you're working with a vague notion of =. _|_ = _|_ only "holds" (and even then you should really use ⊑) at the level of the domain semantics. If we distinguish between information available at the domain-semantic level and equality in the language itself, then it's perfectly correct to say that True ⊑ x + y == x --> True ⊑ y == 0.
It's not addition that's the problem, and it's not natural numbers that are the problem either -- the issue is simply distinguishing between equality in the language and statements about equality or information in the semantics of the language. Absent the issue of bottoms, we can typically reason about Haskell using naive equational logic. With bottoms, we can still use equational reasoning -- we just have to be more sophisticated with our equations.
A fuller and clearer exposition of the relationship between total languages and the partial languages defined by lifting them is given in the excellent paper "Fast and Loose Reasoning is Morally Correct".
C
You claim that C requires everything (including addressable space) to have a finite size, and therefore that C semantics "impose a limit" on the size of representable naturals. Not really. The C99 standard says the following: "Any pointer type may be converted to an integer type. Except as previously specified, the result is implementation-defined. If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type." The rationale document further emphasizes that "C has now been implemented on a wide range of architectures. While some of these architectures feature uniform pointers which are the size of some integer type, maximally portable code cannot assume any necessary correspondence between different pointer types and the integer types. On some implementations, pointers can even be wider than any integer type."
As you can see, there's explicitly no assumption that pointers must be of a finite size.
You have a set of theories as frameworks to do your reasoning with; finite reality, Haskell semantics, and System F are just a few of them.
You can choose an appropriate theory for your work, or build a new theory from scratch or from big pieces of existing theories gathered together. For example, you can consider the set of always-terminating Haskell programs and employ bottomless semantics safely. In that case your addition will be correct.
For a low-level language there may be reasons to build finiteness in, but for a high-level language it is worth omitting such things, because more abstract theories allow wider application.
While programming, you use not the "language specification" theory but the "language specification + implementation limitations" theory, so there is no difference between the cases where memory limits are present in the language specification and where they are present in the language implementation. The absence of limits becomes important when you start building purely theoretical constructions in the framework of the language's semantics. For example, you may want to prove some program equivalences or language translations, and find that every unneeded detail in the language specification brings much pain to the proof.
I'm sure you've heard the aphorism that "in theory there is no difference between theory and practice, but in practice there is."
In this case, in theory there are differences, but all of these systems deal with the same finite amount of addressable memory so in practice there is no difference.
EDIT:
Assuming you can represent a natural number in any of these systems, you can represent addition in any of them. If the constraints you are concerned about prevent you from representing a natural number then you can't represent Nat*Nat addition.
Represent a natural number as a pair: a heuristic lower bound on the maximum bit size, and a lazily evaluated list of bits.
In the lambda calculus, you can represent the list as a function that, called with true, returns the 1's bit, and, called with false, returns a function that does the same for the 2's bit, and so on.
Addition is then an operation applied to the zip of those two lazy lists that propagates a carry bit.
You of course have to represent the maximum-bit-size heuristic as a natural number itself, but if you only instantiate numbers whose bit count is strictly smaller than the number being represented, and your operators don't break that invariant, then the bit size is inductively a smaller problem than the numbers you want to manipulate, so operations terminate.
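Here is one way the lazy-list part might look, sketched with Python generators (my own illustration; it implements only the carry-propagating zip described above, not the bit-size heuristic):

    from itertools import zip_longest

    def bits(n):
        """Yield the bits of n, least significant first."""
        while n:
            yield n & 1
            n >>= 1

    def add(xs, ys):
        """Lazily add two little-endian bit streams, propagating a carry."""
        carry = 0
        for a, b in zip_longest(xs, ys, fillvalue=0):
            total = a + b + carry
            yield total & 1
            carry = total >> 1
        if carry:
            yield carry

    def to_int(stream):
        return sum(bit << i for i, bit in enumerate(stream))

    print(to_int(add(bits(13), bits(29))))  # 42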
On the ease of accounting for edge cases, C will give you very little help. You can return special values to represent overflow/underflow, and even try to make them infectious (like IEEE-754 NaN), but you won't get complaints at compile time if you fail to check for them. You could try installing a handler for SIGFPE or a similar signal to trap problems.
I cannot say that y is zero - if x is bottom, x + y = x for any y.
If you're looking to do symbolic manipulation, Matlab and Mathematica are implemented in C and C-like languages. That said, Python has a well-optimized bigint implementation that is used for all integer types. It's probably not suitable for representing really, really large numbers, though.

A Question About the Expressive Power of Higher-Order Logical Reasoning Formalisms

I do not really know if this has been rigorously established, but I've read in a book (a relatively modern AI book by Peter Norvig) that second-order logic programming can be more expressive than existing first-order languages.
The question is: is it formally proven that higher-order predicate logics exceed first-order predicate logic in their expressive power? Or do they just bring modularity/convenience/maintainability to your knowledge bases?
Additionally: if there is some kind of firm direction in which I could go seeking more expressive power (I mean exactly the descriptive potential of the symbols I write in a given semantics/syntax), then I would be glad to hear just about anything :)
Thank you.
Second-order logic is more powerful and expressive than first-order logic. Second-order logic allows one to quantify over relations in addition to variables; thus it is possible, using a single sentence of second-order logic, to express something that would require an infinite number of first-order sentences. The relationship is similar to that between FOL and propositional logic.
As an example, consider the SOL statement:
\forall R \exists x \exists y (x R y)
This states that for any relation R there are x and y such that x R y holds. In order to express this in FOL, one would need a separate statement for each relation R in the language, and there could be infinitely many of these.
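Written out side by side (a sketch, assuming a signature with relation symbols R_1, R_2, ...), the single second-order sentence corresponds to an infinite first-order schema:

    % one second-order sentence:
    \forall R\, \exists x\, \exists y\; R(x, y)
    % versus one first-order sentence per relation symbol R_i:
    \exists x\, \exists y\; R_1(x, y), \qquad
    \exists x\, \exists y\; R_2(x, y), \qquad \dots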
For a more interesting example, one could look at the proof that the transitive closure of a relation is not expressible in FOL. I can post it if you want to see it; but for the sake of succinctness I will omit it unless someone wants it.
Edit: You may also be interested in Descriptive Complexity. Essentially, it ties together the notions of complexity and expressibility: if you can fully state a problem in a certain fragment of logic, then you know it is contained within the corresponding complexity class. For example, if a problem can be stated in existential second-order logic, then it's in NP; if it can be stated in first-order logic plus a least fixed point operator, then it's in P. If you could show that every statement of existential second-order logic can be translated to FO(LFP), you would have proven P=NP (well, NP ⊆ P, but since the other containment is already known, you would have proven equality).
You may want to look into dependent type theories.
