how do I show a regular language is equal to another? - regular-language

So this is related to regular languages, their quotients, and closure properties. I am supposed to first answer whether the claim is true or false and then explain why. I don't know how to answer the following question.
1.)
Which of the following identities are true?
(L/a)a = L (the left side represents the concatenation of the languages L/a and {a}).

If they are not equal, this is most easily shown by a counterexample. Equality is usually shown by proving the inclusion in both directions.
Here, take for example L = {b}. Then L/a = ∅ and thus (L/a)a = ∅, which is not equal to L.
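To make the counterexample concrete, here is a minimal Haskell sketch for finite languages represented as lists of words (the names quotientA and concatA are made up for this illustration):
-- Right quotient of a finite language by the single symbol a:
-- L/a = { w | w ++ [a] is in L }
quotientA :: [String] -> Char -> [String]
quotientA l a = [ init w | w <- l, not (null w), last w == a ]

-- Concatenation of a language with the singleton language {a}
concatA :: [String] -> Char -> [String]
concatA l a = [ w ++ [a] | w <- l ]

main :: IO ()
main = do
  let l = ["b"]                          -- the counterexample L = {b}
  print (quotientA l 'a')                -- []: L/a is the empty language
  print (concatA (quotientA l 'a') 'a')  -- []: (L/a)a is empty, not L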

Related

Haskell what does the ' symbol do?

As the title states, I see pieces of code online where variables/functions have ' next to them. What does this do/mean?
ex:
function :: [a] -> [a]
function ...
function' :: ....
The notation comes from mathematics. It is read x prime. In pretty much any math textbook you can find something like let x be a number and x' be the projection of ... (math stuff).
Why not use another convention? Well, in mathematics it makes a lot of sense because it can be very pedagogical... In programming we aren't used to this convention, so I don't see the point of using it, but I am not against it either.
Just to give you an example of its use in mathematics so you can understand why it is used in Haskell: below, the same triangle concept, once using the prime convention and once without it. In the first picture it is pretty clear that the pairs (A, A'), (B, B'), ... are related, one being a vertex and the primed version being the midpoint of the opposite edge. In the second example, you just have to remember that A is the midpoint of the edge opposite vertex P. The first is easier and more pedagogical.
As the other answers said, function' is just another variable name. So,
don'tUse :: Int -> IO ()
don'tUse won'tBe''used'' = return ()
is just like
dontUse :: Int -> IO ()
dontUse wontBeUsed = return ()
with slightly different names. The only requirement is that the name starts with a lowercase letter or underscore; after that you can have as many single-quote characters as you want.
Prelude> let _' = 1
Prelude> let _'' = 2
Prelude> let _''''''''' = 9
Prelude> _' + _'' * _'''''''''
19
...Of course it's not necessarily a good idea to name variables like that; normally such prime-names are used when making a slightly different version of an already-named thing. For example, foldl and foldl' are functions with the same signature that do essentially the same thing, only with different strictness (which often affects performance, memory usage, and whether infinite inputs are allowed, but not the actual results).
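For example, here is a small sketch of the difference (the exact memory behaviour depends on optimisation settings):
import Data.List (foldl')

main :: IO ()
main = do
  -- foldl' forces each intermediate sum, so this runs in constant space:
  print (foldl' (+) 0 [1 .. 1000000 :: Int])
  -- foldl builds a chain of a million unevaluated thunks first; the
  -- result is the same, but on larger inputs (or without optimisation)
  -- it can exhaust memory:
  print (foldl (+) 0 [1 .. 1000000 :: Int])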
That said, to the question
Haskell what does the ' symbol do?
– the ' symbol does in fact do various other things as well, but only when it does not appear as a non-leading character in a name.
'a' is a character literal.
'Foo is a constructor used on the type level. See DataKinds.
'bar and ''Baz are quoted names. See TemplateHaskell.
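A small sketch illustrating the first two uses (the Flag type is made up for the example):
{-# LANGUAGE DataKinds #-}
import Data.Proxy (Proxy (..))

data Flag = On | Off

-- 'a' here is a character literal, nothing to do with primes in names:
someChar :: Char
someChar = 'a'

-- With DataKinds, 'On names the promoted constructor at the type level:
onProxy :: Proxy 'On
onProxy = Proxy

main :: IO ()
main = print someChar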

Alloy - Dealing with unbounded universal quantifiers

Good afternoon,
I've been experiencing an issue with Alloy when dealing with unbounded universal quantifiers. As explained in Daniel Jackson's book 'Software Abstractions' (Section 5.3, 'Unbounded Universal Quantifiers'), Alloy has a subtle limitation regarding universal quantifiers and assertion checking: it produces spurious counterexamples in some cases, such as the following one from the book, which checks that sets are closed under union:
sig Set {
  elements: set Element
}
sig Element {}

assert Closed {
  all s0, s1: Set | some s2: Set |
    s2.elements = s0.elements + s1.elements
}
check Closed for 3
Producing a counterexample such as:
Set = {(S0),(S1)}
Element = {(E0),(E1)}
s0 = {(S0)}
s1 = {(S1)}
elements = {(S0,E0), (S1,E1)}
where the analyser didn't populate Set with enough values (a missing Set atom, S2, containing the union of S0 and S1).
The book then suggests two solutions to this general problem:
1) Declaring a generator axiom to force Alloy to generate all possible instances.
For example:
fact SetGenerator {
  some s: Set | no s.elements
  all s: Set, e: Element |
    some s': Set | s'.elements = s.elements + e
}
This solution, however, produces a scope explosion and may also lead to inconsistencies.
2) Omitting the generator axiom and using the bounded-universal form for constraints. That is, the quantified variable's bounding expression must not mention the names of generated signatures. However, not every assertion can be expressed in such a form.
My question is: is there any specific rule to choose any of these solutions? It isn't clear to me from the book.
Thanks.
No, there's no specific rule (or at least none that I've come up with). In practice, this doesn't arise very often, so I would deal with each case as it comes up. Do you have a particular example in mind?
Also, bear in mind that sometimes you can formulate your problem with a higher-order quantifier (i.e. a quantifier over a set or relation), and in that case you can use Alloy*, an extension of Alloy that supports higher-order analysis.

Why must equations of a function in Haskell have the same number of arguments? [duplicate]

I noticed today that such a definition
safeDivide x 0 = x
safeDivide = (/)
is not possible. I am just curious what the (good) reason behind this is. There must be a very good one (it's Haskell after all :)).
Note: I am not looking for suggestions for alternative implementations of the code above; it's a simple example to demonstrate my point.
I think it's mainly for consistency, so that all clauses can be read in the same manner, so to speak; i.e. every RHS is at the same position in the type of the function. Allowing this would also mask quite a few silly errors, I think.
There's also a slight semantic quirk: say the compiler padded out such clauses to have the same number of patterns as the other clauses; i.e. your example would become
safeDivide x 0 = x
safeDivide x y = (/) x y
Now consider if the second line had instead been safeDivide = undefined; in the absence of the previous clause, safeDivide would be ⊥, but thanks to the eta-expansion performed here, it's \x y -> if y == 0 then x else ⊥ — so safeDivide = undefined does not actually define safeDivide to be ⊥! This seems confusing enough to justify banning such clauses, IMO.
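A quick way to observe this (a sketch, with undefined standing in for the padded-out clause):
safeDivide :: Double -> Double -> Double
safeDivide x 0 = x
safeDivide _ _ = undefined  -- plays the role of the padded second clause

main :: IO ()
main = print (safeDivide 5 0)  -- prints 5.0: the function as a whole is
                               -- not bottom, even though one clause is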
The meaning of a function with multiple clauses is defined by the Haskell standard (section 4.4.3.1) via translation to a lambda and case statement:
fn pat1a pat1b = r1
fn pat2a pat2b = r2
becomes
fn = \a b -> case (a,b) of
  (pat1a, pat1b) -> r1
  (pat2a, pat2b) -> r2
This is so that the function definition/case statement way of doing things is nice and consistent, and the meaning of each isn't specified redundantly and confusingly.
This translation only really makes sense when each clause has the same number of arguments. Of course, there could be extra rules to fix that, but they'd complicate the translation for little gain, since you probably wouldn't want to define things like that anyway, for your readers' sake.
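For instance, these two definitions behave identically (a hand-written instance of the translation; GHC's actual desugaring is an implementation detail):
-- Multi-clause form
both :: Bool -> Bool -> Bool
both True True = True
both _    _    = False

-- Translated form, as in the report
both2 :: Bool -> Bool -> Bool
both2 = \a b -> case (a, b) of
  (True, True) -> True
  (_, _)       -> False

main :: IO ()
main = print (and [ both x y == both2 x y
                  | x <- [False, True], y <- [False, True] ])  -- True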
Haskell does it this way because its predecessors (like LML and Miranda) did. There is no technical reason it has to be like this; equations with fewer arguments could be eta-expanded. But a different number of arguments in different equations is more likely a typo than intentional, so here we ban something sensible but rare to get better error reporting in the common case.

All possible combinations for evaluation of conditions

Example :
if (A & B)
{
    if (C)
    {
    }
    if (D)
    {
    }
}
We have four different states for all the conditions in this code.
0 represents a false and 1 a true evaluation of the condition.
* shows that the condition is not evaluated in this state flow.
So in this case, all the possible states are listed below.
A B C D
0 * * *
1 0 * *
1 1 1 0
1 1 0 1
Explanation:
In the first state (0 * * *), the condition A is false, so B plays no role: after evaluating A itself, the if condition already fails. Therefore the conditions C and D are not evaluated either.
Likewise for the three other possible states.
But are there any already-implemented algorithms by which I can find all these states for a particular input? This turns into a hugely complex problem when we try to handle more deeply nested code.
I think it's very difficult to code an application that gives such a result.
If anyone knows of an existing implementation that may help me, please let me know.
I'm sorry to be the bearer of bad news, but this algorithm is impossible, for two very famous reasons.
The Halting Problem
To solve this problem in a Turing-complete language you would need to solve the Halting Problem. If your example program looked like this:
if (A & B & maybeAnInfiniteLoop())
{
    if (C)
    {
    }
    if (D)
    {
    }
}
Then we would have no theoretical way of knowing whether the function maybeAnInfiniteLoop terminates or not, and thus whether C and D matter at all, or whether the only valid states of the booleans are 00*, 10*, or 01*, since 11* would never finish and C and D would never be reached.
NP-Complete
Now let's suppose you're capable of reducing your problem to plain boolean expressions. In a subset of your language where you only have IF, AND, OR, NOT, and booleans, the language is no longer Turing-complete; it is what is called strongly normalizing. The language of boolean expressions is an example of one such useful language.
However, even if we can guarantee that the program halts, an algorithm to decide all meaningful states of the booleans in that language is an NP-complete problem. It is, in fact, among the most famous: it's called the Boolean Satisfiability Problem. Notice that in your example you say that C and D are meaningless when either A or B is false. This is because you know that the only pair of values for A and B that satisfies the expression "A&B" is (1,1). You can see this because it's a very simple expression, but an algorithm to solve satisfiability in the general case might not finish in your lifetime for some quite reasonable inputs.
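For tiny inputs you can of course brute-force it. Here is a Haskell sketch that enumerates the satisfying assignments of A & B; it is exponential in the number of variables, which is exactly the problem described above:
import Control.Monad (replicateM)

-- All assignments over n booleans that satisfy the given expression.
satisfying :: Int -> ([Bool] -> Bool) -> [[Bool]]
satisfying n expr = filter expr (replicateM n [False, True])

main :: IO ()
main = print (satisfying 2 andExpr)  -- [[True,True]]: the only state
  where                              -- in which C and D are reached
    andExpr [a, b] = a && b
    andExpr _      = False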
Is there hope?
The question of whether P = NP does not have a known answer; in fact, it is possibly the most important open question in mathematics today. If P = NP then you're in luck, but I wouldn't get your hopes up. The smart money is on P not being equal to NP.

Haskell and Lambda-Calculus: Implementing Alpha-Congruence (Alpha-Equivalence)

I am implementing an impure untyped lambda-calculus interpreter in Haskell.
I'm presently stuck on implementing "alpha-congruence" (also called "alpha-equivalence" or "alpha-equality" in some textbooks). I want to be able to check whether two lambda-expressions are equal or not equal to each other. For example, if I enter the following expression into the interpreter it should yield True (\ is used to indicate the lambda symbol):
>\x.x == \y.y
True
The problem is understanding whether the following lambda-expressions are considered alpha-equivalent or not:
>\x.xy == \y.yx
???
>\x.yxy == \z.wzw
???
In the case of \x.xy == \y.yx I would guess that the answer is True. This is because \x.xy => \z.zy and \y.yx => \z.zy and the right-hand sides of both are equal (where the symbol => is used to denote alpha-reduction).
In the case of \x.yxy == \z.wzw I would likewise guess that the answer is True. This is because \x.yxy => \a.yay and \z.wzw => \a.waw, which (I think) are equal.
The trouble is that all of my textbooks' definitions state that only the names of the bound variables need to be changed for two lambda-expressions to be considered equal. They say nothing about the free variables in an expression needing to be renamed uniformly as well. So even though y and w are both in their correct places in the lambda-expressions, how would the program "know" that the first y represents the first w and the second y represents the second w? I would need to be consistent about this in an implementation.
In short, how would I go about implementing an error-free version of a function isAlphaCongruent? What are the exact rules that I need to follow in order for this to work?
I would prefer to do this without using de Bruijn indices.
You are misunderstanding: different free variables are not alpha equivalent. So y /= x, and \w.wy /= \w.wx, and \x.xy /= \y.yx. Similarly, \x.yxy /= \z.wzw because y /= w.
Your book says nothing about free variables being allowed to be uniformly renamed because they are not allowed to be uniformly renamed.
(Think of it this way: if I haven't yet told you the definition of not and id, would you expect \x. not x and \x. id x to be equivalent? I sure hope not!)
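For what it's worth, here is a minimal sketch of such an isAlphaCongruent check without de Bruijn indices, carrying a correspondence between the bound variables of the two terms (the Term type and names are illustrative):
import Data.Tuple (swap)

data Term = Var String | App Term Term | Lam String Term

isAlphaCongruent :: Term -> Term -> Bool
isAlphaCongruent = go []
  where
    -- env pairs each bound variable of the left term with its
    -- counterpart in the right term, innermost binding first.
    go env (Var x) (Var y) =
      case (lookup x env, lookup y (map swap env)) of
        (Just y', Just x') -> y' == y && x' == x  -- bound by matching binders
        (Nothing, Nothing) -> x == y              -- both free: names must agree
        _                  -> False               -- one bound, one free
    go env (App f a) (App g b) = go env f g && go env a b
    go env (Lam x s) (Lam y t) = go ((x, y) : env) s t
    go _ _ _ = False

main :: IO ()
main = do
  print (isAlphaCongruent (Lam "x" (Var "x")) (Lam "y" (Var "y")))  -- True
  print (isAlphaCongruent (Lam "x" (App (Var "x") (Var "y")))
                          (Lam "y" (App (Var "y") (Var "x"))))      -- False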
