Is this a context-free or context-sensitive language? - context-free-language

I am studying Formal Languages and Automata Theory, and I have a question about a problem in the book that is not answered there. The question is:
Is this language context-free, regular, or context-sensitive?
L = { a^i b^j c^k | (i <= j or j <= i), j = k }

It's context-free. Note that the condition "i <= j or j <= i" is always true, so the only effective constraint is j = k, i.e. L = { a^i b^j c^j }. It can be generated by the following CFG:
S -> AX
A -> aA
A -> epsilon
X -> bXc
X -> epsilon
The nonterminal A generates as many a's as you need. X generates b's and c's in equal quantity. Therefore this CFG generates exactly the language L.
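As a quick sanity check (my own sketch, not from the book), membership in L comes down to counting b's and c's:
-- Accepts exactly the strings of shape a^i b^j c^j, which is all that
-- "(i <= j or j <= i) and j = k" really demands.
inL :: String -> Bool
inL w =
  let rest     = dropWhile (== 'a') w   -- skip the a-block; any i is fine
      (bs, cs) = span (== 'b') rest
  in  all (== 'c') cs && length bs == length cs
-- inL "aabbcc" == True, inL "bbcc" == True, inL "abbc" == False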

It's context-sensitive.
Not regular: we have to remember the number of occurrences of b or c, which a finite-state machine can't do.
Not context-free: applying the pumping lemma to a string like a^2 b^n c^n and pumping within the b's leaves more b's than c's.
So it is context-sensitive.

Related

How to derive the `one` multiplicity constraint using the Alloy Kernel language?

I was reading Appendix C: Kernel Semantics of the Software Abstractions book (by Daniel Jackson, second edition, very nice read btw!) and found myself a bit stuck in understanding how to derive the one multiplicity constraint using the other kernel constructs.
I understand that no can be derived using expr = none, and some can be derived using the negation of the previous rule, but I don't understand how to express the one (and thus lone) constraint using only the kernel constructs (or derivations).
I am probably missing something obvious but I don't see it :)
Here is how I'd express one expr
//there is some expr
not (expr = none)
//and all expr should be one and the same because there's only one expr.
all x1,x2: expr | x1=x2
You can define one like this:
one e iff
not all x: e | not x = x // e is non-empty
and
all x: e | all x': e | x = x' // e has no more than one member
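For intuition only, here is a rough Haskell rendering of those two kernel conditions over a set represented as a list (my own sketch, not Alloy syntax):
-- "one e": e is non-empty, and any two of its members are equal.
oneOf :: Eq a => [a] -> Bool
oneOf e = not (null e)                        -- not all x: e | not x = x
       && and [ x == x' | x <- e, x' <- e ]   -- all x: e | all x': e | x = x'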
Note that the kernel language is not sufficient to express higher order quantifications (which are supported by Alloy* but not very effectively by Alloy itself). So the quantifier gives us the notion of a singleton.
Daniel

Under what circumstances could Common Subexpression Elimination affect the laziness of a Haskell program?

From wiki.haskell.org:
First of all, common subexpression elimination (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:
foo x = (bar x) * (bar x)
might be transformed into
foo x = let x' = bar x in x' * x'
thus, the bar function is only called once. (And if bar is a particularly expensive function, this might save quite a lot of work.)
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program. So GHC does do CSE, but only in specific circumstances --- see the GHC manual. (Section??)
Long story short: "If you care about CSE, do it by hand."
I'm wondering under what circumstances CSE "affects" the strictness/laziness of the program and what kind of effect that could be.
The naive CSE rule would be
e'[e, e] ~> let x = e in e'[x, x].
That is, whenever a subexpression e occurs twice in the expression e', we use a let-binding to compute e once. This, however, lends itself to some trivial space leaks. For example
sum [1..n] + prod [1..n]
is typically O(1) space in a lazy functional programming language like Haskell (sum and prod each consume their list incrementally, so the list cells can be garbage-collected as soon as they are produced), but becomes O(n) when the naive CSE rule is applied: the shared list must be kept alive until both consumers have finished. This can be terrible when n is large!
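To make the leak concrete, here is roughly what the naive rule would produce (a Haskell sketch, using product for prod; the constant-space claim for the first version assumes the consumers run in constant space, as described above):
-- Before CSE: each consumer builds and discards its own list lazily,
-- so nothing forces the whole list to be in memory at once.
before :: Integer -> Integer
before n = sum [1..n] + product [1..n]
-- After naive CSE: the shared list xs is retained by the pending
-- (product xs) thunk while sum traverses it, so all n cells stay live.
after :: Integer -> Integer
after n = let xs = [1..n] in sum xs + product xs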
The approach is then to make this rule more specific, restricting it to a small set of cases that we know won't have the problem. We can begin by more specifically enumerating the problems with the naive rule, which will form a set of priorities for us to develop a better CSE:
The two occurrences of e might be far apart in e', leading to a long lifetime for the let x = e binding.
The let-binding must always allocate a closure where previously there might not have been one.
This can create an unbounded number of closures.
There are cases where the closure might never be deallocated.
Something better
let x = e in e'[e] ~> let x = e in e'[x]
This is a more conservative rule but is much safer. Here we recognize that e appears twice but the first occurrence syntactically dominates the second expression, meaning here that the programmer has already introduced a let-binding. We can safely just reuse that let-binding and replace the second occurrence of e with x. No new closures are allocated.
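In Haskell-source terms (my own illustration, with a hypothetical expensive function):
expensive :: Int -> Int   -- hypothetical, just to have something costly
expensive n = n * n
-- The programmer already introduced the let-binding, so the second
-- occurrence of (expensive x) can simply reuse it; no new closure.
beforeCSE :: Int -> Int
beforeCSE x = let y = expensive x in y + expensive x
afterCSE :: Int -> Int
afterCSE x = let y = expensive x in y + y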
Another example of syntactic domination:
case e of { x -> e'[e] } ~> case e of { x -> e'[x] }
And yet another:
case e of {
  Constructor x0 x1 ... xn ->
    e'[e]
}
~>
case e of {
  Constructor x0 x1 ... xn ->
    e'[Constructor x0 x1 ... xn]
}
These rules all take advantage of existing structure in the program to ensure that the kinetics of space usage remain the same before and after the transformation. They are much more conservative than the original CSE but they are also much safer.
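As a rough Haskell-level illustration of that last rule (my own example, with a hypothetical lookupItem function): once the scrutinee has been matched, its second occurrence can be rebuilt from the matched constructor and its fields instead of being recomputed.
lookupItem :: Int -> Maybe Int   -- hypothetical, for illustration only
lookupItem n = if n > 0 then Just (n * n) else Nothing
-- before: the scrutinee expression occurs again inside the branch
twice :: Int -> (Maybe Int, Int)
twice n = case lookupItem n of
  Just x  -> (lookupItem n, x)
  Nothing -> (Nothing, 0)
-- after: rebuild (Just x) from the fields; lookupItem runs only once
once :: Int -> (Maybe Int, Int)
once n = case lookupItem n of
  Just x  -> (Just x, x)
  Nothing -> (Nothing, 0)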
See also
For a full discussion of CSE in a lazy FPL, read Chitil's (very accessible) 1997 paper. For a full treatment of how CSE works in a production compiler, see GHC's CSE.hs module, which is documented very thoroughly thanks to GHC's tradition of writing long footnotes. The comment-to-code ratio in that module is off the charts. Also note how old that file is (1993)!

Unexpected results in playing with relations

/*
sig a {
}
sig b {
}
*/
pred rel_test(r : univ -> univ) {
# r = 1
}
run {
some r : univ -> univ {
rel_test [r]
}
} for 2
Running this small test, $r contains one element in every generated instance. When sig a and sig b are uncommented, however, in the first instance generated $r has 9 tuples, and still the predicate which asks for a one-tuple relation succeeds. Where am I wrong?
An auxiliary question: are these two declarations equivalent?
pred rel_test(r : univ -> univ)
pred rel_test(r : set univ -> univ)
The problem is that with the Forbid Overflow option set to No, the integer semantics in Alloy is wraparound, and with the default scope of 3 (bits) the representable integers are -4..3, so indeed 9 = 1, as you can confirm in the evaluator.
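For a quick check outside Alloy, here is the 3-bit two's-complement wraparound spelled out (a Haskell sketch, not Alloy itself):
-- With 3 bits the representable values are -4..3; anything else wraps.
wrap3 :: Int -> Int
wrap3 n = ((n + 4) `mod` 8) - 4
-- wrap3 9 == 1, so a relation with 9 tuples does satisfy "# r = 1"
-- under wraparound semantics.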
With the signatures a and b commented out, the biggest relation that can be generated with scope 2 has 4 tuples (since the max size of univ is 2), so the problem does not occur.
It also does not occur in the latest build because I believe it comes with the Forbid Overflow option set to Yes by default, and with that option the semantics of integers rules out instances where overflows occur, precisely the case when you compute the size of the relation with 9 tuples. More details about this alternative integer semantics can be found in the paper "Preventing arithmetic overflows in Alloy" by Aleksandar Milicevic and Daniel Jackson.
On the main question: what version of Alloy are you using? I'm unable to replicate the behavior you describe (using Alloy 4.2 of 22 Feb 2015 on OS X 10.6.8).
On the auxiliary question: it appears so. (The language reference is not quite as explicit as one might wish, but it begins one part of its discussion of multiplicities with "If the right-hand expression denotes a unary relation ..." and (in what I take to be the context so defined) "the default multiplicity is one"; the conditional would make no sense if the default multiplicity were always one.)
On the other hand, the same interpretive logic would lead to the conclusion that the language reference believes that unary multiplicity keywords are only allowed before expressions denoting unary relations (which would appear to make r: set univ -> univ ungrammatical). But Alloy accepts the expression and parses it as set (univ -> univ). (The alternative parse, (set univ) -> univ, would be very hard to assign a meaning to.)

G-machine, (non-)strict contexts - why case expressions need special treatment

I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from [1]:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
        <1> -> 1;
        <2> -> 2) Pack{1,0} ;
main = f 3
== transformed into ==>
t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
        <1> -> 1;
        <2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm, that is, the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was to just implement compilation of ECase expressions in the C scheme. And I did it by mimicking the rule for compilation in the E scheme (page 134 in [1], but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF) and C scheme doesn't guarantee that, as opposed to E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong, and I can see that from the outputs. But I'm having trouble stating formal arguments as to why that approach was bound to fail.
Can someone provide me with such argument/proof or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but just delay everything until an EVAL happens (which might or might not happen). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
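A small Haskell analogue of that point (my own example, not from the tutorial): in a non-strict context the case expression may never be demanded at all, so evaluating it eagerly changes the meaning of the program.
-- length never inspects the list element, so the case is never evaluated
-- and f True == 1. A compiler that evaluated the case while merely
-- building the list (a non-strict context) would hit the error instead.
f :: Bool -> Int
f b = length [ case b of
                 True  -> error "boom"
                 False -> 0 ]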
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)

design NFA which accepts specific length of strings

I'm trying to design an FA which accepts certain strings over A and B.
First, a string in which the number of A's is five times higher than the number of B's.
I mean L = { w ∈ {A,B}* : (nA(w) - nB(w)) mod 5 = 0 }
And also an FA which accepts different numbers of each character in a string:
L = { A^n B^m C^k | n, k > 0 and m > 3 }
I designed some FAs, but they did not work on these complicated strings.
Any help on how I should design these?
Unfortunately, your questions are confusing, as the English text doesn't agree with the mathematical formulas. I will try to answer these four questions, then:
A language which consists of strings over {a,b} in which the number of a's (#a(w))
is five times the number of b's (#b(w)),
L = { w ∈ {a,b}* : #a(w) = 5·#b(w) }
This cannot be done by an NFA. The proof is simple, using the pumping lemma (P.L.) with the string a^{5p} b^p, where p is the constant of the P.L.
For the language L = { w ∈ {A,B}* : (nA(w) - nB(w)) mod 5 = 0 } that you actually wrote,
you can do it with a DFA that consists of a cycle of 5 states.
The transitions: if you read an a, go clockwise; if you read a b, go counter-clockwise. Pick any one state to be the initial state, and that same state is also the final state.
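A tiny simulation of that 5-state cycle (my own sketch, writing the alphabet as lowercase a/b): the state is (#a - #b) mod 5, and we accept at state 0.
step :: Int -> Char -> Int
step s 'a' = (s + 1) `mod` 5   -- read a: go clockwise
step s 'b' = (s - 1) `mod` 5   -- read b: go counter-clockwise
step _ _   = error "alphabet is {a,b}"

accepts :: String -> Bool
accepts = (== 0) . foldl step 0
-- accepts "ab" == True, accepts "aaab" == False, accepts "aaaaaab" == True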
For the language L = { A^n B^m C^k | n, k > 0 and m > 3 }, it should be easy to find an FA
if you read L as A(A)* B^4(B)* C(C)*
For the language that accepts different numbers of each character in the string (let's say over {a,b}): the language should be R = { w ∈ {a,b}* : #a(w) ≠ #b(w) }.
This language again cannot be recognized by an NFA. If this language were regular (recognized by an NFA), then so would be this language:
L = a*b* ∩ (complement of R), since regular languages are closed under complement and intersection. The language L is { a^n b^n : n a non-negative integer }.
The language L is the first example in most books when they speak about languages that are non-regular.
Hopefully, you will find this answer helpful.
