I am trying to implement the closure of a DFA. I have successfully implemented Union, Complement, Intersection, Subtraction, and Concatenation of DFAs without using an NFA. Our teacher did not tell us the algorithm to find the closure. I tried to do it by concatenating a DFA to itself, but quite obviously that did not work.
I just need the steps. By the way, I am representing the DFA as a matrix. Alongside, can you please elaborate on the Kleene closure? Though I am sure I can do that once I know how to get the closure.
I am not sure what you mean by closure in general. But for the Kleene closure you can proceed like this: from every final state, add an epsilon-transition (reading nothing) to the start state. So after reading one word of the language, you can start with another one. (To accept the empty word as well, the usual construction also adds a fresh accepting start state with an epsilon-transition to the old start state.)
Of course, the resulting automaton is no longer deterministic, but there are standard procedures for determinizing it again (the subset construction).
For a direct construction: look at all the transitions that enter a final state, for example (q, a) -> f. From the originating state q, add another transition reading the same symbol and going to the start state: (q, a) -> s. So the automaton has the possibility to finish reading a word and start again right away. Note that this, too, makes the automaton nondeterministic (q now has two transitions on a), so a determinization step is still needed, and the empty word again needs the special handling described above.
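Since you represent the DFA as a matrix, here is a compact sketch of the whole pipeline in Haskell. The names (DFA, trans, star, ...) are mine, not a standard API, and the fresh accepting start state implements the empty-word fix mentioned above:

import qualified Data.Map as M
import qualified Data.Set as S

-- Hypothetical matrix encoding, mirroring the question: states are 0..n-1,
-- symbols are 0..k-1, and trans !! q !! a is the successor of q on symbol a.
data DFA = DFA { trans :: [[Int]], start :: Int, finals :: S.Set Int }

-- Kleene star along the lines described above: a fresh accepting start state
-- s' = n copies the old start state's moves, every move that enters a final
-- state gets a twin move back to s', and the subset construction turns the
-- resulting NFA back into a DFA whose states are sets of old states.
star :: DFA -> (M.Map (S.Set Int) [S.Set Int], S.Set Int, [S.Set Int])
star (DFA tr s fs) = (table, s0, accepting)
  where
    n = length tr                      -- index of the fresh start state s'
    k = length (head tr)
    moves q a
      | q == n    = moves s a          -- s' behaves like the old start state
      | otherwise = let q' = tr !! q !! a
                    in if q' `S.member` fs then [q', n] else [q']
    s0 = S.singleton n
    step qs a = S.fromList (concatMap (`moves` a) (S.toList qs))
    table = explore M.empty [s0]
    explore acc []        = acc
    explore acc (qs:rest)
      | qs `M.member` acc = explore acc rest
      | otherwise         = let succs = map (step qs) [0 .. k - 1]
                            in explore (M.insert qs succs acc) (succs ++ rest)
    accepting = [ qs | qs <- M.keys table
                     , not (S.null (qs `S.intersection` S.insert n fs)) ]

Renumbering the reachable subsets 0, 1, 2, ... turns the table back into the same matrix representation you already use, and you can run a minimization pass afterwards if you need the minimal DFA.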
I'm following "Implementing functional languages: a tutorial" by SPJ, and I'm stuck on Exercise 2.18 (page 70), reproduced below. This is in the chapter about a template-instantiation evaluator for the simple lazy functional language described in the book (similar to a mini Miranda/Haskell):
Exercise 2.18. Why is it hard to introduce case expressions into the template instantiation machine?
(Hint: think about what instantiate would do with a case expression.)
The tutorial then goes on to cover an implementation of several less-general versions of destructuring structured data: an if primitive, a casePair primitive, and a caseList primitive. I haven't yet done the implementation for this section (Chapter 2 Mark 5), but I don't see why implementing these separately would be significantly easier than implementing a single case primitive.
The only plausible explanation I can offer is that the most generic case form is variadic in both the number of alternatives (number of tags to match against) and arity (number of arguments to the structured data). All of the above primitives are fixed-arity and have a known number of alternatives. I don't see why this would make implementation significantly more difficult, however.
The instantiation of the case statement is straightforward:
Instantiate the scrutinee.
Instantiate the body expression of each alternative. (This may be somewhat wasteful if we substitute in unevaluated branches.) (I notice now this may be a problem; I will post it in an answer.)
Encapsulate the result in a new node type, NCase, where:
data Node = NAp Addr Addr
          | ...
          | NCase [(Int, [Name], Addr)]
Operationally, the reduction of the case statement is straightforward.
Check if the argument is evaluated.
If not, make it the new stack and push the current stack to the dump. (Similar to evaluating the argument of any primitive.)
If the argument is evaluated, then search for an alternative with a matching tag.
If no alternative with a matching tag is found, then throw an error (inexhaustive case branches).
Instantiate the body of the matching alternative with the environment augmented with the structured data arguments. (E.g., in case Pack{0,2} 3 4 of <0> a b -> a + b, instantiate a + b with the environment [a <- 3, b <- 4].) A small sketch of this dispatch follows below.
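Here is the tag-dispatch part of that last step as a self-contained toy (Name and Addr follow the book's type synonyms; everything else is a stand-in for illustration):

type Name = String
type Addr = Int

-- Find the alternative whose tag matches the evaluated scrutinee
-- Pack{tag, n} a0 ... an, and pair its binders with the data's argument
-- addresses; no matching tag means inexhaustive branches.
matchAlt :: Int -> [Addr] -> [(Int, [Name], Addr)] -> ([(Name, Addr)], Addr)
matchAlt tag args alts =
  case [ (zip names args, body) | (t, names, body) <- alts, t == tag ] of
    (hit:_) -> hit
    []      -> error "inexhaustive case branches"

For the example above, matchAlt 0 [addr3, addr4] alts yields ([("a", addr3), ("b", addr4)], bodyAddr).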
A new node type would likely have to be introduced for case (NCase) containing the list of alternatives, but that's not too dissuading.
I found a GitHub repository, bollu/timi, which seems to implement a template-instantiation evaluator also following this tutorial. There is a section called "Lack of lambda and case", which attributes the lack of a generic case statement to the following reason:
Case requires us to have some notion of pattern matching / destructuring which is not present in this machine.
However, in this tutorial there is no notion of pattern-matching; we would simply be matching by tag number (an integer), so I'm not sure if this explanation is valid.
Aside, partly for myself: a very similar question was asked about special treatment for case statements in the next chapter of the tutorial (concerning G-machines rather than template-instantiation).
I think I figured it out while I was expanding on my reasoning in the question. I'll post here for posterity, but if someone has a more understandable explanation or is able to correct me I'll be happy to accept it.
The difficulty lies in the fact that the instantiate step performs all of the variable substitutions, and this happens separately from evaluation (the step function). The problem is as bollu says in the GitHub repository linked in the original question: it is not easy to destructure structured data at instantiation time. This makes it difficult to instantiate the bodies of all of the alternatives.
To illustrate this, consider the instantiation of let expressions. This works like so:
Instantiate each new binding expression.
Augment the current environment with the new bindings.
Instantiate the body with the augmented environment.
However, now consider the case of case expressions. What we want to do is:
Instantiate the scrutinee. (Which should eventually evaluate to the form Pack {m, n} a0 a1 ... an)
For each alternative (each of which has the form <m> b0 b1 ... bn -> body), augment the environment with the new bindings [b0 <- a0, b1 <- a1, ..., bn <- an], and then instantiate the body of the alternative.
The problem lies in between the two steps: calling instantiate on the scrutinee results in the instantiated Addr, but we don't readily have access to a0, a1, ..., an to augment the environment with at instantiation time. While this might be possible if the scrutinee were a literal Pack value, if it needed further evaluation (e.g., if it were the result of a call to a supercombinator) then we would need to evaluate it first.
To solidify my own understanding, I'd like to answer the additional question: How do the primitives if, casePair, and caseList avoid this problem?
if trivially avoids this problem because boolean values are nullary. casePair and caseList avoid this problem by deferring the variable bindings using thunk(s); the body expressions get instantiated once the thunk is called, which is after the scrutinee is evaluated.
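For concreteness, casePair's trick can be mimicked in ordinary Haskell, where application itself plays the role of the thunks (a toy model, not the book's actual code):

data Pair a b = MkPair a b   -- stands in for Pack{1,2} a b

-- The bindings never go through an environment; they are just the function's
-- arguments, so they are only "instantiated" when f demands them, after the
-- pair has been scrutinized.
casePair :: Pair a b -> (a -> b -> c) -> c
casePair (MkPair a b) f = f a b

-- e.g. casePair (MkPair 3 4) (\a b -> a + b) == 7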
Possible solutions:
I'm thinking that it might be possible to get around this if we define a destructuring primitive operator on structured data objects. I.e., (Pack {m, n} a0 a1 ... an).3 would evaluate to a3.
In this case, what we could do is call instantiate on the scrutinee, which would give us the address scrutAddr, and we could then augment the environment with the new bindings [b0 <- (NAp .0 scrutAddr), b1 <- (NAp .1 scrutAddr), ..., bn <- (NAp .n scrutAddr)].
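A self-contained sketch of that idea (the heap and node types are pared-down stand-ins for the book's, and Select is the invented destructuring primitive, not something from the tutorial):

import qualified Data.Map as M

type Name = String
type Addr = Int
type Heap = M.Map Addr Node

data Node = NAp Addr Addr | NData Int [Addr] | NPrim Prim
data Prim = Select Int   -- (Pack{t, n} a0 ... an) selected at i  ==>  ai

hAlloc :: Heap -> Node -> (Heap, Addr)
hAlloc h node = let a = M.size h in (M.insert a node h, a)

-- At instantiation time we cannot see the scrutinee's fields yet, but we can
-- bind each pattern variable bi to the unevaluated application (Select i)
-- scrut; the destructuring is deferred until bi is actually demanded.
bindFields :: Heap -> Addr -> [Name] -> (Heap, [(Name, Addr)])
bindFields heap scrut bs = go heap (zip [0 ..] bs)
  where
    go h [] = (h, [])
    go h ((i, b) : rest) =
      let (h1, sel) = hAlloc h  (NPrim (Select i))
          (h2, ap ) = hAlloc h1 (NAp sel scrut)
          (h3, env) = go h2 rest
      in  (h3, (b, ap) : env)

When Select i is finally applied to an evaluated NData node, it reduces to the i-th argument address; if the scrutinee is not yet in WHNF, the machine evaluates it first via the dump, exactly as for the other primitives.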
The issue seems to lie in the fact that instantiation (substitution) and evaluation are separated. If variables were not instantiated separately from evaluation but rather added to/looked up from the environment upon binding/usage, then this would not be a problem. This is as if we placed the bodies of the case statements into thunks to be instantiated after the scrutinee is evaluated, which is similar to what casePair and caseList do.
I haven't worked through either of these alternate solutions or how much extra work they would incur.
I know that you can call functions using their name as follows
f = x -> println(x)
y = :f
eval(:($y("hi")))
but this is slow since it uses eval. Is it possible to do this in a different way? I know it's easy to go the other direction by just doing symbol(f).
What are you trying to accomplish? Needing to eval a symbol sounds like a solution in search of a problem. In particular, you can just pass around the original function, thereby avoiding issues with needing to track the scope of f (or, since f is just an ordinary variable in your example, the possibility that it would get reassigned), and with fewer characters to type:
f = x -> println(x)
g = f
g("hi")
I know it's easy to go the other direction by just doing symbol(f).
This is misleading, since it's not actually going to give you back f (that transform would be non-unique). Instead, it gives you a Symbol made from the string representation of the function (which might happen to be f, sometimes); it is simply equivalent to calling Symbol(string(f)), a combination common enough to be useful for other purposes.
Actually, I have found a use for the above scenario. I am working on a simple form compiler allowing for the convenient definition of variational problems as encountered in e.g. finite element analysis.
I am relying on the Julia parser to do an initial analysis of the syntax. The equations entered are valid Julia syntax, but will trigger errors on execution because some of the symbols or methods are not available at the point of the problem definition.
So what I do is roughly this:
I have a type that can hold my problem description:
type Cmd f; a; b; end   # Julia 0.x syntax; in modern Julia this would be mutable struct
I have defined a macro so that I have access to the problem description AST. I traverse this expression and create a Cmd object from its elements (this is not completely unlike the strategy behind the @mat macro in MATLAB.jl):
macro m(xp)
    # for :(a^b), xp.args == [:^, :a, :b]; args[3] and args[2] are swapped
    # here, so Cmd stores a = :b and b = :a (hence the example below gives 3^2)
    c = Cmd(xp.args[1], xp.args[3], xp.args[2])
    :($c)
end
At a later step, I run the Cmd. Evaluation of the symbols happens only at this stage (yes, I need to be careful of the evaluation context):
function run(c::Cmd)
    xp = Expr(:call, c.f, c.a, c.b)
    eval(xp)
end
Usage example:
c = @m a^b
...
a, b = 2, 3
run(c)
which returns 9. So in short, the question is relevant in at least some meta-programming scenarios. In my case I have to admit I couldn't care less about performance as all of this is mere preprocessing and syntactic sugar.
Suppose you're building a fairly large simulation in Haskell. There are many different types of entities whose attributes update as the simulation progresses. Let's say, for the sake of example, that your entities are called Monkeys, Elephants, Bears, etc..
What is your preferred method for maintaining these entities' states?
The first and most obvious approach I thought of was this:
mainLoop :: [Monkey] -> [Elephant] -> [Bear] -> String
mainLoop monkeys elephants bears =
  let monkeys'   = updateMonkeys monkeys
      elephants' = updateElephants elephants
      bears'     = updateBears bears
  in
    if shouldExit monkeys elephants bears then "Done" else
      mainLoop monkeys' elephants' bears'
It's already ugly having each type of entity explicitly mentioned in the mainLoop function signature. You can imagine how it would get absolutely awful if you had, say, 20 types of entities. (20 is not unreasonable for complex simulations.) So I think this is an unacceptable approach. But its saving grace is that functions like updateMonkeys are very explicit in what they do: They take a list of Monkeys and return a new one.
So then the next thought would be to roll everything into one big data structure that holds all state, thus cleaning up the signature of mainLoop:
mainLoop :: GameState -> String
mainLoop gs0 =
  let gs1 = updateMonkeys gs0
      gs2 = updateElephants gs1
      gs3 = updateBears gs2
  in
    if shouldExit gs0 then "Done" else
      mainLoop gs3
Some would suggest that we wrap GameState up in a State Monad and call updateMonkeys etc. in a do. That's fine. Some would rather suggest we clean it up with function composition. Also fine, I think. (BTW, I'm a novice with Haskell, so maybe I'm wrong about some of this.)
But then the problem is, functions like updateMonkeys don't give you useful information from their type signature. You can't really be sure what they do. Sure, updateMonkeys is a descriptive name, but that's little consolation. When I pass in a god object and say "please update my global state," I feel like we're back in the imperative world. It feels like global variables by another name: You have a function that does something to the global state, you call it, and you hope for the best. (I suppose you still avoid some concurrency problems that would be present with global variables in an imperative program. But meh, concurrency isn't nearly the only thing wrong with global variables.)
A further problem is this: Suppose the objects need to interact. For example, we have a function like this:
stomp :: Elephant -> Monkey -> (Elephant, Monkey)
stomp elephant monkey =
  (elongateEvilGrin elephant, decrementHealth monkey)
Say this gets called in updateElephants, because that's where we check to see if any of the elephants are in stomping range of any monkeys. How do you elegantly propagate the changes to both the monkeys and elephants in this scenario? In our second example, updateElephants takes and returns a god object, so it could effect both changes. But this just muddies the waters further and reinforces my point: With the god object, you're effectively just mutating global variables. And if you're not using the god object, I'm not sure how you'd propagate those types of changes.
What to do? Surely many programs need to manage complex state, so I'm guessing there are some well-known approaches to this problem.
Just for the sake of comparison, here's how I might solve the problem in the OOP world. There would be Monkey, Elephant, etc. objects. I'd probably have class methods to do lookups in the set of all live animals. Maybe you could look up by location, by ID, whatever. Thanks to the data structures underlying the lookup functions, they'd stay allocated on the heap. (I'm assuming GC or reference counting.) Their member variables would get mutated all the time. Any method of any class would be able to mutate any live animal of any other class. E.g. an Elephant could have a stomp method that would decrement the health of a passed-in Monkey object, and there would be no need to pass that change back up the call stack.
Likewise, in an Erlang or other actor-oriented design, you could solve these problems fairly elegantly: Each actor maintains its own loop and thus its own state, so you never need a god object. And message passing allows one object's activities to trigger changes in other objects without passing a bunch of stuff all the way back up the call stack. Yet I have heard it said that actors in Haskell are frowned upon.
The answer is functional reactive programming (FRP). It is a hybrid of two coding styles: component state management and time-dependent values. Since FRP is actually a whole family of design patterns, I want to be more specific: I recommend Netwire.
The underlying idea is very simple: You write many small, self-contained components each with their own local state. This is practically equivalent to time-dependent values, because each time you query such a component you may get a different answer and cause a local state update. Then you combine those components to form your actual program.
While this sounds complicated and inefficient, it's actually just a very thin layer around regular functions. The design pattern implemented by Netwire is inspired by AFRP (Arrowized Functional Reactive Programming). It's probably different enough to deserve its own name (WFRP?). You may want to read the tutorial.
In any case a small demo follows. Your building blocks are wires:
myWire :: WireP A B
Think of this as a component. It is a time-varying value of type B that depends on a time-varying value of type A, for example a particle in a simulator:
particle :: WireP [Particle] Particle
It depends on a list of particles (for example all currently existing particles) and is itself a particle. Let's use a predefined wire (with a simplified type):
time :: WireP a Time
This is a time-varying value of type Time (= Double). Well, it's time itself (starting at 0 counted from whenever the wire network was started). Since it doesn't depend on another time-varying value you can feed it whatever you want, hence the polymorphic input type. There are also constant wires (time-varying values that don't change over time):
pure 15 :: Wire a Integer
-- or even:
15 :: Wire a Integer
To connect two wires you simply use categorical composition:
integral_ 3 . 15
This gives you a clock at 15x real time speed (the integral of 15 over time) starting at 3 (the integration constant). Thanks to various class instances wires are very handy to combine. You can use your regular operators as well as applicative style or arrow style. Want a clock that starts at 10 and is twice the real time speed?
10 + 2*time
Want a particle that starts at (0, 0) with (0, 0) velocity and accelerates with (2, 1) per second per second?
integral_ (0, 0) . integral_ (0, 0) . pure (2, 1)
Want to display statistics while the user presses the spacebar?
stats . keyDown Spacebar <|> "stats currently disabled"
This is just a small fraction of what Netwire can do for you.
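To see why this really is just a thin layer around regular functions, here is a dependency-free miniature of the wire idea. This is not Netwire's actual implementation or API, just the underlying concept of a time-varying value with hidden local state:

newtype Wire a b = Wire { stepWire :: Double -> a -> (b, Wire a b) }

-- elapsed time, kept in local state
time :: Wire a Double
time = go 0  where go t = Wire (\dt _ -> let t' = t + dt in (t', go t'))

-- Euler-style integral with integration constant c
integral :: Double -> Wire Double Double
integral c = Wire (\dt x -> let c' = c + x * dt in (c', integral c'))

-- a constant wire, like pure 15 above
constW :: b -> Wire a b
constW x = Wire (\_ _ -> (x, constW x))

-- composition, standing in for the categorical (.)
(>>>) :: Wire a b -> Wire b c -> Wire a c
w >>> v = Wire (\dt a -> let (b, w') = stepWire w dt a
                             (c, v') = stepWire v dt b
                         in (c, w' >>> v'))

-- drive a wire with a fixed time step
runWire :: Int -> Double -> Wire () b -> [b]
runWire 0 _  _ = []
runWire n dt w = let (b, w') = stepWire w dt () in b : runWire (n - 1) dt w'

For example, runWire 3 1.0 (constW 15 >>> integral 3) yields [18.0, 33.0, 48.0]: the clock at 15x speed starting at 3 from above, sampled once per second.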
I know this is an old topic, but I am facing the same problem right now while trying to implement the Rail Fence cipher exercise from exercism.io. It is quite disappointing to see such a common problem receive such poor attention in Haskell. I don't accept that to do something as simple as maintaining state I need to learn FRP. So I continued googling and found a solution that looks more straightforward: the State monad: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/State
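Applied to the simulation question above, the State-monad version looks roughly like this (a minimal sketch; GameState and its fields are hypothetical stand-ins for whatever your simulation needs):

import Control.Monad (replicateM_)
import Control.Monad.State  -- from the mtl package

data GameState = GameState { monkeys :: [Int], elephants :: [Int] }
  deriving Show

-- one simulation step: each update reads and writes the shared state,
-- but the threading of that state is handled by the monad
tick :: State GameState ()
tick = do
  modify (\gs -> gs { monkeys   = map (subtract 1) (monkeys gs) })  -- updateMonkeys
  modify (\gs -> gs { elephants = map (+ 1) (elephants gs) })       -- updateElephants

-- run ten steps from an initial state
demo :: GameState
demo = execState (replicateM_ 10 tick) (GameState [5, 5] [3])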
What is a direct and easy approach to drawing the minimal DFA that accepts the same language as a given regular expression (RE)?
I know it can be done by:
Regex ---to----► NFA ---to-----► DFA ---to-----► minimized DFA
But is there any shortcut way? Like for (a+b)*ab:
Regular Expression to DFA
There is NO algorithmic shortcut to draw a DFA from a regular expression (RE), but a shortcut technique is possible by analysis, not by derivation, and it can save you time in drawing a minimized DFA. Of course, the technique is something you can learn only by practice. I will take your example to show my approach:
(a + b)*ab
First, think about the language of the regular expression. If it is difficult to state the language description on a first attempt, then find the smallest possible string that can be generated in the language, then the second smallest, and so on.
Keep memorized solutions for some basic regular expressions. For example, I have written here some basic ideas for writing left-linear and right-linear grammars directly from a regular expression. Similarly, you can learn to construct minimized DFAs.
In the RE (a + b)*ab, the smallest possible string is ab, because using (a + b)* one can generate the NULL (^) string. The second smallest string can be either aab or bab. One thing we can easily notice about the language of this RE is that any string in it always ends with ab (suffix), whereas the prefix can be any possible string consisting of a and b, including ^.
Also, if the current symbol is a, then one possibility is that the next symbol is b and the string ends there. Thus the DFA requires a transition such that whenever a b comes right after an a, it can move to a final state.
Next, if another symbol arrives in the final state, then we should move to some non-final state, because a symbol after a b can occur only in the middle of a string of the language, as all strings of the language terminate with the suffix ab.
So with this knowledge, at this stage we can draw an incomplete transition diagram like the one below:
--►(Q0)---a---►(Q1)---b----►((Qf))
Now, at this point you need to understand that every state has some meaning, for example:
(Q0) means = Start state
(Q1) means = Last symbol was 'a', and with one more 'b' we can shift to a final state
(Qf) means = Last two symbols were 'ab'
Now think about what happens if a symbol a comes in the final state. Just move to state Q1, because that state means the last symbol was a. (Updated transition diagram:)
--►(Q0)---a---►(Q1)---b----►((Qf))
                ▲-----a--------|
But suppose that instead of an a, a symbol b comes in the final state. Then we should move from the final state to some non-final state. In the present transition graph, in this situation we should move from the final state Qf back to the initial state (as we again need an ab in the string for acceptance):
--►(Q0)---a---►(Q1)---b----►((Qf))
    ▲           ▲-----a--------|
    |-----------------b--------|
This graph is still incomplete! There is no outgoing edge for symbol a from Q1, and for symbol a on state Q1 a self-loop is required, because Q1 means the last symbol was an a:
                a-
                ||
                ▼|
--►(Q0)---a---►(Q1)---b----►((Qf))
    ▲           ▲-----a--------|
    |-----------------b--------|
Now I believe all possible outgoing edges are present from Q1 and Qf in the above graph. The one missing edge is an outgoing edge from Q0 for symbol b, and it must be a self-loop at state Q0, because we still need an ab sequence for the string to be accepted (from Q0, the shift to Qf is possible with ab):
    b-          a-
    ||          ||
    ▼|          ▼|
--►(Q0)---a---►(Q1)---b----►((Qf))
    ▲           ▲-----a--------|
    |-----------------b--------|
Now the DFA is complete!
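As a quick executable check of the result, here is the finished DFA transcribed directly from the diagram (Haskell purely for illustration; any language works the same way):

data Q = Q0 | Q1 | Qf deriving (Eq, Show)

-- transitions exactly as drawn above, over the alphabet {a, b}
delta :: Q -> Char -> Q
delta Q0 'a' = Q1
delta Q0  _  = Q0   -- self-loop on b
delta Q1 'a' = Q1   -- self-loop on a
delta Q1  _  = Qf
delta Qf 'a' = Q1
delta Qf  _  = Q0

accepts :: String -> Bool
accepts w = foldl delta Q0 w == Qf

-- accepts "ab" == True, accepts "bab" == True, accepts "ba" == False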
Of course, the method might look difficult for the first few tries, but if you learn to draw DFAs this way you will see your analytical skills improve, and you will find this method a quick and reliable way to draw a DFA.
* In the link I gave, I describe some more regular expressions; I would highly encourage you to learn them and to try to make DFAs for those regular expressions too.
Say you had two classes A and B. If the relationship between them is has-a,
i.e. A has-a B
how can you pass information from B into A? Say for example in B you work out a calculation and need the answer in A.
Is there any other way of doing this besides passing a pointer to class A into class B and calling a function which takes the answer as a parameter?
Hope this makes sense,
MD.
Sorry, I should have been more specific. Quoting from my comment below:
"well I gave a simple example. I am programming this in java and my class B runs a new thread and and will calculate the answer. Therefore I cannot just call the function from class A as I don't know when the calculation will be completed."
The method in A that needs the result of the computation should call into the method in B that does the computation.
This answer is so obvious that there may be something you're not telling us (?)
OK, so the question is really about threading. Yes, then passing a reference to the owner object and calling back into it may be a good idea. A better idea might be to return a future object that encapsulates the result of the computation.
If I understand correctly, you have this kind of relationship:
class Car {
    Engine engine;

    int test() {
        int fuelLevel = engine.getFuelLevel();
        // do sth with fuel level, store it, use it etc.
        return fuelLevel;
    }
}
This example shows how you can pass information between the two classes: for instance, as the result of a function. The Car object (your class A) calls a method on the Engine object (class B), and in this way it obtains the desired information. This can easily be translated to any kind of work that class B does.
There are basically two ways to manage "asynchronous" calls.
The first is using a callback, the second is polling.
Using a callback is what you describe: when B has finished, it needs to tell A somehow that it has finished. That can be done by "giving" the address of A to B, so that B knows whom to call, or by using an intermediate object C, which calls B synchronously and sends the result back to A. C then needs to know about A.
Polling is when A checks regularly whether B has finished. This solution is usually less satisfying intellectually and consumes more CPU. You are also not notified exactly when B finishes. (When B finishes, nothing happens; you'll have to wait for the next poll to be aware of it.) However, this way B doesn't need to know anything about A.
I would use the first pattern with an intermediate object (a dedicated class C), so that your model stays clean (B doesn't need to know about A or C). I suggest you also have a look at the Observer pattern.