It doesn't make sense to construct a language acceptor which is not able to accept any language. I am specifically talking about FAs which accept languages, not transducers or translators which translate languages.
People build them all the time. You have a set of states, each ultimately accessible from every other, and there is no final state, so the machine never accepts anything, though it might get stuck in a cycle. No issue with that at all.
Do a search on "busy beaver".
The mathematical model of a FSM, as described on the wikipedia page, notes that the set F of final states may be empty. While an empty set of final states isn't much use if the FSM is used as a recogniser, FSMs can also be used as transducers.
For example, a Mealy machine doesn't include a set of final states, since it is the output the machine produces as the input is processed that is of interest.
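As a minimal sketch of that idea in Haskell (the types and the edge-detector example are made up for illustration, not taken from any library), a Mealy machine needs only a transition function that produces an output at every step, and no final states at all:

-- A Mealy machine is just a transition function from (state, input)
-- to (new state, output). No set of final states is needed.
newtype Mealy s i o = Mealy { step :: s -> i -> (s, o) }

-- Run the machine over a list of inputs, collecting the outputs.
runMealy :: Mealy s i o -> s -> [i] -> [o]
runMealy _ _ []     = []
runMealy m s (x:xs) = let (s', y) = step m s x in y : runMealy m s' xs

-- Example: output True exactly when the input bit differs from the
-- previous one (an edge detector).
edges :: Mealy Bool Bool Bool
edges = Mealy (\prev b -> (b, b /= prev))

-- runMealy edges False [False, True, True] == [False, True, False]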
It is not just that it can exist; it is even necessary: how else would one accept the empty set, which is one of the regular languages? Unless you use an automaton whose final states are unreachable, which is much the same as not having a final state at all.
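To make that concrete, here is a throwaway sketch in Haskell of a DFA whose set of accepting states is empty, so it recognises exactly the empty language (the names are mine, not from any library):

-- A DFA: transition function, start state, and accepting states.
data DFA s a = DFA { delta :: s -> a -> s, start :: s, final :: [s] }

accepts :: Eq s => DFA s a -> [a] -> Bool
accepts d = (`elem` final d) . foldl (delta d) (start d)

-- One state, no accepting states: the automaton for the empty language.
emptyLang :: DFA () Char
emptyLang = DFA { delta = \_ _ -> (), start = (), final = [] }

-- accepts emptyLang w == False for every input w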
In UML 2.5.1, the initial pseudostate of a state machine is defined as follows:
An initial Pseudostate represents a starting point for a Region; that is, it is the point from which execution of its contained behavior commences when the Region is entered via default activation. It is the source for at most one Transition, which may have an associated effect Behavior, but not an associated trigger or guard. There can be at most one initial Vertex in a Region.
In other words, each Region of a UML state machine should almost always contain exactly one initial pseudostate, which should have exactly one outgoing transition.
However, can an initial pseudostate have incoming transitions as well? For example, consider a model where the initial pseudostate init leads to a state s1, and a transition leads from s1 back to init.
I cannot find anything forbidding it in the UML specification, yet I cannot find any online example where this case happens, so I was wondering whether I overlooked anything.
EDIT: To go into more detail, if we look into the OCL constraints stated in the specification, we can only find the following one that affects outgoing transitions (section 14.5.6.7):
inv: (kind = PseudostateKind::initial) implies (outgoing->size() <= 1)
but I cannot find any constraint regarding incoming transitions.
EDIT2: I have just realized that my model is wrong! Considering this sentence of the specification (cited above): "It is the source for at most one Transition, which may have an associated effect Behavior, but not an associated trigger or guard."
Therefore the transition between init and s1 should actually have zero triggers, instead of having e1 as a trigger.
Note that this does not invalidate the initial question.
I see nothing in the UML 2.5.1 Specification that prohibits a transition whose target is the initial pseudostate.
Such a transition would be meaningless at best and confusing at worst, which is likely why no examples are found.
On p. 423 UML 2.5:
15.7.18 InitialNode [Class]
15.7.18.4 Constraints
• no_incoming_edges
An InitialNode has no incoming ActivityEdges.
inv: incoming->isEmpty()
N.B. If you intend to have a self-transition for e1, then why not just use that? The initial pseudostate can anyway have only a single outgoing edge, namely to the first state (here s1).
No, this is not allowed. And why would one do that? As you already stated in the cited text, it can only have one outgoing edge, without any trigger or guard. So what is the added value, given that you cannot reuse anything?
I think the text is pretty clear as-is: "[An initial Pseudostate] is the point from which execution of its contained behavior commences when the Region is entered via default activation." If you connect a transition back around to the initial pseudostate, the initial pseudostate is no longer "the point from which execution of its contained behavior commences," it is something else, and is therefore undefined.
I'm writing a simple finite state machine and realized that there are situations where an event can take a state to more than one possible result. Basically, from state A, if event E happens, the next state could be either C or D.
I'm currently using the Javascript Finite State Machine code written here: https://github.com/jakesgordon/javascript-state-machine
From the documentation I don't see an obvious way to make this possible. What's more, I feel like this may actually be a flaw in my original design.
Essentially, in a finite state machine, should there be a situation where a transition happens and, based on some logic, results in one of multiple states (1 to many)? Or should it be that we check the logic first to see which transition needs to take place (1 to 1)?
Congratulations, you've just discovered non-deterministic finite state machines! The ideas are similar to those of a deterministic state machine, except that there may be multiple ways to transition from a state given the same input symbol. How this is actually resolved is unspecified (randomness, user input, branching out and running them all at once, etc.).
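To sketch the idea in Haskell (a toy "run every branch at once" interpreter, not any particular library; the states and the event are invented to match your example):

import qualified Data.Set as Set

-- A nondeterministic FSM: the transition function returns a *set* of
-- possible successor states rather than exactly one.
data NFA s a = NFA
  { deltaN  :: s -> a -> Set.Set s
  , startN  :: s
  , acceptN :: Set.Set s
  }

-- Run it by tracking every state the machine could currently be in.
acceptsN :: Ord s => NFA s a -> [a] -> Bool
acceptsN n = not . Set.null . Set.intersection (acceptN n)
           . foldl stepAll (Set.singleton (startN n))
  where
    stepAll states x = Set.unions [deltaN n s x | s <- Set.toList states]

-- From state "A", event 'E' may lead to "C" or "D".
example :: NFA String Char
example = NFA
  { deltaN  = \s x -> case (s, x) of
      ("A", 'E') -> Set.fromList ["C", "D"]
      _          -> Set.empty
  , startN  = "A"
  , acceptN = Set.fromList ["D"]
  }

-- acceptsN example "E" == True, because one branch reaches "D".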
For example, could I implement a rule that would change every string that followed the pattern '1..4' into the array [1,2,3,4]? In JavaScript:
// Here you'd create a rule that rewrites every string matching
// /^([0-9]+)\.\.([0-9]+)$/ into range($1, $2)
// ($1 and $2 being the regexp's capture groups).
var a = '1..4';
console.log(a);
// >> output: [1, 2, 3, 4]
Of course, I'm pretty confident that would be impossible in most languages. My question is: is there any language in which that would be possible? Or has anyone ever proposed something like that? Does this thing have a name I can google to read more about?
Modifying the language from within itself falls under the umbrella of reflection and metaprogramming. It is referred to as behavioral reflection. It differs from structural reflection, which operates at the level of the application (e.g. classes, methods) rather than the language level. Support for behavioral reflection varies greatly across languages.
We can broadly categorize language changes in two categories:
changes that modify the semantics (i.e. the rules) of the language itself (e.g. redefine the method lookup algorithm),
changes that modify the syntax (e.g. your syntax '1..4' to create arrays).
For case 1, certain languages expose the structure of the application (structural reflection) and the inner workings of their implementation (behavioral reflection) to the application itself via special objects, called meta-objects. Meta-objects are reifications of otherwise implicit aspects, which then become explicitly manipulable: the application can modify the meta-objects to redefine part of its structure, or part of the language. When it comes to language changes, the focus is usually on modifying message sending / method invocation, since it is the core mechanism of object-oriented languages. But the same idea could be applied to expose other aspects of the language, e.g. field accesses, synchronization primitives, foreach enumeration, etc., depending on the language.
For case 2, the program must be represented in a suitable data structure to be modified. For languages of the Lisp family, the program manipulates lists, and the program can itself be represented as lists. This is called homoiconicity and is handy for metaprogramming, hence the flexibility of Lisp-like languages. For other languages, the representation is usually an AST. Transforming the representation of the program, or rewriting it, is possible with macros, preprocessors, or hooks during compilation or class loading.
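As a toy illustration of case 2 (a made-up miniature AST sketched in Haskell, not any real compiler hook), here is a rewrite pass that turns every string literal of the form "a..b" into an explicit list, in the spirit of your '1..4' example:

-- A made-up miniature AST and a rewrite pass over it.
data Expr = Str String | List [Int] | App String [Expr]

rewrite :: Expr -> Expr
rewrite (App f args) = App f (map rewrite args)
rewrite (Str s)
  | (a, '.' : '.' : b) <- break (== '.') s
  , [(lo, "")] <- reads a
  , [(hi, "")] <- reads b
  = List [lo .. hi]        -- Str "1..4" becomes List [1,2,3,4]
rewrite e = e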
The line between 1 and 2 is, however, blurry. Syntactic changes can appear to modify the semantics of the language. For instance, I could rewrite all field accesses into proper getters and setters and perform additional logic there, say to implement transactional memory. Did I perform a semantic change of what a field access is, or merely a syntactic change?
Also, there are other constructs that fall between the lines. For instance, proxies and the #doesNotUnderstand trap are popular techniques to simulate the reification of message sends to some extent.
Lisp and Smalltalk have been very influential in the field of metaprogramming, and I think the two following projects/platforms are interesting to look at as representatives of each:
Racket, a Lisp-like language focused on growing languages from within the language
Helvetia, a Smalltalk extension to embed new languages into the host language by leveraging the AST of the host environment.
I hope you enjoyed this even if I did not really address your question ;)
Your desired change requires modifying the way literals are created. This is AFAIK not usually exposed to the application. The closest work I can think of is Virtual Values for Language Extension, which tackled JavaScript.
Yes. Common Lisp (and certain other Lisps) has "reader macros", which allow the user to reprogram (incrementally) the mapping between the input stream and the actual language constructs as parsed.
See http://dorophone.blogspot.com/2008/03/common-lisp-reader-macros-simple.html
If you want to operate on the level of objects, you will want to use a debugging/memory-management framework that keeps track of all objects and processes the rules on each evaluation step (nasty). This seems like the kind of thing you might be able to shoehorn into Smalltalk.
CLOS (Common Lisp Object System) allows redefinition of live objects.
Ultimately you need two things to implement this:
Access to the running system's AST (Abstract Syntax Tree), and
Access to the running system's objects.
You'll want to study meta-object protocols and the languages that use them, then the implementations of both the MOPs and the environment within which these programs are executed.
Image-based systems will be the easiest to modify (e.g., Lisp, potentially Smalltalk).
(Image-based systems store a snapshot of a running system, allowing complete shutdown and restarts, redefinitions, etc. of a complete environment, including existing objects, and their definitions.)
Ruby allows you to extend classes. For instance, this example adds functionality to the String class. But you can do more than add methods to classes. You can also overwrite methods, by defining a method that's already been defined. You may want to preserve access to the original method using alias_method.
Putting all this together, you can overload a constructor in Ruby, but in your case, there's a catch: It sounds like you want the constructor to return a different type. Constructors by definition return instances of their class. If you just want it to return the string "[1,2,3,4]", that's simple enough:
class String
  alias_method :old_constructor, :initialize

  def initialize(*args)
    old_constructor(*args)
    # code that applies your transformation
  end
end
But there's no way to make it return an Array if that's what you want.
I'm confronted with the task of implementing algorithms (mostly business-logic style) expressed as flowcharts. I'm aware that flowcharts are not the best representation of an algorithm due to their spaghetti-code property (would this be a use case for CPS?), but I'm stuck with the specification expressed as flowcharts.
Although I could transform the flowcharts into more appropriate equivalent representations before implementing them, that could make it harder to recognize the original flowchart in the resulting implementation, so I was hoping there is some way to directly represent flowchart algorithms as (maybe monadic) EDSLs in Haskell, so that the resemblance to the original flowchart specification would be (more) obvious.
One possible representation of flowcharts is a group of mutually tail-recursive functions, translating "go to step X" into "evaluate function X with state S". For improved readability, you can combine into a single function both the action (an external function that changes the state) and the chain of if/else or pattern matching that determines the next step.
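A minimal sketch of that in Haskell (the flowchart, its state, and all the names are invented for illustration): each box becomes a function that transforms the state and tail-calls the next box, so "go to step X" reads literally as a call.

data S = S { counter :: Int, finished :: Bool }

start, incr, test, finish :: S -> S
start  s = incr (s { counter = 0 })               -- box: "counter := 0"
incr   s = test (s { counter = counter s + 1 })   -- box: "counter := counter + 1"
test   s | counter s < 3 = incr s                 -- diamond: "counter < 3?"
         | otherwise     = finish s
finish s = s { finished = True }                  -- terminal box

-- start (S 0 False) yields counter = 3, finished = True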
This is assuming, of course, that your flowcharts are to be hardcoded (as opposed to loaded at runtime from an external source).
Sounds like Arrows would fit exactly what you describe. Either do a visualization of arrows (should be quite simple) or generate/transform arrow code from flow-graphs if you must.
Assuming there's "global" state within the flowchart, it makes sense to package that up into a state monad. At least then, unlike how you're doing it now, each call doesn't need any parameters, so each step can be read as: a) modify state, b) conditional on current state, jump.
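For instance, a throwaway sketch of the same flowchart with Control.Monad.State from the mtl package (step names invented), where the jump structure then reads directly:

import Control.Monad.State

type Flow = State Int

stepA, stepB :: Flow ()
stepA = do
  modify (+ 1)                     -- a) modify state
  stepB                            -- b) jump
stepB = do
  n <- get                         -- b) conditional on current state, jump
  if n < 3 then stepA else pure ()

-- execState stepA 0 == 3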
Many a time, I've come across statements of the form
X does/doesn't compose well.
I can remember a few instances that I've read recently:
Macros don't compose well (context: clojure)
Locks don't compose well (context: clojure)
Imperative programming doesn't compose well... etc.
I want to understand the implications of composability for designing/reading/writing code. Examples would be nice.
"Composing" functions basically just means sticking two or more functions together to make a big function that combines their functionality in a useful way. Essentially, you define a sequence of functions and pipe the results of each one into the next, finally giving the result of the whole process. Clojure provides the comp function to do this for you, you could do it by hand too.
Functions that you can chain with other functions in creative ways are more useful in general than functions that you can only call in certain conditions. For example, if we didn't have the last function and only had the traditional Lisp list functions, we could easily define last as (def last (comp first reverse)). Look at that — we didn't even need to defn or mention any arguments, because we're just piping the result of one function into another. This would not work if, for example, reverse took the imperative route of modifying the sequence in-place. Macros are problematic as well because you can't pass them to functions like comp or apply.
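The same trick in Haskell, where the (.) operator plays the role of comp:

-- Build `last` from existing functions without naming any arguments.
last' :: [a] -> a
last' = head . reverse

-- last' [1, 2, 3] == 3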
Composition in programming means assembling bigger pieces out of smaller ones.
Composition of unary functions creates a more complicated unary function by chaining simpler ones.
Composition of control flow constructs places control flow constructs inside other control flow constructs.
Composition of data structures combines multiple simpler data structures into a more complicated one.
Ideally, a composed unit works like a basic unit and you as a programmer do not need to be aware of the difference. If things fall short of the ideal, if something doesn't compose well, your composed program may not have the (intended) combined behavior of its individual pieces.
Suppose I have some simple C code.
void run_with_resource(void) {
    Resource *r = create_resource();
    do_some_work(r);
    destroy_resource(r);
}
C facilitates compositional reasoning about control flow at the level of functions. I don't have to care about what actually happens inside do_some_work(); I know just by looking at this small function that every time a resource is created on line 2 with create_resource(), it will eventually be destroyed on line 4 by destroy_resource().
Well, not quite. What if create_resource() acquires a lock and destroy_resource() frees it? Then I have to worry about whether do_some_work acquires the same lock, which would prevent the function from finishing. What if do_some_work() calls longjmp(), and skips the end of my function entirely? Until I know what goes on in do_some_work(), I won't be able to predict the control flow of my function. We no longer have compositionality: we can no longer decompose the program into parts, reason about the parts independently, and carry our conclusions back to the whole program. This makes designing and debugging much harder and it's why people care whether something composes well.
"Bang for the Buck" - composing well implies a high ratio of expressiveness per rule-of-composition. Each macro introduces its own rules of composition. Each custom data structure does the same. Functions, especially those using general data structures have far fewer rules.
Assignment and other side effects, especially with regard to concurrency, have even more rules.
Think about when you write functions or methods. You create a group of functionality to do a specific task. When working in an object-oriented language, you cluster your behavior around the actions you think a distinct entity in the system will perform. Functional programs break away from this by encouraging authors to group functionality according to an abstraction. For example, the Clojure Ring library comprises a group of abstractions that cover routing in web applications.
Ring is composable in that functions that describe paths in the system (routes) can be grouped into higher-order functions (middleware). In fact, Clojure is so dynamic that it is possible (and you are encouraged) to come up with patterns of routes that can be created dynamically at runtime. This is the essence of composability: instead of coming up with patterns that solve a certain problem, you focus on patterns that generate solutions to a certain class of problems. Builders and code generators are just two of the common patterns used in functional programming. Functional programming is the art of patterns that generate other patterns (and so on and so on).
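A sketch of that middleware idea in Haskell (every type and handler here is made up; real Ring middleware is Clojure, this just shows the shape): a handler is a function, and middleware is a function from handler to handler, so the pieces compose with ordinary function composition.

type Request  = String
type Response = String
type Handler  = Request -> Response

-- Middleware wraps a handler and returns a new handler.
logging, auth :: Handler -> Handler
logging h req = h req ++ " [logged]"
auth    h req = if req == "secret" then h req else "denied"

app :: Handler
app = (logging . auth) (\req -> "hello " ++ req)

-- app "secret" == "hello secret [logged]"
-- app "other"  == "denied [logged]"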
The idea is to solve a problem at its most basic level then come up with patterns or groups of the lowest level functions that solve the problem. Once you start to see patterns in the lowest level you've discovered composition. As folks discover second order patterns in groups of functions they may start to see a third level. And so on...
Composition (in the context you describe, at a functional level) is typically the ability to feed one function into another cleanly and without intermediate processing. An example of composition is std::cout in C++:
cout << each << item << links << on;
That is a simple example of composition which doesn't really "look" like composition.
Another example with a form more visibly compositional:
foo(bar(baz()));
Composition is useful for readability and compactness; however, chaining large collections of interlocking functions which can potentially return error codes or junk data can be hazardous (this is why it is best to minimize error-code or null return values).
Provided your functions use exceptions, or alternatively return null objects, you can minimize the need for branching (if) on errors and maximize the compositional potential of your code at no extra risk.
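For example, in Haskell the same point can be made with Maybe standing in for "exception or null object" (the three steps here are invented): the failure branch is handled once by the composition operator instead of by an if at every call site.

import Control.Monad ((>=>))

-- Three steps that can each fail.
parseAge :: String -> Maybe Int
parseAge s = case reads s of
  [(n, "")] -> Just n
  _         -> Nothing

checkAdult :: Int -> Maybe Int
checkAdult n = if n >= 18 then Just n else Nothing

describe :: Int -> Maybe String
describe n = Just ("age " ++ show n)

-- Kleisli composition chains them like ordinary functions; any Nothing
-- short-circuits the whole pipeline.
pipeline :: String -> Maybe String
pipeline = parseAge >=> checkAdult >=> describe

-- pipeline "42"   == Just "age 42"
-- pipeline "nope" == Nothing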
Object composition (vs inheritance) is a separate issue (and not what you are asking, but it shares the name). It is one of containment to derive object hierarchy as opposed to direct inheritance.
Within the context of Clojure, this comment addresses certain aspects of composability. In general, composability seems to emerge when units of the system do one thing well, do not require other units to understand their internals, eschew side effects, and accept and return the system's pervasive data structures. All of the above can be seen in M2tM's C++ example.
Composability, applied to functions, means that the functions are small and well-defined, and thus easy to integrate into other functions (I have seen this idea in the book "The Joy of Clojure").
The concept can apply to other things that are supposed to be composed into something else.
The purpose of composability is reuse. For example, a well-built (composable) function is easier to reuse.
Macros aren't that composable because you can't pass them as parameters.
Locks are crap because you can't really give them names (define them well) or reuse them; you just do them in place.
Imperative languages aren't that composable because (some of them, at least) don't have closures. If you want functionality passed as a parameter, you're screwed: you have to build an object and pass that. Disclaimer here: I'm not entirely convinced this last idea is true, so research more before taking it for granted.
Another idea on imperative languages is that they don't compose well because they imply state (from the Wikipedia knowledge base :) "Imperative programming - describes computation in terms of statements that change a program state").
State does not compose well because although you have given a specific "something" as input, that "something" generates an output according to its state. Different internal state, different behaviour. And thus you can say good-bye to what you were expecting to happen.
With state, you depend too much on knowing what the current state of an object is... if you want to predict its behavior. More stuff to keep in the back of your mind, less composable (remember well-defined? Or "small and simple", as in "easy to use"?)
PS: Thinking of learning Clojure, huh? Investigating...? Good for you! :P