Creating a specific Turing Machine - state-machine

I am having some trouble with an exercise that asks me to draw a Turing machine which decides the language L2 = {w ∈ {0,1}∗ | w contains an even number of 1's}.
Does anybody have a solution to compare with what I have?
Thanks

You can design a Turing machine along these lines: it sweeps right over the input, toggling between an "even" state and an "odd" state each time it reads a 1, and it accepts when it reaches the blank at the end of the input while in the "even" state.
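Here is a rough Haskell sketch of that machine (the names, such as acceptsEvenOnes, are mine; since the machine never writes and only moves right, it is effectively a DFA, and a full Turing-machine version just adds an accept state reached on the blank):

data State = QEven | QOdd deriving (Eq, Show)

-- One step of the head: given the current state and the symbol read
-- ('0' or '1'), return the next state; reading a 1 toggles the parity.
step :: State -> Char -> State
step QEven '1' = QOdd
step QOdd  '1' = QEven
step s     _   = s

-- Sweep over the whole input and accept iff the machine ends in QEven.
acceptsEvenOnes :: String -> Bool
acceptsEvenOnes w = foldl step QEven w == QEven

main :: IO ()
main = mapM_ (print . acceptsEvenOnes) ["", "0110", "101", "111"]
-- prints True, True, True, False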


How many languages does a DFA recognize?

According to Sipser's Introduction to the Theory of Computation: "If A is the set of all strings that machine M accepts, we say that A is the language of machine M and write L(M) = A. We say that M recognizes A ... A machine may accept several strings, but it always recognizes only one language," and also "We say that M recognizes language A if A = {w | M accepts w}."
I guess the question has already been answered, but I would like to know if anyone has any thoughts about it: is there anything interesting we can say about the subsets of a regular language? Can we say that the original DFA recognizes them, and is there any interesting relationship between the original DFA and the ones that recognize the smaller languages?
If the language recognized by a DFA (of which there is always exactly one) is finite, then there are finitely many sublanguages of that language (indeed, if the language accepted consists of N strings, there are 2^N sublanguages).
There is no useful relationship which can be easily inferred from the sub/super language relationship w.r.t. where in the Chomsky hierarchy the language falls. That is: a sublanguage of a regular language may be undecidable, and a sublanguage of an undecidable language may be regular, with all possible variations in between.
Because of this, there is no particularly neat relationship to be worked out among DFAs of sub/super languages: not all of the sublanguages will even be regular; some sublanguages will have simpler DFAs than the DFA of the super language, and some will have more complicated DFAs than the DFA of the super language. Some will have the same DFA but a different set of accepting states.
Given a DFA, there is only one language corresponding to the machine. A language is a set, that is, the collection of all the strings accepted by the DFA.
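To make "exactly one language" concrete, here is a small Haskell sketch (invented names, purely illustrative): a DFA is a start state, a transition function, and an acceptance predicate, and that data determines its language completely; reusing the transition structure with a smaller set of accepting states gives a machine recognizing a sublanguage, as mentioned above.

import Data.List (foldl')

-- A DFA over Char with states of type s.
data DFA s = DFA { start :: s, delta :: s -> Char -> s, accepting :: s -> Bool }

-- The one language a DFA recognizes, as a membership predicate.
inLanguage :: DFA s -> String -> Bool
inLanguage m w = accepting m (foldl' (delta m) (start m) w)

-- A machine whose state is just the last symbol read; accepting every
-- state, it recognizes all strings.
allStrings :: DFA Char
allStrings = DFA { start = 'e', delta = \_ c -> c, accepting = const True }

-- Same transitions, smaller accepting set: this one recognizes the
-- sublanguage of strings ending in '0'.
endsInZero :: DFA Char
endsInZero = allStrings { accepting = (== '0') }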

How do constraints defined in Alloy result in better software?

Alloy newbie here.
I really like creating constraints in Alloy and having the analyzer check that the model is valid per the constraints.
But I ask myself, “Are these constraint definitions mere mental gymnastics, or will they help build better software?”
Let’s take a specific example. In a model of an email client’s address book one might define this constraint for adding a new entry into the address book:
The address mapping in the new book b' is equal to the address mapping in the old book b, with the addition of a link from the name to the address. This constraint is expressed in Alloy as:
b'.addr = b.addr + n->a
That is beautiful.
But I am struggling to see its relevance when the add operation is implemented in code. For instance, I implemented the add operation in Common Lisp:
(defun add (b n a)
  "Add a new entry, (n a), to address book b"
  (adjoin (list n a) b :key #'first))
In words: “Here is a definition of a function named add which has three parameters: b, n, and a (book, name, address). Use the Common Lisp set function, adjoin, to add the list (n a) to b.”
That function seems to correctly implement a simple add function.
What do I do with the constraint that I defined in Alloy? Should I write additional code that expresses the constraint? In pseudo-code: [1]
(defun add (b n a)
  "Add a new entry, (n a), to address book b"
  (adjoin (list n a) b :key #'first)
  ;; ...check that nothing has changed in the
  ;; address book except that it now has
  ;; an n->a mapping...
  )
Writing such code seems like a lot of work, with no apparent benefit.
So my question is this: When implementing an Alloy model in code, what should I do with the constraints that are defined in Alloy?
Also, is there a tutorial which describes how to convert Alloy models to code, including a description of how Alloy constraint definitions are used in code?
Thank you.
[1] Note: I realize that there is a language extension called Screamer for constraint programming in Common Lisp
I believe the question stems from a minor misunderstanding of the goals and capabilities of modeling languages like Alloy, and it touches on some fundamental aspects of formal methods, verification, and software modeling. Good resources for the background that would make things clearer include books like The Calculus of Computation, as well as the Alloy book you've mentioned.
Constraints are first-class citizens of the Alloy modeling language. The idea is that constraints reflect the semantics of the code one wants to model (and check properties of). Thus, from one perspective, an Alloy program represents the code itself, and there is no need to write additional code (e.g. in a functional language) or to mix declared constraints with code. However, Alloy programs cannot be directly executed; rather, they can only be used to produce models that satisfy such programs (i.e. sets of constraints).
More specifically, the Alloy constraint b'.addr = b.addr + n->a might, under an appropriate interpretation, indeed capture the behavior of the add operation in Lisp, so that checking properties that involve the constraint corresponds to checking whether those properties hold for the given operation in Lisp. This is the standard (and, arguably, intended) use of Alloy for modeling and verification of software. (In addition, Alloy is frequently used to model systems that don't have clear mappings into programs, like certain kinds of cyber-physical systems [1].) Note that this of course entails that the model one writes in Alloy must correctly model the program at hand (with respect to its semantics) in order for the checking of properties to make sense.
As mentioned, Alloy itself cannot produce executable code (such as the code you've given, written in Common Lisp). However, there are multiple approaches that use Alloy, and the models Alloy generates, to produce fragments of code or test cases (again, tailored to the specific program at hand). In addition, an extension of Alloy called Alloy* [2], which adds the ability to solve higher-order constraints within Alloy (quantification over relations), can be used to generate programs: actual code that performs the operation and accepts different inputs (similarly to the add function, according to the model). Again, such programs are represented as Alloy models, and one needs to put in additional effort to translate those models into programs in some desired programming language.
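As a concrete illustration of that last point (a hypothetical sketch, in Haskell rather than Common Lisp, with invented names): one lightweight way to carry the constraint into the code is to read b'.addr = b.addr + n->a as an executable property and test the implementation against it with a property-based testing tool such as QuickCheck.

import qualified Data.Set as Set

type Name = String
type Addr = String
type Book = [(Name, Addr)]   -- a book as a set of name->address links

-- An analogue of the adjoin-based Lisp add above: insert (n, a) unless
-- an entry with the same name is already present.
add :: Book -> Name -> Addr -> Book
add b n a
  | any ((== n) . fst) b = b
  | otherwise            = (n, a) : b

-- The Alloy constraint, read as a property of the implementation:
-- the new book, viewed as a set of links, is the old one plus n->a.
prop_addMatchesSpec :: Book -> Name -> Addr -> Bool
prop_addMatchesSpec b n a =
  Set.fromList (add b n a) == Set.insert (n, a) (Set.fromList b)

-- The property has a counterexample: if the name is already bound to a
-- different address, the adjoin-based add keeps the old entry, while the
-- constraint demands the new link.
-- prop_addMatchesSpec [("ann", "a1")] "ann" "a2"  ==  False

Finding such a mismatch is exactly the kind of benefit the constraint buys: either the model or the implementation has to be revised before the two agree.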
That said, note that some (verification) systems do allow you to mix specifications with code. The specifications might be written in the same way as the code that is to be executed (as in the verification/synthesis framework Leon [3]), or written in a language distinct from that of the executable code and bound to the code by some other means (like annotations in Dafny [4]).
[1] Kang, Eunsuk, et al. "Model-based security analysis of a water treatment system." Proceedings of the 2nd International Workshop on Software Engineering for Smart Cyber-Physical Systems. ACM, 2016.
[2] Milicevic, Aleksandar, et al. "Alloy*: A general-purpose higher-order relational constraint solver." Proceedings of the 37th International Conference on Software Engineering-Volume 1. IEEE Press, 2015.
[3] Blanc, Régis, et al. "An overview of the Leon verification system: Verification by translation to recursive functions." Proceedings of the 4th Workshop on Scala. ACM, 2013.
[4] Leino, K. Rustan M. "Dafny: An automatic program verifier for functional correctness." International Conference on Logic for Programming Artificial Intelligence and Reasoning. Springer Berlin Heidelberg, 2010.

Language features helpful for writing quines (self-printing programs)?

OK, for those who have never encountered the term, a quine is a "self-replicating" computer program. To be more specific, one which - upon execution - produces a copy of its own source code as its only output.
Quines can, of course, be written in many programming languages (but not all); and some languages are clearly more suited to producing quines than others (to see what I mean by the somewhat subjective-sounding "more suited", compare the Haskell example with the C example on the Wiki page; I give my more objective definition below).
The question I have is: from a programming-language perspective, what language features (either theoretical design features or syntactic sugar) make a language more suitable/helpful for writing quines?
My definition of "more suitable" is "quines are easier to write" and "are shorter/more readable/less obfuscated". But you're welcome to add more criteria that are at least somewhat objective.
Please note that this question explicitly excludes degenerate cases, like a language which is designed to contain "print_a_quine" primitive.
I am not entirely sure, so correct me if any of you knows better.
I agree with both other answers, and will go further by explaining that a quine is this:
Y g
where Y is a Y fixed-point combinator (or any other fixed-point combinator), which means in lambda calculus:
Y g = g(Y g)
Now it is quite apparent that we need the code to be data, and g to be a function which will print its argument.
So, to summarize: to construct such quines we need functions, a printing function, a fixed-point combinator, and a call-by-name evaluation strategy.
The smallest language that satisfies these conditions is, AFAIK, Zot from the Iota and Jot family.
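For a concrete taste of the "code is data" ingredient, the classic Haskell quine stores its own text in a string s and uses show to re-quote it:

main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "

Here show plays the role of the quoting/printing function: s is emitted once as code and once as data, which is the same trick that the fixed-point construction above expresses more abstractly.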
Languages like the Io Programming Language and others allow the treating of code as data. In tree walking systems, this typically allows the language implementer to expose the abstract syntax tree as a first class citizen. In the case of Io, this is what it does. Being object oriented, the AST is modelled around Message objects, and a special sentinel is created to represent the currently executing message; this sentinel is called thisMessage. thisMessage is a full Message like any other, and responds to the print message, which prints it to the screen. As a result, the shortest quine I've ever been able to produce in any language, has come from Io and looks like this:
thisMessage print
Anyway, I just couldn't help sharing this with you on this subject. The above certainly makes writing quines easy, but not doing it this way doesn't preclude easily creating a quine.
I'm not sure if this is a useful answer from a practical point of view, but there is some useful theory in computability theory. In particular, fixed points and Kleene's recursion theorem can be used for writing quines. Apparently, the theory can be used for writing a quine in LISP (as the Wikipedia page shows).

Is there a visual modeling language or style for the functional programming paradigm?

UML is a standard aimed at the modeling of software which will be written in OO languages, and goes hand in hand with Java. Still, could it possibly be used to model software meant to be written in the functional programming paradigm? Which diagrams would be rendered useful given the embedded visual elements?
Is there a modeling language aimed at functional programming, more specifically Haskell? What tools for putting together diagrams would you recommend?
Edited by OP Sept 02, 2009:
What I'm looking for is the most visual, lightest-weight representation of what goes on in the code: easy-to-follow diagrams and visual models, not necessarily aimed at other programmers. I'll be developing a game in Haskell very soon, but because this project is my final graduation project I need to introduce some sort of formalization of the proposed solution. I was wondering if there is an equivalent of the UML+Java standard, but for Haskell.
Should I just stick to storyboards, written descriptions, non-formalized diagrams (some shallow flow-chart-like images), non-formalized use case descriptions?
Edited by jcolebrand June 21, 2012:
Note that the asker originally wanted a visual metaphor, and now that we've had three years, we're looking for more/better tools. None of the original answers really addressed the concept of a "visual metaphor design tool", so that's what the new bounty is looking for.
I believe the modeling language for Haskell is called "math". It's often taught in schools.
Yes, there are widely used modeling/specification languages/techniques for Haskell.
They're not visual.
In Haskell, types give a partial specification.
Sometimes, this specification fully determines the meaning/outcome while leaving various implementation choices open.
Going beyond Haskell to languages with dependent types, as in Agda & Coq (among others), types are much more often useful as a complete specification.
Where types aren't enough, add formal specifications, which often take a simple functional form.
(Hence, I believe, the answers that the modeling language of choice for Haskell is Haskell itself or "math".)
In such a form, you give a functional definition that is optimized for clarity and simplicity and not at all for efficiency.
The definition might even involve uncomputable operations such as function equality over infinite domains.
Then, step by step, you transform the specification into the form of an efficiently computable functional program.
Every step preserves the semantics (denotation), and so the final form ("implementation") is guaranteed to be semantically equivalent to the original form ("specification").
You'll see this process referred to by various names, including "program transformation", "program derivation", and "program calculation".
The steps in a typical derivation are mostly applications of "equational reasoning", augmented with a few applications of mathematical induction (and co-induction).
Being able to perform such simple and useful reasoning was a primary motivation for functional programming languages in the first place, and they owe their validity to the "denotative" nature of "genuinely functional programming".
(The terms "denotative" and "genuinely functional" are from Peter Landin's seminal paper The Next 700 Programming Languages.)
Thus the rallying cry for pure functional programming used to be "good for equational reasoning", though I don't hear that description nearly as often these days.
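As a toy illustration of such a derivation (my example, not from the text above): start from the obviously correct but quadratic specification of list reversal and derive a linear-time version, with the step justified by ordinary induction on lists.

-- Specification: clear, but O(n^2) because of the repeated (++).
revSpec :: [a] -> [a]
revSpec []     = []
revSpec (x:xs) = revSpec xs ++ [x]

-- Derived helper: introduce an accumulator, guided by the equation
--   revAcc xs acc = revSpec xs ++ acc   (provable by induction on xs).
revAcc :: [a] -> [a] -> [a]
revAcc []     acc = acc
revAcc (x:xs) acc = revAcc xs (x:acc)

-- Implementation: semantically equal to the specification, but O(n).
revImpl :: [a] -> [a]
revImpl xs = revAcc xs []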
In Haskell, denotative corresponds to types other than IO and types that rely on IO (such as STM).
While the denotative/non-IO types are good for correct equational reasoning, the IO/non-denotative types are designed to be bad for incorrect equational reasoning.
A specific version of derivation-from-specification that I use as often as possible in my Haskell work is what I call "semantic type class morphisms" (TCMs).
The idea there is to give a semantics/interpretation for a data type, and then use the TCM principle to determine (often uniquely) the meaning of most or all of the type's functionality via type class instances.
For instance, I say that the meaning of an Image type is as a function from 2D space.
The TCM principle then tells me the meaning of the Monoid, Functor, Applicative, Monad, Contrafunctor, and Comonad instances, as corresponding to those instances for functions.
That's a lot of useful functionality on images with very succinct and compelling specifications!
(The specification is the semantic function plus a list of standard type classes for which the semantic TCM principle must hold.)
And yet I have tremendous freedom of how to represent images, and the semantic TCM principle eliminates abstraction leaks.
If you're curious to see some examples of this principle in action, check out the paper Denotational design with type class morphisms.
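To give the flavor in code, here is a much-simplified sketch of the Image example, with the representation deliberately chosen to be the semantic function itself (a real library would use a different representation and merely require these equations to hold):

type Point = (Double, Double)

-- The semantic domain: an image with pixels of type a *means* a function
-- from 2D space to a.
newtype Image a = Image { at :: Point -> a }

-- The type class morphism principle: fmap on images must mean fmap on
-- functions, i.e.  at (fmap f im) == f . at im.  With this representation
-- the instance is forced to be:
instance Functor Image where
  fmap f (Image g) = Image (f . g)

-- Likewise, the Applicative instance mirrors the one for functions:
-- pure is a constant image, and (<*>) applies pointwise.
instance Applicative Image where
  pure a = Image (const a)
  Image f <*> Image g = Image (\p -> f p (g p))

A more efficient representation of Image is then free to vary, as long as its instances keep satisfying the same equations.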
We use theorem provers to do formal modelling (with verification), such as Isabelle or Coq. Sometimes we use domain specific languages (e.g. Cryptol) to do the high level design, before deriving the "low level" Haskell implementation.
Often we just use Haskell as the modelling language, and derive the actual implementation via rewriting.
QuickCheck properties also play a part in the design document, along with type and module decompositions.
Yes, Haskell.
I get the impression that programmers using functional languages don't feel the need to simplify their language of choice away when thinking about their design, which is one (rather glib) way of viewing what UML does for you.
I have watched a few video interviews, and read some interviews, with the likes of Erik Meijer and Simon Peyton Jones. It seems that when it comes to modelling and understanding one's problem domain, they use type signatures, especially function signatures.
Sequence diagrams (UML) could be related to the composition of functions.
A static class diagram (UML) could be related to type signatures.
In Haskell, you model by types.
Just begin by writing your function, class, and data signatures without any implementation and try to make the types fit. The next step is QuickCheck.
E.g. to model a sort:
class Ord a where
  compare :: a -> a -> Ordering

sort :: Ord a => [a] -> [a]
sort = undefined
then tests
prop_preservesLength l = (length l) == (length $ sort l)
...
and finally the implementation ...
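For instance, a minimal sketch of that final step (insertion sort, chosen only because it is short, plus one more property one might add):

sort :: Ord a => [a] -> [a]
sort = foldr insert []
  where
    insert x [] = [x]
    insert x (y:ys)
      | x <= y    = x : y : ys
      | otherwise = y : insert x ys

-- another property pinning the meaning down further: the output is ordered
prop_sorted :: [Int] -> Bool
prop_sorted l = and (zipWith (<=) s (drop 1 s)) where s = sort l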
Although not a recommendation to use it (it appears to be no longer available for download), the HOPS system visualizes term graphs, which are often a convenient representation of functional programs.
It may also be considered a design tool, as it supports documenting programs as well as constructing them; I believe it can also step through the rewriting of the terms if you want, so you can see them unfold.
Unfortunately, I believe it is no longer actively developed.
I realize I'm late to the party, but I'll still give my answer in case someone would find it useful.
I think I'd go for systemic methodologies of the likes of SADT/IDEF0.
https://en.wikipedia.org/wiki/Function_model
https://en.wikipedia.org/wiki/Structured_Analysis_and_Design_Technique
Such diagrams can be made with the Dia program, which is available on Linux, Windows, and macOS.
You can use a dataflow process network model as described in Realtime Signal Processing: Dataflow, Visual, and Functional Programming by Hideki John Reekie.
For example for code like (Haskell):
fact n | n == 0    = 1
       | otherwise = n * fact (n - 1)
The visual representation would be:
What would be the point in modelling Haskell with Maths? I thought the whole point of Haskell was that it related so closely to Maths that Mathematicians could pick it up and run with it. Why would you translate a language into itself?
When working with another functional programming language (F#) I used diagrams on a whiteboard describing the large blocks, and then modelled the system in an OO way using UML, with classes. There was a slight mismatch in the building blocks in F# (I split the classes up into data structures and functions that acted upon them). But for understanding from a business perspective it worked a treat. I would add, mind, that the problem was business/Web oriented, and I don't know how well the technique would work for something a bit more financial. I think I would probably capture the functions as objects without state, and they should fit in nicely.
It all depends on the domain you're working in.
I use USL - Universal Systems Language. I'm learning Erlang and I think it's a perfect fit.
Too bad the documentation is very limited and nobody uses it.
More information here.

Explain concatenative languages to me like I'm an 8-year-old

I've read the Wikipedia article on concatenative languages, and I am now more confused than I was when I started. :-)
What is a concatenative language in stupid people terms?
In normal programming languages, you have variables which can be defined freely, and you call methods using these variables as arguments. These are simple to understand but somewhat limited. Often it is hard to reuse an existing method because you simply can't map the existing variables onto the parameters the method needs, or because method A calls another method B, and A would be perfect for you if only you could replace the call to B with a call to C.
Concatenative languages use a fixed data structure to save values (usually a stack or a list). There are no variables. This means that many methods and functions have the same "API": they work on something which someone else left on the stack. In addition, code itself is treated as "data", i.e. it is common to write code which can modify itself or which accepts other code as a "parameter" (i.e. as an element on the stack).
These attributes make these languages perfect for chaining existing code together to create something new. Reuse is built in. You can write a function which accepts a list and a piece of code and calls the code for each item in the list. This will now work on any kind of data as long as it behaves like a list: results from a database, a row of pixels from an image, characters in a string, etc.
The biggest problem is that you have no hint what's going on. There are only a couple of data types (list, string, number), so everything gets mapped to that. When you get a piece of data, you usually don't care what it is or where it comes from. But that makes it hard to follow data through the code to see what is happening to it.
I believe it takes a certain mindset to use these languages successfully. They are not for everyone.
[EDIT] Forth has some penetration but not that much. You can find PostScript in any modern laser printer. So they are niche languages.
On a functional level, they are on par with LISP, C-like languages, and SQL: all of them are Turing complete, so you can compute anything. It's just a matter of how much code you have to write. Some things are simpler in LISP, some are simpler in C, some are simpler in query languages. The question of which is "better" is futile unless you have a context.
First I'm going to make a rebuttal to Norman Ramsey's assertion that there is no theory.
Theory of Concatenative Languages
A concatenative language is a functional programming language, where the default operation (what happens when two terms are side by side) is function composition instead of function application. It is as simple as that.
So for example in the SKI Combinator Calculus (one of the simplest functional languages) two terms side by side are equivalent to applying the first term to the second term. For example: S K K is equivalent to S(K)(K).
In a concatenative language S K K would be equivalent to S . K . K in Haskell.
So what's the big deal
A pure concatenative language has the interesting property that the order of evaluation of terms does not matter. In a concatenative language (S K) K is the same as S (K K). This does not apply to the SKI Calculus or any other functional programming language based on function application.
One reason this observation is interesting is that it reveals opportunities for parallelization in the evaluation of code expressed in terms of function composition instead of application.
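A small Haskell illustration of that point (invented toy functions): composition is associative, so a pipeline built with (.) can be regrouped, and therefore scheduled, however we like, while regrouping function application changes a program's meaning (or breaks its typing) in general.

f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

-- Composition: the grouping is irrelevant, for every input x.
assoc :: Int -> Bool
assoc x = ((f . g) . h) x == (f . (g . h)) x   -- always True

-- Application: (f g) h is not even well-typed here, and in general
-- f (g h) and (f g) h denote different things in applicative languages.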
Now for the real world
The semantics of stack-based languages which support higher-order functions can be explained using a concatenative calculus. You simply map each term (command/expression/sub-program) to be a function that takes a function as input and returns a function as output. The entire program is effectively a single stack transformation function.
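A minimal Haskell rendering of that view (invented names, integers only): every word denotes a stack-to-stack function, and a program is the composition of its words, so concatenating programs is literally composing functions.

type Stack = [Int]

-- Each word is a function from stacks to stacks.
push :: Int -> Stack -> Stack
push n s = n : s

add, mul :: Stack -> Stack
add (x:y:s) = (x + y) : s
add _       = error "add: stack underflow"
mul (x:y:s) = (x * y) : s
mul _       = error "mul: stack underflow"

-- The program "1 2 + 3 *" as the composition of its words, read left to right.
program :: Stack -> Stack
program = mul . push 3 . add . push 2 . push 1
-- program [] == [9]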
The reality is that things are always distorted in the real world (e.g. FORTH has a global dictionary, PostScript does weird things where the evaluation order matters). Most practical programming languages don't adhere perfectly to a theoretical model.
Final Words
I don't think a typical programmer or 8 year old should ever worry about what a concatenative language is. I also don't find it particularly useful to pigeon-hole programming languages as being type X or type Y.
After reading http://concatenative.org/wiki/view/Concatenative%20language and drawing on what little I remember of fiddling around with Forth as a teenager, I believe that the key thing about concatenative programming has to do with:
viewing data in terms of values on a specific data stack
and functions manipulating stuff by popping/pushing values on the same data stack
Check out these quotes from the above webpage:
There are two terms that get thrown around, stack language and concatenative language. Both define similar but not equal classes of languages. For the most part though, they are identical.

Most languages in widespread use today are applicative languages: the central construct in the language is some form of function call, where a function is applied to a set of parameters, where each parameter is itself the result of a function call, the name of a variable, or a constant. In stack languages, a function call is made by simply writing the name of the function; the parameters are implicit, and they have to already be on the stack when the call is made. The result of the function call (if any) is then left on the stack after the function returns, for the next function to consume, and so on. Because functions are invoked simply by mentioning their name without any additional syntax, Forth and Factor refer to functions as "words", because in the syntax they really are just words.
This is in contrast to applicative languages that apply their functions directly to specific variables.
Example: adding two numbers.
Applicative language:
int foo(int a, int b)
{
    return a + b;
}
var c = 4;
var d = 3;
var g = foo(c, d);
Concatenative language (I made it up, supposed to be similar to Forth... ;) )
push 4
push 3
+
pop
While I don't think concatenative language = stack language, as the authors point out above, it seems similar.
I reckon the main idea is 1. We can create new programs simply by joining other programs together.
Also, 2. Any random chunk of the program is a valid function (or sub-program).
Good old pure RPN Forth has those properties, excluding any random non-RPN syntax.
In the program 1 2 + 3 *, the sub-program + 3 * takes 2 args, and gives 1 result. The sub-program 2 takes 0 args and returns 1 result. Any chunk is a function, and that is nice!
You can create new functions by lumping two or more others together, optionally with a little glue. It will work best if the types match!
These ideas are really good, we value simplicity.
It is not limited to RPN Forth-style serial languages, nor to imperative or functional programming. The two ideas also work for a graphical language, where program units might be, for example, functions, procedures, relations, or processes.
In a network of communicating processes, every sub-network can act like a process.
In a graph of mathematical relations, every sub-graph is a valid relation.
These structures are 'concatenative', we can break them apart in any way (draw circles), and join them together in many ways (draw lines).
Well, that's how I see it. I'm sure I've missed many other good ideas from the concatenative camp. While I'm keen on graphical programming, I'm new to this focus on concatenation.
My pragmatic (and subjective) definition of concatenative programming (now you can skip reading the rest of it):
-> function composition in extreme ways (with Reverse Polish notation (RPN) syntax):
( Forth code )
: fib
  dup 2 <= if
    drop 1
  else
    dup 1 - recurse
    swap 2 - recurse +
  then ;
-> everything is a function, or at least, can be a function:
( Forth code )
: 1 1 ; \ define a function 1 to push the literal number 1 on stack
-> arguments are passed implicitly to functions (OK, this sounds like a definition of tacit programming), but this in Forth:
a b c
may be in Lisp:
(c a b)
(c (b a))
(c (b (a)))
so, it's easy to generate ambiguous code...
you can write definitions that push the xt (execution token) on the stack, and define a small alias for 'execute':
( Forth code )
: <- execute ; \ apply function
so, you'll get:
a b c <- \ Lisp: (c a b)
a b <- c <- \ Lisp: (c (b a))
a <- b <- c <- \ Lisp: (c (b (a)))
To your simple question, here's a subjective and argumentative answer.
I looked at the article and several related web pages. The web pages say themselves that there isn't a real theory, so it's no wonder that people are having a hard time coming up with a precise and understandable definition. I would say that at present, it is not useful to classify languages as "concatenative" or "not concatenative".
To me it looks like a term that gives Manfred von Thun a place to hang his hat but may not be useful for other programmers.
While PostScript and Forth are worth studying, I don't see anything terribly new or interesting in Manfred von Thun's Joy programming language. Indeed, if you read Chris Okasaki's paper on Techniques for Embedding Postfix Languages in Haskell you can try out all this stuff in a setting that, relative to Joy, is totally mainstream.
So my answer is there's no simple explanation because there's no mature theory underlying the idea of a concatenative language. (As Einstein and Feynman said, if you can't explain your idea to a college freshman, you don't really understand it.) I'll go further and say although studying some of these languages, like Forth and PostScript, is an excellent use of time, trying to figure out exactly what people mean when they say "concatenative" is probably a waste of your time.
You can't explain a language, just get one (Factor, preferably) and try some tutorials on it. Tutorials are better than Stack Overflow answers.
