Is VHDL Turing complete? - programming-languages

Is VHDL Turing complete? My understanding is that VHDL creates a register machine, and that register machines - without arbitrary RAM - aren't Turing complete.
Is this accurate? For problems that can't be solved in register machines, is there a standard approach - use RAM outside the VHDL, and manage it via VHDL, for instance?

There are three main criteria for Turing completeness:
Sequence: do this thing, then that thing, then the other thing.
Selection: if this, then do something.
Iteration (or recursion): do this over and over until some condition is met.
The requirement for memory is not that it be infinite (which is impossible with modern technology, and all languages would fail), but that it be unbounded, or infinitely extendible: ie. if you run out, you can add more and try again.
So yes, I think VHDL certainly qualifies. It can do all that stuff.

Another way to show Turing completeness is a chain of transformations:
Turing machines are Turing complete.
Turing machines can be simulated by register machines, and vice versa.
Register machines are an abstract, simple model of a modern processor (a small sketch follows below).
You can describe a processor in VHDL.
So VHDL is Turing complete.
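As a concrete illustration of the "register machine" step in that chain, here is a minimal interpreter sketch, written in Haskell rather than VHDL for brevity; the instruction set and names are my own, purely for illustration.

    -- A tiny (hypothetical) register machine: a program is a list of
    -- instructions operating on numbered registers, plus a program counter.
    import Data.Map (Map)
    import qualified Data.Map as Map

    data Instr
      = Inc Int Int      -- Inc r next: add 1 to register r, then go to instruction next
      | Dec Int Int Int  -- Dec r nz z: if register r > 0, decrement and go to nz, else go to z
      | Halt

    type Regs = Map Int Integer

    run :: [Instr] -> Regs -> Regs
    run prog = go 0
      where
        go pc regs = case prog !! pc of
          Halt       -> regs
          Inc r next -> go next (Map.insertWith (+) r 1 regs)
          Dec r nz z
            | Map.findWithDefault 0 r regs > 0 -> go nz (Map.adjust (subtract 1) r regs)
            | otherwise                        -> go z regs

    -- Example program: move the contents of register 0 into register 1.
    main :: IO ()
    main = print (run [Dec 0 1 2, Inc 1 0, Halt] (Map.fromList [(0, 3)]))

With unboundedly large register values (Integer here) and as many registers as you care to name, such a machine can simulate a Turing machine, which is the load-bearing step of the argument.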

Related

Example of code implemented using state machine versus procedural

I am curious about when one would use state machines (event-driven) versus the procedural paradigm. Any useful program takes some input(s), and then delivers some output(s). With some googling, it seems like common examples for state machines are parsers or low-level embedded devices, whereas procedural code is used for all sorts of things, but I still struggle to understand how different they would actually be for certain applications. Would it be possible for someone to demonstrate the usefulness of state machines with code written in that paradigm versus standard procedural code?
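A minimal sketch of the event-driven style the question asks about, using a made-up turnstile example (Haskell; the example and names are illustrative, not from the post):

    -- The whole behaviour lives in one transition table: state x event -> new state.
    data State = Locked | Unlocked deriving Show
    data Event = Coin | Push deriving Show

    step :: State -> Event -> State
    step Locked   Coin = Unlocked
    step Unlocked Push = Locked
    step s        _    = s        -- events that don't apply are simply ignored

    -- Driving the machine is just a fold over the incoming event stream.
    run :: [Event] -> State
    run = foldl step Locked

    main :: IO ()
    main = print (run [Coin, Push, Push, Coin])   -- Unlocked

A procedural version would typically interleave the same decisions with its input-reading loop; the state-machine version keeps every legal state/event pair in one table, which is why the style shows up in parsers and embedded controllers.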

Is natural language Turing complete?

I'm pretty sure a human language (e.g. English) is powerful enough to simulate a Turing machine, which would make it Turing complete. However, that would imply natural languages are no more or less expressive than programming languages, which seems questionable.
Is natural language Turing complete?
First of all "Is language X Turing complete" is only a well-defined question given a well-defined semantics for language X. It is nearly impossible to define one for natural languages due to natural languages' complex nature and reliance on context and intuition. Most (all?) natural languages don't even have a well-defined syntax.
That aside, your main confusion is based on the assumption that it's not possible for a computational model to be strictly more powerful than a Turing machine, i.e. be able to simulate a Turing machine, but also to express computations that a Turing machine can not. This is not true. For example we can extend Turing machines with oracles and we get a computational model that's strictly more powerful than plain Turing machines.
In the same vein we could define a programming language MagicLang that can do everything an ordinary programming language can do plus solve the halting problem. Defining a semantics for such a language is easy: just take the semantics of the language we used as a basis and add a function bool halts(string src, string input) with the semantics "returns true if the program described by the source code src successfully terminates after a finite amount of time when given the input input". So that's easy. What's hard, or rather impossible, is implementing this language.
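To see why implementing it is impossible, the classic diagonal argument can be sketched directly in code; the names halts and troublemaker below are illustrative stand-ins (a Haskell sketch, with the impossible function stubbed out):

    -- Hypothetical: assume a total, always-correct halts function existed.
    -- It cannot actually be written, so it is stubbed out here.
    halts :: String -> String -> Bool
    halts _src _input = error "no such total, always-correct function can exist"

    -- Diagonalization: a program that, fed its own source text, does the
    -- opposite of whatever halts predicts for it.
    troublemaker :: String -> ()
    troublemaker src =
      if halts src src
        then troublemaker src   -- predicted to halt?  then loop forever
        else ()                 -- predicted to loop?  then halt immediately

Running troublemaker on its own source contradicts whatever halts answers, so halts cannot exist as an ordinary, always-terminating program.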
Now one may argue that natural language can also describe the halting problem and our brain can "execute" natural language, i.e. it can answer the question "does this program halt". So if we could build a computer that could do everything our brain can do, it should be able to do this as well. But the thing is, our brain can't solve the halting problem with 100% accuracy. Our brain can't even execute regular programs with 100% accuracy. Just remember how often you've stepped through a program in your head and come up with a different result than reality. Our brain is very good at learning, making intuitive connections and applying heuristics, but those things always come with the risk of giving the wrong result.
So could a computer do the same thing? Yes, we can use heuristics and machine learning to approach otherwise unsolvable problems and with that normal programming languages can attempt to solve every problem that can be described in natural language (even the undecidable ones). But just like the brain, those programs will sometimes give wrong results. In fact they will give wrong results much more often as our machine learning algorithms and heuristics aren't nearly as advanced as those of the human brain.
If a software language is sufficiently complex that it can be used to define arbitrary extensions to itself (such as defining arbitrary new functions), then it's clearly Turing-complete.
Using natural language I can, given sufficient time, teach another human terminology and concepts to extend their understanding and ability to discuss arbitrary subjects that they previously couldn't -- I could teach them copyright law, or astrophysics, for example (if they didn't already know them). So, while this may be more of an analogy than an exact identity, there does seem to be a Turing-completeness-like property to natural languages: they can be used to define and transmit arbitrary extensions to themselves. (Admittedly, not every human is really cut out to learn astrophysics -- but then any non-idealized Turing machine has only some finite amount of memory, so it's always possible to define a program that it can't run because it doesn't have enough memory.)

G is CFG, Is L(G) Regular

G is a given CFG, is L(G) regular? It's an undecidable problem.
But my argument is: the language is given, and if I can do any of the following things, then it is regular, otherwise non-regular:
Creating a DFA/NFA
Writing a left-linear or right-linear grammar
Writing a regular expression
Please tell me why it is undecidable.
"Undecidable" means there is no algorithm that can decide the problem. Let's delve into what these terms mean.
An algorithm is anything that you can code into a Turing machine. Turing machines are not creative, they do not think, they cannot get lucky or learn new things to try. They are coded one way and then have to work the same on all inputs without any possibility of changing. Change its behavior and you have a new Turing machine.
To decide in this context means to correctly determine for each problem instance whether the answer is yes or no. You have to be able to say yes or no for each instance with 100% certainty in finite time; it's not enough to be able to say "yes" only, or "no" only, or even both only most of the time.
To answer your question then:
You are (presumably) not a Turing machine.
Nothing stops you (or a Turing machine) from answering the problem for vast numbers of real problem instances of interest.
It is currently unknown whether this problem is undecidable for human beings; we can only prove it's undecidable for Turing machines (and equivalent systems of computation).
The Church-Turing thesis conjectures that human beings' computational faculties do not exceed those of Turing machines, but this is not proven.

Is it possible to create a universal intermediate programming language?

What I mean is, is there a language or could one be designed, such that all high level programming languages could be compiled into this intermediate language?
This is excluding machine languages.
Every general-purpose language that is Turing-complete is a universal programming language.
Two languages (or machines) are considered to be Turing-equivalent if any program for one can be compiled into a program for the other. A language is Turing-complete if it is Turing-equivalent to a Turing machine.
There were several early efforts to formalize the notion of a computation; a Turing machine was one, the lambda calculus another, and the class of general recursive functions a third. Alonzo Church and Alan Turing proved that all three of these formalizations were Turing-equivalent; any program for a Turing machine could be compiled to the lambda calculus, and vice versa, as could any general recursive function be implemented by either the lambda calculus or a Turing machine, and again vice versa.
The Church-Turing thesis hypothesizes that any computation that can be expressed in any formal system can be converted into a program that can run on a Turing machine; or equivalently, can be expressed in the untyped lambda calculus, or is general recursive, based on the equivalence described above.
It is merely a hypothesis and cannot be formally proven, as there is no way to formally characterize the class of computations that are subject to it (without circular reasoning by defining them as the class of computations that can be performed by a Turing machine), but every model of effective computation that has been proposed has turned out to be simulable by a Turing machine.
Because you can write a simulator of a Turing machine (or implementation of lambda calculus) in almost any general purpose language, and likewise those languages can be compiled to a program running on a Turing machine, pretty much all general purpose languages are Turing complete.
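As a rough illustration of that claim, here is a sketch of a single-tape Turing machine simulator; the representation and the example machine are mine, chosen for brevity (Haskell):

    -- The tape is the cells left of the head (reversed), then the cell under
    -- the head and everything to its right; blanks are appended on demand,
    -- so the tape is unbounded.
    data Move = L | R

    type Rule st sym = st -> sym -> (st, sym, Move)   -- the transition function

    run :: Eq st => Rule st sym -> st -> st -> sym -> [sym] -> [sym]
    run delta halt start blank tape = go start [] (pad tape)
      where
        pad [] = [blank]
        pad xs = xs
        go st ls (c:rs)
          | st == halt = reverse ls ++ c : rs
          | otherwise  =
              let (st', c', mv) = delta st c
              in case mv of
                   R -> go st' (c' : ls) (pad rs)
                   L -> case ls of
                          (l:ls') -> go st' ls' (l : c' : rs)
                          []      -> go st' [] (blank : c' : rs)
        go st ls [] = go st ls [blank]

    -- Example machine: flip every 0/1 until the first blank, then halt in state 1.
    flipBits :: Rule Int Char
    flipBits 0 '0' = (0, '1', R)
    flipBits 0 '1' = (0, '0', R)
    flipBits _ c   = (1, c, R)

    main :: IO ()
    main = putStrLn (run flipBits 1 0 ' ' "0110")   -- prints "1001" plus trailing blanks

A universal machine is then just a particular choice of rule table that interprets a program encoded on its own tape.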
There are, however, some languages which are not Turing complete; regular expressions are an example. They can be simulated by a Turing machine, but they cannot in turn simulate a Turing machine.
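To make the gap concrete: balanced parentheses are a classic language that regular expressions cannot recognize, because recognition needs a counter that can grow without bound, while a general-purpose language handles it trivially (my own illustration):

    -- A finite automaton (and hence a classical regular expression) has no
    -- unbounded counter, so it cannot recognize balanced parentheses.
    balanced :: String -> Bool
    balanced = go (0 :: Integer)
      where
        go n []       = n == 0
        go n ('(':xs) = go (n + 1) xs
        go n (')':xs) = n > 0 && go (n - 1) xs
        go n (_:xs)   = go n xs

    main :: IO ()
    main = print (map balanced ["(()())", "(()", "())("])   -- [True,False,False]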
Note that none of this addresses efficiency or access to host system resources; merely that the same computation can be expressed, and that it will eventually provide the same answer. There are some Turing-complete languages in which some problems cannot be computed at the same asymptotic efficiency as in other languages. And some languages provide access to external resources like the filesystem, I/O, networking, etc., while others just allow computation in memory; but in any Turing-complete language it would be possible to add an API or method of manipulating memory that allows it to access those external resources, so lack of access to system resources isn't a fundamental limitation, just a limitation of implementation.
As a more practical matter, there are several languages that have been designed as portable intermediate languages to serve as compilation targets. The LLVM IR is one commonly used example; C-- is another. Also, any bytecode for a language runtime acts this way: the JVM is a compilation target for many languages, and the CLR is another. Finally, many languages compile to C, as C compilers are widely available and the code is more portable than machine code.
And more recently, with the advent of the web and JavaScript being a language that is available in every web browser, JavaScript has become a popular target for compilation, both for languages that were designed to compile down to JavaScript, like CoffeeScript and Dart, and for existing languages that were originally designed to compile to machine code, via projects like Emscripten. Recognizing this usage, there has been an effort to specify a subset of JavaScript with stricter rules, known as asm.js, that makes a better target for compilation while still allowing the same code to work backwards-compatibly with regular JavaScript engines that don't know anything about asm.js.

Non-deterministic programming languages

I know in Prolog you can do something like
someFunction(List) :-
    someOtherFunction(X, List),
    doSomethingWith(X).
    % and so on
This will not iterate over every element in List; instead, it will branch off into different "machines" (by using multiple threads, backtracking on a single thread, creating parallel universes or what have you), with a separate execution for every possible value of X that causes someOtherFunction(X, List) to return true!
(I have no idea how it does this, but that's not important to the question)
My question is: What other non-deterministic programming languages are out there? It seems like non-determinism is the simplest and most logical way to implement multi-threading in a language with immutable variables, but I've never seen this done before - Why isn't this technique more popular?
Prolog is actually deterministic—the order of evaluation is prescribed, and order matters.
Why isn't nondeterminism more popular?
Nondeterminism is unpopular because it makes it harder to reason about the outcomes of your programs, and truly nondeterministic executions (as opposed to semantics) are hard to implement.
The only nondeterministic languages I'm aware of are
Dijkstra's calculus of guarded commands, which he wanted never to be implemented
Concurrent ML, in which communications may be synchronized nondeterministically
Gerard Holzmann's Promela language, which is the language of the model checker SPIN
SPIN does actually use the nondeterminism and explores the entire state space when it can.
And of course any multithreaded language behaves nondeterministically if the threads are not synchronized, but that's exactly the sort of thing that's difficult to reason about—and why it's so hard to implement efficient, correct lock-free data structures.
Incidentally, if you are looking for parallelism, you can get it with a simple map function in a pure functional language like Haskell. There's a reason Google's MapReduce was inspired by functional programming.
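As a sketch of that point, assuming the parallel package and a build with -threaded (my assumptions, not part of the original answer), a pure map can be evaluated in parallel without changing what the program means:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    expensive :: Int -> Int
    expensive n = sum [1 .. n * 100000]   -- stand-in for real per-element work

    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [1 .. 32]))

Because the map is pure, no evaluation order can change the answer, so this is deterministic parallelism rather than the nondeterminism discussed above.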
The Wikipedia article points to Amb, a Scheme derivative with facilities for non-deterministic programming.
As far as I understand, the main reason why programming languages do not do that is because running a non-deterministic program on a deterministic machine (as are all existing computers) is inherently expensive. Basically, a non-deterministic Turing machine can solve complex problems in polynomial time, for which no polynomial algorithm for a deterministic Turing machine is known. In other words, non-deterministic programming fails to capture the essence of algorithmics in the context of existing computers.
The same problem impacts Prolog. Any efficient, or at least not awfully inefficient, Prolog application must use the "cut" operator to avoid exploring an exponential number of paths. That operator works only as long as the programmer has a good mental view of how the Prolog interpreter will explore the possible paths, in a deterministic and very procedural way. Things which are very procedural do not mix well with functional programming, since the latter is mostly an effort not to think procedurally at all.
As a side note, in between deterministic and non-deterministic Turing machines, there is the "quantum computing" model. A quantum computer, assuming that one exists, does not do everything that a non-deterministic Turing machine can do, but it is believed to solve some problems more efficiently than a deterministic Turing machine. There are people who are currently designing programming languages for the quantum computer (assuming that a quantum computer will ultimately be built). Some of those new languages are functional. You may find a host of useful links on this Wikipedia page. Apparently, designing a quantum programming language, functional or not, and using it, is not easy and certainly not "simple".
One example of a non-deterministic language is Occam, based on CSP theory. The combination of the PAR and ALT constructs can give rise to non-deterministic behaviour in multiprocessor systems, implementing fine grain parallel programs.
When using soft channels, i.e. channels between processes on the same processor, the implementation of ALT will make the behaviour close to deterministic†, but as soon as you start using hard channels (physical off-processor communication links) any illusion of determinism vanishes. Different remote processors are not expected to be synchronised in any way and they may not even have the same core or clock speed.
†The ALT construct is often implemented with a PRI ALT, so you have to explicitly code in fairness if you need it to be fair.
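For comparison, a loose analogue of ALT can be sketched in a mainstream language with Haskell's STM; this is my own illustration, not Occam, and like a PRI ALT the orElse combinator is biased toward its first argument, so fairness has to be coded explicitly here too:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM

    -- Wait on two channels at once; whichever has data wins.
    altRead :: TChan String -> TChan String -> IO String
    altRead a b = atomically (readTChan a `orElse` readTChan b)

    main :: IO ()
    main = do
      a <- newTChanIO
      b <- newTChanIO
      _ <- forkIO (atomically (writeTChan a "message on channel a"))
      _ <- forkIO (atomically (writeTChan b "message on channel b"))
      msg <- altRead a b
      putStrLn msg   -- which message wins depends on scheduling, i.e. it is non-deterministic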
Non-determinism is seen as a disadvantage when it comes to reasoning about and proving programs correct, but in many ways once you've accepted it, you are freed from many of the constraints that determinism forces on your reasoning.
As long as the sequencing of communication doesn't lead to deadlock, which can be done by applying CSP techniques, then the precise order in which things are done should matter much less than whether you get the results that you want in time.
It was arguably this lack of determinism which was a major factor in preventing the adoption of Occam and Transputer systems in military projects, dominated by Ada at the time, where knowing precisely what a CPU was doing at every clock cycle was considered essential to proving a system correct. Without this constraint, Occam and the Transputer systems it ran on (the only CPUs at the time with a formally proven IEEE floating point implementation) would have been a perfect fit for hard real-time military systems needing high levels of processing functionality in a small space.
In Prolog you can have both non-determinism and concurrency. Non-determinism is what you described in your question concerning the example code. You can imagine that a Prolog clause is full of implicit amb statements. It is less known that concurrency is also supported by logic-programming.
History says:
The first concurrent logic programming language was the Relational Language of Clark and Gregory, which was an offshoot of IC-Prolog. Later versions of concurrent logic programming include Shapiro's Concurrent Prolog and Ueda's Guarded Horn Clause language GHC.
https://en.wikipedia.org/wiki/Concurrent_logic_programming
But today we might just go with threads inside logic programming. For example, findall can be implemented via threads. This can also be adapted to perform all kinds of tasks on the collection, or maybe even to produce agent networks towards distributed artificial intelligence.
I believe Haskell has the capability to construct a non-deterministic machine. Haskell at first may seem too difficult and abstract for practical use, but it's actually very powerful.
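The usual way to express that is the list monad, where each bind picks "every possible" value and the result collects all combinations that survive the guard; a small sketch of my own:

    import Control.Monad (guard)

    -- Non-deterministic choice: "pick" a, b and c from ranges, keep only the
    -- choices that satisfy the condition.
    pythagorean :: Int -> [(Int, Int, Int)]
    pythagorean n = do
      a <- [1 .. n]
      b <- [a .. n]
      c <- [b .. n]
      guard (a*a + b*b == c*c)
      return (a, b, c)

    main :: IO ()
    main = print (pythagorean 20)   -- [(3,4,5),(5,12,13),(6,8,10),(8,15,17),(9,12,15),(12,16,20)]

Of course this runs on a deterministic machine by simply trying every branch, which is exactly the cost discussed in an earlier answer.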
There is a programming language for non-deterministic problems called "control network programming". If you want more information, go to http://controlnetworkprogramming.com. The site is still in progress, but you can read some information about it.
Java 2K
Note: before you click the link and end up disappointed: this is an esoteric language and has nothing to do with parallelism.
The Sly programming language under development at IBM Research is an attempt to include the non-determinism inherent in multi-threaded execution in the execution of certain types of algorithms. Looks to be very much a work in progress though.
