Conditions for a meta-circular evaluator

Are there any conditions that a language must satisfy so that a meta-circular evaluator can be written for that language? Can I write one for BASIC, or for Python?

To quote Reg Braithwaite:
The difference between self-interpreters and meta-circular interpreters is that the latter restate language features in terms of the features themselves, instead of actually implementing them. (Circular definitions, in other words; hence the name). They depend on their host environment to give the features meaning.
Given that, one of the key features of a language that allows a meta-circular interpreter to be written for it is homoiconicity: the primary representation of the program is a primitive data structure of the language itself. Lisp exhibits this by virtue of the fact that programs are themselves expressed as lists.
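To make the contrast concrete, here is a minimal sketch in Haskell (illustrative only; Haskell is not homoiconic, and the Expr type and eval function are invented for this example). Note the circularity that Braithwaite describes: Add is given meaning by the host language's own +. In Lisp, the separate Expr definition disappears, because the program already is the primitive list structure.

data Expr
  = Lit Integer        -- a literal number
  | Add Expr Expr      -- an addition node
  | Mul Expr Expr      -- a multiplication node

-- The "circular" part: Add means whatever the host's (+) means.
eval :: Expr -> Integer
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))   -- prints 7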

You can write one for any Turing-complete language; however, your mileage may vary.
For Python, it has been done (PyPy). A list of languages for which it has been done can be found in the Wikipedia article.


How many languages does a DFA recognize?

According to Sipser's "Introduction to the Theory of Computation": "If A is the set of all strings that machine M accepts, we say that A is the language of machine M and write L(M) = A. We say that M recognizes A ... A machine may accept several strings, but it always recognizes only one language." And also: "We say that M recognizes language A if A = {w | M accepts w}."
I guess the question has already been answered, but I would like to know if anyone has any thoughts about it: is there anything interesting we can say about the subsets of a regular language? Can we say that the original DFA recognizes them, and is there any interesting relationship between the original DFA and the ones that recognize the smaller languages?
If the language recognized by a DFA (of which there is always exactly one) is finite, then there are finitely many sublanguages of that language (indeed, if the language accepted consists of N strings, there are 2^N sublanguages).
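As a quick, hedged illustration of that count in Haskell (sublanguages is just Data.List.subsequences, which enumerates all subsets of a duplicate-free list):

import Data.List (subsequences)

-- The sublanguages of a finite language are exactly its subsets.
sublanguages :: [String] -> [[String]]
sublanguages = subsequences

main :: IO ()
main = do
  let lang = ["a", "ab", "abb"]        -- N = 3 strings
  print (length (sublanguages lang))   -- 2^3 = 8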
There is no useful relationship which can be easily inferred from the sub/super language relationship w.r.t. where in the Chomsky hierarchy the language falls. That is: a sublanguage of a regular language may be undecidable, and a sublanguage of an undecidable language may be regular, with all possible variations in between.
Because of this, there is no particularly neat relationship to be worked out among the DFAs of sub/super languages: not all of the sublanguages will even be regular; some sublanguages will have simpler DFAs than the DFA of the super language, and some will have more complicated ones. Some will be recognized by a DFA with the same transition structure but a different set of accepting states.
Given a DFA, there is only one language corresponding to the machine. A language is a set, that is, the collection of all the strings accepted by the DFA.

In what way is Haskell's type system more helpful than the type system of another statically typed language?

I have been using Haskell for a while now. I understand most/some of the concepts, but I still do not understand what exactly Haskell's type system allows me to do that I cannot do in another statically typed language. I just intuitively know that Haskell's type system is better in every imaginable way compared to the type systems in C, C++ or Java, but I can't explain it logically, primarily because of a lack of in-depth knowledge about the differences between Haskell's type system and those of other statically typed languages.
Could someone give me examples of how Haskell's type system is more helpful compared to a language with a static type system? Examples that are terse and can be expressed succinctly would be nice.
The Haskell type system has a number of features which all exist in other languages, but are rarely combined within a single, consistent language:
it is a sound, static type system, meaning that a number of errors are guaranteed not to happen at runtime without needing runtime type checks (this is also the case in Caml, SML and almost the case in Java, but not in, say, Lisp, Python, C, or C++);
it performs static type reconstruction, meaning that the programmer doesn't need to write types unless they want to; the compiler will reconstruct them on its own (this is also the case in Caml and SML, but not in Java or C);
it supports parametric polymorphism (type variables), even at higher kinds (unlike Caml and SML, or any other production-ready language known to me);
it has good support for overloading via type classes (unlike Caml and SML); see the sketch below.
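Here is a small sketch of the second and fourth points working together (doubleSum is an invented name; nothing is annotated except the call sites, yet everything is statically typed):

-- No annotation needed: the compiler reconstructs the most general
-- type, doubleSum :: Num a => [a] -> a, via the Num type class.
doubleSum xs = sum (map (* 2) xs)

main :: IO ()
main = do
  print (doubleSum [1, 2, 3 :: Int])       -- 12
  print (doubleSum [1.5, 2.5 :: Double])   -- 8.0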
Whether any of those make Haskell a better language is open to discussion — for example, while I happen to like type classes a lot, I know quite a few Caml programmers who strongly dislike overloading and prefer to use the module system.
On the other hand, the Haskell type system lacks a few features that other languages support elegantly:
it has no support for runtime dispatch (unlike Java, Lisp, and Julia);
it has no support in the standard for existential types and GADTs (both are available as GHC extensions; see the sketch below);
it has no support for dependent types (unlike Coq, Agda and Idris).
Again, whether any of these are desirable features in a general-purpose programming language is open to discussion.
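For the GADTs item above, a minimal sketch of the GHC extension (the Expr type is invented for illustration):

{-# LANGUAGE GADTs #-}

-- Each constructor refines the type parameter, so eval needs no casts,
-- and ill-typed trees such as If (IntE 1) ... are rejected at compile time.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (If c t e) = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolE True) (IntE 1) (IntE 2)))   -- prints 1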
In addition to what others have answered, it is also Haskell's type system that makes the language pure, i.e. which distinguishes between values of a certain type and effectful computations that produce a result of that type.
One major difference between Haskell's type system and that of most OO languages is that the ability for a function to have side effects is represented by a data type (a monad such as IO). This allows you to write pure functions that the compiler can verify are side-effect-free and referentially transparent, which generally means that they're easier to understand and less prone to bugs. It's possible to write side-effect-free code in other languages, but you don't have the compiler's help in doing so. Haskell makes you think more carefully about which parts of your program need to have side effects (such as I/O or mutable variables) and which parts should be pure.
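A brief sketch of that distinction (pureLength and effectfulLength are invented names): both produce an Int, but only the IO type licenses side effects.

-- A pure function: the type guarantees no side effects.
pureLength :: String -> Int
pureLength s = length s

-- An effectful computation producing an Int; the IO in the type
-- is what permits the call to getLine.
effectfulLength :: IO Int
effectfulLength = do
  line <- getLine              -- a side effect, visible in the type
  return (length line)

main :: IO ()
main = do
  print (pureLength "hello")   -- always 5, no effects involved
  n <- effectfulLength         -- reads a line from stdin
  print n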
Also, although it's not quite part of the type system itself, the fact that function definitions in Haskell are expressions (rather than lists of statements) means that more of the code is subject to type-checking. In languages like C++ and Java, it's often possible to introduce logic errors by writing statements in the wrong order, since the compiler doesn't have a way to determine that one statement must precede another. For example, you might have one line that modifies an object's state, and another line that does something important based on that state, and it's up to you to ensure that these things happen in the correct order. In Haskell, this kind of ordering dependency tends to be expressed through function composition — e.g. f (g x) means that g must run first — and the compiler can check the return type of g against the argument type of f to make sure you haven't composed them the wrong way.
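A tiny sketch of that ordering check (parse and summarize are invented names): composing the two steps in the wrong order is a compile-time type error rather than a runtime surprise.

-- parse must run before summarize; the types encode that order.
parse :: String -> [Int]
parse = map read . words

summarize :: [Int] -> Int
summarize = sum

total :: String -> Int
total = summarize . parse      -- OK: parse yields the [Int] summarize needs

-- flipped = parse . summarize
--   rejected at compile time: summarize produces an Int,
--   but parse expects a String

main :: IO ()
main = print (total "1 2 3")   -- prints 6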

Are features of programming languages a concept in semantics, syntax or something else?

When talking about features of programming languages, such as in Programming Language Comparison and D Language Feature Comparison Table, I was wondering what aspect of languages the concept "features" belong to or are discussed under?
Semantics, syntax, or something else?
Thanks and regards!
This is just a gut feeling, I'm not a language theory guy or anything. I'd say adding a feature to a programming language means both of the following (a small sketch follows the list):
adding semantics for certain circumstances or constructions (e.g. "Is-expressions return a boolean according to whether the type of a template argument matches some type according to the following fifty rules: ...")
defining a syntax that belongs to it (e.g. adding IsExpr : "is" "(" someKindOfExpression ")" in the grammar)
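To illustrate that split with a toy interpreter in Haskell (a sketch; Expr and eval are invented names): adding a negation feature means one new form in the abstract grammar and one new semantic rule.

-- Syntax: the abstract grammar gains a new form, Neg.
data Expr = Num Int | Neg Expr

-- Semantics: a rule giving the new form its meaning.
eval :: Expr -> Int
eval (Num n) = n
eval (Neg e) = negate (eval e)

main :: IO ()
main = print (eval (Neg (Num 42)))   -- prints -42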
It depends entirely on what you mean by a "feature," and how it's implemented. Some features, like Java's generics, are nothing but syntactic sugar - so that's a "syntax feature." The generated bytecode is unaffected by using Java's generics due to type erasure. This allows for backwards compatibility with pre-generics (Java 1.4 and earlier) bytecode.
Other language features go much deeper than the syntactic level, like C#'s generics, which are implemented using reification to provide "first-class" generic objects.
I don't think that there is a clean separation for the concept of programming language "features", as many features like garbage collection (Java) or pattern matching (Haskell) are provided by the runtime environment. So, generally I would say that the programming language - the grammar - per se provides no features. It just determines the rules of the language (syntax). As the behaviour is determined by how the code (produced from the grammar by obeying its rules) is interpreted, programming language features are a semantic aspect.

Why are most scripting languages loosely typed?

Why are most scripting languages loosely typed? For example, JavaScript, Python, etc.
First of all, there are some issues with your terminology. There is no such thing as a loosely typed language, and the term scripting language is vague too, most commonly referring to so-called dynamic programming languages.
There is weak typing vs. strong typing, which is about how rigorously different types are distinguished (i.e. whether 1 + "2" yields 3, "12", or an error).
And there is dynamic vs. static typing, which is about when type information is determined: during or before execution.
So now, what is a dynamic language? A language that is interpreted instead of compiled? Surely not, since the way a language is run is never an inherent characteristic of the language, but a pure implementation detail. In fact, there can be interpreters and compilers for one and the same language. There is GHC and GHCi for Haskell; even C has the Ch interpreter.
But then, what are dynamic languages? I'd like to define them through how one works with them.
In a dynamic language, you like to rapidly prototype your program and just get it to work somehow. What you don't want to do is formally specify the behaviour of your programs; you just want them to behave as intended.
Thus if you write
foo = greatFunction(42)
foo.run()
in a scripting language, you'll simply assume that there is some greatFunction taking a number that will return some object you can run. You don't prove this to the compiler in any way - no predetermined types, no IRunnable ... . This automatically gets you into the domain of dynamic typing.
But there is type inference too. Type inference means that in a statically typed language, the compiler automatically figures out the types for you. The resulting code can be extremely concise but is still statically typed. Take for example
square list = map (\x -> x * x) list
in Haskell. Haskell figures out all the types involved here in advance: list is a list of numbers, map is a function that applies some other function to every element of a list, and square produces a list of numbers from another list of numbers.
Nonetheless, the compiler can prove that everything works out in advance - the operations anything supports are formally specified. Hence, I'd never call Haskell a scripting language though it can reach similar levels of expressiveness (if not more!).
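For instance, a GHCi session confirms the reconstructed type (abridged; the exact constraint printing may vary between GHC versions):

ghci> square list = map (\x -> x * x) list
ghci> :type square
square :: Num b => [b] -> [b]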
So all in all, scripting languages are dynamically typed because that allows you to prototype a running system without specifying every single operation involved, merely assuming it exists - which is what scripting languages are used for.
I don't quite understand your question. Apart from PHP, VBScript, COMMAND.COM and the Unix shell(s) I can't really think of any loosely typed scripting languages.
Some examples of scripting languages which are not loosely typed are Python, Ruby, Mondrian, JavaFXScript, PowerShell, Haskell, Scala, ELisp, Scheme, AutoLisp, Io, Ioke, Seph, Groovy, Fantom, Boo, Cobra, Guile, Slate, Smalltalk, Perl, …

Why are most S-Expression languages dynamically typed?

How come most Lisps and Schemes are dynamically typed?
Does static typing not mix with some of their common features?
Typing and s-expressions can be made to work together; see Typed Scheme (now Typed Racket).
Partly it is a historical coincidence that s-expression languages are dynamically typed. These languages tend to rely more heavily on macros, and the ease of parsing and pattern-matching on s-expressions makes macro processing much easier. Most research on sophisticated macros happens in s-expression languages.
Typed Hygienic Macros are hard.
When Lisp was invented in the years from 1958 to 1960, it introduced a lot of features, both as a language and as an implementation (garbage collection, a self-hosting compiler, ...). Some features were inherited (with some improvements) from other languages (list processing, ...). The language implemented computation with functions. The s-expressions were more an implementation detail (at that time) than a language feature. A type system was not part of the language. Using the language in an interactive way was also an early implementation feature.
The useful type systems for functional languages had not yet been invented at that time. Even today it is relatively difficult to use statically typed languages in an interactive way. There are many implementations of statically typed languages which provide some interactive interface, but mostly they don't offer the same level of support for interactive use as a typical Lisp system. Programming in an interactive Lisp system means that many things can be changed on the fly, and it could be problematic if type changes had to be propagated through whole programs and data in such an interactive Lisp system. Note that some Schemers have a different view about these things. R6RS is mostly a batch language, generally not that much in the spirit of Lisp...
The functional languages with static type systems that were invented later also got a non-s-expression syntax and did not offer support for macros or related features. Later, some of these languages/implementations used a preprocessor for syntactic extensions.
Static typing is lexical: it means that all information about types can be inferred from reading the source code, without evaluating any expressions or computing anything (conditionals being most important here). A statically typed language is designed so that this can happen; a better term would be 'lexically typed', as in: a compiler can prove from reading the source alone that no type errors will occur.
In the case of Lisp, this is awkwardly different, because Lisp's source code itself is not static. Lisp is homoiconic: it uses data as code and can, to some extent, dynamically edit its own running source.
Lisp was the first dynamically typed language, and probably for this reason program code itself is no longer lexical in Lisp.
Edit: a far more powerful reason: in the case of static typing, you'd have to type lists. You can either have extremely complex types for each list, accounting for all of its elements, or demand that each element has the same type and type the whole thing as a list of that. The former option produces hell with lists of lists. The latter option demands that source code only contain the same type for each datum, which means that you can't even build expressions, since a list is a different type than an integer.
So I dare say that it is completely and utterly infeasible to realize.
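For what it's worth, here is the tension in Haskell terms (a sketch; SExpr and depth are invented names): a plain list must be homogeneous, and the conventional statically typed workaround is a recursive sum type rather than a raw list.

-- A raw heterogeneous list is rejected:
-- bad = [1, "two"]   -- ill-typed: 1 and "two" cannot share one element type

-- The conventional workaround: one recursive sum type for all shapes.
data SExpr = Atom String | Num Integer | List [SExpr]

depth :: SExpr -> Int
depth (List xs) = 1 + maximum (0 : map depth xs)
depth _         = 0

main :: IO ()
main = print (depth (List [Atom "f", Num 1, List [Num 2]]))   -- prints 2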
