Are variables declared in a programming language **terminal** symbols or **non-terminal** symbols? - automata-theory

I have a question in computation theory and automata: are the variables declared in a programming language terminal symbols or non-terminal symbols?

According to automata theory, variables in programming are nonterminals, i.e. members of a finite set of symbols, each of which represents a language.
Nonterminal symbols (also called syntactic variables) are replaced by groups of terminal symbols according to the production rules of a grammar.
In programming, variables are considered non-terminals because a variable represents data; the variable itself is not the data. It is just a name standing in for the data, much as a nonterminal in automata theory stands in for the strings it derives.
Since variables in programming obey the defining properties of non-terminals in automata theory, and since compilers for programming languages are designed using automata, variables are said to be non-terminals.
Variables in programming are non-terminals
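The replacement process described above can be sketched with a toy grammar. The rule names and terminals below (DECL, NAME, and so on) are purely illustrative: nonterminals (uppercase) get rewritten by production rules, terminals (lowercase) are emitted as-is.

```python
# A minimal sketch of production rules for a toy declaration grammar.
rules = {
    "DECL": [["type", "NAME", ";"]],  # DECL -> type NAME ;
    "NAME": [["x"], ["y"]],           # NAME -> x | y
}

def derive(symbol):
    """Expand a symbol until only terminal symbols remain."""
    if symbol not in rules:           # terminal: emit as-is
        return [symbol]
    production = rules[symbol][0]     # always pick the first alternative
    out = []
    for s in production:
        out.extend(derive(s))
    return out

print(derive("DECL"))  # ['type', 'x', ';']
```

The nonterminal NAME never appears in the derived string; it is only a stand-in that the production rules replace with terminals.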

Related

How many languages does a DFA recognize?

According to Sipser's "Introduction to the Theory of Computation": "If A is the set of all strings that machine M accepts, we say that A is the language of machine M and write L(M) = A. We say that M recognizes A ... A machine may accept several strings, but it always recognizes only one language." And also: "We say that M recognizes language A if A = {w | M accepts w}."
I guess the question has already been answered, but I would like to know if anyone has any thoughts about it: is there anything interesting we can say about the subsets of a regular language? Can we say that the original DFA recognizes them, and is there any interesting relationship between the original DFA and the ones that recognize the smaller languages?
If the language recognized by a DFA (of which there is always exactly one) is finite, then there are finitely many sublanguages of that language (indeed, if the language accepted consists of N strings, there are 2^N sublanguages).
There is no useful relationship which can be easily inferred from the sub/super language relationship w.r.t. where in the Chomsky hierarchy the language falls. That is: a sublanguage of a regular language may be undecidable, and a sublanguage of an undecidable language may be regular, with all possible variations in between.
Because of this, there is no particularly neat relationship to be worked out among DFAs of sub/super languages: not all of the sublanguages will even be regular; some sublanguages will have simpler DFAs than the DFA of the super language, and some will have more complicated DFAs than the DFA of the super language. Some will have the same DFA but a different set of accepting states.
Given a DFA, there is only one language corresponding to the machine. A language is a set, that is, a collection of all the strings accepted by the DFA.
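To illustrate the "accepts many strings, recognizes one language" distinction, here is a toy DFA (state names are illustrative) over {0, 1} whose single language is the set of binary strings ending in 1:

```python
# A sketch of a DFA: two states, q1 accepting.
def dfa_accepts(w):
    state = "q0"
    transitions = {("q0", "0"): "q0", ("q0", "1"): "q1",
                   ("q1", "0"): "q0", ("q1", "1"): "q1"}
    for ch in w:
        state = transitions[(state, ch)]
    return state == "q1"   # q1 is the only accepting state

# The machine accepts several individual strings...
print(dfa_accepts("1"), dfa_accepts("01"), dfa_accepts("10"))  # True True False
# ...but L(M) = {w | M accepts w} is a single set:
# the binary strings ending in '1'.
```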

How can we distinguish between regular languages and context free languages?

To express regular languages we use regexps, and for context-free languages we can use a stack-like memory. I know context-free languages have features such as center embedding, but I'm still not sure when we can be confident that a given language is context-free. For example, why is natural language not a regular language? Is there any reason besides center embedding?
Automata theory states that a regular language can be processed by a finite state machine (FSM). However, if a language has "center embedding", then that language is a context-free language (CFL), which requires a push-down automaton (PDA).
Importantly, a PDA is an FSM with an additional resource: a memory-like device, a "stack" or "counter", used to keep track of the embeddings.
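A minimal sketch of that idea, using the textbook center-embedded language a^n b^n (which no FSM can recognize, since it would need unbounded memory to count the a's): the stack records each a so it can be matched against a b.

```python
# A PDA-style recognizer for a^n b^n, sketched with an explicit stack.
def accepts_anbn(w):
    stack = []
    i = 0
    while i < len(w) and w[i] == "a":   # push one marker per 'a'
        stack.append("A")
        i += 1
    while i < len(w) and w[i] == "b":   # pop one marker per 'b'
        if not stack:
            return False                # more b's than a's
        stack.pop()
        i += 1
    # accept only if all input is consumed and every 'a' was matched
    return i == len(w) and not stack

print(accepts_anbn("aaabbb"), accepts_anbn("aab"))  # True False
```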
Wikipedia says in Languages that are not context-free:
To prove that a given language is not context-free, one may employ
the pumping lemma for context-free languages or a number of other
methods, such as Ogden's lemma or Parikh's theorem.
Wikipedia says in Deciding whether a language is regular:
To prove that a language is not regular, one often uses
the Myhill–Nerode theorem or the pumping lemma among other methods.
Why is natural language not a regular language?
Chomsky said in (1957): "English is not a regular language". As for context-free languages, "I do not know whether or not English is itself literally outside the range of such analyses".
I would add that English permits unboundedly nested constructions, which a machine with only finitely many states cannot keep track of.

Are features of programming languages a concept in semantics, syntax or something else?

When talking about features of programming languages, such as in Programming Language Comparison and the D Language Feature Comparison Table, I was wondering: what aspect of languages does the concept of "features" belong to, or get discussed under?
Semantics,
syntax
or something else?
Thanks and regards!
This is just a gut feeling, I'm not a language theory guy or anything. I'd say adding a feature to a programming language means both
adding semantics for certain circumstances or constructs (e.g. "is-expressions return a boolean according to whether the type of a template argument matches some type according to the following fifty rules: ...")
defining a syntax that belongs to it (e.g. adding IsExpr : "is" "(" someKindOfExpression ")" to the grammar)
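Those two halves can be sketched in a toy interpreter. The is-expression below is purely illustrative (loosely inspired by D's, but not any real language's feature): one function implements the syntax rule, the other gives the form its meaning.

```python
# Syntax half: recognize the form  IsExpr -> "is" "(" name ")"
def parse(tokens):
    if tokens[:2] == ["is", "("] and tokens[3] == ")":
        return ("IsExpr", tokens[2])
    raise SyntaxError("not an is-expression")

# Semantics half: an IsExpr yields True iff the name denotes a known type.
def evaluate(node, known_types):
    kind, name = node
    if kind == "IsExpr":
        return name in known_types
    raise ValueError("unknown node")

ast = parse(["is", "(", "int", ")"])
print(evaluate(ast, {"int", "float"}))  # True
```

The feature only exists once both pieces are present: the grammar rule alone gives you a parseable form with no meaning, and the evaluation rule alone has nothing to attach to.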
It depends entirely on what you mean by a "feature," and how it's implemented. Some features, like Java's generics, are nothing but syntactic sugar, so that's a "syntax feature." The bytecode generated is unaffected by using Java's generics due to type erasure, which allows for backwards compatibility with pre-generics (pre-Java 5) bytecode.
Other language features go much deeper than the syntactic level, like C#'s generics, which are implemented using reification to provide "first-class" generic objects.
I don't think there is a clean separation for the concept of programming language "features", as many features, like garbage collection (Java) or pattern matching (Haskell), are provided by the runtime environment. So, generally, I would say that the programming language, i.e. the grammar, per se provides no features; it just determines the rules of the language (syntax). As the behaviour is determined by how the code (produced by obeying the grammar's rules) is interpreted, programming language features are a semantic aspect.

Why are most scripting languages loosely typed?

Why are most scripting languages loosely typed? For example, JavaScript, Python, etc.?
First of all, there are some issues with your terminology. There is no such thing as a loosely typed language, and the term scripting language is vague too, most commonly referring to so-called dynamic programming languages.
There is weak typing vs. strong typing, which concerns how rigorously different types are distinguished (i.e. whether 1 + "2" yields 3 or an error).
And there is dynamic vs. static typing, which concerns when type information is determined: while running or before.
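The two axes are independent, and Python itself shows it: Python is dynamically typed (types are checked at run time) but strongly typed (no silent coercion between unrelated types), whereas weakly typed JavaScript evaluates 1 + "2" to "12".

```python
# Python refuses to mix int and str: the error surfaces at run time,
# which is dynamic *and* strong typing at once.
try:
    result = 1 + "2"
except TypeError as e:
    result = f"error: {e}"

print(result)  # error: unsupported operand type(s) for +: 'int' and 'str'
```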
So now, what is a dynamic language? A language that is interpreted instead of compiled? Surely not, since the way a language is run is never an inherent characteristic of the language, but a pure implementation detail. In fact, there can be interpreters and compilers for one and the same language: there are GHC and GHCi for Haskell, and even C has the Ch interpreter.
But then, what are dynamic languages? I'd like to define them by how one works with them.
In a dynamic language, you want to rapidly prototype your program and just get it working somehow. What you don't want to do is formally specify the behaviour of your programs; you just want them to behave as intended.
Thus if you write
foo = greatFunction(42)
foo.run()
in a scripting language, you'll simply assume that there is some greatFunction taking a number that returns some object you can run. You don't prove this to the compiler in any way: no predetermined types, no IRunnable ... . This automatically puts you in the domain of dynamic typing.
But there is type inference too. Type inference means that in a statically-typed language, the compiler automatically figures out the types for you. The resulting code can be extremely concise but is still statically typed. Take for example
square list = map (\x -> x * x) list
in Haskell. Haskell figures out all the types involved here in advance: list is a list of numbers, map is a function that applies some other function to every element of a list, and square produces a list of numbers from another list of numbers.
Nonetheless, the compiler can prove that everything works out in advance - the operations anything supports are formally specified. Hence, I'd never call Haskell a scripting language though it can reach similar levels of expressiveness (if not more!).
So all in all, scripting languages are dynamically typed because that allows you to prototype a running system without specifying every single operation involved, merely assuming it exists, and that is exactly what scripting languages are used for.
I don't quite understand your question. Apart from PHP, VBScript, COMMAND.COM and the Unix shell(s) I can't really think of any loosely typed scripting languages.
Some examples of scripting languages which are not loosely typed are Python, Ruby, Mondrian, JavaFXScript, PowerShell, Haskell, Scala, ELisp, Scheme, AutoLisp, Io, Ioke, Seph, Groovy, Fantom, Boo, Cobra, Guile, Slate, Smalltalk, Perl, …

Conditions for a meta-circular evaluator

Are there any conditions that a language must satisfy so that a meta-circular evaluator can be written for that language? Can I write one for BASIC, or for Python?
To quote Reg Braithwaite:
The difference between self-interpreters and meta-circular interpreters is that the latter restate language features in terms of the features themselves, instead of actually implementing them. (Circular definitions, in other words; hence the name). They depend on their host environment to give the features meaning.
Given that, one of the key features of a language that allows a meta-circular interpreter to be written for it is homoiconicity: that the primary representation of the program is a primitive data structure of the language itself. Lisp exhibits this by virtue of the fact that programs are themselves expressed as lists.
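The homoiconicity idea can be sketched in Python by representing programs as nested lists, Lisp-style, and letting the evaluator walk them directly. Note the circularity Braithwaite describes: the toy language's "if" is defined in terms of the host's own if. All operator and variable names below are illustrative.

```python
# A minimal evaluator over programs-as-lists.
def evaluate(expr, env):
    if isinstance(expr, str):           # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):      # a literal number
        return expr
    op, *args = expr
    if op == "+":
        return sum(evaluate(a, env) for a in args)
    if op == "if":                      # meta-circular move: reuse host 'if'
        cond, then, alt = args
        return evaluate(then, env) if evaluate(cond, env) else evaluate(alt, env)
    raise ValueError(f"unknown operator: {op}")

program = ["if", ["+", 0, 1], ["+", "x", 2], 0]   # the program is a list
print(evaluate(program, {"x": 40}))               # 42
```

Because the program is an ordinary list, the language needs no separate parser or AST machinery to interpret itself, which is exactly the advantage homoiconicity buys.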
You can write it for any language that is Turing-complete, however, your mileage may vary.
For Python, it has been done (PyPy). A list of languages for which it has been done can be found at the Wikipedia article.

Resources