Why are most scripting languages loosely typed? - programming-languages

Why are most scripting languages loosely typed? For example, JavaScript, Python, etc.?

First of all, there are some issues with your terminology. There is no such thing as a loosely typed language, and the term scripting language is vague too, most commonly referring to so-called dynamic programming languages.
There is weak typing vs. strong typing, which is about how rigorously different types are distinguished (i.e. whether 1 + "2" yields 3 or an error).
And there is dynamic vs. static typing, which is about when type information is determined - at run time or before it.
So now, what is a dynamic language? A language that is interpreted instead of compiled? Surely not, since the way a language is run is never some inherent characteristic of the language, but a pure implementation detail. In fact, there can be interpreters and compilers for one-and-the-same language. There is GHC and GHCi for Haskell, even C has the Ch interpreter.
But then, what are dynamic languages? I'd like to define them through how one works with them.
In a dynamic language, you like to rapidly prototype your program and just get it to work somehow. What you don't want to do is formally specify the behaviour of your programs; you just want them to behave as intended.
Thus if you write
foo = greatFunction(42)
foo.run()
in a scripting language, you'll simply assume that there is some greatFunction taking a number that returns some object you can run. You don't prove this for the compiler in any way - no predetermined types, no IRunnable ... . This automatically gets you into the domain of dynamic typing.
But there is type inference too. Type inference means that in a statically-typed language, the compiler automatically figures out the types for you. The resulting code can be extremely concise but is still statically typed. Take for example
square list = map (\x -> x * x) list
in Haskell. Haskell figures out all the types involved here in advance. We have list being a list of numbers, map being a function that applies some other function to every element of a list, and square producing a list of numbers from another list of numbers.
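For illustration, this is the signature GHC infers on its own for that definition (the exact type-variable name it picks may differ); no annotation is needed in the source:

square :: Num a => [a] -> [a]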
Nonetheless, the compiler can prove that everything works out in advance - the operations anything supports are formally specified. Hence, I'd never call Haskell a scripting language though it can reach similar levels of expressiveness (if not more!).
So all in all, scripting languages are dynamically typed because that allows you to prototype a running system without specifying every single operation involved - you just assume it exists - which is what scripting languages are used for.

I don't quite understand your question. Apart from PHP, VBScript, COMMAND.COM and the Unix shell(s) I can't really think of any loosely typed scripting languages.
Some examples of scripting languages which are not loosely typed are Python, Ruby, Mondrian, JavaFXScript, PowerShell, Haskell, Scala, ELisp, Scheme, AutoLisp, Io, Ioke, Seph, Groovy, Fantom, Boo, Cobra, Guile, Slate, Smalltalk, Perl, …

Related

Is Haskell an imperative or declarative paradigm?

I have read some sources where Haskell's paradigm is described as functional but also imperative. The main source where this is said is Wikipedia. How is a functional and an imperative paradigm possible at the same time, or is this a mistake?
Functional languages generally use a declarative methodology when describing their programs, and Haskell is definitely one of these languages; Haskell programs are described as a set of declarations of values (including functions, because functions are first class values).
However, in order for a computer to execute a program, it has to follow a sequence of instructions. To that end, Haskell has a special syntax (called "do notation") in which you can describe what ends up as sequencing when the program is executed. Each line of this syntax looks like the next "statement" in a normal imperative language, and kind of behaves that way, too. Really, though, it's just syntactic sugar for ordinary Haskell expressions built from function application.
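A minimal sketch of what that looks like; the second definition shows the standard desugaring of the first into ordinary function application:

main :: IO ()
main = do
  name <- getLine
  putStrLn ("Hello, " ++ name)

-- the same program without do notation
main' :: IO ()
main' = getLine >>= \name -> putStrLn ("Hello, " ++ name)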
Because it's still written in Haskell, though, and Haskell values and expressions are for the most part typesafe, Haskell is often called "the best imperative language".
As long as changing the order of the atomic parts of a program can change its output, you can safely smack "imperative" on the language. Almost any high-level language is "declarative" in some way, and there are languages that support "declarative programming" better than Haskell does.
In C, for example, enums and even the switch statement can be "declarative", if you use them in a certain way. Much further in the "declarative" direction is Prolog: you can write programs in a declarative style, but again, you can also use Prolog to write an imperative program.
Declarative, but there is some caveat to that.
Alonzo Church created the lambda calculus, which Haskell is based on, and Alan Turing created the Turing machine. Lambda calculus is purely functions; a Turing machine changes the state of a program. Sound familiar?
Except it has been shown that these two are equivalent. You can build a Turing machine from lambda calculus and vice versa. Which means everything inside Haskell is really just functions, but from them you can build an imperative style of programming.
So yeah, declarative. You don't tell Haskell how to do things, you tell it what a thing is and then build from it.

In what way is Haskell's type system more helpful than the type system of another statically typed language?

I have been using Haskell for a while now. I understand most/some of the concepts, but I still do not understand what exactly Haskell's type system allows me to do that I cannot do in another statically typed language. I just intuitively know that Haskell's type system is better in every imaginable way compared to the type system in C, C++ or Java, but I can't explain it logically, primarily because of a lack of in-depth knowledge about the differences in type systems between Haskell and other statically typed languages.
Could someone give me examples of how Haskell's type system is more helpful compared to a language with a static type system? Examples that are terse and can be succinctly expressed would be nice.
The Haskell type system has a number of features which all exist in other languages, but are rarely combined within a single, consistent language:
it is a sound, static type system, meaning that a number of errors are guaranteed not to happen at runtime without needing runtime type checks (this is also the case in Caml, SML and almost the case in Java, but not in, say, Lisp, Python, C, or C++);
it performs static type reconstruction, meaning that the programmer doesn't need to write types unless he wants to, the compiler will reconstruct them on its own (this is also the case in Caml and SML, but not in Java or C);
it supports parametric polymorphism (type variables), even at higher kinds (unlike Caml and SML, or any other production-ready language known to me);
it has good support for overloading (type classes) (unlike Caml and SML); a small sketch of this follows below.
Whether any of those make Haskell a better language is open to discussion — for example, while I happen to like type classes a lot, I know quite a few Caml programmers who strongly dislike overloading and prefer to use the module system.
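As a small illustration of the overloading point (the class and function names are invented for this sketch, not taken from the answer): the compiler checks each instance and infers the constrained type of code that uses the class.

class Pretty a where
  pretty :: a -> String

instance Pretty Bool where
  pretty b = if b then "yes" else "no"

instance Pretty Int where
  pretty n = "#" ++ show n

-- no annotation needed; GHC infers prettyAll :: Pretty a => [a] -> String
prettyAll xs = unwords (map pretty xs)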
On the other hand, the Haskell type system lacks a few features that other languages support elegantly:
it has no support for runtime dispatch (unlike Java, Lisp, and Julia);
it has no support for existential types and GADTs (these are both GHC extensions; a short GADT sketch follows after this list);
it has no support for dependent types (unlike Coq, Agda and Idris).
Again, whether any of these are desirable features in a general-purpose programming language is open to discussion.
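For context, here is roughly what the GADTs extension mentioned above looks like when it is enabled (the small expression language is made up for illustration):

{-# LANGUAGE GADTs #-}

-- a tiny expression language; the type index makes ill-typed terms unrepresentable
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e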
In addition to what others have answered, it is also Haskell's type system that makes the language pure, i.e. which distinguishes between values of a certain type and effectful computations that produce a result of that type.
One major difference between Haskell's type system and that of most OO languages is that the ability for a function to have side effects is represented by a data type (a monad such as IO). This allows you to write pure functions that the compiler can verify are side-effect-free and referentially transparent, which generally means that they're easier to understand and less prone to bugs. It's possible to write side-effect-free code in other languages, but you don't have the compiler's help in doing so. Haskell makes you think more carefully about which parts of your program need to have side effects (such as I/O or mutable variables) and which parts should be pure.
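A minimal sketch of that distinction (the function names are made up for illustration):

-- a pure function: its type guarantees it performs no side effects
applyTax :: Double -> Double
applyTax price = price * 1.2

-- an effectful computation: the IO in the type records that it may perform I/O
reportTotal :: Double -> IO ()
reportTotal price = putStrLn ("Total: " ++ show (applyTax price))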
Also, although it's not quite part of the type system itself, the fact that function definitions in Haskell are expressions (rather than lists of statements) means that more of the code is subject to type-checking. In languages like C++ and Java, it's often possible to introduce logic errors by writing statements in the wrong order, since the compiler doesn't have a way to determine that one statement must precede another. For example, you might have one line that modifies an object's state, and another line that does something important based on that state, and it's up to you to ensure that these things happen in the correct order. In Haskell, this kind of ordering dependency tends to be expressed through function composition — e.g. f (g x) means that g must run first — and the compiler can check the return type of g against the argument type of f to make sure you haven't composed them the wrong way.
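A small sketch of that last point (names invented for illustration): the composition only type-checks in one order, so getting the sequence wrong is a compile-time error rather than a silent logic bug.

readAge :: String -> Int
readAge = read

isAdult :: Int -> Bool
isAdult age = age >= 18

canVote :: String -> Bool
canVote = isAdult . readAge   -- readAge . isAdult would be rejected by the type checker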

Language features helpful for writing quines (self-printing programs)?

OK, for those who have never encountered the term, a quine is a "self-replicating" computer program. To be more specific, one which - upon execution - produces a copy of its own source code as its only output.
Quines can, of course, be developed in many programming languages (but not all); but some languages are obviously more suited to producing quines than others (to clearly understand the somewhat subjective-sounding "more suited", look at the Haskell example vs. the C example on the Wikipedia page - and I provide my more objective definition below).
The question I have is: from a programming-language perspective, what language features (either theoretical design ones or syntactic sugar) make a language more suitable/helpful for writing quines?
My definition of "more suitable" is "quines are easier to write" and "are shorter/more readable/less obfuscated". But you're welcome to add more criteria that are at least somewhat objective.
Please note that this question explicitly excludes degenerate cases, like a language which is designed to contain "print_a_quine" primitive.
I am not entirely sure, so correct me if any of you knows better.
I agree with both other answers, and go further by explaining that a quine is this:
Y g
where Y is the Y fixed-point combinator (or any other fixed-point combinator), which means in lambda calculus:
Y g = g(Y g)
Now it is quite apparent that we need the code to be data and g to be a function which will print its argument.
So to summarize: to construct such a quine we need functions, a printing function, a fixed-point combinator, and a call-by-name evaluation strategy.
The smallest language that satisfies these conditions is, AFAIK, Zot from the Iota and Jot family.
Languages like the Io Programming Language and others allow treating code as data. In tree-walking systems, this typically allows the language implementer to expose the abstract syntax tree as a first-class citizen. In the case of Io, this is what it does. Being object-oriented, the AST is modelled around Message objects, and a special sentinel is created to represent the currently executing message; this sentinel is called thisMessage. thisMessage is a full Message like any other, and responds to the print message, which prints it to the screen. As a result, the shortest quine I've ever been able to produce in any language has come from Io and looks like this:
thisMessage print
Anyway, I just couldn't help sharing this with you on this subject. The above certainly makes writing quines easy, but not doing it this way doesn't preclude easily creating a quine either.
I'm not sure if this is a useful answer from a practical point of view, but there is some useful theory in computability theory. In particular, fixed points and Kleene's recursion theorem can be used for writing quines. Apparently, the theory can be used for writing a quine in LISP (as the Wikipedia page shows).
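As a concrete illustration (a well-known construction, not taken from any of the answers above), a minimal Haskell quine stores its own text as data and uses show to quote that data back into code:

main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "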

Why are most S-Expression languages dynamically typed?

How come most Lisps and Schemes are dynamically typed?
Does static typing not mix with some of their common features?
Typing and s-expressions can be made to work together; see Typed Scheme.
Partly it is a historical coincidence that s-expression languages are dynamically typed. These languages tend to rely more heavily on macros, and the ease of parsing and pattern-matching on s-expressions makes macro processing much easier. Most research on sophisticated macros happens in s-expression languages.
Typed Hygienic Macros are hard.
When Lisp was invented in the years from 1958 to 1960 it introduced a lot of features, both as a language and an implementation (garbage collection, a self-hosting compiler, ...). Some features were inherited (with some improvements) from other languages (list processing, ...). The language implemented computation with functions. The s-expressions were more an implementation detail (at that time) than a language feature. A type system was not part of the language. Using the language in an interactive way was also an early implementation feature.
The useful type systems for functional languages were not yet invented at that time. Still, until today it is relatively difficult to use statically typed languages in an interactive way. There are many implementations of statically typed languages which also provide some interactive interface - but mostly they don't offer the same level of support for interactive use as a typical Lisp system. Programming in an interactive Lisp system means that many things can be changed on the fly, and it could be problematic if type changes had to be propagated through whole programs and data in such an interactive Lisp system. Note that some Schemers have a different view about these things. R6RS is mostly a batch language, generally not that much in the spirit of Lisp...
The functional languages that were invented later with static type systems then also got a non-s-expression syntax - they did not offer support for macros or related features. Later, some of these languages/implementations used a preprocessor for syntactic extensions.
Static typing is lexical: it means that all information about types can be inferred from reading the source code without evaluating any expressions or computing anything, conditionals being most important here. A statically typed language is designed so that this can happen; a better term would be 'lexically typed', as in, a compiler can prove from reading the source alone that no type errors will occur.
In the case of Lisp, this is awkwardly different, because Lisp's source code itself is not static. Lisp is homoiconic, it uses data as code, and it can to some extent dynamically edit its own running source.
Lisp was the first dynamically typed language, and probably for this reason program code itself is no longer lexical in Lisp.
Edit: a far more powerful reason: in the case of static typing you'd have to type lists. You can either have extremely complex types for each list which account for all elements, or demand that each element has the same type and type it as a list of that. The former option will produce hell with lists of lists. The latter option demands that source code only contain the same type for each datum, which means that you couldn't even build expressions, as a list is anyhow a different type than an integer.
So I dare say that it is completely and utterly infeasible to realize.
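A rough sketch of that dilemma in Haskell terms (the types and names are made up for illustration): a list must be homogeneous, so code-as-data ends up wrapped in a single sum type.

-- a homogeneous list is fine
nums :: [Integer]
nums = [1, 2, 3]

-- mixed = [1, "two"]   -- rejected: the elements have different types

-- typing s-expressions means wrapping every kind of element in one type
data SExpr = Atom String | Num Integer | List [SExpr]

expr :: SExpr
expr = List [Atom "+", Num 1, Num 2]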

Does functional programming mandate new naming conventions?

I recently started studying functional programming using Haskell and came upon this article on the official Haskell wiki: How to read Haskell.
The article claims that short variable names such as x, xs, and f are fitting for Haskell code, because of conciseness and abstraction. In essence, it claims that functional programming is such a distinct paradigm that the naming conventions from other paradigms don't apply.
What are your thoughts on this?
In a functional programming paradigm, people usually construct abstractions not only top-down, but also bottom-up. That means you basically enhance the host language. In this kind of situation I see terse naming as appropriate. The Haskell language is already terse and expressive, so you should be kind of used to it.
However, when trying to model a certain domain, I don't believe succinct names are good, even when the function bodies are small. Domain knowledge should reflect in naming.
Just my opinion.
In response to your comment
I'll take two code snippets from Real World Haskell, both from chapter 3.
In the section named "A more controlled approach", the authors present a function that returns the second element of a list. Their final version is this:
tidySecond :: [a] -> Maybe a
tidySecond (_:x:_) = Just x
tidySecond _ = Nothing
The function is generic enough, due to the type parameter a and the fact that we're acting on a built-in type, that we don't really care what the second element actually is. I believe x is enough in this case. Just like in a little mathematical equation.
On the other hand, in the section named "Introducing local variables", they're writing an example function that tries to model a small piece of the banking domain:
lend amount balance = let reserve    = 100
                          newBalance = balance - amount
                      in if balance < reserve
                         then Nothing
                         else Just newBalance
Using short variable names here is certainly not recommended. We actually do care what those amounts represent.
I think if the semantics of the arguments are clear within the context of the code then you can get away with short variable names. I often use these in C# lambdas for the same reason. However if it is ambiguous, you should be more explicit with naming.
map :: (a->b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
To someone who hasn't had any exposure to Haskell, that might seem like ugly, unmaintainable code. But most Haskell programmers will understand this right away. So it gets the job done.
var list = new int[] { 1, 2, 3, 4, 5 };
int countEven = list.Count(n => n % 2 == 0);
In that case, a short variable name seems appropriate.
list.Aggregate(0, (total, value) => total += value);
But in this case it seems more appropriate to name the variables, because it isn't immediately apparent what the Aggregate is doing.
Basically, I believe in not worrying too much about convention unless it's absolutely necessary to keep people from screwing up. If you have any choice in the matter, use what makes sense in the context (language, team, block of code) you are working in, and will be understandable by someone else reading it hours, weeks or years later. Anything else is just time-wasting OCD.
I think scoping is the #1 reason for this. In imperative languages, dynamic variables, especially global ones, need to be named properly, as they're used in several functions. With lexical scoping, it's clear what the symbol is bound to at compile time.
Immutability also contributes to this to some extent- in traditional languages like C/ C++/ Java, a variable can represent different data at different points in time. Therefore, it needs to be given a name to give the programmer an idea of its functionality.
Personally, I feel that features like first-class functions make symbol names pretty redundant. In traditional languages, it's easier to relate to a symbol; based on its usage, we can tell if it's data or a function.
I'm studying Haskell now, but I don't feel that its naming conventions are so very different. Of course, in Java you will hardly find names like xs. But it is easy to find names like x in some mathematical functions, or i and j for counters, etc. I consider such names to be perfectly appropriate in the right context. xs in Haskell is appropriate only in generic functions over lists. There are a lot of them in Haskell, so this name is widespread. Java doesn't provide an easy way to handle such generic abstractions; that's why names for lists (and lists themselves) are usually much more specific, e.g. lists or users.
I just attended a number of talks on Haskell with lots of code samples. As long as the code dealt with x, i and f the naming didn't bother me. However, as soon as we got into heavy-duty list manipulation and the like, I found the three-letter-or-so names to be a lot less readable than I prefer.
To be fair a significant part of the naming followed a set of conventions, so I assume that once you get into the lingo it will be a little easier.
Fortunately, nothing prevents us from using meaningful names, but I don't agree that the language itself somehow makes three letter identifiers meaningful to the majority of people.
When in Rome, do as the Romans do
(Or as they say in my town: "Donde fueres, haz lo que vieres" - wherever you go, do as you see done)
Anything that aids readability is a good thing - meaningful names are therefore a good thing in any language.
I use short variable names in many languages but they're reserved for things that aren't important in the overall meaning of the code or where the meaning is clear in the context.
I'd be careful how far I took the advice about Haskell names
My Haskell practice is only of a mediocre level, thus I dare to try to reply only to the second, more general part of your question:
"In essence, it claims that functional programming is such a distinct paradigm that the naming conventions from other paradigms don't apply."
I suspect the answer is "yes", but my motivation behind this opinion is based only on experience with just one single functional language. Still, it may be interesting, because this is an extremely minimalistic one, thus theoretically very "pure", and underlying a lot of practical functional languages.
I was curious how easy it is to write practical programs in such an "extremely" minimalistic functional programming language like combinatory logic.
Of course, functional programming languages lack mutable variables, but combinatory logic goes one step further and lacks even formal parameters. It lacks any syntactic sugar, it lacks any predefined datatypes, even booleans or numbers. Everything must be mimicked by combinators and traced back to the applications of just two basic combinators.
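A sketch of those two basic combinators, written as Haskell functions purely to pin down what they do (not part of combinatory logic itself, which has no named parameters):

-- K discards its second argument; S distributes an argument over an application
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- everything else can be built from them; e.g. the identity combinator:
i :: a -> a
i = s k k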
Despite such extreme minimalism, there are still practical methods for "programming" combinatory logic in a neat and pleasant way. I have written a quine in it in a modular and reusable way, and it would not even be nasty to bootstrap a self-interpreter on top of it.
In summary, I noticed the following things while using this extremely minimalistic functional programming language:
There is a need to invent a lot of auxiliary functions. In Haskell, there is a lot of syntactic sugar (pattern matching, formal parameters). You can write quite complicated functions in a few lines. But in combinatory logic, a task that could be expressed in Haskell by a single function must be replaced with well-chosen auxiliary functions. The burden of replacing Haskell's syntactic sugar is carried by cleverly chosen auxiliary functions in combinatory logic. As for answering your original question: it is worth inventing meaningful and catchy names for these legions of auxiliary functions, because they can be quite powerful and reusable in many further contexts, sometimes in an unexpected way.
Moreover, a programmer of combinatory logic is not only forced to find catchy names for a bunch of cleverly chosen auxiliary functions, but even more, he is forced to (re)invent whole new theories. For example, for mimicking lists, the programmer is forced to mimic them with their fold functions; basically, he has to (re)invent catamorphisms, deep algebraic and category theory concepts.
I conjecture that several differences can be traced back to the fact that functional languages have a powerful "glue".
In Haskell, meaning is conveyed less with variable names than with types. Being purely functional has the advantage of being able to ask for the type of any expression, regardless of context.
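For instance, GHCi lets you ask for the type of any expression with :type (prompt and output shown in their usual form):

ghci> :type map
map :: (a -> b) -> [a] -> [b]
ghci> :type map (+ 1)
map (+ 1) :: Num b => [b] -> [b]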
I agree with a lot of the points made here about argument naming, but a quick 'find on page' shows that no one has mentioned tacit programming (aka point-free / pointless). Whether this is easier to read may be debatable, so it's up to you & your team, but it is definitely worth thorough consideration.
No named arguments = No argument naming conventions.
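For illustration (a standard sort of example, not taken from the answer above), a point-free definition leaves no parameter to name in the first place:

sumOfSquares :: Num a => [a] -> a
sumOfSquares = sum . map (^ 2)
-- the pointful equivalent: sumOfSquares xs = sum (map (^ 2) xs)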
