Why is Verilog not considered a programming language? [closed] - programming-languages

In class the professor said that students shouldn't say that they learned to program in Verilog. He said something like: Verilog isn't used to program; it's used to design. So how is Verilog different from other programming languages?

Verilog, just like VHDL, is meant to describe hardware. By contrast, programming languages such as C or C++ provide a high-level description of software programs, that is, a series of instructions that a microprocessor executes.
In practice, Verilog and VHDL do not offer the same features as programming languages, even though they look very much alike. For instance, a for loop in C/C++ describes the sequential execution of a given snippet of code; a for ... generate loop in Verilog/VHDL, on the other hand, describes multiple parallel instances of the same hardware building block (say, an AND logic gate). To be precise, there is also a plain for loop in Verilog, but again, it has to be "synthesizable", that is, the compiler must be able to generate logic that fits the description.
Typically, a beginner in Verilog/VHDL will be tempted to "translate" a given function/algorithm from a C/C++ type of pseudocode directly into Verilog/VHDL: surprisingly, it might sometimes work, but it always leads to dramatically poor design. One must really be aware of these differences in order to become a good Verilog/VHDL programmer.

Verilog is a hardware description language. Programming languages are generally understood to be languages for telling existing hardware what to do, not for reconfiguring said hardware.

I don't know anything about Verilog, but a quick googling suggests the wiki pages do a pretty good job of explaining the conceptual differences your teacher seemed to be alluding to. As some of the other posters here wrote, I wouldn't dismiss this as not a programming language. There's a high tendency for programmers to believe that if it isn't application programming or assembly programming then it's not really programming, but in short that's BS. Everything above machine code is basically the same to me: if it's a file I give to a computer and it tells the computer how to do something, it's programming the computer (I guess the problem is drawing a line between users and developers; we like to feel special). Unless we plan to roll back to punch cards sometime soon, if a language lets you describe behavior in a syntactically strict (well-defined) way and modifies what the computer outputs for a given input, then you've done some programming in one sense or another.
http://dictionary.reference.com/browse/programming
From the wiki page:
http://en.wikipedia.org/wiki/Dataflow_language
Dataflow programming focuses on how things connect, unlike imperative programming, which focuses on how things happen. In imperative programming a program is modeled as a series of operations (things that "happen"); the flow of data between these operations is of secondary concern to the behavior of the operations themselves. Dataflow programming, however, models programs as a series of (sometimes interdependent) connections, with the operations between these connections being of secondary importance.
(I think the key here is the qualifiers on the type of programming, not that one is a "programming language" and the other is a "design language"; from what I understand they're both programming languages, they just have distinct purposes and implementations). When I think of design I basically think of this:
http://dictionary.reference.com/browse/design
and that is not a program, although a program may utilize designs (and probably should; these are generally referred to as design patterns, but that's not what you're doing here)
Linked in from: http://en.wikipedia.org/wiki/Verilog
To your teacher's point, this language would likely be used to solve different problems from your everyday Java/C program, and via different means, but to say it is not programming seems wrong.

Because it is an HDL (hardware description language), it is used to define hardware: anything done in Verilog (not really anything, but the synthesizable subset) will be synthesized into actual hardware. So you can't just use programming features like classes and other OOP concepts, because they can't create any hardware.
In C, by contrast, everything is compiled into an executable file, which is loaded into your RAM when the program executes.
Another basic difference is that everything in hardware is concurrent, so if you have written a=b+1 and c=d+1 in Verilog, then in the synthesized hardware both modules will work simultaneously. In C everything is sequential, so in the equivalent C program both instructions will be loaded into your processor one by one.

It is a programming language, used not to program software but to describe a hardware design; the output is not necessarily an "application" as we understand it.
The language has a formal syntax.

Verilog contains features to describe logical netlists (RTL) and features to facilitate simulation of them. Describing an RTL description as a program may suggest that the person describing it as such does not thoroughly understand logic design or synthesis. Describing testbench stimulus as a program would be appropriate.

Verilog/VHDL are used to create and design application-specific systems on a chip, which are embedded into electronic devices.
C/C++ are used to design software that runs on a computer.

I am going to tackle this question in a different way. What is the purpose of a programming language? Can the output of a program affect the real world and your goals and expectations? If yes, then of course Verilog is a programming language. console.log has only as much meaning as what it translates to in the real world; e.g. console.log("you have a million units") has no force without some authority behind it. So Verilog is a programming language in a certain sense.

Related

Learning Functional Programming For Improved C++11 style. Haskell, Lisp, or [closed]

I've never programmed in a "pure" functional language. I earned my stripes on C and C++, tried Java, C#, PHP etc... but always I found myself going back to C++. Perhaps I'm a bit of a masochist, but I love the low level stuff.
I also find that I can achieve rapid development through embedding Lua, Python, or other scripting languages (along with their focus on rapid development).
Long story short, I'm not quitting C/C++, so don't talk me out of it. However, I've had little time to learn C++11, and I'm starting to feel the curve accelerating toward functional programming in the future.
My question is twofold. What language was C++11's concept of lambda functionality "borrowed" from, and what language would be the ideal one, if not that one, or if any to get a feel for "the way" to use C++11's new lambda functionality (no pun intended).
PS: I'm honestly not too happy about the new "bloated" additions to C++. I liked C++ how it was; it's starting to feel like the language is becoming bloated. I won't claim that to be a fact; I hear you have to have experienced a functional language to "get it".
It honestly seems like there is a new heavyweight in town. First it was just "procedural" programming, then came the OOP paradigm shift, while now it seems like things are heading towards the "functional" way of doing things.
Of course procedural programming is still alive and well (inside classes), but I have to wonder where the lambda way will fit (properly used) into class/OOP design. Will it just be a replacement for the procedural part? Make OOP a thing of the past (pfft)? Or something else entirely (say, a functional event system generating events for objects encapsulating procedural code)?
I would try to curtail your opinions until you have more rigorous experience of the issues involved.
To paraphrase Bjarne Stroustrup: Functional programming has had a lot of airtime in academia over the last several decades, yet the number of deployed functional systems in industry remains about zero.
More concretely to your question, a lambda is just a short-hand syntactic way to declare a singleton functor object (a class with an operator() function) that captures variables from its enclosing scope as member variables. I wouldn't consider it a "functional programming" concept, any more so than any other entity in C++.
Functional programming generally involves immutable data types (objects that don't change once constructed) and pure functions (functions whose output depends purely on their input, and nothing else).
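For a feel of how those two ideas look in a functional language, here is a minimal Haskell sketch (the function names are my own, illustrative choices): a closure capturing a variable from its enclosing scope, which is essentially what a C++ lambda desugars to, plus a pure function.
-- makeAdder returns a closure: the lambda captures n from the
-- enclosing scope, much as a C++ lambda captures by value.
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n

-- A pure function: its output depends only on its input.
square :: Int -> Int
square x = x * x

main :: IO ()
main = do
  let add5 = makeAdder 5
  print (add5 10)   -- 15
  print (square 4)  -- 16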
If you are interested in functional programming there is a free online course (MOOC) starting right now called Functional Programming Principles in Scala, that serves as a very good and highly regarded introduction to the subject from one of the top Swiss universities.
I can't speak about lambdas in C++11, but I know that part of the rationale for adding lambdas to Java 8 is to enable transparent concurrency support out of the box. How? It provides a (lazy) Stream interface where you can switch between parallel and sequential processing simply by calling parallel and sequential (these methods return new streams, and do not have side effects on existing streams).
If you look at the methods in Stream, you'll quickly notice that without a lambda facility, they would be an extreme pain to use. Have a look at some examples of what you can do with streams in combination with lambdas.
It should be possible to implement a similar library for C++11, if there isn't already such a library.
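For what it's worth, Haskell ships the same sequential/parallel switch as a library. A minimal sketch using parMap from the parallel package's Control.Parallel.Strategies module (this assumes the package is installed, and the program is compiled with -threaded and run with +RTS -N to get actual parallelism): swapping map for parMap changes the evaluation strategy without touching the surrounding code.
import Control.Parallel.Strategies (parMap, rdeepseq)

-- An intentionally expensive function, to make the difference visible.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = do
  print (map fib [25 .. 30])             -- sequential
  print (parMap rdeepseq fib [25 .. 30]) -- parallel, same shape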

What is Haskell used for in the real world? [closed]

There is a lot of hype around Haskell; however, it is hard to get information on how it is used in real-world applications. What are the most popular projects / usages of Haskell, and why does it excel at solving these problems?
What are some common uses for this language?
Rapid application development.
If you want to know "why Haskell?", then you need to consider the advantages of functional programming languages (taken from https://c2.com/cgi/wiki?AdvantagesOfFunctionalProgramming):
Functional programs tend to be much more terse than their ImperativeLanguage counterparts. Often this leads to enhanced programmer productivity.
FP encourages quick prototyping. As such, I think it is the best software design paradigm for ExtremeProgrammers... but what do I know?
FP is modular in the dimension of functionality, where ObjectOrientedProgramming is modular in the dimension of different components.
The ability to have your cake and eat it. Imagine you have a complex OO system processing messages - every component might make state changes depending on the message and then forward the message to some objects it has links to. Wouldn't it be just too cool to be able to easily roll back every change if some object deep in the call hierarchy decided the message is flawed? How about having a history of different states?
Many housekeeping tasks made for you: deconstructing data structures (PatternMatching), storing variable bindings (LexicalScope with closures), strong typing (TypeInference), GarbageCollection, storage allocation, whether to use boxed (pointer-to-value) or unboxed (value directly) representation...
Safe multithreading! Immutable data structures are not subject to data race conditions, and consequently don't have to be protected by locks. If you are always allocating new objects, rather than destructively manipulating existing ones, the locking can be hidden in the allocation and GarbageCollection system.
Apart from this Haskell has its own advantages such as:
Clear, intuitive syntax inspired by mathematical notation.
List comprehensions to create a list based on existing lists.
Lambda expressions: create functions without giving them explicit names, which makes it easier to handle big formulas (see the sketch after this list).
Haskell is completely referentially transparent. Any code that uses I/O must be marked as such. This way, it encourages you to separate code with side effects (e.g. putting text on the screen) from code without (calculations).
Lazy evaluation is a really nice feature:
Even if something would usually cause an error, it will still work as long as you don't use the result. For example, you could put 1 / 0 as the first item of a list and it will still work if you only use the second item.
It is easier to write search programs such as this sudoku solver because it doesn't load every combination at once—it just generates them as it goes along. You can do this in other languages, but only Haskell does this by default.
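A minimal sketch of three of the features above (list comprehensions, lambdas, and lazy evaluation); the function names are my own, illustrative choices:
-- List comprehension: squares of the even numbers in a list.
evenSquares :: [Int] -> [Int]
evenSquares xs = [x * x | x <- xs, even x]

-- Lambda: an anonymous function passed straight to map.
doubled :: [Int] -> [Int]
doubled = map (\x -> 2 * x)

main :: IO ()
main = do
  print (evenSquares [1 .. 10])        -- [4,16,36,64,100]
  print (doubled [1, 2, 3])            -- [2,4,6]
  -- Laziness: the division by zero in the head is never forced,
  -- because only the second element is demanded.
  print (head (tail [1 `div` 0, 42]))  -- 42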
You can check out the following links:
https://c2.com/cgi/wiki?AdvantagesOfFunctionalProgramming
https://learn.microsoft.com/archive/blogs/wesdyer/why-functional-programming-is-important-in-a-mixed-environment
https://web.archive.org/web/20160626145828/http://blog.kickino.org/archives/2007/05/22/T22_34_16/
https://useless-factor.blogspot.com/2007/05/advantage-of-functional-programming.html
I think people in this post are missing the most important point for anyone who has never used a functional programming language: expanding your mind. If you are new to functional programming then Haskell will make you think in ways you've never thought before. As a result your programming in other areas and other languages will improve. How much? Hard to quantify.
There is one good answer for what a general purpose language like Haskell is good for: writing programs in general.
For what it is used for in practice, I've three approaches to establishing that:
A tag cloud of Haskell library and app areas, weighted by frequency on Hackage.
Indicates that it is good for graphics, networking, systems programming, data structures, databases, development, text processing ...
Areas it is used in industry - a lot of DSLs, web apps, compiler design, networking, analysis, systems programming, ...
And finally, my opinion on what it is really strong at:
Problems where correctness matters, domain specific languages, and parallel and concurrent programming
I hope that gives you a sense on how broad your question is, if it is to be answered with any specificity.
One example of Haskell in action is xmonad, a "featureful window manager in less than 1200 lines of code".
From the Haskell Wiki:
Haskell has a diverse range of use commercially, from aerospace and defense, to finance, to web startups, hardware design firms and lawnmower manufacturers. This page collects resources on the industrial use of Haskell.
According to Wikipedia, the Haskell language was created out of the need to consolidate existing functional languages into a common one which could be used for future research in functional-language design.
It is apparent, based on the information available, that it has outgrown its original purpose and is used for much more than research. It is now considered a general-purpose functional programming language.
If you're still asking yourself, "Why should I use it?", then read the Why use it? section of the Haskell Wiki Introduction.
Haskell is a general purpose programming language. It can be used for anything you use any other language to do. You aren't limited by anything but your own imagination. As for what it's suited for? Well, pretty much everything. There are few tasks in which a functional language does not excel.
And yes, I'm the Rayne from Dreamincode. :)
I would also like to mention that, in case you haven't read the Wikipedia page, functional programming is a paradigm like Object Oriented programming is a paradigm. Just in case you didn't know. Haskell is also functional in the sense that it works; it works quite well at that.
Just because a language isn't an Object Oriented language doesn't mean the language is limited by anything. Haskell is a general-purpose programming language, and is just as general purpose as Java.
I have a cool one: Facebook created an automated tool for rewriting PHP code. They parse the source into an abstract syntax tree and do some transformations:
if ($f == false) -> if (false == $f)
I don't know why, but that seems to be their particular style, and then they pretty-print it.
https://github.com/facebook/lex-pass
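The general shape of such a tool (parse to an AST, transform, pretty-print) fits in a few lines of Haskell. Here is a toy sketch of that one rewrite, with made-up types rather than lex-pass's actual ones:
-- A toy AST for equality expressions.
data Expr = Var String
          | Lit Bool
          | Eq Expr Expr
          deriving Show

-- The rewrite from the example: move the literal to the left,
-- so ($f == false) becomes (false == $f).
flipEq :: Expr -> Expr
flipEq (Eq v@(Var _) l@(Lit _)) = Eq l v
flipEq (Eq a b)                 = Eq (flipEq a) (flipEq b)
flipEq e                        = e

-- A toy pretty-printer back to PHP-ish source.
pretty :: Expr -> String
pretty (Var v)  = "$" ++ v
pretty (Lit b)  = if b then "true" else "false"
pretty (Eq a b) = "(" ++ pretty a ++ " == " ++ pretty b ++ ")"

main :: IO ()
main = putStrLn (pretty (flipEq (Eq (Var "f") (Lit False))))
-- prints: (false == $f)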
We use Haskell for making small domain specific languages. Huge amounts of data processing. Web development. Web spiders. Testing applications. Writing system administration scripts. Backend scripts, which communicate with other parties. Monitoring scripts (we have a DSL which works nicely together with munin; it makes it much easier to write correct monitoring code for your applications).
All kinds of stuff, actually. It is just an everyday general-purpose language with some very powerful and useful features, if you are somewhat mathematically inclined.
From Haskell:
Haskell is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing. It is named after logician Haskell Curry.
Basically Haskell can be used to create pretty much anything you would normally create using other general-purpose languages (e.g. C#, Java, C, C++, etc.).
For example, for developing interactive, realtime HTML5 web applications. See Elm, the compiler of which is implemented in Haskell and the syntax of which borrows a lot from Haskell's.
This is a pretty good source for info about Haskell and its uses:
Open Source Haskell Releases and Growth

What is a computer programming language?

At the risk of sounding naive, I ask this question in search of a deeper understanding of the concept of programming languages in general. I write this question for my own edification and the edification of others.
What is a useful definition of a computer programming language and what are its basic and necessary components? What are the key features that differentiate languages (functional, imperative, declarative, object oriented, scripting, etc...)?
One way to think about this question. Imagine you are looking at the hardware of a modern desktop or laptop computer. Assume, that the C language or any of its variants do not exist. How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Tangentially related, what is it about computer languages that allow other languages to exist? For example take a scripting language like Javascript, Perl, or PHP. I assume part of the definition of these is that there is an interpreter most likely implemented in C or C++ at some level. Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
I would be satisfied with a list of concepts that can be looked up or researched further.
Like any language, programming languages are simply a communication tool for expressing and conveying ideas. In this case, we're translating our ideas of how software should work into a structured and methodical form that computers (as well as other humans who know the language, in most cases) can read and understand.
What is a useful definition of a computer programming language and what are its basic and necessary components?
I would say the defining characteristic of a programming language is as follows: things written in that language are intended to eventually be transformed into something that is executed. Thus, pseudocode, while perhaps having the structure and rigor of a programming language, is not actually a programming language. Likewise, UML can express many powerful ideas in an abstract manner just like a programming language can, but it falls short because people don't generally write UML to be executed.
How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Even if the word "programming language" wasn't part of the shared vocabulary of the group I was talking to, I think it would be obvious to the others that we'd need a way to communicate with the computer. Just as no one expects a car to drive itself (yet!) without external instructions in the form of interaction with the steering wheel and pedals, no one could expect the hardware to function without being told what to do. As noted above, a programming language is the conduit through which we can make that communication happen.
Tangentially related, what is it about computer languages that allow other languages to exist?
All useful programming languages have a property called Turing completeness. If one language in the Turing-complete set can do something, then any of them can; they are said to be computationally equivalent.
However, just because they're equally "powerful" doesn't mean they're equally nice to work with for humans. This is why many people are willing to sacrifice the unparalleled micromanagement you get from writing assembly code in exchange for the expressiveness and power you get with higher-level languages, like Ruby, Python, or C#.
Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
Since there is a Javascript interpreter written in C, it follows that it must be possible to write a Javascript interpreter in Javascript, since both are Turing-complete. However, again, note that Turing-completeness says nothing about how hard it is to do something in one language versus another -- only whether it is possible to begin with. Your Javascript-interpreter-inside-Javascript might well be horrendously inefficient, consume absurd amounts of memory, require enormous processing power, and be a hideously ugly hack. But Turing-completeness guarantees it can be done!
While this doesn't directly answer your question, I am reminded of the Revenge of the Nerds essay by Paul Graham about the evolution of programming languages. It's certainly an interesting place to start your investigation.
Not a definition, but I think there are essentially two strands of development in programming languages:
Those working their way up from what the machine can do to something more expressive and less tied to the machine (Assembly, Fortran, C, C++, Java, ...)
Those going down from some mathematical or theoretical computer science concept of computation to something implementable on a real machine (Lisp, Prolog, ML, Haskell, ...)
Of course, in reality the picture is not as neat, and both strands influence each other by borrowing the best ideas.
Slightly long rant ahead.
A computer language is actually not all that different from a human language. Both are used to express ideas and concepts in commonly understood terms. Among different human languages there are syntactic differences, but you can express the same thing in every language (does that make human languages Turing complete? :)). Some languages are better suited for expressing certain things than others.
For example, although technically not completely correct, the Inuit language seems quite suited to describing various kinds of snow. Japanese, in my experience, is very suitable for expressing one's feelings and state of mind, thanks to a large, concise vocabulary in that area. German is pretty good for being very precise, thanks to largely unambiguous grammar.
Different programming languages have different specialities as well, but they mostly differ in the level of detail required to express things. The big difference between human and programming languages is mostly that programming languages lack a lot of vocabulary and have very few "grammatical" rules. With libraries you can extend the vocabulary of a language though.
For example:
Make me coffee.
Very easy to understand for a human, but only because we know what each of the words mean.
coffee : a drink made from the roasted and ground beanlike seeds of a tropical shrub
drink : a liquid that can be swallowed
swallow : cause or allow to pass down the throat
... and so on and so on
We know all these definitions by heart, but we had to learn them at some point.
In the same way, a computer can be "taught" to "understand" words as well.
Coffee::make()->giveTo($me);
This could be a perfectly valid expression in a computer language, if the computer "knows" what Coffee, make() and giveTo() mean and if $me is defined. It expresses the same idea as the English sentence, just with a different, more rigorous syntax.
In a different environment you'd have to say slightly different things to get the same outcome. In Japanese for example you'd probably say something like:
コーヒーを作ってもらっても良いですか?
Kōhī o tsukuttemoratte mo ii desu ka?
Which would roughly translate to:
if ($Person->isAgreeable('Coffee::make()')) {
return $Person->return(Coffee::make());
}
Same idea, same outcome, but the $me is implied and if you don't check for isAgreeable first you may get a runtime error. In computer terms that would be somewhat analogous to Ruby's implied behaviour of returning the result of the last expression ("grammatical feature") and checking for available memory first (environmental necessity).
If you're talking to a really slow person with little vocabulary, you probably have to explain things in a lot more detail:
Go to the kitchen.
Take a pot.
Fill the pot with water.
...
Just like Assembler. :o)
Anyway, the point being, a programming language is actually a language just like a human language. Their syntax is different and specialized for the problem domain (logic/math) and the "listener" (computers), but they're just ways to transport ideas and concepts.
EDIT:
Another point about "optimization for the listener" is that programming languages try to eliminate ambiguity. The "make me coffee" example could, technically, be understood as "turn me into coffee". A human can tell what's meant intuitively; a computer can't. Hence, in programming languages everything usually has one and only one meaning. Where it doesn't, you can run into problems, the "+" operator in Javascript being a common example.
1 + 1 -> 2
'1' + '1' -> '11'
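By contrast, a strongly typed language can reject the ambiguous case outright. A small Haskell illustration:
main :: IO ()
main = do
  print (1 + 1)          -- 2: (+) works only on numeric types
  putStrLn ("1" ++ "1")  -- "11": (++) is a separate concatenation operator
  -- print (1 + "1")     -- rejected at compile time: no Num instance for String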
See "Programming Considered as a Human Activity." EWD 117.
http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
Also See http://www.csee.umbc.edu/331/current/notes/01/01introduction.pdf
Human expression which:
describes mathematical functions
makes the computer turn switches on and off
This question is very broad. My favorite definition is that a programming language is a means of expressing computations:
Precisely
At a high level
In ways we can reason about them
By computation I mean what Turing and Church meant: the Turing machine and the lambda calculus have equivalent expressive power (which is a theorem), and the Church-Turing hypothesis (which is a conjecture) says roughly that there's no more powerful notion of computation out there. In other words, the kinds of computations that can be expressed in any programming languages are at best the kinds that can be expressed using Turing machines or lambda-calculus programs—and some languages will be able to express only a subset of those calculations.
This definition of computation also encompasses your friendly neighborhood hardware, which is pretty easy to simulate using a Turing machine and even easier to simulate using the lambda calculus.
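As a small taste of the lambda-calculus side of that equivalence, here is a minimal Haskell sketch of Church numerals, numbers encoded as nothing but functions (an illustration of the idea, not part of the argument above):
{-# LANGUAGE RankNTypes #-}

-- A Church numeral n applies a function f to an argument x, n times.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ x = x

suc :: Church -> Church
suc n f x = f (n f x)

add :: Church -> Church -> Church
add m n f x = m f (n f x)

toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc zero) (suc (suc zero))))  -- 3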
Expressing computations precisely means the computer can't wiggle out of its obligations: if we have a particular computation in mind, we can use a programming language to force the computer to perform that computation. (Languages with "implementation defined" or "undefined" constructs make this task more difficult. Programmers using these languages are often willing to settle for—or may be unknowingly settling for—some computation that is only closely related to the computation they had in mind.)
Expressing computation at a high level is what programming languages are all about. An important reason that there are so many different programming languages out there is that there are so many different high-level ways of thinking about problems. Often, if you have an important new class of problems to solve, you may be best off creating a new programming language. For example, Larry Wall's writing suggests that solving a class of problems called "systems administration" was a motivation for him to create Perl.
(Another reason there are so many different programming languages out there is that creating a new language is a lot of fun, and anyone can learn to do it.)
Finally, many programmers want languages that make it easy to reason about programs. For example, today a student of mine implemented a new algorithm that made his program run over six times faster. He had to reason very carefully about the contents of C arrays to make sure that the new algorithm would do the same job the old one did. Luckily C has decent tools for reasoning about programs, for example:
A change in a[i] cannot affect the value of a[i-1].
My student also applied a reasoning principle that isn't valid in C:
The sum of a sequence of unsigned integers will be at least as large as any integer in the sequence.
This isn't true in C because the sum might overflow. One reason some programmers prefer languages like Standard ML is that in SML, this reasoning principle is always valid. Of languages in wide use, Haskell probably has the strongest reasoning principles; Richard Bird has developed equational reasoning about programs to a high art.
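The overflow point is easy to demonstrate. A minimal Haskell sketch contrasting fixed-width arithmetic, where the principle fails, with arbitrary-precision Integer, where it holds for non-negative values:
import Data.Word (Word8)

main :: IO ()
main = do
  -- Fixed-width arithmetic wraps around, breaking the principle:
  print (sum [200, 100 :: Word8])    -- 44, not 300 (300 mod 256)
  -- Integer is arbitrary-precision, so the sum really is
  -- at least as large as any element:
  print (sum [200, 100 :: Integer]) -- 300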
I will not attempt to address all the tangential details that follow your opening question. But I hope you will get something out of an answer that aims to give a deeper understanding, as you asked, of a fundamental question about programming languages.
One thing a lot of "IT" types forget is that there are two types of computer programming languages:
Software programming languages: C, Java, Perl, COBOL, etc.
Hardware programming languages: VHDL, Verilog, SystemVerilog, etc.
Interesting.
I'd say the defining feature of a programming language is the ability to make decisions based on input. Effectively, if and goto. Everything else is lots and lots of syntactic sugar. This is the idea that spawned Brainfuck, which is actually remarkably fun to (try to) use.
There are places where the line blurs; for example, I doubt people would consider XSLT to really be a programming language, but it's Turing-complete. I've even solved a Project Euler problem with it. (Very, very slowly.)
Three main properties of languages come to mind:
How is it run? Is it compiled to bare metal (C), compiled to mostly bare metal with some runtime lookup (C++), run on a JIT virtual machine (Java, .NET), bytecode-interpreted (Perl), or purely interpreted (uhh..)? This doesn't comment much on the language itself, but speaks to how portable the code may be, what sort of speed I might expect (and thus what broad classes of tasks would work well), and sometimes how flexible the language is.
What paradigms does it support? Procedural? Functional? Is the standard library built with classes or functions? Is there reflection? Is there, ideally, support for pretty much whatever I want to do?
How can I represent my data? Are there arrays, and are they fixed-size or not? How easy is it to use strings? Are there structs or hashes built in? What's the type system like? Are there objects? Are they class-based or prototype-based? Is everything an object, or are there primitives? Can I inherit from built-in objects?
I realize the last one is a very large collection of potential questions, but it's all related in my mind.
I imagine rebuilding the programming language landscape entirely from scratch would work pretty much how it did the first time: iteratively. Start with assembly, the list of direct commands the processor understands, and wrap it with something a bit easier to use. Repeat until you're happy.
Yes, you can write a Javascript interpreter in Javascript, or a Python interpreter in Python (see: PyPy), or a Python interpreter in Javascript. Such languages are called self-hosting. Have a look at Perl 6; this has been a goal for its main implementation from the start.
Ultimately, everything just has to translate to machine code, not necessarily C. You can write D or Fortran or Haskell or Lisp if you want. C just happens to be an old standard. And if you write a compiler for language Foo that can ultimately spit out machine code, by whatever means, then you can rewrite that compiler in Foo and skip the middleman. Of course, if your language is purely interpreted, this will probably result in a stack overflow...
As a friend taught me about computer languages, a language is a world: a world of communication with that machine, a world for implementing ideas, algorithms, and functionality, as Alonzo and Alan described. It is the technical equivalent of the mathematical structures that the aforementioned scientists built. It is a language with expressions and also limits. However, as Ludwig Wittgenstein said, "The limits of my language mean the limits of my world"; there are always limitations, and that's how one chooses the language that best fits one's needs.
This is a generic answer... some thoughts, actually, more than an answer.
There are many definitions of this, but the one I prefer is:
Computer programming is you instructing a computer to solve a particular technical task/problem.
There are 3 key phrases to look out for:
You: The computer will do what you (the programmer) tell it to do.
Instruct: Instruction is given to the computer in a language that it can understand.
Problem: At the end of the day, computers are (complex) tools. They are there to make our lives simpler.
The full answer can be lengthy, but you can find more about computer programming elsewhere.

Why create a new programming language? [closed]

What is the real benefit of creating a new programming language? It is highly unlikely that you are going to actually use it.
In short, how will the process of creating a new language make you a better programmer?
You will understand the decisions behind language design and garner a better overall understanding of the compromises made between readability, performance, and reliability.
Your familiarity with concepts such as recursion, closures, garbage collection, reference management, typing, data structures and how these things actually work will increase. Most programmers will utilize resources and language features better.
Similar to the way we learn new ways to code solutions when we use other languages, when we write our own languages, we explore new ways to create solutions. See Metaprogramming. Contrary to what the question suggests, Domain Specific Languages are used in many environments.
If you're writing a compiler, you'll learn more about how computers work than you ever did before. (Depending on your goal, perhaps more than you intended to learn)
When I wrote my own sort routines in school, even re-implementations of good ones, it really drove home some of the weaknesses of some of the algorithms.
In short, there's an order of magnitude of difference in a programmer who knows how to use tools, and a programmer who knows how to make tools.
I can speak from experience here ...
Fun, Domain specific problem solving, Complexity in context
I love creating new languages for fun, and for tackling domain specific problems. A very simple example might be Wikipedia markup or something as complex as Erlang which specializes in concurrent processing.
Many general purpose languages are similar, because they are general purpose. Sometimes you need a more accurate abstraction of the mechanics of the problem you are solving. Another example would be the M4 macro language.
Remember, a language is not magic; it is just a collection of defined grammatical structures with implied semantics. SQL is a good example of a language for a purpose, with that purpose defined in its syntax and semantics.
Learning how languages work, what makes a language parsable, what makes semantics sensible and the implementation of this, I think can make you a better programmer.
Compilers embody a lot of the theory that underpins computer science: translation, abstraction, interpretation, data structures, state... the list goes on. Learning these things will make you understand the implications of your program and what goes on under the hood. You can of course learn things independently, but compilers are a great context in which to learn complex topics such as DFA/NFA automata, stack-based parsers, and abstract syntax trees.
Compilers are beautiful machines, I think :)
Multiple reasons:
bragging rights
economic incentives
extreme boredom
dissatisfaction with the hundreds of existing languages
untreated insanity
desire to implement language that facilitates new design concepts (like languages that make design patterns more straightforward to incorporate)
other reasons, perhaps
I think Jeff Atwood answers this well in this Coding Horror post -- though he's talking about a more general issue (why create any new library, framework, etc., when other artifacts in the same design space already exist), I suspect that exactly this broader viewpoint gives him a different and interesting perspective.
I will add that if you write a semantics, so that your language is an actual language and not merely what happens to be accepted by some particular implementation, you will learn an enormous amount about how to describe computational behaviors precisely:
You will learn what kinds of behaviors are and are not easy to describe—and prove correct.
You will learn how to trade off different kinds of formalisms for describing different kinds of features.
You will ultimately be a better programmer because the formalism and proof techniques you will learn will apply to all kinds of problems: locking techniques, safety properties in kernels, lock-free data structures, network protocols, and information security, to name just a few. All these areas are amenable to the same kind of formal treatment that is given to a programming language.
To pick just one example, if you give your language a static type system and you then prove that a well-typed program is guaranteed to be memory-safe, you will learn just as much (along a different dimension) as you will by writing an interpreter or compiler.
EDIT: If you want to learn this stuff I think the easiest starting point is Benjamin Pierce's series of two books on Types and Programming Languages. There is also a graduate textbook by Glynn Winskel which is a little harder but more oriented toward semantics and proof techniques.
Creating Domain Specific Languages is very valuable. Instead of thinking only about general purpose languages, consider creating so-called "little languages" that clearly express abstractions in your project.
For example, in a recent project I decided to use a Command Pattern to drive a Service Layer. I found some repetition in my command code, so I wrote a little compiler that accepts a simple language that expresses commands and emits command implementations in the "underlying" language.
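As a sketch of how small such a "little compiler" can be, here is a toy Haskell version; the one-line command format and the emitted Java-ish stub are made up for illustration, not the poster's actual tool:
-- A hypothetical declaration format: "CommandName arg1 arg2 ...".
data Command = Command { cmdName :: String, cmdArgs :: [String] }

parseCommand :: String -> Command
parseCommand line = case words line of
  (n : as) -> Command n as
  []       -> error "empty command declaration"

-- Emit a stub implementation in the "underlying" language.
emit :: Command -> String
emit (Command n as) =
  "public class " ++ n ++ "Command implements Command {\n"
    ++ concatMap (\a -> "    private String " ++ a ++ ";\n") as
    ++ "    public void execute() { /* ... */ }\n"
    ++ "}"

main :: IO ()
main = putStrLn (emit (parseCommand "CreateUser name email"))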
For the same reason that taking a Compiler Construction course at university will benefit you even if you never write a single compiler in your whole life. It's a look under the hood, if you may.
In addition to what altCognito said, which is a theoretical/academic perspective, some highly specialized languages are created to solve specific problems efficiently when existing "general-purpose" languages are either extremely inefficient for your task or there just isn't an easy-to-use existing alternative.
Granted, such cases tend to be rare, and if your first instinct on encountering a problem is "I need a new language for this", then you are most likely missing something. There needs to be a fairly substantial gap between "available" tech and your needs to warrant such an undertaking.
I think there are really two conceptually different answers to this. First, you gain an understanding of how compilers transform your code into executable code. This can help you make better decisions about how to structure your code to optimize (or allow it to be optimized) better. If, for instance, you knew that a certain construct would prohibit the compiler from inlining a code block or unrolling a loop, then you could avoid that if performance became a real concern.
Second, all current languages were invented (or derived) at some point in history. For each one of these, the likelihood that it would actually be used was potentially small, yet here they are. They all found their reason for being in the fact that someone wanted to do something that wasn't possible or easy to do in an existing language and decided to do something about it. Laziness (or the desire to let the computer do the work for you) is the mother of invention.
Just for fun... and then you'll realize that you cannot make anything better than all the languages that you thought sucked xD (so you stop complaining about them).
how will the process of creating a new language make you a better programmer?
You're right, you may or may not use the language, but at the very least the experience you gain from doing it will help you understand the implementation of programming languages, and some of it you will be able to apply to future computation problems that you run into.
Writing a compiler or interpreter requires a very firm understanding of computer science theory. And if you're compiling to machine code instead of to another language, it requires a firm understanding of hardware design as well.
In addition to that, knowing how to design a compiler means you will have a better understanding of languages in general, and the languages you work with specifically. You will have a better appreciation for syntax and trade-offs the language designers took when they wrote their specification.
It's not that writing compilers makes you a better programmer. It's the deep understanding of language theory and compiler design that makes you better.
Mostly you do this for fun or to broaden your comprehension of a subject.
I disagree that creating a new language influences performance. Performance of what? IMHO execution speed should not depend on the language constructs but on what the language is translated to, which is something different: designing a syntax for a language versus writing a compiler/virtual machine for it.
Because a talking frog is pretty neat.
I want a managed language that permits tinkering with its internals as standard practice. Kind of like Ruby's duck punching on a wider scale.
I should, as the client of a library, be able to swap out library functions that don't do what I want.
That's what drives me crazy with .NET. There are bugs in the framework Microsoft will not fix and thanks to GAC signing I cannot. And even if it were not for GAC signing, hotpatching a global library is a bad idea (might break some other application).
I for one don't care about how compilers work, don't care about learning new languages, and don't care about using scripting languages like Perl and JavaScript. I'm much more interested in the ways big programs are constructed (or should be constructed). There are still no good solutions for making LARGE software as easy to use as prototyped code. Programming languages are not helping with that. They solve trivial problems like sorting and memory deallocation, and leave you struggling alone with the problems that really matter (the ones that keep you or your firm from losing money).

How can future programming languages better facilitate abstraction? [closed]

One of the key properties to designing comprehensible software (and, indeed, designing anything at all) is to develop a good set of abstractions. These days, those abstractions include things like functions, classes, interfaces, recursion, and higher-order functions. But what else is there? How can we further abstract our designs, so that I needn't think about anything but my immediate, direct goal? What novel abstractions have yet to be leveraged by existing technologies?
Also note that most of the items on my list (with the exception, perhaps, of recursion) are also tools used for code reuse. Code reuse is not the subject of this question, and is not what I see as a necessary aspect of a good abstraction. Functions are useful as abstractions because they hide what they are doing behind a descriptive name, not because I can call them from several different places.
A poorly-formed idea: Is a driver function that only calls a sequence of other functions, without maintaining any state of its own, really the same thing as a function? We write it as a function and call it as a function, but perhaps it represents a different concept? This is reflected in some languages by making a distinction between procedures returning values and procedures not returning values. But maybe there's a better way to view that difference, some different way to abstract a sequence of relatively unrelated steps?
So to reiterate, how can future programming languages better facilitate abstraction?
A powerful abstraction tool: Lisp macros. Why not look into the past and present? :)
They can use self-exposing semantics to better allow metaprogramming of the environment/language presented as the end-user interface. Mutable language semantics.
Some areas that I think are potentially fruitful:
Intentional Programming, or something similar. Charles Simonyi's company Intentional Software has been keeping pretty quiet for a while but is now starting to show some promising early demonstrations.
Functional Programming: ideas from functional programming are increasingly finding their way into more mainstream languages like Python, C# (LINQ, lambdas, etc.) and even C++ (lambdas in C++0x). F# is becoming a first-class .NET language with full support in Visual Studio. The rise of multi-core development is another factor driving the wider adoption of functional concepts.
Domain Specific Languages (DSLs): closely related to the ideas behind Intentional Programming, Microsoft seem to be putting some effort into supporting DSLs as part of the .NET ecosystem.
Much more sophisticated IDEs. There are already some positive developments with refactoring tools in IDEs like Visual Studio and IntelliJ but I think there's a lot of room for progress in this area. Moving away from dumb text source files towards something more like an abstract syntax tree representation could make it much easier to work at a higher level of abstraction. Again, this connects with many of the ideas behind Intentional Programming.
By having built in detection of stupid ideas that, when tripped, lock the developer out of the IDE and refuse to let them code ever again.
OOP facilitates abstraction quite nicely. It's developers that come up with poorly formed ideas.
Let's see, how about if we make abstraction mandatory for every data type, and then provide ways of generalizing our abstractions over type parameters? Wait! I've just reinvented CLU. Do I get a Turing Award?
Anyone interested in the role of abstraction in programming should study CLU.
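The CLU recipe (an abstract data type whose representation is hidden, generalized over a type parameter) survives almost verbatim in modern languages. A minimal Haskell sketch, compiled as its own module:
-- The export list hides the Stack constructor, so clients can
-- reach the representation only through empty/push/pop.
module Stack (Stack, empty, push, pop) where

-- The representation (a plain list) is a private detail.
newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)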
Eiffel code proofs. (warning: link to PDF!)
Functional programming, aspect oriented programming, design by contract and generally everything that takes us away from the dark age of imperative programming.
Also, I hope non-managed software development will cease to exist. C++ and other low-level stuff makes me sad. :-(
I like my LINQ, my lambda operator, my extension methods and my fluent interfaces. Oh, and I love PostSharp.NET. And F#, but I guess it's very hard NOT to love F#. :-)
I will give an indirect answer: before we can develop better constructs in programming languages, we must first understand the theory of abstraction.
Oh yes, there is an actual theory, one which predates modern computing: it is called category theory.
