I am a university student, studying and building a programming language with delimited continuations, and I want to apply this work to environmental problems. Here, the word "environmental" refers to the natural environment of the Earth. I have searched for articles connecting programming languages and environmental problems. I found articles about software or hardware aimed at environmental problems, but I could not find any about programming languages themselves.
I have heard that a programming language that makes it easy to write efficient algorithms can help with environmental problems, so I believe there are cases where the choice of programming language matters here. However, as I wrote above, I could not find such a case. Do you know of any studies or articles about programming languages aimed at improving environmental problems?
I have watched an interview with Bjarne Stroustrup in which he claims C++ has helped fight global warming, since the language is so efficient and fast compared to some others I won't name, since I really hate language wars anyway.
I don't know if that answers your question. Nice one, though.
How C++ Combats Global Warming
I have done quite a bit of modelling within hydrology, the study of water. For this I used MATLAB, SMS, TUFLOW and MIKE Zero to model rivers, the progression of groundwater flow, intrusion caused by pumping, and so on. Hope that helps!
I have read a few articles on the Internet about programming language choice in the enterprise. Recently many dynamically typed languages have become popular, e.g. Ruby, Python, PHP and Erlang. But many enterprises still stay with statically typed languages like C, C++, C# and Java.
And yes, one of the benefits of statically typed languages is that programming errors are caught earlier, at compile time rather than at run time. But there are also advantages to dynamically typed languages. (more at Wikipedia)
The main reason enterprises don't start using languages like Erlang, Ruby and Python seems to be the fact that they are dynamically typed. That also seems to be the main reason people on StackOverflow decide against Erlang. See Why did you decide "against" Erlang.
However, the criticism of dynamic typing in the enterprise seems to be strong, and I don't really get why.
Really, why is there so much criticism of dynamic typing in the enterprise? Does it really affect the cost of projects that much? Or maybe I'm wrong.
The word "enterprise" doesn't really mean anything to me, so I'm just going to assume you're talking about large corporations.
Dynamic typing is just that: dynamic. It is far harder to effectively statically analyze a program written in a dynamically typed language. Static typing allows developers to catch mistakes at compile time, before the code ever runs, something that is very important in the corporate world. It makes debugging much less of a pain and thus increases overall productivity (or that's what they argue, anyway). Static typing is also very important in a team setting because it allows your IDE to tell you how to use a method that you've never seen. These kinds of "hints" are very difficult, if not impossible, to achieve with dynamically typed languages.
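As a minimal sketch of that first point, here is a C++ fragment (the length helper is my own invention, purely for illustration) where the type error is rejected at compile time, before the program can ever run:

#include <string>

// Hypothetical helper, just for illustration.
int length(const std::string& s) { return static_cast<int>(s.size()); }

int main() {
    // length(42);           // rejected at compile time: no conversion from int to std::string
    return length("ok") - 2; // fine: the string literal converts to std::string
}

In a dynamically typed language, the equivalent of the commented-out call would only blow up (or silently misbehave) when that line actually executes.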
The other big thing is that dynamic languages are simply not as mature as static languages. Languages like C++, Java, and C# have been in use in the corporate world for many, many years, whereas dynamically typed languages have only recently come into play. There is a lot more code written in Java than in Python, and a lot more support for the former as well.
Note that I'm not arguing for either side. I personally prefer dynamically-typed languages because they allow me to write the code much more quickly and spend less time thinking about the problem, but I can see the appeal of languages like C# in a huge corporate environment.
It's probably more about what people are familiar with than anything else. From a manager's point of view, he/she needs a good reason to use a technology that:
May have never been used by the company on a project,
No one on the team has any experience with,
Does not (appear to) have the backing of a solid "Enterprise" company such as Microsoft, IBM, etc.
These factors are especially important if the project needs to be maintained for many years down the road.
I am not defending this point of view, just pointing out that it exists and may be a source of this criticism.
I'm currently working on the topic of programming languages and interpreter design. I have already created several programming languages but have not yet reached my goal:
Create a programming language which focuses on giving the programmer a good feeling when writing code in it. It should simply be fun and/or interesting, and in no case annoying, to write something in it.
I get this feeling when writing code in Python. I sometimes get the opposite with PHP and in rare cases when having to reinvent some wheel in C++.
So I've tried to figure out some syntactical features to make programming in my new language fun, but I just can't find any.
Which concrete features, maybe mainly in terms of syntax, do/could make programming in a language fun?
Examples:
I find it enjoyable to program in Ruby because of its use of code blocks.
It would be nice if you could include exactly one example in your answer
Those features do not have to already exist in any language!
I'm doing this because I have experienced extreme rises in (my own) productivity when programming in languages I love (because of particular features).
You mentioned Ruby in your question. AFAIK, Ruby is the only programming language for which joy is an actual, stated, explicit design goal. (In fact, it is the only design goal.)
The reason that Yukihiro Matsumoto was able to design Ruby this way is that he already knew and used tons of programming languages before he started designing Ruby, and learned tons more in order to design it. (Interestingly, he didn't know Python, and has said that he probably wouldn't have created Ruby if he had.)
Here's just a tiny fraction of the languages that matz has either used himself or looked at for inspiration (or, in some cases, for inspiration on what not to do):
CLU
Sather
Lisp
Scheme
Smalltalk
Perl
Python
Haskell
Scala
PHP
C
C++
Java
C#
Objective-C
Erlang
And I believe that this is one way good programming languages can be designed (what Larry Wall calls postmodernist language design): throw away everything that didn't work in the past, take everything that did work, and combine it tastefully.
Of course, this requires that you actually know all those languages from which you want to "steal", and in particular it requires that you know lots of very different languages with different paradigms, different concepts and different "feels"; otherwise the idea pool from which you steal is rather small and inbred.
Consistency.
It's the feeling that you already know something when you use an API or feature you've never used before. It also makes you more productive, as you don't have to learn something new for the sake of it.
I think this is also one of the things people like about Ruby: if you follow the naming conventions, things start to 'just work' without bindings and glue and suchlike.
For example, using the STL in C++, many of the algorithms are the same for all containers, even strings. That makes it nice to use... except for those parts that do not follow the same API (e.g. vector<bool>); there the difference is more noticeable.
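A minimal sketch of that consistency:

#include <algorithm>
#include <string>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    std::string s = "abc";
    // The same algorithm works on both containers, because both expose
    // the same iterator interface.
    auto vi = std::find(v.begin(), v.end(), 2);
    auto si = std::find(s.begin(), s.end(), 'b');
    return (vi != v.end() && si != s.end()) ? 0 : 1;
}

The moment you learn std::find on one container, you have effectively learned it on all of them.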
Two things to keep in mind are orthogonality and the principle of least surprise.
A programming language should make it easy to write correct programs and difficult (if not impossible) to write incorrect programs. For instance, in Java

long x = 2000000000 + 2000000000;  // wraps to -294967296

overflows, while

long x = 2000000000L + 2000000000; // 4000000000, as expected

doesn't. The reason is that the first right-hand side is evaluated entirely in 32-bit int arithmetic and only then widened to long, whereas the L suffix in the second forces 64-bit arithmetic. Is this obvious? I don't think so. Does anyone ever want something to overflow? I don't think so.
Hilarity.
http://lolcode.com/
Follow common practices (like using + for addition, & for bitwise/logical and)
Group logically similar code in namespaces
Have an extensive string processing library
Incorporate debugging facilities
For a cross-platform language, try to minimize platform differences as much as possible
A language feature that appears simple and easy to learn surprises and delights the programmer with its unexpected power. I nominate Haskell type classes :-)
I am currently entering my senior year as a dual major in Electrical Engineering and Computer Engineering, and have touched on a wide variety of languages: C, C++, C#/XAML, Java, bash, Python, VHDL, assembly, etc. I was wondering what you think would be a good language (or few languages) to become more proficient in, or to explore for the first time. Also, what level of programming do you prefer (hardware, local, network, system, design, integration, and so on)? If you could tell me why, I would be grateful, and if you'd like to relate your experiences, I am quite interested.
I am hoping to find a job in hardware design, but as I get better with some languages, I am finding just how much I enjoy programming, so I really have an open mind at this juncture. I would love to hear from some people in the 'real world'.
You want to understand:
Different language paradigms (procedural, OOP, functional, parallel, logic [e.g., Prolog], constraint). Do some programming in each.
Different software architectures. OSes, standard applications (MVC, ...)
Software engineering: requirements, specification (especially design-by-contract), design, testing. These ideas hold in hardware engineering too.
I would start not by learning a programming language but with the fundamentals, like these:
1) computer organisation
2) operating systems theory
3) fundamentals of programming (OOP and functional)
4) data structures
5) compiler design and principles
6) DBMS concepts
As a budding hardware designer you might want to learn Bluespec. This is a very high-level hardware-description language based on work done at MIT. It's both a language and a company. They have some very impressive results on modularity, predictability, and reuse in hardware design. Check out the page on the Bluespec compiler and find out if you want to pursue it.
I was wondering what you think would be a good language (or few languages) to become more proficient in, or to explore for the first time?
What do you want to accomplish? You seem to have a good grasp of many popular languages with several typing systems and paradigms. If you want to learn something new, I would recommend functional programming, as it's vastly different from anything you will have encountered before (imagine trying to write a program without an assignment operator, e.g. =) and it is becoming more and more useful. Haskell, Scala, and F# are all front-runners of the functional programming pack.
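To give a rough taste of the style in a language you already know (a C++ sketch only; real functional languages go much further): the explicit loop and its mutated counter disappear into a single fold.

#include <numeric>
#include <vector>

// Summing without an explicit loop or a reassigned accumulator:
// std::accumulate threads the running total through the fold for us.
int sum(const std::vector<int>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0);
}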
Also, what level of programming do you prefer?
It all depends on what you want to do and what skills you want to use. Hardware and system programming will involve more low level stuff (assem, C, C++). The others are less language specific, but involve other skills, like a thorough knowledge of networks and APIs.
Question 1: How exactly do modern computer languages come into being, and why? How do they get their start, and who is behind them?
Question 2: If any, what languages currently in their infancy are showing promise?
How exactly do modern computer languages come into being, and why? How do they get their start, and who is behind them?
It's a multistage process:
Pointy-headed type theorists and other professionals are continually proposing new language features. You can read about them in places like the Proceedings of the ACM Symposium on Principles of Programming Languages (POPL), which has been held annually since 1973.
Many of these proposals are actually implemented in some research language; some research languages I personally find promising include Coq and Agda. Haskell is a former research language that made it big. A research language that gets 10 users is often considered a success by its designers. Many research languages never get that far.
From research to deployment I know of two models:
Model A: A talented amateur comes along and synthesizes a whole bunch of existing features, maybe including some new ideas, into a new language. The amateur has talent, charisma, and maybe a killer app. Thus C, Perl, Python, Ruby, and Tcl are born.
Model P: A talented professional makes career sacrifices in order to build and promulgate a new language. The professional has talent, a deep knowledge of the field, and maybe a killer app. Thus Haskell, Lua, ML, Pascal, Scala, and Scheme are born.
My definition of a professional is someone who is paid to know about programming languages, to pass on that knowledge, and to develop new knowledge in programming languages. Unfortunately this is not the same as designing and implementing new languages, and it is not the same as making implementations that many people can use. This is why most successful programming languages are designed and built by amateurs, not professionals.
There have been quite a few interesting research languages that have had hundreds or even thousands of users but yet never quite made it big. Of these one of my favorites is probably Icon. I have argued elsewhere that nobody really knows why languages become popular.
Summary: Languages come into being because people want to make programming better, and they have new ideas. Languages get their start when somebody takes a whole bunch of ideas, some new and some proven, and synthesizes them into a coherent whole. It's a big job. The person behind a new language might be a programming-language professional, but historically, most languages that become widely used seem to have been created by talented amateurs.
Answer 2: Fortran 2008 looks very promising.
Come on, bring on the downvotes you humourless Java-teenies, Pythonettes, Rubes and Haskellites!
1) Most development environments these days are built to abstract away a lot of the low-level/inner workings of a platform, to speed up development and cater for new user interfaces and platform technologies. There are both open-source projects and corporations behind these changes... For instance, jQuery is a newish library that just wraps a lot of JavaScript, making things easier and cross-platform...
Bjarne Stroustrup wrote a book on the history of C++, called "The Design and Evolution of C++".
The genesis of a programming language is always a different story. I'm currently reading "Masterminds of Programming", a series of interviews with the authors of popular languages. They explain what problems they tackled and how the language was born -- a really cool book.
The TIOBE index can give some sense of trends among programming languages, including the emerging ones. I bet that the future lies in languages that run on top of the JVM or CLR (notably due to the effort invested in those VMs, which are now really great). Concurrency seems to be one of the hot problems of today, so I guess we will see some interesting moves in this area (e.g. Clojure).
I remember reading an article saying something like
"The number of bugs introduced doesn't vary much with different programming languages, but it depends pretty much on SLOC (source lines of code). So, using the programming language that can implement the same functions with smaller SLOC is preferable in terms of stability."
The author wanted to stress the advantages of using functional programming, since one can normally implement the same functionality with fewer lines of code. I remember the author cited a research paper about the irrelevance of the choice of programming language to the number of bugs.
Is there anyone who knows the research paper or the article?
Paul Graham wrote something very like this in his essay Succinctness is Power. He quotes a report from Ericsson, which may be the paper you remember?
Reports from the field, though they will necessarily be less precise than "scientific" studies, are likely to be more meaningful. For example, Ulf Wiger of Ericsson did a study that concluded that Erlang was 4-10x more succinct than C++, and proportionately faster to develop software in:
Comparisons between Ericsson-internal development projects indicate similar line/hour productivity, including all phases of software development, rather independently of which language (Erlang, PLEX, C, C++, or Java) was used. What differentiates the different languages then becomes source code volume.
I'm not sure if it's the source you're thinking of, but there's something about this in Code Complete, chapter 27.3 (p652), which references "Program Quality and Programmer Productivity" (Jones 1977) and "Estimating Software Costs" (Jones 1998).
I've seen this argument about "succinctness = power" a few times, and I've never really bought it. That's because there are languages (e.g., J, Ursala) which are quite succinct but not (IMO) easy to read because they put so much meaning into individual symbols.
Perhaps the true metric should be the extent to which it is possible to write a particular algorithm both clearly and succinctly. Mind you, I don't know how to measure that.
The book Pragmatic Thinking & Learning points to this article.
Can a Manufacturing Quality Model Work for Software?