programming late binding - programming-languages

I have a situation that is very similar to late binding in programming languages. I work for a Java EE-based enterprise software shop, and all the happenings in the programming universe make me think there could be a dynamic language better suited than Java to the problem at hand.
Scenario - We are writing the UI for a firewall configuration app. The firewall rules have to be defined at a higher level of abstraction, in terms of abstract objects that can represent a class/family of actual devices, etc. This is the design phase.
There is a deploy phase in which the abstract objects are resolved to produce the actual CLI commands to be pushed to the hardware. Here "resolution" means mapping the abstract objects to real values like IP/port/zone, based on a list of runtime contexts available at deploy time.
This process sounds very similar to late binding in the compilation/interpretation phase of languages, and it makes me wonder whether there is another language with implicit support for modelling this more accurately. Could anyone throw some light on this?
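Not an argument for any particular language, but here is a minimal sketch (in Haskell, purely to illustrate the structure; the names AbstractRule, Context and resolve are invented for this example) of the design/deploy split described above, where an abstract rule is only bound to concrete values once a deploy-time context is supplied:

import qualified Data.Map as M

-- Design phase: rules refer to abstract endpoints, not concrete addresses.
data AbstractRule = AllowFrom String String   -- e.g. AllowFrom "webTier" "dbTier"
  deriving Show

-- Deploy-time context: resolves abstract names to concrete values.
type Context = M.Map String (String, Int)     -- name -> (ip, port)

-- "Late binding": the abstract rule only becomes a concrete CLI line
-- once a context is supplied at deploy time.
resolve :: Context -> AbstractRule -> Maybe String
resolve ctx (AllowFrom src dst) = do
  (srcIp, _)       <- M.lookup src ctx
  (dstIp, dstPort) <- M.lookup dst ctx
  pure ("permit tcp host " ++ srcIp ++ " host " ++ dstIp ++ " eq " ++ show dstPort)

main :: IO ()
main = print (resolve ctx (AllowFrom "webTier" "dbTier"))
  where
    ctx = M.fromList [("webTier", ("10.0.1.5", 0)), ("dbTier", ("10.0.2.7", 5432))]

The point is that the rule value itself carries no addresses; the binding happens as late as the deploy step, which any language with first-class data and functions can model directly.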

Related

What's the status of current Functional Reactive Programming implementations?

I'm trying to visualize some simple automatic physical systems (such things as pendulums, robot arms, etc.) in Haskell.
Often those systems can be described by equations like
df/dt = c*f(t) + u(t)
where u(t) represents some kind of 'intelligent control'. Those systems look to fit very nicely in the Functional Reactive Programming paradigm.
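For concreteness, here is a library-free sketch of such a system, assuming a fixed-step Euler integrator; the names (Signal, euler, uStep) are illustrative only and not taken from FAL or any FRP library:

-- df/dt = c * f(t) + u(t), integrated with a fixed step dt from the value f0.
type Time     = Double
type Signal a = Time -> a

euler :: Double -> Double -> Signal Double -> Double -> Signal Double
euler dt c u f0 t = go 0 f0
  where
    go s f
      | s >= t    = f
      | otherwise = go (s + dt) (f + dt * (c * f + u s))

-- Example control input: a unit step switched on at t = 1.
uStep :: Signal Double
uStep t = if t >= 1 then 1 else 0

main :: IO ()
main = mapM_ (print . euler 0.01 (-0.5) uStep 1.0) [0, 0.5 .. 3]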
So I grabbed the book "The Haskell School of Expression" by Paul Hudak,
and found that the domain-specific language "FAL" (for Functional Animation Language) presented there actually works quite pleasantly for my simple toy systems (although some functions, notably integrate, seemed to be a bit too lazy for efficient use, but that was easily fixable).
My question is: what is the most mature, up-to-date, well-maintained, performance-tuned alternative for more advanced, or even practical, applications today?
This wiki page lists several options for Haskell, but I'm not clear about the following respects:
The status of "reactive", the project from Conal Elliott who is (as I understand it) one of the inventors of this programming paradigm, looks a bit stale. I love his code, but maybe I should try other, more up-to-date alternatives? What are the primary differences between them, in terms of syntax/performance/runtime stability?
To quote from a survey in 2011, Section 6: "... FRP implementations are still not efficient enough or predictable enough in performance to be used effectively in domains which require latency guarantees ...". Although the survey suggests some interesting possible optimizations, given that FRP has been around for more than 15 years, I get the impression that this performance problem might be very, or even inherently, difficult to solve, at least within a few years. Is this true?
The same author of the survey talks about "time leaks" in his blog. Is the problem unique to FRP, or is it something we generally have when programming in a pure, non-strict language? Have you ever found it just too difficult to stabilize an FRP-based system over time, if not to make it performant enough?
Is this still a research-level project? Are people like plant engineers, robotics engineers, financial engineers, etc. actually using it (in whatever language suits their needs)?
Although I personally prefer a Haskell implementation, I'm open to other suggestions. For example, it would be particularly fun to have an Erlang implementation --- it would then be very easy to have an intelligent, adaptive, self-learning server process!
Right now there are mainly two practical Haskell libraries out there for functional reactive programming. Both are maintained by single persons, but are receiving code contributions from other Haskell programmers as well:
Netwire focusses on efficiency, flexibility and predictability. It has its own event paradigm and can be used in areas where traditional FRP does not work, including network services and complex simulations. Style: applicative and/or arrowized. Initial author and maintainer: Ertugrul Söylemez (this is me).
reactive-banana builds on the traditional FRP paradigm. While it is practical to use, it also serves as ground for classic FRP research. Its main focus is on user interfaces, and there is a ready-made interface to wx. Style: applicative. Initial author and maintainer: Heinrich Apfelmus.
You should try both of them, but depending on your application you will likely find one or the other to be a better fit.
For games, networking, robot control and simulations you will find Netwire to be useful. It comes with ready-made wires for those applications, including various useful differentials, integrals and lots of functionality for transparent event handling. For a tutorial visit the documentation of the Control.Wire module on the page I linked.
For graphical user interfaces currently your best choice is reactive-banana. It already has a wx interface (as a separate library reactive-banana-wx) and Heinrich blogs a lot about FRP in this context including code samples.
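To give a feel for the arrowized style that Netwire (and Yampa) build on, here is a toy, self-contained signal function; the names below are illustrative and deliberately not the API of either library:

-- One step transforms an input sample into an output sample and the next wire.
newtype SF a b = SF { stepSF :: Double -> a -> (b, SF a b) }   -- Double = time delta

-- A simple Euler integral as a signal function.
integralSF :: Double -> SF Double Double
integralSF acc = SF $ \dt x ->
  let acc' = acc + dt * x
  in (acc', integralSF acc')

-- Run a signal function over a list of (dt, input) samples.
runSF :: SF a b -> [(Double, a)] -> [b]
runSF _  []           = []
runSF sf ((dt, x):xs) = let (y, sf') = stepSF sf dt x
                        in y : runSF sf' xs

main :: IO ()
main = print (runSF (integralSF 0) (replicate 5 (0.1, 1.0)))
-- the running integral of a constant input: [0.1, 0.2, ..., 0.5]

Both libraries provide a far richer version of this idea (events, switching, time handling), but the stepping structure is essentially the same.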
To answer your other questions: FRP isn't suitable in scenarios where you need real-time predictability. This is largely due to Haskell, but unfortunately FRP is difficult to realize in lower level languages. As soon as Haskell itself becomes real-time-ready, FRP will get there, too. Conceptually Netwire is ready for real-time applications.
Time leaks aren't really a problem anymore, because they are largely related to the monadic framework. Practical FRP implementations simply don't offer a monadic interface. Yampa has started this and Netwire and reactive-banana both build on that.
I know of no commercial or otherwise large scale projects using FRP right now. The libraries are ready, but I think the people aren't – yet.
Although there are some good answers already, I'm going to attempt to answer your specific questions.
reactive is not usable for serious projects, due to time leak problems. (see #3). The current library with the most similar design is reactive-banana, which was developed with reactive as an inspiration, and in discussion with Conal Elliott.
Although Haskell itself is inappropriate for hard real-time applications, it is possible to use Haskell for soft realtime applications in some cases. I'm not familiar with current research, but I don't believe this is an insurmountable problem. I suspect that either systems like Yampa, or code generation systems like Atom, are possibly the best approach to solving this.
A "time leak" is a problem specific to switchable FRP. The leak occurs when a system is unable to free old objects because it may need them if a switch were to occur at some point in the future. In addition to a memory leak (which can be quite severe), another consequence is that, when the switch occurs, the system must pause while the chain of old objects is traversed to generate current state.
Non-switchable FRP libraries such as Yampa and older versions of reactive-banana don't suffer from time leaks. Switchable FRP libraries generally employ one of two schemes: either they have a special "creation monad" in which FRP values are created, or they use an "aging" type parameter to limit the contexts in which switches can occur. elerea (and possibly netwire?) use the former, whereas recent reactive-banana and grapefruit use the latter.
By "switchable FRP", I mean an implementation of Conal's function switcher :: Behavior a -> Event (Behavior a) -> Behavior a, or something with identical semantics. This means that the shape of the network can change dynamically as it runs.
This doesn't really contradict #ertes's statement about monadic interfaces: it turns out that providing a Monad instance for an Event makes time leaks possible, and with either of the above approaches it's no longer possible to define the equivalent Monad instances.
Finally, although there's still a lot of work remaining to be done with FRP, I think some of the newer platforms (reactive-banana, elerea, netwire) are stable and mature enough that you can build reliable code from them. But you may need to spend a lot of time learning the ins and outs in order to understand how to get good performance.
I'm going to list a couple of items in the Mono and .Net space and one from the Haskell space that I found not too long ago. I'll start with Haskell.
Elm - link
Its description as per its site:
Elm aims to make front-end web development more pleasant. It introduces a new approach to GUI programming that corrects the systemic problems of HTML, CSS, and JavaScript. Elm allows you to quickly and easily work with visual layout, use the canvas, manage complicated user input, and escape from callback hell.
It has its own variant of FRP. From playing with its examples it seems pretty mature.
Reactive Extensions - link
Description from its front page:
The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators. Using Rx, developers represent asynchronous data streams with Observables, query asynchronous data streams using LINQ operators, and parameterize the concurrency in the asynchronous data streams using Schedulers. Simply put, Rx = Observables + LINQ + Schedulers.
Reactive Extensions comes from MSFT and implements many excellent operators that simplify handling events. It was open sourced just a couple of days ago. It's very mature and used in production; in my opinion it would have been a nicer API for the Windows 8 APIs than the one the TPL library provides, because observables can be both hot and cold and can be retried/merged etc., while tasks always represent hot or done computations that are either running, faulted or completed.
I've written server-side code using Rx for asynchrony, but I must admit that writing functionally in C# can be a bit annoying. F# has a couple of wrappers, but it's been hard to track the API development, because the group is relatively closed and isn't promoted by MSFT like other projects are.
Its open sourcing came with the open sourcing of its IL-to-JS compiler, so it could probably work well with JavaScript or Elm.
You could probably bind F#/C#/JS/Haskell together very nicely using a message broker, like RabbitMQ and SockJS.
Bling UI Toolkit - link
Description from its front page:
Bling is a C#-based library for easily programming images, animations, interactions, and visualizations on Microsoft's WPF/.NET. Bling is oriented towards design technologists, i.e., designers who sometimes program, to aid in the rapid prototyping of rich UI design ideas. Students, artists, researchers, and hobbyists will also find Bling useful as a tool for quickly expressing ideas or visualizations. Bling's APIs and constructs are optimized for the fast programming of throw-away code as opposed to the careful programming of production code.
Complementary LtU article.
I've tested this, but not worked with it on a client project. It looks awesome and has nice C# operator overloading that forms the bindings between values. It uses dependency properties in WPF/SL/(WinRT) as event sources. Its 3D animations work well on reasonable hardware. I would use this if I ended up on a project in need of visualizations; probably porting it to Windows 8.
ReactiveUI - link
Paul Betts, previously at MSFT, now at GitHub, wrote that framework. I've worked with it pretty extensively and like the model. It's more decoupled than Bling (by its nature, from using Rx and its abstractions), making it easier to unit test code that uses it. The GitHub git client for Windows is written with it.
Comments
The reactive model is performant enough for most performance-demanding applications. If you are thinking of hard real-time, I'd wager that most GC languages have problems there. Rx and ReactiveUI create a certain number of small objects that need to be GCed, because that's how subscriptions are created/disposed and intermediate values are propagated in the reactive "monad" of callbacks. In general, on .NET I prefer reactive programming over task-based programming, because callbacks are static (known at compile time, no allocation) while tasks are dynamically allocated (not known; all calls need an instance and create garbage), and lambdas compile into compiler-generated classes.
Obviously C# and F# are strictly evaluated, so time-leak isn't a problem here. Same for JS. It can be a problem with replayable or cached observables though.

Agents in Haskell or functional languages?

I'm architecting a Multi-Agent System (MAS) framework to describe Belief-Desire-Intention (BDI) agents in Haskell (i.e. agents are concurrent, communicating monadic actions).
I searched the web thoroughly, but I wasn't able to find any reference to similar work, apart from a technical report of an unfinished project, Specifying and Controlling Agents in Haskell.
Do you know about any existing implementation or research paper dealing with BDI agents that can be defined in Haskell or in any other functional language, please?
My aim is to find possible related works, everything that could manage a system of concurrent intelligent agents written in a functional language. I don't need anything specific, I just want to find out whether my work has something in common with existing approaches.
edit: I managed to find a reference to Clojure, a Lisp dialect that supports a form of agent programming very close to the actor model, but it's not meant to directly support BDI agents (one would have to implement another layer on top of it to get the BDI part done, I guess).
To sum up, it doesn't seem like there are proposals for BDI-style communicating agents described by means of functional languages, so together with a friend/colleague of mine we collected info about related work, put together some ideas, and we wrote a short position paper that I will present at the DALT2012 workshop. It's a really preliminary work, so do not expect too much from it, but I think in the future it may evolve in something interesting.
Alessandro Solimando, Riccardo Traverso. Designing and Implementing a Framework for BDI-style Communicating Agents in Haskell. DALT 2012, Workshop notes, pages 108--112.
EDIT:
I later found this project on GitHub, which uses free monads (whatever that means, I don't know about them) to provide a framework for multi-agent systems: https://github.com/fizruk/free-agent.
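For readers who, like me, hadn't met free monads before: the idea is that an agent program is described as a plain data structure, and separate interpreters decide how to run it (sequentially, concurrently, or just for logging). A minimal sketch, assuming invented names (AgentF, Agent, runPure) and not the API of the free-agent project:

{-# LANGUAGE DeriveFunctor #-}

-- The agent "instruction set": one constructor per primitive operation.
data AgentF k
  = Send String String k    -- send a message to a named agent, then continue
  | Receive (String -> k)   -- wait for a message, continue with it
  deriving Functor

-- A hand-rolled free monad over that instruction set.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= g = g a
  Free fa >>= g = Free (fmap (>>= g) fa)

type Agent = Free AgentF

send :: String -> String -> Agent ()
send to msg = Free (Send to msg (Pure ()))

receive :: Agent String
receive = Free (Receive Pure)

-- An agent program is just a value; interpreters decide how to run it.
greeter :: Agent ()
greeter = do
  msg <- receive
  send "sender" ("hello, got: " ++ msg)

-- A toy pure interpreter: feed a list of incoming messages, collect the sends.
runPure :: [String] -> Agent a -> [(String, String)]
runPure _      (Pure _)             = []
runPure inbox  (Free (Send to m k)) = (to, m) : runPure inbox k
runPure (i:is) (Free (Receive k))   = runPure is (k i)
runPure []     (Free (Receive _))   = []

main :: IO ()
main = print (runPure ["ping"] greeter)   -- [("sender","hello, got: ping")]

A concurrent interpreter could run the same greeter value over channels or MVars, which is what makes the description/execution split attractive for BDI-style agents.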

Has anyone used UML with OCL? Do programmers use it or only analysts who don't code?

I am trying to wrap my head around why we approach the problem of design by first deciding on a visual method (UML). Instead of starting with formal specifications that happen to also be executable (RAD prototyping), we start with diagrams that can't easily be proven to work. Then, when it comes time to prove properties of a model, we find we need to define constraints in our design, so we invent a formal syntax (OCL) to define constraints on the model. I am having a hard time understanding this leap back to where we started.
I find OCL-encumbered UML designs (even the samples shown in brochures) unreadable, even more impenetrable than the myriad of UML symbols and conventions. So what I want to know is: what are the key areas where OCL is used in the working software development world today, and for whom is it relevant or worthwhile to learn? What does your job role look like? Do architects who never write code use UML and OCL, or do programmers who also design and architect systems, on the same team that implements them, use it too?
[updated: Secondly, it occurs to me that Agile development seems kind of opposed to "Heavyweight" procedures, and that a domain specific language for design diagram constraints like OCL doesn't seem very Agile. Is UML+OCL used in ANY "Agile" shops, or is it universally eschewed by Scrummers?]
Interesting question.
The "holy grail" of the Object Constraint Language was to provide a framework that when coupled with UML allowed a tool to transform that into a concrete Object Graph / Meta Model i.e. a set of classes that already had their basic structure and constraints wired in, so that all the developer had to do was implement business methods. (all this in a language independent way)
JBuilder from Borland tried supporting this in their enterprise edition, and Delphi with ECO also made use of OCL in a practical way (though not as a transformation input) by supporting the query abilities. In fact Anders Inver from Borland / BoldSoft, and one of the ECO team, wrote the foreword to the OCL bible, The Object Constraint Language, Second Edition (Addison-Wesley).
My personal opinion is that there is not enough payback to warrant the learning curve. Without using specialised (and expensive) tools, the UML/OCL model is still not easily testable in real terms, and the value you get is marginal (if anything) over iterative test-driven development. The language-independence thing is way overrated; let's face it, once we start down the Java, C#, Delphi, C++ or whatever path, there is no way in hell we will re-generate in something else, it's just not practical.
For what it's worth, I have yet to see Model Driven Development with OCL actually used in the real world for a real project (other than as a proof of concept). What seems to be working lately in the real world is Agile processes, Scrum, etc., and just iterative development using standard IDEs with standard languages and user stories (perhaps with some UML on a whiteboard or storyboard).
The benefit of defining OCL constraints on your models is the possibility of specifying all the business rules of your domain that you cannot represent with the graphical constructs of UML. For instance, multiplicities are constraints that can be represented graphically as part of an association definition, but saying that attribute A of class C has to be greater than 5 is also a constraint, and in this case it has to be defined in OCL, since UML provides no graphical syntax for it.
Obviously, this would be very useful if code-generation tools were able to take these constraints and automatically generate code that enforces them (e.g. as if-conditions in Java methods, or as triggers in databases that raise an exception when the data violates the rule).
Unfortunately, there aren't many tools offering this functionality (see a list here: http://modeling-languages.com/content/list-ocl-tools) but the situation is slowly improving
Much has changed.
UML 2.5 used Eclipse OCL tooling to remedy the numerous bugs in the UML 2.0...2.4 embedded OCL.
SysML is using Papyrus and Eclipse OCL tooling for SysML next.
Eclipse OCL provides a much stronger UI and an OCL2Java code generator so that OCL embedded in Ecore/UML provides much more acceptable/executable code.
Much has still to change.
The OCL embedded in UML has never been seriously executed.
The OCL definition of OCL itself is lamentable.
I worked with OCL constraints as a small part of my bachelor thesis. Borland (now Micro Focus) Together had an interesting approach, generating Java code out of OCL constraints. You defined that a variable X should be >= 0 or not empty, and Together created assert statements to verify it automatically.

What is the best way to create multiple language versions of a domain?

I would like to create a set of domain objects in multiple languages, so that I can target different platforms. I have been looking at external DSLs as a way to define a language for my domain, and then potentially writing adapters that generate code for the languages I'm interested in targeting. Is this the best way to solve this problem? Or is it just simpler to maintain multiple versions of the project?
I think that Apache Thrift delivers what you are asking for.
Sorry for the late answer, but as you mention C# being your main language, this practically fully supported, Visual Studio-based technology is exactly what you are looking for.
You have to understand what you want to abstract with your DSLs, but the multiple-platform support is trivial on top of that.
Disclaimer: This is our technology, but it's publicly open and it solves exactly the problem presented in the question.
http://abstraction.codeplex.com/
Note! Mind the very "alpha" stage of the current download; I suggest you skip the zipped download and grab the latest source. I will be updating it with better constructs in the relatively near future. Check out the "Context" implementation in the "Production/Dev/AbstractionTemplate" solution.
It is difficult to be helpful without understanding what you are planning to use your DSL for.
Is portability your main problem here?
To successfully target these different platforms, you will probably have to maintain platform-specific layers anyway (generated or not).
If you plan to write your whole application in your DSL and then use your own compiler to transform it into runnable code for each platform, that is most probably a bad idea: too complex and over-engineered.
However, if you have a well-defined chunk of platform-independent logic, then a DSL is a good choice. Just write an interpreter for it on each target platform (provided that performance is not critical, this is also simpler and easier than generating code).
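As a minimal sketch of that last point, here is a deliberately tiny DSL with two "interpreters", one that evaluates and one that renders for a different target; all names are invented for illustration:

-- A deliberately tiny rule DSL.
data Rule
  = Require String      -- a named condition must hold
  | Both Rule Rule
  | AnyOf Rule Rule
  deriving Show

-- Interpreter 1: evaluate a rule against a set of known facts.
eval :: [String] -> Rule -> Bool
eval facts (Require c) = c `elem` facts
eval facts (Both a b)  = eval facts a && eval facts b
eval facts (AnyOf a b) = eval facts a || eval facts b

-- Interpreter 2: render the same rule for another target (here, a report).
render :: Rule -> String
render (Require c) = c
render (Both a b)  = "(" ++ render a ++ " AND " ++ render b ++ ")"
render (AnyOf a b) = "(" ++ render a ++ " OR "  ++ render b ++ ")"

main :: IO ()
main = do
  let r = Both (Require "authenticated") (AnyOf (Require "admin") (Require "owner"))
  print (eval ["authenticated", "owner"] r)    -- True
  putStrLn (render r)                          -- (authenticated AND (admin OR owner))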
What is the best way to create multiple language versions of a domain?
This is (was?) somehow the idea of Model Driven Architecture (MDA). Quoting Model-driven architecture from Wikipedia:
The Model-Driven Architecture approach defines system functionality using a platform-independent model (PIM) using an appropriate domain-specific language (DSL).
Then, given a platform definition model (PDM) corresponding to CORBA, .NET, the Web, etc., the PIM is translated to one or more platform-specific models (PSMs) that computers can run. This requires mappings and transformations and should be modeled too.
The PSM may use different Domain Specific Languages (DSLs), or a General Purpose Language (GPL) like Java, C#, PHP, Python, etc. Automated tools generally perform this translation.
Depending on the complexity of your domain and the availability of a MDA Tool, this might be an option (with a lower implementation cost).
See also
MDA: Nice idea, shame about the ...
Language Workbenches and Model Driven Architecture
UML vs. Domain-Specific Languages
DSL in the context of UML and GPL
UML or DSL: Which Bear Is Best? (be sure to read this one)

For reliable code, NModel, Spec Explorer, F# or other?

I've got a business app in C#, with unit tests. Can I increase the reliability and cut down on my testing time and expense by using NModel or Spec Explorer? Alternately, if I were to rewrite it in F# (or even Haskell), what kinds (if any) of reliability increase might I see?
Code Contracts? ASML?
I realize this is subjective, and possibly argumentative, so please back up your answers with data, if possible. :) Or maybe with a worked example, such as Eric Evans' Cargo Shipping System?
If we consider unit tests to be specific and strong theorems, checked quasi-statically on particular "interesting instances"; types to be general but weak theorems (usually checked statically); and contracts to be general and strong theorems, checked dynamically for particular instances that occur during regular program operation (from B. Pierce's Types Considered Harmful),
where do these other tools fit?
We could pose the analogous question for Java, using Java PathFinder, Scala, etc.
Reliability is a function of several variables, including the general architecture of the software, the capability of the programmers, the quality of the requirements and the maturity of your configuration management and general QA processes. All these will affect the reliability of a rewrite.
Having said that, language certainly has a significant impact. All other things being equal:
Defects are roughly proportional to SLOC count. Languages that are terser see fewer coding errors. Haskell seems to require about 10% of the SLOC required by C++, Erlang about 14%, Java around 50%. I guess C# probably fits alongside Java on this scale.
Type systems are not born equal. Languages with type inference (e.g. Haskell, and to a lesser extent O'Caml) will have fewer defects. Haskell in particular will allow you to encode invariants in the type system so that a program will only compile if they can be proven true. Doing so requires extra work, so consider the trade-off on a case-by-case basis.
Managing state is a source of many defects. Functional languages, and especially pure functional languages, avoid this problem.
QuickCheck and its relatives allow you to write unit and system tests that verify general properties rather than individual test cases. This can greatly reduce the work required to test the code, especially if you are aiming for high test coverage metrics. A set of QuickCheck properties resembles a formal specification, and this concept fits nicely with Test Driven Development (write your tests first, and when the code passes them you are done).
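For example, a couple of QuickCheck properties (using reverse as a stand-in for your own domain logic) look like this; quickCheck then generates and checks many random cases for each property:

import Test.QuickCheck

-- "Reversing twice gives back the original list."
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

-- "Reverse distributes over append, flipping the order."
prop_reverseDistributes :: [Int] -> [Int] -> Bool
prop_reverseDistributes xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs

main :: IO ()
main = do
  quickCheck prop_reverseInvolutive
  quickCheck prop_reverseDistributes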
Put all of these things together and you should have a powerful toolkit for driving quality through the development lifecycle. Unfortunately I'm not aware of any robust studies that actually prove this. All the factors I listed at the start would confound any real study, and you would need a lot of data before an unambiguous pattern showed itself.
Some comments on the quote, in the context of C# which is my "first" language:
Unit tests to be specific and strong theorems,
Yes, but they might not give you first-order logic checks, like "for all x there exists a y where f(y)", more like "there exists a y, here it is (!), f(y)", aka setup, act, assert. ;)*
checked quasi-statically on particular "interesting instances" and Types to be general but weak theorems (usually checked statically),
Types are not necessarily that weak**.
and contracts to be general and strong theorems, checked dynamically for particular instances that occur during regular program operation. (from B. Pierce's Types Considered Harmful)
Unit Testing
Pex + Moles I think is getting closer to the first-order logic type of checking, as it generates the edge cases and uses the C9 solver to do integer constraint solving. I would really like to see more Moles tutorials (Moles is for replacing implementations), specifically together with some sort of inversion-of-control container that can leverage the stub and real implementations of abstract classes and interfaces that already exist.
Weak Types
In C# they are fairly weak, sure: generic typing allows you to add protocol semantics for one operation -- i.e. constraining types to implement interfaces, which are in some sense protocols that implementing classes agree to. However, the static typing of the protocol only covers single operations.
Example: Reactive Extensions API
Let's take Reactive Extensions as a discussion topic.
The contract implemented by the consumer (the observer) and invoked by the observable:
interface IObserver<in T> {
    void OnNext(T value);
    void OnCompleted();
    void OnError(System.Exception error);
}
There is more to the protocol than this interface shows: methods called on an IObserver<in T> instance must follow this protocol:
Ordering:
OnNext{0,n} (OnCompleted | OnError){0, 1}
Furthermore, on another axis, the time dimension:
Time:
for all t|-> t:(method -> time). t(OnNext) < t(OnCompleted)
for all t|-> t:(method -> time). t(OnNext) < t(OnError)
i.e. no invocation to OnNext may be done after one to OnCompleted xor OnError.
Furthermore, the axis of parallelism:
Parallelism:
no invocation to OnNext may be done in parallel
i.e. there's a scheduling constraint that needs to be followed by implementers of IObservable: no IObservable may push from multiple threads at the same time without first synchronizing the invocation around a context.
How do you test this contract holds in an easy way? With c#, I don't know.
Consumer of API
From the consuming side of the application, there might be interactions between different contexts, such as Dispatcher, Background/other threads, and preferably we'd like to give guarantees that we don't end up in a deadlock.
Further, there is the requirement to handle deterministic disposal of the observables. It might not always be clear when an extension method's returned IObservable instance takes care of the method's argument IObservable instances and disposes of them, so there's a requirement to know about the inner workings of the black box (alternatively you can let the references go in a "reasonable way" and the GC will take them at some point).
<<< Without Reactive Extensions, it's not necessarily easier:
There is the task pool, on top of which the TPL is implemented. In the task pool we have a work-stealing queue of delegates to invoke on the worker threads.
Using the APM/begin/end or the async pattern (which queues to the task pool) can leave us open to callback-ordering bugs if we are mutating state. Also, the protocol of begin-invocations and their callbacks might be too convoluted, and hence impossible to follow. I read a post-mortem the other day about a Silverlight project having problems seeing the business-logic forest for all the callback trees. Then there's the possibility of implementing the poor man's async monad: an IEnumerable with an async 'manager' iterating through it and calling MoveNext() every time a yielded IAsyncResult completes.
...and don't get me started on the nuuuumerous hidden protocols in IAsyncResult.
Another problem, without using Reactive extensions is the turtles problem - once you decide that you want an IO-blocking operation to be async, there need to be turtles all the way down to the p/invoke call that places the associated Win32-thread on an IO-completion port! If you have three layers and then some logic as well inside of your topmost layer, you need to make all three layers implement the APM pattern; and fulfil the numerous contract obligations of IAsyncResult (or leave it partially broken) -- and there's no default public AsyncResult implementation in the base class library.
>>>
Working with exceptions from the interface
Even with the above memory-management + parallelism + contract + protocol items covered, there are still exceptions to be handled (not just received and forgotten about) in a good, reliable application. I want to give an example.
Context
Let's say that we find ourselves catching an exception from the contract/interface (not necessarily from Reactive Extensions' IObservable implementations here, which have monadic rather than stack-frame-based exception handling).
Hopefully the programmer was diligent and documented the possible exceptions, but there might be exception possibilities all the way down. If everything is correctly defined with code contracts, at least we can be sure we are capable of catching a few of the exceptions, but many different causes may be lumped together inside one exception type, and once an exception is thrown, how do we ensure that only the smallest possible unit of work has to be redone?
Aim
Say that we are pushing data records from a message-bus consumer in our application, and receiving them on a background thread which decides what to do with them.
Example
A real-life example here could be Spotify, which I'm using every day.
My $100 router/access point throws in the towel at random times. I guess it has a cache-bug or some sort of stack overflow bug, as it happens every time I push more than 2 MB/s LAN/WAN data through it.
I have two NICs up: the WiFi and the Ethernet card. The Ethernet connection goes down. The sockets of Spotify's event-handler loop return an invalid code (I think it's C or C++) or throw exceptions. Spotify has to handle it, but it doesn't know what my network topology looks like (and there is no code to try all routes/update the routing table and hence the interface to be used); I still have a route to the internet, just not on the same interface. Spotify crashes.
A thesis
Exceptions are simply not semantic enough. I believe one can look at exceptions from the perspective of the Error monad in Haskell: we either continue or break, unwinding the stack, executing the catches, executing the finallys, and praying we don't end up with race conditions on other exception handlers or the GC, or with async exceptions for outstanding IO completion ports.
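To make the Error-monad view concrete, a small Haskell sketch where the error is just a value in Either, so a failure short-circuits ("break") and a success flows on ("continue"); the names (NetError, pickRoute, openSocket) are invented for this illustration:

data NetError = NoRoute | SocketDown deriving Show

pickRoute :: [String] -> Either NetError String
pickRoute []    = Left NoRoute
pickRoute (r:_) = Right r

openSocket :: String -> Either NetError String
openSocket iface = Right ("socket on " ++ iface)

connect :: [String] -> Either NetError String
connect ifaces = do
  r <- pickRoute ifaces   -- a Left here short-circuits ("break")
  openSocket r            -- a Right keeps going ("continue")

main :: IO ()
main = do
  print (connect ["ethernet", "wifi"])   -- Right "socket on ethernet"
  print (connect [])                     -- Left NoRoute

The SEH2 idea below is essentially about making that error value rich enough that the caller can pick a compensating action instead of just unwinding.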
But when one of my interfaces' connections/routes goes down, Spotify crashes (well, freezes).
Now we have SEH (Structured Exception Handling), but I think we will have SEH2 in the future, where each source of exceptions provides, along with the actual exception, a discriminated union (i.e. it should be statically typed to the linked library/assembly) of possible compensating actions. In this example, I could imagine Windows' network API telling the application to execute a compensating action to open the same socket on another interface, or to handle it on its own (like now), or to retry the socket with some kernel-managed retry policy. Each of these options is part of a discriminated union type, so the implementer must use one of them.
I think that, when we have SEH2, it won't be called exceptions anymore.
^^
Anyway, I have digressed too much already.
Instead of reading my thoughts, listen to some of Erik Meijer's -- this is a very good round-table discussion between him and Joe Duffy. They discuss handling side-effects of calls. Or have a look at this search listing.
I'm finding myself in a position, today, as a consultant, of maintaining a system where stronger static semantics could be good, and I'm looking at tools which can give me the speed of programming + the correctness verification on a level which is accurate and precise. I haven't found it yet.
I simply think we are another 20 years, if not more, away from developer-oriented reliable computing. There are just too many languages, frameworks, marketing BS and concepts in the air right now for the ordinary developer to stay on top of things.
Why is this under the heading of "weak types"?
Because I find that the type system will be part of the solution; types need not be weak! Terse code and strong type systems (think Haskell) help programmers build reliable software.

Resources