Known "Z notation" applications? - programming-languages

I was just thinking back to my university classes and was wondering whether anyone out here has ever used the "Z notation" in a professional environment. I honestly must say it was the single most boring class I have ever attended in my life. Maybe that was because of the teacher, but at the time we all thought it was a big waste of time. I might have been wrong, which is why I'd like to hear from you about it.
If you are using it or some derived language (Z++), I'd just like to know how it is useful to you. I'm just curious about commonly known applications of Z, or about your own application.
For those who are not familiar: http://staff.washington.edu/jon/z/z-examples.html

It's worth looking at the B Method (http://en.wikipedia.org/wiki/B-Method). It's a slightly more pragmatic descendant of Z. The idea is that you can actually discharge a bunch of proof obligations through refinement steps (with the help of a theorem prover hiding behind the scenes) and then eventually generate code directly from your specification. I believe it has been used in a number of "real world" projects.

Z is (as you pointed out) a specification notation intended to facilitate formal verification, not a programming language.
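For anyone who has never seen a schema, here is the classic birthday-book state schema (essentially the running example in Spivey's reference manual), written with the fuzz/zed LaTeX macros that Z specifications are usually typeset with:

    \begin{schema}{BirthdayBook}
      known : \power NAME \\
      birthday : NAME \pfun DATE
    \where
      known = \dom birthday
    \end{schema}

The declarations above the line introduce the state (a set of known names and a partial function from names to birthdays), and the predicate below the line is an invariant that must hold in every state.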
One of the larger publicly known projects specified using the notation was the protocol used in the Mondex smart-card platform. There was recently a revival effort in which multiple teams mechanically checked the correctness of the original manual proofs, including verification of the original Z specifications. Not surprisingly, no new fundamental errors were detected, although a number of assumptions were shown to be invalid by most of the teams.
The National Security Agency's Tokeneer project was specified in Z before being implemented in the SPARK Ada subset.
Given the expressiveness of the notation, it is unlikely that it will be extended; doing so would also make proofs more complex and be counter-productive.

I first encountered Z notation when I read that XCB (a replacement for the original Xlib API in X11) was proven correct using Z notation.

The Web Services Description Language (WSDL) was developed using the Z notation. You can find the specification with the Z notation here: http://www.w3.org/TR/wsdl20/wsdl20-z.html. The specification states:
The Z Notation was used to improve the quality of the normative text that defines the Component Model, and to help ensure that the test suite covered all important rules implied by the Component Model.

I had to do Z back at uni! It brings back memories. If you have a Linux install handy, try the CADiZ application...

'Z++' is called Object-Z. I haven't been active in Z since the early '90s (when I worked in part on a Windows port of CADiZ, which appears to have vanished), so I have no idea about its current community, but some more recent papers have been published on using Object-Z to formalise UML.

Related

Anyone know of any real systems using Computational Semantics with Lambda Calculus?

I was wondering whether Computational Semantics is actually used in any real-world system (simple examples here and here). I would like to see how an actual system works.
It seems like there are a bunch of issues with actually using Computational Semantics in any real world system:
Just labeling sentences with part-of-speech tags is error prone.
You also need a reliable parse tree, which is error prone to obtain, and there can be many valid trees for one sentence.
Finding which pronouns refer to which entities is error prone.
Word-sense disambiguation is another source of errors, and multiple meanings can be valid in the same context.
Any context-free grammar of English I can find seems to be incomplete.
Finally, after all these sources of error are dodged, we can convert the sentence to FOL with Computational Semantics!
Also, I can't seem to figure out how to deal with prepositions in Computational Semantics.
Is this really just an academic exercise or is Computational Semantics actually useful?
There are several better approaches to natural language than simple lambda calculus and context-free grammars, e.g. HPSG, Montague Grammar, TAG, ...
Word disambiguation can be handled by Markov chains, for example.
Siri, Google Now, Cortana, and IBM Watson are some examples of real-world systems.
Google Translate is another application that uses Computational Semantics.
I believe (but don't quote me on this) that the technology spun out of the now-defunct Natural Language Theory and Technology group at the Palo Alto Research Center (PARC, formerly Xerox PARC) utilizes the lambda calculus to provide inferences about textual entailments. I don't know for sure; I only worked there for a summer (as a freshman, so I was wonderfully ignorant of most of the goings-on there).
Anyway, that technology was developed over 30 years, and then Powerset bought the rights to all of it for $15 million, attempting to disrupt smart search in general. Then Bing came along, swallowed Powerset whole, and went on to absorb the entire research group as well. The principal investigators now work solely as adjunct professors at Stanford. Sad.

Machine representation of natural text

I'm currently working on high-level machine representation of natural text.
For example,
"I had one dog but I gave it to Danny who didn't have any"
would be
I.have.dog = 1
I.have.dog -= 1
Danny.have.dog = 0
Danny.have.dog += 1
something like this....
I'm trying to find resources, but can't really find matching topics.
Is there a valid subject name for this type of research? Any library of resources?
Natural logic sounds like something related but it's not really the same thing I'm working on. Please help me out!
Representing natural language's meaning is the domain of computational semantics. Within that area, lots of frameworks have been developed, though the basic one is still first-order logic.
Specifically, your problem seems to be that of recognizing discourse semantics, which deals with information change brought about by language use. This is pretty much an open area of research, so expect to find a lot of research papers and PhD positions, but little readily-usable software.
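To make "information change" a bit more concrete, here is a minimal toy sketch in Haskell (my own made-up encoding, not a standard discourse-representation implementation) that treats each sentence of your example as an update on a model:

    import qualified Data.Map as Map

    type Owner    = String
    type Model    = Map.Map Owner Int   -- how many dogs each owner has
    type Sentence = Model -> Model      -- a sentence updates the model

    iHadOneDog, iGaveItToDanny :: Sentence
    iHadOneDog       = Map.insert "I" 1
    iGaveItToDanny m = Map.adjust (subtract 1) "I" (Map.insertWith (+) "Danny" 1 m)

    -- Interpreting a discourse means threading the model through the sentences.
    interpret :: [Sentence] -> Model
    interpret = foldl (flip ($)) Map.empty

    main :: IO ()
    main = print (interpret [iHadOneDog, iGaveItToDanny])
    -- fromList [("Danny",1),("I",0)]

Real discourse semantics is of course far richer than a map of counters, but the shape is the same: meanings are functions from contexts to contexts.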
As larsmans already said, this is pretty much a really open field of research, called computational semantics (a subfield of computational linguistics).
There's one important thing that you'll need to understand before starting off in the comp-sem world: most people there use fancy high-level languages. By high-level I don't mean C, but more something like LISP, Prolog, or, as of late, Haskell. Computational semantics is very close to logic, which is why people researching the topic are more comfortable with functional and logical languages — they're closer to what they actually use all day long.
It will also be very useful for you to first look at some foundational course in predicate logic, since that's what the underlying literature usually takes for granted.
A good introduction to the connection between logic and language is L.T.F. Gamut's Logic, Language, and Meaning, Volume I. It deals with the linguistic side of semantics, which won't help you implement anything, but it will help you understand the subsequent literature. That said, there are at least some books that will explain predicate logic as they go, but if you ask me, anyone really interested in the representation of language as a formal system should take a course in predicate and possibly intuitionistic and intensional logic.
To give you a bit of a peek, your example is rather difficult for current comp-sem approaches to treat. Not impossible, but already pretty high up the scale of difficulty. What makes it difficult is the tense, for one part (dealing with tense and aspect will typically bring you into event semantics), but also that you'd have to define the give and have relations in a way that works for this example. (An easier example to work with would be, say, "I had a dog, but I gave it to Danny who didn't have any." Can you see why?)
Let's translate "I have a dog."
∃x[dog(x) ∧ have(I,x)]
(There is an object x such that x is a dog and the have-relation holds between "I" and x.)
These sentences would then be evaluated against a model, where the "I" constant might already be defined. By evaluating multiple sentences in sequence, you could then alter that model so that it keeps track of a conversation.
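Here is a minimal Haskell sketch of that idea (the entity names are made up for illustration; this is not Blackburn and Bos's actual encoding): a tiny domain, a model given by two predicates, and the existential formula above evaluated by checking every individual in the domain.

    -- A toy first-order model and the evaluation of ∃x[dog(x) ∧ have(I,x)].
    data Entity = I | Danny | Rex deriving (Eq, Show, Enum, Bounded)

    domain :: [Entity]
    domain = [minBound .. maxBound]

    -- The model: which entities are dogs, and who has what.
    dog :: Entity -> Bool
    dog e = e == Rex

    have :: Entity -> Entity -> Bool
    have x y = (x, y) == (I, Rex)

    -- ∃x[dog(x) ∧ have(I, x)]
    iHaveADog :: Bool
    iHaveADog = any (\x -> dog x && have I x) domain

    main :: IO ()
    main = print iHaveADog   -- True in this model

Replacing the have relation after "I gave it to Danny" would then be the model-updating step mentioned above.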
Let me give you some suggestions to start you off.
The classic comp-sem system is SHRDLU, which places geometric figures of a certain color in a virtual environment. You can play around with it, since there's a Windows-compatible demo online at the page I linked to.
The best modern book on the topic is probably Blackburn and Bos (2005). It's written in Prolog, but there are sources linked on the page to learn Prolog (now!).
Van Eijck and Unger give a good course on computational semantics in Haskell, which is a bit more recent, but in my eyes not quite as educational in terms of raw computational semantics as Blackburn and Bos.

Agents in Haskell or functional languages?

I'm architecting a Multi-Agent System (MAS) framework to describe Belief-Desire-Intention (BDI) agents in Haskell (i.e. agents are concurrent, communicating monadic actions).
I searched the web thoroughly, but I wasn't able to find any reference to similar work, apart from a technical report of an unfinished project, Specifying and Controlling Agents in Haskell.
Do you know of any existing implementation or research paper dealing with BDI agents defined in Haskell or in any other functional language?
My aim is to find possible related work: anything that could manage a system of concurrent intelligent agents written in a functional language. I don't need anything specific; I just want to find out whether my work has something in common with existing approaches.
Edit: I managed to find a reference to Clojure, a Lisp dialect that supports a form of agent programming very close to the actor model, but it's not meant to directly support BDI agents (one would have to implement another layer on top of it to get the BDI part done, I guess).
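To make "concurrent, communicating monadic actions" a bit more concrete, here is a minimal sketch of the kind of thing I have in mind (all names are made up; this is not an existing framework): each agent is an IO action running in its own thread, holding its beliefs/desires/intentions in a TVar and revising beliefs on every message it receives.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM
    import Control.Monad (forever)

    data Belief    = Holds String   deriving (Eq, Show)
    data Desire    = Achieve String deriving (Eq, Show)
    data Intention = Intend String  deriving (Eq, Show)

    data AgentState = AgentState
      { beliefs    :: [Belief]
      , desires    :: [Desire]
      , intentions :: [Intention]
      }

    type Inbox = TChan String

    -- A very stripped-down deliberation loop: receive a message and revise
    -- beliefs; a real BDI cycle would also select intentions from desires
    -- and execute plans.
    agent :: Inbox -> TVar AgentState -> IO ()
    agent inbox st = forever $ do
      msg <- atomically (readTChan inbox)
      atomically $ modifyTVar' st (\s -> s { beliefs = Holds msg : beliefs s })

    main :: IO ()
    main = do
      inbox <- newTChanIO
      st    <- newTVarIO (AgentState [] [Achieve "greet"] [])
      _     <- forkIO (agent inbox st)
      atomically (writeTChan inbox "hello from another agent")
      threadDelay 100000   -- give the agent thread time to run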
To sum up, it doesn't seem like there are existing proposals for BDI-style communicating agents described by means of functional languages, so together with a friend/colleague of mine I collected information about related work, put together some ideas, and wrote a short position paper that I will present at the DALT 2012 workshop. It's really preliminary work, so don't expect too much from it, but I think it may evolve into something interesting in the future.
Alessandro Solimando, Riccardo Traverso. Designing and Implementing a Framework for BDI-style Communicating Agents in Haskell. DALT 2012, Workshop notes, pages 108--112.
EDIT:
I later found this project on GitHub, which uses free monads (whatever that means, I don't know about them) to provide a framework for multi-agent systems: https://github.com/fizruk/free-agent.

Is it possible to mark up all programming languages under the object-oriented paradigm using a common markup schema?

I have planned to develop a tool that converts a program written in one programming language (e.g. Java) into a common markup language (e.g. XML), and then converts that markup code into another language (e.g. C#).
In simple words, it is a programming-language converter that converts a program written in one language into another language.
I think it is possible, but I don't know where to start. I want to know whether it is feasible and learn about any existing systems.
What you are trying to do is extremely hard, but if you want to know what you are up against, I've listed the steps you need to follow below:
First the hard bit:
1. Obtain or derive an operational semantics for your source and target languages.
2. Enhance the semantics to capture your source and target memory models.
3. Unify the two enhanced semantics within a common operational model.
4. Define a mapping from your source language onto the common operational model.
5. Define a mapping from your operational model onto your target language.
Step 4, as you pointed out in your question, is trivial.
Step 1 is difficult, as most languages do not have sufficiently formal semantics specified; but I recommend checking out http://lucacardelli.name/TheoryOfObjects.html as this is the best starting point for building a traditional OO semantics.
Step 2 is almost certainly impossible in general, but may be merely obscenely difficult if you are willing to sacrifice some efficiency.
Step 3 will depend on how clean the result of step 1 turned out, but is going to be anything from delicate and tricky to impossible.
Step 5 is not going to be trivial, it is effectively writing a compiler.
Ultimately, what you propose to do is impossible in general, due to the difficulties inherent in steps 1 and 2. However, it should be difficult but doable if you are willing to: severely restrict the source-language constructs supported; pretty much forget about handling threads correctly; and pick two languages with sufficiently similar semantics (i.e. Java and C# are OK, but C++ and anything else is not).
It depends on what languages you want to support, but in general this is a huge and difficult task unless you plan to support only a very small subset of each language.
The real problem is that each programming language has different features (with some areas that overlap and others that don't) and different ways of solving the same problems -- and it's pretty tricky to detect the problem the programmer is trying to solve and convert it to a new idiom. :) And think about the differences between GUIs created in different languages....
See http://xmlvm.org/ as an example (a project aimed at converting between source code of many different languages, with an XML middle-point) -- the site covers in some depth the challenges they are tackling and the compromises they take, and (if you still have any interest in this kind of project...) ask more specific followup questions.
Notice specifically what the output source code looks like -- it's not at all readable, maintainable, efficient, etc..
It is "technically easy" to produce XML for any single langauge: build a parser, construct and abstract syntax tree, and dump out that tree as XML. (I build tools that do this off-the-shelf for many languages). By technically easy, I mean that the community knows how to do this (see any compiler textbook, e.g., Aho&Ullman Dragon book). I do not mean this is a trivial exercise in terms of effort, because real languages are complicated and messy; there have been many attempts to build C++ parsers and few successes. (I have one of the successes, and it was expensive to get right).
What is really hard (and I don't try to do) is produce XML according to a single schema in which the language semantics are exposed. And without that, it will be essentially impossible to write a translator from a generic XML to an arbitrary target language. This is known as the UNCOL problem and people have been looking since 1958 for the answer. I note that the Wikipedia article seems to indicate the problem is solved, but you can't find many references to UNCOL in the literature since 1961.
The closest attempt I've seen to this is the OMG's "ASTM" model (http://www.omg.org/spec/ASTM/1.0/Beta1/); it exports XMI, which is XML. But the ASTM model has lots of escapes built into it to allow languages that it doesn't model perfectly (AFAIK, that means every language) to extend the XMI in arbitrary ways so that the language-specific information can be encoded. Consequently, each language parser produces a custom version of the XMI, so each reader has to know about the extensions, and full generality vanishes.

Is Antlr a DSL generator and an alternative to Intentional Programming?

I am struck by the ambition and creativity of Charles Simonyi's efforts to establish the field of Intentional Programming, first at Microsoft and then with his own company.
What exactly is Intentional Programming
http://en.wikipedia.org/wiki/Intentional_programming
In this approach to software, a programmer first builds a toolbox specific to a given problem domain (such as life insurance). Domain experts, aided by the programmer, then describe the program's intended behavior in a What You See Is What You Get (WYSIWYG)-like manner. An automated system uses the program description and the toolbox to generate the final program. Successive changes are only done at the WYSIWYG level.
It seems to be such a useful and practical approach to programming, potentially circumventing many of the problems with current approaches to software development.
Essentially it seems to facilitate the creation of domain-specific languages by non-programmers (business/systems analysts) but at a stage much closer to real-life implementation than UML could provide. He says it will be completed eventually but that it is not there yet (almost 15 years later).
DSLs run the gamut from simple 5-line rule engines to complex applications like Ruby on Rails. So I imagine the delay in releasing his product has to do with the fact that he is dealing with simplifying a much higher level of abstraction because he has to essentially allow for the encapsulation of all domain languages at once.
So, my question is
(a) whether Antlr could be an alternative to Intentional Programming - although perhaps a less user-friendly alternative which requires the intervention of programmers rather than permitting business analysts to generate the DSL? Could you use Antlr to generate a DSL like Ruby on Rails (assuming it supported Ruby as an output - which I think it does not)? What can it not do? Also, I don't understand why it's called a "language parser" rather than a "language generator" - since the latter describes what it is used for while the former describes how it achieves its end result.
and
(b) if Antlr is different from Intentional Programming, is there anything similar to Intentional Programming?
In answer to part b), three systems that work in a similar space are:
JetBrains MPS
Eclipse xText
MetaCase MetaEdit+
Each of these products has different strengths and weaknesses, but all of them fall into the category of Language Workbenches. Intentional Software's Intentional Workbench is possibly the most ambitious product in this category to date, but is also not generally available.
MPS and xText are free, open-source products. MetaCase is the most mature, and is a commercial product. All of them have a steep learning curve.
I am not an expert on this, so treat with a large pinch of salt. However...
ANTLR itself is not a DSL generator, though it can be used to create code that interprets DSLs. It is a parser generator; a DSL generator would have to produce the grammar from which ANTLR generates a parser.
ANTLR is just a parser generator. In any non-trivial DSL, writing the parser is less than 50% of the effort expended in implementing the DSL. The evaluator/rule engine/code generator/scheduler, or whatever else your DSL does, probably requires more work and can't be generated the way a parser can.
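As a rough illustration of that split (a made-up mini rule DSL sketched in Haskell rather than an ANTLR grammar), the parsing part is a few lines, while the evaluation against live data is the part no parser generator writes for you:

    import Data.Maybe (fromMaybe)

    -- A rule: field name, threshold, action to fire.
    data Rule = Rule String Double String

    -- The "parser": handles lines like "temperature > 30.0 => fan_on".
    parseRule :: String -> Maybe Rule
    parseRule s = case words s of
      [field, ">", n, "=>", action] -> Just (Rule field (read n) action)
      _                             -> Nothing

    -- The "rule engine": fire the action when the reading exceeds the threshold.
    evalRule :: [(String, Double)] -> Rule -> Maybe String
    evalRule readings (Rule field threshold action)
      | fromMaybe 0 (lookup field readings) > threshold = Just action
      | otherwise                                       = Nothing

    main :: IO ()
    main = print (parseRule "temperature > 30.0 => fan_on"
                    >>= evalRule [("temperature", 31.5)])
    -- Just "fan_on"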
