A naive question about UML and Turing completeness

It's a well-known fact that UML is not Turing complete (in contrast to usual programming languages). But it seems to me that UML is even more flexible than traditional languages. I can't imagine a problem you could describe in a language such as C++ (for example) but could not describe in UML. Quite the contrary: it's much easier for me to imagine a construct that exists in UML but is unrealizable in C++ (Java, Delphi, VB and so on...)
Could you help me understand this point? I really can't grasp it.

I'd say that UML IS a Turing-complete language since the addition of the Action Semantics package (this happened in UML 1.5).
Now UML includes an imperative action language (not to be confused with OCL) that allows a precise definition of the behaviour of class methods. This imperative action language includes the typical set of assignments, if conditions, iterators, ... you'd expect from any programming language.
This action language is one of the popular components of Executable UML approaches, but it's now part of the UML standard itself.

Interesting question. A couple of points come to mind, although there's probably a whole lot more to it. Apologies, it's quite long.
What can you describe with e.g. C++ that you can't describe with UML?
First, you have to define what you mean by "UML". Generally, people tend to mean the 'core' elements - those on Class Diagrams, State Diagrams, Activity Diagrams etc. - plus OCL (the constraint language).
Given those elements you can't specify imperative algorithms - specifically, anything that requires assignment. You can however get very close: the steps and decision logic can be expressed using e.g. Activity Diagrams, and the function of each step defined as pre- and post-conditions in OCL. However, you never quite get to fully specifying the behaviour. Take the example of an atomic step whose purpose is to increment the value of an integer. The input is an integer - say X. The output is described by the post-condition X = X@pre + 1. However, there's nothing in UML to actually implement the step.
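To make the gap concrete, here is a minimal, hypothetical Java sketch (the class and method names are invented for illustration): the OCL post-condition specifies what must hold afterwards, while the single imperative statement is the part that core UML, without Action Semantics, cannot express.

    // Hypothetical "increment" step: UML + OCL can state WHAT must hold,
    // but the assignment itself is plain imperative code.
    public final class IncrementStep {

        // OCL post-condition on the operation (specification only):
        //   context IncrementStep::run(x : Integer) : Integer
        //   post: result = x@pre + 1
        public int run(int x) {
            // This one statement is the part core UML cannot express without
            // the Action Semantics: it is the implementation, not a constraint.
            return x + 1;
        }
    }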
Now it's entirely conceivable to extend usage of UML to address the above. The UML Action Semantics were developed precisely to enable specification of actions. Doing so makes the language computationally complete. The problems are merely practical:
There's no universally agreed and adopted syntax for the semantics;
There are very few implementations.
What can you describe with UML that can't be implemented in e.g. C++?
In essence nothing. However there are two practical limitations:
UML "specifications" are usually imprecise, ambiguous and/or incomplete. Activity Diagrams, for example, often leave paths dangling. Could it be represented directly in C++? Yes. Would it compile? No.
Some of the mappings from UML constructs to imperative, stack-based languages are non-trivial. State Models are an example: while there are well-known patterns, the mapping is quite complex. This is especially true for hierarchical and/or concurrent behaviour. In an Activity Diagram, it's easy to express that two activities happen in parallel and then synchronise before moving to the next step. That can of course be done in C++, but it requires the use of e.g. threading libraries.
It can however be done. In fact, it's what the Executable UML tools do: Model Compilers take an executable UML model and translate it into 100% functioning imperative code.
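As a rough illustration (a minimal sketch in Java rather than C++, using java.util.concurrent; the activity names are invented), a fork/join of two parallel activities from an Activity Diagram might map to something like:

    import java.util.concurrent.CompletableFuture;

    // Hypothetical mapping of an Activity Diagram fork/join: two activities run
    // in parallel and must both finish before the next step starts.
    public final class OrderActivity {

        public static void main(String[] args) {
            // Fork: both activities start concurrently.
            CompletableFuture<Void> reserveStock = CompletableFuture.runAsync(OrderActivity::reserveStock);
            CompletableFuture<Void> checkCredit  = CompletableFuture.runAsync(OrderActivity::checkCredit);

            // Join: synchronise before moving to the next activity.
            CompletableFuture.allOf(reserveStock, checkCredit).join();

            ship();
        }

        private static void reserveStock() { System.out.println("reserving stock"); }
        private static void checkCredit()  { System.out.println("checking credit"); }
        private static void ship()         { System.out.println("shipping order"); }
    }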
hth.

As the name implies UML is a modelling language. It can sometimes be applied as a methodology for designing software.
Once upon a time, people were dreaming up ways of doing automatic code generation; the tools were called CASE tools. They failed to get the code generators to work effectively, although they did remove a lot of boilerplate code. This augmentation became the key to UML, as it provided a way to augment the experience of designing and programming software.
I don't know if UML is "Turing complete"; I hope it is. Wouldn't it be great to come up with solutions by describing the problem to the computer in pictorial format and letting the computer do all that hard, nasty programming for you?
UML is the meta-language for what the code does. It describes artefacts, how they relate/interact, and what they do.
UML is still being added to: new design artefacts are being added year by year, and if it is not already Turing complete, I don't see why it couldn't be.
However, I think somewhere along the line I read something about languages being "Turing equivalent" if they can both express and solve the same problems.
Since UML is the design language and code is the implementation language based on the UML design, I would say that UML and code (C#, Java, etc.) are Turing equivalent. If they are agreed to be Turing equivalent, then UML must be Turing complete.


Is it possible to generate code from Use Case, Activity or Sequence Diagram in Enterprise Architect?

I'm a student of software engineering. My lecturer of "Software Architecture and Design" has told us that we can generate source code from all the UML diagrams (or most).
I have already been able to generate code from a class diagram. I'm unable to generate code from other diagrams.
Do I have to somehow connect those diagrams with class diagrams to do that?
This is simply nonsense. You cannot generate code from any diagram at all. You can, however, generate code from a UML model. The model can (but need not) have a couple of diagrams to help visualization for humans.
Now, code is related to classes. That means you need at least some classes defined in your model. A use case helps in understanding why classes do the things they are supposed to do. But in no case can you create code from a use case.
There are other model elements which help support creating more detailed code. Those are, e.g., state machines, which can be translated into equivalent code sections.
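As a minimal, hypothetical sketch (the states and events are invented for illustration), a small state machine might translate into an enum plus a transition switch along these lines:

    // Hypothetical translation of a small state machine (states Idle/Running/Done,
    // events start/finish) into equivalent code.
    public final class JobStateMachine {

        enum State { IDLE, RUNNING, DONE }
        enum Event { START, FINISH }

        private State state = State.IDLE;

        // Each transition in the state diagram becomes one case in the switch.
        public void handle(Event event) {
            switch (state) {
                case IDLE:
                    if (event == Event.START) state = State.RUNNING;
                    break;
                case RUNNING:
                    if (event == Event.FINISH) state = State.DONE;
                    break;
                case DONE:
                    // Final state: ignore further events.
                    break;
            }
        }

        public State current() { return state; }
    }

This flat switch pattern is only the simplest mapping; hierarchical or concurrent states need something richer, such as the State design pattern.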
Activity and sequence diagrams also help visualize how certain code sections run during execution. But you will not (seriously) use them to create code.
Yes, you can, but it's not as simple as what you're describing. Model-Driven Architecture is an active area of research right now, but it hasn't really "caught on" yet. Its proponents argue that it allows for a higher level of abstraction in much the same way that C offered a higher level of abstraction than assembly language and Java offered a higher level of abstraction than C. I think that this could be very useful in the future if they can get the tooling right.
Actually, this isn't even an entirely new idea - the idea of graphical programming in general (which, if you think about it, is basically a generalization of UML-derived programming) has been around at least since the 1980s that I know of (and probably a lot earlier). In fact, Frederick Brooks Jr. talks about it in No Silver Bullet – Essence and Accident in Software Engineering (which was originally published in 1986 and appears in current editions of The Mythical Man-Month):
A favorite subject for Ph.D. dissertations in software engineering is graphical, or visual, programming, the application of computer graphics to software design. Sometimes the promise of such an approach is postulated from the analogy with VLSI chip design, where computer graphics plays so fruitful a role. Sometimes this approach is justified by considering flowcharts as the ideal program design medium, and providing powerful facilities for constructing them.
Nothing even convincing, much less exciting, has yet emerged from such efforts. I am persuaded that nothing will...
His argument was that, at the time it was written, the tooling just wasn't "there" yet; for example, screen sizes were notoriously small. Also, the flow chart is actually a really bad design mechanism. Also,
More fundamentally, as I have argued above, software is very difficult to visualize. Whether we diagram control flow, variable scope nesting, variable cross-references, data flow, hierarchical data structures, or whatever, we feel only one dimension of the intricately interlocked software elephant. If we superimpose all the diagrams generated by the many relevant views, it is difficult to extract any global overview. The VLSI analogy is fundamentally misleading - a chip design is a layered two-dimensional object whose geometry reflects its essence. A software system is not.
I'll leave it to you to judge whether or not you agree with him or whether this still applies.
So, to summarize: yes, it's at least theoretically possible, and there have been considerable efforts to generate code from UML diagrams, but you'll need multiple diagrams to generate much more than basic class structures and method stubs. It's not like you can write a use case diagram, press a button, and magically have a complete software system.
I think I have found the answer. We can generate code. Say I have a "use case". I right-click on it, go to "advance" and select "instance classifier". There I can actually make my "use cases", "sequence diagram objects", etc. the instances of an already created class, or I can even create a class right there.

Is UML a domain specific language (DSL)?

Is it appropriate to think of UML as a DSL?
I know that everybody thinks of UML as a way to draw pictures, but underlying the "view" of UML diagrams there is a model...and the "L" in UML stands for Language.
Another question - is SysML a DSL?
UML is a DSL.
A "domain specific language" lets one specify a problem or a solution in a narrow area of application; banking, telephony circuit design, .... One way to distinguish a DSL is that it cannot do general purpose computation (although there are some DSLs that can). Java, C#, Python and COBOL fail this test. (Some would say COBOL is domain-specific for "business" but its only serious concession to that is a decimal data type, and C# has that too.) ColdFusion fails this test; nonstandard syntax does not a DSL make, but IIRC ColdFusion has some support for generating HTML. Fortran fails this test, but its array(-section) sublanguage is only good for arrays and not general purpose computation. Verilog is very domain specific: it is designed to let you write down digital circuits.
UML focuses on specifying different aspects of how software is structured. [You'll note it can't do general purpose computation; one hallmark]. (It actually has 9 or more different aspects it addresses: classes, statecharts, deployment, ... I'll stick to the class aspect for this discussion). The class diagram aspect lets one describe how data is organized, and operations on that data. You can argue this about software so it can't be "domain-specific". What, building software isn't a problem domain?
SysML is focused on expressing how systems are joined, so it fits this category too.
A more useful question to ask, IMHO, is: "If I think of UML as a DSL, what do I gain?" Here I don't think you get a lot. The concept of a DSL is useful when you are arguing for one you don't have (designed or possess), the point being better expressiveness for a common problem, and it might be useful for arguing "you don't want to implement your system entirely in it because it isn't Turing capable". It is also useful if you want to explain that your language is going to have a lot of funny notations, precisely because they serve special purposes. People already know this about UML, so... nothing learned.
While I'm a big fan of DSLs, I'm also a big fan of GPLs (general purpose languages). I think in big systems you should necessarily find a "lot" of both: the DSLs to express what they can succinctly (cuts engineering and maintenance costs), and the GPLs to provide arbitrary computation and glue between the system parts. For me what counts in a language is:
what's the class of problem it claims to address and how well does it do it?
what's the syntax (and is it relatively standard for the problem domain)?
what are the precise semantics (this is where you learn the most)?
how good is the tool support?
how well does the DSL integrate into other parts of a big system?
how big and supportive is the community?
UML has (after 15 years) arrived at pretty good answers to these questions.
Homegrown DSLs often don't do so well, partly due to poor design, but often due to the fact that tool support is difficult to get. My company provides machinery to give DSL builders excellent support to improve this situation.
UML is NOT a DSL because UML can be used to model any vertical domain (insurance software, embedded systems,...)
UML is a (horizontal) DSL because UML is a specialized language to model software systems.
So UML is and is not a DSL, depending on how you look at it. You could apply the same reasoning to many other languages, like HTML or SQL. They are general because they can be used to represent/manipulate any kind of data, but they are specific because they are focused on one task.
Short answer - NO - to both questions.
Think of UML as a tool that lets you describe software architectures, software interactions and so on ... describe them in a general way, language agnostic.
DSLs are specialised syntaxes meant to make it easier to describe some specific set of problems.
I think the answer to your first question depends on how to define "General" in the term "General Purpose Language". Wikipedia says it is not a DSL:
The opposite is:
a general-purpose programming language, such as C, Java or Python,
or a general-purpose modeling language such as the Unified Modeling Language (UML).
I am a MDA enthusiast so I think I can provide you a very detailed answer to your question.
What is the UML:
The Object Management Group (OMG), a consortium of companies aimed at providing standard languages and technologies, defined a meta-meta-modeling language called the Meta Object Facility, or MOF (http://www.omg.org/mof). A meta-model is a model describing a model or, in other terms, describing the vocabulary (the elements you can use in a model), the syntax (how they relate to each other) and their semantics (what each entity means, how its meaning changes in a given context, etc.). A meta-model plays the same role that context-free grammars play with respect to the languages they produce. You can thus think of a meta-meta-model as a language you can use to define meta-models. This is what the OMG actually did with UML. The UML language has a meta-model described by means of the MOF in two documents: the UML Infrastructure and the UML Superstructure (http://www.omg.org/spec/UML).
The UML meta-model has been defined with the aim of being generic enough to cope with modeling different systems belonging to different domains. When you define a new UML model you create an instance of the UML meta-model. You could do that for many reasons: to analyse some characteristics of the system, to share some aspects of the system with other stakeholders, and so on. However, one of the most important aspects of the OMG vision is model transformations. You can think of a transformation as a set of rules telling an interpreter how to explore a model and produce something else. You can basically transform a model into two different kinds of things: other models (Model2Model, or M2M, transformations, defined by means of the QVT language) or text such as code or documentation (Model2Text, or M2T, transformations, defined by means of the MOFM2T transformation language). So it is VERY IMPORTANT to understand that a UML model is not its diagram. A diagram is just a pictorial representation of the model contents, useful for humans, but not machine-readable. You can't apply transformations to a diagram.
The Eclipse Modeling Framework (EMF) is a very powerful (and FREE!) framework implementing all the technologies I have mentioned. A subset of the MOF is implemented in the Eclipse ECORE language. The UML meta-model is defined by means of ECORE, so graphical UML editors (e.g. Papyrus, TopCased, etc.) actually create XMI representations of the graphically defined UML models that conform to the ECORE representation of the UML meta-model. Such representations can be provided as input to a transformation engine. The two transformation languages, and related engines, are also available in EMF with the QVTo plugin and ACCELEO (implementing the MOFM2T transformation language).
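To make this less abstract, here is a minimal, hedged sketch of defining a tiny meta-model programmatically with the EMF Ecore API (the package, class and attribute names are invented; EMF must be on the classpath):

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class MiniMetamodel {
        public static void main(String[] args) {
            EcoreFactory factory = EcoreFactory.eINSTANCE;

            // A tiny, hypothetical meta-model: one package with one class and one attribute.
            EPackage pkg = factory.createEPackage();
            pkg.setName("requirements");
            pkg.setNsURI("http://example.org/requirements");
            pkg.setNsPrefix("req");

            EClass requirement = factory.createEClass();
            requirement.setName("Requirement");

            EAttribute text = factory.createEAttribute();
            text.setName("text");
            text.setEType(EcorePackage.Literals.ESTRING);

            requirement.getEStructuralFeatures().add(text);
            pkg.getEClassifiers().add(requirement);

            System.out.println("Created metamodel package: " + pkg.getName());
        }
    }

Models conforming to this meta-model could then be created, serialized as XMI, and fed to a transformation engine, as described above.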
As mentioned, UML is intentionally generic. However, it also provides lightweight extension mechanisms to extend the original language vocabulary with domain-specific constructs. This can be done by means of stereotypes. A stereotype is a sort of label (actually with meta-attributes) you can attach to model elements to create new entities in the language. You can, for instance, say that in your models some of the classes are requirements or something else. There are of course some rules; for instance, when you stereotype a meta-class you cannot violate its original semantics, only restrict it.
SysML is a profile of the UML (http://www.omgsysml.org/). A SysML Block is just a UML class stereotyped as Block, a SysML Requirement is just another UML class stereotyped as Requirement, and so on.
Profiling a meta-model like the UML is quite an easy way of creating a sort of DSL (since with stereotypes you add to a more general language some constructs which belong to your domain) which is compatible with the UML (i.e. you can use SysML and UML together). There is another way of creating a DSL, which is defining its meta-model by means of the MOF (ECORE). In this case you create a brand new language which is conceptually at the same level as the UML itself.
Many people say UML is just about diagrams because in many cases they do not know what they are talking about. The topic is far more complex, interesting and promising.
UML is a general modelling language that is not specific to any domain whilst the S in DSL stands for Specific. UML is used for modelling systems that can also be represented by multi-purpose programming languages. DSL on the other hand are constrained programming/scripting languages which are specific to a particular domain.

Has anyone used UML with OCL? Do programmers use it or only analysts who don't code?

I am trying to wrap my head around why we approach the problem of design by first deciding on a visual method (UML): instead of starting with formal specifications that happen to also be executable (RAD prototyping), we start with diagrams that can't easily be proven to work. Then, when it comes time to prove properties of a model, we find we need to define constraints on our design, so we design a formal syntax (OCL) to define the constraints on the model. I am having a hard time understanding this leap back to where we started.
I find OCL-encumbered UML designs (even samples shown in brochures) unreadable, even more impenetrable than the myriad of UML symbols and conventions. So what I want to know is: what are the key areas where OCL is used in the working software development world today, and for whom is it relevant or worthwhile to learn? What does your job role look like? Do architects who never write code use UML and OCL, or do programmers who also design and architect the systems, on the same team that implements them, use it too?
[updated: Secondly, it occurs to me that Agile development seems kind of opposed to "Heavyweight" procedures, and that a domain specific language for design diagram constraints like OCL doesn't seem very Agile. Is UML+OCL used in ANY "Agile" shops, or is it universally eschewed by Scrummers?]
Interesting question.
The "holy grail" of the Object Constraint Language was to provide a framework that when coupled with UML allowed a tool to transform that into a concrete Object Graph / Meta Model i.e. a set of classes that already had their basic structure and constraints wired in, so that all the developer had to do was implement business methods. (all this in a language independent way)
JBuilder from Borland tried supporting this in their enterprise edition, and Delphi with ECO also made use of OCL in a practical way (though not as a transformation input) by supporting the query abilities. In fact Anders Inver from Borland / BoldSoft, and one of the ECO team, wrote the foreword to the OCL bible, The Object Constraint Language, Second Edition (Addison-Wesley).
My personal opinion is that there is not enough payback to warrant the learning curve. Without using specialised (and expensive) tools, the UML/OCL model is still not easily testable in real terms, and the value you get is marginal (if anything) over iterative test-driven development. The language-independence thing is waaay overrated; let's face it, once we start down the Java, C#, Delphi, C++ or whatever path, there is no way in hell we will regenerate in something else, it's just not practical.
For what it's worth, I am yet to see Model Driven Development with OCL actually used in the real world for a real project (other than as a proof of concept). What seems to be working lately in the real world is Agile processes, Scrum etc. and just iterative development using standard IDEs with standard languages and user stories (perhaps some UML on a whiteboard or storyboard).
The benefit of defining OCL constraints on your models is the possibility of specifying all the business rules of your domain that you cannot represent with the graphical constructs of UML. For instance, multiplicities are constraints that can be represented graphically as part of an association definition; saying that the attribute A of class C has to be greater than 5 is also a constraint, but in this case it has to be defined in OCL, since UML does not provide a graphical syntax for it.
Obviously, this would be very useful if code-generation tools were able to take these constraints and automatically generate code that enforces them (e.g. as if-conditions in Java methods or as database triggers that raise an exception when the data violates the rule).
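As a minimal sketch of what such a generator might emit for the constraint mentioned above (attribute A of class C greater than 5), assuming a hypothetical class C, the invariant could become an if-condition guarding the setter:

    // Hypothetical class C with attribute a; the OCL invariant
    //   context C inv: self.a > 5
    // is enforced as a generated if-condition.
    public class C {

        private int a = 6; // initial value already satisfies the invariant

        public void setA(int a) {
            // Generated guard corresponding to the OCL invariant.
            if (!(a > 5)) {
                throw new IllegalArgumentException("OCL invariant violated: a > 5");
            }
            this.a = a;
        }

        public int getA() { return a; }
    }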
Unfortunately, there aren't many tools offering this functionality (see a list here: http://modeling-languages.com/content/list-ocl-tools), but the situation is slowly improving.
Much has changed.
UML 2.5 used Eclipse OCL tooling to remedy the numerous bugs in the UML 2.0...2.4 embedded OCL.
SysML is using Papyrus and Eclipse OCL tooling for SysML next.
Eclipse OCL provides a much stronger UI and an OCL2Java code generator so that OCL embedded in Ecore/UML provides much more acceptable/executable code.
Much has still to change.
The OCL embedded in UML has never been seriously executed.
The OCL definition of OCL itself is lamentable.
I worked with OCL constraints as a small part of my bachelor thesis. Borland (now Microfocus) Together had an interesting approach to generating Java code out of OCL constraints. You defined that a variable X should be >= 0 or not empty, and Together created assert commands to verify it automatically.
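A minimal sketch of what such generated checks might look like (the class and constraints are invented for illustration; Java assertions only run when the JVM is started with -ea):

    // Hypothetical example of OCL constraints turned into Java assertions:
    //   context Account inv: self.balance >= 0
    //   context Account inv: self.owner <> ''
    public class Account {

        private int balance;
        private String owner;

        public Account(String owner, int balance) {
            this.owner = owner;
            this.balance = balance;
            checkInvariants();
        }

        public void withdraw(int amount) {
            balance -= amount;
            checkInvariants();
        }

        // Generated assertions; enabled when the JVM is started with -ea.
        private void checkInvariants() {
            assert balance >= 0 : "OCL invariant violated: balance >= 0";
            assert owner != null && !owner.isEmpty() : "OCL invariant violated: owner not empty";
        }
    }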

Specification, modeling and programming are in principle the same, right?

In formal specifications based on abstract algebraic types and equational theory, you use formulas of equational theory to specify a theory. A system which satisfies those constraints is called, in formal logic, a model.
Modeling is the process of creating a model, which abstracts away some aspects that are unnecessary details for the specific case. So the concrete system has to adhere to the created model in the observed aspects.
Programming is the process of creating a program which will have specific behaviour - it will perform specific algorithms - and programming languages, through different paradigms, enable us to think in a certain specific way which abstracts away some details, usually machine-specific ones.
So could we be doing all those things at the same time, because they are in principle the same? Is declarative programming the nearest attempt to do that? Could we use some sort of programming language which would be good for programming as well as for modeling and specification?
The scientist who has done the most to advance this point of view is Tony Hoare. Tony, along with his colleague Edsger Dijkstra, advocated nondeterministic programming languages so that there would be a smoother path from specification to implementation. Tony definitely wanted a single language for both specification and implementation. For more on this view, read his book on the Algebra of Programming. Tony also did the seminal work on proving correctness of abstractions. All of this work was done in the context of simple, imperative languages with structured control flow and classic, side-effecting procedures. So there is no necessary connection with declarative programming. And historically, work on functional programming (the main branch of declarative programming) has followed more from Backus's Turing lecture on "liberating programming from the von Neumann bottleneck"; functional programming has been about programming productivity as much as anything else.
What we discovered since Hoare is that formal specifications and formal models are very expensive. The expense hasn't been shown to be justified except in very special circumstances, like "if the software doesn't work, the patient will die" or "if the software doesn't work, the plane will crash." Informal models and specifications are quite useful, and much cheaper to produce and work with. There is still interesting research going on around the fringes on modelling, model checking, and so on. One of my personal favorites is the Alloy language done by Daniel Jackson's group at MIT. There's also great stuff done at Microsoft Research and plenty of good stuff elsewhere. There's some work in declarative programming as well, but it too is of the "cheap and cheerful" variety rather than a comprehensive, programmatic approach like Hoare's. One of my favorites there is Claessen's and Hughes's QuickCheck, which provides a way to state formal properties and explore them by random testing. No proofs or theorems, but still jolly useful.
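To illustrate the QuickCheck idea without relying on any particular library, here is a minimal hand-rolled sketch in plain Java (the property and all names are invented): state a property, then explore it with random inputs.

    import java.util.Random;

    // A hand-rolled, minimal imitation of the QuickCheck idea: state a property,
    // then check it against many random inputs. The property here is invented:
    // reversing a string twice yields the original string.
    public final class PropertyCheck {

        public static void main(String[] args) {
            Random random = new Random(42);
            for (int i = 0; i < 1000; i++) {
                String input = randomString(random);
                String roundTripped = new StringBuilder(input).reverse().reverse().toString();
                if (!input.equals(roundTripped)) {
                    System.out.println("Property falsified by: " + input);
                    return;
                }
            }
            System.out.println("Property held for 1000 random inputs (no proof, just evidence).");
        }

        private static String randomString(Random random) {
            int length = random.nextInt(20);
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                sb.append((char) ('a' + random.nextInt(26)));
            }
            return sb.toString();
        }
    }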
In summary, you describe an agenda of doing formal models, specifications, and programs, all within a single framework. There is still plenty of good work going on piecemeal, but the unified agenda has been abandoned.

What methodologies, processes, and tools are available for designing and modeling non-object-oriented applications?

I'm quite familiar with UML for modeling object-oriented applications. However, I'm not familiar with anything specifically designed for designing and/or modeling procedural, functional, or any other paradigm. How do you design or model applications written in a non-object-oriented language?
Oh golly, there's a blast from the past.
We used to use flowcharts, pseudocode, data flow diagrams, structure charts, Hierarchy-IPO, "coathanger charts" (which are really a variant of flowcharts), Nassi-Schneiderman diagrams. Among others.
Oh, SADT is another one.
SSADM - I think I've still got my certification in a box somewhere...
UML can be used for modeling non-OO languages as well. I use UML for modeling just about anything. To be fair, the core of UML is OO-focused, but much of the behavioral, instance-level, and less common structural types work for non-OO languages. However, UML is for design, not implementation, even in OO; your building blocks/objects are just different: modules, or whatnot.
Many of the diagram types mentioned by Charlie Martin have analogous UML representations. Even better, it is a model, not just a diagram/view.
Example: LISP is not OO-based. So create a keyword or stereotype for classes called "function". The attributes are the arguments, as it has no state. This is not perfect, but it is the most approachable.
Example: COBOL/JCL is not OO based. Have each PACBASE package be a component and have structural components as your COBOL. Artifacts can be your JCL.
Let the fact that UML is broad and loosely defined work to your benefit and re-purpose UML parts. You can always formalize it with a UML profile. Where I work this has been a point of discussion for some time. Mainframe programmers do not see OO design and OO-UML as relevant, but that is only partially true, in that the core (or how far most people go with it) is just the class/structural stuff and use cases, which is OO-focused.
