Comparing a concrete execution trace with an Alloy model

I'm using Alloy to model a system. I would like to check the implemented system matches the Alloy model by comparing log traces from a concrete execution of the actual system with the model.
The way I see this working is:
1. Add logging to the implemented system at points that correspond to the high-level concepts modelled in Alloy, such as "Receptionist checks in guest G1".
2. Pre-process these logs into a form understood by Alloy.
3. Give this to Alloy (or some other tool) and ask "Does this model admit this trace?" (the subject of this question).
This would be run over the operational logs of the system (or subsets of them, if performance is a problem) to continuously validate that the system is operating to spec.
Is that possible / reasonable?

Possible? Yes.
Reasonable? I'm not quite sure.
To me, Alloy shines at finding unknown unknowns, i.e. pitfalls, in your specifications.
Once the specification has been fool-proofed using Alloy analysis, I don't see the point of encumbering your program with unnecessary translation and analysis steps. It's not only error prone; you might also find yourself limited by the scalability of the analyzer if the traces you want to validate are substantial...
But again, it's doable. So if you want it, sure, do it ... :-)

I'm using Alloy to model a system. I would like to check the implemented system matches the Alloy model by comparing log traces from a concrete execution of the actual system with the model.
Yes, I think that is a bit of work but it should be doable. I would be very interested in getting this to work. I have been thinking about this for a long time.
Loïc correctly argues that Alloy shines at finding solutions, and that to keep this manageable Alloy must keep the scope small. Although this is true, Alloy is also a specification language. The timing issue arises only when finding a solution. The problem you sketch is different: you already have the solution in the log. Each event specifies a transition in the state.
If you're familiar with the Alloy Evaluator then you should be aware that once you have a solution, you can run any Alloy code on that instance. Inside Alloy, there is a full set of classes to simulate an instance and run Alloy code against it.
So I think you can start with an initial instance, use your log event to create a secondary instance, and then use Alloy to verify that this is a valid transition. This will be very fast, and I do not see why it could not handle a very large number of objects. Surely thousands, and with a bit of caching wizardry, millions.
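To make the pre-processing step concrete, here is a minimal sketch in C#. The log format, the predicate name, and the idea of pinning each logged object as a singleton sig are all assumptions about what a real model and pipeline might look like, not a prescribed encoding:

using System;
using System.Collections.Generic;

class LogToAlloy
{
    // Turns e.g. "checkin R1 G1" into Alloy source asking whether the
    // model admits this event between two named atoms.
    static string ToAlloy(string logLine)
    {
        var parts = logLine.Split(' ');
        var pred = parts[0];                      // event predicate, e.g. "checkin"
        var atoms = new List<string>();
        for (int i = 1; i < parts.Length; i++)
            atoms.Add(parts[i]);                  // atom names, e.g. R1, G1

        var src = "";
        // Pin each logged object as an atom; a real model would extend the
        // proper signature (Receptionist, Guest, ...) rather than Object.
        foreach (var a in atoms)
            src += "one sig " + a + " extends Object {}\n";
        // The analyzer finds an instance iff the model admits the event.
        src += "run { " + pred + "[" + string.Join(", ", atoms) + "] } for 4";
        return src;
    }

    static void Main() => Console.WriteLine(ToAlloy("checkin R1 G1"));
}

A whole trace could be encoded the same way, as one fact or run command per event (or, with temporal logic, as a single formula over an ordered sequence of states).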
We are currently working hard on Alloy 6, which will integrate Electrum; it will have full temporal logic that will make these rules easier to express.
I've been looking for a customer for a long time that would like to develop the necessary code to bridge Alloy & the trenches. If this works as I think it can work, it would be very interesting for the software industry.

Related

Functional approaches to designing the discrete side of hybrid systems

I'm working on developing controllers for hybrid systems in Haskell.
FRP libraries (right now I'm using netwire, but there are several good ones and a lot of interesting research on future ones) provide a great solution for the continuous-time side of the problem. Augmenting them with signal names, dimensions, preferred units, and so forth gets you a system that has modularity, is self-describing, and has a straightforward path to confidence in correctness.
I'm looking for information, folklore, or papers that provide similar properties for the discrete-time side. In some sense the problem is much easier: state machines are well-studied and simple. In other senses it's more difficult; I'll briefly explain how.
Correctness is obviously the most important thing, and thankfully it's also straightforward.
Self-description is more of a problem. You'd like the controller not just to be in the correct state, but to be capable of telling you what state it's in, how it got there, and where it might go next. So you can tack names onto everything, and it works, but it conflicts somewhat with modularity. You'd also like to be able to build complex discrete-time behaviors from simpler ones. But when you ask the system what state it's in, generally the high-level answer is more interesting than (or at least as interesting as) the low-level answer. How do you get this cleanly? I've tried a few naive approaches and have wrapped myself in spaghetti a few different ways, but it seems like there must be elegant solutions.
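For concreteness, one toy shape this could take is a hierarchical "state path" query, where composites report their own name first and then delegate to the active child. This is a language-neutral sketch (written here in C#, not the asker's Haskell), and the interface and names are my own assumptions, not an existing library:

using System.Collections.Generic;

// A behavior that can describe its own current state.
interface IDescribable
{
    IEnumerable<string> StatePath();
}

class CompositeBehavior : IDescribable
{
    public string Name;
    public IDescribable Active;   // the currently running sub-behavior, if any

    public IEnumerable<string> StatePath()
    {
        yield return Name;                        // high-level answer first
        if (Active != null)
            foreach (var s in Active.StatePath())
                yield return s;                   // then the nested detail
    }
}

// string.Join("/", machine.StatePath()) might print "Mission/GotoWaypoint/Turning".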
Another problem I've had with self-description is that I'd like to have a list of self-describing conditions (generally comparisons: has it been 10 seconds? am I within 3 feet of the next waypoint? has the battery power fallen below 15%? etc.) that are being monitored and might trigger the next state transition. There are tricky questions about what the desirable semantics here even are, since some of these events seem better handled "from the bottom up" (e.g. expected termination conditions of whatever low-level step you are performing) and some "from the top down" (e.g. equipment failure detection, geofencing, ...). This can lead to spaghetti of its own even if you relax the goal of self-description.
In addition to diagnostics, accurate self-description information here could also be very useful for abstract interpretation, projecting the state of the system into the future by guessing which events are likely to occur when. Many of the event conditions lend themselves to fairly simple guesses (e.g. using velocity made good, fuel consumption rate, timers). Others are more complicated but might still be worth the effort of developing projections for some applications (e.g. expected orders from operators, weather forecasts, projected tracks for moving objects of interest). It would be nice to find a design that annotates conditions not only with names, but also with functions for this sort of thing.
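Again purely as illustration (in C#, with every name and helper hypothetical), a condition annotated with both a name and a projection function might look like:

using System;

// A named, self-describing condition paired with an optional projection
// function for guessing when it might fire.
record Condition(
    string Name,                      // self-description, e.g. "battery below 15%"
    Func<bool> Holds,                 // is it true right now?
    Func<TimeSpan?> TimeToFire);      // rough projection; null if unknown

// var lowBattery = new Condition(
//     "battery below 15%",
//     () => battery.Fraction < 0.15,
//     () => battery.EstimateTimeTo(0.15));   // hypothetical helper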
Does anyone have experience with this that they are willing to share?
Okay, so I would say the "real" answer to your question is that some of the things you are asking for are open areas of research --- in particular, I think some of the self-describing features you desire may necessitate some degree of "spaghetti" simply because the problem you are trying to solve is inherently complicated.
That being said, your focus on modularity is exactly the right approach. I would say, take a look at Keymaera, as I believe it has the features you are looking for despite being in Java. I would also recommend looking at the publications page on the Keymaera website, as this should provide valuable insight into the problem in general.
If you do not like Keymaera's approach you can also look into using Timed Automata which is another direction modeling-wise that should be sufficient for your problem description.

Is there any advantage in building a business application with an Entity-Component System?

I understand the appeal of using the data-driven Entity-Component System for game development. Naturally I am trying to find other areas to apply this paradigm. As I am about to embark on developing a small business application, I've been wondering how well Entity-Component would fit in with it. However I cannot find any examples or discussions on using Entity-Component in anything besides games. Is there a reason? Would there be any advantages in using Entity-Component in software besides games?
I ended up taking a risk and trying to use ECS outside of the gaming domain (now as an indie developer, formerly a company employee), with results that astounded me. I wouldn't do things any other way now, and I have an easier-to-maintain system than ever before (not perfect, but so much better than the COM-style architectures we used to use in my industry). I took the plunge mainly because ECS seemed to provide answers for all the things my team and I had been struggling with using a COM architecture, though I imagined that with such a risky move I might just end up exchanging one set of problems for another (I was willing to take the risk now that I was on my own). It turned out I didn't exchange one can of worms for another: ECS solved practically all of those problems while barely introducing any new ones.
That said, I'm in the VFX domain and it's not that different from games. We still have to animate things like characters, emit particles, interact with meshes, textures, play sound clips, render the result, allow people to write plugins, scripts, etc.
To try to apply ECS in a business domain is far more ballsy. That said, I imagine it could really help create a maintainable system if you have relatively few systems processing a huge number of entity combinations.
Maintainability
What made ECS so much easier for me to maintain, compared to previous object-oriented approaches (even within my personal projects), was that the previous approaches merely transferred the maintenance overhead away from the clients using the classes and onto the classes themselves. The result was dozens of interfaces and hundreds of subclasses, all inheriting different things and implementing different interfaces, each to be maintained individually. Testing also becomes difficult with so many granular classes and the need to do mock testing.
My brain can only handle so much and hundreds of subclasses interacting with each other was far beyond the limit. Very quickly I found myself no longer able to reason about what was going on, let alone when or where, overwhelmed by complex interactions leading to complex side effects, and never so confident that I could sandwich new code somewhere in there without causing unwanted side effects.
The computing scientist’s main challenge is not to get confused by the complexities of his own making. -- E. W. Dijkstra
This applied even for projects I exclusively authored myself. There came a breaking point, typically after a few hundred thousand LOC or so, where I could no longer even comprehend my own creation. I'd refactor here and there, pick up a little momentum, only to take a vacation, come back, and be lost all over again.
ECS removed that challenge. I don't mean that I can take a 2-week vacation, come back to the codebase, look at some code, and get the vision of crystal clarity I had when I was writing it in the first place; ECS doesn't improve things that much in this regard, and it still takes some time to reacquaint myself with code I haven't looked at in a good while. The reason ECS helped so much is that I don't need to recall everything I wrote in order to extend and change the software. The systems are so decoupled from each other that it's not a huge deal if I forget how one works exactly. I can just concentrate on what I need to do, without worrying about complex interactions of side effects being triggered through complex interactions of control flow.
This applies even when introducing brand new core-level features integrated into the product. These days when I introduce a new central feature to the product, like a brand new audio system central to the product, the only thing I have to think much about is how to integrate it into the user interface. Integrating it into the architecture is relatively effortless compared to previous architectures I worked in.
Meanwhile, with the ECS, I only have to maintain a couple dozen systems to provide no less functionality than those hundreds of classes did. The systems do have some complex logic inside, but I don't have to maintain the hundreds of different entity combinations there are, since entities just store components, and I don't have to maintain component types since they just store raw data and I rarely ever find the need to go back and change them (very close to never).
Extensibility
Being able to extend an ECS architecture in hindsight with central concepts is about the easiest thing I've encountered so far and requires the minimum amount of knowledge of how the existing codebase works.
As a very fresh example, I recently encountered a strong desire among scripters using my software to be able to access entities in the scene using a simple, global name. Before, they had to specify a full scene path like Scene.Lights.World.Sunlight, as opposed to simply Sunlight.
Normally in the previous architectures I worked in, that would have ranged from a highly intrusive to moderately intrusive change. A COM-style system revolving around pure interfaces might require introducing a new interface or, worse, changing an existing one, and updating a few hundred subtypes to implement the new functions. If we had a central abstract base class that everything already inherited, we might be able to modify that one centrally to implement this new interface (or the new parts of an existing interface), but it would likely be monstrous if there was a central base class for everything that might want such a name, and require wading through a lot of delicate code.
With the ECS, all I had to do was introduce a new component, GlobalName, with a system that processes GlobalName components and can find an entity quickly through a specified name. It also handles making sure that no two GlobalName components have a matching name. Due to the nature of the ECS, it's also very easy to pick up when this GlobalName component is destroyed as a result of an entity being destroyed or the component being removed from it to keep the data structure used to accelerate searches by name (a trie) in sync.
After that, I was able to attach this GlobalName component to anything that scripters wanted to refer to by a global name. They can also attach it themselves and then refer to a given entity later through that name. Components also serialize themselves in ways that preserve backwards compatibility for the most part (e.g. previous versions of the software which did not know what GlobalName was will simply ignore it upon loading scene data referring to it).
It was about as painless and as non-intrusive a change as I could imagine, considering that it was added very late, in hindsight, to four-year-old software which did not anticipate the need for this whatsoever. And I managed to get it working just fine on the very first try. As a bonus, all the non-trivial code newly added to make this work lives isolated in its own space; it's not jumbled up with anything else and doesn't contribute to the complexity of anything else, as would inevitably be the case if I used abstract interfaces or base classes. I did not have to modify anything central to make this work except a few lines of trivial script and some trivial GUI code to display these global names when available.
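In skeletal form, the pattern described above might look something like the following C# sketch. This is a hedged illustration only: the lookup structure is simplified to a dictionary (the real implementation used a trie), and every name is an assumption, not the author's actual code.

using System.Collections.Generic;

struct GlobalName { public string Name; }   // a component is plain data

class GlobalNameSystem
{
    readonly Dictionary<string, int> byName = new Dictionary<string, int>();

    // Called when a GlobalName component is attached to an entity.
    public bool Register(int entity, GlobalName c)
    {
        if (byName.ContainsKey(c.Name)) return false;   // enforce unique names
        byName[c.Name] = entity;
        return true;
    }

    // Called when the component, or the entity owning it, is destroyed.
    public void Unregister(GlobalName c)
    {
        byName.Remove(c.Name);
    }

    // Fast lookup from a script-facing name to an entity.
    public bool TryFind(string name, out int entity)
    {
        return byName.TryGetValue(name, out entity);
    }
}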
"Inherit Anywhere"
Have you ever wished you could extend a class's functionality from anywhere in your code without actually modifying its code? For example:
// In some part of the system exists a complex beast of a class
// which is tricky to modify:
class Foo {...};
// In some other part of the system is a simple class that offers
// new behavior we'd like to have in 'Foo', with abstract functionality
// (virtual functions, i.e.) open to substitution:
class Bar {...};
// In some totally different part of the system, maybe even a script,
// make Foo inherit Bar's behavior on the fly, including its default
// constructor, copy constructor, and destructor behavior for Bar's state.
Foo.inherit(Bar);
The above leaves the question: where will the abstract functionality of Bar be implemented, since Foo doesn't provide such an implementation? That's where systems, analogously, kick in for ECS.
I think the temptation will be there for most of us who have had to wade through some existing class's complex code just to make it do a few new things, all the while risking unwanted side effects/glitches/toe-stepping. We may have wished that some third-party library outside of our control offered just a bit more functionality that we'd find very useful throughout the code using it, if only it provided "this one thing". Or we might simply hate the idea of having to change our colleagues' existing code (we don't want to step on toes), even though we're tasked with providing new central behavior.
ECS offers you that kind of flexibility, although in a very different way from the above example (but with the analogous benefits). It allows you to extend anything's behavior/functionality/state from anywhere. As in the extensibility example above, I did not have to modify anything that existed to provide the global-name searching functionality and state. I can extend the behavior of these entities from the outside, even from script, by just adding a new type of component to any entity I want, at which point any systems I write that are interested in such components can pick them up and process them using a duck-typing approach ("if it has a GlobalName component, it can be given a global name, which can then be used to find the matching entity very quickly").
Associating Data
Similar to the above, have you ever faced a temptation to associate data to existing objects in the code? In such cases we might have to maintain parallel arrays or associative containers like dictionaries/maps, and such code can be tricky to write correctly given that it has to stay in sync as new objects are added and removed.
ECS solves that problem at a central level, since now you can just attach components and remove components to/from any entity you want very efficiently. That becomes your means of associating new data on the fly. You no longer have to manually synchronize associative data structures.
Testing
Another issue for me just personally, and it may be because I never mastered the art of unit testing (though I did work with a colleague who really studied up on the subject), is that it never made me confident that a system was relatively bug-free. Integration tests gave me greater confidence in that regard. The problem for me was this: even if the unit test passes, how do you know the client will not misuse the interface? What if they use it at the wrong time? What if they try to use it from multiple threads when it's deliberately not designed to be thread-safe?
I get no huge sense of relief from seeing unit tests pass, since most of the bugs we encountered had to do with what was going on between the interfaces being tested, and bugs kept coming in despite the hundreds of unit tests we wrote all passing. I love test-driven development, and I did find value in a unit test telling me that one unit was doing what it was supposed to do, which allowed me to use it more confidently throughout the codebase, but unit testing never gave me a huge sense of relief about the correctness of the codebase as a whole.
ECS solved that problem for me and made unit testing much more valuable, even to someone like me who never mastered the art of testing, since there are only a handful of systems, each doing its hefty share of work (not granular little objects), and they're concrete. If we have to do anything resembling mock testing, it's simply to insert the components/entities necessary to run and test them. Testing a system starts to feel closer to integration testing than unit testing, even though the system is the smallest testable unit.
Homogeneous Processing
Applying ECS requires embracing a more loopy kind of logic, with homogeneous loops doing one thing at a time. A lot of OOP tends to encourage non-homogeneous control flows and complex interactions causing many things to happen in any given phase/state of the system. This was the most difficult part for me initially, since I wanted to apply disparate tasks at one time to a given entity/set of components, and that temptation couldn't be satisfied so directly given decoupled systems which only perform one task at a time. So I had to learn how to defer processing, storing some state for the next system to use, and I also use (sparingly) an event queue so that systems can trigger events which get processed by others.
Nevertheless, I found ways to program the equivalent of a complex interaction as a result of a series of simple loops doing one thing at a time. It never turned out to be as difficult as I imagined to force myself to work this way, applying one uniform task over one set of entities at one time. And after being forced to do this for a while and maintaining the results -- wow! I should have been doing that all along. It's actually kind of depressing reflecting back on a decade of maintaining architectures that were so much harder to maintain than they needed to be after getting the breath of fresh air that was the ECS architecture.
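In toy form, the deferred-processing pattern described above might look like this C# sketch. The event type, the component storage, and the queues are all illustrative assumptions:

using System.Collections.Generic;

// One system, one uniform task, with a queue that lets it defer follow-up
// work (entity removal) to whichever system runs next.
record DamageEvent(int Entity, float Amount);

class HealthSystem
{
    public Queue<DamageEvent> Inbox { get; } = new Queue<DamageEvent>();

    public void Update(Dictionary<int, float> health, Queue<int> deaths)
    {
        while (Inbox.Count > 0)                    // one homogeneous loop
        {
            var e = Inbox.Dequeue();
            if (!health.ContainsKey(e.Entity)) continue;
            health[e.Entity] -= e.Amount;
            if (health[e.Entity] <= 0f)
                deaths.Enqueue(e.Entity);          // deferred: another system removes it
        }
    }
}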
Interactions
This is a simplified "interaction" picture (not necessarily indicative of direct coupling, as the coupled version would go from concrete objects to abstract interfaces) comparing before and after I adopted ECS. Before, it was a tangled web of object types all interacting with each other - and that was just between a small number of types (I was too lazy to draw hundreds). This was why I always struggled to maintain these things and felt tangled up in the code: the interactions really were a tangled mess, leading you to all sorts of remote functions in the system causing side effects along the way. After, it is a simple one-way flow: a handful of systems each reading and writing components, which are just raw data and contain no functionality of their own.
And the second version was so, so much easier to comprehend, so much easier to extend, so much easier to maintain, so much easier to reason about in terms of correctness, so much easier to test, etc. If your business architecture can effectively fit into the second type of model, I can't overstate how much it can simplify everything.
Invariants
One of the scariest parts to me when I started developing the ECS engine was the lack of information hiding. When components are just raw data, they're dangling what I thought should be their privates in the air for anyone to touch. This could be doubly scary in a business domain that might be more mission-critical in nature.
Yet I found invariants just as easy to maintain, if not easier, due to the limited number of systems that access any given component (typically, if the data is modified, it only makes sense for one system in the entire codebase to do it), the extremely simple control flows, and the extremely predictable side effects that result. And it's pretty easy to test the codebase for correctness when you just have a handful of systems to worry about as far as functionality goes.
Conclusion
So if you are willing to take the risk, I think ECS could potentially be applied very effectively in certain business domains. The main thing worth thinking about upfront is whether you can model the entirety of your software's needs as a handful of systems processing data stored in components, with each system still performing a bulky but singular responsibility (the analogical equivalents of a RenderingSystem, GuiSystem, PhysicsSystem, InputSystem, etc.). Naturally the benefits of ECS diminish if you find you need hundreds of disparate systems to capture the business logic.
If you're interested, I can extend my answer in some later iterations and try to go over some of the minor struggles I faced with the ECS when I was completely wet behind the ears about it.
(Apologies for the necromancy)
Coming from an enterprise background, I have recently been considering this question. Entity-component systems are comparatively new, and represent a completely different design paradigm to what most business developers will have experience with.
Considering my own company's example, I have seen a few scenarios where an entity-component system would offer benefits.
For example, in our primary application, addresses are associated with contacts and organisations. (There are ContactAddress and OrganisationAddress joining tables in our database.) One client wishes to associate projects with addresses as well. There are many ways of achieving this, but an entity-component based approach would seem quite elegant to me - simply add an Addressable component to the Project entity, and the GUI should sort itself out.
Instead, we will likely be adding a new joining table and new data-input pages (albeit re-using common controls).
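To make the Addressable idea concrete, here is a hedged C# sketch; all of the names, including the world.AddComponent call, are hypothetical:

using System.Collections.Generic;

struct Address { public string Line1, City, Postcode; }
struct Addressable { public List<Address> Addresses; }

// Attaching the component is the whole "schema change":
//   world.AddComponent(projectEntity,
//       new Addressable { Addresses = new List<Address>() });
// A generic address system (and address-editing GUI) could then pick up any
// entity carrying an Addressable component - contact, organisation or project.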
The primary disadvantage, I would think, would be (initial) lack of developer awareness of the best ways of applying this paradigm to business software, precisely because it doesn't appear to have been done before. Once you start with such an approach, you are committed to it - if it proves frustrating once your project reaches a certain complexity, there's no way out without a significant rewrite.

Expression trees vs IL.Emit for runtime code specialization

I recently learned that it is possible to generate C# code at runtime, and I would like to put this feature to use. I have code that does some very basic geometric calculations, like computing line-plane intersections, and many of these calculations are performed for the same plane or the same line over and over again. By generating code specialized for a particular plane or line, I think I should be able to gain some performance.
The problem is that I'm not sure where to begin. From reading a few blog posts and browsing MSDN documentation I've come across two possible strategies for generating code at runtime: Expression trees and IL.Emit. Using expression trees seems much easier because there is no need to learn anything about OpCodes and various other MSIL related intricacies but I'm not sure if expression trees are as fast as manually generated MSIL. So are there any suggestions on which method I should go with?
The performance of both is generally the same, as expression trees are internally traversed and emitted as IL using the same underlying system functions that you would be using yourself. It is theoretically possible to emit more efficient IL using the low-level functions, but I doubt there would be any practically important performance gain. That would depend on the task, but I have not come across any practical case of hand-emitted IL beating the IL emitted from expression trees.
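For a flavor of the expression-tree route, here is a hedged sketch of the kind of specialization the question describes: baking one plane's coefficients into a compiled delegate. The line-plane formula (t = -(n.o + d) / (n.dir) for a plane n.p + d = 0 and a line o + t*dir) and all names are my assumptions about the questioner's setup, not their actual code:

using System;
using System.Linq.Expressions;

static class Specializer
{
    // Compiles t(o, dir) for one fixed plane (n, d), so the plane's
    // coefficients become constants in the generated code.
    public static Func<double, double, double, double, double, double, double>
        LinePlaneT(double nx, double ny, double nz, double d)
    {
        var ps = new ParameterExpression[6];
        string[] names = { "ox", "oy", "oz", "dx", "dy", "dz" };
        for (int i = 0; i < 6; i++)
            ps[i] = Expression.Parameter(typeof(double), names[i]);

        // n . v, where v is the vector (ps[o], ps[o+1], ps[o+2])
        Expression Dot(int o) =>
            Expression.Add(
                Expression.Add(
                    Expression.Multiply(Expression.Constant(nx), ps[o]),
                    Expression.Multiply(Expression.Constant(ny), ps[o + 1])),
                Expression.Multiply(Expression.Constant(nz), ps[o + 2]));

        // t = -(n.o + d) / (n.dir)
        var body = Expression.Divide(
            Expression.Negate(Expression.Add(Dot(0), Expression.Constant(d))),
            Dot(3));

        return Expression.Lambda<Func<double, double, double, double, double, double, double>>(
            body, ps).Compile();
    }
}

// var t = Specializer.LinePlaneT(0, 0, 1, -5);   // plane z = 5
// double hit = t(0, 0, 0,  0, 0, 1);             // ray from origin along +z -> 5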
I highly suggest getting the tool called ILSpy, which decompiles CLR assemblies. With it you can look at the code that traverses the expression trees and actually emits the IL.
Finally, a caveat. I have used expression trees in a language parser, where function calls are bound to grammar rules that are compiled from a file at runtime. Compiled is the key word here. For many problems I came across, when what you want to achieve is known at compile time, you will not gain much performance from runtime code generation. Some CLR JIT optimizations might also be unavailable to dynamic code. This is only an opinion from my own practice, and your domain may be different, but if performance is critical, I would rather look at native code and highly optimized libraries. Some of the work I have done would be snail-slow if it did not use LAPACK/MKL. But that is a piece of advice you didn't ask for, so take it with a grain of salt.
If I were in your situation, I would try alternatives from high level to low level, in increasing "needed time & effort" and decreasing reusability order, and I would stop as soon as the performance is good enough for the time being, i.e.:
first, I'd check to see if Math.NET, LAPACK or some similar numeric library already has similar functionality, or I can adapt/extend the code to my needs;
second, I'd try Expression Trees;
third, I'd check out the Roslyn project (even though it is still in prerelease);
fourth, I'd think about writing common routines with unsafe C code;
[fifth, I'd think about quitting and starting a new career in a different profession :) ],
and only if none of these work out, would I be so hopeless to try emitting IL at run time.
But perhaps I'm biased against low level approaches; your expertise, experience and point of view might be different.

Haskell for mission-critical systems [duplicate]

I've been curious to understand whether it is possible to apply the power of Haskell to the embedded realtime world, and in googling found the Atom package. I'd assume that in the complex case the code might have all the classical C bugs - crashes, memory corruption, etc. - which would then need to be traced back to the original Haskell code that caused them. So, this is the first part of the question: "If you have had experience with Atom, how did you deal with the task of debugging low-level bugs in the compiled C code and fixing them in the original Haskell code?"
I searched for some more examples of Atom; this blog post mentions the resulting C code being 22 KLOC (and obviously shows no code); the included example is a toy. This and this reference have a bit more practical code, but this is where it ends. And the reason I put "sizable" in the subject is that I'm most interested in whether you might share your experiences of working with generated C code in the range of 300 KLOC+.
As I am a Haskell newbie, there may obviously be other ways that I did not find due to my unknown unknowns, so any other pointers for self-education in this area would be greatly appreciated - and this is the second part of the question: "What would be some other practical methods (if any) of doing real-time development in Haskell?" If multicore is also in the picture, that's an extra plus :-)
(About usage of Haskell itself for this purpose: from what I read in this blog post, the garbage collection and laziness in Haskell makes it rather nondeterministic scheduling-wise, but maybe in two years something has changed. Real world Haskell programming question on SO was the closest that I could find to this topic)
Note: "real-time" above is would be closer to "hard realtime" - I'm curious if it is possible to ensure that the pause time when the main task is not executing is under 0.5ms.
At Galois we use Haskell for two things:
Soft real time (OS device layers, networking), where 1-5 ms response times are plausible. GHC generates fast code, and has plenty of support for tuning the garbage collector and scheduler to get the right timings.
For true real-time systems, EDSLs are used to generate code in other languages that provide stronger timing guarantees, e.g. Cryptol, Atom and Copilot.
So be careful to distinguish the EDSL (Copilot or Atom) from the host language (Haskell).
Here are some examples of critical systems - and in some cases real-time systems - either written in or generated from Haskell, produced by Galois.
EDSLs
Copilot: A Hard Real-Time Runtime Monitor -- a DSL for real-time avionics monitoring
Equivalence and Safety Checking in Cryptol -- a DSL for cryptographic components of critical systems
Systems
HaLVM -- a lightweight microkernel for embedded and mobile applications
TSE -- a cross-domain (security level) network appliance
It will be a long time before there is a Haskell system that fits in small memory and can guarantee sub-millisecond pause times. The community of Haskell implementors just doesn't seem to be interested in this kind of target.
There is healthy interest in using Haskell or something Haskell-like to compile down to something very efficient; for example, Bluespec compiles to hardware.
I don't think it will meet your needs, but if you're interested in functional programming and embedded systems you should learn about Erlang.
Andrew,
Yes, it can be tricky to debug problems through the generated code back to the original source. One thing Atom provides is a means to probe internal expressions, then leaves it up to the user how to handle these probes. For vehicle testing, we build a transmitter (in Atom) and stream the probes out over a CAN bus. We can then capture this data, format it, and view it with tools like GTKWave, either in post-processing or in realtime. For software simulation, probes are handled differently. Instead of getting probe data from a CAN protocol, hooks are made into the C code to lift the probe values directly. The probe values are then used in the unit-testing framework (distributed with Atom) to determine whether a test passes or fails, and to calculate simulation coverage.
I don't think Haskell, or other garbage-collected languages, are very well suited to hard-realtime systems, as GCs tend to amortize their runtime costs into pauses whose timing is hard to predict.
Writing in Atom is not exactly programming in Haskell, as Haskell here can be seen as purely a preprocessor for the actual program you are writing.
I think Haskell is an awesome preprocessor, and using DSELs like Atom is probably a great way to create sizable hard-realtime systems, but I don't know whether Atom fits the bill. If it doesn't, I'm pretty sure it is possible to implement a DSEL that does (and I encourage anyone who tries!).
Having a very strong pre-processor like Haskell for a low-level language opens up a huge window of opportunity to implement abstractions through code-generation that are much more clumsy when implemented as C code text generators.
I've been fooling around with Atom. It is pretty cool, but I think it is best for small systems. Yes, it runs in trucks and buses and implements real-world, critical applications, but that doesn't mean those applications are necessarily large or complex. It really is for hard-realtime apps and goes to great lengths to make every operation take the exact same amount of time. For example, instead of an if/else statement that conditionally executes one of two code branches that might differ in running time, it has a "mux" statement that always executes both branches before conditionally selecting one of the two computed values (so the total execution time is the same whichever value is selected).
It doesn't have any significant type system other than built-in types (comparable to C's) that are enforced through GADT values passed through the Atom monad. The author is working on a static verification tool that analyzes the output C code, which is pretty cool (it uses an SMT solver), but I think Atom would benefit from more source-level features and checks. Even in my toy-sized app (an LED flashlight controller), I made a number of newbie errors that someone more experienced with the package might have avoided, but they resulted in buggy output code that I'd rather have had caught by the compiler instead of through testing. On the other hand, it's still at version 0.1.something, so improvements are undoubtedly coming.

Using Polymorphic Code for Legitimate Purposes?

I recently came across the term Polymorphic Code, and was wondering if anyone could suggest a legitimate (i.e. in legal and business appropriate software) reason to use it in a computer program? Links to real world examples would be appreciated!
Before someone answers, telling us all about the benefits of polymorphism in object oriented programming, please read the following definition for polymorphic code (taken from Wikipedia):
"Polymorphic code is code that uses a polymorphic engine to mutate while keeping the original algorithm intact. That is, the code changes itself each time it runs, but the function of the code in whole will not change at all."
Thanks, MagicAndi.
Update
Summary of answers so far:
Runtime optimization of the original code
Assigning a "DNA fingerprint" to each individual copy of an application
Obfuscate a program to prevent reverse-engineering
I was also introduced to the term 'metamorphic code'.
Runtime optimization of the original code, based on actual performance statistics gathered while running the application in its real environment with real inputs.
Digitally watermarking music is something often done to determine who was responsible for leaking a track, for example. It makes each copy of the music unique so that copies can be traced back to the original owner, but doesn't affect the audible qualities of the track.
Something similar could be done for compiled software by running each individual copy through a polymorphic engine before distributing it. Then, if a cracked version of this software is released onto the Internet, the developer might be able to tell who cracked it by looking for specific variations produced by the polymorphic engine (a sort of DNA test). As far as I know, this technique has never been used in practice.
It's not exactly what you were looking for I guess, since the polymorphic engine is not distributed with the code, but I think it's the closest to a legitimate business use you will find for this kind of technique.
Polymorphic code is a nice thing, but metamorphic code is even nicer. As to legitimate uses: well, I can't think of anything other than anti-cracking and copy protection. Look at vx.org.ua if you want real-world uses (not that legitimate, though).
As Sami notes, on-the-fly optimisation is an excellent application of polymorphic code. A great example of this is the Fastest Fourier Transform in the West (FFTW). It has a number of solvers at its disposal, which it combines with self-profiling to adjust the code path and solver parameters on subsequent executions. The result is that the program optimises itself for your computing environment, getting faster with subsequent runs!
A related idea that may possibly be of interest is computational steering. This is the practice of altering the execution path of large simulations as the run proceeds, to focus on areas of interest to the researcher. The overall purpose of the simulation is not changed, but the feedback cycle acts to optimise the calculation. In this case the executable code is not being explicitly rewritten, but the effect from a user perspective is similar.
Polymorphic code can be used to obfuscate weak or proprietary algorithms - for example, ones that use encryption. There are many "legitimate" uses for that. The term "legitimate" these days is kind of narrow-minded when it comes to IT. The core paradigms of IT include security. Whether you use polymorphic shellcode in exploits or detect such code with an AV scanner, you have to know about it.
Obfuscating a program, i.e. preventing reverse-engineering, with the goal of protecting intellectual property (IP).
