State machine in an RTS game

I am working on an RTS game and I want to be able to build structures, which means that a builder will walk to a target location and, once there, start building.
I want to implement a state machine in every unit, where states can be added as the user gives input, because this might come in handy later, when units guard a position and so on.
My question is: is it useful to create a state machine for every individual, user-owned unit, or is this a pitfall?

State machines are common in games.
A state machine is the usual approach for modeling the behavior of objects, NPCs, etc. in games.
They are used so commonly that they are often supported directly by game engines, e.g. Unity.
A state machine pitfall might be the resource consumption of a naive implementation, e.g. one employing the State pattern, which is fine only if you have relatively few state machines running at the same time; the resource consumption of the State pattern would be prohibitively high for thousands of concurrent state machines.
If you do intend to run thousands of individual state machines at the same time, or need extreme efficiency, you will have to implement them with a simpler approach, e.g. nested switch statements or a table-based implementation (examples in C exist); see the sketch below.
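As a minimal sketch of the table-driven idea (in Haskell, matching the code used elsewhere on this page; the builder states, events, and table contents are invented for this example, and a C version would typically be a 2D array indexed by state and event):

import qualified Data.Map.Strict as Map

-- Hypothetical states and events for the RTS builder unit.
data UnitState = Idle | MovingToSite | Building deriving (Eq, Ord, Show)
data UnitEvent = OrderBuild | ArrivedAtSite | BuildFinished deriving (Eq, Ord, Show)

-- One flat transition table shared by every unit; each unit then only
-- stores its current UnitState, keeping the per-unit cost tiny.
transitions :: Map.Map (UnitState, UnitEvent) UnitState
transitions = Map.fromList
  [ ((Idle,         OrderBuild),    MovingToSite)
  , ((MovingToSite, ArrivedAtSite), Building)
  , ((Building,     BuildFinished), Idle)
  ]

step :: UnitState -> UnitEvent -> UnitState
step s e = Map.findWithDefault s (s, e) transitions  -- unknown events leave the state unchanged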
Reasons for using state machines in general, not only in games:
The main reason for using state machines is modeling your problem with clarity: you can think about edge conditions visually by drawing out the state machine, e.g. with tools such as GraphViz, or by hand.
It is easier to see precisely what would happen in which scenario.
Sometimes your problem "just calls" for a state machine representation: it has "states" and complex behaviors that depend on past events.
State machines have decades of computer science research behind them, and algorithms for analyzing them, simplifying them, etc. are well known.
If you instead try to model a state machine "by adding and removing IF statements" by hand, you will end up with messier code that you won't be able to transform, model, etc.
On the "cons" side: if you use table-based state machines, debugging them can be somewhat hard.

Related

How to improve this class diagram about the components of a bike

So I was given the task of making a class diagram for a bicycle. I know what a class diagram is and the concepts behind one.
Now, to me, a bike has three major components: the brake system, the drive system and the steering system. And each system has its own activator for actions: a brake handle, a pedal and a handlebar.
For my bike to actually brake, I need to go through the brake lever to trigger my brake system (pass data on how hard the lever is squeezed from the lever to the brake system). The same goes for the other two systems. This is what I've come up with so far:
My question: is there a better way to illustrate the connection between the activators and the systems they're supposed to pass data to? Also, the system works in isolation, meaning that external factors such as crashes or mechanical failures are not in the scope of the system.
Yes, there is "a better way to illustrate the connection between the activators and the systems they're supposed to" activate. These "activators" are, in fact, part of these subsystems. Consequently, you should not use plain UML associations on the first two levels of your composition break-down hierarchy, but rather UML composition relationships (with "black diamonds"), which are special associations representing part-whole relationships where parts are exclusive (= non-shareable).
The Bicycle would be composed of BrakeSystem, DriveSystem and SteerSystem. Then, for instance, BrakeSystem would be composed of the "activator" BrakeLever and the Brake. Likewise for the other subsystems.
Since you aren't modeling a digital bike, it is better to say that an activator (like the brake lever) causes a subsystem (like the brake system) to react, instead of saying that "activators are supposed to pass data to" these subsystems. In a mechanical system, it's about physical causation, not about passing data.
See also https://www.uml-diagrams.org/composition.html for an explanation of the UML concept of composition, which often comes with, but does not imply, a life cycle dependency.
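To make the part-whole point concrete, here is a rough sketch of the hierarchy as plain data types (Haskell, for consistency with the code elsewhere on this page; the type and field names are my own): each "activator" appears as a field of its subsystem, and each subsystem as a field of the whole.

-- Composition ("black diamond") shows up as fields: each activator
-- is a non-shareable part of its own subsystem.
data BrakeLever  = BrakeLever
data Brake       = Brake
data BrakeSystem = BrakeSystem { brakeLever :: BrakeLever, brake :: Brake }
data DriveSystem = DriveSystem  -- pedal + drivetrain, elided here
data SteerSystem = SteerSystem  -- handlebar + fork, elided here
data Bicycle     = Bicycle
  { brakes   :: BrakeSystem
  , drive    :: DriveSystem
  , steering :: SteerSystem
  }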

Managing complex state in FP

I want to write a simulation of a multi-entity system. I believe such systems motivated the creation of Simula and OOP, where each object maintains its own state and the runtime manages the entire system (e.g. stops threads, serializes data).
On the other hand, I would like the ability to rewind, change the simulation parameters, and compare the results. Thus, immutability sounds great (at least up to the almost certain garbage collection issues caused by keeping track of possibly redundant data).
However, I don't know how to model this. Does this mean that I must put every interacting entity into a single, huge structure, where each object update would first require locating it?
I'm worried that such an approach would hurt performance badly, because of GC overhead and constant structure traversals, as opposed to keeping one fixed address per entity in memory.
UPDATE
To clarify, this question asks whether there is any design option other than creating a single structure that contains all possibly interacting entities as its root. Intuitively, such a structure would imply a logarithmic penalty per update, unless updates are somehow "clustered" to amortize the cost.
Is there a known system where interactions could be modelled differently? For example, as in cold/hot data storage optimization?
After some research, there seems to be a connection with N-body simulation, where systems can be clustered, but I'm not familiar with it yet. Even so, would that also mean I need a single structure of clusters?
While I agree with the people commenting that this is a vague question, I'll still try to address some of the issues put forth.
It's true that there's some performance overhead from immutability, because when you use mutable state, you can update some values in-place, whereas with immutable state, some copying has to take place.
It is, however, a common misconception that this causes problems with big 'object' graphs. It doesn't have to.
Consider a Haskell data structure:
data BigDataStructure = BigDataStructure {
    bigChild1 :: AnotherBigDataStructure
  , bigChild2 :: YetAnotherBigDataStructure
  -- more elements go here...
  , bigChildN :: Whatever }
  deriving (Show, Eq)
Imagine that each of these child elements is big and complex itself. If you want to change, say, bigChild2, you could write something like:
updatedValue = myValue { bigChild2 = updatedChild }
When you do that, some data copying takes place, but it's often less than most people think. This expression does create a new BigDataStructure record, but it doesn't 'deep copy' any of its values. It just reuses bigChild1, updatedChild, bigChildN, and all the other values, because they're immutable.
In theory (but we'll get back to that in a minute), the flatter your data structures are, the more data sharing should be enabled. If, on the other hand, you have some deeply nested data structures and you need to update the leaves, you'll need to create a copy of the immediate parents of those leaves, plus the parents of those parents, and their parents as well, all the way to the root. That might be expensive.
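Here is a small sketch of that path copying (my own illustration, with made-up types): updating a leaf two levels down rebuilds only the spine above it, while every sibling is shared.

-- Updating leafA rebuilds Root and the left Inner; the right Inner
-- and leafB are reused untouched, because they are immutable.
data Leaf  = Leaf Int deriving Show
data Inner = Inner { leafA :: Leaf, leafB :: Leaf } deriving Show
data Root  = Root  { left :: Inner, right :: Inner } deriving Show

setLeafA :: Int -> Root -> Root
setLeafA x r = r { left = (left r) { leafA = Leaf x } }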
That's the theory, though, and we've known for decades that it's impractical to try to predict how software will perform. Instead, try to measure it.
While the OP suggests that significant data is involved, the question doesn't state how much, nor does it give the hardware specs of the system that's going to run the simulation. So, as Eric Lippert explains so well, the person who can best answer questions about performance is you.
P.S. It's my experience that when I start to encounter performance problems, I need to get creative with how I design my system. Efficient data structures can address many performance issues. This is just as much the case in OOP as it is in FP.

Statecharts: Limit the number of times a state gets executed

How can I graphically represent, within a statechart diagram, that a state never gets executed more than a certain number of times, so that it doesn't end in an infinite loop? Something like
assert enterPIN(int p) <= 3
and then branch to another state if the condition is violated. Should I include it somehow in the guard? Or in the state activities?
EDIT:
(CheckPIN)--[invalid]-->(counter | + inc.)--[counter>3]-->(retainCard)
    ^                      |
    |-----[counter<=3]-----|
Something in this direction?
Legend: (StateName | (+-)activity), Transition: -->, [Guard]
I think your question goes way too far down into the weeds. While you can model down to the finest detail, you should aim to create a much more durable model that will not require as much change over time.
H. S. Lahman makes an excellent case for using Moore state machines in his book, Model-Based Development: Applications. In a Moore state machine, actions happen on entry to states, as opposed to on transitions between states. His most compelling reason for preferring them is that transitions then do not degenerate into a sequence of function calls; they are instead announcements of things that have completed.
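As a rough sketch of the Moore flavor (my own illustration; the states and actions are invented, not Lahman's): the action is a function of the state alone and runs on entry, while a transition merely announces what has completed.

-- Moore style: what to do depends only on which state was entered.
data AuthState = AwaitingChallenge | Challenged | Authenticated | CardRetained
  deriving (Eq, Show)

entryAction :: AuthState -> String
entryAction AwaitingChallenge = "issue a challenge to the user"
entryAction Challenged        = "verify the response"
entryAction Authenticated     = "open the session"
entryAction CardRetained      = "retain the card"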
Here is an example of how to avoid all the detail and create a more durable model:
You'll notice that how things happen is completely encapsulated. For example, challenging the user might involve a PIN, a retina scan, or a subdermal chip. The maximum number of failures allowed for each of those authentication modes might be completely different. That policy can be represented elsewhere.
To give a graphical answer:
This is how I would model it.
The counter object is usually not needed: it's a simple counter, and it's rather obvious that the reset/increment would refer to a common counter. Also, there is no real <<flow>> to that counter; a dependency without a stereotype would also suffice.
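For what it's worth, here is a rough code sketch of the counter-in-the-guard idea from the question's edit (Haskell; the state names and the three-attempt limit are taken from the question's diagram, the rest is assumption):

-- The attempt counter rides along as extended state of CheckPIN.
data PinState = CheckPIN Int | RetainCard deriving (Eq, Show)

onInvalidPin :: PinState -> PinState
onInvalidPin (CheckPIN tries)
  | tries + 1 > 3 = RetainCard           -- guard [counter > 3]: give up
  | otherwise     = CheckPIN (tries + 1) -- guard [counter <= 3]: retry
onInvalidPin s = s                       -- other states ignore the event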

Do "state machine" and "statechart" mean the same thing?

I have heard people using these terms.
I wonder whether they refer to the same thing, or whether there is a difference between the two.
Wikipedia actually covers this pretty well. http://en.wikipedia.org/wiki/State_diagram
State machines have been around for a long time (decades at least). They consist of states (usually drawn as circles) and arrows between the states, where certain actions can trigger a transition along an arrow. Moore and Mealy machines are the two main variants, distinguished by whether the output is derived from the transitions or from the states themselves.
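A minimal way to see the difference between the two variants (my own encoding, in Haskell like the other code on this page):

-- Mealy: the output is produced on the transition (state + input).
newtype Mealy s i o = Mealy { stepMealy :: s -> i -> (s, o) }

-- Moore: the output is read off the state alone.
data Moore s i o = Moore
  { stepMoore :: s -> i -> s
  , output    :: s -> o
  }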
Statecharts were invented by David Harel, and are sometimes called Harel Statecharts. He defined a pretty broad extension to typical state machines, with the goal of making state machines more useful for actual work with complicated systems.
A variant of statecharts is built into MATLAB now as Stateflow, an extension of Simulink. Statecharts are also the basis of the UML "state machine diagrams".
Learn more about Stateflow in general at: https://www.mathworks.com/help/stateflow/examples.html
Stateflow was updated in R2012b, making it very easy to create state machines and flow charts.
The major updates include a new graphical editor, state transition tables, MATLAB as the action language and an integrated debugger.
From the seminal 1999 book "Constructing the User Interface with Statecharts" by Ian Horrocks, published by Addison-Wesley:
From the very nature of user interfaces, it is apparent that states and events are a natural medium for describing their behaviour. Finite state machines are a formal mechanism for collecting and co-ordinating such fragments to form a whole. However, it is generally agreed that, because of the large number of states and events organized in an unstructured way, finite state machines are not appropriate for describing complex systems. The feasibility of a state-based approach for specifying a user interface relies on a specification language that results in diagrams that are concise, well structured, modular and hierarchical.
There are many different notations used to represent finite state machines, such as state transition diagrams and state transition matrices. However, such notations do not address the fundamental problems associated with finite state machines. The statechart notation is not just another notation for a finite state machine; statecharts are a major step forward for state-based notations. They provide a much richer and much more powerful specification language than any finite state machine notation. All the serious problems associated with finite state machines are solved by statecharts:
The number of states in a statechart rises in proportion to the complexity of the system being specified. In finite state machines, the number of states tends to increase rapidly with only a modest increase in the complexity of the system being specified.
Statecharts avoid duplicate states and duplicate event arrows. This avoids large, chaotic diagrams that are difficult to understand and difficult to modify.
The states in a statechart have a hierarchical structure, which means the system being modelled can be considered at different levels of abstraction. The modular nature of the states ensures that it is not necessary to understand an entire statechart in order to understand just one part of it. In a nutshell, statecharts are to state transition diagrams what modular decomposition and abstraction are to monolithic code.
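To illustrate the hierarchy point with a toy example (mine, not the book's): a composite state lets a single transition cover every one of its substates, which is exactly what keeps the diagram from exploding.

-- PoweredOn is a composite state; powerOff is one arrow that applies
-- no matter which substate the machine is in.
data Player = PoweredOff | PoweredOn Mode deriving Show
data Mode   = Idle | Playing | Paused     deriving Show

powerOff :: Player -> Player
powerOff _ = PoweredOff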

DDD/CQRS for composite .NET app with multiple databases

I'll admit that I am still quite a newbie with DDD and even more so with CQRS. I also realize that DDD and/or CQRS might not be the right approach to every problem. Nevertheless, I like the principles but have some questions in the context of a current project.
The solution is a simulator that generates performance data based on the current configuration. Administrators can create and modify the specifications for simulations. Testers set some environmental conditions and run the simulator. The results are captured, aggregated and reported.
The solution consists of 3 component areas, each with its own use cases, domain logic and supporting data structures. As a result, a modular design seems appealing as a way to segregate logic and separate concerns.
The first area would be the administrative aspect which allows users to create and modify the specifications. This would be a CRUD heavy 'module'.
The second area would be for executing the simulations. The domain model would be similar to the first area but optimized for executing the simulation as opposed to providing a convenient model for editing.
The third area is reporting.
From this I believe that I have three Bounded Contexts, yes? I have three clear entry points into the application, three sets of domain logic and three different data models to support the domain logic.
My first instinct is to follow these lines and create three modules (assemblies) that encapsulate the domain layer for each area. Should I also have three separate databases? Maybe more than three to support write versus read?
I gather this may be preferred for CQRS but am not sure how to go about it. It appears to me that CQRS suggests a set of back-end processes that move data around. But if that's the case, and data persistence is cross-cutting (as DDD suggests), then doesn't my data access code need awareness of all of the domain objects? If so, then is there a benefit to having separate modules?
Finally, something I failed to mention earlier is that specifications are considered 'drafts' until published, which makes them available for simulation. My PublishingService needs to have knowledge of the domain model for both the first and second areas so that, when it responds to the SpecificationPublishedEvent, it can read the specification, translate the model, and persist it for execution. This makes me think I don't have three bounded contexts after all. Or am I missing something in my analysis?
You may have a modular UI for this, but I don't see three separate domains in what you are describing necessarily.
First off, in CQRS reporting is not directly a domain-model concern; it is a facet of the separated read model, which takes on the responsibility of presenting the domain state optimized for reporting.
Second, just because you have different things happening in the domain is not necessarily a reason to bound them away from each other. I'd take a read through the blue DDD book to get a bit better feel for what BCs look like.
I don't really understand your domain well enough but I'll try to give some general suggestions.
Start with where you talked about your PublishingService. I see a Specification aggregate root which takes a few commands that probably look like CreateNewSpecification, UpdateSpecification and PublishSpecification.
The events look similar and probably feel redundant: SpecificationCreated, SpecificationUpdated, SpecificationPublished. Which kind of sucks, but a CRUD-heavy model doesn't have very interesting behaviors. I'd also suggest finding an automated way to deal with model/schema changes on this aggregate, which will be tedious if you don't use code generation, or handling the changes in a dynamic way that doesn't require you to build new events each time.
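Sketched as plain types (Haskell here, for consistency with the rest of the page; SpecId and SpecDraft are hypothetical placeholders), the command/event pairs named above would look something like:

type SpecId    = Int        -- hypothetical identifier
data SpecDraft = SpecDraft  -- placeholder for the editable specification body

data SpecCommand
  = CreateNewSpecification SpecDraft
  | UpdateSpecification    SpecId SpecDraft
  | PublishSpecification   SpecId

data SpecEvent
  = SpecificationCreated   SpecId
  | SpecificationUpdated   SpecId
  | SpecificationPublished SpecId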
Also you might just consider not using event sourcing for such an aggregate root since it is so CRUD heavy.
The second thing you describe seems to be about starting a simulation which will run based on a Specification and produce data during that simulation (I assume). An event driven architecture makes sense here to decouple updating the reporting data from the process that is producing the data. This has huge benefits if you are producing large amounts of data to process.
However, it doesn't sound like a Simulation is necessarily the kind of AR that would benefit from event sourcing either, for a couple of reasons:
Simulation really takes only one command, something like StartSimulation
Simulation then produces events over its lifetime which represent what is happening internally with the simulation
Simulation doesn't seem to ever receive any other commands that could depend on the current state of the Simulation
Simulation is not interacted with by multiple clients/users simultaneously, and as we pointed out it isn't really interacted with at all
In general, domain modeling is very specific to each individual project so it's hard to give you all the information you need to build your domain model. It will come as a result of spending a great deal of time trying to understand your user's needs and the problem they are trying to solve with the software. It likely will go through multiple refinements as you develop insights into their process.
