Finite State Machine: One State to Multiple States - state-machine

I'm writing a simple finite state machine and realized that there are situations where an event can take a state to more than one possible result. Basically, from state A, if event E happens, the state could end up as either C or D.
I'm currently using the Javascript Finite State Machine code written here: https://github.com/jakesgordon/javascript-state-machine
From the documentation I don't see an obvious way to make this possible. What's more, I feel like maybe this is actually a flaw in my original design.
Essentially, in a finite state machine, should there be a situation where a transition happens and, based on some logic, results in one of multiple states (1 to many), or should we check the logic first to see which transition needs to take place (1 to 1)?

Congratulations, you've just discovered non-deterministic finite state machines! The idea is similar to that of a deterministic state machine, except that there may be multiple ways to transition from a state given the same input symbol. How the choice is actually made is unspecified (randomness, user input, branching out and running them all at once, etc.).
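If you want to keep the machine deterministic, the usual approach is the 1-to-1 variant from your question: evaluate the branching logic first, then fire the transition whose target you actually want. A minimal Python sketch of that idea (the Machine class, on_event_e and the payload > 0 condition are all invented for illustration):

class Machine:
    def __init__(self):
        self.state = "A"

    def on_event_e(self, payload):
        # The branching logic lives outside the transition table: we look at
        # the data first and then perform exactly one transition.
        if payload > 0:          # made-up condition standing in for "some logic"
            self.state = "C"
        else:
            self.state = "D"

m = Machine()
m.on_event_e(5)
print(m.state)   # -> C

With a non-deterministic formulation you would instead list both C and D as possible targets of E and leave the choice to whatever resolution policy you pick.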

Related

UML Statemachine - Reuse state

I'm trying to model a state machine which reuses a state in order to reduce complexity.
I've got three states: State A, B and X.
My state X can be entered via a transition from either state A or state B.
State X includes multiple substates with lots of complexity and I don't want to implement it twice.
After the process in state X is completed I need to transition back to state A or B, based on which one was the previous state.
Is there an elegant way to solve this?
State X includes multiple substates with lots of complexity and I don't want to implement it twice
Define a submachine corresponding to your state X and, in your current machine, use a submachine state to instantiate it where you need it.
See §14.2.3.4.7 Submachine States and submachines, page 311, in formal-17-12-05:
Submachines are a means by which a single StateMachine specification can be reused multiple times. They are similar to encapsulated composite States in that they need to bind incoming and outgoing Transitions to their internal Vertices.
...
NOTE. Each submachine State represents a distinct instantiation of a submachine, even when two or more submachine States reference the same submachine.
A submachine will help you reuse part of your state modelling several times.
But if you want to be able to enter your state X from A or B and then return to the previous state, a ShallowHistory pseudostate would be a good idea.
In the following state machine, I modeled a submachine X referenced by both states X1 and X2. I also wanted to model the fact that state X2 is processed after A or B and that the next state is the previous one.
Another solution consists in playing with transition guards or events/triggers. Keep in mind that transitions are triggered when specific events occur or when their guards are true, cf. the following screenshot.
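The same "finish X, then return to whichever state you came from" behaviour behind ShallowHistory can also be sketched directly in code. A rough Python illustration (all names are invented for the example, not part of any UML tool):

class Machine:
    def __init__(self):
        self.state = "A"
        self.return_to = None         # plays the role of the history pseudostate

    def enter_x(self):
        self.return_to = self.state   # remember whether we came from A or B
        self.state = "X"              # X's substates would run here

    def finish_x(self):
        self.state = self.return_to   # transition back to the previous state
        self.return_to = None

m = Machine()
m.state = "B"
m.enter_x()
m.finish_x()
print(m.state)   # -> B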

Handling Failures in State diagram

I have a system with 3 states. I wanted to handle failures. That is, when the system reboots, it doesn't know the state it's in. Is the following state diagram correct?
This is not a valid UML state machine diagram for several reasons:
The start node is the wrong symbol. It should be a bullet.
The arrows fork. Each arrow (transition) should begin and end on a node.
The Y and N don't have square brackets.
Regarding the semantics:
The decisions don't have meaningful text (they should refer to the previously stored state). They may be combined into one decision "storedState = " which has four outgoing transitions guarded as [S1], [S2], [S3] and [empty].
The actions that store the state in persistent storage, so that it can be restored after a crash, are not present (see the sketch below).
If all decisions yield N, the object is destroyed immediately instead of ending up in some default state.
I don't understand the intention of A1, A2 and A3.
Perhaps it would be good to first show the diagram without the reboot logic and then tell us what you are trying to add to that diagram to handle the failures.
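To make the storage point concrete: persist the state on every transition, and on (re)boot read it back, falling back to a default when nothing valid is stored. A minimal Python sketch (the file name, state names and default are all illustrative):

STATE_FILE = "machine.state"          # illustrative path
VALID_STATES = {"S1", "S2", "S3"}
DEFAULT_STATE = "S1"

def store_state(state):
    # Called on every transition so the state survives a crash or reboot.
    with open(STATE_FILE, "w") as f:
        f.write(state)

def restore_state():
    # Called once at start-up: one decision with [S1]/[S2]/[S3]/[else] guards.
    try:
        with open(STATE_FILE) as f:
            stored = f.read().strip()
    except OSError:
        return DEFAULT_STATE
    return stored if stored in VALID_STATES else DEFAULT_STATE

current_state = restore_state()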

In a UML state machine, can an initial pseudostate have incoming transitions?

In UML 2.5.1, the initial pseudostate of a state machine is defined as follows:
An initial Pseudostate represents a starting point for a Region; that is, it is the point from which execution of its contained behavior commences when the Region is entered via default activation. It is the source for at most one Transition, which may have an associated effect Behavior, but not an associated trigger or guard. There can be at most one initial Vertex in a Region.
In other words, a UML state machine should almost always contain exactly one initial pseudostate, which should have exactly one outgoing transition.
However, can an initial pseudostate have incoming transitions as well? For example:
I cannot find anything forbidding it in the UML specification, yet I cannot find any example online where this case happens, so I was wondering whether I overlooked anything.
EDIT: To go into more detail, if we look into the OCL constraints stated in the specification, we can only find the following one that affects outgoing transitions (section 14.5.6.7):
inv: (kind = PseudostateKind::initial) implies (outgoing->size() <= 1)
but I cannot find any constraint regarding incoming transitions.
EDIT2: I have just realized that my model is wrong! Considering this sentence of the specification (cited above): "It is the source for at most one Transition, which may have an associated effect Behavior, but not an associated trigger or guard."
Therefore the transition between init and s1 should actually have zero triggers, instead of having e1 as a trigger.
Note that this does not invalidate the initial question.
I see nothing in the UML 2.5.1 Specification that prohibits a transition whose target is the initial pseudostate.
Such a transition would be meaningless at best and confusing at worst, which is likely why no examples are found.
Edit: see the comments!
On p. 423 UML 2.5:
15.7.18 InitialNode [Class]
15.7.18.4 Constraints
• no_incoming_edges
An InitialNode has no incoming ActivityEdges.
inv: incoming->isEmpty()
N.B. If you intend to have a self-transition for e1, then why not just use that? The initial pseudostate can anyway have only a single outgoing edge, namely to the first state (here s1).
No, this is not allowed. And why would one do that? As you already stated in the cited text, it can only have one outgoing edge without any guard. So what is the added value, as you cannot reuse anything?
I think the text is pretty clear as-is: "[An initial Pseudostate] is the point from which execution of its contained behavior commences when the Region is entered via default activation." If you connect a transition back around to the initial pseudostate, the initial pseudostate is no longer "the point from which execution of its contained behavior commences," it is something else, and is therefore undefined.

State machine - state transition diagram for double delay discrete time machine

I am working my way through an MIT OCW course, Introduction to Electrical Engineering and Computer Science I, in which state machines are employed. I have noticed that the course instructors do not draw state transition diagrams for most of the state machines they discuss.
One problem is to design and write Python code for a state machine whose output is the input from two time intervals in the past. I think that this is an infinite state machine, for which a state transition diagram might be useful for getting the general idea while showing only a few of the states.
I am wondering if a state transition diagram can be drawn for such a double delay machine. All the examples, so far, have a transition line emerging from a state bubble, marked with an input and the resulting output, and pointing at the next state. For a double delay machine the input of consequence was entered two time periods earlier. The problem instructions state that all state memory for the machine must be in one argument. No mention is made of input memory, which I would think necessary.
My questions:
Can a state transition diagram be drawn for this state machine?
Is it necessarily the case that input memory be a part of this design?
It is impossible to draw a complete diagram since the set of all possible states includes any value of any data type, as shown by the example of the (single) delay state machine in the readings. So the number of possible states can't be enumerated. See Chapter 4: State Machines.
In the problem description it states that:
It is essential that the init and getNextValues methods in any state machine not set or read any instance variables except self.startState (not even self.state). All memory (state) must be in the state argument to getNextValues. Look at the examples in the course notes, section 4.1.
So the state is all the memory you need. There is no reason not to use an array as state to keep the last two inputs.
First we save both values in memory (state):
class Delay2Machine(StateMachine):
    def __init__(self, val0, val1):
        self.startState = (val0, val1)
Following the super class SM step function implementation also given in the readings:
def step(self, inp):
    (s, o) = self.getNextValues(self.state, inp)
    self.state = s
    return o
The output will be the first of the values saved in memory, and the state will be updated to include the new input
def getNextValues(self, state, inp):
    return ((state[1], inp), state[0])
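For example, stepping the machine by hand (a quick sketch: it relies only on the step method shown above, and initialises self.state from startState directly rather than through whatever start-up helper the course's superclass provides):

m = Delay2Machine(0, 0)
m.state = m.startState            # stand-in for the superclass's start-up code
print([m.step(inp) for inp in [3, 1, 2, 5, 9]])
# -> [0, 0, 3, 1, 2]: each output is the input from two steps earlier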

Managing a stateful computation system in Haskell

So, I have a system of stateful processors that are chained together. For example, a processor might output the average of its last 10 inputs. It requires state to calculate this average.
I would like to submit values to the system, and get the outputs. I also would like to jump back and restore the state at any time in the past. I.e. I run 1000 values through the system. Now I want to "move" the system back to exactly as it was after I had sent the 500th value through. Then I want to "replay" the system from that point again.
I also need to be able to persist the historical state to disk so I can restore the whole thing again some time in the future (and still have the move back and replay functions work). And of course, I need to do this with gigabytes of data, and have it be extremely fast :)
I had been approaching it using closures to hold state. But I'm wondering if it would make more sense to use a monad. I have only read through 3 or 4 analogies for monads so don't understand them well yet, so feel free to educate me.
I imagine each processor modifying its state in the monad in such a way that its history is kept and tied to an id for each processing step. Then somehow the monad would be able to switch its state back to a past step id and run the system with the monad in that state. And the monad would have some mechanism for (de)serializing itself for storage.
(and given the size of the data... it really shouldn't even all be in memory, which would mean the monad would need to be mapped to disk, cached, etc...)
Is there an existing library/mechanism/approach/concept that has already been done to accomplish or assist in accomplishing what I'm trying to do?
So, I have a system of stateful processors that are chained together. For example, a processor might output the average of its last 10 inputs. It requires state to calculate this average.
First of all, it sounds like what you have are not just "stateful processors" but something like finite-state machines and/or transducers. This is probably a good place to start for research.
I would like to submit values to the system, and get the outputs. I also would like to jump back and restore the state at any time in the past. I.e. I run 1000 values through the system. Now I want to "move" the system back to exactly as it was after I had sent the 500th value through. Then I want to "replay" the system from that point again.
The simplest approach here, of course, is to simply keep a log of all prior states. But since it sounds like you have a great deal of data, the storage needed could easily become prohibitive. I would recommend thinking about how you might construct your processors in a way that could avoid this, e.g.:
If a processor's state can be reconstructed easily from the states of its neighbors a few steps prior, you can avoid logging it directly
If a processor is easily reversible in some situations, you don't need to log those immediately; rewinding can be calculated directly, and logging can be done as periodic snapshots (see the sketch after this list)
If you can nail a processor down to a very small number of states, make sure to do so.
If a processor behaves in very predictable ways on certain kinds of input, you can record that as such--e.g., if it idles on numeric input below some cutoff, rather than logging each value just log "idled for N steps".
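The periodic-snapshot idea above is language-agnostic; as a rough illustration (in Python for brevity, with invented names, not a drop-in for the Haskell design), rewinding then amounts to restoring the nearest snapshot and replaying the logged inputs:

class Recorder:
    # Wraps a pure step function `step(state, inp) -> new_state`, keeping the
    # input log plus periodic snapshots so any past state can be rebuilt.
    # Assumes states are immutable values (otherwise snapshots must be copied).
    def __init__(self, step, initial_state, every=100):
        self.step = step
        self.state = initial_state
        self.every = every
        self.inputs = []
        self.snapshots = {0: initial_state}   # step index -> state

    def feed(self, inp):
        self.inputs.append(inp)
        self.state = self.step(self.state, inp)
        if len(self.inputs) % self.every == 0:
            self.snapshots[len(self.inputs)] = self.state

    def rewind(self, n):
        # Restore the state as it was after the n-th input: start from the
        # nearest snapshot at or before n and replay the logged inputs.
        base = max(i for i in self.snapshots if i <= n)
        state = self.snapshots[base]
        for inp in self.inputs[base:n]:
            state = self.step(state, inp)
        self.state = state
        self.inputs = self.inputs[:n]
        self.snapshots = {i: s for i, s in self.snapshots.items() if i <= n}

r = Recorder(step=lambda s, x: s + x, initial_state=0)
for x in range(1000):
    r.feed(x)
r.rewind(500)                 # back to exactly after the 500th input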
I also need to be able to persist the historical state to disk so I can restore the whole thing again some time in the future (and still have the move back and replay functions work). And of course, I need to do this with gigabytes of data, and have it be extremely fast :)
Explicit state is your friend. Functions are a convenient way to represent active state machines, but they can't be serialized in any simple way. You want a clean separation of a (basically static) network of processors vs. a series of internal states used by each processor to calculate the next step.
Is there an existing library/mechanism/approach/concept that has already been done to accomplish what I'm trying to do? Does the monad approach make sense? Are there other better/special approaches that would help it do this efficiently especially given the enormous amount of data I have to manage?
If most of your processors resemble finite state transducers, and you need to have processors that take inputs of various types and produce different types of outputs, what you probably want is actually something with a structure based on Arrows, which gives you an abstraction for things that compose "like functions" in some sense, e.g., connecting the input of one processor to the output of another.
Furthermore, as long as you avoid the ArrowApply class and make sure that your state machine model only returns an output value and a new state, you'll be guaranteed to avoid implicit state because (unlike functions) Arrows aren't automatically higher-order.
Given the size of the data... it really shouldn't even all be in memory, which would mean the monad would need to be mapped to disk, cached, etc...
Given a static representation of your processor network, it shouldn't be too difficult to also provide an incremental input/output system that would read the data, serialize/deserialize the state, and write any output.
As a quick, rough starting point, here's an example of probably the simplest version of what I've outlined above, ignoring the logging issue for the moment:
data Transducer s a b = Transducer { curState :: s
                                   , runStep  :: s -> a -> (s, b)
                                   }

runTransducer :: Transducer s a b -> [a] -> (Transducer s a b, [b])
runTransducer t []     = (t, [])
runTransducer t (x:xs) = let (s, y)   = runStep t (curState t) x
                             (t', ys) = runTransducer (t { curState = s }) xs
                         in (t', y:ys)
It's a simple, generic processor with explicit internal state of type s, input of type a, and output of type b. The runTransducer function shoves a list of inputs through, updating the state value manually, and collects the updated transducer along with a list of outputs.
P.S. -- since you were asking about monads, you might want to know if the example I gave is one. In fact, it's a combination of multiple common monads, though which ones depends on how you look at it. However, I've deliberately avoided treating it as a monad! The thing is, monads capture only abstractions that are in some sense very powerful, but that same power also makes them more resistant in some ways to optimization and static analysis. The main thing that needs to be ruled out is processors that take other processors as input and run them, which (as you can imagine) can create convoluted logic that's nearly impossible to analyze.
So, while the processors probably could be monads, and in some logical sense intrinsically are, it may be more useful to pretend that they aren't, imposing an artificial limitation in order to make static analysis simpler.
