How to design a state machine

I was hoping someone could help me design this state machine correctly. I'm using Spring Statemachine with Papyrus for this project. I have a lot of simple, orthogonal states and one sort of "master" state that depends on all the others being "on." So, let's say...
M.off -> M.on
A.off -> A.on
B.off -> B.on
C.off -> C.on
There are events that trigger a transition from A.off to A.on, B.off to B.on, etc. I have each of these (A, B, C) in its own region. Only when A, B, and C are all in the "on" position do I want to transition to the M.on state.
I'm unsure of the best way to structure this in a state machine. Any help would be appreciated.

I don't know the details of Spring Statemachine, but for a UML state machine a solution would be to use a join pseudo-state that fires when all regions are in their "on" state.
It's not clear from the question what happens when A, B, or C goes to "off" while M is "on". Assumption: M goes back to "off".
To keep this simple, I add a history pseudo-state to each region so the machine remembers which regions were "on" whenever one goes back to "off".
The transition through the join will trigger when all of the source states of its incoming transitions are active (so only if A, B, and C are all "on").
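If it helps on the Spring Statemachine side, a join can be configured directly. Below is a minimal, untested sketch modeled on the fork/join example in the Spring Statemachine reference documentation; the enum names are mine, not from the question, and A_ON/B_ON/C_ON are modeled as region end states feeding the join:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.statemachine.config.EnableStateMachine;
import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;

enum States { M_OFF, A_OFF, A_ON, B_OFF, B_ON, C_OFF, C_ON, ALL_ON, M_ON }
enum Events { A_TURNED_ON, B_TURNED_ON, C_TURNED_ON }

@Configuration
@EnableStateMachine
public class JoinConfig extends EnumStateMachineConfigurerAdapter<States, Events> {

    @Override
    public void configure(StateMachineStateConfigurer<States, Events> states) throws Exception {
        states
            .withStates()
                .initial(States.M_OFF)   // M.off is the composite holding the three regions
                .join(States.ALL_ON)     // join pseudo-state
                .state(States.M_ON)
                .and()
            .withStates().parent(States.M_OFF).initial(States.A_OFF).end(States.A_ON).and()
            .withStates().parent(States.M_OFF).initial(States.B_OFF).end(States.B_ON).and()
            .withStates().parent(States.M_OFF).initial(States.C_OFF).end(States.C_ON);
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<States, Events> transitions) throws Exception {
        transitions
            .withExternal().source(States.A_OFF).target(States.A_ON).event(Events.A_TURNED_ON).and()
            .withExternal().source(States.B_OFF).target(States.B_ON).event(Events.B_TURNED_ON).and()
            .withExternal().source(States.C_OFF).target(States.C_ON).event(Events.C_TURNED_ON).and()
            .withJoin()                  // completes only when A_ON, B_ON and C_ON are all active
                .source(States.A_ON).source(States.B_ON).source(States.C_ON)
                .target(States.ALL_ON)
                .and()
            .withExternal().source(States.ALL_ON).target(States.M_ON); // triggerless completion transition
    }
}
```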

Related

Clarification on FSM diagram

I have a simple state machine like the one above. As far as I can tell from online resources (for example, XState), a state machine can have three different things: "State", "Action or Event", and "Side effect".
So basically, events come from outside the state machine (user clicks, a reboot being triggered, etc.), and the state machine should change its state based on those events. Also, each state could perform some side effects, but a side effect should not affect the state machine itself (for example, performing a cleanup task before a reboot).
Questions:
Is it allowed for a state to perform some operations (side effects) that could change the state of the state machine (fetch A, B, C in the above diagram) without any external event or action?
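For what it's worth, the convention in most statechart tools (XState included, via its raise action) is that a side effect never mutates the machine's state directly; instead it may raise a new internal event, which the machine then processes like any external one. A minimal hand-rolled sketch of that pattern in Java, with all names illustrative:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// External events drive transitions; the "side effect" runs on entry but
// raises a follow-up event through the same queue instead of assigning state.
public class RebootMachine {
    enum State { RUNNING, CLEANING_UP, REBOOTING }
    enum Event { REBOOT_REQUESTED, CLEANUP_DONE }

    private State state = State.RUNNING;
    private final Queue<Event> queue = new ArrayDeque<>();

    public void dispatch(Event e) {
        queue.add(e);
        while ((e = queue.poll()) != null) step(e);
    }

    private void step(Event e) {
        switch (state) {
            case RUNNING:
                if (e == Event.REBOOT_REQUESTED) {
                    state = State.CLEANING_UP;
                    cleanup();                     // side effect on entry
                }
                break;
            case CLEANING_UP:
                if (e == Event.CLEANUP_DONE) state = State.REBOOTING;
                break;
            default:
                break;
        }
    }

    private void cleanup() {
        // ... perform cleanup work ...
        queue.add(Event.CLEANUP_DONE);             // raise an internal event, not a direct state change
    }
}
```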

How to handle a messaging queue outage in event-driven microservices?

Assume there are two services A and B, in a microservice environment.
In between A and B sits a messaging queue M that is a broker.
A <----> M <----> B
The problem is what if the broker M is down?
A possible solution I can think of:
Service A pings messaging queue M at regular intervals for as long as it is down. In the meantime, service A stores the data in a local DB and dumps it into the queue once broker M is back up.
Considering the above problem, I would be grateful if someone could suggest whether threads or reactive programming is better suited to this scenario, and ways it could be handled in code.
The problem is what if the broker M is down?
If the broker is down, then A and B can't use it to communicate.
What A and B should do in that scenario is going to depend very much on the details of your particular application/use-case.
Is there useful work they can do in that scenario?
If not, then they might as well just stop trying to handle any work/transactions for the time being, and instead just sit and wait for M to come back up. Having them do periodic pings/queries of M (to see if it's back yet) while in this state is a good idea.
If they can do something useful in this scenario, then you can have them continue to work in some sort of "offline mode", caching their results locally in anticipation of M's re-appearance at some point in the future. Of course this can become problematic, especially if M doesn't come back up for a long time -- e.g.
what if the set of cached local results becomes unreasonably large, such that A/B runs out of space to store it?
Or what if A and B cache local results that will both apply to the same data structure(s) within M, such that when M comes back online, some of A's results will overwrite B's (or vice-versa, depending on the order in which they reconnect)? (This is analogous to the sort of thing that source-code-control servers have to deal with after several developers have been working offline, both making changes to the same lines in the same file, and then they both come back online and want to commit their changes to that file. It can get a bit complex and there's not always an obvious "correct" way to resolve conflicts)
Finally what if it was something A or B "said" that caused M to crash in the first place? In that case, re-uploading the same requests to M after it comes back online might only cause it to crash again, and so on in an infinite loop, making the service perpetually unusable. (In this case, of course, the proper fix would be to debug M)
Another approach might be to try to avoid the problem by having multiple redundant brokers (e.g. M1, M2, M3, ...) such that as long as at least one of them is still available, productive work can continue. Or perhaps allow A and B to communicate with each other directly rather than through an intermediary.
As for whether this sort of thing would best be handled by threads or reactive programming, that's a matter of personal preference -- personally I prefer reactive programming, because the multiple-threads style usually means blocking RPC operations, and a thread that is blocked inside a blocking operation is a frozen/helpless thread until the remote party responds (e.g. if M takes 2 minutes to respond to an RPC request, then A's RPC call to M cannot return for 2 minutes, which means that the calling thread is unable to do anything at all for 2 minutes). In a reactive approach, A's thread could be doing other things during that 2-minute period (such as pinging M to make sure it's okay, or contacting a backup broker, or whatever) if it wanted to.
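To make the "offline mode plus periodic ping" idea concrete, here is a rough non-blocking sketch; the Broker interface and every name in it are hypothetical stand-ins, and a production version would use a durable local store rather than an in-memory queue:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BufferingSender {

    interface Broker {                       // hypothetical broker client
        boolean isUp();                      // cheap health probe
        boolean tryPublish(String message);  // false if the broker is unreachable
    }

    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Broker broker;

    public BufferingSender(Broker broker) {
        this.broker = broker;
        // Probe the broker periodically; no caller thread is ever blocked on it.
        scheduler.scheduleWithFixedDelay(this::flushIfUp, 0, 5, TimeUnit.SECONDS);
    }

    public void send(String message) {
        pending.add(message);                // always buffer first
        flushIfUp();
    }

    private void flushIfUp() {
        if (!broker.isUp()) return;
        String msg;
        while ((msg = pending.peek()) != null) {
            if (!broker.tryPublish(msg)) return; // broker went down again; keep buffering
            pending.poll();                  // drop only after a successful publish
        }
    }
}
```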

Design choice for a microservice event-driven architecture

Let's suppose we have the following:
DDD aggregates A and B, A can reference B.
A microservice managing A that exposes the following commands:
create A
delete A
link A to B
unlink A from B
A microservice managing B that exposes the following commands:
create B
delete B
A successful creation, deletion, link or unlink always results in the emission of a corresponding event by the microservice that performed the action.
What is the best way to design an event-driven architecture for these two microservices so that:
A and B will always eventually be consistent with each other. By consistency, I mean A should not reference B if B doesn't exist.
The events from both microservices can easily be projected into a separate read model, on which queries spanning both A and B can be made.
Specifically, the following examples could lead to transient inconsistent states, but consistency must in all cases eventually be restored:
Example 1
Initial consistent state: A exists, B doesn't, A is not linked to B
Command: link A to B
Example 2
Initial consistent state: A exists, B exists, A is linked to B
Command: delete B
Example 3
Initial consistent state: A exists, B exists, A is not linked to B
Two simultaneous commands: link A to B and delete B
I have two solutions in mind.
Solution 1
Microservice A only allows linking A to B if it has previously received a "B created" event and no "B deleted" event.
Microservice B only allows deleting B if it has not previously received a "A linked to B" event, or if that event was followed by a "A unlinked from B" event.
Microservice A listens to "B deleted" events and, upon receiving such an event, unlinks A from B (covering the race condition in which B is deleted before microservice B has received the "A linked to B" event).
Solution 2:
Microservice A always allows linking A to B.
Microservice B listens for "A linked to B" events and, upon receiving such an event, verifies that B exists. If it doesn't, it emits a "link to B refused" event.
Microservice A listens for "B deleted" and "link to B refused" events and, upon receiving such an event, unlinks A from B.
EDIT: Solution 3, proposed by Guillaume:
Microservice A only allows linking A to B if it has not previously received a "B deleted" event.
Microservice B always allows deleting B.
Microservice A listens to "B deleted" events and, upon receiving such an event, unlinks A from B.
The advantage I see for solution 2 is that the microservices don't need to keep track of past events emitted by the other service. In solution 1, basically each microservice has to maintain a read model of the other one.
A potential disadvantage for solution 2 could maybe be the added complexity of projecting these events in the read model, especially if more microservices and aggregates following the same pattern are added to the system.
Are there other (dis)advantages to one or the other solution, or even an anti-pattern I'm not aware of that should be avoided at all costs?
Is there a better solution than the two I propose?
Any advice would be appreciated.
Microservice A only allows linking A to B if it has previously received a "B created" event and no "B deleted" event.
There's a potential problem here; consider a race between two messages, link A to B and B Created. If the B Created message happens to arrive first, then everything links up as expected. If B Created happens to arrive second, then the link doesn't happen. In short, you have a business behavior that depends on your message plumbing.
As Udi Dahan put it in 2010: "A microsecond difference in timing shouldn’t make a difference to core business behaviors."
A potential disadvantage for solution 2 could maybe be the added complexity of projecting these events in the read model, especially if more microservices and aggregates following the same pattern are added to the system.
I don't like that complexity at all; it sounds like a lot of work for not very much business value.
Exception Reports might be a viable alternative. Greg Young talked about this in 2016. In short: having a monitor that detects inconsistent states, plus the remediation of those states, may be enough.
Adding automated remediation comes later. Rinat Abdullin described this progression really well.
The automated version ends up looking something like solution 2, but with the responsibilities separated -- the remediation logic lives outside of microservices A and B.
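A sketch of what that separated remediation might look like; the read-model query and the command transport are assumed interfaces, not a prescribed API:

```java
import java.util.List;

// Hypothetical reconciliation job living outside microservices A and B:
// it scans a read model for A-records that reference a non-existent B and
// issues compensating unlink commands (or merely files an exception report).
public class LinkReconciler {

    interface ReadModel {
        List<String> aIdsLinkedToMissingB(); // assumed query over the projected events
    }

    interface CommandBus {
        void sendUnlinkFromB(String aId);    // assumed command transport to microservice A
    }

    private final ReadModel readModel;
    private final CommandBus commands;

    public LinkReconciler(ReadModel readModel, CommandBus commands) {
        this.readModel = readModel;
        this.commands = commands;
    }

    // Run periodically, e.g. from a scheduler.
    public void reconcile() {
        for (String aId : readModel.aIdsLinkedToMissingB()) {
            commands.sendUnlinkFromB(aId);   // automated remediation step
        }
    }
}
```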
Your solutions seem OK but there are some things that need to be clarified:
In DDD, aggregates are consistency boundaries. An aggregate is always in a consistent state, no matter what command it receives and whether that command succeeds or not. But this does not mean that the whole system is in a permitted state from the business point of view. There are moments when the system as a whole is in a not-permitted state. This is OK as long as it eventually transitions into a permitted state. This is where Sagas/Process managers come in: their role is exactly to bring the system back into a valid state. They could be deployed as separate microservices.
One other type of component/pattern that I have used in my CQRS projects is the eventually-consistent command validator. Such a validator checks a command (and rejects it if it is not valid) before it reaches the aggregate, using a private read model. These components minimize the situations in which the system enters an invalid state, and they complement the Sagas. They should be deployed inside the microservice that contains the aggregate, as a layer on top of the domain layer (the aggregate).
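A minimal sketch of such a validator, assuming a private read model fed by B's events (all names are illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sits inside microservice A, in front of aggregate A. It maintains a private
// read model of live B ids from "B created"/"B deleted" events and rejects a
// link command early when B is not known to exist.
public class LinkCommandValidator {

    private final Set<String> liveBs = ConcurrentHashMap.newKeySet();

    public void onBCreated(String bId) { liveBs.add(bId); }
    public void onBDeleted(String bId) { liveBs.remove(bId); }

    public void validateLinkAToB(String bId) {
        // Eventually consistent: a stale read model can still let a bad link
        // through (or reject a good one), so the Saga remains the safety net.
        if (!liveBs.contains(bId)) {
            throw new IllegalArgumentException("B " + bId + " is not known to exist");
        }
    }
}
```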
Now, back to Earth. Your solutions are a combination of Aggregates, Sagas and Eventually-consistent command validations.
Solution 1
Microservice A only allows linking A to B if it has previously received a "B created" event and no "B deleted" event.
Microservice A listens to "B deleted" events and, upon receiving such an event, unlinks A from B.
In this architecture, Microservice A contains Aggregate A and a command validator, and Microservice B contains Aggregate B and a Saga. It is important to understand here that the validator would not prevent the system from reaching an invalid state; it would only reduce the probability.
Solution 2:
Microservice A always allows linking A to B.
Microservice B listens for "A linked to B" events and, upon receiving such an event, verifies that B exists. If it doesn't, it emits a "link to B refused" event.
Microservice A listens for "B deleted" and "link to B refused" events and, upon receiving such an event, unlinks A from B.
In this architecture, Microservice A contains Aggregate A and a Saga, and Microservice B contains Aggregate B and also a Saga. This solution could be simplified if the Saga on B verified the existence of B and sent an "Unlink A from B" command to A instead of yielding an event, as sketched below.
In any case, in order to apply the SRP, you could extract the Sagas into their own microservices. In that case you would have one microservice per Aggregate and one per Saga.
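For concreteness, a sketch of the simplified Saga on B's side; the event and command names are mine:

```java
// Saga hosted with microservice B: it reacts to "A linked to B" events and,
// when the referenced B does not exist, commands A to undo the link rather
// than emitting a "link to B refused" event.
public class BLinkSaga {

    interface BRepository { boolean exists(String bId); }
    interface ACommands  { void unlinkAFromB(String aId, String bId); }

    private final BRepository bs;
    private final ACommands aCommands;

    public BLinkSaga(BRepository bs, ACommands aCommands) {
        this.bs = bs;
        this.aCommands = aCommands;
    }

    public void onALinkedToB(String aId, String bId) {
        if (!bs.exists(bId)) {
            aCommands.unlinkAFromB(aId, bId); // compensating command back to A
        }
    }
}
```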
I will start with the same premise as #ConstantinGalbenu but follow with a different proposition ;)
Eventual consistency means that the whole system will eventually converge to a consistent state.
If you add to that "no matter the order in which messages are received", you've got a very strong statement by which your system will naturally tend to an ultimate coherent state without the help of an external process manager/saga.
If you make a maximum number of operations commutative from the receiver's perspective, e.g. it doesn't matter if link A to B arrives before or after create A (they both lead to the same resulting state), you're pretty much there. That's basically the first bullet point of Solution 2 generalized to a maximum of events, but not the second bullet point.
Microservice B listens for "A linked to B" events and, upon receiving such an event, verifies that B exists. If it doesn't, it emits a "link to B refused" event.
You don't need to do this in the nominal case. You'd do it in the case where you know that A didn't receive a "B deleted" message. But then it shouldn't be part of your normal business process; that's delivery-failure management at the messaging platform level. I wouldn't have the microservice where the original data came from systematically double-check everything, because things get way too complex. It looks as if you're trying to put some immediate consistency back into an eventually consistent setup.
That solution might not always be feasible, but at least from the point of view of a passive read model that doesn't emit events in response to other events, I can't think of a case where you couldn't manage to handle all events in a commutative way.
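As a sketch of what commutative handling could look like on A's side, using a tombstone set so that "link A to B" and "B deleted" produce the same final state in either arrival order (names are mine):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Order-independent event handling in microservice A: whichever of
// "link A to B" and "B deleted" arrives first, the end state is identical.
public class CommutativeLinkHandler {

    private final Map<String, String> linkedBOfA = new HashMap<>(); // aId -> bId
    private final Set<String> deletedBs = new HashSet<>();          // tombstones

    public synchronized void onLinkAToB(String aId, String bId) {
        if (deletedBs.contains(bId)) return;       // B already gone: ignore the link
        linkedBOfA.put(aId, bId);
    }

    public synchronized void onBDeleted(String bId) {
        deletedBs.add(bId);                        // tombstone first, so later links are ignored
        linkedBOfA.values().removeIf(bId::equals); // unlink any A currently linked to B
    }
}
```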

How should Terraform incremental changes be organized?

Could you help me understand the following?
For example, I have incremental changes to infrastructure, like [A] -> [B] -> [C], where [A] separately can be one server named i, [B] separately can be a second server named j, and [C] separately can be a third server named k. In total there should be 3 servers. Every state can be described as [A] = x, x + [B] = y, y + [C] = z, where x, y, z are the intermediate states.
My questions are:
How to organize incremental infrastructure changes for multiple modules in Terraform without affecting previous modules?
Is it possible to roll back changes in the middle of the chain, e.g. [B], and get back to the x state, or must we follow the chain from the last module [C] back to the required one in the middle, [B]?
At this time Terraform only considers two states[1]: the "current state" (the result of the previous Terraform run along with any "drift" in the mean time) and the "desired state" (described in configuration). Terraform's task is to identify the differences between these two states and determine which API actions are needed to move resources from their current state to the desired state.
This means that any multi-step transition cannot be orchestrated within Terraform alone. In your example, to add server j you would add it alone to the Terraform configuration, and run Terraform to create that server. You can then add server k to the configuration and run Terraform again. To automate this, an external process would need to generate these progressive configuration changes.
An alternative approach -- though not recommended for everyday use, since it can cause confusion in a collaborative environment where others can't easily see how this state was reached -- is to use the -target argument to specify one or more specific resources to operate on. In principle this allows adding both servers j and k to configuration but then using -target with only the resource representing j.
There is no formal support for rollback in Terraform. Terraform merely sees this as another state transition, "rolling forward". For example, if after creating server k you wish to revert to the previous state, you would remove the configuration for server k (by reverting in version control, perhaps) and apply again. Terraform doesn't have any awareness of the fact that this is a "rollback", but it can see that the configuration no longer contains server k and thus knows that it needs to be destroyed to reach the desired state.
One of your questions is about "affecting previous modules". In general, if no changes are made to a resource (neither its configuration nor the resource itself, outside of Terraform's purview) then Terraform should not try to update it. If it did, that would be considered a bug. However, for larger systems it can be useful to split infrastructure across multiple top-level Terraform configurations that are each applied separately. If a remote backend is in use (recommended for collaborative environments) then the terraform_remote_state data source can be used to access the output values of one configuration from within another, thus allowing the creation of a tree or DAG of Terraform configurations. This adds complexity, so it should be weighed carefully, but it has the advantage of decreasing the "blast radius" of a particular change by taking unrelated resources out of consideration altogether.
[1] I am using "state" here in the general sense you used it, which is distinct from Terraform's physical idea of "state", a data file that contains a record of which resources existed at the last run.

IAR VisualState Requiring Trigger For Every Expression inside a State

I have been using state-machine-based design tools for some time, and have seen UML modeling tools that allow you to execute your logic (call functions, do other stuff) inside a state. However, after spending a couple of days with IAR visualSTATE, it appears that you cannot execute your logic inside a state without a trigger. I am confused, as it does not make sense to have a trigger for every single action inside a state!
Here is what I expect from a state chart tool:
If I enter StateA, upon entering the state I set my values in the entry section; then I would like to call a function (I just want to call it, no trigger), and inside that function I want to trigger an event based on some logic, and that event would trigger a state transition from StateA to StateB or StateC.
Is there something wrong with this expectation? Is it possible in VisualSTATE?
Help is greatly appreciated.
VisualSTATE imposes the event-driven paradigm, just like any Graphical User Interface program. Anything and everything that happens in such systems is triggered by an event. The system then responds by performing actions (computation) and possibly by changing the state (state transition).
Probably the most difficult aspect of event-driven systems is the inversion of control, that is, your (state machine) code is called only when there is an event to process. Otherwise, your code is not even active. This means that you are not in control, the events are. Your job is to respond to events.
Perhaps before you play with visualSTATE, you could pick up any book on GUI programming for Windows (Visual Basic is a good starting point) and build a couple of event-driven applications. After you do this, the philosophy behind visualSTATE will become much clearer.
Create 3 states: A, B, C, where state A is the default state.
Upon entering state A, call the action function [which sets your variables a and b following some algorithm], followed by ^Signal1:
Entry/ action()^Signal1
Make a transition driven by Signal1 [it will serve as the event] from state A with 2 guards:
a <= b, transition to state C
a > b, transition to state B
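Expressed as a plain Java sketch, in case it helps to map the recipe back to ordinary code (visualSTATE generates the equivalent from the diagram; the names here are mine):

```java
// Entering A runs the entry action and immediately raises Signal1; the
// machine then takes the Signal1 transition, choosing B or C via the guards.
public class GuardedMachine {

    enum State { A, B, C }

    private State state;
    private int a, b;

    public void start() {
        state = State.A;   // A is the default state
        action();          // Entry/ action()
        onSignal1();       // ^Signal1
    }

    private void action() {
        // ... set variables a and b following some algorithm ...
    }

    private void onSignal1() {
        if (state != State.A) return;
        state = (a <= b) ? State.C : State.B; // guarded transitions out of A
    }
}
```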
