The scenario:
I have a simple state machine:
Happy path:
Uninitialized -> Initialized -> InProgress -> Done
Unhappy path:
Uninitialized -> Initialized -> Error
Simply put, I need to cause a transition (either into the InProgress or the Error state) without an external event/trigger. I.e., the Initialized state should immediately result in one of those states.
Questions:
Is it OK to cause a state transition from within Initialized.Enter()?
I could use state guards to do this, but I'd rather not have non-trivial logic in the state guard (and initialization can very well be complex).
If it is NOT OK, how can I do it differently?
Should I just take this decision out of the FSM altogether and have some other component cause the appropriate transition? But then, wouldn't I still have to call that external component from within Initialized.Enter()? So it solves nothing?
In a state machine, the next state is a combinatorial logic function of both the input and the current state.
In the case you are describing, the same cause (the Initialized state) seems to be able to trigger two different effects (either the InProgress or the Error state). I guess that there is a hidden input whose value makes the difference. I also guess that this input is received during the transition from Uninitialized to Initialized.
Therefore I would have a different model:
Uninitialized -> Successfully initialized -> InProgress -> Done
             \
              `-> Failed initialization -> Error
Possibly combining Successfully initialized with InProgress and Failed initialization with Error.
EDIT: From your comment, I understand that the hidden input is actually the result of an action (a device initialization). Taking your model, I assume that initialization takes place while in the Initialized state (let's call it Initializing). This way, the result from the device is your external event, which will trigger the transition either to InProgress or to Error.
So keep your state machine and simply add the result of device.Initialize() to the list of inputs or external events.
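For illustration, here is a minimal sketch of that design in Java (all names are hypothetical, not from your code): the initialization work runs while the machine is in an Initializing state, and its result is fed back in as an ordinary event, so nothing has to fire a transition from inside Enter().

enum State { UNINITIALIZED, INITIALIZING, IN_PROGRESS, ERROR, DONE }
enum Event { START, INIT_SUCCEEDED, INIT_FAILED, WORK_DONE }

interface Device { boolean initialize(); } // assumed external dependency

class Machine {
    private final Device device;
    private State state = State.UNINITIALIZED;

    Machine(Device device) { this.device = device; }

    void fire(Event event) {
        switch (state) {
            case UNINITIALIZED:
                if (event == Event.START) {
                    state = State.INITIALIZING;
                    // The result of the initialization action becomes the next input event.
                    fire(device.initialize() ? Event.INIT_SUCCEEDED : Event.INIT_FAILED);
                }
                break;
            case INITIALIZING:
                if (event == Event.INIT_SUCCEEDED) state = State.IN_PROGRESS;
                else if (event == Event.INIT_FAILED) state = State.ERROR;
                break;
            case IN_PROGRESS:
                if (event == Event.WORK_DONE) state = State.DONE;
                break;
            default:
                break; // ERROR and DONE are terminal
        }
    }
}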
I'm trying to access a state of a State Monad inside an IO action.
More specifically: I'm trying to write a state-dependent signal handler using installHandler from System.Posix.Signals, which requires IO; however, I'd like to do different actions and change the state from inside the handler. I took a look at unliftio, but I read that State monads shouldn't be unlifted.
Is this possible? I'm more looking for an explanation than for a "copy-paste" solution. If unlifting State inside IO doesn't work, what would a solution look like, for when one wants to do some state-aware processing inside IO?
A value of type State a b does not contain a state. It is just an encapsulated function that can provide a resulting state and a result if you pass it a starting state (using the runState function). There is no way to access an intermediate (or "current") state from outside of this function. This is what makes the State monad "pure".
You seem to intend to have a handler, that does not always behave the same (even if invoked with the same parameters), but depends on an outside state. This kind of behaviour is "impure", and cannot be achieved by using only pure means. So what you need in this case is something that encapsulates the impureness in a way that you can access a "current value" of some state from the handler, without the current value itself getting passed into the handler.
As you already know from the comments, the go-to tool for providing access to mutable state from an IO action is an IORef. IORefs work because the Haskell runtime (traditionally, before multithreading at least) serializes IO actions, so the concept of a "current value" always makes sense: after every IO action, the value pointed to by every IORef is fixed. The order in which IO actions happen is also fixed, as it is the order in which you chain them inside do blocks or with the >>= operator. Handling of signals is performed by the Haskell runtime in a deterministic way; roughly, every time you chain two IO actions, the runtime checks for pending signals and invokes the corresponding handler.
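For instance, here is a minimal sketch (my own names; requires the unix package) of a USR1 handler that counts its invocations through an IORef:

import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import Data.IORef (IORef, newIORef, atomicModifyIORef')
import System.Posix.Signals (Handler (Catch), installHandler, sigUSR1)

-- The handler closes over the IORef, so it always sees the current state.
handleUsr1 :: IORef Int -> IO ()
handleUsr1 ref = do
  n <- atomicModifyIORef' ref (\c -> (c + 1, c + 1))
  putStrLn ("Got USR1, count is now " ++ show n)

main :: IO ()
main = do
  countRef <- newIORef 0
  _ <- installHandler sigUSR1 (Catch (handleUsr1 countRef)) Nothing
  forever (threadDelay 1000000) -- keep the program alive, waiting for signals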
In case you want to write code that manipulates data in an imperative way (where you can have a lot of variables, and even arrays where you update single elements), you could write your code as an I/O action and use IORef and IOArray for it. But there is a special "lite" version of IO that supports mutable state in the same way as I/O, without being able to interact with the environment. The shared state needs to be created, read and written from inside the same "capsule" of this special IO lite, so that running the whole action does not interact with outside state, but only with its internal state. The capsule as a whole is thus pure, even if single statements inside the capsule can be considered impure. This lite version of IO is called ST, which is short for "state thread".
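A tiny sketch of the ST style (my own example): the mutable cell never escapes runST, so the function as a whole stays pure.

import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Imperative on the inside, pure from the outside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- create internal mutable state
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- update it statement by statement
  readSTRef acc                            -- the final value is the result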
How can I catch and handle an exception in a chain of async call between contracts?
Suppose that my transaction initiates the following calls:
contractA.run()
  -> do changes in contractA
  -> calls contractB.run()
       -> do changes in contractB
  -> then calls another method on contractA: contractA.callback()
       * callback() crashes
After an exception in a promise, NEAR does not roll back changes that occurred in past promises. I also don't see any method for handling exceptions in near-sdk.
One idea would be to return errors instead of throwing exceptions, and to create a bunch of private functions that update the state after an error value and add/release mutexes. However, this doesn't solve everything: sometimes we can't control that, e.g. in external smart contracts (e.g., if contractB.run panicked in the example above).
The only way to catch an exception is to have a callback on the promise that generated the exception.
In the explained scenario, contractA.callback() shouldn't crash. You need to construct the contract carefully enough to avoid failing in the callback. Most of the time this is possible, since you control the input to the callback and the amount of gas attached. If the callback fails, it's similar to having an exception within exception-handling code.
Also note that you can make sure the callback is scheduled properly, with enough gas attached, in contractA.run(). If that's not the case and, for example, you don't have enough gas attached to run, the scheduling of the callback and the other promise will fail, and the entire state change from run is rolled back.
But once run completes, the state changes from run are committed and the callback has to be carefully processed.
We have a few places in the lockup contract where the callback is allowed to fail: https://github.com/near/core-contracts/blob/6fb13584d5c9eb1b372cfd80cd18f4a4ba8d15b6/lockup/src/owner_callbacks.rs#L7-L24
And also most of the places where the callback doesn't fail: https://github.com/near/core-contracts/blob/6fb13584d5c9eb1b372cfd80cd18f4a4ba8d15b6/lockup/src/owner_callbacks.rs#L28-L61
Note that there are some situations where a contract doesn't want to rely on the stability of other contracts, e.g. when the flow is A --> B --> A --> B. In this case B can't attach the callback to the resource given to A. For these scenarios we were discussing the possibility of adding a specific construct that is atomic and has a resolving callback once it's dropped. We called it a Safe: https://github.com/nearprotocol/NEPs/pull/26
EDIT
What if contractB.run fails and I would like to update the state in contractA to roll back the changes from contractA.run?
In this case contractA.callback() is still called, but it has PromiseResult::Failed for its dependency contractB.run.
So callback() can modify the state of contractA to revert changes.
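As a hedged sketch (hypothetical contract and fields, based on the near-sdk Rust API), the callback could look like this:

use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{env, near_bindgen, PromiseResult};

#[derive(BorshDeserialize, BorshSerialize)]
pub enum Status { Idle, Running }

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize)]
pub struct ContractA { status: Status }

impl Default for ContractA {
    fn default() -> Self { Self { status: Status::Idle } }
}

#[near_bindgen]
impl ContractA {
    // A real contract should also verify the caller is the contract itself,
    // so that only the scheduled promise chain can invoke this callback.
    pub fn callback(&mut self) {
        assert_eq!(env::promise_results_count(), 1);
        match env::promise_result(0) {
            // contractB.run() succeeded: keep the changes made in run().
            PromiseResult::Successful(_) => {}
            // contractB.run() failed: revert the state changes from run().
            _ => self.status = Status::Idle,
        }
    }
}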
For example, a callback from the lockup contract implementation to handle withdrawal from the staking pool contract: https://github.com/near/core-contracts/blob/6fb13584d5c9eb1b372cfd80cd18f4a4ba8d15b6/lockup/src/foundation_callbacks.rs#L143-L185
If we adapt names to match the example:
The lockup contract (contractA) tries to withdraw funds (run()) from the staking pool (contractB), but the funds might still be locked due to recent unstaking, so the withdrawal fails (contractB.run() fails).
The callback (contractA.callback()) is called, and it checks the success of the promise (of contractB.run). Since the withdrawal failed, the callback reverts the state back to the original (reverts the status).
Actually, it's slightly more complicated because the actual sequence is A.withdraw_all -> B.get_amount -> A.on_amount_for_withdraw -> B.withdraw(amount) -> A.on_withdraw
I'm just reading into the theory of state machines. Please consider this:
          event[guard]/action
State A -----------------------------> State B
Here's my question: If I define a transition between states A and B, with event, guard, and action, like in the above "picture"; and furthermore the event is received and the guard expression evaluates to true, then: will the action be performed while my object is in state A, or B?
In other words, do I need the action to be configured to be performable in state A, or B (let's assume I want to choose only one state in which the action can be performed)?
My Google findings tell me that the action will be performed at the exact time of transitioning, but my brain has trouble accepting that: in my opinion, my object needs to be in a certain state while the action is being performed (just because my object always needs to be in a certain state), and performing the action may take a while.
Related: what happens if an error occurs while the action is being performed? Will my object stay in state A, or will it transition to state B anyway (remember that the event was received and the guard expression evaluated to true)?
This is fairly easy to check with a custom state machine listener, in which you override the corresponding methods for state entry/exit and for transitions.
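For example, assuming Spring Statemachine (which the trace below appears to come from; States and Events are your own enums), such a listener could look like this sketch:

import org.springframework.statemachine.listener.StateMachineListenerAdapter;
import org.springframework.statemachine.state.State;
import org.springframework.statemachine.transition.Transition;

public class LoggingListener extends StateMachineListenerAdapter<States, Events> {

    @Override
    public void stateEntered(State<States, Events> state) {
        System.out.println("State Entered: " + state.getId());
    }

    @Override
    public void stateExited(State<States, Events> state) {
        System.out.println("State exited: " + state.getId());
    }

    @Override
    public void stateChanged(State<States, Events> from, State<States, Events> to) {
        System.out.println("SM changed states from:" + (from == null ? null : from.getId()) + " to: " + to.getId());
    }

    @Override
    public void transitionStarted(Transition<States, Events> transition) {
        System.out.println("Started transition");
    }

    @Override
    public void transitionEnded(Transition<States, Events> transition) {
        System.out.println("Ended transition");
    }
}

Register it with stateMachine.addStateListener(new LoggingListener()); the trace below is the kind of output it prints.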
will the action be performed while my object is in state A, or B?
Your action (which is on transition) will be performed while you're in state A.
The order of what happens is the following:
Started transition
State Entered: A
SM changed states from:null to: A
Ended transition
---
Executing guard logic
Started transition
Executing normal action //action is executed before exiting State A
State exited: A
State Entered: B
SM changed states from:A to: B
Ended transition
What happens if an error occurs while the action is being performed? Will my object stay in state A, or will it transition to state B anyway?
You will stay in State A.
As you can see in the above output, the exit of the state happens after the action is executed (successfully). If an exception occurs before that, you will still be in state A.
The action will be performed while your object is in state A.
If an error happens during the action, your state machine will not go to target state B and will remain in state A. Additionally, you can define a custom error Action for the scenario when the actual action throws an exception, as sketched below.
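A hedged sketch of that error action in Spring Statemachine, inside your StateMachineConfigurerAdapter (normalAction() and errorAction() are hypothetical Action<States, Events> beans): the two-argument action() variant registers a second action that runs when the first one throws.

@Override
public void configure(StateMachineTransitionConfigurer<States, Events> transitions) throws Exception {
    transitions
        .withExternal()
        .source(States.A).target(States.B).event(Events.E1)
        .action(normalAction(), errorAction()); // errorAction runs on exception
}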
I've used PlusCal to model a trivial state machine that accepts strings matching the regular expression (X*)(Y).
(*--fair algorithm stateMachine
variables
    state = "start";
begin
Loop:
    while state /= "end" do
        either
            \* we got an X, keep going
            state := "reading"
        or
            \* we got a Y, terminate
            state := "end"
        end either;
    end while;
end algorithm;*)
Even though I've marked the algorithm fair, the following temporal condition fails because of stuttering ... the model checker allows for the case where the state machine absorbs an X and then simply stops at Loop without doing anything else ever again.
MachineTerminates == <>[](state = "end")
What does fairness actually mean in the context of a simple process like this one? Is there a way to tell the model checker that the state machine will never run out of inputs and thus will always terminate?
Okay, so this is very weird and should pass, but not for the reason you'd think. To understand why, we first have to talk about the "usual" definition of fairness, then the technical one.
Weak fairness says that if an action is never disabled, it will eventually happen. What you probably expected is the following:
Loop could pick "end" but doesn't.
Loop could pick "end" but doesn't. This happens some arbitrary number of times.
Since Loop is fair*, it is forced to pick "end".
We're done.
But that's not the case. Fairness, in PlusCal, is at the label level. It doesn't say anything about what happens in the label, just that the label itself must happen. So this is a perfectly valid behavior:
Because loop is never disabled, it eventually happens. It picks "reading".
Because loop is never disabled, it eventually happens. It picks "reading".
Because loop is never disabled, it eventually happens. It picks "reading".
This keeps going forever.
This corresponds to you inputting a string of infinite length, consisting only of X's. If you only want finite strings, you have to explicitly express that, either as part of the spec or as one of the preconditions to your desired temporal property. For example, you could make your temporal property state = "end" ~> Termination, you could model the input string, you could add a count for "how many x's before the y", etc. This gets out of what's actually wrong and into what's good spec design, though.
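For instance, here is a sketch of modeling the input string itself (the names are mine); every behavior now consumes a finite input, so the termination property holds:

(*--fair algorithm boundedInput
variables
    input \in {<<"Y">>, <<"X", "Y">>, <<"X", "X", "Y">>}; \* finite strings only
    i = 1;
    state = "start";
begin
Loop:
    while state /= "end" do
        if input[i] = "X" then
            state := "reading";
        else
            state := "end";
        end if;
        i := i + 1;
    end while;
end algorithm;*)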
That's the normal case. But this is a very, very specific exception to the rule. That's because of how "fairness" is actually defined. Formally, WF_vars(A) == <>[](ENABLED <<A>>_vars) => []<><<A>>_vars. We usually translate that as
If the system is always able to do A, then it will keep on doing A.
That's the interpretation I was using up until now. But it is wrong in one very big way. <<A>>_vars == A /\ (vars' /= vars), or "A happens and vars changes". So our definition actually is
If the system is always able to do A in a way that changes its state, then it will keep on doing A in a way that changes its state.
Once you have pc = "Loop" /\ state = "reading", doing state := "reading" does not change the state of the system, so it doesn't count as <<Next>>_vars. So <<Next>>_vars didn't actually happen, but by weak fairness it must eventually happen. The only way for <<Next>>_vars to happen is if the loop sets state := "end", which allows the while loop to terminate.
This is a pretty unstable situation. Any slight change we make to the spec is likely to push us back into more familiar territory. For example, the following spec will not terminate, as expected:
(*--fair algorithm stateMachine
variables
    state = "start";
    foo = TRUE;
begin
Loop:
    while state /= "end" do
        foo := ~foo;
        either
            \* we got an X, keep going
            state := "reading"
        or
            \* we got a Y, terminate
            state := "end"
        end either;
    end while;
end algorithm;*)
Even though foo doesn't affect the rest of the code, it allows us to have vars' /= vars without having to update state. Similarly, replacing WF_vars(Next) with WF_pc(Next) makes the spec fail, as we can reach a state where <<Next>>_pc is disabled (aka any state where state = "reading").
*Loop isn't fair, the total algorithm is. This can make a big difference in some cases, which is why we spec. But it's easier in this case to talk about Loop, as it's the only action.
I'm attempting to learn Clojure from the API and documentation available on the site. I'm a bit unclear about mutable storage in Clojure and I want to make sure my understanding is correct. Please let me know if there are any ideas that I've gotten wrong.
Edit: I'm updating this as I receive comments on its correctness.
Disclaimer: All of this information is informal and potentially wrong. Do not use this post for gaining an understanding of how Clojure works.
Vars always contain a root binding and possibly a per-thread binding. They are comparable to regular variables in imperative languages and are not suited for sharing information between threads. (thanks Arthur Ulfeldt)
Refs are locations shared between threads that support atomic transactions that can change the state of any number of refs in a single transaction. Transactions are committed upon exiting sync expressions (dosync) and conflicts are resolved automatically with STM magic (rollbacks, queues, waits, etc.)
Agents are locations that enable information to be asynchronously shared between threads with minimal overhead by dispatching independent action functions to change the agent's state. Agents are returned immediately and are therefore non-blocking, although an agent's value isn't set until a dispatched function has completed.
Atoms are locations that can be synchronously shared between threads. They support safe manipulation between different threads.
Here's my friendly summary based on when to use these structures:
Vars are like regular old variables in imperative languages. (avoid when possible)
Atoms are like Vars but with thread-sharing safety that allows for immediate reading and safe setting. (thanks Martin)
An Agent is like an Atom but rather than blocking it spawns a new thread to calculate its value, only blocks if in the middle of changing a value, and can let other threads know that it's finished assigning.
Refs are shared locations that lock themselves in transactions. Instead of making the programmer decide what happens during race conditions for every piece of locked code, we just start up a transaction and let Clojure handle all the lock conditions between the refs in that transaction.
Also, a related concept is the function future. To me, it seems like a future object can be described as a synchronous Agent where the value can't be accessed at all until the calculation is completed. It can also be described as a non-blocking Atom. Are these accurate conceptions of future?
It sounds like you are really getting Clojure! good job :)
Vars have a "root binding" visible in all threads, and each individual thread can change the value it sees without affecting the other threads. If my understanding is correct, a var cannot exist in just one thread without a root binding that is visible to all, and it can't be "rebound" until it has been defined with (def ...) the first time.
Refs are committed at the end of the (dosync ... ) transaction that encloses the changes but only when the transaction was able to finish in a consistent state.
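For example (a minimal sketch), two refs updated in one transaction either both change or neither does:

(def account-a (ref 100))
(def account-b (ref 0))

(defn transfer! [amount]
  (dosync                        ; all changes commit together, or retry
    (alter account-a - amount)
    (alter account-b + amount)))

(transfer! 25)
[@account-a @account-b] ; => [75 25]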
I think your conclusion about Atoms is wrong:
Atoms are like Vars but with thread-sharing safety that blocks until the value has changed
Atoms are changed with swap!, or at a low level with compare-and-set!. This never blocks anything. swap! works like a transaction with just one ref:
1. the old value is taken from the atom and stored thread-locally
2. the function is applied to the old value to generate a new value
3. if this succeeds, compare-and-set! is called with the old and new values; only if the value of the atom has not been changed by any other thread (it still equals the old value) is the new value written, otherwise the operation restarts at (1) until it succeeds eventually.
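A minimal sketch of that retry loop in action: ten threads hammer one atom with swap!, and every increment survives the contention.

(def counter (atom 0))

(let [workers (doall (for [_ (range 10)]
                       (future (dotimes [_ 100] (swap! counter inc)))))]
  (run! deref workers)) ; wait for all threads to finish

@counter ; => 1000, no increment is lost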
I've found two issues with your question.
You say:
If an agent is accessed while an action is occurring then the value isn't returned until the action has finished
http://clojure.org/agents says:
the state of an Agent is always immediately available for reading by any thread
I.e. you never have to wait to get the value of an agent (I assume the value changed by an action is proxied and changed atomically).
The code for the deref-method of an Agent looks like this (SVN revision 1382):
public Object deref() throws Exception{
    if(errors != null)
    {
        throw new Exception("Agent has errors", (Exception) RT.first(errors));
    }
    return state;
}
No blocking is involved.
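A minimal sketch of that non-blocking behaviour:

(def counter (agent 0))

(send counter inc) ; returns immediately; inc runs on a thread pool
@counter           ; never blocks: yields 0 or 1 depending on timing
(await counter)    ; explicitly wait until dispatched actions are done
@counter           ; => 1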
Also, I don't understand what you mean (in your Ref section) by
Transactions are committed on calls to deref
Transactions are committed when all actions of the dosync block have been completed, no exceptions have been thrown and nothing has caused the transaction to be retried. I think deref has nothing to do with it, but maybe I misunderstand your point.
Martin is right when he says that an atom's operation restarts at (1) until it succeeds eventually.
This is also called spin waiting.
While it is not really blocking on a lock, the thread that performed the operation is blocked until the operation succeeds, so it is a blocking operation and not an asynchronous one.
Also, about futures: Clojure 1.1 added abstractions for promises and futures.
A promise is a synchronization construct that can be used to deliver a value from one thread to another. Until the value has been delivered, any attempt to dereference the promise will block.
(def a-promise (promise))
(deliver a-promise :fred)
Futures represent asynchronous computations. They are a way to get code to run in another thread, and obtain the result.
(def f (future (some-sexp)))
(deref f) ; blocks the thread that derefs f until value is available
Vars don't always have a root binding. It's legal to create a var without a binding using
(def x)
or
(declare x)
Attempting to evaluate x before it has a value will result in
Var user/x is unbound.
[Thrown class java.lang.IllegalStateException]