Difference between guard and event in UML state diagram

I thought I could differentiate between an event and a guard. But then I came across an event that looks similar to a guard:
counter > 4 [pin is high] / switch on
^^^^^^^^^^^
event
where I viewed the change of the variable counter from a value smaller than 4 to one greater than 4 as the event. Does that mean an event can also be a condition, like a guard?

An event is the thing that triggers the transition. In your case, counter > 4 is a change event, meaning "the counter value has changed and is now greater than 4".
The expression between the brackets is the guard. In your case, pin is high, meaning "the transition is only enabled if the pin is high".
switch on is the behavior that is executed when the transition fires.
Footnote: in your example the event is indeed very similar to the guard.
In C it could look like this:
/**
 * this interrupt is triggered when the
 * counter exceeds the threshold (4)
 */
static void counter_isr(void)
{
    if (pin_is_high(PIN))
        switch_on();
}
From the UML 2.5 specification:
14.2.3.8 Transitions
...
A Transition may own a set of Triggers, each of which specifies an Event
whose occurrence, when dispatched, may trigger traversal of the
Transition. A Transition trigger is said to be enabled if the dispatched
Event occurrence matches its Event type.
14.2.4.9 Transition ...
The default textual notation for a Transition is defined by
the following BNF expression:
[<trigger> [‘,’ <trigger>]* [‘[’ <guard> ‘]’] [‘/’ <behavior-expression>]]
In other words: trigger [guard] / behavior
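For illustration, here is a minimal sketch of how trigger [guard] / behavior could map to code (all names here, such as Transition and dispatch, are my own hypothetical choices, not anything UML mandates):

#include <cstdio>

enum class Event { CounterChanged };

struct Transition {
    Event trigger;        // <trigger>: the event that may fire the transition
    bool (*guard)();      // [<guard>]: evaluated only after the trigger matches
    void (*behavior)();   // /<behavior-expression>: run when the transition fires
};

static int  counter  = 5;     // pretend the counter just changed from 4 to 5
static bool pin_high = true;

static bool pin_is_high() { return pin_high; }
static void switch_on()   { std::puts("switch on"); }

// counter > 4 [pin is high] / switch on
static const Transition t = { Event::CounterChanged, pin_is_high, switch_on };

void dispatch(Event e) {
    if (e == t.trigger && counter > 4)  // change event: counter became > 4
        if (t.guard())                  // guard checked once, when triggered
            t.behavior();               // behavior of the transition
}

int main() { dispatch(Event::CounterChanged); }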

Related

How to identify that actions instantiated from the same activation bar do not happen concurrently? (Sequence diagram)

I am looking at a sequence diagram similar to the attached snapshot: object A instantiates 3 actions on the same activation bar, and they are received by object B on the same activation bar as well.
So can I say the 3 functions are executed one after another? Since they have solid arrowheads, I am not sure my understanding is correct.
Please advise, thanks.
can I say the 3 functions are executed one after another? Since they have solid arrowheads
The arrows indicate synchronous messages, so the second message cannot be sent before the end of the execution of function A, the third cannot be sent before the end of the execution of function B, and the ExecutionSpecification on the lifeline of Object A cannot end before the end of the execution of function C.
object A instantiates 3 actions on the same activation bar
All three messages can start from the same ExecutionSpecification on the lifeline of Object A.
3 actions ... received by object B on the same activation bar
This is invalid: an ExecutionSpecification represents the execution of one and only one action/behavior, so you need three ExecutionSpecifications on the lifeline of Object B; you cannot have only one.
A valid diagram can be:
or, also showing the returns:
(in these, function_c is not called immediately when function_b returns: the execution on object A does 'something' before, introducing a delay, and also 'something' after)
From your remark:
The requirements say these 3 functions should be executed by objectB concurrently. Does that mean I should use a line (open) arrowhead instead of a solid arrowhead? And can I use the same ExecutionSpecification on objectB if the functions are executed concurrently?
If you use asynch calls (open arrowhead) there is no return message, so object A cannot know when the execution ends, and it can immediately send the next message.
The fact that the 3 functions should be executed by object B concurrently is something else: asynch calls can be executed in sequence by the receiver, and the fact that the receiver does concurrent executions does not imply asynch calls. But yes, you can use asynch calls.
You still have to use 3 ExecutionSpecifications on object B; to show the concurrent execution, just use a combined fragment "par".
So for instance:
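As a rough programming analogy (my own, not part of UML): a synchronous message behaves like an ordinary blocking call, while asynch messages in a "par" fragment resemble launching the functions on separate threads:

#include <cstdio>
#include <thread>

// hypothetical functions executed by "object B"
void function_a() { std::puts("function_a done"); }
void function_b() { std::puts("function_b done"); }
void function_c() { std::puts("function_c done"); }

int main() {
    // synchronous messages: each call returns before the next is sent,
    // so the three executions happen strictly one after another
    function_a();
    function_b();
    function_c();

    // asynch messages in a "par" fragment: object A does not wait,
    // and the three executions may overlap
    std::thread ta(function_a), tb(function_b), tc(function_c);
    ta.join();
    tb.join();
    tc.join();
}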

UML activity diagram: fork with guard conditions on the outputs

I have run into a semantic problem with guard conditions and forks in activity diagrams. Suppose a terminating action A leads to a fork, and the outputs of the fork lead to actions B and C (that is, the fork has 1 input and 2 outputs). If A has successfully terminated, and the guard condition of B is valid while the guard condition of C is not, will the whole activity continue to action B and wait for the guard condition of C to become true, or will neither B nor C be executed?
Update: consider the following activity example.
Suppose that the first time A terminates, the guard condition of C is not valid, while B has no guard. Via the merge node, A is executed a second time. After the second termination of A, the guard condition of C becomes eternally valid, so C will be executed twice in a row, once for each termination of A. Is this correct?
Once A is finished, it will emit a token and the fork will duplicate it. One token goes straight to B, which after finishing re-triggers A, ad infinitum. Now, what happens to the token(s) traveling to C? They just queue at the guard. When the guard opens after some time, it lets a single token pass (because C can hold only a single one). When C is finished, it will allow another token to enter (if meanwhile multiple tokens have arrived), again depending on the guard. Basically, C can be started as many times as A has been completed before.
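A toy simulation of that token flow (my own illustration; UML defines this declaratively, not as code):

#include <cstdio>

int main() {
    int  tokens_at_c = 0;     // tokens queued on C's incoming edge
    bool guard_c     = false; // the guard on that edge
    int  a_done      = 0;     // completions of A

    for (int round = 1; round <= 4; ++round) {
        ++a_done;             // A terminates; the fork duplicates its token:
        ++tokens_at_c;        // one copy goes to B, the other queues at C's guard

        if (round == 3)
            guard_c = true;   // the guard becomes (eternally) valid

        while (guard_c && tokens_at_c > 0) {
            --tokens_at_c;    // C consumes one token per execution
            std::printf("C runs (A has completed %d times)\n", a_done);
        }
    }
}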
N.B.: the premise in your question, "guard conditions on the outputs", is wrong. A guard is always on an incoming control flow of an action. The fork does not control the guard; the action does. Furthermore, an action can never have a guard on its output; that is controlled by the action's behavior. When it terminates, it emits a token on each of its outgoing control flows (a so-called implicit fork).
Answer to the initial question, left as general information:
Actually, when you draw it, the situation is obvious:
The token emerging to the right from the top fork will be blocked. B will start, since its token passed the guard. Because C does not start, the lower join will hang, as it needs 2 tokens. So D is not reached, unless the guard of C is at some point unblocked externally.

State machine timer self transition

Please tell me whether I have correctly understood the meaning of these 3 state machines.
1. StateA's Enter action is called (which is nothing at the moment) and then the timer is set up. When the timer triggers, Action1 is executed, then StateA's Exit action (also nothing) is executed, and then the whole loop repeats: StateA's Enter action, setting up the timer, etc. This makes a kind of polling with Action1.
2. StateB's Enter action is called, the timer is set up and triggers after 10 ms, and Action2 is executed. The timer is not renewed, so it is a kind of delayed action on the state.
3. StateC's Enter action is called and the timer is set up; when it triggers, Action3 is called, then StateC's Exit action and finally StateD's Enter action are executed.
Please confirm, or correct me if I am wrong.
1: Your description is correct, with one exception: the exit action is executed before Action1 is executed, at least, that is how I interpret the UML 2.5 spec. Section 14.2.3.4.6 says:
If the composite State has an exit Behavior defined, it is executed (...) before any effect Behavior of the outgoing external Transition.
I think you can safely assume that this is also true for non-composite states, but the UML 2.5 spec should be more explicit in this respect.
2: I don't think this is a proper UML notation, so I cannot confirm or deny your description.
3: This state machine diagram does not specify whether the initial state is StateC or StateD. If it is StateC, then your description is correct, with the exception that StateC's exit action is executed before Action3. To be unambiguous, the diagram should have an initial pseudostate (filled circle) with a transition from the initial pseudostate to StateC.
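A small sketch of that ordering for state machine 1 (my reading of the spec; the function names and the timer handling are hypothetical):

#include <cstdio>

// Ordering for an external self-transition on StateA, per UML 2.5 14.2.3.4.6:
// exit behavior first, then the transition's effect, then the entry behavior.
void state_a_entry() { std::puts("StateA entry (arm the 10 ms timer)"); }
void state_a_exit()  { std::puts("StateA exit"); }
void action1()       { std::puts("Action1"); }

void on_timer_expired() {   // trigger: After(10 ms); effect: Action1
    state_a_exit();         // exit behavior runs before the effect
    action1();              // the transition's effect
    state_a_entry();        // the state is re-entered, restarting the timer
}

int main() {
    state_a_entry();        // initial entry into StateA
    on_timer_expired();     // simulate one expiry of the timer
}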
Generally, states are drawn with rounded rectangles.
1) The notation along the transition is <trigger>/<effect>. The semantics of After(10) leaves some room for interpretation. So when the <trigger> fires, the machine performs the <effect> and returns to the same state.
2) I don't know this notation. You can specify entry/do/exit operations like this:
3) Is like 1, but enters a new state.

Verilog and ASM implementation

In the question below:
The ASM chart shows that the value of q_next is compared to 0 to proceed to the next state. But before q_next is compared, the value of q is already updated with q_next, so if we compare the value of q with 0 instead, will the results be the same in terms of timing and other parameters?
Also, what should the types of q_next and q be? Should they be reg or wire?
I have attached screenshots of the ASM chart and the Verilog code. I also don't understand the timing implications of the conditional box (in general, can't we put the output of a conditional box in a separate state which doesn't depend on the output of the conditional box?).
For example, in the wait1 state we check the value of sw; if it is true, we decrement the counter, then check whether the counter has reached zero, and then assert db_tick. I want to understand the time flow as we move through wait1, decrement the counter, and assert db_tick. Are there any clock cycles involved between these stages, that is, moving from a state to a conditional box?
Also, in the Verilog code, we use q_load and q_tick to control the counter. Why are these signals used when we could simply control the counter in the states?
Is it done to make sure that the FSM (control path) controls the counter (data path)? Please explain. Thanks in advance.
In the question below, the ASM chart shows that the value of q_next is compared to 0 to proceed to the next state, but before q_next is compared, the value of q is already updated with q_next, so if we compare the value of q with 0, will the results be the same in terms of timing and other parameters?
No. When q_next has a value of 0, q still contains a value of 1 until it's updated on the next positive clock edge. If you check for q==0, you will spend an extra clock cycle in each wait state.
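To see the one-cycle difference concretely, here is a toy C++ model of the counter (my own illustration, not the original Verilog):

#include <cstdio>

int main() {
    int q = 3;                         // registered counter value
    for (int cycle = 0; cycle < 4; ++cycle) {
        int q_next = q - 1;            // combinational next value
        std::printf("cycle %d: q=%d q_next=%d done(q_next)=%d done(q)=%d\n",
                    cycle, q, q_next, q_next == 0, q == 0);
        q = q_next;                    // clock edge: q <= q_next
    }
    // done(q_next) is true in cycle 2; done(q) only in cycle 3, one cycle
    // later (in the real design the FSM would have left the state by then)
}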
Also, what should the types of q_next and q be? Should they be reg or wire?
Either. reg types (like q_reg) mean they're assigned a value in an always block, while wire types (like q_next) are assigned using an assign statement or as the output of a submodule.
I also don't understand the timing implications of the conditional box (in general, can't we put the output of a conditional box in a separate state which doesn't depend on the output of the conditional box?). For example, in the wait1 state we check the value of sw; if it is true, we decrement the counter, then check whether the counter has reached zero, and then assert db_tick. I want to understand the time flow as we move through wait1, decrement the counter, and assert db_tick.
Here's the flow of operations for a single clock cycle while in the wait1 state:
Is SW==1? If not, do nothing else, and go to state zero. Those operations will be done on the next cycle.
If SW==1, compute q_next, and assign that value to q for the next cycle.
Is q_next==0? If not, remain in wait1 for the next cycle and repeat.
Otherwise, assert db_tick=1 for this clock cycle, and go to state one.
If you split up the two conditionals into two separate states, counting down to 0 will take twice as long.
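Here is that flow as a C++ sketch, one loop iteration per clock cycle (my own model of the ASM chart, not the original Verilog; the state and signal names follow the question):

#include <cstdio>

enum State { zero, wait1, one };

int main() {
    State state = wait1;
    int  q  = 3;                    // registered counter
    bool sw = true;

    while (state == wait1) {
        bool db_tick = false;       // default output each cycle
        State next = wait1;
        if (!sw) {
            next = zero;            // sw==0: go to state zero
        } else {
            int q_next = q - 1;     // computed within the same cycle
            if (q_next == 0) {      // conditional box, still this cycle
                db_tick = true;     // asserted for this one cycle
                next = one;
            }
            q = q_next;             // clock edge: q <= q_next
        }
        std::printf("q=%d db_tick=%d\n", q, db_tick);
        state = next;
    }
}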
Are there any clock cycles involved between these stages, that is, moving from a state to a conditional box?
Based on the diagram, all operations (comparing sw, subtracting from q, etc.) within a given state - that is, one of the dotted-line boxes - are performed in a single clock cycle.
Also, in the Verilog code, we use q_load and q_tick to control the counter. Why are these signals used when we could simply control the counter in the states? Is it done to make sure that the FSM (control path) controls the counter (data path)?
You could do it that way too. Just be sure to assign a value to q_next in the default case to prevent a latch. Splitting the data path and control path into separate always blocks/assign statements does improve readability, though, IMO.

What are the C++11 memory ordering guarantees in this corner case?

I'm writing some lock-free code, and I came up with an interesting pattern, but I'm not sure if it will behave as expected under relaxed memory ordering.
The simplest way to explain it is using an example:
std::atomic<int> a, b, c;
auto a_local = a.load(std::memory_order_relaxed);
auto b_local = b.load(std::memory_order_relaxed);
if (a_local < b_local) {
    auto c_local = c.fetch_add(1, std::memory_order_relaxed);
}
Note that all operations use std::memory_order_relaxed.
Obviously, on the thread that this is executed on, the loads for a and b must be done before the if condition is evaluated.
Similarly, the read-modify-write (RMW) operation on c must be done after the condition is evaluated (because it's conditional on that... condition).
What I want to know is, does this code guarantee that the value of c_local is at least as up-to-date as the values of a_local and b_local? If so, how is this possible given the relaxed memory ordering? Is the control dependency together with the RMW operation acting as some sort of acquire fence? (Note that there's not even a corresponding release anywhere.)
If the above holds true, I believe this example should also work (assuming no overflow) -- am I right?
std::atomic<int> a(0), b(0);

// Thread 1
while (true) {
    auto a_local = a.fetch_add(1, std::memory_order_relaxed);
    if (a_local >= 0) { // Always true at runtime
        b.fetch_add(1, std::memory_order_relaxed);
    }
}

// Thread 2
auto b_local = b.load(std::memory_order_relaxed);
if (b_local < 777) {
    // Note that fetch_add returns the pre-increment value
    auto a_local = a.fetch_add(1, std::memory_order_relaxed);
    assert(b_local <= a_local); // Is this guaranteed?
}
On thread 1, there is a control dependency which I suspect guarantees that a is always incremented before b is incremented (but they each keep being incremented neck-and-neck). On thread 2, there is another control dependency which I suspect guarantees that b is loaded into b_local before a is incremented. I also think that the value returned from fetch_add will be at least as recent as any observed value in b_local, and the assert should therefore hold. But I'm not sure, since this departs significantly from the usual memory-ordering examples, and my understanding of the C++11 memory model is not perfect (I have trouble reasoning about these memory ordering effects with any degree of certainty). Any insights would be appreciated!
Update: As bames53 has helpfully pointed out in the comments, given a sufficiently smart compiler, it's possible that an if could be optimised out entirely under the right circumstances, in which case the relaxed loads could be reordered to occur after the RMW, causing their values to be more up-to-date than the fetch_add return value (the assert could fire in my second example). However, what if instead of an if, an atomic_signal_fence (not atomic_thread_fence) is inserted? That certainly can't be ignored by the compiler no matter what optimizations are done, but does it ensure that the code behaves as expected? Is the CPU allowed to do any re-ordering in such a case?
The second example then becomes:
std::atomic<int> a(0), b(0);

// Thread 1
while (true) {
    auto a_local = a.fetch_add(1, std::memory_order_relaxed);
    std::atomic_signal_fence(std::memory_order_acq_rel);
    b.fetch_add(1, std::memory_order_relaxed);
}

// Thread 2
auto b_local = b.load(std::memory_order_relaxed);
std::atomic_signal_fence(std::memory_order_acq_rel);
// Note that fetch_add returns the pre-increment value
auto a_local = a.fetch_add(1, std::memory_order_relaxed);
assert(b_local <= a_local); // Is this guaranteed?
Another update: After reading all the responses so far and combing through the standard myself, I don't think it can be shown that the code is correct using only the standard. So, can anyone come up with a counter-example of a theoretical system that complies with the standard and also fires the assert?
Signal fences don't provide the necessary guarantees (well, not unless 'thread 2' is a signal handler that actually runs on 'thread 1').
To guarantee correct behavior we need synchronization between threads, and the fence that does that is std::atomic_thread_fence.
Let's label the statements so we can diagram various executions (with thread fences replacing signal fences, as required):
// Thread 1
while (true) {
    auto a_local = a.fetch_add(1, std::memory_order_relaxed); // A
    std::atomic_thread_fence(std::memory_order_acq_rel);      // B
    b.fetch_add(1, std::memory_order_relaxed);                // C
}

// Thread 2
auto b_local = b.load(std::memory_order_relaxed);             // X
std::atomic_thread_fence(std::memory_order_acq_rel);          // Y
auto a_local = a.fetch_add(1, std::memory_order_relaxed);     // Z
So first let's assume that X loads a value written by C. The following paragraph specifies that in that case the fences synchronize and a happens-before relationship is established.
29.8/2:
A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
And here's a possible execution order where the arrows are happens-before relations.
Thread 1: A₁ → B₁ → C₁ → A₂ → B₂ → C₂ → ...
                      ↘
Thread 2:               X → Y → Z
If a side effect X on an atomic object M happens before a value computation B of M, then the evaluation B shall take its value from X or from a side effect Y that follows X in the modification order of M. — [C++11 1.10/18]
So the load at Z must take its value from A₁ or from a subsequent modification. Therefore the assert holds because the value written at A₁ and at all later modifications is greater than or equal to the value written at C₁ (and read by X).
Now let's look at the case where the fences do not synchronize. This happens when the load of b does not read a value written by thread 1, but instead reads the value that b was initialized with. There's still synchronization where the thread starts, though:
30.3.1.2/5
Synchronization: The completion of the invocation of the constructor synchronizes with the beginning of the invocation of the copy of f.
This specifies the behavior of std::thread's constructor. So (assuming the thread creation is correctly sequenced after the initialization of a) the value read by Z must come from the initialization of a or from one of the subsequent modifications on thread 1, which means that the assertion still holds.
This example gets at a variation of reads-from-thin-air behavior. The relevant discussion in the spec is in section 29.3p9-11. Since the current version of the C11 standard doesn't guarantee that dependencies are respected, the memory model allows the assertion to fire. The most likely situation is that the compiler optimizes away the check that a_local >= 0. But even if you replace that check with a signal fence, CPUs would be free to reorder those instructions.
You can test such code examples under the C/C++11 memory models using the open source CDSChecker tool.
The interesting issue with your example is that for an execution to violate the assertion, there has to be a cycle of dependencies. More concretely:
The b.fetch_add in thread 1 depends on the a.fetch_add in the same loop iteration due to the if condition. The a.fetch_add in thread 2 depends on b.load. For an assertion violation, T2's b.load has to read from a b.fetch_add in a later loop iteration than the one T2's a.fetch_add interacts with. Now consider the b.fetch_add that the b.load reads from, and call it # for future reference. We know that b.load depends on #, as it takes its value from #.
We know that # must depend on T2's a.fetch_add, as T2's a.fetch_add atomically reads and updates a prior a.fetch_add from T1 in the same loop iteration as #. So we know that # depends on the a.fetch_add in thread 2. That gives us a cycle of dependencies, which is plain weird, but allowed by the C/C++ memory model. The most likely way of actually producing that cycle is that the compiler figures out that a_local is always greater than or equal to 0, breaking the dependency. It can then do loop unrolling and reorder T1's fetch_adds however it wants.
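As a sketch of that transformation (hypothetical; no particular compiler is claimed to do this), using the a and b from the second example:

// Thread 1, after the compiler has proven that a_local >= 0 always
// holds and has removed the branch:
while (true) {
    a.fetch_add(1, std::memory_order_relaxed);
    b.fetch_add(1, std::memory_order_relaxed);
}

// Nothing orders these two relaxed RMWs any more, so after unrolling,
// the compiler (or the CPU) may effectively execute:
while (true) {
    b.fetch_add(1, std::memory_order_relaxed); // b now runs ahead of a
    a.fetch_add(1, std::memory_order_relaxed);
}
// Thread 2 can then observe a b value larger than the a value it
// increments, i.e. b_local > a_local, and the assert fires.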
After reading all the responses so far and combing through the standard myself, I don't think it can be shown that the code is correct using only the standard.
Unless you admit that non-atomic operations are magically safer and more ordered than relaxed atomic operations (which is silly), and that there is one semantics for the parts of C++ without atomics (and try_lock and shared_ptr::count) and another semantics for those features that don't execute sequentially, you also have to admit that no program at all can be proven correct, as non-atomic operations don't have an "ordering" and they are needed to construct and destroy variables.
Or you stop taking the standard text as the only word on the language and use some common sense, which is always recommended.
