assume() does not work for initial statement - verilog

For https://i.imgur.com/NCUjYmr.png , why isn't the signal "reset" assumed to be '1' initially? Does anyone have an idea why the assume does not work?

I found the solution. I am using temporal induction, which starts the proof from a state that is not necessarily the initial state of the system. Therefore, the signal "reset" is not assumed to be '1' initially.

Assumes act as assumptions only in a formal verification environment. In simulation-based verification, they behave like assert statements.
As per the LRM :
The immediate assume statement specifies that its expression is assumed to hold. For example, immediate assume statements can be used with formal verification tools to specify assumptions on design inputs that constrain the verification computation. When used in this way, they specify the expected behavior of the environment of the design as opposed to that of the design itself. In simulation, an immediate assume may behave as an immediate assert to verify that the environment behaves as assumed. A simulation tool shall provide the capability to check the immediate assume statement in this way.
Because of this, in your design the tool won't actually assume the value; it will only check whether the expected value is being driven.
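As a minimal sketch (module and signal names invented here, not taken from the linked design), an immediate assume on a reset input looks like this; a formal tool treats it as a constraint on the environment, while a simulator merely checks it:

module assume_reset_demo (
  input  logic clk,
  input  logic reset,
  input  logic d,
  output logic q
);
  always_ff @(posedge clk) begin
    if (reset) q <= 1'b0;
    else       q <= d;
  end

  // Immediate assume executed once at time 0: a formal tool constrains the
  // environment so that reset is 1 in the initial state, whereas a simulator
  // only checks the condition and flags an error if reset is not 1.
  initial assume (reset);
endmodule

Note that, as concluded above, temporal induction starts its induction step from an arbitrary state rather than the initial state, so an assumption tied to time 0 does not constrain that step.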

Related

Nested fragments UML understanding?

I have a UML book and I am reading about nested fragments. It shows an example of a nested fragment, but what I don't get is why it says "If the condition 'cancellation not successful' is true when entering this fragment (i.e. cancellation was unsuccessful), the interaction within this fragment is no longer executed".
What I learned before is that a condition should be true for the interaction to be executed. But in this case it says the opposite (they say it should be false to execute the interaction).
See the image: https://ibb.co/CmstLcX
I think this is simply a typo in the book. The diagram makes sense, but the text describes nonsense. While the condition is true, the messages in the loop will happen up to three times.
Maybe the author got confused, because the message immediately before the loop is Cancellation. I assume the loop guard is referring to the success of the Order cancellation message, not to this Cancellation.
By the way, reply messages need to have the name of the original message. Most people get this wrong (and so do some textbooks). I'll grant that the text on the reply message is often meant to be the return value. As such, it should be separated from the name of the message by a colon. In your case it is probably not necessary to repeat the message name, even though it is technically required. However, the colon is mandatory.
If it is the return value, all reply messages in the loops return the value Acceptance. I wonder how the guard can then evaluate to true after the first time. Maybe it is only showing the scenario for this case. That is perfectly OK; a sequence diagram almost never shows all possible scenarios. However, then the loop doesn't make sense. I guess the author didn't mean to return a specific value.
Or maybe it is the assignment target. In this case, it should look like this: Acceptance=Order Cancellation. Then the Acceptance attribute of the Dispatcher Workstation would be filled with whatever gets returned by the Order Cancellation message. Of course, then I would expect this attribute to be used in the guard, like this [not Acceptance].
A third possibility is that the author didn't mean synchronous communication and just wanted to send signals. The Acceptance signal could well contain an attribute Cancellation not successful. Then, of course, no filled arrows and no dashed lines.
I can even think of a fourth possibility. Maybe the author wanted to show the name of an out parameter of the called operation. But this would officially look like this: Order Cancellation(Acceptance). Again the name of the message could be omitted, but the round brackets are needed to make the intention clear.
I think the diagram leaves a lot more questions open than you asked.
It only says "If the cancellation was not successful, then" (do exception handling). This is pretty straightforward: if not false is the same as if true. Since it is within a break fragment, whatever is inside will be performed and, as a result, the fragment in which it is contained (usually some loop) will be broken out of. Having a break just stand alone seems a bit odd (not to say wrong).
UML 2.5 p. 581:
17.6.3.9 Break
The interactionOperator break designates that the CombinedFragment represents a breaking scenario in the sense that the operand is a scenario that is performed instead of the remainder of the enclosing InteractionFragment. A break operator with a guard is chosen when the guard is true and the rest of the enclosing Interaction Fragment is ignored. When the guard of the break operand is false, the break operand is ignored and the rest of the enclosing InteractionFragment is chosen. The choice between a break operand without a guard and the rest of the enclosing InteractionFragment is done non-deterministically.
A CombinedFragment with interactionOperator break should cover all Lifelines of the enclosing InteractionFragment.
Apart from that: you should not take fragments too seriously. Graphical programming is nonsense. Make only sparing use of such constructs, and only if they help in understanding certain behavior. Do not be tempted to re-document existing code this way. Code is much denser and easier to read. YMMV.

How to interpret the Result of msat LTL commands of NuXMV

I am using NuXMV to check LTL properties with the msat_check_ltlspec_bmc command on a fairly large model. The result shows no counterexample found within the given bounds. Do I interpret this as the property being true, or can it alternatively mean that the analysis is not complete?
I ask because, whether I change the property proposition to true or false, the result is always "no counterexample". Most of the results are counterintuitive.
I started with properties over real variables, but since I was unable to understand the results, I shifted to Boolean-based properties on the same model, using the same command.
Bounded Model Checking is a bug-oriented technique which checks the validity of a property on execution traces up to a given length k.
When an execution trace violates a property, great: a bug was found.
Otherwise (in the general case), the model checking result provides no useful information and should be treated as such.
In some cases, knowing additional information about the model can help. In particular, if one knows that every execution trace of length k must loop back to one of the previous k-1 states, then it is possible to draw stronger conclusions from the lack of counterexamples of length smaller than or equal to k.
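The same point can be made with a toy sketch (invented SystemVerilog, not the asker's nuXmv model): the only violation lies deeper than a shallow bound, so a bounded check up to that bound reports no counterexample even though the property does not hold.

module bmc_bound_demo (
  input logic clk,
  input logic rst
);
  logic [7:0] cnt;

  // Free-running counter, cleared by reset.
  always_ff @(posedge clk)
    if (rst) cnt <= '0;
    else     cnt <= cnt + 8'd1;

  // This property fails only on the cycle where cnt reaches 21, so a bounded
  // check with bound k < 21 finds no counterexample although the property is
  // false in general.
  a_deep_bug: assert property (@(posedge clk) disable iff (rst) cnt != 8'd21);
endmodule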

In a UML state machine, can an initial pseudostate have incoming transitions?

In UML 2.5.1, the initial pseudostate of a state machine is defined as follows:
An initial Pseudostate represents a starting point for a Region; that is, it is the point from which execution of its contained behavior commences when the Region is entered via default activation. It is the source for at most one Transition, which may have an associated effect Behavior, but not an associated trigger or guard. There can be at most one initial Vertex in a Region.
In other words, a UML state machine should almost always contain exactly one initial pseudostate, which should have exactly one outgoing transition.
However, can an initial pseudostate have incoming transitions as well? For example:
I cannot find anything forbidding it in the UML specification, yet I cannot find any example online where this case happens, so I was wondering whether I overlooked anything.
EDIT: To go into more detail, if we look into the OCL constraints stated in the specification, we can only find the following one that affects outgoing transitions (section 14.5.6.7):
inv: (kind = PseudostateKind::initial) implies (outgoing->size() <= 1)
but I cannot find any constraint regarding incoming transitions.
EDIT2: I have just realized that my model is wrong! Considering this sentence of the specification (cited above): "It is the source for at most one Transition, which may have an associated effect Behavior, but not an associated trigger or guard."
Therefore the transition between init and s1 should actually have zero triggers, instead of having e1 as a trigger.
Note, though, that this does not invalidate the initial question.
I see nothing in the UML 2.5.1 Specification that prohibits a transition whose target is the initial pseudostate.
Such a transition would be meaningless at best and confusing at worst, which is likely why no examples are found.
Edit: see the comments!
On p. 423 UML 2.5:
15.7.18 InitialNode [Class]
15.7.18.4 Constraints
• no_incoming_edges
An InitialNode has no incoming ActivityEdges.
inv: incoming->isEmpty()
N.B. If you intend to have a self-transition for e1, then why not just use that? The initial pseudostate can anyway have only a single outgoing edge, namely to the first state (here s1).
No, this is not allowed. And why would one do that? As you already stated in the cited text, it can only have one outgoing edge without any guard. So what is the added value, since you cannot reuse anything?
I think the text is pretty clear as-is: "[An initial Pseudostate] is the point from which execution of its contained behavior commences when the Region is entered via default activation." If you connect a transition back around to the initial pseudostate, the initial pseudostate is no longer "the point from which execution of its contained behavior commences," it is something else, and is therefore undefined.

Want help in understanding verilog constructs

I am new to Verilog, as this language requires thinking in terms of synthesis.
While working on a program I found that:
begin
  buf_inm[row][col] = temp_data;
  #1 mux_data = buf_inm[row][col];
end
gave correct results, whereas
begin
  buf_inm[row][col] = temp_data;
  mux_data = buf_inm[row][col];
end
did not, in terms of the assignment of the variables.
Can anybody explain the difference between these two?
In any other higher-level language, construct 2 (without the delay) would have given the correct assignments.
Thanking you,
Yours sincerely,
R. Ganesan.
First of all, #1 is a one-timescale-unit delay that you can use in simulation, but it is not synthesizable. When you say "gave correct results", it might help if you explained what results you are seeing, but my guess is that you assigned something to the two-dimensional vector, then assigned that to mux_data, and are expecting mux_data to be what temp_data was. Is it retaining old data? Is it undefined?
That said, I would say the difference is a synthesizable assignment versus a non-synthesizable one. In the first case you should see two state changes separated by one timescale unit. In the second case, because the assignments are blocking, both values should update in zero time, in the order written, top to bottom. If you aren't seeing that, something else is potentially afoot here. What tool are you using? Which tool version? Maybe you can share your full code and testbench so we can see why your results are unexpected.
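For reference, here is a minimal runnable sketch (array sizes and values invented, variable names borrowed from the question) showing that with plain blocking assignments the second statement already sees the value written by the first one in the same time step, without any #1:

module blocking_demo;
  logic [7:0] buf_inm [0:3][0:3];
  logic [7:0] temp_data, mux_data;

  initial begin
    temp_data     = 8'hA5;
    buf_inm[1][2] = temp_data;            // blocking write completes immediately
    mux_data      = buf_inm[1][2];        // reads the value written just above
    $display("mux_data = %h", mux_data);  // prints a5 at time 0
  end
endmodule

If the original design does not behave like this without the #1, the root cause is more likely a race between processes or a mix of blocking and non-blocking assignments elsewhere than the two statements shown.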

Misuse of a variable's value?

I came across an instance where the solution to a particular problem was to use a variable whose value, when zero or above, meant the system would use that value in a calculation, but which, when less than zero, indicated that the value should not be used at all.
My initial thought was that I didn't like the multipurpose use of the variable's value: a) as a value to be used in a formula; b) as a form of control logic.
What is this kind of misuse of a variable called? Meta-'something' or is there a classic antipattern that this fits?
Sort of feels like when a database field is set to null to represent not using a value and if it's not null then use the value in that field.
Update:
An example would be: if a variable's value is > 0, I use the value; if it's <= 0, I do not use the value and instead perform some other logic.
Values such as these are often called "distinguished values". By far the most common distinguished value is null for reference types. A close second is the use of distinguished values to indicate unusual conditions (e.g. error return codes or search failures).
The problem with distinguished values is that all client code must be aware of the existence of such values and their associated semantics. In practical terms, this usually means that some kind of conditional logic must be wrapped around each call site that obtains such a value. It is far too easy to forget to add that logic, obtaining incorrect results. It also promotes copy-and-paste code as the boilerplate code required to deal with the distinguished values is often very similar throughout the application but difficult to encapsulate.
Common alternatives to the use of distinguished values are exceptions, or distinctly typed values that cannot be accidentally confused with one another (e.g. Maybe or Option types).
Having said all that, distinguished values may still play a valuable role in environments with extremely tight memory availability or other stringent performance constraints.
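To stay with the hardware-flavoured code used elsewhere in this document, here is a hedged SystemVerilog sketch (all names invented) contrasting a -1 sentinel, which every caller must remember to check, with a result type that carries an explicit validity flag in the spirit of a Maybe/Option type:

// Style 1: distinguished value -- the return overloads -1 to mean "not found".
function automatic int find_index_sentinel(input byte data[], input byte key);
  foreach (data[i])
    if (data[i] == key) return i;
  return -1;  // every caller must know about and check this special value
endfunction

// Style 2: the validity travels with the value, so it cannot be silently
// mistaken for a usable index.
typedef struct packed {
  bit valid;
  int index;
} find_result_t;

function automatic find_result_t find_index_checked(input byte data[], input byte key);
  find_result_t r = '0;  // valid = 0, index = 0
  foreach (data[i])
    if (data[i] == key) begin
      r.valid = 1'b1;
      r.index = i;
      return r;
    end
  return r;
endfunction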
I don't think what you're describing is a pure magic number, but it's kind of close. It's similar to the situation in pre-.NET 2.0 code where you'd use Int32.MinValue to indicate a null value. .NET 2.0 introduced Nullable and kind of alleviated this issue.
So you're describing the use of a variable whose value really means something other than its value -- -1 means essentially the same as the use of Int32.MinValue as I described above.
I'd call it a magic number.
Hope this helps.
Using different ranges of the possible values of a variable to invoke different functionality was very common when RAM and disk space for data and program code were scarce. Nowadays, you would use a function or an additional, accompanying value (boolean, or enumeration) to determine the action to take.
Current OSes suggest 1 GiB of RAM to operate correctly, whereas 256 KiB was considered a lot not so many years ago. Cheap disk space has gone from hundreds of MiB to multiples of TiB in a matter of months. Not too long ago I wrote programs for 640 KiB of RAM and 10 MiB of disk, and you would probably hate them.
I think it is reasonable to cope with code like that if it's a few years old (refactor it!), but to denounce it as bad practice if it's recent.

Resources