Is PlantUML capable of representing finite state machine diagrams with parallel states?

I have a finite state machine diagram which I'd like to document with PlantUML, but the state machine diagram employs parallel states to represent how multiple event sources evolve independently except in very particular cases. Does PlantUML provide a way to represent parallel states?

Yes, the PlantUML documentation shows that concurrent (parallel) states are supported. Concurrent regions inside a composite state are separated with --:
@startuml
[*] --> Active
state Active {
  [*] -> NumLockOff
  NumLockOff --> NumLockOn : EvNumLockPressed
  NumLockOn --> NumLockOff : EvNumLockPressed
  --
  [*] -> CapsLockOff
  CapsLockOff --> CapsLockOn : EvCapsLockPressed
  CapsLockOn --> CapsLockOff : EvCapsLockPressed
  --
  [*] -> ScrollLockOff
  ScrollLockOff --> ScrollLockOn : EvScrollLockPressed
  ScrollLockOn --> ScrollLockOff : EvScrollLockPressed
}
@enduml
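If you prefer the regions to sit side by side rather than stacked, PlantUML also accepts || as the separator between concurrent regions; a minimal variation of the example above:

@startuml
[*] --> Active
state Active {
  [*] -> NumLockOff
  NumLockOff --> NumLockOn : EvNumLockPressed
  ||
  [*] -> CapsLockOff
  CapsLockOff --> CapsLockOn : EvCapsLockPressed
}
@enduml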

Related

No more satisfying instances

No examples are generated for my Alloy model, with the error message: 'No more satisfying instances' (see image attached).
I have created the following small model in Alloy:
sig System
{
  subSystem : System
}

// Prevent a subsystem from directly including itself
fact noDirectInclusion
{
  no s : System | s in s.subSystem
}

// Prevent a subsystem from transitively including itself
fact noTransitiveInclusion
{
  no s : System | s in s.^subSystem
}

pred show {}
run show for 5
The fact 'noDirectInclusion' nicely prevents the generation of examples where a subsystem is a subsystem of itself.
I am probably missing something trivial, but when I also use the fact 'noTransitiveInclusion', no examples are generated any more and I get the error message: 'No more satisfying instances' (see image attached).
What am I missing?
Try to draw the graph by hand for only 2 System atoms ...
You will see that with the constraints you specified in the System sig you can only make a cycle ... You force every System to have exactly one subSystem (the default multiplicity for a field is one). Therefore, with a finite set of atoms the subSystem graph can only be a cycle, and that invalidates your fact.
Make subSystem either lone or set.
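A minimal sketch of that change using lone (use set instead if a System may contain several sub-systems):

sig System
{
  subSystem : lone System  -- at most one sub-system, so leaf systems are now allowed
}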

Enterprise Architect: Model a simple ECU

I've used Enterprise Architect (EA) to create pretty drawings and I've liked it for that purpose. I find that I have a lack of understanding on how diagram elements link between one another. What I find particularly frustrating is that there is very little documentation on how this linking works (although lots of documentation on how to draw pictures).
I would like to create a model of a simple processor/ECU (electronic control unit). Here is the behaviour:
An ECU has an instance of NVRAM (which is just a class) as an attribute
An ECU has a voltage supply (an analog value representing the voltage level supplied to the ECU)
An ECU has two digital input ports
Each digital input port fires signals when its value changes
The ECU has a state machine with three states; the state machine enters state 1 on entry; the state machine transitions to state 2 on a firing of either digital input port, as long as the ECU voltage supply is greater than 10 V
The ECU exits to state 3 when Voltage drops below 8 V, and goes back to normal processing when Voltage rises above 9 V (these transitions are sketched just after this list)
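To pin the transitions down, here is a rough sketch in PlantUML state-diagram notation (not an EA model; treating "normal processing" as a return to state 1 is an assumption on my part):

@startuml
[*] --> State1
State1 --> State2 : digital input fired [Voltage > 10 V]
State1 --> State3 : [Voltage < 8 V]
State2 --> State3 : [Voltage < 8 V]
State3 --> State1 : [Voltage > 9 V]
@enduml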
Can you develop a model that demonstrates how these elements interact? (Is there some reference I can read on how to understand this approach?)
Here's my first attempt:
State Machine
I used a composite diagram in the ECU state so that I could have access to the digital ports diagrammatically. I created a link for each port so that they "realize" class input PIn. I assume I can depict class attributes this way.
I "create a link" so that the DIO triggers realize the DIO ports. I am not sure I can do this.
The class state machine is where I get lost. I am not sure how to create a trigger for ECU.Voltage < 8.

How do I properly handle DDC metadata and settings?

Using REDHAWK version 2.0.5: given a CHANNELIZER centered at 300MHz and a DDC attached to the CHANNELIZER centered at 301MHz, the DDC is set relative to the CHANNELIZER, and in this case the DDC is centered at a 1MHz offset from the CHANNELIZER.
A) How should I present the DDC center frequency to a user in the frontend tuner status and allocation? For example, would they enter 1MHz or 301MHz to set the center frequency for the DDC? Currently I am using the latter.
B) In version 2.1.0 of the REDHAWK manual in section F.5.2 it says the COL_RF SRI keyword is the center frequency of the collector and the CHAN_RF is the center frequency of the stream. In the above case, I set COL_RF to 300MHz and CHAN_RF to 301MHz but the REDHAWK IDE plots center at 300MHz for the DDC. Should the CHAN_RF be a relative value such as 1MHz? Currently, at 301MHz, the IDE plots appear to center at the COL_RF frequency of 300MHz.
C) When the CHANNELIZER center frequency changes, I only set the valid field in the allocation to false on attached DDCs. Is there any other special bookkeeping that needs to be done when this happens?
D) Should disabling or enabling the output from the CHANNELIZER also disable or enable the output for the attached DDCs?
E) Must deallocating the CHANNELIZER force all DDCs that are attached to deallocate?
A) All external interfaces (allocation, FrontendTuner ports, status properties, etc.) assume RF values, not IF values or offsets. Allocate or tune to 301MHz in order to center a DDC at 301MHz. The center_frequency field of the frontend_tuner_status property should be set to 301MHz for that DDC.
B) Your understanding of how to use COL_RF (300MHz) and CHAN_RF (301MHz) is correct. You may be able to work around this by reordering the SRI keywords to have CHAN_RF appear first, if necessary.
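For illustration only, a rough sketch of how those keywords might be pushed from a REDHAWK Python device (the stream ID, the dataShort_out port name, and the use of bulkio.sri.create and props_from_dict are assumptions about the implementation, placed inside something like the device's serviceFunction):

from ossie.properties import props_from_dict
import bulkio

# DDC at 301MHz riding on a CHANNELIZER at 300MHz (values from the question)
sri = bulkio.sri.create("ddc_stream_1")
sri.keywords = props_from_dict({
    "COL_RF":  300.0e6,   # collector (CHANNELIZER) RF center frequency in Hz
    "CHAN_RF": 301.0e6,   # stream (DDC) RF center frequency in Hz
})
self.port_dataShort_out.pushSRI(sri)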
For (C) and (D), there are some design decisions that are left up to the developer since the implementation, as well as the hardware (if any), may impact those decisions. Here are some recommendations, though.
C) In general, if at any point the DDCs become invalid, they should be marked as such. It is possible to retune a CHANNELIZER by a small amount such that one or more DDCs still fall within the frequency range and remain valid, but that may also be hardware dependent. Additionally, it's recommended that DDCs only produce data when both enabled AND valid, so if marking invalid you may also want to stop producing data from the invalid DDCs.
D) CHANNELIZER and RX_DIGITIZER_CHANNELIZER tuners both have wideband input and narrowband DDC output. Some implementations of an RX_DIGITIZER_CHANNELIZER may have the ability to produce wideband digital output of the analog input (acting as an RX_DIGITIZER). In this scenario, the RX_DIGITIZER_CHANNELIZER output enable/disable controls the wideband output, while the DDCs output enable remain independently controlled. The behavior of a CHANNELIZER, which does not produce wideband output, is left as a design decision for the developer. For behavior consistent with RX_DIGITIZER_CHANNELIZER tuners, it's recommended that the DDCs remain independently controlled. Note that the enable for a tuner is specifically the output enable, and not an overall enable/disable of the tuner itself. For that reason, it's recommended that the enable for a CHANNELIZER not affect the data flow to the DDCs since that data flow is internal to the device. Again, this is all up to the developer and these are just recommendations since the spec leaves it open.
E) Yes, deallocating a CHANNELIZER should result in deallocation of all associated DDCs.

Communications in MPMD MPI executions

This post is related to a previous post, binding threads to certain MPI processes, where it was asked how MPI ranks could be assigned different numbers of OpenMP threads. One possibility is as follows:
$ mpiexec <global parameters>
-n n1 <local parameters> executable_1 <args1> :
-n n2 <local parameters> executable_2 <args2> :
...
-n nk <local parameters> executable_k <argsk>
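For instance, with made-up executable names and process counts, such a launch could look like:

$ mpiexec -n 4 ./flow_solver : -n 2 ./particle_tracer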
What I don't know is how the independent instances executable_1, executable_2, ..., executable_k communicate with each other. I mean, if at some point during execution they need to exchange data, do they use an inter-communicator (among instances) and an intra-communicator (within the same instance, for example executable_1)?
Thanks.
All processes launched as a result of that command form a single MIMD/MPMD MPI job, i.e. they share the same world communicator. The first n1 ranks are running executable_1, the following n2 ranks are running executable_2, etc.
rank                                     | executable
-----------------------------------------+---------------
0 .. n1-1                                | executable_1
n1 .. n1+n2-1                            | executable_2
n1+n2 .. n1+n2+n3-1                      | executable_3
....                                     | ....
n1+n2+n3+..+n(k-1) .. n1+n2+n3+..+nk-1   | executable_k
The communication happens simply by sending messages in MPI_COMM_WORLD. The separate executables do not automatically form communicator groups of their own. This is what distinguishes MPMD from starting child jobs with MPI_Comm_spawn: child jobs have their own world communicators and one uses intercommunicators to talk to them, while the separate sub-jobs in an MIMD/MPMD job do not.
It is still possible for a rank to find out to which application context it belongs by querying the MPI_APPNUM attribute of MPI_COMM_WORLD. This makes it possible to create a separate sub-communicator for each context (the different contexts being the commands separated by :) by simply performing a split using the appnum value as colour:
int *appnum, present;

/* MPI_APPNUM is the zero-based index of the application context,
   i.e. the position of this command in the ':'-separated list */
MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &present);
if (!present)
{
    printf("MPI_APPNUM is not provided!\n");
    MPI_Abort(MPI_COMM_WORLD, 0);
}

/* one sub-communicator per application context */
MPI_Comm appcomm;
MPI_Comm_split(MPI_COMM_WORLD, *appnum, 0, &appcomm);
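Each rank then also has a communicator that spans only the processes of its own executable; a small illustrative continuation:

int world_rank, app_rank, app_size;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_rank(appcomm, &app_rank);
MPI_Comm_size(appcomm, &app_size);
printf("world rank %d is rank %d of %d in application %d\n",
       world_rank, app_rank, app_size, *appnum);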

How to merge Hoopl graph blocks / how to pass through the blocks

I'm trying to introduce Hoopl into a compiler and have run into a problem: building a graph for Hoopl makes the nodes appear in the order in which their labels were introduced. E.g.:
(define (test) (if (eq? (random) 1 ) 2 (if (eq? (random) 2 ) 3 0) ) )
"compiles" to
L25: call-direct random -> _tmp7_6
branch L27
L26: return RETVAL
L27: iconst 1 _tmp8_7
branch L28
L28: call-direct eq? _tmp7_6, _tmp8_7 -> _tmp4_8
branch L29
L29: cond-branch _tmp4_8 L30 L31
L30: iconst 2 RETVAL
branch L26
L31: call-direct random -> _tmp12_10
branch L32
L32: iconst 2 _tmp13_11
branch L33
L33: call-direct eq? _tmp12_10, _tmp13_11 -> _tmp9_12
branch L34
L34: cond-branch _tmp9_12 L36 L37
L35: assign RETVAL _tmp6_15
branch L26
L36: iconst 3 _tmp6_15
branch L35
L37: iconst 0 _tmp6_15
branch L35
The order of the instructions (in the order of showGraph) is strange because of the order of the recursive graph building from the AST. In order to generate code I need to reorder the blocks in a more natural way, say place return RETVAL at the end of the function, and merge blocks like this
branch Lx:
Lx: ...
into one block, and so on. It seems that I need something like:
block1 = get block
Ln = get_last jump
block2 = find block Ln
if (some conditions)
    remove block2
    replace block1 (merge block1 block2)
I'm totally confused about how to perform this with Hoopl. Of course, I could dump all the nodes and then perform the transformations outside the Hoopl framework, but I believe that this is a bad idea.
Can someone give me a clue? I did not find any useful examples. Something similar is performed in the Lambdachine project, but it seems too complicated.
There is also another question: is there any point in making all Call instructions non-local? What is the point of this, considering that the implementation of Call does not change any local variables and always transfers control to the next instruction of the block? If Call instructions are defined like
data Insn e x where
  Call :: [Expr] -> Expr -> Label -> Insn O C  -- last instruction of the block
that causes the graph to look even more strange. So I use
  -- what is the difference from any other primitive, like "add a b -> c"?
  Call :: [Expr] -> Expr -> Label -> Insn O O
Maybe I'm wrong about this?
It is possible to implement the "block merging" using HOOPL. Your question is too generic, so I'll give you a plan:
1. Determine what analysis type this optimization requires (either forward or backward)
2. Design the analysis lattice
3. Design the transfer function
4. Design the rewriting function
5. Create a pass
6. Merge the pass with other passes of the same direction so they interleave
7. Run the pass using fuel
8. Convert the optimized graph back to the form you need
With which stages do you have problems? Steps 1 and 2 should be rather straightforward once you've read the papers.
You should also understand the general concept of a basic block: why instructions are merged into blocks, why the control flow graph consists of blocks and not of individual instructions, and why analysis is performed on blocks and not on individual instructions.
Your rewrite function should use the facts to rewrite the last node in the block. So the fact lattice should include not only "information about reachability", but also the destination blocks themselves.
I've found and tried a couple of ways to do the trick:
Using foldBlockNodesF3 function or other foldBlockNodes... functions
Using preorder_dfs* functions (like in Lambdachine project)
Build the graph with larger blocks from the start
The last option is not useful for me, because FactBase is keyed by labels, so every instruction that changes the liveness of the variables needs a label to be usable in the subsequent analysis.
So, my final solution is to use the foldBlockNodesF3 function to linearize the graph and delete the extra labels manually, with simultaneous register allocation.
