FSM in a graphical function - Stateflow

Is it possible to put a state inside a graphical function?
We know that we can make graphical functions in MATLAB.
Can we build an FSM in a graphical function, i.e. put states in graphical functions?

No, it is not possible to place a state directly inside a graphical function.
My ideas:
Build your own state logic in the action language, without any Stateflow states.
Check your design :-)
This workaround should work: Stateflow chart -> graphical function -> Simulink function -> Stateflow block.
But be aware of correct block triggering.

Related

Converting UML State Machine to SCL?

I would like to know if I can program a PLC with a state machine/diagram.
With the help of Sparx EA we can compose our state machine. Is there any chance to convert this state machine into SCL (Structured Control Language, used in PLC programming)? Or what kind of data can we take from Sparx EA that we can use as input for the PLC programming?
Or maybe you have a better idea of how to realize this.
Sure. You need a code generator tool that can read the state machine diagram, and generate equivalent Structured Text.
The shape of the code is pretty straightforward. You can define an ST boolean for each state (if you can have live parallel states, as in StateCharts) or an ST integer containing a state number.
The ST code for each state is then:
if (StateXXX) then
    <action in this state>
    if (somecondition) then
        StateXXX=false;
        StateYYY=true;
    endif
endif
You need to generate this code for each state.
That leaves the question of what tool do you use to accomplish this?
Arguably any tool that can read the UML diagram will do; the diagram is generally exportable as an XML document from UML editors, and with the parsed XML you can write code to climb over it and spit out the above code fragments.
This is perhaps easier if you generate the code fragments from well-defined templates. You can use ad hoc templates (simply text strings containing markers where something has to be filled in), or you can use a tool that enforces the structure and composition of generated code, such as a Program Transformation System (PTS).
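As a sketch of the ad hoc-template route, here is made-up illustration code (this is not DMS and not a real generator; the record and function names are invented) that fills in the markers of the ST-style fragment shown above:

```haskell
-- Hypothetical ad hoc template generator for the ST-style fragment above.
-- All names (State, Transition, emitState, ...) are made up for illustration.
data Transition = Transition { guardCond :: String, targetName :: String }
data State = State { stateName :: String, stateAction :: String, stateOuts :: [Transition] }

-- Emit one if-block per state, in the shape shown in the answer.
emitState :: State -> String
emitState s = unlines $
     [ "if (State" ++ stateName s ++ ") then"
     , "    " ++ stateAction s ]
  ++ concatMap emitTrans (stateOuts s)
  ++ [ "endif" ]
  where
    emitTrans t =
      [ "    if (" ++ guardCond t ++ ") then"
      , "        State" ++ stateName s ++ "=false;"
      , "        State" ++ targetName t ++ "=true;"
      , "    endif" ]

-- Generate the whole machine by concatenating the per-state blocks.
emitMachine :: [State] -> String
emitMachine = concatMap emitState
```

The pure-text approach is exactly the "any garbage you like" case discussed next: nothing checks that the emitted strings are syntactically valid ST, which is the gap a PTS closes.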
A PTS accepts the grammar for a language, will parse instances of that language, lets you transform that language, and finally spits out the modified language instance. A useful special case is transforming, if you like, a trivial program into a complex, real program. In addition, a good PTS will let you write patterns and transformation rules in terms of formal code templates that enforce at least the syntax of the template to be valid. This ensures that the pieces you work with always make a certain minimum amount of sense. (In contrast, you can write any garbage you like in a text template.) When you write lots of such patterns, this is pretty helpful for avoiding producing junk.
For this particular example, for (my company's PTS called DMS, see bio) you can write patterns for the above fragment:
pattern StateInstance(statenumber: natural, action: statements, exit_condition: expression, exit_state: natural): statement =
" if (StateNumber=\statenumber) then
\action
if (\exit_condition) then
StateNumber=\exit_state
endif
endif
";
DMS provides APIs to instantiate this pattern (and others; you typically write many) and compose their results (using instantiated patterns as arguments to other patterns to instantiate) to produce the final program. You can also add transformation rules to optimize the generated code. (DMS is driven by grammar definitions; it already knows 40+ languages and in particular has robust definitions for ST and for XML.)
I have never really programmed S7, but I basically know what you are looking for. EA does not have a generator for SCL, and chances are low that one will come from Sparx. So there are two possibilities.
First (but not preferred by me) is to delve into the guts of the Sparx macro language which is used during code generation. If you just need minor adaptation for existing templates that's fine, but writing a complete new one is no fun (for me).
The second way is to use the API for code generation. This is fairly easy (well, for me since I studied compiler construction at university). What you would do is to take the state machine, traverse it and spit out the according language constructs. It heavily depends on your skills, but I'd create a rough prototype in a couple of days.
Edit: Here is a sample Perl script (I know it's a PITA if you don't use it for a week or so, but you can likely decipher it) that parses a state machine using EA's API:
package Compiler;
use strict;
use Win32::OLE qw (in);
sub new {
my ($self, $rep) = @_;
$self = {};
$self->{nodes} = {};
$self->{rep} = $rep;
bless $self;
}
sub traverse {
my ($self, $node) = @_;
my $guid = $node->ElementGUID;
return if defined($self->{nodes}->{$guid});
my $nodeInfo = { 'name' => $node->Name, 'type'=> $node->Type, 'out' => ()};
$self->{nodes}->{$guid} = $nodeInfo;
for my $trans (in $node->Connectors) {
my $target = $self->{rep}->GetElementByID($trans->SupplierID);
next if $target->ElementGUID eq $guid;
my @targetInfo = ($trans->TransitionGuard, $target->ElementGUID);
push(@{$nodeInfo->{out}}, \@targetInfo);
$self->traverse($target);
}
}
1;
and a simple main program like this:
use strict;
no strict 'refs';
use Compiler;
my $rep = $ENV{'REP'}; # get repository pointer "by magic"
my $node = $rep->GetElementByGUID('{574C5E0C-E032-44c6-A6B0-783D35B9958B}'); # fixed addressing of InitialNode
my $compiler = Compiler->new($rep);
$compiler->traverse($node); # read in all possible transitions/states
my %states = %{$compiler->{nodes}}; # this hash holds all states and their transitions
for my $key (keys %states) {
my $state = $states{$key}; # loop through all found states
print "$state->{type} $state->{name}\n"; # state name
for my $out (@{$state->{out}}) {
my ($guard, $guid) = @{$out};
my $target = $compiler->{nodes}->{$guid};
print "__$guard -> $target->{name}\n";
}
}
Now assume you have a state machine like this:
When you run the above program it will print
StateNode
StateNode
__no condition -> State1
State State1
__condition -> State2
__exit ->
State State2
__other condition -> State1
The first StateNode is the unnamed exit and the next is the InitialNode (you could also get that info from the API and use it). State1 has two possible transitions (to exit and to State2). And State2 only transitions to State1.
Now, with the list of named states you can create some enumeration for your different states. Also you have the guards for all transitions which you can transform into if-cascades or switch-statements.
Of course this is not a complete code generator, but you can get the idea how to make one from this scaffold.
If you are using Siemens PLCs, then S7 had an optional software package called S7-GRAPH: http://w3.siemens.com/mcms/simatic-controller-software/en/step7/simatic-s7-graph/Pages/Default.aspx. You can implement state machines there. But I don't know of any import options for it.
I used it for some equipment that was controlled as state machines. That software package was not free; I don't remember its price. I also don't know whether all S7 PLC families supported it. I used the 400 series and it worked there.
Ask your local Siemens distributor to let you play with it a bit before using it in any project.
I have written a template in EA to generate PLCopen code from class diagrams and state machines into ST/SCL code (IEC 61131, in TwinCAT/CODESYS).
The class diagram is used to describe the structure of the program (FBs, DUTs).
The state machine is used to describe the dynamic behaviour of the PLC program.
So a whole OOP PLC program (with interfaces and inheritance) is generated automatically from the UML model.
https://www.youtube.com/watch?v=z071cZgMbZ8
Here I have created a new toolbox, specific to modelling PLC programs in EA:
https://www.youtube.com/watch?v=RfDxkq_hDvw&t=2s

FRP - Event streams and Signals - what is lost in using just signals?

In recent implementations of Classic FRP, for instance reactive-banana, there are event streams and signals, which are step functions (reactive-banana calls them behaviours, but they are nevertheless step functions). I've noticed that Elm only uses signals, and doesn't differentiate between signals and event streams. Also, reactive-banana allows going from event streams to signals (edited: and it's sort of possible to act on behaviours using reactimate', although it is not considered good practice), which kind of means that in theory we could apply all the event stream combinators on signals/behaviours by first converting the signal to an event stream, applying, and then converting again. So, given that it's in general easier to use and learn just one abstraction, what is the advantage of having separate signals and event streams? Is anything lost in using just signals and converting all the event stream combinators to operate on signals?
edit: The discussion has been very interesting. The main conclusions I took from it myself are that behaviours/event sources are both needed for mutually recursive definitions (feedback) and for having an output depend on two inputs (a behaviour and an event source) but only cause an action when one of them changes (<@>).
(Clarification: In reactive-banana, it is not possible to convert a Behavior back to an Event. The stepper function is a one-way ticket. There is a changes function, but its type indicates that it is "impure" and it comes with a warning that it does not preserve the semantics.)
I believe that having two separates concepts makes the API more elegant. In other words, it boils down to a question of API usability. I think that the two concepts behave sufficiently differently that things flow better if you have two separate types.
For example, the direct product for each type is different. A pair of Behavior is equivalent to a Behavior of pairs
(Behavior a, Behavior b) ~ Behavior (a,b)
whereas a pair of Events is equivalent to an Event of a direct sum:
(Event a, Event b) ~ Event (EitherOrBoth a b)
If you merge both types into one, then neither of these equivalences will hold any more.
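The two product equivalences can be witnessed explicitly in a toy list/function model of the semantics (these are not the reactive-banana types; Time, pairB, mergeE, and EitherOrBoth's constructors are made up for illustration):

```haskell
-- Toy semantic model, not the reactive-banana API.
type Time = Int
type Behavior a = Time -> a     -- defined at every time
type Event a = [(Time, a)]      -- time-stamped occurrences, increasing times

-- (Behavior a, Behavior b) ~ Behavior (a, b): both directions.
pairB :: (Behavior a, Behavior b) -> Behavior (a, b)
pairB (ba, bb) = \t -> (ba t, bb t)

unpairB :: Behavior (a, b) -> (Behavior a, Behavior b)
unpairB b = (fst . b, snd . b)

-- (Event a, Event b) ~ Event (EitherOrBoth a b):
-- simultaneous occurrences merge into a Both.
data EitherOrBoth a b = L a | R b | Both a b deriving (Eq, Show)

mergeE :: (Event a, Event b) -> Event (EitherOrBoth a b)
mergeE (ea, eb) = go ea eb
  where
    go [] ys = [ (t, R y) | (t, y) <- ys ]
    go xs [] = [ (t, L x) | (t, x) <- xs ]
    go xs@((tx, x) : xs') ys@((ty, y) : ys')
      | tx < ty   = (tx, L x)      : go xs' ys
      | tx > ty   = (ty, R y)      : go xs  ys'
      | otherwise = (tx, Both x y) : go xs' ys'
```

Note that the event product is a sum type precisely because two streams rarely fire at the same instant; a merged single type would have to bolt one of these shapes onto the other.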
However, one of the main reasons for the separation of Event and Behavior is that the latter does not have a notion of changes or "updates". This may seem like an omission at first, but it is extremely useful in practice, because it leads to simpler code. For instance, consider a monadic function newInput that creates an input GUI widget that displays the text indicated in the argument Behavior,
input <- newInput (bText :: Behavior String)
The key point now is that the text displayed does not depend on how often the Behavior bText may have been updated (to the same or a different value), only on the actual value itself. This is a lot easier to reason about than the other case, where you would have to think about what happens when two successive event occurrences have the same value. Do you redraw the text while the user edits it?
(Of course, in order to actually draw the text, the library has to interface with the GUI framework and does keep track of changes in the Behavior. This is what the changes combinator is for. However, this can be seen as an optimization and is not available from "within FRP".)
The other main reason for the separation is recursion. Most Events that recursively depend on themselves are ill-defined. However, recursion is always allowed if you have mutual recursion between an Event and a Behavior
e = ((+) <$> b) <@> einput
b = stepper 0 e
There is no need to introduce delays by hand, it just works out of the box.
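A pure-list sketch of why this recursion is well-defined (a toy model, not the library types; accumModel is a made-up name): stepper makes the behavior hold the sum of all earlier occurrences, so each new occurrence samples the old value and no manual delay is needed:

```haskell
type Time = Int
type Event a = [(Time, a)]  -- toy model: time-stamped occurrences

-- Model of e = ((+) <$> b) <@> einput with b = stepper 0 e:
-- at each input occurrence the behavior still holds the *old* sum,
-- so the new occurrence carries (old sum + input).
accumModel :: Event Int -> Event Int
accumModel = go 0
  where
    go _   []                = []
    go old ((t, x) : rest)   = (t, old + x) : go (old + x) rest
```

The "sample just before the occurrence" convention is exactly what breaks the circularity between e and b.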
Something critically important to me is lost, namely the essence of behaviors, which is (possibly continuous) variation over continuous time.
Precise, simple, useful semantics (independent of a particular implementation or execution) is often lost as well.
Check out my answer to "Specification for a Functional Reactive Programming language", and follow the links there.
Whether in time or in space, premature discretization thwarts composability and complicates semantics.
Consider vector graphics (and other spatially continuous models like Pan's). The situation is just as with the premature finitization of data structures, as explained in Why Functional Programming Matters.
I don't think there's any benefit to using the signals/behaviors abstraction over elm-style signals. As you point out, it's possible to create a signal-only API on top of the signal/behavior API (not at all ready for use, but see https://github.com/JohnLato/impulse/blob/dyn2/src/Reactive/Impulse/Syntax2.hs for an example). I'm pretty sure it's also possible to write a signal/behavior API on top of an elm-style API as well. That would make the two APIs functionally equivalent.
WRT efficiency, with a signals-only API the system should have a mechanism where only signals that have updated values will cause recomputations (e.g. if you don't move the mouse, the FRP network won't re-calculate the pointer coordinates and redraw the screen). Provided this is done, I don't think there's any loss of efficiency compared to a signals-and-streams approach. I'm pretty sure Elm works this way.
I don't think the continuous-behavior issue makes any difference here (or really at all). What people mean by saying behaviors are continuous over time is that they are defined at all times (i.e. they're functions over a continuous domain); the behavior itself isn't a continuous function. But we don't actually have a way to sample a behavior at any time; they can only be sampled at times corresponding to events, so we can't use the full power of this definition!
Semantically, starting from these definitions:
Event == for some t ∈ T: [(t,a)]
Behavior == ∀ t ∈ T: t -> b
since behaviors can only be sampled at times where events are defined, we can create a new domain TX where TX is the set of all times t at which Events are defined. Now we can loosen the Behavior definition to
Behavior == ∀ t ∈ TX: t -> b
without losing any power (i.e. this is equivalent to the original definition within the confines of our frp system). Now we can enumerate all times in TX to transform this to
Behavior == ∀ t ∈ TX: [(t,b)]
which is identical to the original Event definition except for the domain and quantification. Now we can change the domain of Event to TX (by the definition of TX), and the quantification of Behavior (from forall to for some) and we get
Event == for some t ∈ TX: [(t,a)]
Behavior == for some t ∈ TX: [(t,b)]
and now Event and Behavior are semantically identical, so they could obviously be represented using the same structure in an FRP system. We do lose a bit of information at this step; if we don't differentiate between Event and Behavior we don't know that a Behavior is defined at every time t, but in practice I don't think this really matters much. What elm does IIRC is require both Events and Behaviors to have values at all times and just use the previous value for an Event if it hasn't changed (i.e. change the quantification of Event to forall instead of changing the quantification of Behavior). This means you can treat everything as a signal and it all Just Works; it's just implemented so that the signal domain is exactly the subset of time that the system actually uses.
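The restriction-to-TX step can be written down in a toy model (made-up names, not Elm's or reactive-banana's API): sampling a behavior at the event times yields an Event-shaped list, and holding the last value recovers a step function, which is Elm's "use the previous value if nothing changed":

```haskell
type Time = Int
type Behavior a = Time -> a     -- toy denotation: defined at every time
type Event a = [(Time, a)]      -- toy denotation: time-stamped occurrences

-- Restrict a behavior to the event times TX: same shape as an Event.
sampleAt :: [Time] -> Behavior a -> Event a
sampleAt tx b = [ (t, b t) | t <- tx ]

-- Back again: treat the samples as a step function holding the last value.
hold :: a -> Event a -> Behavior a
hold z occs t = last (z : [ v | (t', v) <- occs, t' <= t ])
```

Round-tripping through sampleAt and hold loses only what happens between event times, which is the "bit of information" the answer mentions.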
I think this idea was presented in a paper (which I can't find now, anyone else have a link?) about implementing FRP in Java, perhaps from POPL '14? Working from memory, so my outline isn't as rigorous as the original proof.
There's nothing to stop you from creating a more-defined Behavior by e.g. pure someFunction, this just means that within an FRP system you can't make use of that extra defined-ness, so nothing is lost by a more restricted implementation.
As for notional signals such as time, note that it's impossible to implement an actual continuous-time signal using typical programming languages. Since the implementation will necessarily be discrete, converting that to an event stream is trivial.
In short, I don't think anything is lost by using just signals.
I unfortunately have no references in mind, but I distinctly remember different reactive authors claiming this choice is just for efficiency. You expose both to give the programmer a choice in which implementation of the same idea is more efficient for your problem.
I might be lying now, but I believe Elm implements everything as event streams under the hood. Things like time wouldn't be so nice as event streams though, since there is an infinite number of events during any time frame. I'm not sure how Elm solves this, but I think it's a good example of something that makes more sense as a signal, both conceptually and in implementation.

Why are there no functions for building Events out of non-events in reactive-banana?

I'm in the process of teaching myself FRP and Reactive-banana while writing what I hope will be a more useful tutorial for those that follow me. You can check out my progress on the tutorial here.
I'm stuck at trying to implement the simple beepy noise examples using events. I know I need to do something like this:
reactimate $ fmap (uncurry playNote) myEvent
in my NetworkDescription, but I can't figure out how to just have the network do the same thing repeatedly, or do something once. Ideally, I'm looking for things like this:
once :: a -> Event t a
repeatWithDelay :: Event t a -> Time -> Event t a
concatWithDelay :: Event t a -> Event t a -> Time -> Event t a
The Time type above is just a stand-in for whatever measurement of time we end up using. Do I need to hook up the system time as a Behavior to drive the "delay" functions? That seems more complicated than necessary.
Thanks in advance,
Echo Nolan
EDIT: Okay the types for repeatWithDelay and concatWithDelay don't make sense. Here's what I actually meant.
repeatWithDelay :: a -> Time -> Event t a
concatWithDelay :: a -> a -> Time -> Event t a
I have chosen not to include such functions in the core model for now, because time raises various challenges for consistency. For instance, if two events are scheduled to happen 5 seconds from now, should they be simultaneous? If not, which one should come first? I think the core model should be amenable to formal proof, but this does not work with actual, physical time measurements.
That said, I plan to include such functions in a "they work, but no guarantees" fashion. The main reason that I have not already done so is that there is no canonical choice for time measurement. Different applications have different needs, sometimes you want nanosecond resolution, sometimes you want to use timers from your GUI framework, and sometimes you want to synchronize to an external MIDI clock. In other words, you want the time-based function to work generically with many timer implementation, and it is only with reactive-banana-0.7.0 that I have found a nice API design for this.
Of course, it is already possible to implement your own time-based function by using timers. The Wave.hs example demonstrates how to do that. Another example is Henning Thielemann's reactive-balsa library, which implements various time-based combinators to process MIDI data in real time.
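In a toy list model of event streams (not the reactive-banana API; once, delayE, and the other names simply mirror the question's wish list), the requested combinators are one-liners; a real implementation would drive them from an OS or GUI timer, as the Wave.hs example does:

```haskell
type Time = Double
type Event a = [(Time, a)]  -- toy model: time-stamped occurrences

-- A single occurrence at time 0.
once :: a -> Event a
once x = [(0, x)]

-- Shift every occurrence later by dt.
delayE :: Time -> Event a -> Event a
delayE dt = map (\(t, a) -> (t + dt, a))

-- The same value over and over, dt apart (an infinite model stream).
repeatWithDelay :: a -> Time -> Event a
repeatWithDelay x dt = [ (fromIntegral n * dt, x) | n <- [(0 :: Int) ..] ]

-- x now, then y after dt.
concatWithDelay :: a -> a -> Time -> Event a
concatWithDelay x y dt = [(0, x), (dt, y)]
```

The model makes the consistency problem visible: delayE can make two occurrences land on the same timestamp, and the list order then silently decides which "comes first".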

What is the equivalent of reactive-web's flatMap in Haskell's reactive-banana?

I'm looking for the function in reactive-banana that will choose from which event stream to emit next depending on the incoming value of another event stream or signal (Behaviour ?). In the library for scala, reactive-web this is done with:
flatMap[U](f: T => EventStream[U]): EventStream[U]
thanks !
This is dynamic event switching. Unfortunately, in that formulation, it has many problems, and so is not included in reactive-banana. However, a variant of dynamic event switching will be added soon. For now, you'll have to do without it.
In particular, flatMap is Scala's name for the monadic bind function; a Monad instance for behaviours is problematic because it provides the dynamic event switching functionality that leads to the time leak explained in the article I linked.
As an addendum to ehird's answer, I want to mention that it's often possible to avoid dynamic event switching, namely when the relevant behaviors/events are in scope at compile-time. Dynamic event switching is only needed when you compute a new behavior/event on the fly, not when you switch between behaviors/events that are already in scope.
In particular, have a look at the TwoCounters.hs example on the examples page to see how this can be done.
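A sketch of what "switching between behaviors that are already in scope" means, in a toy function model of behaviors (ifB is a made-up name; in reactive-banana you would lift the same selection function over the Applicative instance):

```haskell
type Time = Int
type Behavior a = Time -> a  -- toy function model of behaviors

-- Select between two in-scope behaviors. This is plain applicative
-- lifting, so no dynamic event switching (no Monad) is required:
-- both alternatives exist up front, and the flag merely chooses one.
ifB :: Behavior Bool -> Behavior a -> Behavior a -> Behavior a
ifB bc bt be = \t -> if bc t then bt t else be t
```

This mirrors the TwoCounters idea: both counters keep running the whole time, and the output behavior merely selects which one to show.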

Represent Flowchart-specified Algorithms in Haskell

I'm confronted with the task of implementing algorithms (mostly business logic style) expressed as flowcharts. I'm aware that flowcharts are not the best algorithm representation due to its spaghetti-code property (would this be a use-case for CPS?), but I'm stuck with the specification expressed as flowcharts.
Although I could transform the flowcharts into more appropriate equivalent representations before implementing them, that could make it harder to "recognize" the original flowchart in the resulting implementation, so I was hoping there is some way to directly represent flowchart algorithms as (maybe monadic) EDSLs in Haskell, so that the resemblance to the original flowchart specification would be (more) obvious.
One possible representation of flowcharts is by using a group of mutually tail-recursive functions, by translating "go to step X" into "evaluate function X with state S". For improved readability, you can combine into a single function both the action (an external function that changes the state) and the chain of if/else or pattern matching that helps determine what step to take next.
This is assuming, of course, that your flowcharts are to be hardcoded (as opposed to loaded at runtime from an external source).
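A minimal sketch of that encoding, with a made-up Collatz-style flowchart: each box is a function, and "go to step X with state S" becomes "call function X with S as arguments":

```haskell
-- Made-up flowchart encoded as mutually tail-recursive functions.
-- Boxes: Check -> Halve/Triple -> back to Check; Check -> Done when n == 1.
collatzSteps :: Int -> Int
collatzSteps = stepCheck 0
  where
    stepCheck steps 1 = stepDone steps            -- arrow to "Done"
    stepCheck steps n
      | even n        = stepHalve steps n         -- arrow to "Halve"
      | otherwise     = stepTriple steps n        -- arrow to "Triple"
    stepHalve  steps n = stepCheck (steps + 1) (n `div` 2)
    stepTriple steps n = stepCheck (steps + 1) (3 * n + 1)
    stepDone   steps   = steps
```

Each function corresponds one-to-one to a box in the chart, so the original flowchart stays recognizable in the code.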
Sounds like Arrows would fit exactly what you describe. Either do a visualization of arrows (should be quite simple) or generate/transform arrow code from flow-graphs if you must.
Assuming there's "global" state within the flowchart, it makes sense to package that up into a state monad. At least then, unlike in the tail-recursive encoding, each call doesn't need any parameters, so it can be read as: a) modify state, b) conditionally on the current state, jump.
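A sketch of this State-monad packaging (the accumulate-until-limit flowchart here is invented for illustration; Control.Monad.State comes from mtl, which ships with GHC): each box reads or modifies the shared state, and a jump is just a monadic tail call:

```haskell
import Control.Monad.State

-- Made-up flowchart: Init -> Add -> Check -> (loop to Add | Done).
-- The flowchart's "global" state lives in S.
data S = S { total :: Int, remaining :: [Int] }

-- One branch per box; the recursive call is the "go to" arrow.
stepAdd :: State S Int
stepAdd = do
  st <- get
  case remaining st of
    []         -> pure (total st)               -- box "Done": input exhausted
    (x : rest)
      | total st + x > 100 -> pure (total st)   -- box "Check": limit reached
      | otherwise -> do
          put (S (total st + x) rest)           -- box "Add": consume one item
          stepAdd                               -- jump back to "Check"

-- Run the flowchart from its initial state.
sumUpTo100 :: [Int] -> Int
sumUpTo100 xs = evalState stepAdd (S 0 xs)
```

The calls inside the do-block take no parameters, matching the "modify state, then conditionally jump" reading described above.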

Resources