Let's say you have a basic elevator system defined in Alloy...
You have a set of floors and a set of people waiting on the elevator on each floor.
You work with State to show the progress the elevator makes.
How can you send the elevator in the initial state to a random floor to pick up its first person? (i.e., how can you randomise which element Alloy picks?)
I think what you want to do here is to leave the initial state unspecified. That is, describe its existence, clarify that there is exactly one, but leave it unspecified which of the possible states is the initial state.
The Alloy Analyzer will then check your assertions and predicates for all possible initial states, and will (eventually) generate instances of the model for all possible initial states. This resembles the behavior of a good random number generator, in that the likelihood of any given state being chosen as the initial state is equal to the likelihood of any other given state being chosen -- it's just that the likelihood here becomes 1.0, not 1/n for n possible states.
And better to say an arbitrary floor, rather than a random floor.
I have a function with a lot of random.choice, random.choices, random.randint, random.uniform, etc. calls, and I have to put random.seed before them all.
I use Python 3.8.6. Is there any way to keep a seed initialized just in the function, or at least a way to toggle it, instead of doing it every time?
It sounds like you have a misconception about how PRNGs work. They are generators (it's right there in the name!) which produce a sequence of values that are deterministic but are constructed in an attempt to be indistinguishable from true randomness based on statistical tests. In essence, they attempt to pass a Turing test/imitation game for randomness. They do this by maintaining an internal state of bits, which gets updated via a deterministic algorithm with every call to the generator. The only role a seed is supposed to play is to set the initial internal state so that separate runs create reproducible sequences. Repeating the seed for separate runs can be useful for debugging and for playing tricks to reduce the variability of some classes of estimators in Monte Carlo simulation.
All PRNGs eventually cycle. Since the internal state is composed of a finite number of bits and the state update mechanism is deterministic, the entire sequence will repeat from the point where any state duplication occurs. In other words, the output sequence is actually a loop of values. The seed value has nothing to do with the quality of the pseudo-random numbers, that's determined by the state transition algorithm. Conceptually you can think of the seed as just being the entry point to the PRNG's cycle. (Note that this doesn't mean you have a cycle just because you observe the same output, cycling occurs only when the internal state that produces the output repeats. That's why the 1980's and 90's saw an emergence of PRNGs whose state space contained more bits than the output space, allowing duplicate output values as predicted by the birthday problem without having the sequence repeat verbatim from that point on.)
If you mess with a good PRNG by reseeding it multiple times, you're undoing all of the hard work that went into designing an algorithm which passes statistically based Turing tests. Since the seed does not determine the quality of the results, you're invoking additional cost (to spawn a new state from the seed), gaining no statistical benefit, and quite likely harming the ability to pass statistical testing. Don't do that!
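To make this concrete, here is a minimal sketch using the standard library random module (the seed value 42 is arbitrary). Seeding once at the start makes a whole run reproducible; reseeding before every call pins the generator to the same entry point of its cycle, so the "random" values stop varying at all.

```python
import random

# Seed ONCE at the start: the whole sequence is then reproducible.
random.seed(42)
run1 = [random.randint(0, 99) for _ in range(5)]

random.seed(42)  # a second run with the same seed replays the sequence
run2 = [random.randint(0, 99) for _ in range(5)]

# Anti-pattern: reseeding before every call resets the internal state
# each time, so every "random" value is identical.
bad = []
for _ in range(5):
    random.seed(42)
    bad.append(random.randint(0, 99))
```

Here run1 == run2 (same seed, same sequence), while every element of bad is the same value.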
I'm trying to model a state machine which reuses a state in order to reduce complexity.
I've got three states: State A, B and X.
My state X can be entered via a transition from either state A or state B.
State X includes multiple substates with lots of complexity, and I don't want to implement it twice.
After the process in state X is completed, I need to transition back to state A or B, based on which one was the previous state.
Is there an elegant way to solve this?
State X includes multiple substates with lots of complexity and I don't want to implement it twice
Define a submachine corresponding to your state X, and in your current machine use a submachine state to instantiate it where you need it.
See §14.2.3.4.7 Submachine States and submachines page 311 in formal-17-12-05 :
Submachines are a means by which a single StateMachine specification can be reused multiple times. They are similar to encapsulated composite States in that they need to bind incoming and outgoing Transitions to their internal Vertices.
...
NOTE. Each submachine State represents a distinct instantiation of a submachine, even when two or more submachine States reference the same submachine.
A submachine will help you to reuse part of your state modelling several times.
But if you want to be able to enter your state X from A or B and then return to the previous state, a ShallowHistory pseudostate would be a good idea.
In the following state machine, I modeled a submachine X referenced by both states X1 and X2. I also wanted to model the fact that state X2 is processed after A or B, and that the next state is the previous one.
Another solution consists in playing with transition guards or events/triggers. Keep in mind that transitions are triggered when specific events occur or when their guards are true, cf. the following screenshot.
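Since the question is about UML modelling, the following is only an illustrative Python sketch (the class and attribute names are invented) of the shallow-history idea: implement X once, record which state you came from before entering it, and return there when it completes.

```python
class SubmachineX:
    """The complex shared behavior, implemented exactly once."""
    def run(self):
        # ...the many substates of X would live here...
        return "done"

class Machine:
    def __init__(self):
        self.state = "A"
        self._history = None  # shallow history: where to return after X

    def enter_x(self):
        self._history = self.state  # remember whether we came from A or B
        self.state = "X"

    def step(self):
        if self.state == "X":
            SubmachineX().run()
            self.state = self._history  # transition back to the previous state
```

Calling enter_x from A returns the machine to A afterwards, and from B back to B, without duplicating X.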
I am working my way through an MIT OCW course, Introduction to Electrical Engineering and Computer Science I, in which state machines are employed. I have noticed that the course instructors do not draw state transition diagrams for most of the state machines they discuss.
One problem is to design and write Python code for a state machine whose output is the input from two time intervals in the past. I think that this is an infinite state machine, for which a state transition diagram might be useful for getting the general idea while showing only a few of the states.
I am wondering if a state transition diagram can be drawn for such a double delay machine. All the examples, so far, have a transition line emerging from a state bubble, marked with an input and the resulting output, and pointing at the next state. For a double delay machine the input of consequence was entered two time periods earlier. The problem instructions state that all state memory for the machine must be in one argument. No mention is made of input memory, which I would think necessary.
My questions:
Can a state transition diagram be drawn for this state machine?
Is it necessarily the case that input memory be a part of this design?
It is impossible to draw a complete diagram, since the set of all possible states includes any value of any data type, as shown in the example for the (single) delay state machine in the readings. So the number of possible states can't be bounded. See Chapter 4: State Machines.
In the problem description it states that:
It is essential that the init and getNextValues methods in any state machine not set or read any instance variables except self.startState (not even self.state). All memory (state) must be in the state argument to getNextValues. Look at the examples in the course notes, section 4.1.
So the state is all the memory you need. There is no reason not to use a tuple as the state to keep the last two inputs.
First we save both values in memory (state)
class Delay2Machine(StateMachine):
def __init__(self, val0, val1):
self.startState = (val0, val1)
Following the superclass SM's step function implementation, also given in the readings:
def step(self, inp):
(s, o) = self.getNextValues(self.state, inp)
self.state = s
return o
The output will be the first of the values saved in memory, and the state will be updated to include the new input
def getNextValues(self, state, inp):
return ((state[1], inp), state[0])
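Putting the pieces together, a self-contained version can be sketched as follows. The SM base class here is a minimal stand-in for the course's, not the actual library code:

```python
class SM:
    """Minimal stand-in for the course's state-machine superclass."""
    def start(self):
        self.state = self.startState

    def step(self, inp):
        (s, o) = self.getNextValues(self.state, inp)
        self.state = s
        return o

class Delay2Machine(SM):
    def __init__(self, val0, val1):
        self.startState = (val0, val1)

    def getNextValues(self, state, inp):
        # Output the input from two steps ago; shift the new input in.
        return ((state[1], inp), state[0])

m = Delay2Machine(100, 10)
m.start()
outputs = [m.step(i) for i in (1, 2, 3, 4)]
# outputs == [100, 10, 1, 2]
```

Note that all memory lives in the (val_two_ago, val_one_ago) tuple passed to getNextValues, exactly as the problem instructions require.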
I'm writing a simple finite state machine and realized that there are situations where an event can take a state to more than one possible results. Basically, from state A, if Event E happens, the state could be either C or D.
I'm currently using the Javascript Finite State Machine code written here: https://github.com/jakesgordon/javascript-state-machine
From the documentation I don't see an obvious way to make this possible. Moreover, I feel like maybe this is actually a flaw in my original design.
Essentially, in a finite state machine, should there be a situation where a transition happens and, based on some logic, results in one of multiple states (1 to many), or should it be that we check the logic first to see which transition needs to take place (1 to 1)?
Congratulations, you've just discovered non-deterministic finite state machines! The ideas are similar to that of a deterministic state machine, except that there may be multiple ways to transition from a state given the same input symbol. How this is actually done is unspecified (randomness, user input, branch out and run them all at once, etc.).
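In practice, many FSM libraries (including deterministic ones like the linked javascript-state-machine) expect you to resolve the choice before firing the transition. A hedged Python sketch of the guard approach, with invented state names, event, and guard condition:

```python
# Hypothetical sketch: instead of one event "E" mapping to two targets,
# a guard inspects the payload before firing, so each (state, event,
# guard-result) combination maps to exactly one target state.
def next_state(state, event, payload):
    transitions = {
        ("A", "E"): lambda p: "C" if p["score"] > 0 else "D",
    }
    chooser = transitions.get((state, event))
    return chooser(payload) if chooser else state  # stay put otherwise
```

This keeps the machine deterministic (1 to 1) while still expressing the "E could lead to C or D" behavior.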
Behaviors are ubiquitously defined as “time-varying values”1.
Why? Time as the dependency/parameter for varying values is very uncommon.
My intuition for FRP would be to have behaviors as event-varying values instead; it is much more common, much simpler, I wager much more efficient, and extensible enough to support time too (via a tick event).
For instance, if you write a counter, you don't care about time/associated timestamps, you just care about the "Increase-button clicked" and "Decrease-button clicked" events.
If you write a game and want a position/force behavior, you just care about the WASD/arrow keys held events, etc. (unless you ban your players from moving to the left in the afternoon; how iniquitous!).
So: why is time a consideration at all? Why timestamps? Why do some libraries (e.g. reactive-banana, reactive) take it to the extent of having Future and Moment values? Why work with event streams instead of just responding to an event occurrence? All of this just seems to over-complicate a simple idea (event-varying/event-driven values); what's the gain? What problem are we solving here? (I'd love to also get a concrete example along with a wonderful explanation, if possible.)
1 Behaviors have been defined so here, here, here... & pretty much everywhere I've encountered.
Behaviors differ from Events primarily in that a Behavior has a value right now while an Event only has a value whenever a new event comes in.
So what do we mean by "right now"? Technically all changes are implemented as push or pull semantics over event streams, so we can only possibly mean "the most recent value as of the last event of consequence for this Behavior". But that's a fairly hairy concept—in practice "now" is much simpler.
The reasoning for why "now" is simpler comes down to the API. Here are two examples from Reactive Banana.
Eventually an FRP system must always produce some kind of externally visible change. In Reactive Banana this is facilitated by the reactimate :: Event (IO ()) -> Moment () function which consumes event streams. There is no way to have a Behavior trigger external changes -- you always have to do something like reactimate (someBehavior <# sampleTickEvent) to sample the behavior at concrete times.
Behaviors are Applicatives unlike Events. Why? Well, let's assume Event was an applicative and think about what happens when we have two event streams f and x and write f <*> x: since events occur all at different times the chances of f and x being defined simultaneously are (almost certainly) 0. So f <*> x would always mean the empty event stream which is useless.
What you really want is for f <*> x to cache the most recent value of each and take their combined value "all of the time". That's a really confusing concept to talk about in terms of an event stream, so instead let's consider f and x as taking values at all points in time. Now f <*> x is also defined at all points in time. We've just invented Behaviors.
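The caching idea can be sketched in Python as a toy model (this is not any real FRP library's API): each stream is a list of timestamped values, and the combination samples the latest value of each side at every occurrence of either.

```python
def combine_latest(fs, xs):
    """fs: [(time, f)] and xs: [(time, x)], each sorted by time.
    Emit (time, f(x)) at every occurrence once both sides have a value."""
    events = sorted(
        [(t, "f", v) for t, v in fs] + [(t, "x", v) for t, v in xs],
        key=lambda e: e[0],
    )
    out, cur_f, cur_x = [], None, None
    for t, tag, v in events:
        if tag == "f":
            cur_f = v
        else:
            cur_x = v
        if cur_f is not None and cur_x is not None:
            out.append((t, cur_f(cur_x)))
    return out

fs = [(0, lambda x: x + 1), (5, lambda x: x * 2)]
xs = [(2, 10), (7, 20)]
result = combine_latest(fs, xs)
# result == [(2, 11), (5, 20), (7, 40)]
```

A point-wise f <*> x over the raw streams would be empty, since the occurrence times never coincide; caching the latest values gives a combined value at every event from t=2 onward, which is the behavior-like semantics.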
Because it was the simplest way I could think of to give a precise denotation (implementation-independent meaning) to the notion of behaviors, including the sorts of operations I wanted, including differentiation and integration, as well as tracking one or more other behaviors (including but not limited to user-generated behavior).
Why? Time as the dependency/parameter for varying values is very uncommon.
I suspect that you're confusing the construction (recipe) of a behavior with its meaning. For instance, a behavior might be constructed via a dependency on something like user input, possibly with additional synthetic transformation. So there's the recipe. The meaning, however, is simply a function of time, related to the time-function that is the user input. Note that by "function", I mean in the math sense of the word: a (deterministic) mapping from domain (time) to range (value), not in the sense that there's a purely programmatic description.
I've seen many questions asking why time matters and why continuous time. If you apply the simple discipline of giving a mathematical meaning in the style of denotational semantics (a simple and familiar style for functional programmers), the issues become much clearer.
If you really want to grok the essence of and thinking behind FRP, I recommend you read my answer to "Specification for a Functional Reactive Programming language" and follow pointers, including "What is Functional Reactive Programming".
Conal Elliott's Push-Pull FRP paper describes event-varying data, where the only points in time that are interesting are when events occur. Reactive event-varying data is the current value and the next Event that will change it. An Event is a Future point in the event-varying Reactive data.
data Reactive a = a `Stepper` Event a
newtype Event a = Ev (Future (Reactive a))
The Future doesn't need to have a time associated with it; it just needs to represent the idea of a value that hasn't happened yet. In an impure language with events, for example, a future can be an event handle and a value. When the event occurs, you set the value and raise the handle.
Reactive a has a value for a at all points in time, so why would we need Behaviors? Let's make a simple game. In between when the user presses the WASD keys, the character, accelerated by the force applied, still moves on the screen. The character's position at different points in time is different, even though no event has occurred in the intervening time. This is what a Behavior describes - something that not only has a value at all points in time, but its value can be different at all points in time, even with no intervening events.
One way to describe Behaviors would be to repeat what we just stated. Behaviors are things that can change in-between events. In-between events they are time-varying values, or functions of time.
type Behavior a = Reactive (Time -> a)
We don't need Behavior; we could simply add events for clock ticks and write all of the logic in our entire game in terms of these tick events. This is undesirable to some developers, as the code declaring what our game is becomes intermingled with the code specifying how it is implemented. Behaviors allow the developer to separate the description of the game in terms of time-varying values from the implementation of the engine that executes that description.
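The character-position example can be sketched as a Python toy (illustrative only, not an FRP library): the behavior is a piecewise function of time, and a key event installs a new time-function, so the position differs at every instant even when no events occur in between.

```python
# Each segment is (start_time, function of absolute time); a key event
# would append a new segment. Between events the value keeps changing.
segments = [
    (0.0, lambda t: 0.0),               # at rest: no key held yet
    (2.0, lambda t: 5.0 * (t - 2.0)),   # 'D' pressed at t=2: velocity 5
]

def position(t):
    # Sample the behavior: use the most recent segment started by time t.
    start, f = max((s for s in segments if s[0] <= t), key=lambda s: s[0])
    return f(t)

# position(1.0) == 0.0, position(3.0) == 5.0, position(4.0) == 10.0:
# the value differs at 3.0 and 4.0 although no event occurred between them.
```

The tick-event alternative would instead recompute the position inside every clock-tick handler, entangling the game's description with the engine's stepping.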