How to simplify/generalize a Finite State Machine (FSM) for a vending machine? - state-machine

I have enrolled in a course on computational methods, and currently we are reviewing automata and regular expressions. In one of the course assignments we were asked to design a finite state automaton that models the payment process of a vending machine.
For our base model we were told that the vending machine works with coins of $1, $2, $5, $10, $20, and $50, and that the final state is reached once a product worth $80 has been paid for. At first I was shocked, because I thought that I would have to draw all the possible paths (combinations) of coin insertions; for example, if someone wants to pay the $80 with only $1 coins, I would need an automaton with over 80 states.
Now, here's the deal. Our teacher told us that of course it is a valid way of doing the automaton, but a rather overcomplicated and inefficient one. What he really expects is that we make a generalization of the automaton and he gave us the following hint:
"If the design of the automaton works with the numbers of the coins (denominations) so that the state you are in also tells you how much money you have accumulated, then you have already found a way to generalize for any scenario".
Note: he also gave us an image that he said might give a little push towards the solution:
Our automaton's final test is that it can work with ANY currency system, fictional or real. For example, a currency that has coins of $3, $16, $47, and $64, or the Japanese currency system: ¥1, ¥5, ¥10, ¥50, ¥100, ¥500, ¥1000, ¥2000, ¥5000, and ¥10000.
Any ideas/suggestions on how to draw/design the automaton? I mean, I seriously don't believe I would have to draw more than 10000 states if I want to represent Japan's currency system.

You need exactly 81 states to solve this problem.
Each state corresponds to how much money has been deposited so far.
The final state actually shows that there is at least $80 in the machine; it could be more, but we are not going to give change.
There will be 6 arrows going out of each state, one for each type of coin. For instance, from state 0 there will be arrows to state 1, state 2, state 5, state 10, state 20, and state 50.
In the final state, the arrows are self-loops.
And you cannot do this with fewer than 81 states. At any time, the machine can hold an amount equal to 0, 1, ..., 80+, which makes 81 different cases in total. If you have fewer than 81 states, then by the pigeonhole principle at least two cases will be represented by the same state, and there is no way to distinguish them.
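This construction is what makes the "any currency system" test easy: the states and transitions are computed from the denominations and the price, not drawn by hand. A minimal Python sketch (the function names are mine, not from the assignment):

```python
# Build the coin FSM for any denominations and any target price.
# States are accumulated amounts 0..target; the accepting state `target`
# absorbs any overpayment, since no change is given.

def build_fsm(denominations, target):
    """Return the transition table {state: {coin: next_state}}."""
    delta = {}
    for state in range(target + 1):
        delta[state] = {c: min(state + c, target) for c in denominations}
    return delta

def accepts(delta, coins, target):
    """Feed a sequence of coins to the FSM; accept iff we reach `target`."""
    state = 0
    for c in coins:
        state = delta[state][c]
    return state == target

fsm = build_fsm([1, 2, 5, 10, 20, 50], 80)
assert len(fsm) == 81                   # the 81 states from the argument above
assert accepts(fsm, [50, 20, 10], 80)   # exact payment
assert accepts(fsm, [50, 50], 80)       # overpayment still accepted
assert not accepts(fsm, [20, 20], 80)   # not enough money yet
```

The same two calls handle the fictional $3/$16/$47/$64 coins or the full Japanese coin set; only the arguments change, never the construction.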

Related

State space size of state of the art model checkers

What is the approximate maximum state space size of modern model checkers, like NuSMV.
I do not need an exact number but some state size value, where the run time is still acceptable (say a few weeks).
What kind of improvements, beyond symbolic model checking, are used to raise that limit?
The answer varies wildly, depending, among other factors, on:
what model checking algorithm is used
how the system is represented
how the model checker (or other tool) is implemented
what hardware the software is running on (and parallelization etc).
Instead of mentioning some specific number of states, I will instead note a few relevant factors (I use "specification" below as a synonym to "model"):
Symbolic or enumerative: Symbolic algorithms scale differently than enumerative ones. Also, for the same problem, there are typically differences in the computational complexity of known symbolic and enumerative algorithms.
Enumeration is relatively predictable in behavior, in that a state space with N states will most likely take a shorter time to enumerate than a state space with 1000000 * N states.
Symbolic approaches based on binary-decision diagrams (BDDs) can behave in ways (nearly) unrelated to how many states are reachable based on the specification. The main factor is what kind of Boolean functions arise in the encoding of the specification.
For example, a specification that involves a multiplier results in BDDs that are exponentially large in the number of bits that represent the state, hence of size linear in the number of states (assuming that the reachable states are exponentially more numerous than the bits used to represent them). In this case, a state space with 2^50 states that may otherwise be amenable to symbolic analysis becomes prohibitive.
In other words, it is not only the number of states, but the kind of Boolean function that the system action corresponds to that matters (an action in TLA+ corresponds to what a transition relation represents in other formalisms). In addition, choosing a different encoding (of integers by bits) can have an impact on BDD size.
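The enumerative behavior described above can be sketched as a plain breadth-first reachability loop, whose cost is roughly proportional to the number of reachable states (the transition system below is an invented toy, not any particular model checker's input format):

```python
# Explicit-state (enumerative) reachability: each reachable state is
# visited exactly once, so runtime scales with the reachable state count.

from collections import deque

def reachable(initial, successors):
    """Breadth-first enumeration of all states reachable from `initial`."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy system: a counter modulo 1000 that can add 1 or 3 in each step.
states = reachable(0, lambda s: [(s + 1) % 1000, (s + 3) % 1000])
assert len(states) == 1000  # every residue is reachable
```

A symbolic checker would instead represent `seen` as a Boolean function (e.g., a BDD) over the state bits, which is why its cost tracks the structure of that function rather than the raw state count.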
Symmetry reduction, partial-order reduction, and abstraction are some of the improvements used to approach the analysis of more complex systems.
Acceptable runtime is a relative notion. Whatever the model checking approach, there is always a point where increasing the model's fidelity exhausts the available time.
Another approach is to write a specification that has unspecified parameters, then use a model checker to find errors in instances of the specification that correspond to small parameter values, and after correcting these, then use a theorem prover to ensure correctness of the specification. This approach is supported by tools for TLA+, namely the model checker TLC and the theorem prover TLAPS.
Regarding terminology ("specification" above), please see "What good is temporal logic?" by Leslie Lamport.
Also worth noting that, depending on the approach, the number of states, and the number of reachable states can be different notions. Usually, this matters in typed formalisms: we can specify a system with 1 reachable state, but declare variable types that result in many more states, most of which are unreachable from the initial conditions. In symbolic approaches, this affects the encoding, thus the size of BDDs.
References relevant to state space size:
Bwolen Yang, Randal E. Bryant, David R. O’Hallaron, Armin Biere, Olivier Coudert, Geert Janssen, Rajeev K. Ranjan, Fabio Somenzi, A performance study of BDD-based model checking, FMCAD, 1998, DOI: 10.1007/3-540-49519-3_18
Radek Pelánek, Properties of state spaces and their applications, STTT, 2008, DOI: 10.1007/s10009-008-0070-5 (and a relevant website)
Radek Pelánek, Typical structural properties of state spaces, SPIN, 2004, DOI: 10.1007/978-3-540-24732-6_2
Yunja Choi, From NuSMV to SPIN: Experiences with model checking flight guidance systems, FMSD, 2007, DOI: 10.1007/s10703-006-0027-9
J.R. Burch, E.M. Clarke, K.L. McMillan, D.L. Dill, L.J. Hwang, Symbolic model checking: 10^20 states and beyond, LICS, 1990, DOI: 10.1109/LICS.1990.113767

Why is it unsafe to use Linear Congruential Generator to shuffle cards in online casino? How long will it take to crack this system?

I know that Linear Congruential Generator is not recommended where high randomness and security level are needed, but I don't know why. If I use Linear Congruential Generator to generate a random number for shuffle algorithm, is it easy to crack? If it is, how long will it take to crack it?
Linear congruential generators have several major flaws:
They have very little internal state. Some generators may have as few as 64 thousand possible starting states -- as a result, using one of these would mean that there are only 64 thousand possible shuffled decks. This makes it very easy to identify which one of those decks is being used at any given point.
Their future and past behavior can be perfectly determined based on their state. Once an attacker is able to gather enough information to guess the state of a LCRNG being used to shuffle cards, they can determine both what all the future shuffles will be, and what any past shuffles were.
They frequently suffer from statistical biases. One common flaw is that low-order bits will follow a short cycle -- for instance, in many generators, the least significant bit of the raw output will flip between 1 and 0 on subsequent outputs.
The first two issues are what will cause you the most trouble here. The exact severity depends on the size of the LCRNG's state. That being said, if a 32-bit LCRNG is being used, its state can probably be guessed from about 32 bits of output. The position of one card reveals roughly lg(52) ≈ 5.7 bits, so the complete state can probably be guessed after seeing about 6 (32 ÷ 5.7 ≈ 5.6) cards.
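To make the attack concrete, here is a toy demonstration in Python. The LCG parameters are illustrative (a deliberately small 16-bit generator, not taken from any real casino software), which keeps the brute-force search instant; the same approach scales to a 32-bit LCG with more compute:

```python
# Why an LCG-shuffled deck is crackable: with so little internal state,
# an attacker can brute-force every seed and keep the ones that reproduce
# the cards seen so far. Toy 16-bit parameters, chosen for illustration.

M = 1 << 16
A, C = 25173, 13849

def lcg(state):
    return (A * state + C) % M

def deal(seed, n=6):
    """Top n cards of a 52-card deck, Fisher-Yates shuffled by the LCG."""
    state = seed
    deck = list(range(52))
    for i in range(51, 0, -1):
        state = lcg(state)
        deck[i], deck[state % (i + 1)] = deck[state % (i + 1)], deck[i]
    return deck[:n]

def crack(observed):
    """Every seed consistent with the observed cards (65536 tries)."""
    return [s for s in range(M) if deal(s, len(observed)) == observed]

secret = 31337
seen = deal(secret)          # attacker watches the first 6 cards dealt
candidates = crack(seen)
assert secret in candidates  # the true seed is always among the survivors
```

Once the candidate set shrinks to one seed, the attacker knows every remaining card in this shuffle and, because the generator is deterministic, every future shuffle as well.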

Take a random Object in Alloy

Let's say you have a basic elevator system defined in Alloy...
You have a set of floors and a set of people waiting on the elevator on each floor.
You work with State to show the progress the elevator makes.
How can you send the elevator in the initial state to a random floor to pick up its first passenger? (i.e., how can you randomise the element Alloy takes?)
I think what you want to do here is to leave the initial state unspecified. That is, describe its existence, clarify that there is exactly one, but leave it unspecified which of the possible states is the initial state.
The Alloy Analyzer will then check your assertions and predicates for all possible initial states, and will (eventually) generate instances of the model for all possible initial states. This resembles the behavior of a good random number generator, in that the likelihood of any given state being chosen as the initial state is equal to the likelihood of any other given state being chosen -- it's just that the likelihood here becomes 1.0, not 1/n for n possible states.
It is also better to say an arbitrary floor, rather than a random floor.

Modelling a vending box using Alloy

I am trying to model a vending machine program using Alloy. I wish to create a model in which I can insert some money and give the machine a selection option for an item, and it would provide me that item; in case the money supplied is insufficient, nothing would be provided.
Here I am trying to input a coin along with a button as input, and it should return the desired item from the vending machine, provided the value (i.e., the amount assigned to each item) is given as input. So here button a should require 10 Rs, button b requires 5 Rs, c requires 1 Rs, and d requires 2 Rs. The op instance is the item returned once the required money is inserted; opc is the balance amount of coins to be returned; ip is the input button and x is the money input. How can I provide an instance such that it takes multiple coins as input, and also, if the amount is greater than the item cost, returns a number of coins back? If I could get some help it would be greatly appreciated.
If I were you, I'd proceed by asking myself what kinds of entities I care about; you've done that (signatures for coins and items -- do you also need some notion of a customer?).
Next, I'd ask myself what constitutes a legal state for the system -- sometimes it helps to think about it backwards by asking what would constitute an illegal or unacceptable state.
Then I'd try to define operations -- you've already mentioned insertion of money and selection of an item -- as transitions from one legal state of the system to the next.
At each stage I'd use the Analyzer to examine instances of the model and see whether what I'd done so far makes sense. One example of this pattern of defining entities, states, and state transitions in that order is given in the Whirlwind Tour chapter of Daniel Jackson's Software Abstractions -- if you have access to that book, you will find it helpful to review that chapter.
Good luck!
module vending_machines
open util/ordering[Event]

fun fst: Event { ordering/first }
fun nxt: Event -> Event { ordering/next }
fun upto[e: Event]: set Event { prevs[e] + e }

abstract sig Event {}
sig Coin extends Event {}

-- in every prefix of the trace, products dispensed never exceed coins inserted
pred no_vendor_loss[product: set (Event - Coin)] {
  all e: Event | let pfx = upto[e] | #(product & pfx) <= #(Coin & pfx)
}
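For intuition, the prefix invariant that no_vendor_loss expresses can be restated in a few lines of Python (a hypothetical trace checker of mine, not part of the Alloy model):

```python
# The invariant: in every prefix of the event trace, the number of
# products dispensed never exceeds the number of coins inserted.

def no_vendor_loss(trace):
    """trace is a list of 'coin' / 'product' events, in order."""
    coins = products = 0
    for event in trace:
        if event == "coin":
            coins += 1
        else:
            products += 1
        if products > coins:  # some prefix has more products than coins
            return False
    return True

assert no_vendor_loss(["coin", "product", "coin", "product"])
assert not no_vendor_loss(["product", "coin"])  # dispensed before payment
```

The Alloy predicate says the same thing declaratively: for every event e, count coins and products in the prefix up to e and compare. The Analyzer then searches for a trace that violates it.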

Why does the halting problem make it impossible for software to determine the time complexity of an algorithm

I've read some articles about big-Oh calculation and the halting problem. Obviously it's not possible for ALL algorithms to say whether they are ever going to stop, for example:
while(System.in.readline()){
}
However, what would be the big-Oh of such a program? I think it's not defined, for the same reason it's not possible to say whether it's ever going to halt. You don't know that.
So... there are some algorithms where you cannot say whether they are ever going to halt. But if you can't say that, then the big-Oh of that algorithm is by definition undefined.
Now to my point: calculating the big-Oh of a piece of software. Why can't you write a program that does that? The answer is either a function, or not defined.
Also, I've not said anything about the programming language. What about a purely functional programming language? Can it be calculated there?
OK, so let's talk about Turing machines (a similar discussion using the Random-Access model could be had, but I adopt this for simplicity).
An upper-bound on the time complexity of a TM says something about the order of the rate at which the number of transitions the TM makes grows according to the input size. Specifically, if we say a TM executes an algorithm which is O(f(n)) in the worst case for input size n, we are saying that there exists an n0 and c such that, for n > n0, T(n) <= cf(n). So far, so good.
Now, the thing about Turing machines is that they can fail to halt; that is, they can execute forever on some inputs. Clearly, if for some n* > n0 a TM takes an infinite number of steps, there is no f(n) satisfying the condition (with finite n0, c) laid out in the last paragraph; that is, T(n) != O(f(n)) for any f. OK; if we were able to say for certain that a TM would halt on all inputs of length at least n0, for some n0, we'd be home free. Trouble is, this is equivalent to solving the halting problem.
So we conclude this: if a TM takes forever to halt on an input n > n0, then we cannot define an upper bound on complexity; furthermore, it is an unsolvable problem to algorithmically determine whether the TM will halt on all inputs greater than a fixed, finite n0.
The reason it is impossible to answer the question "is the 'while(System.in.readline()){}' program going to stop?" is that the input is not specified, so in this particular case the problem is lack of information and not undecidability.
The halting problem is about the impossibility of constructing a general algorithm which, when provided with both a program and an input, can always tell whether that program with that input will finish running or continue to run forever.
In the halting problem, both program and input can be arbitrarily large, but they are intended to be finite.
Also, there is no specific instance of 'program + input' that is undecidable in itself: given a specific instance, it is (in principle) always possible to construct an algorithm that analyses that instance and/or class of instances, and calculates the correct answer.
However, if a problem is undecidable, then no matter how many times the algorithm is extended to correctly analyse additional instances or classes of instances, the process will never end: it will always be possible to come up with new instances that the algorithm will not be capable of answering unless it is further extended.
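The asymmetry in the last two paragraphs can be made concrete with a step-bounded interpreter: running a specific program under a budget can confirm that it halts, but a timeout never proves non-termination. This sketch uses Python generators as stand-in "programs" (my own framing, not from the question):

```python
# A step-bounded run can confirm halting for a specific instance, but an
# exhausted budget is inconclusive -- which is why individual instances
# can be settled case by case while no general decider exists.

def runs_within(program, steps):
    """Run `program` (a generator function, one yield per step) for at
    most `steps` steps; True if it finished, False if the budget ran out."""
    gen = program()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program genuinely halted
    return False  # budget exhausted: halting OR just slow -- we can't tell

def halting_program():
    for _ in range(10):
        yield

def looping_program():
    while True:
        yield

assert runs_within(halting_program, 100) is True   # proof of halting
assert runs_within(looping_program, 100) is False  # not proof of looping
```

No matter how large the budget, `False` never distinguishes "runs forever" from "halts after more steps than we allowed", which is the extension-never-ends phenomenon described above.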
I would say that the big O of "while(System.in.readline()){}" is O(n) where n is the size of the input (the program could be seen as the skeleton of e.g. a line counting program).
The big O is defined in this case because for every input of finite size the program halts.
So the question to be asked might be: "does a program halt on every possible finite input it may be provided with?"
If that question can be reduced to the halting problem or any other undecidable problem, then it is undecidable.
It turns out that it can be reduced, as clarified here:
https://cs.stackexchange.com/questions/41243/halting-problem-reduction-to-halting-for-all-inputs
Undecidability is a property of problems and is independent of programming languages that are used to construct programs that run on the same machines. For instance you may consider that any program written in a functional programming language can be compiled into machine code, but that same machine code can be produced by an equivalent program written in assembly, so from a Turing machine perspective, functional programming languages are no more powerful than assembly languages.
Also, undecidability would not prevent an algorithm from being able to calculate the big O for a countless (theoretically infinite) number of programs, so any effort in constructing an algorithm for that purpose would not necessarily be pointless.