Is there a name for this fallacy? - statistics

From time to time I've come across (or have been briefly tempted by) something like the following absurd example:
"40% of Coloradans are Republican; therefore, 40% of Republicans are Coloradan."
Closer to the bone would be, "Most NBA players are Black, so most Blacks are highly athletic."
Is there a term for this sort of gaffe: not just misapplying observations from a non-random sample to the population, but also confusing 'A implies B' with 'B implies A'? It feels more complex than just taking an inductive conclusion as a deductive truth.
Thanks.


Proof of optimal efficiency of A* Search

It is mentioned in Norvig's Artificial Intelligence that A* Search is optimally efficient. However, I could not figure out why, nor could I find the proof on the web. Does anyone happen to have a proof?
I hope I'm not doing your homework ;). I'll only sketch the proof here.
The first thing you need to see is that A* is optimal: it returns the shortest path according to your cost function g. This proof is straightforward under the assumption that the heuristic h never overestimates the cost to the goal. If this didn't hold, optimal efficiency would be meaningless, since A* itself wouldn't be optimal.
Optimal efficiency: among all optimal algorithms starting from the same start node, A* expands the fewest nodes.
Let's assume an algorithm B does not expand some node n which is expanded by A*. By definition, for this node g(n) + h(n) <= f*, where f* is the cost of the shortest path. Now consider a second problem in which all heuristic values are the same as in the original problem, but there is a new path through n to a new goal with total cost smaller than f*.
Since the heuristic values are unchanged, B would still not expand n and hence would never find this new goal, so B wouldn't return the optimal path for the second problem. Therefore our original assumption that B is optimal is violated.
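To make the role of g and h concrete, here is a minimal A* sketch in Haskell (my own illustration, not from the book), assuming an explicit graph with non-negative edge costs and a consistent (and hence admissible) heuristic. The nodes it "expands" are exactly those popped from the frontier, which is the set the proof reasons about:

import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Graph n = M.Map n [(n, Double)]

-- Returns the cost of a shortest path from start to goal, if one exists.
-- The frontier is a set of (f, node, g) triples ordered by f = g + h.
astar :: Ord n => Graph n -> (n -> Double) -> n -> n -> Maybe Double
astar graph h start goal = go (S.singleton (h start, start, 0)) S.empty
  where
    go frontier closed = case S.minView frontier of
      Nothing -> Nothing
      Just ((_, n, g), rest)
        | n == goal           -> Just g
        | n `S.member` closed -> go rest closed
        | otherwise           ->
            -- This is the "expansion" the proof talks about: n leaves the
            -- frontier and its successors enter it.
            let succs = M.findWithDefault [] n graph
                new   = S.fromList [ (g + c + h m, m, g + c)
                                   | (m, c) <- succs, m `S.notMember` closed ]
            in go (rest `S.union` new) (S.insert n closed)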

Why does FRP consider time as a factor for values?

Behaviors are ubiquitously defined as "time-varying values"1.
Why? Time being the dependency/parameter for varying values is very uncommon.
My intuition for FRP would be to have behaviors as event-varying values instead; it is much more common, much simpler, I wager a much more efficient idea, and extensible enough to support time too (via a tick event).
For instance, if you write a counter, you don't care about time/associated timestamps; you just care about the "Increase-button clicked" and "Decrease-button clicked" events.
If you write a game and want a position/force behavior, you just care about the WASD/arrow-keys-held events, etc. (unless you ban your players from moving to the left in the afternoon; how iniquitous!).
So: why is time a consideration at all? Why timestamps? Why do some libraries (e.g. reactive-banana, reactive) take it to the extent of having Future and Moment values? Why work with event streams instead of just responding to an event occurrence? All of this just seems to over-complicate a simple idea (event-varying/event-driven values); what's the gain? What problem are we solving here? (I'd love to also get a concrete example along with a wonderful explanation, if possible.)
1 Behaviors have been defined this way here, here, here... and pretty much everywhere else I've encountered them.
Behaviors differ from Events primarily in that a Behavior has a value right now, while an Event only has a value whenever a new event comes in.
So what do we mean by "right now"? Technically all changes are implemented as push or pull semantics over event streams, so we can only possibly mean "the most recent value as of the last event of consequence for this Behavior". But that's a fairly hairy concept; in practice "now" is much simpler.
The reasoning for why "now" is simpler comes down to the API. Here are two examples from Reactive Banana.
Eventually an FRP system must always produce some kind of externally visible change. In Reactive Banana this is facilitated by the reactimate :: Event (IO ()) -> Moment () function, which consumes event streams. There is no way to have a Behavior trigger external changes; you always have to do something like reactimate (someBehavior <@ sampleTickEvent) to sample the behavior at concrete times.
Behaviors are Applicatives, unlike Events. Why? Well, let's assume Event were an Applicative and think about what happens when we have two event streams f and x and write f <*> x: since events occur at all different times, the chance of f and x being defined simultaneously is (almost certainly) 0. So f <*> x would always mean the empty event stream, which is useless.
What you really want is for f <*> x to cache the most recent value of each and take their combined value "all of the time". That's a really confusing concept to talk about in terms of an event stream, so instead let's consider f and x as taking values at all points in time. Now f <*> x is also defined at all points in time. We've just invented Behaviors.
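As a toy illustration of that invention (my own sketch, using the standard function-of-time denotation rather than any particular library's types), the Applicative instance falls out immediately once a Behavior is a function of time:

type Time = Double

newtype Behavior a = Behavior (Time -> a)

-- Sample a behavior at a point in time.
at :: Behavior a -> Time -> a
at (Behavior f) = f

instance Functor Behavior where
  fmap f (Behavior g) = Behavior (f . g)

instance Applicative Behavior where
  pure x = Behavior (const x)                            -- defined at every instant
  Behavior f <*> Behavior x = Behavior (\t -> f t (x t)) -- combined pointwise in time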
Because it was the simplest way I could think of to give a precise denotation (implementation-independent meaning) to the notion of behaviors, including the sorts of operations I wanted, including differentiation and integration, as well as tracking one or more other behaviors (including but not limited to user-generated behavior).
Why? Time being the dependency/parameter for varying values is very uncommon.
I suspect that you're confusing the construction (recipe) of a behavior with its meaning. For instance, a behavior might be constructed via a dependency on something like user input, possibly with additional synthetic transformation. So there's the recipe. The meaning, however, is simply a function of time, related to the time-function that is the user input. Note that by "function", I mean in the math sense of the word: a (deterministic) mapping from domain (time) to range (value), not in the sense that there's a purely programmatic description.
I've seen many questions asking why time matters and why continuous time. If you apply the simple discipline of giving a mathematical meaning in the style of denotational semantics (a simple and familiar style for functional programmers), the issues become much clearer.
If you really want to grok the essence of and thinking behind FRP, I recommend you read my answer to "Specification for a Functional Reactive Programming language" and follow pointers, including "What is Functional Reactive Programming".
Conal Elliott's Push-Pull FRP paper describes event-varying data, where the only points in time that are interesting are when events occur. Reactive event-varying data is the current value and the next Event that will change it. An Event is a Future point in the event-varying Reactive data.
data Reactive a = a `Stepper` Event a
newtype Event a = Ev (Future (Reactive a))
The Future doesn't need to have a time associated with it; it just needs to represent the idea of a value that hasn't happened yet. In an impure language with events, for example, a future can be an event handle and a value. When the event occurs, you set the value and raise the handle.
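For example, here is a hedged Haskell sketch of such a time-free Future using an MVar (my own illustration; names like fulfill are hypothetical, not from the paper):

import Control.Concurrent.MVar

newtype Future a = Future (MVar a)

-- An empty future: its value hasn't happened yet.
newFuture :: IO (Future a)
newFuture = Future <$> newEmptyMVar

-- Called by the event source when the occurrence happens.
fulfill :: Future a -> a -> IO ()
fulfill (Future v) = putMVar v

-- Blocks until the occurrence; note that no timestamp is involved.
await :: Future a -> IO a
await (Future v) = readMVar v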
Reactive a has a value for a at all points in time, so why would we need Behaviors? Let's make a simple game. In between when the user presses the WASD keys, the character, accelerated by the force applied, still moves on the screen. The character's position at different points in time is different, even though no event has occurred in the intervening time. This is what a Behavior describes - something that not only has a value at all points in time, but its value can be different at all points in time, even with no intervening events.
One way to describe Behaviors would be to repeat what we just stated: Behaviors are things that can change in between events. In between events they are time-varying values, or functions of time.
type Behavior a = Reactive (Time -> a)
We don't need Behavior; we could simply add events for clock ticks and write all of the logic of our entire game in terms of these tick events. This is undesirable to some developers, as the code declaring what our game is becomes intermingled with the code specifying how it is implemented. Behaviors allow the developer to separate this logic into the description of the game in terms of time-varying values and the implementation of the engine that executes that description.

Maintaining complex state in Haskell

Suppose you're building a fairly large simulation in Haskell. There are many different types of entities whose attributes update as the simulation progresses. Let's say, for the sake of example, that your entities are called Monkeys, Elephants, Bears, etc.
What is your preferred method for maintaining these entities' states?
The first and most obvious approach I thought of was this:
mainLoop :: [Monkey] -> [Elephant] -> [Bear] -> String
mainLoop monkeys elephants bears =
  let monkeys'   = updateMonkeys monkeys
      elephants' = updateElephants elephants
      bears'     = updateBears bears
  in if shouldExit monkeys elephants bears
       then "Done"
       else mainLoop monkeys' elephants' bears'
It's already ugly having each type of entity explicitly mentioned in the mainLoop function signature. You can imagine how it would get absolutely awful if you had, say, 20 types of entities. (20 is not unreasonable for complex simulations.) So I think this is an unacceptable approach. But its saving grace is that functions like updateMonkeys are very explicit in what they do: They take a list of Monkeys and return a new one.
So then the next thought would be to roll everything into one big data structure that holds all state, thus cleaning up the signature of mainLoop:
mainLoop :: GameState -> String
mainLoop gs0 =
  let gs1 = updateMonkeys gs0
      gs2 = updateElephants gs1
      gs3 = updateBears gs2
  in if shouldExit gs0
       then "Done"
       else mainLoop gs3
Some would suggest that we wrap GameState up in a State Monad and call updateMonkeys etc. in a do. That's fine. Some would rather suggest we clean it up with function composition. Also fine, I think. (BTW, I'm a novice with Haskell, so maybe I'm wrong about some of this.)
But then the problem is, functions like updateMonkeys don't give you useful information from their type signature. You can't really be sure what they do. Sure, updateMonkeys is a descriptive name, but that's little consolation. When I pass in a god object and say "please update my global state," I feel like we're back in the imperative world. It feels like global variables by another name: You have a function that does something to the global state, you call it, and you hope for the best. (I suppose you still avoid some concurrency problems that would be present with global variables in an imperative program. But meh, concurrency isn't nearly the only thing wrong with global variables.)
A further problem is this: Suppose the objects need to interact. For example, we have a function like this:
stomp :: Elephant -> Monkey -> (Elephant, Monkey)
stomp elephant monkey =
  (elongateEvilGrin elephant, decrementHealth monkey)
Say this gets called in updateElephants, because that's where we check to see if any of the elephants are in stomping range of any monkeys. How do you elegantly propagate the changes to both the monkeys and elephants in this scenario? In our second example, updateElephants takes and returns a god object, so it could effect both changes. But this just muddies the waters further and reinforces my point: With the god object, you're effectively just mutating global variables. And if you're not using the god object, I'm not sure how you'd propagate those types of changes.
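The best alternative I can see is to have the interaction step consume and return exactly the collections it touches, so that the signature at least says what changes. A quick sketch with hypothetical minimal types:

data Elephant = Elephant { grin   :: Int } deriving Show
data Monkey   = Monkey   { health :: Int } deriving Show

stomp :: Elephant -> Monkey -> (Elephant, Monkey)
stomp e m = (e { grin = grin e + 1 }, m { health = health m - 1 })

-- Resolve all stomps; the range check is stubbed out so that every
-- elephant stomps the first monkey, just to keep the sketch short.
stompAll :: [Elephant] -> [Monkey] -> ([Elephant], [Monkey])
stompAll es []       = (es, [])
stompAll es (m : ms) =
  let (es', m')        = foldr step ([], m) es
      step e (acc, mk) = let (e', mk') = stomp e mk in (e' : acc, mk')
  in (es', m' : ms)

But the pair-threading already feels clumsy, and it would only get worse once three or more kinds of entities interact.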
What to do? Surely many programs need to manage complex state, so I'm guessing there are some well-known approaches to this problem.
Just for the sake of comparison, here's how I might solve the problem in the OOP world. There would be Monkey, Elephant, etc. objects. I'd probably have class methods to do lookups in the set of all live animals. Maybe you could look up by location, by ID, whatever. Thanks to the data structures underlying the lookup functions, they'd stay allocated on the heap. (I'm assuming GC or reference counting.) Their member variables would get mutated all the time. Any method of any class would be able to mutate any live animal of any other class. E.g. an Elephant could have a stomp method that would decrement the health of a passed-in Monkey object, and there would be no need to pass that change back up the call stack.
Likewise, in an Erlang or other actor-oriented design, you could solve these problems fairly elegantly: Each actor maintains its own loop and thus its own state, so you never need a god object. And message passing allows one object's activities to trigger changes in other objects without passing a bunch of stuff all the way back up the call stack. Yet I have heard it said that actors in Haskell are frowned upon.
The answer is functional reactive programming (FRP). It is a hybrid of two coding styles: component state management and time-dependent values. Since FRP is actually a whole family of design patterns, I want to be more specific: I recommend Netwire.
The underlying idea is very simple: You write many small, self-contained components each with their own local state. This is practically equivalent to time-dependent values, because each time you query such a component you may get a different answer and cause a local state update. Then you combine those components to form your actual program.
While this sounds complicated and inefficient, it's actually just a very thin layer around regular functions. The design pattern implemented by Netwire is inspired by AFRP (Arrowized Functional Reactive Programming). It's probably different enough to deserve its own name (WFRP?). You may want to read the tutorial.
In any case a small demo follows. Your building blocks are wires:
myWire :: WireP A B
Think of this as a component. It is a time-varying value of type B that depends on a time-varying value of type A, for example a particle in a simulator:
particle :: WireP [Particle] Particle
It depends on a list of particles (for example all currently existing particles) and is itself a particle. Let's use a predefined wire (with a simplified type):
time :: WireP a Time
This is a time-varying value of type Time (= Double). Well, it's time itself (starting at 0 counted from whenever the wire network was started). Since it doesn't depend on another time-varying value you can feed it whatever you want, hence the polymorphic input type. There are also constant wires (time-varying values that don't change over time):
pure 15 :: WireP a Integer
-- or even:
15 :: WireP a Integer
To connect two wires you simply use categorical composition:
integral_ 3 . 15
This gives you a clock at 15x real time speed (the integral of 15 over time) starting at 3 (the integration constant). Thanks to various class instances wires are very handy to combine. You can use your regular operators as well as applicative style or arrow style. Want a clock that starts at 10 and is twice the real time speed?
10 + 2*time
Want a particle that starts at (0, 0) with (0, 0) velocity and accelerates at (2, 1) per second per second?
integral_ (0, 0) . integral_ (0, 0) . pure (2, 1)
Want to display statistics while the user presses the spacebar?
stats . keyDown Spacebar <|> "stats currently disabled"
This is just a small fraction of what Netwire can do for you.
I know this is an old topic, but I am facing the same problem right now while trying to implement the Rail Fence cipher exercise from exercism.io. It is quite disappointing to see such a common problem get such poor attention in Haskell. I don't accept that to do something as simple as maintaining state I need to learn FRP. So I continued googling and found a solution that looks more straightforward: the State monad: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/State
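For reference, here is a minimal sketch of that State-monad approach applied to the simulation above (my own toy example; GameState here just holds counters standing in for the real entity lists):

import Control.Monad (replicateM_)
import Control.Monad.State

-- Hypothetical toy state; real code would hold [Monkey], [Elephant], ...
data GameState = GameState { monkeys :: Int, bears :: Int } deriving Show

updateMonkeys, updateBears :: State GameState ()
updateMonkeys = modify (\gs -> gs { monkeys = monkeys gs + 1 })
updateBears   = modify (\gs -> gs { bears   = bears gs + 1 })

-- One simulation step: the updates read like imperative code, but each
-- is still just a pure function GameState -> ((), GameState).
step :: State GameState ()
step = do
  updateMonkeys
  updateBears

main :: IO ()
main = print (execState (replicateM_ 3 step) (GameState 0 0))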

Barber Paradox: Why is this model inconsistent?

I would like to know why this model is inconsistent. We can have the following tuples in shaves:
shaves = {(man,man)}
sig Man {shaves: set Man}
one sig Barber extends Man {}
fact {
Barber.shaves = {m : Man | m not in m.shaves}
}
Barber.shaves will produce 0 tuples. In this case, the fact should be valid. So why does Alloy tell me that my model is inconsistent?
Would really appreciate some advice on this.
There is a simple explanation using only English.
The Alloy fact above says that the set of men that the barber shaves (there is only one barber, since one sig Barber) is exactly the set of all men who don't shave themselves. At first sight, this statement makes sense, because every man either shaves himself or is shaved by the barber.
The trick question is then "who shaves the barber?" If the barber doesn't shave himself, then the fact forces the barber to shave the barber, which contradicts the assumption that the barber doesn't shave himself. If the barber does shave himself, then the fact forces the barber not to shave the barber, which is again a paradox, so no instance can be found.
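The same contradiction can be seen by rewriting the fact's set equality in an equivalent quantified form and instantiating m with Barber:

fact {
  all m: Man | m in Barber.shaves <=> m not in m.shaves
}
-- Taking m = Barber forces: Barber in Barber.shaves <=> Barber not in Barber.shaves,
-- which no instance can satisfy.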
Okay, here is the solution. The idea is really simple, and I think the authors really ought to explain this in their book.
The crux of the confusion is in the fact statement.
Barber.shaves = {m:Man | m not in m.shaves}
Basically, what the above statement says is that any man in Barber.shaves must not shave himself: if m is in Barber.shaves, then (m, m) must never occur in the shaves relation. Conversely, m: Man indicates that every man who does not shave himself must be in Barber.shaves.
The problem occurs when we come to the Barber instance itself. If the barber is not in Barber.shaves, then he is a man who doesn't shave himself, so the comprehension over m: Man forces him to be in it. If the barber is included, then it violates m not in m.shaves, since shaves would contain (barber, barber).
I hope this clears up the confusion for beginners in Alloy like myself.
So given the current model in my question, Alloy cannot create an instance that satisfies this fact.

A math modelling interview question

What's the best way to relatively grade a class (of 50 students) on a test (with 7 questions)?
They did not want the traditional percentile-intervals answer, but a more CS-ey one.
It's a pretty open-ended question; they asked to assume the following framework:
Input
[m_1,...,m_50], where each m_i is a 7-vector for marks scored in the 7 questions for each of the 50 students.
[c_1,...,c_7], where each c_i is a vector of 'concepts' tested by each question. The c_i's need not be disjoint. We can assume an importance ordering on the elements of union(c_i).
Simplistic approach: assuming that all concepts have the same value, I would just sum it all up: one point for each concept everywhere.
Holistic approach: it could be that a question with more concepts is significantly harder than a question with fewer (and worth more than the sum of its concepts); concepts "interact" with each other. To account for this, I would put a weight of (N over C) on each question, where N is the number of concepts in the question and C is the total number of concepts, and then sum it all up.
True holistic approach: if concepts are repeated in different questions, then we should "tone down" their influence. However, I'm not sure how to accomplish this. Maybe we should divide each question's value by the number of repetitions of each concept involved.
I ignored the importance ordering of concepts, because I don't know how to put a value on that.
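A small Haskell sketch of the three schemes (my own illustration; it reads "(N over C)" as the ratio N/C and implements the toning-down as dividing each concept's contribution by the number of questions it appears in; both readings are assumptions). Here marks is one student's 7-vector of 0/1 question outcomes:

import qualified Data.Map.Strict as M

type Concept = String

-- How many questions each concept appears in.
conceptCounts :: [[Concept]] -> M.Map Concept Int
conceptCounts qs = M.fromListWith (+) [ (c, 1) | q <- qs, c <- q ]

-- Simplistic: one point per concept in every correctly answered question.
simplistic :: [[Concept]] -> [Int] -> Int
simplistic qs marks = sum [ m * length q | (m, q) <- zip marks qs ]

-- Holistic: weight each question by N/C, its share of all distinct concepts.
holistic :: [[Concept]] -> [Int] -> Double
holistic qs marks = sum [ fromIntegral m * weight q | (m, q) <- zip marks qs ]
  where
    total    = M.size (conceptCounts qs)
    weight q = fromIntegral (length q) / fromIntegral total

-- "True holistic": a concept appearing in k questions contributes 1/k each time.
trueHolistic :: [[Concept]] -> [Int] -> Double
trueHolistic qs marks =
  sum [ fromIntegral m * sum [ 1 / fromIntegral (cs M.! c) | c <- q ]
      | (m, q) <- zip marks qs ]
  where cs = conceptCounts qs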
