An inadmissible heuristic can cause A* to fail to find the optimal path to a goal. For example, suppose the search tree has only two branches:
A -1-> B -1-> C
A -3-> D
That is, the step from A to B costs 1, the step from B to C costs 1, and the step from A to D costs 3. A is the root and C and D are both goals.
If an inadmissible heuristic gives an estimate of 3 for B (while the goals C and D have h = 0), then f(B) = g(B) + h(B) = 1 + 3 = 4 exceeds f(D) = 3 + 0 = 3, so A* search will expand D before C, thus finding a path to a goal that is not the least expensive (cost 3 rather than 2).
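To see this concretely, here is a minimal A* sketch of that run (the dictionary encoding and names are mine, chosen only for illustration):

```python
import heapq

# Tiny A* on the two-branch tree above; h(B) = 3 overestimates B's true
# cost-to-goal of 1, which is what makes the heuristic inadmissible.
graph = {"A": [("B", 1), ("D", 3)], "B": [("C", 1)], "C": [], "D": []}
h = {"A": 0, "B": 3, "C": 0, "D": 0}
goals = {"C", "D"}

frontier = [(h["A"], 0, "A", ["A"])]        # entries are (f, g, node, path)
while frontier:
    f, g, node, path = heapq.heappop(frontier)
    if node in goals:
        print(path, "with cost", g)         # ['A', 'D'] with cost 3, not the cost-2 path
        break
    for succ, step in graph[node]:
        heapq.heappush(frontier, (g + step + h[succ], g + step, succ, path + [succ]))
```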
Now, consider the 8-puzzle. Suppose we implement a flawed Hamming distance heuristic that counts the blank as a tile. This is clearly inadmissible: for a state one move from the goal, both the misplaced tile and the blank are out of place, so the estimate is 2.
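For reference, the flawed heuristic would look something like this (a sketch; the flat tuple encoding with 0 as the blank is just one common convention):

```python
def flawed_hamming(state, goal):
    """Hamming distance that wrongly counts the blank (0) as a tile."""
    return sum(1 for s, g in zip(state, goal) if s != g)

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
one_move_away = (1, 2, 3, 4, 5, 6, 7, 0, 8)  # slide the 8 left to solve
print(flawed_hamming(one_move_away, goal))   # 2, although the true cost is 1
```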
My question: is there a (hopefully small) example where this causes A* to fail to find the goal closest to the root (or at least expand more nodes than necessary)?
I have almost no knowledge of multithreading, but a problem I am working on right now is perhaps a good opportunity to get started on learning concurrency and parallelism. I am hoping to get some pedagogical advice directed towards my specific problem.
The Abstract Problem
In the abstract, I would like to write a program with N tasks such that:
N is predetermined.
Some partial information computed by the n-th task is enough for the (n+1)-th task to start. However, we don't know beforehand how much partial information is enough. The n-th task is responsible for determining that.
So what I am hoping for is to have N threads, and the important point is that the n-th thread figures out during runtime when the (n+1)-th thread can begin. Can this be achieved with multithreading? I am familiar with Java and C++. What tools/libraries should I look into to begin with?
The Concrete Problem
The sieve of Eratosthenes is an algorithm for finding the prime numbers up to a given upper limit by successively marking composite numbers as multiples of known primes. Most composite numbers are multiples of more than one prime and will be found multiple times, so looking for multiples of 2 and 3 in parallel can cause a data race. However, the algorithm can be modified so that each composite number is found exactly once. Suppose we want to find all primes between 2 and K.
1. Mark 2 as prime and mark all multiples of 2 as composite. We have now marked every number whose smallest prime factor is 2.
2. The smallest unmarked number is 3, so 3 is the next prime after 2. Multiply 3 by every number up to K/3 that is left unmarked after step 1. The products are exactly the numbers whose smallest prime factor is 3. Mark these numbers.
3. The smallest unmarked number is now 5, so 5 is the next prime after 2 and 3. Multiply 5 by every number up to K/5 that is left unmarked after step 2 to find all the numbers whose smallest prime factor is 5. Mark these numbers.
4. Repeat until the primes reach floor(sqrt(K)).
Note that, for instance, step 3 can begin while step 2 is running. Early on in step 2 we find 5 as the smallest unmarked number and hence the next prime number. Therefore, once step 2 has dealt with all numbers up to K/5, we know step 3 can begin. Step 3 will not interfere with step 1 or step 2 because each composite number is found exactly once.
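Here is one deliberately simple Python sketch of that hand-off pattern (all names and the overall design are mine, one possibility among many). Each stage owns one prime, spawns the next stage as soon as it discovers the next prime, and publishes how far its marking has progressed so later stages know which entries are final. In Java, java.util.concurrent offers analogous primitives (e.g. Condition or CountDownLatch); note also that CPython's GIL means this demonstrates the coordination, not a real speedup:

```python
import threading
from math import isqrt

K = 10_000
spf = [0] * (K + 1)      # spf[n] = smallest prime factor of n, 0 while unknown
progress = {}            # prime p -> largest product finalized by p's stage
cond = threading.Condition()
threads = []

def stage(p, earlier):
    """Mark every composite whose smallest prime factor is p.

    `earlier` lists the primes of all preceding stages; spf[m] may only
    be read once each of those stages has decided every number <= m.
    """
    next_found = False
    for m in range(p, K // p + 1):
        with cond:
            cond.wait_for(lambda: all(progress[q] >= m for q in earlier))
            if spf[m] == 0 or spf[m] >= p:       # m has no prime factor below p
                if spf[m] == 0 and m > p and not next_found:
                    next_found = True            # m is the next prime, so its
                    if m <= isqrt(K):            # stage can begin right now
                        t = threading.Thread(target=stage, args=(m, earlier + [p]))
                        threads.append(t)
                        t.start()
                spf[p * m] = p                   # each composite is found exactly once
            progress[p] = p * m                  # stage p is now final up to p*m
            cond.notify_all()

t = threading.Thread(target=stage, args=(2, []))
threads.append(t)
t.start()
i = 0
while i < len(threads):                          # new stages may appear while joining
    threads[i].join()
    i += 1

print(sum(1 for n in range(2, K + 1) if spf[n] == 0), "primes up to", K)  # 1229
```

The key point for your question is the `wait_for`/`notify_all` pair: the n-th stage decides at runtime when the (n+1)-th stage starts, exactly as you describe.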
I have two sets of tokenised sentences A and B and I want to calculate the overlap between them in terms of common tokens. For example, the overlap between two individual sentences a1 today is a good day and b1 today I went to a park is 2 (today and a). I need a simple string matching method, without fuzzy or advanced methods. So the result is a matrix between all sentences in A and B with an overlap count for each pair.
The problem is that, while trivial, it is a quadratic operation (size of A x size of B pair-wise comparisons). With large data, the computation gets very slow very quickly. What would be a smart way of computing this avoiding pair-wise comparisons or doing them very fast? Are there packages/data structures particularly good for this?
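One common trick is to let a sparse matrix product do the counting instead of looping over pairs: build binary sentence-by-token incidence matrices for A and B, and their product is the whole overlap matrix at once. A sketch using scipy (the helper names are mine):

```python
from scipy.sparse import csr_matrix

A = [["today", "is", "a", "good", "day"],
     ["another", "sentence"]]
B = [["today", "I", "went", "to", "a", "park"]]

vocab = {}
def incidence(sentences):
    """Collect (row, col) pairs of a binary sentence-by-token matrix."""
    rows, cols = [], []
    for i, sent in enumerate(sentences):
        for tok in set(sent):                  # distinct tokens only
            rows.append(i)
            cols.append(vocab.setdefault(tok, len(vocab)))
    return rows, cols

ra, ca = incidence(A)
rb, cb = incidence(B)
MA = csr_matrix(([1] * len(ra), (ra, ca)), shape=(len(A), len(vocab)))
MB = csr_matrix(([1] * len(rb), (rb, cb)), shape=(len(B), len(vocab)))

overlap = MA @ MB.T        # overlap[i, j] = number of tokens shared by A[i] and B[j]
print(overlap.toarray())   # [[2], [0]]
```

Sparse multiplication only does work where tokens are actually shared, so for realistic vocabularies it is far cheaper than a dense size(A) x size(B) scan; sklearn's CountVectorizer(binary=True) could replace the hand-rolled incidence builder.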
There's a table with four coins at its corners, showing random initial faces. You're blindfolded, and each turn you have to choose a subset of coins to flip over. Your objective is to make them all face the same way.
There is also an adversary who, after you flip some coins, will rotate the table as much as they want during their turn. Their objective is to stop you from winning. Since you're blindfolded, you're not aware of how much the table has been rotated.
A sample game: you go first and flip the top and left coins. The adversary then rotates the table 180 degrees. On your next turn you flip the bottom and right coins, which after the rotation are the same two physical coins as before, so your two moves cancel out.
What is the strategy to win?
I'm using the following moves:
1 : Flip a single coin (e.g. the one in front of you)
D : (Diagonal) Flip two opposite coins (the one in front of you, the one in front of your adversary)
A : (Adjacent) Flip two adjacent coins (the one in front of you and the one on the right)
Then the sequence
D A D 1 D A D
always passes through a winning state!
This can be proved by case analysis:
You don't start in a winning position, so there is at least one head and at least one tail.
I assume first that there are 2 heads and 2 tails.
Remark that, in this case, any D or A move either wins or keeps 2 heads and 2 tails.
2a. If the two heads are diagonally opposite, then D wins (it flips either both heads or both tails).
2b. If not, the two heads are adjacent, and D doesn't change the state up to rotation.
Then if you play A, either you win or you get two diagonally opposite heads, so you are back in 2a.
Summary: D A D wins if there are 2 heads and 2 tails.
If not, D A D keeps a state with one coin of one face and three of the other (each move flips two coins, so the parity of the number of heads never changes).
So if D A D didn't win, you know that you are in such a state.
Now if you flip a single coin, either you win or you end up in a 2-heads-2-tails state. Therefore another D A D wins.
So
D A D 1 D A D
always wins!!!
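If you want to double-check the case analysis mechanically, a brute-force search over every non-uniform start and every sequence of table rotations is tiny. A sketch (I model a rotation before each of your moves, which covers any adversary behaviour, since your move sequence is fixed):

```python
from itertools import product

# Moves in your own frame (position 0 = the coin in front of you,
# going clockwise), matching the 1 / D / A notation above.
ONE, DIAG, ADJ = {0}, {0, 2}, {0, 1}
SEQUENCE = [DIAG, ADJ, DIAG, ONE, DIAG, ADJ, DIAG]

def wins(start, rotations):
    s = start
    for move, r in zip(SEQUENCE, rotations):
        s = s[r:] + s[:r]                           # adversary turns the table
        s = tuple(b ^ (i in move) for i, b in enumerate(s))
        if len(set(s)) == 1:                        # all heads or all tails
            return True
    return False

starts = [s for s in product((0, 1), repeat=4) if len(set(s)) > 1]
assert all(wins(s, rots)
           for s in starts
           for rots in product(range(4), repeat=7))
print("D A D 1 D A D always wins")
```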
I don't know the English name, but in French this is a classic problem known as "Le barman aveugle" (the blind bartender). There are many pages about this problem, e.g.:
This page
EDIT: I just discovered an English page on Wikipedia
Note that on every turn there are precisely 2 subsets that are winning moves (flip exactly the coins showing heads, or exactly those showing tails). The total number of subsets is 2^4 = 16. Therefore, on every turn there is a probability of 2/16 = 1/8 of winning instantly if you choose a subset at random, where the universe is {1, 2, 3, 4} and 1 denotes the coin in front of you, 2 its neighbour in clockwise order, and so on.
If the number of rounds is unbounded, one winning strategy is to repeatedly 'guess' a subset of the coins to flip over. The probability of winning within the first n turns is 1 - (7/8)^n, which is strictly increasing in n and tends to 1. You will win almost surely.
Your moves are independent of each other: Your strategy does not incorporate any information from previous turns.
Your adversary does not have any strategy to counter your efforts. Turning the table amounts to relabelling the coins in the set you draw from. You do not exploit the labelling in choosing the subset, therefore the adversary's actions cannot foil your strategy. In particular, after your k-th turn, each of your possible subset choices in turn k+1 has the same likelihood to occur and does not depend on the adversary's action.
To be precise, the relabelling is not completely arbitrary - only 4 of the 4! = 24 possible relabellings (the cyclic rotations) can be implemented by turning the table. Again, while this might enable a more efficient strategy for you, it cannot harm you, as you do not exploit the information.
Refinement
Never choose 0 or 4 coins as your subset, as this can never be a winning move (flipping nothing, or flipping all four coins, produces a uniform configuration only if you already started in one). Thus the probability of an instant win is now 2/(16-2) = 1/7, and the probability of winning within the first n turns becomes 1 - (6/7)^n. This refinement has no effect on the general reasoning behind the strategy.
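A quick simulation of the refined strategy reproduces the 1/7 instant-win rate (a sketch; since a uniformly random subset is unaffected by relabelling, the adversary's rotation is omitted):

```python
import random
from itertools import combinations

# The 14 sensible moves: flip 1, 2 or 3 of the 4 coins, never 0 or 4.
MOVES = [set(c) for k in (1, 2, 3) for c in combinations(range(4), k)]

def turns_to_win(rng):
    s = [rng.randint(0, 1) for _ in range(4)]
    while len(set(s)) == 1:                      # redraw already-won starts
        s = [rng.randint(0, 1) for _ in range(4)]
    turns = 0
    while len(set(s)) > 1:
        move = rng.choice(MOVES)
        s = [b ^ (i in move) for i, b in enumerate(s)]
        turns += 1
    return turns

rng = random.Random(0)
samples = [turns_to_win(rng) for _ in range(100_000)]
print(sum(t == 1 for t in samples) / len(samples))   # close to 1/7 = 0.1428...
```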
I've read many articles about the Monte Carlo algorithm for approximating the preflop equity in NL holdem poker.
Unfortunately, it iterates over only a few possible boards to see what happens. The good thing about this is that you can put in exact hand ranges.
Well, I don't need exact ranges. It's good enough to say "Top 20% vs Top 35%".
Is there a simple formula to tell (or approximate) the likelihood of winning or losing? We can ignore splits here.
I can imagine that calculating the odds becomes much simpler if we just use two (percentile) numbers instead of all possible card combinations.
The thing is, I don't know if for example the case "Top 5% vs Top 10%" is equal to "Top 10% vs Top 20%".
Does anyone know of a usable relation or a formula for these inputs?
Thanks
Okay, I've done a bit of analytical work and I came up with the following.
The Formula
eq_a(a, b) := 1/2 - 1/(6*ln(10)) * ln(a/b)
Or if you like:
eq_a(a, b) := 0.5 - 0.072382 * ln(a/b)
Here a is the range for player a, expressed as a fraction (0 to 1); same for b.
The function outputs the equity for player a. To get the equity for player b just swap the two ranges.
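In code, this is just a direct transcription (a sketch, nothing more):

```python
import math

def eq_a(a, b):
    """Approximate preflop equity of 'Top a' vs 'Top b' (a, b as fractions)."""
    return 0.5 - math.log(a / b) / (6 * math.log(10))

print(eq_a(0.05, 0.10))   # ~0.55: the tighter range is a modest favourite
print(eq_a(0.10, 1.00))   # 0.666...: matches the calibration point eq_a(0.1, 1) = 2/3
print(eq_a(0.30, 0.30))   # 0.5: equal ranges split the equity
```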
Plotting the function (with a on the x-axis and b on the y-axis) shows that it's very hard to get an equity greater than 80% preflop (even AA usually isn't that big a favourite).
How I came up with this
After some analysis I became aware of the fact that the probability of winning depends only on the ratio of the two ranges (the same holds for multiway pots).
So:
eq_a(a, b) = eq_a(a * h, b * h)    (for any scale factor h)
And yes, Top 5% vs Top 10% has the same equities as Top 50% vs Top 100%.
To get the formula, I ran regressions on sample data I had calculated with an app and picked the best fit (the logarithmic one). Then I pinned it down using special cases like eq_a(0.1, 1) = 2/3 and eq_a(a, a) = 1/2.
It would be great if someone would do the same work for multiway preflop all-ins.
I have a dataset of size (61573, 25). The rows represent users whereas the columns represent view counts on particular movie genres. For example, data[i, j] == 3 means that user i has viewed 3 movies of genre j in total. As expected, rows are sparse and right-skewed.
What I would like to do is compute how engaged a user is with each of the 25 movie genres by assigning one of the following tags: {VL, L, A, H, VH}.
What I have tried so far is to compute z-scores, either row- or column-wise (I haven't tried standardising twice, though, i.e. first on rows and then on columns), and then apply the following mapping depending on how far the z-scores are from 0:
(-oo, -2] --> VL
(-2, -1] --> L
(-1, +1) --> A
[+1, +2) --> H
[+2, +oo) --> VH
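In numpy terms, that attempt looks roughly like this (a sketch; random Poisson counts stand in for the real matrix, and np.digitize's bin edges only approximate the interval boundaries above):

```python
import numpy as np

# Column-wise z-scores, then map each score to one of the five bands.
data = np.random.poisson(2.0, size=(61573, 25)).astype(float)

z = (data - data.mean(axis=0)) / data.std(axis=0)
labels = np.array(["VL", "L", "A", "H", "VH"])
tags = labels[np.digitize(z, bins=[-2, -1, 1, 2])]
print((tags == "A").mean())   # the vast majority of cells end up 'A'
```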
In either case, my problem is that the results look very bad in most cases, probably because most z-scores lie between -1 and +1 and are thus almost always mapped to A (i.e. average). So, what else should I try? How would YOU approach this problem?
The z-scores clearly are not the right way to go.
The reason is that they are based on the assumption that your data is normally distributed. I have strong doubts that your data is normally distributed - in particular, it probably doesn't have any negative values, does it?
Have you tried just using quantiles? top 10%, bottom 10% etc.?
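For instance (a sketch; the cut points are illustrative, and with heavily skewed counts several quantiles may coincide, collapsing ties into the upper band):

```python
import numpy as np

# Per-genre quantile tags instead of z-scores; random Poisson counts
# again stand in for the real (61573, 25) matrix.
data = np.random.poisson(2.0, size=(61573, 25))

cuts = np.quantile(data, [0.10, 0.30, 0.70, 0.90], axis=0)  # per-column cut points
labels = np.array(["VL", "L", "A", "H", "VH"])
tags = np.empty(data.shape, dtype="<U2")
for j in range(data.shape[1]):                  # tag each genre separately
    tags[:, j] = labels[np.digitize(data[:, j], cuts[:, j])]
print(dict(zip(*np.unique(tags, return_counts=True))))
```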