In the 8 queens problem, which search strategy would be more efficient at finding a solution, and why would this be the case?
For anyone who doesn't know, the 8 queens problem asks how to arrange 8 queens on an 8x8 chess board so that no two of them attack each other.
Thanks.
With First Fit Decreasing and Tabu Search, and a bit of tweaking, OptaPlanner could easily handle the 5000 queens problem on an old laptop.
If I recall correctly, depth-first and breadth-first search don't scale above 20 queens, and brute force doesn't scale beyond 12 queens. Try it yourself; your mileage may vary.
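To see why depth-first search with pruning goes further than brute force, here is a minimal backtracking sketch in Python (the place-one-queen-per-row encoding is a common choice, not taken from either answer above):

def solve_n_queens(n):
    """Depth-first backtracking: place one queen per row, prune conflicts early."""
    def safe(queens, col):
        row = len(queens)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(queens))

    def extend(queens):
        if len(queens) == n:           # all rows filled: found a solution
            return queens
        for col in range(n):
            if safe(queens, col):
                result = extend(queens + [col])
                if result:
                    return result
        return None                    # dead end: backtrack

    return extend([])

print(solve_n_queens(8))  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]

Brute force would test all 8^8 (or C(64,8)) arrangements; the pruning in safe() abandons a branch as soon as two queens conflict, which is what lets depth-first search reach much larger boards.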
When I was a child, I had a square-board tetris puzzle, and the instructions said there are >15 thousand solutions. I've always wondered how they counted the solutions, and I'd like to replicate that.
So I would like to search for divisions of NxN boards into 4-cell and 5-cell fragments. Rotation is allowed, flipping is not.
Problem A: Determine if a set of blocks can be assembled into a NxN grid.
Problem B: Find divisions without rectangular subdivisions, except for single fragments (4 cells, that is, 2x2 and 1x4).
I'm thinking of constraint programming. But if I encode the presence of each wall as a boolean, how can I efficiently count the blocks? Is it better to work in terms of fragments and check for rectangles later?
What other technique could help?
note: this is just for fun, and not homework or anything.
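One way to sidestep the wall-counting issue is to work in terms of fragment placements: precompute every legal position of every 4- or 5-cell fragment, introduce one boolean per placement, and require each cell to be covered exactly once (the exact-cover formulation). A minimal sketch with OR-tools CP-SAT, assuming the placement list is precomputed elsewhere:

from ortools.sat.python import cp_model

def solve_division(n, placements):
    """placements: list of cell sets, one per legal position of any
    4- or 5-cell fragment (rotations included, flips excluded)."""
    model = cp_model.CpModel()
    use = [model.NewBoolVar(f"p{i}") for i in range(len(placements))]
    for r in range(n):
        for c in range(n):
            # exactly one chosen placement covers each cell
            covering = [use[i] for i, p in enumerate(placements) if (r, c) in p]
            model.Add(sum(covering) == 1)
    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [p for i, p in enumerate(placements) if solver.Value(use[i])]
    return None

Counting all divisions (as with the >15 thousand solutions on the childhood puzzle) can then use CP-SAT's solution enumeration, and the no-rectangular-subdivision condition from Problem B can be filtered per solution.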
I once took on solving a pentomino variant in Prolog.
Program:
http://www2.koyahatataku.com/programming/pent.txt
Result:
http://www2.koyahatataku.com/programming/pent_out.txt
I tried a naive brute-force depth-first search algorithm.
I'm wondering how you can quantify the results of the Needleman-Wunsch algorithm (typically used for aligning nucleotide/protein sequences).
Consider some fixed scoring scheme and two sequences S1 and S2 of varying lengths. Say we calculate every possible alignment of S1 and S2 by brute force, and the highest-scoring alignment has a score x. Of course, this has considerably higher complexity than the Needleman-Wunsch approach.
Now say that the alignment found by the Needleman-Wunsch algorithm has a score y.
Consider r to be the score generated via Needleman-Wunsch for two random sequences R1 and R2.
How does x compare to y? Is y always greater than r for two sequences of known homology?
In general, I do understand that we use the Needleman-Wunsch algorithm to significantly speed up sequence alignment (vs a brute-force approach), but don't understand the cost in accuracy (if any) that comes with it. I had a go at reading the original paper (Needleman & Wunsch, 1970) but am still left with this question.
Needleman-Wunsch always produces an optimal answer: it's much faster than brute force and doesn't sacrifice accuracy in the process. The key insight it uses is that it's not actually necessary to generate all possible alignments, since most of them contain bad sub-alignments and couldn't possibly be optimal. Instead, the algorithm builds up optimal alignments for fragments of the original strands, then grows those smaller alignments into larger ones, using the guarantee that any optimal alignment must contain an optimal alignment for a slightly smaller case.
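For concreteness, here is a minimal sketch of that dynamic program (the match/mismatch/gap scores are illustrative; real scoring schemes use substitution matrices and affine gap penalties):

def needleman_wunsch(s1, s2, match=1, mismatch=-1, gap=-1):
    """score[i][j] = optimal score for aligning s1[:i] with s2[:j]."""
    m, n = len(s1), len(s2)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i-1][j-1] + (match if s1[i-1] == s2[j-1] else mismatch)
            score[i][j] = max(diag,                 # align the two characters
                              score[i-1][j] + gap,  # gap in s2
                              score[i][j-1] + gap)  # gap in s1
    return score[m][n]

print(needleman_wunsch('GATTACA', 'GCATGCU'))  # 0 with these toy scores

In the question's notation this means y = x: the table is filled in O(len(S1) * len(S2)) time, yet its last cell holds exactly the brute-force optimum.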
I think your question boils down to whether dynamic programming finds the optimal solution, i.e., guarantees that y >= x. For a discussion of this, I would refer you to people who are likely smarter than me:
https://cs.stackexchange.com/questions/23599/how-is-dynamic-programming-different-from-brute-force
Basically, it says that dynamic programming produces the same optimal result as brute force, but only for problems that satisfy Bellman's principle of optimality.
According to the Wikipedia page for Needleman-Wunsch, the problem does satisfy Bellman's principle of optimality:
https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm
Specifically:
The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. However, the algorithm is expensive with respect to time and space, proportional to the product of the length of two sequences and hence is not suitable for long sequences.
There is also mention of optimality elsewhere in the same Wikipedia page.
I've been asked to guess the user's intention when part of the expected data is missing. For example, if I'm expecting either very well or not very well but get only not, then I should flag it as not very well.
The Levenshtein distance for not and very well is 9, and the distance for not and not very well is 10. I think I'm actually trying to drive a screw with a wrench, but our team has already agreed to use Levenshtein for this case.
Given the problem above, is there any way I can make some sense out of this by changing the insertion, replacement and deletion costs?
P.S. I'm not looking for a hack for this particular example. I want something that generally works as expected and produces a better result in cases like this as well.
The Levenshtein distance for not and very well is actually 12. The alignment is:
------not
very well
So there are 6 insertions with a total cost of 6 (cost 1 for each insertion), and 3 replacements with a total cost of 6 (cost 2 for each replacement). The total cost is 12.
The Levenshtein distance for not and not very well is 10. The alignment is:
not----------
not very well
This includes only 10 insertions. So you can choose not very well as the best match.
The cost and alignment can be computed with the htql package for Python:
import htql
a=htql.Align()
a.align('not', 'very well')
# (12.0, ['------not', 'very well'])
a.align('not', 'not very well')
# (10.0, ['not----------', 'not very well'])
You have 12 shapes, each of which is made out of five identical squares.
You need to combine the 12 pieces into one rectangle.
You can form four different rectangles:
2339 solutions (6x10), 2 solutions (3x20), 368 solutions (4x15), 1010 solutions (5x12).
I need to build the 3x20 rectangle.
My question: what is the maximum number of states (i.e., the branching factor) possible?
My halfway calculation:
The way I see it, there are 4 operations on each shape: rotating it 90/180/270 degrees and mirroring it (turning it upside down).
Then you have to put the shape somewhere on the 3x20 board.
Illegal states will be ones where the shape doesn't fit on the board, but they are still states.
For the first move, you can choose each shape in 4 ways, which gives 4x12 ways, and then you need to multiply by the number of positions the shape can be in; that gives the number of states. But how can I calculate the number of positions?
Please help me with this calculation; it is very important, and it is not some kind of homework that I'm trying to avoid.
I think there is no easy and 'intelligent' way to list the solutions (or states) of pentomino puzzles; you have to try all the possibilities. Recursive programming, or backtracking, is the way to do it. You should check this solution, which also has Java source code available. Hopefully that points you in the right direction.
There is also a Python solution that is perhaps more readable.
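As for the position-counting part of the question: the number of placements of one piece follows from enumerating its distinct orientations (rotations and mirrors, as described above) and sliding each orientation's bounding box across the 3x20 board. A rough sketch; the L-pentomino below is just one illustrative shape:

def orientations(cells):
    """All distinct rotations and mirror images of a shape, normalized."""
    shapes = set()
    cur = cells
    for _ in range(4):
        cur = {(c, -r) for r, c in cur}             # rotate 90 degrees
        for s in (cur, {(r, -c) for r, c in cur}):  # the rotation and its mirror
            minr = min(r for r, _ in s)
            minc = min(c for _, c in s)
            shapes.add(frozenset((r - minr, c - minc) for r, c in s))
    return shapes

def count_positions(cells, rows=3, cols=20):
    total = 0
    for s in orientations(cells):
        h = 1 + max(r for r, _ in s)   # bounding-box height
        w = 1 + max(c for _, c in s)   # bounding-box width
        if h <= rows and w <= cols:
            total += (rows - h + 1) * (cols - w + 1)
    return total

L = {(0, 0), (1, 0), (2, 0), (3, 0), (3, 1)}  # the L-pentomino
print(count_positions(L))

Summing count_positions over all 12 pieces gives the legal first-move branching factor; symmetric pieces have fewer than 8 distinct orientations, which the deduplication above handles automatically.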
I'm trying to write an audio application.
I can play a sine wave at any frequency from 20 Hz to 20 kHz. My question is: how can I convert frequencies to keyboard notes in order to create a virtual keyboard (or piano)? Is there some kind of formula to achieve this?
The programming language I use is not important, because I don't want other tools to calculate it for me. I want to write it myself, so I need to understand the math behind it. Thanks.
Update:
I found the following URL: http://www.reverse-engineering.info/Audio/bwl_eq_info.pdf
It contains the octave frequency chart. Do I need to store that list, or is there a formula that can be used instead?
There are a few different ways to tune instruments. The most commonly used for pianos is 12-tone equal temperament, a formula for which can be found here. The idea is that each pair of adjacent notes has the same frequency ratio.
See also equal temperament on Wikipedia.
You can calculate the frequency of a tone as
f = 440 * exp(x * ln(2) / 12)    (equivalently, f = 440 * 2^(x/12))
where x is the number of semitones above the A in the middle of the piano keyboard (A4 = 440 Hz), negative for notes below it.
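A small sketch of that formula in Python, building an equal-temperament table for the 88 piano keys (key 49 is the A4 = 440 Hz reference; the note naming is just one convention):

NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def semitone_to_freq(x, a4=440.0):
    """x = semitones above A4 (negative below it)."""
    return a4 * 2 ** (x / 12)

# An 88-key piano runs from A0 (key 1) to C8 (key 88); A4 is key 49.
for key in range(1, 89):
    x = key - 49                 # semitones relative to A4
    midi = x + 69                # A4 is MIDI note 69
    name = NAMES[midi % 12] + str(midi // 12 - 1)
    print(key, name, round(semitone_to_freq(x), 2))

This should reproduce the octave chart from the question's PDF, so storing the list isn't necessary.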
First, you need to know about A440. This is the "standard" pitch to tune everything else against.
Double the frequency to raise an octave; halve the frequency to drop an octave. It's clear from this that the tones are logarithmic relative to the frequencies.
There are multiple systems for deciding where on the logarithmic line the rest of the notes fall. A straightforward approach is to divide the semitones geometrically along the logarithmic scale (which is the approach xofon's answer uses), but there may be better ways.
A full reference of P2F (pitch-to-frequency) and F2P conversion functions; I use 69 instead of 57, though.
http://musicdsp.org/showone.php?id=125
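For reference, the MIDI-number form of those conversions with 69 as A4, as the comment above suggests (a sketch, not the code from that link):

import math

def pitch_to_freq(midi_note, a4=440.0):
    return a4 * 2 ** ((midi_note - 69) / 12)

def freq_to_pitch(freq, a4=440.0):
    return 69 + 12 * math.log2(freq / a4)

print(pitch_to_freq(60))      # middle C, ~261.63 Hz
print(freq_to_pitch(261.63))  # ~60.0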