I know how to implement union find in general, but I was thinking of whether there would be a way to utilize the set structure in python to achieve the same result.
For example, we can union sets pretty easily. But I'm not sure how to determine if two elements are in the same set using just sets.
So, I am wondering: is there a data structure in Python that supports such an operation, other than the usual union-find implementation?
You could always solve this problem by visualizing it as a set of trees whose nodes connect to each other via their roots, and then looking up the tree when you want to know whether two nodes are connected. If the two nodes you are comparing have the same root (they are in the same tree), then they are connected.
To connect two nodes, just go to the root of each tree they are in, and make one root become the parent of the other.
This video will give you a great intuition about it:
https://www.youtube.com/watch?v=YIFWCpquoS8&list=PLUX6FBiUa2g4YWs6HkkCpXL6ru02i7y3Q&index=1
The connection between the tree nodes can be made via pointers in a language which supports them, but if your language doesn't (Python), you can create your own pointers by storing positions and links in an array.
The array is laid out so that its positions represent your nodes, and the value at each position represents that node's link towards its root. At the beginning, each position in the array is filled with its own node number, because the nodes initially have no parent; but as you connect nodes, the roots change, and the array has to reflect this. In effect, the value stored at each position is the identifier of that node's parent, which you follow upwards to reach the root.
But try picturing the problem visually first, instead of thinking in terms of arrays and too many mathematical artifacts. Dealing with it visually makes the solution feel almost banal, and can be a good guide while writing the code.
I say this because I watched the video from Robert Sedgewick I just posted, with a graphical simulation of the solution, and implemented it myself without paying much attention to the code in his book. The intuition the video gave me was much more valuable than any of the mathematics.
It will help you to encapsulate the nodes into a class, with the following methods:
climbTreeFromNodeUpToRoot
setNewParentToThisNodeAndUpdateHeights
The first method, as the name says, starts at a node and climbs the tree until it finds the root, which is then returned.
If you compare two nodes with this method (or rather, the roots it returns), you can easily tell whether they are connected by just comparing their roots.
When you want to connect them, you climb the trees of both nodes and ask one root to take the other as its parent.
The trees can grow very tall (sorry, I don't use the official nomenclature, but this is the wording that makes sense to me), so this simple approach gets very slow when you have to climb a tree at a later time.
To prevent the trees from growing too tall, don't just set one root as the parent of another without a criterion: attach the shorter tree (in terms of height, not number of elements) to the taller one.
For this, you need to know the height of each tree, and you can store that information on the respective root (via an extra array in your case, or an extra pointer per node in other languages). It has to be updated every time another tree is attached.
A tree cannot know by itself that another tree has just been attached to it, so it is important that every tree attaching itself to a second one informs that second tree, so it can update its height.
This information is sent to the root of the second tree and later used to judge (as written above) which tree is the shorter one. Remember: attaching a small tree to a big one, instead of the opposite, will save you incredible amounts of time.
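To make this concrete, here is a minimal sketch of the scheme above in Python (the method names mirror the ones suggested earlier; this is one possible implementation, not the only one):

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))  # each node starts as its own root
            self.height = [0] * n         # height of the tree rooted at each node

        def climb_tree_from_node_up_to_root(self, node):
            while self.parent[node] != node:
                node = self.parent[node]
            return node

        def connected(self, a, b):
            return (self.climb_tree_from_node_up_to_root(a)
                    == self.climb_tree_from_node_up_to_root(b))

        def set_new_parent_to_this_node_and_update_heights(self, a, b):
            root_a = self.climb_tree_from_node_up_to_root(a)
            root_b = self.climb_tree_from_node_up_to_root(b)
            if root_a == root_b:
                return
            # attach the shorter tree to the taller one
            if self.height[root_a] < self.height[root_b]:
                self.parent[root_a] = root_b
            elif self.height[root_a] > self.height[root_b]:
                self.parent[root_b] = root_a
            else:
                self.parent[root_b] = root_a
                self.height[root_a] += 1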
Do you want something like this?
myset = ...
all(elt in myset for elt in (a,b))
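Note that this only tells you whether both elements are members of one particular set. If you want to manage a whole collection of disjoint sets built from plain Python sets, one workable (if not asymptotically optimal) approach is to keep a dict mapping each element to the set that currently contains it; "same set" is then an identity comparison, and union merges the smaller set into the larger one. A sketch, not a standard library API:

    class SetUnion:
        def __init__(self, elements):
            # each element starts in its own singleton set
            self.set_of = {e: {e} for e in elements}

        def same_set(self, a, b):
            return self.set_of[a] is self.set_of[b]

        def union(self, a, b):
            sa, sb = self.set_of[a], self.set_of[b]
            if sa is sb:
                return
            if len(sa) < len(sb):   # merge the smaller set into the larger
                sa, sb = sb, sa
            sa |= sb
            for e in sb:            # repoint the elements that moved
                self.set_of[e] = sa

    ds = SetUnion(range(10))
    ds.union(1, 2)
    ds.union(2, 5)
    print(ds.same_set(1, 5))  # True
    print(ds.same_set(1, 3))  # False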
Trying to learn MCTS (Monte Carlo Tree Search) using YouTube videos and papers like this one.
http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Applications_files/grand-challenge.pdf
However, I am not having much luck understanding the details beyond the high-level theoretical explanations. Here are some quotes from the paper above and the questions I have.
Selection Phase: MCTS iteratively selects the highest scoring child node of the current state. If the current state is the root node, where did these children come from in the first place? Wouldn't you have a tree with just a single root node to begin with? With just a single root node, do you get into Expansion and Simulation phase right away?
If MCTS selects the highest scoring child node in Selection phase, you never explore other children or possibly even a brand new child whilst going down the levels of the tree?
How does the Expansion phase happen for a node? In the diagram above, why did it not choose leaf node but decided to add a sibling to the leaf node?
During the Simulation phase, stochastic policy is used to select legal moves for both players until the game terminates. Is this stochastic policy a hard-coded behavior and you are basically rolling a dice in the simulation to choose one of the possible moves taking turns between each player until the end?
The way I understand this is that you start at a single root node and, by repeating the above phases, construct the tree to a certain depth. Then you choose the child with the best score at the second level as your next move. The size of the tree you are willing to construct is basically your hard AI responsiveness requirement, right? Since while the tree is being constructed the game will stall to compute it.
Selection Phase: MCTS iteratively selects the highest scoring child node of the current state. If the current state is the root node, where did these children come from in the first place? Wouldn't you have a tree with just a single root node to begin with? With just a single root node, do you get into Expansion and Simulation phase right away?
The Selection step is typically implemented not to choose among only the nodes which really exist in the tree (having been created through the Expansion step). It is typically implemented to choose among all possible successor states of the game state matching your current node.
So, at the very beginning, when you have just a root node, you'll want your Selection step to still be able to select one out of all the possible successor game states (even if they don't have matching nodes in the tree yet). Typically you'll want a very high score (infinite, or some very large constant) for game states which have never been visited yet (which don't have nodes in the tree yet). This way, your Selection Step will always randomly select among any states that don't have a matching node yet, and only really use the exploration vs. exploitation trade-off in cases where all possible game states already have a matching node in the tree.
If MCTS selects the highest scoring child node in Selection phase, you never explore other children or possibly even a brand new child whilst going down the levels of the tree?
The "score" used by the Selection step should typically not just be the average of all outcomes of simulations going through that node. It should typically be a score consisting of two parts: an "exploration" part, which is high for nodes that have been visited relatively infrequently, and an "exploitation" part, which is high for nodes which appear to be good moves so far (where many simulations going through that node ended in a win for the player who is allowed to choose a move to make). This is described in Section 3.4 of the paper you linked. The W(s, a) / N(s, a) is the exploitation part (simply the average score), and the B(s, a) is the exploration part.
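As an illustration, a selection score along these lines might look like the sketch below. It uses the classic UCB1 formulation rather than the exact B(s, a) term from the paper, and `wins`/`visits` are assumed fields of a hypothetical node class; unvisited children get an infinite score, as discussed above:

    import math

    def selection_score(parent, child, c=1.41):
        # Unvisited children are always preferred (see the first point above).
        if child.visits == 0:
            return float('inf')
        exploitation = child.wins / child.visits   # W(s, a) / N(s, a)
        exploration = c * math.sqrt(math.log(parent.visits) / child.visits)
        return exploitation + exploration

    def select_child(parent):
        return max(parent.children, key=lambda ch: selection_score(parent, ch))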
How does the Expansion phase happen for a node? In the diagram above, why did it not choose leaf node but decided to add a sibling to the leaf node?
The Expansion step is typically implemented to simply add a node corresponding to the final game state selected by the Selection Step (following what I answered to your first question, the Selection Step will always end in selecting one game state that has never been selected before).
During the Simulation phase, stochastic policy is used to select legal moves for both players until the game terminates. Is this stochastic policy a hard-coded behavior and you are basically rolling a dice in the simulation to choose one of the possible moves taking turns between each player until the end?
The most straightforward (and probably most common) implementation is indeed to play completely at random. It is also possible to do this differently though. You could for example use heuristics to create a bias towards certain actions. Typically, completely random play is faster, allowing you to run more simulations in the same amount of processing time. However, it typically also means every individual simulation is less informative, meaning you actually need to run more simulations for MCTS to play well.
The way I understand this is that you start at a single root node and, by repeating the above phases, construct the tree to a certain depth. Then you choose the child with the best score at the second level as your next move. The size of the tree you are willing to construct is basically your hard AI responsiveness requirement, right? Since while the tree is being constructed the game will stall to compute it.
MCTS does not uniformly explore all parts of the tree to the same depth. It has a tendency to explore parts which appear to be interesting (strong moves) deeper than parts which appear to be uninteresting (weak moves). So, typically you wouldn't really use a depth limit. Instead, you would use a time limit (for example, keep running iterations until you've spent 1 second, or 5 seconds, or 1 minute, or whatever amount of processing time you allow), or an iteration count limit (for example, allow it to run 10K or 50K or any number of simulations you like).
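Putting the four phases together with a time limit, a full iteration loop might be structured like the sketch below. `legal_moves()`, `apply_move()`, `is_terminal()` and `result()` are hypothetical names standing in for your game's API, and for a two-player game you would additionally flip the outcome's perspective between levels during backpropagation (omitted here for brevity):

    import random
    import time

    class Node:
        def __init__(self, state, parent=None, move=None):
            self.state, self.parent, self.move = state, parent, move
            self.children = []
            self.untried_moves = state.legal_moves()  # assumed to return a list
            self.wins = 0.0
            self.visits = 0

        def add_child(self, move):
            child = Node(self.state.apply_move(move), parent=self, move=move)
            self.untried_moves.remove(move)
            self.children.append(child)
            return child

    def mcts(root_state, time_budget=1.0):
        root = Node(root_state)
        deadline = time.time() + time_budget          # time limit, not depth limit
        while time.time() < deadline:
            node = root
            # Selection: descend while the node is fully expanded
            while not node.untried_moves and node.children:
                node = select_child(node)             # e.g. the UCB1 sketch above
            # Expansion: add one child for a move not tried yet
            if node.untried_moves:
                node = node.add_child(random.choice(node.untried_moves))
            # Simulation: random playout from the new node to the end of the game
            state = node.state
            while not state.is_terminal():
                state = state.apply_move(random.choice(state.legal_moves()))
            # Backpropagation: update win/visit statistics along the path
            outcome = state.result()                  # assumed in [0, 1]
            while node is not None:
                node.visits += 1
                node.wins += outcome
                node = node.parent
        # play the most visited move at the root
        return max(root.children, key=lambda c: c.visits).move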
Basically, Monte Carlo is: try randomly many times(*), then keep the move that led to the best outcome most of the time.
(*): the number of tries and the depth depend on how fast you want the decision to be made.
So the root node is always the current game state with immediate children being your possible moves.
If you can do 2 moves (yes/no, left/right,...) then you have 2 sub-nodes.
If you cannot make any moves (it may happen, depending on the game), then you have no decision to make, and Monte Carlo is useless for this move.
If you have X possible moves (chess game) then each possible move is a direct child node.
Then (in a 2-player game), the levels alternate between "your moves" and "opponent moves", and so on.
How you traverse the tree should be random (uniform). For example:
Your move 1 (random move of sub-level 1)
His move 4 (random move of sub-level 2)
Your move 3 (random move of sub-level 3) -> win yay
Pick a reference maximum depth and count how many times you win or lose (or use a sort of evaluation function if the game is not finished after depth X).
You repeat the operation Y times (Y being quite large) and you select the immediate child node (i.e., your move) that leads to you winning most of the time.
This is how you evaluate which move you should make now. After that, the opponent moves and it is your turn again, so you have to re-create a tree whose root node is the new current situation and redo the Monte Carlo technique to guess your best possible move. And so on.
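In code, this flat version of the idea (no tree beyond the first level) could look like the following sketch, where `legal_moves()`, `apply_move()`, `is_terminal()`, `winner()` and `current_player` are hypothetical stand-ins for your game's API:

    import random

    def monte_carlo_move(state, playouts_per_move=1000, max_depth=50):
        me = state.current_player
        best_move, best_wins = None, -1
        for move in state.legal_moves():          # each immediate child node
            wins = 0
            for _ in range(playouts_per_move):    # repeat Y times per move
                s = state.apply_move(move)
                depth = 0
                # play uniformly at random down to the reference maximum depth
                while not s.is_terminal() and depth < max_depth:
                    s = s.apply_move(random.choice(s.legal_moves()))
                    depth += 1
                if s.winner() == me:              # or use an evaluation function here
                    wins += 1
            if wins > best_wins:                  # keep the most often winning move
                best_move, best_wins = move, wins
        return best_move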
I'm taking an introductory graphics course, and while I intuitively understand that converting a click or touch into object coordinates will make the math much cleaner, reduce the chances for human error, and potentially make debugging easier, none of these is actually a good conceptual explanation of why object coordinate spaces are used in selection tests, as opposed to simply using world coordinates for the test. Rather, they're just observations of what tends to happen when object coordinates are used. So I ask: why?
A selection test involves comparing the click coordinates, which you get in window coordinates, against lots and lots of object features, which are represented in object coordinates.
You need to transform them into the same coordinate system in order to do the checks, so you can EITHER transform the one simple click point OR you can transform all the various object features.
Transforming one point or line is just a lot easier than transforming a whole bunch of object features of various types.
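For instance, with 4x4 homogeneous transforms, the difference is one inverse-transform of the click versus one transform per vertex. A sketch with numpy, where `model_matrix` (the object-to-world transform) and the unprojected click point are assumed to be known:

    import numpy as np

    model_matrix = np.eye(4)                        # object -> world (assumed known)
    click_world = np.array([3.0, 1.5, 0.0, 1.0])    # click, unprojected to world space

    # Option A: bring the single click point into object space
    # (one matrix inverse plus one matrix-vector product)
    click_object = np.linalg.inv(model_matrix) @ click_world

    # Option B: bring every object feature into world space
    # (one matrix-vector product per vertex -- much more work)
    vertices_object = np.random.rand(10000, 4)      # placeholder geometry
    vertices_world = (model_matrix @ vertices_object.T).T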
There are cases where the location of a specific object or point may not be known within a world coordinate system, but is known relative to some other coordinate system.
To summarize an example from my course text, consider the idea of two different towns, one using a grid system for its layout, and the other using what I can only describe as the New England we-made-cow-trails-into-roads method. A government employee is tasked with creating a layout of the area that includes both towns, and in doing so has to convert the two coordinate systems into a third, which encompasses the other two.
Sometimes, using a world atlas just isn't practical to get across the street, and so something much more local (and relevant) is used instead, as it provides much more detail over a much smaller area.
The text also explains that it may be more than simply impractical to use a given coordinate system - it may yield results that are improbable or just plain wrong. This is evidenced in the evolution of the geocentric and heliocentric models of the universe - the distance of the stars from us was calculated with very different results using the two models.
Thinking of my own example, the best that comes to mind would be something like your own internal organs - from the outside, you don't know for sure exactly the shape, size, and structure of each of them, but your own body does. In order to be able to access that information, you need to look inside the body (ideally in a way that doesn't kill you). It's not something that is plainly observable from outside.
I'm trying to plug a (very) simple graph layout algorithm into my GEF editor. I do it by simply adding calculateX() and calculateY() calls to my NodeEditParts' refreshVisuals() (the graph figure has an XYLayout, obviously).
It does work, albeit only for those nodes, which have a connection to another node, of which they are the source. When I try to access the constraints for nodes to which the node in question has a connection, of which it is the target, I get a NullPointerException.
I'm guessing that this is to do with the order in which nodes are drawn in GEF.
I'm also guessing that there is no such thing as an element parser checking which elements will have to be drawn first, but rather that elements are either drawn in the order they appear in a List, or concurrently via the EditPartFactory (which, however, must get its input from some sort of ordered collection in the model).
But how is it really done?
In GEF, the elements are drawn in the order they appear in the list returned by getModelChildren() (I don't remember whether from start to end or backwards, but you can check the code).
Nevertheless, I couldn't quite understand what exactly your problem is, so if you can provide more details I may be able to help you further.
I'm making a chess game, rendered with OpenGL.
I'm not looking for somebody to tell me all of the answers, I would like to figure the code out on my own, but pointing me to the right concepts is what I really need. At this point, I'm not sure where to start. Here is what I've figured out:
An enumeration, TurnState, with the following values:
playerOneTurn
playerTwoTurn
Stopped
An enumeration, GameState, with the following values:
playerOneCheck
playerTwoCheck
playerOneCheckMate
playerTwoCheckMate
InitializingGame
Tie
NormalPlay
An abstract class, Player, and a subclass, Computer.
A class, ChessGame, with the following fields:
Player p1, p2
TurnState turnState
GameState gameState
A class, Move, with the following fields:
*Piece
Location origin
Location destination
A class, Location, with the following fields:
row
col
*ChessBoard
A class, ChessBoard, with one method, isValid, which takes a Move and checks if the move is valid or not.
An abstract class, ChessPieces, with the following methods:
GetValue() // returns an int value of the piece (for scoring)
GetPosition() // returns the current position of a piece
getIsSelected() // returns a boolean, true if selected, false if unselected
move() // moves the piece in a way dependent upon what piece
And the following subclasses:
Pawn
Rook
Queen
King
Knight
As to the AI part of the chess game:
To get a chess AI, or any sort of turn-based game AI, you will need to calculate the "value" of the game in a given turn (that's important). For example: assign each piece a value, sum the values for player 1 and player 2, and then compute score = player1Score - player2Score, so negative values favor player 2 and positive ones favor player 1. That's just a basic example, and not a very strong one, but it's the most basic way to explain what the "value" of the game would be.
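As a sketch of that basic scoring idea (the piece values and the `board.pieces` iterable are illustrative assumptions, not your actual classes):

    # Minimal material-count evaluation: positive favors player 1, negative player 2.
    PIECE_VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9, 'king': 0}

    def evaluate(board):
        score = 0
        for piece in board.pieces:                # hypothetical iterable of live pieces
            value = PIECE_VALUES[piece.kind]
            score += value if piece.owner == 1 else -value
        return score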
Once you can calculate that, you need to be able to generate every possible move of a player given a certain configuration of the board.
With that, you will be able to build a decision tree whose root node is the current state of the game. The next "level" of the tree represents every possible state you can reach from the current state (and so forth). It's important to notice that if you consider player 1's possible moves on one level of the tree, you consider player 2's possible moves on the next.
The next thing to do: suppose player 1 is going to make a move. He will look into the tree down to, say, depth 5 (for a chess game you'll never search the whole tree). He will choose the move that is optimal for him, which means: at each level he considers either HIS best move or player 2's best move (so he plans for the worst-case scenario), and he moves to the highest-valued node on the next level of the tree.
To calculate a value of a node you do the following:
NOTE: taking the root node to be at depth 0 (with player 1 to move there), every even-depth node takes the max value for player 1, and every odd-depth node takes the min value for player 2.
You'll expand the tree to the max depth you defined. For the nodes at the max depth you just calculate the value of the board (which I mentioned at the beginning of my answer); for the nodes above them you do:
even-depth node's value: the max value among all its child nodes
odd-depth node's value: the min value among all its child nodes
So basically you recurse to find the value of a node based on the values of the deeper nodes.
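In code, that recursion is the classic minimax; here is a bare sketch (reusing the `evaluate` idea from above, with `legal_moves()`, `apply_move()` and `is_terminal()` as hypothetical game-API names):

    def minimax(state, depth, maximizing):
        # Leaf: max depth reached or game over -> just score the board
        if depth == 0 or state.is_terminal():
            return evaluate(state)
        values = [minimax(state.apply_move(m), depth - 1, not maximizing)
                  for m in state.legal_moves()]
        return max(values) if maximizing else min(values)

    def best_move(state, depth=5):
        # Player 1 picks the child with the highest minimax value
        return max(state.legal_moves(),
                   key=lambda m: minimax(state.apply_move(m), depth - 1, False))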
Well, that's the basic idea; from it you can research further. If you want, you can PM me; I've done some work on this kind of search, and I've only described the most basic idea here. For efficient code you'll need lots of optimization techniques.
Hope it helped a little
First of all: separate the two, AI and GUI/OpenGL. In chess it is normal to have the GUI and the AI (the "engine", in computer chess lingo) in two different processes that communicate via a predefined protocol. The two most popular protocols for this are UCI and WinBoard.
For the chess engine part, you basically need three things:
A board/position representation
A leaf node evaluation function
A search algorithm
I suggest you read:
Chess Programming WIKI
TalkChess forum for computer chess
Study an open source chess engine, like Stockfish, Crafty or Fruit.
This may not be directly answering your question (actually what is your question?), but you mentioned you wanted pointers to the right concepts.
oysteijo is right: one of the very important concepts here is separating the parts of a program from each other.
For something like chess there exist many efficient and elegant representations of the state of a chess game. I would say that the MVC (model, view, controller) design pattern works quite well for a chess game.
Hopefully this will make some sense, if not I suggest you read up on MVC some more.
Your model is going to primarily involve the data structure which stores the representation of the state of the game: the chessboard. A piece can only be on one of 64 spots, and there are limitations on the types of pieces, how many there are, and what each of them does. The model will be responsible for dealing with this stuff. It would also make sense to give the model the logic for determining the legality of any given move (i.e., the properties of the game which don't depend on the state of any particular instance of a game).
The view is where all of your presentation-related code goes. All the OpenGL code goes in here, as would a "debug" routine which might (for instance) print an ASCII representation of the chessboard to the console.
The controller might have some functions which interface with the user to process input. The controller is the part of the code which manipulates the model ("move E5 to D3": a function in your controller might call model.moveKnight('D3')) and the view ("draw the board in glorious 3D": the controller might do something like calling openGLView.draw(model)).
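A bare-bones illustration of that split (all class and method names here are made up for the example):

    class ChessModel:
        """Game state and rules; knows nothing about rendering or input."""
        def __init__(self):
            self.board = {}                       # e.g. square -> piece
        def is_legal(self, origin, destination):
            ...                                   # rules of the game live here
        def move(self, origin, destination):
            ...                                   # mutates the game state

    class ChessView:
        """Presentation only: all OpenGL (or ASCII debug) code goes here."""
        def draw(self, model):
            ...

    class ChessController:
        """Glue: turns user input into model updates and view refreshes."""
        def __init__(self, model, view):
            self.model, self.view = model, view
        def on_user_move(self, origin, destination):
            if self.model.is_legal(origin, destination):
                self.model.move(origin, destination)
            self.view.draw(self.model)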
One of the primary goals that MVC helps achieve is the independence of parts of code that perform different tasks. If some change in your AI causes problems with a rendering algorithm, it is a frustrating and difficult position to be in. An experienced programmer would go to some great lengths to ensure that this couldn't happen.
You might be wondering at this point where your AI code fits into the picture. Well, it's really up to you; use your best judgement. It could be part of the controller. Personally I'd make it a separate controller of its own (say, a chessAIController) which implements the AI algorithms, but it is just as easy to have all of it contained within the main controller.
The point is, it doesn't really matter how you actually organize the code so long as it is done in some kind of logical way. The reason that MVC is so widespread is that those 3 components are usually present in most software and it usually makes sense to separate them. Note they're not actually really separated... the controller often directly manipulates both the view and model. Restrictions such as not allowing the view to manipulate anything helps code to stay clean and intelligible.
When you have no structure or organization in a programming project, it can be nearly impossible to avoid huge routines which do a little bit of everything, because there is really only one place in the code in which to build functionality. What this invariably generates is a tangled mass of spaghetti code that no language, no matter how high-level, can save you from. It creates code that just plain sucks, because nobody else can understand it, and even you will be unable to understand it two weeks after it was written.
Basically my decision tree can't classify a value using the normal algorithm.
I get to a node, and there are two options there (say, sunny and windy), but the value of the instance I'm classifying is different (for example, rainy).
Are there any methods to deal with this, e.g. change the tree or just estimate based on other data?
I was thinking of assigning the most common value at that node but this is just a guess.
Have you considered fuzzy logic for the rich/poor continuum? As for things that can't be expressed as a continuum, I can't think of a way it can be done. Rainy weather, for example, is so fundamentally different from sunny and windy weather in how we experience and react to it, I'm not sure how you expect a computer (or whatever it is you're writing your decision tree for) to figure out what to do. (Aside from simply having an "I don't know what to do" output state, but I'm assuming you wanted something more meaningful than that.)
The whole point of decision trees is that the options are complete and (hopefully) mutually exclusive.
If they are not, you'll get into trouble. Redefine poor and rich so that they cover everything (all incomes, all states of mind...).
But honestly, interpret such weather examples as what they are: just examples for a concept, not the holy grail of meteorology.
The issue here is that you've learned a decision tree from data different from the data you are using for classification. More specifically, your decision tree knows only two values (i.e., sunny and windy) for the attribute Weather, but your data for classification also allows the value rainy.
Since your decision tree has no observations where the weather was rainy, this value is useless to it. In other words, you have to eliminate this value from your classification data.
The solution is to clean the data before using the decision tree as a classifier.
You have the following options:
1. Remove all observations/instances with Weather="rainy" from your data set, because you can't classify them. The disadvantage is that all instances with Weather="rainy" remain unclassified.
2. For all observations/instances with Weather="rainy", remove the value, or rather set it to unknown/null. If your decision tree can handle null values, it can then classify your whole data set; if not, you still have a problem, and you should go for option 3. (A small preprocessing sketch for options 1 and 2 follows after this list.)
3. Relearn your decision tree with Weather = {sunny, windy, rainy}.
4. (In your case, the following is not an option.) Replace "rainy" with either "sunny" or "windy"; there are different heuristics for that.
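As referenced above, a sketch of options 1 and 2 as a preprocessing step, assuming the data set is a list of dicts and that the tree was trained only on the listed values:

    KNOWN_WEATHER = {'sunny', 'windy'}        # values the tree was trained on

    def drop_unseen(instances):
        # Option 1: remove instances the tree cannot classify
        return [x for x in instances if x['weather'] in KNOWN_WEATHER]

    def null_unseen(instances):
        # Option 2: keep the instances but mark the unseen value as unknown
        return [{**x, 'weather': x['weather'] if x['weather'] in KNOWN_WEATHER else None}
                for x in instances]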
You are talking about the "normal algorithm", which is a rather vague statement. I assume you are using a strictly binary rooted decision tree, where each internal node makes a binary split of the data. The condition evaluated at each internal node thus outputs a Boolean, which sends the data into the left node (true) or the right node (false). In your case, you have a categorical variable weather with two possible values in the training data, which allows only two possible tests: weather == sunny or weather == windy. Hence, the rainy samples will always end up in the right node, as rainy is neither sunny nor windy.
In such a tree, the rainy samples will simply be classified as not sunny, not windy.