SymPy Permutation groups parity not working as expected - python-3.x

I've implemented a Rubik's cube using permutations of tuples. The cube with no changes is represented as (0, 1, 2, ..., 45, 46, 47).
To apply a 'turn' to the cube the numbers are shuffled around. I've tested all of my turns fairly thoroughly, to the point that I'm fairly sure there are no typos.
I've been trying to implement a method that checks whether a cube is valid or not, because only 1 in 12 random permutations of (0, 1, ..., 47) is a valid cube. For a permutation to be a valid Rubik's cube it must meet 3 requirements. This is well documented in this Math StackExchange thread: https://math.stackexchange.com/questions/127577/how-to-tell-if-a-rubiks-cube-is-solvable
The 3 steps are:
Edge orientation: the number of edge flips has to be even.
Corner orientation: the number of corner twists has to be divisible by 3.
Permutation parity: this is where I'm having trouble. The total permutation parity must be even, meaning that the corner permutation parity must match the edge permutation parity.
The SymPy library provides a great way for me to work with a number of permutation group properties so I included it in my attempt at computing permutation parity.
The simplest test input that it fails on, when it should succeed, is the back turn of the cube, represented as B.
Here's the code:
from sympy.combinatorics import Permutation

def check_permutation_parity(cube):
    corners = cube[:24]
    edges = cube[24:]
    edges = [e - 24 for e in edges]
    # helpers (defined elsewhere in the repo) that keep every 3rd corner
    # sticker and every 2nd edge sticker, respectively
    corner_perms = corner_perms_only(corners)
    edge_perms = edge_perms_only(edges)
    normalized_corners = [c // 3 for c in corner_perms]
    normalized_edges = [e // 2 for e in edge_perms]
    sympy_corners = Permutation(normalized_corners)
    sympy_edges = Permutation(normalized_edges)
    corners_perm_parity = sympy_corners.parity()
    edges_perm_parity = sympy_edges.parity()
    if corners_perm_parity != edges_perm_parity:
        return False
    return True
Using a bunch of print statements I've outlined what happens throughout the code:
This is the initial state. It's the B permutation of the cube and looks as expected.
cube:
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 18, 19, 20, 12, 13, 14, 21, 22, 23, 15, 16, 17, 24, 25, 26, 27, 30, 31, 28, 29, 32, 33, 36, 37, 34, 35, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47)
Next we look at the corners and edges. Remember that every edge index has had 24 subtracted from it. This is necessary for eventual conversion to a SymPy permutation.
corners, edges
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 18, 19, 20, 12, 13, 14, 21, 22, 23, 15, 16, 17)
[0, 1, 2, 3, 6, 7, 4, 5, 8, 9, 12, 13, 10, 11, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
Then we extract just every 3rd corner value and every 2nd edge value. This lets us look at just the permutation of each piece, because we don't care about orientation.
corner_perms_only, edges_perms_only
(0, 3, 6, 9, 18, 12, 21, 15)
(0, 2, 6, 4, 8, 12, 10, 14, 16, 18, 20, 22)
Then we divide by 3 or 2 to convert to SymPy:
normalized corners, normalized edges
[0, 1, 2, 3, 6, 4, 7, 5]
[0, 1, 3, 2, 4, 6, 5, 7, 8, 9, 10, 11]
After converting to SymPy, the corners look like this:
sympy corners
(4 6 7 5)
[(4, 5), (4, 7), (4, 6)]
[[0], [1], [2], [3], [4, 6, 7, 5]]
And the edges look like this:
sympy edges
(11)(2 3)(5 6)
[(2, 3), (5, 6)]
[[0], [1], [2, 3], [4], [5, 6], [7], [8], [9], [10], [11]]
Giving us these parities, because the corner permutation is a single 4-cycle (3 transpositions, odd) and the edge permutation is two 2-cycles (2 transpositions, even):
corners, edges perm parity
1
0
Because the parities differ, the function returns False.
B: False
We know that the parities should match, but I can't get that result, and I'm somewhat lost as to where to go for further debugging. All of the code can be found on my GitHub here: https://github.com/elliotmartin/RubikLite/blob/master/Rubik.py
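For reference, the parities above can be reproduced from the printed cycle structures alone; a minimal SymPy-only sketch, separate from the cube code:
from sympy.combinatorics import Permutation
# a single 4-cycle is 3 transpositions -> odd parity (1)
print(Permutation([[4, 6, 7, 5]], size=8).parity())  # 1
# two disjoint 2-cycles are 2 transpositions -> even parity (0)
print(Permutation([0, 1, 3, 2, 4, 6, 5, 7, 8, 9, 10, 11]).parity())  # 0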

My issue had nothing to do with SymPy or the permutation parities. To check this I implemented my own algorithm for cyclic decomposition and then checked the parities. In the end the issue had to do with how I set up the permutations for each move.
I guess I've learned a lot about testing: if your tests don't test for the correct thing, then they're not that useful.
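The answer doesn't show the hand-rolled check, but a parity-from-cycle-decomposition routine along these lines is one standard way to do it (a sketch, not the code from the linked repo):
def permutation_parity(perm):
    # perm maps index i -> perm[i]; parity = (n - number_of_cycles) % 2
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return (n - cycles) % 2
# agrees with SymPy on the B-turn corners above:
# permutation_parity([0, 1, 2, 3, 6, 4, 7, 5]) == 1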

Related

What is the easiest way to extract a subset of a 2D matrix in Python?

mat = [[0, 1, 2, 3, 4, 5],
[6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]]
Let's say I want to extract the upper-left 2x2 matrix
[[0, 1],
[6, 7]]
Doing mat2 = mat[:2][:2] doesn't work. It extracts the rows correctly but not the columns. It seems like I need to loop through to get the columns.
Additionally I need to make a deep copy for mat2 such that modifying mat2 doesn't change mat.
This is because [:2] returns a list containing the first 2 elements of your matrix.
For example:
arr = [[1, 2], [1, 3]]
print(arr[:2])  # prints the first 2 elements of arr, i.e. [1, 2] and [1, 3], packed into a list. Output: [[1, 2], [1, 3]]
In the same way,
mat = [[0, 1, 2, 3, 4, 5],
[6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]]
mat2 = mat[:2] # => [[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]]
# Now, if you again take the first 2 elements, you get the first 2 elements of mat2,
# not the first 2 elements of the lists inside mat2.
mat3 = mat2[:2] # => [[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]]
That is where you went wrong, but this concept is quite counter-intuitive, so no worries.
So the solution is to take the first 2 rows of mat and then take the first 2 elements from each of those rows.
Therefore, this should work for you:
list(x[:2] for x in mat[:2])
Since each x[:2] slice builds a new inner list (and the elements here are plain ints), the result is already independent of mat, so no separate deepcopy is needed.
Or, as @warped pointed out, if you can use numpy, you can do the following:
import numpy as np
mat = np.array(mat)
mat[:2, :2]
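One caveat, since the question also asks that modifying mat2 not change mat: basic NumPy slicing returns a view, so add .copy() if you need an independent array. A quick sketch:
import numpy as np
mat = np.arange(36).reshape(6, 6)
sub = mat[:2, :2]               # a view into mat, not a copy
sub[0, 0] = 99
print(mat[0, 0])                # prints 99: the original changed too
sub_copy = mat[:2, :2].copy()   # independent copy; modifying it leaves mat alone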

efficient way to operate on the ndarray

There exists a NumPy ndarray A of shape [100, 50, 5], and I want to expand A as follows. A will be appended with a one-dimensional array of shape (50,), so the resulting A will have shape [100, 50, 6].
The elements of this added slice are based on the original ndarray, i.e. on A[:,:,4], via a given formula: A[:,i,5] = A[:,i,4]*B[i] + 5 for i = 0..49. Here A[:,:,5] corresponds to the added slice, and B is another array acting as a weight.
Besides using a for loop to write this function, how can I accomplish this task in a vectorized/efficient way leveraging NumPy operations?
Make 2 arrays, with sizes small enough to look at:
In [371]: A = np.arange(24).reshape(2,3,4); B = np.array([10,20,30])
Due to broadcasting we can add a (3,) array to a (2,3) array:
In [372]: A[:,:,-1]+B
Out[372]:
array([[13, 27, 41],
[25, 39, 53]])
We can then convert that to a (2,3,1) array:
In [373]: (A[:,:,-1]+B)[:,:,None]
Out[373]:
array([[[13],
[27],
[41]],
[[25],
[39],
[53]]])
In [374]: A
Out[374]:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
and join them on the last axis:
In [375]: np.concatenate((A, Out[373]), axis=-1)
Out[375]:
array([[[ 0, 1, 2, 3, 13],
[ 4, 5, 6, 7, 27],
[ 8, 9, 10, 11, 41]],
[[12, 13, 14, 15, 25],
[16, 17, 18, 19, 39],
[20, 21, 22, 23, 53]]])
Or we can make a target array of the right size, and copy values to it:
In [376]: A1 = np.zeros((2,3,5),int)
In [377]: A1[:,:,:-1]=A
In [379]: A1[:,:,-1]=Out[372]
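Mapped back to the question's shapes and formula (A of shape (100, 50, 5), B of shape (50,)), the same approach would look roughly like this sketch:
# compute the new slice for all i at once via broadcasting over the first axis
new_slice = A[:, :, 4] * B + 5                            # shape (100, 50)
A = np.concatenate((A, new_slice[:, :, None]), axis=-1)   # shape (100, 50, 6)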

Swap pair of elements along an axis

I have a 2d numpy array as such:
import numpy as np
a = np.arange(20).reshape((2,10))
# array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
# [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])
I want to swap pairs of elements in each row. The desired output looks like this:
# array([[ 9, 0, 2, 1, 4, 3, 6, 5, 8, 7],
# [19, 10, 12, 11, 14, 13, 16, 15, 18, 17]])
I managed to find a solution in 1d:
a = np.arange(10)
# does the job for all pairs except the first
output = np.roll(np.flip(np.roll(a,-1).reshape((-1,2)),1).flatten(),2)
# first pair done manually
output[0] = a[-1]
output[1] = a[0]
Any ideas on a "numpy only" solution for the 2d case?
Owing to the first pair not exactly following the usual pair swap, we handle that one separately. The rest is relatively straightforward with a reshape to split the last axis into pairs and a flip of the new axis. Hence, it would be:
In [42]: a # 2D input array
Out[42]:
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])
In [43]: b2 = a[:,1:-1].reshape(a.shape[0],-1,2)[...,::-1].reshape(a.shape[0],-1)
In [44]: np.hstack((a[:,[-1,0]],b2))
Out[44]:
array([[ 9, 0, 2, 1, 4, 3, 6, 5, 8, 7],
[19, 10, 12, 11, 14, 13, 16, 15, 18, 17]])
Alternatively, stack first and then reshape + flip the axis:
In [50]: a1 = np.hstack((a[:,[0,-1]],a[:,1:-1]))
In [51]: a1.reshape(a.shape[0],-1,2)[...,::-1].reshape(a.shape[0],-1)
Out[51]:
array([[ 9, 0, 2, 1, 4, 3, 6, 5, 8, 7],
[19, 10, 12, 11, 14, 13, 16, 15, 18, 17]])
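Another option, not taken from the answer above: build the swapped column order once as an index array and reuse it with fancy indexing (which also returns a copy). A sketch:
n = a.shape[1]
idx = np.r_[n - 1, 0, np.arange(1, n - 1).reshape(-1, 2)[:, ::-1].ravel()]
a[:, idx]   # array([[ 9,  0,  2,  1,  4,  3,  6,  5,  8,  7],
            #        [19, 10, 12, 11, 14, 13, 16, 15, 18, 17]])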

Change one-dimensional tuple using slicing / object reassignment

I understand that tuples are immutable objects; however, I know tuples support indexing and slicing. Thus, if I have a tuple assigned to a variable, I can reassign the variable to a new tuple object and change the value at the desired index position.
When I attempt to do this using index slices, I get back a tuple containing multiple tuples. I understand why this is happening, because I am passing comma-separated slices of the original tuple, but I can't figure out how (if possible) to get back a one-dimensional tuple with a single element changed when working with larger sets of data.
Example:
someNumbers = tuple(i for i in range(0, 20))
print(someNumbers)
someNumbers = someNumbers[:10], 2000, someNumbers[11:]
print(someNumbers)
Outputs the following:
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
((0, 1, 2, 3, 4, 5, 6, 7, 8, 9), 2000, (11, 12, 13, 14, 15, 16, 17, 18, 19))
Can I return a one-dimensional tuple and change only the desired index value?
Use concatenation:
someNumbers = someNumbers[:10] + (2000,) + someNumbers[11:]
You can use tuple concatenation:
someNumbers = tuple(i for i in range(0, 20))
print(someNumbers)
# (2000,) is a one-element tuple, as opposed to (2000), which is just the number 2000
someNumbers = someNumbers[:10] + (2000,) + someNumbers[11:]
print(someNumbers)
Outputs:
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 2000, 11, 12, 13, 14, 15, 16, 17, 18, 19)
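A sketch of another common alternative, not shown above: starting from the original tuple, round-trip through a list, mutate it, and convert back:
someNumbers = tuple(i for i in range(0, 20))
nums = list(someNumbers)   # lists are mutable
nums[10] = 2000
someNumbers = tuple(nums)
# (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 2000, 11, 12, 13, 14, 15, 16, 17, 18, 19)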

understanding behavior of mapping to an array

When does map modify an array in place? I know the preferred way to iterate over an array is with a list comprehension, but I'm preparing an algorithm for ipyparallel, which apparently uses the map function. Each row of my array is a set of model inputs, and I want to use map, ultimately in parallel, to run the model for each row. I'm using Python 3.4.5 and Numpy 1.11.1. I need these versions for compatibility with other packages.
This simple example creates a list and leaves the input array intact, as I expected.
grid = np.arange(25).reshape(5,5)
grid
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
def f(g):
    return g + 1
n = list(map(f, grid))
grid
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
But when the function modifies a slice of the input row, the array is modified in place. Can anyone explain this behavior?
def f(g):
    g[:2] = g[:2] + 1
    return g
n = list(map(f, grid))
grid
array([[ 1, 2, 2, 3, 4],
[ 6, 7, 7, 8, 9],
[11, 12, 12, 13, 14],
[16, 17, 17, 18, 19],
[21, 22, 22, 23, 24]])
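The behavior in the second example comes down to how NumPy iteration works: iterating over a 2-D array yields views of its rows, so the in-place slice assignment inside f writes directly into grid, while g + 1 allocates a new array and leaves grid untouched. A minimal sketch illustrating this (not from the original post):
import numpy as np
grid = np.arange(25).reshape(5, 5)
row = next(iter(grid))               # iteration yields a view of the first row
print(np.shares_memory(row, grid))   # True: same underlying buffer
row[:2] = row[:2] + 1                # in-place assignment writes into grid
print(grid[0, :3])                   # [1 2 2]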
