Why is StratifiedKFold generating the same splits in spite of using different random_state values?

I am trying to generate different stratified splits of my data set using StratifiedKFold and the random_state parameter. However, when I use different random_state values, I still get the same splits. My understanding is that different random_state values should produce different splits. Please let me know what I am doing incorrectly. Here is the code.
import numpy as np
X_train=np.ones(10)
Y_train=np.ones(10)
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=5,random_state=0)
skf1 = StratifiedKFold(n_splits=5,random_state=100)
trn1=[]
cv1=[]
for train, cv in skf.split(X_train, Y_train):
    trn1 = trn1 + [train]
    cv1 = cv1 + [cv]
trn2=[]
cv2=[]
for train, cv in skf1.split(X_train, Y_train):
    trn2 = trn2 + [train]
    cv2 = cv2 + [cv]
for c in list(range(0, 5)):
    print('Fold:' + str(c + 1))
    print(trn1[c])
    print(trn2[c])
    print(cv1[c])
    print(cv2[c])
Here is the output
Fold:1
[2 3 4 5 6 7 8 9]
[2 3 4 5 6 7 8 9]
[0 1]
[0 1]
Fold:2
[0 1 4 5 6 7 8 9]
[0 1 4 5 6 7 8 9]
[2 3]
[2 3]
Fold:3
[0 1 2 3 6 7 8 9]
[0 1 2 3 6 7 8 9]
[4 5]
[4 5]
Fold:4
[0 1 2 3 4 5 8 9]
[0 1 2 3 4 5 8 9]
[6 7]
[6 7]
Fold:5
[0 1 2 3 4 5 6 7]
[0 1 2 3 4 5 6 7]
[8 9]
[8 9]

As stated in the documentation:
random_state : int, RandomState instance or None, optional, default=None
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when shuffle == True.
So simply add shuffle=True to your StratifiedKFold calls. For example:
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
skf1 = StratifiedKFold(n_splits=5, shuffle=True, random_state=100)
Output:
Fold:1
[0 1 3 4 5 6 7 9]
[0 1 2 3 4 5 8 9]
[2 8]
[6 7]
Fold:2
[0 1 2 3 5 6 7 8]
[0 2 3 4 6 7 8 9]
[4 9]
[1 5]
Fold:3
[0 2 3 4 5 7 8 9]
[0 1 3 5 6 7 8 9]
[1 6]
[2 4]
Fold:4
[0 1 2 4 5 6 8 9]
[1 2 4 5 6 7 8 9]
[3 7]
[0 3]
Fold:5
[1 2 3 4 6 7 8 9]
[0 1 2 3 4 5 6 7]
[0 5]
[8 9]
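As a quick check (a minimal sketch along the same lines): once shuffle=True is set, the seed is actually consumed, so the same random_state reproduces identical folds across runs, while different seeds give different folds:
import numpy as np
from sklearn.model_selection import StratifiedKFold

X_train = np.ones(10)
Y_train = np.ones(10)

# Same seed, shuffle enabled: the folds are identical between the two objects.
skf_a = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
skf_b = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

folds_a = [cv.tolist() for _, cv in skf_a.split(X_train, Y_train)]
folds_b = [cv.tolist() for _, cv in skf_b.split(X_train, Y_train)]
print(folds_a == folds_b)  # True: a fixed seed makes the shuffled splits reproducible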

Related

Python3, True and False of element in ndarray

I saw this question on a forum.
import numpy as np
a = np.arange(16).reshape(4,4)
print(a)
print('-'*20)
print(a[[True,True,False,False]])
print('-'*20)
print(a[:,[True,True,False,False]])
print('-'*20)
print(a[[True,True,False,False],[True,True,False,False]])
the result is
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
--------------------
[[0 1 2 3]
[4 5 6 7]]
--------------------
[[ 0 1]
[ 4 5]
[ 8 9]
[12 13]]
--------------------
[0 5]
He asked why the result of the line print(a[[True,True,False,False],[True,True,False,False]]) wasn't
[
[0,1],
[4,5]
]
I thought about it and couldn't come up with an explanation either.
No one has answered him yet, so I thought I would come here for help.
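One way to see what is going on: when both indices are boolean arrays, NumPy converts each mask to the integer indices of its True positions and pairs them element-wise, so a[[True,True,False,False],[True,True,False,False]] is equivalent to a[[0,1],[0,1]], which selects a[0,0] and a[1,1]. To get the 2x2 block the asker expected, np.ix_ builds the outer product of the two index sets. A minimal sketch:
import numpy as np

a = np.arange(16).reshape(4, 4)
mask = [True, True, False, False]

# Each boolean mask becomes the integer indices of its True entries ([0, 1]),
# and the two index arrays are then paired element-wise: a[0, 0] and a[1, 1].
print(a[mask, mask])          # [0 5]
print(a[[0, 1], [0, 1]])      # [0 5], the equivalent integer indexing

# np.ix_ forms the outer product of the index sets, giving the expected block.
print(a[np.ix_(mask, mask)])  # [[0 1]
                              #  [4 5]]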

Use integer for bin

Given this code
import pandas as pd
df = pd.DataFrame({"num": [1, 2, 3, 4, 5, 6]})
bins = pd.IntervalIndex.from_tuples([(0, 2), (2, 3), (3, 6)])
df['bin'] = pd.cut(df.num, bins, labels=False)
The result is
num bin
0 1 (0, 2]
1 2 (0, 2]
2 3 (2, 3]
3 4 (3, 6]
4 5 (3, 6]
5 6 (3, 6]
but I hope the result to be
num bin
0 1 1
1 2 1
2 3 2
3 4 3
4 5 3
5 6 3
i.e., use an integer to represent the bin range. How can I achieve this?
It turns out df['bin_num'] = df['bin'].cat.codes + 1 will do.
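Putting it together, a minimal sketch of that approach; note that labels is ignored when bins is an IntervalIndex, which is why the original call still showed intervals:
import pandas as pd

df = pd.DataFrame({"num": [1, 2, 3, 4, 5, 6]})
bins = pd.IntervalIndex.from_tuples([(0, 2), (2, 3), (3, 6)])

# pd.cut returns a Categorical of intervals; .cat.codes are the 0-based bin
# numbers, so adding 1 gives the 1-based integers asked for.
df["bin"] = pd.cut(df.num, bins)
df["bin_num"] = df["bin"].cat.codes + 1
print(df)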

How to generate an nd-array where values are greater than 1? [duplicate]

This question already has answers here:
Generate random array of floats between a range
Is it possible to generate random numbers in an nd-array such that the elements are between 1 and 2 (the interval should be between 1 and some number greater than 1)? This is what I did.
input_array = np.random.rand(3,10,10)
But the values in the nd-array are between 0 and 1.
Please let me know if that is possible. Any help and suggestions will be highly appreciated.
You can try scaling:
min_val, max_val = 1, 2
input_array = np.random.rand(3,10,10) * (max_val - min_val) + min_val
or use uniform:
input_array = np.random.uniform(min_val, max_val, (3,10,10))
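Either way, a quick sanity check (a minimal sketch, assuming the half-open interval [1, 2) is acceptable) confirms the values stay in the requested range:
import numpy as np

min_val, max_val = 1, 2
scaled = np.random.rand(3, 10, 10) * (max_val - min_val) + min_val
uniform = np.random.uniform(min_val, max_val, (3, 10, 10))

# Both approaches draw from [min_val, max_val), so every element is >= 1 and < 2.
for arr in (scaled, uniform):
    print(arr.min() >= min_val, arr.max() < max_val)  # True True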
You can use np.random.randint() in order to generate an nd-array with random integers:
import numpy as np
rand_arr=np.random.randint(low = 1, high = 10, size = (10,10))
print(rand_arr)
# It changes randomly
#Output:
[[6 9 3 4 9 2 6 2 9 7]
[7 1 7 1 6 2 4 1 8 6]
[9 5 8 3 5 9 9 7 8 4]
[7 3 6 9 9 4 7 2 8 5]
[7 7 7 4 6 6 6 7 2 5]
[3 3 8 5 8 3 4 5 4 3]
[7 8 9 3 5 8 3 5 7 9]
[3 9 7 1 3 6 3 1 4 6]
[2 9 3 9 3 6 8 2 4 8]
[6 3 9 4 9 5 5 6 3 7]]

Three-dimensional array processing

I want to turn
arr = np.array([[[1,2,3],[4,5,6],[7,8,9],[10,11,12]], [[2,2,2],[4,5,6],[7,8,9],[10,11,12]], [[3,3,3],[4,5,6],[7,8,9],[10,11,12]]])
into
arr = np.array([[[1,2,3],[7,8,9],[10,11,12]], [[2,2,2],[7,8,9],[10,11,12]], [[3,3,3],[7,8,9],[10,11,12]]])
Below is the code:
a = 0
b = 0
NewArr = []
while a < 3:
    c = arr[a, :, :]
    d = arr[a]
    print(d)
    if c[1, 2] == 6:
        c = np.delete(c, [1], axis=0)
        a += 1
        b += 1
        c = np.concatenate((d, c), axis=1)
        print(c)
But after deleting the row containing the number 6, I cannot stitch the array back together. Can someone help me?
Thank you very much for your help.
If you want a more automatic way of processing your input data, here is an answer using numpy functions:
arr[np.newaxis,~np.any(arr==6,axis=2)].reshape((3,-1,3))
np.any(arr==6, axis=2) outputs an array that is True for rows containing the value 6. We take the inverse of those booleans since we want to remove those rows. The resulting mask is then used as an index selection on arr, with an np.newaxis because the output of np.any has one axis less than the original array.
Finally, the output is reshaped into a (3, x, 3) array, where x depends on the number of rows that were removed (hence the -1 in reshape).
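To make the intermediate steps visible, here is a minimal sketch of the same idea; the boolean mask alone already flattens the kept rows, so a plain reshape is enough:
import numpy as np

arr = np.array([[[1,2,3],[4,5,6],[7,8,9],[10,11,12]],
                [[2,2,2],[4,5,6],[7,8,9],[10,11,12]],
                [[3,3,3],[4,5,6],[7,8,9],[10,11,12]]])

mask = np.any(arr == 6, axis=2)   # shape (3, 4): True for rows containing a 6
kept = arr[~mask]                 # kept rows, flattened to shape (9, 3)
print(kept.reshape((3, -1, 3)))   # regrouped into the 3 blocks of remaining rows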
Based on the input / output you provide, a simpler solution would be to just use index selection and slices:
import numpy as np
arr = np.array([[[1,2,3],[4,5,6],[7,8,9],[10,11,12]], [[2,2,2],[4,5,6],[7,8,9],[10,11,12]], [[3,3,3],[4,5,6],[7,8,9],[10,11,12]]])
print("arr=")
print(arr)
expected_result = np.array([[[1,2,3],[7,8,9],[10,11,12]], [[2,2,2],[7,8,9],[10,11,12]], [[3,3,3],[7,8,9],[10,11,12]]])
# select indices 0, 2 and 3 along axis 1 (the row axis)
a = np.copy(arr[:,[0,2,3],:])
print("a=")
print(a)
print(np.array_equal(a, expected_result))
Output:
arr=
[[[ 1 2 3]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]
[[ 2 2 2]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]
[[ 3 3 3]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]]
a=
[[[ 1 2 3]
[ 7 8 9]
[10 11 12]]
[[ 2 2 2]
[ 7 8 9]
[10 11 12]]
[[ 3 3 3]
[ 7 8 9]
[10 11 12]]]
True

Compare two matrices and create a matrix of their common values [duplicate]

This question already has an answer here:
Numpy intersect1d with array with matrix as elements
I'm currently trying to compare two matrices in Python and return their matching rows in an "intersection matrix". Both matrices hold numerical data, and I'm trying to return the rows of their common entries (I have also tried just creating a matrix with matching positional entries along the first column and then creating an accompanying tuple). These matrices do not necessarily have the same number of rows.
Let's say I have two matrices with the same number of columns but arbitrary (possibly very large and different) numbers of rows:
23  3  4  5        23  3  4  5
12  6  7  8        45  7  8  9
45  7  8  9        34  5  6  7
67  4  5  6         3  5  6  7
I'd like to create a matrix with the "intersection" being for this low dimensional example
23 3 4 5
45 7 8 9
perhaps it looks like this though:
1  2  3  4        2   4  6  7
2  4  6  7        4  10  6  9
4  6  7  8        5   6  7  8
5  6  7  8
in which case we only want:
2 4 6 7
5 6 7 8
I've tried things of this nature:
def compare(x):
    # This is a matrix I created with another function-purely numerical data of arbitrary size with fixed column length D
    y = n_c(data_cleaner(x))
    # this is a second matrix that i'd like to compare it to. note that the sizes are probably not the same, but the columns length are
    z = data_cleaner(x)
    # I initialized an array that would hold the matching values
    compare = []
    # create nested for loop that will check a single index in one matrix over all entries in the second matrix over iteration
    for i in range(len(y)):
        for j in range(len(z)):
            if y[0][i] == z[0][i]:
                # I want the row or the n tuple (shown here) of those columns with the matching first indexes as shown above
                c_vec = ([0][i],[15][i],[24][i],[0][25],[0][26])
                compare.append(c_vec)
            else:
                pass
    return compare
compare(c_i_w)
Sadly, I'm running into some errors. Specifically, it seems that I'm telling Python to improperly reference values.
Consider the arrays a and b
a = np.array([
[23, 3, 4, 5],
[12, 6, 7, 8],
[45, 7, 8, 9],
[67, 4, 5, 6]
])
b = np.array([
[23, 3, 4, 5],
[45, 7, 8, 9],
[34, 5, 6, 7],
[ 3, 5, 6, 7]
])
print(a)
[[23 3 4 5]
[12 6 7 8]
[45 7 8 9]
[67 4 5 6]]
print(b)
[[23 3 4 5]
[45 7 8 9]
[34 5 6 7]
[ 3 5 6 7]]
Then we can broadcast and get an array of equal rows with
x = (a[:, None] == b).all(-1)
print(x)
[[ True False False False]
[False False False False]
[False True False False]
[False False False False]]
Using np.where we can identify the indices
i, j = np.where(x)
Show which rows of a
print(a[i])
[[23 3 4 5]
[45 7 8 9]]
And which rows of b
print(b[j])
[[23 3 4 5]
[45 7 8 9]]
They are the same! That's good. That's what we wanted.
We can put the results into a pandas dataframe with a MultiIndex with row number from a in the first level and row number from b in the second level.
pd.DataFrame(a[i], [i, j])
      0  1  2  3
0 0  23  3  4  5
2 1  45  7  8  9
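If the matrices are very large, the all-against-all broadcast above can get memory-hungry; a plain set intersection over whole rows is one simple alternative sketch:
import numpy as np

a = np.array([[23, 3, 4, 5], [12, 6, 7, 8], [45, 7, 8, 9], [67, 4, 5, 6]])
b = np.array([[23, 3, 4, 5], [45, 7, 8, 9], [34, 5, 6, 7], [ 3, 5, 6, 7]])

# Hash whole rows as tuples; set intersection avoids comparing every pair of rows.
common_rows = set(map(tuple, a)) & set(map(tuple, b))
intersection = np.array(sorted(common_rows))
print(intersection)  # the rows shared by a and b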
