How to use tf.gather in batch? - python-3.x

I have an A = 10x1000 tensor and a B = 10x1000 index tensor. The tensor B has values between 0-999 and is used to gather values from A (B[0,:] gathers from A[0,:], B[1,:] from A[1,:], etc.).
However, if I use tf.gather(A, B) I get an array of shape (10, 1000, 1000) when I'm expecting a 10x1000 tensor back. Any ideas how I could fix this?
EDIT
Let's say A = [[1, 2, 3], [4, 5, 6]] and B = [[0, 1, 1], [2, 1, 0]]. What I want is to be able to sample A using the corresponding rows of B. This should result in C = [[1, 2, 2], [6, 5, 4]].

Dimensions of tensors are known in advance.
First we 'unstack' both the parameters and the indices (A and B respectively) along the first dimension. Then we apply tf.gather() so that each row of A corresponds to the matching row of B. Finally, we stack the results back together.
import tensorflow as tf
import numpy as np

def custom_gather(a, b):
    unstacked_a = tf.unstack(a, axis=0)
    unstacked_b = tf.unstack(b, axis=0)
    gathered = [tf.gather(x, y) for x, y in zip(unstacked_a, unstacked_b)]
    return tf.stack(gathered, axis=0)

a = tf.convert_to_tensor(np.array([[1, 2, 3], [4, 5, 6]]), tf.float32)
b = tf.convert_to_tensor(np.array([[0, 1, 1], [2, 1, 0]]), dtype=tf.int32)
gathered = custom_gather(a, b)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gathered))
    # [[1. 2. 2.]
    #  [6. 5. 4.]]
For your initial case with shape 10x1000 we get:
a = tf.convert_to_tensor(np.random.normal(size=(10, 1000)), tf.float32)
b = tf.convert_to_tensor(np.random.randint(low=0, high=1000, size=(10, 1000)), dtype=tf.int32)  # high is exclusive, so this covers indices 0-999
gathered = custom_gather(a, b)
print(gathered.get_shape().as_list())  # [10, 1000]
Update
The first dimension is unknown (i.e. None)
The previous solution works only if the first dimension is known in advance. If the dimension is unknown, we solve it as follows:
We stack the two tensors together so that the rows of A are paired with the corresponding rows of B:
# A = [[1, 2, 3], [4, 5, 6]]        [[[1 2 3]
#                                     [0 1 1]]
#                              --->
# B = [[0, 1, 1], [2, 1, 0]]         [[4 5 6]
#                                     [2 1 0]]]
We iterate over the elements of this stacked tensor (each of which pairs one row of A with the corresponding row of B) with tf.map_fn(), applying tf.gather() to each pair.
We stack the results back together with tf.stack().
import tensorflow as tf
import numpy as np

def custom_gather_v2(a, b):
    def apply_gather(x):
        return tf.gather(x[0], tf.cast(x[1], tf.int32))
    a = tf.cast(a, dtype=tf.float32)
    # cast the indices to float so both tensors can be stacked together;
    # this round-trip is exact only for indices below 2**24
    b = tf.cast(b, dtype=tf.float32)
    stacked = tf.stack([a, b], axis=1)
    gathered = tf.map_fn(apply_gather, stacked)
    return tf.stack(gathered, axis=0)
a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
b = np.array([[0, 1, 1], [2, 1, 0]], dtype=np.int32)

x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.placeholder(tf.int32, shape=(None, 3))
gathered = custom_gather_v2(x, y)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gathered, feed_dict={x: a, y: b}))
    # [[1. 2. 2.]
    #  [6. 5. 4.]]

Use tf.gather with batch_dims=-1:
import numpy as np
import tensorflow as tf
rois = np.array([[1, 2, 3],
                 [3, 2, 1]])
ind = np.array([[0, 2, 1, 1, 2, 0, 0, 1, 1, 2],
                [0, 1, 2, 0, 2, 0, 1, 2, 2, 2]])
tf.gather(rois, ind, batch_dims=-1)
# output:
# <tf.Tensor: shape=(2, 10), dtype=int64, numpy=
# array([[1, 3, 2, 2, 3, 1, 1, 2, 2, 3],
#        [3, 2, 1, 3, 1, 3, 2, 1, 1, 1]])>
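If the batch_dims argument is not available (older TensorFlow 1.x), an equivalent approach is tf.gather_nd with explicit (row, column) index pairs. This is a sketch I am adding for completeness, not part of the original answers:
import numpy as np
import tensorflow as tf

a = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.float32)
b = tf.constant([[0, 1, 1], [2, 1, 0]], dtype=tf.int32)

# pair every column index in b with its row number
rows = tf.tile(tf.range(tf.shape(b)[0])[:, None], [1, tf.shape(b)[1]])
indices = tf.stack([rows, b], axis=-1)  # shape (batch, cols, 2)
result = tf.gather_nd(a, indices)       # [[1. 2. 2.], [6. 5. 4.]]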

Related

Efficient method to compute the row-wise dot product of two square matrices of the same size in PyTorch

Supposing I have two square matrices A, B of the same size
A = torch.tensor([[1, 2], [3, 4]])
B = torch.tensor([[1, 1], [1, 1]])
And I want a resulting tensor that consists of the row-wise dot product, say
tensor([3, 7]) # i.e. (1*1 + 2*1, 3*1 + 4*1)
What is an efficient means of achieving this in PyTorch?
As you said, you can use torch.bmm, but you first need to broadcast your inputs:
>>> torch.bmm(A[..., None, :], B[..., None])
tensor([[[3]],
        [[7]]])
Alternatively you can use torch.einsum:
>>> torch.einsum('ij,ij->i', A, B)
tensor([3, 7])
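For completeness (my addition, not part of the original answers), a plain elementwise product followed by a row-wise sum gives the same result and avoids the reshaping entirely:
>>> (A * B).sum(dim=1)
tensor([3, 7])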
import torch
import numpy as np

def row_wise_product(A, B):
    num_rows, num_cols = A.shape[0], A.shape[1]
    # batched matrix multiply of (1 x n) row vectors with (n x 1) column vectors
    prod = torch.bmm(A.view(num_rows, 1, num_cols), B.view(num_rows, num_cols, 1))
    return prod

A = torch.tensor(np.array([[1, 2], [3, 4]]))
B = torch.tensor(np.array([[1, 1], [1, 1]]))
C = row_wise_product(A, B)
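Note that C has shape (2, 1, 1); squeezing it (my addition) recovers the flat row-wise products:
print(C.squeeze())  # tensor([3, 7])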

Not able to use Stratified-K-Fold on multi label classifier

The following code is used to do KFold validation, but I am unable to train the model, as it is throwing the error
ValueError: Error when checking target: expected dense_14 to have shape (7,) but got array with shape (1,)
My target variable has 7 classes. I am using LabelEncoder to encode the classes into numbers.
Seeing this error, I changed the encoder to MultiLabelBinarizer. I then get the following error
ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead.
The following is the code for KFold validation:
skf = StratifiedKFold(n_splits=10, shuffle=True)
scores = np.zeros(10)
idx = 0
for index, (train_indices, val_indices) in enumerate(skf.split(X, y)):
    print("Training on fold " + str(index+1) + "/10...")
    # Generate batches from indices
    xtrain, xval = X[train_indices], X[val_indices]
    ytrain, yval = y[train_indices], y[val_indices]
    model = None
    model = load_model()  # defined above
    scores[idx] = train_model(model, xtrain, ytrain, xval, yval)
    idx += 1
print(scores)
print(scores.mean())
I don't know what to do. I want to use Stratified K Fold on my model. Please help me.
MultiLabelBinarizer returns a vector which is of the length of your number of classes.
If you look at how StratifiedKFold splits your dataset, you will see that it only accepts a one-dimensional target variable, whereas you are trying to pass a target variable with dimensions [n_samples, n_classes].
A stratified split basically preserves your class distribution, and if you think about it, that does not make a lot of sense for a multi-label classification problem.
If you want to preserve the distribution in terms of the different combinations of classes in your target variable, then the answer here explains two ways in which you can define your own stratified split function.
UPDATE:
The logic is something like this:
Assume you have n classes and your target variable is a combination of these n classes. Then there are (2^n) - 1 possible combinations (not including all zeros). You can now create a new target variable by treating each combination as a new label.
For example, if n=3, you will have 7 unique combinations:
1. [1, 0, 0]
2. [0, 1, 0]
3. [0, 0, 1]
4. [1, 1, 0]
5. [1, 0, 1]
6. [0, 1, 1]
7. [1, 1, 1]
Map all your labels to this new target variable. You can now look at your problem as simple multi-class classification instead of multi-label classification.
Now you can directly use StratifiedKFold with y_new as your target. Once the splits are done, you can map your labels back.
Code sample:
import numpy as np
np.random.seed(1)
y = np.random.randint(0, 2, (10, 7))
y = y[np.where(y.sum(axis=1) != 0)[0]]
OUTPUT:
array([[1, 1, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 1, 0, 1],
       [1, 0, 0, 1, 0, 0, 0],
       [1, 0, 0, 1, 0, 0, 0],
       [1, 0, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 0, 1, 1],
       [1, 1, 1, 1, 0, 1, 1],
       [0, 0, 1, 0, 0, 1, 1],
       [1, 0, 1, 0, 0, 1, 1],
       [0, 1, 1, 1, 1, 0, 0]])
Label encode your class vectors:
from sklearn.preprocessing import LabelEncoder

def get_new_labels(y):
    # treat the string form of each row (one class combination) as a single label
    y_new = LabelEncoder().fit_transform([''.join(str(l)) for l in y])
    return y_new

y_new = get_new_labels(y)
OUTPUT:
array([7, 6, 3, 3, 2, 5, 8, 0, 4, 1])
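A hypothetical continuation (my sketch, not part of the original answer) showing the new labels driving the split; note that on real data each class combination should occur at least n_splits times, otherwise sklearn will warn or refuse to stratify (with this toy y most combinations occur once, so a warning is expected):
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=1)
for train_idx, val_idx in skf.split(y, y_new):
    # slice the original multi-label rows with the stratified indices
    y_train, y_val = y[train_idx], y[val_idx]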

Selecting second dim of tensor using an index tensor

I have a 2D tensor and an index tensor. The 2D tensor has a batch dimension and a dimension with 3 values. The index tensor selects exactly 1 of the 3 values per row. What is the "best" way to produce a slice containing just the elements selected by the index tensor?
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9]])
t = tensor([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])
i = torch.tensor([0,0,1], dtype=torch.int64)
tensor([0, 0, 1])
Expected output...
tensor([1, 4, 8])
One answer is to index with the row numbers and the column indices together:
import torch
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9]])
col_i = [0, 0, 1]
row_i = range(3)
print(t[row_i, col_i])
# tensor([1, 4, 8])
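An equivalent sketch (my addition) using torch.gather, which also generalizes to picking more than one element per row:
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
i = torch.tensor([0, 0, 1], dtype=torch.int64)
out = t.gather(1, i.unsqueeze(1)).squeeze(1)  # gather one element per row along dim 1
print(out)  # tensor([1, 4, 8])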

'Tensor' object has no attribute 'assign_add'

I encountered the error 'Tensor' object has no attribute 'assign_add' when I try to use the assign_add or assign_sub function.
The code is shown below:
I defined two tensors, t1 and t2, with the same shape and the same data type.
>>> t1 = tf.Variable(tf.ones([2,3,4],tf.int32))
>>> t2 = tf.Variable(tf.zeros([2,3,4],tf.int32))
>>> t1
<tf.Variable 'Variable_4:0' shape=(2, 3, 4) dtype=int32_ref>
>>> t2
<tf.Variable 'Variable_5:0' shape=(2, 3, 4) dtype=int32_ref>
Then I use assign_add on t1 and t2 to create t3:
>>> t3 = tf.assign_add(t1,t2)
>>> t3
<tf.Tensor 'AssignAdd_4:0' shape=(2, 3, 4) dtype=int32_ref>
Then I try to create a new tensor t4 using t1[1] and t2[1], which are tensors with the same shape and data type:
>>> t1[1]
<tf.Tensor 'strided_slice_23:0' shape=(3, 4) dtype=int32>
>>> t2[1]
<tf.Tensor 'strided_slice_24:0' shape=(3, 4) dtype=int32>
>>> t4 = tf.assign_add(t1[1],t2[1])
but got an error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/admin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 245, in assign_add
return ref.assign_add(value)
AttributeError: 'Tensor' object has no attribute 'assign_add'
The same error occurs when using assign_sub:
>>> t4 = tf.assign_sub(t1[1],t2[1])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/admin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 217, in assign_sub
return ref.assign_sub(value)
AttributeError: 'Tensor' object has no attribute 'assign_sub'
Any idea what is wrong?
Thanks.
The error is because t1 is a tf.Variable object, while t1[1] is a tf.Tensor (you can see this in the outputs of your print statements). Ditto for t2 and t2[1].
As it happens, a tf.Tensor can't be mutated (it's read-only), whereas a tf.Variable can be (read as well as written); see here.
Since tf.assign_add does an in-place addition, it doesn't work with t1[1] and t2[1] as inputs, while there's no such problem with t1 and t2 as inputs.
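To make the distinction concrete, here is a minimal sketch (my addition) you can run to inspect the types involved:
import tensorflow as tf

t1 = tf.Variable(tf.ones([2, 3, 4], tf.int32))
print(type(t1))     # a tf.Variable: mutable, supports assign_add / assign_sub
print(type(t1[1]))  # a tf.Tensor: read-only, no assign_* methods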
What you are trying to do here is a little bit confusing. I don't think you can update slices and create a new tensor at the same time/line.
If you want to update slices before creating t4, use tf.scatter_add() (or tf.scatter_sub() or tf.scatter_update() accordingly) as suggested here. For example:
sa = tf.scatter_add(t1, [1], t2[1:2])
Then if you want to get a new tensor t4 using the new t1[1] and t2[1], you can do:
with tf.control_dependencies([sa]):
    t4 = tf.add(t1[1], t2[1])
Here are some examples of using tf.scatter_add and tf.scatter_sub:
>>> t1 = tf.Variable(tf.ones([2,3,4],tf.int32))
>>> t2 = tf.Variable(tf.zeros([2,3,4],tf.int32))
>>> init = tf.global_variables_initializer()
>>> sess.run(init)
>>> t1.eval()
array([[[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]],
       [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]], dtype=int32)
>>> t2.eval()
array([[[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]],
       [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]], dtype=int32)
>>> t3 = tf.scatter_add(t1, [0], [[[2,2,2,2],[2,2,2,2],[2,2,2,2]]])
>>> sess.run(t3)
array([[[3, 3, 3, 3],
        [3, 3, 3, 3],
        [3, 3, 3, 3]],
       [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]], dtype=int32)
>>> t4 = tf.scatter_sub(t1, [0,0,0], [t1[1], t1[1], t1[1]])
The following is another example, which can be found at https://blog.csdn.net/efforever/article/details/77073103.
Because few examples illustrating scatter_xxx can be found on the web, I paste it below for reference.
import tensorflow as tf
import numpy as np

with tf.Session() as sess1:
    c = tf.Variable([[1,2,0],[2,3,4]], dtype=tf.float32, name='biases')
    cc = tf.Variable([[1,2,0],[2,3,4]], dtype=tf.float32, name='biases1')
    ccc = tf.Variable([0,1], dtype=tf.int32, name='biases2')
    # centers diff corresponding to the labels
    centers = tf.scatter_sub(c, ccc, cc)
    #centers = tf.scatter_sub(c,[0,1],cc)
    #centers = tf.scatter_sub(c,[0,1],[[1,2,0],[2,3,4]])
    #centers = tf.scatter_sub(c,[0,0,0],[[1,2,0],[2,3,4],[1,1,1]])
    # i.e. c[0]-[1,2,0], c[0]-[2,3,4], c[0]-[1,1,1]; all updates are applied,
    # and indices and updates must have the same number of elements
    a = tf.Variable(initial_value=[[0, 0, 0, 0],[0, 0, 0, 0]])
    b = tf.scatter_update(a, [0, 1], [[1, 1, 0, 0], [1, 0, 4, 0]])
    #b = tf.scatter_update(a, [0, 1, 0], [[1, 1, 0, 0], [1, 0, 4, 0],[1, 1, 0, 1]])
    init = tf.global_variables_initializer()
    sess1.run(init)
    print(sess1.run(centers))
    print(sess1.run(b))
Output of the version above:
[[ 0.  0.  0.]
 [ 0.  0.  0.]]
[[1 1 0 0]
 [1 0 4 0]]
Output with the commented-out scatter_sub/scatter_update variants instead:
[[-3. -4. -5.]
 [ 2.  3.  4.]]
[[1 1 0 1]
 [1 0 4 0]]
You can also use tf.assign() as a workaround, as sliced assignment was implemented for it, unlike for tf.assign_add() or tf.assign_sub(), as of TensorFlow version 1.8. Please note, you can only do one slicing operation (a slice of a slice will not work), and this is not atomic: if multiple threads read and write the same variable, you don't know which operation will be the last one to write unless you explicitly code for it. tf.assign_add() and tf.assign_sub() are guaranteed to be thread-safe. Still, this is better than nothing; consider this code (tested):
import tensorflow as tf

t1 = tf.Variable(tf.zeros([2,3,4],tf.int32))
t2 = tf.Variable(tf.ones([2,3,4],tf.int32))
assign_op = tf.assign(t1[1], t1[1] + t2[1])

init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    res = sess.run(assign_op)
    print(res)
will output:
[[[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
as desired.
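For what it's worth, in TensorFlow 2.x variable slices support .assign() directly, so a minimal sketch of the same update (my addition, assuming eager execution) is:
import tensorflow as tf

t1 = tf.Variable(tf.zeros([2, 3, 4], tf.int32))
t2 = tf.Variable(tf.ones([2, 3, 4], tf.int32))
t1[1].assign(t1[1] + t2[1])  # in-place update of the slice
print(t1.numpy())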

Elements in a list are overwritten

I tried to program a function which creates the linear span of a list of independent vectors, but it seems that the last calculated vector overwrites all the other elements. It'd be nice if someone could help me fix it.
def span_generator(liste, n):
    """function to generate the span of a list of linear independent
    vectors (in liste) in the n-dimensional vectorspace of a finite
    field with characteristic 2, returns a list of all elements which
    lie inside the span"""
    results = []
    blank = []
    for i in range(n):
        blank.append(0)
    a = blank
    if len(liste) > 1:
        listenwert = liste[-1]
        liste.pop(-1)
        values = span_generator(liste, n)
        for i in range(2):
            for j in range(len(values)):
                for k in range(n):
                    a[k] = (i*listenwert[k] + values[j][k]) % 2
                results.append(a)
    else:
        for i in range(2):
            for j in range(n):
                a[j] = (i*liste[0][j])
            results.append(a)
    print(results)
    return results
print(span_generator([[1,0],[0,1]],2)) gives following results
[[1, 0], [1, 0]]
[[1, 1], [1, 1], [1, 1], [1, 1]]
[[1, 1], [1, 1], [1, 1], [1, 1]]
instead of the expected: [[0,0],[1,0],[0,1],[1,1]]
Edit: I tried to simplify the program with itertools.product, but it didn't solve the problem.
def span_generator(liste):
    n = len(liste[0])
    results = []
    coeff = list(itertools.product(range(2), repeat=n))
    blank = []
    for i in range(n):
        blank.append(0)
    for i in range(len(coeff)):
        a = blank
        for j in range(len(coeff[0])):
            for k in range(n):
                a[k] = (a[k] + coeff[i][j]*liste[j][k]) % 2
        results.append(a)
    return results
Output: span_generator([[0,1],[1,0]])
[[0, 0], [0, 0], [0, 0], [0, 0]]
But it should give [[0,0],[0,1],[1,0],[1,1]]
Another example: span_generator([[0,1,1],[1,1,0]]) should give [[0,0,0],[0,1,1],[1,1,0],[1,0,1]] (2 = 0 since I'm calculating modulo 2).
Coefficients
You can use itertools.product to generate the coefficients:
n = len(liste[0])
coefficients = itertools.product(range(2), repeat=len(liste))
yields an iterator with this content:
[(0, 0), (0, 1), (1, 0), (1, 1)]
Linear combinations
You can then multiply these coefficients with the transpose of your liste (list(zip(*liste))):
for coeff in coefficients:
    yield [sum((a * c) for a, c in zip(transpose[i], coeff)) for i in range(n)]
which takes, for each dimension (for i in range(n)), the sum of the products.
import itertools

def span_generator3(liste):
    n = len(liste[0])
    transpose = list(zip(*liste))
    coefficients = itertools.product(range(2), repeat=len(liste))
    for coeff in coefficients:
        yield [sum((a * c) for a, c in zip(transpose[i], coeff)) for i in range(n)]
This produces an iterator. If you want the result in list form, just call list() on the iterator.
Result
list(span_generator3([[1,2],[4,8]]))
output:
[[0, 0], [4, 8], [1, 2], [5, 10]]
Higher dimensions
list(sorted(span_generator3([[1,2, 4],[8, 16, 32], [64, 128, 256]])))
output:
[[0, 0, 0],
[1, 2, 4],
[8, 16, 32],
[9, 18, 36],
[64, 128, 256],
[65, 130, 260],
[72, 144, 288],
[73, 146, 292]]
Modulo 2
If you want the result modulo 2, that's just adding 2 characters in the right place
def span_generator3_mod2(liste):
    n = len(liste[0])
    transpose = list(zip(*liste))
    coefficients = itertools.product(range(2), repeat=len(liste))
    # print(list(itertools.product(range(2), repeat=len(liste))))
    for coeff in coefficients:
        yield [sum((a * c) for a, c in zip(transpose[i], coeff)) % 2 for i in range(n)]
list(span_generator3_mod2([[0,1,1],[1,1,0]])) gives
[[0, 0, 0], [1, 1, 0], [0, 1, 1], [1, 0, 1]]
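As a quick sanity check (my addition) against the 2-dimensional example from the question:
print(sorted(span_generator3_mod2([[1, 0], [0, 1]])))
# [[0, 0], [0, 1], [1, 0], [1, 1]]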
