I want out[b,i,j,c] := params[indices[b,i,j,c], b, i, j, c]. I am aware of tf.gather and tf.gather_nd, but I am not sure how to achieve this.
You can do that like this:
import tensorflow as tf
# 5D (or higher) tensor
params = tf.placeholder(tf.float32, [2, 3, 4, 5, 6])
# 4D tensor; its shape must match dims 1..4 of params, and its values index dim 0
indices = tf.placeholder(tf.int32, [3, 4, 5, 6])
# We assume the number of dimensions of indices is statically known
# Otherwise you would need to use tf.while_loop
ndims = indices.shape.ndims
# Get shape of indices
s = tf.shape(indices, out_type=indices.dtype)
# Make grid of additional indices
ranges = [tf.range(s[i]) for i in range(ndims)]
grid = tf.meshgrid(*ranges, indexing='ij')
# Put grid together with indices
indices_all = tf.stack([indices] + grid, axis=-1)
# Gather result
out = tf.gather_nd(params, indices_all)
print(out)
# Tensor("GatherNd:0", shape=(5, 4, 3, 2), dtype=float32)
Say I have a tensor A and an index tensor: A = [1, 2, 3, 4], indexes = [1, 0, 3, 2].
I want to create a new tensor from these two with the following result: [2, 1, 4, 3].
Each element of the result is an element of A, and the order is defined by the indexes tensor.
Is there a way to do it with PyTorch tensor ops without loops?
My goal is to do this for a 2D tensor, but I don't think it is possible without loops, so I thought I would project it to 1D, do the work there, and project it back to 2D.
You can use scatter:
A = torch.tensor([1, 2, 3, 4])
indices = torch.tensor([1, 0, 3, 2])
result = torch.tensor([0, 0, 0, 0])
print(result.scatter_(0, indices, A))  # tensor([2, 1, 4, 3])
Note that scatter_ writes A[i] to position indices[i], which is the inverse of a gather; it gives the desired result here because this particular permutation is its own inverse.
In 1D you can simply perform A[indexes].
In 2D it is still doable this way:
A = torch.arange(5, 10).repeat(3, 1) # shape: (3, 5)
indexes = torch.stack([torch.randperm(5) for _ in range(3)]) # shape (3, 5)
A_sort = A[torch.arange(3).unsqueeze(1), indexes]
print(A_sort)
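Alternatively, torch.gather performs the same row-wise lookup in a single call, without building the auxiliary row-index tensor; a small sketch using the tensors above:
A_sort = torch.gather(A, 1, indexes)  # A_sort[i][j] = A[i][indexes[i][j]]
print(A_sort)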
I have created n-grams / doc-ids for document classification:
def create_dataset(tok_docs, vocab, n):
    n_grams = []
    document_ids = []
    for i, doc in enumerate(tok_docs):
        for n_gram in [doc[0][i:i+n] for i in range(len(doc[0]) - 1)]:
            n_grams.append(n_gram)
            document_ids.append(i)
    return n_grams, document_ids
def create_pytorch_datasets(n_grams, doc_ids):
    n_grams_tensor = torch.tensor(n_grams)
    doc_ids_tensor = torch.tensor(doc_ids)
    full_dataset = TensorDataset(n_grams_tensor, doc_ids_tensor)
    return full_dataset
create_dataset returns pair of (n-grams, document_ids) like below:
n_grams, doc_ids = create_dataset( ... )
train_data = create_pytorch_datasets(n_grams, doc_ids)
>>> train_data[0:100]
(tensor([[2076, 517, 54, 3647, 1182, 7086],
[517, 54, 3647, 1182, 7086, 1149],
...
]),
tensor([0, 0, 0, 0, 0, ..., 3, 3, 3]))
train_loader = DataLoader(train_data, batch_size = batch_size, shuffle = True)
The first tensor holds the n-grams and the second the doc_ids.
However, the amount of training data per label varies with document length.
If one document is very long, the training data will contain many pairs carrying its label.
I think this can cause overfitting, because the classification model will tend to assign inputs to the long documents.
So I want to draw input batches from a distribution that is uniform over labels (doc_ids). How can I fix this in the code above?
P.S. Given train_data like below, I want batches drawn with probabilities like this:
n-grams doc_ids
([1, 2, 3, 4], 1) ====> 0.33
([1, 3, 5, 7], 2) ====> 0.33
([2, 3, 4, 5], 3) ====> 0.33 * 0.25
([3, 5, 2, 5], 3) ====> 0.33 * 0.25
([6, 3, 4, 5], 3) ====> 0.33 * 0.25
([2, 3, 1, 5], 3) ====> 0.33 * 0.25
In PyTorch you can pass a sampler or a batch_sampler to the DataLoader to change how datapoints are sampled.
docs on the dataloader:
https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler
documentation on the sampler: https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler
For instance, you can use WeightedRandomSampler to assign a weight to every datapoint; the weight can be the inverse of the document's length.
I would make the following modifications to the code:
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def create_dataset(tok_docs, vocab, n):
    n_grams = []
    document_ids = []
    weights = []  # << list of weights for sampling
    for i, doc in enumerate(tok_docs):
        for n_gram in [doc[0][i:i+n] for i in range(len(doc[0]) - 1)]:
            n_grams.append(n_gram)
            document_ids.append(i)
            weights.append(1 / len(doc[0]))  # << n-grams of long documents are sampled less often
    return n_grams, document_ids, weights

# num_samples is the number of draws per epoch; one draw per datapoint is the usual choice
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)  # << create the sampler
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=False, sampler=sampler)  # << shuffle must be False when a sampler is given
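If you want to match the distribution from the question exactly (uniform over doc_ids, then uniform within a document), a small sketch that weights each n-gram by the inverse of its document's n-gram count instead of 1/len(doc[0]):
from collections import Counter
counts = Counter(document_ids)
weights = [1.0 / counts[doc_id] for doc_id in document_ids]
# Every document now carries the same total weight, so labels are drawn uniformly
# and n-grams uniformly within each document. The weights need not sum to 1;
# WeightedRandomSampler normalizes them internally.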
I have an A = 10x1000 tensor and a B = 10x1000 index tensor. Tensor B has values between 0-999, and it is used to gather values from A (B[0,:] gathers from A[0,:], B[1,:] from A[1,:], etc.).
However, if I use tf.gather(A, B) I get an array of shape (10, 1000, 1000) when I'm expecting a 10x1000 tensor back. Any ideas how I could fix this?
EDIT
Let's say A= [[1, 2, 3],[4,5,6]] and B = [[0, 1, 1],[2,1,0]] What I want is to be able to sample A using the corresponding B. This should result in C = [[1, 2, 2],[6,5,4]].
Dimensions of tensors are known in advance.
First we 'unstack' both the parameters and indices (A and B respectively) along the first dimension. Then we apply tf.gather() such that rows of A correspond to the rows of B. Finally, we stack together the result.
import tensorflow as tf
import numpy as np
def custom_gather(a, b):
    unstacked_a = tf.unstack(a, axis=0)
    unstacked_b = tf.unstack(b, axis=0)
    gathered = [tf.gather(x, y) for x, y in zip(unstacked_a, unstacked_b)]
    return tf.stack(gathered, axis=0)
a = tf.convert_to_tensor(np.array([[1, 2, 3], [4, 5, 6]]), tf.float32)
b = tf.convert_to_tensor(np.array([[0, 1, 1], [2, 1, 0]]), dtype=tf.int32)
gathered = custom_gather(a, b)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gathered))
# [[1. 2. 2.]
# [6. 5. 4.]]
For your initial case with shape 10x1000 we get:
a = tf.convert_to_tensor(np.random.normal(size=(10, 1000)), tf.float32)
b = tf.convert_to_tensor(np.random.randint(low=0, high=1000, size=(10, 1000)), dtype=tf.int32)
gathered = custom_gather(a, b)
print(gathered.get_shape().as_list()) # [10, 1000]
Update
The first dimension is unknown (i.e. None)
The previous solution works only if the first dimension is known in advance. If the dimension is unknown we solve it as follows:
We stack the two tensors together so that each row of A is paired with the corresponding row of B:
# A = [[1, 2, 3], [4, 5, 6]]         [[[1 2 3]
#                                       [0 1 1]]
#                               --->
# B = [[0, 1, 1], [2, 1, 0]]          [[4 5 6]
#                                       [2 1 0]]]
We then iterate over this stacked tensor with tf.map_fn(), applying tf.gather() to each (row of A, row of B) pair.
tf.map_fn() stacks the per-row results back into a single tensor for us.
import tensorflow as tf
import numpy as np
def custom_gather_v2(a, b):
    def apply_gather(x):
        return tf.gather(x[0], tf.cast(x[1], tf.int32))
    # Cast both tensors to a common dtype so they can be stacked into one tensor
    a = tf.cast(a, dtype=tf.float32)
    b = tf.cast(b, dtype=tf.float32)
    stacked = tf.stack([a, b], axis=1)
    gathered = tf.map_fn(apply_gather, stacked)
    return gathered  # tf.map_fn already stacks the per-row results
a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
b = np.array([[0, 1, 1], [2, 1, 0]], dtype=np.int32)
x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.placeholder(tf.int32, shape=(None, 3))
gathered = custom_gather_v2(x, y)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gathered, feed_dict={x: a, y: b}))
# [[1. 2. 2.]
# [6. 5. 4.]]
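As an aside, tf.map_fn can also iterate over a tuple of tensors directly, which avoids the cast-to-float trick entirely; a sketch under the same setup (dtype tells map_fn the output type when it differs from the input structure):
def custom_gather_v3(a, b):
    return tf.map_fn(lambda x: tf.gather(x[0], x[1]), (a, b), dtype=tf.float32)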
Use tf.gather with batch_dims=-1:
import numpy as np
import tensorflow as tf
rois = np.array([[1, 2, 3],[3, 2, 1]])
ind = np.array([[0, 2, 1, 1, 2, 0, 0, 1, 1, 2],
[0, 1, 2, 0, 2, 0, 1, 2, 2, 2]])
tf.gather(rois, ind, batch_dims=-1)
# output:
# <tf.Tensor: shape=(2, 10), dtype=int64, numpy=
# array([[1, 3, 2, 2, 3, 1, 1, 2, 2, 3],
# [3, 2, 1, 3, 1, 3, 2, 1, 1, 1]])>
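Applied to the original question's shapes (10, 1000) this is a one-liner (batch_dims=1 would be equivalent here):
A = tf.random.normal([10, 1000])
B = tf.random.uniform([10, 1000], 0, 1000, dtype=tf.int32)
C = tf.gather(A, B, batch_dims=-1)  # C[i, j] = A[i, B[i, j]]; shape (10, 1000)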
I have two tensors - one with bin specification and the other one with observed values. I'd like to count how many values are in each bin.
I know how to do this in either NumPy or bare Python, but I need to do this in pure TensorFlow. Is there a more sophisticated version of tf.histogram_fixed_width with an argument for bin specification?
Example:
# Input - 3 bins and 2 observed values
bin_spec = [0, 0.5, 1, 2]
values = [0.1, 1.1]
# Histogram
[1, 0, 1]
This seems to work, although I consider it to be quite memory- and time-consuming.
import tensorflow as tf
bins = [-1000, 1, 3, 10000]
vals = [-3, 0, 2, 4, 5, 10, 12]
vals = tf.constant(vals, dtype=tf.float64, name="values")
bins = tf.constant(bins, dtype=tf.float64, name="bins")
resh_bins = tf.reshape(bins, shape=(-1, 1), name="bins-reshaped")
resh_vals = tf.reshape(vals, shape=(1, -1), name="values-reshaped")
left_bin = tf.less_equal(resh_bins, resh_vals, name="left-edge")
right_bin = tf.greater(resh_bins, resh_vals, name="right-edge")
resu = tf.logical_and(left_bin[:-1, :], right_bin[1:, :], name="bool-bins")
counts = tf.reduce_sum(tf.to_float(resu), axis=1, name="count-in-bins")
with tf.Session() as sess:
    print(sess.run(counts))
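For the example above this prints [2. 1. 4.]. If memory is a concern, a lighter sketch (assuming a TF version recent enough to have tf.searchsorted, and that all values lie within the outermost bin edges) is to find each value's bin index with a binary search and count with bincount:
# Index of the bin each value falls into: position of the rightmost edge <= value
idx = tf.searchsorted(bins, vals, side='right') - 1
counts = tf.math.bincount(idx, minlength=3)  # 3 = number of bins (edges - 1)
with tf.Session() as sess:
    print(sess.run(counts))  # [2 1 4]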
I have a theano tensor and I would like to clip its values, but each index to a different range.
For example, if I have a vector [a, b, c], I want to clip a to [0, 1], b to [2, 3], and c to [3, 5].
How can I do that efficiently?
Thanks!
The theano.tensor.clip operation supports symbolic minimum and maximum values so you can pass three tensors, all of the same shape, and it will perform an element-wise clip of the first with respect to the second (minimum) and third (maximum).
This code shows two variations on this theme. v1 requires the minimum and maximum values to be passed as separate vectors while v2 allows the minimum and maximum values to be passed more like a list of pairs, represented as a two column matrix.
import theano
import theano.tensor as tt

def v1():
    x = tt.vector()
    min_x = tt.vector()
    max_x = tt.vector()
    y = tt.clip(x, min_x, max_x)
    f = theano.function([x, min_x, max_x], outputs=y)
    print(f([2, 1, 4], [0, 2, 3], [1, 3, 5]))

def v2():
    x = tt.vector()
    min_max = tt.matrix()
    y = tt.clip(x, min_max[:, 0], min_max[:, 1])
    f = theano.function([x, min_max], outputs=y)
    print(f([2, 1, 4], [[0, 1], [2, 3], [3, 5]]))

def main():
    v1()
    v2()

main()
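Both variants print [ 1.  2.  4.] for the example input: 2 is clipped down into [0, 1], 1 is clipped up into [2, 3], and 4 already lies within [3, 5].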