How to implement pytorch adaptive_avg_pool2d in C++ on my own?

I tried to mimic the behavior of pytorch adaptive_avg_pool2d, but the results are not the same:
import numpy as np
import torch
import torch.nn.functional as F

def test_pool():
    a = np.fromfile("in.bin", dtype=np.float32)
    a = np.reshape(a, [1, 12, 25, 25])
    a = torch.as_tensor(a)
    b = F.adaptive_avg_pool2d(a, [7, 7])
    print(b)
    print(b.shape)
    avg_pool = torch.nn.AvgPool2d([7, 7], [3, 3])
    c = avg_pool(a)
    print(c)
    print(c.shape)
What are the principles behind pytorch adaptive_avg_pool2d?
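In outline, adaptive pooling derives its windows from the input and output sizes instead of using a fixed kernel and stride: output cell i covers input rows from floor(i*H/out) up to ceil((i+1)*H/out), so for 25 -> 7 the windows are 4 or 5 elements wide and can overlap, which no single AvgPool2d kernel/stride combination reproduces. Here is a minimal NumPy sketch of that rule (my_adaptive_avg_pool2d is an illustrative name; the bin formula matches the ATen reference implementation as far as I know):
import numpy as np

def my_adaptive_avg_pool2d(x, out_h, out_w):
    # x has shape (N, C, H, W); output cell (i, j) averages the input window
    # rows [floor(i*H/out_h), ceil((i+1)*H/out_h)) and the analogous columns
    def start(i, size, out):
        return (i * size) // out
    def end(i, size, out):
        return -((-(i + 1) * size) // out)  # ceil division
    n, c, h, w = x.shape
    y = np.zeros((n, c, out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            y[:, :, i, j] = x[:, :, start(i, h, out_h):end(i, h, out_h),
                              start(j, w, out_w):end(j, w, out_w)].mean(axis=(2, 3))
    return y
Comparing my_adaptive_avg_pool2d(a.numpy(), 7, 7) against F.adaptive_avg_pool2d(a, [7, 7]) on the same input should reproduce b above, while AvgPool2d([7, 7], [3, 3]) will not.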

Related

torch matrix equality sum operation

I want to do an operation similar to matrix multiplication, except instead of multiplying I want to check equality. The effect that I want to achieve is similar to the following:
a = torch.Tensor([[1, 2, 3], [4, 5, 6]]).to(torch.uint8)
b = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).to(torch.uint8)
result = [[sum(a[i] == b[j]) for j in range(len(b))] for i in range(len(a))]
Is there a way that I can use einsum, or any other function in pytorch to achieve the above efficiently?
You can use torch.repeat and torch.repeat_interleave:
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
b = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
mask = a.repeat_interleave(3, dim=0) == b.repeat((2, 1))
torch.sum(mask, axis=1).reshape(a.shape)
# output
tensor([[3, 0, 0],
        [0, 3, 0]])
You can make use of broadcasting to do the same, for instance with
result = (a[:, None, :] == b[None, :, :]).sum(dim=2)
Here None just introduces a dummy dimension; alternatively you can use the less visual .unsqueeze() instead.
Matrix multiplication is ij,jk->ik in einsum notation; all of the following operations are equivalent, with varying levels of verbosity:
a @ b
torch.einsum("ij,jk", a, b)
torch.einsum("ij,jk->ik", a, b)
(a[:,:,None] * b[None,:,:]).sum(1)
"multiply i and k dimensions and reduce j dimension"
       i, j, k          i,    j, k
a:    (2, 3)      =>   (2, 3, None)
b:       (3, 3)        (None, 3, 3)
It should now be clear from this decomposition that the multiplication can be replaced with any binary operation, e.g. the equality operation.
Unfortunately, there is no generalized form of einsum in PyTorch (AFAIK) that swaps out the multiplication "out-of-the-box". There is, however, the einops library, which is basically a wrapper around deep learning frameworks such as PyTorch.
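To make the swappable-operation point concrete, here is a small sketch; pairwise_reduce is a hypothetical helper name, not an existing torch API:
import torch

def pairwise_reduce(a, b, op):
    # Generalized "ij,kj->ik": broadcast a binary op over all row pairs,
    # then reduce the shared j dimension, mirroring the broadcasting answer above
    return op(a[:, None, :], b[None, :, :]).sum(dim=2)

a = torch.tensor([[1, 2, 3], [4, 5, 6]])
b = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(pairwise_reduce(a, b, torch.eq))   # tensor([[3, 0, 0], [0, 3, 0]])
print(pairwise_reduce(a, b, torch.mul))  # same result as a @ b.T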

How to efficiently repeat tensor elements a variable number of times in pytorch?

For example, if I have a tensor A = [[1,1,1], [2,2,2], [3,3,3]], and B = [1,2,3]. How do I get C = [[1,1,1], [2,2,2], [2,2,2], [3,3,3], [3,3,3], [3,3,3]], and doing this batch-wise?
My current element-wise solution btw (takes forever...):
def get_char_context(valid_embeds, words_lens):
    chars_contexts = []
    for ve, wl in zip(valid_embeds, words_lens):
        for idx, (e, l) in enumerate(zip(ve, wl)):
            if idx == 0:
                chars_context = e.view(1, -1).repeat(l, 1)
            else:
                chars_context = torch.cat((chars_context, e.view(1, -1).repeat(l, 1)), 0)
        chars_contexts.append(chars_context)
    return chars_contexts
I'm doing this to add BERT word embeddings to a char-level seq2seq task...
Use this:
import torch
# A is your tensor
B = torch.tensor([1, 2, 3])
C = A.repeat_interleave(B, dim = 0)
EDIT:
The above works fine if A is a single 2D tensor. To repeat all (2D) tensors in a batch in the same manner, this is a simple workaround:
A = torch.tensor([[[1, 1, 1], [2, 2, 2], [3, 3, 3]],
                  [[1, 2, 3], [4, 5, 6], [2, 2, 2]]])  # A has 2 tensors, each of shape (3, 3)
B = torch.tensor([1, 2, 3])  # Repeat counts for each row of every tensor in the batch
A1 = A.reshape(1, -1, A.shape[2]).squeeze()
B1 = B.repeat(A.shape[0])
C = A1.repeat_interleave(B1, dim=0).reshape(A.shape[0], -1, A.shape[2])
C is:
tensor([[[1, 1, 1],
         [2, 2, 2],
         [2, 2, 2],
         [3, 3, 3],
         [3, 3, 3],
         [3, 3, 3]],

        [[1, 2, 3],
         [4, 5, 6],
         [4, 5, 6],
         [2, 2, 2],
         [2, 2, 2],
         [2, 2, 2]]])
As you can see, each inner tensor in the batch is repeated in the same manner.
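For the original get_char_context helper, the same idea collapses the inner loop to a single call per sequence; here is a sketch assuming valid_embeds is a list of (seq_len, dim) tensors and words_lens a matching list of integer repeat-count tensors:
import torch

def get_char_context(valid_embeds, words_lens):
    # Each row e of ve is repeated wl[idx] times, exactly as in the
    # original concatenation loop, but in one vectorized call
    return [ve.repeat_interleave(wl, dim=0)
            for ve, wl in zip(valid_embeds, words_lens)]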

How to use tf.gather in batch?

I have an A = 10x1000 tensor and a B = 10x1000 index tensor. The tensor B has values between 0-999, and it's used to gather values from A (B[0,:] gathers from A[0,:], B[1,:] from A[1,:], etc...).
However, if I use tf.gather(A, B) I get an array of shape (10, 1000, 1000) when I'm expecting a 10x1000 tensor back. Any ideas how I could fix this?
EDIT
Let's say A = [[1, 2, 3], [4, 5, 6]] and B = [[0, 1, 1], [2, 1, 0]]. What I want is to be able to sample A using the corresponding B. This should result in C = [[1, 2, 2], [6, 5, 4]].
Dimensions of tensors are known in advance.
First we 'unstack' both the parameters and indices (A and B respectively) along the first dimension. Then we apply tf.gather() such that rows of A correspond to the rows of B. Finally, we stack together the result.
import tensorflow as tf
import numpy as np

def custom_gather(a, b):
    unstacked_a = tf.unstack(a, axis=0)
    unstacked_b = tf.unstack(b, axis=0)
    gathered = [tf.gather(x, y) for x, y in zip(unstacked_a, unstacked_b)]
    return tf.stack(gathered, axis=0)

a = tf.convert_to_tensor(np.array([[1, 2, 3], [4, 5, 6]]), tf.float32)
b = tf.convert_to_tensor(np.array([[0, 1, 1], [2, 1, 0]]), dtype=tf.int32)
gathered = custom_gather(a, b)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gathered))
    # [[1. 2. 2.]
    #  [6. 5. 4.]]
For your initial case with shapes 10x1000 we get:
a = tf.convert_to_tensor(np.random.normal(size=(10, 1000)), tf.float32)
b = tf.convert_to_tensor(np.random.randint(low=0, high=1000, size=(10, 1000)), dtype=tf.int32)
gathered = custom_gather(a, b)
print(gathered.get_shape().as_list())  # [10, 1000]
Update
The first dimension is unknown (i.e. None)
The previous solution works only if the first dimension is known in advance. If it is unknown, we proceed as follows:
We stack the two tensors together along a new axis, pairing each row of A with the corresponding row of B:
# A = [[1, 2, 3], [4, 5, 6]]        [[[1 2 3]
#                                     [0 1 1]]
#                              --->
# B = [[0, 1, 1], [2, 1, 0]]         [[4 5 6]
#                                     [2 1 0]]]
We iterate over the elements of this stacked tensor (each element consisting of one row of A paired with one row of B) and, using the tf.map_fn() function, apply tf.gather().
We then stack the results back together with tf.stack().
import tensorflow as tf
import numpy as np

def custom_gather_v2(a, b):
    def apply_gather(x):
        return tf.gather(x[0], tf.cast(x[1], tf.int32))
    a = tf.cast(a, dtype=tf.float32)
    b = tf.cast(b, dtype=tf.float32)
    stacked = tf.stack([a, b], axis=1)
    gathered = tf.map_fn(apply_gather, stacked)
    return tf.stack(gathered, axis=0)

a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
b = np.array([[0, 1, 1], [2, 1, 0]], dtype=np.int32)
x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.placeholder(tf.int32, shape=(None, 3))
gathered = custom_gather_v2(x, y)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(gathered, feed_dict={x: a, y: b}))
    # [[1. 2. 2.]
    #  [6. 5. 4.]]
Use tf.gather with batch_dims=-1:
import numpy as np
import tensorflow as tf
rois = np.array([[1, 2, 3], [3, 2, 1]])
ind = np.array([[0, 2, 1, 1, 2, 0, 0, 1, 1, 2],
                [0, 1, 2, 0, 2, 0, 1, 2, 2, 2]])
tf.gather(rois, ind, batch_dims=-1)
# output:
# <tf.Tensor: shape=(2, 10), dtype=int64, numpy=
# array([[1, 3, 2, 2, 3, 1, 1, 2, 2, 3],
#        [3, 2, 1, 3, 1, 3, 2, 1, 1, 1]])>
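Assuming TF 2.x eager execution, the same one-liner scales directly to the 10x1000 case from the question:
import tensorflow as tf

A = tf.random.normal((10, 1000))
B = tf.random.uniform((10, 1000), minval=0, maxval=1000, dtype=tf.int32)
C = tf.gather(A, B, batch_dims=-1)  # row i of B indexes only row i of A
print(C.shape)  # (10, 1000)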

How to process different rows in a tensor based on the first column value in tensorflow

Let's say I have a 4 by 3 tensor:
sample = [[10, 15, 25], [1, 2, 3], [4, 4, 10], [5, 9, 8]]
I would like to return another tensor of shape 4: [r1, r2, r3, r4], where r is equal to tf.reduce_sum(row) if row[0] is less than 5, or to tf.reduce_mean(row) if row[0] is greater than or equal to 5.
output:
output = [16.67, 6, 18, 7.33]
I'm not adept at tensorflow; please assist me in achieving the above in Python 3 without a for loop.
Thank you.
UPDATES:
So I've tried to adapt the answer given by @Onyambu to include two samples in the function, but it gave me an error in all instances.
Here is the answer for the first case:
def f(x):
    c = tf.constant(5, tf.float32)
    def fun1():
        return tf.reduce_sum(x)
    def fun2():
        return tf.reduce_mean(x)
    return tf.cond(tf.less(x[0], c), fun1, fun2)

a = tf.map_fn(f, tf.constant(sample, tf.float32))
The above works well.
For two samples:
sample1 = [[10, 15, 25], [1, 2, 3], [4, 4, 10], [5, 9, 8]]
sample2 = [[0, 15, 25], [1, 2, 3], [0, 4, 10], [1, 9, 8]]
def f2(x1, x2):
    c = tf.constant(1, tf.float32)
    def fun1():
        return tf.reduce_sum(x1[:, 0] - x2[:, 0])
    def fun2():
        return tf.reduce_mean(x1 - x2)
    return tf.cond(tf.less(x2[0], c), fun1, fun2)

a = tf.map_fn(f2, tf.constant(sample1, tf.float32), tf.constant(sample2, tf.float32))
The adaptation gives errors, but the principle is simple:
calculate the tf.reduce_sum of sample1[:,0] - sample2[:,0] if row[0] is less than 1
calculate the tf.reduce_mean of sample1 - sample2 if row[0] is greater than or equal to 1
Thank you for your assistance in advance!
import tensorflow as tf

def f(x):
    y = tf.constant(5, tf.float32)
    def fun1():
        return tf.reduce_sum(x)
    def fun2():
        return tf.reduce_mean(x)
    return tf.cond(tf.less(x[0], y), fun1, fun2)

a = tf.map_fn(f, tf.constant(sample, tf.float32))
with tf.Session() as sess: print(sess.run(a))
# [16.666666  6.       18.        7.3333335]
If you want to shorten it:
y = tf.constant(5, tf.float32)
f = lambda x: tf.cond(tf.less(x[0], y), lambda: tf.reduce_sum(x), lambda: tf.reduce_mean(x))
a = tf.map_fn(f, tf.constant(sample, tf.float32))
with tf.Session() as sess: print(sess.run(a))
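On the two-sample update: tf.map_fn accepts multiple tensors as one tuple argument (with the output dtype given explicitly when its structure differs from the input), and the mapped function then sees one row of each sample at a time, so the whole-matrix slicing in f2 has to become per-row indexing. A sketch under one reading of the intended rule:
def f2(rows):
    x1, x2 = rows  # one row from each sample
    c = tf.constant(1, tf.float32)
    def fun1():
        return x1[0] - x2[0]  # per-row analogue of sample1[:, 0] - sample2[:, 0]
    def fun2():
        return tf.reduce_mean(x1 - x2)
    return tf.cond(tf.less(x2[0], c), fun1, fun2)

a = tf.map_fn(f2, (tf.constant(sample1, tf.float32), tf.constant(sample2, tf.float32)),
              dtype=tf.float32)
with tf.Session() as sess: print(sess.run(a))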

Missing Weight Vectors when converting from PyTorch to CoreML via ONNX

I am trying to convert a PyTorch model to CoreML via ONNX, but the ONNX-to-CoreML conversion appears to be missing weight vectors.
I am following the tutorial here, which makes this statement:
Step 3: Converting the model to CoreML
It's as easy as running the convert function. The resulting object is a coremltools MLModel object that you can save to a file and import into Xcode later.
cml = onnx_coreml.convert(model)
Unfortunately, when I try to do this it fails horribly.
Here's my code:
# convert.py
import torch
import torch.onnx
from torch.autograd import Variable
import onnx
from onnx_coreml import convert
from hourglass_model import Hourglass
model_no = 1
torch_model = Hourglass(joint_count=14, size=256)
state_dict = torch.load("hourglass_model_{}.model".format(model_no))
torch_model.load_state_dict(state_dict)
torch_model.train(False)
torch_model.eval()
# Dummy Input to the model
x = Variable(torch.randn(1,3,256,256,dtype=torch.float32))
# Export the model
onnx_filename = "test_hourglass.onnx"
torch_out = torch.onnx.export(torch_model, x, onnx_filename, export_params=False)
# Load back in ONNX model
onnx_model = onnx.load(onnx_filename)
# Check that the IR is well formed
onnx.checker.check_model(onnx_model)
# Print a human readable representation of the graph
graph = onnx.helper.printable_graph(onnx_model.graph)
print(graph)
coreml_model = convert(onnx_model,
                       add_custom_layers=True,
                       image_input_names=["input"],
                       image_output_names=["output"])
coreml_model.save('test_hourglass.mlmodel')
Here's what the print(graph) line gives.
graph torch-jit-export (
  %0[FLOAT, 1x3x256x256]
  %1[FLOAT, 64x3x5x5]
  %2[FLOAT, 64]
  %3[FLOAT, 64x64x5x5]
  %4[FLOAT, 64]
  %5[FLOAT, 64x64x5x5]
  %6[FLOAT, 64]
  %7[FLOAT, 64x64x5x5]
  %8[FLOAT, 64]
  %9[FLOAT, 64x64x5x5]
  %10[FLOAT, 64]
  %11[FLOAT, 64x64x5x5]
  %12[FLOAT, 64]
  %13[FLOAT, 64x64x5x5]
  %14[FLOAT, 64]
  %15[FLOAT, 64x64x1x1]
  %16[FLOAT, 64]
  %17[FLOAT, 14x64x1x1]
  %18[FLOAT, 14]
) {
  %19 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%0, %1, %2)
  %20 = Relu(%19)
  %21 = MaxPool[kernel_shape = [4, 4], pads = [0, 0, 0, 0], strides = [4, 4]](%20)
  %22 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%21, %3, %4)
  %23 = Relu(%22)
  %24 = MaxPool[kernel_shape = [4, 4], pads = [0, 0, 0, 0], strides = [4, 4]](%23)
  %25 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%24, %5, %6)
  %26 = Relu(%25)
  %27 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%26, %7, %8)
  %28 = Relu(%27)
  %29 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%28, %9, %10)
  %30 = Relu(%29)
  %31 = Upsample[height_scale = 4, mode = 'nearest', width_scale = 4](%30)
  %32 = Add(%31, %23)
  %33 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%32, %11, %12)
  %34 = Relu(%33)
  %35 = Upsample[height_scale = 4, mode = 'nearest', width_scale = 4](%34)
  %36 = Add(%35, %20)
  %37 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%36, %13, %14)
  %38 = Relu(%37)
  %39 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%38, %15, %16)
  %40 = Relu(%39)
  %41 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%40, %17, %18)
  %42 = Relu(%41)
  return %42
}
And this is the error message:
1/24: Converting Node Type Conv
Traceback (most recent call last):
  File "convert.py", line 38, in <module>
    image_output_names=["output"])
  File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/converter.py", line 396, in convert
    _convert_node(builder, node, graph, err)
  File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 994, in _convert_node
    return converter_fn(builder, node, graph, err)
  File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 31, in _convert_conv
    "Weight tensor: {} not found in the graph initializer".format(weight_name,))
  File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/_error_utils.py", line 71, in missing_initializer
    format(node.op_type, node.inputs[0], node.outputs[0], err_message)
ValueError: Missing initializer error in op of type Conv, with input name = 0, output name = 19. Error message: Weight tensor: 1 not found in the graph initializer
From what I can gather, it says the weight tensor %1[FLOAT, 64x3x5x5] is missing. This is how I'm saving the model:
torch.save(model.state_dict(), "hourglass_model_{}.model".format(epoch))
ONNX loads it fine - it's just the step where I'm converting from ONNX to CoreML that fails.
Any help in figuring this out would be greatly appreciated. I'm sure I've done a bunch of other things wrong, but I just need this thing to export for now.
Thanks,
You are calling torch.onnx.export with export_params=False, which, as the 0.3.1 docs read, saves the model architecture without the actual parameter tensors. The more recent documentation doesn't spell this out, but we can assume it still holds, given the "Weight tensor not found" error that you are getting.
Try it with export_params=True; you should see the saved model's size increase notably.
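Applied to the export line from the question, the fix is the one flag:
# Embed the trained parameters in the ONNX graph so the CoreML
# converter can find the Conv weight initializers
torch_out = torch.onnx.export(torch_model, x, onnx_filename, export_params=True)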
