Unknown behavior of hooks on batch norm in pytorch

I'm trying to freeze the batch norm layers and analyse their inputs/outputs with forward hooks.
For the frozen BN layers, I just can't understand why the hooked output differs from the output reproduced by feeding the hooked input back through the layer.
I'd really appreciate it if anyone could help me.
Here's the code:
import torch
import torchvision
import numpy

def set_bn_eval(m):
    classname = m.__class__.__name__
    if classname.find('BatchNorm') != -1:
        m.eval()

image = torch.randn((1, 3, 224, 224))
res = torchvision.models.resnet50(pretrained=True)
res.apply(set_bn_eval)
b = res(image)

layer_out = []
layer_in = []

def layer_hook(mod, inp, out):
    layer_out.append(out)
    layer_in.append(inp[0])

for name, key in res.named_modules():
    hook = key.register_forward_hook(layer_hook)
    res(image)
    hook.remove()
    out = layer_out.pop()
    inp = layer_in.pop()
    try:
        assert (out.equal(key(inp)))
    except AssertionError:
        print(name)
        break

TL;DR: some operators only appear inside the module's forward, such as non-parametrized layers.
Some components are not registered in the child module list. This is often the case for activation functions, but it ultimately depends on the module implementation. In your case, ResNet's Bottleneck has its ReLUs applied inside the forward definition, right after the batch normalization layer is called (and, in torchvision's implementation, in place on the BN output tensor).
This means the output you catch with the layer hook will differ from the tensor you compute from just the module and its input.
import torch.nn.functional as F

for name, module in res.named_modules():
    if name != 'bn1':
        hook = module.register_forward_hook(layer_hook)
        res(image)
        hook.remove()
        inp = layer_in.pop()
        out = layer_out.pop()
        assert out.equal(F.relu(module(inp)))
Therefore, it's a bit tricky to actually implement since you can't rely entirely on the content of res.named_modules().
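To see the mechanism in isolation, here is a minimal sketch (my own construction, using an in-place ReLU the way torchvision's ResNet blocks do) showing how the tensor captured by the hook gets mutated after the hook fires:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(4).eval()
relu = nn.ReLU(inplace=True)

captured = {}
hook = bn.register_forward_hook(lambda mod, inp, out: captured.update(inp=inp[0], out=out))

x = torch.randn(1, 4, 8, 8)
y = relu(bn(x))  # the in-place ReLU overwrites the very tensor the hook captured

hook.remove()
print(captured['out'].equal(bn(captured['inp'])))  # False whenever BN produced negative values
print(captured['out'].equal(y))                    # True: the captured output was mutated in place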

Related

How does a pytorch function (such as RoIPool) work?

For example, I'm trying to view the implementation of RoI Pooling in pytorch.
Here is a code fragment showing how to use RoIPool in pytorch
import torch
from torchvision.ops.roi_pool import RoIPool
device = torch.device('cuda')
# create feature layer, proposals and targets
num_proposals = 10
feature_map = torch.randn(1, 64, 32, 32)
proposals = torch.zeros((num_proposals, 4))
proposals[:, 0] = torch.randint(0, 16, (num_proposals,))
proposals[:, 1] = torch.randint(0, 16, (num_proposals,))
proposals[:, 2] = torch.randint(16, 32, (num_proposals,))
proposals[:, 3] = torch.randint(16, 32, (num_proposals,))
roi_pool_obj = RoIPool(3, 2**-1)
roi_pool = roi_pool_obj(feature_map, [proposals])
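For reference, with the sizes above the pooled output has one entry per proposal; a quick sanity check (assuming the snippet runs as written) would be:

print(roi_pool.shape)  # expected: torch.Size([10, 64, 3, 3]) -> [num_proposals, channels, output_size, output_size]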
I'm using PyCharm, so when I follow RoIPool from the second line, it opens a file located at ~/anaconda3/envs/CV/lib/python3.8/site-packages/torchvision/ops/roi_pool.py, which is exactly the same as the code in the documentation.
I pasted the code below without the docstrings.
from typing import List, Union

import torch
from torch import nn, Tensor
from torch.jit.annotations import BroadcastingList2
from torch.nn.modules.utils import _pair
from torchvision.extension import _assert_has_ops

from ..utils import _log_api_usage_once
from ._utils import convert_boxes_to_roi_format, check_roi_boxes_shape


def roi_pool(
    input: Tensor,
    boxes: Union[Tensor, List[Tensor]],
    output_size: BroadcastingList2[int],
    spatial_scale: float = 1.0,
) -> Tensor:
    if not torch.jit.is_scripting() and not torch.jit.is_tracing():
        _log_api_usage_once(roi_pool)
    _assert_has_ops()
    check_roi_boxes_shape(boxes)
    rois = boxes
    output_size = _pair(output_size)
    if not isinstance(rois, torch.Tensor):
        rois = convert_boxes_to_roi_format(rois)
    output, _ = torch.ops.torchvision.roi_pool(input, rois, spatial_scale, output_size[0], output_size[1])
    return output


class RoIPool(nn.Module):
    def __init__(self, output_size: BroadcastingList2[int], spatial_scale: float):
        super().__init__()
        _log_api_usage_once(self)
        self.output_size = output_size
        self.spatial_scale = spatial_scale

    def forward(self, input: Tensor, rois: Tensor) -> Tensor:
        return roi_pool(input, rois, self.output_size, self.spatial_scale)

    def __repr__(self) -> str:
        s = f"{self.__class__.__name__}(output_size={self.output_size}, spatial_scale={self.spatial_scale})"
        return s
So, in the code example:
When running roi_pool_obj = RoIPool(3, 2**-1), it creates an instance of RoIPool by calling its __init__ method, which only initializes two instance variables;
When running roi_pool = roi_pool_obj(feature_map, [proposals]), it must have called the forward() method (but I don't know how), which then calls the roi_pool() function above;
When running the roi_pool() function, it does some checking first and then computes the output with the line output, _ = torch.ops.torchvision.roi_pool(input, rois, spatial_scale, output_size[0], output_size[1]).
But this doesn't show the details of how roi_pool is implemented, and PyCharm shows Cannot find declaration to go to when I try to follow torch.ops.torchvision.roi_pool.
To summarize, I have two questions:
How is forward() called when running roi_pool = roi_pool_obj(feature_map, [proposals])?
How can I view the source code of torch.ops.torchvision.roi_pool, or where is the file containing its implementation located?
Last but not least, I've just started reading source code, which is pretty difficult for me. I'd appreciate it if you could also provide some advice or tutorials.
RoIPool is a subclass of torch.nn.Module. Source code:
https://github.com/pytorch/vision/blob/07ae61bf9c21ddd1d5f65d326aa9636849b383ca/torchvision/ops/roi_pool.py#L56
nn.Module defines the __call__ method, which in turn calls the forward method. Source code:
https://github.com/pytorch/pytorch/blob/b2311192e6c4745aac3fdd774ac9d56a36b396d4/torch/nn/modules/module.py#L1234
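To illustrate the dispatch with a toy module of my own (not from torchvision): calling the instance goes through nn.Module.__call__, which runs any registered hooks and then invokes forward:

import torch
import torch.nn as nn

class Doubler(nn.Module):
    def forward(self, x):
        return 2 * x

m = Doubler()
print(m(torch.tensor(3.0)))          # tensor(6.) -- dispatched via nn.Module.__call__
print(m.forward(torch.tensor(3.0)))  # same result, but bypasses the hook machinery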
When you execute the roi_pool = roi_pool_obj(feature_map, [proposals]) statement, the __call__ method uses the forward() of RoIPool. Source code:
https://github.com/pytorch/vision/blob/07ae61bf9c21ddd1d5f65d326aa9636849b383ca/torchvision/ops/roi_pool.py#L67
RoIPool.forward calls torch.ops.torchvision.roi_pool.
https://github.com/pytorch/vision/blob/07ae61bf9c21ddd1d5f65d326aa9636849b383ca/torchvision/ops/roi_pool.py#L52
ops is an object which loads the native libraries implemented in C++:
https://github.com/pytorch/pytorch/blob/b2311192e6c4745aac3fdd774ac9d56a36b396d4/torch/_ops.py#L537
so when you call torch.ops.torchvision it will use the torchvision library.
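You can confirm this from Python with a small check of my own: importing torchvision loads and registers its native ops, and the attribute you end up with is a dispatcher handle to the C++ kernel rather than a Python function, which is also why PyCharm cannot jump to a declaration for it.

import torch
import torchvision  # importing torchvision registers its C++ ops with torch.ops

print(torch.ops.torchvision.roi_pool)  # a handle bound to the compiled kernel, not Python source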
Here the roi_pool function is registered:
https://github.com/pytorch/vision/blob/7947fc8fb38b1d3a2aca03f22a2e6a3caa63f2a0/torchvision/csrc/ops/roi_pool.cpp#L53
Here you can find the actual implementation of roi_pool:
CPU:
https://github.com/pytorch/vision/blob/7947fc8fb38b1d3a2aca03f22a2e6a3caa63f2a0/torchvision/csrc/ops/cpu/roi_pool_kernel.cpp
GPU:
https://github.com/pytorch/vision/blob/7947fc8fb38b1d3a2aca03f22a2e6a3caa63f2a0/torchvision/csrc/ops/cuda/roi_pool_kernel.cu

Is it possible to replace keras Lambda layers with native CoreML ones to convert the model?

For the first time I'm facing the need to convert my keras model to coreml. This can be done via the coremltools package:
import coremltools
import keras

model = Model(...)  # keras
coreml_model = coremltools.converters.keras.convert(model,
    input_names="input_image_NHWC",
    output_names="output_image_NHWC",
    image_scale=1.0,
    model_precision='float32',
    use_float_arraytype=True,
    custom_conversion_functions={"Lambda": convert_lambda},
    input_name_shape_dict={'input_image_NHWC': [None, 384, 384, 3]}
)
However, I have two lambda layers, where the first one is depth-to-space (pixelshuffle) and the other one is a scaler:
def tf_upsampler(x):
    return tf.nn.depth_to_space(x, 4)

def mulfunc(x, beta=0.2):
    return beta*x

...
x = Lambda(tf_upsampler)(x)
...
x = Lambda(mulfunc)(x)
The only advice I found was, as far as I understand, to use a custom layer, with the need to implement my layer in Swift code later. Something like this, with MyPixelShuffle and MyScaleLayer to be implemented somehow as classes in the Xcode project (?):
def convert_lambda(layer):
    # Only convert this Lambda layer if it is for our swish function.
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyPixelShuffle"
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "pixelshuffle"
        params.parameters["blockSize"].intValue = 4
        return params
    elif layer.function == mulfunc:
        # https://stackoverflow.com/questions/47987777/custom-layer-with-two-parameters-function-on-core-ml
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyScaleLayer"
        params.description = "scaling input"
        # HERE!! This is important.
        params.parameters["scale"].doubleValue = 0.2
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "multiplication by constant"
        return params
However, I found that CoreML actually has the layers I need; they can be found as ScaleLayer and ReorganizeDataLayer.
How can I use those native layers to replace the lambdas in the keras model? Is it possible to edit the coreML protobuf for the network? Or, if there are Swift/Obj-C classes for them, how are they called?
Can it be done via deleting/adding layers with coremltools.models.neural_network.NeuralNetworkBuilder?
UPDATE:
I found that the keras converter actually invokes the neural network builder to add different layers. The builder has the layer method builder.add_reorganize_data that I need. Now the question is how to replace the custom layers in the model. I can load the model into the builder and inspect the layers:
coreml_model_path = 'mymodel.mlmodel'
spec = coremltools.models.utils.load_spec(coreml_model_path)
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=10)
[Id: 417], Name: lambda_10 (Type: custom)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['lambda_10_output']
It's much simpler to do something like this:
def convert_lambda(layer):
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.ReorganizeDataLayerParams()
        params.fillInTheOtherPropertiesHere = someValue
        return params
    ...etc..
In other words, you don't have to return a custom layer if some existing layer type already does what you want.
Okay, it seems I have found a way to go. I created a virtual environment with a separate copy of coremltools and edited the _convert() method in _keras2_converter.py by adding the following code:
for iter, layer in enumerate(graph.layer_list):
    keras_layer = graph.keras_layer_map[layer]
    print("%d : %s, %s" % (iter, layer, keras_layer))
    if isinstance(keras_layer, _keras.layers.wrappers.TimeDistributed):
        keras_layer = keras_layer.layer
    converter_func = _get_layer_converter_fn(keras_layer, add_custom_layers)
    input_names, output_names = graph.get_layer_blobs(layer)
    # this may be none if we're using custom layers
    if converter_func:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
    else:
        if _is_activation_layer(keras_layer):
            import six
            if six.PY2:
                layer_name = keras_layer.activation.func_name
            else:
                layer_name = keras_layer.activation.__name__
        else:
            layer_name = type(keras_layer).__name__
        if layer_name in custom_conversion_functions:
            custom_spec = custom_conversion_functions[layer_name](keras_layer)
        else:
            custom_spec = None

        if layer.find('tf_up') != -1:
            print('TF_UPSCALE found')
            builder.add_reorganize_data(layer, input_names[0], output_names[0], mode='DEPTH_TO_SPACE', block_size=4)
        elif layer.find('mulfunc') != -1:
            print('SCALE found')
            builder.add_scale(layer, W=0.2, b=0, has_bias=False, input_name=input_names[0], output_name=output_names[0])
        else:
            builder.add_custom(layer, input_names, output_names, custom_spec)
The trigger is the layer name. In keras I use model.load_weights(by_name=True) and the following naming of my lambdas:
x = Lambda(mulfunc, name=scope+'mulfunc')(x)
x = Lambda(tf_upsampler,name='tf_up')(x)
Now the model at least has layers I need:
[Id: 417], Name: tf_up (Type: reorganizeData)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['tf_up_output']
Now it's time to validate on my VBox MacOS, will post what I've got.
UPDATE:
I don't see errors regarding my replaced Lambda layers, but have another one not allowing me to predict:
Layer 'concatenate_1' type 320 has 1 inputs but expects at least 2
I think it's related to the fact that Keras receives a single input to the concatenate layer, which is a list of inputs. Looking into how to fix this.
UPDATE 2:
Tried to hotfix this by using the add_concat_nd(self, name, input_names, output_name, axis) function of the builder for my concatenate layers; got an error during inference that this layer type is unsupported (?!):
if converter_func:
    if layer.find('concatenate') != -1:
        print('CONCATENATE FOUND')
        builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
    else:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
Unsupported layer type (CoreML.Specification.NeuralNetworkLayer)
UPDATE 4:
Found a fix for this, and changed the way the builder is initialized:
builder = _NeuralNetworkBuilder(input_features, output_features, mode = mode,
use_float_arraytype=use_float_arraytype, disable_rank5_shape_mapping=True)
Now the error message is gone, but I have problems with the Xcode version: the model has version 4, while my Xcode supports 3. Will seek to update my VM.
"CoreML survival guide" pdf suggests in this case:
pip install -U git+https://github.com/apple/coremltools.git
For example, if loading a model using coremltools gives an error such as the following, then
try installing the latest coremltools directly from the GitHub repo.
Error compiling model: "Error reading protobuf spec. validator error:
The .mlmodel supplied is of version 3, intended for a newer version of
Xcode. This version of Xcode supports model version 2 or earlier.
UPDATE:
Updated from git. Have error:
No module named 'coremltools.libcoremlpython'
Looks like latest git is broken :(
Damn, it seems I need macos 10.15 and xcode 11
UPDATE 5:
Still fighting with the bug on 10.15. Found that coremltools somehow deduplicates inputs to the Concatenate layer, so if you have something like Concatenate()([x, x]) in your keras code, you will end up with a concatenate layer with 1 input in coreml and an error. To fix it I tried to further modify the code above:
if layer.find('concatenate') != -1:
    print('CONCATENATE FOUND', len(input_names))
    if len(input_names) == 1:
        input_names = [input_names[0], input_names[0]]
    builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
I then faced this error: input layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4 (see the question "coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?"), which seems to be caused by coreml making the input 3-dimensional CHW, while it must be 4-dimensional NHWC (?). Currently playing with the following:
spec = coreml_model._spec
# fixing input shape
# https://stackoverflow.com/questions/59765376/coreml-model-convert-imagetype-model-input-to-multiarray
spec.description.input[0].type.multiArrayType.shape.extend([1, 384, 384, 3])
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
coremltools.utils.save_spec(spec, "my.mlmodel")
Getting invalid blob shape from model internals

Errors when Building up a Custom Loss Function

I'm trying to build up my own loss function as follows:
import numpy as np
from keras import backend as K

def MyLoss(self, x_input, x_reconstruct):
    a = np.copy(x_reconstruct)
    a = np.asarray(a, dtype='float16')
    a = np.floor(4*a)/4
    return K.mean(K.square(a - x_input), axis=-1)
In compilation, it says
ValueError: setting an array element with a sequence
Both x_input and x_reconstruct are [m, n, 1] np arrays. The last line of code is actually copied directly from Keras' built-in MSE loss function.
Also, I suppose the loss is calculated per sample. If the dimensions of the input and the reconstructed input are both [m, n, 1], the result of Keras' built-in loss will also be a matrix sized [m, n]. So why does it work properly?
I then tried to use np's functions directly:
def MyLoss(self, x_input, x_reconstruct):
    a = np.copy(x_reconstruct)
    a = np.asarray(a, dtype=self.precision)
    a = np.floor(4*a)/4
    Diff = a - x_input
    xx = np.mean(np.square(Diff), axis=-1)
    yy = np.sum(xx)
    return yy
yet the error persists. What mistake did I make? How should I write the code?
Having borrowed the suggestion from Make a Custom loss function in Keras in detail, I tried the following:
def MyLoss(self, x_input, x_reconstruct):
    if self.precision == 'float16':
        K.set_floatx('float16')
        K.set_epsilon(1e-4)
    a = K.cast_to_floatx(x_input)
    a = K.round(a*4.-0.5)/4.0
    return K.sum(K.mean(K.square(x_input-a), axis=-1))
But the same error happens.
You cannot use numpy arrays in your loss. You have to use TensorFlow or Keras backend operations. Try this maybe:
import tensorflow as tf
import keras.backend as K

def MyLoss(x_input, x_reconstruct):
    a = tf.cast(x_input, dtype=tf.float16)  # note: dtype must be tf.float16, not the string 'tf.float16'
    a = tf.floor(4*a)/4
    return K.mean(K.square(a - x_input), axis=-1)
I found the answer myself, and let me share it here
If I write code like this
def MyLoss(self, y_true, y_pred):
    if self.precision == 'float16':
        K.set_floatx('float16')
        K.set_epsilon(1e-4)
    return K.mean(K.square(y_true-K.round(y_pred*4.-0.5)/4.0), axis=-1)
It works. The trick, I think, is that I cannot use K.cast_to_floatx(y_true); instead, I simply use y_true directly. I still do not understand why...
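A likely explanation (my assumption, not stated in the answers above): K.cast_to_floatx is meant for NumPy arrays and calls np.asarray internally, so applying it, or any other NumPy function, to a symbolic Keras tensor is exactly what raises the 'setting an array element with a sequence' error. A backend-only version of the same quantized loss might look like this (a sketch; the function name is mine):

import keras.backend as K

def quantized_mse(y_true, y_pred):
    # quantize predictions to steps of 0.25 using backend ops only
    a = K.round(y_pred * 4.0 - 0.5) / 4.0
    return K.mean(K.square(y_true - a), axis=-1)

# usage sketch (model is a placeholder): model.compile(optimizer='adam', loss=quantized_mse)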

How to use multiprocessing in PyTorch?

I'm trying to use PyTorch with a complex loss function. In order to accelerate the code, I hope that I can use the PyTorch multiprocessing package.
In the first trial, I put 10x1 features into the NN and get a 10x4 output.
After that, I want to pass the 10x4 parameters into a function to do some calculation. (The calculation will become complex in the future.)
After calculating, the function will return a 10x1 array in total. This array will be set as NN_energy and used to calculate the loss function.
Besides, I also want to know if there is another method to create a backward-able array to store the NN_energy array, instead of using
NN_energy = net(Data_in)[0:10,0]
Thanks a lot.
Full Code:
import torch
import numpy as np
from torch.autograd import Variable
from torch import multiprocessing

def func(msg, BOP):
    ans = (BOP[msg][0]+BOP[msg][1]/BOP[msg][2])*BOP[msg][3]
    return ans

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden_1, n_hidden_2, n_output):
        super(Net, self).__init__()
        self.hidden_1 = torch.nn.Linear(n_feature , n_hidden_1) # hidden layer
        self.hidden_2 = torch.nn.Linear(n_hidden_1, n_hidden_2) # hidden layer
        self.predict  = torch.nn.Linear(n_hidden_2, n_output  ) # output layer

    def forward(self, x):
        x = torch.tanh(self.hidden_1(x)) # activation function for hidden layer
        x = torch.tanh(self.hidden_2(x)) # activation function for hidden layer
        x = self.predict(x)              # linear output
        return x

if __name__ == '__main__': # apply_async
    Data_in      = Variable( torch.from_numpy( np.asarray(list(range( 0,10))).reshape(10,1) ).float() )
    Ground_truth = Variable( torch.from_numpy( np.asarray(list(range(20,30))).reshape(10,1) ).float() )

    net = Net( n_feature=1 , n_hidden_1=15 , n_hidden_2=15 , n_output=4 ) # define the network
    optimizer = torch.optim.Rprop( net.parameters() )
    loss_func = torch.nn.MSELoss() # this is for regression mean squared loss

    NN_output = net(Data_in)

    args = range(0,10)
    pool = multiprocessing.Pool()
    return_data = pool.map( func, zip(args, NN_output) )
    pool.close()
    pool.join()

    NN_energy = net(Data_in)[0:10,0]
    for i in range(0,10):
        NN_energy[i] = return_data[i]

    loss = torch.sqrt( loss_func( NN_energy , Ground_truth ) ) # must be (1. nn output, 2. target)
    print(loss)
Error messages:
File
"C:\ProgramData\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py",
line 126, in reduce_tensor
raise RuntimeError("Cowardly refusing to serialize non-leaf tensor which requires_grad, "
RuntimeError: Cowardly refusing to serialize non-leaf tensor which
requires_grad, since autograd does not support crossing process
boundaries. If you just want to transfer the data, call detach() on
the tensor before serializing (e.g., putting it on the queue).
First of all, the Torch Variable API has been deprecated for a very long time; just don't use it.
Next, torch.from_numpy( np.asarray(list(range( 0,10))).reshape(10,1) ).float() is wrong on many levels: np.asarray of a list is useless since a copy will be performed anyway, and np.array takes a list as input by design. Then, np.arange is available to return a range as a numpy array, and torch.arange exists as well. Next, specifying both dimensions for reshape is unnecessary and error prone; you could simply do reshape((-1, 1)), or even better unsqueeze(-1).
Here is the simplified expression: torch.arange(10, dtype=torch.float32, requires_grad=True).unsqueeze(-1).
Using a multiprocessing pool is bad practice when batch processing is possible. Batching will be both way more efficient and more readable. Indeed, performing N small algebraic operations in parallel is always slower than one larger algebraic operation, and even more so on GPU. More importantly, computing the gradient is not supported by multiprocessing, hence the error that you get. (This is only partially true, since it is supported for tensors on CPU since 1.6.0; have a look at the official release changelog.) A batched version of your func is sketched below.
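For the func shown in the question, a batched and fully differentiable equivalent over the whole 10x4 output is a one-liner (a sketch of my own, not tested against the rest of the training loop):

import torch

NN_output = torch.randn(10, 4, requires_grad=True)  # placeholder for net(Data_in)
# row-wise version of (BOP[0] + BOP[1] / BOP[2]) * BOP[3], no multiprocessing needed
NN_energy = (NN_output[:, 0] + NN_output[:, 1] / NN_output[:, 2]) * NN_output[:, 3]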
Could you post a more representative example of what the func method could be, to make sure you really need it?
NB: the distributed autograd you are looking for is now available in PyTorch as an experimental feature, in beta since 1.6.0. Have a look at the official documentation.

Tensorflow : How to get tensor name from Tensorboard?

I downloaded ssd_mobilenet_v2_coco from the tensorflow detection model zoo, and I used import_pb_to_tensorboard.py to show the structure in Tensorboard.
I found a node named 'image_tensor'; this is the node pictured in Tensorboard.
I want to use the function get_tensor_by_name() to feed in a new image and get the outputs. However, it failed.
I tried get_operation_by_name(); it didn't work either.
Here is the code:
import tensorflow as tf

def one_image(im_path, model_path):
    sess = tf.Session()
    with sess.as_default():
        image_tensor = tf.image.decode_jpeg(tf.read_file(im_path), channels=3)
        saver = tf.train.import_meta_graph(model_path + "/model.ckpt.meta")
        saver.restore(sess, tf.train.latest_checkpoint(model_path))
        graph = tf.get_default_graph()
        # x = graph.get_tensor_by_name("import/image_tensor:0")
        # out_put = graph.get_tensor_by_name("import/detection_classes:0")
        x = graph.get_operation_by_name("import/image_tensor").outputs[0]
        outputs = graph.get_operation_by_name("import/detection_classes").outputs[0]
        out_put = sess.run(outputs, feed_dict={x: image_tensor.eval()})
        print(out_put)
        sess.close()

if __name__ == "__main__":
    one_image("testimg-4-resize.jpg", "ssd_mobilenet_v2_coco_2018_03_29")
And here is the KeyError:
KeyError: "The name 'import/image_tensor' refers to an Operation not in the graph."
I am wondering how to get the tensor name from Tensorboard, and whether there is another way to load the model from 'only-ckpts'.
'only-ckpts' means the files only include 'model.ckpt.data-00000-of-00001', 'model.ckpt.index' and 'model.ckpt.meta'.
Any advice would be appreciated. Thank you in advance.
The tool import_pb_to_tensorboard.py uses tf.import_graph_def to import the graph and uses the default name argument, which is "import" as documented.
Your code imports the graph through tf.train.import_meta_graph and uses the default import_scope argument, which does not prefix imported tensor or operation names. It is obvious then that you have two options to correct this error:
Do the following in place of your import_meta_graph line:
saver = tf.train.import_meta_graph(model_path + "/model.ckpt.meta",
                                   import_scope='import')
Remove the import/ prefix when trying to get tensors or operations by name, like this:
x = graph.get_tensor_by_name("image_tensor:0")
out_put = graph.get_tensor_by_name("detection_classes:0")
x = graph.get_operation_by_name("image_tensor").outputs[0]
outputs = graph.get_operation_by_name("detection_classes").outputs[0]
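Putting option 2 back into the original function, the relevant part would look roughly like this (a sketch of my own, untested against the actual checkpoint; note the extra batch dimension, since the detection graph expects a 4-D image batch):

import tensorflow as tf

def one_image(im_path, model_path):
    with tf.Session() as sess:
        image = tf.image.decode_jpeg(tf.read_file(im_path), channels=3)
        saver = tf.train.import_meta_graph(model_path + "/model.ckpt.meta")
        saver.restore(sess, tf.train.latest_checkpoint(model_path))
        graph = tf.get_default_graph()
        x = graph.get_tensor_by_name("image_tensor:0")              # no "import/" prefix
        detections = graph.get_tensor_by_name("detection_classes:0")
        # add a batch dimension before feeding the decoded image
        print(sess.run(detections, feed_dict={x: sess.run(image)[None, ...]}))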
