Can't change the Anchors in Faster RCNN - pytorch

I'm a newbie in PyTorch and I'm trying to use custom anchors in my Faster R-CNN network. I'm using a resnet50 backbone, and when I set my own anchors I get a shape-mismatch error.
This is the code that I have:
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# ResNet-50 backbone with a feature pyramid network (FPN)
backbone = torchvision.models.detection.backbone_utils.resnet_fpn_backbone('resnet50', pretrained=True)
backbone.out_channels = 256

anchor_generator = AnchorGenerator(sizes=((4, 8, 16, 32, 64, 128),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
                                                output_size=7,
                                                sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=10,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
The error that I got is the following: shape '[1440000, -1]' is invalid for input of size 7674336.

Alright, after some digging into the source code of torchvision's Faster R-CNN, I found how the default anchors are initialized:
anchor_sizes = ((32,), (64,), (128,), (256,), (512,))
aspect_ratios = ((0.5, 1.0, 2.0),) * len(anchor_sizes)
rpn_anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
Note the pattern: one tuple of sizes per feature map produced by the FPN backbone, rather than one flat tuple of all sizes over a single feature map. Following the same pattern for my custom anchors, the code becomes:
anchor_sizes = ((4,), (8,), (16,), (32,), (64,), (128,))
aspect_ratios = ((0.5, 1.0, 2.0),) * len(anchor_sizes)
rpn_anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)
It will work!
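Putting it together with the components from the question, a minimal sketch of the corrected model (num_anchors_per_location() is a method of torchvision's AnchorGenerator, useful as a sanity check):

model = FasterRCNN(backbone,
                   num_classes=10,
                   rpn_anchor_generator=rpn_anchor_generator,
                   box_roi_pool=roi_pooler)

# Each level now contributes 3 anchors per location (one per aspect ratio)
print(rpn_anchor_generator.num_anchors_per_location())  # [3, 3, 3, 3, 3, 3]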

Related

pyspark: Stage failure due to One hot encoding

I am facing the below error while fitting my model. I am trying to run a model with cross-validation, with a pipeline inside of it.
Below is the code snippet for data transformation:
qd = QuantileDiscretizer(relativeError=0.01, handleInvalid="error", numBuckets=4,
                         inputCols=["time"], outputCols=["time_qd"])

# Normalize vector
scaler = StandardScaler()\
    .setInputCol("vectorized_features")\
    .setOutputCol("features")

# Encoder for VesselTypeGroupName
encoder = StringIndexer(handleInvalid='skip')\
    .setInputCols(["type"])\
    .setOutputCols(["type_enc"])

# One-hot encode categorical variables
encoder1 = OneHotEncoder()\
    .setInputCols(["type_enc", "ID1", "ID12", "time_qd"])\
    .setOutputCols(["type_enc1", "ID1_enc", "ID12_enc", "time_qd_enc"])

# Assemble variables
assembler = VectorAssembler(handleInvalid="keep")\
    .setInputCols(["type_enc1", "ID1_enc", "ID12_enc", "time_qd_enc"])\
    .setOutputCol("vectorized_features")
The total number of features after one-hot encoding will not exceed 200. The model code is below:
lr = LogisticRegression(featuresCol='features', labelCol='label',
                        weightCol='classWeightCol')

pipeline_stages = Pipeline(stages=[qd, encoder, encoder1, assembler, scaler, lr])

# Create logistic regression parameter grids for parameter tuning
paramGrid_lr = (ParamGridBuilder()
                .addGrid(lr.regParam, [0.01, 0.5, 2.0])        # regularization parameter
                .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])  # elastic net parameter (ridge = 0)
                .addGrid(lr.maxIter, [1, 10, 20])              # number of iterations
                .build())

cv_lr = CrossValidator(estimator=pipeline_stages, estimatorParamMaps=paramGrid_lr,
                       evaluator=BinaryClassificationEvaluator(), numFolds=5, seed=42)
cv_lr_model = cv_lr.fit(train_df)
The .fit method throws the below error:
I have tried increasing the driver memory but am still facing the same error. Please suggest what might be causing this issue.
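As a first sanity check on the feature-count claim above, a hypothetical sketch (assuming the train_df from the question): fit only the transform stages and inspect the size of the assembled vector:

# Hypothetical sanity check: run only the transform stages and
# inspect the size of the assembled feature vector (expected < 200).
transform_pipeline = Pipeline(stages=[qd, encoder, encoder1, assembler])
transformed = transform_pipeline.fit(train_df).transform(train_df)
print(transformed.select("vectorized_features").head()[0].size)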

Cannot export PyTorch model to ONNX

I am trying to convert a pre-trained torch model to ONNX, but receive the following error:
RuntimeError: step!=1 is currently not supported
I'm trying this on a pre-trained colorization model: https://github.com/richzhang/colorization
Here is the code I ran in Google Colab:
!git clone https://github.com/richzhang/colorization.git
cd colorization/

import torch
import colorizers

model = colorizer_siggraph17 = colorizers.siggraph17(pretrained=True).eval()

input_names = ["input"]
output_names = ["output"]
dummy_input = torch.randn(1, 1, 256, 256, device='cpu')

torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
I appreciate any help :)
UPDATE 1: Proko's suggestion solved the ONNX export issue. Now I have a new, possibly related problem when I try to convert the ONNX model to TensorRT. I get the following error:
[TensorRT] ERROR: Network must have at least one output
Here is the code I used:
import torch
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import onnx

TRT_LOGGER = trt.Logger()

def build_engine(onnx_file_path):
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    builder.max_workspace_size = 1 << 25
    builder.max_batch_size = 1
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # parse ONNX; parse() returns False on failure, so surface any errors
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        if not parser.parse(model.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
    print('Completed parsing of ONNX file')

    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")
    return engine, context

ONNX_FILE_PATH = 'siggraph17.onnx'  # exported using the code above
engine, _ = build_engine(ONNX_FILE_PATH)
I tried to force the build_engine function to use the output of the network with:

network.mark_output(network.get_layer(network.num_layers - 1).get_output(0))

but it did not work. I appreciate any help!
Like I have mentioned in a comment, this is because slicing in torch.onnx supports only step = 1, but there is 2-step slicing in the model:

self.model2(conv1_2[:, :, ::2, ::2])

For now, your only option is to rewrite the slicing as other ops. You can do it using range and reshape to obtain the proper indices. Consider the following "step-less arange" function (I hope it is generic enough for anyone with a similar problem):
def sla(x, step):
    diff = x % step
    x += (diff > 0) * (step - diff)  # pad the length so it reshapes cleanly
    return torch.arange(x).reshape((-1, step))[:, 0]
Usage:

>>> sla(11, 3)
tensor([0, 3, 6, 9])
Now you can replace every slice like this:
conv2_2 = self.model2(conv1_2[:, :, self.sla(conv1_2.shape[2], 2), :][:, :, :, self.sla(conv1_2.shape[3], 2)])
NOTE: you should optimize this. The indices are recalculated on every call, so it might be wise to precompute them.
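A minimal sketch of that precomputation (hypothetical, not from the original answer; it assumes the spatial size is known ahead of time, e.g. from the 256x256 input used above):

# Hypothetical: compute the step-less indices once for a known input size
# and reuse them on every forward pass instead of calling sla() each time.
idx_h = sla(256, 2)
idx_w = sla(256, 2)
conv2_2 = self.model2(conv1_2[:, :, idx_h, :][:, :, :, idx_w])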
I have tested it with my fork of the repo and I was able to save the model:
https://github.com/prokotg/colorization
What worked for me was to add opset_version=11 to torch.onnx.export. I first tried opset_version=10, but the API suggested 11, and with 11 it works (newer opsets extend the ONNX Slice op with a steps input, which is exactly what strided slicing needs). So the export call becomes:

torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True, opset_version=11,
                  input_names=input_names, output_names=output_names)

Cannot get the same output as the PyTorch model with OpenVINO

I have a strange problem trying to use OpenVINO.
I exported my PyTorch model to ONNX and then imported it into OpenVINO using the following command:
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/Downloads/unet2d.onnx --disable_resnet_optimization --disable_fusing --disable_gfusing --data_type=FP32
So for this test case, I have disabled the optimizations.
Now, using the sample Python applications, I run inference with the model as follows:
from os import path
import numpy as np
from openvino.inference_engine import IENetwork, IECore

model_xml = path.expanduser('model.xml')
model_bin = path.expanduser('model.bin')

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
net.batch_size = 1
exec_net = ie.load_network(network=net, device_name='CPU')

np.random.seed(0)
x = np.random.randn(1, 2, 256, 256)  # expected input shape
res = exec_net.infer(inputs={input_blob: x})
res = res[out_blob]
The problem is that this outputs something completely different from my ONNX or PyTorch model.
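For reference, a minimal sketch of getting a baseline to compare against, straight from the ONNX model (this assumes onnxruntime is installed; it is not part of the original post):

# Hypothetical comparison against the ONNX model via onnxruntime
import onnxruntime as ort

sess = ort.InferenceSession('unet2d.onnx')
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.astype(np.float32)})[0]
print(np.abs(onnx_out - res).max())  # a large value confirms the mismatch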
Additionally, I realized that I do not even have to pass an input, so if I do something like:

x = None
res = exec_net.infer(inputs={input_blob: x})

this still returns the same output! So it seems my input is somehow being ignored?
Could you try it without --disable_resnet_optimization --disable_fusing --disable_gfusing, i.e. with the optimizations left enabled?
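That is, re-run the Model Optimizer with the same command minus the disable flags:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/Downloads/unet2d.onnx --data_type=FP32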

Tensorflow Dataset API: Gradient is "None"?

I've got problems with the TensorFlow Dataset API.
I'd like to pass some per-sample parameters, but I am unable to optimize them.
sample_data = tf.placeholder(...)
design = tf.placeholder(...)
mixture_prob = tf.Variable(..., shape=[num_mixtures, num_samples])

# transpose to get 'num_samples' on axis 0:
mixture_log_prob_t = tf.transpose(tf.log(mixture_prob, name="mixture_log_prob"))
assert mixture_log_prob_t.shape == [num_samples, num_mixtures]
Here is the cause of my problem:
I've got some sample data together with a design matrix.
Also, each sample has 'num_mixtures' parameters which I'd like to optimize.
data = tf.data.Dataset.from_tensor_slices((
    sample_data,
    design,
    mixture_log_prob_t
))
data = data.repeat()
data = data.shuffle(batch_size * 4)
data = data.apply(tf.contrib.data.batch_and_drop_remainder(batch_size))
iterator = data.make_initializable_iterator()

batch_sample_data, batch_design, batch_mixture_log_prob = iterator.get_next()
batch_mixture_log_prob = tf.transpose(batch_mixture_log_prob)
Now, when running optimizer.gradient(), I get None for this parameter:
>>> model.gradient
[(None, <tf.Variable 'mixture_prob/logit_prob:0' shape=(2, 2000) dtype=float32_ref>), ...]

Problems using poly kernel in GridSearchCV and SVM classifier

I am trying to do a grid search using an SVM classifier.
Consider my data and target, which have been parsed from a file into numpy arrays.
I then preprocess them.
# Transform the data to have zero mean and unit variance.
# Note: transform() returns a new array; it does not scale in place.
zeroMeanUnitVarianceScaler = preprocessing.StandardScaler().fit(data)
scaledData = zeroMeanUnitVarianceScaler.transform(data)

# Map the target labels to {-1, 1}.
scaledTarget = np.empty(len(target), dtype=int)
for i in range(len(target)):
    if target[i] == 'Malignant':
        scaledTarget[i] = 1
    if target[i] == 'Benign':
        scaledTarget[i] = -1
I now try to set up my grid and fit the scaled data to targets.
# Generate parameters for the parameter grid.
CValues = np.logspace(-3, 3, 7)
GammaValues = np.logspace(-3, 3, 7)
kernelValues = ('poly', 'sigmoid')
# kernelValues = ('linear', 'rbf', 'sigmoid')
degreeValues = np.array([0, 1, 2, 3, 4])
coef0Values = np.logspace(-3, 3, 7)

# Generate the parameter grid.
paramGrid = dict(C=CValues, gamma=GammaValues, kernel=kernelValues,
                 coef0=coef0Values)

# Create and train an SVM classifier using the parameter grid and a
# stratified shuffle split.
stratifiedShuffleSplit = StratifiedShuffleSplit(n_splits=10, test_size=0.25,
                                                train_size=None, random_state=0)
clf = GridSearchCV(estimator=svm.SVC(), param_grid=paramGrid,
                   cv=stratifiedShuffleSplit, n_jobs=1)
clf.fit(scaledData, scaledTarget)
If I uncomment the line kernelValues = ('linear', 'rbf', 'sigmoid'), the code runs in approximately 50 seconds on my 16 GB, i7-4950 3.6 GHz machine running Windows 10.
However, if I run the code as is, with 'poly' as a possible kernel value, the code hangs forever. For example, I ran it overnight yesterday and it had not returned anything when I got back to the office today.
Interestingly enough, if I create an SVM classifier with a poly kernel directly, it returns a result immediately:

clf = svm.SVC(kernel='poly', degree=2)
clf.fit(data, target)

It only hangs in the GridSearchCV code above. I have not tried other CV methods to see if that changes anything.
Is this a bug in scikit-learn? Am I doing things properly? On a side note, is my method of doing grid search / cross-validation with GridSearchCV and StratifiedShuffleSplit sensible? It seems to me the most brute-force (i.e. time-consuming) but robust method.
Thank you!
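One hedged diagnostic, not part of the original post: scikit-learn's SVC accepts a max_iter cap (default -1, i.e. unlimited), so bounding the iterations and enabling GridSearchCV's verbose output would show whether certain poly parameter combinations simply never converge:

# Hypothetical diagnostic: cap libsvm's iterations so no grid cell can
# run forever, and log each fit as it completes.
clf = GridSearchCV(estimator=svm.SVC(max_iter=1000000), param_grid=paramGrid,
                   cv=stratifiedShuffleSplit, n_jobs=1, verbose=2)
clf.fit(scaledData, scaledTarget)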
