Torch - Interpolate missing values - pytorch

I have a stack of image tensors of shape NumOfImages x H x W that contains zeros. I am looking for a way to interpolate the missing values (zeros) using only the information in the same image (no connection between the images). Is there a way to do it using PyTorch?
F.interpolate seems to work only for resizing. I need to fill in the zeros while keeping the dimensions and the gradients of the tensor.
Thanks.

EDIT: Turns out the below does not answer the OP's question, as it does not track gradients for back-propagation. Still leaving it here, as it can be used as part of a solution.
One way is to convert the tensor to a numpy array and use scipy interpolation, e.g. scipy.interpolate.LinearNDInterpolator, or other numpy array interpolation options (some detailed here). Not sure this helps, as it is not PyTorch and may involve copying the tensor around.
As scipy interpolation may be slow, one possible optimization is to use only the pixels adjacent to the missing values for interpolation (these can be obtained easily by dilating the mask of missing values). I think this might speed things up by an order of magnitude, depending on the tensor dimensions and the number of missing values.
Edit: implemented it below; it seems to give a speedup of two orders of magnitude in my case.
import cv2
import numpy as np
import scipy.interpolate

def fillMissingValues(target_for_interp, copy=True,
                      interpolator=scipy.interpolate.LinearNDInterpolator):
    if copy:
        target_for_interp = target_for_interp.copy()

    def getPixelsForInterp(img):
        """
        Calculates a mask of pixels neighboring invalid values -
        to use for interpolation.
        """
        # mask invalid pixels (NaNs and zeros)
        invalid_mask = np.isnan(img) | (img == 0)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

        # dilate to mark borders around invalid regions
        dilated_mask = cv2.dilate(invalid_mask.astype('uint8'), kernel,
                                  borderType=cv2.BORDER_CONSTANT, borderValue=0)

        # pixelwise "and" with the valid pixel mask (~invalid_mask)
        masked_for_interp = dilated_mask * ~invalid_mask
        return masked_for_interp.astype('bool'), invalid_mask

    # Mask pixels for interpolation
    mask_for_interp, invalid_mask = getPixelsForInterp(target_for_interp)

    # Interpolate only the holes, using only the border pixels
    points = np.argwhere(mask_for_interp)
    values = target_for_interp[mask_for_interp]
    interp = interpolator(points, values)

    target_for_interp[invalid_mask] = interp(np.argwhere(invalid_mask))
    return target_for_interp

# For the target tensor:
target_filled = fillMissingValues(target.numpy().squeeze())
# transform back to tensor etc..
Note that interpolated values will be np.nan outside of the convex hull of valid points, as provided to LinearNDInterpolator.
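If those NaNs are a problem, a minimal fallback sketch (my addition, not part of the original function) is to fill the remaining pixels with nearest-neighbor interpolation, which is defined everywhere:

# Fill pixels outside the convex hull, which LinearNDInterpolator left as NaN:
still_invalid = np.isnan(target_filled)
if still_invalid.any():
    valid = ~still_invalid
    nn = scipy.interpolate.NearestNDInterpolator(np.argwhere(valid),
                                                 target_filled[valid])
    target_filled[still_invalid] = nn(np.argwhere(still_invalid))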

If you only want nearest-neighbor interpolation, you can make @Yuri Feldman's answer differentiable by returning the index mapping instead of the interpolated image.
What I did is create a new class derived from scipy.interpolate.NearestNDInterpolator and override its __call__ method, so that it returns indices instead of values.
import numpy as np
from scipy.interpolate import NearestNDInterpolator
from scipy.interpolate.interpnd import _ndim_coords_from_arrays

class NearestNDInterpolatorIndex(NearestNDInterpolator):
    def __init__(self, x, y, rescale=False, tree_options=None):
        NearestNDInterpolator.__init__(self, x, y, rescale=rescale,
                                       tree_options=tree_options)
        self.points = np.asarray(x)

    def __call__(self, *args):
        """
        Evaluate the interpolator at the given points, returning
        the indices of the nearest valid points instead of values.

        Parameters
        ----------
        xi : ndarray of float, shape (..., ndim)
            Points where to interpolate data at.
        """
        xi = _ndim_coords_from_arrays(args, ndim=self.points.shape[1])
        xi = self._check_call_shape(xi)
        xi = self._scale_x(xi)
        dist, i = self.tree.query(xi)
        return self.points[i]
Then, in fillMissingValues, instead of returning target_for_interp, we return these:
source_indices = np.argwhere(invalid_mask)
target_indices = interp(source_indices)
return source_indices, target_indices
Pass the new interpolator to fillMissingValues; then we can get the nearest-neighbor interpolation of the image with
img[..., source_indices[:, 0], source_indices[:, 1]] = img[..., target_indices[:, 0], target_indices[:, 1]]
assuming that the image size is on the last two dimensions.
EDIT: This is not differentiable, as I just tested. The problem lies in the in-place index assignment. We need to use masking instead of the in-place operation, and then the problem is solved.
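A minimal sketch of that masked, out-of-place variant (my hypothetical helper fill_nearest_differentiable, assuming the source_indices/target_indices arrays from above; not the original poster's code):

import torch

def fill_nearest_differentiable(img, source_indices, target_indices):
    # img: (..., H, W) tensor that requires grad; the index arrays are the
    # (K, 2) numpy arrays produced by NearestNDInterpolatorIndex above.
    src = torch.as_tensor(source_indices, dtype=torch.long)
    tgt = torch.as_tensor(target_indices, dtype=torch.long)

    # Boolean mask of the pixels to overwrite ...
    mask = torch.zeros(img.shape[-2:], dtype=torch.bool, device=img.device)
    mask[src[:, 0], src[:, 1]] = True

    # ... and an image whose masked pixels hold the gathered values.
    moved = torch.zeros_like(img)
    moved[..., src[:, 0], src[:, 1]] = img[..., tgt[:, 0], tgt[:, 1]]

    # Out-of-place combination, so autograd tracks the copied pixels.
    return torch.where(mask, moved, img)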

Related

Transforms.Normalize returns values higher than 255 Pytorch

I am working on a video dataset; I read the frames as integers and convert them to float32 numpy arrays. After being loaded, they appear in a range between 0 and 255:
[165., 193., 148.],
[166., 193., 149.],
[167., 193., 149.],
...
Finally, to feed them to my model and stack the frames, I apply ToTensor() plus my transformation [transforms.Resize(224), transforms.Normalize([0.454, 0.390, 0.331], [0.164, 0.187, 0.152])]. Here is the code that transforms and stacks the frames:
res_vframes = []
for i in range(len(v_frames)):
    res_vframes.append(self.transforms(v_frames[i]))
res_vframes = torch.stack(res_vframes, 0)
The problem is that after the transformation the values appear like this, higher than 255:
[tensor([[[1003.3293, 1009.4268, 1015.5244, ..., 1039.9147, 1039.9147,
1039.9147],...
Any idea on what I am missing or doing wrong?
This is the behavior of torchvision.transforms.Normalize:
output[channel] = (input[channel] - mean[channel]) / std[channel]
Since the numerator on the right-hand side is much greater than 1 (the raw pixel values go up to 255) and the denominator is smaller than 1, the computed value gets larger.
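As a quick check, the first value in the question's output can be reproduced directly from the question's own numbers:

# Raw pixel 165, normalized with mean 0.454 and std 0.164
# (constants that were meant for inputs in [0, 1]):
(165.0 - 0.454) / 0.164  # ≈ 1003.33, matching the tensor output above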
The class ToTensor() maps a tensor's values to [0, 1] only if a certain condition is satisfied. Check this code from the official PyTorch source:
if isinstance(pic, np.ndarray):
    # handle numpy array
    if pic.ndim == 2:
        pic = pic[:, :, None]

    img = torch.from_numpy(pic.transpose((2, 0, 1))).contiguous()
    # backward compatibility
    if isinstance(img, torch.ByteTensor):
        return img.to(dtype=default_float_dtype).div(255)
    else:
        return img
Therefore you need to divide the tensors explicitly, or make the input match the above condition (i.e. pass a uint8 array so that ToTensor() divides by 255 itself).
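A minimal sketch of the latter option (assuming numpy is imported as np and v_frames holds the 0-255 float frames from the question):

# Cast frames to uint8 so ToTensor() hits the ByteTensor branch above
# and divides by 255 automatically:
frames_uint8 = [f.astype(np.uint8) for f in v_frames]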
Your normalization constants are for values between 0 and 1, not between 0 and 255. You need to scale your input frames to 0-1, or the normalization vectors to 0-255. You can divide the frames by 255 before applying the transform:
res_vframes = []
for i in range(len(v_frames)):
    res_vframes.append(self.transforms(v_frames[i] / 255))
res_vframes = torch.stack(res_vframes, 0)
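A sketch of the other option mentioned above (scaling the constants rather than the frames; assumes from torchvision import transforms):

# Scale the normalization constants to the 0-255 range instead:
normalize_255 = transforms.Normalize(
    [m * 255 for m in [0.454, 0.390, 0.331]],
    [s * 255 for s in [0.164, 0.187, 0.152]],
)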

Python scipy interpolation meshgrid data

I want to interpolate experimental data to give it a higher resolution, but apparently it does not work. I followed the example in this link for mgrid data; the CSV data is included below.
My code:
import pandas as pd
import numpy as np
import scipy.interpolate
import matplotlib.pyplot as plt

x = np.linspace(0, 2.8, 15)
y = np.array([2.1, 2, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 0.9, 0.7, 0.5, 0.3, 0.13])
[X, Y] = np.meshgrid(x, y)
Vx_df = pd.read_csv("Vx.csv", header=None)
Vx = Vx_df.to_numpy()
tck = scipy.interpolate.bisplrep(X, Y, Vx)
plt.pcolor(X, Y, Vx, shading='nearest')
plt.show()
xi = np.linspace(0.1, 2.5, 30)
yi = np.linspace(0.15, 2.0, 50)
[X1, Y1] = np.meshgrid(xi, yi)
VxNew = scipy.interpolate.bisplev(X1[:, 0], Y1[0, :], tck, dx=1, dy=1)
plt.pcolor(X1, Y1, VxNew, shading='nearest')
plt.show()
CSV DATA:
0.73,,,-0.08,-0.19,-0.06,0.02,0.27,0.35,0.47,0.64,0.77,0.86,0.90,0.93
0.84,,,0.13,0.03,0.12,0.23,0.32,0.52,0.61,0.72,0.83,0.91,0.96,0.95
1.01,1.47,,0.46,0.46,0.48,0.51,0.65,0.74,0.80,0.89,0.99,0.99,1.07,1.06
1.17,1.39,1.51,1.19,1.02,0.96,0.95,1.01,1.01,1.05,1.06,1.05,1.11,1.13,1.19
1.22,1.36,1.42,1.44,1.36,1.23,1.24,1.17,1.18,1.14,1.14,1.09,1.08,1.14,1.19
1.21,1.30,1.35,1.37,1.43,1.36,1.33,1.23,1.14,1.11,1.05,0.98,1.01,1.09,1.15
1.14,1.17,1.22,1.25,1.23,1.16,1.23,1.00,1.00,0.93,0.93,0.80,0.82,1.05,1.09
,0.89,0.95,0.98,1.03,0.97,0.94,0.84,0.77,0.68,0.66,0.61,0.48,,
,0.06,0.25,0.42,0.55,0.55,0.61,0.49,0.46,0.56,0.51,0.40,0.28,,
,0.01,0.05,0.13,0.23,0.32,0.33,0.37,0.29,0.30,0.32,0.27,0.25,,
,-0.02,0.01,0.07,0.15,0.21,0.23,0.22,0.20,0.19,0.17,0.20,0.21,0.13,
,-0.07,-0.05,-0.02,0.06,0.07,0.07,0.16,0.11,0.08,0.12,0.08,0.13,0.16,
,-0.13,-0.14,-0.09,-0.07,0.01,-0.03,0.06,0.02,-0.01,0.00,0.01,0.02,0.04,
,-0.16,-0.23,-0.21,-0.16,-0.10,-0.08,-0.05,-0.11,-0.14,-0.17,-0.16,-0.11,-0.05,
,-0.14,-0.25,-0.29,-0.32,-0.31,-0.33,-0.31,-0.34,-0.36,-0.35,-0.31,-0.26,-0.14,
,-0.02,-0.07,-0.24,-0.36,-0.39,-0.45,-0.45,-0.52,-0.48,-0.41,-0.43,-0.37,-0.22,
The low-resolution image (without interpolation) and the image I get after interpolation are linked above as 'Low resolution' and 'High resolution'. Can you please give me some advice? Why does it not interpolate properly?
OK, so to interpolate we need to set up an input grid and an output grid, and we may need to remove missing values from the grid. We do that like so:
from io import StringIO

import numpy as np
import pandas as pd
from scipy import interpolate
import matplotlib.pyplot as plt

# csv_string holds the CSV text from the question
array = pd.read_csv(StringIO(csv_string), header=None).to_numpy()

def interp(array, scale=1, method='cubic'):
    x = np.arange(array.shape[1] * scale)[::scale]
    y = np.arange(array.shape[0] * scale)[::scale]
    x_in_grid, y_in_grid = np.meshgrid(x, y)
    x_out, y_out = np.meshgrid(np.arange(max(x) + 1), np.arange(max(y) + 1))
    array = np.ma.masked_invalid(array)
    x_in = x_in_grid[~array.mask]
    y_in = y_in_grid[~array.mask]
    return interpolate.griddata((x_in, y_in), array[~array.mask].reshape(-1),
                                (x_out, y_out), method=method)
Now we need to call this function three times. First we fill the missing values in the middle with spline interpolation. Then we fill the boundary values with nearest-neighbor interpolation. Finally we scale the image up by treating the pixels as being scale pixels apart and filling in the gaps with spline interpolation.
array = interp(array)
array = interp(array, method='nearest')
array = interp(array, 50)
plt.imshow(array)
And we get the following result

How to change pixDim in medical volumetric data using nibabel preferably or numpy or something else?

I am trying to train a model where one set of data has a particular pixDim, whereas another set has a different pixDim. I want to normalize both to the same voxel resolution and then proceed.
Can we change the pixDim of volumetric data such as a .nii.gz or .mgz file using nibabel or any other Python library?
For reference, I am talking about the pixDim values in the header of a volumetric file, highlighted in the image below.
The preferable way is to compute the target pixdim and resample the volume with a scipy interpolation method, as in the function below (the original pixdim is called steps in the function):
import numpy as np
import scipy.interpolate as si

def do_interpolate(values, steps, isLabel=False):
    x, y, z = [steps[k] * np.arange(values.shape[k]) for k in range(3)]  # original grid
    if isLabel:
        method = 'nearest'  # don't blend label values
    else:
        method = 'linear'
    f = si.RegularGridInterpolator((x, y, z), values, method=method)  # interpolator
    dx, dy, dz = 2.0, 2.0, 3.0  # new step sizes  # settings['EVAL']['target_voxel_dimension']
    new_grid = np.mgrid[0:x[-1]:dx, 0:y[-1]:dy, 0:z[-1]:dz]  # new grid
    new_grid = np.moveaxis(new_grid, (0, 1, 2, 3), (3, 0, 1, 2))  # reorder axes for evaluation
    return f(new_grid)
You will get an upsampled or downsampled volume whose resolution matches the target pixdim.
NOTE: In the above function, values holds the 3d volumetric data and steps holds the original pixdim; I have hardcoded the target pixdim in the form of dx, dy, dz.
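A minimal usage sketch (my addition; scan.nii.gz is a hypothetical input file, and the load/get_fdata/get_zooms calls are nibabel's documented API):

import nibabel as nib

img = nib.load('scan.nii.gz')              # hypothetical input file
values = img.get_fdata()                   # 3d volume as a float array
steps = img.header.get_zooms()[:3]         # original pixdim (mm per voxel)
resampled = do_interpolate(values, steps)  # volume on the 2.0 x 2.0 x 3.0 grid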

When applied to an interp1d cubic spline, `find_peaks` returns index positions, not values

I have data to which I have applied a cubic spline using interp1d (assume x and y have been previously defined):
f2 = interp1d(x, y, kind='cubic')
xnew = np.linspace(start, end, num=501, endpoint=True)
spline = np.array(f2(xnew))
If I check this, I get the values I am expecting:
print(spline)
[ 2.737e-02 5.0504e-02 ....]
However, I need to identify peaks in the spline and am using find_peaks from scipy.signal:
peaks = signal.find_peaks(spline)
But this returns a series of index positions, whereas I am trying to get the actual peak values of the spline array.
print(peaks)
(array([ 15, 123, 290, 432]), {})
Reading the documentation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html), this doesn't appear to be the expected behaviour (although it is entirely possible I have misunderstood). Any suggestions?
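For what it's worth, returning indices is the documented behaviour of find_peaks; a minimal sketch of recovering the peak values from them (my addition, using the question's own arrays):

peak_indices, properties = signal.find_peaks(spline)
peak_values = spline[peak_indices]  # heights of the peaks
peak_x = xnew[peak_indices]         # x positions of the peaks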

Index 150 out of bounds in axis 0 with size 1

I was making a histogram from a numpy array in Python with OpenCV. The code is as follows:
# finding histogram of an image
import numpy as np
import cv2

img = cv2.imread("cr7.jpg")
gry_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
a = np.zeros((1, 256), dtype=np.uint8)
# finding how many times a particular pixel intensity repeats
for x in range(0, 183):  # size of gray_img is (184, 275)
    for y in range(0, 274):
        g = gry_img[x, y]
        a[g] = a[g] + 1
print(a)
Error is as follows:
IndexError: index 150 is out of bounds for axis 0 with size 1
Since you haven't supplied the image, one can only guess, but it seems you've made a mistake with the dimensions of the image. Alternatively, the issue is entirely with the shape of your results array a: it is 1x256, so a[g] indexes axis 0 (of size 1) with the pixel value g, hence the IndexError.
The code you have is rather fragile, and here is a cleaner way to interact with images. I use an image from OpenCV's data directory: aero1.jpg.
The code here resolves both potential issues identified above, whichever one it was:
import cv2
import numpy as np

fname = 'aero1.jpg'
im = cv2.imread(fname)
gry_img = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
gry_img.shape
>>> (480, 640)
# note that the image is 640px wide by 480px tall;
# the numpy array shows the number of rows first.
# rows are in y / columns are in x

# NOTE the results array `a` need only be 1-dimensional, not 2d (1x256);
# use an integer dtype wide enough that the counts don't overflow at 255.
a = np.zeros((256,), dtype=np.int64)

# iterating over all pixels, whatever the shape of the image.
height, width = gry_img.shape
for x in range(width):
    for y in range(height):
        g = gry_img[y, x]  # NOTE y, x not x, y
        a[g] += 1
But note that you could also achieve this easily with the numpy function np.histogram (docs), with slightly careful handling of the bin edges.
histb, bin_edges = np.histogram(gry_img.reshape(-1), bins=range(0, 257))
# check that we arrived at the same result as iterating manually:
(a == histb).all()
>>> True
