PyTorch dimension change

Is there any method to change a [1,512,1,1] tensor into a [1,512,2,2] tensor?
I know it is not possible just by changing the dimensions.
Are there any ways to do it with concatenation or stacking in PyTorch (torch.stack, torch.cat)?
I make the tensor with the following code:
a = torch.rand([1,512,1,1])
How can I change this to a tensor with dimension [1,512,2,2]?

That would be torch.Tensor.repeat, which copies the data:
>>> a = a.repeat(1, 1, 2, 2)
If you do not wish to copy the data, use torch.Tensor.expand instead:
>>> a = a.expand(-1, -1, 2, 2)
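To see the difference concretely, a minimal sketch (expand gives a view sharing a's storage, while repeat allocates a copy):
>>> a = torch.rand([1, 512, 1, 1])
>>> r = a.repeat(1, 1, 2, 2)    # new storage
>>> e = a.expand(-1, -1, 2, 2)  # view, no copy
>>> r.shape == e.shape == torch.Size([1, 512, 2, 2])
True
>>> e.data_ptr() == a.data_ptr()  # expand shares memory with a
True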

I tried this:
tmp = torch.cat([a,a],2)
a = torch.cat([tmp,tmp],3)
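That also works; with a of shape [1,512,1,1] it builds the same tensor as repeat (a quick check, reusing the original a from the question):
>>> tmp = torch.cat([a, a], 2)
>>> b = torch.cat([tmp, tmp], 3)
>>> b.shape
torch.Size([1, 512, 2, 2])
>>> torch.equal(b, a.repeat(1, 1, 2, 2))
True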

Expand the tensor by several dimensions

In PyTorch, given a tensor of size=[3], how to expand it by several dimensions to the size=[3,2,5,5] such that the added dimensions have the corresponding values from the original tensor. For example, making size=[3] vector=[1,2,3] such that the first tensor of size [2,5,5] has values 1, the second one has all values 2, and the third one all values 3.
In addition, how to expand the vector of size [3,2] to [3,2,5,5]?
One way to do it that I can think of is to create a tensor of the same shape with torch.ones_like and then use einsum, but I think there should be an easier way.
You can first unsqueeze the appropriate number of singleton dimensions, then expand to a view at the target shape with torch.Tensor.expand:
>>> x = torch.rand(3)
>>> target = [3,2,5,5]
>>> x[:, None, None, None].expand(target)
A nice workaround is to use torch.Tensor.reshape or torch.Tensor.view to perform the multiple unsqueezes in one call:
>>> x.view(-1, 1, 1, 1).expand(target)
This allows for a more general approach to handle any arbitrary target shape:
>>> x.view(len(x), *(1,)*(len(target)-1)).expand(target)
For an even more general implementation, where x can be multi-dimensional:
>>> x = torch.rand(3, 2)
>>> # just to make sure the target shape is valid w.r.t. x
>>> assert list(x.shape) == list(target[:x.ndim])
>>> x.view(*x.shape, *(1,)*(len(target)-x.ndim)).expand(target)
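As a quick sanity check that the broadcast carries the original values into the added dimensions (reusing x and target from above):
>>> y = x.view(*x.shape, *(1,)*(len(target)-x.ndim)).expand(target)
>>> y.shape
torch.Size([3, 2, 5, 5])
>>> torch.equal(y[..., 0, 0], x)  # each (5, 5) slice holds the matching value of x
True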

Backtransforming a PyTorch Tensor

I have trained a WGAN on the CelebA dataset in PyTorch, following this YouTube video. Since I do this on Google Cloud Platform, where TensorBoard is not available, I save a figure of images generated by the GAN every epoch to see how the GAN is actually doing.
Now, the saved pdf files look something like this: generated images. Unfortunately, this is not really readable, and I suspect this has to do with the preprocessing I do:
trafo = transforms.Compose([
    transforms.Resize(size=(64, 64)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5,), std=(0.5,)),
])
Is there any way to kind of undo this transformation when I save the image?
Currently, I save the image every epoch as follows:
visualization = torchvision.utils.make_grid(
    tensor=gen(fixed_noise),
    nrow=8,
    normalize=False)
plt.savefig("generated_WGAN_" + datetime.now().strftime("%Y%m%d-%H%M%S") + ".pdf")
Also, I should probably mention that in the Jupyter notebook, I get the following warning:
"Clipping input data to the valid range for imshow with RGB data ([0..1]) for floats or [0..255] for integers)."
The torchvision.transforms.Normalize transform is usually used to standardize data (make mean(data)=0 and std(data)=1), while the normalize option of torchvision.utils.make_grid is used to normalize the data to [0,1] given a range. So there is no need to implement a function to fix this.
If True, shift the image to the range (0, 1), by the min and max values specified by range. Default: False.
Here you are looking to normalize between 0 and 1. Given a tensor x:
torchvision.utils.make_grid(x, nrow=8, normalize=True, range=(x.min(), x.max()))
(Note that in recent torchvision versions the range argument has been renamed to value_range.)
Here are some examples of use provided by PyTorch's documentation.
Back to your original question: I should mention that torchvision.transforms.Normalize(mean=0.5, std=0.5) doesn't transform your data such that it has mean=0.5 and std=0.5... neither will it standardize it to mean=0, std=1. You would have to measure the mean and std from your dataset for that.
torchvision.transforms.Normalize simply performs a shift-scale operation, x -> (x - mean)/std. To undo it, just unscale and unshift with the same values:
>>> import torch
>>> import torchvision.transforms as T
>>> mean, std = 0.5, 0.5
>>> x = torch.rand(64, 3, 100, 100)*torch.rand(64, 1, 1, 1)
>>> x.mean(), x.std()
(tensor(0.2536), tensor(0.2175))
>>> t = T.Normalize(mean, std)
>>> t_inv = lambda x: x*std + mean
>>> x_after = t(x)
>>> x_after.mean(), x_after.std()
(tensor(-0.4928), tensor(0.4350))
>>> x_before = t_inv(x_after)
>>> x_before.mean(), x_before.std()
(tensor(0.2536), tensor(0.2175))
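Alternatively, the inverse can itself be written as a Normalize transform, since (x - m)/s inverted is x*s + m = (x - (-m/s)) / (1/s). A sketch with the same mean/std as above (assuming a torchvision version whose Normalize accepts batched tensors, as recent ones do):
>>> t_inv = T.Normalize(mean=-mean/std, std=1/std)  # inverse of T.Normalize(mean, std)
>>> torch.allclose(t_inv(t(x)), x)
True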
It seems like your output pixel values are in the range [-1, 1] (please verify this).
Therefore, when you save the images, the negative part is being clipped (as the warning message you got suggests).
Try:
visualization = torchvision.utils.make_grid(
    tensor=torch.clamp(gen(fixed_noise), -1, 1) * 0.5 + 0.5,  # map [-1, 1] -> [0, 1]
    nrow=8,
    normalize=False)
plt.imshow(visualization.detach().cpu().permute(1, 2, 0))  # make_grid returns CxHxW; imshow expects HxWxC
plt.savefig("generated_WGAN_" + datetime.now().strftime("%Y%m%d-%H%M%S") + ".pdf")
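To verify the range assumption first, a minimal sketch (gen and fixed_noise as in your code):
out = gen(fixed_noise).detach()
print(out.min().item(), out.max().item())  # roughly -1 and 1 if the generator ends in tanh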

How can I get argmaxed torch tensor excluding certain index?

I wonder if I can get the torch.argmax of my input excluding certain indices.
For example,
target = torch.tensor([1,2])
input = torch.tensor([[0.1,0.5,0.2,0.2], [0.1,0.5,0.1,0.3]])
I want to get the maximum value in each row of input excluding the index given in target, so that the result would be
output = torch.tensor([[0.2],[0.5]])
You can try this: set negative infinity at the target indices in a temporary tensor, then use torch.max or torch.argmax.
>>> tmp_input = input.clone()
>>> tmp_input[range(len(input)), target] = float("-Inf")
>>> torch.max(tmp_input, dim=1).values
tensor([0.2000, 0.5000])
>>> torch.max(tmp_input, dim=1).indices
tensor([3, 1])
>>> torch.argmax(tmp_input, dim=1)
tensor([3, 1])
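An equivalent variant that avoids mutating a copy, using torch.Tensor.masked_fill (a sketch with the same input/target as above):
>>> mask = torch.zeros_like(input, dtype=torch.bool)
>>> mask[range(len(input)), target] = True
>>> torch.max(input.masked_fill(mask, float("-inf")), dim=1).values
tensor([0.2000, 0.5000])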
Alternatively, mask the target index in each row in place (note that this overwrites input):
input[range(len(target)), target] = float("-inf")
output = torch.max(input, dim=1)

How can I reshape an array from (280, 280, 3) to (28, 28, 3)?

Hi, I tried to write a program where I draw a number on the screen with pygame and a neural network then predicts the number I drew. My problem is that I trained my neural network on image arrays with shape (28, 28, 3), so I tried to reshape my (280, 280, 3) array; but when I do so, my array is None.
I use Python 3.7
string_image = pygame.image.tostring(screen, 'RGB')
temp_surf = pygame.image.fromstring(string_image, (280, 280), 'RGB')
array = pygame.surfarray.array3d(temp_surf)
array = array.resize((28, 28, 3))
Can anyone help?
If you just want to scale a pygame.Surface, then I recommend pygame.transform.scale() or pygame.transform.smoothscale(). (As an aside, your array comes out as None because numpy's ndarray.resize works in place and returns None, so the assignment discards the data.)
For instance:
temp_surf = pygame.image.fromstring(string_image, (280, 280), 'RGB')
scaled_surf = pygame.transform.smoothscale(temp_surf, (28, 28))
array = pygame.surfarray.array3d(scaled_surf)
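One caveat worth checking: pygame.surfarray.array3d indexes as [x][y], so the returned array has shape (width, height, 3). If your training images were row-major, i.e. (height, width, 3), swap the first two axes (a sketch, reusing scaled_surf from above):
array = pygame.surfarray.array3d(scaled_surf).transpose(1, 0, 2)  # (28, 28, 3), row-major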
I am not familiar with pygame, but it looks like pygame.surfarray.array3d returns a numpy array (see also: How to iterate through a pygame 3d surfarray and change the individual color of the pixels if they're less than a specific value?).
To keep the same number of data points and just change the shape you can use numpy.reshape.
import numpy as np
c = 28*28*3 # just to start off with right amount of data
a = np.arange(c) # this just creates the initial data you want to format
b = a.reshape((28,28,3)) # this actually reshapes the data
d = b.shape #this just stores the new shape in variable d
print(d) #shows you the new shape
Note that numpy's np.resize does not interpolate: it just repeats (or truncates) the flattened data to fill the new shape, so it is not suitable for downscaling an image. For example:
ary = np.resize(b, (18, 18, 1))
e = ary.shape
print(e)
To actually interpolate to a new size, you need an image library such as OpenCV; see the sketch below.
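A sketch of genuine interpolation-based downscaling with OpenCV (assuming it is installed, and array is your (280, 280, 3) array):
import cv2
small = cv2.resize(array, (28, 28), interpolation=cv2.INTER_AREA)  # shape (28, 28, 3)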
Hope that helps.

numpy matrix not functioning as intended

This is my code:
import random
import numpy as np
import math
populacao = 5
x_min = -10
x_max = 10
nbin = 4
def fitness(xy, populacao, resultado):
    fit = np.matrix(resultado)
    xy_fit = np.append(xy, fit.T, axis=1)
    xy_fit_sorted = xy_fit[np.argsort(xy_fit[:,-1].T),:]
    return xy_fit_sorted
def codifica(x, x_min, x_max, n):
    x = float(x)
    xdec = round((x-x_min)/(x_max-x_min)*(2**n-1))
    xbin = int(bin(xdec)[2:])
    return(xbin)
xy = np.array([[1, 2],[3,4],[0,0],[-5,-1],[9,-2]])
resultado = np.array([5, 25, 0, 26, 85])
print(xy)
xy_fit_sorted = np.array(fitness(xy, populacao, resultado))
print(xy_fit_sorted)
parents = (xy_fit_sorted[:,:2])
print(parents)
The problem I'm having is that to select the first 2 rows of xy_fit_sorted, I'm doing this strange thing:
parents = (xy_fit_sorted[:,:2])
instead of what makes sense in my mind:
parents = (xy_fit_sorted[:2,:])
It's like the whole matrix is in one line.
I'm not sure what most of your code is doing, so here's just a guess: are you thrown off by the shape of xy_fit_sorted being (1, 5, 3), i.e. having an extra leading axis?
That could be fixed e.g. by constructing xy_fit without the use of np.matrix:
xy_fit = np.append(xy, resultado[:, np.newaxis], axis=1)
Then xy_fit_sorted comes out with a shape of (5, 3).
The underlying issue was that np.matrix is always a 2-D array. When indexing xy_fit[...] you intend to index with a vector. But using np.matrix for xy_fit, xy_fit[:,-1].T is not a vector, but a 2-D array as well (of shape (1,5)). This leads to xy_fit_sorted having an extra dimension as well.
Note that the numpy doc says about np.matrix anyhow:
It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.
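Putting it together, a sketch of the fitness function without np.matrix (dropping the unused populacao argument):
def fitness(xy, resultado):
    # column-stack the fitness values, keeping everything a plain 2-D ndarray
    xy_fit = np.append(xy, resultado[:, np.newaxis], axis=1)  # shape (5, 3)
    # sort the rows by the last column (the fitness)
    return xy_fit[np.argsort(xy_fit[:, -1])]

xy_fit_sorted = fitness(xy, resultado)  # shape (5, 3)
parents = xy_fit_sorted[:2, :]          # first two rows, as expected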
