When I index an array, I use this code to print a single column with NumPy or PyTorch:
import numpy as np
a = np.random.randn(5,3)
a[:,1]
or
import torch
a = torch.Tensor(5,3)
a[:,1]
The output is displayed like this.
array([-0.07478094, -1.87787326, 0.50407517, 1.13335836, 0.23140931])
But I want to display the output as a column (because I indexed a column):
array([-0.07478094,
       -1.87787326,
        0.50407517,
        1.13335836,
        0.23140931])
Furthermore, when I make a tensor with torch.ones(5), the result is
tensor([1., 1., 1., 1., 1.])
but I want to see the type of the output at the bottom, like this:
tensor([1., 1., 1., 1., 1.])
[torch.FloatTensor of size 5]
The reason I want to display this is that I can't distinguish a tensor from a NumPy array.
Can anyone tell me how to do this? Thanks.
Try this on the indexed result:
np.vstack(a[:, 1])
Hope this helps.
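A couple of further options (a sketch, not from the original answer): slicing with 1:2 instead of 1 keeps the column dimension, and type()/dtype tell a tensor apart from a NumPy array.
import numpy as np
import torch

a = np.random.randn(5, 3)
print(a[:, 1:2])   # slicing with 1:2 keeps shape (5, 1), so it prints as a column

t = torch.ones(5)
print(type(t))     # <class 'torch.Tensor'> -- unlike <class 'numpy.ndarray'>
print(t.dtype)     # torch.float32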
Using torch.round(), is it possible to round only specific entries of a tensor? Example:
tensor([ 8.5040e+00,  7.3818e+01,  5.2922e+00, -1.8912e-01,  5.4389e-01,
        -3.6032e-03,  4.5763e-01, -2.7471e-02])
Desired output:
tensor([ 9., 74., 5., 0., 5.4389e-01,
        -3.6032e-03, 4.5763e-01, -2.7471e-02])
(Only the first 4 rounded.)
You can do it as follows:
a[:4] = torch.round(a[:4])
Another (a little bit shorter) option is
t = torch.tensor([ 8.5040e+00, 7.3818e+01, 5.2922e+00, -1.8912e-01, 5.4389e-01, -3.6032e-03, 4.5763e-01, -2.7471e-02])
t[:4].round()
or in place:
t[:4].round_()
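One caveat worth a quick sketch: t[:4].round() returns a rounded copy and leaves t unchanged; only the in-place round_() writes through the slice view.
import torch

t = torch.tensor([8.5040, 73.818, 5.2922, -0.18912, 0.54389])
t[:4].round()    # out-of-place: returns a rounded copy, t is unchanged
print(t[0])      # tensor(8.5040)
t[:4].round_()   # in-place: modifies t through the view
print(t[0])      # tensor(9.)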
I was trying to write a simple function to create a random adjacency matrix in the following way:
def create_adj(a):
    a[a > 0.5] = 1
    a[a <= 0.5] = 0
    return a
given that a is assumed to be a torch.Tensor as input, but I get the following error:
TypeError: 'int' object does not support item assignment
If I do things separately (i.e. not inside a function), I simply do:
>>> a = torch.rand(3, 3)
>>> a[a > 0.5] = 1
>>> a[a <= 0.5] = 0
>>> a
tensor([[1., 1., 1.],
        [0., 0., 0.],
        [1., 0., 0.]])
But I don't understand what I'm doing wrong in the function.
I would assume you are not passing the correct variable to your create_adj function. As long as a is a torch.Tensor, it should work.
Alternatively, you can use the mask directly as the result:
def create_adj(x):
    # the comparison yields a boolean mask; cast to float for 0./1. entries
    return (x > 0.5).float()
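For completeness, a short usage sketch of the fixed function:
a = torch.rand(3, 3)
print(create_adj(a))   # a 3x3 tensor of 0.s and 1.s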
I am trying to get a specific range of values from my PyTorch tensor.
tensor=torch.tensor([0,1,2,3,4,5,6,7,8,9])
new_tensor=tensor[tensor>2]
print(new_tensor)
This gives me a tensor with the values 3-9.
new_tensor2=tensor[tensor<8]
print(new_tensor2)
This gives me a tensor with the values 0-7.
new_tensor3=tensor[tensor>2 and tensor<8]
print(new_tensor3)
However this raises an error. Would I be able to get a tensor with the values of 3-7 using something like this? I am trying to edit the tensor directly, and do not wish to change the order of the tensor itself.
grad[x<-3]=0.1
grad[x>2]=1
grad[(x>=-3 and x<=2)]=siglrelu(grad[(x>=-3 and x<=2)])*(1.0-siglrelu(grad[(x>=-3 and x<=2)]))
This is what I am really going for, and I am not exactly sure of how to go about this. Any help is appreciated, thank you!
You can use the & operation:
t = torch.arange(0., 10)
print(t)
print(t[(t > 2) & (t < 8)])
The output is:
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
tensor([3., 4., 5., 6., 7.])
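Applying the same & trick to the grad snippet from the question (a sketch only; torch.sigmoid stands in for the asker's siglrelu, which is not defined in the question):
import torch

x = torch.linspace(-5., 5., steps=11)
grad = x.clone()
siglrelu = torch.sigmoid   # placeholder for the asker's own function

grad[x < -3] = 0.1
grad[x > 2] = 1
mask = (x >= -3) & (x <= 2)   # each comparison needs its own parentheses
s = siglrelu(grad[mask])
grad[mask] = s * (1.0 - s)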
I want to add some more information to an image as a fourth channel of a tensor whose first three channels come from the image. Afterwards I want to cut a piece out of the image (data augmentation) and resize it to a given size.
For this I created a tensor from a picture and concatenated it with a one-channel tensor of additional information using torch.cat. (Almost all entries of the second tensor were zeros, but not all.)
I sent the result through a transforms.Compose (to crop and resize the tensor), but afterwards the tensor consisted entirely of zeros.
Here is a reproducible example:
import torch
from torchvision import transforms
height = 2
width = 4
resize = 2
tensor3 = torch.rand(3,height,width)
tensor1 = torch.zeros(1,height,width)
#tensor1 = torch.rand(1,height,width)
imageToTensor = transforms.ToTensor()
tensorToImage = transforms.ToPILImage()
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(resize, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])
tensor4 = torch.cat((tensor3,tensor1),0)
image4 = tensorToImage(tensor4)
transformed_image4 = train_transform(image4)
print(tensor4)
print(transformed_image4)
tensor([[[0.6774, 0.5293, 0.4420, 0.2463],
         [0.1391, 0.7481, 0.3436, 0.9391]],

        [[0.0652, 0.2061, 0.2931, 0.6126],
         [0.2618, 0.3506, 0.5095, 0.7351]],

        [[0.8555, 0.6320, 0.9461, 0.0928],
         [0.2094, 0.3944, 0.0528, 0.7900]],

        [[0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000]]])
tensor([[[0., 0.],
         [0., 0.]],

        [[0., 0.],
         [0., 0.]],

        [[0., 0.],
         [0., 0.]],

        [[0., 0.],
         [0., 0.]]])
If I choose tensor1 = torch.rand(1, height, width) I do not have this problem, but if most entries are zero I do.
With scale=(0.5, 1.0) I don't have the problem either.
Now some questions:
How can I get the first three channels resized with their non-zero entries intact?
Did I misunderstand something, or is this really odd behavior?
I created an issue:
https://github.com/pytorch/pytorch/issues/22611
The answer was that only PIL images are supported in torchvision.
An alternative is the albumentations library for transformations.
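A rough sketch of the albumentations route, working on the raw array so the fourth channel survives (the exact RandomResizedCrop signature varies between albumentations versions, so treat the arguments below as an assumption, not the definitive API):
import albumentations as A

arr = tensor4.permute(1, 2, 0).numpy()   # to HxWxC, the layout albumentations expects
aug = A.RandomResizedCrop(height=resize, width=resize, scale=(0.9, 1.0))
out = aug(image=arr)["image"]            # the crop/resize is applied to all four channels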
How can I upscale an image in PyTorch without defining height and width, using transforms?
('--upscale_factor', type=int, required=True, help="super resolution upscale factor")
This might do the job:
transforms.Compose([transforms.Resize(ImageSize * Scaling_Factor)])
If I understand correctly, you want to upsample a tensor x by just specifying a factor f (instead of specifying target width and height). You could try this:
from torch.nn import Upsample
m = Upsample(scale_factor=f, mode='nearest')
x_upsampled = m(x)
Note that Upsample allows for multiple interpolation modes, e.g. mode='nearest' or mode='bilinear'.
Here is one interesting example:
import torch
import torch.nn as nn

input = torch.tensor([[1., 2.], [3., 4.]])
input = input[None]   # add a channel dimension -> (1, 2, 2)
input = input[None]   # add a batch dimension  -> (1, 1, 2, 2)
output = nn.functional.interpolate(input, scale_factor=2, mode='nearest')
print(output)
Out:
tensor([[[[1., 1., 2., 2.],
          [1., 1., 2., 2.],
          [3., 3., 4., 4.],
          [3., 3., 4., 4.]]]])
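A small variation on the same idea: for image-like data you would usually pick bilinear interpolation instead of nearest.
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 8, 8)   # NCHW image batch
y = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
print(y.shape)               # torch.Size([1, 3, 16, 16])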
You can do
image_tensor = transforms.functional.resize(image_tensor, size=(image_tensor.shape[1] * 2, image_tensor.shape[2] * 2))
or read out the width and height beforehand using color, height, width = image_tensor.size().
Check this example for reference on Resize as well.