Consider the following paragraph from the sub-section named The essence of tensors from the section named Tensors: Multidimensional arrays of the chapter named It starts with a tensor from the book titled Deep Learning with PyTorch by Eli Stevens et al.
Python lists or tuples of numbers are collections of Python objects
that are individually allocated in memory, as shown on the left
in figure 3.3. PyTorch tensors or NumPy arrays, on the other
hand, are views over (typically) contiguous memory blocks
containing unboxed C numeric types rather than Python objects. Each
element is a 32-bit (4-byte) float in this case, as we can see on the
right side of figure 3.3. This means storing a 1D tensor of 1,000,000
float numbers will require exactly 4,000,000 contiguous bytes, plus a
small overhead for the metadata (such as dimensions and numeric type).
The figure they are referring to (figure 3.3 in the book, which contrasts a Python list of individually allocated objects with a contiguous block of unboxed C floats) is not reproduced here.
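As a quick sanity check of that storage claim (a snippet of my own, not from the book), a tensor's raw data size is its element size times its number of elements:
>>> import torch
>>> t = torch.ones(1_000_000, dtype=torch.float32)
>>> t.element_size() * t.nelement()
4000000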
The above paragraph is saying that tensors are views over contiguous memory blocks. What exactly is meant by a view in this context?
A "view" is how you interpret this data, or more precisely, the shape of the tensor. For example, given a memory block with 40 contiguous bytes (10 contiguous floats), you can either view it as a 2x5 tensor, or a 5x2 tensor.
In PyTorch, the API to change the view of a tensor is torch.Tensor.view(). Some examples:
Python 3.8.10 (default, Sep 28 2021, 16:10:42)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.randn(10, dtype=torch.float32)
>>> x.shape
torch.Size([10])
>>>
>>> x = x.view(2, 5)
>>> x.shape
torch.Size([2, 5])
>>>
>>> x = x.view(5, 2)
>>> x.shape
torch.Size([5, 2])
Of course, some views are forbidden for 10 floats, since the new shape must cover exactly the same number of elements:
>>> x = x.view(3, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: shape '[3, 3]' is invalid for input of size 10
view does not change the data in the underlying memory. It merely changes how you "view" the tensor.
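One quick way to see this (a small check of my own, continuing the session above) is that a view shares its storage with the original tensor, so a write through the view is visible through the original:
>>> y = x.view(2, 5)              # y is a view: same storage, different shape
>>> x.data_ptr() == y.data_ptr()
True
>>> y[0, 0] = 42.0                # write through the view...
>>> x[0, 0]                       # ...and the change shows up in x
tensor(42.)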
Related
How to find the bigger of two PyTorch tensors by size
>>> tensor1 = torch.empty(0)
>>> tensor2 = torch.empty(1)
>>> tensor1
tensor([])
>>> tensor2
tensor([5.9555e-34])
torch.maximum is returning the empty tensor as the biggest tensor:
>>> torch.maximum(tensor1,tensor2)
tensor([])
Is there a way to find the biggest of two tensors (mostly 1D), based on the number of elements in the tensor?
Why not compare their first dimension sizes? You can use any of the equivalent forms x.size(0), x.shape[0], or len(x). To return the tensor with the longer first dimension, you can use the built-in max function with the key argument:
>>> max((tensor1, tensor2), key=len)
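If "biggest" should instead mean the total number of elements (which also covers tensors with more than one dimension), one variant of the same idea (my addition, not the original answer) is to key on Tensor.numel:
>>> max((tensor1, tensor2), key=torch.Tensor.numel)
tensor([5.9555e-34])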
In PyTorch, given a tensor of size [3], how do I expand it by several dimensions to size [3,2,5,5] such that the added dimensions take the corresponding values from the original tensor? For example, given the size-[3] vector [1,2,3], the first [2,5,5] block should have all values 1, the second all values 2, and the third all values 3.
In addition, how do I expand a tensor of size [3,2] to [3,2,5,5]?
One way to do it that I can think of is creating a ones tensor of that size with ones_like and then using einsum, but I think there should be an easier way.
You can first unsqueeze the appropriate number of singleton dimensions, then expand to a view at the target shape with torch.Tensor.expand:
>>> x = torch.rand(3)
>>> target = [3,2,5,5]
>>> x[:, None, None, None].expand(target)
A nice workaround is to use torch.Tensor.reshape or torch.Tensor.view to perform the multiple unsqueezes in one call:
>>> x.view(-1, 1, 1, 1).expand(target)
This allows for a more general approach to handle any arbitrary target shape:
>>> x.view(len(x), *(1,)*(len(target)-1)).expand(target)
For an even more general implementation, where x can be multi-dimensional:
>>> x = torch.rand(3, 2)
# just to make sure the target shape is valid w.r.t. x
>>> assert list(x.shape) == list(target[:x.ndim])
>>> x.view(*x.shape, *(1,)*(len(target)-x.ndim)).expand(target)
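To check that the expanded views behave as the question asks (a small verification of my own, using the concrete [1,2,3] example from the question):
>>> v = torch.tensor([1., 2., 3.])
>>> y = v.view(-1, 1, 1, 1).expand(3, 2, 5, 5)
>>> y[0].unique(), y[1].unique(), y[2].unique()
(tensor([1.]), tensor([2.]), tensor([3.]))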
I would like to compute the F1 score for a classifier trained with AllenNLP. I used the working code from an AllenNLP guide, which computed accuracy, not F1, so I tried to adjust the metric in the code.
According to the documentation, CategoricalAccuracy and FBetaMultiLabelMeasure take the same inputs (predictions: a torch.Tensor of shape [batch_size, ..., num_classes]; gold_labels: a torch.Tensor of shape [batch_size, ...]).
But for some reason the input that worked perfectly well for the accuracy metric results in a RuntimeError when given to the F1 multi-label metric.
I condensed the problem to the following code snippet:
>>> from allennlp.training.metrics import CategoricalAccuracy, FBetaMultiLabelMeasure
>>> import torch
>>> labels = torch.LongTensor([0, 0, 2, 1, 0])
>>> logits = torch.FloatTensor([[ 0.0063, -0.0118, 0.1857], [ 0.0013, -0.0217, 0.0356], [-0.0028, -0.0512, 0.0253], [-0.0460, -0.0347, 0.0400], [-0.0418, 0.0254, 0.1001]])
>>> labels.shape
torch.Size([5])
>>> logits.shape
torch.Size([5, 3])
>>> ca = CategoricalAccuracy()
>>> f1 = FBetaMultiLabelMeasure()
>>> ca(logits, labels)
>>> f1(logits, labels)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.8/site-packages/allennlp/training/metrics/fbeta_multi_label_measure.py", line 130, in __call__
true_positives = (gold_labels * threshold_predictions).bool() & mask & pred_mask
RuntimeError: The size of tensor a (5) must match the size of tensor b (3) at non-singleton dimension 1
Why is this error happening? What am I missing here?
You want to use FBetaMeasure, not FBetaMultiLabelMeasure. "Multi-label" means you can specify more than one correct answer per instance, whereas CategoricalAccuracy only allows one correct answer, so to use the multi-label metric you would have to add another (num_classes) dimension to your labels.
I suspect the documentation of FBetaMultiLabelMeasure is misleading. I'll look into fixing it.
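A minimal sketch of the suggested fix (my addition, assuming the same logits and labels as in the snippet above and the documented FBetaMeasure interface):
>>> from allennlp.training.metrics import FBetaMeasure
>>> f1 = FBetaMeasure()
>>> f1(logits, labels)
>>> f1.get_metric()  # a dict with per-class precision, recall, and fscore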
I've been trying to make balanced clusters for my master's thesis using Google Colab (because I don't have a GPU) and the balanced-kmeans library (https://pypi.org/project/balanced-kmeans/). This is the process I followed:
Install the balanced-kmeans library in Google Colab using this command:
!pip install balanced_kmeans
Upload a CSV file to my Google Colab instance:
from google.colab import files
subido=files.upload()
Create a Pandas dataframe
import pandas as pd
import io
df = pd.read_csv(io.BytesIO(subido['prueba.csv']))
Then I converted the string columns that I'm going to use to numeric values:
df["bytes"] = pd.to_numeric(df["bytes"],errors='coerce')
df["bytes"] = df["bytes"].fillna(0)
df["paquete"]= pd.to_numeric(df["paquete"],errors='coerce')
df["paquete"]= df["paquete"].fillna(0)
Once I had changed the data types, I created tensors (in this case from 2 columns of the pandas dataframe) and then stacked those tensors:
import torch
device = 'cuda'
A= torch.cuda.IntTensor(df['bytes'].values,device=device)
B=torch.cuda.IntTensor(df['paquete'].values,device=device)
prueba=[A,B]
X = (torch.stack(prueba,dim=1))
Finally I tried to create the clusters:
from balanced_kmeans import kmeans_equal
N=X.shape[0]
num_clusters=100
device='cuda'
cluster_size=N//num_clusters
choices,centers=kmeans_equal(X,num_clusters=num_clusters,cluster_size=cluster_size)
But unfortunately I got this error:
RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>()
     16 #print (W)
     17
---> 18 choices, centers = kmeans_equal(X, num_clusters=num_clusters, cluster_size=cluster_size)

1 frames
/usr/local/lib/python3.6/dist-packages/balanced_kmeans/__init__.py in initialize(X, num_clusters)
     16     indices = torch.empty(X.shape[:-1], device=X.device, dtype=torch.long)
     17     for i in range(bs):
---> 18         indices[i] = torch.randperm(num_samples, device=X.device)
     19     initial_state = torch.gather(X, 1, indices.unsqueeze(-1).repeat(1, 1, X.shape[-1])).reshape(bs, num_clusters, -1, X.shape[-1]).mean(dim=-2)
     20     return initial_state

RuntimeError: expand(torch.cuda.LongTensor{[20839]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)
I don't know if this error is caused by the tensor X having 1 dimension while the library's initialization function creates an empty tensor with zero dimensions (I took this code from the initialize method in __init__.py):
indices = torch.empty(X.shape[:-1], device=X.device, dtype=torch.long)
for i in range(bs):
    indices[i] = torch.randperm(num_samples, device=X.device)
So am I doing this right (I'm new to using tensors), or is there a bug in the initialization function?
I already checked the solution from "PyTorch: The number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)", but it doesn't work for me because I don't need a scalar value.
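As a side note (a sketch of my own, not part of the question or any answer), the error seems reproducible even on CPU with the shapes involved here: X.shape[:-1] is just (N,), so each indices[i] is a zero-dimensional slot that cannot hold a 1-D permutation.
>>> import torch
>>> X = torch.randn(20839, 2)                               # stacked 2-column data, like X above
>>> indices = torch.empty(X.shape[:-1], dtype=torch.long)   # shape (20839,), so indices[i] is 0-dim
>>> indices[0] = torch.randperm(20839)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: expand(torch.LongTensor{[20839]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)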
I'm using PyTorch for some robotics reinforcement learning tasks. I'd like to use both images and information about the state as observations for this task. The implementation I'm using does not directly support this, so I'm making some amendments. Expected observations are either the state, as a 1-dimensional Tensor, or images, as a 3-dimensional Tensor (channels, width, height). In my task I would like the observation to be a tuple of Tensors.
In many places in my codebase, the observation is of course expected to be a single Tensor, not a tuple of Tensors. Is there an easy way to treat a tuple of Tensors as a single Tensor?
For example, I would like:
observation.to(device)
to work as normal when observation is a single Tensor, and call .to(device) on each Tensor when observation is a tuple of Tensors.
It should be simple enough to create a data type that supports this, but I'm wondering whether such a data type already exists. I haven't found anything so far.
If your tensors are all of the same size, you can use torch.stack to concatenate them into one tensor with one more dimension.
Example:
>>> import torch
>>> a=torch.randn(2,1)
>>> b=torch.randn(2,1)
>>> c=torch.randn(2,1)
>>> a
tensor([[ 0.7691],
        [-0.0297]])
>>> b
tensor([[ 0.4844],
        [-0.9142]])
>>> c
tensor([[ 0.0210],
        [-1.1543]])
>>> torch.stack((a,b,c))
tensor([[[ 0.7691],
         [-0.0297]],

        [[ 0.4844],
         [-0.9142]],

        [[ 0.0210],
         [-1.1543]]])
You can then use torch.unbind to go the other direction.
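For completeness, a small round-trip sketch of my own: torch.unbind splits the stacked tensor back into its slices along dim 0, recovering tensors equal to the originals.
>>> stacked = torch.stack((a, b, c))
>>> a2, b2, c2 = torch.unbind(stacked)   # inverse of the stack along dim 0
>>> torch.equal(a2, a)
True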