How can I reduce dimensionality using the 2D stationary wavelet transform?
I have a GAN for a graph prediction task that uses torch_geometric.nn.NNConv layers. I want to add the eigenvector centrality difference between the ground-truth and predicted graphs to my loss function. To calculate eigenvector centrality, I intended to use the eigenvector_centrality function from the NetworkX library. However, this function requires its input to be a NetworkX graph, which means converting the torch.Tensor output by the Generator network to a numpy.array. So I need to detach() the tensor, which causes PyTorch to lose all gradient tracking. How can I properly implement an eigenvector centrality term for my loss function? Thanks.
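A minimal sketch of one workaround, assuming the Generator output can be treated as a dense (N, N) adjacency tensor: eigenvector centrality is the dominant eigenvector of the adjacency matrix, so a few steps of power iteration written in plain PyTorch stay inside the autograd graph and avoid detach() entirely. The function name and iteration count below are illustrative assumptions, not from the question.

import torch

def eigenvector_centrality_torch(adj, num_iters=50, eps=1e-8):
    # adj: dense (N, N) adjacency tensor; stays on the autograd graph.
    # Power iteration converges to the dominant eigenvector of adj,
    # which is the (unnormalized) eigenvector centrality.
    x = torch.ones(adj.shape[0], device=adj.device, dtype=adj.dtype)
    for _ in range(num_iters):
        x = adj @ x
        x = x / (x.norm() + eps)
    return x

# Hypothetical usage in the loss, with pred_adj and gt_adj both (N, N):
# centrality_loss = (eigenvector_centrality_torch(pred_adj)
#                    - eigenvector_centrality_torch(gt_adj)).abs().mean()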
I have a (60000, 60000) sparse matrix of cosine distances. Are there any frameworks, or built-in SciPy tools, to perform hierarchical clustering on that data?
In a similar question, the author has an observation matrix rather than a distance matrix.
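A minimal sketch of the standard SciPy route for a precomputed distance matrix. Note that scipy.cluster.hierarchy.linkage needs the condensed (dense, 1-D) form, so at 60000 x 60000 this mainly illustrates the API, not the memory footprint; the toy data is an assumption:

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for the real matrix: a small symmetric cosine-distance
# matrix with a zero diagonal.
X = np.random.default_rng(0).random((200, 5))
dist = squareform(pdist(X, metric="cosine"))

# linkage expects the condensed 1-D form, so convert with squareform.
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=10, criterion="maxclust")  # cut the tree into 10 clusters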
For dimensionality reduction:
In Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), we can visualize the data projected on new reduced dimensions by doing a dot product of the data with the eigenvectors.
How can this be done for Quadratic Discriminant Analysis (QDA)?
I am using sklearn's QDA (https://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.html) and performed a dot product of the samples of each class with the respective scalings obtained for that class.
Is this correct?
If not, then please suggest how to visualize the projected data for QDA.
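For reference, a sketch of one plausible per-class analogue of the PCA/LDA projection, using the fitted estimator's rotations_ (the principal axes of each class covariance) rather than scalings_ (the variances along those axes). The dataset and variable names are assumptions for illustration:

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # illustrative dataset, not from the question
qda = QuadraticDiscriminantAnalysis().fit(X, y)

# rotations_[k] holds the principal axes (eigenvectors) of class k's
# covariance; scalings_[k] holds the variances along those axes.
for k, cls in enumerate(qda.classes_):
    Xk = X[y == cls]
    proj = (Xk - qda.means_[k]) @ qda.rotations_[k][:, :2]  # first two axes
    plt.scatter(proj[:, 0], proj[:, 1], label=f"class {cls}")
plt.legend()
plt.show()

Unlike PCA or LDA, QDA fits a separate basis per class, so each class is plotted in its own coordinate system; there is no single global projection for QDA.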
I have a tensor of ground-truth 3D points, G = [18000 x 3], and an output from my network of the same size, O = [18000 x 3].
I need to compute a loss that takes the Euclidean distance (the square root of the squared distance) between each pair of corresponding 3D points, summed over all keypoints and normalized by 18000. How do I write this efficiently?
Just write the expression you propose using the vectorized operations provided by PyTorch. In this case:
loss = (O - G).pow(2).sum(dim=1).sqrt().mean()
Check out pow, sum, sqrt and mean.
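For completeness, a self-contained version of the same computation; torch.linalg.norm gives the per-point Euclidean distance in one call. The random tensors are stand-ins for the real O and G:

import torch

O = torch.randn(18000, 3, requires_grad=True)  # stand-in for the network output
G = torch.randn(18000, 3)                      # stand-in for the ground truth

# Per-point Euclidean distance, averaged over all 18000 keypoints.
loss = torch.linalg.norm(O - G, dim=1).mean()
# Equivalent to: (O - G).pow(2).sum(dim=1).sqrt().mean()
loss.backward()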
I need to construct a 4-dimensional PyTorch Tensor where one of the dimensions comes from multiplying a constant sparse matrix with a dense vector. The dense vector, and the resulting 4D Tensor, require gradients to be tracked. Since PyTorch only supports sparse matrices (not higher-dimensional sparse tensors), I can't express the whole thing as a single Tensor-Tensor multiplication, and I think I have to do the matrix-multiplication part of the construction in a loop. In that case, I'd at least like to preallocate the resulting 4D Tensor and let the sparse mm fill in one dimension in a loop.
How do I in that case keep track of the resulting 4D Tensor's gradient requirements? Can I manually attach it into the gradient graph once it's been created?
My current approach is extremely inefficient: essentially building up one dimension at a time in a list, then concatenating with torch.cat.
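A minimal sketch of one way to sidestep the manual bookkeeping: compute each slice with torch.sparse.mm, which is differentiable with respect to its dense operand, and assemble the result with torch.stack, so the 4D Tensor is produced by autograd-tracked ops and never needs to be attached to the graph by hand. All shapes and names are illustrative assumptions:

import torch

# Constant sparse matrix S (m, m) and a dense vector v (m,) needing gradients.
S = torch.eye(4).to_sparse()                  # stand-in for the real sparse matrix
v = torch.randn(4, requires_grad=True)

# One (m,) slice per (i, j, k) index of the other three dimensions;
# torch.sparse.mm needs 2-D dense input, hence the unsqueeze/squeeze.
slices = [torch.sparse.mm(S, (v * float(i + j + k + 1)).unsqueeze(1)).squeeze(1)
          for i in range(2) for j in range(3) for k in range(2)]
out = torch.stack(slices).reshape(2, 3, 2, -1)  # 4D result, gradients intact

out.sum().backward()
print(v.grad is not None)                     # True: gradients flow through stack

In-place index assignment into a preallocated torch.zeros tensor is also recorded by autograd, so preallocation would work too, but torch.stack avoids the question of manual attachment entirely.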