The following code gave the error 'too many indices for tensor of dimension 1', with m, M, g and l all being parameters:
f = [x[r][1],
     ((0.5*g*m*np.sin(2*x[r][2])) - (l*m*((x[r][3])**2)*np.sin(x[r][2]))) / ((m*(np.sin(x[r][2])**2)) + M),
     x[r][3],
     ((g*(M+m)*np.sin(x[r][2])) - (l*m*((x[r][3])**2)*np.sin(x[r][2])*np.cos(x[r][2]))) / (l*((m*(np.sin(x[r][2])**2)) + M))]
y.append(f)
y = torch.tensor(y)
---> 15 y[:,1] = y[:,1] + ((u[:,1])/((m*((np.sin(x[r][2]))**2))+M))
16 y[:,3] = y[:,3] + ((u[:,3]*np.cos(x[r][2]))/(l*((m*((np.sin(x[r][2]))**2))+M)))
17 return
IndexError: too many indices for tensor of dimension 1.
Does anyone know what went wrong in writing this code?
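One note on the message itself: it says y is a 1-D tensor, and a 1-D tensor only accepts a single index, so y[:, 1] is one index too many. A minimal sketch of that rule (illustrative only, not the full snippet):

import torch

# A flat list of scalars becomes a 1-D tensor; two indices fail.
y = torch.tensor([0.1, 0.2, 0.3, 0.4])
# y[:, 1]  -> IndexError: too many indices for tensor of dimension 1

# A list of rows becomes a 2-D tensor, so column indexing works.
y = torch.tensor([[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]])
print(y[:, 1])  # tensor([0.2000, 0.6000])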
In the code snippet below, I am running into some sort of dimension mismatch or matrix singularity issue.
import numpy as np

def eval(x, m):  # note: this shadows the built-in eval
    hess = 6*x + m
    return hess

x = np.ones((1, 5))
m = np.ones((1, 5))
D = np.linalg.inv(eval(x, m))
print(D)
error:
LinAlgError: Last 2 dimensions of the array must be square
I found my issue. The equation under eval was incorrect: as hpaulj mentioned, it was returning a matrix that np.linalg.inv could not handle (here a 1x5 array, which is not even square). With the correct equation it works fine.
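For reference, np.linalg.inv requires the last two dimensions of its argument to be square (and the matrix to be invertible). A minimal sketch, with a made-up square Hessian standing in for the corrected equation:

import numpy as np

def eval_fixed(x):
    # Hypothetical corrected version returning a square (5, 5) Hessian;
    # the original returned shape (1, 5), which np.linalg.inv rejects.
    return 6 * np.diag(x.ravel()) + np.eye(x.size)

x = np.ones((1, 5))
D = np.linalg.inv(eval_fixed(x))  # works: last two dimensions are square
print(D.shape)                    # (5, 5)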
I am going through PyTorch and want to create a random tensor of shape 5x3 in the interval [3, 7).
torch.rand(5, 3) will return a random tensor of shape 5x3; however, I could not figure out how to set the given interval.
Please guide.
You can map U ~ [0, 1) to U ~ [a, b) with u -> (b - a)*u + a:
(b - a)*torch.rand(5, 3) + a
Define the minimum and maximum values and use the code below:

import torch

low, high = 3, 7  # renamed from min/max to avoid shadowing the built-ins
rand_tensor = (high - low) * torch.rand((5, 3)) + low
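A quick sanity check (my addition, not part of the original answer) that the values land in [3, 7):

import torch

low, high = 3, 7
t = (high - low) * torch.rand((5, 3)) + low
assert t.min() >= low and t.max() < high  # torch.rand samples from [0, 1)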
This piece of code was originally written in numpy and I'm trying to utilise GPU computation by rewriting it in PyTorch, but as I'm new to PyTorch I've run into a lot of problems. Firstly, I'm confused by the dimensions of the tensors: sometimes, after operating on them, only transposing the tensor fixes the problem; is there any way I can stop doing .t()? The major problem is that in the line ar = torch.stack ... the error "TypeError: 'Tensor' object is not callable" occurs. Any suggestion/correction would be appreciated. Thanks!
def vec_datastr(vector):
    vector = vector.float()
    # Find the indices corresponding to non-zero entries
    index = torch.nonzero(vector)
    index = index.t()
    # Compute probability
    prob = vector ** 2
    if torch.sum(prob) == 0:
        prob = 0
    else:
        prob = prob / torch.sum(prob)
    d = depth(vector)  # depth() is a helper defined elsewhere in my code
    CumProb = torch.ones((2**d - len(prob.t()), 1), device='cuda')
    cp = torch.cumsum(prob, dim=0)
    cp = cp.reshape((len(cp.t()), 1))
    CumProb = torch.cat((cp, CumProb), 0)
    vector = vector.t()
    prob = prob.t()
    ar = torch.stack((index, vector([index, 1]), prob([index, 1]), CumProb([index, 1])))  # Problems occur here
    ar = ar.reshape((len(index), 4))
    # Store the data as a dictionary of four columns
    output = {'index': ar[:, 0], 'value': ar[:, 1], 'prob': ar[:, 2], 'CumProb': ar[:, 3]}
    return output
ar = torch.stack(
    (index, vector([index, 1]), prob([index, 1]), CumProb([index, 1]))
)  # Problems occur here
vector is of type torch.Tensor, which has no __call__ defined. You are calling it, vector([index, 1]), when you should be indexing it directly: vector[index, 1]. The same goes for prob and CumProb.
You do it correctly for ar with ar[:, 0], so the parentheses might just be a typo.
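A minimal sketch of the difference (illustrative only):

import torch

t = torch.arange(12.0).reshape(3, 4)
print(t[1, 2])  # square brackets index the tensor: tensor(6.)
# t(1, 2)       # parentheses try to *call* the tensor:
#               # TypeError: 'Tensor' object is not callable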
Suppose I've got three tensors T, r, E of shape (n, 1) and I want to implement a function that calculates the sum over all pairs (i, j) of (T[j] < T[i]) * (r[j] > r[i]) * E[j], where the comparisons act as 0/1 indicators.
How can I proceed?
This is what I've got:
# tensor examples
T = K.constant([1, 4, 5])
r = K.constant([.8, .3, .7])
E = K.constant([1, 0, 1])
# cartesian product of T to compare element-wise
c = tf.stack(tf.meshgrid(T, T, indexing='ij'), axis=-1)
cartesian_T = tf.reshape(c, (-1, 2))
# cartesian product of r to compare element-wise
# (stored under a new name so r itself is not overwritten)
c_r = tf.stack(tf.meshgrid(r, r, indexing='ij'), axis=-1)
cartesian_r = tf.reshape(c_r, (-1, 2))
# TO DO:
# compare the two columns of cartesian_T and cast to 1/0 depending on
# whether the second column is less than the first => return tensor
# compare the two columns of cartesian_r and cast to 1/0 depending on
# whether the second column is greater than the first => return tensor
# multiply the previous tensors by a broadcast version of E, then K.sum()
Do you think I'm on the right track? What would you suggest to implement this?
Try:

import keras.backend as K
import tensorflow as tf

T = K.constant([1, 4, 5])
r = K.constant([.8, .3, .7])
E = K.constant([1, 0, 1])

T_new = tf.less(T[tf.newaxis, :], T[:, tf.newaxis])     # T[j] < T[i]
r_new = tf.greater(r[tf.newaxis, :], r[:, tf.newaxis])  # r[j] > r[i]
E_row, _ = tf.meshgrid(E, E)                            # E_row[i, j] = E[j]
result = tf.reduce_sum(tf.boolean_mask(E_row, tf.logical_and(T_new, r_new)))

with tf.Session() as sess:
    print(sess.run(result))  # prints 2.0
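As a sanity check (my addition, not part of the original answer), the same double sum written as plain loops over numpy arrays gives the same value:

import numpy as np

T = np.array([1, 4, 5])
r = np.array([.8, .3, .7])
E = np.array([1, 0, 1])

total = sum(E[j]
            for i in range(len(T))
            for j in range(len(T))
            if T[j] < T[i] and r[j] > r[i])
print(total)  # 2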
While attempting to write a cost function for linear regression, an error arises after replacing np.power with ** in cost_function:
Original cost function:
def cost_function(x, y, theta):
    m = np.size(y)
    j = (1/(2*m)) * np.sum(np.power(np.matmul(x, theta) - y, 2))
    return j
Cost function giving the error:
def cost_function(x, y, theta):
    m = np.size(y)
    j = (1/(2*m)) * np.sum((np.matmul(x, theta) - y)**2)
    return j
Gradient Descent
def gradient_descent(x, y, theta, learn_rate, iters):
    x = np.mat(x); y = np.mat(y); theta = np.mat(theta)
    m = np.size(y)
    j_hist = np.zeros(iters)
    for i in range(0, iters):
        temp = theta - (learn_rate/m) * (x.T * (x*theta - y))
        theta = temp
        j_hist[i] = cost_function(x, y, theta)
    return theta, j_hist
Variable values:
theta = np.zeros((2, 1))
learn_rate = 0.01
iters = 1000
x is a (97, 2) matrix and y is a (97, 1) matrix.
The cost function on its own is calculated fine, with a value of 32.0727. The error arises when the same function is used inside gradient descent.
The error I am getting is LinAlgError: Last 2 dimensions of the array must be square.
First, let's distinguish between pow, ** and np.power. pow is the built-in Python function, which according to the docs is equivalent to ** when used with two arguments.
Second, you apply np.mat to the arrays, making np.matrix objects. According to its docs:
It has certain special operators, such as *
(matrix multiplication) and ** (matrix power).
Matrix power:
In [475]: np.mat([[1,2],[3,4]])**2
Out[475]:
matrix([[ 7, 10],
[15, 22]])
Elementwise square:
In [476]: np.array([[1,2],[3,4]])**2
Out[476]:
array([[ 1, 4],
[ 9, 16]])
In [477]: np.power(np.mat([[1,2],[3,4]]),2)
Out[477]:
matrix([[ 1, 4],
[ 9, 16]])
The same matrix power computed on an ndarray with @:
In [478]: arr = np.array([[1,2],[3,4]])
In [479]: arr@arr   # np.matmul
Out[479]:
array([[ 7, 10],
[15, 22]])
With a non-square matrix:
In [480]: np.power(np.mat([[1,2]]),2)
Out[480]: matrix([[1, 4]]) # elementwise
Attempting to do matrix_power on a non-square matrix:
In [481]: np.mat([[1,2]])**2
---------------------------------------------------------------------------
LinAlgError Traceback (most recent call last)
<ipython-input-481-18e19d5a9d6c> in <module>()
----> 1 np.mat([[1,2]])**2
/usr/local/lib/python3.6/dist-packages/numpy/matrixlib/defmatrix.py in __pow__(self, other)
226
227 def __pow__(self, other):
--> 228 return matrix_power(self, other)
229
230 def __ipow__(self, other):
/usr/local/lib/python3.6/dist-packages/numpy/linalg/linalg.py in matrix_power(a, n)
600 a = asanyarray(a)
601 _assertRankAtLeast2(a)
--> 602 _assertNdSquareness(a)
603
604 try:
/usr/local/lib/python3.6/dist-packages/numpy/linalg/linalg.py in _assertNdSquareness(*arrays)
213 m, n = a.shape[-2:]
214 if m != n:
--> 215 raise LinAlgError('Last 2 dimensions of the array must be square')
216
217 def _assertFinite(*arrays):
LinAlgError: Last 2 dimensions of the array must be square
Note that the whole traceback lists matrix_power. That's why we often ask to see the whole traceback.
Why are you setting x, y and theta to np.mat? The cost_function uses matmul. With that function, and its @ operator, there are few(er) good reasons for using np.matrix.
Despite the subject line, you did not actually try to use pow. That confused me and at least one other commenter; I went looking for an np.pow or a scipy version.
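To make the fix concrete, here is a minimal sketch (my rewrite, not from the original answer) of the same gradient descent using plain ndarrays and the @ operator, so ** stays an element-wise square and the LinAlgError never occurs:

import numpy as np

def cost_function(x, y, theta):
    m = y.size
    return (1 / (2 * m)) * np.sum((x @ theta - y) ** 2)  # elementwise square

def gradient_descent(x, y, theta, learn_rate, iters):
    m = y.size
    j_hist = np.zeros(iters)
    for i in range(iters):
        theta = theta - (learn_rate / m) * (x.T @ (x @ theta - y))
        j_hist[i] = cost_function(x, y, theta)
    return theta, j_hist

# Shapes matching the question: x is (97, 2), y is (97, 1); the data is made up.
rng = np.random.default_rng(0)
x = np.hstack([np.ones((97, 1)), rng.random((97, 1))])
y = rng.random((97, 1))
theta, j_hist = gradient_descent(x, y, np.zeros((2, 1)), 0.01, 1000)
print(theta.ravel(), j_hist[-1])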