How to optimize the following code with minimal use of for loops? - python-3.x

The code is taking too much time to compute and I want to reduce the number of iterations. I tried the method suggested by @Albuquerque in How to optimise the following for loop code?, but here I have a 3D array. Please suggest how to optimize the following code.
import numpy as np

K = 2
D = 3
N = 3
sigma = np.asarray([[1, 2, 3], [4, 5, 6]])
F = np.asarray([[1, 2, 3], [4, 5, 6]])
X = np.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
W = [
    [[1, 2, 3],
     [3, 2, 1]],
    [[1, 1, 19],
     [1, 2, 1]],
    [[2, 2, 2],
     [1, 3, 5]]
]

result2 = np.ones([N, D])
for i in range(N):
    for l in range(D):
        # sum over k of W[i][k][l] * (F[k][l] + sigma[k][l] * X[i][l])
        result2[i][l] = sum(W[i][k][l] * (F[k][l] + sigma[k][l] * X[i][l]) for k in range(K))
Output:
array([[ 26.,  42.,  60.],
       [ 25.,  72., 441.],
       [ 48., 171., 360.]])
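One way to remove the Python loops entirely is to broadcast over the k dimension and reduce it with NumPy. A minimal sketch under the shapes above (W viewed as an (N, K, D) array, sigma and F as (K, D), X as (N, D); result_vec is just an illustrative name):
W_arr = np.asarray(W)                                       # shape (N, K, D)
inner = F[None, :, :] + sigma[None, :, :] * X[:, None, :]   # broadcasts to shape (N, K, D)
result_vec = (W_arr * inner).sum(axis=1)                    # sum over k -> shape (N, D)
# result_vec matches result2 for the sample data above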

Related

Point-convolution in PyTorch

I would like to implement a point-convolution to my input tensor.
Let's say we have:
input_ = torch.randn(1,1,8,1)
input_.shape
Out[70]: torch.Size([1, 1, 8, 1])
input_
Out[71]:
tensor([[[[ 0.7656],
[-0.3400],
[-0.2487],
[ 0.6246],
[ 2.0450],
[-0.9588],
[ 1.2221],
[-1.3164]]]])
where the dimensions represent respectively (batch_size, n_channels, height, width).
Then (keeping batch_size and channel fixed), I would like to apply an nn.Conv1d layer to each entry, basically to expand the number of channels. What I've tried is:
list(torch.unbind(input_,dim=2))
Out[72]:
[tensor([[[0.7656]]]),
tensor([[[-0.3400]]]),
tensor([[[-0.2487]]]),
tensor([[[0.6246]]]),
tensor([[[2.0450]]]),
tensor([[[-0.9588]]]),
tensor([[[1.2221]]]),
tensor([[[-1.3164]]])]
and then apply nn.Conv1d entrywise to these elements. Would that be a correct way to do it?
EDIT:
Should I use nn.Conv1d directly on my original input_ by doing
conv = nn.Conv1d(in_channels=1, out_channels=3, kernel_size=(1,1))
conv(input_)
conv(input_)
Out[89]:
tensor([[[[-0.1481],
[ 1.5345],
[ 0.3082],
[ 1.8677],
[ 0.7515],
[ 1.2916],
[ 0.0218],
[ 0.5606]],
[[-1.2975],
[-0.3080],
[-1.0292],
[-0.1121],
[-0.7685],
[-0.4509],
[-1.1976],
[-0.8808]],
[[-0.7169],
[ 0.6493],
[-0.3464],
[ 0.9199],
[ 0.0135],
[ 0.4521],
[-0.5790],
[-0.1415]]]], grad_fn=<ThnnConv2DBackward0>)
conv(input_).shape
Out[90]: torch.Size([1, 3, 8, 1])
I'm not sure, though, whether doing this achieves my original purpose.
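For what it's worth, a point-convolution over the channel dimension is usually written as a 1x1 convolution. A minimal sketch with nn.Conv2d and kernel_size=1, which mixes channels independently at every spatial position and avoids unbinding the tensor:
import torch
import torch.nn as nn

input_ = torch.randn(1, 1, 8, 1)          # (batch_size, n_channels, height, width)
pointwise = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=1)
out = pointwise(input_)                   # each output channel is a learned combination of the input channels
print(out.shape)                          # torch.Size([1, 3, 8, 1])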

why does LedoitWolf return zeros for off-diagonal elements?

import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf

a = np.array([[ 0.6278, -1.1273],
              [ 0.2323,  0.4533],
              [ 0.3234,  1.5356],
              [ 1.7473, -0.3113],
              [-0.3525,  0.2577]])
Empirical_cov = EmpiricalCovariance().fit(a)
Empirical_cov.covariance_
array([[ 0.48009421, -0.23144627],
[-0.23144627, 0.77341954]])
LedoitWolf_cov = LedoitWolf().fit(a)
LedoitWolf_cov.covariance_
array([[ 1., -0.],
[-0., 1.]])
Why is LedoitWolf giving me zeros for the off-diagonal elements? I have a few larger datasets in which this happens.
Is this a bug? The empirical covariance has non-zero off-diagonal entries, so shouldn't the LedoitWolf estimate as well? I read that LedoitWolf is supposed to be slightly more accurate than the empirical covariance, but what is the point if most of the covariance elements turn out to be zero?
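One thing worth checking (a diagnostic sketch, using scikit-learn's documented LedoitWolf attributes): the Ledoit-Wolf estimate is the convex combination (1 - shrinkage) * empirical_cov + shrinkage * (trace(empirical_cov) / n_features) * I, so a shrinkage coefficient close to 1 pushes the off-diagonal entries toward zero. You can inspect the fitted coefficient directly:
lw = LedoitWolf().fit(a)
print(lw.shrinkage_)     # shrinkage coefficient in [0, 1]; values near 1 wash out off-diagonal terms
print(lw.covariance_)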

Extract sub tensor in PyTorch

For this tensor in PyTorch:
tensor([[ 0.7646, 0.5573, 0.4000, 0.2188, 0.7646, 0.5052, 0.2042, 0.0896,
0.7667, 0.5938, 0.3167, 0.0917],
[ 0.4271, 0.1354, 0.5000, 0.1292, 0.4260, 0.1354, 0.4646, 0.0917,
-1.0000, -1.0000, -1.0000, -1.0000],
[ 0.7208, 0.5656, 0.3000, 0.1688, 0.7177, 0.5271, 0.1521, 0.0667,
0.7198, 0.5948, 0.2438, 0.0729],
[ 0.6292, 0.8250, 0.4000, 0.2292, 0.6271, 0.7698, 0.2083, 0.0812,
0.6281, 0.8604, 0.3604, 0.0917]], device='cuda:0')
How can I extract the values
0.7646, 0.5573, 0.4000, 0.2188
0.4271, 0.1354, 0.5000, 0.1292
into a new tensor? In other words, how do I get the first 4 elements of the first two rows into a new tensor?
The question was actually answered by @zihaozhihao in the comments, but in case you are wondering where that answer comes from, it helps to structure your tensor like this:
x = torch.Tensor([
[ 0.7646, 0.5573, 0.4000, 0.2188, 0.7646, 0.5052, 0.2042, 0.0896, 0.7667, 0.5938, 0.3167, 0.0917],
[ 0.4271, 0.1354, 0.5000, 0.1292, 0.4260, 0.1354, 0.4646, 0.0917, -1.0000, -1.0000, -1.0000, -1.0000],
[ 0.7208, 0.5656, 0.3000, 0.1688, 0.7177, 0.5271, 0.1521, 0.0667, 0.7198, 0.5948, 0.2438, 0.0729],
[ 0.6292, 0.8250, 0.4000, 0.2292, 0.6271, 0.7698, 0.2083, 0.0812, 0.6281, 0.8604, 0.3604, 0.0917]
])
Now it is clearer that you have a tensor of shape (4, 12); you can think of it like a spreadsheet with 4 rows and 12 columns. What you want is to extract the first 4 columns of the first two rows, so the solution is:
x[:2, :4]  # rows up to (but not including) index 2, columns up to (but not including) index 4; x[0:2, 0:4] gives the same result
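For the tensor above, this slice yields exactly the two requested rows:
x[:2, :4]
# tensor([[0.7646, 0.5573, 0.4000, 0.2188],
#         [0.4271, 0.1354, 0.5000, 0.1292]])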

Trouble creating 3D rotation matrix in Pytorch - ValueError: only one element tensors can be converted to Python scalars

I am trying to create 3D rotation matrices in pytorch as seen on the first page of this pdf, but I am encountering some problems. Here is my code so far:
zero = torch.from_numpy(np.zeros(len(cos)))
one = torch.from_numpy(np.ones(len(cos)))
R_transpose = torch.tensor([cos, -sin, zero, sin, cos, zero, zero, zero, one]).reshape(-1, 3, 3)
The cos and sin are tensors that look like this:
tensor([[[1.]],
[[1.]],
[[1.]],
[[1.]],
[[1.]]], dtype=torch.float64)
My goal is to create x rotation matrices (e.g. four matrices with the cos values shown above).
The code I currently have results in a "ValueError: only one element tensors can be converted to Python scalars"
How should I change my code to achieve my goal?
Why don't you use assignment to create R_transpose?
import numpy as np
import torch

# define rotation angles (radians) using numpy
th_np = np.array([np.pi*0.25, np.pi/6, np.pi*0.5, np.pi/3.], dtype=np.float32)
# convert to pytorch
th_t = torch.from_numpy(th_np)
# init to zeros
R_transpose = torch.zeros(th_t.numel(), 3, 3, dtype=torch.float)
# assign the values:
R_transpose[:, 2, 2] = 1.
R_transpose[:, [0, 1], [0, 1]] = th_t[:, None].cos()
R_transpose[:, 0, 1] = -th_t.sin()
R_transpose[:, 1, 0] = th_t.sin()
Resulting in:
tensor([[[ 7.0711e-01, -7.0711e-01, 0.0000e+00],
[ 7.0711e-01, 7.0711e-01, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 1.0000e+00]],
[[ 8.6603e-01, -5.0000e-01, 0.0000e+00],
[ 5.0000e-01, 8.6603e-01, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 1.0000e+00]],
[[-4.3711e-08, -1.0000e+00, 0.0000e+00],
[ 1.0000e+00, -4.3711e-08, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 1.0000e+00]],
[[ 5.0000e-01, -8.6603e-01, 0.0000e+00],
[ 8.6603e-01, 5.0000e-01, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 1.0000e+00]]])
Note that we assigned all angles at once, so this solution is applicable to any number of angles you may have.
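If you would rather keep the cat-and-reshape layout from the question, here is a minimal sketch (the angles are hypothetical). The original ValueError arises because torch.tensor() tries to convert each multi-element tensor in the list to a Python scalar, whereas torch.cat simply concatenates them:
import torch

theta = torch.rand(5, dtype=torch.float64)   # hypothetical angles
cos = theta.cos().reshape(-1, 1, 1)          # shape (N, 1, 1), as in the question
sin = theta.sin().reshape(-1, 1, 1)
zero = torch.zeros_like(cos)
one = torch.ones_like(cos)

# concatenate the 9 entries of each matrix along the last dim, then reshape to (N, 3, 3)
R_transpose = torch.cat([cos, -sin, zero,
                         sin,  cos, zero,
                         zero, zero, one], dim=-1).reshape(-1, 3, 3)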

Keras 'Error when checking input' when trying to predict multiple values

I have a net with a length-4 input vector and a length-2 output vector. I am trying to predict on multiple inputs simultaneously. If I just want to predict on one, I do the following and it works:
import numpy

x = numpy.array([[1, 2, 3, 4]])   # "in" is a reserved word in Python, so the input is renamed to x
self.model.predict(x)
# prediction = [[1, 2]]
However, when I try to pass in multiple inputs I get ValueError: Error when checking input: expected dense_1_input to have shape (4,) but got array with shape (1,)
x = numpy.array([[1, 2, 3, 4],
                 [1, 2, 3, 4]])
# OR
x = numpy.array([[[1, 2, 3, 4]],
                 [[1, 2, 3, 4]]])
self.model.predict(x)
#ERR
What am I doing wrong?
Edit:
Code:
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(24, input_dim=4, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(4, activation='linear'))
model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate))

print(batch_arr[:, 3][0])
predictions = self.model.predict(batch_arr[:, 3][0])
print(predictions)
print(batch_arr[:, 3])
predictions = model.predict(batch_arr[:, 3])
Output:
[[-0.00441936 -0.20398824 -0.08134908 0.09739554]]
[[ 0.01860509 -0.01136071]]
[array([[-0.00441936, -0.20398824, -0.08134908, 0.09739554]])
array([[-0.00517939, 0.38975933, -0.11951023, -0.9718224 ]])
array([[0.00272119, 0.0025476 , 0.002645 , 0.03973542]])
array([[-0.00421809, -0.01006362, -0.07795483, -0.16971247]])
array([[-0.00904593, 0.19332681, -0.10655871, -0.64757587]])
array([[ 0.00654432, 0.00347247, -0.15332555, -0.47302148]])
array([[-0.01921821, -0.17354519, -0.20207744, -0.58569029]])
array([[ 0.00661377, 0.20038962, -0.16278598, -0.80983334]])
array([[-0.00348096, 0.18171964, -0.07072813, -0.38913168]])
array([[-0.01268919, -0.00548544, -0.08286095, -0.27108632]])
array([[ 0.01077598, -0.19254374, -0.004982 , 0.33175341]])
array([[-4.37101750e-04, -5.68196965e-01, -1.99532537e-01,
1.10581883e-01]])
array([[ 0.00657382, -0.19263146, -0.00402872, 0.33368607]])
array([[ 0.00677398, 0.19760551, -0.00076944, -0.25153403]])
array([[ 0.00261579, 0.19642629, -0.13894668, -0.71894379]])
array([[-0.0221003 , 0.37477368, -0.03765055, -0.63564477]])
array([[-0.0110009 , 0.37599703, -0.0574645 , -0.66318148]])
array([[ 0.00277214, 0.19763152, 0.00343971, -0.25211181]])
array([[-9.31810654e-05, -2.06245307e-01, -8.09019674e-02,
1.47356796e-01]])
array([[ 0.00709025, -0.37636771, -0.19725323, -0.11396513]])
array([[ 0.00015344, -0.01233088, -0.07851076, -0.11956039]])
array([[ 0.01077811, -0.18439307, -0.19043179, -0.34107231]])
array([[-0.01460483, 0.18019651, -0.05036345, -0.35505252]])
array([[-0.0127989 , 0.19071515, -0.08828268, -0.58871071]])
array([[ 0.01072609, 0.00249456, -0.00580012, 0.0409061 ]])
array([[ 0.01062156, 0.00782762, -0.17898265, -0.57245695]])
array([[-0.01180104, -0.37085843, -0.1973209 , -0.23782701]])
array([[-0.00849912, -0.00780031, -0.07940117, -0.21980343]])
array([[ 0.00672477, 0.00246062, -0.00160252, 0.04165408]])
array([[-0.02268911, -0.36534914, -0.21379125, -0.36284594]])
array([[-0.00865513, -0.20170279, -0.08379724, 0.0468145 ]])
array([[-0.0256848 , 0.17922475, -0.03098346, -0.33335449]])]
#ERR
Edit: When I print the shape of batch_arr[:,3] I get (32,), not (32,4) as I expected. So I'm guessing the numpy array does not know the shape of its inner arrays. Is there an easy way to fix that? It might be the root of the problem.
The issue was the way I had created my numpy array. I built it from rows of variable size, so numpy didn't know it was shaped (32, 4), only that it was (32,). Reformulating the logic so the array always has a fixed width from the beginning made it a (32, 4) array, which allowed the prediction to work.
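For reference, a minimal sketch of one way to flatten such an object array without reworking the upstream logic (assuming batch_arr[:, 3] holds 32 arrays of shape (1, 4), as printed above): stack the rows into a proper (32, 4) array before calling predict.
import numpy as np

batch = np.vstack(batch_arr[:, 3])    # stacks the 32 (1, 4) arrays into a (32, 4) array
predictions = model.predict(batch)    # now matches the expected input shape (4,)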
