I have 2 tensors:
A of shape [146, 33, 559]
B of shape [146, 33]
B contains integers in [0, 559), which serve as indices into the last dimension of A.
What I'm after is a tensor C of shape [146, 33] where C[i, j] = A[i, j, B[i, j]].
I tried tf.gather_nd(A, B) which gives me the error
InvalidArgumentError (see above for traceback): index innermost dimension length must be <= params rank; saw: 33 vs. 3
[[Node: GatherNd = GatherNd[Tindices=DT_INT64, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_34, _recv_contexts_1_0)]]
I also tried tf.gather(A, B) which gives me the error
InvalidArgumentError (see above for traceback): indices[1,0] = 282 is not in [0, 146)
[[Node: Gather = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_34, _recv_contexts_1_0)]]
Any idea how to resolve this?
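One approach that seems to do what is described above (a sketch, assuming every entry of B is a valid index into the last dimension of A): B only supplies the last coordinate, so build full [i, j, B[i, j]] index triples and hand those to tf.gather_nd.

import tensorflow as tf
import numpy as np

# Toy stand-ins for the real tensors
A = tf.constant(np.random.rand(146, 33, 559), dtype=tf.float32)              # [146, 33, 559]
B = tf.constant(np.random.randint(0, 559, size=(146, 33)), dtype=tf.int64)   # [146, 33]

# Index tensor of shape [146, 33, 3] whose last axis holds [i, j, B[i, j]]
ii, jj = tf.meshgrid(tf.range(146, dtype=tf.int64),
                     tf.range(33, dtype=tf.int64), indexing='ij')
idx = tf.stack([ii, jj, B], axis=-1)

C = tf.gather_nd(A, idx)  # shape [146, 33], C[i, j] == A[i, j, B[i, j]]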
I'm new to PyTorch. I'm trying to create a DCGAN project, and I used the entire official PyTorch tutorial as a base.
I have a numpy array that is the combination of eight arrays, which gives it a shape of (60, 60, 8); this shape is special.
import numpy as np

# `intesity` is the source array from which the 60x60 blocks are cut
lista2 = [0, 60, 120, 180, 240, 300, 360, 420]
total = []
for i in lista2:
    N1 = intesity[0:60, i:i+60]
    total.append(N1)
    N2 = intesity[60:120, i:i+60]
    total.append(N2)
    N3 = intesity[120:180, i:i+60]
    total.append(N3)
    N4 = intesity[180:240, i:i+60]
    total.append(N4)
    N5 = intesity[240:300, i:i+60]
    total.append(N5)
    N6 = intesity[300:360, i:i+60]
    total.append(N6)
    N7 = intesity[360:420, i:i+60]
    total.append(N7)
    N8 = intesity[420:480, i:i+60]
    total.append(N8)

total = np.reshape(total, (64, 60, 60, 8))
total -= total.min()
total /= total.max()
total = np.asarray(total)
print(np.shape(total))
(64, 60, 60, 8)
As you can see, there are 64 elements in that array, one per training image (very few for now). This array is converted to a tensor and then to a PyTorch dataset:
tensor_c = torch.tensor(total)
After creating a dataset and a dataloader, I get the following error when trying to plot the training images of this DCGAN:
dataset = TensorDataset(tensor_c)  # create your dataset
dataloader = DataLoader(dataset)   # create your dataloader
real_batch = next(iter(dataloader))
plt.figure(figsize=(16,16))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=0, normalize=True).cpu(),(1,2,0)))
dataset_size = len(dataloader.dataset)
dataset_size
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-5ba2d666ef25> in <module>()
10 plt.axis("off")
11 plt.title("Training Images")
---> 12 plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=0, normalize=True).cpu(),(1,2,0)))
13 dataset_size = len(dataloader.dataset)
14 dataset_size
5 frames
/usr/local/lib/python3.7/dist-packages/matplotlib/image.py in set_data(self, A)
697 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
698 raise TypeError("Invalid shape {} for image data"
--> 699 .format(self._A.shape))
700
701 if self._A.ndim == 3:
TypeError: Invalid shape (60, 60, 8) for image data
I am very new to PyTorch; I would like to know how I can solve this problem.
Images are generally expected to be stored as arrays of the form height x width x n_channels, where n_channels is 3 for a standard RGB image or, in some cases, 4 for an RGBA image. matplotlib has no built-in understanding of how to plot an image with 8 channels, which is what your image data currently has.
Also pay attention to the ordering of the dimensions: PyTorch expects images of the form batch x channel x height x width, which is convenient for applying 2D convolutions because they can be strided across the last two dimensions. Be careful to do the conversion to the PyTorch layout only after plotting the images in the matplotlib layout, as in the sketch below.
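A minimal sketch of both points, using a random stand-in for the (64, 60, 60, 8) array built above (matplotlib can only display 1-, 3- or 4-channel images, so a single channel is shown here):

import numpy as np
import matplotlib.pyplot as plt
import torch

total = np.random.rand(64, 60, 60, 8).astype(np.float32)  # stand-in for the real data

# Plot one channel of one sample while the data is still height x width x n_channels
plt.imshow(total[0, :, :, 0], cmap="gray")
plt.axis("off")
plt.show()

# Only then move the channels to dim 1 for PyTorch: batch x channel x height x width
tensor_c = torch.from_numpy(total).permute(0, 3, 1, 2)
print(tensor_c.shape)  # torch.Size([64, 8, 60, 60])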
I have two tensors:
t1 with shape torch.Size([400, 32, 400])
t2 with shape torch.Size([400, 32, 32])
When I execute
torch.matmul(t1, t2)
I get this error:
RuntimeError: Expected tensor to have size 400 at dimension 1, but got size 32 for argument #2 'batch2' (while checking arguments for bmm)
Any help will be much appreciated
You get the error because the order of matrix multiplication is wrong.
It should be:
a = torch.randn(400, 32, 400)
b = torch.randn(400, 32, 32)
out = torch.matmul(b, a) # You performed torch.matmul(a, b)
# You can also do a simpler version of the matrix multiplication using the code below
out = b @ a
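A quick shape check (a sketch) shows why only this order works: for batched matmul, the last dimension of the first operand must match the second-to-last dimension of the second operand.

import torch

a = torch.randn(400, 32, 400)  # a batch of 400 matrices of shape 32 x 400
b = torch.randn(400, 32, 32)   # a batch of 400 matrices of shape 32 x 32

out = torch.matmul(b, a)       # (32 x 32) @ (32 x 400) per batch element
print(out.shape)               # torch.Size([400, 32, 400])

# torch.matmul(a, b) would attempt (32 x 400) @ (32 x 32); 400 != 32, so bmm raises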
I want to construct a multivariate normal model in PyMC3 in which the mean value and precision matrix involve probabilistic variables. h is meant to act as a latent variable in a larger project to which this code snippet belongs.
When I run the code provided below, I get the error message shown, and I'm not sure exactly how to interpret it. As far as I can see, the dimensions of the mean value of the MvNormal (a 2-element column vector) match the dimensions of the precision matrix B (a 2 x 2 matrix), so I don't expect it's the dimensions of these objects that are causing the problem. I don't know what other variables could be causing a dimension-related error to be thrown, though. Can anyone shed some light on this, please?
Here is the code:
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    # A matrix
    a1 = pm.Uniform('a1', 0., 1.)
    a2 = pm.Uniform('a2', 0., 1.)
    ix = ([0, 0, 1, 1], [0, 1, 0, 1])
    A = tt.eye(2)
    A = tt.set_subtensor(A[ix], [a1, a2, 1, 0])

    # B matrix
    b1 = pm.Uniform('b1', 0., 1.)
    b2 = pm.Uniform('b2', 0., 1.)
    ix = ([0, 1], [0, 1])
    B = tt.eye(2)
    B = tt.set_subtensor(B[ix], [b1 ** 2, b2 ** 2])

    # Model
    y0 = pm.Normal('y0', mu=0., sd=1., observed=0)
    y1 = pm.Normal('y1', mu=1., sd=1., observed=1)
    s_v = tt.stack([y1, y0]).T
    h = pm.MvNormal("h", mu=pm.math.dot(A, s_v), tau=B)
Error message:
h = pm.MvNormal("h", mu=pm.math.dot(A, s_v), tau=B)
File "/Users/Joel/PycharmProjects/AR(2)/venv/lib/python3.6/site-packages/pymc3/distributions/distribution.py", line 42, in __new__
return model.Var(name, dist, data, total_size)
File "/Users/Joel/PycharmProjects/AR(2)/venv/lib/python3.6/site-packages/pymc3/model.py", line 809, in Var
total_size=total_size, model=self)
File "/Users/Joel/PycharmProjects/AR(2)/venv/lib/python3.6/site-packages/pymc3/model.py", line 1209, in __init__
self.logp_elemwiset = distribution.logp(self)
File "/Users/Joel/PycharmProjects/AR(2)/venv/lib/python3.6/site-packages/pymc3/distributions/multivariate.py", line 274, in logp
quaddist, logdet, ok = self._quaddist(value)
File "/Users/Joel/PycharmProjects/AR(2)/venv/lib/python3.6/site-packages/pymc3/distributions/multivariate.py", line 85, in _quaddist
raise ValueError('Invalid dimension for value: %s' % value.ndim)
ValueError: Invalid dimension for value: 0
I believe that you are missing the "shape" argument in the pm.MvNormal call, which lets it handle the right size of values. For example, if you have 7 variables, set shape=7.
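For a latent vector with two components, as the 2x2 matrices A and B above imply, a minimal sketch looks like this (the mu and tau below are simple stand-ins for the expressions in the question):

import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    mu = tt.stack([0., 1.])   # stand-in for pm.math.dot(A, s_v)
    tau = tt.eye(2)           # stand-in for the 2x2 precision matrix B
    h = pm.MvNormal("h", mu=mu, tau=tau, shape=2)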
I have created an OLS linear regression model in Python, and when I predict for a particular value I get the error below.
My code is below:
import pandas as pd
import numpy as np
import statsmodels.api as sm

df = pd.read_csv("smatrix.csv", index_col=0)
x = df.iloc[:, :-1]
y = df.Rating
est = sm.OLS(y.astype(float), x.astype(float))
results = est.fit()

op = list()
for i in df.columns:
    if 'bad' == i:
        op.append(1)
    else:
        op.append(0)
op = op[:-1]
X5 = np.array(op).reshape(1, -1)
y1 = est.predict(X5)
The error that I am getting is
ValueError: shapes (993,228) and (1,228) not aligned: 228 (dim 1) != 1 (dim 0)
The shape of X5 is (1, 228)
The shape of x is (993, 228)
The shape of y is (993,)
est.predict() expects the first argument to be params (more here), but you are passing X5, which has shape (1, 228). The error message is thrown when the model tries to multiply X with params (in this case X5).
X -> (993, 228)
params -> (1, 228)
These two matrices (X, params) cannot be multiplied, because the number of columns of X (228) does not align with the number of rows of params (1).
Solution
Use the parameters learned with the fit method:
y1=est.predict(results.params, X5)
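An equivalent call goes through the fitted results object, which already carries the learned parameters (assuming the columns of X5 are in the same order as the columns of x used for fitting):

y1 = results.predict(X5)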
I have used tf.extract_image_patches() to get a tensor of overlapping patches from the image, as described in this link. The answer in the mentioned link suggests using tf.space_to_depth() to reconstruct the image from overlapping patches. But the problem is that this does not give the desired results in my case, and upon researching I came to know that tf.space_to_depth() does not deal with overlapping blocks. My code looks like:
import tensorflow as tf
import numpy as np

c = 3
height = 3900
width = 6000
ksizes = [1, 150, 150, 1]
strides = [1, 75, 75, 1]

image = ...  # image of shape [1, height, width, 3]

patches = tf.extract_image_patches(image, ksizes=ksizes, strides=strides,
                                   rates=[1, 1, 1, 1], padding='VALID')
patches = tf.reshape(patches, [-1, 150, 150, 3])
reconstructed = tf.reshape(patches, [1, height, width, 3])
rec_new = tf.space_to_depth(reconstructed, 75)
rec_new = tf.reshape(rec_new, [height, width, 3])
This gives me the error:
InvalidArgumentError                      Traceback (most recent call last)
D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
    653         graph_def_version, node_def_str, input_shapes, input_tensors,
--> 654         input_tensors_as_shapes, status)
    655   except errors.InvalidArgumentError as err:

D:\AnacondaIDE\lib\contextlib.py in __exit__(self, type, value, traceback)
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467     finally:

InvalidArgumentError: Dimension size must be evenly divisible by 70200000 but is 271957500 for 'Reshape_22' (op: 'Reshape') with input shapes: [4029,150,150,3], [4] and with input tensors computed as partial shapes: input1 = [?,3900,6000,3].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
in <module>()
----> 1 reconstructed = tf.reshape(features, [-1, height, width, channel])
      2 rec_new = tf.space_to_depth(reconstructed,75)
      3 rec_new = tf.reshape(rec_new,[h,h,c])

D:\AnacondaIDE\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape(tensor, shape, name)
   2617   """
   2618   result = _op_def_lib.apply_op("Reshape", tensor=tensor, shape=shape,
-> 2619                                 name=name)
   2620   return result
   2621

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\op_def_library.py in apply_op(self, op_type_name, name, **keywords)
    765         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    766                          input_types=input_types, attrs=attr_protos,
--> 767                          op_def=op_def)
    768         if output_structure:
    769           outputs = op.outputs

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2630                     original_op=self._default_original_op, op_def=op_def)
   2631     if compute_shapes:
-> 2632       set_shapes_for_outputs(ret)
   2633     self._add_op(ret)
   2634     self._record_op_seen_by_control_dependencies(ret)

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\ops.py in set_shapes_for_outputs(op)
   1909     shape_func = _call_cpp_shape_fn_and_require_op
   1910
-> 1911   shapes = shape_func(op)
   1912   if shapes is None:
   1913     raise RuntimeError(

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\ops.py in call_with_requiring(op)
   1859
   1860 def call_with_requiring(op):
-> 1861   return call_cpp_shape_fn(op, require_shape_fn=True)
   1862
   1863 _call_cpp_shape_fn_and_require_op = call_with_requiring

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\common_shapes.py in call_cpp_shape_fn(op, require_shape_fn)
    593     res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
    594                                   input_tensors_as_shapes_needed,
--> 595                                   require_shape_fn)
    596     if not isinstance(res, dict):
    597       # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).

D:\AnacondaIDE\lib\site-packages\tensorflow\python\framework\common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
    657     missing_shape_fn = True
    658   else:
--> 659     raise ValueError(err.message)
    660
    661   if missing_shape_fn:

ValueError: Dimension size must be evenly divisible by 70200000 but is 271957500 for 'Reshape_22' (op: 'Reshape') with input shapes: [4029,150,150,3], [4] and with input tensors computed as partial shapes: input1 = [?,3900,6000,3].
I know this error is due to incompatible dimensions, but it should be that way, right? Please help me solve this.
I guess that the problem is that in the link you posted the author uses the same value for strides and ksizes, while you are using strides equal to one half of ksizes. This is why the dimensions do not match: the 4029 overlapping 150x150 patches contain more pixels in total than the original image. You should write the logic for reducing the size of the patches before gluing them back together (for instance by selecting the central square of each patch), as in the sketch below.
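A rough sketch of that idea, assuming it is acceptable to keep only the central stride x stride square of every patch (the result covers the original image minus a border of (ksize - stride) / 2 pixels, and the offsets are only illustrative):

import tensorflow as tf

c = 3
height, width = 3900, 6000
ksize, stride = 150, 75

image = tf.random_uniform([1, height, width, c])  # stand-in for the real image

patches = tf.extract_image_patches(image, ksizes=[1, ksize, ksize, 1],
                                   strides=[1, stride, stride, 1],
                                   rates=[1, 1, 1, 1], padding='VALID')

n_rows = (height - ksize) // stride + 1   # 51 rows of patches
n_cols = (width - ksize) // stride + 1    # 79 columns of patches
patches = tf.reshape(patches, [n_rows, n_cols, ksize, ksize, c])

# Keep only the central stride x stride square of each patch so the pieces
# no longer overlap.
off = (ksize - stride) // 2               # 37-pixel margin cut from each side
centers = patches[:, :, off:off + stride, off:off + stride, :]

# Tile the non-overlapping squares back together; the reconstruction is the
# original image minus a border of `off` pixels on each side.
reconstructed = tf.reshape(tf.transpose(centers, [0, 2, 1, 3, 4]),
                           [n_rows * stride, n_cols * stride, c])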