Assert batch not in keys and ptr not in keys - PyTorch

I'm running PyTorch Geometric (1.7.2) wrapped with PyTorch Lightning, and I get the assertion error above. I can't copy the whole stack trace, but the error is thrown by
return self.collate_fn(data)
I saw an article about the same error, but it did not help. I would appreciate any tips on how to fix this.

I suspect that your data objects carry some unwanted attributes, batch or ptr, which PyTorch Geometric adds when it collates graphs into a mini-batch; the collate function then refuses to batch them again. To check, print the data objects directly.
To fix the error, simply clear the batch and ptr attributes:
for g in data:
    g.batch = None
    g.ptr = None
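For context, a minimal sketch of how this fix fits into a loading pipeline; the dataset name and batch size are illustrative, and the DataLoader import path assumes PyTorch Geometric 1.7.x:
from torch_geometric.data import DataLoader  # in PyG 1.7.x the loader lives under torch_geometric.data

# dataset: a list of torch_geometric.data.Data objects (illustrative name)
for g in dataset:
    print(g)          # a graph that was already collated will show 'batch' and 'ptr' among its keys
    g.batch = None    # clear leftover batching attributes
    g.ptr = None

loader = DataLoader(dataset, batch_size=32)  # collation can now rebuild batch/ptr itself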

Related

Inexplicable ValueError due to encoding of file in Python

I have a file with saved modelling data like this:
1
0
1
I iterate over the file using readline, which correctly reads each line as a zero or a one.
The error happens when I cast the string to 1 or 0: I get a ValueError. My code is this:
st1 = myfile.readline()
try:
    line = float(st1.strip(), base=10)
    if line == 0: zeros += 1
    if line == 1 or line == 0: ones += 1
except ValueError:
I am getting a ValueError on every line and I have no clue what is causing it. I do not want to load the whole file into a variable, as it is quite large. I tried changing 1 to 100 and still get the error. Though it looks like a trivial error, I am unable to fix it. I searched online forums and Stack Overflow with no luck. I don't want NumPy; please don't give a solution using array operations on NumPy. Can you please help?
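For reference, a minimal sketch of a line-by-line parse without NumPy. Note that float() does not accept a base argument (only int() does), and a byte-order mark or a blank line can also make the cast fail; the file name and counter names below are illustrative:
zeros = ones = 0
# utf-8-sig strips a BOM if the file has one, a common cause of a ValueError on the first line
with open("modelling_data.txt", encoding="utf-8-sig") as myfile:
    for st1 in myfile:              # iterate lazily; the whole file is never loaded at once
        st1 = st1.strip()
        if not st1:                 # skip blank lines
            continue
        line = int(st1, base=10)    # int() takes base; float() does not
        if line == 0:
            zeros += 1
        elif line == 1:
            ones += 1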

"numpy.ndarray' object has no attribute 'get_support" error message after running SelectKBest in Scikit Learn

I ran into a problem related to this old question: The easiest way for getting feature names after running SelectKBest in Scikit Learn
When trying to use get_support() to get the selected features, I got the error message:
'numpy.ndarray' object has no attribute 'get_support'
I would greatly appreciate your kind help!
Jeff
Without fitting, you cannot get the support. You need to fit first so that the selector can analyze the data, and then call get_support() on the selector itself, not on the output of fit_transform().
Currently you are doing something like:
selector = SelectKBest()
# fit_transform returns the data after selecting the best features
new_data = selector.fit_transform(old_data, labels)
# so you are trying to access get_support() on new_data, which is not possible
new_data.get_support()
After you call fit() or fit_transform(), do this instead:
# get_support() is a method of the SelectKBest class
selector.get_support()
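To make the full flow concrete, here is a minimal sketch on a toy dataset; the dataset, scoring function, and k are illustrative:
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

data = load_iris()
X, y = data.data, data.target

selector = SelectKBest(f_classif, k=2)
X_new = selector.fit_transform(X, y)   # X_new is a plain numpy.ndarray; it has no get_support()

mask = selector.get_support()          # boolean mask over the original columns
selected_names = [name for name, keep in zip(data.feature_names, mask) if keep]
print(selected_names)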
I think I found out why I got the error. I called get_support() on the result of fit() or fit_transform(), which led to the error message.
I should have called get_support() on the selector itself (while still using the selector to do fit() or fit_transform() first).
Thanks!
Jeff

Tflearn's .fit() method with numpy.ndarrays causing TypeError

So I get the error TypeError: unhashable type: 'numpy.ndarray' when executing the code below. I searched through Stack Overflow but haven't found a way to fix my problem. The goal is to classify digits via the MNIST dataset. The error occurs in the modell.fit() method (from tflearn). I can attach the full error message if needed. I also tried the approach where you put the x and y labels in a dictionary and train with that, but it raised another error message. (Note: I excluded my predict function in this code.)
Code:
import tflearn.datasets.mnist as mnist
x, y, X, Y = mnist.load_data(one_hot=True)
x = x.reshape([-1, 28, 28, 1])
X = X.reshape([-1, 28, 28, 1])
import tflearn

class Neural_Network():
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.epochs = 60000

    def main(self):
        cnn = tflearn.layers.core.input_data(shape=[None, 28, 28, 1], name="input_layer")
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.core.flatten(cnn)
        cnn = tflearn.layers.core.fully_connected(cnn, 1000, activation="relu")
        cnn = tflearn.layers.core.dropout(cnn, 0.85)
        cnn = tflearn.layers.core.fully_connected(cnn, 10, activation="softmax")
        cnn = tflearn.layers.estimator.regression(cnn, learning_rate=0.001)
        modell = tflearn.DNN(cnn)
        modell.fit(self.x, self.y)
        modell.save("mnist.modell")

nn = Neural_Network(x, y)
nn.main()
nn.predict(X[1])
print("Label for prediction:", Y[1])
So the problem fixed itself. I just restarted my Jupyter Notebook and everything worked fine, with a few exceptions: 1. I have to restart the kernel every time I want to retrain the net; 2. I get another error when I try to load the saved model, so I can't continue (the error is NotFoundError: Key Conv2D_2/W not found in checkpoint). I will ask another question about that problem. Conclusion: try reloading your Jupyter Notebook if something isn't working well, and if you want to retrain an ANN, restart your kernel.
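A minimal sketch, assuming a TF1-based tflearn setup: clearing the default graph before rebuilding the network often avoids having to restart the kernel to retrain.
import tensorflow as tf
import tflearn

tf.reset_default_graph()   # drop variables and ops left over from the previous run
# then rebuild the layers and call tflearn.DNN(...).fit(...) again as in the code above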

Getting error while running fancyRpart command

Please help me.
I am not able to plot using the fancyRpartPlot command even though I have installed rattle and its dependencies like RGtk2, rpart.plot and rpart.
I am using R version 3.4.2 (2017-09-28) on Windows 10 and getting the following error:
set.seed(123456)
modelFit <- train(classe ~ ., method = "rpart", data = TrainSet)
fancyRpartPlot(modelFit)
Error: the object passed to prp is not an rpart object
In addition: Warning message:
In max(model$frame$yval) : no non-missing arguments to max; returning -Inf
Please do provide complete reproducible examples, otherwise we have to guess.
I think you are using caret::train(). This returns an object of class "train", not the actual final model, but it does encapsulate the model and much more metadata: see ?caret::train.
Try:
fancyRpartPlot(modelFit$finalModel)
A reproducible example:
library(caret)
library(rattle)
modelFit <- train(Species ~ ., method = "rpart", data = iris)
fancyRpartPlot(modelFit$finalModel)

AssertionError when using self-defined nested list in Pyspc

I installed pyspc and ran it in a Jupyter Notebook successfully with the original samples.
But when I tried introducing a self-defined nested list, an error message showed up.
pyspc library: https://github.com/carlosqsilva/pyspc
from pyspc import *
import numpy
abc = [[2,3,4],[4,5.6],[1,4,5],[3,4,4],[4,5,6]]
a = spc(abc) + xbar_rbar() + rules() + rbar()
print(a)
The error message is an AssertionError.
Thank you for advising where this went wrong and how to fix it.
Check your data: you have accidentally used a . instead of a , for the value [4,5.6], the second element of the list, so that inner list has only two values instead of three.
Here is the corrected data
abc=[[2,3,4],[4,5,6],[1,4,5],[3,4,4],[4,5,6]]
Hope this will help.
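For completeness, a minimal sketch of the corrected run, reusing the same pyspc calls as in the question:
from pyspc import *

abc = [[2,3,4],[4,5,6],[1,4,5],[3,4,4],[4,5,6]]   # every subgroup now has three values
a = spc(abc) + xbar_rbar() + rules() + rbar()
print(a)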
