I had been trying to replicate an online tutorial for plotting a confusion matrix but got a recursion error, and the error persists even after raising the recursion limit. The code is below:
log = LogisticRegression()
log.fit(x_train,y_train)
pred_log = log.predict(x_train)
confusion_matrix(y_train,pred_log)
The error I got is:
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-57-4b8fbe47e72d> in <module>
----> 1 (confusion_matrix(y_train,pred_log))
<ipython-input-48-92d5242f8580> in confusion_matrix(test_data, pred_data)
1 def confusion_matrix(test_data,pred_data):
----> 2 c_mat = confusion_matrix(test_data,pred_data)
3 return pd.DataFrame(c_mat)
... last 1 frames repeated, from the frame below ...
<ipython-input-48-92d5242f8580> in confusion_matrix(test_data, pred_data)
1 def confusion_matrix(test_data,pred_data):
----> 2 c_mat = confusion_matrix(test_data,pred_data)
3 return pd.DataFrame(c_mat)
RecursionError: maximum recursion depth exceeded
The shapes of the train and test data are as below:
x_train.shape,y_train.shape,x_test.shape,y_test.shape
# ((712, 7), (712,), (179, 7), (179,))
Tried with: sys.setrecursionlimit(1500)
But still no resolution.
It looks like you are recursively calling the same function: the confusion_matrix you define shadows the one you intend to call, so it keeps calling itself. Try changing the outer function's name from
def confusion_matrix(test_data,pred_data):
    c_mat = confusion_matrix(test_data,pred_data)
    return pd.DataFrame(c_mat)

to
def confusion_matrix_pd_convertor(test_data,pred_data):
    c_mat = confusion_matrix(test_data,pred_data)
    return pd.DataFrame(c_mat)
log = LogisticRegression()
log.fit(x_train,y_train)
pred_log = log.predict(x_train)
confusion_matrix_pd_convertor(y_train,pred_log)
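As a side note (not part of the original answer), another way to avoid the name clash is to keep calling sklearn's function through its module namespace, so the helper can keep any name without recursing into itself. A minimal sketch, assuming sklearn and pandas are installed:

from sklearn import metrics
import pandas as pd

def confusion_matrix_pd_convertor(test_data, pred_data):
    # metrics.confusion_matrix is sklearn's implementation, so a local
    # function name can never shadow it and trigger the recursion.
    c_mat = metrics.confusion_matrix(test_data, pred_data)
    return pd.DataFrame(c_mat)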
I tried the following code on an NVIDIA DGX-2 machine.
import cirq
# Pick a qubit.
qubit = cirq.GridQubit(0, 0)
# Create a circuit
circuit = cirq.Circuit(
    cirq.X(qubit)**0.5,           # Square root of NOT.
    cirq.measure(qubit, key='m')  # Measurement.
)
print("Circuit:")
print(circuit)
# Simulate the circuit several times.
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=20)
print("Results:")
print(result)
But I get this attribute error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_36197/3759634386.py in <module>
2
3 # Pick a qubit.
----> 4 qubit = cirq.GridQubit(0, 0)
5
6 # Create a circuit
AttributeError: module 'cirq' has no attribute 'GridQubit'
Any solution to this issue?
I am trying to read an image using GeneralizedRCNN; the input shape is given as a comment in the code. The problem is that I get an error while tracing the model with this input, at the line trace = torch.jit.trace(model, input_batch):
> "/usr/local/lib/python3.7/dist-packages/torch/tensor.py:467:
> RuntimeWarning: Iterating over a tensor might cause the trace to be
> incorrect. Passing a tensor of different shape won't change the number
> of iterations executed (and might lead to errors or silently give
> incorrect results). 'incorrect results).', category=RuntimeWarning)
> --------------------------------------------------------------------------- IndexError Traceback (most recent call
> last) <ipython-input-25-52ff7ef794de> in <module>()
> 1 #First attempt at tracing
> ----> 2 trace = torch.jit.trace(model, input_batch)
>
> 7 frames
> /usr/local/lib/python3.7/dist-packages/detectron2/modeling/meta_arch/rcnn.py
> in <listcomp>(.0)
> 182 Normalize, pad and batch the input images.
> 183 """
> --> 184 images = [x["image"].to(self.device) for x in batched_inputs]
> 185 images = [(x - self.pixel_mean) / self.pixel_std for x in images]
> 186 images = ImageList.from_tensors(images, self.backbone.size_divisibility)
>
> IndexError: too many indices for tensor of dimension 3
model = build_model(cfg)
model.eval()
# print(model)
input_image = Image.open("model/xxx.jpg")
display(input_image)
to_tensor = transforms.ToTensor()
input_tensor = to_tensor(input_image)
# input_tensor.size = torch.Size([3, 519, 1038])
input_batch = input_tensor.unsqueeze(0)
# input_batch.size = torch.Size([1, 3, 519, 1038])
trace = torch.jit.trace(model, input_batch)
This error occurred because input_batch has shape torch.Size([1, 3, 519, 1038]), i.e. 4 dimensions, while trace = torch.jit.trace(model, input_batch) expected a 3-dimensional input. You don't need input_batch = input_tensor.unsqueeze(0); delete or comment out that line.
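A minimal sketch of the change described above, keeping the rest of the question's setup unchanged:

input_tensor = to_tensor(input_image)
# input_tensor.size = torch.Size([3, 519, 1038])
# No unsqueeze(0): pass the 3-dimensional tensor directly to the tracer.
trace = torch.jit.trace(model, input_tensor)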
By default, torch.jit.trace cannot be used on this model directly, because a detectron2 model does not take a plain tensor (or tuple of tensors) as input. However, detectron2 provides a wrapper, TracingAdapter, which lets the traced model take a tensor or a tuple of tensors as input, so you can trace the model through it.
The code for tracing the Mask RCNN model looks like this:
import torch
import torchvision
from detectron2.export.flatten import TracingAdapter

def inference_func(model, image):
    inputs = [{"image": image}]
    return model.inference(inputs, do_postprocess=False)[0]

print("cfg.MODEL.WEIGHTS: ", cfg.MODEL.WEIGHTS)  ## RETURNS : cfg.MODEL.WEIGHTS: drive/Detectron2/model_final.pth
model = build_model(cfg)
example = torch.rand(1, 3, 224, 224)
wrapper = TracingAdapter(model, example, inference_func)
wrapper.eval()
traced_script_module = torch.jit.trace(wrapper, (example,))
traced_script_module.save("drive/Detectron2/model-final.pt")
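As a follow-up (not part of the original answer), here is a rough sketch of loading the saved trace and running it on a dummy tensor of the same shape used for tracing; the path and shape are taken from the snippet above, so treat them as assumptions:

import torch

loaded = torch.jit.load("drive/Detectron2/model-final.pt")
dummy = torch.rand(1, 3, 224, 224)  # same shape as the tracing example
outputs = loaded(dummy)             # the flattened outputs produced by TracingAdapter
print(outputs)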
Recently I wrote this code for Day 11 of the Hackerrank 30 Days of Code challenge:
arr = [[1,1,1,0,0,0],[0,1,0,0,0,0],[1,1,1,0,0,0],[0,0,2,4,4,0],[0,0,0,2,0,0],[0,0,1,2,4,0]]
f = 0
sumas = []
while f < (len(arr) - 2):
    c = 0
    for i in range(len(arr) - 2):
        sumas.append((sum(arr[0+f][c:3+c]) + arr[1+f][1+c] + sum(arr[2+f][c:3+c])))
        c += 1
    f += 1
print(max(sumas))
It takes subarrays from "arr", sums all the integers in each one, and then takes the maximum of those sums. When I run my code in Spyder it works fine, but I get this error while running it on Hackerrank:
Error (stderr)
Traceback (most recent call last):
File "Solution.py", line 10, in <module>
print(max(sumas))
ValueError: max() arg is an empty sequence
I hope someone can help me.
The elements in the hourglass ("I") shape follow this pattern:
arr[i][j]+arr[i][j+1]+arr[i][j+2]+arr[i+1][j+1]+arr[i+2][j]+arr[i+2][j+1]+arr[i+2][j+2]
so we can traverse the complete array with this:
arr = []  # the 6x6 grid, read row by row from stdin
for _ in range(6):
    arr.append(list(map(int, input().rstrip().split())))

ans = []
for i in range(4):
    for j in range(4):
        su = arr[i][j]+arr[i][j+1]+arr[i][j+2]+arr[i+1][j+1]+arr[i+2][j]+arr[i+2][j+1]+arr[i+2][j+2]
        ans.append(su)
print(max(ans))
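To sanity-check against the hard-coded sample array from the question (without reading stdin), the same double loop can be run directly; for that array the maximum hourglass sum is 19, coming from the hourglass 2 4 4 / 2 / 1 2 4:

arr = [[1,1,1,0,0,0],[0,1,0,0,0,0],[1,1,1,0,0,0],
       [0,0,2,4,4,0],[0,0,0,2,0,0],[0,0,1,2,4,0]]

ans = []
for i in range(4):
    for j in range(4):
        # Sum the 7 cells of the hourglass whose top-left corner is (i, j).
        su = (arr[i][j] + arr[i][j+1] + arr[i][j+2]
              + arr[i+1][j+1]
              + arr[i+2][j] + arr[i+2][j+1] + arr[i+2][j+2])
        ans.append(su)

print(max(ans))  # 19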
Not able to reshape the image in the MNIST dataset using sklearn.
This comes right after the starting portion of my code, which just loads the data:
some_digit = X[880]
some_digit_image = some_digit.reshape(28, 28)
The error:
ValueError Traceback (most recent call last)
<ipython-input-15-4d618bdb57bc> in <module>
1 some_digit = X[880]
----> 2 some_digit_image = some_digit.reshape(28,28)
ValueError: cannot reshape array of size 64 into shape (28,28)
You can only reshape it into an 8 x 8 array, since 8 x 8 = 64.
try:
some_digit = X[880]
some_digit_image = some_digit.reshape(8, 8)
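If the goal was the original 28 x 28 MNIST images, a sample of size 64 suggests that scikit-learn's built-in 8 x 8 digits dataset was loaded instead. As an assumption (the loading code isn't shown in the question), the full 28 x 28 MNIST can be fetched from OpenML:

from sklearn.datasets import fetch_openml

# 70,000 samples, each flattened to 784 = 28 * 28 values
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)

some_digit = X[880]
some_digit_image = some_digit.reshape(28, 28)  # works, since 784 = 28 * 28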
I want to shuffle my images before putting them into the HDF5 file, but I got an error in the computation. As a recent learner, I can't figure this out even after reading the HDF5 documentation. Kindly guide me.
from random import shuffle
import glob
shuffle_data = True # shuffle the addresses before saving
hdf5_path = 'Cat vs Dog/dataset.hdf5' # address to where you want to save the hdf5 file
cat_dog_train_path = 'Cat vs Dog/train/*.jpg'
# read addresses and labels from the 'train' folder
addrs = glob.glob(cat_dog_train_path)
labels = [0 if 'cat' in addr else 1 for addr in addrs] # 0 = Cat, 1 = Dog
# to shuffle data
if shuffle_data:
    c = list(zip(addrs, labels))
    shuffle(c)
    addrs, labels = zip(*c)
Error:
> ValueError Traceback (most recent call
> last) <ipython-input-19-4408536403db> in <module>()
> 2 c = list(zip(address, labels))
> 3 shuffle(c)
> ----> 4 addrs, labels = zip(*c)
>
> ValueError: not enough values to unpack (expected 2, got 0)
Reference: http://machinelearninguru.com/deep_learning/data_preparation/hdf5/hdf5.html#list
The website gives Python 2 code. From your tag it looks like you are using Python 3; you can convert the code using
2to3 -n filename.py
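For reference, 2to3 only prints a proposed diff by default; to write the converted code back to the file, add the -w flag (-n just skips creating a .bak backup):

2to3 -w -n filename.py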