So I understand that Google Colab crashes when using cv2.imshow to display images. Colab provides its own replacement for that function: from google.colab.patches import cv2_imshow can be used instead to display images.
However, I noticed that Colab raises a DisabledFunctionError when I try to use cv2.imshow. This led me to think that maybe I could catch that error with a try/except block. But to do that, DisabledFunctionError must be defined as a custom error in Python, so I wrote an exception class to define it:
class DisabledFunctionError(Exception):
    pass
Now, having done that, I assumed the error could be handled with a try/except block as follows:
try:
    cv2.imshow(frame, image)
except DisabledFunctionError:
    print('Error handled')
But, to my surprise, Colab still raises the exception and it is not caught by the try/except block. This behavior seems strange to me. Am I missing something here? Is this behavior specific to Colab?
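The except clause never fires because except matches by class identity and inheritance, not by class name: a locally defined class that merely shares the name cannot catch Colab's exception. A minimal sketch of the mechanism, using a stand-in class since Colab's actual exception class is not assumed importable here:

```python
# except-matching uses the exception's class object (and its bases),
# not its name, so a locally defined DisabledFunctionError cannot
# catch Colab's exception of the same name.
class DisabledFunctionError(Exception):        # our local definition
    pass

class _ColabDisabledFunctionError(Exception):  # stand-in for Colab's own class
    pass

try:
    raise _ColabDisabledFunctionError("cv2.imshow() is disabled in Colab")
except DisabledFunctionError:                  # never matches: different class object
    caught = "local class"
except Exception as e:                         # this is what actually matches
    caught = "generic Exception: " + type(e).__name__

print(caught)
```

In practice you can catch it broadly with `except Exception as e` and check `type(e).__name__ == 'DisabledFunctionError'`, or simply use Colab's cv2_imshow replacement instead.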
I need to use Spark for transfer learning to train on images. The error is:
"cannot import name 'resnet50' from 'keras.applications' (/usr/local/lib/python3.7/dist-packages/keras/applications/__init__.py)"
I have been trying to solve this for a week. The error comes from sparkdl: if you change this file (sparkdl/transformers/keras_applications.py) to import from tensorflow.keras.applications instead, that import goes back to normal, but this time you will see another error:
AttributeError: module 'tensorflow' has no attribute 'Session'
I tried different IDEs (PyCharm, VS Code) but got the same errors. There are different explanations on Stack Overflow, but I'm totally confused now.
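For the second error: tf.Session was removed in TensorFlow 2.x, and legacy code such as sparkdl still calls it. A common workaround, sketched here under the assumption of a TensorFlow 2.x install where the compat.v1 shim exists (and guarded so it degrades when TensorFlow is not installed), is to fall back to tf.compat.v1.Session:

```python
# tf.Session exists in TF 1.x; in TF 2.x the same class lives under
# tf.compat.v1.Session. Pick whichever is available.
try:
    import tensorflow as tf
    session_cls = getattr(tf, "Session", None) or tf.compat.v1.Session
except ImportError:
    session_cls = None  # TensorFlow not installed in this environment

print(session_cls is None or callable(session_cls))
```

Legacy code can then be pointed at session_cls; alternatively, calling `tf.compat.v1.disable_v2_behavior()` at startup makes much TF 1.x-era code run unmodified, though sparkdl itself may still need patching.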
Before I ask, please understand that I am not good at English. I'm sorry.
# imports inferred from usage
import multiprocessing as mp
import ctypes
import keyboard

sharedMem_chk = mp.Value(ctypes.c_bool, False)

def all_loopStop(chk):
    # print("[def]all_loopStop:::ready")
    while True:
        if chk.value == False:
            if keyboard.is_pressed('q'):
                print("::stop loop::")
                chk.value = True

def __init__():
    test1 = mp.Process(target=all_loopStop, args=(sharedMem_chk,))
    test1.start()

if __name__ == '__main__':
    __init__()
This is part of my code.
It works fine when compiling and debugging, but when I build an exe with cx_Freeze and run the exe file, I get the following error message:
_pickle.PicklingError: Can't pickle <function all_loopStop at 0x...>: attribute lookup all_loopStop on __main__ failed
After searching for an hour, I think the reason for the error is data that is not serialized, but the all_loopStop function does not need any serialized data, so I don't know why the error occurs.
The development environment is Python 3.7 32-bit, and I'll also attach a detailed debug console window.
I need your help very much. I apologize once again for asking questions in my poor English.
Debug console screenshot
So I get this error TypeError: unhashable type: 'numpy.ndarray' when executing the code below. I searched through Stack Overflow but haven't found a way to fix my problem. The goal is to classify digits from the MNIST dataset. The error is raised in the modell.fit() method (from tflearn). I can attach the full error message if needed. I also tried the approach where you put the x and y labels in a dictionary and train with that, but it raised another error message. (Note: I excluded my predict function from this code.)
Code:
import tflearn
import tflearn.datasets.mnist as mnist

x, y, X, Y = mnist.load_data(one_hot=True)
x = x.reshape([-1, 28, 28, 1])
X = X.reshape([-1, 28, 28, 1])

class Neural_Network():
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.epochs = 60000

    def main(self):
        cnn = tflearn.layers.core.input_data(shape=[None, 28, 28, 1], name="input_layer")
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.core.flatten(cnn)
        cnn = tflearn.layers.core.fully_connected(cnn, 1000, activation="relu")
        cnn = tflearn.layers.core.dropout(cnn, 0.85)
        cnn = tflearn.layers.core.fully_connected(cnn, 10, activation="softmax")
        cnn = tflearn.layers.estimator.regression(cnn, learning_rate=0.001)
        modell = tflearn.DNN(cnn)
        modell.fit(self.x, self.y)
        modell.save("mnist.modell")

nn = Neural_Network(x, y)
nn.main()
nn.predict(X[1])
print("Label for prediction:", Y[1])
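For what it's worth, the TypeError itself simply means a NumPy array was used where a hashable value is required, for example as a dictionary key (which the dictionary-based attempt mentioned above would trigger). A tiny reproduction, assuming NumPy is installed:

```python
# NumPy arrays are mutable and define no __hash__, so using one as a
# dict key (or set member) raises exactly this TypeError.
import numpy as np

a = np.array([1, 2, 3])
try:
    {a: "label"}          # array as a dictionary key
    result = "hashed"
except TypeError as e:
    result = str(e)

print(result)
```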
So the problem fixed itself. I just restarted my Jupyter Notebook and everything worked fine, with a few exceptions: 1. I have to restart the kernel every time I want to retrain the net; 2. I get another error when I try to load the saved model, so I can't continue (the error is NotFoundError: Key Conv2D_2/W not found in checkpoint). I will ask another question for that problem. Conclusion: try reloading your Jupyter Notebook if something isn't working well, and if you want to retrain an ANN, restart your kernel.
I installed pyspc and ran it successfully in a Jupyter Notebook using the original samples.
But when I introduced a self-defined nested list, an error message showed up.
pyspc library: https://github.com/carlosqsilva/pyspc
from pyspc import *
import numpy

abc = [[2,3,4],[4,5.6],[1,4,5],[3,4,4],[4,5,6]]
a = spc(abc) + xbar_rbar() + rules() + rbar()
print(a)
error message for AssertionError
Thank you for advising where it went wrong and how to fix it.
Check the data: you have accidentally used a '.' instead of a ',' in the value [4,5.6], the second element of the list.
Here is the corrected data
abc=[[2,3,4],[4,5,6],[1,4,5],[3,4,4],[4,5,6]]
Hope this helps.
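The AssertionError most likely comes from the malformed row: [4,5.6] is a two-element subgroup while every other row has three samples, and an x-bar/R chart needs a consistent subgroup size (an assumption about pyspc's internal check, inferred from the data). A quick structural check that exposes the bad row:

```python
# With the '.' typo, one subgroup has only two samples.
broken = [[2, 3, 4], [4, 5.6], [1, 4, 5], [3, 4, 4], [4, 5, 6]]
fixed = [[2, 3, 4], [4, 5, 6], [1, 4, 5], [3, 4, 4], [4, 5, 6]]

broken_sizes = [len(row) for row in broken]   # the 2 reveals the bad row
print(broken_sizes)
print(all(len(row) == 3 for row in fixed))
```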
I have a lot of images to handle, and it is possible that some of them will fail to open. So I want to handle possible exceptions in my Python script. The problem is I do not know what kinds of exceptions might be raised while opening an image. This is my script for exception handling:
from PIL import Image

# all_im_path is a list containing all the image paths
ratios = []
for path in all_im_path:
    try:
        im = Image.open(path)
        ratio = float(im.width) / float(im.height)
        ratios.append(ratio)
        im.close()
    except:
        continue
I know from other posts that catching a bare exception is bad practice, but I do not know beforehand which exceptions will be raised. So my question is: how can I improve this code? Any suggestions?
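One common improvement (a sketch, not the only answer): catch the concrete failures Image.open can raise. FileNotFoundError and Pillow's UnidentifiedImageError are both OSError subclasses in recent Pillow, so `except OSError` covers missing, truncated, and unrecognized files, and recording what was skipped is better than silencing everything. The opener is passed in as a parameter here, with a hypothetical stand-in, so the sketch runs without real image files:

```python
# Catch the specific exceptions image opening raises and keep a record
# of skipped paths instead of a bare except + continue.
def collect_ratios(paths, opener):
    ratios, skipped = [], []
    for path in paths:
        try:
            with opener(path) as im:            # context manager closes the file
                ratios.append(im.width / im.height)
        except (OSError, ValueError) as e:      # missing / unreadable / corrupt
            skipped.append((path, e))
    return ratios, skipped

# Hypothetical stand-in for PIL.Image.open, used only for demonstration:
class _FakeImage:
    width, height = 4, 2
    def __enter__(self):
        return self
    def __exit__(self, *args):
        return False

def _opener(path):
    if path == "bad.png":
        raise OSError("cannot identify image file")
    return _FakeImage()

ratios, skipped = collect_ratios(["good.png", "bad.png"], _opener)
print(ratios, len(skipped))
```

With real files you would pass `Image.open` as the opener; the `with` block replaces the manual `im.close()` and closes the file even when reading the size fails.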