How can I troubleshoot this? I've already tried setting dtype=None in the image.img_to_array method.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from keras.preprocessing import image
image_size = (180, 180)
batch_size = 32
model = keras.models.load_model('best_model.h5')
img = keras.preprocessing.image.load_img(
    "GarnetCreek_7-15-2019.jpeg", target_size=image_size
)
img_array = image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create batch axis
predictions = model.predict(img_array)
score = predictions[0]
This raises the following error:
Traceback (most recent call last):
img_array = image.img_to_array(img, dtype=None)
return image.img_to_array(img, data_format=data_format, **kwargs)
x = np.asarray(img, dtype=dtype)
return array(a, dtype, copy=False, order=order)
TypeError: __array__() takes 1 positional argument but 2 were given
Has anyone seen this before? Many thanks!
This error is sometimes caused by a bug in Pillow 8.3.0, as it is here. (You may not import PIL directly in your code, but some libraries, such as tf.keras.preprocessing.image.load_img, use PIL internally.)
So downgrading from Pillow 8.3.0 to 8.2.0 may work.
Check your Pillow version:
import PIL
print(PIL.__version__)
If it is 8.3.0, you can downgrade to 8.2.0:
!pip install pillow==8.2.0
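If downgrading is not an option, here is a minimal workaround sketch based on the traceback, where np.asarray(img, dtype=dtype) is what hits the broken __array__: convert the PIL image without the dtype argument and cast afterwards.
import numpy as np
# Avoid passing dtype through to Pillow's __array__; convert first, then cast.
img_array = np.asarray(img).astype("float32")
img_array = tf.expand_dims(img_array, 0)  # create batch axis as before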
Related
I'm trying to use a pre-trained model from detectron2. Running the following code raises a NotImplementedError.
import torch
torch.__version__
import torchvision
#torchvision.__version__
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.7/index.html
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
import numpy as np
import os, json, cv2, random
import matplotlib.pyplot as plt
%matplotlib inline
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.structures import BoxMode
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
And it shows the following error:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-27-699e754fc9df> in <module>
----> 1 predictor = DefaultPredictor(cfg)
4 frames
/usr/local/lib/python3.8/dist-packages/iopath/common/file_io.py in _isfile(self, path, **kwargs)
438 bool: true if the path is a file
439 """
--> 440 raise NotImplementedError()
441
442 def _isdir(self, path: str, **kwargs: Any) -> bool:
NotImplementedError:
I had the same issue and solved it by manually downloading the .pkl file and pointing the cfg.MODEL.WEIGHTS variable at the local path.
You can try this:
import os
import urllib.request
from pathlib import Path

model_weights_url = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
print(f"Downloading model from {model_weights_url}...")
local_model_weights_path = Path("./temp/downloads/model.pkl")
os.makedirs(local_model_weights_path.parent, exist_ok=True)
urllib.request.urlretrieve(model_weights_url, local_model_weights_path)
cfg.MODEL.WEIGHTS = str(local_model_weights_path)
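This sidesteps the remote path resolution: the traceback shows the NotImplementedError coming from iopath's _isfile, which appears to be hit while DefaultPredictor resolves the remote checkpoint URL; a plain local path avoids that handler.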
I had the same issue just today. I manually downloaded R-50.pkl from https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl and set cfg.MODEL.WEIGHTS = "R-50.pkl" (the path to the file).
Here is the code for eliminating features, where I am getting a ValueError. I want to use recursive feature elimination without specifying any features up front. I tried to use the RFE (recursive feature elimination) model to automatically eliminate weak features with each iteration, which I have been unable to do. Here is the link to the dataset: https://drive.google.com/file/d/1neYnunu6a_Mdn3NfRZsF8wE4gwMCpjAY/view?usp=sharing. I would be grateful for any suggestions on how to do it.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pandas import DataFrame
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
from sklearn import datasets
from sklearn.metrics import classification_report
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# Load the dataset from the mounted drive (the exact path is not shown in the question; this one is hypothetical)
df = pd.read_csv('/content/drive/MyDrive/dataset.csv')
df.keys()
x=pd.DataFrame(df)
x.head()
X = df.drop(["Sub_Cat"],axis=1).values
y = df["Sub_Cat"].values
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state=0)
X_train.shape,X_test.shape
sel=SelectFromModel(RandomForestClassifier(n_estimators=100,random_state=0,n_jobs=-1))
sel.fit(X_train,y_train)
sel.get_support()
I am getting a ValueError in this part.
Then I tried dropping the ID and other string columns:
X = df.drop(["Dst_IP","Timestamp","Flow_ID","Src_IP","Sub_Cat"],axis=1).values
y = df["Sub_Cat"].values
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state=0)
sel=SelectFromModel(RandomForestClassifier(n_estimators=100,random_state=0,n_jobs=-1))
sel.fit(X_train,y_train)
sel.get_support()
I am still getting the error:
ValueError Traceback (most recent call last)
in ()
1 sel=SelectFromModel(RandomForestClassifier(n_estimators=100,random_state=0,n_jobs=-1))
----> 2 sel.fit(X_train,y_train)
3 sel.get_support()
3 frames
/usr/local/lib/python3.7/dist-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
81
82 """
---> 83 return array(a, dtype, copy=False, order=order)
84
85
ValueError: could not convert string to float: 'Anomaly'
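For the RFE part of the question, here is a minimal sketch of one possible approach (not from the thread). It assumes df is already loaded and keeps only numeric columns, since the ValueError shows a string such as 'Anomaly' is still present in X; the value of n_features_to_select is an arbitrary assumption to adjust.
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Keep only numeric feature columns; strings like 'Anomaly' cannot be cast to float.
X = df.drop(["Sub_Cat"], axis=1).select_dtypes(include="number").values
y = df["Sub_Cat"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# RFE drops the weakest features (`step` at a time) on each iteration
# until n_features_to_select remain.
rfe = RFE(
    estimator=RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1),
    n_features_to_select=10,  # assumption: adjust to your data
    step=1,
)
rfe.fit(X_train, y_train)
print(rfe.support_)   # boolean mask of the kept features
print(rfe.ranking_)   # 1 = selected; larger = eliminated earlier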
I am a new user of yellowbrick. While wrapping a scikit-learn LogisticRegression in yellowbrick's ClassificationReport, I ran into an unusual error. I have tried many variants suggested by the official yellowbrick documentation as well as by data science community posts (Medium, etc.), but I still get the same error. I do get the ClassificationReport, but the error is quite annoying.
#Using yellowbrick library
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score, recall_score, roc_curve, accuracy_score, auc, classification_report, plot_confusion_matrix, plot_roc_curve, precision_score, f1_score
from yellowbrick.classifier import ClassificationReport, discrimination_threshold, classification_report
classes = [0,1]
fig = plt.gcf()
ax = plt.subplot(111)
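# log_model is the LogisticRegression instance defined earlier (not shown in the snippet)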
visualizer = ClassificationReport(log_model,classes=[0,1], size=(400,400),fontsize=15, cmap='GnBu', ax = ax)
ax.grid(False)
plt.title("Classification Report", fontsize=18)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
#visualizer.show()  # I even tried this; it gives a similar error: 'LogisticRegression' object has no attribute 'show'
The output I am getting is:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-89-f01efe3e9a6d> in <module>()
16 visualizer.fit(X_train, y_train)
17 visualizer.score(X_test, y_test)
---> 18 visualizer.poof()
2 frames
/usr/local/lib/python3.7/dist-packages/yellowbrick/utils/wrapper.py in __getattr__(self, attr)
40 def __getattr__(self, attr):
41 # proxy to the wrapped object
---> 42 return getattr(self._wrapped, attr)
AttributeError: 'LogisticRegression' object has no attribute 'fig'
Appreciate any suggestion to get rid of this error.
To add, I am currently using the following scikit-learn and yellowbrick versions:
print(sklearn.__version__)
print(yellowbrick.__version__)
0.24.2
0.9.1
scikit-learn 0.22.2.post1 and yellowbrick 0.9.1 fixed the issue for me. Install them by running:
pip install scikit-learn==0.22.2.post1
and
pip install yellowbrick==0.9.1
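Alternatively, here is a sketch assuming you upgrade to yellowbrick 1.x instead, where poof() was renamed to show(). (On 0.9.x, calling show() falls through the wrapper's __getattr__ to the underlying estimator, which is why you saw 'LogisticRegression' object has no attribute 'show'.)
from yellowbrick.classifier import ClassificationReport
# Assumes yellowbrick >= 1.0 and a LogisticRegression instance log_model.
visualizer = ClassificationReport(log_model, classes=[0, 1], cmap="GnBu")
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()  # replaces poof() in yellowbrick 1.x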
I am following Andrew Ng's Neural Network and Deep Learning course on Coursera.
Doing the assignment within Coursera's notebook environment called "Logistic_Regression_with_a_Neural_Network_mindset_v6a."
There is an optional and ungraded section at the very bottom titled:
"7 - Test with your own image".
I am trying to run the following code from my own notebook environment.
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_pet_cat.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-78-362e8e86085f> in <module>
5 # We preprocess the image to fit your algorithm.
6 fname = "images/" + my_image
----> 7 image = np.array(ndimage.imread(fname, flatten=False))
8
9
AttributeError: module 'scipy.ndimage' has no attribute 'imread'
I've read that imread and imresize have been removed from SciPy. Is there a way to make the code accept a custom image from my local notebook environment without downgrading to SciPy 1.1.0? For some reason my system won't let me uninstall or downgrade to SciPy 1.1.0.
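One way to avoid the downgrade is a sketch like the following, which replaces the removed scipy.ndimage.imread / scipy.misc.imresize calls with Pillow (already imported above). Note that it resizes the uint8 image first and scales afterwards, so the interpolation may differ slightly from imresize:
import numpy as np
from PIL import Image
fname = "images/" + my_image
# Load and resize with Pillow instead of the removed SciPy helpers.
img = Image.open(fname).resize((num_px, num_px))
image = np.array(img) / 255.
my_image = image.reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)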
I got the code below from Visualizing a Decision Tree - Machine Learning.
import numpy as np
from sklearn.datasets import load_iris
from sklearn import tree
iris = load_iris()
test_idx = [0, 50 , 100]
train_target = np.delete(iris.target, test_idx)
train_data = np.delete(iris.data, test_idx , axis=0)
test_target = iris.target[test_idx]
test_data = iris.data[test_idx]
clf = tree.DecisionTreeClassifier()
clf.fit(train_data, train_target)
print(test_target)
print(clf.predict(test_data))
#viz_code
from sklearn.externals.six import StringIO
import pydot
dot_data = StringIO()
tree.export_graphviz(clf,
                     out_file=dot_data,
                     feature_names=iris.feature_names,
                     class_names=iris.target_names,
                     filled=True, rounded=True,
                     impurity=False)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("iris.pdf")
I tried to run it in Python 3.5, but I get an error saying that graph is a list.
Traceback (most recent call last):
File "Iris.py", line 31, in <module>
graph.write_pdf("iris.pdf")
AttributeError: 'list' object has no attribute 'write_pdf'
How come graph here is a list?
I think this is a duplicate; the same question is answered here (link).
Because pydot.graph_from_dot_data returns a list (a DOT string can define more than one graph), the solution is:
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph[0].write_pdf("iris.pdf")
This solved the problem for me with Python 3.6.5 :: Anaconda, Inc.
pydot will not work in Python 3.
You can use pydotplus for Python 3 instead of pydot (see "graph.write_pdf("iris.pdf") AttributeError: 'list' object has no attribute 'write_pdf'").
Although the code shown on YouTube is for Python 2, so it would be easier to follow along with Python 2.
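If you go the pydotplus route, here is a minimal sketch (assuming pip install pydotplus and a working Graphviz install):
import pydotplus
# pydotplus.graph_from_dot_data returns a single Dot object rather than a list.
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("iris.pdf")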