I keep getting unsupported operand type(s) for /: 'str' and 'str' in my code.
I have created a dataset for semantic segmentation of sidewalks across a campus. I want to train this dataset with the fastai library in Google Colab, but I am getting errors when trying to get the labels from the labeled images and map them to the input images with the function get_y_fn.
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import fastai
from fastai import *
from fastai.vision import *
import pathlib
import os
from PIL import Image
import matplotlib.pyplot as plt
fnames = get_image_files(path_img)
lbl_names = get_image_files(path_lbl)
get_y_fn = lambda x: path_lbl/f'{x.stem}.png'
data = (SegmentationItemList.from_folder(path_img)
        .split_by_rand_pct()
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), size=128, tfm_y=True)
        .databunch(bs=4))
TypeError Traceback (most recent call last)
<ipython-input-18-80efbaeba6e7> in <module>()
2 data = (SegmentationItemList.from_folder(path_img)
3 .split_by_rand_pct()
----> 4 .label_from_func(get_y_fn,classes=codes)
5 .transform(get_transforms(),size=128,tfm_y=True)
6 .databunch(bs=4))
3 frames
<ipython-input-10-44f94a438cac> in <lambda>(x)
----> 1 get_y_fn = lambda x: path_lbl/f'{x.stem}.png'
TypeError: unsupported operand type(s) for /: 'str' and 'str'
Going through your code in Google Colab, I found that you are using a string as a path, whereas fastai code uses Path objects for paths, not strings. So you can simply replace:
get_y_fn = lambda x: path_lbl/f'{x.stem}.png'
with
get_y_fn = lambda x: path_lbl + "/" + f'{x.stem}.png'
since path_lbl is a string object, not a Path object.
You can also convert path_lbl from a string to a Path using Python's pathlib library.
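For example, a minimal sketch of that pathlib fix (assuming path_lbl currently holds a plain string):
from pathlib import Path

# Wrapping the string in a Path makes the / operator build
# filesystem paths instead of raising a TypeError.
path_lbl = Path(path_lbl)
get_y_fn = lambda x: path_lbl/f'{x.stem}.png'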
I am following Andrew Ng's Neural Networks and Deep Learning course on Coursera, doing the assignment in Coursera's notebook environment called "Logistic_Regression_with_a_Neural_Network_mindset_v6a".
There is an optional, ungraded section at the very bottom titled:
"7 - Test with your own image".
I am trying to run the following code from my own notebook environment.
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_pet_cat.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-78-362e8e86085f> in <module>
5 # We preprocess the image to fit your algorithm.
6 fname = "images/" + my_image
----> 7 image = np.array(ndimage.imread(fname, flatten=False))
8
9
AttributeError: module 'scipy.ndimage' has no attribute 'imread'
I've read that imread and imresize have been deprecated and removed from scipy. Is there a way to make the code accept a custom image from my local notebook environment without downgrading to scipy 1.1.0? For some reason my system won't allow me to uninstall or downgrade to scipy 1.1.0.
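One workaround (a sketch on my part, not the course's official fix) is to do the reading and resizing with PIL, which the notebook already imports, instead of the removed scipy helpers:
import numpy as np
from PIL import Image

fname = "images/" + my_image
# Image.open replaces the removed scipy.ndimage.imread;
# convert("RGB") guarantees three channels for the reshape below.
img = Image.open(fname).convert("RGB")
# resize replaces the removed scipy.misc.imresize.
img = img.resize((num_px, num_px))
# Scale to [0, 1] and flatten to a column vector, as in the original code.
my_image = (np.array(img) / 255.).reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
Here num_px, d, and predict are the notebook's own variables, as in the snippet above.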
I am new to Python and I am trying to start a CNN project. I mounted the gdrive and I am trying to read images from a gdrive directory. Afterwards, I try to count the images in that directory. Here is my code:
import pathlib
import tensorflow as tf
dataset_dir = "content/drive/My Drive/Species_Samples"
data_dir = tf.keras.utils.get_file('Species_Samples', origin=dataset_dir, untar=True)
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir('*/*.png')))
print(image_count)
However, I get the following error.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-78-e5d9409807d9> in <module>()
----> 1 image_count = len(list(data_dir('*/*.png')))
2 print(image_count)
TypeError: 'PosixPath' object is not callable
Can you help, please?
After the suggested fix, my code looks like this:
import pathlib
data_dir = pathlib.Path("content/drive/My Drive/Species_Samples/")
count = len(list(data_dir.rglob("*.png")))
print(count)
You are trying to glob files, so you need to use one of the glob methods that pathlib provides:
import pathlib
data_dir = pathlib.Path("/path/to/dir/")
count = len(list(data_dir.rglob("*.png")))
In this case, .rglob is a recursive glob.
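For comparison, a non-recursive variant matching the original '*/*.png' pattern would use .glob instead (a sketch assuming the same directory layout under the usual Colab mount point):
import pathlib

# Assumed Colab mount path; adjust to your own directory.
data_dir = pathlib.Path("/content/drive/My Drive/Species_Samples")
# .glob('*/*.png') matches PNGs exactly one directory deep,
# while .rglob('*.png') matches PNGs at any depth.
count = len(list(data_dir.glob("*/*.png")))
print(count)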
I'm trying to convert my Keras hdf5 file into a TensorFlow Lite file with the following code:
import tensorflow as tf
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model("/content/best_model_11class.hdf5")
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
I'm getting this error on the from_keras_model line:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-14-26467c686751> in <module>()
2
3 # Convert the model.
----> 4 converter = tf.lite.TFLiteConverter.from_keras_model("/content/best_model_11class.hdf5")
5 tflite_model = converter.convert()
6
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in from_keras_model(cls, model)
426 # to None.
427 # Once we have better support for dynamic shapes, we can remove this.
--> 428 if not isinstance(model.call, _def_function.Function):
429 # Pass `keep_original_batch_size=True` will ensure that we get an input
430 # signature including the batch dimension specified by the user.
AttributeError: 'str' object has no attribute 'call'
How do I fix this? By the way, I'm using Google Colab.
I'm not sure how things work on Colab, but looking at the documentation for tf.lite.TFLiteConverter.from_keras_model I can see that it expects a Keras model instance as an argument, whereas you are giving it a string. Maybe you need to load the Keras model first?
Something like:
keras_model = tf.keras.models.load_model("/content/best_model_11class.hdf5")
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
import tensorflow as tf

model = tf.keras.models.load_model("/content/best_model_11class.hdf5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
This worked for me, per https://github.com/tensorflow/tensorflow/issues/32693
This error can also appear when, by accident, you try to load only the weights of a saved model instead of the full model.
For example, it can occur when using ModelCheckpoint() with save_weights_only=True: only the weights are saved, not the rest of the model metadata, so loading the checkpoint as a full model produces the same error.
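As a minimal sketch of that pitfall (build_model below is a hypothetical stand-in for whatever architecture produced the checkpoint): if the file was written with save_weights_only=True, rebuild the model and call load_weights instead of load_model:
import tensorflow as tf

def build_model():
    # Hypothetical placeholder; the real definition must match
    # the architecture that produced the saved weights.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))
    ])

# A checkpoint saved with save_weights_only=True contains no architecture,
# so tf.keras.models.load_model() on it fails. Load the weights into a
# freshly built model instead:
model = build_model()
model.load_weights("weights.h5")  # assumed checkpoint path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()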
I have some code like:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.read_csv('StudentsPerformance.csv')
# print(data.isnull().sum())  # check whether there are any missing values
# print(data.dtypes)  # check the datatypes of the dataset
# ANALYSIS OF THE COLUMNS' VALUES
"""print(data['gender'].value_counts())
print(data['parental level of education'].value_counts())
print(data['race/ethnicity'].value_counts())
print(data['lunch'].value_counts())
print(data['test preparation course'].value_counts())"""
# Add 'total' and 'average' columns to the dataset
data['total'] = data['math score'] + data['reading score'] + data['writing score']
data['average'] = data['total'] / 3
sns.distplot(data['average'])
I would like to see a distplot of the average for visualization, but when I run the program it gives me an error like:
Traceback (most recent call last):
  File "C:/Users/usersample/PycharmProjects/untitled1/sample.py", line 22, in <module>
    sns.distplot(data['average'])
AttributeError: module 'seaborn' has no attribute 'distplot'
I've tried uninstalling and reinstalling seaborn and upgrading it to 0.9.0, but it doesn't work.
Head of my data:
female,"group B","bachelor's degree","standard","none","72","72","74"
female,"group C","some college","standard","completed","69","90","88"
female,"group B","master's degree","standard","none","90","95","93"
male,"group A","associate's degree","free/reduced","none","47","57","44"
This might be due to the removal of paths in the environment variables section. Consider adding your IDE's Scripts and Python folders there. I am using the PyCharm IDE, did the same, and it is working fine.
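Another possibility worth checking (an assumption, since the installed seaborn version isn't shown): newer seaborn releases deprecated and later removed distplot, in which case histplot is the replacement:
import seaborn as sns

# distplot was deprecated in seaborn 0.11 and removed in later
# releases; histplot with kde=True gives a similar plot.
sns.histplot(data['average'], kde=True)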
I am getting an error for the following code.
Can someone please tell me where I am going wrong?
P.S. I have given the correct path to the .mrc file.
import numpy
import Mrc
a = Mrc.bindFile('somefile.mrc')
# a is a NumPy array with the image data memory mapped from
# somefile.mrc. You can use it directly with any function
# that will take a NumPy array.
hist = numpy.histogram(a, bins=200)
# a.Mrc is an instance of the Mrc class. One thing you can do
# with that class is print out key information from the header.
a.Mrc.info()
# Use a.Mrc.hdr to access the MRC header fields.
wavelength0_nm = a.Mrc.hdr.wave[0]
AttributeError Traceback (most recent call last)
<ipython-input> in <module>()
3 a = Mrc.bindFile('/home/smitha/deep-image-prior/data/Falcon_2015_05_14-20_42_18.mrc')
4 hist = numpy.histogram(a, bins=200)
----> 5 a.Mrc.info()
6 wavelength0_nm = a.Mrc.hdr.wave[0]
7
AttributeError: 'NoneType' object has no attribute 'Mrc'
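The traceback shows that a is None, meaning Mrc.bindFile did not return a mapped array. A defensive sketch (assuming the same Mrc module) that checks the path and the return value before touching a.Mrc:
import os
import numpy
import Mrc

path = '/home/smitha/deep-image-prior/data/Falcon_2015_05_14-20_42_18.mrc'
# Make sure the file actually exists at the path bindFile receives.
assert os.path.isfile(path), f"No such file: {path}"

a = Mrc.bindFile(path)
# In the traceback a is None, so guard before using the .Mrc attribute.
if a is None:
    raise RuntimeError("Mrc.bindFile could not map the file; "
                       "check the path and the file format.")

hist = numpy.histogram(a, bins=200)
a.Mrc.info()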