I am following this tutorial: https://blog.paperspace.com/mask-r-cnn-in-tensorflow-2-0/ in order to train a custom dataset for object detection. When I run the training code (under the section "Train Mask R-CNN in TensorFlow 1.0"), I get this error on Colab:
NameError Traceback (most recent call last)
<ipython-input-31-794112aa6465> in <module>()
6 import mrcnn.config
7
----> 8 import mrcnn.model
9
10 class KangarooDataset(mrcnn.utils.Dataset):
/content/drive/MyDrive/How_to_Train_an_Object_Detection_Model_with_Keras/Mask_RCNN/mrcnn/model.py in <module>()
255
256
--> 257 class ProposalLayer(KE.Layer):
258 """Receives anchor scores and selects a subset to pass as proposals
259 to the second stage. Filtering is done based on anchor scores and
NameError: name 'KE' is not defined
After searching, I tried to check that the Mask-RCNN setup was OK by following this question: "Import Matterport's Mask-RCNN model from github - error: ZipImportError: bad local file header", applying the solution suggested at the end. I also found "NameError: name 'K' is not defined", so I tried this command:
from keras import backend as KE
(importing it as KE instead of K), but it didn't work!
Do you have any idea how to fix that error?
OK, I tried this GitHub repository instead of the original Mask_RCNN: https://github.com/akTwelve/Mask_RCNN, with the latest TensorFlow (2.7.0) + Keras (2.7.0) installed on Colab. It seems to overcome the problem I described above, although I do not know why.
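For anyone who prefers to stay on the original Matterport repository, a workaround that is sometimes suggested (I have not verified it against every Keras release) is to edit mrcnn/model.py so that KE points at a module that actually provides the Layer class, since model.py appears to use KE only for KE.Layer:

# In Mask_RCNN/mrcnn/model.py, near the other imports (a sketch; adjust to your Keras version).
# Replace a failing line such as:
#     import keras.engine as KE
# with, for example:
import tensorflow.keras.layers as KE   # KE.Layer seems to be the only attribute model.py needs from KE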
I am trying to calibrate the output of a PySpark GradientBoostingClassifier model to probabilities and want to try this option.
I have run an IsotonicRegression like this:
from pyspark.ml.regression import IsotonicRegression, IsotonicRegressionModel
model = IsotonicRegression().fit(train_data)
predictions_train = model.transform(test_data)
But I am unable to perform fit using IsotonicRegressionModel because when I try this:
irm = IsotonicRegressionModel()
model_irm = irm.fit(train_data)
I'm getting the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[70], line 4
1 # Trains an isotonic regression model.
2 irm = IsotonicRegressionModel()
----> 3 model_irm=irm.fit(train_data)
AttributeError: 'IsotonicRegressionModel' object has no attribute 'fit'
I would like to run the second option to understand the difference between IsotonicRegression and IsotonicRegressionModel.
Thanks in advance if anyone can help me understand this difference.
I'm using Spark version 3.1.3.
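For context, a minimal sketch of the usual Spark ML Estimator/Model split (it assumes train_data and test_data already have the default features/label columns, and the save path is illustrative). IsotonicRegression is the estimator that exposes fit(), while IsotonicRegressionModel is the fitted transformer that fit() returns, which is why constructing it directly gives no fit() method:

from pyspark.ml.regression import IsotonicRegression, IsotonicRegressionModel

iso = IsotonicRegression()                     # Estimator: exposes fit()
iso_model = iso.fit(train_data)                # returns an IsotonicRegressionModel (a Transformer)
predictions = iso_model.transform(test_data)   # the model only transforms; it has no fit()

# The fitted model can be persisted and reloaded, which is the main role of
# IsotonicRegressionModel as a separate class (the path is illustrative):
iso_model.write().overwrite().save("/tmp/iso_model")
reloaded = IsotonicRegressionModel.load("/tmp/iso_model")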
1. First, I downloaded the output folder of the trained model and imported it into a new project on the Google Colab server.
2. In the new project, without training the model, I pointed cfg.MODEL.WEIGHTS = "/content/output/model_final.pth" at the model_final.pth file in the existing output folder, but it goes into an infinite loop.
3. I changed the model weights to cfg.MODEL.WEIGHTS = "detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl", but it still doesn't predict objects.
4. I changed the model weights path to the metrics JSON file of the previously trained model, cfg.MODEL.WEIGHTS = "/content/output/metrics.json", but it still isn't working.
5. Using
DetectionCheckpointer(model).load("/content/output/model_final.pth")
DetectionCheckpointer(model).load("detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl")
gives the error that model is not defined.
What is this model_final.pkl file, and where does it come from?
What should we do to import the existing trained model and predict objects in the new project?
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.DATASETS.TEST = ("microcontroller_test", )
predictor = DefaultPredictor(cfg)
The above code goes into an infinite loop.
DetectionCheckpointer(model).load("/content/output/model_final.pth")
DetectionCheckpointer(model).load("detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl")
Error:
NameError Traceback (most recent call last)
<ipython-input-12-69f2a7846756> in <module>()
----> 1 DetectionCheckpointer(model).load("/content/output/model_final.pth")
2
3 DetectionCheckpointer(model).load("detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl")
NameError: name 'model' is not defined
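For what it's worth, the NameError in the last attempt seems to be simply that no model object was ever built. A minimal sketch (assuming cfg is the same config used for training, with detectron2 installed as in the tutorial) of building one before the checkpointer can load weights into it:

from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

model = build_model(cfg)                                              # bare architecture, randomly initialised
DetectionCheckpointer(model).load("/content/output/model_final.pth")  # fill in the trained weights
model.eval()                                                          # switch to inference mode

For plain inference, DefaultPredictor(cfg) loads the weights given by cfg.MODEL.WEIGHTS on its own, so the explicit DetectionCheckpointer route is only needed when you want to work with the raw model object.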
I'm following the examples (Jupyter notebooks) in Folium's GitHub repository and can't figure out why the CustomPane class is not working.
This is the code in the cell that's not working:
m = folium.Map([43, -100], zoom_start=4, tiles="stamentonerbackground", attr="My attr")
folium.GeoJson(geo_json_data).add_to(m)
folium.map.CustomPane("labels").add_to(m)
# Final layer associated to custom pane via the appropriate kwarg
folium.TileLayer("stamentonerlabels", pane="labels").add_to(m)
m.save(os.path.join('results', 'CustomPanes_1.html'))
m
Running the code results in the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
3 folium.GeoJson(geo_json_data).add_to(m)
4
----> 5 folium.map.CustomPane("labels").add_to(m)
6
7 # Final layer associated to custom pane via the appropriate kwarg
AttributeError: module 'folium.map' has no attribute 'CustomPane'
Can anyone help clarify what the problem is?
Folium version 0.5.0
Python 3.7.7
It seems that the Leaflet CustomPane class was added in Folium 0.9.0, and the error I was experiencing was simply due to using Folium 0.5.0.
I installed the current Folium version (0.11.0) and it works fine.
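For anyone landing on the same AttributeError, a quick sanity check worth running first (the upgrade command is just the standard pip one, nothing Folium-specific):

import folium
print(folium.__version__)        # folium.map.CustomPane needs folium >= 0.9.0

# then, in a notebook cell, something like:
# !pip install --upgrade folium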
I'm trying to calculate the F1 score using tf.contrib.metrics.f1_score, but it gives me an error. I know how to calculate it using precision and recall, but I want to use this function.
I have tried it on Ubuntu 16.04 LTS with TensorFlow version 1.9.0, both with and without GPU support.
from tensorflow.contrib.metrics import f1_score as ms
I get this error:
ImportError                               Traceback (most recent call last)
<ipython-input-6-627f14191ea2> in <module>()
----> 1 from tensorflow.contrib.metrics import f1_score as ms
ImportError: cannot import name 'f1_score'
AND
from tensorflow.contrib import metrics as ms
ms.f1_score
I get this error:
AttributeError Traceback (most recent call last)
<ipython-input-8-c19f57465581> in <module>()
1 from tensorflow.contrib import metrics as ms
----> 2 ms.f1_score
AttributeError: module 'tensorflow.contrib.metrics' has no attribute 'f1_score'
I expected ms.f1_score to load.
If you are sure that you have tf.contrib available and this doesn't work for you, you may need to reinstall TensorFlow with pip install -U tensorflow (or pip install -U tensorflow-gpu if you are using the GPU version).
If it still fails, go to where TensorFlow is installed and manually check whether the function is available. If it is, make sure that you don't have a file named tensorflow.py or tf.py in the same directory (your current working directory).
After that, the import should succeed.
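A quick way to do that check from Python itself (a sketch for TF 1.x; it only inspects what is installed):

import tensorflow as tf
print(tf.__version__, tf.__file__)              # confirm which TensorFlow installation is being imported
print(hasattr(tf.contrib.metrics, 'f1_score'))  # True only if this build actually ships the function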
Update: As pointed out by user grwlf:
Since TensorFlow 2.0, the tf.contrib modules were moved to the Addons repo; see github.com/tensorflow/addons. There, the F1 measure is available as F1Score: from tensorflow_addons.metrics import F1Score.
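For completeness, a small usage sketch with tensorflow-addons on TensorFlow 2.x (the class count, threshold, and one-hot labels below are made up for illustration):

import numpy as np
import tensorflow_addons as tfa

f1 = tfa.metrics.F1Score(num_classes=3, average="macro", threshold=0.5)
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32)
y_pred = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]], dtype=np.float32)
f1.update_state(y_true, y_pred)
print(f1.result().numpy())   # a single macro-averaged F1 value

Like any other Keras metric, it can also be passed directly to model.compile(metrics=[f1]).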
You can find the documentation of f1_score here
Since it is a function, maybe you can try out:
from tensorflow.contrib import metrics as ms
ms.f1_score(labels, predictions)
This will return a scalar tensor with the best F1 score found across different thresholds.
Example from tensorflow docs:
def model_fn(features, labels, mode):
    predictions = make_predictions(features)
    loss = make_loss(predictions, labels)
    train_op = tf.contrib.training.create_train_op(total_loss=loss, optimizer='Adam')
    eval_metric_ops = {'f1': f1_score(labels, predictions)}
    return tf.estimator.EstimatorSpec(
        mode=mode, predictions=predictions, loss=loss, train_op=train_op,
        eval_metric_ops=eval_metric_ops, export_outputs=export_outputs)

estimator = tf.estimator.Estimator(model_fn=model_fn)
Hope this answers your question.
So I get the error TypeError: unhashable type: 'numpy.ndarray' when executing the code below. I searched through Stack Overflow but haven't found a way to fix my problem. The goal is to classify digits using the MNIST dataset. The error occurs in the modell.fit() method (from TFLearn). I can attach the full error message if needed. I also tried the approach where you put the x and y labels in a dictionary and train with that, but it raised another error message. (Note: I excluded my predict function from this code.)
Code:
import tflearn
import tflearn.datasets.mnist as mnist

x, y, X, Y = mnist.load_data(one_hot=True)
x = x.reshape([-1, 28, 28, 1])
X = X.reshape([-1, 28, 28, 1])

class Neural_Network():
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.epochs = 60000

    def main(self):
        cnn = tflearn.layers.core.input_data(shape=[None, 28, 28, 1], name="input_layer")
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.core.flatten(cnn)
        cnn = tflearn.layers.core.fully_connected(cnn, 1000, activation="relu")
        cnn = tflearn.layers.core.dropout(cnn, 0.85)
        cnn = tflearn.layers.core.fully_connected(cnn, 10, activation="softmax")
        cnn = tflearn.layers.estimator.regression(cnn, learning_rate=0.001)
        modell = tflearn.DNN(cnn)
        modell.fit(self.x, self.y)
        modell.save("mnist.modell")

nn = Neural_Network(x, y)
nn.main()
nn.predict(X[1])
print("Label for prediction:", Y[1])
So the problem fixed itself. I just restarted my Jupyter notebook and everything worked fine. But with a few exceptions: 1. I have to restart the kernel every time I want to retrain the net; 2. I get another error when I try to load the saved model, so I can't continue (the error is NotFoundError: Key Conv2D_2/W not found in checkpoint). I will ask another question about that problem. Conclusion: try reloading your Jupyter notebook if something isn't working well, and if you want to retrain an ANN, restart your kernel.
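A hedged side note on point 1 (having to restart the kernel before retraining): TFLearn builds on graph-mode TensorFlow 1.x, and the usual trick is to clear the default graph before re-creating the layers, roughly like this sketch:

# Resets TensorFlow's default graph so the layers can be rebuilt without
# colliding with variables left over from the previous run.
import tensorflow as tf
tf.reset_default_graph()     # TF 1.x; tf.compat.v1.reset_default_graph() on newer versions

nn = Neural_Network(x, y)    # then rebuild and retrain as before
nn.main()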