Tensorboard: AttributeError: 'Model' object has no attribute '_get_distribution_strategy' - python-3.x

I'm getting this error when I use the TensorBoard callback during training.
I looked for answers in posts about TensorBoard errors, but this exact error does not appear in any Stack Overflow posts or GitHub issues.
Please let me know.
The following versions are installed on my PC:
TensorFlow and TensorFlow GPU: 2.0.0
TensorBoard: 2.0.0

I had the same problem and fixed it with this hack:
model._get_distribution_strategy = lambda: None

It seems to be a bug on TensorFlow's side.
https://github.com/tensorflow/tensorflow/pull/34870
Remove the TensorBoard callback for now.

This error mostly happens because of mixed imports from keras and tf.keras. Make sure the libraries are referenced consistently throughout the code: for example, instead of model.add(Conv2D()) (imported from standalone keras), use model.add(tf.keras.layers.Conv2D()). Applying this to all layers solved the problem for me.
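For instance, a minimal sketch that builds everything from the tf.keras namespace (the layer sizes and log directory here are illustrative, not taken from the question):

```python
import tensorflow as tf

# Build the model and the callback from the same tf.keras namespace;
# mixing standalone `keras` with `tf.keras` is what triggers the error.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# The TensorBoard callback must come from tf.keras as well,
# not from standalone keras:
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
```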

Related

AttributeError: 'Tokenizer' object has no attribute 'oov_token'

I have this issue when I process text. Do you know how to fix it? I just installed Keras with pip install keras.
This is a generic Python/pickle issue, not a Keras issue: a Tokenizer pickled under an older Keras version lacks the oov_token attribute that newer versions expect.
You can set tokenizer.oov_token = None to fix this.
The error was reported in a GitHub comment.
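A self-contained sketch of the workaround (the OldTokenizer class below is a hypothetical stand-in for a Tokenizer pickled under an older Keras release):

```python
import pickle

class OldTokenizer:
    # Hypothetical stand-in for a keras.preprocessing.text.Tokenizer
    # pickled under an older release that predates the oov_token attribute.
    def __init__(self):
        self.word_index = {"hello": 1}

# Round-trip through pickle, as if loading an old saved tokenizer.
tokenizer = pickle.loads(pickle.dumps(OldTokenizer()))

# The fix: define the missing attribute before using the tokenizer.
if not hasattr(tokenizer, "oov_token"):
    tokenizer.oov_token = None
```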

AttributeError: module 'torch' has no attribute 'inference_mode'

I am very new to PyTorch, and when I try to run my CNN I encounter the error:
AttributeError: module 'torch' has no attribute 'inference_mode'.
Does anyone know what is going on? It worked on Google Colab but nowhere else.
Try bumping the version to 1.13. Reference: https://pytorch.org/docs/stable/generated/torch.inference_mode.html
Most likely it's a version issue.
torch.inference_mode() was added in v1.9, so make sure you have at least that version.
Try printing torch.__version__ to check your version.
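If upgrading isn't an option, one way to keep the code running on older PyTorch is to fall back to torch.no_grad(); a sketch (the helper name inference_ctx is mine, not a PyTorch API):

```python
def inference_ctx(torch_module):
    """Return an inference context manager: torch.inference_mode() when
    available (PyTorch >= 1.9), otherwise the older torch.no_grad()."""
    if hasattr(torch_module, "inference_mode"):
        return torch_module.inference_mode()
    return torch_module.no_grad()

# Usage with the real library (assumes torch is installed):
#   import torch
#   with inference_ctx(torch):
#       output = model(batch)
```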

AttributeError: module 'tensorflow' has no attribute 'contrib'

I am using someone else's code (Code Link), which is implemented in TensorFlow 1. I want to run this code on TensorFlow 2, but I am getting this error:
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
AttributeError: module 'tensorflow' has no attribute 'contrib'
I upgraded the code with this command:
!tf_upgrade_v2 \
--infile /research/dept8/gds/anafees/MyTest.py \
--outfile /research/dept8/gds/anafees/MyTest2.py
Most things were updated, but the generated report showed:
168:21: ERROR: Using member tf.contrib.distribute.MirroredStrategy in deprecated module tf.contrib. tf.contrib.distribute.MirroredStrategy cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
I searched Google but could not find a suitable solution. I do not want to move back to TensorFlow 1. Is there an alternative solution? Can anyone help?
Some libraries, such as tf.contrib, are deprecated in TensorFlow 2.x; making the code compatible with TF 2 requires a few library changes.
Replace
tf.contrib.learn.datasets.load_dataset("mnist")
with
tf.keras.datasets.mnist.load_data()
and replace
tf.contrib.distribute.MirroredStrategy
with
tf.distribute.MirroredStrategy(devices=None, cross_device_ops=None)
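A minimal sketch of the second replacement in working TF2 code (the model here is illustrative; the MNIST load_data() call is shown commented out only to avoid the dataset download):

```python
import tensorflow as tf

# TF2 replacement for tf.contrib.learn.datasets.load_dataset("mnist"):
#   (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# TF2 replacement for tf.contrib.distribute.MirroredStrategy;
# with no GPUs it falls back to the single available CPU device.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```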

AutoKeras: Exported then reloaded Classifier model does not predict correctly

UPDATE: As kindly pointed out by @Dr Snoopy in the comments below, I was mistaken in thinking that the matrix from predict() would show the predicted labels - it shows classification probabilities instead.
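Given that, the probability rows can be mapped to class labels with argmax; a sketch (probs is a made-up 3-class example, not the actual MNIST output below):

```python
import numpy as np

# Each row of predict()'s output holds per-class probabilities;
# the predicted label is the index of the largest entry.
probs = np.array([
    [0.1, 0.2, 0.7],
    [0.8, 0.1, 0.1],
])
labels = probs.argmax(axis=1)  # → array([2, 0])
```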
Issue:
I am trying out AutoKeras by following the step-by-step instructions in their official export page. Everything went fine until print(predicted_y) produced:
[[7.10387882e-09 5.58982416e-10 3.74930835e-07 ... 9.99997973e-01
2.01310062e-08 3.60369455e-07]
[1.85361150e-05 9.04550598e-06 9.99895692e-01 ... 2.75132152e-12
3.61683783e-06 7.38385242e-10]
[4.39638507e-06 9.98704195e-01 1.83042779e-04 ... 3.79047066e-04
8.97390855e-05 2.85495821e-06]
...
[2.76643597e-09 3.89823036e-08 2.50714938e-09 ... 1.26030145e-05
4.11345856e-04 1.28301617e-04]
[1.66736356e-07 1.93144473e-10 1.16833530e-08 ... 4.25922603e-10
3.32917640e-04 2.25114619e-07]
[1.36902031e-06 2.86963953e-10 1.59475476e-05 ... 1.64523464e-11
2.79402485e-07 4.60360372e-09]]
I then tried print(clf.predict(x_test)) and that gave the correct-looking results:
[['7']
['2']
['1']
...
['4']
['5']
['6']]
My Question:
Has anyone else run the code on the AutoKeras export page successfully, or hit the same issue as me? If the former, I would appreciate any advice on where I went wrong. As mentioned, I ran each line of code on that page in the Python console.
Hardware/Software Specs:
OS: Ubuntu 18.04 LTS
GPU: Nvidia T4, driver v450.80.02, CUDA v10.1 and cuDNN v7.6.5.32
Autokeras==1.0.11
Keras==2.4.3
keras-tuner==1.0.2rc4
tensorboard==2.4.0
tensorboard-plugin-wit==1.7.0
tensorflow==2.3.1
tensorflow-addons==0.11.2
tensorflow-datasets==4.1.0
tensorflow-estimator==2.3.0
tensorflow-hub==0.10.0
tensorflow-metadata==0.25.0
tensorflow-model-optimization==0.5.0
scikit-learn==0.23.2
numpy==1.18.5
pandas==1.1.4
Research Done:
So far, I've only found this, but that SO question/issue is not the same as mine. I also exported a regression model, reloaded it, and used it to make predictions - there was no issue. So I surmise the issue is limited to classification models.
Thanks in advance!

FailedPreconditionError: Table already initialized

I am reading data from TFRecords with the Dataset API. I am converting string data to indicator (dummy) columns with the following code.
SFR1 = tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list("SFR1 ",
vocabulary_list=("1", "2")))
But when I run my code, TensorFlow throws the following error.
tensorflow.python.framework.errors_impl.FailedPreconditionError: Table
already initialized. [[Node:
Generator/input_layer/SFR1 _indicator/SFR1 _lookup/hash_table/table_init
= InitializeTableV2[Tkey=DT_STRING, Tval=DT_INT64](Generator/input_layer/SFR1 _indicator/SFR1 _lookup/hash_table,
Generator/input_layer/SFR1 _indicator/SFR1 _lookup/Const,
Generator/input_layer/SFR1 _indicator/SFR1 _lookup/ToInt64)]]
[[Node: Generator2/IteratorGetNext =
IteratorGetNextoutput_shapes=[[?,10000,160]],
output_types=[DT_FLOAT],
_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
I have tried many combinations to determine the source of the problem. I found that it occurs when the model uses both tf.feature_column.categorical_column_with_vocabulary_list and the Dataset API; if I choose TFRecordReader instead of the Dataset API, the code runs.
Searching Stack Overflow, I noticed a similar issue; I am adding the link below. As both problems are the same, I didn't copy all my code - the link includes enough detail to explain my problem:
Tensorflow feature columns in Dataset map Table already initialized issue
Thanks.
I came across the same issue, then modified my code following this warning from TensorFlow, and it works:
Creating lookup tables inside a function passed to Dataset.map() is not supported. Create each table outside the function, and capture it inside the function to use it.
Hope it helps.
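A minimal sketch of the pattern the warning asks for, with an illustrative vocabulary (not the questioner's actual feature columns):

```python
import tensorflow as tf

# Build the lookup table ONCE, outside the function passed to map(),
# and let the mapped function capture it as a closure variable.
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(
        keys=tf.constant(["1", "2"]),
        values=tf.constant([0, 1], dtype=tf.int64),
    ),
    default_value=tf.constant(-1, dtype=tf.int64),
)

ds = tf.data.Dataset.from_tensor_slices(["1", "2", "2"])
# The table is captured, not re-created per element, so it is only
# initialized once and the FailedPreconditionError does not occur.
ds = ds.map(lambda s: table.lookup(s))
```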
This is an issue with earlier versions of TensorFlow; updating to TF 2.0 should resolve it.
pip install --upgrade tensorflow
