ModuleNotFoundError: No module named 'sklearn.utils._bunch' - scikit-learn

I created a model on scikit-learn version 1.2.0 and saved it as a pickle file. Below is a screenshot with the details. In cell 65, when I read the pickle file back under version 1.2.0, it loads fine.
Here is the problem:
I need to read this model in scikit-learn version 1.0.2. We have to use version 1.0.2, so upgrading is not an option. When I read the file, it throws a "ModuleNotFoundError: No module named 'sklearn.utils._bunch'" error. Below is a screenshot with the details:
How can I fix this so I can read the model on version 1.0.2 as well?
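One possible workaround, offered as a hedged sketch rather than an official mechanism: scikit-learn 1.2 moved the Bunch class into the private module sklearn.utils._bunch, whereas in 1.0.2 it lives directly in sklearn.utils. If the unpickling fails only because of that module path, registering a stub module under the new name before loading may be enough (the file name model.pkl below is a placeholder):

import pickle
import sys
import types

import sklearn.utils

# Hedged shim: alias the missing private module to the 1.0.2 home of Bunch.
shim = types.ModuleType("sklearn.utils._bunch")
shim.Bunch = sklearn.utils.Bunch  # in 1.0.2, Bunch lives in sklearn.utils
sys.modules["sklearn.utils._bunch"] = shim

with open("model.pkl", "rb") as f:  # placeholder path to the pickled model
    model = pickle.load(f)

Even if this clears the import error, scikit-learn does not guarantee pickle compatibility across releases, so the unpickled estimator may still misbehave; re-training or re-saving the model under 1.0.2 is the safer fix.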

Related

AttributeError: module 'spacy' has no attribute 'cli' while running spacy.cli.download('en_core_web_lg') on Databricks

I have written code for an NLP program and download the 'en_core_web_lg' pipeline using spaCy. Until now the code was running, but now it shows an AttributeError:
module 'spacy' has no attribute 'cli'.
I have installed pypi-cli version 0.4.1 and spacy 3.2.3 but still cannot figure out the root cause or the solution.
I am new to coding.
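A hedged sketch of one common fix, assuming the failure is simply that the cli submodule is not auto-imported as an attribute of the top-level package: import it explicitly before calling download.

import spacy
import spacy.cli  # import the submodule explicitly instead of relying on the spacy.cli attribute

spacy.cli.download("en_core_web_lg")
nlp = spacy.load("en_core_web_lg")  # load the pipeline once the download completes

On some setups the freshly downloaded package is not importable until the kernel is restarted, so a restart after the download step may still be needed.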

Error while importing 'en_core_web_sm' for spacy in Azure Databricks

I am getting an error while loading 'en_core_web_sm' in a Databricks notebook. I have seen a lot of other questions about the same error, but they are of no help.
The code is as follows
import spacy
!python -m spacy download en_core_web_sm
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
# Process a sample document
text = "This is a test document"
doc = nlp(text)
I get the error "OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory"
The details of installation are
Python - 3.8.10
spaCy version 3.3
It simply does not work. I ran spacy validate and got the following:
ℹ spaCy installation:
/databricks/python3/lib/python3.8/site-packages/spacy
NAME SPACY VERSION
en_core_web_sm >=2.2.2 3.3.0 ✔
But the error still remains
Not sure if this message is relevant:
/databricks/python3/lib/python3.8/site-packages/spacy/util.py:845: UserWarning: [W094] Model 'en_core_web_sm' (2.2.5) specifies an under-constrained spaCy version requirement: >=2.2.2. This can lead to compatibility problems with older versions, or as new spaCy versions are released, because the model may say it's compatible when it's not. Consider changing the "spacy_version" in your meta.json to a version range, with a lower and upper pin. For example: >=3.3.0,<3.4.0
warnings.warn(warn_msg)
There is also this message when installing 'en_core_web_sm':
"Defaulting to user installation because normal site-packages is not writeable"
Any help will be appreciated
Ganesh
I suspect that you have a cluster with autoscaling, and when autoscaling happened, the new nodes didn't have that module installed. Another reason could be that a cluster node was terminated by the cloud provider and the cluster manager pulled in a new node.
To prevent such situations, I would recommend using a cluster init script, as described in the following answer; it guarantees that the module is installed even on new nodes. The content of the script is really simple:
#!/bin/bash
# Cluster init script: runs on every node, including fresh autoscaled ones, at startup.
pip install spacy
python -m spacy download en_core_web_sm
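As a belt-and-braces measure inside the notebook itself, here is a hedged sketch of a load-with-fallback pattern that downloads the model on demand if a node is missing it:

import spacy

try:
    nlp = spacy.load("en_core_web_sm")
except OSError:
    # Model missing on this node (e.g. a fresh autoscaled worker): fetch it, then load.
    from spacy.cli import download
    download("en_core_web_sm")
    nlp = spacy.load("en_core_web_sm")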

OSError: loading an h5 saved model in tensorflow keras after updating the environment in anaconda on windows with python 3.7

I am receiving an OSError (without any other text) from h5py when loading an h5 model created with keras/tensorflow, after updating my environment or when working in an up-to-date environment.
I trained some models with keras and tf in the older versions, and also with keras-tf v1.15, and saved them using model.save('filename.h5'). Afterwards I was able to load them and keep working with them, previously via keras.load_model and now via tensorflow.keras.models.load_model, without any problems, apart from some warnings that my tf build was not compiled to use the AVX2 instructions and so on.
The installed version is tensorflow 1.15 via pip install tensorflow-cpu, and it seems to work well; my environment is Anaconda3-2020.02-Windows-x86_64, installed from the Anaconda binaries on Windows.
After switching the packages to tensorflow-mkl, and having to update my environment because of environment conflicts (which show up even with a fresh install of Anaconda), the OSError raised by h5py appears.
Using the default environment packages from the Anaconda binary with tf-cpu seems to work fine, as does cloning that environment. After updating the environment with conda update --all, the error is raised with either tf-cpu or tf-mkl.
The version of h5py in both cases is '2.10.0', and the error is the following:
Traceback (most recent call last):
File "C:\Users\Oscar\bwSyncAndShare\OPT_PV22WP_intern\pv2wp_control\SIM\Sim_future.py", line 88, in <module>
model = load_model(pathfile_model)
File "C:\Users\Oscar\anaconda3\envs\optimizer2\lib\site-packages\tensorflow_core\python\keras\saving\save.py", line 142, in load_model
isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
File "C:\Users\Oscar\anaconda3\envs\optimizer2\lib\site-packages\h5py\_hl\base.py", line 44, in is_hdf5
return h5f.is_hdf5(filename_encode(fname))
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 156, in h5py.h5f.is_hdf5
OSError
Has anyone had this problem? I have tried the following:
- Training a model with the updated environment and saving it; when loading it, I get the same error.
- Updating to tf-cpu v2.3.1 with the base environment; loading then works as well.
- Creating a new env with conda create -n name python==3.7.x anaconda and then installing tf; this doesn't work.
I think some other library is causing the problem, but I cannot figure out which one.
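One way to narrow this down (a hedged debugging sketch; filename.h5 stands in for the real model path) is to query h5py directly, which separates a broken h5py/HDF5 installation from a corrupt or mis-saved file:

import h5py

path = "filename.h5"  # placeholder for the real model path
print(h5py.version.info)   # h5py, HDF5 and build details for this environment
print(h5py.is_hdf5(path))  # True if the file has a valid HDF5 signature
with h5py.File(path, "r") as f:
    print(list(f.keys()))  # top-level groups Keras wrote, e.g. model_weights

If is_hdf5 itself raises the bare OSError, the HDF5 stack in the environment is the likely culprit rather than the saved model.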
I used hd5 instead of h5 as the file extension, and that solved the problem.
I can load my deep model in Colab, but when I try to load that model on my PC, I can't.

Module 'tensorflow' has no attribute 'gfile' error while running tensorflow object detection api tutorial

I am trying to use the object detection tutorial from the TensorFlow API. I am using Python 3 and TensorFlow version 2, but I am getting the error below. I have tried several approaches:
File "C:\Aniruddhya\object_detection\object_detection\utils\label_map_util.py", line 137, in load_labelmap
with tf.gfile.GFile(path, 'r') as fid:
AttributeError: module 'tensorflow' has no attribute 'gfile'
Can someone help me run this?
code link: https://drive.google.com/drive/u/3/folders/1XHpnr5rsENzOOSzoWNTvRqhEbLKXaenL
It's not called that in TensorFlow 2. You might be using a TensorFlow 1 tutorial.
Version 1
tf.gfile.GFile
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/io/gfile/GFile
Version 2
tf.io.gfile.GFile
https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile
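A minimal sketch of the rename applied in place, assuming the TF2 API (the path label_map.pbtxt is a placeholder):

import tensorflow as tf

# TF2: GFile moved under tf.io.gfile
with tf.io.gfile.GFile("label_map.pbtxt", "r") as fid:
    contents = fid.read()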
If you have TensorFlow version 2, you can use the compat module to keep version 1 code working, too:
import tensorflow.compat.v1 as tf
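A hedged sketch of that route, which leaves the tutorial's tf.gfile calls untouched (again with a placeholder path):

import tensorflow.compat.v1 as tf  # TF1-style API surface on a TF2 install

# The tutorial's original call now resolves, since compat.v1 still exposes tf.gfile.
with tf.gfile.GFile("label_map.pbtxt", "r") as fid:
    contents = fid.read()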
I solved this problem by reinstalling tensorflow at the previous version: sudo pip3 install tensorflow==1.14.0
You may optionally downgrade to a previous version of tensorflow:
!pip install tensorflow==1.12.0
import tensorflow as tf
print(tf.__version__)
Otherwise, switch your code to tf.io.gfile, as shown above.

Tensorflow is creating an error on the proto-files

Background:
Purpose is to develop a machine learning algorithm involving tensorflow.
Problem:
Importing tensorflow results in the error: invalid proto descriptor for file "tensorboard/compat/proto/resource_handle.proto" (see the error log after my code at the end).
Action taken:
Tried running pip uninstall protobuf, then pip install --no-binary protobuf protobuf.
But that creates a second error that there is no google.protobuf. If I reinstall protobuf, I run into the first error again.
Platform:
Ubuntu 18.10, 64-bit, for GPU. Python 3.6.8.
My Code:
import tensorflow as tf  # installed as tensorflow-nightly-gpu; the error occurs regardless of the tf version
import pandas as pd
...
Error Log:
Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file
"tensorboard/compat/proto/resource_handle.proto":
tensorboard.ResourceHandleProto.device: "tensorboard.ResourceHandleProto.device" is already defined in file "tensorboard/src/resource_handle.proto".
tensorboard.ResourceHandleProto.container: "tensorboard.ResourceHandleProto.container" is already defined in file "tensorboard/src/resource_handle.proto".
tensorboard.ResourceHandleProto.name: "tensorboard.ResourceHandleProto.name" is already defined in file "tensorboard/src/resource_handle.proto".
tensorboard.ResourceHandleProto.hash_code: "tensorboard.ResourceHandleProto.hash_code" is already defined in file "tensorboard/src/resource_handle.proto".
tensorboard.ResourceHandleProto.maybe_type_name: "tensorboard.ResourceHandleProto.maybe_type_name" is already defined in file "tensorboard/src/resource_handle.proto".
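The "already defined" wording in that log is what protobuf prints when the same .proto definitions are registered twice. One hedged guess at a common culprit is two TensorBoard distributions (e.g. tensorboard and tb-nightly) installed side by side; a quick check, sketched with pkg_resources since the environment is Python 3.6:

import pkg_resources

# List every installed distribution whose name points at TensorBoard;
# more than one hit suggests conflicting copies of the same proto files.
for dist in pkg_resources.working_set:
    name = dist.project_name.lower()
    if "tensorboard" in name or name == "tb-nightly":
        print(dist.project_name, dist.version)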
Solved my own problem with these steps:
1. Remove all existing GPU, CUDA and cuDNN drivers, and remove all tensorflow installations from my computer.
2. Install the drivers for GPU, CUDA and cuDNN with the instructions here: https://www.tensorflow.org/install/gpu
3. Install tensorflow from here: https://www.tensorflow.org/install/pip
