This is the code:
import spacy

nlp_model = spacy.load('nlp_model')
This is the error:
OSError: [E053] Could not read config.cfg from nlp_model\config.cfg
I am using a pre-trained model from spaCy v2, and my spaCy version is 3.2.1. How can I use this model with my spaCy version?
You can't use a model from spaCy v2 with spaCy v3. The major versions (the first part of the version number) of spaCy and of your model need to match.
You need to upgrade your model or, if it's one you trained yourself, retrain it with spaCy v3.
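For example, with spaCy 3.x installed you can download and load a pipeline packaged for v3 instead of the old v2 model (en_core_web_sm is used here purely as an illustrative pipeline name):

# Run once in a shell to fetch a pipeline built for your spaCy 3.x install:
#   pip install -U spacy
#   python -m spacy download en_core_web_sm
#   python -m spacy validate   # lists installed pipelines and whether they match your spaCy version

import spacy

nlp_model = spacy.load('en_core_web_sm')  # substitute the v3 pipeline you actually need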
Related
I trained a model on Google Colab and dumped it, then finished the project in VS Code on my own system. I got an error saying the versions don't match, since I had the latest versions of the libraries locally while Colab had older ones. I had to retrain the model on my system, which took a lot of time, as my system has a basic configuration. My only reason for using Colab was to put the training load on Colab rather than on my machine.
I didn't know there would be a version conflict, as I assumed Colab would have the latest versions of the libraries.
In Colab (Google) I have TensorFlow version 2.9.2 and on my Raspberry Pi 4 TensorFlow version 2.4.1, so different versions. In Colab I built a pre-trained VGG19 model with input_shape=(220, 220, 3) and classified two types of images.
So once I have trained the model in the Google Colab environment, I export the model and its weights from Colab.
# serialize model to JSON
model_json = loaded_model2.to_json()
with open('/content/drive/MyDrive/dataset/extract/model_5.json', "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
loaded_model2.save_weights('/content/drive/MyDrive/model_5.h5')
print("Saved model to disk")
As I indicated, I then intend to use that trained model on my Raspberry Pi 4. So I create a model on the Raspberry just as I did in Colab, but I don't call 'fit'. What I do next is load the .h5 file with the weights that was generated in Google Colab.
And in principle this has worked for me:
import tensorflow as tf

model_new = tf.keras.Sequential()
model_new.add(tf.keras.applications.VGG19(include_top=False, weights='imagenet', pooling='avg', input_shape=(220, 220, 3)))
model_new.add(tf.keras.layers.Dense(2, activation="softmax"))
opt = tf.keras.optimizers.SGD(0.004)
model_new.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model_new.load_weights('/home/pi/projects/models/model_5.h5')
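Since the architecture was also serialized to JSON in Colab, an alternative sketch (assuming compatible TF/Keras versions on both machines, and an illustrative path for the copied JSON file) is to rebuild the model from that file instead of re-declaring the layers:

import tensorflow as tf

# Rebuild the architecture from the exported JSON, then attach the trained weights.
with open('/home/pi/projects/models/model_5.json') as json_file:  # path is illustrative
    model_new = tf.keras.models.model_from_json(json_file.read())

model_new.load_weights('/home/pi/projects/models/model_5.h5')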
I try to optimize my model using 'MvNCCompile' but it doesn't accept my frozen TF2 graph.
'Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.'.
Can I somehow convert my TF2 (Keras) model to the TF1 graph format so it can be used? Or is there another way to get the TF2 Keras model accepted by the Intel optimisation tool?
Hm. MvNCCompile is a deprecated tool for working with the NCS2. I suggest you try the latest version of OpenVINO: https://docs.openvinotoolkit.org/
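If you go the OpenVINO route, a rough sketch of the workflow is to export the Keras model as a TensorFlow SavedModel and feed that to OpenVINO's Model Optimizer; the exact mo invocation below is an assumption, so check it against the docs linked above:

import tensorflow as tf

# Export the TF2 Keras model as a SavedModel directory (standard tf.keras API).
model = tf.keras.models.load_model('my_model.h5')  # filename is illustrative
model.save('saved_model_dir')

# Then convert with OpenVINO's Model Optimizer from the shell, e.g. (verify flags locally):
#   mo --saved_model_dir saved_model_dir --output_dir ir_model
# The resulting IR (.xml/.bin) can then be run on the NCS2 through the OpenVINO runtime.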
I trained some data science models with scikit-learn v0.19.1. The models are stored in a pickle file. After upgrading to the latest version (v0.23.1), I get the following error when I try to load them:
File "../../Utils/WebsiteContentSelector.py", line 100, in build_page_selector
page_selector = pickle.load(pkl_file)
AttributeError: Can't get attribute 'DeprecationDict' on <module 'sklearn.utils.deprecation' from '/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py'>
Is there a way to upgrade without retraining all my models (which is very expensive)?
You used a new version of sklearn to load a model that was trained with an old version of sklearn.
So, the options are:
Retrain the model with the current version of sklearn, if you have the training script and data
Or fall back to the lower sklearn version reported in the warning message (a minimal sketch follows this list)
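A minimal sketch of the second option, assuming the models were trained with scikit-learn 0.19.1 as stated in the question (the filename is illustrative):

# In a shell/virtualenv, pin scikit-learn back to the training version first:
#   pip install scikit-learn==0.19.1
import pickle

with open('page_selector.pkl', 'rb') as pkl_file:  # hypothetical filename
    page_selector = pickle.load(pkl_file)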
Depending on the kind of sklearn model used: if it is a simple regression model, what you probably need is just the actual weights and bias (or intercept) values.
You can check these values in your model:
model.classes_
model.coef_
model.intercept_
They are of numpy type and can be pickled easily. You also need the same parameters that were passed to the model constructor, for example:
tol
max_iter
and so on. With these, the same model created with the same parameters in the upgraded version can be given the saved weights and intercept.
This way, no retraining is needed and you can use the upgraded sklearn.
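A minimal sketch of this approach, assuming a linear classifier such as LogisticRegression (attribute names differ for other estimator types, and the filenames are illustrative):

import pickle

# --- Step 1: with the OLD scikit-learn installed, extract the learned parameters ---
with open('page_selector.pkl', 'rb') as f:  # hypothetical filename
    old_model = pickle.load(f)

params = {
    'init': old_model.get_params(),      # tol, max_iter, C, ...
    'classes_': old_model.classes_,
    'coef_': old_model.coef_,
    'intercept_': old_model.intercept_,
}
with open('model_params.pkl', 'wb') as f:
    pickle.dump(params, f)               # plain numpy arrays and dicts pickle safely across versions

# --- Step 2: with the NEW scikit-learn installed, rebuild the estimator ---
from sklearn.linear_model import LogisticRegression

with open('model_params.pkl', 'rb') as f:
    params = pickle.load(f)

# Note: constructor arguments that were renamed or removed between versions
# may need to be dropped from params['init'] by hand.
new_model = LogisticRegression(**params['init'])
new_model.classes_ = params['classes_']
new_model.coef_ = params['coef_']
new_model.intercept_ = params['intercept_']
# new_model.predict(X) now works without retraining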
When lib versions are not backward compatible, you can do the following:
Downgrade sklearn back to the original version
Load each model, then extract and store its coefficients (which are model-specific; check the documentation)
Upgrade sklearn, load the coefficients, initialize models with them, and save the models
Related question.
I want to convert a Keras model to a TensorFlow Lite model. When I examined the documentation, it stated that tf.keras HDF5 models can be used as input. Does that mean I can use my saved HDF5 Keras model as input, or are tf.keras HDF5 models and Keras HDF5 models different things?
Documentation: https://www.tensorflow.org/lite/convert
Edit: I was able to convert my Keras model to a TensorFlow Lite model using this API, but I haven't tested it yet. My code:
converter = tf.lite.TFLiteConverter.from_keras_model_file(path + 'plant-recognition-model.h5')
tflite_model = converter.convert()
with open('plant-recognition-model.tflite', 'wb') as f:
    f.write(tflite_model)
tf.keras HDF5 models and Keras HDF5 models are not different things, apart from inevitable software version synchronization issues. This is what the official docs say:
tf.keras is TensorFlow's implementation of the Keras API specification. This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality
If the converter can convert a Keras model to tf.lite, it will deliver the same results. But tf.lite functionality is more limited than tf.keras; if that feature set is not enough for you, you can still work with TensorFlow and enjoy its other advantages.
Maybe it won't be long before your models can run on a smartphone.
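If you are on TensorFlow 2.x, note that the converter entry point is tf.lite.TFLiteConverter.from_keras_model rather than from_keras_model_file (which is the TF 1.x API); a minimal sketch, assuming the same model file as above:

import tensorflow as tf

# Load the saved Keras HDF5 model and convert it with the TF 2.x converter.
model = tf.keras.models.load_model('plant-recognition-model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('plant-recognition-model.tflite', 'wb') as f:
    f.write(tflite_model)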
I have a pickled scikit-learn classifier trained in version 0.14.1, and I want to use it on my mac which has scikit-learn version 0.17.1. When I try to run clf.score I get the following error:
AttributeError: 'SVC' object has no attribute '_dual_coef_'
Googling this, it looks like the problem is the difference in version between training the classifier and now wanting to test it. If retraining is not an option, is there a way to upgrade the classifier to work with the new version of scikit-learn?