I am facing the following attribute error when loading a GloVe model:
Code used to load the model:
import spacy

nlp = spacy.load('en_core_web_sm')
tokenizer = spacy.load('en_core_web_sm', disable=['tagger', 'parser', 'ner', 'textcat'])
nlp.vocab.vectors.from_glove('../models/GloVe')
I get the following attribute error when trying to load the GloVe model:
AttributeError: 'spacy.vectors.Vectors' object has no attribute 'from_glove'
I have tried searching on Stack Overflow and elsewhere but can't seem to find a solution. Thanks!
From pip list:
spacy 3.1.4
spacy-legacy 3.0.8
en-core-web-sm 3.1.0
Use spacy init vectors to load vectors from word2vec/glove text format into a new pipeline: https://spacy.io/api/cli#init-vectors
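For example, a minimal sketch run from a notebook cell (the file name glove.6B.300d.txt and the output directory glove_vectors are placeholders; point them at your actual GloVe text file and target path):
!python -m spacy init vectors en ../models/GloVe/glove.6B.300d.txt ../models/glove_vectors
import spacy
# The output directory is a serialized blank pipeline carrying the converted vectors.
nlp = spacy.load('../models/glove_vectors')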
spaCy 3.1.4 does not have the from_glove method; it was removed in spaCy 3.
I was able to use nlp.vocab.vectors.from_glove() in spaCy 2.2.4.
If you want, you can switch your spaCy version by running:
!pip install spacy==2.2.4
in a Jupyter cell.
I am getting an error while loading 'en_core_web_sm' in spaCy in a Databricks notebook. I have seen a lot of other questions about the same issue, but they were of no help.
The code is as follows
import spacy
!python -m spacy download en_core_web_sm
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
# Process a sample text
text = "This is a test document"
doc = nlp(text)
I get the error "OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory"
The details of the installation are:
Python: 3.8.10
spaCy: 3.3
It simply does not work. I tried the following:
ℹ spaCy installation:
/databricks/python3/lib/python3.8/site-packages/spacy
NAME             SPACY     VERSION
en_core_web_sm   >=2.2.2   3.3.0     ✔
But the error still remains
I am not sure if this message is relevant:
/databricks/python3/lib/python3.8/site-packages/spacy/util.py:845: UserWarning: [W094] Model 'en_core_web_sm' (2.2.5) specifies an under-constrained spaCy version requirement: >=2.2.2. This can lead to compatibility problems with older versions, or as new spaCy versions are released, because the model may say it's compatible when it's not. Consider changing the "spacy_version" in your meta.json to a version range, with a lower and upper pin. For example: >=3.3.0,<3.4.0
warnings.warn(warn_msg)
Also, this message appears when installing 'en_core_web_sm':
"Defaulting to user installation because normal site-packages is not writeable"
Any help will be appreciated
Ganesh
I suspect that you have a cluster with autoscaling, and when it scaled up, the new nodes didn't have that module installed. Another possibility is that a cluster node was terminated by the cloud provider and the cluster manager pulled in a new node.
To prevent such situations, I would recommend using a cluster init script, as described in the following answer; it guarantees that the module is installed even on new nodes. The content of the script is really simple:
#!/bin/bash
# Runs on every node at startup, including nodes added later by autoscaling.
pip install spacy
python -m spacy download en_core_web_sm
I'm trying to use spaCy for POS tagging in Spanish. I have checked the official documentation and read various posts on Stack Overflow, but none of it has worked for me.
I have Python 3.7 and spaCy 2.2.4 installed, and I'm running my code from a Jupyter notebook.
So, as the documentation suggests, I tried the following.
From my terminal:
python -m spacy download en_core_web_sm
This gave the result:
Download and installation successful
Then, in my Jupyter notebook:
import spacy
nlp = spacy.load("es_core_news_sm")
And I got the following error:
ValueError: [E173] As of v2.2, the Lemmatizer is initialized with an instance of Lookups containing the lemmatization tables. See the docs for details: https://spacy.io/api/lemmatizer#init
Additionally, I tried:
import spacy
nlp = spacy.load("es_core_news_sm")
And this gave me a different error:
OSError: Can't find model 'es_core_news_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory
Could you please help me to solve this error?
You downloaded the English model. In order to use the Spanish model, you have to download it: python -m spacy download es_core_news_sm
After downloading the right model, you can import it as follows:
import spacy
import es_core_news_sm
nlp = es_core_news_sm.load()
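For the original goal of POS tagging, a minimal sketch using that nlp object (the sample sentence is just an illustration):
doc = nlp("Esto es una frase de ejemplo.")
for token in doc:
    print(token.text, token.pos_)  # each word and its part-of-speech tag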
I am trying to use the object detection tutorial from the TensorFlow API. I am using Python 3 and TensorFlow version 2, but I am getting the error below. I have tried several things:
File "C:\Aniruddhya\object_detection\object_detection\utils\label_map_util.py", line 137, in load_labelmap
with tf.gfile.GFile(path, 'r') as fid:
AttributeError: module 'tensorflow' has no attribute 'gfile'
Can someone help me run this?
code link: https://drive.google.com/drive/u/3/folders/1XHpnr5rsENzOOSzoWNTvRqhEbLKXaenL
It's not called that in TensorFlow 2. You might be using a TensorFlow 1 tutorial.
Version 1
tf.gfile.GFile
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/io/gfile/GFile
Version 2
tf.io.gfile.GFile
https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile
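Applied to the line from the traceback, the TensorFlow 2 fix is just the renamed call (the path below is a placeholder):
import tensorflow as tf

with tf.io.gfile.GFile('path/to/label_map.pbtxt', 'r') as fid:  # placeholder path
    label_map_string = fid.read()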
If you have TensorFlow 2, you can also use the compatibility module, which exposes the version 1 API:
import tensorflow.compat.v1 as tf
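With this alias in place, TF1-style calls such as tf.gfile.GFile work again on a TensorFlow 2 installation, which is often enough to run older tutorial code unchanged.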
I solved this problem by reinstalling TensorFlow at a previous version: sudo pip3 install tensorflow==1.14.0
You may optionally downgrade to a previous version of TensorFlow:
!pip install tensorflow==1.12.0
import tensorflow as tf
print(tf.__version__)
Otherwise, change the call to use tf.io.gfile.GFile instead of tf.gfile.GFile.
I noticed DenseNet has been added to Keras (https://github.com/keras-team/keras/tree/master/keras/applications), and I want to apply it in my project. But when I tried to import it in a Jupyter notebook (Anaconda), I got an error saying:
module 'keras.applications' has no attribute 'densenet'
It seems like DenseNet has not been incorporated into the current version of Keras.
Any idea how I can add it myself?
DenseNet was added in Keras version 2.1.3. What version of Keras are you running?
Have you tried updating Keras with pip install keras --upgrade since January?
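Once you are on Keras 2.1.3 or later, a quick version check and a minimal sketch of the import (weights=None avoids downloading the pretrained ImageNet weights):
import keras
print(keras.__version__)  # should be >= 2.1.3

from keras.applications.densenet import DenseNet121
model = DenseNet121(weights=None)  # or weights='imagenet' for pretrained weights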
I run the following lines of code in a Jupyter notebook:
import spacy
nlp = spacy.load('en')
And get following error:
Warning: no model found for 'en_default'
Only loading the 'en' tokenizer.
I am using Python 3.5.3, spaCy 1.9.0, and Jupyter Notebook 5.0.0.
I installed spaCy using conda install spacy and downloaded the model with python3 -m spacy download en.
I am able to import spacy and load 'en' from my terminal but not from a jupyter notebook.
Based on the answer in your comments, it seems fairly clear that the Python interpreter Jupyter uses and your system Python are not the same, and therefore likely do not share libraries.
I would recommend re-running the installation, or specifically installing the en model into the correct spaCy environment. Replace the path below with the full path to your environment's Python, if it differs:
//anaconda/envs/capstone/bin/python -m spacy download en
That should be enough. Let me know if there are any issues.
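To check which interpreter the notebook is actually using (and therefore where the model needs to be installed), a quick sketch:
import sys
print(sys.executable)  # compare with the Python you run spacy download from in the terminal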
You can also download the en language model from within the Jupyter notebook:
import sys
!{sys.executable} -m spacy download en