How to overcome the following issue using TextRank? - python-3.x

Hello, I want to use the following package called TextRank; see the following URL for details:
https://github.com/davidadamojr/TextRank
After cloning the repository and installing all the dependencies with pip3, I tried to use it as follows:
textrank extract_summary test
However, I got the following error:
MacBook-Pro:TextRank-master $ textrank extract_summary test
Traceback (most recent call last):
File "/usr/local/bin/textrank", line 11, in <module>
load_entry_point('textrank==0.1.0', 'console_scripts', 'textrank')()
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/main.py", line 21, in extract_summary
summary = textrank.extract_sentences(f.read())
File "/usr/local/lib/python3.6/site-packages/textrank/__init__.py", line 169, in extract_sentences
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 801, in load
opened_resource = _open(resource_url)
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 919, in _open
return find(path_, path + ['']).open()
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 641, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource 'tokenizers/punkt/PY3/english.pickle' not found.
Please use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- '/Users/ad/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- ''
**********************************************************************
It seems that a data file required by the nltk library is missing, so I tried:
MacBook-Pro:TextRank-master adolfocamachogonzalez$ python3
Python 3.6.1 (default, Apr 4 2017, 09:40:21)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import nltk
>>> nltk.download()
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
However, I was not able to obtain the external resource. I tried copying and pasting the link into the browser, but all I got was an XML index like the following:
<?xml version="1.0"?>
<?xml-stylesheet href="index.xsl" type="text/xsl"?>
<nltk_data>
<packages>
<package checksum="d577c2cd0fdae148b36d046b14eb48e6" id="maxent_ne_chunker" languages="English" name="ACE Named Entity Chunker (Maximum entropy)" size="13404747" subdir="chunkers" unzip="1" unzipped_size="23604982" url="https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/chunkers/maxent_ne_chunker.zip" />
<package author="Australian Broadcasting Commission" checksum="ffb36b67ff24cbf7daaf171c897eb904" id="abc" name="Australian Broadcasting Commission 2006" size="1487851" subdir="corpora" unzip="1" unzipped_size="4054966" url="https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/abc.zip" webpage="http://www.abc.net.au/" />
<package checksum="ae529a1c5f13d6074f5b0d68d8edb537" contact="Gertjan van Noord" id="alpino" license="Distributed with permission of Gertjan van Noord" name="Alpino Dutch Treebank" size="2797255" subdir="corpora" unzip="1" unzipped_size="21604821" url="https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/alpino.zip" webpage="http://www.let.rug.nl/~vannoord/trees/" />
<package checksum="d3be36b53ab201372f1cd63ffc75e9a9" copyright="Public Domain (not copyrighted)" id="biocreative_ppi" license="Public Domain" name="BioCreAtIvE (Critical Assessment of Information Extraction Systems in Biology)" size="223566" subdir="corpora" unzip="1" unzipped_size="1537086" url="https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/biocreative_ppi.zip" webpage="http://www.mitre.org/public/biocreative/" />
<package author="W. N. Francis and H. Kucera" checksum="a0a8630959d3d937873b1265b0a05497" id="brown" license="May be used for non-commercial purposes." name="Brown Corpus" size="3314357" subdir="corpora" unzip="1" unzipped_size="10117565" url="https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/pa

The file english.pickle is part of the "punkt" tokenizer, which breaks text up into sentences. To download it, run the following once (or find "punkt" in the interactive downloader, under Models):
nltk.download("punkt")
The downloader will check a list of standard paths for a location it can write to, and will save the model file there. After that it will be available to textrank's internals.
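For example, from the same python3 prompt shown above (a minimal sketch; the download lands in one of the standard nltk_data directories listed in the error):
import nltk
nltk.download("punkt")  # fetches tokenizers/punkt, which includes english.pickle
Once the download finishes, textrank extract_summary test should get past the LookupError.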

Related

spaCy produces "PicklingError: Could not pickle object as excessively deep recursion required." after the first run (second run onwards)

I am currently working with spaCy in the Spyder3 editor, but after the first run of the simple code below it gives me the error
"PicklingError: Could not pickle object as excessively deep recursion required."
Could you help me resolve the issue? Am I missing any additional code or settings?
Thanks,
Saul
I can run the code with no issues in a Jupyter notebook, but it fails in Spyder3.
import spacy
nlp = spacy.load('en_core_web_sm')
It should run without any error. I have already installed 'en_core_web_sm'.
I am not sure what the problem is. Please find the whole error message below.
Reloaded modules: __mp_main__, spacy, thinc, thinc.about, thinc.neural, thinc.neural._classes, thinc.neural._classes.model, srsly, srsly._json_api, srsly.ujson, srsly.ujson.ujson, srsly.util, srsly._msgpack_api, srsly.msgpack, srsly.msgpack._version, srsly.msgpack.exceptions, srsly.msgpack._packer, srsly.msgpack._unpacker, srsly.msgpack._ext_type, srsly.msgpack._msgpack_numpy, srsly._pickle_api, srsly.cloudpickle, srsly.cloudpickle.cloudpickle, thinc.neural.util, thinc.neural.train, tqdm, tqdm._tqdm, tqdm._utils, tqdm._monitor, tqdm._tqdm_gui, tqdm._tqdm_pandas, tqdm._main, tqdm._version, thinc.neural.optimizers, thinc.neural.ops, thinc.neural.mem, thinc.check, thinc.compat, thinc.extra, thinc.extra.wrapt, thinc.extra.wrapt.wrappers, thinc.extra.wrapt._wrappers, thinc.extra.wrapt.decorators, thinc.extra.wrapt.importer, thinc.exceptions, wasabi, wasabi.printer, wasabi.tables, wasabi.util, wasabi.traceback, spacy.cli, spacy.cli.download, plac, plac_core, plac_ext, spacy.cli.link, spacy.compat, spacy.util, pkg_resources, pkg_resources.extern, pkg_resources._vendor, pkg_resources.extern.six, pkg_resources.py31compat, pkg_resources.extern.appdirs, pkg_resources._vendor.packaging.__about__, pkg_resources.extern.packaging, pkg_resources.extern.packaging.version, pkg_resources.extern.packaging._structures, pkg_resources.extern.packaging.specifiers, pkg_resources.extern.packaging._compat, pkg_resources.extern.packaging.requirements, pkg_resources.extern.pyparsing, pkg_resources.extern.packaging.markers, jsonschema, jsonschema.exceptions, attr, attr.converters, attr._make, attr._config, attr._compat, attr.exceptions, attr.filters, attr.validators, attr._funcs, jsonschema._utils, jsonschema.compat, jsonschema._format, jsonschema._types, pyrsistent, pyrsistent._pmap, pyrsistent._compat, pyrsistent._pvector, pyrsistent._transformations, pvectorc, pyrsistent._pset, pyrsistent._pbag, pyrsistent._plist, pyrsistent._pdeque, pyrsistent._checked_types, pyrsistent._field_common, pyrsistent._precord, pyrsistent._pclass, pyrsistent._immutable, pyrsistent._helpers, pyrsistent._toolz, jsonschema.validators, jsonschema._legacy_validators, jsonschema._validators, spacy.symbols, spacy.errors, spacy.about, spacy.cli.info, spacy.cli.package, spacy.cli.profile, thinc.extra.datasets, thinc.extra._vendorized, thinc.extra._vendorized.keras_data_utils, thinc.extra._vendorized.keras_generic_utils, spacy.cli.train, spacy._ml, thinc.v2v, thinc.neural._classes.affine, thinc.describe, thinc.neural._classes.relu, thinc.neural._classes.maxout, thinc.neural._classes.softmax, thinc.neural._classes.selu, thinc.i2v, thinc.neural._classes.hash_embed, thinc.neural._lsuv, thinc.neural._classes.embed, thinc.neural._classes.static_vectors, thinc.extra.load_nlp, thinc.t2t, thinc.neural._classes.convolution, thinc.neural._classes.attention, thinc.neural._classes.rnn, thinc.api, thinc.neural._classes.function_layer, thinc.neural._classes.feed_forward, thinc.t2v, thinc.neural.pooling, thinc.misc, thinc.neural._classes.batchnorm, thinc.neural._classes.layernorm, thinc.neural._classes.resnet, thinc.neural._classes.feature_extracter, thinc.linear, thinc.linear.linear, spacy.attrs, spacy.gold, spacy.cli.pretrain, spacy.tokens, spacy.tokens.doc, spacy.tokens.token, spacy.tokens.span, spacy.cli.debug_data, spacy.cli.evaluate, spacy.displacy, spacy.displacy.render, spacy.displacy.templates, spacy.cli.convert, spacy.cli.converters, spacy.cli.converters.conllu2json, spacy.cli.converters.iob2json, spacy.cli.converters.conll_ner2json, 
spacy.cli.converters.jsonl2json, spacy.cli.init_model, preshed, preshed.about, preshed.counter, spacy.vectors, spacy.cli.validate, spacy.glossary
Traceback (most recent call last):
File "<ipython-input-5-e0e768bc0aee>", line 1, in <module>
runfile('/home/saul/pythontraining/NLP/itemWork_3.py', wdir='/home/saul/pythontraining/NLP')
File "/home/saul/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 786, in runfile
execfile(filename, namespace)
File "/home/saul/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/home/saul/pythontraining/NLP/itemWork_3.py", line 11, in <module>
nlp = spacy.load('en_core_web_sm')
File "/home/saul/.local/lib/python3.7/site-packages/spacy/__init__.py", line 27, in load
return util.load_model(name, **overrides)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/util.py", line 131, in load_model
return load_model_from_package(name, **overrides)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/util.py", line 152, in load_model_from_package
return cls.load(**overrides)
File "/home/saul/anaconda3/lib/python3.7/site-packages/en_core_web_sm/__init__.py", line 12, in load
return load_model_from_init_py(__file__, **overrides)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/util.py", line 190, in load_model_from_init_py
return load_model_from_path(data_path, meta, **overrides)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/util.py", line 173, in load_model_from_path
return nlp.from_disk(model_path)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/language.py", line 791, in from_disk
util.from_disk(path, deserializers, exclude)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/util.py", line 630, in from_disk
reader(path / key)
File "/home/saul/.local/lib/python3.7/site-packages/spacy/language.py", line 787, in <lambda>
deserializers[name] = lambda p, proc=proc: proc.from_disk(p, exclude=["vocab"])
File "pipes.pyx", line 617, in spacy.pipeline.pipes.Tagger.from_disk
File "/home/saul/.local/lib/python3.7/site-packages/spacy/util.py", line 630, in from_disk
reader(path / key)
File "pipes.pyx", line 599, in spacy.pipeline.pipes.Tagger.from_disk.load_model
File "pipes.pyx", line 512, in spacy.pipeline.pipes.Tagger.Model
File "/home/saul/.local/lib/python3.7/site-packages/spacy/_ml.py", line 513, in build_tagger_model
pretrained_vectors=pretrained_vectors,
File "/home/saul/.local/lib/python3.7/site-packages/spacy/_ml.py", line 363, in Tok2Vec
embed >> convolution ** conv_depth, pad=conv_depth
File "/home/saul/.local/lib/python3.7/site-packages/thinc/check.py", line 131, in checker
return wrapped(*args, **kwargs)
File "/home/saul/.local/lib/python3.7/site-packages/thinc/neural/_classes/model.py", line 281, in __pow__
return self._operators["**"](self, other)
File "/home/saul/.local/lib/python3.7/site-packages/thinc/api.py", line 117, in clone
layers.append(copy.deepcopy(orig))
File "/home/saul/anaconda3/lib/python3.7/copy.py", line 169, in deepcopy
rv = reductor(4)
File "/home/saul/.local/lib/python3.7/site-packages/thinc/neural/_classes/model.py", line 96, in __getstate__
return srsly.pickle_dumps(self.__dict__)
File "/home/saul/.local/lib/python3.7/site-packages/srsly/_pickle_api.py", line 14, in pickle_dumps
return cloudpickle.dumps(data, protocol=protocol)
File "/home/saul/.local/lib/python3.7/site-packages/srsly/cloudpickle/cloudpickle.py", line 954, in dumps
cp.dump(obj)
File "/home/saul/.local/lib/python3.7/site-packages/srsly/cloudpickle/cloudpickle.py", line 288, in dump
raise pickle.PicklingError(msg)
PicklingError: Could not pickle object as excessively deep recursion required.
Yup, sure enough, your read operation is attempting a failed write operation (serialization) behind the scenes, most curious. What you are attempting is absolutely vanilla and comes straight from the docs. It certainly should work.

Sorry, but I can't reproduce this on my Mac. I used conda 4.7.5 to install spacy 2.0.12 (conda install spacy), which brought in thinc 6.10.3 as a dep. From where Tok2Vec appears in the stack trace line numbers, it is clear to me that you are running some different version. Once I've asked spacy to download it, spacy.load('en_core_web_sm') works flawlessly.

Your call stack goes from spacy to thinc to srsly. I do not have srsly installed. If I pip install srsly it pulls in 0.0.7, and it has no effect on subsequent successful spacy.load() operations.

Recommend wiping your environment and doing a clean conda install spacy; there's a fair chance that will remedy the situation.

versionitis

The thinc release notes point out this change in 7.0.0: "Use srsly for serialization." Asking conda to install a downrev spacy, or to downrev one of those two deps, may change the srsly interaction and hence change your symptom. Once you better understand the situation, perhaps by seeing a successful .load(), you may want to file a bug report against an affected project.
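If you want to see which versions Spyder's interpreter is actually picking up before wiping anything, here is a minimal check (pkg_resources ships with setuptools; 'en-core-web-sm' is the distribution name the model package normally registers under, so treat that entry as an assumption):
import pkg_resources
for name in ("spacy", "thinc", "srsly", "en-core-web-sm"):
    try:
        # print the installed version of each package named in the traceback
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, "not installed")
Run it once in Jupyter and once in the Spyder console; any difference points at the two front ends using different environments.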

Running clustalw on Google Cloud Platform with an error generating the .aln file on Ubuntu

I was trying to run clustalw through the Biopython library on Python 3 on Google Cloud Platform, and then generate a phylogenetic tree from the .dnd file using the Phylo module.
The code runs perfectly with no errors on my local system. However, when it runs on Google Cloud Platform it produces the following error:
python3 clustal.py
Traceback (most recent call last):
File "clustal.py", line 9, in <module>
align = AlignIO.read("opuntia.aln", "clustal")
File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/AlignIO/__init__.py", line 435, in read
first = next(iterator)
File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/AlignIO/__init__.py", line 357, in parse
with as_handle(handle, 'rU') as fp:
File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/File.py", line 113, in as_handle
with open(handleish, mode, **kwargs) as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'opuntia.aln'
If I run sudo python3 clustal.py, the error is:
File "clustal.py", line 1, in <module>
from Bio import AlignIO
ImportError: No module named 'Bio'
If I run it interactively in Python, the following happens:
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from Bio.Align.Applications import ClustalwCommandline
>>> in_file = "opuntia.fasta"
>>> clustalw_cline = ClustalwCommandline("/usr/bin/clustalw", infile=in_file)
>>> clustalw_cline()
('\n\n\n CLUSTAL 2.1 Multiple Sequence Alignments\n\n\nSequence format is Pearson\nSequence 1: gi|6273291|gb|AF191665.1|AF191665 902 bp\nSequence 2: gi|6273290|gb
|AF191664.1|AF191664 899 bp\nSequence 3: gi|6273289|gb|AF191663.1|AF191663 899 bp\nSequence 4: gi|6273287|gb|AF191661.1|AF191661 895 bp\nSequence 5: gi|627328
6|gb|AF191660.1|AF191660 893 bp\nSequence 6: gi|6273285|gb|AF191659.1|AF191659 894 bp\nSequence 7: gi|6273284|gb|AF191658.1|AF191658 896 bp\n\n', '\n\nERROR:
Cannot open output file [opuntia.aln]\n\n\n')
Here is my clustal.py file:
from Bio import AlignIO
from Bio import Phylo
import matplotlib
from Bio.Align.Applications import ClustalwCommandline
in_file = "opuntia.fasta"
clustalw_cline = ClustalwCommandline("/usr/bin/clustalw", infile=in_file)
clustalw_cline()
tree = Phylo.read("opuntia.dnd", "newick")
tree = tree.as_phyloxml()
Phylo.draw(tree)
I just want to know how to create the .aln and .dnd files on Google Cloud Platform the way I can in my local environment. I suspect it is because I don't have permission to create a new file on the server with Python. I tried f = open('test.txt', 'w') on Google Cloud, and it didn't work until I added sudo before the terminal command, as in sudo python3 text.py. However, as you can see, for clustalw adding sudo only makes the whole Biopython library go missing.
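One way to test the permission hypothesis without reaching for sudo is to ask Python whether the directory clustalw writes into is writable by your user (by default clustalw writes next to the input file, which is the working directory here since opuntia.fasta is a relative path); a minimal sketch:
import os
workdir = os.getcwd()  # where clustalw will try to create opuntia.aln and opuntia.dnd
print(workdir, "writable:", os.access(workdir, os.W_OK))
If it prints False, running the script from a directory you own (for example somewhere under your home directory) avoids both the permission error and the sudo workaround.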

Issues tokenizing text

I started analysing text and eventually ran into the need to download corpora, using PyCharm 2019 as my IDE. I am not really sure what the traceback message wants me to do, since I already used PyCharm's own library-import interface to enable the corpora. Why does an error stating that the corpora are not available to the code keep reappearing?
I imported TextBlob and tried a line like from textblob import TextBlob; see the code below.
from textblob import TextBlob
TextBlob(train['tweet'][1]).words
print("\nPRINT TOKENIZATION") # own instruction to allow for knowing what code result delivers
print(TextBlob(train['tweet'][1]).words)
….
I tried to install the data via nltk, with no luck; I got an error when downloading 'brown.tei':
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\jcst\AppData\Local\Programs\Python\Python37-32\lib\tkinter__init__.py", line 1705, in call
return self.func(*args)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 1796, in _download
return self._download_threaded(*e)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 2082, in _download_threaded
assert self._download_msg_queue == []
AssertionError
Traceback (most recent call last):
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 35, in decorated
return func(*args, **kwargs)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 57, in tokenize
return nltk.tokenize.sent_tokenize(text)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\tokenize__init__.py", line 104, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 870, in load
opened_resource = _open(resource_url)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 995, in _open
return find(path_, path + ['']).open()
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 701, in find
raise LookupError(resource_not_found)
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
Searched in:
- 'C:\Users\jcst/nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\share\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\lib\nltk_data'
- 'C:\Users\jcst\AppData\Roaming\nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/jcst/PycharmProjects/TextMining/ModuleImportAndTrainFileIntro.py", line 151, in
TextBlob(train['tweet'][1]).words
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 24, in get
value = obj.dict[self.func.name] = self.func(obj)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\blob.py", line 649, in words
return WordList(word_tokenize(self.raw, include_punc=False))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 73, in word_tokenize
for sentence in sent_tokenize(text))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\base.py", line 64, in itokenize
return (t for t in self.tokenize(text, *args, **kwargs))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 38, in decorated
raise MissingCorpusError()
textblob.exceptions.MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
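Since the Tkinter downloader dies with the AssertionError above, a non-interactive download aimed at one of the searched directories is worth trying; a sketch (the venv path is just one of the locations from the "Searched in" list):
import nltk
# any directory from the "Searched in" list will do; the venv one keeps the data project-local
nltk.download('punkt', download_dir=r'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data')
Alternatively, python -m textblob.download_corpora run from the same venv fetches punkt along with the other corpora TextBlob needs, as the error message suggests.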

Why does running Yolo_v2 on an object detection task give SystemError: unknown opcode?

I was using a pre-trained YOLO model for my object detection project. I downloaded the weights from someone else's Google Drive and I am using the "YOLOv2" model from this GitHub repo.
My conda environment configuration:
Python 3.6.7 :: Anaconda, Inc.
keras 2.2.4
Tensorflow 1.13.1 backend
While running the program, I got the below error:
EDIT: Complete traceback
/home/anubh/anaconda3/envs/cMLdev/bin/python /snap/pycharm-professional/121/helpers/pydev/pydevconsole.py --mode=client --port=42727
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['/home/anubh/PycharmProjects/add_projects/blendid_data_challenge'])
PyDev console: starting.
Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44)
[GCC 7.3.0] on linux
runfile('/home/anubh/PycharmProjects/codingforfun/machine_learning/deepLearningAI-ANG/week3/car_detection_for_autonomous_driving/pretrain_yolo_model_car_detection.py', wdir='/home/anubh/PycharmProjects/codingforfun/machine_learning/deepLearningAI-ANG/week3/car_detection_for_autonomous_driving')
Using TensorFlow backend.
2019-03-20 11:08:41.522694: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
XXX lineno: 31, opcode: 0
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/snap/pycharm-professional/121/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/snap/pycharm-professional/121/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/anubh/PycharmProjects/codingforfun/machine_learning/deepLearningAI-ANG/week3/car_detection_for_autonomous_driving/pretrain_yolo_model_car_detection.py", line 89, in <module>
main()
File "/home/anubh/PycharmProjects/codingforfun/machine_learning/deepLearningAI-ANG/week3/car_detection_for_autonomous_driving/pretrain_yolo_model_car_detection.py", line 86, in main
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
File "/home/anubh/PycharmProjects/codingforfun/machine_learning/deepLearningAI-ANG/week3/car_detection_for_autonomous_driving/pretrain_yolo_model_car_detection.py", line 66, in predict
yolo_model,class_names, scores, boxes,classes = build_graph(summary_needed=1)
File "/home/anubh/PycharmProjects/codingforfun/machine_learning/deepLearningAI-ANG/week3/car_detection_for_autonomous_driving/pretrain_yolo_model_car_detection.py", line 30, in build_graph
yolo_model = load_model("model_data/yolo.h5") # (m, 19, 19, 5, 85) tensor
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/engine/saving.py", line 225, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/engine/saving.py", line 458, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/layers/__init__.py", line 55, in deserialize
printable_module_name='layer')
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
list(custom_objects.items())))
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/engine/network.py", line 1032, in from_config
process_node(layer, node_data)
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/engine/network.py", line 991, in process_node
layer(unpack_singleton(input_tensors), **kwargs)
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/home/anubh/anaconda3/envs/cMLdev/lib/python3.6/site-packages/keras/layers/core.py", line 687, in call
return self.function(inputs, **arguments)
File "/home/don/tensorflow/yad2k/yad2k/models/keras_yolo.py", line 31, in space_to_depth_x2
SystemError: unknown opcode
I found two threads (1 & 2) that try to answer how to get a TensorFlow binary compiled to support my CPU instructions.
I also found an easy workaround in someone's GitHub issue, but the reasoning wasn't clear at all; they were just using trial and error.
My question, though, is this: with the same environment configuration I have used ResNet-50 and VGG-16 models for image classification, plus plenty of other Keras functionality on the TensorFlow backend and TensorFlow directly, and all of it works with no such error.
So what is so special about the TensorFlow incompatibility with the Yolo_v2 model? Could anyone also explain why this happens, which TensorFlow versions would work, and how to decide that before working with a given model?
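For what it's worth, the failure can be isolated from the rest of the pipeline by reproducing just the step named in the traceback (the path and filename are taken from the traceback above):
from keras.models import load_model
# per the traceback, this is the call that raises SystemError: unknown opcode
yolo_model = load_model("model_data/yolo.h5")
If ResNet-50 and VGG-16 load fine in the same environment, that narrows the problem to deserialising this particular saved model rather than to Keras or TensorFlow in general.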

Unable to install Scipy using pip install command

I have this configuration:
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
and I am installing SciPy, but I get the error below and fail to understand the issue. Please stick to plain Python, not Anaconda or other distributions.
(pip3py3) C:\Users\x\PycharmProjects\a>pip install scipy-0.19.1-cp35-cp35m-win_amd64.whl
Processing c:\users\x\pycharmprojects\a\scipy-0.19.1-cp35-cp35m-win_amd64.whl
Exception:
Traceback (most recent call last):
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\commands\install.py", line 335, in run
wb.build(autobuilding=True)
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\req\req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\req\req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\download.py", line 809, in unpack_url
unpack_file_url(link, location, download_dir, hashes=hashes)
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\download.py", line 715, in unpack_file_url
unpack_file(from_path, location, content_type, link)
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\utils\__init__.py", line 599, in unpack_file
flatten=not filename.endswith('.whl')
File "C:\Users\x\pip3py3\lib\site-packages\pip-9.0.1-py3.5.egg\pip\utils\__init__.py", line 484, in unzip_file
zip = zipfile.ZipFile(zipfp, allowZip64=True)
File "C:\Users\x\AppData\Local\Programs\Python\Python35\lib\zipfile.py", line 1026, in __init__
self._RealGetContents()
File "C:\Users\x\AppData\Local\Programs\Python\Python35\lib\zipfile.py", line 1113, in _RealGetContents
fp.seek(self.start_dir, 0)
OSError: [Errno 22] Invalid argument
My problem was somewhat strange. I solved it by installing numpy+mkl from http://www.lfd.uci.edu/~gohlke/pythonlibs/, and after that the scipy installation worked for me.
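Once the numpy+mkl wheel and then the scipy wheel are installed, a quick sanity check from the same interpreter (a minimal sketch; nothing project-specific assumed):
import numpy
import scipy
# both should import cleanly and report the versions of the wheels just installed
print(numpy.__version__, scipy.__version__)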
