Pysal does not have attribute open - pysal

Hi, I am using Anaconda with Python 3.7 and I have imported PySAL into my environment. Now I am trying to load a dataset and open it with pysal, but to my surprise it appears that pysal does not have the attribute open...
import pysal as ps
import libpysal as lps
lps.examples.explain('us_income')
csv_path = lps.examples.get_path('usjoin.csv')
f = ps.open(csv_path)
I am getting an error AttributeError: module 'pysal' has no attribute 'open'
How can I fix it?

The example dataset can be accessed using the modified script:
# Import packages
import pysal as ps
import libpysal as lps
# Load example data
lps.examples.explain('us_income')
csv_path = lps.examples.get_path('usjoin.csv')
# Note the difference here
f = ps.lib.io.open(csv_path)
These changes are a result of PySAL migrating to 2.0; this assumes you are using PySAL >= 2.0.
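If you prefer to skip the pysal wrapper entirely, here is a minimal sketch of the same thing through libpysal directly (assuming your libpysal version exposes io.open, and that 'Name' is a column in usjoin.csv):
# Open the example CSV with libpysal's file I/O instead of ps.lib.io.open.
import libpysal as lps

csv_path = lps.examples.get_path('usjoin.csv')
f = lps.io.open(csv_path)
names = f.by_col('Name')   # assumed column name; adjust to your dataset
f.close()
print(names[:5])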

Related

'XFoil' object has no attribute '_lib'

Hello, I'm trying to use the XFoil package in Python. As the instructions say, I installed the package and wrote the following lines, which are also shown in the documentation as a simple example. But I always get an error when running the second line, xf = XFoil():
AttributeError: 'XFoil' object has no attribute '_lib'
Thanks for any suggestions.
from xfoil import XFoil
xf = XFoil()
from xfoil.test import naca0012
xf.airfoil = naca0012
xf.Re = 1e6
xf.max_iter = 40
a, cl, cd, cm = xf.aseq(-20, 20, 0.5)
import matplotlib.pyplot as plt
plt.plot(a, cl)
plt.show()
I got the same error. I installed xfoil on Windows with Python 3.7.13. The import from xfoil import XFoil worked fine; however, as you mentioned, xf = XFoil() resulted in an attribute error. I tried the installation guidelines given on the GitHub page and that worked fine. For brevity I have copied the guidelines below:
create a file named distutils.cfg in PYTHONPATH\Lib\distutils with the following contents:
[build]
compiler=mingw32
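If you are unsure where PYTHONPATH\Lib\distutils is on your machine, a small sketch to locate it:
# Print the directory of the bundled distutils package; distutils.cfg goes in this folder.
import distutils
import os

print(os.path.dirname(distutils.__file__))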
I hope that helps.

tensorflow.lite.python.convert.ConverterError: could not find toco_from_protos binary

In TensorFlow 2.2 I trained a MobileNetV1 model and got a .h5 file. Now I want to convert it to a .tflite file for an Android app, but I get the error in the title. Here is my code:
import tensorflow as tf
from tensorflow.keras.models import load_model

model = load_model(h5path)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
litemodel = converter.convert()
open(litepath, "wb").write(litemodel)
here is my error:
tensorflow.lite.python.convert.ConverterError: Could not find toco_from_protos binary, make sure your virtualenv bin directory or pip local bin directory is in your path.
In particular, if you have installed TensorFlow with --user, make sure you add the install directory to your path.
For example:
Linux: export PATH=$PATH:~/.local/bin/
Mac: export PATH=$PATH:~/Library/Python/<version#>/bin
Alternative, use virtualenv.
Please help me with this. The server runs a Linux system.
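One quick way to check what that error message is describing is to see whether the toco_from_protos helper is actually visible on your PATH; a minimal sketch using only the standard library:
# Look up the converter helper the error message complains about.
import shutil

print(shutil.which("toco_from_protos"))   # None means its bin directory is not on PATH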
It seems that your environment does not have all the requirements installed.
If the problem persists, or for similar issues, I would also suggest trying it in Google Colab. In that environment you can import the TensorFlow and Keras libraries like this:
import tensorflow as tf
import tensorflow.keras as K
import os
import numpy as np
import sys
import matplotlib.pyplot as plt
np.set_printoptions(threshold=sys.maxsize)
print(tf.__version__)
print(K.__version__)
And you are good to make your conversions. For your problem I would use the code snippet below:
# WHOLE MODEL
keras_model = tf.keras.models.load_model('my_model.h5')             # load the saved Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()                                   # bytes of the TFLite flatbuffer
open("my_model.tflite", "wb").write(tflite_model)
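Optionally, you can sanity-check the converted file by loading it back with the TFLite interpreter (a short sketch; the file name matches the snippet above):
# Load the freshly written .tflite file and inspect its input/output tensors.
import tensorflow as tf   # already imported above

interpreter = tf.lite.Interpreter(model_path="my_model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())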
Happy coding!

ImportError: cannot import name 'joblib' from 'sklearn.externals'

I am trying to load my saved model from S3 using joblib:
import pandas as pd
import numpy as np
import json
import subprocess
import sqlalchemy
from sklearn.externals import joblib

ENV = 'dev'

def load_d2v(fname, env):
    model_name = fname
    if env == 'dev':
        try:
            # try to load a locally cached copy first
            model = joblib.load(model_name)
        except:
            s3_base_path = 's3://sd-flikku/datalake/doc2vec_model'
            path = s3_base_path + '/' + model_name
            command = "aws s3 cp {} {}".format(path, model_name).split()
            print('loading...' + model_name)
            subprocess.call(command)
            model = joblib.load(model_name)
    else:
        s3_base_path = 's3://sd-flikku/datalake/doc2vec_model'
        path = s3_base_path + '/' + model_name
        command = "aws s3 cp {} {}".format(path, model_name).split()
        print('loading...' + model_name)
        subprocess.call(command)
        model = joblib.load(model_name)
    return model

model_d2v = load_d2v('model_d2v_version_002', ENV)
But I get this error:
from sklearn.externals import joblib
ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py)
Then I tried installing joblib and importing it directly with
import joblib
but it gave me this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in load_d2v_from_s3
File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load
obj = _unpickle(fobj, filename, mmap_mode)
File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle
obj = unpickler.load()
File "/usr/lib64/python3.7/pickle.py", line 1088, in load
dispatch[key[0]](self)
File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global
klass = self.find_class(module, name)
File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class
__import__(module, level=0)
ModuleNotFoundError: No module named 'sklearn.externals.joblib'
Can you tell me how to solve this?
You should directly use
import joblib
instead of
from sklearn.externals import joblib
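For example, a minimal sketch of saving and reloading an estimator with the top-level joblib (the estimator and file name here are just illustrative):
# Fit a small example model, persist it with joblib, and load it back.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

joblib.dump(model, 'model.pkl')        # save
restored = joblib.load('model.pkl')    # load
print(restored.score(X, y))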
It looks like your existing pickle save file (model_d2v_version_002) encodes a reference to a module in a non-standard location – a joblib that's in sklearn.externals.joblib rather than at the top level.
The current scikit-learn documentation only talks about a top-level joblib – e.g. in the 3.4.1 Persistence example – but I do see a reference in someone else's old issue to a DeprecationWarning in scikit-learn version 0.21 about an older sklearn.externals.joblib variant going away:
Python37\lib\site-packages\sklearn\externals\joblib\__init__.py:15:
DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and
will be removed in 0.23. Please import this functionality directly
from joblib, which can be installed with: pip install joblib. If this
warning is raised when loading pickled models, you may need to
re-serialize those models with scikit-learn 0.21+.
'Deprecation' means marking something as inadvisable to rely upon, as it is likely to be discontinued in a future release (often, but not always, with a recommended newer way to do the same thing).
I suspect your model_d2v_version_002 file was saved from an older version of scikit-learn, and you're now using scikit-learn (aka sklearn) version 0.23+, which has totally removed the sklearn.externals.joblib variation. Thus your file can't be directly or easily loaded into your current environment.
But, per the DeprecationWarning, you can probably temporarily use an older scikit-learn version to load the file the old way once, then re-save it with the now-preferred way. Given the warning info, this would probably require scikit-learn version 0.21.x or 0.22.x, but if you know exactly which version your model_d2v_version_002 file was saved from, I'd try to use that. The steps would roughly be:
create a temporary working environment (or roll back your current working environment) with the older sklearn
do imports something like:
import sklearn.externals.joblib as extjoblib
import joblib
extjoblib.load() your old file as you'd planned, but then immediately re-joblib.dump() the file using the top-level joblib. (You likely want to use a distinct name, to keep the older file around, just in case.)
move/update to your real, modern environment, and only import joblib (top level) to use joblib.load() – no longer having any references to sklearn.externals.joblib in either your code or your stored pickle files. A sketch of the one-time re-save is shown below.
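Here is a minimal sketch of that one-time re-save, assuming a temporary environment with scikit-learn 0.21.x or 0.22.x (where sklearn.externals.joblib still exists); the new file name is just an example:
# Run this once inside the temporary environment with the older scikit-learn.
import sklearn.externals.joblib as extjoblib   # old, deprecated location
import joblib                                   # top-level joblib

old_path = 'model_d2v_version_002'              # your existing pickle
new_path = 'model_d2v_version_002_resaved'      # example name; keeps the original file intact

model = extjoblib.load(old_path)                # load via the old module path
joblib.dump(model, new_path)                    # re-save with top-level joblib

# In the modern environment, plain joblib.load(new_path) will then work.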
You can import joblib directly by installing it as a dependency and using import joblib; see the joblib documentation.
Maybe your code is outdated. Anyone who wants to use fetch_mldata in a handwritten-digit project should use fetch_openml instead. (link)
In old versions of sklearn:
from sklearn.datasets import fetch_mldata   # deprecated loader
from sklearn.externals import joblib
mnist = fetch_mldata('MNIST original')
In sklearn 0.23 (stable release):
import joblib
import numpy as np
from sklearn import datasets

dataset = datasets.fetch_openml("mnist_784")
features = np.array(dataset.data, 'int16')
labels = np.array(dataset.target, 'int')
For more info about the deprecation of fetch_mldata, see the scikit-learn docs.
None of the other answers worked for me; with a small change, this modification was OK for me:
import sklearn.externals as extjoblib
import joblib
For this error, I had to directly use the following and it worked like a charm:
import joblib
Simple
In case the execution/call to joblib is within another .py program instead of your own (in such a case, even if you have installed joblib, it still causes an error from within the calling Python program unless you change its code, which I thought would be messy), I tried creating a hard link:
(windows version)
Python> import joblib
then inside your sklearn path >......\Lib\site-packages\sklearn\externals
mklink /J ./joblib .....\Lib\site-packages\joblib
(you can run the above with a ! or %, i.e. !mklink ... or %mklink ..., inside your Jupyter notebook, or use a Python OS command...)
This effectively creates a virtual folder of joblib within the "externals" folder.
Remarks:
Of course, to be more version resilient, your code has to check beforehand that the sklearn version is >= 0.23.
This would be an alternative to changing the sklearn version.
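A minimal sketch of that version check, falling back to the old import on older installs:
# Parse the major/minor version of sklearn and pick the matching joblib import.
import sklearn

major, minor = (int(x) for x in sklearn.__version__.split(".")[:2])
if (major, minor) >= (0, 23):
    import joblib                           # joblib was removed from sklearn.externals in 0.23
else:
    from sklearn.externals import joblib    # older scikit-learn still bundles it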
When getting the error:
from sklearn.externals import joblib
it means the import is deprecated in newer versions.
For new versions, do the following:
conda install -c anaconda scikit-learn (install using the Anaconda Prompt)
import joblib (in a Jupyter Notebook)
I had the same problem.
What I did not realize was that joblib was already installed!
So what you have to do is replace
from sklearn.externals import joblib
with
import joblib
and that is it.
After a long investigation, given my computer setup, I found that it was because an SSL certificate was required to download the dataset.

How to use the communities module "python-louvain" in networkx 2.2?

I used to use this module like this:
import community
import networkx as nx

if __name__ == '__main__':
    G = nx.karate_club_graph()
    pos = nx.spring_layout(G)
    partition = community.best_partition(G)
I installed the correct module:
sudo pip3 install python-louvain
I get this error:
AttributeError: module 'community' has no attribute 'best_partition'
As far as I know it follows the documentation presented here.
It seems some others have run into this problem before, see:
https://bitbucket.org/taynaud/python-louvain/issues/23/module-has-no-attribute-best_partition
If you have another library called community installed, that may be causing the problem. Here is one solution proposed in the thread I linked to:
from community import community_louvain
partition = community_louvain.best_partition(G)
I am a beginner with NetworkX as well, but I used the following syntax in a Jupyter notebook and it worked fine for me.
!pip install python-louvain
from community import community_louvain
communities =community_louvain.best_partition(G)
You need the package named python-louvain from here.
!pip install python-louvain
import networkx as nx
import community
import numpy as np
np.random.seed(0)
W = np.random.rand(15,15)
np.fill_diagonal(W,0.0)
G = nx.from_numpy_array(W)
louvain_partition = community.best_partition(G, weight='weight')
modularity2 = community.modularity(louvain_partition, G, weight='weight')
print("The modularity Q based on networkx is {}".format(modularity2))
The modularity Q based on networkx is 0.0849022950503318
I am not sure why the following situation exists, but there appears to be another package called "community" that does not contain the function "community.best_partition". As stated above, you want the "python-louvain" package, which appears to include a "community" part?! In PyCharm 2020.3, under Preferences -> Project: Python Interpreter, I deleted the "community" package and added the "python-louvain" package. After that, "import community" still worked as did "community.best_partition".
You should install the package below. I use it and it works; I installed it on Windows.
https://pypi.org/project/python-louvain/
Run "pip install python-louvain" in cmd and after that write a program like this:
import community
import networkx as nx
import matplotlib.pyplot as plt

G = nx.erdos_renyi_graph(30, 0.05)
partition = community.best_partition(G)
size = float(len(set(partition.values())))
pos = nx.spring_layout(G)
count = 0
for com in set(partition.values()):
    count = count + 1
    list_nodes = [nodes for nodes in partition.keys() if partition[nodes] == com]
    nx.draw_networkx_nodes(G, pos, list_nodes, node_size=20, node_color=str(count / size))
nx.draw_networkx_edges(G, pos, alpha=0.5)
plt.show()
I use Python 3.7.
For what it's worth: I had to
pip uninstall community
then
pip install python-louvain
then
pip install networkx
in order to get my conda py37 environment to work correctly and to be able to call community.best_partition() without an attribute error.
I think if you have networkx installed before python-louvain, it will claim the namespace for community and not allow you to run what you want.
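One quick way to see which package actually owns the community name (a small diagnostic sketch):
# Check which 'community' module gets imported and whether it is python-louvain's.
import community

print(community.__file__)                     # the path reveals which installed package won the name
print(hasattr(community, "best_partition"))   # True for python-louvain's community module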
I use this solution:
import community.community_louvain as community_louvain
part = community_louvain.best_partition(G)
Reference: https://programmerclick.com/article/70941941990/
But here is another alternative:
I figured it out. I reinstalled and reloaded, and everything works fine.
P.S. For anyone who does not know how to reload a module in Jupyter:
from importlib import reload
reload(community)
Reference: https://github.com/taynaud/python-louvain/issues/48

python 3.x ImportError: No module named 'cStringIO'

How do I solve an ImportError: No module named 'cStringIO' under Python 3.x?
From the Python 3.0 changelog:
The StringIO and cStringIO modules are gone. Instead, import the io module and use io.StringIO or io.BytesIO for text and data respectively.
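For instance, a minimal illustration of the two replacements (io.StringIO for text, io.BytesIO for bytes):
from io import StringIO, BytesIO

# Text buffer: replacement for StringIO.StringIO
s = StringIO()
s.write("hello")
print(s.getvalue())       # 'hello'

# Bytes buffer: replacement for cStringIO.StringIO used with binary data
b = BytesIO()
b.write(b"\x00\x01")
print(b.getvalue())       # b'\x00\x01'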
From the Python 3 email documentation it can be seen that io.StringIO should be used instead:
from io import StringIO
from email.generator import Generator

fp = StringIO()
g = Generator(fp, mangle_from_=True, maxheaderlen=60)
g.flatten(msg)            # msg is an existing email.message.Message instance
text = fp.getvalue()
I had the same issue because my file was called email.py. I renamed the file and the issue disappeared.
I had the issue because my directory was called email. I renamed the directory to emails and the issue was gone.
