Problems using ambhas package: statistics module has no cpdf attribute - python-3.x

I am trying to use the ambhas package for copulas, and the method
foo.generate_xy()
is not working.
This is the error I get:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
generate()
File "C:/Users/Mypc/Desktop/sfsdf.py", line 19, in genera
x1,y1 = foo.generate_xy()
File "C:\Python33\lib\site-packages\ambhas\copula.py", line 179, in generate_xy
self._inverse_cdf()
File "C:\Python33\lib\site-packages\ambhas\copula.py", line 256, in _inverse_cdf
x2, x1 = st.cpdf(self.X, kernel = 'Epanechnikov', n = 100)
AttributeError: 'module' object has no attribute 'cpdf'
Looking at the code in the ambhas module, it turns out it is trying to call st.cpdf(), a function from the statistics module (imported as st) that my statistics module does not have.
How can I fix this problem?
Here is an example of the code working; it is the very same code I am trying to run through the generate() function:
https://code.google.com/p/ambhas/wiki/Cookbook

The AMBHAS code depends on the third-party statistics module, not the "official" statistics module that has been part of the standard library since Python 3.4. Unfortunately, the third-party module doesn't appear to have been updated in a while, and its latest Windows installer is for Python 3.2, so you'll need to build it from source. You can do this with Cygwin, or by installing Visual C++ 2010 Express (here is a blog post describing the process, but I haven't tried it myself, so I don't know if it's completely accurate - YMMV).
Once you've compiled the extension, make sure you remove the statistics module you installed previously before running python setup.py install.
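A quick way to confirm the clash is to check which statistics module Python actually imports. This is just a diagnostic sketch: on a stock Python 3.4+ install the name resolves to the standard-library module, which indeed has no cpdf:

```python
import statistics

# On Python 3.4+ this normally resolves to the standard-library module...
print(statistics.__file__)
# ...which does not provide the kernel-density helpers ambhas expects:
print(hasattr(statistics, "cpdf"))  # False unless the third-party module shadows it
```

If the printed path points into the standard library rather than site-packages, the third-party module is not the one being imported.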

Related

H5py Driver Issue on Qiskit Nature Drivers

While using the IBM Quantum Experience, whenever I try to install any driver, an error involving h5py appears.
Specifically, the error is:
"Using default_file_mode other than 'r' is no longer supported. Pass
the mode to h5py.File() instead."
Does anyone have a solution (I'm not sure which version to revert to)? Thanks!!
You will need to add more details to your question; I could not replicate your error based on the information you provided. This is what I did:
First, I installed Qiskit Nature with the command pip install qiskit[nature].
Next, I entered the following Python commands:
>>> from qiskit_nature.drivers.second_quantization import HDF5Driver
>>> h5f = HDF5Driver()
>>> h5f
<qiskit_nature.drivers.second_quantization.hdf5d.hdf5driver.HDF5Driver object at 0x7fa28320a820>
>>> h5f.run()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/qiskit_nature/drivers/second_quantization/hdf5d/hdf5driver.py", line 63, in run
raise LookupError(f"HDF5 file not found: {hdf5_file}")
LookupError: HDF5 file not found: molecule.hdf5
There is no error when I import the HDF5Driver module from the qiskit_nature package. An error occurs when I enter h5f.run(). This is expected, because I don't have a file named molecule.hdf5 (the default filename). If your file has a different name, modify the commands above to use h5f = HDF5Driver("your_filename").
Tests run with Python 3.8.12 and qiskit_nature 0.3.0 running on Ubuntu Linux 5.11.0-1023-gcp.
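If the missing file is the issue, you can guard for it before constructing the driver. A minimal sketch (the load_driver helper and the default filename are my own, not part of qiskit_nature; the import is deferred so the guard runs even without the package installed):

```python
import os

def load_driver(hdf5_file="molecule.hdf5"):
    """Return an HDF5Driver for hdf5_file, or None if the file is missing."""
    if not os.path.exists(hdf5_file):
        print(f"HDF5 file not found: {hdf5_file}")
        return None
    # Deferred import: only needed once we know the file exists.
    from qiskit_nature.drivers.second_quantization import HDF5Driver
    return HDF5Driver(hdf5_file)

driver = load_driver()  # prints the message unless molecule.hdf5 exists
```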

Issue in importing awkward array package: DLL load failed while importing _ext

I am trying to use awkward on my Windows 10 system, running Python 3.8.2.
After installing the package, importing it fails with this DLL error:
>>> import awkward as ak
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\MSS\Work\PyWorkspace37\awkward_poc\venv\lib\site-packages\awkward\__init__.py", line 23, in <module>
import awkward.layout
File "C:\Users\MSS\Work\PyWorkspace37\awkward_poc\venv\lib\site-packages\awkward\layout.py", line 5, in <module>
from awkward._ext import Index8
ImportError: DLL load failed while importing _ext: The specified module could not be found.
How can I find out which DLL is missing, and how can I mitigate the problem?
The DLL is Awkward Array's compiled C++ part, "awkward/_ext.pyd", which the installation should have put into place. I see two possibilities: (1) the installation went horribly wrong and should be restarted from a clean slate, or (2) the file is there but can't be loaded. If the directory it was installed to doesn't have executable permissions, that could be it. (Python code can be executed from a disk/directory without executable permissions, but native machine code can't be, which can cause some surprises.)
When it comes to installation issues, I can only make guesses, because your system is different from mine. We try to use standard installation procedures, so if you use the standard Python installation tools (pip, conda) then it ought to work without problems, but evidently that didn't happen here.
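To see where (and whether) the compiled extension got installed, importlib can locate a module without importing it. A small stdlib sketch (demonstrated with a standard-library C extension so it runs anywhere; in this case you would pass "awkward._ext" and check that the reported .pyd file exists on disk):

```python
import importlib.util

def locate(modname):
    """Return the file a module would be loaded from, or None if not found."""
    spec = importlib.util.find_spec(modname)
    return spec.origin if spec is not None else None

# With awkward installed, locate("awkward._ext") should point at the .pyd.
print(locate("_ctypes"))         # a stdlib compiled extension, for illustration
print(locate("no_such_module"))  # None
```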

Which Theano for PyMC3?

I have a pyproject.toml file from a Proof of Concept that worked beautifully a few months ago, which includes:
[tool.poetry.dependencies]
theano = "^1.0.5"
pymc3 = "^3.9.3"
I copied this over to start extending from the Proof of Concept, and when trying to run it I get
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
The installed Theano(-PyMC) version (1.0.5) does not match the PyMC3 requirements.
For PyMC3 to work, Theano must be uninstalled and replaced with Theano-PyMC.
See https://github.com/pymc-devs/pymc3/wiki for installation instructions.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
So I followed the instructions -- removed theano and added Theano-PyMC3. This gave me no big alert at the beginning of the output, but it failed with:
File "mycode.py", line 4, in <module>
import pymc3
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 34, in <module>
if not semver.match(theano.__version__, ">=1.1.2"):
AttributeError: module 'theano' has no attribute '__version__'
I then tried adding aesara with a just-try-it scattergun mentality, and got the same error. I tried adding stock theano again on top of all the above, and got the first error again, even with Theano-PyMC3 still installed; this time the traceback ends with:
File "/home/user/.../code.py", line 4, in <module>
import pymc3
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 50, in <module>
__set_compiler_flags()
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 46, in __set_compiler_flags
current = theano.config.gcc__cxxflags
AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags'
Why would the same version of pymc3 that worked just fine with theano last time now have a problem with it? The link (https://github.com/pymc-devs/pymc3/wiki) given in the !!!! alert is virtually blank, so it isn't helpful.
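The AttributeError in the first traceback means the theano package that ends up on sys.path has no __version__ at all -- typically the leftovers of one distribution shadowing the other. A defensive version of pymc3's version check can be sketched like this (using a stand-in module object, since reproducing the real check would need theano installed):

```python
import types

def theano_version(mod):
    """Return mod.__version__ if present, else None (the failure mode above)."""
    return getattr(mod, "__version__", None)

# Stand-in for a broken/shadowed theano install:
broken = types.ModuleType("theano")
print(theano_version(broken))  # None -> pymc3's semver.match would crash here

broken.__version__ = "1.1.2"
print(theano_version(broken))  # 1.1.2
```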

Error using Nuitka for Python App that uses Numpy, Pandas and Plotly

I've developed a PyQt5 app that also uses Numpy, Pandas and Plotly. To package it up, I decided to use Nuitka; since the product is for end users who won't be familiar with Python, I want to make sure the interpreter is packaged along with the app as a single binary (similar to PyInstaller).
I'm running the following command, but I've been getting errors that I'm struggling to figure out how to fix:
$ python -m nuitka --standalone --show-progress --recurse-all --plugin-enable=numpy main.py
I'm using PyCharm, and all the libraries are in the virtual environment in the project directory. The version of Python I'm using is 3.9.
An executable binary is generated on Windows; however, upon running it I receive the following error message:
C:\Users\user\PycharmProjects\PlotlyExample\main.dist>main
Traceback (most recent call last):
File "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\core\__init__.py", line 22, in <module numpy.core>
File "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\core\multiarray.py", line 12, in <module numpy.core.multiarray>
File "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\core\overrides.py", line 7, in <module numpy.core.overrides>
ImportError: LoadLibraryExW 'C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\core\_multiarray_umath.pyd' failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\main.py", line 5, in <module>
File "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\__init__.py", line 140, in <module numpy>
File "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\core\__init__.py", line 48, in <module numpy.core>
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.9 from "C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\python.exe"
* The NumPy version is: "1.19.3"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: LoadLibraryExW 'C:\Users\user\PYCHAR~1\PLOTLY~1\MAIN~1.DIS\numpy\core\_multiarray_umath.pyd' failed: The specified module could not be found.
Sample Code:
https://pastebin.com/RjRYUk0C
Any ideas? Thanks!
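One way to narrow this down is to check whether _multiarray_umath.pyd actually landed in the dist folder. A stdlib sketch (the missing_files helper is mine; the directory and file names are taken from the traceback above):

```python
import os

def missing_files(dist_dir, rel_paths):
    """Return the expected files that are absent from a Nuitka dist directory."""
    return [p for p in rel_paths if not os.path.exists(os.path.join(dist_dir, p))]

expected = [os.path.join("numpy", "core", "_multiarray_umath.pyd")]
print(missing_files("main.dist", expected))  # non-empty means the file never got there
```

Note that LoadLibraryExW can also fail when the .pyd itself is present but one of its dependent DLLs (e.g. the MSVC runtime or BLAS libraries on the target machine) is missing, so an empty result here does not fully rule out a DLL problem.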

_pickle.PicklingError: args[0] from __newobj__ args has the wrong class in tensorflow project

code:
# Write vocabulary#
vocab_processor.save(os.path.join(checkpoint_dir, "vocab"))
Error:
Traceback (most recent call last):
File "train.py", line 145, in
vocab_processor.save(os.path.join(checkpoint_dir, "vocab"))
File "/home/chinu/.local/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/preprocessing/text.py", line 233, in save
f.write(pickle.dumps(self))
_pickle.PicklingError: args[0] from __newobj__ args has the wrong class
I ran into the same problem as you. Maybe you can check this issue; the code change in it helped me.
The fix in that pull request is to change the following code in preprocess.py:
self.sup = super(MyVocabularyProcessor,self)
self.sup.__init__(max_document_length,min_frequency,vocabulary,tokenizer_fn)
into the following:
sup = super(MyVocabularyProcessor,self)
sup.__init__(max_document_length,min_frequency,vocabulary,tokenizer_fn)
Tip: remember to use the 2to3 -w filename tool to convert the other Python 2 files to Python 3.
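The underlying issue is that the original code stores the super() proxy on the instance, so it becomes part of the pickled state, and super objects trip up pickle's __newobj__ machinery with exactly this error. A minimal reproduction (my own sketch, not the project's code):

```python
import pickle

class Bad:
    def __init__(self):
        # Storing the proxy puts it in self.__dict__, hence into the pickle.
        self.sup = super(Bad, self)

class Good:
    def __init__(self):
        sup = super(Good, self)  # local variable: used during __init__, then gone
        sup.__init__()

try:
    pickle.dumps(Bad())
except pickle.PicklingError as e:
    print(e)  # args[0] from __newobj__ args has the wrong class

data = pickle.dumps(Good())  # works: no super proxy in the pickled state
```

The pull request's change turns self.sup into a plain local variable, which is exactly the Bad-to-Good change above.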
I had a look at the same project (https://github.com/dhwajraj/deep-siamese-text-similarity). I tried Python 3.x, but that failed: I had to change the open() calls to pass an encoding, change xrange to range, and so on, and ended up converting the code back. I ran into the same issue as you. The code base is Python 2.x, and I installed everything as indicated:
numpy 1.11.0
tensorflow 1.2.1 (had to downgrade < 1.2 for Python2)
gensim 1.0.1
nltk 3.2.2
