CVodeError while simulating with pyFMI - python-3.x

I am trying to set up PyFMI on Anaconda (Python 3.6.8).
I installed all the required packages listed on the PyFMI site. The FMU loads without issue, but when I try to simulate it I get an error:
Could not find cannot import name 'radau5'
Could not find cannot import name 'dopri5'
Could not find cannot import name 'rodas'
Could not find cannot import name 'odassl'
Could not find ODEPACK functions.
Could not find RADAR5
Could not find GLIMDA.
Traceback (most recent call last):
File "assimulo\solvers\../lib/sundials_callbacks_ida_cvode.pxi", line 240, in assimulo.solvers.sundials.cv_jac
File "C:\Users\d60378\AppData\Local\Continuum\anaconda3\lib\site-packages\pyfmi\simulation\assimulo_interface.py", line 733, in j
A = self._model._get_A(add_diag=True, output_matrix=self._A)
File "src\pyfmi\fmi.pyx", line 6041, in pyfmi.fmi.FMUModelBase2._get_A
File "src\pyfmi\fmi.pyx", line 7592, in pyfmi.fmi.FMUModelME2._get_directional_proxy
File "src\pyfmi\fmi.pyx", line 5989, in pyfmi.fmi.FMUModelBase2._get_directional_proxy
TypeError: Expected tuple, got dict_keys
Traceback (most recent call last):
File "<ipython-input-1-6c340902ef15>", line 28, in <module>
res = model.simulate(options=opts,start_time=tstart, final_time=tstart+172200)
File "src\pyfmi\fmi.pyx", line 7522, in pyfmi.fmi.FMUModelME2.simulate
File "src\pyfmi\fmi.pyx", line 304, in pyfmi.fmi.ModelBase._exec_simulate_algorithm
File "src\pyfmi\fmi.pyx", line 300, in pyfmi.fmi.ModelBase._exec_simulate_algorithm
File "C:\Users\d60378\AppData\Local\Continuum\anaconda3\lib\site-packages\pyfmi\fmi_algorithm_drivers.py", line 520, in solve
self.simulator.simulate(self.final_time, self.ncp)
File "assimulo\ode.pyx", line 168, in assimulo.ode.ODE.simulate
File "assimulo\ode.pyx", line 288, in assimulo.ode.ODE.simulate
File "assimulo\explicit_ode.pyx", line 101, in assimulo.explicit_ode.Explicit_ODE._simulate
File "assimulo\explicit_ode.pyx", line 187, in assimulo.explicit_ode.Explicit_ODE._simulate
File "assimulo\solvers\sundials.pyx", line 1894, in assimulo.solvers.sundials.CVode.integrate
File "assimulo\solvers\sundials.pyx", line 1926, in assimulo.solvers.sundials.CVode.integrate
CVodeError: {-1: 'The solver took max internal steps but could not reach tout.', -2: 'The solver could not satisfy the accuracy demanded by the user for some internal step.', -3: 'Error test failures occurred too many times during one internal time step or minimum step size was reached.', -4: 'Convergence test failures occurred too many times during one internal time step or minimum step size was reached.', -5: 'The linear solvers initialization function failed.', -6: 'The linear solvers setup function failed in an unrecoverable manner.', -7: 'The linear solvers solve function failed in an unrecoverable manner.', -8: 'The user-provided rhs function failed in an unrecoverable manner.', -9: 'The right-hand side function failed at the first call.', -10: 'The right-hand side function had repeated recoverable errors.', -11: 'The right-hand side function had a recoverable error, but no recovery is possible.', -12: 'The rootfinding function failed in an unrecoverable manner.', -20: 'A memory allocation failed.', -21: 'The cvode_mem argument was NULL.', -22: 'One of the function inputs is illegal.', -23: 'The CVode memory block was not allocated by a call to CVodeMalloc.', -24: 'The derivative order k is larger than the order used.', -25: 'The time t is outside the last step taken.', -26: 'The output derivative vector is NULL.', -27: 'The output and initial times are too close to each other.', -41: 'The sensitivity right-hand side function failed unrecoverable.'}
I would appreciate any hints on where to look for the possible cause.
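For reference, a minimal sketch of how such a simulation call is typically set up with PyFMI (the FMU name, start time, and tolerance values below are placeholders, not taken from the question):
from pyfmi import load_fmu

model = load_fmu("MyModel.fmu")            # placeholder FMU file

opts = model.simulate_options()            # options for the default CVode-based algorithm
opts["ncp"] = 500                          # number of communication (result) points
opts["CVode_options"]["rtol"] = 1e-6       # relative tolerance
opts["CVode_options"]["atol"] = 1e-6       # absolute tolerance

tstart = 0.0                               # placeholder start time
res = model.simulate(start_time=tstart, final_time=tstart + 172200, options=opts)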

Kelamahim, how have you installed the PyFMI package? I have used
conda install -c chria pyfmi
and it works.
Only
Could not find RADAR5
Could not find GLIMDA.
are shown during execution, but my models work. Hope this helps.

The solution was to downgrade to Anaconda3 with Python 3.6.2 and install PyFMI version 2.4.0 with conda.

I have been using Anaconda2 (Conda 4.6.8/python 2.7.15). Here's the installation process:
FMIL is built from source code using CMake
pyfmi is installed via conda install -c chria pyfmi
assimulo is installed via conda install -c conda-forge assimulo
wxPython 2.8.12.1 (classic) is installed via the Windows installer available on sourceforge
Other dependencies can be installed from pip
I also saw the following warning messages after loading pyfmi in python, but my simulation doesn't seem to be affected:
Could not find cannot import name radau5
Could not find cannot import name dopri5
Could not find cannot import name rodas
Could not find cannot import name odassl
Could not find ODEPACK functions.
Could not find RADAR5
Could not find GLIMDA.
HTH

PyFMI is also available from the conda-forge channel:
https://anaconda.org/conda-forge/pyfmi
I permanently added that channel because it has reproducible builds and a huge number of packages, so dependencies can usually be resolved.
The following worked for me, in Anaconda3 with Python 3.6:
conda config --append channels conda-forge
conda install pyfmi
conda list

Related

HuggingFace | ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

Not always, but occasionally, this error appears when running my code.
At first I suspected it was not a connectivity issue but rather a caching issue, as discussed in an older GitHub issue.
Clearing the cache didn't help the runtime:
$ rm ~/.cache/huggingface/transformers/*
Traceback references:
NLTK also reports Error loading stopwords: <urlopen error [Errno -2] Name or service not known>.
The last two lines concern cached_path and get_from_cache.
Cache (before cleared):
$ cd ~/.cache/huggingface/transformers/
(sdg) me#PF2DCSXD:~/.cache/huggingface/transformers$ ls
16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0
16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.json
16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.lock
4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5
4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.json
4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.lock
684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.json
684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.lock
c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.json
fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.lock
Code:
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2') # Error
set_seed(42)
Traceback:
2022-03-03 10:18:06.803989: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-03-03 10:18:06.804057: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[nltk_data] Error loading stopwords: <urlopen error [Errno -2] Name or
[nltk_data] service not known>
2022-03-03 10:18:09.216627: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-03-03 10:18:09.216700: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-03-03 10:18:09.216751: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (PF2DCSXD): /proc/driver/nvidia/version does not exist
2022-03-03 10:18:09.217158: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-03 10:18:09.235409: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
All model checkpoint layers were used when initializing TFGPT2LMHeadModel.
All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
Traceback (most recent call last):
File "/home/me/miniconda3/envs/sdg/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/me/miniconda3/envs/sdg/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/mnt/c/Users/me/Documents/GitHub/project/foo/bar/__main__.py", line 26, in <module>
nlp_setup()
File "/mnt/c/Users/me/Documents/GitHub/project/foo/bar/utils/Modeling.py", line 37, in nlp_setup
generator = pipeline('text-generation', model='gpt2')
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 590, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 463, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 324, in get_tokenizer_config
resolved_config_file = get_file_from_repo(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/file_utils.py", line 2235, in get_file_from_repo
resolved_file = cached_path(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/file_utils.py", line 1846, in cached_path
output_path = get_from_cache(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/file_utils.py", line 2102, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Failed Attempts
I closed my IDE and bash terminal, ran wsl.exe --shutdown in PowerShell, then relaunched the IDE and bash terminal; same error.
Disconnecting / using a different VPN.
Clearing the cache: $ rm ~/.cache/huggingface/transformers/*.
Make sure you are not loading a tokenizer with an empty path. That solved it for me.
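As a minimal illustrative check (the model name is the gpt2 example from the question, not something prescribed by this answer):
from transformers import pipeline

model_name = "gpt2"  # must be a non-empty model id or local path
assert model_name, "model path/identifier must not be empty"
generator = pipeline("text-generation", model=model_name)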
I saw an answer on GitHub that you can try:
pass force_download=True to from_pretrained, which will override the cache and re-download the files.
Link: https://github.com/huggingface/transformers/issues/8690 (by patil-suraj)
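A hedged sketch of that suggestion, reusing the gpt2 model from the question (force_download is a standard from_pretrained argument):
from transformers import AutoTokenizer, pipeline

# Ignore the possibly corrupted cache and download the files again
tokenizer = AutoTokenizer.from_pretrained("gpt2", force_download=True)
generator = pipeline("text-generation", model="gpt2", tokenizer=tokenizer)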
Since I am working in a conda venv and using Poetry for handling dependencies, I needed to re-install torch - a dependency for Hugging Face 🤗 Transformers.
First, install torch:
PyTorch's website lets you choose the exact setup/specification for your install. In my case, the command was
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
Then add to Poetry:
poetry add torch
Both take ages to process. Runtime was back to normal :)

H5py Driver Issue on Qiskit Nature Drivers

While using the IBM Quantum Experience, whenever I try to install any driver, an h5py error appears.
Specifically, the error is
"Using default_file_mode other than 'r' is no longer supported. Pass the mode to h5py.File() instead."
Does anyone have a solution (I am not sure which version to revert to)? Thanks!!
You will need to add more details about your question. I could not replicate your error based on the information you provided. This is what I did:
First, I installed Qiskit Nature with this command pip install qiskit[nature].
Next, I entered the following Python commands:
>>> from qiskit_nature.drivers.second_quantization import HDF5Driver
>>> h5f = HDF5Driver()
>>> h5f
<qiskit_nature.drivers.second_quantization.hdf5d.hdf5driver.HDF5Driver object at 0x7fa28320a820>
>>> h5f.run()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/qiskit_nature/drivers/second_quantization/hdf5d/hdf5driver.py", line 63, in run
raise LookupError(f"HDF5 file not found: {hdf5_file}")
LookupError: HDF5 file not found: molecule.hdf5
There is no error when I imported the HDF5Driver module from the qiskit_nature package. An error occurs when I enter h5f.run(). This is expected, because I don't have a file named molecule.hdf5 (the default filename). If you have a different filename, modify commands above to use h5f = HDF5Driver("your_filename").
Tests run with Python 3.8.12 and qiskit_nature 0.3.0 running on Ubuntu Linux 5.11.0-1023-gcp.
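As a side note on the quoted h5py message itself: recent h5py versions want the file mode passed explicitly to h5py.File() instead of a global default_file_mode. A minimal sketch, with the filename as a placeholder:
import h5py

# Open the HDF5 file read-only; pass the mode explicitly rather than relying on default_file_mode
with h5py.File("molecule.hdf5", "r") as f:
    print(list(f.keys()))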

Installation of numpy through Anaconda

While getting up to speed on Anaconda, I keep receiving an error message when I try to import numpy into Python.
Here is what I have done so far:
Downloaded Anaconda from anaconda.com (64-Bit Graphical Installer (466 MB) with Python 3.7 for Windows);
Installed Anaconda (under C:\Users\'Username'\Anaconda3 | option 'Register Anaconda3 as my default Python 3.7').
Now I'm trying to import numpy into Python using idle.exe, which produces the following error message:
Warning: os.path.expanduser("~") points to h:\, but the path does not exist.
Furthermore, after executing import numpy in IDLE I get the following error message.
Unfortunately, none of the advice in the message was helpful; numpy seems to be properly installed.
Warning (from warnings module): File "C:\Users\'Username'\Anaconda3\lib\site-packages\numpy\__init__.py", line 140
from . import _distributor_init
UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
Traceback (most recent call last):
File "C:\Users\'Username'\Anaconda3\lib\site-packages\numpy\core\__init__.py", line 24, in <module>
from . import multiarray
File "C:\Users\'Username'\Anaconda3\lib\site-packages\numpy\core\multiarray.py", line 14, in <module>
from . import overrides
File "C:\Users\'Username'\Anaconda3\lib\site-packages\numpy\core\overrides.py", line 7, in <module>
from numpy.core._multiarray_umath import (
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\'Username'\Anaconda3\Scripts\myscripts\import_test.py", line 1, in <module>
import numpy
File "C:\Users\'Username'\Anaconda3\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import core
File "C:\Users\'Username'\Anaconda3\lib\site-packages\numpy\core\__init__.py", line 54, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from
"C:\Users\'Username'\Anaconda3\python.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.18.1" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
Many thanks in advance for any help and suggestions.
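Following item 1 of the numpy message above, a quick diagnostic (the expected path is just an example) is to confirm which interpreter and search path IDLE is actually using:
import sys

print(sys.executable)   # expected: C:\Users\<Username>\Anaconda3\python.exe
for p in sys.path:      # look for stray directories that could shadow the Anaconda numpy
    print(p)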

Pip install openslide completes successfully but when I import it "the specified module is not found"

I need to open SVS images in Python 3.7, and it seems that OpenSlide is the only module capable of opening images of that size (30k*30k pixels). I have used pip install openslide-python as well as python -m pip install openslide-python and pip3 install... etc.
I know the module has been successfully installed because if I run any of those commands again the command line returns "requirement already satisfied"; however, when I run Python and try to import openslide, it gives the error at the bottom.
My guess was that the .whl or .tar.gz files were in the wrong path so I made a bunch of copies and put them in the openslide folders within the Anaconda3 folder. The error persists. I have included the full error code below for clarity.
Extra: If I run help("modules") openslide shows up along with numpy, math, sklearn etc. I can import and run all other modules without issue.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\brimk\Anaconda3\lib\site-packages\openslide\__init__.py", line 29, in <module>
from openslide import lowlevel
File "C:\Users\brimk\Anaconda3\lib\site-packages\openslide\lowlevel.py", line 41, in <module>
_lib = cdll.LoadLibrary('libopenslide-0.dll')
File "C:\Users\brimk\Anaconda3\lib\ctypes\__init__.py", line 434, in LoadLibrary
return self._dlltype(name)
File "C:\Users\brimk\Anaconda3\lib\ctypes\__init__.py", line 356, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
My issue was resolved by the answer from my hero!
It seems that, at least for OpenSlide, running Python from the path to the bin directory is the easiest solution. It can be done this way:
Download the Windows Binary here.
Extract the download to whatever path you want.
Open command window
pip3 install openslide-python (pip2 if Python 2)
cd C:\Users\Path\to\Openslide-Win64-20171122\bin
python
import openslide
In the future you will have to run python from the path to the OpenSlide bin (the cd step above). This can be done more rigorously by adding that path to PATH, as described in detail here as well as in the answer above.
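If you would rather not start Python from the bin directory every time, a hedged alternative (the bin path below is a placeholder) is to put the OpenSlide bin folder on the DLL search path before importing:
import os

OPENSLIDE_BIN = r"C:\Users\Path\to\Openslide-Win64-20171122\bin"  # placeholder path to the extracted binaries

if hasattr(os, "add_dll_directory"):   # Python 3.8+ on Windows
    os.add_dll_directory(OPENSLIDE_BIN)
else:                                  # Python 3.7 and earlier: prepend to PATH instead
    os.environ["PATH"] = OPENSLIDE_BIN + os.pathsep + os.environ["PATH"]

import openslide
print(openslide.__library_version__)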

Problems using ambhas package: statistics module has no cpdf attribute

I am trying to use the ambhas package for copulas and the method
foo.generate_xy()
is not working.
This is the error I get:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
generate()
File "C:/Users/Mypc/Desktop/sfsdf.py", line 19, in genera
x1,y1 = foo.generate_xy()
File "C:\Python33\lib\site-packages\ambhas\copula.py", line 179, in generate_xy
self._inverse_cdf()
File "C:\Python33\lib\site-packages\ambhas\copula.py", line 256, in _inverse_cdf
x2, x1 = st.cpdf(self.X, kernel = 'Epanechnikov', n = 100)
AttributeError: 'module' object has no attribute 'cpdf'
Basically, I checked the code in the ambhas module, and it turns out it is trying to use a function from the statistics module (imported as st), st.cpdf(), which I do not have in my statistics module.
How can I fix this problem?
Here is an example of the code working. This is the very same code I am trying to run through the generate() function:
https://code.google.com/p/ambhas/wiki/Cookbook
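For reference, a minimal sketch of the kind of cookbook usage being attempted (the data and the 'frank' family are illustrative assumptions, not taken from the question):
import numpy as np
from ambhas.copula import Copula

x = np.random.normal(size=1000)            # illustrative correlated data
y = 0.5 * x + np.random.normal(size=1000)

foo = Copula(x, y, 'frank')                # fit a Frank copula to the data
x1, y1 = foo.generate_xy(1000)             # this is the call that fails without st.cpdf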
The AMBHAS code depends on a separate third-party statistics module, not the "official" one that is now included in Python 3.4. Unfortunately, that module doesn't appear to have been updated in a while, and the latest Windows installer is for Python 3.2, so you'll need to build it from source. You can do this with Cygwin, or by installing Visual C++ 2010 Express (here is a blog post describing the process, but I haven't tried it myself so I don't know if it's completely accurate - YMMV).
Once you've compiled the extension, make sure you remove the statistics module you installed previously before running python setup.py install.
