I got an error when trying to update brightway.
After running
conda update -c cmutel bw2io bw2data
I got a warning because there were two possible package resolutions, but it updated to bw2data 3.1 and bw2io 0.5.11.
Now if I try to import brightway I get an ImportError:
cannot import name 'database_parameters'
SOLVED: I updated conda and could then update bw2io to version 0.6.RC3, and now I do not get any error message when importing brightway.
Version 3.0 of bw2data changed the way that parameterization works; see the documentation and examples. If there is still a link to database_parameters in the documentation, please file an issue. If you need help migrating database parameters, edit the question.
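For reference, here is a minimal sketch of what database parameters look like under the bw2data 3.x parameters manager (the parameter data below is made up for illustration; check the documentation for the exact interface):

from bw2data.parameters import parameters

# In bw2data 3.x, database parameters go through the `parameters` manager
# instead of the removed `database_parameters` object.
# Hypothetical example data:
parameters.new_database_parameters(
    [{"name": "fuel_share", "amount": 0.35}],
    "my_database",
)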
I have been stuck on this problem for a couple of days. I wanted to use numba for a project, so I created a conda environment with python=3.10.4
conda create -n scycom python=3.10.4
Then I installed the packages I need from the main channel
conda install numpy scipy numba matplotlib jupyter
which results in the installation of numpy=1.21.5 and numba=0.55.1
When I am done installing, I run jupyter lab --port=8888 and create a notebook. When I type from numba import jit, I get the following error message:
ImportError: Numba needs NumPy 1.21 or less
with the problem apparently arising in _ensure_critical_deps(). This is puzzling because numpy is at version 1.21.5.
My attempts to fix this problem
Given that it seemed to be a compatibility issue, I tried downgrading to numpy=1.20.1, but the problem persists and, at times, other compatibility issues with glib show up. In fact, the exact same error message arises.
I created an environment with python=3.8.13, with numpy=1.20.1 and numba=0.55.1, and faced the exact same error messages.
I tried different combinations of libraries but could not resolve this incompatibility.
I tried a few solutions from the internet, but none really fit my problem.
I hope somebody willing to help has faced and solved a similar issue.
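One thing worth checking (an assumption on my part, since nothing above confirms it): if Jupyter is launched from the base environment, the kernel imports base's NumPy rather than scycom's, so the version numba sees can differ from what conda list reports. This snippet, run in the notebook, shows which interpreter and NumPy are actually loaded:

import sys
print(sys.executable)  # should point into the scycom environment

import numpy
print(numpy.__version__, numpy.__file__)  # should show 1.21.5 from scycom's site-packages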
I am using Django and Python with Anaconda. At the beginning, I was able to migrate after adding a sqlite3.dll file. However, when I try to migrate again, it gives me the error below. I have tried uninstalling and reinstalling both numpy and pandas in the Project Interpreter. I'm using PyCharm with the Conda package manager and a Conda environment. I also tried migrate --run-syncdb, but it still gives me the error.
How would I fix this to be able to use .objects.bulk_create()?
To fix .objects.bulk_create(), one recommendation was to run migrate --run-syncdb, but I cannot run it currently because of the error below.
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "C:\Users\PuTung\anaconda3\envs\swe_project\python.exe"
* The NumPy version is: "1.19.2"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
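A common culprit here (an assumption on my part, not confirmed in the question) is that the conda environment is not activated when Python starts, so Windows cannot find the DLLs that NumPy's C extensions depend on. Running the same import from an activated Anaconda Prompt isolates whether PyCharm is the problem:

conda activate swe_project
python -c "import numpy; print(numpy.__version__)"

If that works, point PyCharm at the environment's interpreter and make sure it activates the environment; if it fails there too, reinstalling numpy and pandas inside the activated environment (conda install numpy pandas --force-reinstall) is a reasonable next step.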
I was following a tutorial, and I originally tried typing
from specreduce.calibration_data import load_MAST_calspec, which gave the same error.
I'm not sure how to properly import specreduce, so if anyone could show me how to do that I would be very appreciative!
Installing astropy only installs the core astropy package. specreduce is apparently an affiliated package, that is, a separate special-purpose package built on Astropy. They do not appear to have published a release on PyPI, so you should contact its authors to ask how best to install it. For now you might be able to install directly from the repository with pip install git+https://github.com/astropy/specreduce.git.
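If that install succeeds, a quick check from the same environment (a sketch that only verifies the import resolves):

python -c "from specreduce.calibration_data import load_MAST_calspec; print('ok')"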
I'm trying to run a bulk job in Salesforce. After creating the job and preparing a csv_iterator over my data, I ran
batch = bulk.post_bulk_batch(job, csv_iterator)
and I'm getting an error stating
AttributeError: 'SalesforceBulk' object has no attribute 'post_bulk_batch'
I've installed salesforce_bulk for both Python 2 and Python 3 and tried it in both versions, but I get the same error.
Why does this happen? How can I rectify this issue? Thanks in advance.
UPDATE:
I have installed version salesforce-bulk==1.1.0.
Now it is working in Python 2, but in Python 3 this is what happens:
from salesforce_bulk import SalesforceBulk
ImportError: cannot import name 'SalesforceBulk'
Is there no Python 3 support for the Salesforce bulk job process?
Found it! In Python 3, install salesforce-bulk==2.1.0 and change the method name post_bulk_batch to post_batch.
That does the trick!
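For completeness, a minimal sketch of the renamed call under salesforce-bulk 2.1.0 (the credentials and the Contact insert job are placeholders; only the rename from post_bulk_batch to post_batch is confirmed above):

from salesforce_bulk import SalesforceBulk

# Placeholder credentials; substitute your own.
bulk = SalesforceBulk(username="user", password="pass", security_token="token")
job = bulk.create_insert_job("Contact", contentType="CSV")  # example object; yours will differ
batch = bulk.post_batch(job, csv_iterator)  # renamed from post_bulk_batch in 2.x; csv_iterator as in the question
bulk.wait_for_batch(job, batch)
bulk.close_job(job)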
I just tried to use pd.HDFStore in IPython Notebook with a Python 3 kernel (Anaconda 2 and 3 on Ubuntu 14.04):
import pandas as pd
store = pd.HDFStore('/home/Jian/Downloads/test.h5')
but it throws the following error
ImportError: HDFStore requires PyTables, "libhdf5.so.9: cannot open shared object file: No such file or directory" problem importing
I initially thought it was because pytables was somehow missing, but when I run source activate py34 and conda list, pytables 3.2.0 is already installed in the Anaconda Python 3 environment.
Also, if I switch to Python 2, for example with source activate py27, and start ipython notebook, it works properly and no import error is thrown.
I guess I must be missing something in configuring pytables under the Anaconda Python 3 environment, but I cannot figure it out. Any help is highly appreciated.
Update:
I just tried a fresh install of Anaconda3-2.3.0-Linux-x86_64 from the official website and it ends up with the same error. When I run locate libhdf5.so.9 in the command line, nothing shows up.
This is a known issue that we are working on. When it is fixed, conda update --all will update the libraries and fix the issue.
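In the meantime, importing PyTables directly (rather than through pandas) shows whether the missing libhdf5 is the culprit; this is just a diagnostic sketch:

import tables  # fails with the same libhdf5.so.9 message if the shared library is missing
tables.print_versions()  # on success, prints the PyTables and HDF5 library versions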