Numpy error when installing fbprophet Python on Windows 10 - python-3.x

I'm trying to install fbprophet on my Windows 10 64-bit machine (Anaconda). I've followed the instructions here, here, and here, but I keep getting the same error:
ImportError Traceback (most recent call last)
<ipython-input-1-7dd3504c366f> in <module>
----> 1 import pandas as pd
~\Anaconda3\envs\fbprophet\lib\site-packages\pandas\__init__.py in <module>
15 if missing_dependencies:
16 raise ImportError(
---> 17 "Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
18 )
19 del hard_dependencies, dependency, missing_dependencies
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from "C:\Users\Ran\Anaconda3\envs\fbprophet\python.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.17.0" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
I assume it has something to do with the numpy version, but the version in the virtual environment I've created for fbprophet is as required (1.17). It seems that this environment tries to use a different version of numpy.
I have numpy installed in my Anaconda base and in other virtual environments, and there is nothing wrong with it. The problem happens only when I try to install fbprophet.
UPDATE:
I was able to solve the problem by uninstalling numpy (v1.17) and installing numpy v1.16 instead.
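For reference, a minimal sketch of that downgrade inside the fbprophet environment (the environment name fbprophet and the exact 1.16.x release are assumptions; adjust to your setup):
conda activate fbprophet
pip uninstall numpy          # remove the 1.17 build whose C extensions fail to load
pip install numpy==1.16.5    # any 1.16.x release should work here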

Related

ModuleNotFoundError: No module named 'pycocotools._mask'

I'm trying to train a Mask R-CNN model with cocoapi (https://github.com/cocodataset/cocoapi), and this error keeps coming up.
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-83356bb9cf95> in <module>
19 sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
20
---> 21 from pycocotools.coco import coco
22
23 get_ipython().run_line_magic('matplotlib', 'inline ')
~/Desktop/coco/PythonAPI/pycocotools/coco.py in <module>
53 import copy
54 import itertools
---> 55 from . import mask as maskUtils
56 import os
57 from collections import defaultdict
~/Desktop/coco/PythonAPI/pycocotools/mask.py in <module>
1 __author__ = 'tsungyi'
2
----> 3 import pycocotools._mask as _mask
4
5 # Interface for manipulating masks stored in RLE format.
ModuleNotFoundError: No module named 'pycocotools._mask'
I tried all the methods in the GitHub 'issues' tab, but none of them work for me. Is there another solution for this? I'm using Python 3.6 on Linux.
This answer is summarised from three GitHub issues:
1. Check whether you have installed Cython for the correct Python version. Namely, you should install Cython for Python 2/3 if you use Python 2/3:
pip install cython
2. Check whether you have downloaded the whole repository from this GitHub project. Namely, you should download everything even though you only need PythonAPI:
git clone https://github.com/cocodataset/cocoapi.git
or
unzip the zip file
3. Check whether you opened a terminal and ran "make" in the correct folder. The correct folder is the one that contains the Makefile:
cd path/to/coco/PythonAPI
make
In most cases this solves the problem; a consolidated sketch of steps 1-3 is shown after this list. If not, steps 4 and 5 may help.
4. Check whether you have installed gcc in the correct version.
5. Check whether you have installed python-dev for the correct Python version. Namely, you should install python3-dev (you may try "sudo apt-get install python3-dev") if you use Python 3.
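Putting steps 1-3 together, a minimal command sketch (the clone location ~/coco is an arbitrary example):
pip install cython
git clone https://github.com/cocodataset/cocoapi.git ~/coco
cd ~/coco/PythonAPI          # the folder that contains the Makefile
make                         # builds the pycocotools._mask C extension in place
python setup.py install      # optional: install pycocotools into the active environment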
Try cloning the official repo and running the commands below:
python setup.py install
make

Unable to import numpy 1.19.1 in AWS Lambda: No module named 'numpy.core._multiarray_umath'

I am unable to import numpy 1.19.1 with Python 3.8 on AWS Lambda.
I am using the following dependencies:
pandas 1.1.0
pyarrow 1.0.0
numpy 1.19.1
psycopg2 2.8.5
Because I work in a Windows environment, I created an EC2 Linux instance, installed Python 3.8, and downloaded all required libraries, then added them to the project, but the moment I try to import pandas I get the following:
[ERROR] ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.8 from "/var/lang/bin/python3.8",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.18.2" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: No module named 'numpy.core._multiarray_umath'
Traceback (most recent call last):
  File "/var/task/src/py38-lib-test.py", line 28, in py38test
    import pandas
  File "/tmp/lib/pandas/__init__.py", line 16, in <module>
    raise ImportError(END RequestId: 07762380-1fc4)
Lastly, I noticed AWS Lambda provides a layer with numpy and scikit-learn. I tried removing my numpy version but kept the rest and added the layer to the function, but the same error occurs.
Thanks in advance for your comments.
I used the layer provided by Klayers to solve the problem.
Suppose you're running Python 3.8 in the us-east-1 region; according to this Klayers document, you can use arn:aws:lambda:us-east-1:770693421928:layer:Klayers-p38-numpy:9 as your layer so that you can run import numpy in the Lambda function.
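For illustration, a minimal sketch of attaching that layer from the AWS CLI (the function name my-function is a placeholder; note that --layers replaces the function's existing layer list):
aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:us-east-1:770693421928:layer:Klayers-p38-numpy:9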
AWS Lambda functions don't work this way. If you open the pandas package it will contain the numpy package with it, but they will not work.
The easy solution is to first download the required packages separately, depending on your Python version and work environment, from this site, unzip them, and add them to your project directory. Create a .zip of your project and deploy it to your AWS Lambda function. It will work this way.
You can refer to this site to follow the complete procedure.
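A rough sketch of that packaging flow, run on a Linux machine matching the Lambda runtime (my_handler.py, package/, and my-function are placeholder names; the other libraries from the question would be added the same way):
pip install -t package/ numpy==1.19.1 pandas==1.1.0
cp my_handler.py package/
(cd package && zip -r ../deployment.zip .)
aws lambda update-function-code --function-name my-function --zip-file fileb://deployment.zip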
Is your EC2 instance an Amazon Linux 2 machine? You could also try building and running a Docker image for Amazon Linux 2 and getting Python libs compatible with the environment you need in your Lambda, by volume mounting to your host.
Something similar to docker-lambda:
https://github.com/lambci/docker-lambda/tree/master/python3.8
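For example, a sketch using the docker-lambda build image linked above (requirements.txt and the python/ output folder are placeholder names):
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.8 \
    pip install -r requirements.txt -t python/
# the packages land in ./python/, ready to be zipped into a layer or deployment package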
I had the same issue. I tried packaging all libs with my base code and tried a custom Lambda layer by separating the numpy and pandas libs. Nothing worked.
Then I used the default AWS layers. Among the default layers, AWS provides layers like AWSSDKPandas, CodeGuru, Lambda Insights, etc. The AWSSDKPandas layer is packaged with pandas and other dependencies such as numpy.
So I removed the numpy dependency from my base package and added AWSSDKPandas as a Lambda layer. It worked well.

How to fix a Python script that gets a DLL error when started in CMD but runs in PyCharm and the Anaconda Command Prompt

I want to run a Python script in CMD via a batch file. The script fails with a DLL loading error on the numpy imports. The same Python script works in PyCharm (2019 Anaconda version) and in the Anaconda command prompt without errors.
For all of these I use the same Anaconda environment.
To reproduce the error:
1. Install Anaconda; don't set any path variables.
2. Create an environment stored outside the default folder.
3. conda create --prefix
4. conda activate
5. conda install numpy flask
6. Code for testing, test-numpy.py:
import numpy as np
print(np.abs([1, 2-4, 6]))
7. Create a batch script (a minimal sketch follows after this list).
8. Run the batch script.
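For illustration, the batch script can be as simple as the following (the file name run_test.bat and all paths are placeholders; it calls the environment's python.exe directly, without activating the environment, which is what triggers the DLL error):
rem run_test.bat
"C:\path\to\envs\my_env\python.exe" "C:\path\to\test-numpy.py"
pause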
Running in CMD a Python script that has only standard-library imports, or no imports at all, works fine.
Setting the PYTHONPATH variable in Windows is not an option.
What I did so far:
Checked whether there is a PATH error:
I printed sys.path; both PyCharm and the same Python script run via CMD gave the same results: it points to my Anaconda environment with the folders:
...\ (the enviorment itself)
...\python37.zip
...\DLLs
...\lib
...\lib\site-packages
Edit
Tested in Visual Studio Code; it gives the same error as running via CMD.
Found out that Visual Studio Code currently can't handle conda environments that were created with -p or --prefix and are not stored in the default path.
Uninstalled and reinstalled numpy.
Tested importing another module: Flask also fails with a DLL load error but works fine when run in PyCharm.
Tested on another PC with a different Anaconda version.
Another person could reproduce the same error I get.
My guess is that my code fails in CMD because Anaconda environments need to be activated beforehand and don't work standalone.
The error message I get when running my script in CMD (... points to my Anaconda environment path):
...\lib\site-packages\numpy\core\__init__.py", line 40, in <module>
    from . import multiarray
  File "...\lib\site-packages\numpy\core\multiarray.py", line 12, in <module>
    from . import overrides
  File "...\lib\site-packages\numpy\core\overrides.py", line 6, in <module>
    from numpy.core._multiarray_umath import (
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "TestNumpy.py", line 14, in <module>
    import numpy as np
  File "...\lib\site-packages\numpy\__init__.py", line 142, in <module>
    from . import core
  File "...\lib\site-packages\numpy\core\__init__.py", line 71, in <module>
    raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the multiarray numpy extension module failed. Most likely
you are trying to import a failed build of numpy. Here is how to
proceed:
- If you're working with a numpy git repository, try git clean -xdf (removes all files not under version control) and rebuild numpy.
- If you are simply trying to use the numpy version that you have installed: your installation is broken - please reinstall numpy.
- If you have already reinstalled and that did not fix the problem, then:
1. Check that you are using the Python you expect
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy versions you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
I found no other solution; it seems the conda environment really needs to be activated before calling the script:
call <file_path>/Anaconda/Scripts/activate.bat <file_path/Anaconda_environment> && python <file_path/pythonscript.py>
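Wrapped into a batch file, the fix might look like this sketch (all paths are placeholders for your Anaconda install, environment, and script):
rem run_test.bat -- activate the conda environment first, then run the script
call "C:\path\to\Anaconda3\Scripts\activate.bat" "C:\path\to\my_env"
python "C:\path\to\test-numpy.py"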

Missing scikit-learn packages

When I try to run
import sklearn.linear_model
I get an error
~\AppData\Local\ESRI\conda\envs\arcgispro-py3-clone1\lib\site-packages\sklearn\linear_model\least_angle.py in <module>
21 from .base import LinearModel
22 from ..base import RegressorMixin
---> 23 from ..utils import arrayfuncs, as_float_array, check_X_y, deprecated
24 from ..model_selection import check_cv
25 from ..exceptions import ConvergenceWarning
ImportError: DLL load failed: The specified module could not be found.
I did some digging, and it looks like I don't have the as_float_array package
ModuleNotFoundError: No module named 'sklearn.utils.as_float_array'
I tried reinstalling, but no change. What's going on?
This appears to be very specific to your IDE/environment. It will be easier to solve if you provide more details. You can try the following:
Are you able to run the following code?
import sklearn
print(sklearn.__version__)
Can you list the packages installed and see if sklearn is there?
pip3 list
Do you see the same problem in Anaconda alone, or when you run this on the command line?
Edit: possible solution
This looks like an access issue.
If you are on Windows, remove Anaconda and re-install it in a common folder instead of under admin-controlled folders (for example C:\anaconda instead of under C:\Program Files).
Launch a new terminal from Anaconda Navigator -> Environments, then run this code.
Usually a conda install solves the problem:
conda install -c anaconda scikit-learn
The same thing happened to me with numpy.

opencv import issue and double install

Previously ROS was installed on my system, which requires OpenCV for its implementation, and now I am using Anaconda, in which I need to use the OpenCV library once again. When writing Python code, import cv2 throws a module-not-found error.
Is there any way to use, from Anaconda, the OpenCV library that ROS installed?
I also installed OpenCV once again using conda:
$ conda install -c conda-forge opencv
However, opencv-3.3 was installed by the above command, and now my Python code shows a different import error, as shown below:
ImportError Traceback (most recent call last)
<ipython-input-9-6b49ad4d4ca5> in <module>()
1 from random import shuffle
2 import glob
----> 3 import cv2
4 shuffle_data = True # shuffle the addresses before saving
5 hdf5_path = 'dataset.hdf5' # address to where you want to save the hdf5 file
ImportError: /home/kamal/ros_catkin_ws/install_isolated/lib/python2.7/dist-packages/cv2.so: undefined symbol: PyCObject_Type
How can I specify which OpenCV library to use? Which environment variables do I need to change?
Any help will be appreciated.
Commenting out the line source /home/user/ros_catkin_ws/install_isolated/share/setup.bash in the .bashrc file doesn't help on its own. You also need to remove the extra sys.path entries added by the ROS environment.
In a Python console:
import sys
print(sys.path)
You will see multiple paths related to ROS. Then remove the unwanted part of the path with:
sys.path.remove('unwanted_path')
This will solve the problem, but then ROS will not work. To make ROS work again you need to append the removed path.
If someone has a better approach, please answer.
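For illustration, a minimal sketch of that workaround at the top of a script (the ROS path is taken from the traceback above and is just an example):
import sys

ros_path = '/home/kamal/ros_catkin_ws/install_isolated/lib/python2.7/dist-packages'
if ros_path in sys.path:
    sys.path.remove(ros_path)   # hide the ROS-provided Python 2.7 cv2

import cv2                      # now resolves to the conda-installed OpenCV

sys.path.append(ros_path)       # re-append if you still need ROS packages later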
