I keep receiving ModuleNotFoundError: No module named 'specreduce' when I try to import specreduce - python-3.x

I was following a tutorial, and I originally tried typing
from specreduce.calibration_data import load_MAST_calspec, which gave the same error.
I'm not sure how to properly import specreduce, so if anyone could show me how to do that I would be very appreciative!

Installing astropy only installs the core astropy package. specreduce is an affiliated package: a separate, special-purpose package built on Astropy. It appears the authors have not yet published a release on PyPI, so you may want to contact them to ask how best to install it. For now, you might be able to install directly from the repository with pip install git+https://github.com/astropy/specreduce.git.
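If the install from the repository succeeds, a quick way to confirm the package is importable (a minimal check, nothing specreduce-specific) is:
import specreduce
from specreduce.calibration_data import load_MAST_calspec  # the import from the question
print(specreduce.__file__)  # shows where the package was installed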

Related

ImportError cannot import 'langauge' from 'google.cloud' (unknown location)

I'm trying to incorporate google-cloud-language to use sentiment analysis. I followed the guide on the google documentation website, like so:
from google.cloud import language
When I run the script I receive the following error:
ImportError cannot import 'langauge' from 'google.cloud' (unknown location)
I try to run the script like so:
python3 scriptName.py
I've installed google-cloud-language like so:
pip3 install google-cloud-language
pip install google-cloud-language
I've done that in a virtual environment and outside a virtual environment.
Nothing works. How do I properly set up my script to successfully import the NLP module from google cloud?
Happy to see that you resolved your issue. However, I would like to point some things out:
In the error that you posted in the description there is a typo. It looked for a module named langauge instead of language.
pip3 install google-cloud-language and python3 -m pip install google-cloud-language should be equivalent, but that may not always be the case, for example when they are not tied to the same Python installation. Check this answer for more details.
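A quick way to check whether pip and your interpreter agree (a generic diagnostic, not specific to this package) is to compare the paths they report:
python3 -c "import sys; print(sys.executable)"
python3 -m pip --version
pip3 --version
If the last two report different Python paths, packages installed with pip3 will not be visible to python3.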
As I was writing this question and going through other similar questions on Stack Overflow, I found another command to try, and it seems to have worked.
python3 -m pip install google-cloud-language
I did this inside my virtualenv. It could also work outside of one, but since I got it working, I did not test the case outside a virtualenv.

How do I import a python package which is saved locally from a Git pull

Beginner here, so I'm not even sure if this is even close to best practice.
I've pulled a copy of a python package from Git, specifically https://github.com/slundberg/shap.git.
The result is saved where all my other Python packages are; I've called it shap_mv. The question is: if I make some changes to shap_mv and want to test the overall result, how do I import it as a package into Python?
Currently, importing shap_mv works but the package has no contents. There is a subfolder with an __init__.py, and when I try to import that folder as a package it seems to be missing functions and fails on import.
If I'm grossly far from best practice, then how should I work on the package and test the results?
Thank you!
Suppose the path of __init__.py is C:\\foo\\shap\\bar\\__init__.py and you want to import baz at C:\\foo\\shap\\bar\\baz.py.
import sys
sys.path.append('C:\\foo\\shap\\bar\\')  # add the package directory to the module search path
import baz                               # now importable from the appended directory
baz.somemethod()                         # call a function defined in baz.py
The best practice is to use virtualenv and pip:
pip install shap
or
pip install "git+https://github.com/slundberg/shap.git#egg=shap"
or
git clone https://github.com/slundberg/shap.git
pip install ./shap
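Since the goal is to modify the package and then test it, an editable install (a standard pip feature) may be the most convenient option; it makes Python import the package directly from your working copy:
git clone https://github.com/slundberg/shap.git
pip install -e ./shap
After this, changes made in the clone are picked up the next time you import shap.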
I solved this by putting an empty __init__.py file in the shap_mv folder and then running import shap_mv.shap as shap in Python.
This post helped:
importing a module in nested packages
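For reference, the layout described above would look roughly like this (directory names as given in the question), after which the import works:
shap_mv/
    __init__.py      <- empty file added manually
    shap/            <- the package pulled from Git
        __init__.py
        ...
import shap_mv.shap as shap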

How to import Python package (Pytorch-neat) that is not installable from pip/conda repositories?

I am trying to use the PyTorch-NEAT package https://github.com/uber-research/PyTorch-NEAT but I don't understand the workflow of using it. I already installed the neat-python package and I can import it using import neat in my Jupyter notebook. But what should I do with the PyTorch-NEAT code? There is no pytorch-neat package in the Conda or pip repositories, so I guess that this PyTorch-NEAT code is not compiled and distributed as a Python package for Jupyter notebooks. But what should I do with this code? E.g. a sample script contains the code:
import neat
from pytorch_neat.multi_env_eval import MultiEnvEvaluator
So neat is a package and I am importing it. But how should I understand the from clause? Should I load the PyTorch-NEAT scripts somehow in the previous cells of my notebook so that I can then use this from clause? Or maybe I should build the PyTorch-NEAT package locally, install it from a local repository, and import it similarly to the neat package? But if so, why do the examples use the from clause?
I am starting to use Python and I am greatly confused with all of this!
To import from pytorch_neat you have to clone the repository and manually copy the pytorch_neat directory into your site-packages (or any directory on sys.path).
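Alternatively, instead of copying files into site-packages, you can point sys.path at the clone; a minimal sketch (the clone location below is illustrative):
import sys
sys.path.insert(0, '/path/to/PyTorch-NEAT')  # hypothetical path to the git clone
from pytorch_neat.multi_env_eval import MultiEnvEvaluator  # the import from the sample script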

ClobberError when trying to install the nltk_data package using conda?

I am trying to install the nltk_data package into my environment natlang using conda with the following command:
(natlang) C:\Users\asus>conda install -c conda-forge nltk_data
I receive the following errors:
Verifying transaction: failed
CondaVerificationError: The package for nltk_data located at
C:\Users\asus\Anaconda3\pkgs\nltk_data-2017.10.22-py_0
appears to be corrupted. The path
'lib/nltk_data/corpora/propbank/frames/con.xml'
specified in the package manifest cannot be found.
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme'
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-dictionaries'
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-nombank-proposition-structure'
I am working on Anaconda 3, python version 3.6.5, windows 10 enterprise.
Can someone please tell me why this error is occurring and how I can fix it?
Background: I originally wanted to use punkt in one of my programs using the code lines:
import nltk_data
nltk.download()
This would open the NLTK downloader, and after installing all the packages, including punkt, I would still encounter the following error on running the program again:
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
I tried rerunning nltk.download() and nltk.download('punkt') a couple of times with no change. So then I decided to simply install the nltk_data package into my environment, on the assumption that if I install the package into the env itself, I won't have to use the nltk.download function to use punkt.
Summarizing, I have the following two questions:
If I install the nltk_data package into my env, do I still need to use the nltk.download function in my code? If yes, how do I resolve the LookupError?
If installing into the env is enough, then how do I resolve the ClobberError?
(ps: I apologize if this sounds stupid, I am very new to machine learning and working with python in general.)
The nltk_data repository is a collection of zip files and XML metadata. Usually, it is not installed through packaging tools such as conda or pip.
But there is this utility from conda-forge that tries to install nltk_data: https://github.com/conda-forge/nltk_data-feedstock
To use it, on the terminal/command prompt/console, first add the conda-forge channel:
conda config --add channels conda-forge
Then you shouldn't need the -c option, and just use:
conda install nltk_data
Please try the above and see whether that gets rid of the ClobberError.
This error is asking you to download a specific nltk dataset called punkt:
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Running nltk.download() without specifying which dataset you want will call up a tkinter GUI, which normally wouldn't be possible if you are accessing your machine remotely without a GUI.
If you're unsure of which resource you need, I would suggest using the popular collection.
import nltk
nltk.download('popular')
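If you are working on a remote machine and want the data in a specific location, nltk.download also accepts a download_dir argument (the path below is illustrative):
import nltk
nltk.download('punkt', download_dir='/home/asus/nltk_data')  # hypothetical target directory
nltk.data.path.append('/home/asus/nltk_data')  # make sure NLTK searches that location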
Answering question 2 first: there have been similar issues across Windows machines. It's better to use the nltk.download() function if you want to use punkt or a similar module.
1) The LookupError can easily be resolved. It was caused by a typo. Instead of
import nltk_data
it should be
import nltk.data
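Putting the two fixes together, a minimal sketch of the intended usage would be:
import nltk
import nltk.data  # note: nltk.data, not nltk_data
nltk.download('punkt')  # fetch the punkt tokenizer data once
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')  # load the English punkt model
print(tokenizer.tokenize('Hello world. This is a test.'))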

PyCrypto for Python 3.4 on Windows 8.1 cannot find winrandom module

I've been trying to install Paramiko in a Python 3.4 virtual environment. I tried pip installing it, as well as easy_installing it with pre-built binaries, and neither worked (suggested here). Both kept saying that winrandom is not a valid Win32 application.
I found that it is a problem with PyCrypto, not Paramiko directly, so I installed from source and from pre-built binaries, and still can't get it to find/use a module called winrandom.
Have any of you solved this problem? It's very frustrating.
From the same link, change nt.py (...\Lib\site-packages\Crypto\Random\OSRNG\nt.py)
change
import winrandom
to
from . import winrandom
Not really sure if this is helpful for you guys, but I had the same problem: I was not able to install winrandom for Python 3.4.
With the following link, I fixed my problem:
https://github.com/dlitz/pycrypto/commit/10abfc8633bac653eda4d346fc051b2f07554dcd
PyCrypto 2.6.1 has a broken import that's fixed in 2.7. Probably all you need to do is fix that import in Crypto\Random\OSRNG\nt.py, as shown above.
