Install pywin32 package in Google Colab or Kaggle notebook environment

The pywin32 package was required as part of the requirements to set up the environment for a pix2pix implementation codebase; pywin32 is used to enable access to the Win32 API features from Python. I tried to set up the environment in Google Colab, and the following error message was produced during pywin32 setup:
ERROR: Could not find a version that satisfies the requirement pywin32 (from versions: none)
ERROR: No matching distribution found for pywin32
A similar issue, with the following message, was encountered while trying the same in Kaggle:
ERROR: Could not find a version that satisfies the requirement pywin32
ERROR: No matching distribution found for pywin32
The same issue occurred when I tried it in my local Python environment (Python 3.6.10) on my Mac.
I also attempted to install the pywin32 package from source, using the latest tag build-300 as suggested for Python 3.5+. No luck: the installation terminated because of a missing dependency, the winreg module, and the following message was shown:
ModuleNotFoundError: No module named 'winreg'
Likewise, I tried fake-winreg, but no luck at all. I checked the platform in Google Colab with print(sys.platform); it shows linux. Please advise if there is any workaround to install the pywin32 package in Colab, and/or a resolution to any of the issues reported in the steps above. Thank you in advance.
Note:
The issue can be replicated by simply trying pip install pywin32 in a native Python environment, or !pip install pywin32 in a Colab or Kaggle environment.

Unfortunately, you can't install it on Linux Python. pywin32 is a package of extension modules for accessing the Windows C and COM APIs, and it only runs on Windows Python:
Python extensions for Microsoft Windows Provides access to much of the Win32 API, the ability to create and use COM objects, and the Pythonwin environment.
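If the codebase's requirements file pins pywin32 unconditionally, a common workaround is a PEP 508 environment marker, so that pip installs the package only on Windows and skips it on the Linux hosts behind Colab and Kaggle. A minimal sketch of the requirements.txt entry (the version pin is illustrative, matching the build-300 tag mentioned above):

pywin32==300; sys_platform == "win32"

With the marker in place, pip install -r requirements.txt completes on Linux by simply skipping pywin32; any code that imports win32api or similar modules still needs its own sys.platform guard.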

Related

ImportError: Unable to import required dependencies: numpy: error when trying to migrate

I am using Django and Python with Anaconda. At the beginning, I was able to migrate after adding a sqlite3.dll file. However, when I try to migrate again, it gives me this error. I have tried uninstalling and reinstalling both numpy and pandas in the Project Interpreter. I'm using PyCharm and the Conda package manager with a Conda environment. I also tried migrate --run-syncdb, but it still gives me the error.
How would I fix this so that I can use .objects.bulk_create()? One recommendation for fixing .objects.bulk_create() was to run migrate --run-syncdb, but I currently cannot run it because of the error below.
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "C:\Users\PuTung\anaconda3\envs\swe_project\python.exe"
* The NumPy version is: "1.19.2"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
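A short diagnostic sketch for this kind of failure: confirm which interpreter and which NumPy actually get resolved, since a DLL load failure for _multiarray_umath on Windows typically indicates a mismatched or broken NumPy build in the active environment. The expected interpreter path below comes from the error output above.

import sys
print(sys.executable)  # expect C:\Users\PuTung\anaconda3\envs\swe_project\python.exe

# If this import reproduces the DLL error, reinstall NumPy inside the env,
# e.g. conda install numpy --force-reinstall
import numpy
print(numpy.__version__, numpy.__file__)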

Huggingface Transformers ByteLevelBPETokenizer tokenizer not found

I'm trying to run through the (new) tutorial here: https://huggingface.co/blog/how-to-train, but hit an error trying to load the ByteLevelBPETokenizer. I started from an existing conda env and also tried with a totally fresh env, but both give the same error:
Exception has occurred: ImportError
cannot import name 'ByteLevelBPETokenizer' from 'tokenizers' (/home/james/anaconda3/envs/torch/lib/python3.7/site-packages/tokenizers/__init__.py)
Any thoughts as to what might be wrong?
I'm on Ubuntu 18.04, Python 3.7
Okay, it turns out the transformers installer pulls in an older version of tokenizers (0.0.11). So...
pip uninstall tokenizers
pip install tokenizers==0.4.2
...fixes it.
It does issue a warning: ERROR: transformers 2.4.1 has requirement tokenizers==0.0.11, but you'll have tokenizers 0.4.2 which is incompatible. This can safely be ignored (this answer came from @julien-c at huggingface/tokenizers).
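A quick sanity check after the reinstall, assuming the same environment: the import that failed should now succeed, and the tokenizer should instantiate with no arguments, as in the tutorial.

# Verify the fix: this is the exact import that raised ImportError before.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
print(type(tokenizer).__name__)  # -> ByteLevelBPETokenizer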

How to fix the install of the requests library?

I am learning Python, using Windows and the VS Code editor.
My .py file simply contains: import requests
and I see this error: ModuleNotFoundError: No module named 'requests'
I think the library is installed, because:
pip freeze shows: requests==2.22.0
pip install requests shows: Requirement already satisfied: requests in d:\python\lib\site-packages (2.22.0)
What am I missing?
Thanks, Peter
It's likely you're running a Python version that's in a different location. Do you have multiple versions of Python installed?
One easy way to see what packages you're using on that version of Python is to do:
>>> import site
>>> site.getsitepackages()
It's generally a good idea to use virtual environments for Python to help with that kind of package control.
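To see the mismatch directly, a small diagnostic sketch: compare the interpreter VS Code is actually running against the path pip reported (d:\python\lib\site-packages above).

import sys
import site

print(sys.executable)          # the interpreter executing this .py file
print(site.getsitepackages())  # where this interpreter looks for packages

If sys.executable doesn't point at d:\python, select that interpreter in VS Code (Python: Select Interpreter), or install requests for the interpreter you're actually running with python -m pip install requests.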

ClobberError when trying to install the nltk_data package using conda?

I am trying to install the nltk_data package into my environment natlang using conda, with the following command:
(natlang) C:\Users\asus>conda install -c conda-forge nltk_data
I receive the following errors:
Verifying transaction: failed
CondaVerificationError: The package for nltk_data located at
C:\Users\asus\Anaconda3\pkgs\nltk_data-2017.10.22-py_0
appears to be corrupted. The path
'lib/nltk_data/corpora/propbank/frames/con.xml'
specified in the package manifest cannot be found.
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme'
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-dictionaries'
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-nombank-proposition-structure'
I am working on Anaconda 3, Python version 3.6.5, Windows 10 Enterprise.
Can someone please tell me why this error is occurring and how I can fix it?
Background: I originally wanted to use punkt in one of my programs using the code lines:
import nltk_data
nltk.download()
This would open the NLTK downloader, and after installing all the packages, including punkt, and running the program again, I would still encounter the following error:
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
I tried rerunning nltk.download() and nltk.download('punkt') a couple of times with no change. So then I decided to simply install the nltk_data package into my environment, based on the assumption that if I install the package into the env itself, I won't have to use the nltk.download function to use punkt.
Summarizing, I have the following two questions:
If I install the nltk_data package into my env, do I still need to use the nltk.download function in my code? If yes, how do I resolve the LookupError?
If installing into the env is enough, then how do I resolve the ClobberError?
(ps: I apologize if this sounds stupid, I am very new to machine learning and working with python in general.)
The nltk_data repository is a collection of zipfiles and XML metadata. Usually, it is not installed through packaging tools such as conda or pip.
But there is a utility from conda-forge that tries to install nltk_data: https://github.com/conda-forge/nltk_data-feedstock
To use it, on the terminal/command prompt/console, first add the conda-forge channel:
conda config --add channels conda-forge
Then you shouldn't need the -c option, and just use:
conda install nltk_data
Please try the above and see whether that gets rid of the ClobberError.
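If the conda package installs but NLTK still can't see the data, one thing to check is NLTK's search path: nltk.data.path is the list of directories NLTK scans for resources, and you can append the env's data directory to it. A small sketch (the appended path is hypothetical; use wherever the conda package actually placed nltk_data in your env):

import nltk

print(nltk.data.path)  # directories NLTK currently searches
# Hypothetical location; adjust to where conda put the data in your env
nltk.data.path.append(r"C:\Users\asus\Anaconda3\envs\natlang\lib\nltk_data")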
This error is asking you to download a specific NLTK dataset called punkt:
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Running nltk.download() without specifying which dataset you want will bring up a tkinter GUI, which normally wouldn't be possible if you are accessing your machine remotely without a GUI.
If you're unsure of which resource you need, I would suggest using the popular collection.
import nltk
nltk.download('popular')
Answering question 2 first: there have been similar issues across Windows machines. It's better to use the nltk.download() function if you want to use punkt or a similar module.
1) The lookup error can easily be resolved. It was because of a typo. Instead of
import nltk_data
it should be
import nltk.data
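Either way, a quick check that punkt is actually visible to NLTK after downloading; nltk.data.find raises a LookupError if the resource is missing:

import nltk

nltk.download('punkt')                     # no-op if already present
print(nltk.data.find('tokenizers/punkt'))  # raises LookupError if absent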

psycopg2 import error due to failure to load libraries

I have tried many ways of installing psycopg2 after having installed PostgreSQL using the one-click installer, but every way I try confronts me with the same import error in Python: ImportError: dlopen(/Library/Python/2.5/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: libpq.5.dylib
Referenced from: /Library/Python/2.5/site-packages/psycopg2/_psycopg.so
Reason: image not found
I am on Mac OS X 10.5.8 and using Python 2.5. I installed PostgreSQL from the installer (not via MacPorts), and it installed into /Library. I added /Library/PostgreSQL/9.1/bin to the setup.cfg of the psycopg2 source, as instructed in the INSTALL file and everywhere on the internet, and then ran sudo python setup.py build followed by sudo python setup.py install.
I also tried exporting /Library/PostgreSQL/9.1/bin to my path instead and running sudo pip install psycopg2. But the exact same problem occurred in all of these scenarios.
I would greatly appreciate some help with this.
Best
Marion
The problem is that at runtime the libpq.5.dylib file can't be found, because it is not in one of the default locations searched by the dynamic (runtime) linker. Try defining the environment variable DYLD_LIBRARY_PATH before launching Python. I am no Mac OS X expert, but something like:
export DYLD_LIBRARY_PATH=/Library/PostgreSQL/9.1/lib
will probably work.
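With the variable exported in the same shell session, a small Python check can confirm both that the linker can now locate libpq and that psycopg2 imports cleanly (ctypes.util.find_library follows the dyld search paths, so it should pick up the exported directory):

import ctypes.util

print(ctypes.util.find_library("pq"))  # should resolve to a libpq dylib path

import psycopg2  # should now load without the dlopen error
print(psycopg2.__version__)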
