Why does PyCharm not realize that tf.keras is installed? - python-3.x

Although TensorFlow is installed, PyCharm does not realize that the module tf.keras exists.
Hovering over keras shows the following message: "Cannot find reference 'keras' in '__init__.py'"
Why?

You are using a PyCharm version lower than 2019.3.
tf.keras from TensorFlow 2.0 is not resolved by PyCharm versions older than 2019.3.
You can find the answer to your question here: Unable to import Keras (from TensorFlow 2.0) in PyCharm
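If upgrading PyCharm is not immediately possible, a commonly suggested workaround (a sketch, assuming TensorFlow 2.x is installed; whether the editor resolves it still depends on the PyCharm version) is to import keras explicitly instead of going through the tf.keras attribute:
import tensorflow as tf
from tensorflow import keras  # explicit import; works at runtime in TF 2.x
model = keras.Sequential()    # example usage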

Related

Cannot import torch: Error loading "..\caffe2_nvrtc.dll" or one of its dependencies

I am on a Windows 10 64 bit system.
PyTorch with CUDA has been working successfully for some time.
Today I tried to upgrade to the latest version of Pytorch (1.13) using
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
Now I cannot import torch. I get the error:
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\alan\anaconda3\lib\site-packages\torch\lib\caffe2_nvrtc.dll" or one of its dependencies.
I have tried both in a Jupyter notebook and in the Spyder IDE.
I have tried completely removing Anaconda and reinstalling afresh and then reinstalling Pytorch with no success.
I do not believe I have any other versions of python installed.
The offending dll (caffe2_nvrtc.dll) does seem to be in the file location specified.
I have found various similar problems reported but they all date back to 2020 or earlier and none of them seemed to have a satisfactory solution.
Can anyone point me in the right direction?
I still do not understand why using conda did not work, but I tried again using pip and that did work.
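For reference, the corresponding pip command for PyTorch 1.13 with CUDA 11.7 is roughly the following (index URL per the PyTorch install selector; adjust for your CUDA version):
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117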
I experienced the same problem today.
It turns out that when I use the Anaconda Prompt, the problem disappears.
I then speculated that the only difference between the two scenarios is that in the Anaconda Prompt I use the base Anaconda environment; probably the conda environment is somehow not activated inside Spyder.
So the solution is to open Spyder from the Anaconda Prompt, and then it works.
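A minimal sketch of what that looks like (assuming the environment PyTorch was installed into is base; substitute your own environment name):
conda activate base
spyder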

Coremltools 5.1 on windows 10 gives "Unable to load libmodelpackage"

When I installed coremltools 5.1 on Windows 10 (version 21H1, build 19043.1348, Python 3.7.8) and opened a Core ML model, I got the error message "Unable to load libmodelpackage. Cannot make save spec". The mlmodel can be found on this page: https://github.com/likedan/Awesome-CoreML-Models
import coremltools
spec = coremltools.utils.load_spec(r"MNIST.mlmodel")
It works well with coremltools 4.1. Do you have any idea about this issue? Does coremltools 5.x no longer support Windows? I did not find any information about Windows support in the coremltools release notes.
Thank you!
coremltools never officially supported Windows; see this issue regarding version 5.1+: https://github.com/apple/coremltools/issues/1348

After downgrading TensorFlow 2.0 to 1.15, results changed and results reproduction is not available

Would you help me achieve reproducible results with TensorFlow 1.15 without restarting the Python kernel? And why are the output results in TF 2.0 and TF 1.15 different with absolutely identical parameters and dataset? Is it possible to achieve identical output?
More details:
I tried to interpret model results in TF 2.0 by:
import shap
background = df3.iloc[np.random.choice(df3.shape[0], 100, replace=False)]
explainer = shap.DeepExplainer(model, background)
I received the error:
`get_session` is not available when using TensorFlow 2.0.
According to the SO topic, I tried to set up TF 2.0 compatibility with TF 1 by putting this at the top of my code:
import tensorflow.compat.v1 as tf
But the error appeared again.
Following advice from many users, I downgraded TF 2 to TF 1.15. That solved the problem and the shap module now interprets the results, but:
1) to make results reproducible I now have to change tf.random.set_seed(7) to tf.random.set_random_seed(7) and restart the Python kernel every time! In TF 2 I didn't have to restart the kernel.
2) the prediction results have changed, especially the economical efficiency (that is, TF 1.15 wrongly classifies more of the important samples than TF 2.0).
TF 2:
Accuracy: 94.95%, Economical efficiency = 64%
TF 1:
Accuracy: 94.85%, Economical efficiency = 56%
The code of the model is here
First, results differ not only between TF 1 and TF 2 versions, but also between TF 2.0 and TF 2.2. This probably depends on different internal parameters in the packages.
Second, TensorFlow 2 works with DeepExplainer with the following package versions:
import tensorflow
import pandas as pd
import keras
import xgboost
import numpy
import shap
print(tensorflow.__version__)
print(pd.__version__)
print(keras.__version__)
print(xgboost.__version__)
print(numpy.__version__)
print(shap.__version__)
output:
2.2.0
0.24.2
2.3.1
0.90
1.17.5
0.35.0
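One way to pin this combination, assuming a pip-based environment (a sketch; see the caveats below for Python 3.5 and 3.7):
pip install tensorflow==2.2.0 pandas==0.24.2 keras==2.3.1 xgboost==0.90 numpy==1.17.5 shap==0.35.0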
But you will face some difficulties in updating the libraries.
Running TF 2.2 on Python 3.5, you will face the error 'DLL load failed: The specified module could not be found'.
It can be solved by installing a newer Visual C++ package. See this: https://github.com/tensorflow/tensorflow/issues/22794#issuecomment-573297027
Link to download the package: https://support.microsoft.com/ru-ru/help/2977003/the-latest-supported-visual-c-downloads
In Python 3.7 you will not find shap 0.35.0 as a whl file, only as a tar.gz archive, which gives the error "Install Visual C++ package", and installing that package doesn't help.
Then download shap 0.35.0 for Python 3.7 here: https://anaconda.org/conda-forge/shap/files. Run Anaconda shell. Type: conda install -c conda-forge C:\shap-0.35.0-py37h3bbf574_0.tar.bz2.
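Regarding point 1) of the question (reproducibility under TF 1.15 without restarting the kernel), a commonly used seed setup looks roughly like this (a sketch; the seed value, the thread limits, and the helper name reset_seeds are illustrative choices, not part of the asker's code):
import os
import random
import numpy as np
import tensorflow as tf  # TF 1.15

def reset_seeds(seed=7):
    # Re-runnable in the same kernel: clear the old graph/session first.
    tf.keras.backend.clear_session()
    tf.reset_default_graph()
    os.environ["PYTHONHASHSEED"] = str(seed)  # fully effective only if set before Python starts
    random.seed(seed)                  # Python RNG
    np.random.seed(seed)               # NumPy RNG
    tf.random.set_random_seed(seed)    # TF 1.x graph-level seed
    # Single-threaded ops remove one common source of nondeterminism.
    config = tf.ConfigProto(intra_op_parallelism_threads=1,
                            inter_op_parallelism_threads=1)
    tf.keras.backend.set_session(tf.Session(config=config))

reset_seeds(7)  # call before (re)building and training the model
Calling reset_seeds() before each rebuild should give repeatable runs within the same kernel, although some GPU ops can still introduce small nondeterminism.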

Can not identify the version of imported packages in my Jupyter Notebook

Nothing seems to work when I try to identify the version of an imported package.
What is causing this problem? What can I do about it?
I am using Python 3.7, Anaconda3 (2019) and Windows 10.
If I try it in notebooks.azure.com then everything works just fine:
import tensorflow
tensorflow.__version__
'1.12.2'
You can also try:
!pip show tensorflow
Name: tensorflow
Version: 1.12.2
Summary: TensorFlow is an open source machine learning framework for everyone.
/..../
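If a module does not expose __version__ at all, a fallback (a sketch, assuming the package was installed with its metadata) is to query setuptools' pkg_resources:
import pkg_resources
print(pkg_resources.get_distribution("tensorflow").version)  # e.g. '1.12.2'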

AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer' in python 3.6.4 and opencv 3.4

I tried several different things and I need this to work in Python 3.6. I also tried
roc = cv2.face.LBPHFaceRecognizer_create() but I get the same error, this time for the face module. Need help.
The same code works in Python 2.7.13 with OpenCV 2.4.10.
How can I make it work in Python 3.6 and OpenCV 3.4?
I am working in a Windows environment.
You likely have the opencv-python package installed instead of the opencv-contrib-python package. A lot of the extra and patented algorithms were moved out of the main OpenCV repository and into the contrib repository when OpenCV 3 came out.
Try switching packages.
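A minimal sketch of the switch in a pip-managed environment (the two packages should not be installed side by side):
pip uninstall opencv-python
pip install opencv-contrib-python
After that, the contrib face module should resolve:
import cv2
recognizer = cv2.face.LBPHFaceRecognizer_create()  # available once the contrib modules are installed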
