Raspberry Pi import tensorflow Segmentation fault - linux

I have installed tensorflow on a Raspberry Pi 3 and it worked correctly (following these instructions). After installing a number of libraries that I needed for loading a Keras model trained on my PC (numpy, scipy, etc.), I observed the following error while importing tensorflow:
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xa
Segmentation fault
I have updated all the necessary libraries, and importing any of them doesn't result in an error in a Python 3 script. Also, the error is not Keras-related, because the statement import tensorflow raises the error no matter what follows it. Should I uninstall tensorflow and start all over again?
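For reference, this error means the TensorFlow wheel was compiled against a newer numpy C API than the numpy actually being imported, so upgrading numpy (for example with pip3 install --upgrade numpy) is usually enough; a full TensorFlow reinstall shouldn't be needed. A minimal check, as a sketch:

import numpy

# Confirm which numpy Python 3 is actually picking up; a stale copy elsewhere
# on sys.path is a common cause of the API-version mismatch reported above
print(numpy.__version__)
print(numpy.__file__)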

Related

Segmentation fault (core dump) while training a Hugging Face model

I am trying to train a machine-translation model from Hugging Face (t5-large) using the europarl_bilingual dataset.
When I execute the code on my Linux GPU machine, it runs without error until it downloads the model in:
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
It performs the download, but when calling the model it throws the error:
Segmentation fault (core dumped)
I faced the same error while importing the tokenizer, but I found the solution here: ImportError when from transformers import BertTokenizer,
by downgrading the tokenizer version to:
conda install -c huggingface tokenizers=0.10.1 transformers=4.6.1
I tried keeping the tokenizers version at 0.10.1 and upgrading transformers to the latest version for Linux (4.11.3), but I got the same error.
Do you have any idea what might be happening or how to fix it?
Thanks,
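One way to narrow this down (a diagnostic sketch, not a fix; the checkpoint name is the t5-large from the question): enable Python's faulthandler so the native crash also prints the Python stack that triggered it:

import faulthandler
faulthandler.enable()  # dumps the Python traceback if the interpreter receives SIGSEGV

from transformers import AutoModelForSeq2SeqLM

# The crash report should now show which call dies (e.g. inside torch or tokenizers)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")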

After downgrading TensorFlow 2.0 to 1.15, results changed and are no longer reproducible

Would you help me achieve reproducible results with TensorFlow 1.15 without restarting the Python kernel? And why are the output results in TF 2.0 and TF 1.15 different with absolutely identical parameters and dataset? Is it possible to achieve identical output?
More details:
I tried to interpret model results in TF 2.0 by:
import numpy as np
import shap

# df3 holds the features; sample 100 rows as the background set for DeepExplainer
background = df3.iloc[np.random.choice(df3.shape[0], 100, replace=False)]
explainer = shap.DeepExplainer(model, background)
I received an error:
`get_session` is not available when using TensorFlow 2.0.
According to a related SO topic, I tried to set up TF 2.0 compatibility with TF 1 by placing this at the top of my code:
import tensorflow.compat.v1 as tf
But the error appeared again.
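As an aside, the compat import alone does not switch off TF 2.x behavior; the usual recipe (a sketch of the standard two-line shim; whether it satisfies shap here is untested) also disables v2 behavior before any graph or session code runs:

import tensorflow.compat.v1 as tf

# Restores graph mode and sessions, which the get_session call above depends on
tf.disable_v2_behavior()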
Following advice from many users, I downgraded TF 2 to TF 1.15; it solved the problem, and the shap module now interprets the results, but:
1) to make results reproducible I now have to change tf.random.set_seed(7) to tf.random.set_random_seed(7) and restart the Python kernel every time! In TF 2 I didn't have to restart the kernel (see the seed-reset sketch below).
2) the prediction results have changed, especially the economical efficiency (that is, TF 1.15 wrongly classifies more of the important samples than TF 2.0):
TF 2:
Accuracy: 94.95%, Economical efficiency = 64%
TF 1:
Accuracy: 94.85%, Economical efficiency = 56%
The code of the model is here
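On point 1, a minimal sketch of re-seeding TF 1.15 between runs without restarting the kernel (this assumes graph-mode TF 1.x and that the model is rebuilt after each reset):

import random
import numpy as np
import tensorflow as tf  # TF 1.15

def reset_seeds(seed=7):
    # Dropping the old default graph removes previously created random ops,
    # so re-seeding takes effect without restarting the Python kernel
    tf.reset_default_graph()
    tf.random.set_random_seed(seed)
    np.random.seed(seed)
    random.seed(seed)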
First, the results differ from each other not only between TF 1 and TF 2, but also between TF 2.0 and TF 2.2. Probably this depends on different internal parameters in the packages.
Second, TensorFlow2 works with DeepExplainer in the following versions:
import tensorflow
import pandas as pd
import keras
import xgboost
import numpy
import shap
print(tensorflow.__version__)
print(pd.__version__)
print(keras.__version__)
print(xgboost.__version__)
print(numpy.__version__)
print(shap.__version__)
output:
2.2.0
0.24.2
2.3.1
0.90
1.17.5
0.35.0
But you will face some difficulties in updating the libraries.
In Python 3.5, running TF 2.2, you will face the error 'DLL load failed: The specified module could not be found'.
It can be solved by installing a newer Visual C++ package. See this: https://github.com/tensorflow/tensorflow/issues/22794#issuecomment-573297027
Link to download the package: https://support.microsoft.com/ru-ru/help/2977003/the-latest-supported-visual-c-downloads
In Python 3.7 you will not find a shap 0.35.0 build with the whl extension, only the tar.gz one, which gives the error "Install visual c++ package"; installing that package doesn't help.
Instead, download shap 0.35.0 for Python 3.7 here: https://anaconda.org/conda-forge/shap/files. Run the Anaconda shell and type: conda install -c conda-forge C:\shap-0.35.0-py37h3bbf574_0.tar.bz2.

Working with Keras in JupyterLab using TensorFlow

I'm trying to use Keras in JupyterLab with TensorFlow (Python 3.6) through Miniconda.
JupyterLab keeps giving me a restarting error (Kernel Restarting: The kernel for Desktop/Transfer/Untitled.ipynb appears to have died. It will restart automatically.)
The kernel-restart error appears whenever I try to import Keras. The code I used is: from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img.
(screenshot of the code)
Therefore, I assume there is a problem between Keras and the kernel, even though I already installed ipykernel in my tensorflow environment ((tensorflow) C:\Users\user).
(screenshot of the pip list)
I still couldn't figure out this kernel and Keras problem :(
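One way to see what is actually killing the kernel (a sketch; the import line is the one from the question): run the same import in a plain Python shell outside Jupyter, where the real crash message is printed instead of being hidden behind the kernel restart. On older CPUs, for example, the prebuilt TensorFlow 1.6+ wheels abort with "Illegal instruction" because they require AVX support.

# Run this in a terminal with the tensorflow conda environment activated,
# not inside Jupyter, so any native crash message stays visible
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
print("Keras imported fine outside Jupyter")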

After installing keras and tensorflow, python script crashes with Segmentation Fault (core dumped)

Why does an import statement (eventually leading to a Keras import) work only after handling a deliberate exception?
I have a Python 3.5 project with multiple main.py entry points.
One entry point runs on the server, another on the client(s), using pycos to communicate (network IPC).
My server/main.py used to execute fine, but after installing keras and tensorflow with
pip install -r requirements.txt --user
the server/main.py crashes with:
Using TensorFlow backend.
Segmentation fault (core dumped)
I have monitored RAM and hard disk space on the server VM, and nothing indicates why server/main.py is crashing... It doesn't even import keras or tensorflow in the import trace. I guess the __init__.py import trace eventually leads to the module which imports the keras/tensorflow packages.
Can anyone advise on how to find the issue?
requirements.txt
jsonpickle
matplotlib
seaborn
numpy
pycos
statsmodels
pandas
sklearn
pymongo
keras
tensorflow==2.0.0-beta1
Turns out pycos executes an underlying scheduler which somehow interferes with TensorFlow (causing segfaults) if the Python modules that call pycos are exposed in the __init__.py of their respective Python sub-packages.
Removing the monitoringSimulation/monitor.py etc. imports from monitoringSimulation/__init__.py fixes the issue.
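As an illustration (module names are from the answer; the deferred-import helper is a hypothetical sketch, not the project's actual code), the fix amounts to keeping monitoringSimulation/__init__.py empty and importing the pycos-using module only where it is needed:

# anywhere monitor is actually required, e.g. in server/main.py
def get_monitor():
    # Importing here defers pycos's scheduler start-up until first use,
    # instead of triggering it during package import alongside TensorFlow
    from monitoringSimulation import monitor
    return monitor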
Will submit a bug to pycos, as I will eventually need to expose the monitor.py module to other subpackages.

How can we fix the "could not initialize a memory descriptor, in file tensorflow/core/kernels/mkl_maxpooling_op.cc:578" error while using InceptionV3?

I am trying to train InceptionV3 on a dataset that I have. I am using Python 3.5 in Spyder with Keras and TensorFlow as the backend.
I am getting the following error:
could not initialize a memory descriptor, in file tensorflow/core/kernels/mkl_maxpooling_op.cc:578
I have tried installing TensorFlow with conda install -c anaconda tensorflow, but that didn't remove the error.
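One workaround often suggested for Anaconda's MKL builds of TensorFlow (a hedged sketch; it assumes the installed build honors this flag, and the alternative is installing the non-MKL pip wheel instead) is to disable the MKL kernels before TensorFlow is imported:

import os

# Must be set before the first tensorflow import to take effect
os.environ["TF_DISABLE_MKL"] = "1"

import tensorflow as tf  # MKL ops such as the one in mkl_maxpooling_op.cc are now skipped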
