I am getting the following traceback while importing ibm_db in Python:
Traceback (most recent call last):
File "/working/Script.py", line 5, in <module>
import ibm_db
ImportError: Error loading shared library libcrypt.so.1: No such file or directory (needed by
/usr/local/lib/python3.9/site-packages/clidriver/lib/libdb2.so.1)
Please find below my Dockerfile and requirements.txt file:
FROM python:3.9-alpine3.16
COPY requirements.txt requirements.txt
RUN apk --update add --virtual build-dependencies python3 py-pip openssl ca-certificates py-openssl wget libffi-dev openssl-dev python3-dev py-pip py3-pandas build-base \
&& apk add python3 make g++ \
&& pip install --upgrade pip \
&& pip install -r requirements.txt \
&& apk del build-dependencies
ENTRYPOINT [ "python", "Script.py" ]
requirements.txt:
cryptography==2.9
botocore==1.12.253
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
snowflake-connector-python==1.9.1
snowflake-sqlalchemy==1.2.4
SQLAlchemy==1.3.19
ibm_db
s3fs
The IBM clidriver (on which the Python ibm_db module depends) is not supported on Alpine Linux; it will not work there.
That driver (clidriver) is closed source and, for Linux x64, is built with gcc against glibc, so it cannot be rebuilt for Alpine's musl libc.
You must choose a supported Linux base image, such as Ubuntu, Red Hat, or SUSE.
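For example, a minimal sketch of a Debian-based variant of the Dockerfile from the question is shown below; the build tools and libxml2 are assumptions about what pip typically needs to install ibm_db and its bundled clidriver, not details taken from the original post.
FROM python:3.9-slim-bullseye
COPY requirements.txt requirements.txt
RUN apt-get update \
# gcc/g++/make are assumed to be needed only for packages built from source
    && apt-get install -y --no-install-recommends gcc g++ make libxml2 \
    && pip install --upgrade pip \
    && pip install -r requirements.txt \
# remove the build tools again to keep the image small
    && apt-get purge -y gcc g++ make && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT [ "python", "Script.py" ]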
Related
I am getting exactly the same error as mentioned in "The library libcrypto could not be found".
I understand the issue; however, I could not figure out the resolution. Do I need to update my Lambda configuration, or do I need to upgrade my Python libraries?
Please find below my requirements.txt file:
cryptography==36.0.2
botocore==1.20.0
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
boto3==1.17.0
asn1crypto==1.5.1
certifi==2022.9.14
cffi==1.15.1
charset-normalizer==2.1.1
filelock==3.8.0
idna==3.4
oscrypto==1.3.0
pycparser==2.21
pycryptodomex==3.15.0
PyJWT==2.5.0
pyOpenSSL==22.0.0
pytz==2022.2.1
requests==2.28.1
typing_extensions==4.3.0
urllib3==1.26.12
My Dockerfile:
FROM python:3.9-alpine3.16
COPY requirements.txt requirements.txt
RUN apk --update --no-cache add --virtual build-dependencies gcc python3-dev musl-dev libc-dev linux-headers libxslt-dev libxml2-dev py-pip ca-certificates wget libffi-dev openssl-dev python3-dev expat==2.4.9-r0 py-pip build-base zlib zlib-dev libressl libressl-dev \
&& apk add python3 make g++ \
&& pip install --upgrade pip \
&& pip install --upgrade pip setuptools \
&& pip install -r requirements.txt \
&& apk del build-dependencies
RUN pip install snowflake-connector-python==2.8.0 --no-use-pep517
RUN python -c 'from oscrypto import asymmetric'
Attempting docker build with the Dockerfile above results in a failure with:
Step 4/4 : RUN python -c 'from oscrypto import asymmetric'
---> Running in dc8f8b8920ac
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/oscrypto/asymmetric.py", line 19, in <module>
from ._asymmetric import _unwrap_private_key_info
File "/usr/local/lib/python3.9/site-packages/oscrypto/_asymmetric.py", line 27, in <module>
from .kdf import pbkdf1, pbkdf2, pkcs12_kdf
File "/usr/local/lib/python3.9/site-packages/oscrypto/kdf.py", line 9, in <module>
from .util import rand_bytes
File "/usr/local/lib/python3.9/site-packages/oscrypto/util.py", line 14, in <module>
from ._openssl.util import rand_bytes
File "/usr/local/lib/python3.9/site-packages/oscrypto/_openssl/util.py", line 6, in <module>
from ._libcrypto import libcrypto, libcrypto_version_info, handle_openssl_error
File "/usr/local/lib/python3.9/site-packages/oscrypto/_openssl/_libcrypto.py", line 9, in <module>
from ._libcrypto_cffi import (
File "/usr/local/lib/python3.9/site-packages/oscrypto/_openssl/_libcrypto_cffi.py", line 27, in <module>
raise LibraryNotFoundError('The library libcrypto could not be found')
oscrypto.errors.LibraryNotFoundError: The library libcrypto could not be found
The command '/bin/sh -c python -c 'from oscrypto import asymmetric'' returned a non-zero code: 1
The exception is taking place at https://github.com/wbond/oscrypto/blob/1.3.0/oscrypto/_openssl/_libcrypto_cffi.py
Tracking through oscrypto._ffi, the problem comes down to an issue opening libcrypto with the ctypes library:
>>> import ctypes.util
>>> ctypes.util.find_library('crypto')
>>>
Why? Because /usr/lib has only libcrypto.so.1.1, and not a libcrypto.so symlink pointing to it. Easily fixed by adding an extra line to your Dockerfile:
RUN ln -s libcrypto.so.1.1 /usr/lib/libcrypto.so
...after which Python's ctypes behaves:
>>> from ctypes.util import find_library
>>> find_library('crypto')
'libcrypto.so.1.1'
...and so does oscrypto:
>>> from oscrypto import asymmetric
>>>
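For context, here is a sketch of where the symlink fits in the Dockerfile from the question; libcrypto.so.1.1 is the version that ships with python:3.9-alpine3.16 (other base images may differ), and the final sanity check assumes oscrypto was already installed by the earlier pip step.
FROM python:3.9-alpine3.16
# ... the apk and pip steps from the Dockerfile above ...
# give ctypes.util.find_library('crypto') the unversioned name it looks for
RUN ln -s libcrypto.so.1.1 /usr/lib/libcrypto.so
# optional sanity check: fail the build early if libcrypto still cannot be found
RUN python -c 'from oscrypto import asymmetric'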
I use the remote interpreter of my docker container.
The Dockerfile contents are:
FROM nvidia/cuda:8.0-cudnn7-devel-ubuntu16.04
RUN apt-get update && apt-get install -y \
rsync \
htop \
git \
openssh-server \
nano \
cmake \
python-opencv \
python2.7 \
unzip
RUN apt-get -qq -y install curl bzip2 && \
curl -sSL https://repo.continuum.io/miniconda/Miniconda2-4.6.14-Linux-x86_64.sh -o /tmp/miniconda.sh && \
bash /tmp/miniconda.sh -bfp /usr/local && \
rm -rf /tmp/miniconda.sh && \
conda install -y python=2 && \
conda update conda && \
apt-get -qq -y autoremove && \
apt-get autoclean && \
rm -rf /var/lib/apt/lists/* /var/log/dpkg.log && \
conda clean --all --yes
#RUN . usr/local/etc/profile.d/conda.sh #didnt work
#RUN conda create -n pwcnet_test python=2.7
#RUN chmod u+r+x usr/local/etc/profile.d/conda.sh
#RUN conda env list
#RUN usr/local/etc/profile.d/conda.sh activate pwcnet_test
# install pytorch and other dependencies
#RUN pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
#RUN pip install torchvision visdom dominate opencv-python cffi
# install external packages
ADD torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
RUN pip install torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
#RUN conda install pytorch=0.2.0 cuda80 -c soumith
RUN pip install torchvision==0.2.2 visdom==0.1.8.8 dominate==2.3.5 opencv-python==4.1.0.25 cffi==1.12.2
ADD external_packages external_packages
RUN bash /external_packages/correlation-pytorch-master/make_cuda.sh
#RUN mkdir /usr/local/lib/python2.7/site-packages/correlation
#RUN cp /usr/local/lib/python2.7/site-packages/correlation_package-0.1-py2.7-linux-x86_64.egg /usr/local/lib/python2.7/site-packages/correlation_package-0.1-py2.7-linux-x86_64.zip
#RUN unzip /usr/local/lib/python2.7/site-packages/correlation_package-0.1-py2.7-linux-x86_64.zip -d /usr/local/lib/python2.7/site-packages/correlation
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
The content of make_cuda.sh is:
#!/usr/bin/env bash
CUDA_PATH=/usr/local/cuda-8.0
#cd correlation-pytorch/correlation_package/src
echo "Compiling correlation layer kernels by nvcc..."
# TODO (JEB): Check which arches we need
nvcc -c -o /external_packages/correlation-pytorch-master/correlation-pytorch/correlation_package/src/corr_cuda_kernel.cu.o /external_packages/correlation-pytorch-master/correlation-pytorch/correlation_package/src/corr_cuda_kernel.cu -x cu -Xcompiler -fPIC -arch=sm_52
nvcc -c -o /external_packages/correlation-pytorch-master/correlation-pytorch/correlation_package/src/corr1d_cuda_kernel.cu.o /external_packages/correlation-pytorch-master/correlation-pytorch/correlation_package/src/corr1d_cuda_kernel.cu -x cu -Xcompiler -fPIC -arch=sm_52
#cd ../../
python /external_packages/correlation-pytorch-master/correlation-pytorch/setup.py build install
The image is created successfully and the interpreter works great (in run and debug).
If I run pip list, I can see the package correlation-package 0.1 in the list of installed modules.
If I look at the site-packages contents, I see an egg file: correlation_package-0.1-py2.7-linux-x86_64.egg.
I can also see that if I run 'python -m site', the output includes '/usr/local/lib/python2.7/site-packages/correlation_package-0.1-py2.7-linux-x86_64.egg'.
But if I try imp.find_module('correlation-package')[1], it tells me:
Traceback (most recent call last):
File "<input>", line 1, in <module>
ImportError: No module named correlation-package
It also isn't recognized if I simply try to import a class from the module:
from correlation_package.modules.corr import Correlation
returns:
usr/local/bin/python2.7 /opt/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 172.17.0.1 --port 45203 --file /opt/project/models/PWCNet.py
Connected to pydev debugger (build 222.3739.30)
pydev debugger: warning: trying to add breakpoint to file that does not exist: /home/vista/.cache/JetBrains/PyCharm2022.2/remote_sources/-609766439/-183260405/correlation/correlation_package/_ext/corr/_corr.py (will have no effect)
pydev debugger: warning: trying to add breakpoint to file that does not exist: /home/vista/.cache/JetBrains/PyCharm2022.2/remote_sources/-609766439/-183260405/correlation/correlation_package/_ext/corr/_corr.py (will have no effect)
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/opt/project/models/PWCNet.py", line 13, in <module>
from correlation_package.modules.corr import Correlation
ImportError: No module named correlation_package.modules.corr
python-BaseException
and of course PyCharm doesn't resolve it in the file itself.
You can see from the commented-out commands in the Dockerfile that I tried unzipping the files in the module's .egg archive and importing it using imp.load_source, as follows:
Correlation = imp.load_source('Correlation', '/usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.py')
but this ended with the error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.py", line 7, in <module>
__bootstrap__()
File "/usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.py", line 6, in __bootstrap__
imp.load_dynamic(__name__,__file__)
ImportError: /usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.so: undefined symbol: __gxx_personality_v0
That is what happens if I run this command from the Python console; a different error pops up when debugging this line in the PyCharm debugger:
/usr/local/bin/python2.7 /opt/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 172.17.0.1 --port 41629 --file /opt/project/models/PWCNet.py
Connected to pydev debugger (build 222.3739.30)
pydev debugger: warning: trying to add breakpoint to file that does not exist: /home/vista/.cache/JetBrains/PyCharm2022.2/remote_sources/-609766439/-183260405/correlation/correlation_package/_ext/corr/_corr.py (will have no effect)
pydev debugger: warning: trying to add breakpoint to file that does not exist: /home/vista/.cache/JetBrains/PyCharm2022.2/remote_sources/-609766439/-183260405/correlation/correlation_package/_ext/corr/_corr.py (will have no effect)
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/opt/project/models/PWCNet.py", line 17, in <module>
Correlation = imp.load_source('Correlation', '/usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.py')
File "/usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.py", line 7, in <module>
__bootstrap__()
File "/usr/local/lib/python2.7/site-packages/correlation/correlation_package/_ext/corr/_corr.py", line 6, in __bootstrap__
imp.load_dynamic(__name__,__file__)
ImportError: dynamic module does not define init function (initCorrelation)
python-BaseException
The contents of the _corr.py file are the following:
def __bootstrap__():
    global __bootstrap__, __loader__, __file__
    import sys, pkg_resources, imp
    __file__ = pkg_resources.resource_filename(__name__, '_corr.so')
    __loader__ = None; del __bootstrap__, __loader__
    imp.load_dynamic(__name__, __file__)
__bootstrap__()
and the contents of the unzipped egg /correlation/correlation_package/_ext/corr/ are:
_corr.py
_corr.pyc
_corr.so
Debugging this load_source operation shows that it reaches the imp.load_dynamic call inside the bootstrap function.
What is the correct way to import a package that is installed in the Docker container environment?
Thanks in advance.
P.S.:
If I add the line RUN python /external_packages/correlation-pytorch-master/correlation-pytorch/test/test.py (which is meant to test the package installation) to the Dockerfile, I get the error:
Step 9/14 : RUN python /external_packages/correlation-pytorch-master/correlation-pytorch/test/test.py
---> Running in c57c673c66c1
Traceback (most recent call last):
File "/external_packages/correlation-pytorch-master/correlation-pytorch/test/test.py", line 4, in <module>
from correlation_package.modules.corr import Correlation, Correlation1d
ImportError: No module named correlation_package.modules.corr
This is confusing because, as noted above, pip list does show it in the installed packages list. This is executed entirely from inside the container, as you can see below:
Step 9/15 : RUN pip list
---> Running in adbe3ba085b2
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 wont be maintained after that date. A future version of pip will drop support for Python 2.7.
Package Version
---------------------- -----------
asn1crypto 1.4.0
backports-abc 0.5
certifi 2020.6.20
cffi 1.12.2
chardet 3.0.4
conda 4.8.4
conda-package-handling 1.6.0
correlation-package 0.1
cryptography 2.6.1
dominate 2.3.5
enum34 1.1.6
futures 3.3.0
idna 2.8
ipaddress 1.0.23
numpy 1.16.6
opencv-python 4.1.0.25
Pillow 6.2.2
pip 19.0.3
pycosat 0.6.3
pycparser 2.19
pyOpenSSL 19.0.0
PySocks 1.7.1
PyYAML 5.4.1
pyzmq 19.0.2
requests 2.21.0
ruamel-yaml 0.15.46
scandir 1.10.0
scipy 1.2.3
setuptools 41.0.0
singledispatch 3.7.0
six 1.16.0
torch 0.2.0.post3
torchfile 0.1.0
torchvision 0.2.2
tornado 5.1.1
tqdm 4.19.9
urllib3 1.24.1
visdom 0.1.8.8
websocket-client 0.59.0
wheel 0.37.1
Removing intermediate container adbe3ba085b2
---> 120baf3ba68a
Step 10/15 : RUN python /external_packages/correlation-pytorch-master/correlation-pytorch/test/test.py
---> Running in e28c04ca8d12
Traceback (most recent call last):
File "/external_packages/correlation-pytorch-master/correlation-pytorch/test/test.py", line 4, in <module>
from correlation_package.modules.corr import Correlation, Correlation1d
ImportError: No module named correlation_package.modules.corr
Dockerfile
FROM python:3.6.8
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install opencv-python==4.3.0.38
RUN pip3 install -r requirements.txt
EXPOSE 80
CMD ["python3", "server.py"]
requirements.txt
Flask==0.12
Werkzeug==0.16.1
boto3==1.14.40
torch
torchvision==0.7.0
numpy==1.15.4
sklearn==0.0
scipy==1.2.1
scikit-image==0.14.2
pandas==0.24.2
The docker build succeeds, but docker run fails with the error:
INFO:matplotlib.font_manager:Generating new fontManager, this may take some time...
PyTorch Version: 1.6.0
Torchvision Version: 0.7.0
Traceback (most recent call last):
File "server.py", line 7, in <module>
from pipeline_prediction.pipeline import ml_pipeline
File "/app/pipeline_prediction/pipeline.py", line 3, in <module>
from segmentation_color import get_swatch_color_from_segmentation
File "pipeline_prediction/segmentation_color.py", line 7, in <module>
import cv2
File "/usr/local/lib/python3.6/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
I looked at the answer to "import matplotlib.pyplot as plt, ImportError: libGL.so.1: cannot open shared object file: No such file or directory" and replaced
import matplotlib.pyplot as plt
with
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
but it is not working for me. I also looked at "ImportError: libGL.so.1: cannot open shared object file: No such file or directory", but I do not have Ubuntu as the base image, so the installation listed in that answer would not work for me.
Let me know a way to make this work.
I was able to make the Docker container run by making the following changes to the Dockerfile:
FROM python:3.6.8
COPY . /app
WORKDIR /app
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -y
RUN apt install libgl1-mesa-glx -y
RUN apt-get install 'ffmpeg'\
'libsm6'\
'libxext6' -y
RUN pip3 install --upgrade pip
RUN pip3 install opencv-python==4.3.0.38
RUN pip3 install -r requirements.txt
EXPOSE 80
CMD ["python3", "server.py"]
The following lines, required to resolve the libGL error,
RUN apt install libgl1-mesa-glx -y
RUN apt-get install 'ffmpeg'\
'libsm6'\
'libxext6' -y
could not run without first updating the package index (apt-get update -y). Moreover, building the image with DEBIAN_FRONTEND set to noninteractive helped skip any interactive command-line prompts.
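For reference, the same fix can be collapsed into fewer layers. The sketch below keeps the package set from the Dockerfile above; --no-install-recommends and the apt list cleanup are assumptions added to keep the image smaller, not something the original answer used.
FROM python:3.6.8
COPY . /app
WORKDIR /app
ENV DEBIAN_FRONTEND=noninteractive
# libGL/X11 runtime libraries that opencv-python needs at import time
RUN apt-get update -y \
    && apt-get install -y --no-install-recommends libgl1-mesa-glx libsm6 libxext6 ffmpeg \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip \
    && pip3 install opencv-python==4.3.0.38 \
    && pip3 install -r requirements.txt
EXPOSE 80
CMD ["python3", "server.py"]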
The following is displayed in my terminal:
Running virtualenv with interpreter /usr/local/bin/python3.7
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/virtualenv.py", line 22, in <module>
import zlib
ModuleNotFoundError: No module named 'zlib'
How can I fix this?
First of all, to confirm whether your virtualenv works at all, I suggest trying something simpler for debugging:
print("Hello World!")
If this line of code works, then your problem is just that zlib is not installed properly, hence:
No module named 'zlib'
Otherwise, please update the question with the error your environment produces while running the print command, for more context.
It looks like your system-level Python is not configured correctly.
Try the following:
apt remove virtualenv # remove virtualenv
apt-get install -y -q zlib1g-dev python-pip # install zlib, python-pip
wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
tar xzf Python-3.7.4.tgz
cd Python-3.7.4
./configure --enable-optimizations --enable-loadable-sqlite-extensions \
    --enable-ipv6 --with-cxx-main=/usr/bin/gcc --with-ensurepip
make -j $(getconf _NPROCESSORS_ONLN) altinstall
pip3 install virtualenv==16.0.0
virtualenv -p "python3.7" xyz
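A quick way to check that the rebuilt interpreter actually picked up zlib; this is a sketch that assumes python3.7 was installed by the altinstall step above and that the xyz virtualenv was created in the current directory:
# confirm the freshly built interpreter can import zlib
python3.7 -c "import zlib; print(zlib.ZLIB_VERSION)"
# and that a virtualenv created from it can too
xyz/bin/python -c "import zlib; print('zlib OK')"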
I am trying to get the GoogleScraper Python script working on Ubuntu 14.04 LTS, but I am getting the following error when I type "./GoogleScraper -h":
./GoogleScraper -h
Traceback (most recent call last):
File "./GoogleScraper", line 5, in <module>
from pkg_resources import load_entry_point
File "/home/roger/env/lib/python3.4/site-packages/pkg_resources.py", line 2716, in <module>
working_set.require(__requires__)
File "/home/roger/env/lib/python3.4/site-packages/pkg_resources.py", line 685, in require
needed = self.resolve(parse_requirements(requirements))
File "/home/roger/env/lib/python3.4/site-packages/pkg_resources.py", line 588, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: aiohttp
What do I do to install this aiohttp package? I googled and was a little confused.
Here is my "python -V" output:
roger@vbox-ubuntu:~/env/bin$ python -V
Python 2.7.6
roger@vbox-ubuntu:~/env/bin$ python3 -V
Python 3.4.0
I also ran the following beforehand:
virtualenv --python python3 env
source env/bin/activate
pip install GoogleScraper
sudo apt-get install python3-pip
sudo pip3 install aiohttp
sudo pip3 install aiohttp
should fix your problem (preceded by sudo apt-get install python3-pip if pip is not installed yet)
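Since the question runs everything inside a virtualenv (env), here is a sketch of the same fix installed into that environment rather than system-wide, assuming env is still the environment GoogleScraper runs from:
source env/bin/activate   # activate the virtualenv GoogleScraper was installed into
pip install aiohttp       # installs into env/ without sudo
GoogleScraper -h          # the aiohttp distribution should now be found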
It looks like much of my problem was running "sudo apt-get install" in my local environment.
My fix was starting with a fresh Ubuntu 14.04 LTS install and then running the following:
sudo apt-get install python-virtualenv python3-pip liblz-dev python-dev libxml2-dev libxslt-dev zlib1g-dev python3-dev libmysqlclient-dev ubuntu-desktop chromium-chromedriver google-chrome-stable
After that, I ran the following commands from the author's website:
virtualenv --python python3 env
source env/bin/activate
pip install GoogleScraper
sudo pip3 install aiohttp
After that, I was able to get "GoogleScraper -h" to output the help file, as expected.