Trouble creating Dockerfile for TensorFlow Object Detection API - python-3.x

As the title suggests, I'm trying to build this Dockerfile. To add some context: I successfully pulled the official repository and adapted the model to my needs. Following the docs on GitHub, installing and running the model on my local machine is not a problem.
Now, to put this model into production I have to create a Docker image, for obvious reasons. First of all, the directory looks like this:
TFModels
> Dockerfile
> pycocotools
> research
> object_detection
> slim
Now, the Dockerfile, which also incorporates the protoc instructions from GitHub, looks like this:
FROM python:3.5
RUN apt-get update \
&& apt-get install -y wget \
unzip \
protobuf-compiler \
python-pil \
python-lxml \
python-tk
RUN mkdir -p /TFModel/
COPY research /TFModel/research
COPY slim /TFModel/slim
RUN pip install --trusted-host pypi.python.org -r /TFModel/research/object_detection/requirements.txt
WORKDIR /TFModel/research
RUN protoc object_detection/protos/*.proto --python_out=.
RUN export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
RUN python object_detection/builders/model_builder_test.py
RUN python object_detection/objectDetectImg.py
Building the image gives the following error:
Traceback (most recent call last):
File "object_detection/objectDetectImg.py", line 24, in <module>
from utils import label_map_util
File "/TFModel/research/object_detection/utils/label_map_util.py", line 21, in <module>
from object_detection.protos import string_int_label_map_pb2
ImportError: No module named 'object_detection'
The command '/bin/sh -c python object_detection/objectDetectImg.py' returned a non-zero code: 1
I think the error may be in this line:
RUN export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
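Could it be that each RUN instruction starts a fresh shell, so the exported PYTHONPATH never reaches the later RUN python ... steps? If so, I guess a sketch of a fix would be to use ENV, which does persist across layers:
# ENV persists into later instructions, unlike a RUN export
# (paths assume the COPY destinations above: /TFModel/research and /TFModel/slim)
ENV PYTHONPATH=/TFModel/research:/TFModel/slim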
Anyone who could help?
Thanks in advance

Related

Error with PyTorch compilation: LAPACK library not found in compilation. How do I solve it?

I am getting desperate about this problem.
RuntimeError: inverse: LAPACK library not found in compilation
The easiest way to reproduce it is:
import torch
A = torch.rand(5,5)
torch.inverse(A)
I run this inside a docker container. The part of the dockerfile that compiles pytorch is:
#PyTorch
RUN pip3 install astunparse numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses
ENV PYTORCH_INST_VERSION="v1.8.1"
RUN git clone --recursive --branch ${PYTORCH_INST_VERSION} https://github.com/pytorch/pytorch pytorch-src && \
cd pytorch-src && \
export MAX_JOBS=$((`nproc` - 2)) && \
export TORCH_CUDA_ARCH_LIST=${CUDA_ARCH} && \
python3 setup.py install --prefix=/opt/pytorch && \
cp -r /opt/pytorch/lib/python3.8/site-packages/* /usr/lib/python3/dist-packages/ && \
cd /opt && \
rm -rf /opt/pytorch-src
I am not super experienced so I don't know if I need to provide additional details. Please tell me if so.
I solved my own problem.
I added apt-get install -y liblapack-dev to the Dockerfile before the PyTorch compilation. Then I rebuilt the image and ran the container again, and it worked.
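For reference, a sketch of the line I mean (liblapack-dev is the Debian/Ubuntu package name), placed before the PyTorch clone-and-build step:
# install LAPACK so the PyTorch build can find and link against it
RUN apt-get update && apt-get install -y liblapack-dev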
I had the same problem yesterday. I reinstalled PyTorch for my system, copying the suitable command as instructed (you should choose the parameters that suit your system):
check here https://pytorch.org/#:~:text=conda%20install%20pytorch%20torchvision%20torchaudio%20cpuonly%20%2Dc%20pytorch
When I ran my script again, the problem was gone.
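For example, the CPU-only conda command encoded in that link is:
conda install pytorch torchvision torchaudio cpuonly -c pytorch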

Unable to build docker image for python code

I have created a docker file with the following:
FROM python:3.9.2
MAINTAINER KSMC
RUN apt-get update -y && apt-get install -y python3-pip python-dev
WORKDIR /Users/rba/Documents/Projects/DD/DD-N4
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["main.py"]
My Python code is in main.py, my Python version is 3.9.2, and all my Python code, requirements.txt, and the Dockerfile are in /Users/rba/Documents/Projects/DD/DD-N4. Upon trying to create the Docker image using:
docker build -t ddn4image .
I am getting the following error:
#8 1.547 ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
------
executor failed running [/bin/sh -c pip3 install -r requirements.txt]: exit code: 1
Can someone point out what is causing this?
You forgot to add a COPY statement.
Before RUN pip3 install -r requirements.txt, just put the following line:
COPY . .
You need to do this because your local files have to be copied into the Docker image at build time, so that when your container is created, the files exist in it.
Read the docs about COPY at docker docs.
Hint
Remove the ENTRYPOINT statement and use
CMD ["python3", "main.py"]
Here is a good explanation of the difference between ENTRYPOINT and CMD.
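Putting both suggestions together, a sketch of the corrected Dockerfile (WORKDIR /app is an arbitrary choice; the host path /Users/... has no meaning inside the image):
FROM python:3.9.2
RUN apt-get update -y && apt-get install -y python3-pip python-dev
WORKDIR /app
# copy main.py, requirements.txt, etc. from the build context into the image
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "main.py"]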

docker run error: DPI-1047: Cannot locate a 64-bit Oracle Client library

I am trying to dockerize a very simple Python application with an Oracle database connection and execute it in Docker. The application runs fine on my local machine.
I was able to build the image successfully but am getting an error while running it in Docker.
Dockerfile:
FROM python:3
ADD File.py /
RUN pip install cx_Oracle
RUN pip install pandas
RUN pip install openpyxl
CMD [ "python", "./File.py" ]
File.py:
import cx_Oracle
import pandas as pd
#creating database connection
dsn_tns = cx_Oracle.makedsn('dev-tr01.com', '1222', service_name='ast041.com')
conn = cx_Oracle.connect(user=r'usr', password='3451', dsn=dsn_tns)
c = conn.cursor()
query ='SELECT * FROM Employee WHERE ROWNUM <10'
result = pd.read_sql(query, con=conn)
result.to_excel("batchtable.xlsx")
conn.close()
Error:
docker run python_batchdriver:latest
cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/odpi/doc/installation.html#linux for help
Update: upgrade to the latest cx_Oracle release (renamed to python-oracledb). This doesn't necessarily need Instant Client, which makes installation a lot easier. See the release announcement.
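For example, a minimal sketch of that route (python-oracledb's default "thin" mode needs no Oracle client libraries; much existing cx_Oracle code runs after changing the import to import oracledb as cx_Oracle):
FROM python:3
ADD File.py /
# thin-mode driver: no Instant Client required
RUN pip install oracledb pandas openpyxl
CMD [ "python", "./File.py" ]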
For cx_Oracle, you need to install Oracle Instant Client libraries too. See the cx_Oracle installation instructions.
There are various ways to automate installation in Docker. One example is:
# extract under /opt/oracle so the ld.so.conf path below matches
WORKDIR /opt/oracle
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
unzip instantclient-basiclite-linuxx64.zip && \
rm -f instantclient-basiclite-linuxx64.zip && \
cd instantclient* && \
rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
ldconfig
You will also need the libaio or libaio1 package.
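For example, on a Debian-based image:
RUN apt-get update && apt-get install -y libaio1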
See Docker for Oracle Database Applications in Node.js and Python.
Also see Install Oracle Instant client into Docker container for Python cx_Oracle
Note that the steps may be different if you are not using a Debian-based Linux distribution.
Your Debian-based Docker image needs the Oracle Instant Client. You can download it manually and copy it into your Docker image. The dependency libaio1 is also required.
Add the lines below to your Dockerfile.
Dockerfile:
FROM python:3
# Installing Oracle instant client
WORKDIR /opt/oracle
# Install dependency
RUN apt-get update && apt-get install -y libaio1
# if extracted file is in current directory named "instantclient_11_2"
# For me instantclient_11_2 for Oracle9i, 10g
# copy it to /opt/oracle/instantclient_11_2
COPY instantclient_11_2 /opt/oracle/instantclient_11_2
# Linking instant client and cleanup
RUN cd /opt/oracle/instantclient* \
&& rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
&& echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
&& ldconfig
# Rest is same as yours
ADD File.py /
# To support instantclient_11_2 I used below commented line
# RUN pip install cx-Oracle==7.3.0
RUN pip install cx_Oracle
RUN pip install pandas
RUN pip install openpyxl
CMD [ "python", "./File.py" ]

caffe import error even after installing it successfully, on Ubuntu 18.04

I'm getting a caffe import error even after installing it successfully using the command sudo apt install caffe-cpu. I was able to find the caffe package at /usr/lib/python3/dist-packages/caffe (this path was added to PYTHONPATH). All requirements mentioned in the requirements.txt file of the caffe directory were also installed.
I'm using Ubuntu 18.04 LTS, Python3.
Could anyone help me with this error?
import caffe
Traceback (most recent call last):
File "6_reconstruct_alphabet_image.py", line 17, in <module>
import caffe
File "/usr/lib/python3/dist-packages/caffe/__init__.py", line 1, in <module>
from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer
File "/usr/lib/python3/dist-packages/caffe/pycaffe.py", line 15, in <module>
import caffe.io
File "/usr/lib/python3/dist-packages/caffe/io.py", line 2, in <module>
import skimage.io
File "/usr/lib/python3/dist-packages/skimage/__init__.py", line 158, in <module>
from .util.dtype import *
File "/usr/lib/python3/dist-packages/skimage/util/__init__.py", line 7, in <module>
from .arraycrop import crop
File "/usr/lib/python3/dist-packages/skimage/util/arraycrop.py", line 8, in <module>
from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'
Problem solved: the error came up because the caffe build wasn't done successfully. I recommend not going with the sudo apt install caffe-cpu command (which is mentioned in the official caffe installation guide for Ubuntu), because it will end up in the error above. It's better to install from source.
Let me give step-by-step guidance to install caffe successfully on Ubuntu 18.04 LTS:
1] sudo apt-get install -y --no-install-recommends libboost-all-dev
2] sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler
3] git clone https://github.com/BVLC/caffe
cd caffe
cp Makefile.config.example Makefile.config
4] sudo pip install scikit-image protobuf
cd python
for req in $(cat requirements.txt); do sudo pip install $req; done
5] Modify the Makefile.config file:
Uncomment the line CPU_ONLY := 1, and the line OPENCV_VERSION := 3.
6] Find the LIBRARIES line in Makefile and change it as follows:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5 \
opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs
7] make all
Now you could get an error like this:
CXX src/caffe/net.cpp
src/caffe/net.cpp:8:18: fatal error: hdf5.h: No such file or directory
compilation terminated.
Makefile:575: recipe for target '.build_release/src/caffe/net.o' failed
make: *** [.build_release/src/caffe/net.o] Error 1
To solve this error follow step 8.
8] Install libhdf5-dev:
sudo apt-get install libhdf5-dev
Open Makefile.config, locate the line containing LIBRARY_DIRS, and append /usr/lib/x86_64-linux-gnu/hdf5/serial
Locate INCLUDE_DIRS and append /usr/include/hdf5/serial/ (per this SO answer); a non-interactive version of these edits is sketched after step 14
Rerun make all
9] make test
10] make runtest
11] make pycaffe
Now you could get an error like this:
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
python/caffe/_caffe.cpp:10:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
Makefile:501: recipe for target 'python/caffe/_caffe.so' failed
make: *** [python/caffe/_caffe.so] Error 1
To solve this error follow step 12.
12] Find the PYTHON_INCLUDE line in Makefile.config and change it as follows:
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/local/lib/python2.7/dist-packages/numpy/core/include
13] Add the module directory to your $PYTHONPATH by adding this line to the end of the ~/.bashrc file:
sudo vim ~/.bashrc
export PYTHONPATH=$HOME/Downloads/caffe/python:$PYTHONPATH
source ~/.bashrc
14] Done.
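As promised in step 8, a sketch of those Makefile.config edits as non-interactive commands (assuming the stock Makefile.config from the repository, where both variables are defined with :=):
# append the HDF5 serial paths to the existing INCLUDE_DIRS/LIBRARY_DIRS lines
sed -i 's#^INCLUDE_DIRS := \(.*\)#INCLUDE_DIRS := \1 /usr/include/hdf5/serial/#' Makefile.config
sed -i 's#^LIBRARY_DIRS := \(.*\)#LIBRARY_DIRS := \1 /usr/lib/x86_64-linux-gnu/hdf5/serial#' Makefile.config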
This can happen if there is an older version of one of the packages. In particular, according to the traceback, the main reason is likely an older version of the scikit-image package. Just upgrade the package using the command:
for Python 2.7: pip install --upgrade scikit-image
for Python 3.x: pip3 install --upgrade scikit-image
Installation of Caffe on Ubuntu 18.04 in CPU mode.
conda update conda
conda create -n testcaffe python=3.5
source activate testcaffe
conda install -c menpo opencv3
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y build-essential cmake git pkg-config
sudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev protobuf-compiler
sudo apt-get install -y libatlas-base-dev
sudo apt-get install -y --no-install-recommends libboost-all-dev
sudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev
mkdir build
cd build
cmake ..
make all -j8
make install
conda install cython scikit-image ipython h5py nose pandas protobuf pyyaml jupyter
sed -i -e 's/python-dateutil>=1.4,<2/python-dateutil>=2.0/g' requirements.txt
for req in $(cat requirements.txt); do pip install $req; done
cd ../build
cd ../python
export PYTHONPATH=$(pwd)${PYTHONPATH:+:${PYTHONPATH}}
python -c "import caffe; print(caffe.__version__)"
jupyter notebook
Those are the complete steps; just run these commands as they are and Caffe will be installed on the system.

Python 3 virtualenv and Docker

I'm trying to build a Docker image with Python 3 and virtualenv.
I understand that I wouldn't need to use virtualenv in a Docker image since I'm going to use only Python 3, yet I see some clean-isolation benefits of using virtualenv anyway.
What's the best practice? Should I avoid using virtualenv in Docker?
If that's the case, how can I set up python3 and pip3 to be used as python and pip (without the 3)?
This is my Dockerfile:
FROM openjdk:8-alpine
RUN apk update && apk add bash gcc musl-dev
RUN apk add python3 python3-dev
RUN apk add py3-pip
RUN apk add libxslt-dev libxml2-dev
ENV PROJECT_HOME /opt/app
RUN mkdir -p /opt/app
RUN mkdir -p /opt/app/modules
ENV LD_LIBRARY_PATH /usr/lib/python3.6/site-packages/jep
ENV LD_PRELOAD /usr/lib/libpython3.6m.so
RUN pip3 install jep
RUN pip3 install ads
RUN pip3 install gspread
RUN pip3 list
COPY target/my-server-1.0-SNAPSHOT.jar $PROJECT_HOME/my-server-1.0-SNAPSHOT.jar
WORKDIR $PROJECT_HOME
CMD ["java", "-Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb","-jar","./my-server-1.0-SNAPSHOT.jar"]
Thanks
=== UPDATE 1 ===
I'm trying to create a new virtual env in the WORKDIR, install some libs, and then execute a shell script. Even though I see it create everything when I build the image, the environment folder is empty when the container runs.
This is from my Dockerfile:
RUN virtualenv ./env && source ./env/bin/activate && pip install jep \
googleads gspread oauth2client
ENTRYPOINT ["/bin/bash", "./startup.sh"]
startup.sh:
#!/bin/sh
source ./env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
It builds fine but on docker-compose up -d this is the output:
./startup.sh: source: line 2: can't open './env/bin/activate'
The env folder exists, but it's empty.
Any ideas?
Thanks!
=== UPDATE 2 ===
This is the working config:
RUN virtualenv ./my-env && source ./my-env/bin/activate \
&& pip install gspread==0.6.2 jep oauth2client googleads pandas
CMD ["/bin/bash", "-c", "./startup.sh"]
This is startup.sh:
#!/bin/sh
source ./my-env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
I don't think using virtualenv in Docker is really negative; it will just slow down your container builds a bit.
As for renaming pip3 and python3, you can create a hard link like this:
ln /usr/bin/python3 /usr/bin/python
ln /usr/bin/pip3 /usr/bin/pip
assuming the python3 executable is in /usr/bin/. You can find its location by running which python3
P.S.: Your Dockerfile contains loads of RUN instructions that create unnecessary intermediate layers. Combine them to save space and time:
RUN apk update && apk add bash gcc musl-dev \
python3 python3-dev py3-pip \
libxslt-dev libxml2-dev
RUN mkdir -p /opt/app/modules # you don't need the first one, -p will create it for you
RUN pip3 install jep ads gspread
Or combine them even further, if you aren't planning to change them often:
RUN apk update \
&& apk add bash gcc musl-dev \
python3 python3-dev py3-pip \
libxslt-dev libxml2-dev \
&& mkdir -p /opt/app/modules \
&& pip3 install jep ads gspread
The only "workaround" I've found for using virtualenv from my Docker container is to enter the container via ssh, create the environment, install the libs, and set its folder as a volume in the docker-compose config so it won't be deleted and I can use it afterward.
(Or to have it ready and just copy the folder in at build time, which could be a good option for saving build time, couldn't it?)
Otherwise, if I create it in the Dockerfile and install the libs there, its folder ends up empty when the container runs. I don't know why.
I'd appreciate it if anyone can suggest a better way to deal with this.
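A common alternative (a sketch, not taken from the answers above) is to create the virtualenv at build time in a path outside the application directory and put its bin directory first on PATH. Activation is mostly PATH manipulation, so this makes every later RUN, the startup script, and anything the Java process spawns use the venv's python without any source step, and a volume mounted over the workdir can't hide it:
RUN pip3 install virtualenv \
&& virtualenv /opt/venv \
&& /opt/venv/bin/pip install jep gspread oauth2client googleads
# from here on, python/pip resolve to /opt/venv/bin/...
ENV PATH="/opt/venv/bin:$PATH"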
