Python code hangs when importing OpenCV in AWS Nitro Enclave - linux

I'm trying to do some image recognition inside an AWS Nitro Enclave with Python, but the code hangs when importing packages such as OpenCV, NumPy, and pandas. The Dockerfile used to build the enclave works normally on my local machine and on EC2. The enclave console outputs an OpenBLAS warning about L2 cache size and then the process freezes, with no error output of any sort.
Are there any additional dependencies I need to add when using these packages in an enclave, or is there some conflict with the kernel?
The Dockerfile, shell script, and Python test code are shown below:
# amazonlinux still has the import issue
# python:3.7 also crashes when importing these libs
FROM amazonlinux
WORKDIR /app
#py 3.7
RUN yum install python3 zip -y
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
#3 libs needed for cv2 import
RUN yum install libSM-1.2.2-2.amzn2.x86_64 -y
RUN yum install libXrender-0.9.10-1.amzn2.x86_64 -y
RUN yum install libXext-1.3.3-3.amzn2.x86_64 -y
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r /app/requirements.txt
#shell script testing
COPY dockerfile_entrypoint.sh ./
COPY test_cv2.py ./
#ENV for shell testing printf loop
ENV HELLO="Hello from enclave side!"
RUN chmod +x dockerfile_entrypoint.sh
#shell script testing
CMD ["/app/dockerfile_entrypoint.sh"]
#!/bin/bash
#shell printf loop test in enclave
# go to work dir and check files
cd /app || exit 1
ls
#cv2 imp issue
python3 test_cv2.py
# use a shell loop to keep the enclave alive so any error output stays visible
count=1
while true; do
printf '[%4d] %s\n' "$count" "$HELLO"
echo "$PWD"
ls
count=$((count+1))
sleep 5
done
import cv2
for i in range(10):
    print('testing OpenCV')

These types of hangs can happen when applications or libraries attempt to read data from /dev/random but there is not sufficient entropy, which causes the process to block on the read. There are some possible solutions in this GitHub issue: https://github.com/aws/aws-nitro-enclaves-sdk-c/issues/41#issuecomment-792621500
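One common workaround along those lines (a rough sketch, assuming rng-tools is available in the amazonlinux yum repositories used by the Dockerfile above) is to feed the kernel entropy pool with rngd before the Python script starts:
# Dockerfile: install rng-tools alongside the other yum packages
RUN yum install rng-tools -y
# dockerfile_entrypoint.sh: seed /dev/random before running the test script
rngd -r /dev/urandom -o /dev/random
python3 test_cv2.py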

Related

Multi-Stage docker container for Python

For a personal project, I want to create a Docker container for a Python script (a bot for Discord) to isolate it from the system.
I need to use PM2 to run the script, but I can't use the Python that ships with keymetrics/pm2:latest-alpine because of its version (I need 3.9, not 3.8).
So I decided to use a multi-stage build to take the files from a Python image first and then execute them inside the other image.
Before wiring up the bot itself, I am working step by step: for now I'm only trying to print the Python version (then I'll try to run a hello-world script with Python).
My problem is with this first step.
My Dockerfile is:
# =============== Python slim ========================
FROM python:3.9-slim as base
# Setup env
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
FROM base AS python-deps
# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends gcc
COPY requirements.txt .
# Install python dependencies in /opt/venv
# . Create env, activate
RUN python3 -m venv --copies /opt/venv && cd /opt/venv/bin/ && chmod a+x activate && ./activate && chmod a-x activate && cd -
# . Install packages with pip
RUN python3 -m pip install --upgrade pip && pip3 install --no-cache-dir --no-compile pipenv && PIPENV_VENV_IN_PROJECT=1 pip install --user -r requirements.txt
# >> Here, I can call :
# CMD ["/opt/venv/bin/python3.9", "--version"]
# =============== PM2 ================================
# second stage
FROM keymetrics/pm2:latest-alpine
WORKDIR /code
# Copy datas from directory
COPY ./src .
COPY ecosystem.config.js .
# Copy datas from previous
# Copy virtual env from python-deps stage
COPY --from=python-deps /opt/venv /opt/venv
# Install app dependencies: useless... (it's python3.8 anyway, and I need 3.9)
# RUN apk add --no-cache git python3
# Python environment variables:
ENV PYROOT=/opt/venv
ENV PYTHONUSERBASE=$PYROOT
ENV PATH="${PYROOT}/bin:${PATH}"
ENV PYTHONPATH="${PYROOT}/lib/python3.9/site-packages/"
# CMD ["ls", "-la", "/opt/venv/bin/python3"] # Ok here : file exists
# CMD ["which", "python3"] # Ok here : output: /opt/venv/bin/python3
CMD ["/opt/venv/bin/python3", "--version"] # not ok (cf below)
# ..... Then I will call after other stuff once Python works ....
# ENV NPM_CONFIG_LOGLEVEL warn
# RUN npm install pm2 -g
# RUN npm install --production
# RUN pm2 update && pm2 install pm2-server-monit # && pm2 install pm2-auto-pull
# CMD ["pm2-runtime", "ecosystem.config.js" ]
My requirements.txt is:
Flask==1.1.1
And my error is
/usr/local/bin/docker-entrypoint.sh: exec: line 8: /opt/venv/bin/python3: not found
I really don't understand why...
I tried to go inside my image with
$ docker run -d --name hello myimage watch "date >> /var/log/date.log"
$ docker exec -it hello sh
And inside, I saw that the Python binary exists with ls, and which finds it too, but if I go into the directory and call it with ./python3, I get the message sh: python: not found
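One way to dig into that message from inside the Alpine container (assuming file can be added with apk, and ldd is available via musl-utils) is to check what the copied binary links against:
apk add --no-cache file
file /opt/venv/bin/python3    # shows the ELF interpreter (dynamic loader) the binary was built for
ldd /opt/venv/bin/python3     # lists the shared libraries it tries to resolve
If the binary expects a glibc loader such as /lib64/ld-linux-x86-64.so.2, which Alpine's musl-based image does not provide, the shell reports "not found" even though the file itself exists.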
I am a noob with Docker; I've done some things with it before, but I never took a proper course on it since I only use it for a few personal things (and this is my first big problem with it).
Thanks!

Unable to build docker image for python code

I have created a docker file with the following:
FROM python:3.9.2
MAINTAINER KSMC
RUN apt-get update -y && apt-get install -y python3-pip python-dev
WORKDIR /Users/rba/Documents/Projects/DD/DD-N4
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["main.py"]
My Python code is in main.py, my Python version is 3.9.2, and all my Python code, requirements.txt and Dockerfile are in the location /Users/rba/Documents/Projects/DD/DD-N4. Upon trying to create the Docker image using:
docker build -t ddn4image .
I am getting the following error:
#8 1.547 ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
------ executor failed running [/bin/sh -c pip3 install -r requirements.txt]: exit code: 1
Can someone point out what is causing this?
You forgot to do a COPY statement.
Before you run RUN pip3 install -r requirements.txt, just put the following line
COPY . .
You need to do this because your local files need to be copied into the Docker image at build time, so that when your container is created, the files exist in it.
Read the docs about COPY at docker docs.
Hint
Remove the ENTRYPOINT statement and use
CMD ["python3", "main.py"]
Here is a good explanation of the difference between ENTRYPOINT and CMD.
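Putting both suggestions together, a minimal version of the Dockerfile from the question could look roughly like this (using a simple /app directory inside the image instead of the host path):
FROM python:3.9.2
WORKDIR /app
# copy the build context (main.py, requirements.txt, ...) into the image
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "main.py"]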

Setting up mysql-connector-python in Docker file

I am trying to set up a MySQL connection that will work with SQLAlchemy in Python 3.6.5. I have the following in my Dockerfile:
RUN pip3 install -r /event_git/requirements.txt
I also have, in requirements.txt:
mysql-connector-python==8.0.15
However, I am not able to connect to the DB. Is there anything else that I need to do to set this up?
Update:
I got 8.0.5 working but not 8.0.15. Apparently a protobuf dependency was added; does anyone know how to handle that?
The Dockerfile is:
RUN apt-get -y update && apt-get install -y python3 python3-pip fontconfig wget nodejs nodejs-legacy npm
RUN pip3 install --upgrade pip
# Copy contents of this directory (i.e. full source) to image
COPY . /my_project
# Install Python dependencies
RUN pip3 install -r /event_git/requirements.txt
# Set event_git folder as working directory
WORKDIR /my_project
ENV LANG C.UTF-8
I am running it via
docker build -t event_git .;docker run -t -i event_git /bin/bash
and then executing a script; the DB is on my local machine. This works with mysql-connector-python==8.0.5 but not 8.0.15, so the setup is OK; I think I just need to satisfy the protobuf dependency that was added (see https://github.com/pypa/warehouse/issues/5537 for mention of the protobuf dependency).
mysql-connector-python lists Python Protobuf as an installation requirement, which means protobuf should be installed along with mysql-connector-python.
If that doesn't happen, try adding protobuf==3.6.1 to your requirements.txt.
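With that pin added, the requirements.txt from the question would read:
mysql-connector-python==8.0.15
protobuf==3.6.1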
Figured out the issue. The key is that import mysql.connector needs to be at the top of the file where the create_engine call is. I'm still not sure of the exact reason, but at the very least that import seems to define _CONNECTION_POOLS = {}. If anyone knows why, please share your thoughts.
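As a minimal sketch of that import order (the connection URL is a placeholder; substitute real credentials and database name):
import mysql.connector  # imported first, as described above, so the connector's module-level state is initialised
from sqlalchemy import create_engine, text

# placeholder DSN using the mysqlconnector dialect
engine = create_engine("mysql+mysqlconnector://user:password@localhost/mydb")

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())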

Model training using Azure Container Instance with GPU much slower than local test with same container

I am trying to train a YOLO computer vision model using a container I built which includes an installation of Darknet. The container uses the Nvidia-supplied base image: nvcr.io/nvidia/cuda:9.0-devel-ubuntu16.04
Using nvidia-docker on my local machine with a GTX 1080 Ti, training runs very fast; however, the same container running as an Azure Container Instance with a P100 GPU trains very slowly. It's almost as if it's not utilizing the GPU. I also noticed that the nvidia-smi command does not work in the container running in Azure, but it does work when I ssh into the container running locally on my machine.
Here is the Dockerfile I am using
FROM nvcr.io/nvidia/cuda:9.0-devel-ubuntu16.04
LABEL maintainer="alex.c.schultz@gmail.com" \
description="Pre-Configured Darknet Machine Learning Environment" \
version=1.0
# Container Dependency Setup
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install software-properties-common -y
RUN apt-get install vim -y
RUN apt-get install dos2unix -y
RUN apt-get install git -y
RUN apt-get install wget -y
RUN apt-get install python3-pip -y
RUN apt-get install libopencv-dev -y
# setup virtual environment
WORKDIR /
RUN pip3 install virtualenv
RUN virtualenv venv
WORKDIR venv
RUN mkdir notebooks
RUN mkdir data
RUN mkdir output
# Install Darknet
WORKDIR /venv
RUN git clone https://github.com/AlexeyAB/darknet
RUN sed -i 's/GPU=0/GPU=1/g' darknet/Makefile
RUN sed -i 's/OPENCV=0/OPENCV=1/g' darknet/Makefile
WORKDIR /venv/darknet
RUN make
# Install common pip packages
WORKDIR /venv
COPY requirements.txt ./
RUN . /venv/bin/activate && pip install -r requirements.txt
# Setup Environment
EXPOSE 8888
VOLUME ["/venv/notebooks", "/venv/data", "/venv/output"]
CMD . /venv/bin/activate && jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root
The requirements.txt file is as shown below:
jupyter
matplotlib
numpy
opencv-python
scipy
pandas
sklearn
The issue was that my training data was on an Azure File Share volume and the network latency was causing the training to be slow. I copied the data from the share into my container and then pointed the training to it and everything ran much faster.
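As a rough sketch of that fix (the /mnt/azfile mount point and the dataset/config paths are placeholders; it assumes the file share is still mounted into the container):
# copy the training data off the Azure File Share onto the container's local disk
cp -r /mnt/azfile/dataset /venv/data/
# then train against the local copy instead of the mounted share
./darknet detector train /venv/data/dataset/obj.data /venv/data/dataset/yolo-obj.cfg darknet53.conv.74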

Docker and Plotly

I created a Python script using Plotly Dash to draw graphs, then plotly-orca to export a static image of the created graph. I want to dockerise this script, but my problem is that when I build and run the image I get a "The orca executable is required in order to export figures as static images" error. My question now is: how do I include the executable as part of my Docker image?
It's a bit complicated due to the nature of plotly-orca, but it can be done, according to this Dockerfile based on this advice. Add this to your Dockerfile:
# Download orca AppImage, extract it, and make it executable under xvfb
RUN apt-get install --yes xvfb
RUN wget https://github.com/plotly/orca/releases/download/v1.1.1/orca-1.1.1-x86_64.AppImage -P /home
RUN chmod 777 /home/orca-1.1.1-x86_64.AppImage
# To avoid the need for FUSE, extract the AppImage into a directory (name squashfs-root by default)
RUN cd /home && /home/orca-1.1.1-x86_64.AppImage --appimage-extract
RUN printf '#!/bin/bash \nxvfb-run --auto-servernum --server-args "-screen 0 640x480x24" /home/squashfs-root/app/orca "$@"' > /usr/bin/orca
RUN chmod 777 /usr/bin/orca
RUN chmod -R 777 /home/squashfs-root/
I would just upgrade to Plotly 4.9 or newer and use kaleido through pip - an official substitute for Orca, which has been a pain to set up with Docker. https://plotly.com/python/static-image-export/
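A minimal sketch of the kaleido route (assuming plotly>=4.9 and kaleido are added to requirements.txt or installed with pip in the image):
import plotly.graph_objects as go

fig = go.Figure(go.Scatter(x=[1, 2, 3], y=[4, 1, 2]))
# with kaleido installed, write_image works without orca or xvfb
fig.write_image("figure.png")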
