Azure ACR Flask webapp deployment

I'm trying to deploy a Flask app on Azure using the container registry. I used Azure DevOps to pipeline building and pushing the image, then configured a web app to pull this image from ACR. But I keep running into one error or another: the container times out and crashes, and the application log also shows 'No module named flask'.
This is my Dockerfile:
FROM python:3.7.11
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ADD . /app/
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python -m pip install --upgrade pip
RUN pip3 install Flask==2.0.2
RUN pip3 install --no-cache-dir -r /app/requirements.txt
RUN conda install python=3.7
RUN conda install -c rdkit rdkit -y
EXPOSE 5000
ENV NAME cheminformatics
CMD python app.py
I had to install Miniconda because the rdkit package can only be installed from conda. I also added a PORT: 5000 key-value pair to the web app's configuration, but the app hasn't loaded even once. I've been at this for 2 hours now. Previously, I built images on my local machine, pushed them to Docker Hub and ACR, and was able to pull those images, but this is the first time I've used DevOps and I'm not sure what's going wrong here. Can someone please help?
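(For context, 'No module named flask' typically means app.py ends up running under the Miniconda Python while Flask was pip-installed into the image's original Python. A minimal single-interpreter sketch, not the original Dockerfile, which assumes the continuumio/miniconda3 base image and that app.py binds to 0.0.0.0:5000, might look like this:)
# Sketch only: install rdkit, Flask and the requirements into the same conda
# environment so the interpreter that runs app.py can import all of them.
FROM continuumio/miniconda3:4.10.3
WORKDIR /app
COPY . /app/
# rdkit is only available from conda; pin python to 3.7 to match the app
RUN conda install -y python=3.7 && conda install -y -c rdkit rdkit
# use the conda environment's pip for everything else
RUN pip install --no-cache-dir Flask==2.0.2 -r /app/requirements.txt
EXPOSE 5000
ENV NAME cheminformatics
CMD ["python", "app.py"]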

Related

Install Linux dependencies in Azure App Service permanently using Azure DevOps

I created a CI/CD DevOps pipeline for deploying a Django app. After each deployment, I manually SSH into the Azure App Service to install the Linux dependencies below:
apt-get update && apt install -y libxrender1 libxext6
apt-get install -y libfontconfig1
After every deployment, these packages are removed automatically. Is there any way to install these Linux dependencies permanently?
I suppose you are using Azure App Service on Linux. App Service on Linux uses its own customized Docker image to host your application.
Unfortunately, you cannot customize that built-in image, but you can run App Service on Linux with your own custom Docker image that includes your Linux dependencies.
https://github.com/Azure-Samples/docker-django-webapp-linux
Dockerfile example:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
EXPOSE 8000
EXPOSE 27017
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements and the extra Linux dependencies
COPY requirements.txt .
RUN python -m pip install -r requirements.txt \
    && apt-get update && apt-get install -y libxrender1 libxext6 libfontconfig1
WORKDIR /app
COPY . /app
# Copy the startup script onto the PATH so the ENTRYPOINT below can find it
COPY init.sh /usr/local/bin/init.sh
RUN chmod u+x /usr/local/bin/init.sh
ENTRYPOINT ["init.sh"]
init.sh example:
#!/bin/bash
set -e
python /app/manage.py runserver 0.0.0.0:8000
https://learn.microsoft.com/en-us/azure/developer/python/tutorial-containerize-deploy-python-web-app-azure-01
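In practice, the Web App is then pointed at the custom image. A rough CLI sketch (registry, app, and resource-group names are placeholders, and the older --docker-* flag spellings are assumed):
# Build and push the custom image to ACR, then configure the Web App to pull it
az acr build --registry myregistry --image django-app:v1 .
az webapp config container set \
  --name myAppService \
  --resource-group myResourceGroup \
  --docker-custom-image-name myregistry.azurecr.io/django-app:v1 \
  --docker-registry-server-url https://myregistry.azurecr.io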

Attempting to host a Flutter project on Azure App Service using a Docker image; local image and cloud image behave differently

I am having trouble with Azure and Docker: my local machine's image is behaving differently than the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host IDErr: 0, Message: mapped to a host ID
In trying to fix it, I found out that Azure has a limit on UID numbers of 65000. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
This works great locally for changing the UIDs of the affected files from 397546 to 0. I run a command in the container's CLI:
find / -uid 397546
It finds none of the files it found before. Yay! I even navigate to the directories where the affected files live and do a quick
ls -n to double-confirm, and sure enough the UIDs are now 0 on all of them. Good to go?
Next step: push to the cloud. When I push and restart the app service, I still get the exact same error as above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]```
So basically we made a shell script that builds the web bundle BEFORE building the Docker image. We then take the static JS from the build/web folder and host that on the server, so there is no need to download all of Flutter inside the image. It makes the pipeline a little harder, but at least it works.
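The pre-build step is roughly this (a sketch; the exact script is not in the original post):
#!/bin/bash
# Build the Flutter web bundle on the build agent first, so the image only
# needs the static output from build/web and never downloads the Flutter SDK.
set -e
flutter pub get
flutter build web --release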
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preseed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
COPY . /app/
WORKDIR /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]

Unable to run aliyun-cli in a docker:stable container after installing it; errors with "command not found"

I'm not sure whether Stack Overflow or Server Fault is the right Stack Exchange site, but I'm going with Stack Overflow because the Alibaba Cloud site said to add a tag and ask a question here.
So: I'm currently building an image based on docker:stable (an Alpine distro) that will have aliyun-cli installed and available for use. However, I'm getting a strange "command not found" error when I run it. I have followed the guide here https://partners-intl.aliyun.com/help/doc-detail/139508.htm and moved the aliyun binary to /usr/sbin.
Here is my Dockerfile, for example:
FROM docker:stable
RUN apk update && apk add curl
#Install python 3
RUN apk update && apk add python3 py3-pip
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI
RUN curl -L -o aliyun-cli.tgz https://aliyuncli.alicdn.com/aliyun-cli-linux-3.0.30-amd64.tgz
RUN tar -xzvf aliyun-cli.tgz
RUN mv aliyun /usr/bin
RUN chmod +x /usr/bin/aliyun
RUN rm aliyun-cli.tgz
However, when I run aliyun (which even tab-completes), I get this:
/ # aliyun
sh: aliyun: not found
I've tried moving it to other bin directories, and cd-ing into the folder and calling it explicitly, but I always get "command not found". Any suggestions would be welcome.
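(A quick way to see what is actually happening, not from the original post, is to inspect the binary inside the Alpine container; the prebuilt aliyun release is linked against glibc, which Alpine does not ship, and the missing loader surfaces as "not found":)
# Hypothetical diagnostics inside the Alpine container
apk add --no-cache file            # the "file" utility is not installed by default
file /usr/bin/aliyun               # reports a dynamically linked ELF built against glibc
ldd /usr/bin/aliyun                # if available, shows the missing ld-linux-x86-64 loader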
Did you check this Dockerfile?
Also, why do you need to install the AWS CLI in the same image, and why maintain it yourself, when AWS provides a managed aws-cli image?
docker run --rm -it amazon/aws-cli --version
That's it for the aws-cli image, but if you want it in an existing image then you can try
RUN pip install awscli --upgrade
Dockerfile:
FROM python:2-alpine3.8
LABEL com.frapsoft.maintainer="Maik Ellerbrock" \
com.frapsoft.version="0.1.0"
ARG SERVICE_USER
ENV SERVICE_USER ${SERVICE_USER:-aliyun}
RUN apk add --no-cache curl
RUN curl https://raw.githubusercontent.com/ellerbrock/docker-collection/master/dockerfiles/alpine-aliyuncli/requirements.txt > /tmp/requirements.txt
RUN \
adduser -s /sbin/nologin -u 1000 -H -D ${SERVICE_USER} && \
apk add --no-cache build-base && \
pip install aliyuncli && \
pip install --no-cache-dir -r /tmp/requirements.txt && \
apk del build-base && \
rm -rf /tmp/*
USER ${SERVICE_USER}
WORKDIR /usr/local/bin
ENTRYPOINT [ "aliyuncli" ]
CMD [ "--help" ]
Build and run:
docker build -t aliyuncli .
docker run -it --rm aliyuncli
Output:
docker run -it --rm abc aliyuncli
usage: aliyuncli <command> <operation> [options and parameters]
<aliyuncli> the valid command as follows:
batchcompute | bsn
bss | cms
crm | drds
ecs | ess
ft | ocs
oms | ossadmin
ram | rds
risk | slb
ubsms | yundun
After a lot of searching I found a GitHub issue on the official aliyun-cli repo that describes how it is not compatible with Alpine Linux because the binary is not musl-compatible.
Link here: https://github.com/aliyun/aliyun-cli/issues/54
Following the workarounds there, I built a multi-stage Dockerfile with the following, which simply fixed my issue.
Dockerfile:
#Build aliyun-cli binary ourselves because of issue
#in alpine https://github.com/aliyun/aliyun-cli/issues/54
FROM golang:1.13-alpine3.11 as cli_builder
RUN apk update && apk add curl git make
RUN mkdir /srv/aliyun
WORKDIR /srv/aliyun
RUN git clone https://github.com/aliyun/aliyun-cli.git
RUN git clone https://github.com/aliyun/aliyun-openapi-meta.git
ENV GOPROXY=https://goproxy.cn
WORKDIR aliyun-cli
RUN make deps; \
make testdeps; \
make build;
FROM docker:19
#Install python 3 & jq
RUN apk update && apk add python3 py3-pip python3-dev jq
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI from builder
COPY --from=cli_builder /srv/aliyun/aliyun-cli/out/aliyun /usr/bin
RUN aliyun configure set --profile default --mode EcsRamRole --ram-role-name build --region cn-shanghai
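A quick way to check the resulting image (the tag name is a placeholder):
# Build the multi-stage image and confirm the binary now runs on Alpine
docker build -t docker-aliyun .
docker run --rm -it docker-aliyun aliyun version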

Model training using Azure Container Instance with GPU much slower than local test with same container

I am trying to train a YOLO computer vision model using a container I built which includes an installation of Darknet. The container uses the Nvidia-supplied base image nvcr.io/nvidia/cuda:9.0-devel-ubuntu16.04.
Using nvidia-docker on my local machine with a GTX 1080 Ti, training runs very fast; however, the same container running as an Azure Container Instance with a P100 GPU trains very slowly. It's almost as if it's not utilizing the GPU. I also noticed that the nvidia-smi command does not work in the container running in Azure, but it does work when I SSH into the container running locally on my machine.
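(For reference, this is roughly how the GPU-backed instance is created; the resource names are placeholders and not from the original post:)
# Hypothetical ACI deployment requesting a single P100 GPU
az container create \
  --resource-group myResourceGroup \
  --name darknet-training \
  --image myregistry.azurecr.io/darknet:latest \
  --gpu-count 1 --gpu-sku P100 \
  --cpu 4 --memory 16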
Here is the Dockerfile I am using
FROM nvcr.io/nvidia/cuda:9.0-devel-ubuntu16.04
LABEL maintainer="alex.c.schultz#gmail.com" \
description="Pre-Configured Darknet Machine Learning Environment" \
version=1.0
# Container Dependency Setup
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install software-properties-common -y
RUN apt-get install vim -y
RUN apt-get install dos2unix -y
RUN apt-get install git -y
RUN apt-get install wget -y
RUN apt-get install python3-pip -y
RUN apt-get install libopencv-dev -y
# setup virtual environment
WORKDIR /
RUN pip3 install virtualenv
RUN virtualenv venv
WORKDIR venv
RUN mkdir notebooks
RUN mkdir data
RUN mkdir output
# Install Darknet
WORKDIR /venv
RUN git clone https://github.com/AlexeyAB/darknet
RUN sed -i 's/GPU=0/GPU=1/g' darknet/Makefile
RUN sed -i 's/OPENCV=0/OPENCV=1/g' darknet/Makefile
WORKDIR /venv/darknet
RUN make
# Install common pip packages
WORKDIR /venv
COPY requirements.txt ./
RUN . /venv/bin/activate && pip install -r requirements.txt
# Setup Environment
EXPOSE 8888
VOLUME ["/venv/notebooks", "/venv/data", "/venv/output"]
CMD . /venv/bin/activate && jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root
The requirements.txt file is as shown below:
jupyter
matplotlib
numpy
opencv-python
scipy
pandas
sklearn
The issue was that my training data was on an Azure File Share volume, and the network latency was making training slow. I copied the data from the share into my container, pointed the training at the local copy, and everything ran much faster.
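A sketch of that workaround (the mount point and dataset paths are placeholders, not from the original setup):
# Hypothetical startup step: copy the training data from the mounted Azure File
# Share onto the container's local disk, then launch training against the copy.
cp -r /mnt/azfileshare/yolo-data /venv/data/
cd /venv/darknet
./darknet detector train /venv/data/yolo-data/obj.data /venv/data/yolo-data/yolo-obj.cfg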

Python 3 virtualenv and Docker

I'm trying to build a Docker image with Python 3 and virtualenv.
I understand that I wouldn't need to use virtualenv in a Docker image since I'm only going to use Python 3, yet I see some clean isolation benefits of using virtualenv anyway.
What's the best practice? Should I avoid using virtualenv in Docker?
If so, how can I set up python3 and pip3 to be used as python and pip (without the 3)?
This is my Dockerfile:
FROM openjdk:8-alpine
RUN apk update && apk add bash gcc musl-dev
RUN apk add python3 python3-dev
RUN apk add py3-pip
RUN apk add libxslt-dev libxml2-dev
ENV PROJECT_HOME /opt/app
RUN mkdir -p /opt/app
RUN mkdir -p /opt/app/modules
ENV LD_LIBRARY_PATH /usr/lib/python3.6/site-packages/jep
ENV LD_PRELOAD /usr/lib/libpython3.6m.so
RUN pip3 install jep
RUN pip3 install ads
RUN pip3 install gspread
RUN pip3 list
COPY target/my-server-1.0-SNAPSHOT.jar $PROJECT_HOME/my-server-1.0-SNAPSHOT.jar
WORKDIR $PROJECT_HOME
CMD ["java", "-Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb","-jar","./my-server-1.0-SNAPSHOT.jar"]
Thanks
=== UPDATE 1 ===
I'm trying to create a new virtualenv in the WORKDIR, install some libs, and then execute a shell script. Even though I can see the whole thing being created when I build the image, when I run the container the environment folder is empty.
This is from my Dockerfile:
RUN virtualenv ./env && source ./env/bin/activate && pip install jep \
googleads gspread oauth2client
ENTRYPOINT ["/bin/bash", "./startup.sh"]
startup.sh:
#!/bin/sh
source ./env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
It builds fine, but on docker-compose up -d this is the output:
./startup.sh: source: line 2: can't open './env/bin/activate'
The env folder exists, but it's empty.
Any ideas?
Thanks!
=== UPDATE 2 ===
This is the working config:
RUN virtualenv ./my-env && source ./my-env/bin/activate \
&& pip install gspread==0.6.2 jep oauth2client googleads pandas
CMD ["/bin/bash", "-c", "./startup.sh"]
This is startup.sh:
#!/bin/sh
source ./my-env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
I don't think using virtualenv in Docker is really a negative; it will only slow down your container builds a bit.
As for renaming pip3 and python3, you can create hard links like this:
ln /usr/bin/python3 /usr/bin/python
ln /usr/bin/pip3 /usr/bin/pip
assuming the python3 executable is in /usr/bin. You can find its location by running which python3.
P.S.: Your Dockerfile contains a lot of RUN instructions, which create unnecessary intermediate layers. Combine them to save space and time:
RUN apk update && apk add bash gcc musl-dev \
python3 python3-dev py3-pip \
libxslt-dev libxml2-dev
RUN mkdir -p /opt/app/modules # you don't need the first one, -p will create it for you
RUN pip3 install jep ads gspread
Or combine them even further, if you aren't planning to change them often:
RUN apk update \
    && apk add bash gcc musl-dev \
       python3 python3-dev py3-pip \
       libxslt-dev libxml2-dev \
    && mkdir -p /opt/app/modules \
    && pip3 install jep ads gspread
The only "workaround" I've found in order to use virtualenv from my docker container is to enter to the docker by ssh, create the environment, install the libs and set its folder as a volume in the docker-compose config so it won't be deleted and I can use it afterward.
(Or to have it ready and just copy the folder at build time) which could be a good option for saving build time, isn't it?
Otherwise, If I create it on Dockerfile and install the libs there, its folder gets empty when the container runs. Don't know why.
I appreciate if anyone can suggest a better way to deal with that.
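For what it's worth, one common pattern (a sketch, not taken from the answers above) is to create the virtualenv at build time outside any directory that docker-compose might mount, and put its bin folder on the PATH so nothing has to be sourced at runtime and a volume can't hide it:
# Minimal sketch with a placeholder app: the virtualenv lives in /opt/venv and
# its bin directory is first on PATH, so "python" and "pip" resolve to it
# without sourcing activate; a volume mounted over /opt/app cannot empty it.
FROM python:3.6-slim
RUN pip install virtualenv && virtualenv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
RUN pip install gspread oauth2client
WORKDIR /opt/app
COPY . /opt/app
CMD ["python", "app.py"]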
