I have an ML container image which is accessible at http://localhost:8501/v1/models/my_model on my local system.
After I push it to an Azure Container Registry, I create an Azure Container Instance (with port 8501 set). The deployment is successful and an IP address is generated.
My issue is that when I try to access the IP address, it does not respond.
I have tried:
http://40.12.10.13/
http://40.12.10.13:8501
http://40.12.10.13:8501/v1/models/my_model
It's a TensorFlow Serving image: https://www.tensorflow.org/tfx/serving/docker
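For reference, an Azure CLI deployment along the lines described above would look roughly like this (a sketch; the resource group, registry, and image names are placeholders, not taken from the question):
# create the container instance with the TF Serving REST port open
az container create \
  --resource-group myResourceGroup \
  --name my-model-serving \
  --image myregistry.azurecr.io/my-model-serving:latest \
  --registry-login-server myregistry.azurecr.io \
  --registry-username myregistry \
  --registry-password <password> \
  --dns-name-label my-model-serving \
  --ports 8501
With the port opened this way there is no port mapping involved, so of the URLs above only the third form (http://<ip>:8501/v1/models/my_model) should answer; TensorFlow Serving's REST API listens on 8501, not 80.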
Below is the Dockerfile:
ARG TF_SERVING_VERSION=latest
ARG TF_SERVING_BUILD_IMAGE=tensorflow/serving:${TF_SERVING_VERSION}-devel
FROM ${TF_SERVING_BUILD_IMAGE} as build_image
FROM ubuntu:18.04
ARG TF_SERVING_VERSION_GIT_BRANCH=master
ARG TF_SERVING_VERSION_GIT_COMMIT=head
LABEL maintainer="gvasudevan@google.com"
LABEL tensorflow_serving_github_branchtag=${TF_SERVING_VERSION_GIT_BRANCH}
LABEL tensorflow_serving_github_commit=${TF_SERVING_VERSION_GIT_COMMIT}
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install TF Serving pkg
COPY --from=build_image /usr/local/bin/tensorflow_model_server /usr/bin/tensorflow_model_server
# Expose ports
# gRPC
EXPOSE 8500
# REST
EXPOSE 8501
# Set where models should be stored in the container
ENV MODEL_BASE_PATH=/models
RUN mkdir -p ${MODEL_BASE_PATH}
# The only required piece is the model name in order to differentiate endpoints
ENV MODEL_NAME=model
# Create a script that runs the model server so we can use environment variables
# while also passing in arguments from the docker command line
RUN echo '#!/bin/bash \n\n\
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
"$#"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh
ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
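Note that this base Dockerfile defaults MODEL_NAME to model, so the REST path it serves is /v1/models/model unless the derived image (or the container) overrides the variable. Since /v1/models/my_model works locally, the derived image presumably already sets it; a quick local sanity check could look like this (the image tag is a placeholder, and the -e override is only needed if the derived image does not set MODEL_NAME itself):
# run the image locally and hit the model status endpoint
docker run -d -p 8501:8501 -e MODEL_NAME=my_model myregistry.azurecr.io/my-model-serving:latest
curl http://localhost:8501/v1/models/my_model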
Related
I have a Docker image and I can run it as a container. But I have to bind two local folders into the container, and I do it like this:
docker run -d -p 80:80 --name cntr-apache2
Then I go to localhost:80, and the app is running.
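(The command above omits the image name and the two bind mounts; spelled out with placeholder host folders and an image placeholder, it would look something like this:)
# hypothetical full form of the local run; host paths and image are placeholders
docker run -d -p 80:80 \
  -v $(pwd)/sites-enabled:/etc/apache2/sites-enabled \
  -v $(pwd)/html:/var/www/html \
  --name cntr-apache2 <image-name:tag>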
But now I want to deploy it to Azure.
So I logged in to Azure: azure login,
and logged in with Docker: docker login webadres.azurecr.io,
and then pushed the image: docker push webaddres.azurecr.io/webaddress:latest.
And I got this response:
latest: digest: sha256:7eefd5631cbe8907afb4cb941f579a7e7786ab36f52ba7d3eeadb4524fc5dddd size: 4909
And I created a Web App for Containers.
But now if I go to the URL https://docker-webaddress.azurewebsites.net,
the index is empty. And that is logical, because I didn't bind the two directories to the Docker container in Azure.
So my question is:
How do I bind the two directories to the Docker container in Azure?
Thank you
And this is my Dockerfile:
FROM php:7.3-apache
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
apt-get install \
libzip-dev \
wget \
git \
unzip \
-y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
COPY ./install-composer.sh ./
COPY ./php.ini /usr/local/etc/php/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& sh ./install-composer.sh \
&& rm ./install-composer.sh
WORKDIR /var/www
RUN chown -R www-data:www-data /var/www
CMD ["apache2-foreground"]
One option is to use App Service persistent storage.
You need to enable the functionality first:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
Then create a mapping in a Docker Compose file, even if you have a single container to deploy. The ${WEBAPP_STORAGE_HOME} environment variable will point to /home on the App Service VM. You can use the Advanced Tools (Kudu) to create the folders and copy the needed files.
wordpress:
  image: <image name:tag>
  volumes:
    - ${WEBAPP_STORAGE_HOME}/folder1:/etc/apache2/sites-enabled/
    - ${WEBAPP_STORAGE_HOME}/folder2:/var/www/html/
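To point the Web App at this Compose file from the CLI instead of the Portal, something along these lines could work (a sketch; the group and app names are placeholders, and the file is assumed to be saved as docker-compose.yml):
# apply the multi-container (Compose) configuration to the Web App
az webapp config container set \
  --resource-group <group-name> \
  --name <app-name> \
  --multicontainer-config-type compose \
  --multicontainer-config-file docker-compose.yml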
Documentation
Another option is to mount Azure Storage.
Note that Blob mounts are read-only, while File Share mounts are read-write.
Start by creating a storage account and a File Share.
Create the mount. I find that it's easier using the Portal. See how in the doc link below.
az storage share-rm create \
--resource-group $resourceGroupName \
--storage-account $storageAccountName \
--name $shareName \
--quota 1024 \
--enabled-protocols SMB \
--output none
Documentation
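For completeness, the File Share created above can also be attached to the Web App from the CLI rather than the Portal; roughly like this (a sketch with placeholder names; the access key comes from the storage account):
# mount the Azure Files share into the container at /var/www/html
az webapp config storage-account add \
  --resource-group <group-name> \
  --name <app-name> \
  --custom-id files-mount \
  --storage-type AzureFiles \
  --share-name $shareName \
  --account-name $storageAccountName \
  --access-key <storage-account-key> \
  --mount-path /var/www/html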
I am using an OpenFaaS Python3 function with Terraform to create a bucket in my AWS account. I am trying it locally: I created a local cluster using k3s and installed the OpenFaaS CLI.
Steps:
Created a local cluster and deployed faas-netes.
Created a new project using the OpenFaaS python3 template.
Exposed the port.
Updated the gateway key under provider in my project's YAML file with the new host.
I am creating my .tf files from Python only, and I am sure the files are created with the proper content, as I checked by running ls and printing the file contents after closing them.
I checked that Terraform is installed in the image and working by running terraform --version.
I have already changed my Dockerfile so that Terraform is installed while building the image.
OS: Ubuntu 20.04, Docker: 20.10.15, Faas-CLI: 0.14.2
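Roughly, the local workflow described in the steps above looks like this (a sketch; the function name is a placeholder):
# forward the OpenFaaS gateway, then build and deploy the function
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
faas-cli new my-terraform-fn --lang python3
faas-cli up -f my-terraform-fn.yml --gateway http://127.0.0.1:8080
# invoke it once deployed
curl -d '{}' http://127.0.0.1:8080/function/my-terraform-fn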
Dockerfile
FROM --platform=${TARGETPLATFORM:-linux/amd64} ghcr.io/openfaas/classic-watchdog:0.2.0 as watchdog
FROM --platform=${TARGETPLATFORM:-linux/amd64} python:3-alpine
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Allows you to add additional packages via build-arg
ARG ADDITIONAL_PACKAGE
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog
RUN apk --no-cache add ca-certificates ${ADDITIONAL_PACKAGE}
RUN apk add --no-cache wget
RUN apk add --no-cache unzip
RUN wget https://releases.hashicorp.com/terraform/1.1.9/terraform_1.1.9_linux_amd64.zip
RUN unzip terraform_1.1.9_linux_amd64.zip
RUN mv terraform /usr/bin/
RUN rm terraform_1.1.9_linux_amd64.zip
# Add non root user
RUN addgroup -S app && adduser app -S -G app
WORKDIR /home/app/
COPY index.py .
COPY requirements.txt .
RUN chown -R app /home/app && \
mkdir -p /home/app/python && chown -R app /home/app
USER app
ENV PATH=$PATH:/home/app/.local/bin:/home/app/python/bin/
ENV PYTHONPATH=$PYTHONPATH:/home/app/python
RUN pip install -r requirements.txt --target=/home/app/python
RUN mkdir -p function
RUN touch ./function/__init__.py
WORKDIR /home/app/function/
COPY function/requirements.txt .
RUN pip install -r requirements.txt --target=/home/app/python
WORKDIR /home/app/
USER root
COPY function function
# Allow any user-id for OpenShift users.
RUN chown -R app:app ./ && \
chmod -R 777 /home/app/python
USER app
ENV fprocess="python3 index.py"
EXPOSE 8080
HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
CMD ["fwatchdog"]
Handler.py
import os
import subprocess

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    print(os.system("terraform --version"))
    with open('terraform-local/bucket.tf', 'w') as resourcefile:
        resourcefile.write('resource "aws_s3_bucket" "ABHINAV" { bucket = "new"}')
        resourcefile.close()
    with open('terraform-local/main.tf', 'w') as mainfile:
        mainfile.write('provider "aws" {' + '\n')
        mainfile.write('access_key = ""' + '\n')
        mainfile.write('secret_key = ""' + '\n')
        mainfile.write('region = "ap-south-1"' + '\n')
        mainfile.write('}')
        mainfile.close()
    print(os.system("ls"))
    os.system("terraform init")
    os.system("terraform plan")
    os.system("terraform apply -auto-approve")
    return req
But I am still not able to use Terraform commands like terraform init, and no bucket is created on AWS.
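One thing worth noting in the handler above: the .tf files are written under terraform-local/, while terraform init/plan/apply run in the function's working directory. If that mismatch turns out to be the cause, pointing Terraform at the directory the files were written to would look roughly like this (a sketch using Terraform's -chdir option, available in the 1.1.x release installed in the Dockerfile):
# run the terraform workflow against the directory containing the .tf files
terraform -chdir=terraform-local init
terraform -chdir=terraform-local plan
terraform -chdir=terraform-local apply -auto-approve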
I would like to run the self-hosted Linux agent container only once per pipeline;
that means when the pipeline is done, I would like the container to stop.
I saw that there is a parameter called "--once";
please see the link below (near the bottom of the page):
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
But when I start the container like this, with --once after docker run:
docker run --once --rm -it -e AZP_WORK=/home/working_dir -v /home/working_dir:/azp -e AZP_URL=https://dev.azure.com/xxxx -e AZP_TOKEN=nhxxxxxu76mlua -e AZP_AGENT_NAME=ios_dockeragent xxx.xxx.com:2000/azure_self_hosted_agent/agent:latest
I'm getting:
unknown flag: --once
See 'docker run --help'.
Also, if I put it in the Dockerfile as:
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh --once"]
I'm getting this error when trying to run the container:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./start.sh --once\": stat ./start.sh --once: no such file or directory": unknown
Where do I need to set this "--once" option for the dockerized agent?
It is for the agent's run command, not for docker run. From the docs:
For agents configured to run interactively, you can choose to have the
agent accept only one job. To run in this configuration:
./run.sh --once
Agents in this mode will accept only one job and then spin down
gracefully (useful for running in Docker on a service like Azure
Container Instances).
So you need to add it in the bash script with which you configure the Docker image:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
As far as I know, there's no way to pass it in from the outside; you have to go into the container and edit the start.sh file to add the --once argument to the appropriate line.
exec ./externals/node/bin/node ./bin/AgentService.js interactive --once & wait $!
cleanup
Side note: depending on your requirements, you might also take the opportunity to remove the undocumented web-server from start.sh.
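As an aside, the stat ./start.sh --once: no such file or directory error from the question comes from putting the script and its flag into a single exec-form array element; each argument has to be its own element:
# exec form: one array element per argument
CMD ["./start.sh", "--once"]
Even with that fixed, the point above still applies: start.sh does not pass extra arguments through to the agent's run step, so editing the script is what actually gets --once applied.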
There is a Node.js application in a Docker container. It listens on port 3149, but I need the container to be reachable on port 3000. How can I change the port and declare it in the Dockerfile without changing anything in the application code?
Dockerfile
COPY package*.json /
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
chmod 0700 /root/.ssh && \
ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
apt update -qqy && \
apt -qqy install \
ruby \
ruby-dev \
yarn \
locales \
autoconf automake gdb git libffi-dev zlib1g-dev libssl-dev \
build-essential
RUN gem install compass
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
locale-gen
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
COPY . .
RUN npm ci && \
node ./node_modules/gulp/bin/gulp.js client && \
rm -rf /app/id_rsa && \
rm -rf /root/.ssh/
EXPOSE 3000
CMD [ "node", "server.js" ] ```
To have the container reachable on port 3000, you have to specify this when you run the container, using the -p (--publish) flag; note that EXPOSE does not publish the port:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published. To actually publish the port when running
the container, use the -p flag on docker run to publish and map one or
more ports, or the -P flag to publish all exposed ports and map them
to high-order ports.
So you have to run the container with the -p option from the terminal:
docker run -p 3000:3149 ...
When you run the container, map port 3000 on the host to port 3149 in the container, e.g.:
docker run -p 3000:3149 image
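As a usage example (the image and container names below are placeholders), the app then answers on port 3000 of the host while still listening on 3149 inside the container:
# publish host port 3000 to container port 3149, then check it
docker run -d --name my-node-app -p 3000:3149 my-node-image
curl http://localhost:3000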
I am trying to deploy a Docker image on Azure. I am able to build the image successfully and deploy it successfully, but I am not able to see anything at the URL I specified when creating the container deployment. My app is a Python Flask app which also uses Dash.
I followed this Azure tutorial from the documentation. The example app works, but my app does not. I don't know how to debug this or where I am going wrong.
Dockerfile
FROM ubuntu:18.04
COPY . /app
WORKDIR /app
RUN apt update && \
apt install -y make curl gcc g++ gfortran git patch wget unixodbc-dev vim-tiny build-essential \
libssl-dev zlib1g-dev libncurses5-dev libncursesw5-dev libreadline-dev libsqlite3-dev \
libgdbm-dev libdb5.3-dev libbz2-dev libexpat1-dev liblzma-dev libffi-dev && \
apt install -y python3 python3-dev python3-pip python3-venv && \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt update && \
ACCEPT_EULA=Y apt-get -y install msodbcsql17 && \
python3 -m venv spo_venv && \
. spo_venv/bin/activate && \
pip3 install --upgrade pip && \
pip3 install wheel && \
pip3 install -r requirements.txt && \
cd / && \
wget http://download.redis.io/releases/redis-5.0.5.tar.gz && \
tar xzf redis-5.0.5.tar.gz && \
cd redis-5.0.5 && \
make && \
# make test && \
cd / && \
rm redis-5.0.5.tar.gz && \
cd / && \
wget https://www.coin-or.org/download/source/Ipopt/Ipopt-3.12.13.tgz && \
tar xvzf Ipopt-3.12.13.tgz && \
cd Ipopt-3.12.13/ThirdParty/Blas/ && \
./get.Blas && \
cd ../Lapack && \
./get.Lapack && \
cd ../Mumps && \
./get.Mumps && \
cd ../Metis && \
./get.Metis && \
cd ../../ && \
mkdir build && \
cd build && \
../configure && \
make -j 4 && \
make install && \
cd / && \
rm Ipopt-3.12.13.tgz && \
echo "export PATH=\"$PATH:/redis-5.0.5/src/\"" >> ~/.bashrc && \
. ~/.bashrc
CMD ["./commands.sh"]
commands.sh
#!/bin/sh
. spo_venv/bin/activate
nohup redis-server > redislogs.log 2>&1 &
nohup celery worker -A task.celery -l info -P eventlet > celerylogs.log 2>&1 &
python app.py
Azure commands
sudo az acr login --name mynameregistry
sudo az acr show --name mynameregistry --query loginServer --output table
sudo docker tag spo_third_trial:v1 mynameregistry.azurecr.io/spo_third_trial:v1
sudo docker push mynameregistry.azurecr.io/spo_third_trial:v1
sudo az acr repository list --name mynameregistry --output table
sudo az acr repository show-tags --name mynameregistry --repository spo_third_trial --output table
sudo az acr show --name mynameregistry --query loginServer
sudo az container create --resource-group myResourceGroup --name spo-third-trial --image mynameregistry.azurecr.io/spo_third_trial:v1 --cpu 1 --memory 1 --registry-login-server mynameregistry.azurecr.io --registry-username mynameregistry --registry-password randomPassword --dns-name-label spo-third-trial --ports 80 8050
But when I go to http://spo-third-trial.eastus.azurecontainer.io, I get this:
This site can’t be reached spo-third-trial.eastus.azurecontainer.io took too long to respond.
Search Google for spo third trial eastus azure container io
ERR_CONNECTION_TIMED_OUT
When I access the logs with sudo az container logs --resource-group myResourceGroup --name spo-third-trial, I get this:
Running on http://127.0.0.1:8050/
Debugger PIN: 950-132-306
* Serving Flask app "server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
Running on http://127.0.0.1:8050/
Debugger PIN: 871-957-278
My curl output
curl -v http://spo-third-trial.eastus.azurecontainer.io
* Rebuilt URL to: http://spo-third-trial.eastus.azurecontainer.io/
* Trying <ip-address>...
* TCP_NODELAY set
* connect to <ip-address> port 80 failed: Connection timed out
* Failed to connect to spo-third-trial.eastus.azurecontainer.io port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to spo-third-trial.eastus.azurecontainer.io port 80: Connection timed out
I don't understand whether I am doing something wrong with the Docker setup, with Azure, or with both. I would appreciate any help I can get.
NOTE: I am working on a virtual PC (an AWS client) and accessing Azure from inside it.
As discussed, a couple of points to your question:
I think you meant to map the internal port 8050 to the external port 80. But this is not possible currently (Aug-2019) in Azure Container Instances. There is an open item on Uservoice for this. --ports 80 8050 means that you expose both ports, not that you map one to the other.
The actual problem, as we found out, was that the Python Flask app was listening only on 127.0.0.1, not on 0.0.0.0, so it was not accepting requests from the outside.
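Once the app binds to 0.0.0.0 (for Dash, via the host argument of run_server) and the image is redeployed, the site should answer directly on port 8050, since ACI exposes the listed ports as-is rather than mapping them; using the hostname from the question:
# check the Dash port that was opened with --ports 80 8050
curl -v http://spo-third-trial.eastus.azurecontainer.io:8050/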