Azure Container Instance not showing anything via URL after deployment

I am trying to deploy a Docker image on Azure. I can build the image successfully and the deployment also succeeds, but I can't see anything at the URL I specified when creating the container deployment. My app is a Python Flask app which also uses Dash.
I followed this Azure tutorial from the documentation. The example app works, but my app does not, and I don't know how to debug this or where I am going wrong.
Dockerfile
FROM ubuntu:18.04
COPY . /app
WORKDIR /app
RUN apt update && \
apt install -y make curl gcc g++ gfortran git patch wget unixodbc-dev vim-tiny build-essential \
libssl-dev zlib1g-dev libncurses5-dev libncursesw5-dev libreadline-dev libsqlite3-dev \
libgdbm-dev libdb5.3-dev libbz2-dev libexpat1-dev liblzma-dev libffi-dev && \
apt install -y python3 python3-dev python3-pip python3-venv && \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt update && \
ACCEPT_EULA=Y apt-get -y install msodbcsql17 && \
python3 -m venv spo_venv && \
. spo_venv/bin/activate && \
pip3 install --upgrade pip && \
pip3 install wheel && \
pip3 install -r requirements.txt && \
cd / && \
wget http://download.redis.io/releases/redis-5.0.5.tar.gz && \
tar xzf redis-5.0.5.tar.gz && \
cd redis-5.0.5 && \
make && \
# make test && \
cd / && \
rm redis-5.0.5.tar.gz && \
cd / && \
wget https://www.coin-or.org/download/source/Ipopt/Ipopt-3.12.13.tgz && \
tar xvzf Ipopt-3.12.13.tgz && \
cd Ipopt-3.12.13/ThirdParty/Blas/ && \
./get.Blas && \
cd ../Lapack && \
./get.Lapack && \
cd ../Mumps && \
./get.Mumps && \
cd ../Metis && \
./get.Metis && \
cd ../../ && \
mkdir build && \
cd build && \
../configure && \
make -j 4 && \
make install && \
cd / && \
rm Ipopt-3.12.13.tgz && \
echo "export PATH=\"$PATH:/redis-5.0.5/src/\"" >> ~/.bashrc && \
. ~/.bashrc
CMD ["./commands.sh"]
commands.sh
#!/bin/sh
. spo_venv/bin/activate
nohup redis-server > redislogs.log 2>&1 &
nohup celery worker -A task.celery -l info -P eventlet > celerylogs.log 2>&1 &
python app.py
Azure commands
sudo az acr login --name mynameregistry
sudo az acr show --name mynameregistry --query loginServer --output table
sudo docker tag spo_third_trial:v1 mynameregistry.azurecr.io/spo_third_trial:v1
sudo docker push mynameregistry.azurecr.io/spo_third_trial:v1
sudo az acr repository list --name mynameregistry --output table
sudo az acr repository show-tags --name mynameregistry --repository spo_third_trial --output table
sudo az acr show --name mynameregistry --query loginServer
sudo az container create --resource-group myResourceGroup --name spo-third-trial --image mynameregistry.azurecr.io/spo_third_trial:v1 --cpu 1 --memory 1 --registry-login-server mynameregistry.azurecr.io --registry-username mynameregistry --registry-password randomPassword --dns-name-label spo-third-trial --ports 80 8050
But when I go to http://spo-third-trial.eastus.azurecontainer.io I get this.
This site can’t be reached spo-third-trial.eastus.azurecontainer.io took too long to respond.
ERR_CONNECTION_TIMED_OUT
When I access the logs with this command: sudo az container logs --resource-group myResourceGroup --name spo-third-trial, I get this:
Running on http://127.0.0.1:8050/
Debugger PIN: 950-132-306
* Serving Flask app "server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
Running on http://127.0.0.1:8050/
Debugger PIN: 871-957-278
My curl output
curl -v http://spo-third-trial.eastus.azurecontainer.io
* Rebuilt URL to: http://spo-third-trial.eastus.azurecontainer.io/
* Trying <ip-address>...
* TCP_NODELAY set
* connect to <ip-address> port 80 failed: Connection timed out
* Failed to connect to spo-third-trial.eastus.azurecontainer.io port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to spo-third-trial.eastus.azurecontainer.io port 80: Connection timed out
I don't understand whether I am doing something wrong with the Docker image, with Azure, or with both. I would appreciate any help I can get.
NOTE: I am working on a virtual PC (an AWS client) and access Azure from inside it.

As discussed, a couple of points on your question:
I think you meant to map the internal port 8050 to the external port 80. This is not currently possible (Aug 2019) in Azure Container Instances; there is an open item on UserVoice for it. --ports 80 8050 means that you expose both ports, not that you map one onto the other.
The actual problem, as we found out, was that the Python Flask app was listening only on 127.0.0.1, not on 0.0.0.0, so it was not accepting requests from outside the container.
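For a Dash app, a minimal sketch of the fix in app.py might look like this (the module and variable names here are illustrative, not taken from the question):
# app.py -- illustrative names; adapt to your project layout
import dash
app = dash.Dash(__name__)
server = app.server  # the underlying Flask instance
if __name__ == "__main__":
    # Listen on all interfaces so the container accepts traffic from outside;
    # 8050 matches the port exposed via az container create.
    app.run_server(host="0.0.0.0", port=8050)
Since ACI cannot remap ports, the app is then reached at http://spo-third-trial.eastus.azurecontainer.io:8050 rather than on port 80.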

Related

How to push docker image from local to azure environment?

I have a Docker image and I can run it as a container, but I have to bind two local folders into the container. I do that like this:
docker run -d -p 80:80 --name cntr-apache2
and then I go to localhost:80, and the app is running.
But now I want to deploy it to the Azure environment.
So I logged in to the Azure portal: azure login,
logged in to Docker: docker login webadres.azurecr.io,
and then did a push: docker push webaddres.azurecr.io/webaddress:latest.
And I got this response:
latest: digest: sha256:7eefd5631cbe8907afb4cb941f579a7e7786ab36f52ba7d3eeadb4524fc5dddd size: 4909
And I made a web app for docker.
But now if I go to the URL https://docker-webaddress.azurewebsites.net, the index is empty. And that is logical, because I didn't bind the two directories to the Docker container in Azure.
So my question is:
How do I bind the two directories with Azure and Docker?
Thank you
And this is my Dockerfile:
FROM php:7.3-apache
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
apt-get install \
libzip-dev \
wget \
git \
unzip \
-y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
COPY ./install-composer.sh ./
COPY ./php.ini /usr/local/etc/php/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& sh ./install-composer.sh \
&& rm ./install-composer.sh
WORKDIR /var/www
RUN chown -R www-data:www-data /var/www
CMD ["apache2-foreground"]
One option is to use App Service persistent storage.
You need to enable the functionality first:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
Then create a mapping in a Docker Compose file, even if you have a single container to deploy. The ${WEBAPP_STORAGE_HOME} environment variable points to /home on the App Service VM. You can use the Advanced Tools (Kudu) to create the folders and copy the needed files.
wordpress:
  image: <image name:tag>
  volumes:
    - ${WEBAPP_STORAGE_HOME}/folder1:/etc/apache2/sites-enabled/
    - ${WEBAPP_STORAGE_HOME}/folder2:/var/www/html/
Documentation
Another option is to mount Azure Storage.
Note that mounts using Blobs are read-only, while File Shares are read-write.
Start by creating a storage account and a File Share.
Create the mount. I find that it's easier using the Portal. See how in the doc link below.
az storage share-rm create \
--resource-group $resourceGroupName \
--storage-account $storageAccountName \
--name $shareName \
--quota 1024 \
--enabled-protocols SMB \
--output none
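If you prefer the CLI to the Portal for the mount itself, it can be created along these lines (a sketch; every name, key, and path below is a placeholder):
az webapp config storage-account add \
--resource-group <group-name> \
--name <app-name> \
--custom-id <mount-id> \
--storage-type AzureFiles \
--account-name $storageAccountName \
--share-name $shareName \
--access-key <storage-account-key> \
--mount-path /var/www/html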
Documentation

Nodejs port change in Dockerfile

There is a Node.js application in the Docker container. It works on port 3149, but I need the container to run on port 3000. How can I change the port and register it in the Dockerfile without changing anything in the application code?
Dockerfile
COPY package*.json /
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
chmod 0700 /root/.ssh && \
ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
apt update -qqy && \
apt -qqy install \
ruby \
ruby-dev \
yarn \
locales \
autoconf automake gdb git libffi-dev zlib1g-dev libssl-dev \
build-essential
RUN gem install compass
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
locale-gen
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
COPY . .
RUN npm ci && \
node ./node_modules/gulp/bin/gulp.js client && \
rm -rf /app/id_rsa && \
rm -rf /root/.ssh/
EXPOSE 3000
CMD [ "node", "server.js" ]
To have the container reachable on port 3000 you have to specify this when you run the container, using the --publish or -p flag. Note that EXPOSE does not publish the port:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
So you have to run the container with the -p option from the terminal, mapping port 3000 on the host to port 3149 in the container, e.g.
docker run -p 3000:3149 image

Azure webapp running "docker run" twice and fails to deploy

I am trying to deploy my web app on Azure using Docker. On my local machine it works fine, but when I deploy it to Azure I can see that it runs docker run twice (why twice?):
2019-10-02 11:15:26.257 INFO - Status: Image is up to date for *******.azurecr.io/****_****:v2.11
2019-10-02 11:15:26.266 INFO - Pull Image successful, Time taken: 0 Minutes and 1 Seconds
2019-10-02 11:15:26.297 INFO - Starting container for site
2019-10-02 11:15:26.298 INFO - docker run -d -p 27757:8000 --name **********-dv_0_a70e438e -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8000 -e WEBSITE_SITE_NAME=********-dv -e WEBSITE_AUTH_ENABLED=True -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=********-dv.azurewebsites.net -e WEBSITE_INSTANCE_ID=************************* -e HTTP_LOGGING_ENABLED=1 ********.azurecr.io/*****_*****:v2.11 init.sh
2019-10-02 11:15:28.069 INFO - Starting container for site
2019-10-02 11:15:28.070 INFO - docker run -d -p 6078:8081 --name **********_middleware -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8000 -e WEBSITE_SITE_NAME=******-dv -e WEBSITE_AUTH_ENABLED=True -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=********** -e WEBSITE_INSTANCE_ID=******73***** -e HTTP_LOGGING_ENABLED=1 appsvc/middleware:1907112318 /Host.ListenUrl=http://0.0.0.0:8081 /Host.DestinationHostUrl=http://172.16.1.3:8000 /Host.UseFileLogging=true
This leads to an error later:
2019-10-02 11:15:30.410 INFO - Initiating warmup request to container drillx-stuckpipe-dv_0_a70e438e for site *********-dv
2019-10-02 11:19:38.791 ERROR - Container *******-dv_0_a70e438e for site ********-dv did not start within expected time limit. Elapsed time = 248.3813377 sec
In the log stream of the app I can see that the web app has started, but because port 8000 is not accessible I get this error:
2019-10-02 11:43:55.827 INFO - Container ********-dv_0_33e8d6cc_middleware for site ********-dv initialized successfully and is ready to serve requests.
2019-10-02 11:43:55.881 ERROR - Container ********-dv_0_33e8d6cc didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
In my Dockerfile I do have EXPOSE 8000 at the end. I do not know what I am missing here...
FROM python:3.6
# ssh
ENV SSH_PASSWD "root:PWD!"
RUN apt-get update \
&& apt-get -y install \
apt-transport-https \
apt-utils \
curl \
openssh-server \
&& apt-get clean \
&& echo "$SSH_PASSWD" | chpasswd
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& ACCEPT_EULA=Y apt-get -y install \
msodbcsql17 \
unixodbc-dev \
libxmlsec1-dev \
&& apt-get clean
RUN mkdir /code
WORKDIR code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
WORKDIR /code/
RUN ls -ltr
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000
ENTRYPOINT ["init.sh"]
init.sh:
#!/bin/bash
set -e
echo "Starting SSH ..."
service ssh start
gunicorn --bind 0.0.0.0:8000 wsgi
If we look at the logs carefully, it's actually not running twice; the two docker run commands are for different images:
Your application image
Middleware: "appsvc/middleware" is the image used to handle Easy Auth/MSI/CORS on Web App for Linux.
https://hajekj.net/2019/01/21/exploring-app-service-authentication-on-linux/
Now coming to the actual issue: the second set of logs states that your application failed to start within the expected time limit.
This limit is 230 seconds by default on Web App for Linux and can be increased using the WEBSITES_CONTAINER_START_TIME_LIMIT application setting, up to a maximum of 1800 seconds.
How does Azure verify whether the application has started? Azure pings a PORT and waits for an HTTP response. If it receives one, the container is considered started; otherwise docker run is executed again and the process continues.
Which PORT: https://blogs.msdn.microsoft.com/waws/2017/09/08/things-you-should-know-web-apps-and-linux/#NoPing
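For reference, both settings mentioned above can be applied with the same CLI command used earlier in this thread (the values here are examples):
az webapp config appsettings set \
--resource-group <group-name> \
--name <app-name> \
--settings WEBSITES_PORT=8000 WEBSITES_CONTAINER_START_TIME_LIMIT=600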

Azure container instance: deployment successful, but IP address does not respond

I have an ML container image which is accessible at http://localhost:8501/v1/models/my_model on my local system.
After I push it to an Azure Container Registry, I create an Azure Container Instance (with port 8501 set). The deployment is successful and an IP address is generated.
My issue is that when I try to access the IP address, it does not respond.
I have tried:
http://40.12.10.13/
http://40.12.10.13:8501
http://40.12.10.13:8501/v1/models/my_model
It's a TensorFlow Serving image: https://www.tensorflow.org/tfx/serving/docker
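For context, the instance was created with a command along these lines (a sketch; the registry and resource names are placeholders, not the actual values used):
az container create \
--resource-group <resource-group> \
--name tf-serving \
--image <registry>.azurecr.io/my_model:latest \
--ports 8501 \
--dns-name-label <dns-label>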
Below is the Dockerfile:
ARG TF_SERVING_VERSION=latest
ARG TF_SERVING_BUILD_IMAGE=tensorflow/serving:${TF_SERVING_VERSION}-devel
FROM ${TF_SERVING_BUILD_IMAGE} as build_image
FROM ubuntu:18.04
ARG TF_SERVING_VERSION_GIT_BRANCH=master
ARG TF_SERVING_VERSION_GIT_COMMIT=head
LABEL maintainer="gvasudevan@google.com"
LABEL tensorflow_serving_github_branchtag=${TF_SERVING_VERSION_GIT_BRANCH}
LABEL tensorflow_serving_github_commit=${TF_SERVING_VERSION_GIT_COMMIT}
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install TF Serving pkg
COPY --from=build_image /usr/local/bin/tensorflow_model_server /usr/bin/tensorflow_model_server
# Expose ports
# gRPC
EXPOSE 8500
# REST
EXPOSE 8501
# Set where models should be stored in the container
ENV MODEL_BASE_PATH=/models
RUN mkdir -p ${MODEL_BASE_PATH}
# The only required piece is the model name in order to differentiate endpoints
ENV MODEL_NAME=model
# Create a script that runs the model server so we can use environment variables
# while also passing in arguments from the docker command line
RUN echo '#!/bin/bash \n\n\
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
"$@"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh
ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
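Once the instance is reachable, the REST endpoint can be smoke-tested the same way as on the local system (assuming the model name my_model and the instance's public IP):
curl http://<aci-ip>:8501/v1/models/my_model
A successful response reports the model version and its availability state.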

Can't curl linked container in Docker even though I can ping it

I have a Docker container called backend that exposes port 8200 and runs a Django server behind gunicorn inside it. This is my Dockerfile:
FROM debian:wheezy
RUN rm /bin/sh && \
ln -s /bin/bash /bin/sh && \
apt-get -y update && \
apt-get install -y -q \
curl \
procps \
python=2.7.3-4+deb7u1 \
git \
python-pip=1.1-3 \
python-dev \
libpq-dev && \
rm -rf /var/lib/{apt,dpkg,cache,log}
RUN pip install virtualenv && \
virtualenv mockingbird && \
/bin/bash -c "source mockingbird/bin/activate"
ADD ./requirements.txt /mockingbird/backend/requirements.txt
RUN /mockingbird/bin/pip install -r /mockingbird/backend/requirements.txt
ADD ./src /mockingbird/backend/src
CMD ["/mockingbird/bin/gunicorn", "--workers", "8", "--pythonpath", "/mockingbird/backend/src/", "--bind", "localhost:8200", "backend.wsgi"]
I'm running this container like so:
vagrant@10:~$ sudo docker run --name backend --env-file /mockingbird/apps/backend/env/dev -d --restart always --expose 8200 mockingbird/backend
I know that the Django server is up and responding on the correct port by doing the following and getting a response:
vagrant@10:~$ sudo docker exec -it backend /bin/bash
root@b488874c204d:/# curl localhost:8200
I then start a new container linking to backend as follows:
sudo docker run -it --link backend:backend debian:wheezy /bin/bash
But when I try to curl backend, it doesn't work:
root@72946da3dff9:/# apt-get update && apt-get install curl
root@72946da3dff9:/# curl backend:8200
curl: (7) couldn't connect to host
I am, however, able to ping backend:
root@72946da3dff9:/# ping backend
PING backend (172.17.0.41): 48 data bytes
56 bytes from 172.17.0.41: icmp_seq=0 ttl=64 time=0.116 ms
56 bytes from 172.17.0.41: icmp_seq=1 ttl=64 time=0.081 ms
Anyone know anything else I can try to debug why I can't connect to the service running in my linked Docker container? Is there something I am missing here to be able to curl backend:8200 from the linked container?
This is likely the problem: "--bind", "localhost:8200" means gunicorn only listens on the loopback interface inside the container, so connections arriving via the backend hostname (on the container's own network interface) won't be accepted. Change it to "0.0.0.0:8200", or maybe ":8200", depending on the notation supported.
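Applied to the Dockerfile above, only the bind address in the CMD changes:
CMD ["/mockingbird/bin/gunicorn", "--workers", "8", "--pythonpath", "/mockingbird/backend/src/", "--bind", "0.0.0.0:8200", "backend.wsgi"]
After rebuilding the image and recreating the backend container, curl backend:8200 from the linked container should connect.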
