I have a Docker image that I can run as a container, but I have to bind two local folders into the container. I do it like this:
docker run -d -p 80:80 --name cntr-apache2
Then I go to localhost:80 and the app is running.
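For context, a run command with the two bind mounts the question describes might look like this (the local paths and image name here are hypothetical placeholders, not the actual ones):

```shell
# Hypothetical example: bind two local folders into the container.
docker run -d -p 80:80 --name cntr-apache2 \
  -v /home/me/sites-enabled:/etc/apache2/sites-enabled/ \
  -v /home/me/html:/var/www/html/ \
  my-apache2-image
```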
But now I want to deploy it to the Azure environment.
So I logged in to Azure: az login.
And in Docker I logged in to the registry: docker login webaddress.azurecr.io.
And then I did a push: docker push webaddress.azurecr.io/webaddress:latest.
And I got this response:
latest: digest: sha256:7eefd5631cbe8907afb4cb941f579a7e7786ab36f52ba7d3eeadb4524fc5dddd size: 4909
And I made a web app for Docker.
But now if I go to the URL https://docker-webaddress.azurewebsites.net,
the index is empty. And that is logical, because I didn't bind the two directories to the Docker container in Azure.
So my question is:
How do I bind the two directories with Docker on Azure?
Thank you.
And this is my Dockerfile:
FROM php:7.3-apache
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
apt-get install \
libzip-dev \
wget \
git \
unzip \
-y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
COPY ./install-composer.sh ./
COPY ./php.ini /usr/local/etc/php/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& sh ./install-composer.sh \
&& rm ./install-composer.sh
WORKDIR /var/www
RUN chown -R www-data:www-data /var/www
CMD ["apache2-foreground"]
One option is to use App Service persistent storage.
You need to enable the functionality first:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
Then create a mapping in a Docker Compose file, even if you have a single container to deploy. The ${WEBAPP_STORAGE_HOME} environment variable points to /home in the App Service VM. You can use the Advanced Tools (Kudu) to create the folders and copy the needed files.
wordpress:
  image: <image name:tag>
  volumes:
    - ${WEBAPP_STORAGE_HOME}/folder1:/etc/apache2/sites-enabled/
    - ${WEBAPP_STORAGE_HOME}/folder2:/var/www/html/
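Applied to the question's Apache image, a complete compose file for App Service might look like this (service name and folder names are placeholders; App Service expects the full version/services wrapper):

```yaml
version: '3.3'
services:
  web:
    image: webaddress.azurecr.io/webaddress:latest
    ports:
      - "80:80"
    volumes:
      - ${WEBAPP_STORAGE_HOME}/sites-enabled:/etc/apache2/sites-enabled/
      - ${WEBAPP_STORAGE_HOME}/html:/var/www/html/
```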
Documentation
Another option is to mount Azure Storage.
Note that mounts using Blobs are read-only and File Share are read-write.
Start by creating a storage account and a File Share.
Create the mount. I find that it's easier using the Portal. See how in the doc link below.
az storage share-rm create \
--resource-group $resourceGroupName \
--storage-account $storageAccountName \
--name $shareName \
--quota 1024 \
--enabled-protocols SMB \
--output none
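If you prefer the CLI over the Portal for the mount itself, there is also az webapp config storage-account add; a sketch reusing the same variables (the app name, mount id, key, and mount path are placeholders):

```shell
# Attach the File Share to the web app (File Share mounts are read-write).
az webapp config storage-account add \
  --resource-group $resourceGroupName \
  --name <app-name> \
  --custom-id <mount-id> \
  --storage-type AzureFiles \
  --account-name $storageAccountName \
  --share-name $shareName \
  --access-key <storage-account-key> \
  --mount-path /var/www/html
```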
Documentation
Related
I'm facing an interesting challenge: I'm trying to run kubectl in a Docker image with a proper configuration, so it can reach my cluster.
I've been able to create the image, kubecod:
FROM ubuntu:xenial
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
ENTRYPOINT ["kubectl"]
#
CMD ["version"]
When I run the image, the container functions correctly, giving me the expected answer:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
However, my aim is to create an image where kubectl connects to my node. Reading the docs, I need to add a configuration file at ~/.kube/config.
I've created another Dockerfile to build another image, kubedock, with the proper config file and the creation of the requisite directory, .kube:
FROM ubuntu:xenial
#setup a working directory
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#create the directory
RUN mkdir .kube
#Copy the config file to the .kube folder
COPY ./config .kube
ENTRYPOINT ["kubectl"]
CMD ["cluster-info dump"]
However, when I run the new image in a container, I get the following message:
me@os:~/_projects/kubedock$ docker run --name kubecont kubedock
Error: unknown command "cluster-info dump" for "kubectl"
Run 'kubectl --help' for usage.
Not sure what I'm missing.
Any hints are welcomed.
Cheers.
It's not clear to me where your K8s cluster is running.
If you run your cluster in GKE you will need to run something like:
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE
Which will create the ~/.config/gcloud tree of files in the user's home directory.
On AWS EKS you will need to set up ~/.aws/credentials and other IAM settings.
I suggest you post the details of where your K8s cluster is running and we can take it from there.
P.S. Maybe if you mount/copy a working user's home directory from the host into the container, it will work.
The answer is to use
CMD ["cluster-info","dump"]
That is, when there is a space in the kubectl command line, separate the arguments with commas in the exec-form array.
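The underlying mechanics: each element of an exec-form JSON array becomes exactly one argv entry for the entrypoint, so "cluster-info dump" arrives as a single (unknown) command name. A quick shell sketch of the difference:

```shell
# CMD ["cluster-info dump"] hands the entrypoint ONE argument:
set -- "cluster-info dump"
echo "argc=$#"   # argc=1

# CMD ["cluster-info","dump"] hands it TWO arguments:
set -- "cluster-info" "dump"
echo "argc=$#"   # argc=2
```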
I like to run the self-hosted Linux container only once per pipeline.
That means when the pipeline is done, I'd like the container to stop.
I saw that there is a parameter called "--once";
see this link, at the bottom:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
But when I start the container like this, with --once after run:
docker run --once --rm -it -e AZP_WORK=/home/working_dir -v /home/working_dir:/azp -e AZP_URL=https://dev.azure.com/xxxx -e AZP_TOKEN=nhxxxxxu76mlua -e AZP_AGENT_NAME=ios_dockeragent xxx.xxx.com:2000/azure_self_hosted_agent/agent:latest
I'm getting:
unknown flag: --once
See 'docker run --help'.
I also tried putting it in the Dockerfile, as
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh --once"]
I'm getting an error when trying to run the container:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./start.sh --once\": stat ./start.sh --once: no such file or directory": unknown
Where do I need to set this "--once" option for the dockerized agent?
It's for the agent's run command, not for docker run. From the docs:
For agents configured to run interactively, you can choose to have the
agent accept only one job. To run in this configuration:
./run.sh --once
Agents in this mode will accept only one job and then spin down
gracefully (useful for running in Docker on a service like Azure
Container Instances).
So you need to add it in the bash script you use to configure the Docker image:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
As far as I know, there's no way to pass it in from the outside; you have to go into the container and edit the start.sh file to add the --once argument to the appropriate line.
exec ./externals/node/bin/node ./bin/AgentService.js interactive --once & wait $!
cleanup
Side note: depending on your requirements, you might also take the opportunity to remove the undocumented web-server from start.sh.
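One way to script that edit at build time, instead of patching by hand, is a sed rule in the Dockerfile. A sketch; the line being recreated below is the one from start.sh quoted above, reproduced so the snippet is self-contained:

```shell
# Recreate the relevant start.sh line for illustration, then patch it:
printf '%s\n' 'exec ./externals/node/bin/node ./bin/AgentService.js interactive & wait $!' > start.sh
# Insert --once after "interactive" (& must be escaped in a sed replacement):
sed -i 's|interactive & wait|interactive --once \& wait|' start.sh
cat start.sh
```

In a Dockerfile, the sed line would go right after the COPY ./start.sh . step.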
I am trying to deploy my web app on Azure using Docker. On my local machine it works fine, but when I deploy it to Azure I can see that it is running docker run twice (why twice?):
2019-10-02 11:15:26.257 INFO - Status: Image is up to date for *******.azurecr.io/****_****:v2.11
2019-10-02 11:15:26.266 INFO - Pull Image successful, Time taken: 0 Minutes and 1 Seconds
2019-10-02 11:15:26.297 INFO - Starting container for site
2019-10-02 11:15:26.298 INFO - docker run -d -p 27757:8000 --name **********-dv_0_a70e438e -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8000 -e WEBSITE_SITE_NAME=********-dv -e WEBSITE_AUTH_ENABLED=True -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=********-dv.azurewebsites.net -e WEBSITE_INSTANCE_ID=************************* -e HTTP_LOGGING_ENABLED=1 ********.azurecr.io/*****_*****:v2.11 init.sh
2019-10-02 11:15:28.069 INFO - Starting container for site
2019-10-02 11:15:28.070 INFO - docker run -d -p 6078:8081 --name **********_middleware -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8000 -e WEBSITE_SITE_NAME=******-dv -e WEBSITE_AUTH_ENABLED=True -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=********** -e WEBSITE_INSTANCE_ID=******73***** -e HTTP_LOGGING_ENABLED=1 appsvc/middleware:1907112318 /Host.ListenUrl=http://0.0.0.0:8081 /Host.DestinationHostUrl=http://172.16.1.3:8000 /Host.UseFileLogging=true
This leads to an error later:
2019-10-02 11:15:30.410 INFO - Initiating warmup request to container drillx-stuckpipe-dv_0_a70e438e for site *********-dv
2019-10-02 11:19:38.791 ERROR - Container *******-dv_0_a70e438e for site ********-dv did not start within expected time limit. Elapsed time = 248.3813377 sec
In the app's log stream I can see that the web app has started, but because port 8000 is not accessible I get this error:
2019-10-02 11:43:55.827 INFO - Container ********-dv_0_33e8d6cc_middleware for site ********-dv initialized successfully and is ready to serve requests.
2019-10-02 11:43:55.881 ERROR - Container ********-dv_0_33e8d6cc didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
In my Dockerfile I do have EXPOSE 8000 at the end. I do not know what I am missing here...
FROM python:3.6
# ssh
ENV SSH_PASSWD "root:PWD!"
RUN apt-get update \
&& apt-get -y install \
apt-transport-https \
apt-utils \
curl \
openssh-server \
&& apt-get clean \
&& echo "$SSH_PASSWD" | chpasswd
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& ACCEPT_EULA=Y apt-get -y install \
msodbcsql17 \
unixodbc-dev \
libxmlsec1-dev \
&& apt-get clean
RUN mkdir /code
WORKDIR code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
WORKDIR /code/
RUN ls -ltr
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000
ENTRYPOINT ["init.sh"]
init.sh:
#!/bin/bash
set -e
echo "Starting SSH ..."
service ssh start
gunicorn --bind 0.0.0.0:8000 wsgi
If we look at the logs carefully, it's actually not running twice. The two docker run entries are for different images:
Your application image
Middleware - "appsvc/middleware" is the image used to handle Easy Auth/MSI/CORS on Web App Linux.
https://hajekj.net/2019/01/21/exploring-app-service-authentication-on-linux/
Now, coming to the actual issue: the second set of logs states that your application failed to start within the expected time limit.
This is 230 seconds by default on Web App for Linux and can be increased using the WEBSITES_CONTAINER_START_TIME_LIMIT application setting. The maximum value is 1800 seconds.
How does Azure verify whether the application has started? Azure pings a port and waits for an HTTP response. If it receives one, the container is considered started; otherwise docker run is executed again and the process repeats.
Which port: https://blogs.msdn.microsoft.com/waws/2017/09/08/things-you-should-know-web-apps-and-linux/#NoPing
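For a slow-starting container like this one, raising the limit is often the quickest fix; a sketch using the same az appsettings pattern (resource group and app name are placeholders; WEBSITES_PORT=8000 matches the port in your logs):

```shell
# Give the container up to 30 minutes to answer the warmup ping.
az webapp config appsettings set \
  --resource-group <group-name> \
  --name <app-name> \
  --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800 WEBSITES_PORT=8000
```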
I am trying to deploy a Docker image on Azure. I am able to create the Docker image successfully, and deploy it successfully, but I am not able to see anything at the URL I specified when creating the container deployment. My app is a Python Flask app which also uses Dash.
I followed this Azure tutorial from the documentation. The example app works, but my app does not. I don't know how to debug this or where I am going wrong.
Dockerfile
FROM ubuntu:18.04
COPY . /app
WORKDIR /app
RUN apt update && \
apt install -y make curl gcc g++ gfortran git patch wget unixodbc-dev vim-tiny build-essential \
libssl-dev zlib1g-dev libncurses5-dev libncursesw5-dev libreadline-dev libsqlite3-dev \
libgdbm-dev libdb5.3-dev libbz2-dev libexpat1-dev liblzma-dev libffi-dev && \
apt install -y python3 python3-dev python3-pip python3-venv && \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt update && \
ACCEPT_EULA=Y apt-get -y install msodbcsql17 && \
python3 -m venv spo_venv && \
. spo_venv/bin/activate && \
pip3 install --upgrade pip && \
pip3 install wheel && \
pip3 install -r requirements.txt && \
cd / && \
wget http://download.redis.io/releases/redis-5.0.5.tar.gz && \
tar xzf redis-5.0.5.tar.gz && \
cd redis-5.0.5 && \
make && \
# make test && \
cd / && \
rm redis-5.0.5.tar.gz && \
cd / && \
wget https://www.coin-or.org/download/source/Ipopt/Ipopt-3.12.13.tgz && \
tar xvzf Ipopt-3.12.13.tgz && \
cd Ipopt-3.12.13/ThirdParty/Blas/ && \
./get.Blas && \
cd ../Lapack && \
./get.Lapack && \
cd ../Mumps && \
./get.Mumps && \
cd ../Metis && \
./get.Metis && \
cd ../../ && \
mkdir build && \
cd build && \
../configure && \
make -j 4 && \
make install && \
cd / && \
rm Ipopt-3.12.13.tgz && \
echo "export PATH=\"$PATH:/redis-5.0.5/src/\"" >> ~/.bashrc && \
. ~/.bashrc
CMD ["./commands.sh"]
commands.sh
#!/bin/sh
. spo_venv/bin/activate
nohup redis-server > redislogs.log 2>&1 &
nohup celery worker -A task.celery -l info -P eventlet > celerylogs.log 2>&1 &
python app.py
Azure commands
sudo az acr login --name mynameregistry
sudo az acr show --name mynameregistry --query loginServer --output table
sudo docker tag spo_third_trial:v1 mynameregistry.azurecr.io/spo_third_trial:v1
sudo docker push mynameregistry.azurecr.io/spo_third_trial:v1
sudo az acr repository list --name mynameregistry --output table
sudo az acr repository show-tags --name mynameregistry --repository spo_third_trial --output table
sudo az acr show --name mynameregistry --query loginServer
sudo az container create --resource-group myResourceGroup --name spo-third-trial --image mynameregistry.azurecr.io/spo_third_trial:v1 --cpu 1 --memory 1 --registry-login-server mynameregistry.azurecr.io --registry-username mynameregistry --registry-password randomPassword --dns-name-label spo-third-trial --ports 80 8050
But when I go to http://spo-third-trial.eastus.azurecontainer.io I get this:
This site can’t be reached spo-third-trial.eastus.azurecontainer.io took too long to respond.
Search Google for spo third trial eastus azure container io
ERR_CONNECTION_TIMED_OUT
When I access the logs with sudo az container logs --resource-group myResourceGroup --name spo-third-trial I get this:
Running on http://127.0.0.1:8050/
Debugger PIN: 950-132-306
* Serving Flask app "server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
Running on http://127.0.0.1:8050/
Debugger PIN: 871-957-278
My curl output:
curl -v http://spo-third-trial.eastus.azurecontainer.io
* Rebuilt URL to: http://spo-third-trial.eastus.azurecontainer.io/
* Trying <ip-address>...
* TCP_NODELAY set
* connect to <ip-address> port 80 failed: Connection timed out
* Failed to connect to spo-third-trial.eastus.azurecontainer.io port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to spo-third-trial.eastus.azurecontainer.io port 80: Connection timed out
I don't understand whether I am doing something wrong with the Docker image, with Azure, or both. I would appreciate any help I can get.
NOTE: I am working on a virtual PC (an AWS client) and access Azure from inside it.
As discussed, a couple of points to your question:
I think you meant to map the internal port 8050 to the external port 80. But this is not possible currently (Aug-2019) in Azure Container Instances. There is an open item on Uservoice for this. --ports 80 8050 means that you expose both ports, not that you map one to the other.
The actual problem, as we found out, was that the python flask app was listening only on 127.0.0.1, not on 0.0.0.0. So it was not accepting requests from the outside.
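The usual one-line fix for that is to bind the dev server to all interfaces. A sketch, assuming the standard app.run entry point at the end of app.py (names may differ in the actual project):

```python
# app.py: host="0.0.0.0" listens on all interfaces, so requests from
# outside the container are accepted; the default 127.0.0.1 is loopback-only.
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8050)
```

With Dash, the equivalent would be app.run_server(host="0.0.0.0", port=8050).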
I have an ML container image which is accessible at http://localhost:8501/v1/models/my_model on my local system.
After I push it to an Azure Container Registry, I create an Azure Container Instance (with port 8501 set). Its deployment is successful, and an IP address is generated.
My issue is that when I try to access the IP address, it does not respond.
I have tried:
http://40.12.10.13/
http://40.12.10.13:8501
http://40.12.10.13:8501/v1/models/my_model
It's a TensorFlow Serving image: https://www.tensorflow.org/tfx/serving/docker
Below is the Dockerfile:
ARG TF_SERVING_VERSION=latest
ARG TF_SERVING_BUILD_IMAGE=tensorflow/serving:${TF_SERVING_VERSION}-devel
FROM ${TF_SERVING_BUILD_IMAGE} as build_image
FROM ubuntu:18.04
ARG TF_SERVING_VERSION_GIT_BRANCH=master
ARG TF_SERVING_VERSION_GIT_COMMIT=head
LABEL maintainer="gvasudevan@google.com"
LABEL tensorflow_serving_github_branchtag=${TF_SERVING_VERSION_GIT_BRANCH}
LABEL tensorflow_serving_github_commit=${TF_SERVING_VERSION_GIT_COMMIT}
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install TF Serving pkg
COPY --from=build_image /usr/local/bin/tensorflow_model_server /usr/bin/tensorflow_model_server
# Expose ports
# gRPC
EXPOSE 8500
# REST
EXPOSE 8501
# Set where models should be stored in the container
ENV MODEL_BASE_PATH=/models
RUN mkdir -p ${MODEL_BASE_PATH}
# The only required piece is the model name in order to differentiate endpoints
ENV MODEL_NAME=model
# Create a script that runs the model server so we can use environment variables
# while also passing in arguments from the docker command line
RUN echo '#!/bin/bash \n\n\
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
"$@"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh
ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]