Azure App Service "Bind mount must start with ${WEBAPP_STORAGE_HOME}"

I tried running this inside a Docker Compose file:
  image: nginx:latest
  ports:
    - 80:80
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf
However, I'm getting "Bind mount must start with ${WEBAPP_STORAGE_HOME}." When I prefix the path with ${WEBAPP_STORAGE_HOME}, I instead receive: "invalid volume specification".
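For reference, the shape that eventually worked in similar setups (see the answers further down this page) replaces the relative bind path with a named volume backed by a storage path mapping configured in the App Service portal. A hedged sketch; the volume name is illustrative:

```yaml
version: "3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      # 'config' must match the name of a storage path mapping
      # configured in the App Service portal
      - config:/etc/nginx
```

Note that mounting a directory over /etc/nginx replaces its contents, so the mapped share needs a complete nginx.conf.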

Related

Running into issues while deploying multi container instances on Azure using Docker compose

docker-compose up builds and brings up the services locally, but when doing the same on Azure Container Instances I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure
sending request: StatusCode=400 -- Original Error:
Code="InaccessibleImage" Message="The image
'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy'
is not accessible. Please check the image and registry credential."
Also, what is the purpose of the docker/aci-hostnames-sidecar image?
The ACI deployment was working fine, and now it suddenly doesn't work anymore.
The docker-compose.yml contents are below:
version: '3.7'
services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"
  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There may have been an issue on Azure's end, as I was later able to deploy the containers as I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally and then push them to ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, make sure you are logged in to the ACR with the right credentials and that the image name and tag are exactly right.
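As a sketch, the push workflow described above might look like this (the registry name `oasisdatacr` is taken from the question and will differ in your setup):

```shell
az acr login --name oasisdatacr   # or: docker login oasisdatacr.azurecr.io
docker-compose build              # build the images locally
docker-compose push               # push each service's tagged image to ACR
```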

How do I get my app in a docker instance to talk to my database in another docker instance inside the same network?

OK, I give up! I've spent far too much time on this.
I want my app inside a Docker container to talk to my Postgres instance, which is inside another container.
docker-compose.yml
version: "3.8"
services:
  foodbudget-db:
    container_name: foodbudget-db
    image: postgres:12.4
    restart: always
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: foodbudget
      PGDATA: /var/lib/postgresql/data
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - 5433:5432
  web:
    image: node:14.10.1
    env_file:
      - .env
    depends_on:
      - foodbudget-db
    ports:
      - 8080:8080
    build:
      context: .
      dockerfile: Dockerfile
Dockerfile
FROM node:14.10.1
WORKDIR /src/app
ADD https://github.com/palfrey/wait-for-db/releases/download/v1.0.0/wait-for-db-linux-x86 /src/app/wait-for-db
RUN chmod +x /src/app/wait-for-db
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
EXPOSE 8080
But I keep getting the error below when I build the Dockerfile, even though the database is up and running according to docker ps. I tried connecting to the Postgres database from my host machine, and it connected successfully...
Temporary error (pausing for 3 seconds): PostgresError { error: Error(Io(Custom { kind: Other, error: "failed to lookup address information: Name does not resolve" })) }
Has anyone gotten an app to talk to a database in another Docker container before?
This line is the issue:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
Within a network you must use the container's internal port (5432) instead of the published one:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5432 -t 1000000
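The distinction between the in-network port and the host-published port can be sketched in Python (a hypothetical helper, not part of the question's code):

```python
def pg_url(user: str, password: str, host: str, port: int, db: str) -> str:
    """Build a Postgres connection URL."""
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

# From another container on the same Compose network: use the service
# name and the container's internal port (5432).
inside = pg_url("user", "pass", "foodbudget-db", 5432, "foodbudget")

# From the host machine: use localhost and the published port (5433).
outside = pg_url("user", "pass", "localhost", 5433, "foodbudget")
```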

Docker volume binds file as directory on remote linux server

I have Docker installed on Linux on a remote server, and I use SSH to connect to it from my host machine running Windows 10. To deploy my app I use Docker Compose v3, but I have a serious problem: when I try to mount a volume with a file from my host machine, the file is converted to a directory in my container.
Here's my docker-compose.yml:
version: "3.8"
services:
  zeebe:
    container_name: zeebe
    image: camunda/zeebe
    environment:
      - ZEEBE_LOG_LEVEL=debug
    ports:
      - "26500:26500"
      - "9600:9600"
      - "5701:5701"
    volumes:
      - zeebe_data:/usr/local/zeebe/data
      - type: bind
        source: ./ZeebeServer/lib/application.yml
        target: /usr/local/zeebe/config/application.yaml
        read_only: true
    depends_on:
      - elasticsearch
    networks:
      - backend
  operate:
    container_name: operate
    image: camunda/operate
    ports:
      - "8080:8080"
    depends_on:
      - zeebe
      - elasticsearch
    volumes:
      - operate_data:/usr/local/operate
      - type: bind
        source: C:/Users/Aset/IdeaProjects/ABC-Store/ZeebeServer/operate/application.yml
        target: /usr/local/operate/config/application.yml
        read_only: true
    networks:
      - backend
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
      - cluster.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - zeebe_elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - backend
And this is my error after running docker-compose up -d:
ERROR: for zeebe Cannot start service zeebe: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/User/IdeaProjects/ABC-Store/ZeebeServer/lib/application.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/fec3c1f3ad8748e2bf3aa5fdf30558434c2474ecac8b7fdbad2fdbb27df24415/merged\\\" at \\\"/var/lib/docker/overlay2/fec3c1f3ad8748e2bf3aa5fdf30558434c2474ecac8b7fdbad2fdbb27df24415/merged/usr/local/zeebe/config/application.yaml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I understand that you are trying to mount a local folder from your Windows host (C:/Users/Aset/IdeaProjects/...) into a Docker container.
But this is not possible (that easily), since Docker can only mount folders local to the Docker host, i.e. the remote server you're connecting to via SSH.
So in order to mount your local Windows folder into the container, you'd first have to make that folder available on the Docker host/remote server somehow (e.g. via a Samba share), or copy its contents to the remote server and point your volume source at the folder's path on the Docker host.
P.S. In case you're wondering why this doesn't work while docker build with files from your Windows host does: docker build uploads the build context, i.e. the current directory, to the Docker host, making its contents available on the server.
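One concrete way to apply the copy approach (the remote host name and destination path are examples, not from the post):

```shell
# copy the config from the Windows host to the remote Docker host
scp ./ZeebeServer/lib/application.yml user@remote-server:/home/user/zeebe/application.yml

# then, in docker-compose.yml on the remote side, the bind source
# must be the path on the Docker host:
#   - type: bind
#     source: /home/user/zeebe/application.yml
#     target: /usr/local/zeebe/config/application.yaml
```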

How to mount docker volume in Azure Web App for containers?

I'm trying to run KrakenD image in Azure App Service.
KrakenD requires a JSON config file, krakend.json, to be placed in /etc/krakend/ (the KrakenD image is based on Alpine Linux).
I created Web App for containers with the following docker-compose file:
version: "3"
services:
  krakend:
    image: devopsfaith/krakend:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/krakend:/etc/krakend
    ports:
      - "8080:8080"
    restart: always
I added a storage account with a blob container, where I uploaded a sample krakend.json file.
In the app configuration I added a path mapping like this:
But it looks like the volume was not mounted correctly:
2019-11-15 12:46:29.368 ERROR - Container create failed for
krakend_krakend_0_3032a936 with System.AggregateException, One or
more errors occurred. (Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"} ) (Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"} ) InnerException:
Docker.DotNet.DockerApiException, Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"}
2019-11-15 12:46:29.369 ERROR - multi-container unit was not started
successfully
Additional questions
What does Mount path mean in Storage mounting? I put the value /krakend there.
The volume definition starts with ${WEBAPP_STORAGE_HOME}; in the docs it is specified as
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
so I did the same by analogy and tried all three possible paths:
${WEBAPP_STORAGE_HOME}/site/wwwroot/krakend
${WEBAPP_STORAGE_HOME}/site/krakend
${WEBAPP_STORAGE_HOME}/krakend
but no luck - still getting the error:
ERROR parsing the configuration file: '/etc/krakend/krakend.json'
(open): no such file or directory
I finally resolved this with the following docker-compose file:
version: "3"
services:
  krakend:
    image: devopsfaith/krakend:latest
    volumes:
      - volume1:/etc/krakend
    environment:
      WEBSITES_ENABLE_APP_SERVICE_STORAGE: TRUE
    ports:
      - "8080:8080"
    restart: always
where volume1 is a blob storage container mounted as follows:
This did not work for me; I was still getting "Bind mount must start with ${WEBAPP_STORAGE_HOME}."
This is what worked instead. docker-compose.yml:
version: "3"
services:
  backend:
    image: xxxx
    ports:
      - 80:80
    volumes:
      - vol1:/var/www/html/public
volumes:
  vol1:
    driver: local
Volume definition:
Name: vol1
Config: basic
Storage account: ...
Storage container: ...
Mount path: /var/www/html/public

Docker-compose access container with service name in python file?

I have two containers. How can I access one container from a Python file in the other, which runs a Django web server?
docker-compose.yml file:
version: '2'
services:
  web:
    build: ./web/
    command: python3 manage.py runserver 0.0.0.0:8001
    volumes:
      - ./web:/code
    ports:
      - "8001:80"
    networks:
      - dock_net
    container_name: con_web
    depends_on:
      - "api"
    links:
      - api
  api:
    build: ./api/
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./api:/code
    ports:
      - "8000:80"
    networks:
      - dock_net
    container_name: con_api
networks:
  dock_net:
    driver: bridge
Python File:
I can get mail_string from the form:
mail = request.POST.get('mail_string', '')
url = 'http://con_api:8000/api/'+mail+'/?format=json'
resp = requests.get(url=url)
return HttpResponse(resp.text)
I can request the api container and get the value this way, but I don't know its IP address.
Updated Answer
In your Python file, you can use url = 'http://api:8000/'+mail+'/?format=json'. The service name api resolves inside the Compose network, and runserver is listening on port 8000 in that container, so this will let you reach the URL you are trying to request.
Original Answer
If the two containers are independent, you can create a network; once both containers are part of the same network, you can access each by its hostname, which you can set with --hostname=HOSTNAME.
An easier way is to use a docker-compose file, which creates a network by default; all services declared in the file are part of that network, and you can reach other services by their service name, e.g. http://container1/, with a docker-compose file like this:
version: '3'
services:
  container1:
    image: some_image
  container2:
    image: another_or_same_image
Now enter container2:
docker exec -it container2 /bin/bash
and run ping container1
You will receive packets, confirming that the other container is reachable by its service name.
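Building the request URL from the service name can be sketched as follows (the helper function is illustrative, not part of the original code):

```python
# 'api' is the Compose service name; Compose's embedded DNS resolves it
# inside the shared network. runserver listens on port 8000 in that container.
def api_url(mail: str, host: str = "api", port: int = 8000) -> str:
    return f"http://{host}:{port}/api/{mail}/?format=json"

# Inside the Django view, roughly:
#   resp = requests.get(api_url(mail))
#   return HttpResponse(resp.text)
```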
