Inline nginx conf in docker-compose.yml file: possible? - Azure

I want to set up a reverse proxy with authentication for my web app. Here's my Compose file:
version: '3'
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    container_name: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/.htpasswd:/etc/nginx/.htpasswd
    ports:
      - 80:80
  myapp:
    image: registry.azurecr.io/myapp:latest
    container_name: myapp
    expose:
      - 8080
As you can see, I rely on two files for the nginx configuration. The problem is that I want to deploy this application to Azure App Service, but Azure does not allow specifying external configuration files (as far as I know).
So, is there a way to specify a couple of username/password pairs in this same Compose file?

For your requirement, you can use persistent storage for the web app or use path mapping, and then put the files there. If you just want to embed the files in the Compose file itself, that appears to be impossible.
For persistent storage, mount the persistent volume to the path inside the container in the Compose file, then enable persistent storage by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings. For more details, see Use persistent storage in Docker Compose.
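For example, a minimal sketch of the persistent-storage approach applied to the nginx service from the question (the nginx subfolder under the persistent /home share is just an illustration; upload nginx.conf and .htpasswd there beforehand, e.g. over FTP):
version: '3'
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    volumes:
      # ${WEBAPP_STORAGE_HOME} resolves to the web app's persistent /home
      # once WEBSITES_ENABLE_APP_SERVICE_STORAGE is set to true
      - ${WEBAPP_STORAGE_HOME}/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ${WEBAPP_STORAGE_HOME}/nginx/.htpasswd:/etc/nginx/.htpasswd
    ports:
      - 80:80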
For path mapping, you need to create a storage account and use blob or file share storage. See the document on containerized apps. Set the volume like this:
wordpress:
  image: wordpress:latest
  volumes:
    - <custom-id>:<path_in_container>
For more details about steps, see Serve content from Azure Storage in App Service on Linux.
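A sketch of what that could look like for the question's nginx service, assuming a custom storage mount with the ID nginx-config was created under the web app's Path mappings (the ID and container path are illustrative; the mounted share would hold nginx.conf and .htpasswd, and nginx would be pointed at them under that path):
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    volumes:
      # nginx-config is the <custom-id> of the portal path mapping,
      # backed by the Azure Storage file share holding the config files
      - nginx-config:/etc/nginx/mounted-config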

Related

Mounting individual files from an Azure file share into a container

I'm currently attempting to use Azure's docker compose integration to deploy a visualization tool. The default compose file can be found here. Below is a snippet of the file:
services:
  # other services omitted
  cbioportal-database:
    restart: unless-stopped
    image: mysql:5.7
    container_name: cbioportal-database-container
    environment:
      MYSQL_DATABASE: cbioportal
      MYSQL_USER: cbio_user
      MYSQL_PASSWORD: somepassword
      MYSQL_ROOT_PASSWORD: somepassword
    volumes:
      - ./data/cgds.sql:/docker-entrypoint-initdb.d/cgds.sql:ro
      - ./data/seed.sql.gz:/docker-entrypoint-initdb.d/seed.sql.gz:ro
      - cbioportal_mysql_data:/var/lib/mysql
One of the services it uses is based on the official mysql image, which lets the developer put SQL files into the /docker-entrypoint-initdb.d folder to be executed the first time the container starts. In this case, these are SQL scripts used to create a database schema and seed it with some default data.
Below is a snippet I took from Docker's official documentation in my attempt to mount these files from a file share into the cbioportal-database container:
services:
  # other services omitted
  cbioportal-database:
    volumes:
      - cbioportal_data/mysql:/var/lib/mysql
      - cbioportal_data/data/cgds.sql:/docker-entrypoint-initdb.d/cgds.sql:ro
      - cbioportal_data/data/seed.sql.gz:/docker-entrypoint-initdb.d/seed.sql.gz:ro
volumes:
  cbioportal_data:
    driver: azure_file
    driver_opts:
      share_name: cbioportal-file-share
      storage_account_name: cbioportalstorage
Obviously, that doesn't work. Is there a way to mount specific files from an Azure file share into the cbioportal-database container so it can create the database and its schema, and seed it?
I tried to reproduce this, but I was unable to mount a single file or folder from an Azure file share into an Azure Container Instance. The file share is treated as a whole once you mount it as a volume in the container or host.
One more thing to note: whatever files you update in the file share will be updated in the container as well after you mount the file share as a volume.
That may be the reason we cannot mount a specific file or folder as a volume in the container.
There are some limitations to this as well:
• You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
• You can only mount the whole share and not the subfolders within it.
• Azure file share volume mounts require the Linux container to run as root.
• Azure file share volume mounts are limited to CIFS support.
• The share cannot be mounted as read-only.
• You can mount multiple volumes, but not with the Azure CLI; you would have to use ARM templates instead.
Reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
https://www.c-sharpcorner.com/article/mounting-azure-file-share-as-volumes-in-azure-containers-step-by-step-demo/
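Given the whole-share limitation above, one workaround is to put cgds.sql and seed.sql.gz at the root of the file share and mount the entire share onto the init directory. A sketch, reusing the share and account names from the question:
services:
  cbioportal-database:
    image: mysql:5.7
    volumes:
      # subfolders of a share cannot be mounted, so mount the whole
      # share; mysql executes every script found in this directory
      # the first time the container starts
      - cbioportal_data:/docker-entrypoint-initdb.d
volumes:
  cbioportal_data:
    driver: azure_file
    driver_opts:
      share_name: cbioportal-file-share
      storage_account_name: cbioportalstorage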

Host path not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section

My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - ./hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I just have an index.php with a hello-world print in it. (Do I also need a Dockerfile here?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What does docker compose up have to do with Azure? I don't want to use Azure File Share at the moment, and I never mentioned or configured anything with Azure. I logged out of Azure with az logout, but I still get this strange error on my MacBook.
I've encountered the same issue as you, but in my case I was trying to use an init-mongo.js script for a MongoDB in an ACI. I assume you were working in an Azure context at some point; I can't speak to the logout issue, but I can speak to volumes on Azure.
If you are trying to use volumes in Azure, at least in my experience (regardless of whether you want to use a file share or not), you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80
volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you wanted to use in the file share that is driving the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure but if you do end up using volumes with Azure this is the way to go.
Let me know if this helps!
I was testing a Docker Compose deployment on Azure and faced the same problem as you.
Then I ran docker images, and that gave me the clue. It said: image command not available in current context, try to use default.
So I found the command "docker context use default", and it worked!
So Azure somehow changed the Docker context, and you need to change it back.
https://docs.docker.com/engine/context/working-with-contexts/

Deploying via Docker Compose to Azure App Service with multiple containers from different sources

I have a docker-compose.yml file which is created from a build step in Azure Devops. The build step works well and I can see how the docker-compose.yml file is produced. That makes sense to me.
However, it is looking for a normal Docker Hub image to run one of the services, while the other service is one I've created and am hosting in my Azure Container Registry.
The docker compose file looks like this:
networks:
  my-network:
    external: true
    name: my-network
services:
  clamav:
    image: mkodockx/docker-clamav@sha256:b90929eebf08b6c3c0e2104f3f6d558549612611f0be82c2c9b107f01c62a759
    networks:
      my-network: {}
    ports:
      - published: 3310
        target: 3310
  super-duper-service:
    build:
      context: .
      dockerfile: super-duper-service/Dockerfile
    image: xxxxxx.azurecr.io/superduperservice@sha256:ec3dd010ea02025c23b336dc7abeee17725a3b219e303d73768c2145de710432
    networks:
      my-network: {}
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
version: '3.4'
When I put this into an Azure App Service using the Docker Compose tab, I have to select an image source, either Azure Container Registry or Docker Hub; I'm guessing the former, because that is the registry I am connected to.
When I start the service, my logs say:
2020-12-04T14:11:38.175Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.531Z INFO - Starting multi-container app..
2020-12-04T14:23:28.531Z ERROR - Exception in multi-container config parsing: Exception: System.NullReferenceException, Msg: Object reference not set to an instance of an object.
2020-12-04T14:23:28.532Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.534Z INFO - Stopping site ingeniuus-antivirus because it failed during startup.
It's not very helpful, and I don't think there's anything wrong with that docker-compose.yml file.
If I try to deploy ONLY the service from the Azure Container registry, it deploys, but doesn't deploy the other service.
Does anyone know why the service doesn't start?
Well, there are two problems I see in your docker-compose file for the Azure Web App.
One problem is that Azure Web App only supports configuring one image repository in the docker-compose file. That means you can configure either Docker Hub or ACR, but not both.
Another problem is that Azure Web App does not support the build option in the docker-compose file. See the details here.
Given all of the above, I suggest you create all your custom images, push them to the ACR, and use the ACR only.
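As a sketch, the Compose file rewritten that way might look like this, assuming the clamav image has been imported into the same ACR (for instance with az acr import) under a hypothetical repository name and tag:
networks:
  my-network:
    external: true
    name: my-network
services:
  clamav:
    # imported copy of mkodockx/docker-clamav, now pulled from ACR
    image: xxxxxx.azurecr.io/docker-clamav:latest
    networks:
      my-network: {}
    ports:
      - published: 3310
        target: 3310
  super-duper-service:
    # no build section: App Service cannot build images itself
    image: xxxxxx.azurecr.io/superduperservice:latest
    networks:
      my-network: {}
    ports:
      - published: 80
        target: 80
version: '3.4'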

Mounting from persistent storage into azure multi container app

I'm trying to build a multi-container app with Azure. I'm struggling with accessing persistent storage. In my docker-compose file I want to add the config file for my rabbitmq container. I've mounted a file share to the directory "/home/fileshare", which contains the def.json. On the cloud it doesn't seem to create the volume, as on startup rabbitmq can't find the file. If I do this locally and just save the file somewhere, it works.
Docker Compose File:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      - /home/fileshare/def.json:/opt/rabbitmq-conf/def.json
    expose:
      - 5672
      - 15672
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/opt/rabbitmq-conf/def.json"
    networks:
      - cloudnet
networks:
  cloudnet:
    driver: bridge
You need to use the WEBAPP_STORAGE_HOME environment variable, which is mapped to persistent storage at /home:
${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json
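In context, the rabbitmq service would look something like this (a sketch; it assumes def.json was uploaded beforehand to a fileshare folder under the web app's persistent /home, e.g. over FTP, and that WEBSITES_ENABLE_APP_SERVICE_STORAGE is set to true):
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      # resolves to /home/fileshare/def.json on the persistent share
      - ${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json
    environment:
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/opt/rabbitmq-conf/def.json"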
From my understanding, you want to mount the Azure file share to the rabbitmq container and upload the def.json file to the file share so that you can access def.json inside the container.
Follow the steps here to mount the file share to the container. Note that this only supports mounting the file share itself, not a single file.
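Since only whole shares can be mounted, one sketch of a workaround is to mount the share at the config directory and point RabbitMQ at the file inside it (rabbit-conf is a hypothetical mount name; def.json sits at the share root):
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      # the whole share is mounted, so def.json appears inside this directory
      - rabbit-conf:/opt/rabbitmq-conf
    environment:
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/opt/rabbitmq-conf/def.json"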
The solution to this problem seems to be to use FTP to access the web app and save the definitions file. Docker Compose support is in preview (and has been since 2018), and a lot of the options are actually not supported. I tried mounting the storage to a single-container app and used SSH to connect to it, and the file was exactly where one would expect it to be. With a multi-container app this doesn't work.
I feel like the docker-compose feature is not yet fully supported.

Read/Write docker volume from nodejs application

I have 2 applications:
1. A backend Node.js application
2. A frontend Node.js application
Both are running in Docker containers.
My goal is to upload images from the backend application and access them from the frontend application. As far as I can tell, the only way to share image files between containers is volumes.
So I created a volume "assets" in the docker-compose file. But how can I write data to the volume folder from the backend app, and how can I access the volume folder from the frontend application?
Expected behaviour
// on backend app
fs.writeFileSync("{volume_magic_path}/sample.txt", "Hey there!");
// on frontend app
fs.readFileSync("{volume_magic_path}/sample.txt", 'utf8');
docker-compose.yml
version: "3.7"
services:
  express:
    build: ./
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - .:/usr/src/app
volumes:
  assets:
So basically, what should I write for "volume_magic_path" to access the volume folder?
It looks like an XY problem. A front-end app never has to access physical files directly. Front-end code is made to run in browsers, and browsers don't have file-system APIs like that. The interface for accessing files is HTTP.
What I think you're looking for is to upload files from the front end and make them available as HTTP resources. In order to do that, you'll have to create endpoints for file upload and resource access. I would recommend express.static() if you're using Express, or whatever the equivalent is for the HTTP library you're using, for serving your files.
Please find the example below. In the backend or the frontend service, you have to write the path inside your container in place of location_in_the_container:
services:
  nginx:
    build: ./nginx/
    ports:
      - 80:80
    links:
      - php
    volumes:
      - app-volume:<location_in_the_container>
  express:
    build: ./
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - app-volume:<location_in_the_container>
volumes:
  app-volume:
Please replace <location_in_the_container> with the actual path you want inside each container.
Please check Volume configuration reference
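Applied to the original compose file, a minimal sketch could look like the following, with /usr/src/app/assets as a hypothetical mount path (the frontend service shown here is also hypothetical); that mount path is what replaces {volume_magic_path} in both apps:
version: "3.7"
services:
  express:
    build: ./
    restart: always
    ports:
      - "5000:5000"
    volumes:
      # both services mount the same named volume, so a file written by
      # the backend under /usr/src/app/assets is visible to the frontend
      - assets:/usr/src/app/assets
  frontend:
    build: ./frontend
    volumes:
      - assets:/usr/src/app/assets
volumes:
  assets: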
