Azure App Service - resolving environment variables in docker-compose.yml

I have a multi-container application that I would like to deploy and run within an Azure App Service. The compose file I have, docker-compose.prod.yml, contains environment variables (as shown in the code below).
When I build the compose file locally, these variables are resolved during the build process by referencing an environment file (.env) located in the same directory:
docker-compose -f docker-compose.prod.yml build
docker-compose -f docker-compose.prod.yml push
I know that it is possible to build a DevOps pipeline (as described here), which will insert environment variables during the build process, similar to how I build it locally.
However, I wanted to know if it is possible to either
1.) configure environment variables via the Azure Portal web interface (as you can do with application settings), which the docker compose file can then reference to resolve these variables on startup
or
2.) somehow upload the .env file I use locally, which can then be used to resolve the variables on create or startup.
Contents of docker-compose.prod.yml
--------------------------------------------
version: '3.7'
services:
  app1:
    build: ./app1
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    image: "${AZ_CONTAINER_REGISTRY}/${APP1_IMAGE}:${APP1_VERSION}"
  app2:
    build: ./app2
    command: python app.py
    ports:
      - 8081:8080
    image: "${AZ_CONTAINER_REGISTRY}/${APP2_IMAGE}:${APP2_VERSION}"
    restart: always
Contents of .env file
--------------------------------------------
AZ_CONTAINER_REGISTRY=sample.azurecr.io
APP1_IMAGE=app1test
APP1_VERSION=1.0
APP2_IMAGE=app2test
APP2_VERSION=2.0
Any feedback is appreciated, thanks

1.) configure environment variables via the Azure Portal web interface (as you can do with application settings), which the docker compose
file can then reference to resolve these variables on startup
As far as I know, Azure Web App does not currently support using variables in place of the real image name. Only certain variables that Azure provides can be used in the docker-compose file, such as the variable WEBAPP_STORAGE_HOME for persistent storage.
2.) somehow upload the .env file I use locally which can then be used to resolve the variables on create or startup
And it also does not support the .env file; all the docker commands are executed by Azure on the backend. The Azure Web App also does not support the build option in the docker-compose file, see the supported options.
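Given those limitations, one workaround is to resolve the variables on your own machine and deploy the rendered file instead. A sketch, assuming docker-compose 1.25+ for the --env-file flag:
# Substitute ${...} values from .env and emit a fully-resolved compose file
docker-compose --env-file .env -f docker-compose.prod.yml config > docker-compose.resolved.yml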

I was struggling with the exact same question. After a lot of searching I found this post describing a nice little hack.
Add this to your init/startup script:
# Get environment variables to show up in SSH session
eval $(printenv | awk -F= '{print "export " "\""$1"\"""=""\""$2"\"" }' >> /etc/profile)
I can now run my management commands via SSH.
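For reference, a minimal entrypoint applying the hack before starting the app could look like this - a sketch only: the ssh service line assumes you have configured App Service's custom-container SSH, and the runserver command is just the one from the question above:
#!/bin/sh
# Make App Service application settings visible to SSH login shells
eval $(printenv | awk -F= '{print "export " "\""$1"\"""=""\""$2"\"" }' >> /etc/profile)
# Start SSH (if configured for App Service), then hand over to the app
service ssh start
exec python3 manage.py runserver 0.0.0.0:8000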

Related

Host path not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section

My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - ./hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I have just an index.php with a hello-world print in it. (Do I need a Dockerfile here as well?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What has "docker compose" up to do with Azure? - I don't want to use Azure File Share at this moment, and I never mentioned or configured anything with Azure. I logged out of azure with: $az logout but got still this strange error on my macbook.
I've encountered the same issue, but in my case I was trying to use an init-mongo.js script for a MongoDB in an ACI. I assume you were working in an Azure context at some point; I can't speak to the logout issue, but I can speak to volumes on Azure.
If you are trying to use volumes in Azure, at least in my experience (regardless of whether you want to use a file share or not), you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80
volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you wanted to use in the file share that is driving the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure, but if you do end up using volumes with Azure, this is the way to go.
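If it helps, you can seed the share from your machine with the Azure CLI before bringing the app up (a sketch; the account and share names are the same placeholders as in the compose file above):
az storage file upload \
  --account-name <name of storage account> \
  --share-name <name of share> \
  --source ./hi/index.php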
Let me know if this helps!
I was testing deploying docker compose on Azure and I faced the same problem as yours.
Then I ran docker images, and that gave me the clue:
it said: image command not available in current context, try to use default
So I found the command docker context use default
and it worked!
So Azure somehow changed the docker context, and you need to change it back:
https://docs.docker.com/engine/context/working-with-contexts/
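For anyone hitting this, the two commands involved are standard Docker CLI:
docker context ls            # list contexts; the active one is marked with *
docker context use default   # switch back to the local Docker engine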

How to store an environment variable in a Dockerfile or docker-compose.yaml whose value is fetched from the OS on which it gets deployed?

I want to use some env variables in my app, which runs in a Docker container. How can I get those environment variables in my code?
If I use a Dockerfile or docker-compose YAML, how can I export it so that the values are read from the OS it eventually runs on, and how can they be read in the code?
So for example, if I run my docker on AWS instances, I need the AWS keys from the instance to be used in my containerized app.
For docker-compose:
Create a .env file in the same directory as docker-compose.yml.
For example, for a .env file with contents:
DOCKER_IMAGE_NAME=docker_image_here
POSTGRES_PASSWORD=POSTGRES_PASSWORD_HERE
You may then access the variables inside the docker-compose.yml file:
services:
  web:
    image: ${DOCKER_IMAGE_NAME}
    environment:
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
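Inside the container, the application reads these like any other environment variable; a minimal Python sketch using the names from the example above:
import os

# POSTGRES_PASSWORD is injected via the environment: block above
postgres_password = os.environ["POSTGRES_PASSWORD"]

# For host-provided values such as AWS keys, prefer .get() so a missing
# variable yields None instead of raising KeyError
aws_key = os.environ.get("AWS_ACCESS_KEY_ID")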

NodeJS: How to include environment variable from CircleCI into the application

In my front end application, I'm storing sensitive information in the environment and using them as following:
const client_secret = process.env.CLIENT_SECRET;
On local development, I use the dotenv package to load the values from a .env file:
CLIENT_SECRET=XXXXX
The .env file is not committed.
I use CircleCI for my deployment process, and saved the CLIENT_SECRET value in CircleCI environment variables, but how can I pass into the application?
This is my CircleCI config.yml:
- deploy:
    name: Deploy
    command: |
      ENVIRONMENT=${ENVIRONMENT:=test}
      VERSION=`date "+%Y-%m-%dt%H%M"`
      if [ "${ENVIRONMENT}" = "production" ]; then
        APP_FILE=app-prod.yaml
      else
        APP_FILE=app.yaml
      fi
      gcloud app deploy ${APP_FILE} --quiet --version ${VERSION}
I can do this in app.yaml:
env_variables:
  NODE_ENV: 'production'
  CLIENT_SECRET: XXXXX
But I don't want to include the sensitive information into the .yaml file and commit them. Does anyone know any way I can pass environment values into the application?
I'm using Google Cloud Platform, and gcloud app deploy command doesn't seem to have a flag to include the environment variables.
Use a bash script to generate the app.yaml file with the environment variables filled in manually:
app.yaml.sh
#!/bin/bash
echo """
env: flex
runtime: nodejs
resources:
memory_gb: 4.0
disk_size_gb: 10
manual_scaling:
instances: 1
env_variables:
NODE_ENV: 'test'
CLIENT_SECRET: \"$CLIENT_SECRET\"
"""
config.yml
steps:
  - checkout
  - run:
      name: chmod permissions
      command: chmod -R 755 ./
  - run:
      name: Copy across app.yaml config
      command: ./app.yaml.sh > ./app.yaml
  - deploy:
      name: Deploy
      command: |
        VERSION=`date "+%Y-%m-%dt%H%M"`
        gcloud app deploy app.yaml --quiet --version ${VERSION}
Reading about it, it is indeed as you mentioned: the only "official" way to set environment variables is to set them in app.yaml - this article provides more information on it. Considering that, I searched further and found this good question from the community - accessible here - where some workarounds are provided.
For example, the one you mentioned, creating a second file with the values and calling it from app.yaml, is a good one. You can then use .gitignore so the file does not exist in the repository - in case you are using one. Another option would be to use Cloud Datastore to store the information and read it from your application. That way, Datastore keeps this information secured and accessible to your application, without it becoming public within your App Engine configuration.
I thought it would be a good idea to add this information here, with the article and question included, in case you want more information! :)
Let me know if the information helped you!

Error passing docker secrets to azure web app 'No such file or directory: '/run/secrets/'

I am relatively new to Docker and am currently building a multi-container dockerized Azure web app (in Flask). However, I am having some difficulty with secret management. I had successfully built a version that stored app secrets in environment variables, but based on some recent reading it has come to my attention that this is not a good idea. I've been attempting to update my app to use Docker secrets but have had no luck.
I have successfully created the secrets based on this post:
how do you manage secret values with docker-compose v3.1?
I have deployed the stack and verified that the secrets are available in both containers in /run/secrets/. However, when I run the app in azure I get an error.
Here are the steps I've taken to launch the app in azure.
docker swarm init --advertise-addr XXXXXX
echo "This is an external secret" | docker secret create my_external_secret -
docker-compose build
docker push
docker stack deploy -c *path-to*/docker-compose.yml webapp
Next, I restart the Azure web app to pull the latest images.
Basic structure of the docker-compose is below.
version: '3.1'
services:
  webapp:
    build: .
    secrets:
      - my_external_secret
    image: some_azure_registry/flask_site:latest
  celery:
    build: .
    command: celery worker -A tasks.celery --loglevel=INFO -P gevent
    secrets:
      - my_external_secret
    image: some_azure_registry.azurecr.io/flask_site_celery:latest
secrets: # top-level secrets block
  my_external_secret:
    external: true
However, when I run the app in azure I get:
No such file or directory: '/run/secrets/my_external_secret'
I can attach a shell to the container and successfully run:
python
open('/run/secrets/my_external_secret', 'r').read().strip()
But when the above line is executed by the webapp it fails with the no file or directory error. Any help would be greatly appreciated.
Unfortunately, the top-level secrets block of docker-compose is not supported in Azure Web App for Containers. Take a look below:
Supported options
command
entrypoint
environment
image
ports
restart
services
volumes
Unsupported options
build (not allowed)
depends_on (ignored)
networks (ignored)
secrets (ignored)
ports other than 80 and 8080 (ignored)
For more details, see Docker Compose options.
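Given that the secrets option is ignored there, a pragmatic pattern is to fall back to an environment variable (i.e. an App Service application setting) when the secrets mount is missing. A sketch - the helper function and the upper-case fallback convention are my own, not from the question:
import os

def get_secret(name):
    """Return a Docker secret if mounted, else fall back to an
    environment variable such as an App Service application setting."""
    try:
        with open(f"/run/secrets/{name}") as f:
            return f.read().strip()
    except FileNotFoundError:
        return os.environ.get(name.upper())

client_secret = get_secret("my_external_secret")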

GitLab CI/CD configuration problem using shared runners

I have problems with my GitLab CI/CD configuration - I'm using the free shared runners on GitLab itself.
I have a Joomla (test) project using Docker - I'm learning how it works.
I created .gitlab-ci.yml with:
image: docker:latest
services:
  - docker:dind
at the top of the file.
In the test stage I want to run the Docker image created in the build stage.
When I add:
services:
  - mariadb:latest
to the test stage, I always get
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? at the docker pull command. Without it, I get an error at the docker run command during Joomla image initialization because there is no MySQL server.
Any help will be appreciated.
If you set
services:
  - mariadb:latest
in your test job, this will override the globally defined services. Therefore, the docker daemon is not running during test. This also explains why you do not get the Docker daemon error when you omit the services definition for the test job.
Either specify the docker:dind service also for the test job, or remove the local services definition and add mariadb to your global services definition.
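Concretely, the first option means the test job re-declares docker:dind alongside mariadb, so the override doesn't lose the daemon. A sketch based on the snippets above (the job name is assumed):
test:
  services:
    - docker:dind       # keep the Docker daemon available in this job
    - mariadb:latest    # add the database the Joomla image needs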
